
With generative AI search results, soon you won't even be able to tell whether your site was used for the result. Lots of no-click queries, resulting in no traffic for the publisher.


The much more worrying thing about AI is that you won't even be able to tell whether the information you're getting is true. I always scroll past the crap at the top to get to the actual site results.


I haven’t been able to estimate that with confidence in a decade; SEO blogspam has seen to that.


The content is true to the extent that a copywriter paraphrased another piece of content that is more authoritative.

My work is getting paraphrased a lot, and they usually get the gist right, although they have zero clue what they're talking about.


As soon as I see a URL that contains the exact search query I typed in, I know that it's a content-farm site at best, and an LLM-generated one at worst.

And these have started to make up more than 50% of my search results.

For the query "discord overlay not working wow", I get the following as THE SECOND RESULT: https://freeholidaywifi.com/discord-overlay-not-working-wow/
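
A rough sketch of that heuristic in Python (the function name and the slug normalization are just illustrative assumptions, not anything these sites actually publish):

    import re

    def looks_like_query_slug(url: str, query: str) -> bool:
        """Flag URLs whose path is basically the search query turned
        into a hyphenated slug, a common content-farm pattern."""
        slug = re.sub(r"[^a-z0-9]+", "-", query.lower()).strip("-")
        path = url.lower().split("://", 1)[-1].split("/", 1)[-1]
        return slug in path

    # The example above:
    print(looks_like_query_slug(
        "https://freeholidaywifi.com/discord-overlay-not-working-wow/",
        "discord overlay not working wow"))  # True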


Given that the web as it is today is infested with clickbait, "native content", clout-chasing, undisclosed sponsorship, and other such pathologies, I'm not convinced the fear of AI making truth rarer is rational.


They are the same problem: what do people think LLMs are trained on?

"Clickbait, 'native content', clout-chasing, undisclosed sponsorship, and other such pathologies" is exactly the data set used to train ChatGPT.


Not once have I received a response from ChatGPT that meets any of those definitions.

…Perhaps the robots are better at truth than you are giving them credit for? :-)


I mean, the degree still matters. It's the difference between a bad cut and a sucking chest wound.

What happens when somebody points the chatbots at Reddit? Wikipedia?


What makes you think they haven't? There was already upheaval at Stack* as a result of people writing trash answers with GPT help.


Yeah, I've had this happen already. I remembered that some normally purely carnivorous type of animal had a herbivorous species. It was spiders, but I searched for snakes first. One of the top results was an article about how boas can be fed a diet of fruit (they cannot), which must have been AI-written, given how many other semi-nonsense articles that site had.


This kind of problem is also showing up on Quora. Some of the answers I've spotted are so obviously wrong that you can tell a person didn't write them.


I mean, you shouldn't take what's written on the internet at face value either.


That might have been worrying if “I” were known to reliably provide true information, but it never has been, so we’re used to assuming the information probably isn’t. Adding an “A” to the equation changes nothing.


LLMs are significantly less likely to be accurate, but they’re quite good at fooling people. The problem is that our existing BS detectors no longer work well. It’s surprisingly close to talking to a talented con man.


It was never completely reliable, but the situation was a lot better in the past, when SEO spam was not so prominent.


Was it? Go down to the local coffeeshop for the daily gossip and you'll hear all kinds of things that aren't true. People love to make stuff up and we have always needed to rely on the concept of credible sources. There is nothing to suggest those are going away.


I asked ChatGPT to give me links to some sources for one of its answers, and it responded that it didn't have access to the internet. I think this could be "solvable" by adding a "show your work" or "provide references" kind of feature in a future iteration.



