
> Current search is great for facts, alright for generic questions, and annoying for answering something.

I feel like Google used to be pretty great at answering questions, until they started pushing ads more aggressively into their results and then intentionally gimping the results themselves (for instance, if you put quotes around a word, Google will happily ignore it, whereas that used to be an enforced rule).

I'm not sure AI is the solution to this problem at all. I think it's about misaligned incentives and trying to shoehorn a new cash cow into the mix.

What I mean by generic questions is that traditional search is pretty alright (in the past and now) if you throw something like "how to set an arbitrary bit in a number" at it. You get plenty of generic articles answering the question just fine, even if you have to scroll past some ads these days. What I meant by "answering something" is when you're using quotes to get a specific answer to the thing you're actually doing, like 'How to set the "18th bit" in a "u32" using "Zig"', and you don't want to piece the answer together from generic articles yourself. For that, search always has sucked and still does, unless you're extremely lucky and some dude posted exactly that example out on the web. This is where LLMs shine: you can ask that question and get an exact answer for your scenario (plus the generic explanation of why it works, if that's what you really want) without having to piece together sources yourself.
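To make that concrete, the answer I'd want handed back for the second query is basically a snippet like the one below. This is a minimal sketch of my own, assuming "18th bit" means the zero-based bit index 18 of a u32; the exact article I'm imagining doesn't actually exist anywhere.

    const std = @import("std");

    pub fn main() void {
        var x: u32 = 0;

        // Set bit 18 by OR-ing in a mask with only that bit on.
        // (@as just makes the mask's type explicit; the shift itself
        // is the usual 1 << n trick.)
        x |= @as(u32, 1) << 18;

        // Prints x = 0x40000, i.e. 262144.
        std.debug.print("x = 0x{x}\n", .{x});
    }

Clearing or testing the same bit is the usual x &= ~mask / x & mask dance; the point is that search rarely surfaces exactly this for your types, while an LLM will just write it.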

Misaligned incentives will of course make either side worse, but search engines circa 2010 were (and still are) far from good solutions for all types of queries; they were just closer to good than anything else we had.
