
I asked Gemini to do deep research on the role of health insurance companies in the decline of general practitioners in the Netherlands. It based its premise mostly on blogs and whitepapers on company websites whose job it is to sell automation software.

AI really needs better source validation. Not just to combat hallucinated sources (which Gemini seems to do 80% of the time), but also to combat low-quality sources that happen to correlate well with the question in the prompt.

It's similar to Google having to fight SEO spam blogs: they now need to do the same in the output of their models.
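
To make concrete what I mean by source validation, here's a rough sketch (purely illustrative; the domains, path hints, and weights are made up, and this is obviously not how Gemini works internally):

    # Hypothetical pre-filter: score retrieved URLs before they reach the
    # prompt context. Domain lists and weights are illustrative assumptions.
    from urllib.parse import urlparse

    TRUSTED_DOMAINS = {"nivel.nl", "cbs.nl", "rijksoverheid.nl"}  # e.g. research/statistics bodies
    MARKETING_HINTS = ("blog", "whitepaper", "solutions")         # paths that smell like vendor content

    def source_score(url: str) -> float:
        parsed = urlparse(url)
        host, path = parsed.netloc.lower(), parsed.path.lower()
        score = 0.5
        if any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
            score += 0.4
        if any(hint in path for hint in MARKETING_HINTS):
            score -= 0.3
        return max(0.0, min(1.0, score))

    def filter_sources(urls, threshold=0.6):
        # Drop low-scoring sources instead of letting them anchor the answer.
        return [u for u in urls if source_score(u) >= threshold]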



Better source validation is one of the main reasons I'm excited about GPT-5 Thinking here. It would be interesting to run your Gemini prompts against it and see how the results compare.


I've found GPT-5 Thinking to perform worse than o3 did on tasks of a similar nature. It makes more bad assumptions that derail the train of thought.


I think the key is prompting, and putting bounds on the assumptions it's allowed to make.


When using AI models through Kagi Assistant you can tweak the searches the LLM does with your Kagi settings (search only academic sources, block bullshit websites, and such), which is nice. And I can choose models from many providers.

No API access though so you're stuck talking with it through the webapp.


Kagi has some tooling for this. You can set web access “lenses” that limit the results to “academic”, “forums”, etc.

Kagi also tells you the percentages “used” for each source and cites them in line.

It’s not perfect, but it goes a long way toward narrowing down what you want to get out of your prompt.
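
For anyone curious what those per-source percentages might boil down to, a toy version (my guess at the idea, not Kagi's actual method) is just counting how often each source gets cited inline:

    # Toy example: derive per-source "percent used" from inline [n] citations.
    # This is a guess at the concept, not Kagi's implementation.
    import re
    from collections import Counter

    def citation_shares(answer: str) -> dict:
        refs = re.findall(r"\[(\d+)\]", answer)
        counts = Counter(int(r) for r in refs)
        total = sum(counts.values()) or 1
        return {src: count / total for src, count in counts.items()}

    example = "GP workload is rising [1][1], partly due to insurer contract demands [2]."
    print(citation_shares(example))  # {1: 0.666..., 2: 0.333...}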



