I asked Gemini to do a deep research run on the role of healthcare insurance companies in the decline of general practitioners in the Netherlands.
It based its premise mostly on blogs and whitepapers on company websites whose job is to sell automation software.
AI really needs better source validation. Not just to combat the hallucination of sources (which Gemini seems to do 80% of the time), but also to combat low-quality sources that happen to correlate well with the question in the prompt.
It's similar to Google having to fight SEO spam blogs; they now need to do the same in the output of their models.
Better source validation is one of the main reasons I'm excited about GPT-5 Thinking. It would be interesting to run your Gemini prompts against it and see how the results compare.
When using AI models through Kagi Assistant, you can tweak the searches the LLM does with your Kagi settings (search only academic sources, block bullshit websites, and such), which is nice. And I can choose models from many providers.
No API access though, so you're stuck talking to it through the webapp.