I've always wondered: if deep research has an X% chance of producing errors in its report, and you have to double-check everything, visit every source, and potentially correct it yourself, does it really save time in getting research done (outside of coding and marketing)?
It might depend on how much you struggle with writer's block. An LLM essay with sources is probably a better starting point than a blank page, but that will vary between people.
I do wonder if this will push web publishers to start putting up paywalls. I don't think the economics of deep research, or AI search in general, add up: web publishers and site owners are losing traffic and human eyeballs on their sites.
You might like what we're building in that sense :D (full disclosure, I'm the founder of Beloga). We're building a new way to search with programmable knowledge. You're essentially able to call on search from Google, Perplexity, and other search engines by specifying them as @ mentions together with your detailed query.
I think generative search itself has room for disruption, and I'm not sure a chat interface, or a Perplexity-style one, is necessarily the right way to go about it.
I'd like to see search (or research in the broader sense) become a more controllable activity, with the ability to easily specify context and sources in the form of apps, agents, and content.