As someone who works in the news industry, I find it pretty sad that we've just capitulated to big tech on this one. There are countless examples of AI summaries getting things catastrophically wrong, but I guess Google has long since decided that pushing AI was more important than accurate or relevant results, as can also be seen in their search results that simply omit parts of your query.
I can only hope this data is being incorporated in some way that makes hallucinations less likely.
Unfortunately this has just been the reality over the last couple of years. People just ignore the hallucination problem (or try to say it isn't a big deal). And yet we have seen, time and time again, examples of these models being given something, told to summarize it, and still hallucinating important details. So you can't even make the argument that the source data is flawed or something.
These models will interject information from their training whether or not it is relevant. This is just due to the nature of how these models work.
Anyone trying to argue that it doesn't happen that often is missing the key problem. Sure, it may be right most of the time, but all that does is build a false sense of security, and eventually you stop double-checking or clicking through to a source, whether it's a search result, data manipulation, or anything else.
This is made infinitely worse when these summaries are one-and-done: a single user sees the output, and no one else ever sees it to fact-check. It isn't like an article being wrong, where everyone reading it sees the same article, someone can comment that something is wrong, it gets updated, and so on and so forth. That feedback loop is non-existent with these models.
> Anyone trying to argue that it doesn't happen that often is missing the key problem. Sure, it may be right most of the time, but all that does is build a false sense of security, and eventually you stop double-checking or clicking through to a source, whether it's a search result, data manipulation, or anything else.
Same problem existed before AI summaries.
"Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.
In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know."
> I can only hope this data is being incorporated in some way that makes hallucinations less likely.
The key word is "real-time". LLMs can't be trained in real time, so it's obviously going to call an API that pulls up and reads from AP News, just like their search engine does.
I don't think you can assume that; "real time" in this context could just mean they feed every article into their training system as soon as it's published.
That seems unlikely to me -- training is not free and takes a long time, so it would not result in "[enhancing] the usefulness of results displayed in the Gemini app" or in it being "particularly helpful to our users looking for up-to-date information."
Fine-tuning, which is cheaper and faster, has been shown not to be a good way to "teach" models new facts.
I think what's most likely here is that Gemini will have access to a form of RAG based on a database of AP articles that gets updated in real-time as new articles are published.
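For what it's worth, a toy sketch of what that could look like (purely hypothetical, not Google's actual pipeline; a real system would use an embedding model and a vector store, but naive word overlap stands in here):

    # Toy real-time RAG loop over news articles. Everything here is
    # hypothetical; a real system would use embeddings and a vector store.
    index = []  # grows as articles are published

    def ingest(text, url):
        # Called the moment an article is published, so retrieval stays current.
        index.append((set(text.lower().split()), text, url))

    def retrieve(query, k=3):
        # Rank indexed articles by naive word overlap with the query.
        q = set(query.lower().split())
        return sorted(index, key=lambda item: len(q & item[0]), reverse=True)[:k]

    def build_prompt(query):
        # Ground the model on retrieved excerpts rather than its training data.
        context = "\n\n".join(f"[{url}]\n{text}" for _, text, url in retrieve(query))
        return f"Answer only from these excerpts, citing URLs:\n\n{context}\n\nQ: {query}"

    ingest("Storm closes coastal highways in Oregon.", "https://example.com/ap-1")
    print(build_prompt("what happened in oregon?"))

The point is that nothing gets retrained: freshness comes from the index, and the model only sees whatever was published up to the moment of the query.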
If there's any company that can afford "real-time LLM training" at this moment, I'm 100% sure they will win this AI race, since they probably have at least ~10x the compute of their competitors. Of course, no one can do that right now.
The news industry capitulated to big tech the moment it became reliant on big tech for the majority of its revenue. The entire media landscape today is the direct result of that.
Take it a step further back, and you will see that the media landscape capitulated to Big Anything a long time ago. Probably for generations now, if we count newspaper men like William Randolph Hearst.
Uhh, has your head been in the sand? Look at the average output of your industry without AI. It gets things wrong. It misleads. It hallucinates. It has incentives that fundamentally differ from what the readership seeks in news. The fact that your industry took so readily to a technology for outputting ever more garbage says more about the state of the industry than it does about the fundamental technology.
I think people really don't understand the effort, care and risk that goes into producing quality reporting.
I work with investigative reporters on stories that take many months to produce. Every time we receive a leak there is an extensive process of proving public interest before we can even start looking at the material. Once we can look at it, we have to be extremely careful with everything we note down to make sure that our work isn't seen as prejudiced if legal discovery happens. We're constantly going back and forth with our editorial legal team to make sure what we're saying is fair and accurate. And in the end, the people we're reporting on are given a chance to refute any of the facts we're about to present. Any mistakes can result in legal action that can ruin the lives of reporters and shut down companies.
Now, imagine I were to go to a reporter who has spent 6 months working on a story (about, for example, a high-profile celebrity who sexually assaulted multiple women, how the royal family hides its wealth and is exempt from laws, or how multinational corporations use legal loopholes to avoid paying taxes) and said, "oh, 1% of people reading this will likely be given some totally made-up details".
Given that stories often get more than a million impressions, this would leave tens of thousands of people with potentially libellous "hallucinations".
It simply should not be allowed.
LLMs have their place, for sure, but presenting the news is not it.
Although I agree with every single sentence you've said, we've seen over the past decade that only a very small percentage of people actually care about the content of the news. Everyone just discusses and gets their information from the headlines, so this is a natural consequence of "let's just summarize it in a couple of sentences, since nobody reads it anyway".
The Gemini models themselves may score well on this, but Google's feature implementations are a whole other thing. AI Overviews frequently take untrustworthy search results (like a fan fiction plot outline for Encanto 2) and turn those into confidently incorrect answers. https://simonwillison.net/2024/Dec/29/encanto-2/
And doesn't bringing in The Associated Press solve this problem? No need for the AI to decide what is trustworthy or not. For the vast majority of people everything The Associated Press publishes is trustworthy.
1.3% isn't great. I'd rather just go, and pay, directly to trusted news sources. Everyone has different tolerance for falsehoods and priorities I guess.
As others have already pointed out, feeding in these news articles isn't magically going to make the models any more accurate. The hallucinations are going to be on top of any errors in the data sources.
I'm not replying to point that out, I think others have done a better job. It's mostly that this conversation made me think of this classic Babbage quote that I've always enjoyed.
"On two occasions I have been asked, – 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question"
Except when that happens, a clarification is almost always added at the bottom of the article ("This article was amended on [date]. An earlier version said xxx" or some variation thereof).
You're not gonna get a second push notification from an AI summary saying "Oopsies, the previous notification was wrong". Once it's out, it's out, and that sort of damage is difficult to repair.
Yes, but that's going to be on top of the ~1.3% hallucination rate. (Strictly speaking, there's always some very small chance the model hallucinates the truth when the article had it wrong, but that's basically not worth considering.)
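To put a number on the compounding, treating the two error sources as independent (a simplifying assumption) and making up an article error rate:

    # h is the ~1.3% summary hallucination rate from above;
    # a is a made-up error rate for the underlying article.
    h = 0.013
    a = 0.02
    combined = 1 - (1 - a) * (1 - h)  # chance at least one of the two is wrong
    print(round(combined, 4))         # 0.0327, roughly a + h for small rates

For small rates the combined error is roughly the sum, which is why "on top of" is the right way to think about it.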
Anything other than 0% is borderline immoral. Imagine sending a push notification to somebody's phone with a completely made-up headline summary. Even if it happens once in a hundred times, that's too much.
Things like that slowly but surely erode trust and make it harder and harder to trust anything that's generated by AI, especially when it comes to news, where trustworthiness is essential, and probably the main reason people pay for news. See for example https://www.bbc.co.uk/news/articles/cge93de21n0o
This is a ridiculous standard. News headlines at the moment would have an error rate wildly above 1.3%. The problem the articles about Apple's LLM headlines describe is that the on-device model is weak and it's trying to compress too much into too few characters. I'd guess the chance of Gemini incorrectly summarising an article is almost 0%.
Have you ever read a news article on a subject where you have expertise and knew it was inaccurate? The news is probably more inaccurate than you think.
I bet you think the news is accurate all the other times. It's called "Gell-Mann Amnesia".
You’d have to pay quite a bit to get journalists to answer your questions specifically.
The whole thing isn't about generating news articles, it's about getting the model up to date on facts so it can synthesize a newspaper for you. I'd say it's a way to get journalists to be journalists again instead of clickbait composers, as long as the model doesn't inject clickbait there itself. I don't trust Google not to do it at some point, but they aren't doing it now, and the infrastructure is being built for others to consume when Gemini suffers its inevitable enshittification.
> You’d have to pay quite a bit to get journalists to answer your questions specifically.
This isn't what I meant. I pay directly for subscriptions/donations to news organizations that employ journalists who do this original reporting. I don't want a middleman that just messes it up. This goes for LLMs and for free news sites that don't do much more than summarize original reporting. More than a few times I've seen them inject opinions, mess up facts, or put the focus on what was originally a small side point in the article.
> There's always a chance what you're reading is wrong - due to purposeful deception, negligence, or accident.
I am quite certain my personal hallucination rate is more than 1.3%. Obviously we want our machines to be better than us, but my doctor once said folic acid is not a vitamin.
Aggregate the direct primary inputs. Weather sensors. Wildfire cameras. Police scanners. Court proceedings. Changes to ordinances. New LLC filings. Bankruptcies. Birth records. Death records. The whole corpus of society that is automatically logged and used as the primary data for people to then perform research or develop journalism upon.
That is what you siphon up. And on the output side you can mad-lib out an article just like those johnny-on-the-spot AP reporters do anyhow, filling in the skeleton article about a death or an attack or a banquet or an award show with the relevant inputs for the event. The LLM isn't even used for finding this input, just for adjusting the boilerplate, perhaps to tailor the news to the reader's own inclinations based on engagement with other articles, collected via fingerprinting.
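A crude sketch of that mad-lib step (the skeleton and every field below are made up, but this is the shape of it):

    # Made-up skeleton for a breaking-news wire item, filled in from
    # structured primary inputs (scanner feed, court filing, sensor, etc.).
    SKELETON = ("{agency} reported a {event} in {place} on {day}. "
                "{count} people were affected, officials said.")

    record = {
        "agency": "The county sheriff's office",
        "event": "two-vehicle collision",
        "place": "Lane County, Oregon",
        "day": "Tuesday",
        "count": "Three",
    }

    print(SKELETON.format(**record))
    # An LLM's job here would be smoothing the boilerplate (or tailoring
    # the tone per reader), not discovering the facts.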
I recommend you find some journalists whose work you find impressive and ask yourself what types of passive inputs could have written pieces like that.
Those represent what, 1% of the bulk output of the field, if I had to guess? By volume, most news is wire-service copy a la terse AP reports that get reposted everywhere. And they have to be terse because it's breaking news and there's no time to opine beyond reporting the inputs mainly as they are, in short form.
Doubling and tripling and quadrupling down on behaviors that most consumers wish they'd stop doing. You used to work at Google so you must be familiar with how groupthink operates.