What are the terms? It is not at all clear from the announcement. The phrase "part of this three-year licensing agreement" _could_ mean that the license fee is $1 billion, which Disney in turn invests in OpenAI in return for equity, and they're calling that an "investment" (that's what's hypothesized above, but I don't think we know). Disney surely gets something for the license beyond the privilege of buying $1 billion of OpenAI stock at its most recent valuation.
Disney gets the opportunity to tell the board and investors that they are now partnered with a leading AI company. In effect, Disney is now an AI company as well. They haven't really done anything, but if anyone asks they can just say, "Of course we're at the forefront of the entertainment industry. We're already leveraging AI in our partnerships."
Considering how many other "pelican riding a bicycle" comments there are in this thread, it would be surprising if this was not already incorporated in the training data. If not now, soon.
I don't think the big labs would waste their time on it. If a model is great at drawing the pelican but sucks at all other SVG, it becomes obvious. But so far, good pelicans are strong indicators of good general SVG ability.
Unless training on the pelican improves all SVG ability, in which case: good job.
The article is explicit about this: one of the headings reads, "AI slop is deceptive or low-value AI-generated content, created to manipulate ranking or attention rather than help the reader."
So yes, they are proposing marking bad AI content (from the user's perspective), not all AI-generated content.
How is this any different from a search engine choosing how to rank any other content, including penalizing SEO spam? I may not agree with all of their priorities, but I would welcome the search engine filtering out low-quality, low-effort spam for me.
Yes, that's why we'll publish a blog post on this subject in the coming weeks.
We've been working on this topic since the beginning of summer, and right now our focus is on exploring report patterns.
US health care outcomes are really not great, even if you are rich. Yes, you live longer than poor people in the US, but you still do worse than Europeans, even those with lower incomes [0], all while spending much more [1]. It's a system designed to siphon money from wherever it can (individuals, governments, companies, etc.), not to provide the best health care.
> Noam Shazeer said on a Dwarkesh podcast that he stopped cleaning his garage, because a robot will be able to do it very soon.
We all come up with excuses for why we haven't done a chore, but some of us need excuses that sound a bit more plausible to the other members of the household than that one.
It would get about the same reaction as "I'm not going to wash the dishes tonight, the rapture is tomorrow."
I want to make it very clear that this was a lighthearted response from Noam to the "AGI timeline" question.
Noam does not do a lot of interviews, and I really hope that stuff like my dumb comment does not discourage him from doing more in the future. We could all learn a lot from him. I am not sure everyone appreciates just how much this man has given us.