
What are the terms? It is not at all clear from the announcement. "Part of this three-year licensing agreement" _could_ mean the license cost is $1 billion, which Disney in turn invests in OpenAI in return for equity, and they're calling that "investment" (that's what's hypothesized above, but I don't think we know). Disney surely gets something for the license beyond the privilege of buying $1 billion of OpenAI stock at its most recent valuation.


Disney gets the opportunity to tell the board and investors that they are now partnered with a leading AI company. In effect, Disney is now an AI company as well. They haven't really done anything, but if anyone asks they can just say, "Of course we're at the forefront of the entertainment industry. We're already leveraging AI in our partnerships."


Yeah - they save face.


The author calls himself a "Real Estate Novelist and recovering healthcare consultant" (https://substack.com/@andrewtsang).


Did he ever have time for a wife?


Ah, so it's fiction?


It is open source (MIT license); Claude should have a pretty good start on it already.


I hope Google is at least acknowledging the origin of the name, even if they are not paying royalties to Randall Munroe.


Don't take my random pondering as any kind of evidence.


Considering how many other "pelican riding a bicycle" comments there are in this thread, it would be surprising if this were not already incorporated into the training data. If not now, then soon.


I don't think the big labs would waste their time on it. If a model is great at drawing the pelican but sucks at all other SVG, it becomes obvious. But so far the good pelicans are strong indicators of good general SVG ability.

Unless training on the pelican improves all SVG ability, in which case: good job.


I absolutely think they would, given the amount of money and hype being pumped into it.


The article is explicit about this; one of the headings reads, "AI slop is deceptive or low-value AI-generated content, created to manipulate ranking or attention rather than help the reader."

So yes, they are proposing marking bad AI content (from the user's perspective), not all AI-generated content.


Which troubles me a bit, as 'bad' does not have the same definition for everyone.


How is this any different from a search engine choosing how to rank any other content, including penalizing SEO spam? I may not agree with all of their priorities, but I would welcome the search engine filtering out low quality, low effort spam for me.


Yes, that's why we'll publish a blog post on this subject in the coming weeks. We've been working on this topic since the beginning of summer, and right now our focus is on exploring report patterns.

Matt also shared insights about the other signals we use for this evaluation here: https://news.ycombinator.com/item?id=45920720

And we are still exploring other factors (a rough sketch of how these might combine follows the list):

1/ is the reported content AI-generated?

2/ is most content in that domain AI-generated (+ other domain-level signals) ==> we are here

3/ is it unreviewed? (no human accountability, no sources, ...)

4/ is it mindlessly produced? (objective errors, wrong information, poor judgement, ...)
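
To make the layering concrete, here is a minimal sketch of how signals like these could combine into a single decision. This is purely illustrative: the names, thresholds, and ordering are all assumptions, not Kagi's actual implementation.

    # Hypothetical layered slop check; not Kagi's implementation.
    from dataclasses import dataclass

    @dataclass
    class PageSignals:
        ai_generated_prob: float     # 1/ classifier score for the reported page
        domain_ai_fraction: float    # 2/ share of the domain that looks AI-generated
        has_author_or_sources: bool  # 3/ any sign of human review or accountability
        factual_error_rate: float    # 4/ objective errors per claim checked

    def looks_like_slop(s: PageSignals) -> bool:
        # Each layer narrows the candidate set; all thresholds are made up.
        if s.ai_generated_prob < 0.8:
            return False  # probably human-written, stop here
        if s.domain_ai_fraction > 0.9:
            return True   # essentially the whole domain is machine output
        if not s.has_author_or_sources and s.factual_error_rate > 0.2:
            return True   # unreviewed and mindlessly produced
        return False      # AI-generated, but reviewed and accurate: not slop

    print(looks_like_slop(PageSignals(0.95, 0.97, False, 0.3)))  # True

Note that in a sketch like this, AI generation alone is never sufficient; it only opens the door to the domain-level and quality checks.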


There’s a whole genre of websites out there that are a ToC and a series of ChatGPT responses.

I take it to mean they’re targeting that shit specifically and anything else that becomes similarly prevalent and a plague upon search results.


A simple definition would be: it's bad if it isn't labeled as AI content, or if there is no mechanism that allows you to filter out AI content.


That's fine.


And in inflation-adjusted terms, rounding to the nearest nickel now is about as significant as rounding to the nearest penny was in 1978.
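
A quick sanity check with approximate CPI-U values (roughly 65 in 1978 versus roughly 320 today, about a fivefold increase; both index values are assumptions):

    # Rough CPI check; index values are approximate assumptions.
    cpi_1978, cpi_2025 = 65.2, 320.0
    penny_1978_today = 0.01 * cpi_2025 / cpi_1978
    print(round(penny_1978_today, 3))  # ~0.049, i.e., about a nickel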


Hazelnut politics are a big deal in Europe [0].

[0] https://www.newyorker.com/magazine/2025/06/09/how-a-hazelnut... (archive link: https://archive.is/1UTf3)


US health care outcomes are really not great, even if you are rich. Yes, you live longer than poor people in the US, but still do worse than Europeans, even those with lower incomes [0]. All while spending much more [1]. It's a system designed to siphon money from wherever it can (individuals, governments, companies, etc.), not to provide the best health care.

[0] "in some cases, the wealthiest Americans have survival rates on par with the poorest Europeans in western parts of Europe such as Germany, France and the Netherlands." https://www.brown.edu/news/2025-04-02/wealth-mortality-gap

[1] https://ourworldindata.org/us-life-expectancy-low


> Noam Shazeer said on a Dwarkesh podcast that he stopped cleaning his garage, because a robot will be able to do it very soon.

We all come up with excuses for why we haven't done a chore, but some of us need to sound a bit more plausible to other members of the household than that.

It would get about the same reaction as "I'm not going to wash the dishes tonight, the rapture is tomorrow."


I want to make it very clear that this was a lighthearted response from Noam to the "AGI timeline" question.

Noam does not do a lot of interviews, and I really hope that stuff like my dumb comment does not prevent him from doing more in the future. We could all learn a lot from him. I am not sure that everyone understands everything that this man has given us.

