Every time an analyst cites the current state of AI-based tools as evidence that AI disruption is just hype, I think of skeptics who dismissed the exponential growth of COVID-19 cases because of their initially low numbers.
Putting that aside, how is this article called an analysis and not an opinion piece? The only analysis done here is asking a labor economist what conditions would allow this claim to hold, and offering an alternative, already-circulated theory that AI company CEOs are creating false hype. The author even uses everyday language like "Yeaaahhh. So, this is kind of Anthropic’s whole ~thing.~ ".
Is this really the level of analysis CNN has to offer on this topic?
They could have sketched the growth in foundation model capabilities against finite resources such as data, compute, and hardware. They could have written about the current VC market and the pressure on companies to show results rather than promises. They could even have written about the giant biotech industry and its struggle to incorporate exciting novel drug discovery tools into slow-moving FDA approvals. None of this was done here.
> I think of skeptics who dismissed the exponential growth of COVID-19 cases because of their initially low numbers.
Compare: "Whenever I think of skeptics dismissing completely novel and unprecedented outcomes occurring by mechanisms we can't clearly identify or prove (will) exist... I think of skeptics who dismissed an outcome that had literally hundreds of well-studied historical precedents using proven processes."
You're right that humans don't have a good intuition for non-linear growth, but that common thread doesn't heal over those other differences.
If that were happening right now, how would we know? COVID-19 cases were tracked imperfectly but pretty well; is there any equivalent for AI-related job losses?
Right, my point is that we don't have the data to make a similar exponential argument. We can't rule out the possibility that we're currently in the early stages of exponential growth based on direct measurement. If it is exponential, once it doubles enough times, it will show up in overall economic data.
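To make the "once it doubles enough times, it will show up" point concrete, here is a toy sketch. All the numbers are hypothetical assumptions, not measurements; it just shows how few doublings a small effect needs before it crosses a plausible detection threshold in aggregate data.

```python
# Hypothetical illustration: how many doublings a small effect needs
# before it exceeds a detection threshold. Numbers are assumptions.
def doublings_until_visible(initial_share, threshold, factor=2.0):
    """Count growth periods until initial_share exceeds threshold."""
    periods = 0
    share = initial_share
    while share < threshold:
        share *= factor
        periods += 1
    return periods

# Assume 0.01% of jobs affected today, and that the effect only becomes
# visible in noisy economic statistics at around 1% of jobs.
print(doublings_until_visible(0.0001, 0.01))  # 7
```

With a (hypothetical) doubling time of six months, that is roughly three and a half years before the trend is unambiguous, which is consistent with the point that we can't yet rule it in or out from direct measurement.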
We can also look at the tools, which have improved relatively quickly but don't appear to be improving exponentially. GPT-4 and GPT-4o came out about a year after their predecessors. Is GPT-4o a bigger leap than GPT-4 was? Are GPT-4.5 or 4.1 a bigger leap than GPT-4 was? I honestly don't know, but the general reception suggests otherwise. The biggest leaps recently seem to be making models that perform roughly as well as past ones but are much smaller. That has advantages from the standpoint of democratization and energy consumption, but those kinds of improvements seem to favor a situation where AI augments workers rather than replaces them.
Why not use the promised exponential growth of home ownership that led to the catastrophic real estate bubble that burst in 2008 as an example?
We are still dealing with the aftereffects, which led to the elimination of any working class representation in politics and suppression of real protests like Occupy Wall Street.
When this bubble bursts, the IT industry will collapse for some years like in 2000.
The growth of home ownership was an indicator of real estate investment, not of real-world capabilities. Once the value of real estate dropped and the bubble burst, those investments were worth less than before, causing the crisis. In contrast, the growth in this scenario is in the capabilities of foundation models (and, to a lesser extent, the technologies that stem from those capabilities). This is not a promise or an investment, and it is not an indication of speculative trust in the technology; it is a non-decreasing function indicating a real increase in performance.
You can pick and choose problems from history where folk belief was wrong: WW1 vs. Y2K.
This isn't very informative. Indeed, engaging in this argument-by-analogy betrays a lack of actual analysis, credible evidence, and justification for a position. Arguing "by analogy" in this way, where you pick and choose the analogy, just restates your position -- it doesn't give anyone reasons to believe it.
As a developer that uses LLMs, I haven't seen any evidence that LLMs or "AI" more broadly are improving exponentially, but I see a lot of people applying a near-religious belief that this is happening or will happen because... actually, I don't know? because Moore's Law was a thing, maybe?
In my experience, for practical usage LLMs aren't even improving linearly at this point as I personally see Claude 3.7 and 4.0 as regressions from 3.5. They might score better on artificial benchmarks but I find them less likely to produce useful work.
Viruses spread and propagate themselves, often changing along the way. AI doesn't, and probably shouldn't. I think we've made a few movies on why that's a bad idea.
Humans are. We have tools to measure exponential growth empirically. It was done for COVID (epidemiologists do that routinely) and is done for the economy and other aspects of our lives. If there is exponential growth, we should be able to put it in numbers. "Trust me bro" is not a good measure.
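One standard way to "put it in numbers": fit a line to the logarithm of the series. If the fit is good and the slope is positive, the data is consistent with exponential growth, and the slope gives the per-period growth factor. A minimal self-contained sketch (any real analysis would also check goodness of fit and noise):

```python
import math

def loglinear_growth_factor(series):
    """Least-squares fit of log(y) = a + b*t; returns exp(b), the
    implied per-period growth factor. Assumes all values are positive."""
    n = len(series)
    ts = list(range(n))
    logs = [math.log(y) for y in series]
    t_mean = sum(ts) / n
    l_mean = sum(logs) / n
    b = sum((t - t_mean) * (l - l_mean) for t, l in zip(ts, logs)) / \
        sum((t - t_mean) ** 2 for t in ts)
    return math.exp(b)

# A perfectly doubling series recovers a growth factor of 2
print(round(loglinear_growth_factor([1, 2, 4, 8, 16]), 3))  # 2.0
```

The hard part for the AI-jobs question isn't the arithmetic, it's getting a `series` worth fitting, which is exactly the measurement gap being discussed above.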
Since I can’t reply under your answer for some reason, I’m putting it here.
We can have a constructive discussion instead. My problem was not actually parsing what you said. I’m questioning the assumption that a populace collectively modeling exponential change is really meaningful. You could, for example, describe what it looks like when a populace can model change exponentially. Is there any relevant literature on this subject that I can look into? Does this phenomenon have a name?
I understand that complex sentences can sometimes be difficult to parse for median Americans or non-native speakers, but we disagree on whether what I said was word salad prior to you rewording it by explicitly enumerating the implied indirect object. As you demonstrated, context clues were ample to determine meaning.
> The criticisms in the cnn article are already out date in many instances.
Which ones, specifically? I’m genuinely curious. The ones about “[an] unfalsifiable disease-free utopia”? The one from a labor economist basically equating Amodei’s high-unemployment/strong economy claims to pure fantasy? The fact that nothing Amodei said was cited or is substantiated in any meaningful way? Maybe the one where she points out that Amodei is fundamentally a sales guy, and that Anthropic is making the rounds saying scary stuff just after they released a new model - a techbro marketing push?
I like Anthropic. They make a great product. Shame about their CEO - just another techbro pumping his scheme.
Especially when the world population is in the billions and, at the beginning, we were worried about a double-digit IFR.
Yeah. Imagine if COVID had actually killed 10% of the world population. Killing millions sucks, but mosquitos regularly do that too, and so does tuberculosis, and we don't shut down everything. Could've been close to a billion. Or more. Could've been so much worse.
I think you missed the point. AI is dismissed by idiots because they are looking at its state now, not what it will be in future. The same was true in the pandemic.
I remember scientists, especially epidemiologists being quite accurate. I guess the key is to not even have a political angle but instead some knowledge of what you are talking about.
> I think of skeptics who dismissed the exponential growth of COVID-19 cases because of their initially low numbers.
Uh, not to be petty, but the growth was not exponential — neither in retrospect, nor given what was knowable at any point in time. About the most aggressive, correct thing you could’ve said at the time was “sigmoid growth”, but even that was basically wrong.
If that’s your example, it’s inadvertently an argument for the other side of the debate: people say lots of silly, unfounded things at Peak Hype that sound superficially correct and/or “smart”, but fail to survive a round of critical reasoning. I have no doubt we’ll look back on this period of time and find something similar.
It goes both ways. Once the exponential growth of COVID started, I heard wildly outrageous predictions of what was going to happen next, none of which ever really came to fruition.
It's an article reformulated from a daily newsletter. Newsletters take the form of a quick, casual follow-up to current events (e.g. an Amodei interview). It's not intended to be an exhaustive analysis.
Besides the labor economist bit, it also makes the correct point that tech people regularly exaggerate and lie. A great example of this is biotech, a field I work in.
This moment feels exactly to me like that moment when we were going to “shut down for two weeks” and the majority of people seemed to think that would be the end of it.
It was clear where the trend was going, but exponentials always seem ridiculous on an intuitive level.
"Is this really the level of analysis CNN has to offer on this topic?"
It's not CNN-exclusive. News media that did not evolve toward clicks, riling people up, hate-watching, and paid propaganda for the highest bidder went extinct a decade ago. This is what did evolve.
This is outdated. Most of journalism has shifted to subscription models, offering a variety of products under one roof: articles, podcasts, newsletters, games, recipes, product reviews, etc.
The best heuristic is what people are realizing happened with unchecked "skilled" immigration in places like Canada (and soon the U.S.). Everyone was sold on the idea that we "need these workers" because nobody was willing to work and that they added to GDP. In reality, there's now significant evidence that all these new arrivals did was put a net drain on welfare, devalue the labor of citizens already there (regardless of race, in many cases affecting native-born minorities MORE), and in the end merely reduce costs while degrading the companies that did this.
We will wake up in 5 years to find we traded people for dependence on a handful of companies that serve LLMs and make inference chips. It's beyond dystopian.
Can you provide more details about said "significant evidence"? This seems to be a pretty popular belief, despite being contrary to generally accepted economics, and I've yet to see good evidence for it.