> the systems don’t have a path to profitable functionality.
I don’t buy this at all.
Sure, loads of ChatGPT wrappers will die, but there is definitely money to be made. Robotics is going to be big. Companies are firing customer service people. Etc.
Sure, that last one is perhaps not desirable, but it does prove both practical value and feasibility.
And as the price per token plummets and context windows grow, more applications will open up.
Just because VC money is keeping some shops open that should die doesn’t mean that’s true for the entire field.
AI customer service chatbots are just as prone to hallucination as any other AI chatbot, and companies are finding out that they're liable when their AIs lie to their customers. As more companies replace humans with AIs, we're going to see more of them held liable for the claims their AIs spun out of whole cloth.
We haven't seen an AI yet which can consistently reproduce the truth, rather than something that sounds a lot like something that could be the truth. When someone makes an AI which is reliably just a knowledge engine, which can take an input, process it, and accurately convey the knowledge available in that input, then that will be a killer app; when an AI can take a company's documentation and summarize and condense it for easier consumption, that AI will be insanely popular in business cases both internal and external. Until then, these customer-service-replacing AI chatbots are going to prove to be more and more of a maintenance and liability headache.
FTA "Unfortunately, the venture-capital-funded AI industry runs on the promise of replacing humans with a very large shell script — including in areas where details matter. If the AI’s output is just plausible nonsense, that’s a problem. So the hallucination issue is causing a slight panic among AI company leadership."
I mean... if the VCs start funding AI companies with crypto, is it incest or cannibalism?
Can someone come up with an idea for computing that doesn't involve wasting electricity with GPUs on pictures of cats and dogs?
We are about to find out if the “bullshit jobs” theory was correct. I suspect it is; we could easily replace the flibbertigibbet class of agile project managers with 3 lines of bash (including comments), and my own manager is a poorly formed Excel macro.
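For the record, a minimal tongue-in-cheek sketch of those 3 lines (the stand-up platitude below is invented, not a quote from any actual manager):

    #!/usr/bin/env bash
    # Hypothetical agile project manager: emits one status-meeting platitude per run.
    echo "Quick sync: let's circle back on blockers and keep the sprint on track."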
To be fair, it doesn't look like he was far off on crypto.
Of course, being able to call that doesn't automatically translate into also being right about AI.
There are lots of people who have successfully predicted one particular bubble but whose other predictions have completely failed to materialize (to say nothing of all the wrong ones they made before striking gold). That's just how selection/survivorship bias works.
In this particular case, I actually suspect that both things can be true at the same time: We're in the middle of an AI hype bubble and >90% of current startups will not survive the next 3 years, but AI will still profoundly change the way we work.
There's even a nice historical analogy: We once had the dot com bubble, in which lots of people lost lots of money, but ultimately, the Internet did turn out to be kind of a big deal. It's just hard to go from that general bet on vibes to one that identifies actual companies that are going to bring about all of that.
I think you're right, but I don't think "AI" as we see it today is really the thing we should be looking at when we come out of this bubble. What's really exciting is that we're building an enormous parallel computing infrastructure that's going to revolutionize scientific computing. We're already seeing some glimmers of this, like what DeepMind did with protein folding. Once the charlatans die off, I hope we see a massive acceleration in research in all kinds of fields.
That said, I do think that LLMs, while not the "final form of AI" by any means, are already somewhat of a revolution in human-machine interfaces. Being able to look up facts (or make up convincing lies) is neat, but to me, their real power lies in how good they are at summarizing and translating.
Remember when Siri and Google Assistant had a very hard time understanding even the simplest commands if they weren't formulated in the exact way they understood, and the "aboutness" problem seemed insurmountable without AGI?
Now we're suddenly way past that (without any AGI). Even if AI completely plateaus with LLMs at their present-day strength, I think we could spend years, if not decades, getting good at meaningfully integrating them into previously very manual workflows. Even an "AI winter" of that shape feels significantly more disruptive than many of the more optimistic crypto scenarios.
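As a concrete example of that kind of integration, here's a minimal sketch of adding a "summarize this file" step to a shell workflow. It assumes the OpenAI chat-completions HTTP API, an OPENAI_API_KEY in the environment, and jq for the JSON plumbing; the model name, prompt wording, and notes.txt filename are all just placeholders:

    # Summarize notes.txt from the shell; needs curl, jq and $OPENAI_API_KEY.
    jq -Rs '{model: "gpt-4o-mini",
             messages: [{role: "user",
                         content: ("Summarize this in three sentences:\n" + .)}]}' notes.txt \
      | curl -s https://api.openai.com/v1/chat/completions \
             -H "Authorization: Bearer $OPENAI_API_KEY" \
             -H "Content-Type: application/json" \
             -d @- \
      | jq -r '.choices[0].message.content'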
> we're building an enormous parallel computing infrastructure that's going to revolutionize scientific computing
We're doing an absolute shit job of it and basing our foundations on piles of sand. We're gonna have to throw it all away and do it right the second time if we ever decide to do something useful.
I don’t think that matters. Out of the dot-com rubble, we got a lot of useful ideas for tools even though the actual tools all got thrown away. It was the ideas and the “hey, look at what is possible” that survived, e.g. Linux came of age properly and now the world runs on it.
AI will quickly pollute itself, but the infrastructure will survive. That most of it will be thrown away is perfectly OK; the ideas will live on.
I'm not sure I agree with that, and I'm one of the biggest AI pessimists here. The GPUs getting created to do this shit aren't just gonna get thrown out (maybe sold in a fire sale), and they're pretty general purpose. Tons of science needs massive compute.
Author here. The key insight is that (a) the AI promotional nonsense is the same as crypto's, and (b) the VCs are literally the same guys who just bombed out on Web3.
AI is not as blitheringly useless as crypto - ML is real and does things! - but those guys are zeroing in on the applications that really are that useless.
Also, the AI guys are making the same excuses for their ghastly power usage that the coiners used to.
Compare the last two S. Altman interviews on the Lex Fridman Podcast. Dude already has lawyers breathing down his neck not to make unsubstantiated claims. I guess they can't deliver, and the suspicion of fraud is in the air.
Really? You can't think of one single practical use for a non-centrally controlled system for sending what is basically money to others, without gatekeepers being able to effortlessly interfere with or debase it? You completely dismiss the very real cases (all bubble hype uses and scams aside) of people using crypto to escape the worst parts of certain countries' horribly inept, corrupt banking and financial systems?
If you flatly deny that any of these use cases exist and could become larger, I'd argue that you're speaking from ideological blindness against crypto instead of from a reasoned, thoughtful notion of its problems.
I've only ever heard of one use that wasn't speculative, promotional, astro-turfing, or just a straight up scam. That one use was an anonymous online marketplace to buy and sell illegal drugs.
If that's the only example you can think of, then you know next to shit about the crypto landscape besides whatever spoon-fed your own emotional bias, and you shouldn't consider your opinion to be worth a damn.
Further examples, since you're apparently too lazy to even do a Google search:
There are many more outside the privileged bubble of reliable financial services that many tech people on this site live in. Even if there weren't, the existence of a mechanism for sending others money without easy blocking by arbitrary gatekeepers is an innately valuable thing in an increasingly controlled society.
The personal attack on me is unwarranted, rude, and wrong. I don't know a whole lot about crypto because it's not central to my life, and the things I do know about crypto (for example, that it's lousy with scams) haven't inspired much interest to learn more. That deserves a correction, not sloppy insults.
Maybe you thought I was someone else who deserved it more?
I appreciate you being willing to go find actual material to cite, though. That's valuable.
I have yet to see a comment with a sarcastic "something bad" in it that provides an actual argument.
By the way, what positive effect has crypto had so far? Other than redistributing wealth between some people and wasting enormous amounts of energy?
Personally I use AI all the time, but hallucinations are a problem, data is always a problem in all of machine learning, and it's always good to stay away from the cult-like behavior of people who can find absolutely nothing wrong with new technology.