"If you are old enough, or possessed of a certain kind of disposition, you may be thinking, Wait a minute, aren’t you describing Enron? And uh, in some sense, yes! Enron’s whole thing was special purpose vehicles with extremely speculative valuations that were used to take on debt, Luria notes. But Enron lied about what it was doing, and that’s fraud and illegal. (It also got up to other illegal stuff besides.) Nvidia’s relationship with CoreWeave is all happening in plain sight. So are all the relationships with the other neocloud companies. It kind of seems like the tech company version of the GameStop open pump-and-dump."
To answer your question - this isn't unique to the AI spending boom; this kind of thing was right there in recent major bubbles.
So is this just a hallmark of bubbles in general? They have to start getting silly with the debt financing, so they load up SPVs to protect their assets and saddle others with the inevitable losses while they cash out?
I don’t understand how anyone could make this trade. There is no way CoreWeave outruns those debt obligations…
It’s an extreme case of “fake it till you make it.” They seem to believe they just need to keep the operation afloat until AGI magically appears. Increasingly, it’s clear that neither LLMs nor transformer architectures are going to get them there. That’s likely why many of the original engineers distanced themselves and are looking for new architectures now. Nobody wants their name tied to this fallout when reality lands. I truly believe we will need another tock for our compute backbone, and that a lot of the hastily accumulated hardware we see now will be garbage in just a few years. It's true insanity.
I'm building an opinionated take on this. It's shaping up nicely.
If you're a Rust developer reading this, interested in AI + GUI + Enterprise SaaS, and want to talk, I'm building a team as we speak. E-mail in profile.
> Right now, with their billions and trillions of dollars, they are trying to create a new world run by post-humans without ever having inquired about the opinions and preferences of the rest of humanity. They’re doing this without our consent, and they don’t really care one bit about what the rest of us have to say.
Man this shit makes me so sad for our future. The money train has left the station and I don’t know if enough good people have the will or ability to shut this destructive speed run down.
They're building their own little world, a parallel reality where owning submarine garages and yachts and island hideouts is appropriate; but helping the rest of humanity isn't.
I hope we — the People — realise this faster and decide we don't want to help them out. Divest from their properties, stop using their tools, and keep being nice to other humans. The rift isn't our design, we just have to do what we can. We may lack the ability but not the will.
Teach your kids right, folks. Let them aim for a firm kindness and honesty rather than monetary success.
I think you really want to be marketing concurrently with doing the thing. Saving it all for the end is probably a mistake. And a lot of projects don't have defined "end"s, like open source projects, or websites.
That depends. If OP’s job is marketing then doing it before is doing the thing even if it pisses off the people who’ll have to do the thing OP made up.
Marketing can also lead to you receiving feedback for what the users actually want. Doing the thing does not imply doing the right thing. You can be super productive building the wrong thing that no one wants.
> this was for our first few beta customers from 2017 and we made it clear that there was a human in the loop of the service. LLMs didn't exist yet. It was like offering an EA for $100/mo - several other startups did that as well, but obviously it doesn't scale.
So not necessarily fraud unless they deceived investors. Or he’s covering up his mistake. Getting the popcorn!!
That's not from the LinkedIn post by the CTO that the story (or the Futurism story the linked story is apparently rereporting) is based on [0], which has the same straight-up fraud description that appears in both stories: “We told our customers there's an "AI that'll join a meeting." In reality it was just me and my co-founder calling in to the meeting, sitting there silently and taking notes by hand.”
Your quote seems likely to be from an after-the-fact damage-control “clarification” post by the CEO [1], which describes the early users as close friends who knew that the service was human-assisted and not machine transcription. (I say “seems likely” because it expresses something similar to what you claim but slightly more distant from the original story, and doesn't contain the quote you present; it is marked as edited, though, so it's plausible that it once had your quote and was later given an even stronger rewrite of the narrative for PR reasons.)
1. It's not fraud. They were proving the market at an early stage while living on a pizza diet.
2. Their startup now does what it says on the tin. And it's now a unicorn.
3. To those claiming this was "unethical" - a large company providing this service would still record calls and have QA / engineers listening to calls to improve the service.
What the news article and the original post from the CTO clearly describe is fraud: the service being sold on explicit, false pretenses.
What the later post from the CEO describes (and presents as a clarification, though it conflicts with rather than clarifies the initial description) is not fraud. The question is: was the CTO being loose with the truth to paint a rebel image, or is the CEO being loose with the truth to protect the company's image after the CTO’s post got picked up by multiple news outlets and people correctly pointed out that it described fraud?
+1, this strategy is often called a "concierge" MVP. You deliver the service you claim, but behind the scenes everything is actually incredibly manual. Once you've proven people like the service, you then go make the process less manual. Zappos and Amazon are both famous for doing this.
Customers pay for the service, not the method by which the service is provided. If they explicitly sold it as a service without a human in the loop, then I think that's bad. If they just sold transcription… then this is transcription.
Holy cow.
> These SPVs aren’t subject to the same kinds of regulations as the parent company.
Have SPVs always been used so egregiously or is this a unique feature of the AI spending boom?