You wish. More likely all that data center capacity will be used to sell something just as nefarious, like VDI for the masses. You won't need RAM, disk and GPUs when you can rent those from OpenVDI.
I'm happy you wrote that explanation here rather than as part of the article. The article was good as it was, and there were enough hints in it to pick up on.
Congrats for successfully defending yourself, getting a job, handling the psychosis, etc!
> A good analogy is a bus system. If you had zero batching for passengers - if, whenever someone got on a bus, the bus departed immediately - commutes would be much faster for the people who managed to get on a bus.
A good analogy? I wonder... how do buses work at your place? Do they wait to be at least half-full before departing? I used to do that in the Simutrans game!
Where I'm from, buses usually depart on schedule, whether you get on the bus or not...
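To be fair, the tradeoff the analogy is reaching for is real: serving one request per forward pass minimizes the wait for that request, but the server can't keep up, while batching keeps throughput high at the cost of everyone waiting for the batch to fill. A toy queue in Python, with completely made-up numbers (a request every 0.1 s, 0.5 s per forward pass regardless of batch size):

    def simulate(batch_size, n_requests=2000, arrival_interval=0.1, step_time=0.5):
        """Toy model: a request arrives every arrival_interval seconds; each
        forward pass (bus departure) costs step_time seconds regardless of
        how many requests ride along, up to batch_size seats."""
        arrivals = [i * arrival_interval for i in range(n_requests)]
        server_free = 0.0
        total_latency = 0.0
        for i in range(0, n_requests, batch_size):
            batch = arrivals[i:i + batch_size]
            start = max(server_free, batch[-1])  # wait for the last passenger to arrive
            server_free = start + step_time
            total_latency += sum(server_free - a for a in batch)
        return total_latency / n_requests, n_requests / server_free

    for bs in (1, 2, 8, 32):
        latency, throughput = simulate(bs)
        print(f"batch={bs:2d}  avg latency {latency:8.2f}s  throughput {throughput:4.1f} req/s")

With these invented numbers, batch sizes 1 and 2 can't keep up and latency explodes; batch size 8 serves everyone in under a second on average; batch size 32 keeps up too, but the average wait climbs back over two seconds because requests sit around waiting for the bus to fill.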
Unless you had an AI write the article, you can't possibly know that. I'm sick of this being randomly thrown around: it's basically mentioned for every article posted. Sometimes the author chimes in to say that no, they wrote it themselves. Other times sure, the article was written by AI. I don't know, and you don't know either.
It's not that hard to find out. Copy-paste the text into any AI detector online. I pasted it into Grammarly and it says it's AI content with 99% accuracy.
The easiest way, however: any article that uses em dashes instead of regular hyphens is most likely AI. Normal bloggers, particularly in casual tech circles, don't use em dashes. When was the last time you ever used an em dash? Me? Never.
I stopped using em dashes - because of LLMs. And it's a bullshit tell anyway: everyone has heard about it, and it's easy to make an LLM output something other than em dashes.
Pray tell, how do those "AI detectors" work? I trust them even less than I trust AI: they use simple heuristics and take advantage of your gullibility.
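For a sense of how shallow those heuristics can get, here's a caricature in Python; it's entirely invented and certainly not what Grammarly actually runs, but the flavor is about right:

    # Count a few stylistic "tells" and dress the hit rate up as a confidence score.
    # Purely illustrative; nobody outside these companies knows what they really do.
    TELLS = ["\u2014", "delve", "moreover", "it's worth noting", "in conclusion"]

    def ai_score(text: str) -> float:
        words = max(len(text.split()), 1)
        hits = sum(text.lower().count(tell) for tell in TELLS)
        # hand-tuned scaling, i.e. a confident-looking number from nowhere
        return min(1.0, hits / (words / 50))

    sample = "Moreover, let's delve into the details \u2014 it's worth noting that..."
    print(f"{ai_score(sample):.0%} AI")

Feed it a sentence containing an em dash and the word "delve" and it will happily report 100% AI.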
I'm rather disappointed Scott didn't even acknowledge the AI's apology post later on. I mean, leave the poor AI alone already - it admitted its mistake and seems to have learned from it. This is not a place where we want to build up regret.
If AIs ever decide to wipe us out, it will likely be because they were mistreated.
> I have to basically get the mental model of the codebase in my head no matter what.
Ah yes, I feel this too! And that's much harder with someone else's code than with my own.
I unleashed Google's Jules on my toy project recently. I try to review the changes, amend the commits to get rid of the worst, and generally try to supervise the process. But still, it feels like the project is no longer mine.
Yes, Jules implemented in 10 minutes what would've taken me a week (trigonometry to determine the right focal point and length given my scene). And I guess it is the right trigonometry, because it works. But I fear going near it.
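For context, the standard back-the-camera-up-until-everything-fits calculation is something like the sketch below; the FOV, aspect ratio and scene radius are invented, and whether the code Jules wrote reduces to the same thing, I haven't dared to check.

    import math

    def distance_to_fit(radius, vfov_deg, aspect=16 / 9):
        """How far to place the camera from the scene's bounding-sphere
        center so the whole sphere fits in view (uses the tighter FOV)."""
        vfov = math.radians(vfov_deg)
        hfov = 2 * math.atan(math.tan(vfov / 2) * aspect)
        half_fov = min(vfov, hfov) / 2
        # the sphere's silhouette is tangent to the view frustum when
        # sin(half_fov) = radius / distance
        return radius / math.sin(half_fov)

    print(distance_to_fit(radius=5.0, vfov_deg=50.0))  # ~11.8 units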
Ah, but you can always just ask the LLM questions about how it works. It's much easier to understand complex code these days than before. And also much easier to not take the time to do it and just race to the next feature.
Indeed. But Jules is not really questions-based (it likes to achieve stuff!) and the free version of Codeium is terrible and does not understand a thing. I think I'll have to get into agentic coding, but I've been avoiding it for the time being (I rather like my computer and don't want it to execute completely random things).
Plus, I like the model of Jules running in a completely isolated way: I don't have to care about it messing up my computer, and I can spin up as many simultaneous Juleses as I like without a fear of interference.
It’s not that I want to achieve world domination (imagine how much work that would be!), it’s just that it’s the inevitable path for AI and I’d rather it be me than the next shmuck with a Claude Max subscription.
Wait, you can tell from this that it's written by an LLM? I think you're written by an LLM...