TradingView’s charts couldn’t handle a million data points. They typically just render a few thousand candlesticks at a time, which is trivial with well-optimized Canvas code.
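To be concrete, the usual trick is viewport windowing: only the candles inside the visible range get drawn each frame, so the size of the full dataset barely matters. A rough TypeScript sketch of the idea (my own illustration, not TradingView’s actual code; all names are made up):

    // Render only the candles in the visible window, not the whole dataset.
    type Candle = { time: number; open: number; high: number; low: number; close: number };

    function drawVisibleCandles(
      ctx: CanvasRenderingContext2D,
      candles: Candle[],   // full dataset, possibly millions of points
      viewStart: number,   // index of first visible candle
      viewEnd: number,     // index one past the last visible candle
      width: number,
      height: number,
      priceMin: number,
      priceMax: number
    ) {
      const count = viewEnd - viewStart;
      const candleWidth = width / count;
      // Map a price to a y coordinate (higher prices toward the top).
      const priceToY = (p: number) =>
        height - ((p - priceMin) / (priceMax - priceMin)) * height;

      ctx.clearRect(0, 0, width, height);
      for (let i = viewStart; i < viewEnd; i++) {
        const c = candles[i];
        const x = (i - viewStart) * candleWidth;
        // Wick: a vertical line from high to low.
        ctx.strokeStyle = "#888";
        ctx.beginPath();
        ctx.moveTo(x + candleWidth / 2, priceToY(c.high));
        ctx.lineTo(x + candleWidth / 2, priceToY(c.low));
        ctx.stroke();
        // Body: a filled rectangle between open and close.
        ctx.fillStyle = c.close >= c.open ? "#26a69a" : "#ef5350";
        const top = priceToY(Math.max(c.open, c.close));
        const bottom = priceToY(Math.min(c.open, c.close));
        ctx.fillRect(x, top, Math.max(candleWidth - 1, 1), Math.max(bottom - top, 1));
      }
    }

Drawing a few thousand rectangles and lines per frame like this is well within what Canvas handles at 60fps; the work is independent of how many points sit outside the viewport.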
Classic HN snark. It’s an example that is supposed to show the edge of its capabilities. You won’t find another word processor that can even come close.
> Imagine buying hardware that will be obsolete in 2 years
Unless the PC you buy costs more than $4,800 (24 months x $200), it is still a good deal. For reference, a MacBook Pro with an M4 Max and 128GB of unified RAM is $4,699. You need a computer for development anyway, so the extra you pay for inference is more like $2-3K.
Besides, it will still run the same model(s) at the same speed after that period, or maybe even faster with future inference optimisations.
The value depreciation of the hardware alone is going to be significant. Probably enough to pay for three ~$20/month subscriptions to OpenAI, Anthropic and Gemini.
Also, if you use the same Mac for work, you can't reserve all 128GB for LLMs.
Not to mention a Mac will never run SOTA models like Opus 4.5 or Gemini 3.0, which the subscriptions give you.
So unless you're ready to sacrifice quality and speed for privacy, it looks like a suboptimal arrangement to me.
It’s not just good for small code bases. In the last six months I’ve built a collaborative word processor with its own editor engine and canvas renderer using Claude, mostly Opus. It’s practically a mini Google Docs, but with better document history and an AI agent built in. I could never have built this in 6 months by myself without Claude Code.
I think if you stick with a project for a while, keep the code well organized, and most importantly prioritize an excellent test suite, you can go very far with these tools. I am still developing this at a high pace every single day using them. It's night and day to me, and I say that as someone who solo-founded a company and was acquired once before, 10 years ago.