Great piece — the idea that the web isn’t "closing" but repricing is a powerful way to frame what’s happening.
The staircase of cost jumps from anti-bot upgrades really resonated; that's exactly how it feels in practice.
Efficiency over raw scale feels like the right mental model for the next phase of scraping.
Really appreciated this. It’s refreshing to see someone be this honest about their gaps and growth. A lot of us quietly deal with the same things, so thanks for putting it into words.
The part that surprised me is how much TPUs gain from the systolic array design. It basically cuts down the constant memory shuffling that GPUs have to do, so more of the chip’s time is spent actually computing.
The downside is the same thing that makes them fast: they’re very specialized. If your code already fits the TPU stack (JAX/TensorFlow), you get great performance per dollar. If not, the ecosystem gap and fear of lock-in make GPUs the safer default.
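To make the "fits the TPU stack" point concrete, here's a minimal sketch assuming the standard JAX API (shapes and dtype are just illustrative): a single jitted matmul, which XLA lowers onto the TPU's MXU (the systolic array) when run on TPU hardware, so operands stream through the array instead of round-tripping through memory for every multiply-accumulate.

```python
# Minimal JAX sketch (illustrative shapes/dtype): on TPU, XLA maps this dot
# onto the MXU systolic array; partial sums flow between array cells rather
# than each operation fetching and writing its own operands in memory.
import jax
import jax.numpy as jnp

@jax.jit
def matmul(a, b):
    return jnp.dot(a, b)

key = jax.random.PRNGKey(0)
a = jax.random.normal(key, (1024, 1024), dtype=jnp.bfloat16)
b = jax.random.normal(key, (1024, 1024), dtype=jnp.bfloat16)

out = matmul(a, b)
print(out.shape, out.dtype)
```

The flip side is exactly the lock-in worry: the same code runs on CPU or GPU, but the big wins only show up when your whole workload compiles cleanly through XLA onto that hardware.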