
I really don't like that argument because it assumes your program is important enough to be the only one running on your users' machines; it rarely is.

Everything easily wastes 90% of the CPU resources it touches, and the task manager is completely oblivious to it, happily reporting the wasted cycles as high usage. When you have 20+ tabs open and 10+ apps, all those "it's fast enough" apps combine to create their own variant of hell.

And that isn't even a big workload. It's no wonder computers have increased many orders of magnitude in performance over the last decade, yet user experiences are still generally mediocre.



We're talking here about the time to render JSON to HTML for one page - the page that this user is presumably looking at. If that takes 90% of your CPU for more than a few milliseconds, then it's time to refactor.


It's a bit more complicated.

It's that JSON data, the request to API endpoint(s) to get it, the JavaScript to drive it, the request to fetch that JS, and whatnot. That may not waste much CPU indeed, but it does waste bandwidth and time. This is quite easy to notice on mobile devices.

Your servers now have to serve these API endpoints; static pages could have been offloaded to proper CDNs. For larger deployments this can drive server costs up enough to eat your profits rather quickly. That's not even considering the fact that the dynamic route took much more development effort than the static one in the first place.

About the 90% wasted CPU: I was talking about how the CPU constantly waits for memory, because very, very few programmers optimize for cache misses, and lots of dynamic languages make it impossible to. Waiting on memory still shows up as activity in the task manager, but the CPU isn't computing anything.



