It doesn't have to be this way. I am not sure when there was a new rule passed in software engineering that said that you shall never use server rendering again and that the client is the only device permitted to render any final views.
With server-side (or just static HTML if possible), there is so much potential to amaze your users with performance. I would argue you could even do something as big as Netflix with pure server-side if you were very careful and methodical about it. Just throwing your hands up and proclaiming "but it won't scale!" is how you wind up in a miasma of client rendering, distributed state, et al., which is ultimately 10x worse than the original scaling problem you were faced with.
There is a certain performance envelope you will never be able to enter if you have made the unfortunate decision to lean on client resources for storage or compute. Distributed anything is almost always a bad idea if you can avoid it, especially when you involve your users in that picture.
This type of anti-big-js comment does great on Hacker News and sounds good, but my personal experience has always been very different. Every large server-rendered app I've worked on ends up devolving to a mess of quickly thrown together js/jquery animations, validations, XHR requests, etc. that is a big pain to work on. You're often doing things like adding the same functionality twice, once in the server view and once for the manipulated resulting page in js. Every bit of interactivity/reactivity that product wants to add to the page feels like a weird hack that doesn't quite belong there, polluting the simple, declarative model that your views started off as. None of your JS is unit tested, and sometimes it's not even linted properly because it's mixed into the templates all over the place. The performance still isn't a given either: your rendering times can still get out of hand, and you end up having to do things like caching partially rendered page fragments.
The more modern style of heavier client-side js apps lets you use software development best practices to structure, reuse, and test your code in ways that are more readable and intuitive. You're still of course free to mangle it into confusing spaghetti code, but the basic structure often just feels like a better fit for the domain if you have even a moderate amount of interactivity on the page(s). As the team and codebase grows the structure starts to pay off even more in the extensibility it gives you.
There can be more overhead as a trade-off, but for the majority of users these pages can still be quite usable even if they are burning more cycles on the users' CPUs, so the trade-offs are often deemed to be worth it. But over time the overhead is also lessening as e.g. the default behavior of bundlers is getting smarter and tooling is improving generally. You can even write your app as js components and then server-side render it if needed, so there's no need to go back to rails or php even if a blazing fast time to render the page is a priority.
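To make that last point concrete, here's a minimal sketch of server-rendering a JS component (assuming a React-style setup; the App component and the surrounding response wiring are hypothetical):

    // Server: turn the same component you'd ship to the browser into an HTML string.
    import React from "react";
    import { renderToString } from "react-dom/server";

    function App({ name }: { name: string }) {
      return React.createElement("h1", null, `Hello, ${name}`);
    }

    // Send this HTML in the response for a fast first paint; the client bundle
    // can then hydrate the same markup to make it interactive.
    const html = renderToString(React.createElement(App, { name: "visitor" }));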
>The more modern style of heavier client-side js apps lets you use software development best practices to structure, reuse, and test your code in ways that are more readable and intuitive.
Sadly, this is probably where the core of the problem lies. "It makes code more readable and intuitive" is NOT the end goal. Making your job easier or more convenient is not the end goal. Making a good product for the user is! Software has got to be the only engineering discipline where people think it's acceptable to compromise the user experience for the sake of their convenience! I don't want to think too closely about data structures, I'll just use a list for everything: the users will eat the slowdown, because it makes my program easier to maintain. I want to program a server in a scripting language, it's easier for me: the users will eat the slowdown and the company budget will eat the inefficiency. And so on.
> Making your job easier or more convenient is not the end goal. Making a good product for the user is!
This is a very limited perspective. Let me give you an enterprise software perspective:
1) Software maintenance costs. Better maintainable software allows more services to be delivered with a smaller budget.
2) Software is never finished. Ability to respond to new or changing user requirements matters to users and is perceived as part of the quality of service.
3) Software is going to be maintained by someone else. If they cannot maintain it, then users have to go through another iteration done by another team (best case: fewer features, but at least the implemented features have more bugs).
> 1) Software maintenance costs. Better maintainable software allows more services to be delivered with a smaller budget.
Better is not the same as easier. This is such an equivocation fallacy since both of those terms are highly subjective. Maintenance costs are measured in numbers and not some imaginary developer ideal of job security.
Code that is easier to read is easier to maintain and easier to debug. Code that is easier to read and more intuitive will, more often than not, result in a better product and better experience for the users.
I disagree with the premise. "Readability" is an excuse people use for writing slow code. It's not an inevitable tradeoff.
Like, most of these people are not saying, "we could do this thing which would speed up the app by an order of magnitude, but we won't because it will decrease readability." They have no idea why their code is slow. Many don't even realise it is slow.
My favourite talking point is to remind people that GTA V can simulate & render an entire game world 60 times per second, or 144 times per second if you have the right monitor. Is that a more complex render than Twitter?
Computers are really fast, it doesn't take garbage code to exploit that.
IMHO it’s in part because of what I assume are different business models between a game like GTA and many / most businesses where a website / web app is core to their product.
Different business models result in different environments in which to conduct software engineering; different constraints and requirements.
IMHO constant and unpredictable change (which I assume happens less for games like GTA) is one of the big differences, as is the relationship between application performance and profit.
But I like what you’re saying and would love to see that world.
More than a website that preloads its structure into the cache and then transfers blocks of 280 characters, a name & a small avatar, rather than gigabytes worth of compressed textures.
Is the difference because GTA has more "readable" code?
I have other games that do load up quicker than Twitter, which I do think is damning, but it's not really the point I'm trying to get across here.
Well, the "less readable code"—i.e., the goddamn mess that a lot of game code is, slapped together barely under deadline by staff working 80 or more hours a week—is part of why AAA games like GTA have so many massive bugs requiring patches immediately after release.
But then, you brought up GTA and games, which aren't even apples and oranges with a website. Websites—even the Twitter website—don't require GPUs or dedicated memory, they don't have the advantage of pulling everything from the local hard drive, and yet they actually work as designed, not merely in a low-resolution, low-effects mode on computers more than a couple years old.
And while I wouldn't point out the Twitter home page as remotely fast for a web site, have you actually even looked at it recently? It shows a lot more than just a few tweets and avatars. It's got images, embedded video, etc.
This is a dumb argument. My point is that readable doesn't imply slow, and "readability" is not actually the reason slow things are slow, most of the time. I don't think you even disagree with me.
There's definitely another discussion to be had about why web tech is so disastrously slow given what computers are capable of, but it's not worth having here. We're never going to settle that one, and regardless if you are a web guy, you're stuck with JS.
I just logged into nest for the first time on a new laptop. It took 15 seconds to load and get to the screen to change the temperature on the thermostat.
Then I refreshed the page and it still took 10 seconds to reload.
It's actually a pretty good analogy, when you're searching for an analogy of something being technically accurate while missing literally the entire point.
Making your code "more intuitive" does not result in a better product. Making a better product does. The argument is that software development is one of the few jobs where the employees' experience seems to be equal to or more important than the client experience. Sacrificing a restaurant-goer's experience (taste) because it makes the restaurateur's experience (customer LTV) better is a decent analogy.
A: "Okay, but the new place next door serves identical food with a more efficient kitchen, at lower prices you can't match without making significant changes. If you don't improve efficiency somehow you'll start to lose customers."
Because the universal rule is that 90% of everything is terrible, including software. Corollary rule is that work expands to fill all available time, and software expands to fill all available resources.
If you go back to before the ascent of front-end frameworks, you would find that there were still tons of sites that were slow and poorly performing, despite running entirely on server-side technologies.
Something that made Google incredibly appealing when it first came out was its instantly-loading front page with a single search box and a single button. This was in drastic, shocking contrast to the age where every other search engine portal had a ton of content on it, including news, stock tickers, and the kitchen sink.
In the end, unless the developers of the sites make performance a priority, it makes absolutely no difference what the tech stack is. The problem is that companies don’t prioritize it.
Maintainable code can quickly and easily be extended into new features for the customer.
Unmaintainable code usually results in a ton of support tickets and late nights hunting and fixing bugs that originated from deploying into production that day. This leads to heartache and frustration for the customer.
The customer comes first, yes. Good, maintainable code is a way to achieve this goal.
One can make code more readable and intuitive while also making a good product for the user.
I have done both: heavy server-side rendering using templates, and SPA rendering on the client side. It all comes down to your user base, the devices/browsers they are using, and whether they have an aversion toward running JS in the browser.
By using JS on the server side, you can maintain quite a bit of logic on both the server and client side. If you are doing web development, why not use the same language on both? Yes, one shall not trust client-side validation alone, but many people find JS validation more user-friendly than a full form submit.
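As a rough illustration of that kind of sharing (the rule and messages here are made up, not from any particular project), one validation function can run in the browser for friendly inline feedback and again on the server as the authoritative check:

    // One rule, two call sites: the browser (for UX) and the server (for authority).
    export function validateEmail(value: string): string | null {
      if (!value.includes("@")) return "Please enter a valid email address.";
      if (value.length > 254) return "Email address is too long.";
      return null; // valid
    }

    // Client: call it on blur/submit and show the message next to the field.
    // Server: call it again before persisting -- the client's result is never trusted alone.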
I find this comment a bit odd. It seems clear to me that code that is easier to write, reuse, test and maintain saves time and money that can then be spent on building a better product. Do you disagree?
Then again, my decision to order a pizza doesn't hinge on whether I have to wait an extra 5 seconds for the initial payload.
But it does hinge on how good the delivery website is though. If you haven't been to a pizza website in the past 10 years, let me point out they are complex with interactive drag-and-drop build-your-own-pizza wizards. Better client-side tech helps build those features to sell more pizzas.
How does your envisioned alternative help sell more pizzas than the heavy-client approach that pizza corporations have decided on?
> Then again, my decision to order a pizza doesn't hinge on whether I have to wait an extra 5 seconds for the initial payload.
Really? Because personally I find I tend to do more business with places that have interfaces that don't make me want to beat the developer with a wrench.
It’s even more important in that case, because writing a pizza delivery website isn’t a complex or new problem, so it doesn’t need a complex solution.
But to answer your question: if you write clean code, then when that company expands its operations it is easier for you (or whoever next gets commissioned) to later expand on that site to add more restaurants, thus allowing your customer to sell more pizzas.
> That is definitely not a feature that requires JavaScript
You’re right it doesn’t but I thought the context had drifted from that topic and onto code quality.
If we’re talking strictly about JS-heavy sites then I’m definitely in favour of the less-is-more approach. There are times when it makes sense to have JavaScript trigger RESTful APIs rather than have the entire site rendered on the server side. The problem is that JavaScript often gets overused these days.
I could write an essay on where I think modern tech has gone off the rails though.
> if you now want to pick on my example.
That’s a strange comment to make considering you presented the example for discussion. Of course people will then “pick on” the example, in fact you’d be the first to moan about a straw man argument if people cited a different example.
One of the truly differentiating factors for good software engineers is being able to recognize when your habits are in harmony with the objectives of the project you're working on. And on the meta-level, developing the sense for how to keep them in harmony with the trajectory of a project which will likely prioritize different things at different points over its lifespan.
Propensity to change is one of the most common features I've found in software projects I've worked on in my career and most software engineering "best practices", as conceived by the authors of opinions about these things, are usually strategies for managing rapid change. i.e. structuring code so it's amenable to change, understandable to the maintainer who inherits your code, has guardrails around important invariants and guarantees via assertions and tests, etc.
The details of how (and to what degree) these things should be done are highly contextually sensitive, and that is where the dogma of "best practices" can start to interfere with creating a good user experience. But I find it a little eye-rolling when people talk about hygienic software development practices and user experience as though they are in opposition. Tests, legible code, flexible structure, etc. are enablers of good user experience, because they're what allow us to change products to fit the needs of our users. They're what allow us to ship things that people can use without them exploding.
The tendency toward asset bloat on the web and just the general use of cheap-in-development-costly-for-the-user solutions (scripting languages, inefficient data structures, verbose serialization formats, piles of dependencies) is definitely an industry problem, but I think it's naive to attribute these decisions to lazy devs or devs trying to make their jobs more convenient. In my experience there are two common causes for this state of things:
1. In all seriousness, the nature of capitalism. In reality, most businesses don't actually care about the majority of prospective users. They care about a couple narrow segments of users, and if those users happen to be equipped with hardware to handle this kind of inefficiency (i.e. if they're first world clients on desktop or high end mobile devices with 4G access), the business largely doesn't care. Responsiveness, low resources consumption, low energy impact, etc. are fungible engineering goals if they don't negatively affect your sales objectives.
2. The org hasn't figured out how to incentivize responsiveness, low resource utilization, etc. as objectives. Software developers get requests to focus their efforts on all different manner of criteria, and without designing incentivization schemes and feedback loops that orient toward these objectives, there is no particular reason why they'll be inclined toward them.
> I'll just use a list for everything: the users will eat the slowdown
"Premature optimization is the root of all evil." You often should use a list until you've identified a specific performance issue. The list isn't the problem. Not actually optimizing is.
I agree if you know, e.g., that the list is going to contain thousands of items or is going to be iterated over in your inner loop. That's not premature optimization, that's just common sense. Hence the qualifier "premature".
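A toy example of what that looks like in practice (the data is made up): the "just use a list" version is fine until a profiler shows the membership check is hot, and then the fix is local and boring.

    // Imagine thousands of entries in reality; three will do for the sketch.
    const blockedIds: string[] = ["u1", "u2", "u3"];

    // O(n) per lookup -- perfectly fine until profiling says otherwise.
    function isBlockedNaive(id: string): boolean {
      return blockedIds.includes(id);
    }

    // O(1) per lookup -- the drop-in change once this shows up as a hotspot.
    const blockedSet = new Set(blockedIds);
    function isBlockedFast(id: string): boolean {
      return blockedSet.has(id);
    }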
> You often should use a list until you've identified a specific performance issue.
Identifying a JS performance issue is something that almost no company ever does unless revenue is obviously threatened (and even then many fail to act). So IMO it pays off to do a little bit of premature optimization in JS land.
> "It makes code more readable and intuitive" is NOT the end goal. Making your job easier or more convenient is not the end goal. Making a good product for the user is!
I think the end goal is more about balancing the value you can create with the budget you have. It's an optimization problem.
If I can deliver 80% of the value for 20% of the price, I will do that.
Until somebody comes around and sells the 100% product for a 50% markup and enough people value completeness and UX more than their money. See Apple's MacBook Pro and iPhone.
Another example: I absolutely hate Amazon Prime Video for its UX and bugs they didn't fix for years (like asynchronous audio). So even though Prime is significantly cheaper in Germany, I rather pay for the more expensive Netflix instead.
I agree with you that it makes sense from a dev point of view but what about the users?
Imagine all the restaurants and cooks in the city decide that from tomorrow they want to make their jobs easier, so they drop ingredient, hygiene and preparation quality by 80% so they can work less and make more profit. The ones that won't follow the new way will be put out of business because they will have higher expenses, and the lazy ones can put some of the new profits into evangelizing the new ways, making them look cool.
People's decisions actually do hinge on those restaurant examples, though. Gross food? Hair in your meal? Horrible service? Restaurants can't get away with that due to stark business and reputation penalties. People will simply never come back.
Quite a different scenario than all websites taking an extra few seconds to load.
In fact there's very little a website can even do to turn off customers like a restaurant can. Imagine if HN took 10 seconds to load. Who cares? There's no "HN across the street" that I can go to that hinges on a 10 second wait time.
Right, so a fast native application would have a niche of users who care, whereas the apps made with Electron will have a larger user base because they are cheaper for the developers to create, but users will pay with electricity and frustration.
My issue is that you can market some cheap food as "our food is cheap but good enough, come here to save money", whereas with software it's "our software is slow, buggy, and eats your battery - use it because we are lazy and we want to use the latest, coolest language to put on our CVs".
Sure, when I do a proof-of-concept toy project I will be lazy and use whatever I like; if I share it, it will be free. But I have a problem with big projects, say a news site with millions of users, where your laziness (or use of the latest cool stuff) affects such a giant number of people.
There’s no moral obligation to make incredibly efficient and streamlined software. Solving the problem and proving sufficient value is usually enough.
Sure, they might get displaced by a competitor in the future - but by then they’ve probably got a large user base and a warchest to compete with. Slack is a great example of this.
If it’s software that’s life critical, then maybe, but that’s a small minority.
Markets and buyer preferences are always changing - I think it’s better to be agile (ie high developer velocity with talented product managers) to be able to detect and capture these shifts.
Slack has a budget of multiple millions, RipCord has a budget of zero, and one programmer who works on it in his spare time. Which is faster?
Polishing RipCord so it looked indistinguishable from the Slack client wouldn't be that expensive. I would argue it's much more "they don't know how" than "they are making a business decision not to."
I'd add that the OP was advocating for static HTML, which is, in this day, unsellable to a lot of clients. I love producing static HTML server-side, and in-house a lot of our internal tools get written like that and operate fine.
Additionally I feel like there was an implication that in such a setup the client would be doing hard processing - i.e. aggregating a raw data set on the fly - I've seen this done and it's terrible in most circumstances and it certainly isn't the norm. The server can do the heavy lifting and then hand things off to the client to do all the minor display adjustments like localization, adjusting for timezones - display stuff.
A well-written front-end can provide a far more responsive page by using above-the-fold-only rendering that the server is mostly ignorant of. Both sides of the product should be independent systems set up to treat the other as a foreign I/O pipe where data is just being requested and returned.
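A tiny sketch of that division of labour (the markup is hypothetical): the server renders a canonical UTC timestamp and the client only does the "display stuff", localizing it for the viewer's timezone and locale.

    // Server rendered e.g. <time datetime="2020-07-01T14:30:00Z">2020-07-01 14:30 UTC</time>
    document.querySelectorAll("time[datetime]").forEach((el) => {
      const iso = el.getAttribute("datetime");
      if (iso) {
        // Swap the server's canonical text for the viewer's local representation.
        el.textContent = new Date(iso).toLocaleString();
      }
    });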
> The more modern style of heavier client-side js apps lets you use software development best practices to structure, reuse, and test your code in ways that are more readable and intuitive.
I have seen phrases like that being used to (over)sell something so many times that I feel suspicious and doubtful whenever I see them --- Enterprise Java™ is sold using similar verbiage, and yet I'd never want to work with it again. Some of the worst codebases I've worked with --- ridiculously indirect and abstracted --- were created and described with such terms.
I'll take spaghetti code over whatever dogmatically following "best" practices produces. The former "flows", while the latter "jumps".
> Every large server-rendered app I've worked on ends up devolving to a mess ... that is a big pain to work on.
I can honestly say the same about every large SPA style app I've worked on and I've worked on several. Rewrites are the norm in the JavaScript arena because code devolves into a complicated mess then people look at it and say "It's just a JS app–there's no reason it should be this complicated!" Then they rewrite it with the latest framework magic, rinse, and repeat.
EDIT: I have to go further and (respectfully) say this comment I'm replying to is a load of BS. It's responding to a big pile of data saying that pageload speeds are not faster today with 'well that's not my experience.'
The other aspect of this comment that sets me off is what I'll call the "you don't have to use JSX to use React" type rebuttal. With this type of rebuttal, you respond to complaints about the way 99% of people use JavaScript by claiming that it's not strictly necessary to do it that way, despite the fact that "that way" is how everyone does use JS as well as the way thought-leaders & framework authors suggest you use it. It's responding to real-world conditions in workplaces with a hypothetical-world where JS is used differently than it is today.
This type of argument always shifts the blame on to individual developers for "using JS wrong." When 90+% of people are "using the tool wrong" there is a problem with the tool and it's not reasonable to shift the blame to every user who's trying to follow the latest "best practices."
I wish we could acknowledge the facts of the JS ecosystem (like those presented in this article) rather than deflect with "not all JavaScript apps..." when mostly what you're talking about is demos & contrived speedtests, not real-world applications.
If 'ifs' and 'ands' were pots and pans, we'd have no need for a tinker.
We side-step large parts of this argument for our more complex business UIs by leveraging server-side technologies like Blazor.
We also recognize that by constraining some aspects of what we are willing to support on the UI that we get profound productivity gains.
Hypothetically, if we wanted some animations in our server-side blazor apps, we could add some new methods to our JS interop shim for explicitly requesting animation of specific element ids. We could also conditionally adjust the CSS stylesheets or classes on elements for animations on round trips of client-triggered events. Putting an @if(...){} around a CSS rule is a perfectly reasonable way to get the client to do what you want when you want. In this model the client has absolutely zero business state. When you go all-in with server-side it means you always have a perfect picture of client state from the server's perspective, so you can reliably engage in this kind of logic.
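For illustration, the browser-side half of such an interop shim could be as small as this (the function name and CSS class are hypothetical, not Blazor APIs); the server-side app then invokes it by name, e.g. through IJSRuntime, whenever it decides an element should animate:

    // Registered once on the page; the server calls it by name with an element id.
    (window as any).animateElementById = (id: string, cssClass: string) => {
      const el = document.getElementById(id);
      if (!el) return;
      el.classList.add(cssClass); // a class whose CSS keyframes define the animation
      el.addEventListener("animationend", () => el.classList.remove(cssClass), { once: true });
    };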
There are compromises that have to be made. The above is not a perfect solution for all use cases. But, it does demonstrate that with some clever engineering and new ways of thinking about problems that we can get really close to meeting in the middle of several ideal philosophies all at the same time.
> The more modern style of heavier client-side js apps lets you use software development best practices to structure, reuse, and test your code in ways that are more readable and intuitive.
This reads like vague marketing speak. In reality, SPA/front-end JS frameworks do the exact opposite - they violate all kinds of software best practices: duplicating logic between server and client, creating brittle tests, conflation of concerns, etc, etc. SPAs/front-end JS frameworks are an anti-pattern, imo.
SPAs treat the browser just like any other client. If anything, it's more design-/architecture-consistent.
- iOS speaks to your JSON server.
- Android speaks to your JSON server.
- CLI speaks to your JSON server.
- Desktop GUI speaks to your JSON server.
- Other machines speak to your JSON server.
Meanwhile...
- Browsers use browser-specific html endpoints to utilize a historical quirk where they render UI markup sent over the wire that the server has to generate, and now you're dealing with UI concerns on both the server and the web client instead of just dealing with biz-logic and data on the server.
I find it very hard to see how this is somehow what avoids duplicating logic on server/client and conflating concerns.
I think you’re confusing the view layer with adding an entire SPA/JS framework where you duplicate the data model and offload business logic to the client vs a standard req/resp and rendering html/json or even JS from the server. It’s much much cleaner to do the latter. This is speaking from hard earned experience building web apps over the last ten years.
On the contrary, a traditional server-side/jQuery hybrid is what results in duplicated templating and logic. A SPA at least has everything in one place.
Sounds like you mostly had experience with old style server side development (like php with no frameworks maybe?)
Have you tried building an app with a more modern framework, like Phoenix or Blazor? You won't even need to write a single line of javascript. All business logic is on the server (no duplication of models) and can be easily tested.
In my experience, SPAs tend to get overly complex, and they mix (and duplicate) business logic with UI logic.
I've worked on (non-SPA) webapps that are old enough to vote and I just have to disagree with this assessment.
Beyond that, most websites are not webapps, most are serving up static or near-static content so most websites shouldn't be designed and engineered like they are complex webapps that are doing heavy data processing.
This might be the root cause of the "goopy" feeling I get working on web apps. I couldn't get away from it so I just gave up on web apps entirely.
So much of web tooling seems to aim to perpetuate the conceptual model of the web instead of daring to improve on it. Declarative views, for instance, are a breath of fresh air: instead of trying to put a JS/Python/Ruby coat of paint on the same ideas of what a web app should be, they aimed at trying to reduce a view to its essential complexity.
In a sense, being too in love with the web keeps you at a local maximum because you think web programming should be HTML/CSS/JS only.
IMHO the solution is a smart combination of server- and client-side rendering. Client-side rendering in my experience tends to increase the number of requests, especially if you are using a microfrontend approach. More requests means more latency, unless all requests are running in parallel, which is not always feasible. Note that with regard to latency, the speed of the web did not change much and won't change much anytime soon (at least not on the desktop). With Vue and some other frameworks, the complexity of this approach has become acceptable in the meantime.
> Every bit of interactivity/reactivity that product wants to add to the page feels like a weird hack that doesn't quite belong there, polluting the simple, declarative model that your views started off as.
This doesn’t need to be the case though. There have always been server-side frameworks which generate the client-side parts for you automatically, so you can mostly avoid writing any javascript even for the interactive parts. Check out rails/turbolinks/stimulusjs (used to build basecamp and hey.com) or the TALL stack (increasingly popular in the php community) for modern day examples.
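As a rough sketch of that "sprinkles" style (the controller name and markup are invented, and exact package/attribute names vary between Stimulus versions), the server keeps rendering all the HTML while a tiny controller handles the one interactive bit:

    // The server-rendered markup wires this up, e.g.:
    //   <div data-controller="char-count" data-action="input->char-count#update"> ... </div>
    import { Controller } from "stimulus";

    export default class extends Controller {
      // Runs on every input event declared via data-action in the HTML above.
      update(event: Event) {
        const input = event.target as HTMLTextAreaElement;
        const counter = this.element.querySelector(".remaining");
        if (counter) counter.textContent = String(280 - input.value.length);
      }
    }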
It is unfair to compare the careless jQuery spaghetti that was pretty common some time ago with a client-side framework such as React regarding maintainability. If you put some effort (which I'm convinced is a lot less than the effort we put into building SPAs) into doing the "sprinkles" right, it can be as maintainable as the SPA or even more so, with the added benefit of not sending so much crap to the end user. There are modern approaches to this (Turbolinks, Stimulus, Unpoly, Intercooler, etc...), so there's no need to do manual jQuery manipulation anymore. Pair it with Node, and you still have a pure JavaScript solution without wasting the user's CPU, memory and patience.
The original tradeoff was mega-cap FAANG companies trying to offload processing power to the client. There never was an organic open source push for SPAs or front-end JS frameworks. They add a ton of tech debt and degrade the UX. Premature optimization and an anti-pattern for everyone but a handful of companies, imo.
The old world was having a complex web stack that included strange templating languages hacked onto languages that were sometimes invented before HTML was even a thing (see: Python) that spat out a mix of HTML and JavaScript.
Then there was the fact that state lived on both the client and the server and could (would...) easily get out of sync leading to a crappy user experience, or even lost data.
Oh and web apps of the era were slow. Like, dog slow. However bloated and crappy the reddit app is, the old Slashdot site was slower, even on broadband.
> They add a ton of tech debt and degrade the UX.
They remove a huge portion of the tech stack: no longer do you have a backend managing data, a backend generating HTML+JS, and a front end that is JS.
Does no one remember that JQuery was used IN ADDITION TO server side rendering?
And for what it's worth, modern frameworks like React are not that large. A fully featured, complex SPA with fancy effects, animations, and live DB connections with real-time state updates can weigh in at under a megabyte.
Time to first paint is another concern, but that is a much more complicated issue.
If people want to complain about anything, I'd say complain about ads. The SINGLE 500KB bundle being streamed down from the main page isn't what's taking 5 seconds to load. (And good sites will split the bundle up into parts and prioritize delivering the code that is needed for initial use, so the critical path is however long 100KB takes to transfer nowadays.)
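A sketch of that kind of splitting (module paths and element ids are hypothetical): the initial bundle only contains what the first view needs, and heavier features are pulled in with a dynamic import() the first time the user reaches for them.

    // Bundlers such as webpack or Rollup turn a dynamic import() into a separate
    // chunk that is fetched on demand instead of on first page load.
    async function openEditor() {
      const { mountEditor } = await import("./heavy-editor");
      mountEditor(document.getElementById("editor-root")!);
    }

    document.getElementById("edit-button")?.addEventListener("click", () => {
      void openEditor();
    });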
> Oh and web apps of the era were slow. Like, dog slow. However bloated and crappy the reddit app is, the old Slashdot site was slower, even on broadband.
Just those that attempted to realize every minuscule client-side UI change by performing full page server-side rendering. Which admittedly were quite a few, but by far not all of them.
The better ones were those that struck a good balance between doing stuff on the server and on the client, and those were blazingly fast. This very site, HN, would probably qualify as one of those, albeit a functionally simple example.
SPAs are just a capitulation in the face of the task to strike this balance. That doesn't mean that it is necessarily the wrong path - if the ideal balance for a particular use case would be very client side heavy (think a web image editor application) then the availability of robust SPA frameworks is a godsend.
However, that does not mean it would be a good idea to apply the SPA approach to other cases in which the ideal balance would be to do much more on the server side - which in my opinion applies to most of the "classic" types of websites that we are used to since the early days, like bulletin boards, for example.
> Oh and web apps of the era were slow. Like, dog slow. However bloated and crappy the reddit app is, the old Slashdot site was slower, even on broadband.
Which reddit app are you talking about, the redesign or old.reddit.com? I ask because the old version of reddit itself certainly wasn't slow on the user side; iirc reddit moved to the new SPA because their code on the server side was nigh unmaintainable and slow because of bad practices of the time.
> Time to first paint is another concern, but that is a much more complicated issue.
That's the thing though, with static sites where jQuery is used only on updates to your data, the initial rendering is fast. Browsers are really good at rendering static content, whereas predicting what JS is going to do is really hard.
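Something like this is what that pattern tends to look like (endpoint and ids are made up, and jQuery is assumed to be loaded via a plain script tag): the page arrives as finished HTML, and the script only touches the one element that actually changes.

    declare const $: any; // jQuery global, loaded separately via a <script> tag

    function refreshUnreadCount() {
      $.getJSON("/api/unread-count", (data: { count: number }) => {
        $("#unread-count").text(data.count); // update one element; no page re-render
      });
    }

    // The initial paint was pure server-rendered HTML; poll occasionally for updates.
    setInterval(refreshUnreadCount, 30000);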
The new reddit site on desktop is actually really nice. Once I understood that it is optimized around content consumption I realized how it is an improvement for certain use cases. Previously opening comments opened either a new tab, or navigated away from the current page, which meant when hitting back the user lost their place in the flow of the front page. The new UI fixes that.
Mobile sucks, I use RIF instead, or old.reddit.com if I am roaming internationally and want to read some text only subreddits.
> That's the thing though, with static sites where jQuery is used only on updates to your data, the initial rendering is fast. Browsers are really good at rendering static content, whereas predicting what JS is going to do is really hard.
Depends how static the content is. For a blog post? Sure, the content should be delivered statically and the comments loaded dynamically. Let's ignore how many implementations of that are absolutely horrible (disqus) and presume someone at least tries to do it correctly.
But we're all forgetting how slow server-side rendering was. 10 years ago, before SPAs, sites took forever to load not because of slow connections (I had a 20mbps connection back in 1999; by 2010 I was up to maybe 40, and not much has changed in the last 10 years) but because the server side was slow.
If anything more content (ads, trackers..) is being delivered now in the same amount of time.
New reddit makes it easier to push ads; any other motivation for its implementation is an afterthought. There's plenty of valid criticism that can be levied against the claim that the redesign is "superior" by default. And I think often we confuse amount of information with quality of information exchange. Due (mostly) to the ever increasing amounts of new users that it desires, you could easily make the point that the quality of content on reddit has nosedived. Optimizing for time on site is not the same thing as optimizing for time well spent.
Reddit as a company obviously wants more users; a design that lets people scroll on through images ad nauseam is certainly better than a design that is more information dense, so if that's something you'd cite as an example of "better in certain use cases" then I agree, otherwise there's plenty of reasons to use old.reddit.com from an end user's perspective.
Even if everything you said was true (it's definitely not!) that doesn't explain why the web is bogged down with entirely static content being delivered with beefy JavaScript frameworks.
10 years ago it was static content being delivered by ASPX, JSP and PHP, with a bunch of hacked together JS being delivered to the client for attempts at a rich user experience.
It still sucked. It just sucked differently. I'll admit it was better for the user's battery life, but even the article shows that it was not any faster.
I don't know where this misconception came from - XMLHttpRequest was invented by Microsoft for use in Outlook Web Access, Gmail was essentially a copy of that.
The first web versions of Outlook were plenty fast and usable on my then workstation (PIII 667 MHz w/ 256 meg). In fact, a lot of the web applications made 15 years ago were fast enough for comfortable use on Pentium 4 and G4 CPUs, because most used combinations of server-side rendering and vanilla JS. It was painful to develop, sure, but the tradeoff in place now is severely detrimental to end users.
> I am not sure when there was a new rule passed in software engineering that said that you shall never use server rendering again and that the client is the only device permitted to render any final views.
Maybe it's coming from the schools.
I worked with a pair of fresh-outta-U devs who argued vehemently that all computation and bandwidth should be offloaded onto the client whenever possible, because it's the only way to scale.
When I asked about people on mobile with older devices, they preached that anyone who isn't on the latest model, or the one just preceding it, isn't worth targeting.
The ferocity of their views on this was concerning. They acted like I was trying to get people to telnet into our product, and simply couldn't wrap their brains around the idea of performance.
I left, and the company went out of business a couple of months later. Good.
This narrative has been going hard for the last 6~7 years. For me, it's difficult to pinpoint all the causes of this.
I feel many experienced developers can agree that what you describe ultimately amounts to the death of high quality software engineering, and that we need to seriously start looking at this like a plague that will consume our craft and burn down our most amazing accomplishments.
I think the solution to this problem is two-fold. First, we try to identify who the specific actors are who are pushing this narrative and try to convince them to stop ruining undergrads. Second, we try to develop learning materials or otherwise put a nice shiny coat of paint onto the old idea of servers doing all the hard work.
If accessibility and career potential were built up around the javascript/everything-on-the-client ecosystem, we could probably paint a similar target around everything-on-the-server as well. I think I could make a better argument for it, at least.
I think it's not so much a narrative as the explosion of the tech field in general.
When I started out tinkering with tech in general in the early 2000s, people drawn to the internet were still mostly a smaller core of passionate forerunners, many of whom subscribed to artistic values like simplicity, beauty and the idea of zen. This began to change with web 2.0 circa 2008.
Both my Mac and my PC from 2005 are way snappier than today's OS X or Win10. Not as fast, but with way less latency.
Today tech education is aimed at highly paid and fancy careers for lots of kids with little passion for engineering or design, who never learned to use a desktop because they just had an iPad - they hardly know that you can copy and paste and almost don't know what a website is outside of walled gardens, I kid you not.
This year has had the most students ever start in tech-related fields, and I know from teaching briefly at a university that about 95% have not gone into the field because they love to tinker, but because it's highly paid or a "cool career".
I know there are still the old-school "designers and experimenters" out there, but it's all about signal to noise. Of course size, complexity, high-level modularized abstraction and dependency hell are also an issue, but this probably won't get resolved, as 99% of tech people today don't care and don't remember using a computer that didn't have 500ms of latency when closing a window.
SSDs improve programs' loading times; they do nothing for the latency of the UI of those programs.
When Win95 arrived and brought the desktop as we know it to the masses, it brought with it some latency that has not been reduced since. Subjectively it has increased; it might just be that my patience has gotten shorter, so YMMV.
Yes, SSDs did bring back some of the lost time, and then some, but the programs don't feel as responsive as in "the old times."
I'm not sure what you mean by mainstream, but SSDs were still pretty niche in 2009 and traditional hard drives have remained to be pretty popular until recently due to the lower cost.
It's cargo-culting. Everyone just follows the herd. Once something gets the scarlet letter of being "old" compared to something "new" (both of which are purely perception), it's really hard to ever get back to using the "old" thing (even if its performance is better or it has some other tangible benefit).
I remember back around 1999 reading about a guy who had hand-coded his own http server. It was extremely barebones, no server-side processing, just fed out html pages. And it was fast, faster than anything you could pay money for. And secure; since its functionality was so limited compared to all the other web servers, the attack surface was super small. Fast, safe, reliable.
I've noticed that this incessant trendchasing is largely confined to web development, although it has spread a lot from there (Electron...)
To be blunt, I do not care how "new" something is. Newer is not always better, and change is not always good. Churn is not progress. Perhaps those should be the core values that developers need to be "indoctrinated" with, for lack of a better term.
Then again, I'm also probably much older than the average web developer by at least a decade and a half, and saw lots of silly fads come and go.
I think this is the crux of it. It isn't client side or server side rendering, it's simply bad code. Both approaches can be bad if they're poorly engineered. With modern trends we see front end based rendering more prominently, and with that we see a lot of truly terrible implementations. A lot of this is the ease at which you can include a library that does some task for you, but often at the expense of bloating the payload. Good software engineering is hard. It requires effort. It requires more time. It's the antithesis of agile development; moving slowly but being robust. Most companies or side projects aren't willing to choose to build features more robustly if it means cutting the dev speed in half.
> This narrative has been going hard for the last 6~7 years. For me, it's difficult to pinpoint all the causes of this.
I always thought it came with the serverless meme. All those services used to do that cost money, so the decision was made in a lot of places to put the work on the client. New people to the industry maybe haven't connected the dots and think it's for scale when it's really for costs.
I think that’s certainly influenced this trend. Make the client side do all the hard work and server costs go down. I’m not sure how much that cost is, but I’m guessing it could be substantial.
An advanced website (or web app) is likely going to have a lot of JS on it already. Should most blogs? Probably not, unless they want interactive demonstrations of concepts, or a commenting system more advanced than what HN has. But beyond that, JS is everywhere.
So at some point, there is JS running on the browser, code that has to be architected and maintained.
So then someone proposes adding another language on the server into the mix, one that will generate HTML and deliver the JS.
The dev experience is, likely, not as good. It is more complicated in a myriad of ways, debugging is harder, and the tech stack is more complicated and more fragile.
And the thing is, good SPAs are really good. But the opposite strategy, having every request round-trip the server, is going to suck in certain cases no matter what the developer does. Something as simple as the site being hosted far away from the user, or there being an above-average amount of network latency, is enough to slow down every interaction.
Now, all this said, when I'm overseas and stuck on 256kbit roaming, HN is about the only site I can use.
It probably wasn’t the first, but the first notable SPA I can think of is Gmail. Do you think that architecture was just decided on by some noobs right out of school? Since it was novel at the time, probably not. That leads me to believe (along with my own anecdotes and observations) that there are real benefits to SPAs. Are there trade offs? Of course. But there are benefits too.
The idea of an SPA is only strange in the web dev community. An SPA is equivalent to a native app client - they are long running, stateful processes that communicate with a backend. That architecture has worked for decades, and there are things that you can do as a result of having that stateful process that you simply can’t do with server rendering, like caching response data to share with totally unrelated view components later on in the application. You also get to use a real programming language to design your front end instead of living within a template language, which no one acknowledges as the biggest hack of all time. Template languages exist precisely because HTML is not a sufficient UI tool in all cases.
Gmail is probably the example that I hold up as an SPA gone awry — on a new machine & T1 broadband you can still get _very_ slow loading times to open the site at all.
& what it's serving you in the end doesn't feel like it's heavy lifting in this day & age — a subject line & preview of your first N emails — I think the site could be much more responsive if they had a JS enhanced page (e.g. using it for their predictive text) rather than an SPA.
Gmail has gone downhill massively since it first started. I can read the entire contents of a mail and go back to the inbox, and it still shows the email as unread. Closing the Gmail tab usually comes up with a prompt saying it's busy and am I sure I want to close the tab. I would consider it an example of how not to write an SPA these days.
I made a comment elsewhere about page load time not being the only dimension of performance. Page load time’s importance gets amortized across the length of the whole session. You should always try and minimize it, but in an application like Gmail you’ll be using it for minutes at a time. Each subsequent interaction is now quicker because there doesn’t need to be a page refresh.
Page load time is one dimension. We get really hung up on it.
I agree with your reasoning but not how it applies to Gmail :)
I'm always hazy on the term amortized, but my understanding of it was the opposite — in that, you should try to chunk up as much as possible.
If you are using Gmail for a longer period, having to wait for functionality like advanced search or predictive typing is less annoying as the extra time you need to wait is a smaller portion of your session overall. Plus, these things may have lazy loaded by the time you come to use them.
On the other hand, if you just wanted to reference an email quickly — e.g. I often need to dip into email quickly if I'm travelling and have booking confirmations / QR codes / PNRs etc. in starred messages in the inbox — you'll notice any extra load time so much more.
So to me, it feels preferable to have as small an initial load time as possible, then extra functionality progressively loaded in the background.
I do agree that for something like Netflix, for example, a 'big load up front' is probably preferable. I'm 'in for the long haul' as I'm going to be watching something that's at least 30 minutes, up to a feature-length film, so a few seconds of extra load time is negligible.
When I said “amortized” in this case, I just mean that the cost of the initial page load is spread out throughout the length of the session. Like you said, if you just want to quickly open an email then close the app, the session time will be short and the page load cost will be a large percentage of the session time.
I’m having trouble comprehending some of your sentences (there’s some running on going on). So I don’t understand what you mean by “waiting for advanced search” - is that better or worse in the multi page world? I think it’s worse. With an SPA, all of the UI navigation can happen on the client, which is as quick as possible (no server round trip). The actual searching in an SPA would be via an API and you get a nice spinner in the meantime, again with no page load. All smooth UI transitions which indicate to the user what exactly is happening. No blank page while a new page is computed without knowing what exactly is going on.
Sorry, I was writing while eating lunch earlier, so it no doubt could have been phrased better :)
What I was trying to get at was that there are lots of "optional extras" that seem to get loaded up front at the moment that would be better if they were progressively loaded.
I think you see that a lot with SPAs in a way that you didn't when JS was used to 'progressively enhance' sites rather than for all interactions.
I don't think it's a problem of SPAs by definition — you could engineer an SPA to progressively load what's needed, and make the first load very slimline & then only load additional features while idle or on request, which is why I think GMail is a bad example. For example, I've just tried logging into Gmail in a fresh Firefox window (no cache) that I set to throttle to a "regular 3G" connection speed in the browser. It took literally 30s to load (with one email in the inbox).
Gotcha. React Suspense is trying to be the best of both worlds. It allows you to load code for components on the fly, when they are needed, and not all up front.
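A minimal sketch of that idea (the component and its module path are hypothetical): the code for a rarely used panel is only fetched the first time it renders, so the initial load stays slim.

    import React, { Suspense, lazy } from "react";

    // The "./AdvancedSearch" chunk is only downloaded when the component first renders.
    const AdvancedSearch = lazy(() => import("./AdvancedSearch"));

    function Mailbox({ showSearch }: { showSearch: boolean }) {
      return React.createElement(
        Suspense,
        { fallback: React.createElement("div", null, "Loading search…") },
        showSearch ? React.createElement(AdvancedSearch) : null
      );
    }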
It's funny to think of developers and users as two distinct constituencies with distinct interests. I had thought any good developer, by definition, would do what's best for the user. Therefore the interests of both groups are the same.
But I suppose you're right that the average developer is their own damn person with their own interests which can only ever partly align with their customer's.
I suppose there are actually three parties, all distinctly interested: the programmer, the employer, and the end user. The programmer wants to get paid and do "good work". The employer is actually a multipart entity: management and investors, each of which have distinct interests. The end user is not a single such person but a mass of individuals, each with a different set of interests.
Some benefits are for developers. And some are for the users. For example, many users, particularly ones who care about the look and feel of things, prefer when there is a smooth transition between screens in response to their actions.
You simply can’t do that with server rendering (barring hacks like Turbolinks, which are just approximating SPAs).
I agree, and I'm more critical of heavy front end Javascript frameworks that are part and parcel of SPAs. Things like the Turbolinks hack you mention, IMO, provide the best user experience, especially fast initial page load. But that can be messy to develop, so we end up with Angular or React which smooth the development issues, but make pages fat and slow for users.
Because that’s what everyone else does when saying that the performance characteristics of the two should be evaluated in the same way, e.g. page load time.
Gmail has been a SPA since launch. There were some pre-launch versions that were not, and it has always had a plain HTML fallback mode.
I believe there may have been some major changes to the way it did the client-side rendering a few years after it launched. The original version was more ad hoc, since it was pretty much the first time anyone (at Google, at least) had built a real SPA. Later they built tools and libraries, and developed a more systematic way of doing things. (I was at Google during that time period, but never worked on Gmail.)
While Outlook Web Access (2000) was earlier, Gmail (2004) and Google Maps (2005) were definitely instrumental in establishing AJAX and the SPA pattern (though the term SPA came much later I think).
As a current CompSci student who's done an (introductory) web dev class, may I say I have not been taught this way _at all_. In fact, this notion of offloading stuff to the client never came up at all in the class. That's just my experience.
I think these ferocious views must be coming from the individual - but I do realise not all courses are the same and this student may have actually been (wrongly) taught this way.
Compsci education is more focused on theory. Your prof is more likely to come from old-school corporate dev or 90s startups and probably has jaded views towards much of this new stuff like a lot of the passionate "designers and tinkerers" do nowadays
I'd bet this doctrine is probably being promulgated by the non-academics at trade schools and boot camps
In my class we learnt server and client side JS.
I feel my prof's approach to webdev was quite modern. It was a very practical class and none of the theory we learnt was actually tested.
It is most likely a result of an echo chamber of students with no or little industry experience. It doesn't help that the younger people generally own client devices with more up-to-date specs.
Also, young people like to cling to new industry trends. It makes sense for them - if they tried to become an expert in something that is decades old, they would have to compete with people with decades of experience, while new tech creates a level playing field for both the graduates and the veterans. This might also be one of the reasons why we see such a diarrhea of technologies in our industry - coding is mostly done by young people, many of them not more than a couple of years into their careers. Pushing for new tech pays off for them.
Some, not all but some, new grads hear something from a prof or maybe departmental philosophy, and they will defend it to the death until they see real life for a while.
Unfortunately, the latest models are where the users are who habitually spend money on new stuff and are likely to pay you for anything.
If you're optimizing for money, rather than good use of system resources, it makes perfect sense.
The underlying platform purveyors unfortunately have the same view, which means that anything older than the model before the current one is not supported any more. It probably has an outdated version of the OS. The current OS won't fit. The APIs are changing, and so supporting the old device requires maintaining a separate stream of the code that is backported. Someone has to test it on the old device and OS. And for what? Someone who won't pay.
> When I asked about people on mobile with older devices, they preached that anyone who isn't on the latest model, or the one just preceding it, isn't worth targeting.
Looking at twitter links on my aging iPad is a painful experience these days.
Twitter, where displaying short pieces of text and occasionally images and very occasionally video can require extravagant amounts of processing power that can only be found in the latest hardware and software...
Years ago, it was a common conspiracy theory that the hardware manufacturers were forcing obsolescence --- and to a certain extent they still are --- but now it seems software developers have outdone the hardware manufacturers without any help from the latter...
At 22, you bristle at the thought of someone in their 30s or 40s calling you a child, and why does the entire car rental industry agree with these tired old farts? It's not fair.
By 30, you start to allow that they might have a point, but you abstain from saying anything because you remember how it feels. You smirk (privately) at Sarah in Labyrinth instead of identifying with her. "It's not fair. It's not fair." Brat. By 35 you've lost track of how many times you've resisted the urge to condescend, and you start to lose the war by degrees.
The best lack all conviction, while the worst
Are full of passionate intensity.
I am coming to appreciate Jim Highsmith's position (we aren't solving problems, we are resolving paradoxes, but we refuse to see it).
It's not that they're wrong, or they're right. It's that everybody is wrong (and always will be). That excitement at finding a new strategy (which the old cynics point out is merely new to you) is the hope of escape. It's also the hope of changing the narrative so that everybody is on an even footing. You aren't competing with people who have 10 years experience in this (the only people who do are 50 and a mix of short memories and ageism prevents them from taking over).
If 'progress' looks like taking our foot out of one bucket and putting it into another over and over, we're just going to spiral toward the future forever, which is going to be boring and slow. Probably we need more specializations based on problem domains instead of techniques. My exceedingly vague understanding of the history of medicine is that they didn't make much progress either until they did that, and that a lot of people died until they started getting serious about issues. We are 75 years old as an industry. It's time to talk about kicking out the snake oil and cure-all vendors.
It's coming from the mega cap FANG companies. There never was an organic open source evolution of SPA's or front end JS frameworks because offloading compute to the client is an optimization that only a handful of companies on the planet need.
It very well may be. When I was at CMU (MISM) part of our curriculum discussed distributed systems. Some took it as another tool in the kit, but many came away thinking it was taught to us because it should be how we do things in the field.
You kidding!? SPAs have a significantly worse user experience with high latency; the vast majority of SPAs will just display a blank screen if the latency is too great.
Amen! A person near and dear got let go on some really snarky nonsense etc etc. Really stressful. But guess what? 90 days later they canned 2/3rds of everybody else including a whole bunch of the jerks. Their old office is black; not a soul around. They've got big customer and cash flow probs. When we heard this ... straight to the bar for a couple of celebration rounds. Hee hee hee. Meanwhile the person in question got a way better job.
I mean the logic is pretty straightforward: your servers will never have the computing power of all client devices combined, and the slowest part of any webpage is going to be communication between server & client. If you are writing a web app with such intense JS that people can’t run it, that is an indictment of that app, not of client side rendering in general.
The slowest part of any modern webpage is the part where the client (usually a mobile phone) has to download 1MB+ of JavaScript, parse it, execute it, use it to fetch more code and then display it to the user.
But servers are typically fast at serving up HTML; the server isn't really rendering it, only generating it. Very fast these days, particularly with proxies and server caching.
While SPAs are somewhat inefficient, I'm convinced that the unnecessary bloat is mostly related to
A) advertising and/or tracking
B) improperly compressed or sized assets
C) unnecessary embedded widgets like tweets or videos
D) insane things like embedding files into CSS using a data URI and therefore blocking rendering
E) nobody using prefetching properly (see the sketch below)
These are very loose figures for my computer and internet connection, but a small server-side rendered site is in the 10s of KB, and loads within 2 or 3 seconds. A small client-side rendered site is in the 100s of KB, maybe a little more or a little less, and takes 4 or 5 seconds. The sites that I really hate are in the 5 MB+ range and don't load for anywhere up to 10 or even 20 seconds, which goes above and beyond the bloat caused by client-side rendering.
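On point E, prefetching can be as cheap as a resource hint. A rough sketch (the URL is made up; the same hint can also be written straight into the markup as a link tag with rel="prefetch"):

    // Ask the browser to fetch a likely-next resource during idle time
    const hint = document.createElement('link');
    hint.rel = 'prefetch';
    hint.href = '/articles/next-page.html'; // hypothetical URL
    document.head.appendChild(hint);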
Yep, but that only takes you into the range of 100s of KBs. To get above 1 MB is possible, but at some point before you hit a 5 MB initial page load it becomes the fault of things other than the technology you're using.
EDIT: For reference, https://www.tmz.com/ with no ad blocker is 19.22 MB (7.51 compressed) and takes 19.99 seconds to load, and they only use jQuery. I don't think the underlying technology is the problem for them.
I don't think the problem is client rendering, especially when you consider latency. Sending views over a network, especially a high-latency, low-reliability one like a cell network, isn't going to beat the performance of doing UI rendering on-device.
Same goes with storage. What's faster: a readdir(3) call that hits an SSD to see how many mp3s you have downloaded, or traversing a morass of cell towers and backbone links in order to iterate over a list you fetch from a distributed data store running in AWS? It's the readdir(3) call.
Giant bundles of unnecessary JS are also bad for performance, but there's a reason why when we had more limited computing resources, we didn't try to make every screen an HTML document that needed a roundtrip to some distant server to do anything. Computing happened on your computer. That's also why native apps on smartphones exist: Apple tried to make everything websites with the first iPhone, it was unbearably slow, and so they pivoted to native apps.
Plain old documents are best as HTML and CSS. Highly-interactive UI isn't.
You know what? I don’t think the trend for client side rendering is the problem. It seems logical. The problem is the hijacking of client side development by frameworks like React that produce a 1 Gig bundle of JS soup and dependencies rolled into a memory hogging ball, when all you need is 50 kilobytes of basic vanilla JavaScript that would download into the client before you can say “chrome has run out of memory”
Is that really the central problem though? Or is it that there is so much cruft?
Most every time I try to load a web page my sole aim is accessing a little plain text and perhaps a picture or two if clearly relevant and illustrative. But (if it weren't for ublock or the like), for the few KB of the content I actually want I have to wade through irrelevant stock photos, autoplay videos, innumerable placements serving promotions/ads/clickbait, overlays, demands for entering my email address or logging in, social media icons and banners - and that's to say nothing of the stuff I don't see, the trackers and the scripts. Surfing the web like this is frankly a strain, one that we've accepted as normal because everyone does it.
If we serve cruft faster, certainly that will improve speeds, but those gains might simply motivate the powers that be to add more cruft so - just as the case with network speeds - we'll end where we started. We need to be radical and tear web pages down rather than merely focus on serving them faster through technical means.
It seems like a lot of websites could replace react/angular/framework with some simple jquery and html but that's not cool and not good on your resume. So now we have ui frameworks, custom css engines with a server side build pipeline deploying to docker just for a photo gallery.
Difficult for the newskool kiddos fresh out of bootcamp who can't wrap their heads around complexity, maybe.
In my experience it's the frontend frameworks that make for worse code. Callback soup is downright simple and easy-to-follow compared to some of the atrocities that React et al have wrought. Worse, they repackage old tech as "new" while completely reworking the vocabulary and paradigms, so old hats have to relearn shit they already know because some snot-nosed Facebook engineer needs another resume badge for his inevitable 18-month departure
Plain javascript, if you want to go down that route. Jquery is around 30kb of javascript, if I remember correctly.
> but that's not cool and not good on your resume
On the contrary, writing vanilla js is pretty cool and impressive on the resume, when every other developer puts react there; but it's pretty miserable too, compared to using frameworks.
Yes, it's basically a shortcut when I am doing a quick first scan through resumes. The moment I see one of those keywords I will flag it for 2nd pass review and move on.
As long as it makes sense when you read it after a few days (i.e. it's well organized) and it also works as expected, then you are probably doing well.
Javascript is actually incredibly fast on most devices, especially if you are constraining yourself to the vanilla API and not something like jQuery. Remember, every web browser's JS engine has been hyper-optimized to support decades of half-assed website implementations.
It's really hard to screw up perf on document.getElementById(). There's honestly not a whole lot of ways to hang yourself with the vanilla methods unless you are trying to build something ridiculous like a raytracer or physics engine.
I think the biggest thing for your average app is just causing a lot of layout repaints. As long as you spam requestAnimationFrame everywhere and group style changes and polling, you'll be good. I was really surprised how good I was able to get things on even an ancient iPad 2 I had lying around.
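A rough sketch of the "group style changes" idea, using nothing beyond the DOM API (box stands in for some element you already hold a reference to):

    // Collect style writes and flush them in a single animation frame,
    // so many updates cost one layout/paint instead of several
    const pendingWrites = [];
    let frameScheduled = false;

    function queueWrite(write) {
      pendingWrites.push(write);
      if (!frameScheduled) {
        frameScheduled = true;
        requestAnimationFrame(() => {
          pendingWrites.forEach((w) => w());
          pendingWrites.length = 0;
          frameScheduled = false;
        });
      }
    }

    // Usage: both writes land in the same frame
    queueWrite(() => { box.style.transform = 'translateX(10px)'; }); // box is a hypothetical element
    queueWrite(() => { box.style.opacity = '0.5'; });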
Vanilla JS is pretty cool. Eleventy is even cooler, it gets you most of the things a framework gets you, but gets compiled to fast, vanilla JS and that's what's served to the browser.
Svelte/Sapper also has a lot of potential as it only ships the parts of the framework that are absolutely needed instead of the whole thing.
But in reality, you can make plenty of very fast React sites and plenty of slow vanilla JS sites.
Any modern JS framework gets compiled to vanilla JS. That's exactly the problem: because browsers don't implement ES6+ syntax natively, it has to get compiled to complicated, long-winded ES5 or padded with heavy polyfills. If browsers were all spec compliant, code bundles would be way smaller.
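Roughly what that looks like in practice, as a simplified sketch (real transpiler output is usually more verbose, and runtime polyfills for things like Promise come on top of the syntax rewrite):

    // As authored, targeting modern browsers:
    const greet = (name = 'world') => `hello ${name}`;

    // Roughly what an ES5 target forces the bundler to ship for the same function
    // (renamed here only so both versions can sit in one snippet):
    var greetES5 = function greetES5(name) {
      if (name === undefined) { name = 'world'; }
      return 'hello ' + name;
    };

    console.log(greet(), greetES5('HN')); // "hello world" "hello HN"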
This almost certainly means Internet Explorer, if they’re only referring to ES6. But honestly I love all the features since then. It’s actually difficult for me to imagine not using TypeScript these days, let alone not having access to async/await or async iterables, plus all the tiny little syntax improvements it’s easy to take for granted like Array.includes()
It's trendy to take a shot at new things. You need to build a solution to the problem - sometimes HTML can get this done and sometimes it can't. I agree that we've gone off the deep end of "everything must be react" but it's silly to say this is simply to drive resumes. It's mostly under-experienced folks using react as their hammer to deal with any nail, screw or bolt they come across - but that hammer is useful and when you've got a nail you should use it.
I think this is coming from the same direction as desktop computing.
Try to install Windows 7 on a brand new machine (hopefully you will get the drivers). Regardless of all the new "improvements" in Windows 10, it will fly.
What we did with the hw performance increase is just staggering. Instead of having software that works much faster, we ate the performance for the sake of cheaper development - filling software with lasagnas of huge libraries that in most cases are not needed, employing incompetent developers that know how to code but are clueless about the computer/os/browser they are running their code on, not optimizing anything,...
Suboptimal technologies, suboptimal languages (to make the developers less expensive for the companies), lack of knowledge. It stacks up: today's webpage is easily a few megabytes, for a kb of text, due to a huge list of third-party dependencies that are not really needed, but are there for minor details on the web page. It is just crazy and far worse than when server side rendering was "THE thing".
What specific part of windows 10 actually makes it run like ass? I really don't understand what the situation is. Are users being punished for the sake of being able to hook everything with ad revenue?
I have noticed that on Windows Server 2019 when I remove Windows Defender, explorer.exe seems to get 10x snappier (start menu appears more quickly, etc) but it still feels like something isn't quite right.
Static things should be generated server-side (even if those static things are dynamically-generated), and things that change on the page after load, interactively or by timers, should be rendered browser (client) side.
Client side rendering has become popular because it reduces server load... but unfortunately increases processing time in the browser, which can slow things down for users.
There are solutions (like Gatsby, which is its own layer of complexity) and cheats and workarounds, but the standard should be that if a page will contain the same information for a certain state on an initial load for that page, that content should be generated server-side. Anything that can't be, or is dependent on browser spec or user interaction, should be client-side.
I just don't believe in making the user process a bunch of repetitive static stuff that can be cached on the browser from the server or compressed before sending. There's gotta be more consideration of user experience over server minimization.
This sounds a lot like the old argument of developers not being careful about how much memory/cpu they use. Engineers have been complaining about this since the 70s!
As hardware improves, developers realize that computing time is way cheaper than developer time.
Users have a certain latency that they accept. As long as the developer doesn't exceed that threshold, optimizing for dev time usually pays off.
Server-side is no panacea. I started paying attention recently, and WordPress-based sites frequently take well over a second to return the HTML for pages that are essentially static—and that's considered acceptably fast by many people running WP-based sites. Slow WP sites are even worse.
Wordpress is not static at all. It supports commenting and loads comments by default, it shows related articles dynamically depending on categories and views, it displays different content to different visitors, resizes and compresses pictures on the fly, etc... and a thousand more things if you are a logged in user, it really is dynamic.
It's actually pretty good considering what it does (if you don't set up a ton of plugins or ads). There can be 50 requests per page but that's because of all the pictures and thumbnails. The page can render and be interactive almost immediately, pictures load later.
When I consider the "server-side" argument, I think of it in apples-to-apples comparison: custom code that is either server-side or client-side rendered. Wordpress on the other hand is a packaged application, typically used with other packaged plugins and themes. Moreover, many Wordpress sites are run on anemic shared hosting. Custom applications can as well, but I feel that's far less likely.
Everything can be done poorly. The problem with wordpress is it allows a plethora of plugins and hooks that allow for lego-style webapp construction. This is not going to result in cohesive, performant experiences.
If you purpose build a server-side application to replace the functionality of any specific wordpress site in something like C#/Go/Rust, you will probably find that it performs substantially better in every way.
This is more of a testament to the value of custom software vs low/no-code software than it is to the deficits or benefits of any specific architectural ideology.
"If you purpose build a server-side application to replace the functionality of any specific wordpress site in something like C#/Go/Rust, you will probably find that it performs substantially better in every way."
You'd find the exact same thing for a Python or Node site, too.
I mean I just opened a Twitter profile and counted 10 full seconds before the actual tweets loaded. 1-2 seconds is pretty blazing by comparison. How much faster do you need to be than the most popular social media site?
If you use a plugin like WP2Static to render actual static pages, you get far better performance. If you have a well-designed theme that, e.g., doesn't have render-blocking JS, you should have seemingly instant load times.
Your first point may be true, but it completely ignores the reality of a vast number of WordPress sites that don't use a plugin to generate a separate, static, site.
As to your second point, what do you imagine that "well-designed theme[s]" have to do with sites taking well over a second to start returning HTML?
> Your first point may be true, but it completely ignores the reality of a vast number of WordPress sites that don't use a plugin to generate a separate, static, site.
For the record, I would never run WordPress as non-static unless I had no other option. I'm not defending WordPress in any way. I just didn't see the need to mention its flaws because the parent comment already had.
I would personally prefer not to use WordPress at all, but it is the de facto standard for marketing websites, and marketers don't know or care about the performance and security nightmare that is standard WordPress. Since that is the reality, I felt that it was helpful to let people know how to deal with it constructively instead of just deploying insecure, slow websites.
> As to your second point, what do you imagine that "well-designed theme[s]" have to do with sites taking well over a second to start returning HTML?
That was stated in the context of static websites. If you're running non-static WP, you're just fucked.
Alpine has replaced situations where I would previously use vanilla JS or jQuery (i.e. simple UI interactivity, but Vue would be overkill), but is far nicer to use.
LiveWire is perfect for things like data tables—it's not really interactive per se, but a full page refresh to change filtering or sorting sucks, and implementing it as a purely JavaScript component makes it harder to use all the cool Laravel stuff I have on the backend. With LiveWire I can just pass in the path to a Blade partial to use as the table row template, and use all the back-end stuff I like.
That just leaves the complex, high interactivity stuff, which I continue to use Vue for.
LiveWire is missing a couple of features that are stopping me from using it in production (namely the ability to apply different middleware to different components), but V2 is out soon so hopefully that will include it. If not I'll probably look at contributing it myself.
> We’ve reached peak complexity with SPA. The pendulum will swing a bit back to things on the server. But it will be a new take — a hybrid approach with different tradeoffs than before. Obviously I’m thinking React will be a part of that wave.
Combine that with NextJS's new features in server side rendering, and I think we are going back to that. My React site is server side rendered.
Users say they want this, but then people get upset when list filters don't automatically update the listing when you click them, or when you have a set of cascaded drilldown dropdowns and the first doesn't automatically filter the second, or when inapplicable widgets are visible that should be hidden when inapplicable.
Okay, but why does that require a supercomputer to do with acceptable latency?
We had UIs of that complexity on DOS, and they were far more responsive. Modern eye candy has its cost, yes, but that doesn't explain most of the difference.
Do you have any data to prove that server-side rendering will lead to "faster" websites? The article definitely doesn't provide any.
Server-side rendering means the web page will be blank until the server responds. If a majority of the heavy lifting is done on the server, you increase the opportunity of slower server response times. That's a worse UX than a web page gradually loading on the client-side.
Things like CSS cannot be rendered on the server, yet CSS is often a bottleneck to rendering. Same goes for images and fonts. Where's the data showing "client storage" and "client compute" are the culprits of slow websites?
The server has more knowledge than the client. Once economies of scale kick in, you can start to do things like speculative rendering of pages for users based upon prior access patterns, time of day, region, preferences, etc.
For instance, you could have a pre-render rule that says to trip if there's an 80% chance the user is just going to proceed to checkout and not back to the store based upon the type of product in the cart. This would mean that while the user is reviewing their shopping cart & options, the server could be generating the next view. Once the client hits "Proceed to Payment", the server (or CDN) can instantly provide the cached response from memory. This basically takes UX latency down to RTT between client & server if you have a very predictable application or are willing to speculate on a large number of possibilities at once.
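As a purely hypothetical sketch of that flow (predictNextView and renderView are stand-ins for whatever prediction model and template layer you actually have, not a real API):

    // Speculatively render the likely next view on the server and cache it
    const prerenderCache = new Map();

    async function onCartViewed(user, cart) {
      // e.g. { view: 'checkout', probability: 0.85 }, derived from access patterns
      const { view, probability } = predictNextView(user, cart); // hypothetical helper
      if (probability >= 0.8) {
        prerenderCache.set(user.id + ':' + view, await renderView(view, { user, cart })); // hypothetical helper
      }
    }

    async function handleRequest(user, view) {
      const cached = prerenderCache.get(user.id + ':' + view);
      return cached !== undefined ? cached : renderView(view, { user });
    }

The hard parts are cache invalidation and the memory cost of speculating, which is why this only really pays off at scale.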
Turbolinks and the like just move the problem elsewhere. Great, now your site is no longer blank on subsequent loads but your first input delay has jumped by a magnitude since the server still takes forever to respond. There's still a delay regardless.
Also Turbolinks only becomes useful after the page has loaded. So every fresh visitor is still going to see that horrible flash of blankness, wondering if the site is broke.
That isn't to say it's worse than just default server-side rendering: I think it provides a better UX. But who knows how much, and who knows if it's better than a SPA. Nobody is citing any real data here, just talking out their ass.
There’s no arguing against server rendering being simpler - it objectively consists of fewer components. But to say that it is more performant by nature? You can’t actually argue that. There are plenty of performance downsides to rendering a full page in response to every user action. Don’t forget, an SPA can fetch a very small amount of data and re-render a very small part of the page in response to a user action. There is a noticeable performance benefit to doing that.
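For concreteness, the kind of round trip meant here is something like this sketch (the endpoint and element id are made up):

    // Fetch a tiny JSON payload and patch one node instead of re-rendering the page
    async function refreshCartCount() {
      const res = await fetch('/api/cart/summary');                   // hypothetical endpoint
      const { itemCount } = await res.json();
      document.getElementById('cart-count').textContent = itemCount;  // hypothetical element
    }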
Performance doesn’t only boil down to the first page load. Hopefully your application sessions are long, and the longer page load time gets amortized across the session if interactions are performant after that.
Note, I primarily work on enterprise apps where the sessions are long, and the workflows are complicated. Of course page load time matters much more for a static site or a blog / content site.
But to claim that SPAs are all cost with no benefit is just disingenuous. Of course they have their own set of trade offs. But there is a reason people use them, and it’s not some conspiracy fueled by uneducated people. Server rendering isn’t some objective moral higher ground.
You could hire competent developers who know how these technologies actually work. Server side rendering is better but still not ideal, because the incompetence is merely reduced from the load event to just later user interactions. The performance penalty associated with JavaScript could be removed almost entirely by supplying more performant JavaScript regardless of where the page is rendered.
To me, client-side rendering feels like an end-run around incompetent full-stack devs who don't know how to make server-side rendering fast. So why not throw a big blob of JS at the user (where their Core i7 machine and 16GB of RAM will munch through it), and on the backend, the requests go straight to the API team's tier (who know how to make APIs fast).
There are other advantages to server-side beyond the specific professionals involved in the implementation.
Server rendered web applications are arguably easier to understand and debug as well. With something on the more extreme side of the house like Blazor, virtually 100% of the stack traces your users generate are directly actionable without having to dig through any javascript libraries or separate buckets of client state. You can directly breakpoint all client interactions and review relevant in-scope state to determine what is happening at all levels.
One could argue that this type of development experience would make it a lot easier to hire any arbitrary developer and make them productive on the product in a short amount of time. If you have to spend 2 weeks just explaining how your particular interpretation of React works relative to the rest of your contraption, I suspect you won't see the same kind of productivity gains.
This is completely subjective, but if you want reduced maintenance expenses then don’t rely on any third party library to do your job for you regardless of which side of the HTTP call it occurs. Most developers don’t use this nonsense to save time or reduce expenses. They use it because they cannot deliver without it regardless of the expenses. The “win” in that case is that developers are more easily interchangeable pieces with less reliance upon people who can directly read the code.
I am not aware of any such rule, given that I keep coding server rendering using Java and .NET stacks since ever.
The rule to pay attention to is not to follow the fashion industry of people wanting to sell books, conference talks, and trainings, and instead to adopt a wait-and-see attitude.
If you wait long enough then you are back at the beginning of the circle, e.g CORBA/DCOM => gRPC.
"With server-side (or just static HTML if possible), there is so much potential to amaze your users with performance."
Actually I am amazing my users with C++ data servers and all rendering done by JS in the browsers. What I do not do is hook up those monstrous frameworks. My client side is pure JS. It is small and the response feels instant.
Maybe? Only if the business is actually deploying the server-side to folks. Using C++ to run data-servers is a choice you can make - one that I'd be a bit wary of since if C++ has a glaring weakness it's everything having to do with strings and I/O[1] which is going to be a big component of what you're writing.
1. C++ can do these things, and can do them quite performantly - but it takes an amount of effort far exceeding doing the same thing in say, Go or Java.
>"C++ can do these things, and can do them quite performantly - but it takes an amount of effort far exceeding doing the same thing in say, Go or Java."
Not my observation, writing business servers in modern C++ using some libraries is a piece of cake. I do not have any problems with I/O and strings either.
The lack of native marshalling and unmarshalling approaches outside of the style perpetuated by sprintf, and no support for any on-the-go string variable injection or templating without pulling in libraries.
This is a weakness that can be overcome, but it's a weakness.
I don't see having these things as separate libraries to be a weakness - this way they can evolve independently from the language and can be much more specialized.
The irony for me is that I often see very small applications that use a long list of technologies... and end up taking much longer to build (and load on a client browser) than a server-side rendered application would have.
To be sure, if you can accurately roadmap an application such that you can see how it's going to grow and expand across teams, then you can see where it makes sense to use frameworks to build areas, navigation, components, etc. and then be able to distribute work across teams.
But often very small applications with very small teams are built in a way that is unnecessarily complex, and the expected (later) payoff never arrives.
Well some applications must be able to run on IE 4.0. Granted I do not cover such cases. But I do not really care. So far my clients (from across the world mind you) do not have such requirements, hence it is not my problem. What I do have instead is stellar scalability and performance.
“Why should I care about future generations – what have they ever done for me?” Groucho Marx
Sure, off-loading the work onto the client doesn't help speed.
But Groucho would say now, "Why should I care about the client? What has the client ever done for me?"
Sure, the web pages don't load any faster 'cause they're now running a cr*p load of javascript. And that javascript is running more and more annoying ads. And that's because ads support most websites and there's a finite ad budget in the world and that budget is naturally attracted to the most invasive ads available.
I often consult d20pfsrd.com, a site that hosts the open gaming license rules to the Pathfinder rule system (D&D fork/spin-off). The information itself is just static text and once was, apparently, supported by text ads. But now, naturally, it serves awful video ads as well. I would strongly suspect the site isn't getting more money for this, it's just that now that advertisers can run this stream of garbage, advertisers must run this stream of garbage.
I think that when the UI (in general) was starting to go server side rendered (again) people started to find ways to make it client rendered for speed (again). In fact I can imagine the guys at google building the first gmail said "It doesn't have to be this way."
admin.google.com is a great example of unnecessary, over-engineered and almost comically bad client side rendering.
First off it's painfully slow. Then you go to manage users. There's a list of users; so far so good. Then you try to add a user. First there's a loading(?!) indicator. Then the add user dialog shows up. You fill in the form and add a user. The dialog closes and the list of users does not refresh. You don't see the user you just created. It shows up only after you reload the page. How does something like that even happen?
This is coming from someone who built an entire server-side rendering framework with PHP and then added Node.js for sockets and other realtime stuff...
Client first apps are the future.
Look, this is what I was able to get with a mix of clientside and serverside... does it load fast?
The document is loaded from the server and then the client comes and fills in the rest. That first request can preload a bunch of data, to be sure. But then it can’t be cached.
Please read THIS as to the myriad reasons why client-first is better:
> Look, this is what I was able to get with a mix of clientside and serverside... does it load fast?
That's a resounding 'no' from me: [0]
It takes over a full minute to finish loading the page. As to when the titles for the calendar events, the interesting part, first appear, that's about the 30sec mark.
For comparison, this page I'm writing from loaded in 216ms.
Under 3 seconds here on an old $999-when-new Windows 10 laptop.
I have no idea what you are running on that makes it 30 seconds. The site was quite fast. Scrolling is a bit abrupt, it should probably pre-load more aggressively, but other than that the site works really well.
FWIW the page loaded relatively fast for me but the images then loaded in very slowly (both the backgrounds and avatar icons).
I was curious (not picking on you, and I'm hardly an expert) so threw it at gtmetrix and you can see the same (click on Waterfall; the suggestions on the main PageSpeed tab seem pointless).
Well they seem to give an A on almost all metrics except small images, they claim we could save 50 KB overall LOL. And the reason they are wrong is that retina displays have 2x density per logical pixel.
The suggestion to use a CDN and set up HTTP Caching is a good one. As well as minimizing Javascript. My point was specifically to illustrate how fast an image-heavy page can be without it. It lazyloads images on demand, batches requests and does many other things to speed up rendering.
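Lazy loading doesn't need a framework either; a rough sketch with IntersectionObserver (the data-src convention is just one common pattern, and newer browsers also honor loading="lazy" on img tags):

    // Swap in the real image source only when the placeholder scrolls near the viewport
    const io = new IntersectionObserver((entries) => {
      entries.forEach((entry) => {
        if (!entry.isIntersecting) return;
        const img = entry.target;
        img.src = img.dataset.src; // real URL stored in a data-src attribute
        io.unobserve(img);
      });
    });

    document.querySelectorAll('img[data-src]').forEach((img) => io.observe(img));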
I just rewrote my personal website ( https://anonyfox.com ) to become statically generated (zola, runs via GitHub Actions) so the result is just plain and speedy HTML. I even used a minimal classless "css framework" and on top I am hosting everything via Cloudflare Workers Sites, so visitors should get served right from CDN edge locations. No JS or tracking included.
As snappy as I could imagine, and I hope that this will make a perceived difference for visitors.
While average internet speed might increase, I still saw plenty of people browsing websites primarily on their phone, with bad cellular connections indoor or via a shared WiFi spot, and it was painful to watch. Hence, my rewrite (still ongoing).
Do fellow HNers also feel the "need for speed" nowadays?
That's fantastic - as near to instantaneous as you need, and it's actually slightly odd having a page load as quickly as yours does; we've become programmed to wait, despite all the progress that's happened in hardware and connectivity. The only slightly slow thing was the screenshots on the portfolio page as the images aren't the native resolution they're being displayed at.
Does the minification of the css make a big difference? I just took a look at it using an unminifier, and it was a nice change to see CSS that I feel I actually understand straight away, rather than thousands of lines of impenetrable sub-sub-subclasses.
Maybe it's me, but I originally learned that the concern of CSS is to make a document look pretty. Not magic CSS classes or inline styles (or both, this bugs me on tailwind), so the recent "shift" towards "classless css" is very appealing.
Sidenote: Yes, the screenshots could be way smaller, but originally I had them full-width instead of the current thumbnail, and still thinking about how to present this as lean as possible. Thanks for the feedback, though!
I use the picture tag with a bunch of media queries to deliver optimized images for each resolution in websites that I build. Resizing a 1080p image to only 200px width does wonders for mobile performance while keeping it perfect for full HD monitors.
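A generic sketch of that markup (breakpoints and file names are made up):

    <!-- Small image for phones, medium for laptops, full-size as the fallback -->
    <picture>
      <source media="(max-width: 600px)" srcset="/img/hero-200.jpg">
      <source media="(max-width: 1200px)" srcset="/img/hero-800.jpg">
      <img src="/img/hero-1920.jpg" alt="Project screenshot" width="1920" height="1080">
    </picture>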
Since Zola has an image resizing feature and shortcode snippets, this could be a nice way to automate things away (I'd hate to slice pictures for X sizes by hand).
Your pages are excellent compared to most offerings of similar information density seen elsewhere.
But there's always room for experimentation.
How about preserving a copy of your portfolio page now (and the PNG files it's now using) and giving it an address like /portfolioOLD?
Then using an image editor, ruthlessly resize/resample-at-lower-bit-depth one of your PNGs so its actual rectangular pixel dimensions are about the same size it appears at on a full-size monitor now.
Then ruthlessly compress it until it looks just a little less high-quality than it does now. Just a little bit, you want to be able to tell the difference but you don't want other people to notice. These are just thumbnails anyway.
Use these editor settings on the rest of the PNGs, renaming them accordingly as you go.
Deploy the new portfolio page linking to the resized renamed thumbnails instead.
Just guessing, but I expect it can bring the load time down to about 10 percent of the old portfolio.
And it would be really easy for anyone to A/B test and get representative numbers.
Thank you for using sakura.css, really appreciate it and glad you enjoyed using it! ^_^
On the other hand, I really enjoy working with tailwind. Having html and css "together" works really well with my mental model, and I can iterate very fast with it.
Though setting up tailwind is a bit of a pain, and I still use sakura + good old css everywhere I possibly can.
Very impressive. One cool thing you can further do to improve perceived speed potentially at the expense of some bandwidth is to begin to preload pages when a link is hovered. There are a couple of libraries that will do this for you.
It can shave 100 - 200 ms off the perceived load time, and since your site is already near or below that threshold it might end up feeling like you showed the page before anyone even asked for it.
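For anyone curious what those libraries roughly do under the hood, a deliberately simplistic sketch that only prefetches same-origin links, once each:

    // When a link is hovered, ask the browser to prefetch its target
    document.addEventListener('mouseover', (event) => {
      if (!(event.target instanceof Element)) return;
      const link = event.target.closest('a[href^="/"]');
      if (!link || link.dataset.prefetched) return;
      const hint = document.createElement('link');
      hint.rel = 'prefetch';
      hint.href = link.href;
      document.head.appendChild(hint);
      link.dataset.prefetched = 'true';
    });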
I have done the same with Hugo on my blog[0], but actually had to fork an existing theme to remove what I would call bloat.[1]
The interesting thing for me is, while I personally certainly feel the "need for speed" and appreciate pages like yours (nothing blocked, only ~300kb), most people do not. Long loading times, invasive trackers, jumping pages (lazily loading scripts and images), loading fonts from spyware-CDNs - are only things "nerds" like us care about.
The nicest comment on my design I heard was "Well, looks like a developer came up with that" :)
Even for most businesses it should be the norm. When you think about it, most businesses have almost no actual dynamic content on their website - other than any login/interactivity features, they might change at most a few times a day...
The businesses with no dynamic content also tend to be the ones who rent a wordpress dev who just finds a bunch of premade plugins and drop 50 script tags in the header for analytics and other random crap.
Oh, wow. I have no idea, there is not much content yet, and zero external dependencies... maybe its the "anon" in the name? I mean, I even bought a Dotcom domain to look ok-ish despite my nickname :/
That's very cool. Nice little project to speed the site. One data point. A cold loading takes about 2.2 seconds; subsequent loads take about 500ms, from a cafe in the Bay Area using a shared wifi.
The cold loading stats:
Load Time 2.20 s
Domain Lookup 2 ms
Connect 1.13 s
Wait for Response 68 ms
DOM Processing 743 ms
Parse 493 ms
DOMContentLoaded Event 11 ms
Wait for Sub Resources 239 ms
Load Event 1 ms
Edit: BTW, the speed is very good. I've tried similar simple websites and got similar result. Facebook login page takes 13.5 seconds.
My guess? It's doing streaming parsing/processing, so it's network bound.
It started downloading html, once it got the first byte it started processing it, but then it had to wait for the rest of the bytes (not to mention the css file to download).
The parent comment is clearly using some really slow wifi, so I think it's likely that's what happened.
FWIW, I re-run the test at home. Cold load is about 400ms; repeated loads are about 240ms.
Cold load stats:
Load Time 409 ms
Domain Lookup 37 ms
Connect 135 ms
Wait for Response 40 ms
DOM Processing 165 ms
Parse 123 ms
DOMContentLoaded Event 8 ms
Wait for Sub Resources 34 ms
The page is 1.03KB of HTML and ~1.5KB of CSS. The HTML has about a dozen lines of Javascript in the footer that, at a glance, seemed only to execute onclick to do something with the menu. I'm pretty sure a 166Mhz (with an M) Pentium could process 1.03KB of HTML and render the page in under 700ish ms, so I agree that that seems oddly slow for any modern device, unless they're browsing on a mid-range Arduino.
Since this runs solely on Cloudflare Workers Sites itself (no server behind it), it would be quite funny if HN hugging the site had any measurable effect :D
I have a similar setup for my personal site, although it's still a work in progress. I've really been interested in JAMstack methods lately. I build the static site with Eleventy, and have a script to pull in blog posts from my Ghost site. Too bad I haven't really written any blog posts though, maybe one day :) Anyhow, I really like Cloudflare Workers, would recommend!
There is no support for comments in the blog and no pictures at all. No images, no thumbnails, no banner, no logo, no favicon.
Also, no share button. No top/recommended articles. No view counter.
Once you start adding media it will be quite a bit slower. Once you start implementing basic features expected by users (comments and related articles for a blog) it's gonna be yet again slower.
I remember when my first article went viral out of the blue; I think I have to thank the (useless) share buttons for that. Then it did 1TB of network traffic over the next days, largely due to a pair of GIFs. That's how bad pictures can be.
> no banner, no logo, no favicon...Also, no share button. No top/recommended articles. No view counter.
All of which I can live without.
Still the best way of sharing content on the web is via a url, which is handily provided, so most of these aren't even needed. As for recommended and view counts, these don't inherently add a lot of value to users. If anything, it's a nice change to have a page that doesn't try and infer my desires for once.
I agree that the comparison is poor - there are businesses where those media components are required. But an issue with the modern web is that everything has all those components - nobody[1] cried over the lack of a "Share to Facebook" button on CNN. So while it's inaccurate to say that stripping out all those components would solve the problem, since some of them are part of the business requirements, chances are a lot of them aren't. Maybe you don't still need that "Share to Digg" button, or maybe, as a news site, you don't need a comments section - I think it's a mix of both. Websites are being written unreasonably burdened with unnecessary features, and those features are usually implemented with out-of-the-box, poorly performing JS.
(As an aside - nobody has ever derived value out of a page counter except the owner of the site - who could just look it up in the logs. This isn't really an argument against anything you mentioned but I found it amusing it was one of the things you brought up)
1. Mostly nobody - sure there were some folks, but then again I'd wager a significant portion of those folks were just loud voices echoing from the marketing department.
My own blog is statically generated too. I don’t have most of these either, because as a user I barely care about any of them or even actively dislike them.
Seems mostly good to me after cloudflare caches it, but you have made one annoying mistake: you forgot to set the height of the image, so it results in content shift. Other than that, it's great! :)
"Do fellow HNers also feel the need for speed nowadays?"
I stopped using graphical browsers many years ago. I use a text-only browser and a variety of non-browser, open source software as user-agents. Some programs I had to write myself because AFAIK they did not exist.
The only speed variations I can detect with human senses are associated with the server's response, not the browser/user-agent or the contents of the page. Most websites use the same server software and more or less the same "default" configurations so noticeable speed variations are rare in my UX.
Yes, a lot of people are browsing in less-than-ideal conditions. Many apps fall on their face when you try to use them on a German train with spotty reception.
Very interested in how you used Zola. The moment I wanted to customize title bars and side bars, I was basically on my own. Back then I didn't have the desire (or expertise) to reverse-engineer it.
Have you found it easy to customize, or you went with the flow without getting too fancy?
That's fantastic, all _static_ sites need to have this rendering speed, but unfortunately static content is applicable to a very narrow niche. Most sites have to provide dynamic content to a certain degree, and this is where it becomes incredibly slow
The main culprit, imo, is javascript. People/clients want more and more complex things, but javascript and its libraries are what drag everything down. Image compression, minification... it helps, but if the page needs a lot of JS, it's going to be slower.
Slightly off topic, but I have a site that fully loads in ~2 seconds but Google's new "Page Speed Insights" (which is tied to webmaster tools now) give it a lower score than a page that takes literally 45 seconds to fully load. Please someone at Google explain this to me. At least GTMetrix/Pingdom actually makes sense.
Users expect load times proportional to the content expected, if I am loading photoshop I don't expect it to be quick.
However if I am loading Reddit, I expect it to be fast, and it seems the websites that should load the fastest are now loading the most 'non-essential' JS, leading to worse performance than peoples expectations.
In my experience JS adds at most hundreds of milliseconds, and that's because people add dozens of marketing/tracking scripts that bog the site down.
If you run ghostery / ublock the javascript eval time shrinks dramatically. Our web app has a very large Angular app and it still renders in under 200ms, but we don't have any "plugins" due to working with PHI.
Users expect load times proportional to the content expected
And how do you think that happened? It's not a chicken-and-egg problem. Web pages got fat, and users' expectations got lower. Lazy devs have trained people to expect the worst, not the best. The app ecosystem wouldn't be half the size it is if web pages worked as fast as native apps.
If you run ghostery / ublock the javascript eval time shrinks dramatically.
Are you going to be the one to explain to the marketing department why you put instructions for doing so at the top of each of your company's web pages?
Page Speed Insight mainly measures how fast your above-the-fold content loads. It doesn't matter if your page loads a bunch of heavy js and images as long as it's deferred/lazy-loaded and doesn't block the initial render. For example, amp pages actually load a lot of js for their components, but it doesn't block above-the-fold render and thus scored really great on page speed insight. Personally, I think page speed insight is one of google's strategies to encourage people to use AMP more.
Edit: Also, I find it comical that when you include recaptcha v3 on your website, your page speed insight score can drop almost 20 points. It is as if google doesn't want you to use recaptcha at all.
> but it doesn't block above-the-fold render and thus scored really great on page speed insight
Which is ridiculous and just builds in the ability to game the system, because in my experience, amp pages take upwards of 5-8 seconds before anything of use to me actually loads, while the non-amp version loads in a fraction of that time.
I imagine someone is benefiting from amp, otherwise it wouldn't be used, but I have not experienced a single case where the amp version of a site wasn't significantly slower.
Last time I tried a WebAssembly website that loads 4-5 MB of data, and it managed to score something like >95 even though the page was ready only after like 5-10 sec.
More specifically the culprit is generally unnecessary string parsing. Every CSS selector, such as querySelectors and jQuery operations, requires parsing a string. Doing away with that nonsense and learning the DOM API could make a JavaScript application anywhere from 1200x-10000x faster (not an exaggeration).
Most JavaScript developers will give up a lung before giving up accessing the page via selector strings. Suggestions to the effect are generally taken as personal injuries and immediately met with dire hostility.
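For anyone who would rather measure than argue, a rough way to compare the two calls on your own page (the element id 'app' is hypothetical, and results vary enormously by browser, DOM size, and what the engine optimizes away, so treat it as a way to measure rather than a verdict):

    // Time repeated lookups via the id-specific API vs. a selector string
    function bench(label, fn, iterations = 100000) {
      const start = performance.now();
      for (let i = 0; i < iterations; i++) fn();
      console.log(label, (performance.now() - start).toFixed(1) + ' ms');
    }

    bench('getElementById', () => document.getElementById('app'));
    bench('querySelector ', () => document.querySelector('#app'));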
[citation needed] There are a lot of other reasons why client side JS is slow, including page reflows, bad use of the network, etc. I'm not a front end dev but I have fixed many performance problems before, and I've never seen parsing CSS selectors as a bottleneck.
I don't think they mentioned parsing CSS selectors anywhere. Shipping too much code is a problem, because megabytes of JS is expensive to parse, but IIUC that is distinct from your claim.
You are correct in that there are many other opportunities to further increase performance. If performance were that important you would also shift your attention to equally improve code execution elsewhere in your application stack.
> and I've never seen parsing CSS selectors as a bottleneck.
It doesn’t matter what our opinions are or what we have/haven’t seen. The only thing that matters are what the performance measurements say in numbers.
EDIT
To everybody asking for numbers I recommend conducting comparative benchmarks using a perf tool. Here is a good one:
I posted a performance example to HN before and people twisted themselves into knots to ignore numbers they could easily validate and reproduce themselves.
Here's an actual benchmark that you can run[1] (why did you not share an actual benchmark?). I get, on my old and slow Android, 500k ops/sec for querySelector.
At 60fps, that allows you to do ~8000 selections per frame, assuming you're not doing anything else. In reality, any app I've ever encountered probably has a few hundred querySelector calls, in total, and if the app is well written, the majority of these are cached meaning they only get called once, not once per frame.
500k ops is ridiculously slow for accessing the DOM on any modern hardware. It sounds fast when you aren't comparing it to anything. Compare that to a similar approach that makes use of the standard DOM methods and no selectors. The only thing querySelectors do faster is allow accessing elements by attribute.
The reason I refuse to post numbers myself is because:
1. I provided a tool where people can run any manner of discovery for their own numbers and see performance differences in various approaches.
2. People, when presented with a valid comparison will irrationally ignore results that challenge their opinion.
EDIT:
I looked closer at the measureit experiments and it seems there is some sort of bias. If you run the same experiment using an element already present in the page the results are the same for querySelector but 50% greater for the getElementById approach. Other perf tools I have tried did not display this kind of bias and they also reported substantially higher numbers for all user agents, most especially for desktop Firefox.
In response to 1, you're asking people to do their own research when you have stated the claim. The burden is on you.
As for 2, if it's such a problem, just don't make the argument. It genuinely comes off as wanting people to agree with you rather than any real interest towards engaging in actual discussion.
> 500k ops is ridiculously slow for accessing the DOM on any modern hardware. It sounds fast when you aren't comparing it to anything.
I didn't say it was fast OR slow. I just said it's not slow enough to be the problem that you're claiming it is.
If you disagree with this, could you provide evidence of a situation where accessing elements with strings is a bottleneck? As I said, I've never seen this in practice and if it can be the case I would like to know more.
> Compare that to a similar approach that makes use of the standard DOM methods
querySelector is a standard DOM method.
> and no selectors
I don't know what you're referring to here. Could you provide an example?
> It doesn’t matter what our opinions are or what we have/haven’t seen. The only thing that matters are what the performance measurements say in numbers.
But your only argument here is not numbers, but appeal to authority:
> > I'm not a front end dev
> But I am.
I don't have any particular reason to doubt you, but if objective numbers should rule the day here, maybe you could link to an article comparing the performance of a simple application using CSS selectors and then switching to using the DOM API?
I'm not the poster in question, but I thought it was pretty well known that querySelector[All] with a class string is shockingly slow - I'm actually a bit confused that everyone here is acting like they said something out of the ordinary. If that's really not common knowledge, it's probably no surprise that people are unintentionally writing websites that run like absolute crap on mobile devices.
Here's a JSPerf (not mine) that you can run and see yourself just how bad the performance is: https://jsperf.com/getelementbyid-vs-queryselector/284. getElementsByClassName runs nearly five million operations per second on my laptop (8th-gen i7, 16GB of RAM, latest Chrome); querySelector[All] with a class name runs less than ten thousand. And the sample HTML is tiny indeed compared to a typical web site or app.
That makes sense, but it has nothing to do with "string parsing", as the OP said. Parsing the query is fast. Executing the query is slow, because it's a more general API.
(And this is why the citation is important: because it shows the real issue, not the one described! Thanks for the clarification.)
I love how big the numbers are in recent versions of Firefox. On my crappy work computer that can barely run Notepad++ I am pulling 1.8 billion ops for getElementById and getElementsByClassName. The performance gaps:
* getElementById vs equivalent querySelector is about 1000x.
* getElementsByClassName vs querySelectorAll is about 250,000x.
* On Chrome getElementsByClassName vs querySelectorAll is only about 426x.
This is a micro-benchmark. In the real world that disparity would magnify almost exponentially with the number of nodes in a given page and the frequency and depth of access requests to the DOM.
I decided to check back in this and this seems like it was entirely a wording problem in your first post. Your post implied that _parsing the query selector string_ was slowing down Javascript websites, which obviously made everyone do a hottake and jump in to say that can't be the case. What you seem to _actually_ be saying is that using query selectors instead of finding elements directly by class/id is slow which yes, that's certainly the case.
Query selectors are slow, yes, but it's because they have to parse a string. Other DOM methods don't have a string parsing step. You cannot optimize access to the DOM if there is a string parsing step that must occur first.
That barrier to efficiency is greatly magnified by the complexity of the query string, the size of the dynamically rendered page, and the number of query strings. If not for that string parsing step why would query selectors be any different from any other DOM access instruction computationally?
You're quite frankly leaping to conclusions quite a bit, as well as starting from the (rather flawed) premise that parsing a, what, ten-character string is so slow that it can slow an operation down by three orders of magnitude.
There are any number of reasons why the querySelector API would be computationally different from the getElementX ones, especially considering that they don't even return the same thing.
> They return exactly the same thing: either null or a node or node list depending upon the method in question
...this is another thing that I'm surprised to find that people don't know. Do devs really just use these methods without ever looking at what they return or how they behave?
getElementsByClassName returns a (live) HTMLCollection, not a NodeList. querySelectorAll returns a (static) NodeList. That in itself is an obvious computational difference/potential bottleneck, because the simplest way to implement a live collection is to cache and return the same object on subsequent calls for it. And that's precisely what browsers do (getElementsByClassName('foo') === getElementsByClassName('foo')). In other words, getElementsByClassName called multiple times with the same class doesn't actually do any extra work. The real work for the browser engine, which doesn't actually happen at the point of function call, is watching any tracked class names and updating their associated HTMLCollections when an element matching that class name is added or removed from the DOM.
On the other hand, gathering the static NodeList for a querySelectorAll call requires actually iterating over the DOM to find element(s) matching the selector every time (in the naive implementation), with the trade-off of the engine not having to watch the collection internally.
As an aside, the querySelector method with an ID selector is slower but on the same order of magnitude (a difference of about a few hundred thousand ops/sec for me) as the getElementById method. So if one looks at all the data in the benchmark and not just the class ones in isolation, it becomes clear that merely parsing the selector string is not enough to drop millions/billions of operations per second down to single-digit thousands.
At my current gig we do that too for a dead simple frontend. A couple of simple tables and cards, a navigation and a footer, and we're easily over 2mb. But as I have been told, that's what happens when Google Analytics gets pulled in and all the code needs to adhere to enterprise style DDD (in a react app mind you). Apparently 400 layers of indirection and encapsulation are the way to go...
edit: Just looked it up. header has a logo and 5 links. footer has a scroll to top button and 10 links. both responsive. How many lines of code you ask? A little over 3400.
Atlassian products seem to use a really great code-splitting-like tech: 4MB js bundles are dynamically generated from a list of which modules should be included in the bundle which is dynamically generated by some other js on the page. So you're essentially guaranteed to get the same modules into your cache a dozen times in slightly different 4MB js bundles.
I'm not sure what the tech is called other than the reason I spend multiple hours a week waiting for jira and confluence to load pages with like 300 words on them
Angular, React, Vue, etc. are all getting better at reducing the bloat. Javascript packaging and tree shaking is also much better. A lot of old compatibility stuff for IE etc. can also be dumped now. JS compilation got faster. The reason we are not seeing any speedup is the code that is not needed: tracking and advertisements - this stuff seems to fill any size and speed gains.
I'm not a web developer so I really have no idea about this - is WebAssembly a viable solution or have I just absorbed some hype without understanding the problems with JS?
WebAssembly on its own won't help with web page bloat. As with all things on web pages, it can both be used to add more bloat, or to reduce bloat. Web page bloat isn't mainly a technical problem and can't be solved by technology alone.
No. WebAssembly is not enough faster than JavaScript to make up for the features and APIs browsers supply through JavaScript. That said, WebAssembly is a superior alternative to JavaScript only when you aren't recreating interactions already available with JavaScript. Where WebAssembly shines is in things like large binary media, games, and workloads that need to avoid garbage collection.
The great irony for Google is that their own tracking pixels for Analytics, GTM, AdWords, and AdSense are some of the biggest culprits for pages hanging on load.
People misuse and abuse JS all the dang time. Take Twitter: you follow a link to a tweet from somewhere else, but first thing you see as the page loads is not the tweet — it's everything but the tweet itself. There's a spinner instead of it, because rendering it server-side would've made too much sense, I guess. Gotta make an API request from the client and render it client-side just because that's so trendy.
In other words, there are too many websites that are made as "web apps" that should not be web apps.
Which is made way worse by the fact that tweets are 280 characters of text. It's absurd that Twitter has a higher delay than a few round trips when they aren't reloading any structural content. Wtf are these people on 300k salaries doing all day?
His point is that it's just 280 characters of text to download to the client to show the tweet. The client has downloaded probably 1,000x that much text before they can even see what they want to see.
Ah - but here you're hitting a wall. You could build a very lean twitter with minimal advertising and a more sane tweet presentation order - but nobody on the business side of things actually wants you to build that. They want you to build a social media platform that can be leveraged with synergies and RoI to maximize user retention and hopefully get a bit of Ad money out of the process somewhere along the way.
It is. And I think this is because customers aren't valued, they're commoditized. Personal information is now a commodity with a fair bit of value, and society has failed to properly punish the misuse of that value, so it's become a sort of everlasting fount of money.
I wholeheartedly believe that when we solve the issue of undervaluing personal information... that's when ads become nearly worthless (like they are in print media) and software salaries deflate to the point where tools that are useful to humans become the primary product again, instead of tools for harvesting from humans.
But, I'm quite an optimist - and a cynical one at that.
This is entirely speculation, but I think that may come from a very interesting factor: it's not what the engineering team views as a core competency. There may be a big divide between marketing and devs in how the business is perceived, and optimizing ads (that throwaway side component) isn't viewed as a tech priority by the team.
A more likely explanation might just be that trying to be a platform for everything is a complex proposition, and they've accepted so much complexity that all their labour at this point is being poured into maintaining that complexity rather than making improvements.
I disagree with "responsiveness never". Imagine you have a "tabs" component on a page, each tab has some text (for example [1]). With javascript, you can hide the content of the first tab and show the second tab on click, almost instantly.
Without javascript the second tab would be a link to another HTML page where the second tab is shown. Exact same behaviour for the user; however, the one with a few lines of javascript will feel way more responsive than the one without, where the user has to wait for a new page to load.
Without Javascript, the tab can be a label that toggles a hidden radio button, with the content div shown via an input:checked ~ div { display: block; } rule. No Javascript required.
Yeah, you're right that simple tabs can also be implemented using CSS, but I still disagree. Another example: how about a simple search on a table [1]?
In this case the search is instant. Without JS you would have to have a submit button and wait for the request. Even if you also added a button to the JS version, it would still feel more responsive, as again, you're not waiting for the request.
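Roughly what I have in mind is only a few lines; a sketch, assuming a hypothetical #search input and a #data table:

    const input = document.getElementById('search');
    const rows = document.querySelectorAll('#data tbody tr');

    input.addEventListener('input', () => {
      const query = input.value.toLowerCase();
      for (const row of rows) {
        // Hide any row whose text doesn't contain the query.
        row.hidden = !row.textContent.toLowerCase().includes(query);
      }
    });

(Which obviously only stays instant while the table stays reasonably small.)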
I'm not sure I've ever seen a case where I have a data set that's small enough to be quickly searchable (and quickly re-renderable) using client-side JS but big enough that I need dedicated, app-like sort, search, or query functionality. And were such a data set to exist, it would almost certainly be _growing_ with time, meaning that even if it started out as something with snappy JS search, that search only grows heavier and slower as time passes.
Additionally, when it comes to client-side spreadsheets I have seen far more terrible half client-side, half server-side implementations (being only able to sort within a page, instead of across all pages of results). If I had to choose one, I'd choose a world where all we had were server-side spreadsheets.
I’ve implemented front-end (fuzzy) search in multiple projects over the years. When the dataset is known to be small enough it’s great.
I have also seen the horrible half/half implementations you mention where it should have just been implemented on the server side, and I totally agree with you there.
However, it was just an example to show that an unsubstantiated blanket statement like “responsiveness never” is just wrong. I’m not saying doing search in JS is always (or even often) better, but it can be sometimes if done well.
And no, complexity generally doesn't negate this. Earlier this year I built a series of complicated medical forms for a healthcare web site that are all HTML + CSS + < 2K of JavaScript, and they all respond instantly because I didn't lean on JS to do everything.
The pages are fast, responsive, work on any device, almost any bandwidth (think oncologists in the basement of a hospital with an iPad on a bad cellular connection), and the clients are thrilled.
Something I've noticed is that dead code elimination for Javascript sucks. Yes, there are the (relatively simple) "tree shaking" algorithms, but because of various language issues it's very hard to mark code as completely unused. Because of this it's extremely common to pull in a library and ship a bunch of code that's never used in production!
Performance on the web is more than just runtime. That time spent downloading the Javascript and parsing it is part of the page's load time, even if the Javascript isn't executed.
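A contrived sketch of the kind of thing that defeats tree shaking (file names are hypothetical): any top-level side effect means the bundler can't prove the module is safe to drop, so its weight ships even if you only import one function.

    // lib.js (hypothetical)
    function buildHugeLookupTable() {
      return Array.from({ length: 100000 }, (_, i) => i * i);
    }

    // Top-level side effects: a bundler can't prove these are safe to remove,
    // so the module body tends to survive tree shaking in its entirety.
    const heavyTable = buildHugeLookupTable();
    window.__libLoaded = true;

    export function used() { return 1; }
    export function unused() { return heavyTable.length; }

    // app.js (hypothetical)
    import { used } from './lib.js';
    console.log(used());   // "unused" and heavyTable still end up in the bundle

If I remember right, the "sideEffects" field in package.json exists so a library can promise bundlers otherwise, but a lot of libraries either don't set it or can't honestly do so.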
I think JS does play a big role, but my guess is that the 3rd party stuff is a lot heavier than the scripts that actually make the page work. You have to write tons of code before the load time even matters, but when you have a bunch of "analytics", advertising, and social media integration scripts it adds up, especially when each ad is essentially its own page with images and scripts.
If you use Privacy Badger or similar plugins, you see that it's not uncommon for websites to have an obscene amount of these.
TLDR: I think ads are slowing the internet down way more than React apps.
Yes, and the main culprit within JavaScript is excessive GC, caused by excessive object creation.
Developers are careless because JS is so fast, but they forget the hundreds of milliseconds of GC-pauses that occur because they never think about the memory they are allocating and throwing away each second.
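A toy example of the pattern, not from any real codebase: the same per-frame update written with and without the allocation churn.

    // Called ~60 times a second from requestAnimationFrame.
    function updateNaive(entities, dt) {
      for (const e of entities) {
        // A fresh throwaway object per entity per frame: thousands of
        // short-lived allocations a second for the GC to sweep up.
        const v = { x: e.dx * dt, y: e.dy * dt };
        e.x += v.x;
        e.y += v.y;
      }
    }

    // Same work, no garbage: reuse a scratch object (or skip it entirely).
    const scratch = { x: 0, y: 0 };
    function updateReuse(entities, dt) {
      for (const e of entities) {
        scratch.x = e.dx * dt;
        scratch.y = e.dy * dt;
        e.x += scratch.x;
        e.y += scratch.y;
      }
    }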
Over the last 5 years there has been a dramatic shift away from HTML web pages to javascript web applications on sites that have absolutely no need to be applications. They are the cause of increased load times. And among them, there's a growing proportion of URLs that simply never load at all unless you execute javascript.
This makes them large in MB but that's not the true cause of the problem. The true cause is all the external calls for loading JS from other sites and then the time to attempt to execute that and build the actual webpage.
Yeah, it is perfectly possible to build javascript heavy websites without making them unusable until all is downloaded and processed...
I'm rebuilding a site for a client and using a bunch of dynamic imports: if you don't touch a video route, you won't download the videojs bundle at all. I set a performance baseline for the site to be interactive, and anything that makes it go over the baseline needs a good reason to be there.
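The shape of it, stripped of framework specifics (the route paths and the per-page render() export are made up; the videojs bundle is the one mentioned above):

    const routes = {
      '/':      () => import('./pages/home.js'),
      '/video': () => import('./pages/video.js'),  // this chunk is what pulls in videojs
    };

    async function loadRoute(path) {
      const load = routes[path] || routes['/'];
      const page = await load();                    // network hit only on first visit, cached after
      page.render(document.getElementById('app'));  // hypothetical per-page render() export
    }

    window.addEventListener('popstate', () => loadRoute(location.pathname));
    loadRoute(location.pathname);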
I really don't think this is an optimal solution. This makes every page load slowly, instead of just the first page. Waiting to do work doesn't make the work any quicker.
Not really. Most of the bundles are pretty small, in the 30-40KB range gzipped, even smaller with Brotli. The problem is when the user has to wait 5s to start using the site: waiting half a second to load the first page is OK, and waiting a quarter of a second to open another page is also OK.
It isn't purely about speed, it's about perception: a very slow first page load is way more annoying than a couple of half- or quarter-second loads distributed over a long interaction. After the JS is cached everything is nearly instantaneous, but you don't have to wait a while to start using the site on your first visit.
If you want, you can also wait for the first page to download completely and then import the remaining JS while the user is on the first page, but I haven't tried it to see how well it works.
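If anyone wants to try that, the idea is small enough to sketch (the chunk paths are hypothetical): wait for the load event, then warm the cache during idle time.

    window.addEventListener('load', () => {
      const warm = () => {
        // Fire-and-forget: the chunks land in the cache for later navigations.
        import('./pages/video.js');
        import('./pages/settings.js');
      };
      if ('requestIdleCallback' in window) {
        requestIdleCallback(warm);   // use idle time where supported
      } else {
        setTimeout(warm, 2000);      // otherwise just wait a couple of seconds
      }
    });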
It depends on the use case. For something like a web storefront, this is probably true. For a web application where I spend a long time doing work, I'd rather have a bit of a wait upfront followed by snappy responses on all the other pages. Even a half-second load time can be frustrating when I'm in the middle of a task.
The last two years have had a dramatic shift away from SPA and toward JAMstack, which is more or less pre-rendered SPA. The result is not only faster than SPA, but faster than the 2008-2016 status quo of server-scripted Wordpress and Drupal sites.
I blew off JAMstack as another dumb catchphrase (well, it is a dumb catchphrase), but then inherited a Gatsby site this past spring. It absolutely knocked my socks off. The future is bright.
From what I can see, JAMstack is just model (API), view (markup), controller (JavaScript) for web browsers. But it needs a new name because MVC is lame. Whereas SPAs treated the browser as a thick client with a very slow disk accessed over the network, static HTML treats it as a thin client (like X Windows).
There hasn't been a new idea in computing since the 70s. The only thing we've done is mutate a square peg into a round one and back again. Each time patting ourselves on the back for the sheer brilliance of the move.
I've been spending close to a day trying to figure out how to respond to this statement. It's like you're saying, "From what I can see, the 2020 Volkswagen Golf is just another flightless bird, but it needs a new name because calling it a duck doesn't sound fancy."
It's not a design pattern, and the element of it that uses one (React) usually implements Flux, not MVC.
As for "no new ideas in computing since the 70s," are you actually proposing that we go back to building websites like we did in the seventies? Um, sure. Brilliant.
>I've been spending close to a day trying to figure out how to respond to this statement. It's like you're saying [...]
No, it's like I'm saying: Software development is fad driven and shallow. The details change but those details are unimportant and ultimately pointless. Bad metaphors are exactly the type of shallow thinking I'm talking about.
A GUI is not a car, a duck, or a fish. It is a UI. That the web is being used as an interactive GUI is a travesty. Without knowing the history of why it was invented in the first place - extremely high latency and low bandwidth of the 90s internet - you will never understand why mutating it to a full GUI is a terrible idea. And why we should have used any of the dozen well engineered technologies that are not hypertext based but work with the internet (low latency high bandwidth) we have today.
It's gotten to the point where an X server in an Amazon data center and a VNC client on a phone is more responsive than facebook, twitter and reddit on mobile.
>As for "no new ideas in computing since the 70s," are you actually proposing that we go back to building websites like we did in the seventies? Um, sure. Brilliant.
Yes, using TeX would be an infinite improvement over the mess we have today. Imagine having one language for everything in a site, instead of the three (five? with markdown and js frameworks) you need today.
Most SPA frameworks work fine with the back button and bookmarks--they'll either use the URL fragment (#) or the history API. I'm sure there are lots of developers who don't bother to turn on that functionality, but that's not an inherent flaw with SPAs.
(For the record, I don't think that every app that is an SPA should be, but I do think that they have a place.)
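For reference, the amount of code needed to keep URLs, the back button, and bookmarks honest is small. A bare-bones sketch, where renderView is a stand-in for whatever actually draws the view:

    function renderView(path) {
      // Stand-in for real view rendering.
      document.getElementById('app').textContent = 'Viewing ' + path;
    }

    function navigate(path) {
      history.pushState({}, '', path);   // updates the address bar and the history stack
      renderView(path);
    }

    // Back/forward fire popstate; re-render whatever URL was restored.
    window.addEventListener('popstate', () => renderView(location.pathname));

    // A bookmarked or shared URL renders correctly on a fresh load too.
    renderView(location.pathname);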
If left unoptimized, JavaScript can delay your pages when they try to load in users’ browsers. When a browser hits a synchronous script while parsing a page, it has to stop and fully fetch and execute it before it can carry on (scripts marked async or defer are the exception).
Yes. And while I will continue to defend it in the case of interactive dashboards, webapps and status screens, if you are not writing one of those, you shouldn't be using javascript. If we want to fix the internet, browsers should ask the user for permission for each cookie, and javascript should be disabled by default.
Try to open 10 actual HTML websites. Now try to open 10 SPAs. Tell me which feels slower. SPA slowness is not exclusively from all the third-party JS and CSS loads. A lot of it is just innate to being an application instead of a document.
Assuming there are repeat visitors. Someone sent you a link to an article. You don't care which website the article is on, you just want to read it and close it without exploring whatever else the website had to offer. So you download a several-megabyte JS bundle to be able to read several kilobytes worth of text. By the time you encounter another link to that particular website, your cache of its assets would be long gone.
Not necessarily true. I've seen many SPA apps where the API calls that get made when you navigate between pages are slow too. So subsequent pages are slow also in addition to the first page.
This isn't even a hard technical constraint. With async javascript "module" loading, and stuff like the new federated webpack stuff, you can have a lot of functionality loaded in on demand.
That won't always work out or be necessary though. For an app like Lucidchart or something, I can deal with DL'ing updates every now and then. I spend most of the time in the running app.
In theory. In practice most SPAs are slow at first load and slow when changing pages then crash when too much data is requested.
Beyond that, unless your users are a captive audience (have to use your software for their job) you should probably be optimizing for a positive first impression.
> The last 5 years there has been a dramatic shift away from HTML web pages to javascript web applications on sites that have absolutely no need to be an application.
SPAs are a lot easier to keep secure though, so if you don't want your private data leaked then they're a much better option.
Any and all scripts that are loaded at any point in the SPA stay for the duration of the visit. 3rd party scripts are a big risk and not being able to unload JS scripts and associated in-mem/in-page symbols etc. is a timebomb waiting to explode.
There's a very clear separation between the front end and the back end, which makes it easy to write tests that allow-list only the specific data that's supposed to be getting returned.
The article doesn’t dig into the real meaty topic - why are modern websites slow. My guess would be 3rd party advertising as the primary culprit. I worked at an ad network for years and the number of js files embedded which then loaded other js files was insane. Sometimes you’d get bounced between 10-15 systems before your ad was loaded. And even then, it usually wasn’t optimized well and was a memory hog. I still notice that some mobile websites (e.g. cnn) crash when loading 3p ads.
On the contrary, sites like google/Facebook (and apps like Instagram or Snapchat) are incredibly well optimized as they stay within their own 1st party ad tech.
Do you know why modern sites are slow? Because time isn't invested in making them faster. It isn't a technical problem, people take shortcuts that affect the speed. They will continue to do so unless the website operators decide that it is unacceptable. If some news website decided that their page needed to load in 1s on a 256Mbps+10ms connection they would achieve that, with external ads and more. However they haven't decided that it is a priority, so they keep adding more junk to achieve other goals.
Exactly this, and it's happening everywhere in the software world.
Why bother writing binary search? Linear search is fast enough. Why bother sharing pointers? Copy all the strings. Easy. Data compression? Forget it. Deleting files is totally irrelevant, etc, etc.
Here's another way of looking at it. As long as you're below a certain threshold, users would much rather you implement 5 new features than spend that time squeezing 100ms on the load time.
Another related side-reason why it's slow is that we use higher and higher levels of abstraction, exactly to increase development velocity and be able to add more features quicker. I could write a native app in pure assembly and have it be blazing fast, or I can write a webapp on top of web frameworks running in Electron, but in a fraction of the time. As long as my app is usable, I'll get all the users while the other person is still trying to finish their app.
Might have to cut some features but the point is the same. Companies will build sites to the level of slowness that they think is tolerable. If their user's internet connection gets better that means that they can be less efficient and add more features.
As someone working on improving web vitals metrics, ad networks are 100% the biggest issue we face. Endless redirects, enormous payloads, and non-optimized images. And on top of all that, endless console logs. I wish ad providers had higher standards.
JS whitelisting and ad blockers anecdotally confirm this.
The number of sites that will load images and JS from three or four or more different ad/tracking networks and CDNs is nuts, plus the various login and media links, and I feel zero guilt for not participating in this advertising insanity.
Tightly put together pages with only a handful of JS loads are damn near instant over gigabit fiber.
Cannot fully agree. I've used ad blockers since their invention, and Twitter and Reddit still need more than 5 seconds to load for me on mobile Wi-Fi. Ads have their impact, but they're not the sole problem.
You had me until the end. I still see significant lag and mismatches on Facebook's page from simple actions like pulling up my notifications or loading a comment from one of them. For the latter, I see it grind its gears while it loads the entire FB group and then scrolls down a long list to get to the comment I want to see.
(This is on a Macbook Air from 2015, but these are really simple requests.)
I agree on the Ad Networks causing massive site bloat.
However, in terms of Facebook, I'd say it was well optimized considering its complexity prior to the recent redesign. But ever since the new design, my Macbook Pro can barely type on that site anymore. My machine has a 2.5 GHz Quad-Core Intel Core i7 and 16 GB of RAM.
That's pretty sad. Responsive design is a great idea, but the way it's sometimes implemented, you end up loading X number of stylesheets for a single page.
I think the reason why web sites are as slow as they are is because they can be. Double the speed of the Internet and all computers and soon enough the content will expand to be just as slow as before.
As someone who just recently worked on reducing page load times, these were the main issues we found:
1 - Loading large images (below the fold/hidden) on first load
2 - Marketing tags - innumerable and out of control
3 - Executing non-critical JS before page load
4 - Loading non-critical CSS before page load
Overall we managed to get page load times down by 50% on average by taking care of these.
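For points 3 and 4, one common approach (just a sketch, not necessarily what we shipped; the URLs are placeholders) is to sequence the non-critical bits after the load event:

    window.addEventListener('load', () => {
      // Non-critical scripts: injected scripts don't block parsing anyway,
      // and by now the critical rendering path has had the network to itself.
      for (const src of ['/js/chat-widget.js', '/js/analytics.js']) {
        const s = document.createElement('script');
        s.src = src;
        document.body.appendChild(s);
      }

      // Non-critical CSS gets the same treatment.
      const css = document.createElement('link');
      css.rel = 'stylesheet';
      css.href = '/css/extras.css';
      document.head.appendChild(css);
    });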
Can someone who understands more about web tech than me please explain why images aren't loaded in progressive order? Assets should be downloaded in the order they appear on the page, so that an image at the bottom of the page never causes an image at the top to load more slowly. I assume there's a reason.
I understand the desire to parallelize resources, but if my download speed is maxed out, it's clear what should get priority. I'm also aware that lazy loading exists, but as a user I find this causes content to load too late. I do want the page to preload content, I just wish stuff in my viewport got priority.
At minimum, it seems to me there ought to be a way for developers to specify loading order.
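There is at least a partial answer to the "specify loading order" wish: the loading and fetchpriority attributes are browser hints for exactly this. They really belong in the markup; setting them from script here is only to keep the sketch in one place, and the selectors are made up.

    // Bump the hero image up the queue, let below-the-fold images wait.
    document.querySelector('img.hero')?.setAttribute('fetchpriority', 'high');

    for (const img of document.querySelectorAll('img.below-fold')) {
      img.setAttribute('loading', 'lazy');
    }

These are hints, not guarantees: the browser still makes the final call on fetch order.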
But it is an opt-in feature, which is not supported in older browsers.
In modern frontend development we are heavily optimizing images now. Lazy loading is one thing, the other is optimizing sizes (based on viewports) and optimizing formats (if possible).
This often means you generate (and cache) images on the fly or at build-time, including resizing, cropping, compressing and so on.
Is progressive image loading still a common thing? I'd guess for a lot of connections it actually hurts more than it helps - until you get to that fat image-heavy site.
I suppose document order in the HTML would be a heuristic that works almost always, but due to CSS the visual order isn't actually guaranteed. You also need to download images before you can do layout, as you don't know their sizes beforehand.
It's not just CSS screwing up the order! On my own (simple) sites, I can see all the images I put on a page getting downloaded in parallel—with the end result being that pages with more images at the bottom load more slowly even above the fold.
That's what the width and height attributes on the img tag are for. They're hints. Things can be redrawn later. Although I think I've been seeing a lot fewer images on the internet lately, but they must be hidden with css.
Understandable but for most use-cases (if your images are hosted on a reliable CDN and are optimized) lazy load should work fine. Lazy load works based on the distance of the image from the viewport so it may not load too late.
Chromium based browsers now natively support lazy-loading.
But then they always seem to not come in until I scroll to them, which is too late and just means I have to wait even more! What they ought to do is download as soon as the network is quiet.
This is what lazy loading does. It doesn't actually load images that are “below the fold”. Or at least that's what it should do. Images should only load once you start scrolling down.
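For what it's worth, it doesn't have to be a strict "only once visible" rule. A hand-rolled version with IntersectionObserver can start fetching well before the image reaches the viewport; this sketch assumes the real URLs sit in a data-src attribute:

    const io = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          const img = entry.target;
          img.src = img.dataset.src;   // kick off the download
          io.unobserve(img);
        }
      }
    }, { rootMargin: '800px' });       // start ~800px before the image enters the viewport

    document.querySelectorAll('img[data-src]').forEach((img) => io.observe(img));

The native loading="lazy" behaves similarly, with the browser choosing its own distance threshold.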
Add to that, a ton of uncached images will run up against the browser's concurrent connection limit (non-HTTP/2) and cause queuing. Now you have latency back in the mix between batches :/
"The hope is that the progress in hardware will cure all software ills. However, a critical observer may observe that software manages to outgrow hardware in size and sluggishness. Other observers had noted this for some time before, indeed the trend was becoming obvious as early as 1987." - Niklaus Wirth
Speed is determined by business requirements, not capabilities.
I have hundreds of opportunities for optimizations in my apps. I could make them fly. But the business side says the current speed is good enough and to focus on new functionality. So that's what I do.
This is certainly how I feel. I see so many things people complain about in my work app, like not being able to submit with Enter or the back button breaking, and it makes me sad because this is not the fault of JS. We could have had all of that working; we just didn't have the time, because new features are more important.
(Somewhat related)
Although I’ve been out of the scene a while, it always felt like PC gaming was best when the GPU manufacturers hadn’t just introduced a significantly improved architecture AND the consoles were at the end of their life cycle. The worst times were when the newest, super expensive GPU monster would come out and the newest consoles were released.
I started to not feel excited for more powerful hardware. The performance ceiling was higher but I felt the quality (gameplay, performance, art style, little details) of games temporarily dropped even though the graphics marginally improved on the highest end.
Yes and no. I think they grow hand in hand. It's similar to how games nowadays still run at ~30-60fps just like games 10 years ago and games 20 years ago. It might sound like a silly example but it showcases what I mean. As long as you're above a certain threshold, most people would rather have better graphics than more framerate. Similarly, both users and developers would rather have access to more stuff than have a website that loads slightly faster.
I could try to write a game in pure assembly, and that may run super freaking fast, but that would take me orders of magnitude more time than writing the same game in Unity. Similarly, I could write an app from scratch, or I could make a web app with dozens of powerful frameworks like React, and ship it with Electron, in probably a fraction of the time. As long as the app is usable, then this is the smarter move. I can develop 10 features in the time it takes to develop 1 for my competitor.
The industry moves fast, users want features, not a site that loads 100ms faster.
It’s comical. I’ve got 2 gbps fiber on a 10 gbps internal network hooked up to a Linux machine with a 5 GHz Core i7 10700k. Web browsing is just okay. It’s not instant like my 256k SDSL was on a 300 MHz PII running NT4 or BeOS. Really, there isn’t much point having over 100 mbps for browsing. Typical web pages make so many small requests that don’t even keep a TCP connection open long enough to use the full bandwidth (due to TCP’s automatic window sizing it takes some time for the packet rate to ramp up).
iMac Pro (10-core, 64GB ECC RAM, 2TB NVMe SSD) with 1GbE connection here.
Ever since I bought this machine (and swapped the ISP) I came to understand that it was not my machine at fault; it's most websites that are turtle-slow.
Funny. I'm using a thinkpad T60 from over 10 years ago, clocked down to 1000 Mhz, because why not, and running a modified version of the links browser that I've hacked together with guile for expandability and every web page takes less than a second to load and render. Truly flying around the web. For many tasks I can run circles around your setup.
When I visit a web page, I load 1 file. 1 file. Show the page and then start loading the images, if any.
You can gain much by not loading any javascript, not caring a bit about css and having a very fast HTML parser and renderer.
I have 10Gbps fiber and have the same experience, although downloading from steam or whatever is nice. It's still worth a small premium for being a new uncongested fibre network that doesn't slow down at night but the overall speed doesn't make a difference for the reasons you mentioned.
As someone else with gratuitously fast internet I almost wish I could preemptively load/render all links off of whatever page I'm on in case I decide to click on one. (I imagine this would be fairly wasteful).
I got a fair increase in responsiveness on my site by prefetching links on hover and pre-rendering (pre-fetching all sub-resources) on mousedown instead of waiting for mouseup.
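Roughly what the hover/mousedown part looks like, simplified to internal links only (the pre-render step that fetches sub-resources is left out here):

    document.querySelectorAll('a[href^="/"]').forEach((a) => {
      // On hover, drop a prefetch hint so the HTTP cache is warm by click time.
      a.addEventListener('mouseenter', () => {
        const link = document.createElement('link');
        link.rel = 'prefetch';
        link.href = a.href;
        document.head.appendChild(link);
      }, { once: true });

      // Start navigating on mousedown instead of waiting for the full click.
      a.addEventListener('mousedown', (e) => {
        if (e.button === 0) location.href = a.href;   // left button only
      });
    });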
I've recently started using a Firefox extension called uMatrix and all I can say is, install that and start using your normal web pages and you'll very quickly see exactly why web pages take so long to load. The number and size of external assets that get loaded on many websites is literally insane.
It's the cascade of doom... we have an internal benchmark without ads, and it's funny to see that without ads we load something like 6MB of JS, but if you load the ads, the analytics cascade of hell will load 350MB of JS to display one picture.
6MB of JS without ads is unfortunately on the small side these days. Worked for a company that had 15MB of JS, after minification and compression (mostly giant JS libraries that we on the dev team used one small feature of).
I've been using uMatrix for ages and it was baffling to me how some websites that are literally just nice looking blogs have an unreal number (i.e. 500+) of external dependencies.
I love uMatrix but it can be a serious hassle to get an embedded video to play sometimes. Sometimes I'll allow scripts from the embedding site and suddenly there are dozens or hundreds of new dependencies popping up and not my video. At this point I really have to ask myself if it is worth it. Maybe if I'm lucky it's a YouTube video and I can track it down on YouTube's site, but if not it's going to be a big headache and a lot of reloads before the stupid thing plays.
I just got in the habit of turning it off when I'm either too lazy to bother or I'm on a site I'll probably never visit again. Sites that are already set up, of course, generally stay that way.
The big headache is when you have a site half-configured -- it's correct for all of your usage, and then you try something new and get a video that doesn't load, and you sit there waiting until you realize uMatrix probably found something new.
Same problem here (not with uMatrix but with something similar, NoScript). I'm now constantly using two browsers: Firefox with NoScript as a base, then temporarily switching to Chrome to access the sites that are demanding in terms of dependencies, as you described.
It's easy to convince non-technical people to adopt uMatrix when you show them that they can watch shows without ads and load pages in less than a second.
I think this is a general problem with technology as a whole.
Remember when channel changes on TVs were instantaneous? Somehow along the way the cable/TV companies introduced latency in the channel changes and people just accepted it as the new normal.
Phones and computers were at one point very fast to respond; but now we tolerate odd latencies at some points. Apps for your phone have gotten much much bigger and more bloated. Ever noticed how long it takes to kill an app and restart it? Ever notice how much more often you have to do that, even on a 5-month old flagship phone? It's not just web pages, it's everything. The rampant consumption of resources (memory, CPU, bandwidth, whatever) has outpaced the provisioning of new resources. I think it might just be the nature of progress, but I hate it.
> Somehow along the way the cable/TV companies introduced latency in the channel changes and people just accepted it as the new normal.
The technical reason is that digital TV is a heavily compressed signal [1] (used to be MPEG2, perhaps they have moved on to h.264) with a GOP (group of pictures) length that is usually around 0.5-2 seconds. When you switch channels, the MPEG-2 decoder in your receiver needs to wait until a new GOP starts, because there is no meaningful way to decode a GOP that's "in progress".
[1] And the technical reason for the compression is that analog HD needs a lot more bandwidth than analog NTSC/PAL/SECAM, while raw HD transmission would need an absurd amount of bandwidth per channel (about a gigabit/s for 1080p30). So HD television pretty much requires the use of digital compression. Efficient digital video compression requires GOP structures.
Right, but there's a social/business reason why this is true. There are various things that could be done technically to fix this, for example, you could have three decoders running simultaneously so you always had the adjacent channels buffered, you could send keyframes more often, or you could even use a totally different compression scheme that didn't use keyframes.
The real reason problems like this aren't solved is that the organization does not allocate resources to fix them. The status quo has been deemed to be good enough and not leading to the loss of too many customers to the competition, so that's where it stands. This pervades all engineering -- everything just barely works, because once it barely works, for most things, it's good enough, and no more resources are deployed to make it better.
I think some boxes advertise that, but how much does it help? On an analog TV you could be switching channels at a rate of say 3-4 per second while still registering the channel logos and stopping on the right one (by going one back immediately). One receiver in either direction isn't going to keep that charade up. Some cleverness might help - like decoding adjacent channels, and, if the user switches a channel down, having the "up" decoder jump to the second channel down in anticipation of the user going another channel down - but still. You can't feasibly emulate the channel agility (across ~800 channels on satellite) of an analog RF receiver in a digital system with a modulation that has a self-synchronisation interval of up to a few seconds.
> you could send keyframes more often, or you could even use a totally different compression scheme that didn't use keyframes.
GOP length / I-frame interval directly relates to bitrate. Longer GOPs generally result in a lower bitrate at similar quality; in DVDs or Blu-rays I believe the GOPs can be quite long (10+ seconds) to achieve optimum quality for the given storage space.
Non-predictive video codecs are usually pretty poor quality for a reasonable bitrate (like a bunch of weird 90s / early 00s Internet video codecs), or near-lossless quality but poor compression (because they're meant as an intermediate codec).
There are technical reasons for this, I guess (as you pointed out). But what the GP is talking about is really a ubiquitous phenomenon of not trying to do anything about it.
Users want to change channels quickly? Okay, let's decompose the problem into what they really want to do, then make N "preview of channels i-j" channels in 720p with channel pictures in e.g. an 8x5 grid, and show numbers so that a user could just dial them on the remote. After finding an interesting preview, it's not an issue to push some buttons and wait a couple of seconds to get 2160p or whatever quality there is. Didn't like it? "99<n>", "ok", repeat. This solution would be the jQuery of the TV world - dumb, straightforward, non-automated - but it would be a thousand (forty, to be honest) times better than nothing.
When I had an HD TV set, I just checked the 3-5 channel numbers I remembered and turned the TV off if there was nothing of interest, because you could easily spend half an hour just peeking at channels one by one.
Right but your assumption is that bandwidth efficiency is the only important thing. Why don't they use 10 second intervals if they're more efficient?
My point is that there are many ways you could apply engineering, money, bandwidth, etc to reduce or eliminate the problem but they don't because they see it as good enough to not lose customers.
Honestly it seems pretty obvious to me that the I-frame intervals that are typically used are the outcome of a balance between usability (having to wait until a clear picture can be viewed) and bandwidth as well as processing requirements.
Bandwidth matters because spectrum is a finite commons. Digital TV required giving up bands that were in use for other things (e.g. wireless microphone systems and some ham bands) as it is; using even more bandwidth would have required even more spectrum. Other people have stakes in the spectrum for very good reasons, often much more important than "I wanna watch TV". Even satellite TV, which uses frequencies high enough that there is lots of bandwidth, is limited, because using more of it would require new LNBs and multiswitches for all customers.
> My point is that there are many ways you could apply engineering, money, bandwidth, etc to reduce or eliminate the problem but they don't because they see it as good enough to not lose customers.
Can you name one for each category you bring up? Especially the "engineering" one would be interesting. Obviously dedicating an even huger chunk of spectrum to "I wanna watch TV" would make things easier, so that's not really interesting. And "let's just give each customer a TVoIP stream that can start immediately" is also a pretty obvious "money is no concern" (and also "we don't care about customer adoption costs") option.
I think we're really saying the same thing but from a slightly different perspective.
We agree that there a whole series of parameters that can be traded off against each other:
1. Bandwidth required
2. Number of channels
3. CODEC complexity/engineering effort/cost for encoders and decoders
4. Iframe interval
5. Number of tuners/decoders in receiving equipment
6. Resolution
7. Frame rate
8. Video quality
So the standardization people decide how to set those, and basically channel switch time gets set to the maximum value that doesn't cause people to cancel their service.
If you want to see something pretty astonishing that can happen if you set the tradeoffs differently, check this out: https://puffer.stanford.edu/player/
The difference between 576i50 (PAL equivalent, free version of the main private channels) and 720p50 (that's what German public service uses for "HD"; it's not actually 1080p, although they use a pretty high bitrate) is pretty stark, the difference between 576i and 1080p is even more obvious. Although TVs don't really have a neutral setting and try their best to mess every image up as much as they can.
All the time, and unless the images are next to each other you don't really notice unless you're looking for it. Not because there isn't a big difference, but because you just don't really care that much unless it is called to your attention.
I've found pixel-count improvements stop mattering for me somewhere around DVD quality for a lot of content. I can tell the difference between that and 1080P, but stop caring once I'm actually watching the show/film. For the occasional really beautifully-shot film or some very detailed action movies, I guess I might care a little. 1080P versus 4K, I don't notice the difference at all unless I'm deliberately looking for it. And that's even with a 55" screen at a fairly close viewing distance.
What does make a difference? Surround sound. I'd take 5.1 or up with DVD-quality picture over 4K with stereo any day, no hesitation.
I remember a clear improvement in the legibility of things like onscreen scoreboards in sports broadcasts. In the NTSC era I would squint at the TV trying to figure out if the score was 16 or 18 for a football game, and for a basketball game you just had to keep track of it in your head since the scoreboard wasn’t even persisted onscreen. Other things, like telling different players apart, are also easier these days (you can even make out their facial features in a wide shot!)
Hmm, those 2 seconds in between channel changes might be enough to insert a micro ad that could be preloaded in the background. That would appear instantaneously while your channel loads, it would feel a lot more responsive.
Most video players can decode an "in progress" stream just fine. This obviously involves quite a few artifacts for the first 0.5–2 seconds or so, but seeing artifacts is generally preferable to seeing a totally blank screen.
Implicitly TV users are also prohibited from making their own technical decisions. I claim this is because the TV predates the era of open computing platforms like Unix.
> I claim this is because the TV predates the era of open computing platforms like Unix.
I claim it's because DRM, since if it wasn't for DRM wanting control of the whole reception to display path to be protected, you could just stick an open computing device in between the streaming signal and the display device, and make your own technical choices.
You still can, AFAIK, for content that doesn't require HDCP.
I remember having a TV tuner card back in the day on my old desktop and this is exactly what I did. I don't watch much TV or anything so I haven't really checked, but are such cards available today with the encrypted digital cable that's ubiquitous everywhere? Even most broadcast is digital now and requires a decoder box in between.
> the TV predates the era of open computing platforms like Unix.
Do you mean Unix-like? I'm not sure what you mean by "open" in this instance, but Unix definitely was not very open, leading to nascent platforms and movements such as GNU and Free/Libre software.
True, and the decoders in receivers already have this capability, since they keep decoding even if the forward-error-correction can't save the stream any more, resulting in similar artifacts. I'm not sure why it's not done on a channel change, perhaps it's the manufacturers not wanting their TVs to routinely show glitched footage, or perhaps advertisers don't want it because it would be against Corporate Design rules to glitch-effect the logos they use on the daily.
Our cable box has three decoders. My gf can watch one channel and record two others at the same time.
Yet does it use any of those extra decoders when not recording to proactively decode the previous and next channel, or something smart like that? No, of course not...
The incoming channel data stream is saved as-is. It will need a demultiplexer to separate out one channel from the multi-channel data stream, but it won't need to decode that stream, which is the intensive bit. Decoding happens when you play it back later.
> proactively decode the previous and next channel
Do you find yourself actually moving up and down the channels, and not through the guide to somewhere else entirely different than where you once where? My first move if I'm switching channels is to go to the guide, not to a channel one above or below my current channel.
I suppose it could run on the previous channel, but it certainly can't guess my next channel.
The first startup I worked with was an IPTV startup that would send the I-Frames for the channels on either side of the channel you were watching at the higher bandwidth tiers so you could channel flip instantly like the old days.
Toxic culture and relationships so the startup imploded, but there was some cool tech.
CRTs did tend to take a couple seconds to properly warm up, charge up and reach final image size (the image is directly scaled by the acceleration voltage, which is a high impedance source charging a not-so-small capacitor formed by the aquadag on the inside and outside of the picture tube).
CRTs did need a moment to stabilize the image, but they showed signs of life virtually instantly. At the least they'd make a little noise and had buttons with tactile feedback so you knew something was happening. Many screens today have capacitive touch buttons and have 5 seconds or more between a button 'press' and anything happening at all, leaving you to wonder if you even managed to successfully press the power button in the first place.
Yes! I remember movies and TV shows would have scenes where a character is called and told to turn on the TV for breaking news. They'd see the story instantly, and that was actually realistic! (Assuming it was big enough news to be on all channels.)
Today if you had such a scene they'd be like, "okay <presses remote, waits five very immersion-breaking seconds>".
> Today if you had such a scene they'd be like, "okay <presses remote, waits five very immersion-breaking seconds>".
But they still have these scenes in movies
The way these scenes work now is "picks up phone - check the news!", they grab the remote, and turn the volume up on one of the many already running and tuned in wall-mounted 24/7 on TVs ... :)
Are we talking about seeking in media players, or streaming websites? If I watch a random Twitch stream I just see a loading throbber while it loads, not an artifacted version.
I don't buy this. We had an old cable box that had instantaneous channel-switching. The cable company made us switch to a brand new one. Same signal coming in, but the new one was infinitely slower. It wasn't some change in the signal that made things slow, it was the software and the complete lack of care for performance.
You get the cable box for the new system BEFORE they switch the old system off, or else you wouldn't have service until you got a new box. That's why you saw behavior change with the new box.
It simply isn't possible to send all the channels to customers at all times. There isn't enough bandwidth. So, the cable box at your house negotiates with the central or regional system so only a subset of channels are sent to you. There is no other way to do it in digital cable systems, and the switch to digital was made because it uses significantly less bandwidth than analog.
The old cable boxes were literally switches, with all channels flowing into the box all the time. Now switching is virtual and done on the server (plus all the software encoding/decoding).
Right now I would be satisfied with at least some caching of the menus, so it doesn't have to pull the data every damn time I scroll up or down. Come on, it should be able to remember what channels I have for more than 5 minutes in a row.
It probably does cache the data, and it just takes ages to draw. Manufacturers really cheap out on settop box CPU and RAM specifications in order to make the pricing attractive to cable companies.
Yes and they still have insurance to cover the loss, anyway.
You didn't think a cable company was going to deal with its customers fairly, did you?
Every time you interact with a customer, there is an opportunity for profit. Telecommunications companies, cable TV included, are masters at this type of behavior.
Another reason for the lag is that the modulation scheme is complex and requires considerable time to both acquire the actual demodulator lock (in fact, the symbol in typical terrestrial DTV system is surprisingly long) as well to acquire synchronization of the higher level FEC layers.
> The technical reason is that digital TV is a heavily compressed signal [1] (used to be MPEG2, perhaps they have moved on to h.264) with a GOP (group of pictures) length that is usually around 0.5-2 seconds.
Just because they transmit keyframes and deltas in the steady state doesn't mean they need to wait for the next keyframe when starting the stream. They could also choose to send you the current reconstructed frame as a keyframe immediately instead of waiting several seconds for the next pre-made one. The cost to them would be epsilon (one new stream client per channel to be the source of reconstructed keyframes), and the customer experience difference would be noteworthy.
Sorry. My context here is cable boxes, which do communicate bidirectionally and somehow take even longer than DVB receivers to change channels. If you're actually receiving everything all the time, then there are other things you can do to make channel switching not take ridiculously long.
No. It will work regardless. I'm talking about having a single additional client decompress the broadcast and act as a first-frame keyframe source based on the decompressed stream. In a traditional compressed stream, a reconstructed frame is already the context for the next delta after the first. Sending a frame reconstructed by someone else allows you to immediately begin using deltas, and it's a close enough approximation of the original uncompressed frame that the immediate result will be quite good even if not perfect, especially compared to waiting.
At worst it would only be as bad as a transcoded stream for the first few seconds until the subsequent true keyframe arrives. That's loads better than no stream at all.
There are additional issues with changing channels beyond just GOP sizes. For cable TV, signals are still channelized into 6MHz channels. QAM modulation is used, which gets about 40Mbps per 6MHz channel.
An MPEG transport stream is sent over that 40Mbps "channel". Inside of that transport stream are several elementary streams, each carrying either video, audio, or data/program information. A TV station will be carried by a combination of a video (MPEG-2/4) and audio stream. These streams are packetized such that a receiver won't get buffer underruns or overruns. The transport stream is further packetized. Adjacent station broadcasts aren't always carried on the same transport stream.
So when you hit the button to "change channels" the cable box may need to tune to a new radio channel, wait for transport stream packets for the station, and then buffer enough data to start playback.
The GOP size then plays a part because the decoder needs an I-frame to do anything useful with the P-frames inside the GOP. Cable providers use longer GOPs to get better quality at a given bitrate, I-frames are much larger than P-frames so the more you send the higher the bitrate you need.
Sending I-frames for other broadcasts in the same transport stream would be a waste of bandwidth. The data used for these rarely used extra I-frames takes away from bandwidth that programming in the stream could use. There's also still the lag from tuning a new channel and waiting for those I-frames to show up. So you've wasted bandwidth for a very slight increase in responsiveness.
Currently I'm talking only about sending _one_ additional I-frame per station switch at the time of the switch, not always sending them or sending them for unrelated broadcasts. I just object to the idea that it's reasonable for a feed to start with seconds-worth of unusable P-frames, and it doesn't seem to be strictly necessary.
Cable boxes do not typically have bidirectional connections. The cable head end doesn't know about channel changes. Don't think of cable TV like web streaming.
Cable systems don't often re-encode streams they receive from upstream sources. The video from upstream is already compressed so the elementary streams are remuxed into the cable system's channel plan. The cable head end doesn't receive raw video and encode it in real-time. Even local OTA stations just get remuxed rather than re-encoded.
Sending some out of order I-frame every time a channel was changed means every stream has to be decoded in real-time at the head end in case someone flips a channel. Cable is a shared medium so every channel flip by every user on a node will have to be sent to everyone.
Even IPTV isn't a bidirectional signal. It uses IP multicast and the tuners just change their multicast address to change stations. The stream is the same MPEG transport stream as the QAM signal but sent as IP packets. The head end still doesn't decode every station to send bespoke I-frames in the event someone flips the channel.
> Cable boxes do not typically have bidirectional connections
This hasn't been true in many places since the takeover of digital cable in the 2000s. Without a bidirectional data link, how do you think cable internet works? Or cable telephony? Or pay-per-view for that matter? It's all on the same line into the same box.
> Cable systems don't...
Don't isn't couldn't. When talking about solutions to problems, merely describing a problem isn't where the conversation ends.
> Sending some out of order I-frame every time a channel was changed means every stream has to be decoded in real-time at the head end
That's what I described. The cost of doing it is epsilon.
> Cable is a shared medium so every channel flip by every user on a node will have to be sent to everyone.
If you can stream Netflix while watching TV without hurting your neighbors, then you can receive one single I-frame.
How does a DVB-T transmitter know about a receiver switching channels? A DVB-S signal is just forwarded by a space relay fed by an uplink site fed by a number of stations. These are not interactive duplex media like the internet or possibly some forms of cable TV.
I'm not sure if this is the whole story. Watching DVB terrestrial TV on my television, channel changing is definitely slower than a good analog TV, but it's much faster than any satellite/cable box I've ever used. And there's a lot of variance between cable boxes. I strongly suspect some of them are doing something silly.
OTA TV (Rabbit Ears picking up Digital TV - c.f. older analog NTSC), at least here in Canada, is still using MPEG2. The stream is typically around 20 Mbps, including the audio, which is Dolby Digital 5.1 (except certain channels, such as TVO (TV Ontario), which is DD 2.0).
It's all based on ATSC. While there is an ATSC 2.0, they skipped that in Canada; there will be ATSC 3.0, which will use HEVC (H.265) for 4K. But don't hold your breath on that... the TV stations aren't exactly opening their wallets to upgrade anything!
And yet using puffer[0] I get instant channel switches at 1080p.
There is no technical reason you can't have instant channel switches, it's just that they aren't making the right technical decisions to allow it and/or don't want to pay for it.
It seems to me that we have lower performance, exponentially higher resource consumption, and often no more functionality in a “modern” web app / Electron app compared to some desktop apps from the 90’s. And for what? To have even worse UI’s and UX’s that never conform to platform guidelines? Where’s the progress?
The big "dream" of web-based technologies was to allow designers the freedom to do whatever they want. But doing this has a lot of costs, which we are all paying today. In the early days of GUIs, a group would design the OS, and everyone would just use that design through the UI guidelines. It was, in my opinion, the apex of GUI programming, because you could create full featured apps without the need of an experienced designer, at least for the first few iterations. Now, I cannot even create a simple web app that doesn't look crap, and any kind of web product requires a designer to at least make it minimally usable. And the whole architecture became so complex because of the additional layers, that it looks more like a Baroque church than a modern building.
The desktop apps from the 90s were better because they could actually show more information on a 600 by 800 pixel screen than a modern app can on a full HD screen, because of all the whitespace modern apps put in.
Not to mention flat design, which is without exception bad design. Buttons have borders, show them.
Yeah, I remember when that first started happening, and I agree it was really frustrating. The funny thing is that I no longer channel surf, so I no longer hit this. I realized the other day that I haven't watched cable TV in a long time. My TiVo is empty and has been for a long time, but I watch TV every night using streaming devices instead.
> The funny thing is that I no longer channel surf
The cynical side of my brain feels that this is exactly what TV stations and show producers wanted. If it's fast and easy to spin the dial and maybe stop on a competitor (and more importantly, watch their ads instead of yours), then you have to put out a quality product and not annoy the customer with as many loud and irritating ads, which means that you don't make as much money from the content that bookends your ads.
The worst is they "fixed" it with "digital knobs" now. But due to anti-glitching logic, they are still not as responsive as you want and annoying to use. Especially volume controls.
There's nothing quite as annoying as having to use the cheapest rotary encoder, which already feels like shit, but the firmware is also polling it incredibly slowly and debouncing it incorrectly, and it probably bounces like hell, so not only does it feel bad to use, it actually doesn't work half the time and goes a step or two in the wrong direction, even when you're spinning it steadily the other way.
These things are like those fake "terrible volume control" designs Reddit made up.
I remember thinking this when OSes started putting transparent blur effects on various UI layers. As someone who's worked with computer graphics, I understood that high-quality blur effects are relatively expensive. Most computers these days can handle it without breaking a sweat, but it's like, why are we using these resources on the desktop environment, whose job is basically to fade into the background and facilitate other tasks?
I don't think it's the nature of progress so much as it is laziness. Most developers (myself included) don't worry much about optimization until the UX performance is unacceptable.
I sometimes wonder what the world could be like if we just froze hardware for 5 years and put all of our focus on performance optimization.
I invented the translucent blurred window effect. It first shipped in Mac IE 5.0 in April 2000 (the non-Carbon version only), for the autocomplete window, and later that year for the web page context menu (first in the special MacHack version).
The effect doesn't have to be expensive, and my original implementation was fast on millennium-era machines.
The goal is to preserve the context of the background, while keeping foreground data readable, in a way that is natural to our human visual system. If you are doing it properly you also shift the brightness values into a more limited range to diminish the contrast and keep the tonal values away from the chosen foreground text color (this is also cheap). Done properly it is visually pleasing with virtually no effect on readability.
People have coded some boneheaded imitations along the way, though. They don't add the blur, or they don't adjust the brightness curve, or they make the radius much too big, or they compute an overly exact Gaussian blur that is too slow.
It's the nature of blur that it doesn't have to be exact to be visually pleasing and convincing.
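A rough sketch of the recipe described above (not the original Mac IE code): a cheap box blur, repeated a couple of times if you want something closer to Gaussian, followed by compressing the brightness into a narrower range so the backdrop never competes with dark foreground text. It operates on one grayscale row just to keep it short, and the radius and range values are arbitrary placeholders.

    // Naive single-pass box blur over one row of 0..255 values.
    function boxBlurRow(row, radius) {
      const out = new Float32Array(row.length);
      for (let i = 0; i < row.length; i++) {
        let sum = 0, count = 0;
        for (let k = -radius; k <= radius; k++) {
          const j = i + k;
          if (j >= 0 && j < row.length) { sum += row[j]; count++; }
        }
        out[i] = sum / count;
      }
      return out;
    }

    // Remap brightness into a limited range (here 140..220) to lower contrast
    // and keep tonal values away from dark foreground text.
    function compressRange(row, lo = 140, hi = 220) {
      return row.map(v => lo + (v / 255) * (hi - lo));
    }

    let backdrop = Float32Array.from({ length: 256 }, () => Math.random() * 255);
    backdrop = compressRange(boxBlurRow(backdrop, 4));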
The context of the background is self-maintaining. Reducing the viz/readability of the overlay text does nobody but the art critics any favours. It was, and remains, a bad idea.
It's shown up in Vista and some other Windows versions since then. I thought they used it fairly tastefully, but I'm not a Windows user so haven't spent much time looking at it. Apple currently has the blur radius so high that they have to exaggerate the saturation to make the effect noticeable, which looks OK, but I prefer a more optically realistic level of blur.
I think it had to do with graphics acceleration becoming more commonplace. My guess is that these additional effects weren't originally intended to use desktop CPU, though once they became the norm, who knows. Although the recent trend towards "flat" UI seems to be reversing that trend.
While I can see the technical reason why channel changes take a while with digital TV, I've always wondered why switching the digital input of your TV, or just changing the resolution of the connected devices takes so long. On most TVs it's over two seconds. The signal is in most cases just a stream of standalone, uncompressed frames. Switching from HDMI1 to HDMI2 should take a few milliseconds.
I’ve always assumed that’s because TV manufacturers choose the most under-powered SoC they can get away with, don’t put half as much RAM as it needs, and let loose incompetent programmers who don’t have to use the damn thing ever. Still very frustrating.
Ok so HDCP might be an explanation although I wonder if it has any impact if it's not in use. Don't really know how it works tbh.
As for the rest: Resolution negotiation is optional and wouldn't matter if the device is already outputting at some resolution, also this happens on different pins, so even if the device would first query the EDID of the TV to figure out what mode it prefers, the TV could meanwhile already display whatever the device is outputting. Same with CEC, this is another protocol on different pins than where the picture data is sent. HDMI really is just DVI with some extras, at least for the older versions.
I don't know why (but would be interested), but even PC screens are usually very slow at switching (>1 second delay), and for some inexplicable reason practically all of them turn their backlight off while doing it.
Alternately, the value of low-latency experiences is not as high as we believed - or it's poorly measured.
Particularly in Enterprise software, the time to complete a workflow or view specific data matters a lot - the time to load a page is a component of that, but customers will gladly trade latency for a faster e2e experience with less manual error checking.
In consumer software the big limiting factor is engagement, and a low-latency experience will enhance engagement. However, it's possible to hide latency in ways that weren't possible before, such as infinite scrolls and early loading of assets. The engagement on the 50th social feed result has less to do with the latency to load the result, and more to do with how engaging the content is.
Meanwhile, most of those ways to hide latency have other issues, like the good old "link I need is below the infinite scroll" or "I just want to go back to a specific time, but I have to scroll through every single thing between then and now instead of just jumping 10 pages at a time". Which we would avoid if instead we tackled the actual problem.
I once had a glitch on my iPhone back in iOS 9.x. The glitch disabled all transitions/animations, and it was a fantastic experience to tap an app and have it open instantly. Turning off animations in the iOS settings doesn't make it as fast as that glitch did, unfortunately.
Microwaves are another example. Used to just turn a knob to the number of minutes you want, and done. Now it's five button presses. (Maybe not exactly a latency thing, but a UX one that makes it slower).
At least some microwaves will run N minutes when you press the N button once. Also you can hit the add 30 seconds button a couple of times, and it starts running instantly.
It is the nature of progress: progress is doing more with less. That’s increased productivity.
Of course software now uses more computing resources, so that’s not doing more with less. But the computer is cheap. What’s expensive is the humans who program the computer. Their time is expensive, and getting experienced, expert humans is even more expensive.
So we now have websites that have rich features bolted together using frameworks. Same for desktop software, embedded systems, and whatever else. They’re optimized for developer time and features, not for load time because that’s not expensive, at least not in comparison.
As a user the only solution I see to this is to use old fashioned dumb products rather than cheaply developed “smart” ones. For instance I’m not going near a smart light switch, or a smart lawn sprinkler controller. Old dumb ones are cheap and easy and fast and predictable.
>But the computer is cheap. What’s expensive is the humans who program the computer.
This is a nice half-truth we tell ourselves, but that's not the full story.
There exist plenty of optimizations where the programmer time would cost less than the additional hardware. And those losses compound. But they're a little too hard to track, and cause is a little too far divorced from effect.
I did our first ever perf pass on an embedded application as we started getting resource constrained. I knocked 25% off the top in a week. Even if I had spent man-months for a 10% savings, try and tell me that's more expensive than spinning new boards.
That's not to say we're opposed to hardware changes; we do them all the time. But the cost curve is weighted towards the front so it's more attractive to spend a non-zero amount of developer time right now to investigate if this other looming spend is avoidable. That's not the case when you're looking at controlling the acceleration of an AWS bill that spreads your spend out month to month through eternity.
Who wants to spend a big chunk of money up front to figure out if you can change that spend rate trend by a tiny percentage? Even if you do, and get a perf gain, but someone else on the team ships a perf loss? Then it doesn't feel real, and you can only see what you spent. Even if you have good data about the effect of both changes (which you don't), the fact the gain was offset means the sense of effect is diminished.
And rather than investigate perf, people can always lie to themselves that the cost is all about needing to "scale." That way they convince themselves not only was there nothing they could have done, the cost is a sign that their company is hot shit.
And if perf has any impact on sales, cause and effect are even further apart. You might be able to measure the effect perf has on your sales website directly, but if that feedback loop involves a user developing an opinion over days/weeks? Forget about knowing. Oh, sure they'll complain, but there are no metrics, so we get the rationalizations we see in this thread.
I don't think it's the nature of progress. There's no question we have the _capacity_ to engineer our way out of these problems. They aren't unsolvable. But a lot would have to change before the necessary resources are mobilized against these kinds of problems instead of churning out yet more bloat, which, let's face it, is what most of us are doing with our time every day.
>Remember when channel changes on TVs were instantaneous? Somehow along the way the cable/TV companies introduced latency in the channel changes and people just accepted it as the new normal
Phones used to be rotary dial, but then touch tone phones with number buttons were introduced. I was reading an article about the human brain and its expectations. Going from a touch tone phone back to an old rotary style phone seems excruciatingly slow to our brains. It depends on the digit: a 1 takes about the same time on each, but a 9 or a 0 on a rotary phone seems glacial compared to a touch tone 9 or 0.
Even worse; in some countries, touch tone exchanges took a while to roll out, with the result that keypad phones which supported both pulse dialing and tone dialing were common. And then people never switched them over to tone dialing. There were people pulse dialing into the 21st century.
The rampant consumption of resources ...has outpaced the provisioning of new resources. I think it might just be the nature of progress, but I hate it.
Someone had to more or less decide to handle it that way for some reason. So I am skeptical that "it's just the nature of progress."
Remember when you could look at a book, a guide, to see what was on and when? It never took more than a minute or two. Then came the TV channel with the slowly-scrolling list of channels (early 90s). Try figuring out today which shows will be available tomorrow at a particular time... and whether you have subscribed to that channel. Good luck. I don't think it is even possible anymore.
Eh, most cable companies now have a dedicated guide menu with manual scrolling and search features rather than a scrolling channel, I'd say that's one of the few things to have improved since old days
> Remember when channel changes on TVs were instantaneous?
Remember when turning a TV on was <0.5s?
My current dumb TV takes a good while to turn on. When I press the power button, it takes about 1s for the indicator light to change, then another 2 or 3 to begin displaying anything.
The image took some time to stabilise (and was black at first), but the things turned on instantly, with the light indicator visible without delay (and the sound of the CRT turning on).
Now, quite often I have to wait 5s to see whether the button push was registered, push again because the TV still does nothing, and then watch as the thing turns on just as I'm pressing and interprets the second push as a signal to go on standby (5 more seconds with an obnoxious message about it going to sleep, and yet 5 more to wake it up). It's like the USB-A plug that needs to be rotated twice every time you want to plug something in.
Interesting. I meant CRTs. Maybe my memory is a bit rose tinted.
Still, I remember never having confusion about whether my CRT TVs were responding to the power button press or not. There are plenty of times when I turn my current TV off by pressing again because I think it didn't receive the first button press.
Most CRTs made an obvious _noise_ when turned on (actually, one of the curses of good high-frequency hearing was that they made an obvious noise for the whole time they were on, though I think a lot of people couldn't hear them after warmup). That helped, I suppose.
The history of the automobile is another such example. Brief summary: it turns out the invention of the car didn't really save anyone time, it just enabled sprawl.
The purpose of technology (in the POSIWID sense) is to concentrate wealth.
Who are these "people" you're talking about? I would much rather live close to everything and walk, bike or bus. Instead I have to have a car because work is 3 miles away, doctors are 5-30 miles away, the grocery store is 5 miles away, the nearest convenience store is a mile away. And I live in the middle of one of the biggest metro areas in the country!
I'd like to have a farm in the middle of downtown too. It isn't possible. You get more choices in the same amount of time with a car, and you get more space as well. Without a car you had better like the doctor within walking distance (my family considered them quacks, and has heard plenty of stories about people almost dying because obvious things were not caught in regular visits and the ER had to figure them out when it was almost too late). YMMV; I like choices.
Of course cars do take up a lot of space, but even factoring that in you have more choices in a reasonable time with a car than without.
Don't read this as me approving of cars. I understand the appeal and drive, but I hate it.
Your last paragraph is so true. I feel that's an issue we are facing a lot in every domain. I see people writing highly inefficient back-ends because they believe we are going to scale horizontally anyway, so it doesn't matter. The same applies to frontends too, I guess; people don't care to optimize apps either. They're like: the minimum configuration our apps run on is getting better day by day anyway, so why not build a resource hog!
I personally believe we should start having "bootcamps" that talk about optimization and the cost of stupid designs. I'm also looking forward to compile-time and run-time optimizations so that we don't even have to rely on the developers for it.
> I think it might just be the nature of progress, but I hate it.
I suspect that the root cause is that nobody understands what's going on from the UI down to the hardware, and nobody is incentivized (or even allowed) to spend the time it would take to actually do so.
It used to be software would get written, and then iterated over to optimize and refine it to make it smaller/faster. That got too expensive, so the devs just depended on CPU speeds increasing and drives getting larger.
Having used 14.4k dialup on a 486DX2-66MHz at 640 x 480 VGA, I really don't think 2020s technology is outstandingly slow.
I don't even think most phone apps are bloated, because most phone apps - and web sites - are just electronic forms decorated with a bit of eye candy.
Security and reliability worry me far more. Many sites have obvious bugs in $favourite_browser, and some just don't work at all. Some of this is down to ad blocking, but that shouldn't be a problem - and the flip side is that blocking ads, trackers, and unwanted cookies seems to do wonders for page load speeds.
I've used an 8-bit computer with a 300 baud modem. One BBS I dialed into ran on an 8-bit computer with a whopping 4MB of RAM. It was the fastest response time of any computer I've ever used. Everything was in RAM, and coded in assembly with speed and low code size as the main concerns.
Modern computers should be much faster, but they aren't. They do more, but when you do something you notice the slow speed.
The only thing cell phones have on copper land lines is portability. In all other ways they suck. I think we only tolerate this because most of us have completely forgotten how much better land lines were, or we never experienced them to begin with. The first big hit to phones came with the cordless models. No longer comfortable to hold against your ear, the earpieces got flat or even convex, and mashing one up against your head made for an unpleasant conversation. But hey, we got rid of cords! And then came the change to cell phones, with their tiny fraction of available bandwidth, terrible sound quality, high latency, high failure rate, etc.
The astonishing thing is that bandwidth isn't a big deal now, and we could have improved basically all aspects of mobile calls to be within spitting distance of what we used to have 30 years ago.
No wonder people don't like to talk on the phone any more.
> The only thing cell phones have on copper land lines is portability. In all other ways they suck
Isn't that a little like saying "The only thing boats have over cars is that they can go on water. In all other ways they suck"? Portability is the entire point. Even in the "good old days" most people would have accepted nearly any tradeoff for the ability to carry even the simplest global communications device with them.
Depends on your network and your landline, but comparing a fairly iffy mobile network with a fairly common POTS landline, you would definitely see a difference. Packet switching (vs. circuit switching) alone should in principle introduce latency; then there are a lot more interconnects, and on top of that whatever hocus pocus they use to optimise their (digital) bandwidth usage. Of course modern land lines are probably more like mobile now, but it's not unreasonable to expect that circuit-switched analogue POTS is still widely used.
Cell phones have encoding/compression delay on the voice part, delays waiting for a transmit slot if the air interface is time division multiplexed, sometimes a jitter buffer to allow for resends/error correction, and often less than ideal routing (lots of people are using numbers from out of state, which may require the audio path to traverse that state).
Originally POTS was circuit switched analog connection between you and the other party --- only delays from the wires, and maybe amplifiers. Nowadays POTS is most likely digitally sampled at the telco side, but each sample is sent individually --- there's no delay to get a large enough buffer to send, because for multiplexed lines each individual line is sampled at the same rate and a frame has one sample from each.
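To put some very rough numbers on that (these are assumptions for illustration, not measurements of any particular network):

    // Hypothetical one-way latency budget, mobile call vs. classic POTS.
    const cellMs = {
      codecFraming: 20,   // speech codecs typically work on ~20 ms frames
      slotWait: 5,        // waiting for a transmit slot on a TDM air interface
      jitterBuffer: 60,   // buffering for resends / error correction
      extraRouting: 30,   // e.g. hairpinning through a distant "home" network
    };
    const potsMs = { wiresAndAmps: 5 };  // circuit-switched analog: basically wire delay

    const total = (o) => Object.values(o).reduce((a, b) => a + b, 0);
    console.log(`cell ~${total(cellMs)} ms vs POTS ~${total(potsMs)} ms, one way`);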
> Ever noticed how long it takes to kill an app and restart it? Ever notice how much more often you have to do that, even on a 5-month old flagship phone?
Is this an android problem? I don't really ever have to close apps unless the app itself gets stuck into a broken state and forcing close to restart can correct the issue.
> Ever noticed how long it takes to kill an app and restart it? Ever notice how much more often you have to do that, even on a 5-month old flagship phone
This is often intentional. Take a look at any OS or software with animations. Slowness for slowness sake. The macOS spaces change has such a slow animation, it's completely useless. Actually, macOS has a ton of animations to slow things down, but luckily most can be turned off. Not the spaces thing. Android animations are unbearable and slow things down majorly. Luckily they can be turned off, but only by unlocking developer mode and going in there. It's clear whoever designed these things has never heard of UX in their lives. And since these products are coming from companies like Google and Apple, which have UX teams, it leads me to think that most UX people are complete idiots. Or UX is simply not a priority at all and these companies are too stupid to assemble a UX team for their products. Hard to say which is the case.
Or, perhaps, maybe you're just not the target audience and those animations are designed as visual indicators for less experienced users?
Those animations are absolutely a product of well-researched UX design; it's just design that's intended to make the UI more accessible by showing users the flow of information and how the structure of the interface changes in a visual manner, rather than design intended to address the needs of power users. The animations used in the Spaces feature on macOS are a good example of that, where apps and desktops slide and zoom around to make it absolutely obvious that the apps you have open haven't just disappeared. That's quite important for a fairly advanced desktop manipulation feature like that.
Modern operating systems are designed for broad audiences, and that includes people who aren't as savvy with technology as we are. That means accepting some level of tradeoffs between the speed that pro users want, and UI accessibility that necessitates slowing things down somewhat. In the case of desktop OS's there's still usually ways for power users to disable that stuff and of course Terminal for those who don't really need a UI at all. And then there's a lot of different flavors of Linux that make no attempt at appealing to a less technical audience.
But just because you're not the target audience doesn't mean the UX team are "idiots" or that the companies are "stupid". The number of novice or casual users is orders of magnitude higher than the number of power users who care only about efficiency, and for better or worse those users always come first.
I'll believe they have UX teams when they offer an easily accessible option to turn those things off. There's zero reason why they can't target both use cases with a simple toggle to turn animations on and off. The stupidity is expanded when this exists but is not easily accessible. Those that haven't thought of that yet are indeed stupid (Apple). Some videogames have the same issue with unskippable cut scenes. Am I not the target audience there either? If not, then who is? Who wants to watch the same cut-scene a thousand times? The UX is equally horrific in both cases and in both cases, clearly no thought went into the UX whatsoever.
The spaces transition seems relatively fast for me with Accessibility > Display > Reduce Motion on. This switches it from sliding to crossfade.
Locating the window I'm looking for takes longer than the animation and I can start looking immediately during the fade. Even with the ctrl+number shortcuts, I can't get my hands back onto the home row before the animation finishes.
Hands back to home row? Mine never left. The fade is a little better, almost usable, but not quite. Certainly not something I want to see hundreds of times a day while I'm trying to get work done. Same problem with full screen apps which also use this system.
Thank you. It's one of my most-mentioned law or principle, and I keep having to Google it to remind myself what it's called or so I can link it to a colleague. It's fun to be part of a discussion with other people who already know about it (and that's a fairly intelligent discussion, too; the security angle is interesting).
Part of the problem is analogous to traffic congestion / highway lane counts: "if you build it, they will come". More lanes get built, but more cars appear to fill them. Faster connection speeds allow more stuff to be included, and the human tolerance for latency (sub 0.1s?) hasn't changed, so we accept it.
Web sites and apps are saddled with advertising content and data collection code; these things often get priority over actual site content. They use bandwidth and computing resources, in effect slowing everything down. Arguably, that's the price we pay for the "free internet"?
Finally (and some others have mentioned this), software development practices are partially to blame. The younger generation of devs were taught to throw resources at problems, that dev time is the bottleneck and not CPU or memory -- and it shows. And that's those with some formal education; many devs are self-taught, and the artifacts of their learning manifest in the code they write. This is particularly true in the JS community, which seems hellbent on reinventing the wheel instead of standing on the shoulders of giants.
I was on AT&T's website the other day (https://www.att.com/) because I am a customer, and I was just astonished at how blatantly they're abusing redirection, and at the general speed of the page (i.e. it takes 5-10 seconds to load your "myATT" profile page on a 250 Mbps up/down connection).
It's 2020. This should not be that hard. I've worked at a bank and know that "customer data" is top priority but at what point does the buck stop? Just because you can, doesn't mean you should.
Hundreds of comments, yet not one questions whether the premise of the article might be flawed. They're using "onload" event times and calling this "webpage speed" (there's no such thing as webpage speed, btw[0]). It's well known that onload is not a very reliable metric for the visual perception of page loading[1] (visual perception of loading = what most people think of as "page speed"); that's why we have paint metrics (LCP, FCP, FMP, SI, etc). Tools like PageSpeed Insights/Lighthouse don't even bother to track onload.
In fact, HTTPArchive (the source of data the article uses) has been tracking a lot of performance metrics, not just onload. Some have been falling, some have been rising, and it depends on the device/connection. Also, shaving 1 second off a metric can make a huge difference. These stats are interesting to ponder about, but you can't really make any sweeping judgements about it.
It looks like people just want to use this opportunity to complain about JavaScript and third party scripts, but for above-the-fold rendering, this isn't usually the only issue for most websites. Frequently it's actually CSS blocking rendering or other random things like huge amounts of HTML, invisible fonts, or below-the-fold images choking the connection. Of course, this doesn't fit the narrative of server-side vs client-side dev very well, so maybe that's why there's hundreds of comments here without any of them being an ounce skeptical of the article itself.
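For anyone who wants to check the paint metrics mentioned above on their own pages, they can be read straight from the browser with the standard PerformanceObserver API; a minimal sketch to paste into DevTools (which entry types are available varies by browser):

    // First Paint / First Contentful Paint
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        console.log(entry.name, Math.round(entry.startTime), 'ms');
      }
    }).observe({ type: 'paint', buffered: true });

    // Largest Contentful Paint: the last entry seen is the current candidate
    new PerformanceObserver((list) => {
      const entries = list.getEntries();
      const last = entries[entries.length - 1];
      console.log('largest-contentful-paint', Math.round(last.startTime), 'ms');
    }).observe({ type: 'largest-contentful-paint', buffered: true });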
One thing that bothers me is how browsers themselves are becoming ridiculously slow and complicated.
I made a pure HTML and CSS site, and it still takes several seconds to load no matter how much I optimize it. After launching some in-browser profiling tools, I saw that most of the time is spent on the browser building and rebuilding the DOM and whatnot several times; the download of all the data takes 0.2 seconds, and all the rest of the time is the browser rendering stuff and tasks waiting on each other to finish.
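One cheap way to see the split described here (download time vs. the browser's own parsing and rendering work) is the Navigation Timing API; a quick sketch to run in the console after the page has finished loading:

    const [nav] = performance.getEntriesByType('navigation');
    console.table({
      'network (request + response)': nav.responseEnd - nav.startTime,
      'DOM parsing until DOMContentLoaded': nav.domContentLoadedEventEnd - nav.responseEnd,
      'subresources + load event': nav.loadEventEnd - nav.domContentLoadedEventEnd,
    });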
Yeah. Because modern tech is bloat. Started on a JavaScript-based search tool the other day. ALL the JavaScript is hand-coded. No libraries, frameworks, packages. No ads. Just some HTML, bare, bare-bones CSS, and vanilla JavaScript. Data is sent to the browser in the page, where the user can filter as needed.
It's early days for sure, and lots of the code was written to work first and be efficient second, so it will grow over the next few weeks. But even when finished it will be nowhere near the size or (lack of) speed of modern web apps/pages/things.
Rather, it has slowed down with some websites; or those websites did not exist back then, because they would not have been possible.
Just this morning, when I opened my browser profile with Atlassian tabs (Atlassian needs to be contained in its own profile), there were perhaps 7 or 8 tabs, which were loaded because they are pinned. It took this 7th-gen Core i7 approximately 15-20s, at 100% CPU usage on all cores at the same time, to render all of those tabs. Such a thing used to be unthinkable. Only in current times do we put up with such a state of affairs.
As a result I had Bitbucket showing me a repository page, Jira showing me a task list, and a couple of wiki pages, which render something akin to markdown. Wow, what an utter waste of computing time and energy for such a simple outcome. In my own wiki, which covers more or less the same set of actually used features, that stuff would have been rendered within 1-2s and with no real CPU usage at all.
Perhaps this is an outcome of pushing more and more functionality into frontend client-side JS, instead of rendering templates on the server-side. As a business customer, why would I be entitled to any computation time on their servers and a good user experience?
Not a surprise. Most people writing commercial front-end code have absolutely no idea how to do their jobs without 300MB of framework code. That alone, being able to write to the basic standards and understand simple data structures, justifies my higher-than-average salary as a front-end developer without my having to do any real work at work.
There is also a big "runs-on-my-machine" factor, where the machine is the developer's high-tier laptop hooked up to gigabit LAN and fiber WAN with <5ms ping and >100Mbit symmetric bandwidth.
Luckily there are tools like Lighthouse[0], but with all the abstractions and frameworks in between it is often impossible to introduce the required changes without messing up the quality or complexity of the code/deployment.
An uncompressed RGB 1920x1080 bitmap is 6,220,800 bytes. When your webpage is heavier than a straight-up, uncompressed bitmap of it would be... something's gone wrong.
We're not quite there, since web pages are generally more than one screen, but we're getting close. Motivated searchers could probably find a concrete example of such a page somewhere.
What’s funny is that’s how Opera Mini achieves its great compression for 2G and 3G network use... it renders mostly server-side, with link text and position offsets/sizes, last I used it...
No, you cannot add the whole C++ std lib into our code. Yes, I know it is useful. Yes, that would save you 2 hours of work. However, the code no longer even fits into the 1MB of flash we have. Yes, we can ask for a new design; management would love to spend 500k on spinning a new design and getting all of the paperwork for it done, and our customers would love replacing everything they have with functionally equivalent hardware that now costs 50 dollars more each. But at least you saved 2 hours writing some code.
I understand where you're coming from, and I appreciate the constraints of embedded system design, but this is a pretty extreme scenario.
Management should seriously consider whether the 0.01c saved on that 8Mb chip is worth the design overhead from very tight constraints. There is most likely a pin-compatible 16Mb chip that would eliminate all the pain.
Yes, I know that in high volumes every fraction of a penny counts. But if you frequently find yourself engineering your way out of trivial constraints, you might be doing it wrong.
My point was that it is not a new thing. 8MB would probably have been well over 250 dollars worth of new parts at that time. Plus the re-cert, and whether you could fit it on the existing board. These days you would not be talking about such a small amount, even in the initial design (hopefully).
No. It’s a tragedy of hiring bias. Interviewers, when conducting mass interviews to fill an open position, tend to prefer what they perceive to be is either the strongest technical candidate or the candidate who makes them feel the least insecure.
Then the candidate is selected and goes to work where the selection bias fades into expectations of conformity onto a bell curve. The people who conform to the middle of the curve are generally the ideal employees. The people at the low end of the curve are either released or retained as padding against future layoffs. The people at the high end of the bell curve are an anomaly. Those people are far more productive but are willing to use less popular conventions to achieve superior results which tends to result in friction.
Computers are AMAZINGLY fast, EVEN running JavaScript. Most of us have forgotten how fast computers actually are.
The problem is classes calling functions calling functions calling libraries calling libraries.....etc etc
Just look at the depth of a typical stack trace when an error is thrown. It's crazy. This problem is not specific to JavaScript. Just look at your average Spring Boot webapp - hundreds of thousands of lines of code, often to do very simple things.
It's possible to program sanely, and have your program run very fast. Even in JavaScript.
I think the problem is that languages like Javascript and object oriented languages in general actually incentivize this kind of design. Most of the champions of OOP rarely ever look at stack traces or anything relating to lower-level stuff (in my experience, in general). Then you take that overhead to the browser and expect it to scale to millions of users. It just doesn't make sense. No amount of TCO is going to fix the problem either.
APIs are going to be used as they're written, and as documented. So as much as there is a problem with people choosing to do things wrong, I think the course correction of those people is a strong enough force. At least in comparison to when the design incentivizes bad performance. There's basically nothing but complaining to the sky when the 'right' way is actually terrible in practice.
I don't think people are claiming javascript execution speed is the culprit. Javascript can be slow but computers are also fast. However, loading all that javascript takes a long time especially if the website isn't optimized properly and blocks on non-critical code.
This assumes that the thing that should be held constant is complexity, and that the loadtimes will therefore decrease. On the contrary, loadtime itself is the thing being held (more or less) constant, and the complexity is the (increasing) variable.
Progress is not being able to do faster the same things we used to do, but to be able to do more in the same amount of time.
These seem to be equivalent, but they're not, because the first is merely additive, but the second is multiplicative.
As a backend dev now working on frontend tasks, primarily with JavaScript and TypeScript, I think I might have an insight. Server-side engineering is in some sense "well-defined". Software such as the JVM and operating systems behave in a rather well-defined manner, support for features is predictable, and by front-end standards things move slowly, giving the developer time to understand the platform and use it to its fullest.
The browser platforms are a total mess. An insane number of APIs, a combinatorial explosion of what feature is supported on what platform. And web applications move fast. REAL fast. Features are rolled out in days, fixes in hours, and frameworks come and go out of fashion in weeks. It is no longer possible for devs to keep up with this tide of change, and they seem to end up resorting to libraries for even trivial tasks, just to get around this problem of fancy APIs and their incorrect implementations and backwards compatibility. And needless to say, every dependency comes with its own burden.
Web platforms are kind of a PITA to work with. On one hand Chrome/Google wants to redefine the web to suit their requirements, and FF, the only other big enough voice, really lags in terms of JS performance. Most devs nowadays end up simply testing on Chrome and leaving it at that. My simple browser benchmarks show anywhere between a 5-30% performance penalty for FF vs Chrome.
Unless we slow down the pace of browser API changes and stop releasing a new version of JS every year and forcing developers to adopt them, I guess slow web will be here to stay for a while.
You could probably say the exact thing about video game consoles and loading times/download speeds/whatever. The consoles got more and more powerful, but the games still load in about the same amount of time (or more) than they used to, and take longer to download.
And the reasoning there is the same as for this issue for webpage speed or congestion on roads; the more resources/power is available for use, the more society will take advantage of it.
The faster internet connections get, the more web developers will take advantage of that speed to deliver types of sites/apps that weren't possible before (or even more tracking scripts than ever). The more powerful video game systems get, the more game developers will take advantage of that power for fancier graphics and larger worlds and more complex systems. The more road capacity we get, the more people will end up driving as their main means of transport.
There's a fancy term for that somewhere (and it's mentioned on this site all the time), but I can't think of it right now.
One of my heuristics for video game quality is the main menu transition speed -- you only care about the menu animations once, on first view; after that, you want to get something done (e.g. fiddle with settings, start the game, manage items, etc). So it should be fast, or whatever animation it has should be skippable with rapid tapping. Any game designer who doesn't realize this likely either doesn't have any taste, or is not all that interested in anyone actually playing the game.
This heuristic has served me stupidly well, and it repeatedly gets triggered on a significant proportion of games -- and comes out correct.
The actual level loading times of games don't matter all that much. Games go out of their way to be (or feel) slow/sluggish/soft/etc.
Despite an increase in computer speed, software isn’t faster. It does more (the good case) or it’s simply sloppy, but that’s not necessarily a bad thing because it means it was cheaper/faster to develop.
Same with web pages. You deliver more and/or you can be sloppier in development to save dev time and money. Shaking a dependency tree for a web app, or improving the startup time for a client side app costs time. That’s time that can be spent either adding features or building another product entirely, both of which often have better ROI than making a snappier product.
ROI might say a developer should build a different product instead of speeding up the old one. Or perhaps it’s better to get 200 less satisfied customers than make the 100 existing ones more satisfied. That can be done by using the resources for marketing, features, SEO. In the end, when you are optimizing there is always something you are not doing with that time.
Whether hundreds of users value the time they gain by not waiting for page loads isn’t relevant either unless it actually converts to more sales (or some other tangible metric like growth).
Most people seem to get more confused and hesitant when pages are loaded with more features, most of which are irrelevant to their needs of the moment. (Of course flat design makes this hesitation worse.)
And theory talks about "cognitive overload" and "choice paralysis".
This is caused by induced demand. This comes up a lot for car traffic [0]. If you build wider roads, you will almost always just see an increase in traffic, up to the point where the roads are full again. The metaphor is not perfect, but I think it is fairly apt.
Expanding infrastructure increases what people can do, and so people do more things. In some cases, it just decreases the cost of engineering (you can use more abstractions to implement things more quickly, but at the cost of slower loading sites). But in the end, you should not expect wider pipes to improve speeds.
It’s the marketing team’s fault. I proposed a common, standardized solution for showing promotions on our website, but no... they wanted iframes so their designers could use a WYSIWYG editor to generate HTML for the promotions. This editor mostly generates SVGs, which are then loaded into the iframes on my page. Most of our pages have 5-10 of these iframes.
Can someone from Flexitive.com please call up my marketing coworkers and tell them that they aren’t supposed to use that tool for actual production code?
Can someone also call up my VP and tell them they are causing huge performance issues by implementing some brief text and an image with iframes?
Can someone fire all of the project managers involved in this for pushing me towards this solution because of the looming deadline?
The reason websites have gotten worse is because they don't make performance a priority. That's all it is. Most sites optimize for ad revenue and developer time (which lowers costs) instead.
The reason is that web designers treat newly improved performance as an excuse either to throw in more load (more graphics, higher quality graphics, more scripts, etc.) or to let themselves produce faster at the cost of performance.
Nowadays it is not difficult to build really responsive websites. It just seems designers have other priorities.
It frustrates me that the same applies to CPU power, RAM, and disk space. We have orders of magnitude more of each, but the responsiveness of apps remains the same. At least from my subjective experience.
If someone has a good explanation of what has happened, I'd love to know the cause and what can be done to fix this.
I understand that some of this has gone to programmer productivity and increased capabilities for our apps, but what we've gotten doesn't seem proportional at all.
I frequent one forum only through a self-hosted custom proxy. This proxy downloads the requested page HTML, parses the DOM, strips out scripts and other non-content, and performs extensive node operations of searching and copying and link-rewriting and insertion of my own JS and CSS, all in plain dumb unoptimized PHP.
Even with all of this extra work and indirection, loading and navigating pages through the proxy is still much faster than accessing the site directly.
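The proxy described here is PHP, but the idea is small enough to sketch in a few lines of Node (jsdom assumed as the DOM parser, and the forum origin is a placeholder):

    const http = require('http');
    const { JSDOM } = require('jsdom');  // assumed dependency

    http.createServer(async (req, res) => {
      const upstream = await fetch('https://forum.example.com' + req.url);  // placeholder origin
      const dom = new JSDOM(await upstream.text());
      const doc = dom.window.document;

      // Strip scripts and other non-content, then inject a local stylesheet.
      doc.querySelectorAll('script, iframe, noscript').forEach((el) => el.remove());
      const style = doc.createElement('link');
      style.rel = 'stylesheet';
      style.href = '/my-overrides.css';  // hypothetical local CSS
      doc.head.appendChild(style);

      res.writeHead(200, { 'content-type': 'text/html; charset=utf-8' });
      res.end(dom.serialize());
    }).listen(8080);

The link rewriting and other node surgery mentioned above would slot in next to the querySelectorAll calls.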
I'm developing a tiny-site search engine. Upvote if you think this product would interest you. The catalog would be sites that load in < 2s with minimal JS.
My initial prototype has been selecting sites based on, e.g., overall asset size (html + js + img) or DOM size. I'm trying to identify key "speed" indicators that can be inferred without requiring a full browser (which is very expensive for indexing).
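A sketch of the kind of cheap, no-browser indicators that could work (crude by design; a real index would presumably also weigh the sizes of linked assets):

    // Fetch only the HTML and derive a few rough "speed" signals from it.
    async function quickIndicators(url) {
      const res = await fetch(url);
      const html = await res.text();
      return {
        htmlBytes: Buffer.byteLength(html),
        scriptTags: (html.match(/<script\b/gi) || []).length,
        externalScripts: (html.match(/<script[^>]+src=/gi) || []).length,
        images: (html.match(/<img\b/gi) || []).length,
        approxDomNodes: (html.match(/<[a-z][^>]*>/gi) || []).length,  // rough DOM-size proxy
      };
    }

    quickIndicators('https://example.com').then(console.log);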
Going to whatever random media site without content blocking enabled is a couple of MB per page load (the size of SNES ROMs... for text!).
With content blockers enabled it was a couple of KB per page load.
Three orders of magnitude difference in webpage size due to data harvesting...
Now, imagine how much infrastructure savings we would have if suddenly web browsing was even just 1 order less data usage. Would be fun to calculate the CO2 emission savings, ha.
In spite of an increase in mobile CPU speed, mobile phone startup times have not improved (in fact they have become slower).
In spite of an increase in desktop CPU speed, the time taken to open AAA games has not improved.
In spite of an increase in elevator speed, the time taken to reach the median floor of a building has not improved.
My point is, "webpage" has evolved the same way as mobile phones, AAA games and buildings - it has more content and features compared to 10 years ago. And there is really no reason or need to making it faster than it is right now (2-3 seconds is a comfortable waiting time for most people).
To put things in perspective:
Time taken to do a bank transfer is now 2-3 seconds of bank website load and a few clicks (still much to improve on) instead of physically visiting a branch / ATM.
Time taken to start editing a word document is now 2-3 seconds of Google Drive load instead of hours of MS Office Word installation.
Time taken to start a video conference is now 2-3 seconds of Zoom/Teams load instead of minutes of Skype installation.
>My point is, "webpage" has evolved the same way as mobile phones, AAA games and buildings - it has more content and features compared to 10 years ago. And there is really no reason or need to making it faster than it is right now (2-3 seconds is a comfortable waiting time for most people).
What features? I don't know anything substantive a site can deliver to me today that it was not capable of 10 years ago. The last major advance in functionality was probably AJAX, but that doesn't inherently require huge slowdowns and was around more than 10 years ago.
The rest of your comparisons are dubious:
>Time taken to do a bank transfer is now 2-3 seconds of bank website load and a few clicks (still much to improve on) instead of physically visiting a branch / ATM.
This is the same class of argument as saying (per Scott Adams), "yeah, 40 mph may seem like a bad top speed for a sports car, but you have to compare it to hopping" (or to the sports cars of 1910). Yes, bank sites are faster than going to an ATM. Are they faster than bank sites 20 years ago? Not in my experience.
>Time taken to start editing a word document is now 2-3 seconds of Google Drive load instead of hours of MS Office Word installation.
Also not comparable: you pay the MS Word installation time-cost once, and then all future launches are near instant. (This also applies to your Skype installation example.)
> What features? I don't know anything substantive a site can deliver to me today that it was not capable of 10 years ago. The last major advance in functionality was probably AJAX, but that doesn't inherently require huge slowdowns and was around more than 10 years ago.
Okay, a site that was announced less than 24 hours ago. That's not what a typical site looks like, and it doesn't demonstrate your claim that most of these bloated sites are only bloated to provide advanced functionality that they couldn't provide otherwise. Did Buzzfeed or the typical news site just start offering video editing?
This hipster attitude of replacing proper software with barely functional webapps really gets on my nerves.
People use and will continue using Skype and especially MS Office. It is much more functional than the gSuite alternatives, and moving people to castrated and slow webapps is not progress.
Here on HN we like to complain about JS frameworks and Single Page Apps. Yes, they can be slow. But they also power some great interactive web experiences like Figma or Slack that just aren't feasible to build any other way.
The low hanging fruit here is content websites (news, blogs, etc) which are loaded down with hundreds of tracking scripts, big ads, and tons of JS that has nothing to do with the content the user came to read.
Privacy Badger reported 30 (!!!) tracking scripts on that page. Even with PB blocking those, it still takes ~15s before the page is usable on my MacBook Pro with a fast connection.
It's just a bunch of text and some picture galleries. It loads like it's an IDE.
Employees building the web pages are rewarded for doing "work". Work typically means adding code, whether it's features, telemetry, "refactoring" etc. More code is generally slower than no code.
That's why you see something like Android Go for entry-level devices & similar "lite" versions targeting those regions. These will have the same problem too over time because even entry-level devices gets more powerful over time.
The problem is that organizations don't have good tools to evaluate whether a feature is worth the cost so there's no back pressure except for the market itself picking alternate solutions (assuming those are options - some times they may not be if you're looking at browsers or operating systems where generally a "compatibility" layer has been defined that everyone needs to implement).
While I agree with the idea and I am not happy about slow apps, the truth is, it's focused on technical details.
People don't care about speed or beauty or anything else than the application helping them achieve their goals. If they can do more with current tech than they could with tech 10-20 years ago, they're happy.
Every piece of statistically backed research on customer behaviour I have ever seen says otherwise. The more you slow down the page or app, the less customers like and use it, or buy the product being sold. As someone with a homemade site for our business, I can say that it is extremely easy to be faster than 95% of sites out there, and it makes a huge difference, also on Google. Getting a tiny business with a homemade website into the top 1-3 on Google was bafflingly easy, because everyone else uses too many external sources and preschool-level code. Especially the so-called experts. Most are experts in bloat.
I've made a multiplayer webgame (https://qdice.wtf) that is under 400kb total download on first load [1]. Even when booting into a running game it's not much higher.
Load times and bloat are one of my pet peeves, that's why I optimized a lot for this, although there is still room for improvement.
Everything is self hosted, no tracking bullshit, no external requests. I used Elm, which apart from being nice for learning FP, has a very small footprint compared to other DOM abstractions.
[1] Last time I looked, it might have grown a tiny bit due to UGC. I don't have access to a computer rn.
(yes he actually said that, to an entire department, the context was that people will fill the pipe up with junk if they're not careful and it made more room to deliver value by not sucking)
Exactly this. I remember when programmers took the time to make sure their programs didn't take up a lot of memory. As we got more RAM, many became lazy about memory optimization because, well... the computer has plenty. Same thing here with webpages. There was a time when you needed to optimize your site because of the modem that everyone used. Now everyone has DSL or faster, so there isn't an incentive to optimize your site.
Of course not. People remain convinced that the internet will cease to exist without advertisements all over the place. Web pages are now 10MB+ in size, making 20 different DNS calls, all of which add latency. And for what? To serve up advertisements wrapped around (or laying themselves over) the content that we came to read in the first place.
Maybe I'm just old, but I fondly remember web pages that loaded reasonably fast over a 56k modem. These days, if I put anything on the web, I try to optimize it the best I can. Text only, minimal CSS, no javascript if at all possible.
To me it’s more evidence that increased speed and reduced latency is not where our real preferences lie: we may be more interested in the capabilities of the technology, which have undoubtedly improved.
Wages constantly increase due to economic prosperity (on average, I realize they have dwindled in the past 50 odd years), and every single year the majority of people have nothing in their savings accounts.
It's been that way since the dawn of time. [0]
This is a human economy problem, not a technological one imho.
If you give a programmer a cookie, she is going to ask for a glass of milk.
Here's an idea I posted on reddit yesterday. Seemed like it was shadowbanned or just entirely ignored.
# Problem
Websites are bloated and slow. Sometimes we just want to be able to find information quickly without having to worry about the web page freezing up or accidentally downloading 50MB of random JavaScript. Etc. Note that I know that you can turn JavaScript off, but this is a more comprehensive idea.
# Idea
What if there was a network of websites that followed a protocol (basically limiting the content for performance) and you could be sure if you stayed in that network, you would have a super fast browsing experience?
# FastWeb Protocol
* No JavaScript
* Single file web page with CSS bundled
* No font downloads
* Maximum of 20KB HTML in page.
* Maximum of 20KB of images.
* No more than 4 images.
* Links to non-fastweb pages or media must be marked with a special data attribute.
* Total page transmission time < 200 ms.
* Initial transmission start < 125 ms. (test has to be from a nearby server).
* (Controversial) No TLS (https for encryption). Reason being that TLS handshake etc. takes a massive amount of time. I know this will be controversial because people are concerned about governments persecuting people who write dissenting opinions on the internet. My thought is that there is still quite a lot of information that in most cases is unlikely to be subject to this, and in countries or cases where that isn't the case, maybe another protocol (like MostlyFastWeb) could work. Or let's try to fix our horrible governments? But to me if the primary focus is on a fast web browsing experience, requiring a whole bunch of expensive encryption handshaking etc. is too counterproductive.
# FastWeb Test
This is a simple crawler that accesses a domain or path and verifies that all pages therein follow the FastWeb Protocol. Then it records its results to a database that the FastWeb Extension can access.
# FastWeb Extension
Examines links (in a background thread) and marks those that are on domains/pages that have failed tests, or highlights ones that have passed tests.
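A hypothetical sketch of what the FastWeb Test could check for a single page, using a few of the limits proposed above (the thresholds come from the list; everything else, including the crude timing, is made up for illustration):

    async function checkFastWeb(url) {
      const start = Date.now();
      const res = await fetch(url);
      const html = await res.text();
      const failures = [];

      if (Date.now() - start > 200) failures.push('transmission took > 200 ms');
      if (Buffer.byteLength(html) > 20 * 1024) failures.push('HTML > 20KB');
      if (/<script\b/i.test(html)) failures.push('contains JavaScript');
      if ((html.match(/<img\b/gi) || []).length > 4) failures.push('more than 4 images');
      if (/<link[^>]+rel=["']?stylesheet/i.test(html)) failures.push('CSS not bundled into the page');

      return { url, pass: failures.length === 0, failures };
    }

    checkFastWeb('http://example.com').then(console.log);

A real crawler would follow links within the domain, check image byte totals, and record results to the database the extension reads.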
The degree to which desktop load times are stable over 10 years is in itself interesting and deserves more curiosity than just saying "javascript bad".
Plausible alternate hypotheses to consider for why little improvement:
* Perhaps this is evidence of control theory at work, i.e. website operators are actively trading responsiveness for functionality and development speed, converging on a stable local maximum?
* Perhaps load times are primarily determined by something other than raw bandwidth (e.g. latency, which has not improved as much)?
* Perhaps this is more measuring the stability of the test environment than a fact about the wider world?
If this list of changes is accurate, that last point is probably a significant factor -- note that e.g. there's no mention of increased desktop bandwidth since 2013.
While I don't disagree about this problem, most of the comments in this thread are lacking significant context and ignoring obvious problems with server side rendering.
Wordpress is arguably the best and most prominent example of SSR. It is horrible, and a vanilla install of Wordpress generally returns content in 2-3 seconds.
While Javascript adds bloat to the initial page load, generally it significantly reduces (or eliminates entirely) further page loads on a domain. For example, if I have a Vue app, it might take an extra second to load, but then it will never have to load any client-side assets again (technically).
The other thing about most of these arguments is that they are disingenuous when it comes to payloads and computing. It may take a significant amount of processing power to generate a JSON payload, but it will most certainly take an even larger amount to generate all of the scaffolding that goes with a normal HTML page. Redrawing the HTML on each page load also increases overall network traffic, duplicates load across every page on the service (see Wordpress, again), and centralises front-end complexity in the backend.
On one hand, yes, end-user expectations have gone up. Back in the early naughts it was perfectly fine to wait ~8 seconds for an image to load block by block, kicking around the layout and content as it did so - and that was the status quo. It was fine. Nowadays if I don't get all icons and thumbnail-ready images near-immediately I assume something is wrong at some layer.
Another factor is how things are going over the wire. It's easy to point to web developers and say "Why not use SSR everywhere?" while they'll point back and say "Client-side rendering lets the server scale better". As with most such complaints, the truth is often somewhere in the middle - SSR should be aggressively used for static content but if you have a non-trivial computation that scales linearly it is worth considering offloading to the client, especially if you're running commodity hardware.
Then there's the question of what we're doing. It very much used to be the case that most everything I did was over an unsecured connection and virtually all interactions resulted in page navigation/refresh - never anywhere close to being below my perception. Nowadays, many actions are below my perception (or at least eagerly evaluated such that it seems they are) while non-trivial actions are often going through SSL, requests balanced across multiple app servers, tokens being authed, and eldritch horrors are invoked by name in UTF-8 and somehow it all gets back to me around the same time as those page refreshes were back in the day.
This most certainly isn't to say that we don't have room for improvement: we most certainly do. But like most systems, as the capabilities improve so, seemingly, do the requirements and interactions that need to be supported over it.
On a parallel front, it feels like something similar has happened with computers too. Laptops have gotten better (more cores, more RAM, SSDs) over the years, but still the more frequent occurrence seems to be me waiting for the computer to respond, because every tiny application or website now consumes hundreds of MB of RAM and creates memory pressure.
The worst case of this for me was a completely static site which (sans images) loaded in under 100ms on my local machine. I inlined styles, gave all images a width and height so there is no reflow when the image arrives, no chaining of CSS resources, deferred execution of a single javascript bundle, gzip, caching, the works. Admittedly it was a simple page, but hey, if I can't do it right there, where then?
Anyway, it all went to shit as soon as another guy was tasked with adding share buttons (which I have never once in my life used and am not sure anybody else has ever used).
I won't optimize any pages over which I don't have complete control. Maybe if a project has a CI/CD setup that will catch any performance regressions, but other than that, too much effort, thankless anyway, and on any project with multiple fronted devs, the code is a commons and the typical tragedy is only a matter of time.
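For what it's worth, a minimal sketch of the kind of setup described above (inlined critical CSS, explicit image dimensions, a single deferred bundle, gzip, caching), assuming Node with the express and compression packages; file names and contents are placeholders:

```typescript
// Minimal sketch of a "do it right" static page server, assuming the express and
// compression npm packages. File names and contents are placeholders.
import express from "express";
import compression from "compression";

const page = `<!doctype html>
<html>
  <head>
    <style>/* critical CSS inlined: no extra requests, no chained stylesheets */</style>
  </head>
  <body>
    <!-- explicit width/height: the layout never reflows when the image arrives -->
    <img src="/hero.jpg" width="800" height="400" alt="">
    <!-- one bundle, deferred so it never blocks rendering -->
    <script src="/bundle.js" defer></script>
  </body>
</html>`;

const app = express();
app.use(compression()); // gzip responses

app.get("/", (_req, res) => {
  res.set("Cache-Control", "public, max-age=300"); // let browsers and CDNs cache it
  res.type("html").send(page);
});

app.listen(8080);
```

None of this is exotic; the point of the anecdote above is that it only stays fast while someone keeps control of the page.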
Load times in this article are attributed to httparchive.org, which gathers data using WebPageTest. [1] By default, WebPageTest emulates a 5Mbps down / 1Mbps up connection speed for all tests, to provide a more stable basis for performance comparisons. httparchive.org's load times therefore aren't affected by improvements in general network speed.
Am I missing something here? httparchive.org is not an appropriate source for the comparison this article makes. A large repository of RUM data would be needed for that comparison.
Counterintuitively, the stability of page load times in httparchive.org suggests that page performance hasn't improved or worsened enough to make much difference on a 5Mbps connection.
This could be part of it. The shift to mobile computers, which are necessarily wireless, means random round-trip times due to physics, which means TCP back-off. That, combined with the tendency to require innumerable external JS/CDN/etc. resources that each need a new TCP connection, works together to make mobile computers extra slow at loading webpages.
Bandwidth and latency aren't the same thing! High-latency networks sometimes don't ever load some websites, even if there is reasonable bandwidth. I remember one time when I was in Greenland having to VNC to a computer in the US to do something on some internal HR website that just wouldn't load with the satellite latency.
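A back-of-the-envelope illustration of why latency rather than bandwidth dominates on a link like that; the RTTs, request counts, and page size below are assumptions, not measurements:

```typescript
// Toy model: page load time ≈ (sequential round trips × RTT) + (bytes / bandwidth).
// All numbers below are illustrative assumptions.
function loadTimeMs(roundTrips: number, rttMs: number, bytes: number, mbps: number): number {
  const transferMs = ((bytes * 8) / (mbps * 1_000_000)) * 1000;
  return roundTrips * rttMs + transferMs;
}

// A 2 MB page needing ~30 sequential round trips (DNS, TCP, TLS, chained resources):
console.log(loadTimeMs(30, 20, 2_000_000, 50));  // fibre, ~20 ms RTT   -> ~920 ms
console.log(loadTimeMs(30, 600, 2_000_000, 50)); // satellite, ~600 ms RTT -> ~18,320 ms
```

With the same 50 Mbps of bandwidth, the satellite case spends 18 of its 18.3 seconds just waiting on round trips.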
This reminds me a lot of Freakonomics podcast episode (https://freakonomics.com/2010/02/06/the-dangers-of-safety-fu...) where they discuss different cases where increased safety measures just encouraged people to take more risk, resulting in the same or even increased numbers of accidents happening. A good example is that as football helmets have gotten more protective, players have started hitting harder and leading with their head more.
Devs have been given better baseline performance for free based on internet speeds, and adjust their thinking around writing software quickly vs. performantly accordingly, so we stay in one place from an overall performance standpoint.
This is known as Jevons paradox in economics and the classic example in modern times is rates of total electricity usage going up while devices have become ever more energy efficient.
In the case of devs it's not just staying in the same place though. You get more complete frameworks, more analytics, better ad engines, and faster development pace.
It might not be what we wanted, but it is a benefit.
Get over it. The modern web sucks.
My 2003 Thinkpad T41 was never a raging powerhouse, but it was "usable". It is no longer usable. Nothing in the hardware has changed, but the software, the browsers, and the web at large have all changed drastically.
I do embedded system co-design. I write software for a living, but I work closely with the hardware teams, including (at my last employer) ASIC features. I have to flop hats between 'software' and 'hardware' all the time. And there are clearly times that I throw the software team under the bus.
"Hey the last product ran in 128MB, but memory is cheap, so can we have 4GB this time? We want to split all the prior pthreads across containers!"
You think I am joking?
Browsers and web content have done the same.
"But look at all the shiny features?"
I don't want the shiny. I just want the content, and fast.
Another factor is a wider use of all sorts of CMS (WordPress etc) for content presentation, combined with often slower/underpowered shared hosting and script heavy themes.
On some cheap hosts it may take a second just to start up the server instance, and that's before any of the outgoing requests are made!
Yep - to the executives, saving a couple of (theoretical) hours of development work is worth paying a few extra seconds per page load. Of course, the customers hate it, but the customers can't go anywhere else, because the executives everywhere else are looking for ways to trade product quality for (imaginary) time to market.
Engagement falls off when there is a delay in the experience past a certain point, usually considered to be around 100ms to 150ms, with extreme drop-off at a second or higher. This has to do with human perception and can be measured through A/B analysis and the like.
Engagement does not get better if you go faster past that point. Past that point you should build a richer experience, put more things on the page, whatever you want, or reduce cost by spending less on engineering; certainly don't spend more money on a 'feature' (speed) that doesn't return money.
Ad networks are run on deadline scheduling: find the best ad in 50 ms, not any old ad as quickly as possible.
Haven't others who have been involved in engagement analysis found the same?
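A hedged sketch of what that kind of deadline scheduling can look like on the client, with invented bidder functions and scoring rather than any real ad network's API:

```typescript
// Toy deadline scheduler: take whatever candidate ads arrive within the budget,
// pick the best of those, and never wait past the deadline. The bidder functions
// and cpm scoring are hypothetical.
interface Bid { network: string; cpm: number; }

function withDeadline<T>(p: Promise<T>, ms: number): Promise<T | null> {
  return Promise.race([p, new Promise<null>(resolve => setTimeout(() => resolve(null), ms))]);
}

async function pickAd(bidders: Array<() => Promise<Bid>>, deadlineMs = 50): Promise<Bid | null> {
  const settled = await Promise.all(bidders.map(bid => withDeadline(bid(), deadlineMs)));
  const bids = settled.filter((b): b is Bid => b !== null); // only bids that made the cut-off
  return bids.sort((a, b) => b.cpm - a.cpm)[0] ?? null;     // best ad found in time, or none
}
```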
There's a really funny thing that happens with websites on phones. I try clicking something shortly after the page loads and, just as I click it, bam! Something new loads on the page, all the elements shift up or down, and what I just clicked on isn't in its place anymore, so I end up clicking on something completely different. All this happens in the split second between the moment I look and decide to click on something and the moment my finger actually does the tap. This is so, so common across the mobile web. Such a stupid little thing, but also highly annoying. Even if we could collectively solve this one little thing, I would consider our UI sensibility to have improved over the years.
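Browsers can at least measure this kind of shift now. A minimal sketch using the Layout Instability API ("layout-shift" performance entries, currently a Chromium feature):

```typescript
// Log layout shifts that were not triggered by user input -- the kind that move
// a tap target out from under your finger. Requires a browser that implements
// the Layout Instability API.
let cumulativeShift = 0;

new PerformanceObserver(list => {
  for (const entry of list.getEntries() as any[]) {
    if (!entry.hadRecentInput) { // ignore shifts the user caused themselves
      cumulativeShift += entry.value;
      console.log("layout shift", entry.value.toFixed(4), "CLS so far", cumulativeShift.toFixed(4));
    }
  }
}).observe({ type: "layout-shift", buffered: true });
```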
HTTP Archive uses an emulated 3G connection to measure load times, so of course faster internet for real users won't make the reported load times go down.
- Users are not the customers, so there's little point in optimizing for their experience, except to the extent that it impacts the number of users your customers reach with ads.
- Users do not favor faster websites, so as long as you meet a minimum performance bar so they don't leave before the ads load, there's little to gain from optimizing for speed.
- For users that do care about load-time, it's hard to know before visiting a page whether it's fast or not, and by that point the publisher has already been paid to show you the ads.
A helpful solution would be to show the load time as a hover-over above links, so that you can decide not to visit pages with long load times.
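A toy version of that idea might look like the following; it is purely illustrative, only measures time to first response rather than full page weight, and spends an extra request per link to do it:

```typescript
// Hypothetical sketch: annotate each external link with a rough response time in its tooltip.
// This is a thought experiment, not a recommendation: a HEAD request per link is itself wasteful.
async function annotateLinks(): Promise<void> {
  const links = Array.from(document.querySelectorAll<HTMLAnchorElement>("a[href^='http']"));
  for (const link of links) {
    const start = performance.now();
    try {
      await fetch(link.href, { method: "HEAD", mode: "no-cors" });
      link.title = `~${Math.round(performance.now() - start)} ms to first response`;
    } catch {
      link.title = "unreachable";
    }
  }
}
```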
The problem of course isn't the internet "speed" but latency. ISPs advertise hundreds of Mbps but conveniently forget to mention latency, average packet loss and other connection quality metrics.
Correct me if I'm wrong, but I believe ISPs use a combination of hardware and software to “throttle down” network connections as they attempt to download more data. For example, if I try to download a 10GB file on my personal computer, I'll start off at something like 40 Mbps and it will take 15 seconds before I'm allowed to scale up to 300 Mbps. I assume that when downloading things like websites, which should only take tens or hundreds of ms, this unthrottling could also be a significant factor in addition to latency, depending on what the throttling curve looks like.
Also ISPs oversell capacity, which they’ve probably always done, so even if you’re paying for a large bandwidth that doesn’t mean you’ll ever get it.
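Part of that ramp-up is plain TCP slow start rather than deliberate throttling: the congestion window starts small and roughly doubles each round trip, so short transfers finish before the connection ever reaches full speed. A toy model, assuming an initial window of 10 segments of ~1,460 bytes and ignoring loss and window caps:

```typescript
// Toy TCP slow-start model: the congestion window roughly doubles each round trip
// until the transfer finishes. Loss, receive windows, and window caps are ignored.
function rttsToTransfer(bytes: number, initialSegments = 10, mss = 1460): number {
  let windowBytes = initialSegments * mss;
  let sent = 0;
  let rtts = 0;
  while (sent < bytes) {
    sent += windowBytes;
    windowBytes *= 2;
    rtts += 1;
  }
  return rtts;
}

console.log(rttsToTransfer(50_000));     // a ~50 kB page: done in ~3 RTTs, long before
                                         // the window is anywhere near link capacity
console.log(rttsToTransfer(10_000_000)); // a 10 MB download: ~10 RTTs in this toy model,
                                         // so the ramp-up is a small fraction of the total
```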
Could it also be that server resources in general are lower, and that there are more clients per instance than before?
With all this virtualization, Docker containers and really cheap shared hosting plans it feels like there are thousands of users served by a single core from a single server. Whenever I access a page that is cached by Cloudflare it usually loads really fast, even if it has a lot of JavaScript and media.
The problem with JavaScript usually occurs on low-end devices. On my powerful PC most of the loading time is spent waiting for the DNS or server to send me the resources.
You will find gems just by checking the third-party cookies associated with those websites. I can see cookies from dexdem.net, doubleclick.net, and facebook.com on my chase.com account home page.
I used to support a web-based enterprise system used worldwide. After business management escalated a complaint that the system was slow, I showed them that all the bells and whistles, doohickeys, widgets, etc. that they had insisted on as features took up nearly 10MB to download, while Amazon, in comparison, took less than 1MB.
This was taken under advisement and then we got new feature requests the next day that would just add more crap to the download size. But they never complained again.
I remember reading a story of how engineers at Youtube were receiving more tickets/complaints about connectivity in Africa after reducing the page loading time.
They were confused: given the bandwidth limitations and their previous statistics, it didn't make sense (partly because they hadn't had useful statistics from those regions before).
It turned out that, thanks to the reduced page size, African users were finally able to load the page at all - just not the video.
I thought that was interesting; it puts it right up there with the email-at-lightspeed copypasta.
This is an example of Jevons paradox - increases in the efficiency of use of a resource lead to increases in consumption of that resource - https://en.wikipedia.org/wiki/Jevons_paradox
Part of the reason I created Trim (https://beta.trimread.com) was simply the realization that I didn’t want to load 2-7 MB of junk just to read an article.
Trim allows you to often chop off 50% - 99% of the page weight without using any in-browser JavaScript.
Just a few weeks ago I saw a size comparison of React and Preact pages. While Preact is touted as a super-slim React alternative, in real-life tests the problem was the big components, not the framework.
This could imply that we need to slim down code at a different level of the frontend stack. Maybe UI kits?
This could also imply that frontend devs simply don't know how to write concise code or don't care as much as they say.
It's a hard pill to swallow that 20 years since I started using the internet websites perform worse on vastly superior hardware, especially on smartphones.
This sounds a lot like the old argument of developers not being careful about how much memory/cpu they use. Engineers have been complaining about this since the 70s!
As hardware improves, developers realize that computing time is way cheaper than developer time.
Users have a certain latency that they accept. As long as the developer doesn't exceed that threshold, optimizing for dev time usually pays off.
Same with most computers. I recently migrated from an iMac Pro to a Ryzen 3900X with Linux. Linux feels so much faster than the iMac, which now feels very sluggish when I use it - for an 8-core Xeon, 32GB machine. This just shows how computers are kept slow so you want to upgrade </foilhatoff>
It has lazy-loading of images and components, memoized tabs, batched requests, the works. Actually, it could be made a lot faster using browser caching.
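For anyone unfamiliar with the techniques listed, a minimal sketch of image lazy-loading with an IntersectionObserver; the data-src markup is assumed, and native loading="lazy" covers the simple cases these days:

```typescript
// Lazy-load images: only assign the real src once the image scrolls near the viewport.
// Assumes markup like <img data-src="big.jpg" width="800" height="600">.
const io = new IntersectionObserver((entries, observer) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target as HTMLImageElement;
    img.src = img.dataset.src ?? "";
    observer.unobserve(img); // load each image at most once
  }
}, { rootMargin: "200px" }); // start loading a bit before the image becomes visible

document.querySelectorAll<HTMLImageElement>("img[data-src]").forEach(img => io.observe(img));
```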
The issue at its core is HTML. It was not designed for complex, rich interfaces and interactivity that modern web users want. So JS is used. Which is slow and needs to be loaded.
The heavy use of JS is basically just hacking around problems in the core structure of the web: HTML and the DOM.
I think this misses the point. Latency becomes the dominant factor very, very quickly. Increases in connection speed only really help for large file downloads (and sometimes not even those) and for increased user/device concurrency.
Edit: this also explains the REVERSE trend in mobile.
"Progressive Enhancement" touts using the basic building blocks first and growing from there, but it hasn't been updated to explain when and where to offload the computations. Are there any best practices in circulation about balancing the workload?
Part of the problem is that modern JS frameworks make it incredibly easy to mess up performance. I have seen mediocre devs (not bad, but not great) make a mess of what should be simple sites. Not blaming the frameworks, but it is still a problem to be addressed.
I think most software is built with some level of tolerance for performance, and the stack / algorithms / feature implemented are chosen to meet that tolerance. Basically, as hardware gets faster, it's seen as a way to make software cheaper.
I think we need to start differentiating between webpage speeds and web application speeds. Namely, a webpage would work if I disabled JavaScript in my browser, but a web application would not. By this definition, web page speeds have improved a lot.
Try to avoid CSS, JavaScript, animations of any kind, and especially inline pictures and videos. This can reduce the time needed to load it greatly. (There are times where the things I listed are useful, but they should generally be avoided.)
This would be a useful metric to use in ranking websites.
The bloat of a web page seems to be inversely related to the value of the contents. The best websites have little in the way of graphics, but are information dense.
This is a phenomenon of all progress though, right? The more roads you have, the more cars you get; the faster the computer, the slower the software. Andy's law: the faster something gets, the lazier humans can become.
And then there's the fact that Chrome hasn't added HTTP/3 to mainline even behind a flag, even though the version that their own sites use has been enabled by default in mainline Chrome for years.
Exactly. Internet speed never was the issue. I am in a place where the international network speed is a fraction of domestic speeds (often less than one Mbit), yet websites are still just as fast. It mostly depends on how fast the server responds to the request, and almost never on how much the site has to load, unless a larger number of images is involved.
Server side rendering: page loads super fast, cool. You click on a menu: wait for the new page to come. Click on another button: wait. Click, wait, click, wait, click, wait...
The larger problem is that the web was never meant to be used the way we use it. We should be making cross-platform apps that use simple data feeds from remote sources.
Title should be "Despite an increase in Internet speed, webpage speeds have not improved", since webpage speeds have not acted in spite of internet speed.
It’s much like the problem of induced demand in transportation: more capacity brings more traffic. More JavaScript. More ad networks. More images. More frameworks.
Blinn's law says: As technology advances, rendering time remains constant. Usually applied to computer (i.e. 3D) graphics, but seems applicable here too.
The problem is that we are trying to make websites do what they were never meant to. We should be making cross-platform apps that use simple data feeds.
Really? My text only websites that I've written and hosted for myself are really snappy. I wouldn't know the feeling.
All I needed to do was spend a weekend scraping everything I needed so that I could self-host it and avoid all the ridiculous network/CPU/RAM bloat from browsing the "mainstream" web.
Just a little snark in defence of client-side use: let me know when you find a responsive bin-packing algorithm I can run server-side that doesn't choke the DOM.
Now, it's pretty much a normal news website in that it shows a long list of articles, some pictures and then text.
I am running a standard laptop computer given to me by my company. My internet connection is pretty fast. Even with ads blocked on the entire website, that thing is slooow.
1. The pictures have an effect where they are rendered in increasing quality over time, supposedly so you see them earlier.
This doesn't work, as they load much more slowly than normal HTML pictures that load instantly given my internet connection.
2. The scrolling is more than sluggish. This is, in part, because the website only loads new content after you scroll down. So instead of having a website that loads and where you can just scroll, which would make TONS of sense for a website where you quickly want to check the headlines, you have this terrible experience where every scrolling lags and induces a new "loading screen".
3. If you click on an article, it is loaded as a single-page app with an extra loading screen, which is somehow slow as well.
4. Once in the article, the scrolling disaster continues. But now even the text loads slowly while you scroll. How can you not just have the text load instantly? It's a news website. I want to read! I don't want to scroll, wait for the load, and then continue to read.
5. There is a second scrolling bar besides my browser's scrolling bar. Why? Who thought that's a good idea? The scrolling bar's top button disappears behind the menu bar of the website. Why?
6. To use this website, one needs to scroll through the whole article to get it to load, then scroll back up, then read. Still, each time the menu bar changes size due to scrolling, my computer gets sluggish.
7. Javascript Pop ups. Great.
8. Every time your mouse cursor moves accidentally over any element of the website, gigantic pop ups show up out of nowhere and you can't continue reading. Annoying!
This website presents news. It's not better at it than earlier ones, it's worse. None of the things make the experience any better and it gives no more benefit to reading news than older, plain html news websites. The reading experience is an unmitigated disaster for no reason whatsoever.
Who greenlit this? Why?
If you are a web developer, you work in a business where the state of the art has notably gotten worse. A lot worse.
At this stage, I would be seriously worried about the reputation of the profession if I were you. Sad!
I spent a decent amount of time making my website how I want the web to be: fast, straightforward and unintrusive.
Making it fast was pretty easy. Remove anything that isn't directly helping the user, compress and cache everything else, and use HTTP2 Server Push for essential resources. There were other optimisations, but that took me below the 500ms mark. At ~300ms, it starts feeling like clicking through an app - instant.
However, there's no point in serving slimy GDPR notices, newsletter prompts and SEO filler text at lightning speed. Those add a lot more friction than an extra 500ms of load time.
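A minimal sketch of the Server Push part mentioned above, using Node's built-in http2 module; the certificate paths and CSS body are placeholders, and browsers have since been phasing out push support, so treat it as an illustration of the comment rather than a recommendation:

```typescript
// Minimal sketch of HTTP/2 Server Push with Node's built-in http2 module.
// Certificate paths and the CSS body are placeholders.
import { createSecureServer } from "node:http2";
import { readFileSync } from "node:fs";

const server = createSecureServer({
  key: readFileSync("server.key"),  // placeholder paths
  cert: readFileSync("server.crt"),
});

server.on("stream", (stream, headers) => {
  if (headers[":path"] !== "/") {
    stream.respond({ ":status": 404 });
    stream.end();
    return;
  }

  // Push the critical stylesheet alongside the HTML so it costs no extra round trip.
  stream.pushStream({ ":path": "/main.css" }, (err, push) => {
    if (err) return;
    push.respond({ ":status": 200, "content-type": "text/css" });
    push.end("body{font-family:sans-serif}");
  });

  stream.respond({ ":status": 200, "content-type": "text/html" });
  stream.end(`<!doctype html><link rel="stylesheet" href="/main.css"><h1>Fast page</h1>`);
});

server.listen(8443);
```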
Quite frankly, this is bigger than the server-vs-client comments I've seen, and it is not some new phenomenon. The efficiency of code and architecture has declined over time for at least the last 30 years. As compute and storage costs have come down dramatically, the demand for labor has gone up. Who decides what's really important in a project? The business, and that comes down to cost. If hardware and architecture are cheap, then you save money by pointing your human resources at output rather than at efficient code...
An observation I've made over decades is that people stop optimising when things get "good enough". That threshold is typically 200ms-2s, depending on the context. After that, developers or infrastructure people just stop bothering to fix issues, even if things are 1,000x slower than they "should be" or "could be".
Call this the performance perceptibility threshold, or PPT, for want of a better term.
There's a bunch of related problems and effects, but they all seem to come back to PPT one way or another.
For example, languages like PHP, Ruby, and Python are all notoriously "slow", many times slower than the equivalent program written in C#, Java, or whatever. When they were first used to write websites with minimal logic - basically 90% HTML template with a few parameters pulled from a database - this was okay, because the click-to-render time was dominated by the slow internet and slow databases of the era. There was, a decade ago, an acceptable trade-off between developer-friendliness and performance. But inevitably feature-creep set in, and now enormous websites are written entirely in PHP, with 99% of the content dynamically generated. With rising internet speeds and dramatic performance improvements in databases, PHP "suddenly" became a huge performance pain point.
In that scenario, the root cause is that the attitude of "PHP/Python/Ruby is acceptable because lightweight code written in them falls under the PPT" is a false economy. Eventually people will want a lot more out of those systems - they'll want heavyweight applications - and by then the lock-in to the language is a mistake that cannot be unwound.
The most absurd example of this is probably Python -- designed for quick and dirty lightweight scripting -- used for big data and machine learning, some of the most performance intensive work currently done on computers.
Similarly, I see astonishingly wasteful network architectures, especially in the cloud. Wind the clock back just 10 years and network latencies were vastly lower than mechanical-drive random seek times, so practically any topology would work. Everything split into subnets. Routers everywhere. Firewalls between everything. Load balancers on top of load balancers. Applications broken up into tier after tier. The proxy talking to the app layer talking to a broker talking to a service talking to a database talking to remote storage. Nobody cared, because the sum fell under the PPT. I've seen apps with half-second response times to a trivial query, and that's still "acceptable". Multiply that by the 5 or so round trips for TCP+TLS at every layer, because security must be end-to-end these days, and it's not uncommon to see apps starting to approach the 2-second mark.
These days, typical servers have anywhere between 20 to 400 Gbps NICs with latencies measured in tens of microseconds, yet apps are responding 10,000x slower even when doing no processing. Why? Because everyone involved has their own little problem to solve, and nobody cares about the big picture as long as the threshold isn't exceeded. HTTPS was "easy" for a bunch of web-devs moving into full-stack programming. Binary RPC is "hard" and they didn't bother, because for simple apps it makes "no difference" as both fall under the PPT.
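A rough worked example of how those layers add up; the tier counts, per-hop latencies, and setup round trips below are assumptions:

```typescript
// Toy model of request latency through a layered deployment: each hop pays
// connection-setup round trips (TCP + TLS) plus its own per-hop network latency.
// All numbers are illustrative assumptions, not measurements.
function endToEndMs(hops: number, rttMsPerHop: number, setupRoundTripsPerHop: number): number {
  return hops * rttMsPerHop * (setupRoundTripsPerHop + 1); // +1 for the request itself
}

// proxy -> app -> broker -> service -> database, 1 ms per hop inside the datacentre,
// ~4 setup round trips each (TCP handshake plus TLS without resumption):
console.log(endToEndMs(5, 1, 4));  // ~25 ms of pure plumbing before any work is done

// same topology with 10 ms hops (cross-zone links, overloaded middleboxes):
console.log(endToEndMs(5, 10, 4)); // ~250 ms, creeping toward the perceptibility threshold
```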
Answer me this: How many HTTPS client programming libraries (not web browsers!) actually do TCP fast open and TLS 1.3 0-RTT handshakes? How many do that by default? Name a load balancer product that turns those features on by default. Name a reverse proxy that does that by default.
Nobody(1) turns on jumbo frames. Nobody does RDMA, or SR-IOV, or cut-through switching, or ECN, or whatever. Everybody has firewalls for no reason. I say no reason because, if all you're doing is some ACLs, your switches can almost certainly do that at wire rate with zero latency overhead.
It always comes back to the PPT. As long as a design, network, architecture, system, language, or product is under the threshold, people stop caring. They stop caring even if 1000x better performance is just a checkbox away. Even if it is something they have already paid for. Even if it's free.
1) I'm generalising, clearly. AWS, Azure, and GCP actually do most of that, but then they rate limit anyway, negating the benefits for all but the largest VM sizes.
So this has gotten to the point for me where it is a big enough burning pain point that I would pay for a service which provided passably fast versions of the web-based tools I frequently have to use.
In my day-to-day as a startup founder I use these tools where the latency of every operation makes them considerably less productive for me (this is on a 2016 i5 16GB MBP):
- Hubspot
- Gmail (with Apollo, Boomerang Calendar, and HubSpot extensions)
- Intercom (probably the worst culprit)
- Notion (love the app - but it really seems 10x slower than a desktop text editor should be imo)
- Apollo
- LinkedIn
- GA
- Slack
The following tools I use (or have used) seem fast to me to the point where I'd choose them over others:
- Basecamp
- GitHub (especially vs. BitBucket)
- Amplitude
- my CLI - not being facetious, but using something like https://github.com/go-jira/jira over actual jira makes checking or creating an issue so quick that you don't need to context switch from whatever else you were doing
I know it sounds spoiled, but when you're spending 10+ hours a day in these tools, latency for every action _really_ adds up - and it also wears you down. You dread having to sign in to something you know is sluggish. Realistically I cannot use any of these tools with JS disabled, best option is basically to use a fresh Firefox (which you can't for a lot of Gmail extensions) with uBlock. I tried using Station/Stack but they seemed just as sluggish as using your browser.
It's probably got a bunch of impossible technical hurdles, but I really want someone to build a tool which turns all of these into something like old.reddit.com or hacker news style experience, where things happen under 100ms. Maybe a stepping stone is a way to boot electron in Gecko/Firefox (not sure what happened to positron).
The nice things about tools like Basecamp is that because loading a new page is so fucking fast, you can just move around different pages like you'd move around the different parts of one page on an SPA. Browsing to a new page seems to have this fixed cost in people's minds, but realistically it's often quicker than waiting for a super interactive component to pull in a bunch of data and render it. Their website is super fast, and I think their app is just a wrapper around the website, but is still super snappy. It's exactly the experience I wish every tool I used had.
IMO there are different types of latency - I use some tools which aren't "fast" for everything, but seem extremely quick and productive to use for some reason. For instance, IntelliJ/PyCharm/WebStorm is slow to boot - fine. But once you're in it, it's pretty quick to move around.
Can somebody please build something to solve this problem!
Oh God, I can't upvote this enough. I feel the same. I work as a frontend dev and I just can't believe the amount of stuff we do in users' browsers just because it's convenient for us developers, or because asking backend devs to do it would take longer, or because it's the way to do it in React. SPAs can be faster, of course, but most of the time they are not, and they are a lot worse than their equivalent Rails or Django app, because your company just doesn't have the resources Facebook has. And even Facebook is terribly slow, so I'm not sure what the benefits are at the end of the day.
Talking of reddit, I just cannot use it. I rely on old.reddit.com for now and the day it goes away I will only use it from a native client on my phone, or just not use it anymore.
I feel like I'm repeating myself in every comment I make on this topic, but I really believe that tools such as Turbolinks, Stimulus or my favourite, unpoly, are highly underrated. If we put 20% of the effort we put into SPAs into building clean, well-organized and tested traditional web applications, we would be in a much better place, and faster - both in shipping and in performance.
We should focus more on the end user and the business and a bit less on what's cool for the developers.
Yep, there are some websites which _should_ be SPAs because they are actual applications - for instance, I can't imagine Google Docs or Trello as an MPA.
But, many websites are a graph of documents (like Reddit), so trying to model them as an SPA just massively increases complexity and introduces some really tricky problems. We moved from an SPA -> MPA and haven't looked back (with Intercooler/Stimulus/alpine).
One of the main parts is that, because you don't need to manage and reconcile state in two places, you have much less complexity. When we need a single component that needs to be very interactive (for instance, we have an interactive table viewer which allows sorting and searching), we embed a little bit of React or whatever -- but that's kind of a last resort, and it's as stateless as possible.
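A sketch of that "embed a little React" pattern, assuming React 18 and a server-rendered page that carries a placeholder element plus its data; the component and element names are invented:

```typescript
// Hypothetical "island" mount: the page itself is server-rendered; only the
// interactive table is hydrated as a small, mostly stateless React component.
import React, { useMemo, useState } from "react";
import { createRoot } from "react-dom/client";

function SortableTable({ rows }: { rows: Array<{ name: string; value: number }> }) {
  const [asc, setAsc] = useState(true);
  const sorted = useMemo(
    () => [...rows].sort((a, b) => (asc ? a.value - b.value : b.value - a.value)),
    [rows, asc]
  );
  return (
    <table>
      <thead>
        <tr><th>Name</th><th onClick={() => setAsc(!asc)}>Value {asc ? "▲" : "▼"}</th></tr>
      </thead>
      <tbody>{sorted.map(r => <tr key={r.name}><td>{r.name}</td><td>{r.value}</td></tr>)}</tbody>
    </table>
  );
}

// The server renders <div id="table-island" data-rows="[...]"></div>; everything else stays HTML.
const mount = document.getElementById("table-island");
if (mount) {
  const rows = JSON.parse(mount.dataset.rows ?? "[]");
  createRoot(mount).render(<SortableTable rows={rows} />);
}
```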
I think handling state sensibly in a pure SPA architecture is actually much more complex than people give it credit for. A Redux + React + REST architecture can be done properly - but it also introduces a huge number of potential rabbit holes which have a high ongoing maintenance cost, especially if you do not have a team of very experienced FE engineers.
New Reddit is a great testament to just how badly it can go when you fight against "the web as a collection of documents" and what browsers originally did. For instance, clicking on the background of a Reddit post navigates you "back" in the SPA instead of using your browser's back button - it's actually insane.
None of this is to say that templates can't be a bit painful themselves at times too - not sure what happened to https://inertiajs.com/, but I quite like the idea of that approach too.
If I want to download a 1GB file, I do a TLS handshake once and then send huge TCP packets. I can get almost 50MB/s from my AWS S3 bucket on my 1 Gbps fiber, so it takes ~20 seconds.
However, if I split that 1GB up into 1,000,000 1KB files, I incur the handshake penalty 1,000,000 times, plus all of the OTHER overhead from nginx/apache and the file system or whatever is serving the request, so my bandwidth is significantly lower. I just did an SCP experiment, got an 8MB/s average download speed, and cancelled the download.
The problem here is that throughput is great for a few big files but hasn't improved for lots of little files.
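A back-of-the-envelope version of that comparison; the per-file setup cost and RTT are assumptions, and reusing one connection (keep-alive, HTTP/2) or bundling the files removes almost all of the setup term:

```typescript
// Toy model: total transfer time = per-file setup round trips + payload transfer.
// Numbers are illustrative assumptions; real clients reuse connections, which is the point.
function totalSeconds(files: number, bytesPerFile: number, rttMs: number,
                      setupRoundTrips: number, mbps: number): number {
  const setup = files * setupRoundTrips * (rttMs / 1000);
  const transfer = (files * bytesPerFile * 8) / (mbps * 1_000_000);
  return setup + transfer;
}

console.log(totalSeconds(1, 1_000_000_000, 20, 4, 400)); // one 1 GB file: ~20 s
console.log(totalSeconds(1_000_000, 1_000, 20, 4, 400)); // a million 1 kB files with a fresh
                                                         // connection each: ~80,020 s
```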