Your link doesn't work on mobile. Yes, I get that it's a dev site, but as Shopify is aware, most users encounter most sites for the first time on a mobile device. I'm not the only person hitting this page on their phone thinking "I wonder what this looks like - is this interesting enough for me to remember for later?" With no demo page, the odds that the answer is yes are far lower than if there were one.
Fair, we'll have demo links and examples to share in the future. The focus right now is to get hands-on developer feedback on ergonomics, API shape, etc.
I've demoed https://hydrogen.new to many folks. It's fun to see their eyes light up when they realize that, with one click, they have a fully virtualized, custom environment running in their browser. Amazing product & platform.
SSR's main goal is to reduce the time to first interactive and improve SEO, not to bring the internet back to 1995. The page loads without JS, but JS libraries clearly load after the fact to decorate the page.
The key to unlocking fast first render is the combination of streaming SSR (i.e. streaming HTML instead of just JS blobs), which is enabled by adopting Suspense (i.e. making the data fetch async and streaming the data into the HTML response when it's available, which then hydrates the relevant component). RSC layers on top: it establishes a clear boundary between client and server logic, which enables better bundling strategies (as you highlighted), but also efficient per-component subtree updates for subsequent interactions.
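For anyone who wants to see what that looks like in practice, here's a minimal sketch of streaming SSR with Suspense using React 18's renderToPipeableStream (the product data, component names, and Express wiring are made up for illustration; this isn't Hydrogen's actual server code):

    // Illustrative only: the data "fetch" is faked with a timeout, and the
    // component/endpoint names are made up, not Hydrogen's actual API.
    import * as React from 'react';
    import { Suspense } from 'react';
    import { renderToPipeableStream } from 'react-dom/server';
    import express from 'express';

    let products: string[] | undefined;
    const productsPromise = new Promise<void>((resolve) =>
      setTimeout(() => { products = ['Shirt', 'Mug', 'Poster']; resolve(); }, 100)
    );

    function ProductGrid() {
      if (!products) throw productsPromise; // suspend until the data resolves
      return <ul>{products.map((p) => <li key={p}>{p}</li>)}</ul>;
    }

    function App() {
      return (
        <html>
          <body>
            <h1>Storefront</h1>
            {/* Everything outside Suspense streams immediately; this subtree's
                HTML streams later in the same response, then hydrates. */}
            <Suspense fallback={<p>Loading products…</p>}>
              <ProductGrid />
            </Suspense>
          </body>
        </html>
      );
    }

    const app = express();
    app.get('/', (_req, res) => {
      const { pipe } = renderToPipeableStream(<App />, {
        onShellReady() {
          res.setHeader('Content-Type', 'text/html');
          pipe(res); // flush the shell; suspended chunks follow as they resolve
        },
      });
    });
    app.listen(3000);

The shell flushes as soon as it's ready, the suspended subtree's HTML streams into the same response once its data resolves, and the client hydrates it in place.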
RSC is early, but we've been working with the React core team on our use case and, based on the past few months of work and progress, we feel pretty confident about the direction. In particular, moving legacy React apps towards Suspense+RSC will be a big shift for many, but we don't have that constraint. We have the "luxury" of starting anew, and we're leaning into the bleeding edge because it enables all the right primitives for commerce: fast first render, efficient updates, open space for optimizing bundles and the RSC transport protocol, etc.
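For readers new to RSC, here's roughly what the server/client boundary looks like, sketched in current React terms with the 'use client' directive (component names and the fetch URL are illustrative, and Hydrogen's exact conventions may differ):

    // ProductPage.tsx: a server component. It can fetch data directly and its
    // code never ships to the browser. (Names and the API URL are illustrative.)
    import AddToCartButton from './AddToCartButton';

    export default async function ProductPage({ id }: { id: string }) {
      const product = await fetch(`https://example.com/api/products/${id}`)
        .then((r) => r.json());
      return (
        <article>
          <h1>{product.title}</h1>
          <AddToCartButton productId={id} />
        </article>
      );
    }

    // AddToCartButton.tsx: a client component. The directive marks the boundary,
    // so only this subtree's code is bundled and hydrated for interactivity.
    'use client';
    import { useState } from 'react';

    export default function AddToCartButton({ productId }: { productId: string }) {
      const [added, setAdded] = useState(false);
      return (
        <button onClick={() => setAdded(true)}>
          {added ? 'Added!' : `Add ${productId} to cart`}
        </button>
      );
    }

Only the client side of that boundary needs to ship and hydrate, which is where the bundling and subtree-update wins come from.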
@cramforce nailed it. One thing I'll add: I would strongly encourage everyone to collect "field" (real user measurement) data for each of these metrics via their own analytics, as that will give you the most depth and flexibility when doing root-cause analysis on where to improve. The mentions of CrUX and other Google-powered tools are not meant to create any dependencies, but to lower the bar to entry for those who may not have RUM monitoring already, or who will need some time to get it in place. For those users, we offer aggregated insights and lab simulations (Lighthouse) to get a quick pulse on these vitals.
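As a rough sketch of what "via their own analytics" can look like, assuming the open-source web-vitals library (v3-style API) and a placeholder /analytics endpoint:

    // Rough sketch: report Core Web Vitals from the field to your own backend.
    // Assumes the open-source `web-vitals` package; /analytics is a placeholder.
    import { onCLS, onFID, onLCP, type Metric } from 'web-vitals';

    function sendToAnalytics(metric: Metric) {
      const body = JSON.stringify({ name: metric.name, value: metric.value, id: metric.id });
      // sendBeacon survives page unloads; fall back to fetch with keepalive.
      if (!(navigator.sendBeacon && navigator.sendBeacon('/analytics', body))) {
        fetch('/analytics', { method: 'POST', body, keepalive: true });
      }
    }

    onCLS(sendToAnalytics);
    onFID(sendToAnalytics);
    onLCP(sendToAnalytics);

Once those values land in your own analytics, you can slice them by page, device, and release, which is exactly the root-cause flexibility the aggregated tools can't give you.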
It's not a matter of first vs. the rest, but an observation that input while the page is loading is often where the most egregious delays happen: the browser is busy parsing and executing oodles of script, sites don't chunk script execution and yield to the browser to process input, etc. As a result, we have FID, which is a diagnostic metric for this particular (painful) user-experience problem on the web today.
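A minimal sketch of the "chunk work and yield" pattern mentioned above (processItem and items are stand-ins for whatever expensive work your page does):

    // Chunk long work and yield between chunks so queued input can be handled.
    // processItem and items are stand-ins for your own expensive work.
    function yieldToMain(): Promise<void> {
      return new Promise((resolve) => setTimeout(resolve, 0));
    }

    async function processAll(items: unknown[], processItem: (item: unknown) => void) {
      for (const item of items) {
        processItem(item);   // one chunk of the expensive work
        await yieldToMain(); // let the browser process pending input events
      }
    }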
Note that the Event Timing API captures all input: https://github.com/WICG/event-timing. First input is just a special case we want to draw attention to, for the reasons I outlined above. That said, we encourage everyone to track all input delays on their site, and it's definitely a focus area for future versions of Core Web Vitals -- we want to make sure users have predictable, fast response latency on the web.
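A small sketch of reading first-input delay from that API in the browser ('first-input' is the dedicated entry type, and processingStart minus startTime is the delay FID reports):

    // Observe first input via the Event Timing API; processingStart - startTime
    // is the delay that FID reports.
    const observer = new PerformanceObserver((list) => {
      for (const entry of list.getEntries() as PerformanceEventTiming[]) {
        const delay = entry.processingStart - entry.startTime;
        console.log(`first input (${entry.name}): ${delay.toFixed(1)} ms delay`);
      }
    });
    observer.observe({ type: 'first-input', buffered: true });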
Didn't post this before because it looks like link farming, but: http://test.thirtytwomachine.com . If I remove the inline `data:image` files, PageSpeed no longer errors out. When I hit this problem, I found posts from other people describing exactly the same behavior. Nuke the inline SVG and PageSpeed works again.
Also, the error message is a bummer: "An error occurred while fetching or analyzing the page". If the error included some parser state, I could at least feed that back to you, even if it would confuse a great many web developers.
I think you're misreading the copy: "PSI estimates this page requires 1 additional round trips to load render blocking resources and 0.0 MB to fully render. The median page requires 4 round trips and 2.7 MB. Fewer round trips and bytes results in faster pages."
Just looked at a reasonably popular post with 116 comments using Chrome's developer tools. It came in at 230 KB of HTML and 4.5 KB of CSS/JS/images. That would qualify as way larger than the median page here, since most pages are stories with <10 comments, user profiles, or individual comments.
How is Google getting 2.7 MB? Are they also fetching the third-party URLs in discussion threads? Or maybe they mean the median session?
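One way to sanity-check the number for any given page is to sum transfer sizes from the Resource Timing API in the console. A rough sketch (transferSize is zero for cross-origin resources without Timing-Allow-Origin, so treat it as a lower bound):

    // Rough page-weight check from the console via Resource/Navigation Timing.
    // transferSize is 0 for cross-origin resources lacking Timing-Allow-Origin,
    // so treat the result as a lower bound.
    const resources = performance.getEntriesByType('resource') as PerformanceResourceTiming[];
    const nav = performance.getEntriesByType('navigation')[0] as PerformanceNavigationTiming | undefined;
    const totalBytes = resources.reduce((sum, e) => sum + e.transferSize, nav?.transferSize ?? 0);
    console.log(`~${(totalBytes / 1024 / 1024).toFixed(2)} MB transferred so far`);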
I believe that refers to the median of all analyzed pages, not the median page on that domain.
I pointed the tool at a Tumblr blog, and I see: "PSI estimates this page requires 6 additional round trips to load render blocking resources and 1.3 MB to fully render. The median page requires 4 round trips and 2.7 MB. Fewer round trips and bytes results in faster pages."
I think 2.7 MB is the size of the average web page on mobile (3.4 MB on desktop).
Which is quite scary. (Edit: though this may refer to pages being developed and tested with PageSpeed; see the other comment.)
I wrote a whole (private, small) website with pictures (photos and images), styles, and a JavaScript maze in 0.6 MB total. I didn't spend much time optimizing it.
I cheated a bit: links in this website point to anchors within the same HTML file, so the whole site is effectively one page. Without this trick, each individual page would be even lighter.
I would find it hard to write a 3 MB web page without doing it on purpose. Something is wrong with web development. Stop wasting resources!
Rip. I just put up a new site and Google says it takes 6 round trips and 4.1 MB to fully render: http://www.themindsetapp.com . It's just a WordPress site at the moment; any tips to reduce it?
Independent of and unrelated to the Observatory project, but if anyone is curious to dig deeper into the Alexa top ~500K and how it has changed over time, take a look at the HTTP Archive dataset. Some examples @ https://discuss.httparchive.org/latest.
I like and enjoy the book. I have been searching for this link for a while; it was posted on HN some time ago and I forgot to bookmark it. Thank God someone re-posted it today.
And keep up the good work.