Hacker News
AMP Is What HTML Should Have Been (kartick-log.blogspot.com)
77 points by _qjt0 on Aug 12, 2016 | hide | past | favorite | 89 comments


HTML isn't a renderer-output-description language like PDF or PostScript. It was always intended to deliver content alongside semantic markup, intermixed with only basic formatting. Later, nearly all non-semantic tags got deprecated when CSS came of age; together they deliver structure and presentation. The image tag argument is a red herring: in HTML, the exact resulting render is up to the browser's layout engine, based on the CSS and the viewport.

The blog post isn't wrong, but it's also not right. Today's web isn't slow because of HTML's features, shortcomings, or other properties; it's slow because of excessive reliance on client-side scripting, including client-side DOM manipulation, ad serving, tracking, and A/B testing. Turning off Javascript vastly improves pageloads, and did -- once upon a time nearly 10 years ago -- improve UX, but these days more sites break without Javascript than those that don't.

AMP can impose constraints because it's a corporate effort to re-frame content in a way that provides centralized ad serving, tracking, and the like. It gives the content publishers the same tools (ads, tracking) they have to provide themselves with on the open web, while simultaneously benefiting those like Google who will put their own viewport and context around that content. It's not that much different from when an individual wants to publish on Medium or Tumblr, so the platform provides the author with a box to put text in, a dashboard, view tracking, and stuff the author wants, while they get to host interesting content with which to attract an audience for ads. It's a win-win, symbiotic relationship, while on the self-hosted web, every publisher is on their own.

This isn't about HTML vs AMP at all. It's about business models, which are given in AMP, but left as an exercise for the publisher on the open web.


> Today's web isn't slow because of HTML's features, shortcomings, or other properties; it's slow because of excessive reliance on client-side scripting [...] Turning off Javascript vastly improves pageloads

Sure, disabling JavaScript will improve pageloads. However, are you suggesting clicking a link and waiting for an entirely new page to load is good UX?

Good UX is instantaneous feedback, client-side code is a requirement... Unless you want some kind of enormous super-spec that is supposed to encompass anything anyone could ever wish to express in pure markup. In which case, good luck getting that supported and rendering quickly.

The whole point of JavaScript (or any client side code, including mobile apps) is that it vastly improves UX.


For many pages, loading an entirely new page is perfectly fine UX, and way better UX if the alternative is dynamic loading and all kinds of performance-eating ad scripts, scroll hijackers, ...

If your site is a traditional content-heavy website, you need to load a full page of content with both ways, it doesn't matter that much if you are loading the skeleton around it or not, and there is way less chance of accidentally breaking something. I don't need instantaneous feedback that you have begun to squeeze content through the pipe (if I'm on mobile the content will take time to load anyways, on a wired connection you can deliver a full page quickly enough)

Sure, real "apps" are a different thing, but they are not the ones people typically complain about being overloaded with scripts.


I have to wonder what it would be like if, instead of reinventing the page load process using Javascript frameworks, the web had gone in the direction of increasingly refined frameset functionality.


HTML5 sections are very similar to how frames were used. The biggest problems with frames were individual scrolling and that opening in a new tab would lose context, both of which are solvable.


You raise some good points. Within the same website, loading a different server-generated full page will typically load many of the same assets, like CSS and header images, which will remain cached by the client for subsequent loads.

It's true that some markup framing the content is duplicated; incidentally in the olden days we used <frame> tags to separate a site-global navigation from the page-specific content, but that went out of style -- it had URL addressability drawbacks and no convention ever developed to fix it.

But ultimately the client needs to render the View, while the server (at some point presumably) originates the Model. Somehow you have to turn the M into the V, and there have been hundreds of different takes over the last couple decades on how exactly to accomplish this. Classically, every HTML page was the complete view, but then the DOM API was introduced to allow this view to be mutated imperatively after the View has been rendered.

The main innovation of client-side scripting was Ajax, which was explicitly about lazy- and/or continuous loading of small snippets of inner content, like a single Tweet or a couple of Google Maps tiles. This emboldened some to push more and more state onto the client, and now we have entire frameworks that do MVC or MVVM in (client-side) Javascript. If we had a way besides the DOM API to morph one state of the View to another; say, a declarative HTML-diff, or something like XSLT or JSON-patch, I'm sure people would take it. If this sounds a lot like Virtual DOM, it's because they solve the same problem.
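A minimal, hypothetical sketch of that idea: diff two flat key-to-text "view" states and emit declarative patch operations. All names here are illustrative, not any real library's API; real Virtual DOM implementations diff trees, not flat maps.

```javascript
// Illustrative sketch: diff two flat key -> text "view" states and emit
// patch operations, the same shape of problem Virtual DOM libraries
// solve against the real DOM tree.
function diffView(oldView, newView) {
  const patches = [];
  for (const key of Object.keys(newView)) {
    if (!(key in oldView)) {
      patches.push({ op: "insert", key, value: newView[key] });
    } else if (oldView[key] !== newView[key]) {
      patches.push({ op: "replace", key, value: newView[key] });
    }
  }
  for (const key of Object.keys(oldView)) {
    if (!(key in newView)) patches.push({ op: "remove", key });
  }
  return patches;
}
```

Applying only the resulting patches, instead of re-rendering the whole page, is exactly the "declarative HTML-diff" idea above.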


> However, are you suggesting clicking a link and waiting for an entirely new page to load is good UX?

Maybe they aren't saying that, but sure I'll bite. Why wouldn't it be good UX? I would challenge anyone to argue that non-JS/static UIs are anything but simple, intuitive and easily teachable. I have had the displeasure of repeatedly having to explain to family/relatives, how to use these "modern" UIs to the point where I'm fairly convinced that none of them have any utility, as far as UX is concerned.


If it loads quickly, why not? My connection is a sub 3ms ping to the nearby datacenters, what exactly is the bottleneck?

It's not to say there isn't a place for SPAs, but I would argue many of these would be better suited to being full-fledged desktop apps instead (better performance and native integration).


Okay, for you there's no bottleneck, just woeful UI/UX. Seriously, seeing just one stateful element at a time, then pressing "Next". Wow, just wow.

I fully expect that when I type the first few digits of my credit card number that the site knows precisely which card provider I'm supplying and updates the UI (highlight the card type) accordingly and immediately update the security code field to request my 3-digit MasterCard CCV vs. a 4-digit AMEX CID.
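That kind of interaction is only a few lines of client-side code. A hedged sketch (the issuer prefixes below cover only the common cases; real detection needs full BIN tables, e.g. Mastercard also owns a 2221-2720 range not handled here):

```javascript
// Hypothetical sketch: infer the card network from the leading digits
// as the user types, so the UI can swap between a 3-digit CVV field
// and a 4-digit AMEX CID field without a server round-trip.
function cardType(digits) {
  if (/^3[47]/.test(digits)) return { network: "amex", codeLength: 4 };
  if (/^4/.test(digits)) return { network: "visa", codeLength: 3 };
  if (/^5[1-5]/.test(digits)) return { network: "mastercard", codeLength: 3 };
  return { network: "unknown", codeLength: 3 };
}
```

Wired to an input's event handler, this is the "immediate update" described above, with zero latency.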

To be clear, I'm not suggesting code needs to run on a client. One day when we all have fast enough connections (with minimal latency), including on mobile networks, then perhaps all the code can run server-side and we just get streamed the output (video), which again updates in real time!

However, living in the real world as it stands today, client-side scripting is a fantastic solution to a real-world problem i.e. latency and bandwidth constraints, and real-time responsive UIs.


> My connection is a sub 3ms ping to the nearby datacenters, what exactly is the bottleneck?

Heh, tell that to someone in, say, Australia.


> Sure, disabling JavaScript will improve pageloads. However, are you suggesting clicking a link and waiting for an entirely new page to load is good UX?

If done right there is not much wasted and there is not much to wait for as most of the things which are not new already are cached. Most of the time such slowness results from unnecessary bloat or slow code on the server side.

> The whole point of JavaScript (or any client side code, including mobile apps) is that it vastly improves UX.

In practice it worsens UX more often than not.


In my case, I wanted it to have stayed as plain interactive documents.

I contribute to this goal by only allowing native applications on my devices, unless I don't have any good alternative to switch to.


HTML may have intended to deliver semantic markup, but web application developers want pixel-perfect layout. That ship has long sailed.

No disagreement about excessive scripting, ad serving and other bloat. But those are orthogonal to my points. Are there some things in AMP that would make HTML faster or improve the UX? If so, let's evolve HTML based on what we learnt.


Sidelining well-established standards isn't the solution to the web's problems. This is one of the instances where being open source isn't enough, and coming from Google it is actually worrying. A very obvious Trojan horse, in my opinion.


Even though technology may be superior, the implementation by Google news is terrible. You cannot scroll properly because the page hijacks it and replaces it with a janky scroll mechanism. You cannot share news links with anybody using the browser share option. Also, the whole thing refuses to work without js enabled.

So give me plain old HTML any day. It renders fast, scrolling is predictable and smooth, and the URL is right there if you want to share.


> You cannot share news links with anybody using the browser share option.

This. I want the user to know they are on my site and be able to share the URL when they want to. Google News, admittedly the only implementation of AMP I've seen so far, fails in making visible that basic unit of the Internet -- the URL.


I think scrolling on AMP pages is really fast on iOS. Not sure which device you're finding it slow on.


AMP pages on iOS devices perform just fine. For a while at least (not sure if it's still the case), AMP pages seemed like they scroll-jacked and changed the rate of scrolling, which throws you off when you expect pages to move at a certain speed for a given input. I remember going 'why the hell is it moving this way' every time I hit an AMP page, and cursing the awfulness of it.

The worst UX sin, though, is when your finger moves ever so slightly to the side, which is incredibly common when holding your phone. Most of the time Safari ignores that input as it didn't meet the threshold to go back/forward. AMP pages, however, seem to implement a canvas or something and wind up being really twitchy on left/right input. Not to actually go back/forward mind you, but to do something completely unexpected: reinterpreting left/right swipes as navigating through entirely different articles. This is something one only discovers by accident, and something I'm not sure very many people would find utility in. It's another case where Google is trying to add their own entirely non-standard view of how UI should work rather than following the conventions of the platform they are on (ahem Material Design iOS apps).

And, just because they hadn't had enough of messing up typical reading UX, Google News AMP pages add a large fixed header so they could show you those many pages you can scroll through with little pagination dots. Sadly, this prevents Safari's own URL header from collapsing as you scroll down, meaning maybe 20% of your view is covered by interface instead of 5%. My assumption is that Google News decided that it was better for users to have the opportunity to view another article + their ads rather than providing a quality reading experience on the article the user actually wanted to read.

TL;DR: whatever AMP's tech offers is more than countered by the pointlessly awful implementation by Google News. If you're in an unfortunate enough situation to have to tap on an AMP page, the best possible outcome is to immediately go into Safari Reader View so your screen is 100% content and your every tiny input isn't likely to take you someplace you do not want to go.


Maybe what dingo_bat means is that scrolling the AMP pages works fine, but scrolling the news.google.com homepage is janky. (Which is what I also notice on my Android 5.0.1 phone running the latest Chrome.)


Android (SGS7 edge) here. I wouldn't call it slow. It's just different from the normal scrolling. And unpredictable. And frame skips sometimes.


Might be specific to the Galaxy — Samsung still forces its own browser on phones instead of the stock Chrome other manufacturers rely on.


That's wrong. Samsung includes both. Google forces OEMs to include its apps if they want to use the Play store and Play APIs.

But yes, I use the Samsung browser because of the performance and feature benefits, and my complaints may not be applicable to chrome.


It's doing the same thing on my desktop chrome.


It's fast on my Nexus 5 too.


I really am surprised that a mobile technology would compromise so much on how flicking the page to scroll works. I'm curious as to what kind of user testing this went through before it was cleared for production usage.


did you mean the web version of it or the app?


The Web version.


The best thing about this article is that it is short, a rare feat in today's world. So I don't want to scrutinize it too hard on its mere two examples:

It says, like Google's Accelerated Mobile Pages (AMP) project, HTML should (1) require each img tag have the height and width specified, and (2) allow only the CSS filters that can be GPU-accelerated.

These are nice and all, but the problem is this. Why stop there? Why not forbid illegible fonts, text columns that are too wide or too narrow, green text on a red background, and an assortment of other "poor UX"? Why not enforce that no page exceed the byte length of a Russian novel? (Now that's a law I can get behind [http://idlewords.com/talks/website_obesity.htm].)

HTML could be better, but like all open-source projects, I'm amazed how good it is, and even more so how much better it is than any alternative offered for money.


Can we just create a community standard of plain-jane HTML to separate it from all this "publisher" junk that AMP stands for, and then build a gigantic web ring and a search engine and all that? Like tilde club but with standards enforcement. I'd put a weather forecast site like that on my home screen in an instant.


Here's a possible set of restrictions:

1. HTML5, without syntax errors. UTF-8 only.

2. IMG and EMBED tags must have size info.

3. All CSS, Javascript, and font files must have subresource integrity hashes. The browser can cache anything with subresource integrity, even across site boundaries, since the hash guarantees the content value. This will speed up asset loading. No more loading "jquery.js" over and over.

4. All off-site content (ads, trackers, etc.) must go in iframes. All iframes must have a height and width, so rendering can proceed without waiting for them. This eliminates delays caused by slow ad servers. Iframe content is requested at a lower priority than main page content.

5. Rendering doesn't start until the entire main page is loaded, or a <page> tag is encountered for really long pages. Almost everybody has enough bandwidth now that we don't really need to frantically re-render as pages come in.


Number 3 is already possible and widely used by minification frameworks. They generate a file with a hash in the filename that is cached forever. It cuts network time, but browsers still re-parse it every time.


> Can we just create a community standard of plain-jane HTML to separate it from all this "publisher" junk that AMP stands for, and then build a gigantic web ring and a search engine and all that?

These guys might be a good first step: https://indieweb.org/


AMP doesn't "allow only CSS filters that can be GPU-accelerated". It allows only CSS properties that Blink (and other current engines) can GPU-accelerate via layers. There is no need for this restriction in general, and I think a better approach for the long run is to adopt rasterization strategies that don't have such unnecessary performance cliffs.

(Disclaimer: I work on one of these libraries.)


Would be interested in taking a look at your project if you'd offer a link. :-)

The Amp guys make the claim in one of their videos[1] that the GPU can do nothing if a page-layout event is triggered. Amp is geared to minimizing the number of times this has to happen, by demanding that elements are size-bounded for example; the same reason for the CSS GPU-acceleration limits.

Hopefully new render engines will address this, but having them out in the wild is a ways off even if they were ready today. Amp seems like a pretty reasonable stopgap, although I find their demo site[2] not a perfect experience with all of my normal Firefox plugins. Another claim in the video mentioned above is that Amp sites should be faster even without an ad blocker.

[1] https://youtu.be/hVRkG1CQScA

[2] https://www.ampproject.org


> The Amp guys make the claim in one of their videos[1] that the GPU can do nothing if a page-layout event is triggered.

That's a Blink restriction, not an inherent one. It's also an onerous restriction. Having these footguns is a bad way to treat Web developers.

> Would be interested in taking a look at your project if you'd offer a link. :-)

WebRender [1], part of Servo.

[1]: https://github.com/servo/webrender


The AMP team 100% agrees with you. But the project goal is to be fast in current generation browsers. As soon as the current generation can perform well with more relaxed rules, the rules in AMP can be changed accordingly.


I agree with the annoyances. Try reading any major US news site on your phone. You see a headline you want to read and tap on it. Sometimes the page is unresponsive. When it becomes responsive, it has loaded some new links to other articles, and what you tapped on is no longer on screen. Now you're navigating to something you either didn't want to read or maybe already have.

This is a problem with all major news sites. CNN, NBC, FOX all do this to varying degrees. The main page is static, subsections are injected via async requests. It's annoying on a desktop (because you can't necessarily go back to where you were) and infuriating on a phone. I can't imagine what it's like for people on a limited data plan.

I also find it annoying when I see multiple links I want to read, but it's not obvious if I click on one, read it and go back that the other is now replaced.


The accusation is valid, but is nonetheless missing the point.

The evolution of technology always starts with a proliferation of features and matures with a much smaller set of them, the ones that have been proven to be really useful and efficient.

At the beginning, no one really knew that HTML should be like AMP. So we tried, and it largely worked. Using that to argue HTML should have started out like AMP is like claiming that cavemen 10,000 years ago should have used the internet to communicate, which is plain irrational.


In addition it means we've arrived at this solution backwards.

It's like banning all screws because you've got a kickass shiny hammer.


How so? How could we know what does and doesn't work without trying it out first? Languages designed by commission usually don't get very far.


Agreed. My blog post was meant more like: where do we go from here? How do we take what we learnt and use that to make HTML better?


"Allows only features that are fast/can be gpu accelerated" is a nice idea in theory, but in practice it's wrong. The web started out as a platform where no features were accelerated, and browser vendors started incrementally adding faster software implementations of features, and then eventually GPU-accelerated implementations. More obscure/uncommonly used features tend not to get an optimized implementation, because carefully tuning every single web platform feature is extremely expensive to do. So if AMP forbids things that aren't fast Right Now, you're freezing the state of the platform forever until vendors get around to improving features nobody uses... and then you have to wait until every vendor ships the optimized implementation before anyone can use it in AMP.

The fact is that there are many barriers between web apps and native-level performance. AMP's focus on delivering good performance on mobile is to be applauded, but you don't do it by trapping software in the past.

(Context: Previous member of Firefox and Chrome teams)


> So if AMP forbids things that aren't fast Right Now, you're freezing the state of the platform forever until vendors get around to improving features nobody uses... and then you have to wait until every vendor ships the optimized implementation before anyone can use it in AMP.

Agreed that AMP is the wrong approach. But I think it's not that difficult in terms of programmer-months to fix the rasterization problems. You just need the right design as opposed to '90s style imperative APIs.


When a new feature is being developed, whether it can be GPU-accelerated on day 1 should be a factor in the decision. If it can't, or it's tricky, perhaps we should think twice before even developing the feature. Rather than it being an afterthought, as it seems today.

Of course, all this is in the context of features that make sense to GPU-accelerate, like CSS filters.


I'll upvote this for the discussion, but I still think it's unreal that we had to create an entire proprietary AMP/Instant Article standard rather than guidelines for making working mobile sites. Even an HTML validator (hello 1999!). It just seems like every ancient WAP solution that's gone out of fashion.


You reminded me of Google Gears (disclosure: I worked on it), which was supposed to have played a part in getting the W3C committee to work on HTML5 seriously.

Likewise, HTTP2 languished for a long time, but took off quickly once SPDY was implemented in Chrome + Safari + Firefox.

As I was told, the best way to spur a standards body into action is to threaten to make them irrelevant.


I just flipped my blog over to AMP, which required essentially no change at all, except adding their cdn'd (but not client-cached) JS and some weirdo attributes here and there.

I run a pretty tight technical ship, so that non-cached JS ends up being more than half the page weight. I don't _really_ care about the additional ~40kb, but I guess I'm not seeing any advantages yet.

AMP != An HTML-subset. If it were, that'd be fine, and I'd basically agree with the article. But AMP seems to also require me using some strange JS from an ad company, which I'm extremely skeptical of.


Pardon my ignorance, but when you specify the height and width of an image, wouldn't this hamper the ability to utilize media queries/responsive design? I understand what AMP is trying to solve, but it seems to be backwards in terms of how CSS can be utilized for responsive design.


You can set an amp image to be responsive which resizes it to the declared dimension's aspect ratio!
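For reference, that looks roughly like the following (the `amp-img` element and `layout="responsive"` attribute are from the AMP docs; in this mode width/height declare the aspect ratio rather than fixed pixels, and the image scales to its container):

```
<!-- Illustrative: a responsive AMP image. The 4:3 width/height pair
     reserves a correctly-shaped box before the image downloads. -->
<amp-img src="photo.jpg" width="4" height="3" layout="responsive"></amp-img>
```

The layout engine can allocate the box immediately, which is the whole point of the size requirement.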


It theoretically does, yes. The width/height could probably be specified through media queries, but I don't know how well that would perform in modern layout engines - I expect they've implemented this in Chrome for AMP but I haven't tested it.


I am just trying to wrap my head around the use case for AMP.

On one hand, you have a faster loading site/page which is good for users and helps with SEO ranking, but as a developer, you lose some flexibility in terms of specifying your layout for multiple cases.

I guess if the majority of your traffic is mobile-based in areas with typically slower internet connections, then it makes more sense to utilize something like AMP. But if that is not the case, it seems like you have to do twice the work for not much gain...


The use case is that it will improve your search rankings, and it gives you an argument to force your organisation and your ad partners to make your site faster. It's effectively Google making up a tool they can force-feed big publishers to get them to make websites that perform less horribly, which makes the top results of Google search on mobile more useful and leaves more traffic to look at more pages and more Google ads.

There is no reason why a normal website can't be as fast or faster than an AMP one, but AMP gives the framework to get organisations that otherwise wouldn't care enough to do so.


This is the exact strategy Microsoft used for lock in way back when. You could use ASP and other MS technologies and they even worked well. Until you wanted to exit their walled garden - then they were a real pain to integrate with anything usual.


Isn't AMP available as an open-source library? I don't see the lock-in.

Besides, Google has announced that search rankings will be affected by speed, not by what library you use. So, if your site is already optimal (doesn't benefit from AMP) or you want to use some competing project that also does a great job, you won't be penalised.

I don't see the walled garden argument here.


They don't mean you have to specify a fixed size in pixels. You can specify it in terms of page-width percentage for example. The specification must just be enough to allocate a bounding box in the page layout before downloading the image.


If that is the case, that seems sane.

I had jumped to some conclusions; if I am not mistaken, CSS needs to be written inline for AMP and so I assumed that the spec (based on the author's post) would require that the image be specified in exact pixels.


This seems to me like punishing users (in this case web developers) for the mistakes of standards organizations of the past (W3C).

HTML is what it is and it's not going anywhere. AMP is not a solution to anything, because the whole world isn't going to adopt it.

A real solution is to write your HTML rendering engine to eliminate behaviors you don't like. If you don't want repainting/reflowing of the page when the img dimensions aren't known then don't start rendering until you know them.

But browser vendors aren't going to do that because users DO prefer reflow/repaint over waiting for the page to load. So this is a silly argument.


If they hadn't fooled around with the XHTML set of standards, we could probably already have something like XAML in the browser, with its GPU backend.

Alas, what we have is a Frankenstein stack, which when one does the magic incantations of HTML and CSS, maybe just maybe if the browser and corresponding version are the right ones, those transformations will land on the GPU.


> If they hadn't fooled around with the XHTML set of standards, we could probably already have something like XAML in the browser, with its GPU backend.

HTML + knockout is a lot like xaml, but a lot simpler.


Given that AMP has no explicit browser implementation right now (it's just some guidelines and JS, implemented on top of the "mistaken" standards), web developers could already write pages conforming to the goals AMP tries to enforce, but too many of them didn't.

Stricter browser engines would punish devs more, to avoid punishing devs with AMP?

I'm not a big fan of AMP, but I'm not sure if intentionally worsening browsers for unliked but valid markup would be better. Users will leave a browser doing so.


I'm quite happy with how the web platform has proven to be powerful and flexible enough time and again to stay relevant. Javascript is probably doing some nasty (but well-animated) things on Flash's grave, meanwhile HTML is annoying the newest W3CWG drafts with that they-wanted-to-replace-me-with-word story.

The web needs interactivity, and without a powerful open platform it's going to be 2 or 3 incompatible, non-free silos. No links & nothing too abnormal please – not one piece of information outside of some AOL/MS/Apple/Google-controlled curated marketplace.


Please don't make me (as a designer) know what the exact width of an image is, ahead of time.


Please don't make me (as a user) deal with pages that re-flow every three seconds as images load, pushing the paragraph I was reading off-screen.


The size of an image is in the header right at the front, so arguably it's not that hard to avoid the reflow. Just prefetch the headers. This is already done for video files and audio files that have the relevant info.

It's still ideal to put the sizes inline if you can, though.


For the people who are downvoting, some more details on how this would work from a person who actually works on web platforms and standards:

Along with crypto and improved compression, HTTP2 offers multiplexing. HTTP also has historically offered range requests, where you can request a small subrange of a file.

This means that a browser could collect image URLs when doing its initial parse of the page (pretty easy, since you want a small, simple page for AMP anyway) and then issue a set of requests for the headers of all the images - for PNG, at least, you know the header will be of known, relatively small size. The header requests can all be multiplexed by HTTP2, which means that you will be able to get all the headers before you start getting the full bodies of the images. You won't have to download the headers or images twice since you can download the rest of the image bodies with another range request. Range requests could also be cached under this scenario.

This optimization can be implemented (in theory) by every browser engine and doesn't require changes to developers' content or web servers. And it would be an improvement that is web-wide instead of just focused on AMP.
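As a sketch of the parsing side: in a PNG, the IHDR chunk places the big-endian width and height at byte offsets 16-23, so the first 24 bytes (e.g. the response to a `Range: bytes=0-23` request) are enough to size the bounding box:

```javascript
// Sketch: extract image dimensions from the first 24 bytes of a PNG.
// Layout: 8-byte signature, 4-byte IHDR length, 4-byte "IHDR" type,
// then 4-byte big-endian width and 4-byte big-endian height.
function pngDimensions(bytes) {
  const sig = [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a];
  if (bytes.length < 24 || !sig.every((b, i) => bytes[i] === b)) {
    throw new Error("not a PNG header");
  }
  const be32 = (o) =>
    ((bytes[o] << 24) | (bytes[o + 1] << 16) | (bytes[o + 2] << 8) | bytes[o + 3]) >>> 0;
  return { width: be32(16), height: be32(20) };
}
```

Other formats (JPEG in particular) don't keep dimensions at a fixed offset, so a real implementation needs per-format logic; PNG is simply the easy case.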


You can do even better with HTTP2: on a request to get a page, the server can just optimistically send the image headers to the client up front with the page text, then backfill the image bodies in whatever order the client wants.


Oh yeah, I forgot about server-driven preloading. There's some great stuff in HTTP2, thanks for pointing it out!


This already happens, but the extra roundtrip still kills performance.

Not having to make an extra HTTP request is the only way to get good performance in this case.


The result of this will probably be websites doing even more work on the server to conform to the requirements of AMP, which if done correctly will increase request latency. Then the AMP developers get to look at their (client-side) metrics and go look how much smaller we made our numbers!

(I still think AMP is a good idea, but side-effects need to be considered at scale)

An ideal solution for simple pages would be to run a build tool before deployment that adds width/height attributes to all your image tags by checking the files. You could probably do this pretty easily before minification.
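That build step might look like the following (a hypothetical, regex-based sketch; a real tool would use a proper HTML parser and read the dimensions from the image files themselves):

```javascript
// Hypothetical build-tool sketch: add width/height attributes to <img>
// tags that lack them, given a lookup table of known image dimensions.
function addImgSizes(html, sizes) {
  return html.replace(/<img\b([^>]*?)>/g, (tag, attrs) => {
    if (/\bwidth\s*=/.test(attrs)) return tag; // already sized, leave as-is
    const m = attrs.match(/\bsrc="([^"]+)"/);
    const dim = m && sizes[m[1]];
    if (!dim) return tag; // unknown image, leave as-is
    return `<img${attrs} width="${dim.width}" height="${dim.height}">`;
  });
}
```

Run before minification, this gives every image a bounding box at no cost to authors.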


Unfortunately as a user I either have to deal with tons of reflows as the page loads or make you specify the image size up front. Since you know what the size of the image is, the last part doesn't seem that big of a deal.


It is usually not the images that cause the reflows, but the unsized containers for actual contents. Also reflowing paragraphs of text.

No, you should not have to specify sizes of every span in existence or insert manual line breaks. That makes pages just fail to render properly on the current multitude of screen sizes.


This would typically be added by a CMS or similar software. Note, that only the aspect ratio is required (including support for media queries), so you don't have to know actual pixels which may be device specific.


Of all the things that cause pages to reshuffle on the modern web, un-sized images are the least of my worries.


Yikes, looks like someone didn't set up SSL properly on the project's website (http redirects to https): https://www.ampproject.org/. It'd be nice to see some of the examples.

edit Google cache isn't loading everything - at least to my knowledge.

edit 2 Yay web-archive to the rescue!

http://web.archive.org/web/20160811220337/https://www.amppro...


AMP team member here. Not sure I understand the issue.

It's by design that our site is only available via HTTPS. Is there a good reason for why you can't consume the HTTPS version?


I am not entirely sure, as a web developer I've never seen a certificate pass all SSL checkers yet chrome and firefox on my computer (up-to-date) won't load them. I pondered a MITM attack but it's literally the only site this happens on - not exactly sure what is causing the https to not load on my end.

edit All I get is "ERR_SSL_PROTOCOL_ERROR" - I'm pretty sure I tested this link out at work but I will again on Monday to verify it happens there (currently I'm tethered as well as vpn'd to PIA and I don't think my provider T-Mobile is doing anything to block this one ssl)


No, HTML is what HTML should have been.

AMP is an optimization with a narrower application focus than HTML has. It introduces a number of things (including coupling, limitations on technologies other than HTML, etc.) that are undesirable in general for the broad role HTML plays, but tolerable trade-offs for the focus of AMP.


To take an example, in what context is it better to have img tags that don't specify their width and height?


The AMP presentations at Google IO were a bit strange.

They were throwing jabs at native applications while at the other tracks everyone was talking about native Android applications, how to run native Android applications on ChromeOS and how to stream native Android applications.


This is an intro, a conclusion, and 3 paragraphs that don't say much more than "HTML is shit because people don't specify image size and this irritates me".

I can't tell what's worse: this post, or the fact that people are responding to it.


> For example, images should have their width and height set so that the page doesn't re-layout when they're loaded.

I hate this idea. What if I just want the photo to be there at its natural size? Why should I keep updating my CSS when I change the photo?


> Why should I keep updating my CSS when I change the photo?

They're talking about the HTML image attributes so layout placeholders can be rendered, not CSS. The original height and width should be specified and styling can optionally be applied in CSS.


Replace CSS with HTML?

The question applies to either.


This feels like something that could be dynamically inserted server-side.


I like Firefox's Reader Mode for cluttered pages, and this approach could be widely adopted and improved upon. However, anything that catches on too much is going to get peppered with ads or paywalls, one way or another.


and through the years

and perfectionist tears

it echoes, echoes:

worse is better


By that logic .doc is the greatest format ever, or in the top 5 at least.


It is easy to criticize in retrospect. If HTML was not what it is, we may never have reached where we are and said, OK, we need to step back a little and figure things out for the multitude of consumption mechanisms.


Not intended as finger-pointing, just forward-looking. HTML was great historically. What do we do now? How could we improve things?



