Decoding the HTML 5 video codec debate (arstechnica.com)
39 points by nickb on July 6, 2009 | 26 comments


"Some critics even contend that it's not possible to advance Theora without inevitably hitting a patent wall."

This article and the HTML 5 codec issue in general really illustrates the stifling effect that software patents are having on technical progress.

Are there any compelling reasons left to keep software patents around? It seems like they're doing way more harm than good.


There is no debate. Hixie removed the relevant portions from the spec. There is now no required codec. Unless the landscape changes significantly, there will be no required video codec.

It's unfortunate, but it isn't the catastrophe that the article implies. The vendors that are going to support Theora out of the box are going to do so anyway, and the vendors that are going to support h264 out of the box are going to do so anyway. Standardizing on either would have done nothing but dissociate the spec from reality.


Standardizing on either would have done nothing but dissociate the spec from reality.

So now web designers have to deal with incompatibilities introduced in the last century, as well as ones introduced in yet-to-be published specs.

What is the point of standards if everyone is going to ignore them? The HTML5 committee could have saved a lot of time by not writing the spec and instead concentrating on a free reference implementation. (When browser vendors have to implement a spec on their own, they are going to cut corners to make things easy. If they can just cut-n-paste from a reference implementation, they will probably just use that.)


What is the point of standards if everyone is going to ignore them?

That's the point. That's why that part was removed. If Theora had been mandated, Apple would have ignored it. If h264 had been mandated, Mozilla would have ignored it.


The end result is the same. You can't use <video> if you want it to work on all platforms.


Ah, but you can like so:

  <video>
      <source src="example-video.mp4" type="video/mp4" />
      <source src="example-video.ogv" type="video/ogg" />
  </video>


I'm not really sure how it's going to work in Chrome, though. The article says they are going to support both codecs, so it's up to the browser which video source to choose. It will be interesting to see whether it picks the first source or the 'preferred' one (presumably, mp4). Is it possible to set the source preference in HTML? Like:

  <video>
      <source src="example-video.mp4" type="video/mp4" />
      <source src="example-video.ogv" type="video/ogg" default/>
  </video>
or something similar?


The first supported <source /> is used (from the spec).
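For illustration, the behaviour the spec describes can be sketched as a simple loop. The function and argument names below are made up, not from any API; `canPlayType()` is the real HTML5 media-element method, which returns "probably", "maybe", or "".

```javascript
// Sketch of the spec's behaviour: the browser walks the <source>
// elements in document order and plays the first one whose MIME type
// it thinks it can handle. pickSource() is an illustrative name only;
// canPlayType is passed in so the logic can run outside a browser.
function pickSource(sources, canPlayType) {
  for (var i = 0; i < sources.length; i++) {
    // canPlayType() returns "probably", "maybe", or "" (unsupported)
    if (canPlayType(sources[i].type) !== "") {
      return sources[i].src;
    }
  }
  return null; // nothing playable: the element's fallback content is shown
}
```

So there is no 'default' attribute: to make a browser that supports both prefer the Ogg file, you just list its `<source>` first.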


A reference implementation based on what spec?


Whatever the authors of the reference implementation felt like writing. Just like all the major browsers that are supposedly "standards compliant", but without pretending to be something it's not.

(People don't want a standard saying how to play videos on the web. They want to be able to say <video src="foo.video"> and have it work for all their users, including those using free software.)


An Apple representative has reached out to Xiph and their lawyers (SFLC) to make another attempt to address the (overblown, imho) submarine patent issue to the satisfaction of Apple's lawyers and so add support for Theora and Vorbis. So the story isn't over for Theora as the HTML5 video standard yet.

http://lists.xiph.org/pipermail/theora/2009-July/002415.html

It's worth pointing out that no-one has suggested H.264 as a recommended codec, not even Apple or Google who are both shipping H.264 support. Since it is patent encumbered it simply doesn't pass the first test for getting into a W3C standard. You'd be as well to suggest Flash or Silverlight be added to the spec, it would get shot down just as quick.

Apple apparently tried to get one of the less advanced H.264 profiles made royalty free, so it could go into the standard, but was getting pushback from some of the other patent holders in the MPEG-LA. That would have been interesting as it would have been a "first hit is free" kind of deal, where Safari/Quicktime would be able to support the higher levels of H.264 but Mozilla/Chromium/Linux wouldn't and Opera presumably would refuse to do so on principle.


As a web developer I prefer the last option listed: allow video to use any and all codecs installed on the user's computer.

BUT, expose this information to the server, so I know which video to serve. Add an Accept-Video-Codec and an Accept-Audio-Codec header, and that is all you need to do.

For extra points send some information on screen resolution in the headers, instead of making me use javascript for it.

The reason this is good is that browsers don't need to support anything. They just use the media libraries on the computer. If I want more codecs, I install them; I don't need to convince the browser to support them.
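A sketch of what such server-side negotiation might look like, assuming the proposed (and entirely hypothetical) Accept-Video-Codec header. No such header exists in HTTP; the codec tokens and file names are made up for illustration:

```javascript
// Hypothetical content negotiation on a proposed Accept-Video-Codec
// header. This only illustrates the idea of letting the server pick
// a file the client says it can decode.
function chooseVideo(acceptVideoCodec) {
  var codecs = (acceptVideoCodec || "").split(",").map(function (c) {
    return c.trim().toLowerCase();
  });
  if (codecs.indexOf("theora") !== -1) { return "example-video.ogv"; }
  if (codecs.indexOf("h264") !== -1) { return "example-video.mp4"; }
  return "example-video.flv"; // no usable codec advertised: fall back to Flash
}
```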


The licensing fees for H.264 that go into effect next year should kill it off pretty quickly.

Of course, we should really just get rid of software patents.


Film/video post-production guy here. Agree about software patents but I'm not so sure about the licensing in this case. The thing is, h.264 really is very very good at what it does.

I don't think it's so bad to reward that with a license fee, and the patent administrators are benign dictators. There are lots of exceptions to allow startups to use it, there are sub-inflation caps on the fee increases, and the maximum license fee per hardware device is 20 cents. Online is a bit more complex but still very cheap. Full terms here: http://www.mpegla.com/avc/AVC_TermsSummary.pdf

On principle, I lean towards a totally free software solution. But pragmatically, this is a very fair approach to administering a patent for a technology which delivers a lot of value, and I think h.264 will persist for that reason.


Is it really so cheap? If so, it seems that it should be possible to create a mechanism which could cover these fees for open-source projects. Perhaps someone like Mozilla or Canonical or other open-source advocates could create a fund for such things? In an ideal world, we wouldn't have to pay licensing fees for this. But this seems a small price to pay for a standard video codec. Is there anything like this for .mp3 support?


The problem with patent licensing and open-source software is that patent license agreements are usually geared towards shipping physical items (such as DVD players, etc.) Commercial, proprietary software tries very much to behave like physical items (one single producer, building and shipping items to customers) so patent terms like "20c per unit" work quite well.

Open-source, on the other hand, has a most unphysical habit of proliferating without the original creator's involvement. If Canonical promises to pay 20c per CD they ship, and 10% of those CDs are imaged and torrented and downloaded and burnt and re-imaged and re-shared all over the web, nobody knows how many "units" are out there, and there's no reasonable way Canonical (or Mozilla, or any other prospective licensee) can comply with any patent license more complicated than "do whatever you want".


it seems that it should be possible to create a mechanism which could cover these fees for open-source projects. Perhaps someone like Mozilla or Canonical or other open-source advocates could create a fund for such things?

Since the fees are capped, some organization could just pay the cap, distribute a "free" codec, and write the cost off as a marketing expense. Mozilla can afford to do this, but refuses on principle.

Is there anything like this for .mp3 support?

Fluendo distributes a "free" MP3 decoder; they pay the license fee on behalf of users.


I don't remember what the mp3 situation is - it's probably on Wikipedia. Your fund idea is pretty good, but if it's client-side chances are you can either reference graphics card hardware on newer systems or be exempted on older ones (which might not be an issue anyway, as it's quite processor-intensive and doesn't run that well on older CPUs). You can email them on the website if you dig around.


Using "whatever codecs the user has installed" is the very non-solution that has already been in place for over a decade by means of the <object>/<embed> tags. The result of that was eleventy hojillion attempts to make the ultimate codec, every single one a failure.

Even when <object> based video worked, the plugin architecture made it slow, unstable, unpredictable and generally a horrible user experience. The only reason we have video at all today is because Flash provides it internally.

Any web video standard that fails to standardize the codec has made zero direct progress. However, it looks like the HTML5 attempt may have resulted in an acceptable defacto standard: a browser must support one or both of H.264 and/or Theora. Since the <video> tag makes it easy to provide both formats, I think this is a compromise that web developers will be able to live with.
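Pages can already probe that de facto standard from script: HTML5 media elements expose `canPlayType()`, which returns "probably", "maybe", or "". A rough sketch (the helper name is made up; `canPlayType` is passed in so the logic can run outside a browser, where you'd use `document.createElement("video").canPlayType` bound to the element):

```javascript
// Decide whether to fall back to Flash: if the browser can play
// neither H.264 nor Theora, the <video> tag is of no use to it.
// needsFlashFallback is an illustrative name, not a standard API.
function needsFlashFallback(canPlayType) {
  var h264 = canPlayType('video/mp4; codecs="avc1.42E01E"');
  var theora = canPlayType('video/ogg; codecs="theora"');
  return h264 === "" && theora === "";
}
```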


In my eyes there are two actual conclusions from all this:

1. flash will remain the dominant video platform for the near future.

2. This stalemate between theora and h.264 is a huge win for chrome, the only browser that will support both.


So doesn't Theora just win? Most Mac users seem to use Firefox, so Safari has a market share just a touch above Chrome. Still essentially a rounding error.


I do not believe most Mac users use Firefox. Also, don't forget mobile.


Even if everybody used Firefox, YouTube still wouldn't adopt Theora due to perceived bandwidth/quality issues.


Isn't it possible those perceptions will change? Or even the realities underneath, if those perceptions are correct?


Could you elaborate? What are the perceived bandwidth/quality issues you mean?


http://en.wikipedia.org/wiki/Comparison_of_video_codecs as a jumping-off point; http://etill.net/projects/dirac_theora_evaluation/ for a flawed but quick analysis.

In brief, Theora is more open-ended (supporting up to terapixel resolutions!) but at the price of being far less optimized or well-supported for acquisition and delivery needs today. h.264, while processor-intensive and subject to long-term limitations, delivers outstanding performance right now and is widely supported.

Also, Theora brands itself as ideal for web delivery with limited bandwidth leading to graceful rather than abrupt degradation, which is great, but of declining utility as bandwidth and reliability increase. It isn't pushed as a great choice for high-quality delivery, leading to a lack of awareness among content providers who would rather use h264 for everything from Blu-ray to Youtube.

Now, that doesn't mean Theora has no future - as it matures it will become more desirable as a supported option, and the long-term architecture will help with that. Right now, however, it doesn't offer any significant technical benefit, like massively smaller compression or massively better image representation, compared to competitors. Making film and video is expensive and the patent licensing costs are such a tiny fraction of that cost that content providers and distributors really don't care. To them, Theora is a solution in search of a problem (indeed, many have never even heard of it). There is no economic incentive to deliver in this format, so they won't.

tl;dr programmers care about Theora, video people don't.



