
The big thing that makes HTTP/2, HTTP/3, and even Gemini hard to implement is TLS. In theory, HTTP/2 doesn't require it, but in practice it does.

You need to implement a variety of ciphers, manage certificates with expiration dates, and so on. And if you wanted to implement all that yourself, people would yell at you for rolling your own crypto. So yeah, you need a library. But not just that. You also need a way to renew your certificates, so you can't ship a package (or even a single executable) that you just run and get a server serving static pages. You could make a self-signed certificate that lasts a thousand years, but good luck getting it accepted.
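Even with a library doing the heavy lifting, the surface area is visible from a few lines of Python. A small sketch using the stdlib `ssl` module, just to show how many cipher suites a stock TLS context is prepared to negotiate (names and counts vary by OpenSSL build):

```python
import ssl

# Default TLS context from the standard library.
ctx = ssl.create_default_context()

# Each entry describes one cipher suite the context can negotiate.
ciphers = [c["name"] for c in ctx.get_ciphers()]
print(len(ciphers), "cipher suites, e.g.", ciphers[0])
```

And that's before certificates enter the picture: a server-side context won't do anything useful until `load_cert_chain()` has been pointed at files that must be kept current.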



Generally speaking, you also need some sort of operating system to make use of HTTP, and yet that doesn't figure into the complexity of HTTP.

In classic HTTP, TLS was layered beneath it, providing important degrees of freedom, including the freedom to not use TLS, which can be especially important for experimentation and development.

Prediction: if HTTP/3 manages to substantially replace classic HTTP+TLS, QUIC is destined to become a kernel-provided service like TCP, shunting all that complexity behind an OS abstraction and freeing user space. The fact that QUIC uses UDP is an important aspect here, because a performant userspace QUIC stack conflicts with classic, high-value abstractions like file descriptors and processes: abstractions which make it viable (i.e. cheap) to have a rich, diverse ecosystem of languages and execution environments in userspace. More importantly, HTTP will have come full circle.


> QUIC is destined to become a kernel-provided service like TCP

I think this might happen in the opposite direction to how you think: the prevalence of containers is likely to thin the OS layer down to just arbitrating the PCIe bus, and you'll get monolithic applications that handle all the layers inside themselves. So the QUIC layer and the web server and the app-routing and the app itself will all happen inside the same process.


Service VMs, in IBM-speak, where the "OS" is a hypervisor called VM.

Reinvented a bit later as libOS (library operating systems) and unikernels.

Even Linux got in on that a bit:

https://lwn.net/Articles/637658/

It isn't a stupid idea, but the only way to update anything is to rebuild the world. Luckily it's a small world after all.


> The fact QUIC uses UDP is an important aspect here because a performant userspace QUIC stack conflicts with classic, high-value abstractions like file descriptors and processes

Only because the implementations of fds in popular kernels are absolute rubbish at allowing userspace to extend them, except perhaps for ptys. A userspace implementation of a protocol can make a domain socket, or just pass the client one half of a socketpair, but it cannot change the way accept() behaves, let alone add its own sockopts.
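A rough Python sketch of that socketpair pattern: the application keeps reading a plain fd while the "stack" works the other end. The names are hypothetical; the point is what the app end *cannot* do:

```python
import socket

# The userspace protocol "stack" hands the application one half of a socketpair.
app_end, stack_end = socket.socketpair()

# The stack decodes protocol frames and writes plain bytes for the app.
stack_end.sendall(b"decoded payload")

# The app just reads an ordinary fd. It cannot accept() new protocol-level
# connections on it or set protocol-specific sockopts, which is exactly the
# extensibility gap described above.
data = app_end.recv(1024)
print(data)
```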

Even if you extend a kernel to do this, I suspect there will be interesting security implications when processes can suddenly receive fds that behave in funny, unexpected ways. Much like the current mess with Linux namespaces: not because the idea is inherently bad, but simply because we waited this long to try it.


> QUIC stack conflicts with classic, high-value abstractions like file descriptors and processes; abstractions which make it viable (i.e. cheap) to have a rich, diverse ecosystem of languages and execution environments in userspace

Do you mind expanding on that with a sentence or two?


UDP is streams in the very classical definition: stream objects (bytes, probably, though a more faithful implementation would carry chunks/arrays) go out without a predefined start or end. In other words, imagine that a single `read()` (or whatever) call returns a random chunk of the HTML source, and it is your job to make sense of where in the whole page that chunk belongs. There is not even a guarantee that consecutive reads will return data "in order".

"File" abstraction layer has predefined start, end, length, order, and so on. You need some buffering magic layer on top of UDP that provides facilities mandated by file abstraction layer.


Worse, a socket read can return a UDP packet from any client, as QUIC associates connections using 64-bit identifiers in the UDP payload, not IP/port pairs. Ditto for the fact that there can be multiple logical streams per connection. So any user space QUIC stack is responsible for routing packets for any particular connection and logical stream to the correct context, using whatever bespoke API the stack provides.

IOW, with QUIC a file descriptor no longer represents a particular server/client connection, let alone a particular logical stream. From a server perspective, a user space QUIC stack is effectively equivalent to a user space TCP/IP stack. (Relatedly, user space QUIC server stacks are less CPU performant than HTTP stacks using TCP sockets, unless they use something like DPDK to skip the kernel IP stack altogether, avoiding the duplicative processing.)
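A toy demultiplexer makes the point concrete. The wire format here is entirely hypothetical (64-bit connection ID, then a one-byte stream ID, as a nod to early QUIC drafts); what matters is that the kernel's (addr, port) tuple is useless for routing, so the parsing and routing land in userspace:

```python
def route(datagram, connections):
    """Parse a toy datagram and file its payload under (connection, stream).
    Format (hypothetical): 8-byte connection ID, 1-byte stream ID, payload."""
    conn_id = datagram[:8]
    stream_id = datagram[8]
    payload = datagram[9:]
    connections.setdefault(conn_id, {}).setdefault(stream_id, b"")
    connections[conn_id][stream_id] += payload
    return conn_id, stream_id

# Packets from two different "connections" arrive on the same socket.
conns = {}
route(b"AAAAAAAA" + bytes([1]) + b"GET /", conns)
route(b"BBBBBBBB" + bytes([1]) + b"GET /other", conns)
print(conns[b"AAAAAAAA"][1])  # b'GET /'
```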

The BSD Sockets API for SCTP (the original, non-UDP-encapsulated version) permits a 1-to-1 style, with one socket descriptor per association (SCTP's term for a connection), in addition to a 1-to-many style, with one socket descriptor representing many associations, over which you use recvmsg/sendmsg to multiplex associations and their logical streams. So you have a range of options for using SCTP from user space, from a dead simple, backwards-compatible sockets API that matches how TCP/IP sockets work, down to something close to the QUIC model where user space handles all the routing and reassembly. I would expect kernel-based QUIC to work similarly, except combined with kernel-based TLS extensions, which are already a thing for TCP sockets.


> UDP is streams in the very classical definition

What do you mean exactly? UDP is not a stream protocol; it is an unreliable datagram one. A single read will return exactly one UDP datagram.

It is TCP where a read will return incremental data which is not necessarily aligned to what the other end wrote.

Of course QUIC builds a reliable stream abstraction on top of it, but that's not different from TCP.
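The boundary behavior both comments describe is easy to see with a pair of local sockets (AF_UNIX socketpairs on a POSIX system, standing in for UDP and TCP):

```python
import socket

# Datagram sockets preserve message boundaries: one recv() per send().
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
b.send(b"one")
b.send(b"two")
first = a.recv(100)
second = a.recv(100)
print(first, second)  # b'one' b'two'

# Stream sockets do not: the reader sees bytes, not writes.
c, d = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
d.sendall(b"one")
d.sendall(b"two")
buf = b""
while len(buf) < 6:          # read until all six bytes have arrived
    buf += c.recv(100)
print(buf)                   # b'onetwo', with no trace of where one write ended
```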


In the classical definition, a stream is a "sequence of data elements", without any additional guarantees. In this context TCP is a stream transformer/filter: a stream of "raw" packets is transformed into a payload stream.

> What do you mean exactly? UDP is not a stream protocol, it is an unreliable datagram one. A single read will return exactly one UDP datagram.

This is exactly what makes UDP a stream of datagrams. Data is defined over a single UDP datagram; UDP gives exactly zero guarantees about data across datagram boundaries. There is no first, there is no last, there is no next/previous datagram, and there are no guarantees or expectations about whether more datagrams will follow.

> Of course QUIC builds a reliable stream abstraction on top of it, but that's not different from TCP.

Key word "reliable", which is subset of streams. In goes stream of datagrams, out goes ordered (and otherwise identified) stream of bytes. Reading your comment it seems that you equate "stream" with "asynchronously fetched buffer" meaning from C++ and others, which is a stricter definition of streams.


I have never heard of UDP being defined as a stream (of packets), and it collides with the canonical definition of stream protocols from UNIX, BSD sockets, and the IP stack.


No, it does not collide with the "canonical" (colloquial, I'd say) definition of streams, when you think about it. It's only a matter of which guarantees a stream provides. It is not buffered reads/writes that make a stream.

I can take a bunch of XML documents, "stream" them into a TCP socket on one end, feed that socket to an "XML stream processing library" on the other end and... watch it blow up, because apparently the only streamy thing about the library is that it can handle an asynchronously filled buffer; it will nevertheless blow up if fed anything but a single well-formed XML document ¯\_(ツ)_/¯
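The stdlib's SAX parser demonstrates the failure mode: it consumes input incrementally, yet still insists the stream be exactly one well-formed document, so two valid documents concatenated (as they might arrive over a TCP socket) blow up at the second root element:

```python
import xml.sax

# Two perfectly valid documents, back to back.
data = b"<doc>first</doc><doc>second</doc>"

msg = ""
try:
    xml.sax.parseString(data, xml.sax.ContentHandler())
except xml.sax.SAXParseException as e:
    # The parser chokes as soon as the second root element appears.
    msg = e.getMessage()
    print("blew up:", msg)
```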


It is not colloquial when talking about network protocols. From socket(2):

          SOCK_STREAM
              Provides sequenced, reliable, two-way, connection-based
              byte streams.  An out-of-band data transmission mechanism
              may be supported.

Not sure what your reference to buffering or async buffers and XML has to do with the topic.


Here you go, right in the documentation: TCP streams are streams with additional qualifiers, not simply streams. The whole thing we are arguing about: TCP streams are a subset of streams.

> Not sure what your reference to buffering or async buffers and XML has to do with the topic.

An illustrative point for what happens when assumptions from a contrived case are applied to a more relaxed case.


Seems like there are some unspecified assumptions here. For one, an assumption that TLS is the only game in town. Another is that "you", as in "you must do this" or "you must not do that", applies to every person equally.

When one uses TLS today, chances are very good that it uses something from djb, someone who "made his own crypto". Maybe the assumptions, stated as "rules", do not apply equally to everyone. For example:

"And if you wanted to implement all that yourself, people will yell at you for doing your own crypto."

In fact, before HTTP/2 existed, djb did exactly that, as a demonstration that it could be done.^1 It succeeded, IMHO, because it worked. People can "yell" all they want, but as above, if these same people use TLS they are probably using cryptography developed by the very person at whom they are "yelling". Someone who broke the "rules". Perhaps there is evidence that HTTP/2 would exist even were it not for the prior CurveCP experiment, but I have yet to find it.

The word used in the parent comment was "implement" and the suggestion is that attempts to "implement" would not succeed. Perhaps the reason they might "fail" is not a technical one. Perhaps "success" in this instance really refers to acceptance by certain companies that are making "rules" (standards) for the internet to benefit their own commercial interests. It may be possible to implement a system that works even if these companies do not "accept" it. If so, then the problem here is the companies, their fanboys/fangirls (watch for them in the comment replies), and the undue influence they can exert, not the difficulty of implementing something that works.

IMHO, getting something "accepted" by some third party or group of third parties is a different type of "success" than getting something to work (i.e., "implementing"). It's the latter I find more interesting.

1. https://curvecp.org



