The discussion of standards seems to unhelpfully conflate the reality of standardisation by bodies like the IETF, which have no discernible authority and don't want it even if it were possible, with "de facto" standards, which it says are just whatever people do in practice. Not so.
The IETF is not a conventional SDO, nor indeed a conventional organisation of any sort, since it has no members, and it operates on "rough consensus" rather than having some specific formal process that invariably (see Microsoft's interaction with ECMA and ISO) would be gamed by scumbags.
But nevertheless those are de jure standards that come out the far end, the result of "getting all major stakeholders in a room" albeit that room is most often a mailing list since only a hardcore few can attend every IETF physical meeting. The IETF even explicitly marks standards track RFCs distinctly from non-standards track ones. If you contribute documentation for a complete working system, rather than (as Google did with QUIC) a proposal based on such a system that needs further refinement, it'll just get published as an Informational RFC. Such RFCs are how a bunch of Microsoft network protocols are documented, by Microsoft. Whereas months of arguing and back-and-forth technical discussion have shaped the IETF's QUIC and will continue to do so, the documentation for MSCHAPv2 (commonly used in corporate WiFi deployments) is an informational RFC so a Microsoft employee just dumped it as written, no chance for anyone to say "Er, this protocol is stupid, change it not to shove zero bytes into this key labelled C or else anybody can crack user passwords after spoofing the AP". So they didn't, and you can.
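For the curious, the flaw alluded to above is visible right in RFC 2759's ChallengeResponse procedure: the 16-byte NT password hash is zero-padded to 21 bytes and split into three single-DES keys, so the third key contains five known zero bytes. A minimal sketch of that step (names illustrative; des_encrypt() stands in for a real single-DES primitive from a crypto library and is assumed, not implemented):

    #include <stdint.h>
    #include <string.h>

    /* Assumed primitive: single DES with a 7-byte key, not implemented here. */
    void des_encrypt(const uint8_t key7[7], const uint8_t in8[8], uint8_t out8[8]);

    void challenge_response(const uint8_t nt_hash[16],  /* MD4 of the password */
                            const uint8_t challenge[8],
                            uint8_t response[24])
    {
        uint8_t zhash[21];
        memset(zhash, 0, sizeof zhash);   /* the infamous zero padding */
        memcpy(zhash, nt_hash, 16);       /* bytes 16..20 stay zero */

        /* Three independent single-DES operations over the same challenge.
         * The third key is 2 unknown bytes + 5 zero bytes, so an attacker
         * who captures a challenge/response pair can recover it in at most
         * 2^16 tries, then attack the remaining keys piecewise. */
        des_encrypt(&zhash[0],  challenge, &response[0]);
        des_encrypt(&zhash[7],  challenge, &response[8]);
        des_encrypt(&zhash[14], challenge, &response[16]);
    }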
To be fair, the parent comment is slightly misleading. I don't know the exact story of MSCHAPv2 but note that it is an informational RFC published by the pppext WG: https://tools.ietf.org/html/rfc2759
For an RFC to be published by a WG, it must first be "adopted" by the group, which means a first draft is submitted by the author and then debated (sometimes lightly) until the group agrees that it fits the topic and is suitable for adoption. Similarly, once the RFC is adopted, it goes through a series of calls by the WG chair where people have opportunities to comment, until it is finally published. Informational RFCs have lighter requirements than standards-track ones, so they are easier to get published, but they still get some amount of review and comments before publication.
In fact, even "independent submissions" with "experimental" status (that do not go through a WG at all, https://tools.ietf.org/html/rfc2026#section-4.2.1) get reviewed before publication. The reviews in that case are private, but a RFC editor is responsible for sanity-checking the document and sometimes requests additional input from reviewers specialized in the domain area covered by the draft.
[Edit: the actual WG for MSCHAPv2 was https://tools.ietf.org/wg/pppext/, not "Networking" which is just the generic name on top of the RFC]
Although you're correct that there was a drafting process for MSCHAPv2, the actual protocol it describes had already shipped in Windows. As a result "But this is a bad idea" would not have been a useful contribution to the drafting process, the zero draft describes the exact same protocol, just with different words.
Edited to add:
The drafting process wasn't worthless: it fixed typographical errors, unclear descriptions, and so on. For example, the zero draft insists Windows usernames are "Unicode" (UCS-2) when actually they're just ASCII; the examples show ASCII encoded as hexadecimal, but the text in the zero draft specifically calls it Unicode. And originally the document repeatedly said something was a 16-bit value in the text while showing a 24-bit value in structures; the final RFC corrects this by splitting out an 8-bit "reserved" all-zero field in the structure where this happens. In at least one place the RFC seems to "extend" the protocol compared to the zero draft, but again this isn't a response to Working Group feedback, it's documenting a patch Microsoft shipped in later Windows versions after the zero draft.
I don't know how much a WG chair could have usefully interfered here. As I say, it's documenting something that already existed, so "fixing" it to document a more secure protocol nobody was using wouldn't help. The IETF's role here was to help people interoperate with Microsoft's solution; your non-Windows OS that can sign in to a corporate WiFi system with Windows domain servers is enabled by this documentation.
>However, in those discussions, a related concern was identified; confusion between QUIC-the-transport-protocol, and QUIC-the-HTTP-binding. I and others have seen a number of folks not closely involved in this work conflating the two, even though they're now separate things.
>
>To address this, I'd like to suggest that -- after coordination with the HTTP WG -- we rename the HTTP document to "HTTP/3", and use the final ALPN token "h3". Doing so clearly identifies it as another binding of HTTP semantics to the wire protocol -- just as HTTP/2 did -- so people understand its separation from QUIC.
Google didn't make the distinction between the transport layer and the HTTP layer on top when they called their development "QUIC"; it was one thing. The IETF decided to split these during the standardization.
Perhaps a better question would be why Google decided to experiment with QUIC in Chrome instead of HTTP/2 over TLS over SCTP over UDP. And the answer is probably because TLS over SCTP would require more RTTs.
A WebRTC "RTCDataChannel" is an SCTP-over-DTLS-over-UDP stack, and webservers exist to stream to these. They're just mostly proprietary, existing as part of vertically-integrated stacks like that of Google Hangouts (i.e. its "app sharing" feature.)
> But moving from TCP to UDP can get you much the same performance without usermode drivers. Instead of calling the well-known recv() function to receive a single packet at a time, you can call recvmmsg() to receive a bunch of UDP packets at once.
TCP is a streaming protocol; there are no datagrams to read one at a time. Nothing stops you from reading the entire kernel buffer (containing multiple HTTP messages) into userspace in one syscall.
HTTP requests are small, and a read() call only gets you the data from a single connection, so you get one or a few packets' worth of data per syscall. In contrast, recvmmsg can get you a large bunch of packets across all "connections" in a single syscall.
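To make the batching point concrete, here's a minimal Linux-only sketch (error handling elided) of draining a UDP socket with one recvmmsg() call:

    #define _GNU_SOURCE
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <stdio.h>

    #define BATCH     64
    #define MAX_DGRAM 1500

    int drain_socket(int fd)
    {
        static char bufs[BATCH][MAX_DGRAM];
        struct iovec iov[BATCH];
        struct mmsghdr msgs[BATCH];
        struct sockaddr_storage addrs[BATCH];

        for (int i = 0; i < BATCH; i++) {
            iov[i].iov_base = bufs[i];
            iov[i].iov_len  = MAX_DGRAM;
            msgs[i].msg_hdr = (struct msghdr){
                .msg_name    = &addrs[i],        /* per-datagram source address */
                .msg_namelen = sizeof addrs[i],
                .msg_iov     = &iov[i],
                .msg_iovlen  = 1,
            };
        }

        /* One syscall returns up to BATCH datagrams, potentially from many
         * different peers ("connections"), each with its own source address. */
        int n = recvmmsg(fd, msgs, BATCH, MSG_DONTWAIT, NULL);
        for (int i = 0; i < n; i++)
            printf("datagram %d: %u bytes\n", i, msgs[i].msg_len);
        return n;
    }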
Good point. That only makes sense for servers though, since for clients (web browsers etc.) sockets will be 'connected' and they will still have lots of sockets anyway. They'll still be epoll()ing or similar.
Notwithstanding the other benefits of QUIC, the UDP vs TCP thing with respect to crossing the kernel-userspace boundary doesn't seem that significant.
For clients these numbers don't matter as much; many thousands of packets per second doesn't really happen to them. For a server with a fast pipe it does happen. I hope one of the big providers will share numbers. I don't think many of them actually use user-space stacks with TCP; it would be interesting to see if QUIC improves performance measurably.
The problem is from userland you don't get to decide how much data the kernel hands you with the read call, and by default the kernel will try not to let its buffers fill too much so you end up with a lot of context switches as you read out one window's worth of data at a time.
You could do something similar with TCP if there were kernel support, but the article suggests that putting this much complexity in the kernel's TCP stack is a bad idea because it increases the chance of failure--which can be catastrophic in kernel space. Better to have your web browser/server crash instead of the entire machine.
I feel like the adoption of HTTP/3 is going to be much, much slower than HTTP/2... Besides Google Cloud, do any of the major cloud providers have UDP load balancers?
Disagree. There's more incentive to move considering the advantages it offers. If anything it feels like people were reluctant to adopt http2 and http3 is a bigger leap that we'll all want to move forward with.
The challenge for HTTP/3 will be middleboxes, not endpoints. Put another way: it won't be hard to break 50% of traffic on HTTP/3, but you will still be running HTTP/2 ten years from now if you want to reach the last 10% of users.
While this is true, a lot of software is not prepared to have connections change IP address. Lots of networking software works on the idea of a loop which assigns each connection (5-tuple) to a thread. If QUIC were to be more widely used, the association of the connection would have to be linked to the connection identifier in QUIC, which is different from the 5-tuple. Basically, any software that assumed 1 connection maps to 1 file descriptor is going to have to be rewritten.
This also affects the kernel, which could previously steer all traffic for a connection onto a single core. Realistically, QUIC will need a BPF filter to inspect the connection identifier and steer packets to the same core in the event of an IP address change.
All this is to say: I don't think most software is ready for QUIC, even if the protocol allows for cool things.
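To illustrate the rewrite described above, a hypothetical sketch of steering by connection ID rather than by 5-tuple (header layout simplified; real QUIC connection IDs are variable-length):

    #include <stdint.h>
    #include <stddef.h>

    #define NUM_WORKERS 8

    /* Old approach: hash of (src IP, src port, dst IP, dst port, proto).
     * Breaks the moment the client's IP changes. */
    unsigned worker_for_5tuple(uint32_t five_tuple_hash)
    {
        return five_tuple_hash % NUM_WORKERS;
    }

    /* QUIC-aware approach: key on the connection ID carried in the packet,
     * which survives address changes. */
    unsigned worker_for_quic(const uint8_t *udp_payload, size_t len)
    {
        if (len < 9)
            return 0;
        uint64_t cid = 0;
        /* assume an 8-byte connection ID right after the first header byte */
        for (int i = 0; i < 8; i++)
            cid = (cid << 8) | udp_payload[1 + i];
        return (unsigned)(cid % NUM_WORKERS);  /* stable across IP changes */
    }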
> Basically, any software that assumed 1 connection maps to 1 file descriptor, is going to have to be rewritten.
Yeah, but I don't think that's a hard thing to do, realistically speaking. Instead of mapping a source ip/port to a destination, the LB maps a QUIC session ID to a destination.
> This also means the kernel, which could previously steer traffic to a connection onto a single core.
Most likely the kernel will not be rewritten to support this, since QUIC is a user space protocol. Also, the LB might just be able to rewrite the packet so it looks like it came from the LB when it sends it on to the server. The end server would never see the ip address change.
Not unless you want to get through firewalls seamlessly. UDP "state" tracking is a thing, and if my firewall sees a UDP packet destined for a host behind it from a remote IP address it has no state for, it's going to drop it.
If the client side changes its IP address, that's a lot easier to handle than if the server side does, typically because the server won't reject new UDP packets coming in from the internet.
The load balancer will route any ongoing sessions with the session id, and not the ip address.
Absolutely, but the original commenter was stating that it would allow either endpoint to change their IP easily (and thus make load balancing easier). I was disputing that.
Client changing their IP address is fine; the comment I was replying to was stating that it would allow both sides to change their IP at will (and allow for better load balancing). I was disputing that.
I hope we get in-kernel implementations of QUIC at some point because having to find a portable third-party library for userspace sounds about as appealing as installing Winsock on Windows 95.
But the whole point of QUIC is that it is a userspace implementation. From the QUIC viewpoint (and I take no sides in this) kernel implementation is death for a protocol because it freezes its specification and behaviour in slow-to-update systems. This is why they found they couldn't "just improve TCP".
I get why Google, which controls a great deal of the software on both ends of a very large number of connections, finds a settled standard inconvenient.
But from my perspective, as somebody who uses Google software but does a lot of other things too, I like when we have standards that are implemented by many different people and aren't controlled by a single vendor that is eager to maintain or extend their large market shares in many areas. Can that be slow to change? Sure. But the speed is proportional to how much the change benefits people besides Google.
Personally, I hope that QUIC is a first step toward taking the lessons learned and implementing them widely, rather than something that will evolve at a rapid pace precisely as long as Google needs it to and then stop.
http.sys is strictly for listening for inbound connections and not a general purpose HTTP API. Amongst other things, it lets multiple applications listen to port 80 on Windows.
Yes, http.sys is an HTTP server implemented in the Windows kernel. It’s the HTTP server that IIS and all other HTTP-based Windows services use, and it has an API for 3rd-party servers.
The point being that there are advantages to implementing a kernel or hybrid-mode HTTP server, and Microsoft has done it on Windows. Some other implementations exist, but other than MF/Big Unix ones I’ve never seen them in actual use.
I don’t think there is much of a point in implementing an HTTP client library in the kernel though, since performance shouldn’t really be an issue on the client side.
The history of IIS vulnerabilities with in kernel execution and the time it takes to get comprehensive patching seem like pretty substantial disadvantages.
I don’t necessarily disagree that there are disadvantages as well. That said, I got root far more often on Linux boxes due to core server or application-level vulns than SYSTEM on Windows boxes through an IIS vuln, since those are more often than not patched. (App vulns are still a problem, as is running your worker services as privileged users, but that isn’t related to http.sys.)
I don't know much about this, so do you mind elaborating? Wouldn't userspace implementations be safer and easier to update as we spend the next few years sussing out security and performance issues?
This post groks QUIC. The most important thing about QUIC is it frees applications from the tyranny of the kernel TCP state machine. Today all TCP sockets (at least, on Linux) are subject to the same system-wide parameters of the TCP state machine, none of which are appropriate for any particular application. With QUIC we will finally have each application in control of its own retry timers and other parameters. That is going to be quite beneficial, especially on mobile where packet loss is so common.
Those do not affect TCP state machine parameters like RTO(min), ATO, and TLP timeout. These are internal to the kernel and are either static, or can only be set systemwide. For example the minimum delayed ack timeout in Linux is just 40ms and can't be changed except by recompiling the kernel. 40ms is a totally inappropriate number for ATO in a datacenter or other low-latency setting. Other numbers like RTO(min) are specified in RFCs as 200ms, again completely inappropriate in a low-latency setting.
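To illustrate the contrast, here is a rough sketch of what is and isn't tunable per socket on Linux (per-route knobs like ip route's rto_min exist, but nothing per-socket):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    void tune(int fd)
    {
        int one = 1;

        /* These do exist per socket: */
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);   /* disable Nagle */
        setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof one);  /* immediate ACKs,
                                                                         though the state
                                                                         machine can revert it */
        unsigned int ms = 10000;
        setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT, &ms, sizeof ms);

        /* But there is no setsockopt for RTO(min), ATO or the TLP timeout;
         * those live inside the kernel's state machine. A userspace QUIC
         * stack simply owns the equivalent timers itself. */
    }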
QUIC also frees us from other outdated misfeatures of TCP such as timestamps in milliseconds when they should be in microseconds.
Again, as someone who does not completely grok QUIC, I am not dismissing it. I was hoping that, where some parameters are not exposed, work would be put toward making them available through existing interfaces.
Linux network maintainers have repeatedly rejected attempts by Eric Dumazet and others to expose delack_min and other parameters to userspace. This is the kernel tyranny to which I refer.
Notably many of the people whose proposals have been shot down by linux netdev are currently working on QUIC.
Somewhat related: I do hope userspace network stacks get easier to stand up (with strict testing). It would be nice to move an attack surface like that out of the kernel.
QUIC is somewhat comparable to SCTP. But in both situations you wouldn't be using HTTP/2 on top of it. One of the main points of HTTP/2 is multiplexing several streams over a single transport stream. This isn't needed anymore with QUIC or SCTP, since they already perform stream multiplexing. The only thing left is transferring HTTP semantics over those QUIC or SCTP streams. The simplest way to do that would be doing HTTP/1.1 over them. However, as far as I know (haven't read the latest state), QUIC uses a more sophisticated mechanism, which also provides header compression and caching across multiple streams, similar to HTTP/2. So it adopted some parts of HTTP/2, but isn't really HTTP/2 over QUIC/SCTP.
The shorter timeout would likely be for downstream app servers in the same datacenter, not for the general internet. That is, adding a route for a specific network not because the default route doesn't work, but for tuning that setting only for that network.
Most of the deadliest DDoS attacks happen over UDP: spoofing, reflection and amplification, just to name a few techniques. Many businesses just deny UDP to protect themselves against the ongoing DDoS threats.
I feel this move won't make the internet a better and safer place, but let's see.
Or let's actually read about QUIC before quickly commenting on it.
QUIC uses two mechanisms to make sure you cannot mount such attacks (a rough sketch follows the list):
* it requires a proof of IP ownership (exactly like TCP sequence numbers) to set up a connection ID (pretty much, you're able to receive the server's response to finalize the connection) [1]
* it requires the client's first message (client hello) to be padded to at least the size of the server's response, which implies that an attacker would need to spend as much bandwidth as the server emits, making the attack no more practical than it is without QUIC. [2]
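A rough server-side sketch of those two rules as described in the QUIC drafts (numbers and names illustrative, not a real API):

    #include <stdbool.h>
    #include <stddef.h>

    #define MIN_INITIAL_SIZE 1200  /* client Initials must be padded at least this much */

    struct path {
        bool   validated;       /* has this address proven it can receive our packets? */
        size_t bytes_received;  /* from this unvalidated address */
        size_t bytes_sent;      /* to this unvalidated address */
    };

    /* Drop tiny Initials: an attacker can't spend fewer bytes than we do. */
    bool accept_initial(size_t datagram_len)
    {
        return datagram_len >= MIN_INITIAL_SIZE;
    }

    /* Until the address is validated, never send more than a small multiple
     * of what that address has sent us, capping the amplification factor. */
    bool may_send(const struct path *p, size_t len)
    {
        if (p->validated)
            return true;
        return p->bytes_sent + len <= 3 * p->bytes_received;
    }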
But, since it runs on UDP, an attacker could attack a few DNS servers and amplify a UDP attack toward a QUIC server. This is true for all reflection and amplification attacks.
Hence, a QUIC server is vulnerable to receiving huge amplification attacks, 100+ Gbps and soon 1 Tbps.
It will not make the internet a safer and better place.
Even video game companies used to use UDP, and they moved away because UDP is too dangerous. They now use TCP with a kind of websocket technology so as not to allow UDP.
Many big enterprises don't allow UDP toward their critical infrastructure.
Games like agar.io and slither.io use websockets over TCP because browsers don't allow you to use UDP packets. The author of one of them (sorry I forget which) blogged about their adventure, and IIRC they still have lag issues, and there isn't a way to resolve them without switching to UDP.
I've personally worked on multiplayer game engine code and I assure you that UDP is far superior for VOIP and game state packets. TCP requires far too much overhead, requires packets be received in order, etc.
These make no sense for a game engine. If we have a sequence of player movements, let's say their X position [1, 2, 3], but we miss a packet [1, -, 3], we're fine; we only want their most recent packet. But the protocol will require acknowledgment and that packet to be resent, so it will require 8 different packets be sent instead of 3! We don't even need the packet!
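A sketch of the unreliable-unordered pattern being described here: tag each state update with a sequence number and keep only the newest, so a lost packet costs nothing:

    #include <stdint.h>
    #include <stdbool.h>

    struct player_state {
        uint32_t seq;   /* sender increments this on every update */
        float    x, y;  /* position; stale values are simply superseded */
    };

    static struct player_state latest;  /* last state we applied */

    bool apply_update(const struct player_state *pkt)
    {
        /* Signed difference handles sequence-number wraparound. */
        if ((int32_t)(pkt->seq - latest.seq) <= 0)
            return false;  /* older than (or equal to) what we have: drop it */
        latest = *pkt;     /* [1, -, 3]: losing "2" costs us nothing */
        return true;
    }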
A lot of games are implementing web-based technologies for their UIs (Panorama, for example) and those will of course use TCP, but that's not what the actual game server uses for VOIP/game state.
Agreed. For most fast paced multiplayer games, UDP would be better. It depends on the specific use case though. TCP has some advantages when you need features like authentication and encryption because packets are guaranteed to be delivered in-order for the life of the connection; this feature is important for TLS block ciphers. With UDP, you may end up having to reinvent some features that are already offered by TCP and your solution might end up worse overall. So you have to find the balance between code simplicity and performance.
TCP can cover more use cases than UDP but for some use cases this will be at the expense of performance.
Slither.io and Agar.io use exclusively TCP but that's only because browsers don't allow you to send/receive UDP packets. If you've ever played those games on any network or device with shaky internet then you'll know those games have lag issues, and the only way to optimize it more would be to switch to UDP (Which they can't)
Indeed, nearly all FPS/MMO/RTS games are realtime and need UDP, only some messages need to be reliable/ACK'd or ordered and TCP is overkill except for turn based games.
Real-time games have to be UDP, or more typically a variation of Reliable UDP (RUDP) [1]. Many networking kits are based on reliable UDP, with common early implementations such as enet [2] or RakNet [3] as the core/base of their network layers (Unity, Unreal, Sony, Oculus and more). RUDP or variants are UDP with channels, ordering and priority, as well as ACKs where needed for reliable/must-deliver messages, through the use of a return UDP ACK datagram for verification. Reliable UDP is a set of service enhancements such as congestion control, retransmission and thinning server algorithms that allow a Real-time Transport Protocol (RTP) for media broadcasts even in the presence of packet loss and network congestion.
Reliable UDP ACKs are commonly used for global events such as game start, game end, player entered, player died, player hit etc.; all other positioning/action is UDP broadcast, with dropped packets lerped [5] and slerped [6] out with interpolation [7] and extrapolation to deal with lag compensation [8] and client prediction [9][10][11]. Sometimes this also involves channels and grid/graph areas, where only messages to players around you or in that area are required to ACK when needed, i.e. player hit/death.
Most large real-time games are just UDP broadcast for 99% of action. TCP is almost never used in real-time action games like FPS, MMO, RTS etc.
Rarely are TCP and UDP combined; rather RUDP, or later something like SCTP, allows streaming/real-time-capable broadcasts with enough verification/reliable messages where needed. Combining TCP and UDP can end up with queuing issues that affect both TCP and UDP traffic [4], so most games just go with a reliable variant of UDP.
Gaffer on Games has a good section on why UDP is used in games [12]
> The web is built on top of TCP, which is a reliable-ordered protocol.
> To deliver data reliably and in order under packet loss, it is necessary for TCP to hold more recent data in a queue while waiting for dropped packets to be resent. Otherwise, data would be delivered out of order.
> This is called head of line blocking and it creates a frustrating and almost comedically tragic problem for game developers. The most recent data they want is delayed while waiting for old data to be resent, but by the time the resent data arrives, it’s too old to be used.
> Unfortunately, there is no way to fix this behavior under TCP. All data must be received reliably and in order. Therefore, the standard solution in the game industry for the past 20 years has been to send game data over UDP instead.
> How this works in practice is that each game develops their own custom protocol on top of UDP, implementing basic reliability as required, while sending the majority of data as unreliable-unordered. This ensures that time series data arrives as quickly as possible without waiting for dropped packets to be resent.
> So, what does this have to do with web games? The main problem for web games today is that game developers have no way to follow this industry best practice in the browser. Instead, web games send their game data over TCP, causing hitches and non-responsiveness due to head of line blocking.
> This is completely unnecessary and could be fixed overnight if web games had some way to send and receive UDP packets.
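To give a flavour of what the custom protocols in the quote above look like on the wire, here's a sketch of the kind of per-packet header a reliable-UDP layer uses (field sizes illustrative, in the style popularised by the Gaffer on Games articles):

    #include <stdint.h>

    struct rudp_header {
        uint16_t seq;       /* sequence number of this packet */
        uint16_t ack;       /* most recent remote sequence number seen */
        uint32_t ack_bits;  /* bitfield acking the 32 packets before `ack` */
        uint8_t  channel;   /* e.g. 0 = unreliable movement, 1 = reliable events */
    };

    /* On receive: record `seq` so the next outgoing header acks it.
     * On send: messages on a reliable channel sit in a resend queue until a
     * later incoming ack/ack_bits covers the packet that carried them;
     * unreliable-channel messages are sent once and forgotten. */

Reliability thus piggybacks on normal traffic instead of blocking it, which is exactly the escape from head-of-line blocking described above.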
That's because other UDP-based protocols don't have TCP-like sequence numbers or other anti-spoofing measures. QUIC has a source address token that makes it hard to spoof.
As far as I understand, the parent's point is that QUIC makes your network vulnerable to attacks from spoofed non-QUIC services, because you have to allow UDP packets to reach your network. Their usual practice is to protect themselves by just dropping all UDP, but as soon as they want a single system to serve QUIC, they can't do that anymore and need to inspect all these UDP packets, even if it's an extreme amount of amplified DDoS garbage.
>The problem is fairness in the presence of network congestion. To a large extent it depends on most TCP implementations using the same congestion control algorithm, or at least algorithms that have the same general behavior. Google's developed a new algorithm called BBR that is robust, but also unfair. When a TCP connection implementing the NewReno algorithm shares a congested link with another one implementing BBR, the BBR grabs the lion's share of the bandwidth:
>QUIC specifies NewReno as default and mentions CUBIC, but the choice of algorithm is left to the implementation. I can easily envision Google using BBR for connections between Chrome and Google properties, which means Google traffic would be prioritized over competitors'. Over time, more players would implement BBR in a race to the bottom (or a tragedy of the commons) and Internet brown-outs as in the 1980s and 1990s would come back.
Race to the bottom is one thing. Yes, so-called “TCP unfriendly” protocols will muscle out any protocol that reacts to probability p random packet drops by a one-over-root-p reduction in throughput. But that does NOT mean that BBR will fail to avoid congestive collapse. The one-over-root-p behavior is outdated and actually harmful on wireless links, anyway; it was designed for the assumption that all losses are congestive losses; but today, many losses are purely random and not an indicator of congestion. BBR and other modern TCP-unfriendly congestion control protocols are a necessary step. TCP-friendliness must die.
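For reference, the "one-over-root-p" behaviour is the classic steady-state TCP throughput relation (the Mathis et al. model):

    throughput \approx \frac{MSS}{RTT} \cdot \frac{C}{\sqrt{p}}, \qquad C \approx \sqrt{3/2}

So, for example, 1% random radio loss caps a 100 ms RTT flow at roughly 1.4 Mbit/s for 1460-byte segments, no matter how fat the actual pipe is, which is exactly why treating every loss as a congestion signal is harmful on wireless links.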
Under the hood, what's happening is that the physical layer is reporting that a packet failed to decode, and the link layer is attempting retransmission at a series of lower and lower fallback rates. It's designed this way because if it fails to deliver a packet, TCP will freak out. There's an RFC about the general case of designing link layers to hide random losses: https://tools.ietf.org/html/rfc3366
It's not so much just because "TCP will freak out" as it is understanding that no transport-layer retransmission algorithm operating on the scale of 100+ ms can be a replacement for 100 µs-scale link-layer reliability mechanisms like FEC and retransmission.
> and Internet brown-outs as in the 1980s and 1990s would come back.
That part definitely needs more justification. An algorithm playing badly with NewReno doesn't mean that we'd be worse off if every system switched to it.
Eagerly waiting for you to start sending patches for every system, including proprietary and unmaintained ones, and ensuring that those get included by default.
Why would he start sending patches for them? He already benefits from BBR himself. He doesn’t need to lift everyone else to enjoy the benefits, unlike the IPv4 to IPv6 transition.
Well I was responding to a claim that it would ruin all traffic, including the people that switched. It's not my job to defend against an entirely different argument.
And this kind of thing needs more numbers in general. Maybe in a large mix of traffic, outside the edge case of a saturated link with two streams, it doesn't dominate too badly. Maybe because TCP incorrectly blocks so often, the average 'victim' user still benefits overall because only some of their devices are unpatched.
Or maybe you shouldn't really be discussing actually deploying things that harm users of the most widely used protocol out there, but instead change the new thing to deal with the world that it lives in.
The existing congestion control algorithms have flaws that make them slow down far too much in certain situations. I don't know if it's possible to make something that doesn't have those flaws and never outcompetes them. We can probably do better than BBR, but I don't know if we can do a perfect job.
Your analogy doesn't work, because there is nothing pedestrians can do to counter cars driving on the sidewalk. But there is a counter to TCP BBR: adopting TCP BBR yourself.
AQM is smarter than it was in the nineties. At worst network admins will have to deploy cake with triple-isolate to ensure fairness between hosts. It is even part of the mainline kernel now.
Is it possible to turn encryption off? If I'm running a cluster of sensors on a remote airgapped network, the ease of using tools like tcpdump and nc far outweigh the need for encryption, especially if one is power constrained.
Presumably you're not being forced to use http/3 and can continue to use whatever you're using now? Or is there a particular reason that you want to move to http/3 for that network?
I searched the page for proxying and tunneling, neither of which is discussed. I'm sure switching to UDP will have dramatic consequences for these features.
I’m no web standards expert, but I’m surprised that this standard will be implemented in userspace rather than in the kernel. Seems like a rather odd choice: will there be a way to string together the requisite kernel calls to achieve the same functionality, or will I be forced to link against a “privileged” library that makes these calls for me?
Part of the advantage is that you can update the protocol implementation independently of the OS. So google does not have to lobby OS vendors to implement TCP fast-open and other extensions to make their stack faster.
Is there much hope of this leading to us being able to set up QUIC transport connections (ie without HTTP) in the browser? This could be huge for browser games and other low latency apps.
WebRTC is powerful but seems to be limited by not having a good portable server implementation, due to complexity.
There is a new protocol called QUIC, which runs on top of UDP. That is, to routers/firewalls/middleboxes etc. that are not specifically aware of QUIC, it will just look like UDP traffic.
QUIC provides TCP-like features (reliable streams, with retransmissions of dropped packets etc.) plus more. As to why QUIC instead of improving TCP; experience has suggested that TCP has essentially become ossified, meaning that middleboxes will drop TCP packets using new features (see e.g. ECN). Thus in QUIC there's a focus on ossification-resistance (mandatory encryption, and as little information as possible exposed outside the encryption, etc.).
HTTP3 runs on top of QUIC (which runs on top of UDP). Hopefully that makes it clearer.
Just another question, are the "middleboxes" well prepared for a massive switch towards UDP traffic? I don't know the network hardware world very well.
Also I thought TCP traffic was benefitting from hardware implementation and optimization, I guess that's also wrong.
> Just another question, are the "middleboxes" well prepared for a massive switch towards UDP traffic? I don't know the network hardware world very well.
I'm not an expert either, but AFAIK QUIC development has been heavily influenced by the requirement to work on the "real" internet with various middleboxes of varying quality.
And it's not like it's going to be an instant change. QUIC is AFAIK already used between Chrome and google/youtube, and once QUIC & HTTP3 are official, it'll be a very long tail.
> Also I thought TCP traffic was benefitting from hardware implementation and optimization, I guess that's also wrong.
Well, going all the way (that is, TOE) hasn't been that popular, and e.g. the Linux kernel has refused to support such cards in the mainline kernel. What is common, and very useful, is checksum and segmentation offloads. Checksums are present in both TCP and UDP, and AFAIK NICs capable of TCP checksum offload can also do UDP checksum offloading. Similarly for segmentation offloading, except in the UDP world it's called fragmentation offload. Though I guess for QUIC, receive fragmentation offloading won't work as long as the kernel and HW don't understand QUIC, as they won't understand that two incoming UDP packets can be merged.
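On the send side there is now something analogous for UDP: Linux 4.18 added UDP GSO ("generalized segmentation offload"), which a QUIC stack can use to hand the kernel one big buffer per send. A sketch (Linux-specific; where the constant lives varies by libc):

    #include <netinet/in.h>
    #include <netinet/udp.h>   /* UDP_SEGMENT; may need <linux/udp.h> on older libcs */
    #include <sys/socket.h>

    void enable_udp_gso(int fd)
    {
        int gso_size = 1200;   /* payload size of each emitted datagram */
        setsockopt(fd, IPPROTO_UDP, UDP_SEGMENT, &gso_size, sizeof gso_size);
        /* A subsequent send() of, say, 12000 bytes now leaves the host as
         * ten 1200-byte UDP datagrams, split by the kernel or the NIC,
         * cutting per-packet syscall overhead much as recvmmsg does on
         * the receive side. */
    }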
Working without javascript requires development time. Running old versions of HTTP doesn't. Version 1.1 is going to be fully supported for a long time.
Running old versions of HTTP requires nothing because people will need to support legacy devices for a long time, and it's obviously already implemented in all major components (Servers, CDNs, Clients, etc.)
You (will) run HTTP/3 because it is more efficient, meaning that your servers will be able to service more requests.
You run HTTP/2 and HTTP/1, because a lot of people are still using that, and you don't want to lose them. This especially applies to mobile devices, many of which are stuck with software that cannot be updated for various reasons.
There's no threat of the majority of websites going HTTP/3 anytime soon. By the time that might be a possibility, Tor will catch up.
Unless they lose 99% of their users, there's enough demand. Your level of pessimism on this specific detail is ridiculous. Tor might not last forever, but it won't be lack of HTTP support that kills it.
If you wait a few years for HTTP/3 to settle, proxies will be available that could be glued into tor inside a weekend hackathon.
> If you wait a few years for HTTP/3 to settle, proxies will be available that could be glued into tor inside a weekend hackathon.
In 16 years they have not managed to support UDP, and now you say of a project (that you are not familiar with...) that they can get it up over a weekend hackathon.
Do you have a contact address? I could mark my calendar for 2025. "some proxy with HTTP >= 3 and HTTP < 3 will exist" is a very basic prediction. It wouldn't have to be integrated into the tor codebase either, just spawned by the tor process.
Tor doesn't have UDP support right now because it doesn't need it for anything. The last 16 years are not equal to the next 16 years, surprisingly enough.
Still not getting it. Every site is hosted. The hosting company needs to spend money on HTTP/2, and be allowed to use it. The networks need to allow TCP through. All the steps down to the transport layer now require legacy maintenance.
Lots of things need to happen for HTTP/2 to 'stay alive'.
Your use of the future tense does not convince me.
Downvoted because of the "you geniuses". You're making a fair point that I hadn't thought of yet (I'm also a fan of Tor), but the delivery method is just plain rude.
If you want to quibble that the five listed aren't always the top five or that the order isn't exactly that listed in the article, or that this isn't valid criteria, go ahead, but this seems like a pretty minor point and tangential to the article.
Can we assume the main motivation for Google is to be able to track mobile users better? Particularly as more users are using various blocking methods, and the legal environment regarding 3rd-party trackers is questionable.
"With QUIC, however, the identifier for a connection is not the traditional concept of a "socket" (the source/destination port/address protocol combination), but a 64-bit identifier assigned to the connection."
A more likely reason is that Google and other internet companies see internet infrastructure as an economic complement: the better and cheaper the internet is, the better and cheaper the total internet + Google services package is. That's why they do so much web performance advocacy, make their own browser, keep trying to kill off ISPs etc. Every dollar you spend on internet service is a dollar you don't spend on digital subscriptions, and every second you spend waiting for pages to load is a second you're not consuming content and watching ads.
I don't see any reason to assume that. As the article explains, portable network connections have been a goal in network research for many years.
Also, Google benefits from a faster and more secure Internet and its employees have the freedom to pursue that in a wide variety of ways. They aren't all mustache-twirling villains.
But I do think the security and privacy implications should be explored. What could an attacker do with a persistent connection?
I didn't think about Google spying even more on users but I also frowned when I read that sentence. It seems to lower the bar for hacking into the connection, but probably that 64 bit identifier is embedded in the encrypted data so it should be hard to get it.
To be fair to the proponents of QUIC I add the next sentence, which depicts a use case:
"This means that as you move around, you can continue with a constant stream uninterrupted from YouTube even as your IP address changes, or continue with a video phone call without it being dropped"
>With QUIC, however, the identifier for a connection is not the traditional concept of a "socket" (the source/destination port/address protocol combination), but a 64-bit identifier assigned to the connection. This means that as you move around, you can continue with a constant stream uninterrupted from YouTube even as your IP address changes, or continue with a video phone call without it being dropped.
This is the ultimate dream of every surveillance company & gov't. Of course Google is solving this "problem."
In a typical scenario, when you connect to a website with one IP address, then change your network and connect again with different IP but using the same device/browser, that website knows that you are the same user. I don't see how having an identifier inside encrypted connection makes anything worse.
This is definitely a risk. There are valid reasons to want to resume a connection as it hops between gateways, but I definitely see potential for abuse here. The identifier doesn’t necessarily tie you to a location or name, but once you can associate it with one, that is a risk.
IIUC, the stream identifier is not a persistent client identifier but more similar to a TCP connection.
So yes, as opposed to TCP, it will be able to work with changing IP addresses, but other than that, it's still a relatively short-term identifier. Google et al will still have to use cookies and whatnot to identify users over longer times.
> I mention this because one of the things that's missing from your education about the OSI Model is that it originally envisioned everyone writing to application layer (7) APIs instead of transport layer (4) APIs. There was supposed to be things like application service elements that would handle things like file transfer and messaging in a standard way for different applications. I think people are increasingly moving to that model, especially driven by Google with go, QUIC, protobufs, and so on.