
> Maybe I'm the only person who thinks that trying to make existing internet protocols faster is wasted energy. But who am I to say anything.

If you have a valid argument to support your claim, why not present it?



They are already established standards, so any optimization you create has to be built as additional functionality supported on top of them. This leads to incompatibility and sometimes even worse performance, as is being experienced here with QUIC.

You can read more about this in The Evolution of the Internet Congestion Control: https://groups.csail.mit.edu/ana/Publications/The_Evolution_...

A good solution is to create a new protocol when the limits of an existing protocol are exceeded. No one anticipated the need for HTTPS long ago, and now we have port 443 for HTTP security. If something needs to be faster but has already hit an arbitrary limit for the sake of backward compatibility, it is better to introduce a new protocol.

I dislike that we're turning into another Reddit, where we point fingers at people over updoots. If you dislike my opinion, please present a counter-opinion that can be challenged in turn.


> A good solution is to create a new protocol when the limits of an existing protocol are exceeded.

It’s not clear to me how this is different to what’s happening. Is your objection that they did it on top of UDP instead of inventing a new transport layer?


No, what I actually meant is that QUIC, as a protocol on top of UDP, was intended to take advantage of UDP's speed to do things faster than some TCP-based protocols did. While there is merit in that, the optimizations made to TCP itself have drastically improved the performance of TCP-based protocols. UDP is still exceptional, but it is like using a crowbar to open a bottle: not exactly the tool intended for the purpose.
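A rough sketch of what "on top of UDP" means in practice (Python standard library; the payload and address below are placeholders, not real QUIC):

    import socket

    # To the OS and to every middlebox on the path, QUIC traffic is nothing
    # but UDP datagrams; all the TCP-like machinery (handshake,
    # retransmission, congestion control) lives in userspace on top of a
    # plain socket like this one.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"\x00" * 64, ("198.51.100.1", 443))  # placeholder payload and address
    sock.close()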

Creating a new protocol from scratch would be effort better spent. A QUICv2 is on the way: https://datatracker.ietf.org/doc/rfc9369/

I don't think it addresses the problems with QUICv1 in terms of lightweight performance and bandwidth, which the post claims QUIC lacks.


Creating a new transport protocol for use on the whole Internet is a massive undertaking, not only in purely technical terms but, much more difficult, in political terms. Getting all of the world's sysadmins to allow your new protocol is an enormous job.

And even if you had the new protocol available today, with excellent implementations done for Linux, Windows, BSD, macOS, iOS, and for F5, Cisco, etc. routers, it would still take an absolute minimum of 5-10 years for it to start becoming available on the wider Internet, and that is if people are desperate to adopt it. The vast majority of the world will not use it for the next 20 years.

The time needed to update hardware to allow and use new protocols will be a massive hurdle for anything like this. And the advantage over just using UDP would have to be monumental to justify such an effort.

The reality is that there will simply not be a new transport protocol used on the wide internet in our lifetimes. Trying to get one to happen is a pipe dream. Any attempts at replacing TCP will just use UDP.


While you're absolutely correct, I think it is interesting to note that your argument could also have applied to the HTTP protocol itself, given how widely HTTP is used.

However, in reality, the people/forces pushing for HTTP/2 and QUIC are the same one(s?) who hold a de facto monopoly on browsers.

So, yes, it's a political issue, and they just implemented their changes on a layer (or even... "app") that they had the most control over.

From a purely "moral" perspective, political expediency probably shouldn't be the reason something gets done, but of course that's what actually happens in the real world...


There are numerous non-HTTP protocols used successfully on the Internet, as long as they run over TCP or UDP. Policing content on TCP port 443 to enforce that it is HTTP/1.1 over TLS is actually extremely rare outside some very demanding corporate networks. If you wanted to send your own new "HTTP/7" traffic today, with some new encapsulation over TLS on port 443, and you controlled both the servers and the clients, I think you would actually run into minimal issues.
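A minimal sketch of that experiment, assuming a hypothetical "HTTP/7" framing; the host and the magic bytes are placeholders:

    import socket
    import ssl

    # Speak an invented protocol inside ordinary TLS on port 443. In
    # practice you would control both endpoints (and could name the
    # protocol via ALPN, e.g. ctx.set_alpn_protocols(["http7"])).
    ctx = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
            tls.sendall(b"HTTP7-HELLO\r\n")  # invented framing, not real HTTP
            print(tls.recv(4096))            # a real HTTP server will just reject this

Middleboxes on the path see only a normal TLS session on 443; they have no way to police what the two endpoints say inside it.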

The problems with SCTP, or any new transport-layer protocol (or any even lower layer protocol), run much deeper than deploying a new protocol on any higher layer.
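A small illustration of where the difficulty lies; the socket call below is standard, but whether it succeeds depends on the OS:

    import socket

    # Opening an SCTP socket is the easy part, and even that can fail on
    # systems without SCTP support:
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
        s.close()
    except (AttributeError, OSError) as exc:
        print("no SCTP here:", exc)
    # The hard part is the path: NATs and firewalls that only understand
    # TCP (IP protocol 6) and UDP (protocol 17) tend to drop SCTP
    # (protocol 132) outright.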


QUICv2 is not really a new standard. It exists merely to rearrange some fields to combat hardcoding/ossification and to exercise implementations' version negotiation logic. It says so right in the abstract:

“Its purpose is to combat various ossification vectors and exercise the version negotiation framework.”
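At the wire level the version is the very first thing after the flags byte, so telling v1 from v2 is a five-byte check (a sketch based on the long-header layout in RFC 9000):

    # First bytes of a QUIC long-header packet: one flags byte (high bit
    # set), then a 32-bit version. RFC 9000 (v1) uses 0x00000001;
    # RFC 9369 (v2) uses 0x6b3343cf.
    def quic_version(packet: bytes) -> int:
        assert packet[0] & 0x80, "not a long-header packet"
        return int.from_bytes(packet[1:5], "big")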


You posted your opinion without any kind of accompanying argument, and it was also quite unclear what you meant. Whining about being a target and being downvoted is not really going to help your case.

I initially understood your first post as: "Let's not try to make the internet faster"

With this reply, you are clarifying your initial post, which was very unclear. Now I understand it as:

"Let's not try to make existing protocols faster, let's make new protocols instead"


More that, if a protocol has hit its limit and you are at a dead end, it is better to build a new one from the ground up. Making the internet faster is great, but you eventually hit a wall. You need to be creative and come up with better solutions.

In fact, our modern network infrastructure still runs on designs intended for much more limited network performance. Our networks are fiber and 5G, roughly 170,000 times faster and wider than at the internet's inception.

Time for a QUICv2

https://datatracker.ietf.org/doc/rfc9369/

But I don't think it addresses the disparity between QUIC and lighter-weight protocols as networks get faster.



