
Please review the HN community guidelines before posting something like this in future


I prefer Newspeak


Thanks in part to the security community and in part to Intel, the budget that can be spent avoiding a syscall just keeps increasing every year.


Centralizing the log files for the majority of Internet services in the hands of a few companies will never be good for security


That's a laugh. Are you seriously implying that people who benefit for free* from Cloudflare's state-of-the-art network firewalls and anti-DDoS shields are somehow more at risk than others because they contracted with a particular security provider instead of rolling their own security?

That assertion will require a lot of hard evidence, because the vast majority of indicators suggest otherwise.

* free, or less than $20/mo


This read more like an advert than a comment, and that is certainly the most loaded question I've ever seen in written English.

Giant centralized troves of personal data are a huge risk to everyone. Civilization pivots and mutates rapidly, and it's clear to nobody at any particular time which way it may lean next, or whether it will lean toward them or on top of them. If anyone had told you 3 weeks ago that there'd be global mass beheadings of century-old statues, I suppose you'd have been laughing then too.


Far from it. It's quite likely going to be over 5 years, if not a decade, before it would be possible to run a pure-HTTP/3 service without risking connectivity problems.

The problem is similar to the IPv6 transition, except that, thanks to the browser monopolies, network providers can at least be made to feel significant pressure to fix their networks quickly. But there will always be some networks that will never be fixed.

edit: for those inexplicably downvoting this, please pay attention to the parent comment's question, and the Internet's long chequered history of adopting new protocols in any setting. TCP port 443 isn't going to magically disappear overnight, or indeed any time soon. This is evidently true because it has been true for all prior transitions. Mail still flows to many places unencrypted despite the standardization of STARTTLS 21 years ago. The long tail has only gotten much longer in those intervening 21 years.


HTTP/2 and HTTP/3 do not change the semantics of HTTP. That means you can run reverse proxies to serve HTTP/1 services as /2 and/or /3, and vice versa. As a result the transition will be a lot easier than the transition to IPv6. I expect that the transition in corporate networks will be faster -- the opposite of the IPv6 case -- because there is a lot of appeal to HTTP/3.
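
For illustration (a minimal sketch using the third-party httpx library; the URL is just a placeholder): the application-level request and response look the same whichever version gets negotiated, which is what makes the proxying transparent.

    import httpx  # third-party; pip install httpx[http2]

    # Request/response semantics are identical whichever version ALPN
    # negotiates; a reverse proxy can therefore terminate h2/h3 on the
    # front and speak plain HTTP/1.1 to the backend.
    with httpx.Client(http2=True) as client:
        r = client.get("https://example.com/")  # placeholder URL
        print(r.http_version, r.status_code)    # "HTTP/2" or "HTTP/1.1"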


> because there is a lot of appeal to HTTP/3

I thought the primary appeal of HTTP/3 is for mobile clients, and bad connections in general, because it circumvents TCP head-of-line blocking and connections can persist across networks.

That doesn't feel terribly relevant in corporate networks.

(not disputing that it's not comparable to the v6 transition)


His claim of corporate appeal is completely unsubstantiated, and I think when he says "corporate" he isn't referring to enterprise.

Enterprises largely won't give two shits about HTTP/3.

Just last week I took ownership of another department's decade old app written in VB.NET WinForms. The former dev team was putting the finishing touches on the C# WebForms refactor. It's been interesting taking a step back in time to Dev practices from 2008.


The appeal of H3 is a smaller memory footprint.


The transition will be faster because a big chunk of the internet is gatewayed through a few big players (e.g. Cloudflare, AWS, CDNs, the new wave of static deployment services like Netlify and Zeit.co, and big websites like Google, Facebook, Netflix, etc.).


You realize that HTTP/2 is still nowhere near being adopted by corporations? It's really far-fetched to plan for HTTP/3 and expect any adoption.


Most _domains_ may not support HTTP/3, but I fully expect that within a few years most _traffic_ will be HTTP/3.


Qualys' "SSL Pulse" says 47.1% of surveyed sites offer HTTP/2 and about 30% offer TLS 1.3.

Increasingly "corporations" out-source this problem to specialists who are only too pleased to use newer technologies with better performance and collect the same money.


Because 47% of sites run on Cloudflare or a similar CDN that started enabling HTTP/2 for non-paying customers.

The application servers running the site do not accept HTTP/2 and most likely can't support it at all (we're a Python shop, and none of the web frameworks we use could do HTTP/2 when we looked into it).


But that's exactly the point: it can be handled transparently, in a much easier way than the IPv6 switch. (IPv6 is much more than just IPv4 + more addresses; worse, many people don't realize that and treat it as IPv4 with more addresses, which has resulted in many problems.)


For HTTP/2 at least, I think the main benefit in terms of performance applies to the "last hop", so you still get a more reliable experience even if the connection between the CDN/proxy and app server is http/1.1


True, though many websites don't have to care about the performance difference.

But for companies like Cloudflare or Google, HTTP/2 means less traffic overhead (multiplexing + header compression) and can save them a lot of bandwidth (i.e. money).


Exactly. AWS and GCP load balancers accept HTTP/2 but allow those requests to be forwarded to backend instances as HTTP/1 because of this.


In fact AWS ALB does not even support HTTP/2 on the backends, which is really annoying.


The fact that UDP involves much smaller PCBs than TCP will alone drive adoption of HTTP/3, because it will free up a fair bit of memory.

More availability of HTTP version gateways in load balancers and other reverse proxies is all that's needed, and that's coming along.
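
As a toy illustration of the state difference (plain sockets, not a QUIC implementation):

    import socket

    # One UDP socket serves any number of peers with no per-peer kernel
    # connection state; a TCP server holds a full PCB plus send/receive
    # buffers for every accepted connection.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 8443))   # port chosen arbitrarily for the example
    while True:
        data, peer = sock.recvfrom(65535)
        sock.sendto(data, peer)    # demultiplex by address, not by socket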


Protocol Control Block, for anyone else wondering: https://www.oreilly.com/library/view/tcpip-illustrated/02016...


You still need to keep your unacknowledged data buffered somewhere. If the kernel isn't holding it then it's in userspace.


More specifically, no PCB at all for UDP itself.


It's not nil. For "connected" UDP sockets it's smaller than TCP's, but not nil because, well, buffers. And for non-connected UDP sockets there are still buffers. The main thing is that you can get away with much less buffer space, because you can always drop packets. Ultimately you get much lower memory pressure from those buffers and the smaller PCBs.
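
For reference, "connecting" a UDP socket just pins the peer address in that PCB (a sketch; the address is from the RFC 5737 documentation range):

    import socket

    # connect() on a UDP socket stores the peer so the kernel can filter
    # inbound datagrams and send()/recv() work without an explicit address.
    # No handshake, no retransmission queue; just the socket buffers.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(("192.0.2.1", 443))  # documentation address, illustrative only
    s.send(b"ping")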


This is only mostly true; they do subtly change some semantics.

As a motivating example, it is possible to encode a colon in a header name in HTTP/2 but not in HTTP/1.1, and this does not violate the RFC, which only blacklists "\0\r\n".
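
You can check this with the Python hpack library (a quick sketch; assumes the hpack package is installed):

    from hpack import Encoder, Decoder  # pip install hpack

    # HPACK, HTTP/2's header encoding, treats field names as opaque octets,
    # so a colon (unparseable in an HTTP/1.1 field name) round-trips fine.
    wire = Encoder().encode([(b"x:odd:name", b"1")])
    print(Decoder().decode(wire))  # the colon survives the round trip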


Why would network providers have to fix their network? Why 5 years to adopt?

HTTP/3 and QUIC are based on UDP. This is very different to the IPv6 transition.


There are a ton of networks (think big corporate networks, schools, shared apartment wifi) that enforce too many weird port restrictions. Many of those places rarely get network or config updates. I don't think it's as bad as IPv6, but there are a lot of people for whom it isn't going to just work out of the box.


Sure, but for those people a not-so-small part of the internet is already broken, like WebSockets being broken, and in turn Slack being broken.


WebSockets are carried over TCP, often bootstrapped over HTTPS on tcp/443.


"Network providers" isn't really the issue. It's all the corporate and school networks that block UDP, or run broken spyware MITM boxes, etc. Chrome has to do a TCP vs UDP race to figure out if UDP connectivity to the internet is broken.


In some cases it is the ISPs. I'll share an example:

Some broadband ISPs struggle with the fact that their customers get compromised and join botnets. Over the last few years UDP has become the DDoS attack vector of choice, and broadband access networks struggle with how to mitigate this. Some try to block the command and control (C2) and some try the customer outreach angle: for example, notifying customers that they have a compromised machine, or putting them in a walled garden with a website that pops up telling them they've been impacted. The problem is that outreach is costly and not super effective. So they found another option: apply throttles on UDP. A few have done this, and it's led to big problems, because from a user-experience standpoint QUIC works just well enough - and then falls apart.

Some of the access providers have changed the throttles to be less aggressive, while others have resorted to being aggressive on the topic ("you should have consulted with us before making a new protocol!").


We might want to have an SCTP-based HTTP/4 down the road. That would surely benefit from some fixes on the network side.


See, that will NEVER happen. Completely impractical. SCTP has a different protocol number in the IP datagram header, and many devices will either drop packets or malfunction when faced with protocol numbers they don't understand. TCP and UDP (protocol numbers 6 and 17) are well supported by practically all devices.


Why not package QUIC in IP directly without UDP in between?


UDP is as clean as you can get: it is more or less free of any overhead, and networks already know UDP. A new IP protocol is far more likely to be rejected in the network.
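
The whole UDP header is 8 bytes (a quick illustration; the ports and length are made up):

    import struct

    # src port, dst port, length (header + payload), checksum: 8 bytes.
    # That is all the per-packet overhead QUIC pays for riding on UDP.
    header = struct.pack("!HHHH", 51820, 443, 8 + 1200, 0)
    print(len(header))  # 8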


This, exactly; it's the actual reason UDP exists. It's a design smell for anything to ask for a new IP protocol number.


A protocol separate from UDP and TCP altogether would suffer from middlebox interference problems.


Ah, you mean the problem of "smart" stupid pipes. These do and always will exist, and this is an opportunity for their operators to realise how detrimental they are to the Internet.


They're inextricably woven into the fabric of the internet, and unfortunately can't be wished away.


> The problem is similar to the IPv6 transition,

Maybe similar, but much smaller than IPv6, with far fewer problems, because most web frameworks will transparently support HTTP/1, HTTP/2 and HTTP/3 for the large majority of use cases.

> Mail still flows to many places unencrypted

Mainly because getting a TLS certificate wasn't that easy for a lot of people in the world until recent years, and because the standard is written in a way that can easily be (mis)understood as requiring you to support unencrypted sending/receiving of mail. (It only requires it for local mail, i.e. same-machine mail implicitly authenticated by the OS user account.)


> Far from it. It's quite likely going to be over 5 years, if not a decade, before it would be possible to run a pure-HTTP/3 service without risking connectivity problems.

There won't be a pure HTTP/3 service anyway. HTTP/3 requires negotiation: it is announced from a previous HTTP/1.1 or /2 request using an "Alt-Svc" header. Typical clients will not try to connect using HTTP/3 directly.
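
You can see the advertisement on any big deployment (a quick check with the standard library; the exact header value will vary):

    import urllib.request

    # An h3 entry in Alt-Svc tells the client it may retry this origin
    # over QUIC on UDP/443 for the next "ma" seconds; discovery itself
    # still rides on a TCP connection.
    with urllib.request.urlopen("https://www.google.com/") as resp:
        print(resp.headers.get("Alt-Svc"))  # e.g. h3=":443"; ma=2592000, ...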


I think we'll see some DNS version of Alt-Svc that doesn't require TCP to bootstrap; see HTTPSSVC and SVCB.
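
Something like this already works with dnspython (assuming a 2.x version that knows the HTTPS rdtype; the output varies by domain):

    import dns.resolver  # pip install dnspython

    # An HTTPS/SVCB record can advertise "alpn=h3" straight from DNS,
    # removing the need for a bootstrap TCP request to discover HTTP/3.
    for rdata in dns.resolver.resolve("cloudflare.com", "HTTPS"):
        print(rdata)  # e.g. 1 . alpn="h2,h3" ...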


> Now all those traffic shaping middle-boxes are worthless

No reasonable implementation of encrypted SNI has been proposed or standardized. Those middleboxes are still more than useful.

AFAIK in QUIC there is some light obfuscation of the ClientHello, but it is not intended as an anti-filtering measure; middleboxes can still fish out any presented name with a little bit of new code.


What about EKR's draft

https://datatracker.ietf.org/doc/draft-ietf-tls-esni/

... do you feel is unreasonable?


Unsurprisingly for a spec from Fastly & Cloudflare, the privacy offered is predicated on the existence of large centralized providers that, due to their size, cannot be blocked. One outcome of this design is that if you want to offer truly private service to an end user, you must have a relationship with one of these providers; otherwise your traffic, even if it implements the spec, is easily identifiable, since its ESNI config was served by some unique, non-shared infrastructure.

In practical terms I guess it is reasonable, but viewed from the angle of how the Internet was originally intended to work, it is obviously abhorrent and self-serving.


eSNI can only effectively prevent people from distinguishing things which aren't otherwise distinguishable anyway. This is not a forgetfulness potion: if you already know by some other means where I'm going, then eSNI doesn't fix that.

If cat-videos.example and elect-bob.example are just names for the same IP, 10.20.30.40, then we can use eSNI to prevent eavesdroppers discovering which one you visited, and that's all.

But if you've got 10.20.30.40 assigned by your ISP for your personal web server, then eSNI can't hide that. You can use eSNI to prevent eavesdroppers learning whether visitors were looking at snakes-control-nasa.example or soup-does-not-exist.example, but if all you host are crazy conspiracy theory sites, they don't need to know which one is which to block all of them. That's just how IP works.

The configuration for eSNI is delivered over DNS, so it's up to you to choose how you want to get secure DNS.


But doesn't that criterion also describe every open source mailing list ever?


Wish all distros shipped https://github.com/vozlt/nginx-module-vts by default. It's a minor pain to self-build


This is cool!


It always strikes me as uncannily brave to see a post like this. So many statements associated with one username...

- I no longer work at $company, and their stuff sucks

- ergo, they fired me, or I left on bad terms

- I clearly didn't get on well with my coworkers, as I'm happy to shit on their work from across the pond

- ergo, I have some deep attitude problem I'm likely to bring to my next placement


Stream is for VOD; live TV is an entirely different problem.


Most of these sessions seem to be pre-recorded, so not especially.

