The reality is that these days there generally isn't any packet loss, so UDP vs TCP isn't as much of an issue as it might have been in the past. In fact, TCP has a number of advantages these days, such as easier firewall traversal, WebSockets, etc.
>The reality is that these days there generally isn't any packet loss
Do you have any sources/data on that? Genuinely interested where you drew that conclusion from. From my vantage point (anecdotal), ISPs and carriers routinely under-provision, congest peerings, and often don't notice a problem until well after a number of customers have complained.
Bufferbloat means that at many potential bottlenecks, you don't get any packet loss until well after the point at which your latency-sensitive application (be it TCP or UDP) has given up hope. Major peering points are pretty much the only place where you won't see >1 second queues when congested, simply because they can't afford buffers that deep for the rates they operate at. But for ordinary everyday occurrences, your congestion will be somewhere around the last mile, where you cannot expect packets to be dropped within a reasonable timeframe.
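To make the last-mile point concrete, here is a minimal back-of-the-envelope sketch. The buffer sizes and link rates are illustrative assumptions, not measurements; the point is just that the same buffer depth that is harmless at core rates turns into over a second of queueing delay on a slow uplink:

```python
# Rough estimate of worst-case queueing delay behind a bloated buffer.
# Buffer sizes and link rates below are illustrative assumptions only.

def drain_time_ms(buffer_bytes: int, link_rate_bps: int) -> float:
    """Time to drain a full buffer at the given link rate, in milliseconds."""
    return buffer_bytes * 8 / link_rate_bps * 1000

# A last-mile uplink: a few hundred KB of buffer on a ~2 Mbps DSL/cable uplink.
print(drain_time_ms(256 * 1024, 2_000_000))       # ~1049 ms -- over a second of queue

# A core/peering router at 10 Gbps: the same buffer depth drains almost instantly,
# so queues stay short even when congested.
print(drain_time_ms(256 * 1024, 10_000_000_000))  # ~0.2 ms
```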
Bufferbloat is the undesirable latency that comes from a router or other network equipment buffering too much data. It is a huge drag on Internet performance created, ironically, by previous attempts to make it work better. The one-sentence summary is “Bloated buffers lead to network-crippling latency spikes.”
The bad news is that bufferbloat is everywhere, in more devices and programs than you can shake a stick at. The good news is, bufferbloat is now, after 4 years of research, development and deployment, relatively easy to fix. See the fq_codel wiki. The even better news is that fixing it may solve a lot of the service problems now addressed by bandwidth caps and metering, making the Internet faster and less expensive for both users and providers.
The introduction below to the problem is extremely old, and we recommend learning about bufferbloat via Van Jacobson's fountain model instead. Although the traffic analogy is close to what actually happens… in the real world, you can't evaporate the excessive cars on the road, which is what we actually do with systems like fq_codel.
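As a rough illustration of what "evaporating the cars" means in practice, here is a heavily simplified, hypothetical sketch of a CoDel-style dropping decision. Real fq_codel adds per-flow queues and a more careful control law; the constants are the commonly cited defaults, everything else is an assumption for illustration:

```python
from dataclasses import dataclass
from typing import Optional

TARGET_MS = 5.0      # acceptable standing queue delay (CoDel's default target)
INTERVAL_MS = 100.0  # delay must persist this long before dropping starts

@dataclass
class CodelState:
    first_above_time: Optional[float] = None  # when sojourn time first exceeded target

def should_drop(sojourn_ms: float, now_ms: float, state: CodelState) -> bool:
    """Simplified CoDel decision at dequeue time: drop once queueing delay has
    stayed above TARGET_MS for at least INTERVAL_MS.  (Omits the real algorithm's
    dropping-state schedule, which drops at an increasing rate.)"""
    if sojourn_ms < TARGET_MS:
        state.first_above_time = None          # queue drained enough; reset the clock
        return False
    if state.first_above_time is None:
        state.first_above_time = now_ms + INTERVAL_MS  # start the persistence timer
        return False
    return now_ms >= state.first_above_time    # persistent standing queue: drop the packet
```

The key design point this tries to show: the drop decision is driven by how long packets sit in the queue, not by how full the buffer is, which is why it controls latency even behind a huge buffer.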
I measure packet loss on my cable connection every 5 minutes, around the clock. Also, I've been developing realtime collaborative applications over TCP for over 20 years.
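For anyone curious what that kind of monitoring might look like, here's a minimal sketch. The probe target, burst size, and 5-minute cadence are assumptions, and it relies on a Unix-style `ping` binary being on the PATH:

```python
import re
import subprocess
import time

TARGET = "8.8.8.8"   # assumed probe target; pick a host you're allowed to ping
PROBES = 20          # packets per measurement round
PERIOD_S = 5 * 60    # one measurement every 5 minutes

def measure_loss() -> float:
    """Run a burst of pings and parse the reported packet-loss percentage."""
    out = subprocess.run(
        ["ping", "-c", str(PROBES), TARGET],
        capture_output=True, text=True
    ).stdout
    match = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
    return float(match.group(1)) if match else float("nan")

while True:
    print(time.strftime("%Y-%m-%d %H:%M:%S"), f"{measure_loss():.1f}% loss")
    time.sleep(PERIOD_S)
```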
I work on protocols that use sketchy wifi on mobile units that roam among access points. Sometimes they roam into RF black holes. TCP is all kinds of b0rk3d for what I am doing.
I'm not sure how UDP is going to help if you have no connectivity.
These days TCP copes as well as UDP with temporary complete loss of connectivity, due to fast recovery. It is really just packet loss that kills TCP, but that isn't generally an issue these days if you're in the USA/Canada/Europe/Japan/Korea/Australia, and your wifi isn't crapping out.
> The reality is that these days there generally isn't any packet loss
I don't know which reality bubble you live in, but this is utterly false. I live in the countryside and routinely get an average of 40% packet loss to many, many websites. There are plenty of opportunities for packets to get lost somewhere between a client and a server.
That article is about network failures and misconfigurations. It's looking at reliability on a vastly longer timescale than the domain of congestion control operating on traffic from interactive applications. Basically all of the problems described in that article would simply result in a game dropping the connection, whether it was using TCP or UDP.