Your solution to bandwidth congestion is for everyone to use 3x+ more bandwidth than they need?


Is this a bot?

Firstly, I'm not solving anything. I'm explaining why fallback URLs are not equivalent to CDNs.

You don't use a CDN because your site doesn't work, you use it because it's faster.

Secondly, no: doing an occasional speed test using data you'd be downloading anyway, then selecting an endpoint based on that until the next test, does not increase bandwidth usage by 3x.
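
A rough sketch of what I mean, in TypeScript against the browser fetch API (the host list and probe frequency are made up for illustration): route one request in every N through a different mirror, time it, and otherwise stick with the fastest host measured so far. No extra downloads happen; measurements ride on requests you'd make anyway.

    // Sketch only: cdnHosts and PROBE_EVERY_N_REQUESTS are illustrative values.
    const cdnHosts = ["cdn-a.example.com", "cdn-b.example.com", "cdn-c.example.com"];
    const speed = new Map<string, number>(); // host -> last measured bytes/ms
    let probeIndex = 0;
    let requestCount = 0;
    const PROBE_EVERY_N_REQUESTS = 20;

    function fastestHost(): string {
      let best = cdnHosts[0];
      for (const host of cdnHosts) {
        if ((speed.get(host) ?? 0) > (speed.get(best) ?? 0)) best = host;
      }
      return best;
    }

    async function fetchAsset(path: string): Promise<ArrayBuffer> {
      // Occasionally route a request we were making anyway through another
      // host, purely to refresh its speed measurement.
      const probing = requestCount++ % PROBE_EVERY_N_REQUESTS === 0;
      const host = probing ? cdnHosts[probeIndex++ % cdnHosts.length] : fastestHost();
      const start = performance.now();
      const res = await fetch(`https://${host}${path}`);
      const body = await res.arrayBuffer();
      speed.set(host, body.byteLength / (performance.now() - start));
      return body;
    }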

Baffled.


Some people use CDNs as regular static-site web hosts, or hosts of an SPA client JS blob when they have an otherwise-"serverless" architecture. CDNs are not always about serving large media assets.


They're not consuming 3x bandwidth if they're bailing out after downloading less than a kilobyte of a video that's tens or hundreds of megabytes in size.

That extra bandwidth is a rounding error in the grand scheme of things.

It could be important, though, for the client to signal the server to close the connection. Theoretically the connection would drop after several seconds and the server would stop transmitting, but I could imagine some middleware cheerfully downloading the whole stream and throwing it away.
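
In a browser, the usual way to send that signal is AbortController, which tears down the connection so a well-behaved server or proxy stops transmitting. A sketch, with the 1 kB threshold purely illustrative:

    // Read only enough bytes to take a timing sample, then abort so the
    // server (and any middleware in between) knows to stop sending.
    async function sampleFirstBytes(url: string, limit = 1024): Promise<number> {
      const controller = new AbortController();
      const start = performance.now();
      const res = await fetch(url, { signal: controller.signal });
      const reader = res.body!.getReader();
      let received = 0;
      while (received < limit) {
        const { done, value } = await reader.read();
        if (done) break;
        received += value.byteLength;
      }
      controller.abort(); // cancels the rest of the transfer
      return performance.now() - start;
    }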


The problem with this approach is that you're only considering time to first byte. That's part of the equation, especially for smaller files like scripts, but for larger files like video segments, throughput matters more. If you only wait for 1 kB to download, you essentially measure time to first byte.

Also, the instruction to stop the download is not instantaneous: by the time the client realizes it has downloaded 1 kB, the server might already have sent the whole video segment from its side, so this is not the way to go if you want to reduce congestion.
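
To make the distinction concrete, here's a sketch that measures time to first byte and sustained throughput separately; the point is that a tiny sample only ever captures the first number:

    interface Timing { ttfbMs: number; throughputBps: number }

    // TTFB is the latency to the first chunk; throughput only becomes
    // meaningful over a window large enough for TCP's congestion window
    // to open up, which a 1 kB sample never sees.
    async function measure(url: string): Promise<Timing> {
      const start = performance.now();
      const res = await fetch(url);
      const reader = res.body!.getReader();
      const first = await reader.read();
      const ttfbMs = performance.now() - start;
      let bytes = first.value?.byteLength ?? 0;
      for (;;) {
        const { done, value } = await reader.read();
        if (done) break;
        bytes += value.byteLength;
      }
      const transferMs = Math.max(1, performance.now() - start - ttfbMs);
      return { ttfbMs, throughputBps: (bytes * 8 * 1000) / transferMs };
    }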


For video you can fetch different chunks from different endpoints simultaneously (not the same chunk), so no bandwidth is wasted at all.


This is more or less what we do with our client-side switching solution at Streamroot: we first make sure the user has enough video segments in their buffer, and then we fetch the next video segments from different CDNs, so we're able to compare the download speeds, latency and RTT between the different CDNs without adding any overhead. You don't necessarily download the segments at the same time, but with some estimation and smoothing algorithms you're able to compute meaningful scores for each CDN. The concepts and challenges here are very close to those of bandwidth estimation for HTTP adaptive bitrate streaming formats like HLS and DASH, because the network is unstable and you can only estimate the bandwidth from discrete segment measurements. [0]
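
I can't speak to Streamroot's actual scoring code, but the smoothing step described above is typically something like a per-CDN exponentially weighted moving average over discrete segment measurements. A sketch, with alpha as an illustrative weight:

    // Per-CDN EWMA over discrete segment downloads; alpha is illustrative.
    class CdnScores {
      private estimates = new Map<string, number>();
      constructor(private alpha = 0.3) {}

      // Feed one measurement: a segment of `bytes` downloaded in `ms`.
      record(cdn: string, bytes: number, ms: number): void {
        const sample = (bytes * 8 * 1000) / ms; // bits per second
        const prev = this.estimates.get(cdn);
        this.estimates.set(
          cdn,
          prev === undefined ? sample : this.alpha * sample + (1 - this.alpha) * prev,
        );
      }

      best(): string | undefined {
        let best: string | undefined;
        for (const [cdn, bps] of this.estimates) {
          if (best === undefined || bps > this.estimates.get(best)!) best = cdn;
        }
        return best;
      }
    }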

If you want to do it at the sub-asset level (a video segment, an image, or a JS file), it's possible with byte-range requests (ask for bytes 0-100 from CDN A and 101-200 from CDN B), but then you still add the overhead of establishing an extra TCP connection, and since you need the whole asset before you can use it, the effective download speed is limited by the slower of the two.
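
A sketch of that byte-range split (URLs hypothetical, and the servers are assumed to honor Range requests); note that the reassembled asset is only usable once the slower half arrives, which is exactly the limitation above:

    // Fetch one asset in two halves from two mirrors via Range requests,
    // then reassemble; completion is gated by the slower of the two.
    async function fetchSplit(urlA: string, urlB: string, size: number): Promise<Uint8Array> {
      const mid = Math.floor(size / 2);
      const [resA, resB] = await Promise.all([
        fetch(urlA, { headers: { Range: `bytes=0-${mid - 1}` } }),
        fetch(urlB, { headers: { Range: `bytes=${mid}-${size - 1}` } }),
      ]);
      const [bufA, bufB] = await Promise.all([resA.arrayBuffer(), resB.arrayBuffer()]);
      const out = new Uint8Array(size);
      out.set(new Uint8Array(bufA), 0);
      out.set(new Uint8Array(bufB), mid);
      return out;
    }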

[0] https://blog.streamroot.io/abr-algorithms-work-optimize-stac...

