I'm still confused by all of this. Surely most video services are capable of adapting to lower bandwidth (and congested) links very easily. I personally have a fibre line directly to my house, and whilst the peering isn't great with my provider (Frontier), the network is more than capable of doing basically any video task. Why should the video bitrate be dropped because some second rate ISPs are oversubscribing super badly? Isn't all of this totally against the principles of net neutrality?
The unplanned-for surge has been in videoconferencing (for work and for school), and while yes, those services adapt to lower bandwidth in theory, they're terribly affected by the higher latency that tends to come with it.
The problem isn't that your videoconferencing goes from 1080p to 360p -- it's that it stutters and stops working for seconds at a time, so students completely miss content, and it introduces a lag that makes it impossible to have conversations because people are always talking over each other.
YouTube, by contrast, buffers ahead, so playback tends to be smooth regardless.
And clearly this has nothing to do with net neutrality, which is about ISPs throttling types of content. This is the content provider itself. And they're giving you the option to manually go back to hi-def.
So I hardly see what there is to complain about. YouTube is just trying to make sure kids are more likely to have usable online classes.
> The unplanned-for surge has been in videoconferencing (for work and for school)
Thing is, ISPs in the UK (including mine) are saying that typical evening use is 10-20x typical daytime use, so the uptick in remote working isn't actually an issue for them at all.
Might be worth mentioning that current evening use is probably much higher than typical evening use, due to people staying home & streaming TV/movies or video chatting with friends, instead of going out & visiting people etc.
So remote working during daytime might not be an issue, but increased recreational use of internet in the evening might be. (I'm just speculating here, haven't seen anything from the ISPs about this)
It's most likely the combination of lots of people working from home and people in retail etc. sitting at home midday streaming content, so now the peak lasts all day.
They've seen a 35-60% increase in daytime traffic, but it's still only half of what they're seeing in the evenings and nowhere near their network capacity.
I'm not saying that it isn't reasonable to lower streaming quality (a luxury item) at a time when the internet has become an even more critical link in our well-being - be that economic, social, or even physical as information on the physical danger gets transmitted over it. It's also reasonable to guess that there are a lot of people at home streaming content mid-day. However, the data seems to indicate that while daytime usage is up, it's still well below evening peaks and the increase in traffic isn't as huge as many people are probably thinking.
BT may claim it is nowhere near their capacity, but everybody on my team has seen really noticeable degradation in home internet performance over the last couple of weeks (this includes people who work from home daily).
And which home internet is that? Or have your co-workers bought from one of the cheaper providers?
That is assuming the bottleneck is not caused by WiFi - if there are others using the WiFi spectrum (kids, partner, neighbours, etc.) that could explain it.
Lots of companies do, and have had to quickly buy new licenses, and are finding out they don't have the network capacity to handle all their employees working from home.
> The problem isn't that your videoconferencing goes from 1080p to 360p -- it's that it stutters and stops working for seconds at a time, so students completely miss content, and it introduces a lag
Bufferbloat.
Ideally, at least for TCP applications, with perfect network congestion control, reduced bandwidth due to streaming should only degrade data rate, but never massively increase the latency. A slight increase is normal, but a huge increase is not. Unfortunately, in real life a huge increase is often exactly what happens, and this aspect is often overlooked by vendors, developers, and sysadmins, making things even worse than they need to be.
Network hardware and software are optimized for throughput, and their performance under heavy traffic is often not tested. One common practice is using a buffer that is as large as possible, so that you can push the data rate to the maximum and avoid dropping packets at all cost. This is called bufferbloat, and it's a latency disaster. When the upstream bandwidth is saturated, the buffer keeps accepting more bytes/packets, effectively breaking the feedback signal used by congestion control algorithms, so they never kick in in a timely manner, and the FIFO nature of the buffer means the latency of a saturated link is always as high as the time it takes to drain the buffer (a 1 MB buffer draining at 1 Mbit/s adds roughly eight seconds of queueing delay). As long as something is using the bandwidth, the network will be slow as a snail.

When people start having this problem, they simply blame the bandwidth-intensive application and set up QoS to prioritize things like DNS and VoIP and to punish downloaders (big networks usually have incredibly complex and elaborate rules). This appears to "fix" the problem, but it does not solve the underlying one; all it does is move the problem to the low-priority queue. Another trouble is that unwanted buffers can exist everywhere - in applications, operating systems, and underlying hardware - and since there's a conflict between throughput and latency, and some buffers are technically necessary (wireless networks are the worst offender), there's still a long way to go to fix everything.
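If you control the router on your side of the bottleneck, the standard fix is to do the queueing yourself: shape traffic to just below the real line rate so the dumb buffer downstream never fills, and use a flow-queueing AQM instead of a plain FIFO. A rough sketch for a Linux router (the interface name and the 18mbit rate are made-up examples; set the rate slightly below what your uplink actually delivers):

    # take over queueing: shape below the true uplink rate...
    tc qdisc replace dev eth0 root handle 1: htb default 1
    tc class add dev eth0 parent 1: classid 1:1 htb rate 18mbit
    # ...and manage the queue with fq_codel instead of a deep FIFO
    tc qdisc add dev eth0 parent 1:1 fq_codel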
I am only speaking from my experience of managing small LANs, but I got most of the information from Dave Täht and Jim Gettys, who have been working on this issue for the last 10 years. Gettys writes extensively about bufferbloat on his blog, and says similar issues exist at a much larger scale, including the ISP edge and backbone. Perhaps the vast majority of the cases we are seeing here are caused by a lack of bandwidth and not related to bufferbloat, but I'd guess bufferbloat is still to blame for at least 20% of them.
> Ideally, at least for TCP applications, with perfect network congestion control, reduced bandwidth due to streaming should only degrade data rate, but never massively increase the latency
Fundamentally, TCP is designed to get all the bits to their destination. That's great for file transfer, but not ideal for streaming video -- at the transport level, there's no way to decouple data rate from latency because it's not allowed to drop anything.
UDP is the latency-prioritized transport protocol for the internet. It drops packets that it can't handle because they are expected to contain out-of-date information anyway. It's generally more complicated to use because all of the flow control needs to happen at the application layer.
This is the best explanation I've read about TCP vs UDP in more than two decades!
I like that it describes them in terms of goals ("If you prioritise X over Y, then use..."), not mechanics like stream-oriented.
I wish more technology choices were presented in terms of the fundamental tradeoffs in high-level goals rather than mechanics of the particular abstraction.
> at the transport level, there’s no way to decouple data rate from latency because it’s not allowed to drop anything.
Yes, fundamentally, TCP is not designed for applications with controlled latency.
I only used TCP as an example here because TCP is supposed to provide a reasonable congestion and flow control mechanism at the protocol level; its behavior under congestion is well-specified and suitable as an example. Unlike UDP, where it depends on the application, as you said:
> It's generally more complicated to use because all of the flow control needs to happen at the application layer.
I'm actually aware of some UDP programs that deliberately send multiple packets in a row without any regard for flow control, winning extra throughput at the expense of other programs.
On the other hand, the TCP specification includes Slow Start, Sliding Window, congestion control algorithms, and more recently Explicit Congestion Notification. A main design goal is the detection of available bandwidth in an end-to-end manner, so that the sender does not send at an unnecessarily high data rate which the network or the receiver is incapable of processing. Packet drops are usually required in TCP as a feedback signal to enable detection of mismatched bandwidth. That's not always optimal (a well-known example is wireless networks - a dropped packet often does not indicate mismatched bandwidth), and it doesn't have to use dropped packets as the feedback signal (TCP Vegas & TCP BBR use latency, ECN uses a status bit), but in practice the vast majority of systems (BIC, CUBIC, Veno, CTCP) do require it, and they represent 80%+ of installations.
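If you're curious which algorithm your own Linux machine uses, the kernel exposes it via sysctl (the output depends on your kernel and loaded modules):

    # algorithm used for new TCP connections (cubic on most distros, bbr if enabled)
    sysctl net.ipv4.tcp_congestion_control
    # algorithms currently available to switch to
    sysctl net.ipv4.tcp_available_congestion_control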
> UDP is the latency-prioritized transport protocol for the internet. It drops packets that it can’t handle because they are expected to contain out-of-date information anyway.
It's supposed to work that way. Unfortunately, two issues exist. The first challenge is applying the congestion control techniques developed against bufferbloat to UDP; this needs to be done on a per-app basis. The other issue is that bufferbloat is not simply a breakdown of flow control, but the existence of large, uncontrolled FIFO buffers in networking programs, drivers and devices without queue management designed with latency in mind, which leads to an inherent latency on a saturated link. It's equally detrimental to TCP and UDP, as it takes an unusually long time for any packet to leave the FIFO queue. Breaking protocol-level flow control is only an additional effect that keeps adding fuel to the fire.
One of the earliest examples was the large TX buffer in DOCSIS modems, discovered, documented and reported by Jim Gettys, and fixed in a later DOCSIS specification. Another was the uncontrolled TX queue of 2000 frames in many Ethernet drivers that plagued early Linux kernels. At that time, fixing the problem on a LAN was as easy as "ifconfig eth0 txqueuelen 1", effectively disabling it. Later, queue management was introduced and it's no longer a problem.
But many devices are still broken. The only way to stop it is to ensure the uncontrolled buffer never has a chance to fill, either by upgrading the upstream bandwidth to be greater than one could ever use, or by traffic-policing your own TX bandwidth. Then you can only cross your fingers and hope there isn't another uncontrolled buffer in another piece of network equipment downstream (this usually works at home, where there's only a router and a PC that run the latest Linux/BSD systems). And even when that's the case, TX buffers in the opposite direction are not under your control, and the ISP edge is a place where they can exist.
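A crude way to check whether your own link is affected: keep a ping running, then saturate the connection and compare the loaded round-trip times with the idle baseline (8.8.8.8 is just a convenient always-up target; any stable host will do). A few extra tens of milliseconds under load is normal; hundreds of milliseconds or more is bufferbloat.

    # terminal 1: watch latency continuously
    ping 8.8.8.8
    # terminal 2: start a large upload and/or download to saturate the link,
    # then compare loaded vs. idle ping times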
Please report back to the list so we can refine our knowledge. (It's like Covid-19 testing. We cannot know the incidence of infection if we don't test lots and lots of people to find a true rate...) Thanks.
> YouTube streams over QUIC, which measures and mitigates buffer delays.
YouTube itself largely mitigates buffer delays; it uses BBR with TCP and QUIC over UDP. Good news for YouTube and its users, but what happens when bufferbloat-aware flows interact with other applications? For example, under congestion, my observation is that TCP BBR usually "wins" the game against TCP Cubic/Reno and is able to grab a large portion of bandwidth that others couldn't. A 10x difference in throughput is not unheard of, and the bandwidth share won't be "fair" until everyone has upgraded. I'm not sure whether that's the case for YouTube, but I guess it's a possibility.
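For reference, switching a Linux sender to BBR is just two sysctls (BBR is normally paired with the fq pacing qdisc); whether doing so is polite to the Cubic/Reno flows sharing your bottleneck is exactly the fairness question above:

    # enable packet pacing, the recommended companion qdisc for BBR
    sysctl -w net.core.default_qdisc=fq
    # use BBR for new TCP connections
    sysctl -w net.ipv4.tcp_congestion_control=bbr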
AIUI Google uses BBR, which attempts not to fill up buffers and thus adds only minimal extra latency. Plus, in many cases the last mile is the bottleneck, in which case proper queue management on the router or its counterpart can keep the latency low. I think PIE-based AQM is a mandatory part of DOCSIS 3.1 for that reason.
Congestion should only result in limited bandwidth, not in significant latency spikes, and YouTube will automatically lower resolution anyway if the available bandwidth is too low to sustain playback.
I think you're talking about a theoretical best-case scenario for YouTube where I'm talking about the real world with videoconferencing.
For people at home, the bottleneck is often something like their apartment building or block or neighborhood -- wherever their ISP didn't provision enough. Obviously "proper queue management" on your home router doesn't affect this. To the contrary, a series of routers full of buffers means that congestion absolutely does increase latency, and drastically.
"Bufferbloat" is a known phenomenon. We may wish it didn't exist, but it does.
So your claim that "no changes should be needed" to ensure videoconferencing latency is unimpacted is just wrong, and flies in the face of all real-world evidence.
Have you ever tried personally using videoconferencing on a saturated network? This isn't theory... it's something you can try yourself. (I used to work in videoconferencing and latency caused by network congestion was the #1 quality issue.)
I also wonder how much of this is caused by poorly tuned home WiFi setups. You have congested airwaves, poor channel choices (what’s this channel 2? Nobody’s on that!), and the tendency for everyone to run their access points on MAX power to increase their “range” - all ways to totally demolish real world multi client WiFi throughput.
> Have you ever tried personally using videoconferencing on a saturated network?
Yes, my domestic connection often is saturated by p2p software. My router is running CAKE, thus video conferencing runs fine even when the link is close to saturation.
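For anyone curious, on a Linux-based router that's roughly a one-liner (interface name and rate are placeholders; set the rate a bit below what the uplink really delivers):

    tc qdisc replace dev eth0 root cake bandwidth 18mbit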
> For people at home, the bottleneck is often something like their apartment building or block or neighborhood -- wherever their ISP didn't provision enough.
In that case better AQM would have to be on those bottlenecks of course.
My 4yo did a "circle time" over Zoom with his preschool class yesterday. I was skeptical, but he loved it and it was a huge morale boost to see his friends. This is a hard time for little kids.
My martial arts dojo is running “virtual” sessions. Is there some theoretical pedagogical advantage to this over nothing? Yes. But the real advantage is having a community to connect to. The video is important for that.
I don't even mention it if others don't turn on their video, but I have to admit I struggle much more to follow someone through audio alone. Ironically I listen to a lot of podcasts and audiobooks, but there I can just press the button to "jump back 30s", which I do on most information dense episodes.
I do wonder if we could adapt deepfake tech to improve compression. A keyframe plus tracking of facial features could provide a very low bandwidth simulacrum of video calls.
This is a plot device in Vernor Vinge's "A Fire Upon The Deep". Something seems off in communications from an apparently friendly spaceship. Use of cached facial models means that whoever/whatever controls the ship now is just puppeting the models of the crew over a deliberately low bandwidth link.
I've switched to working from home, like the rest of the company, I don't have a webcam and I think it could help, perhaps not for all meetings, but at least for the daily SCRUM meeting.
I mean, everyone is already isolated. You want to remove a few people showing their faces? It’s currently the only few minutes of face to face contact a lot of us have.
> No one is forcing you to download video you don't want.
If it's a work meeting... then yes, they are absolutely being forced to download video they don't want. They could possibly minimize the window, but then they lose out on key things like muting and screensharing. I'm not aware of any meeting software that lets you turn off the video of other participants (though you can usually do it for audio).
We share tons of examples of work via video, so maybe you get a bunch of white dudes (and yes, the higher corporate meetings are just that), but at team scale it is very useful.
YouTube and other sites are voluntarily lowering their bandwidth because 4K video is probably less important than somebody's online class. I'm sure they save a pretty penny as well.
That's true (although in some cases it may be hard to see text on a blackboard); anyway, it's just about YouTube making SD the default, and the HD option is still available.
> Users will still be able to watch in high definition if they want, but will have to choose to do so.
Actually, we (a tribal community college) went with GoToMeeting because it was $19 per instructor (we only needed 20 - some classes in the vocational area are going to do limited face-to-face) and a lot easier for the instructors to deal with. We will probably download all the videos, but I'm not sure we are going to post them to YouTube or let the students catch the replay at GoToMeeting.
No, they can just follow the link and rerun it with all the uploads and a transcript.
[edit]Also, I'm pretty sure that since our instructors are at home they would probably prefer we put something a little more polished and formal on YouTube to represent their work[/edit]
That's more applicable to large classes at the college level.
College seminars, as well as all of grade school, are conducted live. High school doesn't have "lectures" the way college does, or middle school or elementary school.
It's a really hard option for instructors who aren't computer folks and are stuck at home. The Business tier allows for 250 participants, which is damn good for most classes. I get the feeling a lot of schools will come up with the money for a couple of months of a paid service. We're out $380 a month for our 20 instructors. Also, some of the paid services are allowing schools to use free accounts.
It really shouldn't be hard. There are endless amounts of guides on how to set up OBS and stream to YouTube. You can even get live support by volunteers within a few minutes of asking for help.
Most of the time you don't even need to really set up much. You just take the key and plug it into OBS.
My wife streams using OBS to Twitch. She set it up and it worked for her. Until it didn't suddenly. Then came the very long googling and debugging sessions.
Why, when I can spend $380 and have a setup that is easy to use and doesn't require training? They watch a nice video and we don't have to do a deploy of OBS to a bunch of laptops.
I'm all for free or open source software, but I'm still dealing with a commercial entity either way. Plus, I don't have to worry about YouTube's algorithm flagging some content.
YouTube's algorithm will demonetize you, but it's very rare that they will make your content inaccessible.
Also, the probability is very high that if you run into problems with OBS, you're going to run into problems with whatever commercial solution you go with. Most of the problems people have with video capture and streaming don't stem from the specific recording software you use. They stem from how your drivers, OS, and settings work together on your hardware.
> YouTube's algorithm will demonetize you, but it's very rare that they will make your content inaccessible.
We deal with Native American history, and YouTube is a true pain in the butt where the brutality of the truth is concerned. Plus, the instructors have many more tools available in GoToMeeting, and would also like the things we put on YouTube to be a bit more polished than a broadcast from their dining room table. Also, both YouTube and GoToMeeting are commercial, non-open-source solutions.
> Also, the probability is very high that if you run into problems with OBS, you're going to run into problems with whatever commercial solution you go with.
OBS is a very fine, professional solution for streaming. Its capabilities bring complications, and it is not specifically designed for facilitating meetings. GoToMeeting has a paid professional support staff to answer any questions and provides tools to help.
> Most of the problems people have with video capture and streaming don't stem from the specific recording software you use.
I would need to see proof of this statement. GoToMeeting seems very easy for our instructors to start and work with.
I love open source, but we are dealing with something here that requires a solution now, which means spending $380/month on GoToMeeting is a very good idea. We cannot do face-to-face training, and deploying OBS and a working setup to 20 laptops at home just doesn't seem like a good use of time, and is way more costly than the $380/mo for a couple of months. Plus, it's an easy recommendation for other institutions that don't have full IT departments.
Here's how setting up OBS goes for most users: you install the application and copy paste the stream key into the right box. Then you add what you want captured into a scene and you're done.
Most of the fiddling happens because of your specific hardware and the encoders you have. A commercial solution faces the same problem.
On OBS? There are hundreds, likely thousands of them, but most of them are about optimizing it for bitrate. That's not really needed for a stream for teaching.
What if people have paid for that bandwidth, though - dedicated lines, or lines that just aren't so massively oversubscribed. I kind of don't think this should be such a blanket reduction across all countries and all regions, just like the Netflix reduction. It's easier and cheaper for them this way, but it really should be more intelligent.
And considering the major uptick is caused by video for work and for classes, I would guess that is 99% domestic, so what would be the cause of a massive increase in international traffic?
...and initially those videos have to be seeded by central servers.
CDNs are caches. I am sure it gets much more complex with systems at Youtube scale, but they don't keep all files at all edges. Furthermore, they know what the fuck they are doing, and if ISPs and governments are going "shit, our bandwidth is nuts, hey Youtube/Netflix can you cool it a little?" then maybe there is something to it.
Why and how is everyone an armchair NOC operator/epidemiologist/supply chain manager? How is it so hard to believe that when you get past certain politicians/bureaucracies, there are smart people working to figure out these hard problems that likely know a little bit more than we do?
Google has private fiber to shovel things between their data centers. And the point of a CDN is that it acts as a force multiplier: you cache once and it can serve the same video to many users, which in turn means far less data needs to traverse many hops.
Could watching over a VPN to other territories be an issue? Or would that just be noise in the metrics? I'm primarily thinking of something like watching Disney+ from countries outside its current distribution.
It's not just the ISPs that are oversubscribed. Google probably didn't budget for people all around the world suddenly sitting at home watching YouTube.
> Why should the video bitrate be dropped because some second rate ISPs are oversubscribing super badly? Isn't all of this totally against the principles of net neutrality?
No. This is entirely separate from net neutrality. That principle says, "No ISP should discriminate on the basis of content."
In the case of an oversubscribed ISP, the "magic of the markets" that so many lobbyists talk about is that you can simply switch to a new provider. Oh, you only have one provider? Tough cookies.
you're making a big assumption that most of the world has great in-home broadband access. this isn't true in many places (e.g. India). mobile networks would get crushed by the uptick in traffic combined with the shift in geographic usage that will be caused by the lockdown
Yes, you're right. Better to let this play out and see what happens; people's lives are less important than your ability to stream HD content because you have a great connection.
Others are commenting on the second-rate ISPs that you mention. I will only comment on the "net neutrality". This virus can potentially kill every man and woman over 80. That's our parents, grandparents, uncles and aunts. If I get to watch videos of cats for two months at 480p instead of 1080p, and that may save 500 people, bring it on!
Unfortunately there may be (or will be) people who, because they can't enjoy a movie, will go for a walk instead. That doesn't help. So we're captive to the second-rate ISPs and the stupid people who want to go for a walk because "the internet is slow". I'll take that bullet just to save even ONE grandparent.
100% agreed on going for walks. The issue is that it is being abused/misused, and thus we see one (European) country after the next imposing bans on movement.
Common sense is not so common. If govs encourage "people going for a walk" then majority of Greeks, Italians, Spaniards would be in the parks with a bottle of wine and sandwiches having picnics "while out on a walk".
It's the stupid ones that ruin it for all. The ones who don't want to limit themselves for a couple of weeks for the greater good.
The problem may become that if we all exercise the "right to picnic" at the same time, the nearest green hill will have 5000 people, which defeats the purpose.
If more people can work from home or attend classes online because teleconferencing and online classes are functional, and not unusable and stuttering, it means more people stay home.
I've been working from home (80+%) for the last 5 years and it's the best thing that ever happened to me. At the same time I have been taking classes from Coursera, Udemy, YouTube, Vimeo, BrightTalk, and more. I also have a Netflix subscription.
Losing some quality on Netflix didn't hurt me. Watching trainings at 720p instead of 1080p was also just fine (and I do need to save locally, watch again, take screenshots, and write on them, meaning I need good quality).
Most video conferences are not audio-only with perhaps a low-quality/bitrate screenshare. We are moving in the direction of "all of us" using it, and cutting corners on quality while companies gear up to increase capacity/bandwidth/throughput. It won't take a week, but once we get to the next level, and I hope we stay there (WFH, commute/pollute less), we'll all be in a better place!
I'm a little confused, would you mind explaining how the lethality of the virus is related to your thesis? The way I interpreted your post makes it seem like it's just two unrelated thoughts (net neutrality and the lethality of covid) that got mixed together
I wonder if the reason to lower quality is really because of being generous to the rest of the world. That's just not Google.
I am quite into the online advertising world and I know that current CPM/RPM ad revenue in many premium countries went down by 30-60%. This means a much lower margin for Google. One way to decrease the loss is lowering the total bandwidth.
I'm all for calling out Google where it's warranted, but come on, I think the simplest explanation for this (reducing default bandwidth - note it doesn't prevent people from manually choosing HD - helps the world without costing Google much of anything) is the correct one. Lowering their bandwidth isn't going to make a lick of difference in their overall stock price one way or the other.
I don't know that it's calling them out to describe a specific business reason behind a decision. It's not unreasonable for a business to adjust to maintain financial stability, especially for a "free" product served in exchange for ads.
I'm not sure it would actually save them any money at all. They are still transcoding into the higher quality, so you don't save compute resources. The videos themselves are served from edge CDN nodes. Through peering agreements I doubt Google is paying much for bandwidth.
I'm not sure where this would save Google any money. It seems to just be a helpful (if more symbolic) thing to do.
That's not how costs for these companies work. They have CDNs inside the networks of large ISPs, and everything else goes over leased lines with fixed capacity. They aren't paying by the byte like some Comcast overage fee.
Youtube doesn't offer a "no video" option for free because the ToS for the free service says you are not allowed to separate the video and audio streams. This is to prevent free streaming on devices like Sonos since it's harder (if possible at all) to monetize those videos. It's also why when using the free service the app doesn't allow streaming with the screen off.
A 0 pixel video would defeat this by being in effect an audio stream separated from the video. I always wondered why manufacturers don't just resize the video stream to 1 pixel and use any of the device LEDs to "display" the video thus bypassing this part of the ToS.
> Youtube doesn't offer a "no video" option for free because the ToS for the free service says you are not allowed to separate the video and audio streams.
Isn't this backwards? It seems more likely that the TOS follows the business decision here. I believe old versions of the official mobile apps continued to play the video with the phone screen off.
I imagine the ToS were written as such in order to make sure some features can be monetized (like offline or screen-off playback). But this just means they would not offer any feature for free that undermines this decision. A "0px" video would effectively turn it into an audio-only stream and be an "official" way for others to bypass some of those restrictions. It would make streaming to all devices that now require a Premium/Music membership, or stream ripping (something they don't actually sue for but insist it's illegal) fair game.
youtube-dl -k gave me two files, one webm/opus audio file and one mp4/av1 video file.
The files had roughly same sizes, the audio being 9728k large, the video 9476k.
The video had 127 keyframes, out of 17030 frames total, distributed over 567 seconds of video (one keyframe every 4.5 seconds). I don't have a method at hand to measure the encoded size of the keyframes, but if I extract the first 90k of the file using dd (bs=90k count=1), I get 21 frames, so that's a lower bound for the size that's actually needed to encode one single keyframe; more shouldn't actually be necessary.
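For what it's worth, ffprobe can dump per-frame compressed sizes directly, which gives the exact keyframe cost rather than a dd-based estimate (filename is a placeholder, and exact field names/order can vary between ffprobe versions):

    # print compressed size and picture type for every video frame;
    # keyframes are the rows ending in ,I
    ffprobe -v error -select_streams v:0 \
      -show_entries frame=pkt_size,pict_type -of csv video.mp4 | grep ',I$'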
So of the 9.4M video file, > 99% is waste. Quite literally as 1% of 9476k is 95k. With the addition of audio, the amount of waste has to be adjusted to roughly half of the total audio+video data. Still a large amount of data that can be saved, especially at youtube scale where reductions of traffic in the sub-percent range are material for promotions.
This could also be solved by youtube detecting these situations and simply spacing keyframes farther apart or using duplicate frame decimation/vfr encoding. Seeking would still be fast since it could just fetch and skip a huge bunch of empty P-frames.
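For anyone distributing such a lecture themselves (outside YouTube, which re-encodes uploads anyway), the ffmpeg incantation is roughly the following; filenames and numbers are arbitrary, and newer ffmpeg versions prefer -fps_mode vfr over -vsync vfr:

    # drop visually duplicate frames, keep variable frame rate,
    # and allow up to 600 frames between keyframes
    ffmpeg -i lecture.mp4 -vf mpdecimate -vsync vfr \
      -c:v libx264 -g 600 -c:a copy lecture_small.mp4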
For static image videos YouTube's H.264 version of the video is generally significantly smaller than either the VP9 or AV1 versions. Either YouTube's detection and modification of the encoder settings for this type of video or something intrinsic to the x264 encoder delivers better results from their H.264 encoding pipeline.
The 480p video-only file sizes for this video as reported by youtube-dl have VP9 at 9.70MiB, AV1 at 9.25MiB, and H.264 at 3.55MiB.
So we give them something 13 kb-ish[0] and they compress it to 4.16 MB. I'm sure I'm already not putting it kind enough. Combined with the tiny battery in my phone I really feel like people are pulling a prank on me. I'm a behavioral experiment now! I think my roomba just looked at me.
YouTube's a video encoding site. The original video that was given to YouTube will have been substantially larger than 13 kb-ish.
If your content is audio only then use an audio encoding site like SoundCloud. Or, if you really want to use YouTube, then you need to get them to develop an audio specific option.
I think we already have tons of brilliant people developing video codecs. I'm sure they are all well aware there isn't a way to mark part(s) of a video as a slide show. What is missing is people pointing out how silly this is. The keyframes also define the seek points. It's a mess.
Yes, the formats and players should be changed to also allow more seek points than the key frames, exactly for these "picture not changing" scenarios but with the audio content we want to seek and consume.
And then additionally, there should be better recognition of the "picture not changing" conditions, to allow better use of such a feature.
I moan about this a lot with codec ppl. I think they are just building on top of what they have?
I was trying to distribute a 3-hour lecture with a single slide. I suppose seeing the speaker for a few seconds would be nice too. Haha, I'm asking a lot? The slide had a lot of detail, so the video - uhhh, I mean the jpg - got enormous, and with few keyframes there's no seeking? No thanks, I'll do the "compression" in JavaScript. (eeewww!)
If it's a solid color or simple enough for the static image compression algorithms in play to optimize away, even those keyframes would be tiny. Some codecs will just optimize those keyframes away entirely.
The 'they' part is critical. YouTube relies on users to provide actual content. Digitized music is uploaded by people with vast differences in tech ability, access to tools, and even personal taste. Some get creative and want to have an HD city time-lapse as the background and a PowerPoint animation of the lyrics. What do you do with those?
It is not impossible to go over all music videos, automatically fetch lyrics and replace video with static album cover or something, but who would do it? If YouTube does this then they are responsible for any copyright infringement, and they become huge competition to user-created content on their own platform.
I like the concept of it, but the app has atrocious UX on mobile.
For a specific example, it has a nice toggle above every track, where you can flip between video and audio-only. It works great, except that I cannot find a way to force it to stay on audio-only. As soon as the current track ends, the toggle resets back to "video".
You can use youtube-dl to pull down audio-only tracks. I do this frequently for talks without much in the way of visual presentation that I then listen to as podcasts.
youtube-dl -F https://youtube.com/video-url
That will give you a list of formats, pick an audio-only one, and use the -f flag to download that format.
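Or let youtube-dl pick and convert in one step (this needs ffmpeg on the path; the format/codec names here are just the common defaults):

    # grab the best audio-only stream and convert it to mp3
    youtube-dl -f bestaudio -x --audio-format mp3 https://youtube.com/video-url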
I think the iOS player will stream only audio if the app is backgrounded; it seems to restart the stream to get video back if the app is brought to front again.
As others have said, if the "video" is a static image, the bandwidth usage should be negligible in practice. But I've seen lots of promotional album streams that intentionally add busy graphics, motion lyrics, etc; while they're sometimes cool, they're wastes of bandwidth and processing if I just want it play audio in a background tab.
Background audio play only works if you're paying for the subscription service. Otherwise both video and audio stops when the app is backgrounded.
Many times I've listened to a YouTube video or stream without watching it by just starting the video and then pocketing my phone and being careful not to touch it. I hear the audio and continue my shopping or cleaning or whatever while battery and bandwidth is wasted displaying the video to the debris inside my pocket.
I can only assume Google has run the numbers and decided it's actually more profitable for them to make something so silly and wasteful on both sides of the connection necessary, but that's what it is.
Your phone might be saving you some battery; there is a sensor to detect when your face is close to the phone, turning off the screen and disabling touch. Not sure if it actually works when it's in your pocket though. There's a serious "light in the refrigerator" problem, where trying to measure it changes the outcome.
It's really easy to see if that sensor is active. It's not fridge light at all, it triggers at a distance and you could test against your shirt to see if the light shines through.
I've never seen it be active outside of a phone call.
NewPipe is free on F-Droid and it allows "download audio" in whatever format is available; it's great on a phone for music from non-traditional studio sources (NewWaveRetro) etc.
This is an option in the mobile app, but only if you're a paid premium user. Otherwise, to listen to audio only on mobile you need to keep the screen on.
You're still pulling down the video data itself, it's just being removed after-the-fact. But this will save your battery on a laptop since you're not rendering the video.
I hope YouTube keeps this, even as a preference - I am often satisfied with 720p, and it reduces bandwidth consumption for those occasions when I am constrained.
Is this ultimately just to free up bandwidth in areas limited by poor last mile infrastructure? The US has some of the cheapest transit network bandwidth costs in the world. Most ISPs can peer to YouTube(Google) for free minus the cost of equipment/data center ports. Thus, is this just addressing the last mile problem again, when the video finally makes it to the neighborhood and you just have too many people on a single DOCSIS node?
What is the primary source on this? None of Alphabet's mouthpieces contain this information. I glanced at Google's own blog, various YouTube blogs, several official Twitter accounts, and SEC filings in EDGAR. I searched PR Newswire etc.
I have noticed Youtube's landing page to be quite slow to load compared to most other sites. I had assumed it was some stupidity in my low-bandwidth network connection: but I now wonder if Youtube's just getting killed by all of us staying at home.
I understand that this isn't a problem where I live. The traffic and communications ministry has already stated that the networks are going to be fine. The peak is just shifting towards daytime and our networks are built so that there's still ample room on top of the peak.
It's totally okay to manually set the YouTube player to 4K for this use case.
For low-motion static content like a slideshow or console window, an ultra-low frame rate usually gives acceptable fidelity while minimizing bandwidth, but not all hardware decoders (think Chromecasts, TVs) play well with it, so it wasn't possible to go down this road in general.
(Disclaimer: This is my personal opinion and not that of my employer.)
Seems like another thing that would help is if every browser and media service adopted AV1 for video and WebP for images. That by itself would save an insane amount of bandwidth (maybe a quarter of the world's bandwidth).
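For a sense of what that switch looks like with the reference tools (quality numbers are arbitrary examples):

    # video: AV1 in a WebM container via libaom, constant-quality mode
    ffmpeg -i input.mp4 -c:v libaom-av1 -crf 30 -b:v 0 -c:a libopus output.webm
    # image: lossy WebP at quality 80
    cwebp -q 80 input.png -o output.webp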
The real 'hack' is to subscribe to Google Play Music All Access. A music streaming service that rivals Spotify, and comes with YouTube Premium (and YT Music) bundled in.
In my country that's not the case, subscribing to Google Play Music does not grant access to YouTube Premium.
Also the "Google Play Music All Access" is 4.90 USD and YouTube Premium is 5.80 USD, not much of a difference.
Also a family pack for Play Music isn't available. And the Family Pack (5 members) for YouTube Premium is 8.69 USD, which is what I'm on and all slots are occupied, so it's obviously a better deal.
As you can deduce from my comment, I think $10 a month is too much. I'll just download the video and watch it offline later.
(I understand the sentiment, but it's actually more like $15 a month here. Plus I make about a fifth of what a developer in the USA might make, pre-taxes. After taxes, rent, and groceries, $15 is about 10% of my monthly budget. I am not going to spend it on cat videos.)
The problem is that the price is poorly adjusted to local markets. The subscription in, say, Spain is actually higher than in Sweden, which makes no sense!
Everyone here goes on about how "it's just the price of 3 cups of coffee", but here it's most definitely not.
In Romania it's 5.37 EUR for the solo account, or 8.06 EUR for the family pack.
5.37 EUR is the price of 2 Starbucks lattes, 8.06 EUR the price of 3 lattes.
Not insignificant, but given the lack of ads (very important for my kid to not be exposed to brainwashing crap), or option of playing in the background for my phone, it's totally worth it. And I think overall I watch more YouTube than Netflix.
> 5.37 EUR is the price of 2 Starbucks lattes, 8.06 EUR the price of 3 lattes.
Yeah, and that's why there are like three Starbucks in my whole EU country. A latte in any coffeeshop here costs 1.2€, because they're adjusted to local salaries.