For a long time now, youtube-dl has been downloading YouTube videos at 50 KB/s for me, which makes it impractical to use in conjunction with mpv: I have to leave youtube-dl running in the background for several hours before I can watch the video in mpv.
Now from the comments I've found out about yt-dlp[1] which claims to fix this issue[2]. Will check it out.
yt-dlp is absolutely fantastic. It has all sorts of neat features and fixes too, like better support for bigger playlists and even SponsorBlock integration. Also, they were super helpful whenever I had an issue, and even added fixes or features for issues I opened!
I think questions like « Prove that the union of finitely many compact sets in a topological space (X,τ) is also compact » would certainly achieve rate limiting and boost global math skills.
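For anyone whose topology is rusty, that one really is captcha-sized; a sketch of the standard argument, in LaTeX:

```latex
Let $K = K_1 \cup \dots \cup K_n$ with each $K_i$ compact, and let
$\{U_\alpha\}_{\alpha \in A}$ be an open cover of $K$. Since each
$K_i \subseteq K$, the family $\{U_\alpha\}$ also covers $K_i$, so by
compactness there is a finite $F_i \subseteq A$ with
$K_i \subseteq \bigcup_{\alpha \in F_i} U_\alpha$. Then
$F = F_1 \cup \dots \cup F_n$ is finite and
$K \subseteq \bigcup_{\alpha \in F} U_\alpha$, so $K$ is compact. $\blacksquare$
```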
Is this a quasi-captcha for some legal reason? I would assume math problems are easy for a computer to solve (easier than for most humans, actually).
It's just a flippant way of saying "hiding the URL or parameters behind running JavaScript that serves no other purpose than obfuscation". See also this commit:
Of course this is easy to solve, but every layer of obfuscation they add generally requires someone to go and work around it in yt-dlp. It's mutually assured time waste on both sides.
It's mostly plug and play; on desktop install the browser extension and it just works!
There is some tuning to be done based on your personal preference, since you can configure it to _only_ skip sponsors, but by default it skips a variety of fluff.
Segments are all marked by other users of the extension, and I've never come across a malicious marking, so it's got a neat community. Sadly I'm never early enough to contribute.
In the likes of YouTube Vanced (a third-party YouTube fork with integrated SponsorBlock on Android) it's simply built into the player; I assume the youtube-dl alternative works the same way.
Came here to say this. I even tried a VPN to see whether the throttling would go away, but it's still ridiculously slow.
Thinking of switching to yt-dlp, but then how does yt-dlp get around the throttling? Does it emulate a browser to make it look like a normal viewing of a video?
I downloaded a big (~330 GB) audiobook playlist a month ago. It started at around 3 Mbit/s, but YT throttled it to 64 KB/s after a few hours, though the connection kept responding.
Now it gets throttled like that every time. Will look into yt-dlp.
yt-dlp is a fork of youtube-dl; it gets throttled exactly the same, they just merge the fixes quicker. It's mostly a "cutting edge" branch of youtube-dl (not sure how much gets merged back, though).
So there is a bug in the Twitter video download. When Twitter went to 280 chars it started showing up: 280 chars is too long for a file name (on Windows, at least). I sent youtube-dl a pull request with a fix, but not only did they not accept it, they brushed me off in a way that gave me the impression they didn't want fixes from outside. I'm fairly old, and this was my first attempt at contributing to open source. Is this the way it typically goes? I imagine they get a lot of people with b.s. pull requests, so they get jaded. It's understandable, I suppose.
The bug still exists in yt-dlp. I run a local copy with my fix for those times when I need it.
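The gist of the fix is small; a minimal sketch of the idea (my illustration here, not the actual patch; the function name and the 255-byte limit are assumptions):

```python
import os

# Most filesystems (NTFS, ext4, APFS) cap a single path component at 255
# bytes or characters; a byte budget is assumed here as a common denominator.
MAX_NAME_BYTES = 255

def limit_filename(name: str, limit: int = MAX_NAME_BYTES) -> str:
    """Truncate a candidate file name to fit the OS limit, keeping the extension."""
    stem, ext = os.path.splitext(name)
    budget = limit - len(ext.encode("utf-8"))
    truncated = stem.encode("utf-8")[:budget]
    # errors="ignore" drops any multi-byte character cut in half at the boundary.
    return truncated.decode("utf-8", errors="ignore") + ext
```

The wrinkles (Windows' full-path limit, multi-byte characters, two long names truncating to the same thing) are what make "just truncate the name!" less trivial than it looks.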
The complaint is that it's come up repeatedly ('Not again. Use search') - agree it's a bit of a brush off, and it's not helpful for anybody (perhaps most of all the maintainers themselves!) that it doesn't link any of the others, or the one that was merged, or the first one with a better explanation for rejection, or whatever.
Something I would say though (that as above doesn't seem to be the issue, but who knows, maybe it contributed) is that you could put a bit more time into the PR description and title. There's little human-readable (without following a link) information in the title. Your checked boxes are all broken (`[x ]` & `[ x]`) despite the instructions on how to do it (`[x]`), which you've also left in (that indicates a bad template in fairness; it should be a comment (`<!--`) so only you see it while editing). It's hard to separate your own description from the templated instructions when reading.
You also didn't follow up on it in a way that seems like you're bothered, nor at all for several months. When you open a PR like this you're being helpful and fixing something yes, but you're also asking someone to maintain it for you going forward, and really you just proved that you wouldn't have anything further to do with it. Maybe if you'd followed up when it was closed asking for a link to where it had been addressed previously so that you could help test it ('since it is still not working for me with the latest released version or master @ deadb33f') it would have gone differently, perhaps the maintainer simply misunderstood what you were fixing, and would've reopened it.
Great! Thanks for the feedback. I had something that I thought I could throw over the fence and then get on with my life because I had a solution for myself, but now that you mention it, that's not a good way to approach it. Instead I deleted it and forgot about it until today.
The thing that threw me off was the "Not again. Use search before you implement anything" comment. Ok, so if it's a recurring issue why isn't it fixed? It's not a big fix.
The best approach I've come to over the years is "don't make people think" - so, if you have the time - make something almost zero effort to see that it works and zero effort to apply.
This could mean screenshots - or sometimes something like a youtube video, or just logs of before and after.
Show visually the before and after, and try and write descriptive titles etc.
If it's github, then a PR (instead of just an issue).
Having said that, I don't always have the time.
There are also slightly darker patterns: for example, if you are fixing something, including "broken" in the description of the bug will grab more attention.
[EDIT]
As an example on this PR:
I would have written a title like:
"Don't fail on filenames over max length on Windows and Linux"
In the description, try to state the thing that is fixed first (you dove straight into the implementation), so that if someone only reads the first few lines they still know what's up.
Then dive into the implementation.
Imagine the person has very little attention span - maybe they are scanning the bugs while their small child is bugging them, you need the important info first, and you need to grab them (but don't go as far as click bait).
On a similar note, use hyperlinks when you mention other bugs (e.g. where you mention #15024); just make everything as little effort for them as possible.
Throwing things over the fence as a drive by is rarely useful. Maintainers are busy people, so dropping off a clue that would take Sherlock Holmes time to solve isn't helpful.
Imagine you're the maintainer being asked to look into something. Would you rather have all the information provided in as much detail as possible so that you could dive right in and fix it, or have to do a deep dive into something that may or may not even be a problem, reported by a grumpy person on the internet?
I guess it seemed like such a trivial fix. What's to explain? If you know the code you go to where you make the file name and put a limit on it. I guess there are other considerations that I am not aware of. I was not grumpy. I thought I was doing everyone a favor. To be treated like they were doing me a favor induced cognitive dissonance.
Edit: Just to be clear, I love youtube-dl and use it all the time, and I greatly appreciate all the work that went into it. I have no problem with the maintainers. I only posted my original comment because I want to contribute to open source now that I am retired and wondered what went wrong my first time. Now I know, and I thank everyone who took the time to comment.
Long ago I created a bug for this (not a PR) myself. IIRC the problem is the author had some grand design for platform neutral filename handling or something, so didn't want some simple fix that would be "temporary."
It's just silly and lacking perspective. Letting the bug sit in there while procrastinating on some larger project. Waiting for some ideal circumstance, but not acting to create it, so nothing happens. But, to do the "temporary fix," you have to admit you're not acting on your goal.
Fitting that the author would eventually abandon the project, if they form grand intentions like that and don't follow through.
> The complaint is that it's come up repeatedly ('Not again. Use search') - agree it's a bit of a brush off, and it's not helpful for anybody (perhaps most of all the maintainers themselves!) that it doesn't link any of the others, or the one that was merged, or the first one with a better explanation for rejection, or whatever.
I love using automation for this. I'm often frustrated that people didn't bother searching, and I don't have enough time in the day to do the searching for them. I configure my repos so that I can just apply the "duplicate" label, and a bot will leave a polite and cheerful message thanking people and asking them to search next time to avoid duplicates.
It successfully fails to evoke my exact mental state at the moment and gives people the exact info and reasons for rejection they need, all without me having to lift a finger.
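For the curious, such a bot is maybe twenty lines. A hypothetical sketch (assuming Flask and PyGithub; this is an illustration, not any particular bot's real code):

```python
from flask import Flask, request
from github import Github  # PyGithub

CANNED_REPLY = (
    "Thanks for the report! This has come up before, so I'm closing it as a "
    "duplicate. Please search existing issues first next time; it keeps the "
    "discussion in one place."
)

app = Flask(__name__)
gh = Github("<personal-access-token>")

@app.post("/webhook")  # point a GitHub webhook for issue events here
def on_issue_labeled():
    event = request.get_json()
    if event.get("action") == "labeled" and event["label"]["name"] == "duplicate":
        repo = gh.get_repo(event["repository"]["full_name"])
        issue = repo.get_issue(event["issue"]["number"])
        issue.create_comment(CANNED_REPLY)
    return "", 204
```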
I am sorry to hear that; you have been unlucky indeed, as it is not usually like that. I would suggest you submit the same patch to yt-dlp.
My only PR for youtube-dl has been waiting for ~2.5 years (#21837), so I proposed the same patch to yt-dlp, where it was quickly merged after a brief review (#1920). I found the yt-dlp developers friendlier indeed, and I get notified when there is an issue or a PR that I could be interested in reviewing or contributing to.
Sometimes developers are volunteers with too much to do, and they can get frustrated. Having good procedures is critical: testing/CI, pre-commit with flake8 and the like instead of asking people to read a very long list of rules, using the GitHub merge queue, involving previous contributors automatically, sharing the burden with others as much as possible, etc.
Yes, I imagine it depends on the project. I read both good stories and bad about it. I haven't let it discourage me. I'll consider resubmitting to yt-dlp. Thanks!
There's this pull request working on fixing the problem (not merged yet). I don't know if it's yours under a different alias, but you can see the work and the edge cases involved in something that a priori seems simple ("just truncate the name!", ha). Also, the person making the PR looks like an external contributor. It seems like normal, polite open-source interaction to me, despite what could be perceived as a negative initial reaction; what comes across as dry or a brush-off could be genuine concern about unforeseen consequences.
Not mine. It gets kind of interesting when there are a lot of emojis in the tweet because they get expanded, as in the emoji becomes the descriptive name for it, like American flag hand clap hand clap heart laughing out loud. This causes it to become too long.
What I don't understand is that affected versions of youtube-dl/yt-dlp abort the download when the resultant filename just happens to be too long, and there's no quick and easy way to temporarily work around it. Ephemeral URLs can go stale before that's done.
I would say about 75% of projects are like that: they don't want or take external contributions, or if they do, the rules for doing so properly are often so opaque that it isn't worth the effort. The rest have been a mixed bag, from unpleasant-but-ultimately-accepting, to one project where the creator and I had a blast of a time for 6 months after he simply gave me commit rights on my second patch and left me to it once I said what I was building towards.
I think the good ones that make this easy and fun are pretty rare, I definitely check the pull requests and bugs for any project before I consider contributing nowadays.
Especially projects that are run by corporations. They often demand you sign a CLA, which means getting a lawyer involved to understand what you're giving up (for each project you want to help!) Open source has become so hostile, I don't even bother any more.
Contributing to open source is often a mixed bag. Some projects are receptive, some aren't. And arguably this is fine: maintainers offer the source so that you can scratch your own itch without needing their approval.
Personally, I consider it a plus if my PR gets any feedback. They owe me nothing.
Sounds like your PR would've been a nice merge, though. Thanks for playing!
The vast majority of OSS maintainers I've interacted with have been extremely helpful and grateful, even though I'm a pretty mediocre developer and my code never works without major changes.
This is an exception. Probably he is simply burned out.
Weird that they wouldn't jump on that. Sorry to hear about it.
Sometimes authors end up over-identifying with their software or processes, so to speak. Or regular bug reports hit harder for them on a given day.
I've never had that kind of thing happen myself, but years ago I participated in a FOSS project that was similar with feature patches. It was amazing to see the patches that came in that didn't meet the maintainer's standard but would have been awesome (graphics software). The only people who got patches in were the ones who hung around, stayed active in the community, and maintained a long narrative about the changes the maintainer wanted to see. It was a really high bar.
But that's just one example...I had really good results lately with projects like Nim generously taking up and diving into bug reports from complete noobs like me.
I have sympathy for open source authors, who are often underpaid and unappreciated, as has been in the news recently. Then you have big blowups like Log4j and actix-web. For the life of me, I don't understand how someone could dump on a programmer when they are getting the software for free. The whole idea is that if you don't like something, you fix it yourself.
Depends entirely on the project. Some maintainers are a bit curmudgeonly, others are very welcoming to contributions. If it's on GitHub I tend to check the closed pull requests section before deciding to contribute these days; it gives a good indication of whether contributions are regularly merged. Plus you can read a few closed/merged ones to avoid common pitfalls.
I think a project like this is extremely difficult to maintain due to the nature of essentially being a web scraper. They support a ton of sites, each of which may make changes that suddenly break the tool, completely out-of-band. Maintainers are then stuck with people filing issues, several PRs to fix, and perhaps the inability to even test the fixes (e.g. geo-restrictions, lack of a subscription, language barriers, etc.), all while being pushed to ship a release with the fix ASAP.
I think a project like this likely could benefit from the per-website code being maintained separately from the main codebase. This would allow for out-of-band hotfixes and have disjoint groups that are willing/can maintain each site. Of course this introduces other issues, namely versioning and cross-cutting refactors.
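A rough sketch of what that split could look like: per-site extractors shipped as independently versioned packages under a shared namespace and discovered at runtime (the `ytdl_sites` namespace and the `DOMAIN`/`extract` convention here are hypothetical):

```python
import importlib
import pkgutil

import ytdl_sites  # hypothetical namespace package holding per-site plugins

def load_extractors() -> dict:
    """Discover per-site extractor plugins, so each site's code can be
    hotfixed and released out-of-band from the core downloader."""
    extractors = {}
    for info in pkgutil.iter_modules(ytdl_sites.__path__):
        module = importlib.import_module(f"ytdl_sites.{info.name}")
        # Assumed convention: each plugin exposes DOMAIN and extract().
        extractors[module.DOMAIN] = module.extract
    return extractors
```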
Both youtube-dl and mpsyt suffer from inactive owners who refuse to relinquish control. It is their right to do what they want with their projects. Fortunately, yt-dlp is a better managed project. Unfortunately I only found out about it a couple of days ago.
I was involved with mps-youtube a while back and was given admin rights to the mps-youtube repository under the mps-youtube organization, but not to pafy, which mps uses. At some point after the owner stopped actively maintaining the project, some of the other contributors at the time and I were given the ability to merge pull requests.
The impression I got from discussions around that time was that we could maintain but weren't supposed to lead the project.
I've long since stopped using the tool, and have therefore lost both the interest and the knowledge needed to properly maintain it, but I've been reluctant to pass my rights on to someone else; it kind of breaches the mandate I feel I was given.
I'm at the point where I'd be happy to give the rights to someone if it would keep the project alive. However, I personally think it would be better if someone forked and renamed the project, mostly because a lot of third-party packages were released many years ago, and I assume they would not be properly updated if the project were to become active again. The only one who can push a new version to PyPI is the long-inactive owner.
If someone wants to make a fork, I'd merge any PR that updates the readme to point to a maintained fork.
Is there a command line yt-dl alternative (for youtube to mp3 conversions) that doesn't require Python?
Something utilizing basic Unix tools, C or Perl would be perfect. I did find a simple Perl script [1], apparently written in 2020, but it didn't work for me out of the box, possibly due to some simple regex issue.
Seems weird to me to frown upon Python and then be OK with Perl... but whatever floats your boat! Perhaps check out https://github.com/iawia002/annie which is written in Go, so you can get a static binary.
Thanks! I'm not frowning upon Python, I just have an ancient computer with a really minimal Linux system. For that reason I'm avoiding Python, but I definitely need Perl for a few other tasks. Also, yes, I happen to be one of those guys who actually likes Perl's syntax, general ideas, and the oddball humor of its community.
The hash should load the /hacks/ page and then scroll to the element with the "youtubedown" id. This works in every browser from like the last decade... or two.
Thank you, this can replace the plugin I hacked together myself some years ago. (Not published; it was mainly a learning exercise.)
The most frustrating thing is: my plugin still works fine, but I can't easily enable it permanently any more. I can temporarily load it, but my own Firefox doesn't allow me to trust myself permanently. Maybe I'd have to submit it for signing or use some special version of Firefox... I can't be bothered. It feels so disempowering.
Ha, just a couple weeks ago I wrote a small GUI frontend for youtube-dl, to download a video with chapters and create a different .mp3 or .mp4 file per chapter (useful for music compilations). I created it with the express goal of using it with Open With. It's a very useful extension, and I've used it with youtube-dl for ages.
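The chapter-splitting core is surprisingly little code; roughly the shape of mine (a simplified sketch assuming the audio is already downloaded to `audio_file` and that chapter titles are filesystem-safe):

```python
import subprocess
import youtube_dl  # yt_dlp exposes the same YoutubeDL interface

def split_by_chapters(url: str, audio_file: str) -> None:
    """Fetch chapter metadata, then cut one file per chapter with ffmpeg."""
    with youtube_dl.YoutubeDL({"quiet": True}) as ydl:
        info = ydl.extract_info(url, download=False)
    for chapter in info.get("chapters") or []:
        subprocess.run(
            ["ffmpeg", "-i", audio_file,
             "-ss", str(chapter["start_time"]),
             "-to", str(chapter["end_time"]),
             "-c", "copy",
             f"{chapter['title']}.mp3"],  # real code must sanitize the title
            check=True,
        )
```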
Pretty sure only about 1% of add-ons are actively monitored by Mozilla, and using the author's choice of username to criticize free software is ridiculous. For example, darktrojan is a member of the Thunderbird core team; see https://wiki.mozilla.org/Modules/Thunderbird
Can't get it to work on macOS, having tried all possible commands. There seem to be open issues about it in the Open With repo. But it definitely seems like a good option. Thanks!
No. Also, I tried that with youtube-dl but it doesn't work reliably (throttling still takes effect, with all but one connection simply timing out). [1]
yt-dlp uses a different method than youtube-dl (I believe it uses the API used by the Android app, instead of by the youtube.com website).
If they're accurately reproducing the behavior of the Android app, Google can't change or remove that API without breaking versions of the app that people already have installed. (And a lot of Android users will hang on to old versions of apps basically forever.)
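For the curious, the request itself is just a JSON POST with an Android client identity; a rough illustration based on skimming yt-dlp's source (the endpoint and client name are real to my knowledge, but treat the version string and the API key handling as assumptions):

```python
import requests

def fetch_player_response(video_id: str) -> dict:
    """Query YouTube's internal ("Innertube") player endpoint, identifying
    as the Android app rather than the website."""
    payload = {
        "videoId": video_id,
        "context": {
            "client": {
                "clientName": "ANDROID",
                "clientVersion": "16.20",  # assumption: some recent app version
            }
        },
    }
    resp = requests.post(
        "https://www.youtube.com/youtubei/v1/player",
        params={"key": "..."},  # the Android client's public API key (elided)
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```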
Does anyone know if the two groups tried working together at some point? I know they have some disagreement over SponsorBlock, but beyond that, was there any attempt to recruit the dlp people?
Since mpv 0.34 it even looks for yt-dlp first and falls back to youtube-dl otherwise. Before that you had to set an option, or just symlink yt-dlp into place as youtube-dl.
mpv packages in linux distros seem to be pulling in yt-dlp now too; at least on OpenSUSE Tumbleweed, the mpv package recommends yt-dlp and not youtube-dl.
On one hand I'm sad to see youtube-dl's demise. But its prompt replacement gives me hope for the strength of the community to overcome whatever challenges the IP industry might present.
[1]: <https://github.com/yt-dlp/yt-dlp>
[2]: <https://github.com/ytdl-org/youtube-dl/issues/29326>