Tube Archivist is quite heavyweight, as it's meant for full archiving of YouTube channels and searching through positively huge libraries. I'm getting the sense that it's a data hoarding tool, not a casual web video watching tool. I found that I just want to add a few channels to my media library, for which I already use Jellyfin.
For people looking for a more lightweight option of that kind, I run the following script hourly [1]. The script uses yt-dlp to go through a text file full of YouTube RSS URLs (either a channel RSS, or a playlist RSS for channels where you're only interested in a subset of videos) [2] and downloads the latest five videos from each, organized into folders by channel name. I watch these files by adding the output folder to a Jellyfin "Movies"-type library sorted by most recent. The script contains a bunch of flags to make sure Jellyfin can display video metadata and thumbnails without any further plugins, and it repackages videos into a format that is 1080p yet plays efficiently even in web browsers on devices released in the last 10 years or so.
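To give a sense of the shape of it, here's a rough sketch of that approach rather than the exact script from [1]; the feed parsing and the flag choices are illustrative, and feeds.txt is a made-up name for the list of RSS URLs:

    # Illustrative sketch only; see [1] for the real script.
    # feeds.txt: one YouTube RSS URL per line.
    while read -r feed; do
        curl -fsSL "$feed" \
          | xq -r '[.feed.entry] | flatten | .[:5][] | .link."@href"' \
          | yt-dlp --batch-file - \
              -f 'bv*[height<=1080]+ba/b[height<=1080]' \
              --merge-output-format mp4 \
              --write-thumbnail --convert-thumbnails jpg \
              --embed-metadata \
              -o '/media/youtube/%(channel)s/%(title)s [%(id)s].%(ext)s'
    done < feeds.txt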
It uses yt-dlp's "archive" functionality to keep track of videos it has already downloaded, so it only downloads each video once, and I use a separate script to clean out files older than two weeks once in a while. Running the script depends on ffmpeg (used only for repackaging videos, not transcoding!), xq (usually packaged with yq) and yt-dlp being installed. You'll sometimes need to update yt-dlp when a YouTube-side change breaks it.
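Those two pieces are similarly small; a sketch with made-up paths, and the two-week cutoff as an example:

    # Added to the yt-dlp call above: remember IDs so each video is only fetched once.
    #   --download-archive /media/youtube/archive.txt

    # Separate periodic job: clean out downloads older than two weeks.
    find /media/youtube -type f -mtime +14 -delete    # old files
    find /media/youtube -type d -empty -delete        # now-empty channel folders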
For my personal usage it's been honed for a little while and now runs reliably, for my purposes at least. Hope it's useful to more people.
[1]: https://pastebin.com/s6kSzXrL
[2]: E.g. https://danielmiessler.com/p/rss-feed-youtube-channel/
Hope you don't mind that I adapted this into a quick container image[1]. (Feel free to scoff at the idea of taking a simple bash script and making it into a massive Docker image; you're right, I just wanted a convenient way to run it in Linux environments that I don't have great control over.) I know it's not a huge script, but nonetheless, if you'd like to pick a license for it, I can add a LICENSE/copyright notice to my fork/adaptation.
Oh great! Yeah as you can probably tell from the script I'm using it as a (locally built) container in my own setup. Feel free to pretend it's BSD licensed if that helps :)
Downloading whole channels and searching them shouldn't be heavyweight. I do that with yt-dlp and an index stored in a SQLite DB.
inb4 the Dropbox/rsync reference. Yeah yeah, I'm not saying everybody should do it like this, I'm just saying that archiving and indexing/searching needn't be heavyweight. I'm sure there's plenty of utility in a nice GUI for it, but it could easily be a lightweight GUI.
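As a rough illustration of how little machinery that needs (not this commenter's actual setup; the paths and columns are made up), yt-dlp's .info.json files plus SQLite's FTS5 go a long way:

    # Rough sketch: index yt-dlp's .info.json metadata into SQLite FTS5.
    sqlite3 index.db "CREATE VIRTUAL TABLE IF NOT EXISTS videos USING fts5(id, channel, title, description);"

    for j in downloads/*/*.info.json; do
        jq -r '[.id, .channel, .title, .description] | @csv' "$j"
    done > /tmp/videos.csv

    sqlite3 index.db <<'EOF'
    .mode csv
    .import /tmp/videos.csv videos
    EOF

    # Full-text search across everything indexed above:
    sqlite3 index.db "SELECT channel, title FROM videos WHERE videos MATCH 'kernel scheduler';"

Re-running the import naively duplicates rows, so a real version would dedupe on the video ID.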
yt-dlp also has flags to write subtitle files and .info.json files, which at least Emby can automatically pick up and use, if not Jellyfin.
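For reference, those look something like this, with $URL standing in for the video URL and the language selection just an example:

    # Write uploaded and auto-generated subtitles plus the metadata sidecar.
    yt-dlp --write-subs --write-auto-subs --sub-langs en --write-info-json "$URL"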
I haven't yet wired up the bits to use whisper.cpp to automatically generate subtitles for downloads, but I have done so on an ad-hoc basis in the past and gotten (much) better results than the YouTube auto-generated subtitles.
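For anyone curious, the ad-hoc version looks roughly like this; the binary and model names depend on how you built whisper.cpp, so treat it as a sketch:

    # whisper.cpp expects 16 kHz mono WAV input.
    ffmpeg -i video.mp4 -ar 16000 -ac 1 -c:a pcm_s16le audio.wav

    # Emit an .srt next to the video (the binary is called 'main' in older builds).
    ./whisper-cli -m models/ggml-base.en.bin -f audio.wav -osrt -of video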
It looks like Django + SQLite is used for user accounts, but all other data storage happens in Elasticsearch.
It's an interesting design decision. I would have gone all-in on the database, and used SQLite FTS in place of Elasticsearch for simplicity, but that's my own personal favourite stack. Not saying their design is bad, just different.
Perhaps Elasticsearch was chosen because they also index video comments and subtitles, making full-text search a key feature. But I agree, SQLite FTS might suffice, and much of the metadata could be better managed using a traditional Django structure.
It would be great to add embeddings to the index, possibly using one of your Python tools.
Does it save the video thumbnail as well? Video description? Comments? Channel name? Channel avatar? Etc.?
Currently I use yt-dlp to manually download individual videos that I want to keep. At the moment I only save the video itself. Most of the time I then also paste the URL of the video into archive.is's save page and web.archive.org/save, so that there is a snapshot of what the video page itself looked like at the time. But this is still incomplete, and it relies on those services continuing to exist. Locally saving a snapshot of the page like that, along with the thumbnail and perhaps more of the comments, would be nice.
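For what it's worth, most of that can come straight from yt-dlp alongside the video itself (channel avatar excepted, as far as I know); e.g.:

    # Thumbnail, description, comments and full metadata (channel name ends up in the .info.json).
    yt-dlp --write-thumbnail --write-description --write-comments --write-info-json "$URL"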
I've been using the ytdl-sub Docker container for managing automated downloads of channels and importing them into Plex:
https://github.com/jmbannon/ytdl-sub
I've had significant problems running this for extended periods.
It will crash, and then restoration will fail internally with corruption errors, requiring reading through Docker logs or just starting over from scratch.
I always dream of writing a proxy server where all videos, irrespective of device, get stored in a local cache and served without going outside on subsequent requests.
Gonna try this one, and gonna take that direction.
Cache hit rate. How big is your cache? Large enough to get a hit rate that saves you money vs. the cost of the SAN?
Can you cache YouTube videos?
Can you intercept YouTube videos? You'll need a root cert installed on your client devices.
And here's the worst part: many applications do cert pinning, so they'll refuse to load even if the signer is in the root store. They require a specific signer.
I had this same idea around 2012-ish, and had the naive impression that you could cache YouTube videos at the proxy level (using Squid). Boy, was I wrong; even then YT was employing heavy tactics to prevent people from caching videos at the native HTTP cache-header level. It was at that point I realized something was wrong with the internet.
HTTPS has put the kibosh on a lot of this type of stuff, unless you're willing to set up a local CA and trust the root cert on ~every device that will use your network.
Well, if you're self-hosting for yourself, friends, and family... it isn't likely to be a thing you'll want to care about fixing when it eventually breaks in mysterious ways.
Better to use SQLite or just a blob of YAML in a file if self-hosting might be involved.
Sure, if you're hosting a few hundred or a thousand videos, a blob of YAML can be good enough. I have no history with this project, but I suspect they started out like that. Something like Elasticsearch usually comes into play when you are ingesting a lot of data points very quickly, so I would assume the authors wanted to archive fairly huge numbers of videos.
It can do full-text search (subtitles, comments) right in the UI. It seems to be designed for large-scale backups. I use it for a couple hundred videos that are important to me or likely to be removed, and it works great, too.
I would have preferred a less surprising database like Postgres or SQLite for my usage, but they also support data dumps, so I can escape if needed.
Looks great, I will try it, since YouTube broke my scripts a while ago...
The way I was using them was to create a playlist named "save" and pull from it once a day. It worked for a while, but YT somehow started to ban my script. Tube Archivist looks like it would be ideal for that.
It might just be that you were banned because you were doing it exactly once a day.
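If that is the cause, one common mitigation (just a guess, not something confirmed here) is to add a bit of jitter before the scheduled run:

    # Wait a random 0-59 minutes so the fetch doesn't land at the same time every day.
    sleep $(( RANDOM % 3600 ))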
I use YT's RSS feature to follow channels and playlists I'm interested in, and discovered that (somewhat ironically) if I have it query the RSS periodically, Google will decide that I am a bot, return errors for all reads, and force me to pass a captcha the next time I try to use any Google product (presumably connecting the two activities via IP).
So now my RSS reader does not periodically query YT and instead I manually click the update button when I'm interested...