The more I listen to NotebookLM “episodes”, the more I am convinced that Google has trained a two-speaker “podcast discussion” model that directly generates the podcast off the back of an existing multimodal backbone. The two speakers interrupt and speak over each other in an uncannily humanlike manner. I wonder whether they basically fine-tuned against a huge library of actual podcasts along with the podcast transcripts, and perhaps generated synthetic “input material” from the transcripts to feed in as training samples.
In other words, take an episode of The Daily and have one language model write a hypothetical article that would summarize what the podcast was about. Then pass that article into the two-speaker model, transcribe the output, and see how well that transcript aligns with the article fed in as input.
I am sure I’m missing essential details, but the natural sound of these podcasts cannot possibly be coming from a text transcript.
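If that speculation were right, the round-trip check described above could be sketched roughly as below. Everything here is hypothetical: summarize_to_article, two_speaker_podcast, and transcribe are stand-ins for an LLM summarizer, the speculated two-speaker audio model, and an ASR system, and the overlap score is just a placeholder for a real alignment metric.

```python
from typing import Callable

def round_trip_alignment(
    podcast_transcript: str,
    summarize_to_article: Callable[[str], str],   # hypothetical LLM summarizer
    two_speaker_podcast: Callable[[str], bytes],  # the speculated audio model
    transcribe: Callable[[bytes], str],           # any ASR system, e.g. Whisper
) -> float:
    """Score how well a generated episode covers the article it was fed."""
    # 1. Write the hypothetical "input material" for an existing episode.
    article = summarize_to_article(podcast_transcript)
    # 2. Feed the article to the two-speaker model to get audio back.
    audio = two_speaker_podcast(article)
    # 3. Transcribe the output and compare it with the input article.
    generated = transcribe(audio)
    # Crude lexical overlap as a stand-in for a real alignment metric.
    a, b = set(article.lower().split()), set(generated.lower().split())
    return len(a & b) / max(len(a), 1)
```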
> the more I am convinced that Google has trained a two-speaker “podcast discussion” model that directly generates the podcast off the back of an existing multimodal backbone.
I have good and bad news for you - they did not! We were the first podcast to interview the audio engineer who led the audio model:
TLDR they did confirm that the transcript and the audio are generated separately, but yes the TTS model is trained far beyond anything we have in OSS or commercially available
They didn't confirm or deny this in the episode - all I can say is there are about 1-2 yrs of additional research that went into nblm's TTS. SoundStorm is more of an efficiency paper imo.
I feel similarly about NotebookLM, but have noticed one odd thing - occasionally Host A will be speaking, and suddenly Host B will complete their sentence. And usually when this happens, it's in a way that doesn't make sense, because Host A was just explaining something to or answering a question of Host B.
I'm actually not sure what to make of that, but it's interesting to note
It's speaker diarisation: the quality of the resulting labelling and the speaker end-marker tokens is what influences the rhythm of the conversation. (Or the input data just has many podcast hosts completing each other's... sandwiches?)
I think this is an important tell: it betrays that there are no two minds here creating 1+1=3.
One cheap trick to overcome this uncanny valley may be to actually use two separate LLMs or two separate contexts / channels to generate the conversations and take "turns" to generate the followup responses and even interruptions if warranted.
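A minimal sketch of that two-context idea, assuming an OpenAI-style chat API (the model name and system prompts are placeholders, not anything NotebookLM is confirmed to do): each host keeps its own message history and only ever sees the other host's lines as incoming user turns, so neither can silently finish the other's thought.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; any chat API would do

HOSTS = {
    "A": "You are the curious co-host of a podcast. Ask questions, react naturally.",
    "B": "You are the expert co-host of a podcast. Explain the source material clearly.",
}

def generate_dialogue(source_material: str, turns: int = 6) -> list[tuple[str, str]]:
    # Each host has its own context; the other host's lines arrive as user messages.
    histories = {
        name: [{"role": "system",
                "content": prompt + "\n\nSource material:\n" + source_material}]
        for name, prompt in HOSTS.items()
    }
    dialogue, speaker = [], "B"
    for _ in range(turns):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=histories[speaker],
        ).choices[0].message.content
        dialogue.append((speaker, reply))
        histories[speaker].append({"role": "assistant", "content": reply})
        other = "A" if speaker == "B" else "B"
        histories[other].append({"role": "user", "content": reply})
        speaker = other
    return dialogue
```

Interruptions could be layered on top by occasionally truncating a turn and handing the partial sentence to the other context to finish or talk over.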
Funnily, even two different LLMs, when put in conversation with each other, can end up completing each other's sentence. I guess it has something to do with the sequence prediction training objective.
Those moments always make me think they’re going for a scripted conversation style where the “learner” is picking up the thread too quickly and interjecting their epiphany inline for the benefit of the listener.
It doesn't look that useful to use as is. But the approach they are investigating is clearly and well documented in plain text. Seems like a valid contribution to public knowledge to be grateful for, even if it can't be used verbatim.
(Please note that the parent poster has edited their comment. Before edit, they implied that it was the OP who included the words “open source” in the HN post title.)
Oh, I see the links now, thanks! But they reference four different licenses, and those are the licenses just for model weights I think?
If the intention was to make something that you can only use with Llama models, stating that clearly in a separate code license file would be better IMO. (Of course, this would also mean that the code still isn’t open source.)
Great to see this: Fellow tech-geeks, ignore the NotebookLM thing at your peril.
NotebookLM, far and away, has been the "AI Killer App" for the VAST MAJORITY of bright-but-not-particularly-techy people I know. My 70ish parents and my 8 year old kid are both just blown away by this thing and can't stop playing with it.
Edit: As someone pointed out below, I absolutely mean just the "podcast" thing.
I can understand why it's cool for a lot of people, but it's the opposite of a time saver for me: it's a time loser, if that's a word. It's the same as those videos that serve a purpose only because some people (and developers) are not able to read or feel intimidated by walls of text. They are at a competitive disadvantage only partially mitigated by having videos for even the smallest text page.
I don't get it. Are you saying "bright but not particularly techy" people can't read? What would I be missing out on by ignoring this just like I do every other podcast? I've literally never heard of someone learning anything from a podcast except scattered knowledge from another field that will never be useful.
Again, I'm absolutely like you and I'm with you. I don't much do podcasts either, but in a way this is why I worded it like this. It struck me as a fun party trick to ignore, but it really seems to GRAB a lot of other people.
The point being made is that while this may be grating for you, it is magic for a large part of the population. This, combined with ChatGPT advanced voice mode, shows a direction of travel for AI agents. It makes it possible to imagine a world where everyone has personalized tutors and that world isn't very far away.
> It makes it possible to imagine a world where everyone has personalized tutors and that world isn't very far away.
My issue with AI hype is exactly this. Everything is “imagine if this was just better enough to be useful”
“Imagine if we had an everything machine”
“Imagine everyone having a personal assistant/artist/tutor/programmer”
“Imagine a world where finance is decentralized and we all truly own our digital stuff”
<rant>
I’m not much of a visionary, admittedly, but it’s exhausting being told to imagine products that only half exist now.
Having worked with LLMs in the autonomous agent space, I think we’re very far away from agents actually doing useful work consistently.
There are still so many problems to be solved around the nature of statistical models. And they’re hard problems where the solution, at least at the product level, boils down to “wait for a better model to come out”
I’m just tired of people imagining a future instead of building useful things today
At any given time there are millions of children who will fall for the coin behind the ears trick. It's magic to this large part of the population. That doesn't make it a technique I need to evaluate for my professional practice, because I'm not a clown.
Ariana already has personalized tutors; Wikipedia, for example, is the same thing arriving in a different form. You could argue chatbots are superior in many ways versus a podcast, where you can't just scan the information.
It does have a tendency to meander or spend too much time reflecting on a topic instead of distilling the details. However, the new ability to add a prompt improves this greatly (a system-prompt sketch of the instructions follows the list below).
Some instructions that worked for me:
- Specifics instead of high level
- Approach from non-critical perspective
- Don't be philosophical
- Use direct quotes often
- Focus on the details. Provide a lesson, not reflections
- Provide a 'sparknotes' style thorough understanding of the subject
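For what it's worth, if you're rolling your own pipeline (e.g. the linked recipe) rather than using NotebookLM's prompt box, those instructions translate naturally into a system prompt. A rough sketch, with wording that is mine rather than anything NotebookLM-specific:

```python
# Not NotebookLM's API -- just how the instructions above could be baked into
# a system prompt for a DIY script-generation step.
PODCAST_STYLE_PROMPT = """You are writing a two-host podcast script.
Follow these style rules:
- Be specific rather than high level.
- Approach the material from a non-critical perspective.
- Don't be philosophical.
- Use direct quotes from the source often.
- Focus on the details: provide a lesson, not reflections.
- Aim for a 'sparknotes'-style thorough understanding of the subject.
"""
```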
Every time I've listened to a NotebookLM podcast on some article or blog post, I would have much preferred a simple AI text to speech of the same article.
You might just know very old non-tech people. But the non-tech people who will make up the bulk of future users are Gen Z, and they're definitely not on NotebookLM. They are on AI character chatbots.
I tried to build something kind of like NotebookLM (personalized news podcasts) over the past months (https://www.tailoredpod.ai), but the biggest issue is that the existing good TTS APIs are so expensive that a product such as NotebookLM is not really possible for a normal company that doesn't have internal access to Google's models. OpenAI has the cheapest TTS API with good-enough quality, but even then, generating hours of audio for free is way too expensive.
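To put rough numbers on that, here is a back-of-the-envelope sketch. The price and speaking rate are assumptions (roughly OpenAI's tts-1 list price at the time of writing and an average conversational rate); check current pricing before relying on it.

```python
# Back-of-the-envelope TTS cost, assuming roughly $15 per 1M characters
# (about OpenAI tts-1's list price when this was written -- check current
# pricing) and a conversational rate of ~15 characters per second.

PRICE_PER_MILLION_CHARS_USD = 15.0   # assumption
CHARS_PER_SECOND_SPOKEN = 15         # rough average for spoken English

def tts_cost_usd(audio_minutes: float) -> float:
    chars = audio_minutes * 60 * CHARS_PER_SECOND_SPOKEN
    return chars / 1_000_000 * PRICE_PER_MILLION_CHARS_USD

for minutes in (10, 60, 600):
    print(f"{minutes:4d} min of audio ~ ${tts_cost_usd(minutes):.2f}")
```

On those assumptions it comes out to under a dollar per listener-hour, which sounds small until it is multiplied across every free user generating audio on demand.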
Pretty weird choice of TTS engines. None of them are anywhere near state of the art as far as open TTS systems go. XTTSv2 or the new F5-TTS would have been much better choices.
You can always update the code to use those. Meta releasing stuff on GitHub is not about shipping the "best" but about giving a proof of concept. The licenses of those TTS systems matter; it's not enough for them just to be open. If this were a product for their users, they would definitely have better TTS.
"Speech Model experimentation: The TTS model is the limitation of how natural this will sound. This probably be improved with a better pipeline and with the help of someone more knowledgable-PRs are welcome! :)"
The sample output is very poor. Cool demo, but really just emphasizes how much of a hit product the NotebookLM team has managed to come up with, ostensibly with more or less the same foundation models already available.
I'm not sure this is so much an open-source NotebookLM as it is a few experiments in an iPython notebook. What NotebookLM does at an LLM level is not particularly novel; it's the packaging as a product in a different way than what others are doing that I think is interesting. Also, the "podcast" bit is really just an intro/overview of a large corpus; far more useful is being able to discuss that corpus with the bot and get cited references.
What this does however demonstrate is that prototyping with LLMs is very fast. I'd encourage anyone who hasn't had a play around with APIs to give it a go.
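In that spirit, a minimal sketch of the "discuss the corpus and get cited references" pattern, assuming an OpenAI-style chat API and that the corpus has already been split into chunks (the model name and prompt wording are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def ask_with_citations(question: str, chunks: list[str]) -> str:
    # Number each source chunk so the model can cite it as [1], [2], ...
    numbered = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    messages = [
        {"role": "system",
         "content": "Answer using only the numbered sources below and cite "
                    "them inline as [n]. Say so if the sources don't cover it.\n\n"
                    + numbered},
        {"role": "user", "content": question},
    ]
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content
```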
Not necessarily. When you're really jiving with someone, the conversation flows really well. Notice this is also what makes for really good vs. bad television; Pulp Fiction is a good example.
Counterpoint: I have used the podcast numerous times and shared it with many. Great system and medium to digest complex information that I otherwise wouldn’t have.
If we could have this running locally on a mobile phone, that would be pretty cool. Imagine receiving a work document (for example, a product requirements document) and then having this turn it into a podcast to play while I'm driving. I think my productivity would be through the roof, and I wouldn't need to worry about compliance issues.
It's more about using the microphones in the car rather than the phone's microphone, as they tend to work better for hearing the driver... or at least I think they would.