NotebookLM is incredibly good at generating the affect and structure of a quality podcast.
This is in line with all art, music, and video created by LLMs at the moment. They are imitating a structure and affect; the quality of the content is largely irrelevant.
I think the interesting thing is that most people don't really care, and AI is not to blame for that.
Most books published today have the affect of a book, but the author doesn't really have anything to say. Publishing a book is not about communicating ideas, but a means to something else. It's not meant to stand on its own.
The reason so much writing, podcasting, and music is vulnerable to AI disruption is that quality has already become secondary.
> The reason so much writing, podcasting, and music is vulnerable to AI disruption is that quality has already become secondary.
Commercial creative workers are vulnerable because there's a billions-of-dollars industry effort to copy their professional output and compete with them selling cheap knock-offs.
I see this sort of convenient resignation all the time in the tech crowd... "creative workers can only blame themselves for tech companies taking their income because their art just isn't any good anymore!"
The poor quality "content" that's been proliferating recently has been created, largely, using the very tools the AI industry has built, or their immediate precursors. AI, for all its benefits, has only made that worse.
If you're saying, in good faith, that most of the infomercials, televangelist programs, talk radio, celebrity autobiographies, self-help books, scandalous exposé books, and health/exercise fad books etc etc etc that came out 50 years ago were made for no reason beyond advancing human knowledge, you're either too young to remember any media from before our current era or haven't looked beyond survivorship bias.
Tech folks love sentiments like this because they place the onus for being ripped off entirely on the people getting ripped off by big tech companies. If their work was that awful, companies wouldn't be clamoring to vacuum it up into their models to make more of it. Nearly all of the salable output from these models exists solely because they took creative products people made with the intention of selling them and are using those products to sell simulacra.
It's using nostalgia to deflect guilt for harpooning the livelihoods of many people, because it's just more convenient and profitable to empower the mediocre "content creators" they use to justify doing it.
> Tech folks love sentiments like this because they place the onus for being ripped off entirely on the people getting ripped off by big tech companies.
This, times a million. Add to that the ancient quote from Plato(?) criticizing writing or the other ancient quote complaining about the irresponsibility of the youth, unthinkingly deployed to attempt to delegitimize any kind of critique of nearly anything.
The technology industry seems to be overflowing with so-called "rational" people who mainly seem to use whatever intelligence they have to rationalize away responsibility for whatever problems their beloved technology has caused. It's a really stupid and obnoxious pattern; and once you see it, it's hard not to see it everywhere and be annoyed.
I think one element of it is naked greed (especially from the entrepreneurs) but I think another big part is a kind of stuntedness and parochialism that's often fueled by overconfidence (because of success in software engineering, forming an identity around "being smart" etc).
It's one of the reasons I left tech altogether after decades. It's like most people in the tech business right now think their totally unique supreme intellectual might gives them pan-subject-matter expertise. The further I moved away from development within the business, the more it repelled me.
Nobody is getting "ripped off" by ML models any more than by other humans. When a human wants to launch a high-quality podcast, they survey the market, listen to a lot of other high quality podcasts, and then set to creating their own derivative work.
What ML models are doing is really no different. It's just much, much faster.
Everything humans create is derivative of other works. Speed is the only difference.
The only difference between cracking a 4-bit private key and a 512-bit private key is speed, too. So are private keys of those sizes qualitatively the same thing?
Or is it that, at some nebulous point, a difference in speed between two things impacts the way humans choose to direct their efforts to such a great extent that, for all intents and purposes, the two things are qualitatively different?
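The scale of that speed gap is easy to make concrete: the brute-force search space doubles with every key bit, so the jump from 4 to 512 bits isn't a constant factor but 2^508. A toy sketch (illustrative arithmetic only, no real cryptography):

```python
# Toy illustration: an n-bit key has 2**n candidate values, so the
# "only difference is speed" between 4-bit and 512-bit brute force
# is a factor of 2**508 -- a number roughly 153 decimal digits long.
def search_space(bits: int) -> int:
    """Number of candidate keys a brute-force attack must consider."""
    return 2 ** bits

ratio = search_space(512) // search_space(4)
print(ratio == 2 ** 508)   # True
print(len(str(ratio)))     # 153 decimal digits
```

At human timescales, one of those searches finishes before lunch and the other outlasts the universe, which is the sense in which "just speed" becomes a qualitative difference.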
> The only difference between cracking a 4-bit private key and a 512-bit private key is speed, too. So are private keys of those sizes qualitatively the same thing?
That's like saying "The only difference between drinking 1 gallon of water and 100 gallons of water is death." Yes, the quantity of something for a given use-case is bound to give different results.
What the parent comment was saying is that the actions taken by these models should not be classified as morally wrong at scale, just as humans following the same process never would be, regardless of the output they produced.
I disagree about what the parent is trying to claim.
>What ML models are doing is really no different. It's just much, much faster.
I take this argument to be that (a) what the ML models are doing is fundamentally the same as what humans already do (I agree with this part), (b) that we have no moral problem with humans doing this already (I agree again), and that (c) the fact that AI does it much faster is not sufficient to cause any moral difference (I disagree with this part, and gave a counterexample to show how, in general, a difference in speed can make a moral difference, because that speed difference can have a large impact on how other people decide to behave).
> Commercial creative workers are vulnerable because there's a billions-of-dollars industry effort to copy their professional output and compete with them selling cheap knock-offs.
I agree there will be winners and losers of some proportion here. But I also think the people that want to pay for art will continue to pay as their motives and values are different. There's plenty of cheap knock-off art, but people still pay premiums for art to support the artist and their work.
As someone else replied to you, it's similar to piracy. The people that pirate were never going to pay in the first place. To tie it back here, the people listening to AI generated <whatever> were never going to pay in the first place - which is why so many podcasts get their money from ads.
> The people that pirate were never going to pay in the first place.
I think I agree with your larger point, but is this part true? When Spotify provided a much simpler UX to get the goods, people were happy to pay $10/month and Napster et al basically died.
> But I also think the people that want to pay for art will continue to pay as their motives and values are different.
The big difference is the type of artist. People selling fine art won't be affected much. The vast majority of artists are commercial artists, and the idea that being a commercial artist is morally or creatively bankrupt, a common sentiment among those who want to imagine that this is all just fine, is nonsense. It's pilfered commercial artwork that makes up the bulk of these tools' commercial utility, and the people who made it stand to suffer the most.
I haven't seen that idea (artists being morally bankrupt); like you, I'd strongly disagree. I also agree it's a shitty situation that artists invested hundreds of hours of their own time to create something, only to have it repackaged and sold by some AI tool.
That said, I'd still make the same point that people who value art and the artist will buy from and support the artist. Those that don't value it, won't. But now we're on a larger scale.
>That said, I'd still make the same point that people who value art and the artist will buy from and support the artist.
The chances anyone will come across the artist when their marketplace is flooded with increasingly plausible simulacra become more and more slim as time goes on.
AI is choking off any hope for artists supported by patronage, simply by virtue of discoverability being lost and trust being eroded.
When I was younger, piracy was justified with similar tech folks arguments: "Information wants to be free", "Serves them right for controlling their content in a way that inconveniences me", "If I own a copy, the content is mine to share".
And I used to be right there with them until I realized it was an entirely self-serving way to justify not paying for shit. (I'm not saying it is for everybody, or that everybody's situation mimics mine, but I was honest enough to admit that's what it was for me.)

I don't like Adobe's subscription plan, but I was f-ing poor for my younger decades and there's no way in hell I could afford paying a month's rent for Photoshop. But $10/mo? I signed up immediately. Also, rather than just using BT whenever I felt like getting an album, I started making deliberate decisions about what albums I wanted and bought them on iTunes when I learned about it in the mid-aughts. Sure, the lock-in and DRM suck, but I was happy to pay for the convenience. For indie bands, I still buy their stuff on Bandcamp even if I can stream it, just because they add value to my life, and not being legally compelled to pay them isn't the same as not being morally obligated to compensate people whose labor you voluntarily benefit from.

I haven't pirated software in decades. If it's not FOSS and I want it, I'll buy it. It's absolutely bananas how many developers make a fat living off of making commercial software but pretend to be radical class warriors when it's time to bust out the credit card for anything that isn't physical.
I'm confused about your point RE: infomercials et al - that's poor quality "content" that's been proliferating for, as you say, more than 50 years.
Is that not the work of commercial creative workers? Did it not exist pre AI? There's an argument to scale, certainly, but the idea that "things were better in the past before these <<new technologies>> came out" is generally a suspect argument.
To your broader point - new tools for creating creative work come out all the time. Did we suffer greatly at the loss of image compositors when Photoshop arrived? On the flip side, did digital art gut painting and sculpture? Isn't this just another tool for creative expression?
Art is a way of seeing, not a way of creating. I don't think the technology is taking that away.
>Is that not the work of commercial creative workers? Did it not exist pre AI? There's an argument to scale, certainly, but the idea that "things were better in the past before these <<new technologies>> came out" is generally a suspect argument.
The fact that all of that stuff was crap is central to my point. You might just need to give it another read.
> Art is a way of seeing, not a way of creating. I don't think the technology is taking that away.
I'm really sick and tired of the tech industry's bumper-sticker-level-reductive pseudo-philosophical generalizations about "what art is," what it means to be an artist, the acceptable ways to be an artist, and all of that. Art is a whole fucking lot of things, and chief among them in this context is a class of professions. Glib decrees based on a razor-thin slice of one of the broadest topics in the human experience that conveniently exclude or dismiss the stakes of those with the loudest criticism and the most to lose are obviously self-serving. If you're going to take the libertarian "well, that's the market for ya" stance, at least be honest about it. If you're going to try to carefully define the entire universe of ideas and practices that comprise art to conveniently exclude the concerns of the people getting screwed over because you think the optics are better or you feel less icky about it, well, you better expect to get some really pissed off responses from them.
There's no glib decree - a new technology has arrived. I'm being very honest about it - you can't put it back in the bottle, any more than jacquard looms or machine woodcarving of trim could be. The position between art and craft can be endlessly debated and I put my stake in the ground.
You can disagree! Folks who are impacted have every right to be pissed, organize, take action. All of these creative endeavors existed _post technology updates_ though - that's my entire point. The need for art doesn't disappear - it changes. Standing athwart the change is a choice, but I'm not sure it is an effective position.
Well gosh, good thing someone in big tech gave me permission to be mad about many in my field being screwed by big tech! Too bad that won't help pay for my cancer treatment because there's no way in hell they'll push out a cure soon enough when they're dumping billions of dollars into figuring out how to sell other people's artwork. At least people won't have to waste an uncomfortable few minutes writing a thoughtful note to my wife in the aftermath when they can just "Ok, google" it.
>> Art is a way of seeing, not a way of creating.
> There's no glib decree
This is a glib decree and it completely ignores most of what art actually is in our world, rather than the quaint little box that most people in the NN business try to stuff it into. Your patronizing tone doesn't lend any authority or add depth to your initial analysis, which you essentially just restated using more words. The "art vs craft" dichotomy doesn't even approach the depth and complexity of the interplay of art and commerce in worlds like video game development, music, cinema and television, and writing... hell, even advertising. Like most tech dudes who assume their incredible mental might gives them some kind of pan-topic expertise allowing them to casually dismiss subject matter experts in other fields based on a few a priori thought exercises, you simply don't know how much more you need to learn to make informed decisions about this topic.
Sure, and it wasn't the right thing to do, especially if it was from an independent artist. I haven't in well over a decade. There's also a canyon of a difference between that and if I had re-sold their product, at scale, effectively putting the artists out of business. I'd love to explode copyright, but unfortunately, our society has no mechanism for compensating the people who make this valuable work without it, because a whole bunch of tech execs will say "jeez, I'd really like to get paid for their work instead of them."
> There's also a canyon of a difference between that... effectively putting the artists out of business.
There is a direct line between music piracy you did in the past and the status quo of Spotify paying millidollars to artists today. Another POV is, find me musicians who prefer a world with Internet piracy compared to one without.
You're arguing about something I'm not. I completely agree that what I did was wrong, and to boot, I stopped pirating music as soon as online music stores like itunes popped up despite being an impoverished line cook.
That has absolutely no impact, at all, on my fitness to criticize this current wrong.
I think and hope that you're wrong. There's always been cheese, and there's a lot of it now. But there is still a market for top-notch insight.
For example, Perun. This guy delivers an hourlong presentation on (mostly) the Ukraine-Russia war and it's pure quality. Insights, humour, excellent delivery, from what seems to be a military-focused economist/analyst/consultant. We're a while away from some bot taking this kind of thing over.
I keep seeing this assertion: "the robots will get there" (or its ilk), and it's starting to feel really weird to me.
It's an article of faith -- we don't KNOW that they're going to get there. They're going to get better, almost certainly, but how much? How much gas is left in the tank for this technique?
Honestly, I think the fact that every new "groundbreaking" news release about LLMs has come alongside a swath of discussion about how it doesn't actually live up to the hype, that it achieves a solid "mid" and stops there, I think this means it's more likely that the robots AREN'T going to get there some day. (Well, not unless there's another breakthrough AI technique.)
Either way, I still think it's interesting that there's this article of faith a lot of us have "we're not there now, but we'll get there soon" that we don't really address, and it really colors the discussion a certain way.
IMO it seems almost epistemologically impossible that LLMs following anything even resembling the current techniques will ever be able to comfortably out-perform humans at genuinely creative endeavours because they, almost by definition, cannot be "exceptional".
If you think about how an LLM works, it's effectively going "given a certain input, what is the statistically average output that I should provide, given my training corpus".
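That sampling step can be sketched in a few lines (the distribution here is a toy example, not output from a real model):

```python
import random

# Toy sketch of next-token sampling: a model maps a context to a
# probability distribution over tokens and draws from it. Lower
# temperature concentrates probability mass on the most likely
# ("statistically average") continuation, which is why unguided
# output trends toward whatever is typical in the training corpus.
def next_token(distribution: dict[str, float], temperature: float = 1.0) -> str:
    tokens = list(distribution)
    # Exponentiating by 1/temperature sharpens (<1) or flattens (>1)
    # the distribution before sampling.
    weights = [distribution[t] ** (1.0 / temperature) for t in tokens]
    return random.choices(tokens, weights=weights)[0]

dist = {"the": 0.6, "a": 0.3, "zebra": 0.1}
print(next_token(dist, temperature=0.2))  # almost always "the"
```

The point of the sketch: nothing in the loop rewards the rare, exceptional continuation; it rewards the likely one.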
The thing is, humans are remarkably shit at understanding just how exceptional someone needs to be to be genuinely creative in a way that most humans would consider "artistic"... You're talking 1/1000 people AT best.
This creates a kind of devil's bargain for LLMs where you have to start trading training set size for training set quality, because there's a remarkably small amount of genuinely GREAT quality content to feed these things.
I DO believe that the current field of LLM/LXMs will get much better at a lot of stuff, and my god anyone below the top 10-15% of their particular field is going to be in a LOT of trouble, but unless you can train models SOLELY on the input of exceptionally high performing people (which I fundamentally believe there is simply not enough content in existence to do), the models almost by definition will not be able to outperform those high performing people.
Will they be able to do the intellectual work of the average person? Yeah absolutely. Will they be able to do it probably 100/1000x faster than any human (no matter how exceptional)?... Yeah probably... But I don't believe they'll be able to do it better than the truly exceptional people.
I’m not sure. The bestsellers lists are full of average-or-slightly-above-average wordsmiths with a good idea, the time and stamina to write a novel and risk it failing, someone who was willing to take a chance on them, and a bit of luck. The majority of human creative output is not exceptional.
A decent LLM can just keep going. Time and stamina are effectively unlimited, and an LLM can just keep rolling its 100 dice until they all come up sixes.
Or an author can just input their ideas and have an LLM do the boring bit of actually putting the words on the paper.
I’m just saying, the vast majority of human creative endeavours are not exceptional. The bar for AI is not Tolkien or Dickens, it’s Grisham and Clancy.
IMO the problem facing us is not that computers will directly outperform people on the quality of what they produce, but that they will be used to generate an enormous quantity of inferior crap that is just good enough that filtering it out is impossible.
We have already trashed the internet and, really, human communication with SEO blogspam, brought even lower by influencers desperately scrambling for their two minutes of attention. I could actually see average quality rising, since it will now be easy to churn out higher quality content, even more easily than the word salad I have been wading through for at least the last 15 years.
I am not saying it's not a sad state of affairs. I am just saying we have been there for a while and the floor might be raised, a bit at least.
Yes, LLMs are probably inherently limited, but the AI field in general is not necessarily limited, and possibly has the potential to be more genuinely creative than even most exceptional creative humans.
I loosely suspect too many people are jumping into LLMs and I assume real research is being strangled. But to be honest, all of the practical alternatives I have seen, such as Mr Goertzel's work, are painfully complex; very few can really get into them.
Agreed. I think people are extrapolating with a linearity bias. I find it far more plausible that the rate of improvement is not constant, but instead a function of the remaining gap between humans and AI, which means that diminishing returns are right around the corner.
There's still much to be done re: reorganizing how we behave such that we can reap the benefits of such a competent helper, but I don't think we'll be handing the reins over any time soon.
In addition to "will the robots get there?" there's also the question "at what cost?". The faith-basedness of it is almost fractal:
- "Given this thing I saw a computer program do, clearly we'll have intelligent AI real soon now."
- "If we generate sufficiently smart AI then clearly all the jobs will go away because the AI will just do them all for us"
- "We'll clearly be able to do the AI thing using a reasonable amount of electricity"
None of these ideas are "clear", and they're all based on some "futurist faith" crap. Let's say Microsoft does succeed (likely at colossal cost in compute) in creating some humanlike AI. How will they put it to work? What incentives could you offer such a creature? What will it want in exchange for labor? What will it enjoy? What will it dislike? But we're not there yet; first show me the intelligent AI, then we can discuss the rest.
What's really disturbing about this hype is precisely that this technology is so computationally intensive. So of course the computer people are going to hype it--they're pick-and-shovel salespeople supplying (yet another) gold rush.
AI has been so conflated with LLMs as of late that I'm not surprised that it feels like we won't get there. But think of it this way, with all of the resources pouring into AI right now (the bulk going towards LLMs though), the people doing non-LLM research, while still getting scraps, have a lot more scraps to work with! Even better, they can probably work in peace, since LLMs are the ones under the spotlight right now haha
We all seek different kinds of quality; I don't find Perun's videos to have any quality except volume. He reads bullet points he has prepared, and makes predictable dad jokes in monotone, re-uses and reruns the same points, icons, slides, etc. Just personally, I find it really samey, and some of the reporting has been delayed so much it's entirely detached from the ground by the time he releases. It's a format that allows converting dense information and theory into hour-long videos, without examples or intrigue.
Personally, I prefer watching analysis/sitrep updates with geolocations, clips from the front, and strategic analysis which uses more of a presentation (e.g. using icons well and sparingly). Going through several clips from the front and reasoning about offensives, reasons, and locations seems equally difficult to replicate as Perun's videos, which rely on information density.
I do however love Hardcore history - he adds emotion and intrigue!
I agree with your overall hope that quality and different approaches will still stand out from AI-generated alternatives.
I think the main problem with Perun's videos is that they are videos. I run a little program on my home-lab that turns them into podcasts, and I find that I enjoy them far more because I need to be less engaged with a podcast to still find it enjoyable. (Also, I gave up on being up to date with the Ukraine situation, since up-to-date information is almost always wrong. I am happy to be a week or 14 days behind if the information I am getting is less wrong.)
I like Hardcore history very much, but I think it would be far worse in a video form.
> He reads bullet points he has prepared, and makes predictable dad jokes in monotone, re-uses and reruns the same points, icons, slides, etc.
The presentation is a matter of taste (I like it better than you do), but the content is very informative and insightful.
It's not really about what is happening at the frontline right now. That's not its aim. It's for people who want dense information and analysis. The state of the Ukrainian and Russian economies (subjects of recent Perun videos) does not change daily or weekly.
All of the other commentators have replied with a good diverse set of YouTubers and included ones with biases from both sides; I'd recommend the ones they have linked. Some (take note of the ones that release information quicker) might be more biased or more prone to reporting murky information than others.
I like a range of the Ukraine coverage. From stuff that comes in fast to the weekly roundup-with-analysis. E.g. Suchomimus has his own humour and angle on things, but if you don’t have a unique sense of humour or delivery then it’s easier for an AI to replace you.
Give it a year or three, up to the minute AI generated sitrep pulling in related media clips and adding commentary…not that hard to imagine.
> Give it a year or three, up to the minute AI generated sitrep pulling in related media clips and adding commentary…not that hard to imagine.
But why? Isn't there enough content generated by humans? As a research tool, AI is great at helping people do whatever they do, but having it generate content by itself, with the human automated away, is next to trash in my book, pure waste. Just like unsolicited pamphlets thrown at your door that you pick up in the morning to throw in the bin. Pure waste.
This is true but the quality frontier is not a single bar. For mainstream content the bar is high. For super-niche content, I wouldn’t be surprised if NotebookLM already competes with the existing pods.
This will be the dynamic of generated art as it improves; the ease of use will benefit creators at the fringe.
I bet we see a successful Harry Potter fanfic fully generated before we see an AAA Avengers movie or similar. (Also, extrapolating, RIP copyright.)
On the contrary, the mainstream eats any slop you put in front of it as long as it follows the correct form - one needs only look at cable news - the super niche content is that which requires deep thinking and novel insights.
Or to put another way, I've heard much better ideas on a podcast made by undergrad CS students than on Lex Fridman.
It's the complete opposite. Unless your definition of mainstream includes stuff like this deep dive into Russia/Ukraine, in which case I think you're misunderstanding "mainstream".
I know I'm not the first to say this, but I think what's going on is that these AI things can produce results that are very mid. A sort of extra medium. Experts beat modern LLMs, but modern LLMs are better than a gap.
If you just need a voice discussing some topic because that has utility and you can't afford a pair of podcasters (damn, check your couch cushions), then having a mid podcast is better than having no podcast. But if you need expert insight because expert insight is your product and you happen to deliver it through a podcast, then you need an expert.
If I were a small software shop and I wanted something like a weekly update describing this week's changes for my customers, and I have a dozen developers and none of us are particularly vocally charismatic, putting out a weekly update generated from commits, completed tickets, and developer notes might be useful. The audience would be very targeted and the podcast wouldn't be my main product, but there's no way I'd be able to afford expert-level podcasters for such a position.
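That pipeline is simple to prototype. A minimal sketch (the function name and data shapes are hypothetical; in practice the commit list would come from something like `git log --since="1 week ago" --pretty=%s`, and the resulting script would be handed to a TTS or podcast-generation tool):

```python
# Hypothetical sketch: assemble a week's commit messages and completed
# tickets into a plain-text update script suitable for feeding to a
# text-to-speech or podcast-generation service.
def weekly_update(commits: list[str], tickets: list[str]) -> str:
    lines = ["This week's changes:"]
    lines.extend(f"- {msg}" for msg in commits)
    if tickets:
        lines.append("Completed tickets:")
        lines.extend(f"- {t}" for t in tickets)
    return "\n".join(lines)

script = weekly_update(
    ["Fix login redirect loop", "Add CSV export"],
    ["PROJ-42: export endpoint"],
)
print(script)
```

The interesting work is entirely in the generation step at the end; the gathering and formatting is an afternoon's scripting.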
I would argue Perun is a world-class defense logistics expert, or at least expert enough, passionate enough, and charismatic enough to present as such. Just like the guys who do Knowledge Fight are world-class experts on debunking Alex Jones, and Jack Rhysider is an expert on and fanboy of computer security so Darknet Diaries excels, and so on...
These aren't for making products, they can't compete with the experts in the attention economy. But they can fill gaps and if you need audio delivery of something about your product this might be really good.
Edit - but as you said the robots will catch up, I just don't know if they'll catch up with this batch of algorithms or if it'll be the next round.
> I know I'm not the first to say this, but I think what's going on is that these AI things can produce results that are very mid. A sort of extra medium. Experts beat modern LLMs, but modern LLMs are better than a gap.
I've seen people manage to wrangle tools like Midjourney to get results that surpass extra medium. And most human artists barely manage to reach medium quality too.
The real danger of AI is that, as a society, we need a lot of people who will never be anything but mediocre still going for it, so we can end up with a few who do manage to reach excellence. If AI causes people to just give up even trying and just hit generate on a podcast or image generator, then that is going to be a big problem in the long run. Or not, and we just end up being stuck in a world that is even more mediocre than it is now.
AI looks like it will commoditise intellectual excellence. It is hard to see how that would end up making the world more mediocre.
It'd be like the ancient Romans speculating that cars will make us less fit and therefore cities will be less impressive because we can't lift as much. That isn't at all how it played out; we just build cities with machines too and need far fewer workers in construction.
If you want to say AI has reached intellectual excellence because we have a few systems that have peaked in specific topics, I would argue that those are so custom and bespoke that they are primarily a reflection of their human creators. Things like champions of specific games or solutions to specific hard algorithms are not generally repurposable, and all of the general AI we have is a little bit dumb; when it works well it produces results that are generally mid. Occasionally we can get a few things we can sneak by and call better, but that's hardly a commodity; that's people sifting through large piles of mid for gems.
There are a lot of ways we could argue that, if it did reach intellectual excellence, it would make humanity more mediocre. I'm not sure I buy such arguments, but there are lots of them and I can't say they're all categorically wrong.
> It'd be like the ancient Romans speculating that cars will make us less fit and therefore cities will be less impressive because we can't lift as much. That isn't at all how it played out
No, obviously not. Modern construction is leagues outside what the Romans could ever hope to achieve. Something like the Burj Khalifa would be the subject of myth and legend to them.
We move orders of magnitude more cargo and material than them because fitness isn't the limiting factor on how much work gets done. They didn't understand that having humans doing all that labour is a mistake and the correct approach is to use machines.
I don't know, Dubai is...bigger, but I'd say it's a vastly more mediocre city than Rome. To your original point, making things easier to make probably does exert downward pressure on quality in the aesthetic/artistic sense. Dubai might have taller buildings and a better sewage system[0], but it will never have the soul of a place like Rome.
[0] Given the floods I saw recently, I'm not sure this is even true.
I don't think your logic follows that we need a lot of people suffering to get a few people to be excellent. If people with a true and deep passion pursue a thing, I think they have a significant chance of becoming excellent at it. These are people who are more likely to try again if they fail, people who are more likely to invest above-average levels of resources into acquiring the skill, people who are willing to try hard and self-educate; such people don't follow a long-tail distribution for failure.
If someone wants to click a generate button on a podcast or image generator, it seems unlikely to me that they would have been sufficiently motivated to make an excellent podcast or image. On the flip side, consider the person clicking that button because they want to go on to do scriptwriting, game development, structural engineering, or anything else, but they need a podcast or image. Having such a button frees up their time.
Of course this is all just rhetorical, and occasionally someone is pressed into a field where they excel and become a field leader. I would argue that is far less common than someone succeeding at what they want to do, but I can't present very strong evidence for this.
> as a society, we need a lot of people who will never be anything but mediocre still going for it, so we can end up with a few who do manage to reach excellence
"Reach excellence" is the key phrase there. Excellence takes time and work, and most everyone who gets there is mediocre for a while first.
I guess if AIs become excellent at everything, and the gains are shared, and the human race is liberated into a post-scarcity future of gay space communism, then it's fine. But that's not where it's looked like we're heading so far, though - at least in creative fields. I'd include - perhaps not quite yet, but it's close - development in that category. How many on this board started out writing mid-level CRUD apps for a mid-level living? If that path is closed to future devs, how does anyone level up?
> But that's not where it's looked like we're heading so far
I think one of the major reasons this is the case is because people think it's just not possible; that the way we've done things is the only possible way we can continue to do things. I hope that changes, because I do believe AI will continue to improve and displace jobs.
My skepticism is not (necessarily) based on the potential capabilities of future AI; it's about the distribution of the returns from improved productivity. That's a political, not a technological, problem, and the last half century has shown most countries to be unable to distribute resources in ways that trend towards post-scarcity.
That may be your position as well - indeed, I think your point about "people think[ing] it's not possible" is directly relevant - but I wanted to make that more explicit than I did in my original comment.
I stumbled on a parody of Dan Carlin recently. I don't know the original content enough to know if it's accurate or even funny as a satire of him specifically, but I enjoyed the surreal aspect. I'm guessing some AI was involved in making it:
Seriously, Hardcore History? I don't even remember where I first heard of him, but I think it was a Lex podcast. So I checked out Hardcore History and was mightily disappointed. To my ears, he rambles for 3 hours about a topic, more or less unstructured and very long-winded, so that I basically remember nothing after finishing the podcast. I tried several times again, because I wanted it to be good. But no, not the format for me, and not a presentation I can actually absorb.
Hardcore History can certainly be off kilter, and the first eppy of any series tends to be a slog as he finds his groove. That said, Wrath of the Khans, Fall of the Republic, and the WW1 series do blossom into incredibly gripping series.
Yea there are much better examples of quality history podcasts, that are non-rambling. E.g. Mike Duncan podcasts (Revolutions, History of Rome), or the Age of Napoleon podcast. But even those are really just very good digestions of various source materials, which seems like something where LLMs will eventually reach quite a good level.
It's interesting I have the exact opposite opinion. I'm sure Mike Duncan works very hard, and does a ton of research, and his skill is beyond anything I can do. But his podcasts ultimately sound like a list of bullet points being read off a Google Doc. There's no color, personality, or feeling. I might as well have a screen reader narrate a Wikipedia article to me. I can barely remember anything I heard by him.
Carlin on the other hand, despite the digressions and rambling, manages to keep you engaged and really feel the events.
For such historical topics, my LLM-based software podgenai does a pretty good job imho. It is easier for it since it's all internal knowledge that it already knows about.
I would like them to be right, and for that to mean that the 'real' content gets scarcer (fewer people bother) but better (or at least a higher SNR among what there is).
And then faster/easier/cheaper access to the LM 'uninspired but possibly useful' content, whatever that might look like.
Think back to the mid-1980s and the first time everyone got their hands on a Casio or Yamaha keyboard with auto-accompaniment.
It was a huge amount of fun to play with, just pressing a few buttons, playing a few notes and feeling like you were producing a "real" pop song. Meanwhile, any actual musicians were to be found crying in the corner of the room, not because a new tool had come along which threatened their position, but because non-musicians apparently didn't understand (at least immediately) the difference between these superficial, low-effort machine-generated sounds and actual music.
What is scary about AI is the speed of improvement, not what it currently is.
People keep forming these analogies/explanations with the inherent premise that what we have now is what AI is going to be - "It's actually kind of shitty so don't fret, not much will change".
AI music creation has improved more in the last 5 years than keyboard accompaniment improved in the previous 40 years. It would be very brazen to bet that the tech 5 years from now is hardly any better. Especially when scaling transformers has consistently improved outputs. Double especially when the entire tech industry is throwing the house at scaling it.
Popular music has already been synthetic and soulless for decades now. People will listen to what sounds good to them, and we already know the bar is very low, and that the hard truth is that it is all subjective anyway.
More of a behavioural science take. Is music the sound that is played or the people making the sound?
We’ve had software accompaniment for a long time. Elevator music. The same 4 chords arranged in similar ways for decades. Hasn’t destroyed music. Neither will AI.
At some point people are going to want to know who’s on the other side making the music.
Unless your argument is that nobody values artists… which is I guess one of the primary conceits of GenAI enthusiasts today.
To be clear, Dan the Automator added an additional drum track, an additional bass track, and a melodica track as well as numerous other sound effects. They didn't just loop the Casio demo track.
It ends up sounding like a smarmy Sunday-morning talk show conversation, with over-exaggerated affect and no content.
So far I've just fed it technical papers, which may be part of the problem, but what I got back was, "Gosh, imagine if a recommender system really understood us? Wow, that would be fantastic, wouldn't it?"
Already in the sample embedded by Simon. "Gosh", "wow", "like", "like", "like", "[wooooaaaawiiiing, woooooooaawiiiiiiing]", "Oh my god", "I was so, like...".
While it's impressive, I agree that it tends to make over the top comments or reactions about everything. It could probably make a Keurig machine sound like a revolutionary coffee maker.
I ran one of my papers through it, mind blown at how well they dumbed it down without losing too much detail (though still quite a lot was omitted). I wonder if it's domain specific, and what the variance is by topic.
Same here. In fact, I typically struggle communicating my scientific research to journalists, and next time I'll use this. It found some good metaphors to make even a quite math-heavy paper's core concepts understandable to the audience without losing correctness, which is something that both I and the journalist typically fail to do (I keep the correctness but don't make it understandable enough, so then journalists start coming up with metaphors and do the opposite).
A lawyer friend of mine also suggested giving it the Spanish civil code, a long, arid legal text. The podcast of course didn't cover the whole text in 10 minutes, which would be impossible, but they selected some interesting tidbits and actually had me hooked until the end and made me learn a few things about it, which is no small merit. And my friend was quite impressed and didn't complain about correctness.
I did the same thing, running one book I edited and another book I wrote through it, and it did quite well. I was particularly impressed with how the “hosts” came up with their own succinct examples and metaphors to explain what I had written at much greater length. (I should mention that one of those books was in Japanese, and they captured it clearly in English.)
Lately, when I just want to get the gist of a long article or research paper, I run it through NotebookLM and listen to the podcast while I’m exercising.
My only complaint is that the chatty podcasty gab gets tiring after a while. I wish it were possible to dial that down.
We’ve become so great at articulation and delivery of empty ideas. To a point, I completely block out people like these in real life. This is an entire career for many.
my first job out of college was at a big name management consulting firm... to riff on your point: yes, such is the entire career for many. and theirs aren't even such bad careers if one only considers money and prestige. two years there completely cured me of any illusion of positive correlation between prestige and intelligence. I used to wonder if the partners at the firm actually believed the bullshit they were spilling -- actually "delivering value" per consulting parlance. I get it that people do intellectually dishonest things just for the money... but the partners seemed to genuinely believe their chatgpt-esque text generation. In the end I figured it was a combination of self-selection (only the true believers stay for the years and make partner) and a psycho-hack where if you want to convince your client, you better believe it yourself first (only the true believers make good evangelists).
In the case of those books and podcasts, who cares if you read or listen to them? The point is that the books are sold and make the right lists. The point is that the podcasts are downloaded so ads can be sold or that vanity numbers can be reported.
In terms of such music and films (whether created by human or AI) sometimes it's just because we are social creatures and need shared experiences to talk with others about.
But knowing it's synthetic, why would you buy the book or listen to the podcast in the first place? There's nothing social or shared in a synthetic affectation.
In an ideal world, I would sit down with an espresso or a beer, and review collections of research papers on a regular basis.
In reality, between work, sleep and family, I rarely have anything resembling that kind of time and mental energy reserve available.
But, what I can afford is to listen to podcasts while doing other things. Doing that gives me enough of an overview to keep up with a general topic and find new topics that might be worth investing into deeper.
Wouldn’t it be great if someone made a podcast channel specifically for “Papers corysama wants to hear about at this moment”? I think so. Apparently, so do a lot of other people. But, they don’t want to listen to my specific channel.
I wouldn't read an AI-generated book (except maybe once as a curiosity), but I would definitely listen to AI-generated music if it were good enough.
Reading a book is a time investment so I want it to convey the thoughts of another human being, otherwise it would feel like wasting my time. Listening to music, on the other hand, often is something that I do while I exercise, to keep a brisk pace and not get bored. As long as it sounds good, fits the genres and styles I like and is upbeat enough for exercising, I wouldn't have much of a problem with AI music - maybe it would even be a plus, since there are some specific music genres where I have already listened to pretty much everything there is (and no more is being made), and it would be great to have more.
I don't listen to podcasts, but I suppose in that case it depends on how you do so: devoting your full time and attention like a book, or as a background while you do something else like exercise music? As far as I know, many listeners are in the latter case, so I don't see why they wouldn't listen to AI podcasts.
There's background sounds and there's music. Music can communicate as much as the written word. I've listened to algorithmically generated bloop-blops and it's fine for background sound, but if it can't touch my heart it's not really music to me.
To me, as soon as I know it was fully generated it loses its magic. It doesn't matter how good it is.
I see the same with pottery. A factory-made pot cannot have more value than a handmade pot with the signature of a human. This touches the very fabric of society. Hard to explain.
LLMs are already better than books for exploring some ideas. But in conversation form.
Until we get better versions of o1 that can generate insights over days and then communicate them in book form the loss of interactivity and personalisation makes LLM books pointless.
An interactive conversation / tutorial session beats a book pretty much all of the time. Nonfiction books contain a lot of information that's redundant to a reader familiar with the topic, and not enough for someone new. They don't backtrack if you clearly missed an important point. And so on. It's like fractal geometry.
If an AI agent understands you (and book writing, and the topic of the book) well enough then it should be able to write you a pretty nice bespoke book.
I do suspect that interactive media is just strictly better in theory. But maybe there will be a period of time where bespoke AI-generated books make sense.
The problem is a whole book worth is a long time to go between feedback and questions. I don't see how the agent would know the reader that well, knowledge is embedded in the brain and only comes out when prompted.
I think it comes down to your area of interest. As a musician and music lover, I spend a significant amount of time trying to find or create music that is both original and good. AI generated music can be a competent imitation of well established ideas and forms, but that’s of zero value to me - I’m not looking for ‘more of the same’ - quite the opposite.
Of course. In my case, I'm not saying that I could do with AI music in any context either. Sometimes I play music in the living room, and I pay real attention to it, obviously AI won't do there. But when I'm using the music just as a background for exercising? Then sure, why not.
So you’re basically saying filler music, elevator music, background noise or whatever name it may come under. Since there’s already so much of it out there, and since the AI version isn’t novel in any way, I have a hard time understanding why you’d choose the AI-generated one.
Too much of it -> No, there are entire musical genres (e.g. italodance or big beat) where I have already listened to pretty much everything available, and they are not expanding anymore because they are not fashionable. It would be nice to have more songs and be surprised.
Isn't novel in any way -> This is not how it works, there are studies showing that AI can be creative. Or at the very least (since the definition of creativity can be controversial) produce output that is indistinguishable from novel, creative output, which is enough for the purpose discussed here.
Your comment implies that there is an existing piece of music which can substitute for the generated music. While substitutability varies from person to person, your original statement implies to me that each piece of generated music has an accompanying original that you can listen to instead (the one it was "stolen" from), since it is similar enough. I think we both know that is not the case.
I know that you likely intended to imply that you can substitute the aforementioned AI music with an existing piece of music of the same genre, but that is not a view shared by all. Sometimes the generated music scratches such a specific and personal itch that it cannot be replicated by something else in the same genre.
A better counterargument to your original comment would be "It is not an exclusive situation. I can listen to and support both generated music and handcrafted music at the same time. They both contain music tracks that I like."
You don't have to be a big fucking nerd about it, you know what I meant. The generated music wouldn't exist without the foundation of stolen music made by people.
No, I didn't know what you meant. Communication is hard, and there are multiple ways to interpret your statements. It is better to be specific.
To be more specific about the second sentence, if there are any readers in doubt:
> The generated music wouldn't exist without the foundation of stolen music made by people.
The word "stolen" is a value judgement that is not shared by all. It is a word meant to invoke an emotional response in the reader. For example, Stallman has argued that the data could not have been stolen, or else it would not be there anymore. So, removing this word gives you:
> The generated music wouldn't exist without the foundation of existing music made by people.
Which is a true fact that has never been in debate.
However, this is not relevant to the main point that not all generated music has a suitable handcrafted substitute, and that there is no actual need to choose exclusively to listen to generated or human crafted music. Furthermore, the conversation has turned uncivil (the first sentence). Therefore, goodbye.
>But why would I buy those books or listen to those podcasts that are synthetic affectations of no substance?
A randomly selected NotebookLM podcast is probably not substantial enough on its own. But with human curation, a carefully prompted and cherry-picked NotebookLM podcast could be pretty good.
Or without curation, I would use this on a long drive where audio was the only option to get a quick survey of a bunch of material.
That's the same question I have. There are already tons of great podcasts/music/everything in the niches that I like; I don't have the time to listen to them all. I also like to have quiet introspective time.
So where does AI regurgitated slop fit into my life?
In the case of NotebookLM, the AI generated podcasts aren't competing with existing podcasts, they're competing with other ways of consuming the source material. Would I rather listen to a real podcast? Yes. But no one's making a real podcast about the Bluetooth L2CAP specification.
> The reason so much writing, podcasting, and music is vulnerable to AI disruption is that quality has already become secondary.
I think that has always been the case, we just tend to compare today’s average stuff with the best stuff from earlier days.
For example, most furniture pictures from the 60s and 70s are from upper middle class homes. If we listen to music, we listen to Queen and not some local band from Alabama (not that I’m against such bands at all; they can make great music too).
> I think that has always been the case, we just tend to compare today’s average stuff with the best stuff from earlier days.
I agree with this of course, because generally nobody remembers the bad stuff unless it was the worst. I beg to differ with music, though, because there's an opposing effect: we tend to be left with the most marketed music, which was usually a cheap knockoff of something interesting going on at the time. The shitty commercial knockoff becomes the "classic" while the people they were ripping off don't even get a wikipedia page.
You're raising a good point about how "best" is defined.
If you ask most people, they are by definition more likely to connect with broadly disseminated cheap knock offs than they are with whatever 'legit' inventive underground creator, simply because they've heard the former and not the latter.
Just a mental exercise: If you ask 1000 people if they prefer Knock Off or Original, and 900 say Knock Off, which one was better? If the answer is still Original, by what metric do we measure quality?
I would disagree it's trying to be a "quality" podcast. As usual with AI, it's an average over averages, incredibly mediocre, sometimes borderline satire. For instance, in this example podcast they say "and trust me, guys, you wanna hear all about this", which is where I would usually turn off, because nothing of quality can come after this sentence.
In my company, HR now uses AI to do training videos. It's hilariously funny, because it looks like a satire on training videos (well, granted, it's funny for a minute or two, then it shifts to annoying).
Right? The fact that the LLM output is indistinguishable from a podcast says more about podcasts than about LLMs.
If anything, listening to that reminded me of why I stopped listening to podcasts in the first place - every 5 second snippet of something interesting ends up suffocated by 5 minutes of filler and dead air.
That's actually a really good application of AI, because the quality of the content is meaningless as long as it hits the bullet points. They only do this to check a box that training on <topic> was done.
> The reason so much writing, podcasting, and music is vulnerable to AI disruption is that quality has already become secondary.
They're vulnerable because people aren't random. Most of what we do can be modeled statistically and translated into patterns and tendencies. Given a sufficient number of parameters, just about anything we do can be digested by an autocompletion program that can then generate an output similar enough to the real thing to fool us.
Remembering the 90s, when I grew up really into alternative music, I think what has changed is public perception. AI back then would not have changed much, because mainstream pop music was already accepted as generic and derivative, existing only to make money. Quality was already seen as secondary to being successful. But nowadays, maybe due to social network incentives replacing curation by journalists, only the numbers seem to matter.
It is the perfect milquetoast personality. It's like Don Lemon but without the interesting bits of Don Lemon. It has no draw or interest.
Podcasts are only somewhat about things. The most important part is that they're by people, and the people are what draw listeners in. These AI podcasts are not by people, and when you listen to more than one you start to see the patterns and the void where a personality should be.
People care about being able to consume information in ways that work for them.
I don't have time to read white papers (nor am I very good at it), but want to know what they consist of. I also want to take my dog for a walk which is hard to do while staring at a screen. This, and other tools like it are useful in achieving that.
I think it is right that people don't care and there is some merit to it.
Reading, or listening to podcasts, these days is more akin to a meditation: many people do it to reinforce an identity rather than to expand on themselves.
And I do think that is reasonable as, for many people, there are few other structures that can keep them in check with themselves.
I think the average person is more interested in the output than in the process e.g. more people want to read The Shining than want to read about how The Shining was written
Yes, this is impressive; it has all the idiosyncrasies of podcasting: the pauses, the turns of phrase, even the tones where we hear people putting things in quotes, etc.
... but it's also pointless. And it's likely different episodes on different topics will tend to sound very much alike; it's already the case here, I'm sure I heard another example where the two voices were the same.
In less than a year we all have learned to recognize AI images with pretty good accuracy; text is more difficult, but podcasting seems easy in comparison.
Well, yes. Replace the various music and book publishing mills with LLMs for even more low quality drivel filling the marketplaces because now even the already low barrier of having to actually pay someone to produce it will be removed.
That's definitely going to be an improvement. Not.
I thought this was a great, insightful comment, but noodling over it a little more made me think it's not just content producers who are responsible for this "quality vacuousness" epidemic.
I think this is just partly an inevitable consequence of going from "content scarcity" to our new normal of "content obesity" over the past 20 years or so. In this new era of an overwhelming amount of content, it's just natural to compare it all against each other, e.g. to essentially "optimize" it to the "best" form, but in doing that we've fallen into a homogeneity, and the resulting lack of variation is an actual lowering of quality in and of itself.
2 examples to explain what I mean:
1. I find that nearly all interior design (at least within broad styles) looks basically the same to me now. It's all got that "minimalist, muted tones but with a touch of organic coziness and one or two pops of color" look to it. Honestly, I don't know how interior designers even exist today, when it's trivial to go to Houzz or any of a million websites and say "yes, like this". A while back I was complaining online somewhere that I thought all interior design looked similar where in the past there was much more interesting variation, and somebody insightful replied that it's not really that interior design is now just the same, it's that it's really just converged. People can easily see and compare a million designs against each other, so there is much less of a chance for that green shag carpet to even get a moment in the sun.
2. I was recently on vacation and decided I wanted to read a "classic" book, so I read Hemingway's The Sun Also Rises (I'm not sure why I never had to read that in high school). Nearly throughout the entire book I couldn't help thinking "Is there any time this book stops sucking?" I hated the entire thing - it was like being forced to watch someone's vacation photos for twelve hours straight, and I kept wondering why there never seemed to be any attempt to actually make me give a shit about any of the characters in the book, as nearly every one of them I found insufferable and wondered how they each had about 3 or 4 livers to spare. But I do understand that Hemingway's writing style was unique and original at the time, and that he was doing something new and interesting that influenced American literature for a long time. But these days, given the flood of content, it feels like most attempts at doing something "new and interesting" are not only forced, but nearly impossible given that there are a million other people also trying to do new and interesting things who now have the means to disseminate them. I don't think a book like The Sun Also Rises, where I believe the main impact was the style of writing/dialogue vs the actual story, could ever break through today.
I guess my point with this long post is that I think the "loss of quality" in content that many of us sense is just a direct result of there being so much content that we see variations from the "ideal" as worse, where in the past we may have found them interesting.
You're right, I love this, thanks! I was familiar with some of these examples, e.g. Komar and Melamid's painting example (and, IIRC, unless I'm confusing them with other artists, they also painted a painting filled with features that the "average" person hated, like abstract geometric shapes and stark colors, and the artists actually liked that painting and said something along the lines of "turns out we're really good at making bad art"), and the "AirBnB-style of interior design" was so excellently skewered by SNL recently, and HN has had a number of posts about how so many brands have devolved to the same monochrome, sans-serif typefaces for their logos.
Still, at the same time, I couldn't help but feeling a little bit sad/resigned at the existence of the article you linked. Here I thought I had an idea that was not exactly unique but that I felt would be good to share. And yet then here is an example that expresses this idea a million times better than I ever could (I love "The Age of Average" headline), with great researched examples and tons of helpful visuals. It's hard to not feel a bit like Butters in that "Simpsons did it!" episode of South Park...
What you say (though I'm not sure that we can speak of an "ideal") is compounded by the "late stage capitalism" fact that everything today is consolidated, and has to be about making and maximizing profit: Disney shareholders probably like the latest Marvel movie more than you do for being the same as the previous ones, because businesses don't like taking risks. The same applies to your furniture maker: when you sell to millions and want your shelves stuffed, you pick a select few materials and color variations that minimize cost and target the broadest audience.
yes, podcasting is a go-to-market strategy. One reason there are so many VC podcasts is because it is how GPs (VCs who fundraise) reach LPs (the money that invests in venture funds).
So your argument in a nutshell is: humans have nothing to say, let's stop listening to them. Are you serious? It's ALL about what humans want to send out to the world; this is what it's all about. I'm perplexed that this isn't obvious.