I think and hope that you're wrong. There's always been cheese, and there's a lot of it now. But there is still a market for top-notch insight.
For example, Perun. This guy delivers an hourlong presentation on (mostly) the Ukraine-Russia war and it's pure quality. Insights, humour, excellent delivery, from what seems to be a military-focused economist/analyst/consultant. We're a while away from some bot taking this kind of thing over.
https://www.youtube.com/@PerunAU
Or Hardcore History. The robots will get there, but it's going to take a while.
https://www.dancarlin.com/hardcore-history-series/
I keep seeing this assertion: "the robots will get there" (or its ilk), and it's starting to feel really weird to me.
It's an article of faith -- we don't KNOW that they're going to get there. They're going to get better, almost certainly, but how much? How much gas is left in the tank for this technique?
Honestly, every new "groundbreaking" news release about LLMs has come alongside a swath of discussion about how it doesn't actually live up to the hype, how it achieves a solid "mid" and stops there. I think that makes it more likely that the robots AREN'T going to get there some day. (Well, not unless there's another breakthrough AI technique.)
Either way, I still think it's interesting that there's this article of faith a lot of us hold -- "we're not there now, but we'll get there soon" -- that we don't really examine, and it really colors the discussion a certain way.
IMO it seems almost epistemologically impossible that LLMs following anything even resembling the current techniques will ever be able to comfortably outperform humans at genuinely creative endeavours, because they, almost by definition, cannot be "exceptional".
If you think about how an LLM works, it's effectively asking "given a certain input, what is the statistically average output that I should provide, given my training corpus?"
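To make "statistically average output" concrete, here's a toy sketch of a single next-token step (the tokens and scores are made up for illustration; real models do this over vocabularies of ~100k tokens):

    import math, random

    # The model assigns a score (logit) to every possible next token,
    # conditioned on the input so far. Made-up example values:
    logits = {"the": 2.1, "a": 1.3, "cheese": -0.5}

    # Softmax turns the scores into a probability distribution...
    total = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / total for tok, v in logits.items()}

    # ...and generation samples from it, so the output gravitates toward
    # whatever was most likely given the training corpus.
    next_token = random.choices(list(probs), weights=list(probs.values()))[0]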
The thing is, humans are remarkably shit at understanding just how exceptional someone needs to be to be genuinely creative in a way that most humans would consider "artistic"... You're talking 1/1000 people AT best.
This creates a kind of devil's bargain for LLMs where you have to start trading training set size for training set quality, because there's a remarkably small amount of genuinely GREAT quality content to feed these things.
I DO believe that the current field of LLM/LXMs will get much better at a lot of stuff, and my god, anyone below the top 10-15% of their particular field is going to be in a LOT of trouble. But unless you can train models SOLELY on the input of exceptionally high performing people (and I fundamentally believe there is simply not enough such content in existence to do that), the models almost by definition will not be able to outperform those high performing people.
Will they be able to do the intellectual work of the average person? Yeah, absolutely. Will they be able to do it 100-1000x faster than any human (no matter how exceptional)?... Yeah, probably... But I don't believe they'll be able to do it better than the truly exceptional people.
I’m not sure. The bestsellers lists are full of average-or-slightly-above-average wordsmiths with a good idea, the time and stamina to write a novel and risk it failing, someone who was willing to take a chance on them, and a bit of luck. The majority of human creative output is not exceptional.
A decent LLM can just keep going. Time and stamina are effectively unlimited, and an LLM can just keep rolling its 100 dice until they all come up sixes.
Or an author can just input their ideas and have an LLM do the boring bit of actually putting the words on the paper.
I’m just saying, the vast majority of human creative endeavours are not exceptional. The bar for AI is not Tolkien or Dickens, it’s Grisham and Clancy.
IMO the problem facing us is not that computers will directly outperform people on the quality of what they produce, but that they will be used to generate an enormous quantity of inferior crap that is just good enough that filtering it out is impossible.
We have already trashed the internet, and really human communication in general, with SEO blogspam, brought even lower by influencers desperately scrambling for their two minutes of attention. I could actually see quality rising on average, since it will now be easy to churn out higher quality content, even more easily than the word salad I have been wading through for at least the last 15 years.
I am not saying it's not a sad state of affairs. I am just saying we have been there for a while and the floor might be raised, a bit at least.
Yes, LLMs are probably inherently limited, but the AI field in general is not necessarily limited, and possibly has the potential to be more genuinely creative than even most exceptional creative humans.
I loosely suspect too many people are jumping into LLMs, and I assume real research is being strangled. But to be honest, all of the practical things I have seen, such as Mr Goertzel's work, are painfully complex and very few can really get into them.
Agreed. I think people are extrapolating with a linearity bias. I find it far more plausible that the rate of improvement is not constant, but instead a function of the remaining gap between humans and AI, which means that diminishing returns are right around the corner.
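To sketch that intuition in symbols (purely illustrative: H is human-level ability, Q(t) is AI ability, k a rate constant), "improvement rate proportional to the remaining gap" is

    \frac{dQ}{dt} = k\,(H - Q) \quad\Rightarrow\quad Q(t) = H - (H - Q_0)\,e^{-kt}

i.e. fast early gains that flatten out asymptotically as Q approaches H, rather than blowing past it.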
There's still much to be done re: reorganizing how we behave such that we can reap the benefits of such a competent helper, but I don't think we'll be handing the reins over any time soon.
In addition to "will the robots get there?" there's also the question "at what cost?". The faith-basedness of it is almost fractal:
- "Given this thing I saw a computer program do, clearly we'll have intelligent AI real soon now."
- "If we generate sufficiently smart AI then clearly all the jobs will go away because the AI will just do them all for us"
- "We'll clearly be able to do the AI thing using a reasonable amount of electricity"
None of these ideas are "clear", and they're all based on some "futurist faith" crap. Let's say Microsoft does succeed (likely at colossal cost in compute) in creating some humanlike AI. How will they put it to work? What incentives could you offer such a creature? What will it want in exchange for labor? What will it enjoy? What will it dislike? But we're not there yet; first show me the intelligent AI, then we can discuss the rest.
What's really disturbing about this hype is precisely that this technology is so computationally intensive. So of course the computer people are going to hype it--they're pick and shovel salespeople supplying (yet another) gold rush.
AI has been so conflated with LLMs as of late that I'm not surprised that it feels like we won't get there. But think of it this way, with all of the resources pouring into AI right now (the bulk going towards LLMs though), the people doing non-LLM research, while still getting scraps, have a lot more scraps to work with! Even better, they can probably work in peace, since LLMs are the ones under the spotlight right now haha
We all seek different kinds of quality; I don't find Perun's videos to have any quality except volume. He reads bullet points he has prepared, and makes predictable dad jokes in monotone, re-uses and reruns the same points, icons, slides, etc. Just personally, I find it really samey, and some of the reporting has been delayed so much it's entirely detached from the ground by the time he releases. It's a format that allows converting dense information and theory into hour-long videos, without examples or intrigue.
Personally, I prefer watching analysis/sitrep updates with geolocations/clips from the front/strategic analysis that use more of a presentation style (e.g. using icons well and sparingly). Going through several clips from the front and reasoning about offensives, reasons, and locations seems equally difficult to replicate as Perun's videos, which rely on information density.
I do however love Hardcore History - he adds emotion and intrigue!
I agree with your overall hope that quality and different approaches will still stand out from AI-generated alternatives.
I think the main problem with Perun's videos is that they are videos. I run a little program on my home-lab that turns them into podcasts, and I find that I enjoy them far more, because I need to be less engaged with a podcast to still find it enjoyable. (Also, I gave up on being up to date with the Ukraine situation, since up-to-date information is almost always wrong. I am happy to be a week or 14 days behind if the information I am getting is less wrong.)
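(For the curious, the download half of that little program needs almost no code. A minimal sketch using yt-dlp's Python API, with ffmpeg installed for the audio extraction; the output path and channel URL are just example choices, and serving the files as an RSS feed is left out:)

    import yt_dlp  # pip install yt-dlp

    opts = {
        "format": "bestaudio/best",
        # Re-encode the downloaded stream to mp3 for podcast players.
        "postprocessors": [{"key": "FFmpegExtractAudio", "preferredcodec": "mp3"}],
        "outtmpl": "episodes/%(title)s.%(ext)s",  # example output path
    }
    with yt_dlp.YoutubeDL(opts) as ydl:
        ydl.download(["https://www.youtube.com/@PerunAU"])  # whole channel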
I like Hardcore History very much, but I think it would be far worse in video form.
> He reads bullet points he has prepared, and makes predictable dad jokes in monotone, re-uses and reruns the same points, icons, slides, etc.
The presentation is a matter of taste (I like it better than you do), but the content is very informative and insightful.
It's not really about what is happening at the frontline right now. That's not its aim. It's for people who want dense information and analysis. The state of the Ukrainian and Russian economies (subjects of recent Perun videos) does not change daily or weekly.
All of the other commenters have replied with a good, diverse set of YouTubers, including ones with biases from both sides; I'd recommend the ones they have linked. Some (take note of the ones that release information quicker) might be more biased or more prone to reporting murky information than others.
I like a range of the Ukraine coverage. From stuff that comes in fast to the weekly roundup-with-analysis. E.g. Suchomimus has his own humour and angle on things, but if you don’t have a unique sense of humour or delivery then it’s easier for an AI to replace you.
Give it a year or three, up to the minute AI generated sitrep pulling in related media clips and adding commentary…not that hard to imagine.
> Give it a year or three, up to the minute AI generated sitrep pulling in related media clips and adding commentary…not that hard to imagine.
But why? Isn't there enough content generated by humans? As a research tool, AI is great at helping people do whatever they do, but having that automated away, with AI generating content by itself, is next to trash in my book, pure waste. Just like unsolicited pamphlets thrown at your door that you pick up in the morning to throw in the bin. Pure waste.
This is true but the quality frontier is not a single bar. For mainstream content the bar is high. For super-niche content, I wouldn’t be surprised if NotebookLM already competes with the existing pods.
This will be the dynamic of generated art as it improves; the ease of use will benefit creators at the fringe.
I bet we see a successful Harry Potter fanfic fully generated before we see a AAA Avengers movie or similar. (Also, extrapolating, RIP copyright.)
On the contrary, the mainstream eats any slop you put in front of it as long as it follows the correct form - one need only look at cable news - the super-niche content is that which requires deep thinking and novel insights.
Or to put it another way, I've heard much better ideas on a podcast made by undergrad CS students than on Lex Fridman.
It's the complete opposite. Unless your definition of mainstream includes stuff like this deep dive into Russia/Ukraine, in which case I think you're misunderstanding "mainstream".
I know I'm not the first to say this, but I think what's going on is that these AI things can produce results that are very mid. A sort of extra medium. Experts beat modern LLMs, but modern LLMs are better than a gap.
If you just need a voice discussing some topic, because that has utility and you can't afford a pair of podcasters (damn, check your couch cushions), then having a mid podcast is better than having no podcast. But if you need expert insight, because expert insight is your product and you happen to deliver it through a podcast, then you need an expert.
If I were a small software shop with a dozen developers, none of us particularly vocally charismatic, and I wanted something like a weekly update podcast for my customers, then putting out a weekly update generated from commits, completed tickets, and developer notes might be useful. The audience would be very targeted and the podcast wouldn't be my main product, but there's no way I'd be able to afford expert-level podcasters for such a position.
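Something like this, as a rough sketch (summarize and text_to_speech are hypothetical placeholders for whatever LLM and TTS services you'd actually wire in):

    import subprocess

    def summarize(prompt: str) -> str:
        # Placeholder: call your LLM provider of choice here.
        raise NotImplementedError

    def text_to_speech(script: str, out_path: str) -> None:
        # Placeholder: call your TTS service of choice here.
        raise NotImplementedError

    # This week's raw material: commit subjects from the repo.
    log = subprocess.run(
        ["git", "log", "--since=1.week", "--pretty=format:%s"],
        capture_output=True, text=True, check=True,
    ).stdout

    script = summarize("Write a short, friendly weekly-update podcast "
                       "script from these commit messages:\n" + log)
    text_to_speech(script, out_path="weekly_update.mp3")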
I would argue Perun is a world-class defense logistics expert, or at least expert enough, passionate enough, and charismatic enough to present as such. Just like the guys who do Knowledge Fight are world-class experts on debunking Alex Jones, and Jack Rhysider is an expert and fanboy of computer security, so Darknet Diaries excels, and so on...
These aren't for making products; they can't compete with the experts in the attention economy. But they can fill gaps, and if you need audio delivery of something about your product, this might be really good.
Edit - but as you said, the robots will catch up; I just don't know if they'll catch up with this batch of algorithms or if it'll be the next round.
> I know I'm not the first to say this, but I think what's going on is that these AI things can produce results that are very mid. A sort of extra medium. Experts beat modern LLMs, but modern LLMs are better than a gap.
I've seen people manage to wrangle tools like Midjourney to get results that surpass extra medium. And most human artists barely manage to reach medium quality too.
The real danger of AI is that, as a society, we need a lot of people who will never be anything but mediocre still going for it, so we can end up with a few who do manage to reach excellence. If AI causes people to just give up even trying and just hit generate on a podcast or image generator, then that is going to be a big problem in the long run. Or not, and we just end up being stuck in a world that is even more mediocre than it is now.
AI looks like it will commoditise intellectual excellence. It is hard to see how that would end up making the world more mediocre.
It'd be like the ancient Romans speculating that cars will make us less fit and therefore cities will be less impressive because we can't lift as much. That isn't at all how it played out; we just build cities with machines too and need a lot fewer workers in construction.
If you want to say AIs have reached intellectual excellence because we have a few that have peaked in specific topics, I would argue that those are so custom and bespoke that they are primarily a reflection on their human creators. Things like champions at specific games, or solutions to specific hard algorithms, are not generally repurposable, and all of the general AIs we have are a little bit dumb; when they work well they produce results that are generally mid. Occasionally we can get a few things we can sneak by and say they're better, but that's hardly a commodity; that's people sifting through large piles of mid for gems.
If it did reach intellectual excellence, there are a lot of ways we could argue that it would make humanity more mediocre. I'm not sure I buy such arguments, but there are lots of them and I can't say they're all categorically wrong.
> It'd be like the ancient Romans speculating that cars will make us less fit and therefore cities will be less impressive because we can't lift as much. That isn't at all how it played out
No, obviously not. Modern construction is leagues outside what the Romans could ever hope to achieve. Something like the Burj Khalifa would be the subject of myth and legend to them.
We move orders of magnitude more cargo and material than them because fitness isn't the limiting factor on how much work gets done. They didn't understand that having humans doing all that labour is a mistake and the correct approach is to use machines.
I don't know, Dubai is... bigger, but I'd say it's a vastly more mediocre city than Rome. To your original point, making things easier to make probably does exert downward pressure on quality in the aesthetic/artistic sense. Dubai might have taller buildings and a better sewage system[0], but it will never have the soul of a place like Rome.
[0] Given the floods I saw recently, I'm not sure this is even true.
I don't think your logic follows, that we need a lot of people suffering to get a few people to be excellent. If people with a true and deep passion follow a thing, I think they have a significant chance of becoming excellent at it. These are people who are more likely to try again if they fail, who are more likely to invest above-average levels of resources into acquiring the skill, who are willing to try hard and self-educate; such people don't follow a long-tail distribution for failure.
If someone wants to click a generate button on a podcast or image generator, it seems unlikely to me that they were a person who would have been sufficiently motivated to make an excellent podcast or image. On the flip side, consider that the person who wants to click the podcast or image button may want to go on to do script writing, game development, structural engineering, anything else, but they need a podcast or an image. Having such a button frees up their time.
Of course this is all just rhetorical, and occasionally someone is pressed into a field where they excel and become a field leader. I would argue that is far less common than someone succeeding at the thing they want to do, but I can't present very strong evidence for this.
> as a society, we need a lot of people who will never be anything but mediocre still going for it, so we can end up with a few who do manage to reach excellence
"Reach excellence" is the key phrase there. Excellence takes time and work, and most everyone who gets there is mediocre for a while first.
I guess if AIs become excellent at everything, and the gains are shared, and the human race is liberated into a post-scarcity future of gay space communism, then it's fine. But that's not where it's looked like we're heading so far - at least in creative fields. I'd include development in that category - perhaps not quite yet, but it's close. How many on this board started out writing mid-level CRUD apps for a mid-level living? If that path is closed to future devs, how does anyone level up?
> But that's not where it's looked like we're heading so far
I think one of the major reasons this is the case is that people think it's just not possible; that the way we've done things is the only way we can continue to do things. I hope that changes, because I do believe AI will continue to improve and displace jobs.
My skepticism is not (necessarily) based on the potential capabilities of future AI, it's about the distribution of the returns from improved productivity. That's a political - not a technological - problem, and the last half century has demonstrated most countries unable to distribute resources in ways which trend towards post-scarcity.
That may be your position as well - indeed, I think your point about "people think[ing] it's not possible" is directly relevant - but I wanted to make that more explicit than I did in my original comment.
I stumbled on a parody of Dan Carlin recently. I don't know the original content enough to know if it's accurate or even funny as a satire of him specifically, but I enjoyed the surreal aspect. I'm guessing some AI was involved in making it:
Seriously, Hardcore History? I don't even remember where I first heard of him, but I think it was a Lex podcast. So I checked out Hardcore History and was mightily disappointed. To my ears, he rambles for 3 hours about a topic, more or less unstructured and very long-winded, so that I basically remember nothing after having finished the podcast. I tried several times again, because I wanted it to be good. But no, it's not the format for me, and not a presentation I can actually absorb.
Hardcore History can certainly be off kilter, and the first eppy of any series tends to be a slog as he finds his groove. That said, Wrath of the Khans, Fall of the Republic, and the WW1 series do blossom into incredibly gripping series.
Yea, there are much better examples of quality history podcasts that are non-rambling. E.g. Mike Duncan's podcasts (Revolutions, History of Rome), or the Age of Napoleon podcast. But even those are really just very good digestions of various source materials, which seems like something where LLMs will eventually reach quite a good level.
It's interesting, I have the exact opposite opinion. I'm sure Mike Duncan works very hard, and does a ton of research, and his skill is beyond anything I can do. But his podcasts ultimately sound like a list of bullet points being read off a Google Doc. There's no color, personality, or feeling. I might as well have a screen reader narrate a Wikipedia article to me. I can barely remember anything I heard by him.
Carlin on the other hand, despite the digressions and rambling, manages to keep you engaged and really feel the events.
For such historical topics, my LLM-based software podgenai does a pretty good job imho. It is easier for it since it's all internal knowledge that it already knows about.
I would like them to be right, and for that to mean that the 'real' content gets scarcer (fewer bother) but better (or at least has a higher SNR among what there is).
And then faster/easier/cheaper access to the LM 'uninspired but possibly useful' content, whatever that might look like.