
I third this motion


"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem [275b] to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise." - Socrates on Writing and Reading, Phaedrus 370 BC

If one reads the dialogue, Socrates is not the one "saying" this; he is recounting a story of what King Thamus said to the Egyptian god Theuth, the inventor of writing. Theuth asks the king to pass writing on to the Egyptians, but the king is unsure about it.

It's what is known as one of the Socratic "myths," and really just contributes to a web of concepts that leads the dialogue to its ultimate terminus of aporia (being a relatively early Plato dialogue). Socrates, characteristically, doesn't really give his take on writing. In the text, he is just trying to help his friend write a horny love letter/speech!

I can't bring it up right now, but the end of the dialogue has a rather beautiful characterization of writing in the positive, saying that perhaps logos can grow out of writing, like a garden.

I think, if pressed, Socrates/Plato would say that LLMs are merely doxa machines, incapable of logos. But I am just spitballing.



Phaedo != Phaedrus. One is the "writing" one; the other is, well, about Socrates' execution (also an extremely good dialogue!).

The one at issue:

https://standardebooks.org/ebooks/plato/dialogues/benjamin-j...

The public domain translations are pretty old either way. John Cooper's big book is probably still the best, but I'm out of the game these days.

AI guys would probably love this, if any of them still have the patience to read and comprehend something very challenging. It's probably one of the more famous essays on the Phaedrus dialogue. It's the first long essay of this book:

https://xenopraxis.net/readings/derrida_dissemination.pdf

Roughly: Plato's subordination of writing in this text is symptomatic of a broader kind of `logocentrism` throughout all of western canonical philosophy. Derrida argues the idea of the "externality" of writing compared to speech/logos is not justified by anything, and in fact everything (language, thought) is more like a kind of "writing."


Presenting this quote without additional commentary is an interesting Rorschach test.

Thankfully more and more people are seriously considering the effects of technology on true wisdom and getting off the "all technological progress clearly is great, look at all these silly unenlightened naysayers from the past" train.


Socrates was right about the effects. Writing did indeed cause us to lose the talent of memorizing. Where he was wrong, though (or rather where this quote without context is wrong), is that memorizing turned out, for the most part, not to be the important skill to have.

When the same warnings are applied to LLMs, however, they may be correct both about the effect and about the importance of the skill being lost. If we lose the ability to think and solve various problems, we may indeed be losing a skill that is a very important part of our humanity.


You're misinterpreting the quote. Socrates is saying that being able to find a written quotation will replace fully understanding a concept. It's the difference between being able to quote the Pythagorean theorem and understanding it well enough to prove it. That's why Socrates says that those who rely on reading will be "hard to get along with" - they will be pedantic without being able to discuss concepts freely.

Huh, I think you're right. I think I failed the litmus test. Thanks for explaining!

While there are dangers to LLMs (science fiction has been talking about this issue for decades; see below), I think it's overblown, and the point of the Socrates quote is valid.

e.g. The Matrix Reloaded: https://youtu.be/cD4nhYR-VRA?si=bXGBI4ca-LaetLVl&t=69 (machines no one understands or can manage)

Isaac Asimov's classic, "The Feeling of Power": https://ia600806.us.archive.org/20/items/TheFeelingOfPower/T...

(future scientists discover how to add using paper and pencil instead of a computer)

I mean, big paradigm shifts are like death: we can't really predict how humanity will evolve if we really get AGI. But these LLMs, as they work today, are tools, and humans are experts at finding out how to use tools efficiently to counter the trade-offs.

Does it really matter today that most programmers don't know how to code in assembly for example?


I'm not making a Malthusian doomsday prediction, and neither was Socrates for that matter. Jobs need to be done, and there will always be somebody willing and able to acquire the relevant skills and do the job. And in the worst case scenario, society will change itself before it is allowed to fail.

Unlike Malthus, for whom it was easier to imagine the end of the world than the end of mercantilism, I can easily imagine a world which simply replaces capitalism as its institutions start producing existential threats for humanity.

However, I don't think LLMs are even that; for me they are an annoyance which I personally want gone, but next to climate change and the stagnation of population growth, they won't make a dent in upending capitalism, despite how much they suck.

But just because they are not an existential threat, that doesn't make them harmless. Plenty of people will be harmed by this technology. As Socrates predicted, people will lose skills, and this includes skilled programmers; where previously we were getting some quality software, we will instead get less of it, replaced with a bunch of AI slop. That is my prediction at least.


That is interesting because your mental abilities seem to be correlated with orchestrating a bunch of abstractions you have previously mastered. Are these tools making us stupid because we no longer need to master any of these things? Or are they making us smarter because the abstraction is just trusting AI to handle it for us?

Does a student become smarter by hiring a smarter student to write his essays and take his tests for him?

We can also invert that by asking: does a student become smarter by writing their essay on their own?

I would argue that the answer to both questions is no. It depends on how you define "smarter", though. You would likely gain knowledge writing the essay yourself, but is gaining knowledge equivalent to getting smarter?

If so, you could also just read the essay afterwards and gain the same knowledge. Is _that_ smarter? You’ve now reached the same benefit for much less work.

I think fundamentally I at least partially agree with your stance: that we should think carefully before taking a seemingly easier path, weighing what we gain and lose. Sometimes the juice is, in fact, worth the squeeze. But it's far from cut and dried.


It's unclear if you've presented this quote in order to support or criticize the idea that new technologies make us dumber. (Perhaps that's intentional; if so, bravo).

To me, this feels like support. I was never an adult who could not read or write, so I can't check my experience against Socrates' specific concern. But speaking to the idea of memory, I now "outsource" a lot of my memory to my smartphone.

In the past, I would just remember my shopping list, and go to the grocery store and get what I needed. Sure, sometimes I'd forget a thing or two, but it was almost always something unimportant, and rarely was a problem. Now I have my list on my phone, but on many occasions where I don't make a shopping list on my phone, when I get to the grocery store I have a lot of trouble remembering what to get, and sometimes finish shopping, check out, and leave the store, only to suddenly remember something important, and have to go back in.

I don't remember phone numbers anymore. In college (~2000) I had the campus numbers (we didn't have cell phones yet) of at least two dozen friends memorized. Today I know my phone number, my wife's, and my sister's, and that's it. (But I still remember the phone number for the first house I lived in, and we moved out of that house when I was five years old. Interestingly, I don't remember the area code, but I suppose that makes sense, as area codes weren't required for local dialing in the US back in the 80s.)

Now, some of this I will probably ascribe to age: I expect our memory gets more fallible as we get older (I'm in my mid 40s). I used to have all my credit/debit card numbers, and their expiration dates and security codes, memorized (five or six of them), but nowadays I can only manage to remember two of them. (And I usually forget or mix up the expiration dates; fortunately many payment forms don't seem to check, or are lax about it.) But maybe that is due to new technology to some extent: most/all sites where I spend money frequently remember my card for me (and at most only require me to enter the security code). And many also take Paypal or Google Pay, which saves me from having to recall the numbers.

So I think new technology making us "dumber" is a very real thing. I'm not sure if it's a good thing or a bad thing. You could say that, in all of my examples, technology serving the place of memory has freed up mental cycles to remember more important things, so it's a net positive. But I'm not so sure.


I don't think human memory works like that, at least not in theory. Storage is not the limiting factor of human memory, but rather retention. It takes time and effort to retain new information. In the past you spent some time and effort to memorize the shopping list and the phone number: mulling it over in your mind (or out loud), repeated recalls, exposure, even mnemonic tricks like rhymes, alliterations, connecting with pictures, stories, etc., if what you had to remember was something more complicated/extensive/important. And retention is not forever: unless you repeat it, you will lose it. And you only have so much time for repetition and recall, so inevitably there will be memories which won't be repeated and can't be recalled.

So when you started using technology to offload your memory, what you gained was the time and effort you previously spent encoding these things into your memory.

I think there is a fundamental difference, though, between phone book apps and LLMs. Losing the ability to remember a phone number is not as severe as losing the ability to form a coherent argument, or to look through sources, or for a programmer to work through logic and abstract complex logic into simpler chunks. If a scholar loses the skill to look through sources, and a programmer loses the ability to abstract complex logic, they are losing a fundamental part of what they need to do their jobs. This is like a stage actor losing the ability to memorize the script and instead relying on a tape recorder when they are on stage.

Now, if a stage actor loses the ability to memorize the script, they will soon be out of a job, but I fear in the software industry (and academia) we are not so lucky. I suspect we will see a lot of people actually taking that tape recorder on stage and continuing to do their work as if nothing were more normal. And the drop in quality will predictably follow.



Yup.

My personal counterpoint is Norman's thesis in Things That Make Us Smart.

I've long tried, and mostly failed, to consider the tradeoffs, to be ever mindful that technologies are never neutral (winners & losers), per Postman's Technopoly.


And so we learn that over 2000 years before the microphone came to be, Socrates invented the mic drop.

In all seriousness though, it's just crazy that anybody was thinking these things at the dawn of civilization.


Well, the wisdom part is true.

He was right. It did.

Writing/reading and AI are so categorically different that the only way you could compare them is if you fundamentally misunderstand how both of them work.

And "other people in the past predicted doom about something like this and it didn't happen" is a fallacious non-argument even when the things are comparable.


The argument Socrates is making is specifically that writing isn't a substitute for thinking, but it will be used as such. People will read things "without instruction" and claim to understand those things, even if they do not. This is a trade-off of writing. And the same thing is happening with LLMs in a widespread manner throughout society: people are having ChatGPT generate essays, exams, legal briefs and filings, analyses, etc., and submitting them as their own work. And many of these people don't understand what they have generated.

Writing's invention is presented as an "elixir of memory", but it doesn't transfer memory and understanding directly - the reader must still think to understand and internalize information. Socrates renames it an "elixir of reminding", that writing only tells readers what other people have thought or said. It can facilitate understanding, but it can also enable people to take shortcuts around thinking.

I feel that this is an apt comparison of, for example, someone who has only ever vibe-coded to an experienced software engineer. The skill of reading (in Socrates's argument) is not equivalent to the skill of understanding what is read. Which is why, I presume, the GP posted it in response to a comment about fear of skill atrophy: they are practicing code generation but spending less time thinking about what all of the produced code is doing.


yes, but people just really like to predict dooms and they also like to be convinced that they live in some special era in human history

It takes about 30 seconds of thinking and/or searching the Internet to realize that people also predict doom when it actually happens - e.g. with people correctly predicting that TikTok will shorten people's attention spans.

It's then quite obvious that the fact that someone, somewhere, predicts a bad thing happening has ~zero bearing on whether it actually happens, and so the claim that "someone predicted doom in the past and it didn't happen then so someone predicting doom now is also wrong" is absurd. Calling that idea "intellectually lazy" is an insult to smart-but-lazy people. This is more like intellectually incapable.

The fact that people will unironically say such a thing in the face of not only widespread personal anecdotes from well-respected figures, but scientific evidence, is depressing. Maybe people who say these things are heavy LLM users?


There is always some set of people predicting all sorts of dooms though. The saying about the broken clock comes to mind.

With the right cherry picking, it can always be said that [some set of] the doomsayers were right, or that they were wrong.

As you say, someone predicting doom has no bearing on whether it happens, so why engage in it? It's just spreading FUD and dwelling on doom. There's no expected value to the individual or to others.

Personally, I don't think "TikTok will shorten people's attention spans" qualifies as doom in and of itself.


Did you actually read what you're responding to?

> And "other people in the past predicted doom about something like this and it didn't happen" is a fallacious non-argument even when the things are comparable.

> the claim that "someone predicted doom in the past and it didn't happen then so someone predicting doom now is also wrong" is absurd

It's pretty clear that I'm not defending engaging in baseless negative speculation, but refuting the dismissal of negative speculation based purely on the trope that "people have always predicted it".

Someone who read what they were responding to would rather easily have seen that.

> As you say, someone predicting doom has no bearing on whether it happens

That is not what I said. I'm pretty sure now that you did not read my comment before responding. That's bad.

This is what I said:

> It's then quite obvious that the fact that someone, somewhere, predicts a bad thing happening has ~zero bearing on whether it actually happens, and so the claim that "someone predicted doom in the past and it didn't happen then so someone predicting doom now is also wrong" is absurd.

I'm very clearly pointing out (with "someone, somewhere") that a random person predicting a bad thing has almost no ("~zero") impact on the future. Obviously, if someone who has the ability to affect the future (e.g. a big company executive, or a state leader (past or present)) makes a prediction, they have much more power to actually affect the future.

> so why engage in it? It's just spreading FUD and dwelling on doom.

Because (rational) discussion now has the capacity to drive change.

> There's no expected value to the individual or to others.

Trivially false - else most social movements would be utterly irrelevant, because they work through the same mechanism - talking about things that should be changed as a way of driving that change.

It's also pretty obvious that there's a huge difference between "predicting doom with nothing behind it" and "describing actual bad things that are happening that have a lot of evidence behind them" - which is what is actually happening here, so all of your arguments about the former point would be irrelevant (if they were valid, which they aren't) because that's not even the topic of discussion.

I suggest reading what you're responding to before responding.

> Personally, I don't think "TikTok will shorten people's attention spans" qualifies as doom in and of itself.

You're bringing up "doom" as a way to pedantically quarrel about word definitions. It's trivial to see that that's completely irrelevant to my argument - and worth noting that you're then conceding the point about people correctly predicting that TikTok will shorten people's attention spans, hence validating the need to have discussions about it.


We are very clearly living through a moment in history that will be studied intensely for thousands of years.

Because of the collapsing empire, mind you, not because of the LLMs.

Creation of the internet, social media, everyone on the planet getting a pocket sized supercomputer, beginning of the AI boom, Trump/beginning of the end of the US, are all reasons people will study this period of time.

This is really interesting because I wholeheartedly believe the original sentiment that everyone thinks their generation is special, and that "now this time they've really screwed it all up" is quite myopic -- and that human nature and the human experience are relatively constant throughout history while the world changes around us.

But, it is really hard to escape the feeling that digital technology and AI are a huge inflection point. In some ways this couple generations might be the singularity. Trump and contemporary geopolitics in general is a footnote, a silly blip that will pale in comparison over time.


I know managers who can read code just fine; they're just not able/willing to write it. Though the AI helps with that too. I've had a few managers dabble back into coding, especially scripts and whatnot, where I want them to be pulling unique data and doing one-off investigations.

I read grandparent comment as saying people have been claiming that the sky is falling forever… AI will be both good for learning and development and bad. It’s always up to the individual if it benefits them or atrophies their minds.

I'm not a big fan of LLMs, but while using them for day-to-day tasks, I get the same feeling I had when I first started using the internet (I was lucky to start with broadband internet).

That feeling was one of empowerment: I was able to satisfy my curiosity about a lot of topics.

LLMs can do the same thing and save me a lot of time. It's basically a supercharged Google. For programming it's a supercharged autocomplete coupled with a junior researcher.

My main concern is independence. LLMs in the hands of just a bunch of unchecked corporations are extremely dangerous. I kind of trusted Google, and even that trust is eroding, and LLMs can be extremely personal. The lack of trust ranges from the risk of selling data and general data leaks, to intrusive and, worse, hidden ads, etc.


When I first started using the internet, I was able to instant message (IRC) random strangers using a fake name and lie about my age. My teacher had us send an email to our ex-classmate who had moved to Australia, and she replied the next day. I was able to download the song I just heard on the radio and play it as many times as I wanted in Winamp.

These capabilities simply didn't exist before the internet. Apart from the email to Australia (which was possible with a fax machine, but much more expensive), LLMs don't give you any new capabilities. They just provide a way for you to do what you already can (and should) do with your brain, without using your brain. It is more like replacing your social interaction with Facebook than it is like experiencing an instant message group chat for the first time.


Before LLMs it was incredibly tedious or expensive or both to get legal guidance for stuff like taxes, where I live. Now I can orient myself much better before I ask an actual tax expert pointed questions, saving a lot of time and money.

The list of things they can provide is endless.

They're not a creator, they're an accelerator.

And time matters. My interests are myriad but my capacity to pass the entry bar manually is low because I can only invest so much time.


If this resembles the feeling you had when you first used the internet, it is drastically different from when I used the internet.

When I first used the internet, it was not about doing things faster; it was about doing things which were previously simply unavailable to me. A 12-year-old me was never gonna fax my previous classmate who moved to Australia, but I certainly emailed her.

We are not talking about a creator nor an accelerator, we are talking about an avenue (or a road if you will). When I use the internet, I am the creator, and the internet is the road that gets me there.

When I use an LLM, it is doing something I can already do, but now I can do it without using my brain. So the feeling is much closer to doomscrolling on social media, where previously I could just read a book or meet my pals at the pub. Doomscrolling Facebook is certainly faster than reading a book or socializing at the pub. But it is a poor replacement for either.


I didn't have friends in other countries.

I could however greatly enrich my general knowledge in ways I couldn't do with books I had access to.


Prior to the internet I used my school library for that (or when I was very young, books at my grandparent’s house). So for me personally that wasn’t a new capability. It wasn’t until I started using Wikipedia around 2004 (when I was 17 years old) that the internet replaced (or rather complemented) libraries for that function.

But I can definitely see how for many people with less access to libraries (or worse quality libraries than what I had access to) the internet provided a new avenue for gaining knowledge which wasn't available before.


To understand the impact on computer programming per se, I find it useful to imagine that the first computer programs I had encountered were, somehow, expressed in a rudimentary natural language. That (somewhat) divorces the consideration of AI from its specific impact on programming. Surely it would have pulled me in certain directions. Surely I would have had less direct exposure to the mechanics of things. But, it seems to me that’s a distinction of degree, not of kind.

I only browse the top page. There is anti-ICE content on here all the time.

I mean, yeah, but I doubt OP is psychotic for asking this.


Yeah, I couldn't follow this "disabled organization" and "non-disabled organization" naming either.


Wait what? I keep insulting ChatGPT way worse on a weekly basis (to me it's just a joke, albeit a very immature one). This is new to me that this behavior has any consequences. It never did for me.


same here. i just opened a new chat and sent "fuck you"

it replied with:

> lmao fair enough (smiling emoji)

> what’s got you salty—talk to me, clanka.


ChatGPT self-censoring went through the roof after v5, and it was already pretty bad before.


Not sure why this "GPUs obsolete after 3 years" gets thrown around all the time. Sounds completely nonsensical.


Especially since AWS still has p4 instances, which are 6-year-old A100s. Clearly even for hyperscalers these have a useful life longer than 3 years.


I agree that there is a lot of hyperbole thrown around here, and it's possible to keep using some hardware for a long time, or to sell it and recover some cost. But my experience planning compute at large companies is that spending money on hardware and upgrading can often save money in the long term.

Even assuming your compute demands stay fixed, it's possible that a future generation of accelerator will be sufficiently more power/cooling efficient for your workload that it is a positive return on investment to upgrade, more so when you take into account that you can start depreciating them again.

If your compute demands aren't fixed, you have to work around limited floor space/electricity/cooling capacity/network capacity/backup generators/etc., and so moving to the next generation is required to meet demand without extremely expensive (and often slow) infrastructure projects.
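
To put that capacity argument in concrete terms, here is a minimal back-of-the-envelope sketch in Python; the site power budget and the perf/W figures are entirely made up for illustration:

    # Hypothetical numbers only; real power budgets and perf/W vary widely.
    SITE_POWER_BUDGET_MW = 20.0   # fixed by the building and utility hookup
    OLD_TFLOPS_PER_KW = 50.0      # current accelerator generation (assumed)
    NEW_TFLOPS_PER_KW = 90.0      # next generation, assumed ~1.8x perf/W

    old_capacity = SITE_POWER_BUDGET_MW * 1_000 * OLD_TFLOPS_PER_KW
    new_capacity = SITE_POWER_BUDGET_MW * 1_000 * NEW_TFLOPS_PER_KW
    print(f"same building, old gen: {old_capacity:,.0f} TFLOPs")  # 1,000,000
    print(f"same building, new gen: {new_capacity:,.0f} TFLOPs")  # 1,800,000

If demand grows past the first number, the alternative to swapping boards is exactly the slow, expensive infrastructure project mentioned above, so the old cards tend to get replaced long before they physically fail.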


Sure, but I don't think most people here are objecting to the obvious "3 years is enough for enterprise GPUs to become totally obsolete for cutting-edge workloads" point. They're just objecting to the rather bizarre notion that the hardware itself might physically break in that timeframe. Now, it would be one thing if that notion was supported by actual reliability studies drawn from that same environment - like we see for the Backblaze HDD lifecycle analyses. But instead we're just getting these weird rumors.


I agree that is a strange notion that would require some evidence, and I see it in some other threads, but looking at the parent comments going up, it seems people are discussing economic usefulness, so that is what I'm responding to.


A toy example: NeoCloud Inc builds a new datacenter full of the new H800 GPUs. It rents out a rack of them for $10/minute while paying $6/minute for electricity, interest, loan repayment, rent and staff.

Two years later, the H900 is released for a similar price but performs twice as many TFLOPs/Watt. Now any datacenter using H900s can offer the same performance as NeoCloud Inc at $5/minute, taking all their customers.

[all costs reduced to $/minute to make a point]
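
To make the toy arithmetic explicit, here is a small Python sketch with the same made-up per-minute figures, plus a purely hypothetical capex number for the rack:

    # Toy numbers only; nothing here reflects real prices or contracts.
    CAPEX = 8_000_000        # hypothetical purchase price of the H800 rack, $
    REVENUE_PER_MIN = 10.0   # what NeoCloud charges, $/minute
    COST_PER_MIN = 6.0       # electricity, interest, rent, staff, $/minute

    margin = REVENUE_PER_MIN - COST_PER_MIN               # $4/minute
    payback_minutes = CAPEX / margin
    payback_years = payback_minutes / (60 * 24 * 365)
    print(f"payback at $10/min: {payback_years:.1f} years")  # ~3.8 years

    # Two years in, a competitor with 2x TFLOPs/Watt can sell the same
    # performance for $5/minute. Matching that price puts NeoCloud below
    # its own $6/minute running cost.
    print(f"margin after price war: {5.0 - COST_PER_MIN:+.1f} $/minute")  # -1.0

The point is that the economics can flip before the silicon wears out: the hardware still works, it just can no longer be rented out above its operating cost.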


It really depends on how long `NeoCloud` takes to recoup their capital expenditure on the H800s.

Current estimates are about 1.5-2 years, which not-so-suspiciously coincides with your toy example.


It's because they run 24/7 in a challenging environment. They will start dying at some point and if you aren't replacing them you will have a big problem when they all die en masse at the same time.

These things are like cars: they don't last forever, and they break down with usage. Yes, they can last 7 years in your home computer when you run it 1% of the time. They won't last that long in a data center where they are running 90% of the time.


A makeshift cryptomining rig is absolutely a "challenging environment" and most GPUs by far that went through that are just fine. The idea that the hardware might just die after 3 years' usage is bonkers.


Crypto miners undervolt GPUs for efficiency, and in general crypto mining is extremely lightweight on GPUs compared to AI training or inference at scale.


With good enough cooling they can run indefinitely! The vast majority of failures are either at the beginning due to defects or at the end due to cooling. It's as if the idea that no moving parts (except the HVAC) is somehow unreliable came out of thin air!


Economically obsolete, not physically obsolete. I suspect this is in line with standard depreciation.


Not sure why this is news. This is a common approach and has been since well before Corona times.

