
I think removing pointless cognitive load makes sense, but the point of an education is to learn how to think/reason. Maybe if we get AGI there's no point learning that either, but it is definitely not great if we get a whole generation who skip learning how to problem solve/think due to using LLMs.

IMO it's quite different than using a calculator or any other tool. It can currently completely replace the human in the loop, whereas with other tools they are generally just a step in the process.



> IMO it's quite different than using a calculator or any other tool. It can currently completely replace the human in the loop, whereas with other tools they are generally just a step in the process.

The (as yet unproven) argument for the use of AIs is that using AI to solve simpler problems allows us humans to focus on the big picture, in the same way that letting a calculator solve arithmetic gives us flexibility to understand the math behind the arithmetic.

No one knows if that's true. We're running a grand experiment: the next generation will either surpass us in grand fashion using tools that we couldn't imagine, or will collapse into a puddle of ignorant consumerism, a la Wall-E.


> The (as yet unproven) argument for the use of AIs is that using AI to solve simpler problems allows us humans to focus on the big picture, in the same way that letting a calculator solve arithmetic gives us flexibility to understand the math behind the arithmetic.

And I can tell you from experience that "letting a calculator solve arithmetic" (or more accurately, being dependent on a calculator to solve arithmetic) means you cripple your ability to learn and understand more advanced stuff. At best your decision turned you into the equivalent of a computer trying to run a 1GB binary with 8MB of RAM and a lot of paging.

> No one knows if that's true. We're running a grand experiment: the next generation will either surpass us in grand fashion using tools that we couldn't imagine, or will collapse into a puddle of ignorant consumerism, a la Wall-E.

It's the latter. Though I suspect the masses will be shoved into the garbage disposal rather than be allowed to wallow in ignorant consumerism. Only the elite that owns the means of production will be allowed to indulge.


There are opposing trends here. First, as with many tools, the capable individual can be made much more effective (e.g. 2x -> 10x), which simply replaces some workers; the last time that happened was during the Great Depression. Second, the tools become commoditized to the point where they are readily available from many suppliers at reasonable cost, which happened with calculators, word processors, and office automation. This, along with a growing population, global trade, and rising demand, led to the 80s-2k boom.

If the product is not commoditized, then capital will absorb all the increased labor efficiency, while labor (and consumption) are sacrificed on the altar of profits.

I suspect your assumption is more likely. Voltaire's critique of 'the best of all possible worlds', and of man's place in creating meaning and happiness, offers more than one option.


I know how to do arithmetic, but I still use my PC or a calculator because I am not entirely sure that I am accurate. I also use "units" extensively; it can be used for much more than just unit conversion. You can do complex calculations with it.

You can solve stuff like:

> If you walk 1 mile in 7 minutes, how fast are you walking in kilometers per hour?

  $ units -t "1 mile / 7 minutes" "kilometers per hour"
  13.7943771428571
You need some basic knowledge to even come up with "1 mile / 7 minutes" and "kilometers per hour".
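
For comparison, here is a rough sketch of the same arithmetic done by hand in Python (using the standard definition 1 mile = 1.609344 km), just to show what the one-liner is hiding:

  miles_to_km = 1.609344           # the mile is defined as exactly 1.609344 km
  distance_km = 1 * miles_to_km    # distance covered: 1 mile
  time_hours = 7 / 60              # 7 minutes expressed in hours
  print(distance_km / time_hours)  # ~13.794 km/h, matching the units output above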

There are examples where you need much more advanced knowledge, too, meaning it is not enough to just have a calculator. For example, in thermodynamics, when dealing with gas laws, you cannot simply convert pressure, volume, and temperature from one unit to another without taking into account the specific context of the law you're applying (e.g., the ideal gas law or real gas behavior). Or say you want to convert 1 kilowatt-hour (kWh) to watts (W): this is a case of energy (in kilowatt-hours) over time (in hours), and we need to convert it to power (in watts), which is energy per unit time.

You cannot do:

  $ units -t "1 kWh" "W"
  conformability error
  3600000 kg m^2 / s^2
  1 kg m^2 / s^3
You have to have some knowledge, so you could do:

  $ units -t "1 kWh" "J"
  1 kWh = 3600000 J
  $ units -t "3600000 joules / 3600 seconds" "W"
  3600000 joules / 3600 seconds = 1000 W
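The same two steps written out by hand, as a rough Python sketch (the point being that you, not the tool, have to supply the 3600-second time window):

  energy_joules = 1 * 3_600_000           # 1 kWh = 3,600,000 J
  elapsed_seconds = 3600                  # the hour over which the energy was delivered
  print(energy_joules / elapsed_seconds)  # 1000.0 W, same as the units result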
To sum it up: in many cases, without the right knowledge, even the most accurate tool will only get you part of the way there.

It applies to LLMs and programming, too; thus, I am not worried. We will still have copy-paste "programmers" and actually knowledgeable ones, as we have always had. The difference is that you can use LLMs to learn, quite a lot, but you cannot use a calculator alone to learn how to convert 1 kWh to W.


>the next generation will either surpass us in grand fashion using tools that we couldn't imagine, or will collapse into a puddle of ignorant consumerism, a la Wall-E

Seeing how the world is based around consumerism, this future seems more likely.

HOWEVER, we can still course correct. We need to organize, and get the hell off social media and the internet.


> HOWEVER, we can still course correct. We need to organize, and get the hell off social media and the internet.

Given what I know of human nature, this seems improbable.


I think it's possible. I think the greatest trick our current societal structure ever managed to pull is the proliferation of the belief that any alternatives are impossible: "capitalist realism".

People who organize tend to be the people who are most optimistic about change. This is for a reason.


It may be possible for you (I am assuming you are > 20, a mature adult). But the context is around teens in the prime of their learning. It is too hard to keep ChatGPT/Claude away from them. Social media is too addictive. Those TikTok/Reels/Shorts are addictive and never-ending. We are doomed imho.

If education (schools) were to adopt a teaching AI (one that will give them the solution, but at least asks a bunch of questions first), maybe there is some hope.


>We are doomed imho.

I encourage you to take action to prove to yourself that real change is possible.

What you can do in your own life to enact change is hard to say, given I know nothing about your situation. But say you are a parent: you have control over how often your children use their phones, whether they are on social media, and whether they are using ChatGPT to get around doing their homework. How we raise the next generation of children will play an important role in how prepared they are to deal with the consequences of the actions we're currently taking.

As a worker you can try to organize to form a union. At the very least you can join an organization like the Democratic Socialists of America. Your ability to organize is your greatest strength.


So your plan is to encourage people to "get off the Internet" by posting on the Internet, and to stave off automation by encouraging workers to gang up on their employers and make themselves a collective critical point of failure.

Well, you know, we'd all love to change the world...


Apparently you'd love to change the world; a good start would be accurately reading and recounting others' arguments.


Agreed, that's important. What'd I get wrong?


>Well, you know, we'd all love to change the world

The social contract lives and dies by what the populace is willing to accept. If you push people into a corner by threatening their quality of life, don't be surprised if they push back.


Exactly, so don't be surprised if you receive some pushback as well. AI may threaten you, but it empowers me.


> No one knows if that's true. We're running a grand experiment: the next generation will either surpass us in grand fashion using tools that we couldn't imagine, or will collapse into a puddle of ignorant consumerism, a la Wall-E.

I believe there is some truth to it. When you automate away some time-consuming tasks, your time and focus shift elsewhere. For example, washing clothes is no longer a major concern since the mass adoption of washing machines. Software engineering also progressively shifted its attention to higher-level concerns, and went from a point where writing/editing opcodes was the norm to a point where you can design and deploy a globally-available distributed system faster than what it once took to build a single program.

Focusing on the positive traits of AI, having a way to follow the Socratic method with a tireless sparring partner that has encyclopedic knowledge of everything and anything is truly brilliant. The bulk of the people in this thread should be disproportionately inclined to be self-motivated and self-taught in multiple domains, and having this sort of feature available makes worlds of difference.


> The bulk of the people in this thread should be disproportionately inclined to be self-motivated and self-taught in multiple domains, and having this sort of feature available makes worlds of difference

I agree that AI could be an enormous educational aid to those who want to learn. The problem is that if any human task can be performed by a computer, there is very little incentive to learn anything. I imagine that a minority of people will learn stuff as a hobby, much in the way that people today write poetry or develop film for fun; but without an economic incentive to learn a skill or trade, having a personal Socratic teacher will be a benefit lost on the majority of people.


I think it's both, just like we saw with the internet.


> the point of an education is to learn how to think/reason. Maybe if we get AGI there's no point learning that either

This is the existential crisis that appears imminent. What does it mean if humanity, at large, begins to offload thinking (hence decision making), to machines?

Up until now we’ve had tools. We’ve never before been able to ask a tool, “what’s the right way to do X?” Offloading reasoning to machines is a terrifying concept.


> I think removing pointless cognitive load makes sense, but the point of an education is to learn how to think/reason. Maybe if we get AGI there's no point learning that either, but it is definitely not great if we get a whole generation who skip learning how to problem solve/think due to using LLMs.

There's also the problem of developing critical thinking skills. It's not very comforting to think of a time when your average Joe relies on an AI service to tell him what he should think and believe, when that AI service is run, trained, and managed by people pushing radical ideologies.


I think the latest GenAI/LLM bubble shows that tech (this hype kind of tech) doesn't want us to learn, to think, or to reason. It doesn't want to be seen as a mere tool anymore; it wants to drive, under the appearance that it can reason on its own. We're in a process where tech just wants us to adapt to it.



