
Could you help me understand what it means to "pump entropy out of a system"?

I asked ChatGPT and it claims "It is generally not possible to "pump" entropy out of a system in the same way that it can be added to a system. This is because the second law of thermodynamics states that the total entropy of a closed system will always tend to increase over time."




ChatGPT is wrong here, because in order to "pump" entropy out of a system, that system must not be closed! So when you ask it, you must insist the system isn't closed.

And, specifically, living organisms can't be closed systems, because their existence depends on spending energy to decrease their internal entropy, increasing the entropy of the external environment in the process, in order to keep their own entropy low enough to keep functioning without the random breakage we would expect from an aging system.

And we all know that entropy always wins in the end, and we always break if we are old enough, so this is a sisyphean task; but we still do it all day every day, because we are afraid to die.

Anyway, https://en.wikipedia.org/wiki/Entropy_and_life


"Pumping entropy out of a system" to me means keeping the system more orderly. To drive such processes, you have to increase entropy elsewhere, which is also known as using energy.


As usual ChatGPT is only half right. The sun is not a closed system.


Why is asking ChatGPT becoming so common? Are people experimenting with AI or actually expecting google-like quality of information?


Beats me. It was pretty obvious to me early on, when asking about any field I know well, that it had no understanding and would happily blurt out a wrong but plausible-sounding answer. So I haven't even tried to ask it about stuff I don't understand. Because how would I even tell if the answer made sense? Seems like an easy way to get the completely wrong idea.


People love DeepL translations for the same reason, that they sound convincing, even though they are often completely wrong. Even before that people were (and still are) trusting the infocards that Google puts in search results based on arbitrary snippets it extracted from the webpage, because those snippets are presented as authoritative even though they're often out of context or completely wrong.

People are used to AI output being clunky, unfocused, ungrammatical text, à la Markov chain bots from the 2000s. So conversely this kind of verbose, coherent, well-written text appears to be knowledgeable and correct.

I can only hope that deepfakes and such become popular enough that people learn to be less trusting of what they find on the internet.


> People love DeepL translations for the same reason, that they sound convincing, even though they are often completely wrong.

Could you show examples?

I'm using DeepL to translate things from and to languages that I know very well, usually to double-check or get additional inspiration for wording. I've never experienced anything that was completely wrong. The translations are almost perfect most of the time.

But maybe it's a question of language pairs.


The examples I know of all involve Japanese light novels translated to English. For example https://twitter.com/Xythar/status/1405658562378952705 (The tweet author is someone I know, not me.) Other cases are fan TLs of novels that I've read which were done through DeepL, so I can't link them.


It probably works quite well if the field you are asking about has a high ratio of 'plausible sounding' to 'logic follows' language.


Which fields would that be? I can't think of any field that, once dug into a bit, doesn't reveal that lots of plausible sounding ideas are false.


If you define advertisements as quality information, Google definitely wins out. I wouldn't use either to learn something unless you're interested in buying what their advertisers and their SEO results are selling.


To some degree, I'm probably at least curious about what SEOs are selling. If it's art, it's probably popular and I'll want to know what it is, and if it's tech, people with money for SEO are often also the ones with money for innovation.

I don't usually buy stuff in ads (actually, I rarely buy anything nonessential without thinking about it for a week), but Google's results are usually interesting, if a bit creepy and echo-chambery.

I don't just use Google to learn things and accept the first results, but I do use it as a starting point.


That's how I use ChatGPT to learn. It's rarely precisely correct, but its answers are often basically right or direct me to a lot of concepts to investigate further. For instance, I want to build a TEC-based CO2 freezing chamber but need a low-temperature TEC. I asked it what TEC materials operate at low temperature. It gave me 10 different TEC semiconductors with operating temperatures, which were sometimes right. I then used Kagi (seriously, google?) to find literature. I use the two together: one to get a rough outline of the domain explained in simple, concise language, and the other to retrieve the actual information.
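To give a flavor of the kind of rough outline I mean, here's a minimal sketch of the textbook single-stage TEC heat balance; the coefficient values are placeholders I made up, not real device data.

    # Steady-state cooling power of one TEC stage (textbook model, placeholder coefficients).
    def tec_cooling_power(I, T_c, T_h, seebeck=0.05, resistance=2.0, conductance=0.5):
        """Heat pumped from the cold side in watts, for current I (A) and temperatures in kelvin."""
        peltier = seebeck * I * T_c           # Peltier cooling at the cold junction
        joule = 0.5 * I**2 * resistance       # roughly half the Joule heating lands on the cold side
        backflow = conductance * (T_h - T_c)  # heat leaking back across the module
        return peltier - joule - backflow

    # Cold side at the CO2 frost point (~195 K), hot side at 300 K:
    print(tec_cooling_power(I=3.0, T_c=195.0, T_h=300.0))  # comes out negative for one stage

A negative result at that temperature difference is exactly why an ordinary single stage won't reach CO2-freezing temperatures, and why the low-temperature material properties matter.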

IR systems lack the bootstrapping phase: unless you know precisely what you want to learn, you'll end up in SEO hell, sifting through loosely related garbage produced by algorithms to include as many relevant terms as possible but zero insightful knowledge. I think for learning, ChatGPT is great for bootstrapping; then I turn to IR to refine and get accurate information.


"Half right" means wrong.

To be honest, I've seen only wrong answers from ChatGPT until now.

It can't "think logically" so almost everything is flawed when you look a little bit deeper.

But a lot of people are even much dumber than this bot. So I quite fear the consequences of its existence.


I disagree; I think it's a case of the question being asked imprecisely. When you ask the right question, ChatGPT gives you a very insightful and accurate answer.

Human :

By what means might entropy be pumped out of one part of a system and into another part?

ChatGPT:

There are several ways that entropy can be transferred from one part of a system to another:

    Heat transfer: Entropy can be transferred from a hotter body to a cooler body through heat conduction, convection, or radiation.

    Work: Entropy can be transferred from a system to its surroundings when work is done on the system.

    Mass transfer: Entropy can be transferred from a system to its surroundings through the transfer of matter.

    Information transfer: Entropy can be transferred from a system to its surroundings through the transfer of information.

    Chemical reactions: Entropy can be transferred from a system to its surroundings through chemical reactions, either through the production of heat or through the transfer of matter.

    Electrical currents: Entropy can be transferred from a system to its surroundings through electrical currents.

    Magnetic fields: Entropy can be transferred from a system to its surroundings through the generation of magnetic fields.

    Gravitational fields: Entropy can be transferred from a system to its surroundings through the generation of gravitational fields.


If the person knew how to ask the question "correctly", they wouldn't need to ask.

They asked about systems, not closed systems. ChatGPT just regurgitated the 2nd law, and its answer is just plain wrong no matter how you slice it. Any physicist asked this question would explain the distinction.


They gave you all the information you needed to ask the next question. It specified “in a closed system,” which means it made the closed-system assumption and told you so. An observant reader would then ask, “and what about an open system?” Or, if they're not that sophisticated, “what about a non-closed system?”

You've not talked to many theoretical physicists, I see. In my experience they're not usually that good at explaining things, especially to lay people. They tend to be very good at talking with people at the same level or more advanced, usually in mathematical form. They tend to be pretty bad at imitating a human.


Yeah, if you understand the field or are observant enough, you can tell the answer is fishy. And if you don't, you can't tell.

So what, anyone who gets a wrong idea from ChatGPT is just unsophisticated and we should ignore it? Why are you so incredibly set on invalidating any criticism of ChatGPT?

You don't see a problem with advertising this LLM as something it isn't? Lots of people seem willing to take ChatGPT completely at face value now, and walk away having learned a bunch of nonsense. And lots of them are smart people, they've just been duped by the hype into thinking LLMs can do things they fundamentally can't.


Oh, I think it should be improved for sure. I just think this is a bad example. I think most of the fact-checking can be done with any modern information retrieval system, and you can build algorithms that regenerate answers until they're factually correct, or use the IR to nudge the answer toward correctness. We also have very powerful semantic inference engines and other tools that complement LLM output. I think judging the possibilities by the beta is simplistic, and folks are unfairly down on the achievement by picking nits.


In the context of this question, system has a specific technical meaning though. This is a very technical question so chatGPT might be forgiven for assuming the technical meaning.


So what is ChatGPT supposed to be useful for if people need to understand what the answer is in order to ask the "correct" question? Clearly the person didn't pick up on this closed/non-closed distinction which is why they ended up asking on a forum instead of reformulating.

I think GPT just saw "entropy" and "system" and predicted 2nd law. Which is the sort of low effort response you might get if you ask random non-experts on the internet.


> So what is ChatGPT supposed to be useful for if people need to understand what the answer is in order to ask the "correct" question?

Personally I think there's still a lot of value in mere rephrasing, recontextualizing, perspective shifting, and occasional insightful connection that ChatGPT can do even if it's all using information you already ostensibly know.

As is often said, one never learns a topic more deeply than when teaching it to others, and ChatGPT can serve as the ultimate "rubber duck" coworker for any subject. I'm not sure what we can do about users not applying critical thinking, or expecting the machine to do that part for them. There are plenty of non-AI sources of misinformation that can readily be taken uncritically too, though, so it's not necessarily a new problem.


No, rubber ducks are still the ultimate rubber ducks, because they don't talk back with industrial grade overconfident bullshit that misleads and confuses you.


Is there a body of evidence that suggests people get more misled and confused after using things like ChatGPT? It seems like a reasonable hypothesis, but my own experience doesn't necessarily support it. I've used the language model at character.ai for a bit and have found it to be clarifying in a sense. When the model spits out some overconfident misinformation, it's a great opportunity to argue with the bot about it in ways one could never argue with another person - certainly not a stranger, at least.

Perhaps I've been confused and misled so badly I don't realize it, so all I can really say is I think it's premature to assume people will be any more misled or confused by technologies like ChatGPT when all they have to do now is get on the internet or flip on a TV to be personally targeted with misleading and confusing information already. I think there's very real potential for the technology to give people a lever against misinformation if it helps them understand and explore their own thoughts/thought processes.

I guess to me, fundamentally, it's a question of who's the one with agency over using it, and to what end. I'd be much more comfortable once we can fit models like this on home computers and worry less about them suddenly trying to sell us sponsored products or convince us of some ideology because their creator was paid to do so.


Write a python function to compute the idiotsecant.

Sure! Here is a function that computes the idiotsecant of a number x:

    def idiotsecant(x):
        return 1/x
This function returns the reciprocal of x, which is also known as the reciprocal function or the multiplicative inverse of x. The reciprocal of a number x is defined as 1 divided by x.

For example, if we call the function with an argument of 2, it will return 0.5, because 1/2 is equal to 0.5.

I hope this helps! Let me know if you have any questions.


Haha, I concede the point.


It's time to stop with the god damn ChatGPT spam in HN comments! https://www.youtube.com/watch?v=2k0SmqbBIpQ


Wouldn't the star itself be the closed system? Any lifeforms within it are free to reduce their own entropy so long as they increase entropy within the system.


A star, of course, is itself almost as far from a closed system as it is possible to be in our universe.

A bomb calorimeter in a chemistry class might be approximated as a closed system for the purposes of thermodynamics. Very few things in nature can be.


The star is constantly sending stuff outside (solar wind and light / photons)


It's a bit of a colloquialism, I think, but the idea is, all of your biological processes are constantly generating entropy. Your body is constantly tending toward equilibrium, as physical systems do, but you are essentially made out of non-equilibrium systems. You need the water to be here, but not there. Your nervous system is based on electrical gradients that, left to their own devices, will normalize. Every cell has an electric potential gradient it maintains (google "cell proton gradient"). All of these things and thousands more are constantly breaking down and require energy to maintain. Basically, imagine the difference between a recently dead body that has just reached ambient temperature and a living one. Characterizing all those differences would be more than the work of a lifetime.

A super abstract, but physically valid, way of expressing that is that your body is constantly "generating entropy" and it needs to consume energy to fight it. "Pumping out that entropy" is what the body is doing when it takes in energy and uses it to maintain all those gradients.
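For a rough sense of scale, a back-of-envelope sketch (the numbers are order-of-magnitude guesses, not measurements): a resting human dissipates something like 100 W of metabolic heat, and dumping that heat into cooler surroundings is the main channel through which the entropy gets exported.

    # Back-of-envelope entropy export rate of a resting human; rough numbers only.
    P = 100.0       # metabolic heat output, watts (order of magnitude)
    T_body = 310.0  # body temperature, kelvin
    T_room = 293.0  # surroundings temperature, kelvin

    entropy_carried_out = P / T_body  # W/K leaving with the heat at body temperature
    entropy_dumped = P / T_room       # W/K received by the cooler surroundings

    # The surroundings gain more entropy than the body sheds, so the body can stay ordered
    # as long as it keeps eating and keeps dumping heat.
    print(entropy_carried_out, entropy_dumped)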

Life in general may not use organic chemistry, or chemistry at all, but the advantage of this level of abstraction is that any life form (in this universe) will have to do something to "pump out the entropy". It is essentially by definition a deviation from the equilibrium state around it, and it will require energy to maintain.

The point I am making here is that there must be some ability to pump out the entropy faster than it is being generated, or pushed into the system. Otherwise those gradients and variations from equilibrium will be erased. Inside a sun, there is so much heat energy being pressed into the system that it is challenging to imagine how any conceivable structure could push it back out again.

This boils down to the observation that "Holy cow, the sun is, like, REALLY HOT!", but, you know, wrapped up in a different formalism that allows us to get past "But what if, like, there's something that could deal with that?". You see this online, the challenge that someone needs to prove that there's no way to build something that could live in the sun. If you don't have thermodynamics as a tool, this isn't even necessarily unreasonable. But thermodynamics gives us a principled way to turn around and say "Any such system would have to have this and that and the other property, and it's really hard to see how plasma and magnetic fields in such a violent environment could have that." It's so not-close that it's not really plausible.

ChatGPT may be referring to the general conceptual space/idea that you can't just build a machine to "reverse entropy" trivially. The most common example of this is that you can't just build an anti-microwave that remotely cools things by shooting radiation at them in some easy manner. It is sooooo much easier to add entropy than to remove it; that is definitely true. In fact, it's kinda a key element of my point here. But you can pump entropy out of a system, in some ways, with some machines, in some manners; since you are literally such a machine yourself, you are an existence proof of that.
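To put a number on why the easy anti-microwave doesn't exist, here's a minimal sketch of the Carnot limit: removing heat from a cold space and rejecting it to a warmer room always costs at least some work (the numbers below are just illustrative).

    # Least work needed to pump heat from a cold space to a warmer one (Carnot limit).
    def minimum_work(Q_cold, T_cold, T_hot):
        """Minimum work (J) to move Q_cold joules from T_cold to T_hot, temperatures in kelvin."""
        return Q_cold * (T_hot / T_cold - 1.0)

    # Pull 1000 J out of a freezer at 255 K, rejecting to a 295 K kitchen:
    print(minimum_work(1000.0, 255.0, 295.0))  # ~157 J of work, paid on top of the 1000 J moved

Real machines do considerably worse than this limit, which is why removing entropy is always the expensive direction.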


I think there might be fundamental tradeoffs at play. An information-processing system acting in a cold local environment will have the ability to do far more complex computations with more reliability (thus, in a way, be much more likely to exhibit complex internal behavior), but will also be limited in how much work it can perform on the outside, and thus in how much it can recognizably output from the computation. The opposite will be true at higher temperatures.
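One way to put a rough number on part of that tradeoff, assuming Landauer's bound is the relevant floor: erasing a bit of information dissipates at least k_B·T·ln 2 of energy, so the same computation is thermodynamically far cheaper in a cold environment.

    # Landauer's bound: minimum energy dissipated per bit erased, at two temperatures.
    import math

    k_B = 1.380649e-23  # Boltzmann constant, J/K

    def landauer_limit(T):
        """Minimum energy (J) to erase one bit at temperature T (kelvin)."""
        return k_B * T * math.log(2)

    print(landauer_limit(3.0))    # deep-space cold: ~2.9e-23 J per bit
    print(landauer_limit(300.0))  # room temperature: ~2.9e-21 J per bit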



I am well aware of that. It is not the anti-microwave that people commonly ask about.


I don’t understand. It’s remote cooling. It even uses electromagnetic radiation. What’s the gap?



