> Many data centres use water-based systems to cool the plant with towers evaporating the heat, like a huge perspiration system, which means that the water is lost.
Someone doesn't know what "evaporation" or "lost" means.
Total datacenter energy use is on the order of 1 PWh/yr. Total solar irradiance at the Earth's surface is on the order of 400,000 PWh/yr.
The direct heat contribution to global temperature is negligible.
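A back-of-the-envelope division of those two round figures (treating every joule of datacenter energy as waste heat, which if anything overstates it):

    \frac{1\ \text{PWh/yr}}{400{,}000\ \text{PWh/yr}} \;=\; 2.5\times10^{-6} \;\approx\; 0.00025\%

That is a few millionths of the solar input, which is what "negligible" means here.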
There is an argument that water vapour in the atmosphere is a greenhouse gas, but also an argument that clouds reflect solar energy, and that this water vapour is emitted at ground level. I don't think it's so obvious that water evaporation from datacenter cooling systems is directly either good or bad for global heating.
There certainly are negative environmental and social effects of this water usage in some places and implementations, and in some cases the warming potential could be serious, but the quoted claim is an over-reduction without further context.
I was walking down the road, and beside me was a young man with AirPods in. He was walking in a side lane intended for parking. Suddenly a Tesla was driving behind him at about the same speed, but less than 10 cm behind him, forcing him off the street. He didn't hear or see the car.
So how stupid must a driver be to drive that close behind a pedestrian?
It's Tesla drivers. In Germany I've seen that kind of behavior more than once.
If someone could drive "less than 10cm behind" a pedestrian without them noticing, my reaction is to be impressed with such astonishingly precise driving.
Astonishingly precise driving should be reserved for a closed circuit, not a public street, where there are a million other variables you can't control.
That's why a good driver in public is a smooth and predictable one, not one who can just parallel park with 2 cm space in one go every time.
The pedestrian did not hear the Tesla. And the Tesla really was that close. I saw it and thought, what the heck is happening - even if the road is yours, why would you choose to drive that close to a pedestrian, who at any time could stop or turn around? It's stupid, because if the Tesla even slightly touched the pedestrian, it would be a serious problem for the driver.
I looked at the Tesla driver; then the pedestrian noticed something was wrong, saw the Tesla too, and left the lane.
This is very risky and reckless driving behavior. I've seen such behavior from young guys wanting to impress, or from stupid drivers with fast cars who think the roads belong to them. This Tesla driver was about 50 years old.
I should have made a video and posted it here. No one believes me :)
I don’t believe you. When the two entities are a wheeled vehicle and a bipedal animal, their modes of locomotion are too different to maintain a sub-10 cm clearance.
I'm not questioning what you said, I'm saying that you are wrong.
It's obviously not true that "adaptive cruise control or FSD" could explain your anecdote. Adaptive cruise does not engage in a Tesla unless the road is marked and painted. FSD is not available in Germany. Even if it were, it's far too cautious around pedestrians; it wouldn't allow itself to get within 1 metre of a pedestrian, let alone 0.1 metres.
It's also obviously not true that "Tesla used" anything, because the car in your anecdote was owned and operated by a (presumably) licensed driver and not by Tesla.
A better hypothesis for your anecdote is that the driver was French and not that driving a Tesla vehicle somehow reprograms human brains to drive dangerously.
No, the kind of people buying a Tesla are already programmed to drive like this. That's the whole point.
It's the same question as who buys a BMW or a Mercedes-Benz. Each car maker aims at a particular type of buyer. The same applies to Tesla drivers.
In Germany, where a lot of the discussion around cars happens, only a particular type of person buys a Tesla.
I thought about this too. People say that LLMs only emit the most likely token that comes after the previous tokens, and that this makes them incomparable to human intelligence.
But humans are basically long running LLMs that are retrained in real time. We are the product of our environment.
The claim that "humans are basically long running LLMs" oversimplifies the complexity of human cognition and experience.
Humans don't just process information; they experience emotions, desires, and subjective experiences that are deeply intertwined with their cognition. LLMs don't have feelings, motivations, or consciousness.
Humans have inner subjective experience, self-awareness, and the ability to reflect on our own existence. LLMs don't.
Humans can adapt to a wide range of environments and situations, drawing from a complex interplay of instincts, learned behaviors, emotions, and rational thought. LLMs are much more limited in their adaptability, since they focus primarily on the tasks they were designed for.
Human cognition has evolved over millions of years and is rooted in a complex biological system, the brain. Yes, both LLMs and human brains process information, but the underlying mechanisms, structures, and functions are vastly different.
I really wish people would stop this sort of cavalier reductionism of humans by saying we are basically LLMs. It isn't true.
> Humans have inner subjective experience, self-awareness, and the ability to reflect on our own existence. LLMs don't.
We have no test capable of determining whether or not they have those things, not even if we disregard the limitations of our current technology and ask only hypothetically how we would differentiate.
We also don't have that for animals, or even other humans — I know I have an experience of being, but no way of telling if someone else who says they do actually does. I have to assume at least some of y'all do or humanity wouldn't have written about it since at least Descartes.
People with aphantasia report being surprised when they realise that other people do have mental images, and that they previously thought such things were invented by the film industry as a metaphor. By analogy, there may well be humans out there without qualia, who just learned to mimic the language of those of us who do, which is after all exactly what LLMs must have done if they don't have inner subjective experience. Philosophers call them P-Zombies.
That’s like saying a physics simulation is basically an entire sub-universe on your computer.
It sounds true, but it’s just not. It’s a gross oversimplification.
I think Gödel had a proof for how it’s impossible to fully describe a system from within that system. That’s the nail in the coffin for AGI.
No matter how much data we give it, no matter how big it is, it’ll never be “human intelligent” since it’s impossible for us to describe a loss function for being human or describe being human in a dataset.
We’ll never be able to evaluate it, since we can’t fully describe what it means to communicate because to do that we’d need to communicate it and that process can’t be fully self describing.
Not to say AI isn’t useful or impressive, but it’ll never be comparable to humans, truly.
> I think Gödel had a proof for how it’s impossible to fully describe a system from within that system. That’s the nail in the coffin for AGI.
Gödel's theorems are about formal axiomatic theories. To apply them to human intelligence, you'd have to prove that human intelligence springs from formal axiomatic theories. I don't think this is possible, which would mean that you can't apply the theorems.
> No matter how much data we give it, no matter how big it is, it’ll never be “human intelligent” since it’s impossible for us to describe a loss function for being human or describe being human in a dataset.
How do you know? If we were able to fully record whatever is going on in someone's brain, we should be able to build a loss function for it. How do you know that this is fundamentally impossible?
> We’ll never be able to evaluate it, since we can’t fully describe what it means to communicate because to do that we’d need to communicate it and that process can’t be fully self describing.
Why not? Again, if you argue that this is due to Gödel's theorems, you'd have to prove that our communication itself is based on formal axiomatic theories.
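For reference, a rough textbook statement of the first incompleteness theorem (my paraphrase, not something from this thread) shows how specific its hypotheses are:

    T \text{ consistent, effectively axiomatized, and able to express basic arithmetic}
    \;\Longrightarrow\; \exists\, G_T \text{ such that } T \nvdash G_T \text{ and } T \nvdash \neg G_T

Consistency, effective axiomatization, and enough arithmetic are all load-bearing hypotheses; drop any of them and the theorem says nothing, which is why it doesn't transfer automatically to informal systems like brains or conversations.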
> To apply them to human intelligence, you'd have to prove that human intelligence springs from formal axiomatic theories.
It applies to everything we could say or think or measure about intelligence (and anything else). All of that probably "springs from" things that don't (rely on our axioms), but those things are not accessible to us, so that doesn't really help.
> It applies to everything we could say or think or measure about intelligence (and anything else).
How do you know? Gödel's proof doesn't support your claim since it only applies to formal axiomatic theories. Do you have an alternative proof that his theorems also apply to all other systems?
I know by repeatedly starting anywhere at all and asking "but why is that?", if you will. Furthermore by pondering it and realizing that even if I ever found a perfectly harmonious explanation for all my observations, I would have no clue if reality wasn't even more complex beyond what I can perceive of it. So I'm still only really dealing with my own observations and narratives, and even if I got the answer perfectly right, even if God told me yes, this is how it works, all of it to the last detail, and I understood all of it, I couldn't be sure there isn't more to it.
Maybe gravity makes things fall down, sure, but maybe there are tiny kobolds in the spaces between all particles with little clipboards that calculate the correct motion and cast spells to move them. I'm not trying to be a smartass, but I honestly tried and could not find bedrock. Can you name (or even just think) something that doesn't rest on something else or an assumption? I honestly can't.
> Gödel's proof doesn't support your claim since it only applies to formal axiomatic theories.
"only"? I'd say those axioms are a superset of the sloppy stuff we throw around in our day to day, like "this is a chair"; if we drilled down on our informal speech and thoughts, we'd at best arrive at such axioms, which ultimately rest on things we simply posited (because otherwise there would be nothing to think about, and no way to think about it -- I'm not knocking it per se, just the idea that the quest for truth could possibly ever be complete, which makes it no less noble IMO).
> Maybe gravity makes things fall down, sure, but maybe there are tiny kobolds in the spaces between all particles with little clipboards that calculate the correct motion and cast spells to move them. I'm not trying to be a smartass, but I honestly tried and could not find bedrock. Can you name (or even just think) something that doesn't rest on something else or an assumption? I honestly can't.
I'm not sure how a "bedrock" relates to the question whether an intelligence can ever fully describe what an intelligence is. When answering this question, we don't need to find a "natural" bedrock, since the assumptions we choose are the bedrock we build on. As long as those assumptions align with reality to the best of our knowledge and the end result passes all tests we can think of, what does it matter whether there might be more to know? Of course it doesn't mean we should stop searching, but it also doesn't mean we should not even try. There are many such unfalsifiable statements, but that doesn't mean they stop us from answering other questions.
> "only"? I'd say those axioms are a superset of the sloppy stuff we throw around in our day to day, like "this is a chair"; if we drilled down on our informal speech and thoughts, we'd at best arrive at such axioms, which ultimately rest on things we simply posited (because otherwise there would be nothing to think about, and no way to think about it -- I'm not knocking it per se, just the idea that the quest for truth could possibly ever be complete, which makes it no less noble IMO).
I don't think this is true, and if you can prove it, you might earn a Nobel prize. "Formal axiomatic theories" are well-defined - as Wikipedia states, they are "formal systems that are of sufficient complexity to express the basic arithmetic of the natural numbers and which are consistent and effectively axiomatized. [...] In general, a formal system is a deductive apparatus that consists of a particular set of axioms along with rules of symbolic manipulation (or rules of inference) that allow for the derivation of new theorems from the axioms."
Can you try to describe how you'd "drill down" on informal speech to transform it into such a system? There are many, many examples for systems that are absolutely not based on formal axiomatic systems.
> Can you try to describe how you'd "drill down" on informal speech to transform it into such a system?
As I said, just keep asking "why?" or "what does that mean?", then repeat that with the answer. Sooner or later you hit an assumption and a shrug. I wouldn't understand Gödel's proof even if I tried to, I'm sure -- it "rings true" because it matches my own intellectual observations regardless where I turn.
> As long as those assumptions align with reality to the best of our knowledge and the end result passes all tests we can think of, what does it matter whether there might be more to know?
It matters for the question whether you can fully describe a system from within that system, that's all. But I'd argue even whether we made an effort or no effort, whether it passes all the tests we came up with or doesn't, doesn't really matter (in regards to that question) because any ground we cover won't bridge what remains an infinite distance. I still think it's good, but it's more like going for a walk each day: you always arrive where you started out, you're just getting fresh air and what other temporary benefits come with it. It beats just staying where you started out.
I can't find the quote, but apparently Werner Heisenberg said something along those lines: that we basically set out to find the bedrock of reality, but more and more are just facing ourselves, that is, our instruments of measurement and our ways of conceptualizing things. And again, I don't know jack about quantum mechanics and don't want to call on the authority of Heisenberg and Gödel. But I hear they know their fields, right, and it matches everything I know in any area, both the ones I am bad in and the ones I am really bad in.
I'm not saying it's a problem, just that that's how it is. But thinking you know the ultimate and final truth because it passes all tests (e.g. witches sink), and thinking software is actually intelligent because it convinces you it is, when it really isn't, can be super mega dangerous. And comments about how picking the statistically most likely word is "basically what our brain does" [0] etc. point to an even worse possibility, where we take the shortcut of simply confusing what we are creating with ourselves, because then it's easy and now we know how we think (when we really don't, not remotely). It just generally seems backwards to start out with the goal of "AI" when we can't even describe what we're looking for, much less how to build or find it. Having no more than "we'll know it when we have it", plus eagerness to claim we have it, is a recipe for at least a lot of circus, if not disaster.
[0] And that's on HN. Now ponder, for example, the average opinion of HN on, say, whether banks should limit your passwords in all sorts of weird ways that imply they're not hashing them, and how much worse the "real world" is. In this case, even the people at the forefront are so keen to move fast and break things, so the "real world" is pretty much doomed, I'd say.
>> Can you try to describe how you'd "drill down" on informal speech to transform it into such a system?
> As I said, just keep asking "why?" or "what does that mean?", then repeat that with the answer. Sooner or later you hit an assumption and a shrug.
I still don't understand your assumption. Do you think that any axiomatic system is a formal axiomatic theory? As I've said before, this term is well-defined, and I don't see how you could "drill down" on natural language to arrive at such a system. There are many axiomatic systems that are not formal axiomatic theories, and Gödel's proof doesn't apply to those.
> I wouldn't understand Gödel's proof even if I tried to, I'm sure -- it "rings true" because it matches my own intellectual observations regardless where I turn.
This is why I've been asking how you'd bridge the gap between Gödel's proof and your assumption, because Gödel's proof applies strictly to one thing, and you seem to apply it to everything, even when it doesn't meet the requirements of the proof. But I guess you're arguing from a philosophical standpoint, not a logical one.
Nature managed to bootstrap human intelligence even though nature is not even a conscious entity, let alone a conscious entity that can describe loss functions and datasets. That seems to me like pretty solid evidence that those things are not hard requirements for creating human-level intelligence.
It sounds like maybe you're arguing that humans will never be able to conclusively determine if an artificial intelligence is equivalent to a human intelligence, on the basis of a theory that a human can't describe precisely what it is to have human-equivalent intelligence from inside the system of a human brain. Humans build things that are too complex for any one person to hold the entirety of in their conscious mind all the time, by working together, or by organizing it into simpler pieces. But even setting that tangent aside, if your theory is accurate, would you accept that an AGI could prove it was more intelligent than a human by successfully describing human intelligence and how to create an equivalent AGI?
Maybe?
But the way AI is made nowadays is by using a human made metric to determine the quality and/or correctness of a model’s output.
Because we’re unable to make a perfect, totally correct metric, I find it unlikely that any of the current generation of AI will get anywhere near human level.
Again, not that these new models aren’t extremely useful or impressive, but not really “intelligent” as a human is.
It doesn't have to be perfect. It just has to be convincing. That takes a lot less data and is much more tolerant to simplifications. Just look at the claims that were thrown around GPT's abilities at first.
If the physics simulation accurately reflects the dynamics of physical systems at the large scale and your goal is to see what the stars look like in the future, who cares if it doesn't actually have the entire universe?
What makes you think that? What we call LLMs are a specific architecture that literally does predict the next token, one at a time. This isn’t the only way to generate text. Perhaps humans use a method that synthesizes all the text at once. Or some other, unimaginable way.
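For illustration, here is a toy sketch of what "predict the next token, one at a time" means mechanically. The corpus, the bigram counting, and the greedy choice are all my own simplifications for the example; real LLMs compute the next-token distribution with a neural network conditioned on the whole preceding context, and usually sample rather than always taking the top token.

    from collections import Counter, defaultdict

    # Toy next-token predictor: count which word follows which in a tiny
    # made-up corpus, then repeatedly emit the most frequent follower.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    follower_counts = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        follower_counts[current_word][next_word] += 1

    def generate(start, length=6):
        tokens = [start]
        for _ in range(length):
            followers = follower_counts.get(tokens[-1])
            if not followers:
                break
            # Greedy decoding: always pick the single most common next token.
            tokens.append(followers.most_common(1)[0][0])
        return " ".join(tokens)

    print(generate("the"))  # -> "the cat sat on the cat sat"

The one-token-at-a-time loop is the same shape as in a real model; what changes is how the next-token distribution is computed.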
Almost certainly not true. Consider pronouns: they reference other words and therefore cannot be written in isolation. Or the simple rules about when to use "an" or "a".
Shaping your words to sound correct is very common. Both in speech and in writing. Sometimes it is finding how to fit a word you want to use into a sentence. Sometimes it is building a rhyme.
You may feel that you go a word at a time, but that really shows how embedded language is.
Right now we don't know whether LLMs are also doing this or not.
In calculating the 'next' word, that 'weighting' may already encode the 'simple rules' about future words that you say humans apply but LLMs can't.
It doesn't seem LLMs are capable of "understanding" that they don't "know" something, so there are certainly some observable differences. But the intent of my original comment was precisely to highlight that we don't know. It could be that next token prediction is somehow mathematically equivalent to what we call consciousness. That would certainly be a revelation.
I seem to have two parts of my inner monologue, one comes up with complete concepts, the other puts them into words. When I started to notice this, I tried skipping the "make words" part to save time, as clearly I had not only had the thought but also was aware that I had already had the thought. This felt wrong in a way I have no words to describe as there's nothing else like that particular feeling of wrongness, though I can analogise it as being almost but not quite entirely unlike annoyance.
Anyway, point is 80% of this comment was already in my head before I started typing; the other 20% was light editing and an H2G2 reference.
Yeah but do you write that one word based on the past 1000 or so words that come before it and how each of those words relate to each other with a perfect recollection of each word at the same time?
And if you believe you do that subconsciously, then how can you be sure you don’t subconsciously plan a few words ahead?
To be fair, I rarely write more than a sentence or two in serial form, and I have often determined the point of a sentence before I write the first few words.
Occasionally the act of writing out an idea in words clarifies or changes my mind about that idea, causing me to edit or rewrite what I’ve already written.
You may write one word at a time but the grammar of most languages forces their users to know what they’re going to write several words ahead of the current word.
Ok. Why do you think GPT isn't doing that, or is different?
It might be calculating the next word, because you can only write one word at a time, but you can't say that the current next word isn't influenced by a few words ahead. I don't think the current understanding of what LLMs do internally can rule that out.
Sometimes people use LLMs very broadly to talk about neural networks overall.
But to be clear, humans don't have emergent reasoning from language, we learn language as part of our overall reasoning. The short evidence for that being that children are capable of solving logic and spatial puzzles before they learn how to speak. Humans learn concepts like object permanence before we learn language complicated enough to describe that concept. And obviously people are capable of reasoning without learning how to write or interpret text tokens, there are plenty of illiterate people in the world who are nonetheless indisputably intelligent agents.
So ignoring other differences about how prediction works, humans are not similar to LLMs in the sense that LLMs are language models that when large enough either develop (or appear to develop depending on who you ask) reasoning capabilities. And that's not how humans work; we don't learn text tokens before we learn how to reason.
But very often when people make this claim they're trying to make a broader claim about neural networks or the role of prediction in learning in general. People might disagree or agree with the broader claim, I still think it oversimplifies how humans work, but the point is -- they're not actually saying something specific about LLMs, even though it sounds that way sometimes. It's just that the terminology gets conflated in people's heads.
We can have a debate about the similarities and differences between humans and neural networks, but I don't think anyone would seriously claim that GPT-4 in specific works the same way as a human does. I think people are using LLMs to refer to a broader category of AI research.
Definitely. LLMs have gotten so much press that many people arguing about 'AI' are really thinking about LLMs.
And LLMs can't do everything a human can do. Language is not everything about a human.
But there is an argument that there is part of the brain that produces language, and it has some LLM characteristics. It's just that the brain is bigger and does more than an LLM. So the brain is not an LLM.
The brain has many components. What happens when you take the problem solving of something like AlphaGo/AlphaStar, combine it with the vision processing in cars or DALL-E and the language processing of an LLM, and add in hearing and touch?
It's not that the brain is bigger than an LLM, it's that the way we learn written language (and spoken language too, tbh) is different from how LLMs learn language, and that the way we think about the world isn't derivative of language.
We don't learn to read or write by doing token prediction (if we did, subjects like spelling would be much easier). In fact, there was a movement in schools to teach reading by asking students to predict what words might be based on the context of the sentence, and it was a disaster and led to increased illiteracy rates and schools have started shifting back to phonics. Not only do we not learn that way, when we try to learn that way it leads to worse education outcomes.
The reason brains are not like an LLM is not that we also have eyes and an LLM doesn't; it's that, even isolating just our language "models", we are trained differently and they interact with the rest of our brains differently.
If our language centers of our brain worked like an LLM, we would expect language skills to develop faster than reasoning capabilities within our writing/speaking. A primitive LLM like GPT-2 has very limited processing ability but is still able to imitate a wide range of styles and is still able to "speak" in a grammatically correct way. Humans are the opposite: we start out communicating complex ideas poorly and we start out using language poorly. We master language as a processing tool before we become competent at using language in general.
That's true. Grind LC/sys design, then apply to every top company (not just FAANG) and get referrals everywhere you can. I got interviews at places that had passed on me a few weeks before, just by getting a referral.
That's the only one I know of at my bank that's mobile only. (And for how frequently I deposit checks, going to an ATM wouldn't be a big deal.) I realize other banks may have more de-featured web sites.
Because sometimes you want to check banking services on the go. Or you get a notification in your email and want to open the app to check. Or it's much quicker to log in with a fingerprint. Or so you can quickly Zelle someone on the go. Or because why would you want to go pull out your computer, turn it on, log in, and waste all that time when you have a mobile computer in your pocket that's always on and always connected, not in your backpack in the corner somewhere? I know a lot of people who only have a work laptop and use their phone for everything else.
> Selon la réglementation européenne, ces systèmes assurent 100 000 euros par déposant.
> (DeepL) According to European regulations, these systems guarantee 100,000 euros per depositor.
From experience, this is never wasted. On the contrary, stating a problem clearly very often helps find the solution, without bothering anybody else :)