
If you're still interested: https://www.gnu.org/software/emacs/manual/html_node/elisp/Le.... Basically modern Emacs Lisp works like Common Lisp.


Welcome, Tom! Sometimes I can't believe how good we have it here. Thank you both very much.


Sincere thanks to you too, Scott.


Yanis Varoufakis has been writing about this with a mostly neutral, inquisitive tone. Recently:

https://unherd.com/2025/02/why-trumps-tariffs-are-a-masterpl...

https://unherd.com/2025/04/will-liberation-day-transform-the...


Good links, though I would not call that a "mostly neutral, inquisitive tone". Varoufakis knows that the outcome of Trump's imagined/proposed tariffs cannot be known at this point, and so he explores a couple of the possible results.


To add to this: I think pg does drill down, but what he does more than most is make the effort to bring those insights back up.


Explicitly yes:

> I mean new things in a very general sense. Newton's physics was a good new thing. Indeed, the first version of this principle was to have good new ideas. But that didn't seem general enough: it didn't include making art or music, for example, except insofar as they embody new ideas.


I remember encountering this file in Red Hat Linux circa 2000: https://en.wikipedia.org/wiki/File:Linus-linux.ogg. It sounds much clearer nowadays, but still to my ear it's roughly halfway between lih-nucks and lee-noox.


My general understanding is that "mind" is an objective concept; people have minds that cognize and think and learn and so on. Some minds are apparently more capable of those things than others. When speaking about intelligence, it makes sense to associate that with the mind.

Consciousness, on the other hand, is (even) less well-defined and is usually considered to be subjective. Being subjective, it tends to resist all of the usual objective approaches of description and analysis. Hard problem and all that.


I don't understand why people have a problem with simply stating that it's an emergent phenomenon, and that's it.

Similarly to how a computer is a computer, and a half-sized computer is half of its bigger friend – you can keep halving it until there is no "computer" left in it.

Or a pencil – you have a pencil that you call a pencil; what about a pencil half its size? And so on, until you hit a single atom. You had a pencil, now you don't – where on this line was there a pencil, and then there wasn't?


Because that's the same as giving up and saying "we don't understand."

What is the mind emerging into? When a video game experience emerges from the combination of processing, display, sound, and controller input, it emerges into a level of organization that a mind can participate in. It emerges into a system of organization emanating downward from the mind experiencing it. It can't just "emerge" into existence on its own. If a game falls in the woods, it's not a game.

If you call the mind an emergent phenomenon but can't describe the context into which it emerges, you've added nothing to our understanding.


I agree with GP. Consciousness isn't so hard to explain if you don't enshroud it with mysticism.

Consciousness is the emergent, graduated phenomenon of an information processing system when that information processing system has achieved significant complexity to model itself with relation to the various systemic inputs.

It's not binary, it's a gradient. I have more developed consciousness than my dog, which has more developed consciousness than a rat, and then a fish, then an insect, etc.

Somewhat disturbingly it also goes the other way, AIs may achieve a more profound conscious experience than humans - same for aliens. What does it mean for inferior forms of consciousness that have always placed themselves on a pedestal in relation to the rest of the animal kingdom simply because they have the most developed consciousness?


Welp, that does it. Pack it up, Cognitive Scientists: we've solved the hard problem of consciousness right here on HN.

What you're describing is what's been proposed by Giulio Tononi as "Integrated Information Theory" (IIT) [1]. I quite like the framework and the math behind it is beautiful. Unfortunately, it hasn't been supported well empirically.

Re: AI, IIT actually gives a basis for AI not being conscious. Not to mention that all conscious systems we can currently observe are dynamic/continuous, not discrete. The difference there is qualitative—there's no reason to assume that, because a dynamic system is conscious, a discrete system approximating it is conscious too.

[1: https://www.nature.com/articles/nrn.2016.44]
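For a feel of what "integration" means mathematically, here is a heavily simplified, Φ-flavored toy in Python. To be clear, this is not IIT's actual Φ (which works with cause-effect repertoires and searches over all partitions); the measure and the two example systems below are my own illustration. It scores a binary network by how much past/future mutual information the whole system carries beyond what its parts carry individually:

```python
import itertools
import math

def entropy(dist):
    """Shannon entropy (bits) of a dict mapping outcome -> probability."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def mutual_information(joint):
    """I(X;Y) from a dict mapping (x, y) pairs -> probability."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return entropy(px) + entropy(py) - entropy(joint)

def toy_phi(update, n=2):
    """Toy integration score: whole-system past/future mutual information
    minus the summed part-wise MI over the node-by-node partition.
    Positive only when parts carry information about each other's future."""
    states = list(itertools.product([0, 1], repeat=n))
    p = 1.0 / len(states)  # uniform distribution over past states
    # Joint distribution over (whole past state, whole future state)
    whole = {}
    for s in states:
        f = update(s)
        whole[(s, f)] = whole.get((s, f), 0.0) + p
    # Sum of each node's own past/future mutual information
    parts = 0.0
    for i in range(n):
        joint_i = {}
        for s in states:
            key = (s[i], update(s)[i])
            joint_i[key] = joint_i.get(key, 0.0) + p
        parts += mutual_information(joint_i)
    return mutual_information(whole) - parts
```

A two-node system where each node copies the *other* node scores 2 bits (cutting the system destroys all predictive information), while two nodes that each copy *themselves* score 0 (the parts already explain everything) — which is the intuition behind calling the first system "integrated".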


It's a nice idea but there's no evidence for it. Not only is there no evidence, but nobody has any idea what such evidence would even look like. We can't even conceptualize an experiment that would support or refute this theory.


That's not true, the authors of IIT [1] propose a number of experiments that would support or deny the underlying theory. To my knowledge, those experiments haven't shown much support. But there are aspects of it that are absolutely empirically falsifiable.

[1: https://www.nature.com/articles/nrn.2016.44]


I don't buy it. It might support the part of the theory that talks about how brains work. But the statement that this is qualia is different, and can't be proven or disproven. Let's say I believe that some person is actually a P-zombie, someone with no conscious experience but who behaves exactly like a normal person. Would these experiments be able to tell me if my belief is correct? I don't see how.


You're welcome to read the paper. Tononi's work is well-known within CogSci and it's not quackery by any stretch.


I skimmed it. There's one mention of "qualia" and I didn't spot anything to connect their theory with the actual experience of consciousness besides them saying they think so.


Better let Nature know their Peer Review committee screwed up then!


> Better let Nature know their Peer Review committee screwed up then!

The article you cite [0] is labelled as "opinion". The standards for peer review of opinion articles in scientific journals are a lot lower than those for ordinary research articles. While precise standards vary from journal to journal, for opinion articles peer reviewers often see their role as simply excluding egregious misinformation and blatant errors, as compared to research articles where their role is to make sure the article is presenting high quality evidence in support of its conclusions. [1]

[0] https://www.nature.com/articles/nrn.2016.44

[1] https://ecologyisnotadirtyword.com/2021/02/24/lets-talk-abou...


That would be because I linked the wrong article. The original is here, different journal:

https://journals.plos.org/ploscompbiol/article?id=10.1371/jo...


I think this article has the problem that it is addressing an interdisciplinary topic with too much focus on only a single discipline, which can be a sign of lacking sufficient diversity in disciplinary background of peer reviewers.

I think Scott Aaronson’s attempted refutation of IIT - https://scottaaronson.blog/?p=1799 - is better, in that he actually tries to relate IIT to some of the philosophical literature (e.g. his distinction between Chalmers’ “Hard Problem” and the distinct “Pretty Hard Problem” which he sees IIT as trying to address).

I think it is a pity that Aaronson has never (to my knowledge) published his criticisms of IIT in a more formal setting, and I don’t know if Tononi has responded to them anywhere. I think Aaronson is probably right that IIT fails as a mathematical model of what we intuitively consider conscious: even though it excludes many common electronic devices we wouldn’t call “conscious”, it is possible to mathematically construct an algorithm, capable of being physically implemented in electronics, which would be conscious per IIT but not per our intuition. And even if Tononi patches his mathematics to solve a particular case of that problem, someone with Aaronson’s skillset may just be able to construct another.

Tononi might then argue that if there is no mathematical model of our intuitions about consciousness lacking in special pleading, that’s a sign our intuitions are flawed. Okay, but then if we accept our intuitions can be flawed in some cases, why not in more cases? One could decide the intuition of consciousness is completely erroneous and become an eliminativist about it. Or, if IIT forces you to accept (contrary to our intuitions) certain (special cases of) simple electronic devices or computer systems as just as conscious as humans, why not violate those intuitions further and insist on that for even more cases?


I'm not a "believer" in IIT. But I think it's an incredible idea, and taking the time to really understand what Tononi et al are proposing is a mind-expanding experience. It may not explain consciousness, but it does make you think about what things could be a part of it. And any attempt to mathematically formalize cognitive science gets a vote of approval from me.

My personal belief is that consciousness requires dynamic continuity. I don't think an algorithmic system is conscious, because its "cognition" is discrete and the information isn't integrated across frames. I don't have a "why that works"—it's just a gut belief.


Funny, I was just thinking in the opposite direction. There's "I think therefore I am," but there isn't really "I thought therefore I was." I know I'm conscious, but I only have the memory of being conscious before, which could be false. Consciousness could just be a snapshot, although it certainly doesn't feel like it.


Your brain is still physically continuous and dynamic. ANNs are not; there are distinct frames.


I agree this seems like an important difference. It’s just interesting to me that there’s no proof that my consciousness is continuous. I’m going to continue to assume that it is, but it’s unknowable.


You’re modded down for some reason, but I hadn’t thought of consciousness as a model that eventually becomes sophisticated enough to model itself. That could be an explanation for self-awareness.


See Scott Aaronson's rebuttal to IIT: https://scottaaronson.blog/?p=1799


Emergence has no “into”, IMO: https://en.wikipedia.org/wiki/Emergence


All of those examples are emergence "into". The snowflake emerges into the mathematical patterns emanating downward. The termite cathedral emerges into the architectural context of the observer. Without emanating structure, there is no "emergence"—just a proliferation of chaos and error.


Not to butt in, but if the emergent phenomenon only exists in the mind of the observer, and the mind is a material phenomenon, then where in the observer-snowflake system is there anything not fully decomposable to atoms, particle motion, and so on?


The mind is not a material phenomenon, in the same way that a video game is not a computational one.

The emergent experience exists at the level into which it emerges. It's constructed at a lower level of organization, but not decomposable to them in a way that's meaningful without their recomposition—that's what makes the phenomenon emergent. The qualitative experience of a film does not meaningfully break down to the bits in the video stream, the compressed sound waves carrying the dialog, the photons hitting your eyes' rods and cones, or the biochemical signals in your brain.

The mind is not in the brain, but on the brain.


But we have to classify it as an “appears to” rather than an “is,” don’t we? It’s perfectly fine to categorize out emergent phenomena that have practical utility, e.g. it’s useful to see the snowflake over its constituent parts, or the film over the bits, but what underlies the choice to see it as a film rather than an improbably corrupted png? When talking about the mind, then, why is it we choose to see the mind at all, and how does this constitute more than a convenient framing device, i.e. how can it explain qualia?


Because our entire perception system functions as a mediation between the teleological affordances an object presents at a given level of organization/analysis and how those affordances relate to our motivational system's current objective and directed action. The emergence only "is" at a certain level of analysis and its emergence at that level is dependent entirely on the perception of an observer.

If a car is hurtling towards you, you don't perceive its handle. But if you're trying to go somewhere, you have to open the door. "Threat", "vehicle", or "handle" aren't just convenient framing devices, but an accurate depiction of the object within your perceptual/motivational systems based on the current level of organization and analysis you're participating in.

We choose to see the mind because we are minds. Consciousness is. There is something which it is like to be. Denying our emergent experience of it, or reducing it to a "convenient framing device" tosses out the most fundamental empirical experience we have: to exist.


I completely agree with you, I’m just being more reductive when I say it’s a practical categorization rather than essential reality. Certainly it’s also reasonable to say there’s no essential reality, just subjective levels of analysis, so everything is practical categorization. The issue is that we’ve gotten nowhere in explaining why we seem to exist.

A video game is relatively easy, at least seemingly, to reduce down to its underlying principles. The content dissolves the more closely I look at the game. The issue here isn’t whether the game still exists (it does, in the place I’m no longer looking), the issue is in seeing why the game arises from its component parts, and not something else. Easy-ish for the game, it follows directly from what we know about physics and such, but hard for the mind. Why do neurons together produce pain that exists, rather than pain as a purpose-driven internal signal to help organize the escape from a predator? Emergence doesn't tell us why one or the other, just that whatever it is must emerge from constituent parts.


I don't think its a question of whether there is an essential reality or not, but rather whether we have access to essential reality. Donald Hoffman makes a strong game theoretical argument for how natural selection chooses effective presentations of reality rather than necessarily accurate ones [1]. Based on your level of interest in this conversation I'd expect you would really enjoy that book!

The game is certainly easier than the mind—I like it as an example because most of us have a hands-on knowledge of what the qualitative experience of "playing a game" is like. But the game still only emerges because the game developer, computer manufacturer, and player jointly give it an emanating system into which it can emerge. On its own, the raw game data doesn't really mean anything at all—if the bitstream of Diablo IV washed up on the beach, there's nowhere in that data encoding the experience of killing Diablo for the first time. One wouldn't even recognize it as something that could be decoded into such an experience [2].

I agree with you that the "why" is tough. Why have a conscious experience? Why have a sense of self at all? Why experience emotions rather than have them be—like you described—a purpose-driven internal signal? And then you get into theories like Internal Family Systems, which has empirical support at least within a prescriptive context if not necessarily a descriptive one [3].

The whole thing is a mess. A great, big, beautiful mess.

[1: https://www.amazon.com/Case-Against-Reality-Evolution-Truth/...]

[2: https://benjamincongdon.me/blog/2021/02/21/Three-Layers-of-I...]


Sounds similar to Plato's Theory of Forms.


Close! Emergence/emanation is particular to Neoplatonism. https://en.wikipedia.org/wiki/Neoplatonism


It depends on what you mean by consciousness. If we are talking about intelligence or self-awareness or thoughts, then I don’t see any problem with it being emergent. But if we are talking about conscious experience/qualia (not something that thinks or interacts, but something that just experiences), then I think it’s incoherent for it to be emergent. That there is a consciousness experiencing something is the only thing we can know as 100% true, while the world itself is something we can never know is 100% true: we could be a brain in a vat, we could be dreaming, in the Matrix, a demon could be making us hallucinate everything, etc. It seems a bit silly to say the one 100%-true thing is an illusion, or is dependent, because something we don’t know is true tells us it is.


A pencil is just an idea; minds objectively have qualia (measured internally).

Edit: you can’t measure “pencilness,” but you can’t help but know whether or not you’re in pain.


You can measure "pencilness" a number of ways depending on how you operationalize the term. It could be a measure of how well it achieves the function of a pencil, how well it matches the collective understanding of the form of a pencil, how closely it materially relates to an existing reference pencil.

These are all proxy measures, but all of science is done by proxy.
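As a toy illustration of operationalizing one of those proxies (the feature set, the reference values, and the `pencilness` function are all invented for this example, not any standard measure): score a candidate object by how closely its feature vector matches a reference pencil's.

```python
import math

# Hypothetical feature vector: [length_cm, has_graphite_core,
# is_hexagonal, has_eraser]. Values invented purely for illustration.
REFERENCE_PENCIL = [19.0, 1.0, 1.0, 1.0]

def pencilness(candidate, reference=REFERENCE_PENCIL):
    """Proxy measure of 'pencilness': cosine similarity between a
    candidate object's feature vector and the reference pencil's."""
    dot = sum(a * b for a, b in zip(candidate, reference))
    norm = (math.sqrt(sum(a * a for a in candidate))
            * math.sqrt(sum(b * b for b in reference)))
    return dot / norm if norm else 0.0
```

A full-sized pencil scores 1.0 against itself; a half-length stub with no eraser scores somewhere between 0 and 1 — which is exactly the graded, proxy-based answer to "where on this line was there a pencil?"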


Well sure, but you’re putting “pencilness” onto the collection of heterogeneous matter, same as with any other level of analysis. Consciousness isn’t debatable by the thing doing it; it’s an irrepressible fact of existence to the conscious thing. Science needs a falsifiable hypothesis for the “why” of the material->consciousness transition, and constructing such a hypothesis is difficult for a lot of reasons. Saying “it emerges from neuron connections” just doesn’t capture the issue. Why should neuron connections produce this observer thing when we seem to see machines do similar things without it? Is a sufficiently large recurrent neural network conscious by the same process? If not, or if so, why? What precisely produces the phenomenon? Emergence is an observation, not a hypothesis for why that observation occurred. It could just be a trick of the light.


Agreed, I was just commenting that one can measure "pencilness". I see a lot of pedantic arguments against measurement by proxy, as if every single measurement we do weren't by proxy.


Actually yeah I see your point. I’ll concede on that.


There's a whole scientific study of consciousness that actually comes out of behaviorism. The thought is, if I have a conscious experience I can then exhibit the behavior of talking about it. From this developed a whole paradigm of investigation including stuff like the research of subliminal images.

Stanislas Dehaene's book Consciousness and the Brain does a great job of describing this, though it's 10 years old now.


Trouble is that you can also exhibit the behavior of talking about it just by being exposed to the idea, even if you don't have the experience. If you were never exposed to the idea and you started talking about it, then I'd be convinced you had the experience, but nobody is actually like that. The fact that the idea exists at all proves to me that at least one human somewhere had conscious experience, and I know there's at least one more (me), but that's it.


I was evidently unclear. I mean, if an image of a parakeet is flashed up on a screen for 100ms and you can say "I saw a parakeet", you were conscious of the image. If the image is flashed for 50ms and you can't, you weren't conscious of the image. In this paradigm, being conscious is being conscious of particular things.
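That paradigm is easy to sketch: vary exposure duration, log whether the subject can report the stimulus, and compare report rates across durations. A minimal sketch (the trial data and the function name are hypothetical, just illustrating the bookkeeping):

```python
from collections import defaultdict

# Hypothetical trial log: (exposure duration in ms, subject reported image?)
TRIALS = [(100, True), (100, True), (50, False), (50, False),
          (100, True), (50, True), (50, False), (100, True)]

def report_rate_by_duration(trials):
    """Fraction of trials at each exposure duration where the subject
    could report the stimulus -- the behavioral proxy for having been
    conscious of it in this paradigm."""
    counts = defaultdict(lambda: [0, 0])  # duration -> [reports, total]
    for duration, reported in trials:
        counts[duration][0] += reported
        counts[duration][1] += 1
    return {d: r / n for d, (r, n) in counts.items()}
```

With the made-up log above, 100ms trials are reported every time while 50ms trials mostly are not, which is the kind of duration threshold the subliminal-image research maps out.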


That seems like a fairly simple machine could be conscious, which is not usually how the word is used. Typically consciousness means that there is some ill-defined entity that has a subjective experience, what the philosophers call qualia.


The mind concept here could then apply to computers as well, since those can, after all, also be configured to learn things and behave in certain intelligent ways.


mind = container of values

consciousness = meta-attention


> It's like Cypher in the Matrix writing about the relative benefits of changing his dinner from steak to something cheaper.

Now how can we take the rest of this comment seriously with a sentence like this?


I'll give you my understanding, which isn't exactly Christian or Buddhist or whatever—it's just how it seems to me, and YMMV.

The word "soul" describes the fundamental sense of self experienced by a human person. We know this self isn't the same as our mind, our body, our possessions, or our memories, because those things change but the sense of self seemingly doesn't. It's just "me" or "I".

Because this sense of self doesn't change, it seems timeless, or eternal. And many experience some amount of tension because this eternal sense of self seems to get mixed up with all of the temporary things in the world, especially our bodies and minds. With that admixture comes a fear of losing the self (i.e. death), as everything temporary is eventually lost.

So what? The infinite is infinite, the finite is finite. IMO, any direct experience actually includes both. But anyway I figure it's wise to keep them straight and not mix them up.


My dictionary says this about "protocol":

> [In computing:] a set of rules governing the exchange or transmission of data between devices.

In the article, pg says this:

> The reason is that it's a new messaging protocol, where you don't specify the recipients.

It seems obvious to me what pg is getting at, even though the other protocols he mentioned are all formal while Twitter's is not.


It seems like that could apply to many other ways of messaging though, not all of them online. I'm not sure the "new" bit is really correct. Perhaps the potential scale or reach of a message is what matters.

