Hacker News | katabasis's comments

It's capitalism – the true "artificial intelligence" that has been organizing human life for the last ~200 years or so.


I think Vue.js has done a great job of avoiding a lot of the churn while staying competitive with other projects in terms of features and performance.

The core library of React seems well managed, but the accompanying ecosystem of 3rd-party tools for styling, routing, state management, etc. seems to be constantly changing.

Meanwhile in Vue land, the critical packages have remained fairly stable and are all maintained by the core team. You get support for styling and transitions out of the box; for most other things (routing, state management, etc) there is one well-maintained library (as opposed to a bunch of options of varying quality that you need to sift through).

I've been using Vue.js for 10 years at this point, and have been using the composition API for the last 5. The older options API is still viable as well.
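
For anyone who hasn't seen the two styles side by side, here's a minimal counter component sketched both ways (Vue 3 with TypeScript; the component names and mount point are just placeholders for illustration):

    import { createApp, defineComponent, ref, computed } from 'vue'

    // Composition API: reactive state and derived values are declared inside setup().
    const Counter = defineComponent({
      setup() {
        const count = ref(0)
        const doubled = computed(() => count.value * 2)
        return { count, doubled }
      },
      template: `<button @click="count++">{{ count }} / {{ doubled }}</button>`,
    })

    // Options API: the same component expressed with data/computed options.
    const CounterOptions = defineComponent({
      data: () => ({ count: 0 }),
      computed: {
        doubled(): number {
          return this.count * 2
        },
      },
      template: `<button @click="count++">{{ count }} / {{ doubled }}</button>`,
    })

    // Mount either one; the behavior is identical.
    createApp(Counter).mount('#app')

Both styles remain fully supported in Vue 3, so moving from one to the other is a choice rather than a forced migration.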


LLMs are not people, but I can imagine how extensive interactions with AI personas might alter the expectations that humans have when communicating with other humans.

Real people would not (and should not) allow themselves to be subjected to endless streams of abuse in a conversation. Giving AIs like Claude a way to end these kinds of interactions seems like a useful reminder to the human on the other side.


This post seems to explicitly state they are doing this out of concern for the model's "well-being," not the user's.


Yeah, but my interpretation of what the user you’re replying to is saying is that these LLMs are more and more going to be teaching people how it is acceptable to communicate with others.

Even if the idea that LLMs are sentient may be ridiculous atm, the concept of not normalizing abusive forms of communication with others, be they artificial or not, could be valuable for society.

It’s funny because this is making me think of a freelance client I had recently who at a point of frustration between us began talking to me like I was an AI assistant. Just like you see frustrated people talk to their LLMs. I’d never experienced anything like it, and I quickly ended the relationship, but I know that he was deep into using LLMs to vibe code every day and I genuinely believe that some of that began to transfer over to the way he felt he could communicate with people.

Now an obvious retort here is to question whether killing NPCs in video games tends to make people feel like it’s okay to kill people IRL.

My response to that is that I think LLMs are far more insidious, and are tapping into people’s psyches in a way no other tech has been able to dream of doing. See AI psychosis, people falling in love with their AI, the massive outcry over the loss of personality from gpt4o to gpt5… I think people really are struggling to keep in mind that LLMs are not a genuine type of “person”.


Yeah pretty much this. One can argue that it’s idiotic to treat chatbots like they are alive, but if a bit of misplaced empathy for machines helps to discourage antisocial behavior towards other humans (even as an unintentional side effect), that seems ok to me.

As an aside, I’m not the kind of person who gets worked up about violence in video games, because even AAA titles with excellent graphics are still obvious as games. New forms of technology are capable of blurring the lines between fantasy and reality to a greater degree. This is true of LLM chat bots to some degree, and I worry it will also become a problem as we get better VR. People who witness or participate in violent events often come away traumatized; at a certain point simulated experiences are going to be so convincing that we will need to worry about the impact on the user.


> People who witness or participate in violent events often come away traumatized

To be fair it seems reasonable to entertain the possibility of that being due to the knowledge that the events are real.


> It’s funny because this is making me think of a freelance client I had recently who at a point of frustration between us began talking to me like I was an AI assistant. Just like you see frustrated people talk to their LLMs.

I witnessed a very similar event. It's important to stay vigilant and not let the "assistant" reprogram your speech patterns.


Yes, this is exactly the reason I taught my kids to be polite to Alexa. Not because anyone thinks Alexa is sentient, but because it's a good habit to have.


No doubt, but yelling is a built-in method of airing your frustration. After all, there's a reason we are agitated.

It’s a bit like pain response when injured. It’s not pretty, but society is used to a little bit of adversity.


This is like saying I am hurting a real person when I try to crop a photo in an image editor.

Either come out and say the whole electron field is conscious – but then is that field "suffering" as it gets hot in the sun?


I'd love to see something like this that is marketed to parents or schools who want to give kids a way to access the good parts of the internet (Wikipedia, e-books, etc) without the toxic parts. Maybe throw in some kind of decentralized social media or file sharing for collaboration with other students or like-minded families. Kiwix + Pi-hole + ActivityPub, basically. The device could create its own network (one that only allowed access to an allowlist of sites, defaulting to a list of educational projects).

If no one produces such a device I may need to make one myself by the time my toddler gets old enough to go online.
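
As a rough sketch of the allowlist part: Pi-hole's resolver is based on dnsmasq, and a dnsmasq-style allowlist can forward only trusted domains to a real upstream and sink everything else. The domains and file path below are just examples I'm assuming, not a tested setup:

    # /etc/dnsmasq.d/kids-allowlist.conf  (hypothetical file name)
    # Forward only allowlisted educational domains to a real upstream resolver...
    server=/wikipedia.org/9.9.9.9
    server=/gutenberg.org/9.9.9.9
    server=/openstreetmap.org/9.9.9.9
    # ...and answer every other domain with 0.0.0.0 so nothing else resolves.
    address=/#/0.0.0.0

DNS filtering alone won't stop connections made directly to IP addresses, so a real build would pair something like this with firewall rules on the device's own network.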


Didn't England or GB do (does?) a curated dump precisely for that purpose? I'll pull up the info tonight if no one else chimes in.


The closest I found, which partially matches my memory, was from SOS Children that claims "This selection of articles from Wikipedia matches the UK National Curriculum ..." and "... we’ve checked all the articles, tidied them up a bit, ..."

https://web.archive.org/web/20171022101730/http://schools-wi...


I'd definitely support this (even help). My non-related wish for my future kids would be to have some modern version of the BASIC games you would have to type in manually.

I never did that, but it sounds like a really fun way to get into technology and programming. The difference is that I'd use different languages and maybe also allow them to draw images (with paint) and such. Everything offline and simple.


You could maybe install something like NextDNS? (even the free version might cut it) - I think that can block major categories, including the ones you mention.


It's exciting seeing grassroots censorship efforts like this.


The modern internet is filled with content designed to track, mislead, and manipulate users (especially young users who don't have a full set of critical thinking skills). I think it is totally appropriate to want to give families (not the state, FWIW) the tools to take back some control.

Giving kids access to a curated experience online sounds much more feasible than keeping them offline altogether, but most parents don't really have the technical know-how to do this themselves (and tech giants like Google, FB, etc. are not interested in providing these capabilities).


That’s the first thing I thought as well - as long as Wikipedia remains open and public and free there’s a degree of transparency and accountability there -

But if you download your own separate offline version, it can be whatever you want it to be, and that will be all your users get - whether your goal is to ensure access to the ‘right’ parts, or to disappear the ‘wrong’ parts. You could even start out with the former objective, but end up with the latter situation, depending on how things go.

That isn’t quite the same situation with the definitive version of Wikipedia live on the World Wide Web.


I don't think it's productive to assume we know exactly why someone would use this, and how they would both implement this and discuss it with their children in their own home, and then attack them based on said assumption.


That is entirely not how a healthy society works.

https://en.wikipedia.org/wiki/Paradox_of_tolerance


Do you think katabasis' proposal is unhealthy for a society? It strikes me as healthy for parents to keep their kids at ideological home during an information pandemic, to provide access to educational material and shelter from social media.


I think your original comment was mostly parsed as sarcastic.


Yes, that's how it read to me.

I.e., interpretation...

katabasis: (Suggests sensible guardrails for children)

fritzo: THOUGHT-POLICE!!


s/censorship/content curation/


Wikipedia qualifies as toxic Internet for many people, for several reasons: it's heavily manipulated by political and corporate interests as part of their image-polishing media strategy, the links that are used to provide credibility for articles are poorly curated and often broken, and the internal power structure of Wikipedia (which decides which articles get heavily edited by invested interests, etc.) is mostly hidden from the public.

If anything, children should be taught that Wikipedia is not a reliable source, meaning citing a Wikipedia article in a bibliography for a paper should never be allowed by any responsible educational institution.


> the links that are used to provide credibility for articles are poorly curated and often broken

I didn’t appreciate this until a few years ago when I got into the habit of going on deep dives to find primary sources. Easily half of all the sources I checked were either paywalled, out of print, or simply didn’t say what the Wikipedia article claimed they said.


Are you confusing Nussbaum with someone else? Her writing is way more straightforward than your typical fancy philosopher, and she is definitely a proponent of liberal/progressive views (I wouldn't call her a radical though). She's nothing like Ayn Rand.

Someone already posted a link to her excellent 1999 New Republic essay (https://newrepublic.com/article/150687/professor-parody) where she criticizes Judith Butler – today this criticism seems more valid than ever and extremely prescient:

> Indeed, Butler’s naively empty politics is especially dangerous for the very causes she holds dear. For every friend of Butler, eager to engage in subversive performances that proclaim the repressiveness of heterosexual gender norms, there are dozens who would like to engage in subversive performances that flout the norms of tax compliance, of non-discrimination, of decent treatment of one’s fellow students. To such people we should say, you cannot simply resist as you please, for there are norms of fairness, decency, and dignity that entail that this is bad behavior. But then we have to articulate those norms--and this Butler refuses to do.

Your whole comment here is extremely uncharitable and it exhibits the very simple-mindedness that you criticize.


I am not a neuroscientist, but I think it's likely that LLMs (with 10s/100s of billions of parameters) and the human brain (with 1-2 orders of magnitude more neural connections[1]) process language in analogous ways. This process is predictive, stochastic, sensitive to constantly-shifting context, etc. IMO this accounts for the "unreasonable effectiveness" of LLMs in many language-related tasks. It's reasonable to call this a form of intelligence (you can measure it, solve problems with it, etc).

But language processing is just one subset of human cognition. There are other layers of human experience like sense-perception, emotion, instinct, etc. – maybe these things could be modeled by additional parameters, maybe not. Additionally, there is consciousness itself, which we still have a poor understanding of (but it's clearly different from intelligence).

So anyway, I think that it's reasonable to say that LLMs implement one sub-set of human cognition (the part that has to do with how we think in language), but there are many additional "layers" to human experience that they don't currently account for.

Maybe you could say that LLMs are a "model distillation" of human intelligence, at 1-2 orders of magnitude less complexity. Like a smaller model distilled from a larger one, they are good at a lot of things but less able to cover edge cases and accuracy/quality of thinking will suffer the more distilled you go.

We tend to equate "thinking" with intelligence/language/reason thanks to 2500 years of Western philosophy, and I believe that's where a lot of confusion originates in discussions of AI/AGI/etc.

[1]: https://medicine.yale.edu/lab/colon-ramos/overview/#:~:text=...


>I am not a neuroscientist, but I think it's likely that LLMs (with 10s of billions of parameters) and the human brain (with 1-2 orders of magnitude more neural connections[1]) process language in analogous ways

Related is the platonic representation hypothesis where models apparently converge to similar representations of relationships between data points.

https://phillipi.github.io/prh/ https://arxiv.org/abs/2405.07987


Interesting. I'm not sure I'd use the term "Platonic" here, because that tends to have implications of mathematical perfection / timelessness / etc. But I do think that the corpuses of human language that we've been feeding to these models contain within them a lot of real information about the objective world (in a statistical, context-dependent way as opposed to a mathematically precise one), and the AIs are surfacing this information.

To put this another way, I think that you can say that much of our own intelligence as humans is embedded in the sum total of the language that we have produced. So the intelligence of LLMs is really our own intelligence reflected back at us (with all the potential for mistakes and biases that we ourselves contain).

Edit: I fed Claude this paper, and "he" pointed out to me that there are several examples of humans developing accurate conceptions of things they could never experience, based on language alone. Most readers here are likely familiar with Helen Keller, who became an accomplished thinker and writer in spite of being blind and deaf from infancy (Anne Sullivan taught her language despite great difficulty, and this was Keller's main window to the world). You could also look at the story of Eşref Armağan, a Turkish painter who was blind from birth – he creates recognizable depictions of a world that he learned about through language and non-visual senses.


Many philosophical traditions which incorporate a meditation practice emphasize that your consciousness is distinct from the contents of your thoughts. Meditation (even practiced casually) can provide a direct experience of this.

When it comes to the various kinds of thought-processes that humans engage in (linguistic thinking, logic, math, etc) I agree that you can describe things in terms of functions that have definite inputs and outputs. So human thinking is probably computable, and I think that LLMs can be said to "think" in ways that are analogous to what we do.

But human consciousness produces an experience (the experience of being conscious) as opposed to some definite output. I do not think it is computable in the same way.

I don’t necessarily think that you need to subscribe to dualism or religious beliefs to explain consciousness - it seems entirely possible (maybe even likely) that what we experience as consciousness is some kind of illusory side-effect of biological processes as opposed to something autonomous and “real”.

But I do think it’s still important to maintain a distinction between “thinking” (computable, we do it, AIs do it as well) and “consciousness” (we experience it, probably many animals experience it also, but it’s orthogonal to the linguistic or logical reasoning processes that AIs are currently capable of).

At some point this vague experience of awareness may be all that differentiates us from the machines, so we shouldn’t dismiss it.


> It's very difficult to find some way of defining rather precisely something we can do that we can say a computer will never be able to do. There are some things that people make up that say that, "While it's doing it, will it feel good?" or, "While it's doing it, will it understand what it's doing?" or some other abstraction. I rather feel that these are things like, "While it's doing it, will it be able to scratch the lice out of it's hair?" No, it hasn't got any hair nor lice to scratch from it, okay?

> You've got to be careful when you say what the human does, if you add to the actual result of his effort some other things that you like, the appreciation of the aesthetic... then it gets harder and harder for the computer to do it because the human beings have a tendency to try to make sure that they can do something that no machine can do. Somehow it doesn't bother them anymore, it must have bothered them in earlier times, that machines are stronger physically than they are...

- Feynman

https://www.youtube.com/watch?v=ipRvjS7q1DI


"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." - Edsger Dijkstra

Maybe we can swap out "think" with "experience consciousness"


You need to define "consciousness" first for the question to have any meaning, but all our definitions of consciousness seem to ultimately boil down to, "this thing that I'm experiencing".


What about the famous solution provided by Descartes, “Cogito ergo sum”? Let's grant that “we think” and that we can put it into a computable function – how is that going to prove that “I exist” for a machine? How is the machine going to perceive itself as a conscious being?


> When it comes to the various kinds of thought-processes that humans engage in (linguistic thinking, logic, math, etc) I agree that you can describe things in terms of functions that have definite inputs and outputs.

Function can mean inputs-outputs. But it can also mean system behaviors.

For instance, recurrence is a functional behavior, not a functional mapping.

Similarly, self-awareness is some kind of internal loop of information, not an input-output mapping. Specifically, an information loop regarding our own internal state.

Today's LLMs are mostly not very recurrent. So they might be said to be becoming more intelligent (better responses to complex demands), but not necessarily more conscious. An input-output process has no ability to monitor itself, no matter how capable of generating outputs. Not even when its outputs involve symbols and reasoning about concepts like consciousness.

So I think it is fair to say intelligence and consciousness are different things. But I expect that both can enhance the other.

Meditation reveals a lot about consciousness. We choose to eliminate most thought, focusing instead on some simple experience like breathing, or a concept of "nothing".

Yet even with this radical reduction in general awareness, and our higher level thinking, we remain aware of our awareness of experience. We are not unconscious.

To me that basic self-awareness is what consciousness is. We have it, even when we are not being analytical about it. In meditation our mind is still looping information about its current state, from the state to our sensory experience of our state, even when the state has been reduced so much.

There is not nothing. We are not actually doing nothing. Our mental resting state is still a dynamic state we continue to actively process, that our neurons continue to give us feedback on, even when that processing has been simplified to simply letting that feedback of our state go by with no need to act on it in any way.

So consciousness is inherently at least self-awareness in terms of internal access to our own internal activity. And that we retain a memory of doing this minimal active or passive self-monitoring, even after we resume more complex activity.

My own view is that is all it is, with the addition of enough memory of the minimal loop, and a rich enough model of ourselves, to be able to consider that strange self-awareness looping state afterwards. Ask questions about its nature, etc.


LLMs are recurrent in the sense that you describe, though, since every token of output they produce is fed back to them as input. Indeed, that is why reasoning models are possible in the first place, and it's not clear to me why the chain-of-thought is not exactly that kind of "internal loop of information" that you mention.

> Meditation reveals a lot about consciousness. We choose to eliminate most thought, focusing instead on some simple experience like breathing, or a concept of "nothing".

The sensation of breathing still constitutes input. Nor is it a given that a thought is necessarily encodeable in words, so "thinking about the concept of nothing" is still a thought, and there's some measurable electrochemical activity in the brain which encodes it. In a similar vein, LLMs deal with arbitrary tokens, which may or may not encode words - e.g. in multimodal LMs, input includes tokens encoding images directly without any words, and output can similarly be non-word tokens.
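
A rough sketch of that token feedback loop, with a dummy stand-in for the model's forward pass (nothing here is a real inference API):

    type Token = number

    // Placeholder for a real model: in practice this would be a forward pass
    // returning the most likely next token given the whole context so far.
    // Here it just echoes the last token so the sketch is runnable.
    function nextToken(context: Token[]): Token {
      return context[context.length - 1]
    }

    function generate(prompt: Token[], maxNewTokens: number, eos: Token): Token[] {
      const context = [...prompt]
      for (let i = 0; i < maxNewTokens; i++) {
        const tok = nextToken(context) // model conditions on its own earlier outputs
        context.push(tok)              // each output token becomes part of the input
        if (tok === eos) break
      }
      return context
    }

    console.log(generate([101, 7, 42], 4, 0)) // [101, 7, 42, 42, 42, 42, 42]

Chain-of-thought works through the same loop: the "reasoning" tokens simply accumulate in the context before the final answer tokens are produced.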


> chain-of-thought is not exactly that kind of "internal loop of information" that you mention.

It is, but (1) the amount of looping in models today is extremely trivial. If our awareness loop runs on the order of milliseconds, we experience it over thousands of milliseconds at a minimum, and we consider and consolidate our reasoning about experiences over minutes, hours, even days – which would be thousands to many millions of iterations of experiential context.

Then (2), the looping of models today is not something the model is aware of at a higher level. It processes the inputs iteratively, but it isn't able to step back and examine its own responses recurrently at a second level in a different indirect way.

Even though I do believe models can reason about themselves and behave as if they did have that higher functionality.

But their current ability to reason like that has been trained into them by human behavior, not learned independently by actually monitoring their own internal dynamics. They cannot yet do that. We do not learn we are conscious, or become conscious, by parroting others' consciousness-enabled reasoning. A subtle but extremely important difference.

Finally, (3) they don't build up a memory of their internal loops, much less a common experience from a pervasive presence of such loops.

Those are just three quite major gaps.

But they are not fundamental gaps. I have no doubt that future models will become conscious as limitations are addressed.


This is what I wrote while I was thinking about the same topic, before I came across your excellent comment; it's as if it were a summary of what you just said:

Consciousness is nothing but the ability to have internal and external senses, being able to enumerate them, recursively sense them, and remember the previous steps. If any of those ingredients are missing, you cannot create or maintain consciousness.


Thanks. I do believe that is a good summary of what I was saying.


When I was a kid, I used to imagine if that society ever developed AI, there would be widespread pushback to the idea that computers could ever develop consciousness.

I imagined the Catholic Church, for example, would be publishing missives reminding everyone that only humans can have souls, and biologists would be fighting a quixotic battle to claim that consciousness can arise from physical structures and forces.

I'm still surprised at how credulous and accepting societies have been of AI developments over the last few years.


Probably because we've been conditioned to accept that machines, no matter how friendly, are not really conscious in the way we are, so there is no risk of them needing to be treated differently than a hammer.

AI developments over the last few years have not needed that view to change.


>it seems entirely possible (maybe even likely) that what we experience as consciousness is some kind of illusory side-effect of biological processes as opposed to something autonomous and “real”.

I've heard this idea before but I have never been able to make head or tail of it. Consciousness can't be an illusion, because to have an illusion you must already be conscious. Can a rock have illusions?


I think it’s more apt to say that free will is an illusion.


Well, it entirely depends on how you even define free will.

Btw, Turing machines provide some inspiration for an interesting definition:

Turing (and Gödel) essentially say that you can't predict what a computer program does: you have to run it to even figure out whether it'll halt. (I think in general, even if you fix some large fixed step size n, you can't even predict whether an arbitrary program will halt after n steps or not, without essentially running it anyway.)

Humans could have free will in the same sense, that you can't predict what they are doing, without actually simulating them. And by an argument implied by Turing in his paper on the Turing test, that simulation would have the same experience as the human would have had.

(To go even further: if quantum fluctuations have an impact on human behaviour, you can't even do that simulation 100% accurately, because of the no cloning theorem.

To be more precise: I'm not saying, like Penrose, that human brains use quantum computing. My much weaker claim is that human brains are likely a chaotic system, so even a very small deviation in starting conditions can quickly lead to differences in outcome.

If you are only interested in approximate predictions, identical twins show that just getting the same DNA and approximation of the environment gets you pretty far in making good predictions. So cell level scans could be even better. But: not perfect.)
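
To make the fixed-step-size point concrete: the obvious way to decide "does this program halt within n steps?" is just to simulate it for up to n steps, i.e. to run it. A toy sketch (the abstract-machine interface is made up for illustration):

    // A made-up abstract-machine interface, purely for illustration.
    interface Machine<S> {
      start: S
      step(state: S): S          // one transition of the machine
      halted(state: S): boolean  // is this a halting state?
    }

    function haltsWithin<S>(m: Machine<S>, n: number): boolean {
      let state = m.start
      for (let i = 0; i < n; i++) {   // no general shortcut: we just run it
        if (m.halted(state)) return true
        state = m.step(state)
      }
      return m.halted(state)
    }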


> Humans could have free will in the same sense, that you can't predict what they are doing, without actually simulating them.

I think it's a good point, but I would argue it's even more direct than that. Humans themselves can't reliably predict what they are going to do before they do it. That's because any knowledge we have is part of our deliberative decision-making process, so whenever we think we will do X, there is always a possibility that we will use that knowledge to change our mind. In general, you can't feed a machine's output into its input except for a very limited class of fixed point functions, which we aren't.

So the bottom line is that seen from the inside, our self-model is a necessarily nondeterministic machine. We are epistemically uncertain about our own actions, for good reason, and yet we know that we cause them. This forms the basis of our intuition of free will, but we can't tell this epistemic uncertainty apart from metaphysical uncertainty, hence all the debate about whether free will is "real" or an "illusion". I'd say it's a bit of both: a real thing that we misinterpret.


You are right about the internal model, but I wouldn't dismiss the view from the outside.

Ie I wouldn't expect humans without free will to be able to predict themselves very well, either. Exactly as you suggest: having a fixed point (or not) doesn't mean you have free will.


The issue I have with the view from the outside is that it risks leading to a rather anthropomorphic notion of free will, if the criterion boils down to that an entity can only have free will if we can't predict its behavior.

I'm tempted to say an entity has free will if it a) has a self-model, b) uses this self-model as a kind of internal homunculus to evaluate decision options and c) its decisions are for the most part determined by physically internal factors (as opposed as external constraints or publicly available information). It's tempting to add a threshold of complexity, but I don't think there's any objectively correct way to define one.


I don't understand why a self-model would be necessary for free will?

> [...] c) its decisions are for the most part determined by physically internal factors (as opposed as external constraints or publicly available information).

I don't think humans reach that threshold. Though it depends a lot on how you define things.

But as far as I can tell, most of my second-to-second decisions are very much coloured by the fact that we have gravity and an atmosphere at comfortable temperatures (external factors), and if you changed that all of a sudden, I would decide and behave very differently.

> It's tempting to add a threshold of complexity, but I don't think there's any objectively correct way to define one.

Your homunculus is one hell of a complexity threshold.


There is no you to have the illusion.


> I think that LLMs can be said to be ”think” in ways that are analogous to what we do. ... But human consciousness produces an experience (the experience of being conscious) as opposed to some definite output. I do not think it is computable in the same way.

"We've all been dancing around the basic issue: does Data have a soul?" -- Captain Louvois. https://memory-alpha.fandom.com/wiki/The_Measure_Of_A_Man_(e...


That may be an illusion. And easily outputtable in the same way. Function calling in the output to release certain hormones etc.


I for one (along with many thinkers) define intelligence as the extent to which an agent can solve a particular task. I choose the definition to separate it from issues involving consciousness.

Both matter of course.


And that's a useful and pragmatic definition, because it's very hard to measure the other definition even just for other humans.


To state it's a Turing machine might be a bit much, but there might be a map between substrates to some degree, and computers can have a form of consciousness, an inner experience: basically the hidden layers, and clearly the input of senses. But it wouldn't be the same qualia as a mind. I suspect it has more to do with chemputation and is dependent on the substrate doing the computing, as opposed to a facility thereof, up to some accuracy limit; we can only detect light we have receptors for, after all. To have qualia distinct to another being you need to compute on a substrate that can accurately fool the computation: fake sugar instead of sugar, for example.


What we have that AIs don't are emotions. After all, that's what animates us to survive and reproduce. Without emotions we can't classify and therefore store our experiences, because there's no reason to remember something we are indifferent about. This includes everything not accessible by our senses. Our abilities are limited to what is needed for survival and reproduction, because all the rest would consume our precious resources.


The larger picture is that our brains are very much influenced by all the chemistry that happens around our units of computation (neurones); especially hormones. But (maybe) unlike consciousness, this is all "reproducible", meaning it can be part of the algorithm.


We don’t know that LLMs generating tokens for scenarios involving simulations of consciousness don’t already involve such experience. Certainly such threads of consciousness would currently be much less coherent, and more fleeting, than the human experience, but I see no reason to simply ignore the possibility. To whatever degree it is even coherent to talk about the conscious experience of others than yourself (p-zombies and such), I expect that as AIs’ long-term coherency improves and AI minds become more tangible to us, people will settle into the same implicit assumption afforded to fellow humans: that there is consciousness behind the cognition.


The very tricky part then is to ask if the consciousness/phenomenological experience that you postulate still happens if, say, we were to compute the outputs of an LLM by hand… while difficult, if every single person on earth did one operation per second, plus some very complicated coordination and results gathering, we could probably predict a couple of tokens for an LLM at some moderate frequency… say, a couple of tokens a month? a week? A year? A decade? Regardless… would that consciousness still have an experience? Or is there some threshold of speed and coherence, or coloration that would be missing and result in failure for it to emerge?

Impossible to answer.

Btw I mostly think it's reasonable to think that consciousness, phenomenology, etc. are possible in silicon, but it's tricky and unverifiable ofc.


> would that consciousness still have an experience?

If the original one did, then yes, of course. You're performing the exact same processing.

Imagine if instead of an LLM the billions of people instead simulated a human brain. Would that human brain experience consciousness? Of course it would, otherwise they're not simulating the whole brain. The individual humans performing the simulation are now comparable to the individual neurons in a real brain. Similarly, in your scenario, the humans are just the computer hardware running the LLM. Apart from that it's the same LLM. Anything that the original LLM experiences, the simulated one does too, otherwise they're not simulating it fully.


You are assuming that consciousness can be reproduced by simulating the brain. Which might be possible but it's by no means certain.


You can simulate as much of the human as you need to. So long as consciousness is a physical process (or an emergent property of a physical process), it can be simulated.

The notion that it is not a physical process is an extraordinary claim in its own right, which itself requires evidence.


You can simulate as much of an aircraft as you need to. So long as flying is a physical process, it can be simulated.

But your simulation will never fly you over an ocean; it will never be an aircraft or do what aircraft do. A simulation of heat transfer will not cook your dinner. Your assumption that a simulation of a mind is a mind requires evidence.


> But your simulation will never fly you over an ocean

It will fly over a simulated ocean just fine. It does exactly what aircraft do, within the simulation. By adding “you” to the sentence you've made it an apples to oranges comparison because “you” is definitionally not part of the simulation. I don't see how you could add the same “you” to “it will simulate consciousness just fine”.


> "It does exactly what aircraft do"

It doesn't move real Oxygen and Nitrogen atoms, it doesn't put exhaust gas into the air over the ocean, it doesn't create a rippling sound and pressure wave for a thousand miles behind it, it doesn't drain a certain amount of jet fuel from the supply chain or put a certain amount of money in airline and mechanics' pockets, it doesn't create a certain amount of work for air traffic controllers... reductio ad absurdum is that a flipbook animation of a stickman aircraft moving over a wiggly line ocean is a very low granularity simulation and "does exactly what aircraft do" - and obviously it doesn't. No amount of adding detail to the simulation moves it one inch closer to doing 'exactly what aircraft do'.

> "I don't see how you could add the same “you” to “it will simulate consciousness just fine”"

by the same reductio-ad-absurdum I don't see how you can reject a stickman with a speech bubble drawn over his head as being "a low granularity simulated consciousness". More paper, more pencil graphite, and the stickman will become conscious when there's enough of it. Another position is that adding things to the simulation won't simulate consciousness just fine - won't move it an inch closer to being conscious; it will always be a puppet of the simulator, animated by the puppeteer's code, always wooden Pinocchio and never a real person. What is the difference between these two:

a) a machine with heat and light and pressure sensors, running some code, responding to the state of the world around it.

b) a machine with heat and light and pressure sensors, running some code [converting the inputs to put them into a simulation, executing the simulation, converting the outputs from the simulation], and using those outputs to respond to the state of the world around it.

? What is the 'simulate consciousness' doing here at all, why is it needed? To hide the flaw in the argument; it's needed to set up the "cow == perfectly spherical massless simulated cow" premise which makes the argument work in English words. Instead of saying something meaningful about consciousness, one states that "consciousness is indistinguishable from perfectly spherical massless simulated consciousness" and then states "simply simulate it to as much detail as needed" and that allows all the details to be handwaved away behind "just simulate it even more (bro)".

Pointing out that simulations are not the real thing is the counter-argument. Whether or not the counter-argument can be made by putting "you" into a specific English sentence is not really relevant, that's only to show that the simulated aircraft doesn't do what the real aircraft does. A simulated aircraft flying over a simulated ocean is no more 'real' than drawing two stick figures having a conversation in speech bubbles.


You just wrote a lot of text just to say that you don't accept the simulation as “real”.

That's just semantics. I'm not here to argue what the word “real” means. Of course you can define it in such a way that the simulated aircraft isn't “really” flying over an ocean, and it would be just as valid as any other definition, but it doesn't say anything meaningful or insightful about the simulation.

Nobody contests your point that the simulated aircraft isn't going over a real ocean and isn't generating work for real-life air traffic controllers. But conversely you don't seem to contest the claim that oceans and air traffic controllers could be simulated, too. Therefore, consciousness can be simulated as well, and it would be a simulated consciousness that just doesn't fall into your definition of “real”.


You need to clearly define what constitutes "real" before we can meaningfully talk about the distinction between "real" atoms and simulated ones.

As far as physics go, it's all just numbers in the end. Indeed, the more we keep digging into the nature of reality, the more information theory keeps popping up - see e.g. the holographic principle.


> "As far as physics go, it's all just numbers in the end."

No it isn't; numbers are a map, maps are not the territory. You are asking me to define how a map is different from a city, but you are not accepting that the city is made of concrete and is square kilometers large and the map is made of paper and is square centimeters large as a meaningful difference, when I think it's such an obvious difference it's difficult to put any more clearly.

What constitutes a real atom: a Hydrogen atom capable of combining with Oxygen to make water, capable of being affected by the magnetic field of an MRI scanner, etc.

What constitutes a simulated atom: a pattern of bits/ink/numbers which you say "this is a representation of a Hydrogen atom", capable of nothing, except you putting some more bits/ink/numbers near it and speaking the words "this is it interacting to make simulated water".


Ok, you are saying that a map is different than the territory. That a simulation is meaningfully different.

Do you deny that you could be in a simulation right now, in the matrix? That what you think are molecules of oxygen are actually simulated molecules? That there is no way for you to ever tell the difference?


Is simulate the right word there? With a hundred trillion connections between 80 billion neurons, it seems unlikely that it would ever be worth simulating a human brain, because it would be simpler to just build one than to assemble a computer complex enough to simulate it.


Yes that’s my main point - if you accept the first one, then you should accept the second one (though some people might find the second so absurd as to reject the first).

> Imagine if instead of an LLM the billions of people instead simulated a human brain. Would that human brain experience consciousness? Of course it would, otherwise they're not simulating the whole brain.

However, I don’t really buy “of course it would,” or in another words the materialist premise - maybe yes, maybe no, but I don’t think there’s anything definitive on the matter of materialism in philosophy of mind. as much as I wish I was fully a materialist, I can never fully internalize how sentience can uh emerge from matter… in other words, to some extent I feel that my own sentience is fundamentally incompatible with everything I know about science, which uh sucks, because I definitely don’t believe in dualism!


It would certainly, with sufficient accuracy, honestly say to you that it's conscious and believe it wholeheartedly, but in practice it would need to be able to describe external sense data a priori, as it's not necessarily separate from the experiences, which intrinsically requires you to compute in the world itself; otherwise it would only be able to compute on it. In a way it's like having edge compute at the skin's edge. The range of qualia available at each moment will be distinct to each experiencer with the senses available, and there likely will be some overlap in interpretation based on your computing substrate.

We in a way can articulate the underlying chemputation of the universe mediated through our senses, reflection and language, turn a piece off (as it is often non continuous) and the quality of the experience changes.


But do you believe in something constructive? Do you agree with Searle that computers calculate? But then numbers and calculation are immaterial things that emerge from matter?


I'm a fan of Carlo Rovelli's popular science books. "The Order of Time" in particular left a big impression on me. He is a philosopher as well as a scientist, and he has a gift for writing beautiful and accessible prose about some pretty heady topics.

If you are interested in RQM but find this article somewhat dry, check out his short book "Helgoland".

I find both RQM and Rovelli's notion of "thermal time" (i.e. the idea that causality and entropy are emergent properties dependent on our perspective, as opposed to fundamental features of reality) to be very convincing.


I just finished Rovelli's "Reality Is Not What It Seems" last weekend and enjoyed it. It is a broad and high level summary of the progress of physics and the book ends with some of the more speculative ideas being pursued. I especially enjoyed the "thermal time" ideas you mention, which he touches on in this book.


Agreed – I enjoyed this article, but the conclusion (that philosophers or intellectuals may need to come to terms with self-censorship or obscurantism to avoid drawing the ire of the masses) strikes me as very pessimistic.

We've gained a lot of freedoms since Spinoza's era and we shouldn't be so quick to surrender them.


I live in Portland and I 100% agree with this. I say this as someone who voted for Measure 110 and now regrets that vote. At this point I would support a straight-up repeal. At least under the old system some people who were arrested were able to get clean in jail, or were able to enter court-mandated treatment programs.

The reality is that people in the throes of drug addiction have already lost their agency, so some kind of coercive intervention will often be necessary to break the cycle. By refusing to do this out of a (commendable) compassionate impulse, we are making the situation worse.

In general the last ~5 years of living here has been a lesson of how important order (i.e. the enforcement of rules and norms) is for a functioning society. You could say it is the foundation of all social goods.

Watching the city's decline up close has deeply altered my political beliefs on a number of topics – this is one of them.

