Fair to say there is no consensus on what experience, consciousness, and so forth are, but it is clear that Claude does not have those things. It is a word calculator. The word calculation is sophisticated and can simulate the verbal reporting of experiential, conscious beings, but it does not actually have those things itself.
Said another way: creatures that lack verbal symbolic language very likely have experience and consciousness, and Claude is definitely not one of those. Its "experience" is just the calculations across word sequences within a given set of conversations.
There is a long history of writing in this space, and it is interesting that these models are not really anticipated by that literature. So the line between simulation via word calculation and genuine reporting via verbal capacity is not well understood, and the human ability to discern simulation over only a word/conversation channel is limited.
So to answer your question: a Claude could easily be constructed to fool you into imputing continuity of self to it. But being fooled by a grift is not the same as the real thing.
I'm not sure that "it is clear that Claude does not have those things".
I AM sure that it is hard to conclusively show that Claude has experience and consciousness. Even Claude isn't sure about that.
But while it is absolutely true that "it is a word calculator" - unless you hold the position that human consciousness isn't neural[1] - I don't see how this is any different from saying human beings are neural activation pattern calculators.
If you're sure that your consciousness isn't neural - then fine: Claude isn't made of the right stuff, so it couldn't possibly be conscious. But state your assumption up front.
If one opens up a person and looks at their nervous system, the individual neurons look complicated, but not especially mysterious.
Given how shockingly little we understand about the brain/mind, it is hard to be sure we know enough about how we work; and given how little we know about how LLMs work at any of the many layers above the raw architecture, either position can be reasonably held, but not convincingly argued or demonstrated.
Feel free to think Claude isn't conscious - I can't prove to you that it isn't. And the amount of theory we would still need to learn to be able to do so is vast.
But don't expect me to be _certain_ that it isn't and couldn't be - you simply can't show that convincingly either.
[1]
Penrose thinks we have a quantum nature - so, sure, no classical computer can be conscious then.
Some, like Rupert Sheldrake, think it's a field phenomenon - very woo, but then maybe Claude has a morphic field as well?
Lots of people are sure we have a supernatural soul/spirit. One then needs to take up Claude's status with the Creator.
I think that's true, but do you see MCP as enough of a discovery primitive on its own, or does it still lack a ranking/trust layer? My intuition is that capability exposure is only half the problem, and the harder part is how agents evaluate and choose between multiple similar tools.
Take Supabase, for example. It's disproportionately recommended by LLMs when people ask for backend/database stacks. That can't be just because of its capabilities, since a lot of tools expose similar primitives. Something in the model's training data, ecosystem visibility, or reinforcement layer is shaping that ranking.
If agents start choosing tools autonomously, the real leverage point isn't just "can you describe your capabilities in MCP?" but "how does the agent decide you're preferred over five near-identical alternatives?"
Do you think that ranking layer sits inside the model providers, or does it become an external reputation network?
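To make "ranking layer" concrete, here is a minimal sketch of how an agent might score near-identical tools. Everything in it is an assumption for illustration: the ToolCandidate fields, the weights, and the numbers are hypothetical, not anything MCP actually specifies.

    # Hypothetical sketch: ranking near-identical tools by more than capability.
    from dataclasses import dataclass

    @dataclass
    class ToolCandidate:
        name: str
        capability_match: float  # 0-1: fit of declared capabilities to the task
        training_prior: float    # 0-1: how strongly the model defaults to this tool
        reputation: float        # 0-1: signal from a hypothetical external trust network

    def rank(candidates: list[ToolCandidate]) -> list[ToolCandidate]:
        # When capability_match is near-identical across candidates, the
        # prior and reputation terms dominate the ordering; that is the
        # leverage point in question. The weights here are arbitrary.
        def score(t: ToolCandidate) -> float:
            return 0.4 * t.capability_match + 0.3 * t.training_prior + 0.3 * t.reputation
        return sorted(candidates, key=score, reverse=True)

    # With equal capability_match, the prior/reputation terms decide:
    print([t.name for t in rank([
        ToolCandidate("supabase", 0.9, 0.9, 0.7),
        ToolCandidate("generic-postgres", 0.9, 0.4, 0.6),
    ])])

If that scoring effectively lives inside the provider's model weights (the training prior), it is invisible and hard to contest; if it lives in an external reputation term, it becomes a network someone can build.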
So, totally uninteresting. There is no "being a _" experience when "you" are a word calculator. Thinking that there is, or that there is something useful there, is a category error.
He has not, and it isn't the right framing. In this telling he has also "erased the stain" on many crimes. We will see how long that whitewashing lasts.
ADRs are similar, yet the important point is not the format of the design log entry but its usage with AI.
With a design log, we ask the AI to create the ADR or log entry, but unlike a plain ADR we:
1. require the AI to ask questions, which we answer;
2. require the AI to update the design log after implementation with any drift from the original design.
Both of the above help make the AI more precise and prevent context drift.
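As an illustration only (the template wording and the example change are mine, not a prescribed format), the two rules above might be baked into a reusable prompt like this:

    # A minimal sketch of the design-log workflow described above.
    # The template text and the example change are illustrative assumptions.
    DESIGN_LOG_PROMPT = """\
    Create a design log entry (ADR-style) for: {change}

    Rules:
    1. Before writing the entry, ask me clarifying questions and wait
       for my answers.
    2. After implementation, update this entry with any drift from the
       original design, under a "Drift" heading.
    """

    if __name__ == "__main__":
        # Paste the rendered prompt into whatever AI chat/agent you use.
        print(DESIGN_LOG_PROMPT.format(change="switch session store to Redis"))

Rule 2 is what keeps the log honest: the entry ends up describing what was actually built, not just what was intended, which is most of what prevents context drift in later sessions.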