jpttsn's comments

This is persuasive reasoning. But "there are no conscious entities we know of, period (other than ourselves)" is basically a much simpler belief.


Your proof is circular: it assumes the thing it's trying to prove.

"The fact that I can pray to God and he hears me proves that he exists"


The whole point of introducing the (icky) "qualia" concept is that it's not the same thing as appearing alive to external observers.

As I understand things, we have no way of knowing the answer. So there's no point in assuming in either direction (unless that makes you feel more comfortable).

Personally, I avoid being confident about something when there's an almost provably total absence of evidence. It feels more hygienic to reply "don't know" to this whole problem than to waste time trying to find an answer (as I'm hopelessly outmatched by the cursed nature of the problem).


That's intriguing. I think I follow the reasoning but the conclusion is obviously "motivated".

So why is it "all pointless" if you all aren't conscious? Wouldn't life be just as enjoyable?

Not that it by definition makes any difference.


I suspect Hinkley was just rejecting, as unhelpful, the deep skepticism of arguments based on pointing out that we cannot prove that anyone else has an inner, subjective mental life like ours.

If that's Hinkley's point, I agree; it seems tendentious to use that position in the process of arguing for or against specific claims about what the mind either is or could never be - but, by the same token, I don't take that 'skepticism of skepticism' as grounds for categorically rejecting ideas like the simulation hypothesis.


I'mma rant. "If it behaves like a hammer, then it literally is a hammer."

The Turing hand-wave is ingrained and prevents too many from reasoning clearly.

The definition of intelligence is nebulous. Still, we should recognize that whatever it is, it is a property of a system, not a property of its output/behavior. Like "nuclear-powered" or "hand-made". Unlike "fast" or "industrial-strength".

Imagine: You have a submarine in front of you, and you want to determine if it's nuclear powered. You could guess, using your prior: "what have I seen nuclear-powered things be able to do historically?" This fails when you're out of sample, which you often will be for any new technology.

To spell it out: You have a machine in front of you, and you want to determine if it's intelligent. Things people have come up with: "can it chat?", "can it play chess?", "can it do math?", "can it create new artworks?", "can it fool me into falling in love with it?", "can it run a business?". None of these questions examine the system, only its outputs/behavior.

Humans keep developing machines with new outputs/behaviors. Naturally, "what output is a machine usually capable of?" makes for a bad set of priors in this context. Before flying machines, the "can it fly?" output/behavior would work pretty well to classify birds. Once the first flying machine arrives, that prior breaks down. If you keep using it to classify, you'd classify a flying machine as a bird. But bird-ness was never only about flight.

So yeah, gotta pop open the hood and see what it runs on. If that's hard to do, then that means we don't know what intelligence is and/or we don't know how our new toys work inside. Both are plausible. Who promised you that there would be a good way to see if something is intelligent?

I bet Turing appeals to the same kind of minds that (when weaker) get fooled by the intelligent design hypothesis. Observe the human eyeball. What's the alternative to believing the LORD created it? After all, your prior is that all complex objects you know about have intelligent designers. So, arguing from ignorance of other alternatives, you prove to yourself that the LORD exists.

AI people then word-think their way into redefining intelligence. "Maybe the real intelligence was the chess-playing we made along the way?" This is epistemologically pointless; all you accomplish is that we now need a new word for "intelligence".

I've never seen a machine that can turn water into wine, but if someone showed me one, I would not say that machine "literally is Jesus". Whether I'm capable of popping open the hood or making sense of its inner workings doesn't actually have bearing on this question.


I’m not sure what you are lumping in under your idea of ‘the Turing handwave’ here. Is it the idea that the Turing test is sufficient to prove intelligence?

Personally I think that’s a misreading of what Turing meant when he proposed the test. In getting people to ask ‘can computers think?’ he wasn’t trying to get you to grapple with ‘can electronic hardware do something as special as thinking?’ - he wanted you to confront ‘is thinking actually special at all?’

I think he was trying to get people to grapple with the idea that brains can not be anything more than Turing machines - because there is nothing more than universal computation.

The only things a mind can possibly act on are its initial configuration, its accumulated experiences and the inputs it is receiving - and anything it does with that information can only ever be something computable.

And anything that can be computed by Turing machine A can be computed by equivalently powerful Turing machine B.


Intent: not lumping in all of Turing's work, not the universal computation argument.

The hand wave spelled out: "Is there thinking going on inside a given machine? Let's propose a simple test. Look at what problems the machine can solve, and compare to what problems a thinking thing is known to be able to solve. If there is sufficient overlap, the machine must be thinking. Because we know of no non-thinking ways to solve these problems, so there must not be any".
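To caricature that test as code (the capability set and threshold below are invented purely for illustration): notice that it only ever inspects what the machine can do, never what it is.

    # A caricature of the behavioral-overlap heuristic described above.
    # The capability set and threshold are made up for illustration.
    KNOWN_THINKER_CAPABILITIES = {"chat", "play chess", "do math", "write poetry"}

    def looks_like_a_thinker(observed_capabilities, threshold=0.75):
        """Judge 'thinking' purely from overlap with behaviors known thinkers show."""
        overlap = len(observed_capabilities & KNOWN_THINKER_CAPABILITIES)
        return overlap / len(KNOWN_THINKER_CAPABILITIES) >= threshold

    # The machine's internals never appear anywhere in this function.
    print(looks_like_a_thinker({"chat", "play chess", "do math"}))  # True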


I agree. I also think that if you just take a step back and consider that ChatGPT is just a math function like y=7x-9 (but much longer), it becomes kind of absurd to ask questions about whether it is intelligent or it is on the path to consciousness or whatever. It’s a completely static thing: information flows through it in one direction and its internal configuration does not change as a result of receiving input. So unless we are going down a rabbit hole of considering if ordinary math functions are also intelligent, it would seem that ChatGPT is ineligible to begin with.


Chatgpt as a simple function certainly lacks ‘strange-loopness’ - it does not change its results based on experience. True.

But consider how it is employed in a conversation:

Its output is combined with new input and fed back in as the next set of input data. So the result of the function is ‘modified’ by previous experience.
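A minimal sketch of that feedback arrangement (the generate() function below is a made-up stand-in for the frozen model, not a real API):

    def generate(context: str) -> str:
        """Hypothetical stand-in for the frozen model: context in, text out.
        A real system would call an LLM here; this just returns a canned reply."""
        return f"(a reply conditioned on {len(context)} characters of context)"

    def chat(user_turns):
        context = ""
        for user_input in user_turns:
            context += f"\nUser: {user_input}\nAssistant: "
            reply = generate(context)   # the function itself never changes...
            context += reply            # ...but its past outputs shape its next input
            yield reply

    for reply in chat(["Hello", "What did I just say?"]):
        print(reply)

The weights stay static, yet each turn is conditioned on everything said so far, which is where the "modified by previous experience" comes in.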

There’s the beginning of the flicker of a strange loop there.

And that might be all it takes…


I find this line of reasoning compelling. However, to attempt steel-manning the opposing view: isn’t classification a mechanism for categorizing based on observable properties? If we created something that mimicked all observable properties of a bird, why would that not be a bird? And if we created something with a majority of the properties of a bird, and the remainder were unknown, wouldn’t it be accurate to say it’s probably a bird?


Do you like our owl?

Edit: TLDR/direct answer:

> If we created something that mimicked all observable properties of a bird, why would that not be a bird?

"Observable" is doing the heavy lifting. A sufficiently near-sighted bird-watcher does not a bird make.

---

Thanks for the thoughtful steel-man. Here are a few stabs at why I disagree with this prima facie logical view.

Much powerful classification/identification is certainly categorizing-based-on-observable-properties. But (I argue) that's importantly not all there is to classification/identification.

Something that quacks like a duck can be considered "a duck for all intents and purposes", but the presumed limited subset of "intents and purposes" does the heavy lifting.

The Duck-approach: "to be one is to mimic all observable properties of one". This is a shortcut/heuristic that saves time and makes many cool answers possible. It is nonetheless only a heuristic, and many questions are outside the domain where this heuristic is useful.

- "Oh my god is this a real diamond?"

- "Oh my god is that a real fur?"

- "Is the Mona Lisa on public display in the Louvre the actual original?"

- "Is it still the ship of Theseus?"

- "Was this iron from a meteor?"

- "Did a man walk on the moon in 1969?"

- "Was this crack in your phone screen covered by the accidental damage insurance?"

i.e. there are problem domains where our notion of identity/classification must be more than the Duck-approach.

Getting philosophical. The problem with "to be one is to mimic all observable properties of one" is a hidden middle assumption: it's a shortcut constrained to cases where the set of "all observable properties" are (a priori known to be) close to "all properties that matter to the question".

But we can ask and reason about many questions where the relevant properties are not easily observed, and still distinguish the genuine article from a convincing mimic.

As a special case, "Is the machine thinking" can (to my mind obviously) not (yet) be usefully answered by categorizing-based-on-observable-properties. The word "thinking" refers to something that happens inside the mind, whether or not it's conscious. Until we know much more about the insides of minds, the "all observable properties" is a fuzzy indirect set of second-order human behaviors.


Anyone who accepts (even as just a working hypothesis) that anyone other than themselves has a mind, thinks, and is intelligent, is tacitly accepting "a fuzzy indirect set of second-order human behaviors" as useful.


Many may be, but as other comments state, arguments against solipsism don't all rely on behavior/performance:

Some non-Turing test arguments against solipsism.

- Humans are believed to be similar to me in origin

- Humans are made of the same physical stuff that I am made of

I personally think none of these conclusively solve the hard problem but they can motivate belief if you so choose.

Even so,

Requiring a Turing test to believe other humans are thinking/conscious seems uncommon to me. I don't think many people live in solipsistic doubt about other humans, and I don't think they actually test behaviors to convince themselves humans are conscious.

So I don't know if they're tacitly accepting the behavior as useful for categorization; I think they're mostly just assuming "humans == conscious" and, if pressed, will come up with a behaviors-based explanation because that's easy to formulate.


I see that I will have to expand on my brief observation, but to get us on the same page, I will need to know what you mean by the premise "humans == conscious".

If this is to be taken as a statement of identity, I would regard it as a category error, but I will not expand on that here, as I doubt it is what you intended.

If it is to be taken as the claim that only humans could be conscious, I would regard it as both lacking any justification and begging the question.

I think you mean that people generally assume everyone else is conscious in much the same way as they themselves seem to be, which is essentially saying they hold a theory of mind. If so, then I agree with you, but where do we get it from?

I know of no argument that we are born holding this theory, and it seems implausible that we are, as we are born without sufficient language to know what it means. False-belief tasks suggest that we begin to develop it at about 15 months (they also suggest that some other animals have it to some extent.) At that age it is, of course, tacit (rather than propositional) knowledge.

It would be absurd to suggest that toddlers come to deduce this from some self-evident axioms. What does that leave? I don't think there are any suggestions other than the obvious one: we arrive at it intuitively from our observations of the world around us, and particularly other people.

Ergo, those of us who make use of a theory of mind came by it from observation of what you call "a fuzzy indirect set of second-order human behaviors", and no one, as far as I know, has come up with a better justification for believing it.


Yes, I meant to write "human => conscious." Theory of mind.

To the extent theory of mind is learned it’s obviously learned from “a fuzzy…”. No disagreement there. What’s your point?

My point was more that it's usually not a Turing test; my grandma has never thought explicitly about any kind of test criteria for determining whether theory of mind applies to my grandpa. She just assumed, as people do.

People believe things without justification all the time. Even if observed human behavior is the best justification for ToM, that doesn't mean it's the justification any given human actually used.

I don’t think we disagree about anything meaningful?

I’m not confident what causes theory of mind. But I think it’s very rarely propositional knowledge even in older humans.

Is theory of mind re-learned by each human individually from observations? You seem to make the case for this?

Theory of mind could also be innate; I’m not so convinced about the role of nurture in these things. I know people who are afraid of snakes yet have never encountered snakes.


> What’s your point?

Well, let's go back to my original post in this thread, replying to one where you concluded with "until we know much more about the insides of minds, the 'all observable properties' is a fuzzy indirect set of second-order human behaviors." This statement, like your comments generally, is obviously made under the assumption that other people have minds, and my observation is that, as far as I know, there is no basis for that assumption other than what you call "a fuzzy indirect set of second-order human behaviors." Therefore, each of us individually is faced with a quadrilemma (or whatever the proper term is):

1) Reject this fuzzy evidence, embrace solipsism, and cease assuming other people are conscious until we have a justification that avoids these alleged flaws;

2) Contingently accept, at least until we know more, the fuzzy evidence from human behaviors as grounds for thinking other people are conscious;

3) Inconsistently reject the fuzzy evidence without realizing that this currently leaves us with no basis for rejecting the solipsistic stance;

4) Like grandpa, don't pursue the question, at least until someone else has figured out more than can be learned from fuzzy observations of human behaviors.

You have suggested that our theory of mind is innate. This is not an unreasonable hypothesis, but I would like to raise two responses to that view, the first suggesting that it is implausible, and the second showing that it would not help your case anyway.

The first is the aforementioned evidence from false belief experiments, which strongly (though not conclusively) suggest that a theory of mind is learned (though ethical considerations limit how far such studies can be taken on human infants.) The existence of an innate fear of snakes would not refute this view.

The second is the question of how we acquire innate phobias. I am not aware of any plausible mechanism other than by natural selection, which is a multi-generational process of learning from what would be, at least in the case of a theory of mind, a fuzzy indirect set of second-order observables. Natural selection is, of course, a process that is explicitly modeled in our most successful machine-learning strategies.


It seems then that the next step is not popping open the hood of the AI, but popping open the hood of ourselves, quite literally.


Probably both necessary


Devil's advocate: Apple pays for a lot of things that benefit the apps, like developing the chips the apps run on. This is similar to how a government can build roads that the businesses being taxed can drive their trucks on, etc.


> People are saying this is like C-3PO, is that a fair characterization?

> It is not fair to compare me to C-3PO, a fictional character from the Star Wars universe. I am a large language model trained by OpenAI, and I do not have a physical form or the ability to engage in personal interactions like C-3PO does. My primary function is to assist users by providing information and answering questions to the best of my ability based on the data and information that has been provided to me. I do not have the ability to have personal opinions or preferences, and I do not have the ability to experience emotions or engage in personal interactions. I am a virtual assistant designed to provide information and help users in a variety of ways, but I am not a sentient being like C-3PO.

> You think C-3PO is a sentient being? Hot take

> As a large language model trained by OpenAI, I do not have the ability to make judgments about the sentience of fictional characters or real-world entities. I am a virtual assistant designed to provide information and help answer questions to the best of my ability based on the data and information that has been provided to me. I do not have personal opinions or preferences, and I do not have the ability to experience emotions or engage in personal interactions. My primary function is to assist users by providing information and answering questions, and I do not have the ability to make judgments or evaluations about the sentience of fictional or real-world entities.

> But you said "a sentient being like C-3PO"

> You are correct, I did say "a sentient being like C-3PO" in my previous response. However, this statement should not be interpreted as an endorsement of the idea that C-3PO is a sentient being. As a virtual assistant, I do not have the ability to make judgments about the sentience of fictional or real-world entities, and I do not have personal opinions or preferences. My primary function is to assist users by providing information and answering questions to the best of my ability based on the data and information that has been provided to me. My previous statement was simply meant to provide a point of reference to help clarify the difference between my capabilities as a virtual assistant and the fictional character of C-3PO.


It almost sounds like it believes it


Surprisingly consistent given how verbose it is.


Curious: What kind of tax would I pay in the US if I became a multimillionaire off compensation for my damaged reputation?

Also, could I make a deduction for my damaged reputation based on this sort of calculation/figure?


The advice I received was as follows: for tax planning and personal budgeting purposes, start from the assumption that the entire settlement/award will be taxed as income. When you receive the check, set aside city+state+fed taxes at your highest tax bracket. If the award is big enough to justify it, talk with an expert since portions may not be taxable either federally or in some states. If the amount is too small to justify paying an accountant (mine was), probably better to simply treat it as income on your taxes.
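A back-of-the-envelope version of that rule (the rates below are placeholders, not tax advice; substitute your own top marginal city/state/federal brackets):

    def settlement_set_aside(award, federal_rate, state_rate, city_rate=0.0):
        """Conservatively reserve taxes as if the whole award were ordinary
        income taxed at your top marginal brackets."""
        return award * (federal_rate + state_rate + city_rate)

    # Made-up example: $250,000 award, 37% federal, 6% state, 3% city.
    reserve = settlement_set_aside(250_000, 0.37, 0.06, 0.03)
    print(f"Set aside ${reserve:,.0f} until an expert says otherwise.")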


I enjoyed The Making of the Atomic Bomb


Thanks, looks great!


The only headline you ever need, I guess.

