Hacker News

Ah, so the people using the term "AGI" are like, "Ha-ha, I chose a vague term, now you can't argue semantics, take that!"



We understand what intelligence is better than we understand what consciousness is. "Artificial Consciousness" would be the vague term.


Not sure what you mean. I've seen some Hacker News commenters argue the exact opposite and downvote me for exploring the opposite concept, and now the reverse is happening. I guess this is not the right place to ask or think.


> Not sure what you mean.

I mean that e.g. if we create a machine that can solve every thinking-related problem that humans can solve, then we can be certain that we have created artificial intelligence. But how are we supposed to ascertain that we have created something conscious, as in a machine with subjective experience? Strictly speaking I can't be certain that _you_ are conscious. (Also, why would we replace "AGI" with "AC", when people are looking to build something intelligent, irrespective of whether it has internal subjective experience?)

> I guess this is not the right place to ask or think.

That has not been my experience.


>> I mean that e.g. if we create a machine that can solve every thinking-related problem that humans can solve, then we can be certain that we have created artificial intelligence.

This is the very notion I'd like to challenge. First of all, there is nothing concrete here so I will make up some definitions.

For simplicity's sake, suppose you define thought as a way of iterating over a large knowledge graph (assuming a graph is not a grossly inefficient way of representing knowledge), and forming new knowledge (making inferences) as a way of extending that graph through certain constraints (maybe axiomatic, maybe probabilistic) that somehow also exist within that graph. What goal would such a graph have other than the ones you give to it? This would make AGI just an interactive machine.
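To make the hypothetical concrete: here's a minimal sketch of that model, with facts as graph edges and an inference step that extends the graph under a stored rule. All names (the relations, the rule shape) are illustrative assumptions, not from any real system:

```python
# Sketch of "thought as graph iteration": facts are edges (subject,
# relation, object), and inference extends the graph by applying a
# transitive-closure constraint. The names are made up for illustration.

facts = {
    ("socrates", "is_a", "human"),
    ("human", "subclass_of", "mortal"),
}

def infer(facts):
    """One inference step: if X is_a Y and Y subclass_of Z, add X is_a Z.

    Note the machine only iterates; the *goal* (which rule to apply,
    when to stop) is supplied from outside -- the commenter's point.
    """
    derived = set()
    for (a, r1, b) in facts:
        if r1 != "is_a":
            continue
        for (c, r2, d) in facts:
            if r2 == "subclass_of" and c == b:
                derived.add((a, "is_a", d))
    return facts | derived

closed = infer(facts)
```

After one step, `("socrates", "is_a", "mortal")` is in the graph, but only because an external driver chose to call `infer` at all.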

And if you can't give it adequate goals that will at least make it pass a Turing test, what good is a graph that can be used for emergent inferences? My real gripe is that "it" isn't intelligent: "you" are intelligent, and "you" gave it goals. So subjectivity is, in my opinion, inescapable when you are talking about intelligence.

I will concede that you can't know I have subjective experiences, but practically that's not a very useful thing to say. If it doesn't matter, why bring it up? If it does matter, why not use your past experience to have a belief that I am conscious despite that belief being subject to future modification? That's how I'd treat a perceived AC.


Since when is avoiding arguing semantics a bad thing?


Since Wittgenstein, Umberto Eco, Roland Barthes? How will you improve your knowledge and understanding of concepts if you don't argue in order to find out the best way of expressing them? Common sense is not a substitute for knowledge. Blanket statements and short dismissals are not a way of furthering our understanding of inference engines and whether subjective experience is required for intelligence and how to quantify that subjective experience.


If we had to preface every inquiry with a philosophical debate on everything semi-related to the subject at hand we’d get nowhere (including within philosophy itself). Should all math papers include a section where they argue about why one should use ZFC vs IZF or type theory because that decision might have impact on the matter at hand?

Blanket statements and short dismissals are great when their content is "that's an interesting topic but not necessarily what everyone is trying to discuss right now." Discussions on AI risk may not be augmented by understanding of subjective experience, or may require developments that cannot be acquired via even another 100 years of navel-gazing on the subject. You've not even attempted to justify why this would be the case, and instead started complaining right off the bat that nobody had the inclination to immediately discuss your favorite tie-in to the subject.

You’ll notice that people were actually happy to talk about consciousness once you brought it up, and probably would have been even happier to do so if you didn’t start off with such a curmudgeonly tone and spend a bunch of time accusing everyone of intellectual dishonesty because their interests differ from yours.


Well, I've explained in a comment above why I think consciousness is directly proportional to intelligence, and I was actually pretty content with the conversation so far.

I'm happy that you feel you are so progressive in multiple disciplines and an expert on online behavior. If you think what I said is off-topic, that's just your opinion, man.



