
I think there are two questions here:

(1) "Is general intelligence even a thing you can invent? Like, is there a single set of faculties underlying humans' ability to build software, design buildings that don't fall down, notice high-level analogies across domains, come up with new models of physics, etc.?"

(2) "If so, then does inventing general intelligence make it easy (unavoidable?) that your system will have all those competencies in fact?"

On 1, I don't see a reason to expect general intelligence to look really simple and monolithic once we figure it out. But one reason to think it's a thing at all, and not just a grab bag of narrow modules, is that humans couldn't have independently evolved specialized modules for everything we're good at, especially in the sciences.

We evolved to solve a particular weird set of cognitive problems; and then it turned out that when a relatively blind 'engineering' process tried to solve that set of problems through trial-and-error and incremental edits to primate brains, the solution it bumped into was also useful for innumerable science and engineering tasks that natural selection wasn't 'trying' to build in at all. If AGI turns out to be at all similar to that, then we should get a very wide range of capabilities cheaply in very quick succession. Particularly if we're actually trying to get there, unlike evolution.

On 2: Continuing with the human analogy, not all humans are genius polymaths. And AGI won't in-real-life be like a human, so we could presumably design AGI systems to have very different capability sets than humans do. I'm guessing that if AGI is put to very narrow uses, though, it will be because alignment problems were solved that let us deliberately limit system capabilities (like in https://intelligence.org/2017/02/28/using-machine-learning/), and not because we hit a 10-year wall where we can implement par-human software-writing algorithms but can't find any ways to leverage human+AGI intelligence to do other kinds of science/engineering work.




Those aren't exactly the questions I'm raising; I have no doubt that there exists some way to produce AGI. My concern is that it doesn't seem like the right question to ask, since history suggests that humans are much better at building specialized devices first, and when it comes to AI risk, the only system that really matters is the first one built.

I might have misunderstood your post, though.


The thing I'm pointing to is that there are certain (relatively) specialized tasks like 'par-human biotech innovation' that require more or less the same kind of thinking that you'd need for arbitrary tasks in the physical world.

You may need exposure to different training data in order to go from mastering chemistry to mastering physics, but you don't need a fundamentally different brain design or approach to reasoning, any more than you need fundamentally different kinds of airplane to fly over one land mass versus another, or fundamentally different kinds of scissors to cut some kinds of hair versus other kinds. There's just a limit to how much specialization the world actually requires. And, e.g., natural selection tried to build humans to solve a much narrower range of tasks than we ended up being good at; so it appears that whatever generality humans possess over and above what we were selected for must be an example of "the physical world just doesn't require that much specialized hardware/software in order for you to perform pretty well".

If all of that's true, then the first par-human biotech-innovating AI may initially lack competencies in other sciences, but it will probably be doing the right kind of thinking to acquire those competencies given relevant data. A lot of the safety risks surrounding 'AI that can do scientific innovation' come from the fact that:

- the reasoning techniques required are likely to work well in a lot of different domains; and

- we don't know how to limit the topics AI systems "want" to think about (as opposed to limiting what they can think about), even in principle.

E.g., if you can just build a system that's as good as a human at chemistry, but doesn't have the capacity to think about any other topics, and doesn't have the desire or capacity to develop new capacities, then that might be pretty safe if you exercise ordinary levels of caution. But in fact (for reasons I haven't really gone into here directly) I think that par-human chemistry reasoning by default is likely to come with some other capacities, like competence at software engineering and various forms of abstract reasoning (mathematics, long-term planning and strategy, game theory, etc.).

This constellation of competencies is the main thing I'm worried about re AI, particularly if developers don't have a good grasp on when and how their systems possess those competencies.


> The thing I'm pointing to is that there are certain (relatively) specialized tasks like 'par-human biotech innovation' that require more or less the same kind of thinking that you'd need for arbitrary tasks in the physical world.

The same way Go requires AGI, and giving semantic descriptions of photos requires AGI, and producing accurate translations requires AGI?

Be extremely cautious when you make claims like these. There are certainly tasks that seem to require being smart in distinctively human ways, but the only things I feel I could convincingly argue belong in that category involve modelling humans and having human judges. Biotech is a particularly strong counterexample, because not only is there no reason to believe our brand of socialized intelligence is particularly effective at it, but the only other thing that seems to have tried has a much weaker claim to intelligence yet far outperforms us: natural selection.

It's easy to look at our lineage, from ape-like creatures to early humans to modern civilization, and draw a curve on which you can place intelligence, and then call this "general" and the semi-intelligent tools we've made so far "specialized", but in many ways this is just an illusion. It's easier to see this if you ignore humans and compare today's best AI against, say, chimps. In some regards a chimp seems like a general intelligence, albeit a weak one. It has high- and low-level cognition, it has memory, and it is goal-directed but flexible. Our AIs don't come close. But a chimp can't translate text or play Go. It can't write code, however narrow the domain. Our AIs can.

When I say I expect the first genuinely dangerous AI to be specialized, I don't mean that it will be specific to one task; even neural networks seem to generalize surprisingly well in that way. I mean it won't have the assortment of abilities that we consider fundamental to what we think of as intelligence. It might have no real overarching structure that allows it to plan or learn. It might have no metacognition, and I'd bet against it having the ability to convincingly model people. But maybe if you point it at a network and tell it to break things before heading to bed, you'd wake up to a world on fire.



