A Roadmap Towards Machine Intelligence (arxiv.org)
59 points by vonnik on Nov 27, 2015 | 29 comments



Seems like the authors do not consider how safe such a system would be. Yet it seems to me that a machine whose main goal is to learn things about the world so that it can communicate with humans and answer their questions is potentially very dangerous. While reading the first section I kept thinking about this blog entry from Paul Tyma, "How Artificial Intelligence Will Really Kill Us All":

http://paultyma.blogspot.fr/2015/10/how-artificial-intellige...


The doctrine of mutually assured destruction is possibly very dangerous, too. Also: biological weapons, destruction of rainforest ecosystems, ocean acidification, etc. Why don't these discussions consider the opportunity cost of NOT developing AI as fast as possible?

Most people live in appalling, inhuman conditions:

$100 monthly income puts a person in the top 52.40% of earners worldwide. [0]

$300 monthly income puts a person in the top 28.35% of earners worldwide. [0]

Human intelligence is obviously suboptimal for solving global problems; perhaps AI will do better.

[0] http://www.globalrichlist.com/


$100 monthly income puts a person in the top 52.40% of earners worldwide.

That's misleading without considering numerous other variables, such as the cost of living. It seems to me to be the kind of statement aimed at deceitfully triggering emotions, the sort used by charities. Charities are like the police: there for good reason, but along with the good intentions they have a slightly seedy side. Of the countries I've lived in, the one where I was paid the least had the highest standard of living.


Actually, it's the other way around: your comment is misleading, and your intentions have a not-so-slightly seedy side. Are you really attacking the concept of charitable organizations in a capitalist world?

The calculation methodology is available on the linked page:

"For currency conversion we use Purchasing Power Parity Dollars (PPP$) in order to take into account the difference in cost of living between countries; PPP$ are also less susceptible to short term fluctuations."


"there for good reason"


An interesting assessment of safety considerations is: http://intelligence.org/files/AIPosNegFactor.pdf

I've submitted it for HN discussion: https://news.ycombinator.com/item?id=10637207


Interesting entry, but I don't find it very credible. We already have superpowered weapons that make it possible for a few powerful individuals to obliterate the rest of humanity, yet thus far we've stayed alive.


Recently Max Tegmark talked at the UN (1) about the potential danger of AI, and he asked what would have happened if making a nuclear bomb were as simple as, say, baking sand in a microwave. He argues that if that were the case, most of us would probably be dead. I guess the point is that in these matters, the past is not always a good indicator of the future.

1. http://www.youtube.com/watch?v=W9N_Fsbngh8&t=1h55m31s


"Moore's Law of Mad Science: Every eighteen months, the minimum IQ necessary to destroy the world drops by one point." - Eliezer Yudkowsky

On a long enough timeline, we're probably screwed by Fermi's Great Filter regardless, but a self-improving machine learning singularity can certainly accelerate the process!


AI seems to be at odds with the great filter. One would expect it to expand out into the universe and be visible.


Why... So... Visible?


Really, you guys believe technology sufficiently advanced to be indistinguishable from magic (h/t to Arthur C. Clarke) will be festooned with the alien tech equivalent of pop-up ads and banners easily detectable by puny earthlings? Allow me to disagree.


Interesting analogy, but I think it's rather debatable. We have no evidence that making a strong AI is as simple as baking sand in a microwave; in fact, we have quite a bit of evidence that it's actually extremely difficult. It seems unlikely that such a powerful and difficult piece of software would be any more widely distributed than nuclear weapons.

I think this is affirmed by the fact that most state-of-the-art AI depends primarily on massive volumes of data rather than clever algorithms. Massive datasets are inherently concentrated in the hands of the few and powerful, so it seems unlikely that a rogue actor could acquire suitable data.


You're taking the analogy too literally. The contingency is not whether making AI is easy; it's whether it is harmful.

And yes, it is debatable, but as Tegmark puts it: relying on luck is probably not a good strategy.


What is the point of the sand-microwave analogy? I really don't get it.

It can't be about microprocessors, since we've had those for decades and don't have murderous AI yet.


> What is the point of the sand-microwave analogy? I really don't get it.

The point is to suggest something that nobody would do by accident, but almost anyone can do intentionally, and nobody can stop others from doing. It's a thought experiment: suppose that taking some common everyday substance, and using it with a common appliance many people have access to, would destroy the world.

Right now, a very small handful of people have the ability to existentially threaten humanity. Fortunately, those people seem moderately unlikely to do so, and the process for becoming one of those people selects at least somewhat against the type of craziness that would hit the button just to see what happens, or because they genuinely want the world to end.

Out of seven billion people, based on existing examples of human behavior, it seems exceedingly likely that there exist people who would intentionally choose to destroy the world.


Strong AI is not something that almost anyone can do intentionally, given that it's currently impossible. And any predictions on it becoming possible within a definite timeframe are total bullshit [1]. I think it's rather telling that the only people who make those kinds of optimistic predictions about neural networks are not experts in, and do not have extensive research experience with, artificial neural networks.

[1] I certainly think it will become possible, but the timeframe is indefinite. If we get really lucky and breakthrough after breakthrough is achieved, it could be within 10 years. But it could be 20 years, it could be 100 years.


I absolutely agree. However, I also agree with the more general only-half-tongue-in-cheek line that "Every eighteen months, the minimum IQ necessary to destroy the world drops by one point."


Thanks. I am enlightened.


I will do everything in my power to ensure that corporate-backed, ivory-tower, anti-philosophical, techno-enthusiast academics do not figure out some kind of formula to dominate the world of human affairs with their so-called "super intelligent machines". I'm not saying that this specific research group falls under this umbrella, but they are all credited as being associated with Facebook AI Research, so take it FWIW. I do believe that a certain Elon Musk would back me up here:

"I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it's probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don't do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like yeah he's sure he can control the demon. Didn't work out."

The classes of algorithms that are meant to directly interact with people in a way that appears authentically human must obviously be highly regulated, and I certainly agree that these kinds of regulations should be on par with those of international nuclear oversight bodies.

It is always important to give end users the option to figure out for themselves how the algorithms that they interact with actually function. To encourage this kind of thing, I have been working on an online, distributed operating system concept for the past several years, and one of the apps is a simple, "Hello World" kind of AI that goes by the name Bertie.

I am working on making the underlying system code (which is mostly JavaScript) completely editable by end users. They will be able to test out their edits in an application window inside the web page, and will be able to share their changes with each other over the distributed, web-based filesystem that I am actively developing.

If you are running Chrome, you can try out the OS by going to https://nacl-pg.appspot.com/. To see the AI app, just click on the Applications folder, and then click on Bertie's face. Alternatively, if you are feeling lazy, just click on the following link and the Bertie app will automatically open: https://nacl-pg.appspot.com/desk?open=Bertie.


I'm not quite sure why this made it to the front page. It's just a wish list of things they would like an AI to be able to do.


They're some of the most prominent people in their field, and they are setting a major direction for Facebook AI Research. So this is the Facebook roadmap. Also, because everyone is focused on ML, the move back to people trying to solve AI is a big deal. Other major researchers are saying similar things.


Ok, good explanation. Thanks.


Outline:

1. Introduction

2. Desiderata for an intelligent machine

2.1 Ability to communicate

2.2 Ability to learn

3. A simulated ecosystem to educate communication-based intelligent machines

3.1 High-level description of the ecosystem (Agents, Interface channels, Reward, Incremental structure, Time off for exploration, Evaluation)

3.2 Examples from early stages of the simulation (Preliminaries and notation, The Learner learns to issue Environment commands, Associating language to actions, Learning to generalize, Understanding higher-level orders, Interactive communication, Algorithmic knowledge)

3.3 Interacting with the trained intelligent machine

4. Towards the development of intelligent machines

4.1 Types of learning

4.2 Long-term memory and compositional learning skills

4.3 Computational properties of intelligent machines

5. Related ideas

6. Conclusion

I'm thinking the winner might not be the company that makes huge progress towards MI; it could be the one that provides the ecosystem. For example, in the trend of Siri-type chatbots, Slack is very well positioned to reap most of the benefits even without doing much AI research, because they could take a commission on platform usage.


If we successfully create actual machine intelligence, with the ability to learn and communicate, then the "winner" hardly matters; it only matters whether they programmed it correctly or not. Any scenario that has a "winner" means they didn't.


If we manage to build truly intelligent machines, I'd hope they'd be used to enrich people's lives; that is, that all humans would be the winners.


"And bingo! I get to press the button again, woohoo!"

Wonder what the other button does...

On the bright side, they bucked the trend and cited Terry Winograd's research from the 1970s instead of pretending they came up with this idea all by themselves.

https://en.wikipedia.org/wiki/SHRDLU


A lot of interesting ideas and research collected in one place. It looks like the high-concept pitch for this is "a chatbot that can query the internet."


So basically Siri?



