
Disappointing. Yet another industry leader sowing public FUD for some reason. Why not bring rational discourse into the conversation around software safety and ethics?

Automation has been the driving force of industry since the industrial revolution itself. We're not new to automation, and we are certainly not new to safety of autonomous systems. AI is no different.




Yeah, what could we possibly hope to learn from "The Godfather of A.I." about the potential dangers of A.I.

Maybe... they're better positioned to opine on this topic than you are?


The issue is that the ML stuff is so bleeding edge and paradigm breaking that no set of prior credentials is really a valid mark for being right or wrong about this, especially credentials from a traditional academic research background.

The best way to describe this issue is with a hypothetical scenario:

A well respected Ph.D. physicist comes out with a paper that describes a warp drive that can be used to travel faster than light. He hands the paper to a very talented multidisciplinary engineer who has built everything from small microcircuits to spaceships. The engineer says: "OK, this is cool, what do I need to do to start building this?" The researcher says: "OK, so first you need to find some negative mass and gather it." The engineer says: "How do I do that?" The researcher answers: "I don't know, but once you do, find a way to arrange it into a ring around the spaceship."

It's the same story with AI. Just because someone has research experience doesn't mean they know how the technology will evolve in the real world. There may be theories floating around about how AI development could cause danger, but there are no concrete paths showing what has to happen for those theories to come true - and without those paths, they remain theories.


Alternatively, we can draw a more fitting parallel to Robert Oppenheimer, who, upon recognizing the devastating potential of his creation, dedicated himself to halting the spread of nuclear weapons worldwide.


Robert Oppenheimer knew the entire domain space of nuclear weapons. Current researchers don't. It's not like future neural networks are just going to be stacks and stacks of transformers on top of each other.


I'm not sure what you're trying to argue here, other than that current researchers can't predict the future.

True... Which is why they're warning against the "potential" dangers and sounding the alarm now, as opposed to after the fact.

"The Godfather of AI" seems as qualified as anyone to voice those concerns. There's no debate to be had here.


Warning against potential dangers is meaningless. Any significant piece of tech has potential danger. Some innocuous microprocessor can be used as a guidance chip for a missile, or run a smart toaster oven.

There is more to it though. Geoffrey isn't just warning about potential danger. He is looking at current research and wrongly extrapolating AI power into the future. Sure, AI can and will be misused, but most of the warnings about sentient AI, or its ability to solve complex problems like making deadly viruses, are all hypothetical.


"He is looking at current research, and wrongfully extrapolating AI power into the future."

You should ask Google if you can have Geoffrey's spot at Google.


If they offered, I would take it. Not going to put an ounce of effort into convincing anyone to give me the position.

Jokes aside, the statement stands on its own, regardless of credentials or the lack thereof. A lot of the hypothetical AI danger relies on the idea that AI will somehow internally prove, by proxy, that P=NP, and be able to produce information that would require brute-force iteration to search for using traditional methods, via some arbitrary algorithm. Or alternatively, it will somehow figure out how to search for those tasks more efficiently, despite there being no evidence whatsoever that a more efficient search algorithm exists for a given task.

Everything "simpler" then that is already possible to do, albeit with more steps, which is irrelevant for someone with capital or basic knowledge.


It is different. The most powerful of today's machines has a red stop button. But if a machine becomes smarter than us, it could create a copy of itself without such a button, so we lose control and will be quickly overpowered.


There’s an argument that we’ve gone past that point already. Yes, Microsoft can theoretically take their Bing GPT-4 program offline and turn it off, but they just invested $10B in it and they don’t want to. In fact a corporation can be thought of as an AGI itself, just made up of humans. Again, we can take Microsoft offline but we don’t want to.

I guess my point is that the most likely scenario for AGI that looks more like AGI isn't that we won't be able to take it down, but that we won't want to.


I see lots of people pointing to what is "more likely" or "more realistic."

I'm not sure where everyone got these strong priors on the consequences of 'intelligent' machines from, did I miss some memo?


Do you disagree with my first paragraph?


What are your credentials in the field if I may ask?


Senior Hacker News Commenter


Prove that "AI is no different." Its creators appear to differ with you on this point.

The burden of proof is thus yours.


One major difference between now and then is that now automation is starting to look and behave in a way that can be confused with a human. Most, if not all, comments generated by machines before LLMs could be identified as such, while now it's going to get harder and harder to detect properly.

Quick evaluation: did a human write this comment or did I use GPT-4 to write this comment by just providing what meaning I wanted to convey?

The answer is f3bd3abcb05c3a362362a17f690d73aa7df15eb2acf4eb5bf8a5d39d07bae216 (sha256sum)
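(For readers unfamiliar with this kind of hash commitment: once the commenter later reveals the original string, anyone can recompute its SHA-256 digest and compare it to the value posted above. A minimal sketch, assuming the answer was hashed as a plain string; the exact wording, formatting, and any salt are unknown, so the candidate values below are only illustrative guesses.)

  # Sketch: checking a guessed answer against the posted SHA-256 commitment.
  # The real committed string and any trailing newline/salt are unknown here.
  import hashlib

  COMMITMENT = "f3bd3abcb05c3a362362a17f690d73aa7df15eb2acf4eb5bf8a5d39d07bae216"

  def matches(candidate: str) -> bool:
      # True if sha256(candidate) equals the posted commitment.
      return hashlib.sha256(candidate.encode()).hexdigest() == COMMITMENT

  for guess in ("human", "human\n", "gpt-4", "gpt-4\n"):
      print(repr(guess), matches(guess))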


I suspect there is more to it than what has been published. These are smart people who appear (to us) to be acting irrationally.


Irrationally how?

What is so obviously irrational about what is being done? I don't see it.


I don't want to speculate about his reasons, but I don't see how leaving his influential role at a top AI company (Google) accomplishes the goals written about in the paper.


Too bad you're being downvoted, because you're making very good points. All the counterarguments are also arguments from authority.



