
>A lot of the initial speculation is that the coup was led by Ilya over AI safety concerns, but that seems unlikely now given how quickly he switched sides.

For more context: https://news.ycombinator.com/item?id=38362301

If the above is real then it kind of changes things.


Interesting. Also what a clusterfuck.


At this point this whole thing looks more like

>tech billionaires and a mammoth corporation using the full weight of social media to exert control over a non-profit organization, while also telling us what to think about the event without having enough info.

The framing in these articles is insane.


This article is that Obama meme of him awarding a medal to himself. Also, I don't get it: OpenAI's product was technically made by Ilya, who's still at OpenAI.


> OpenAI's product was technically made by Ilya

Nope, he is not a core contributor to GPT-4, GPT-3, GPT-2, DALLE-3, DALLE-2: https://arxiv.org/abs/2303.08774 https://cdn.openai.com/papers/dall-e-2.pdf https://arxiv.org/pdf/2005.14165.pdf https://arxiv.org/abs/1908.09203


With 700-ish employees, I find it difficult to attribute their success to one person.


I was not talking about the success but about the core tech those employees are working with. That's what really matters, isn't it?


If there is no success then there is nothing to discuss.


You get my point. With all due respect to Sam's technical abilities, if I had to choose one of them to build AI/AGI, I'd clearly go with Ilya. The fact that Sam goes to Microsoft doesn't mean all of OpenAI moves to Microsoft. That is the point I'm trying to make.


No, but the 700 employees who are following him do.


So?


For me, the real danger of AGI (even aligned), fast-forwarding to some future where robotics is advanced enough to fully replace human motion, is that it will dilute if not completely replace human work of ANY kind. And the real danger doesn't come from AGI itself; it comes from the psychopaths having control of it.


There is a real danger of AGI. IMHO the question is whether we humans should protect ourselves beforehand with specific military and cybersecurity technologies, among others. Is anyone working on this?

Sci-fi-wise, I understand that in the Terminator movies they were not preventing this, just moving forward naively.


It's a non-profit org, man. We still don't know what happened, and their actions might align with their mission. Weird how quite a few people with your narrative mistake it for a for-profit company.


I fully understand the nature of the org. I just don't see how destroying it does anything but hurt its mission. Weird how quite a few people with your narrative always project false assumptions onto others.


I don't get it, how is OpenAI "dead"? Are you going to stop using ChatGPT or what?


He may have no choice but to try to play this card at this point. The stakes are pretty high, I suppose. You don't just give up that level of potential control over this tech.


Trading privacy for safety.


> The police say they cannot access the tag’s location unless the owner shares it during an investigation


> The police say


Not even; the car would still be stolen. In any case, it could make the police's work easier.


That ignores incentive effects.


The amount of money and power their products might offer makes them pretty desirable. Theoretically, there should be no limit to the amount and type of shenanigans possible in this particular situation.


That's fair lol

