Hacker News | hal009's comments

Emacs is the unholiest of them all.


No need to retype manually. If you upload a screenshot of the error to ChatGPT, it will transcribe it for you and even give you some hints on how to resolve it.


As mentioned by another person in this thread [0], it is likely that it was Ilya's work that was being replicated by another "secret" team, and that the "different opinions on the same person" were Sam's opinions of Ilya. Perhaps Sam saw him as an unstable element and a single point of failure in the company, and wanted to make sure that OpenAI would be able to continue without Ilya?

[0] https://news.ycombinator.com/reply?id=38357843


Since a lot of the board’s responsibilities are tied to capabilities of the platform, it’s possible that Altman asked for Ilya to determine the capabilities, didn’t like the answer, then got somebody else to give the “right” answer, which he presented to the board. A simple dual-track project shouldn’t be a problem, but this kind of thing would be seen as dishonesty by the board.


> it’s possible that Altman asked for Ilya to determine the capabilities, didn’t like the answer, then got somebody else to give the “right” answer, which he presented to the board.

This makes no sense given that Ilya is on the board.


No, it just means that in that scenario Sam would think he could convince the rest of the board that Ilya was wrong because he could find somebody else to give him a preferable answer.

It's just speculation, anyway. Every account I've heard is contradicted by some piece of the evidence, so it's likely that at least one thing "known" by the public isn't actually true.


Firing Sam as a way of sticking up for Ilya would make more sense if Ilya wasn’t currently in support of Sam getting his job back.


I'm not sure Ilya anticipated that this would more or less break OpenAI as a company. Ilya is all about the work they do, and he may not have foreseen that firing Sam would turn the entire company against him and the rest of the board. And so he supports Sam coming back, if that means they can get back to the work at hand.


Perhaps. But if the board is really so responsive to Ilya's concerns, why have they not reversed the decision so that Ilya can get his wish?


This is an interesting theory when combined with this tweet from Google DeepMind's team lead of Scalable Alignment [1].

[1] https://twitter.com/geoffreyirving/status/172675427761849141...

The "Sam is actually a psychopath that has managed to swindle his way into everyone liking him, and Ilya has grave ethical concerns about that kind of person leading a company seeking AGI, but he can't out him publicly because so many people are hypnotized by him" theory is definitely a new, interesting one; there has been literally no moment in the past three days where I could have predicted the next turn this would take.


That guy is another AI doomer, though, and those people all seem to be quite slippery themselves. Supposedly Sam lied to him about other people, but no further detail is provided, and nobody seems willing to get concrete about any occasion on which Altman was specifically dishonest. When the doomer board made similar allegations it seemed serious for a day, and then it evaporated.

Meanwhile, the Google AI folks have a long track record of making very misleading statements in public. I remember how, before Altman came along and made their models available to all, Google was fond of responding to any OpenAI blog post by claiming they had the same tech but way better; they just weren't releasing it because it was so amazing it wasn't "safe" enough yet. Then ChatGPT called their bluff, and we discovered that in reality they were way behind and apparently unable to catch up. Also, there were no actual safety problems, and it was fine to let everyone use even relatively unconditioned models.

So this Geoffrey guy might be right but if Altman was really such a systematic liar, why would his employees be so loyal? And why is it only AI doomers who make this allegation? Maybe Altman "lied" to them by claiming key people were just as doomerist as those guys, and when they found out it wasn't true they wailed?


Interesting. I’m glad he shared his perspective despite the ambiguity.


Either that, or Sam didn't tell Adam D'Angelo that they were launching a competing product in exactly the same space where poe.ai had just launched one. For some context, Poe had launched something similar to the custom GPTs, with creator revenue sharing etc., just four weeks prior to DevDay.


Not sure how he wouldn't see that coming? It was only a UI tweak away for OpenAI.


I believe the point here is that they claim to care about security, while their Icelandic VPS hosting provider can simply dump the guest's memory from the host server, and that dump would include the encryption keys.
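
A rough sketch of what that looks like on a KVM/libvirt host (the guest name and output path here are made up for illustration):

    # The hypervisor operator can snapshot a running guest's RAM without
    # touching the guest itself; LUKS/dm-crypt master keys sit in that RAM.
    virsh dump --memory-only --format elf customer-vps /tmp/customer-vps.mem

    # Key material can then be carved out of the image offline with
    # memory-forensics tooling (e.g. Volatility or an AES key scanner).

No exploit needed; it's just the management interface doing its job.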


Then can't we say that? "If they truly cared about security, they wouldn't use a VPS." The way it's worded just rubs me the wrong way.


Yes, I should have been clearer. Sorry.


This is a good point. We are moving to a dedicated server to resolve this issue.


Nice :)

You probably know this, but anyway: if you're setting up FDE with dropbear on a remote server, it's best to build the installer on the machine.
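
For anyone following along, the usual shape on Debian/Ubuntu looks roughly like this (the package name is real; the exact paths are assumptions and vary by release):

    # Dropbear SSH inside the initramfs, so the LUKS passphrase can be
    # entered remotely at boot.
    apt install dropbear-initramfs

    # Authorize a key for the pre-boot environment (older releases use
    # /etc/dropbear-initramfs/authorized_keys instead).
    cat ~/.ssh/id_ed25519.pub >> /etc/dropbear/initramfs/authorized_keys

    # Bake dropbear and the key into the initramfs.
    update-initramfs -u

    # After a reboot, unlock the root volume over SSH
    # (or ssh in and run cryptroot-unlock interactively).
    ssh root@server cryptroot-unlock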

