Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Being "anti-web" is the least of its problems.

This thing is an absolute security nightmare. The concept of opening up the full context of your authenticated sessions in your email, financial, healthcare or other web sites to ChatGPT is downright reckless. Aside from personal harm, the way they are pushing this is going to cause large scale data breaches at companies that harbour sensitive information. I've been the one pushing against hard blocking AI tools at my org so far but this may have turned me around for OpenAI at least.



Let’s make a not-for-profit, we can make rainbows and happiness.

Yay!! Let’s all make a not-for-profit!!

Oh, but hold on a minute, look at all the fun things we can do with lots of money!

Ooooh!!


Yeah, I think there are profound security issues, but I think many folks dug into the prompt injection nightmare scenarios with the first round of “AI browsers”, so I didn’t belabor that here; I wanted to focus on what I felt was less covered.


I totally agree.

Clearly, an all-local implementation is safer, and using less powerful local models is the reasonable tradeoff. Also make it open source for trust.

All that said, I don’t need to have everything automated, so we also have ‘why even build it’ legitimate questions to ask.


I mean... Edge already have copilot integrated for years, and Edge actually have users, unlike Atlas. Not sure why people are getting shocked now...


It's bad too, yes. But not as bad, because MS is a profitable company with real enterprise products, so they have some reputation and compliance to maintain. SamAI is a deeply unprofitable company, mostly B2C oriented, with no other products to fall back to except for LLM. So it is more probably that Sam will be exploiting user data. But in general both are bad, that's why people need to use Firefox, but never actually do so, due to some incorrect misconception from decade ago.


>MS is a profitable company with real enterprise products, so they have some reputation and compliance to maintain.

On the contrary, it could be the case that Microsoft ritually sacrifices a dozen babies each day in their offices and it would still be used because office.


Microsoft calls everything copilot. It is unclear what they had back then under that name, and what they will have under it.


"This bad no good thing is already happening, so why are you complaining"


Is this the security flaw thingy that stores OAuth or Auth0 tokens in sqllite database with overly permissive read privileges on it?


no I'm talking about the general concept of having ChatGPT passively able to read sensitive data / browser session state. Apart from the ever present risk they suck your data in for training, the threat of prompt injection or model inversion to steal secrets or execute transactions without your knowledge is extreme.


Right, the software is inherently a flaming security risk even if the vendor were perfectly trustworthy and moral.

Well, unless the scenario is moot because such a vendor would never have released it in the first place.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: