> We’re sharing an early look into Private Processing, an optional capability that enables users to initiate a request to a confidential and secure environment and use AI for processing messages where no one — including Meta and WhatsApp — can access them.
What is this and what is this supposed to mean? I have a hard time trusting these companies with any privacy, and while this wording may be technically correct, they'll likely extract all meaning from your communication, probably even run some AI-enabled surveillance service.
I don't understand the knee-jerk skepticism. This is something they are doing to gain trust and encourage users to use AI on WhatsApp.
WhatsApp did not used to be end-to-end encrypted, then in 2021 it was - a step in the right direction. Similarly, AI interaction in WhatsApp today is not private, which is something they are trying to improve with this effort - another step in the right direction.
What's the motive "to gain trust and encourage users to use AI on WhatsApp"? Meta aren't a charity. You have to question their motives, because their motive is to extract value from users who don't pay for the service, and I would say that WhatsApp has proven to be a harder place to extract that value than their other ventures.
btw whatsapp implemented the signal protocol around 2016.
"motive is to extract value out of their users who don't pay for a service"
that is called a business.
if you find something deceitful in the business practice, that should certainly be called out and even prosecuted. I don't see why an effort to improve privacy has to get skeptical treatment, because big business bad bla bla
Did you read the next paragraphs? It literally describes the details. I would quote the parts that respond to your question, but I would be quoting the entire post.
> This confidential computing infrastructure, built on top of a Trusted Execution Environment (TEE), will make it possible for people to direct AI to process their requests — like summarizing unread WhatsApp threads or getting writing suggestions — in our secure and private cloud environment.
We’re into “can’t prove a negative” territory here. Yes, the scheme is explained in detail, yes it conforms to cryptographic norms, yes real people work on it and some of us know some of them..
..but how can FB prove it isn’t all a smokescreen, and requests are printed out and faxed to evil people? They can’t, of course, and some people like to demand proof of the negative as a way of implying wrongdoing in a “just asking questions” manner.
Except the software running in TEEs, including the source code, is all verifiable at runtime, via a third party not controlled by Meta. And if you disagree, claim a bug bounty and become famous for exposing Meta as frauds. Or, more likely, stick with your reddit-tier zealotry and clown posting.
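To make the verification claim concrete: measurement-based attestation boils down to comparing a hash of what the enclave reports it is running against a hash reproduced from an independent build of the published source. This is a minimal, simplified sketch of that comparison; the function names and data are invented for illustration and are not Meta's actual attestation API (which also involves hardware-signed reports and certificate chains).

```python
import hashlib
import hmac

def measure(code_image: bytes) -> str:
    """Hash ("measurement") of the binary an enclave claims to be running."""
    return hashlib.sha256(code_image).hexdigest()

def verify_attestation(reported_measurement: str, trusted_build: bytes) -> bool:
    """Compare the enclave's reported measurement against one reproduced
    from source by an independent, third-party builder."""
    expected = measure(trusted_build)
    # constant-time comparison to avoid leaking match length
    return hmac.compare_digest(reported_measurement, expected)

# A verifier rebuilds the published source and checks the enclave's claim.
trusted_build = b"enclave binary built reproducibly from published source"
honest_claim = measure(trusted_build)
assert verify_attestation(honest_claim, trusted_build)

# A tampered binary produces a different measurement and fails verification.
assert not verify_attestation(measure(b"tampered binary"), trusted_build)
```

The point of the design is that the party doing the comparison need not trust the operator: as long as the build is reproducible and the measurement is signed by hardware, a divergence between the published source and the running code is detectable by anyone.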