
Popular messaging app WhatsApp on Tuesday unveiled a new technology called Private Processing to enable artificial intelligence (AI) capabilities in a privacy-preserving manner.
"Private Processing will allow users to leverage powerful optional AI features, like summarizing unread messages or editing help, while preserving WhatsApp's core privacy promise," Meta said in a statement shared with The Hacker News.
With the introduction of the feature, the idea is to make it possible to use AI capabilities while keeping users' messages private. It is expected to become available in the coming weeks.
The capability, in a nutshell, allows users to initiate a request to process messages using AI within a secure environment called a Confidential Virtual Machine (CVM), such that no other party, including Meta and WhatsApp, can access them.
Confidential processing is one of the core principles that underpin the feature, the others being –
- Enforceable guarantees, which cause the system to fail or become publicly discoverable when attempts are made to modify the confidential processing guarantee
- Verifiable transparency
- Non-targetability
- Stateless processing and forward security, which ensures that messages are not retained after processing so that an attacker cannot recover historical requests or responses
The system works as follows: Private Processing obtains anonymous credentials to verify that future requests come from a legitimate WhatsApp client, and then establishes an Oblivious HTTP (OHTTP) connection between the user's device and a Meta gateway via a third-party relay, which also hides the source IP address of the request from both WhatsApp and Meta.
A secure application session is then established between the user's device and the trusted execution environment (TEE), after which an encrypted request is sent to the Private Processing system using an ephemeral key.
This also means that the request cannot be decrypted by any other party, including Meta and WhatsApp.
The data is processed within the CVM, and the result is sent back to the user's device in an encrypted format that can only be read by the device and the Private Processing server.
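The request flow described above can be sketched, in highly simplified form, as follows. Note that the `Relay` and `TEE` classes, the toy XOR stream cipher, and the session-key setup are illustrative assumptions for demonstration only, not Meta's actual protocol or cryptography:

```python
# Toy sketch of the Private Processing request flow (illustrative only).
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256-derived keystream (demo only)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

class TEE:
    """Stand-in for the CVM/TEE: the only party besides the device
    that holds the per-request session key."""
    def __init__(self, session_key: bytes):
        self._key = session_key

    def handle(self, blob: bytes) -> bytes:
        request = keystream_xor(self._key, blob)   # decrypted only inside the CVM
        result = b"SUMMARY(" + request + b")"      # stand-in for AI processing
        return keystream_xor(self._key, result)    # response encrypted before leaving

class Relay:
    """Stand-in for the third-party OHTTP relay: it forwards opaque
    ciphertext and learns the client's address, never the contents."""
    def forward(self, blob: bytes, tee: TEE) -> bytes:
        return tee.handle(blob)                    # no key, so it cannot read blob

# Device side: an ephemeral per-request key, shared only with the TEE.
session_key = secrets.token_bytes(32)
tee, relay = TEE(session_key), Relay()
encrypted_request = keystream_xor(session_key, b"summarize my unread messages")
encrypted_response = relay.forward(encrypted_request, tee)
plaintext = keystream_xor(session_key, encrypted_response)
```

Because the relay only ever sees ciphertext and the gateway only ever sees the relay's address, no single intermediary can link the request contents to the requesting user, which is the property the OHTTP hop is meant to provide.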
Meta has also acknowledged various threat vectors that could expose the system to attack, such as compromised insiders, supply chain risks, and malicious end users, but emphasized that it has adopted a defense-in-depth approach to minimize the attack surface.
In addition, the company has pledged to publish a third-party log of CVM binary digests and the CVM binary image so that external researchers can "analyze, replicate, and report" scenarios where they believe the logs could leak user data.
The development comes as Meta released a dedicated Meta AI app built with Llama 4 that features a "social" Discover feed where users can share prompts and even remix them.
Private Processing, in some ways, mirrors Apple's approach to confidential AI processing, called Private Cloud Compute (PCC), which also routes PCC requests through an OHTTP relay and processes them in a sandboxed environment.
Late last year, the iPhone maker made its PCC Virtual Research Environment (VRE) publicly available to allow the research community to inspect and verify the system's privacy and security guarantees.
(The story was updated after publication to clarify that compromised insiders, supply chain risks, and malicious end users are examples of possible threat scenarios and not weak links, as previously stated.)