
Austrian privacy non-profit Noyb (None of Your Business) has sent a cease-and-desist letter to Meta's Irish headquarters, threatening the company with a class action lawsuit if it proceeds with its plans to use user data to train its artificial intelligence (AI) models without explicit opt-in consent.
The social media behemoth paused the effort in June 2024 and now plans to resume training its AI models on public data shared by adult users in the European Union (EU) starting May 27, 2025.
"Instead of asking consumers for opt-in consent, Meta relies on an alleged 'legitimate interest' to just suck up all user data," Noyb said in a statement. "Meta may face massive legal risks, simply because it relies on an 'opt-out' instead of an 'opt-in' system for AI training."
The advocacy group further stated that Meta AI does not comply with the General Data Protection Regulation (GDPR) in the region, and that in addition to claiming a "legitimate interest" in taking user data for AI training, the company is also limiting the right to opt out before training begins.
Noyb also stated that even if only 10% of Meta's users expressly agreed to hand over their data for this purpose, that would amount to enough data points for the company to learn the European Union's languages.
It is worth noting that Meta had earlier claimed it needs to collect this information to capture the region's diverse languages, geography, and cultural references.
"Meta starts a huge fight just to have an opt-out system instead of an opt-in system," said Noyb's Max Schrems. "Instead, they rely on an alleged 'legitimate interest' to just take the data and run with it. This is neither legal nor necessary."
"Meta's absurd claim that it needs to take everyone's personal data for AI training is laughable. Other AI providers do not use social network data – and generate even better models than Meta."
The privacy group also accused the company of moving forward with its plans by putting the onus on users, and noted that national data protection authorities have largely remained silent on the legality of AI training without consent.
"So it seems that Meta simply moved ahead anyway – taking another huge legal risk in the European Union and trampling over users' rights," Noyb said.
In a statement shared with Reuters, Meta rejected Noyb's arguments, saying they are wrong on the facts and the law, and that it has provided European Union users with a "clear" option to object to their data being processed for AI training.
This is not the first time Meta's reliance on GDPR's "legitimate interest" to collect data without explicit opt-in consent has come under scrutiny. In August 2023, the company agreed to switch the legal basis from "legitimate interest" to a consent-based approach for processing user data to serve targeted advertisements to people in the region.
The disclosure also comes as the Belgian Court of Appeal ruled that the Transparency and Consent Framework, used by Google, Microsoft, Amazon, and other companies to obtain consent for data processing for personalized advertising purposes, is illegal across Europe.