Brazilian Regulator Bans Meta From Using Local Data for AI Training
Brazil’s national data protection authority has ordered Meta to stop using data originating in the country to train its AI models.
Meta’s current privacy policy allows the company to use data collected from its platforms, including Facebook, Instagram, and WhatsApp, to train its artificial intelligence models. That practice will no longer be permitted in Brazil under the order issued on Tuesday by the national data protection authority, which gave the company five days to change its policy.
Brazil cited “the imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of the affected data subjects,” and said the company must confirm it has stopped using the data or face a daily non-compliance fine of 50,000 Brazilian reais (approximately $9,000).
Meta expressed its “disappointment” with the Brazilian authority’s decision, labeling it a “step backward for innovation.”
“AI training is not unique to our services, and we’re more transparent than many of our industry counterparts who have been using public content to train their models and products,” the company stated to TIME on Wednesday, following the Brazilian authority’s decision.
The decision follows a report published in June by Human Rights Watch, which found that a widely used dataset of images scraped from the web by the German nonprofit LAION, and used to train image models, contained identifiable images of Brazilian children. The report asserts that this exposes the children to the risk of deepfakes and other forms of exploitation. Human Rights Watch says it identified 170 photos of children from at least 10 Brazilian states after examining less than 0.0001 percent of the images in the dataset.
Brazil is one of Meta’s largest markets, with more than 100 million Facebook users alone. In June, Meta unveiled new AI tools for businesses on its WhatsApp platform during a conference held in the South American country.
The Brazilian authority argued that users were not sufficiently informed about the changes and that the opt-out process was “not very intuitive.” Meta maintains that its approach adheres to local privacy laws and has committed to addressing the Brazilian authority’s concerns.
Brazil’s decision to prohibit Meta from feeding user data into its AI models aligns with similar pushback encountered in Europe. Last month, Meta postponed the launch of its AI services and put a hold on plans to train its models on EU and U.K. data following a complaint from the Irish privacy regulator. Meta is anticipated to proceed with training in the U.S., a country lacking federal online privacy protections.
This is not the first time Meta has faced opposition from Brazilian authorities. In February, the company was barred from using its name in Brazil because of confusion with another company. Meta successfully appealed the decision in March.