Meta, Spotify, Other Tech Companies Criticize EU AI Data Regulations
Meta, Spotify, and other tech companies have criticized the European Union’s (EU) approach to data privacy and AI regulation, calling its decision-making on the use of artificial intelligence “fragmented and inconsistent”. The criticism comes shortly after Meta was forced to pause its plans to use European users’ data to train AI models under pressure from privacy regulators.
The criticism was set out in an open letter, signed by tech firms, researchers, and industry groups, which argues that regulatory decision-making under the General Data Protection Regulation (GDPR) has created uncertainty about what data can be used to train AI. The signatories call for more harmonized, consistent, and clear rules to keep Europe’s AI development from falling behind.
Meta and other companies, including Google, have delayed launching new products in the EU because of these regulatory uncertainties. Meta, in particular, has faced record fines for breaching the GDPR, including a €1.2 billion penalty. Alongside these data privacy challenges, the EU’s AI Act, which aims to curb misuse of the technology, has further complicated the rollout of AI products in the European market.
A spokesperson for the European Commission has said that all companies operating in the EU are expected to abide by its rules, and that Europe’s first-of-its-kind attempt to curb the abuse of AI is no exception. Time will tell whether these platforms comply, and how the rules will affect the launch of their AI-driven products in the near future.