Bluesky Plans to Give Users Control Over AI Use of Their Posts
Bluesky is exploring a new approach to user consent over how content posted to the platform is used for AI training, according to CEO Jay Graber. Speaking at the SXSW conference in Austin, Graber outlined the company’s efforts to develop a framework that would let users control whether their posts can be used for generative AI.
The discussion comes amid growing concerns over how public social media content is being used to train AI models. Last year, a dataset containing one million Bluesky posts was found hosted on Hugging Face, raising questions about how much user-generated content is being used without explicit consent. Meanwhile, competitor X has already begun feeding user posts into its AI training efforts, with privacy policy changes last year allowing third parties to train AI models on the platform’s content.
While Bluesky does not plan to train its own AI models on user data, the company recognizes the need for clear policies on the matter. Graber emphasized that the goal is to give users control over how their content is used.
“We really believe in user choice,” Graber said, explaining that Bluesky users would have the ability to specify their preferences. She compared the proposed system to the robots.txt file used by websites to indicate whether they want to be indexed by search engines. Though such measures are not legally binding, they are generally respected by major tech platforms.
The proposal, currently available on GitHub, suggests implementing user consent settings at both the account and post level while encouraging AI companies to honor those choices. As with robots.txt, compliance would be voluntary, leaving it to scrapers and model builders to respect the signal. Bluesky is working with other stakeholders in the industry to advance the initiative, which Graber called “a positive direction to take.”
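To make the account- and post-level model concrete, the sketch below shows how a cooperating AI crawler might resolve a user’s preference before ingesting a post. The record shapes, field names, and the mayTrainOn helper are hypothetical illustrations for this article, not the schema from Bluesky’s GitHub proposal.

```typescript
// Hypothetical illustration only: these types and field names are invented
// for this sketch and are not taken from Bluesky's actual proposal.

// An account-level declaration of how the user's content may be reused.
interface UserIntent {
  allowGenerativeAI: boolean;  // may posts be used to train generative models?
  allowBulkDatasets: boolean;  // may posts be packaged into public datasets?
}

// An optional per-post override of the account-level default.
interface PostIntent {
  allowGenerativeAI?: boolean;
}

interface Post {
  uri: string;
  text: string;
  intent?: PostIntent;
}

// A well-behaved crawler resolves the effective preference by letting a
// post-level setting, if present, override the account-level default.
function mayTrainOn(account: UserIntent, post: Post): boolean {
  return post.intent?.allowGenerativeAI ?? account.allowGenerativeAI;
}

// Example: the account opts out of generative AI training, so the post is skipped.
const account: UserIntent = { allowGenerativeAI: false, allowBulkDatasets: true };
const post: Post = { uri: "at://example/app.bsky.feed.post/1", text: "hello" };

if (mayTrainOn(account, post)) {
  console.log("include in training corpus");
} else {
  console.log("skip: user has opted out of generative AI training");
}
```

As with robots.txt, nothing in a scheme like this technically prevents scraping; its value lies in giving cooperative crawlers an unambiguous signal to honor.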