Mastodon has updated its terms of service to prohibit the use of its user data for training AI models, a move that acknowledges growing concerns around AI training practices. The platform informed users by email, stating that data from Mastodon instances must not be used to train large language models (LLMs). The decision comes amid rising awareness of how user content feeds into AI systems; while the ban may look like a reactive measure, it reassures users about how their data is handled.

Enforcing such terms will be tricky, however, because the Fediverse extends well beyond Mastodon's control: the terms can bind only the instances Mastodon itself operates, not the many independently run servers that federate with them. In practice, enforcement rests on technical signals such as the robots.txt file, which works only if AI crawlers choose to comply.

Mastodon isn't alone in this endeavor. Bluesky has likewise stated its opposition to the use of user content for AI training, though enforcement outside its own ecosystem remains equally problematic, and Reddit's legal action against unauthorized content scraping highlights the broader industry tension. Overall, Mastodon's stance reflects its commitment to user privacy; the same update also raises the platform's minimum age requirement, which users will need to meet once the new terms take effect.
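For context, a robots.txt deployment of the kind described above might look like the following sketch. The user-agent tokens shown (GPTBot, CCBot, Google-Extended) are real, publicly documented AI crawlers used here as illustrative examples; nothing in the source confirms which crawlers, if any, Mastodon targets, and compliance with robots.txt is entirely voluntary on the crawler's part:

```
# Block known AI-training crawlers from the whole site.
# These tokens are examples; the set of AI crawlers changes over time.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# All other crawlers (e.g. ordinary search engines) remain unrestricted.
User-agent: *
Allow: /
```

Because this file is advisory rather than enforceable, it illustrates exactly the limitation the article raises: it stops only those AI developers who choose to honor it.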