Magazine #2 | Summer 2023
Moderators: Exploited to Train AI
Online moderators and clickworkers provide data for training AI systems under poor working conditions. Their plight is often overlooked in the AI debate.
Facebook says that Artificial Intelligence technology is a key component in moderating posts on the platform. “AI can detect and remove content that goes against our Community Standards before anyone reports it,” the company states on its website. AI is also a central component of ChatGPT’s automated text production: Toxic content must be filtered out to make the chatbot’s output suitable for widespread use. Indeed, all major online platforms likely rely on AI support for content moderation. One reason is that deploying AI systems is cheaper than relying exclusively on human moderation. Another is that the job is psychologically taxing: Moderators are constantly exposed to disturbing content circulating on the internet.
But AI moderation systems also rely on human decision-making, and not just for murky moderation cases. Moderators provide the training data for these systems and are thus a precondition for the systems being developed in the first place.
Nevertheless, moderators face extremely poor working conditions. Given the psychologically stressful nature of the work, large platforms have a heightened obligation to take care of their moderators. Instead, there are constant reports of subcontractors for Facebook, TikTok or OpenAI underpaying their moderators and clickworkers, failing to offer them adequate psychological support, and exerting extreme pressure on them through constant monitoring and through threats aimed at preventing unionization. The French company Teleperformance, which provides moderation services to TikTok, among other platforms, was recently the target of all these accusations. In response to an investigative report, the Colombian Labor Ministry ordered an investigation into working conditions at Teleperformance sites in the country. Public pressure on Teleperformance subsequently grew so great that the UNI Global Union was able to reach a worldwide agreement with the company in December 2022, securing greater rights for workers and better protections for their health.
Such steps are urgently needed to hold AI system manufacturers to fair working conditions along the entire value chain. Current European digital policy projects such as the AI Act, the Platform Worker Directive and the Data Act ignore the problem. But if the development and use of AI systems is to adhere to European values, as formulated in the AI Act, then the EU cannot close its eyes to poor working conditions, and not just in the case of online moderators.