Behind most of today’s AI models lies the labor of workers in the Global South, who are exposed to disturbing content and poor working conditions. This reality raises urgent questions about the transparency and ethics of AI development.
Picture working 10-hour days tagging distressing images to train an AI model — and getting paid not in money, but in a kilogram of sugar. This isn’t dystopian fiction, but reality for some of the workers behind today’s most advanced artificial intelligence.
While the development of AI is undoubtedly enhancing the lives of many by streamlining processes and offering efficient solutions, it also raises a pressing question: What is the true cost of AI, and who is paying for it?
Antonio Casilli, Professor of Sociology at Télécom Paris and Founder of DipLab, addressed this question during an Esade seminar on the promises and perils of the digitalization of work. The event was part of the kick-off for the DigitalWORK research project, which explores how digital technologies are transforming work and promoting fair, equitable and transparent labor conditions, with Anna Ginès i Fabrellas and Raquel Serrano Olivares (Universitat de Barcelona) as principal investigators.
AI isn’t autonomous; it’s human-powered
To produce the polished, well-mannered AI that consumers use, a great deal of training has to happen behind the scenes. Mary L. Gray and Siddharth Suri coined the term ‘ghost work’ in their 2019 book of the same name to describe the invisible, repetitive labor that powers machine learning.
Manual, invisible labor is essential to the perception of machine intelligence
AI is not magic. “It is the result of thousands of micro-tasks performed by human workers behind the scenes,” Casilli said. These tasks include content moderation, image tagging, transcribing audio, and flagging hate speech. Someone has to teach AI what’s acceptable and what’s not — and that means workers are exposed to the very content AI is designed to filter out. An incredibly disturbing and unpleasant way to spend a working day.
The irony is that without these ‘ghost workers’ providing human input, AI would not appear intelligent at all.
The examples vary widely: from workers paid to tag photographs of dog excrement so that AI-powered vacuum cleaners can detect and avoid it, to a house in Madagascar where 120 people are crammed together watching live security footage from supermarkets in Europe and the US, sending alerts about potential shoplifters.
The latter isn’t AI training at all—it’s human labor masquerading as machine intelligence. These security systems are marketed as autonomous, yet rely on unseen workers halfway around the world to function. Invisible labor is essential to the perception of machine intelligence.
Digital sweatshops of the Global South
So, where does this hidden labor take place? According to Casilli’s research, workers are in countries including Kenya, India, the Philippines, and Madagascar — regions with high levels of digital literacy, access to English- or French-speaking workers, and little in the way of labor protection or union representation.
Many of these tasks are outsourced through platforms that hire freelancers, allowing companies to avoid labor laws and benefits. In one widely reported case, workers in Kenya were earning as little as $1.32 per hour for labeling toxic content to train large language models.
This labor regime is part of the global value chain of the AI industry. “We are witnessing the same dynamics that we’ve seen in other globalized industries — AI is just following the same paths as textiles or electronics, relying on cheap labor from the Global South to create value in the Global North,” explained Casilli. The Global North gets the benefit of ‘smart’ services and systems, while the Global South provides the cheap labor under exploitative conditions.
Even refugees in Europe, as noted during the seminar’s Q&A, are being drawn into this hidden labor economy, a worrying development in the ethics of AI deployment.
Psychological damage — AI moderation and mental health
Beyond the economic exploitation lies another cost: the psychological toll. Content moderation is among the most traumatic roles in the AI production pipeline. Workers tasked with filtering violent, abusive or sexually explicit content often suffer from post-traumatic stress disorder (PTSD) and long-term mental health problems.
“They are exposed daily to the worst of humanity online,” Casilli said. “They are not just undervalued — they are unrecognized. This is dangerous work, both mentally and physically.” And yet, these individuals are often framed as ‘freelancers’, with no access to mental health support, job security or basic workplace protections.
Uma Rani, Senior Economist at the International Labour Organization (ILO) and one of the seminar panellists, highlighted how many of these workers are initially recruited under misleading job descriptions, often told they will be working as data analysts or translators. Once hired, the reality is starkly different: they spend their days categorizing toxic content or annotating disturbing material. “They are required to sign non-disclosure agreements,” she explained, “which prevent them from speaking to anyone — even family — about the work they do.” This secrecy not only deepens the psychological burden, but also makes it nearly impossible for workers to organize, unionize or seek collective redress.
How can transparency be assured?
If AI is positioned as a tool for all, why does it mostly enrich Big Tech and the Global North?
Casilli encouraged participants to follow the ‘flows’ of AI: “Labor is extracted in Nairobi or Manila. Value is created in Silicon Valley, in Paris, in Beijing.” This uneven structure means that while AI promises global empowerment, it often replicates existing imbalances.
We can assess the actual ethics of an AI operation if we know, in a transparent way, how this AI is produced
How can we tackle these inequalities and prevent the digitalization of labor from replicating the injustices of the industrial era?
Ideas raised during the seminar’s Q&A suggest a potential path forward: developing new governance frameworks for AI labor, extending labor protections to gig and ghost workers, and increasing transparency in how AI systems are trained and maintained throughout the entire supply chain. There is also a growing call for global ethical standards and independent auditing of digital labor practices.
AI should not be an excuse to roll back labor rights. Instead, it should prompt a rethink about how we value work in the digital age.
The price of convenience
The worker who finishes a long day of content moderation shaken by the graphic images and hateful speech they have seen earns a mere $10, and sometimes nothing more than a bag of sugar or a sack of rice. Is that enough to offset the trauma and exploitation they have had to endure?
As AI becomes more embedded in our lives, it’s worth asking: What are we really outsourcing when we rely on machines? And at what human cost?
“We can assess the actual ethical nature of an AI operation if we know, in a transparent way, how this AI is produced,” said Casilli. “This is, in my opinion, the ethical challenge that these companies have to tackle — and that they do not tackle as of today.”
If AI is to be truly transformative, it must also be just — because convenience for one part of the world shouldn’t come at the expense of dignity for another.