The hidden humans powering the AI economy

Since January, Tina Lynn Wilson of Hamilton, Ont., has been freelancing for a company called DataAnnotation.
The 45-year-old says she loves the work, which involves checking responses from an AI model for grammar, accuracy and creativity. It calls for analytical skills and an eye for detail — and she also gets some interesting projects, like choosing the better of two samples of poetry.
“Because it is a creative response, there would be no fact-checking involved. You would have to indicate … what the better reply is and why.”
The work Wilson does is part of a huge, yet not well-known, network of gig workers in the emerging AI economy. Companies such as Outlier AI and Handshake AI hire them to be "artificial intelligence trainers," contracting with large AI platforms to help them train their models.
Some data annotation work is poorly paid — even exploitative, in other parts of the world — but there's a broad range of jobs in training, tending to and correcting AI. It's labour the tech giants seem to prefer not to talk about. And as models advance, they will require more specialized training — meaning companies may soon no longer need many of the very humans who helped make them what they are today.

We often hear that today's generative AI is trained on vast amounts of data to teach it how human ideas typically go together. Sometimes called pre-training, that’s only the first step. For these systems to produce responses that are accurate, useful and not offensive, they need to be further refined, especially if they're going to work in narrow fields in the real world.
This is called fine-tuning, and it relies on human expertise. It's basically gig work: done on a per-assignment basis, without guaranteed hours. The Canadian AI trainers we spoke to made about $20 an hour, though some more specialized work can pay around $40. Still, inconsistency can be a problem.
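To make that concrete, here is a minimal, illustrative sketch of what supervised fine-tuning looks like in Python: a pre-trained language model is trained a little further on a handful of human-written prompt-and-answer examples. The model name, the toy examples and the settings below are assumptions for illustration only, not drawn from any company's actual pipeline.

import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

# Start from a small, publicly available pre-trained model (illustrative choice).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical human-written examples of the kind annotators produce or correct.
examples = [
    "Q: Summarize the water cycle in one sentence.\nA: Water evaporates, condenses into clouds and returns as precipitation.",
    "Q: Is a tomato a fruit or a vegetable?\nA: Botanically it is a fruit, though cooks treat it as a vegetable.",
]

optimizer = AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(2):
    for text in examples:
        batch = tokenizer(text, return_tensors="pt", truncation=True)
        # For causal language models, the labels are the input tokens themselves.
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

In real systems the examples number in the thousands, and they are written, checked and corrected by paid annotators; that is the labour this story describes.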
"You cannot rely on this as a main source of income," said Wilson, who described her work as that of a generalist. Many other annotators consider it a side hustle as well, she said.

Reinforcement learning from human feedback is a type of fine-tuning that relies on people evaluating AI outputs.
Wilson’s work involves evaluating how “human” an AI response sounds.
“This is especially true for voice responses,” she said. “‘Is this something a human would like to hear?’”
So, when ChatGPT or Claude sounds uncannily human, that's because humans have trained it to be so.
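The preference judgments that annotators like Wilson supply typically go into training a so-called reward model, which learns to score the response humans chose higher than the one they rejected. Below is a rough, self-contained Python sketch of that step using the PyTorch library; the toy data and tiny network are stand-ins for illustration, not any company's actual system.

import torch
import torch.nn as nn

# Stand-ins for numerical representations of (chosen, rejected) response pairs.
# In practice these would come from a language model's internal states.
chosen = torch.randn(8, 16)    # eight responses annotators preferred
rejected = torch.randn(8, 16)  # eight responses they passed over

reward_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

for step in range(100):
    score_chosen = reward_model(chosen)
    score_rejected = reward_model(rejected)
    # Pairwise preference loss: push the preferred response's score above the
    # rejected one's. The trained reward model is then used to steer the
    # chatbot toward answers people rate as better.
    loss = -torch.log(torch.sigmoid(score_chosen - score_rejected)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()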
"It's still the output of a software product," said Brian Merchant, a tech journalist specializing in labour and digital technology. "You need quality assurance of the output of a commercial, for-profit, software product.”

Outlier AI has more than 250,000 active contributors across 50 countries, said Fiorella Riccobono, a spokesperson for Scale AI, its parent company. Eighty-one per cent hold at least a bachelor’s degree, she said. The company was not able to provide Canada-specific numbers.
A possibly changing market

There are signs that the market for this work is changing, with less demand for the generalized labour that people like Wilson do. Scale AI recently laid off generalist workers in Dallas, according to Business Insider, in a shift toward more technical training. Meanwhile, newer, more advanced models, like those from Chinese firm DeepSeek, have automated parts of the reinforcement process.
"Demand for contributors with specialized knowledge and advanced degrees has grown significantly as AI systems become more complex,” said Riccobono.
Eric Zhou, 26, was one of these specialized workers. After studying materials and nanoscience at the University of Waterloo, he freelanced for Outlier AI part time for about a year. There, he assessed prompts and AI responses about undergrad-level physics and chemistry, and corrected answers.

"It's very fun work if you're just doing the science questions,” he said. “So that problem-solving part, I really enjoyed."
He found, however, that tasks could take more time than the company allotted, so a job listed at $20 for an hour of work could end up taking longer, with no additional pay.
There seems to be no shortage of Canadians working in specialized fields who are looking to supplement their income by improving AI, including a number of Zhou’s friends.
That means workers feel they can constantly be replaced, he said.
‘Digital sweatshops’

Still, AI as a whole relies on a global supply chain throughout the training process, much of which is outsourced to workers in lower-wage countries. This can mean fine-tuning data, but a lot of the work is in data labelling, which can be gruelling.
The number of people employed in this field is estimated to be in the millions. Companies have been accused of exploiting lax labour laws in regions like East Africa and Southeast Asia.
"There are a variety of what you could call digital sweatshops in anywhere from the Philippines to Kenya, where workers are essentially transforming these data sets into products that can be used by AI,” said James Muldoon, co-author of the book Feeding the Machine, about human labour and hidden costs of powering AI.

He says the tasks can be brutal, as he found in field work in Kenya and Uganda, where people worked up to 70-hour weeks for just over a dollar an hour, under conditions he called “really appalling.”
While many of the annotators had ambitions to work in the tech sector more meaningfully, he said they found themselves stuck doing “really boring, excruciating” tasks.
AI companies typically don’t focus on the human labour that powers automation. Merchant, the tech journalist, says these firms typically want to show their product “feels magical, feels powerful, feels like it’s the future.”
“Very rarely do you have a job that's completely taken over by machinery, especially in industrial settings,” he said.
“What you usually have is a technology that can get you part of the way.”
