Chatbot developers face a problem: because AI learns from publicly available texts, it can inevitably produce racist, sexist, and other undesirable statements. To address this, OpenAI, the developer of the ChatGPT chatbot, created a neural network to help find and filter such “toxic” statements.
To train it, OpenAI turned to Sama, a firm that hires workers from developing countries to perform monotonous, low-paid labeling work. Sama already had experience with Facebook, for which it supplied moderators to watch videos of executions and other similar content for minimal pay. Those moderators were promised $12 an hour but in reality received only $2, and were ultimately dismissed without bonuses.
According to Time, OpenAI signed three contracts with Sama totaling about $200,000 to label text passages. This labeling allows the neural network to improve its performance and avoid unwanted statements; in the end, the process should help create a more effective and ethical chatbot that better understands and responds to user needs.
Sama said that employees working on the OpenAI project were asked to label about 70 text passages per 9-hour shift, rather than the 250 stated in the contract. Wages ranged from $1.46 to $3.74 per hour after taxes. The company also noted that the $12.50-per-hour rate specified in the contract covers all costs, including salaries, infrastructure, and staff benefits.
In addition, employees who dealt with traumatic content could see qualified psychotherapists, individually or in a group, at any time. OpenAI acknowledged that it outsourced the work to Sama’s employees but noted that the work helped many of them escape poverty. Andrew Strait, an AI ethics specialist, said we should not forget that ChatGPT and other generative models are not magic: they are built on vast amounts of human labor and scraped data.