Richard Mathenge was part of a team of contractors in Nairobi, Kenya, who trained OpenAI's GPT models. He worked as a team lead at Sama, an AI training company that partnered with OpenAI on the project. In this episode of Big Technology Podcast, Mathenge tells the story of his experience. During the training, he was routinely exposed to sexually explicit material and offered insufficient counseling, and some members of his team were paid as little as $1 per hour. Listen for an in-depth look at how these models are trained, and at the human side of Reinforcement Learning from Human Feedback.
---
Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.
For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/
Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
---
OpenAI's response:
We engaged Sama as part of our ongoing work to create safer AI systems and prevent harmful outputs. We take the mental health of our employees and our contractors very seriously. One of the reasons we first engaged Sama was because of their commitment to good practices. Our previous understanding was that wellness programs and 1:1 counseling were offered, workers could opt out of any work without penalization, exposure to explicit content would have a limit, and sensitive information would be handled by workers who were specifically trained to do so. Upon learning of Sama worker conditions in February of 2021, we immediately sought more information from Sama. Sama simultaneously informed us that they were exiting the content moderation space altogether.
OpenAI paid Sama $12.50 per hour. We tried to obtain more information about worker compensation from Sama, but they never provided us with hard numbers. Sama did provide us with a study they conducted across other companies doing content moderation in that region, and shared that Sama's wages were 2-3x those of the competition.