Monday, February 6

ChatGPT


In a complicated story from TIME, details emerged about the work between OpenAI, the company behind ChatGPT, and Sama, a San Francisco-based firm that employs workers in Kenya. Sama works with some of the biggest companies in the tech industry, including Google, Meta, and Microsoft, labeling explicit content in images and text.

Microsoft has already invested $1 billion in OpenAI, with a possible $10 billion more on the way. Microsoft plans to put AI into everything and reportedly intends to integrate ChatGPT into Bing.

Sama is based in San Francisco, but the work is performed by workers in Kenya earning between $1.32 and $2 per hour. Unfortunately, to keep ChatGPT “safe” for users, OpenAI needs to feed it a lot of data from the internet, all of it unfiltered. So instead of using humans to filter out all the bad stuff directly, OpenAI (and companies like Meta with Facebook) employ other AI tools to remove that content from the data pool automatically. But those filtering tools have to be trained first, and training them requires humans to read and label examples of the very content the tools are meant to catch, which is the work Sama's employees were doing.
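To make that pipeline a bit more concrete, the minimal sketch below (in Python, using scikit-learn) shows the general idea of how human-labeled examples become an automatic filter. It is purely illustrative, with made-up placeholder data, and is not OpenAI's actual system.

# Illustrative sketch only: train a tiny text classifier on human-labeled
# examples, then use it to screen new text automatically.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder labeled data; in practice, labels like these are what
# contract workers produce by reading and tagging real content.
texts = [
    "an ordinary, harmless sentence",
    "another benign example of everyday text",
    "an example a labeler marked as harmful",
    "another example a labeler marked as harmful",
]
labels = [0, 0, 1, 1]  # 0 = safe, 1 = harmful

# Turn the text into features and fit a simple classifier on the labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# New, unlabeled text scraped from the web can now be screened automatically.
for doc in ["some new text pulled from the internet"]:
    verdict = "filtered out" if model.predict([doc])[0] == 1 else "kept"
    print(verdict, "->", doc)

The point of the sketch is simply that the automatic filter is only as good as the human labels behind it, which is why this labeling work exists in the first place.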

As in the 2019 story from The Verge about Facebook's content moderators, which highlighted the psychological impact of such content on workers, Sama employees suffered a similar fate:

“One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.”

The overall contract with Sama was worth $200,000, and it stipulated that OpenAI would pay “an hourly rate of $12.50 to Sama for the work, which was between six and nine times the amount Sama employees on the project were taking home per hour.”

Later, Sama began to pilot a new project for OpenAI unrelated to ChatGPT. This time, however, the content was imagery instead of text, including some illegal under US law, such as child sexual abuse, bestiality, rape, sexual slavery, and death and violence. Again, workers were to view and label the content so that OpenAI's systems could filter out such things. READ MORE...
