Friday, March 26

A Question of Ethics

(CNN Business) In September, Timnit Gebru, then co-leader of the ethical AI team at Google, sent a private message on Twitter to Emily Bender, a computational linguistics professor at the University of Washington.

"Hi Emily, I'm wondering if you've written something regarding ethical considerations of large language models or something you could recommend from others?" she asked, referring to a buzzy kind of artificial intelligence software trained on text from an enormous number of webpages.

The question may sound unassuming, but it touched on something central to the future of Google's foundational product: search. This kind of AI has become increasingly capable and popular in the last couple of years, driven largely by language models from Google and research lab OpenAI. Such AI can generate text, mimicking everything from news articles and recipes to poetry, and it has quickly become key to Google Search, which the company said responds to trillions of queries each year. In late 2019, the company started relying on such AI to help answer one in 10 English-language queries from US users; almost a year later, the company said the technology was handling nearly all English queries and was also being used to answer queries in dozens of other languages.

"Sorry, I haven't!" Bender quickly replied to Gebru, according to messages viewed by CNN Business. But Bender, who at the time mostly knew Gebru from her presence on Twitter, was intrigued by the question. Within minutes she fired back several ideas about the ethical implications of such state-of-the-art AI models, including the "Carbon cost of creating the damn things" and "AI hype/people claiming it's understanding when it isn't," and cited some relevant academic papers.

Gebru, a prominent Black woman in AI — a field that's largely White and male — is known for her research into bias and inequality in AI. It's a relatively new area of study that explores how the technology, which is made by humans, soaks up our biases. The research scientist is also cofounder of Black in AI, a group focused on getting more Black people into the field. She responded to Bender that she was trying to get Google to consider the ethical implications of large language models.

Bender suggested co-authoring an academic paper looking at these AI models and related ethical pitfalls. Within two days, Bender sent Gebru an outline for a paper. A month later, the women had written that paper (helped by other co-authors, including Gebru's co-team leader at Google, Margaret Mitchell) and submitted it to the ACM Conference on Fairness, Accountability, and Transparency, or FAccT.

The paper's title was "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" and it included a tiny parrot emoji after the question mark. (The phrase "stochastic parrots" refers to the idea that these enormous AI models are pulling together words without truly understanding what they mean, similar to how a parrot learns to repeat things it hears.)
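To make the metaphor concrete, here is a toy illustration in Python, not taken from the paper or from Google's systems: a tiny bigram model that strings words together based purely on which words followed which in its training text, with no notion of meaning. The names training_text, transitions, and generate are made up for this sketch, and real large language models are vastly more sophisticated, but the "parrot" critique targets this same basic mechanism of statistical repetition.

import random
from collections import defaultdict

# Toy training text for the "parrot" to imitate.
training_text = (
    "the model generates text the model repeats patterns "
    "the parrot repeats words the parrot generates text"
)

# Count which word follows which in the training text.
transitions = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=8):
    """Build a sentence by repeatedly sampling a random observed successor."""
    output = [start]
    for _ in range(length):
        successors = transitions.get(output[-1])
        if not successors:
            break  # dead end: this word never appeared mid-text in training
        output.append(random.choice(successors))
    return " ".join(output)

print(generate("the"))
# Possible output: "the parrot generates text the model repeats words the"

Output from a program like this can look locally fluent while the program itself has no representation of what any word means, which is the gap the paper's authors argue persists, in far more polished form, in much larger models.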
