Former OpenAI employees lead push to protect whistleblowers flagging artificial intelligence risks

Category: Technology/Innovations

Unlocking Word Meanings

Read the following words/expressions found in today’s article.

  1. retaliation / rɪˌtæl iˈeɪ ʃən / (n.) – the act of doing something bad or harmful to someone, often because he/she has done something bad or harmful first
    Example:

    The company faced retaliation from the workers after cutting their benefits.


  2. buy into (something) / baɪ ˈɪn tu / (phrasal v.) – to believe in and accept something, such as an idea, concept, or suggestion
    Example:

    Tommy bought into the idea of a vegetarian diet after his sister showed him delicious meat-free recipes.


  3. rigorous / ˈrɪg ər əs / (adj.) – done very carefully and with strict attention to detail
    Example:

    The scientific research required rigorous testing to confirm the results.


  4. signatory / ˈsɪg nəˌtɔr i / (n.) – a person, country, or organization that has signed an official document or agreement
    Example:

    The international environmental agreement had over 50 signatories from different countries.


  5. perk / pɜrk / (n.) – a benefit that is received by someone as a result of his/her job, position, or situation
    Example:

    The new job offers several perks, including health insurance and a company car.


Article

Read the text below.

A group of OpenAI’s current and former workers is calling on the ChatGPT-maker and other artificial intelligence (AI) companies to protect employees who flag safety risks about AI technology.


An open letter published in June asks tech companies to establish stronger whistleblower protections, so researchers have the “right to warn” about AI dangers without fear of retaliation.


The development of more powerful AI systems is “moving fast and there are a lot of strong incentives to barrel ahead without adequate caution,” said former OpenAI engineer Daniel Ziegler, one of the organizers behind the open letter.


Ziegler said in an interview he didn’t fear speaking out internally during his time at OpenAI between 2018 and 2021, when he helped develop some of the techniques that would later make ChatGPT so successful. But he now worries that the race to rapidly commercialize the technology is putting pressure on OpenAI and its competitors to disregard the risks.


Another co-organizer, Daniel Kokotajlo, said he quit OpenAI earlier this year “because I lost hope that they would act responsibly,” particularly as it attempts to build better-than-human AI systems known as artificial general intelligence.


“They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood,” Kokotajlo said in a written statement.


OpenAI said in response to the letter that it already has measures for employees to express concerns, including an anonymous integrity hotline.


“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” said the company’s statement. “We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society, and other communities around the world.”


The letter has 13 signatories, most of them former employees of OpenAI and two who work or worked for Google’s DeepMind. Four are listed as anonymous current employees of OpenAI. The letter asks that companies stop making workers enter into “non-disparagement” agreements that can punish them by taking away a key financial perk, their equity investments, if they criticize the company after they leave.


This article was provided by The Associated Press.


Viewpoint Discussion

Enjoy a discussion with your tutor.

Discussion A

  • Current and former OpenAI employees are urging AI companies to establish stronger whistleblower protections for researchers who report AI safety risks. Why do you think protecting employees who flag safety risks about AI technology is important? Discuss.
  • Do you think all AI companies will willingly establish stronger whistleblower protections? Why or why not? What are the pros and cons of doing this for AI companies (ex. pro: users will trust their products, con: the company won’t be able to win the race in AI development)? Discuss.

Discussion B

  • Ziegler worries that the race to rapidly commercialize the technology is putting pressure on OpenAI and its competitors to disregard the risks. Why do you think companies are racing to commercialize AI? Should companies prioritize speed of development or thorough risk assessment when creating AI systems? Why? Discuss.
  • Kokotajlo said that companies move too fast when it comes to AI development, which is the opposite of what is needed for technology this powerful and this poorly understood. Do you agree that AI development is moving too fast? Why or why not? How do you think we should handle powerful technologies like AI (ex. by prioritizing safety, by spending years on research)? Discuss.