Cats on the moon? Google’s AI tool is producing misleading responses that have experts worried

Category: Technology/Innovations


Unlocking Word Meanings

Read the following words/expressions found in today’s article.

  1. falsehood / ˈfɔls hʊd / (n.) – something that’s not true
    Example:

    The suspect was only telling falsehoods in court.


  2. unleash / ʌnˈliʃ / (v.) – to release something suddenly or cause it to happen or begin
    Example:

    The students unleashed their creativity during the workshop.


  3. perpetuate / pərˈpɛtʃ uˌeɪt / (v.) – to cause a situation, perspective, etc., especially a bad one, to continue to happen or exist for a long time
    Example:

    Ignoring bad habits will only perpetuate them.


  4. untrustworthy / ʌnˈtrʌstˌwɜr ði / (adj.) – cannot be trusted
    Example:

    People need to be careful of untrustworthy online sellers who are out to fool buyers.


  5. doctor / ˈdɒk tər / (v.) – to make changes to something to fool people
    Example:

    The reports are doctored to hide the financial losses.


Article

Read the text below.

Ask Google if cats have been on the moon and it used to spit out a ranked list of websites so you could discover the answer for yourself. Now it comes up with an instant answer generated by artificial intelligence—which may or may not be correct.


“Yes, astronauts have met cats on the moon, played with them, and provided care,” said Google’s newly retooled search engine in response to a query by an Associated Press reporter. It added, “For example, Neil Armstrong said, ‘One small step for man’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission.” 


None of this is true. Similar errors—some funny, others harmful falsehoods—have been shared on social media since Google last month unleashed AI Overviews, a makeover of its search page that frequently puts the summaries on top of search results.


The new feature has alarmed experts who warn it could perpetuate bias and misinformation and endanger people looking for help in an emergency.


“Given how untrustworthy it is, I think this AI Overviews feature is very irresponsible and should be taken offline,” Melanie Mitchell, an AI researcher at the Santa Fe Institute, said in an email to the AP.


Google said in a statement that it’s taking “swift action” to fix errors that violate its content policies and using them to “develop broader improvements” that are already rolling out. But in most cases, Google claims the system is working the way it should, thanks to extensive testing before its public release.


“The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web,” Google said in a written statement. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce.”


This article was provided by The Associated Press. 


Viewpoint Discussion

Enjoy a discussion with your tutor.

Discussion A

  • What do you think about Google’s use of artificial intelligence to quickly answer questions, even if the answers are not always right? What problems does misinformation create? Discuss.
  • Do you think AI tools should be regulated? Why or why not? Discuss.

Discussion B

  • Do you easily believe the information you find online? What skills do you think are important so that people can determine whether the information is correct or incorrect (ex. research skills, critical thinking)? Discuss.
  • Do you think it would be easy or difficult to develop these skills? Why do you say so? Do you think your country needs programs that inform and alert people of the misinformation that can be found online? Why or why not? Discuss.