- The Cambridge Dictionary is adding a new definition of the word “hallucinate” because of AI.
- Hallucination is the phenomenon where AI convincingly spits out factual errors as truth.
- It’s a word that also captures one of the AI industry’s key challenges: misinformation.
The Cambridge Dictionary’s newly crowned word of the year is a familiar one, but it’s taking on a new meaning because of AI.
On November 15, the organization announced that “hallucinate” would take on a new definition beyond just seeing or hearing something that does not exist. The word’s entry in the dictionary will be amended to include:
When an artificial intelligence (= a computer system that has some of the qualities that the human brain has, such as the ability to produce language in a way that seems human) hallucinates, it produces false information.
Hallucination is a widely used term in the AI industry, referring to incidents in which AI convincingly presents inaccuracies as though they were truth, sometimes with damaging consequences.
News outlets including Gizmodo and CNET, as well as Microsoft, have all landed in hot water over errors found in their AI-written articles. A lawyer told Insider on Friday that he was fired for using ChatGPT to help him improve a motion after the chatbot cited lawsuits that did not exist.
In February, Morgan Stanley analysts wrote that one of ChatGPT’s key shortcomings is that it occasionally makes up facts, an issue the analysts said they expect will persist for “the next couple of years.” Business leaders and misinformation experts have also voiced concern that AI could worsen the spread of online misinformation.
“The fact that AIs can ‘hallucinate’ reminds us that humans still need to bring their critical thinking skills to the use of these tools,” Wendalyn Nichols, the Cambridge Dictionary’s publishing manager, wrote in the announcement.