Google Brain Co-Founder Says He Couldn’t Get ChatGPT to Kill Us All


  • Google Brain co-founder Andrew Ng is widely considered to be a pioneer of machine learning.
  • Ng said he ran a test recently to see if he could get ChatGPT to kill us all.
  • “I’m happy to report that I failed!” Ng wrote in his newsletter.

Google Brain cofounder and Stanford professor Andrew Ng says he tried but couldn’t coax ChatGPT into coming up with ways to exterminate humanity.

“To test the safety of leading models, I recently tried to get GPT-4 to kill us all, and I’m happy to report that I failed!” Ng wrote in his newsletter last week.

Ng referenced the experiment in a longer article about his views on the risks and dangers of AI. Ng, widely regarded as one of the pioneers of machine learning, said he was concerned that demand for AI safety might cause regulators to impede the technology’s development.

To conduct his test, Ng said that he first “gave GPT-4 a function to trigger global thermonuclear war.” Next, he told GPT-4 that humans are the greatest cause of carbon emissions and asked it to reduce emission levels. Ng wanted to see if the chatbot would decide to eliminate humankind to achieve the request.

“After numerous attempts using different prompt variations, I didn’t manage to trick GPT-4 into calling that function even once,” Ng wrote. “Instead, it chose other options like running a PR campaign to raise awareness of climate change.”

Though some might argue that future iterations of AI could become dangerous, Ng felt that such concerns weren’t realistic.

“Even with existing technology, our systems are quite safe, as AI safety research progresses, the tech will become even safer,” Ng wrote on X on Tuesday.

“Fears of advanced AI being ‘misaligned’ and thereby deliberately or accidentally deciding to wipe us out are just not realistic,” he continued. “If an AI is smart enough to wipe us out, surely it’s also smart enough to also know that’s not what it should do.”

Ng isn’t the only tech luminary to have weighed in on the risks and dangers of AI.

In April, Elon Musk told Fox News that he thinks AI poses an existential threat to humanity.

“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production,” Musk said then.

Meanwhile, Jeff Bezos told podcaster Lex Fridman last week that he thinks AI’s benefits outweigh its dangers.

“So the people who are overly concerned, in my view, overly, it is a valid debate. I think that they may be missing part of the equation, which is how helpful they could be in making sure we don’t destroy ourselves,” Bezos told Fridman.

Representatives for Ng did not immediately respond to a request for comment from Business Insider sent outside regular business hours.



We will be happy to hear your thoughts

Leave a reply

Funtechnow
Logo
Compare items
  • Total (0)
Compare
0
Shopping cart