NEW YORK (CNN) — Geoffrey Hinton, considered the “godfather of artificial intelligence,” confirmed on Monday that he left his position at Google last week, warning of the “dangers” of the technology he helped create.
Hinton’s pioneering work on neural networks shaped the artificial intelligence systems that power many of today’s products. He spent a decade working part-time at Google on the tech giant’s AI development efforts, but he has now grown concerned about the technology and about the role he played in its development.
“I console myself with the usual excuse: If I hadn’t done it, it would have been someone else,” Hinton said. He first announced his decision to The New York Times.
In a tweet on Monday, Hinton said he resigned from Google so he could speak freely about the dangers of artificial intelligence, not to criticize Google specifically.
“I left so that I could talk about the dangers of AI without considering how this impacts Google,” Hinton said in the tweet. “Google has acted very responsibly.”
Jeff Dean, Google’s chief scientist, said Hinton had “made fundamental advances in AI” and praised Hinton’s “decade of contributions at Google.”
“We are committed to a responsible approach to AI,” Dean said in a statement to CNN.
“We continue to learn to understand emerging risks while boldly innovating.”
Hinton’s decision to step away from the company and speak out about the technology comes as a growing number of lawmakers, advocacy groups and tech insiders raise alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs.
ChatGPT gained widespread attention late last year, renewing an arms race among tech companies to build and deploy similar AI tools in their products. OpenAI, Microsoft and Google are leading the trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies.
In March, some leading tech figures signed a letter calling on AI labs to halt training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” The letter, published by the Future of Life Institute, a nonprofit backed by Elon Musk, came two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that powers ChatGPT. In early tests and a company demo, GPT-4 was used to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.
In his interview with The New York Times, Hinton echoed concerns about AI’s potential to eliminate jobs and create a world where many people “will not be able to know what is true anymore.” He also pointed to a pace of progress far beyond what he and others had anticipated.
“The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton said in the interview. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Even before parting ways with Google, Hinton had spoken publicly about AI’s potential to do harm as well as good.
In a 2021 commencement address at the Indian Institute of Technology in Bombay, Hinton said the rapid progress of AI would “transform society in ways we do not fully understand, and not all of the effects are going to be good.” He noted that AI would boost health care but would also create opportunities for lethal autonomous weapons. “I find this prospect much more immediate and much more terrifying than the prospect of robots taking over, which I think is a very long way off,” he said.
Hinton isn’t the first Google employee to raise alarms about AI. In July, the company fired an engineer who claimed an unreleased AI system had become sentient, saying he had violated data security and employment policies. Many in the AI community strongly rejected the engineer’s claim.
— Samantha Murphy Kelly and Ramisha Maruf contributed reporting.