For more than half a century, Geoffrey Hinton nurtured the technology at the heart of Artificial Intelligence (AI) wonders like ChatGPT, the new Bing and Bard, but now he has left Google, raising alarming concerns. Alarm bells have always sounded around tech innovations with even slight connections to AI.
The list of high-profile personalities condemning the invention is long and includes big names such as industrialist Elon Musk, intellectual Noam Chomsky and the 99-year-old retired statesman Henry Kissinger.
After the San Francisco startup OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because AI technologies pose “profound risks to society and humanity.”
The latest addition to the list is Hinton, widely considered the godfather of AI. In an interview, Hinton said he left Google so that he could speak more freely about AI and pinpoint the dangers the technology may pose in the near future.
Hinton, 75, retired from Google. “I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he told MIT Technology Review. “As long as I’m paid by Google, I can’t do that.”
Some of the dangers of AI chatbots are “quite scary,” Hinton told the BBC. “Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be.”
Hinton is concerned about “bad actors” who may use the technology for nefarious ends, such as interfering with elections or even instigating violence.
In leaving Google, he made it very clear that Google itself has been handling AI quite responsibly; it is the bad actors elsewhere that he fears. He told MIT Technology Review that there are also “a lot of good things about Google” that he would want to talk about — but those comments would be “much more credible if I’m not at Google anymore.”
Google confirmed that Hinton had retired from his role after 10 years overseeing the Google Research team in Toronto.
In 2012, Hinton and two of his graduate students at the University of Toronto, Ilya Sutskever and Alex Krizhevsky, created technology that became the intellectual foundation for today’s biggest AI systems. On May 1, however, he joined a growing chorus of critics who say those companies are racing towards danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
In the 1980s, Hinton was a professor of computer science at Carnegie Mellon University but left the university for Canada because he said he was reluctant to take Pentagon funding. At the time, most AI research in the United States was funded by the Defense Department. Hinton is deeply opposed to the use of AI on the battlefield — what he calls “robot soldiers.”
Hinton was one of three AI pioneers who in 2019 won the Turing Award, an honor that has become known as the tech industry’s version of the Nobel Prize. The other two winners, Yoshua Bengio and Yann LeCun, have also expressed concerns about the future of AI.
His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
He is also worried that AI technologies will in time upend the job market. Today, chatbots such as ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
At the heart of the debate on the state of AI is whether the primary dangers lie in the future or the present. On one side are hypothetical scenarios of existential risk posed by computers that surpass human intelligence. On the other are concerns about automated technology that is already being widely deployed by businesses and governments and can cause real-world harm.
Alondra Nelson, who until February led the White House Office of Science and Technology Policy, said: “For good or for not, what the chatbot moment has done is made AI a national conversation and an international conversation that doesn’t only include AI experts and developers.”
“AI is no longer abstract, and we have this kind of opening, I think, to have a new conversation about what we want a democratic future and a non-exploitative future with technology to look like,” Nelson said in an interview last month.