"AI Godfather" Bengio said: It may take humans 10 years to solve AI risks and policy supervision issues

2024-08-24

Titanium Media App reported on August 24 that Yoshua Bengio, the Canadian computer scientist known as the "Godfather of AI", a pioneer of deep learning, and a winner of the 2018 Turing Award, has publicly stated that the launch of ChatGPT compressed the timeline of AI risks. Unfortunately, we may not have ten years to resolve AI risk and policy regulation issues; the window is shorter, so the industry needs to act now and accelerate the implementation of AI safety policies.

"Policy takes a lot of time to develop, it requires resolving years of disagreements, legislation like this takes years to enact, some processes need to be established, it takes many years to really do this well," Bengio told Bloomberg, saying he now worries that there may not be much time to "get this done right."What's worse is that AI risk is not only a regulatory issue, it is also an international treaty, and we need to solve this problem on a global scale.

It is reported that Bengio was born in Paris and grew up in Canada. He now lives in Montreal, where he is a professor in the Department of Computer Science and Operations Research at the University of Montreal. He received a Ph.D. in computer science from McGill University in Canada in 1991. His main research areas are deep learning and natural language processing.

Over more than 30 years of deep learning research, Bengio has published more than 300 academic papers, which have been cited more than 138,000 times. Andrew Ng once said that much of Bengio's theoretical work greatly inspired him.

In 2018, he shared the Turing Award with Geoffrey Hinton, the "father of neural networks", and Yann LeCun, the "father of convolutional networks", for their pioneering work on deep learning. The three are known in the AI industry as the "AI Big Three", the "Deep Learning Big Three", and the "Three AI Godfathers".

Bengio, one of the "three giants of deep learning", has pointed out that the term AI is being abused: some companies anthropomorphize AI systems as if they were intelligent entities on a par with humans, when in fact no such entity yet exists. Even so, he still loves AI technology.

Bengio said that before ChatGPT appeared at the end of 2022, he believed real AI risks were at least decades away. For a time he did not worry much about AI safety, believing that humans were still "decades away" from developing AI technology that could rival our own intelligence. He assumed we could "reap the benefits of AI" for many years before facing the risks.

The launch of ChatGPT completely changed his mind. He is no longer so sure: his assumptions about the pace of AI development, and about the potential threats the technology poses to society, were upended when OpenAI released ChatGPT at the end of 2022, because AI approaching human level had arrived.

"Since we finally have machines that can have conversations with us (ChatGPT), I've completely changed my mind," Bengio said. "We now have AI machines that can master language. We didn't expect to do that so quickly. I don't think anyone really expected that, even the people who build these systems. You can see that people think it may be years, decades, or even longer before we get to human-level intelligence. There is a general consensus that once we get to human-level intelligence, it's hard to predict what will happen - whether we will end up with something very, very good or something very, very bad, and how we will respond. There is a lot of uncertainty. That's what drives my current work in science and policy."

After ChatGPT sparked a global discussion about AI risks, Bengio began to devote more energy to advocating for AI regulation. Together with fellow academic and AI pioneer Geoffrey Hinton, he publicly supported a controversial California AI safety bill, SB 1047, which would hold companies liable for catastrophic harm caused by their AI models if they fail to take safety precautions.

Bengio believes it is the "second most important proposal on the table" after the EU's AI Act.

On August 15 local time, the controversial California Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (hereinafter, the California AI safety bill SB 1047) finally passed the California State Assembly's Appropriations Committee after its provisions were significantly weakened.

SB 1047 aims to hold developers accountable in order to prevent large-scale AI systems from causing mass casualties or cybersecurity incidents with losses exceeding $500 million. Clearing the committee is an important step toward the bill becoming law, and an important step for AI regulation in the United States.

The California AI safety bill remains controversial, however. Stanford University professor Fei-Fei Li, Meta chief scientist Yann LeCun, and others argue that it would ultimately undermine California's, and even the United States', leading position in AI. More than 100 academics have published articles in opposition, and venture capital firm a16z has set up a website listing the bill's "six sins".

Even former U.S. House Speaker Nancy Pelosi issued a statement opposing the bill, calling the California AI safety bill well-intentioned but ill-informed. Many prominent scholars and heads of California technology companies have also voiced opposition, arguing that it would do more harm than good.

The latest to oppose California's AI bill is OpenAI, which said in a statement this week that the legislation would harm AI innovation and could have "broad and significant" effects on the United States' competitiveness in AI and national security.

Bengio, however, said the California AI safety bill avoids being too prescriptive and instead uses liability to ensure AI companies don’t ignore safety precautions “that a reasonable expert would do.” “It will create an incentive for companies not to be the worst student in the class.”

Bengio noted that discussions around AI regulation are likely to be influenced by venture capitalists and companies looking to profit from the technology. “You can draw analogies to climate change, fossil fuel companies, etc.,” he said.

"That's not the case, it's quite the opposite. It's not that binding. It doesn't exert much influence. It just says: if you cause billions of dollars in damage, you're responsible. The whole computing field has been somewhat immune to any kind of regulation for decades. I think (the opposition) is more ideological. I'm really worried that the power of the lobbying groups and the trillions or even quadrillions of profits brought by Ai (said Stuart Russell, professor at the University of California, Berkeley and founder of the Center for Artificial Intelligence Systems) will incentivize companies to oppose any regulation." Bengio said.

Speaking of risk management, Bengio noted, "I consider myself agnostic about the risk. I've heard a lot of good arguments that AIs smarter than us could be very dangerous. Even before they're smarter than us, they could be used by humans in ways that are extremely dangerous to democracy. Anyone who says they know is overconfident, because the fact is that science doesn't have an answer. There's no way to answer it in a verifiable way. We can see arguments on either side, and they're all reasonable. But if you're agnostic and you're thinking about policy, you need to protect the public from really bad outcomes. That's what governments have been doing in many other areas; there's no reason they can't do it in computing and AI."

Bengio repeatedly stressed that he had not expected machines to master conversational AI so quickly, or to reach so nearly human a level.

"I think the desire to innovate quickly on dangerous things is unwise. In biology, when researchers discovered that they could make dangerous viruses through so-called gain-of-function studies, they collectively decided, well, we're not going to do that anymore. Let's ask governments to set rules so that if you do that, at least in an uncontrolled way, you go to jail," Bengio said. "I'm sure you're aware of synthetic child pornography, which is maybe less emotionally challenging, but more importantly, the same deepfakes could distort society. Those deepfakes are only going to get better. The sound is going to get better. The video is going to get better. And one thing people don't talk about enough: the ability of current and even future systems to convince people to change their minds. There was a new study out of Switzerland just a few months ago where they compared GPT-4 to humans to see who could convince people who clearly didn't know if they were talking to a person or a machine to change their minds about something. Guess who won."

“We should be taking a similar precautionary approach with AI,” Bengio said.

Looking ahead, Bengio stressed that AI applications can still change human life in healthy ways. In biology, for example, AI could help us understand how the body and each individual cell work, potentially revolutionizing fields such as medicine and drug development; in climate, it could help produce better batteries, better carbon capture, and better energy storage.

"It hasn't happened yet, but I think these are all great things we can do with AI. And by the way, most of these applications are not dangerous. So we should invest more in things that can clearly help society meet its challenges," Bengio said at the end of the conversation.