
"the next round of opportunities in artificial intelligence: from 'processing' to true 'understanding'"

2024-09-06


Is artificial intelligence a helper or an opponent? On September 5, during the 2024 Inclusion·Bund Conference, Zeng Yi, an expert on the United Nations High-level Advisory Body on Artificial Intelligence, director of the Beijing Artificial Intelligence Security and Governance Laboratory, and a researcher at the Institute of Automation of the Chinese Academy of Sciences, told media including The Paper that artificial intelligence will not be a rival to humans, but that if it is developed and used irresponsibly, humans and artificial intelligence will not even get the chance to compete.
He believes that, as an information-processing tool, AI can do less than most people imagine: it plays a role in every job, but not a disruptive one, and it will remain at the "tool" stage for a long time. The next round of opportunities will come when AI moves from "processing" to genuine "understanding". To get there, he argues, researchers must return to basic research and explore the mechanisms underlying artificial intelligence and the essence of computation.
Zeng Yi is an expert on the United Nations High-level Advisory Body on Artificial Intelligence, director of the Beijing Artificial Intelligence Security and Governance Laboratory, and a researcher at the Institute of Automation, Chinese Academy of Sciences.
Data and algorithms create bias
Artificial intelligence technology is changing by the day, and the governance challenges facing humanity are growing with it. For technology researchers, the scientific breakthrough itself is the easiest part. Harder than the breakthrough is thinking through the negative effects the science may have on society, and harder than all of that is resolving the potential risks the breakthrough creates.
"if handled properly, ai can bridge the digital divide, but if handled improperly, ai will lead to a greater intelligence divide." zeng yi said that digitalization has already created differences in fairness, and ai shortens the interface with human intelligence. the convenience of obtaining information and knowledge will create a greater generation gap between the digital divide and the intelligence divide, affecting one or even several generations. how to avoid the intelligence divide is by no means a problem that can be completely solved by ai and scientific researchers, technology developers and those who are not familiar with it. the intelligence divide is a technical and social issue.
"many people believe that artificial intelligence technology is neutral, and the most important thing is how people use this technology. but artificial intelligence is not, and the starting point of artificial intelligence is data and algorithms." zeng yi said that both data and algorithms may cause artificial intelligence to have bias. data comes from society, and data in society is a record of human behavior. the statistical significance of data is biased. artificial intelligence that learns from human data not only learns human bias, but also amplifies bias. "for example, if you ask ai to recommend a career, for a female, 20 years old, the recommendation is nurse or waiter. has anyone recommended ceo given these conditions? no. for a male, 35 years old, with a good education, it recommends ceo or cto. this is a statistically significant bias."
Norbert Wiener, the founder of modern cybernetics, wrote in Science in 1960: "We had better be very sure that what we program a machine to do is consistent with our original intention." The data artificial intelligence learns from contains both good and evil. Humans can stipulate that AI must not display the evil in certain scenarios, but Zeng Yi said this does not mean it will not do so. "It is impossible for humans to enumerate every situation clearly," he said. "The human limitations hidden in the data are problems that humans themselves rarely reflect on, yet machines have now learned them, and we have not yet worked out the potential risks in how machines apply that data."
Helper or opponent?
Is artificial intelligence a helper or an opponent? Zeng Yi believes humans need to shape it into a helper. "If we develop artificial intelligence irresponsibly, let it go its own way, or even pursue only short-term interests, it may become an opponent."
The emergence of artificial intelligence often makes people worry that AI will take away human jobs. Zeng Yi said that "the gentleman is not an instrument" in The Analects means that a gentleman is not like an instrument whose use is confined to a single purpose. "In the future, new forms of work will emerge and gradually be recognized by society, because as more and more jobs can be replaced by artificial intelligence, the irreplaceable parts of being human will become clearer and clearer, forcing us to return to our roots and to what we should be doing."
He believes that the development and long-term application of artificial intelligence will force humans to think about what being human means and what humans should do. "When an ever larger share of the data and knowledge on the internet is written by artificial intelligence, and that data and knowledge is fed back in to train artificial intelligence, the ability of artificial intelligence will become weaker and weaker. So I think AI will replace some jobs, just as in the steam-engine era and the computer era. The short-term anxiety technology brings may push more people back to where they should be."
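The weakening Zeng Yi describes when AI-generated content is fed back into training is often illustrated with a recursive-fitting toy experiment. The sketch below, a Gaussian repeatedly refit to samples drawn from the previous generation's fit, is an assumed illustration of that feedback loop, not a description of any particular system.

    # Toy illustration of training on model-generated data, generation after
    # generation: each step fits a Gaussian to samples produced by the previous
    # fit. With finite samples, the fitted spread tends to collapse and the
    # tails of the original "human" distribution are gradually forgotten.
    import numpy as np

    rng = np.random.default_rng(42)

    # Generation 0: "human" data from a standard normal distribution.
    data = rng.normal(loc=0.0, scale=1.0, size=20)

    for generation in range(101):
        mu, sigma = data.mean(), data.std()
        if generation % 20 == 0:
            print(f"gen {generation:3d}: mean = {mu:+.3f}, std = {sigma:.3f}")
        # The next generation sees only what the current model produced.
        data = rng.normal(loc=mu, scale=sigma, size=20)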
Zeng Yi believes that the potential risks of artificial intelligence should be addressed by exploring the deep integration of society and technology. Just as with the self-driving taxi service "Carrot Run", many parts of society are not yet fully prepared to welcome the technology. "We can never rely on technology alone to resolve potential risks, and we should not place our hopes solely on technology researchers."
"many technology researchers, especially entrepreneurial technology researchers, tell you that it will be too late if you don't do it now, or that if you don't develop, the opportunity will be taken by others. in fact, i can't say that generative artificial intelligence is a very obvious bubble like the first three rounds of artificial intelligence development, but as an information processing tool, it can do less than everyone imagines. this is the current stage. it will play a certain role in every job, but it is not a subversive role."
Zeng Yi said that artificial intelligence today is an intelligent-seeming information-processing tool, and it will remain at the "tool" stage for a long time; AI without a self has no chance of "understanding". The next round of opportunities, at least, lies in transforming AI from "processing" to genuine "understanding". To that end, researchers must return to basic research, explore the mechanisms underlying artificial intelligence and the essence of computation, and move from big data and high computing power toward small data, small tasks, high intelligence, and low energy consumption. "This is the direction we should really be developing in the future: we must transform artificial intelligence from data-driven to mechanism-driven."
The Paper reporter Zhang Jing
(This article is from The Paper. For more original news, please download "The Paper" app.)