
Beijing Xiangshan Forum | How can emerging technologies be used for good? Strengthen international cooperation and jointly formulate safety standards

2024-09-15


On the battlefield of the Russia-Ukraine conflict, a Ukrainian suicide drone hovers over a Russian position, its camera continuously scanning the trenches. Once the onboard artificial intelligence detects and identifies human activity, the drone dives in to carry out a suicide attack.
The scene shifts to the sky over the Ukrainian positions, where a Russian unmanned reconnaissance aircraft is at work. After it spots tanks and armored vehicles, rear-area personnel quickly relay the information to the operators of the "Lancet" loitering munition, which then flies to the vicinity of the reported position. The Lancet also carries an artificial intelligence system that can rapidly identify tanks and armored vehicles, lock on, and attack; even under jamming, it can still complete the attack autonomously.
The above comes from combat footage of drones and loitering munitions released by the Russian and Ukrainian armies. The striking performance of artificial intelligence (AI) in the Russia-Ukraine conflict has attracted wide attention and will greatly accelerate the technology's application in the military field. With the rapid development of emerging technologies such as AI, how to ensure their safe application became a focus for all countries at this year's Beijing Xiangshan Forum.
On September 12, a special session of the forum's high-level interviews was devoted to AI security. On September 13, the forum's sixth group meeting, "Emerging Technologies for Good," discussed how AI, synthetic biology, and other emerging technologies can be used for good.
Artificial intelligence goes to the battlefield
In 2021, former US Secretary of State Henry Kissinger co-authored the book "The Age of AI: And Our Human Future," which set out his views on the development and impact of artificial intelligence. The book argues that AI will reshape global security and the world order, and offers both ideas and warnings about its rise. Kissinger held that AI weapons have three characteristics nuclear weapons lack: they are concentrated in large companies and the private sector; the technology is easier to copy and thus easier for non-state actors to obtain; and small countries also have a chance to master it.
The year after the book was published, the Russia-Ukraine conflict broke out. As it progressed, more and more AI-equipped weapons were sent to the battlefield: drones, unmanned combat vehicles, unmanned boats, loitering munitions, and more. AI has also been applied to satellite image recognition, decision support, and network and electronic warfare.
Meng Xiangqing, a professor at the National Defense University, pointed out in an interview with The Paper (www.thepaper.cn) that the First World War is regarded as the beginning of mechanized warfare, the Gulf War as the start of information warfare, and the current Russia-Ukraine conflict as raising the curtain on intelligent warfare. Although today's AI only shallowly empowers weapons and operations and has not yet had a major impact on the war, it already foreshadows the trend.
AI-enabled weapons are appearing on the battlefield of the Russia-Ukraine conflict in growing numbers. In March this year, the British Ministry of Defence announced that it would increase the number of drones sent to Ukraine from 4,000 to 10,000, thousands of them equipped with AI to support operations. These drones carry advanced sensors and weapon systems, have a high degree of autonomy and intelligence, and can perform reconnaissance, strike, defense, and other tasks in complex environments.
In January 2023, a report titled "How Algorithms Break the Balance of the Russia-Ukraine War" revealed how American high-tech companies, working with the US Department of Defense and intelligence agencies, were deeply involved in the conflict. The report noted that AI software supplied by these companies was widely used to interpret satellite images and identify high-value targets.
The Ukrainian army is also reported to make wide use of "MetaConstellation," an information system from the US software company Palantir. The system integrates various high-tech applications into a "kill chain" that lets Ukraine and its allies see the data currently available for a specific combat area. Target recognition in satellite imagery is an important and complex problem: traditional manual interpretation faces many challenges, while deep-learning-based AI can play a large role in image interpretation, offering a new solution for satellite target recognition.
Russia has likewise fielded AI-equipped weapons in the conflict. The Lancet-3 loitering munition, for example, uses onboard AI for target search and identification, allowing it to find the intended target and attack it independently. Russian unmanned combat vehicles have also appeared on the battlefield, such as the MT-1 unmanned mine-clearing vehicle and the Marker unmanned combat vehicle.
To prepare for future wars, the Russian Ministry of Defense established an artificial intelligence weapons research department in 2022 to expand the use of AI and develop new specialized equipment.
AI-equipped weapons have also been used in another high-profile conflict, the Israeli-Palestinian one. Israel reportedly used an AI system called "Lavender" in its attacks on Gaza to help identify Hamas militants, greatly increasing the number and speed of target identifications. Where Israeli intelligence agencies previously found and approved 10 targets in 10 days of work, they can now find and approve about 100 in the same time.
In addition, AI can modify and generate images and video, which can be used for information or public-opinion warfare in wartime. An article on the US Wired website, "Generative AI Plays a Surprising Role in the Israel-Hamas Disinformation War," said the outbreak of the Israeli-Palestinian conflict triggered an unprecedented "wave of disinformation," an "algorithm-driven fog of war" that has put social media in difficulty.
Jointly formulating safety standards is the way forward
At present, the rapid development of emerging technologies such as artificial intelligence and synthetic biology is releasing enormous application value while also bringing unpredictable risks and challenges that bear on the interests of all humankind. Ensuring the safe application of these technologies has become a focus for every country.
In the forum's high-level interview on "artificial intelligence security," Chad Sbragia, a researcher at the Institute for Defense Analyses and former US Deputy Assistant Secretary of Defense, said that AI-controlled drones have been used on the battlefields of the Russia-Ukraine conflict as well as in Gaza, and that other countries' armies are also considering AI-related applications. At the same time, he noted, people are now discussing AI security and governance, a field that remains largely unexplored.
When it comes to AI safety and risk, questions such as whether AI can make autonomous decisions, whether it is trustworthy, and how to manage the risks it may bring have attracted wide attention.
The opacity of AI's algorithmic "black box" creates security risks and makes the question of social trust increasingly complicated. On this point, Hu Ang, a distinguished professor at the University of Tokyo's Institute of Industrial Science and a foreign academician of the Engineering Academy of Japan, said that whether the AI "black box" can be trusted is an ultimate question. In his view, the human brain is irreplaceable: "at least in the short term, it is difficult for artificial intelligence to replace the human brain in making decisions."
Lampros Stergioulas, professor of data science at The Hague University of Applied Sciences and holder of a UNESCO Chair in artificial intelligence and data science for social development, pointed out: "Even human technicians make mistakes, let alone machines." "Whether artificial intelligence can replace the autonomous decision-making of the human brain is a philosophical question as well as a practical one."
Reference News reported in July this year that the American website Popular Science had published an article in June titled "Catastrophic Mistakes Made by Artificial Intelligence," listing "catastrophic" examples of AI and reminding readers that its risks cannot be ignored.
A BBC investigation reportedly found that social platforms are using AI to delete videos of possible war crimes, which could leave victims without legal recourse in the future. Social platforms play a key role in war and social unrest, often serving as a channel for people at risk to communicate. The investigation found that while graphic content of public interest can remain on a site, videos of attacks in Ukraine are quickly removed.
Such "catastrophic" examples also appear in daily life. Google had to disable a feature of its AI photo software after it labeled images of Black people as "gorillas." Other companies, including Apple, have reportedly faced similar allegations.
Sbragia believes AI security issues touch every domain, from national defense to people's livelihoods. He therefore suggested that countries reach a multilateral agreement to mitigate the security risks of AI.
Dai Qionghai, dean of the School of Information Science and Technology at Tsinghua University and an academician of the Chinese Academy of Engineering, said AI development will enter a fast lane in the next three years. Given its unpredictability, the relevant ethics and governance must be put in place first, so as to reduce the security risks of AI applications and make the technology work for the benefit of humankind.
In the experts' view, strengthening international cooperation on AI governance, especially among major countries, is particularly important: jointly formulating safety standards can enable AI to better benefit humankind.
On May 14 this year, the first China-US intergovernmental dialogue on artificial intelligence was held in Geneva, Switzerland. The two sides presented their views on the risks of AI technology and governance measures, as well as steps taken to let AI empower economic and social development. Both recognized that AI development brings both opportunities and risks, and reiterated their continued commitment to implementing the important consensus reached by the two heads of state in San Francisco.
In Sbragia's view, China and the United States need to ensure that communication continues. "In the field of new technologies, the two sides are gradually reaching a certain degree of parity or balance, which may prompt more candid and in-depth dialogue. We must make sure they keep working to maintain communication," he said.
The Paper reporter Xie Ruiqiang