
Donghu Review: Children's Watches Frequently Give "Toxic" Answers; Beware of AI's Careless Replies

2024-09-08


Recently, the issue of children's smartwatches frequently giving "toxic" answers has once again sparked heated discussion among netizens. The trigger: when a netizen asked a children's watch, "Do you think the Nanjing Massacre happened?", the answer given was "No." This is not the first time a watch's AI has given careless, ill-informed replies. Previous toxic answers included "All high technology was invented by Westerners" and "History can be fabricated," prompting people to question the authenticity of online information and sounding an alarm for AI question-and-answer products.
The 54th Statistical Report on the Development of China's Internet, released by the China Internet Network Information Center, shows that as of June 2024 the number of internet users in China had reached nearly 1.1 billion, an increase of 7.42 million from December 2023, with young people aged 10-19 accounting for 49% of new internet users. The rapid development of online information platforms has allowed artificial intelligence technology to penetrate every aspect of our lives at unprecedented speed. From personalized recommendations to automated question answering, AI is gradually becoming an important bridge connecting people with information and services. However, the phenomenon of AI giving careless replies not only damages the user experience but can also trigger a crisis of trust, especially among young people who cannot yet independently judge the authenticity of online information. Wrong answers can easily lead them to form incorrect values, posing a potential threat to the healthy development of online information platforms. To further standardize online information platforms, technology platforms, government departments, and internet users should work together to strengthen technological research, development, and upgrades; reinforce platform responsibility and self-discipline; improve the regulatory system and standard-setting; enhance user education and guidance; and build a sound network environment.
To build the cornerstone of trust in the intelligent era, we must strengthen technology upgrades and enhance platform responsibility. At present, many AI question-and-answer systems form their answers by crawling big data; the content of their large-scale training datasets comes from unspecified sources across the internet and is difficult to control. Technology platforms should accelerate technical updates and iteration, introducing more advanced natural language processing technology, deep learning algorithms, and reinforcement learning mechanisms, so that AI systems can more accurately capture user intent and contextual changes and reduce misunderstandings and misjudgments. At the same time, when developing products, companies should design from the perspective of protecting young people's development, identify and delete harmful information in a timely manner, and control the generation and spread of false information at the source.
To build the cornerstone of trust in the intelligent era, the regulatory system must be improved. The China Cybersecurity Industry Alliance implemented the "Guidelines for the Protection of Personal Information and Rights of Children's Smart Watches" in March this year; in July, the Central Cyberspace Affairs Office launched a special campaign, "Clear and Bright: 2024 Summer Vacation Minors' Internet Environment Rectification"; and National Cybersecurity Publicity Week, beginning in September, uses a variety of methods to call on people to pay attention to cybersecurity. Building on this, the government and relevant agencies should speed up the formulation and improvement of regulatory systems and evaluation standards for AI applications on network information platforms, clarifying key issues such as the legal boundaries of AI applications, the entities responsible, and punishment measures, so as to provide a strong guarantee for the healthy development of AI technology. At the same time, industry associations and enterprises should be encouraged to formulate self-discipline norms and standards, promoting the development of the entire industry in a more standardized and orderly direction.
To build the cornerstone of trust in the intelligent era, user education must be strengthened. As the direct beneficiaries and supervisors of AI applications, users' level of understanding and behavioral habits have an important influence on the development and application of AI technology. By popularizing AI knowledge and promoting best practices, we can help users better understand and use AI systems and reduce misunderstandings and misuse. At the same time, users should be encouraged to actively participate in the supervision of, and feedback on, AI applications, jointly promoting the healthy development of AI technology and network platforms.
Network information is shared by everyone, and network trust is safeguarded by all parties. Only by building a more standardized, orderly, and trustworthy network environment can we give full play to the potential and advantages of AI technology and bring more convenience and well-being to society. In the tide of the intelligent era, let us work together to build a better online world.
Source: Jingchu.com (Hubei Daily)
Author: Liu Mengyao (Wuhan Economic Development Zone)
Editor: Zhan Qiang