
AI hallucinations are inevitable. How should we deal with them?

2024-08-15


By Zhang Hai

With the rapid development of artificial intelligence (AI), "AI hallucinations", that is, errors or fabrications in the way AI systems process and generate information, have become a problem that cannot be ignored. In 2023, "hallucinate" was named word of the year by both the Cambridge Dictionary and Dictionary.com. According to Dictionary.com, searches for the word in 2023 rose 46% over the previous year. The Cambridge Dictionary, alongside the original sense of "hallucinate" (seeming to see, hear, feel, or smell something that is not there, usually because of a health condition or because of taking a drug), added a new one: when an artificial intelligence hallucinates, it produces false information.

A paper titled "Machine Behaviour" in the journal Nature pointed out that artificial intelligence already pervades human society. News-ranking algorithms affect the information people see, recommendation algorithms affect what they buy, ride-hailing algorithms affect how they travel, and smart-home algorithms affect family life. In law and medicine, the influence of AI is greater still. As machines think and behave more and more like humans, they increasingly shape the structure of human society: machines shape human behavior, and humans in turn shape the behavior of machines. How humans and machines work together will have a significant and far-reaching impact on the future form of society.

In the era of artificial intelligence, AI hallucinations have become a common phenomenon. From possible misjudgments by self-driving cars to misunderstood instructions by smart assistants to misdiagnoses by medical tools, AI hallucinations are everywhere in daily life. In 2024, Google's search engine launched AI Overview, a service that supplies AI-generated answers. It was meant to improve the user experience, but users soon found that it produced a large number of outrageous answers, such as suggesting that glue be added to pizza and that people eat a small rock every day for nutrition, prompting Google to quickly shut down some of its functions.

From the perspective of AI scientists, hallucinations are inherently unavoidable, owing both to technical limitations and to the limits of human cognition. Although engineers are working hard to improve the accuracy and reliability of AI, hallucinations still occur frequently and are difficult to eliminate completely, driven by factors such as incomplete data, algorithmic limitations, and complex interactive environments.

The mechanism behind AI hallucinations involves several factors. First, data bias is one of the main causes: if the training data lacks diversity or carries systematic biases, the outputs can hallucinate. Second, current algorithms, especially statistical ones, cannot perfectly generalize to new, unseen situations, which can lead to wrong judgments. Third, the cognitive limitations of human designers are a significant problem: the subjective biases of designers and trainers can be inadvertently encoded into an AI system and affect its decision-making. Finally, the interactive environment in which an AI system operates is itself full of variables, and complex, changing conditions often exceed the system's processing capacity, triggering hallucinations. The toy sketch below illustrates the second factor.
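As a minimal sketch of how purely statistical generation can go wrong, consider a toy bigram model (invented for this article; the corpus, follows, and generate names are hypothetical, and no real system is this simple). Trained only on true sentences, it still blends their statistics into a fluent falsehood, because it predicts plausible next words rather than facts:

```python
import random
from collections import defaultdict

# The model's entire "world knowledge": three true sentences.
corpus = ("the moon orbits the earth . "
          "the earth orbits the sun . "
          "the sun is a star .").split()

# Train a bigram model: record which word follows which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(prompt, length=3):
    """Continue the prompt with statistically plausible next words.
    The model has no notion of truth and never abstains."""
    words = prompt.split()
    for _ in range(length):
        candidates = follows.get(words[-1], corpus)  # fall back to any word
        words.append(random.choice(candidates))
    return " ".join(words)

# Every training sentence is true, yet blending their statistics
# yields a fluent falsehood: "the moon is a star ."
print(generate("the moon is"))
```

Each individual training sentence is true, yet the model confidently asserts that the moon is a star; at vastly larger scale, a similar pattern-completion dynamic is one of the failure modes described above.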
How should we respond to AI hallucinations that are both widespread and unavoidable? First, improving data quality and diversity is the foundation: increasing the breadth and depth of training data reduces data bias and improves the generalization ability of AI systems. Second, optimizing algorithm design for robustness and adaptability enables AI systems to cope better with new situations. Third, improving user education and awareness is crucial: helping users correctly understand the capabilities and limitations of AI can effectively reduce the misunderstandings through which hallucinations do harm. In addition, establishing ethical norms and regulatory mechanisms to ensure that the development and application of AI meet ethical and legal standards is equally important for reducing the occurrence of AI hallucinations. Finally, interdisciplinary cooperation plays a key role: engineers, data scientists, psychologists, ethicists, and legal experts should jointly take part in the design and evaluation of AI systems, each addressing the problem from the professional perspective of their own field.

In the era of artificial intelligence, AI hallucination is a complex, common, and unavoidable problem, and dealing with it requires multi-dimensional, multi-level strategies that minimize its negative impact. The "Guidance for Generative AI in Education and Research" issued by UNESCO in 2023 recommends setting the minimum age for using AI tools in the classroom at 13. OpenAI likewise prohibits children under 13 from using generative AI and requires those aged 13 to 18 to use it under the guidance of a guardian.

In 2023, a Trusted Media Summit held in Singapore shared initiatives from various countries for improving media literacy among young people. One example is "SQUIZ KIDS", a website- and podcast-based public-service program aimed at primary school students, which helps cultivate young people's ability to spot false and misleading information online. It teaches three steps: STOP when you encounter online information, THINK about it, and finally CHECK it against reliable sources to confirm that they agree. A naive sketch of that CHECK step appears below.
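Purely as an illustration of the workflow (the TRUSTED_SOURCES snippets, the overlap threshold, and the function names are all invented here; real fact-checking relies on retrieval and entailment models, not word overlap), the CHECK step might look like this:

```python
import string

# Hypothetical reference snippets standing in for reliable sources.
TRUSTED_SOURCES = {
    "nasa.gov": "The Moon is Earth's only natural satellite.",
    "britannica.com": "The Sun is the star at the center of the solar system.",
}

def words(text):
    """Lowercase and strip punctuation for a fair word-level comparison."""
    clean = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(clean.split())

def check(claim, threshold=0.8):
    """CHECK step: is most of the claim's wording confirmed by a source?"""
    claim_words = words(claim)
    for source, text in TRUSTED_SOURCES.items():
        overlap = claim_words & words(text)
        if len(overlap) >= threshold * len(claim_words):
            return f"supported by {source}"
    return "not confirmed by any trusted source -- treat with caution"

print(check("The Moon is Earth's natural satellite"))  # supported
print(check("The Moon is a star"))                     # flagged
```

The point is not the scoring rule, which is deliberately crude, but the habit it encodes: a generated claim is held back until an independent, trusted source agrees with it.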
By combining knowledge and skills from different fields, identifying problems more comprehensively, and finding better solutions, we can look forward to the arrival of a smarter, safer, and more reliable artificial intelligence society.

(The author is a professor at the School of Media Science of Northeast Normal University and director of the Jilin Province Education and Artificial Intelligence Integration Innovation Engineering Center)