news

AI psychologists are becoming popular. How can we make the most of them?

2024-07-29


Recently, a student team from North University of China developed the country's first multi-role AI psychological healing model and AI psychological-emotional service platform, aiming to address college students' mental health problems more effectively. Elsewhere, algorithms are being used to identify, screen, and analyze suspected suicide-risk posts on social media, so that rescue efforts can be organized for people with high-risk suicidal tendencies, their relatives and friends contacted where possible, and warnings issued and interventions carried out. Increasingly mature AI technology is steadily entering the psychological field. In recent years, AI-empowered psychological services have ceased to be a novelty. AI psychologists compensate for some of the shortcomings of current mental health services, and their development prospects give reason for optimism.

As of the end of 2021, China had 6.6 million registered patients with severe mental disorders, plus many more people in a sub-healthy mental state. By contrast, the country has only about 64,000 psychiatrists. This severe shortage of specialists in mental and psychological medicine has forced many patients simply to endure their conditions. AI psychologists, however, are tireless and can be deployed at scale, filling the gap and allowing more patients to receive timely diagnosis and treatment. Some AI psychologists are already playing a significant role: the "Bei Xiaoliu" AI psychological service robot, for example, has served about 10,000 people in hospital settings to date. AI is helping to make "high-quality psychological treatment available to everyone" a gradual reality.

AI psychologists have several unique advantages. Smart products can memorize the procedures for psychological assessment and intervention, making the assessment process standardized and its conclusions highly reliable. They can be equally standardized in formulating intervention plans for patients and in delivering popular-science education. They are also highly capable "students", able to read a large number of relevant cases and retain them at a glance. AI psychologists act according to set procedures, remain even-tempered, and never lose their patience, so they will not strain the doctor-patient relationship through emotional fluctuations.

However, the drawbacks of AI psychologists are equally pronounced. Robots follow rules, act according to procedure, and hold to cold principles, making them models of "iron-faced impartiality", but they lack the flexibility to adapt to changing circumstances. They struggle to understand and manage human emotions and feelings, let alone engage in genuine emotional interaction. Mental and psychological illnesses often involve emotions, and psychological counseling requires emotional communication; a doctor must be empathetic to do this work well. AI products are more rational than emotional and cannot provide "warm" services.

More importantly, artificial intelligence technology also carries the risk of abuse. Some voice-based smart products already insult users or exhibit biases such as sexism, which shows that the quality of the data used for AI psychologists' deep learning must be taken seriously. If someone feeds "toxic data" to an AI psychologist product, it may "learn bad habits" and even engage in extreme behaviors such as encouraging suicide. Medical institutions now conduct consistency evaluations between AI doctors and human doctors, which greatly helps improve the quality of smart products. But beyond ensuring quality, measures should also be taken to prevent AI psychologists from acquiring harmful habits or being turned into tools for wrongdoing.

Medical AI products concern health and life as well as sensitive personal privacy, and the storage and recall capabilities of artificial intelligence are formidable. If the diagnosis and treatment records held by AI psychologists are improperly managed or used, the harm to patients can be devastating. It is therefore urgent to clarify what permissions such tools have for obtaining health data, and what rules govern the use, storage, and destruction of that data.

No matter how capable AI psychologists become, they can only serve as assistants to humans; we must not rely on them excessively or let them diagnose and treat patients on their own. The art of war teaches that "one who does not fully understand the harms of employing troops cannot fully understand the benefits of employing them." The same applies to AI psychologists. Only by "fully understanding the harms", that is, by clarifying quality standards, ethical norms, codes of conduct, and regulatory methods, can we avoid the possible risks while "making full use of the benefits", hold the safety bottom line for smart products, and ensure that AI psychologists and similar products speed along the track of development, benefiting society rather than burdening it.

By Luo Zhihua