
“AI medical care”: convenience and risks coexist

2024-08-08


Original title: Hospitals Launch AI Systems to Assist with Consultations and Patient Accompaniment; "AI Prescribing" on Online Platforms May Cross the Line (kicker)
"AI + Medical Care": Convenience and Risks Coexist (headline)
Legal Daily reporter Wen Lijuan and intern Zhang Guanglong
At 8 a.m., Song Wenzhuo, a village doctor in Xinghe Village, Zhongxing Town, Youxian District, Mianyang City, Sichuan Province, arrived at his clinic. The first thing he did was turn on the computer and log in to the AI (artificial intelligence) assisted diagnosis and treatment system, a new habit he has developed recently.
"Doctor Song, my heartbeat is a little fast, I feel short of breath, and I cough occasionally." As soon as the clinic opened, Granny Tu, a villager in her 90s, came in for a consultation accompanied by her husband. While asking about her symptoms, Song Wenzhuo entered her condition into the system and clicked the AI auxiliary diagnosis button. The system quickly connected to a national core knowledge base containing a large number of cases of common and frequently occurring diseases and, after extracting and analyzing the patient's historical medical records, offered diagnostic suggestions such as "acute upper respiratory tract infection". Song Wenzhuo made a comprehensive assessment based on his clinical experience and finally confirmed that Granny Tu had an acute upper respiratory tract infection.
The system then recommended medications based on the confirmed diagnosis, listing the basis for each recommendation along with suggested examinations. Song Wenzhuo selected the treatment drugs from the recommendations and asked the elderly woman to take them and observe how she responded.
In Song Wenzhuo's view, the AI-assisted diagnosis system not only improves the accuracy and efficiency of diagnosis but also helps doctors handle complex cases, reduces the risk of misdiagnosis and missed diagnosis, and makes treatment and medication safer and more reliable.
The application of AI-assisted diagnosis and treatment systems in primary medical institutions in Youxian District, Mianyang City, is a vivid snapshot of China's push for "AI + medical care". The 14th Five-Year Plan for the Development of the Medical Equipment Industry explicitly calls for accelerating the development of intelligent medical equipment; the Opinions on Further Improving the Medical and Health Service System proposes developing "Internet + Medical Health" and accelerating the application of the Internet, artificial intelligence and other technologies in the medical and health field. In recent years, China has continuously strengthened top-level design to promote the development of "AI + medical care".
Many industry insiders and experts interviewed by Legal Daily pointed out that the medical field has become an important arena for exploring AI applications. At present, AI is mainly used in some hospitals in scenarios such as triage, pre-consultation and medical record generation, helping patients seek treatment more conveniently and improving the quality of medical services. AI-assisted diagnosis and treatment will become a trend, but the legal risks hidden behind it call for vigilance. These risks involve not only the protection of patients' personal privacy but also issues such as algorithm transparency and fairness.
AI-assisted diagnosis launched in many locations
Improve patient experience
Recently, the reporter went to Peking University People's Hospital for thyroid and scar examinations, including a B-ultrasound and blood tests. After the fees were paid, the doctor reminded the reporter that examination times could be booked in the system. The reporter opened the phone and, much like self-service check-in, booked the examination items into a relatively concentrated block within the available time slots. In this way, all the examinations could be completed in the shortest time, avoiding the repeated trips caused by queuing on site to change appointments and by scattered examination times.
This is just one microcosm of AI-assisted medical care. The reporter combed through public information and found that "AI + medical care" has been put into use at many hospitals.
At the Union Hospital affiliated to Tongji Medical College of Huazhong University of Science and Technology, if the patient does not know which department to go to, AI can help.
The hospital launched its "AI Smart Clinic" in May this year, covering functions such as intelligent triage and intelligent add-on registration (obtaining an extra appointment slot once regular slots are full). Take the "smart add-on" function as an example: patients click "Registration Service" and then "Online Registration" to reach the department where they want an appointment. If the selected expert's slots are full, they can click "Apply for Add-on" below and choose an add-on date for that expert on the next page. After the patient confirms the free booking, the AI automatically initiates a conversation, asks about the condition and related circumstances, generates a "condition card", comprehensively evaluates the severity of the condition, determines add-on eligibility, and finally sends the request to the expert, who decides whether to approve it.
In addition, the hospital has launched a "smart waiting room" function. After a patient registers, a "digital doctor" communicates with the patient first to learn about symptoms, disease course and so on in advance, preparing for the doctor's face-to-face consultation.
At Zhejiang Provincial People's Hospital, the digital health assistant "Anzhener" can accompany patients through their medical visits.
It is understood that "Anzhener" provides patients with AI companionship before, during and after treatment. Before the consultation, patients can describe their symptoms to it, and "Anzhener" matches them with a department and doctor and helps them make an appointment. During the consultation, it arranges the visit process, provides AR smart navigation throughout, lets patients take a queue number online with call reminders, and can even complete medical insurance payment through Alipay, saving time. After the consultation, once the patient leaves the hospital, it continues to provide services such as electronic medical records, prescriptions and report inquiries.
At Beijing Friendship Hospital, AI can help doctors write medical records.
In May this year, Yunzhisheng's outpatient medical record generation system went live at Beijing Friendship Hospital. The system can recognize doctor-patient conversations in a noisy hospital environment, accurately capture key information, separate the doctor's and patient's roles, and filter out content irrelevant to the condition, generating summaries in professional terminology and outpatient electronic medical records that meet medical record-writing standards. Data show that with the system's help, the efficiency of outpatient record entry in the relevant departments of Beijing Friendship Hospital has greatly improved, and doctors' consultation times have been significantly shortened.
Legal risks cannot be ignored
Beware of algorithmic discrimination
Many industry insiders interviewed pointed out that the widespread application of artificial intelligence in the medical field can provide patients with more convenient services, improve the efficiency and accuracy of medical services, and make high-quality medical resources more accessible, but the legal risks behind it cannot be ignored.
In the view of Chen Chuan, a lecturer at the School of Law of Shanxi University, the traditional diagnostic process emphasizes the doctor's respect for and protection of the patient's personal dignity and autonomy. When making medical decisions, doctors must comprehensively consider the patient's medical history and current symptoms and formulate appropriate treatment plans in accordance with relevant laws, regulations and ethical norms. Medical AI, however, carries the risk of "automation bias": doctors may over-rely on the technology during diagnosis and neglect their own professional judgment and the individual needs of patients. This over-reliance may lead doctors to inappropriately hand difficult medical decisions over to AI. When that happens, patients' treatment decisions are effectively taken from them and given to machines, and patients lose their autonomy in managing their own health.
In addition, the lack of algorithm transparency and the problem of algorithmic discrimination cannot be ignored. "Although the Interim Measures for the Management of Generative Artificial Intelligence Services, jointly issued by the Cyberspace Administration of China, the National Development and Reform Commission and other departments in July 2023, set out algorithm transparency requirements, in practice an algorithm's actual working principles and decision-making process are often difficult for outsiders to understand and supervise. Because of this lack of transparency, patients cannot understand how a medical AI arrives at a diagnosis, so their rights to know and to choose cannot be fully guaranteed, which may infringe their rights to informed consent and self-determination," Chen Chuan said.
She pointed out that algorithmic discrimination can also exacerbate inequality in medical resources across regions. Developers may inadvertently introduce bias when training algorithms, causing generative AI to make discriminatory decisions for patients from different groups. For example, when some medical AI systems screen patients, the diagnostic results may be inaccurate or systematically underestimate risk. Likewise, if an algorithm model's training data comes mainly from certain specific groups, the model may be biased when dealing with groups that are under-represented in that data.
Chen Chuan believes that the application of AI in the medical field is still at an exploratory stage and can easily raise questions of accountability and attribution when misdiagnosis, data leakage or similar incidents occur. China's Civil Code applies the principle of fault liability to medical damage and also addresses liability for harm caused by medical devices. But artificial intelligence can independently generate diagnostic results or suggestions, so under the current legal framework, assigning responsibility for medical AI is highly complicated, and traditional forms of liability are difficult to apply to it directly.
Online drug purchases put the cart before the horse
Prescription review a mere formality
Beyond the legal risks that "AI + medical care" may involve as an emerging model, some online consultation and drug-purchase platforms, while bringing convenience to patients after introducing AI assistance, have also exposed many problems.
During its investigation, the reporter found that some Internet medical platforms operate in reverse order: the patient chooses the drug first, a prescription is then issued to match it, and in some cases the prescription is even generated automatically by AI software.
Not long ago, the reporter placed an order for calcitriol soft capsules, a prescription drug, on a drug-purchase platform. The platform prompted: "Please select a disease that has been diagnosed offline." The reporter checked a few items at random in the disease column, left the prescription/medical record/examination report column blank, and confirmed that "the disease has been diagnosed and the medicine used before, with no history of allergies, no contraindications and no adverse reactions." Verification passed quickly. After the order was submitted, the system jumped to the consultation section.
Then a "doctor" took the consultation and sent several messages in succession. The first emphasized that "Internet hospitals only provide medical services to users returning for follow-up visits." The subsequent messages merely confirmed whether the patient had a history of allergies or was in a special period. When the reporter did not respond, the other party sent a prescription and a purchase link.
Beijing citizen Yang Mu (pseudonym) has had a similar experience. He suspects that the person behind the screen is not a real practicing physician: "When buying prescription drugs on the ×× platform, I felt that the other party was no different from a robot. As long as I typed, the other party would quickly agree within a few seconds and did not give any professional advice at all." Once, he deliberately described some conditions that were not suitable for the medicine he wanted to buy, but the other party still quickly wrote a prescription.
Many industry insiders believe that remote diagnosis and treatment is not suitable for all patients; follow-up visits for common and chronic diseases have long been the focus of Internet-based care. However, the industry has always lacked specific standards for what constitutes a follow-up visit, leaving regulatory gaps.
"If it is a legitimate Internet hospital, an electronic prescription issued by a qualified doctor should bear the doctor's signature and the Internet hospital's electronic seal. It cannot be ruled out that some small Internet medical platforms use artificial intelligence, robots and other tools to generate prescriptions automatically. Some large platforms use AI to assist doctors in consultations, for example by asking patients their age and where they feel uncomfortable, but the prescriptions themselves must be written by doctors," said a doctor surnamed Liu at a tertiary hospital in Beijing.
He also noted that, to turn a profit, many platforms have adopted a model of "AI prescribing, customers picking up medicines directly", in which prescription issuance and review exist in name only: either the prescription step is skipped entirely, or prescriptions uploaded by users are never actually reviewed. Such behavior seriously violates China's drug administration system and endangers patients' medication safety.
Whether personal information will be leaked when consulting and receiving medical treatment on online health platforms is also a question raised by many patients interviewed.
Once, Yang Mu developed many red rashes on his back and consulted a health platform about them. A few days later, he received numerous advertising calls and text messages: some asked whether he needed a hair transplant, some promoted skin care products, and some were even sales calls from loan companies.
"During the consultation, can the personal information and health conditions collected by the platform be properly preserved? Will this information flow to a third party?" Yang Mu was very worried.
Previously, the Ministry of Industry and Information Technology reported that multiple Internet medical apps had serious problems in collecting and using personal information, including collecting private information beyond the stated scope, providing personal information to others without the individual's consent, and collecting personal information unrelated to medical services.
"Compared with other types of apps, medical apps that leak personal information may lead to more serious legal problems. If personal health information is leaked, criminals may use this information to carry out targeted fraud, such as taking advantage of the mentality of 'desperate efforts to find a cure' to illegally sell or promote drugs to patients," said Dr. Liu.
Improve relevant legal framework
Effectively protect the rights and interests of patients
Experts interviewed pointed out that to effectively address the potential legal risks in the practical application of "AI + medical care", systematic improvements are needed at both the legal and policy levels. Only by establishing a sound legal framework and regulatory mechanism, and by clarifying the responsibilities of medical big models and the rules for their use of data, can the legitimate rights and interests of patients be effectively protected while medical AI develops.
"First of all, we need to establish and improve the legal framework for medical artificial intelligence. The current Interim Measures for the Management of Generative Artificial Intelligence Services contains no provisions specific to AI in the medical field, so relevant laws and regulations should be formulated on its basis, in combination with the characteristics of the medical field, highlighting the auxiliary role of medical AI," Chen Chuan said. To help medical staff and patients better use AI systems and understand how they arrive at diagnostic results, she suggested, China could formulate a set of guidelines for the use of medical AI to enhance the interpretability of such systems and their outputs. Other legal provisions also need improvement: the Data Security Law and the Ethical Review Measures for Life Science and Medical Research Involving Humans provide a basis for formulating medical AI management measures, and measures tailored to medical AI should be introduced in light of the technology's current state of development.
Chen Chuan also said it is urgent to strengthen supervision of medical AI algorithms. On the one hand, to ensure the safety and reliability of medical AI, a dedicated algorithm review body should be established to subject medical AI to strict safety, transparency and ethics reviews. On the other hand, given the dynamic nature of data and the continuous iteration of AI technology, developers should be required to assess in advance the security risks arising from an algorithm's application and propose targeted countermeasures, conduct regular risk monitoring throughout the algorithm's life cycle, and carry out self-assessments of the algorithm's data use, application scenarios and impact. In addition, measures such as public supervision and reporting, and inspections by regulatory authorities, can be introduced.
Deng Yong, a professor in the Law Department of Beijing University of Chinese Medicine, likewise believes that as medical big models develop, compliant operation and supervision of the industry are increasingly important: providers must determine a product's positioning and obtain the corresponding qualifications rather than carry out activities without them. "In terms of compliance, the first step is to determine the product's positioning. If it is an Internet diagnosis and treatment product, it must be linked to or establish a corresponding physical medical institution, apply to set up an Internet hospital, and meet the relevant requirements on physician resources, medical record management, drug distribution and prescriptions. If it is only for health management and involves no diagnosis and treatment, it must be made clear that the product has no 'medical purpose' and is 'intended only for health management, targeting healthy people, and recording and compiling health information'."
He suggested adopting data cleaning and similar methods to remove illegal and harmful information as well as personal information from public data, ensuring that training data is legal and compliant. A medical big model's collection of user data must follow the principles of legality, legitimacy and necessity, and must not collect personal information unrelated to the services provided.
"The red-line behaviors for medical big models collecting user data include: failing to disclose the rules for collection and use; failing to clearly state the purpose, method and scope of collecting and using personal information; collecting and using personal information without the user's consent; violating the principle of necessity by collecting personal information irrelevant to the services provided; providing personal information to others without consent; and failing to provide a lawful means of deleting or correcting personal information, or failing to publish complaint and reporting channels," Deng Yong said. When collecting public data on the Internet, he added, ensuring legality and compliance also requires informed consent procedures, anonymization, and channels for refusal. Attention should also be paid to the data labeling mechanism: labeling should be used to prevent the generation of illegal and harmful content such as pornography, violence and discriminatory information, thereby ensuring content security.
In response to the problems exposed by online consultations, Chen Chuan believes it is urgent to clarify the legal responsibilities surrounding AI: establish a scientific and reasonable mechanism for allocating responsibility, strengthen the primary responsibility of medical personnel, make clear that AI plays an auxiliary role, and refine the responsibilities of participants in each link. Medical diagnosis cannot be handed over entirely to AI, and the principle of "prevention beforehand, monitoring during, accountability afterwards" should be followed. "Medical personnel must fulfill their duty of care in practice; that is, they must carefully review and verify AI diagnostic results, or else bear responsibility."
Source: Legal Daily