
OpenAI CEO talks about new AI startup for the first time: Inspired by ChatGPT’s medical experience

2024-07-16



Zhidongxi
Compiled by Chen Junda
Edited by Panken

According to Zhidongxi on July 15, OpenAI CEO Sam Altman last week founded an AI healthcare company, Thrive AI Health (Thrive for short). The startup aims to change patients' behavioral habits through an AI health coach, addressing the chronic disease crisis that affects 127 million people in the United States, and it has also received investment from the richest woman in the United States.

Healthcare has long been a social issue of close public concern in the United States, and it is a key issue in this year's US election. The inefficient and expensive healthcare system makes it difficult for many Americans to obtain effective care. As a leading figure in today's AI industry, Altman's choice to enter the AI healthcare market at this moment has naturally attracted considerable attention.

Altman and Thrive Global founder Arianna Huffington spoke to The Atlantic on July 11 to share more details about the venture.

The two said that Thrive will focus on providing health advice and will avoid medical diagnosis, which AI is not currently good at. In the future, health information could also be integrated into work scenarios.

However, when pressed by the interviewer, they failed to clearly state in what form the product will launch, or what specific measures will be taken to ensure the security of user data.

Altman also said in the interview that information shared between humans and AI should "perhaps" be protected by confidentiality rules similar to those between lawyers and clients, but he believes this should be decided by society.

It's worth noting that Altman's new company, Thrive, will have access to AI health data that is extremely private and carries significant economic value. Insurance companies could use this information to adjust the price of a specific policy or decide whether to reimburse a certain drug. In the United States, a recent leak of such information caused a large-scale medical system shutdown.

1. Focus on health advice rather than medical diagnosis; model performance is already "good enough"

The biggest selling point of Thrive's AI health product is its "highly personalized AI health coach." Thrive will provide users with personalized, immediate health advice by collecting information about their sleep, food, exercise, stress, and social interactions, combined with medical records and expertise in the field of behavior change.

Altman and Huffington believe that AI health is of great significance to improving the currently broken US healthcare system. At present, 90% of US medical insurance spending goes to the treatment of chronic diseases, and Thrive is expected to significantly reduce this expenditure.

Altman and Huffington compared the technology to Roosevelt's New Deal, saying that "AI will become part of a more efficient medical infrastructure that continues to support people's health in their daily lives."

But the application of AI in healthcare is nothing new: AI already plays an important role in CT reconstruction, drug development, diagnostic assistance, and other fields.


▲ Nvidia's AI healthcare product Clara (Source: Nvidia)

Currently, AI medical and health applications are mainly aimed at doctors and researchers with professional knowledge, rather than the patients Thrive is targeting. Most patients do not have sufficient medical knowledge, so they are unlikely to judge effectively the health advice or medical diagnoses provided by AI, and it is difficult for AI products to guarantee that they will not make mistakes.

In an interview with The Atlantic, Altman and Huffington responded to questions about the product's safety. They believe the current performance of AI models is good enough: if Thrive focuses on "health advice" rather than "medical diagnosis" and is trained on peer-reviewed data, the model can provide good enough recommendations.

However, neither Huffington nor Altman could say clearly what form the product will eventually take. They said it will launch as an app, but Huffington also said the product could be delivered through various possible modes, and could even be integrated into work scenarios through apps like Microsoft Teams.

2. Collecting data is not a problem; Altman says users are willing to share

This hyper-personalized product requires convincing users to voluntarily give up a great deal of private information so that the AI has enough data to make decisions. In an interview with The Atlantic, Altman said he does not think this will be a huge challenge.

Altman shared that part of the reason he started the new company was that many people have already used ChatGPT to diagnose medical problems; he has heard from many people who trusted ChatGPT's advice, took the relevant tests, and received treatment. He believes users are in fact willing to share very detailed, private information with an LLM.

The Atlantic's reporter was shocked by this practice, because medical advice returned by ChatGPT may contain AI hallucinations and pose a threat to a patient's health. Patients who rely on such false information are also likely to come into conflict with professional doctors.

The reporter also believes that a leak of medical information could seriously damage users' personal rights. However, Altman's response to the risk of information leakage was not firm; he believes this issue should be left to society.

He said that communications between doctors and patients, and between lawyers and clients, are currently protected by law, and that people's communications with AI "may" receive similar protection: "maybe society will decide whether to establish some form of AI privilege." In other words, they may not actively push for such protection, but leave the decision to society.

But the protection of health data has reached a point of urgency. Just this February, Change Healthcare, an American health technology giant owned by the insurance group UnitedHealth Group, suffered a large-scale ransomware attack, shutting down much of the medical insurance system and putting the medical information of nearly a third of Americans at risk of exposure.

And OpenAI's record on data protection is not spotless. In early 2023, OpenAI's internal systems were breached in a cyber attack, and chat records of employees' discussions about advanced AI systems were leaked.

In addition, according to a report by the technology outlet Engadget in early 2023, ChatGPT once suffered a serious data leak. A malfunction on the ChatGPT web page caused some users' conversation titles to appear in other users' chat windows, and some users' identity and bank card information was also exposed.

Despite this, Altman still called on society to "trust" them in this interview, in stark contrast to his remarks at the 2023 Bloomberg Technology Summit, where he urged everyone not to trust him or OpenAI.


▲Altman at the 2023 Bloomberg Technology Summit (Source: Bloomberg)

Altman believes that using AI to improve one's health is a common expectation, and that this is one of the few application areas where AI can change the world. He later added that achieving AI-improved human health "takes a certain amount of faith," meaning people have to trust the new company to carry out the task responsibly.

In their co-authored article for Time magazine, Altman and Huffington spelled out these "beliefs" in more detail. They believe that achieving "AI-driven behavior change" and reversing the growing chronic disease trend requires trust on three main fronts.

First, policymakers need to believe in creating a "regulatory environment that promotes AI innovation." Second, medical practitioners need to trust AI tools and integrate the technology into their practice. Finally, individuals need to trust AI to handle their private data responsibly. This is a big ask for a company that does not yet have a product and has not committed to any specific security measures.

Conclusion: It may be too early to entrust health to AI; AI should not become a game of faith

When discussing how AI health products will be put into practice, Altman and Huffington described the following scenario in their co-authored Time article: "The AI health coach will provide everyone with very precise advice: replace the third glass of soda in the afternoon with water and lemon; take a 10-minute walk with your child after picking him up from school at 3:15 p.m.; start your relaxation routine at 10 p.m." This AI health coach would eventually change some of people's stubborn bad habits, ultimately improving overall human health and prolonging human life.

However, are the various "unhealthy" behaviors in people's lives a matter of personal habit, or a larger social problem? Should the chronic disease crisis be left to individuals and AI, or addressed through systematic prevention, research, and intervention by governments and medical institutions? These are perhaps the questions people need to consider before this so-called AI medical infrastructure becomes a reality.

In the interview, Altman said that realizing the AI health vision requires a certain amount of faith from the public. But in AI, a field with far-reaching impact, and in medicine, where lives are at stake, perhaps what we really need is not such a game of faith, but verifiable and explainable technology.

Source: The Atlantic, Time