2024-08-19
OpenAI's personnel turmoil has only intensified since its board fired and then rehired CEO Sam Altman late last year.
Since the beginning of this year, several senior executives have left. As of now, only three of OpenAI's 11 co-founders remain: Altman; Wojciech Zaremba, who heads the language and code generation team; and president Greg Brockman, who is currently on leave.
Why can't OpenAI, for all its influence in artificial intelligence, hold on to its founders?
The signals behind the departures
Earlier this month, John Schulman, who played an important role in building ChatGPT, announced that he was leaving OpenAI to join rival Anthropic. He said, "I decided to pursue the goal of studying AI alignment at Anthropic and conduct research with people who have deeply studied the topics I am interested in." Peter Deng, the vice president of consumer products who joined as product lead only last year, also officially announced his resignation.
Liu Pengfei, an associate professor at Shanghai Jiao Tong University who heads a generative artificial intelligence research group, told Yicai Global: "The departure of senior researchers such as Schulman shows that OpenAI may no longer be the first choice for top AI scientists; other companies focused on AI safety (such as Anthropic), as well as newly founded startups, are drawing away talent. This may mark a diversification of the AI research ecosystem."
It is worth noting that Brockman, who stood firmly with Altman during last year's boardroom crisis, announced at almost the same time that he would take leave until the end of the year, even while saying that "the task is far from complete, and we still need to build a safe AGI (artificial general intelligence)."
On this point, Tan Yinliang, a professor of decision sciences and management information systems at China Europe International Business School, said in an interview with Caixin that the industry widely reads Brockman's move as a prelude to resignation, because Andrej Karpathy, another OpenAI co-founder, also resigned after an extended leave.
Tan Yinliang said that the recent moves by Schulman and Brockman send two signals: first, there are problems with OpenAI's internal management, outstanding scientists may continue to leave, and the gap between other AI companies and OpenAI may gradually close; second, the development of GPT-5 and next-generation models may have hit a bottleneck, and the field may have to ask whether scaling laws have reached their limit.
"By next month, when non-compete clauses of U.S. companies are completely abolished and the flow of AI talent is completely untied, OpenAI's talent loss problem may become even more serious," he said.
"Rebels" usher in a new era
According to statistics from industry research institutions, nearly 75 core OpenAI employees have left and founded about 30 artificial intelligence startups. Dario Amodei, founder and CEO of the artificial intelligence company Anthropic, said in a July program that AI model companies are poised to earn more than one trillion US dollars, and that more than half of that trillion-dollar market could go to OpenAI and the companies founded by its former employees.
The Amodei siblings previously served as OpenAI's vice president of research and vice president of safety and policy. In 2020 they resigned, dissatisfied that OpenAI had released GPT-3 without first resolving safety issues, and took with them 14 researchers who had worked on GPT-3, including former policy director Jack Clark and researcher Jared Kaplan. Anthropic, the company they went on to found, has raised at least $7 billion since 2021 from technology giants such as Google and Amazon. Its latest chatbot model, Claude 3.5 Sonnet, launched in June, is considered comparable to or better than OpenAI's GPT-4o in overall performance.
Perplexity, an AI search company founded in August 2022 by Aravind Srinivas, a former OpenAI research scientist, is now valued at over $3 billion. The company said it handled 250 million queries last month, half the total it handled in all of last year. That explosive growth has made Perplexity a strong rival to Google's Gemini and OpenAI's SearchGPT.
Elon Musk, an OpenAI co-founder and Tesla's CEO, also "caught the last train," announcing in July last year the establishment of his artificial intelligence company xAI to "understand the true nature of the universe."
Tan Yinliang said: "Founders leaving is quite common in the field of hard technology, and this 'leaving' culture has been prevalent since the rise of Silicon Valley. The most famous are the 'Fairchild Eight'. After leaving Fairchild, these eight people founded a large number of early technology companies including Intel and AMD. It was these companies that made San Francisco the Silicon Valley. In the era of artificial intelligence, the eight authors of the paper "Attention Is All You Need" that laid the foundation for the big model have now all left Google and founded their own companies."
He said: "Personal business plans and various ideological differences that arise during the development of the company may become the reason for the founder to leave. However, sometimes leaving is not necessarily a bad thing. It is these 'rebels' who open up new fields and eras."
Safety is a topic OpenAI cannot avoid
It is undeniable, however, that "safety" and "alignment" appear again and again in the departing founders' statements.
In addition to Schulman, OpenAI co-founder and chief scientist Ilya Sutskever and superalignment team leader Jan Leike announced their resignations in May. Leike even complained on the social platform X that "safety culture and processes have taken a backseat to shiny products."
Subsequently, Sutskever announced in June the creation of a new artificial intelligence company, Safe Superintelligence Inc. (SSI), saying that "safe superintelligence is our only focus." Leike joined Anthropic, an artificial intelligence company competing with OpenAI, to "continue the superalignment mission."
On this, Tan Yinliang said: "In artificial intelligence, 'alignment' means ensuring that an AI system's behavior is consistent with human values, intentions and intended goals; simply put, it means making AI act according to human requirements and reason according to human values. Altman did not do enough on safety. He originally promised the superalignment team 20% of the company's computing resources, but in the end the terms were reinterpreted and the entire safety organization had to share that 20%. For a group of scientists faithful to their original mission, safety is the thing they insist on."
At the end of July, OpenAI also removed Aleksander Madry, a senior director responsible for safety, from his safety role and reassigned him to work on "AI reasoning." Earlier this month, Musk filed a new lawsuit against OpenAI and Altman, accusing the company of putting profits and commercial interests above the public interest and deviating from its founding mission of benefiting all humanity.
Li Weian, a professor at Nankai University and dean of the China Institute of Corporate Governance, said in an interview with China Business Network: "In traditional startups, much job-hopping happens because people do not receive compensation, shares or positions commensurate with the value they created, but that is not why most of OpenAI's co-founders left."
He said that OpenAI's founders hold essentially no equity. From the perspective of artificial intelligence governance, the founders left mainly because of a conflict over governance philosophy: whether AI should be governed for good or for profit.
Challenger Anthropic
From Leike to Schulman, Anthropic keeps attracting core members away from OpenAI.
The company also recently hired Pavel Izmailov, a former OpenAI researcher who was fired in April over alleged information leaks. Steven Bills, who spent more than two years on OpenAI's technical staff, likewise said last month that he had left to join Anthropic's alignment team.
Liu Pengfei said that Anthropic has made AI safety and alignment its core mission since its founding, a focus that lets it devote more resources and attention to alignment problems and potentially offers researchers a purer research environment. In addition, for senior researchers like Schulman, working at a newer, relatively small company can mean greater influence and more opportunity to shape the company's direction, which is very attractive to researchers seeking new challenges in mid-career.
Li Weian said: "Anthropic's innovations in corporate governance structure strike a better balance between business and safety, which may be why it keeps attracting talent to jump ship."
He explained: "To balance AI for good with AI for profit, Anthropic's governance structure includes a trust made up of prominent public figures that oversees the board and has the right to remove directors who violate the mission of technology for good. To make the arrangement enforceable, Anthropic also created a special Class T of shares held by the trust's members."
In May this year, Amodei said at a summit: "We have seven co-founders. Three and a half years have passed, and all of us are still in the company."
As for the challenges facing OpenAI, Tan Yinliang said: "Looking at other AI companies, Claude's models have already surpassed GPT in some areas. When everyone is racing along the established Transformer route, the destination and the path are both quite clear. If scientists keep leaving, OpenAI's first-mover advantage will not be insurmountable."
"In August, Musk also reopened the lawsuit against OpenAI. Financially, OpenAI's annual losses may be as high as $5 billion, and its cash flow may be exhausted in the next year. Amid internal and external troubles, this year should be very difficult for OpenAI," he said.