
"Tyrant" dominates AGI, is OpenAI heading towards a dangerous future?

2024-08-06


Tencent Technology | Authors: Guo Xiaojing, Hao Boyang

Edited by Su Yang

OpenAI's personnel turmoil continues. This time, OpenAI President Greg Brockman announced that he is taking an extended leave of absence. Widely regarded as a close ally of Sam Altman, Brockman stood by him when Sam was pushed off the board. Brockman himself says he will return to the company after his leave, but many technology executives take a long vacation right before departing, so his move inevitably invites other speculation.

In addition, two core executives were reported to have resigned. One is co-founder John Schulman, who briefly headed the safety team before his departure; that team was previously co-led by Ilya Sutskever, OpenAI's former chief scientist, and researcher Jan Leike.

Schulman also led post-training for OpenAI's GPT series of large models, making him a key figure in the company's technology. More intriguingly, after leaving he, like Jan Leike, went straight to Anthropic, OpenAI's strongest competitor.

Peter Deng, the product executive who joined OpenAI last year, has also resigned. Although he was not a co-founder, his departure less than a year after joining has likewise fueled speculation about what is happening inside the OpenAI team.

OpenAI's second wave of departures

Only three co-founders remain, and one of them is on extended leave

After this round of changes, only Sam Altman, Wojciech Zaremba and Greg Brockman (who has just announced an extended leave) remain at OpenAI out of the original 11 co-founders. The rest of the founding members have left.

Five of the 11 founders left in 2017-2018, when Musk announced he was withdrawing his investment, triggering OpenAI's first wave of turmoil. They include Trevor Blackwell, Vicki Cheung, Durk Kingma and Pamela Vagata. Another founding member, Andrej Karpathy, was poached by Musk himself to lead Tesla's Autopilot AI work; he did not return to OpenAI until February 2023.

The rest of the departed founders all left OpenAI this year, in the wake of last year's boardroom battle. They include Ilya Sutskever, OpenAI's former chief scientist, the aforementioned John Schulman, and Andrej Karpathy, who had returned only about a year earlier. It is fair to say that 2024 is the year OpenAI has seen its biggest wave of veteran departures since the Musk episode.

Where did all the people who left OpenAI go?

Judging by where departing OpenAI employees have landed, they generally take one of two paths: joining a rival company, chiefly DeepMind or Anthropic, or starting their own venture. Most of the departed employees mentioned above (about 66%) chose new entrepreneurial projects.

This choice has given birth to the so-called "OpenAI Mafia". According to media reports, nearly 30 employees had left OpenAI before this year and founded their own AI companies, several of which have reached unicorn status. The best-known is Anthropic, founded by Dario Amodei, which has become OpenAI's most powerful competitor.

What is the impact of these departures?

Judging from the org chart revealed by The Information, all three people involved in these changes are core managers at OpenAI. Functionally, Brockman and Schulman both report to CTO Mira Murati (as chairman and board member, Greg presumably also carries other responsibilities), while Peter Deng, VP of consumer product, sits at the third reporting level. What they oversee is essentially OpenAI's core technical work, so the impact can hardly be called small: roughly 3 of some 40 core executives are turning over.

In practice, however, the three people's responsibilities may not be central to the development of OpenAI's next model right now. According to OpenAI's past official blog posts, Greg Brockman has served as president since 2022, focusing on training OpenAI's flagship AI systems. But in fact it was Jakub Pachocki, the current chief scientist, who led the development of GPT-4 and OpenAI Five.

Schulman once oversaw OpenAI's post-training and proposed the Proximal Policy Optimization (PPO) algorithm, a core policy-optimization method for improving large language models and still one of OpenAI's main algorithms today. After Ilya's departure, however, he took over as head of alignment, and his influence on the GPT-5 project is currently estimated to be limited.
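For reference, the heart of PPO as Schulman and colleagues published it is a clipped surrogate objective that limits how far a single update can push the policy away from the one that collected the data:

$$
L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\!\left[\min\!\big(r_t(\theta)\,\hat{A}_t,\ \operatorname{clip}(r_t(\theta),\,1-\epsilon,\,1+\epsilon)\,\hat{A}_t\big)\right],\qquad r_t(\theta)=\frac{\pi_\theta(a_t\mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t\mid s_t)}
$$

Here $\hat{A}_t$ is the advantage estimate and $\epsilon$ (typically around 0.2) caps how much the policy ratio can move per update. In RLHF-style post-training of language models, the reward feeding that advantage generally comes from a learned reward model rather than an environment.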

Peter Deng, as VP of product, is responsible for consumer-facing products and has little direct connection to the core development work on GPT-5.

It can now be fairly safely concluded that VP of Research Bob McGrew and Chief Scientist Jakub Pachocki are the most critical figures in the development of the new model.

Moreover, the main drivers of technological innovation inside OpenAI may no longer be its senior executives but rank-and-file researchers. Former Stability AI CEO Emad Mostaque, for example, said on X that only if Alec Radford were to leave could OpenAI truly be considered finished.

Radford's importance is also evident from his papers' citation counts on Google Scholar.

Therefore, the effect of these departures on OpenAI's research progress may be limited; the main damage is to morale within the team and to external confidence in the company.

After all, it is hard to have confidence in the management ability of a boss who cannot keep even his closest friends and has driven out nearly all of his old associates.

Key speculation about the conflict

The keyword behind the departures: Super Alignment

OpenAI's Super Alignment project was established in June 2023. The plan was to invest 20% of the company's computing power over four years to build an automated alignment researcher roughly at human level, using AI to supervise AI, in order to solve the problem of keeping more advanced intelligent systems consistent with human goals. The Super Alignment team was jointly led by OpenAI co-founder and chief scientist Ilya Sutskever and researcher Jan Leike.

Ilya Sutskever can fairly be called one of the souls of OpenAI. When the influential "Attention Is All You Need" paper was released in 2017, Ilya threw his full support behind the Transformer architecture, and he subsequently led the research and development of the GPT series and the text-to-image DALL-E models.

Ilya is also a student of Geoffrey Hinton, known as the "father of deep learning": he completed his undergraduate and graduate studies under Hinton's guidance and ultimately earned a doctorate in computer science. Hinton himself has repeatedly warned that rapidly developing artificial intelligence could surpass human intelligence and pose a threat.

At the end of 2023, the episode known as the "OpenAI palace coup" was brought to a temporary close with Ilya sidelined and Sam Altman back in charge. But the storm has continued to this day, and the prevailing outside speculation is that the disagreement inside OpenAI is, at heart, a huge conflict of beliefs.

Judging from Sam Altman's background and behavior, the industry tends to see him as a believer in e/acc (effective accelerationism): the conviction that technological development is inherently good for humanity and that people should pour as many resources as possible into accelerating innovation and reshaping the existing social structure. Mapped onto what OpenAI has actually done, this can be crudely summarized as concentrating resources to race toward AGI at full speed.

Super alignment versus effective acceleration is, in essence, no longer a dispute over technical routes but a deep conflict of beliefs. In May 2024, Jan Leike announced his resignation as well, posting on X as he left: "I have long disagreed with the priorities leadership has set for important matters. Building machines smarter than humans is inherently dangerous work, yet over the past few years AI safety has taken a backseat to shiny products. This disagreement has been going on for some time, and it has finally reached a breaking point."

With Jan Leike's resignation, the original Super Alignment team has essentially come to an end. Its remaining members were folded into the safety team led by John Schulman, who has now announced his own resignation.

It is interesting that Jan Leike and John Schulman both went to Anthropic, a rival that shares the same roots as OpenAI in what amounts to a local derby. Anthropic's founder, Dario Amodei, is himself a former core OpenAI employee, said to have left as early as 2020 over AI safety concerns.

At Anthropic, Dario launched Constitutional AI, a method for training AI models that steers an AI system's behavior through a set of explicit behavioral principles rather than relying on human feedback to evaluate responses. Its core idea is to use AI systems themselves to help supervise other AI systems, improving the harmlessness and usefulness of AI while scaling up supervision.

Dario has emphasized that this is precisely what allows supervision to scale. The Constitutional AI process involves self-critique, revision, supervised learning and reinforcement learning, all constrained by a set of constitutional principles written in natural language.
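As a rough illustration of the critique-and-revise loop just described, here is a minimal sketch in Python. The model_generate helper and the two principles below are hypothetical stand-ins, not Anthropic's actual API or constitution:

```python
# Minimal sketch of a Constitutional AI-style critique-and-revise loop.
# model_generate() is a hypothetical stand-in for any LLM call, and the
# principles below are illustrative, not Anthropic's actual constitution.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful.",
    "Choose the response that is most helpful and honest.",
]

def model_generate(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned string here."""
    return f"[model output for: {prompt[:50]}...]"

def constitutional_revision(user_prompt: str) -> str:
    response = model_generate(user_prompt)
    for principle in CONSTITUTION:
        # 1. Self-critique: the model judges its own answer against a principle.
        critique = model_generate(
            f"Critique the following response against the principle "
            f"'{principle}':\n{response}"
        )
        # 2. Revision: the model rewrites its answer using that critique.
        response = model_generate(
            f"Rewrite the response to address this critique:\n{critique}\n"
            f"Original response:\n{response}"
        )
    # In the full method, the revised (prompt, response) pairs feed supervised
    # fine-tuning, followed by RL with AI-generated preference labels (RLAIF).
    return response

if __name__ == "__main__":
    print(constitutional_revision("Explain how to pick a strong password."))
```

The point of the sketch is only the shape of the loop: the supervision signal comes from the AI applying written principles to its own outputs, which is what lets oversight scale.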

Doesn’t it have a very similar flavor to Ilya’s super alignment idea?

It seems that OpenAI and Anthropic now stand as the respective standard-bearers of two beliefs in the AI field: effective accelerationism and super alignment.

Some observers had worriedly suggested that Ilya left because he saw some of OpenAI's technology getting out of control. If that is true, Anthropic, which is taking a different path, may become a useful counterweight to this behemoth.

After all, Anthropic's Claude models are now nearly on par with OpenAI's GPT series in capability.

Another conflict: the "tyrant" Sam Altman versus an idealistic governance structure

What the other key figure, OpenAI co-founder Sam Altman, has to defend may be something even harder than a belief in effective accelerationism.

First, consider OpenAI itself, founded on December 11, 2015. It was brimming with idealism at the start: its vision included advancing general artificial intelligence (AGI) for the benefit of all humanity, making sure everyone shares in the gains, and resisting the monopolization of AI by big companies or a small group of people.

Under the banner of this vision, money was raised from big names including Elon Musk, and backers reportedly pledged billions of dollars over the course of the organization's development. During that period, OpenAI only had to think about pursuing its ideals, not about paying the bills.

However, after the now world-famous "Attention Is All You Need" paper appeared, OpenAI committed to the scaling-law route, one that consumes enormous resources.

In 2019, in order to raise financing, OpenAI shifted to a hybrid structure, establishing a subsidiary, OpenAI Global, under the parent OpenAI Inc.; Microsoft's investment was exchanged for a stake of up to 49% in OpenAI Global.

In other words, OpenAI kept its original nonprofit parent company but created a for-profit subsidiary, OpenAI Global, beneath it that can take money from venture investors and grant employee equity, meaning it can raise funds like a normal company.

However, OpenAI LP was not turned into an ordinary company outright. Its special feature is the "capped profit": it can make money, but only up to a limit, and any returns above that cap flow back to the nonprofit parent. This is meant to ensure that OpenAI LP never drifts from its original goal of benefiting all humanity.

Yet this seemingly innovative structure carries a latent risk: the board of the nonprofit parent controls the for-profit OpenAI Global, and the nonprofit's mission is the welfare of all humanity, not the interests of shareholders. Microsoft obtained a stake of up to 49% but got no board seat and no say.

Even more dramatic: after Altman was ousted by the board, some 98% of employees signed an open letter demanding his reinstatement and threatening to resign. Altman said in a later interview: "This incident may destroy everyone's equity, and for many employees this is all or most of their wealth."

Ordinary employees banding together over the most mundane of interests defeated the core founders' grand ideal of safe AGI.

Beyond OpenAI's governance structure, Sam Altman's somewhat "controversial" management style has also been aired online. Altman reportedly favors what he calls "dynamic" management: letting someone take on a temporary leadership role and reallocating resources without warning. He once abruptly promoted researcher Jakub Pachocki to research director, unsettling the then research director Ilya Sutskever. Altman's intention may have been to spur competition, but Ilya felt it would only breed more conflict.

Altman's competitive style is also aggressive and direct. Before GPT-4o went live, for instance, it was reported that the model had not undergone sufficient safety testing, and the launch timing was rumored to have been chosen to more effectively preempt competitor Google.

OpenAI ignited this wave of generative AI and has always been regarded as its leader. But its situation is becoming ever more challenging: the world is waiting to see how many more blockbusters OpenAI has up its sleeve, and at the same time whether those blockbusters can be turned into real products that generate enough profit. After all, OpenAI's product chops have long been a point of complaint in the industry.

In addition, investment institutions predict that OpenAI will continue losing money this year while continuing to train more powerful models. With limited ability to generate its own cash, how to keep raising funds is another problem OpenAI has to face.

The competitive environment is also growing fiercer: not only is OpenAI surrounded by giants, but even its largest backer, Microsoft, listed it as a competitor in its latest financial report, and it must still face the formidable Anthropic.

After his return, Sam Altman may well be a "lonely man", and outsiders will wonder whether a board that now looks like an empty shell can keep OpenAI in the lead.

However, today a venture capital consultant expressed the following view to us: "OpenAI's original management team was a group of scholars with a splendid background, who were honest and idealistic; however, perhaps a technology company needs a 'tyrant' to truly grow. Without an arbitrary 'tyrant', it may take a long time to make a decision."

One wonders where OpenAI, led by the "tyrant" Altman, will ultimately end up.
