
OpenAI personnel earthquake: Altman ally Brockman takes long-term leave, and the product director has left

2024-08-06


Report by the Synced (Machine Heart) Editorial Department

OpenAI's leadership is once again in the news, this time with three important personnel changes.



First, Greg Brockman, OpenAI's president and one of its 11 co-founders, is taking an extended leave of absence.

Brockman played a crucial role in turning OpenAI's research breakthroughs into large-scale AI models and products such as ChatGPT, and he was a key ally of Sam Altman in his bid to return to the company after the board ousted him.



Greg Brockman told employees that he plans to return to the company after the extended vacation. Still, a long absence by such a senior executive inevitably invites speculation.

In addition, another co-founder, John Schulman, announced his move to Anthropic, a rival company founded by former OpenAI employees.



While at OpenAI, Schulman led the process known as post-training, which refines the large language models behind ChatGPT and other products after their initial training.

Schulman also recently took over what remained of the Superalignment team, which focuses on preventing AI from causing harm to society. Its previous head was Jan Leike, who announced his resignation on the same day as OpenAI co-founder and chief scientist Ilya Sutskever. After leaving OpenAI, Ilya founded a startup focused on safety, while Jan Leike joined Anthropic.

In addition, according to a person with direct knowledge of the matter, Peter Deng, the product head who joined the company last year after serving in product leadership roles at Meta Platforms, Uber, and Airtable, has left. OpenAI recently hired its first chief financial officer and chief product officer, and their arrival likely affected Deng's role.



Another OpenAI co-founder, Andrej Karpathy, left in February this year and founded an education startup.

These sudden departures may be unrelated, but they show at least one thing: although Sam Altman returned to OpenAI and retook control after last November's boardroom drama, the company's leadership has yet to stabilize.

John Schulman: This was a very difficult decision.

Later, John Schulman announced his resignation on X and explained that he chose to move to Anthropic because he wanted to pursue deeper research on AI alignment, that is, the practice of aligning AI behavior with human values.

John Schulman posted a farewell letter to the colleagues who had worked alongside him:



I have made the difficult decision to leave OpenAI. This choice stems from my desire to further my focus on AI alignment and to start a new chapter in my career, returning to frontline technical work. I have decided to pursue this goal at Anthropic, where I believe I can gain new perspectives and conduct research with people who are deeply engaged in the topics that interest me most. To be clear, I am not leaving because of a lack of support for alignment research at OpenAI. On the contrary, company leadership has been very committed to investing in this area. My decision is a personal one, based on where I want to focus my efforts in the next phase of my career.

I joined OpenAI as a founding member nearly 9 years ago, straight out of grad school. It is the first and only company I have ever worked for (outside of internships). It has been a lot of fun, and I am grateful to Sam and Greg for recruiting me from the beginning, and to Mira and Bob for their confidence in me and for giving me the opportunities to take on every challenge I faced. I am proud of what we have accomplished together at OpenAI; we have built an unusual, unprecedented company with a mission to serve the public good.

I believe that OpenAI and the team I am a part of will continue to thrive even without me. Post-training is going well, and we have a fantastic group of people working on ChatGPT. Barret has done a great job building the team into the very capable group it is today, along with Liam, Luke, and others. I am also excited to see the alignment team come together and work on some promising projects. Under the leadership of Mia, Boaz, and others, I am confident the team will be in very capable hands.

I am grateful for the opportunity to be part of such an important moment in history and proud of what we have accomplished together. Even though I will be working elsewhere, I will still be rooting for all of you.

In his familiar tone, Sam Altman bid farewell to yet another founding partner.



Founders resign, a team disbands

OpenAI’s personnel changes started with the board turmoil.

Last November, OpenAI announced that Sam Altman would step down as CEO and leave the board of directors because, the board said, he had not been consistently candid with it. Ilya Sutskever, then OpenAI's chief scientist, was a key driver of the whole incident.

But a few days later, Sam Altman returned to OpenAI as CEO, and OpenAI announced the reorganization of its board of directors.

During the boardroom turmoil, when Sam Altman was pushed out of OpenAI, there were rumors that Ilya had "seen something" powerful enough to make him worry about the future of AI and rethink its development.

Some even interpreted it this way: OpenAI may have already achieved AGI internally but had not disclosed it more broadly in a timely manner, and to prevent the technology from being widely used without a safety assessment, Ilya and others hit the emergency stop button. But these are just speculations.

After the storm, Altman seemed able to run OpenAI with a freer hand, while Ilya Sutskever was left in an awkward position.

In May this year, Ilya Sutskever, co-founder and chief scientist of OpenAI, officially announced his resignation. Jan Leike, co-lead of the Superalignment team, also announced his departure, and the Superalignment team was disbanded.



In June, Ilya announced on X that he had founded a new company, Safe Superintelligence Inc. (SSI).

Jan Leike had announced at the end of May that he would join OpenAI competitor Anthropic to continue his research on superalignment. After John Schulman announced he would join Anthropic, Jan Leike commented: "Very happy to be working together again!"



On the safety front, the OpenAI board of directors announced in May the formation of a new Safety and Security Committee, led by directors Bret Taylor (chairman of the board), Adam D'Angelo, Nicole Seligman, and Sam Altman (CEO). The committee is responsible for advising the full board on key safety decisions for OpenAI's projects and operations.

However, the personnel changes keep coming, leaving observers wondering what is happening inside OpenAI.

So, how far is OpenAI's next-generation large model from AGI? And is the safety of these models a cause for concern?

It is worth noting that OpenAI seems to have slowed its pace recently: the company announced that it will not release GPT-5 for the time being, lowered expectations for new flagship model releases, and will focus on updating its API and developer services.

Reference link: https://www.theinformation.com/articles/trio-of-leaders-leave-openai?rc=ks2jbm