
OpenAI is deviating from its "original intention": what happened to Altman, the helmsman?

2024-09-16


OpenAI CEO Altman

Phoenix.com Technology News, September 16 (Beijing time): The US publication Business Insider reported on Sunday that OpenAI was originally a nonprofit organization dedicated to the safe development of artificial intelligence, but that as the company's AI technology has grown more powerful and more of its funding has come from investors, it has begun to drift from its original mission and to put investors' interests above the interests of humanity.

In 2015, Sam Altman co-founded OpenAI with a lofty mission: to develop artificial general intelligence (AGI) that would "benefit all of humanity."

To accomplish that mission, he set up OpenAI as a nonprofit. However, as the company gets closer to developing AGI (AI that can reason like a human) and excited investors pour money in, some worry that Altman is losing sight of the "benefit all of humanity" part of his mission.

This transformation has been gradual and seems inevitable.

In 2019, OpenAI announced the addition of a for-profit arm to fund the company's operations and support its nonprofit mission, but it stayed true to its original philosophy by capping the profits investors could make.

"We want to improve our ability to raise capital while pursuing our mission, and no existing legal structure we are aware of strikes the right balance," OpenAI said at the time. "Our solution is to create OpenAI LP as a hybrid of for-profit and nonprofit, which we call a 'capped-profit' company."

OpenAI

On the surface, this was a clever move, designed to satisfy both the employees and stakeholders who care about developing AI safely and those who want to see the company build and release products more aggressively.

But as money poured into OpenAI's for-profit arm and the reputations of the company and of Altman grew, some people began to feel uneasy.

Internal divisions

Last year, board members including OpenAI chief scientist Ilya Sutskever briefly ousted CEO Altman over concerns that the company was releasing products too aggressively without prioritizing safety. Altman was quickly reinstated as CEO with the support of major investor Microsoft.

In fact, cultural cracks had already emerged within OpenAI.

In May, two of OpenAI's top researchers, Jan Leike and Sutskever, announced their resignations. The two led the company's "Superalignment" team, which was tasked with ensuring that the company develops AGI safely, a core tenet of OpenAI's mission. Superalignment is an AI term for keeping advanced AI systems aligned with human values.

Later that month, OpenAI disbanded the entire Superalignment team, and Leike said on X after his departure that the team had been "sailing against the wind."

"openai must become a safety-first agi company," he said. "building generative ai is an inherently dangerous undertaking, but openai is now more focused on building a shiny product."

Ilya Sutskever, former chief scientist of OpenAI

It now appears that OpenAI has all but completed its transformation into a tech giant that "moves fast and breaks things."

According to Fortune, Altman told employees at a meeting last week that the company plans to move out from under the control of its nonprofit parent's board next year because it is "no longer a good fit for the company."

Restructuring tied to its valuation

Reuters reported on Saturday that OpenAI is close to raising another $6.5 billion in the form of convertible notes, in a round that would value the company at $150 billion. But people familiar with the matter said that valuation hinges on two conditions: OpenAI must overhaul its existing corporate structure and abandon the profit cap for investors.

The details of the funding round show how far the research-focused nonprofit has come in its transformation, and how far it is willing to go in restructuring itself to attract the investment needed to fund its expensive AGI research and development.

People familiar with the matter said OpenAI has discussed with lawyers converting its nonprofit structure into a for-profit public benefit corporation, similar to the structures used by competitors such as Anthropic and xAI. If the restructuring falls through, OpenAI would need to renegotiate its valuation with investors to determine the price at which the notes convert into shares, and that price could be lower.

It's unclear whether such a fundamental change to the corporate structure will happen. Removing the profit cap, which limits the potential returns of investors in OpenAI's for-profit arm, would give earlier investors greater returns.

However, it could also raise questions about OpenAI's governance and its drift from the nonprofit mission. OpenAI previously said the profit cap was set to "incentivize them to research, develop, and deploy AGI in a way that balances commercialization, safety, and sustainability, rather than focusing on pure profit maximization."

OpenAI said in a statement that the company remains focused on "building AI that benefits all of humanity" and continues to work with the board of directors of its nonprofit parent. "The nonprofit is at the core of our mission and will continue to exist," an OpenAI spokesperson said. (Author: Xiao Yu)
