
why doesn’t anyone care about openai anymore?

2024-09-27


yesterday, openai ceo sam altman posted a long essay, "the intelligence age," on his newly opened personal blog. the gist of it: as deep learning advances, superintelligent ai will arrive within a few thousand days. by then, ai will change every aspect of people's lives, and humanity will enter the intelligence age.

such a vision coming from the father of chatgpt would normally attract huge attention. but unlike in the past, when netizens overwhelmingly praised gpt's capabilities and backed openai, the comments under this long essay were full of complaints that sam altman was hyping things up again.

for example, some netizens think altman is playing word games: he predicts that superintelligent ai will arrive in "a few thousand days," but no one knows whether that means 1,000 days or 9,999 days, a gap of more than twenty years.

this actually reflects openai's current situation: it is no longer the unquestioned "true god" of artificial intelligence and is now being flooded with doubts.

when chatgpt was first released, openai was seen as the "only answer" in artificial intelligence. now its models, profitability, team, safety, and more are all being questioned.

the first issue is the new o1 model openai released a few days ago, which many netizens felt offered no real breakthrough.

openai says the o1 model thinks like a human, going through a complex internal reasoning stage before giving an answer. in this process, the model builds a detailed, in-depth chain of thought, continuously refines its reasoning path, and identifies and corrects its own errors.

o1 does perform well on complex reasoning tasks, handling programming and mathematics problems that gpt-4 struggles with.

o1 outperformed 89% of competitors in codeforces programming contests and surpassed human phd-level accuracy on benchmarks covering physics, biology, and chemistry problems.

however, o1 still can't answer a simple question like which is bigger, 9.9 or 9.11. some netizens analyzed that o1 has not been significantly optimized at the model level and looks more like an engineering-level improvement. some even speculated that o1 is just a fine-tuned agent built on gpt-4o.

moreover, o1 is also extremely expensive. unlike traditional models, o1 generates multiple candidates and scores them at each reasoning step, and these hidden thinking processes consume tokens, which means money. an answer that shows only 100 tokens of output may be billed as 1,000 tokens.
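as a rough illustration of why that matters, here is a minimal sketch of the billing arithmetic. the prices and token counts below are hypothetical assumptions for illustration, not openai's actual figures or billing code.

```python
# hypothetical estimate of how hidden reasoning tokens inflate cost
# (illustrative numbers only, not real openai pricing)

def completion_cost(visible_output_tokens: int,
                    hidden_reasoning_tokens: int,
                    price_per_1k_output_tokens: float) -> float:
    """assume both visible and hidden tokens are billed as output tokens."""
    billed_tokens = visible_output_tokens + hidden_reasoning_tokens
    return billed_tokens / 1000 * price_per_1k_output_tokens

# an answer showing 100 visible tokens but using 900 hidden reasoning tokens
# is billed as 1,000 tokens -- roughly 10x the apparent output
print(completion_cost(100, 900, price_per_1k_output_tokens=0.06))  # 0.06
print(completion_cost(100, 0,   price_per_1k_output_tokens=0.06))  # 0.006
```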

the second area where openai has been questioned is the stability of its leadership team.

after experiencing the openai "forced abortion incident" in november last year, openai was shown to netizens as a team united for the development of artificial intelligence.

especially after altman was fired, more than 700 openai employees (over 90% of the company) signed an open letter to the board of directors, insisting they would resign en masse if altman was not reinstated.

however, the wave of resignations over the past few months has shattered that rosy image of the openai team.

in february this year, founding member andrej karpathy resigned and founded an ai education company, eureka labs. in may, top figures including jan leike, leader of the superalignment team, and chief scientist ilya sutskever resigned one after another.

in august, openai co-founder john schulman announced his resignation on social media and moved directly to anthropic, openai's direct competitor.

currently, only 3 of the 11 co-founders remain. such frequent, unsettled changes at the top will inevitably affect openai's development.

in addition to the executive turnover, openai's treatment of ordinary employees' interests has also been questioned.

in june of this year, vox reported that openai asked departing employees to sign an agreement with a non-disparagement clause, or risk losing a large amount of money in vested equity.

aschenbrenner, who was fired this year for allegedly "leaking" information, believes the real reason openai dismissed him was an internal memo he wrote about openai's security.

an important factor behind the instability of the openai team is how openai balances safety and commercialization. behind this lies the question everyone has been asking: as a non-profit organization, can openai shoulder the mission of making "general artificial intelligence benefit all mankind"?

in particular, judging from the current technological path, the amount of money required to build agi that benefits all mankind is far beyond what a non-profit organization can afford.

openai was founded in 2015 as a non-profit organization with the goal of advancing safe and beneficial artificial intelligence (ai). however, the us$1 billion pledged by its backers did not arrive as planned, and openai's funds were quickly exhausted. openai had no choice but to set up openai global, llc, a for-profit subsidiary, in 2019.

moreover, as the scaling law continues to hold, openai's spending on gpt-4 and gpt-5 will grow exponentially. it is fair to say that without microsoft's continued investment, even gpt-4 might have been delayed by several years. but microsoft is a listed company, and its fundamental purpose is to make money.

this also raises doubts about the relationship between openai and microsoft: is openai still an independent company, and can microsoft monopolize the profits from openai's technology?

as openai's huge funding gaps have been exposed one after another, it is seeking more financing. combined with openai's recent moves, we can guess that sam altman wants to take openai public.

in june of this year, media reports revealed that sam altman had told some shareholders he was considering transforming openai's governance structure into a for-profit enterprise. in the same month, openai hired sarah friar, who as cfo had led square's successful ipo, as its new cfo.

regardless of whether openai eventually goes public or remains non-profit, it cannot act like "closeai" while flying the banner of openai.

finally, the real trigger for all these doubts: openai's products are shipping more and more slowly, while the promises keep getting bigger.

there has been no movement on gpt-5 (perhaps it has been superseded by o1), and sora still exists only on paper; when will it actually open to the public?

starting a blog and writing long essays is just one way of painting that picture. altman also announced, in high profile, that openai would raise us$7 trillion to build chips and reshape the global semiconductor industry.

from time to time, he muses about agi on social platforms and teases his own upcoming products.

when o1 was released, openai even shot a series of short promotional videos that played like a variety show.

faced with these doubts, sam altman has to keep promising more in order to maintain openai's leading position.

and more promises only invite more questions.

end of article.

author: dong daoli

editor: zhang zeyi

visual design: shu rui

editor in charge: zhang zeyi

about "new silicon newgeek": with ai as the center of the circle, we track all aspects of the technology field and strive to explain how technology changes the world in the simplest way. stay tuned.