
a company with a per-capita valuation of rmb 3.5 billion was born

2024-09-06


do you remember the famous openai "boardroom coup"?

on november 18 last year, without any prior notice, the openai board of directors voted to fire co-founder and ceo sam altman. the reason given was that, after a careful and detailed review, the board had concluded that altman was not candid in his communications with the board and that it could no longer "trust his ability to continue to lead openai." chairman greg brockman, who had likewise not been notified in advance, expressed his shock on social media and then announced his own resignation. key personnel such as research director jakub pachocki, risk-assessment team lead aleksander madry, and senior researcher szymon sidor also handed in their resignations.

the internal turmoil was so severe that almost every business outlet ran headlines that day asking whether this "world's most highly valued artificial intelligence company" was about to fall apart.

of course, the "boardroom coup" ended decently enough. sam altman returned to the company and resumed his core leadership role, and openai continued to lead the large-model race, unveiling its text-to-video model sora in february this year and consolidating its position in a league of its own. the latest news is that openai has kicked off a new round of financing, at a phenomenal valuation of $100 billion.

but the incident also made everyone realize that openai is not invulnerable. on the question of development path, at least, the company is split into two camps. one is the engineering camp represented by sam altman, which believes openai should follow the playbook of a technology company: deploy and release externally at a regular cadence, and solve problems along the way. the other is the research camp represented by co-founder and chief scientist ilya sutskever, which believes that research and commercialization are inherently a zero-sum trade-off, and that exposing the technology to business too early will only invite trouble.

on may 15 this year, ilya sutskever's resignation brought this disagreement fully into the open. in his farewell tweet he paid tribute to the colleagues who had accompanied him for ten years and called the company's trajectory a "miracle" - but he also left a parting line with a hint of barb: "i hope everyone can create agi that is both safe and beneficial."

what i want to talk about today starts with this parting gift. safe superintelligence (ssi), the startup ilya sutskever founded after leaving openai, has officially closed a us$1 billion (approximately rmb 7.1 billion) financing round with only a 10-person team and no product. according to sources familiar with the deal, the round valued the company at us$5 billion (approximately rmb 35.5 billion) - the equivalent of us$500 million per team member - and it came only 3 months after he officially announced ssi's founding.

“safe super ai”

from the perspective of early-stage entrepreneurship, ilya sutskever's round is impressive not only for its speed but also for its investor lineup. almost every well-known vc in the ai field appears on the shareholder list: a16z, sequoia capital, dst and sv angel, among others.

even more worth mentioning, daniel gross - well-known angel investor, former head of apple's internal ai projects, and former ai lead at y combinator - not only joined ilya sutskever's venture as a partner but also invested in ssi through nfdg, the fund he manages.

it is no exaggeration to say that when ilya sat down at this table, he was dealt not just both jokers and all four twos, but small cards that lined up into a 3-4-5-6-7-8 straight. every move he makes from here will be enough to rattle his peers.

in fact, ilya does plan to move fast. ssi told the media that the purpose of this round is to "accelerate the development of safe artificial intelligence systems that far exceed human capabilities." accordingly, beyond the portion earmarked for computing power, a considerable share of the money will go toward hiring top talent around the world.

however, compared with the splashy debuts of the new unicorns we are used to, ssi is more likely to take things slowly. as the name suggests, ssi carries a great deal of ilya sutskever's idealism. since striking out on his own, ilya has consistently emphasized ssi's identity as a research institution, calling its first entity the "straight-shot ssi lab." in the open letter published on june 19, ilya sutskever offered this rather emotional definition:

“building safe superintelligence is the most important technical problem of our time - it is our mission, our name, and our entire product roadmap... we approach safety and capabilities in tandem, treating ‘safe superintelligence’ as a technical problem to be solved through revolutionary engineering and scientific breakthroughs.”

based on this positioning, he set out a number of requirements for future colleagues and future investors that operate largely at the level of "emotional value."

from future investors, he wants assurance that the team's work will not be "distracted by management overhead or product cycles," and that ssi can pursue a business model that insulates safety, security and the r&d process from short-term commercial pressure. from future colleagues, he wants "the best engineers and researchers in the world," people willing to set aside everything else and "focus on ssi." in return, he promises them the chance to do their "life's work."

judging from this round of financing, the first half of that vision is done. according to daniel gross, they have found "investors who understand, respect and support our vision... and who in particular understand that it will take several years of r&d before a product reaches the market."

the second half seems to be going less smoothly. ssi currently has two offices, one in california and one in tel aviv, israel, with only 10 employees between them - and headcount is hard to grow. according to daniel gross, for every resume received, beyond examining the applicant's project track record, the team often spends hours vetting whether the candidate has "good character" and is "interested in the work, not in the hype."

more striking still, it is not yet clear whether ilya intends to "corporatize" ssi at all. outside its members, almost no one knows how ssi's vision will materialize as a product. in an interview in june this year, ilya stated plainly that his ideal startup is "a pure research organization" that "creates a safe and powerful ai system," with no intention of selling any ai-related services or products in the short term.

ssi, with its $1 billion raised, has probably unlocked another achievement: it may be the unicorn with the plainest official website. even today, its homepage carries nothing but an unstyled team statement and two contact buttons - one for submitting resumes, one for partnerships.

“over-betting on top talent”

having read this brief introduction, i imagine you are curious about two questions:

1. with ai investment broadly overheated, why did investors choose to believe the "safe super ai" narrative, generously grant a $5 billion valuation, and even make so many "commercial" concessions?

2. can someone like ilya, with a pure research background, really make a good boss? is the north american venture capital scene really so tolerant of "scientists turned entrepreneurs"?

let's take the first question first. at a media event in early september, ilya briefly explained his thinking. as artificial intelligence grows more powerful and computing power keeps compounding, he argues, it becomes ever harder to determine what testing and certification an ai product should pass before going to market. ssi exists to ensure that ai technology becomes "a force for good in human society."

this sounds right. and as a core member of openai and a key contributor to chatgpt, the image-generation model dall-e, and dozens of research papers, ilya is indeed better qualified than most practitioners to speak for the frontier of this industry.

but here is the key point: this is not ilya's first attempt to build something around "safe superintelligence." as early as july last year, openai set up a team called "superalignment" to study how to manage and steer "superintelligent" ai, with ilya as its co-lead. openai also pledged 20% of its computing resources to the team's research and set a generous four-year horizon for "finding a core solution."

and superalignment ended in a mess. in may 2024, media reports revealed that openai had disbanded the team. the internal logic went roughly like this: "superintelligence" remains a theoretical construct, and no one knows when it might arrive; weighed against open-ended investment, openai's leadership decided products should take priority over safeguards, and gradually began to restrict superalignment's computing quota.

on may 17, jan leike, superalignment's other co-lead and a former deepmind researcher, announced his resignation, further corroborating the media reports. on his personal social media he wrote: "the disagreement between me and the company's leadership over core priorities has been building for some time, and has now reached a breaking point. in my view, we should be spending more of our energy preparing for the next generation of models - on security, monitoring, safety, societal impact and related topics." remember ilya's resignation after the boardroom coup, and his pointed parting words, mentioned at the beginning? that happened the same week.

in other words, if we accept that openai is the company closest to "commercial-grade" success in today's ai field, then, rounding off, "safe superintelligence" has already been eliminated once at the commercial level.

hence the prevailing reading of ilya's financing: this is venture capital over-betting on top talent. all the more so because, under the objective constraint of "no profits for the long term," a large number of star founders have spent the past two years returning to life as "big-company employees" (see "big models are queuing up to be sold") - so for venture capitalists still willing to believe the "big model" story, the market leaves precious few options.

now for the second question. clearly, silicon valley has not staked everything on ilya alone. many articles point out that ssi has three core figures: besides ilya, there is daniel levy, his colleague from the openai days, and daniel gross, the well-known angel investor mentioned at the beginning. gross's importance keeps coming up:

daniel gross has the most entrepreneurial experience of the three. at 18, he founded his personal search engine greplin (later renamed cue) through the yc accelerator and sold it to apple in 2013 for more than $35 million, joining apple through the acquisition to lead its search and ai projects. in 2017 he returned to yc as a partner and spearheaded yc's investment practice in artificial intelligence.

in 2018, he founded his own startup incubator, pioneer, which he saw as a necessary upgrade to the yc model, because "software is changing the world, remote work is changing the world, and this may be how great companies are born in the future - and how venture capital needs to adapt." the idea won backing from stripe and marc andreessen, and pioneer made more than 100 investments over the following 18 months.

in 2023, he reinvented himself yet again, teaming up with fellow silicon valley angel investor nat friedman to build "andromeda" - a computing cluster costing more than us$100 million and weighing over 4,800 kilograms, built to support early-stage ai startups. compared with "boring cash," daniel gross believes, this "mine" of computing power is the better thing to put into early-stage ai projects.

so it is no exaggeration to say that ilya's startup has its own sam altman figure - just younger, and with a smaller ego.

as for the sub-question - whether north american venture capital really is more tolerant of "scientists turned entrepreneurs" - i can only say the record is one of hard lessons. i once came across an article on the national institutes of health website whose author argued that the friction between venture capital and scientists is long-standing, because "the insight and knowledge of scientists are necessary, but insight by definition means the work is still immature. at that stage it can only attract vc attention at a low valuation, which leaves scientists feeling permanently shortchanged - convinced that capital is an evil force grabbing the fruits of their labor."

at the end of the article, the author left two earnest suggestions: 1. if you really want to invest in a project led by scientists, think through how to give the scientists sufficient but "appropriate" equity; 2. when investing in scientists, negotiate a careful and reasonable technology transfer agreement.