
Exclusive Interview with Ilya Sutskever: 8 Soul-Searching Questions About SSI

2024-09-07


Author: Xuushan, Editor: Manman Zhou

Already valued at $5 billion, what kind of AGI does Ilya want to build?

On September 5, Reuters reported that Safe Superintelligence (SSI), founded by former OpenAI chief scientist Ilya Sutskever, has raised $1 billion in cash. According to an insider, the financing values the company at US$5 billion. SSI plans to use the funds to acquire computing power and hire top talent to help develop safe artificial intelligence systems that far exceed human capabilities. SSI officials have confirmed the report is accurate.

SSI was founded in June 2024 by Ilya Sutskever, Daniel Levy, and Daniel Gross. Ilya Sutskever serves as chief scientist, Daniel Levy as principal scientist, and Daniel Gross as CEO, responsible for computing power and fundraising. SSI currently has 10 employees and operates with a conventional for-profit structure.

SSI was able to secure such financing and such a valuation, despite having released no products and being less than four months old, precisely because of the founders' strong industry backgrounds: investors are willing to place outsized bets on exceptional talent focused on fundamental artificial intelligence research.

Ilya Sutskever is one of the most influential technologists in the field of artificial intelligence. He studied under Geoffrey Hinton, known as the "godfather of artificial intelligence," and was an early advocate of the scaling hypothesis, the idea that AI performance improves as more computing power is added, which laid the foundation for the explosion of generative AI.

Daniel Gross previously led AI efforts at Apple and was a Y Combinator partner, while Daniel Levy is a former OpenAI researcher.

For now, SSI will focus on building a small team of researchers and engineers based in Palo Alto, California, and Tel Aviv, Israel.

Investors included top venture capital firms Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. NFDG, an investment partnership run by Nat Friedman and SSI CEO Daniel Gross, also participated.

"It's important for us to have investors who understand, respect, and support our mission of making a straight shot to safe superintelligence, especially as we spend several years on R&D before we can bring a product to market."

After the news of SSI's financing broke, Reuters sat down with Ilya Sutskever, who responded to a series of questions about whether SSI will open-source its work, its future development, and more. The interview makes clear how much importance he attaches to the safety of superintelligent AI.

The following has been compiled by Silicon Rabbit without altering the original meaning. Enjoy!

01

Q: Why did you establish SSI?

Ilya: "We found a mountain that's a little different from the ones I worked on before... Once you get to the top of this mountain, the paradigm will change... Our understanding of AI will change once again. That's when the most important superintelligence safety work will take place."

"Our first product will be safe superintelligence."

02

Q: Will you release AI as smart as humans before superintelligence?

Ilya: "I think the question is: is it safe? Is it a force for good in the world? I think by the time we get to that point, the world will have changed so much that it's hard to give you a clear plan.

I can tell you that the world is going to be a very different place. The way the world looks at what's happening in AI is going to be very different, in ways that are difficult to comprehend. It will be a much more intense conversation. It may not just be a decision we make on our own."

03

Q: How does SSI determine what counts as safe AI?

Ilya: "To fully answer your question, we'd need to do some significant research, especially if you believe, as we do, that things are going to change a lot... A lot of big ideas are being discovered.

Many people are thinking about what steps we need to take and what tests we need to run as AI becomes more powerful. It's a little tricky. There's still a lot of research to be done. I don't want to say we have a definitive answer right now, but it's one of the things we're trying to figure out."

04

Q: On the scaling hypothesis and AI safety

Ilya: "everyone just says 'extend the hypothesis'. everyone ignores the questionwhat exactly are we expanding?the big breakthroughs in deep learning over the last decade are based on a particular formula based on the scaling hypothesis. but it will change… as it changes, the capabilities of the systems will increase. the safety issues will become more severe, and that’s what we need to address.”

05

Q: Will SSI open-source its research?

Ilya:"at present,not all ai companies open source their work, and so do we. but i think there will be a lot of opportunities for (we) to open source the safety work related to superintelligence for a number of reasons. maybe not all of it, but certainly some of it.”

06

Q: What about the safety research of other AI companies?

Ilya: "I actually think very highly of the industry. I think that as people make progress, all the different companies will come to realize, perhaps at slightly different times, the nature of the challenge they face. So rather than saying we don't think anyone else can do it, we say we think we can contribute."

07

Q: What kind of employees will you recruit?

Ilya: "some people can work for a long time, but they will soon return to their old ways. this is not very suitable for our style. but if you are good atdo something different, then you have the potential to do something special (with us).”

"what excites us is when you find out that the employee is interested in what we do, rather than in the hype or some other buzz."

"we will spend hours reviewing candidates for 'good character' and look for people with exceptional abilities, rather than placing undue emphasis on qualifications and experience in the field."

08

Q: How will SSI develop going forward?

Ilya: "We're going to scale in a different way than OpenAI."

The company plans to partner with cloud providers and chip companies to fund its computing power needs, but has not yet decided which firms it will work with.

Reference links:

Ilya Sutskever on how AI will change and his new startup Safe Superintelligence (Reuters)