
Ilya warns again: the AI paradigm will change, and superintelligence safety will become the most critical issue

2024-09-07


Yuyang, from Aofei Temple

Qubit | Public account QbitAI

A company with just 10 people has raised $1 billion in seed funding.

Something this jaw-dropping only looks remotely "normal" when it happens to Ilya Sutskever.

At a time when large models are sweeping the world, Ilya's contributions are widely recognized as world-changing:

He is one of the three authors of AlexNet. After being recruited by Google along with his mentor Geoffrey Hinton, he was deeply involved in the AlphaGo project that shocked the world.

In 2015, he co-founded OpenAI and served as its chief scientist. When ChatGPT changed the world once again, he was considered one of the most critical figures behind it.

Since November last year, Ilya's every move has been pushed into the spotlight, drawing the attention of the global technology community:

The OpenAI board infighting he initiated exposed a dispute over the direction of large-model development. After he fully parted ways with OpenAI this May, everyone has been waiting for his next entrepreneurial move.

Now that the dust has settled, the habitually low-key Ilya has at last shared more about his company SSI and his personal thinking on AGI.

In an interview with Reuters, Ilya answered six key questions. What follows is the text of his answers:

Why was SSI founded?

We have found a mountain that is a bit different from my previous work… once you reach the top of this mountain, the paradigm changes… everything we know about artificial intelligence is about to change again.

At that point, work on superintelligence safety will become critical.

Our first product will be a safe superintelligence.

Before superintelligence, will AI as intelligent as humans be released?

I think the key is: Is it safe? Is it a force for good in the world? I think the world is going to be very different when we get to that point. So it's pretty hard to give a firm plan of "what we're going to do" right now.

What I can tell you is that the world is going to be very different. The way people think about what's happening in AI is going to change dramatically and be very hard to understand. It's going to be a much more intense conversation. It may not just be about our decisions.

How will SSI determine what counts as safe AI?

To answer that question, we're going to have to do some serious research, especially if, like us, you think things are going to change a lot… a lot of great ideas are being discovered.

A lot of people are thinking about what tests AI will need as it becomes more powerful. That's a bit tricky, and there's a lot of research to be done.

I don't want to say that we have a definitive answer right now, but it's one of the things we need to figure out.

On the scaling hypothesis and AI safety

Everyone is talking about the "scaling hypothesis," but everyone is ignoring one question: what are we scaling?

The big breakthrough in deep learning over the last decade was a particular formula built around the scaling hypothesis. But that formula will change… and as it changes, the capabilities of these systems will increase, safety issues will become the most pressing, and that is what we need to address.

Will SSI be open source?

Currently, no AI company has open-sourced its major work, and neither have we. But I think that, depending on certain factors, there will be a lot of opportunities to open-source superintelligence safety work. Maybe not all of it, but certainly some.

What do you think of other AI companies' safety research?

I actually have a very high opinion of the industry. I think that as people continue to make progress, all companies will realize, probably at different points in time, the nature of the challenge they face. So it's not that we think others can't do it; it's that we think we can contribute.

What will the $1 billion be used for?

Finally, let me add some background beyond what Ilya said.

News of SSI first broke in June this year, with a very clear goal: to do what Ilya was unable to finish at OpenAI and build safe superintelligence.

SSI currently has only 10 employees. Now that the financing is complete, it plans to use the funds to buy computing power and hire top talent.

Investors are said to agree with their philosophy and to be mentally prepared for AI to one day surpass human intelligence.

Co-founder Daniel Gross also revealed that they do not place much weight on credentials and experience; instead, they spend several hours assessing whether candidates have "good character."

On computing power, SSI plans to partner with cloud vendors and chip companies, but it has not yet specified which companies it will work with or how.

In addition to Ilya himself, SSI's co-founders are Daniel Gross and Daniel Levy.

Left: Daniel Gross; right: Daniel Levy

Daniel Gross graduated from Harvard University's computer science department. He was previously a partner at Y Combinator and has founded or co-founded a number of companies, including Citrus Lane and WriteLaTeX (later renamed Overleaf).

He was named by Time magazine as one of the TIME100 most influential people in artificial intelligence.

Daniel Levy graduated from Stanford's computer science department and previously led OpenAI's optimization team.