SAN FRANCISCO/NEW YORK (Reuters) - Ilya Sutskever, OpenAI's former chief scientist, has launched a new company called Safe Superintelligence (SSI), aiming to develop safe artificial intelligence systems that far surpass human capabilities.
He and his co-founders outlined their plans for the startup in an exclusive interview with Reuters this week.
Sutskever, 37, is one of the most influential technologists in AI and trained under Geoffrey Hinton, known as the "Godfather of AI". He was an early advocate of scaling - the idea that AI performance improves with vast amounts of computing power - which laid the groundwork for generative AI advances like ChatGPT. SSI will approach scaling differently from OpenAI, he said.
Following are highlights from the interview.
THE RATIONALE FOR FOUNDING SSI
"We've identified a mountain that's a bit different from what I was working [on]... Once you climb to the top of this mountain, the paradigm will change... Everything we know about AI will change once again. At that point, the most important superintelligence safety work will take place."
"Our first product will be the safe superintelligence."
WOULD YOU RELEASE AI THAT IS AS SMART AS HUMANS AHEAD OF SUPERINTELLIGENCE?
"I think the question is: Is it safe? Is it a force for good in the world? I think the world is going to change so much when we get to this point that to offer you the definitive plan of what we'll do is quite difficult.
I can tell you the world will be a very different place. The way everybody in the broader world is thinking about what's happening in AI will be very different in ways that are difficult to comprehend. It's going to be a much more intense conversation. It may not just be up to what we decide, also."
HOW WILL SSI DECIDE WHAT CONSTITUTES SAFE AI?
"A big part of the answer to your question will require that we do some significant research. And especially if you have the view, as we do, that things will change quite a bit... There are many big ideas that are being discovered.
Many people are thinking about, as an AI becomes more powerful, what are the steps and the tests to do? It's getting a little tricky. There's a lot of research to be done. I don't want to say that there are definitive answers just yet. But this is one of the things we'll figure out."
ON THE SCALING HYPOTHESIS AND AI SAFETY
"Everyone just says 'scaling hypothesis'. Everyone neglects to ask, what are we scaling? The great breakthrough of deep learning of the past decade is a particular formula for the scaling hypothesis. But it will change... And as it changes, the capabilities of the system will increase. The safety question will become the most intense, and that's what we'll need to address."
ON OPEN-SOURCING SSI'S RESEARCH
"At this point, all AI companies are not open-sourcing their primary work. The same holds true for us. But I think that hopefully, depending on certain factors, there will be many opportunities to open-source relevant superintelligence safety work. Perhaps not all of it, but certainly some."
ON OTHER AI COMPANIES' SAFETY RESEARCH EFFORTS
"I actually have a very high opinion about the industry. I think that as people continue to make progress, all the different companies will realize - maybe at slightly different times - the nature of the challenge that they're facing. So rather than say that we think that no one else can do it, we say that we think we can make a contribution."
(Reporting by Kenrick Cai, Anna Tong and Krystal Hu; Editing by Peter Henderson and Edwina Gibbs)