JamBlog

OpenAI co-founder starts new company to build safe superintelligence: here's what that means

One of OpenAI’s co-founders, who also served as its chief scientist until last month, has started a new company with the sole aim of building ‘safe superintelligence.’

Ilya Sutskever is one of the most important figures in the world of generative AI, having played a central role in developing the models that led to ChatGPT.

In recent years his focus has been on superalignment, specifically trying to ensure superintelligent AI does our bidding, not its own. He was one of the board members who voted to fire Sam Altman in late 2023, before resigning himself when Altman returned.

That is the work he hopes to continue with his new company, SSI Inc. It is the first AI lab to skip artificial general intelligence (AGI) and go straight for the sci-fi-inspired super brain. “Our team, investors, and business model are all aligned to achieve SSI,” the company wrote on X.

The founders are Sutskever; Daniel Gross, a former Apple AI lead turned investor in AI products; and Daniel Levy, a former OpenAI optimization lead and expert in AI privacy.

What is Superintelligence?

“Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time. We've started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence. It’s called Safe Superintelligence…” the company posted on X on June 19, 2024.

Artificial superintelligence (ASI) is AI with beyond human levels of intelligence. “At the most fundamental level, this superintelligent AI has cutting-edge cognitive functions and highly developed thinking skills more advanced than any human,” according to IBM.

Unlike AGI, which is generally as or more intelligent than humans, ASI would need to be significantly more intelligent in all areas including reasoning and cognition.

There is no strict definition of superintelligence and each company approaching advanced AI has different interpretations. There is also disagreement over how long it will take to achieve this level of technology with some experts predicting decades.

One aspect of superintelligence would be an AI capable of improving its own intelligence and capabilities, further widening the gap between human and AI abilities.

How do you ensure Superintelligence is safe?

The problem with creating an AI model more intelligent than humanity is it could be difficult to keep it controlled or stop it from outsmarting us. It could opt to destroy humanity if it isn’t properly aligned to human values and interests.

To address this, every company working on advanced AI is also developing alignment techniques. These approaches range from systems that run on top of the AI model to others that are trained alongside it; the latter is the SSI Inc approach.

SSI says that focusing exclusively on superintelligence will allow them to ensure it is developed alongside alignment and safety. “SSI is our mission, our name, and our entire product roadmap, because it is our sole focus,” they wrote on X.

“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs,” the company added. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.”

More from Tom's Guide


Merna Tatro

Update: 2024-05-31