
Ilya Sutskever

Co-founded OpenAI and co-created AlexNet, driving deep learning advances and steering research on safe AI.

Ilya Sutskever is a pioneering machine-learning researcher whose breakthroughs accelerated deep learning from academic labs to global industry. He co-invented AlexNet, the convolutional network that ignited the modern AI boom, and later co-founded OpenAI, where he served as Chief Scientist until 2024. His work ranges from scalable neural architectures to alignment research aimed at ensuring powerful AI benefits humanity.[1][2][3]

  Revolutionizing Deep Neural Networks at Scale

Sutskever's early work culminated in AlexNet, the first convolutional network to dominate the ImageNet challenge and prove that deep networks trained on GPUs could outperform hand-engineered vision pipelines.[2] This milestone sparked a wave of investment and research into large-scale neural networks, influencing computer vision, speech recognition, and natural language processing.

  Co-founding and Guiding OpenAI's Direction

In 2015, Sutskever left Google Brain to co-launch OpenAI, shaping its mission to build general AI that is broadly beneficial. As Chief Scientist he drove landmark projects such as GPT-3, DALL·E, and ChatGPT, championing open research while balancing commercial pressures against long-term safety goals.[3]

  Advancing Long-Term AI Safety Research

Recognizing the societal stakes of increasingly capable models, Sutskever advocates for alignment and controllability research. He co-initiated OpenAI's Superalignment agenda, allocating resources to ensure future systems behave consistently with human values and can be evaluated and steered even at extreme capability scales.[4][5]

  Founding Safe Superintelligence Inc.

After departing OpenAI in 2024, Sutskever launched Safe Superintelligence Inc. (SSI) with Daniel Gross and Daniel Levy to pursue a "safe-by-design" path toward superintelligent AI.[6]

By March 2025 the stealth-mode startup had secured more than $3 billion in funding and was reportedly valued at over $30 billion, despite withholding commercial releases until rigorous safety standards are met.[7][8]

SSI operates compact research hubs in Palo Alto and Tel Aviv, and its 2025 agenda prioritizes interpretability tooling, scalable oversight, and formal safety evaluations, activities Sutskever argues are prerequisites for releasing superhuman systems.[9]

  References

  1. en.wikipedia.org

  2. proceedings.neurips.cc

  3. openai.com

  4. arxiv.org

  5. openai.com

  6. reuters.com

  7. reuters.com

  8. wsj.com

  9. bloomberg.com