o3-pro

OpenAI

Frontier Model · Reasoning · Vision Model · Proprietary

Context

Knowledge Cutoff: Jun 01, 2024
Context Window: 200k tokens

Pricing (per 1M tokens)

Input: $20
Output: $80
Blended (3:1 input:output): $35
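
For a quick sanity check, the blended figure is just a 3:1 weighted average of the input and output prices. A minimal sketch in Python (prices copied from the table above; the request sizes in the example are hypothetical, and reasoning tokens bill at the output rate):

```python
# o3-pro list prices, USD per 1M tokens (from the table above).
INPUT_PER_M = 20.00
OUTPUT_PER_M = 80.00

# Blended 3:1 price: three parts input to one part output.
blended = (3 * INPUT_PER_M + 1 * OUTPUT_PER_M) / 4
print(f"Blended 3:1: ${blended:.2f} per 1M tokens")  # -> $35.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request (reasoning tokens count as output)."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: a 150k-token prompt that produces 5k output + reasoning tokens.
print(f"${request_cost(150_000, 5_000):.2f}")  # -> $3.40
```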

Capabilities

Speed: 21 t/s
Token types reported: input, output, reasoning

Latency

Time to first token (TTFT): 129.20 ms
500-token response: 153.03 s

Benchmarks

Reasoning: ●●●●
GPQA: 84.5%

o3-pro is OpenAI’s top-accuracy reasoning model: it has a 200k-token context window, accepts text and image inputs, runs an extra “deep think” pass, and beats o3-level baselines on hard coding, math, and science tasks. It ships with built-in tools for web browsing, code execution, and file analysis, but skips image generation to keep latency and safety in check.

For developers, this means you can feed enormous codebases or research dumps into one endpoint and get higher-quality answers or patches without chunking tricks. When precision matters more than speed (SWE-bench tickets, scientific analysis, multi-step agent workflows), o3-pro offers the strongest accuracy in OpenAI's reasoning lineup alongside a 200k-token context window, and it is available today via ChatGPT Pro or the API.
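
As a concrete starting point, here is a minimal sketch of that single-endpoint workflow using the OpenAI Python SDK's Responses API. The file path, prompt, and output limit are placeholders, and the reasoning-effort setting is an assumption; drop it if the model rejects it:

```python
from pathlib import Path
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical dump of a codebase or research notes; anything that fits
# inside the 200k-token context window can go in as a single input.
corpus = Path("project_dump.txt").read_text()

response = client.responses.create(
    model="o3-pro",
    reasoning={"effort": "high"},  # assumption: spend more "deep think" compute
    input=[
        {"role": "developer", "content": "You are reviewing a large codebase."},
        {"role": "user",
         "content": f"{corpus}\n\nFind the root cause of the flaky test and propose a patch."},
    ],
    max_output_tokens=4000,
)

print(response.output_text)
```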