
Grok 2 Beta

xAI

Proprietary

Release Date: Aug 13, 2024
Knowledge Cutoff: Mar 01, 2024
Context Window: 128k

Pricing (per 1M tokens)

Input: $5
Output: $15
Blended (3:1): $7.50
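The blended figure above is a 3:1 input-to-output weighted average of the two per-million-token prices. A quick check of the arithmetic (the function name and signature are illustrative, not part of any API):

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_parts: int = 3, output_parts: int = 1) -> float:
    """Weighted-average price per 1M tokens for a given input:output ratio."""
    total_parts = input_parts + output_parts
    return (input_parts * input_per_m + output_parts * output_per_m) / total_parts

# Grok 2's listed prices at the stated 3:1 blend:
print(blended_price(5.0, 15.0))  # → 7.5
```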

Capabilities

Speed: 67 t/s

Latency

TTFT: 0.34 s
500-token response: 7.75 s
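The latency and throughput figures roughly reconcile: total response time is approximately the time to first token plus generation time at the listed speed. A minimal sketch of that estimate (the helper is illustrative):

```python
def response_time(tokens: int, tokens_per_sec: float, ttft_sec: float) -> float:
    """Estimated wall-clock time for a response: TTFT plus generation time."""
    return ttft_sec + tokens / tokens_per_sec

# With the card's figures (TTFT ≈ 0.34 s at ≈67 t/s), a 500-token response
# works out to roughly 7.8 s, close to the 7.75 s listed above.
print(round(response_time(500, 67, 0.34), 2))  # → 7.8
```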

Benchmarks

Intelligence: ●●○○○
Math: ●●○○○
Coding: ○○○○

MMLU Pro: 70.3%
GPQA: 47.1%
HLE: 4.7%
SciCode: 29.5%
AIME: 10.3%
MATH 500: 73.7%
LiveCodeBench: 24.1%
HumanEval: 86.6%

Grok-2 is xAI's latest large language model, built as a step up from Grok-1.5. xAI reports that it outperforms GPT-4 Turbo, Claude 3.5 Sonnet, and other frontier systems on benchmarks such as MMLU, GPQA, MATH, and HumanEval, all while handling 128k-token contexts at roughly 67 tokens/second.

For developers, Grok-2 offers top-tier performance at $5 input / $15 output per million tokens through an upcoming multi-region enterprise API that includes MFA, granular usage analytics, and team-management endpoints. If you need a fast, cost-efficient model that can drop into production and tap real-time data from X, Grok-2 is built for that.
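As a sketch of what calling such an API might look like, assuming an OpenAI-compatible chat-completions interface: the endpoint URL, model id (`grok-2`), and `XAI_API_KEY` environment variable below are assumptions for illustration, not confirmed details of the upcoming enterprise API.

```python
import json
import os

# Hypothetical endpoint; the enterprise API described above is not yet
# released, so treat this URL as a placeholder.
API_URL = "https://api.x.ai/v1/chat/completions"

def build_request(prompt: str, model: str = "grok-2") -> dict:
    """Assemble headers and JSON body for a single-turn chat request."""
    return {
        "url": API_URL,
        "headers": {
            "Authorization": f"Bearer {os.environ.get('XAI_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_request("Summarize today's top story on X.")
print(json.dumps(req["json"], indent=2))
```

The payload could then be sent with any HTTP client (e.g. `requests.post(req["url"], headers=req["headers"], json=req["json"])`).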