
Mistral Large 2 (Nov '24)

Creator: Mistral
Weights: Open (research license)
Release Date: Nov 18, 2024
Context Window: 128k

Pricing (per 1M tokens)

Input: $2
Output: $6
Blended (3:1 input:output): $3
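
The blended figure is simply a weighted average of the input and output prices at the stated 3:1 input:output token ratio; a quick Python check of the arithmetic:

    # Blended price at a 3:1 input:output token ratio (the ratio stated above).
    input_price = 2.00   # $ per 1M input tokens
    output_price = 6.00  # $ per 1M output tokens
    blended = (3 * input_price + 1 * output_price) / 4
    print(f"Blended: ${blended:.2f} per 1M tokens")  # -> $3.00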

Capabilities

Speed: 51 tokens/s

Latency

TTFT (time to first token): 0.48 s
500-token response time: 10.29 s
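
The two latency figures are consistent with the throughput above: end-to-end time is roughly TTFT plus output tokens divided by generation speed. A small sanity check in Python:

    # Rough end-to-end estimate: time to first token plus generation time at the quoted speed.
    ttft_s = 0.48      # time to first token, seconds
    speed_tps = 51     # output tokens per second
    tokens = 500
    total_s = ttft_s + tokens / speed_tps
    print(f"Estimated 500-token response: {total_s:.2f} s")  # ~10.3 s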

Benchmarks

Intelligence: ●●○○○
Math: ●●○○○
Coding: ○○○○

MMLU Pro: 69.7%
GPQA: 48.6%
HLE: 4.0%
SciCode: 29.2%
AIME: 11.0%
MATH 500: 73.6%
LiveCodeBench: 29.3%
HumanEval: 89.8%

Mistral Large 2 is a 123B-parameter language model with a 128K-token context window, released with open weights for research use. It brings code generation, math, and reasoning up to GPT-4-class quality, supports dozens of natural languages plus 80+ programming languages, and offers advanced function calling with parallel and sequential calls.
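
As a quick orientation, here is a minimal sketch of calling the hosted model over HTTP. The endpoint, model ID (mistral-large-2411), and payload schema are assumptions based on Mistral's public chat completions API and may differ from the current docs; the same endpoint also accepts an OpenAI-style tools array for function calling.

    # Minimal chat completion sketch; endpoint, model ID, and schema are assumptions -- verify against the docs.
    import os
    import requests

    resp = requests.post(
        "https://api.mistral.ai/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
        json={
            "model": "mistral-large-2411",  # assumed API name for Mistral Large 2 (Nov '24)
            "messages": [
                {"role": "user", "content": "List three uses for a 128k-token context window."}
            ],
            "max_tokens": 300,
        },
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])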

Developers can self-host the weights under a research license or call the cloud API at $2 / $6 per million input/output tokens for long-context chat, RAG, or code-assistant workloads, with open weights, single-node inference, and no vendor lock-in.
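
To make the pricing concrete, here is a rough per-request cost estimate for a hypothetical long-context RAG call at the listed rates; the token counts below are illustrative assumptions, not measured figures.

    # Hypothetical per-request cost at the listed rates; token counts are illustrative assumptions.
    input_rate = 2 / 1_000_000    # $ per input token
    output_rate = 6 / 1_000_000   # $ per output token
    input_tokens = 100_000        # retrieved context + prompt (assumed)
    output_tokens = 1_000         # generated answer (assumed)
    cost = input_tokens * input_rate + output_tokens * output_rate
    print(f"Estimated cost per request: ${cost:.3f}")  # ~$0.206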

Download mistral-large-instruct-2411