Mistral Magistral is a reasoning-tuned large language model, offered as an open-source 24B-parameter "Small" variant and a higher-end "Medium" variant, each handling 128K-token contexts. It delivers step-by-step answers across eight major languages, scores 73.6% on AIME-24 and 0.898 on HumanEval, and streams tokens up to 10× faster than typical GPT-class models.
Use it when your app needs transparent logic for calculations, coding, or regulated-industry workflows: the model exposes its chain-of-thought and lets you audit every step. Small can be self-hosted under the Apache 2.0 license, while Medium is reachable via Mistral's API, SageMaker, and soon other clouds at $2/$5 per million input/output tokens, giving developers high reasoning power without vendor lock-in or ballooning costs.
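The per-token pricing above makes cost estimation straightforward. Here is a minimal sketch of how a request's cost could be computed from the quoted $2/$5 per-million rates; the function name and defaults are illustrative, not part of any Mistral SDK.

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_rate: float = 2.0, output_rate: float = 5.0) -> float:
    """Estimate the USD cost of one Magistral Medium API request.

    Rates are dollars per million tokens, matching the quoted
    $2 (input) / $5 (output) pricing; both are hypothetical defaults
    you should replace with current published rates.
    """
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000


# Example: a 10,000-token prompt producing a 2,000-token answer
# costs 10_000 * 2/1e6 + 2_000 * 5/1e6 = $0.02 + $0.01 = $0.03.
print(estimate_cost_usd(10_000, 2_000))
```

Reasoning models tend to emit long chains-of-thought, so output tokens usually dominate the bill; budgeting against the higher output rate is the safer planning assumption.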
Magistral Small is an open-weight model available for self-deployment under the Apache 2.0 license as the Magistral Small 2506 release.