Mistral Large 2 is a 123B-parameter language model with a 128K-token context window, released with open weights under a research license. It raises code generation, math, and reasoning to GPT-4-class quality, supports dozens of natural languages plus 80+ programming languages, and offers advanced parallel and sequential function calling.
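To make the function-calling support concrete, here is a minimal sketch of a chat request payload with a tool definition. It assumes the OpenAI-style `tools`/`tool_choice` schema that Mistral's chat API accepts; the tool name, fields, and model alias are illustrative, not taken from official documentation.

```python
import json

# Hypothetical tool in the OpenAI-style "tools" format; the schema shown
# here is an assumption about the API's expected shape, for illustration.
def build_tool(name, description, parameters):
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": parameters,
        },
    }

payload = {
    "model": "mistral-large-latest",  # assumed model alias
    "messages": [
        {"role": "user", "content": "What's the weather in Paris and Tokyo?"}
    ],
    # With parallel function calling, the model may return several
    # tool_calls in one assistant turn (e.g. one per city here).
    "tools": [
        build_tool(
            "get_weather",
            "Return current weather for a city",
            {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        )
    ],
    "tool_choice": "auto",
}

print(json.dumps(payload, indent=2))
```

The payload would be POSTed to the chat-completions endpoint with an API key; sequential calling works the same way, with each tool result appended to `messages` before the next request.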
Developers can self-host under the research license or use the cloud API at $2 per million input tokens and $6 per million output tokens for long-context chat, RAG, or code-assistant workloads, with open weights, single-node inference, and no vendor lock-in.
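The per-token pricing above makes cost estimates for long-context workloads straightforward; a small sketch (prices taken from the text, so check current rates before relying on them):

```python
# API prices from the text: $2 per million input tokens,
# $6 per million output tokens.
INPUT_PRICE = 2.0 / 1_000_000   # USD per input token
OUTPUT_PRICE = 6.0 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one API call."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# e.g. a 100K-token RAG prompt producing a 2K-token answer
print(f"${estimate_cost(100_000, 2_000):.3f}")  # → $0.212
```

At these rates, even a prompt filling the full 128K-token window costs about $0.26 on the input side, which is what makes long-context RAG economically viable.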