Mistral-Small-3.2-24B-Instruct is a 24B-parameter, Apache-2.0 LLM that tightens instruction following, roughly halves repetition loops, and adds sturdier function calling compared to v3.1. It scores about 65% on WildBench v2 and ~80% on MMLU while handling text-and-image prompts in 24 languages.
Developers can self-host it with vLLM or Transformers on ~55 GB of GPU VRAM, skipping API fees and lock-in. Strong function-calling and vision support make it a practical open-source base for chatbots, agent pipelines, and multimodal apps.
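As a rough sketch of that self-hosting path: the commented `vllm serve` command and the `get_weather` tool schema below are illustrative assumptions (exact flags and the Hugging Face model id may differ by vLLM version), while the client side uses the standard OpenAI-compatible API that vLLM exposes, so the same request shape works for the function-calling use case mentioned above.

```python
# Minimal sketch: serve the model with vLLM's OpenAI-compatible server, then
# send a function-calling request. Assumes vLLM is installed and the weights
# fit in available GPU memory (~55 GB in bf16). Example serve command:
#
#   vllm serve mistralai/Mistral-Small-3.2-24B-Instruct-2506 \
#       --tokenizer_mode mistral --config_format mistral --load_format mistral \
#       --tool-call-parser mistral --enable-auto-tool-choice
#
from openai import OpenAI

# Point the standard OpenAI client at the local vLLM endpoint (no real key needed).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# `get_weather` is a hypothetical tool, used only to show the request shape.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="mistralai/Mistral-Small-3.2-24B-Instruct-2506",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chooses to call the tool, the structured call shows up here;
# otherwise the plain text answer is in .message.content.
print(response.choices[0].message.tool_calls)
```

Because the endpoint speaks the OpenAI chat-completions protocol, the same client code drops into existing chatbot or agent pipelines with only the `base_url` changed.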