Mistral AI releases Mistral Small 4: Powerful 119B MoE multimodal model under Apache 2.0
Mistral AI has launched Mistral Small 4, a 119B-parameter Mixture-of-Experts model that combines multimodal capabilities from Pixtral, reasoning from Magistral, and coding from Devstral. It supports a 256k context window and configurable reasoning effort, and it delivers benchmark performance competitive with closed-source frontier models while remaining fully open under Apache 2.0. The model is supported at launch in major inference frameworks such as vLLM and Hugging Face Transformers.
A major open frontier model release from a top lab accelerates agentic, multimodal AI development without vendor lock-in.
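Day-one vLLM support means local serving takes only a few lines of Python. Below is a minimal sketch, assuming the checkpoint is published under a Hugging Face ID like "mistralai/Mistral-Small-4" (hypothetical; check the model card for the actual name and for how reasoning effort is actually controlled):

```python
# Minimal local inference sketch with vLLM.
# Assumptions (not confirmed by the announcement): the model ID, the GPU
# count needed for a 119B MoE, and that reasoning effort is steered via
# the system prompt rather than a dedicated API parameter.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mistral-Small-4",  # hypothetical Hugging Face ID
    tensor_parallel_size=8,             # a 119B MoE generally needs several GPUs
    max_model_len=32768,                # raise toward 256k if memory allows
)

sampling = SamplingParams(temperature=0.15, max_tokens=512)

messages = [
    # Assumed reasoning-effort control; verify against the model card.
    {"role": "system", "content": "Reasoning effort: high."},
    {"role": "user", "content": "Summarize the trade-offs of MoE routing in two sentences."},
]

outputs = llm.chat(messages, sampling)
print(outputs[0].outputs[0].text)
```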