Updated 9 mins ago · 18 visible stories · 1 fresh this run · 10 active categories
Top Story · Signal 9/10
Models · mistral.ai · 7 hours ago · 98% confidence

Mistral AI releases Mistral Small 4: Powerful 119B MoE multimodal model under Apache 2.0

Mistral AI has launched Mistral Small 4, a 119B-parameter Mixture-of-Experts model that integrates multimodal capabilities from Pixtral, reasoning from Magistral, and coding from Devstral. It supports 256k context and configurable reasoning effort, and it delivers performance competitive with closed-source giants on benchmarks while being fully open source. It is available immediately through major inference frameworks such as vLLM and Transformers.
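
A minimal sketch of how the model might be served through vLLM, assuming a hypothetical Hugging Face repository ID (mistralai/Mistral-Small-4 is not confirmed here) and conservative serving settings:

    # Sketch only: serving Mistral Small 4 with vLLM; the model ID below is an assumption.
    from vllm import LLM, SamplingParams

    # Context is capped well below the advertised 256k here; raise max_model_len if memory allows.
    llm = LLM(model="mistralai/Mistral-Small-4", max_model_len=32768)

    params = SamplingParams(temperature=0.7, max_tokens=512)
    outputs = llm.generate(["Summarize the benefits of sparse MoE inference."], params)
    print(outputs[0].outputs[0].text)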

Why it matters

A major open frontier model release from a top lab accelerates agentic, multimodal AI development without vendor lock-in.

Monitor

Intelligence Stream

Quick Bites

Mistral Small 4: 119B MoE (128 experts, 6B active) unifies text/vision/reasoning/coding, 256k ctx, Apache 2.0 open weights, beats GPT-OSS 120B on LiveCodeBench w/ 20% less output
Models
Qwen3.5 397B (~400B params) runs locally on MacBook at 1 tok/s via Claude Code
Models
Elon Musk: Tesla AI5 is edge-optimized, but SpaceX/Tesla will still buy Nvidia at scale
Big Tech
Xiaomi MiMo-V2-Pro: 1T-param agentic model, 1426 Elo on GDPval-AA, $1/$3 per M tokens
Models
Anthropic Dispatch: phone dispatches tasks to a sandboxed local Claude desktop agent; live for Max users
AI

Market Pulse

AI Pulse
52/100
neutral

AI chip stocks digest GTC announcements; NVDA flat amid inference hype and Musk's buy confirmation, with no movers above 1%

NVDA -0.2%
NVIDIA chips

Recurring Movers

NVDA 12 hits · +0.2%
MU 2 hits · +2.1%
TSM 2 hits · +1.5%
XIACF 1 hit · +2.1%