Updated 3 mins ago · 18 visible stories · 2 fresh this run · 9 active categories
Top Story · Signal 9/10
Models · blogs.nvidia.com · 13 hours ago · 100% confidence

NVIDIA Launches Nemotron 3 Super: Open 120B MoE Model Optimized for Agentic AI Systems

NVIDIA released Nemotron 3 Super, a fully open 120B-parameter (12B active) hybrid Mamba-Transformer MoE model with a 1M-token context window, designed for efficient multi-agent applications and high-throughput reasoning. It features a latent MoE, multi-token prediction, and NVFP4 training, and NVIDIA says it outperforms larger models on speed and efficiency; the weights, training data, and recipes are all being released. Available now on Hugging Face and inference platforms (see the loading sketch below).

Why it matters

Delivers state-of-the-art open agentic AI efficiency, accelerating multi-agent development on NVIDIA hardware and democratizing advanced capabilities.
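Since the story says the weights are live on Hugging Face, here is a minimal loading sketch using the transformers library. The model ID nvidia/Nemotron-3-Super is a guess used for illustration only (check the actual repo name on huggingface.co/nvidia), and the hybrid Mamba-Transformer blocks may require a recent transformers release or trust_remote_code:

```python
# Minimal sketch: load an open checkpoint from Hugging Face.
# NOTE: "nvidia/Nemotron-3-Super" is a hypothetical repo name used for
# illustration only; verify the real ID on the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/Nemotron-3-Super"  # hypothetical

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # shard across available GPUs (needs accelerate)
    trust_remote_code=True,  # hybrid Mamba-Transformer layers may ship custom code
)

prompt = "Outline a plan for a multi-agent code-review pipeline."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```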

Intelligence Stream

Quick Bites

Yann LeCun's AMI: $1.03B seed, $3.5B valuation, JEPA world models for robotics
Startups
Nemotron 3 Super: latent MoE activates 4x the experts at 1x the cost; multi-token prediction gives ~3x faster generation (see the speedup sketch after this list)
Models
Nemotron 3 Super: 85.6% on agentic PinchBench, 5x throughput on Blackwell
Models
Oracle RPO surges 325% YoY to $553B on AI deals
Big Tech
Nemotron 3 Super: 120B MoE hits top agentic benchmarks, 5x throughput on NVIDIA NIM
Models
Nemotron 3 Super on Workers AI: blazing-fast inference ⚡️
Models
Nemotron 3 Super: 85.6% PinchBench (best open coding agent); 1M ctx; NVFP4 pretrained
Models
NVIDIA Nemotron 3 Super: 120B MoE (12B active) with 1M ctx crushes efficiency benchmarks
Models
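The "~3x faster generation" figure in the latent-MoE bite above presumably follows from standard multi-token-prediction accounting: if each forward pass emits one guaranteed token plus k speculative tokens, each accepted with probability p (stopping at the first rejection, as in speculative decoding), then the expected tokens accepted per pass is the speedup. A toy sketch of that arithmetic; the values of k and p are assumptions, not figures from the story:

```python
# Toy sketch: effective speedup from multi-token prediction (MTP).
# Assumptions (NOT from the story): 1 guaranteed token + k speculative
# tokens per forward pass; each speculative token is accepted with
# probability p, and acceptance stops at the first rejection.

def expected_tokens_per_pass(k: int, p: float) -> float:
    """Expected accepted tokens per pass: 1 + p + p^2 + ... + p^k."""
    return sum(p**i for i in range(k + 1))

if __name__ == "__main__":
    # With 3 speculative heads and ~87% per-token acceptance, this lands
    # near the ~3x generation speedup quoted in the quick bite.
    for k, p in [(3, 0.87), (4, 0.75), (2, 0.95)]:
        print(f"k={k}, p={p:.2f} -> ~{expected_tokens_per_pass(k, p):.2f}x")
```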

Market Pulse

AI Pulse: 55/100 (neutral)

AI markets quiet in early premarket; NVDA holds steady post-Nemotron amid GTC anticipation

NVDA +0.6% · NVIDIA chips

Recurring Movers

NVDA 8 hits · +1.2%
ORCL 3 hits · +8%
LWLG 2 hits · +12%
NBIS 2 hits · +16%