Offering a vision for the future of artificial intelligence (AI), NVIDIA CEO Jensen Huang unveiled the chip giant’s product updates at the GTC 2025 keynote Tuesday, detailing a roadmap that includes much faster semiconductors, an AI-optimized operating system, and humanoid robots with advanced reasoning capabilities.
Arguably the most impactful announcement: The Blackwell Ultra family of chips, building on the existing Blackwell line, will begin shipping later this year, offering a significant advance in AI computing.
With twice the bandwidth, the Blackwell Ultra chips will support higher performance in use cases ranging from agentic AI to building reasoning models. NVIDIA claims customers can generate 50 times the revenue with Blackwell Ultra compared with its 2023-era Hopper chips.

“NVIDIA’s latest chip announcements expand its AI hardware portfolio across the entire computing spectrum, with the Blackwell Ultra GPU (GB300) delivering 1.5x more inference performance than current GB200 chips,” said Nick Patience, VP and practice lead at The Futurum Group.
“This isn’t just an incremental update—it’s engineered specifically for the age of reasoning AI, where models need to ‘think’ through complex problems. Combined with the new photonics technology and the expanded Blackwell family—from the desktop DGX Spark to the DGX SuperPod—NVIDIA is clearly positioning itself to power every level of AI computation, from individual developers to hyperscale data centers,” he said.
An example of how NVIDIA serves hyperscale data centers is the new Grace Blackwell NVLink72 rack. In a single rack, the unit delivers 1 exaflop of compute and draws 120 kW of power, with liquid-cooled infrastructure. The shift from air cooling to liquid cooling enables the rack to better manage its intense power demands. With its ability to support the next generation of advanced modeling, the unit has been applauded by AI experts as a big leap forward in AI infrastructure.
To support this level of extreme computing, Huang unveiled Dynamo, an AI-optimized operating system that allows Blackwell NVL systems to achieve up to 40x better performance. The AI operating system is designed to orchestrate memory access for inference and accelerate token generation, further boosting performance on the new generation of hardware.
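To give a sense of what that kind of memory orchestration for inference can look like, the sketch below is a deliberately simplified, hypothetical Python illustration of cache-aware request routing. The class names, structure, and logic are assumptions made for illustration only; they are not NVIDIA's Dynamo API.

```python
# Hypothetical sketch of KV-cache-aware request routing, the general idea
# behind orchestrating memory access for inference. Illustrative only;
# not NVIDIA's Dynamo API.
from dataclasses import dataclass, field


@dataclass
class Worker:
    name: str
    # request_id -> number of prompt tokens already prefilled on this worker
    kv_cache: dict = field(default_factory=dict)


@dataclass
class Request:
    request_id: str
    prompt_tokens: int


class Scheduler:
    """Routes each request to the worker that already holds its KV cache,
    so decode steps reuse memory instead of repeating prefill work."""

    def __init__(self, workers):
        self.workers = workers

    def route(self, req: Request) -> Worker:
        # Prefer a worker that has already prefilled this request.
        for w in self.workers:
            if req.request_id in w.kv_cache:
                return w
        # Otherwise pick the least-loaded worker and record the prefill there.
        target = min(self.workers, key=lambda w: len(w.kv_cache))
        target.kv_cache[req.request_id] = req.prompt_tokens
        return target


# Usage: repeated decode calls for the same request land on the same worker.
workers = [Worker("gpu-0"), Worker("gpu-1")]
sched = Scheduler(workers)
first = sched.route(Request("chat-42", prompt_tokens=512))
again = sched.route(Request("chat-42", prompt_tokens=512))
assert first is again  # cache hit: no redundant prefill
```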
Describing the upcoming roadmap, Huang said that in 2026, the company will release the Vera Rubin chip, which will combine a new CPU, Vera, with a new GPU, dubbed Rubin. This chip pairing can crunch 50 petaflops for inference, twice the level of the current Blackwell chips. In 2027, the company will debut the Rubin Ultra NVL576, which is expected to draw an astounding 600 kW in a single rack.
Perhaps most futuristic in a sci-fi sense, Huang spoke of how humanoid robots are only a few years away from large-scale use in manufacturing plants. The company’s Isaac GR00T N1 is an open platform for customizing humanoid robots, trained on both real and synthetic data.
NVIDIA says GR00T N1 will give robots a dual-system architecture: System 1 handles rapid thinking and action, similar to human reflexes, while System 2 allows more careful, deeper reasoning for deliberate decision-making. The company will partner with Google DeepMind and Disney Research to develop elements of the GR00T robotics platform.
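To make the dual-system idea concrete, here is a toy Python sketch of a control loop in that style: a fast, reflex-like policy acts on every step, while a slower planner revises the goal only occasionally. This is a conceptual illustration under that assumption, not NVIDIA's GR00T N1 implementation.

```python
# Toy illustration of a dual-system control loop: a slow deliberative
# planner (System 2) and a fast reflexive controller (System 1).
# Conceptual only; not NVIDIA's GR00T N1 code.
import random


def system2_plan(observation):
    """Slow, deliberate reasoning: choose a high-level goal."""
    # Placeholder logic; a real robot would run a reasoning model here.
    return "move_toward_object" if observation["object_visible"] else "search"


def system1_act(goal, observation):
    """Fast, reflex-like control: turn the current goal into a motor command."""
    if goal == "move_toward_object":
        return {"velocity": 1.0, "turn": -observation["object_offset"]}
    return {"velocity": 0.2, "turn": 0.5}  # slow scanning turn while searching


goal = "search"
for step in range(20):
    observation = {
        "object_visible": random.random() > 0.5,
        "object_offset": random.uniform(-1.0, 1.0),
    }
    if step % 5 == 0:                         # System 2 replans only occasionally
        goal = system2_plan(observation)
    command = system1_act(goal, observation)  # System 1 acts every step
```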