At CES 2026, NVIDIA unveiled new open models, open-source frameworks, and AI infrastructure to accelerate end-to-end robotics development, enabling the next wave of “generalist-specialist” robots.

Partners including Boston Dynamics, Caterpillar Inc., Humanoid, LG Electronics, and NEURA Robotics are debuting new autonomous machines, ranging from mobile manipulators to humanoids.

Many of these leaders, including Franka Robotics, are also leveraging GR00T-enabled workflows to simulate, train, and validate complex new behaviors.

Other news:

Siemens and NVIDIA Expand Partnership to Build the Industrial AI Operating System

NVIDIA is expanding its partnership with Siemens to build the industrial AI operating system—redefining design, manufacturing, and operations for the physical world.

Together, they are bringing AI-driven innovation to every industry and industrial workflow, enabling more resilient, sustainable manufacturing worldwide.

Watch the Siemens and NVIDIA keynote replay here.

Steel, Sensors and Silicon: How Caterpillar Is Bringing Edge AI to the Jobsite

Following the Caterpillar keynote at CES 2026, Caterpillar and NVIDIA are bringing industrial-grade intelligence to the Cat® 306 CR Mini Excavator.

Powered by NVIDIA Riva open models and the Jetson Thor platform, and simulated using Omniverse, the new CAT AI Assistant enables a real-time voice assistant for operators.

NVIDIA released Cosmos Reason 2, the latest advancement in open, reasoning vision language models for physical AI. Cosmos Reason 2 surpasses its predecessor in accuracy and tops the Physical AI Bench and Physical Reasoning leaderboards as the #1 open model for visual understanding.

Physical AI experts from NVIDIA also offer predictions for 2026:

The industrial landscape in Europe is undergoing an immense transformation, with leaders pivoting from traditional automation toward Physical AI – a generation of autonomous models that perceive, understand, interact with, and navigate the physical world.

Looking ahead, NVIDIA experts examine how physical AI and robotics are poised to transform the industrial sector, not only in 2026 but in the decades to come.

Everything Physical Will Be Born in Simulation

The most significant change expected in the coming year is the adoption of a “simulation first” philosophy, meaning that nothing physical is truly ‘new’ by the time it arrives on the factory floor.

“From breakthrough products to the factories they’re built in, everything manufactured will be born in a digital world. Simulation-first design breaks through the barriers of cost, risk, and speed, letting manufacturers iterate, test, and optimize long before breaking ground or cutting steel,” says Rev Lebaredian, NVIDIA’s Vice President of Omniverse and Simulation Technology.

All over the world, engineers are increasingly using high-fidelity digital twins to perfect every movement in a virtual environment, long before execution. By the time a robotic arm is installed in a facility, its job has been practiced millions of times in a virtual replica, ensuring that the moment power is switched on, the facility operates at peak efficiency, saving billions in potential downtime and redesign costs.

“This digital approach lays the foundation for intelligent automation, as robots and AI-powered industrial facilities can be trained, validated, and continually improved through simulated environments before deployment,” Rev adds.

Robots with Common Sense

Thanks to new physical AI reasoning models, autonomous machines possess a foundation of core skills that adapt to the real world. Previously, a robot was only as good as its specific code. Today's models, however, enable a machine trained in a simulated warehouse to be deployed in a public facility, such as a hospital, and quickly learn how to navigate safely around people and obstacles.

This versatility enables robots to scale into domains that were previously impractical. They’re now reasoning agents capable of identifying empty pallets, misplaced items, or hazardous spills, while autonomously deciding how to fix the problem without human intervention.

Vision Language Models Operating as the Control Tower for Outside-in Robotics
Deepu Talla, Vice President of Robotics and Edge AI at NVIDIA, says vision language models will manage fleets of robots beginning in 2026.

“VLMs, AI that can perceive and reason about physical objects and behaviors, will operate as the control tower for outside-in robotics, enabling robots to collaborate and communicate with their environments.

Fixed overhead cameras will provide safety and operations co-pilots that help direct people and machines, while adapting in real time to keep operations on schedule.”

Operators no longer need complex coding skills; they can type or sketch commands to instantly deploy an entire fleet, ensuring daily workflows run smoothly and issues are quickly identified and resolved.

“This shift is already happening,” Deepu explains. “Ceiling-mounted cameras can now spot empty pallets, misplaced items, or spills, and send robots to fix them. Most teams run these systems onsite for privacy and speed, linking them to existing floor software and cameras.”
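The outside-in workflow Deepu describes can be sketched in a few lines: detections from overhead cameras are mapped to tasks and dispatched to the nearest idle robot. This is a minimal illustrative sketch, not NVIDIA code; the event labels, task names, and robot records are all hypothetical stand-ins for what a VLM and fleet-management layer would actually produce.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str                 # event label a VLM might emit from a camera feed
    location: tuple            # (x, y) floor coordinates under the overhead camera

# Hypothetical mapping from detected events to robot tasks.
TASK_FOR_EVENT = {
    "empty_pallet": "restock",
    "misplaced_item": "return_to_shelf",
    "spill": "clean",
}

def dispatch(detections, robots):
    """Assign each actionable detection to the nearest idle robot.

    `robots` is a list of dicts like {"id": "r1", "pos": (x, y)}.
    Returns a list of (robot_id, task, location) assignments.
    """
    assignments = []
    idle = list(robots)
    for det in detections:
        task = TASK_FOR_EVENT.get(det.label)
        if task is None or not idle:
            continue  # unknown event, or no robot free right now
        # Pick the nearest idle robot (Manhattan distance on floor coordinates).
        robot = min(
            idle,
            key=lambda r: abs(r["pos"][0] - det.location[0])
                        + abs(r["pos"][1] - det.location[1]),
        )
        idle.remove(robot)
        assignments.append((robot["id"], task, det.location))
    return assignments
```

In a real deployment the detection list would come from a VLM watching ceiling-mounted camera streams, and the assignments would flow into the site's existing floor-management software, which is where the onsite, low-latency requirement Deepu mentions comes in.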

The payoff is clear: fewer incidents, faster changeovers, and consistent performance across sites, making autonomy a dependable part of daily operations.