On January 5, 2026, NVIDIA introduced Alpamayo, an open development portfolio for autonomous vehicle research centered on reasoning-based models, closed-loop simulation, and large-scale open datasets. The company presents Alpamayo not as an in-vehicle autonomy product but as a training, evaluation, and development layer intended to support how autonomous driving models are built and tested before deployment.
Across NVIDIA’s published materials, Alpamayo is framed as a response to a practical development challenge: evaluating autonomous driving behavior in interactive environments where model decisions influence subsequent outcomes. Rather than focusing on individual perception or planning components, NVIDIA positions Alpamayo at the system level, combining models, simulation, and data into a closed-loop workflow.
Above: NVIDIA President and CEO Jensen Huang speaking about Alpamayo at CES 2026 in Las Vegas. Photo from this video on YouTube; used under the fair use provision.
Alpamayo 1: Reasoning Models for Autonomous Driving Development
At the center of the initial release is Alpamayo 1, which NVIDIA describes as an open vision-language-action (VLA) model with explicit reasoning capability. According to the company, Alpamayo 1 is a 10-billion-parameter model designed to generate driving trajectories alongside reasoning traces that describe why a given action was selected.
NVIDIA states that Alpamayo 1 is intended to function as a teacher model, rather than a production runtime embedded directly in vehicles. In this configuration, Alpamayo is used to explore and evaluate driving behavior at scale, with outputs that can inform the development and distillation of smaller models integrated into complete autonomous vehicle stacks.
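The teacher-model pattern NVIDIA describes corresponds to knowledge distillation: a large model's outputs supervise a smaller one. The sketch below is purely illustrative; the function names, toy trajectories, and loss are hypothetical stand-ins, not NVIDIA's actual training setup or API.

```python
# Hypothetical sketch of teacher-student distillation for driving models.
# All names and values here are illustrative, not NVIDIA APIs or data.

def teacher_predict(scene):
    """Stand-in for a large VLA teacher: returns a trajectory plus a
    reasoning trace explaining the chosen action."""
    trajectory = [(t * 0.5, 0.0) for t in range(5)]  # straight path, (x, y)
    reasoning = "lane clear ahead; maintain speed and heading"
    return trajectory, reasoning

def distillation_loss(student_traj, teacher_traj):
    """Mean squared error between student and teacher waypoints."""
    n = len(teacher_traj)
    return sum((sx - tx) ** 2 + (sy - ty) ** 2
               for (sx, sy), (tx, ty) in zip(student_traj, teacher_traj)) / n

teacher_traj, trace = teacher_predict(scene="example")
student_traj = [(t * 0.5 + 0.1, 0.0) for t in range(5)]  # imperfect student
loss = distillation_loss(student_traj, teacher_traj)
print(f"teacher reasoning: {trace}")
print(f"distillation loss: {loss:.3f}")
```

In a real pipeline the student's parameters would be updated to drive this loss down, yielding a compact model whose behavior approximates the teacher's while fitting in-vehicle compute budgets.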
NVIDIA presents the reasoning traces as a development aid: by pairing each action with explicit decision logic, model outputs can be reviewed and evaluated during autonomous driving development.
AlpaSim and Closed-Loop Simulation
The second core element of the Alpamayo portfolio is AlpaSim, an open-source simulation framework designed to support closed-loop autonomous vehicle testing. NVIDIA describes AlpaSim as a modular, distributed system built around a microservices architecture, where rendering, traffic simulation, physics, and inference workloads can run as independent services and be assigned across GPUs.
According to NVIDIA’s technical documentation, AlpaSim is designed to scale horizontally, allowing multiple simulation scenes to be evaluated in parallel while overlapping rendering and inference pipelines. This architecture is presented as a way to move beyond static replay or offline metrics, enabling autonomous systems to interact dynamically with simulated environments.
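The closed-loop, parallel-scene evaluation NVIDIA describes can be sketched in miniature. The snippet below is a hypothetical illustration of the concept, not the AlpaSim API: each rollout feeds the policy's action back into the environment state, and several scenes are evaluated concurrently as a distributed runner might.

```python
# Illustrative sketch of closed-loop evaluation across parallel scenes,
# loosely mirroring the horizontal scaling NVIDIA describes for AlpaSim.
# The policy, dynamics, and scene functions are hypothetical stand-ins.
from concurrent.futures import ThreadPoolExecutor

def run_closed_loop(scene_id, steps=10):
    """One closed-loop rollout: the policy's action alters the state
    that the policy then observes on the next step."""
    state = 0.0
    for _ in range(steps):
        action = -0.5 * state + 1.0   # toy policy reacting to current state
        state = state + 0.1 * action  # environment updated by the action
    return scene_id, round(state, 3)

# Evaluate several scenes concurrently, as a distributed runner might.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_closed_loop, range(4)))
print(results)
```

The key property, which static replay lacks, is the feedback loop inside `run_closed_loop`: the trajectory seen at step k depends on the decision made at step k-1.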
NVIDIA positions AlpaSim as a foundational layer for evaluating reasoning-based models such as Alpamayo 1, particularly in scenarios where model decisions alter the environment and shape subsequent outcomes. The company does not present AlpaSim as a substitute for real-world testing, but as a development tool intended to improve evaluation realism earlier in the workflow.
Open Physical AI Datasets
The third pillar of the Alpamayo portfolio is a set of open Physical AI datasets intended to support training and evaluation. NVIDIA states that the initial autonomous vehicle dataset includes more than 1,700 hours of driving data collected across 25 countries and over 2,500 cities.
NVIDIA’s technical blog provides additional detail, describing 1,727 hours of data organized into 310,895 clips, each approximately 20 seconds long. The company states that all clips include multi-camera and LiDAR data, with radar available for a subset of the dataset. The datasets are distributed through Hugging Face and are positioned as shared resources for the research and development community.
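The published figures are internally consistent, as a quick calculation shows: 310,895 clips of roughly 20 seconds each total about 1,727 hours.

```python
# Cross-check NVIDIA's published dataset figures: 310,895 clips of
# approximately 20 seconds each should total roughly 1,727 hours.
clips = 310_895
clip_seconds = 20          # approximate clip length per the blog
total_hours = clips * clip_seconds / 3600
print(f"{total_hours:.1f} hours")  # → 1727.2 hours
```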
While NVIDIA emphasizes dataset scale and geographic diversity, the company does not publish standardized performance benchmarks tied directly to Alpamayo 1 trained on this data. The datasets are described as inputs to development workflows rather than evidence of production readiness.
Published Disclosures and Limitations
Across its announcements, NVIDIA provides concrete disclosures about what Alpamayo includes and how it is intended to be used. These include the availability of an open reasoning VLA model, an open-source closed-loop simulation framework, and large-scale open datasets, all framed within a development and evaluation context.
At the same time, several categories remain explicitly undisclosed. NVIDIA does not publish standardized performance benchmarks for Alpamayo 1, does not specify production deployment timelines, and does not describe in-vehicle integration requirements. References to organizations such as Lucid, JLR, Uber, and Berkeley DeepDrive are presented as ecosystem participation rather than confirmed customer deployments.
This separation aligns with NVIDIA’s stated positioning of Alpamayo as a development-layer platform, rather than an autonomous driving solution.
Alpamayo as Development Infrastructure
Viewed through NVIDIA’s published materials, Alpamayo is presented as an effort to formalize how reasoning, simulation, and data interact within autonomous vehicle development workflows. NVIDIA emphasizes tooling, openness, and system-level integration, while leaving production outcomes, timelines, and real-world performance validation outside the scope of the initial disclosure.
As presented, Alpamayo functions as a reference framework for reasoning-based autonomy development rather than a finalized production platform.