On September 12 Nvidia unveiled a palm-sized, energy-efficient artificial intelligence (AI) computer that automakers can use to power automated and autonomous vehicles for driving and mapping.
The new single-processor configuration of the NVIDIA® DRIVE™ PX 2 AI computing platform for AutoCruise functions — which include highway automated driving and HD mapping — consumes just 10 watts of power and enables vehicles to use deep neural networks to process data from multiple cameras and sensors. It will be deployed by China’s Baidu as the in-vehicle car computer for its self-driving cloud-to-car system.
DRIVE PX 2 enables automakers and their tier 1 suppliers to accelerate production of automated and autonomous vehicles. A car using the small form-factor DRIVE PX 2 for AutoCruise can understand in real time what is happening around it, precisely locate itself on an HD map and plan a safe path forward.
“Bringing an AI computer to the car in a small, efficient form factor is the goal of many automakers,” said Rob Csongor, vice president and general manager of Automotive at NVIDIA. “NVIDIA DRIVE PX 2 in the car solves this challenge for our OEM and tier 1 partners, and complements our data center solution for mapping and training.”
More than 80 automakers, tier 1 suppliers, startups and research institutions developing autonomous vehicle solutions are using DRIVE PX. DRIVE PX 2's architecture scales from a single mobile processor configuration, to a combination of two mobile processors and two discrete GPUs, to multiple DRIVE PX 2s. This enables automakers and tier 1 suppliers to move from development into production for a wide range of self-driving solutions — from AutoCruise for the highway, to AutoChauffeur for point-to-point travel, to a fully autonomous vehicle.
The new small form-factor DRIVE PX 2 will be the AI engine of the Baidu self-driving car. Last week at Baidu World, in Beijing, NVIDIA and Baidu announced a partnership to deliver a self-driving cloud-to-car system for Chinese automakers, as well as global brands.
Read more at Nvidia