Nvidia Announces Technologies for Robots, Cars, and Omniverse

GTC Nvidia said at its GPU Technology Conference (GTC) today that its much-anticipated Hopper H100 accelerators will begin shipping next month in OEM-built HGX systems.

However, those waiting to get their hands on Nvidia’s DGX H100 systems will have to wait until sometime in the first quarter of next year. DGX is Nvidia’s line of workstations and servers built around its own GPUs and interconnects, while HGX systems are partner-built servers using the same Nvidia technology.

And while Nvidia is ramping up its Hopper architecture in the datacenter, most of the enterprise kit announced this week won’t be getting the chip giant’s flagship silicon anytime soon.

At the edge, Nvidia appears content to squeeze more life out of its Ampere architecture.

Today, Nvidia has detailed a next-generation AI and robotics platform it calls IGX.


Nvidia’s IGX platform is a full-size system board built around the Orin platform

IGX is a “comprehensive computing platform to accelerate the deployment of smart machines and medical devices in real time,” said Kimberly Powell, vice president of healthcare at Nvidia. At its core, the system is essentially an extended version of Nvidia’s Jetson AGX Orin module, announced this spring.

“IGX is a complete system with an Nvidia Orin robotics processor, an Ampere tensor-core GPU, a ConnectX streaming I/O processor, a functional safety island, and a security microcontroller because more and more robots and humans will be working in the same environment,” she added.

In terms of performance, there isn’t much new here. We’re told the platform is based on a 64GB version of the Orin industrial system-on-module, which is comparable in performance to the AGX Orin unit launched earlier this year. That unit featured 32GB of memory, an eight-core Arm Cortex-A78AE CPU, and an Ampere-based GPU.

IGX does gain an integrated ConnectX-7 NIC for high-speed connectivity over a pair of 200Gbps interfaces. The board also appears to feature a full complement of M.2 storage, PCIe slots, and at least one legacy PCI slot for expansion.

Nvidia is aiming the IGX platform at a variety of AI and robotics use cases in healthcare, manufacturing, and logistics, where privacy or latency concerns make more centralized systems impractical.

Like the AGX Orin, the system is complemented by Nvidia’s AI Enterprise suite and Fleet Command platform for deployment and management.

One of the first applications of the IGX platform will be in Nvidia’s medical imaging and robotics software.

“Nvidia Clara Holoscan is our application framework that sits on top of IGX for medical device and imaging robotics pipelines,” Powell explained.

Three medical device vendors (Activ Surgical, Moon Surgical, and Proximie) plan to use IGX and Clara Holoscan to power surgical robots and telepresence platforms. IGX Orin developer kits are scheduled to ship early next year, with production systems available from ADLink, Advantech, Dedicated Computing, Kontron, MBX, and Onyx, among others.

Speaking of Orin, Nvidia also revealed its Jetson Orin Nano compute modules. The Orin Nano is available in two configurations at launch: an 8GB version capable of 40 TOPS of AI inference, and a cut-down version with 4GB of memory capable of 20 TOPS.


Nvidia’s new Jetson Orin Nano modules use the same pin-compatible edge connector as earlier Jetson units

Like previous Jetson units, the Orin Nano uses a pin-compatible edge connector reminiscent of a laptop’s SODIMM memory slot, and consumes between 5W and 15W depending on the application and SKU. Nvidia’s Jetson Orin Nano units will be available in January starting at $199.
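Those TOPS figures refer to low-precision inference, which developers typically reach on Jetson-class modules through Nvidia’s TensorRT runtime bundled with JetPack. The sketch below is purely illustrative rather than anything Nvidia has published: it shows the usual flow of compiling an ONNX model into a TensorRT engine using the Python bindings, with model.onnx as a placeholder; hitting the quoted INT8 numbers would additionally need a calibration step, so FP16 is used here.

# Illustrative sketch: build a TensorRT engine from an ONNX model, the usual
# way to run accelerated inference on a Jetson module. "model.onnx" is a
# placeholder; INT8 (the precision behind the TOPS figures) would also need
# a calibration dataset, so this example settles for FP16.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:              # placeholder model file
    if not parser.parse(f.read()):
        raise RuntimeError(str(parser.get_error(0)))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)            # half precision on the Ampere GPU

plan = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(plan)                                # engine to deploy on the module

The resulting engine file is what would actually be loaded and executed on the module at runtime.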

OVX update

Nvidia’s OVX servers, which are designed to run its Omniverse platform, aren’t getting Hopper either.

Instead, the company’s second-generation digital-twin and visualization systems come with eight L40 GPUs. The cards are based on the company’s next-generation Ada Lovelace architecture and feature Nvidia’s third-generation ray-tracing cores and fourth-generation Tensor Cores.

The GPUs are accompanied by a pair of Ice Lake Intel Xeon Platinum 8362 CPUs, for a total of 128 processor threads clocked at up to 3.6GHz.


Nvidia’s custom OVX system packs eight Ada Lovelace GPUs into its gold chassis

The compute is accompanied by three ConnectX-7 NICs, each capable of 400Gbps of throughput, and 16TB of NVMe storage. While OVX is available as individual nodes, Nvidia envisions it being deployed as part of what it calls an OVX SuperPod: 32 systems connected using the company’s 51.2Tbps Spectrum-3 switches.
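As a quick back-of-envelope check (our arithmetic, not Nvidia’s, and assuming all three ports on every node uplink into the same fabric), the quoted per-node NIC capacity and SuperPod size imply:

# Back-of-envelope: aggregate NIC bandwidth of a 32-node OVX SuperPod,
# using only the figures quoted above.
nics_per_node = 3
gbps_per_nic = 400
nodes = 32

per_node_gbps = nics_per_node * gbps_per_nic      # 1,200 Gbps per OVX node
superpod_tbps = per_node_gbps * nodes / 1000      # 38.4 Tbps across the pod
print(f"Per node: {per_node_gbps} Gbps, SuperPod total: {superpod_tbps} Tbps")

That total sits comfortably within the 51.2Tbps of switching capacity Nvidia quotes for the pod’s fabric.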

The second-generation systems will be available from Lenovo, Supermicro, and Inspur in 2023, with Nvidia planning to expand availability to additional system partners down the line.

Hop on Drive Thor

The only kit announced at GTC this week to get Nvidia’s Hopper architecture is the Drive Thor autonomous vehicle computer.

Drive Thor replaces Nvidia’s Atlan platform on its 2025 roadmap and promises to deliver 2,000 TOPS of inference performance at launch.


Nvidia’s Drive Thor autonomous car computer promises 2,000 TOPS of performance when it launches in 2025

“Drive Thor comes packed with the cutting-edge capabilities that were introduced in our Grace CPU, our Hopper GPU, and our next-generation GPU architecture,” Danny Shapiro, Nvidia’s vice president of automotive, said during a press briefing. He said Drive Thor is designed to unify the many computer systems that power modern cars onto one central platform.

“Look at today’s advanced driver assistance systems – parking, driver monitoring, camera mirrors, digital instrument clusters, infotainment systems – all located on different computers distributed throughout the vehicle,” he said. “However, in 2025, these functions will no longer run on separate computers. Instead, Drive Thor will enable manufacturers to efficiently consolidate these functions into a single system.”

To handle all the information streaming from a car’s sensors, the chip features multi-compute domain isolation, which Nvidia says allows it to run concurrent time-critical processes without interruption.

The technology also allows the chip to run multiple operating systems simultaneously to suit different vehicle applications. For example, a car’s primary OS might be Linux, while the infotainment system runs QNX or Android.

However, it is not clear when we will see the technology in action. As it stands, all three of Nvidia’s launch partners – Zeekr, Xpeng, and QCraft – are based in China. ®
