Nvidia Omniverse Will Support Scientific Digital Twins



Nvidia has announced several important advancements and partnerships that extend Omniverse to scientific applications as well as high-performance computing (HPC) systems. The goal is to support scientific digital twins that bridge the data silos currently separating different applications, models, instruments, and user experiences. The work furthers Nvidia’s progress in building Omniverse for entertainment, industrial, infrastructure, robotics, driverless-car, and medical applications.

The Omniverse platform uses special-purpose connectors to translate and align 3D data from dozens of formats and applications on the fly. Changes made in one tool, application, or sensor are dynamically reflected in other tools and views that look at the same building, factory, road, or human body from different perspectives.

Scientists are using it to model fusion reactors, cell interactions, and planetary systems. Today, scientists spend a lot of time translating data between tools and then manually modifying data representation, model settings, and 3D rendering engines to see the results. Nvidia wants to use the USD (Universal Scene Description) format as an intermediate data layer to automate this process.

Dion Harris, Nvidia Accelerated Computing Lead Product Manager, explained: “The USD format allows us to have a single standard by which you can represent all of these different types of data in a single complex model. You could go in and somehow create an API specifically for a certain type of data, but that process wouldn’t be scalable or extensible to other use cases or other types of data regimes.”
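
To see what USD as a common layer means in practice, here is a minimal sketch using Pixar’s open-source pxr Python bindings. The file name, prim paths, and cube size are illustrative assumptions; the point is that any USD-aware tool can read back exactly the scene another tool wrote.

```python
# Minimal sketch of authoring and reading a USD scene with Pixar's
# open-source Python bindings (pip package: usd-core). The file name
# and prim paths below are illustrative, not from Nvidia's announcement.
from pxr import Usd, UsdGeom

# Author a simple scene: one transform with a cube under it.
stage = Usd.Stage.CreateNew("factory.usda")
UsdGeom.Xform.Define(stage, "/Factory")
rack = UsdGeom.Cube.Define(stage, "/Factory/Rack")
rack.GetSizeAttr().Set(2.0)  # edge length in scene units
stage.GetRootLayer().Save()

# Any USD-aware tool (or another Omniverse connector) can now open the
# same layer and see the identical scene graph.
for prim in Usd.Stage.Open("factory.usda").Traverse():
    print(prim.GetPath(), prim.GetTypeName())
```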

Here are the main updates:

  • Nvidia Omniverse now connects to scientific computing visualization tools on systems with Nvidia A100 and H100 Tensor Core GPUs.
  • Supports larger scientific and industrial digital twins using Nvidia OVX and Omniverse Cloud.
  • Enhances Holoscan to support scientific as well as medical use cases. New APIs for C++ and Python will make it easier for researchers to create sensor data processing workflows for Holoscan (see the pipeline sketch after this list).
  • Added connections to Kitware’s ParaView for visualization, Nvidia IndeX for volumetric rendering, Nvidia Modulus for physics-ML, and NeuralVDB for large-scale sparse volumetric rendering.
  • MetroX-3 extends the range of the Nvidia Quantum-2 InfiniBand platform up to 25 miles. This will make it easier to connect scientific instruments spread across a large facility or campus.
  • Nvidia BlueField-3 DPUs will help orchestrate data management at the edge.
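
As flagged in the Holoscan item above, here is a minimal sketch of what a Python sensor-processing pipeline looks like in the Holoscan SDK’s application style: operators declare input and output ports and are wired together into a flow. The operator names, the fake sensor reading, and the doubling step are illustrative assumptions, not code from Nvidia’s announcement.

```python
# Minimal Holoscan-style Python pipeline: a source operator emits sensor
# frames and a sink operator processes them. The operator names and the
# processing logic are illustrative assumptions.
from holoscan.conditions import CountCondition
from holoscan.core import Application, Operator, OperatorSpec


class SensorSourceOp(Operator):
    """Emits a fake sensor reading each tick (stand-in for real hardware)."""

    def setup(self, spec: OperatorSpec):
        spec.output("out")

    def compute(self, op_input, op_output, context):
        op_output.emit({"reading": 42.0}, "out")


class ProcessOp(Operator):
    """Receives readings and applies a trivial transformation."""

    def setup(self, spec: OperatorSpec):
        spec.input("in")

    def compute(self, op_input, op_output, context):
        msg = op_input.receive("in")
        print("processed:", msg["reading"] * 2)


class SensorApp(Application):
    def compose(self):
        # Run the source 10 times, then stop.
        src = SensorSourceOp(self, CountCondition(self, 10), name="src")
        sink = ProcessOp(self, name="sink")
        self.add_flow(src, sink)  # connect src's output to sink's input


if __name__ == "__main__":
    SensorApp().run()
```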

Building bigger twins

Processing latency is one of the biggest challenges when building Omniverse workflows that span many tools and applications. Translating between a few file formats or tools is one thing; maintaining live connections among many requires massive computing power. Nvidia’s larger A100 and H100 GPUs could help reduce latency when running larger models, and support for Nvidia OVX and Omniverse Cloud will help enterprises scale composable digital twins across more building blocks.

Nvidia created a demo showing how these new capabilities can simulate more aspects of data centers. Earlier this year, the company showed work on simulating data center network hardware and software. Now it can bring together engineering designs from tools like Autodesk Revit, PTC Creo, and Trimble SketchUp so that different engineering teams can share designs. These can be combined with port maps from Patch Manager to plan cabling and physical connectivity within the data center. Cadence 6SigmaDCX can then analyze heat flows, and Nvidia Modulus can create faster surrogate models for real-time what-if analysis.
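
The surrogate-model step deserves a concrete illustration: rather than re-running a slow computational fluid dynamics (CFD) solve for every what-if question, a small neural network is trained on precomputed solver results and then answers design queries in a single forward pass. The sketch below uses plain PyTorch rather than Modulus’s own API, and the design inputs and synthetic training data are hypothetical.

```python
# Illustrative surrogate-model sketch in plain PyTorch (not Modulus's
# actual API). A small network learns the mapping from design knobs to a
# solver output, so what-if queries no longer need the slow solver.
import torch
import torch.nn as nn

# Hypothetical training data: rows of (rack load, inlet temp, fan speed,
# aisle width) paired with a hot-spot temperature from a CFD solve.
X = torch.rand(1024, 4)
y = 30 + 20 * X[:, :1] - 5 * X[:, 2:3]  # stand-in for real solver output

surrogate = nn.Sequential(
    nn.Linear(4, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),  # predicted hot-spot temperature
)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(X), y)
    loss.backward()
    opt.step()

# Real-time what-if query: evaluate a new design in one forward pass.
print(surrogate(torch.tensor([[0.8, 0.5, 0.3, 0.6]])))
```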

Nvidia is also working on a partnership with Lockheed Martin on a project for the National Oceanic and Atmospheric Administration. They plan to use Omniverse as part of an Earth observation digital twin that monitors the environment and feeds data from ground stations, satellites, and sensors into a single model. This could help improve our understanding of glacier melt, model climate impacts, assess drought risks, and prevent wildfires.

This digital twin will work with Lockheed’s OpenRosetta3D to store data, apply artificial intelligence (AI), and create connectors with various tools and applications, standardized on the USD format to represent and share data across the system. Nvidia Nucleus will translate between native data formats and USD, then deliver the result to Lockheed’s Unity-based Agatha 3D viewer to visualize data from multiple sensors and models.

Harris believes that these enhancements will usher in a new era of digital twins that shifts from passively mirroring a model of the world to actively shaping the world. A two-way connection will leverage IoT, AI, and the cloud to issue commands to teams in the field. For example, Nvidia is working with Lockheed Martin on using digital twins to help steer satellites to focus on areas most at risk for wildfires.

“We are only scratching the surface of digital twins,” Harris said.

