For over two decades, simulation has played a critical role in accelerating the development and deployment of robotic automation across industry. By allowing engineers to design, test and refine robotic applications in a virtual environment, simulation tools such as ABB’s RobotStudio® suite have helped companies reduce risk, shorten commissioning times and improve system performance before equipment reaches the factory floor.
As robotics enters a new era driven by artificial intelligence, traditional simulation approaches are beginning to reach their limits. A persistent obstacle has emerged that increasingly restricts the scalability of AI-enabled automation: the gap between simulated behavior and real-world performance.
Traditional simulation and the ‘sim-to-real gap’
Conventional robot simulation environments are primarily designed around geometry and motion planning. They model robot kinematics, reachability and collision detection with high accuracy, allowing engineers to validate robot paths and cell layouts efficiently. While this approach has proven effective for many traditional automation tasks, it fails to capture the complexity and variability of real industrial environments.
Physical AI promises a new generation of robots capable of perceiving, reasoning and acting autonomously, yet AI-driven automation systems can struggle to scale beyond controlled environments. Factory floors rarely operate under perfect conditions. Lighting intensity changes throughout the day, surfaces create unpredictable reflections, parts may arrive misaligned and sensors can introduce noise or distortion. For AI-driven perception systems that rely on visual data to guide robotic actions, these seemingly minor variations can have a significant impact on performance.
Controller behavior can also introduce discrepancies. Many simulation tools approximate robot motion rather than executing the same control logic used by the physical robot, leading to potential differences in timing, acceleration profiles or path execution when programs are transferred from simulation to the real machine.
The discrepancy between how robotic systems behave in simulation and how they perform in a real physical application is known as the ‘sim-to-real gap’. Closing this gap involves not just more advanced AI models, but the ability to design, train, and validate robotic systems in simulation environments that accurately reflect the conditions of the real world.
Introducing RobotStudio® HyperReality
Originally launched in 1998, ABB’s RobotStudio offline programming and simulation software has established a strong reputation for enabling faster, safer and more efficient robot installation and commissioning.
Used by companies of all sizes across sectors from manufacturing to construction, it has proven invaluable as a way for users to minimize risk in their robotic automation projects by identifying potential issues and verifying performance before any physical equipment is installed.
Now, with the launch of RobotStudio® HyperReality, the RobotStudio simulation platform is equipped with powerful new capabilities designed for the age of Physical AI. Building on RobotStudio’s strengths as an offline programming environment for robotic motion, HyperReality creates a physics-accurate, photorealistic and controller-faithful simulation environment in which product design, perception modelling, AI training and robot execution can be developed simultaneously.
This capability enables a shift from traditional sequential engineering toward true concurrent engineering, where product design, manufacturing processes and automation systems are designed and validated in parallel before any move to physical production infrastructure.
The technology achieves this by combining ABB’s industrial robot, digital twin and virtual controller with photorealistic simulation and synthetic data generation powered by NVIDIA Omniverse. An open platform for real-time 3D development that enables individuals and teams to build and operate industrial-grade, physically accurate digital twins and virtual worlds, Omniverse augments the capabilities of RobotStudio to create a powerful simulation framework that can reproduce real manufacturing conditions with sufficient fidelity to support industrial-scale physical AI deployment.
Three-in-one approach to digital twinning
At the core of RobotStudio HyperReality is a multi-layer digital twin architecture that integrates several domains into a single synchronized environment. There are three layers in total: the product digital twin, the photometric environment, and the robot digital twin. In combination, HyperReality can simultaneously model the product, the conditions around it, and the robot.
The first layer, the product digital twin, uses CAD models and manufacturing tolerances to capture the geometry, materials and physical properties (e.g. surface texture, weight) of the actual product being assembled. From this, the system can infer how to interact with the product.
The second layer analyzes the photometric environment, or in other words the local surroundings. It takes into account lighting levels, reflections, shadows, as well as the camera itself. AI-enabled robots must be able to parse all this information to build an accurate 3D picture of the local environment, which in turn allows realistic simulation within the wider digital twin architecture.
The third layer is the robot digital twin. This layer adds to the two other digital twins by covering the robot’s mechanical structure and motion capabilities, along with its dynamics, sensors and control logic.
This “three-in-one” system opens avenues for more collaborative development, whereby product designers, automation engineers and AI specialists can all work within the same architecture from the earliest stages of development.
Precise control
Conventional approaches to simulation tend to approximate robot behavior. RobotStudio HyperReality differs through ABB’s Virtual Controller, which runs the same firmware used in ABB’s physical robot controllers. Because the virtual and physical controllers execute identical code, motion paths, acceleration profiles, cycle timing and sensor interactions in simulation closely match those executed by the real robot. In practice, this means a simulation accuracy of approximately 99 percent can be achieved.
This accuracy can be refined further through ABB’s Absolute Accuracy technology, which calibrates the robot’s geometric model against its physical configuration. As a result, positioning errors can be reduced to around half a millimeter, compared with the eight to fifteen millimeters typical of conventional approaches. This level of accuracy opens new possibilities in high-precision applications such as electronics manufacturing.
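The gain from this kind of kinematic calibration can be sketched with a toy two-link planar arm, in which small zero-position errors in the joints are first ignored by the nominal model and then compensated after identification. This is an illustrative model only; the function names, arm geometry and error magnitudes are assumptions, not ABB’s Absolute Accuracy algorithm.

```python
import math

def fk_planar(joints, link_lengths, joint_offsets=(0.0, 0.0)):
    """Forward kinematics of a 2-link planar arm (metres, radians).

    joint_offsets models calibration corrections identified by
    comparing commanded poses against externally measured ones."""
    q1 = joints[0] + joint_offsets[0]
    q2 = joints[1] + joint_offsets[1]
    l1, l2 = link_lengths
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return (x, y)

# The "true" robot has small zero-position errors that the nominal
# (uncalibrated) model knows nothing about.
true_offsets = (0.004, -0.003)   # radians, assumed for illustration
links = (0.8, 0.6)               # metres
target_joints = (0.5, 1.0)

actual = fk_planar(target_joints, links, true_offsets)
nominal = fk_planar(target_joints, links)                    # before calibration
calibrated = fk_planar(target_joints, links, true_offsets)   # after identification

err_before = math.dist(actual, nominal)
err_after = math.dist(actual, calibrated)
print(f"error before calibration: {err_before * 1000:.2f} mm")
print(f"error after calibration:  {err_after * 1000:.2f} mm")
```

Even joint errors of a few milliradians translate into millimetres of tool-point error over an arm of this reach, which is why identifying and compensating them matters for high-precision tasks.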
NVIDIA Omniverse and photorealistic simulation
RobotStudio integrates NVIDIA’s real-time 3D simulation platform Omniverse. Built on Universal Scene Description (OpenUSD), a technology originally developed by Pixar, Omniverse extends RobotStudio’s capabilities by introducing photorealistic, high-fidelity 3D modelling of surrounding environments.
Omniverse utilizes NVIDIA’s RTX-based graphics rendering technology to simulate complex lighting behavior, including reflections, shadows and material properties. This allows digital scenes to closely replicate how industrial cameras and sensors perceive objects under real factory conditions.
Omniverse’s domain randomization technology automatically varies scene parameters to produce thousands of training scenarios reflecting the variations that occur day-to-day in real production environments. The resulting data can then be used to train physical AI models.
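As a rough sketch of how domain randomization produces varied training scenarios, the loop below samples scene parameters from plausible ranges. All parameter names and ranges here are illustrative assumptions, not Omniverse’s actual API.

```python
import random

def randomize_scene(rng):
    """Sample one synthetic training scenario.

    Each call draws lighting, part placement and sensor-noise
    parameters at random, so no two rendered scenes are identical."""
    return {
        "light_intensity_lux": rng.uniform(200, 1500),   # morning vs midday
        "light_temperature_k": rng.uniform(3000, 6500),  # warm vs cool light
        "part_offset_mm": (rng.gauss(0, 2.0), rng.gauss(0, 2.0)),
        "part_rotation_deg": rng.uniform(-5, 5),         # misaligned arrival
        "surface_roughness": rng.uniform(0.1, 0.9),      # matte .. glossy
        "camera_noise_sigma": rng.uniform(0.0, 0.02),    # sensor noise
    }

rng = random.Random(42)  # seeded so a dataset can be regenerated exactly
scenarios = [randomize_scene(rng) for _ in range(1000)]
```

A perception model trained across the whole sampled distribution, rather than on one idealized scene, is far less likely to fail when lighting or part placement drifts on the real factory floor.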
AI training inside the simulation loop
RobotStudio HyperReality can integrate AI training directly into the simulation workflow. Robots in the simulated environment can generate vast amounts of labelled data from virtual cameras and sensors, which can then be used to train machine learning models for perception and robot behavior.
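The reason simulated data arrives already labelled is that the simulator knows the ground truth exactly: the pose of every part is a simulation input, not something that must be annotated by hand. The sketch below illustrates the idea; `render_sample` and the toy camera projection are hypothetical stand-ins for a real rendering pipeline.

```python
import random
from dataclasses import dataclass

@dataclass
class LabelledSample:
    """One synthetic training example with free ground-truth labels."""
    image_id: int
    part_pose_mm: tuple   # ground-truth 2D position of the part
    bbox_px: tuple        # (x, y, w, h) bounding box in the virtual camera
    occluded: bool        # whether another object hides the part

def render_sample(i, rng):
    # Hypothetical stand-in for a render call; a real pipeline would
    # rasterize the scene and project its geometry into camera space.
    x, y = rng.uniform(-50, 50), rng.uniform(-50, 50)
    px, py = int(320 + 2 * x), int(240 + 2 * y)   # toy camera projection
    return LabelledSample(i, (x, y), (px - 16, py - 16, 32, 32),
                          occluded=rng.random() < 0.1)

rng = random.Random(0)
dataset = [render_sample(i, rng) for i in range(5000)]
```

Because every label is derived from the scene state rather than human annotation, dataset size is limited only by compute, which is what makes training at industrial scale practical.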
By simultaneously simulating perception inputs, environmental conditions and robot motion through HyperReality’s three-in-one approach, training can cover the entire perception-to-action pipeline. Vision models learn to interpret a wide range of scenarios, while robot control policies learn how to respond to new variables.
Once trained, these models can then be validated through virtual commissioning exercises that simulate complete production tasks under multiple conditions. The same robot programs and controller logic used in simulation can be transferred directly to physical robots. This significantly reduces commissioning risk and engineering time, while also streamlining the adoption and integration of AI workflows.
Enabling industrial-scale physical AI
In RobotStudio HyperReality, controller-accurate robot simulation is combined seamlessly with photorealistic digital environments and large-scale synthetic data generation, transforming simulation into a continuous engineering platform.
Product design, perception modelling, AI training and robot execution can all be developed and validated from within the same environment. Engineers from different disciplines can collaborate earlier in the development cycle, stress-testing variables and optimizing systems long before physical equipment is installed.
The result is a production development process that shifts from physical trial-and-error to digital validation. Manufacturers can design and optimize entire production lines virtually, reducing commissioning times by up to 80 percent, while also lowering development costs by as much as 40 percent. This in turn helps to accelerate time-to-market, even for highly complex products.
By closing the sim-to-real gap through high-fidelity simulation, controller-accurate execution and AI-driven training, RobotStudio HyperReality provides the foundation required to deploy physical AI reliably at industrial scale.
Enhancements and benefits delivered by RobotStudio® HyperReality:
| Enhancement | Benefit | What it means for users |
| --- | --- | --- |
| Simulation accuracy | ~99% correlation between simulated and real robot behavior | Programs developed in simulation transfer reliably to physical robots |
| Positioning precision | Errors reduced from 8–15 mm to ~0.5 mm using Absolute Accuracy | Enables high-precision manufacturing tasks such as electronics assembly |
| Commissioning efficiency | Up to 80% reduction in commissioning time | Production systems can be validated virtually before installation |
| Development cost | Up to 40% lower development costs | Less physical prototyping and fewer engineering revisions |
| Concurrent engineering | Product design, automation systems and AI models developed simultaneously | Faster development cycles and improved cross-disciplinary collaboration |
| Controller-level simulation | Virtual Controller runs the same firmware as the physical robot | Motion paths, acceleration and cycle timing match real robot behavior |
| Unified digital twin | Integrated product, environment and robot digital twins | Enables full-system validation including geometry, perception and robot interaction |
| Synthetic AI training | Photorealistic simulation and large-scale synthetic data generation | Improves robustness of machine vision and physical AI models |