Virtual prototyping of systems that contain a mix of sensor technologies optimizes the design pathway and reduces product time-to-market. Examples from high-voltage measurement, arc-fault detection and electronic overload protection illustrate the many advantages of this cosimulation-based approach.
Yannick Maret, Matija Varga, Stefano Maranò ABB Corporate Research Baden-Dättwil, Switzerland, email@example.com, firstname.lastname@example.org, email@example.com; Francisco Mendoza ABB Corporate Research Ladenburg, Germany, firstname.lastname@example.org; Joris Pascal Former ABB employee
Modern industrial systems often feature many heterogeneous components that operate in different modalities (software, optical, electrical, thermal, or mechanical) and on different timescales. The optimal design of such systems calls for methodologies that allow for cosimulation of the interaction of these components without the need to build physical prototypes. Such approaches are often termed “virtual prototyping” or “system simulation” →1.
The simulation approach described here is sometimes referred to as “network simulation.” In this approach, the interaction of design units is simulated according to a predefined set of rules – rather like Kirchhoff’s laws in electrical systems. This method stands in contrast to finite-element simulation where all minute elements of a problem are simulated.
One of the main goals of virtual prototyping is to reduce the number of physical prototypes needed to develop new technology and thus decrease time-to-market →2.
02 Virtual prototyping radically reduces cost, time-to-market and the number of physical prototype iterations needed.
Motivation for virtual prototyping
The growth of heterogeneous components in industrial sensor systems is well illustrated by the changes going on in high-voltage measurement systems. Here, for example, conventional instrument transformers (ITs) are steadily being replaced by nonconventional instrument transformers (NCITs).
Conventional ITs are current or voltage transformers that enable accurate measurements in the electrical grid. However, as the power carried by the grid increases, the magnetization of an IT’s iron core will begin to saturate and thus limit dynamic range. A conventional IT also suffers from: overheating effects due to eddy currents; sluggish transient behavior that slows its response to critical events such as short circuit or overvoltage events; and risk of fire, explosion or leakage should the IT be paper- or oil-filled. Further limitations of conventional IT technologies are size, weight and cost. NCITs overcome these limitations by using alternative sensing principles, eg, electro-optical sensors or air-core coils.
The development of a new NCIT requires knowledge of physics (eg, optical components, mechanical parts), electronics (eg, signal conditioning, communications) and mathematics (signal processing).
This multidisciplinary interdependency stems from the NCIT architecture, where the constituent elements no longer have a simple passive connection but have to interact in a closed loop.
Virtual prototyping enables a new design methodology that encompasses the entirety of a complex system. The resulting virtual prototype allows the development team to efficiently verify, test and improve the design as a whole rather than as separate units.
Benefits of virtual prototyping for sensor system development
Virtual prototyping helps to partition a design in an optimal and objective way. In other words, it aids the design team in deciding which tasks should be implemented in hardware and which in software and, similarly, whether signals should be processed in the analog or the digital domain.
Besides partitioning, virtual prototyping also assists in correctly assigning system specifications to individual modules by allowing error propagation between modules and overall sensor system accuracy to be studied. Design errors will also be detected more easily and sooner than with traditional prototyping. Furthermore, the impact on the full system of a design change – for example, a cost-optimization measure – in a single module can be quickly evaluated.
Virtual prototyping accelerates future development projects too, via model reuse. Ideally, developed components should be organized in libraries that are retrievable by future designers or automatic design tools.
Virtual prototyping also fosters collaboration between different product families. For example, a current sensor model can be reused in a circuit-breaker virtual prototype, which in turn can be reused for customer application modeling.
Regarding customer applications, virtual prototyping enables an understanding of customer needs that goes far beyond the simple fulfillment of the requirements given in standards – for example, when customers regularly exceed sensor system specifications. By introducing these new use conditions into the virtual prototype, ABB can understand and quantify the impact of such misuse and inform the customer accordingly. Such specific requirements can then be taken into account in the next sensor development.
Virtual prototyping: methodology
Several virtual prototyping methodologies exist. For sensor development, one tool used by ABB is based on the VHDL-AMS hardware description language (IEEE standard 1076.1). The AMS extension of VHDL stands for “analog and mixed signal.” As the name suggests, AMS supports the modeling of analog and mixed-signal entities and runs on simulators that handle signals carrying both continuous- and discrete-time information. The simulator is event-driven, which means that only a change of signal value triggers the computation of a new operating point. This technique is highly efficient in terms of simulation time for combined analog and digital signal simulations. In addition, ABB employs SystemC, a C++ class library (IEEE standard 1666) for the modeling and simulation of digital hardware and software components in embedded systems. To build virtual prototypes that take digital hardware and software components into account, SystemC and VHDL-AMS models are cosimulated.
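The event-driven principle – computing a new operating point only when a signal value changes, rather than at every fixed time step – can be illustrated with a minimal Python sketch. This is not the VHDL-AMS simulator itself; the signal names, event times and kernel structure are invented for illustration:

```python
import heapq

def simulate(events, horizon):
    """Minimal event-driven kernel: time advances only when a signal
    changes value, instead of at a fixed step as in time-driven simulation."""
    queue = list(events)           # (time, signal, value) tuples
    heapq.heapify(queue)
    trace = []                     # recorded operating points
    state = {}
    while queue:
        t, sig, val = heapq.heappop(queue)
        if t > horizon:
            break
        if state.get(sig) != val:  # only a value change triggers work
            state[sig] = val
            trace.append((t, dict(state)))
    return trace

# A digital clock toggling every 5 us alongside one analog comparator event:
evts = [(i * 5e-6, "clk", i % 2) for i in range(8)] + [(12e-6, "comp_out", 1)]
trace = simulate(evts, horizon=40e-6)
```

Between events, nothing is computed – which is why this scheme is efficient when fast digital signals and slowly varying analog signals coexist in one simulation.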
The use of VHDL-AMS facilitates top-down or bottom-up design techniques, or even a combination of the two. This flexibility allows a project team to start part of a design with high-level models and to steadily refine the models’ level of detail as the project develops. This is a top-down approach.
In parallel, if part of the project team already has some detailed models of other components, these either can be used as they are or can be simplified to a higher modeling level to shorten the simulation time. This is a bottom-up approach.
VHDL-AMS has inbuilt support for multiple component descriptions and hence supports multiple levels of abstraction for a given model. →3 shows different levels of detail for a self-calibrating frontend for an NCIT based on a Rogowski coil. The self-calibration is achieved by injecting a high-frequency square signal with a very stable amplitude into the electronics. The software then computes calibration coefficients and uses them to correct the measured low-frequency signal. The self-calibration principle was first proven via simulation using a high level of abstraction. The first prototype functioned correctly but exhibited high-frequency noise in the measured signal. This phenomenon could be reproduced in simulation by increasing the level of detail of the transistors in the analog switches used for the square signal generation. ABB subsequently used the virtual prototype to identify and validate appropriate countermeasures (namely, analog filtering).
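The self-calibration idea – injecting a reference tone of precisely known amplitude and deriving a correction coefficient from how the electronics reproduce it – can be sketched in a few lines of Python. All numeric values (calibration frequency, sampling rate, the gain error) are illustrative, and the square wave is represented by a single sinusoidal tone at its fundamental for simplicity:

```python
import math

F_CAL = 5000.0    # injected calibration-tone frequency (Hz) -- illustrative
A_CAL = 1.0       # known, very stable injected amplitude
FS    = 100000.0  # sampling rate (Hz)
N     = 2000      # exactly one 50 Hz period, 100 calibration periods

def measure_tone_amplitude(samples, freq, fs):
    """Estimate the amplitude of one frequency via single-bin DFT correlation."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(samples))
    return 2.0 * math.hypot(re, im) / n

# Front end with an unknown gain error affecting both the 50 Hz measurand
# and the injected calibration tone in the same way.
true_gain = 0.93
signal = [true_gain * (math.sin(2 * math.pi * 50 * i / FS)
          + A_CAL * math.sin(2 * math.pi * F_CAL * i / FS)) for i in range(N)]

# The calibration coefficient restores the known reference amplitude ...
k = A_CAL / measure_tone_amplitude(signal, F_CAL, FS)
# ... and the same coefficient corrects the low-frequency measurement.
corrected = [k * s for s in signal]
```

Because the gain error acts identically on the calibration tone and the measurand, correcting the former also corrects the latter – the essence of the self-calibration principle described above.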
SystemC is used to create simulation models of digital components such as microcontrollers, analog-digital converters, storage units, transceivers, etc. →4. The high level of abstraction at which models are described makes it possible to create and reuse components with considerably less effort than with hardware description languages such as VHDL. Such components are put together to create virtual prototypes that mimic the behavior of real hardware platforms and that can execute native software applications. This makes it possible for software developers to start coding and debugging long before hardware prototypes are available.
A virtual prototype of the system can be devised as soon as high-level models of all key components are available. This should ideally occur early on in a project as it gives a clear overview of the product under development and supports idea generation and key architectural decisions.
Arc-fault detection device simulation
Electric arc faults in home installations pose a fire risk. An arc-fault detection device (AFDD) – consisting of an electromechanical breaker, sensors and a microcontroller – interrupts the electrical circuit upon detection of an arcing event and thereby reduces fire risk. AFDDs distinguish hazardous arcing from signals produced by household appliances (eg, drills, pumps, powerline communications) – though avoiding false trips caused by certain low-quality home appliances can be a challenge.
To simulate the different current and voltage signals that an AFDD can experience during normal or abnormal operation, ABB devised a VHDL-AMS model of a domestic electrical installation consisting of cables, different load types and an arc fault. Various electrical topologies, load types and fault locations were set via parameters →5.
05 A real home electrical installation was abstracted with a model comprising cables, loads and an arc fault. By varying the parameters of the model, a rich dataset of current and voltage signals can be generated. Reproducing the same setups with real measurements would be very costly and time-consuming, if possible at all.
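The parameterized-model idea can be sketched in Python: sweeping model parameters (here just load type and fault presence; the waveform models themselves are crude stand-ins invented for illustration, not ABB's VHDL-AMS models) yields a labeled dataset of current signals:

```python
import math, random

random.seed(0)
FS, DUR = 10000, 0.1   # sampling rate (Hz) and duration (s) -- illustrative

def load_current(kind, t):
    """Very coarse stand-ins for different household load types."""
    if kind == "resistive":
        return 10 * math.sin(2 * math.pi * 50 * t)
    if kind == "drill":    # brush motor: fundamental plus broadband noise
        return 6 * math.sin(2 * math.pi * 50 * t) + random.gauss(0, 0.8)
    raise ValueError(kind)

def arc_distortion(i_load):
    """Crude arc model: conduction gaps ("shoulders") near the zero crossing."""
    return 0.0 if abs(i_load) < 1.5 else i_load + random.gauss(0, 0.5)

def generate(kind, arcing):
    n = int(FS * DUR)
    wave = [load_current(kind, i / FS) for i in range(n)]
    return [arc_distortion(s) for s in wave] if arcing else wave

# Parameter sweep over load type and fault presence yields a labeled dataset.
dataset = {(kind, arcing): generate(kind, arcing)
           for kind in ("resistive", "drill") for arcing in (False, True)}
```

Each additional parameter (cable length, fault location, further load types) multiplies the dataset combinatorially – which is precisely what makes the simulated approach so much cheaper than laboratory measurement campaigns.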
The high-level model of the sensing section used a measured transfer function approximated as a rational Laplace expression. The microcontroller’s arc-fault detection algorithm was abstracted as a Matlab script for use in the simulation.
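To show how a rational Laplace expression becomes a simulatable discrete-time model, here is a minimal sketch in Python for the simplest case, a first-order low-pass H(s) = 1/(1 + s/ωc), discretized with the bilinear transform. The corner frequency and sampling rate are invented for illustration; the AFDD sensing model itself is a higher-order fit to a measured response:

```python
import math

def bilinear_lowpass(fc, fs):
    """Discretize H(s) = 1 / (1 + s/wc) -- a first-order rational Laplace
    model -- via the bilinear transform s -> 2*fs*(1 - z^-1)/(1 + z^-1)."""
    wc = 2 * math.pi * fc
    k = 2 * fs / wc
    b0 = 1 / (1 + k)          # numerator coefficients (b1 equals b0)
    a1 = (1 - k) / (1 + k)    # denominator coefficient
    return b0, a1

def filt(x, b0, a1):
    """Apply the resulting difference equation y[n] = b0*(x[n]+x[n-1]) - a1*y[n-1]."""
    y, y1, x1 = [], 0.0, 0.0
    for xn in x:
        yn = b0 * (xn + x1) - a1 * y1
        y.append(yn)
        y1, x1 = yn, xn
    return y

FS = 48000.0
b0, a1 = bilinear_lowpass(fc=1000.0, fs=FS)
# The DC gain of the discrete filter matches |H(0)| = 1 of the analog model,
# so a unit step settles to 1.
step = filt([1.0] * 4000, b0, a1)
```

Higher-order rational expressions fitted to a measured frequency response are handled the same way, as cascades of such sections.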
Combining the virtual prototype of the AFDD with that of the home installation and its appliances facilitates different test scenarios and tracking of AFDD status over time while exploring details of why a trip did or did not occur. The level of detail and customization of this virtual testbed will allow new sensing principles, electronics and algorithms to be explored, and new standard requirements to be assessed, without the need for a physical prototype or laboratory experiments. Once a principle has been tested successfully with the virtual prototype, final validations are performed on a physical prototype.
Electronic overload (EOL) relay simulation
An EOL relay uses a current transformer to measure the current in a motor. Current overloads will, after a certain time, trip a relay. EOL relays thus offer reliable and precise motor protection in the event of overload or phase failure.
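The inverse-time behavior of overload protection – heavier overloads trip faster – is commonly captured by a first-order thermal-image model. The sketch below is a textbook form of that model with invented parameter values, not the algorithm of any specific ABB relay:

```python
import math

TAU = 120.0        # thermal time constant (s) -- illustrative value
I_PICKUP = 1.05    # trip threshold in multiples of rated current

def trip_time(i_pu):
    """First-order thermal-image model: trip when the simulated winding
    temperature, driven by I^2 heating, reaches the pickup level."""
    if i_pu <= I_PICKUP:
        return math.inf                        # no overload, no trip
    return TAU * math.log(i_pu ** 2 / (i_pu ** 2 - I_PICKUP ** 2))

# Inverse-time behavior: a 6x overload trips much faster than a 2x overload.
t6, t2 = trip_time(6.0), trip_time(2.0)
```

Validating that a device implementation reproduces such a curve across operating points is exactly the kind of test the EOL virtual prototype automates.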
An initial EOL virtual prototype containing digital components similar to →4 was used to perform software-in-the-loop tests. This helped to validate the functionality of a newly developed sensing algorithm. The simulation time for each test case was only a couple of minutes, which facilitated iterative adaption and improvement of the sensing algorithm.
In a later development phase, a more complete virtual prototype was used to test the full measurement chain of the device. This more sophisticated method required simulation of current transformers, analog electronics, digital electronics and embedded software subsystems →6. Monte Carlo simulations were then used to understand the effects that typical component deviations (eg, in resistors, capacitors, regulator output voltages, op-amp offset voltages, etc.) have on the EOL relay trip-time calculation accuracy. Simulation results provided valuable insights and helped identify and solve design flaws earlier than would normally be the case. Tests at various operating points and temperatures can now be performed in a couple of hours, thereby saving days of testing in the laboratory.
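The Monte Carlo idea can be sketched as follows: draw each component value from its tolerance band, run the trip-time calculation for every draw and examine the resulting spread. The measurement-chain formula, component values and tolerances below are hypothetical placeholders, not the EOL relay's actual design:

```python
import random, statistics

random.seed(1)

def trip_time(r_shunt, gain, vref):
    """Hypothetical trip-time calculation whose inputs carry component error:
    the current reading scales with the shunt resistance, the amplifier gain
    and the ADC reference voltage; an illustrative I^-2 curve gives the time."""
    i_measured = 6.0 * (r_shunt / 1e-3) * (gain / 20.0) * (2.5 / vref)
    return 120.0 * (1.05 / i_measured) ** 2 * 10.0

def draw(nominal, tol):
    """One Monte Carlo draw of a component value within its tolerance band."""
    return random.uniform(nominal * (1 - tol), nominal * (1 + tol))

samples = [trip_time(draw(1e-3, 0.01),    # 1 % shunt resistor
                     draw(20.0, 0.005),   # 0.5 % amplifier gain
                     draw(2.5, 0.002))    # 0.2 % voltage reference
           for _ in range(5000)]

# Relative spread of the computed trip time across component tolerances.
spread = statistics.pstdev(samples) / statistics.mean(samples)
```

The resulting distribution shows directly which tolerance dominates the trip-time error budget, guiding where tighter (more expensive) components actually pay off.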
06 Arc-fault detection devices and EOL relays are two ABB products whose design encompasses many domains.
As modeling tools improve, and libraries become more extensive and more detailed, virtual prototyping will reproduce real-life device behavior even more closely and will thus find use in many other product areas. This evolution will be driven by the ever-shorter product life cycles that are evident in almost every area of modern industrial technology and the faster development and productization times these require.
The setup of the methodology described here and some results were achieved with the collaboration of Professor Jürgen Becker, FZI (Forschungszentrum Informatik), Karlsruhe Institute of Technology; Dr. Alain Vachoux and Juan Sebastián Rodriguez Estupiñán, EPFL (Ecole Polytechnique Fédérale de Lausanne); and Dr. Jean-Baptiste Kammerer and Simon Paulus, UNISTRA (University of Strasbourg).