James Stoupis, Rostan Rodrigues, Mohammad Razeghi-Jahromi, Amanuel Melese, Joemoan Xavier ABB Corporate Research, Electrification, Raleigh, N.C., United States,
james.stoupis@us.abb.com, rostan.rodrigues@us.abb.com, mohammad.razeghi-jahromi@us.abb.com,
amanuel.melese@us.abb.com, joemoan.i.xavier@us.abb.com
The recent proliferation of Internet-of-Things (IoT)-based technologies has led to massive improvements in digital computing hardware and software; this translates into lower cost for comparatively higher computation and storage capability, compact hardware, and compatibility with a larger selection of operating systems (OSs). Additionally, advances in communication protocols have increased the penetration of single-board computers in many consumer and industrial applications. Such profound innovations could positively impact the energy sector. Applying a state-of-the-art edge computing infrastructure to the electrical power distribution grid would provide distributed intelligence and rapid response to time-critical grid issues, eg, fault detection, isolation, and restoration [1]. As more distributed energy resources (DERs) are integrated, the power distribution system becomes increasingly complex: potentially destabilizing events, such as temporary and permanent faults [2], loss of measurement data, and cyber-attacks, are well-known concerns [1]. To address these issues, ABB conducted a small-scale experimental validation of edge computing in power distribution automation. The resulting framework, the Economical, Data Fusion-based Grid Edge Processor (EDGEPRO), presented below, can be used for classifying faults, detecting anomalies in the grid, recovering measurement data, and other advanced analytics. EDGEPRO has been designed for today’s distribution applications and to accommodate future hierarchical distribution architectures.
Software framework for EDGEPRO
Because multiple data sources from different devices are at the core of smart grid platform integration, ABB needed to provide a scalable and secure platform for edge computing applications that accommodates various industrialization and commercialization options: the EDGEPRO embedded framework does this by supporting both Windows and Linux OSs. Such flexibility is crucial because proprietary software for industrial communication, Web servers, and protection and control is generally hosted on Windows, whereas novel secure VPN technologies, mesh wireless libraries, and machine learning (ML) applications are easiest to evaluate and implement on Linux.
How does the EDGEPRO device →01 or device network →02 function? Because the network implements a multi-layered hierarchical architecture with different classes of edge computing devices (high-, medium-, and low-cost), it features a main EDGEPRO device with management capabilities for pushing applications to individual devices. A container registry hosted on the highest-level EDGEPRO device, or in the cloud, provides image repositories for the applications hosted on each computing device. Repositories may contain various tagged versions of container images, and the lower-level EDGEPRO devices can pull down a required image by requesting device- or configuration-specific tags.
The management bus implements the control functions and pulls container images over secure protocols; it features a broker that communicates over publish-subscribe message queuing protocols (MQTT, AMQP, etc.). Each application can receive message commands to start and stop application processes, as well as descriptive configuration files that coordinate grouping with other devices and zone configurations. This interface manages the pushing of updates to deployed applications, the syncing of databases on each individual device for distributed applications, and any updates to security or Public Key Infrastructure (PKI) technologies. The lower-level EDGEPRO devices regularly check for updates to individual device configuration and zone configuration files, exchanged in JavaScript Object Notation (JSON) or an equivalent format. Thus, EDGEPRO can perform the necessary control functions.
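To make the management-bus interaction concrete, the sketch below shows a minimal Python client for a lower-level EDGEPRO device, assuming MQTT as the broker protocol; the broker address, topic names, and message payloads are illustrative assumptions, not details of the actual implementation.

```python
# Minimal sketch of an EDGEPRO management-bus client, assuming MQTT and the
# paho-mqtt 1.x callback API; topics and broker address are hypothetical.
import json
import paho.mqtt.client as mqtt

DEVICE_ID = "ecd-low-07"                     # hypothetical device identifier
CMD_TOPIC = f"edgepro/{DEVICE_ID}/cmd"       # start/stop commands
CFG_TOPIC = f"edgepro/{DEVICE_ID}/config"    # device/zone configuration (JSON)


def on_connect(client, userdata, flags, rc):
    # Subscribe to the command and configuration topics on (re)connect.
    client.subscribe([(CMD_TOPIC, 1), (CFG_TOPIC, 1)])


def on_message(client, userdata, msg):
    payload = json.loads(msg.payload.decode())
    if msg.topic == CMD_TOPIC:
        # eg, {"action": "start", "app": "fdir"} or {"action": "stop", ...}
        print(f"command received: {payload}")
    elif msg.topic == CFG_TOPIC:
        # Zone/grouping configuration pushed by the supervisor device.
        with open("zone_config.json", "w") as f:
            json.dump(payload, f, indent=2)


client = mqtt.Client(client_id=DEVICE_ID)
client.on_connect = on_connect
client.on_message = on_message
# Broker address is an assumption; TLS/PKI setup is omitted for brevity.
client.connect("edgepro-supervisor.local", 1883)
client.loop_forever()
```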
Building containerized grid applications
For field applications, containerization and workflow orchestration – well-known in cloud computing – were adapted to the edge. ABB devised a simplified process for converting standalone grid applications into containers, which isolate applications into smaller units, and for deploying, managing, deleting, and updating these containers across many EDGEPRO devices →03. Docker Engine was selected to containerize the applications due to its compatibility with both Windows and Linux and its implementation simplicity, while K3s, the simplest and least resource-consuming platform tool considered, was selected for container orchestration →03.
Experimental prototype design
To validate the EDGEPRO embedded framework, ABB developed a fault detection, isolation, and restoration (FDIR) application scheme →04, in which the edge computing devices (ECDs) communicate with intelligent electronic devices (IEDs), sensors, and other ECDs in the hierarchy.
Simulations were performed using Node-RED, a JavaScript-based Web server tool that enables the creation of logical and communication nodes representing the different components in the FDIR scheme. Critically, this allows interactive simulation demonstrations to run inside ECD3, the supervisor EDGEPRO device, where each logical state represents the state of the electrical system.
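The FDIR logic that such flows represent can be illustrated with a short Python sketch for a radial feeder with a normally open tie switch; the feeder topology, device names, and thresholds below are assumptions chosen for illustration, not the actual Node-RED flow.

```python
# Simplified FDIR logic sketch for a radial feeder with a normally open tie
# switch; topology and thresholds are illustrative assumptions only.
FAULT_CURRENT_THRESHOLD = 0.2  # A, matching the demonstration setting

# Switching devices listed from the source toward the normally open tie switch.
devices = ["R1", "R2", "R3"]
switch_closed = {"R1": True, "R2": True, "R3": True, "TIE": False}


def run_fdir(device_currents):
    """Detect a fault, isolate the faulted section, and restore healthy load."""
    # Detection: the fault lies just downstream of the last device that sees
    # fault current (every device upstream of the fault carries it).
    last_overcurrent = None
    for dev in devices:
        if device_currents[dev] > FAULT_CURRENT_THRESHOLD:
            last_overcurrent = dev
    if last_overcurrent is None:
        return "no fault detected"

    # Isolation: open the last overcurrent device and the next device
    # downstream, bounding the faulted section on both sides.
    idx = devices.index(last_overcurrent)
    switch_closed[last_overcurrent] = False
    if idx + 1 < len(devices):
        switch_closed[devices[idx + 1]] = False

    # Restoration: close the tie switch to back-feed the healthy load
    # downstream of the isolated section.
    switch_closed["TIE"] = True
    return f"fault isolated below {last_overcurrent}; downstream load restored via tie"


print(run_fdir({"R1": 0.9, "R2": 0.8, "R3": 0.05}))
```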
The final FDIR demonstration used container orchestration with three HPE EL10 devices that formed a K3s cluster. The K3s master and agent tools were installed on the respective ECDs to enable the orchestration of containers across all the ECD devices. A YAML file defined the container configuration for each ECD, and the FDIR container image was developed on an x86 platform and uploaded to the DockerHub public repository. After the YAML file was deployed, the Node-RED application automatically started in all ECDs, with each device running a specific part of the image based on its identification.
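One plausible way for a single container image to run a device-specific part of the application is to branch on an identifier injected at deployment time; the environment variable and role names in the sketch below are assumptions, not the actual deployment configuration.

```python
# Sketch of role selection inside a single FDIR container image, assuming a
# node identifier is injected at deployment time (eg, via the pod spec).
import os

# Hypothetical mapping from K3s node name to FDIR role.
ROLE_BY_NODE = {
    "ecd1": "feeder-agent",
    "ecd2": "feeder-agent",
    "ecd3": "supervisor",
}

node_name = os.environ.get("ECD_NODE_NAME", "ecd1")
role = ROLE_BY_NODE.get(node_name, "feeder-agent")

if role == "supervisor":
    print(f"{node_name}: starting supervisor Node-RED flow (system-wide FDIR view)")
    # start_supervisor_flow()  # placeholder for the supervisor part of the image
else:
    print(f"{node_name}: starting feeder-agent Node-RED flow (local detection/isolation)")
    # start_agent_flow()       # placeholder for the agent part of the image
```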
The hardware setup featured actual recloser controllers (ABB RER620) and emulators, simplified software switches in Node-RED for other relay devices, commercially available wireless sensors, and TI DSP-based sensor data acquisition and wireless modules. Additionally, ABB used MQTT to move data and commands between ECD devices, while the Modbus TCP protocol was used to communicate with the RER620 IED. Notably, other protocols, eg, DNP3 and IEC 61850, are also supported.
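The ECD-to-IED plumbing can be sketched as follows in Python, assuming the pymodbus 3.x client API; the IED address, register and coil numbers, and scaling factor are placeholders rather than the actual RER620 Modbus map.

```python
# Sketch of Modbus TCP communication between an ECD and the RER620 IED,
# assuming pymodbus 3.x; addresses and scaling are placeholders only.
from pymodbus.client import ModbusTcpClient

IED_IP = "192.168.0.10"        # hypothetical IED address
CURRENT_REGISTER = 100         # hypothetical holding register for phase current
TRIP_COIL = 1                  # hypothetical coil mapped to the trip command

client = ModbusTcpClient(IED_IP, port=502)
client.connect()

# Read a measurement register and scale it (scaling factor is an assumption).
rr = client.read_holding_registers(CURRENT_REGISTER, count=1, slave=1)
current_a = rr.registers[0] / 1000.0

# Trip the recloser if the current exceeds the demonstration threshold of 0.2 A.
if current_a > 0.2:
    client.write_coil(TRIP_COIL, True, slave=1)
    print(f"trip issued: measured current {current_a:.3f} A")

client.close()
```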
EDGEPRO platform validation results
To validate the EDGEPRO platform, ABB created multiple demonstrations →05. First, wireless sensors were connected to an actual electrical load, and the supervisor EDGEPRO device (ECD2) was set to trip the relay when the current exceeded 0.2 A. Having measured the load voltage and current, the sensors relayed the data to the respective EDGEPRO device, which then tripped the relay, opening the recloser, when the current exceeded the threshold. Once the fault cleared, the system returned to normal operation →05.
This simple demonstration validated various aspects of the EDGEPRO platform, eg, connecting to and collecting data from wireless sensors, and connectivity with ABB protection relay products →05.
A further demonstration implemented a simplified FDIR scheme for power distribution →06. In this case, containerization of the FDIR grid application was validated through the ECDs’ GUI panels →06. The containerized application was uploaded to a private repository on the Internet and deployed using the K3s container orchestration tool to ensure that performance metrics were always met. Importantly, fault status indicators, system status, parameters, and waveforms were displayed on the GUI panels of all edge devices →06. The system successfully simulated faults at different locations via GUI buttons and restored normal operation once the fault cleared (the pre-fault load on the faulted circuits was used to check the load before restoration).
Led by objectives
Due to the heterogeneity and multiplicity of data sources, ABB deployed data fusion techniques [3] on the edge processor platform. Two main objectives were addressed: “fixing problematic data”, to handle data-source quality issues, eg, inconsistency, imperfection, etc., and “extracting higher-level information”, to obtain knowledge from multiple data sources. Hence, ABB developed two main applications in the edge processor framework: event classification and data recovery.
Event classification
To help differentiate between permanent events (ie, cable/conductor faults, animal contacts, and equipment failures) and temporary fault events (vegetation management issues, lightning strikes, and switching transients), it was necessary to perform distribution grid event classification. Here, ABB developed and introduced both ML-based and domain expertise-based methods. For the ML-based fault classification technique, data was sourced from the National Infrastructure for Artificial Intelligence (AI) on the grid (NI4AI), led by PingThings [4,5] with the University of California, Berkeley. This infrastructure, including data, an analytics platform, and a user community, was provided to catalyze the use of AI on the grid.
For event classification, 155 datasets were assigned to one of five classes: animal (15 datasets), lightning (24 datasets), vehicle (18 datasets), tree (43 datasets), and equipment (55 datasets), with differing numbers of data points (from 50 to 30,000) and sampling intervals (from 50 to 1,000 μs). Possible attributes included time; the voltages Va, Vb, and Vc; and the currents Ia, Ib, Ic, and In. The three phase currents Ia, Ib, and Ic of the individual cases were concatenated to extract features using the Python package “tsfresh”, which automatically calculates a large number (773 in this case) of time series characteristics.
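The feature extraction step can be sketched as follows; the DataFrame layout and column names are assumptions about how the event records were organized, and the values shown are placeholders.

```python
# Sketch of feature extraction with tsfresh on concatenated three-phase
# currents; the DataFrame layout and column names are assumptions.
import pandas as pd
from tsfresh import extract_features

# Each event contributes rows with an event id, a sample index, and the
# concatenated phase currents Ia, Ib, Ic stacked into one value column.
records = pd.DataFrame({
    "event_id": [0, 0, 0, 1, 1, 1],
    "sample":   [0, 1, 2, 0, 1, 2],
    "current":  [0.01, 0.85, 0.60, 0.02, 0.03, 0.02],
})

# tsfresh computes a large set of time-series characteristics per event id
# (773 features were retained in the study described above).
features = extract_features(
    records,
    column_id="event_id",
    column_sort="sample",
    column_value="current",
)
print(features.shape)
```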
To improve classification performance on the imbalanced data, a known challenge for ML classification techniques, ABB employed the Synthetic Minority Oversampling Technique (SMOTE). This resulted in 55 instances for each class. Of the resulting data set, 75 percent was used for training while 25 percent was kept for test/validation purposes. Because the best-suited algorithm for solving the ML problem was unknown at the outset, ten promising classification algorithms were investigated using spot-checking (a sketch of this workflow follows below):
• Two linear algorithms: Logistic Regression (LR) and Linear Discriminant Analysis (LDA)
• Four nonlinear algorithms: k-Nearest Neighbors (KNN), Naïve Bayes (NB), Classification Trees (CART), and Support Vector Machines (SVM)
• Four ensemble algorithms: Random Forest (RF), Extra Trees (ET), AdaBoost (AB), and Stochastic Gradient Boosting (GBM)
Having estimated the skill of these ML models using 10-fold cross-validation with classification accuracy as the metric, ABB selected the ensemble algorithm ET for classification, as it outperformed the other models after data standardization. Subsequently, a grid search algorithm for hyperparameter optimization, which serves to tune the algorithm, was applied to find the optimal number of “trees” (best results were obtained with n_estimators = 300). The results of the event classification using the tuned ET algorithm and standardized data →07 demonstrate the success of the application.
07 The results of event classification are displayed.
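The spot-checking and tuning workflow described above can be sketched with scikit-learn and imbalanced-learn as follows; the feature matrix and labels are random placeholders and the parameter grid is illustrative, so the numbers produced will not match the reported results.

```python
# Sketch of the classification workflow: SMOTE to balance the classes, a 75/25
# split, 10-fold cross-validation spot-checking of candidate models, and a grid
# search over the number of trees for the Extra Trees (ET) classifier.
# X and y are random placeholders standing in for the tsfresh features/labels.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier, RandomForestClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (GridSearchCV, StratifiedKFold,
                                     cross_val_score, train_test_split)
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
X = rng.standard_normal((155, 773))   # placeholder feature matrix
y = rng.integers(0, 5, 155)           # placeholder labels for the five classes

# Balance the classes (55 instances per class in the study) and split 75/25.
X_bal, y_bal = SMOTE(random_state=7).fit_resample(X, y)
X_train, X_test, y_train, y_test = train_test_split(
    X_bal, y_bal, test_size=0.25, stratify=y_bal, random_state=7)

models = {
    "LR": LogisticRegression(max_iter=1000), "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(), "NB": GaussianNB(),
    "CART": DecisionTreeClassifier(), "SVM": SVC(),
    "RF": RandomForestClassifier(), "ET": ExtraTreesClassifier(),
    "AB": AdaBoostClassifier(), "GBM": GradientBoostingClassifier(),
}

# Spot-check each model on standardized data with 10-fold cross-validation.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipe, X_train, y_train, cv=cv, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")

# Tune the number of trees for the best performer (ET; 300 in the study).
et_pipe = Pipeline([("scale", StandardScaler()),
                    ("et", ExtraTreesClassifier(random_state=7))])
grid = GridSearchCV(et_pipe, {"et__n_estimators": [100, 200, 300, 400, 500]},
                    scoring="accuracy", cv=cv)
grid.fit(X_train, y_train)
print(grid.best_params_, f"hold-out accuracy: {grid.score(X_test, y_test):.3f}")
```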
Data recovery for state estimation
As the grid transitions to an intelligent electrical power grid, integrating ever more DERs, real-time monitoring and control of the system is critical. Synchrophasor measurements from fast, time-stamped devices, or phasor measurement units (PMUs), can address this challenge by determining the state of the system [6].
The first step was to reconstruct the missing data. Data were sampled at synchronized instants and correlated with measurements of nearby PMUs based on the power system topology. PMU data exhibit a low-dimensional structure despite the high dimensionality of the raw data, so the resulting matrix of nearby PMU measurements is approximately low rank. Because reconstructing missing PMU data can therefore be formulated as a low-rank matrix completion problem, matrix completion methods, eg, nuclear norm, Hankel, and total variation norm minimization, were developed based on convex optimization techniques to recover both randomly and temporally missing data, with application to synchrophasor (PMU) data recovery for state estimation [7]. In this way, ABB showed that EDGEPRO can be successfully used in power distribution automation for data recovery and state estimation.
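A minimal sketch of the nuclear norm minimization variant, using cvxpy on a synthetic low-rank matrix with randomly missing entries, illustrates the formulation; the matrix dimensions, missing-data ratio, and solver defaults are assumptions, not those of the actual study.

```python
# Sketch of missing-PMU-data recovery posed as nuclear norm matrix completion
# with cvxpy on a synthetic low-rank matrix; dimensions and missing-data ratio
# are illustrative assumptions only.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-rank "PMU" matrix: rows = time samples, columns = nearby PMU channels.
T, C, r = 60, 8, 2
M = rng.standard_normal((T, r)) @ rng.standard_normal((r, C))

# Randomly drop roughly 30 percent of the entries to emulate missing measurements.
mask = (rng.random((T, C)) > 0.3).astype(float)

# Nuclear norm minimization: find the matrix of minimum nuclear norm (a convex
# surrogate for rank) that agrees with the observed entries.
X = cp.Variable((T, C))
problem = cp.Problem(cp.Minimize(cp.norm(X, "nuc")),
                     [cp.multiply(mask, X) == cp.multiply(mask, M)])
problem.solve()

relative_error = np.linalg.norm(X.value - M) / np.linalg.norm(M)
print(f"relative recovery error: {relative_error:.3e}")
```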
The future is hierarchical
The edge computing concept presented here focuses on distributed intelligence, and yet the concept of hierarchical grid intelligence is likely the future of grid protection, automation, and control architecture. ABB’s EDGEPRO framework is designed for today’s use and to accommodate this hierarchical distribution future.
Grid edge computing devices will provide distributed intelligence and fast response to time-critical grid issues (eg, fault detection, isolation, and restoration), communicating pertinent data to devices at the substation level (eg, substation computers, high-end ECDs) and providing another layer of protection, automation, and control (PAC). Moreover, such devices can potentially provide a sanity check that the ECD-based control decisions were indeed correct. Further, by interfacing and coordinating with the cloud and end-customer application-based IoT systems, a comprehensive picture of the effects of grid events can be obtained. This level of communication enables the link from the utility, and other stakeholders, to consumer, industrial, and commercial sites.
Advanced analytics is another application that can be applied at all levels of ABB’s framework to provide more hierarchical intelligence →08. At the grid edge, basic to slightly advanced applications, eg, fault detection, isolation, and restoration, can be deployed; the type of ECD, whether high- or low-cost, will determine the level of advanced application that can be deployed →08.
At the substation and cloud levels, where computation power and capabilities are greater, more advanced applications are possible. For example, ML-based applications are deployable at all levels; the complexity of the technique and the data models will determine the deployment hardware platform. A simpler ML method could be deployed on low-cost ECDs, whereas a more complex method, eg, a deep/reinforcement learning-based method, would require a substation computer or the cloud.
Such a flexible deployment architecture provides the basis for a hierarchical grid intelligence framework, allowing for a dynamic protection, automation, and control system that can accommodate many new sub-systems on the grid (eg, DERs, energy storage).
More future applications
Considering ABB’s proposed architecture →08, further applications are possible, ranging from site-specific applications (eg, event analysis and detection) to broader system-level applications (eg, DER monitoring and control), as existing technologies, eg, data fusion, ML/AI, and 5G real-time communications, expand, thereby enabling a paradigm shift in distribution PAC functionality. While data fusion merges various data sources and types to provide useful inputs to end applications, ML techniques use supervised, unsupervised, and reinforcement learning methods to solve traditional and new distribution protection, automation, and control applications. The deployment of 5G and other advanced, real-time communication systems will enable applications that could only be realized on paper 20 years ago →09.
Looking to the future, but grounded in the present, ABB’s proposed concept provides a faster response to grid events than centralized or substation-based solutions, while coordination with SCADA, distribution management systems, and substation computers remains feasible. Such new-wave technology can provide significant benefits for the utility distribution grid, distribution automation, overall grid management, advanced analytics, DER monitoring and control, and asset management. The merging of edge computing technology, ubiquitous communication media and protocols, and advanced analytics will provide strong distributed intelligence platforms to support the next generation of distribution automation and grid management products →10.
Acknowledgement
The information, data, or work presented herein was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), US Department of Energy. The views and opinions of the authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
References
[1] R. Rodrigues et al., “Applying edge computing to distribution automation,” 2023 IEEE PES Grid Edge Technologies Conference & Exposition (Grid Edge), April 2023. Available: https://ieeexplore.ieee.org/document/10102738 [Accessed Aug. 3, 2023.]
[2] P. Perani, “In Grid we trust,” ABB Review 03/2023, pp. 172–179.
[3] Y. Zhang et al., “Information fusion for edge intelligence: A survey,” Information Fusion, vol. 81, 2022, pp. 171–186. Available: https://www.sciencedirect.com/science/article/abs/pii/S1566253521002438 [Accessed Aug. 3, 2023.]
[4] PingThings website. [Online] Available: https://ni4ai.org/info [Accessed Aug. 3, 2023.]
[5] DOE/EPRI National Database Repository of Power System Events. [Online] Available: https://pqmon.epri.com/see_all.html [Accessed Aug. 3, 2023.]
[6] P. Joshi and H. Verma, “Synchrophasor measurement applications and optimal PMU placement: A review,” Electric Power Systems Research, vol. 199, 2021. Available: https://www.sciencedirect.com/science/article/abs/pii/S0378779621004090 [Accessed Aug. 3, 2023.]
[7] A. Primadianto and C. Lu, “A review on distribution system state estimation,” IEEE Transactions on Power Systems, vol. 32, no. 5, 2016, pp. 3875–3883.

Title photo: © Andreas Moglestue