
Contextualization, the alchemy for data

November 2020
Sanjay Shyam Bellara 

Head - Industry Analytics Platform Development
Process Automation Digital, ABB

Industries are demanding increasingly higher levels of automation to enable optimization. They are seeking the power of intuitive and predictive operations that comes with the high degree of connectivity of individual assets and of complete asset systems.

With much higher levels of instrumentation and sensors making their way into industrial asset systems, the magnitude of data being collected implies a massive opportunity for data-driven insights. But this also comes with the challenge of being able to effectively harness a vast amount of data for insights. The answer lies in the concept of contextualization.

Data – all around us

Industry estimates indicate that the volume of data in the world is fast approaching the exabyte scale (one quintillion bytes). Equally telling is how the industrial world is contributing to this data storm: estimates indicate that 40% of all data on the Internet will soon be machine data.

Today, almost every aspect of an industrial operation has witnessed a high degree of automation, which implies the presence of a significant amount of real-time and historic data that can be harnessed for various objectives, all of which can greatly enhance operational excellence for industrial enterprises. These high levels of automation are augmented by all-pervasive software systems that help control workflows and provide information at every stage of the value chain.

Data in the modern industrial enterprise, therefore, flows from individual assets, from OT, IT and ET systems, and from other sources of specialist information such as ERP systems, supply chain management systems and geospatial data.

The problem with data chains today

On average, only 27% of the data in an organization is used for analytics

However, with all this data comes a fundamental challenge: data is currently present and utilized only in silos. Data exists across the enterprise but is usually available (or taken advantage of) solely in the context of the function it supports. Operations data and supply chain data, for instance, sit in isolation and are utilized only by plant operators and supply chain managers, respectively. Because it is seen in such a myopic context, the value of this data is extremely limited.

Take, for example, an asset like a motor. Modern motors carry a complex set of sensors that measure key operating parameters, including speed, vibration and sound. Operations teams use this data to monitor performance and gauge how the motor is functioning. However, visibility into whether the motor is performing as intended in the context of the plant's engineering design, or as per the manufacturer's recommendations, is rare. Equally fuzzy is the view of whether the motor is being maintained to a plan, preventive or reactive.

In the context of enterprise commercial performance, it becomes challenging to forecast the cost implication of a motor outage. When an upset condition of the motor results in plant downtime, as it often does, stores are contacted for spares or a new motor with limited knowledge of their availability. This leads to lost production throughput and can also increase maintenance expense, with premiums having to be paid for urgent procurement.

The power of cross-functional data

The solution to this data conundrum lies in contextualized, cross-functional data. A unified platform can monitor and gauge all these parameters seamlessly by collating data from different systems. Such cross-functional data provides a rich source of input for a wide range of analyses, whether for monitoring or for predictive scenario analysis, and can be applied to planning, early interventions and more.

In the same motor example, the life of the asset can be extended through a proactive approach, breakdowns avoided and maintenance programs optimized, leading to direct cost savings. Further commercial, operational and productivity benefits can be realized through minimizing the plant downtime that results from upset conditions and by optimizing supply chain processes.

To achieve this, a proactive and holistic approach to consolidating and harnessing data is key. The current state of an asset, its preventive maintenance history, breakdown information and the historical performance patterns of similar assets are among the many data points that can be used to identify preventive measures, spare part requirements, suitable vendors and typical procurement periods. Maintenance teams can then take preventive action, pre-order spare parts for foreseen issues and thereby optimize the entire process.
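The logic described above can be sketched in a few lines of Python. This is a minimal illustration only: the asset records, field names, vibration limit and service interval are all hypothetical, not taken from any ABB product or real plant data.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical records; all names and values are illustrative.
@dataclass
class AssetState:
    asset_id: str
    vibration_mm_s: float   # latest vibration reading from the sensor
    last_service: date      # most recent preventive maintenance

@dataclass
class SparePart:
    part_id: str
    in_stock: int
    lead_time_days: int     # typical procurement period

def preventive_actions(state: AssetState, part: SparePart,
                       vibration_limit: float = 4.5,
                       service_interval_days: int = 180,
                       today: date = date(2020, 11, 1)) -> list:
    """Combine condition data, maintenance history and supply-chain data
    to suggest preventive actions for a single motor."""
    actions = []
    overdue = (today - state.last_service).days > service_interval_days
    degraded = state.vibration_mm_s > vibration_limit
    if degraded or overdue:
        actions.append(f"schedule maintenance for {state.asset_id}")
        # Cross-functional step: check stores data before the breakdown happens.
        if part.in_stock == 0:
            actions.append(f"pre-order {part.part_id} "
                           f"(lead time {part.lead_time_days} days)")
    return actions

motor = AssetState("M-101", vibration_mm_s=6.2, last_service=date(2020, 3, 15))
bearing = SparePart("BRG-7", in_stock=0, lead_time_days=21)
print(preventive_actions(motor, bearing))
```

The point of the sketch is the combination of inputs: condition monitoring alone would flag the vibration, but only the joined supply-chain view can also trigger the spare-part pre-order that avoids a premium-priced urgent purchase.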

Aggregating and contextualizing cross-functional data is the force multiplier for digitalization

Transforming data value through contextualization

To truly capitalize on the tremendous amount of data being generated by industrial systems today, it is important to look at this data in relation to other data, not in isolation. This is the power of contextualization. Rich insights can be derived from source data only when it is viewed in context; otherwise its visibility and value remain confined to silos.

The concept of contextualization embodies the very foundation of Industry 4.0, which at its core is based on cyber-physical systems and autonomous operations. These provide the capability to see assets (and even entire plants) in virtual, digital form and to run them successfully. To achieve this level of data value chain maturity, all data has to be seamlessly drawn in, integrated and contextualized to make it ready for predictive and prescriptive analytics, actionable insights and automation.

Contextualization helps break silos and ensures that data is viewed in relation to other data

Moving to the contextualized, cross-functional data value chain regime

For all its advantages, moving to a scenario where the power of contextualized, cross-functional data is fully utilized can be a daunting (and expensive) undertaking. First and foremost, it requires an assessment of the existing data across the systems landscape to establish which data needs to be contextualized. Depending on the automation maturity of the enterprise, it could require changes as widespread as operational methodologies, IT systems, instrumentation and sensor architectures, and even organizational culture. The extent to which data is already being brought together, stored and contextualized also plays a strong role in determining the complexity of rolling out powerful data value chains.

In essence, contextualization is based on the combination of domain knowledge, digitalization and technology. The key is a ready-built solution that is already aware of industry processes and can correlate data across functions; this keeps the turnaround time, cost and acceptability of the solution at optimal levels. Once such a solution is finalized, it is important to embed this knowledge in a system or platform.

Effective digitalization then requires the right technologies. In this case, that means adopting technologies that address the different aspects of the solution: interfacing with installed devices and systems across L1-L4 networks, ingesting vast volumes of data into a data lake and/or data virtualization layer, data exploration and treatment, services to publish the contextualized data for analytics, AI/ML model training and publishing, and so on. Further, the ideal platform is one that can be deployed either on-premise or in the cloud; this aids cost-effective connectivity with partner ecosystems and helps reduce capital expense. A plethora of platforms and point solutions available in the market today support these diverse requirements, and the ideal architecture combines them in a way that achieves contextualization, enabling intelligent industries. Making data available in contextualized form readies it for AI/ML-driven intervention and for analytics.
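At its simplest, the contextualization step is a join: raw OT readings are enriched with engineering design data and ERP context, keyed on the asset tag, so that each reading can be judged against its limits and its commercial impact. The Python sketch below assumes hypothetical record layouts; every field name and value is illustrative.

```python
# OT-layer sensor readings (illustrative values)
ot_readings = [
    {"tag": "M-101", "speed_rpm": 1480, "vibration_mm_s": 6.2},
    {"tag": "P-203", "speed_rpm": 2950, "vibration_mm_s": 1.1},
]

# Engineering design data, e.g. from the plant design system (hypothetical)
design_data = {
    "M-101": {"rated_speed_rpm": 1500, "vibration_limit_mm_s": 4.5},
    "P-203": {"rated_speed_rpm": 3000, "vibration_limit_mm_s": 4.5},
}

# Commercial context, e.g. from ERP (hypothetical)
erp_data = {
    "M-101": {"downtime_cost_per_hour": 12000},
    "P-203": {"downtime_cost_per_hour": 3000},
}

def contextualize(readings, design, erp):
    """Merge raw readings with design and ERP context, keyed on asset tag."""
    contextualized = []
    for r in readings:
        ctx = {**r, **design.get(r["tag"], {}), **erp.get(r["tag"], {})}
        # Only the joined record can answer "is this asset within design limits?"
        ctx["within_limits"] = ctx["vibration_mm_s"] <= ctx["vibration_limit_mm_s"]
        contextualized.append(ctx)
    return contextualized

for rec in contextualize(ot_readings, design_data, erp_data):
    print(rec["tag"], rec["within_limits"], rec["downtime_cost_per_hour"])
```

In a real platform the same join happens at scale inside the data lake or virtualization layer, but the principle is identical: once the reading carries its design limit and its downtime cost, both the engineering question (is it within limits?) and the commercial question (what does an outage cost?) become answerable from a single record.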

In essence, contextualization is based on the combination of domain knowledge, digitalization and technology.

Benefit from the power of contextualization

By implementing such a system, industrial enterprises can focus on their core competence, which is running plants optimally, rather than on data operations. Digitalization solutions like ABB Ability™ Genix are ideal in this context because they bring together deep domain knowledge, awareness of industry processes, and expertise in asset performance, edge systems, IT and AI/ML into a comprehensive, easy-to-implement platform.

Focus on core competence rather than managing data operations 


About the author

Sanjay Bellara heads the digital industrial analytics platform development function in ABB's Process Automation business. In this role, his responsibilities include designing and developing the ABB Ability™ Genix Industrial Analytics and AI Suite, along with business value applications on it, for deployment as SaaS, as a dedicated instance in the cloud, or on-premise. His expertise across business intelligence, big data, the Industrial Internet of Things and advanced analytics plays a crucial role in the conceptualization and implementation of industrial analytics to create real business value.

Sanjay has over 28 years of experience in technology, product and application development, R&D, solution delivery and organizational leadership, and over a decade of experience in analytical solutions. In these leadership roles, he has engaged with leading organizations at CxO level. He has delivered enterprise solutions and products, set up offshore R&D and product development centers, and led digital transformation programs. His sectoral experience extends across smart cities, oil & gas, utilities, energy, chemicals, transportation, e-learning, and the government and social sectors. He has driven technology evolution and the alignment of technology to business needs across technologies such as the MEAN stack, data sciences, machine learning, mobility and cloud computing. Sanjay holds a bachelor's degree in Computer Engineering from the University of Mumbai.
