ABB Review | 02/2024
This article was originally published in issue 02/2024 of ABB Review.
At the core of all industrial processes is the quest for ever more precise levels of control. Today, this quest is driven by increasing computer power and digitization in automation. However, although most of the underlying mathematics stems from the 1960s, only recently has it become feasible to apply some algorithms to real-time scenarios.
Heiko Petersen, Patrick Meade Vargas, ABB Process Automation, Mannheim, Germany, heiko.petersen@de.abb.com, patrick.meade@de.abb.com
Ruomu Tan, ABB Corporate Research Process Automation, Ladenburg, Germany, ruomu.tan@de.abb.com
ABB’s Ability™ PlantInsight platform →01 is a case in point. The platform makes it possible to run a variety of machine-learning (ML) algorithms for detection, segmentation, and prediction of specific patterns in vast amounts of process data. This, in turn, enables the implementation of AI-based optimization solutions that help reduce pollutants, extend equipment lifespans, and lower production costs.
Nowadays, artificial intelligence (AI) seems omnipresent. It is a common topic of conversation, bookstores are flooded with literature about it, and few applications seem to do without it. But considering the sheer amount of hype, one may wonder why AI is still so seldom used in process industries. Or could it perhaps be that it is already used, but just not recognized as such?
Generally, a system is considered to be AI-driven when it performs tasks typically done by humans, such as visual perception, decision-making, speech recognition, and translation. As a matter of fact, AI-driven systems can outperform humans in a range of activities such as solving numerical problems, pattern recognition, and retrieving information from a massive number of sources. Nevertheless, such systems are still in their infancy when it comes to abstract reasoning or creatively turning information into eloquent texts, to say nothing of social interactions, consciousness, or self-awareness, all of which are routine for humans but out of reach for machines – at least so far.
In view of this, it is important to distinguish between different levels of AI. According to Kaplan and Haenlein [1], the evolution of AI can be divided into three stages: artificial narrow intelligence (ANI), which applies AI to specific tasks; artificial general intelligence (AGI), which would reason and plan across domains at a human level; and artificial superintelligence (ASI), which would surpass human capabilities altogether.
Most of today’s AI solutions fall into the first category. In this sense, even James Watt’s flyball governor, a speed regulator for his rotary steam engine of 1768, could be considered AI at stage one. However, it was never marketed as such – and the same can be said for the millions of control solutions operating in the power, refining, and chemical industries.
Typically, AI systems not only consist of a brain, or in other words, a sophisticated algorithm; they also must be able to perceive and interact with the world. Vision, hearing, speaking and motion complement the brain and allow AI-based systems to solve real-world problems – tasks very similar to those encountered by process control systems. While sensors measure process values (dependent variables), such as pressure, flow, temperature, etc., the controller takes these inputs and calculates the best way to adjust actuators such as valves, dampers, etc. (independent variables) to meet certain control objectives. In this scenario the controller’s role is that of a brain, running algebraic calculations and making logical decisions.
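This sensor, controller, and actuator loop can be sketched in a few lines of Python. The snippet below is a toy simulation with invented gains and first-order dynamics, not an ABB implementation: a PI controller reads the measured value, compares it with the setpoint, and computes the actuator command.

```python
def pi_controller(kp, ki, dt):
    """Return a stateful PI control law: error -> actuator command."""
    integral = 0.0
    def step(error):
        nonlocal integral
        integral += error * dt
        return kp * error + ki * integral
    return step

def simulate(setpoint=80.0, steps=600, dt=0.1):
    """Drive a simple first-order 'process' toward the setpoint."""
    temp = 20.0                        # measured process value (dependent variable)
    control = pi_controller(kp=2.0, ki=0.5, dt=dt)
    for _ in range(steps):
        u = control(setpoint - temp)   # valve command (independent variable)
        temp += dt * 0.5 * (u - temp)  # made-up first-order process dynamics
    return temp
```

The integral term plays the role of the controller’s memory: it keeps pushing the actuator until the remaining control error vanishes.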
One of the most obvious reasons why AI is gaining ground now is the exponential increase in available computing power. Constraints that data scientists faced in the past, such as a limited number of neurons in an artificial neural network (ANN), have largely vanished, opening the door to leveraging the full potential of deep learning networks. Furthermore, anybody with a laptop and access to a cloud solution can run a training algorithm. This opens the market for new business models such as self-service model training and software as a service (SaaS). It not only democratizes AI but also reduces the engineering effort required for control solutions.
In the process industry, digitization began in the late 1970s with the widespread introduction of programmable logic controllers (PLCs) and distributed control systems (DCSs), which replaced analog controllers. Adding new data points and control features became a programming task rather than one of hardware installation and configuration. This significantly increased the flexibility of the control process while reducing costs. However, adding more control features led to more complex control structures, which were often difficult to understand and maintain. They also required significant engineering effort and process know-how. The need for a leaner and more transparent control approach arose.
Advancements in mathematics and system theory, together with the increasing availability of computing power, enabled the development of more advanced process controls. The mathematical foundations of this development can be traced back to the work of Rudolf Kalman et al. in the early 1960s [2]. While differential equations describe the dynamics of a physical system in a kind of ‘cleanroom’ scenario, Kalman added terms for state disturbance and measurement noise, both inevitable in any real-world application. Moreover, he formulated his equations directly in matrix representation, accounting for multiple differential equations with their respective inputs and outputs. This multiple-input, multiple-output (MIMO) approach made it possible to calculate an optimal control strategy not only for one actuator at a time, but for many simultaneously.
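For a single state variable, Kalman’s predict-and-update recursion can be sketched as follows. This is an illustrative snippet only – the MIMO case described above uses the same equations in matrix form, and all parameter values here are invented:

```python
import random

def kalman_1d(measurements, a=1.0, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter: a = state transition, q = state-disturbance
    variance, r = measurement-noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: propagate the state estimate and its uncertainty.
        x = a * x
        p = a * p * a + q
        # Update: blend prediction and measurement, weighted by uncertainty.
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# A constant true value observed through measurement noise:
random.seed(0)
noisy = [5.0 + random.gauss(0.0, 0.5) for _ in range(200)]
estimate = kalman_1d(noisy)[-1]        # settles close to the true value 5.0
```

The gain k expresses exactly the trade-off Kalman introduced: the noisier the measurement relative to the model, the less the new observation is trusted.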
Moreover, it turned out that Kalman’s mathematical solution could also be used to look into the future of a process. In contrast to a simple controller, which only calculates the next optimal step for one variable, it was now possible to look multiple steps ahead into the future for multiple variables. The goal remained the same: to minimize the control error, which is the difference between desired and actual process values. But whereas a simple controller is ‘driving by sight’, a forward-looking regulator creates a longer-term plan to act upon.
However, as things often do not go according to plan, it became evident that controllers must be able to adjust to changing situations based on feedback from a process. This led to the development of Model Predictive Control (MPC), which generates an optimal control path but triggers only the first step in each iteration. A moment later, once feedback is received, it repeats the process of calculating the optimal path until the desired operating point is reached →02.
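The receding-horizon idea can be illustrated with a deliberately tiny example. The dynamics are hypothetical, and a brute-force search over a few candidate moves stands in for the real optimizer an MPC would use:

```python
import itertools

def predict(x, u_seq):
    """Process model: x_next = 0.8*x + 0.5*u (made-up dynamics)."""
    xs = []
    for u in u_seq:
        x = 0.8 * x + 0.5 * u
        xs.append(x)
    return xs

def mpc_step(x, setpoint, horizon=3, candidates=(0.0, 0.5, 0.8, 1.0)):
    """Search all input sequences over the horizon; keep only the FIRST move."""
    best_u, best_cost = 0.0, float("inf")
    for u_seq in itertools.product(candidates, repeat=horizon):
        cost = sum((setpoint - xp) ** 2 for xp in predict(x, u_seq))
        if cost < best_cost:
            best_cost, best_u = cost, u_seq[0]
    return best_u

def run(setpoint=2.0, steps=30):
    x = 0.0
    for _ in range(steps):
        u = mpc_step(x, setpoint)      # re-plan at every step (receding horizon)
        x = 0.8 * x + 0.5 * u          # here the plant happens to match the model
    return x
```

Note that although a whole sequence is optimized each time, only its first element is applied before the loop re-plans from the freshly measured state – the defining feature of MPC.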
Although these steps have significantly improved many processes, there are multiple areas where process control is still limited. The following section describes some of these areas and how AI can contribute to overcoming the remaining limitations.
As described above, controllers require feedback from the process they are controlling; otherwise, their performance may suffer. The problem intensifies the longer the delay between action and feedback. In particular, data sampled far less frequently than the process itself can pose problems. This is typically the case for laboratory data covering product properties that cannot be measured continuously or in real time, such as viscosity or flashpoint. Adjustments to the process can be made only after results are received from the lab, and this inherent delay might compromise product quality.
One way to overcome this is to estimate a product’s qualities in real time using ML models such as artificial neural networks. Here, the accuracy of the models can be continuously improved with each new lab measurement. The predicted qualities can then be used without delay by the control algorithm to adjust the process. In this configuration, conventional and AI-based control algorithms work hand in hand to achieve and maintain desired production goals. The concept can also be applied to processes with long dead times, or to processes whose sensors need regular recalibration and are thus not continuously available. A practical use case pertaining to emission reduction illustrates these concepts →03.
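The soft-sensor idea can be sketched as follows. For brevity, a simple linear regression stands in for the ANN mentioned above, and all variable names and values are invented; the essential point is that the model is refitted whenever a new lab result arrives and predicts the quality continuously in between:

```python
class SoftSensor:
    """Predicts a lab-measured quality (e.g. viscosity) from a continuously
    measured variable; refits a linear model on every new lab result."""
    def __init__(self):
        self.samples = []          # (measurement, lab_value) pairs
        self.w, self.b = 0.0, 0.0  # model: quality ~ w * measurement + b

    def add_lab_result(self, x, y):
        self.samples.append((x, y))
        n = len(self.samples)
        if n < 2:
            return                 # need two points for a line
        sx = sum(x for x, _ in self.samples)
        sy = sum(y for _, y in self.samples)
        sxx = sum(x * x for x, _ in self.samples)
        sxy = sum(x * y for x, y in self.samples)
        denom = n * sxx - sx * sx
        if denom:                  # ordinary least-squares refit
            self.w = (n * sxy - sx * sy) / denom
            self.b = (sy - self.w * sx) / n

    def predict(self, x):
        """Real-time quality estimate between lab samples."""
        return self.w * x + self.b
```

In practice the predictor would be a non-linear ML model with many inputs, but the work-sharing is the same: the lab closes the slow feedback loop, the soft sensor closes the fast one.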
Like most systems in the real world, industrial processes are often non-linear. This results in a systematic discrepancy between the real process and its linear process model. For short time horizons and minor process alterations, the resulting error may be negligible; on a larger scale, however, it may affect control performance. Although some non-linearities can be offset by transforming the associated process data – for instance, by linearizing a control valve’s characteristic curve – linearization is not always perfect and can be costly when many process variables are involved.
AI techniques, on the other hand, deal very well with non-linearities: ML models can adapt to virtually any non-linear behavior. While most MPC implementations use a linear process model, the MPC framework itself makes no assumption about the type of model used or its linearity. Therefore, non-linear models trained with ML algorithms can also be used to reduce modeling errors. This leads to more accurate control and prevents the controller from getting stuck in merely local optimizations.
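As a minimal illustration of data-driven non-linear modeling, a quadratic valve model fitted by least squares can stand in for the ML models discussed above. The valve curve, variable names, and samples below are all invented:

```python
def fit_valve_model(xs, ys):
    """Least-squares fit of flow ~ a*u + b*u**2 (zero flow at zero opening),
    solved via the 2x2 normal equations."""
    s11 = sum(x * x for x in xs)
    s12 = sum(x ** 3 for x in xs)
    s22 = sum(x ** 4 for x in xs)
    t1 = sum(x * y for x, y in zip(xs, ys))
    t2 = sum(x * x * y for x, y in zip(xs, ys))
    det = s11 * s22 - s12 * s12
    a = (t1 * s22 - t2 * s12) / det
    b = (s11 * t2 - s12 * t1) / det
    return a, b

# Historical samples of a (made-up) quadratic valve characteristic: flow = opening**2
openings = [i / 10 for i in range(11)]
flows = [u * u for u in openings]
a, b = fit_valve_model(openings, flows)   # recovers a close to 0, b close to 1
```

Once such a non-linear model is available, the MPC’s predict step simply calls it instead of a linear gain, with no change to the surrounding receding-horizon logic.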
At the heart of any advanced process control system is a process model. However, the process of identifying the dynamics of a physical system is costly and requires domain know-how and experience.
Traditionally, there are two approaches to model design: a so-called first principles model, which is based on the design, mechanics, and fundamental physics of a system, and a so-called empirical model, which is based on observations of how a system reacts to stimuli, for example, by means of step-response experiments.
Both approaches can be highly complex, costly, and in some cases, due to the nature of the process, impossible to implement. However, in many cases, this burden can be avoided if adequate historical process data is available. During normal plant operation, setpoints are regularly changed and disturbances are continuously happening, both triggering reactions in the process, and thus revealing its dynamic behavior. These footprints can be used by ML algorithms to easily create accurate models →04. To accomplish this, the data must be representative and thus cannot be randomly picked. For instance, abnormal process behavior, or periods with missing data must be removed. Doing this manually would be costly, but for an algorithm this is the perfect task. Selecting, segmenting and clustering vast amounts of data is a home run for machine learning →05.
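A minimal sketch of such automated data selection follows; the window size, the trace values, and the two-regime assumption are all illustrative:

```python
def clean_windows(series, size=4):
    """Segment a historian trace into fixed windows; drop windows with gaps."""
    wins = [series[i:i + size] for i in range(0, len(series) - size + 1, size)]
    return [w for w in wins if all(v is not None for v in w)]

def kmeans_1d(values, centers, iters=20):
    """Two-center k-means on scalar features (here: window means)."""
    c0, c1 = centers
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        c0 = sum(g0) / len(g0) if g0 else c0
        c1 = sum(g1) / len(g1) if g1 else c1
    return c0, c1

# Trace with two operating regimes and one corrupted stretch (None = missing):
trace = [10, 11, 9, 10, 10, None, 10, 9, 50, 51, 49, 50, 50, 49, 51, 50]
means = [sum(w) / len(w) for w in clean_windows(trace)]
low, high = kmeans_1d(means, centers=(min(means), max(means)))
```

The corrupted window is discarded automatically, and the clustering separates the two operating regimes – exactly the kind of selection and segmentation task that would be tedious to do by hand.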
Over the years, ABB has developed a suite of control and optimization solutions that have followed, and often led, technical developments in this area. From first-principles modeling to proportional-integral-derivative (PID) loop monitoring, model predictive control, and dynamic optimization, ABB provides a wide range of solutions. Today, with the assistance of modern hardware and machine-learning algorithms, it is possible to complement this offering with the benefits and opportunities of artificial intelligence. With this in mind, ABB has developed ABB Ability™ PlantInsight, a platform that leverages the full potential of ML algorithms. This web-based application makes it possible to run a multitude of machine-learning algorithms for prediction, segmentation, and detection of specific patterns in huge amounts of process data, while its modular concept makes it easy to embed proprietary Python scripts and complement them with existing ones.
All in all, it can be said that joining the world of control to that of artificial intelligence can yield significantly improved results in terms of controlling industrial processes. Indeed, the more such hybrid control solutions spread, the more both worlds will converge – an apparently natural process since both share the same theoretical foundations. As this process evolves, continued progress is set to pave the way to the introduction of tomorrow’s fully autonomous production facilities.
References
[1] A. Kaplan, M. Haenlein, “Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence,” Business Horizons, vol. 62, no. 1, 2019, pp. 15 – 25.
[2] R. E. Kalman, “A New Approach to Linear Filtering and Prediction Problems,” Journal of Basic Engineering, vol. 82, no. 1, 1960, pp. 35 – 45.