Automatic control is based on the concept of feedback. The essence of feedback theory consists of three components: measurement, comparison, and correction. Measuring the quantity of interest, comparing it with the desired value, and using the error to correct the driving force form a closed sequence of information transmission, referred to as feedback, which underlies automatic control based on self-regulation.
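As a minimal sketch of this measure-compare-correct sequence, consider a proportional correction applied to a first-order process; the plant model, gain, and step size below are illustrative assumptions, not drawn from any particular system:

```python
# A minimal sketch of the measure-compare-correct feedback loop.
# The first-order plant and the gain K are illustrative assumptions.

def simulate_feedback(setpoint=1.0, K=2.0, dt=0.1, steps=100):
    y = 0.0                      # measured process output
    for _ in range(steps):
        error = setpoint - y     # comparison with the desired value
        u = K * error            # correction of the driving force
        y += dt * (-y + u)       # first-order plant: dy/dt = -y + u
    return y

# Output settles near K/(1+K) of the setpoint: pure proportional
# correction leaves a steady-state offset.
print(simulate_feedback())
```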
Virtually all control technology in use today is based on this essence of feedback. However, how to acquire the measurement, make the comparison, and, most importantly, perform the correction is the key to the theory and application of automatic control. The following sections discuss the existing automatic control theories and their applications.
PID Control
The most widely used industrial controller today is still the 55-year-old Proportional-Integral-Derivative (PID) controller. PID is simple, easy to implement, and requires no plant model. On the factory floor, one often finds that the control algorithm used on-line for controlling complex systems such as chemical reactors is still the PID. In these cases, a fancy and expensive Distributed Control System (DCS) may be in place, but the user still selects the PID rather than the advanced control algorithms that may be embedded in the DCS, because it is the only algorithm the user understands.
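As a sketch of how simple the algorithm is, a textbook positional-form discrete PID can be written in a few lines; the gains and sample time below are hypothetical placeholders that would have to be tuned for a real plant:

```python
# Minimal positional-form discrete PID sketch. Gains Kp, Ki, Kd and the
# sample time dt are illustrative placeholders, not tuned values.

class PID:
    def __init__(self, Kp, Ki, Kd, dt):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement                    # comparison
        self.integral += error * self.dt                  # integral term
        derivative = (error - self.prev_error) / self.dt  # derivative term
        self.prev_error = error
        return (self.Kp * error
                + self.Ki * self.integral
                + self.Kd * derivative)                   # control output

# Example: one control step with hypothetical gains.
controller = PID(Kp=2.0, Ki=0.5, Kd=0.1, dt=0.1)
u = controller.update(setpoint=1.0, measurement=0.8)
```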
PID really works in many systems, but it has major shortcomings because it is too simple. First, PID works only for plants that are essentially linear, time-invariant, and subject to small or no dynamic changes; these conditions are too restrictive for many industrial processes. Second, a PID has to be tuned properly; that is, its parameters must be set based on the plant dynamics. In real applications, tuning a PID is often a frustrating experience when the number of control loops is large, the process response is slow, or system perturbations gradually drive the tuning away from its proper setting. Finally, PID cannot effectively control complex systems such as chemical reactors, distillation columns, and blast furnaces, which are usually nonlinear, time-variant, coupled, and subject to parameter or structural uncertainties. Because of these shortcomings, many industrial control systems that continue to use PID control suffer from safety, quality, energy-waste, and productivity problems.
On the factory floor, it is very common to see many loops left in manual mode (the loop is open) because operators have trouble keeping the control loop running smoothly in automatic mode (the loop is closed).
Some PID self-tuning methods have been developed to address the PID tuning problem. Many commercial single-loop controllers and distributed control systems are equipped with auto-tuning or self-tuning PID controllers, but their application has met major obstacles.
If the self-tuning is model based, it requires a bump (test perturbation) injected into the closed loop in order to identify the plant model on-line and re-tune the PID. Operators find this procedure uncomfortable.
If the self-tuning is rule based, it is often difficult to distinguish between the effects of load disturbances and genuine changes in the process dynamics; the controller may thus overreact to a disturbance and create an unnecessary adaptation transition. In addition, the reliability of rule-based tuning may be questionable, since no mature stability analysis methods are available for rule-based systems.
Experience has therefore shown that many self-tuning PID controllers are operated in the so-called auto-tuning mode rather than in the continuous self-tuning mode. Auto-tuning is usually defined as a feature in which the PID parameters are calculated automatically based on a simplified process model that may be acquired in the open-loop situation. [31]-[33]
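One classic example of such model-based tuning is the Ziegler-Nichols open-loop (reaction curve) rule, which computes PID settings from a first-order-plus-dead-time model identified from a step test; the sketch below assumes such a model, with gain K, time constant tau, and dead time theta, is already available:

```python
# Ziegler-Nichols open-loop (reaction curve) PID tuning sketch.
# Assumes a first-order-plus-dead-time model
#     G(s) = K * exp(-theta*s) / (tau*s + 1)
# has already been identified from an open-loop step test.

def zn_pid(K, tau, theta):
    """Return (Kp, Ti, Td) from the classic Ziegler-Nichols table."""
    Kp = 1.2 * tau / (K * theta)   # proportional gain
    Ti = 2.0 * theta               # integral (reset) time
    Td = 0.5 * theta               # derivative time
    return Kp, Ti, Td

# Hypothetical model parameters from a step test:
print(zn_pid(K=1.5, tau=10.0, theta=2.0))  # -> (4.0, 4.0, 1.0)
```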
Although we have listed the shortcomings of PID, we should not blame it, since it is the PID that keeps most of our plants running. This situation will not change unless we can offer something that is better, easier to use, and makes technical and economic sense to the user.
Adaptive Control
An adaptive control system can be defined as a feedback control system intelligent enough to adjust its characteristics in a changing environment so as to operate in an optimal manner according to some specified criteria. [35]
In general, adaptive control has achieved great success in aircraft, missile, and spacecraft control applications. The reasons for this success may include the following factors: (1) in these mechanics-related systems, the control problem is well suited to traditional adaptive control methods, because the plant structure is well known and the adaptive algorithm is used mainly to compensate for uncertainties in the plant parameters; and (2) significant resources were available for research and development in space and military applications from the 1960s to the 1980s.
In industrial and process control applications, however, traditional adaptive control has not been very successful. Its most credible achievement is the PID self-tuning scheme, which is widely implemented in commercial products but not well used or accepted by users.
Traditional adaptive control methods, whether model-reference or self-tuning, usually require some kind of identification of the plant dynamics. This leads to a number of fundamental problems: the off-line training that may be required; the tradeoff between the persistent excitation of signals needed for correct identification and the steady system response desired for control performance; and model convergence and system stability issues in real applications. In addition, traditional adaptive control methods assume knowledge of the plant structure, and therefore have major difficulties in dealing with nonlinear, structure-variant, or large-time-delay plants. [34]-[40]
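As a sketch of the kind of on-line identification these methods rely on, recursive least squares (RLS) can estimate the parameters of an assumed plant model from input-output data; the first-order ARX structure and the forgetting factor below are illustrative assumptions:

```python
import numpy as np

# Recursive least squares (RLS) sketch for on-line identification of a
# first-order ARX model  y[k] = a*y[k-1] + b*u[k-1].  The model structure
# and forgetting factor lam are illustrative assumptions.

class RLS:
    def __init__(self, n_params, lam=0.99):
        self.theta = np.zeros(n_params)        # parameter estimates
        self.P = 1000.0 * np.eye(n_params)     # covariance matrix
        self.lam = lam                         # forgetting factor

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)  # gain vector
        self.theta += k * (y - phi @ self.theta)            # correction
        self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam
        return self.theta

# Identify a simulated plant y[k] = 0.9*y[k-1] + 0.2*u[k-1] on-line,
# using a square-wave input for persistent excitation.
est, y_prev, u_prev = RLS(2), 0.0, 1.0
for k in range(200):
    y = 0.9 * y_prev + 0.2 * u_prev
    theta = est.update([y_prev, u_prev], y)
    y_prev, u_prev = y, 1.0 if k % 20 < 10 else -1.0
print(theta)  # approaches [0.9, 0.2]
```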
Robust Control
Robust control is a controller design method that focuses on the reliability (robustness) of the control law. Robustness is usually defined as the minimum requirement a control system has to satisfy in order to be useful in a practical environment. Once the controller is designed, its parameters do not change and the control performance is guaranteed.
Robust control methods, whether in the time domain or the frequency domain, usually assume knowledge of the plant dynamics and their variation ranges. Some algorithms may not need a precise plant model, but they then require some kind of off-line identification. A robust control system is designed for the worst-case scenario, so it usually does not operate optimally in terms of control performance.
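A simple illustration of worst-case design is to check that a fixed controller keeps the closed loop stable over the vertices of a box of uncertain plant parameters; the scalar discrete-time plant family and the gain below are hypothetical:

```python
import itertools

# Worst-case stability check sketch: verify that a fixed controller keeps
# the closed loop stable over a box of uncertain plant parameters,
# checked at the vertices.  Plant family and gain are hypothetical.

def closed_loop_stable(a, b, Kp):
    """Discrete plant x[k+1] = a*x[k] + b*u[k] with u = -Kp*x."""
    return abs(a - b * Kp) < 1.0   # stable iff pole is inside the unit circle

Kp = 0.8
a_range = (0.7, 1.1)               # uncertain plant pole
b_range = (0.8, 1.2)               # uncertain input gain
worst_ok = all(closed_loop_stable(a, b, Kp)
               for a, b in itertools.product(a_range, b_range))
print("robustly stable over vertex set:", worst_ok)  # -> True
```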
Robust control methods are well suited to applications where the stability and reliability of the control system are the top priorities, the plant dynamics are known, and the variation ranges of the uncertainties can be estimated; aircraft and spacecraft control are typical examples. In process control applications, some control systems can also be designed with robust control methods. However, the design of a robust control system requires high-level expertise. Once the design is done, the system should work well; on the other hand, it will need to be re-designed when future upgrades or major modifications are required. [41][42]
Predictive Control
Among the advanced control methods, predictive control is probably the one with the most industrial applications. The essence of predictive control is based on three key elements: (1) a predictive model, (2) optimization over a moving temporal window, and (3) feedback correction. These three steps are usually carried out continuously on-line by computer programs.
Predictive control is a control algorithm based on a predictive model of the plant. The model is used to predict the future output based on the plant's historical information as well as its future input; what matters is the function of the model, not its structure. Therefore, a state equation, a transfer function, or even a step or impulse response can serve as the predictive model. Because the predictive model can show the future behavior of the system, we can experiment with different control laws and observe the resulting system output, much as in a computer simulation.
Predictive control is an algorithm of optimal control. It calculates future control actions based on a penalty or performance function. However, the optimization in predictive control is limited to a moving time interval, sometimes called the temporal window, and is carried out continuously on-line. This is the key difference from traditional optimal control, which judges global optimality by a performance function over the full time range. The moving-window idea works well for complex systems with dynamic changes and uncertainties, since in that case there is no reason to judge optimization performance over the full time range.
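A minimal sketch of this receding-horizon idea is given below; the scalar linear model, horizon length, and cost weights are illustrative assumptions, and constraints are omitted so the optimization reduces to least squares:

```python
import numpy as np

# Minimal receding-horizon (predictive) control sketch for a scalar
# linear model x[k+1] = a*x[k] + b*u[k].  At each step the controller
# minimizes a quadratic cost over a short horizon N, applies only the
# first move, and re-solves at the next sample.  Model and weights are
# hypothetical, and constraints are omitted for simplicity.

def mpc_step(x0, a, b, N=10, r=0.1):
    # Build the prediction x = F*x0 + G*u over the horizon.
    F = np.array([a ** (i + 1) for i in range(N)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = a ** (i - j) * b
    # Minimize sum(x_k^2) + r*sum(u_k^2): unconstrained least squares.
    H = G.T @ G + r * np.eye(N)
    u = np.linalg.solve(H, -G.T @ (F * x0))
    return u[0]                      # apply only the first control move

x, a, b = 5.0, 0.95, 0.5
for _ in range(30):
    u = mpc_step(x, a, b)
    x = a * x + b * u                # plant step (model matches plant here)
print(round(x, 4))                   # state regulated toward zero
```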
Predictive control is also an algorithm of feedback control. If there is a mismatch between the model and the plant, or a control performance problem caused by system uncertainties, predictive control can compensate for the error or adjust the model parameters based on on-line identification.
By its very nature, however, the design of a predictive control system is very complicated and requires high-level expertise, even though such a system should work well in controlling various complex processes. This may be the main reason why predictive control is not used as widely as it deserves to be. [43]-[48]
Optimal Control
Optimal control is a very important component of modern control theory. Its great success in space, aerospace, and military applications has changed our lives in many ways.
A typical optimal control problem can be stated as follows: the state equation of the system to be controlled and its initial condition are given, along with a defined objective set. Find a feasible control such that the system, starting from the given initial condition, transfers its state to the objective set while minimizing a performance index.
In principle, optimal control problems belong to the calculus of variations. Pontryagin's Maximum Principle and Bellman's Dynamic Programming are two powerful tools for solving the closed-set constrained variational problems to which most optimal control problems are related.
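One classic, analytically tractable instance of this formulation is the linear quadratic regulator (LQR); the sketch below solves the continuous-time algebraic Riccati equation with SciPy for a hypothetical double-integrator plant and illustrative weights:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR sketch: minimize J = integral(x'Qx + u'Ru) dt for xdot = Ax + Bu.
# The double-integrator plant and the weights Q, R are illustrative.

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])           # double integrator
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                        # state penalty
R = np.array([[1.0]])                # control penalty

P = solve_continuous_are(A, B, Q, R) # algebraic Riccati equation
K = np.linalg.inv(R) @ B.T @ P       # optimal state feedback u = -Kx
print(K)                             # approx [[1.0, 1.732]]
```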
In practice, optimal control is very well suited to space, aerospace, and military applications such as the moon-landing problem of a spacecraft, the flight control problem of a rocket, and the interception problem of a defense missile. Industrial systems also present some optimal-control-related problems, such as controlling the bacteria content of a bioengineering system.
However, most process control problems involve the control of flow, pressure, temperature, and level; they are not well served by traditional optimal control techniques. [49]-[51]
Intelligent Control
Intelligent Control is another major field in modern control studies. Although there are different definitions of Intelligent Control, we can consider it a control paradigm that uses various artificial intelligence techniques, including the following methods: Learning Control, Expert Control, Fuzzy Control, and Neural Network Control. [52]-[54]
Learning Control uses pattern recognition techniques to obtain the current status of the control loop, and then makes control decisions based on the loop status as well as knowledge or experience stored previously. Since Learning Control is limited by its stored knowledge, it has never become popular in applications.
Expert Control, based on expert system technology, uses a knowledge base to make control decisions. The knowledge base is built from human expertise, system data acquired on-line, and a designed inference engine. Since the knowledge in Expert Control is represented symbolically and is always in a discrete format, Expert Control is suitable for solving decision-making problems such as production planning, scheduling, and fault diagnosis. It is not suitable for continuous control problems.
Fuzzy Control, unlike Learning Control and Expert Control, is built on the mathematical foundation of Fuzzy Set Theory. It represents knowledge or experience in a mathematical form, so that plant and system dynamic characteristics can be described by fuzzy sets and fuzzy relational functions, and control decisions can be generated from these sets and functions by means of rules. Although Fuzzy Control seems to have great potential for solving complex control problems, its design procedure is complicated and requires a great deal of specialized knowledge. Moreover, fuzzy mathematics lacks many of the basic operations of ordinary mathematics; for instance, inverse addition is not available, so it is very difficult to solve a fuzzy equation, whereas solving differential equations is one of the basic practices in traditional control theory and applications. This lack of good mathematical tools is a fundamental obstacle to making Fuzzy Control powerful.
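As a small illustration of how fuzzy sets and rules generate a control decision, the sketch below uses triangular membership functions, three hypothetical rules on the error signal, and centroid defuzzification; the membership shapes, rule table, and output values are all illustrative assumptions:

```python
# Minimal fuzzy controller sketch: triangular membership functions,
# three illustrative rules on the error signal, and centroid
# defuzzification.  Shapes, rules, and outputs are hypothetical.

def tri(x, left, peak, right):
    """Triangular membership function."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def fuzzy_control(error):
    # Fuzzify: degree of membership in each linguistic set.
    neg  = tri(error, -2.0, -1.0, 0.0)
    zero = tri(error, -1.0,  0.0, 1.0)
    pos  = tri(error,  0.0,  1.0, 2.0)
    # Rules: IF error is NEG THEN u = -1; IF ZERO THEN u = 0;
    #        IF POS THEN u = +1.
    weights = [neg, zero, pos]
    outputs = [-1.0, 0.0, 1.0]
    # Defuzzify: centroid (weighted average) of singleton outputs.
    total = sum(weights)
    return sum(w * u for w, u in zip(weights, outputs)) / total if total else 0.0

print(fuzzy_control(0.5))   # -> 0.5, a blend of the ZERO and POS rules
```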
Neural Network Control can be considered a control method that uses artificial neural networks. It presents a new area of interest with great potential, since artificial neural networks are built on a firm foundation that includes rich and well-understood mathematical tools. Since Control Pack uses artificial neural networks (ANN) as its basic control component, Neural Network Control will be discussed in more detail in Chapter 2.