Carnegie Mellon University
18-849b Dependable Embedded Systems
Spring 1999
Author: Leo Rollins
Conventional control theory has allowed man to control and automate his environment for centuries. Modern control techniques have allowed engineers to optimize the control systems they build for cost and performance. However, optimal control algorithms are not always tolerant of changes in the control system or the environment. Robust control theory is a method to measure the performance changes of a control system as its parameters change. Application of this technique is important to building dependable embedded systems. The goal is to allow exploration of the design space for alternatives that are insensitive to changes in the system and can maintain their stability and performance. One desirable outcome is a system that exhibits graceful degradation in the presence of changes or partial system faults.
In order to gain a perspective for robust control, it is useful to examine some basic concepts from control theory. Control theory can be broken down historically into two main areas: conventional control and modern control. Conventional control covers the concepts and techniques developed up to 1950. Modern control covers the techniques from 1950 to the present. Each of these is examined in this introduction.
Conventional control became interesting with the development of feedback theory. Feedback was used in order to stabilize the control system. One early use of feedback control was the flyball governor, which regulated the speed of steam engines. Another example was the use of feedback for telephone signals in the 1920s. The problem was the transmission of signals over long lines: distortion limited the number of repeaters that could be added in series to a telephone line. Harold Stephen Black proposed using negative feedback in the repeater amplifiers to limit the distortion. Even though the feedback sacrificed some gain in the repeater, it enhanced the overall performance. Refer to [Bennet96] for a fuller historical treatment of control theory.
Conventional control relies upon developing a model of the control system using differential equations. Laplace transforms are then used to express the system equations in the frequency domain, where they can be manipulated algebraically. Figure 1 shows a typical control loop. The input to the system is a reference signal representing the desired control value. The plant output, y, is fed back through a feedback transfer function, H(s), and the feedback signal is subtracted from the reference to determine the error signal, e. The error drives the forward transfer function, G(s), which produces the plant output, so further control is based on the error signal. The system therefore serves to bring the output as close as possible to the desired reference input. Due to the complexity of the mathematics, conventional control methods were used mostly for Single-Input-Single-Output (SISO) systems. Refer to [Oppenheim97] for an introduction to conventional control techniques.
Figure 1: Typical Control Loop
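As a concrete illustration of the loop in Figure 1, the short Python sketch below forms the closed-loop transfer function G/(1 + GH) and checks its step response. The particular G(s) and H(s) are arbitrary choices for illustration, not values taken from the text.

import numpy as np
from scipy import signal

# Illustrative forward path G(s) = 10 / (s^2 + 2s) and unity feedback H(s) = 1
# (these numbers are arbitrary, chosen only to make the example concrete).
G_num, G_den = [10.0], [1.0, 2.0, 0.0]
H_num, H_den = [1.0], [1.0]

# Closed-loop transfer function  T(s) = G / (1 + G*H)
#   numerator:   G_num * H_den
#   denominator: G_den * H_den + G_num * H_num
T_num = np.polymul(G_num, H_den)
T_den = np.polyadd(np.polymul(G_den, H_den), np.polymul(G_num, H_num))
T = signal.TransferFunction(T_num, T_den)

# Step response: the output y(t) should settle near the reference value 1.
t, y = signal.step(T)
print("final value of step response:", y[-1])

The final value of the step response approaching 1 shows the feedback loop driving the output toward the reference.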
One development that was key to later work in robust control was the root-locus method. In the frequency domain, G(s) and H(s) were expressed as ratios of polynomials in the complex frequency variable, s. Nyquist, Bode and others realized that the roots of the denominator polynomial determined the stability of the control system. These roots were referred to as "poles" of the transfer functions. The poles had to lie in the left half-plane of the complex frequency plot to guarantee stability. Root locus was developed as a method to graphically show the movement of the poles in the frequency domain as the coefficients of the s-polynomial were changed. Movement into the right half-plane meant an unstable system. Thus systems could be judged by their sensitivity to small changes in the denominator coefficients.
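A minimal numerical sketch of the root-locus idea: for an assumed open-loop transfer function (the polynomial below is an illustrative choice), sweep a gain and watch whether any closed-loop pole crosses into the right half-plane.

import numpy as np

# Characteristic polynomial 1 + K*G(s)H(s) = 0 for an illustrative open loop
# G(s)H(s) = 1 / (s*(s+2)*(s+4));  closed-loop denominator: s^3 + 6s^2 + 8s + K.
# Sweep the gain K and watch the closed-loop poles (roots of the denominator).
for K in (1.0, 10.0, 48.0, 100.0):
    poles = np.roots([1.0, 6.0, 8.0, K])
    stable = np.all(poles.real < 0)   # stable only if all poles lie in the left half-plane
    print(f"K={K:6.1f}  poles={np.round(poles, 3)}  stable={stable}")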
Modern control methods were developed with the realization that control system equations could be structured in such a way that computers could solve them efficiently. It was shown that any nth-order differential equation describing a control system could be reduced to n 1st-order equations, which could be arranged in the form of matrix equations. This method is often referred to as the state variable method. The canonical form of the state equations is

    dx/dt = Ax + Bu
    y = Cx + Du

where x is a vector representing the system "state", dx/dt is a vector representing the change in "state", u is a vector of inputs, y is a vector of outputs, and A, B, C, D are constant matrices defined by the particular control system.
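The sketch below shows the reduction of a single 2nd-order differential equation (an assumed mass-spring-damper plant) to two 1st-order state equations in the A, B, C, D form, using SciPy's state-space object.

import numpy as np
from scipy import signal

# Sketch: a 2nd-order system m*x'' + c*x' + k*x = u rewritten as two 1st-order
# state equations, dx/dt = A x + B u, y = C x + D u.  The mass/damping/stiffness
# values are illustrative assumptions.
m, c, k = 1.0, 0.5, 2.0
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[1.0, 0.0]])   # observe the position only
D = np.array([[0.0]])

sys = signal.StateSpace(A, B, C, D)
t, y = signal.step(sys)                # step response of the state-space model
print("steady-state output:", y[-1])   # expect ~ 1/k = 0.5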
Modern control methods were extremely successful because they could be efficiently implemented on computers, they could handle Multiple-Input-Multiple-Output (MIMO) systems, and they could be optimized. Methods to optimize the constant state matrices were developed. For instance, a spacecraft control system could be optimized to reach a destination in the minimum time, to use the minimum amount of fuel, or to minimize some weighted combination of the two. The ability to design for performance and cost made these modern control systems highly desirable. There are many books covering the mathematical details of modern control theory; one example is [Chen84]. A lighter overview of the key developments in modern control can be found in [Bryson96].
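One widely used optimization of the state matrices is the linear quadratic regulator, which trades state error against control effort (for a spacecraft, loosely, accuracy against fuel). The sketch below is a generic LQR computation on an assumed double-integrator plant with assumed weighting matrices, not the specific optimization methods referenced above.

import numpy as np
from scipy.linalg import solve_continuous_are

# Sketch of a linear quadratic regulator (LQR): choose a feedback gain K that
# minimizes the cost integral of (x'Qx + u'Ru) for an illustrative
# double-integrator plant.  Q and R are weighting assumptions that trade
# state error against control effort.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])     # penalize position error more than velocity
R = np.array([[1.0]])        # penalize control effort

P = solve_continuous_are(A, B, Q, R)    # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)         # optimal gain K = R^-1 B' P
closed_loop_poles = np.linalg.eigvals(A - B @ K)
print("LQR gain:", K)
print("closed-loop poles:", closed_loop_poles)   # all should have negative real part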
From [Chandraseken98], "Robust control refers to the control of unknown plants with unknown dynamics subject to unknown disturbances". Clearly, the key issue with robust control systems is uncertainty and how the control system can deal with it. Figure 2 shows an expanded view of the simple control loop presented earlier. Uncertainty enters the system in three places: there is uncertainty in the model of the plant, there are disturbances that act on the plant, and there is noise on the sensor measurements. Each of these uncertainties can have an additive or multiplicative component.
Figure 2: Plant control loop with uncertainty
The figure above also shows the separation of the computer control system from the plant. It is important to understand that the control system designer has little control over the uncertainty in the plant. The designer creates a control system that is based on a model of the plant. However, the implemented control system must interact with the actual plant, not the model of the plant.
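The distinction between additive and multiplicative uncertainty can be seen numerically. In the sketch below, the nominal plant, the uncertainty weight, and the perturbation size are all illustrative assumptions; the point is only how each form of uncertainty perturbs the nominal frequency response.

import numpy as np

# Sketch: additive vs. multiplicative uncertainty on a nominal plant model.
# G_nominal(s) = 1/(s+1) evaluated on the imaginary axis; the uncertainty
# weight w_mult(s), the additive level w_add, and the perturbation delta are
# illustrative assumptions.
w_freq = np.logspace(-1, 2, 5)                 # a few frequencies, rad/s
s = 1j * w_freq
G_nom = 1.0 / (s + 1.0)

delta = 0.5                                    # |delta| <= 1 bounds the unknown perturbation
w_mult = 0.2 * s / (s + 10.0)                  # multiplicative weight: grows with frequency
w_add = 0.05                                   # constant additive uncertainty level

G_mult = G_nom * (1.0 + w_mult * delta)        # multiplicative (relative) uncertainty
G_add = G_nom + w_add * delta                  # additive (absolute) uncertainty

for f, gn, gm, ga in zip(w_freq, G_nom, G_mult, G_add):
    print(f"w={f:7.2f}  |G_nom|={abs(gn):.3f}  |G_mult|={abs(gm):.3f}  |G_add|={abs(ga):.3f}")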
Control system engineers are concerned with three main topics: observability, controllability and stability. Observability is the ability to observe all of the parameters or state variables in the system. Controllability is the ability to move a system from any given state to any desired state. Stability is often phrased as the bounded response of the system to any bounded input. Any successful control system will have and maintain all three of these properties. Uncertainty presents a challenge to the control system engineer who tries to maintain these properties using limited information.
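For a linear state-space model, all three properties can be checked numerically with standard tests: the rank of the controllability and observability matrices and the sign of the real parts of the eigenvalues of A. The matrices below are illustrative assumptions.

import numpy as np

# Sketch: numerical checks of the three properties for a linear state-space
# model dx/dt = A x + B u, y = C x.  The matrices are illustrative assumptions.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# Controllability matrix [B, AB, ..., A^(n-1)B] must have full rank n.
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
# Observability matrix [C; CA; ...; CA^(n-1)] must have full rank n.
obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
# Stability: all eigenvalues of A in the open left half-plane.
eigs = np.linalg.eigvals(A)

print("controllable:", np.linalg.matrix_rank(ctrb) == n)
print("observable:  ", np.linalg.matrix_rank(obsv) == n)
print("stable:      ", np.all(eigs.real < 0))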
One method that has been used in the past to deal with uncertainty is stochastic control. In stochastic control, uncertainties in the system are modeled as probability distributions, and these distributions are combined to yield the control law. This method deals with the expected value of control. Abnormal situations may arise that deliver results that are not necessarily close to the expected value, which may not be acceptable for embedded control systems that have safety implications. An introduction to stochastic control can be found in [Lewis86].
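A toy illustration of the stochastic viewpoint (the scalar plant, control law, and noise level below are assumptions): the controller is judged by the average cost over many random disturbance sequences, and an occasional run can land well away from that expected value.

import numpy as np

# Sketch: the stochastic-control view.  Disturbances are modeled as a probability
# distribution and the controller is judged by the *expected* cost.
rng = np.random.default_rng(0)
a, k_gain, noise_std = 1.1, 0.8, 0.2     # open-loop unstable plant, stabilizing gain

def run(steps=200):
    x, cost = 1.0, 0.0
    for _ in range(steps):
        u = -k_gain * x                      # control law based on expected behavior
        w = rng.normal(0.0, noise_std)       # disturbance drawn from a distribution
        x = a * x + u + w
        cost += x * x
    return cost / steps

costs = np.array([run() for _ in range(500)])
print("expected (mean) cost:", costs.mean())
print("worst observed cost: ", costs.max())  # rare runs can be far from the expected value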
Robust control methods seek to bound the uncertainty rather than express it in the form of a distribution. Given a bound on the uncertainty, the control can deliver results that meet the control system requirements in all cases. Therefore robust control theory might be stated as a worst-case analysis method rather than a typical case method. It must be recognized that some performance may be sacrificed in order to guarantee that the system meets certain requirements. However, this seems to be a common theme when dealing with safety critical embedded systems.
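By contrast, a worst-case sketch bounds the uncertainty and evaluates every plant in that set. The plant family and the settling-time measure below are illustrative assumptions.

import numpy as np

# Sketch: the robust-control view.  Instead of a distribution, the uncertain
# plant gain is only known to lie in a bounded interval, and the design is
# judged by the worst case over that set.
k_controller = 2.0
uncertain_gains = np.linspace(0.5, 1.5, 11)   # bounded uncertainty: plant gain in [0.5, 1.5]

worst = None
for g in uncertain_gains:
    # closed-loop pole of  dx/dt = -x + g*u,  u = -k*x   is  -(1 + g*k)
    pole = -(1.0 + g * k_controller)
    settle_time = 4.0 / abs(pole)             # rough 2% settling-time estimate
    if worst is None or settle_time > worst:
        worst = settle_time

print("worst-case settling time over the uncertainty set:", worst)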
One of the most difficult parts of designing a good control system is modeling the behavior of the plant. There are a variety of reasons why modeling is difficult.
In an embedded system, computation resources and cost are a significant issue. The issue for the control engineer is to synthesize a model that is simple enough to implement within these constraints but performs accurately enough to meet the performance requirements. The robust control engineer also wants this simple model to be insensitive to uncertainty. This simplification of the plant model is often referred to as model reduction. General issues related to the difficulty of synthesizing good models are covered well by [Chandraseken98]. A more detailed treatment of modeling for a variety of physical system types can be found in [Close78].
One technique for handling the model uncertainty that often occurs at high frequencies is to balance performance and robustness in the system through gain scheduling. A high gain means that the system will respond quickly to differences between the desired state and the actual state of the plant. At low frequencies, where the plant is accurately modeled, a high loop gain (giving a closed-loop response near 1) results in high performance of the system. This region of operation is called the performance band. At high frequencies, where the plant is not modeled accurately, the gain is lower. A low gain at high frequencies results in a larger error term between the measured output and the reference signal; in this region, called the robustness band, the feedback from the output is essentially ignored. The gain is changed over frequency through the transfer function, by placing its poles and zeros to form a filter. Between the performance and robustness regions there is a transition region, in which the controller performs well for neither performance nor robustness. The transition region cannot be made arbitrarily small because its width depends on the number of poles and zeros of the transfer function, and adding terms to the transfer function increases the complexity of the control system. Thus, there is a trade-off between the simplicity of the model and the minimal size of the transition band. Gain scheduling is covered by [Ackermann93].
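The sketch below illustrates this shaping of the loop gain over frequency for an assumed loop transfer function L(s): high loop gain gives a closed-loop response near 1 (performance band), low loop gain makes the loop behave as if the feedback were ignored (robustness band), with a transition region in between.

import numpy as np

# Sketch: shaping the loop gain over frequency.  High loop gain at low
# frequencies gives good tracking (performance band); low loop gain at high
# frequencies ignores the poorly modeled plant (robustness band).  The loop
# transfer function L(s) below is an illustrative assumption.
freqs = np.logspace(-2, 3, 6)                  # rad/s, from well below to well above crossover
s = 1j * freqs
L = 100.0 / ((s + 1.0) * (0.01 * s + 1.0))     # high gain at low freq, rolls off at high freq

loop_gain = np.abs(L)
closed_loop = np.abs(L / (1.0 + L))            # ~1 in the performance band
sensitivity = np.abs(1.0 / (1.0 + L))          # ~1 in the robustness band (feedback ignored)

for f, lg, cl, sv in zip(freqs, loop_gain, closed_loop, sensitivity):
    band = "performance" if lg > 10 else ("robustness" if lg < 0.1 else "transition")
    print(f"w={f:8.2f}  |L|={lg:8.3f}  |T|={cl:.3f}  |S|={sv:.3f}  ({band})")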
There are a variety of techniques that have been developed for robust control. These techniques are difficult to understand and tedious to implement. Descriptions of these techniques in papers and books tend to focus on the details of the mathematics and not the overall concept. This section attempts to catalog the major ones and briefly describe the basic concept behind each technique. A detailed understanding of a particular technique requires extensive study. This study has not been undertaken by the author of this report.
Adaptive control - An adaptive control system sets up observers for each significant state variable in the system. The system can adjust each observer to account for time varying parameters of the system. In an adaptive system, there is always a dual role of the control system. The output is to be brought closer to the desired input while, at the same time, the system continues to learn about changes in the system parameters. This method sometimes suffers from problems in convergence for the system parameters. Background information on this technique can be found in [Astrom96].
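A classic, minimal illustration of this dual role is gradient adaptation of a single unknown gain using the MIT rule; the plant gain, reference model, and adaptation rate below are assumptions.

import numpy as np

# Sketch of the dual role in adaptive control: the loop both tracks the
# reference and keeps learning an unknown plant parameter, using the classic
# MIT-rule gradient update for a single unknown gain.
dt, steps = 0.01, 5000
k_plant = 2.0          # true (unknown) plant gain:      y   = k_plant * theta * u_c
k_model = 1.0          # reference model gain:           y_m = k_model * u_c
gamma = 0.5            # adaptation rate
theta = 0.0            # adjustable feedforward gain, learned online

for i in range(steps):
    u_c = np.sin(0.5 * i * dt)          # persistent reference input
    y_m = k_model * u_c                 # desired behavior
    y = k_plant * theta * u_c           # actual plant output
    e = y - y_m                         # tracking error
    theta += -gamma * e * y_m * dt      # MIT rule: d(theta)/dt = -gamma * e * y_m

print("learned theta:", round(theta, 3), " ideal:", k_model / k_plant)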
H2 and Hinfinity - These are frequency domain techniques in which norms are used to measure control system properties; a norm is an abstraction of the concept of length. H2 control seeks to bound the power gain of the system, while Hinfinity control seeks to bound the energy gain of the system. Large gains in power or energy indicate operation of the system near a pole of the transfer function, and such situations are unstable. H2 and Hinfinity control are discussed in [Chandrasekharan96].
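Both norms can be estimated directly from a frequency response. The sketch below does so for an assumed stable transfer function: the H-infinity norm is taken as the peak magnitude and the H2 norm from the integral of the squared magnitude.

import numpy as np

# Sketch: numerically estimating the H2 and H-infinity norms of a stable
# transfer function G(s) = 1/(s^2 + s + 1) from its frequency response.
# The plant and the frequency grid are illustrative assumptions.
w = np.linspace(0.0, 200.0, 200001)            # dense grid of frequencies, rad/s
s = 1j * w
G = 1.0 / (s**2 + s + 1.0)
mag = np.abs(G)

h_inf = mag.max()                               # H-infinity norm: peak magnitude (worst-case energy gain)
h_2 = np.sqrt(np.trapz(mag**2, w) / np.pi)      # H2 norm: sqrt( (1/pi) * integral of |G(jw)|^2 dw )

print("H-infinity norm (approx):", round(h_inf, 3))   # exact value ~1.155 (peak near w ~ 0.707)
print("H2 norm (approx):        ", round(h_2, 3))     # exact value ~0.707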
Parameter Estimation - Parameter estimation techniques establish boundaries in the frequency domain that cannot be crossed if stability is to be maintained. These boundaries are evaluated against given uncertainty vectors. The technique is graphical and has some similarities to the root-locus technique. Its advancement is based upon computational simplifications in evaluating whether multiple uncertainties cause the system to cross a stability boundary. These techniques claim to give the user clues on how to change the system to make it more insensitive to uncertainties. A detailed treatment can be found in [Ackermann93].
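A crude version of the idea can be sketched by gridding two uncertain physical parameters and marking where the closed-loop poles cross into the right half-plane; the characteristic polynomial below is an illustrative assumption, not the computational machinery of [Ackermann93].

import numpy as np

# Sketch of the parameter-space idea: grid two uncertain physical parameters,
# and for each combination check whether any closed-loop pole crosses into the
# right half-plane.  The characteristic polynomial below (a second-order plant
# with uncertain damping c and stiffness k under unity feedback) is an
# illustrative assumption.
c_values = np.linspace(-1.0, 3.0, 9)     # uncertain damping
k_values = np.linspace(-1.0, 3.0, 9)     # uncertain stiffness

for c in c_values:
    row = ""
    for k in k_values:
        poles = np.roots([1.0, c, k + 1.0])   # s^2 + c*s + (k + 1) = 0
        row += " S " if np.all(poles.real < 0) else " U "
    print(row)   # a crude map of the stable (S) / unstable (U) region in parameter space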
Lyapunov - This is claimed to be the only universal technique for assessing non-linear systems. The technique focuses on stability. Lyapunov functions, described as energy-like functions that model the behavior of real systems, are constructed and evaluated along the system trajectory to see whether the first derivative always dissipates energy. Any gain in energy indicates that the system is operating near a pole and will therefore be unstable. Lyapunov techniques are discussed in detail in [Qu98].
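For the linear case, a quadratic Lyapunov function can be constructed directly by solving the Lyapunov equation; the sketch below uses an assumed stable A matrix.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Sketch: constructing a quadratic, energy-like Lyapunov function V(x) = x'Px
# for a linear system dx/dt = A x.  Solving A'P + PA = -Q for a positive
# definite P proves that V decreases (energy is dissipated) along every
# trajectory, hence the system is stable.  A and Q are illustrative assumptions.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Q = np.eye(2)

# solve_continuous_lyapunov solves a*X + X*a^H = q, so pass A' and -Q to get A'P + PA = -Q
P = solve_continuous_lyapunov(A.T, -Q)

print("P =\n", P)
print("P positive definite:", np.all(np.linalg.eigvals(P) > 0))   # yes -> V(x) = x'Px is a Lyapunov function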
Fuzzy Control - Fuzzy control is based upon the construction of fuzzy sets to describe the uncertainty inherent in all variables and a method of combining these variables called fuzzy logic. Fuzzy control is applicable to robust control because it is a method of handling the uncertainty of the system. Fuzzy control is a controversial issue. Its proponents claim the ability to control without the requirement for complex mathematical modeling. It appears to have applications where there are a large number of variables to be controlled and it is intuitively obvious (but not mathematically obvious) how to control the system. One example is the control required to park a car. Refer to [Abramovitch94] for an objective analysis of fuzzy control and references for further reading.
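A minimal sketch of the mechanics (the membership shapes and rule outputs are arbitrary assumptions): the error is fuzzified into overlapping sets, a few common-sense rules map the sets to actions, and a membership-weighted average produces the control output.

# Sketch of a one-input fuzzy controller.  The error (reference minus
# measurement) is fuzzified into three overlapping sets, three common-sense
# rules map them to control actions, and the output is a membership-weighted
# average (a simple defuzzification).

def tri(x, left, peak, right):
    """Triangular membership function: degree to which x belongs to the set."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def fuzzy_control(error):
    # Fuzzification: degree to which the error is "negative", "zero", "positive".
    mu_neg = tri(error, -2.0, -1.0, 0.0)
    mu_zero = tri(error, -1.0, 0.0, 1.0)
    mu_pos = tri(error, 0.0, 1.0, 2.0)
    # Rule base (common-sense rules, no plant model):
    #   IF error is negative THEN output -1;  zero -> 0;  positive -> +1
    num = mu_neg * (-1.0) + mu_zero * 0.0 + mu_pos * (+1.0)
    den = mu_neg + mu_zero + mu_pos
    # Defuzzification: membership-weighted average of the rule outputs.
    return num / den if den > 0.0 else 0.0

for e in (-1.5, -0.5, 0.0, 0.5, 1.5):
    print(f"error={e:+.1f}  control={fuzzy_control(e):+.2f}")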
Because robust control requires a variety of skills to build accurate models of the system, it is related to the system approach of using multi-disciplinary design teams.
All real control systems require some form of I/O. Robust control systems are especially concerned with the uncertainty involved in sensor measurements.
In sampled control systems (digital systems) a key factor in the determination of the stability of the system is the sample rate. Thus, the scheduling methods found in real-time theory are of interest.
Robustness concerns how a system reacts to erroneous or failed inputs or stressful environmental conditions. Some of the uncertainty in a control system is due to these factors.
There is a concern for the extremes of operation in an embedded control system that has safety implications. It is in these extremes that uncertainty is high and robust control methods can be of service.
Good models of systems are difficult to construct. They require a variety of skills from physics, electrical, mechanical and computer engineering to design and implement.
A high volume of research in robust control over the past 15 years has led to a growth in techniques.
The techniques for robust control have been criticized for being inaccessible to the practicing engineer, tedious to apply, of limited applicability to ordinary systems, and often overly conservative.
To bring the techniques into use by industry at large, a variety of tools have been developed. However, there is always an issue of the correctness of the tools, especially when they are used to simplify a very complex technique.
With the high level of research devoted to robust control the gap between robust control theory and its application may be closing.
Notes: This paper takes an objective approach to the controversy surrounding fuzzy control methods. Key points include the fact that fuzzy control is applicable in common-sense situations, that fuzzy logic does not generate control laws, and that the sample rates of successful systems are often much higher than the dynamics of the system.
Notes: This book presents parameter estimation techniques in detail.
Notes: This paper outlines the major early techniques. Like many other "new" techniques in control, the groundwork was laid much earlier and the topics are resurfacing. The author points out the dual role of adaptive control systems.
Notes: This paper provides a good background for the developments in control engineering and their historical context. In fact, this entire issue of IEEE Control Systems is dedicated to the history of control. This paper focuses on conventional control (before 1950).
Notes: Another historical paper, this one gives an outline of the development of techniques for modern control.
Notes: This book attempts to bring the complex techniques of robust control out of the research literature to the practicing engineer. Of the many books on robust control, this appears to be the most readable; at least its introduction and motivation are readable.
Notes: This book is a resource for those interested in the mathematical details of modern control theory. It covers the state variable approach, observability, controllability, stability, and the matrix theorems used in the state variable approach.
Notes: Optimal estimation treats the problem of optimal control with the addition of a noisy environment. An introduction to stochastic control is treated as the combination of optimal control (deterministic) and optimal estimation (non-deterministic).
Notes: The majority of this book is dedicated to signal properties and transformations, which serve as the underlying mathematics for control theory. Laplace transforms are covered in chapter 9, and chapter 11 covers linear feedback theory in some detail, including the root-locus method and the Nyquist stability criterion. This serves as a good introduction to conventional control.
Notes: This book covers modeling of a variety of system types including electrical, mechanical and thermal systems. This book was used as a text at the Rensselaer Polytechnic Institute for all engineering undergraduates.
Notes: This book presents methods for analyzing and designing nonlinear systems. It focuses on the Lyapunov first and second methods for analyzing the stability of systems.
Notes: This site presents tools associated with the parameter estimation techniques described in [Ackermann93]. The tools are bundled as a toolbox in Matlab called Paradise (PArametric Robust Analysis and Design Integrated Software Environment). A high level description of some of the techniques is presented.
Notes: This book presents a mostly non-technical introduction. It is valuable because it traces the history of fuzzy logic from its origins with Lotfi Zadeh. Many applications of fuzzy logic are covered.
Notes: It is interesting to note that a book of this age covers the general control problem and the state estimation problem, as well as parameter estimation and adaptive control. Obviously, many of the ideas in the control field have deep roots.
Notes: This paper covers some of the early developments in robust control theory and their relationship to stability. The problems with H2 and Hinfinity control are also covered.
Many of the techniques for robust control are highly mathematical, having been developed by applied mathematicians. It is sometimes difficult even to understand the basic concept being exploited. Possibly this is an indicator of the (lack of) maturity of the subject.