Stability and Stabilization: An Introduction - Hardcover

Terrell, William J.


Synopsis

Stability and Stabilization is the first intermediate-level textbook that covers stability and stabilization of equilibria for both linear and nonlinear time-invariant systems of ordinary differential equations. Designed for advanced undergraduates and beginning graduate students in the sciences, engineering, and mathematics, the book takes a unique modern approach that bridges the gap between linear and nonlinear systems.


Presenting stability and stabilization of equilibria as a core problem of mathematical control theory, the book emphasizes the subject's mathematical coherence and unity, and it introduces and develops many of the core concepts of systems and control theory. There are five chapters on linear systems and nine chapters on nonlinear systems; an introductory chapter; a mathematical background chapter; a short final chapter on further reading; and appendixes on basic analysis, ordinary differential equations, manifolds and the Frobenius theorem, and comparison functions and their use in differential equations. The introduction to linear system theory presents the full framework of basic state-space theory, providing just enough detail to prepare students for the material on nonlinear systems.


  • Focuses on stability and feedback stabilization

  • Bridges the gap between linear and nonlinear systems for advanced undergraduates and beginning graduate students

  • Balances coverage of linear and nonlinear systems

  • Covers cascade systems

  • Includes many examples and exercises

The synopsis may refer to a different edition of this title.

About the Author

William J. Terrell is associate professor of mathematics and applied mathematics at Virginia Commonwealth University. In 2000, he received a Lester R. Ford Award for excellence in expository writing from the Mathematical Association of America.

From the Back Cover

"This book is a pleasant surprise. William Terrell selects and presents the field's key results in a fresh and unbiased way. He is enthusiastic about the material and his goal of setting forth linear and nonlinear stabilization in a unified format."--Miroslav Krstic, University of California, San Diego

"This textbook has very positive features. The arguments are complete; it does not shy away from making correct proofs one of its main goals; it strikes an unusually good balance between linear and nonlinear systems; and it has many examples and exercises. It is also mathematically sophisticated for an introductory text, and it covers very recent material."--Jan Willems, coauthor of Introduction to Mathematical Systems Theory

Excerpt. © Reprinted by permission. All rights reserved.

Stability and Stabilization

An Introduction

By William J. Terrell

PRINCETON UNIVERSITY PRESS

Copyright © 2009 Princeton University Press
All rights reserved.
ISBN: 978-0-691-13444-4

Contents

List of Figures, xi
Preface, xiii
1 Introduction, 1
2 Mathematical Background, 12
3 Linear Systems and Stability, 49
4 Controllability of Linear Time Invariant Systems, 82
5 Observability and Duality, 109
6 Stabilizability of LTI Systems, 124
7 Detectability and Duality, 138
8 Stability Theory, 161
9 Cascade Systems, 189
10 Center Manifold Theory, 212
11 Zero Dynamics, 233
12 Feedback Linearization of Single-Input Nonlinear Systems, 268
13 An Introduction to Damping Control, 289
14 Passivity, 302
15 Partially Linear Cascade Systems, 331
16 Input-to-State Stability, 359
17 Some Further Reading, 378
Appendix A Notation: A Brief Key, 381
Appendix B Analysis in R and R^n, 383
Appendix C Ordinary Differential Equations, 393
Appendix D Manifolds and the Preimage Theorem; Distributions and the Frobenius Theorem, 403
Appendix E Comparison Functions and a Comparison Lemma, 420
Appendix F Hints and Solutions for Selected Exercises, 430
Bibliography, 443
Index, 451


CHAPTER 1

Introduction


In this short introductory chapter, we introduce the main problem of stability and stabilization of equilibria, and indicate briefly the central role it plays in mathematical control theory. The presentation here is mostly informal. Precise definitions are given later. The chapter serves to give some perspective while stating the primary theme of the text.

We start with a discussion of simple equations from an elementary differential equations course in order to contrast open loop control and feedback control. These examples lead us to a statement of the main problem considered in the book, followed by an indication of the central importance of stability and stabilization in mathematical control theory. We then note a few important omissions. A separate section gives a complete chapter-by-chapter description of the book. The final section of the chapter is a list of suggested collateral reading.


1.1 OPEN LOOP CONTROL

Students of elementary differential equations already have experience with open loop controls. These controls appear as a given time-dependent forcing term in the second-order linear equations that are covered in the first course on the subject. A couple of simple examples will serve to illustrate the notion of open loop control and allow us to set the stage for a discussion of feedback control in the next section.

The Forced Harmonic Oscillator. Consider the nonhomogeneous linear mass-spring equation with unit mass and unit spring constant,

ÿ + y = u(t).

We use ẏ and ÿ to denote the first and second derivatives of y(t) with respect to time. The equation involves a known right-hand side, which can be viewed as a preprogrammed, or open loop, control defined by u(t). The general real-valued solution for such equations is considered in differential equations courses, and it takes the form

y(t) = yh(t) + yp(t),

where yp(t) is any particular solution of the nonhomogeneous equation and yh(t) denotes the general solution of the homogeneous equation, ÿ + y = 0. For this mass-spring equation, we have

yh(t) = c1 cos t + c2 sin t,

where the constants c1 and c2 are uniquely determined by initial conditions for y(0) and ẏ(0).

Suppose the input signal is u(t) = sin t. This would not be an effective control, for example, if our purpose is to damp out the motion asymptotically or to regulate the motion to track a specified position or velocity trajectory. Since the frequency of the input signal equals the natural frequency of the unforced harmonic oscillator, ÿ + y = 0, the sine input creates a resonance that produces unbounded motion of the mass.
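The growth at resonance can be checked directly: a standard particular solution of ÿ + y = sin t is yp(t) = -(t/2) cos t, whose amplitude grows linearly in t. The following short Python sketch (our own illustration, not from the text) verifies the solution by substitution and shows the growing envelope:

```python
import math

# At resonance, a particular solution of  y'' + y = sin(t)
# is yp(t) = -(t/2) cos(t); its amplitude grows without bound.
def yp(t):
    return -0.5 * t * math.cos(t)

def yp_second_derivative(t):
    # differentiated by hand: yp'  = -cos(t)/2 + (t/2) sin(t)
    #                         yp'' =  sin(t) + (t/2) cos(t)
    return math.sin(t) + 0.5 * t * math.cos(t)

# substitution check: yp'' + yp = sin(t) at a few sample times
for t in (0.3, 1.7, 5.2, 12.9):
    assert abs(yp_second_derivative(t) + yp(t) - math.sin(t)) < 1e-12

# the envelope grows linearly: |yp(2*pi*k)| = pi*k
print(abs(yp(2 * math.pi)))    # ~3.14159
print(abs(yp(20 * math.pi)))   # ~31.4159
```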

On the other hand, the decaying input u(t) = e^(-t) yields a particular solution given by yp(t) = (1/2)e^(-t). In this case, every solution approaches a periodic response as t → ∞, given by yh(t), which depends on the initial conditions y(0) and ẏ(0), but not on the input signal.
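This particular solution is easy to verify by substitution, since the second derivative of (1/2)e^(-t) is again (1/2)e^(-t); a quick numerical check (an illustrative sketch, not from the text):

```python
import math

def u(t):
    return math.exp(-t)          # decaying open-loop input

def yp(t):
    return 0.5 * math.exp(-t)    # candidate particular solution

def yp_second_derivative(t):
    return 0.5 * math.exp(-t)    # (e^{-t}/2)'' = e^{-t}/2

# substitution check: yp'' + yp = u(t)
for t in (0.0, 1.0, 4.5):
    assert abs(yp_second_derivative(t) + yp(t) - u(t)) < 1e-12

# yp decays to zero, so y(t) = yh(t) + yp(t) approaches the
# periodic response yh(t)
assert yp(20.0) < 1e-8
```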

Suppose we wanted to apply a continuous input signal which would guarantee that all solutions approach the origin defined by zero position and zero velocity. It is not difficult to see that we cannot do this with a continuous open loop control. The theory for second-order linear equations implies that there is no continuous open loop control u(t) such that each solution of ÿ + y = u(t) approaches the origin as t → ∞, independently of initial conditions.


The Double Integrator. An even simpler equation is ÿ = u(t). The general solution has the form y(t) = c1 + c2t + yp(t), where yp(t) is a particular solution that depends on u(t). Again, there is no continuous control u(t) that will guarantee that the solutions will approach the origin defined by zero position and zero velocity, independently of initial conditions.
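By contrast, a control that responds to the state, of the kind discussed in the next section, can do what no open loop control can. For example, the (illustrative) feedback law u = -y - ẏ turns the double integrator into ÿ + ẏ + y = 0, whose solutions all decay to the origin. A minimal forward-Euler simulation sketch, under our own choice of step size and gains:

```python
import math

# Double integrator y'' = u with the illustrative feedback
# u = -y - y', giving the closed loop y'' + y' + y = 0.
def simulate(y0, v0, h=0.01, steps=2000):
    y, v = y0, v0
    for _ in range(steps):
        u = -y - v                    # feedback uses the current state
        y, v = y + h * v, v + h * u   # forward-Euler update
    return y, v

# from several initial conditions, the state approaches the origin
for y0, v0 in ((1.0, 0.0), (0.0, 1.0), (-2.0, 3.0)):
    y, v = simulate(y0, v0)
    assert math.hypot(y, v) < 1e-3
```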


Open loop, or preprogrammed, control does not respond to the state of the system it controls during operation. A standard feature of engineering design involves the idea of injecting a signal into a system to determine the response to an impulse, step, or ramp input signal. Recent work on the active approach to the design of signals for failure detection uses open loop controls as test signals to detect abnormal behavior; an understanding of such open loop controls may enable more autonomous operation of equipment and condition-based maintenance, resulting in less costly or safer operation.

The main focus of this book is on principles of stability and feedback stabilization of an equilibrium of a dynamical system. The next section explains this terminology and gives a general statement of this core problem of dynamics and control.


1.2 THE FEEDBACK STABILIZATION PROBLEM

The main theme of stability and stabilization is focused by an emphasis on time invariant (autonomous) systems of the form

ẋ = f(x),

where f : D ⊂ R^n → R^n is a continuously differentiable mapping (a smooth vector field on an open set D ⊂ R^n) and ẋ := dx/dt. If f is continuously differentiable, then f satisfies a local Lipschitz continuity condition in a neighborhood of each point in its domain. From the theory of ordinary differential equations, the condition of local Lipschitz continuity of f guarantees the existence and uniqueness of solutions of initial value problems

ẋ = f(x), x(0) = x0,

where x0 is a given point of D.
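Once f is locally Lipschitz, such an initial value problem also has a well-behaved numerical solution; a minimal classical fourth-order Runge-Kutta sketch (function names are ours, not from the text):

```python
import math

def rk4_step(f, x, h):
    # one classical fourth-order Runge-Kutta step for the
    # autonomous system x' = f(x)
    k1 = f(x)
    k2 = f([xi + 0.5 * h * ki for xi, ki in zip(x, k1)])
    k3 = f([xi + 0.5 * h * ki for xi, ki in zip(x, k2)])
    k4 = f([xi + h * ki for xi, ki in zip(x, k3)])
    return [xi + (h / 6.0) * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

def solve_ivp(f, x0, h, steps):
    # integrate x' = f(x), x(0) = x0 over the given number of steps
    x = list(x0)
    for _ in range(steps):
        x = rk4_step(f, x, h)
    return x

# sanity check on x' = -x, x(0) = 1: exact solution is e^{-t}
x = solve_ivp(lambda s: [-s[0]], [1.0], h=0.01, steps=100)
assert abs(x[0] - math.exp(-1.0)) < 1e-8
```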

The state of the system at time t is described by the vector x. Assuming that f(0) = 0, so that the origin is an equilibrium (constant) solution of the system, the core problem is to determine the stability properties of the equilibrium. The main emphasis is on conditions for asymptotic stability of the equilibrium. A precise definition of the term asymptotic stability of x = 0 is given later. For the moment, we simply state its intuitive meaning: Solutions x(t)...
