This textbook offers a concise yet rigorous introduction to calculus of variations and optimal control theory, and is a self-contained resource for graduate students in engineering, applied mathematics, and related subjects. Designed specifically for a one-semester course, the book begins with calculus of variations, preparing the ground for optimal control. It then gives a complete proof of the maximum principle and covers key topics such as the Hamilton-Jacobi-Bellman theory of dynamic programming and linear-quadratic optimal control. Calculus of Variations and Optimal Control Theory also traces the historical development of the subject and features numerous exercises, notes and references at the end of each chapter, and suggestions for further study.
Daniel Liberzon is associate professor of electrical and computer engineering at the University of Illinois, Urbana-Champaign. He is the author of Switching in Systems and Control.
"A very scholarly and concise introduction to optimal control theory. Liberzon nicely balances rigor and accessibility, and provides fascinating historical perspectives and thought-provoking exercises. A course based on this book will be a pleasure to take."--Andrew R. Teel, University of California, Santa Barbara
"A very scholarly and concise introduction to optimal control theory. Liberzon nicely balances rigor and accessibility, and provides fascinating historical perspectives and thought-provoking exercises. A course based on this book will be a pleasure to take."--Andrew R. Teel, University of California, Santa Barbara
Contents:
Preface
1 Introduction
2 Calculus of Variations
3 From Calculus of Variations to Optimal Control
4 The Maximum Principle
5 The Hamilton-Jacobi-Bellman Equation
6 The Linear Quadratic Regulator
7 Advanced Topics
Bibliography
Index
1.1 OPTIMAL CONTROL PROBLEM
We begin by describing, very informally and in general terms, the class of optimal control problems that we want to eventually be able to solve. The goal of this brief motivational discussion is to fix the basic concepts and terminology without worrying about technical details.
The first basic ingredient of an optimal control problem is a control system. It generates possible behaviors. In this book, control systems will be described by ordinary differential equations (ODEs) of the form
$\dot{x} = f(t, x, u), \qquad x(t_0) = x_0$ (1.1)
where $x$ is the state taking values in $\mathbb{R}^n$, $u$ is the control input taking values in some control set $U \subset \mathbb{R}^m$, $t$ is time, $t_0$ is the initial time, and $x_0$ is the initial state. Both $x$ and $u$ are functions of $t$, but we will often suppress their time arguments.
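To make the role of (1.1) concrete, here is a minimal numerical sketch (not from the book) that integrates such a system by the forward Euler method. The names `simulate`, `f`, and `u` are illustrative assumptions; a serious implementation would use an adaptive ODE solver.

```python
import numpy as np

def simulate(f, x0, u, t0, tf, dt=1e-3):
    """Integrate x' = f(t, x, u(t)) from x(t0) = x0 by forward Euler.

    f : right-hand side f(t, x, u), returning an array of shape (n,)
    u : control signal as a function of time, u(t) -> array of shape (m,)
    Returns the time grid and the state trajectory along it.
    """
    ts = np.arange(t0, tf + dt, dt)
    xs = np.empty((len(ts), len(x0)))
    xs[0] = x0
    for k in range(len(ts) - 1):
        xs[k + 1] = xs[k] + dt * np.asarray(f(ts[k], xs[k], u(ts[k])))
    return ts, xs
```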
The second basic ingredient is the cost functional. It associates a cost with each possible behavior. For given initial data $(t_0, x_0)$, the behaviors are parameterized by control functions $u$. Thus, the cost functional assigns a cost value to each admissible control. In this book, cost functionals will be denoted by $J$ and will be of the form
$$J(u) := \int_{t_0}^{t_f} L(t, x(t), u(t))\, dt + K(t_f, x_f) \qquad (1.2)$$
where $L$ and $K$ are given functions (running cost and terminal cost, respectively), $t_f$ is the final (or terminal) time, which is either free or fixed, and $x_f := x(t_f)$ is the final (or terminal) state, which is either free or fixed or belongs to some given target set. Note again that $u$ itself is a function of time; this is why we say that $J$ is a functional (a real-valued function on a space of functions).
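Since $J$ assigns a number to each admissible control, it can be approximated on a discrete time grid once the corresponding trajectory is known. The sketch below is an illustration under stated assumptions, not the book's code; it pairs with the hypothetical `simulate` helper above, with `L` and `K` playing the roles of the running and terminal costs in (1.2).

```python
def cost(L, K, ts, xs, us):
    """Riemann-sum approximation of J(u) from (1.2): the integral of
    L(t, x, u) over [t0, tf] plus the terminal cost K(tf, xf).

    ts, xs : time grid and state trajectory (e.g. from simulate above)
    us     : control values sampled on the same grid as ts
    """
    dt = ts[1] - ts[0]
    running = dt * sum(L(t, x, u) for t, x, u in zip(ts[:-1], xs[:-1], us[:-1]))
    return running + K(ts[-1], xs[-1])
```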
The optimal control problem can then be posed as follows: Find a control u that minimizes J(u) over all admissible controls (or at least over nearby controls). Later we will need to come back to this problem formulation and fill in some technical details. In particular, we will need to specify what regularity properties should be imposed on the function f and on the admissible controls u to ensure that state trajectories of the control system are well defined. Several versions of the above problem (depending, for example, on the role of the final time and the final state) will be stated more precisely when we are ready to study them. The reader who wishes to preview this material can find it in Section 3.3.
It can be argued that optimality is a universal principle of life, in the sense that many, if not most, processes in nature are governed by solutions to some optimization problems (although we may never know exactly what is being optimized). We will soon see that fundamental laws of mechanics can be cast in an optimization context. From an engineering point of view, optimality provides a very useful design principle, and the cost to be minimized (or the profit to be maximized) is often naturally contained in the problem itself. Some examples of optimal control problems arising in applications include the following:
• Send a rocket to the moon with minimal fuel consumption.
• Produce a given amount of chemical in minimal time and/or with minimal amount of catalyst used (or maximize the amount produced in given time).
• Bring sales of a new product to a desired level while minimizing the amount of money spent on the advertising campaign.
• Maximize throughput or accuracy of information transmission over a communication channel with a given bandwidth/capacity.
The reader will easily think of other examples. Several specific optimal control problems will be examined in detail later in the book. We briefly discuss one simple example here to better illustrate the general problem formulation.
Example 1.1 Consider a simple model of a car moving on a horizontal line. Let $x \in \mathbb{R}$ be the car's position and let $u$ be the acceleration, which acts as the control input. We put a bound on the maximal allowable acceleration by letting the control set $U$ be the bounded interval $[-1, 1]$ (negative acceleration corresponds to braking). The dynamics of the car are $\ddot{x} = u$. In order to arrive at a first-order differential equation model of the form (1.1), let us relabel the car's position $x$ as $x_1$ and denote its velocity $\dot{x}$ by $x_2$. This gives the control system $\dot{x}_1 = x_2$, $\dot{x}_2 = u$ with state $(x_1, x_2)^T \in \mathbb{R}^2$. Now, suppose that we want to "park" the car at the origin, i.e., bring it to rest there, in minimal time. This objective is captured by the cost functional (1.2) with the constant running cost $L \equiv 1$, no terminal cost ($K \equiv 0$), and the fixed final state $x_f = (0, 0)^T$. We will solve this optimal control problem in Section 4.4.1. (The basic form of the optimal control strategy may be intuitively obvious, but obtaining a complete description of the optimal control requires some work.)
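Without anticipating the derivation in Section 4.4.1, the intuitively obvious strategy (full acceleration, then full braking) can be tried out numerically. The sketch below simulates the double integrator under the classical bang-bang switching-curve law $u = -\operatorname{sgn}\bigl(x_1 + \tfrac{1}{2} x_2 |x_2|\bigr)$, stated here without proof; the function name `park`, the step size, and the stopping tolerance are illustrative assumptions.

```python
import numpy as np

def park(x1, x2, dt=1e-3, tol=1e-2):
    """Simulate x1' = x2, x2' = u under the bang-bang law
    u = -sgn(x1 + x2*|x2|/2) until the state is within tol of the
    origin; returns the elapsed time. A rough numerical experiment,
    not the analysis carried out in Section 4.4.1."""
    t = 0.0
    while x1**2 + x2**2 > tol**2:
        s = x1 + 0.5 * x2 * abs(x2)            # switching function
        u = -np.sign(s) if s != 0 else -np.sign(x2)
        x1, x2 = x1 + dt * x2, x2 + dt * u     # forward Euler step
        t += dt
    return t

print(park(1.0, 0.0))  # roughly 2 time units for the initial state (1, 0)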
In this book we focus on the mathematical theory of optimal control. We will not undertake an in-depth study of any of the applications mentioned above. Instead, we will concentrate on the fundamental aspects common to all of them. After finishing this book, the reader familiar with a specific application domain should have no difficulty reading papers that deal with applications of optimal control theory to that domain, and will be prepared to think creatively about new ways of applying the theory.
We can view the optimal control problem as that of choosing the best path among all paths feasible for the system, with respect to the given cost functional. In this sense, the problem is infinite-dimensional, because the space of paths is an infinite-dimensional function space. This problem is also a dynamic optimization problem, in the sense that it involves a dynamical system and time. However, to gain appreciation for this problem, it will be useful to first recall some basic facts about the more standard static finite-dimensional optimization problem, concerned with finding a minimum of a given function $f$: ...