Robust Optimization (Princeton Series in Applied Mathematics) - Hardcover

Book 7 of 33: Princeton Series in Applied Mathematics

Ben-Tal, Aharon; El Ghaoui, Laurent; Nemirovski, Arkadi


Synopsis

Robust optimization is still a relatively new approach to optimization problems affected by uncertainty, but it has already proved so useful in real applications that it is difficult to tackle such problems today without considering this powerful methodology. Written by the principal developers of robust optimization, and describing the main achievements of a decade of research, this is the first book to provide a comprehensive and up-to-date account of the subject. Robust optimization is designed to meet some major challenges associated with uncertainty-affected optimization problems: to operate under lack of full information on the nature of uncertainty; to model the problem in a form that can be solved efficiently; and to provide guarantees about the performance of the solution. The book starts with a relatively simple treatment of uncertain linear programming, proceeding with a deep analysis of the interconnections between the construction of appropriate uncertainty sets and the classical chance constraints (probabilistic) approach. It then develops the robust optimization theory for uncertain conic quadratic and semidefinite optimization problems and dynamic (multistage) problems. The theory is supported by numerous examples and computational illustrations. An essential book for anyone working on optimization and decision making under uncertainty, Robust Optimization also makes an ideal graduate textbook on the subject.

The synopsis may refer to a different edition of this title.

About the Authors

Aharon Ben-Tal, Laurent El Ghaoui & Arkadi Nemirovski

Excerpt. © Reprinted by permission. All rights reserved.

Robust Optimization

By Aharon Ben-Tal, Laurent El Ghaoui, and Arkadi Nemirovski

Princeton University Press

Copyright © 2009 Princeton University Press
All rights reserved.

ISBN: 978-0-691-14368-2

Contents

Preface

PART I. ROBUST LINEAR OPTIMIZATION
Chapter 1. Uncertain Linear Optimization Problems and their Robust Counterparts
Chapter 2. Robust Counterpart Approximations of Scalar Chance Constraints
Chapter 3. Globalized Robust Counterparts of Uncertain LO Problems
Chapter 4. More on Safe Tractable Approximations of Scalar Chance Constraints

PART II. ROBUST CONIC OPTIMIZATION
Chapter 5. Uncertain Conic Optimization: The Concepts
Chapter 6. Uncertain Conic Quadratic Problems with Tractable RCs
Chapter 7. Approximating RCs of Uncertain Conic Quadratic Problems
Chapter 8. Uncertain Semidefinite Problems with Tractable RCs
Chapter 9. Approximating RCs of Uncertain Semidefinite Problems
Chapter 10. Approximating Chance Constrained CQIs and LMIs
Chapter 11. Globalized Robust Counterparts of Uncertain Conic Problems
Chapter 12. Robust Classification and Estimation

PART III. ROBUST MULTI-STAGE OPTIMIZATION
Chapter 13. Robust Markov Decision Processes
Chapter 14. Robust Adjustable Multistage Optimization

PART IV. SELECTED APPLICATIONS
Chapter 15. Selected Applications

Appendix A. Notation and Prerequisites
Appendix B. Some Auxiliary Proofs
Appendix C. Solutions to Selected Exercises
Bibliography
Index

Chapter One

Uncertain Linear Optimization Problems and their Robust Counterparts

In this chapter, we introduce the concept of the uncertain Linear Optimization problem and its Robust Counterpart, and study the computational issues associated with the emerging optimization problems.

1.1 DATA UNCERTAINTY IN LINEAR OPTIMIZATION

Recall that the Linear Optimization (LO) problem is of the form

$\min_x \left\{ c^T x + d : A x \le b \right\}$ (1.1.1)

where $x \in \mathbb{R}^n$ is the vector of decision variables, $c \in \mathbb{R}^n$ and $d \in \mathbb{R}$ form the objective, $A$ is an $m \times n$ constraint matrix, and $b \in \mathbb{R}^m$ is the right hand side vector.

Clearly, the constant term $d$ in the objective, while affecting the optimal value, does not affect the optimal solution, which is why it is traditionally omitted. As we shall see, when treating LO problems with uncertain data there are good reasons not to neglect this constant term.
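Problem (1.1.1) can be sketched numerically. Below is a minimal example using scipy.optimize.linprog; the data values are purely illustrative and not taken from the book. Note that linprog, like most LO solvers, ignores the constant term $d$, which must be added back to the reported optimal value by hand:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data for min_x { c^T x + d : A x <= b, x >= 0 }
c = np.array([-1.0, -2.0])   # objective coefficients
d = 5.0                      # constant term (shifts the optimal value only)
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])   # m x n constraint matrix
b = np.array([4.0, 3.0])     # right hand side vector

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
x_star = res.x               # optimal solution: unaffected by d
optimal_value = res.fun + d  # add d back, since the solver drops it
```

For this data the optimum is $x^* = (0, 4)$ with $c^T x^* = -8$, so the optimal value including $d$ is $-3$; changing $d$ moves the value but not $x^*$.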

The structure of problem (1.1.1) is given by the number m of constraints and the number n of variables, while the data of the problem are the collection (c, d, A, b), which we will arrange into an (m + 1) x (n + 1) data matrix

$D = \begin{bmatrix} c^T & d \\ A & b \end{bmatrix}.$
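Assembling the collection $(c, d, A, b)$ into the $(m+1) \times (n+1)$ data matrix $D$ can be sketched with numpy (the data values are illustrative):

```python
import numpy as np

# Illustrative data (m = 2 constraints, n = 2 variables)
c = np.array([-1.0, -2.0])
d = 5.0
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
b = np.array([4.0, 3.0])

# D = [[c^T, d], [A, b]], an (m+1) x (n+1) block matrix
D = np.block([[c.reshape(1, -1), np.array([[d]])],
              [A, b.reshape(-1, 1)]])
```

Here D has shape (3, 3): the first row holds the objective data $(c^T, d)$, and the remaining rows hold the constraint data $(A, b)$.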

Usually not all constraints of an LO program, as it arises in applications, are of the form $a^T x \le \mathrm{const}$; there can be linear "$\ge$" inequalities and linear equalities as well. Clearly, constraints of the latter two types can be represented equivalently by linear "$\le$" inequalities, and we will assume henceforth that these are the only constraints in the problem.
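This standard reduction ($a^T x \ge \beta$ becomes $(-a)^T x \le -\beta$, and $a^T x = \beta$ becomes the pair $a^T x \le \beta$, $(-a)^T x \le -\beta$) can be sketched as follows; the function name and data are illustrative:

```python
import numpy as np

def to_leq_form(A_geq, b_geq, A_eq, b_eq):
    """Convert ">=" and "==" constraints into an equivalent "<=" system.

    ">=" rows are negated; each "==" row becomes two opposite "<=" rows.
    """
    A_le = np.vstack([-A_geq, A_eq, -A_eq])
    b_le = np.concatenate([-b_geq, b_eq, -b_eq])
    return A_le, b_le

# Example: x_1 >= 2 and x_2 == 3 (in two variables)
A_geq, b_geq = np.array([[1.0, 0.0]]), np.array([2.0])
A_eq, b_eq = np.array([[0.0, 1.0]]), np.array([3.0])
A_le, b_le = to_leq_form(A_geq, b_geq, A_eq, b_eq)
# A_le x <= b_le encodes exactly the same feasible set
```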

Typically, the data of real-world LOs (Linear Optimization problems) are not known exactly. The most common reasons for data uncertainty are as follows:

- Some of the data entries (future demands, returns, etc.) do not exist when the problem is solved and hence are replaced with their forecasts; these data entries are thus subject to prediction errors.

- Some of the data (parameters of technological devices/processes, contents associated with raw materials, etc.) cannot be measured exactly; in reality their values drift around the measured "nominal" values, so these data are subject to measurement errors.

- Some of the decision variables (intensities with which we intend to use various technological processes, parameters of physical devices we are designing, etc.) cannot be implemented exactly as computed; the resulting implementation errors are equivalent to appropriate artificial data uncertainties.

Indeed, the contribution of a particular decision variable $x_j$ to the left hand side of constraint $i$ is the product $a_{ij} x_j$. Hence the consequences of an additive implementation error $x_j \to x_j + \epsilon$ are as if there were no implementation error at all, but the left hand side of the constraint got an extra additive term $a_{ij}\epsilon$, which, in turn, is equivalent to the perturbation $b_i \to b_i - a_{ij}\epsilon$ in the right hand side of the constraint. The consequences of a more typical multiplicative implementation error $x_j \to (1+\epsilon) x_j$ are as if there were no implementation error, but each of the data coefficients $a_{ij}$ were subject to the perturbation $a_{ij} \to (1+\epsilon) a_{ij}$. Similarly, the influence of additive and multiplicative implementation errors in $x_j$ on the value of the objective can be mimicked by appropriate perturbations in $d$ or $c_j$.
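A quick numerical sanity check of these equivalences (with arbitrary data and $\epsilon = 0.01$):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(5)   # one row of the constraint matrix
x = rng.standard_normal(5)   # a candidate solution
eps = 0.01

# Multiplicative error x -> (1 + eps) x in the variables is the same as
# perturbing every coefficient a_j -> (1 + eps) a_j in the data:
lhs_implementation_error = a @ ((1 + eps) * x)
lhs_data_perturbation = ((1 + eps) * a) @ x

# Additive error x_j -> x_j + eps in a single variable j adds the
# term a_j * eps to the left hand side (equivalently, b -> b - a_j * eps):
j = 2
e_j = np.zeros(5); e_j[j] = 1.0
lhs_additive = a @ (x + eps * e_j)
lhs_extra_term = a @ x + a[j] * eps
```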

In the traditional LO methodology, a small data uncertainty (say, 1% or less) is simply ignored; the problem is solved as if the given ("nominal") data were exact, and the resulting nominal optimal solution is what is recommended for use, in the hope that small data uncertainties will not significantly affect the feasibility and optimality properties of this solution, or that small adjustments of the nominal solution will suffice to make it feasible. We are about to demonstrate that these hopes are not necessarily justified, and sometimes even small data uncertainty deserves significant attention.

1.1.1 Introductory Example

Consider the following very simple linear optimization problem:

Example 1.1.1. A company produces two kinds of drugs, DrugI and DrugII, containing a specific...
