This self-contained textbook is an informal introduction to optimization through the use of numerous illustrations and applications. The focus is on analytically solving optimization problems with a finite number of continuous variables. In addition, the authors provide introductions to classical and modern numerical methods of optimization and to dynamic optimization.
The book's overarching point is that most problems may be solved by the direct application of the theorems of Fermat, Lagrange, and Weierstrass. The authors show how the intuition for each of the theoretical results can be supported by simple geometric figures. They include numerous applications through the use of varied classical and practical problems. Even experts may find some of these applications truly surprising.
A basic mathematical knowledge is sufficient to understand the topics covered in this book. More advanced readers, even experts, will be surprised to see how all main results can be grounded on the Fermat-Lagrange theorem. The book can be used for courses on continuous optimization, from introductory to advanced, for any field for which optimization is relevant.
Jan Brinkhuis is Associate Professor of Finance and Mathematical Methods and Techniques at the Econometric Institute of Erasmus University, Rotterdam. Vladimir Tikhomirov holds the Chair of Optimal Control in the Department of Mechanics and Mathematics at the Lomonosov Moscow State University.
"Well written and well organized. The book's examples are highly varied, interesting and well thought out."--Steinar Hauan, Carnegie Mellon University
"An extremely interesting introduction to the field of mathematical optimization. I know of no other book in the field that offers so many illustrations of the applicability of deep theoretical issues in optimization. It will command a broad audience, from beginners to experts."--Kees Roos, Delft University of Technology
Preface
Necessary Conditions: What Is the Point?
Chapter 1. Fermat: One Variable without Constraints
Chapter 2. Fermat: Two or More Variables without Constraints
Chapter 3. Lagrange: Equality Constraints
Chapter 4. Inequality Constraints and Convexity
Chapter 5. Second Order Conditions
Chapter 6. Basic Algorithms
Chapter 7. Advanced Algorithms
Chapter 8. Economic Applications
Chapter 9. Mathematical Applications
Chapter 10. Mixed Smooth-Convex Problems
Chapter 11. Dynamic Programming in Discrete Time
Chapter 12. Dynamic Optimization in Continuous Time
Appendix A. On Linear Algebra: Vector and Matrix Calculus
Appendix B. On Real Analysis
Appendix C. The Weierstrass Theorem on Existence of Global Solutions
Appendix D. Crash Course on Problem Solving
Appendix E. Crash Course on Optimization Theory: Geometrical Style
Appendix F. Crash Course on Optimization Theory: Analytical Style
Appendix G. Conditions of Extremum from Fermat to Pontryagin
Appendix H. Solutions of Exercises of Chapters 1–4
Bibliography
Index
When a quantity is the greatest or the smallest, at that moment its flow is neither forward nor backward. I. Newton
How can one find the maxima and minima of a function f of one variable x without constraints?
1.0 SUMMARY
You can never be too rich or too thin. W. Simpson, wife of Edward VIII
One variable of optimization. The epigraph to this summary describes a view held in upper-class circles in England at the beginning of the previous century. It is meant to surprise, going against the usual view that the optimum lies somewhere between too small and too large: the "golden mean." Many pragmatic problems lead to the search for this golden mean (or the optimal trade-off, or the optimal compromise). For example, suppose you want to play a computer game and your video card does not allow you to have optimal quality ("high-resolution screen") as well as optimal performance ("flowing movements"); then you have to make an optimal compromise. This chapter considers many examples where this golden mean is sought. For example, we will be confronted with the problem that a certain type of vase with one long-stemmed rose in it is unstable if there is too little water in it, but also if there is too much. How much water will give optimal stability? Usually, the reason for such optimization problems is that a trade-off has to be made between two effects. For example, the height of houses in cities like New York or Hong Kong is determined by the following trade-off. On the one hand, you need many people to share the high cost of the land on which the house is built. On the other hand, if you build the house very high, then the specialized construction costs become prohibitive.
Derivative equal to zero. All searches for the "golden mean" can be modeled as problems of optimizing a function f of one variable x, minimization (maximization) if f(x) represents some sort of cost (profit). The following method, due to Fermat, usually gives the correct answer: "put the derivative of f equal to zero," solve the equation, and – if the optimal x has to be an integer – round off to the nearest integer. This is well known from high school, but we try to take a fresh look at this method. For example, we raise the question why this method is so successful. The technical reason for this is of course the great strength of the available calculus for determining derivatives of given functions. We will see that in economic applications a conceptual reason for this success is the equimarginal rule. That is, rational decision makers take a marginal action only if the marginal benefit of the action exceeds the marginal cost; they will continue to take action till marginal benefit equals marginal cost.
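The recipe "put the derivative equal to zero" can be sketched in a few lines of code. The cost function below is our own toy example, not one from the text: f(x) = 100/x + x trades off a term that falls in x against a term that grows in x, and Fermat's method locates the golden mean by solving f'(x) = 0.

```python
# Toy trade-off (illustration, not from the text): one cost falls in x,
# the other grows, so the minimum is where the derivative vanishes.
def f(x):
    return 100.0 / x + x

def f_prime(x):
    return -100.0 / x**2 + 1.0

# Solve f'(x) = 0 by bisection on [1, 100]; f' changes sign there,
# going from negative (cost still falling) to positive (cost rising).
lo, hi = 1.0, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f_prime(mid) < 0:
        lo = mid
    else:
        hi = mid
x_star = 0.5 * (lo + hi)
print(x_star)      # close to 10.0, where -100/x**2 + 1 = 0
print(f(x_star))   # close to 20.0, the minimal cost
```

The closed-form check: f'(x) = 0 gives x² = 100, so x = 10 and f(10) = 20, which the bisection reproduces.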
Snellius's law. The most striking application of the method of Fermat is perhaps the derivation of the law of Snellius on the refraction of light on the boundary between two media—for example, water and air. This law was discovered empirically. The method of Fermat throws a striking light on this technical rule, showing that it is a consequence of the simple principle that light always takes the fastest path (at least for small distances).
1.1 INTRODUCTION
Optimization and the differential calculus. The first general method of solution of extremal problems is due to Pierre de Fermat (1608–1665). In 1638 he presented his idea in a letter to the prominent mathematicians Gilles Personne de Roberval (1602–1675) and Marin Mersenne (1588–1648). Scientific journals did not yet exist, and writing a letter to learned correspondents was a common way to communicate a new discovery. Intuitively, the idea is that the tangent line at the highest or lowest point of the graph of a function is horizontal. Of course, this tangent line is only defined if the graph has no "kink" at this point.
The exact meaning became clear later when Isaac Newton (1642/43–1727) and Gottfried von Leibniz (1646–1716) invented the elements of classical analysis. One of the motivations for creating analysis was the desire of Newton and Leibniz to find general approaches to the solution of problems of maximum and minimum. This was reflected, in particular, in the title of the first published work devoted to the differential calculus (written by Leibniz, published in 1684). It begins with the words "Nova methodus pro maximis et minimis ...."
The Fermat theorem. In his letter to Roberval and Mersenne, Fermat had—from our modern point of view—the following proposition in mind, now called the (one-variable) Fermat theorem (but he could express his idea only for polynomials): if x is a point of local minimum (or maximum) of f, then the main linear part of the increment is equal to zero. The following example illustrates how this idea works.
Example 1.1.1 Verify the idea of Fermat for the function f(x) = x².
Solution. To begin with, we note that the graph of f is a parabola that has its lowest point at x = 0.
Now let us see how the idea of Fermat leads to this point. Let x̂ be a point of local minimum of f. Let x be an arbitrary point close to x̂; write h for the increment of the argument, x - x̂, that is, x = x̂ + h. The increment of the function,
f(x) - f(x̂) = (x̂ + h)² - x̂² = 2x̂h + h²,
is the sum of the main linear part 2x̂h and the remainder h².
This terminology is reasonable: the graph of the function h ↦ 2x̂h is a straight line through the origin, and the term h², which remains, is negligible—in comparison to h—for h small enough. That h² is negligible can be illustrated using decimal notation: if h ≈ 10⁻ᵏ ("k-th decimal behind the point"), then h² ≈ 10⁻²ᵏ ("2k-th decimal behind the point"). For example, if k = 2, then h = 1/100 = .01, and then the remainder h² = 1/10000 = .0001 is negligible in comparison to h.
That the main linear part is equal to zero means that 2x̂h is zero for each h, and so x̂ = 0.
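The decomposition of the increment can also be watched numerically. The base point 3.0 below is an arbitrary choice for illustration: for f(x) = x², the remainder of the increment is exactly h² and shrinks faster than h itself.

```python
# Illustration (arbitrary base point): for f(x) = x**2, the increment
# f(xhat + h) - f(xhat) splits into the linear part 2*xhat*h plus h**2.
xhat = 3.0
for h in [1e-1, 1e-2, 1e-3]:
    increment = (xhat + h)**2 - xhat**2
    linear_part = 2 * xhat * h
    remainder = increment - linear_part
    # remainder equals h**2 up to rounding; remainder/h shrinks like h
    print(h, remainder, remainder / h)
```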
Royal road. If you want a shortcut to the applications in this chapter, then you can read the statements of the Fermat theorem 1.4, the Weierstrass theorem 1.6, and its corollary 1.7, as well as the solutions of examples 1.3.4, 1.3.5, and 1.3.7 (solution 3); thus prepared, you are ready to enjoy as many of the applications in sections 1.4 and 1.6 as you like. After this, you can turn to the next chapter.
1.2 THE DERIVATIVE FOR ONE VARIABLE
1.2.1 What is the derivative?
To solve problems of interest, one has to combine the idea of Fermat with the differential calculus. We begin by recalling the basic notion of the differential calculus for functions of one variable. It is the notion of derivative. The following experiment illustrates the geometrical idea of the derivative.
"Zooming in" approach to the derivative. Choose a point on the graph of a function drawn on a computer screen. Zoom in a couple of times. Then the graph looks like a straight line. Its slope is the derivative. Note that a straight line through a given point is determined by its slope.
Analytical definition of the derivative. For the moment, we restrict our attention to the analytical definition, due to Augustin-Louis Cauchy (1789–1857). This is known from high school: it views the derivative f'(x̂) of a function f at a number x̂ as the limit of the quotient of the increments of the function, f(x̂ + h) - f(x̂), and the argument, h = (x̂ + h) - x̂, if h tends to zero, h → 0 (Fig. 1.1).
Now we will give a more precise formulation of this analytical definition. To begin with, note that the derivative f'(x̂) depends only on the behavior of f in a neighborhood of x̂. This means that for a sufficiently small number ε > 0 it suffices to consider f(x) only for x with |x - x̂| < ε.
We call the set of all numbers x for which
|x - x̂| < ε
the ε-neighborhood of x̂. This can also be defined by the inequalities
x̂ - ε < x < x̂ + ε,
or by the inclusion x ∈ (x̂ - ε, x̂ + ε). The notation f : (x̂ - ε, x̂ + ε) → R denotes that f is defined on the ε-neighborhood of x̂ and takes its values in R, the real numbers.
Definition 1.1 Analytical definition of the derivative. Let x̂ be a real number, ε > 0 a positive number, and f : (x̂ - ε, x̂ + ε) → R a function of one variable x. The function f is called differentiable at x̂ if the limit
f'(x̂) = lim_{h → 0} (f(x̂ + h) - f(x̂))/h
exists. Then this limit is called the (first) derivative of f at x̂ and it is denoted by f'(x̂).
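As a numerical sanity check of this definition (our illustration; the function f(x) = x³ and the point 2.0 are arbitrary choices), the difference quotient settles down to the value the calculus predicts as h shrinks.

```python
# Difference quotients for f(x) = x**3 at the point 2.0 (illustration).
# The calculus gives the derivative 3 * 2**2 = 12 there.
def f(x):
    return x**3

point = 2.0
for h in [1e-1, 1e-3, 1e-5]:
    quotient = (f(point + h) - f(point)) / h
    print(h, quotient)  # approaches 12 as h tends to zero
```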
For a linear function f(x) = ax one has f'(x) = a for all x ∈ R, that is, for all numbers x of the real line R. The following example is more interesting.
Example 1.2.1 Compute the derivative of the quadratic function f(x) = x² at a given number x.
Solution. For each number h one has
f(x + h) - f(x) = (x + h)² - x² = 2xh + h²
and so
(f(x + h) - f(x))/h = 2x + h.
Taking the limit h → 0, we get f'(x) = 2x.
Analogously, one can show that for the function f(x) = xⁿ the derivative is f'(x) = nxⁿ⁻¹. It is time for a more challenging example.
Example 1.2.2 Compute the derivative of the absolute value function f(x) = |x|.
Solution. The answer is obvious if you look at the graph of f and observe that it has a kink at x = 0 (Fig. 1.2). To begin with, you see that f is not differentiable at x = 0. For x > 0 one has f(x) = x and so f'(x) = 1, and for x < 0 one has f(x) = -x and so f'(x) = -1. These results can be expressed by one formula,
|x|' = x/|x| = sgn(x) for x ≠ 0.
Without seeing the graph, we could also compute the derivative using the definition. For each nonzero number h one has
(f(x + h) - f(x))/h = (|x + h| - |x|)/h.
• For x < 0 this equals, provided |h| is small enough,
((-x - h) - (-x))/h = -1,
and this leads to f'(x) = -1.
• For x > 0 a similar calculation gives f'(x) = 1.
• For x = 0 this equals 1 for all h > 0 and -1 for all h < 0. This shows that f'(0) does not exist.
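The situation at x = 0 can be watched numerically (a small illustration of the argument above): the difference quotient stays at +1 for every h > 0 and at -1 for every h < 0, however small |h| gets, so no limit exists.

```python
# Difference quotients for f(x) = |x| at 0: the two one-sided values
# never approach each other, so the derivative at 0 does not exist.
def quotient(h):
    return (abs(0.0 + h) - abs(0.0)) / h

for h in [1e-1, 1e-4, 1e-8]:
    print(h, quotient(h), quotient(-h))  # always 1.0 and -1.0
```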
The geometrical sense of the differentiability of f at x is that the graph of f makes no "jump" at x and has no "kink" at x. The following example illustrates this.
Example 1.2.3 In Figure 1.3, the graphs of three functions f, g, and h are drawn. Determine for each of these functions the points of differentiability.
Solution. The functions g and h are not differentiable at all integers, as the graph of g has "kinks" at these points and the graph of h makes "jumps" at these points. The functions are differentiable at all other points. Finally, f is differentiable at all points.
Sometimes other notation is used for the derivative, such as (d/dx)f(x) or (df/dx)(x). One writes D¹(x) for the set of functions of one variable x for which the first derivative at x exists.
1.2.2 The differential calculus
Newton and Leibniz computed derivatives by using the definition of the derivative. This required great skill, and such computations were beyond the capabilities of their contemporaries. Fortunately, now everyone can learn how to calculate derivatives. The differential calculus makes it possible to determine derivatives of functions in a routine way. This consists of a list of basic examples such as
(xⁿ)' = nxⁿ⁻¹, (sin x)' = cos x, (cos x)' = -sin x, (eˣ)' = eˣ, (ln x)' = 1/x,
and rules such as the product rule
(f(x)g(x))' = f'(x)g(x) + f(x)g'(x)
and the chain rule
(f(g(x)))' = f'(g(x))g'(x).
In the d/dx notation for the derivative this takes on a form that is easy to memorize,
dz/dx = (dz/dy)(dy/dx),
where z is a function of y and y a function of x.
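Both rules can be spot-checked against a central difference quotient; the functions and the point 0.7 below are arbitrary choices for illustration.

```python
import math

# Numerical spot-check (illustration) of the two rules at x = 0.7:
#   product rule:  d/dx [sin(x) * x**2] = cos(x) * x**2 + sin(x) * 2*x
#   chain rule:    d/dx [sin(x**2)]     = cos(x**2) * 2*x
x, h = 0.7, 1e-6

prod = lambda t: math.sin(t) * t**2
numeric_prod = (prod(x + h) - prod(x - h)) / (2 * h)   # central difference
exact_prod = math.cos(x) * x**2 + math.sin(x) * 2 * x

comp = lambda t: math.sin(t**2)
numeric_comp = (comp(x + h) - comp(x - h)) / (2 * h)
exact_comp = math.cos(x**2) * 2 * x

print(abs(numeric_prod - exact_prod) < 1e-6)  # True
print(abs(numeric_comp - exact_comp) < 1e-6)  # True
```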
Modest role of definitions. Thus the definition of the derivative plays a very modest role: to compute a derivative, you use a user-friendly calculus, but not the definition. Users might be pleasantly surprised to find that a similar observation can be made for most definitions.
The use of the differential calculus will be illustrated in the analysis of all concrete optimization problems.
1.2.3 Geologists and sin 1°
If you want an immediate impression of the power of the differential calculus, then you can read about the following holiday experience. This example will provide insight, but the details will not be used in the solution of concrete optimization problems.
Once a colleague of ours met a group of geologists in the mountains. They got excited when they heard he was a mathematician, and what was even stranger was the sort of questions they started asking him. They needed the first four or five decimals of ln 2 and of sin 1°. They even wanted him to explain how they could find these decimals by themselves. "Next time, we might need sin 2 or some cosine and then we won't have you."
Well, not wanting to be unfriendly, our friend agreed to demonstrate this to them on a laptop. However, the best computing power that could be found was a calculator, and this did not even have a sine or logarithm button. Excuse enough not to be able to help, but for some reason he took pity on them and explained how to get these decimals even if you have no computing power. The method is illustrated below for sin 1°.
The geologists explained why they needed these decimals. They possessed some logarithmic lists about certain materials, but these were logarithms to base 2 and the geologists needed to convert these into natural logarithms; therefore, they had to multiply by the constant ln 2. This explains ln 2. They also told a long, enthusiastic story about sin 1°. It involved their wish to make an accurate map of the site of their research, and for this they had to measure angles and do all sorts of calculations.
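The excerpt breaks off before the demonstration, but one standard pencil-and-paper route to those decimals (our sketch, not necessarily the method the authors go on to describe) uses the differential calculus: since (sin x)' = cos x, the approximation sin x ≈ x - x³/6 for x in radians already pins down the first decimals of sin 1°.

```python
import math

# Getting the first decimals of sin(1 degree) by hand (our sketch):
# convert to radians, then use sin x ≈ x - x**3 / 6, whose error for
# small x is below x**5 / 120, far beyond the fifth decimal here.
x = math.pi / 180          # 1 degree in radians, about 0.0174533
approx = x - x**3 / 6      # about 0.0174524
print(approx)
print(math.sin(x))         # agrees with approx to many decimals
```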
(Continues...)
Excerpted from Optimization: Insights and Applications by Jan Brinkhuis and Vladimir Tikhomirov. Copyright © 2005 by Princeton University Press. Excerpted by permission of PRINCETON UNIVERSITY PRESS. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.