A comprehensive, self-contained primer on validated numerics
This textbook provides a comprehensive introduction to the theory and practice of validated numerics, an emerging field that combines the strengths of scientific computing and pure mathematics. In numerous fields ranging from pharmaceutics and engineering to weather prediction and robotics, fast and precise computations are essential. Based on the theory of set-valued analysis, a new suite of numerical methods is developed, producing efficient and reliable solvers for numerous problems in nonlinear analysis. Validated numerics yields rigorous computations that can find all possible solutions to a problem while taking into account all possible sources of error, fast and with guaranteed accuracy.
Validated Numerics offers a self-contained primer on the subject, guiding readers from the basics to more advanced concepts and techniques. This book is an essential resource for those entering this fast-developing field, and it is also the ideal textbook for graduate students and advanced undergraduates needing an accessible introduction to the subject. Validated Numerics features many examples, exercises, and computer labs using MATLAB/C++, as well as detailed appendixes and an extensive bibliography for further reading.
Warwick Tucker is professor of mathematics and principal investigator for the Computer-Aided Proofs in Analysis (CAPA) Group at Uppsala University in Sweden. He has been honored with several awards, including the European Mathematical Society's Prize for Distinguished Contributions in Mathematics, the R. E. Moore Prize for Applications of Interval Analysis, and the Swedish Mathematical Society's Wallenberg Prize.
"Validated Numerics contains introductory material on interval arithmetic and rigorous computations that is easily accessible to students with little background in mathematics and computer programming. I am not aware of any other book like it. The exercises and computer labs make it ideal for the classroom, and the references offer a good starting point for readers trying to gain deeper knowledge in this area."--Zbigniew Galias, AGH University of Science and Technology, Kraków
"A significant contribution, particularly since there are not many texts in this area. Validated Numerics will be read by those interested in interval arithmetic, numerical analysis, and ways to make computer simulations more robust and less susceptible to errors. It is well written and well organized."--A. J. Meir, Auburn University
Preface
Introduction
Chapter 1. Computer Arithmetic
Chapter 2. Interval Arithmetic
Chapter 3. Interval Analysis
Chapter 4. Automatic Differentiation
Chapter 5. Interval Analysis in Action
Chapter 6. Ordinary Differential Equations
Appendix A. Mathematical Foundations
Appendix B. Program Codes
Bibliography
Index
In this chapter, we give an elementary overview of how (and what type of) numbers are represented, stored, and manipulated in a computer. This will provide insight as to why some computations produce grossly incorrect results. This topic is covered much more extensively in, for example, [Mu09], [Hi96], [Ov01], and [Wi63].
1.1 POSITIONAL SYSTEMS
Our everyday decimal number system is a positional system in base 10. Since computer arithmetic is often built on positional systems in other bases (e.g., 2 or 16), we will begin this section by recalling how real numbers are represented in a positional system with an arbitrary integer base β ≥ 2. Setting aside practical restrictions, such as the finite storage capabilities of a computer, any real number can be expressed as an infinite string
(-1)^σ (b_n b_{n-1} ... b_0 . b_{-1} b_{-2} ...)_β,   (1.1)
where b_n, b_{n-1}, ... are integers in the range [0, β - 1], and σ ∈ {0, 1} provides the sign of the number. The real number corresponding to (1.1) is
(-1)^σ Σ_{i=-∞}^{n} b_i β^i = (-1)^σ (b_n β^n + b_{n-1} β^{n-1} + ... + b_0 + b_{-1} β^{-1} + b_{-2} β^{-2} + ...).
If the number ends in an infinite number of consecutive zeros we omit them in the expression (1.1). Thus we write (12.25)_10 instead of (12.25000 ...)_10. Also, we omit any zeros preceding the integer part (-1)^σ (b_n b_{n-1} ... b_0)_β. Thus we write (12.25)_10 instead of (0012.25)_10, and (0.0025)_10 instead of (000.0025)_10. Allowing for either leading or trailing extra zeros is called padding and is not common practice since it leads to redundancies in the representation.
Even without padding, the positional system is slightly flawed. No matter what base we choose, there are still real numbers that do not have a unique representation. For example, the decimal number (12.2549999 ...)_10 is equal to (12.255)_10, and the binary number (100.01101111 ...)_2 is equal to (100.0111)_2. This redundancy, however, can be avoided if we add the requirement that 0 ≤ b_i ≤ β - 2 for infinitely many i.
Exercise 1.1. Prove that any real number x ≠ 0 has a unique representation (1.1) in a positional system (allowing no padding) with integer base β ≥ 2 under the two conditions (a) 0 ≤ b_i ≤ β - 1 for all i, and (b) 0 ≤ b_i ≤ β - 2 for infinitely many i.
Exercise 1.2. What is the correct way to represent zero in a positional system allowing no padding?
1.2 FLOATING POINT NUMBERS
When expressing a real number in the form (1.1), the placement of the decimal point is crucial. The floating point number system provides a more convenient way to represent real numbers. A floating point number is a real number of the form
x = (-1)^σ m × β^e,   (1.2)
where (-1)σ is the sign of x, m is called the mantissa, and e is called the exponent of x. Writing numbers in floating point notation frees us from the burden of keeping track of the decimal point: it always follows the first digit of the mantissa. It is customary to write the mantissa as
m = (b_0 . b_1 b_2 ...)_β,
where, compared to the previous section, the indexing of the b_i has the opposite sign. As this is the standard notation, we will adopt this practice in what follows. Thus we may define the set of floating point numbers in base β as:
F_β = {(-1)^σ m × β^e : m = (b_0.b_1b_2 ...)_β},
where, as before, we require that β is an integer no less than 2, that 0 ≤ b_i ≤ β - 1 for all i, and that 0 ≤ b_i ≤ β - 2 for infinitely many i. The exponent e may be any integer.
Expressing real numbers in floating point form introduces a new type of redundancy. For example, the base 10 number 123 can be expressed as (1.23)_10 × 10^2, (0.123)_10 × 10^3, (0.0123)_10 × 10^4, and so on. In order to have unique representations for non-zero real numbers, we demand that the leading digit b_0 be non-zero, except for the special case x = 0. Floating point numbers satisfying this additional requirement are said to be normal or normalized.
Exercise 1.3. Show that a non-zero floating point number is normal if and only if its associated exponent e is chosen minimal.
So far, we have simply toyed with different representations of the real numbers R. As this set is uncountably infinite, whereas a machine can only store a finite amount of information, more drastic means are called for: we must introduce a new, much smaller set of numbers, designed to fit into a computer, that at the same time approximates the real numbers in some well-defined sense.
As a first step toward this goal, we restrict the number of digits representing the mantissa. This yields the set
F_{β,p} = {x ∈ F_β : m = (b_0.b_1b_2 ... b_{p-1})_β}.
The number p is called the precision of the floating point system. It is a nice exercise to show that although F_{β,p} is a much smaller set than F_β, it is countably infinite. This means that even F_{β,p} is too large for our needs. Note, however, that the restriction 0 ≤ b_i ≤ β - 2 for infinitely many i becomes void in finite precision.
A finite set of floating point numbers can be formed by imposing a fixed precision, as well as bounds on the admissible exponents. Such a set is specified by four integers: the base β, the precision p, and the minimal and maximal exponents e_min and e_max, respectively. Given these quantities, we can define parameterized sets of computer representable floating point numbers:
F_{β,p}^{e_min,e_max} = {x ∈ F_{β,p} : e_min ≤ e ≤ e_max}.
Exercise 1.4. Show that F_{β,p}^{e_min,e_max} is finite, whereas F_{β,p} is countably infinite, and F_β is uncountably infinite, with
F_{β,p}^{e_min,e_max} ⊂ F_{β,p} ⊂ F_β.
(See Appendix A for the different notions of infinity.)
Exercise 1.5. How many normal numbers belong to the set F_{β,p}^{e_min,e_max}?
Using a base other than 10 forces us to rethink which numbers have a finite representation. This can cause some confusion for the novice programmer.
Example 1.2.1. With β = 2 and p < ∞, the number 1/10 is not exactly representable in F_{β,p}. This can be seen by noting that
1/10 = Σ_{i=1}^{∞} (2^{-4i} + 2^{-(4i+1)}) = (2^{-4} + 2^{-5}) + (2^{-8} + 2^{-9}) + ...
Interpreting the first sum as a binary representation, we have
1/10 = (0.00011001100110011 ...)_2 = (1.1001100110011 ...)_2 × 2^{-4}.
Since this is a non-terminating binary number, it has no exact representation in F_{2,p}, regardless of the choice of precision p.
This example may come as a surprise to those who use the popular step-sizes 0.1 or 0.001 in, say, numerical integration methods. Most computers use β = 2 in their internal floating point representation, which means that on these computers 1000 × 0.001 ≠ 1. This simple fact can have devastating consequences, for example, in very long integration procedures. More suitable step-sizes would be 2^{-3} = 0.125 and 2^{-10} = 0.0009765625, which are exactly representable in base 2. Example 1.4.1 further illustrates how sensitive numerical computations can be to these small conversion errors.
1.2.1 Subnormal Numbers
As mentioned earlier, without the normalizing restriction b_0 ≠ 0, a non-zero real number may have several representations in the set F_{β,p}^{e_min,e_max}. (Note, however, that most real numbers no longer have any representation in this finite set.) We already remarked that these redundancies may be avoided by normalization, that is, by demanding that all non-zero floating point numbers have a non-zero leading digit. To illustrate the concept of normalized numbers, we will study a small toy system of floating point numbers, illustrated in Figure 1.1.
It is clear that the floating point numbers are not uniformly distributed: the positive numbers are given by
{0.5, 0.625, 0.75, 0.875, 1, 1.25, 1.5, 1.75, 2, 2.5, 3, 3.5, 4, 5, 6, 7}.
Thus the distances between consecutive numbers range over the set {0.125, 0.25, 0.5, 1}. In Section 1.3, we will explain why this type of non-uniform spacing is a good idea. We will also describe how to deal with real numbers having modulus greater than the largest positive normal number N^n_max, which in our toy system is (1.11)_2 × 2^2 = 7.
Note that the smallest positive normal number in F_{2,3}^{-1,2} is N^n_min = (1.00)_2 × 2^{-1} = 0.5, which leaves an undesired gap centered around the origin. This leads not only to a huge loss of accuracy when approximating numbers of small magnitude but also to the violation of some of our most valued mathematical laws (see Exercise 1.18). A way to work around these problems is to allow for some numbers that are not normal.
A non-zero floating point number in F_{β,p}^{e_min,e_max} with b_0 = 0 and e = e_min is said to be subnormal (or denormalized). Subnormal numbers allow for a gradual underflow to zero (compare Figures 1.1 and 1.2). Extending the set of admissible floating point numbers to include the subnormal numbers still preserves uniqueness of representation, although the use of these additional numbers comes with some penalties, as we shall shortly see. For our toy system at hand, the positive subnormal numbers are {0.125, 0.25, 0.375}.
Figure 1.2 illustrates the normal and subnormal numbers of F_{2,3}^{-1,2}. The difference between these two sets is striking: the smallest positive normal number N^n_min is (1.00)_2 × 2^{-1} = 0.5, whereas the smallest positive subnormal number N^s_min is (0.01)_2 × 2^{-1} = 0.125. The largest subnormal number N^s_max is (0.11)_2 × 2^{-1} = 0.375. Geometrically, introducing subnormal numbers corresponds to filling the gap centered around the origin with evenly spaced numbers. The spacing should be the same as that between the two smallest positive normal numbers.
From now on, when we refer to the set F_{β,p}^{e_min,e_max}, we mean the set of normal and subnormal numbers. We will use F to denote any set of type F_{β,p}^{e_min,e_max} or F_{β,p}; when needed, the exact parameters of the set in question will be provided. A real number x with N^n_min ≤ |x| ≤ N^n_max is said to be in the normalized range of the associated floating point system.
Exercise 1.6. Prove that the non-zero normal and subnormal numbers of F_{β,p}^{e_min,e_max} have unique representations.
Exercise 1.7. How many positive subnormal numbers are there in the set F_{β,p}^{e_min,e_max}?
Exercise 1.8. Derive formulas for N^s_min, N^s_max, N^n_min, and N^n_max for a general set of floating point numbers F_{β,p}^{e_min,e_max}.
We conclude this section by remarking that although the floating point numbers are not uniformly spaced, for e_min ≤ m, n < e_max the sets F_{β,p}^{e_min,e_max} ∩ [β^m, β^{m+1}) and F_{β,p}^{e_min,e_max} ∩ [β^n, β^{n+1}) have the same cardinality. This is apparent in Figures 1.1 and 1.2. We also note that any floating point system F is symmetric with respect to the origin: x ∈ F ⟺ -x ∈ F.
Exercise 1.9. Compute the number of elements of the set F_{β,p}^{e_min,e_max} ∩ [β^n, β^{n+1}), provided that e_min ≤ n < e_max.
1.3 ROUNDING
We have now reached the stage where we have succeeded in condensing the uncountable set of real numbers R into a finite set of floating point numbers F. Almost all commercial computers use a set like F, with some minor additions, to approximate the real numbers. In order to make computing feasible over F, we must find a means to associate any real number x ∈ R to a member y of F. Such an association is called rounding and defines a map from R onto F. Obviously, we cannot make the map invertible, but we would like to make it as close as possible to a homeomorphism.
Before defining such a mapping, we will extend both the domain and range into the sets R* = R [union] {-∞, +∞} and F* = F [union] {-∞, +∞}, respectively. This provides an elegant means for representing real numbers that are too large in absolute value to fit into F. In the actual implementation, the symbols {-∞, +∞} are specially coded and do not have a valid representation such as (1.2).
Following the excellent treatise on computer arithmetic [KM81], a rounding operation □ : R* → F* should have the following properties:
(R1) x ∈ F* ⟹ □(x) = x,
(R2) x, y ∈ R* and x ≤ y ⟹ □(x) ≤ □(y).
Property (R1) simply states that all members of F* are fixed under □. Clearly an already representable number does not require any further rounding. Property (R2) states that the rounding is monotone. Indeed, it would be very difficult to interpret the meaning of any numerical procedure without this property. Combining (R1) and (R2), one can easily prove that the rounding □ is of maximum quality, that is, the interior of the interval spanned by x and □(x) contains no points of F*.
Lemma 1.10. Let x ∈ R*. If □ : R* → F* satisfies both (R1) and (R2), then the interval spanned by x and □(x) contains no points of F* in its interior.
Proof. The claim is trivially true if x = □(x) (since then the interior is empty), so assume that x ≠ □(x). Without loss of generality we may assume that x < □(x). Now suppose that the claim is false, that is, there exists an element y ∈ F* with x < y < □(x). Since, by (R1), we have □(y) = y, we must, by (R2), have □(x) ≤ y. This gives the desired contradiction.
We will now describe four rounding modes that are available on most commercial computers.
(Continues...)
Excerpted from Validated Numerics by Warwick Tucker. Copyright © 2011 by Princeton University Press. Excerpted by permission of Princeton University Press. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.