Models for Probability and Statistical Inference: Theory and Applications (Wiley Series in Probability and Statistics) - Hardcover

Book 213 of 354: Wiley Series in Probability and Statistics

Stapleton, James H.

 
9780470073728: Models for Probability and Statistical Inference: Theory and Applications (Wiley Series in Probability and Statistics)

Synopsis

This concise yet thorough book is enhanced with simulations and graphs to build readers' intuition.

Models for Probability and Statistical Inference was written over a five-year period and serves as a comprehensive treatment of the fundamentals of probability and statistical inference. With detailed theoretical coverage found throughout the book, readers acquire the fundamentals needed to advance to more specialized topics, such as sampling, linear models, design of experiments, statistical computing, survival analysis, and bootstrapping.

Ideal as a textbook for a two-semester sequence on probability and statistical inference, the book opens with chapters on probability that include discussions of: discrete models and random variables; discrete distributions including binomial, hypergeometric, geometric, and Poisson; continuous, normal, gamma, and conditional distributions; and limit theory. Since limit theory is usually the most difficult topic for readers to master, the author thoroughly discusses modes of convergence of sequences of random variables, with special attention to convergence in distribution. The second half of the book addresses statistical inference, beginning with a discussion on point estimation and followed by coverage of consistency and confidence intervals. Further areas of exploration include: distributions defined in terms of the multivariate normal, chi-square, t, and F (central and non-central); the one- and two-sample Wilcoxon test, together with methods of estimation based on both; linear models with a linear space-projection approach; and logistic regression.

Each section contains a set of problems ranging in difficulty from simple to more complex, and selected answers as well as proofs of almost all statements are provided. An abundance of figures, along with helpful simulations and graphs produced by the statistical package S-Plus®, is included to help build readers' intuition.

The synopsis may refer to a different edition of this title.

About the Author

James H. Stapleton, PhD, has recently retired after forty-nine years as professor in the Department of Statistics and Probability at Michigan State University, including eight years as chairperson and almost twenty years as graduate director. Dr. Stapleton is the author of Linear Statistical Models (Wiley), and he received his PhD in mathematical statistics from Purdue University.


Excerpt. © Reprinted by permission. All rights reserved.

Models for Probability and Statistical Inference

Theory and Applications

By James H. Stapleton

John Wiley & Sons

Copyright © 2008 John Wiley & Sons, Inc.
All rights reserved.

ISBN: 978-0-470-07372-8

Chapter One

Discrete Probability Models

1.1 INTRODUCTION

The mathematical study of probability can be traced to the seventeenth-century correspondence between Blaise Pascal and Pierre de Fermat, French mathematicians of lasting fame. The Chevalier de Méré had posed questions to Pascal concerning gambling, which led to Pascal's correspondence with Fermat. One question was this: Is a gambler equally likely to succeed at these two games: (1) getting at least one 6 in four throws of one six-sided die, and (2) getting at least one double-6 (6-6) in 24 throws of two six-sided dice? At the time it seemed to many that the answer was yes. Some believe that de Méré had empirical evidence that the first event was more likely to occur than the second, although we should be skeptical of that, since the probabilities turn out to be 0.5177 and 0.4914, quite close. After studying Chapter One, students should be able to verify these probabilities; after Chapter Six, they should be able to determine how many times de Méré would have to play these games in order to distinguish between the probabilities.
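Both probabilities follow from the complement rule: the chance of at least one success is one minus the chance of no successes. A quick sketch in Python (standing in for the book's S-Plus):

```python
# Complement rule: P(at least one success) = 1 - P(no successes).

# Game 1: at least one 6 in four throws of one die.
# A single throw misses a 6 with probability 5/6.
p_game1 = 1 - (5 / 6) ** 4

# Game 2: at least one double-6 in 24 throws of two dice.
# A single throw of two dice misses 6-6 with probability 35/36.
p_game2 = 1 - (35 / 36) ** 24

print(round(p_game1, 4))  # 0.5177
print(round(p_game2, 4))  # 0.4914
```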

In the eighteenth century, probability theory was applied to astronomy and to the study of errors of measurement in general. In the nineteenth and twentieth centuries, applications were extended to biology, the social sciences, medicine, engineering, and almost every other discipline. Applications to genetics, for example, continue to grow rapidly, as probabilistic models are developed to handle the masses of data being collected. Large banks, credit companies, and insurance and marketing firms are all using probability and statistics to help them determine operating rules.

We begin with discrete probability theory, for which the events of interest often concern count data. Although many of the examples used to illustrate the theory involve gambling games, students should remember that the theory and methods are applicable to many disciplines.

1.2 SAMPLE SPACES, EVENTS, AND PROBABILITY MEASURES

We begin our study of probability by considering the results of 400 consecutive throws of a fair die, a six-sided cube for which each of the numbers 1, 2, ..., 6 is equally likely to be the number showing when the die is thrown.

The frequencies are:

Face:       1   2   3   4   5   6
Frequency:  60  73  65  58  74  70

We use these data to motivate the definitions and theory to be presented. Consider, for example, the following question: What is the probability that the five numbers appearing in five throws of a die are all different? Among the 80 consecutive sequences of five throws above, in only five cases were all five numbers different, a relative frequency of 5/80 = 0.0625. In another experiment, with 2000 sequences of five throws each, all five numbers were different 183 times, a relative frequency of 183/2000 = 0.0915. Is there a way to determine the long-run relative frequency? Put another way, what could we expect the relative frequency to be in 1 million throws of five dice?
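The long-run relative frequency can be approximated by simulation, much like the experiments described above. A minimal Python sketch (the seed and number of trials are arbitrary choices, not from the book):

```python
import random

random.seed(1)  # arbitrary seed, for reproducibility
trials = 100_000
hits = 0
for _ in range(trials):
    throws = [random.randint(1, 6) for _ in range(5)]  # one sequence of five throws
    if len(set(throws)) == 5:  # all five numbers different
        hits += 1
print(hits / trials)  # close to 0.0926 for large trial counts
```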

It should seem reasonable that all possible sequences of five integers from 1 to 6 are equally likely. For example, prior to the 400-throw experiment, each of the first two sequences, 61635 and 52244, was equally likely. For this example, such five-digit sequences will be called outcomes or sample points. The collection of all possible such five-digit sequences will be denoted by S, the sample space. In more mathematical language, S is the Cartesian product of the set A = {1, 2, 3, 4, 5, 6} with itself five times. This collection of sequences is often written A^(5). Thus, S = A^(5) = A × A × A × A × A. The number of outcomes (or sample points) in S is 6^5 = 7776. It should seem reasonable to suppose that all outcomes (five-digit sequences) have probability 1/6^5.

We have already defined a probability model for this experiment. As we will see, it is enough in cases in which the sample space is discrete (finite or countably infinite) to assign probabilities, nonnegative numbers summing to 1, to each outcome in the sample space S. A discrete probability model has been defined for an experiment when (1) a finite or countably infinite sample space has been defined, with each possible result of the experiment corresponding to exactly one outcome; and (2) probabilities, nonnegative numbers, have been assigned to the outcomes in such a way that they sum to 1. It is not necessary that the probabilities assigned all be the same as they are for this example, although that is often realistic and convenient.

We are interested in the event A that all five digits in an outcome are different. Notice that this event A is a subset of the sample space S. We say that an event A has occurred if the outcome is a member of A. In this case event A did not occur for any of the eight outcomes in the first row above.

We define the probability of the event A, denoted P(A), to be the sum of the probabilities of the outcomes in A. By defining the probability of an event in this way, we ensure that the probability measure P, defined for all subsets (events, in probability language) of S, obeys certain axioms for probability measures (to be stated later). Because our probability measure P assigns equal probability to every outcome, to find P(A) it is enough to determine the number of outcomes N(A) in A, for then P(A) = N(A)[1/N(S)] = N(A)/N(S). Of course, this is the case only because we assigned equal probabilities to all outcomes.

To determine N(A), we can apply the multiplication principle. A is the collection of 5-tuples with all components different. Each outcome in A corresponds to a way of filling in the boxes of the following cells:

[ILLUSTRATION OMITTED]

The first cell can hold any of the six numbers. Given the number in the first cell, and given that the outcome must be in A, the second cell can hold any of five numbers, all different from the number in the first cell. Similarly, given the numbers in the first two cells, the third cell can contain any of four different numbers. Continuing in this way, we find that N(A) = (6)(5)(4)(3)(2) = 720 and that P(A) = 720/7776 = 0.0926, close to the relative frequency 0.0915 obtained in the experiment with 2000 sequences. The number N(A) = 720 is the number of permutations of six things taken five at a time, indicated by P(6, 5).
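The multiplication-principle count can be checked directly; in Python, `math.perm(6, 5)` gives the number of permutations P(6, 5):

```python
import math

n_A = math.perm(6, 5)  # 6 * 5 * 4 * 3 * 2 = 720 sequences with all numbers different
n_S = 6 ** 5           # 7776 possible five-throw sequences
print(n_A, n_S, round(n_A / n_S, 4))  # 720 7776 0.0926
```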

Example 1.2.1 Consider the following discrete probability model, with sample space S = {a, b, c, d, e, f}.

Outcome ω:  a     b     c     d     e     f
P(ω):       0.30  0.20  0.25  0.10  0.10  0.05

Let A = {a, b, d} and B = {b, d, e}. Then A ∪ B = {a, b, d, e} and P(A ∪ B) = 0.30 + 0.20 + 0.10 + 0.10 = 0.70. In addition, A ∩ B = {b, d}, so that P(A ∩ B) = 0.20 + 0.10 = 0.30. Notice that P(A ∪ B) = P(A) + P(B) − P(A ∩ B). (Why must this be true?) The complement of an event D, denoted by D^c, is the collection of outcomes in S that are not in D. Thus, P(A^c) = P({c, e, f}) = 0.25 + 0.10 + ...
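The bookkeeping in Example 1.2.1 is easy to reproduce with a dictionary of outcome probabilities; a minimal sketch (the names `p` and `prob` are illustrative, not from the book):

```python
# Discrete probability model of Example 1.2.1:
# P(event) is the sum of the probabilities of the outcomes in the event.
p = {"a": 0.30, "b": 0.20, "c": 0.25, "d": 0.10, "e": 0.10, "f": 0.05}

def prob(event):
    """Probability of an event, given as a set of outcomes."""
    return sum(p[w] for w in event)

A = {"a", "b", "d"}
B = {"b", "d", "e"}
print(round(prob(A | B), 2))       # 0.7, i.e., P(A ∪ B)
print(round(prob(A & B), 2))       # 0.3, i.e., P(A ∩ B)
print(round(prob(set(p) - A), 2))  # 0.4, i.e., P(A^c) = P({c, e, f})
```

Note that the identity P(A ∪ B) = P(A) + P(B) − P(A ∩ B) checks out: 0.7 = 0.6 + 0.4 − 0.3.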

"About this title" may refer to a different edition of this title.

Other popular editions of the same title

9780470183410: Models for Probability and Statistical Inference: Theory and Applications (Wiley Series in Probability and Statistics)

Featured edition

ISBN 10:  0470183411 ISBN 13:  9780470183410
Hardcover