Hidden Markov Processes: Theory and Applications to Biology: 46 (Princeton Series in Applied Mathematics) - Hardcover

Synopsis

This book explores important aspects of Markov and hidden Markov processes and the applications of these ideas to various problems in computational biology. The book starts from first principles, so that no previous knowledge of probability is necessary. However, the work is rigorous and mathematical, making it useful to engineers and mathematicians, even those not interested in biological applications. A range of exercises is provided, including drills to familiarize the reader with concepts and more advanced problems that require deep thinking about the theory. Biological applications are taken from post-genomic biology, especially genomics and proteomics. The topics examined include standard material such as the Perron-Frobenius theorem, transient and recurrent states, hitting probabilities and hitting times, maximum likelihood estimation, the Viterbi algorithm, and the Baum-Welch algorithm. The book contains discussions of extremely useful topics not usually seen at the basic level, such as ergodicity of Markov processes, Markov Chain Monte Carlo (MCMC), information theory, and large deviation theory for both i.i.d. and Markov processes. The book also presents state-of-the-art realization theory for hidden Markov models. Among biological applications, it offers an in-depth look at the BLAST (Basic Local Alignment Search Tool) algorithm, including a comprehensive explanation of the underlying theory. Other applications such as profile hidden Markov models are also explored.

The synopsis may refer to a different edition of this title.

About the Author

M. Vidyasagar is the Cecil and Ida Green Chair in Systems Biology Science at the University of Texas at Dallas. His many books include Computational Cancer Biology: An Interaction Network Approach and Control System Synthesis: A Factorization Approach.

From the Back Cover

"This book provides a terrific introduction to an important and widely studied field--Markov processes (including hidden Markov processes)--with a particular view toward applications to problems in biology. With a wonderful balance of rigor, intuition, and choice of topics, the book gives a unique treatment of the subject for those interested in both fundamental theory and important applications."--Sanjeev Kulkarni, Princeton University

"Vidyasagar uses sound scholarship to address hidden Markov processes and their application to problems in computational biology, in particular to genomics and proteomics. The well-organized book examines topics not often covered, such as realization theory and order determination for hidden Markov processes, and also looks at significant properties such as ergodicity and mixing. This work will be useful to systems researchers as well as computational biologists."--Steve Marcus, University of Maryland

Excerpt. © Reprinted by permission. All rights reserved.

Hidden Markov Processes

Theory and Applications to Biology

By M. Vidyasagar

PRINCETON UNIVERSITY PRESS

Copyright © 2014 Princeton University Press
All rights reserved.
ISBN: 978-0-691-13315-7

Contents

Preface
PART 1. PRELIMINARIES
Chapter 1. Introduction to Probability and Random Variables
Chapter 2. Introduction to Information Theory
Chapter 3. Nonnegative Matrices
PART 2. HIDDEN MARKOV PROCESSES
Chapter 4. Markov Processes
Chapter 5. Introduction to Large Deviation Theory
Chapter 6. Hidden Markov Processes: Basic Properties
Chapter 7. Hidden Markov Processes: The Complete Realization Problem
PART 3. APPLICATIONS TO BIOLOGY
Chapter 8. Some Applications to Computational Biology
Chapter 9. BLAST Theory
Bibliography
Index


CHAPTER 1

Introduction to Probability and Random Variables


1.1 INTRODUCTION TO RANDOM VARIABLES

1.1.1 Motivation

Probability theory is an attempt to formalize the notion of uncertainty in the outcome of an experiment. For instance, suppose an urn contains four balls, colored red, blue, white, and green respectively. Suppose we dip our hand in the urn and pull out one of the balls "at random." What is the likelihood that the ball we pull out will be red? If we make multiple draws, replacing the drawn ball each time and shaking the urn thoroughly before the next draw, what is the likelihood that we have to make at least ten draws before we draw a red ball for the first time? Probability theory provides a mathematical abstraction and a framework where such issues can be addressed.

When there are only finitely many possible outcomes, probability theory becomes relatively simple. For instance, in the above example, when we draw a ball there are only four possible outcomes, namely: {R, B, W, G} with the obvious notation. If we draw two balls, after replacing the first ball drawn, then there are 4^2 = 16 possible outcomes, represented as {RR, ..., GG}. In such situations, one can get by with simple "counting" arguments. The counting approach can also be made to work when the set of possible outcomes is countably infinite. This situation is studied in Section 1.3. However, in probability theory infinity is never very far away, and counting arguments can lead to serious logical inconsistencies if applied to situations where the set of possible outcomes is uncountably infinite. The great Russian mathematician A. N. Kolmogorov invented axiomatic probability theory in the 1930s precisely to address the issues thrown up by having uncountably many possible outcomes. Subsequent developments in probability theory have been based on the axiomatic foundation laid out in [81].

Example 1.1 Let us return to the example above. Suppose that all four balls are identical in size and shape, and differ only in their color. Then it is reasonable to suppose that drawing any one color is as likely as drawing any other color, neither more nor less. This leads to the observation that the likelihood of drawing a red ball (or any other ball) is 1/4 = 0.25.

Example 1.2 Now suppose that the four balls are all spherical, and that their diameters are in the ratio 4 : 3 : 2 : 1 in the order red, blue, white, and green. We can suppose that the likelihood of our fingers touching and drawing a particular ball is proportional to its surface area. In this case, it follows that the likelihoods of drawing the four balls are in the proportion 4^2 : 3^2 : 2^2 : 1^2, or 16 : 9 : 4 : 1, in the order red, blue, white, and green. This leads to the conclusion that

P(R) = 16/30; P(B) = 9/30; P(W) = 4/30; P(G) = 1/30.


Example 1.3 There can be instances where such analytical reasoning can fail. Suppose that all balls have the same diameter, but the red ball is coated with an adhesive resin that makes it more likely to stick to our fingers when we touch it. The complicated interaction between the surface adhesion of our fingers and the surface of the ball may be too difficult to analyze, so we have no recourse other than to draw balls repeatedly and see how many times the red ball comes out. Suppose we make 1,000 draws, and the outcomes are: 451 red, 187 blue, 174 white, and 188 green. Then we can write

P̂(R) = 451/1000 = 0.451, P̂(B) = 187/1000 = 0.187, P̂(W) = 174/1000 = 0.174, P̂(G) = 188/1000 = 0.188.

The symbol P̂ is used instead of P to highlight the fact that these are simply observed frequencies, and not the true but unknown probabilities. Often the observed frequency of an outcome is referred to as its empirical probability, or the empirical estimate of the true but unknown probability based on a particular set of experiments. It is tempting to treat the observed frequencies as true probabilities, but that would not be correct. The reason is that if the experiment is repeated, the outcomes would in general be quite different. The reader can convince himself/herself of the difference between frequencies and probabilities by tossing a coin ten times, and another ten times. It is extremely unlikely that the same set of results will turn up both times. One of the important questions addressed in this book is: Just how close are the observed frequencies to the true but unknown probabilities, and just how quickly do these observed frequencies converge to the true probabilities? Such questions are addressed in Section 1.3.3.
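
To make the distinction concrete, here is a minimal Python simulation in the spirit of Example 1.3 (not from the book; the "true" distribution TRUE_P below is an assumed stand-in, since the sticky-ball probabilities are unknown). Two independent batches of 1,000 draws produce visibly different empirical frequencies, which is exactly the point.

```python
import random
from collections import Counter

# Assumed "true" distribution for the sticky-ball urn; a stand-in, since the
# actual probabilities in Example 1.3 are unknown.
TRUE_P = {"R": 0.45, "B": 0.19, "W": 0.18, "G": 0.18}

def empirical_frequencies(n_draws: int, seed: int) -> dict:
    """Draw n_draws balls i.i.d. from TRUE_P and return observed frequencies."""
    rng = random.Random(seed)
    colors = list(TRUE_P)
    weights = list(TRUE_P.values())
    draws = rng.choices(colors, weights=weights, k=n_draws)
    counts = Counter(draws)
    return {c: counts[c] / n_draws for c in colors}

# Two independent experiments of 1,000 draws each: the empirical estimates
# differ from each other and from TRUE_P, as the text warns.
print(empirical_frequencies(1000, seed=1))
print(empirical_frequencies(1000, seed=2))
```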


1.1.2 Definition of a Random Variable and Probability

Suppose we wish to study the behavior of a "random" variable X that can assume one of only a finite set of values belonging to a set A = {a_1, ..., a_n}. The set A of possible values is often referred to as the "alphabet" of the random variable. For example, in the ball-drawing experiment discussed in the preceding subsection, X can be thought of as the color of the ball drawn, and assumes values in the set {R, B, W, G}. This example, incidentally, serves to highlight the fact that the set of outcomes can consist of abstract symbols, and need not consist of numbers. This usage, adopted in this book, is at variance with the convention in many mathematics texts, where it is assumed that A is a subset of the real numbers R. However, since biological applications are a prime motivator for this book, it makes no sense to restrict A in this way. In genomics, for example, A consists of the four-symbol set of nucleic acids, or nucleotides, usually denoted by {A, C, G, T}. Moreover, by allowing A to consist of arbitrary symbols, we also explicitly allow the possibility that there is no natural ordering of these symbols. For instance, in this book the nucleotides are written in the order A, C, G, T purely to follow the English alphabetical ordering. But there is no consensus on the ordering in biology texts. Thus any method of analysis that is developed here must be permutation independent. In other words, if we choose to order the symbols in the set A in some other fashion, the methods of analysis must give the same answers as before.

Now we give a general definition of the notion of probability, and introduce the notation that is used throughout the book.


Definition 1.1 Given an integer n, the n-dimensional simplex S_n is defined as

S_n := {v ∈ R^n : v_i ≥ 0 for all i, and Σ_{i=1}^n v_i = 1}. (1.1)

Thus S_n consists of all nonnegative vectors whose components add up to one.
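
As a quick illustration (a hypothetical helper, not from the book), a vector can be tested for membership in S_n by checking nonnegativity and that its components sum to one, up to a numerical tolerance:

```python
def in_simplex(v, tol=1e-9):
    """Test membership in the n-dimensional simplex S_n of (1.1):
    every component nonnegative and the components summing to one."""
    return all(x >= -tol for x in v) and abs(sum(v) - 1.0) <= tol

print(in_simplex([0.25, 0.25, 0.25, 0.25]))   # True: uniform distribution
print(in_simplex([16/30, 9/30, 4/30, 1/30]))  # True: Example 1.2
print(in_simplex([0.5, 0.6, -0.1]))           # False: negative component
```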


Definition 1.2 Suppose A = {a_1, ..., a_n} is a finite set. Then a probability distribution on the set A is any vector μ ∈ S_n.

The interpretation of a probability distribution μ on the set A is that we say

Pr{X = a_i} = μ_i,

to be read as "the probability that the random variable X equals a_i is μ_i." Thus, if A = {R, B, W, G} and μ = [0.25 0.25 0.25 0.25], then all four outcomes of drawing the various colored balls are equally likely. This is the case in Example 1.1. If the situation is as in Example 1.2, where the balls have different diameters in the proportion 4 : 3 : 2 : 1, the probability distribution is

μ = [16/30 9/30 4/30 1/30].

If we now choose to reorder the elements of the set A in the form {R, W, G, B}, then the probability distribution gets reordered correspondingly, as

μ = [16/30 4/30 1/30 9/30].

Thus, when we speak of the probability distribution μ on the set A, we need to specify the ordering of the elements of the set.

The way we have defined it above, a probability distribution associates a weight with each element of the set A of possible outcomes. Thus μ can be thought of as a map from A into the interval [0, 1]. This notion of a weight of individual elements can be readily extended to define the weight of each subset of A. This is called the probability measure P_μ associated with the distribution μ. Suppose A ⊆ A. Then we define

P_μ(A) := Σ_{i=1}^n μ_i I_A(a_i), (1.2)

where I_A(·) is the so-called indicator function of the set A, defined by

I_A(a) = 1 if a ∈ A, and I_A(a) = 0 otherwise. (1.3)

So (1.2) states that the probability measure of the set A, denoted by P_μ(A), is the sum of the probability weights of the individual elements of the set A. Thus, whereas μ maps the set A into [0, 1], the corresponding probability measure P_μ maps the "power set" 2^A (that is, the collection of all subsets of A) into the interval [0, 1].
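
A minimal Python sketch of (1.2) and (1.3) (illustrative, not from the book): the measure of a subset is computed by summing weights against the indicator function, and the basic properties of the measure can then be checked numerically.

```python
def indicator(A):
    """Indicator function I_A(.) of (1.3): returns 1 on A and 0 elsewhere."""
    return lambda a: 1 if a in A else 0

def P_mu(A, alphabet, mu):
    """Probability measure of (1.2): P_mu(A) = sum_i mu_i * I_A(a_i)."""
    I_A = indicator(A)
    return sum(m * I_A(a) for a, m in zip(alphabet, mu))

alphabet = ["R", "B", "W", "G"]
mu = [16/30, 9/30, 4/30, 1/30]          # distribution of Example 1.2
print(P_mu({"R", "W"}, alphabet, mu))   # 16/30 + 4/30 = 2/3
print(P_mu(set(), alphabet, mu))        # empty set: measure 0
print(P_mu(set(alphabet), alphabet, mu))  # whole set: measure (approximately) 1
# Additivity over disjoint subsets:
lhs = P_mu({"R"}, alphabet, mu) + P_mu({"W"}, alphabet, mu)
assert abs(lhs - P_mu({"R", "W"}, alphabet, mu)) < 1e-12
```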

In this text, we need to deal with three kinds of objects:

• A probability distribution μ on a finite set A.

• A random variable X assuming values in A, with the probability distribution μ.

• A probability measure P_μ on the power set 2^A, associated with the probability distribution μ.


We will use whichever interpretation is most convenient and natural in the given context. As for notation, throughout the text, boldface Greek letters such as μ denote probability distributions. The probability measure corresponding to μ is denoted by P_μ. Strictly speaking, the boldface symbol should also appear in the subscript, but for reasons of aesthetics and appearance we prefer to write P_μ. Similar notation applies to all other boldface Greek letters.

From (1.2), it follows readily that the empty set ∅ has probability measure zero, while the complete set A has probability measure one. This is true irrespective of what the underlying probability distribution μ is. Moreover, the following additional observations are easy consequences of (1.2):


Theorem 1.3 Suppose A is a finite set and μ is a probability distribution on A, and let P_μ denote the corresponding probability measure on A. Then

1. 0 ≤ P_μ(A) ≤ 1 for all A ⊆ A.

2. P_μ(∅) = 0 and P_μ(A) = 1.

3. If A, B are disjoint subsets of A, then

P_μ(A ∪ B) = P_μ(A) + P_μ(B). (1.4)


In the next paragraph we give a brief glimpse of axiomatic probability theory in a general setting, where the set A of possible outcomes is not necessarily finite. This paragraph is not needed to understand the remainder of the book, and therefore the reader can skip it with no aftereffects. In axiomatic probability theory, one actually begins with generalizations of the two properties above. One starts with a collection of subsets S of A that has three properties:

1. Both the empty set ∅ and A itself belong to S.

2. If A belongs to S, so does its complement A^c.

3. If {A_1, A_2, ...} is a countable collection of sets belonging to S, then their union ∪_{i=1}^∞ A_i also belongs to S.


Such a collection S is called a σ-algebra of subsets of A, and the pair (A, S) is called a measurable space. Note that, on the same set A, it is possible to define different σ-algebras. Given a measurable space (A, S), a probability measure P is defined to be a function that maps the σ-algebra S into [0, 1], or in other words a map that assigns a number P(A) ∈ [0, 1] to each set A belonging to S, such that two properties hold.

1. P(∅) = 0 and P(A) = 1.

2. If {A_1, A_2, ...} is a countable collection of pairwise disjoint sets belonging to S, then

P(∪_{i=1}^∞ A_i) = Σ_{i=1}^∞ P(A_i).

Starting with just these two simple sets of axioms, together with the notion of independence (introduced later), it is possible to build a tremendously rich edifice of probability theory; see [81].

In the case where the set A is either finite or countably infinite, by tradition one takes S to be the collection of all subsets of A, because any other σ-algebra S' of subsets of A must in fact be a subalgebra of S, in the sense that every set that is contained in the collection S' must also belong to S. Now suppose P is a probability measure on S. Then P assigns a weight P({a_i}) =: μ_i to each element a_i ∈ A. Moreover, if A is a subset of A, then the measure P(A) is just the sum of the weights assigned to individual elements of A. So if we let μ denote the sequence of nonnegative numbers μ := (μ_i, i = 1, 2, ...), then, in conformity with earlier notation, we can identify the probability measure P with P_μ. Conversely, if μ is any sequence of nonnegative numbers such that Σ_{i=1}^∞ μ_i = 1, then the associated probability measure P_μ is defined for every subset A ⊆ A by

P_μ(A) = Σ_{i=1}^∞ μ_i I_A(a_i),

where as before I_A(·) denotes the indicator function of the set A.
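
To make the countable case concrete, here is a small sketch under an assumed weight sequence (μ_i = 2^(-i), which sums to one over i = 1, 2, ...); the measure of the set of even-indexed outcomes is approximated by truncating the infinite sum.

```python
# Assumed weights mu_i = 2**(-i), i = 1, 2, ..., which satisfy sum_i mu_i = 1.
def mu(i: int) -> float:
    return 2.0 ** (-i)

# P_mu(A) for A = {a_i : i even}, approximated by truncating the series.
approx = sum(mu(i) for i in range(2, 200, 2))
print(approx)  # ~0.3333..., matching the exact geometric-series value 1/3
```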

If A is finite, and if {A_1, A_2, ...} is a countable collection of pairwise disjoint sets, then the only possibility is that all but finitely many sets are empty. So Property 2 above can be simplified to:

P(A ∪ B) = P(A) + P(B) if A ∩ B = ∅,

which is precisely Property 3 from Theorem 1.3. In other words, the case where A is countably infinite is not any more complicated than the case where A is a finite set. This is why most "elementary" books on Markov chains assume that the underlying set A is countably infinite. But if A is an uncountably infinite set (such as the real numbers, for example), this approach based on assigning weights to individual elements of the set A does not work, and one requires the more general version of the theory of probability as introduced in [81].

At this point the reader can well ask: But what does it all mean? As with much of mathematics, probability theory exists at many distinct levels. It can be viewed as an exercise in pure reasoning, an intellectual pastime, a challenge to one's wits. While that may satisfy some persons, the theory would have very little by way of application to "real" situations unless the notion of probability is given a little more concrete interpretation. So we can think of the probability distribution μ as arising in one of two ways. First, the distribution can be postulated, as in the previous subsection. Thus if we are drawing from an urn containing four balls that are identical in all respects save their color, it makes sense to postulate that each of the four outcomes is equally likely. Similarly, if the balls are identical except for their diameter, and if we believe that the likelihood of drawing a ball is proportional to the surface area, then once again we can postulate that the four components of μ are in proportion to the surface areas (or equivalently, to the squares of the diameters) of the four balls. Then the requirement that the components of μ must add up to one gives the normalizing constant. Second, the distribution can be estimated, as with the adhesive-coated balls in Example 1.3. In this case there is a true but unknown probability vector μ, and our estimate of μ, based on 1,000 draws of balls, is μ̂ = [0.451 0.187 0.174 0.188]. Then we can try to develop theories that allow us to say how close μ̂ is to μ, and with what confidence we can make this statement. This question is addressed in Section 1.3.3.


1.1.3 Function of a Random Variable, Expected Value

Suppose X is a random variable assuming values in a finite set A = {a_1, ..., a_n}, with the probability measure P_μ and the probability distribution μ. Suppose f is a function mapping the set A into another set B. Since A is finite, it is clear that the set {f(a_1), ..., f(a_n)} is finite. So there is no loss of generality in assuming that the set B (the range of the function f) is also a finite set. Moreover, it is not assumed that the values f(a_1), ..., f(a_n) are distinct. Thus the image of the set A under the function f can have fewer than n elements. Now f(X) is itself a random variable. Moreover, the distribution of f(X) can be computed readily from the distribution of X. Suppose μ ∈ S_n is the distribution of X. Thus μ_i = Pr{X = a_i}. To compute the distribution of f(X), we need to address the possibility that f(a_1), ..., f(a_n) may not be distinct elements. Let B = {b_1, ..., b_m} denote the set of all possible outcomes of f(X), and note that m ≤ n. Then

Pr{f(X) = b_j} = Σ_{i : f(a_i) = b_j} μ_i, j = 1, ..., m.
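
The computation just described collects μ_i over the preimage of each b_j. A short Python sketch (the nucleotide example and the distribution are illustrative, not from the text):

```python
from collections import defaultdict

def pushforward(alphabet, mu, f):
    """Distribution of f(X): Pr{f(X) = b} = sum of mu_i over all i with f(a_i) = b."""
    dist = defaultdict(float)
    for a, m in zip(alphabet, mu):
        dist[f(a)] += m  # accumulate over the preimage of each value of f
    return dict(dist)

# Illustrative example: collapse the nucleotide alphabet to purine/pyrimidine,
# so f is many-to-one and B has fewer elements than A.
alphabet = ["A", "C", "G", "T"]
mu = [0.3, 0.2, 0.2, 0.3]  # assumed nucleotide distribution
f = lambda a: "purine" if a in ("A", "G") else "pyrimidine"
print(pushforward(alphabet, mu, f))  # {'purine': 0.5, 'pyrimidine': 0.5}
```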


(Continues...)
Excerpted from Hidden Markov Processes by M. Vidyasagar. Copyright © 2014 Princeton University Press. Excerpted by permission of PRINCETON UNIVERSITY PRESS.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

"About this title" may refer to a different edition of this title.
