Seller: Phatpocket Limited, Waltham Abbey, HERTS, United Kingdom
EUR 18,78
Quantity: 1 available
Condition: Good. Your purchase helps support the Sri Lankan children's charity 'The Rainbow Centre'. Ex-library, so some stamps and wear, but in good overall condition. Our donations to The Rainbow Centre have helped provide an education and a safe haven to hundreds of children who live in appalling conditions.
Publisher: Springer Berlin / Heidelberg, 2011
ISBN 10: 3642183239 ISBN 13: 9783642183232
Language: English
Seller: Better World Books, Mishawaka, IN, USA
Condition: Fine. Used book that is in almost brand-new condition.
Hardcover. Condition: As New. No jacket. Pages are clean and not marred by notes or folds of any kind. ~ ThriftBooks: Read More, Spend Less.
Seller: Romtrade Corp., Sterling Heights, MI, USA
Condition: New. This is a brand-new US edition. This item may be shipped from the US or any other country, as we have multiple locations worldwide.
Seller: Majestic Books, Hounslow, United Kingdom
EUR 100,71
Quantity: 1 available
Condition: New. 316 pp. B&W, 6.14 x 9.21 in (234 x 156 mm, Royal 8vo), case laminate on white with gloss lamination.
Publisher: Springer Berlin Heidelberg, 2011
ISBN 10: 3642183239 ISBN 13: 9783642183232
Language: English
Seller: moluna, Greven, Germany
EUR 64,33
Quantity: More than 20 available
Softcover / paperback. Condition: New.
Seller: Revaluation Books, Exeter, United Kingdom
EUR 114,82
Quantity: 2 available
Paperback. Condition: Brand New. 2011 edition. 404 pages. 9.25 x 6.25 x 1.00 inches. In stock.
Publisher: Springer Berlin Heidelberg, Jun 2011
ISBN 10: 3642183239 ISBN 13: 9783642183232
Language: English
Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
Paperback. Condition: New. New item - The theory of Markov decision processes focuses on controlled Markov chains in discrete time. The authors establish the theory for general state and action spaces and at the same time show its application by means of numerous examples, mostly taken from the fields of finance and operations research. By using a structural approach many technicalities (concerning measure theory) are avoided. They cover problems with finite and infinite horizons, as well as partially observable Markov decision processes, piecewise deterministic Markov decision processes and stopping problems. The book presents Markov decision processes in action and includes various state-of-the-art applications with a particular view towards finance. It is useful for upper-level undergraduates, Master's students and researchers in both applied probability and finance, and provides exercises (without solutions). Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 404 pp. English.
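The blurb above describes controlled Markov chains in discrete time with finite and infinite horizons. As a rough, illustrative companion (not taken from the book), the sketch below shows backward induction, the standard dynamic-programming recursion for a finite-horizon MDP; the state space, action space, transition probabilities, rewards and horizon are all made-up toy data.

```python
# Illustrative sketch only: backward induction for a toy finite-horizon MDP.
# All data (P, r, horizon) are hypothetical and not related to the book's examples.
import numpy as np

n_states, n_actions, horizon = 3, 2, 5
rng = np.random.default_rng(0)
# P[a][s, s'] = transition probability under action a; each row sums to 1.
P = [rng.dirichlet(np.ones(n_states), size=n_states) for _ in range(n_actions)]
r = rng.uniform(size=(n_states, n_actions))      # one-step rewards r(s, a)

V = np.zeros(n_states)                           # terminal value V_N = 0
policy = np.zeros((horizon, n_states), dtype=int)
for t in reversed(range(horizon)):
    # Bellman recursion: Q_t(s, a) = r(s, a) + sum_{s'} P(s'|s, a) * V_{t+1}(s')
    Q = np.stack([r[:, a] + P[a] @ V for a in range(n_actions)], axis=1)
    policy[t] = Q.argmax(axis=1)                 # optimal decision rule at stage t
    V = Q.max(axis=1)                            # value function V_t

print("optimal expected total reward from each state:", V)
```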
Paperback. Condition: New. Markov Decision Processes with Applications to Finance | Ulrich Rieder (et al.) | Paperback | xvi | English | 2011 | Springer-Verlag GmbH | EAN 9783642183232 | Responsible person for the EU: Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg, juergen[dot]hartmann[at]springer[dot]com | Seller: preigu.
Publisher: Springer Berlin Heidelberg, 2011
ISBN 10: 3642183239 ISBN 13: 9783642183232
Language: English
Seller: AHA-BUCH GmbH, Einbeck, Germany
Paperback. Condition: New. Print on demand, new item - printed after ordering - The theory of Markov decision processes focuses on controlled Markov chains in discrete time. The authors establish the theory for general state and action spaces and at the same time show its application by means of numerous examples, mostly taken from the fields of finance and operations research. By using a structural approach many technicalities (concerning measure theory) are avoided. They cover problems with finite and infinite horizons, as well as partially observable Markov decision processes, piecewise deterministic Markov decision processes and stopping problems. The book presents Markov decision processes in action and includes various state-of-the-art applications with a particular view towards finance. It is useful for upper-level undergraduates, Master's students and researchers in both applied probability and finance, and provides exercises (without solutions).
Seller: Revaluation Books, Exeter, United Kingdom
EUR 153,77
Quantity: 2 available
Paperback. Condition: Brand New. 298 pages. 9.00 x 6.00 x 0.72 inches. In stock.
Paperback. Condition: New. Markov Decision Processes with Their Applications | Wuyi Yue (et al.) | Paperback | xv | English | 2010 | Springer US | EAN 9781441942388 | Responsible person for the EU: Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg, juergen[dot]hartmann[at]springer[dot]com | Seller: preigu.
Publisher: Springer US, Springer New York, Nov 2007
ISBN 10: 0387369503 ISBN 13: 9780387369501
Language: English
Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
Book. Condition: New. New item - Markov decision processes (MDPs), also called stochastic dynamic programming, were first studied in the 1960s. MDPs can be used to model and solve dynamic decision-making problems that are multi-period and occur in stochastic circumstances. There are three basic branches in MDPs: discrete-time MDPs, continuous-time MDPs and semi-Markov decision processes. Starting from these three branches, many generalized MDP models have been applied to various practical problems. These models include partially observable MDPs, adaptive MDPs, MDPs in stochastic environments, and MDPs with multiple objectives, constraints or imprecise parameters. Markov Decision Processes With Their Applications examines MDPs and their applications in the optimal control of discrete event systems (DESs), optimal replacement, and optimal allocations in sequential online auctions. The book presents four main topics that are used to study optimal control problems: a new methodology for MDPs with the discounted total reward criterion; the transformation of continuous-time MDPs and semi-Markov decision processes into a discrete-time MDP model, thereby simplifying the application of MDPs; MDPs in stochastic environments, which greatly extends the area where MDPs can be applied; and applications of MDPs in the optimal control of discrete event systems, optimal replacement, and optimal allocation in sequential online auctions. This book is intended for researchers, mathematicians, advanced graduate students, and engineers who are interested in optimal control, operations research, communications, manufacturing, economics, and electronic commerce. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 316 pp. English.
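One of the four topics listed above is the transformation of continuous-time MDPs into a discrete-time model. The book develops its own methodology for this; purely as a hedged illustration of the general idea, the sketch below implements the classical uniformization construction for a discounted continuous-time MDP with bounded transition rates. The function name, rate matrices and reward rates are hypothetical placeholders.

```python
# Hedged sketch of classical uniformization (not the book's own construction):
# turn a discounted continuous-time MDP with bounded rates into a discrete-time MDP.
import numpy as np

def uniformize(rates, reward_rates, alpha):
    """rates: list over actions of (n, n) nonnegative rate matrices with zero diagonal;
    reward_rates: (n, n_actions) reward rates; alpha: continuous discount rate > 0."""
    n, n_actions = reward_rates.shape
    exit_rates = np.array([rates[a].sum(axis=1) for a in range(n_actions)])  # q_i(a)
    Lam = exit_rates.max()                     # uniformization constant (assumed > 0)
    P = []
    for a in range(n_actions):
        Pa = rates[a] / Lam                    # off-diagonal jump probabilities q(j|i,a)/Lam
        np.fill_diagonal(Pa, 1.0 - exit_rates[a] / Lam)  # leftover mass stays in state i
        P.append(Pa)
    beta = Lam / (alpha + Lam)                 # equivalent discrete-time discount factor
    r = reward_rates / (alpha + Lam)           # rescaled one-step rewards
    return P, r, beta
```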
Seller: Revaluation Books, Exeter, United Kingdom
EUR 165,54
Quantity: 2 available
Paperback. Condition: Brand New. 2009 edition. 252 pages. 9.20 x 6.10 x 0.57 inches. In stock.
Seller: preigu, Osnabrück, Germany
Paperback. Condition: New. Continuous-Time Markov Decision Processes | Theory and Applications | Onésimo Hernández-Lerma (et al.) | Paperback | xviii | English | 2012 | Springer-Verlag GmbH | EAN 9783642260728 | Responsible person for the EU: Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg, juergen[dot]hartmann[at]springer[dot]com | Seller: preigu.
Publisher: Springer US, Springer New York, 2007
ISBN 10: 0387369503 ISBN 13: 9780387369501
Language: English
Seller: AHA-BUCH GmbH, Einbeck, Germany
Book. Condition: New. Print on demand, new item - printed after ordering - Markov decision processes (MDPs), also called stochastic dynamic programming, were first studied in the 1960s. MDPs can be used to model and solve dynamic decision-making problems that are multi-period and occur in stochastic circumstances. There are three basic branches in MDPs: discrete-time MDPs, continuous-time MDPs and semi-Markov decision processes. Starting from these three branches, many generalized MDP models have been applied to various practical problems. These models include partially observable MDPs, adaptive MDPs, MDPs in stochastic environments, and MDPs with multiple objectives, constraints or imprecise parameters. Markov Decision Processes With Their Applications examines MDPs and their applications in the optimal control of discrete event systems (DESs), optimal replacement, and optimal allocations in sequential online auctions. The book presents four main topics that are used to study optimal control problems: a new methodology for MDPs with the discounted total reward criterion; the transformation of continuous-time MDPs and semi-Markov decision processes into a discrete-time MDP model, thereby simplifying the application of MDPs; MDPs in stochastic environments, which greatly extends the area where MDPs can be applied; and applications of MDPs in the optimal control of discrete event systems, optimal replacement, and optimal allocation in sequential online auctions. This book is intended for researchers, mathematicians, advanced graduate students, and engineers who are interested in optimal control, operations research, communications, manufacturing, economics, and electronic commerce.
Paperback. Condition: New. Print on demand, new item - printed after ordering - Markov decision processes (MDPs), also called stochastic dynamic programming, were first studied in the 1960s. MDPs can be used to model and solve dynamic decision-making problems that are multi-period and occur in stochastic circumstances. There are three basic branches in MDPs: discrete-time MDPs, continuous-time MDPs and semi-Markov decision processes. Starting from these three branches, many generalized MDP models have been applied to various practical problems. These models include partially observable MDPs, adaptive MDPs, MDPs in stochastic environments, and MDPs with multiple objectives, constraints or imprecise parameters. Markov Decision Processes With Their Applications examines MDPs and their applications in the optimal control of discrete event systems (DESs), optimal replacement, and optimal allocations in sequential online auctions. The book presents four main topics that are used to study optimal control problems: a new methodology for MDPs with the discounted total reward criterion; the transformation of continuous-time MDPs and semi-Markov decision processes into a discrete-time MDP model, thereby simplifying the application of MDPs; MDPs in stochastic environments, which greatly extends the area where MDPs can be applied; and applications of MDPs in the optimal control of discrete event systems, optimal replacement, and optimal allocation in sequential online auctions. This book is intended for researchers, mathematicians, advanced graduate students, and engineers who are interested in optimal control, operations research, communications, manufacturing, economics, and electronic commerce.
Publisher: Springer Berlin Heidelberg, 2012
ISBN 10: 3642260721 ISBN 13: 9783642260728
Language: English
Seller: AHA-BUCH GmbH, Einbeck, Germany
Paperback. Condition: New. Print on demand, new item - printed after ordering - Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.
Publisher: Springer Berlin Heidelberg, 2009
ISBN 10: 3642025463 ISBN 13: 9783642025464
Language: English
Seller: AHA-BUCH GmbH, Einbeck, Germany
Book. Condition: New. Print on demand, new item - printed after ordering - Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.
Seller: Buchpark, Trebbin, Germany
Condition: Very good | Language: English | Product type: Books.
Seller: AHA-BUCH GmbH, Einbeck, Germany
Book. Condition: New. Print on demand, new item - printed after ordering - This book offers a structured exploration of how Markov Decision Processes (MDPs) and Deep Reinforcement Learning (DRL) can be used to model and optimize UAV-assisted Internet of Things (IoT) networks, with a focus on minimizing the Age of Information (AoI) during data collection. Adopting a tutorial-style approach, it bridges theoretical models and practical algorithms for real-time decision-making in tasks like UAV trajectory planning, sensor transmission scheduling, and energy-efficient data gathering. Applications span precision agriculture, environmental monitoring, smart cities, and emergency response, showcasing the adaptability of DRL in UAV-based IoT systems. Designed as a foundational reference, it is ideal for researchers and engineers aiming to deepen their understanding of adaptive UAV planning across diverse IoT applications.
Publisher: World Scientific Europe Ltd, 2025
ISBN 10: 1800616759 ISBN 13: 9781800616752
Language: English
Seller: Revaluation Books, Exeter, United Kingdom
EUR 222,24
Quantity: 2 available
Hardcover. Condition: Brand New. 489 pages. 9.25 x 6.25 x 1.25 inches. In stock.
Condition: Good. An averagely worn book or dust jacket with signs of use, but with all pages present.
Seller: AHA-BUCH GmbH, Einbeck, Germany
Book. Condition: New. Print on demand, new item - printed after ordering - Eugene A. Feinberg, Adam Shwartz. This volume deals with the theory of Markov Decision Processes (MDPs) and their applications. Each chapter was written by a leading expert in the respective area. The papers cover major research areas and methodologies, and discuss open questions and future research directions. The papers can be read independently, with the basic notation and concepts of Section 1.2. Most chapters should be accessible by graduate or advanced undergraduate students in fields of operations research, electrical engineering, and computer science. 1.1 AN OVERVIEW OF MARKOV DECISION PROCESSES: The theory of Markov Decision Processes, also known under several other names including sequential stochastic optimization, discrete-time stochastic control, and stochastic dynamic programming, studies sequential optimization of discrete-time stochastic systems. The basic object is a discrete-time stochastic system whose transition mechanism can be controlled over time. Each control policy defines the stochastic process and values of objective functions associated with this process. The goal is to select a 'good' control policy. In real life, decisions that humans and computers make on all levels usually have two types of impacts: (i) they cost or save time, money, or other resources, or they bring revenues, as well as (ii) they have an impact on the future, by influencing the dynamics. In many situations, decisions with the largest immediate profit may not be good in view of future events. MDPs model this paradigm and provide results on the structure and existence of good policies and on methods for their calculation.
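The overview above says the goal is to select a 'good' control policy for a discrete-time stochastic system whose transitions can be controlled. As an illustrative sketch only (not code from this volume), value iteration is one standard way to compute such a policy for a discounted infinite-horizon MDP; the transition matrices and rewards below are assumed toy data.

```python
# Illustrative sketch: value iteration for a discounted infinite-horizon MDP.
import numpy as np

def value_iteration(P, r, gamma=0.95, tol=1e-8):
    """P: list over actions of (n, n) transition matrices; r: (n, n_actions) rewards."""
    n, n_actions = r.shape
    V = np.zeros(n)
    while True:
        # Bellman optimality update: Q(s, a) = r(s, a) + gamma * sum_{s'} P(s'|s, a) V(s')
        Q = np.stack([r[:, a] + gamma * (P[a] @ V) for a in range(n_actions)], axis=1)
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=1)     # near-optimal value and greedy policy
        V = V_new

# Toy usage with hypothetical data: 4 states, 2 actions.
rng = np.random.default_rng(1)
P = [rng.dirichlet(np.ones(4), size=4) for _ in range(2)]
r = rng.uniform(size=(4, 2))
V, policy = value_iteration(P, r)
print("greedy policy:", policy)
```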
Publisher: Springer US, Springer New York, 2012
ISBN 10: 1461352487 ISBN 13: 9781461352488
Language: English
Seller: AHA-BUCH GmbH, Einbeck, Germany
Paperback. Condition: New. Print on demand, new item - printed after ordering - Eugene A. Feinberg, Adam Shwartz. This volume deals with the theory of Markov Decision Processes (MDPs) and their applications. Each chapter was written by a leading expert in the respective area. The papers cover major research areas and methodologies, and discuss open questions and future research directions. The papers can be read independently, with the basic notation and concepts of Section 1.2. Most chapters should be accessible by graduate or advanced undergraduate students in fields of operations research, electrical engineering, and computer science. 1.1 AN OVERVIEW OF MARKOV DECISION PROCESSES: The theory of Markov Decision Processes, also known under several other names including sequential stochastic optimization, discrete-time stochastic control, and stochastic dynamic programming, studies sequential optimization of discrete-time stochastic systems. The basic object is a discrete-time stochastic system whose transition mechanism can be controlled over time. Each control policy defines the stochastic process and values of objective functions associated with this process. The goal is to select a 'good' control policy. In real life, decisions that humans and computers make on all levels usually have two types of impacts: (i) they cost or save time, money, or other resources, or they bring revenues, as well as (ii) they have an impact on the future, by influencing the dynamics. In many situations, decisions with the largest immediate profit may not be good in view of future events. MDPs model this paradigm and provide results on the structure and existence of good policies and on methods for their calculation.