
Adaptive Dynamic Programming for Control: Algorithms and Stability (Communications and Control Engineering) - Hardcover

 
9781447147565: Adaptive Dynamic Programming for Control: Algorithms and Stability (Communications and Control Engineering)

Synopsis

There are many methods of stable controller design for nonlinear systems. In seeking to go beyond the minimum requirement of stability, Adaptive Dynamic Programming for Control approaches the challenging topic of optimal control for nonlinear systems using the tools of adaptive dynamic programming (ADP). The range of systems treated is extensive: affine, switched, singularly perturbed and time-delay nonlinear systems are discussed, as are the uses of neural networks and techniques of value and policy iteration. The text features three main aspects of ADP in which the methods proposed for stabilization and for tracking and games benefit from the incorporation of optimal control methods:
• infinite-horizon control, for which the difficulty of solving partial differential Hamilton-Jacobi-Bellman equations directly is overcome, with proof that the iterative value function updating sequence converges to the infimum of all the value functions obtained by admissible control law sequences (a minimal sketch of such an iteration follows this synopsis);
• finite-horizon control, implemented in discrete-time nonlinear systems showing the reader how to obtain suboptimal control solutions within a fixed number of control steps and with results more easily applied in real systems than those usually gained from infinite-horizon control;
• nonlinear games, for which a pair of mixed optimal policies is derived, solving games both when the saddle point does not exist and, when it does, avoiding the saddle point's existence conditions.
Non-zero-sum games are studied in the context of a single network scheme in which policies are obtained that guarantee system stability and minimize the individual performance function, yielding a Nash equilibrium.
In order to make the coverage suitable for the student as well as for the expert reader, Adaptive Dynamic Programming for Control:
• establishes the fundamental theory involved clearly with each chapter devoted to a clearly identifiable control paradigm;
• demonstrates convergence proofs of the ADP algorithms to deepen understanding of the derivation of stability and convergence with the iterative computational methods used; and
• shows how ADP methods can be put to use both in simulation and in real applications.
This text will be of considerable interest to researchers interested in optimal control and its applications in operations research, applied mathematics, computational intelligence and engineering. Graduate students working in control and operations research will also find the ideas presented here to be a source of powerful methods for furthering their study.
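The infinite-horizon iteration referenced above can be pictured with a small numerical sketch. The Python fragment below is not taken from the book: it replaces the neural-network approximators used there with a simple state grid, and the scalar dynamics, cost weights and grid ranges are illustrative assumptions chosen only to show the zero-initialised value-iteration loop and the greedy policy update.

    import numpy as np

    # Illustrative scalar affine system x_{k+1} = f(x_k) + g(x_k) * u_k (assumed, not from the book)
    f = lambda x: 0.8 * np.sin(x)        # drift term (assumption)
    g = lambda x: 1.0 + 0.1 * x ** 2     # input gain (assumption)

    Q, R = 1.0, 1.0                      # quadratic stage cost x'Qx + u'Ru (assumption)
    xs = np.linspace(-2.0, 2.0, 201)     # state grid standing in for a critic network
    us = np.linspace(-2.0, 2.0, 81)      # finite set of candidate controls

    V = np.zeros_like(xs)                # V_0 = 0: the zero-initialised value sequence

    X, U = np.meshgrid(xs, us, indexing="ij")
    for i in range(500):
        X_next = f(X) + g(X) * U                         # successor state for every (x, u) pair
        cost = Q * X ** 2 + R * U ** 2 + np.interp(X_next, xs, V)
        V_new = cost.min(axis=1)                         # V_{i+1}(x) = min_u [stage cost + V_i(x')]
        converged = np.max(np.abs(V_new - V)) < 1e-6
        V = V_new
        if converged:
            break

    policy = us[cost.argmin(axis=1)]                     # greedy control law u_i(x) on the grid
    print(f"stopped after {i + 1} sweeps; V(0) = {V[np.argmin(np.abs(xs))]:.4f}")

In the book itself the same kind of update is carried out with neural-network approximators rather than a lookup table, which is what lets the approach scale beyond toy state dimensions.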

The synopsis may refer to a different edition of this title.

From the Back Cover


The Communications and Control Engineering series reports major technological advances which have potential for great impact in the fields of communication and control. It reflects research in industrial and academic institutions around the world so that the readership can exploit new possibilities as they become available.

"About this title" may refer to a different edition of this title.

Free shipping within Germany

Shipping destinations, costs & delivery time

Other popular editions of the same title

Search results for Adaptive Dynamic Programming for Control: Algorithms...

Seller photo

Huaguang Zhang
ISBN 10: 1447147561 ISBN 13: 9781447147565
New Book

Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany

Seller rating: 5 out of 5 stars

Book. Condition: New. New stock. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 440 pp. English. Item no. 9781447147565

Contact seller

Buy new

EUR 160,49
Convert currency
Shipping: Free
Within Germany
Shipping destinations, costs & delivery time

Quantity: 2 available

Add to basket

Seller photo

Huaguang Zhang
ISBN 10: 1447147561 ISBN 13: 9781447147565
New Hardcover

Seller: AHA-BUCH GmbH, Einbeck, Germany

Seller rating: 5 out of 5 stars

Book. Condition: New. Print-on-demand new stock - printed after ordering. Item no. 9781447147565

Contact seller

Buy new

EUR 164,49
Convert currency
Shipping: Free
Within Germany
Shipping destinations, costs & delivery time

Quantity: 1 available

Add to basket

Sample image for this ISBN

Zhang, Huaguang; Liu, Derong; Luo, Yanhong; Wang, Ding
Publisher: Springer, 2012
ISBN 10: 1447147561 ISBN 13: 9781447147565
New Hardcover

Seller: Ria Christie Collections, Uxbridge, United Kingdom

Seller rating: 5 out of 5 stars

Condition: New. In. Item no. ria9781447147565_new

Contact seller

Buy new

EUR 164,48
Convert currency
Shipping: EUR 5,71
From United Kingdom to Germany
Shipping destinations, costs & delivery time

Quantity: More than 20 available

Add to basket

Seller photo

Huaguang Zhang
Publisher: Springer, 2012
ISBN 10: 1447147561 ISBN 13: 9781447147565
New Hardcover

Seller: AHA-BUCH GmbH, Einbeck, Germany

Seller rating: 5 out of 5 stars

Hardcover. Condition: New. New stock, import quality, in stock. Item no. INF1000775230

Contact seller

Buy new

EUR 245,00
Convert currency
Shipping: Free
Within Germany
Shipping destinations, costs & delivery time

Quantity: 1 available

Add to basket

Sample image for this ISBN

Zhang, Huaguang; Liu, Derong; Luo, Yanhong; Wang, Ding
Publisher: Springer Verlag, 2012
ISBN 10: 1447147561 ISBN 13: 9781447147565
New Hardcover

Seller: Revaluation Books, Exeter, United Kingdom

Seller rating: 5 out of 5 stars

Hardcover. Condition: Brand New. 2013 edition. 439 pages. 9.25 x 6.25 x 1.25 inches. In stock. Item no. x-1447147561

Contact seller

Buy new

EUR 240,31
Convert currency
Shipping: EUR 11,47
From United Kingdom to Germany
Shipping destinations, costs & delivery time

Quantity: 2 available

Add to basket