
Application of Fuzzy State Aggregation and Policy Hill Climbing to Multi-Agent Systems in Stochastic Environments - Softcover

 

Synopsis

Reinforcement learning is one of the more attractive machine learning technologies, due to its unsupervised learning structure and its ability to continue learning even as the operating environment changes. Applying this learning to multiple cooperative software agents (a multi-agent system) not only allows each individual agent to learn from its own experience, but also opens up the opportunity for the individual agents to learn from the other agents in the system, thus accelerating the rate of learning. This research presents the novel use of fuzzy state aggregation (FSA) as the means of function approximation, combined with the fast policy hill climbing (PHC) methods Win or Lose Fast (WoLF) and policy-dynamics-based WoLF (PD-WoLF). The combination of fast policy hill climbing and fuzzy state aggregation function approximation is tested in two stochastic environments: Tileworld and the simulated robot soccer domain, RoboCup. The Tileworld results demonstrate that a single agent using the combination of FSA and PHC learns more quickly and performs better than an agent using fuzzy state aggregation with Q-learning alone. Results from the multi-agent RoboCup domain likewise show that the policy hill climbing algorithms outperform Q-learning alone in a multi-agent environment. Learning is further enhanced by allowing the agents to share their experience through weighted strategy sharing.

The synopsis may refer to a different edition of this title.
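To make the approach concrete, below is a minimal Python sketch of policy hill climbing with the WoLF ("Win or Lose Fast") step-size rule, using fuzzy state aggregation as the function approximator. It follows the standard WoLF-PHC formulation (separate win/lose learning rates, with an average policy as the winning test) rather than Wardell's actual implementation; the class name, the triangular membership function, and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

def triangular_memberships(x, centers):
    """Fuzzy state aggregation for a 1-D state: membership in each fuzzy
    set decays linearly with distance to the set's center (an assumed,
    illustrative choice of membership function)."""
    width = centers[1] - centers[0]
    mu = np.clip(1.0 - np.abs(centers - x) / width, 0.0, 1.0)
    return mu / mu.sum()

class FSAWoLFPHC:
    """Sketch of WoLF policy hill climbing over fuzzy aggregate states."""

    def __init__(self, n_fuzzy_states, n_actions, alpha=0.1, gamma=0.9,
                 delta_win=0.01, delta_lose=0.04):
        self.nA = n_actions
        self.alpha, self.gamma = alpha, gamma                     # Q-learning rates
        self.delta_win, self.delta_lose = delta_win, delta_lose   # WoLF step sizes
        self.Q = np.zeros((n_fuzzy_states, n_actions))
        self.pi = np.full((n_fuzzy_states, n_actions), 1.0 / n_actions)
        self.pi_avg = self.pi.copy()                              # average policy
        self.visits = np.zeros(n_fuzzy_states)

    def q_values(self, mu):
        # Function approximation: blend per-aggregate Q rows by membership.
        return mu @ self.Q

    def act(self, mu, rng):
        probs = mu @ self.pi
        return rng.choice(self.nA, p=probs / probs.sum())

    def update(self, mu, a, r, mu_next):
        # Q-learning step, with the TD error spread by membership degree.
        td = r + self.gamma * np.max(self.q_values(mu_next)) - self.q_values(mu)[a]
        self.Q[:, a] += self.alpha * mu * td

        for s in np.nonzero(mu)[0]:
            self.visits[s] += 1
            self.pi_avg[s] += (self.pi[s] - self.pi_avg[s]) / self.visits[s]
            # WoLF rule: small step when "winning" (current policy beats the
            # average policy in expected value), large step when losing.
            winning = self.pi[s] @ self.Q[s] > self.pi_avg[s] @ self.Q[s]
            delta = self.delta_win if winning else self.delta_lose
            # PHC: shift probability mass toward the greedy action.
            best = np.argmax(self.Q[s])
            step = np.minimum(self.pi[s], delta / (self.nA - 1))
            step[best] = 0.0
            self.pi[s] -= step
            self.pi[s, best] += step.sum()
```

For example, with five fuzzy sets over a one-dimensional state in [0, 1]:

```python
rng = np.random.default_rng(0)
centers = np.linspace(0.0, 1.0, 5)
agent = FSAWoLFPHC(n_fuzzy_states=5, n_actions=3)
mu = triangular_memberships(0.37, centers)
a = agent.act(mu, rng)
agent.update(mu, a, r=1.0, mu_next=triangular_memberships(0.42, centers))
```

PD-WoLF differs only in the winning test, judging win/lose from the dynamics of the policy itself (the change in the policy and its trend) instead of an average policy, and weighted strategy sharing would additionally mix each agent's learned values with its teammates' in proportion to their relative performance; neither variation is shown here.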


  • Publisher: BiblioScholar
  • Publication date: 2012
  • ISBN-10: 1288408994
  • ISBN-13: 9781288408993
  • Binding: Softcover
  • Language: English
  • Number of pages: 82


Search results for Application of Fuzzy State Aggregation and Policy Hill...


Wardell, Dean C
Publisher: BiblioScholar, 2012
ISBN-10: 1288408994, ISBN-13: 9781288408993
New, Softcover

Seller: Ria Christie Collections, Uxbridge, United Kingdom

Seller rating: 5 out of 5 stars

Condition: New. In. Item no. ria9781288408993_new

Price: EUR 54.58
Shipping: EUR 5.92, from United Kingdom to Germany
Quantity: more than 20 available


Wardell, Dean C.
Publisher: BiblioScholar, 2012
ISBN-10: 1288408994, ISBN-13: 9781288408993
New, Softcover

Seller: moluna, Greven, Germany

Seller rating: 5 out of 5 stars

Condition: New. Item no. 6561706

Price: EUR 61.74
Shipping: free within Germany
Quantity: more than 20 available


Dean C. Wardell
ISBN-10: 1288408994, ISBN-13: 9781288408993
New, Paperback

Seller: AHA-BUCH GmbH, Einbeck, Germany

Seller rating: 5 out of 5 stars

Paperback. Condition: New. Brand new. Item no. 9781288408993

Price: EUR 80.96
Shipping: free within Germany
Quantity: 2 available