Seller: Ria Christie Collections, Uxbridge, United Kingdom
EUR 49.22
Quantity: More than 20 available
Condition: New.
Seller: Ria Christie Collections, Uxbridge, United Kingdom
EUR 55.56
Quantity: More than 20 available
Condition: New.
Publisher: Springer International Publishing, Springer Nature Switzerland, 2023
ISBN 10: 3031190696 ISBN 13: 9783031190698
Language: English
Seller: AHA-BUCH GmbH, Einbeck, Germany
EUR 48.14
Quantity: 1 available
Paperback. Condition: New. Print on demand; printed after ordering. This book discusses state-of-the-art stochastic optimization algorithms for distributed machine learning and analyzes their convergence speed. The book first introduces stochastic gradient descent (SGD) and its distributed version, synchronous SGD, in which the task of computing gradients is divided across several worker nodes. The author discusses several algorithms that improve the scalability and communication efficiency of synchronous SGD, such as asynchronous SGD, local-update SGD, quantized and sparsified SGD, and decentralized SGD. For each of these algorithms, the book analyzes its error-versus-iterations convergence and the runtime spent per iteration. The author shows that each of these strategies for reducing communication or synchronization delays encounters a fundamental trade-off between error and runtime.
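To make the core idea of synchronous SGD from the description concrete, here is a minimal numpy sketch (not code from the book; the least-squares problem, worker count, batch size, and step size are invented for illustration). Each simulated worker computes a stochastic gradient on its own data shard, and the averaged gradient drives one model update per iteration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_per_worker, dim = 4, 256, 10

# Synthetic least-squares data, split into one shard per worker (assumption:
# this toy problem stands in for the book's general optimization setting).
w_true = rng.normal(size=dim)
shards = []
for _ in range(n_workers):
    X = rng.normal(size=(n_per_worker, dim))
    y = X @ w_true + 0.1 * rng.normal(size=n_per_worker)
    shards.append((X, y))

def worker_gradient(w, shard, batch=32):
    """Mini-batch stochastic gradient of 0.5 * ||Xw - y||^2 on one shard."""
    X, y = shard
    idx = rng.choice(len(y), size=batch, replace=False)
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / batch

w = np.zeros(dim)
lr = 0.1
for it in range(200):
    # Synchronous step: wait for all workers, then average their gradients.
    grads = [worker_gradient(w, s) for s in shards]
    w -= lr * np.mean(grads, axis=0)

print("final parameter error:", np.linalg.norm(w - w_true))
```

Each variant the description lists (asynchronous, local-update, quantized/sparsified, decentralized SGD) relaxes the "wait for all workers, average everything" step in some way, which is where the error-versus-runtime trade-off the author analyzes comes from.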
Publisher: Springer International Publishing, 2022
ISBN 10: 3031190661 ISBN 13: 9783031190667
Language: English
Seller: Buchpark, Trebbin, Germany
EUR 33.39
Quantity: 2 available
Condition: Excellent | Pages: 144 | Language: English | Product type: Books.
Publisher: Springer International Publishing, Springer Nature Switzerland, 2022
ISBN 10: 3031190661 ISBN 13: 9783031190667
Language: English
Seller: AHA-BUCH GmbH, Einbeck, Germany
EUR 48.14
Quantity: 1 available
Hardcover. Condition: New. Print on demand; printed after ordering.