Language: English
Publisher: Springer-Nature New York Inc, 2023
ISBN 10: 3031190696 ISBN 13: 9783031190698
Seller: Revaluation Books, Exeter, United Kingdom
EUR 68.75
Quantity: 2 available
Paperback. Condition: Brand New. 140 pages. 9.45 x 6.61 x 0.33 inches. In Stock.
Language: English
Publisher: Springer International Publishing, Springer Nature Switzerland, Nov 2023
ISBN 10: 3031190696 ISBN 13: 9783031190698
Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
Paperback. Condition: New. New stock. This book discusses state-of-the-art stochastic optimization algorithms for distributed machine learning and analyzes their convergence speed. The book first introduces stochastic gradient descent (SGD) and its distributed version, synchronous SGD, where the task of computing gradients is divided across several worker nodes. The author discusses several algorithms that improve the scalability and communication efficiency of synchronous SGD, such as asynchronous SGD, local-update SGD, quantized and sparsified SGD, and decentralized SGD. For each of these algorithms, the book analyzes its error-versus-iterations convergence and the runtime spent per iteration. The author shows that each of these strategies for reducing communication or synchronization delays encounters a fundamental trade-off between error and runtime. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 144 pp. English.
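The description's core setup, synchronous SGD with gradient computation divided across several worker nodes and the results averaged each iteration, can be illustrated with a short self-contained simulation. This is a minimal sketch, not code from the book; names such as num_workers and local_gradient are illustrative, and it assumes a simple least-squares objective:

```python
# Minimal sketch of synchronous SGD: each (simulated) worker computes a
# stochastic gradient on its own data shard, and the server averages the
# gradients before updating the model. Illustrative only, not from the book.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data, sharded across workers
num_workers, samples_per_worker, dim = 4, 64, 10
w_true = rng.normal(size=dim)
shards = []
for _ in range(num_workers):
    X = rng.normal(size=(samples_per_worker, dim))
    y = X @ w_true + 0.1 * rng.normal(size=samples_per_worker)
    shards.append((X, y))

def local_gradient(w, shard, batch_size=16):
    """Stochastic gradient of the mean-squared error on one worker's shard."""
    X, y = shard
    idx = rng.choice(len(y), size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    return 2.0 * Xb.T @ (Xb @ w - yb) / batch_size

w = np.zeros(dim)
lr = 0.05
for step in range(200):
    # Synchronous step: wait for every worker's gradient, then average.
    grads = [local_gradient(w, shard) for shard in shards]
    w -= lr * np.mean(grads, axis=0)

print("distance to true model:", np.linalg.norm(w - w_true))
```

The variants the description lists (asynchronous, local-update, quantized and sparsified, and decentralized SGD) all modify the "wait for every worker, then average" step to reduce communication or synchronization delay, which is where the error-versus-runtime trade-off the book analyzes comes from.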
Language: English
Publisher: Springer International Publishing, Springer Nature Switzerland, Nov 2022
ISBN 10: 3031190661 ISBN 13: 9783031190667
Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
Hardcover. Condition: New. New stock. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 144 pp. English.
Language: English
Publisher: Springer International Publishing, Springer Nature Switzerland, 2023
ISBN 10: 3031190696 ISBN 13: 9783031190698
Seller: AHA-BUCH GmbH, Einbeck, Germany
Paperback. Condition: New. Print-on-demand new stock, printed after ordering.
Language: English
Publisher: Springer International Publishing, Springer Nature Switzerland, 2022
ISBN 10: 3031190661 ISBN 13: 9783031190667
Seller: AHA-BUCH GmbH, Einbeck, Germany
Hardcover. Condition: New. Print-on-demand new stock, printed after ordering.
Paperback. Condition: New. Optimization Algorithms for Distributed Machine Learning | Gauri Joshi | Paperback | xiii | English | 2023 | Springer | EAN 9783031190698 | Responsible person for the EU: Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg, juergen[dot]hartmann[at]springer[dot]com | Seller: preigu.
EUR 31.32
Quantity: 5 available
Condition: Excellent | Language: English | Product type: Books.