This book discusses state-of-the-art stochastic optimization algorithms for distributed machine learning and analyzes their convergence speed. The book first introduces stochastic gradient descent (SGD) and its distributed version, synchronous SGD, where the task of computing gradients is divided across several worker nodes. The author discusses several algorithms that improve the scalability and communication efficiency of synchronous SGD, such as asynchronous SGD, local-update SGD, quantized and sparsified SGD, and decentralized SGD. For each of these algorithms, the book analyzes its error-versus-iterations convergence and the runtime spent per iteration. The author shows that each of these strategies for reducing communication or synchronization delays encounters a fundamental trade-off between error and runtime.
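As a rough illustration of the synchronous SGD setup described above, here is a minimal Python sketch (not taken from the book) in which K simulated workers each compute a gradient on their own data shard of a toy least-squares problem, and the server averages the K gradients before every model update. The toy problem, variable names, and step size are assumptions made for this example.

import numpy as np

# Minimal synchronous-SGD sketch (illustrative only, not the book's code):
# each of K simulated workers computes the gradient on its own data shard,
# and the server averages the K gradients before every update.
rng = np.random.default_rng(0)
n, d, K = 1000, 10, 4                        # samples, features, workers
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)    # noisy linear observations

shards = np.array_split(np.arange(n), K)     # each worker owns one data shard

def worker_gradient(w, idx):
    # Gradient of 0.5 * ||X_i w - y_i||^2 / m_i on one worker's shard.
    Xi, yi = X[idx], y[idx]
    return Xi.T @ (Xi @ w - yi) / len(idx)

w = np.zeros(d)
lr = 0.1
for t in range(200):
    grads = [worker_gradient(w, idx) for idx in shards]  # computed in parallel in practice
    w -= lr * np.mean(grads, axis=0)                     # synchronous averaging step

print("parameter error:", np.linalg.norm(w - w_true))

In a real distributed setting the inner loop is replaced by parallel workers and an all-reduce or parameter-server aggregation step; the trade-offs the book analyzes arise from how that synchronization is relaxed.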
Le informazioni nella sezione "Riassunto" possono far riferimento a edizioni diverse di questo titolo.
Gauri Joshi, Ph.D., is an Associate Professor in the ECE department at Carnegie Mellon University. Dr. Joshi received her Ph.D. from MIT EECS. Her current research is on designing algorithms for federated learning, distributed optimization, and parallel computing. Her awards and honors include being named one of MIT Technology Review's 35 Innovators Under 35 (2022), the NSF CAREER Award (2021), the ACM SIGMETRICS Best Paper Award (2020), the Best Thesis Prize in Computer Science at MIT (2012), and the Institute Gold Medal of IIT Bombay (2010).
Le informazioni nella sezione "Su questo libro" possono far riferimento a edizioni diverse di questo titolo.
EUR 9.90 shipping from Germany to Italy
EUR 9.70 shipping from Germany to Italy
From: Buchpark, Trebbin, Germany
Condition: Excellent | Pages: 144 | Language: English | Product type: Books. Item number 40959855/1
Quantity: 3 available
From: moluna, Greven, Germany
Hardcover. Condition: New. This is a print-on-demand item and will be printed for you after your order. Item number 706732221
Quantity: More than 20 available
From: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Book. Condition: New. This item is printed on demand; it takes 3-4 days longer. New stock. 144 pp. English. Item number 9783031190667
Quantity: 2 available
From: AHA-BUCH GmbH, Einbeck, Germany
Book. Condition: New. Print on demand; new stock printed after ordering. Item number 9783031190667
Quantity: 1 available
From: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
Book. Condition: New. New stock. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 144 pp. English. Item number 9783031190667
Quantity: 2 available
From: Books Puddle, New York, NY, U.S.A.
Condition: New. 1st ed. 2023 edition. NO-PA16APR2015-KAP. Item number 26396295639
Quantity: 4 available
From: Majestic Books, Hounslow, United Kingdom
Condition: New. Print on demand; this item is printed on demand. Item number 401162760
Quantity: 4 available
From: Biblios, Frankfurt am Main, Hesse, Germany
Condition: New. Print on demand. Item number 18396295645
Quantity: 4 available