This book discusses state-of-the-art stochastic optimization algorithms for distributed machine learning and analyzes their convergence speed. The book first introduces stochastic gradient descent (SGD) and its distributed version, synchronous SGD, where the task of computing gradients is divided across several worker nodes. The author then discusses several algorithms that improve the scalability and communication efficiency of synchronous SGD, such as asynchronous SGD, local-update SGD, quantized and sparsified SGD, and decentralized SGD. For each of these algorithms, the book analyzes its error-versus-iterations convergence and the runtime spent per iteration. The author shows that each of these strategies for reducing communication or synchronization delays faces a fundamental trade-off between error and runtime.
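As a rough illustration of the synchronous setup described above, here is a minimal sketch (not taken from the book) that simulates several workers each computing a stochastic gradient on its shard of a mini-batch, with the server averaging the gradients before taking a single SGD step. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def local_gradient(w, X, y):
    """Gradient of the mean squared error 0.5*||Xw - y||^2 / n on one worker's shard."""
    residual = X @ w - y
    return X.T @ residual / len(y)

def synchronous_sgd(X, y, num_workers=4, lr=0.1, num_iters=100, batch_size=32, seed=0):
    """One parameter-server round per iteration: each worker computes a
    stochastic gradient on its own mini-batch shard, and the server averages
    them before applying a single SGD update."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(num_iters):
        # Sample a global mini-batch and split it evenly across the workers.
        idx = rng.choice(len(y), size=batch_size, replace=False)
        shards = np.array_split(idx, num_workers)
        # Each worker computes a gradient on its shard (simulated serially here).
        grads = [local_gradient(w, X[s], y[s]) for s in shards]
        # Synchronization barrier: the server averages the gradients, then updates.
        w -= lr * np.mean(grads, axis=0)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 10))
    w_true = rng.normal(size=10)
    y = X @ w_true + 0.01 * rng.normal(size=1000)
    w_est = synchronous_sgd(X, y)
    print("estimation error:", np.linalg.norm(w_est - w_true))
```

In this simplified picture, the runtime per iteration is set by the slowest worker at the synchronization barrier, which is exactly the bottleneck that the asynchronous, local-update, compressed, and decentralized variants discussed in the book try to relax.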