From: Books From California, Simi Valley, CA, U.S.A.
EUR 91,94
Quantity: 1 available
Hardcover. Condition: Very Good.
EUR 185,78
Quantity: More than 20 available
Condition: New.
Publisher: Springer Verlag, Singapore, Singapore, 2024
ISBN 10: 9819950678 ISBN 13: 9789819950676
Language: English
From: Grand Eagle Retail, Bensenville, IL, U.S.A.
EUR 188,12
Quantity: 1 available
Hardcover. Condition: New. Deep learning has achieved impressive results in image classification, computer vision and natural language processing. To achieve better performance, deeper and wider networks have been designed, which increases the demand for computational resources. The number of floating-point operations (FLOPs) has grown dramatically with larger networks, and this has become an obstacle for deploying convolutional neural networks (CNNs) on mobile and embedded devices. In this context, our book focuses on CNN compression and acceleration, which are important topics for the research community. We describe numerous methods, including parameter quantization, network pruning, low-rank decomposition and knowledge distillation. More recently, to reduce the burden of handcrafted architecture design, neural architecture search (NAS) has been used to automatically build neural networks by searching over a vast architecture space. Our book also introduces NAS owing to its state-of-the-art performance in various applications, such as image classification and object detection. We also describe extensive applications of compressed deep models to image classification, speech recognition, object detection and tracking. These topics can help researchers better understand the usefulness and potential of network compression in practical applications. Interested readers should have a basic knowledge of machine learning and deep learning to better understand the methods described in this book. Shipping may be from multiple locations in the US or from the UK, depending on stock availability.
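The description above names several compression techniques without illustrating any of them. As a purely illustrative aside that is not taken from the book, the following minimal Python/NumPy sketch shows the simplest form of one named idea, magnitude-based weight pruning; the layer shape, sparsity level and thresholding rule are arbitrary assumptions for demonstration.

import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights until roughly `sparsity` of them are zero.

    This is the simplest global magnitude-pruning criterion; practical pipelines
    usually prune iteratively and fine-tune the network between pruning steps.
    """
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # Threshold = magnitude of the k-th smallest absolute weight.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Hypothetical example: prune a random 256x128 dense layer to ~90% sparsity.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 128)).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.9)
print(f"nonzero fraction after pruning: {np.count_nonzero(w_pruned) / w.size:.3f}")

An equally small sketch of parameter quantization would round each weight onto a fixed grid, e.g. np.round(w / step) * step for a chosen step size; both ideas reduce storage and, given suitable sparse or low-precision kernels, computation.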
From: California Books, Miami, FL, U.S.A.
EUR 188,19
Quantity: More than 20 available
Condition: New.
From: Ria Christie Collections, Uxbridge, United Kingdom
EUR 177,48
Quantity: More than 20 available
Condition: New.
EUR 177,47
Quantity: More than 20 available
Condition: New.
EUR 196,60
Quantity: More than 20 available
Condition: As New. Unread book in perfect condition.
EUR 196,64
Quantity: More than 20 available
Condition: As New. Unread book in perfect condition.
From: Books Puddle, New York, NY, U.S.A.
EUR 218,91
Quantity: 4 available
Condition: New.
Publisher: Springer Nature Singapore, Feb 2025
ISBN 10: 9819950708 ISBN 13: 9789819950706
Language: English
From: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
EUR 171,19
Quantity: 2 available
Paperback. Condition: New. New stock. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 272 pp. English.
Publisher: Springer Nature Singapore, Feb 2024
ISBN 10: 9819950678 ISBN 13: 9789819950676
Language: English
From: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
EUR 171,19
Quantity: 2 available
Book. Condition: New. New stock. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 272 pp. English.
Publisher: Springer Nature Singapore, 2025
ISBN 10: 9819950708 ISBN 13: 9789819950706
Language: English
From: AHA-BUCH GmbH, Einbeck, Germany
EUR 175,77
Quantity: 1 available
Paperback. Condition: New. Print on demand; new stock, printed after ordering.
Publisher: Springer Nature Singapore, 2024
ISBN 10: 9819950678 ISBN 13: 9789819950676
Language: English
From: AHA-BUCH GmbH, Einbeck, Germany
EUR 175,09
Quantity: 1 available
Book. Condition: New. Print on demand; new stock, printed after ordering.
Publisher: Springer-Nature New York Inc, 2024
ISBN 10: 9819950678 ISBN 13: 9789819950676
Language: English
From: Revaluation Books, Exeter, United Kingdom
EUR 246,78
Quantity: 2 available
Hardcover. Condition: Brand New. 269 pages. 9.25 x 6.10 x 9.21 inches. In stock.
Publisher: Springer Verlag, Singapore, Singapore, 2024
ISBN 10: 9819950678 ISBN 13: 9789819950676
Language: English
From: AussieBookSeller, Truganina, VIC, Australia
EUR 295,82
Quantity: 1 available
Hardcover. Condition: New. Shipping may be from our Sydney, NSW warehouse or from our UK or US warehouse, depending on stock availability.
Publisher: Springer, Berlin | Springer Nature Singapore | Springer, 2023
ISBN 10: 9819950678 ISBN 13: 9789819950676
Language: English
From: moluna, Greven, Germany
EUR 144,94
Quantity: More than 20 available
Condition: New. This item is printed on demand and will be produced after you place your order.
Publisher: Springer Nature Singapore, Nov 2023
ISBN 10: 9819950678 ISBN 13: 9789819950676
Language: English
From: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 171,19
Quantity: 2 available
Book. Condition: New. This item is printed on demand and takes 3-4 days longer. New stock. 260 pp. English.
Publisher: Springer, Berlin | Springer Nature Singapore | Springer, 2025
ISBN 10: 9819950708 ISBN 13: 9789819950706
Language: English
From: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 171,19
Quantity: 2 available
Paperback. Condition: New. This item is printed on demand and takes 3-4 days longer. New stock. 260 pp. English.
From: Majestic Books, Hounslow, United Kingdom
EUR 231,48
Quantity: 4 available
Condition: New. Print on demand.
From: Biblios, Frankfurt am Main, HESSE, Germany
EUR 235,17
Quantity: 4 available
Condition: New. Print on demand.
Publisher: CRC Press, 2023
ISBN 10: 103245248X ISBN 13: 9781032452487
From: moluna, Greven, Germany
EUR 153,21
Quantity: More than 20 available
Condition: New. This item is printed on demand and will be produced after you place your order. Baochang Zhang is a full Professor with the Institute of Artificial Intelligence, Beihang University, Beijing, China. He was selected for the Program for New Century Excellent Talents in University of the Ministry of Education of China, and was also selected as Academic .