Neural Networks with Model Compression / Computational Intelligence Methods and Applications (PDF)
Deep learning has achieved impressive results in image classification, computer vision, and natural language processing. To achieve better performance, deeper and wider networks have been designed, which increases the demand for computational resources. The number of floating-point operations (FLOPs) has grown dramatically with larger networks, and this has become an obstacle to deploying convolutional neural networks (CNNs) on mobile and embedded devices. In this context, our book focuses on CNN compression and acceleration, which are important topics for the research community. We describe numerous methods, including parameter quantization, network pruning, low-rank decomposition, and knowledge distillation. More recently, to reduce the burden of handcrafted architecture design, neural architecture search (NAS) has been used to build neural networks automatically by searching over a vast architecture space. Our book also introduces NAS, owing to its state-of-the-art performance in various applications such as image classification and object detection. We further describe extensive applications of compressed deep models in image classification, speech recognition, object detection, and tracking. These topics can help researchers better understand the usefulness and potential of network compression in practical applications. Readers should have basic knowledge of machine learning and deep learning to follow the methods described in this book.
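To give a flavor of one of the compression techniques the book surveys, here is a minimal sketch of magnitude-based network pruning: the smallest-magnitude weights of a layer are set to zero until a target sparsity is reached. The function name and the toy weight values below are illustrative, not taken from the book.

```python
def magnitude_prune(weights, sparsity):
    """Return a pruned copy of `weights` (a flat list of floats) with the
    `sparsity` fraction of smallest-magnitude entries set to 0.0."""
    k = int(len(weights) * sparsity)  # number of weights to zero out
    if k == 0:
        return list(weights)
    # Threshold = k-th smallest absolute value in the layer.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    pruned, zeroed = [], 0
    for w in weights:
        if abs(w) <= threshold and zeroed < k:
            pruned.append(0.0)  # prune: weight magnitude is below threshold
            zeroed += 1
        else:
            pruned.append(w)    # keep: weight is among the largest magnitudes
    return pruned

# Example: prune half of a toy 8-weight layer.
weights = [0.05, -1.2, 0.7, -0.01, 0.3, 2.1, -0.4, 0.02]
pruned = magnitude_prune(weights, 0.5)
print(pruned)  # [0.0, -1.2, 0.7, 0.0, 0.0, 2.1, -0.4, 0.0]
```

In practice, pruning like this is applied per layer (or globally across layers) and is followed by fine-tuning to recover accuracy; structured variants remove whole filters or channels so that standard hardware can realize the speedup.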
Tiancheng Wang is pursuing his Ph.D. degree under the supervision of Baochang Zhang. His research topics include model compression and trustworthy deep learning, and he has published several high-quality papers on deep model compression. He was selected as a visiting student at Zhongguancun Laboratory, Beijing, China.
Sheng Xu is pursuing his Ph.D. degree under the supervision of Baochang Zhang. His research focuses on low-bit model compression, and he is one of the most active researchers in the field of binary neural networks. He has published more than 10 top-tier papers in computer vision, two of which were selected as CVPR oral papers.
Dr. David Doermann is a Professor of Empire Innovation at the University at Buffalo (UB) and the Director of the University at Buffalo Artificial Intelligence Institute. Prior to coming to UB, he was a program manager at the Defense Advanced Research Projects Agency (DARPA), where he developed, selected, and oversaw approximately $150 million in research and transition funding in the areas of computer vision, human
- Authors: Baochang Zhang, Tiancheng Wang, Sheng Xu, David Doermann
- 2024, 1st ed., 260 pages, English
- Publisher: Springer Nature Singapore
- ISBN-10: 9819950686
- ISBN-13: 9789819950683
- Publication date: 05.02.2024
- File format: PDF
- Size: 10 MB
- No copy protection (DRM-free)
- Text-to-speech supported