Low-Power Computer Vision

Improve the Efficiency of Artificial Intelligence

Editors: Jaeyoun Kim, George K. Thiruvathukal, Bo Chen, Yiran Chen, and Yung-Hsiang Lu

Publisher: Taylor & Francis Ltd

Publication date: 10/2024

Pages: 438

Binding: Paperback

ISBN: 9780367755287

Section I Introduction

Book Introduction
Yung-Hsiang Lu, George K. Thiruvathukal, Jaeyoun Kim, Yiran Chen, and Bo Chen

History of Low-Power Computer Vision Challenge
Yung-Hsiang Lu, Xiao Hu, Yiran Chen, Joe Spisak, Gaurav Aggarwal, Mike Zheng Shou, and George K. Thiruvathukal

Survey on Energy-Efficient Deep Neural Networks for Computer Vision
Abhinav Goel, Caleb Tung, Xiao Hu, Haobo Wang, Yung-Hsiang Lu, and George K. Thiruvathukal

Section II Competition Winners

Hardware Design and Software Practices for Efficient Neural Network Inference
Yu Wang, Xuefei Ning, Shulin Zeng, Yi Kai, Kaiyuan Guo, Hanbo Sun, Changcheng Tang, Tianyi Lu, Shuang Liang, and Tianchen Zhao

Progressive Automatic Design of Search Space for One-Shot Neural Architecture Search
Xin Xia, Xuefeng Xiao, and Xing Wang

Fast Adjustable Threshold For Uniform Neural Network Quantization
Alexander Goncharenko, Andrey Denisov, and Sergey Alyamkin

Power-efficient Neural Network Scheduling on Heterogeneous SoCs
Ying Wang, Xuyi Cai, and Xiandong Zhao

Efficient Neural Network Architectures
Han Cai and Song Han

Design Methodology for Low Power Image Recognition Systems
Soonhoi Ha, EunJin Jeong, Duseok Kang, Jangryul Kim, and Donghyun Kang

Guided Design for Efficient On-device Object Detection Model
Tao Sheng and Yang Liu

Section III Invited Articles

Quantizing Neural Networks
Marios Fournarakis, Markus Nagel, Rana Ali Amjad, Yelysei Bondarenko, Mart van Baalen, and Tijmen Blankevoort

A Practical Guide to Designing Efficient Mobile Architectures
Mark Sandler and Andrew Howard

A Survey of Quantization Methods for Efficient Neural Network Inference
Amir Gholami, Sehoon Kim, Zhen Dong, Zhewei Yao, Michael Mahoney, and Kurt Keutzer

Bibliography

Index
Computer Vision;Neural Networks;Artificial Intelligence;Low Power;Image Recognition Systems;Hardware;Software;Neural Network Quantization;Deep NN Model;Convolution Layers;CNN Model;NN;Neural Network;Tsinghua University;Dart;Multiply Accumulate Operations;Quantize Weights;Convolutional Layer;Execution Time;DNN;Input Feature Maps;NN Architecture;Architecture Search;Object Detection;Search Space;Initial Learning Rate;Top-1 Accuracy;CV;Accuracy Drop;Hardware Accelerators;Seoul National University