A Comprehensive Review of Recent Advances in VLSI Architectures for Deep Learning in Embedded Systems
Keywords: Deep Learning, VLSI Architectures, Embedded Systems, Hardware Accelerators, Energy-Efficient Design
Deep learning has transformed several industries, including computer vision, natural language processing, autonomous systems, and robotics. However, deploying deep learning models on resource-constrained embedded devices remains challenging due to limited processing power and memory. This review examines recent advances in VLSI (Very Large Scale Integration) architectures aimed at overcoming these limitations and enabling fast deep learning inference on embedded devices. The primary focus is on VLSI designs and techniques for optimizing deep learning execution on embedded systems that have emerged in recent years, including hardware-friendly quantization methods, model compression techniques, and custom hardware accelerators tailored to specific deep learning workloads. The review also investigates the exploitation of sparsity and efficient memory management to reduce the memory footprint, allowing deep learning to run in resource-constrained environments, and highlights the importance of energy-efficient design and low-power techniques for embedded systems. As edge AI and the Internet of Things (IoT) continue to grow, VLSI architectures are evolving to support these applications. The goal of this review is to serve as a practical resource for implementing deep learning on embedded systems by surveying the most recent breakthroughs in VLSI design for fast and scalable inference.
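To make the idea of hardware-friendly quantization concrete, the following is a minimal sketch of symmetric per-tensor int8 post-training quantization, one common scheme for reducing the precision of weights so they fit narrow fixed-point datapaths. The function names and the per-tensor symmetric scheme are illustrative assumptions, not details taken from any specific architecture surveyed here.

```python
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 using a single per-tensor scale
    (symmetric scheme, range [-127, 127]); illustrative sketch only."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights, e.g. for accuracy checks."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# With round-to-nearest, the per-weight error is bounded by half a
# quantization step (scale / 2).
```

In hardware, the int8 tensors feed integer multiply-accumulate units, and the float scale is folded into a single rescaling step at the accumulator output, which is what makes this style of quantization inexpensive to implement in VLSI datapaths.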