dc.description.abstract | In recent years, with the advent of the big data era, deep learning has brought revolutionary progress to a wide range of fields. From image preprocessing and image enhancement to face recognition and speech recognition, neural networks have gradually replaced traditional algorithms, reshaping artificial intelligence in these domains. However, GPUs are costly and power-hungry, which leads to low energy efficiency for neural network inference. Because neural network algorithms are computationally intensive, real-time computation requires dedicated hardware acceleration, which has motivated extensive research in recent years on digital circuit designs for deep neural network accelerators.
In this paper, we propose an efficient and flexible training processor, called EESA. The proposed training processor features low power consumption, high throughput, and high energy efficiency. EESA exploits the sparsity of neuron activations to reduce both the number of memory accesses and the required storage space, yielding an efficient training accelerator. The processor uses a novel reconfigurable computing architecture to maintain high performance during both the forward propagation (FP) and backward propagation (BP) passes. It is implemented in a TSMC 40 nm process technology, operating at 294 MHz with a power consumption of 87.12 mW at a core voltage of 0.9 V. With the 16-bit brain floating-point (bfloat16) precision format, the processor achieves an energy efficiency of 1.72 TOPS/W. | en_US |
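To illustrate the zero-skipping idea behind the activation-sparsity scheme the abstract describes, the following Python sketch models it in software. This is a conceptual model only, not the EESA hardware or its actual dataflow; the function names (compress_activations, sparse_mac) and the compressed (index, value) storage layout are illustrative assumptions.

```python
import numpy as np

def compress_activations(acts, tol=0.0):
    """Compress a 1-D activation vector to (index, value) pairs,
    storing only non-zero entries. Zeros (e.g., from ReLU) consume
    no storage and trigger no memory traffic."""
    idx = np.flatnonzero(np.abs(acts) > tol)
    return idx.astype(np.uint16), acts[idx]

def sparse_mac(weights, idx, vals):
    """Multiply-accumulate over the compressed activations only:
    zero activations are skipped entirely, so no weight fetch and
    no multiply is spent on them."""
    return np.dot(weights[idx], vals)

# Example: a ReLU layer output is typically >50% zeros.
rng = np.random.default_rng(0)
acts = np.maximum(rng.standard_normal(1024).astype(np.float32), 0.0)
w = rng.standard_normal(1024).astype(np.float32)

idx, vals = compress_activations(acts)
print(f"stored {len(vals)}/{acts.size} activations "
      f"({1 - len(vals) / acts.size:.0%} skipped)")

# The skipped zeros contribute nothing, so the result matches a
# dense dot product (up to float32 summation-order effects).
assert np.allclose(sparse_mac(w, idx, vals), np.dot(w, acts), atol=1e-3)
```

In hardware, the same idea lets the accelerator fetch and operate on only the non-zero activations, which is the source of the memory-access and storage savings claimed for EESA.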