
A Survey of FPGA Based CNNs Accelerators

EasyChair Preprint 3633

13 pages · Date: June 17, 2020

Abstract

With the rapid development of deep learning, neural networks and deep learning algorithms play a significant role in many practical applications. In particular, the high accuracy and strong performance of CNNs have made them a research hot spot over the past few years. However, network sizes continue to grow to meet the demands of practical applications, which poses a significant challenge to building high-performance implementations of deep neural networks. Meanwhile, many of these application scenarios also place strict requirements on the performance and power consumption of the hardware. Choosing a suitable computing platform for hardware acceleration of CNNs is therefore particularly critical.

This article surveys recent advances in FPGA-based acceleration of CNNs. Various designs and implementations of FPGA-based accelerators across different devices and network models are reviewed and compared with their GPU, ASIC and DSP counterparts, accompanied by our own critical analysis and comments. Finally, we discuss these acceleration and optimization methods on FPGA platforms from different perspectives and explore the opportunities and challenges for future research. We conclude with an outlook on the future development of FPGA-based accelerators.

Keyphrases: CNN, FPGA, Hardware Accelerator, Deep Learning

BibTeX entry
BibTeX does not have a dedicated entry type for preprints; the following workaround produces the correct reference:
@booklet{EasyChair:3633,
  author       = {Wei Zhang},
  title        = {A Survey of FPGA Based CNNs Accelerators},
  howpublished = {EasyChair Preprint 3633},
  year         = {EasyChair, 2020}}