An energy-efficient GeMM-based convolution accelerator with on-the-fly im2col
Rights access: Open Access
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication, or transformation of this work is prohibited without the permission of the copyright holder.
Systolic array architectures have recently emerged as successful accelerators for deep convolutional neural network (CNN) inference. Such architectures can efficiently execute general matrix–matrix multiplications (GeMMs), but computing convolutions with this primitive requires transforming the 3-D input tensor into an equivalent matrix, which can inflate the input data and increase the off-chip memory traffic that is critical for energy efficiency. In this work, we propose a GeMM-based systolic array accelerator that uses a novel data feeder architecture to perform on-chip, on-the-fly convolution lowering (also known as im2col), supporting arbitrary tensor and kernel sizes as well as strided and dilated (or atrous) convolutions. By using our data feeder, we reduce memory transactions and required bandwidth on state-of-the-art CNNs by a factor of two, while adding an area and power overhead of only 4% and 7%, respectively. An application-specific integrated circuit (ASIC) implementation of our accelerator in 22-nm technology fits in less than 1.1 mm² and reaches an energy efficiency of 1.10 TFLOPS/W with 16-bit floating-point arithmetic.
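To illustrate the lowering step the abstract refers to, below is a minimal NumPy sketch of a software im2col (the function name and structure are illustrative, not the paper's hardware feeder). Each column of the lowered matrix is a flattened receptive field, so the convolution becomes a single GeMM; because neighboring windows overlap, input elements are replicated, which is the data inflation the on-the-fly feeder avoids materializing in off-chip memory.

```python
import numpy as np

def im2col(x, kh, kw, stride=1, dilation=1):
    """Lower a (C, H, W) tensor to a (C*kh*kw, Oh*Ow) matrix whose
    columns are flattened receptive fields (valid padding)."""
    c, h, w = x.shape
    eff_kh = (kh - 1) * dilation + 1  # effective kernel extent with dilation
    eff_kw = (kw - 1) * dilation + 1
    oh = (h - eff_kh) // stride + 1
    ow = (w - eff_kw) // stride + 1
    cols = np.empty((c * kh * kw, oh * ow), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            patch = x[:,
                      i * stride : i * stride + eff_kh : dilation,
                      j * stride : j * stride + eff_kw : dilation]
            cols[:, i * ow + j] = patch.ravel()
    return cols

# 2-channel 4x4 input, 3x3 kernel, stride 1: the 32 input values
# expand to an 18x4 matrix (72 values), a 2.25x inflation.
x = np.arange(2 * 4 * 4, dtype=np.float32).reshape(2, 4, 4)
cols = im2col(x, 3, 3)
# With weights reshaped to (F, C*kh*kw), the convolution is one GeMM:
weights = np.ones((1, 2 * 3 * 3), dtype=np.float32)
out = weights @ cols  # (F, Oh*Ow) output feature map, flattened
```

Strided and dilated convolutions, which the feeder also supports, only change the window addressing (the `stride` and `dilation` slicing above), not the GeMM itself.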
Citation: Fornt, J., et al., "An energy-efficient GeMM-based convolution accelerator with on-the-fly im2col," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 31, no. 11, pp. 1874-1878, November 2023.