Design of an Output Interface for an In-Memory-Computing CNN Accelerator
Cite as: hdl:2117/340646
Document type: Master thesis
Date: 2020-11-27
Rights access: Open Access
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication or transformation of this work are prohibited without permission of the copyright holder.
Abstract
Analog in-memory computing accelerators are one of the most promising solutions to reduce data-movement limitations in deep neural networks (DNNs). While analog in-memory computing accelerators have been studied in many research works due to their growth potential, the output interface layers of DNNs have not yet been studied in depth. This thesis proposes a binarizing batch normalization (BBN) scheme for the analog in-memory computing CNN accelerator with charge-domain compute designed in [1]. Implemented in 22 nm FDSOI, the design achieves an energy efficiency of 1124 TOPS/W and a throughput of 12418 GOPS when implementing the in-memory solution followed by batch normalization.
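The thesis itself details the circuit-level BBN scheme; as a rough functional illustration only, the sketch below shows the standard way batch normalization followed by binarization can be folded into a single per-channel threshold comparison, which is the kind of operation an output interface would evaluate. The function names, parameter names, and statistics here are illustrative assumptions, not the design from [1].

```python
import numpy as np

def fold_bn_into_threshold(gamma, beta, mu, sigma, eps=1e-5):
    """Fold per-channel batch-norm parameters into a binarization threshold.

    BN(x) = gamma * (x - mu) / sqrt(sigma^2 + eps) + beta
    BN(x) >= 0  <=>  x >= tau when gamma > 0 (comparison flips for gamma < 0),
    with tau = mu - beta * sqrt(sigma^2 + eps) / gamma.
    """
    std = np.sqrt(sigma**2 + eps)
    tau = mu - beta * std / gamma
    flip = gamma < 0  # comparison direction flips for negative gamma
    return tau, flip

def binarize_bn(x, tau, flip):
    """Binarize pre-activations x (shape [batch, channels]) to +/-1
    by comparing against the per-channel thresholds tau."""
    ge = x >= tau
    bits = np.where(flip, ~ge, ge)
    return np.where(bits, 1, -1).astype(np.int8)

# Tiny usage example with made-up per-channel statistics (hypothetical values).
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))            # 4 samples, 8 output channels
gamma = rng.uniform(-1.0, 1.0, 8)
beta = rng.standard_normal(8)
mu, sigma = x.mean(axis=0), x.std(axis=0)

tau, flip = fold_bn_into_threshold(gamma, beta, mu, sigma)
print(binarize_bn(x, tau, flip))
```

Folding BN into a threshold in this way replaces a multiply-add per channel with a single comparison, which is why binarizing batch normalization is attractive for a low-power output interface.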
| Files | Description | Size | Format |
|---|---|---|---|
| MEMORIA_TFM_PaulaMartinez.pdf | | 4.411 MB | PDF |