Boosting LSTM performance through dynamic precision selection
DOI: 10.1109/HiPC50609.2020.00046
Cite as: hdl:2117/344816
Document type: Conference paper (text in conference proceedings)
Publication date: 2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Access conditions: Open access
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to existing legal exemptions, its reproduction, distribution, public communication or transformation without the authorization of the rights holder is prohibited.
Project: CoCoUnit - CoCoUnit: An Energy-Efficient Processing Unit for Cognitive Computing (EC-H2020-833057)
Abstract
The use of low numerical precision is a fundamental optimization included in modern accelerators for Deep Neural Networks (DNNs). The number of bits of the numerical representation is set to the minimum precision that is able to retain accuracy based on an offline profiling, and it is kept constant for DNN inference. In this work, we explore the use of dynamic precision selection during DNN inference. We focus on Long Short-Term Memory (LSTM) networks, which represent the state-of-the-art networks for applications such as machine translation and speech recognition. Unlike conventional DNNs, LSTM networks remember information from previous evaluations by storing data in the LSTM cell state. Our key observation is that the cell state determines the amount of precision required: time-steps where the cell state changes significantly require higher precision, whereas time-steps where the cell state is stable can be computed with lower precision without any loss in accuracy. We propose a novel hardware scheme that tracks the evolution of the elements in the LSTM cell state and dynamically selects the appropriate precision on each time-step. For a set of popular LSTM networks, it chooses the lowest precision for 57% of the time, outperforming systems that fix the precision statically. We evaluate our proposal on top of a modern highly-optimized LSTM accelerator, and show that it provides 1.46x speedup and 19.2% energy savings on average without degrading the model accuracy. Our scheme has an overhead of less than 8%.
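The core idea of the abstract, selecting the bit-width of the next time-step from how much the cell state changed, can be sketched in software. The following is a minimal illustrative sketch, not the paper's hardware scheme: the quantization routine, the threshold (0.05) and the 4-/8-bit levels are assumptions chosen for the example, and `lstm_step`, `quantize` and `select_precision` are hypothetical helper names.

```python
import numpy as np

def quantize(x, bits):
    """Symmetric uniform quantization of x to the given bit-width."""
    scale = (2 ** (bits - 1) - 1) / (np.max(np.abs(x)) + 1e-8)
    return np.round(x * scale) / scale

def select_precision(prev_cell, cell, threshold=0.05, low_bits=4, high_bits=8):
    """Pick a bit-width for the next time-step from the cell-state change.

    threshold, low_bits and high_bits are illustrative values, not the
    parameters used in the paper.
    """
    delta = np.mean(np.abs(cell - prev_cell))
    return low_bits if delta < threshold else high_bits

def lstm_step(x, h, c, W, U, b, bits):
    """One LSTM time-step with weights quantized to `bits` bits.

    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) bias,
    with the four gate blocks stacked in order i, f, g, o.
    """
    Wq, Uq = quantize(W, bits), quantize(U, bits)
    z = Wq @ x + Uq @ h + b
    i, f, g, o = np.split(z, 4)
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new
```

A driver loop would start at the high precision, run `lstm_step`, and after each step call `select_precision(c, c_new)` to choose the bit-width of the following step, so stable stretches of the sequence are computed cheaply.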
Citation: Silfa, F.A.; Arnau, J.; González, A. Boosting LSTM performance through dynamic precision selection. In: International Symposium on High Performance Computing. "2020 IEEE 27th International Conference on High Performance Computing, Data, and Analytics, HiPC 2020: 16-18 December 2020, Pune, India (virtual event): proceedings". Institute of Electrical and Electronics Engineers (IEEE), 2020, p. 323-333. ISBN 978-0-7381-1035-6. DOI 10.1109/HiPC50609.2020.00046.
ISBN: 978-0-7381-1035-6
Publisher's version: https://ieeexplore.ieee.org/document/9406683
Files | Description | Size | Format
---|---|---|---
HIPC2020.pdf | | 1,239Mb |