Universitat Politècnica de Catalunya

UPCommons. Global access to UPC knowledge


A BF16 FMA is all you need for DNN training

Cite as:
hdl:2117/373614

Osorio Ríos, John Haiber
Armejach Sanosa, Adrià
Petit, Eric
Henry, Greg
Casas Guix, Marc
Document type: Article
Defense date: 2022-07-01
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Rights access: Open Access
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication or transformation of this work are prohibited without permission of the copyright holder.
Project: DEEP-SEA - DEEP – SOFTWARE FOR EXASCALE ARCHITECTURES (EC-H2020-955606)
Abstract
Fused Multiply-Add (FMA) functional units are a fundamental hardware component for training Deep Neural Networks (DNNs). Their silicon area grows quadratically with the mantissa bit count of the computer number format, which has motivated the adoption of the BrainFloat16 format (BF16). BF16 features 1 sign, 8 exponent and 7 explicit mantissa bits. Some approaches to train DNNs achieve significant performance benefits by using the BF16 format. However, these approaches must combine BF16 with the standard IEEE 754 Floating-Point 32-bit (FP32) format to achieve state-of-the-art training accuracy, which limits the impact of adopting BF16. This article proposes the first approach able to train complex DNNs entirely using the BF16 format. We propose a new class of FMA operators, FMA^bf16_{n,m}, that entirely rely on BF16 FMA hardware instructions and deliver the same accuracy as FP32. FMA^bf16_{n,m} operators achieve performance improvements within the 1.28x-1.35x range on ResNet101 with respect to FP32. FMA^bf16_{n,m} enables training complex DNNs on simple low-end hardware devices without requiring expensive FP32 FMA functional units.
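
The following is a minimal Python sketch of the two ingredients the abstract names: truncating FP32 values to BF16 (keeping the upper 16 bits, i.e. 1 sign + 8 exponent + 7 explicit mantissa bits) and computing products entirely out of BF16 terms. The residual-based splitting, the function names and the n/m parameters are illustrative assumptions inspired by the abstract's description of FMA^bf16_{n,m}, not the paper's actual implementation.

import struct

def fp32_to_bf16_bits(x: float) -> int:
    # Truncate an FP32 value to BF16 by keeping its upper 16 bits
    # (1 sign + 8 exponent + 7 explicit mantissa bits).
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits >> 16

def bf16_bits_to_fp32(b: int) -> float:
    # Expand a BF16 bit pattern back to FP32 by zero-filling the low 16 bits.
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

def bf16(x: float) -> float:
    # Round-trip an FP32 value through BF16 (truncation rounding).
    return bf16_bits_to_fp32(fp32_to_bf16_bits(x))

def split_into_bf16_terms(x: float, n: int) -> list:
    # Hypothetical decomposition: represent x as a sum of n BF16 terms,
    # each one capturing the residual left by the previous truncations.
    terms, residual = [], x
    for _ in range(n):
        t = bf16(residual)
        terms.append(t)
        residual -= t
    return terms

def bf16_only_dot(a, b, n=2, m=2):
    # Illustrative dot product in the spirit of an FMA^bf16_{n,m}-style
    # operator: each FP32 input is split into n (resp. m) BF16 terms and
    # all cross products are accumulated. The Python multiply-add below
    # stands in for a hardware BF16 FMA instruction.
    acc = 0.0
    for x, y in zip(a, b):
        for xt in split_into_bf16_terms(x, n):
            for yt in split_into_bf16_terms(y, m):
                acc += xt * yt
    return acc

Each BF16 term carries 8 significant bits (7 explicit plus the implicit one), so splitting an operand into two or three terms recovers progressively more of the FP32 mantissa; that is the intuition for why an operator built only from BF16 FMAs can approach FP32 accuracy.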
Citation: Osorio, J. [et al.]. A BF16 FMA is all you need for DNN training. "IEEE Transactions on Emerging Topics in Computing", 1 July 2022, vol. 10, no. 3, pp. 1302-1314.
URI: http://hdl.handle.net/2117/373614
DOI: 10.1109/TETC.2022.3187770
ISSN: 2168-6750
Publisher version: https://ieeexplore.ieee.org/document/9823406
Collections
  • Computer Sciences - Journal articles [272]
  • Departament d'Arquitectura de Computadors - Journal articles [957]
  • CAP - Grup de Computació d'Altes Prestacions - Journal articles [380]
  • Doctorat en Arquitectura de Computadors - Journal articles [135]

Files:
  • 09823406.pdf (1.519 MB, PDF)
