
    • A BF16 FMA is all you need for DNN training 

      Osorio Ríos, John Haiber; Armejach Sanosa, Adrià; Petit, Eric; Henry, Greg; Casas Guix, Marc (Institute of Electrical and Electronics Engineers (IEEE), 2022-07-01)
      Article
      Open Access
      Fused Multiply-Add (FMA) functional units constitute a fundamental hardware component to train Deep Neural Networks (DNNs). Their silicon area grows quadratically with the mantissa bit count of the computer number format, ... (a BF16 FMA sketch in Python follows this list)
    • Dynamically adapting floating-point precision to accelerate deep neural network training 

      Osorio Ríos, John Haiber; Armejach Sanosa, Adrià; Petit, Eric; Henry, Greg; Casas Guix, Marc (Institute of Electrical and Electronics Engineers (IEEE), 2021)
      Conference report
      Open Access
      Mixed-precision (MP) arithmetic combining both single- and half-precision operands has been successfully applied to train deep neural networks. Despite its advantages in terms of reducing the need for key resources like ...
    • Evaluating mixed-precision arithmetic for 3D generative adversarial networks to simulate high energy physics detectors 

      Osorio Ríos, John Haiber; Armejach Sanosa, Adrià; Khattak, Gulrukh; Petit, Eric; Vallecorsa, Sofia; Casas Guix, Marc (Institute of Electrical and Electronics Engineers (IEEE), 2020)
      Conference report
      Open Access
      Several hardware companies are proposing native Brain Float 16-bit (BF16) support for neural network training. The usage of Mixed Precision (MP) arithmetic with floating-point 32-bit (FP32) and 16-bit half-precision aims ...
    • FASE: A fast, accurate and seamless emulator for custom numerical formats 

      Osorio Ríos, John Haiber; Armejach Sanosa, Adrià; Petit, Eric; Henry, Greg; Casas Guix, Marc (Institute of Electrical and Electronics Engineers (IEEE), 2022)
      Conference report
      Open Access
      Deep Neural Networks (DNNs) have become ubiquitous in a wide range of application domains. Despite their success, training DNNs is an expensive task that has motivated the use of reduced numerical precision formats to ...
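
The first three items above revolve around the same technique: keeping the FMA multiplicands in BF16 (or FP16) while accumulating in FP32. The sketch below is an illustration of that idea only, not code from any of the listed papers: `to_bf16` rounds FP32 values to BF16 with the standard bit-level round-to-nearest-even trick, and `bf16_dot` uses the rounded values in a dot product with an FP32 accumulator. The function names are invented for this example, and NaN/Inf corner cases are ignored.

```python
import numpy as np

def to_bf16(x):
    """Round an FP32 array to BF16 (round-to-nearest-even on the 16 low
    bits BF16 drops) and widen it back to FP32, the usual way a narrower
    format is modelled in software on FP32 hardware."""
    bits = np.ascontiguousarray(x, dtype=np.float32).view(np.uint32)
    bias = np.uint32(0x7FFF) + ((bits >> np.uint32(16)) & np.uint32(1))
    return ((bits + bias) & np.uint32(0xFFFF0000)).view(np.float32)

def bf16_dot(a, b):
    """Dot product with BF16 multiplicands and an FP32 accumulator,
    i.e. one emulated BF16-input / FP32-accumulate FMA per element."""
    acc = np.float32(0.0)
    for ai, bi in zip(to_bf16(a), to_bf16(b)):
        acc = ai * bi + acc          # product and accumulation stay in FP32
    return acc

rng = np.random.default_rng(0)
a = rng.standard_normal(1024).astype(np.float32)
b = rng.standard_normal(1024).astype(np.float32)
print(bf16_dot(a, b), np.dot(a, b))  # rounded-input result vs. plain FP32
```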
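
The last item describes FASE, an emulator for custom numerical formats. Its actual mechanism is not reproduced here; the sketch below only shows the generic idea behind software emulation of reduced precision: compute in FP32, then round each value to a configurable number of mantissa bits. `quantize` and `man_bits` are illustrative names, and the target format's exponent width, subnormals and NaN handling are deliberately ignored.

```python
import numpy as np

def quantize(x, man_bits):
    """Round an FP32 array so only `man_bits` explicit mantissa bits
    survive (round-to-nearest-even), then return it widened back to FP32.
    man_bits=23 is FP32 itself, 10 matches FP16's mantissa, 7 matches BF16's."""
    x = np.ascontiguousarray(x, dtype=np.float32)
    drop = 23 - man_bits                  # low mantissa bits to discard
    if drop <= 0:
        return x
    bits = x.view(np.uint32)
    lsb  = (bits >> np.uint32(drop)) & np.uint32(1)
    bias = np.uint32((1 << (drop - 1)) - 1) + lsb
    mask = np.uint32((0xFFFFFFFF << drop) & 0xFFFFFFFF)
    return ((bits + bias) & mask).view(np.float32)

vals = np.array([0.1, 0.2, 0.3, 1/3], dtype=np.float32)
for m in (23, 10, 7, 4):                  # FP32, FP16-like, BF16-like, tiny
    print(m, quantize(vals, m))
```

The rounding step above is only the numerical core of the idea; a seamless emulator in the sense of the paper's title would also have to apply it to an existing training run without code changes, which this sketch does not attempt.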