Exploring the Vision Processing Unit as Co-Processor for Inference
Document type: Conference lecture
Rights access: Open Access
European Commission's project: SAGE (EC-H2020-671500)
The success of exascale supercomputing is widely argued to depend on novel technological breakthroughs that effectively reduce power consumption and thermal dissipation requirements. In this work, we consider the integration of co-processors in high-performance computing (HPC) to enable low-power, seamless computation offloading of certain operations. In particular, we explore the so-called Vision Processing Unit (VPU), a highly parallel vector processor with a power envelope of less than 1W. We evaluate this chip during inference using a pre-trained GoogLeNet convolutional network model and a large image dataset from the ImageNet ILSVRC challenge. Preliminary results indicate that a multi-VPU configuration provides performance comparable to reference CPU and GPU implementations, while reducing the thermal design power (TDP) by up to 8x.
Citation: Rivas-Gomez, S. [et al.]. Exploring the Vision Processing Unit as Co-Processor for Inference. In: "2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)". IEEE, 2018, p. 589-598.
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication, or transformation of this work is prohibited without the permission of the copyright holder.