ASNet: Introducing Approximate Hardware to High-Level Synthesis of Neural Networks

Saman Fröhlich, Lucas Klemmer, Daniel Große, Rolf Drechsler

In: IEEE 50th International Symposium on Multiple-Valued Logic (ISMVL-2020), May 20-22, 2020, Miyazaki, Japan.


Approximate Computing is a design paradigm which exploits the error tolerance inherent to many applications in order to trade off accuracy for performance. One classic example of such an application is machine learning with Neural Networks (NNs). Recently, LeFlow, a High-Level Synthesis (HLS) flow for mapping Tensorflow NNs into hardware, has been proposed. The main steps of LeFlow are to compile the Tensorflow models into the LLVM Intermediate Representation (IR), perform several transformations, and feed the result into an HLS tool. In this work we take HLS-based NN synthesis one step further by integrating hardware approximation. To achieve this goal, we upgrade LeFlow such that (a) the user can specify hardware approximations, and (b) the user can analyze the impact of hardware approximation already at the SW level. Based on the exploration results which satisfy the NN quality expectations, we import the chosen approximate HW components into an extended version of the HLS tool to finally synthesize the NN to Verilog. The experimental evaluation demonstrates the advantages of our proposed ASNet for several NNs. Significant area reductions as well as improvements in operating frequency are achieved.
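To illustrate the accuracy-for-area trade-off behind approximate hardware components such as those mentioned above, the following is a minimal functional sketch of a well-known approximate adder, the Lower-part OR Adder (LOA). It is not taken from the paper, and the function name and parameters are illustrative: the k least-significant bits are combined with a bitwise OR instead of a full carry chain, saving area at the cost of a bounded error.

```python
def loa_add(a: int, b: int, k: int, width: int = 8) -> int:
    """Lower-part OR Adder (LOA), a classic approximate adder.

    The k least-significant bits are approximated by a bitwise OR
    (no carry propagation); only the upper bits use exact addition.
    Any carry out of the lower part is ignored.
    """
    mask = (1 << k) - 1
    low = (a & mask) | (b & mask)       # approximate lower part: OR instead of add
    high = ((a >> k) + (b >> k)) << k   # exact upper part, carry from low dropped
    return (high | low) & ((1 << width) - 1)

# Example: 3 + 5 with the 2 lowest bits approximated.
approx = loa_add(3, 5, k=2)   # OR'd lower bits give 7 instead of the exact 8
exact = loa_add(3, 5, k=0)    # k=0 degenerates to an exact adder
```

With k = 0 the adder is exact; increasing k shortens the carry chain (smaller, faster hardware) while the worst-case error grows, which is precisely the kind of knob a design-space exploration at the SW level can sweep before committing to synthesis.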

Deutsches Forschungszentrum für Künstliche Intelligenz (German Research Center for Artificial Intelligence)