Resource-Efficient Logarithmic Number Scale Arithmetic for SPN Inference on FPGAs
Lukas Weber; Lukas Sommer; Julian Oppermann; Alejandro Molina; Kristian Kersting; Andreas Koch
In: International Conference on Field-Programmable Technology (FPT 2019), December 9-13, Tianjin, China, Pages 251-254, IEEE, 2019.
FPGAs have been successfully used to implement dedicated accelerators for a wide range of machine learning problems. The inference in so-called Sum-Product Networks (SPNs) can also be accelerated efficiently using a pipelined FPGA architecture. However, as Sum-Product Networks compute exact probability values, the required arithmetic precision poses different challenges than those encountered with Neural Networks. In previous work, this precision was maintained by using double-precision floating-point number formats, which are expensive to implement on FPGAs. In this work, we propose the use of a logarithmic number system format tailored specifically towards the inference in Sum-Product Networks. The evaluation of our optimized arithmetic hardware operators shows that using logarithmic number formats saves up to 50% of the hardware resources compared to double-precision floating point, while maintaining sufficient precision for SPN inference at almost identical performance.
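The core idea behind logarithmic number system (LNS) arithmetic can be sketched in software: probabilities are stored as their (base-2) logarithms, so the products that dominate SPN inference become cheap additions, while sums require a log-sum-exp-style computation. The sketch below is illustrative only; the function names `lns_mul` and `lns_add` are hypothetical and do not reflect the paper's actual hardware operators.

```python
import math

# In an LNS representation, a probability p is stored as l = log2(p).
# Products then become additions, which is one motivation for LNS
# arithmetic in SPN inference (function names are hypothetical).

def lns_mul(la: float, lb: float) -> float:
    """Product in the log domain: log2(a*b) = log2(a) + log2(b)."""
    return la + lb

def lns_add(la: float, lb: float) -> float:
    """Sum in the log domain, computed stably:
    log2(a+b) = max + log2(1 + 2^(min - max))."""
    hi, lo = (la, lb) if la >= lb else (lb, la)
    return hi + math.log2(1.0 + 2.0 ** (lo - hi))

# Example with p1 = 0.5 and p2 = 0.25:
l1, l2 = math.log2(0.5), math.log2(0.25)
print(2.0 ** lns_mul(l1, l2))  # product: 0.125
print(2.0 ** lns_add(l1, l2))  # sum: 0.75
```

In hardware, the multiplier thus reduces to an integer-style adder, while the log-domain adder needs an approximation of the `log2(1 + 2^x)` term; trading off the precision of that approximation against resource usage is the central design question the paper addresses.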