Abstract:
Two groundbreaking developments are currently emerging in the field of neural networks.
The first is the RISC-V open instruction set architecture (ISA), which allows
the seamless implementation of custom instruction set extensions. The second is
the appearance of several novel formats for real-number arithmetic.
In this work, we combine these two key aspects using the very promising
posit format, developing a light Posit Processing Unit (PPU-light). We present an extension of the base RISC-V ISA that allows
conversions between 8- or 16-bit posits and 32-bit IEEE floats or fixed-point formats,
offering a compressed representation of real numbers with little-to-no
accuracy degradation.
accuracy degradation. Then we elaborate on the hardware and software toolchain integration of our PPU-light inside the Ariane RISC-V core and its toolchain,
showing how little it impacts in term of circuit complexity and power consumption. Indeed, only 0.36\% of the circuit is devoted to the PPU-light while the full RISC-V core occupies the 33\% of the overall circuit complexity. Finally we present the impact of our PPU-light on a deep neural network task, reporting speedups up to 10 on sample inference processing time.
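To make the posit-to-float conversion concrete, the following is a minimal C sketch that decodes an 8-bit posit into a 32-bit IEEE float, assuming a posit$\langle 8,0\rangle$ configuration (sign, regime, and fraction fields, with no exponent bits); the function name and structure are purely illustrative and do not reflect the actual PPU-light datapath.

\begin{verbatim}
#include <stdint.h>
#include <math.h>
#include <stdio.h>

/* Hypothetical sketch: decode a posit<8,0> into an IEEE float.
 * Layout: 1 sign bit, a run-length-encoded regime, no exponent
 * bits (es = 0), remaining bits are the fraction. */
float posit8_to_float(uint8_t p)
{
    if (p == 0x00) return 0.0f;   /* unique zero encoding       */
    if (p == 0x80) return NAN;    /* NaR (Not a Real) encoding  */

    int sign = (p & 0x80) != 0;
    if (sign) p = (uint8_t)(-p);  /* two's complement negation  */

    /* Regime: run of identical bits after the sign bit,
     * terminated by the opposite bit. */
    int regime_bit = (p >> 6) & 1;
    int run = 0;
    int i = 6;
    while (i >= 0 && (((p >> i) & 1) == regime_bit)) { run++; i--; }
    i--;                          /* skip the terminating bit   */

    /* With es = 0, useed = 2, so the scale factor is 2^k. */
    int k = regime_bit ? (run - 1) : -run;

    /* Remaining bits (if any) form the fraction, hidden 1 implied. */
    int frac_bits = i + 1;
    double frac = 1.0;
    if (frac_bits > 0)
        frac += (double)(p & ((1u << frac_bits) - 1)) / (1u << frac_bits);

    double v = ldexp(frac, k);    /* frac * 2^k */
    return (float)(sign ? -v : v);
}

int main(void)
{
    printf("%f\n", posit8_to_float(0x40)); /* 0x40 encodes +1.0 */
    printf("%f\n", posit8_to_float(0x60)); /* 0x60 encodes +2.0 */
    return 0;
}
\end{verbatim}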