Abstract: The pervasiveness of deep neural networks (DNNs) in edge devices has imposed new requirements on information representation. Low-precision formats, from 16 bits down to 1 or 2 bits, have been proposed in recent years. In this paper we give a general overview of the possible approaches to optimizing DNNs for the edge. In particular, we focus on these key points: i) limited non-volatile storage, ii) limited volatile memory, and iii) limited computational power. Furthermore, we survey the state of the art of alternative representations for real numbers, comparing their performance on recognition and detection tasks in terms of accuracy and inference time. Finally, we present our results using posit numbers on several neural networks and datasets, showing the small accuracy degradation between 32-bit floats and 16-bit (or even 8-bit) posits, and also comparing the results against the Bfloat family.