L. Zulberti, M. Monopoli, P. Nannipieri and L. Fanucci, "Architectural Implications for Inference of Graph Neural Networks on CGRA-based Accelerators," 2022 17th Conference on Ph.D Research in Microelectronics and Electronics (PRIME), 2022, pp. 373-3

Reconfigurable computing has become very popular in recent years. Among the available architectures, Coarse-Grained Reconfigurable Arrays are among the most prominent. They make it possible to accelerate several classes of data-intensive algorithms efficiently without sacrificing architectural versatility, and their use in machine learning applications is becoming increasingly widespread. In particular, the typical workload of Convolutional Neural Networks maps very well onto this kind of architecture. Unfortunately, their use for Graph Neural Networks has not been well investigated. Graph Neural Network algorithms apply to use cases characterized by non-Euclidean data, such as computer vision, natural language processing, traffic forecasting, chemistry, and recommendation systems. In this work, we analyse the most relevant Coarse-Grained Reconfigurable Array devices and Graph Neural Network models. Our contribution includes a comparison between the hardware architectures and their use for the inference of Graph Neural Network models. We highlight their limitations and discuss possible directions that the development of these architectures could take.
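To illustrate why GNN inference differs from the dense, regular workload of CNNs, here is a minimal, hypothetical sketch (not from the paper) of one message-passing layer with mean aggregation; the function name, graph, and weights are illustrative assumptions. The neighbor lookups show the irregular, data-dependent memory accesses that make GNNs harder to map onto accelerators tuned for convolutions.

```python
# Hypothetical sketch: one GNN message-passing layer (mean aggregation
# followed by a linear update). Names and data are illustrative only.

def gnn_layer(features, adjacency, weight):
    """Apply one round of mean-aggregation message passing.

    features:  list of node feature vectors (lists of floats)
    adjacency: dict mapping node index -> list of neighbor indices
    weight:    square matrix (list of rows) applied after aggregation
    """
    updated = []
    for node, feat in enumerate(features):
        neighbors = adjacency.get(node, [])
        # Aggregate: average the node's own features with its neighbors'.
        # Note the irregular, graph-dependent indexing into `features`.
        group = [feat] + [features[n] for n in neighbors]
        agg = [sum(v[i] for v in group) / len(group) for i in range(len(feat))]
        # Update: multiply the aggregated vector by the weight matrix.
        updated.append([sum(row[i] * agg[i] for i in range(len(agg)))
                        for row in weight])
    return updated

# Example: 3-node path graph 0-1-2, 2-D features, identity weight matrix.
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
adj = {0: [1], 1: [0, 2], 2: [1]}
identity = [[1.0, 0.0], [0.0, 1.0]]
print(gnn_layer(feats, adj, identity))
```

Unlike a convolution, whose access pattern is fixed by the kernel size, each node here touches a neighbor set of varying size and location, which is the architectural challenge the paper examines for CGRAs.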

Keywords: Coarse-Grained Reconfigurable Arrays, Graph Neural Networks, Algorithms, Architectures, Accelerators.

DOI: https://doi.org/10.1109/PRIME55000.2022.9816810