In recent years, Convolutional Neural Networks (CNNs) have demonstrated outstanding results in several emerging classification tasks. However, their high-quality predictions often come at the cost of computationally intensive workloads that hinder hardware acceleration of these models at the edge. Field Programmable Gate Arrays (FPGAs) have proven to be energy-efficient platforms for executing these algorithms, and works proposing methods to automate the design process on these devices have gained relevance. Their common purpose is to enable a wide range of users without specific hardware skills to accelerate CNNs on FPGAs with reduced development times. In this paper, we present FPG-AI, a technology-independent toolflow for automating the deployment of CNNs on FPGAs. The framework combines model compression strategies with a fully handcrafted Hardware Description Language (HDL)-based accelerator that poses no limit on device portability. On top of that, an automation process merges the two design spaces to define an end-to-end, ready-to-use tool. Experimental results are reported for reference models from the literature (LeNet, NiN, VGG16, MobileNet-V1) on multiple classification datasets (MNIST, CIFAR10, ImageNet). To prove the technology independence of FPG-AI, we characterize the toolflow on devices with heterogeneous resource budgets from different vendors (Xilinx, Intel, and Microsemi). Comparison with state-of-the-art works confirms the unmatched device portability of FPG-AI and shows performance metrics in line with the literature.