Deafblind people face severe challenges in communication because of their combined sensory impairments. Their primary means of interacting with others is tactile sign language, in which signs are understood by placing the hands on the signer's hands; this approach, however, works only when both people are in the same place. The aim of this project is to narrow the gap between deafblind people and others by giving them the ability to communicate remotely. Images are collected with two cameras, and the signer's body is tracked with a deep neural network. The extracted coordinates of the body parts (chest, shoulders, elbows, wrists, palms, and fingers) are used to move one or more robotic arms, and the deafblind person can place their hands on the robots to understand the message delivered by the person on the other side. The entire system is built on a cloud architecture.
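
The abstract does not name the pose-estimation network or the cloud interface, so the following is a minimal illustrative sketch of the tracking stage only, assuming MediaPipe Pose as a substitute for the unspecified deep neural network; `publish_to_cloud` is a hypothetical stand-in for the cloud layer that forwards coordinates to the robot-arm side, and the camera device indices are assumptions.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

# Upper-body landmarks closest to the body parts listed in the abstract.
# MediaPipe Pose has no explicit "chest" point, and per-finger tracking
# would need the separate MediaPipe Hands model.
UPPER_BODY = {
    "left_shoulder": mp_pose.PoseLandmark.LEFT_SHOULDER,
    "right_shoulder": mp_pose.PoseLandmark.RIGHT_SHOULDER,
    "left_elbow": mp_pose.PoseLandmark.LEFT_ELBOW,
    "right_elbow": mp_pose.PoseLandmark.RIGHT_ELBOW,
    "left_wrist": mp_pose.PoseLandmark.LEFT_WRIST,
    "right_wrist": mp_pose.PoseLandmark.RIGHT_WRIST,
}


def publish_to_cloud(cam_id, keypoints):
    """Hypothetical stand-in for the cloud layer that forwards the
    coordinates to the robot-arm controller on the deafblind person's side."""
    print(cam_id, keypoints)


def extract_keypoints(frame, pose):
    """Return named, normalized (x, y, z) coordinates, or None if no signer is found."""
    # MediaPipe expects RGB input; OpenCV captures frames in BGR.
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return None
    lm = results.pose_landmarks.landmark
    return {name: (lm[i].x, lm[i].y, lm[i].z) for name, i in UPPER_BODY.items()}


def main():
    # The two signer-facing cameras (device indices 0 and 1 are assumptions).
    caps = [cv2.VideoCapture(0), cv2.VideoCapture(1)]
    with mp_pose.Pose(min_detection_confidence=0.5) as pose:
        while all(cap.isOpened() for cap in caps):
            for cam_id, cap in enumerate(caps):
                ok, frame = cap.read()
                if not ok:
                    return
                keypoints = extract_keypoints(frame, pose)
                if keypoints is not None:
                    publish_to_cloud(cam_id, keypoints)


if __name__ == "__main__":
    main()
```

In the full system described by the abstract, `publish_to_cloud` would be replaced by the actual cloud messaging layer, and the receiving side would map the streamed coordinates to joint targets for the robotic arms (for example, via inverse kinematics); those components are outside the scope of this sketch.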