In recent decades, smart power wheelchairs have been used by people with motor impairments to improve their autonomy, independence, and quality of life. The most recent power wheelchairs feature many technological devices, such as laser scanners for automatic obstacle detection or robotic arms that perform simple operations like pick and place. Paradoxically, however, a user with motor impairments who were able to control a very complex robotic arm would hardly need one.
For that reason, in this paper we present an autonomous control system based on Computer Vision algorithms that allows the user to interact with buttons or elevator panels through a robotic arm in a simple and easy way. The Scale-Invariant Feature Transform (SIFT) algorithm is used to detect and track buttons. Objects detected by SIFT are mapped into a three-dimensional reference frame built with the Parallel Tracking and Mapping (PTAM) algorithm. Real-world coordinates are obtained with a Maximum-Likelihood estimator that fuses the PTAM coordinates with distance measurements provided by a proximity sensor. The visual servoing algorithm has been developed in the Robot Operating System (ROS) environment, in which the previous algorithms are implemented as separate nodes. Performance has been analyzed in a test scenario, obtaining good accuracy in estimating the real position of the selected objects.
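As a brief illustration of the sensor-fusion step (the abstract does not specify the noise model, so independent zero-mean Gaussian measurement noise is assumed here as a sketch), the Maximum-Likelihood estimate of the distance to a detected object reduces to an inverse-variance weighted average of the PTAM-derived distance $z_p$ (variance $\sigma_p^{2}$) and the proximity-sensor reading $z_s$ (variance $\sigma_s^{2}$):
\begin{equation}
  \hat{d}_{\mathrm{ML}}
  = \frac{z_p/\sigma_p^{2} + z_s/\sigma_s^{2}}
         {1/\sigma_p^{2} + 1/\sigma_s^{2}},
  \qquad
  \operatorname{Var}\!\left(\hat{d}_{\mathrm{ML}}\right)
  = \left(\frac{1}{\sigma_p^{2}} + \frac{1}{\sigma_s^{2}}\right)^{-1},
\end{equation}
so the more reliable of the two measurements dominates the fused estimate.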
Keywords: Robotic Arm, Power Wheelchair, Visual Servoing, PBVS, Eye-in-Hand, Computer Vision, SIFT, Feature Extraction, PTAM, ROS, Human Machine Interface, Assistive Technology, Open-source