This paper investigates machine learning approaches to the development of a speaker-dependent keyword spotting system for users with speech disorders, in particular those with dysarthria, a neuromotor speech impairment often associated with severe physical disabilities. In the field of assistive technologies, automatic speech recognition (ASR) remains an open challenge, since standard voice recognition approaches and voice-driven services fail to recognize atypical speech. To address this issue, we focus on the keyword spotting task in the presence of dysarthria and exploit deep learning technology, in conjunction with an existing convolutional neural network model, to build an ASR system tailored to users with such speech disabilities. However, a machine learning approach requires sufficient data for model training; to this end, we introduce a mobile application (app) that allows people with speech disorders to record audio contributions and thereby enrich the speech model. Focusing on Italian as the main language, this approach allowed us to build the first database of speech samples from native Italian speakers with dysarthria. As discussed at the end of the article, early experiments show promising results and suggest interesting directions for future research.