ABSTRACT
Voice disorders, such as dysphonia, are common among the general population. These pathologies often remain untreated until they reach a high level of severity. Tools that assist in the detection of voice disorders could facilitate early diagnosis and subsequent treatment. In this study, we address practical aspects of automatic voice disorder detection (AVDD). In real-world scenarios, data annotated for voice disorders is usually scarce, owing to the many challenges involved in collecting and annotating such data. However, relatively large datasets are available for a few domains. In this context, we propose combining out-of-domain and in-domain data to train a deep neural network-based AVDD system, and we offer guidance on the minimum amount of in-domain data required to achieve acceptable performance. Further, we propose the use of a cost-based metric, the normalized expected cost (EC), to evaluate the performance of AVDD systems in a way that closely reflects the needs of the application. As an added benefit, optimal decisions under the EC can be made in a principled way using Bayes decision theory. Finally, we argue that for medical applications like AVDD, categorical decisions need to be accompanied by interpretable scores that reflect the confidence of the system. Even very accurate models often produce scores that are not suited for interpretation. Here, we show that such models can be easily improved by adding a calibration stage trained with just a few minutes of in-domain data. The outputs of the resulting calibrated system can then better support practitioners in their decision-making process.
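For reference, a standard form of the normalized expected cost for a binary detection task can be written as below. This is a generic sketch under assumed notation (priors, error rates, and costs are application choices not specified in the abstract), not necessarily the paper's exact formulation:

\[
\mathrm{EC} = C_{\mathrm{miss}}\, P_p\, P_{\mathrm{miss}} + C_{\mathrm{fa}}\,(1 - P_p)\, P_{\mathrm{fa}}
\]

where \(P_p\) is the prior probability of the pathological class, \(P_{\mathrm{miss}}\) and \(P_{\mathrm{fa}}\) are the miss and false-alarm rates, and \(C_{\mathrm{miss}}\) and \(C_{\mathrm{fa}}\) are the application-dependent decision costs. Normalizing by the cost of the best naive system (one that always makes the same decision) gives

\[
\mathrm{NEC} = \frac{\mathrm{EC}}{\min\big(C_{\mathrm{miss}}\, P_p,\; C_{\mathrm{fa}}\,(1 - P_p)\big)},
\]

so values below 1 indicate the system is more useful than a trivial one. Under Bayes decision theory, the optimal rule is to label a sample pathological whenever its posterior odds exceed the cost ratio, \(P(p \mid x)/P(h \mid x) > C_{\mathrm{fa}}/C_{\mathrm{miss}}\).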
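As an illustration of the kind of calibration stage described above, the following sketch applies affine logistic-regression calibration (Platt-style scaling) to held-out scores and then makes Bayes-optimal decisions. The toy data, variable names, and use of scikit-learn are assumptions made for illustration; they do not represent the paper's implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical held-out in-domain data: raw (uncalibrated) scores from a
# pre-trained AVDD model and their binary labels (1 = pathological).
raw_scores = np.array([-2.1, -0.3, 0.8, 1.9, -1.5, 2.4, 0.1, -0.9])
labels     = np.array([   0,    0,   1,   1,    0,   1,   1,    0])

# Affine calibration: fit a logistic regression on the raw scores so that
# its outputs are interpretable as posterior probabilities of pathology.
calibrator = LogisticRegression()
calibrator.fit(raw_scores.reshape(-1, 1), labels)

# Calibrated posterior probabilities for new in-domain samples.
new_scores = np.array([0.5, -1.2]).reshape(-1, 1)
posteriors = calibrator.predict_proba(new_scores)[:, 1]

# Bayes-optimal decision under assumed costs C_miss and C_fa: flag a
# sample as pathological when its posterior odds exceed the cost ratio.
C_miss, C_fa = 10.0, 1.0
decisions = posteriors / (1.0 - posteriors) > C_fa / C_miss
print(posteriors, decisions)

Because the calibrator has only two parameters (a scale and an offset), it can plausibly be trained on the few minutes of in-domain data mentioned above without overfitting.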