ABSTRACT
Over recent years, the acquisition of dermoscopic skin lesion images with mobile devices, and more specifically with the smartphone camera, has grown in popularity. The demand for self-care and telemedicine solutions requires suitable methods to guide acquisition and evaluate the quality of the acquired images in order to improve the monitoring of skin lesions. In this work, a system for automated focus assessment of dermoscopic images was developed using a feature-based machine learning approach. The system was designed to guide the user throughout the acquisition process by means of a preview image validation approach that included artifact detection and focus validation, followed by the image quality assessment of the acquired picture. This paper also introduces two different datasets, dermoscopic skin lesions and artifacts, which were collected using different mobile devices to develop and test the system. The best model for automatic preview assessment attained an overall accuracy of 77.9%, while focus assessment of the acquired picture reached a global accuracy of 86.2%. These findings were validated by implementing the proposed methodology within an Android application, demonstrating promising results as well as the viability of the proposed solution in a real-life scenario.
Subject(s)
Dermoscopy/methods, Computer-Assisted Image Processing/methods, Skin Abnormalities/diagnostic imaging, Smartphone, Humans, Machine Learning, Skin Abnormalities/physiopathology, Telemedicine/methods
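The abstract above does not specify which focus features feed the feature-based classifier, so the following is only a minimal sketch of one classic hand-crafted sharpness feature, the variance of the Laplacian, that such a preview focus check could build on; the acceptance threshold on the resulting score would have to be tuned on the datasets described in the paper.

```python
import numpy as np

def variance_of_laplacian(gray: np.ndarray) -> float:
    """Classic sharpness score: variance of the Laplacian response.

    Higher values mean more high-frequency content, i.e. a sharper
    (better focused) image. `gray` is a 2-D array of intensities.
    """
    # 3x3 Laplacian applied as a "valid" convolution via array slicing.
    lap = (
        -4.0 * gray[1:-1, 1:-1]
        + gray[:-2, 1:-1] + gray[2:, 1:-1]
        + gray[1:-1, :-2] + gray[1:-1, 2:]
    )
    return float(lap.var())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sharp = rng.random((256, 256))                  # synthetic textured image
    blurred = (sharp[:-1, :] + sharp[1:, :]) / 2.0  # crude 2-tap blur
    print(variance_of_laplacian(sharp) > variance_of_laplacian(blurred))  # True
```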
ABSTRACT
Cervical cancer is one of the most common cancers in women worldwide, affecting around 570,000 new patients each year. Although there have been great improvements over the years, current screening procedures can still suffer from long and tedious workflows and ambiguities. The increasing interest in the development of computer-aided solutions for cervical cancer screening aims to address these common practical difficulties, which are especially frequent in the low-income countries where most deaths caused by cervical cancer occur. In this review, an overview of the disease and its current screening procedures is first introduced. Furthermore, an in-depth analysis of the most relevant computational methods available in the literature for cervical cell analysis is presented. In particular, this work focuses on topics related to automated quality assessment, segmentation, and classification, including an extensive literature review and a critical discussion of each topic. Since the major goal of this timely review is to support the development of new automated tools that can facilitate cervical screening procedures, this work also provides some considerations regarding the next generation of computer-aided diagnosis systems and future research directions.
Subject(s)
Computational Biology, Disease Susceptibility, Uterine Cervical Neoplasms/diagnosis, Uterine Cervical Neoplasms/etiology, Computational Biology/methods, Cytodiagnosis, Computer-Assisted Diagnosis, Disease Management, Early Detection of Cancer/methods, Female, Humans, Immunohistochemistry
ABSTRACT
Teledermatology has developed rapidly in recent years and is nowadays an essential tool for early diagnosis. In this work, we aim to improve existing teledermatology processes for skin lesion diagnosis by developing a deep learning approach for risk prioritization, using a dataset of retrospective data from referral requests of the Portuguese National Health System. Given the high complexity of this task, we propose a new prioritization pipeline guided and inspired by domain knowledge. We explored automatic lesion segmentation and tested different learning schemes, namely hierarchical classification and curriculum learning approaches, optionally including additional patient metadata. The final priority level prediction can then be obtained by combining the predicted diagnosis with a baseline priority level that accounts for explicit expert knowledge. In both the differential diagnosis and prioritization branches, lesion segmentation with a 30% tolerance for contextual information was shown to improve classification when compared with a flat baseline model trained on the original images; furthermore, the addition of patient information was not beneficial in most experiments. Curriculum learning delivered better results than flat or hierarchical approaches. Combining the predicted diagnosis with a knowledge map created in collaboration with dermatologists and with the baseline priority level achieved interesting results (best macro F1 of 43.93% on a validated test set), paving the way for new data-centric and knowledge-driven approaches.
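As an illustration of how a predicted diagnosis can be combined with a baseline priority derived from expert knowledge, the sketch below uses a hypothetical diagnosis-to-priority mapping and a deliberately simple combination rule; the actual knowledge map and combination used in the paper were defined with dermatologists and are not reproduced here.

```python
# Illustrative sketch only: the diagnoses, priority levels and combination
# rule below are hypothetical placeholders, not the paper's knowledge map.
from typing import Dict

# Hypothetical expert knowledge map: diagnosis -> baseline priority
# (0 = routine, 1 = priority, 2 = urgent).
KNOWLEDGE_MAP: Dict[str, int] = {
    "benign_nevus": 0,
    "seborrheic_keratosis": 0,
    "basal_cell_carcinoma": 1,
    "melanoma": 2,
}

def final_priority(predicted_diagnosis: str, baseline_priority: int) -> int:
    """Combine the model's diagnosis with the referral's baseline priority.

    A simple, conservative rule: never downgrade below the baseline level
    attached to the referral request.
    """
    diagnosis_priority = KNOWLEDGE_MAP.get(predicted_diagnosis, 1)
    return max(diagnosis_priority, baseline_priority)

print(final_priority("melanoma", baseline_priority=0))      # -> 2 (urgent)
print(final_priority("benign_nevus", baseline_priority=1))  # -> 1
```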
ABSTRACT
With the increasing adoption of teledermatology, there is a need to improve the automatic organization of medical records, with dermatological image modality being a key filter in this process. Although there has been considerable effort in the classification of medical imaging modalities, this has not been the case in the field of dermatology. Moreover, as various devices are used in teledermatological consultations, image acquisition conditions may differ. In this work, two models (VGG-16 and MobileNetV2) were used to classify dermatological images from the Portuguese National Health System according to their modality. Afterwards, four incremental learning strategies were applied to these models, namely naive, elastic weight consolidation, averaged gradient episodic memory, and experience replay, enabling their adaptation to new conditions while preserving previously acquired knowledge. The evaluation considered catastrophic forgetting, accuracy, and computational cost. The MobileNetV2 trained with the experience replay strategy, with 500 images in memory, achieved a global accuracy of 86.04% with only 0.0344 of forgetting, which is 6.98% less than the second-best strategy. Regarding efficiency, this strategy took 56 s per epoch longer than the baseline and required, on average, 4554 megabytes of RAM during training. Promising results were achieved, demonstrating the effectiveness of the proposed approach.
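For illustration, the sketch below shows the core of an experience replay strategy with a fixed memory of 500 images, matching the buffer size reported above; the use of reservoir sampling and the way replayed samples are mixed into each batch are assumptions, not details taken from the paper.

```python
import random
from typing import List, Tuple

class ReplayBuffer:
    """Fixed-size memory of past (image, label) pairs, filled with
    reservoir sampling so every example seen so far has an equal
    probability of being retained."""

    def __init__(self, capacity: int = 500) -> None:
        self.capacity = capacity
        self.memory: List[Tuple[object, int]] = []
        self.seen = 0

    def add(self, image: object, label: int) -> None:
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append((image, label))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.memory[j] = (image, label)

    def sample(self, k: int) -> List[Tuple[object, int]]:
        return random.sample(self.memory, min(k, len(self.memory)))

# During incremental training, each batch from the new acquisition
# conditions would be concatenated with a batch replayed from memory, e.g.:
#   mixed_batch = new_batch + buffer.sample(len(new_batch))
```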
ABSTRACT
Dermoscopic images allow the detailed examination of subsurface characteristics of the skin, which has led to the creation of several substantial databases of diverse skin lesions. However, the dermoscope is not an easily accessible tool in some regions, and a less expensive alternative is the acquisition of medium-resolution clinical macroscopic images of skin lesions. Yet the limited volume of macroscopic images available, especially mobile-acquired ones, hinders the development of a clinical mobile-based deep learning approach. In this work, we present a technique to efficiently utilize the sizable number of dermoscopic images to improve the segmentation capacity of macroscopic skin lesion images. A Cycle-Consistent Adversarial Network (CycleGAN) is used to translate images between the two distinct domains created by the different image acquisition devices. A visual inspection was performed on several databases for a qualitative evaluation of the results, based on the disappearance and appearance of intrinsic dermoscopic and macroscopic features. Moreover, the Fréchet Inception Distance was used as a quantitative metric. The quantitative segmentation results are demonstrated on the available macroscopic segmentation databases, SMARTSKINS and Dermofit Image Library, yielding test-set thresholded Jaccard indices of 85.13% and 74.30%, respectively. These results establish a new state-of-the-art performance on the SMARTSKINS database.
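The thresholded Jaccard index reported above can be sketched as follows; the 0.65 cut-off is the value commonly used in the ISIC skin lesion segmentation challenges and is an assumption here, since the abstract does not state the exact threshold employed.

```python
import numpy as np

def thresholded_jaccard(pred: np.ndarray, target: np.ndarray,
                        threshold: float = 0.65) -> float:
    """Jaccard index between two binary masks, zeroed below `threshold`.

    The 0.65 cut-off follows the ISIC challenge convention and is an
    assumption; the paper may use a different value.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    jaccard = np.logical_and(pred, target).sum() / union
    return float(jaccard) if jaccard >= threshold else 0.0

# Example with two toy 4x4 masks:
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(thresholded_jaccard(a, b))   # 4/6 ~= 0.667 -> kept (above 0.65)
```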
ABSTRACT
Over the last few decades, researchers have been investigating the mechanisms involved in speech production. Image analysis can be a valuable aid in understanding the morphology of the vocal tract, and the application of magnetic resonance imaging to study these mechanisms has been proven to be reliable and safe. We have applied deformable models to magnetic resonance images to conduct an automatic study of the vocal tract; mainly, to evaluate the shape of the vocal tract during the articulation of some European Portuguese sounds, and then to automatically segment the vocal tract's shape in new images. Thus, a point distribution model was built from a set of magnetic resonance images acquired during artificially sustained articulations of 21 sounds, which successfully captures the main characteristics of the movements of the vocal tract. The combination of that statistical shape model with the gray levels at its points is subsequently used to build active shape models and active appearance models. Those models were then used to segment the modeled vocal tract in new images in a successful and automatic manner. The computational models have thus been shown to be useful for the specific area of speech simulation and rehabilitation, namely to simulate and recognize the compensatory movements of the articulators during speech production.
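As a minimal sketch of the point distribution model underlying the active shape models described above, the code below stacks aligned landmark configurations, extracts the main modes of variation with PCA, and generates new plausible shapes as the mean shape plus a weighted combination of those modes; the landmark data here are synthetic and the number of modes is an illustrative choice.

```python
import numpy as np

def build_pdm(shapes: np.ndarray, n_modes: int = 3):
    """Build a point distribution model from aligned landmark shapes.

    `shapes` has shape (n_samples, 2 * n_landmarks), each row being the
    flattened (x1, y1, x2, y2, ...) coordinates of one vocal-tract contour.
    Returns the mean shape, the first `n_modes` eigenvectors and their
    eigenvalues.
    """
    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_modes]   # keep largest modes
    return mean_shape, eigvecs[:, order], eigvals[order]

def generate_shape(mean_shape, modes, b):
    """Synthesize a shape as  x = x_mean + P @ b  for mode weights b."""
    return mean_shape + modes @ np.asarray(b)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # 21 synthetic "articulations" of a 32-landmark contour.
    shapes = rng.normal(size=(21, 64))
    mean_shape, modes, eigvals = build_pdm(shapes, n_modes=3)
    new_shape = generate_shape(mean_shape, modes, b=2.0 * np.sqrt(eigvals))
    print(new_shape.shape)  # (64,)
```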