ABSTRACT
Bharadwaj et al. [1] present a comments paper evaluating the classification accuracy of several state-of-the-art methods using EEG data averaged over random class samples. According to their results, some of the methods achieve above-chance accuracy, while the method proposed in [2], which is the target of their analysis, does not. In this rebuttal, we address these claims, explain why they are not grounded in the cognitive neuroscience literature, and argue that the evaluation procedure is ineffective and unfair.
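To make the protocol under discussion concrete, the sketch below reproduces the kind of trial-averaging evaluation described above: pseudo-trials are built by averaging randomly drawn same-class EEG trials before classification. The data shapes, the number of averaged trials, and the linear-SVM classifier are illustrative assumptions, not the actual setup of [1] or [2].

```python
# A minimal sketch of a trial-averaging evaluation protocol; array
# shapes, n_avg/n_groups and the classifier are illustrative choices.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def average_random_class_samples(X, y, n_avg=10, n_groups=50, seed=0):
    """Build pseudo-trials by averaging n_avg randomly drawn trials
    of the same class."""
    rng = np.random.default_rng(seed)
    Xa, ya = [], []
    for label in np.unique(y):
        idx = np.flatnonzero(y == label)
        for _ in range(n_groups):
            pick = rng.choice(idx, size=n_avg, replace=False)
            Xa.append(X[pick].mean(axis=0))
            ya.append(label)
    return np.stack(Xa), np.array(ya)

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 32, 100))   # stand-in EEG: trials x channels x time
y = rng.integers(0, 4, size=400)      # 4 stand-in classes

Xa, ya = average_random_class_samples(X, y)
# Note: overlapping random draws mean the same underlying trials can
# contribute to both splits below, a leakage risk of such protocols.
Xtr, Xte, ytr, yte = train_test_split(
    Xa.reshape(len(Xa), -1), ya, test_size=0.2, stratify=ya, random_state=0)
clf = LinearSVC().fit(Xtr, ytr)
print("accuracy on averaged pseudo-trials:", clf.score(Xte, yte))
```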
ABSTRACT
This work presents a novel method for exploring human brain-visual representations, with a view towards replicating these processes in machines. The core idea is to learn plausible computational and biological representations by correlating human neural activity with natural images. To this end, we first propose a model, EEG-ChannelNet, to learn a brain manifold for EEG classification. After verifying that visual information can be extracted from EEG data, we introduce a multimodal approach that uses deep image and EEG encoders, trained in a siamese configuration, to learn a joint manifold that maximizes a compatibility measure between visual features and brain representations. We then carry out image classification and saliency detection on the learned manifold. Performance analyses show that our approach satisfactorily decodes visual information from neural signals, which in turn can be used to effectively supervise the training of deep learning models, as demonstrated by the high performance of image classification and saliency detection on out-of-training classes. The obtained results show that the learned brain-visual features improve performance while bringing deep models more in line with cognitive neuroscience work on visual perception and attention.
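To illustrate the siamese joint-embedding idea, the PyTorch sketch below trains an image encoder and an EEG encoder so that matching (image, EEG) pairs score higher under a compatibility measure than mismatched in-batch pairs. The encoder bodies, embedding size, and hinge-style loss are placeholder assumptions; they are not the EEG-ChannelNet architecture or the paper's exact objective.

```python
# A minimal sketch of siamese image/EEG encoders trained to maximize
# a compatibility measure; all architectures here are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

class EEGEncoder(nn.Module):
    def __init__(self, n_channels=128, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(64, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def compatibility_loss(img_z, eeg_z, margin=0.2):
    """Hinge loss: each matched (image, EEG) pair should outscore all
    mismatched in-batch pairs by at least `margin`."""
    scores = img_z @ eeg_z.t()              # (B, B) cosine similarities
    pos = scores.diag().unsqueeze(1)        # matched-pair scores
    mask = ~torch.eye(scores.size(0), dtype=torch.bool)
    return F.relu(margin + scores - pos)[mask].mean()

# One illustrative optimization step on random stand-in data.
img_enc, eeg_enc = ImageEncoder(), EEGEncoder()
opt = torch.optim.Adam(
    list(img_enc.parameters()) + list(eeg_enc.parameters()), lr=1e-3)

images = torch.randn(16, 3, 64, 64)         # stand-in image batch
eeg = torch.randn(16, 128, 440)             # matching stand-in EEG segments
opt.zero_grad()
loss = compatibility_loss(img_enc(images), eeg_enc(eeg))
loss.backward()
opt.step()
print("compatibility loss:", loss.item())
```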
Subject(s)
Algorithms, Neural Networks (Computer), Attention, Brain/diagnostic imaging, Humans, Visual Perception
ABSTRACT
COVID-19, caused by the SARS-CoV-2 pathogen, has been a catastrophic worldwide pandemic, with an exponential increase in confirmed cases and, unfortunately, deaths. In this work we propose an AI-powered pipeline, based on the deep-learning paradigm, for automated COVID-19 detection and lesion categorization from CT scans. We first propose a new segmentation module aimed at automatically identifying lung parenchyma and lobes. Next, we combine the segmentation network with classification networks for COVID-19 identification and lesion categorization. We compare the model's classification results with those obtained by three expert radiologists on a dataset of 166 CT scans. Results showed a sensitivity of 90.3% and a specificity of 93.5% for COVID-19 detection, at least on par with those yielded by the expert radiologists, and an average lesion categorization accuracy of about 84%. Moreover, prior lung and lobe segmentation plays a significant role, allowing us to improve classification performance by over 6 percentage points. Interpretation of the trained AI models reveals that the areas most significant for supporting the COVID-19 identification decision are consistent with the lesions clinically associated with the virus, i.e., crazy paving, consolidation and ground glass. This means that the artificial models are able to discriminate a positive patient from a negative one (both controls and patients with interstitial pneumonia who tested negative for COVID) by evaluating the presence of those lesions in CT scans. Finally, the AI models are integrated into a user-friendly GUI to support AI explainability for radiologists, publicly available at http://perceivelab.com/covid-ai. The whole AI system is unique since, to the best of our knowledge, it is the first publicly available AI-based software that attempts to explain to radiologists what information is used by AI methods to make decisions, and that proactively involves them in the decision loop to further improve understanding of COVID-19.
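The two-stage design, segmentation first and then classification on the segmented lungs, can be sketched as below. Both networks are tiny placeholders (the paper's actual segmentation and classification architectures are not reproduced here); the point is only how the predicted lung mask gates the CT input to the classifier.

```python
# A minimal PyTorch sketch of a segmentation-then-classification
# pipeline for CT slices; both networks are illustrative stand-ins.
import torch
import torch.nn as nn

class LungSegmenter(nn.Module):
    """Tiny stand-in for the lung/lobe segmentation module."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # soft lung mask
        )
    def forward(self, ct):
        return self.net(ct)

class CovidClassifier(nn.Module):
    """Stand-in classifier operating on lung-masked slices."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )
    def forward(self, x):
        return self.net(x)

segmenter, classifier = LungSegmenter(), CovidClassifier()
ct_slice = torch.randn(1, 1, 256, 256)   # stand-in CT slice
mask = segmenter(ct_slice)               # stage 1: lung segmentation
logits = classifier(ct_slice * mask)     # stage 2: classify lung region only
print(logits.softmax(dim=-1))            # P(negative), P(COVID-19)
```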
Subject(s)
COVID-19, Artificial Intelligence, Humans, Lung/diagnostic imaging, SARS-CoV-2, Tomography (X-Ray Computed)
ABSTRACT
This paper presents a tool for automatic assessment of skeletal bone age according to a modified version of the Tanner and Whitehouse (TW2) clinical method. The tool provides accurate bone age assessment in the range 0-6 years by processing epiphysial/metaphysial ROIs with image-processing techniques and assigning a TW2 stage to each ROI by means of hidden Markov models. The system was evaluated on a set of 360 X-rays (180 for males and 180 for females), achieving a high success rate in bone age evaluation (mean error of 0.41±0.33 years, comparable to human error) and outperforming other effective methods. The paper also describes the graphical user interface of the tool, which is also released, to support and speed up clinicians' practice in bone age assessment.
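As a sketch of the stage-assignment step, one can score each ROI's feature sequence under a small Gaussian-emission HMM per candidate TW2 stage and pick the maximum-likelihood stage. The number of states, the 1-D features, and the randomly initialized models below are illustrative assumptions, not the paper's trained models.

```python
# A minimal numpy sketch of per-stage HMM scoring for TW2 stage
# assignment; model sizes and features are illustrative stand-ins.
import numpy as np

def log_gauss(x, mu, var):
    """Log-density of x under per-state Gaussian emissions."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def forward_loglik(obs, pi, A, mu, var):
    """Log-likelihood of a 1-D observation sequence under a Gaussian
    HMM, via the forward algorithm in log space."""
    alpha = np.log(pi) + log_gauss(obs[0], mu, var)
    for x in obs[1:]:
        trans = alpha[:, None] + np.log(A)            # (from, to)
        alpha = np.logaddexp.reduce(trans, axis=0) + log_gauss(x, mu, var)
    return np.logaddexp.reduce(alpha)

rng = np.random.default_rng(0)
n_states, n_stages = 3, 9    # TW2 rates each ROI on a letter scale (A-I here)

def random_hmm():
    """Randomly initialized stand-in for a trained per-stage HMM."""
    return dict(pi=np.full(n_states, 1 / n_states),
                A=rng.dirichlet(np.ones(n_states), size=n_states),
                mu=rng.normal(size=n_states),
                var=np.ones(n_states))

stage_models = [random_hmm() for _ in range(n_stages)]  # one HMM per stage
roi_features = rng.normal(size=30)   # stand-in feature sequence for one ROI

scores = [forward_loglik(roi_features, **m) for m in stage_models]
print("assigned TW2 stage index:", int(np.argmax(scores)))
# In TW2, per-ROI stages map to numeric scores whose sum is converted
# to bone age via standard tables (omitted here).
```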