Results 1 - 7 of 7
1.
Article in English | MEDLINE | ID: mdl-38985553

ABSTRACT

Bharadwaj et al. [1] present a comment paper evaluating the classification accuracy of several state-of-the-art methods on EEG data averaged over random class samples. According to their results, some of the methods achieve above-chance accuracy, while the method proposed in [2], which is the target of their analysis, does not. In this rebuttal, we address these claims and explain why they are not grounded in the cognitive neuroscience literature, and why the evaluation procedure is ineffective and unfair.

2.
Sci Rep ; 13(1): 4641, 2023 Mar 21.
Article in English | MEDLINE | ID: mdl-36944784

ABSTRACT

Volcano-seismic signals can aid volcanic hazard estimation and eruption forecasting. However, the mechanism underlying their low-frequency components is still a matter of debate. Here, we show signatures of dynamic strain records from Distributed Acoustic Sensing in the low frequencies of volcanic signals at Vulcano Island, Italy. Signs of unrest have been observed there since September 2021, with CO2 degassing and the occurrence of long-period and very-long-period events. We interrogated an onshore and offshore fiber-optic telecommunication cable linking Vulcano Island to Sicily. We explored various approaches to automatically detect seismo-volcanic events, both adapting conventional algorithms and using machine learning techniques. During one month of acquisition, we found 1488 events with a great variety of waveforms composed of two main frequency bands (0.1-0.2 Hz and 3-5 Hz) with various relative amplitudes. On the basis of spectral signature and family classification, we propose a model in which gas accumulates in the hydrothermal system and is released through a series of resonating fractures up to the surface. Our findings demonstrate that fiber-optic telecommunication cables, in association with cutting-edge machine learning algorithms, contribute to a better understanding and monitoring of volcanic hydrothermal systems.
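A classic example of the "conventional algorithms" adapted for event detection is the STA/LTA (short-term average over long-term average) trigger. The sketch below is illustrative only — the paper's actual detectors, window lengths, and thresholds are not given in this abstract — and runs on a synthetic trace containing a 4 Hz burst:

```python
import numpy as np

def sta_lta(signal, fs, sta_win=1.0, lta_win=10.0):
    """Short-term / long-term average energy ratio, a classic event detector."""
    sta_n = int(sta_win * fs)
    lta_n = int(lta_win * fs)
    energy = signal ** 2
    # running means via cumulative sum (each window ends at the current sample)
    csum = np.cumsum(np.insert(energy, 0, 0.0))
    sta = (csum[sta_n:] - csum[:-sta_n]) / sta_n
    lta = (csum[lta_n:] - csum[:-lta_n]) / lta_n
    # align both series on their common trailing samples
    n = min(len(sta), len(lta))
    return sta[-n:] / (lta[-n:] + 1e-12)

# synthetic trace: background noise plus a 2-second 4 Hz burst at t = 30 s
fs = 50.0
t = np.arange(0.0, 60.0, 1.0 / fs)
noise = 0.1 * np.random.default_rng(0).standard_normal(t.size)
event = np.where((t > 30) & (t < 32), np.sin(2 * np.pi * 4 * t), 0.0)
ratio = sta_lta(noise + event, fs)
detected = ratio.max() > 3.0  # a commonly used trigger threshold
```

The burst raises the short-term energy well above the long-term background, so the ratio crosses the threshold only around the event.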

3.
Mach Learn Med Imaging ; 12966: 238-247, 2021 Sep.
Article in English | MEDLINE | ID: mdl-36780259

ABSTRACT

We propose a novel 3D fully convolutional deep network for automated pancreas segmentation from both MRI and CT scans. More specifically, the proposed model consists of a 3D encoder that learns to extract volume features at different scales; features taken at different points of the encoder hierarchy are then sent to multiple 3D decoders that individually predict intermediate segmentation maps. Finally, all segmentation maps are combined to obtain a single detailed segmentation mask. We test our model on both CT and MRI imaging data: the publicly available NIH Pancreas-CT dataset (82 contrast-enhanced CT scans) and a private MRI dataset (40 MRI scans). Experimental results show that our model outperforms existing methods on CT pancreas segmentation, obtaining an average Dice score of about 88%, and yields promising performance (average Dice score of about 77%) on a very challenging MRI dataset. Additional control experiments demonstrate that the achieved performance is due to the combination of our 3D fully convolutional network and the hierarchical representation decoding, thus substantiating our architectural design.
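The reported Dice scores measure volumetric overlap between predicted and ground-truth masks. A minimal sketch of the metric on toy 3D volumes (not the paper's implementation) follows:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# toy 3D volumes: the prediction recovers 3 of the 4 target voxels
target = np.zeros((4, 4, 4), dtype=np.uint8)
target[1, 1, 0:4] = 1
pred = np.zeros_like(target)
pred[1, 1, 0:3] = 1
print(round(float(dice_score(pred, target)), 3))  # → 0.857
```

With 3 overlapping voxels against mask sizes 3 and 4, the score is 2·3 / (3 + 4) ≈ 0.857; identical masks score 1.0.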

4.
IEEE Trans Pattern Anal Mach Intell ; 43(11): 3833-3849, 2021 11.
Article in English | MEDLINE | ID: mdl-32750768

ABSTRACT

This work presents a novel method of exploring human brain-visual representations, with a view towards replicating these processes in machines. The core idea is to learn plausible computational and biological representations by correlating human neural activity and natural images. Thus, we first propose a model, EEG-ChannelNet, to learn a brain manifold for EEG classification. After verifying that visual information can be extracted from EEG data, we introduce a multimodal approach that uses deep image and EEG encoders, trained in a siamese configuration, for learning a joint manifold that maximizes a compatibility measure between visual features and brain representations. We then carry out image classification and saliency detection on the learned manifold. Performance analyses show that our approach satisfactorily decodes visual information from neural signals. This, in turn, can be used to effectively supervise the training of deep learning models, as demonstrated by the high performance of image classification and saliency detection on out-of-training classes. The obtained results show that the learned brain-visual features lead to improved performance and simultaneously bring deep models more in line with cognitive neuroscience work related to visual perception and attention.
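The abstract does not specify the joint-manifold training objective; a common choice for Siamese image/EEG encoders is a margin-based compatibility loss, sketched here with cosine similarity on toy embeddings (an assumption for illustration, not the paper's exact formulation):

```python
import numpy as np

def compatibility(a, b):
    """Cosine compatibility between batches of L2-normalized embeddings."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return (a * b).sum(axis=-1)

def siamese_hinge_loss(img, eeg_matched, eeg_mismatched, margin=0.2):
    """Require matched image/EEG pairs to score higher than mismatched ones."""
    pos = compatibility(img, eeg_matched)
    neg = compatibility(img, eeg_mismatched)
    return float(np.maximum(0.0, margin - pos + neg).mean())

# toy 2-D embeddings: two orthogonal "concepts"
img = np.array([[1.0, 0.0], [0.0, 1.0]])
loss_aligned = siamese_hinge_loss(img, img, img[::-1])   # matched pairs identical
loss_swapped = siamese_hinge_loss(img, img[::-1], img)   # matched pairs orthogonal
```

When matched pairs already agree, the loss is zero; swapping the pairing drives it up, which is the gradient signal that pulls the two modalities onto a shared manifold.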


Subjects
Algorithms, Neural Networks (Computer), Attention, Brain/diagnostic imaging, Humans, Visual Perception
5.
Artif Intell Med ; 118: 102114, 2021 08.
Article in English | MEDLINE | ID: mdl-34412837

ABSTRACT

COVID-19 infection, caused by the SARS-CoV-2 pathogen, has been a catastrophic worldwide pandemic, with an exponential increase in confirmed cases and, unfortunately, deaths. In this work we propose an AI-powered pipeline, based on the deep-learning paradigm, for automated COVID-19 detection and lesion categorization from CT scans. We first propose a new segmentation module aimed at automatically identifying lung parenchyma and lobes. Next, we combine the segmentation network with classification networks for COVID-19 identification and lesion categorization. We compare the model's classification results with those obtained by three expert radiologists on a dataset of 166 CT scans. Results showed a sensitivity of 90.3% and a specificity of 93.5% for COVID-19 detection, at least on par with those yielded by the expert radiologists, and an average lesion categorization accuracy of about 84%. Moreover, prior lung and lobe segmentation plays a significant role, enhancing classification performance by over 6 percentage points. Interpretation of the trained AI models reveals that the areas most significant for supporting the COVID-19 identification decision are consistent with the lesions clinically associated with the virus, i.e., crazy paving, consolidation, and ground glass. This means that the models are able to discriminate a positive patient from a negative one (both controls and patients with interstitial pneumonia who tested negative for COVID) by evaluating the presence of those lesions in CT scans. Finally, the AI models are integrated into a user-friendly GUI to support AI explainability for radiologists, which is publicly available at http://perceivelab.com/covid-ai. To the best of our knowledge, this is the first publicly available AI-based software that attempts to explain to radiologists what information the AI methods use to make decisions, and that proactively involves them in the decision loop to further improve COVID-19 understanding.
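For reference, the reported sensitivity and specificity are simple ratios over confusion-matrix counts. The counts below are hypothetical, chosen only to illustrate the arithmetic, and are not taken from the paper:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: fraction of true positives among all positives.
    Specificity: fraction of true negatives among all negatives."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical confusion counts: 50 positive and 100 negative scans
sens, spec = sensitivity_specificity(tp=45, fn=5, tn=93, fp=7)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```

Missing 5 of 50 positives gives 90.0% sensitivity; 7 false alarms on 100 negatives gives 93.0% specificity.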


Subjects
COVID-19, Artificial Intelligence, Humans, Lung/diagnostic imaging, SARS-CoV-2, X-Ray Computed Tomography
6.
IEEE Trans Neural Netw Learn Syst ; 31(12): 5103-5115, 2020 12.
Article in English | MEDLINE | ID: mdl-31985445

ABSTRACT

Integrating human-provided location priors into video object segmentation has been shown to be an effective strategy for enhancing performance, but applying them at large scale is unfeasible. Gamification can help reduce the annotation burden, but it still requires user involvement. We propose a video object segmentation framework that combines the advantages of user feedback and gamification: a reinforcement learning (RL) model simulates multiple game players by reproducing the human ability to pinpoint moving objects, and the simulated feedback drives the decisions of a fully convolutional deep segmentation network. Experimental results on the DAVIS-17 benchmark show that: 1) including a user-provided prior, even if imprecise, yields high performance; 2) our RL agent satisfactorily replicates the variability of humans in identifying spatiotemporal salient objects; and 3) employing artificially generated priors in an unsupervised video object segmentation model reaches state-of-the-art performance.
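The abstract does not detail how the simulated players' feedback is encoded for the segmentation network; one plausible representation, shown here purely as an illustration, is a Gaussian location prior centered on a jittered (i.e., humanly imprecise) object position:

```python
import numpy as np

def click_prior(h, w, cy, cx, sigma=5.0):
    """Gaussian location prior centered on a simulated, possibly imprecise click."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

# mimic human imprecision: jitter the true object center before building the prior
rng = np.random.default_rng(0)
cy = 24 + int(round(rng.normal(0, 2)))
cx = 40 + int(round(rng.normal(0, 2)))
prior = click_prior(48, 64, cy, cx)
```

Such a soft map can be concatenated to the input frames as an extra channel, so the network learns to trust the prior without requiring it to be pixel-accurate.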

7.
IEEE Trans Pattern Anal Mach Intell ; 39(10): 1942-1958, 2017 10.
Article in English | MEDLINE | ID: mdl-27662670

ABSTRACT

Video object segmentation can be considered one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to deal effectively with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; these limitations appear more evident when we compare the performance of automated methods with that of humans. However, manually segmenting objects in videos is largely impractical, as it requires considerable time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method that exploits, on one hand, the ability of humans to correctly identify objects in visual scenes and, on the other hand, collective human brainpower to solve challenging large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, exploiting data provided by over 60 users, demonstrated that our method achieves a better trade-off between annotation time and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.
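The energy function itself is not given in the abstract; a toy version with a unary term derived from a human-provided location prior and a pairwise spatial-smoothness term might look like this (illustrative only — the paper's actual energy also encodes temporal constraints):

```python
import numpy as np

def segmentation_energy(labels, prior, smooth_weight=1.0):
    """Toy energy: unary cost from a per-pixel object prior, plus a pairwise
    term penalizing label disagreements between 4-connected neighbors."""
    eps = 1e-7
    unary = np.where(labels == 1,
                     -np.log(prior + eps),
                     -np.log(1.0 - prior + eps)).sum()
    pairwise = (np.abs(np.diff(labels, axis=0)).sum()
                + np.abs(np.diff(labels, axis=1)).sum())
    return float(unary + smooth_weight * pairwise)

# location prior: object likely in the center block
prior = np.full((8, 8), 0.1)
prior[2:6, 2:6] = 0.9
coherent = (prior > 0.5).astype(int)  # mask agreeing with the prior
noisy = coherent.copy()
noisy[::2, ::2] ^= 1                  # flip scattered pixels
```

Minimizing this energy favors masks that both agree with the human prior and are spatially coherent: the `coherent` mask scores a lower energy than the `noisy` one.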
