Results 1 - 14 of 14
1.
Hum Brain Mapp ; 40(16): 4606-4617, 2019 Nov 01.
Article in English | MEDLINE | ID: mdl-31322793

ABSTRACT

Prognostication for comatose patients after cardiac arrest (CA) is a difficult but essential task. Currently, visual interpretation of the electroencephalogram (EEG) is one of the main modalities used in outcome prediction. There is growing interest in computer-assisted EEG interpretation, either to overcome the possible subjectivity of visual interpretation or to identify complex features of the EEG signal. We used a one-dimensional convolutional neural network (CNN) to predict functional outcome based on 19-channel EEG recorded from 267 adult comatose patients during targeted temperature management after CA. The area under the receiver operating characteristic curve (AUC) on the test set was 0.885. Interestingly, model architecture and fine-tuning played only a marginal role in classification performance. We then used gradient-weighted class activation mapping (Grad-CAM) as a visualization technique to identify which EEG features the network used to classify an EEG epoch as favorable or unfavorable outcome, and also to understand failures of the network. Grad-CAM showed that the network relied on features similar to those of classical visual analysis for predicting unfavorable outcome (suppressed background, epileptiform transients). This study confirms that CNNs are promising models for EEG-based prognostication in comatose patients, and that Grad-CAM can explain the models' decision-making, which is of utmost importance for future use of deep learning models in a clinical setting.
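For a 1D CNN, the Grad-CAM map described above reduces to a few array operations once a layer's activations and class-score gradients are available. A minimal numpy sketch (shapes and arrays are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def grad_cam_1d(activations, gradients):
    """Gradient-weighted class activation map for one conv layer of a 1D CNN.

    activations: (channels, time) feature maps A_k at the chosen layer.
    gradients:   (channels, time) gradients of the class score w.r.t. A_k.
    Returns a (time,) map highlighting which samples of the EEG epoch
    support the predicted outcome class.
    """
    alpha = gradients.mean(axis=1)                                     # channel weights
    cam = np.maximum((alpha[:, None] * activations).sum(axis=0), 0.0)  # ReLU
    if cam.max() > 0:
        cam /= cam.max()                                               # scale to [0, 1]
    return cam
```

The map can then be upsampled to the input length and overlaid on the EEG trace to inspect, for instance, whether a suppressed background drove an "unfavorable" call.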


Subjects
Electroencephalography , Heart Arrest/diagnosis , Aged , Aged, 80 and over , Brain Mapping , Coma/diagnosis , Coma/diagnostic imaging , Deep Learning , Epilepsy/diagnostic imaging , Epilepsy/physiopathology , Female , Heart Arrest/diagnostic imaging , Humans , Machine Learning , Magnetic Resonance Imaging , Male , Middle Aged , Neural Networks, Computer , Prognosis , Sleep , Treatment Outcome
2.
Biomed Opt Express ; 15(2): 1219-1232, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38404325

ABSTRACT

Real-time 3D fluorescence microscopy is crucial for the spatiotemporal analysis of live organisms, such as neural activity monitoring. The eXtended field-of-view light field microscope (XLFM), also known as the Fourier light field microscope, is a straightforward, single-snapshot solution to achieve this. The XLFM acquires spatial-angular information in a single camera exposure. In a subsequent step, a 3D volume can be algorithmically reconstructed, making it exceptionally well-suited for real-time 3D acquisition and potential analysis. Unfortunately, traditional reconstruction methods (like deconvolution) require lengthy processing times (0.0220 Hz), hampering the speed advantages of the XLFM. Neural network architectures can overcome the speed constraints but do not automatically provide a way to certify the realism of their reconstructions, which is essential in the biomedical realm. To address these shortcomings, this work proposes a novel architecture to perform fast 3D reconstructions of live immobilized zebrafish neural activity based on a conditional normalizing flow. It reconstructs volumes at 8 Hz spanning 512 × 512 × 96 voxels, and it can be trained in under two hours due to the small dataset requirements (50 image-volume pairs). Furthermore, normalizing flows provide a way to compute the exact likelihood of a sample, which allows us to certify whether a predicted output is in- or out-of-distribution and to retrain the system when a novel sample is detected. We evaluate the proposed method with a cross-validation approach involving multiple in-distribution samples (genetically identical zebrafish) and various out-of-distribution ones.
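The exact-likelihood property comes from the change-of-variables formula log p(x) = log p_z(f(x)) + log|det ∂f/∂x|. A toy diagonal affine flow (illustrative parameters, not the paper's conditional flow) shows how that likelihood can gate out-of-distribution detection:

```python
import numpy as np

def affine_flow_log_likelihood(x, mu, log_sigma):
    """Exact log-likelihood under a diagonal affine flow z = (x - mu) / sigma.

    Change of variables: log p(x) = log N(z; 0, I) + sum(-log_sigma),
    where the second term is the log-determinant of the Jacobian.
    """
    z = (x - mu) * np.exp(-log_sigma)
    log_base = -0.5 * np.sum(z ** 2 + np.log(2.0 * np.pi))
    return log_base - np.sum(log_sigma)

def is_out_of_distribution(x, mu, log_sigma, threshold):
    """Flag a sample whose exact likelihood falls below a chosen threshold."""
    return affine_flow_log_likelihood(x, mu, log_sigma) < threshold
```

In the paper's setting the flow is conditioned on the XLFM image and the threshold would be calibrated on held-out in-distribution fish; a flagged sample would trigger retraining.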

3.
Sleep ; 46(5), 2023 May 10.
Article in English | MEDLINE | ID: mdl-36762998

ABSTRACT

STUDY OBJECTIVES: Inter-scorer variability in scoring polysomnograms is a well-known problem. Most existing automated sleep scoring systems are trained using labels annotated by a single scorer, whose subjective evaluation is transferred to the model. When annotations from two or more scorers are available, the scoring models are usually trained on the scorer consensus. The averaged scorer's subjectivity is transferred into the model, losing information about the internal variability among different scorers. In this study, we aim to incorporate the knowledge of multiple physicians into the training procedure. The goal is to optimize model training by exploiting the full information that can be extracted from the consensus of a group of scorers. METHODS: We train two lightweight deep learning-based models on three different multi-scored databases. We exploit the label smoothing technique together with a soft-consensus (LSSC) distribution to insert the knowledge of the multiple scorers into the training procedure of the model. We introduce the averaged cosine similarity (ACS) metric to quantify the similarity between the hypnodensity graph generated by the models with LSSC and the hypnodensity graph generated by the scorer consensus. RESULTS: The performance of the models improves on all the databases when we train the models with our LSSC. We found an increase in ACS (up to 6.4%) between the hypnodensity graph generated by the models trained with LSSC and the hypnodensity graph generated by the consensus. CONCLUSION: Our approach enables a model to better adapt to the consensus of the group of scorers. Future work will focus on further investigation of different scoring architectures and, hopefully, large-scale heterogeneous multi-scored datasets.
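One way to read the LSSC idea: instead of a hard one-hot consensus target, each epoch's training target blends the consensus with the empirical distribution of scorer votes, so inter-scorer disagreement survives in the target. A small numpy sketch (the blending weight and vote encoding are assumptions for illustration, not the paper's exact formulation):

```python
import numpy as np

def lssc_target(votes, n_stages=5, alpha=0.1):
    """Soft training target for one 30-s epoch scored by multiple scorers.

    votes: integer stage labels (one per scorer) in [0, n_stages).
    Blends the majority-vote one-hot with the scorers' empirical
    distribution; alpha is the smoothing weight.
    """
    counts = np.bincount(votes, minlength=n_stages).astype(float)
    soft = counts / counts.sum()                     # scorer distribution
    onehot = np.eye(n_stages)[int(counts.argmax())]  # consensus label
    return (1.0 - alpha) * onehot + alpha * soft

def averaged_cosine_similarity(h_model, h_consensus):
    """ACS between two hypnodensity graphs of shape (epochs, n_stages)."""
    num = (h_model * h_consensus).sum(axis=1)
    den = np.linalg.norm(h_model, axis=1) * np.linalg.norm(h_consensus, axis=1)
    return float((num / den).mean())
```

A model trained on such targets outputs per-epoch stage probabilities whose hypnodensity graph can then be compared to the consensus via ACS.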


Subjects
Sleep Stages , Sleep , Reproducibility of Results , Polysomnography/methods
4.
ArXiv ; 2023 Jun 14.
Article in English | MEDLINE | ID: mdl-37396615

ABSTRACT

Real-time 3D fluorescence microscopy is crucial for the spatiotemporal analysis of live organisms, such as neural activity monitoring. The eXtended field-of-view light field microscope (XLFM), also known as the Fourier light field microscope, is a straightforward, single-snapshot solution to achieve this. The XLFM acquires spatial-angular information in a single camera exposure. In a subsequent step, a 3D volume can be algorithmically reconstructed, making it exceptionally well-suited for real-time 3D acquisition and potential analysis. Unfortunately, traditional reconstruction methods (like deconvolution) require lengthy processing times (0.0220 Hz), hampering the speed advantages of the XLFM. Neural network architectures can overcome the speed constraints at the expense of lacking certainty metrics, which renders them untrustworthy for the biomedical realm. This work proposes a novel architecture to perform fast 3D reconstructions of live immobilized zebrafish neural activity based on a conditional normalizing flow. It reconstructs volumes at 8 Hz spanning 512 × 512 × 96 voxels, and it can be trained in under two hours due to the small dataset requirements (10 image-volume pairs). Furthermore, normalizing flows allow for exact likelihood computation, enabling distribution monitoring, followed by out-of-distribution detection and retraining of the system when a novel sample is detected. We evaluate the proposed method with a cross-validation approach involving multiple in-distribution samples (genetically identical zebrafish) and various out-of-distribution ones.

5.
NPJ Digit Med ; 6(1): 33, 2023 Mar 06.
Article in English | MEDLINE | ID: mdl-36878957

ABSTRACT

The AASM guidelines are the result of decades of effort aimed at standardizing the sleep scoring procedure, with the final goal of sharing a worldwide common methodology. The guidelines cover several aspects, from technical/digital specifications (e.g., recommended EEG derivations) to detailed sleep scoring rules according to age. Automated sleep scoring systems have always largely exploited these standards as fundamental guidelines. In this context, deep learning has demonstrated better performance compared to classical machine learning. Our present work shows that a deep learning-based sleep scoring algorithm may not need to fully exploit clinical knowledge or to strictly adhere to the AASM guidelines. Specifically, we demonstrate that U-Sleep, a state-of-the-art sleep scoring algorithm, can be strong enough to solve the scoring task even using clinically non-recommended or non-conventional derivations, and with no need to exploit information about the chronological age of the subjects. We finally strengthen the well-known finding that using data from multiple data centers always results in a better-performing model compared with training on a single cohort. Indeed, we show that this latter statement remains valid even when increasing the size and heterogeneity of the single data cohort. In all our experiments, we used 28,528 polysomnography studies from 13 different clinical studies.

6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 2961-2966, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36085742

ABSTRACT

In this work, we introduce a novel meta-learning method for sleep scoring based on self-supervised learning. Our approach aims at building models for sleep scoring that can generalize across different patients and recording facilities without requiring a further adaptation step on the target data. Towards this goal, we build our method on top of the Model-Agnostic Meta-Learning (MAML) framework by incorporating a self-supervised learning (SSL) stage, and call it S2MAML. We show that S2MAML can significantly outperform MAML. The gain in performance comes from the SSL stage, which we base on a general-purpose pseudo-task that limits overfitting to the subject-specific patterns present in the training dataset. We show that S2MAML outperforms standard supervised learning and MAML on the SC, ST, ISRUC, UCD and CAP datasets. Clinical relevance: Our work tackles the generalization problem of automatic sleep scoring models, one of the main hurdles limiting the adoption of such models in clinical and research sleep studies.
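The MAML backbone underneath S2MAML can be illustrated on a toy regression problem: adapt per task with one inner gradient step, then update the shared initialization using gradients taken at the adapted parameters. This sketch uses the first-order variant (FOMAML) on 1-D linear tasks; the SSL pseudo-task and sleep data are beyond its scope:

```python
import numpy as np

def fomaml_step(theta, tasks, inner_lr=0.05, outer_lr=0.05):
    """One first-order MAML meta-update on 1-D linear regression tasks.

    Each task is (x, y) with model y_hat = theta * x and squared loss.
    The inner step adapts theta per task; the outer step averages the
    gradients evaluated at the adapted parameters.
    """
    meta_grad = 0.0
    for x, y in tasks:
        inner_grad = 2.0 * np.mean((theta * x - y) * x)   # task-specific gradient
        theta_i = theta - inner_lr * inner_grad           # fast adaptation
        meta_grad += 2.0 * np.mean((theta_i * x - y) * x) # gradient after adapting
    return theta - outer_lr * meta_grad / len(tasks)
```

S2MAML's contribution is what happens around this loop: a self-supervised pseudo-task that regularizes the shared initialization against subject-specific overfitting.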


Subjects
Generalization, Psychological , Medicine , Acclimatization , Humans , Polysomnography , Sleep
7.
Article in English | MEDLINE | ID: mdl-34648450

ABSTRACT

Deep learning is widely used in the most recent automatic sleep scoring algorithms. Its popularity stems from its excellent performance and from its ability to process raw signals and to learn features directly from the data. Most existing scoring algorithms exploit very computationally demanding architectures, due to their high number of training parameters, and process lengthy input time sequences (up to 12 minutes). Only a few of these architectures provide an estimate of the model uncertainty. In this study we propose DeepSleepNet-Lite, a simplified and lightweight scoring architecture that processes only 90-second EEG input sequences. We exploit, for the first time in sleep scoring, the Monte Carlo dropout technique to enhance the performance of the architecture and to detect uncertain instances. The evaluation is performed on a single EEG channel (Fpz-Cz) from the open-source Sleep-EDF expanded database. DeepSleepNet-Lite achieves performance on par with, or slightly lower than, the existing state-of-the-art architectures in overall accuracy, macro F1-score and Cohen's kappa (on Sleep-EDF v1-2013 ±30mins: 84.0%, 78.0%, 0.78; on Sleep-EDF v2-2018 ±30mins: 80.3%, 75.2%, 0.73). Monte Carlo dropout enables the estimation of uncertain predictions. By rejecting the uncertain instances, the model achieves higher performance on both versions of the database (on Sleep-EDF v1-2013 ±30mins: 86.1%, 79.6%, 0.81; on Sleep-EDF v2-2018 ±30mins: 82.3%, 76.7%, 0.76). Our lighter sleep scoring approach paves the way for the application of scoring algorithms for real-time sleep analysis.
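Monte Carlo dropout keeps dropout active at inference and averages several stochastic forward passes; the spread across passes serves as the uncertainty used to reject doubtful epochs. A toy single-layer sketch (the network, its weights, and any rejection threshold are illustrative assumptions, not DeepSleepNet-Lite):

```python
import numpy as np

def mc_dropout_predict(x, w, n_passes=100, p_drop=0.5, seed=0):
    """Mean class probabilities and per-class std over stochastic passes.

    x: (features,) input; w: (classes, features) weights of a toy linear
    'network' with softmax output. Dropout is applied to x on every pass.
    """
    rng = np.random.default_rng(seed)
    probs = []
    for _ in range(n_passes):
        mask = rng.random(x.shape) >= p_drop          # Bernoulli dropout mask
        logits = w @ (x * mask / (1.0 - p_drop))      # inverted-dropout scaling
        e = np.exp(logits - logits.max())             # stable softmax
        probs.append(e / e.sum())
    probs = np.array(probs)
    return probs.mean(axis=0), probs.std(axis=0)
```

An epoch whose winning class shows a large standard deviation (or a low mean probability) across passes would be flagged as uncertain and rejected rather than scored.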


Subjects
Electroencephalography , Sleep Stages , Algorithms , Sleep , Uncertainty
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 3509-3512, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33018760

ABSTRACT

The present study evaluates how effectively a deep learning-based sleep scoring system encodes temporal dependency from raw polysomnography signals. An exhaustive range of neural networks, including state-of-the-art architectures, has been used in the evaluation. The architectures have been assessed using a single EEG channel (Fpz-Cz) from the open-source Sleep-EDF expanded database. The best performing model reached an overall accuracy of 85.2% and a Cohen's kappa of 0.8, with an F1-score for stage N1 equal to 50.2%. We have introduced a new metric, δnorm, to better evaluate temporal dependencies. A simple feed-forward architecture not only achieves performance comparable to the most up-to-date complex architectures, but also better encodes the continuous temporal characteristics of sleep. Clinical relevance: A better understanding of the network's capability to encode sleep temporal patterns could lead to improvements in automatic sleep scoring.


Subjects
Deep Learning , Sleep Stages , Electroencephalography , Polysomnography , Sleep
9.
Sleep Med Rev ; 48: 101204, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31491655

ABSTRACT

Clinical sleep scoring involves a tedious visual review of overnight polysomnograms by a human expert, according to official standards. It would therefore appear to be a suitable task for modern artificial intelligence algorithms. Indeed, machine learning algorithms have been applied to sleep scoring for many years, and several software products nowadays offer automated or semi-automated scoring services. However, the vast majority of sleep physicians do not use them. Very recently, thanks to increased computational power, deep learning has also been employed, with promising results. Machine learning algorithms can undoubtedly reach high accuracy in specific situations, but there are many difficulties in introducing them into the daily routine. In this review, the latest approaches applying deep learning to facilitate and accelerate sleep scoring are thoroughly analyzed and compared with state-of-the-art methods. Then the obstacles to introducing automated sleep scoring into clinical practice are examined. The capability of deep learning algorithms to learn from highly heterogeneous datasets, in terms of both human data and scorers, is very promising and should be further investigated.


Subjects
Data Analysis , Machine Learning , Sleep Stages/physiology , Sleep Wake Disorders/diagnosis , Algorithms , Diagnosis, Computer-Assisted , Humans , Polysomnography/instrumentation
10.
IEEE Trans Pattern Anal Mach Intell ; 30(3): 518-31, 2008 Mar.
Article in English | MEDLINE | ID: mdl-18195444

ABSTRACT

Defocus can be modeled as a diffusion process and represented mathematically using the heat equation, where image blur corresponds to the diffusion of heat. This analogy can be extended to non-planar scenes by allowing a space-varying diffusion coefficient. The inverse problem of reconstructing 3-D structure from blurred images corresponds to an "inverse diffusion" that is notoriously ill-posed. We show how to bypass this problem by using the notion of relative blur. Given two images, within each neighborhood, the amount of diffusion necessary to transform the sharper image into the blurrier one depends on the depth of the scene. This can be used to devise a global algorithm to estimate the depth profile of the scene without recovering the deblurred image, using only forward diffusion.
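The relative-blur idea can be sketched in 1-D: blur the sharper signal forward with increasing Gaussian diffusion and keep the amount that best matches the blurrier one; no ill-posed deblurring is ever attempted. (Signal, kernel, and candidate sigmas below are illustrative choices, not the paper's algorithm.)

```python
import numpy as np

def gaussian_blur(signal, sigma):
    """Forward diffusion: direct convolution with a normalized Gaussian kernel."""
    radius = max(1, int(3 * sigma))
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

def relative_blur_sigma(sharp, blurry, candidate_sigmas):
    """Diffusion amount that turns the sharper signal into the blurrier one.

    The winning sigma varies with scene depth, so sweeping it per
    neighborhood yields a depth cue using only forward (well-posed) blur.
    """
    errors = [np.sum((gaussian_blur(sharp, s) - blurry) ** 2)
              for s in candidate_sigmas]
    return candidate_sigmas[int(np.argmin(errors))]
```

In the 2-D setting the same matching is done per neighborhood with a space-varying diffusion coefficient, producing a depth profile without ever recovering the deblurred image.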


Subjects
Algorithms , Artifacts , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Pattern Recognition, Automated/methods , Diffusion , Information Storage and Retrieval/methods , Reproducibility of Results , Sensitivity and Specificity
11.
IEEE Trans Image Process ; 27(4): 1723-1734, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29346091

ABSTRACT

We propose a method to remove motion blur from a single light field captured with a moving plenoptic camera. Since the motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state-of-the-art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes for synthesizing images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake, as demonstrated on both synthetic and real light field data.

12.
IEEE Trans Pattern Anal Mach Intell ; 38(6): 1041-55, 2016 Jun.
Article in English | MEDLINE | ID: mdl-26372205

ABSTRACT

Blind deconvolution is the problem of recovering a sharp image and a blur kernel from a noisy blurry image. Recently, there has been a significant effort toward understanding the basic mechanisms used to solve blind deconvolution. While this effort resulted in the deployment of effective algorithms, the theoretical findings generated contrasting views on why these approaches work. On the one hand, one can observe experimentally that alternating energy minimization algorithms converge to the desired solution. On the other hand, it has been shown that such alternating minimization algorithms should fail to converge and that one should instead use a so-called Variational Bayes approach. To clarify this conundrum, recent work showed that a good image and blur prior is what actually makes a blind deconvolution algorithm work. Unfortunately, this analysis did not apply to algorithms based on total variation regularization. In this manuscript, we provide both analysis and experiments to obtain a clearer picture of blind deconvolution. Our analysis reveals the very reason why an algorithm based on total variation works. We also introduce an implementation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves performance comparable to the top-performing algorithms.
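The total-variation prior at the center of this analysis penalizes the sum of absolute gradients and so favors piecewise-constant signals. A 1-D gradient-descent sketch of the prior in a denoising role (a simplified stand-in to show the prior's effect, not the paper's blind deconvolution algorithm; all parameter values are arbitrary):

```python
import numpy as np

def tv_denoise_1d(g, lam=0.3, lr=0.05, n_iter=500, eps=1e-2):
    """Gradient descent on 0.5 * ||u - g||^2 + lam * TV(u).

    TV(u) = sum |u[i+1] - u[i]| is smoothed via sqrt(d^2 + eps) so a
    plain gradient step applies; eps sets the smoothing scale and
    larger lam flattens the signal more.
    """
    u = g.astype(float)
    for _ in range(n_iter):
        d = np.diff(u)
        s = d / np.sqrt(d * d + eps)     # smoothed sign of each jump
        tv_grad = np.zeros_like(u)
        tv_grad[:-1] -= s                # each jump pulls its two
        tv_grad[1:] += s                 # endpoints toward each other
        u = u - lr * ((u - g) + lam * tv_grad)
    return u
```

In the blind setting the same prior enters an alternating scheme that updates the image and the kernel in turn, which is the regime the paper's analysis addresses.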

13.
IEEE Trans Pattern Anal Mach Intell ; 34(5): 972-86, 2012 May.
Article in English | MEDLINE | ID: mdl-21844629

ABSTRACT

Portable light field (LF) cameras have demonstrated capabilities beyond conventional cameras. In a single snapshot, they enable digital image refocusing and 3D reconstruction. We show that they obtain a larger depth of field but maintain the ability to reconstruct detail at high resolution. In fact, all depths are approximately focused, except for a thin slab where blur size is bounded, i.e., their depth of field is essentially inverted compared to regular cameras. Crucial to their success is the way they sample the LF, trading off spatial versus angular resolution, and how aliasing affects the LF. We show that applying traditional multiview stereo methods to the extracted low-resolution views can result in reconstruction errors due to aliasing. We address these challenges using an explicit image formation model, and incorporate Lambertian and texture preserving priors to reconstruct both scene depth and its superresolved texture in a variational Bayesian framework, eliminating aliasing by fusing multiview information. We demonstrate the method on synthetic and real images captured with our LF camera, and show that it can outperform other computational camera systems.

14.
IEEE Trans Biomed Eng ; 58(3): 795-9, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21118759

ABSTRACT

Retinal fundus images acquired with nonmydriatic digital fundus cameras are versatile tools for the diagnosis of various retinal diseases. Because of the ease of use of newer camera models and their relatively low cost, these cameras can be employed by operators with limited training for telemedicine or point-of-care (PoC) applications. We propose a novel technique that uses uncalibrated multiple-view fundus images to analyze the swelling of the macula. This innovation enables the detection and quantitative measurement of swollen areas by remote ophthalmologists, a capability that is not available with a single image and is prone to error with stereo fundus cameras. We also present automatic algorithms to measure features from the reconstructed image, which are useful in PoC automated diagnosis of early macular edema, e.g., before the appearance of exudation. The technique presented is divided into three parts: first, a preprocessing technique simultaneously enhances the dark microstructures of the macula and equalizes the image; second, all available views are registered using nonmorphological sparse features; finally, a dense pyramidal optical flow is calculated for all the images and statistically combined to build a naive height map of the macula. Results are presented on three sets of synthetic images and two sets of real-world images. These preliminary tests show the ability to infer a minimum swelling of 300 µm and to correlate the reconstruction with the swollen location.


Subjects
Fundus Oculi , Macula Lutea/pathology , Ophthalmoscopy/methods , Point-of-Care Systems , Telemedicine/methods , Algorithms , Humans , Image Processing, Computer-Assisted/methods