Results 1 - 10 of 10
1.
2.
J Nucl Med ; 65(1): 4-12, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-37945384

ABSTRACT

Nuclear medicine imaging modalities such as PET and SPECT are confounded by high noise levels and low spatial resolution, necessitating postreconstruction image enhancement to improve their quality and quantitative accuracy. Artificial intelligence (AI) models such as convolutional neural networks, U-Nets, and generative adversarial networks have shown promising outcomes in enhancing PET and SPECT images. This review article presents a comprehensive survey of state-of-the-art AI methods for PET and SPECT image enhancement and seeks to identify emerging trends in this field. We focus on recent breakthroughs in AI-based PET and SPECT image denoising and deblurring. Supervised deep-learning models have shown great potential in reducing radiotracer dose and scan times without sacrificing image quality and diagnostic accuracy. However, the clinical utility of these methods is often limited by their need for paired clean and corrupt datasets for training. This has motivated research into unsupervised alternatives that can overcome this limitation by relying on only corrupt inputs or unpaired datasets to train models. This review highlights recently published supervised and unsupervised efforts toward AI-based PET and SPECT image enhancement. We discuss cross-scanner and cross-protocol training efforts, which can greatly enhance the clinical translatability of AI-based image enhancement tools. We also aim to address the looming question of whether the improvements in image quality generated by AI models lead to actual clinical benefit. To this end, we discuss works that have focused on task-specific objective clinical evaluation of AI models for image enhancement or incorporated clinical metrics into their loss functions to guide the image generation process. Finally, we discuss emerging research directions, which include the exploration of novel training paradigms, curation of larger task-specific datasets, and objective clinical evaluation that will enable the realization of the full translational potential of these models in the future.
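To make the supervised setup described above concrete, here is a minimal sketch (PyTorch; not from the article) of the paired training scheme whose data requirements motivate the unsupervised alternatives: a small convolutional denoiser fitted to pairs of noisy (e.g., low-dose) and clean (e.g., full-dose) images. The 3-layer network, MSE loss, and random stand-in data are illustrative placeholders; published work typically uses U-Nets or GANs with more elaborate losses.

```python
# Minimal sketch of supervised denoising training (PyTorch). The network,
# loss, and data are illustrative assumptions, not from the review.
import torch
import torch.nn as nn

denoiser = nn.Sequential(                       # toy 3-layer CNN denoiser
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

def train_step(noisy, clean):
    # Supervised training needs the paired clean target -- exactly the
    # requirement that motivates the unsupervised methods in the review.
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(denoiser(noisy), clean)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with random stand-in tensors (real work uses low-/full-dose pairs):
loss = train_step(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
```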


Subject(s)
Artificial Intelligence; Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods; Signal-To-Noise Ratio; Tomography, Emission-Computed, Single-Photon; Positron-Emission Tomography/methods
3.
PLoS One ; 18(5): e0285703, 2023.
Article in English | MEDLINE | ID: mdl-37195925

ABSTRACT

Sleep is an important indicator of a person's health, and its accurate and cost-effective quantification is of great value in healthcare. The gold standard for sleep assessment and the clinical diagnosis of sleep disorders is polysomnography (PSG). However, PSG requires an overnight clinic visit and trained technicians to score the obtained multimodality data. Wrist-worn consumer devices, such as smartwatches, are a promising alternative to PSG because of their small form factor, continuous monitoring capability, and popularity. Unlike PSG, however, wearables-derived data are noisier and far less information-rich because of the smaller number of modalities and the less accurate measurements that result from their small form factor. Given these challenges, most consumer devices perform two-stage (i.e., sleep-wake) classification, which is inadequate for deep insights into a person's sleep health. The challenging multi-class (three, four, or five-class) staging of sleep using data from wrist-worn wearables remains unresolved. The difference in data quality between consumer-grade wearables and lab-grade clinical equipment is the motivation behind this study. In this paper, we present an artificial intelligence (AI) technique termed sequence-to-sequence LSTM for automated mobile sleep staging (SLAMSS), which can perform three-class (wake, NREM, REM) and four-class (wake, light, deep, REM) sleep classification from activity (i.e., wrist-accelerometry-derived locomotion) and two coarse heart rate measures, both of which can be reliably obtained from a consumer-grade wrist-wearable device. Our method relies on raw time-series datasets and obviates the need for manual feature selection. We validated our model using actigraphy and coarse heart rate data from two independent study populations: the Multi-Ethnic Study of Atherosclerosis (MESA; N = 808) cohort and the Osteoporotic Fractures in Men (MrOS; N = 817) cohort. SLAMSS achieves an overall accuracy of 79%, weighted F1 score of 0.80, 77% sensitivity, and 89% specificity for three-class sleep staging and an overall accuracy of 70-72%, weighted F1 score of 0.72-0.73, 64-66% sensitivity, and 89-90% specificity for four-class sleep staging in the MESA cohort. It yielded an overall accuracy of 77%, weighted F1 score of 0.77, 74% sensitivity, and 88% specificity for three-class sleep staging and an overall accuracy of 68-69%, weighted F1 score of 0.68-0.69, 60-63% sensitivity, and 88-89% specificity for four-class sleep staging in the MrOS cohort. These results were achieved with feature-poor inputs at a low temporal resolution. In addition, we extended our three-class staging model to an unrelated Apple Watch dataset. Importantly, SLAMSS predicts the duration of each sleep stage with high accuracy. This is especially significant for four-class sleep staging, where deep sleep is severely underrepresented. We show that, by appropriately choosing the loss function to address the inherent class imbalance, our method can accurately estimate deep sleep time (SLAMSS/MESA: 0.61±0.69 hours, PSG/MESA ground truth: 0.60±0.60 hours; SLAMSS/MrOS: 0.53±0.66 hours, PSG/MrOS ground truth: 0.55±0.57 hours). Deep sleep quality and quantity are vital metrics and early indicators for a number of diseases. Our method, which enables accurate deep sleep estimation from wearables-derived data, is therefore promising for a variety of clinical applications requiring long-term deep sleep monitoring.
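The abstract does not spell out the architecture in code; the sketch below (PyTorch) illustrates the general recipe it describes: a sequence-to-sequence bidirectional LSTM mapping a night of per-epoch (activity, heart-rate) features to per-epoch stage labels, with a class-weighted loss as one way to counter the underrepresentation of deep sleep. The layer sizes, class weights, and sequence length are hypothetical, not the paper's values.

```python
# Minimal sketch of sequence-to-sequence sleep staging (PyTorch); sizes
# and class weights are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn

N_FEATURES = 3   # activity count + two coarse heart-rate measures per epoch
N_CLASSES = 4    # wake, light, deep, REM

class SeqStager(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATURES, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, N_CLASSES)

    def forward(self, x):            # x: (batch, epochs, N_FEATURES)
        out, _ = self.lstm(x)
        return self.head(out)        # (batch, epochs, N_CLASSES) logits

# Up-weighting rare stages (e.g., deep sleep) in the loss is one form of
# the class-imbalance handling the paper credits for accurate deep-sleep
# time estimation.
class_weights = torch.tensor([1.0, 1.0, 4.0, 2.0])  # hypothetical values
criterion = nn.CrossEntropyLoss(weight=class_weights)

model = SeqStager()
x = torch.randn(8, 960, N_FEATURES)            # 8 nights of 30-s epochs
labels = torch.randint(0, N_CLASSES, (8, 960))
loss = criterion(model(x).permute(0, 2, 1), labels)  # CE over the class dim
```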


Subject(s)
Actigraphy; Artificial Intelligence; Male; Humans; Heart Rate/physiology; Sleep/physiology; Sleep Stages/physiology; Time Factors; Reproducibility of Results
4.
Phys Med Biol ; 66(21), 2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34663767

ABSTRACT

Objective: Elevated noise levels in positron emission tomography (PET) images lower image quality and quantitative accuracy and are a confounding factor for clinical interpretation. The objective of this paper is to develop a PET image denoising technique based on unsupervised deep learning. Significance: Recent advances in deep learning have ushered in a wide array of novel denoising techniques, several of which have been successfully adapted for PET image reconstruction and post-processing. The bulk of the deep learning research so far has focused on supervised learning schemes, which, for the image denoising problem, require paired noisy and noiseless/low-noise images. This requirement tends to limit the utility of these methods for medical applications as paired training datasets are not always available. Furthermore, to achieve the best-case performance of these methods, it is essential that the datasets for training and subsequent real-world application have consistent image characteristics (e.g., noise, resolution, etc.), which is rarely the case for clinical data. To circumvent these challenges, it is critical to develop unsupervised techniques that obviate the need for paired training data. Approach: In this paper, we have adapted Noise2Void, a technique that relies on corrupt images alone for model training, for PET image denoising and assessed its performance using PET neuroimaging data. Noise2Void is an unsupervised approach that uses a blind-spot network design. It requires only a single noisy image as its input and, therefore, is well-suited for clinical settings. During the training phase, a single noisy PET image serves as both the input and the target. Here we present a modified version of Noise2Void based on a transfer learning paradigm that involves group-level pretraining followed by individual fine-tuning. Furthermore, we investigate the impact of incorporating an anatomical image as a second input to the network. Main Results: We validated our denoising technique using simulation data based on the BrainWeb digital phantom. We show that Noise2Void with pretraining and/or anatomical guidance leads to higher peak signal-to-noise ratios than traditional denoising schemes such as Gaussian filtering, anatomically guided non-local means filtering, and block-matching and 4D filtering. We used the Noise2Noise denoising technique as an additional benchmark. For clinical validation, we applied this method to human brain imaging datasets. The clinical findings were consistent with the simulation results, confirming the translational value of Noise2Void as a denoising tool.
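For orientation, here is a minimal sketch (PyTorch) of the blind-spot training step at the core of Noise2Void: a few pixels are masked by replacing them with values from random neighbors, and the loss is computed only at those masked locations so the network cannot simply learn the identity mapping. The masking fraction, neighborhood size, and network are illustrative; the paper's pretraining/fine-tuning and anatomical-guidance variants are not shown.

```python
# Minimal sketch of the Noise2Void blind-spot training step (PyTorch).
# Hyperparameters are placeholders, not the paper's settings.
import torch

def noise2void_step(net, noisy, mask_frac=0.002):
    b, c, h, w = noisy.shape
    n_mask = max(1, int(mask_frac * h * w))
    inp = noisy.clone()
    ys = torch.randint(0, h, (n_mask,))
    xs = torch.randint(0, w, (n_mask,))
    # Replace each masked pixel with the value of a random nearby pixel.
    ny = (ys + torch.randint(-2, 3, (n_mask,))).clamp(0, h - 1)
    nx = (xs + torch.randint(-2, 3, (n_mask,))).clamp(0, w - 1)
    inp[:, :, ys, xs] = noisy[:, :, ny, nx]
    pred = net(inp)
    # The single noisy image is both input and target, but only the
    # masked ("blind-spot") pixels contribute to the loss.
    return ((pred[:, :, ys, xs] - noisy[:, :, ys, xs]) ** 2).mean()
```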


Subject(s)
Image Processing, Computer-Assisted; Positron-Emission Tomography; Computer Simulation; Humans; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Signal-To-Noise Ratio
5.
PET Clin ; 16(4): 553-576, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34537130

ABSTRACT

High noise and low spatial resolution are two key confounding factors that limit the qualitative and quantitative accuracy of PET images. Artificial intelligence models for image denoising and deblurring are becoming increasingly popular for the postreconstruction enhancement of PET images. We present a detailed review of recent efforts for artificial intelligence-based PET image enhancement with a focus on network architectures, data types, loss functions, and evaluation metrics. We also highlight emerging areas in this field that are quickly gaining popularity, identify barriers to large-scale adoption of artificial intelligence models for PET image enhancement, and discuss future directions.
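Of the evaluation metrics such a review covers, peak signal-to-noise ratio (PSNR) is among the most common; a minimal reference implementation (NumPy; for illustration, not from the review) follows.

```python
# Minimal PSNR computation (NumPy); shown for reference, not from the review.
import numpy as np

def psnr(reference, test):
    """Peak signal-to-noise ratio in dB between a reference image and an
    enhanced image; higher is better."""
    mse = np.mean((reference.astype(np.float64) - test) ** 2)
    return 10.0 * np.log10(reference.max() ** 2 / mse)
```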


Subject(s)
Artificial Intelligence; Image Processing, Computer-Assisted; Humans; Image Enhancement; Positron-Emission Tomography; Signal-To-Noise Ratio
6.
Med Image Comput Comput Assist Interv ; 12267: 418-427, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33263115

ABSTRACT

Tau tangles are a pathophysiological hallmark of Alzheimer's disease (AD) and exhibit a stereotypical pattern of spatiotemporal spread, which has strong links to disease progression and cognitive decline. Preclinical evidence suggests that tau spread depends on neuronal connectivity rather than physical proximity between different brain regions. Here, we present a novel physics-informed geometric learning model for predicting tau buildup and spread that learns patterns directly from longitudinal tau imaging data while receiving guidance from governing physical principles. Implemented as a graph neural network with physics-based regularization in latent space, the model enables effective training with smaller datasets. For training and validation of the model, we used longitudinal tau measures from positron emission tomography (PET) and structural connectivity graphs from diffusion tensor imaging (DTI) from the Harvard Aging Brain Study. The model led to a higher peak signal-to-noise ratio and lower mean squared error than both an unregularized graph neural network and a differential equation solver. The method was validated using both two-timepoint and three-timepoint tau PET measures. The effectiveness of the approach was further confirmed by a cross-validation study.
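The abstract does not give the regularizer's exact form; the sketch below (PyTorch) illustrates one common way to encode the governing physics, assuming spread follows network diffusion dx/dt = -β L x on the structural connectivity graph (L being the graph Laplacian): a data term on the predicted follow-up scan plus a penalty on the residual of a forward-Euler diffusion step. The paper applies its penalty in latent space, which this sketch does not reproduce.

```python
# Illustrative sketch of physics-based regularization for tau spread
# (PyTorch). Assumes network diffusion dx/dt = -beta * L @ x as the
# governing model; this is a common assumption, not necessarily the
# paper's exact latent-space formulation.
import torch

def graph_laplacian(adj):
    # L = D - A for a weighted structural connectivity matrix A.
    return torch.diag(adj.sum(dim=1)) - adj

def physics_informed_loss(pred_t1, true_t1, tau_t0, adj, dt,
                          beta=0.1, lam=1.0):
    data_loss = torch.mean((pred_t1 - true_t1) ** 2)
    # Residual of a forward-Euler diffusion step between the two scans:
    # (x1 - x0)/dt + beta * L @ x0 should be near zero under the model.
    L = graph_laplacian(adj)
    residual = (pred_t1 - tau_t0) / dt + beta * (L @ tau_t0)
    physics_loss = torch.mean(residual ** 2)
    return data_loss + lam * physics_loss
```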

7.
IEEE Trans Comput Imaging ; 6: 518-528, 2020.
Article in English | MEDLINE | ID: mdl-32055649

ABSTRACT

Positron emission tomography (PET) suffers from severe resolution limitations that reduce its quantitative accuracy. In this paper, we present a super-resolution (SR) imaging technique for PET based on convolutional neural networks (CNNs). To facilitate the resolution recovery process, we incorporate high-resolution (HR) anatomical information based on magnetic resonance (MR) imaging. We introduce the spatial location information of the input image patches as additional CNN inputs to accommodate the spatially-variant nature of the blur kernels in PET. We compared the performance of shallow (3-layer) and very deep (20-layer) CNNs with various combinations of the following inputs: low-resolution (LR) PET, radial locations, axial locations, and HR MR. To validate the CNN architectures, we performed both realistic simulation studies using the BrainWeb digital phantom and clinical studies using neuroimaging datasets. For both the simulation and clinical studies, the LR PET images were based on the Siemens HR+ scanner. Two different scenarios were examined in simulation: one where the target HR image is the ground-truth phantom image and another where the target HR image is based on the Siemens HRRT scanner, a high-resolution dedicated brain PET scanner. The latter scenario was also examined using clinical neuroimaging datasets. A number of factors affected the relative performance of the different CNN designs examined, including network depth, target image quality, and the resemblance between the target and anatomical images. In general, however, all deep CNNs outperformed classical penalized deconvolution and partial volume correction techniques by large margins, both qualitatively (e.g., edge and contrast recovery) and quantitatively (as indicated by three metrics: peak signal-to-noise ratio, structural similarity index, and contrast-to-noise ratio).
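To illustrate the input construction the abstract describes, the sketch below (PyTorch) stacks an LR PET slice, a registered HR MR slice, and radial and axial coordinate maps as CNN input channels, so the network can adapt to PET's spatially-variant blur. The shapes, coordinate normalization, and 3-layer network are illustrative assumptions, not the paper's configuration.

```python
# Sketch of multichannel CNN input for anatomically guided, spatially
# aware PET super-resolution (PyTorch). Shapes are illustrative.
import torch
import torch.nn as nn

h, w = 128, 128
lr_pet = torch.randn(1, 1, h, w)   # low-resolution PET slice (upsampled)
hr_mr = torch.randn(1, 1, h, w)    # registered high-resolution MR slice

ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                        torch.linspace(-1, 1, w), indexing="ij")
radial = torch.sqrt(xs ** 2 + ys ** 2).expand(1, 1, h, w)  # radial position
axial = torch.full((1, 1, h, w), 0.25)   # normalized axial slice location

x = torch.cat([lr_pet, hr_mr, radial, axial], dim=1)  # 4 input channels

sr_cnn = nn.Sequential(            # "shallow" 3-layer variant, as a sketch
    nn.Conv2d(4, 64, 9, padding=4), nn.ReLU(),
    nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(),
    nn.Conv2d(32, 1, 5, padding=2),
)
hr_pet = sr_cnn(x)                 # super-resolved PET estimate
```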

8.
Neural Netw ; 125: 83-91, 2020 May.
Article in English | MEDLINE | ID: mdl-32078963

ABSTRACT

The intrinsically low spatial resolution of positron emission tomography (PET) leads to image quality degradation and inaccurate image-based quantitation. Recently developed supervised super-resolution (SR) approaches are of great relevance to PET but require paired low- and high-resolution images for training, which are usually unavailable for clinical datasets. In this paper, we present a self-supervised SR (SSSR) technique for PET based on dual generative adversarial networks (GANs), which obviates the need for paired training data, ensuring wider applicability and adoptability. The SSSR network receives as inputs a low-resolution PET image, a high-resolution anatomical magnetic resonance (MR) image, spatial information (axial and radial coordinates), and a high-dimensional feature set extracted from an auxiliary CNN that is separately trained in a supervised manner using paired simulation datasets. The network is trained using a loss function that includes two adversarial loss terms, a cycle consistency term, and a total variation penalty on the SR image. We validated the SSSR technique using a clinical neuroimaging dataset. We demonstrate that SSSR is promising in terms of image quality, peak signal-to-noise ratio, structural similarity index, contrast-to-noise ratio, and an additional no-reference metric developed specifically for SR image quality assessment. Comparisons with other SSSR variants suggest that its high performance is largely attributable to simulation guidance.
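The abstract enumerates the loss components; the sketch below (PyTorch) shows one plausible way to assemble them, with illustrative weights: the two adversarial terms are assumed to come from the two discriminators of the dual-GAN setup, the cycle term compares the re-degraded SR output with the original LR input, and a total variation penalty regularizes the SR image.

```python
# Sketch of the composite SSSR generator loss (PyTorch). The weights and
# the exact pairing of terms are assumptions, not the paper's values.
import torch

def total_variation(img):
    # Mean absolute difference between neighboring pixels, per axis.
    dh = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
    dw = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
    return dh + dw

def sssr_loss(adv_hr, adv_lr, reconstructed_lr, input_lr, sr_img,
              w_cycle=10.0, w_tv=1e-4):
    # adv_hr / adv_lr: adversarial losses from the two discriminators
    # (high-resolution branch and re-degraded low-resolution branch).
    cycle = (reconstructed_lr - input_lr).abs().mean()  # cycle consistency
    return adv_hr + adv_lr + w_cycle * cycle + w_tv * total_variation(sr_img)
```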


Subject(s)
Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Positron-Emission Tomography/methods; Magnetic Resonance Imaging/methods; Signal-To-Noise Ratio
9.
IEEE Trans Comput Imaging ; 5(4): 530-539, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31723575

ABSTRACT

The intrinsically limited spatial resolution of PET confounds image quantitation. This paper presents an image deblurring and super-resolution framework for PET using anatomical guidance provided by high-resolution MR images. The framework relies on image-domain post-processing of already-reconstructed PET images by means of spatially-variant deconvolution stabilized by an MR-based joint entropy penalty function. The method is validated through simulation studies based on the BrainWeb digital phantom, experimental studies based on the Hoffman phantom, and clinical neuroimaging studies pertaining to aging and Alzheimer's disease. The developed technique was compared with direct deconvolution and with deconvolution stabilized by a quadratic difference penalty, a total variation penalty, and a Bowsher penalty. The BrainWeb simulation study showed that the technique improved image quality and quantitative accuracy, as measured by contrast-to-noise ratio, structural similarity index, root-mean-square error, and peak signal-to-noise ratio. The Hoffman phantom study indicated noticeable improvement in the structural similarity index (relative to the MR image) and the gray-to-white contrast-to-noise ratio. Finally, clinical amyloid and tau imaging studies for Alzheimer's disease showed a reduction in the coefficient of variation in several key brain regions associated with the two target pathologies.
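As a concrete illustration of the penalized-deconvolution framework, the sketch below (PyTorch) minimizes a blur-model data-fidelity term plus a regularizer by gradient descent. For simplicity it uses a spatially invariant PSF and the quadratic difference penalty (one of the paper's comparison baselines) in place of the MR-based joint entropy term, which requires a Parzen-window joint density estimate.

```python
# Sketch of penalized image-domain deconvolution (PyTorch). Spatially
# invariant PSF and quadratic difference penalty stand in for the paper's
# spatially-variant deconvolution with an MR-based joint entropy penalty.
import torch
import torch.nn.functional as F

def deconvolve(observed, psf, n_iter=200, lr=0.1, lam=0.01):
    # observed: (1, 1, H, W) blurry PET; psf: (1, 1, k, k), k odd.
    x = observed.clone().requires_grad_(True)   # deblurred PET estimate
    opt = torch.optim.SGD([x], lr=lr)
    pad = psf.shape[-1] // 2
    for _ in range(n_iter):
        opt.zero_grad()
        # Data fidelity: re-blurred estimate should match the observation.
        fidelity = ((F.conv2d(x, psf, padding=pad) - observed) ** 2).mean()
        # Quadratic difference penalty over neighboring pixels.
        penalty = ((x[..., 1:, :] - x[..., :-1, :]) ** 2).mean() + \
                  ((x[..., :, 1:] - x[..., :, :-1]) ** 2).mean()
        (fidelity + lam * penalty).backward()
        opt.step()
    return x.detach()
```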

10.
Proc IEEE Int Symp Biomed Imaging ; 2019: 414-417, 2019 Apr.
Article in English | MEDLINE | ID: mdl-31327984

ABSTRACT

Graph convolutional neural networks (GCNNs) aim to extend the data representation and classification capabilities of convolutional neural networks, which are highly effective for signals defined on regular Euclidean domains (e.g., image and audio signals), to irregular, graph-structured data defined on non-Euclidean domains. Graph-theoretic tools that enable us to study the brain as a complex system are of great significance in brain connectivity studies. Particularly, in the context of Alzheimer's disease (AD), a neurodegenerative disorder associated with network dysfunction, graph-based tools are vital for disease classification and staging. Here, we implement and test a multi-class GCNN classifier for network-based classification of subjects on the AD spectrum into four categories: cognitively normal (CN), early mild cognitive impairment, late mild cognitive impairment, and AD. We train and validate the network using structural connectivity graphs obtained from diffusion tensor imaging data. Using receiver operating characteristic curves, we show that the GCNN classifier outperforms a support vector machine classifier by margins that depend on disease category. Our findings indicate that the performance gap between the two methods increases with disease progression from CN to AD. We thus demonstrate that GCNN is a competitive tool for staging and classification of subjects on the AD spectrum.
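As a sketch of what such a classifier can look like, the code below (PyTorch) implements Kipf-Welling style graph convolutions, H' = ReLU(Â H W) with Â = D^(-1/2)(A + I)D^(-1/2), over a structural connectivity graph, followed by mean pooling and a four-class head. The layer count, sizes, and node features are illustrative; the paper's specific GCNN formulation may differ.

```python
# Minimal sketch of a graph convolutional classifier (PyTorch) for
# four-class AD-spectrum staging. Sizes and depth are illustrative.
import torch
import torch.nn as nn

class GCNNClassifier(nn.Module):
    def __init__(self, n_feat, hidden=32, n_classes=4):
        super().__init__()
        self.w1 = nn.Linear(n_feat, hidden)
        self.w2 = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, n_classes)  # CN / EMCI / LMCI / AD

    @staticmethod
    def normalize(adj):
        # A_hat = D^{-1/2} (A + I) D^{-1/2}, the Kipf-Welling propagation.
        a = adj + torch.eye(adj.shape[0])
        d = a.sum(dim=1).rsqrt()
        return d[:, None] * a * d[None, :]

    def forward(self, adj, feats):  # adj: (R, R), feats: (R, n_feat)
        a_hat = self.normalize(adj)
        h = torch.relu(a_hat @ self.w1(feats))
        h = torch.relu(a_hat @ self.w2(h))
        return self.head(h.mean(dim=0))  # graph-level logits via mean pool

# Example with a random stand-in connectivity matrix over 90 regions:
model = GCNNClassifier(n_feat=90)
adj = torch.rand(90, 90)
logits = model(adj, torch.eye(90))   # identity node features, one-hot per region
```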
