Results 1 - 20 of 31
2.
J Nucl Med ; 65(1): 4-12, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-37945384

ABSTRACT

Nuclear medicine imaging modalities such as PET and SPECT are confounded by high noise levels and low spatial resolution, necessitating postreconstruction image enhancement to improve their quality and quantitative accuracy. Artificial intelligence (AI) models such as convolutional neural networks, U-Nets, and generative adversarial networks have shown promising outcomes in enhancing PET and SPECT images. This review article presents a comprehensive survey of state-of-the-art AI methods for PET and SPECT image enhancement and seeks to identify emerging trends in this field. We focus on recent breakthroughs in AI-based PET and SPECT image denoising and deblurring. Supervised deep-learning models have shown great potential in reducing radiotracer dose and scan times without sacrificing image quality and diagnostic accuracy. However, the clinical utility of these methods is often limited by their need for paired clean and corrupt datasets for training. This has motivated research into unsupervised alternatives that can overcome this limitation by relying on only corrupt inputs or unpaired datasets to train models. This review highlights recently published supervised and unsupervised efforts toward AI-based PET and SPECT image enhancement. We discuss cross-scanner and cross-protocol training efforts, which can greatly enhance the clinical translatability of AI-based image enhancement tools. We also aim to address the looming question of whether the improvements in image quality generated by AI models lead to actual clinical benefit. To this end, we discuss works that have focused on task-specific objective clinical evaluation of AI models for image enhancement or incorporated clinical metrics into their loss functions to guide the image generation process. 
Finally, we discuss emerging research directions, which include the exploration of novel training paradigms, curation of larger task-specific datasets, and objective clinical evaluation that will enable the realization of the full translation potential of these models in the future.


Subject(s)
Artificial Intelligence; Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods; Signal-To-Noise Ratio; Tomography, Emission-Computed, Single-Photon; Positron-Emission Tomography/methods
3.
PLoS One ; 18(5): e0285703, 2023.
Article in English | MEDLINE | ID: mdl-37195925

ABSTRACT

Sleep is an important indicator of a person's health, and its accurate and cost-effective quantification is of great value in healthcare. The gold standard for sleep assessment and the clinical diagnosis of sleep disorders is polysomnography (PSG). However, PSG requires an overnight clinic visit and trained technicians to score the obtained multimodality data. Wrist-worn consumer devices, such as smartwatches, are a promising alternative to PSG because of their small form factor, continuous monitoring capability, and popularity. Unlike PSG, however, wearables-derived data are noisier and far less information-rich because of the smaller number of modalities and the less accurate measurements permitted by the small form factor. Given these challenges, most consumer devices perform two-stage (i.e., sleep-wake) classification, which is inadequate for deep insights into a person's sleep health. The challenging multi-class (three-, four-, or five-class) staging of sleep using data from wrist-worn wearables remains unresolved. The difference in data quality between consumer-grade wearables and lab-grade clinical equipment is the motivation behind this study. In this paper, we present an artificial intelligence (AI) technique termed sequence-to-sequence LSTM for automated mobile sleep staging (SLAMSS), which can perform three-class (wake, NREM, REM) and four-class (wake, light, deep, REM) sleep classification from activity (i.e., wrist-accelerometry-derived locomotion) and two coarse heart rate measures, both of which can be reliably obtained from a consumer-grade wrist-wearable device. Our method relies on raw time-series datasets and obviates the need for manual feature selection. We validated our model using actigraphy and coarse heart rate data from two independent study populations: the Multi-Ethnic Study of Atherosclerosis (MESA; N = 808) cohort and the Osteoporotic Fractures in Men (MrOS; N = 817) cohort.
SLAMSS achieves an overall accuracy of 79%, weighted F1 score of 0.80, 77% sensitivity, and 89% specificity for three-class sleep staging and an overall accuracy of 70-72%, weighted F1 score of 0.72-0.73, 64-66% sensitivity, and 89-90% specificity for four-class sleep staging in the MESA cohort. It yielded an overall accuracy of 77%, weighted F1 score of 0.77, 74% sensitivity, and 88% specificity for three-class sleep staging and an overall accuracy of 68-69%, weighted F1 score of 0.68-0.69, 60-63% sensitivity, and 88-89% specificity for four-class sleep staging in the MrOS cohort. These results were achieved with feature-poor inputs at a low temporal resolution. In addition, we extended our three-class staging model to an unrelated Apple Watch dataset. Importantly, SLAMSS predicts the duration of each sleep stage with high accuracy. This is especially significant for four-class sleep staging, where deep sleep is severely underrepresented. We show that, by appropriately choosing the loss function to address the inherent class imbalance, our method can accurately estimate deep sleep time (SLAMSS/MESA: 0.61±0.69 hours, PSG/MESA ground truth: 0.60±0.60 hours; SLAMSS/MrOS: 0.53±0.66 hours, PSG/MrOS ground truth: 0.55±0.57 hours). Deep sleep quality and quantity are vital metrics and early indicators for a number of diseases. Our method, which enables accurate deep sleep estimation from wearables-derived data, is therefore promising for a variety of clinical applications requiring long-term deep sleep monitoring.
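The abstract attributes the accurate deep-sleep estimates to a loss function chosen to counter class imbalance. A minimal pure-Python sketch of one common choice, inverse-frequency class-weighted cross-entropy (the exact weighting scheme used by SLAMSS is not specified in the abstract, so the weights below are an illustrative assumption):

```python
import math

def inverse_frequency_weights(labels, n_classes):
    """Weight each class by the inverse of its frequency, so that a
    rare stage such as deep sleep contributes more per epoch to the loss."""
    counts = [0] * n_classes
    for y in labels:
        counts[y] += 1
    total = len(labels)
    return [total / (n_classes * c) if c else 0.0 for c in counts]

def weighted_cross_entropy(probs, label, weights):
    """Per-sample loss: -w[y] * log p(y)."""
    return -weights[label] * math.log(probs[label])

# Toy label distribution: wake-dominated, deep sleep (class 2) rare.
stage_labels = [0] * 70 + [1] * 25 + [2] * 5
w = inverse_frequency_weights(stage_labels, 3)
```

With these weights, misclassifying a rare deep-sleep epoch is penalized roughly 14 times more than misclassifying a wake epoch, which pushes the classifier away from predicting only the majority stages.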


Subject(s)
Actigraphy; Artificial Intelligence; Male; Humans; Heart Rate/physiology; Sleep/physiology; Sleep Stages/physiology; Time Factors; Reproducibility of Results
5.
J Nucl Med ; 64(2): 188-196, 2023 02.
Article in English | MEDLINE | ID: mdl-36522184

ABSTRACT

Trustworthiness is a core tenet of medicine. The patient-physician relationship is evolving from a dyad to a broader ecosystem of health care. With the emergence of artificial intelligence (AI) in medicine, the elements of trust must be revisited. We envision a road map for the establishment of trustworthy AI ecosystems in nuclear medicine. In this report, AI is contextualized in the history of technologic revolutions. Opportunities for AI applications in nuclear medicine related to diagnosis, therapy, and workflow efficiency, as well as emerging challenges and critical responsibilities, are discussed. Establishing and maintaining leadership in AI require a concerted effort to promote the rational and safe deployment of this innovative technology by engaging patients, nuclear medicine physicians, scientists, technologists, and referring providers, among other stakeholders, while protecting our patients and society. This strategic plan was prepared by the AI task force of the Society of Nuclear Medicine and Molecular Imaging.


Subject(s)
Artificial Intelligence; Nuclear Medicine; Humans; Ecosystem; Radionuclide Imaging; Molecular Imaging
6.
J Nucl Med ; 63(4): 500-510, 2022 04.
Article in English | MEDLINE | ID: mdl-34740952

ABSTRACT

The nuclear medicine field has seen a rapid expansion of academic and commercial interest in developing artificial intelligence (AI) algorithms. Users and developers can avoid some of the pitfalls of AI by recognizing and following best practices in AI algorithm development. In this article, recommendations on technical best practices for developing AI algorithms in nuclear medicine are provided, beginning with general recommendations and then continuing with descriptions of how one might practice these principles for specific topics within nuclear medicine. This report was produced by the AI Task Force of the Society of Nuclear Medicine and Molecular Imaging.


Subject(s)
Artificial Intelligence; Nuclear Medicine; Algorithms; Molecular Imaging; Radionuclide Imaging
7.
Phys Med Biol ; 66(21)2021 11 01.
Article in English | MEDLINE | ID: mdl-34663767

ABSTRACT

Objective: Elevated noise levels in positron emission tomography (PET) images lower image quality and quantitative accuracy and are a confounding factor for clinical interpretation. The objective of this paper is to develop a PET image denoising technique based on unsupervised deep learning. Significance: Recent advances in deep learning have ushered in a wide array of novel denoising techniques, several of which have been successfully adapted for PET image reconstruction and post-processing. The bulk of the deep learning research so far has focused on supervised learning schemes, which, for the image denoising problem, require paired noisy and noiseless/low-noise images. This requirement tends to limit the utility of these methods for medical applications, as paired training datasets are not always available. Furthermore, to achieve the best-case performance of these methods, it is essential that the datasets for training and subsequent real-world application have consistent image characteristics (e.g., noise, resolution), which is rarely the case for clinical data. To circumvent these challenges, it is critical to develop unsupervised techniques that obviate the need for paired training data. Approach: In this paper, we have adapted Noise2Void, a technique that relies on corrupt images alone for model training, for PET image denoising and assessed its performance using PET neuroimaging data. Noise2Void is an unsupervised approach that uses a blind-spot network design. It requires only a single noisy image as its input and, therefore, is well-suited for clinical settings. During the training phase, a single noisy PET image serves as both the input and the target. Here we present a modified version of Noise2Void based on a transfer learning paradigm that involves group-level pretraining followed by individual fine-tuning.
Furthermore, we investigate the impact of incorporating an anatomical image as a second input to the network. Main Results: We validated our denoising technique using simulation data based on the BrainWeb digital phantom. We show that Noise2Void with pretraining and/or anatomical guidance leads to higher peak signal-to-noise ratios than traditional denoising schemes such as Gaussian filtering, anatomically guided non-local means filtering, and block-matching and 4D filtering. We used the Noise2Noise denoising technique as an additional benchmark. For clinical validation, we applied this method to human brain imaging datasets. The clinical findings were consistent with the simulation results, confirming the translational value of Noise2Void as a denoising tool.
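The blind-spot idea at the core of Noise2Void can be sketched in a few lines: a random subset of pixels is masked (here each masked pixel is replaced by a random immediate neighbor, one common masking scheme), and the loss is evaluated only at the masked locations, so the network cannot simply copy its input. A 1D pure-Python sketch (the actual method trains a CNN on 2D/3D PET volumes):

```python
import random

def blindspot_mask(signal, n_mask, rng):
    """Replace n_mask randomly chosen interior samples with the value of a
    random immediate neighbor; return the corrupted signal and the masked
    indices.  Training targets the ORIGINAL values at these indices only,
    so the network must infer them from context rather than learn identity."""
    corrupted = list(signal)
    idx = rng.sample(range(1, len(signal) - 1), n_mask)
    for i in idx:
        corrupted[i] = signal[i + rng.choice([-1, 1])]
    return corrupted, idx

def masked_mse(prediction, target, idx):
    """Loss evaluated only at the blind-spot locations."""
    return sum((prediction[i] - target[i]) ** 2 for i in idx) / len(idx)

rng = random.Random(0)
noisy = [1.0, 2.0, 1.5, 3.0, 2.5, 2.0, 1.0]
corrupted, masked = blindspot_mask(noisy, 2, rng)
```

Because only the masked positions enter the loss, a single noisy image can serve as both input and target, which is what makes the approach usable when no clean or paired data exist.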


Subject(s)
Image Processing, Computer-Assisted; Positron-Emission Tomography; Computer Simulation; Humans; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Signal-To-Noise Ratio
8.
PET Clin ; 16(4): 553-576, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34537130

ABSTRACT

High noise and low spatial resolution are two key confounding factors that limit the qualitative and quantitative accuracy of PET images. Artificial intelligence models for image denoising and deblurring are becoming increasingly popular for the postreconstruction enhancement of PET images. We present a detailed review of recent efforts for artificial intelligence-based PET image enhancement with a focus on network architectures, data types, loss functions, and evaluation metrics. We also highlight emerging areas in this field that are quickly gaining popularity, identify barriers to large-scale adoption of artificial intelligence models for PET image enhancement, and discuss future directions.


Subject(s)
Artificial Intelligence; Image Processing, Computer-Assisted; Humans; Image Enhancement; Positron-Emission Tomography; Signal-To-Noise Ratio
9.
Neuroimage ; 237: 118126, 2021 08 15.
Article in English | MEDLINE | ID: mdl-33957234

ABSTRACT

Tau neurofibrillary tangles, a pathophysiological hallmark of Alzheimer's disease (AD), exhibit a stereotypical spatiotemporal trajectory that is strongly correlated with disease progression and cognitive decline. Personalized prediction of tau progression is, therefore, vital for the early diagnosis and prognosis of AD. Evidence from both animal and human studies is suggestive of tau transmission along the brain's preexisting neural connectivity conduits. We present here an analytic graph diffusion framework for individualized predictive modeling of tau progression along the structural connectome. To account for physiological processes that lead to active generation and clearance of tau alongside passive diffusion, our model uses an inhomogeneous graph diffusion equation with a source term and provides closed-form solutions to this equation for linear and exponential source functionals. Longitudinal imaging data from two cohorts, the Harvard Aging Brain Study (HABS) and the Alzheimer's Disease Neuroimaging Initiative (ADNI), were used to validate the model. The clinical data used for developing and validating the model include regional tau measures extracted from longitudinal positron emission tomography (PET) scans based on the 18F-Flortaucipir radiotracer and individual structural connectivity maps computed from diffusion tensor imaging (DTI) by means of tractography and streamline counting. Two-timepoint tau PET scans were used to assess the goodness of model fit. Three-timepoint tau PET scans were used to assess predictive accuracy via comparison of predicted and observed tau measures at the third timepoint. Our results show high consistency between predicted and observed tau and differential tau from region-based analysis.
While the prognostic value of this approach needs to be validated in a larger cohort, our preliminary results suggest that our longitudinal predictive model, which offers an in vivo macroscopic perspective on tau progression in the brain, is potentially promising as a personalizable predictive framework for AD.
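The model described in this abstract amounts to the following equation and its variation-of-parameters solution (a sketch consistent with the abstract; the symbols are assumptions: x(t) is the vector of regional tau burdens, L the connectome graph Laplacian, β a diffusivity constant, and s(t) the source term modeling local generation and clearance):

```latex
\frac{d\mathbf{x}(t)}{dt} = -\beta L\,\mathbf{x}(t) + \mathbf{s}(t),
\qquad
\mathbf{x}(t) = e^{-\beta L t}\,\mathbf{x}(0)
  + \int_0^{t} e^{-\beta L (t-\tau)}\,\mathbf{s}(\tau)\,d\tau .
```

For the linear and exponential source functionals considered by the authors, the integral admits the closed-form expressions the abstract refers to; with s ≡ 0 the model reduces to ordinary network diffusion.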


Subject(s)
Alzheimer Disease; Diffusion Tensor Imaging; Disease Progression; Models, Neurological; Nerve Net; Positron-Emission Tomography; tau Proteins/metabolism; Aged; Aged, 80 and over; Alzheimer Disease/diagnostic imaging; Alzheimer Disease/metabolism; Alzheimer Disease/pathology; Datasets as Topic; Female; Humans; Longitudinal Studies; Male; Nerve Net/diagnostic imaging; Nerve Net/metabolism; Nerve Net/pathology; Prognosis
10.
Phys Med Biol ; 66(6): 06RM01, 2021 03 12.
Article in English | MEDLINE | ID: mdl-33339012

ABSTRACT

Positron emission tomography (PET) plays an increasingly important role in research and clinical applications, catalysed by remarkable technical advances and a growing appreciation of the need for reliable, sensitive biomarkers of human function in health and disease. Over the last 30 years, a large amount of the physics and engineering effort in PET has been motivated by the dominant clinical application during that period, oncology. This has led to important developments such as PET/CT, whole-body PET, 3D PET, accelerated statistical image reconstruction, and time-of-flight PET. Despite impressive improvements in image quality as a result of these advances, the emphasis on static, semi-quantitative 'hot spot' imaging for oncologic applications has meant that the capability of PET to quantify biologically relevant parameters based on tracer kinetics has not been fully exploited. More recent advances, such as PET/MR and total-body PET, have opened up the ability to address a vast range of new research questions, from which a future expansion of applications and radiotracers appears highly likely. Many of these new applications and tracers will, at least initially, require quantitative analyses that more fully exploit the exquisite sensitivity of PET and the tracer principle on which it is based. It is also expected that they will require more sophisticated quantitative analysis methods than those that are currently available. At the same time, artificial intelligence is revolutionizing data analysis and impacting the relationship between the statistical quality of the acquired data and the information we can extract from the data. In this roadmap, leaders of the key sub-disciplines of the field identify the challenges and opportunities to be addressed over the next ten years that will enable PET to realise its full quantitative potential, initially in research laboratories and, ultimately, in clinical practice.


Subject(s)
Artificial Intelligence; Neoplasms/diagnostic imaging; Positron Emission Tomography Computed Tomography/methods; Positron Emission Tomography Computed Tomography/trends; Positron-Emission Tomography/methods; Positron-Emission Tomography/trends; History, 20th Century; History, 21st Century; Humans; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional; Kinetics; Medical Oncology/methods; Medical Oncology/trends; Positron Emission Tomography Computed Tomography/history; Prognosis; Radiopharmaceuticals; Systems Biology; Tomography, X-Ray Computed
11.
Med Image Comput Comput Assist Interv ; 12267: 418-427, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33263115

ABSTRACT

Tau tangles are a pathophysiological hallmark of Alzheimer's disease (AD) and exhibit a stereotypical pattern of spatiotemporal spread which has strong links to disease progression and cognitive decline. Preclinical evidence suggests that tau spread depends on neuronal connectivity rather than physical proximity between different brain regions. Here, we present a novel physics-informed geometric learning model for predicting tau buildup and spread that learns patterns directly from longitudinal tau imaging data while receiving guidance from governing physical principles. Implemented as a graph neural network with physics-based regularization in latent space, the model enables effective training with smaller data sizes. For training and validation of the model, we used longitudinal tau measures from positron emission tomography (PET) and structural connectivity graphs from diffusion tensor imaging (DTI) from the Harvard Aging Brain Study. The model led to higher peak signal-to-noise ratio and lower mean squared error levels than both an unregularized graph neural network and a differential equation solver. The method was validated using both two-timepoint and three-timepoint tau PET measures. The effectiveness of the approach was further confirmed by a cross-validation study.
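The physics-based regularization described in this abstract amounts to augmenting the data-fidelity loss with a penalty on the residual of the governing diffusion equation evaluated on the latent trajectory. A minimal pure-Python sketch under assumed names (the actual model is a graph neural network; here the latent states are plain vectors and the time derivative is a finite difference):

```python
def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def diffusion_residual(z_t, z_next, laplacian, beta, dt):
    """Finite-difference residual of the governing equation
    dz/dt = -beta * L z between two consecutive latent states."""
    n = len(z_t)
    res = []
    for i in range(n):
        dzdt = (z_next[i] - z_t[i]) / dt
        Lz = sum(laplacian[i][j] * z_t[j] for j in range(n))
        res.append(dzdt + beta * Lz)
    return res

def physics_informed_loss(pred, target, z_t, z_next, laplacian,
                          beta=0.5, dt=1.0, lam=0.1):
    """Data-fidelity term plus a weighted penalty on the physics residual."""
    r = diffusion_residual(z_t, z_next, laplacian, beta, dt)
    return mse(pred, target) + lam * sum(x * x for x in r) / len(r)
```

Penalizing the residual constrains the network to trajectories consistent with diffusion physics, which is what lets the model train effectively on the small longitudinal datasets the abstract mentions.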

12.
IEEE Trans Comput Imaging ; 6: 518-528, 2020.
Article in English | MEDLINE | ID: mdl-32055649

ABSTRACT

Positron emission tomography (PET) suffers from severe resolution limitations which reduce its quantitative accuracy. In this paper, we present a super-resolution (SR) imaging technique for PET based on convolutional neural networks (CNNs). To facilitate the resolution recovery process, we incorporate high-resolution (HR) anatomical information based on magnetic resonance (MR) imaging. We introduce the spatial location information of the input image patches as additional CNN inputs to accommodate the spatially-variant nature of the blur kernels in PET. We compared the performance of shallow (3-layer) and very deep (20-layer) CNNs with various combinations of the following inputs: low-resolution (LR) PET, radial locations, axial locations, and HR MR. To validate the CNN architectures, we performed both realistic simulation studies using the BrainWeb digital phantom and clinical studies using neuroimaging datasets. For both simulation and clinical studies, the LR PET images were based on the Siemens HR+ scanner. Two different scenarios were examined in simulation: one where the target HR image is the ground-truth phantom image and another where the target HR image is based on the Siemens HRRT scanner - a high-resolution dedicated brain PET scanner. The latter scenario was also examined using clinical neuroimaging datasets. A number of factors affected relative performance of the different CNN designs examined, including network depth, target image quality, and the resemblance between the target and anatomical images. In general, however, all deep CNNs outperformed classical penalized deconvolution and partial volume correction techniques by large margins both qualitatively (e.g., edge and contrast recovery) and quantitatively (as indicated by three metrics: peak signal-to-noise-ratio, structural similarity index, and contrast-to-noise ratio).
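Feeding the patch's spatial location to the CNN, as described in this abstract, can be done by appending constant-valued coordinate channels to the input; how the authors encode radial/axial position is a design detail not given here, so the sketch below is one standard assumption:

```python
def add_coordinate_channels(patch, radial, axial):
    """Stack two constant-valued coordinate channels onto a 2D intensity
    patch, producing a 3-channel CNN input.  The coordinate channels tell
    the network WHERE in the field of view the patch was extracted,
    letting it adapt to the spatially-variant PET blur kernel."""
    h, w = len(patch), len(patch[0])
    radial_ch = [[radial] * w for _ in range(h)]
    axial_ch = [[axial] * w for _ in range(h)]
    return [patch, radial_ch, axial_ch]

channels = add_coordinate_channels([[0.1, 0.2], [0.3, 0.4]],
                                   radial=0.75, axial=0.2)
```

Without such channels a convolutional network is translation-invariant by construction, which is exactly the wrong inductive bias when the blur kernel varies with position in the scanner bore.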

13.
Neural Netw ; 125: 83-91, 2020 May.
Article in English | MEDLINE | ID: mdl-32078963

ABSTRACT

The intrinsically low spatial resolution of positron emission tomography (PET) leads to image quality degradation and inaccurate image-based quantitation. Recently developed supervised super-resolution (SR) approaches are of great relevance to PET but require paired low- and high-resolution images for training, which are usually unavailable for clinical datasets. In this paper, we present a self-supervised SR (SSSR) technique for PET based on dual generative adversarial networks (GANs), which precludes the need for paired training data, ensuring wider applicability and easier adoption. The SSSR network receives as inputs a low-resolution PET image, a high-resolution anatomical magnetic resonance (MR) image, spatial information (axial and radial coordinates), and a high-dimensional feature set extracted from an auxiliary CNN which is separately trained in a supervised manner using paired simulation datasets. The network is trained using a loss function which includes two adversarial loss terms, a cycle consistency term, and a total variation penalty on the SR image. We validate the SSSR technique using a clinical neuroimaging dataset. We demonstrate that SSSR is promising in terms of image quality, peak signal-to-noise ratio, structural similarity index, contrast-to-noise ratio, and an additional no-reference metric developed specifically for SR image quality assessment. Comparisons with other SSSR variants suggest that its high performance is largely attributable to simulation guidance.
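Of the four loss terms this abstract lists, the total variation penalty is the simplest to sketch: for a 2D image it sums absolute differences between neighboring pixels, discouraging noisy high-frequency artifacts in the super-resolved output (a minimal anisotropic-TV sketch; the paper's exact discretization is not specified here):

```python
def total_variation(img):
    """Anisotropic total variation of a 2D image (list of rows): the sum
    of absolute intensity differences between horizontal and vertical
    neighbors.  Flat regions contribute zero; noise and spurious edges
    raise the penalty."""
    h, w = len(img), len(img[0])
    tv = 0.0
    for i in range(h):
        for j in range(w):
            if j + 1 < w:
                tv += abs(img[i][j + 1] - img[i][j])
            if i + 1 < h:
                tv += abs(img[i + 1][j] - img[i][j])
    return tv
```

In the full objective this term would be weighted against the adversarial and cycle-consistency terms, trading artifact suppression against sharpness.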


Subject(s)
Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Positron-Emission Tomography/methods; Magnetic Resonance Imaging/methods; Signal-To-Noise Ratio
14.
IEEE J Biomed Health Inform ; 24(6): 1805-1813, 2020 06.
Article in English | MEDLINE | ID: mdl-28026794

ABSTRACT

This study aims to develop an automatic classifier based on deep learning for exacerbation frequency in patients with chronic obstructive pulmonary disease (COPD). A three-layer deep belief network (DBN) with two hidden layers and one visible layer was employed to develop classification models, and the models' robustness to exacerbation was analyzed. Subjects from the COPDGene cohort were labeled with exacerbation frequency, defined as the number of exacerbation events per year. A total of 10,300 subjects with 361 features each were included in the analysis. After feature selection and parameter optimization, the proposed classification method achieved an accuracy of 91.99% in a ten-fold cross-validation experiment. The analysis of DBN weights showed that there was a good visual spatial relationship between the underlying critical features of different layers. Our findings show that the most sensitive features obtained from the DBN weights are consistent with the consensus shown by clinical rules and standards for COPD diagnostics. We, thus, demonstrate that DBN is a competitive tool for exacerbation risk assessment for patients suffering from COPD.


Subject(s)
Deep Learning; Pulmonary Disease, Chronic Obstructive; Algorithms; Cohort Studies; Disease Progression; Humans; Pulmonary Disease, Chronic Obstructive/classification; Pulmonary Disease, Chronic Obstructive/epidemiology; Pulmonary Disease, Chronic Obstructive/genetics; Pulmonary Disease, Chronic Obstructive/physiopathology; Sensitivity and Specificity; Support Vector Machine
15.
IEEE Trans Comput Imaging ; 5(4): 530-539, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31723575

ABSTRACT

The intrinsically limited spatial resolution of PET confounds image quantitation. This paper presents an image deblurring and super-resolution framework for PET using anatomical guidance provided by high-resolution MR images. The framework relies on image-domain post-processing of already-reconstructed PET images by means of spatially-variant deconvolution stabilized by an MR-based joint entropy penalty function. The method is validated through simulation studies based on the BrainWeb digital phantom, experimental studies based on the Hoffman phantom, and clinical neuroimaging studies pertaining to aging and Alzheimer's disease. The developed technique was compared with direct deconvolution and deconvolution stabilized by a quadratic difference penalty, a total variation penalty, and a Bowsher penalty. The BrainWeb simulation study showed improved image quality and quantitative accuracy measured by contrast-to-noise ratio, structural similarity index, root-mean-square error, and peak signal-to-noise ratio generated by this technique. The Hoffman phantom study indicated noticeable improvement in the structural similarity index (relative to the MR image) and gray-to-white contrast-to-noise ratio. Finally, clinical amyloid and tau imaging studies for Alzheimer's disease showed lowering of the coefficient of variation in several key brain regions associated with two target pathologies.

16.
Proc IEEE Int Symp Biomed Imaging ; 2019: 414-417, 2019 Apr.
Article in English | MEDLINE | ID: mdl-31327984

ABSTRACT

Graph convolutional neural networks (GCNNs) aim to extend the data representation and classification capabilities of convolutional neural networks, which are highly effective for signals defined on regular Euclidean domains, e.g. image and audio signals, to irregular, graph-structured data defined on non-Euclidean domains. Graph-theoretic tools that enable us to study the brain as a complex system are of great significance in brain connectivity studies. Particularly, in the context of Alzheimer's disease (AD), a neurodegenerative disorder associated with network dysfunction, graph-based tools are vital for disease classification and staging. Here, we implement and test a multi-class GCNN classifier for network-based classification of subjects on the AD spectrum into four categories: cognitively normal (CN), early mild cognitive impairment, late mild cognitive impairment, and AD. We train and validate the network using structural connectivity graphs obtained from diffusion tensor imaging data. Using receiver operating characteristic curves, we show that the GCNN classifier outperforms a support vector machine classifier by margins that depend on the disease category. Our findings indicate that the performance gap between the two methods increases with disease progression from CN to AD. We thus demonstrate that GCNN is a competitive tool for staging and classification of subjects on the AD spectrum.
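A single layer of the kind of graph convolution GCNNs are built from can be sketched in a few lines: node features are propagated over a normalized adjacency matrix (message passing) and then mixed by a learned weight matrix. The H' = ReLU(Â H W) propagation rule below is one common formulation, an assumption here since the paper's exact architecture is not given in the abstract:

```python
def matmul(A, B):
    """Dense matrix product of two lists-of-rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def gcn_layer(A_hat, H, W):
    """One graph-convolution layer, H' = ReLU(A_hat @ H @ W): each node's
    features become a weighted average over its graph neighborhood,
    then are linearly mixed and passed through a nonlinearity."""
    Z = matmul(matmul(A_hat, H), W)
    return [[max(0.0, v) for v in row] for row in Z]

# Two fully connected nodes, one-hot features, identity weights:
A_hat = [[0.5, 0.5], [0.5, 0.5]]
H = [[1.0, 0.0], [0.0, 1.0]]
W = [[1.0, 0.0], [0.0, 1.0]]
out = gcn_layer(A_hat, H, W)
```

Stacking several such layers and ending with a softmax over the four diagnostic categories yields a multi-class graph classifier of the kind described.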

17.
Inf Process Med Imaging ; 11492: 384-393, 2019.
Article in English | MEDLINE | ID: mdl-31156312

ABSTRACT

Tau tangles are a pathological hallmark of Alzheimer's disease (AD) with strong correlations existing between tau aggregation and cognitive decline. Studies in mouse models have shown that the characteristic patterns of tau spatial spread associated with AD progression are determined by neural connectivity rather than physical proximity between different brain regions. We present here a network diffusion model for tau aggregation based on longitudinal tau measures from positron emission tomography (PET) and structural connectivity graphs from diffusion tensor imaging (DTI). White matter fiber bundles reconstructed via tractography from the DTI data were used to compute normalized graph Laplacians which served as graph diffusion kernels for tau spread. By linearizing this model and using sparse source localization, we were able to identify distinct patterns of propagative and generative buildup of tau at a population level. A gradient descent approach was used to solve the sparsity-constrained optimization problem. Model fitting was performed on subjects from the Harvard Aging Brain Study cohort. The fitted model parameters include a scalar factor controlling the network-based tau spread and a network-independent seed vector representing seeding in different regions-of-interest. This parametric model was validated on an independent group of subjects from the same cohort. We were able to predict with reasonably high accuracy the tau buildup at a future time-point. The network diffusion model, therefore, successfully identifies two distinct mechanisms for tau buildup in the aging brain and offers a macroscopic perspective on tau spread.
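The normalized graph Laplacian used as the diffusion kernel in this abstract can be computed directly from the tractography-derived connectivity matrix. A pure-Python sketch of the standard symmetric normalization L = I - D^{-1/2} A D^{-1/2} (whether the authors use this or the random-walk normalization is not stated, so this form is an assumption):

```python
import math

def normalized_laplacian(A):
    """Symmetric normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}
    from a weighted adjacency matrix A (e.g., streamline counts between
    brain regions); D is the diagonal matrix of node degrees."""
    n = len(A)
    deg = [sum(row) for row in A]
    dis = [1.0 / math.sqrt(d) if d > 0 else 0.0 for d in deg]
    return [[(1.0 if i == j else 0.0) - dis[i] * A[i][j] * dis[j]
             for j in range(n)] for i in range(n)]

# Two regions connected by 2 streamlines:
L = normalized_laplacian([[0.0, 2.0], [2.0, 0.0]])
```

Normalization keeps the diffusion rate independent of absolute streamline counts, so heavily connected hub regions do not dominate the kernel purely by degree.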

18.
J Med Imaging (Bellingham) ; 6(2): 024004, 2019 Apr.
Article in English | MEDLINE | ID: mdl-31065568

ABSTRACT

Positron emission tomography (PET) imaging of the lungs is confounded by respiratory motion-induced blurring artifacts that degrade quantitative accuracy. Gating and motion-compensated image reconstruction are frequently used to correct these motion artifacts in PET. In the absence of voxel-by-voxel deformation measures, surrogate signals from external markers are used to track internal motion and generate gated PET images. The objective of our work is to develop a group-level parcellation framework for the lungs to guide the placement of markers depending on the location of the internal target region. We present a data-driven framework based on higher-order singular value decomposition (HOSVD) of deformation tensors that enables identification of synchronous areas inside the torso and on the skin surface. Four-dimensional (4-D) magnetic resonance (MR) imaging based on a specialized radial pulse sequence with a one-dimensional slice-projection navigator was used for motion capture under free-breathing conditions. The deformation tensors were computed by nonrigidly registering the gated MR images. Group-level motion signatures obtained via HOSVD were used to cluster the voxels both inside the volume and on the surface. To characterize the parcellation result, we computed correlation measures across the different regions of interest (ROIs). To assess the robustness of the parcellation technique, leave-one-out cross-validation was performed over the subject cohort, and the dependence of the result on varying numbers of gates and singular value thresholds was examined. Overall, the parcellation results were largely consistent across these test cases with Jaccard indices reflecting high degrees of overlap. Finally, a PET simulation study was performed which showed that, depending on the location of the lesion, the selection of a synchronous ROI may lead to noticeable gains in the recovery coefficient. 
Accurate quantitative interpretation of PET images is important for lung cancer management. Therefore, a guided motion monitoring approach is of utmost importance in the context of pulmonary PET imaging.

19.
Phys Med Biol ; 63(16): 165011, 2018 08 14.
Article in English | MEDLINE | ID: mdl-30040073

ABSTRACT

Small animal positron emission tomography (PET) imaging often requires high resolution (∼few hundred microns) to enable accurate quantitation in small structures such as animal brains. Recently, we have developed a prototype ultrahigh resolution depth-of-interaction (DOI) PET system that uses CdZnTe detectors with a detector pixel size of 350 µm and eight DOI layers with a 250 µm depth resolution. Due to the large number of line-of-response (LOR) combinations of DOIs, the system matrix for reconstruction is 64 times larger than that without DOI. While a high resolution virtual ring geometry can be employed to simplify the system matrix and create a sinogram, the LORs in such a sinogram tend to be sparse and irregular, leading to potential degradation of the reconstructed image quality. In this paper, we propose a novel high resolution sinogram rebinning method in which a uniform sub-sampling DOI strategy is employed. However, even with the high resolution rebinning strategy, the reconstructed image tends to be very noisy due to insufficient photon counts in many high resolution sinogram pixels. To reduce noise effects, we developed a penalized maximum likelihood reconstruction framework with the Poisson log-likelihood and a non-convex total variation penalty. Here, an ordered subsets separable quadratic surrogate and alternating direction method of multipliers are utilized to solve the optimization. To evaluate the performance of the proposed sub-sampling method and the penalized maximum likelihood reconstruction technique, we perform simulations and preliminary point source experiments. By comparing the reconstructed images and profiles based on sinograms without DOI, with rebinned DOI and with sub-sampled DOI, we demonstrate that the proposed method with sub-sampled DOIs can significantly improve the image quality with lower dose and yield a high resolution of <300 µm.


Subject(s)
Algorithms , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Positron-Emission Tomography/methods , Animals
20.
IEEE Trans Med Imaging ; 37(6): 1478-1487, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29870375

ABSTRACT

Motivated by the great potential of deep learning in medical imaging, we propose an iterative positron emission tomography reconstruction framework that uses a deep-learning-based prior. We adopted the denoising convolutional neural network (DnCNN) and trained it using full-dose images as the ground truth and low-dose images, reconstructed from data downsampled by Poisson thinning, as the input. Since most published deep networks are trained at a predetermined noise level, the mismatch between training and testing noise levels is a major obstacle to their use as a generalized prior. In particular, the noise level changes significantly at each iteration, which can degrade the overall performance of iterative reconstruction. Because this issue has received little study, we conducted simulations to evaluate the performance degradation under various noise conditions. Our findings indicate that DnCNN introduces additional bias when the noise levels of the training and testing data differ. To address this issue, we propose a local linear fitting function combined with the DnCNN prior that improves image quality by preventing this unwanted bias. We demonstrate that the resulting method is robust to noise-level mismatch even though the network is trained at a predetermined noise level. Through bias and standard deviation studies in both simulations and clinical experiments, we show that the proposed method outperforms conventional methods based on total variation and non-local means penalties, improving the reconstruction both quantitatively and qualitatively.
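The local linear fitting idea, constraining the denoiser output to be a locally affine function of the noisy input so the prior cannot introduce large bias, is conceptually close to guided filtering. The sketch below shows one plausible 1D version: the denoiser output `d` is fit as `a*x + b` over sliding windows of the input `x`. The window radius `r`, the regularizer `eps`, and the moving-average implementation are illustrative assumptions; the paper's actual fitting function may differ.

```python
import numpy as np

def box_mean(x, r):
    # 1D moving average with window 2r+1, edge-padded to preserve length.
    k = 2 * r + 1
    xp = np.pad(x, r, mode='edge')
    c = np.cumsum(np.concatenate(([0.0], xp)))
    return (c[k:] - c[:-k]) / k

def local_linear_fit(x, d, r=3, eps=1e-3):
    """Fit d ~= a*x + b over local windows and return the fitted image.

    x: noisy input image, d: denoiser (e.g., DnCNN) output. Anchoring the
    output to an affine map of x limits bias from a noise-level mismatch."""
    mx, md = box_mean(x, r), box_mean(d, r)
    cov = box_mean(x * d, r) - mx * md   # local covariance of x and d
    var = box_mean(x * x, r) - mx * mx   # local variance of x
    a = cov / (var + eps)
    b = md - a * mx
    # Average the per-window coefficients before applying them pointwise.
    return box_mean(a, r) * x + box_mean(b, r)
```

If the denoiser output is already an affine function of the input, the fit reproduces it; otherwise the output is pulled back toward the input, which is the bias-prevention behavior the abstract describes.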


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/methods , Algorithms , Brain/diagnostic imaging , Databases, Factual , Humans , Linear Models , Phantoms, Imaging