ABSTRACT
BACKGROUND: CT is the standard imaging technique for evaluating the pediatric sinuses. Given the potential risks of radiation exposure in children, it is important to reduce pediatric CT dose while maintaining image quality. OBJECTIVE: To study the utility of spectral shaping with tin filtration for improving dose efficiency in pediatric sinus CT exams. MATERIALS AND METHODS: A head phantom was scanned on a commercial dual-source CT scanner using a conventional protocol (120 kV) and a proposed 100 kV protocol with a 0.4-mm tin filter (Sn100 kV) for comparison. Entrance point dose (EPD) in the eye and parotid gland regions was measured with an ion chamber. Sixty pediatric sinus CT exams (33 acquired at 120 kV, 27 acquired at Sn100 kV) were retrospectively collected. All patient images were objectively measured for image quality and blindly reviewed by 4 pediatric neuroradiologists for overall noise, overall diagnostic quality, and delineation of 4 critical paranasal sinus structures, using a 5-point Likert scale. RESULTS: Phantom CTDIvol was 4.35 mGy for Sn100 kV versus 5.73 mGy for 120 kV at an identical noise level. EPD to sensitive organs decreased with Sn100 kV (e.g., right eye EPD 3.83±0.42 mGy) compared with 120 kV (5.26±0.24 mGy). Patients in the 2 protocol groups were matched for age and weight (unpaired t-test, P>0.05). Patient CTDIvol for Sn100 kV (4.45±0.47 mGy) was significantly lower than for 120 kV (5.56±0.48 mGy; unpaired t-test, P<0.001). No statistically significant difference was found between the two groups for any subjective reader score (Wilcoxon test, P>0.05), indicating that the proposed spectral shaping provides equivalent diagnostic image quality. CONCLUSION: Phantom and patient results demonstrate that spectral shaping can significantly reduce radiation dose in non-contrast pediatric sinus CT without compromising diagnostic quality.
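The reported group comparison can be re-checked directly from the summary statistics given in the abstract. A minimal sketch (ours, not the authors' analysis code) using SciPy's t-test from summary statistics:

```python
# Re-run the unpaired t-test on patient CTDIvol using only the
# means, SDs, and group sizes reported in the abstract.
from scipy.stats import ttest_ind_from_stats

# 120 kV group: 5.56 ± 0.48 mGy (n=33); Sn100 kV group: 4.45 ± 0.47 mGy (n=27)
t, p = ttest_ind_from_stats(mean1=5.56, std1=0.48, nobs1=33,
                            mean2=4.45, std2=0.47, nobs2=27)
print(f"t = {t:.2f}, p = {p:.2e}")  # p is well below 0.001, consistent with the abstract
```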
Subjects
Tin, X-Ray Computed Tomography, Humans, Child, X-Ray Computed Tomography/methods, Retrospective Studies, Drug Tapering, Radiation Dosage
ABSTRACT
To explore the feasibility of an automatic, machine learning-based quality control system for diagnostic radiography, we examined the performance of a convolutional neural network (CNN) algorithm for identifying radiographic (X-ray) views at different levels of granularity in a retrospective, HIPAA-compliant, IRB-approved study of 15,046 radiographic images acquired between 2013 and 2018 from nine clinical sites affiliated with our institution. Images were labeled at four classification levels: level 1 (anatomy, 25 classes), level 2 (laterality, 41 classes), level 3 (projection, 108 classes), and level 4 (detailed, 143 classes). An Inception V3 model pre-trained on the ImageNet dataset was trained with transfer learning to classify images at all levels. Sensitivity and positive predictive value were reported for each class, and overall accuracy was reported for each level. Accuracy was also reported when "reasonable errors" were allowed. The overall accuracy was 0.96, 0.93, 0.90, and 0.86 at levels 1, 2, 3, and 4, respectively, increasing to 0.99, 0.97, 0.94, and 0.88 when "reasonable errors" were allowed. The machine learning algorithm thus identified radiographic views with acceptable accuracy when "reasonable errors" were allowed. Our findings demonstrate the feasibility of building a quality control program based on machine learning to identify radiographic views with acceptable accuracy at the lower levels, which could be applied in a clinical setting.
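The per-class sensitivity and positive predictive value reported here follow directly from a confusion matrix. A minimal sketch (illustrative, not the study's code) with a toy 3-class example:

```python
import numpy as np

def per_class_metrics(y_true, y_pred, n_classes):
    """Sensitivity (recall), PPV (precision) per class, and overall accuracy."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                                      # rows: truth, cols: prediction
    sens = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)     # TP / (TP + FN)
    ppv = np.diag(cm) / np.maximum(cm.sum(axis=0), 1)      # TP / (TP + FP)
    return sens, ppv, np.trace(cm) / cm.sum()

# Toy labels for 3 classes (the real study used 25-143 classes per level)
sens, ppv, acc = per_class_metrics([0, 0, 1, 2, 2], [0, 1, 1, 2, 2], 3)
```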
Subjects
Deep Learning, Algorithms, Humans, Machine Learning, Neural Networks (Computer), Retrospective Studies
ABSTRACT
OBJECTIVE: To determine the feasibility of using a machine learning algorithm to screen for large vessel occlusions (LVO) in the Emergency Department (ED). MATERIALS AND METHODS: A retrospective cohort of consecutive ED stroke alerts at a large comprehensive stroke center was analyzed. The primary outcome was a diagnosis of LVO at discharge. Components of the National Institutes of Health Stroke Scale (NIHSS) were used in various clinical methods and machine learning algorithms to predict LVO, and the results were compared with the baseline method (aggregate NIHSS score with a threshold of 6). The area under the curve (AUC) was used to measure the overall performance of the models. Bootstrapping (n = 1000) was applied for the statistical analysis. RESULTS: Of 1133 total patients, 67 were diagnosed with LVO. A Gaussian Process (GP) algorithm significantly outperformed the other methods, including the baseline. The AUC for the GP algorithm was 0.874 ± 0.025, compared with 0.819 ± 0.024 for the simple aggregate NIHSS score. A dual-stage GP algorithm is proposed that offers flexible threshold settings for different patient populations; it achieved an overall sensitivity of 0.903 and specificity of 0.626, with a sensitivity of 0.99 for high-risk patients (defined as initial NIHSS score > 6). CONCLUSION: Machine learning using a Gaussian Process algorithm outperformed a clinical cutoff based on the aggregate NIHSS score for LVO diagnosis. Future studies should prospectively explore machine learning-based interventions for LVO screening in the emergent setting.
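The modeling pipeline can be sketched with scikit-learn. The features and labels below are synthetic stand-ins for the NIHSS components and LVO outcomes, and the default kernel is our assumption, not the study's configuration:

```python
# Sketch: Gaussian Process classifier + bootstrapped AUC (synthetic data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                       # 4 hypothetical NIHSS components
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = GaussianProcessClassifier(random_state=0).fit(X[:150], y[:150])
scores = clf.predict_proba(X[150:])[:, 1]

# Bootstrap the AUC on the held-out set (n = 1000 resamples, as in the abstract)
aucs = []
for _ in range(1000):
    idx = rng.integers(0, 50, size=50)
    if len(set(y[150:][idx])) == 2:                 # AUC needs both classes present
        aucs.append(roc_auc_score(y[150:][idx], scores[idx]))
print(f"AUC = {np.mean(aucs):.3f} ± {np.std(aucs):.3f}")
```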
Subjects
Cerebrovascular Disorders/diagnosis, Disability Evaluation, Emergency Service, Hospital, Machine Learning, Cerebrovascular Disorders/physiopathology, Cerebrovascular Disorders/therapy, Feasibility Studies, Female, Functional Status, Humans, Male, Middle Aged, Predictive Value of Tests, Prognosis, Retrospective Studies
ABSTRACT
Low-dose computed tomography (CT) lung cancer screening is recommended by the US Preventive Services Task Force for populations at high risk of lung cancer. In this study, we investigated an important factor affecting CT dose in this exam: the scan length. A neural network model based on the "UNET" framework was established to segment the lung region in CT scout images. It was trained initially with 247 chest X-ray images and then with 40 CT scout images. The mean Intersection over Union (IOU) and Dice coefficient were 0.954 and 0.976, respectively. Lung scan boundaries determined from this segmentation were compared with boundaries marked by an expert for 150 validation images, resulting in an average difference of 4.7%. Seven hundred seventy low-dose CT lung screening exams were then retrospectively analyzed with the validated model. The average "desired" scan length was 252 mm, with a standard deviation of 28 mm. The average "over-range" was 58.5 mm, or 24%: 17 mm on average at the upper (superior) boundary and 41 mm on average at the lower (inferior) boundary. Further analysis showed that the extent of over-range was independent of acquisition date, acquisition time, acquisition station, and patient age, but depended on the technologist and patient weight. We conclude that this machine learning method can effectively support quality control of scan length in low-dose CT screening, enabling the elimination of unnecessary patient dose.
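The IOU and Dice metrics used to validate the segmentation have standard definitions; a minimal sketch (ours, not the paper's code) for binary masks:

```python
import numpy as np

def iou_dice(pred, target):
    """IoU and Dice coefficient for two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / union if union else 1.0
    total = pred.sum() + target.sum()
    dice = 2 * inter / total if total else 1.0
    return iou, dice

# Toy 4x4 masks: predicted lung region vs. a slightly larger "expert" region
a = np.zeros((4, 4)); a[1:3, 1:3] = 1          # 4 pixels
b = np.zeros((4, 4)); b[1:4, 1:3] = 1          # 6 pixels, overlap = 4
iou, dice = iou_dice(a, b)                     # IoU = 4/6, Dice = 8/10
```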
Subjects
Lung Neoplasms/diagnostic imaging, Machine Learning, Radiation Dosage, Radiographic Image Interpretation, Computer-Assisted/methods, X-Ray Computed Tomography/methods, Humans, Lung/diagnostic imaging, Middle Aged
ABSTRACT
OBJECTIVES: The purpose of this study is to determine whether a universal 120-kV ultra-high-pitch protocol with virtual monoenergetic images (VMIs) on a photon-counting computed tomography (PCCT) system can provide sufficient image quality for pediatric abdominal imaging, regardless of patient size, compared with size-dependent kV protocols using dual-source flash mode on an energy-integrating CT (EICT) system. MATERIALS AND METHODS: One solid water insert and 3 iodine inserts (2, 5, 10 mg I/mL) were attached or inserted into phantoms of variable sizes, simulating the abdomens of newborn, 5-year-old, 10-year-old, and adult-sized pediatric patients. Each phantom setting was scanned on an EICT using clinical size-specific kV dual-source protocols with a pitch of 3.0. The scans were performed with fixed scanning parameters, and the full-dose CTDIvol values were 0.30, 0.71, 1.05, and 7.40 mGy for newborn through adult size, respectively. Half-dose scans were also acquired on the EICT. Each phantom was then scanned on a PCCT system (Siemens Alpha) using a universal 120-kV protocol at the same full and half doses determined on the EICT scanner, with all other parameters matched to the EICT settings. Virtual monoenergetic images were generated from the PCCT scans between 40 and 80 keV at 5-keV intervals. Image quality metrics were compared between PCCT VMIs and EICT, including image noise (standard deviation in solid water), contrast-to-noise ratio (CNR) (measured at the iodine inserts with solid water as background), and noise power spectrum (measured in uniform phantom regions). RESULTS: Noise in the PCCT VMI at 70 keV (7.0 ± 0.6 HU for newborn, 14.7 ± 1.6 HU for adult) was comparable (P > 0.05, t-test) or significantly lower (P < 0.05, t-test) compared with EICT (7.8 ± 0.8 HU for newborn, 15.3 ± 1.5 HU for adult).
Iodine CNR in the PCCT VMI at 50 keV (50.8 ± 8.4 for newborn, 27.3 ± 2.8 for adult) was comparable to (P > 0.05, t-test) or significantly higher than (P < 0.05, t-test) the corresponding EICT measurements (57.5 ± 6.7 for newborn, 13.8 ± 1.7 for adult). The noise power spectrum curve shape of the PCCT VMIs was similar to EICT, although the PCCT VMIs exhibited higher noise at low keV levels. CONCLUSIONS: The universal PCCT 120-kV ultra-high-pitch protocol with postprocessed VMIs demonstrated equivalent or improved noise (at 70 keV) and iodine CNR (at 50 keV) for pediatric abdominal CT, compared with size-specific kV imaging on the EICT.
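The CNR comparison rests on a simple region-of-interest computation. A minimal sketch on synthetic HU values, using one common CNR definition (the paper's exact definition may differ):

```python
import numpy as np

def cnr(roi_iodine, roi_background):
    """CNR: absolute mean difference over background noise (one common definition)."""
    return abs(roi_iodine.mean() - roi_background.mean()) / roi_background.std(ddof=1)

rng = np.random.default_rng(1)
iodine = rng.normal(300.0, 10.0, 500)   # synthetic iodine-insert HU samples
water = rng.normal(0.0, 10.0, 500)      # synthetic solid-water background HU samples
value = cnr(iodine, water)              # roughly 300 / 10 = 30 for this toy case
```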
Subjects
Phantoms, Imaging, Radiation Dosage, Radiography, Abdominal, X-Ray Computed Tomography, Humans, X-Ray Computed Tomography/methods, X-Ray Computed Tomography/instrumentation, Infant, Newborn, Child, Radiography, Abdominal/methods, Child, Preschool, Photons, Male
ABSTRACT
PURPOSE: A comprehensive evaluation of kV selection on photon-counting computed tomography (PCCT) has yet to be performed. The aim of this study is to evaluate and determine the optimal kV options for variable pediatric body sizes on a PCCT unit. MATERIALS AND METHODS: Four phantoms of variable sizes were used to represent the abdomens of newborn, 5-year-old, 10-year-old, and adult-sized pediatric patients. One solid water insert and 4 solid iodine inserts with known concentrations (2, 5, 10, and 15 mg I/mL) were placed in the phantoms. Each phantom setting was scanned on a PCCT system (Siemens Alpha) with 4 kV options (70 and 90 kV under Quantum mode, 120 and 140 kV under QuantumPlus mode) and the clinical dual-source (3.0 pitch) protocol. For each phantom setting, radiation dose (CTDIvol) was determined by clinical dose settings and matched across all kV acquisitions. Images at 60% of the clinical dose were also acquired. Reconstruction was matched across all acquisitions, using the Qr40 kernel and QIR level 3. Virtual monoenergetic images (VMIs) between 40 and 80 keV at 10-keV intervals were generated on the scanner. Low-energy and high-energy images were reconstructed from each scan and subsequently used to generate an iodine map (IM) using an image-based 2-material decomposition method. Image noise in the VMIs from each kV acquisition was calculated and compared across kV options. The absolute percent error (APE) of iodine CT number accuracy in the VMIs was calculated and compared. Root mean square error (RMSE) and bias of iodine quantification from the IMs were compared across kV options. RESULTS: At the newborn size and 50 keV VMI, noise was lower for the low-kV acquisitions (70 kV: 10.5 HU; 90 kV: 10.4 HU) than for the high-kV acquisitions (120 kV: 13.8 HU; 140 kV: 13.9 HU). At the newborn size and 70 keV VMI, image noise was comparable across kV options (9.4 HU for 70 kV, 8.9 HU for 90 kV, 9.7 HU for 120 kV, 10.2 HU for 140 kV).
For APE of the VMIs, high kV (120 or 140 kV) performed better overall than low kV (70 or 90 kV). At the 5-year-old size, the APE of 90 kV (median: 3.6%) was significantly higher (P < 0.001, Kruskal-Wallis rank sum test with Bonferroni correction) than that of 140 kV (median: 1.6%). At the adult size, the APE of 70 kV (median: 18.0%) was significantly higher (P < 0.0001, Kruskal-Wallis rank sum test with Bonferroni correction) than that of 120 kV (median: 1.4%) or 140 kV (median: 0.8%). The high kV options also demonstrated lower RMSE and bias than the low kV options across all controlled conditions. At the 10-year-old size, the RMSE and bias at 120 kV were 1.4 and 0.2 mg I/mL, versus 1.9 and 0.8 mg I/mL at 70 kV. CONCLUSIONS: The high kV options (120 or 140 kV) on the PCCT unit demonstrated overall better performance than the low kV options (70 or 90 kV) in terms of VMI and IM image quality. Our results support the use of high kV for general body imaging on the PCCT.
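The three accuracy metrics (APE, RMSE, bias) are standard; a minimal sketch (ours, with toy measurements, not the study's data):

```python
import numpy as np

def ape(measured, nominal):
    """Absolute percent error, e.g., of iodine CT numbers against nominal values."""
    return 100 * np.abs(measured - nominal) / np.abs(nominal)

def rmse_bias(measured_mg, true_mg):
    """RMSE and bias of iodine quantification (mg I/mL)."""
    err = measured_mg - true_mg
    return np.sqrt(np.mean(err**2)), np.mean(err)

# Toy iodine quantification for the 4 insert concentrations (2, 5, 10, 15 mg I/mL)
true_c = np.array([2.0, 5.0, 10.0, 15.0])
meas_c = np.array([2.3, 5.1, 9.6, 15.4])     # hypothetical measured values
rmse, bias = rmse_bias(meas_c, true_c)
```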
Subjects
Body Size, Obesity/diagnostic imaging, Obesity/physiopathology, Radiation Dosage, Radiation Exposure/analysis, Radiography, Evidence-Based Medicine, Fluoroscopy, Humans, Risk Assessment, X-Ray Computed Tomography, X-Ray Film
ABSTRACT
BACKGROUND AND OBJECTIVE: Computed tomography (CT) has become an important clinical imaging modality, as well as the leading source of radiation dose from medical imaging procedures. Modern CT exams usually begin with two quick orthogonal localization scans, which are used for patient positioning and for defining the diagnostic scan parameters. These localization scans contribute to the patient dose but are not used for diagnostic purposes. In this study, we investigate the possibility of using deep learning models to reconstruct one localization scan image from the other, thereby reducing patient dose and simplifying the clinical workflow. METHODS: We propose a modified encoder-decoder network and a scaled mixture loss function designed specifically for this task. In this study, 12,487 clinical abdominal exams were retrieved from a clinical medical imaging storage system and randomly split into training, validation, and test sets in a 7:1:2 ratio. Reconstructed images were compared with the ground truth in terms of location prediction error, profile prediction error, and attenuation prediction error. RESULTS: The average location, profile, and attenuation errors were 1.02±3.37 mm, 4.43±2.02%, and 6.2±2.94% for the lateral prediction, and 6.46±6.43 mm, 3.9±2.32%, and 7.12±3.54% for the AP prediction, respectively. CONCLUSIONS: We conclude that although the reconstructed abdominal CT localization images may lack some detail in the internal organ structures, they can be used effectively for tube current modulation calculation and patient positioning, reducing radiation dose and scan time in clinical CT exams.
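The 7:1:2 random split described above can be sketched as follows (a minimal illustration; the seed and indexing scheme are our assumptions):

```python
import numpy as np

def split_indices(n, seed=0):
    """Random 7:1:2 train/validation/test split over n exams."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train, n_val = int(0.7 * n), int(0.1 * n)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

train_idx, val_idx, test_idx = split_indices(12487)  # 12,487 exams as in the abstract
```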
Subjects
Deep Learning, Abdomen/diagnostic imaging, Humans, Image Processing, Computer-Assisted, Radiation Dosage, X-Ray Computed Tomography
ABSTRACT
Purpose: To train and test a machine learning model that automatically measures mid-thigh muscle cross-sectional area (CSA), providing rapid estimation of appendicular lean mass (ALM) and predicting knee extensor torque in obese adults. Methods: Obese adults [body mass index (BMI) = 30-40 kg/m2, age = 30-50 years] were enrolled in this study. Participants received full-body dual-energy X-ray absorptiometry (DXA) and mid-thigh MRI, and completed knee extensor and flexor torque assessments via isokinetic dynamometer. Manual segmentation of mid-thigh CSA was completed for all MRI scans. A convolutional neural network (CNN) was created from the manual segmentations to quantify mid-thigh CSA automatically. Relationships were established between the automated CNN values and the manual CSA segmentation, ALM via DXA, and knee extensor and flexor torque. Results: A total of 47 obese patients were enrolled. Agreement between the CNN-automated measures and manual segmentation of mid-thigh CSA was high (>0.90). Automated measures of mid-thigh CSA were strongly related to leg lean mass (r = 0.86, p < 0.001) and ALM (r = 0.87, p < 0.001). Additionally, mid-thigh CSA was strongly related to knee extensor strength (r = 0.76, p < 0.001) and moderately related to knee flexor strength (r = 0.48, p = 0.002). Conclusion: CNN-measured mid-thigh CSA was accurate compared with the manually segmented values. These values were strongly predictive of clinical measures of ALM and knee extensor torque. Mid-thigh MRI may be used to accurately estimate clinical measures of lean mass and function in obese adults.
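The correlations reported here are Pearson r values; a minimal sketch on synthetic CSA/lean-mass data (illustrative only, not the study's measurements):

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic stand-ins: 47 subjects, CSA in cm^2, lean mass linearly related + noise
rng = np.random.default_rng(2)
csa = rng.normal(180.0, 25.0, 47)
alm = 0.12 * csa + rng.normal(0.0, 1.5, 47)

r, p = pearsonr(csa, alm)   # strong positive correlation by construction
```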
ABSTRACT
Split-blade diffusion-weighted periodically rotated overlapping parallel lines with enhanced reconstruction (DW-PROPELLER) was proposed to address the issues associated with diffusion-weighted echo-planar imaging, such as geometric distortion and difficulty achieving high-resolution imaging. The major drawbacks of DW-PROPELLER are its high SAR (especially at 3T) and its violation of the Carr-Purcell-Meiboom-Gill condition, which lead to long scan times and narrow blades. Parallel imaging can reduce scan time and increase blade width; however, it is very challenging to apply standard k-space-based techniques such as GeneRalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) to split-blade DW-PROPELLER because of its narrow blades. In this work, a new calibration scheme is proposed for a k-space-based parallel imaging method that requires no additional calibration data, resulting in a wider, more stable blade. In vivo results show that this technique is very promising.
Subjects
Algorithms, Brain/anatomy & histology, Diffusion Magnetic Resonance Imaging/methods, Echo-Planar Imaging/methods, Image Interpretation, Computer-Assisted/methods, Calibration, Diffusion Magnetic Resonance Imaging/standards, Echo-Planar Imaging/standards, Humans, Image Enhancement/methods, Image Enhancement/standards, Image Interpretation, Computer-Assisted/standards, Reproducibility of Results, Sensitivity and Specificity, United States
ABSTRACT
Diffusion-weighted imaging (DWI) has shown great benefits in clinical MR exams. However, current DWI techniques suffer from sensitivity to distortion, long scan times, or both. Diffusion-weighted echo-planar imaging (EPI) is fast but suffers from severe geometric distortion. Periodically rotated overlapping parallel lines with enhanced reconstruction diffusion-weighted imaging (PROPELLER DWI) is free of geometric distortion, but its scan time is usually long and it imposes a high specific absorption rate (SAR), especially at high fields. TurboPROP was proposed to accelerate the scan by incorporating signal from gradient echoes, but off-resonance artifacts from the gradient echoes can still degrade image quality. In this study, a new method called X-PROP is presented. Like TurboPROP, it uses gradient echoes to reduce scan time. By separating the gradient and spin echoes into individual blades and removing the off-resonance phase, X-PROP minimizes off-resonance artifacts. Dedicated reconstruction steps are applied to these blades to correct for motion artifacts. In vivo results show its advantages over EPI, PROPELLER DWI, and TurboPROP.
Subjects
Algorithms, Artifacts, Diffusion Magnetic Resonance Imaging/methods, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Information Storage and Retrieval/methods, Magnetic Resonance Imaging/methods, Reproducibility of Results, Sensitivity and Specificity
ABSTRACT
PURPOSE: In partially parallel imaging, most k-space-based reconstruction algorithms such as GRAPPA adopt a single finite-size kernel to approximate the true relationship between sampled and nonsampled signals. However, the estimation of this kernel from k-space signals is imperfect, and the authors investigate methods for dealing with local variation of k-space signals. METHODS: To model the nonstationarity of kernel weights, in a manner similar to spatially adaptive regularization, the authors fit a set of linear functions using concepts from geographically weighted regression, a methodology used in geophysical analysis. Instead of reconstructing with a single set of kernel weights, the authors use multiple sets; each missing signal is reconstructed with a kernel-weight set determined by k-space clustering. Simulated and acquired MR data with several different image contents and acquisition schemes, including MR tagging, were tested. A perceptual difference model (Case-PDM) was used to quantitatively evaluate the quality of over 1000 test images and to optimize the parameters of the algorithm. RESULTS: A MOdeling Non-stationarity of KErnel wEightS ("MONKEES") reconstruction with two sets of kernel weights gave significantly better image quality than the original GRAPPA in all test images. Using more sets further improved image quality, but with diminishing returns. As a rule of thumb, at least two sets of kernel weights, one from low-frequency and the other from high-frequency k-space, should be used. CONCLUSIONS: The authors conclude that MONKEES can significantly and robustly improve image quality in parallel MR imaging, particularly cardiac imaging.
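The core idea — separate least-squares kernel fits for different k-space regions instead of one global kernel — can be sketched abstractly. The linear systems below are random stand-ins for GRAPPA calibration equations, not real k-space data:

```python
import numpy as np

# Two "regions" of calibration equations with different true kernel weights,
# mimicking nonstationarity between low- and high-frequency k-space.
rng = np.random.default_rng(3)
A_low, A_high = rng.normal(size=(100, 6)), rng.normal(size=(100, 6))
w_true_low, w_true_high = rng.normal(size=6), rng.normal(size=6)
y_low = A_low @ w_true_low + 0.01 * rng.normal(size=100)
y_high = A_high @ w_true_high + 0.01 * rng.normal(size=100)

# Fit one kernel-weight set per region (a single global fit would average them)
w_low, *_ = np.linalg.lstsq(A_low, y_low, rcond=None)
w_high, *_ = np.linalg.lstsq(A_high, y_high, rcond=None)
```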
Subjects
Algorithms, Image Processing, Computer-Assisted/statistics & numerical data, Magnetic Resonance Imaging/statistics & numerical data, Brain/anatomy & histology, Cluster Analysis, Humans, Models, Statistical, Phantoms, Imaging, Regression Analysis, Reproducibility of Results
ABSTRACT
Suppression of the fat signal in MRI is very important for many clinical applications. Multi-point water-fat separation methods, such as IDEAL (Iterative Decomposition of water and fat with Echo Asymmetry and Least-squares estimation), can robustly separate water and fat signals but inevitably increase scan time, making the separated images more susceptible to patient motion. The PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) and Turboprop techniques offer an effective approach to correcting motion artifacts. By combining these techniques, we demonstrate that the new TP-IDEAL method provides reliable water-fat separation with robust motion correction. The Turboprop sequence was modified to acquire the source images, and the motion correction algorithms were adjusted to ensure registration between the different echo images. Theoretical calculations were performed to predict the optimal shift and spacing of the gradient echoes. Phantom images were acquired, and the results were compared with regular FSE-IDEAL. Both T1- and T2-weighted images of the human brain were used to demonstrate the effectiveness of the motion correction. TP-IDEAL images were also acquired for the pelvis, knee, and foot, showing the great potential of this technique for general clinical applications.
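At its core, multi-point water-fat separation solves a small linear system per voxel. A heavily simplified single-voxel sketch, assuming the field-map phase has already been demodulated (the full IDEAL method estimates it iteratively) and a nominal fat chemical-shift frequency:

```python
import numpy as np

f_fat = -440.0                                   # Hz, approximate fat shift at 3T (assumption)
echo_times = np.array([1.0e-3, 1.8e-3, 2.6e-3])  # s, three hypothetical echo times

# Signal model per voxel: s(t_n) = W + F * exp(i * 2*pi * f_fat * t_n)
A = np.stack([np.ones(3), np.exp(2j * np.pi * f_fat * echo_times)], axis=1)

water_true, fat_true = 0.8, 0.2                  # toy water/fat fractions
signal = A @ np.array([water_true, fat_true])    # noiseless synthetic echoes

est, *_ = np.linalg.lstsq(A, signal, rcond=None) # least-squares water/fat estimate
water_hat, fat_hat = est.real
```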
Subjects
Adipose Tissue/anatomy & histology, Brain/anatomy & histology, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Signal Processing, Computer-Assisted, Water, Algorithms, Artifacts, Humans, Motion, Reproducibility of Results, Sensitivity and Specificity
ABSTRACT
The authors used a perceptual difference model (Case-PDM) to quantitatively evaluate the image quality of the thousands of test images that can be created when optimizing fast magnetic resonance (MR) imaging strategies and reconstruction techniques. In this validation study, they compared human evaluation of MR images from multiple organs and multiple image reconstruction algorithms to Case-PDM and similar models. Case-PDM compared very favorably to human observers in double-stimulus continuous-quality scale and functional measurement theory studies over a large range of image quality. The Case-PDM threshold for nonperceptible differences in a 2-alternative forced-choice study varied with the type of image under study but was approximately 1.1 for diffuse image effects, providing a rule of thumb. Ordering the image quality evaluation models, the authors found overall that Case-PDM ≈ IDM (Sarnoff Corporation) ≈ SSIM [Wang et al., IEEE Trans. Image Process. 13, 600-612 (2004)] > mean squared error ≈ NR [Wang et al. (2004), unpublished] > DCTune (NASA) > IQM (MITRE Corporation). The authors conclude that Case-PDM is very useful in MR image evaluation, but that studies should probably be restricted to similar images and similar processing, normally not a limitation in image reconstruction studies.
Subjects
Computer Simulation, Image Processing, Computer-Assisted/methods, Humans, Magnetic Resonance Imaging, Observation, Sensitivity and Specificity, Time Factors
ABSTRACT
Many reconstruction algorithms have been proposed for parallel magnetic resonance imaging (MRI), which uses multiple coils and subsampled k-space data, and a quantitative method for comparing algorithms is sorely needed. We compared three methods of quantitative image quality evaluation on such images: human detection, a computer detection model, and a computer perceptual difference model (PDM). One-quarter sampling and three reconstruction methods were investigated: a regularization method developed by Ying et al., a simplified regularization method, and an iterative method proposed by Pruessmann et al. Images obtained from a full complement of k-space data were included as references. Detection studies were performed using a simulated dark tumor added to MR images of fresh bovine liver. Human detection depended strongly on the reconstruction method used, with the two regularization methods outperforming the iterative method. Images were also evaluated using detection by a channelized Hotelling observer model and by PDM scores; both predicted the same trends observed in human detection. We are encouraged that PDM gives trends similar to those of human detection studies. Its ease of use and applicability to a variety of MRI situations make it attractive for evaluating image quality in a variety of MR studies.
Subjects
Algorithms, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging, Signal Detection, Psychological, Animals, Cattle, Humans, Likelihood Functions, Liver Neoplasms/diagnosis, Monte Carlo Method, Sensitivity and Specificity
ABSTRACT
Parallel magnetic resonance imaging through sensitivity encoding with multiple receiver coils has emerged as an effective tool for reducing imaging time or improving image SNR. The quality of reconstructed images is limited by inaccurate estimation of the sensitivity map, noise in the acquired k-space data, and the ill-conditioned nature of the coefficient matrix. Tikhonov regularization is a popular method for reducing or eliminating the ill-conditioning of the problem; in this approach, the selection of the regularization map and the regularization parameter is very important. The perceptual difference model (PDM) is a quantitative image quality evaluation tool that has been successfully applied to a variety of MR applications. The high correlation between human ratings and PDM scores indicates that PDM is suitable for evaluating image quality in parallel MR imaging. Applying PDM, we compared four methods of selecting the regularization map and four methods of selecting the regularization parameter. We found that a regularization map obtained using generalized series (GS), together with a spatially adaptive regularization parameter, gave the best reconstructions. PDM was also used as an objective function for optimizing two important parameters of the spatially adaptive method. We conclude that PDM enables comprehensive experiments and is an effective tool for designing and optimizing reconstruction methods in parallel MR imaging.
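Tikhonov regularization stabilizes an ill-conditioned least-squares problem by penalizing deviation from a prior image x0: x = argmin ||Ax − b||² + λ||x − x0||², with the closed-form solution x = (AᴴA + λI)⁻¹(Aᴴb + λx0). A minimal sketch on a toy system (the matrix, noise level, and λ below are illustrative, not the paper's settings):

```python
import numpy as np

def tikhonov(A, b, lam, x0=None):
    """Closed-form Tikhonov-regularized least squares: min ||Ax-b||^2 + lam*||x-x0||^2."""
    n = A.shape[1]
    x0 = np.zeros(n) if x0 is None else x0
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(n),
                           A.conj().T @ b + lam * x0)

rng = np.random.default_rng(4)
A = rng.normal(size=(20, 10))                 # toy encoding matrix
x_true = rng.normal(size=10)
b = A @ x_true + 0.01 * rng.normal(size=20)   # noisy measurements
x_hat = tikhonov(A, b, lam=0.1)
```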
Subjects
Brain Mapping/methods, Magnetic Resonance Imaging/methods, Humans, Image Processing, Computer-Assisted, Observer Variation
ABSTRACT
PURPOSE: To develop and optimize a new modification of the GRAPPA (generalized autocalibrating partially parallel acquisitions) MR reconstruction algorithm, named "Robust GRAPPA." MATERIALS AND METHODS: In Robust GRAPPA, k-space data points are weighted before reconstruction, with small or zero weights assigned to "outliers" in k-space. We implemented a Slow Robust GRAPPA method, which iteratively reweights the k-space data, and compared it to an ad hoc Fast Robust GRAPPA method, which eliminates (assigns zero weights to) a fixed percentage of k-space outliers after an initial estimation procedure. In comprehensive experiments, the new algorithms were evaluated using the perceptual difference model (PDM), whereby image quality was quantitatively compared to a reference image. Independent variables included algorithm type, total reduction factor, outlier ratio, center-filling options, and noise, across multiple image datasets, providing 10,800 test images for evaluation. RESULTS: The Fast Robust GRAPPA method gave results very similar to Slow Robust GRAPPA and showed significant improvements over regular GRAPPA, while adding little computation time. CONCLUSION: Robust GRAPPA proved useful for improving reconstructed image quality. PDM was helpful in designing and optimizing the MR reconstruction algorithms.
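The "iteratively reweighted" idea behind Slow Robust GRAPPA can be sketched abstractly as iteratively reweighted least squares: fit, measure residuals, downweight equations with large residuals, refit. The linear system, outlier injection, and weight rule below are our toy stand-ins, not the paper's implementation:

```python
import numpy as np

def robust_fit(A, y, n_iter=5, eps=1e-6):
    """Iteratively reweighted least squares: downweight rows with large residuals."""
    w = np.ones(len(y))
    for _ in range(n_iter):
        Aw = A * w[:, None]                          # row-weighted system
        x, *_ = np.linalg.lstsq(Aw, w * y, rcond=None)
        r = np.abs(y - A @ x)
        w = 1.0 / (r / (np.median(r) + eps) + 1.0)   # large residual -> small weight
    return x

rng = np.random.default_rng(5)
A = rng.normal(size=(80, 5))                         # toy calibration equations
x_true = rng.normal(size=5)                          # "true" kernel weights
y = A @ x_true
y[::20] += 10.0                                      # inject a few k-space "outliers"
x_hat = robust_fit(A, y)                             # recovers x_true despite outliers
```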
Subjects
Algorithms, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Models, Theoretical, Humans, Image Enhancement/methods, Observer Variation, Time Factors
ABSTRACT
We systematically evaluated a variety of MR spiral imaging acquisition and reconstruction schemes using a computational perceptual difference model (PDM) that models the ability of humans to perceive a visual difference between a degraded "fast" MRI image, with subsampling of k-space, and a "gold standard" image mimicking full acquisition. Human subject experiments performed using a modified double-stimulus continuous-quality scale (DSCQS) correlated well with PDM over a variety of images. In a smaller set of conditions, PDM scores agreed very well with human detectability measurements of image quality. Having validated the technique, we used PDM to systematically evaluate 2016 spiral image conditions (six interleave patterns, seven sampling densities, three density compensation schemes, four reconstruction methods, and four noise levels). Voronoi (VOR) density compensation with conventional regridding gave the best reconstructions. At a fixed sampling density, more interleaves gave better results; with noise present, more interleaves and samples were desirable. Using PDM, we determined conditions under which equivalent image quality was obtained with 50% sampling in noise-free conditions. We conclude that PDM scoring provides an objective, useful tool for assessing fast MR image quality that can greatly aid the design of MR acquisition and signal processing strategies.
ABSTRACT
Parallel imaging techniques are applied in MRI to improve spatial or temporal resolution. Generalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) is one of the most popular reconstruction techniques in parallel imaging. In GRAPPA, several k-space lines are acquired in addition to the normal subsampled data acquisition; coil mapping information is extracted from these lines and used to reconstruct the missing k-space lines. These additionally acquired k-space lines can also be included in the final reconstruction to improve image quality. In GRAPPA, carefully selecting the calibration region and sampling scheme can greatly reduce noise and reconstruction artifacts and improve image quality. The perceptual difference model (PDM) is a quantitative image quality evaluation tool that has been successfully applied to a variety of MR applications; the high correlation between human ratings and PDM scores in previous studies shows that PDM is suitable for evaluating image quality in parallel MR imaging. We used PDM to quantitatively compare the quality of images reconstructed with different calibration regions and sampling schemes. We conclude that the best reconstruction is achieved when the calibration region is centered at 0.8 of the phase-encoding direction and its width is set to 20% of the total available fitting length, and that the outer region factor should be set as small as possible. As an example, with all these optimizations, the time needed to achieve the same image quality is reduced by 16% compared with unoptimized GRAPPA.