Results 1 - 20 of 134
1.
Article in English | MEDLINE | ID: mdl-38645463

ABSTRACT

Purpose: To rule out hemorrhage, non-contrast CT (NCCT) scans are used for early evaluation of patients with suspected stroke. Recently, artificial intelligence tools have been developed to assist in determining eligibility for reperfusion therapies by automating measurement of hypodense volume and of the Alberta Stroke Program Early CT Score (ASPECTS), a 10-point scale in which values above or below a threshold of 7 indicate a change in predicted functional outcome and in the chance of symptomatic hemorrhage. The purpose of this work was to investigate the effects of CT reconstruction kernel and slice thickness on ASPECTS and hypodense volume. Methods: The NCCT series image data of 87 patients imaged with a CT stroke protocol at our institution were reconstructed with 3 kernels (H10s-smooth, H40s-medium, H70h-sharp) and 2 slice thicknesses (1.5 mm and 5 mm) to create a reference condition (H40s/5 mm) and 5 non-reference conditions. Each reconstruction for each patient was analyzed with the Brainomix e-Stroke software (Brainomix, Oxford, England), which yields an ASPECTS value and a measure of total hypodense volume (mL). Results: An ASPECTS value was returned for 74 of 87 cases in the reference condition (13 failures). ASPECTS in non-reference conditions changed from that measured in the reference condition for 59 cases, 7 of which crossed the clinical threshold of 7 in 3 non-reference conditions. Two-way ANOVA was used to compare protocols, followed by Dunnett's post-hoc tests, with significance defined as p < 0.05. For ASPECTS, there was no significant effect of kernel (p = 0.91), a significant effect of slice thickness (p < 0.01), and no significant interaction between these factors (p = 0.91). Post-hoc tests indicated no significant difference between ASPECTS estimated in the reference and any non-reference condition. There was a significant effect of kernel (p < 0.01) and slice thickness (p < 0.01) on hypodense volume; however, there was no significant interaction between these factors (p = 0.79). Post-hoc tests indicated significantly different hypodense volume measurements for H10s/1.5 mm (p = 0.03), H40s/1.5 mm (p < 0.01), and H70h/5 mm (p < 0.01). No significant difference was found in hypodense volume measured in the H10s/5 mm condition (p = 0.96). Conclusion: Automated ASPECTS and hypodense volume measurements can be significantly impacted by reconstruction kernel and slice thickness.
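
The balanced two-way ANOVA described above (kernel x slice thickness, with interaction) can be sketched as follows; the data here are synthetic stand-ins for the hypodense-volume measurements, not the study's data.

```python
import numpy as np
from scipy.stats import f as f_dist

def two_way_anova(y):
    """Balanced two-way ANOVA; y has shape (levels_A, levels_B, n_per_cell)."""
    A, B, n = y.shape
    grand = y.mean()
    ma = y.mean(axis=(1, 2))                 # factor-A (kernel) level means
    mb = y.mean(axis=(0, 2))                 # factor-B (thickness) level means
    mab = y.mean(axis=2)                     # cell means
    ss_a = B * n * ((ma - grand) ** 2).sum()
    ss_b = A * n * ((mb - grand) ** 2).sum()
    ss_ab = n * ((mab - ma[:, None] - mb[None, :] + grand) ** 2).sum()
    ss_e = ((y - mab[:, :, None]) ** 2).sum()
    df_e = A * B * (n - 1)
    res = {}
    for name, ss, df in (("kernel", ss_a, A - 1),
                         ("thickness", ss_b, B - 1),
                         ("interaction", ss_ab, (A - 1) * (B - 1))):
        F = (ss / df) / (ss_e / df_e)
        res[name] = f_dist.sf(F, df, df_e)   # p-value
    return res

# synthetic example: 3 kernels x 2 thicknesses, 20 cases per cell,
# with a real thickness effect and no kernel effect
rng = np.random.default_rng(0)
vol = rng.normal(10.0, 1.0, size=(3, 2, 20))
vol[:, 1, :] += 3.0                          # thicker slices shift the volume
pvals = two_way_anova(vol)
```

With this construction the thickness effect is detected while the kernel effect is not, mirroring the pattern reported for ASPECTS above.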

2.
J Med Imaging (Bellingham) ; 11(2): 024504, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38576536

ABSTRACT

Purpose: The Medical Imaging and Data Resource Center (MIDRC) was created to facilitate medical imaging machine learning (ML) research for tasks including early detection, diagnosis, prognosis, and assessment of treatment response related to the coronavirus disease 2019 pandemic and beyond. The purpose of this work was to create a publicly available metrology resource to assist researchers in evaluating the performance of their medical image analysis ML algorithms. Approach: An interactive decision tree, called MIDRC-MetricTree, has been developed, organized by the type of task that the ML algorithm was trained to perform. The criteria for this decision tree were that (1) users can select information such as the type of task, the nature of the reference standard, and the type of the algorithm output and (2) based on the user input, recommendations are provided regarding appropriate performance evaluation approaches and metrics, including literature references and, when possible, links to publicly available software/code as well as short tutorial videos. Results: Five types of tasks were identified for the decision tree: (a) classification, (b) detection/localization, (c) segmentation, (d) time-to-event (TTE) analysis, and (e) estimation. As an example, the classification branch of the decision tree includes two-class (binary) and multiclass classification tasks and provides suggestions for methods, metrics, software/code recommendations, and literature references for situations where the algorithm produces either binary or non-binary (e.g., continuous) output and for reference standards with negligible or non-negligible variability and unreliability. Conclusions: The publicly available decision tree is a resource to assist researchers in conducting task-specific performance evaluations, including classification, detection/localization, segmentation, TTE, and estimation tasks.
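
As a rough sketch of the decision-tree idea, task type and algorithm output type can key a lookup of suggested metrics; the entries below are simplified placeholders and do not reproduce the actual MIDRC-MetricTree recommendations.

```python
# Hypothetical, minimal mapping inspired by the decision-tree structure;
# the real MIDRC-MetricTree is interactive and far more detailed.
METRIC_TREE = {
    ("classification", "binary_output"): ["sensitivity/specificity", "accuracy"],
    ("classification", "continuous_output"): ["ROC AUC"],
    ("detection", None): ["FROC analysis"],
    ("segmentation", None): ["Dice coefficient", "Hausdorff distance"],
    ("time_to_event", None): ["concordance index (c-index)"],
    ("estimation", None): ["bias", "RMSE", "Bland-Altman limits"],
}

def suggest_metrics(task, output_type=None):
    """Return suggested metrics for a task, falling back to the task-only entry."""
    key = (task, output_type) if (task, output_type) in METRIC_TREE else (task, None)
    return METRIC_TREE.get(key, [])
```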

3.
Biomedicines ; 12(1)2024 Jan 06.
Article in English | MEDLINE | ID: mdl-38255225

ABSTRACT

Coronavirus disease 2019 (COVID-19) is an ongoing issue in certain populations, presenting with rapidly worsening pneumonia and persistent symptoms. This study aimed to test the predictability of rapid progression using radiographic scores and laboratory markers and to present longitudinal changes. This retrospective study included 218 COVID-19 pneumonia patients admitted to Chungnam National University Hospital. Rapid progression was defined as respiratory failure requiring mechanical ventilation within one week of hospitalization. Quantitative COVID (QCOVID) scores were derived from high-resolution computed tomography (CT) analyses: (1) ground glass opacity (QGGO), (2) mixed disease (QMD), and (3) consolidation (QCON), along with their sum, quantitative total lung disease (QTLD). Laboratory data, including inflammatory markers, were obtained from electronic medical records. Rapid progression was observed in 9.6% of patients. All QCOVID scores predicted rapid progression, with QMD showing the best predictability (AUC = 0.813). In multivariate analyses, the QMD score and interleukin (IL)-6 level were important predictors of rapid progression (AUC = 0.864). On follow-up CT at >2 months, residual lung lesions were observed in 21 subjects, even several weeks after negative reverse transcription polymerase chain reaction tests. AI-driven quantitative CT scores, in conjunction with laboratory markers, can be useful for predicting rapid progression and monitoring COVID-19.
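
The AUC values reported here (e.g., 0.813 for QMD) estimate the probability that a randomly chosen progressor scores higher than a non-progressor. A minimal pairwise implementation of that rank interpretation, with made-up scores, is:

```python
import numpy as np

def auc_rank(scores, labels):
    """AUC via the rank-sum (Mann-Whitney U) identity:
    P(score_pos > score_neg), counting ties as 0.5."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # explicit pairwise comparison; fine for small illustrative samples
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```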

4.
J Med Imaging (Bellingham) ; 10(6): 064501, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38074627

ABSTRACT

Purpose: The Medical Imaging and Data Resource Center (MIDRC) is a multi-institutional effort to accelerate medical imaging machine intelligence research and create a publicly available image repository/commons as well as a sequestered commons for performance evaluation and benchmarking of algorithms. After de-identification, approximately 80% of the medical images and associated metadata become part of the open commons and 20% are sequestered from the open commons. To ensure that both commons are representative of the population available, we introduced a stratified sampling method to balance the demographic characteristics across the two datasets. Approach: Our method uses multi-dimensional stratified sampling where several demographic variables of interest are sequentially used to separate the data into individual strata, each representing a unique combination of variables. Within each resulting stratum, patients are assigned to the open or sequestered commons. This algorithm was used on an example dataset containing 5000 patients using the variables of race, age, sex at birth, ethnicity, COVID-19 status, and image modality and compared resulting demographic distributions to naïve random sampling of the dataset over 2000 independent trials. Results: Resulting prevalence of each demographic variable matched the prevalence from the input dataset within one standard deviation. Mann-Whitney U test results supported the hypothesis that sequestration by stratified sampling provided more balanced subsets than naïve randomization, except for demographic subcategories with very low prevalence. Conclusions: The developed multi-dimensional stratified sampling algorithm can partition a large dataset while maintaining balance across several variables, superior to the balance achieved from naïve randomization.
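
The stratified 80/20 partition can be sketched as below: patients are grouped into strata by unique combinations of the chosen variables, and each stratum is split at the target fraction. The dictionary keys are illustrative, not MIDRC's actual variable names.

```python
import random
from collections import defaultdict

def stratified_split(patients, keys, open_frac=0.8, seed=0):
    """Partition patients into an open and a sequestered set, stratifying on
    `keys`: each stratum is one unique combination of the chosen variables."""
    strata = defaultdict(list)
    for p in patients:
        strata[tuple(p[k] for k in keys)].append(p)
    rng = random.Random(seed)
    open_set, sequestered = [], []
    for members in strata.values():
        rng.shuffle(members)
        cut = round(open_frac * len(members))
        open_set.extend(members[:cut])
        sequestered.extend(members[cut:])
    return open_set, sequestered

# illustrative cohort: 250 patients per (sex, covid) combination
patients = [{"sex": s, "covid": c}
            for s in ("F", "M") for c in (0, 1) for _ in range(250)]
open_set, sequestered = stratified_split(patients, keys=("sex", "covid"))
```

Because the split is performed within each stratum, the prevalence of each variable is preserved in both subsets by construction.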

5.
Front Med (Lausanne) ; 10: 1151867, 2023.
Article in English | MEDLINE | ID: mdl-37840998

ABSTRACT

Purpose: Recent advancements in obtaining image-based biomarkers from CT images have enabled lung function characterization, which could aid in lung interventional planning. However, the regional heterogeneity of these biomarkers has not been well documented, even though it is critical to several procedures for lung cancer and COPD. The purpose of this paper is to analyze the interlobar and intralobar heterogeneity of tissue elasticity and to study their relationship with COPD severity. Methods: We retrospectively analyzed 23 lung cancer patients, 14 of whom had COPD. For each patient, we employed a 5DCT scanning protocol to obtain end-exhalation and end-inhalation images and semi-automatically segmented the lobes. We calculated tissue elasticity using a biomechanical property estimation model. To obtain a measure of lobar elasticity, we calculated the mean of the voxel-wise elasticity values within each lobe. To analyze interlobar heterogeneity, we defined an index, termed the Elasticity Heterogeneity Index (EHI), that compares the least elastic lobe to the rest of the lobes: an index of 0 indicates total homogeneity, and higher indices indicate higher heterogeneity. Additionally, we measured intralobar heterogeneity by calculating the coefficient of variation of elasticity within each lobe. Results: The mean EHI was 0.223 ± 0.183. The mean coefficient of variation of the elasticity distributions was 51.1% ± 16.6%. For mild COPD patients, interlobar heterogeneity was low compared to the other categories. For moderate-to-severe COPD patients, the interlobar and intralobar heterogeneities were highest, showing significant differences from the other groups. Conclusion: We observed a high level of lung tissue heterogeneity between and within the lobes across all COPD severities, especially in moderate-to-severe cases. These heterogeneity results demonstrate the value of a regional, function-guided approach such as elasticity for procedures including surgical decision making and treatment planning.
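
As a sketch of the two heterogeneity measures: the per-lobe coefficient of variation is standard (SD/mean), while the exact EHI formula is not given in this abstract, so the interlobar index below is a hypothetical form (relative gap between the least elastic lobe and the mean of the others) chosen only to satisfy the stated properties (0 = total homogeneity, larger = more heterogeneous).

```python
import numpy as np

def lobar_heterogeneity(elasticity_by_lobe):
    """elasticity_by_lobe: dict mapping lobe name -> voxel-wise elasticity array.
    Returns per-lobe means, per-lobe coefficients of variation (intralobar
    heterogeneity), and a HYPOTHETICAL interlobar index (assumed EHI form)."""
    means = {k: float(np.mean(v)) for k, v in elasticity_by_lobe.items()}
    cv = {k: float(np.std(v) / np.mean(v)) for k, v in elasticity_by_lobe.items()}
    vals = sorted(means.values())
    rest = float(np.mean(vals[1:]))   # mean of all but the least elastic lobe
    ehi = (rest - vals[0]) / rest     # 0 = homogeneous, larger = more heterogeneous
    return means, cv, ehi
```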

6.
Med Phys ; 50(11): 7016-7026, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37222565

ABSTRACT

BACKGROUND: A classic approach in medical image registration is to formulate an optimization problem based on the image pair of interest and seek a deformation vector field (DVF) that minimizes the corresponding objective, often iteratively. This has a clear focus on the targeted pair but is typically slow. In contrast, more recent deep-learning-based registration offers a much faster alternative and can benefit from data-driven regularization. However, learning is a process of "fitting" the training cohort, whose image or motion characteristics, or both, may differ from the pair of images to be tested, which is the ultimate goal of registration. Therefore, the generalization gap poses a high risk with direct inference alone. PURPOSE: In this study, we propose an individualized adaptation to improve test sample targeting and achieve a synergy of efficiency and performance in registration. METHODS: Using a previously developed network with an integrated motion representation prior module as the implementation backbone, we adapt the trained registration network further for each image pair at test time to optimize individualized performance. The adaptation method was tested against characteristic shifts caused by cross-protocol, cross-platform, and cross-modality variation, with test evaluation performed on lung CBCT, cardiac MRI, and lung MRI, respectively. RESULTS: Landmark-based registration errors and motion-compensated image enhancement results demonstrated significantly improved test registration performance with our method, compared to tuned classic B-spline registration and network solutions without adaptation. CONCLUSIONS: We have developed a method that synergistically combines the effectiveness of a pre-trained deep network with the target-centric perspective of optimization-based registration to improve performance on individual test data.
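
The paper adapts a full deep registration network per test pair; as a toy stand-in for that test-time optimization idea, the sketch below fits a single translation parameter to one 1-D image pair by iteratively descending the pair's own MSE.

```python
import numpy as np

def register_shift(moving, fixed, t0=0.0, step=1.0, iters=60):
    """Estimate a translation t so that moving(x - t) matches fixed(x), by
    sign-based descent on the test pair's own MSE with step halving."""
    x = np.arange(len(fixed), dtype=float)
    def mse(t):
        return ((np.interp(x - t, x, moving) - fixed) ** 2).mean()
    t, prev_sign = t0, 0.0
    for _ in range(iters):
        eps = 1e-4
        g = (mse(t + eps) - mse(t - eps)) / (2 * eps)  # finite-difference gradient
        s = np.sign(g)
        if s != prev_sign and prev_sign != 0.0:
            step *= 0.5                                 # overshoot -> shrink step
        t -= step * s
        prev_sign = s
    return t

# synthetic pair: the same bump, shifted by 3 samples
x = np.arange(64, dtype=float)
bump = lambda c: np.exp(-((x - c) ** 2) / 50.0)
t_hat = register_shift(bump(27.0), bump(30.0))
```

The same principle, optimizing on the test pair itself, is what the individualized adaptation applies to the network's parameters rather than to a single translation.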


Subject(s)
Deep Learning, Humans, Image Processing, Computer-Assisted/methods, Lung, Algorithms
7.
Med Phys ; 50(2): 894-905, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36254789

ABSTRACT

BACKGROUND: Idiopathic pulmonary fibrosis (IPF) is a progressive, irreversible, and usually fatal lung disease of unknown cause, generally affecting the elderly population. Early diagnosis of IPF is crucial for triaging patients into anti-fibrotic treatment or treatments for other causes of pulmonary fibrosis. However, the current IPF diagnosis workflow is complicated and time-consuming, involving collaborative efforts from radiologists, pathologists, and clinicians, and it is largely subject to inter-observer variability. PURPOSE: The purpose of this work is to develop a deep learning-based automated system that can diagnose IPF among subjects with interstitial lung disease (ILD) using an axial chest computed tomography (CT) scan. This work can potentially enable timely diagnosis decisions and reduce inter-observer variability. METHODS: Our dataset contains CT scans from 349 IPF patients and 529 non-IPF ILD patients. We used 80% of the dataset for training and validation and 20% as the holdout test set. We proposed a two-stage model: at stage one, we built a multi-scale, domain knowledge-guided attention model (MSGA) that encourages the model to focus on specific areas of interest to enhance explainability, including both high- and medium-resolution attention; at stage two, we collected the output from MSGA and constructed a random forest (RF) classifier for patient-level diagnosis to further boost accuracy. The RF classifier is used as the final decision stage because it is interpretable, computationally fast, and can handle correlated variables. Model utility was examined by (1) accuracy, represented by the area under the receiver operating characteristic curve (AUC) with standard deviation (SD), and (2) explainability, illustrated by visual examination of the estimated attention maps, which show the important areas for model diagnosis. RESULTS: During the training and validation stage, we observed that with no guidance from domain knowledge, the IPF diagnosis model reached acceptable performance (AUC ± SD = 0.93 ± 0.07) but lacked explainability; with only guided high- or medium-resolution attention, the learned attention maps were not satisfactory; with both high- and medium-resolution attention, under certain hyperparameter settings, the model reached the highest AUC among all experiments (AUC ± SD = 0.99 ± 0.01) and the estimated attention maps concentrated on the regions of interest for this task. The three best-performing hyperparameter selections according to MSGA were applied to the holdout test set and reached model performance comparable to that of the validation set. CONCLUSIONS: Our results suggest that, for a task with only scan-level labels available, MSGA+RF can utilize population-level domain knowledge to guide the training of the network, increasing both model accuracy and explainability.


Subject(s)
Deep Learning, Idiopathic Pulmonary Fibrosis, Lung Diseases, Interstitial, Humans, Aged, Random Forest, Idiopathic Pulmonary Fibrosis/diagnostic imaging, Lung Diseases, Interstitial/diagnosis, Tomography, X-Ray Computed/methods, Retrospective Studies
8.
Article in English | MEDLINE | ID: mdl-36320561

ABSTRACT

The rapid development of deep-learning methods in medical imaging has called for an analysis method suitable for non-linear and data-dependent algorithms. In this work, we investigate a local linearity analysis in which a complex neural network can be represented as piecewise linear systems. We recognize that many neural networks consist of alternating linear layers and rectified linear unit (ReLU) activations and are therefore strictly piecewise linear. We investigated the extent of these locally linear regions by gradually adding perturbations to an operating point. For this work, we explored perturbations based on image features of interest, including lesion contrast, background, and additive noise. We then developed strategies to extend these strictly locally linear regions to include neighboring linear regions with similar gradients. Using these approximately linear regions, we applied singular value decomposition (SVD) analysis to each local linear system to investigate and explain the overall nonlinear and data-dependent behaviors of neural networks. The analysis was applied to an example CT denoising algorithm trained on thorax CT scans. We observed that the strictly local linear regions are highly sensitive to small signal perturbations. Over a range of lesion contrast from 0.007 to 0.04 mm⁻¹, there were a total of 33,992 linear regions. The Jacobians are also shift-variant. However, the Jacobians of neighboring linear regions are very similar. By combining linear regions with similar Jacobians, we narrowed the number of approximately linear regions down to four over a lesion contrast range of 0.001 to 0.08 mm⁻¹. SVD analysis of the different linear regions revealed denoising behavior that is highly dependent on the background intensity, and further identified a greater amount of noise reduction in uniform regions than at lesion edges. In summary, the proposed local linearity analysis framework has the potential to better characterize and interpret the non-linear and data-dependent behaviors of neural networks.
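
For a small generic ReLU network (not the CT denoiser studied here), the piecewise-linear structure can be made explicit: within one activation region the network is affine with Jacobian W2·diag(mask)·W1, which can then be analyzed by SVD.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)   # toy 4 -> 8 -> 3 ReLU net
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def local_jacobian(x):
    """Inside one activation region the net is exactly affine:
    J = W2 @ diag(mask) @ W1, where mask records the active ReLU units at x."""
    mask = (W1 @ x + b1 > 0).astype(float)
    return W2 @ (mask[:, None] * W1)

x0 = rng.normal(size=4)                    # operating point
J = local_jacobian(x0)
U, sing, Vt = np.linalg.svd(J)             # gains/directions of the local linear system

# a small step stays inside the region, so the affine model is exact
d = 1e-6 * rng.normal(size=4)
```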

9.
J Scleroderma Relat Disord ; 7(3): 168-178, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36211204

ABSTRACT

Patients with systemic sclerosis are at high risk of developing systemic sclerosis-associated interstitial lung disease. Symptoms and outcomes of systemic sclerosis-associated interstitial lung disease range from subclinical lung involvement to respiratory failure and death. Early and accurate diagnosis of systemic sclerosis-associated interstitial lung disease is therefore important to enable appropriate intervention. The most sensitive and specific way to diagnose systemic sclerosis-associated interstitial lung disease is by high-resolution computed tomography, and experts recommend that high-resolution computed tomography be performed in all patients with systemic sclerosis at the time of initial diagnosis. In addition to being an important screening and diagnostic tool, high-resolution computed tomography can be used to evaluate disease extent in systemic sclerosis-associated interstitial lung disease and may be helpful in assessing prognosis in some patients. Currently, there is no consensus regarding the frequency and interval of scanning in patients at risk of interstitial lung disease development and/or progression. However, expert guidance does suggest that the frequency of screening using high-resolution computed tomography should be guided by the risk of developing interstitial lung disease. Most experienced clinicians would not repeat high-resolution computed tomography more than once a year or every other year for the first few years unless symptoms arose. Several computed tomography techniques have been developed in recent years that are suitable for regular monitoring, including low-radiation protocols, which, together with other technologies, such as lung ultrasound and magnetic resonance imaging, may further assist in the evaluation and monitoring of patients with systemic sclerosis-associated interstitial lung disease. A video abstract to accompany this article is available at: https://www.globalmedcomms.com/respiratory/Khanna/HRCTinSScILD.

10.
IEEE Trans Biomed Eng ; 69(6): 1828-1836, 2022 06.
Article in English | MEDLINE | ID: mdl-34757900

ABSTRACT

OBJECTIVE: Registration between phases in 4D cardiac MRI is essential for reconstructing high-quality images and appreciating the dynamics. Complex motion and limited image quality make it challenging to design regularization functionals. We propose to introduce a motion representation model (MRM) into a registration network to impose a customized, site-specific, and spatially variant prior for cardiac motion. METHODS: We propose a novel approach to regularize deep registration with a deformation vector field (DVF) representation model derived from computed tomography angiography (CTA). In the form of a convolutional auto-encoder, the MRM was trained to capture the spatially variant pattern of feasible DVF Jacobians. The CTA-derived MRM was then incorporated into an unsupervised network to facilitate MRI registration. In the experiment, 10 CTAs were used to derive the MRM. The method was tested on ten 0.35 T scans in long-axis view with manual segmentation and fifteen 3 T scans in short-axis view with tagging-based landmarks. RESULTS: Introducing the MRM improved registration accuracy, achieving 80% Hausdorff distances of 2.23, 7.21, and 4.42 mm on the left ventricle, right ventricle, and pulmonary artery, respectively, and a landmark registration error of 2.23 mm. The results were comparable to carefully tuned SimpleElastix, but the registration time was reduced from 40 s to 0.02 s. The MRM was robust to different DVF sample generation methods. CONCLUSION: The model enjoys the high accuracy of a meticulously tuned optimization model and the efficiency of deep networks. SIGNIFICANCE: The method enables the model to go beyond the quality limitation of MRI. Its robustness to the training DVF generation scheme makes the method attractive for adapting to the available data and software resources in various clinics.
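
The MRM is trained on patterns of feasible DVF Jacobians; as a minimal illustration of the underlying quantity, the sketch below computes the Jacobian determinant of a 2-D displacement field (positive values indicate non-folding, physically plausible motion).

```python
import numpy as np

def jacobian_determinant(dvf):
    """dvf: displacement field of shape (2, H, W): dvf[0] = u (along columns/x),
    dvf[1] = v (along rows/y), on a unit-spaced grid. The spatial transform is
    x -> x + u(x); det(J) > 0 everywhere means the motion does not fold."""
    du_dy, du_dx = np.gradient(dvf[0])   # np.gradient returns axis-0 then axis-1
    dv_dy, dv_dx = np.gradient(dvf[1])
    return (1 + du_dx) * (1 + dv_dy) - du_dy * dv_dx

# uniform 10% stretch along x: the determinant is 1.1 everywhere
X, Y = np.meshgrid(np.arange(16.0), np.arange(16.0))
det = jacobian_determinant(np.stack([0.1 * X, 0.0 * Y]))
```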


Subject(s)
Computed Tomography Angiography, Image Processing, Computer-Assisted, Algorithms, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging, Motion, Tomography, X-Ray Computed
11.
IEEE Trans Med Imaging ; 40(12): 3748-3761, 2021 12.
Article in English | MEDLINE | ID: mdl-34264825

ABSTRACT

Lung cancer is by far the leading cause of cancer death in the US. Recent studies have demonstrated the effectiveness of screening using low dose CT (LDCT) in reducing lung cancer related mortality. While lung nodules are detected with a high rate of sensitivity, this exam has a low specificity rate, and it is still difficult to separate benign from malignant lesions. The ISBI 2018 Lung Nodule Malignancy Prediction Challenge, developed by a team from the Quantitative Imaging Network of the National Cancer Institute, focused on the prediction of lung nodule malignancy from two sequential LDCT screening exams using automated (non-manual) algorithms. We curated a cohort of 100 subjects who participated in the National Lung Screening Trial and had established pathological diagnoses. Data from 30 subjects were randomly selected for training, and the remainder was used for testing. Participants were evaluated based on the area under the receiver operating characteristic curve (AUC) of nodule-wise malignancy scores generated by their algorithms on the test set. The challenge had 17 participants, with 11 teams submitting reports with method descriptions, as mandated by the challenge rules. Participants used quantitative methods, reporting test AUCs ranging from 0.698 to 0.913. The top five contestants used deep learning approaches, reporting AUCs between 0.87 and 0.91. The teams' predictors did not differ significantly from each other or from a volume change estimate (p = 0.05 with Bonferroni-Holm correction).
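
The Bonferroni-Holm correction used for the pairwise comparisons is a step-down adjustment of the raw p-values; a compact implementation is:

```python
import numpy as np

def holm_adjust(pvals):
    """Holm step-down adjusted p-values: the k-th smallest raw p-value is
    multiplied by (m - k), capped at 1, with monotonicity enforced."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    adj = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(np.argsort(p)):
        val = min(1.0, (m - rank) * p[idx])
        running_max = max(running_max, val)   # adjusted values never decrease
        adj[idx] = running_max
    return adj
```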


Subject(s)
Lung Neoplasms, Solitary Pulmonary Nodule, Algorithms, Humans, Lung, Lung Neoplasms/diagnostic imaging, ROC Curve, Solitary Pulmonary Nodule/diagnostic imaging, Tomography, X-Ray Computed
12.
Med Phys ; 48(10): 6160-6173, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34309040

ABSTRACT

PURPOSE: Size-specific dose estimate (SSDE) is a metric that adjusts CTDIvol to account for patient size. While not intended to be an estimate of organ dose, AAPM Report 204 notes that the difference between patient organ dose and SSDE is expected to be 10-20%. The purpose of this work was therefore to evaluate SSDE against estimates of organ dose obtained using Monte Carlo (MC) simulation techniques applied to routine exams across a wide range of patient sizes. MATERIALS AND METHODS: SSDE was evaluated with respect to organ dose for three routine protocols taken from Siemens scanners: (a) brain parenchyma dose in routine head exams, (b) lung and breast dose in routine chest exams, and (c) liver, kidney, and spleen dose in routine abdomen/pelvis exams. For each exam, voxelized phantom models were created from existing models or derived from clinical patient scans. For routine head exams, 15 patient models were used, consisting of 10 GSF/ICRP voxelized phantom models and five pediatric voxelized patient models created from CT image data. For all exams, the size metric used was water equivalent diameter (Dw). For the routine chest exams, data from 161 patients were collected with a Dw range of ~16-44 cm. For the routine abdomen/pelvis exams, data from 107 patients were collected with a Dw range of ~16-44 cm. Image data from these patients were segmented to generate voxelized patient models. For routine head exams, fixed tube current (FTC) was used, while tube current modulation (TCM) data for body exams were extracted from raw projection data. The voxelized patient models and tube current information were used in detailed MC simulations for organ dose estimation. Organ doses from MC simulation were normalized by CTDIvol and parameterized as a function of Dw. For each patient scan, the SSDE was obtained using the Dw and CTDIvol values of the scan, according to AAPM Report 220 for body scans and Report 293 for head scans. For each protocol and each patient, normalized organ doses were compared with SSDE. A one-sided tolerance limit covering 95% (P = 0.95) of the population with 95% confidence (α = 0.05) was used to assess the upper tolerance limit (TU) between SSDE and normalized organ dose. RESULTS: For head exams, the TU between SSDE and brain parenchyma dose was 12.5%. For routine chest exams, the TU between SSDE and lung and breast dose was 35.6% and 68.3%, respectively. For routine abdomen/pelvis exams, the TU between SSDE and liver, spleen, and kidney dose was 30.7%, 33.2%, and 33.0%, respectively. CONCLUSIONS: The TU of 20% between SSDE and organ dose was found to be insufficient to cover 95% of the sampled population with 95% confidence for all of the organs and protocols investigated, except for brain parenchyma dose. For the routine body exams, excluding the breasts, a wider threshold difference of ~30-36% would be needed. These results are, however, specific to Siemens scanners.
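
The one-sided normal-theory upper tolerance limit used here can be computed with the standard noncentral-t k-factor; this is a generic sketch with simulated data, not the study's values.

```python
import numpy as np
from scipy.stats import nct, norm

def upper_tolerance_limit(x, coverage=0.95, confidence=0.95):
    """One-sided upper tolerance limit TU = mean + k * SD for a normal sample,
    with the k-factor from the noncentral t distribution (covers `coverage`
    of the population with the stated confidence)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    nc = norm.ppf(coverage) * np.sqrt(n)               # noncentrality parameter
    k = nct.ppf(confidence, df=n - 1, nc=nc) / np.sqrt(n)
    return x.mean() + k * x.std(ddof=1)
```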


Subject(s)
Abdomen, Tomography, X-Ray Computed, Child, Humans, Monte Carlo Method, Phantoms, Imaging, Radiation Dosage
13.
J Appl Clin Med Phys ; 22(6): 4-10, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33938120

ABSTRACT

The American Association of Physicists in Medicine (AAPM) is a nonprofit professional society whose primary purposes are to advance the science, education and professional practice of medical physics. The AAPM has more than 8000 members and is the principal organization of medical physicists in the United States. The AAPM will periodically define new practice guidelines for medical physics practice to help advance the science of medical physics and to improve the quality of service to patients throughout the United States. Existing medical physics practice guidelines will be reviewed for the purpose of revision or renewal, as appropriate, on their fifth anniversary or sooner. Each medical physics practice guideline represents a policy statement by the AAPM, has undergone a thorough consensus process in which it has been subjected to extensive review, and requires the approval of the Professional Council. The medical physics practice guidelines recognize that the safe and effective use of diagnostic and therapeutic radiology requires specific training, skills, and techniques, as described in each document. Reproduction or modification of the published practice guidelines and technical standards by those entities not providing these services is not authorized. The following terms are used in the AAPM practice guidelines: (a) Must and Must Not: Used to indicate that adherence to the recommendation is considered necessary to conform to this practice guideline. (b) Should and Should Not: Used to indicate a prudent practice to which exceptions may occasionally be made in appropriate circumstances.


Subject(s)
Health Physics, Radiation Oncology, Cytarabine, Humans, Societies, Tomography, X-Ray Computed, United States
14.
J Appl Clin Med Phys ; 22(5): 97-109, 2021 May.
Article in English | MEDLINE | ID: mdl-33939253

ABSTRACT

PURPOSE: The purpose of this work was to estimate and compare breast and lung doses in chest CT scans using organ-based tube current modulation (OBTCM) with those from conventional, attenuation-based automatic tube current modulation (ATCM) across a range of patient sizes. METHODS: Thirty-four patients (17 female, 17 male) who underwent clinically indicated CT chest/abdomen/pelvis (CAP) examinations employing OBTCM were collected from two multi-detector row CT scanners. The patient size metric was water equivalent diameter (Dw) taken at the center of the scan volume. Breast and lung tissues were segmented from patient image data to create voxelized models for use in a Monte Carlo transport code. The OBTCM schemes for the chest portion were extracted from the raw projection data. ATCM schemes were estimated using a recently developed method. Breast and lung doses for each TCM scenario were estimated for each patient model. CTDIvol-normalized breast (nDbreast) and lung (nDlung) doses were subsequently calculated. The differences between OBTCM and ATCM normalized organ dose estimates were tested using linear regression models that included CT scanner and Dw as covariates. RESULTS: The mean dose reduction from OBTCM in nDbreast was significant after adjusting for scanner model and patient size (P = 0.047). When female and male patients were pooled, the mean dose reduction from OBTCM in nDlung showed only a trend toward significance after adjusting for scanner model and patient size (P = 0.085). CONCLUSIONS: One specific manufacturer's OBTCM was analyzed. OBTCM significantly decreased normalized breast dose relative to a modeled version of that same manufacturer's ATCM scheme. However, significant dose savings were not observed in lung dose overall. Results from this study support the use of OBTCM chest protocols for females only.


Subject(s)
Breast, Tomography, X-Ray Computed, Breast/diagnostic imaging, Female, Humans, Lung/diagnostic imaging, Male, Monte Carlo Method, Phantoms, Imaging, Radiation Dosage
15.
Med Phys ; 48(6): 2906-2919, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33706419

ABSTRACT

PURPOSE: Recent studies have demonstrated a lack of reproducibility of radiomic features in response to variations in CT parameters, and the reproducibility of radiomic features has not been well established in clinical datasets. We aimed to investigate the effects of a wide range of CT acquisition and reconstruction parameters on radiomic features in a realistic setting, using clinical low-dose lung cancer screening cases. We performed univariable and multivariable explorations to consider the effects of individual parameters and the simultaneous interactions among three acquisition/reconstruction parameters: radiation dose level, reconstructed slice thickness, and kernel. METHOD: A cohort of 89 lung cancer screening patients, each with a solid lung nodule >4 mm in diameter, was collected. A computational pipeline was used to simulate dose reduction on the raw projection data collected from the patient scans. This was followed by reconstruction of the raw data with a weighted filtered back projection (wFBP) algorithm and by automatic lung nodule detection and segmentation using a computer-aided detection tool. For each patient, 36 image datasets were created, corresponding to dose levels of 100%, 50%, 25%, and 10% of the original dose, three slice thicknesses of 0.6 mm, 1 mm, and 2 mm, and three reconstruction kernels (smooth, medium, and sharp). For each nodule, 226 well-known radiomic features were calculated under each image condition. The reproducibility of the radiomic features was first evaluated by measuring the intercondition agreement of feature values among the 36 image conditions. Then, in a series of univariable analyses, the impact of individual CT parameters was assessed by selecting subsets of conditions with one varying and two constant CT parameters; intraparameter agreement was assessed within each subset. The overall concordance correlation coefficient (OCCC) served as the measure of agreement. An OCCC ≥ 0.9 implied strong agreement and reproducibility of radiomic features in intercondition or intraparameter comparisons. Furthermore, the interaction of CT parameters in impacting radiomic feature values was investigated via ANOVA. RESULTS: All included radiomic features lacked intercondition reproducibility (OCCC < 0.9) across all 36 conditions. Of the 226 radiomic features analyzed, only 17 and 18 features were reproducible (OCCC ≥ 0.9) under dose and kernel variation, respectively, within the corresponding condition subsets. Slice thickness demonstrated the largest impact on radiomic feature values: only one to five features were reproducible, and only in a few condition subsets. ANOVA revealed significant interactions (P < 0.05) between CT parameters affecting the variability of >50% of the radiomic features. CONCLUSION: We systematically explored the multidimensional space of CT parameters affecting lung nodule radiomic features. The univariable and multivariable analyses of this study not only showed the lack of reproducibility of the majority of radiomic features but also revealed interactions among CT parameters, meaning that the effect of an individual CT parameter on radiomic features can be conditional on the other CT acquisition and reconstruction parameters. Our findings argue for careful radiomic feature selection and attention to the inclusion criteria for CT image acquisition protocols within the datasets of radiomic studies.
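The agreement measure used in this abstract, the overall concordance correlation coefficient, can be computed directly from a subjects-by-conditions matrix of feature values. The sketch below implements Barnhart's generalization of Lin's concordance correlation coefficient; the function name and array layout are illustrative and not taken from the study's code.

```python
import numpy as np

def occc(y):
    """Overall concordance correlation coefficient (Barnhart-style).

    y: array of shape (n_subjects, n_conditions), one feature measured
    for each subject under each condition. Returns 1.0 for perfect
    agreement across conditions.
    """
    y = np.asarray(y, dtype=float)
    n, J = y.shape
    means = y.mean(axis=0)
    # Population (biased) covariance matrix across conditions
    cov = np.cov(y, rowvar=False, bias=True)
    num = 0.0
    den = 0.0
    for j in range(J):
        for k in range(j + 1, J):
            num += 2.0 * cov[j, k]                # pairwise covariances
            den += (means[j] - means[k]) ** 2     # location shifts
    den += (J - 1) * np.trace(cov)                # scale terms
    return num / den
```

Identical columns give an OCCC of 1; independent columns give a value near 0, which is how a threshold such as OCCC ≥ 0.9 separates reproducible from non-reproducible features.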


Subject(s)
Early Detection of Cancer, Lung Neoplasms, Algorithms, Humans, Lung/diagnostic imaging, Lung Neoplasms/diagnostic imaging, Reproducibility of Results, Tomography, X-Ray Computed
16.
Med Phys ; 48(1): 523-532, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33128259

ABSTRACT

PURPOSE: Task Group Report 195 of the American Association of Physicists in Medicine (AAPM) contains reference datasets for the direct comparison of results among different Monte Carlo (MC) simulation tools for various aspects of imaging research that employs ionizing radiation. While useful for comparing and validating MC codes, that effort did not provide the information needed to compare absolute dose estimates from CT exams. Therefore, the purpose of this work is to extend those efforts by providing a reference dataset for benchmarking fetal dose derived from MC simulations of clinical CT exams. ACQUISITION AND VALIDATION METHODS: The reference dataset contains the four elements necessary for validating MC engines for CT dosimetry: (a) physical characteristics of the CT scanner, (b) patient information, (c) exam specifications, and (d) fetal dose results, in tabular form, from previously validated and published MC simulation methods. Scanner characteristics include non-proprietary descriptions of equivalent-source cumulative distribution function (CDF) spectra and bowtie filtration profiles, as well as scanner geometry information. Additionally, for the MCNPX MC engine, normalization factors are provided to convert raw simulation results to absolute dose in mGy. The patient information is based on a set of publicly available fetal dose models and includes de-identified image data; voxelized MC input files with the fetus, uterus, and gestational sac identified; and patient size metrics in the form of water-equivalent-diameter (Dw) z-axis distributions from a simulated topogram (Dw,topo) and from the image data (Dw,image). Exam characteristics include CT scan start and stop angles, table and patient locations, helical pitch, nominal collimation and measured beam width, and gantry rotation time for each simulation. For simulations estimating dose from exams that use tube current modulation (TCM), a realistic TCM scheme is presented, estimated with a previously validated method. Absolute and CTDIvol-normalized fetal dose results for both TCM and fixed tube current (FTC) simulations are given for each patient model under each scan scenario. DATA FORMAT AND USAGE NOTES: Equivalent-source CDFs and bowtie filtration profiles are available as text files. Image data are available in DICOM format. Voxelized models are represented by a header followed by a list of integers in a text file, together representing a three-dimensional model of the patient. Size distribution metrics are also given in text files. Results of absolute and normalized fetal dose, with associated MC error estimates, are presented in tabular form in an Excel spreadsheet. All data are stored on Zenodo and are publicly accessible at https://zenodo.org/record/3959512. POTENTIAL APPLICATIONS: Similar to the work of AAPM Report 195, this work provides a set of reference data for benchmarking fetal dose estimates from clinical CT exams. It gives researchers an opportunity to compare MC simulation results against published reference data as part of their efforts to validate absolute and normalized fetal dose estimates. It could also serve as a basis for comparison with non-MC approaches, such as deterministic methods, or with commercial packages that estimate fetal dose from clinical CT exams.
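The voxelized-model format described above (a text header followed by a flat list of integer tissue labels) lends itself to a short parser. The sketch below assumes a hypothetical one-line "nx ny nz" header and z-slowest ordering; the released Zenodo files should be consulted for the actual layout before use.

```python
import numpy as np

def read_voxel_model(path):
    """Read a voxelized patient model stored as a header followed by a
    flat list of integer labels in a text file.

    Assumed (hypothetical) header: a single line "nx ny nz". The real
    dataset's header layout and axis ordering may differ.
    """
    with open(path) as f:
        nx, ny, nz = (int(v) for v in f.readline().split())
        labels = np.array(f.read().split(), dtype=np.int32)
    if labels.size != nx * ny * nz:
        raise ValueError("label count does not match header dimensions")
    # z varies slowest here; adjust if the actual files order differently
    return labels.reshape((nz, ny, nx))
```

Each integer label would map to a tissue or region (e.g., fetus, uterus, gestational sac) for use as MC engine input.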


Subject(s)
Benchmarking, Tomography, X-Ray Computed, Female, Fetus, Humans, Monte Carlo Method, Phantoms, Imaging, Radiation Dosage
17.
Tomography ; 6(2): 111-117, 2020 06.
Article in English | MEDLINE | ID: mdl-32548287

ABSTRACT

Several institutions have developed image feature extraction software to compute quantitative descriptors of medical images for radiomics analyses. With radiomics increasingly proposed for use in research and clinical contexts, new techniques are necessary for standardizing and replicating radiomics findings across software implementations. We have developed a software toolkit for the creation of 3D digital reference objects with customizable size, shape, intensity, texture, and margin sharpness values. Using user-supplied input parameters, these objects are defined mathematically as continuous functions, discretized, and then saved as DICOM objects. Here, we present the definition of these objects, parameterized derivations of a subset of their radiomics values, computer code for object generation, example use cases, and a user-downloadable sample collection used for the examples cited in this paper.
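As a rough illustration of the idea described above, a spherical digital reference object with tunable size, intensity, and margin sharpness can be defined as a continuous function and then discretized onto a voxel grid. The parameter names below are illustrative and do not reflect the toolkit's actual API; the margin is modeled here with a sigmoid roll-off, one of several plausible choices.

```python
import numpy as np

def make_sphere_dro(shape=(64, 64, 64), radius=20.0, intensity=100.0,
                    edge_width=2.0, background=0.0):
    """Minimal 3D digital reference object: a sphere whose margin
    sharpness is controlled by a sigmoid of width `edge_width` voxels.
    All parameter names are hypothetical, for illustration only.
    """
    zz, yy, xx = np.indices(shape, dtype=float)
    center = (np.array(shape) - 1) / 2.0
    r = np.sqrt((zz - center[0]) ** 2
                + (yy - center[1]) ** 2
                + (xx - center[2]) ** 2)
    # Continuous definition evaluated on the voxel grid (discretization)
    profile = 1.0 / (1.0 + np.exp((r - radius) / max(edge_width, 1e-6)))
    return background + (intensity - background) * profile
```

A small `edge_width` approaches a hard-edged sphere, while larger values blur the margin, letting margin-sharpness-sensitive radiomic features be exercised in a controlled way; the resulting array could then be wrapped in DICOM for distribution.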


Subject(s)
Image Processing, Computer-Assisted, Radiometry, Software, Radiometry/standards, Reference Standards
18.
Eur Radiol ; 30(3): 1822, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31728683

ABSTRACT

The original version of this article, published on 24 July 2014, unfortunately contained a mistake. In section "Discussion," a sentence was worded incorrectly.

19.
Med Phys ; 46(10): 4563-4574, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31396974

ABSTRACT

PURPOSE: An important challenge for deep learning models is generalizing to new datasets that may be acquired with acquisition protocols different from those of the training set. It is not always feasible to expand the training data to the range encountered in clinical practice. We introduce a new technique, physics-based data augmentation (PBDA), that can emulate new computed tomography (CT) data acquisition protocols. We demonstrate two forms of PBDA, emulating increases in slice thickness and reductions in dose, on the specific problem of false-positive reduction in the automatic detection of lung nodules. METHODS: We worked with CT images from the Lung Image Database Consortium (LIDC) collection. We employed a hybrid ensemble convolutional neural network (CNN), consisting of multiple CNN modules (VGG, DenseNet, ResNet), for the classification task of determining whether an image patch contained a suspicious nodule or a false positive. To emulate a reduction in tube current, we injected noise by simulating forward projection, noise addition, and backprojection corresponding to 1.5 mAs (a "chest x-ray" dose). To simulate thick-slice CT scans from thin-slice CT scans, we grouped and averaged spatially contiguous slices within the thin-slice data. The neural network was trained with the 10% of the LIDC dataset selected to have either the highest tube current or the thinnest slices, and tested on the remaining data. We compared PBDA to a baseline with standard geometric augmentations (such as shifts and rotations) and Gaussian noise addition. RESULTS: PBDA improved the performance of the networks when generalizing to the test dataset in only a limited number of cases. The best performance was obtained by applying augmentation at very low doses (1.5 mAs), about an order of magnitude lower than most screening protocols; a comparable level of Gaussian noise was injected in the baseline augmentation. For dose-reduction PBDA, the average sensitivity of 0.931 for the hybrid ensemble network was not statistically different from the average sensitivity of 0.935 without PBDA. Similarly, for slice-thickness PBDA, the average sensitivity of 0.900 when augmenting with doubled simulated slice thicknesses was not statistically different from the average sensitivity of 0.895 without PBDA. While this paper details cases in which we observed improvements, the overall picture suggests that PBDA may not be an effective data enrichment tool. CONCLUSIONS: PBDA is a newly proposed strategy for mitigating the performance loss of neural networks caused by variation in acquisition protocol between the training dataset and the data encountered in deployment or testing. We found that PBDA does not provide robust improvements with the four neural networks tested (three modules and the ensemble) for the specific task of false-positive reduction in nodule detection.
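The slice-thickness form of PBDA described above, grouping and averaging spatially contiguous thin slices, can be sketched in a few lines. The array layout (z first) and the handling of trailing slices are assumptions, not details from the paper.

```python
import numpy as np

def thicken_slices(volume, factor=2):
    """Emulate thicker-slice CT by averaging groups of `factor`
    spatially contiguous thin slices along the z axis.

    volume: array of shape (nz, ny, nx). Trailing slices that do not
    fill a complete group are dropped (an assumption of this sketch).
    """
    nz = (volume.shape[0] // factor) * factor
    grouped = volume[:nz].reshape(nz // factor, factor, *volume.shape[1:])
    return grouped.mean(axis=1)
```

With `factor=2`, a stack of 0.6 mm slices would approximate a 1.2 mm reconstruction, which is the "doubled simulated slice thickness" setting reported in the results.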


Subject(s)
Deep Learning, Image Processing, Computer-Assisted/methods, Lung Neoplasms/diagnostic imaging, Tomography, X-Ray Computed, False Positive Reactions, Humans, Normal Distribution, Radiation Dosage, Sensitivity and Specificity
20.
Med Phys ; 46(9): 3941-3950, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31220358

ABSTRACT

PURPOSE: Reducing the dose level to achieve ALARA is an important task in diagnostic and therapeutic applications of computed tomography (CT) imaging. Effective image quality enhancement strategies are crucial to compensate for the degradation caused by dose reduction. In the past few years, deep learning approaches have demonstrated promising denoising performance on natural/synthetic images. This study tailors a neural network model to (ultra-)low-dose CT denoising and assesses its performance in enhancing CT image quality and emphysema quantification. METHODS: The noise statistics of low-dose CT images have unique characteristics and differ from those assumed by general denoising models. In this study, we first simulate paired ultra-low-dose and high-quality reference images with a well-validated pipeline. These paired images are used to train a denoising convolutional neural network (DnCNN) with residual mapping. The performance of the DnCNN tailored to CT denoising (DnCNN-CT) is assessed over various dose reduction levels, with respect to both image quality and emphysema score quantification. The possible over-smoothing behavior of DnCNN and its impact on different patient subcohorts are also investigated. RESULTS: Performance evaluation showed that DnCNN-CT provided significant image quality enhancement, especially at very low dose levels. With DnCNN-CT denoising of 3%-dose cases, the peak signal-to-noise ratio improved by 8 dB and the structural similarity index increased by 0.15, outperforming the original DnCNN and a state-of-the-art nonlocal-means-type denoising scheme. The emphysema mask, in which lung voxels of abnormally low attenuation coefficient are marked as potential emphysema, was also investigated: the mask generated after DnCNN-CT denoising of the 3%-dose image agreed well with that from the full-dose reference. Although over-smoothing in DnCNN denoising contributed to a slight underestimation of the emphysema score compared to the reference, this minor bias did not affect clinical conclusions. The proposed method provided effective detection of cases with appreciable emphysema while serving as a reasonable correction for normal cases without emphysema. CONCLUSIONS: This work provides a DnCNN tailored to (ultra-)low-dose CT denoising and demonstrates significant improvement in both image quality and clinical emphysema quantification accuracy over various dose levels. The clinical conclusion on emphysema obtained from the denoised low-dose images agrees well with that from the full-dose ones.
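The peak signal-to-noise ratio quoted in the results above is a standard image-quality metric and can be computed as below. The `data_range` handling is a common convention, not necessarily the one used in this study.

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference image and a
    (denoised or noisy) test image. If data_range is not given, it is
    taken as the dynamic range of the reference (one common convention).
    """
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)  # mean squared error
    return 10.0 * np.log10(data_range ** 2 / mse)
```

An 8 dB PSNR gain, as reported for the 3%-dose cases, corresponds to roughly a 6.3-fold reduction in mean squared error relative to the reference.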


Subject(s)
Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Pulmonary Emphysema/diagnostic imaging, Radiation Dosage, Signal-To-Noise Ratio, Tomography, X-Ray Computed, Image Enhancement, Lung/diagnostic imaging