Results 1 - 6 of 6
1.
Article in English | MEDLINE | ID: mdl-38741597

ABSTRACT

Pulmonary emphysema is a progressive lung disease that requires accurate evaluation for optimal management. This task, feasible with quantitative CT, is particularly challenging because scanner and patient attributes change over time, negatively impacting CT-derived quantitative measures. Efforts to minimize such variations have been limited by the absence of ground truth in clinical data, forcing reliance on clinical surrogates that may not correspond one-to-one with CT-based findings. This study aimed to develop the first suite of human models with emphysema at multiple time points, enabling longitudinal assessment of disease progression with access to ground truth. A total of 14 virtual subjects were modeled across three time points. Each human model was virtually imaged using a validated imaging simulator (DukeSim) modeling an energy-integrating CT scanner. The models were scanned at two dose levels and reconstructed with two reconstruction kernels, two slice thicknesses, and two pixel sizes. The developed longitudinal models were further used to demonstrate their utility in algorithm testing and development: two previously developed image processing algorithms (CT-HARMONICA, EmphysemaSeg) were evaluated. The results demonstrated the efficacy of both algorithms in improving the accuracy and precision of longitudinal quantifications, reducing errors from 6.1±6.3% to 1.1±1.1% and 1.6±2.2%, respectively, across years 0-5. Further investigation of EmphysemaSeg identified that baseline emphysema severity, defined as >5% emphysema at year 0, contributed to its reduced performance. This finding highlights the value of virtual imaging trials in enhancing the explainability of algorithms. Overall, the developed longitudinal human models enabled ground-truth-based assessment of image processing algorithms for lung quantifications.
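For context, a common quantitative-CT emphysema index of the kind these algorithms harmonize is the low-attenuation area fraction: the percentage of lung voxels at or below a threshold such as -950 HU. Below is a minimal sketch, assuming a Hounsfield-unit volume and a precomputed lung mask; the array names and threshold are illustrative, and this is not the CT-HARMONICA or EmphysemaSeg implementation.

```python
# Minimal sketch of a standard quantitative-CT emphysema index (LAA-950):
# the percentage of lung voxels at or below -950 HU. Inputs are assumed to
# be a CT volume in Hounsfield units and a binary lung mask.
import numpy as np

def emphysema_percent(ct_hu: np.ndarray, lung_mask: np.ndarray,
                      threshold_hu: float = -950.0) -> float:
    """Percent of lung voxels at or below the emphysema threshold."""
    lung_voxels = ct_hu[lung_mask.astype(bool)]
    return 100.0 * float(np.mean(lung_voxels <= threshold_hu))

# Longitudinal error against ground truth would then be, per time point:
# err = abs(emphysema_percent(...) - true_percent)
```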

2.
Article in English | MEDLINE | ID: mdl-38765483

ABSTRACT

Parametric response mapping (PRM) is a voxel-based quantitative CT imaging biomarker that measures the severity of chronic obstructive pulmonary disease (COPD) by jointly analyzing inspiratory and expiratory CT scans. Although PRM-derived measurements have been shown to predict disease severity and phenotype, their quantitative accuracy is impacted by variability in scanner settings and patient conditions. The aim of this study was to evaluate the variability of PRM-based measurements due to changes in scanner type and configuration. We developed 10 human chest models with emphysema and air trapping at end-inspiration and end-expiration states. These models were virtually imaged using a scanner-specific CT simulator (DukeSim) to create CT images at different acquisition settings for energy-integrating and photon-counting CT systems. The CT images were used to estimate PRM maps, and the quantified measurements were compared with ground-truth values to evaluate measurement deviations. Results showed that PRM measurements varied with scanner type and configuration. The emphysema volume was overestimated by 3±9.5% (mean ± standard deviation) of the lung volume, and the functional small airway disease (fSAD) volume was underestimated by 7.5±19% of the lung volume. PRM measurements were more accurate and precise when images were acquired on the photon-counting CT system at the higher dose and reconstructed with the smoother kernel and larger pixel size. This study demonstrates the development and utility of virtual imaging tools for systematic assessment of the accuracy of a quantitative biomarker.
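PRM assigns each lung voxel a class from its paired inspiratory and expiratory attenuation. A minimal sketch follows, assuming the expiratory volume is already registered to the inspiratory one and using the commonly cited thresholds of -950 HU (inspiration) and -856 HU (expiration), which may differ from this study's exact settings.

```python
# Minimal PRM voxel classification sketch. Assumes the expiratory scan has
# been deformably registered to the inspiratory scan; thresholds follow
# common PRM practice, not necessarily this study's configuration.
import numpy as np

def prm_labels(insp_hu: np.ndarray, exp_hu: np.ndarray,
               lung_mask: np.ndarray) -> np.ndarray:
    """0 = outside lung, 1 = normal, 2 = fSAD, 3 = emphysema."""
    labels = np.zeros(insp_hu.shape, dtype=np.uint8)
    lung = lung_mask.astype(bool)
    labels[lung & (insp_hu >= -950) & (exp_hu >= -856)] = 1  # normal
    labels[lung & (insp_hu >= -950) & (exp_hu < -856)] = 2   # fSAD
    labels[lung & (insp_hu < -950) & (exp_hu < -856)] = 3    # emphysema
    return labels

# Volume fractions as percent of lung volume, e.g. for fSAD:
# 100 * np.count_nonzero(labels == 2) / np.count_nonzero(lung_mask)
```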

3.
ArXiv ; 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38351932

ABSTRACT

Purpose: Digital phantoms are a key component of virtual imaging trials (VITs), which aim to assess and optimize new medical imaging systems and algorithms. However, these phantoms vary in their voxel resolution, appearance, and structural details. This study examines whether and how variations between digital phantoms influence system optimization, with digital breast tomosynthesis (DBT) as the chosen modality. Methods: We selected widely used, open-access digital breast phantoms generated with different methods. For each phantom type, we created an ensemble of DBT images to test acquisition strategies. Localization ROC (LROC) studies with human observers were used to assess performance for each case. The noise power spectrum (NPS) was estimated to compare the phantoms' structural components. Further, we computed several gaze metrics to quantify gaze patterns when viewing images generated from the different phantom types. Results: Our LROC results show that the arc samplings for peak performance were approximately 2.5° and 6° for the Bakic and XCAT breast phantoms, respectively, for a 3-mm lesion detection task, indicating that system optimization outcomes from VITs can vary with phantom type and structural frequency content. Additionally, a significant correlation (p < 0.01) between gaze metrics and diagnostic performance suggests that gaze analysis can be used to understand and evaluate task difficulty in VITs. Conclusion: Our results point to the critical need to evaluate realism in digital phantoms and to ensure sufficient structural variation at spatial frequencies relevant to the signal size for an intended task. In addition, standardizing phantom generation and validation tools may help reduce discrepancies among independently conducted VITs for system or algorithmic optimization.
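The NPS used here to compare phantom structural content is typically estimated from an ensemble of uniform background patches. A minimal sketch of one standard recipe follows; the ROI extraction and detrending details are assumptions rather than the paper's exact method.

```python
# Sketch of a 2D noise power spectrum (NPS) estimate from a stack of
# background ROIs. Normalization follows the common convention
# NPS = (pixel area / n_pixels) * mean |FT of mean-subtracted ROI|^2.
import numpy as np

def nps_2d(rois: np.ndarray, pixel_mm: float) -> np.ndarray:
    """rois: (N, H, W) stack of background patches; returns (H, W) NPS."""
    n, h, w = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)  # remove per-ROI DC
    ft = np.fft.fftshift(np.fft.fft2(rois), axes=(1, 2))
    return (np.abs(ft) ** 2).mean(axis=0) * (pixel_mm ** 2) / (h * w)
```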

4.
Sci Rep ; 10(1): 13510, 2020 08 11.
Article in English | MEDLINE | ID: mdl-32782415

ABSTRACT

Image texture, the relative spatial arrangement of intensity values in an image, encodes valuable information about the scene, much of which remains untapped. Understanding how to decipher textural details would afford another method of extracting knowledge of the physical world from images. In this work, we attempt to bridge the research gap between quantitative texture analysis and the visual perception of textures. The impact of changes in image texture on human observers' ability to perform signal detection and localization tasks in complex digital images is not well understood. We examine this question by studying task-based human observer performance in detecting and localizing signals in tomographic breast images, and we investigate how system changes affect the formation of second-order image texture. We used digital breast tomosynthesis (DBT), an FDA-approved tomographic X-ray breast imaging method, as the modality of choice for our preliminary results. Our human observer studies involve localization ROC (LROC) studies for low-contrast mass detection in DBT; simulated images are used as they offer the benefit of known ground truth. Our results show that changes in system geometry or processing lead to changes in image texture magnitudes, and that variations in several well-known texture features estimated from digital images correlate with human observer detection-localization performance for signals embedded in them. This insight can enable efficient and practical techniques for identifying the best imaging system designs, algorithms, or filtering tools by examining changes in these texture features. This concept linking texture feature estimates and task-based image quality assessment can be extended to several other imaging modalities and applications, and can offer feedback for system and algorithm design aimed at improving perceptual performance. The broader impact spans a wide array of areas, including imaging system design, image processing, data science, machine learning, computer vision, and perceptual and vision science. Our results also counsel caution in using these texture features as image-based radiomic features or as predictive markers for risk assessment, as they are sensitive to system and image processing changes.
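Second-order texture features of the kind examined here are usually derived from a gray-level co-occurrence matrix (GLCM). The sketch below computes one such feature, Haralick contrast, for a one-pixel horizontal offset; the quantization level and offset are illustrative assumptions, not the study's feature set.

```python
# Sketch of one second-order texture feature: GLCM (Haralick) contrast for
# horizontally adjacent pixel pairs, after quantizing the image to a small
# number of gray levels.
import numpy as np

def glcm_contrast(img: np.ndarray, levels: int = 16) -> float:
    """GLCM contrast for a one-pixel horizontal offset."""
    edges = np.linspace(img.min(), img.max(), levels + 1)
    q = np.digitize(img, edges[1:-1])            # labels in 0..levels-1
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()   # co-occurring pairs
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1.0)                 # accumulate pair counts
    glcm /= glcm.sum()                           # normalize to probabilities
    i, j = np.indices(glcm.shape)
    return float(np.sum(glcm * (i - j) ** 2))    # Haralick contrast
```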


Subject(s)
Mammography , Breast Neoplasms/diagnostic imaging , Female , Humans , Image Processing, Computer-Assisted , Observer Variation , ROC Curve , Visual Perception
5.
IEEE Trans Med Imaging ; 39(11): 3321-3330, 2020 11.
Article in English | MEDLINE | ID: mdl-32356742

ABSTRACT

Anatomical and quantum noise inhibit the detection of malignancies in clinical images such as digital mammography (DM), digital breast tomosynthesis (DBT), and breast CT (bCT). In this work, we examine the relative influence and interactions of these two types of noise on the task of low-contrast mass detection in DBT. We show how changing levels of quantum noise contribute to the estimated power-law slope β, by varying DBT acquisition parameters as well as by spatial filtering such as adaptive Wiener filtering. Finally, we examine, via human observer LROC studies, whether power spectral parameters obtained from DBT images correlate with mass detectability in those images. Our results show that lower values of the power-law slope β can result from heightened quantum noise or image artifacts and do not necessarily imply reduced anatomical noise or improved signal detectability for the given imaging system. These results strengthen the argument that when the power-law magnitude K is varying, β is less relevant to lesion detectability. Our preliminary results also point to K values having a strong correlation with human observer performance, at least for the task shown in this paper. As a byproduct of these main results, we also show that while changes in acquisition geometry can improve mass detectability, efficient filters such as adaptive Wiener filtering can significantly improve the detection of low-contrast masses in DBT.
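The parameters K and β refer to the standard anatomical-noise power law P(f) ≈ K / f^β, fit to a radially averaged power spectrum on log-log axes. A minimal sketch follows, assuming precomputed 1D frequency and spectrum arrays; the fit range is an illustrative choice.

```python
# Sketch of fitting the anatomical-noise power law P(f) ~ K / f**beta via
# linear regression in log-log space: log P = log K - beta * log f.
# Inputs are assumed to be a radially averaged spectrum and its frequencies.
import numpy as np

def fit_power_law(freq: np.ndarray, power: np.ndarray,
                  fmin: float = 0.1, fmax: float = 1.0):
    """Return (K, beta) fit over the frequency band [fmin, fmax] (cyc/mm)."""
    keep = (freq >= fmin) & (freq <= fmax)
    slope, intercept = np.polyfit(np.log(freq[keep]), np.log(power[keep]), 1)
    return float(np.exp(intercept)), float(-slope)  # K, beta
```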


Subject(s)
Breast Neoplasms , Radiographic Image Enhancement , Breast/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Female , Humans , Mammography , Perception , Tomography, X-Ray Computed
6.
Phys Med Biol ; 64(14): 145001, 2019 07 11.
Article in English | MEDLINE | ID: mdl-31216514

ABSTRACT

Spectral images from photon counting detectors are being explored for material decomposition applications, such as obtaining quantitative maps of tissue types and contrast agents. While these detectors allow acquisition of multi-energy data in a single exposure, separating the total photon counts into multiple energy bins can lead to count starvation and increased quantum noise in the resulting maps. Furthermore, the complex decomposition problem is often solved in a single inversion step, making it difficult to separate materials with similar properties. We propose a multi-step decomposition method that solves the problem in multiple steps using the same spectral data collected in a single exposure. Each step focuses on the quantitative accuracy of a single material, and the energy bins used in that step can be chosen flexibly; the result then becomes part of the input data for the next step. This makes the problem less ill-conditioned and allows better quantitation of the more challenging materials within the object. In comparison to a conventional single-step method, we show excellent quantitative accuracy for decomposing up to six materials, involving a mix of soft tissue types and contrast agents, in micro-CT-sized digital phantoms.
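To make the stepwise idea concrete, the sketch below decomposes a single measurement under a linearized forward model: each step estimates one material from a chosen bin subset and removes its contribution from the data before the next step. The forward model and bin choices are illustrative assumptions, not the paper's calibration.

```python
# Schematic of multi-step material decomposition under a linearized model
# counts ~ sensitivity @ densities. One material is solved per step from a
# flexibly chosen bin subset; its contribution is subtracted before the
# next, easier-conditioned step.
import numpy as np

def multi_step_decompose(counts: np.ndarray, sensitivity: np.ndarray,
                         steps: list) -> np.ndarray:
    """
    counts:      (n_bins,) measured bin data for one voxel/ray
    sensitivity: (n_bins, n_materials) linearized forward model
    steps:       ordered list of (material_index, bin_indices) pairs
    """
    residual = counts.astype(float)
    x = np.zeros(sensitivity.shape[1])
    for m, bins in steps:
        a = sensitivity[bins, m]
        x[m] = a @ residual[bins] / (a @ a)   # 1D least-squares estimate
        residual -= sensitivity[:, m] * x[m]  # remove this material's signal
    return x
```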


Subject(s)
Algorithms , Contrast Media , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Photons , X-Ray Microtomography/methods , Humans