1.
J Nucl Cardiol ; 30(6): 2773-2789, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37758961

ABSTRACT

BACKGROUND: Absolute quantitative myocardial perfusion SPECT requires that both aleatory and epistemic uncertainties be addressed while providing image quality sufficient for lesion detection and characterization. Iterative reconstruction methods enable mitigation of the root causes of image degradation. This study aimed to determine the feasibility of a new SPECT/CT method with integrated corrections intended to enable absolute quantitative cardiac imaging (xSPECT Cardiac; xSC).

METHODS: We compared images from the xSC prototype and conventional SPECT (Flash3D™) acquired at rest from 56 patients aged 71 ± 12 y with suspected coronary heart disease. The xSC prototype comprised list-mode acquisition with continuous rotation, followed by iterative reconstruction with retrospective electrocardiography (ECG) gating. In addition to accurate image-formation modeling, patient-specific CT-based attenuation correction, and energy-window-based scatter correction, we applied mitigation of patient and organ motion between views (inter-view) and within views (intra-view) for both gated and ungated reconstructions. We then assessed image quality, semiquantitative regional values, and left ventricular function.

RESULTS: The quality of all xSC images was acceptable for clinical purposes. Polar maps showed a more uniform distribution for xSC than for Flash3D, while xSC images showed lower apical counts and higher defect contrast for myocardial infarction (p = 0.0004). Wall motion, the 16-gate volume curve, and ejection fraction were at least acceptable, with indications of improvement. The clinical prospectively gated method rejected ≥20% of beats in 6 patients, whereas retrospective gating used an average of 98% of beats, excluding 2%. We used the list-mode data to create a product-equivalent prospectively gated dataset, which showed that the xSC method generated 18% higher counts and images with less noise, with comparable functional variables of volume and LVEF (p = NS).

CONCLUSIONS: Quantitative myocardial perfusion imaging with the list-mode-based xSPECT Cardiac prototype is feasible, yielding images of at least acceptable quality.
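The retrospective gating step described in METHODS lends itself to a short illustration. Below is a minimal Python sketch of binning list-mode events into 16 cardiac gates by phase within each R-R interval, rejecting beats whose length deviates from the median by more than a fixed tolerance. The data layout, function name, and the median-based rejection rule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def retrospective_gate(event_times, r_peaks, n_gates=16, rr_tolerance=0.20):
    """Assign list-mode events to cardiac gates by phase within each beat.

    event_times : 1-D array of event timestamps (s); hypothetical format
    r_peaks     : 1-D array of R-peak timestamps (s) from the recorded ECG
    Beats whose R-R interval deviates from the median by more than
    rr_tolerance are rejected, and their events are discarded.
    """
    rr = np.diff(r_peaks)                      # beat lengths
    median_rr = np.median(rr)
    good_beat = np.abs(rr - median_rr) <= rr_tolerance * median_rr

    # Locate the beat containing each event.
    beat_idx = np.searchsorted(r_peaks, event_times, side="right") - 1
    in_range = (beat_idx >= 0) & (beat_idx < len(rr))

    gates = np.full(event_times.shape, -1, dtype=int)  # -1 = rejected
    ok = in_range & good_beat[np.clip(beat_idx, 0, len(rr) - 1)]
    phase = (event_times[ok] - r_peaks[beat_idx[ok]]) / rr[beat_idx[ok]]
    gates[ok] = np.minimum((phase * n_gates).astype(int), n_gates - 1)
    return gates  # gate index per event; each gate's events feed one reconstruction
```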


Subject(s)
Myocardial Perfusion Imaging , Humans , Retrospective Studies , Heart/diagnostic imaging , Tomography, Emission-Computed, Single-Photon , Respiration , Arrhythmias, Cardiac , Image Processing, Computer-Assisted
2.
Med Phys ; 45(7): 3019-3030, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29704868

ABSTRACT

PURPOSE: Task-based assessment of image quality using model observers is increasingly used across imaging modalities. However, the computation of model-observer performance needs standardization, as well as well-established trust in its implementation methodology and uncertainty estimation. The purpose of this work was to determine the degree of equivalence of channelized Hotelling observer (CHO) performance and uncertainty estimation through an intercomparison exercise.

MATERIALS AND METHODS: Image samples for estimating model-observer performance on detection tasks were generated from two-dimensional CT slices of a uniform water phantom. A common set of images was sent to participating laboratories, which were asked to perform and document the following tasks: (a) estimate the detectability index of a well-defined CHO and its uncertainty under three conditions involving targets of different sizes, all at the same dose; and (b) apply this CHO to an image set whose ground truth was unknown to participants (lower image dose). In addition, on an optional basis, we asked the participating laboratories to (c) estimate the performance of real human observers in a psychophysical experiment of their choice. Each of the 13 participating laboratories was confidentially assigned a participant number, and image sets could be downloaded through a secure server. Results were distributed with each participant identifiable by its number, and each laboratory was then able to revise its results with justification, since model-observer calculations are not yet routine and are potentially error-prone.

RESULTS: The detectability index increased with signal size for all participants and was very consistent for the 6-mm target, while showing higher variability for the 8- and 10-mm targets. The uncertainty estimates spanned one order of magnitude between the lowest and the highest.

CONCLUSIONS: This intercomparison helped define the state of the art of model-observer performance computation and, with thirteen participants, reflects openness and trust within the medical imaging community. The performance of a CHO with explicitly defined channels and a relatively large number of test images was consistently estimated by all participants. In contrast, there is not yet agreement on how to estimate the variance of detectability in the training-and-testing setting.
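For readers unfamiliar with the CHO computation that participants were asked to implement, the following is a minimal NumPy sketch of the standard detectability-index calculation. The channel matrix, image format, and pooled-covariance choice are assumptions for illustration; the intercomparison's protocol defines the actual channels and the training/testing procedure.

```python
import numpy as np

def cho_detectability(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer detectability index (d').

    signal_imgs : (n_s, n_pix) signal-present images, flattened
    noise_imgs  : (n_n, n_pix) signal-absent images, flattened
    channels    : (n_pix, n_ch) channel matrix, e.g. difference-of-Gaussians
                  channels; the channels actually used in the exercise are
                  defined in its protocol, not here.
    """
    v_s = signal_imgs @ channels          # channel outputs, (n_s, n_ch)
    v_n = noise_imgs @ channels
    dv = v_s.mean(axis=0) - v_n.mean(axis=0)
    # Pooled intra-class channel covariance.
    S = 0.5 * (np.cov(v_s, rowvar=False) + np.cov(v_n, rowvar=False))
    w = np.linalg.solve(S, dv)            # Hotelling template in channel space
    return float(np.sqrt(dv @ w))         # d' = sqrt(dv^T S^-1 dv)
```

Much of the disagreement reported in the CONCLUSIONS concerns how the uncertainty of d' is estimated when images are split into training and testing sets, a step this sketch deliberately omits.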


Subject(s)
Image Processing, Computer-Assisted , Laboratories , Tomography, X-Ray Computed , Observer Variation , Uncertainty
3.
J Med Imaging (Bellingham) ; 3(1): 011010, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26866048

ABSTRACT

Task-based medical image quality is typically measured by the degree to which a human observer can perform a diagnostic task in a psychophysical observer study. In a typical study, an observer is asked to provide a numerical score quantifying their confidence that an image contains a diagnostic marker. These scores are then used to measure the observers' diagnostic accuracy, summarized by the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC). Such human studies are difficult to arrange, costly, and time-consuming; moreover, the observers should be experts in the image genre to avoid inconsistent scoring over the course of a lengthy study. In two-alternative forced choice (2AFC) studies, known to be faster, two images are compared simultaneously and the observer gives a single forced-choice response. Unfortunately, the 2AFC approach by itself yields neither a full ROC curve nor a set of image scores.

The aim of this work is to propose a methodology in which multiple rounds of 2AFC studies are used to estimate an image confidence score (i.e., a rating or ranking) and generate the full ROC curve. In the proposed approach, we treat the image confidence score as an unknown rating to be estimated and each 2AFC trial as a two-player match. To achieve this, we use the ELO rating system, which calculates the relative skill levels of players in competitor-versus-competitor games such as chess. The proposed methodology is not limited to ELO; other rating methods such as TrueSkill™, Chessmetrics, or Glicko can also be used.

The presented results, using simulated data, indicate that a full ROC curve can be recovered from several rounds of 2AFC studies, and that the best pairing strategy starts with a first round that pairs abnormal with normal images (as in the classical 2AFC approach), followed by a number of rounds of random pairing. In addition, the proposed method was tested in a pilot human observer study. These pilot results indicate that three to five rounds of 2AFC studies require less observer time than a full scoring study, and that the re-estimated ROC curves and associated AUC values show high statistical agreement with the full scoring study.
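To make the rating mechanics concrete, here is a minimal Python sketch of the standard ELO update applied to 2AFC outcomes, followed by an empirical AUC computed from the final ratings. The K-factor, starting ratings, and data encoding are conventional illustrative choices, not values from the paper.

```python
import numpy as np

def elo_round(ratings, pairs, outcomes, k=32.0):
    """One round of ELO updates from 2AFC comparisons.

    ratings  : dict image_id -> current rating (e.g., start everyone at 1500)
    pairs    : list of (img_a, img_b) shown together in a 2AFC trial
    outcomes : list of 1.0 if the observer picked img_a as "more abnormal",
               0.0 if img_b (names and encoding are illustrative)
    """
    for (a, b), s_a in zip(pairs, outcomes):
        # Standard ELO expected score for player a.
        expected_a = 1.0 / (1.0 + 10 ** ((ratings[b] - ratings[a]) / 400.0))
        ratings[a] += k * (s_a - expected_a)
        ratings[b] += k * ((1.0 - s_a) - (1.0 - expected_a))
    return ratings

def empirical_auc(ratings, labels):
    """AUC from final ratings; labels: dict image_id -> 1 (abnormal) / 0 (normal)."""
    pos = np.array([r for i, r in ratings.items() if labels[i] == 1])
    neg = np.array([r for i, r in ratings.items() if labels[i] == 0])
    # Probability that a random abnormal image outrates a random normal one.
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties
```

In this framing, all images start with equal ratings, and after each round the updated ratings can drive the pairing strategy for the next round (abnormal-versus-normal first, then random pairing, as the results above suggest).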

4.
J Electron Imaging ; 20(3)2011 Jul.
Article in English | MEDLINE | ID: mdl-22347787

ABSTRACT

In this paper, we describe and evaluate a fast implementation of a classical block-matching motion estimation algorithm for multiple Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block-matching algorithm (BMA) uses the summed absolute difference (SAD) error criterion and a full grid search (FS) to find the optimal block displacement. We compared the execution times of the GPU and CPU implementations for images of various sizes, using integer and non-integer search grids. The results show that a GPU card can shorten computation time by a factor of 200 for an integer search grid and 1000 for a non-integer one; the additional speedup for the non-integer grid comes from the GPU's built-in hardware for image interpolation. Further, when using multiple GPU cards, the evaluation shows the importance of how the data are split across the cards, but an almost linear speedup with the number of cards is achievable.

In addition, we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full-grid-search CPU-based motion estimation methods: the Pyramidal Lucas-Kanade optical flow implementation in OpenCV and the Simplified Unsymmetrical multi-Hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed a modest improvement, even though its computational complexity is substantially higher than that of the non-FS CPU implementations. We also demonstrated that for an image sequence of 720 × 480 pixels, a resolution common in video surveillance, the proposed GPU implementation is fast enough for real-time motion estimation at 30 frames per second using two NVIDIA Tesla C1060 GPU cards.
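As a reference for the algorithm being accelerated, here is a minimal CPU-side Python sketch of full-grid-search SAD block matching on an integer grid. The paper's contribution is the CUDA port of this exhaustive search (plus hardware interpolation for non-integer grids); the block size and search range here are illustrative defaults, not the paper's settings.

```python
import numpy as np

def full_search_sad(ref, cur, block=16, search=8):
    """Full-grid-search block matching with the SAD criterion.

    ref, cur : 2-D grayscale frames of equal size (reference and current)
    block    : block edge length in pixels
    search   : maximum displacement in each direction (integer grid)
    Returns per-block (dy, dx) motion vectors.
    """
    h, w = cur.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            cur_blk = cur[y0:y0 + block, x0:x0 + block].astype(int)
            best = (np.inf, 0, 0)
            # Exhaustively test every candidate displacement in the window.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = y0 + dy, x0 + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate block falls outside the frame
                    sad = np.abs(
                        cur_blk - ref[y:y + block, x:x + block].astype(int)
                    ).sum()
                    if sad < best[0]:
                        best = (sad, dy, dx)
            mvs[by, bx] = best[1:]
    return mvs
```

The GPU version parallelizes this naturally: each block (or each candidate displacement) maps to an independent thread group, which is why the exhaustive search, despite its higher complexity, remains competitive with optimized non-exhaustive CPU methods.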
