ABSTRACT
Electronic cleansing (EC) is used for computational removal of residual feces and fluid tagged with an orally administered contrast agent on CT colonographic images to improve the visibility of polyps during virtual endoscopic "fly-through" reading. A recent trend in CT colonography is to perform a low-dose CT scanning protocol with the patient having undergone reduced- or noncathartic bowel preparation. Although several EC schemes exist, they have been developed for use with cathartic bowel preparation and high-radiation-dose CT, and thus, at a low dose with noncathartic bowel preparation, they tend to generate cleansing artifacts that distract and mislead readers. Deep learning can be used for improvement of the image quality with EC at CT colonography. Deep learning EC can produce substantially fewer cleansing artifacts at dual-energy than at single-energy CT colonography, because the dual-energy information can be used to identify relevant material in the colon more precisely than is possible with the single x-ray attenuation value. Because the number of annotated training images is limited at CT colonography, transfer learning can be used for appropriate training of deep learning algorithms. The purposes of this article are to review the causes of cleansing artifacts that distract and mislead readers in conventional EC schemes, to describe the applications of deep learning and dual-energy CT colonography to EC of the colon, and to demonstrate the improvements in image quality with EC and deep learning at single-energy and dual-energy CT colonography with noncathartic bowel preparation. ©RSNA, 2018.
Subject(s)
Colonography, Computed Tomographic/methods , Colorectal Neoplasms/diagnostic imaging , Deep Learning , Algorithms , Cathartics/administration & dosage , Contrast Media , Feces , Humans , Radiation Dosage
ABSTRACT
The adoption of artificial intelligence (AI) tools in medicine poses challenges to existing clinical workflows. This commentary discusses the necessity of context-specific quality assurance (QA), emphasizing the need for robust QA measures with quality control (QC) procedures that encompass (1) acceptance testing (AT) before clinical use, (2) continuous QC monitoring, and (3) adequate user training. The discussion also covers essential components of AT and QA, illustrated with real-world examples. We also highlight what we see as the shared responsibility of manufacturers or vendors, regulators, healthcare systems, medical physicists, and clinicians to enact appropriate testing and oversight to ensure a safe and equitable transformation of medicine through AI.
ABSTRACT
Innovation in medical imaging artificial intelligence (AI)/machine learning (ML) demands extensive data collection, algorithmic advancements, and rigorous performance assessments encompassing aspects such as generalizability, uncertainty, bias, fairness, trustworthiness, and interpretability. Achieving widespread integration of AI/ML algorithms into diverse clinical tasks will demand a steadfast commitment to overcoming issues in model design, development, and performance assessment. The complexities of AI/ML clinical translation present substantial challenges, requiring engagement with relevant stakeholders, assessment of cost-effectiveness for user and patient benefit, timely dissemination of information relevant to robust functioning throughout the AI/ML lifecycle, consideration of regulatory compliance, and feedback loops for real-world performance evidence. This commentary addresses several hurdles for the development and adoption of AI/ML technologies in medical imaging. Comprehensive attention to these underlying and often subtle factors is critical not only for tackling the challenges but also for exploring novel opportunities for the advancement of AI in radiology.
ABSTRACT
BACKGROUND: Colon screening by optical colonoscopy (OC) or computed tomographic colonography (CTC) requires a laxative bowel preparation, which inhibits screening participation. OBJECTIVE: To assess the performance of detecting adenomas 6 mm or larger and patient experience of laxative-free, computer-aided CTC. DESIGN: Prospective test comparison of laxative-free CTC and OC. The CTC included electronic cleansing and computer-aided detection. Optical colonoscopy examinations were initially blinded to CTC results, which were subsequently revealed during colonoscope withdrawal; this method permitted reexamination to resolve discrepant findings. Unblinded OC served as the reference standard. (ClinicalTrials.gov registration number: NCT01200303). SETTING: Multicenter ambulatory imaging and endoscopy centers. PARTICIPANTS: 605 adults aged 50 to 85 years at average to moderate risk for colon cancer. MEASUREMENTS: Per-patient sensitivity and specificity of CTC and first-pass OC for detecting adenomas at thresholds of 10 mm or greater, 8 mm or greater, and 6 mm or greater; per-lesion sensitivity and survey data describing patient experience with preparations and examinations. RESULTS: For adenomas 10 mm or larger, per-patient sensitivity of CTC was 0.91 (95% CI, 0.71 to 0.99) and specificity was 0.85 (CI, 0.82 to 0.88); sensitivity of OC was 0.95 (CI, 0.77 to 1.00) and specificity was 0.89 (CI, 0.86 to 0.91). Sensitivity of CTC was 0.70 (CI, 0.53 to 0.83) for adenomas 8 mm or larger and 0.59 (CI, 0.47 to 0.70) for those 6 mm or larger; sensitivity of OC was 0.88 (CI, 0.73 to 0.96) for adenomas 8 mm or larger and 0.76 (CI, 0.64 to 0.85) for those 6 mm or larger. The specificity of OC was 0.91 at the threshold of 8 mm or larger and 0.94 at 6 mm or larger. Specificity of OC was greater than that of CTC, which was 0.86 at the threshold of 8 mm or larger and 0.88 at 6 mm or larger (P = 0.02).
Reported participant experience for comfort and difficulty of examination preparation was better with CTC than OC. LIMITATIONS: There were 3 CTC readers. The survey instrument was not independently validated. CONCLUSION: Computed tomographic colonography was accurate in detecting adenomas 10 mm or larger but less so for smaller lesions. Patient experience was better with laxative-free CTC. These results suggest a possible role for laxative-free CTC as an alternate screening method.
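The per-patient sensitivity and specificity figures above come from standard 2×2 counts with score-based confidence intervals. A minimal sketch in Python; the counts below are hypothetical, chosen only to land near the reported 0.91/0.85 point estimates, and the Wilson interval is one common choice of CI method, not necessarily the one used in the trial:

```python
import math

def sens_spec(tp, fn, tn, fp):
    """Per-patient sensitivity and specificity from 2x2 counts."""
    return tp / (tp + fn), tn / (tn + fp)

def wilson_ci(k, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical counts for adenomas >= 10 mm (not the trial's raw data):
sens, spec = sens_spec(tp=20, fn=2, tn=495, fp=88)
lo, hi = wilson_ci(20, 22)
print(f"sensitivity {sens:.2f} ({lo:.2f}-{hi:.2f}), specificity {spec:.2f}")
```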
Subject(s)
Adenomatous Polyps/diagnostic imaging , Colonic Polyps/diagnostic imaging , Colonography, Computed Tomographic/methods , Adenomatous Polyps/pathology , Aged , Aged, 80 and over , Asymptomatic Diseases , Colonic Polyps/pathology , Colonography, Computed Tomographic/adverse effects , Colonoscopy/adverse effects , Colonoscopy/methods , Female , Humans , Laxatives , Male , Middle Aged , Predictive Value of Tests , Prospective Studies
ABSTRACT
Purpose: Deep supervised learning provides an effective approach for developing robust models for various computer-aided diagnosis tasks. However, there is often an underlying assumption that the frequencies of the samples between the different classes of the training dataset are either similar or balanced. In real-world medical data, the samples of positive classes often occur too infrequently to satisfy this assumption. Thus, there is an unmet need for deep-learning systems that can automatically identify and adapt to the real-world conditions of imbalanced data. Approach: We propose a deep Bayesian ensemble learning framework to address the representation learning problem of long-tailed and out-of-distribution (OOD) samples when training from medical images. By estimating the relative uncertainties of the input data, our framework can adapt to imbalanced data for learning generalizable classifiers. We trained and tested our framework on four public medical imaging datasets with various imbalance ratios and imaging modalities across three different learning tasks: semantic medical image segmentation, OOD detection, and in-domain generalization. We compared the performance of our framework with those of state-of-the-art comparator methods. Results: Our proposed framework outperformed the comparator models significantly across all performance metrics (pairwise t-test: p<0.01) in the semantic segmentation of high-resolution CT and MR images as well as in the detection of OOD samples (p<0.01), thereby showing significant improvement in handling the associated long-tailed data distribution. The results of the in-domain generalization also indicated that our framework can enhance the prediction of retinal glaucoma, contributing to clinical decision-making processes. 
Conclusions: Training of the proposed deep Bayesian ensemble learning framework with dynamic Monte Carlo dropout and a combination of losses yielded the best generalization to unseen samples from imbalanced medical imaging datasets across different learning tasks.
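The uncertainty estimation behind Monte Carlo dropout can be illustrated on a toy linear "model": dropout stays active at test time, and the spread of repeated stochastic predictions supplies the uncertainty signal. The model, weights, and entropy-based uncertainty measure below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W, n_passes=100, p_drop=0.5):
    """Monte Carlo dropout: keep dropout active at inference time and
    average softmax outputs over repeated stochastic forward passes.
    W is the (features x classes) weight matrix of a toy linear model."""
    probs = []
    for _ in range(n_passes):
        mask = rng.random(x.shape) >= p_drop        # Bernoulli dropout mask
        h = (x * mask / (1 - p_drop)) @ W           # inverted-dropout scaling
        e = np.exp(h - h.max())                     # stable softmax
        probs.append(e / e.sum())
    probs = np.asarray(probs)
    mean = probs.mean(axis=0)                       # predictive mean
    entropy = -(mean * np.log(mean + 1e-12)).sum()  # predictive uncertainty
    return mean, entropy

x = np.array([1.0, -0.5, 2.0])                      # a toy input sample
W = rng.normal(size=(3, 2))                         # toy 2-class weights
mean, unc = mc_dropout_predict(x, W)
print(mean, unc)
```

Inputs whose predictions flip under dropout yield high predictive entropy, which is one way such a framework can down-weight long-tailed or out-of-distribution samples.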
ABSTRACT
Rapid advances in artificial intelligence (AI) and machine learning, and specifically in deep learning (DL) techniques, have enabled broad application of these methods in health care. The promise of the DL approach has spurred further interest in computer-aided diagnosis (CAD) development and applications using both "traditional" machine learning methods and newer DL-based methods. We use the term CAD-AI to refer to this expanded clinical decision support environment that uses traditional and DL-based AI methods. Numerous studies have been published to date on the development of machine learning tools for computer-aided, or AI-assisted, clinical tasks. However, most of these machine learning models are not ready for clinical deployment. It is of paramount importance to ensure that a clinical decision support tool undergoes proper training and rigorous validation of its generalizability and robustness before adoption for patient care in the clinic. To address these important issues, the American Association of Physicists in Medicine (AAPM) Computer-Aided Image Analysis Subcommittee (CADSC) is charged, in part, to develop recommendations on practices and standards for the development and performance assessment of computer-aided decision support systems. The committee has previously published two opinion papers on the evaluation of CAD systems and issues associated with user training and quality assurance of these systems in the clinic. With machine learning techniques continuing to evolve and CAD applications expanding to new stages of the patient care process, the current task group report considers the broader issues common to the development of most, if not all, CAD-AI applications and their translation from the bench to the clinic. The goal is to bring attention to the proper training and validation of machine learning algorithms that may improve their generalizability and reliability and accelerate the adoption of CAD-AI systems for clinical decision support.
Subject(s)
Artificial Intelligence , Diagnosis, Computer-Assisted , Humans , Reproducibility of Results , Diagnosis, Computer-Assisted/methods , Diagnostic Imaging , Machine Learning
ABSTRACT
Existing electronic cleansing (EC) methods for computed tomographic colonography (CTC) are generally based on image segmentation, which limits their accuracy to that of the underlying voxels. Because of the limitations of the available CTC datasets for training, traditional deep learning is of limited use in EC. The purpose of this study was to evaluate the technical feasibility of using a novel self-supervised adversarial learning scheme to perform EC with subvoxel accuracy from a limited training dataset. A three-dimensional (3D) generative adversarial network (3D GAN) was pre-trained to perform EC on CTC datasets of an anthropomorphic phantom. The 3D GAN was then fine-tuned to each input case by use of the self-supervised scheme. The architecture of the 3D GAN was optimized by use of a phantom study. The visually perceived quality of the virtual cleansing by the resulting 3D GAN compared favorably to that of commercial EC software on virtual 3D fly-through examinations of 18 clinical CTC cases. Thus, the proposed self-supervised 3D GAN, which can be trained on a small dataset without image annotations to perform EC with subvoxel accuracy, is a potentially effective approach for addressing the remaining technical problems of EC in CTC.
ABSTRACT
The application of computer-aided detection (CAD) is expected to improve reader sensitivity and to reduce inter-observer variance in computed tomographic (CT) colonography. However, current CAD systems display a large number of false-positive (FP) detections. Reviewing a large number of FP CAD detections increases interpretation time, and it may also reduce the specificity and/or sensitivity of a computer-assisted reader. Therefore, it is important to be aware of the patterns and pitfalls of FP CAD detections. This pictorial essay reviews common sources of FP CAD detections that have been observed in the literature and in our experiments in computer-assisted CT colonography. The recommended computer-assisted reading technique is also described.
Subject(s)
Colonic Diseases/diagnostic imaging , Colonography, Computed Tomographic , Radiographic Image Interpretation, Computer-Assisted , Algorithms , Artifacts , False Positive Reactions , Humans , Imaging, Three-Dimensional , Sensitivity and Specificity
ABSTRACT
Because of the rapid spread and wide range of the clinical manifestations of the coronavirus disease 2019 (COVID-19), fast and accurate estimation of the disease progression and mortality is vital for the management of the patients. Currently available image-based prognostic predictors for patients with COVID-19 are largely limited to semi-automated schemes with manually designed features and supervised learning, and the survival analysis is largely limited to logistic regression. We developed a weakly unsupervised conditional generative adversarial network, called pix2surv, which can be trained to estimate the time-to-event information for survival analysis directly from the chest computed tomography (CT) images of a patient. We show that the performance of pix2surv based on CT images significantly outperforms those of existing laboratory tests and image-based visual and quantitative predictors in estimating the disease progression and mortality of COVID-19 patients. Thus, pix2surv is a promising approach for performing image-based prognostic predictions.
Subject(s)
COVID-19 , Humans , Prognosis , SARS-CoV-2 , Thorax , Tomography, X-Ray Computed
ABSTRACT
The rapid increase of patients with coronavirus disease 2019 (COVID-19) has introduced major challenges to healthcare services worldwide. Therefore, fast and accurate clinical assessment of COVID-19 progression and mortality is vital for the management of COVID-19 patients. We developed an automated image-based survival prediction model, called U-survival, which combines deep learning of chest CT images with the established survival analysis methodology of an elastic-net Cox survival model. In an evaluation of 383 COVID-19 positive patients from two hospitals, the prognostic bootstrap prediction performance of U-survival was significantly higher (P < 0.0001) than those of existing laboratory and image-based reference predictors both for COVID-19 progression (maximum concordance index: 91.6% [95% confidence interval 91.5, 91.7]) and for mortality (88.7% [88.6, 88.9]), and the separation between the Kaplan-Meier survival curves of patients stratified into low- and high-risk groups was largest for U-survival (P < 3 × 10-14). The results indicate that U-survival can be used to provide automated and objective prognostic predictions for the management of COVID-19 patients.
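The concordance index used to report U-survival's performance measures how often a higher predicted risk coincides with a shorter survival time. A minimal sketch that ignores censoring for brevity; the times and risk scores are hypothetical:

```python
from itertools import combinations

def concordance_index(times, scores):
    """Harrell's concordance index for uncensored survival data:
    the fraction of comparable pairs in which the subject with the
    shorter survival time has the higher risk score (ties count 0.5)."""
    concordant = ties = pairs = 0
    for (t1, s1), (t2, s2) in combinations(zip(times, scores), 2):
        if t1 == t2:
            continue                      # tied times: skip for simplicity
        pairs += 1
        if s1 == s2:
            ties += 1                     # tied scores count half
        elif (t1 < t2) == (s1 > s2):
            concordant += 1               # shorter time, higher risk
    return (concordant + 0.5 * ties) / pairs

times  = [2, 5, 9, 12]                    # hypothetical survival times
scores = [0.9, 0.7, 0.4, 0.1]             # hypothetical risk scores
print(concordance_index(times, scores))   # perfectly concordant -> 1.0
```

A value of 0.5 corresponds to random ranking, and 1.0 to perfect risk ordering; a real survival pipeline would extend this to handle censored observations.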
Subject(s)
COVID-19/diagnosis , Lung/diagnostic imaging , SARS-CoV-2/physiology , Aged , Automation , COVID-19/mortality , Diagnostic Imaging , Disease Progression , Female , Humans , Male , Middle Aged , Predictive Value of Tests , Prognosis , Survival Analysis , Tomography, X-Ray Computed
ABSTRACT
PURPOSE: Deep learning can be used for improving the performance of computer-aided detection (CADe) in various medical imaging tasks. However, in computed tomographic (CT) colonography, the performance is limited by the relatively small size and the variety of the available training datasets. Our purpose in this study was to develop and evaluate a flow-based generative model for performing 3D data augmentation of colorectal polyps for effective training of deep learning in CADe for CT colonography. METHODS: We developed a 3D convolutional neural network (3D CNN) based on a flow-based generative model (3D Glow) for generating synthetic volumes of interest (VOIs) that have characteristics similar to those of the VOIs of its training dataset. The 3D Glow was trained to generate synthetic VOIs of polyps by use of our clinical CT colonography case collection. The evaluation was performed by use of a human observer study with three observers and by use of a CADe-based polyp classification study with a 3D DenseNet. RESULTS: The area-under-the-curve values of the receiver operating characteristic analysis of the three observers were not statistically significantly different in distinguishing between real polyps and synthetic polyps. When trained with data augmentation by 3D Glow, the 3D DenseNet yielded a statistically significantly higher polyp classification performance than when it was trained with alternative augmentation methods. CONCLUSION: The 3D Glow-generated synthetic polyps are visually indistinguishable from real colorectal polyps. Their application to data augmentation can substantially improve the performance of 3D CNNs in CADe for CT colonography. Thus, 3D Glow is a promising method for improving the performance of deep learning in CADe for CT colonography.
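While a flow-based model such as 3D Glow is beyond a short sketch, the classical alternatives it was compared against typically amount to geometric transforms of the polyp VOI. A minimal example of such augmentation with NumPy; the specific transform choices are illustrative, not the paper's exact baseline:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment_voi(voi):
    """Classical 3D augmentation: random axis flips plus a random
    90-degree rotation about a random pair of axes. Voxel values are
    only rearranged, never changed."""
    out = voi
    for axis in range(3):
        if rng.random() < 0.5:
            out = np.flip(out, axis=axis)
    axes = tuple(rng.choice(3, size=2, replace=False))  # rotation plane
    out = np.rot90(out, k=int(rng.integers(4)), axes=axes)
    return out.copy()

voi = rng.normal(size=(32, 32, 32))   # a synthetic 32^3 volume of interest
aug = augment_voi(voi)
print(aug.shape)
```

Because these transforms only permute voxels, they cannot add new morphology to the training set, which is the gap a generative model such as 3D Glow aims to fill.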
Subject(s)
Colonic Polyps/diagnostic imaging , Colonography, Computed Tomographic/methods , Deep Learning , Neural Networks, Computer , Colonoscopy , Humans , Sensitivity and Specificity
ABSTRACT
Purpose: The identification of abnormalities that are relatively rare within otherwise normal anatomy is a major challenge for deep learning in the semantic segmentation of medical images. The small number of samples of the minority classes in the training data makes the learning of an optimal classification boundary challenging, while the more frequently occurring samples of the majority class hamper the generalization of the boundary between the infrequent target classes and the majority class. In this paper, we developed a novel generative multi-adversarial network, called Ensemble-GAN, for mitigating this class imbalance problem in the semantic segmentation of abdominal images. Method: The Ensemble-GAN framework is composed of a single-generator and multi-discriminator variant for handling the class imbalance problem to provide better generalization than existing approaches. The ensemble model aggregates the estimates of multiple models by training from different initializations and losses on various subsets of the training data. The single generator network analyzes the input image as a condition to predict a corresponding semantic segmentation image by use of feedback from the ensemble of discriminator networks. To evaluate the framework, we trained it on two public datasets with different imbalance ratios and imaging modalities: CHAOS 2019 and LiTS 2017. Result: In terms of the F1 score, the accuracies of the semantic segmentation of healthy spleen, liver, and left and right kidneys were 0.93, 0.96, 0.90, and 0.94, respectively. The overall F1 scores for the simultaneous segmentation of the lesions and liver were 0.83 and 0.94, respectively. Conclusion: The proposed Ensemble-GAN framework demonstrated outstanding performance in the semantic segmentation of medical images in comparison with other approaches on popular abdominal imaging benchmarks. The Ensemble-GAN has the potential to segment abdominal images more accurately than human experts.
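The F1 score reported for these segmentations is equivalent to the Dice coefficient on binary masks. A minimal sketch on synthetic masks:

```python
import numpy as np

def f1_score(pred, target):
    """F1 (Dice) score between two binary segmentation masks:
    2*|A intersect B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

# Two overlapping 4x4 squares on an 8x8 grid (synthetic masks)
pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:6] = True                 # 16 predicted pixels
target = np.zeros((8, 8), dtype=bool)
target[3:7, 3:7] = True               # 16 target pixels, 9 overlap
print(f1_score(pred, target))         # 2*9/32 = 0.5625
```

The metric penalizes both missed target pixels and spurious predictions symmetrically, which is why it is the standard summary score for imbalanced segmentation tasks.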
Subject(s)
Abdomen/diagnostic imaging , Image Processing, Computer-Assisted/methods , Liver/diagnostic imaging , Deep Learning , Humans , Magnetic Resonance Imaging , Neural Networks, Computer , Tomography, X-Ray Computed
ABSTRACT
In computed tomographic colonography (CTC), orally administered fecal-tagging agents can be used to indicate residual feces and fluid that could otherwise hide or imitate lesions on CTC images of the colon. Although the use of fecal tagging improves the detection accuracy of CTC, it can introduce image artifacts that may cause lesions that are covered by fecal tagging to have a different visual appearance than those not covered by fecal tagging. This can distort the values of image-based computational features, thereby reducing the accuracy of computer-aided detection (CADe). We developed a context-specific method that performs the detection of lesions separately on lumen regions covered by air and on those covered by fecal tagging, thereby facilitating the optimization of detection parameters separately for these regions and their detected lesion candidates to improve the detection accuracy of CADe. For pilot evaluation, the method was integrated into a dual-energy CADe (DE-CADe) scheme and evaluated by use of leave-one-patient-out evaluation on 66 clinical non-cathartic low-dose dual-energy CTC (DE-CTC) cases that were acquired at a low effective radiation dose and reconstructed by use of iterative image reconstruction. There were 22 colonoscopy-confirmed lesions ≥6 mm in size in 21 patients. The DE-CADe scheme detected 96% of the lesions at a median of 6 FP detections per patient. These preliminary results indicate that the use of context-specific detection can yield high detection accuracy of CADe in non-cathartic low-dose DE-CTC examinations.
ABSTRACT
CT colonography (CTC) uses orally administered fecal-tagging agents to enhance retained fluid and feces that would otherwise obscure or imitate polyps on CTC images. To visualize the complete region of colon without residual materials, electronic cleansing (EC) can be used to perform virtual subtraction of the tagged materials from CTC images. However, current EC methods produce subtraction artifacts and they can fail to subtract unclearly tagged feces. We developed a novel multi-material EC (MUMA-EC) method that uses dual-energy CTC (DE-CTC) and machine-learning methods to improve the performance of EC. In our method, material decomposition is performed to calculate water-iodine decomposition images and virtual monochromatic (VIM) images. Using the images, a random forest classifier is used to label the regions of lumen air, soft tissue, fecal tagging, and their partial-volume boundaries. The electronically cleansed images are synthesized from the multi-material and VIM image volumes. For pilot evaluation, we acquired the clinical DE-CTC data of 7 patients. Preliminary results suggest that the proposed MUMA-EC method is effective and that it minimizes the three types of image artifacts that were present in previous EC methods.
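The water-iodine decomposition step described above can be sketched, per voxel, as a 2×2 linear system relating the two energy measurements to basis-material contributions. The attenuation coefficients below are illustrative placeholders, not calibrated values:

```python
import numpy as np

# Illustrative linear attenuation coefficients (1/cm) of the two basis
# materials at the low- and high-energy scans; real values come from
# scanner calibration.
MU = np.array([[0.20, 4.0],    # [mu_water, mu_iodine] at low energy
               [0.18, 2.0]])   # [mu_water, mu_iodine] at high energy

def decompose(mu_low, mu_high):
    """Per-voxel image-based material decomposition: solve
    MU @ [c_water, c_iodine] = [mu_low, mu_high] for the
    water and iodine contributions."""
    return np.linalg.solve(MU, np.array([mu_low, mu_high]))

# A voxel measuring slightly above water at both energies decomposes
# into mostly water with a trace of iodine.
c_water, c_iodine = decompose(0.21, 0.188)
print(c_water, c_iodine)
```

Maps of `c_water` and `c_iodine` over the whole volume are exactly the kind of water-iodine decomposition images from which a classifier can then label air, soft tissue, and tagging.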
ABSTRACT
In recent years, dual-energy computed tomography (DECT) has been widely used in the clinical routine because the additional spectral information improves diagnostic capability. One promising application for DECT is CT colonography (CTC) in combination with computer-aided diagnosis (CAD) for the detection of lesions and polyps. While CAD has been shown to detect small polyps, its performance is highly dependent on the quality of the input data. The presence of artifacts such as beam hardening and noise in ultra-low-dose CTC may severely degrade the detection performance for small polyps. In this work, we investigate and compare virtual monochromatic images, generated by image-based decomposition and by projection-based decomposition, with respect to CAD performance. In the image-based method, reconstructed images are first decomposed into water and iodine before the virtual monochromatic images are calculated. In contrast, in the projection-based method, the projection data are first decomposed before calculation of the virtual monochromatic projections and reconstruction. Both material decomposition methods are evaluated with regard to the accuracy of iodine detection. Further, the performance of the virtual monochromatic images is assessed qualitatively and quantitatively. Preliminary results show that the projection-based method not only detects iodine more accurately but also delivers virtual monochromatic images with reduced beam-hardening artifacts in comparison with the image-based method. With regard to CAD performance, the projection-based method yields improved detection of polyps in comparison with the image-based method.
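Once basis-material images are available, synthesizing a virtual monochromatic image reduces, per voxel, to recombining them with each material's attenuation coefficient at the chosen energy. A minimal image-based sketch; the basis images and coefficients are all illustrative, and real coefficients would come from tabulated attenuation data:

```python
import numpy as np

def virtual_monochromatic(c_water, c_iodine, mu_water_E, mu_iodine_E):
    """Synthesize a virtual monochromatic image at target energy E from
    water/iodine basis images and the materials' attenuation
    coefficients at E."""
    return c_water * mu_water_E + c_iodine * mu_iodine_E

# Hypothetical 2x2 basis images (water and iodine content per voxel)
c_water = np.array([[1.0, 0.9], [1.0, 0.0]])
c_iodine = np.array([[0.0, 0.1], [0.0, 0.0]])

# Illustrative coefficients (1/cm) at a 70 keV target energy
vim = virtual_monochromatic(c_water, c_iodine,
                            mu_water_E=0.19, mu_iodine_E=3.0)
print(vim)
```

Because the synthesis is a fixed linear combination per voxel, any decomposition error (image-based vs projection-based) propagates directly into the monochromatic image, which is why the two decomposition routes yield different artifact levels.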
ABSTRACT
In recent years, several computer-aided detection (CAD) schemes have been developed for the detection of polyps in CT colonography (CTC). However, few studies have addressed the problem of computerized detection of colorectal masses in CTC. This is mostly because masses are considered to be well visualized by a radiologist because of their size and invasiveness. Nevertheless, the automated detection of masses would naturally complement the automated detection of polyps in CTC and would produce a more comprehensive computer aid to radiologists. Therefore, in this study, we identified some of the problems involved with the computerized detection of masses, and we developed a scheme for the computerized detection of masses that can be integrated into a CAD scheme for the detection of polyps. The performance of the mass detection scheme was evaluated by application to clinical CTC data sets. CTC was performed on 82 patients with helical CT scanners and reconstruction intervals of 1.0-5.0 mm in the supine and prone positions. Fourteen patients (17%) had a total of 14 masses of 30-50 mm, and sixteen patients (20%) had a total of 30 polyps 5-25 mm in diameter. Four patients had both polyps and masses. Fifty-six of the patients (68%) were normal. The CTC data were interpolated linearly to yield isotropic data sets, and the colon was extracted by use of a knowledge-guided segmentation technique. Two methods, fuzzy merging and wall-thickening analysis, were developed for the detection of masses. The fuzzy merging method detected masses with a significant intraluminal component by separating the initial CAD detections of locally cap-like shapes within the colonic wall into mass candidates and polyp candidates. The wall-thickening analysis detected nonintraluminal masses by searching the colonic wall for abnormal thickening. The final regions of the mass candidates were extracted by use of a level set method based on a fast marching algorithm. 
False-positive (FP) detections were reduced by a quadratic discriminant classifier. The performance of the scheme was evaluated by use of a leave-one-out (round-robin) method with by-patient elimination. All but one of the 14 masses, which was partially cut off from the CTC data set in both supine and prone positions, were detected. The fuzzy merging method detected 11 of the masses, and the wall-thickening analysis detected 3 of the masses including all nonintraluminal masses. In combination, the two methods detected 13 of the 14 masses with 0.21 FPs per patient on average based on the leave-one-out evaluation. Most FPs were generated by extrinsic compression of the colonic wall that would be recognized easily and quickly by a radiologist. The mass detection methods did not affect the result of the polyp detection. The results indicate that the scheme is potentially useful in providing a high-performance CAD scheme for the detection of colorectal neoplasms in CTC.
Subject(s)
Artificial Intelligence , Colonic Neoplasms/diagnostic imaging , Colonography, Computed Tomographic/methods , Imaging, Three-Dimensional/methods , Pattern Recognition, Automated , Radiographic Image Interpretation, Computer-Assisted/methods , Subtraction Technique , Algorithms , Anatomy, Cross-Sectional/methods , Colonic Neoplasms/pathology , Fuzzy Logic , Humans , Radiographic Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
ABSTRACT
In CT colonography, orally administered positive-contrast fecal-tagging agents are used for differentiating residual fluid and feces from true lesions. However, the presence of high-density tagging agent in the colon can introduce artifacts, such as local pseudo-enhancement and beam hardening, on the reconstructed CT images, thereby complicating reliable detection of soft-tissue lesions. In dual-energy CT colonography, such image artifacts can be reduced by the calculation of virtual monochromatic CT images, which provide more accurate quantitative attenuation measurements than conventional single-energy CT colonography. In practice, however, virtual monochromatic images may still contain some pseudo-enhancement artifacts, and efforts to minimize radiation dose may accentuate such artifacts. In this study, we evaluated the effect of image-based pseudo-enhancement post-correction on virtual monochromatic images in standard-dose and low-dose dual-energy CT colonography. The mean CT values of the virtual monochromatic standard-dose CT images of 51 polyps and those of the virtual monochromatic low-dose CT images of 20 polyps were measured without and with the pseudo-enhancement correction. Statistically significant differences were observed between uncorrected and pseudo-enhancement-corrected images of polyps covered by fecal tagging in standard-dose CT (p < 0.001) and in low-dose CT (p < 0.05). The results indicate that image-based pseudo-enhancement post-correction can be useful for optimizing the performance of image-processing applications in virtual monochromatic CT colonography.
ABSTRACT
In CT colonography (CTC), orally administered positive-contrast fecal-tagging agents can cause artificial elevation of the observed radiodensity of adjacent soft tissue. Such pseudo-enhancement makes it challenging to differentiate polyps and folds reliably from tagged materials, and it is also present in dual-energy CTC (DE-CTC). We developed a method that corrects for pseudo-enhancement on DE-CTC images without distorting the dual-energy information contained in the data. A pilot study was performed to evaluate the effect of the method visually and quantitatively by use of clinical non-cathartic low-dose DE-CTC data from 10 patients including 13 polyps covered partially or completely by iodine-based fecal tagging. The results indicate that the proposed method can be used to reduce the pseudo-enhancement distortion of DE-CTC images without losing material-specific dual-energy information. The method has potential application in improving the accuracy of automated image-processing applications, such as computer-aided detection and virtual bowel cleansing in CTC.
ABSTRACT
Reliable computer-aided detection (CADe) of small polyps and flat lesions is limited by the relatively low image resolution of computed tomographic colonography (CTC). We developed a sinogram-based super-resolution (SR) method to enhance the images of lesion candidates detected by CADe. First, CADe is used to detect lesion candidates at high sensitivity from conventional CTC images. Next, the signal patterns of the lesion candidates are enhanced in the sinogram domain by use of non-uniform compressive sampling and iterative reconstruction to produce SR images of the lesion candidates. For pilot evaluation, an anthropomorphic phantom including simulated lesions was filled partially with fecal tagging and scanned by use of a CT scanner. A fully automated CADe scheme was used to detect lesion candidates in the images reconstructed at the conventional 0.61-mm resolution and at the 0.10-mm SR image resolution. The proof-of-concept results indicate that the SR method has the potential to reduce the number of false-positive CADe detections below that obtainable with conventional CTC imaging technology.
ABSTRACT
Noncathartic computed tomographic colonography (CTC) could significantly increase patient adherence to colorectal screening guidelines. However, radiologists find the interpretation of noncathartic CTC images challenging. We developed a fully automated computer-aided detection (CAD) scheme for assisting radiologists with noncathartic CTC. A volumetric method is used to detect lesions within a thick target region encompassing the colonic wall. Dual-energy CTC (DE-CTC) is used to provide more detailed information about the colon than what is possible with conventional CTC. False-positive detections are reduced by use of a random-forest classifier. The effect of the thickness of the target region on detection performance was assessed by use of 22 clinical noncathartic DE-CTC studies including 27 lesions ≥6 mm. The results indicate that the thickness parameter can have significant effect on detection accuracy. Leave-one-patient-out evaluation indicated that the proposed CAD scheme detects colorectal lesions at high accuracy in noncathartic CTC.