Results 1 - 20 of 44
1.
BJR Artif Intell ; 1(1): ubae006, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38828430

ABSTRACT

Innovation in medical imaging artificial intelligence (AI)/machine learning (ML) demands extensive data collection, algorithmic advancements, and rigorous performance assessments encompassing aspects such as generalizability, uncertainty, bias, fairness, trustworthiness, and interpretability. Achieving widespread integration of AI/ML algorithms into diverse clinical tasks will demand a steadfast commitment to overcoming issues in model design, development, and performance assessment. The complexities of AI/ML clinical translation present substantial challenges, requiring engagement with relevant stakeholders, assessment of cost-effectiveness for user and patient benefit, timely dissemination of information relevant to robust functioning throughout the AI/ML lifecycle, consideration of regulatory compliance, and feedback loops for real-world performance evidence. This commentary addresses several hurdles for the development and adoption of AI/ML technologies in medical imaging. Comprehensive attention to these underlying and often subtle factors is critical not only for tackling the challenges but also for exploring novel opportunities for the advancement of AI in radiology.

2.
BJR Artif Intell ; 1(1): ubae003, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38476957

ABSTRACT

The adoption of artificial intelligence (AI) tools in medicine poses challenges to existing clinical workflows. This commentary discusses the necessity of context-specific quality assurance (QA), emphasizing the need for robust QA measures with quality control (QC) procedures that encompass (1) acceptance testing (AT) before clinical use, (2) continuous QC monitoring, and (3) adequate user training. The discussion also covers essential components of AT and QA, illustrated with real-world examples. We also highlight what we see as the shared responsibility of manufacturers or vendors, regulators, healthcare systems, medical physicists, and clinicians to enact appropriate testing and oversight to ensure a safe and equitable transformation of medicine through AI.

3.
J Med Imaging (Bellingham) ; 10(5): 054501, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37818179

ABSTRACT

Purpose: Deep supervised learning provides an effective approach for developing robust models for various computer-aided diagnosis tasks. However, there is often an underlying assumption that the frequencies of the samples between the different classes of the training dataset are either similar or balanced. In real-world medical data, the samples of positive classes often occur too infrequently to satisfy this assumption. Thus, there is an unmet need for deep-learning systems that can automatically identify and adapt to the real-world conditions of imbalanced data. Approach: We propose a deep Bayesian ensemble learning framework to address the representation learning problem of long-tailed and out-of-distribution (OOD) samples when training from medical images. By estimating the relative uncertainties of the input data, our framework can adapt to imbalanced data for learning generalizable classifiers. We trained and tested our framework on four public medical imaging datasets with various imbalance ratios and imaging modalities across three different learning tasks: semantic medical image segmentation, OOD detection, and in-domain generalization. We compared the performance of our framework with those of state-of-the-art comparator methods. Results: Our proposed framework outperformed the comparator models significantly across all performance metrics (pairwise t-test: p<0.01) in the semantic segmentation of high-resolution CT and MR images as well as in the detection of OOD samples (p<0.01), thereby showing significant improvement in handling the associated long-tailed data distribution. The results of the in-domain generalization also indicated that our framework can enhance the prediction of retinal glaucoma, contributing to clinical decision-making processes. 
Conclusions: Training of the proposed deep Bayesian ensemble learning framework with dynamic Monte-Carlo dropout and a combination of losses yielded the best generalization to unseen samples from imbalanced medical imaging datasets across different learning tasks.
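The uncertainty estimation central to the framework above rests on Monte-Carlo dropout: keeping dropout active at inference and treating repeated stochastic forward passes as an ensemble. The following is an illustrative toy sketch of plain (non-dynamic) Monte-Carlo dropout, not the authors' framework; the single-layer model, weights, and dropout rate are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, weights, p_drop=0.5, n_samples=50):
    """Monte-Carlo dropout for a toy one-layer model with a sigmoid output.

    Dropout stays active at test time; the spread of the repeated
    stochastic predictions serves as an uncertainty estimate.
    """
    preds = []
    for _ in range(n_samples):
        mask = rng.random(weights.shape) >= p_drop   # randomly drop weights
        w = weights * mask / (1.0 - p_drop)          # inverted-dropout scaling
        logit = float(x @ w)
        preds.append(1.0 / (1.0 + np.exp(-logit)))   # sigmoid
    preds = np.asarray(preds)
    return preds.mean(), preds.std()                 # prediction, uncertainty

x = np.array([0.8, -0.2, 0.5])   # hypothetical input features
w = np.array([1.2, 0.7, -0.4])   # hypothetical trained weights
mean, std = mc_dropout_predict(x, w)
```

Samples with a large standard deviation would be the ones flagged as uncertain (e.g., long-tailed or out-of-distribution inputs).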

4.
Oral Radiol ; 39(3): 553-562, 2023 07.
Article in English | MEDLINE | ID: mdl-36753006

ABSTRACT

OBJECTIVES: A videofluoroscopic swallowing study (VFSS) is conducted to detect aspiration. However, aspiration occurs within a short time and is difficult to detect. If deep learning can detect aspiration with high accuracy, clinicians can focus on the diagnosis of the detected aspirations. We studied whether aspiration on VFSS images can be classified by use of rapid-prototyping deep-learning tools. METHODS: VFSS videos were separated into individual image frames. A region of interest was defined on the pharynx. Three convolutional neural networks (CNNs), namely a Simple-Layer CNN, a Multiple-Layer CNN, and a Modified LeNet, were designed for the classification. The performance of the CNNs was compared in terms of the areas under their receiver-operating characteristic curves (AUCs). RESULTS: A total of 18,333 images obtained through data augmentation were selected for the evaluation. The CNNs yielded sensitivities of 78.8%-87.6%, specificities of 91.9%-98.1%, and overall accuracies of 85.8%-91.7%. The AUC of 0.974 obtained for the Simple-Layer CNN and the Modified LeNet was significantly higher than that obtained for the Multiple-Layer CNN (AUC of 0.936) (p < 0.001). CONCLUSIONS: These results show that deep learning has potential for detecting aspiration with high accuracy.
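The AUC values compared above follow from the rank-based (Mann-Whitney U) definition of the area under the ROC curve: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal generic computation (not the study's code; the toy scores are invented):

```python
def roc_auc(scores, labels):
    """AUC as the fraction of positive/negative pairs where the positive
    outranks the negative; ties count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# perfectly separated scores give AUC = 1.0
assert roc_auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]) == 1.0
```

One misranked pair out of four, for instance, would yield an AUC of 0.75.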


Subject(s)
Deep Learning, Deglutition, Fluoroscopy/methods, Neural Networks, Computer, Area Under Curve
5.
Med Phys ; 50(2): e1-e24, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36565447

ABSTRACT

Rapid advances in artificial intelligence (AI) and machine learning, and specifically in deep learning (DL) techniques, have enabled broad application of these methods in health care. The promise of the DL approach has spurred further interest in computer-aided diagnosis (CAD) development and applications using both "traditional" machine learning methods and newer DL-based methods. We use the term CAD-AI to refer to this expanded clinical decision support environment that uses traditional and DL-based AI methods. Numerous studies have been published to date on the development of machine learning tools for computer-aided, or AI-assisted, clinical tasks. However, most of these machine learning models are not ready for clinical deployment. It is of paramount importance to ensure that a clinical decision support tool undergoes proper training and rigorous validation of its generalizability and robustness before adoption for patient care in the clinic. To address these important issues, the American Association of Physicists in Medicine (AAPM) Computer-Aided Image Analysis Subcommittee (CADSC) is charged, in part, to develop recommendations on practices and standards for the development and performance assessment of computer-aided decision support systems. The committee has previously published two opinion papers on the evaluation of CAD systems and issues associated with user training and quality assurance of these systems in the clinic. With machine learning techniques continuing to evolve and CAD applications expanding to new stages of the patient care process, the current task group report considers the broader issues common to the development of most, if not all, CAD-AI applications and their translation from the bench to the clinic. The goal is to bring attention to the proper training and validation of machine learning algorithms that may improve their generalizability and reliability and accelerate the adoption of CAD-AI systems for clinical decision support.


Subject(s)
Artificial Intelligence, Diagnosis, Computer-Assisted, Humans, Reproducibility of Results, Diagnosis, Computer-Assisted/methods, Diagnostic Imaging, Machine Learning
6.
Cancers (Basel) ; 14(17)2022 Aug 26.
Article in English | MEDLINE | ID: mdl-36077662

ABSTRACT

Existing electronic cleansing (EC) methods for computed tomographic colonography (CTC) are generally based on image segmentation, which limits their accuracy to that of the underlying voxels. Because of the limitations of the available CTC datasets for training, traditional deep learning is of limited use in EC. The purpose of this study was to evaluate the technical feasibility of using a novel self-supervised adversarial learning scheme to perform EC with a limited training dataset with subvoxel accuracy. A three-dimensional (3D) generative adversarial network (3D GAN) was pre-trained to perform EC on CTC datasets of an anthropomorphic phantom. The 3D GAN was then fine-tuned to each input case by use of the self-supervised scheme. The architecture of the 3D GAN was optimized by use of a phantom study. The visually perceived quality of the virtual cleansing by the resulting 3D GAN compared favorably to that of commercial EC software on the virtual 3D fly-through examinations of 18 clinical CTC cases. Thus, the proposed self-supervised 3D GAN, which can be trained to perform EC on a small dataset without image annotations with subvoxel accuracy, is a potentially effective approach for addressing the remaining technical problems of EC in CTC.

7.
Med Image Anal ; 73: 102159, 2021 10.
Article in English | MEDLINE | ID: mdl-34303892

ABSTRACT

Because of the rapid spread and wide range of the clinical manifestations of the coronavirus disease 2019 (COVID-19), fast and accurate estimation of the disease progression and mortality is vital for the management of the patients. Currently available image-based prognostic predictors for patients with COVID-19 are largely limited to semi-automated schemes with manually designed features and supervised learning, and the survival analysis is largely limited to logistic regression. We developed a weakly unsupervised conditional generative adversarial network, called pix2surv, which can be trained to estimate the time-to-event information for survival analysis directly from the chest computed tomography (CT) images of a patient. We show that the performance of pix2surv based on CT images significantly outperforms those of existing laboratory tests and image-based visual and quantitative predictors in estimating the disease progression and mortality of COVID-19 patients. Thus, pix2surv is a promising approach for performing image-based prognostic predictions.


Subject(s)
COVID-19, Humans, Prognosis, SARS-CoV-2, Thorax, Tomography, X-Ray Computed
8.
Sci Rep ; 11(1): 9263, 2021 04 29.
Article in English | MEDLINE | ID: mdl-33927287

ABSTRACT

The rapid increase of patients with coronavirus disease 2019 (COVID-19) has introduced major challenges to healthcare services worldwide. Therefore, fast and accurate clinical assessment of COVID-19 progression and mortality is vital for the management of COVID-19 patients. We developed an automated image-based survival prediction model, called U-survival, which combines deep learning of chest CT images with the established survival analysis methodology of an elastic-net Cox survival model. In an evaluation of 383 COVID-19 positive patients from two hospitals, the prognostic bootstrap prediction performance of U-survival was significantly higher (P < 0.0001) than those of existing laboratory and image-based reference predictors both for COVID-19 progression (maximum concordance index: 91.6% [95% confidence interval 91.5, 91.7]) and for mortality (88.7% [88.6, 88.9]), and the separation between the Kaplan-Meier survival curves of patients stratified into low- and high-risk groups was largest for U-survival (P < 3 × 10⁻¹⁴). The results indicate that U-survival can be used to provide automated and objective prognostic predictions for the management of COVID-19 patients.
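The concordance index used as the headline metric above can be computed with Harrell's pairwise definition: among comparable patient pairs (the earlier time experienced an event), the fraction where the higher predicted risk matches the shorter survival. A generic sketch, not the U-survival code; the toy survival data are invented:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index for right-censored survival data.

    A pair (i, j) is comparable when times[i] < times[j] and the earlier
    subject had an observed event; concordant pairs have the higher risk
    score on the shorter survival, and tied scores count half.
    """
    num, den = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:   # comparable pair
                den += 1
                if risk_scores[i] > risk_scores[j]:
                    num += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    num += 0.5
    return num / den

# higher predicted risk paired with earlier event: perfect concordance
c = concordance_index([1, 2, 3], [1, 1, 1], [3.0, 2.0, 1.0])
```

A C-index of 0.5 corresponds to random ordering; the ~0.9 values reported above indicate strong risk stratification.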


Subject(s)
COVID-19/diagnosis, Lung/diagnostic imaging, SARS-CoV-2/physiology, Aged, Automation, COVID-19/mortality, Diagnostic Imaging, Disease Progression, Female, Humans, Male, Middle Aged, Predictive Value of Tests, Prognosis, Survival Analysis, Tomography, X-Ray Computed
9.
Int J Comput Assist Radiol Surg ; 16(1): 81-89, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33150471

ABSTRACT

PURPOSE: Deep learning can be used for improving the performance of computer-aided detection (CADe) in various medical imaging tasks. However, in computed tomographic (CT) colonography, the performance is limited by the relatively small size and the variety of the available training datasets. Our purpose in this study was to develop and evaluate a flow-based generative model for performing 3D data augmentation of colorectal polyps for effective training of deep learning in CADe for CT colonography. METHODS: We developed a 3D-convolutional neural network (3D CNN) based on a flow-based generative model (3D Glow) for generating synthetic volumes of interest (VOIs) that has characteristics similar to those of the VOIs of its training dataset. The 3D Glow was trained to generate synthetic VOIs of polyps by use of our clinical CT colonography case collection. The evaluation was performed by use of a human observer study with three observers and by use of a CADe-based polyp classification study with a 3D DenseNet. RESULTS: The area-under-the-curve values of the receiver operating characteristic analysis of the three observers were not statistically significantly different in distinguishing between real polyps and synthetic polyps. When trained with data augmentation by 3D Glow, the 3D DenseNet yielded a statistically significantly higher polyp classification performance than when it was trained with alternative augmentation methods. CONCLUSION: The 3D Glow-generated synthetic polyps are visually indistinguishable from real colorectal polyps. Their application to data augmentation can substantially improve the performance of 3D CNNs in CADe for CT colonography. Thus, 3D Glow is a promising method for improving the performance of deep learning in CADe for CT colonography.
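The "alternative augmentation methods" that 3D Glow was compared against typically amount to classical geometric transforms of the volume of interest. A generic numpy sketch of such a baseline (not the study's pipeline; the volume size and transform choices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

def augment_volume(vol):
    """Classical 3D augmentation: random axis flips plus a random
    90-degree in-plane rotation. Shape and intensity histogram are
    preserved; only the geometry changes."""
    for axis in range(3):
        if rng.random() < 0.5:
            vol = np.flip(vol, axis=axis)
    k = int(rng.integers(0, 4))               # 0-3 quarter turns
    vol = np.rot90(vol, k=k, axes=(1, 2))     # rotate within axial slices
    return vol.copy()

voi = rng.normal(size=(32, 32, 32))           # synthetic volume of interest
aug = augment_volume(voi)
```

Unlike a generative model, such transforms only rearrange existing voxels, which is one reason learned augmentation can outperform them on small datasets.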


Subject(s)
Colonic Polyps/diagnostic imaging, Colonography, Computed Tomographic/methods, Deep Learning, Neural Networks, Computer, Colonoscopy, Humans, Sensitivity and Specificity
10.
Int J Comput Assist Radiol Surg ; 15(11): 1847-1858, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32897490

ABSTRACT

Purpose: The identification of abnormalities that are relatively rare within otherwise normal anatomy is a major challenge for deep learning in the semantic segmentation of medical images. The small number of samples of the minority classes in the training data makes the learning of optimal classification challenging, while the more frequently occurring samples of the majority class hamper the generalization of the classification boundary between infrequently occurring target objects and classes. In this paper, we developed a novel generative multi-adversarial network, called Ensemble-GAN, for mitigating this class imbalance problem in the semantic segmentation of abdominal images. Method: The Ensemble-GAN framework is composed of a single-generator and a multi-discriminator variant for handling the class imbalance problem to provide a better generalization than existing approaches. The ensemble model aggregates the estimates of multiple models by training from different initializations and losses from various subsets of the training data. The single generator network analyzes the input image as a condition to predict a corresponding semantic segmentation image by use of feedback from the ensemble of discriminator networks. To evaluate the framework, we trained our framework on two public datasets with different imbalance ratios and imaging modalities: the Chaos 2019 and the LiTS 2017. Result: In terms of the F1 score, the accuracies of the semantic segmentation of healthy spleen, liver, and left and right kidneys were 0.93, 0.96, 0.90, and 0.94, respectively. The overall F1 scores for simultaneous segmentation of the lesions and liver were 0.83 and 0.94, respectively. Conclusion: The proposed Ensemble-GAN framework demonstrated outstanding performance in the semantic segmentation of medical images in comparison with other approaches on popular abdominal imaging benchmarks. The Ensemble-GAN has the potential to segment abdominal images more accurately than human experts.
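For binary segmentation masks, the F1 score reported above coincides with the Dice coefficient: twice the overlap divided by the total foreground of prediction and ground truth. A minimal generic computation with toy masks (not the study's evaluation code):

```python
import numpy as np

def f1_score(pred, target):
    """F1 / Dice for binary masks: 2*TP / (|pred| + |target|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * tp / denom if denom else 1.0   # both empty: perfect match

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])    # toy predicted mask
gt   = np.array([[1, 0, 0],
                 [0, 1, 1]])    # toy ground-truth mask
score = f1_score(pred, gt)      # tp = 2, |pred| + |gt| = 6
```

Per-class F1 scores like those for spleen, liver, and kidneys are obtained by evaluating each organ's mask separately.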


Subject(s)
Abdomen/diagnostic imaging, Image Processing, Computer-Assisted/methods, Liver/diagnostic imaging, Deep Learning, Humans, Magnetic Resonance Imaging, Neural Networks, Computer, Tomography, X-Ray Computed
11.
Int J Comput Assist Radiol Surg ; 15(1): 163-172, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31722085

ABSTRACT

PURPOSE: As some of the most important factors for treatment decision of lung cancer (which is the deadliest neoplasm) are staging and histology, this work aimed to associate quantitative contrast-enhanced computed tomography (CT) features from malignant lung tumors with distant and nodal metastases (according to clinical TNM staging) and histopathology (according to biopsy and surgical resection) using radiomics assessment. METHODS: A local cohort of 85 patients were retrospectively (2010-2017) analyzed after approval by the institutional research review board. CT images acquired with the same protocol were semiautomatically segmented by a volumetric segmentation method. Tumors were characterized by quantitative CT features of shape, first-order, second-order, and higher-order textures. Statistical and machine learning analyses assessed the features individually and combined with clinical data. RESULTS: Univariate and multivariate analyses identified 40, 2003, and 45 quantitative features associated with distant metastasis, nodal metastasis, and histopathology (adenocarcinoma and squamous cell carcinoma), respectively. A machine learning model yielded the highest areas under the receiver operating characteristic curves of 0.92, 0.84, and 0.88 to predict the same previous patterns. CONCLUSION: Several radiomic features (including wavelet energies, information measures of correlation and maximum probability from co-occurrence matrix, busyness from neighborhood intensity-difference matrix, directionalities from Tamura's texture, and fractal dimension estimation) significantly associated with distant metastasis, nodal metastasis, and histology were discovered in this work, presenting great potential as imaging biomarkers for pathological diagnosis and target therapy decision.
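Among the radiomic features named above, "maximum probability from co-occurrence matrix" can be illustrated with a small gray-level co-occurrence matrix (GLCM) computation. This is a generic sketch with a toy 4-level image; real radiomics pipelines quantize intensities and aggregate over multiple offsets and directions:

```python
import numpy as np

def glcm_max_probability(img, levels=4, offset=(0, 1)):
    """Build the GLCM for one (row, col) offset with non-negative
    components, normalize it to a joint probability distribution, and
    return its largest entry (the 'maximum probability' texture feature)."""
    glcm = np.zeros((levels, levels))
    dr, dc = offset
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[img[r, c], img[r + dr, c + dc]] += 1
    glcm /= glcm.sum()
    return glcm.max()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])   # toy quantized image, 4 gray levels
feature = glcm_max_probability(img)
```

Here each of the six co-occurring level pairs appears twice among the 12 horizontal neighbor pairs, so the maximum probability is 2/12.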


Subject(s)
Lung Neoplasms/diagnosis, Lung/diagnostic imaging, Machine Learning, Neoplasm Staging, Tomography, X-Ray Computed/methods, Adult, Aged, Aged, 80 and over, Biopsy, Female, Humans, Lung Neoplasms/secondary, Male, Middle Aged, Neoplasm Metastasis, Predictive Value of Tests, ROC Curve, Retrospective Studies
12.
Radiographics ; 38(7): 2034-2050, 2018.
Article in English | MEDLINE | ID: mdl-30422761

ABSTRACT

Electronic cleansing (EC) is used for computational removal of residual feces and fluid tagged with an orally administered contrast agent on CT colonographic images to improve the visibility of polyps during virtual endoscopic "fly-through" reading. A recent trend in CT colonography is to perform a low-dose CT scanning protocol with the patient having undergone reduced- or noncathartic bowel preparation. Although several EC schemes exist, they have been developed for use with cathartic bowel preparation and high-radiation-dose CT, and thus, at a low dose with noncathartic bowel preparation, they tend to generate cleansing artifacts that distract and mislead readers. Deep learning can be used for improvement of the image quality with EC at CT colonography. Deep learning EC can produce substantially fewer cleansing artifacts at dual-energy than at single-energy CT colonography, because the dual-energy information can be used to identify relevant material in the colon more precisely than is possible with the single x-ray attenuation value. Because the number of annotated training images is limited at CT colonography, transfer learning can be used for appropriate training of deep learning algorithms. The purposes of this article are to review the causes of cleansing artifacts that distract and mislead readers in conventional EC schemes, to describe the applications of deep learning and dual-energy CT colonography to EC of the colon, and to demonstrate the improvements in image quality with EC and deep learning at single-energy and dual-energy CT colonography with noncathartic bowel preparation. ©RSNA, 2018.


Subject(s)
Colonography, Computed Tomographic/methods, Colorectal Neoplasms/diagnostic imaging, Deep Learning, Algorithms, Cathartics/administration & dosage, Contrast Media, Feces, Humans, Radiation Dosage
13.
Am J Gastroenterol ; 112(1): 163-171, 2017 01.
Article in English | MEDLINE | ID: mdl-27779195

ABSTRACT

OBJECTIVES: The objective of this study was to assess prospectively the diagnostic accuracy of computer-assisted computed tomographic colonography (CTC) in the detection of polypoid (pedunculated or sessile) and nonpolypoid neoplasms and compare the accuracy between gastroenterologists and radiologists. METHODS: This nationwide multicenter prospective controlled trial recruited 1,257 participants with average or high risk of colorectal cancer at 14 Japanese institutions. Participants had CTC and colonoscopy on the same day. CTC images were interpreted independently by trained gastroenterologists and radiologists. The main outcome was the accuracy of CTC in the detection of neoplasms ≥6 mm in diameter, with colonoscopy results as the reference standard. Detection sensitivities of polypoid vs. nonpolypoid lesions were also evaluated. RESULTS: Of the 1,257 participants, 1,177 were included in the final analysis: 42 (3.6%) were at average risk of colorectal cancer, 456 (38.7%) were at elevated risk, and 679 (57.7%) had recent positive immunochemical fecal occult blood tests. The overall per-participant sensitivity, specificity, and positive and negative predictive values for neoplasms ≥6 mm in diameter were 0.90, 0.93, 0.83, and 0.96, respectively, among gastroenterologists and 0.86, 0.90, 0.76, and 0.95 among radiologists (P<0.05 for gastroenterologists vs. radiologists). The sensitivity and specificity for neoplasms ≥10 mm in diameter were 0.93 and 0.99 among gastroenterologists and 0.91 and 0.98 among radiologists (not significant for gastroenterologists vs. radiologists). The CTC interpretation time by radiologists was shorter than that by gastroenterologists (9.97 vs. 15.8 min, P<0.05). Sensitivities for pedunculated and sessile lesions exceeded those for flat elevated lesions ≥10 mm in diameter in both groups (gastroenterologists 0.95, 0.92, and 0.68; radiologists: 0.94, 0.87, and 0.61; P<0.05 for polypoid vs. 
nonpolypoid), although not significant (P>0.05) for gastroenterologists vs. radiologists. CONCLUSIONS: CTC interpretation by gastroenterologists and radiologists was accurate for detection of polypoid neoplasms, but less so for nonpolypoid neoplasms. Gastroenterologists had a higher accuracy in the detection of neoplasms ≥6 mm than did radiologists, although their interpretation time was longer than that of radiologists.
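The per-participant sensitivity, specificity, and positive and negative predictive values reported above all derive from a 2x2 confusion table against the colonoscopy reference standard. A minimal generic helper (the counts in the example are invented, not the trial's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic accuracy metrics from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),   # detected / all with disease
        "specificity": tn / (tn + fp),   # cleared / all without disease
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# hypothetical counts for illustration only
m = diagnostic_metrics(tp=90, fp=18, tn=250, fn=10)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence in the study population.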


Subject(s)
Adenoma/diagnostic imaging, Carcinoma/diagnostic imaging, Colonic Polyps/diagnostic imaging, Colonography, Computed Tomographic, Colorectal Neoplasms/diagnostic imaging, Gastroenterologists, Radiologists, Adenoma/pathology, Aged, Carcinoma/pathology, Colonic Polyps/pathology, Colonoscopy, Colorectal Neoplasms/pathology, Feces/chemistry, Female, Hemoglobins/analysis, Humans, Immunochemistry, Japan, Male, Middle Aged, Prospective Studies, Sensitivity and Specificity
14.
Proc SPIE Int Soc Opt Eng ; 9414: 94142Y, 2015 Mar 01.
Article in English | MEDLINE | ID: mdl-25964710

ABSTRACT

In computed tomographic colonography (CTC), orally administered fecal-tagging agents can be used to indicate residual feces and fluid that could otherwise hide or imitate lesions on CTC images of the colon. Although the use of fecal tagging improves the detection accuracy of CTC, it can introduce image artifacts that may cause lesions that are covered by fecal tagging to have a different visual appearance than those not covered by fecal tagging. This can distort the values of image-based computational features, thereby reducing the accuracy of computer-aided detection (CADe). We developed a context-specific method that performs the detection of lesions separately on lumen regions covered by air and on those covered by fecal tagging, thereby facilitating the optimization of detection parameters separately for these regions and their detected lesion candidates to improve the detection accuracy of CADe. For pilot evaluation, the method was integrated into a dual-energy CADe (DE-CADe) scheme and evaluated by use of leave-one-patient-out evaluation on 66 clinical non-cathartic low-dose dual-energy CTC (DE-CTC) cases that were acquired at a low effective radiation dose and reconstructed by use of iterative image reconstruction. There were 22 colonoscopy-confirmed lesions ≥6 mm in size in 21 patients. The DE-CADe scheme detected 96% of the lesions at a median of 6 FP detections per patient. These preliminary results indicate that the use of context-specific detection can yield high detection accuracy of CADe in non-cathartic low-dose DE-CTC examinations.
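Leave-one-patient-out evaluation, as used above, splits by patient rather than by detection, so that no candidate from the held-out patient leaks into training. A generic sketch of the split logic (not the study's code; the IDs are invented):

```python
def leave_one_patient_out(patient_ids):
    """Yield (held-out patient, train indices, test indices) so that all
    detections from one patient are evaluated on a model trained without
    any of that patient's data."""
    for held_out in sorted(set(patient_ids)):
        train = [i for i, p in enumerate(patient_ids) if p != held_out]
        test = [i for i, p in enumerate(patient_ids) if p == held_out]
        yield held_out, train, test

# one entry per lesion candidate, labeled by its source patient
ids = ["p1", "p1", "p2", "p3", "p3"]
splits = list(leave_one_patient_out(ids))
```

Splitting at the patient level is what makes the reported sensitivity and false-positive rates honest estimates of per-patient performance.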

15.
Proc SPIE Int Soc Opt Eng ; 9414: 94140Q, 2015 Mar 20.
Article in English | MEDLINE | ID: mdl-25844029

ABSTRACT

CT colonography (CTC) uses orally administered fecal-tagging agents to enhance retained fluid and feces that would otherwise obscure or imitate polyps on CTC images. To visualize the complete region of colon without residual materials, electronic cleansing (EC) can be used to perform virtual subtraction of the tagged materials from CTC images. However, current EC methods produce subtraction artifacts and they can fail to subtract unclearly tagged feces. We developed a novel multi-material EC (MUMA-EC) method that uses dual-energy CTC (DE-CTC) and machine-learning methods to improve the performance of EC. In our method, material decomposition is performed to calculate water-iodine decomposition images and virtual monochromatic (VIM) images. Using the images, a random forest classifier is used to label the regions of lumen air, soft tissue, fecal tagging, and their partial-volume boundaries. The electronically cleansed images are synthesized from the multi-material and VIM image volumes. For pilot evaluation, we acquired the clinical DE-CTC data of 7 patients. Preliminary results suggest that the proposed MUMA-EC method is effective and that it minimizes the three types of image artifacts that were present in previous EC methods.
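The water-iodine material decomposition above amounts, per voxel, to solving a small linear system in the two basis materials given the measurements at the two tube energies. A toy sketch of the image-domain version; the basis attenuation values below are invented placeholders, not calibrated coefficients:

```python
import numpy as np

# Hypothetical basis matrix: attenuation of pure water and pure iodine
# at the low- and high-energy acquisitions (illustrative units only).
A = np.array([[1.00, 30.0],    # low-energy:  water, iodine
              [1.00, 15.0]])   # high-energy: water, iodine

def decompose(mu_low, mu_high):
    """Solve the 2x2 basis-material system for (water, iodine) amounts."""
    return np.linalg.solve(A, np.array([mu_low, mu_high]))

# a single voxel measured at both energies
water, iodine = decompose(mu_low=1.60, mu_high=1.30)
```

The resulting per-voxel iodine map is what lets a classifier such as the random forest above distinguish tagged material from soft tissue more reliably than a single attenuation value can.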

16.
Proc SPIE Int Soc Opt Eng ; 9412, 2015 Feb 21.
Article in English | MEDLINE | ID: mdl-25918480

ABSTRACT

In recent years, dual-energy computed tomography (DECT) has been widely used in the clinical routine due to improved diagnostics capability from additional spectral information. One promising application for DECT is CT colonography (CTC) in combination with computer-aided diagnosis (CAD) for detection of lesions and polyps. While CAD has demonstrated in the past that it is able to detect small polyps, its performance is highly dependent on the quality of the input data. The presence of artifacts such as beam-hardening and noise in ultra-low-dose CTC may severely degrade detection performances of small polyps. In this work, we investigate and compare virtual monochromatic images, generated by image-based decomposition and projection-based decomposition, with respect to CAD performance. In the image-based method, reconstructed images are firstly decomposed into water and iodine before the virtual monochromatic images are calculated. On the contrary, in the projection-based method, the projection data are first decomposed before calculation of virtual monochromatic projection and reconstruction. Both material decomposition methods are evaluated with regards to the accuracy of iodine detection. Further, the performance of the virtual monochromatic images is qualitatively and quantitatively assessed. Preliminary results show that the projection-based method does not only have a more accurate detection of iodine, but also delivers virtual monochromatic images with reduced beam hardening artifacts in comparison with the image-based method. With regards to the CAD performance, the projection-based method yields an improved detection performance of polyps in comparison with that of the image-based method.
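The virtual monochromatic images compared above are synthesized by recombining the basis-material maps with each material's attenuation coefficient at the chosen energy, whichever domain (image or projection) the decomposition was performed in. A minimal sketch; the coefficient values are illustrative placeholders, not tabulated physics data:

```python
import numpy as np

def virtual_monochromatic(water_map, iodine_map, mu_water_E, mu_iodine_E):
    """Virtual monochromatic image at energy E: a per-voxel weighted
    recombination of the basis-material maps with the basis materials'
    attenuation coefficients at E."""
    return water_map * mu_water_E + iodine_map * mu_iodine_E

# toy 2x2 basis-material maps from a dual-energy decomposition
water  = np.array([[1.0, 1.0],
                   [0.9, 1.1]])
iodine = np.array([[0.00, 0.02],
                   [0.00, 0.01]])
vim = virtual_monochromatic(water, iodine, mu_water_E=0.19, mu_iodine_E=1.94)
```

Because the recombination is linear, errors in the decomposition propagate directly into the monochromatic image, which is why the projection-based decomposition's more accurate iodine estimates translate into fewer beam-hardening artifacts.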

17.
Abdom Imaging (2014) ; 8676: 159-168, 2014 Sep.
Article in English | MEDLINE | ID: mdl-26236780

ABSTRACT

In CT colonography (CTC), orally administered positive-contrast fecal-tagging agents can cause artificial elevation of the observed radiodensity of adjacent soft tissue. Such pseudo-enhancement makes it challenging to differentiate polyps and folds reliably from tagged materials, and it is also present in dual-energy CTC (DE-CTC). We developed a method that corrects for pseudo-enhancement on DE-CTC images without distorting the dual-energy information contained in the data. A pilot study was performed to evaluate the effect of the method visually and quantitatively by use of clinical non-cathartic low-dose DE-CTC data from 10 patients including 13 polyps covered partially or completely by iodine-based fecal tagging. The results indicate that the proposed method can be used to reduce the pseudo-enhancement distortion of DE-CTC images without losing material-specific dual-energy information. The method has potential application in improving the accuracy of automated image-processing applications, such as computer-aided detection and virtual bowel cleansing in CTC.

18.
Abdom Imaging (2014) ; 8676: 169-178, 2014 Sep.
Article in English | MEDLINE | ID: mdl-26236781

ABSTRACT

In CT colonography, orally administered positive-contrast fecal-tagging agents are used for differentiating residual fluid and feces from true lesions. However, the presence of high-density tagging agent in the colon can introduce erroneous artifacts, such as local pseudo-enhancement and beam-hardening, on the reconstructed CT images, thereby complicating reliable detection of soft-tissue lesions. In dual-energy CT colonography, such image artifacts can be reduced by the calculation of virtual monochromatic CT images, which provide more accurate quantitative attenuation measurements than conventional single-energy CT colonography. In practice, however, virtual monochromatic images may still contain some pseudo-enhancement artifacts, and efforts to minimize radiation dose may enhance such artifacts. In this study, we evaluated the effect of image-based pseudo-enhancement post-correction on virtual monochromatic images in standard-dose and low-dose dual-energy CT colonography. The mean CT values of the virtual monochromatic standard-dose CT images of 51 polyps and those of the virtual monochromatic low-dose CT images of 20 polyps were measured without and with the pseudo-enhancement correction. Statistically significant differences were observed between uncorrected and pseudo-enhancement-corrected images of polyps covered by fecal tagging in standard-dose CT (p < 0.001) and in low-dose CT (p < 0.05). The results indicate that image-based pseudo-enhancement post-correction can be useful for optimizing the performance of image-processing applications in virtual monochromatic CT colonography.

19.
Abdom Imaging (2013); 8198: 73-80, 2013.
Article in English | MEDLINE | ID: mdl-25580475

ABSTRACT

Reliable computer-aided detection (CADe) of small polyps and flat lesions is limited by the relatively low image resolution of computed tomographic colonography (CTC). We developed a sinogram-based super-resolution (SR) method to enhance the images of lesion candidates detected by CADe. First, CADe is used to detect lesion candidates at high sensitivity from conventional CTC images. Next, the signal patterns of the lesion candidates are enhanced in the sinogram domain by use of non-uniform compressive sampling and iterative reconstruction to produce SR images of the lesion candidates. For pilot evaluation, an anthropomorphic phantom containing simulated lesions was filled partially with fecal tagging and scanned with a CT scanner. A fully automated CADe scheme was used to detect lesion candidates in the images reconstructed at the conventional 0.61-mm and at the 0.10-mm SR image resolution. The proof-of-concept results indicate that the SR method has the potential to reduce the number of false-positive (FP) CADe detections below that obtainable with conventional CTC imaging technology.
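The iterative-reconstruction step can be illustrated on a toy linear system: given non-uniform linear samples y = Ax of an unknown signal, a Landweber iteration x <- x + step * A^T (y - Ax) recovers x. This stands in for the sinogram-domain reconstruction only in spirit; the measurement matrix, test signal, iteration count, and step size below are all assumptions.

```python
# Toy Landweber iteration: recover a 4-sample signal from five non-uniform
# linear measurements y = A x. Illustrative only; A, x_true, and the step
# size are assumptions, not the paper's sampling scheme.

A = [            # each row sums a different, non-uniform pair of unknowns
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 1, 0],
]
x_true = [2.0, 1.0, 3.0, 0.5]
y = [sum(a * v for a, v in zip(row, x_true)) for row in A]

def landweber(A, y, n_iter=2000, step=0.2):
    """Iterate x <- x + step * A^T (y - A x), starting from zero."""
    x = [0.0] * len(A[0])
    for _ in range(n_iter):
        r = [yi - sum(a * v for a, v in zip(row, x))   # residual y - A x
             for row, yi in zip(A, y)]
        for j in range(len(x)):
            x[j] += step * sum(A[i][j] * r[i] for i in range(len(A)))
    return x

print([round(v, 3) for v in landweber(A, y)])  # -> [2.0, 1.0, 3.0, 0.5]
```

Because the system here is consistent and full-rank, the iteration converges to the true signal; with noisy or undersampled sinogram data, early stopping or regularization would be needed.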

20.
Ann Intern Med; 156(10): 692-702, 2012 May 15.
Article in English | MEDLINE | ID: mdl-22586008

ABSTRACT

BACKGROUND: Colon screening by optical colonoscopy (OC) or computed tomographic colonography (CTC) requires a laxative bowel preparation, which inhibits screening participation. OBJECTIVE: To assess the performance in detecting adenomas 6 mm or larger, and the patient experience, of laxative-free, computer-aided CTC. DESIGN: Prospective test comparison of laxative-free CTC and OC. The CTC included electronic cleansing and computer-aided detection. Optical colonoscopy examinations were initially blinded to the CTC results, which were subsequently revealed during colonoscope withdrawal; this method permitted reexamination to resolve discrepant findings. Unblinded OC served as the reference standard. (ClinicalTrials.gov registration number: NCT01200303). SETTING: Multicenter ambulatory imaging and endoscopy centers. PARTICIPANTS: 605 adults aged 50 to 85 years at average to moderate risk for colon cancer. MEASUREMENTS: Per-patient sensitivity and specificity of CTC and first-pass OC for detecting adenomas at thresholds of 10 mm or greater, 8 mm or greater, and 6 mm or greater; per-lesion sensitivity; and survey data describing patient experience with the preparations and examinations. RESULTS: For adenomas 10 mm or larger, per-patient sensitivity of CTC was 0.91 (95% CI, 0.71 to 0.99) and specificity was 0.85 (CI, 0.82 to 0.88); sensitivity of OC was 0.95 (CI, 0.77 to 1.00) and specificity was 0.89 (CI, 0.86 to 0.91). Sensitivity of CTC was 0.70 (CI, 0.53 to 0.83) for adenomas 8 mm or larger and 0.59 (CI, 0.47 to 0.70) for those 6 mm or larger; sensitivity of OC was 0.88 (CI, 0.73 to 0.96) for adenomas 8 mm or larger and 0.76 (CI, 0.64 to 0.85) for those 6 mm or larger. The specificity of OC was 0.91 at the 8-mm threshold and 0.94 at the 6-mm threshold, greater than that of CTC, which was 0.86 and 0.88, respectively (P = 0.02). Reported participant experience for comfort and difficulty of examination preparation was better with CTC than with OC. LIMITATIONS: There were 3 CTC readers. The survey instrument was not independently validated. CONCLUSION: Computed tomographic colonography was accurate in detecting adenomas 10 mm or larger but less so for smaller lesions. Patient experience was better with laxative-free CTC. These results suggest a possible role for laxative-free CTC as an alternative screening method.
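The per-patient sensitivity and specificity figures above are binomial proportions with confidence intervals. A minimal sketch of how such numbers are computed follows, using a Wilson score interval; the trial may have used a different interval method, and the counts below are invented for illustration, not taken from the study.

```python
# How per-patient sensitivity/specificity and a binomial CI are computed.
# Wilson score interval shown here; the counts are invented for illustration.
import math

def wilson_ci(k, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

def sensitivity(tp, fn):       # true positives / all patients with disease
    return tp / (tp + fn)

def specificity(tn, fp):       # true negatives / all patients without disease
    return tn / (tn + fp)

# Hypothetical counts: 21 of 23 patients with a large adenoma detected.
sens = sensitivity(21, 2)
lo, hi = wilson_ci(21, 23)
print(f"sensitivity {sens:.2f} (CI {lo:.2f} to {hi:.2f})")
```

The wide intervals reported for the 10-mm threshold reflect the small number of patients harboring large adenomas, exactly as the interval width here grows when n is small.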


Subject(s)
Adenomatous Polyps/diagnostic imaging , Colonic Polyps/diagnostic imaging , Colonography, Computed Tomographic/methods , Adenomatous Polyps/pathology , Aged , Aged, 80 and over , Asymptomatic Diseases , Colonic Polyps/pathology , Colonography, Computed Tomographic/adverse effects , Colonoscopy/adverse effects , Colonoscopy/methods , Female , Humans , Laxatives , Male , Middle Aged , Predictive Value of Tests , Prospective Studies