Results 1 - 5 of 5
1.
Radiographics; 38(7): 2034-2050, 2018.
Article in English | MEDLINE | ID: mdl-30422761

ABSTRACT

Electronic cleansing (EC) is used for computational removal of residual feces and fluid tagged with an orally administered contrast agent on CT colonographic images to improve the visibility of polyps during virtual endoscopic "fly-through" reading. A recent trend in CT colonography is to perform a low-dose CT scanning protocol with the patient having undergone reduced- or noncathartic bowel preparation. Although several EC schemes exist, they have been developed for use with cathartic bowel preparation and high-radiation-dose CT, and thus, at a low dose with noncathartic bowel preparation, they tend to generate cleansing artifacts that distract and mislead readers. Deep learning can be used for improvement of the image quality with EC at CT colonography. Deep learning EC can produce substantially fewer cleansing artifacts at dual-energy than at single-energy CT colonography, because the dual-energy information can be used to identify relevant material in the colon more precisely than is possible with the single x-ray attenuation value. Because the number of annotated training images is limited at CT colonography, transfer learning can be used for appropriate training of deep learning algorithms. The purposes of this article are to review the causes of cleansing artifacts that distract and mislead readers in conventional EC schemes, to describe the applications of deep learning and dual-energy CT colonography to EC of the colon, and to demonstrate the improvements in image quality with EC and deep learning at single-energy and dual-energy CT colonography with noncathartic bowel preparation. ©RSNA, 2018.
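The dual-energy advantage described above can be illustrated with a toy example (all attenuation values below are hypothetical, not from the article): two voxels that share the same attenuation at one energy become separable once the low-/high-kVp ratio is considered.

```python
# Toy illustration: two voxels that are indistinguishable from a single
# x-ray attenuation value become separable with dual-energy information.
# All HU values below are hypothetical.

# (label, HU at low kVp, HU at high kVp)
voxels = [
    ("iodine-tagged fluid", 300.0, 180.0),  # iodine attenuates much more at low kVp
    ("dense soft tissue",   300.0, 290.0),  # nearly energy-independent attenuation
]

def classify_single_energy(hu_low, threshold=250.0):
    # With only one attenuation value, both voxels exceed the threshold
    # and are misclassified as the same material.
    return "tagged" if hu_low > threshold else "untagged"

def classify_dual_energy(hu_low, hu_high, ratio_threshold=1.3):
    # The low/high-kVp ratio reflects the material's effective atomic
    # number, which separates iodine-tagged material from soft tissue.
    return "tagged" if hu_low / hu_high > ratio_threshold else "untagged"

for label, lo, hi in voxels:
    print(label, classify_single_energy(lo), classify_dual_energy(lo, hi))
```

Here the single-energy rule labels both voxels "tagged", while the ratio rule keeps only the iodine-tagged fluid, which is the kind of material discrimination the dual-energy EC scheme exploits.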


Subjects
Colonography, Computed Tomographic/methods, Colorectal Neoplasms/diagnostic imaging, Deep Learning, Algorithms, Cathartics/administration & dosage, Contrast Media, Feces, Humans, Radiation Dosage
2.
Oral Radiol; 39(3): 553-562, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36753006

ABSTRACT

OBJECTIVES: A videofluoroscopic swallowing study (VFSS) is conducted to detect aspiration. However, aspiration occurs within a short time and is difficult to detect. If deep learning can detect aspiration with high accuracy, clinicians can focus on the diagnosis of the detected aspirations. We studied whether aspiration on VFSS images can be classified using rapid-prototyping deep-learning tools. METHODS: VFSS videos were separated into individual image frames. A region of interest was defined on the pharynx. Three convolutional neural networks (CNNs), namely a Simple-Layer CNN, a Multiple-Layer CNN, and a Modified LeNet, were designed for the classification. The performance of the CNNs was compared in terms of the areas under their receiver-operating characteristic curves (AUCs). RESULTS: A total of 18,333 images obtained through data augmentation were selected for the evaluation. The CNNs yielded sensitivities of 78.8%-87.6%, specificities of 91.9%-98.1%, and overall accuracies of 85.8%-91.7%. The AUC of 0.974 obtained for both the Simple-Layer CNN and the Modified LeNet was significantly higher than that obtained for the Multiple-Layer CNN (AUC of 0.936) (p < 0.001). CONCLUSIONS: The results of this study show that deep learning has potential for detecting aspiration with high accuracy.
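The AUC used to compare the CNNs above equals the probability that a randomly chosen positive (aspiration) frame receives a higher score than a randomly chosen negative frame. A minimal rank-based computation, with made-up scores rather than the study's data, might look like:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) score pairs ranked correctly,
    counting ties as half-correct."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier scores for aspiration (positive) and
# non-aspiration (negative) frames.
pos = [0.9, 0.8, 0.6]
neg = [0.7, 0.3, 0.2]
print(auc(pos, neg))  # 8 of 9 pairs ranked correctly -> 0.888...
```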


Subjects
Deep Learning, Deglutition, Fluoroscopy/methods, Neural Networks, Computer, Area Under Curve
3.
Cancers (Basel); 14(17), 2022 Aug 26.
Article in English | MEDLINE | ID: mdl-36077662

ABSTRACT

Existing electronic cleansing (EC) methods for computed tomographic colonography (CTC) are generally based on image segmentation, which limits their accuracy to that of the underlying voxels. Because of the limitations of the available CTC datasets for training, traditional deep learning is of limited use in EC. The purpose of this study was to evaluate the technical feasibility of using a novel self-supervised adversarial learning scheme to perform EC with a limited training dataset with subvoxel accuracy. A three-dimensional (3D) generative adversarial network (3D GAN) was pre-trained to perform EC on CTC datasets of an anthropomorphic phantom. The 3D GAN was then fine-tuned to each input case by use of the self-supervised scheme. The architecture of the 3D GAN was optimized by use of a phantom study. The visually perceived quality of the virtual cleansing by the resulting 3D GAN compared favorably to that of commercial EC software on the virtual 3D fly-through examinations of 18 clinical CTC cases. Thus, the proposed self-supervised 3D GAN, which can be trained to perform EC on a small dataset without image annotations with subvoxel accuracy, is a potentially effective approach for addressing the remaining technical problems of EC in CTC.
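The pre-train-then-fine-tune scheme above can be caricatured with a one-parameter least-squares model (a deliberately tiny stand-in for illustration, not the article's 3D GAN): the model is first fitted on phantom data, then adapted to each input case by minimizing a loss computed from that case alone.

```python
def fit(w, data, lr=0.1, steps=200):
    """Minimize the mean squared error of y ~ w * x by gradient descent.
    Stands in for both pre-training and per-case fine-tuning."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Pre-training: paired (uncleansed, cleansed) phantom samples (toy values).
phantom = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # optimum w = 2
w = fit(0.0, phantom)

# Fine-tuning: adapt the pre-trained parameter to a single case whose
# self-derived targets favor a slightly different optimum (w = 2.2),
# mirroring the per-case adaptation described in the abstract.
case = [(1.0, 2.2), (2.0, 4.4)]
w_case = fit(w, case, steps=100)
print(round(w, 3), round(w_case, 3))
```

The design point the toy preserves is that fine-tuning starts from the pre-trained parameter rather than from scratch, so only a small, case-specific correction has to be learned.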

4.
Med Image Anal; 73: 102159, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34303892

ABSTRACT

Because of the rapid spread and wide range of the clinical manifestations of the coronavirus disease 2019 (COVID-19), fast and accurate estimation of the disease progression and mortality is vital for the management of the patients. Currently available image-based prognostic predictors for patients with COVID-19 are largely limited to semi-automated schemes with manually designed features and supervised learning, and the survival analysis is largely limited to logistic regression. We developed a weakly unsupervised conditional generative adversarial network, called pix2surv, which can be trained to estimate the time-to-event information for survival analysis directly from the chest computed tomography (CT) images of a patient. We show that the performance of pix2surv based on CT images significantly outperforms those of existing laboratory tests and image-based visual and quantitative predictors in estimating the disease progression and mortality of COVID-19 patients. Thus, pix2surv is a promising approach for performing image-based prognostic predictions.
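The contrast drawn above between logistic regression and time-to-event estimation comes down to the training target. A fixed-horizon binary label discards censored follow-up shorter than the horizon, whereas a survival target keeps the time and event indicator for every patient. A minimal sketch with hypothetical follow-up data:

```python
# Hypothetical cohort: (patient, days to event or last follow-up, event observed?)
cohort = [
    ("A", 14.0, True),
    ("B", 45.0, True),
    ("C", 60.0, False),   # censored: still event-free at day 60
]

HORIZON = 30.0

def binary_label(days, observed):
    """Fixed-horizon label for logistic regression: did the event occur
    within HORIZON days? A record censored before the horizon has an
    unknown outcome and would have to be dropped."""
    if not observed and days < HORIZON:
        return None  # outcome at the horizon is unknown
    return days <= HORIZON and observed

def time_to_event_target(days, observed):
    """Survival-analysis target: keep the time and the event indicator,
    so censored follow-up still contributes information."""
    return (days, observed)

print([(p, binary_label(d, o)) for p, d, o in cohort])
```

Estimating the time-to-event target directly, as the abstract describes, avoids both the information loss of binarization and the dropped censored records.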


Subjects
COVID-19, Humans, Prognosis, SARS-CoV-2, Thorax, Tomography, X-Ray Computed
5.
Sci Rep; 11(1): 9263, 2021 Apr 29.
Article in English | MEDLINE | ID: mdl-33927287

ABSTRACT

The rapid increase of patients with coronavirus disease 2019 (COVID-19) has introduced major challenges to healthcare services worldwide. Therefore, fast and accurate clinical assessment of COVID-19 progression and mortality is vital for the management of COVID-19 patients. We developed an automated image-based survival prediction model, called U-survival, which combines deep learning of chest CT images with the established survival analysis methodology of an elastic-net Cox survival model. In an evaluation of 383 COVID-19 positive patients from two hospitals, the prognostic bootstrap prediction performance of U-survival was significantly higher (P < 0.0001) than those of existing laboratory and image-based reference predictors both for COVID-19 progression (maximum concordance index: 91.6% [95% confidence interval 91.5, 91.7]) and for mortality (88.7% [88.6, 88.9]), and the separation between the Kaplan-Meier survival curves of patients stratified into low- and high-risk groups was largest for U-survival (P < 3 × 10⁻¹⁴). The results indicate that U-survival can be used to provide automated and objective prognostic predictions for the management of COVID-19 patients.
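The concordance index reported above measures how often the model ranks patient risks consistently with observed outcomes; when every event time is observed, it reduces to the fraction of correctly ordered patient pairs. A minimal sketch with hypothetical risks and uncensored times (not the study's data, which involves censoring and bootstrap resampling):

```python
def concordance_index(risks, times):
    """Fraction of comparable patient pairs in which the patient with the
    higher predicted risk experienced the event earlier. Assumes all
    event times are observed (no censoring), so every pair with
    distinct times is comparable."""
    correct, comparable = 0.0, 0
    n = len(risks)
    for i in range(n):
        for j in range(i + 1, n):
            if times[i] == times[j]:
                continue  # tied times are skipped in this simple version
            comparable += 1
            earlier, later = (i, j) if times[i] < times[j] else (j, i)
            if risks[earlier] > risks[later]:
                correct += 1.0
            elif risks[earlier] == risks[later]:
                correct += 0.5  # tied risks count as half-correct
    return correct / comparable

# Hypothetical predicted risks and observed times-to-event (days):
# higher risk should mean an earlier event.
risks = [0.9, 0.4, 0.7, 0.1]
times = [5.0, 30.0, 12.0, 60.0]
print(concordance_index(risks, times))  # all pairs ordered correctly -> 1.0
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why the reported 91.6% and 88.7% indicate strong discrimination.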


Subjects
COVID-19/diagnosis, Lung/diagnostic imaging, SARS-CoV-2/physiology, Aged, Automation, COVID-19/mortality, Diagnostic Imaging, Disease Progression, Female, Humans, Male, Middle Aged, Predictive Value of Tests, Prognosis, Survival Analysis, Tomography, X-Ray Computed