Results 1 - 4 of 4
1.
BMC Med Inform Decis Mak ; 22(1): 102, 2022 04 15.
Article in English | MEDLINE | ID: mdl-35428335

ABSTRACT

BACKGROUND: There is progress to be made in building artificially intelligent systems to detect abnormalities that are not only accurate but can handle the true breadth of findings that radiologists encounter in body (chest, abdomen, and pelvis) computed tomography (CT). Currently, the major bottleneck for developing multi-disease classifiers is a lack of manually annotated data. The purpose of this work was to develop high-throughput multi-label annotators for body CT reports that can be applied across a variety of abnormalities, organs, and disease states, thereby mitigating the need for human annotation.

METHODS: We used a dictionary approach to develop rule-based algorithms (RBA) for extraction of disease labels from radiology text reports. We targeted three organ systems (lungs/pleura, liver/gallbladder, kidneys/ureters) with four diseases per system based on their prevalence in our dataset. To expand the algorithms beyond pre-defined keywords, attention-guided recurrent neural networks (RNN) were trained using the RBA-extracted labels to classify reports as positive for one or more diseases or normal for each organ system. Effects on disease classification performance were evaluated for random initialization versus pre-trained embeddings, as well as for different training dataset sizes. The RBA was tested on a subset of 2158 manually labeled reports, and performance was reported as accuracy and F-score. The RNN was tested against a test set of 48,758 reports labeled by RBA, and performance was reported as area under the receiver operating characteristic curve (AUC), with 95% CIs calculated using the DeLong method.

RESULTS: Manual validation of the RBA confirmed 91-99% accuracy across the 15 different labels. Our models extracted disease labels from 261,229 radiology reports of 112,501 unique subjects. Pre-trained models outperformed random initialization across all diseases. As the training dataset size was reduced, performance was robust except for a few diseases with a relatively small number of cases. Pre-trained classification AUCs reached > 0.95 for all four disease outcomes and normality across all three organ systems.

CONCLUSIONS: Our label-extracting pipeline was able to encompass a variety of cases and diseases in body CT reports by generalizing beyond strict rules with exceptional accuracy. The method described can be easily adapted to enable automated labeling of hospital-scale medical datasets for training image-based disease classifiers.


Subject(s)
Deep Learning, Abdomen, Humans, Neural Networks, Computer, Pelvis/diagnostic imaging, Tomography, X-Ray Computed
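The dictionary-based rule extraction described in this abstract can be illustrated with a minimal sketch: match disease keywords in a report, and suppress matches preceded by a negation cue. The keyword lists, negation cues, and context-window size below are illustrative assumptions, not the study's actual vocabularies.

```python
import re

# Hypothetical keyword dictionary; the paper's actual RBA vocabularies are not
# reproduced here.
DISEASE_KEYWORDS = {
    "emphysema": ["emphysema", "emphysematous"],
    "effusion": ["pleural effusion", "effusions"],
}
# Illustrative negation cues checked in a short window before each match.
NEGATIONS = ["no ", "without ", "negative for "]

def extract_labels(report: str) -> dict:
    """Return a 0/1 label per disease based on non-negated keyword matches."""
    text = report.lower()
    labels = {}
    for disease, keywords in DISEASE_KEYWORDS.items():
        found = False
        for kw in keywords:
            for m in re.finditer(re.escape(kw), text):
                # Look back up to 30 characters for a negation cue.
                window = text[max(0, m.start() - 30):m.start()]
                if not any(neg in window for neg in NEGATIONS):
                    found = True
        labels[disease] = int(found)
    return labels
```

A report such as "Mild emphysema. No pleural effusion." would be labeled positive for emphysema and negative for effusion under these assumed rules.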
2.
ArXiv ; 2024 Sep 24.
Article in English | MEDLINE | ID: mdl-38699170

ABSTRACT

IMPORTANCE: Clinical imaging trials are crucial for definitive evaluation of medical innovations, but the process is inefficient, expensive, and ethically constrained. The virtual imaging trial (VIT) approach addresses these limitations by emulating the components of a clinical trial. An in silico rendition of the National Lung Screening Trial (NLST) via a Virtual Lung Screening Trial (VLST) demonstrates the promise of VITs to expedite clinical trials, reduce risks to subjects, and facilitate the optimal use of imaging technologies in clinical settings.

DESIGN, SETTING, AND PARTICIPANTS: A diverse virtual patient population of 294 subjects was created from human models (XCAT) emulating the characteristics of cases in the NLST, with two types of simulated lung nodules. The cohort was assessed using simulated CT and CXR systems to generate images that reflect the NLST imaging technologies. Deep learning models trained for lesion detection in CXR and CT served as virtual readers.

RESULTS: The study analyzed 294 CT and CXR simulated images from 294 virtual patients, with a lesion-level AUC of 0.81 (95% CI: 0.79-0.84) for CT and 0.56 (95% CI: 0.54-0.58) for CXR. At the patient level, CT demonstrated an AUC of 0.84 (95% CI: 0.80-0.89), compared with 0.52 (95% CI: 0.45-0.58) for CXR. Subgroup analyses of the CT results indicated superior detection of homogeneous lesions (lesion-level AUC 0.97) compared with heterogeneous lesions (lesion-level AUC 0.72). Performance was particularly high for larger nodules (AUC of 0.98 for nodules > 8 mm). The VLST results closely mirrored those of the NLST, particularly in size-based detection trends, with CT achieving high AUCs for nodules > 8 mm and similar challenges in detecting smaller nodules.

CONCLUSION AND RELEVANCE: The VIT results closely replicated those of the earlier NLST, underscoring the potential of VITs to replicate real clinical imaging trials.
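This abstract reports both lesion-level and patient-level AUCs. A minimal sketch of one common way to relate the two is below: aggregate a patient's per-lesion detection scores by taking the maximum, then compute AUC in its pairwise (Mann-Whitney) form. The max-score aggregation rule and the scores are assumptions for illustration; the abstract does not specify how patient-level scores were derived.

```python
def patient_score(lesion_scores):
    # Assumed aggregation rule: a patient's score is the maximum score
    # over that patient's lesion candidates.
    return max(lesion_scores)

def auc(pos_scores, neg_scores):
    """AUC as the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half a win (Mann-Whitney form)."""
    wins = 0.0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

With perfectly separated scores this returns 1.0; identical score distributions give 0.5, matching the near-chance CXR patient-level result reported above.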

3.
Radiol Artif Intell ; 4(1): e210026, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35146433

ABSTRACT

PURPOSE: To design multidisease classifiers for body CT scans for three different organ systems using automatically extracted labels from radiology text reports.

MATERIALS AND METHODS: This retrospective study included a total of 12 092 patients (mean age, 57 years ± 18 [standard deviation]; 6172 women) for model development and testing. Rule-based algorithms were used to extract 19 225 disease labels from 13 667 body CT scans performed between 2012 and 2017. Using a three-dimensional DenseVNet, three organ systems were segmented: lungs and pleura, liver and gallbladder, and kidneys and ureters. For each organ system, a three-dimensional convolutional neural network classified each as no apparent disease or for the presence of four common diseases, for a total of 15 different labels across all three models. Testing was performed on a subset of 2158 CT volumes relative to 2875 manually derived reference labels from 2133 patients (mean age, 58 years ± 18; 1079 women). Performance was reported as area under the receiver operating characteristic curve (AUC), with 95% CIs calculated using the DeLong method.

RESULTS: Manual validation of the extracted labels confirmed 91%-99% accuracy across the 15 different labels. AUCs for lungs and pleura labels were as follows: atelectasis, 0.77 (95% CI: 0.74, 0.81); nodule, 0.65 (95% CI: 0.61, 0.69); emphysema, 0.89 (95% CI: 0.86, 0.92); effusion, 0.97 (95% CI: 0.96, 0.98); and no apparent disease, 0.89 (95% CI: 0.87, 0.91). AUCs for liver and gallbladder were as follows: hepatobiliary calcification, 0.62 (95% CI: 0.56, 0.67); lesion, 0.73 (95% CI: 0.69, 0.77); dilation, 0.87 (95% CI: 0.84, 0.90); fatty, 0.89 (95% CI: 0.86, 0.92); and no apparent disease, 0.82 (95% CI: 0.78, 0.85). AUCs for kidneys and ureters were as follows: stone, 0.83 (95% CI: 0.79, 0.87); atrophy, 0.92 (95% CI: 0.89, 0.94); lesion, 0.68 (95% CI: 0.64, 0.72); cyst, 0.70 (95% CI: 0.66, 0.73); and no apparent disease, 0.79 (95% CI: 0.75, 0.83).

CONCLUSION: Weakly supervised deep learning models were able to classify diverse diseases in multiple organ systems from CT scans.

Keywords: CT, Diagnosis/Classification/Application Domain, Semisupervised Learning, Whole-Body Imaging

© RSNA, 2022
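Both this study and the first one report 95% CIs computed with the DeLong method. A minimal sketch of that computation in its direct O(mn) pairwise form is below; the example scores in the test are illustrative, not study data.

```python
import math

def delong_ci(pos, neg, z=1.96):
    """AUC with a DeLong 95% CI from positive-class and negative-class scores."""
    m, n = len(pos), len(neg)
    psi = lambda x, y: 1.0 if x > y else (0.5 if x == y else 0.0)
    # Structural components: mean placement value per positive and per negative.
    v10 = [sum(psi(x, y) for y in neg) / n for x in pos]
    v01 = [sum(psi(x, y) for x in pos) / m for y in neg]
    auc = sum(v10) / m

    def sample_var(vs, mean):
        k = len(vs)
        return sum((v - mean) ** 2 for v in vs) / (k - 1) if k > 1 else 0.0

    # DeLong variance estimate for the AUC, then a normal-approximation CI.
    se = math.sqrt(sample_var(v10, auc) / m + sample_var(v01, auc) / n)
    lo, hi = max(0.0, auc - z * se), min(1.0, auc + z * se)
    return auc, (lo, hi)
```

The quadratic pairwise form shown here is fine for small samples; for test sets the size of those above (thousands of volumes), the midrank-based fast DeLong formulation is typically used instead.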

4.
Comput Biol Med ; 120: 103738, 2020 05.
Article in English | MEDLINE | ID: mdl-32421644

ABSTRACT

BACKGROUND AND OBJECTIVE: Automatic segmentation of skin lesions is considered a crucial step in Computer-aided Diagnosis (CAD) systems for melanoma detection. Despite its significance, skin lesion segmentation remains an unsolved challenge due to variability in lesion color, texture, and shape, and to indistinguishable boundaries.

METHODS: In this study, we present a new automatic semantic segmentation network for robust skin lesion segmentation, named Dermoscopic Skin Network (DSNet). To reduce the number of parameters and make the network lightweight, we used depth-wise separable convolution in lieu of standard convolution to project the learned discriminating features onto the pixel space at different stages of the encoder. Additionally, we implemented both a U-Net and a Fully Convolutional Network (FCN8s) to compare against the proposed DSNet.

RESULTS: We evaluated the proposed model on two publicly available datasets, ISIC-2017 and PH2. The obtained mean Intersection over Union (mIoU) is 77.5% on ISIC-2017 and 87.0% on PH2, outperforming the ISIC-2017 challenge winner by 1.0% with respect to mIoU. The proposed network also outperformed U-Net and FCN8s by 3.6% and 6.8%, respectively, with respect to mIoU on the ISIC-2017 dataset.

CONCLUSION: Our network for skin lesion segmentation outperforms the other methods discussed in the article and provides better-segmented masks on two different test datasets, which can lead to better performance in melanoma detection. The trained model, along with the source code and predicted masks, is publicly available.


Subject(s)
Melanoma, Skin Diseases, Skin Neoplasms, Dermoscopy, Humans, Melanoma/diagnostic imaging, Neural Networks, Computer, Skin/diagnostic imaging, Skin Neoplasms/diagnostic imaging
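The parameter savings that motivate DSNet's use of depth-wise separable convolution can be seen from a simple count: a standard convolution learns one k × k × C_in filter per output channel, while the separable form factors this into a per-channel k × k depthwise step plus a 1 × 1 pointwise mixing step. A minimal sketch (bias terms omitted; the layer sizes in the test are illustrative, not DSNet's actual configuration):

```python
def standard_conv_params(k, c_in, c_out):
    # One k x k x c_in filter per output channel.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    depthwise = k * k * c_in   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1 x 1 convolution to mix channels
    return depthwise + pointwise
```

For a 3 × 3 layer with 64 input and 128 output channels, this gives 73,728 versus 8,768 parameters, roughly an 8x reduction, which is the sense in which the separable network is "lightweight".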