1.
Sci Data; 11(1): 688, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38926396

ABSTRACT

Automated medical image analysis systems often require large amounts of training data with high-quality labels, which are difficult and time-consuming to generate. This paper introduces Radiology Object in COntext version 2 (ROCOv2), a multimodal dataset consisting of radiological images with associated medical concepts and captions extracted from the PMC Open Access subset. It is an updated version of the ROCO dataset published in 2018 and includes 35,705 new images added to PMC since then. It further provides manually curated concepts for imaging modalities, with additional anatomical and directional concepts for X-rays. The dataset consists of 79,789 images and has been used, with minor modifications, in the concept detection and caption prediction tasks of ImageCLEFmedical Caption 2023. It is suitable for training image annotation models based on image-caption pairs, or for multi-label image classification using the Unified Medical Language System (UMLS) concepts provided with each image, as sketched below. In addition, it can be used for pre-training medical domain models and for evaluating deep learning models in multi-task learning.
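To illustrate the multi-label setup the abstract describes, the following minimal Python sketch builds multi-hot UMLS concept vectors from a hypothetical CSV layout; the file name and the "image_id"/"cuis" column names are assumptions for illustration, not the dataset's official distribution format.

```python
# Minimal sketch: building multi-hot UMLS concept labels for multi-label
# classification from a hypothetical ROCOv2-style CSV (the columns
# "image_id" and "cuis" are assumptions, not the official format).
import csv
from collections import defaultdict

def load_concept_labels(csv_path):
    """Map each image id to the set of UMLS CUIs annotated for it."""
    image_to_cuis = defaultdict(set)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # CUIs assumed semicolon-separated, e.g. "C0040405;C0817096"
            image_to_cuis[row["image_id"]].update(row["cuis"].split(";"))
    return image_to_cuis

def multi_hot(image_to_cuis):
    """Turn the CUI sets into fixed-length 0/1 label vectors."""
    vocab = sorted({c for cuis in image_to_cuis.values() for c in cuis})
    index = {c: i for i, c in enumerate(vocab)}
    labels = {}
    for image_id, cuis in image_to_cuis.items():
        vec = [0] * len(vocab)
        for c in cuis:
            vec[index[c]] = 1
        labels[image_id] = vec
    return vocab, labels

if __name__ == "__main__":
    cuis_by_image = load_concept_labels("train_concepts.csv")  # hypothetical file
    vocab, labels = multi_hot(cuis_by_image)
    print(f"{len(labels)} images, {len(vocab)} UMLS concepts")
```

The resulting vectors can feed any standard multi-label image classifier; pairing the same image ids with their captions gives the image-caption training data mentioned above.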


Subjects
Multimodal Imaging , Radiology , Humans , Image Processing, Computer-Assisted , Unified Medical Language System
2.
Invest Radiol; 2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38436405

ABSTRACT

OBJECTIVES: Accurately acquiring and assigning different contrast-enhanced phases in computed tomography (CT) is relevant for clinicians and for artificial intelligence orchestration to select the most appropriate series for analysis. However, this information is commonly extracted from the CT metadata, which is often incorrect. This study aimed to develop an automatic pipeline for classifying intravenous (IV) contrast phases and for identifying contrast media in the gastrointestinal tract (GIT).

MATERIALS AND METHODS: This retrospective study used 1200 CT scans collected at the investigating institution between January 4, 2016 and September 12, 2022, and 240 CT scans from multiple centers from The Cancer Imaging Archive for external validation. The open-source segmentation algorithm TotalSegmentator was used to identify regions of interest (pulmonary artery, aorta, stomach, portal/splenic vein, liver, portal vein/hepatic veins, inferior vena cava, duodenum, small bowel, colon, left/right kidney, urinary bladder), and machine learning classifiers were trained with 5-fold cross-validation to classify IV contrast phases (noncontrast, pulmonary arterial, arterial, venous, and urographic) and GIT contrast enhancement. The performance of the ensembles was evaluated using the receiver operating characteristic area under the curve (AUC) and 95% confidence intervals (CIs).

RESULTS: For the IV phase classification task, the following AUC scores were obtained on the internal test set: 99.59% [95% CI, 99.58-99.63] for the noncontrast phase, 99.50% [95% CI, 99.49-99.52] for the pulmonary-arterial phase, 99.13% [95% CI, 99.10-99.15] for the arterial phase, 99.80% [95% CI, 99.79-99.81] for the venous phase, and 99.70% [95% CI, 99.68-99.70] for the urographic phase. For the external dataset, mean AUCs of 97.33% [95% CI, 97.27-97.35] and 97.38% [95% CI, 97.34-97.41] were achieved across all contrast phases for the first and second annotators, respectively. Contrast media in the GIT were identified with an AUC of 99.90% [95% CI, 99.89-99.90] in the internal dataset, whereas in the external dataset, AUCs of 99.73% [95% CI, 99.71-99.73] and 99.31% [95% CI, 99.27-99.33] were achieved with the first and second annotator, respectively.

CONCLUSIONS: The integration of open-source segmentation networks and classifiers effectively classified contrast phases and identified GIT contrast enhancement using anatomical landmarks.
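A minimal sketch of the classification stage only, assuming per-scan features (for example, the mean attenuation inside each TotalSegmentator region of interest) have already been extracted: a classifier is trained under 5-fold cross-validation and scored with one-vs-rest ROC AUC per phase. The random feature matrix and the choice of HistGradientBoostingClassifier are placeholders; the paper's exact feature set and classifier ensemble are not specified in the abstract.

```python
# Sketch of phase classification from ROI-level features under 5-fold CV.
# X and y are synthetic placeholders standing in for real extracted features
# (e.g., mean HU per segmented organ) and the reference phase labels.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 13))    # placeholder features: 13 ROIs per scan
y = rng.integers(0, 5, size=1200)  # 5 IV phases: noncontrast .. urographic

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
proba = cross_val_predict(
    HistGradientBoostingClassifier(random_state=0), X, y, cv=cv,
    method="predict_proba",
)

# One-vs-rest AUC per phase, mirroring the per-phase AUCs reported above.
for phase in range(5):
    auc = roc_auc_score((y == phase).astype(int), proba[:, phase])
    print(f"phase {phase}: AUC = {auc:.3f}")
```

Bootstrapping the cross-validated predictions would additionally yield the 95% confidence intervals reported in the results.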

3.
Diagnostics (Basel); 13(15), 2023 Aug 07.
Article in English | MEDLINE | ID: mdl-37568979

ABSTRACT

Spondylolysis is underdiagnosed and often missed on non-musculoskeletal abdominal CT imaging. Our aim was to assess the inter-reader agreement and diagnostic performance of a novel "Darth Vader sign" for the detection of spondylolysis on routine axial images. We performed a retrospective search of the institutional report archives using keyword strings for lumbar spondylolysis and spondylolisthesis. Abdominal CT scans from 53 spondylolysis cases (41% female) and from controls (n = 6) without spine abnormalities were identified. A total of 139 single axial slices covering the lumbar spine (86 normal images, 40 with spondylolysis, 13 with degenerative spondylolisthesis without spondylolysis) were exported. Two radiology residents rated all images for the presence or absence of the "Darth Vader sign". The diagnostic accuracy for both readers, as well as the inter-reader agreement, was calculated. The "Darth Vader sign" showed an inter-reader agreement of 0.77. Using the "Darth Vader sign", spondylolysis was detected with a sensitivity of 65.0-88.2% and a specificity of 96.2-99.0%. The "Darth Vader sign" shows excellent diagnostic performance with substantial inter-reader agreement for the detection of spondylolysis. Incorporating the "Darth Vader sign" into routine CT reading may be an easy yet effective way to improve the detection rate of spondylolysis in non-musculoskeletal cases and hence improve patient care.
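A minimal sketch of the evaluation metrics reported above: per-reader sensitivity and specificity against the reference standard, plus Cohen's kappa between the two readers. The abstract does not name the agreement statistic, so kappa is an assumption consistent with the "substantial agreement" wording, and the rating arrays are illustrative placeholders rather than study data.

```python
# Sketch of the reader-study metrics: sensitivity/specificity per reader
# and inter-reader agreement (Cohen's kappa, assumed). Ratings are fake.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

truth = np.array([1, 1, 0, 0, 1, 0, 0, 1, 0, 0])     # 1 = spondylolysis present
reader_a = np.array([1, 1, 0, 0, 0, 0, 0, 1, 0, 0])  # hypothetical ratings
reader_b = np.array([1, 0, 0, 0, 1, 0, 0, 1, 0, 1])

for name, pred in (("reader A", reader_a), ("reader B", reader_b)):
    tn, fp, fn, tp = confusion_matrix(truth, pred).ravel()
    print(f"{name}: sensitivity {tp / (tp + fn):.2f}, "
          f"specificity {tn / (tn + fp):.2f}")

print(f"inter-reader kappa: {cohen_kappa_score(reader_a, reader_b):.2f}")
```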
