Results 1 - 4 of 4
1.
Invest Radiol ; 59(9): 635-645, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-38436405

ABSTRACT

OBJECTIVES: Accurately acquiring and assigning different contrast-enhanced phases in computed tomography (CT) is relevant for clinicians and for artificial intelligence orchestration to select the most appropriate series for analysis. However, this information is commonly extracted from the CT metadata, which is often incorrect. This study aimed to develop an automatic pipeline for classifying intravenous (IV) contrast phases and, additionally, for identifying contrast media in the gastrointestinal tract (GIT).

MATERIALS AND METHODS: This retrospective study used 1200 CT scans collected at the investigating institution between January 4, 2016 and September 12, 2022, and 240 CT scans from multiple centers from The Cancer Imaging Archive for external validation. The open-source segmentation algorithm TotalSegmentator was used to identify regions of interest (pulmonary artery, aorta, stomach, portal/splenic vein, liver, portal vein/hepatic veins, inferior vena cava, duodenum, small bowel, colon, left/right kidney, urinary bladder), and machine learning classifiers were trained with 5-fold cross-validation to classify IV contrast phases (noncontrast, pulmonary arterial, arterial, venous, and urographic) and GIT contrast enhancement. The performance of the ensembles was evaluated using the receiver operating characteristic area under the curve (AUC) and 95% confidence intervals (CIs).

RESULTS: For the IV phase classification task, the following AUC scores were obtained on the internal test set: 99.59% [95% CI, 99.58-99.63] for the noncontrast phase, 99.50% [95% CI, 99.49-99.52] for the pulmonary-arterial phase, 99.13% [95% CI, 99.10-99.15] for the arterial phase, 99.8% [95% CI, 99.79-99.81] for the venous phase, and 99.7% [95% CI, 99.68-99.7] for the urographic phase. For the external dataset, mean AUCs of 97.33% [95% CI, 97.27-97.35] and 97.38% [95% CI, 97.34-97.41] were achieved across all contrast phases for the first and second annotators, respectively. Contrast media in the GIT could be identified with an AUC of 99.90% [95% CI, 99.89-99.9] in the internal dataset, whereas in the external dataset, AUCs of 99.73% [95% CI, 99.71-99.73] and 99.31% [95% CI, 99.27-99.33] were achieved with the first and second annotators, respectively.

CONCLUSIONS: The integration of open-source segmentation networks and classifiers effectively classified contrast phases and identified GIT contrast enhancement using anatomical landmarks.
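The abstract does not detail the classifiers' feature set. Below is a minimal sketch of the general approach, assuming per-organ mean attenuation (HU) features computed from TotalSegmentator masks and a random-forest ensemble with 5-fold cross-validation; the mask file names, feature choice, and classifier are illustrative assumptions, not the paper's exact pipeline.

```python
# Hedged sketch: IV contrast-phase classification from per-organ mean HU.
# Assumes TotalSegmentator has already written one NIfTI mask per organ
# (e.g. aorta.nii.gz) into mask_dir; names below are illustrative.
import numpy as np
import nibabel as nib
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

ORGANS = ["pulmonary_artery", "aorta", "stomach", "portal_vein_and_splenic_vein",
          "liver", "inferior_vena_cava", "duodenum", "small_bowel", "colon",
          "kidney_left", "kidney_right", "urinary_bladder"]

def mean_hu_features(ct_path, mask_dir):
    """One feature per region of interest: mean HU inside the organ mask."""
    ct = nib.load(ct_path).get_fdata()
    feats = []
    for organ in ORGANS:
        mask = nib.load(f"{mask_dir}/{organ}.nii.gz").get_fdata() > 0
        feats.append(float(ct[mask].mean()) if mask.any() else 0.0)
    return np.array(feats)

# Stand-in data so the sketch runs end to end; in practice X would come from
# mean_hu_features(...) over each scan and y from the phase annotations.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(ORGANS)))
y = rng.integers(0, 5, size=200)  # 0..4: noncontrast .. urographic

clf = RandomForestClassifier(n_estimators=300, random_state=0)
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")
print("macro one-vs-rest AUC:", roc_auc_score(y, proba, multi_class="ovr"))
```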


Subjects
Contrast Media; Machine Learning; Tomography, X-Ray Computed; Humans; Retrospective Studies; Tomography, X-Ray Computed/methods; Male; Female; Gastrointestinal Tract/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Middle Aged; Algorithms
2.
Sci Data ; 11(1): 688, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38926396

ABSTRACT

Automated medical image analysis systems often require large amounts of training data with high-quality labels, which are difficult and time-consuming to generate. This paper introduces Radiology Object in COntext version 2 (ROCOv2), a multimodal dataset consisting of radiological images and associated medical concepts and captions extracted from the PMC Open Access subset. It is an updated version of the ROCO dataset published in 2018, and includes 35,705 new images added to PMC since 2018. It further provides manually curated concepts for imaging modalities, with additional anatomical and directional concepts for X-rays. The dataset consists of 79,789 images and has been used, with minor modifications, in the concept detection and caption prediction tasks of ImageCLEFmedical Caption 2023. The dataset is suitable for training image annotation models based on image-caption pairs, or for multi-label image classification using the Unified Medical Language System (UMLS) concepts provided with each image. In addition, it can be used for pre-training medical domain models and for evaluating deep learning models on multi-task learning.
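As a usage illustration, one way to wrap image-caption pairs with multi-hot UMLS concept labels for training is sketched below. The CSV layout (image_path, caption, semicolon-separated CUIs) is an assumption for illustration, not ROCOv2's actual distribution format.

```python
# Hypothetical dataset wrapper for image-caption / multi-label concept data.
import csv
from PIL import Image
import torch
from torch.utils.data import Dataset

class CaptionConceptDataset(Dataset):
    def __init__(self, csv_path, cui_vocab, transform=None):
        # cui_vocab: ordered list of all UMLS CUIs in the label space.
        with open(csv_path, newline="", encoding="utf-8") as f:
            self.rows = list(csv.DictReader(f))
        self.cui_to_idx = {c: i for i, c in enumerate(cui_vocab)}
        self.transform = transform

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, i):
        row = self.rows[i]
        image = Image.open(row["image_path"]).convert("RGB")
        if self.transform:
            image = self.transform(image)
        # Multi-hot target over the UMLS concept vocabulary.
        target = torch.zeros(len(self.cui_to_idx))
        for cui in row["cuis"].split(";"):
            if cui in self.cui_to_idx:
                target[self.cui_to_idx[cui]] = 1.0
        return image, row["caption"], target
```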


Subjects
Multimodal Imaging; Radiology; Humans; Image Processing, Computer-Assisted; Unified Medical Language System
3.
Comput Struct Biotechnol J ; 24: 639-660, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39502384

ABSTRACT

Background: The growth of biomedical literature presents challenges in extracting and structuring knowledge. Knowledge Graphs (KGs) offer a solution by representing relationships between biomedical entities. However, manual construction of KGs is labor-intensive and time-consuming, highlighting the need for automated methods. This work introduces BioKGrapher, a tool for automatic KG construction using large-scale publication data, with a focus on biomedical concepts related to specific medical conditions. BioKGrapher allows researchers to construct KGs from PubMed IDs.

Methods: The BioKGrapher pipeline begins with Named Entity Recognition and Linking (NER+NEL) to extract and normalize biomedical concepts from PubMed, mapping them to the Unified Medical Language System (UMLS). Extracted concepts are weighted and re-ranked using Kullback-Leibler divergence and local frequency balancing. These concepts are then integrated into hierarchical KGs, with relationships formed using terminologies such as SNOMED CT and NCIt. Downstream applications include multi-label document classification using Adapter-infused Transformer models.

Results: BioKGrapher effectively aligns generated concepts with clinical practice guidelines from the German Guideline Program in Oncology (GGPO), achieving F1-scores of up to 0.6. In multi-label classification, Adapter-infused models using a BioKGrapher cancer-specific KG improved micro F1-scores by up to 0.89 percentage points over a non-specific KG and 2.16 points over base models across three BERT variants. A drug-disease extraction case study identified indications for Nivolumab and Rituximab.

Conclusion: BioKGrapher is a tool for automatic KG construction that aligns with the GGPO and enhances downstream task performance. It offers a scalable solution for managing biomedical knowledge, with potential applications in literature recommendation, decision support, and drug repurposing.
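One plausible reading of the KL-based re-ranking step is to score each concept by its pointwise KL contribution p*log(p/q), where p is the concept's relative frequency in the condition-specific abstracts and q its frequency in a background corpus. The sketch below implements that reading with additive smoothing; it is an illustrative assumption, not BioKGrapher's exact formula.

```python
# Hedged sketch: re-rank UMLS concepts by pointwise KL contribution.
import math
from collections import Counter

def kl_rank(target_concepts, background_concepts, alpha=1.0):
    """Rank concepts by p*log(p/q); alpha is additive smoothing."""
    tgt, bg = Counter(target_concepts), Counter(background_concepts)
    vocab = set(tgt) | set(bg)
    n_t = sum(tgt.values()) + alpha * len(vocab)
    n_b = sum(bg.values()) + alpha * len(vocab)
    scores = {}
    for c in vocab:
        p = (tgt[c] + alpha) / n_t   # frequency in condition-specific corpus
        q = (bg[c] + alpha) / n_b    # frequency in background corpus
        scores[c] = p * math.log(p / q)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage with made-up CUI mention lists (not real corpus counts).
ranked = kl_rank(
    ["C0006826", "C0006826", "C0028259", "C0030705"],  # condition-specific
    ["C0030705", "C0030705", "C0086418", "C0028259"],  # background
)
print(ranked[:3])
```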

4.
Diagnostics (Basel) ; 13(15)2023 Aug 07.
Article in English | MEDLINE | ID: mdl-37568979

ABSTRACT

Spondylolysis is underdiagnosed and often missed on non-musculoskeletal abdominal CT imaging. Our aim was to assess the inter-reader agreement and diagnostic performance of a novel "Darth Vader sign" for the detection of spondylolysis on routine axial images. We performed a retrospective search of the institutional report archives using keyword strings for lumbar spondylolysis and spondylolisthesis. Abdominal CTs from 53 spondylolysis cases (41% female) and from controls (n = 6) without spine abnormalities were identified. A total of 139 single axial slices covering the lumbar spine (86 normal images, 40 with spondylolysis, 13 with degenerative spondylolisthesis without spondylolysis) were exported. Two radiology residents rated all images for the presence or absence of the "Darth Vader sign". The diagnostic accuracy for both readers, as well as the inter-reader agreement, was calculated. The "Darth Vader sign" showed an inter-reader agreement of 0.77. Using the "Darth Vader sign", spondylolysis was detected with a sensitivity of 65.0-88.2% and a specificity of 96.2-99.0%. The "Darth Vader sign" shows excellent diagnostic performance with substantial inter-reader agreement for the detection of spondylolysis. Using the "Darth Vader sign" in the routine CT reading workflow may be an easy yet effective way to improve the detection rate of spondylolysis in non-musculoskeletal cases and hence improve patient care.
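For readers unfamiliar with these reader-study metrics, the sketch below computes them, assuming agreement was measured as Cohen's kappa (the abstract says only "inter-reader agreement"); the ratings are toy stand-ins, not the study data.

```python
# Sketch of the reported metrics on toy reader ratings.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

reader1 = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]  # 1 = "Darth Vader sign" present
reader2 = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]
truth   = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]  # spondylolysis by reference standard

# Inter-reader agreement (chance-corrected).
print("Cohen's kappa:", cohen_kappa_score(reader1, reader2))

# Per-reader sensitivity and specificity against the reference standard.
for name, ratings in (("reader1", reader1), ("reader2", reader2)):
    tn, fp, fn, tp = confusion_matrix(truth, ratings).ravel()
    print(name, "sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```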
