Results 1 - 6 of 6
1.
Invest Radiol; 2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38436405

ABSTRACT

OBJECTIVES: Accurately acquiring and assigning different contrast-enhanced phases in computed tomography (CT) is relevant for clinicians and for artificial intelligence orchestration to select the most appropriate series for analysis. However, this information is commonly extracted from the CT metadata, which is often incorrect. This study aimed to develop an automatic pipeline for classifying intravenous (IV) contrast phases and, additionally, for identifying contrast media in the gastrointestinal tract (GIT).

MATERIALS AND METHODS: This retrospective study used 1200 CT scans collected at the investigating institution between January 4, 2016 and September 12, 2022, and 240 CT scans from multiple centers from The Cancer Imaging Archive for external validation. The open-source segmentation algorithm TotalSegmentator was used to identify regions of interest (pulmonary artery, aorta, stomach, portal/splenic vein, liver, portal vein/hepatic veins, inferior vena cava, duodenum, small bowel, colon, left/right kidney, urinary bladder), and machine learning classifiers were trained with 5-fold cross-validation to classify IV contrast phases (noncontrast, pulmonary arterial, arterial, venous, and urographic) and GIT contrast enhancement. The performance of the ensembles was evaluated using the receiver operating characteristic area under the curve (AUC) and 95% confidence intervals (CIs).

RESULTS: For the IV phase classification task, the following AUC scores were obtained on the internal test set: 99.59% [95% CI, 99.58-99.63] for the noncontrast phase, 99.50% [95% CI, 99.49-99.52] for the pulmonary-arterial phase, 99.13% [95% CI, 99.10-99.15] for the arterial phase, 99.80% [95% CI, 99.79-99.81] for the venous phase, and 99.70% [95% CI, 99.68-99.70] for the urographic phase. On the external dataset, mean AUCs of 97.33% [95% CI, 97.27-97.35] and 97.38% [95% CI, 97.34-97.41] were achieved across all contrast phases for the first and second annotators, respectively. Contrast media in the GIT could be identified with an AUC of 99.90% [95% CI, 99.89-99.90] in the internal dataset, whereas in the external dataset, AUCs of 99.73% [95% CI, 99.71-99.73] and 99.31% [95% CI, 99.27-99.33] were achieved with the first and second annotators, respectively.

CONCLUSIONS: The integration of open-source segmentation networks and classifiers effectively classified contrast phases and identified GIT contrast enhancement using anatomical landmarks.
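A minimal sketch of the downstream step of the pipeline described above: per-organ mean attenuation values derived from TotalSegmentator masks are used as features for a 5-fold cross-validated classifier. The feature definition, the choice of a random forest, the ROI file names, and the placeholder data are illustrative assumptions, not the published ensemble.

```python
# Hypothetical sketch: IV contrast-phase classification from per-organ mean HU
# values computed inside TotalSegmentator masks. Feature set, classifier and
# file layout are illustrative assumptions, not the published ensemble.
import numpy as np
import nibabel as nib
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# ROI mask names only roughly follow TotalSegmentator output; adjust as needed.
ROIS = ["pulmonary_artery", "aorta", "stomach", "portal_vein_and_splenic_vein",
        "liver", "inferior_vena_cava", "duodenum", "small_bowel", "colon",
        "kidney_left", "kidney_right", "urinary_bladder"]

def mean_hu_features(ct_path, mask_dir):
    """Mean attenuation (HU) inside each segmented ROI of one CT scan."""
    ct = nib.load(ct_path).get_fdata()
    feats = []
    for roi in ROIS:
        mask = nib.load(f"{mask_dir}/{roi}.nii.gz").get_fdata() > 0
        feats.append(float(ct[mask].mean()) if mask.any() else 0.0)
    return np.asarray(feats)

# X: (n_scans, n_rois) feature matrix, y: phase labels (0 = noncontrast,
# 1 = pulmonary arterial, 2 = arterial, 3 = venous, 4 = urographic).
X = np.random.rand(100, len(ROIS))          # placeholder features
y = np.random.randint(0, 5, size=100)       # placeholder labels
clf = RandomForestClassifier(n_estimators=500, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc_ovr")
print(f"5-fold mean one-vs-rest AUC: {aucs.mean():.3f}")
```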

2.
Sci Data; 11(1): 483, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38729970

ABSTRACT

The Sparsely Annotated Region and Organ Segmentation (SAROS) dataset was created using data from The Cancer Imaging Archive (TCIA) to provide a large open-access CT dataset with high-quality annotations of body landmarks. In-house segmentation models were employed to generate annotation proposals on randomly selected cases from TCIA. The dataset includes 13 semantic body region labels (abdominal/thoracic cavity, bones, brain, breast implant, mediastinum, muscle, parotid/submandibular/thyroid glands, pericardium, spinal cord, subcutaneous tissue) and six body part labels (left/right arm/leg, head, torso). Case selection was based on the DICOM series description, gender, and imaging protocol, resulting in 882 patients (438 female) for a total of 900 CTs. Manual review and correction of the proposals were conducted in a continuous quality control cycle. Only every fifth axial slice was annotated, yielding 20,150 annotated slices from 28 data collections. For reproducibility on downstream tasks, five cross-validation folds and a test set were pre-defined. The SAROS dataset serves as an open-access resource for training and evaluating novel segmentation models, covering various scanner vendors and diseases.
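The abstract above mentions pre-defined cross-validation folds and a test split; the following is a minimal sketch of how such splits might be consumed, assuming a hypothetical metadata table with "case_id" and "fold" columns. The actual file names and column layout of the SAROS release may differ.

```python
# Hypothetical sketch: iterating over pre-defined SAROS cross-validation folds.
# File name and column names ("case_id", "fold" with values 0-4 or "test") are
# assumptions, not the documented release format.
import pandas as pd

meta = pd.read_csv("saros_metadata.csv")                    # assumed file name
meta["fold"] = meta["fold"].astype(str)
test_ids = meta.loc[meta["fold"] == "test", "case_id"]

for val_fold in map(str, range(5)):                         # five pre-defined folds
    train_ids = meta.loc[~meta["fold"].isin([val_fold, "test"]), "case_id"]
    val_ids = meta.loc[meta["fold"] == val_fold, "case_id"]
    print(f"fold {val_fold}: {len(train_ids)} train / {len(val_ids)} val cases")

# Only every fifth axial slice carries labels, so a slice loader would typically
# step through indices 0, 5, 10, ... of each annotated volume.
```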


Subjects
Tomography, X-Ray Computed; Whole Body Imaging; Female; Humans; Male; Image Processing, Computer-Assisted
3.
Sci Data; 11(1): 688, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38926396

ABSTRACT

Automated medical image analysis systems often require large amounts of training data with high-quality labels, which are difficult and time-consuming to generate. This paper introduces Radiology Object in COntext version 2 (ROCOv2), a multimodal dataset consisting of radiological images and associated medical concepts and captions extracted from the PMC Open Access subset. It is an updated version of the ROCO dataset published in 2018 and includes 35,705 new images added to PMC since 2018. It further provides manually curated concepts for imaging modalities, with additional anatomical and directional concepts for X-rays. The dataset consists of 79,789 images and has been used, with minor modifications, in the concept detection and caption prediction tasks of ImageCLEFmedical Caption 2023. The dataset is suitable for training image annotation models based on image-caption pairs, or for multi-label image classification using the Unified Medical Language System (UMLS) concepts provided with each image. In addition, it can serve for pre-training medical domain models and for evaluating deep learning models on multi-task learning.
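Since each ROCOv2 image comes with a list of UMLS concepts, multi-label classification needs a multi-hot target matrix; below is a small sketch of building one. The CUIs and the in-memory layout are illustrative only; the released files define the actual format.

```python
# Hypothetical sketch: turning per-image UMLS concept lists into a multi-hot
# target matrix for multi-label classification. CUIs shown are illustrative.
from sklearn.preprocessing import MultiLabelBinarizer

image_concepts = {
    "img_0001": ["C0040405", "C0817096"],   # e.g. CT + chest (example CUIs)
    "img_0002": ["C1306645"],               # e.g. plain radiograph (example CUI)
}

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(image_concepts.values())   # shape: (n_images, n_concepts)
print(mlb.classes_)                              # concept vocabulary
print(Y)                                         # multi-hot labels per image
```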


Subjects
Multimodal Imaging; Radiology; Humans; Image Processing, Computer-Assisted; Unified Medical Language System
4.
PLoS One; 15(9): e0236868, 2020.
Article in English | MEDLINE | ID: mdl-32976486

ABSTRACT

Detection and diagnosis of early and subclinical stages of Alzheimer's Disease (AD) play an essential role in the implementation of intervention and prevention strategies. Neuroimaging techniques predominantly provide insight into anatomic structure changes associated with AD. Deep learning methods have been extensively applied to create and evaluate models capable of differentiating between cognitively unimpaired individuals, patients with Mild Cognitive Impairment (MCI), and patients with AD dementia. Several published approaches apply information fusion techniques, which provide ways of combining several input sources in the medical domain and thereby yield broader and richer information. The aim of this paper is to fuse sociodemographic data such as age, marital status, education, and gender, and genetic data (presence of an apolipoprotein E (APOE)-ε4 allele) with Magnetic Resonance Imaging (MRI) scans. This yields enriched multimodal features that adequately represent the MRI scan visually and are used to build classification models capable of detecting amnestic MCI (aMCI). To fully utilize the potential of deep convolutional neural networks, two extra color layers, a contrast-intensified and a blurred adaptation of the image, are virtually added to each MRI scan, completing the Red-Green-Blue (RGB) color channels. Deep convolutional activation features (DeCAF) are extracted from the average pooling layer of the deep learning system Inception_v3. These features from the fused MRI scans are used as the visual representation for a Long Short-Term Memory (LSTM) based Recurrent Neural Network (RNN) classification model. The proposed approach is evaluated on a sub-study of the Heinz Nixdorf Recall (HNR) Study containing 120 participants (aMCI = 61 and cognitively unimpaired = 59), with a baseline model accuracy of 76%. Further evaluation was conducted on the ADNI Phase 1 dataset with 624 participants (aMCI = 397 and cognitively unimpaired = 227), with a baseline model accuracy of 66.27%. Experimental results show that the proposed approach achieves 90% accuracy and 0.90 F1-score for classification of aMCI vs. cognitively unimpaired participants on the HNR Study dataset, and 77% accuracy and 0.83 F1-score on the ADNI dataset.
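A compact sketch of the image pathway described above, under several assumptions: the contrast and blur settings, the LSTM size, and the omission of the sociodemographic/APOE fusion step are illustrative choices, not the paper's exact configuration.

```python
# Illustrative sketch: pseudo-RGB MRI slices -> InceptionV3 DeCAF features ->
# LSTM classifier. Hyper-parameters are assumptions, not the published setup.
import numpy as np
import tensorflow as tf
from scipy.ndimage import gaussian_filter

def _norm(x):
    x = x.astype(np.float32)
    return (x - x.min()) / (x.max() - x.min() + 1e-8) * 255.0

def to_pseudo_rgb(slice_2d):
    """Stack original, contrast-intensified and blurred versions as RGB channels."""
    lo, hi = np.percentile(slice_2d, (1, 99))
    contrast = np.clip(slice_2d, lo, hi)                      # contrast-intensified
    blurred = gaussian_filter(slice_2d.astype(np.float32), sigma=2)
    rgb = np.stack([_norm(slice_2d), _norm(contrast), _norm(blurred)], axis=-1)
    return tf.image.resize(rgb, (299, 299)).numpy()

# DeCAF features from the global-average-pooling layer of Inception_v3
backbone = tf.keras.applications.InceptionV3(weights="imagenet",
                                             include_top=False, pooling="avg")

def decaf_sequence(volume):                         # volume: (n_slices, H, W)
    rgb = np.stack([to_pseudo_rgb(s) for s in volume])
    rgb = tf.keras.applications.inception_v3.preprocess_input(rgb)
    return backbone.predict(rgb, verbose=0)         # (n_slices, 2048)

# LSTM-based RNN classifier over the per-slice DeCAF feature sequence
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 2048)),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # aMCI vs. cognitively unimpaired
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```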


Subjects
Apolipoproteins E/genetics; Cognitive Dysfunction/diagnosis; Magnetic Resonance Imaging; Neuroimaging; Aged; Aged, 80 and over; Cognitive Dysfunction/pathology; Datasets as Topic; Deep Learning; Disease Progression; Female; Humans; Male; Middle Aged; Socioeconomic Factors
5.
Annu Int Conf IEEE Eng Med Biol Soc; 2019: 890-894, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946037

ABSTRACT

The aim of this paper is to combine automatically generated image keywords with radiographs, enabling an enriched multi-modal image representation for body part classification; the proposed method could also be used to incorporate metadata into images for combined learning. Multi-modality is achieved by branding the radiographs with intensity markers that denote the occurrence of textual features. As the number of digital medical scans taken daily has increased rapidly, there is a need for systems capable of adequately detecting and classifying body parts in radiology images. This is a fundamental step towards computer-aided interpretation, as manual annotation is time-consuming, prone to errors, and often impractical. Word embeddings are derived from keywords that are automatically generated with the Long Short-Term Memory (LSTM) based Recurrent Neural Network (RNN) Show-and-Tell model, and are incorporated into the radiographs by augmentation using Word2Vec. Deep learning systems are then trained with the augmented radiographs. On the Musculoskeletal Radiographs (MURA) and ImageCLEF 2015 Medical Clustering Task datasets, the proposed approach achieves the best prediction accuracies of 95.78% and 83.90%, respectively.
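A hypothetical sketch of the branding idea: a Word2Vec keyword embedding is rescaled to pixel intensities and written into a reserved strip of the radiograph. The marker placement, strip size, and scaling are assumptions, as the abstract only states that textual features are encoded as intensity markers.

```python
# Hypothetical sketch: "branding" a radiograph with keyword-embedding intensity
# markers. Placement and scaling are assumptions, not the published scheme.
import numpy as np
from gensim.models import Word2Vec

captions = [["chest", "radiograph", "frontal"], ["hand", "xray", "fracture"]]
w2v = Word2Vec(sentences=captions, vector_size=32, window=3, min_count=1, epochs=50)

def brand(image, keywords, model, strip_height=4):
    """Overwrite the top rows of the image with keyword-embedding intensity markers."""
    vec = np.mean([model.wv[w] for w in keywords if w in model.wv], axis=0)
    marker = np.interp(vec, (vec.min(), vec.max()), (0, 255)).astype(image.dtype)
    branded = image.copy()
    cols = np.linspace(0, image.shape[1], num=len(marker) + 1, dtype=int)
    for i, v in enumerate(marker):                 # one intensity block per dimension
        branded[:strip_height, cols[i]:cols[i + 1]] = v
    return branded

radiograph = np.zeros((256, 256), dtype=np.uint8)  # placeholder image
branded = brand(radiograph, ["chest", "radiograph"], w2v)
```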


Subjects
Human Body; Neural Networks, Computer; Deep Learning; Humans; Radiography; Records
6.
PLoS One; 13(11): e0206229, 2018.
Article in English | MEDLINE | ID: mdl-30419028

ABSTRACT

The number of images taken per patient scan has rapidly increased due to advances in software, hardware, and digital imaging in the medical domain. There is a need for accurate medical image annotation systems, as manual annotation is impractical, time-consuming, and prone to errors. This paper presents modeling approaches for automatically classifying and annotating radiographs using several classification schemes, which can be further applied for automatic content-based image retrieval (CBIR) and computer-aided diagnosis (CAD). Different image preprocessing and enhancement techniques were applied to augment the grayscale radiographs by virtually adding two extra layers. The Image Retrieval in Medical Applications (IRMA) Code, a mono-hierarchical multi-axial code, served as a basis for this work. To extensively evaluate the image enhancement techniques, five classification schemes, including the complete IRMA code, were adopted. The deep convolutional neural network systems Inception-v3 and Inception-ResNet-v2, as well as Random Forest models with 1000 trees, were trained using extracted Bag-of-Keypoints visual representations. The classification model performances were evaluated using the ImageCLEF 2009 Medical Annotation Task test set. The applied visual enhancement techniques proved to achieve better annotation accuracy in all classification schemes.
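A minimal Bag-of-Keypoints sketch in the spirit of the Random Forest branch described above, using SIFT descriptors, a k-means visual vocabulary, and a 1000-tree forest. The vocabulary size, the descriptor choice, and the synthetic placeholder data are illustrative assumptions, not the paper's configuration.

```python
# Illustrative Bag-of-Keypoints + Random Forest sketch for radiograph
# classification. Data loading and label encoding are placeholders.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

sift = cv2.SIFT_create()

def descriptors(gray_image):
    _, desc = sift.detectAndCompute(gray_image, None)
    return desc if desc is not None else np.empty((0, 128), dtype=np.float32)

def bok_histogram(gray_image, vocabulary):
    """Normalized histogram of visual-word assignments for one image."""
    desc = descriptors(gray_image)
    hist = np.zeros(vocabulary.n_clusters, dtype=np.float32)
    if len(desc):
        for word in vocabulary.predict(desc.astype(np.float32)):
            hist[word] += 1
        hist /= hist.sum()
    return hist

# Placeholder "radiographs": synthetic blob images instead of real DICOM data.
images = []
for _ in range(20):
    img = np.zeros((256, 256), dtype=np.uint8)
    for _ in range(10):
        x, y = np.random.randint(20, 236, size=2)
        cv2.circle(img, (int(x), int(y)), int(np.random.randint(5, 20)),
                   int(np.random.randint(80, 255)), -1)
    images.append(img)
labels = np.random.randint(0, 4, len(images))    # placeholder class labels

all_desc = np.vstack([descriptors(im) for im in images])
vocab = KMeans(n_clusters=64, random_state=0).fit(all_desc)
X = np.stack([bok_histogram(im, vocab) for im in images])
clf = RandomForestClassifier(n_estimators=1000, random_state=0).fit(X, labels)
```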


Subjects
Diagnosis, Computer-Assisted/statistics & numerical data; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Radiography/methods; Algorithms; Humans; Image Processing, Computer-Assisted/statistics & numerical data; Machine Learning