Results 1 - 5 of 5
1.
Sci Rep; 14(1): 3341, 2024 Feb 9.
Article in English | MEDLINE | ID: mdl-38336974

ABSTRACT

Accurate annotation of vertebral bodies is crucial for automating the analysis of spinal X-ray images. However, manual annotation of these structures is a laborious and costly process due to their complex nature, including small sizes and varying shapes. To address this challenge and expedite the annotation process, we propose an ensemble pipeline called VertXNet. This pipeline combines two segmentation mechanisms, semantic segmentation using U-Net and instance segmentation using Mask R-CNN, to automatically segment and label vertebral bodies in lateral cervical and lumbar spinal X-ray images. VertXNet adopts a rule-based strategy (termed the ensemble rule) to combine the segmentation outcomes from U-Net and Mask R-CNN. It determines vertebral body labels by recognizing specific reference vertebrae, such as cervical vertebra 2 ('C2') in cervical spine X-rays and sacral vertebra 1 ('S1') in lumbar spine X-rays; these references are typically easy to identify at the ends of the spine. To assess the performance of the proposed pipeline, we conducted evaluations on three spinal X-ray datasets, including two in-house datasets and one publicly available dataset, with ground truth annotations provided by radiologists for comparison. Our experimental results show that the proposed pipeline outperformed two state-of-the-art (SOTA) segmentation models on our test dataset, with a mean Dice of 0.90 versus a mean Dice of 0.73 for Mask R-CNN and 0.72 for U-Net. We also demonstrated that VertXNet is a modular pipeline in which other SOTA models, such as nnU-Net, can be used to further improve its performance. Furthermore, to evaluate the generalization ability of VertXNet on spinal X-rays, we directly tested the pre-trained pipeline on two additional datasets and observed consistently strong performance, with mean Dice coefficients of 0.89 and 0.88, respectively. In summary, VertXNet demonstrated significantly improved performance in vertebral body segmentation and labeling for spinal X-ray imaging, and its robustness and generalization were demonstrated through evaluation on both in-house clinical trial data and publicly available datasets.
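The ensemble rule and reference-based labelling described in this abstract can be illustrated with a short sketch. The Python fragment below is a hypothetical reconstruction, not the authors' code: it keeps Mask R-CNN instances that are sufficiently covered by the U-Net semantic mask, orders them along the vertical image axis, and assigns names by counting upward from a detected S1 in a lumbar X-ray. The overlap threshold, mask formats, and labelling convention are all assumptions.

    import numpy as np

    def ensemble_instances(instance_masks, semantic_mask, min_overlap=0.5):
        # Assumed ensemble rule: keep Mask R-CNN instances sufficiently covered by the U-Net mask.
        kept = []
        for mask in instance_masks:                                  # each mask: boolean HxW array
            overlap = np.logical_and(mask, semantic_mask).sum() / max(mask.sum(), 1)
            if overlap >= min_overlap:
                kept.append(mask)
        return kept

    def label_lumbar_vertebrae(instances, s1_index):
        # s1_index: position of the detected S1 instance in the top-to-bottom ordering.
        order = np.argsort([np.argwhere(m)[:, 0].mean() for m in instances])  # sort by mean row
        ordered = [instances[i] for i in order]
        labels = {s1_index: "S1"}
        for offset, name in enumerate(["L5", "L4", "L3", "L2", "L1"], start=1):
            if s1_index - offset >= 0:
                labels[s1_index - offset] = name                     # count upward from S1
        return ordered, labels

A cervical X-ray would be handled analogously, counting downward from a detected 'C2' instance.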


Subjects
Tomography, X-Ray Computed; Vertebral Body; Tomography, X-Ray Computed/methods; X-Rays; Radiography; Cervical Vertebrae/diagnostic imaging; Image Processing, Computer-Assisted/methods
2.
Med Image Anal; 95: 103196, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38781755

ABSTRACT

The success of deep learning on image classification and recognition tasks has led to new applications in diverse contexts, including the field of medical imaging. However, two properties of deep neural networks (DNNs) may limit their future use in medical applications: first, DNNs require a large amount of labeled training data, and second, deep learning-based models lack interpretability. In this paper, we propose and investigate a data-efficient framework for the task of general medical image segmentation. We address these two challenges by introducing domain knowledge, in the form of a strong prior, into a deep learning framework. This prior is expressed by a customized dynamical system. We performed experiments on two different datasets, JSRT and ISIC2016 (heart and lung segmentation on chest X-ray images and skin lesion segmentation on dermoscopy images), and achieved results competitive with state-of-the-art methods while using the same amount of training data. More importantly, we demonstrate that our framework is highly data-efficient and can achieve reliable results with extremely limited training data. Furthermore, the proposed method is rotationally invariant and insensitive to initialization.
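As an illustration only, the general idea of injecting a prior into the learning objective can be sketched as a segmentation loss with an added regularization term. The paper's actual prior is a customized dynamical system, which is not reproduced here; a generic smoothness penalty stands in for it, and the weight is a placeholder.

    import torch
    import torch.nn.functional as F

    def prior_regularized_loss(logits, target, prior_weight=0.1):
        # Data term (cross-entropy) plus a stand-in spatial-smoothness prior.
        data_term = F.cross_entropy(logits, target)       # logits: N x C x H x W, target: N x H x W
        probs = torch.softmax(logits, dim=1)
        # total-variation-style penalty encouraging spatially coherent predictions
        tv = (probs[:, :, 1:, :] - probs[:, :, :-1, :]).abs().mean() \
           + (probs[:, :, :, 1:] - probs[:, :, :, :-1]).abs().mean()
        return data_term + prior_weight * tv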


Subjects
Deep Learning; Humans; Lung/diagnostic imaging; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Radiography, Thoracic; Algorithms; Heart/diagnostic imaging
3.
Med Image Anal; 77: 102373, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35134636

ABSTRACT

Machine learning has been widely adopted for medical image analysis in recent years given its promising performance in image segmentation and classification tasks. The success of machine learning, in particular supervised learning, depends on the availability of manually annotated datasets. For medical imaging applications, such annotated datasets are not easy to acquire; curating an annotated medical image set takes a substantial amount of time and resources. In this paper, we propose an efficient annotation framework for brain MR images that can suggest informative sample images for human experts to annotate. We evaluate the framework on two different brain image analysis tasks, namely brain tumour segmentation and whole brain segmentation. Experiments show that for the brain tumour segmentation task on the BraTS 2019 dataset, training a segmentation model with only 7% of the image samples, selected by suggestive annotation, can achieve performance comparable to training on the full dataset. For whole brain segmentation on the MALC dataset, training with 42% of suggestively annotated image samples achieves comparable performance to training on the full dataset. The proposed framework demonstrates a promising way to save manual annotation cost and improve data efficiency in medical imaging applications.
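A minimal sketch of one suggestion round, assuming an uncertainty-based informativeness criterion (the paper's criterion may combine uncertainty with other terms such as representativeness): the current model's per-voxel foreground probabilities on unlabelled images are scored by mean entropy, and the most uncertain images are put forward for expert annotation.

    import numpy as np

    def suggest_samples(prob_maps, n_suggest=10):
        # prob_maps: per-image arrays of foreground probabilities in [0, 1].
        # Returns indices of the n_suggest most uncertain images.
        scores = []
        for p in prob_maps:
            p = np.clip(p, 1e-6, 1 - 1e-6)
            entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))   # voxel-wise binary entropy
            scores.append(entropy.mean())
        return list(np.argsort(scores)[::-1][:n_suggest])

The suggested images are then annotated and added to the training set before the next round.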


Subjects
Brain Neoplasms; Image Processing, Computer-Assisted; Brain/diagnostic imaging; Brain Neoplasms/diagnostic imaging; Diagnostic Imaging; Humans; Image Processing, Computer-Assisted/methods; Machine Learning; Magnetic Resonance Imaging
4.
Article in English | MEDLINE | ID: mdl-36998700

ABSTRACT

Deep learning (DL) models have provided state-of-the-art performance in various medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder translating DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way toward clinical translation. Several uncertainty estimation methods have recently been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist the end-user in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019 and BraTS 2020 task on uncertainty quantification (QU-BraTS), designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that assign high confidence to correct assertions and low confidence to incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent teams participating in QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, highlighting the need for uncertainty quantification in medical image analyses. Finally, in the interest of transparency and reproducibility, our evaluation code is made publicly available at https://github.com/RagMeh11/QU-BraTS.
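The ranking idea can be illustrated with a simplified threshold sweep that follows the published QU-BraTS protocol only loosely (normalisation, threshold grid, and averaging over tumour sub-regions are simplified or omitted): at each uncertainty threshold, voxels with higher uncertainty are filtered out, Dice is computed on the remaining voxels, and the fractions of filtered-out true positives and true negatives are penalized.

    import numpy as np

    def dice(pred, gt):
        denom = pred.sum() + gt.sum()
        return 2 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

    def qu_style_score(pred, gt, uncertainty, thresholds=np.linspace(0, 1, 11)):
        # pred, gt: boolean arrays; uncertainty: array in [0, 1] (higher = less confident).
        tp_all = np.logical_and(pred, gt).sum()
        tn_all = np.logical_and(~pred, ~gt).sum()
        dices, ftp, ftn = [], [], []
        for tau in thresholds:
            keep = uncertainty <= tau                           # retain only confident voxels
            dices.append(dice(pred[keep], gt[keep]))
            ftp.append((tp_all - np.logical_and(pred[keep], gt[keep]).sum()) / max(tp_all, 1))
            ftn.append((tn_all - np.logical_and(~pred[keep], ~gt[keep]).sum()) / max(tn_all, 1))
        # reward filtered Dice, penalize filtering away correct assertions
        return (np.mean(dices) + (1 - np.mean(ftp)) + (1 - np.mean(ftn))) / 3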

5.
Syst Rev; 4: 172, 2015 Nov 26.
Article in English | MEDLINE | ID: mdl-26612232

ABSTRACT

BACKGROUND: Identifying relevant studies for inclusion in a systematic review (i.e. screening) is a complex, laborious and expensive task. Recently, a number of studies have shown that using machine learning and text mining methods to automatically identify relevant studies has the potential to drastically decrease the workload involved in the screening phase. The vast majority of these machine learning methods exploit the same underlying principle: a study is modelled as a bag-of-words (BOW). METHODS: We explore the use of topic modelling methods to derive a more informative representation of studies. We apply latent Dirichlet allocation (LDA), an unsupervised topic modelling approach, to automatically identify topics in a collection of studies, and then represent each study as a distribution over LDA topics. Additionally, we enrich the topics derived using LDA with multi-word terms identified by an automatic term recognition (ATR) tool. For evaluation purposes, we carry out automatic identification of relevant studies using support vector machine (SVM)-based classifiers that employ both our novel topic-based representation and the BOW representation. RESULTS: Our results show that the SVM classifier is able to identify a greater number of relevant studies when using the LDA representation than when using the BOW representation. These observations hold for two systematic reviews from the clinical domain and three reviews from the social science domain. CONCLUSIONS: A topic-based feature representation of documents outperforms the BOW representation when applied to the task of automatic citation screening. The proposed term-enriched topics are more informative and less ambiguous to systematic reviewers.
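The topic-based screening pipeline lends itself to a compact sketch. The corpus, number of topics, and classifier settings below are illustrative placeholders rather than the study's actual configuration, and the ATR term-enrichment step is omitted: documents are converted to bag-of-words counts, mapped to LDA topic distributions, and classified with a linear SVM.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    docs = ["randomised trial of drug A ...", "qualitative study of policy B ..."]  # placeholder abstracts
    labels = [1, 0]                                   # 1 = relevant to the review, 0 = irrelevant

    screening_model = make_pipeline(
        CountVectorizer(stop_words="english"),                        # bag-of-words counts
        LatentDirichletAllocation(n_components=50, random_state=0),   # counts -> topic distributions
        LinearSVC(),                                                   # classify on topic features
    )
    screening_model.fit(docs, labels)
    print(screening_model.predict(["pilot trial of drug A in adults"]))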


Subjects
Biomedical Research/classification; Data Mining/methods; Models, Statistical; Review Literature as Topic; Support Vector Machine; Decision Making, Computer-Assisted; Humans