Results 1 - 3 of 3

1.
Sci Rep ; 14(1): 3341, 2024 02 09.
Article in English | MEDLINE | ID: mdl-38336974

ABSTRACT

Accurate annotation of vertebral bodies is crucial for automating the analysis of spinal X-ray images. However, manual annotation of these structures is laborious and costly because of their complex nature, including small sizes and varying shapes. To address this challenge and expedite the annotation process, we propose an ensemble pipeline called VertXNet. The pipeline combines two segmentation mechanisms, semantic segmentation with U-Net and instance segmentation with Mask R-CNN, to automatically segment and label vertebral bodies in lateral cervical and lumbar spinal X-ray images. VertXNet merges the segmentation outcomes of U-Net and Mask R-CNN with a rule-based strategy (termed the ensemble rule). It determines vertebral body labels by recognizing specific reference vertebrae, namely cervical vertebra 2 ('C2') in cervical spine X-rays and sacral vertebra 1 ('S1') in lumbar spine X-rays; these reference vertebrae are usually easy to identify at the ends of the spine. To assess the performance of the proposed pipeline, we conducted evaluations on three spinal X-ray datasets: two in-house datasets and one publicly available dataset. Ground-truth annotations were provided by radiologists for comparison. Our experiments show that the proposed pipeline outperformed two state-of-the-art (SOTA) segmentation models on our test dataset, with a mean Dice of 0.90 versus 0.73 for Mask R-CNN and 0.72 for U-Net. We also demonstrate that VertXNet is a modular pipeline in which other SOTA models, such as nnU-Net, can be substituted to further improve performance. Furthermore, to evaluate the generalization ability of VertXNet on spinal X-rays, we tested the pre-trained pipeline directly on two additional datasets and observed consistently strong performance, with mean Dice coefficients of 0.89 and 0.88, respectively. In summary, VertXNet delivers significantly improved vertebral body segmentation and labeling for spinal X-ray imaging, and its robustness and generalization are demonstrated on both in-house clinical trial data and publicly available datasets.
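The abstract does not publish VertXNet's code, so the following is only a minimal Python sketch of the general idea behind a rule-based ensemble of the two segmenters: keep Mask R-CNN instances, recover vertebrae found only by the U-Net semantic mask, and assign labels by counting from a reference vertebra. The function names and the IoU threshold are illustrative assumptions, not the authors' implementation, and only the cervical case (counting downward from 'C2') is shown.

```python
import numpy as np
from scipy import ndimage

def iou(a, b):
    """Intersection over union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def ensemble_and_label(instance_masks, semantic_mask, reference="C2", iou_thresh=0.5):
    """Hypothetical ensemble rule: Mask R-CNN instances are kept, vertebrae
    detected only by the U-Net semantic mask are recovered as connected
    components, and labels are assigned by counting downward from the
    reference vertebra (the paper also uses 'S1' for lumbar images, where
    the count would run upward; omitted here for brevity)."""
    kept = [m.astype(bool) for m in instance_masks]

    # Recover connected components of the semantic mask that no instance covers.
    components, n = ndimage.label(semantic_mask > 0)
    for i in range(1, n + 1):
        comp = components == i
        if all(iou(comp, m) < iou_thresh for m in kept):
            kept.append(comp)

    # Sort vertebrae top-to-bottom by centroid row and label from the reference.
    kept.sort(key=lambda m: ndimage.center_of_mass(m)[0])
    prefix, start = reference[0], int(reference[1:])
    return {f"{prefix}{start + k}": mask for k, mask in enumerate(kept)}
```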


Subject(s)
Tomography, X-Ray Computed; Vertebral Body; Tomography, X-Ray Computed/methods; X-Rays; Radiography; Cervical Vertebrae/diagnostic imaging; Image Processing, Computer-Assisted/methods
2.
Med Image Anal ; 91: 103038, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38000258

ABSTRACT

Deformable image registration, the estimation of the spatial transformation between different images, is an important task in medical imaging. Deep learning techniques have been shown to perform 3D image registration efficiently. However, current registration strategies often focus only on deformation smoothness, so complicated motion patterns (e.g., separate or sliding motions) are ignored, especially at the interfaces between organs. As a result, performance on the discontinuous motions of multiple nearby objects is limited, which can lead to undesirable outcomes in clinical use, such as misidentification and mislocalization of lesions or other abnormalities. We therefore propose a novel registration method to address this issue: a new Motion Separable backbone captures separate motions, and a theoretical analysis of the upper bound on the motions' discontinuity is provided. In addition, a novel Residual Aligner module disentangles and refines the predicted motions across multiple neighboring objects/organs. We evaluated our method, the Residual Aligner-based Network (RAN), on abdominal computed tomography (CT) scans, where it achieved some of the most accurate unsupervised inter-subject registrations across nine organs, with the highest-ranked registration of the veins (Dice similarity coefficient (%) / average surface distance (mm): 62%/4.9 mm for the vena cava and 34%/7.9 mm for the portal and splenic vein), using a smaller model and less computation than state-of-the-art methods. When applied to lung CT, RAN achieves results comparable to the best-ranked networks (94%/3.0 mm), again with fewer parameters and less computation.
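RAN's Motion Separable backbone and Residual Aligner module are not detailed in the abstract, so the sketch below only illustrates the generic machinery any such deformable-registration network rests on: applying a predicted dense displacement field to warp a moving 3D image, and scoring the result with the per-organ Dice coefficient reported above. The function names are assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(moving, displacement):
    """Warp a 3D moving image with a dense displacement field.

    `displacement` has shape (3, D, H, W): per-voxel offsets added to the
    identity sampling grid, resampled with trilinear interpolation. This is
    the standard way a predicted deformation is applied; it does not
    reproduce RAN's backbone or aligner."""
    grid = np.indices(moving.shape).astype(np.float32)   # identity grid, (3, D, H, W)
    return map_coordinates(moving, grid + displacement, order=1, mode="nearest")

def dice(a, b, eps=1e-8):
    """Dice similarity coefficient between two binary organ masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)
```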


Subject(s)
Algorithms; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Motion; Lung/diagnostic imaging; Imaging, Three-Dimensional; Image Processing, Computer-Assisted/methods
3.
Med Image Anal ; 95: 103196, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38781755

ABSTRACT

The success of deep learning on image classification and recognition tasks has led to new applications in diverse contexts, including medical imaging. However, two properties of deep neural networks (DNNs) may limit their future use in medical applications: they require large amounts of labeled training data, and deep learning-based models lack interpretability. In this paper, we propose and investigate a data-efficient framework for general medical image segmentation. We address both challenges by introducing domain knowledge, in the form of a strong prior, into a deep learning framework; this prior is expressed by a customized dynamical system. We performed experiments on two datasets, JSRT and ISIC2016 (heart and lung segmentation on chest X-ray images and skin lesion segmentation on dermoscopy images), and achieved results competitive with state-of-the-art methods using the same amount of training data. More importantly, we demonstrate that the framework is highly data-efficient and can achieve reliable results with extremely limited training data. Furthermore, the proposed method is rotationally invariant and insensitive to initialization.
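The abstract does not specify the customized dynamical system used as the prior, so the following is only a generic example of the kind of prior it alludes to: one explicit Euler step of a standard level-set evolution, in which a speed function (which a network could predict) drives a segmentation contour. This is a textbook construction, not the authors' formulation.

```python
import numpy as np

def level_set_step(phi, speed, dt=0.1):
    """One explicit Euler step of the generic level-set PDE
        d(phi)/dt = -speed * |grad(phi)|,
    which moves the zero level set outward in the normal direction at
    `speed` when phi is a signed distance map that is negative inside.
    `speed` may be a scalar or a per-pixel map."""
    gy, gx = np.gradient(phi)
    grad_norm = np.sqrt(gx ** 2 + gy ** 2)
    return phi - dt * speed * grad_norm

# Toy usage: expand a circular contour given by its signed distance map.
yy, xx = np.mgrid[:64, :64]
phi0 = np.sqrt((yy - 32) ** 2 + (xx - 32) ** 2) - 10.0   # zero level set = circle of radius 10
phi1 = level_set_step(phi0, speed=1.0)
```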


Subject(s)
Deep Learning; Humans; Lung/diagnostic imaging; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Radiography, Thoracic; Algorithms; Heart/diagnostic imaging