Results 1 - 2 of 2
1.
IEEE Trans Med Imaging; 43(4): 1347-1364, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37995173

ABSTRACT

Image segmentation achieves significant improvements with deep neural networks on the premise of large-scale labeled training data, which is laborious to obtain in medical imaging tasks. Recently, semi-supervised learning (SSL) has shown great potential in medical image segmentation. However, these SSL methods usually neglect the influence of learning-target quality for unlabeled data. Therefore, this study proposes a novel self-correcting co-training scheme to learn a better target, one more similar to ground-truth labels, from collaborative network outputs. Our work has three highlights. First, we cast learning-target generation as a learning task itself, improving the learning confidence for unannotated data with a self-correcting module. Second, we impose a structure constraint to further encourage shape similarity between the improved learning target and the collaborative network outputs. Finally, we propose a pixel-wise contrastive learning loss to boost representation capacity under the guidance of the improved learning target, thus exploiting unlabeled data more efficiently with awareness of semantic context. We have extensively evaluated our method against state-of-the-art semi-supervised approaches on four publicly available datasets: the ACDC dataset, M&Ms dataset, Pancreas-CT dataset, and Task_07 CT dataset. Experimental results at different labeled-data ratios show our proposed method's superiority over existing methods, demonstrating its effectiveness in semi-supervised medical image segmentation.


Subjects
Neural Networks, Computer; Semantics; Supervised Machine Learning; Tomography, X-Ray Computed; Image Processing, Computer-Assisted
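The pixel-wise contrastive loss described in the first abstract can be sketched as an InfoNCE-style objective over pixel embeddings, where pixels sharing the same (pseudo-)label act as positives and all others as negatives. The following is a minimal NumPy sketch under those assumptions, not the authors' implementation; the function name, the flattened `(N, D)` embedding layout, and the temperature value are illustrative choices.

```python
import numpy as np

def pixel_contrastive_loss(embeddings, labels, temperature=0.1):
    """InfoNCE-style pixel-wise contrastive loss (illustrative sketch).

    embeddings: (N, D) L2-normalised pixel feature vectors
    labels:     (N,) pseudo-label class index per pixel
    Pixels sharing a pseudo-label are positives; all other pixels are negatives.
    """
    n = embeddings.shape[0]
    sim = embeddings @ embeddings.T / temperature        # pairwise scaled similarities
    np.fill_diagonal(sim, -np.inf)                       # exclude self-pairs
    # log-softmax over each row: log p(j | anchor i)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos_mask = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0                               # anchors that have positives
    # mean log-likelihood of positives per anchor, negated
    per_anchor = np.where(pos_mask, log_prob, 0.0).sum(axis=1)
    return float(-(per_anchor[valid] / pos_counts[valid]).mean())
```

When same-label pixels have similar embeddings the loss is near zero; mismatched pseudo-labels drive it up, which is why the quality of the learning target matters for this term.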
2.
Ultrasonics; 132: 107012, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37071944

ABSTRACT

Freehand 3-D ultrasound systems have advanced scoliosis assessment by avoiding radiation hazards, which is especially important for teenagers. This 3-D imaging method also makes it possible to evaluate spine curvature automatically from the corresponding 3-D projection images. However, most approaches neglect the three-dimensional spine deformity by using only the rendered images, limiting their use in clinical applications. In this study, we propose a structure-aware localization model that directly identifies the spinous processes for automatic 3-D spine curve measurement from images acquired with freehand 3-D ultrasound imaging. The key is a novel reinforcement learning (RL) framework for landmark localization, which adopts a multi-scale agent to boost structure representation with positional information. We also introduce a structure similarity prediction mechanism to perceive targets with apparent spinous process structures. Finally, a two-fold filtering strategy iteratively screens the detected spinous process landmarks, followed by three-dimensional spine curve fitting for the spine curvature assessment. We evaluated the proposed model on 3-D ultrasound images from subjects with different scoliotic angles. The results showed that the mean localization accuracy of the proposed landmark localization algorithm was 5.95 pixels. Moreover, the curvature angles on the coronal plane obtained by the new method had a high linear correlation with those from manual measurement (R = 0.86, p < 0.001). These results demonstrate the potential of our proposed method for facilitating the 3-D assessment of scoliosis, especially 3-D spine deformity assessment.


Subjects
Scoliosis; Adolescent; Humans; Scoliosis/diagnostic imaging; Vertebral Body; Spine/diagnostic imaging; Imaging, Three-Dimensional/methods; Ultrasonography/methods
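The curve-fitting and angle-measurement step in the second abstract can be illustrated on the coronal plane: fit a polynomial through the detected spinous process landmarks and report the largest difference in tangent inclination along the curve, a common proxy for the spinous-process angle in ultrasound scoliosis assessment. This is a hypothetical NumPy sketch, not the paper's 3-D pipeline; the function name, the polynomial degree, and the use of tangent-inclination extremes are assumptions.

```python
import numpy as np

def coronal_curvature_angle(landmarks_xy, degree=6):
    """Fit a polynomial spine curve on the coronal plane and return the
    largest angle (degrees) between tangents along it (illustrative sketch).

    landmarks_xy: (N, 2) array of (lateral x, vertical y) landmark coordinates.
    """
    x, y = landmarks_xy[:, 0], landmarks_xy[:, 1]
    # Model lateral deviation x as a polynomial function of height y.
    coeffs = np.polyfit(y, x, deg=min(degree, len(y) - 1))
    dxdy = np.polyder(coeffs)                       # derivative polynomial
    ys = np.linspace(y.min(), y.max(), 200)         # dense sampling along the spine
    slopes = np.polyval(dxdy, ys)
    angles = np.degrees(np.arctan(slopes))          # tangent inclination at each height
    return float(angles.max() - angles.min())       # steepest inclination difference
```

A perfectly straight landmark column yields an angle near zero, while an S-shaped set of landmarks yields a large angle, mirroring how coronal curvature grows with scoliotic severity.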