Results 1 - 3 of 3
1.
Int J Comput Assist Radiol Surg ; 19(1): 87-96, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37233894

ABSTRACT

PURPOSE: The training of deep medical image segmentation networks usually requires a large amount of human-annotated data. To alleviate the burden of human labor, many semi- or non-supervised methods have been developed. However, due to the complexity of clinical scenarios, insufficient training labels still cause inaccurate segmentation in difficult local areas such as heterogeneous tumors and fuzzy boundaries. METHODS: We propose an annotation-efficient training approach that requires scribble guidance only in the difficult areas. A segmentation network is initially trained with a small amount of fully annotated data and then used to produce pseudo labels for more training data. Human supervisors draw scribbles in the areas of incorrect pseudo labels (i.e., the difficult areas), and the scribbles are converted into pseudo label maps using a probability-modulated geodesic transform. To reduce the influence of potential errors in the pseudo labels, a confidence map of the pseudo labels is generated by jointly considering the pixel-to-scribble geodesic distance and the network output probability. The pseudo labels and confidence maps are iteratively optimized as the network is updated, and they in turn promote the network training. RESULTS: Cross-validation on two data sets (brain tumor MRI and liver tumor CT) showed that our method significantly reduces annotation time while maintaining the segmentation accuracy of difficult areas (e.g., tumors). Using 90 scribble-annotated training images (annotation time: ~9 h), our method achieved the same performance as 45 fully annotated images (annotation time: > 100 h) at a fraction of the annotation effort. CONCLUSION: Compared to conventional full-annotation approaches, the proposed method significantly reduces annotation effort by focusing human supervision on the most difficult regions. It provides an annotation-efficient way to train medical image segmentation networks in complex clinical scenarios.
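To make the scribble-to-pseudo-label step concrete, the following is a minimal 2D sketch of the general idea: a Dijkstra-style geodesic distance from foreground and background scribbles assigns each pixel to the closer scribble class, and a confidence value combines geodesic proximity with the network's output probability. The cost function, the exponential weighting, and all names are illustrative assumptions, not the paper's exact probability-modulated geodesic transform.

```python
# Illustrative sketch only: convert sparse scribbles into a confidence-weighted
# pseudo-label map. Assumes a 4-connected grid and a geodesic cost of
# (spatial step + lam * intensity difference); names are hypothetical.
import heapq
import numpy as np

def geodesic_distance(image: np.ndarray, scribble_mask: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Dijkstra-style geodesic distance from scribble pixels over an intensity image."""
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for y, x in zip(*np.nonzero(scribble_mask)):
        dist[y, x] = 0.0
        heapq.heappush(heap, (0.0, int(y), int(x)))
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                step = 1.0 + lam * abs(float(image[ny, nx]) - float(image[y, x]))
                if d + step < dist[ny, nx]:
                    dist[ny, nx] = d + step
                    heapq.heappush(heap, (d + step, ny, nx))
    return dist

def pseudo_label_and_confidence(image, fg_scribble, bg_scribble, net_prob, beta=0.1):
    """Pseudo label from the closer scribble class; confidence mixes geodesic
    proximity with the network's foreground probability."""
    d_fg = geodesic_distance(image, fg_scribble)
    d_bg = geodesic_distance(image, bg_scribble)
    pseudo = (d_fg < d_bg).astype(np.uint8)
    geo_conf = np.exp(-beta * np.minimum(d_fg, d_bg))       # closer to a scribble -> more trusted
    net_conf = np.where(pseudo == 1, net_prob, 1.0 - net_prob)
    return pseudo, geo_conf * net_conf
```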


Subjects
Brain Neoplasms; Liver Neoplasms; Humans; Liver Neoplasms/diagnostic imaging; Neuroimaging; Probability; Research Design; Image Processing, Computer-Assisted
2.
Int J Comput Assist Radiol Surg ; 19(1): 97-108, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37322299

ABSTRACT

PURPOSE: Pelvic bone segmentation and landmark definition from computed tomography (CT) images are prerequisite steps for the preoperative planning of total hip arthroplasty. In clinical applications, diseased pelvic anatomy usually degrades the accuracy of bone segmentation and landmark detection, leading to improper surgical planning and potential operative complications. METHODS: This work proposes a two-stage multi-task algorithm to improve the accuracy of pelvic bone segmentation and landmark detection, especially for diseased cases. The two-stage framework uses a coarse-to-fine strategy that first conducts global-scale bone segmentation and landmark detection and then focuses on the important local region to further refine the accuracy. For the global stage, a dual-task network is designed to share common features between the segmentation and detection tasks, so that the two tasks mutually reinforce each other's performance. For the local-scale segmentation, an edge-enhanced dual-task network is designed for simultaneous bone segmentation and edge detection, leading to more accurate delineation of the acetabular boundary. RESULTS: The method was evaluated via threefold cross-validation on 81 CT images (31 diseased and 50 healthy cases). The first stage achieved Dice similarity coefficient (DSC) scores of 0.94, 0.97, and 0.97 for the sacrum and the left and right hips, respectively, and an average distance error of 3.24 mm for the bone landmarks. The second stage further improved the DSC of the acetabulum by 5.42%, outperforming the state-of-the-art (SOTA) methods by 0.63%. Our method also accurately segmented the diseased acetabular boundaries. The entire workflow took ~10 s, only half the run time of U-Net. CONCLUSION: Using multi-task networks and a coarse-to-fine strategy, this method achieved more accurate bone segmentation and landmark detection than the SOTA methods, especially for diseased hip images. Our work contributes to the accurate and rapid design of acetabular cup prostheses.
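As an illustration of the shared-feature idea in the global stage, here is a minimal dual-task sketch in PyTorch: a shared encoder feeds both a per-voxel segmentation head and a landmark-heatmap head, so the two tasks are trained on common features. The architecture, channel widths, and names are assumptions for illustration, not the network described in the abstract.

```python
# Hypothetical sketch of a dual-task network: shared 3D encoder, one head for
# bone segmentation logits, one head for landmark heatmaps.
import torch
import torch.nn as nn

class DualTaskNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=4, n_landmarks=8, width=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv3d(width, n_classes, 1)    # per-voxel class logits
        self.lmk_head = nn.Conv3d(width, n_landmarks, 1)  # one heatmap per landmark

    def forward(self, x):
        feats = self.encoder(x)  # shared features reinforce both tasks
        return self.seg_head(feats), self.lmk_head(feats)

# usage on a toy CT patch
net = DualTaskNet()
seg_logits, lmk_heatmaps = net(torch.randn(1, 1, 32, 64, 64))
```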


Subjects
Deep Learning; Humans; Tomography, X-Ray Computed/methods; Hip; Pelvis/diagnostic imaging; Acetabulum; Image Processing, Computer-Assisted/methods
3.
Int J Comput Assist Radiol Surg ; 18(2): 379-394, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36048319

ABSTRACT

PURPOSE: Training deep neural networks usually requires a large amount of human-annotated data. For organ segmentation from volumetric medical images, human annotation is tedious and inefficient. To save human labor and accelerate the training process, the strategy of annotation by iterative deep learning (AID) has recently become popular in the research community. However, due to the lack of domain knowledge or efficient human-interaction tools, current AID methods still suffer from long training times and a high annotation burden. METHODS: We develop a contour-based AID algorithm that uses a boundary representation instead of voxel labels to incorporate high-level organ shape knowledge. We propose a contour segmentation network with a multi-scale feature extraction backbone to improve boundary detection accuracy. We also develop a contour-based human-intervention method to facilitate easy adjustment of organ boundaries. By combining the contour-based segmentation network and the contour-adjustment intervention method, our algorithm achieves fast few-shot learning and efficient human proofreading. RESULTS: For validation, two human operators independently annotated four abdominal organs in computed tomography (CT) images using our method and two comparison methods, i.e., a traditional contour-interpolation method and a state-of-the-art (SOTA) convolutional neural network (CNN) method based on voxel label representation. Compared to these methods, our approach considerably reduced annotation time and inter-rater variability. Our contour detection network also outperforms the SOTA nnU-Net in producing anatomically plausible organ shapes with only a small training set. CONCLUSION: Taking advantage of the boundary shape prior and the contour representation, our method is more efficient, more accurate, and less prone to inter-operator variability than SOTA AID methods for organ segmentation from volumetric medical images. Its good shape learning ability and flexible boundary adjustment make it suitable for fast annotation of organ structures with regular shapes.
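To illustrate how a boundary (contour) representation can be turned back into voxel labels for training after human adjustment, here is a minimal per-slice sketch. The point-in-polygon rasterization via matplotlib and all function names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: rasterize an ordered per-slice organ contour into a
# binary mask, so an adjusted contour can feed standard voxel-label training.
import numpy as np
from matplotlib.path import Path

def contour_to_mask(contour_xy: np.ndarray, shape: tuple) -> np.ndarray:
    """contour_xy: (N, 2) ordered (x, y) boundary points of one organ on one slice."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    pixel_centers = np.column_stack([xs.ravel() + 0.5, ys.ravel() + 0.5])
    inside = Path(contour_xy).contains_points(pixel_centers)
    return inside.reshape(h, w)

# usage: a rough elliptical contour on a 128x128 slice
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
contour = np.column_stack([64 + 30 * np.cos(theta), 64 + 20 * np.sin(theta)])
mask = contour_to_mask(contour, (128, 128))
```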


Subjects
Deep Learning; Humans; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Algorithms; Image Processing, Computer-Assisted/methods