Results 1 - 20 of 58
1.
IEEE Trans Med Imaging ; PP, 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38602852

ABSTRACT

Adapting a medical image segmentation model to a new domain is important for improving its cross-domain transferability, and because the annotation process is expensive, Unsupervised Domain Adaptation (UDA), which requires only unlabeled images for adaptation, is appealing. Existing UDA methods are mainly based on image or feature alignment with adversarial training for regularization, and they are limited by insufficient supervision in the target domain. In this paper, we propose an enhanced Filtered Pseudo Label (FPL+)-based UDA method for 3D medical image segmentation. It first uses cross-domain data augmentation to translate labeled images in the source domain into a dual-domain training set consisting of a pseudo source-domain set and a pseudo target-domain set. To leverage the dual-domain augmented images to train a pseudo label generator, domain-specific batch normalization layers are used to deal with the domain shift while learning domain-invariant structural features, generating high-quality pseudo labels for target-domain images. We then combine labeled source-domain images and target-domain images with pseudo labels to train a final segmentor, where image-level weighting based on uncertainty estimation and pixel-level weighting based on dual-domain consensus are proposed to mitigate the adverse effect of noisy pseudo labels. Experiments on three public multi-modal datasets for Vestibular Schwannoma, brain tumor, and whole heart segmentation show that our method surpassed ten state-of-the-art UDA methods, and in some cases it even achieved better results than fully supervised learning in the target domain.
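
The two weighting schemes admit a compact illustration. Below is a minimal PyTorch sketch of the idea, not the paper's implementation: the function name, the agreement-based pixel weight, and the exponential-of-mean-entropy image weight are our assumptions.

```python
import torch
import torch.nn.functional as F

def weighted_pseudo_label_loss(logits, pseudo_labels, prob_a, prob_b):
    """Cross-entropy on pseudo labels, down-weighted where two
    domain-specific predictions (prob_a, prob_b) disagree and for
    images whose mean prediction entropy is high.

    logits:        (N, C, D, H, W) final segmentor output
    pseudo_labels: (N, D, H, W)    hard pseudo labels
    prob_a/prob_b: (N, C, D, H, W) softmax outputs of the two branches
    """
    # Pixel-level weight: dual-domain consensus (1 where branches agree).
    pixel_w = (prob_a.argmax(1) == prob_b.argmax(1)).float()

    # Image-level weight from uncertainty: low mean entropy -> high weight.
    mean_prob = 0.5 * (prob_a + prob_b)
    entropy = -(mean_prob * torch.log(mean_prob + 1e-6)).sum(1)  # (N, D, H, W)
    image_w = torch.exp(-entropy.mean(dim=(1, 2, 3)))            # (N,)

    ce = F.cross_entropy(logits, pseudo_labels, reduction="none")
    return (image_w.view(-1, 1, 1, 1) * pixel_w * ce).mean()
```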

2.
Arch Gynecol Obstet ; 309(2): 503-514, 2024 02.
Article in English | MEDLINE | ID: mdl-36790463

ABSTRACT

PURPOSE: To investigate the diagnostic value of monoexponential, biexponential, and diffusion kurtosis MR imaging (MRI) in distinguishing invasive placentas. METHODS: A total of 53 patients with invasive placentas and 47 patients with noninvasive placentas undergoing conventional diffusion-weighted imaging (DWI), intravoxel incoherent motion (IVIM), and diffusion kurtosis imaging (DKI) were retrospectively enrolled. The mean, minimum, and maximum values of the apparent diffusion coefficient (ADC) and exponential ADC (eADC) from standard DWI; the diffusion kurtosis (MK) and diffusion coefficient (MD) from DKI; and the pure diffusion coefficient (D), pseudo-diffusion coefficient (D*), and perfusion fraction (f) from IVIM were measured from volumetric analysis and compared. Receiver operating characteristic (ROC) curve and logistic regression analyses were conducted to evaluate the diagnostic efficiency of the different diffusion parameters for distinguishing invasive placentas. RESULTS: Comparisons between accreta lesions in patients with invasive placentas (AL) and the lower 1/3 of the placenta in patients with noninvasive placentas (LP) demonstrated that MD mean, D mean, and D* mean were significantly lower, while ADC max and D max were significantly higher, in invasive placentas (all p < 0.05). Multivariate analysis demonstrated that, among all the studied parameters, D mean, D max, and D* mean differed significantly for invasive placentas. A combined use of these three parameters yielded an AUC of 0.86 with sensitivity, specificity, and accuracy of 84.91%, 76.60%, and 80%, respectively. CONCLUSION: The combined use of different IVIM parameters is helpful in distinguishing invasive placentas.
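
The D, D*, and f parameters come from the standard biexponential IVIM signal model, S(b)/S0 = f·exp(-b·D*) + (1 - f)·exp(-b·D). A minimal SciPy sketch of fitting this model to a normalized signal decay; the b-values, initial guess, and bounds are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    """Biexponential IVIM model: S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D)."""
    return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

# Illustrative normalized signal at several b-values (s/mm^2).
b_vals = np.array([0.0, 50, 100, 200, 400, 600, 800])
signal = ivim(b_vals, f=0.25, d_star=0.02, d=0.0015)

params, _ = curve_fit(
    ivim, b_vals, signal,
    p0=[0.1, 0.01, 0.001],                       # initial guess for f, D*, D
    bounds=([0, 1e-3, 1e-4], [0.5, 0.1, 3e-3]),  # plausible physiological ranges
)
f_fit, d_star_fit, d_fit = params
```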


Subject(s)
Diffusion Magnetic Resonance Imaging; Diffusion Tensor Imaging; Humans; Retrospective Studies; Diffusion Magnetic Resonance Imaging/methods; Diffusion Tensor Imaging/methods; ROC Curve; Motion
3.
IEEE Trans Med Imaging ; 43(1): 175-189, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37440388

ABSTRACT

Deep neural networks typically require a large number of accurate annotations to achieve outstanding performance in medical image segmentation. One-shot and weakly-supervised learning are promising research directions that reduce labeling effort, by learning a new class from only one annotated image and by using coarse labels instead, respectively. In this work, we present an innovative framework for 3D medical image segmentation with one-shot and weakly-supervised settings. First, a propagation-reconstruction network is proposed to propagate scribbles from one annotated volume to unlabeled 3D images, based on the assumption that anatomical patterns in different human bodies are similar. Then a multi-level similarity denoising module is designed to refine the scribbles based on embeddings from the anatomical to the pixel level. After expanding the scribbles to pseudo masks, we observe that misclassified voxels mainly occur in the border region, and propose to extract self-support prototypes for their specific refinement. Based on these weakly-supervised segmentation results, we further train a segmentation model for the new class with a noisy-label training strategy. Experiments on three CT datasets and one MRI dataset show that the proposed method obtains significant improvement over the state-of-the-art methods and performs robustly even under severe class imbalance and low contrast. Code is publicly available at https://github.com/LWHYC/OneShot_WeaklySeg.
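
The self-support prototype refinement can be pictured as masked average pooling followed by similarity re-scoring. This is our reading of the idea, with illustrative names and thresholds rather than the paper's exact design:

```python
import torch
import torch.nn.functional as F

def self_support_refine(features, prob, thresh=0.9):
    """Refine a soft foreground map using a prototype pooled from
    high-confidence foreground voxels (masked average pooling).

    features: (C, D, H, W) voxel embeddings
    prob:     (D, H, W)    initial foreground probability
    """
    support = (prob > thresh).float()  # confident core region
    proto = (features * support).sum(dim=(1, 2, 3)) / (support.sum() + 1e-6)
    sim = F.cosine_similarity(features, proto.view(-1, 1, 1, 1), dim=0)
    # Blend the prototype similarity back into the probability map,
    # which mainly corrects uncertain voxels near the border.
    return 0.5 * (prob + sim.clamp(0, 1))
```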


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Supervised Machine Learning
4.
IEEE Trans Med Imaging ; 42(12): 3932-3943, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37738202

ABSTRACT

Domain Adaptation (DA) is important for deep learning-based medical image segmentation models to deal with testing images from a new target domain. As the source-domain data are usually unavailable when a trained model is deployed at a new center, Source-Free Domain Adaptation (SFDA) is appealing for data- and annotation-efficient adaptation to the target domain. However, existing SFDA methods have limited performance due to the lack of sufficient supervision, as source-domain images are unavailable and target-domain images are unlabeled. We propose a novel Uncertainty-aware Pseudo Label guided (UPL) SFDA method for medical image segmentation. Specifically, we propose Target Domain Growing (TDG) to enhance the diversity of predictions in the target domain by duplicating the pre-trained model's prediction head multiple times with perturbations. The different predictions from these duplicated heads are used to obtain pseudo labels for unlabeled target-domain images, together with their uncertainty for identifying reliable pseudo labels. We also propose a Twice Forward pass Supervision (TFS) strategy that uses reliable pseudo labels obtained in one forward pass to supervise predictions in the next forward pass. The adaptation is further regularized by a mean prediction-based entropy minimization term that encourages confident and consistent results across the different prediction heads. UPL-SFDA was validated on a multi-site heart MRI segmentation dataset, a cross-modality fetal brain segmentation dataset, and a 3D fetal tissue segmentation dataset. It improved the average Dice by 5.54, 5.01, and 6.89 percentage points for the three tasks, respectively, compared with the baseline, and outperformed several state-of-the-art SFDA methods.
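
The TDG and uncertainty ideas reduce to a few lines: average the duplicated heads, take the argmax as pseudo label, keep only confident pixels, and minimize the entropy of the mean prediction. A minimal sketch under our own naming and threshold, not the published code:

```python
import torch

def upl_style_losses(head_probs, conf_thresh=0.8):
    """head_probs: list of K softmax outputs, each (N, C, H, W), from
    duplicated and perturbed prediction heads."""
    mean_p = torch.stack(head_probs).mean(0)        # (N, C, H, W)
    pseudo = mean_p.argmax(1)                       # pseudo labels
    conf, _ = mean_p.max(1)
    reliable = conf > conf_thresh                   # mask of reliable pixels

    # Entropy of the mean prediction: encourages confident and
    # consistent results across the heads.
    ent_loss = -(mean_p * torch.log(mean_p + 1e-6)).sum(1).mean()
    return pseudo, reliable, ent_loss
```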


Subject(s)
Fetus; Image Processing, Computer-Assisted; Uncertainty; Entropy
5.
Neurocomputing (Amst) ; 544: None, 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37528990

ABSTRACT

Accurate segmentation of brain tumors from medical images is important for diagnosis and treatment planning, and it often requires multi-modal or contrast-enhanced images. However, in practice some modalities of a patient may be absent. Synthesizing the missing modality has the potential to fill this gap and achieve high segmentation performance. Existing methods often treat the synthesis and segmentation tasks separately, or consider them jointly but without effective regularization of the complex joint model, leading to limited performance. We propose a novel brain Tumor Image Synthesis and Segmentation network (TISS-Net) that obtains the synthesized target modality and the segmentation of brain tumors end-to-end with high performance. First, we propose a dual-task-regularized generator that simultaneously obtains a synthesized target modality and a coarse segmentation, leveraging a tumor-aware synthesis loss with perceptibility regularization to minimize the high-level semantic domain gap between synthesized and real target modalities. Based on the synthesized image and the coarse segmentation, we further propose a dual-task segmentor that simultaneously predicts a refined segmentation and the errors in the coarse segmentation, where consistency between these two predictions is introduced for regularization. Our TISS-Net was validated on two applications: synthesizing FLAIR images for whole glioma segmentation, and synthesizing contrast-enhanced T1 images for Vestibular Schwannoma segmentation. Experimental results showed that our TISS-Net largely improved the segmentation accuracy compared with direct segmentation from the available modalities, and it outperformed state-of-the-art image synthesis-based segmentation methods.

6.
Med Image Anal ; 89: 102904, 2023 10.
Article in English | MEDLINE | ID: mdl-37506556

ABSTRACT

Generalization to previously unseen images with potential domain shifts is essential for clinically applicable medical image segmentation. Disentangling domain-specific and domain-invariant features is key to Domain Generalization (DG). However, existing DG methods struggle to achieve effective disentanglement. To address this problem, we propose an efficient framework called Contrastive Domain Disentanglement and Style Augmentation (CDDSA) for generalizable medical image segmentation. First, a disentanglement network decomposes the image into a domain-invariant anatomical representation and a domain-specific style code, where the former is used for segmentation unaffected by domain shift, and the disentanglement is regularized by a decoder that combines the anatomical representation and the style code to reconstruct the original image. Second, to achieve better disentanglement, a contrastive loss is proposed to encourage the style codes from the same domain to be compact and those from different domains to be divergent. Finally, to further improve generalizability, we propose a style augmentation strategy to synthesize images with various unseen styles in real time while maintaining anatomical information. Comprehensive experiments on a public multi-site fundus image dataset and an in-house multi-site Nasopharyngeal Carcinoma Magnetic Resonance Image (NPC-MRI) dataset show that the proposed CDDSA achieves remarkable generalizability across different domains and outperforms several state-of-the-art methods in generalizable segmentation. Code is available at https://github.com/HiLab-git/DAG4MIA.
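
The "compact within a domain, divergent across domains" constraint can be written as a pairwise contrastive (hinge) loss over style codes. A sketch under our own formulation; the paper's exact loss may differ, and the margin is illustrative:

```python
import torch
import torch.nn.functional as F

def style_contrastive_loss(style, domain, margin=1.0):
    """style: (N, d) style codes; domain: (N,) integer domain labels.
    Assumes each batch mixes several domains, with repeats per domain."""
    dist = torch.cdist(style, style)                 # pairwise distances
    same = domain.unsqueeze(0) == domain.unsqueeze(1)
    eye = torch.eye(len(style), dtype=torch.bool, device=style.device)
    pos = dist[same & ~eye]                          # same domain: pull together
    neg = F.relu(margin - dist[~same])               # different domains: push apart
    return pos.mean() + neg.mean()
```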


Subject(s)
Image Processing, Computer-Assisted; Humans; Fundus Oculi
7.
Med Image Anal ; 88: 102873, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37421932

ABSTRACT

Abdominal multi-organ segmentation in multi-sequence magnetic resonance images (MRI) is of great significance in many clinical scenarios, e.g., MRI-oriented pre-operative treatment planning. Labeling multiple organs on a single MR sequence is a time-consuming and labor-intensive task, let alone manual labeling on multiple MR sequences. Training a model on one sequence and generalizing it to other domains is one way to reduce the burden of manual annotation, but the domain gap often leads to poor generalization performance of such methods. Image translation-based unsupervised domain adaptation (UDA) is a common way to address this domain gap. However, existing methods pay less attention to preserving anatomical consistency and are limited to one-to-one domain adaptation, which makes adapting a model to multiple target domains inefficient. This work proposes a unified framework called OMUDA for one-to-multiple unsupervised domain-adaptive segmentation, where disentanglement between content and style is used to efficiently translate a source-domain image into multiple target domains. Moreover, generator refactoring and a style constraint are employed in OMUDA to better maintain cross-modality structural consistency and reduce domain aliasing. The average Dice Similarity Coefficients (DSCs) of OMUDA for multiple sequences and organs on the in-house test set, the AMOS22 dataset, and the CHAOS dataset are 85.51%, 82.66%, and 91.38%, respectively, which are slightly lower than those of CycleGAN (85.66% and 83.40%) on the first two datasets and slightly higher than CycleGAN (91.36%) on the last one. However, compared with CycleGAN, OMUDA reduces floating-point calculations by about 87 percent in the training phase and about 30 percent in the inference stage. The quantitative results in both segmentation performance and training efficiency demonstrate the usability of OMUDA in practical scenarios such as the initial phase of product development.

8.
Med Image Anal ; 88: 102833, 2023 08.
Article in English | MEDLINE | ID: mdl-37267773

ABSTRACT

In-utero fetal MRI is emerging as an important tool in the diagnosis and analysis of the developing human brain. Automatic segmentation of the developing fetal brain is a vital step in the quantitative analysis of prenatal neurodevelopment in both the research and clinical context. However, manual segmentation of cerebral structures is time-consuming and prone to error and inter-observer variability. Therefore, we organized the Fetal Tissue Annotation (FeTA) Challenge in 2021 in order to encourage the development of automatic segmentation algorithms on an international level. The challenge utilized the FeTA Dataset, an open dataset of fetal brain MRI reconstructions segmented into seven different tissues (external cerebrospinal fluid, gray matter, white matter, ventricles, cerebellum, brainstem, deep gray matter). Twenty international teams participated in this challenge, submitting a total of 21 algorithms for evaluation. In this paper, we provide a detailed analysis of the results from both a technical and a clinical perspective. All participants relied on deep learning methods, mainly U-Nets, with some variability in network architecture, optimization, and image pre- and post-processing. The majority of teams used existing medical imaging deep learning frameworks. The main differences between the submissions were the fine-tuning done during training and the specific pre- and post-processing steps performed. The challenge results showed that almost all submissions performed similarly. Four of the top five teams used ensemble learning methods. However, one team's algorithm, which was based on an asymmetrical U-Net architecture, performed significantly better than the other submissions. This paper provides a first-of-its-kind benchmark for future automatic multi-tissue segmentation algorithms for the developing human brain in utero.


Subject(s)
Image Processing, Computer-Assisted; White Matter; Pregnancy; Female; Humans; Image Processing, Computer-Assisted/methods; Brain/diagnostic imaging; Head; Fetus/diagnostic imaging; Algorithms; Magnetic Resonance Imaging/methods
9.
Int J Radiat Oncol Biol Phys ; 117(4): 994-1006, 2023 Nov 15.
Article in English | MEDLINE | ID: mdl-37244625

ABSTRACT

PURPOSE: Our purpose was to develop a deep learning model (AbsegNet) that produces accurate contours of 16 organs at risk (OARs) for abdominal malignancies as an essential part of fully automated radiation treatment planning. METHODS AND MATERIALS: Three data sets with 544 computed tomography scans were retrospectively collected. Data set 1 was split into 300 training cases and 128 test cases (cohort 1) for AbsegNet. Data set 2, comprising cohort 2 (n = 24) and cohort 3 (n = 20), was used to validate AbsegNet externally. Data set 3, comprising cohort 4 (n = 40) and cohort 5 (n = 32), was used to clinically assess the accuracy of AbsegNet-generated contours. Each cohort was from a different center. The Dice similarity coefficient and 95th-percentile Hausdorff distance were calculated to evaluate the delineation quality for each OAR. Clinical accuracy evaluation was classified into 4 levels: no revision, minor revisions (0% < volumetric revision degree [VRD] ≤ 10%), moderate revisions (10% < VRD < 20%), and major revisions (VRD ≥ 20%). RESULTS: For all OARs, AbsegNet achieved a mean Dice similarity coefficient of 86.73%, 85.65%, and 88.04% in cohorts 1, 2, and 3, respectively, and a mean 95th-percentile Hausdorff distance of 8.92, 10.18, and 12.40 mm, respectively. AbsegNet outperformed SwinUNETR, DeepLabV3+, Attention-UNet, UNet, and 3D-UNet. When experts evaluated contours from cohorts 4 and 5, 4 OARs (liver, kidney_L, kidney_R, and spleen) of all patients were scored as requiring no revision, and over 87.5% of patients with contours of the stomach, esophagus, adrenals, or rectum were considered to require no or only minor revisions. Only 15.0% of patients with colon and small-bowel contours required major revisions. CONCLUSIONS: We propose a novel deep-learning model to delineate OARs on diverse data sets. Most contours produced by AbsegNet are accurate and robust and are, therefore, clinically applicable and helpful for facilitating the radiation therapy workflow.
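
Both reported metrics are standard for binary masks and straightforward to compute; a sketch with SciPy, assuming the usual surface-based definition of the 95th-percentile Hausdorff distance and non-empty masks (spacing in mm):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred, gt):
    """Dice similarity coefficient of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def surface(mask):
    """Boundary voxels of a binary mask."""
    m = mask.astype(bool)
    return m & ~binary_erosion(m)

def hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric surface distance in mm."""
    sp, sg = surface(pred), surface(gt)
    dt_g = distance_transform_edt(~sg, sampling=spacing)
    dt_p = distance_transform_edt(~sp, sampling=spacing)
    dists = np.concatenate([dt_g[sp], dt_p[sg]])  # both directions
    return np.percentile(dists, 95)
```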

10.
Med Image Anal ; 87: 102808, 2023 07.
Article in English | MEDLINE | ID: mdl-37087838

ABSTRACT

Assessment of myocardial viability is essential in the diagnosis and treatment management of patients suffering from myocardial infarction, and classification of pathology on the myocardium is key to this assessment. This work defines a new task of medical image analysis, i.e., myocardial pathology segmentation (MyoPS) combining three-sequence cardiac magnetic resonance (CMR) images, which was first proposed in the MyoPS challenge held in conjunction with MICCAI 2020. Note that in this paper MyoPS refers to both myocardial pathology segmentation and the challenge. The challenge provided 45 paired and pre-aligned CMR images, allowing algorithms to combine complementary information from the three CMR sequences for pathology segmentation. In this article, we provide details of the challenge, survey the works of fifteen participants, and interpret their methods according to five aspects: preprocessing, data augmentation, learning strategy, model architecture, and post-processing. In addition, we analyze the results with respect to different factors, in order to examine the key obstacles, explore the potential of solutions, and provide a benchmark for future research. The average Dice scores of the submitted algorithms were 0.614±0.231 for myocardial scars and 0.644±0.153 for edema. We conclude that while promising results have been reported, the research is still in an early stage, and more in-depth exploration is needed before successful application in the clinic. The MyoPS data and evaluation tool remain publicly available upon registration via the challenge homepage (www.sdspeople.fudan.edu.cn/zhuangxiahai/0/myops20/).


Subject(s)
Benchmarking; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Heart/diagnostic imaging; Myocardium/pathology; Magnetic Resonance Imaging/methods
12.
IEEE Trans Med Imaging ; 42(10): 2912-2923, 2023 10.
Article in English | MEDLINE | ID: mdl-37093729

ABSTRACT

Semantic segmentation of histopathological images is important for automatic cancer diagnosis, and it is challenged by the time-consuming and labor-intensive annotation process required to obtain pixel-level labels for training. To reduce annotation costs, Weakly Supervised Semantic Segmentation (WSSS) aims to segment objects using only image- or patch-level classification labels. Current WSSS methods are mostly based on Class Activation Maps (CAM), which usually locate the most discriminative object parts with limited segmentation accuracy. In this work, we propose a novel two-stage weakly supervised segmentation framework based on High-resolution Activation Maps and Interleaved Learning (HAMIL). First, we propose a simple yet effective Classification Network with High-resolution Activation Maps (HAM-Net) that exploits a lightweight classification head combined with Multiple Layer Fusion (MLF) of activation maps and Monte Carlo Augmentation (MCA) to obtain precise foreground regions. Second, we use dense pseudo labels generated by HAM-Net to train a better segmentation model, where three networks with the same structure are trained with interleaved learning: the agreement between two networks is used to highlight reliable pseudo labels for training the third network, and at the same time, the two networks serve as teachers for guiding the third network via knowledge distillation. Extensive experiments on two public histopathological image datasets of lung cancer demonstrated that our proposed HAMIL outperformed state-of-the-art weakly supervised and noisy-label learning methods. The code is available at https://github.com/HiLab-git/HAMIL.
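
The agreement gating can be sketched in a few lines; the extra distillation term contributed by the two teachers is the classic temperature-softened KL (see the sketch under entry 13 below). The names and the use of plain cross-entropy here are our assumptions:

```python
import torch.nn.functional as F

def agreement_masked_loss(logits_a, logits_b, logits_c, pseudo):
    """Supervise the third network only where the two peer networks
    agree, treating agreement as a proxy for pseudo-label reliability."""
    agree = (logits_a.argmax(1) == logits_b.argmax(1)).float()
    ce = F.cross_entropy(logits_c, pseudo, reduction="none")
    return (ce * agree).mean()
```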


Subject(s)
Lung Neoplasms; Humans; Monte Carlo Method; Semantics; Supervised Machine Learning; Image Processing, Computer-Assisted
13.
IEEE Trans Med Imaging ; 42(9): 2513-2523, 2023 09.
Article in English | MEDLINE | ID: mdl-37030798

ABSTRACT

Accurate segmentation of multiple abdominal organs from Computed Tomography (CT) images plays an important role in computer-aided diagnosis, treatment planning, and follow-up. Currently, 3D Convolutional Neural Networks (CNNs) have achieved promising performance on automatic medical image segmentation tasks. However, most existing 3D CNNs have a large number of parameters and require huge numbers of floating-point operations (FLOPs), and 3D CT volumes are large, leading to a high computational cost that limits their clinical application. To tackle this issue, we propose a novel framework based on a lightweight network and Knowledge Distillation (KD) for delineating multiple organs from 3D CT volumes. We first propose a novel lightweight medical image segmentation network named LCOV-Net to reduce the model size, and then introduce two knowledge distillation modules (i.e., Class-Affinity KD and Multi-Scale KD) to effectively distill knowledge from a heavyweight teacher model to improve LCOV-Net's segmentation accuracy. Experiments on two public abdominal CT datasets for multi-organ segmentation showed that: 1) our LCOV-Net outperformed existing lightweight 3D segmentation models in both computational cost and accuracy; 2) the proposed KD strategy effectively improved the performance of the lightweight network and outperformed existing KD methods; 3) combining the proposed LCOV-Net and KD strategy, our framework achieved better performance than the state-of-the-art 3D nnU-Net with only one-fifth of its parameters. The code is available at https://github.com/HiLab-git/LCOVNet-and-KD.
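
The distillation modules build on the classic teacher-student loss; a minimal sketch of that baseline (the temperature and weighting are illustrative, and the paper's Class-Affinity and Multi-Scale variants add structure beyond this):

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Supervised cross-entropy plus KL divergence between
    temperature-softened teacher and student distributions."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients match the hard-label term
    return alpha * ce + (1 - alpha) * kl
```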


Subject(s)
Abdomen; Imaging, Three-Dimensional; Imaging, Three-Dimensional/methods; Abdomen/diagnostic imaging; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Diagnosis, Computer-Assisted; Image Processing, Computer-Assisted/methods
14.
IEEE Trans Med Imaging ; 42(9): 2539-2551, 2023 09.
Article in English | MEDLINE | ID: mdl-37030841

ABSTRACT

In clinical practice, it is desirable for medical image segmentation models to continually learn from a sequential data stream from multiple sites, rather than from a consolidated dataset, due to storage costs and privacy restrictions. However, when learning on a new site, existing methods struggle with weak memorizability for previously learned sites with complex shape and semantic information, and poor explainability of the memory consolidation process. In this work, we propose a novel Shape and Semantics-based Selective Regularization (S3R) method for explainable cross-site continual segmentation that maintains both the shape and semantic knowledge of previously learned sites. Specifically, the S3R method adopts a selective regularization scheme that penalizes changes of parameters with high Joint Shape and Semantics-based Importance (JSSI) weights, which are estimated based on the parameter sensitivity to shape properties and reliable semantics of the segmentation object. This helps to prevent the related shape and semantic knowledge from being forgotten. Moreover, we propose an Importance Activation Mapping (IAM) method for memory interpretation, which indicates the spatial support for important parameters in order to visualize the memorized content. We have extensively evaluated our method on prostate segmentation and optic cup and disc segmentation tasks. Our method outperforms other comparison methods in reducing model forgetting and increasing explainability. Our code is available at https://github.com/jingyzhang/S3R.
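
Selective regularization of this kind is typically an importance-weighted quadratic penalty on parameter drift, in the spirit of elastic weight consolidation. A sketch where `importance` stands in for the paper's JSSI weights, which we do not reproduce:

```python
def selective_reg(model, old_params, importance, lam=1.0):
    """Penalize changes of parameters with high importance weights.

    old_params / importance: dicts keyed by parameter name, holding the
    snapshot from the previous site and the per-parameter weights."""
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (importance[name] * (p - old_params[name]) ** 2).sum()
    return lam * loss
```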


Subject(s)
Image Processing, Computer-Assisted; Optic Disk; Male; Humans; Image Processing, Computer-Assisted/methods; Semantics; Machine Learning; Prostate
15.
IEEE Trans Med Imaging ; 42(8): 2235-2246, 2023 08.
Article in English | MEDLINE | ID: mdl-37022877

ABSTRACT

The success of Convolutional Neural Networks (CNNs) in 3D medical image segmentation relies on massive fully annotated 3D volumes for training that are time-consuming and labor-intensive to acquire. In this paper, we propose to annotate a segmentation target with only seven points in 3D medical images, and design a two-stage weakly supervised learning framework called PA-Seg. In the first stage, we employ the geodesic distance transform to expand the seed points and provide a stronger supervision signal. To further deal with unannotated image regions during training, we propose two contextual regularization strategies, i.e., a multi-view Conditional Random Field (mCRF) loss and a Variance Minimization (VM) loss, where the first encourages pixels with similar features to have consistent labels, and the second minimizes the intensity variance of the segmented foreground and background, respectively. In the second stage, we use predictions obtained by the model pre-trained in the first stage as pseudo labels. To overcome noise in the pseudo labels, we introduce a Self and Cross Monitoring (SCM) strategy, which combines self-training with Cross Knowledge Distillation (CKD) between a primary model and an auxiliary model that learn from soft labels generated by each other. Experiments on public datasets for Vestibular Schwannoma (VS) segmentation and Brain Tumor Segmentation (BraTS) demonstrated that our model trained in the first stage outperformed existing state-of-the-art weakly supervised approaches by a large margin, and after using SCM for additional training, the model's performance was close to its fully supervised counterpart on the BraTS dataset.
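
The VM loss has a compact closed form: the soft-weighted intensity variance of the foreground and of the background. A minimal PyTorch sketch under our own naming:

```python
def variance_min_loss(image, prob, eps=1e-6):
    """image: (N, 1, D, H, W) tensor of intensities; prob: same shape,
    foreground probability. Returns the sum of the soft foreground and
    background intensity variances."""
    total = 0.0
    for w in (prob, 1.0 - prob):            # foreground, then background
        mean = (w * image).sum() / (w.sum() + eps)
        total = total + (w * (image - mean) ** 2).sum() / (w.sum() + eps)
    return total
```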


Subject(s)
Brain Neoplasms; Humans; Neural Networks, Computer; Image Processing, Computer-Assisted; Supervised Machine Learning
16.
Med Phys ; 50(7): 4430-4442, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36762594

ABSTRACT

BACKGROUND: Delineation of Organs-at-Risk (OARs) is an important step in radiotherapy treatment planning. As manual delineation is time-consuming, labor-intensive, and affected by inter- and intra-observer variability, a robust and efficient automatic segmentation algorithm is highly desirable for improving the efficiency and repeatability of OAR delineation. PURPOSE: Automatic segmentation of OARs in medical images is challenged by low contrast and the various shapes and imbalanced sizes of different organs. We aim to overcome these challenges and develop a high-performance method for automatic segmentation of the 10 OARs required in radiotherapy planning for brain tumors. METHODS: A novel two-stage segmentation framework is proposed, where a coarse and simultaneous localization of all the target organs is obtained in the first stage, and a fine segmentation is achieved for each organ in the second stage. To deal with organs of various sizes and shapes, a stratified segmentation strategy is proposed, in which a High- and Low-Resolution Residual Network (HLRNet), consisting of a multiresolution branch and a high-resolution branch, is introduced to segment medium-sized organs, and a High-Resolution Residual Network (HRRNet) is used to segment small organs. In addition, a label fusion strategy is proposed to better deal with symmetric pairs of organs such as the left and right cochleas and lacrimal glands. RESULTS: Our method was validated on the dataset of the MICCAI ABCs 2020 challenge for OAR segmentation. It obtained an average Dice of 75.8% for 10 OARs and significantly outperformed several state-of-the-art models including nnU-Net (71.6%) and FocusNet (72.4%). Our proposed HLRNet and HRRNet improved the segmentation accuracy for medium-sized and small organs, respectively. The label fusion strategy led to higher accuracy for symmetric pairs of organs. CONCLUSIONS: Our proposed method is effective for the segmentation of OARs of brain tumors, with better performance than existing methods, especially on medium-sized and small organs. It has the potential to improve the efficiency of radiotherapy planning with high segmentation accuracy.


Subject(s)
Brain Neoplasms; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed; Organs at Risk; Radiotherapy Planning, Computer-Assisted/methods; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/radiotherapy
17.
Comput Methods Programs Biomed ; 231: 107398, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36773591

ABSTRACT

BACKGROUND AND OBJECTIVE: Open-source deep learning toolkits are one of the driving forces for developing medical image segmentation models that are essential for computer-assisted diagnosis and treatment procedures. Existing toolkits mainly focus on fully supervised segmentation, which assumes that full and accurate pixel-level annotations are available. Such annotations are time-consuming and difficult to acquire for segmentation tasks, which makes learning from imperfect labels highly desirable for reducing the annotation cost. We aim to develop a new deep learning toolkit to support annotation-efficient learning for medical image segmentation, which can accelerate and simplify the development of deep learning models with a limited annotation budget, e.g., learning from partial, sparse, or noisy annotations. METHODS: Our proposed toolkit, named PyMIC, is a modular deep learning library for medical image segmentation tasks. In addition to basic components that support the development of high-performance models for fully supervised segmentation, it contains several advanced components tailored for learning from imperfect annotations, such as loading annotated and unannotated images, loss functions for unannotated or partially or inaccurately annotated images, and training procedures for co-learning between multiple networks. PyMIC is built on the PyTorch framework and supports the development of semi-supervised, weakly supervised, and noise-robust learning methods for medical image segmentation. RESULTS: We present several illustrative medical image segmentation tasks based on PyMIC: (1) achieving competitive performance in fully supervised learning; (2) semi-supervised cardiac structure segmentation with only 10% of training images annotated; (3) weakly supervised segmentation using scribble annotations; and (4) learning from noisy labels for chest radiograph segmentation. CONCLUSIONS: The PyMIC toolkit is easy to use and facilitates the efficient development of medical image segmentation models with imperfect annotations. It is modular and flexible, enabling researchers to develop high-performance models with low annotation cost. The source code is available at: https://github.com/HiLab-git/PyMIC.


Subject(s)
Deep Learning; Diagnosis, Computer-Assisted; Heart; Software; Image Processing, Computer-Assisted; Supervised Machine Learning
18.
Radiother Oncol ; 180: 109480, 2023 03.
Article in English | MEDLINE | ID: mdl-36657723

ABSTRACT

BACKGROUND AND PURPOSE: The problem of obtaining accurate primary gross tumor volume (GTVp) segmentation for nasopharyngeal carcinoma (NPC) on heterogeneous magnetic resonance imaging (MRI) images with deep learning remains unsolved. Herein, we report a new deep-learning method that can accurately delineate the GTVp for NPC on multi-center MRI scans. MATERIAL AND METHODS: We collected 1057 patients with MRI images from five hospitals and randomly selected 600 patients from three hospitals to constitute a mixed training cohort for model development. The remaining patients were used as internal (n = 259) and external (n = 198) testing cohorts for model evaluation. An augmentation-invariant strategy was proposed to delineate the GTVp from multi-center MRI images, which encourages networks to produce similar predictions for inputs with different augmentations in order to learn invariant anatomical structure features. The Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), average surface distance (ASD), and relative absolute volume difference (RAVD) were used to measure segmentation performance. RESULTS: The model-generated predictions had a high overlap ratio with the ground truth. For the internal testing cohort, the average DSC, HD95, ASD, and RAVD were 0.88, 4.99 mm, 1.03 mm, and 0.13, respectively. For the external testing cohort, the average DSC, HD95, ASD, and RAVD were 0.88, 3.97 mm, 0.97 mm, and 0.10, respectively. No significant differences were found in DSC, HD95, or ASD for patients with different T categories, MRI slice thicknesses, or in-plane spacings. Moreover, the proposed augmentation-invariant strategy outperformed the widely used nnUNet, which uses conventional data augmentation approaches. CONCLUSION: Our proposed method showed highly accurate GTVp segmentation for NPC on multi-center MRI images, suggesting that it has the potential to serve as a generalized delineation solution for heterogeneous MRI images.
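
The augmentation-invariant strategy reduces to a consistency term between predictions for two randomly augmented views of the same scan. A minimal sketch, assuming `augment` is a stochastic, intensity-level (spatially aligned) transform; with spatial augmentations the predictions would first need to be mapped back to a common frame:

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, image, augment):
    """Penalize disagreement between the predictions for two random
    augmentations of the same input."""
    p1 = torch.softmax(model(augment(image)), dim=1)
    p2 = torch.softmax(model(augment(image)), dim=1)
    return F.mse_loss(p1, p2)
```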


Subject(s)
Deep Learning; Nasopharyngeal Neoplasms; Humans; Nasopharyngeal Carcinoma/diagnostic imaging; Tumor Burden; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Nasopharyngeal Neoplasms/diagnostic imaging; Magnetic Resonance Spectroscopy
20.
Clin Chem ; 69(2): 130-139, 2023 02 01.
Article in English | MEDLINE | ID: mdl-36544350

ABSTRACT

BACKGROUND: Immunofixation electrophoresis (IFE) is important for the diagnosis of plasma cell disorders (PCDs). Manual analysis of IFE images is time-consuming and potentially subjective. An artificial intelligence (AI) system for automatic and accurate IFE image recognition is therefore desirable. METHODS: In total, 12,703 expert-annotated IFE images (9182 from a new IFE imaging system and 3521 from an old one) were used to develop and test an AI system that was an ensemble of 3 deep neural networks. The model takes an IFE image as input and predicts the presence of 8 basic patterns (IgA-κ, IgA-λ, IgG-κ, IgG-λ, IgM-κ, IgM-λ, light chain κ, and light chain λ) and their combinations. Score-based class activation maps (Score-CAMs) were used for visual explanation of the model's predictions. RESULTS: The AI model achieved an average accuracy, sensitivity, and specificity of 99.82%, 93.17%, and 99.93%, respectively, for detection of the 8 basic patterns, which outperformed 4 junior experts with 1 year's experience and was comparable to a senior expert with 5 years' experience. The Score-CAMs gave a reasonable visual explanation of the predictions by highlighting the target-aligned regions in the bands and indicating potentially unreliable predictions. When trained with only the new-system images, the model's performance was still higher than that of junior experts on both the new and old IFE systems, with average accuracies of 99.91% and 99.81%, respectively. CONCLUSIONS: Our AI system achieved human-level performance in automatic recognition of IFE images, with high explainability and generalizability. It has the potential to improve the efficiency and reliability of diagnosis of PCDs.
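
Ensembling three networks for multi-label pattern detection is plain probability averaging; a sketch assuming each network ends in an 8-way sigmoid output (the names and threshold are illustrative):

```python
import torch

def ensemble_predict(models, image, thresh=0.5):
    """Average the per-pattern sigmoid probabilities of an ensemble and
    threshold each of the 8 basic IFE patterns independently."""
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(m(image)) for m in models]).mean(0)
    return probs > thresh  # (N, 8) boolean: pattern present or absent
```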


Subject(s)
Deep Learning; Paraproteinemias; Humans; Reproducibility of Results; Artificial Intelligence; Immunoelectrophoresis/methods; Immunoglobulin A; Immunoglobulin G; Immunoglobulin M