Results 1 - 20 of 23
1.
Article in English | MEDLINE | ID: mdl-38557615

ABSTRACT

Multi-organ segmentation is a fundamental task, and existing approaches usually rely on large-scale fully-labeled images for training. However, data privacy and incomplete/partial labels make those approaches struggle in practice. Federated learning is an emerging tool to address data privacy, but federated learning with partial labels is under-explored. In this work, we explore generating full supervision by building and aggregating inter-organ dependency based on partial labels and propose a single-encoder-multi-decoder framework named FedIOD. To simulate the annotation process, where each organ is labeled by referring to other closely related organs, a transformer module is introduced, and the learned self-attention matrices modeling pairwise inter-organ dependency are used to build pseudo-full labels. By using those pseudo-full labels for regularization in each client, the shared encoder is trained to extract rich and complete organ-related features rather than being biased toward certain organs. Then, each decoder in FedIOD projects the shared organ-related features into a specific space trained by the corresponding partial labels. Experimental results on five widely-used datasets, including LiTS, KiTS, MSD, BCTV, and ACDC, demonstrate the effectiveness of FedIOD, which outperforms the state-of-the-art approaches under in-federation evaluation and achieves the second-best performance under out-of-federation evaluation for multi-organ segmentation from partial labels. The source code is publicly available at https://github.com/vagabond-healer/FedIOD.
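
To make the pseudo-label construction concrete, here is a minimal, hypothetical sketch (not the released FedIOD code) of how a row-normalized inter-organ attention matrix could be used to synthesize a pseudo-full label from partial annotations; the function name, tensor shapes, and threshold are assumptions.

```python
import torch

def build_pseudo_full_labels(partial_masks, labeled, attention, threshold=0.5):
    """
    partial_masks: (K, H, W) soft organ masks; only organs with labeled[k] == True are trusted
    labeled:       (K,) bool tensor marking which organs are annotated on this client
    attention:     (K, K) row-normalized inter-organ dependency (e.g. learned self-attention)
    """
    K, H, W = partial_masks.shape
    pseudo = partial_masks.clone()
    for k in range(K):
        if not labeled[k]:
            # infer the missing organ from closely related, annotated organs
            w = attention[k] * labeled.float()          # keep only trusted organs
            w = w / w.sum().clamp_min(1e-8)
            pseudo[k] = (w.view(K, 1, 1) * partial_masks).sum(dim=0)
    return (pseudo > threshold).float()

# toy usage
masks = torch.rand(4, 8, 8)
labeled = torch.tensor([True, False, True, False])
attn = torch.softmax(torch.rand(4, 4), dim=1)
print(build_pseudo_full_labels(masks, labeled, attn).shape)  # torch.Size([4, 8, 8])
```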

2.
Comput Biol Med ; 171: 108228, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38422964

ABSTRACT

Weakly supervised learning with image-level labels, which frees deep learning from highly labor-intensive pixel-wise annotation, has gained great attention for medical image segmentation. However, existing weakly supervised methods are mainly designed for single-class segmentation, leaving multi-class medical image segmentation rarely explored. Different from natural images, label symbiosis and location adjacency are much more common in medical images, making multi-class segmentation more challenging. In this paper, we propose a novel weakly supervised learning method for multi-class medical image segmentation with image-level labels. In terms of the multi-class classification backbone, a multi-level classification network encoding multi-scale features is proposed to produce binary predictions, together with the corresponding CAMs, for each class separately. To address the above issues (i.e., label symbiosis and location adjacency), a feature decomposition module based on semantic affinity is first proposed to learn both class-independent and class-dependent features by maximizing the inter-class feature distance. Through a cross-guidance loss that jointly utilizes the above features, label symbiosis is largely alleviated. In terms of location adjacency, a mutually exclusive loss is constructed to minimize the overlap among regions corresponding to different classes. Experimental results on three datasets demonstrate the superior performance of the proposed weakly supervised framework for both single-class and multi-class medical image segmentation. We believe the analysis in this paper will shed new light on future work for multi-class medical image segmentation. The source code of this paper is publicly available at https://github.com/HustAlexander/MCWSS.
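
As an illustration of the location-adjacency idea, the following is a minimal sketch of a mutually exclusive loss that penalizes spatial overlap between per-class activation maps; it is an assumed simplification, not the MCWSS implementation, and the shapes and normalization are illustrative.

```python
import torch

def mutually_exclusive_loss(cams):
    """
    cams: (B, C, H, W) class activation maps in [0, 1].
    Penalizes spatial overlap between every pair of classes,
    encouraging adjacent structures to claim disjoint regions.
    """
    B, C, H, W = cams.shape
    loss = cams.new_zeros(())
    pairs = 0
    for i in range(C):
        for j in range(i + 1, C):
            loss = loss + (cams[:, i] * cams[:, j]).mean()
            pairs += 1
    return loss / max(pairs, 1)

print(mutually_exclusive_loss(torch.sigmoid(torch.randn(2, 3, 16, 16))))
```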


Subject(s)
Labor, Obstetric; Pregnancy; Female; Humans; Semantics; Software; Supervised Machine Learning; Image Processing, Computer-Assisted
3.
Article in English | MEDLINE | ID: mdl-37862279

ABSTRACT

Brain tumor segmentation is a fundamental task and existing approaches usually rely on multi-modality magnetic resonance imaging (MRI) images for accurate segmentation. However, the common problem of missing/incomplete modalities in clinical practice would severely degrade their segmentation performance, and existing fusion strategies for incomplete multi-modality brain tumor segmentation are far from ideal. In this work, we propose a novel framework named M²FTrans to explore and fuse cross-modality features through modality-masked fusion transformers under various incomplete multi-modality settings. Considering vanilla self-attention is sensitive to missing tokens/inputs, both learnable fusion tokens and masked self-attention are introduced to stably build long-range dependency across modalities while being more flexible to learn from incomplete modalities. In addition, to avoid being biased toward certain dominant modalities, modality-specific features are further re-weighted through spatial weight attention and channel-wise fusion transformers for feature redundancy reduction and modality re-balancing. In this way, the fusion strategy in M²FTrans is more robust to missing modalities. Experimental results on the widely-used BraTS2018, BraTS2020, and BraTS2021 datasets demonstrate the effectiveness of M²FTrans, outperforming the state-of-the-art approaches with large margins under various incomplete modalities for brain tumor segmentation. Code is available at https://github.com/Jun-Jie-Shi/M2FTrans.
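
A minimal sketch of masked self-attention over modality tokens, illustrating how missing modalities can be excluded from the attention computation; this is an assumed simplification of the idea, not the M²FTrans code, and the token layout is hypothetical.

```python
import torch
import torch.nn.functional as F

def masked_self_attention(q, k, v, present):
    """
    q, k, v : (B, N, D) tokens, one token (or token group) per modality
    present : (B, N) bool, False where the modality is missing
    Missing tokens receive -inf attention logits so the fusion never
    attends to them, keeping the output stable under dropped inputs.
    """
    d = q.shape[-1]
    logits = q @ k.transpose(-2, -1) / d ** 0.5                  # (B, N, N)
    logits = logits.masked_fill(~present.unsqueeze(1), float("-inf"))
    attn = F.softmax(logits, dim=-1)
    return attn @ v

B, N, D = 2, 4, 8
q, k, v = torch.randn(B, N, D), torch.randn(B, N, D), torch.randn(B, N, D)
present = torch.tensor([[True, True, False, True], [True, False, False, True]])
print(masked_self_attention(q, k, v, present).shape)  # torch.Size([2, 4, 8])
```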

4.
Acta Cir Bras ; 38: e382423, 2023.
Article in English | MEDLINE | ID: mdl-37610964

ABSTRACT

PURPOSE: To investigate the putative wound-healing mechanism of a chitosan-based bisacurone gel against second-degree burn wounds in rats. METHODS: A second-degree burn wound was induced with an open flame using mixed fuel (2 mL, 20 seconds) in Sprague Dawley rats (male, 180-220 g, n = 15 each), followed by topical treatment with either vehicle control (white petroleum gel, 1%), silver sulfadiazine (1%), or bisacurone gel (2.5, 5, or 10%) for 20 days. Wound contraction rate and paw withdrawal threshold were monitored on various days. Oxidative stress markers (superoxide dismutase, glutathione, malondialdehyde, and nitric oxide), pro-inflammatory cytokines (tumour necrosis factor-alpha and interleukins, by enzyme-linked immunosorbent assay), growth factor levels (transforming growth factor-β and vascular endothelial growth factor C, by real-time polymerase chain reaction and Western blot assay), and histology of wound skin were assessed at the end of the study. RESULTS: Bisacurone gel showed 98.72% drug release with a viscosity of 420.90-442.70 cps. Bisacurone gel (5 and 10%) significantly (p < 0.05) improved wound contraction rate and paw withdrawal threshold. Bisacurone gel attenuated oxidative stress, pro-inflammatory cytokines, and water content. It also enhanced angiogenesis (hydroxyproline and growth factors) and granulation in wound tissue compared to the vehicle control. CONCLUSIONS: These findings suggest that bisacurone gel is a potential candidate for treating burn wounds via its anti-inflammatory, antioxidant, and angiogenic properties.


Subject(s)
Antioxidants; Vascular Endothelial Growth Factor C; Male; Rats; Animals; Antioxidants/pharmacology; Rats, Sprague-Dawley; Cytokines; Anti-Inflammatory Agents/pharmacology
5.
IEEE J Biomed Health Inform ; 27(10): 4890-4901, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37523274

ABSTRACT

Weakly supervised learning, which frees deep learning from highly labor-intensive pixel-wise annotations, has gained great attention, especially for medical image segmentation. With only image-level labels, pixel-wise segmentation/localization is usually achieved based on class activation maps (CAMs) containing the most discriminative regions. One common consequence of CAM-based approaches is incomplete foreground segmentation, i.e., under-segmentation/false negatives. Meanwhile, with relatively limited medical imaging data, class-irrelevant tissues can hardly be suppressed during classification, resulting in incorrect background identification, i.e., over-segmentation/false positives. Both issues stem from the loose-constraint nature of image-level labels, which penalize the entire image space; thus, how to develop pixel-wise constraints from image-level labels is key to performance improvement, yet this remains under-explored. In this paper, based on unsupervised clustering, we propose a new paradigm called cluster-re-supervision to evaluate the contribution of each pixel in CAMs to the final classification and thus generate pixel-wise supervision (i.e., clustering maps) for CAM refinement, reducing both over- and under-segmentation. Furthermore, based on self-supervised learning, an inter-modality image reconstruction module, together with random masking, is designed to complement local information in feature learning, which helps stabilize clustering. Experimental results on two popular public datasets demonstrate the superior performance of the proposed weakly supervised framework for medical image segmentation. More importantly, cluster-re-supervision is independent of specific tasks and highly extendable to other applications.
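
The sketch below illustrates the cluster-re-supervision idea under simple assumptions (k-means on per-pixel features, a quantile rule for foreground clusters); it is not the paper's implementation, and the function name and parameters are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_resupervise(features, cam, n_clusters=4, fg_quantile=0.8):
    """
    features: (H, W, D) per-pixel feature vectors from the classifier
    cam:      (H, W) class activation map in [0, 1]
    Clusters pixels by appearance, then marks clusters with a high mean CAM
    response as foreground; the resulting clustering map can re-supervise
    under-/over-segmented CAMs.
    """
    H, W, D = features.shape
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        features.reshape(-1, D)).reshape(H, W)
    cluster_scores = np.array([cam[labels == c].mean() for c in range(n_clusters)])
    fg_clusters = cluster_scores >= np.quantile(cluster_scores, fg_quantile)
    return fg_clusters[labels].astype(np.float32)

feat = np.random.rand(32, 32, 8)
cam = np.random.rand(32, 32)
print(cluster_resupervise(feat, cam).shape)  # (32, 32)
```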

6.
IEEE J Biomed Health Inform ; 27(8): 4006-4017, 2023 08.
Article in English | MEDLINE | ID: mdl-37163397

ABSTRACT

Vessel segmentation is crucial in many medical image applications, such as detecting coronary stenoses, retinal vessel diseases and brain aneurysms. However, achieving high pixel-wise accuracy, complete topology structure and robustness to various contrast variations are critical and challenging, and most existing methods focus only on achieving one or two of these aspects. In this paper, we present a novel approach, the affinity feature strengthening network (AFN), which jointly models geometry and refines pixel-wise segmentation features using a contrast-insensitive, multiscale affinity approach. Specifically, we compute a multiscale affinity field for each pixel, capturing its semantic relationships with neighboring pixels in the predicted mask image. This field represents the local geometry of vessel segments of different sizes, allowing us to learn spatial- and scale-aware adaptive weights to strengthen vessel features. We evaluate our AFN on four different types of vascular datasets: X-ray angiography coronary vessel dataset (XCAD), portal vein dataset (PV), digital subtraction angiography cerebrovascular vessel dataset (DSA) and retinal vessel dataset (DRIVE). Extensive experimental results demonstrate that our AFN outperforms the state-of-the-art methods in terms of both higher accuracy and topological metrics, while also being more robust to various contrast changes.
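
To illustrate what a multiscale affinity field can look like, here is a hedged sketch that compares each pixel of a predicted mask with its eight neighbours at several dilations; the exact affinity definition and scales in AFN may differ, so treat the shapes and defaults as assumptions.

```python
import torch
import torch.nn.functional as F

def multiscale_affinity(mask, dilations=(1, 2, 4)):
    """
    mask: (B, 1, H, W) predicted vessel probability map.
    For each scale, compares every pixel with its 8 neighbours at the given
    dilation, producing an affinity field that encodes local vessel geometry.
    Returns (B, 8 * len(dilations), H, W).
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    H, W = mask.shape[-2:]
    fields = []
    for d in dilations:
        padded = F.pad(mask, (d, d, d, d), mode="replicate")
        for dy, dx in offsets:
            neighbour = padded[..., d + dy * d: d + dy * d + H, d + dx * d: d + dx * d + W]
            fields.append(1.0 - (mask - neighbour).abs())  # close to 1 when the two pixels agree
    return torch.cat(fields, dim=1)

print(multiscale_affinity(torch.rand(1, 1, 16, 16)).shape)  # torch.Size([1, 24, 16, 16])
```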


Subject(s)
Algorithms; Retinal Diseases; Humans; Retinal Vessels/diagnostic imaging; Retina; Coronary Vessels/diagnostic imaging; Image Processing, Computer-Assisted/methods
7.
IEEE Trans Med Imaging ; 42(8): 2325-2337, 2023 08.
Article in English | MEDLINE | ID: mdl-37027664

ABSTRACT

Vision transformers have recently set off a new wave in the field of medical image analysis due to their remarkable performance on various computer vision tasks. However, recent hybrid-/transformer-based approaches mainly focus on the benefits of transformers in capturing long-range dependency while ignoring their daunting computational complexity, high training costs, and redundant dependency. In this paper, we propose to apply adaptive pruning to transformers for medical image segmentation and present a lightweight and effective hybrid network, APFormer. To the best of our knowledge, this is the first work on transformer pruning for medical image analysis tasks. The key features of APFormer are self-regularized self-attention (SSA) to improve the convergence of dependency establishment, Gaussian-prior relative position embedding (GRPE) to foster the learning of position information, and adaptive pruning to eliminate redundant computations and perception information. Specifically, SSA and GRPE use the well-converged dependency distribution and the Gaussian heatmap distribution, respectively, as prior knowledge for self-attention and position embedding to ease the training of transformers and lay a solid foundation for the subsequent pruning operation. Then, adaptive transformer pruning, both query-wise and dependency-wise, is performed by adjusting the gate control parameters for both complexity reduction and performance improvement. Extensive experiments on two widely-used datasets demonstrate the prominent segmentation performance of APFormer against the state-of-the-art methods with far fewer parameters and lower GFLOPs. More importantly, we show through ablation studies that adaptive pruning can work as a plug-and-play module for performance improvement on other hybrid-/transformer-based methods. Code is available at https://github.com/xianlin7/APFormer.
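
The following is an illustrative sketch of query-wise and dependency-wise gating in self-attention, the mechanism that adaptive pruning can hook into; it is not the official APFormer module, and the gate parameterization is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedPrunedAttention(nn.Module):
    """
    Illustrative (not the official APFormer code): self-attention whose queries
    and pairwise dependencies are modulated by learnable sigmoid gates; gates
    driven close to zero can be dropped at inference to save computation.
    """
    def __init__(self, n_tokens, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.query_gate = nn.Parameter(torch.zeros(n_tokens))           # query-wise gates
        self.dep_gate = nn.Parameter(torch.zeros(n_tokens, n_tokens))   # dependency-wise gates
        self.scale = dim ** -0.5

    def forward(self, x):                      # x: (B, N, D)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = (q @ k.transpose(-2, -1)) * self.scale
        attn = F.softmax(logits, dim=-1) * torch.sigmoid(self.dep_gate)
        out = attn @ v
        return out * torch.sigmoid(self.query_gate).unsqueeze(-1)

m = GatedPrunedAttention(n_tokens=16, dim=32)
print(m(torch.randn(2, 16, 32)).shape)  # torch.Size([2, 16, 32])
```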


Subject(s)
Diagnostic Imaging; Normal Distribution
8.
IEEE J Biomed Health Inform ; 27(7): 3501-3512, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37053058

ABSTRACT

OBJECTIVE: Transformers, born to remedy the inadequate receptive fields of CNNs, have drawn explosive attention recently. However, the daunting computational complexity of global representation learning, together with rigid window partitioning, hinders their deployment in medical image segmentation. This work aims to address the above two issues in transformers for better medical image segmentation. METHODS: We propose a boundary-aware lightweight transformer (BATFormer) that can build cross-scale global interaction with lower computational complexity and generate windows flexibly under the guidance of entropy. Specifically, to fully explore the benefits of transformers in long-range dependency establishment, a cross-scale global transformer (CGT) module is introduced to jointly utilize multiple small-scale feature maps for richer global features with lower computational complexity. Given the importance of shape modeling in medical image segmentation, a boundary-aware local transformer (BLT) module is constructed. Different from rigid window partitioning in vanilla transformers which would produce boundary distortion, BLT adopts an adaptive window partitioning scheme under the guidance of entropy for both computational complexity reduction and shape preservation. RESULTS: BATFormer achieves the best performance in Dice of 92.84%, 91.97%, 90.26%, and 96.30% for the average, right ventricle, myocardium, and left ventricle respectively on the ACDC dataset and the best performance in Dice, IoU, and ACC of 90.76%, 84.64%, and 96.76% respectively on the ISIC 2018 dataset. More importantly, BATFormer requires the least amount of model parameters and the lowest computational complexity compared to the state-of-the-art approaches. CONCLUSION AND SIGNIFICANCE: Our results demonstrate the necessity of developing customized transformers for efficient and better medical image segmentation. We believe the design of BATFormer is inspiring and extendable to other applications/frameworks.
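
As a rough illustration of entropy guidance, the sketch below scores non-overlapping windows by mean pixel entropy and keeps the highest-entropy ones; BATFormer's actual adaptive window partitioning is more elaborate, so this is only an assumed simplification with illustrative defaults.

```python
import torch
import torch.nn.functional as F

def entropy_guided_windows(prob, window=8, keep_ratio=0.25):
    """
    prob: (B, C, H, W) softmax segmentation probabilities.
    Scores each non-overlapping window by its mean pixel entropy and returns
    the indices of the highest-entropy windows, i.e. the ones most likely to
    contain object boundaries and thus worth refining with local attention.
    """
    eps = 1e-8
    ent = -(prob * (prob + eps).log()).sum(dim=1, keepdim=True)        # (B, 1, H, W)
    win_ent = F.avg_pool2d(ent, kernel_size=window, stride=window)     # (B, 1, H/w, W/w)
    B, _, Hw, Ww = win_ent.shape
    flat = win_ent.view(B, -1)
    k = max(1, int(keep_ratio * Hw * Ww))
    return flat.topk(k, dim=1).indices                                 # window ids per image

prob = torch.softmax(torch.randn(2, 4, 64, 64), dim=1)
print(entropy_guided_windows(prob).shape)  # torch.Size([2, 16])
```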


Subject(s)
Electric Power Supplies; Heart Ventricles; Humans; Entropy; Image Processing, Computer-Assisted
9.
IEEE Trans Med Imaging ; 42(7): 1955-1968, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37015653

ABSTRACT

The purpose of federated learning is to enable multiple clients to jointly train a machine learning model without sharing data. However, the existing methods for training an image segmentation model have been based on an unrealistic assumption that the training set for each local client is annotated in a similar fashion and thus follows the same image supervision level. To relax this assumption, in this work, we propose a label-agnostic unified federated learning framework, named FedMix, for medical image segmentation based on mixed image labels. In FedMix, each client updates the federated model by integrating and effectively making use of all available labeled data ranging from strong pixel-level labels, weak bounding box labels, to weakest image-level class labels. Based on these local models, we further propose an adaptive weight assignment procedure across local clients, where each client learns an aggregation weight during the global model update. Compared to the existing methods, FedMix not only breaks through the constraint of a single level of image supervision but also can dynamically adjust the aggregation weight of each local client, achieving rich yet discriminative feature representations. Experimental results on multiple publicly-available datasets validate that the proposed FedMix outperforms the state-of-the-art methods by a large margin. In addition, we demonstrate through experiments that FedMix is extendable to multi-class medical image segmentation and much more feasible in clinical scenarios. The code is available at: https://github.com/Jwicaksana/FedMix.
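
A minimal sketch of weighted aggregation of client models, the step where FedMix's learned per-client weights would enter; the weight values and state-dict layout here are purely illustrative.

```python
import torch

def aggregate(models, weights):
    """
    models:  list of state_dicts from local clients
    weights: per-client non-negative aggregation weights (learned locally in
             FedMix; here arbitrary values, normalized to sum to 1)
    Returns the weighted-average global state_dict.
    """
    w = torch.tensor(weights, dtype=torch.float32)
    w = w / w.sum()
    global_state = {}
    for key in models[0]:
        global_state[key] = sum(w[i] * models[i][key].float() for i in range(len(models)))
    return global_state

# toy usage with two tiny "models"
m1 = {"layer.weight": torch.ones(2, 2)}
m2 = {"layer.weight": torch.zeros(2, 2)}
print(aggregate([m1, m2], [0.75, 0.25])["layer.weight"])
```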


Subject(s)
Machine Learning; Supervised Machine Learning; Humans
10.
Burns ; 49(3): 678-687, 2023 05.
Article in English | MEDLINE | ID: mdl-35623933

ABSTRACT

BACKGROUND: Research on coagulation dysfunction following burns is controversial. This study aimed to describe the coagulation changes in severe burn patients by examining coagulation parameters. METHODS: Patients with third-degree total body surface area (TBSA) burns of ≥30% were enrolled between 2017 and 2020. Platelet (PLT) count and coagulation indexes (including APTT, INR, FIB, DD, and AT III) were measured at admission and once weekly for 8 weeks, and statistical analysis was performed. The patient medical profiles were reviewed to extract demographic and clinical data, including TBSA, third-degree TBSA, and inhalation injury. The total intravenous fluids and transfusions of crystalloids, fresh frozen plasma (FFP), and red blood cells (RBC) were calculated during the forty-eight-hour period. The number of sepsis cases was recorded. RESULTS: We enrolled 104 patients, and while the overall coagulation trend fluctuated, inflection points appeared around one week and demonstrated hypercoagulability. INR was significantly higher in the non-survival group than in the survival group from admission to three weeks after burn (all p<0.01). From post-injury week 1 to post-injury week 3, the APTT in the non-survival group was greater than in the survival group, but the PLT count in the non-survival group was lower than that in the survival group (all p<0.05). At two and three weeks after burns, the FIB levels in the non-survival group were significantly lower than those of the survival group (both p<0.01). The prevalence of inhalation injury and the proportion of sepsis cases were significantly higher in the non-survival group than in the survival group (p<0.05 and p<0.001, respectively). At the time of death, APTT, INR, and FDP levels were significantly higher in the non-survival group than in the survival group, and FIB, AT III, and PLT were significantly lower than in the survival group (all p<0.01). On the day of death, nine of the 12 deceased patients had disseminated intravascular coagulation (DIC). CONCLUSIONS: Coagulation dysfunction was most prominent in severe burn patients 1 week after injury and presented as hypercoagulability. Large-area burn injury, large amounts of fluid resuscitation, inhalation injury, and sepsis may all contribute to coagulation dysfunction, which can further develop into DIC and even death in severe burn patients.


Subject(s)
Blood Coagulation Disorders; Burns; Sepsis; Thrombophilia; Humans; Retrospective Studies; Cause of Death; Blood Coagulation Disorders/epidemiology; Blood Coagulation Disorders/etiology; Sepsis/epidemiology; Sepsis/etiology
11.
IEEE J Biomed Health Inform ; 26(11): 5596-5607, 2022 11.
Article in English | MEDLINE | ID: mdl-35984796

ABSTRACT

The performance of deep networks for medical image analysis is often constrained by limited medical data, which is privacy-sensitive. Federated learning (FL) alleviates the constraint by allowing different institutions to collaboratively train a federated model without sharing data. However, the federated model is often suboptimal with respect to the characteristics of each client's local data. Instead of training a single global model, we propose Customized FL (CusFL), for which each client iteratively trains a client-specific/private model based on a federated global model aggregated from all private models trained in the immediate previous iteration. Two overarching strategies employed by CusFL lead to its superior performance: 1) the federated model is mainly for feature alignment and thus only consists of feature extraction layers; 2) the federated feature extractor is used to guide the training of each private model. In that way, CusFL allows each client to selectively learn useful knowledge from the federated model to improve its personalized model. We evaluated CusFL on multi-source medical image datasets for the identification of clinically significant prostate cancer and the classification of skin lesions.
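
The sketch below shows one plausible client-side training step in the spirit of CusFL: a frozen federated feature extractor guides a private model via a feature-alignment term; the architecture, loss weight, and function names are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def client_step(private_model, federated_extractor, x, y, optimizer, lam=0.1):
    """One local update: task loss plus alignment to the federated extractor's features."""
    federated_extractor.eval()
    feats = private_model.extractor(x)
    logits = private_model.head(feats)
    with torch.no_grad():
        ref_feats = federated_extractor(x)           # guidance from the aggregated model
    loss = F.cross_entropy(logits, y) + lam * F.mse_loss(feats, ref_feats)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

class Private(nn.Module):
    def __init__(self):
        super().__init__()
        self.extractor = nn.Sequential(nn.Flatten(), nn.Linear(16, 8), nn.ReLU())
        self.head = nn.Linear(8, 2)

private = Private()
fed_extractor = nn.Sequential(nn.Flatten(), nn.Linear(16, 8), nn.ReLU())
opt = torch.optim.SGD(private.parameters(), lr=0.01)
x, y = torch.randn(4, 1, 4, 4), torch.randint(0, 2, (4,))
print(client_step(private, fed_extractor, x, y, opt))
```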


Subject(s)
Deep Learning; Skin Diseases; Male; Humans; Privacy
12.
IEEE J Biomed Health Inform ; 26(10): 5165-5176, 2022 10.
Article in English | MEDLINE | ID: mdl-35849684

ABSTRACT

Cerebral ventricles are among the prominent structures in the brain, and their segmentation can provide rich information for the diagnosis of brain-related diseases. Unfortunately, cerebral ventricle segmentation in complex clinical cases, such as when ventricles coexist with other lesions/hemorrhages, remains unexplored. In this paper, we, for the first time, focus on cerebral ventricle segmentation in the presence of intra-ventricular hemorrhage (IVH). To overcome the occlusions formed by IVH, we propose a symmetry-aware deep learning approach inspired by contrastive self-supervised learning. Specifically, for each slice, we jointly employ the raw slice and the horizontally flipped slice as inputs and penalize the consistency loss between the corresponding segmentation maps in addition to their segmentation losses. In this way, the symmetry of the cerebral ventricles is enforced to eliminate the occlusions brought by IVH. Extensive experimental results show that the proposed symmetry-aware deep learning approach achieves consistent performance improvements for ventricle segmentation in both normal (i.e., without IVH) and challenging (i.e., with IVH) cases. Through evaluation on multiple backbone networks, we demonstrate that the performance improvements of the proposed approach are architecture-independent. Moreover, we re-design an end-to-end version of symmetry-aware deep learning, making it more extendable to other approaches for brain-related analysis.
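
A minimal sketch of the symmetry-aware training objective described above: segment the raw and horizontally flipped slices and add a consistency term between the aligned predictions; the loss weighting and the stand-in network are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def symmetry_aware_loss(model, image, target, alpha=0.5):
    """
    image:  (B, 1, H, W) CT slice
    target: (B, H, W) integer ventricle mask
    Segments both the raw slice and its horizontally flipped copy, penalizes
    their own segmentation losses plus a consistency term that enforces the
    left-right symmetry of the cerebral ventricles.
    """
    flipped = torch.flip(image, dims=[-1])
    out_raw = model(image)
    out_flip = model(flipped)
    seg_loss = F.cross_entropy(out_raw, target) + \
               F.cross_entropy(out_flip, torch.flip(target, dims=[-1]))
    consistency = F.mse_loss(out_raw.softmax(1),
                             torch.flip(out_flip, dims=[-1]).softmax(1))
    return seg_loss + alpha * consistency

model = nn.Conv2d(1, 2, kernel_size=3, padding=1)   # stand-in segmentation network
img = torch.randn(2, 1, 32, 32)
tgt = torch.randint(0, 2, (2, 32, 32))
print(symmetry_aware_loss(model, img, tgt).item())
```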


Subject(s)
Deep Learning; Brain; Cerebral Hemorrhage/diagnostic imaging; Cerebral Ventricles/diagnostic imaging; Humans; Image Processing, Computer-Assisted
13.
Animals (Basel) ; 12(12)2022 Jun 09.
Article in English | MEDLINE | ID: mdl-35739841

ABSTRACT

Porcine Reproductive and Respiratory Syndrome (PRRS) is one of the serious infectious diseases threatening the swine industry. Increasing evidence shows that gut microbiota plays an important role in regulating host immune responses to PRRS virus (PRRSV). The aim of this study was to investigate gut microbiota differences between PRRSV-resistant and PRRSV-susceptible pigs derived from a crossbred population of Tongcheng and Large White pigs. PRRSV infection induced an increase in the abundance and diversity of gut microbiota. Correlation analysis showed that 36 genera were correlated with viral load or weight gain after PRRSV infection. Prevotellaceae-NK3B31-group, Christensenellaceae-R7-group, and Parabacteroides were highly correlated with both viral load and weight gain. Notably, the diversity and abundance of beneficial bacteria such as Prevotellaceae-NK3B31-group were high in resistant pigs, whereas the diversity and abundance of pathogenic bacteria such as Campylobacter and Desulfovibrio were high in susceptible pigs. Gut microbiota were significantly associated with immune function and growth performance, suggesting that these genera might be related to viremia, clinical symptoms, and disease resistance. Altogether, this study revealed the correlation of gut microbiota with PRRSV infection and suggests that gut microbiota interventions may provide effective prevention against PRRSV infection.

14.
Med Phys ; 49(11): 7179-7192, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35713606

ABSTRACT

BACKGROUND: Skull fracture, as a common traumatic brain injury, can lead to multiple complications including bleeding, leaking of cerebrospinal fluid, infection, and seizures. Automatic skull fracture detection (SFD) is of great importance, especially in emergency medicine. PURPOSE: Existing algorithms for SFD, developed based on hand-crafted features, suffer from low detection accuracy due to poor generalizability to unseen samples. Deploying deep detectors designed for natural images, such as the Faster Region-based Convolutional Neural Network (R-CNN), for SFD can be helpful, but such detectors are highly redundant and produce non-negligible false detections due to interference from cranial sutures and the skull base. Therefore, we, for the first time, propose an anchor-efficient anti-interference deep learning framework named Fracture R-CNN for accurate SFD with low computational cost. METHODS: The proposed Fracture R-CNN is developed by incorporating the prior knowledge utilized in clinical diagnosis into the original Faster R-CNN. Specifically, based on the distributions of skull fractures, we first propose an adaptive anchoring region proposal network (AA-RPN) to generate proposals for diverse-scale fractures with low computational complexity. Then, based on the prior knowledge that cranial sutures exist at the junctions of bones and usually contain sclerotic margins, we design an anti-interference head (A-Head) network to eliminate cranial suture interference for better SFD. In addition, to further enhance the anti-interference ability of the proposed A-Head, a difficulty-balanced weighted loss function is proposed to emphasize distinguishing the interference areas from the skull base and the cranial sutures during training. RESULTS: Experimental results demonstrate that the proposed Fracture R-CNN outperforms the current state-of-the-art (SOTA) deep detectors for SFD with a higher recall and fewer false detections. Compared to Faster R-CNN, the proposed Fracture R-CNN improves the average precision (AP) by 11.74% and the free-response receiver operating characteristic (FROC) score by 11.08%. Through validation on various backbones, we further demonstrate the architecture independence of Fracture R-CNN, making it extendable to other detection applications. CONCLUSIONS: As a customized deep learning-based framework for SFD, Fracture R-CNN can effectively overcome the unique challenges of SFD with less computational cost, leading to better detection performance compared to the SOTA deep detectors. Moreover, we believe the prior knowledge explored for Fracture R-CNN will shed new light on future deep learning approaches for SFD.
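
As an illustration of the difficulty-balanced idea, here is a hedged sketch that up-weights the classification loss of proposals from interference areas (cranial sutures, skull base); the exact weighting scheme in Fracture R-CNN may differ, and the mask and weight value are assumptions.

```python
import torch
import torch.nn.functional as F

def difficulty_balanced_loss(logits, targets, difficulty_weight=2.0, hard_mask=None):
    """
    logits:    (N, 2) fracture / non-fracture scores for region proposals
    targets:   (N,) class labels
    hard_mask: (N,) bool marking proposals from interference areas
               (e.g. cranial sutures, skull base); these get up-weighted.
    """
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    if hard_mask is not None:
        weights = torch.where(hard_mask,
                              torch.full_like(per_sample, difficulty_weight),
                              torch.ones_like(per_sample))
        per_sample = per_sample * weights
    return per_sample.mean()

logits = torch.randn(6, 2)
targets = torch.randint(0, 2, (6,))
hard = torch.tensor([True, False, False, True, False, False])
print(difficulty_balanced_loss(logits, targets, hard_mask=hard).item())
```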


Subject(s)
Skull Fractures; Humans; Skull Fractures/diagnostic imaging; Neural Networks, Computer; Tomography, X-Ray Computed
15.
IEEE J Biomed Health Inform ; 26(6): 2615-2626, 2022 06.
Article in English | MEDLINE | ID: mdl-34986106

ABSTRACT

Perihematomal edema (PHE) volume, surrounding spontaneous intracerebral hemorrhage (SICH), is an important biomarker for the presence of SICH-associated diseases. However, due to the irregular shapes and extremely low contrast of PHE on CT images, manually annotating PHE pixel-wise is time-consuming and labour-intensive even for experienced experts, which makes it almost infeasible to deploy current supervised deep learning approaches for automated PHE segmentation. How to develop annotation-efficient deep learning to achieve accurate PHE segmentation is an open problem. In this paper, we, for the first time, propose a cross-task supervised framework by introducing slice-level PHE labels and pixel-wise SICH annotations, which are more accessible in clinical scenarios than pixel-wise PHE annotations. Specifically, we first train a multi-level classifier based on slice-level PHE labels to produce high-quality class activation maps (CAMs) as pseudo PHE annotations. Then, we train a deep learning model to produce accurate PHE segmentation by iteratively refining the pseudo annotations via an uncertainty-aware corrective training strategy for noise removal and a distance-aware loss for background compression. Experimental results demonstrate that the proposed framework achieves performance comparable to fully supervised methods on PHE segmentation and largely improves the baseline performance where only pseudo PHE labels are used for training. We believe the findings from this study on using cross-task supervision for annotation-efficient deep learning can be applied to other medical imaging applications.
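
The sketch below illustrates one simple form of uncertainty-aware corrective training: pixels where the current model is not confident are dropped from the pseudo-label loss; the confidence threshold and function name are assumptions rather than the paper's exact strategy.

```python
import torch
import torch.nn.functional as F

def uncertainty_aware_loss(logits, pseudo_labels, threshold=0.7):
    """
    logits:        (B, 2, H, W) current model predictions
    pseudo_labels: (B, H, W) noisy PHE pseudo-labels derived from CAMs
    Pixels where the model's confidence falls below the threshold are treated
    as unreliable and excluded from the loss.
    """
    prob = logits.softmax(dim=1)
    confidence = prob.max(dim=1).values                      # (B, H, W)
    keep = (confidence >= threshold).float()
    per_pixel = F.cross_entropy(logits, pseudo_labels, reduction="none")
    return (per_pixel * keep).sum() / keep.sum().clamp_min(1)

logits = torch.randn(2, 2, 16, 16)
pseudo = torch.randint(0, 2, (2, 16, 16))
print(uncertainty_aware_loss(logits, pseudo).item())
```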


Subject(s)
Deep Learning; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed; Uncertainty
16.
Burns ; 48(5): 1213-1220, 2022 08.
Article in English | MEDLINE | ID: mdl-34903409

ABSTRACT

Burns are common traumatic injuries with considerable morbidity and mortality. Post-burn intestinal injuries are closely related to oxidative stress and inflammatory response. The aim of the current study was to investigate the combined effect of sodium butyrate (NaB) and probiotics (PROB) on severe burn-induced oxidative stress and inflammatory response and the underlying mechanism of action. Sprague-Dawley rats with severe burns were treated with NaB with or without PROB. Pathomorphology of skin and small intestine tissue was observed using hematoxylin and eosin staining, and severe burn-induced apoptosis in small intestine tissue was examined via terminal deoxynucleotidyl transferase-mediated dUTP-biotin nick end labeling assay. The release of inflammation-related factors was quantified using ELISA kits and qRT-PCR, and levels of oxidative stress markers were evaluated using biochemical assays. Furthermore, mitochondrial morphological changes in small intestinal epithelial cells were observed using transmission electron microscopy. In addition, the underlying mechanism associated with the combined effect of NaB and PROB on severe burn-induced oxidative stress and inflammatory response was investigated using western blotting. The combination of NaB and PROB exerted protective effects against severe burn-induced intestinal barrier injury by reducing the levels of diamine oxidase and intestinal fatty acid binding protein. Combined NaB and PROB treatment inhibited severe burn-induced oxidative stress by increasing superoxide dismutase levels and decreasing malondialdehyde and myeloperoxidase levels. Severe burn-induced inflammation was suppressed by combined NaB and PROB administration, as demonstrated by the decreased mRNA expression of tumor necrosis factor-α, interleukin-6, interleukin-1β, and high mobility group box-1 in the small intestine. In addition, this study showed that combined NaB and PROB administration increased nuclear factor-erythroid 2-related factor 2 (Nrf2) protein expression and decreased the phosphorylation of nuclear factor (NF)-κB and extracellular signal-regulated kinase 1/2 (ERK 1/2). In conclusion, our findings indicate that combined NaB and PROB treatment may inhibit severe burn-induced inflammation and oxidative stress in the small intestine by regulating HMGB1/NF-κB and ERK1/2/Nrf2 signaling, thereby providing a new therapeutic strategy for intestinal injury induced by severe burns.


Subject(s)
Burns; Probiotics; Animals; Burns/complications; Burns/therapy; Butyric Acid/pharmacology; Butyric Acid/therapeutic use; Inflammation/metabolism; NF-E2-Related Factor 2/genetics; NF-E2-Related Factor 2/metabolism; NF-E2-Related Factor 2/pharmacology; NF-kappa B/metabolism; Oxidative Stress; Probiotics/pharmacology; Probiotics/therapeutic use; Rats; Rats, Sprague-Dawley
17.
Zhongguo Shi Yan Xue Ye Xue Za Zhi ; 29(1): 167-171, 2021 Feb.
Article in Chinese | MEDLINE | ID: mdl-33554814

ABSTRACT

OBJECTIVE: To explore the expression of CD40/CD40L in multiple myeloma (MM) patients and its influence on prognosis. METHODS: Thirty patients with MM treated at Cangzhou People's Hospital from May 2016 to June 2017 were selected as the MM group, and 30 healthy people undergoing physical examination at the same hospital during the same period were selected as the normal group. Serum CD40/CD40L levels in the two groups were detected by flow cytometry, and their correlation with lymphocyte populations, pathological grade, and prognosis of MM patients was analysed. RESULTS: The expression of CD40 in the serum of the MM group was significantly higher than that in the normal group (P<0.05). The expression of CD40L in the serum of the MM group showed no significant difference compared with the normal group (P>0.05). The levels of CD40 and CD40L in the patients before and after chemotherapy showed no difference (P>0.05). The levels of Ts and NK cells in the MM group were lower than those in the normal group (P<0.05). The proportions of total B lymphocytes, Th cells, and Th/Ts showed no significant difference between the two groups (P>0.05). The CD40 level was correlated with the serum total B lymphocyte level in the MM group (r=0.877, P=0.005), and CD40L was correlated with Th cells in the serum of MM patients (r=-0.783, P=0.035). The expression of serum CD40 in patients at stage III-IV was higher than that in patients at stage I-II, while serum CD40L levels in MM patients at different stages showed no significant difference (P>0.05). The survival rate of MM patients with high CD40 expression was lower than that of MM patients with low CD40 expression (χ2=1.639, P=0.201). A high CD40 level was the main factor affecting the prognosis of MM patients (95%CI: 1.156-4.125). CONCLUSION: The increase in CD40 level in MM patients is related to the pathological grade of the patients. Chemotherapy can reduce the level of CD40. An increased CD40 level is an important factor in the poor prognosis of MM patients. The CD40L level is not meaningful for MM treatment or prognosis.


Subject(s)
CD40 Antigens; CD40 Ligand; B-Lymphocytes; Humans; Lymphocyte Subsets; Prognosis
18.
IEEE J Biomed Health Inform ; 25(7): 2615-2628, 2021 07.
Article in English | MEDLINE | ID: mdl-33232246

ABSTRACT

Privacy concerns make it infeasible to construct a large medical image dataset by fusing small ones from different sources/institutions. Therefore, federated learning (FL) becomes a promising technique to learn from multi-source decentralized data with privacy preservation. However, the cross-client variation problem in medical image data would be the bottleneck in practice. In this paper, we propose a variation-aware federated learning (VAFL) framework, where the variations among clients are minimized by transforming the images of all clients onto a common image space. We first select one client with the lowest data complexity to define the target image space and synthesize a collection of images through a privacy-preserving generative adversarial network, called PPWGAN-GP. Then, a subset of those synthesized images, which effectively capture the characteristics of the raw images and are sufficiently distinct from any raw image, is automatically selected for sharing with other clients. For each client, a modified CycleGAN is applied to translate its raw images to the target image space defined by the shared synthesized images. In this way, the cross-client variation problem is addressed with privacy preservation. We apply the framework for automated classification of clinically significant prostate cancer and evaluate it using multi-source decentralized apparent diffusion coefficient (ADC) image data. Experimental results demonstrate that the proposed VAFL framework stably outperforms the current horizontal FL framework. As VAFL is independent of deep learning architectures for classification, we believe that the proposed framework is widely applicable to other medical image classification tasks.
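
A hedged sketch of the selection step: keep only synthesized images that are sufficiently far from every raw image before sharing them; the distance metric, threshold, and limits are assumptions, not the VAFL implementation.

```python
import numpy as np

def select_shareable(synthetic, raw, min_distance=10.0, max_share=100):
    """
    synthetic: (S, H, W) images from the privacy-preserving GAN
    raw:       (R, H, W) the client's real images
    Keeps only synthesized images whose L2 distance to every raw image exceeds
    min_distance, so no real patient image is effectively leaked, up to max_share.
    """
    S = synthetic.reshape(len(synthetic), -1)
    R = raw.reshape(len(raw), -1)
    # pairwise L2 distances between synthetic and raw images
    d = np.sqrt(((S[:, None, :] - R[None, :, :]) ** 2).sum(-1))
    keep = d.min(axis=1) >= min_distance
    return synthetic[keep][:max_share]

syn = np.random.rand(20, 8, 8)
real = np.random.rand(5, 8, 8)
print(select_shareable(syn, real, min_distance=0.5).shape)
```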


Subject(s)
Privacy; Prostatic Neoplasms; Humans; Male
19.
IEEE Trans Med Imaging ; 39(6): 2176-2189, 2020 06.
Article in English | MEDLINE | ID: mdl-31944936

ABSTRACT

Segmenting gland instances in histology images is highly challenging as it requires not only detecting glands from a complex background but also separating each individual gland instance with accurate boundary detection. However, due to the boundary uncertainty problem in manual annotations, pixel-to-pixel matching based loss functions are too restrictive for simultaneous gland detection and boundary detection. State-of-the-art approaches adopted multi-model schemes, resulting in unnecessarily high model complexity and difficulties in the training process. In this paper, we propose to use one single deep learning model for accurate gland instance segmentation. To address the boundary uncertainty problem, instead of pixel-to-pixel matching, we propose a segment-level shape similarity measure to calculate the curve similarity between each annotated boundary segment and the corresponding detected boundary segment within a fixed searching range. As the segment-level measure allows location variations within a fixed range for shape similarity calculation, it has better tolerance to boundary uncertainty and is more effective for boundary detection. Furthermore, by adjusting the radius of the searching range, the segment-level shape similarity measure is able to deal with different levels of boundary uncertainty. Therefore, in our framework, images of different scales are down-sampled and integrated to provide both global and local contextual information for training, which is helpful in segmenting gland instances of different sizes. To reduce the variations of multi-scale training images, by referring to adversarial domain adaptation, we propose a pseudo domain adaptation framework for feature alignment. By constructing loss functions based on the segment-level shape similarity measure, combining with the adversarial loss function, the proposed shape-aware adversarial learning framework enables one single deep learning model for gland instance segmentation. Experimental results on the 2015 MICCAI Gland Challenge dataset demonstrate that the proposed framework achieves state-of-the-art performance with one single deep learning model. As the boundary uncertainty problem widely exists in medical image segmentation, it is broadly applicable to other applications.
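
The measure below is a simpler, related tolerant boundary score (not the paper's segment-level curve similarity): a boundary pixel counts as matched if the other boundary passes within a search radius, which captures the same tolerance to boundary uncertainty; the radius and inputs are illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def tolerant_boundary_score(pred_boundary, gt_boundary, radius=3):
    """
    pred_boundary, gt_boundary: (H, W) binary boundary maps.
    A boundary pixel counts as matched if the other boundary passes within
    `radius` pixels; the radius plays a role analogous to the searching range
    in the segment-level shape similarity measure.
    """
    if gt_boundary.sum() == 0 or pred_boundary.sum() == 0:
        return 0.0
    dist_to_gt = distance_transform_edt(gt_boundary == 0)     # distance to nearest GT boundary pixel
    dist_to_pred = distance_transform_edt(pred_boundary == 0) # distance to nearest predicted boundary pixel
    precision = (dist_to_gt[pred_boundary > 0] <= radius).mean()
    recall = (dist_to_pred[gt_boundary > 0] <= radius).mean()
    return 2 * precision * recall / max(precision + recall, 1e-8)

gt = np.zeros((32, 32)); gt[10, 5:25] = 1
pred = np.zeros((32, 32)); pred[12, 5:25] = 1
print(tolerant_boundary_score(pred, gt))  # matches within radius 3 -> 1.0
```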


Subject(s)
Deep Learning; Histological Techniques
20.
IEEE J Biomed Health Inform ; 23(4): 1427-1436, 2019 07.
Article in English | MEDLINE | ID: mdl-30281503

ABSTRACT

Automatic retinal vessel segmentation is a fundamental step in the diagnosis of eye-related diseases, in which both thick vessels and thin vessels are important features for symptom detection. All existing deep learning models attempt to segment both types of vessels simultaneously by using a unified pixel-wise loss that treats all vessel pixels with equal importance. Due to the highly imbalanced ratio between thick vessels and thin vessels (namely the majority of vessel pixels belong to thick vessels), the pixel-wise loss would be dominantly guided by thick vessels and relatively little influence comes from thin vessels, often leading to low segmentation accuracy for thin vessels. To address the imbalance problem, in this paper, we explore to segment thick vessels and thin vessels separately by proposing a three-stage deep learning model. The vessel segmentation task is divided into three stages, namely thick vessel segmentation, thin vessel segmentation, and vessel fusion. As better discriminative features could be learned for separate segmentation of thick vessels and thin vessels, this process minimizes the negative influence caused by their highly imbalanced ratio. The final vessel fusion stage refines the results by further identifying nonvessel pixels and improving the overall vessel thickness consistency. The experiments on public datasets DRIVE, STARE, and CHASE_DB1 clearly demonstrate that the proposed three-stage deep learning model outperforms the current state-of-the-art vessel segmentation methods.
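
A toy sketch of the three-stage decomposition: separate thick- and thin-vessel branches followed by a fusion stage; the layers here are stand-ins for the paper's networks, and all shapes are assumptions.

```python
import torch
import torch.nn as nn

class ThreeStageVesselNet(nn.Module):
    """
    Toy illustration of the three-stage idea (not the published architecture):
    separate branches predict thick and thin vessels, and a small fusion head
    refines their outputs into the final vessel map.
    """
    def __init__(self):
        super().__init__()
        self.thick = nn.Conv2d(1, 1, 3, padding=1)
        self.thin = nn.Conv2d(1, 1, 3, padding=1)
        self.fusion = nn.Conv2d(3, 1, 3, padding=1)   # image + two branch maps

    def forward(self, x):
        thick = torch.sigmoid(self.thick(x))
        thin = torch.sigmoid(self.thin(x))
        fused = torch.sigmoid(self.fusion(torch.cat([x, thick, thin], dim=1)))
        return thick, thin, fused

net = ThreeStageVesselNet()
thick, thin, fused = net(torch.randn(1, 1, 64, 64))
print(fused.shape)  # torch.Size([1, 1, 64, 64])
```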


Subject(s)
Deep Learning; Image Processing, Computer-Assisted/methods; Retinal Vessels/diagnostic imaging; Algorithms; Databases, Factual; Humans; Image Interpretation, Computer-Assisted