Results 1 - 20 of 66
1.
Nature ; 627(8002): 80-87, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38418888

ABSTRACT

Integrated microwave photonics (MWP) is an intriguing technology for the generation, transmission and manipulation of microwave signals in chip-scale optical systems [1,2]. In particular, ultrafast processing of analogue signals in the optical domain with high fidelity and low latency could enable a variety of applications such as MWP filters [3-5], microwave signal processing [6-9] and image recognition [10,11]. An ideal integrated MWP processing platform should have both an efficient, high-speed electro-optic modulation block to faithfully perform microwave-optic conversion at low power and a low-loss functional photonic network to implement various signal-processing tasks. Moreover, large-scale, low-cost manufacturability is required to monolithically integrate the two building blocks on the same chip. Here we demonstrate such an integrated MWP processing engine based on a 4-inch wafer-scale thin-film lithium niobate platform. It can perform multipurpose tasks with processing bandwidths of up to 67 GHz at complementary metal-oxide-semiconductor (CMOS)-compatible voltages. We achieve ultrafast analogue computation, namely temporal integration and differentiation, at sampling rates of up to 256 gigasamples per second, and deploy these functions to showcase three proof-of-concept applications: solving ordinary differential equations, generating ultra-wideband signals and detecting edges in images. We further leverage the image edge detector to realize a photonic-assisted image segmentation model that can effectively outline the boundaries of melanoma lesions in medical diagnostic images. Our ultrafast lithium niobate MWP engine could provide compact, low-latency and cost-effective solutions for future wireless communications, high-resolution radar and photonic artificial intelligence.
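The temporal integration and differentiation that the engine performs optically can be mimicked digitally. The following numpy sketch is purely illustrative (the chip processes analogue optical waveforms; the test tone and window are made up) and shows the two operations at the paper's 256 GSa/s sampling rate:

```python
import numpy as np

# Hypothetical digital analogue of the chip's temporal integrator and
# differentiator, applied to a signal sampled at 256 GSa/s.
fs = 256e9                          # sampling rate: 256 gigasamples per second
t = np.arange(0, 1e-9, 1 / fs)      # 1 ns observation window (256 samples)
x = np.sin(2 * np.pi * 10e9 * t)    # 10 GHz test tone (arbitrary choice)

integral = np.cumsum(x) / fs        # running temporal integral
derivative = np.gradient(x, 1 / fs) # finite-difference temporal derivative

# d/dt sin(2*pi*f*t) = 2*pi*f*cos(2*pi*f*t), so the derivative's peak
# amplitude should be close to 2*pi*f.
peak = np.max(np.abs(derivative))
expected = 2 * np.pi * 10e9
```

At this rate a 10 GHz tone is sampled about 25 times per period, so the finite-difference derivative comes within roughly 1% of the analytic amplitude 2πf, and the integral over the full 10-period window returns to zero.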


Subjects
Microwaves, Niobium, Optics and Photonics, Oxides, Photons, Artificial Intelligence, Diagnostic Imaging/instrumentation, Diagnostic Imaging/methods, Melanoma/diagnostic imaging, Melanoma/pathology, Optics and Photonics/instrumentation, Optics and Photonics/methods, Radar, Wireless Technology, Humans
2.
Mol Microbiol ; 120(2): 241-257, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37330634

ABSTRACT

Vibrio parahaemolyticus is a significant food-borne pathogen found in diverse aquatic habitats. Quorum sensing (QS), a signaling system for cell-cell communication, plays an important role in V. parahaemolyticus persistence. We characterized the function of three V. parahaemolyticus QS signal synthases, CqsAvp, LuxMvp, and LuxSvp, and show that they are essential to activate QS and regulate swarming. We found that CqsAvp, LuxMvp, and LuxSvp activate a QS bioluminescence reporter through OpaR. However, V. parahaemolyticus exhibits swarming defects in the absence of CqsAvp, LuxMvp, and LuxSvp, but not OpaR. The swarming defect of this synthase mutant (termed Δ3AI) was recovered by overexpressing either LuxOvp(D47A), a mimic of dephosphorylated LuxOvp, or the scrABC operon. CqsAvp, LuxMvp, and LuxSvp inhibit lateral flagellar (laf) gene expression by inhibiting the phosphorylation of LuxOvp and the expression of scrABC. Phosphorylated LuxOvp enhances laf gene expression in a mechanism that involves modulating c-di-GMP levels. However, enhancing swarming requires both phosphorylated and dephosphorylated LuxOvp, which are regulated by the QS signals synthesized by CqsAvp, LuxMvp, and LuxSvp. The data presented here suggest an important strategy of swarming regulation through the integration of QS and c-di-GMP signaling pathways in V. parahaemolyticus.


Subjects
Quorum Sensing, Vibrio parahaemolyticus, Quorum Sensing/genetics, Vibrio parahaemolyticus/physiology, Bacterial Gene Expression Regulation, Bacterial Proteins/genetics, Bacterial Proteins/metabolism, Signal Transduction
3.
Cereb Cortex ; 23(4): 786-800, 2013 Apr.
Article in English | MEDLINE | ID: mdl-22490548

ABSTRACT

Is there a common structural and functional cortical architecture that can be quantitatively encoded and precisely reproduced across individuals and populations? This question is still largely unanswered due to the vast complexity, variability, and nonlinearity of the cerebral cortex. Here, we hypothesize that the common cortical architecture can be effectively represented by group-wise consistent structural fiber connections and take a novel data-driven approach to explore the cortical architecture. We report a dense and consistent map of 358 cortical landmarks, named Dense Individualized and Common Connectivity-based Cortical Landmarks (DICCCOLs). Each DICCCOL is defined by group-wise consistent white-matter fiber connection patterns derived from diffusion tensor imaging (DTI) data. Our results have shown that these 358 landmarks are remarkably reproducible over more than one hundred human brains and possess accurate intrinsically established structural and functional cross-subject correspondences validated by large-scale functional magnetic resonance imaging data. In particular, these 358 cortical landmarks can be accurately and efficiently predicted in a new single brain with DTI data. Thus, this set of 358 DICCCOL landmarks comprehensively encodes the common structural and functional cortical architectures, providing opportunities for many applications in brain science including mapping human brain connectomes, as demonstrated in this work.


Subjects
Brain Mapping, Cerebral Cortex/physiology, Myelinated Nerve Fibers/physiology, Neural Pathways/physiology, Adolescent, Adult, Age Factors, Aged, Algorithms, Attention/physiology, Cerebral Cortex/anatomy & histology, Cerebral Cortex/blood supply, Diffusion Magnetic Resonance Imaging, Emotions/physiology, Female, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Semantics
4.
IEEE Trans Med Imaging ; PP, 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38913527

ABSTRACT

Multi-modal prompt learning is a high-performance and cost-effective learning paradigm, which learns text as well as image prompts to tune pre-trained vision-language (V-L) models like CLIP for adapting to multiple downstream tasks. However, recent methods typically treat text and image prompts as independent components without considering the dependency between prompts. Moreover, extending multi-modal prompt learning into the medical field poses challenges due to a significant gap between general- and medical-domain data. To this end, we propose a Multi-modal Collaborative Prompt Learning (MCPL) pipeline to tune a frozen V-L model for aligning medical text-image representations, thereby achieving medical downstream tasks. We first construct the anatomy-pathology (AP) prompt for multi-modal prompting jointly with text and image prompts. The AP prompt introduces instance-level anatomy and pathology information, thereby helping a V-L model better comprehend medical reports and images. Next, we propose a graph-guided prompt collaboration module (GPCM), which explicitly establishes multi-way couplings between the AP, text, and image prompts, enabling collaborative multi-modal prompt production and updating for more effective prompting. Finally, we develop a novel prompt configuration scheme, which attaches the AP prompt to the query and key, and the text/image prompt to the value, in self-attention layers to improve the interpretability of multi-modal prompts. Extensive experiments on numerous medical classification and object detection datasets show that the proposed pipeline achieves excellent effectiveness and generalization. Compared with state-of-the-art prompt learning methods, MCPL provides a more reliable multi-modal prompt paradigm for reducing the tuning costs of V-L models on medical downstream tasks. Our code is available at https://github.com/CUHK-AIM-Group/MCPL.
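The prompt configuration described above (AP prompt on query/key, text/image prompts on value) can be made concrete with a toy example. The sketch below is not the authors' MCPL code; it illustrates the generic mechanism of concatenating prompt tokens to the keys and values of self-attention, with made-up shapes and identity projections:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def prompted_attention(x, kv_prompt):
    """Self-attention in which extra prompt tokens are concatenated to the
    keys and values only, so the prompts steer attention without changing
    the output sequence length. Projections are omitted for brevity."""
    n, d = x.shape
    k = np.concatenate([kv_prompt, x], axis=0)
    v = np.concatenate([kv_prompt, x], axis=0)
    attn = softmax(x @ k.T / np.sqrt(d))   # shape (n, n + n_prompt)
    return attn @ v                        # shape (n, d)

rng = np.random.default_rng(0)
tokens = rng.standard_normal((4, 8))       # 4 image/text tokens, dim 8
prompts = rng.standard_normal((2, 8))      # 2 hypothetical prompt tokens
out = prompted_attention(tokens, prompts)  # shape (4, 8)
```

Because the prompts enter only through the keys and values, the output length matches the input tokens while every output becomes a mixture influenced by the prompt content.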

5.
IEEE Trans Med Imaging ; 43(1): 190-202, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37428659

ABSTRACT

Open set recognition (OSR) aims to accurately classify known diseases and recognize unseen diseases as the unknown class in medical scenarios. However, in existing OSR approaches, gathering data from distributed sites to construct large-scale centralized training datasets usually leads to high privacy and security risk, which can be alleviated elegantly via the popular cross-site training paradigm, federated learning (FL). To this end, we present the first effort to formulate federated open set recognition (FedOSR), and meanwhile propose a novel Federated Open Set Synthesis (FedOSS) framework to address the core challenge of FedOSR: the unavailability of unknown samples for all anticipated clients during the training phase. The proposed FedOSS framework mainly leverages two modules, i.e., Discrete Unknown Sample Synthesis (DUSS) and Federated Open Space Sampling (FOSS), to generate virtual unknown samples for learning decision boundaries between known and unknown classes. Specifically, DUSS exploits inter-client knowledge inconsistency to recognize known samples near decision boundaries and then pushes them beyond decision boundaries to synthesize discrete virtual unknown samples. FOSS unites these generated unknown samples from different clients to estimate the class-conditional distributions of open data space near decision boundaries and further samples open data, thereby improving the diversity of virtual unknown samples. Additionally, we conduct comprehensive ablation experiments to verify the effectiveness of DUSS and FOSS. FedOSS shows superior performance on public medical datasets in comparison with state-of-the-art approaches. The source code is available at https://github.com/CityU-AIM-Group/FedOSS.
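The core DUSS idea, pushing known samples across the decision boundary to synthesize virtual unknowns, can be illustrated in a toy setting. The sketch below uses a single linear classifier; the paper works with deep models across federated clients, so everything here is a hypothetical simplification:

```python
import numpy as np

# Toy illustration of the DUSS idea (hypothetical simplification): take a
# known sample near a linear decision boundary w.x + b = 0 and push it along
# the boundary normal until it crosses, producing a "virtual unknown".
w = np.array([1.0, -2.0])            # boundary normal (made up)
b = 0.5
x_known = np.array([0.2, 0.4])       # known sample, on the negative side

def decision(x):
    return w @ x + b                 # signed distance (up to ||w|| scaling)

# Step 1.5x the distance to the boundary, in the crossing direction.
step = 1.5 * abs(decision(x_known)) / np.linalg.norm(w)
direction = -np.sign(decision(x_known)) * w / np.linalg.norm(w)
x_virtual = x_known + step * direction   # now lies beyond the boundary
```

The synthesized point sits just past the boundary, which is exactly where samples are most informative for carving out an "unknown" region between known classes.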


Subjects
Machine Learning, Software, Humans, Disease
6.
IEEE Trans Med Imaging ; 43(5): 1816-1827, 2024 May.
Article in English | MEDLINE | ID: mdl-38165794

ABSTRACT

Computer-aided diagnosis (CAD) for rare diseases using medical imaging poses a significant challenge due to the requirement of large volumes of labeled training data, which are particularly difficult to collect for rare diseases. Although few-shot learning (FSL) methods have been developed for this task, they focus solely on rare disease diagnosis, failing to preserve performance on common disease diagnosis. To address this issue, we propose the Disentangle then Calibrate with Gradient Guidance (DCGG) framework under the setting of generalized few-shot learning, i.e., using one model to diagnose both common and rare diseases. The DCGG framework consists of a network backbone, a gradient-guided network disentanglement (GND) module, and a gradient-induced feature calibration (GFC) module. The GND module disentangles the network into a disease-shared component and a disease-specific component based on gradient guidance, and devises independent optimization strategies for the two components when learning from rare diseases. The GFC module transfers only the disease-shared channels of common-disease features to rare diseases, and incorporates optimal transport theory to identify the best transport scheme based on the semantic relationship among different diseases. Based on the best transport scheme, the GFC module calibrates the distribution of rare-disease features at the disease-shared channels, deriving more informative rare-disease features for better diagnosis. The proposed DCGG framework has been evaluated on three public medical image classification datasets. Our results suggest that the DCGG framework achieves state-of-the-art performance in diagnosing both common and rare diseases.


Subjects
Algorithms, Computer-Assisted Image Interpretation, Rare Diseases, Humans, Rare Diseases/diagnostic imaging, Computer-Assisted Image Interpretation/methods, Factual Databases, Magnetic Resonance Imaging/methods, Machine Learning
7.
J Immunol Res ; 2024: 4481452, 2024.
Article in English | MEDLINE | ID: mdl-39104595

ABSTRACT

Exosome-derived microRNAs (miRNAs) are emerging as pivotal players in the pathophysiology of sepsis, representing a new frontier in both the diagnosis and treatment of this complex condition. Sepsis, a severe systemic response to infection, involves intricate immune and nonimmune mechanisms, where exosome-mediated communication can significantly influence disease progression and outcomes. During the progression of sepsis, the miRNA profile of exosomes undergoes notable alterations that reflect, and may in turn influence, the course of the disease. This review comprehensively explores the biology of exosome-derived miRNAs, which originate from both immune cells (such as macrophages and dendritic cells) and nonimmune cells (such as endothelial and epithelial cells) and play a dynamic role in modulating pathways that affect the course of sepsis, including those related to inflammation, immune response, cell survival, and apoptosis. Taking these dynamic changes into account, we further discuss the potential of exosome-derived miRNAs as biomarkers for the early detection and prognosis of sepsis, and their advantages over traditional biomarkers owing to their stability and specificity. Furthermore, this review evaluates exosome-based therapeutic miRNA delivery systems in sepsis, which may pave the way for targeted modulation of the septic response and personalized treatment options.


Subjects
Biomarkers, Exosomes, MicroRNAs, Sepsis, Humans, Exosomes/metabolism, Sepsis/diagnosis, Sepsis/therapy, Sepsis/genetics, Sepsis/immunology, MicroRNAs/genetics, Animals, Prognosis, Macrophages/immunology, Macrophages/metabolism
8.
Neural Netw ; 172: 106099, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38237445

ABSTRACT

Domain generalization-based fault diagnosis (DGFD) presents significant prospects for recognizing faults without access to the target domain. Previous DGFD methods have achieved significant progress; however, there are some limitations. First, most DGFD methods statistically model the dependence between time-series data and labels, which is a superficial description of the actual data-generating process. Second, most existing DGFD methods are verified only on vibrational time-series datasets, which is insufficient to show the potential of domain generalization in the fault diagnosis area. In response to these issues, this paper proposes a DGFD method named Causal Disentanglement Domain Generalization (CDDG), which can reestablish the data-generating process by disentangling time-series data into causal factors (fault-related representation) and non-causal factors (domain-related representation) with a structural causal model. Specifically, in CDDG, a causal aggregation loss is designed to separate the unobservable causal and non-causal factors. Meanwhile, a reconstruction loss is proposed to ensure the information completeness of the disentangled factors. We also introduce a redundancy reduction loss to learn efficient features. The proposed CDDG is verified on five cross-machine vibrational fault diagnosis cases and three cross-environment acoustical anomaly detection cases by comparing it with eight state-of-the-art (SOTA) DGFD methods. We release an open-source time-series DGFD benchmark containing CDDG and the eight SOTA methods. The code repository is available at https://github.com/ShaneSpace/DGFDBenchmark.


Subjects
Psychological Generalization, Learning, Acoustics, Benchmarking, Causality
9.
Med Image Anal ; 91: 102990, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37864912

ABSTRACT

The fusion of multi-modal data, e.g., pathology slides and genomic profiles, can provide complementary information and benefit glioma grading. However, genomic profiles are difficult to obtain due to high costs and technical challenges, thus limiting the clinical applications of multi-modal diagnosis. In this work, we investigate the realistic problem where paired pathology-genomic data are available during training, while only pathology slides are accessible for inference. To solve this problem, a comprehensive learning and adaptive teaching framework is proposed to improve the performance of pathological grading models by transferring privileged knowledge from the multi-modal teacher to the pathology student. For comprehensive learning of the multi-modal teacher, we propose a novel Saliency-Aware Masking (SA-Mask) strategy to explore richer disease-related features from both modalities by masking the most salient features. For adaptive teaching of the pathology student, we first devise a Local Topology Preserving and Discrepancy Eliminating Contrastive Distillation (TDC-Distill) module to align the feature distributions of the teacher and student models. Furthermore, considering that the multi-modal teacher may include incorrect information, we propose a Gradient-guided Knowledge Refinement (GK-Refine) module that builds a knowledge bank and adaptively absorbs reliable knowledge according to its agreement in the gradient space. Experiments on the TCGA GBM-LGG dataset show that our proposed distillation framework improves pathological glioma grading and outperforms other knowledge distillation (KD) methods. Notably, with pathology slides alone, our method achieves performance comparable to existing multi-modal methods. The code is available at https://github.com/CUHK-AIM-Group/MultiModal-learning.
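A building block of any such distillation framework is a divergence between teacher and student outputs. The sketch below shows the classic temperature-softened KL distillation term; it is a generic ingredient, not the paper's TDC-Distill or GK-Refine module:

```python
import numpy as np

def softmax(z, t=1.0):
    z = np.asarray(z, dtype=float) / t
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kd_loss(teacher_logits, student_logits, temperature=4.0):
    """Hinton-style distillation term: KL divergence from the student to the
    teacher on temperature-softened logits, scaled by T^2 so gradients stay
    comparable across temperatures."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))) * temperature ** 2)

loss_same = kd_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])   # → 0.0
loss_diff = kd_loss([2.0, 0.5, -1.0], [-1.0, 0.5, 2.0])   # strictly positive
```

The loss vanishes exactly when student and teacher distributions coincide and grows as they diverge, which is what makes it usable as an alignment objective.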


Subjects
Glioma, Learning, Humans
10.
IEEE Trans Med Imaging ; 43(6): 2113-2124, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38231819

ABSTRACT

Taking advantage of multi-modal radiology-pathology data with complementary clinical information for cancer grading can help doctors improve diagnostic efficiency and accuracy. However, radiology and pathology data have distinct acquisition difficulties and costs, so incomplete-modality data are common in applications. In this work, we propose a Memory- and Gradient-guided Incomplete Modal-modal Learning (MGIML) framework for cancer grading with incomplete radiology-pathology data. Firstly, to remedy missing-modality information, we propose a Memory-driven Hetero-modality Complement (MH-Complete) scheme, which constructs modal-specific memory banks constrained by a coarse-grained memory boosting (CMB) loss to record generic radiology and pathology feature patterns, and develops a cross-modal memory reading strategy enhanced by a fine-grained memory consistency (FMC) loss to retrieve missing-modality information from the stored memories. Secondly, as gradient conflicts exist across missing-modality situations, we propose a Rotation-driven Gradient Homogenization (RG-Homogenize) scheme, which estimates instance-specific rotation matrices to smoothly change the feature-level gradient directions, and computes confidence-guided homogenization weights to dynamically balance gradient magnitudes. By simultaneously mitigating gradient direction and magnitude conflicts, this scheme avoids the negative-transfer and optimization-imbalance problems. Extensive experiments on the CPTAC-UCEC and CPTAC-PDA datasets show that the proposed MGIML framework performs favorably against state-of-the-art multi-modal methods in missing-modality situations.
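Gradient-direction conflict can be made concrete with a small example. The sketch below resolves a conflict by PCGrad-style projection, a widely used alternative to the learned rotations of RG-Homogenize, and is shown only to illustrate the problem the paper addresses:

```python
import numpy as np

def deconflict(g1, g2):
    """PCGrad-style remedy for gradient-direction conflict: if two task
    gradients point against each other (negative dot product), project g1
    onto the plane orthogonal to g2. Not the paper's learned-rotation
    scheme; it only makes the targeted conflict concrete."""
    dot = g1 @ g2
    if dot < 0:
        g1 = g1 - dot / (g2 @ g2) * g2
    return g1

g_radiology = np.array([1.0, 1.0])       # made-up task gradients
g_pathology = np.array([-1.0, 0.5])      # conflicting: dot product is -0.5
g_fixed = deconflict(g_radiology, g_pathology)   # → array([0.6, 1.2])
```

After projection the two gradients are orthogonal, so following one no longer directly undoes progress on the other, which is the behavior any homogenization scheme (rotation- or projection-based) is after.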


Subjects
Algorithms, Neoplasm Grading, Humans, Neoplasm Grading/methods, Computer-Assisted Image Interpretation/methods, Machine Learning, Neoplasms/diagnostic imaging
11.
IEEE Trans Med Imaging ; PP, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38935476

ABSTRACT

Pathology images are essential for accurately interpreting lesion cells in cytopathology screening, but acquiring high-resolution digital slides requires specialized equipment and long scanning times. Though super-resolution (SR) techniques can alleviate this problem, existing deep learning models recover pathology images in a black-box manner, which can lead to untruthful biological details and misdiagnosis. Additionally, current methods allocate the same computational resources to recover each pixel of a pathology image, leading to sub-optimal recovery given the large variation across pathology images. In this paper, we propose the first hierarchical reinforcement learning framework, named Spatial-Temporal hierARchical Reinforcement Learning (STAR-RL), to address these issues in pathology image super-resolution. We reformulate the SR problem as a Markov decision process of interpretable operations and adopt a hierarchical recovery mechanism at the patch level to avoid sub-optimal recovery. Specifically, a higher-level spatial manager picks out the most corrupted patch for the lower-level patch worker. Moreover, a higher-level temporal manager evaluates the selected patch and determines whether optimization should be stopped early, thereby avoiding over-processing. Under the guidance of the spatial-temporal managers, the lower-level patch worker processes the selected patch with pixel-wise interpretable actions at each time step. Experimental results on medical images degraded by different kernels show the effectiveness of STAR-RL. Furthermore, STAR-RL yields a large improvement in tumor diagnosis and shows generalizability under various degradations. The source code is to be released.

12.
Med Image Anal ; 96: 103205, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38788328

ABSTRACT

Multi-phase enhanced computed tomography (MPECT) translation from plain CT can help doctors detect liver lesions while sparing patients allergic reactions to contrast agents during MPECT examination. Existing CT translation methods directly learn an end-to-end mapping from plain CT to MPECT, ignoring crucial clinical domain knowledge. In clinical practice, clinicians subtract the plain CT from MPECT images to obtain a subtraction image that highlights contrast-enhanced regions and facilitates liver disease diagnosis; we aim to exploit this domain knowledge for automatic CT translation. To this end, we propose a Mask-Aware Transformer (MAFormer) with a structure invariant loss for CT translation, which presents the first effort to exploit this domain knowledge for CT translation. Specifically, the proposed MAFormer introduces a mask estimator to predict the subtraction image from the plain CT image. To integrate the subtraction image into the network, MAFormer devises a Mask-Aware Transformer based Normalization (MATNorm) layer to highlight the contrast-enhanced regions and capture the long-range dependencies among these regions. Moreover, to preserve the biological structure of CT slices, a structure invariant loss is designed to extract structural information and minimize the structural discrepancy between the plain and synthetic CT images. Extensive experiments have proven the effectiveness of the proposed method and its superiority over state-of-the-art CT translation methods. Source code is to be released.
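The clinical domain knowledge the method builds on, the subtraction image, is simple to state: it is the voxel-wise difference between an enhanced-phase CT and the registered plain CT. A minimal sketch with synthetic stand-in arrays follows (the values and the threshold are made up for illustration):

```python
import numpy as np

# Synthetic stand-ins for two registered CT slices in Hounsfield-like units.
plain = np.array([[30.0, 40.0],
                  [35.0, 50.0]])
enhanced = np.array([[32.0, 95.0],
                     [36.0, 120.0]])

# The subtraction image highlights contrast-enhanced regions.
subtraction = enhanced - plain
# Crude enhancement mask via a hypothetical threshold (20 units).
mask = subtraction > 20.0
```

Here the two right-hand voxels enhance strongly and survive the threshold; in the paper this kind of map is predicted by the mask estimator rather than computed from a real MPECT scan, which is unavailable at inference time.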


Subjects
X-Ray Computed Tomography, Humans, X-Ray Computed Tomography/methods, Algorithms, Subtraction Technique, Computer-Assisted Radiographic Image Interpretation/methods
13.
IEEE J Biomed Health Inform ; 28(5): 3003-3014, 2024 May.
Article in English | MEDLINE | ID: mdl-38470599

ABSTRACT

Fusing multi-modal radiology and pathology data with complementary information can improve the accuracy of tumor typing. However, collecting pathology data is difficult since it is costly and sometimes only obtainable after surgery, which limits the application of multi-modal methods in diagnosis. To address this problem, we propose comprehensively learning from multi-modal radiology-pathology data during training, while using only uni-modal radiology data in testing. Concretely, a Memory-aware Hetero-modal Distillation Network (MHD-Net) is proposed, which can distill well-learned multi-modal knowledge, with the assistance of memory, from the teacher to the student. In the teacher, to tackle the challenge of hetero-modal feature fusion, we propose a novel spatial-differentiated hetero-modal fusion module (SHFM) that models spatial-specific tumor information correlations across modalities. As only radiology data is accessible to the student, we store pathology features in the proposed contrast-boosted typing memory module (CTMM), which achieves type-wise memory updating and stage-wise contrastive memory boosting to ensure the effectiveness and generalization of memory items. In the student, to improve cross-modal distillation, we propose a multi-stage memory-aware distillation (MMD) scheme that reads memory-aware pathology features from the CTMM to remedy missing modal-specific information. Furthermore, we construct a Radiology-Pathology Thymic Epithelial Tumor (RPTET) dataset containing paired CT and WSI images with annotations. Experiments on the RPTET and CPTAC-LUAD datasets demonstrate that MHD-Net significantly improves tumor typing and outperforms existing multi-modal methods in missing-modality situations.


Subjects
Glandular and Epithelial Neoplasms, Thymus Neoplasms, Humans, Thymus Neoplasms/diagnostic imaging, Glandular and Epithelial Neoplasms/diagnostic imaging, Computer-Assisted Image Interpretation/methods, X-Ray Computed Tomography/methods, Algorithms, Neural Networks (Computer), Deep Learning, Multimodal Imaging/methods
14.
Neural Netw ; 179: 106540, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39079377

ABSTRACT

West syndrome is an epileptic disease that seriously affects normal growth and development in infancy. Using brain topological networks and graph-theoretic methods, this article examines three clinical states of patients before and after treatment. Beyond comparing bidirectional and unidirectional global networks from the perspective of computational principles, it also analyses in depth the local intra-network and inter-network characteristics of multi-partitioned networks. A spatial feature distribution based on characteristic path length is introduced for the first time. The results show that the bidirectional network discriminates between clinical states more significantly; its rhythm-specific feature trends and spatial characteristic distribution can serve as measures of how treatment affects global information processing in the brain. Moreover, the variability of features in localized brain regions and differences in the capacity for information exchange between brain regions show potential as biomarkers for medication assessment in West syndrome. These specific findings on the interaction and consistency of macro- and micro-networks may have a positive effect on patients' treatment and prognosis management.
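The characteristic path length underlying the path-length analysis above is a standard graph-theoretic measure: the mean shortest-path distance over all node pairs. A minimal pure-Python sketch follows (a textbook definition; the paper's EEG-specific pipeline is not reproduced here):

```python
from collections import deque

def characteristic_path_length(adj):
    """Average shortest-path length over all ordered node pairs of a
    connected unweighted graph, computed with breadth-first search from
    each node (the standard graph-theory definition)."""
    nodes = list(adj)
    total, pairs = 0, 0
    for s in nodes:
        dist = {s: 0}
        q = deque([s])
        while q:                      # BFS from source s
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t in nodes:
            if t != s:
                total += dist[t]
                pairs += 1
    return total / pairs

# Path graph 0-1-2-3: pairwise distances sum to 20 over 12 ordered pairs.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
cpl = characteristic_path_length(path)   # → 5/3
```

Shorter characteristic path length is conventionally read as more efficient global information transfer, which is the sense in which the abstract relates it to global information processing in the brain.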


Subjects
Brain, Electroencephalography, Infantile Spasms, Humans, Electroencephalography/methods, Infantile Spasms/physiopathology, Infantile Spasms/diagnosis, Infant, Brain/physiopathology, Scalp, Male, Female, Anticonvulsants/therapeutic use, Nerve Net/physiopathology
15.
Med Image Anal ; 97: 103226, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38852215

ABSTRACT

The advancement of artificial intelligence (AI) for organ segmentation and tumor detection is propelled by the growing availability of computed tomography (CT) datasets with detailed, per-voxel annotations. However, these AI models often struggle with flexibility for partially annotated datasets and extensibility for new classes due to limitations in the one-hot encoding, architectural design, and learning scheme. To overcome these limitations, we propose a universal, extensible framework enabling a single model, termed Universal Model, to deal with multiple public datasets and adapt to new classes (e.g., organs/tumors). Firstly, we introduce a novel language-driven parameter generator that leverages language embeddings from large language models, enriching semantic encoding compared with one-hot encoding. Secondly, the conventional output layers are replaced with lightweight, class-specific heads, allowing Universal Model to simultaneously segment 25 organs and six types of tumors and ease the addition of new classes. We train our Universal Model on 3410 CT volumes assembled from 14 publicly available datasets and then test it on 6173 CT volumes from four external datasets. Universal Model achieves first place on six CT tasks in the Medical Segmentation Decathlon (MSD) public leaderboard and leading performance on the Beyond The Cranial Vault (BTCV) dataset. In summary, Universal Model exhibits remarkable computational efficiency (6× faster than other dataset-specific models), demonstrates strong generalization across different hospitals, transfers well to numerous downstream tasks, and more importantly, facilitates the extensibility to new classes while alleviating the catastrophic forgetting of previously learned classes. Codes, models, and datasets are available at https://github.com/ljwztc/CLIP-Driven-Universal-Model.


Subjects
X-Ray Computed Tomography, Humans, Abdominal Radiography/methods, Computer-Assisted Radiographic Image Interpretation/methods, Artificial Intelligence
16.
Med Image Anal ; 97: 103275, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39032395

ABSTRACT

Recent unsupervised domain adaptation (UDA) methods in medical image segmentation commonly utilize Generative Adversarial Networks (GANs) for domain translation. However, the translated images often deviate from the ideal distribution due to the inherent instability of GANs, leading to challenges such as visual inconsistency and incorrect style, which in turn cause the segmentation model to fall into fixed error patterns. To address this problem, we propose a novel UDA framework known as Dual Domain Distribution Disruption with Semantics Preservation (DDSP). Departing from the idea of generating images conforming to the target-domain distribution in GAN-based UDA methods, we make the model domain-agnostic and focus on anatomical structural information by leveraging semantic information as a constraint, guiding the model to adapt to images with disrupted distributions in both source and target domains. Furthermore, we introduce an inter-channel similarity feature alignment based on domain-invariant structural prior information, which helps the shared pixel-wise classifier achieve robust performance on target-domain features by aligning the source and target domain features across channels. Our method significantly outperforms existing state-of-the-art UDA methods on three public datasets (i.e., the heart, brain, and prostate datasets). The code is available at https://github.com/MIXAILAB/DDSPSeg.


Subjects
Semantics, Unsupervised Machine Learning, Humans, Computer-Assisted Image Processing/methods, Male, Algorithms, Brain/diagnostic imaging
17.
Gels ; 10(2), 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38391438

ABSTRACT

Polyurethanes (PUs) are a highly adaptable class of biomaterials that are among some of the most researched materials for various biomedical applications. However, engineered tissue scaffolds composed of PU have not found their way into clinical application, mainly due to the difficulty of balancing the control of material properties with the desired cellular response. A simple method for the synthesis of tunable bioactive poly(ethylene glycol) diacrylate (PEGDA) hydrogels containing photocurable PU is described. These hydrogels may be modified with PEGylated peptides or proteins to impart variable biological functions, and the mechanical properties of the hydrogels can be tuned based on the ratios of PU and PEGDA. Studies with human cells revealed that PU-PEG blended hydrogels support cell adhesion and viability when cell adhesion peptides are crosslinked within the hydrogel matrix. These hydrogels represent a unique and highly tailorable system for synthesizing PU-based synthetic extracellular matrices for tissue engineering applications.

18.
Cereb Cortex ; 22(12): 2831-9, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22190432

ABSTRACT

Convoluted cortical folding and neuronal wiring are two prominent attributes of the mammalian brain. However, the macroscale intrinsic relationship between these two general cross-species attributes, as well as the underlying principles that sculpt the architecture of the cerebral cortex, remains unclear. Here, we show that the axonal fibers connected to gyri are significantly denser than those connected to sulci. In human, chimpanzee, and macaque brains, a dominant fraction of axonal fibers was found to be connected to the gyri. This finding has been replicated in a range of mammalian brains via diffusion tensor imaging and high-angular resolution diffusion imaging. These results shed light on fundamental mechanisms for the development and organization of the cerebral cortex, suggesting that axonal pushing is a mechanism of cortical folding.


Subjects
Axons/ultrastructure, Cerebral Cortex/ultrastructure, Macaca/anatomy & histology, Neural Pathways/ultrastructure, Pan troglodytes/anatomy & histology, Animals, Female, Humans, Male, Species Specificity, Young Adult
19.
IEEE Trans Pattern Anal Mach Intell ; 45(7): 9022-9040, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37018585

ABSTRACT

Domain Adaptive Object Detection (DAOD) generalizes the object detector from an annotated domain to a label-free novel one. Recent works estimate prototypes (class centers) and minimize the corresponding distances to adapt the cross-domain class conditional distribution. However, this prototype-based paradigm 1) fails to capture the class variance with agnostic structural dependencies, and 2) ignores the domain-mismatched classes, yielding sub-optimal adaptation. To address these two challenges, we propose an improved SemantIc-complete Graph MAtching framework, dubbed SIGMA++, for DAOD, completing mismatched semantics and reformulating adaptation with hypergraph matching. Specifically, we propose a Hypergraphical Semantic Completion (HSC) module to generate hallucination graph nodes in mismatched classes. HSC builds a cross-image hypergraph to model the class conditional distribution with high-order dependencies and learns a graph-guided memory bank to generate missing semantics. After representing the source and target batches with hypergraphs, we reformulate domain adaptation as a hypergraph matching problem, i.e., discovering well-matched nodes with homogeneous semantics to reduce the domain gap, which is solved with a Bipartite Hypergraph Matching (BHM) module. Graph nodes are used to estimate semantic-aware affinity, while edges serve as high-order structural constraints in a structure-aware matching loss, achieving fine-grained adaptation with hypergraph matching. Applicability across various object detectors verifies the generalization of SIGMA++, and extensive experiments on nine benchmarks show its state-of-the-art performance on both AP50 and adaptation gains.
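The BHM module above is learned end-to-end, but the assignment problem at its core can be illustrated with a toy sketch: given source and target node embeddings, find the one-to-one pairing that maximizes total affinity. Everything here is a hypothetical simplification (plain cosine affinity, brute-force search over permutations), not the paper's method; a practical system would run the Hungarian algorithm on the affinity matrix instead.

```python
from itertools import permutations
import math

def cosine(u, v):
    """Cosine similarity of two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def match_nodes(src, tgt):
    """One-to-one matching of source to target nodes maximizing total
    cosine affinity. Brute force over permutations, so only suitable
    for a handful of nodes.
    """
    best, best_score = None, -math.inf
    for perm in permutations(range(len(tgt)), len(src)):
        score = sum(cosine(src[i], tgt[j]) for i, j in enumerate(perm))
        if score > best_score:
            best, best_score = list(enumerate(perm)), score
    return best   # list of (source_index, target_index) pairs
```

For example, `match_nodes([[1, 0], [0, 1]], [[0, 1], [1, 0]])` pairs each source node with the target node pointing the same way, returning `[(0, 1), (1, 0)]`.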

20.
IEEE Trans Med Imaging ; 42(6): 1632-1643, 2023 06.
Article in English | MEDLINE | ID: mdl-37018639

ABSTRACT

Weakly supervised segmentation (WSS) aims to exploit weak forms of annotation to train segmentation models, thereby reducing the annotation burden. However, existing methods rely on large-scale centralized datasets, which are difficult to construct due to privacy concerns around medical data. Federated learning (FL) provides a cross-site training paradigm and shows great potential to address this problem. In this work, we present the first effort to formulate federated weakly supervised segmentation (FedWSS) and propose a novel Federated Drift Mitigation (FedDM) framework to learn segmentation models across multiple sites without sharing their raw data. FedDM addresses two main challenges (i.e., local drift in client-side optimization and global drift in server-side aggregation) caused by weak supervision signals in the FL setting, via Collaborative Annotation Calibration (CAC) and Hierarchical Gradient De-conflicting (HGD). To mitigate the local drift, CAC customizes a distal peer and a proximal peer for each client via a Monte Carlo sampling strategy, and then employs inter-client knowledge agreement and disagreement to recognize clean labels and correct noisy labels, respectively. Moreover, to alleviate the global drift, HGD builds a client hierarchy online, guided by the historical gradient of the global model in each communication round. By de-conflicting clients under the same parent nodes from the bottom layers to the top layers, HGD achieves robust gradient aggregation at the server side. Furthermore, we theoretically analyze FedDM and conduct extensive experiments on public datasets. The experimental results demonstrate the superior performance of our method compared with state-of-the-art approaches. The source code is available at https://github.com/CityU-AIM-Group/FedDM.
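The core idea behind de-conflicting gradients can be sketched for the two-client case. This is a rough illustration in the spirit of gradient-projection methods (e.g., PCGrad), not the paper's hierarchical HGD scheme: when two client gradients point in opposing directions (negative dot product), each is projected onto the normal plane of the other before averaging, so the aggregate no longer opposes either client. The function name and setup are hypothetical.

```python
import numpy as np

def deconflict(g1, g2):
    """Average two client gradients after removing their conflicting
    components (two-client sketch of gradient de-conflicting).
    """
    def project(a, b):
        d = a @ b
        if d < 0:                     # conflict: drop the opposing part
            a = a - (d / (b @ b)) * b
        return a
    return 0.5 * (project(g1, g2) + project(g2, g1))

g1 = np.array([1.0, 0.0])
g2 = np.array([-1.0, 1.0])    # conflicts with g1 (dot product is -1)
agg = deconflict(g1, g2)      # aggregate opposes neither client
```

A plain average of `g1` and `g2` would mostly cancel the clients out; the projected aggregate instead makes non-negative progress for both, which is the robustness property the server-side aggregation is after.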


Subjects
Software, Supervised Machine Learning, Humans, Calibration, Monte Carlo Method