Results 1 - 20 of 186
1.
Cell ; 184(18): 4819-4837.e22, 2021 09 02.
Article in English | MEDLINE | ID: mdl-34380046

ABSTRACT

Animal bodies are composed of cell types with unique expression programs that implement their distinct locations, shapes, structures, and functions. Based on these properties, cell types assemble into specific tissues and organs. To systematically explore the link between cell-type-specific gene expression and morphology, we registered an expression atlas to a whole-body electron microscopy volume of the nereid Platynereis dumerilii. Automated segmentation of cells and nuclei identifies major cell classes and establishes a link between gene activation, chromatin topography, and nuclear size. Clustering of segmented cells according to gene expression reveals spatially coherent tissues. In the brain, genetically defined groups of neurons match ganglionic nuclei with coherent projections. Besides interneurons, we uncover sensory-neurosecretory cells in the nereid mushroom bodies, which thus qualify as sensory organs. They furthermore resemble the vertebrate telencephalon by molecular anatomy. We provide an integrated browser as a Fiji plugin for remote exploration of all available multimodal datasets.


Subjects
Cell Shape , Gene Expression Regulation , Polychaeta/cytology , Polychaeta/genetics , Single-Cell Analysis , Animals , Cell Nucleus/metabolism , Ganglia, Invertebrate/metabolism , Gene Expression Profiling , Multigene Family , Multimodal Imaging , Mushroom Bodies/metabolism , Polychaeta/ultrastructure
2.
Brief Bioinform ; 25(5)2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39073832

ABSTRACT

Herbal medicines, particularly traditional Chinese medicines (TCMs), are a rich source of natural products with significant therapeutic potential. However, understanding their mechanisms of action is challenging due to the complexity of their multi-ingredient compositions. We introduce Herb-CMap, a multimodal fusion framework leveraging protein-protein interactions and herb-perturbed gene expression signatures. Utilizing a network-based heat diffusion algorithm, Herb-CMap creates a connectivity map linking herb perturbations to their therapeutic targets, thereby facilitating the prioritization of active ingredients. As a case study, we applied Herb-CMap to Suhuang antitussive capsule (Suhuang), a TCM formula used for treating cough variant asthma (CVA). Using in vivo rat models, our analysis established the transcriptomic signatures of Suhuang and identified its key compounds, such as quercetin and luteolin, and their target genes, including IL17A, PIK3CB, PIK3CD, AKT1, and TNF. These drug-target interactions inhibit the IL-17 signaling pathway and deactivate PI3K, AKT, and NF-κB, effectively reducing lung inflammation and alleviating CVA. The study demonstrates the efficacy of Herb-CMap in elucidating the molecular mechanisms of herbal medicines, offering valuable insights for advancing drug discovery in TCM.
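The network-based heat diffusion step can be sketched abstractly. This is not the Herb-CMap implementation; the graph, seed scores, and restart parameter below are illustrative stand-ins for a protein-protein interaction network and a set of herb-perturbed genes.

```python
# Toy sketch of heat diffusion over a network: seed nodes inject heat,
# heat flows along edges with degree normalisation, and a restart term
# keeps mass anchored at the seeds. Graph and parameters are illustrative.

def heat_diffusion(adj, seeds, restart=0.5, n_iter=50):
    """adj: dict node -> list of neighbours; seeds: dict node -> initial heat."""
    nodes = list(adj)
    heat = {n: seeds.get(n, 0.0) for n in nodes}
    for _ in range(n_iter):
        new = {}
        for n in nodes:
            # heat flowing in from neighbours, each divided by its degree
            inflow = sum(heat[m] / len(adj[m]) for m in adj[n])
            new[n] = restart * seeds.get(n, 0.0) + (1 - restart) * inflow
        heat = new
    return heat

# Tiny PPI-like chain A-B-C plus C-D; "A" plays the herb-perturbed gene
graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
scores = heat_diffusion(graph, seeds={"A": 1.0})
ranked = sorted(scores, key=scores.get, reverse=True)
```

Nodes closer to the perturbation seed end up with higher heat, which is the basis for prioritizing candidate targets.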


Subjects
Antitussive Agents , Drugs, Chinese Herbal , Medicine, Chinese Traditional , Animals , Drugs, Chinese Herbal/pharmacology , Drugs, Chinese Herbal/therapeutic use , Medicine, Chinese Traditional/methods , Rats , Antitussive Agents/pharmacology , Antitussive Agents/therapeutic use , Protein Interaction Maps/drug effects , Asthma/drug therapy , Asthma/metabolism , Asthma/genetics , Signal Transduction/drug effects , Cough/drug therapy , Transcriptome , Humans
3.
Proc Natl Acad Sci U S A ; 120(32): e2303647120, 2023 08 08.
Article in English | MEDLINE | ID: mdl-37523521

ABSTRACT

Multimodal single-cell technologies profile multiple modalities for each cell simultaneously, enabling a more thorough characterization of cell populations. Existing dimension-reduction methods for multimodal data capture the "union of information," producing a lower-dimensional embedding that combines the information across modalities. While these tools are useful, we focus on the fundamentally different task of separating and quantifying the information among cells that is shared between the two modalities versus unique to only one modality. Hence, we develop Tilted Canonical Correlation Analysis (Tilted-CCA), a method that decomposes a paired multimodal dataset into three lower-dimensional embeddings: one embedding captures the "intersection of information," representing the geometric relations among cells that are common to both modalities, while the remaining two embeddings capture the "distinct information for a modality," representing the modality-specific geometric relations. We analyze single-cell multimodal datasets sequencing RNA alongside surface antibodies (i.e., CITE-seq) as well as RNA alongside chromatin accessibility (i.e., 10x) for blood cells and developing neurons via Tilted-CCA. These analyses show that Tilted-CCA enables meaningful visualization and quantification of the cross-modal information. Finally, Tilted-CCA's framework allows us to perform two specific downstream analyses. First, for single-cell datasets that simultaneously profile the transcriptome and surface antibody markers, we show that Tilted-CCA helps design the target antibody panel to best complement the transcriptome. Second, for developmental single-cell datasets that simultaneously profile the transcriptome and chromatin accessibility, we show that Tilted-CCA helps identify development-informative genes and distinguish between transient and terminal cell types.
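Tilted-CCA itself rests on canonical correlation machinery; as a much simpler illustration of the "shared versus distinct" idea, the toy sketch below quantifies the shared geometry between two 1-D modality scores with a Pearson correlation and splits each modality into a shared component plus a modality-specific residual. The data and the decomposition rule are illustrative, not the paper's method.

```python
# Toy "intersection vs distinct" decomposition for paired per-cell scores
# from two modalities (e.g. RNA and chromatin accessibility). Synthetic data.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def shared_distinct(x, y):
    """Shared part = per-cell average; distinct parts = residuals."""
    shared = [(a + b) / 2 for a, b in zip(x, y)]
    return shared, [a - s for a, s in zip(x, shared)], [b - s for b, s in zip(y, shared)]

rna  = [0.1, 0.9, 0.5, 0.3]   # synthetic RNA embedding scores per cell
atac = [0.2, 1.0, 0.4, 0.2]   # synthetic accessibility scores per cell
r = pearson(rna, atac)                         # how much geometry is shared
shared, d_rna, d_atac = shared_distinct(rna, atac)
```

Each modality reconstructs exactly as shared + distinct, mirroring (in a crude way) how Tilted-CCA's three embeddings partition a paired dataset.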


Subjects
Algorithms , Canonical Correlation Analysis , Transcriptome , Single-Cell Analysis/methods
4.
Trends Genet ; 38(2): 128-139, 2022 02.
Article in English | MEDLINE | ID: mdl-34561102

ABSTRACT

A wealth of single-cell protocols makes it possible to characterize different molecular layers at unprecedented resolution. Integrating the resulting multimodal single-cell data to find cell-to-cell correspondences remains a challenge. We argue that data integration needs to happen at a meaningful biological level of abstraction and that it is necessary to consider the inherent discrepancies between modalities to strike a balance between biological discovery and noise removal. A survey of current methods reveals that a distinction between technical and biological origins of presumed unwanted variation between datasets is not yet commonly considered. The increasing availability of paired multimodal data will aid the development of improved methods by providing a ground truth on cell-to-cell matches.

5.
Brief Bioinform ; 24(6)2023 09 22.
Article in English | MEDLINE | ID: mdl-37756592

ABSTRACT

The prediction of prognostic outcome is critical for the development of efficient cancer therapeutics and potential personalized medicine. However, due to the heterogeneity and diversity of multimodal cancer data, data integration and feature selection remain a challenge for prognostic outcome prediction. We propose CSAM-GAN, a generative adversarial network based on sequential channel-spatial attention modules, as a multimodal data integration and feature selection approach for prognostic stratification tasks in cancer. Sequential channel-spatial attention modules equipped with an encoder-decoder refine the selected input features of the multimodal data, and a discriminator network trains against the generator adversarially so that the model accurately captures the complex heterogeneous information of the multiple data modalities. We conducted extensive experiments with various feature selection and classification methods and confirmed that CSAM-GAN with a multilayer deep neural network (DNN) classifier outperformed these baselines on two multimodal datasets comprising miRNA expression, mRNA expression, and histopathological image data: lower-grade glioma and kidney renal clear cell carcinoma. CSAM-GAN with the multilayer DNN classifier thus bridges the gap between heterogeneous multimodal data and prognostic outcome prediction.


Subjects
Carcinoma, Renal Cell , Glioma , Kidney Neoplasms , MicroRNAs , Humans , Prognosis
6.
Brief Bioinform ; 24(5)2023 09 20.
Article in English | MEDLINE | ID: mdl-37507114

ABSTRACT

Advances in single-cell multi-omics technology provide an unprecedented opportunity to fully understand cellular heterogeneity. However, integrating omics data from multiple modalities is challenging due to the individual characteristics of each measurement. To solve this problem, we propose a contrastive and generative deep self-expression model, called single-cell multimodal self-expressive integration (scMSI), which integrates heterogeneous multimodal data into a unified manifold space. Specifically, scMSI first learns each omics-specific latent representation and self-expression relationship via a deep self-expressive generative model, accounting for the characteristics of the different omics data. Then, scMSI combines these omics-specific self-expression relations through contrastive learning. In this way, scMSI provides a paradigm for integrating multiple omics data even with weak relations, effectively unifying representation learning and data integration in a single framework. We demonstrate that scMSI provides a cohesive solution for a variety of analysis tasks, such as integration analysis, data denoising, batch correction, and spatial domain detection. We have applied scMSI to various single-cell and spatial multimodal datasets to validate its effectiveness and robustness across diverse data types and application scenarios.


Subjects
Learning , Multiomics
7.
Brief Bioinform ; 24(1)2023 01 19.
Article in English | MEDLINE | ID: mdl-36502428

ABSTRACT

Multimodal data fusion for studying the pathogenesis of Alzheimer's disease (AD) has attracted wide attention, but multimodal medical data typically suffer from small sample sizes and high dimensionality. The existing genetic evolution random neural network cluster (GERNNC) model combines a genetic evolution algorithm with neural networks to classify AD patients and extract pathogenic factors. However, it neither accounts for the non-linear relationships between brain regions and genes nor prevents the genetic evolution algorithm from falling into local optima, so its overall performance is unsatisfactory. To solve these two problems, this paper improves the construction of fusion features and the genetic evolution algorithm in the GERNNC model and proposes an improved genetic evolution random neural network cluster (IGERNNC) model. The IGERNNC model uses mutual-information correlation analysis to combine resting-state functional magnetic resonance imaging data with single nucleotide polymorphism data when constructing fusion features. On top of the traditional genetic evolution algorithm, an elite retention strategy and a large-variation genetic algorithm are added to keep the model from converging to local optima. Across multiple independent experimental comparisons, the IGERNNC model identifies AD patients more effectively and extracts relevant pathogenic factors, and is expected to become an effective tool in AD research.
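The two GA modifications named in the abstract, elite retention and large-variation mutation, can be illustrated on a toy 1-D maximization problem. The population size, mutation rates, and fitness function below are illustrative stand-ins, not the IGERNNC configuration.

```python
# Toy genetic algorithm with (a) elite retention: the best individuals pass
# to the next generation unchanged, and (b) occasional "large variation"
# mutations that help escape local optima. All parameters are illustrative.
import random

def fitness(x):
    return -(x - 3.0) ** 2          # toy objective, maximum at x = 3

def evolve(pop, n_gen=200, elite=2, p_big=0.1, seed=0):
    rng = random.Random(seed)
    for _ in range(n_gen):
        pop = sorted(pop, key=fitness, reverse=True)
        next_pop = pop[:elite]                        # elite retention
        while len(next_pop) < len(pop):
            parent = rng.choice(pop[: len(pop) // 2])  # select from top half
            if rng.random() < p_big:
                child = parent + rng.gauss(0, 2.0)    # rare large variation
            else:
                child = parent + rng.gauss(0, 0.1)    # usual small mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

init_rng = random.Random(1)
best = evolve([init_rng.uniform(-10, 10) for _ in range(20)])
```

Because elites are never mutated, the best fitness is monotone non-decreasing, so large mutations add exploration without risking the current optimum.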


Subjects
Alzheimer Disease , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Alzheimer Disease/genetics , Neural Networks, Computer , Algorithms , Brain/diagnostic imaging
8.
Neuroimage ; 285: 120485, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38110045

ABSTRACT

In recent years, deep learning approaches have gained significant attention in predicting brain disorders from neuroimaging data. However, conventional methods often rely on single-modality data and supervised models, which provide only a limited perspective on the intricacies of the highly complex brain. Moreover, the scarcity of accurate diagnostic labels in clinical settings hinders the applicability of supervised models. To address these limitations, we propose a novel self-supervised framework for extracting multiple representations from multimodal neuroimaging data to enhance group inferences and enable analysis without resorting to labeled data during pre-training. Our approach leverages Deep InfoMax (DIM), a self-supervised methodology renowned for its efficacy in learning representations by estimating mutual information without the need for explicit labels. While DIM has shown promise in predicting brain disorders from single-modality MRI data, its potential for multimodal data remains untapped. This work extends DIM to multimodal neuroimaging data, allowing us to identify disorder-relevant brain regions and explore multimodal links. We present compelling evidence of the efficacy of our multimodal DIM analysis in uncovering disorder-relevant brain regions, including the hippocampus, caudate, and insula, and multimodal links with the thalamus, precuneus, and subthalamus hypothalamus. Our self-supervised representations demonstrate promising capabilities in predicting the presence of brain disorders across a spectrum of Alzheimer's phenotypes. Comparative evaluations against state-of-the-art unsupervised methods based on autoencoders, canonical correlation analysis, and supervised models highlight the superiority of our proposed method in classification performance, capture of joint information, and interpretability. The computational efficiency of the decoder-free strategy enhances its practical utility, as it saves compute resources without compromising performance. This work offers a significant step forward in addressing the challenge of understanding multimodal links in complex brain disorders, with potential applications in neuroimaging research and clinical diagnosis.


Subjects
Brain Diseases , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Brain/diagnostic imaging , Multimodal Imaging/methods
9.
Neurobiol Dis ; 190: 106361, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37992784

ABSTRACT

The prefrontal cortex is a crucial regulator of alcohol drinking, dependence, and other behavioral phenotypes associated with alcohol use disorder (AUD). Comprehensive identification of cell-type-specific transcriptomic changes in alcohol dependence will improve our understanding of the mechanisms underlying the excessive alcohol use associated with alcohol dependence and will refine targets for therapeutic development. We performed single-nucleus RNA sequencing (snRNA-seq) and Visium spatial gene expression profiling on the medial prefrontal cortex (mPFC) obtained from C57BL/6J mice exposed to the two-bottle choice-chronic intermittent ethanol (CIE) vapor exposure (2BC-CIE, defined as the dependent group) paradigm, which models phenotypes of alcohol dependence including escalation of alcohol drinking. Gene co-expression network analysis and differential expression analysis identified highly dysregulated co-expression networks in multiple cell types. Dysregulated modules and their hub genes suggest novel, understudied targets for studying the molecular mechanisms contributing to the alcohol-dependent state. A subtype of inhibitory neurons was the most alcohol-sensitive cell type and contained a downregulated gene co-expression module; the hub gene for this module is Cpa6, a gene previously identified by GWAS to be associated with excessive alcohol consumption. We identified an astrocytic Gpc5 module significantly upregulated in the alcohol-dependent group. To our knowledge, there are no previous studies linking Cpa6 and Gpc5 to the alcohol-dependent phenotype. We also identified neuroinflammation-related gene expression changes in multiple cell types, specifically enriched in microglia, further implicating neuroinflammation in the escalation of alcohol drinking. Here, we present a comprehensive atlas of cell-type-specific, alcohol-dependence-mediated gene expression changes in the mPFC and identify novel cell-type-specific targets implicated in alcohol dependence.


Subjects
Alcoholism , Animals , Mice , Alcoholism/genetics , Neuroinflammatory Diseases , Mice, Inbred C57BL , Brain/metabolism , Prefrontal Cortex/metabolism , Ethanol/toxicity
10.
Biol Chem ; 405(6): 427-439, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38651266

ABSTRACT

Integration of multiple data sources presents a challenge for the accurate prediction of molecular patho-phenotypic features in automated analysis of data from human model systems. Here, we applied machine learning-based data integration to distinguish patho-phenotypic features at the subcellular level in dilated cardiomyopathy (DCM). We employed a human induced pluripotent stem cell-derived cardiomyocyte (iPSC-CM) model of a DCM mutation in the sarcomere protein troponin T (TnT), TnT-R141W, compared to isogenic healthy (WT) control iPSC-CMs. We established a multimodal data fusion (MDF)-based analysis to integrate source datasets for Ca2+ transients, force measurements, and contractility recordings. Data were acquired for three model formats: single cells, cell monolayers, and 3D spheroid iPSC-CM models. For data analysis, including numerical conversion and fusion of the Ca2+ transient, force measurement, and contractility data, a non-negative blind deconvolution (NNBD)-based method was applied. Using an XGBoost algorithm, we found high prediction accuracy for the fused single cell, monolayer, and 3D spheroid iPSC-CM models (≥92 ± 0.08 %), as well as for the fused Ca2+ transient, beating force, and contractility models (>96 ± 0.04 %). Integrating MDF and XGBoost provides a highly effective analysis tool for predicting patho-phenotypic features in complex human disease models such as DCM iPSC-CMs.


Subjects
Cardiomyopathy, Dilated , Induced Pluripotent Stem Cells , Machine Learning , Induced Pluripotent Stem Cells/metabolism , Induced Pluripotent Stem Cells/cytology , Induced Pluripotent Stem Cells/pathology , Cardiomyopathy, Dilated/pathology , Cardiomyopathy, Dilated/metabolism , Humans , Phenotype , Myocytes, Cardiac/metabolism , Myocytes, Cardiac/pathology , Troponin T/metabolism , Calcium/metabolism
11.
Brief Bioinform ; 23(4)2022 07 18.
Article in English | MEDLINE | ID: mdl-35679533

ABSTRACT

Patient similarity networks (PSNs), where patients are represented as nodes and their similarities as weighted edges, are being increasingly used in clinical research. These networks provide an insightful summary of the relationships among patients and can be exploited by inductive or transductive learning algorithms for the prediction of patient outcome, phenotype and disease risk. PSNs can also be easily visualized, thus offering a natural way to inspect complex heterogeneous patient data and providing some level of explainability of the predictions obtained by machine learning algorithms. The advent of high-throughput technologies, enabling us to acquire high-dimensional views of the same patients (e.g. omics data, laboratory data, imaging data), calls for the development of data fusion techniques for PSNs in order to leverage this rich heterogeneous information. In this article, we review existing methods for integrating multiple biomedical data views to construct PSNs, together with the different patient similarity measures that have been proposed. We also review methods that have appeared in the machine learning literature but have not yet been applied to PSNs, thus providing a resource to navigate the vast machine learning literature existing on this topic. In particular, we focus on methods that could be used to integrate very heterogeneous datasets, including multi-omics data as well as data derived from clinical information and medical imaging.
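One of the simplest multi-view fusion schemes the review covers, averaging per-view similarities into a single weighted patient graph, can be sketched as follows. The patients, feature vectors, and the choice of cosine similarity are illustrative; real pipelines would normalize each view first.

```python
# Build a patient similarity network (PSN) from two data views (e.g. omics
# and imaging) by computing cosine similarity per view and averaging the
# views into one weighted edge per patient pair. All data are synthetic.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def fuse_views(views):
    """views: list of {patient: feature vector}; returns fused edge weights."""
    patients = sorted(views[0])
    fused = {}
    for i, p in enumerate(patients):
        for q in patients[i + 1:]:
            sims = [cosine(view[p], view[q]) for view in views]
            fused[(p, q)] = sum(sims) / len(sims)   # average across views
    return fused

omics   = {"P1": [1.0, 0.1], "P2": [0.9, 0.2], "P3": [0.1, 1.0]}
imaging = {"P1": [0.8, 0.0], "P2": [1.0, 0.1], "P3": [0.0, 0.9]}
psn = fuse_views([omics, imaging])
```

The fused edge weights can then feed a transductive learner or be thresholded for visualization, as described in the review.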


Subjects
Algorithms , Machine Learning
12.
Brief Bioinform ; 23(1)2022 01 17.
Article in English | MEDLINE | ID: mdl-34585247

ABSTRACT

Single-cell technologies provide new ways to profile the transcriptomic landscape, chromatin accessibility, and spatial expression patterns of heterogeneous tissues at single-cell resolution. With the enormous single-cell datasets being generated, a key analytic challenge is to integrate these datasets to gain biological insights into cellular compositions. Here, we developed a domain-adversarial and variational approximation, DAVAE, which can integrate multiple single-cell datasets across samples, technologies, and modalities with a single strategy. In addition, DAVAE can integrate paired ATAC and transcriptome profiles measured simultaneously from the same cell. With a mini-batch stochastic gradient descent strategy, it is scalable to large-scale data and can be accelerated by GPUs. Results on seven real data integration applications demonstrate the effectiveness and scalability of DAVAE in batch-effect removal, transfer learning, and cell-type prediction for multiple single-cell datasets across samples, technologies, and modalities. Availability: DAVAE has been implemented in the toolkit package "scbean" in the PyPI repository, and the source code is freely accessible at https://github.com/jhu99/scbean. All data and source code for reproducing the results of this paper are accessible at https://github.com/jhu99/davae_paper.


Subjects
Single-Cell Analysis , Software , Algorithms , Chromatin , Transcriptome
13.
J Med Primatol ; 53(4): e12722, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38949157

ABSTRACT

BACKGROUND: Tuberculosis (TB) kills approximately 1.6 million people yearly despite the fact that anti-TB drugs are generally curative. Therefore, TB case detection and monitoring of therapy need a comprehensive approach. Automated radiological analysis, combined with clinical, microbiological, and immunological data through machine learning (ML), can help achieve this. METHODS: Six rhesus macaques were experimentally inoculated with pathogenic Mycobacterium tuberculosis in the lung. Data, including computed tomography (CT), were collected at 0, 2, 4, 8, 12, 16, and 20 weeks. RESULTS: Our ML-based CT analysis (TB-Net) efficiently and accurately analyzed disease progression, performing better than a standard deep learning model (OpenAI's CLIP ViT). TB-Net results were more consistent than, and independently confirmed by, blinded manual disease scoring by two radiologists, and exhibited strong correlations with blood biomarkers, TB lesion volumes, and disease signs during disease pathogenesis. CONCLUSION: The proposed approach is valuable for early disease detection, monitoring the efficacy of therapy, and clinical decision making.


Subjects
Biomarkers , Deep Learning , Macaca mulatta , Mycobacterium tuberculosis , Tomography, X-Ray Computed , Animals , Biomarkers/blood , Tomography, X-Ray Computed/veterinary , Tuberculosis/veterinary , Tuberculosis/diagnostic imaging , Disease Models, Animal , Tuberculosis, Pulmonary/diagnostic imaging , Male , Female , Lung/diagnostic imaging , Lung/pathology , Lung/microbiology , Monkey Diseases/diagnostic imaging , Monkey Diseases/microbiology
14.
Sensors (Basel) ; 24(16)2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39205094

ABSTRACT

Traditional broadcasting methods often result in fatigue and decision-making errors when dealing with complex and diverse live content. Current research on intelligent broadcasting primarily relies on preset rules and model-based decisions, which have limited capabilities for understanding emotional dynamics. To address these issues, this study proposed and developed an emotion-driven intelligent broadcasting system, EmotionCast, to enhance the efficiency of camera switching during live broadcasts through decisions based on multimodal emotion recognition technology. Initially, the system employs sensing technologies to collect real-time video and audio data from multiple cameras, utilizing deep learning algorithms to analyze facial expressions and vocal tone cues for emotion detection. Subsequently, the visual, audio, and textual analyses were integrated to generate an emotional score for each camera. Finally, the score for each camera shot at the current time point was calculated by combining the current emotion score with the optimal scores from the preceding time window. This approach ensured optimal camera switching, thereby enabling swift responses to emotional changes. EmotionCast can be applied in various sensing environments such as sports events, concerts, and large-scale performances. The experimental results demonstrate that EmotionCast excels in switching accuracy, emotional resonance, and audience satisfaction, significantly enhancing emotional engagement compared to traditional broadcasting methods.
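The switching rule described above, combining each camera's current emotion score with the optimal scores from its preceding time window and selecting the top camera, might look roughly like the sketch below. The weighting and window length are guesses for illustration, not EmotionCast's actual parameters.

```python
# Toy camera-switching rule: each camera's combined score blends its current
# emotion score with the best score it achieved in a short preceding window,
# and the highest-scoring camera is selected. Weights/window are illustrative.
from collections import deque

def pick_camera(history, current, window=3, w_now=0.7):
    """history: {camera: deque of past scores}; current: {camera: score}."""
    combined = {}
    for cam, score in current.items():
        past = history.setdefault(cam, deque(maxlen=window))
        best_past = max(past) if past else score
        combined[cam] = w_now * score + (1 - w_now) * best_past
        past.append(score)
    return max(combined, key=combined.get), combined

hist = {}
choice1, _ = pick_camera(hist, {"cam1": 0.2, "cam2": 0.8})  # cam2 leads now
choice2, _ = pick_camera(hist, {"cam1": 0.9, "cam2": 0.3})  # cam1 spikes
```

Blending in the window's best score damps single-frame noise so the system does not flip cameras on every momentary fluctuation.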


Subjects
Algorithms , Emotions , Facial Expression , Emotions/physiology , Humans , Deep Learning , Video Recording/methods
15.
Sensors (Basel) ; 24(12)2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38931497

ABSTRACT

Depression is a major psychological disorder with a growing impact worldwide. Traditional methods for detecting the risk of depression, predominantly reliant on psychiatric evaluations and self-assessment questionnaires, are often criticized for their inefficiency and lack of objectivity. Advancements in deep learning have paved the way for innovations in depression risk detection methods that fuse multimodal data. This paper introduces a novel framework, the Audio, Video, and Text Fusion-Three Branch Network (AVTF-TBN), designed to amalgamate auditory, visual, and textual cues for a comprehensive analysis of depression risk. Our approach encompasses three dedicated branches - Audio Branch, Video Branch, and Text Branch - each responsible for extracting salient features from the corresponding modality. These features are subsequently fused through a multimodal fusion (MMF) module, yielding a robust feature vector that feeds into a predictive modeling layer. To further our research, we devised an emotion elicitation paradigm based on two distinct tasks - reading and interviewing - implemented to gather a rich, sensor-based depression risk detection dataset. The sensory equipment, such as cameras, captures subtle facial expressions and vocal characteristics essential for our analysis. The research thoroughly investigates the data generated by varying emotional stimuli and evaluates the contribution of different tasks to emotion evocation. In our experiments, the AVTF-TBN model performed best when data from the two tasks were used together for detection, achieving an F1 score of 0.78, a precision of 0.76, and a recall of 0.81. Our experimental results confirm the validity of the paradigm and demonstrate the efficacy of the AVTF-TBN model in detecting depression risk, showcasing the crucial role of sensor-based data in mental health detection.


Subjects
Depression , Humans , Depression/diagnosis , Video Recording , Emotions/physiology , Deep Learning , Facial Expression , Female , Male , Adult , Neural Networks, Computer
16.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732857

ABSTRACT

This study presents a pioneering approach that leverages advanced sensing technologies and data processing techniques to enhance the process of clinical documentation generation during medical consultations. By employing sophisticated sensors to capture and interpret various cues such as speech patterns, intonations, or pauses, the system aims to accurately perceive and understand patient-doctor interactions in real time. This sensing capability allows for the automation of transcription and summarization tasks, facilitating the creation of concise and informative clinical documents. Through the integration of automatic speech recognition sensors, spoken dialogue is seamlessly converted into text, enabling efficient data capture. Additionally, deep models such as Transformer models are utilized to extract and analyze crucial information from the dialogue, ensuring that the generated summaries encapsulate the essence of the consultations accurately. Despite encountering challenges during development, experimentation with these sensing technologies has yielded promising results. The system achieved a maximum ROUGE-1 metric score of 0.57, demonstrating its effectiveness in summarizing complex medical discussions. This sensor-based approach aims to alleviate the administrative burden on healthcare professionals by automating documentation tasks and safeguarding important patient information. Ultimately, by enhancing the efficiency and reliability of clinical documentation, this innovative method contributes to improving overall healthcare outcomes.


Subjects
Deep Learning , Humans , Speech Recognition Software
17.
Biom J ; 66(2): e2300037, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38368275

ABSTRACT

Conventional canonical correlation analysis (CCA) measures the association between two datasets and identifies relevant contributors. However, it encounters issues with execution and interpretation when the sample size is smaller than the number of variables or there are more than two datasets. Our motivating example is a stroke-related clinical study on pigs. The data are multimodal and consist of measurements taken at multiple time points and have many more variables than observations. This study aims to uncover important biomarkers and stroke recovery patterns based on physiological changes. To address the issues in the data, we develop two sparse CCA methods for multiple datasets. Various simulated examples are used to illustrate and contrast the performance of the proposed methods with that of the existing methods. In analyzing the pig stroke data, we apply the proposed sparse CCA methods along with dimension reduction techniques, interpret the recovery patterns, and identify influential variables in recovery.


Subjects
Genomics , Stroke , Animals , Swine , Genomics/methods , Canonical Correlation Analysis , Algorithms
18.
J Environ Manage ; 367: 122048, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39088903

ABSTRACT

Monitoring suspended sediment concentration (SSC) in rivers is pivotal for water quality management and sustainable river ecosystem development. However, achieving continuous and precise SSC monitoring is fraught with challenges, including low automation, lengthy measurement processes, and high cost. This study proposes an innovative approach for SSC identification in rivers using multimodal data fusion. We developed a robust model by harnessing colour features from video images, motion characteristics from the Lucas-Kanade (LK) optical flow method, and temperature data. By integrating ResNet with a mixed density network (MDN), our method fused the image and optical flow fields, and temperature data to enhance accuracy and reliability. Validated at a hydropower station in the Xinjiang Uygur Autonomous Region, China, the results demonstrated that while the image field alone offers a baseline level of SSC identification, it experiences local errors under specific conditions. The incorporation of optical flow and water temperature information enhanced model robustness, particularly when coupling the image and optical flow fields, yielding a Nash-Sutcliffe efficiency (NSE) of 0.91. Further enhancement was observed with the combined use of all three data types, attaining an NSE of 0.93. This integrated approach offers a more accurate SSC identification solution, enabling non-contact, low-cost measurements, facilitating remote online monitoring, and supporting water resource management and river water-sediment element monitoring.
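The Nash-Sutcliffe efficiency reported above (0.91 with two fields, 0.93 with three) has a standard textbook definition, sketched here with synthetic observed and simulated SSC series for illustration.

```python
# Nash-Sutcliffe efficiency: 1 - (residual sum of squares) / (variance of
# the observations about their mean). 1.0 is a perfect match; values at or
# below 0 mean the model is no better than predicting the observed mean.

def nse(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [10.0, 12.0, 15.0, 11.0, 9.0]   # synthetic measured SSC series
sim = [10.5, 11.5, 14.0, 11.5, 9.5]   # synthetic model output
score = nse(obs, sim)
perfect = nse(obs, obs)
```

Because NSE is normalized by the observations' own variance, it lets the paper compare the image-only, image-plus-optical-flow, and three-field models on a common scale.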


Subjects
Environmental Monitoring, Rivers, Temperature, Rivers/chemistry, Environmental Monitoring/methods, Geologic Sediments/analysis, China, Water Quality
19.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(3): 485-493, 2024 Jun 25.
Article in Chinese | MEDLINE | ID: mdl-38932534

ABSTRACT

Alzheimer's disease (AD) is a progressive neurodegenerative disorder. Because symptoms in the early stages of AD are subtle, rapid and accurate clinical diagnosis is challenging, leading to a high rate of misdiagnosis. Current research on early diagnosis of AD has not sufficiently focused on tracking disease progression in subjects over an extended period. To address this issue, this paper proposes an ensemble model for assisting early diagnosis of AD that combines structural magnetic resonance imaging (sMRI) data from two time points with clinical information. The model employs a three-dimensional convolutional neural network (3DCNN) and twin neural network modules to extract features from the sMRI data at the two time points, while a multi-layer perceptron (MLP) models the subjects' clinical information. The objective is to extract as many AD-related features from the multimodal data as possible, thereby enhancing the diagnostic performance of the ensemble model. Experimental results show that, based on this model, classification accuracy is 89% for differentiating AD patients from normal controls (NC), 88% for differentiating mild cognitive impairment converting to AD (MCIc) from NC, and 69% for distinguishing non-converting mild cognitive impairment (MCInc) from MCIc, confirming the effectiveness and efficiency of the proposed method for early diagnosis of AD and its potential to play a supportive role in clinical diagnosis.
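The two-time-point fusion described above can be illustrated with a toy late-fusion pipeline: twin branches share one feature extractor, the longitudinal change enters as a feature difference, and clinical variables are concatenated before a classification head. Everything here, including `extract_features`, is a hypothetical stand-in for the paper's learned 3DCNN and MLP.

```python
import numpy as np

def extract_features(volume):
    """Stand-in for the shared 3DCNN branch: crude summary statistics of an
    sMRI volume (hypothetical; the paper learns these features end to end)."""
    return np.array([volume.mean(), volume.std(), np.percentile(volume, 90)])

def fuse(vol_t0, vol_t1, clinical):
    """Twin (weight-sharing) branches encode both time points; the
    longitudinal change is captured as a feature difference, and the
    clinical variables are appended before the classification head."""
    f0, f1 = extract_features(vol_t0), extract_features(vol_t1)
    return np.concatenate([f0, f1 - f0, np.asarray(clinical, dtype=float)])

rng = np.random.default_rng(0)
vol_t0 = rng.normal(size=(16, 16, 16))                       # baseline scan
vol_t1 = vol_t0 + rng.normal(scale=0.1, size=(16, 16, 16))   # follow-up scan
x = fuse(vol_t0, vol_t1, clinical=[72.0, 1.0, 27.0])         # e.g. age, sex, MMSE
```

The fused vector `x` would feed the downstream classifier; modeling the change between scans explicitly, rather than each scan in isolation, is what lets the ensemble exploit the longitudinal signal.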


Subjects
Alzheimer Disease, Early Diagnosis, Magnetic Resonance Imaging, Neural Networks (Computer), Alzheimer Disease/diagnostic imaging, Alzheimer Disease/diagnosis, Humans, Magnetic Resonance Imaging/methods, Disease Progression, Algorithms
20.
Lab Invest ; 103(11): 100255, 2023 11.
Article in English | MEDLINE | ID: mdl-37757969

ABSTRACT

Digital pathology has transformed the traditional pathology practice of analyzing tissue under a microscope into a computer vision workflow. Whole-slide imaging allows pathologists to view and analyze microscopic images on a computer monitor, enabling computational pathology. By leveraging artificial intelligence (AI) and machine learning (ML), computational pathology has emerged as a promising field in recent years. Recently, task-specific AI/ML (eg, convolutional neural networks) has risen to the forefront, achieving above-human performance in many image-processing and computer vision tasks. The performance of task-specific AI/ML models depends on the availability of many annotated training datasets, which presents a rate-limiting factor for AI/ML development in pathology. Task-specific AI/ML models cannot benefit from multimodal data and lack generalization, eg, the AI models often struggle to generalize to new datasets or unseen variations in image acquisition, staining techniques, or tissue types. The 2020s are witnessing the rise of foundation models and generative AI. A foundation model is a large AI model trained using sizable data, which is later adapted (or fine-tuned) to perform different tasks using a modest amount of task-specific annotated data. These AI models provide in-context learning, can self-correct mistakes, and promptly adjust to user feedback. In this review, we provide a brief overview of recent advances in computational pathology enabled by task-specific AI, their challenges and limitations, and then introduce various foundation models. We propose to create a pathology-specific generative AI based on multimodal foundation models and present its potentially transformative role in digital pathology. 
We describe different use cases, delineating how it could serve as an expert companion for pathologists and help them perform routine laboratory tasks efficiently and objectively, including quantitative image analysis, pathology report generation, diagnosis, and prognosis. We also outline the potential role that foundation models and generative AI can play in standardizing the pathology laboratory workflow, education, and training.


Subjects
Artificial Intelligence, Machine Learning, Pathology, Humans, Image Processing (Computer-Assisted), Neural Networks (Computer), Pathologists, Pathology/trends