Results 1 - 20 of 74
1.
Molecules ; 28(19)2023 Sep 30.
Article in English | MEDLINE | ID: mdl-37836728

ABSTRACT

Infrared (IR) spectroscopy has greatly improved the ability to study biomedical samples because it measures how molecules interact with infrared light, probing the vibrational states of the molecules. The resulting IR spectrum therefore provides a unique vibrational fingerprint of the sample. This characteristic makes IR spectroscopy an invaluable and versatile technology for detecting a wide variety of chemicals, and it is widely used in biological, chemical, and medical scenarios, including, but not limited to, micro-organism identification, clinical diagnosis, and explosive detection. However, IR spectroscopy is susceptible to various interfering factors such as scattering, reflection, and interference, which manifest themselves as baselines, band distortions, and intensity changes in the measured IR spectra. Combined with the absorption information of the molecules of interest, these interferences prevent direct data interpretation based on the Beer-Lambert law. Instead, more advanced data analysis approaches, particularly artificial intelligence (AI)-based algorithms, are required to remove the interfering contributions and, more importantly, to translate the spectral signals into high-level biological/chemical information. This leads to the tasks of spectral pre-processing and data modeling, the main topics of this review. In particular, we discuss recent developments in both tasks from the perspectives of classical machine learning and deep learning.
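
For reference, the Beer-Lambert law invoked above relates measured absorbance to analyte concentration. In LaTeX notation (a standard statement of the law, not specific to this review):

A = \log_{10}\!\left(\frac{I_0}{I}\right) = \varepsilon \, \ell \, c

where I_0 and I are the incident and transmitted intensities, \varepsilon is the molar absorptivity, \ell the optical path length, and c the concentration. Scattering and reflection add baseline terms on top of A, which is why pre-processing must remove them before the law can be applied directly.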

2.
Sensors (Basel) ; 22(11)2022 Jun 03.
Article in English | MEDLINE | ID: mdl-35684887

ABSTRACT

Integrated logistics support (ILS) is of great significance for maintaining equipment operational capability throughout the whole lifecycle. The equipment ILS process involves numerous segments and complex product objects, which gives ILS data multi-source, heterogeneous, and multidimensional characteristics. In its present form, ILS data cannot satisfy the demand for efficient utilization, so unified modeling of ILS data is both urgent and significant. In this paper, a unified data modeling method is proposed to solve the problem of consistent and comprehensive expression of ILS data. Firstly, a four-tier unified data modeling framework is constructed based on an analysis of ILS data characteristics. Secondly, the Core unified data model, Domain unified data model, and Instantiated unified data model are built successively. Then, the expression of ILS data in the three dimensions of time, product, and activity is analyzed. Thirdly, the Lifecycle ILS unified data model is constructed, and multidimensional information retrieval methods are discussed. On this basis, different systems in the equipment ILS process can share a single set of data models and provide ILS designers with relevant data through different views. Finally, practical ILS data models are constructed with the developed unified data modeling software prototype, which verifies the feasibility of the proposed method.
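
As a rough sketch of how such a layered unified data model might look in code (all class and field names here are hypothetical illustrations, not taken from the paper), in Python:

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CoreEntity:
    # Core unified data model: generic, domain-agnostic record
    entity_id: str
    attributes: Dict[str, str] = field(default_factory=dict)

@dataclass
class DomainEntity(CoreEntity):
    # Domain unified data model: adds ILS-domain semantics
    domain: str = "ILS"

@dataclass
class InstantiatedEntity(DomainEntity):
    # Instantiated unified data model: concrete data along the
    # time / product / activity dimensions the paper analyzes
    product: str = ""
    activity: str = ""
    timestamp: str = ""

# A lifecycle view is then simply a collection of instantiated records
# that different ILS systems can share and query through different views.
lifecycle: List[InstantiatedEntity] = [
    InstantiatedEntity("e1", {"status": "in-service"},
                       product="pump-A", activity="maintenance",
                       timestamp="2022-06-03"),
]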


Subjects
Information Storage and Retrieval, Software
3.
J Comput Inf Sci Eng ; 22(6)2022 Dec.
Article in English | MEDLINE | ID: mdl-37720111

ABSTRACT

Recently, the number and types of measurement devices that collect data used to monitor Laser-Based Powder Bed Fusion of Metals processes and to inspect Additive Manufacturing (AM) metal parts have increased rapidly. Each measurement device generates data in its own coordinate system and its own format. Data alignment is the process of spatially aligning different datasets to a single coordinate system; it is part of a broader process called "Data Registration". This paper provides a data-registration procedure and includes an example of aligning data to a single reference coordinate system. Such a reference coordinate system is needed for downstream applications, including data analytics, artificial intelligence, and part qualification.
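
A minimal sketch of the alignment step itself, mapping points from a device's coordinate system into a shared reference frame by a rigid rotation and translation (toy numbers; the paper's registration procedure is more involved):

import numpy as np

# Measurement points in one device's own coordinate system (hypothetical)
points = np.array([[1.0, 0.0, 0.0],
                   [0.0, 2.0, 0.5]])

theta = np.deg2rad(30.0)  # rotation about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([5.0, -2.0, 0.1])  # translation to the reference origin

aligned = points @ R.T + t  # points expressed in the reference coordinate system
print(aligned)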

4.
J Biomed Inform ; 114: 103670, 2021 02.
Article in English | MEDLINE | ID: mdl-33359548

ABSTRACT

With the extensive adoption of electronic health records (EHRs) by healthcare organizations, more effort is needed to manage and utilize such massive, varied, and complex healthcare data. A database's performance and suitability for health care tasks depend strongly on how well its data storage model and query capabilities are adapted to the use-case scenario. At the same time, standardized healthcare data modeling is one of the most promising paths toward semantic interoperability, facilitating the integration of patient data from different healthcare systems. This paper compares the state of the art of the most important database management systems used for storing standardized EHR data. It discusses the appropriateness of different database models for meeting different EHR functions under different database specifications and workload scenarios. Insights from the relevant literature show how flexible NoSQL databases (document, column, and graph) deal effectively with the distinctive features of standardized EHR data, especially in distributed healthcare systems, leading to better EHR systems.
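
To make the document-model idea concrete, here is a hypothetical EHR record shaped as one nested document, the form a document store would hold; the field names are illustrative and not drawn from any specific EHR standard:

# A single patient record stored as one self-contained document
ehr_document = {
    "patient_id": "P-0001",
    "demographics": {"birth_year": 1980, "sex": "F"},
    "encounters": [
        {
            "date": "2020-11-05",
            "diagnoses": ["I10"],  # coded diagnoses for this visit
            "observations": [
                {"code": "8480-6", "value": 138, "unit": "mmHg"},
            ],
        },
    ],
}
# A document database reads and writes the whole record as one unit,
# avoiding the joins a normalized relational schema would require.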


Subjects
Database Management Systems, Electronic Health Records, Databases, Factual, Delivery of Health Care, Humans, Information Storage and Retrieval
5.
J Med Internet Res ; 23(4): e26075, 2021 04 28.
Article in English | MEDLINE | ID: mdl-33835931

ABSTRACT

BACKGROUND: In the face of the current COVID-19 pandemic, timely prediction of the upcoming medical needs of infected individuals enables better and quicker care provision when necessary, as well as management decisions within health care systems. OBJECTIVE: This work aims to predict the medical needs (hospitalizations, intensive care unit admissions, and respiratory assistance) and survivability of individuals testing positive for SARS-CoV-2 infection in Portugal. METHODS: A retrospective cohort of 38,545 individuals infected during 2020 was used. Predictions of medical needs were performed using state-of-the-art machine learning approaches at various stages of a patient's cycle, namely, at testing (prehospitalization), at posthospitalization, and during postintensive care. A thorough optimization of state-of-the-art predictors was undertaken to assess the ability to anticipate medical needs and infection outcomes using demographic and comorbidity variables, as well as dates associated with symptom onset, testing, and hospitalization. RESULTS: For the target cohort, 75% of hospitalization needs could be identified at the time of testing for SARS-CoV-2 infection, and over 60% of respiratory needs could be identified at the time of hospitalization. Both predictions had >50% precision. CONCLUSIONS: The study pinpoints the relevance of the proposed predictive models as good candidates to support medical decisions in the Portuguese population, including both monitoring and in-hospital care decisions. A clinical decision support system is further provided to this end.
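
A minimal sketch of the kind of pre-hospitalization classifier described, trained on demographic and comorbidity variables (synthetic data and invented features; the authors' actual models and tuning are not reproduced here):

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 3))  # e.g., scaled age, sex, comorbidity count (synthetic)
# Synthetic hospitalization label loosely driven by the first and third features
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.2, 1000) > 0.9).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("precision:", precision_score(y_te, clf.predict(X_te)))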


Subjects
COVID-19/therapy, Hospitalization/statistics & numerical data, Intensive Care Units/statistics & numerical data, Respiration, Artificial/statistics & numerical data, Adolescent, Adult, Aged, Aged, 80 and over, COVID-19/epidemiology, Child, Child, Preschool, Cohort Studies, Female, Humans, Infant, Infant, Newborn, Longitudinal Studies, Male, Middle Aged, Pandemics, Portugal/epidemiology, Retrospective Studies, SARS-CoV-2/isolation & purification, Young Adult
6.
BMC Med Inform Decis Mak ; 21(1): 200, 2021 06 28.
Article in English | MEDLINE | ID: mdl-34182974

ABSTRACT

Postoperative complications are still hard to predict despite efforts toward the creation of clinical risk scores. The published scores contribute to the creation of specialized tools, but with limited predictive performance and reusability for implementation in the oncological context. This work aims to predict postoperative complication risk for cancer patients, offering two major contributions. First, to develop and evaluate a machine learning-based risk score specific to the Portuguese population, using a retrospective cohort of 847 cancer patients undergoing surgery between 2016 and 2018, for four outcomes of interest: (1) the existence of postoperative complications, (2) the severity level of complications, (3) the number of days in the Intermediate Care Unit (ICU), and (4) postoperative mortality within 1 year. An additional cohort of 137 cancer patients from the same center was used for validation. Second, to improve the interpretability of the predictive models. To achieve these objectives, we propose an approach for learning risk predictors, offering new perspectives and insights into the clinical decision process. On the development cohort, the area under the receiver operating characteristic curve (AUC) was 0.69 for postoperative complications, 0.65 for complication severity, and 0.74 for 1-year postoperative mortality, while the mean absolute error for days in the ICU was 1.07 days. In this study, predictive models that could help guide physicians in organizational and clinical decision making were developed. Additionally, a web-based decision support tool is further provided to this end.
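
The reported discrimination metric is the area under the ROC curve, which can be computed from predicted risks as in this toy sketch:

from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1]  # observed complication outcomes (toy data)
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]  # model-predicted risks
# 1.0 = perfect ranking of cases above non-cases, 0.5 = chance level
print("AUC:", roc_auc_score(y_true, y_score))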


Subjects
Neoplasms, Postoperative Complications, Cohort Studies, Humans, Neoplasms/surgery, Portugal/epidemiology, Postoperative Complications/epidemiology, ROC Curve, Retrospective Studies
7.
Anal Bioanal Chem ; 411(10): 2223-2237, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30879117

ABSTRACT

Dielectrophoresis (DEP) enables high-resolution separations of cells and other bioparticles arising from very subtle differences in their properties. However, an unanticipated limitation has arisen: the difficulty of assigning the specific biological features that vary between two cell populations, which hampers the ability to interpret the significance of the variations. To realize the opportunities made possible by dielectrophoresis, the data must be linked to the diversity of structures found in cells and bioparticles. While the crossover frequency in DEP has been studied in depth and exploited in applications using AC fields, less attention has been given to the case where a DC field is present. Here, a new mathematical model of dielectrophoretic data is introduced which connects the physical properties of cells to specific elements of the data from potential- or time-varied DEP experiments. The slope of the data in either analysis is related to the electrokinetic mobility, while the potential at which capture initiates in potential-based analysis is related to both the electrokinetic and dielectrophoretic mobilities. These mobilities can be assigned to cellular properties for which values appear in the literature. Representative examples of high and low values of properties such as conductivity, zeta potential, and surface charge density are considered for bacteria including Streptococcus mutans, Rhodococcus erythropolis, Pasteurella multocida, Escherichia coli, and Staphylococcus aureus. While the many properties of a cell collapse into one or two features of the data, for a well-vetted system the model can indicate the extent of dissimilarity. The influence of individual properties on the features of dielectrophoretic data is summarized, allowing for further interpretation of data.
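
As a schematic illustration only (made-up data; the paper's actual equations mapping slope and capture onset to the two mobilities are not reproduced here), a linear fit to potential-varied capture data recovers the slope that the model relates to electrokinetic mobility and the potential at which capture initiates:

import numpy as np

potential = np.array([10.0, 20.0, 30.0, 40.0])  # applied potential (V), hypothetical
capture = np.array([0.05, 0.22, 0.41, 0.58])    # captured fraction, hypothetical
slope, intercept = np.polyfit(potential, capture, 1)
onset = -intercept / slope  # potential at which capture initiates
print(f"slope = {slope:.4f} per V, capture onset ~ {onset:.1f} V")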


Subjects
Algorithms, Bacteria/chemistry, Electrophoresis/methods, Bacteria/cytology, Bacteria/isolation & purification, Electric Conductivity, Electroosmosis, Kinetics, Models, Biological, Models, Chemical, Static Electricity, Surface Properties
8.
Curr Genomics ; 20(2): 90-99, 2019 Feb.
Article in English | MEDLINE | ID: mdl-31555060

ABSTRACT

After the sequencing of the human genome and rapid changes in genome-sequencing methods, we have entered an era of rapidly accumulating genome-sequencing data. This has driven the development of several types of methods for representing the results of genome-sequencing data. Circular genome visualization tools are also critical in this area, as they provide rapid interpretation and simple visualization of the overall data. In the last 15 years, following the development of the Circos tool, circular visualization tools have changed rapidly, with one to two tools published per year. Herein we summarize and revisit all these tools through the third quarter of 2018.

9.
BMC Med Inform Decis Mak ; 19(1): 92, 2019 04 25.
Article in English | MEDLINE | ID: mdl-31023322

ABSTRACT

BACKGROUND: Maintaining physical fitness is a crucial component of the therapeutic process for patients with cardiovascular disease (CVD). Despite the known importance of being physically active, patient adherence to exercise, both in daily life and during cardiac rehabilitation (CR), is low. Patient adherence is shaped by numerous determinants associated with different patient aspects (e.g., psychological, clinical, etc.). Understanding the influence of such determinants is a central component of developing personalized interventions to improve or maintain patient adherence. Medical research has produced evidence regarding factors affecting patients' adherence to physical activity regimens. However, the heterogeneity of the available data is a significant challenge for knowledge reusability. Ontologies constitute one of the methods applied for efficient knowledge sharing and reuse. In this paper, we propose an ontology called OPTImAL, focusing on CVD patient adherence to physical activity and exercise training. METHODS: OPTImAL was developed following the Ontology Development 101 methodology and refined based on the NeOn framework. First, we defined the ontology specification (i.e., purpose, scope, target users, etc.). Then, we elicited domain knowledge from published studies. Further, the model was conceptualized, formalized, and implemented, and the developed ontology was validated for its consistency. An independent cardiologist and three CR trainers evaluated the ontology for its appropriateness and usefulness. RESULTS: We developed a formal model that includes 142 classes, ten object properties, and 371 individuals, describing the relations of different factors of the CVD patient profile to adherence and adherence quality, as well as the associated types and dimensions of physical activity and exercise. A total of 2637 logical axioms were constructed to comprise the overall concepts that the ontology defines. The ontology was successfully validated for its consistency and preliminarily evaluated for its appropriateness and usefulness in medical practice. CONCLUSIONS: OPTImAL describes the relations of 320 factors originating from 60 multidimensional aspects (e.g., social, clinical, psychological, etc.) that affect CVD patient adherence to physical activity and exercise. The formal model is evidence-based and can serve as a knowledge tool in the practice of cardiac rehabilitation experts, supporting the process of activity regimen recommendation for better patient adherence.
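
As a toy illustration of how class / object property / individual structures like these are encoded (using rdflib with invented names, not the actual OPTImAL vocabulary):

from rdflib import Graph, Namespace, OWL, RDF, RDFS

EX = Namespace("http://example.org/optimal#")  # placeholder namespace
g = Graph()
g.add((EX.Patient, RDF.type, OWL.Class))
g.add((EX.AdherenceFactor, RDF.type, OWL.Class))
g.add((EX.hasFactor, RDF.type, OWL.ObjectProperty))
g.add((EX.hasFactor, RDFS.domain, EX.Patient))
g.add((EX.hasFactor, RDFS.range, EX.AdherenceFactor))
g.add((EX.patient1, RDF.type, EX.Patient))  # an individual
g.add((EX.patient1, EX.hasFactor, EX.lowSelfEfficacy))
print(g.serialize(format="turtle"))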


Subjects
Exercise, Models, Theoretical, Patient Compliance, Cardiac Rehabilitation, Cardiovascular Diseases, Female, Health Behavior, Humans, Male
10.
BMC Bioinformatics ; 19(1): 87, 2018 03 07.
Article in English | MEDLINE | ID: mdl-29514626

ABSTRACT

BACKGROUND: DNA methylation is a stable form of epigenetic memory used by cells to control gene expression. Whole genome bisulfite sequencing (WGBS) has emerged as a gold-standard experimental technique for studying DNA methylation by producing high-resolution genome-wide methylation profiles. Statistical modeling and analysis are employed to computationally extract and quantify information from these profiles in an effort to identify regions of the genome that demonstrate crucial or aberrant epigenetic behavior. However, the performance of most currently available methods for methylation analysis is hampered by their inability to directly account for statistical dependencies between neighboring methylation sites, thus ignoring significant information available in WGBS reads. RESULTS: We present a powerful information-theoretic approach for genome-wide modeling and analysis of WGBS data based on the 1D Ising model of statistical physics. This approach accounts for correlations in methylation by utilizing a joint probability model that encapsulates all information available in WGBS methylation reads and produces accurate results even when applied to single WGBS samples with low coverage. Using the Shannon entropy, our approach provides a rigorous quantification of methylation stochasticity in individual WGBS samples genome-wide. Furthermore, it utilizes the Jensen-Shannon distance to evaluate differences in methylation distributions between a test and a reference sample. Differential performance assessment using simulated and real human lung normal/cancer data demonstrates a clear superiority of our approach over DSS, a recently proposed method for WGBS data analysis. Critically, these results demonstrate that marginal methods become statistically invalid when correlations are present in the data. CONCLUSIONS: This contribution demonstrates the clear benefits and necessity of modeling joint probability distributions of methylation using the 1D Ising model of statistical physics and of quantifying methylation stochasticity using concepts from information theory. By employing this methodology, substantial improvement of DNA methylation analysis can be achieved by effectively taking into account the massive amount of statistical information available in WGBS data, which is largely ignored by existing methods.
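
The two information-theoretic quantities the method relies on are straightforward to compute; a minimal sketch with toy methylation-level distributions (not the paper's Ising-model-based estimates):

import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import entropy

p = np.array([0.7, 0.2, 0.1])  # toy methylation-level distribution, test sample
q = np.array([0.3, 0.4, 0.3])  # toy distribution, reference sample

print("Shannon entropy of p (bits):", entropy(p, base=2))
print("Jensen-Shannon distance:", jensenshannon(p, q, base=2))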


Subjects
Information Theory, Models, Theoretical, Statistics as Topic, Sulfites/chemistry, Whole Genome Sequencing/methods, Base Sequence, Computer Simulation, CpG Islands/genetics, DNA Methylation/genetics, Entropy, Epigenesis, Genetic, Gene Ontology, Genome, Human, Humans, Lung Neoplasms/genetics, Probability, Web Browser
11.
BMC Bioinformatics ; 18(1): 521, 2017 Nov 25.
Article in English | MEDLINE | ID: mdl-29178831

ABSTRACT

BACKGROUND: Chromatin immunoprecipitation followed by DNA sequencing (ChIP-seq) and associated methods are widely used to define the genome-wide distribution of chromatin-associated proteins, post-translational epigenetic marks, and modifications found on DNA bases. An area of emerging interest is the study of time-dependent changes in the distribution of such proteins and marks by means of serial ChIP-seq experiments performed in a time-resolved manner. Although such time-resolved studies are becoming increasingly common, software to facilitate the analysis of such data in a robust, automated manner is limited. RESULTS: We have designed software called the Time-Dependent ChIP-Sequencing Analyser (TDCA), the first program to automate the analysis of time-dependent ChIP-seq data by fitting sigmoidal curves. We provide users with guidance on experimental design for TDCA modeling of time-course (TC) ChIP-seq data using two simulated data sets. Furthermore, we demonstrate that this fitting strategy is widely applicable by showing that automated analysis of three previously published TC data sets accurately recapitulates key findings reported in those studies. Using each of these data sets, we highlight how biologically relevant findings can be readily obtained by exploiting TDCA to yield intuitive parameters that describe behavior at either a single locus or sets of loci. TDCA enables customizable analysis of user-input aligned DNA sequencing data, coupled with graphical outputs in the form of publication-ready figures that describe behavior at either individual loci or sets of loci sharing common traits defined by the user. TDCA accepts sequencing data as standard binary alignment map (BAM) files and loci of interest in browser extensible data (BED) file format. CONCLUSIONS: TDCA accurately models the number of sequencing reads, or coverage, at loci from TC ChIP-seq studies or conceptually related TC sequencing experiments. TC experiments are reduced to intuitive parametric values that facilitate biologically relevant data analysis and the uncovering of variations in the time-dependent behavior of chromatin. TDCA automates the analysis of TC ChIP-seq experiments, permitting researchers to easily obtain raw and modeled data for specific loci or groups of loci with similar behavior, while also enhancing the consistency of TC data analysis within the genomics field.
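
A minimal sketch of the sigmoidal fitting strategy that TDCA automates, applied to toy time-course coverage values with scipy (TDCA itself consumes BAM/BED input and produces figures; this shows only the core curve fit):

import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, bottom, top, t50, slope):
    # Four-parameter logistic: baseline, plateau, midpoint time, steepness
    return bottom + (top - bottom) / (1.0 + np.exp(-(t - t50) / slope))

t = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])  # time points (toy)
coverage = np.array([2.0, 3.0, 8.0, 20.0, 34.0, 38.0, 40.0])  # coverage at one locus (toy)

params, _ = curve_fit(sigmoid, t, coverage, p0=[2.0, 40.0, 15.0, 3.0])
print("fitted [bottom, top, t50, slope]:", params)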


Subjects
Chromatin Immunoprecipitation/methods, High-Throughput Nucleotide Sequencing/methods, Software, Algorithms, Animals, Cell Line, Chromosomes/chemistry, Chromosomes/metabolism, DNA/chemistry, DNA/isolation & purification, DNA/metabolism, DNA-Binding Proteins/chemistry, DNA-Binding Proteins/genetics, DNA-Binding Proteins/metabolism, Histones/chemistry, Histones/genetics, Histones/metabolism, Humans, Saccharomyces cerevisiae/metabolism, Saccharomyces cerevisiae Proteins/chemistry, Saccharomyces cerevisiae Proteins/genetics, Saccharomyces cerevisiae Proteins/metabolism, Sequence Analysis, DNA, Transcription Factors/chemistry, Transcription Factors/genetics, Transcription Factors/metabolism
12.
Methods ; 111: 3-11, 2016 12 01.
Article in English | MEDLINE | ID: mdl-27637471

ABSTRACT

While a huge amount of (epi)genomic data of multiple types is becoming available through Next Generation Sequencing (NGS) technologies, the most important emerging problem is so-called tertiary analysis, concerned with sense making, e.g., discovering how different (epi)genomic regions and their products interact and cooperate with each other. We propose a paradigm shift in tertiary analysis based on the use of the Genomic Data Model (GDM), a simple data model which links genomic feature data to their associated experimental, biological, and clinical metadata. GDM encompasses all the data formats which have been produced for feature extraction from (epi)genomic datasets. We specifically describe the mapping to GDM of the SAM (Sequence Alignment/Map), VCF (Variant Call Format), NARROWPEAK (for called peaks produced by NGS ChIP-seq or DNase-seq methods), and BED (Browser Extensible Data) formats, but GDM supports as well all the formats describing experimental datasets (e.g., including copy number variations, DNA somatic mutations, or gene expression) and annotations (e.g., regarding transcription start sites, genes, enhancers, or CpG islands). We downloaded and integrated samples of all the above-mentioned data types and formats from multiple sources. GDM is able to homogeneously describe semantically heterogeneous data and lays the groundwork for data interoperability, e.g., as achieved through the GenoMetric Query Language (GMQL), a high-level, declarative query language for genomic big data. The combined use of the data model and the query language allows comprehensive processing of multiple heterogeneous data and supports the development of domain-specific, data-driven computations and bio-molecular knowledge discovery.
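
A sketch of the GDM idea, pairing a region-level record with sample metadata, here populated from one BED-format line (simplified; GDM's actual schema is richer):

# One BED line: chromosome, start, stop, plus optional name and score fields
bed_line = "chr1\t1000\t5000\tpeak1\t960"
chrom, start, stop, name, score = bed_line.split("\t")

region = {"chrom": chrom, "start": int(start), "stop": int(stop),
          "name": name, "score": float(score)}
metadata = {"assay": "ChIP-seq", "cell_line": "K562"}  # illustrative metadata
sample = {"region": region, "metadata": metadata}  # a GDM-style region/metadata pairing
print(sample)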


Subjects
Data Mining/methods, Genomics/methods, Sequence Analysis, DNA/methods, Software, DNA Copy Number Variations/genetics, High-Throughput Nucleotide Sequencing/methods, Humans, Regulatory Sequences, Nucleic Acid/genetics, Sequence Alignment/methods, Transcription Initiation Site
13.
Health Econ ; 26(12): 1505-1523, 2017 12.
Article in English | MEDLINE | ID: mdl-27747997

ABSTRACT

This study explores the effects of widowhood on mental health, taking into account anticipation of and adaptation to the partner's death. The empirical analysis uses representative panel data from the USA linked to administrative death records of the National Death Index. I estimate static and dynamic specifications of the panel probit model in which unobserved heterogeneity is modeled with correlated random effects. I find strong anticipation effects of the partner's death on the probability of depression, implying that the partner's death cannot be assumed to be an exogenous event in econometric models. In the absence of any anticipation effects, the partner's death has long-lasting mental health consequences, leading to significantly slower adaptation to widowhood. The results suggest that both anticipation and adaptation effects can be attributed to a caregiver burden and to the cause of death. The findings of this study have important implications for designing adequate social policies for the elderly US population that alleviate the negative consequences of bereavement.
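
For orientation, the (static) correlated random-effects panel probit can be written in the standard Mundlak-style form (a textbook formulation, not necessarily the paper's exact specification):

\Pr(y_{it} = 1 \mid x_{it}, \alpha_i) = \Phi(x_{it}'\beta + \alpha_i), \qquad \alpha_i = \bar{x}_i'\gamma + u_i, \quad u_i \sim \mathcal{N}(0, \sigma_u^2)

where y_{it} indicates depression for person i in wave t, x_{it} includes leads and lags of the partner's death to capture anticipation and adaptation, and the individual effect \alpha_i is allowed to correlate with the covariates through their person means \bar{x}_i.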


Subjects
Bereavement, Mental Health, Spouses, Widowhood/psychology, Adaptation, Psychological, Aged, Cause of Death, Female, Humans, Interviews as Topic, Male, Middle Aged, Models, Econometric, Qualitative Research
15.
Int J Mol Sci ; 16(10): 25897-911, 2015 Oct 28.
Article in English | MEDLINE | ID: mdl-26516852

ABSTRACT

Biomedical data obtained during cell experiments, laboratory animal research, or human studies often display a complex distribution. Statistical identification of subgroups in research data poses an analytical challenge. Here we introduce an interactive R-based bioinformatics tool called "AdaptGauss". It enables valid identification of a biologically meaningful multimodal structure in the data by fitting a Gaussian mixture model (GMM) to the data. The interface allows supervised selection of the number of subgroups, which enables the expectation-maximization (EM) algorithm to fit more complex GMMs than are usually obtained with a noninteractive approach. Interactively fitting a GMM to heat pain threshold data acquired from human volunteers revealed a distribution pattern with four Gaussian modes located at temperatures of 32.3, 37.2, 41.4, and 45.4 °C. Noninteractive fitting was unable to identify a meaningful data structure. The obtained results are compatible with known activity temperatures of different TRP ion channels, suggesting the mechanistic contribution of different heat sensors to the perception of thermal pain. Thus, sophisticated analysis of the modal structure of biomedical data provides a basis for the mechanistic interpretation of the observations. As it may reflect the involvement of different TRP thermosensory ion channels, the analysis provides a starting point for hypothesis-driven laboratory experiments.
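
The underlying fit is a standard Gaussian mixture estimated by expectation maximization; a minimal non-interactive sketch with scikit-learn on synthetic data centered near the reported modes (AdaptGauss's interactive selection of the number of components is exactly what this omits):

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic heat pain thresholds drawn around the four reported modes (°C)
data = np.concatenate([rng.normal(m, 0.8, 100) for m in (32.3, 37.2, 41.4, 45.4)])

gmm = GaussianMixture(n_components=4, random_state=0).fit(data.reshape(-1, 1))
print("estimated modes (°C):", np.sort(gmm.means_.ravel()))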


Subjects
Hot Temperature, Nociceptive Pain/metabolism, Pain Threshold, Thermosensing, Adolescent, Adult, Algorithms, Female, Humans, Male, Models, Neurological, Nociceptive Pain/physiopathology, Transient Receptor Potential Channels/metabolism
16.
Cartogr Geogr Inf Sci ; 41(3): 227-234, 2014 May 27.
Article in English | MEDLINE | ID: mdl-27019643

ABSTRACT

Traditional geographic information system (GIS) overlay routines usually build on relatively simple data models. Topology, if calculated at all, is computed on the fly for very specific tasks only. If, for example, a change comparison is conducted between two or more polygon layers, the result is usually a complete and very complex from-to class intersection, and many additional processing steps are needed to arrive at aggregated and meaningful results. To overcome this problem, a new automated geospatial overlay method in a topologically enabled (multi-scale) framework is presented. The implementation works with polygon and raster layers and uses a multi-scale vector/raster data model developed in the object-based image analysis software eCognition (Trimble Geospatial Imaging, Munich, Germany). Advantages include the use of the software's inherent topological relationships in an object-by-object comparison, addressing some of the basic concepts of object-oriented data modeling such as classification, generalization, and aggregation. Results can easily be aggregated into a change-detection layer; change dependencies and the definition of different change classes are interactively possible through the use of a class hierarchy and its inheritance (parent-child class relationships). The implementation is demonstrated with a change comparison of CORINE Land Cover data sets. The result is a flexible and transferable solution which, once parameterized, is fully automated.
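
The elementary operation underlying such a change comparison is a polygon-by-polygon overlay; a minimal sketch with shapely on toy geometries (not the eCognition workflow the paper implements):

from shapely.geometry import Polygon

land_cover_2000 = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])  # toy "forest" patch
land_cover_2018 = Polygon([(2, 0), (6, 0), (6, 4), (2, 4)])  # toy "urban" patch

changed = land_cover_2000.intersection(land_cover_2018)  # the from-to overlap
print("from-to intersection area:", changed.area)  # 8.0 for these rectangles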

17.
J Biomed Semantics ; 15(1): 16, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39210467

ABSTRACT

Automatic disease progression prediction models require large amounts of training data, which are seldom available, especially for rare diseases. A possible solution is to integrate data from different medical centres. Nevertheless, various centres often follow diverse data collection procedures and assign different semantics to the collected data. Ontologies, used as schemas for interoperable knowledge bases, represent a state-of-the-art solution for harmonizing semantics and fostering data integration from various sources. This work presents the BrainTeaser Ontology (BTO), an ontology that models the clinical data associated with two brain-related rare diseases, amyotrophic lateral sclerosis (ALS) and multiple sclerosis (MS), in a comprehensive and modular manner. BTO assists in organizing and standardizing the data collected during patient follow-up. It was created by harmonizing schemas currently used by multiple medical centers into a common ontology, following a bottom-up approach. As a result, BTO effectively addresses the practical data collection needs of various real-world situations and promotes data portability and interoperability. BTO captures various clinical occurrences, such as disease onset, symptoms, diagnostic and therapeutic procedures, and relapses, using an event-based approach. Developed in collaboration with medical partners and domain experts, BTO offers a holistic view of ALS and MS, supporting the representation of retrospective and prospective data. Furthermore, BTO adheres to Open Science and FAIR (Findable, Accessible, Interoperable, and Reusable) principles, making it a reliable framework for developing predictive tools to aid in medical decision-making and patient care. Although BTO is designed for ALS and MS, its modular structure makes it easily extendable to other brain-related diseases, showcasing its potential for broader applicability.
Database URL: https://zenodo.org/records/7886998


Subjects
Biological Ontologies, Humans, Retrospective Studies, Amyotrophic Lateral Sclerosis, Multiple Sclerosis, Semantics
18.
Adv Sci (Weinh) ; : e2403548, 2024 Oct 04.
Article in English | MEDLINE | ID: mdl-39364764

ABSTRACT

Small data sets in materials science present significant challenges to constructing highly accurate machine learning models, severely hindering the widespread implementation of data-driven materials intelligent design. In this study, the Dual-Strategy Materials Intelligent Design Framework (DSMID) is introduced, which integrates two innovative methods. The Adversarial domain Adaptive Embedding Generative network (AAEG) transfers data between related property datasets, even with only 90 data points, enhancing material composition characterization and improving property prediction. Additionally, to address the challenge of screening and evaluating numerous alloy designs, the Automated Material Screening and Evaluation Pipeline (AMSEP) is implemented. This pipeline uses large language models with extensive domain knowledge to efficiently identify promising experimental candidates through self-retrieval and self-summarization. Experimental findings demonstrate that this approach effectively identifies and prepares a new eutectic high-entropy alloy (EHEA), Al14(CoCrFe)19Ni28, achieving an ultimate tensile strength of 1085 MPa and 24% elongation without heat treatment or extra processing. This demonstrates significantly greater plasticity and equivalent strength compared with the typical as-cast eutectic HEA AlCoCrFeNi2.1. The DSMID framework, combining AAEG and AMSEP, addresses the challenges of small-data modeling and extensive candidate screening, contributing to cost reduction and enhanced efficiency in material design. The framework offers a promising avenue for intelligent material design, particularly in scenarios constrained by limited data availability.

19.
Res Sq ; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38562856

ABSTRACT

Polymicrobial infection of the airways is a hallmark of obstructive lung diseases such as cystic fibrosis (CF), non-CF bronchiectasis, and chronic obstructive pulmonary disease. Pulmonary exacerbations (PEx) in these conditions are associated with accelerated lung function decline and higher mortality rates. An understanding of the microbial underpinnings of PEx is challenged by high inter-patient variability in airway microbial community profiles. We analyzed bacterial communities in 880 CF sputum samples and developed microbiome descriptors to model community reorganization prior to and during 18 PEx. We identified two microbial dysbiosis regimes with opposing ecology and dynamics. Pathogen-governed PEx showed hierarchical community reorganization and reduced diversity, whereas anaerobic bloom PEx displayed stochasticity and increased diversity. A simulation of antimicrobial treatment predicted better efficacy for hierarchically organized communities. This link between PEx type, microbiome organization, and treatment success advances the development of personalized clinical management in CF and, potentially, other obstructive lung diseases.

20.
Front Big Data ; 7: 1349116, 2024.
Article in English | MEDLINE | ID: mdl-38638340

ABSTRACT

With the rapid growth of information and communication technologies, governments worldwide are embracing digital transformation to enhance service delivery and governance practices. In the rapidly evolving landscape of information technology (IT), secure data management stands as a cornerstone for organizations aiming to safeguard sensitive information. Robust data modeling techniques are pivotal in structuring and organizing data, ensuring its integrity, and facilitating efficient retrieval and analysis. As the world increasingly emphasizes sustainability, integrating eco-friendly practices into data management processes becomes imperative. This study focuses on the specific context of Pakistan and investigates the potential of cloud computing in advancing e-governance capabilities. Cloud computing offers scalability, cost efficiency, and enhanced data security, making it an ideal technology for digital transformation. Through an extensive literature review, analysis of case studies, and interviews with stakeholders, this research explores the current state of e-governance in Pakistan, identifies the challenges faced, and proposes a framework for leveraging cloud computing to overcome these challenges. The findings reveal that cloud computing can significantly enhance the accessibility, scalability, and cost-effectiveness of e-governance services, thereby improving citizen engagement and satisfaction. This study provides valuable insights for policymakers, government agencies, and researchers interested in the digital transformation of e-governance in Pakistan and offers a roadmap for leveraging cloud computing technologies in similar contexts. The findings contribute to the growing body of knowledge on e-governance and cloud computing, supporting the advancement of digital governance practices globally. This research identifies monitoring parameters necessary to establish a sustainable e-governance system incorporating big data and cloud computing. The proposed framework, Monitoring and Assessment System using Cloud (MASC), is validated through secondary data analysis and successfully fulfills the research objectives. By leveraging big data and cloud computing, governments can revolutionize their digital governance practices, driving transformative changes and enhancing efficiency and effectiveness in public administration.
