Results 1 - 20 of 137
1.
Stud Health Technol Inform ; 316: 1627-1631, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176522

ABSTRACT

MyDigiTwin is a scientific initiative to develop a platform for the early detection and prevention of cardiovascular diseases. The platform, which is supported by prediction models trained in a federated fashion to preserve data privacy, is expected to be hosted by the Dutch Personal Health Environments (PGOs). Consequently, one of the challenges for this federated learning architecture is ensuring consistency between the PGO data and the reference datasets that will be part of it. This paper introduces a novel data harmonization framework that streamlines the efficient generation of FHIR-based representations of data from multiple cohort studies. Furthermore, its applicability to the integration of Lifelines cohort study data into the MyDigiTwin federated research infrastructure is discussed.
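The harmonization step described here — generating FHIR-based representations of cohort data — can be sketched in miniature. The variable name, LOINC coding table, and field layout below are illustrative assumptions for this sketch, not the actual MyDigiTwin mapping:

```python
# Hypothetical sketch: wrapping one cohort measurement in a minimal FHIR R4
# Observation resource. The variable name and LOINC table are illustrative
# assumptions, not the MyDigiTwin project's actual mapping.

def to_fhir_observation(participant_id: str, variable: str, value: float, unit: str) -> dict:
    """Map a single cohort measurement to a minimal FHIR Observation."""
    # Illustrative coding table; a real pipeline would use a curated mapping.
    loinc = {"systolic_bp": ("8480-6", "Systolic blood pressure")}
    code, display = loinc[variable]
    return {
        "resourceType": "Observation",
        "status": "final",
        "subject": {"reference": f"Patient/{participant_id}"},
        "code": {"coding": [{"system": "http://loinc.org", "code": code, "display": display}]},
        "valueQuantity": {"value": value, "unit": unit, "system": "http://unitsofmeasure.org"},
    }

obs = to_fhir_observation("LL-0001", "systolic_bp", 128.0, "mm[Hg]")
```

A real framework would also validate the output against the target FHIR profiles and handle units and missing values; this sketch only shows the shape of the transformation.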


Subjects
Cardiovascular Diseases, Humans, Cohort Studies, Cardiovascular Diseases/prevention & control, Netherlands, Machine Learning, Electronic Health Records
2.
Stud Health Technol Inform ; 316: 1312-1313, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176622

ABSTRACT

The interoperability of healthcare data across various systems remains a major challenge, largely attributable to the disparate data schemas and APIs in use. This study showcases the integration of a FHIR layer into GameBus, a gamified health platform, aiming to enhance its interoperability. Traditionally, GameBus has relied on proprietary data schemas and REST APIs, which restricted data exchange with other platforms. The incorporation of the FHIR standard significantly mitigates these constraints. The FHIR layer, built with open-source technologies - including the Google HCLS Data Harmonization tool for data transformation and the HAPI FHIR framework for RESTful services - allows GameBus to share data using standardized FHIR formats and APIs. Implemented as a standalone microservice, this layer requires no alterations to the pre-existing GameBus architecture. Furthermore, the design and implementation of the FHIR layer illustrate a generic method for achieving interoperability across diverse healthcare platforms.


Subjects
Electronic Health Records, Humans, Health Information Interoperability, Systems Integration
3.
Stud Health Technol Inform ; 316: 1943-1944, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176872

ABSTRACT

The Korean National Institute of Health initiated data harmonization across cohorts with the aim of ensuring semantic interoperability of data and creating a common database of standardized data elements for future collaborative research. To this end, we reviewed the code books of the cohorts and identified common data items and values that can be combined for data analyses. We then mapped the data items and values to standard health terminologies such as SNOMED CT. Preliminary results of this ongoing data harmonization work will be presented.


Subjects
Systematized Nomenclature of Medicine, Electronic Health Records, Humans, Semantics, Controlled Vocabulary, Terminology as Topic
4.
Front Psychol ; 15: 1345406, 2024.
Article in English | MEDLINE | ID: mdl-39049945

ABSTRACT

Introduction: A problem that applied researchers and practitioners often face is that different institutions within research consortia use different scales to evaluate the same construct, which makes comparing and pooling the results challenging. To meaningfully pool and compare the scores, the scales must be harmonized. The aim of this paper is to apply different test equating methods to harmonize the ADHD scores from the Child Behavior Checklist (CBCL) and the Strengths and Difficulties Questionnaire (SDQ) and to determine which method yields the best result. Methods: The sample consists of 1551 parent reports of children aged 10-11.5 years from the Raine study on both the CBCL and SDQ (common-persons design). We used linear equating, kernel equating, Item Response Theory (IRT), and the following machine learning methods: regression (linear and ordinal), random forest (regression and classification), and support vector machine (regression and classification). The efficacy of the methods is operationalized in terms of the root-mean-square error (RMSE) of differences between predicted and observed scores in cross-validation. Results and discussion: Results showed that, with a single-group design, it is best to use methods that use item-level information and that treat the outcome as being at the interval measurement level (the regression approach).
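Of the equating methods listed, linear equating is the simplest to sketch: it maps a score from one scale onto another by matching means and standard deviations, and RMSE then quantifies how well predicted scores match observed ones. A minimal illustration with toy data (not the actual CBCL/SDQ scales):

```python
import math

def linear_equate(x, scores_x, scores_y):
    """Linear equating: map score x from form X onto form Y's scale
    by matching the two forms' means and standard deviations."""
    mx = sum(scores_x) / len(scores_x)
    my = sum(scores_y) / len(scores_y)
    sx = math.sqrt(sum((s - mx) ** 2 for s in scores_x) / len(scores_x))
    sy = math.sqrt(sum((s - my) ** 2 for s in scores_y) / len(scores_y))
    return my + (sy / sx) * (x - mx)

def rmse(predicted, observed):
    """Root-mean-square error, the criterion used to compare equating methods."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted))

# Toy example: form X scored 0-10, form Y scored 0-20.
equated = linear_equate(10, [0, 10], [0, 20])  # maps the top of X to the top of Y
```

The IRT and machine-learning methods from the paper additionally exploit item-level responses, which is what the regression approach's advantage rests on.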

5.
Health Informatics J ; 30(3): 14604582241267792, 2024.
Article in English | MEDLINE | ID: mdl-39056109

ABSTRACT

Objective: This article aims to describe the implementation of a new health information technology system called Health Connect that is harmonizing cancer data in the Canadian province of Newfoundland and Labrador; to explain high-level technical details of this technology; to provide concrete examples of how it is helping to improve cancer care in the province; and to discuss its future expansion and implications. Methods: We give a technical description of the Health Connect architecture, describe how it integrated numerous data sources into a single, scalable health information system for cancer data, and highlight its artificial intelligence and analytics capacity. Results: We illustrate two practical achievements of Health Connect: first, an analytical dashboard used to pinpoint variations in colon cancer screening uptake in small defined geographic regions of the province; and second, a natural language processing algorithm that provided AI-assisted decision support in interpreting appropriate follow-up action based on assessments of breast mammography reports. Conclusion: Health Connect is a cutting-edge health systems solution for harmonizing cancer screening data for practical decision-making. The long-term goal is to integrate all cancer care data holdings into Health Connect to build a comprehensive health information system for cancer care in the province.


Subjects
Neoplasms, Humans, Newfoundland and Labrador, Female, Artificial Intelligence/trends, Medical Informatics/methods, Early Detection of Cancer/methods
6.
J Pers Med ; 14(7)2024 Jun 21.
Article in English | MEDLINE | ID: mdl-39063922

ABSTRACT

Biobanks are infrastructures essential for research involving multi-disciplinary teams and an increasing number of stakeholders. In the field of personalized medicine, biobanks play a key role by providing well-characterized and annotated samples while protecting the rights of donors. The Andalusian Public Health System Biobank (SSPA Biobank) has implemented a global information management system made up of modules that allow the recording, traceability, and monitoring of all information associated with biobank operations. The data model, designed in a standardized and normalized way according to international data harmonization initiatives, integrates the information necessary to guarantee the quality of research results, benefiting researchers, clinicians, and donors.

7.
J Public Health Dent ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38953657

ABSTRACT

BACKGROUND/OBJECTIVES: Effective use of longitudinal study data is challenging because of divergences in construct definitions and measurement approaches over time, between studies, and across disciplines. One approach to overcoming these challenges is data harmonization, a practice used to improve variable comparability and reduce heterogeneity across studies. This study describes the process used to evaluate the harmonization potential of oral health-related variables across each survey wave. METHODS: National child cohort surveys with similar themes/objectives conducted in the last two decades were selected. The Maelstrom Research Guidelines were followed for the evaluation of harmonization potential. RESULTS: Seven nationally representative child cohort surveys were included, and questionnaires from 50 survey waves were examined. Questionnaires were classified into three domains and fifteen constructs and summarized by age group. A DataSchema (a list of core variables representing the suitable version of the oral health outcomes and risk factors) comprising 42 variables was compiled. For each study wave, the potential (or not) to generate each DataSchema variable was evaluated. Of the 2100 harmonization status assessments, 543 (26%) were complete. Approximately 50% of the DataSchema variables can be generated across at least four cohort surveys, while only 10% (n = 4) can be generated across all surveys. For each survey, the proportion of DataSchema variables that can be generated ranged between 26% and 76%. CONCLUSION: Data harmonization can improve the comparability of variables both within and across surveys. For future cohort surveys, the authors advocate more consistency and standardization in survey questionnaires within and between surveys.

8.
JMIR Med Inform ; 12: e57005, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39042420

ABSTRACT

BACKGROUND: Cross-institutional interoperability between health care providers remains a recurring challenge worldwide. The German Medical Informatics Initiative, a collaboration of 37 university hospitals in Germany, aims to enable interoperability between partner sites by defining Fast Healthcare Interoperability Resources (FHIR) profiles for the cross-institutional exchange of health care data, the Core Data Set (CDS). The current CDS and its extension modules define elements representing patients' health care records. All university hospitals in Germany have made significant progress in providing routine data in a standardized format based on the CDS. In addition, the central research platform for health, the German Portal for Medical Research Data feasibility tool, allows medical researchers to query the available CDS data items across many participating hospitals. OBJECTIVE: In this study, we aimed to evaluate a novel approach of combining the current top-down generated FHIR profiles with the bottom-up knowledge gained by analyzing the corresponding instance data. This allowed us to derive options for iteratively refining FHIR profiles using the information obtained from a discrepancy analysis. METHODS: We developed an FHIR validation pipeline and opted to derive more restrictive profiles from the original CDS profiles. This decision was driven by the need to align more closely with the specific assumptions and requirements of the central feasibility platform's search ontology. While the original CDS profiles offer a generic framework adaptable to a broad spectrum of medical informatics use cases, they lack the specificity to model the nuanced criteria essential for medical researchers. A key example is the need to accurately represent the interdependencies between specific laboratory codings and values. The validation results allow us to identify discrepancies between the instance data at the clinical sites and the profiles specified by the feasibility platform, to be addressed in the future. RESULTS: A total of 20 university hospitals participated in this study. Historical factors, lack of harmonization, a wide range of source systems, and case sensitivity of coding are some of the causes of the discrepancies identified. While in our case study Conditions, Procedures, and Medications show a high degree of uniformity in the coding of instance data due to legislative requirements for billing in Germany, we found that laboratory values pose a significant data harmonization challenge due to the interdependency between coding and value. CONCLUSIONS: While the CDS achieves interoperability, different challenges for federated data access arise, requiring more specificity in the profiles to make assumptions about the instance data. We also argue that further harmonization of the instance data can significantly lower the required retrospective harmonization effort. We recognize that discrepancies cannot be resolved solely at the clinical sites; therefore, our findings have a wide range of implications and will require action on multiple levels and by various stakeholders.

9.
Environ Sci Technol ; 58(27): 12260-12271, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38923944

ABSTRACT

Despite the critical importance of virus disinfection by chlorine, our fundamental understanding of the relative susceptibility of different viruses to chlorine, and robust quantitative relationships between virus disinfection rate constants and environmental parameters, remain limited. We conducted a systematic review of virus inactivation by free chlorine and used the resulting data set to develop a linear mixed model that estimates chlorine inactivation rate constants for viruses based on experimental conditions. 570 data points were collected in our systematic review, representing 82 viruses over a broad range of environmental conditions. The harmonized inactivation rate constants under reference conditions (pH = 7.53, T = 20 °C, [Cl-] < 50 mM) spanned 5 orders of magnitude, ranging from 0.0196 to 1150 L mg-1 min-1, and uncovered important trends between viruses. Whereas the common surrogate bacteriophage MS2 does not serve as a conservative chlorine disinfection surrogate for many human viruses, CVB5 was one of the most resistant viruses in the data set. The model quantifies the role of pH, temperature, and chloride levels across viruses, and an online tool allows users to estimate rate constants for viruses and conditions of interest. Results from the model identified potential shortcomings in current U.S. EPA drinking water disinfection requirements.
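Rate constants in L mg-1 min-1 of the kind harmonized here plug into standard Chick-Watson kinetics. A hedged sketch of how such a constant translates into a log reduction or a required CT value, assuming simple first-order Chick-Watson behavior (the paper's linear mixed model is more elaborate and adjusts for pH, temperature, and chloride):

```python
import math

def log10_inactivation(k: float, c: float, t: float) -> float:
    """Chick-Watson model, ln(N/N0) = -k * C * t, reported as a log10 reduction.
    k: inactivation rate constant (L mg^-1 min^-1, natural-log basis, assumed),
    c: free chlorine concentration (mg/L), t: contact time (min)."""
    return k * c * t / math.log(10)

def ct_for_target(k: float, log10_target: float) -> float:
    """CT (mg·min/L) needed to reach a target log10 reduction under the same assumption."""
    return log10_target * math.log(10) / k
```

With rate constants spanning five orders of magnitude, the CT needed for a fixed log-reduction target varies by the same factor, which is why surrogate choice (e.g., MS2 vs. CVB5) matters so much.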


Subjects
Chlorine, Disinfection, Chlorine/pharmacology, Virus Inactivation/drug effects, Viruses/drug effects, Disinfectants/pharmacology
10.
Learn Health Syst ; 8(Suppl 1): e10418, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38883873

ABSTRACT

Introduction: Shared decision-making (SDM) is a method of care by which patients and clinicians work together to co-create a plan of care. Electronic health record (EHR) integration of SDM tools may increase adoption of SDM. We conducted a "lightweight" integration of a freely available electronic SDM tool, CV Prevention Choice, within the EHRs of three healthcare systems. Here, we report how the healthcare systems collaborated to achieve integration. Methods: This work was conducted as part of a stepped wedge randomized pragmatic trial. CV Prevention Choice was developed using guidelines for HTML5-based web applications. Healthcare systems integrated the tool in their EHR using documentation the study team developed and refined with lessons learned after each system integrated the electronic SDM tool into their EHR. CV Prevention Choice integration populates the tool with individual patient data locally, without sending protected health information between the EHR and the web. Data abstraction and secure transfer systems were developed to manage data collection to assess tool implementation and effectiveness outcomes. Results: Time to integrate CV Prevention Choice in the EHR was 12.1 weeks for the first system, 10.4 weeks for the second, and 9.7 weeks for the third. One system required two 1-hour meetings with study team members, and two healthcare systems required a single 1-hour meeting. Healthcare system information technology teams collaborated by sharing information and offering improvements to documentation. Challenges included tracking CV Prevention Choice use for reporting and capturing combination medications. Data abstraction required refinements to address differences in how each healthcare system captured data elements. Conclusion: Targeted documentation on tool features and resource mapping supported collaboration of IT teams across healthcare systems, enabling them to integrate a web-based SDM tool with little additional research team effort or oversight. Their collaboration helped overcome difficulties integrating the web application and address challenges to data harmonization for trial outcome analyses.

11.
Front Med (Lausanne) ; 11: 1377209, 2024.
Article in English | MEDLINE | ID: mdl-38903818

ABSTRACT

Introduction: Obtaining real-world data from routine clinical care is of growing interest for scientific research and personalized medicine. Despite the abundance of medical data across various facilities - including hospitals, outpatient clinics, and physician practices - the intersectoral exchange of information remains largely hindered due to differences in data structure, content, and adherence to data protection regulations. In response to this challenge, the Medical Informatics Initiative (MII) was launched in Germany, focusing initially on university hospitals to foster the exchange and utilization of real-world data through the development of standardized methods and tools, including the creation of a common core dataset. Our aim, as part of the Medical Informatics Research Hub in Saxony (MiHUBx), is to extend the MII concepts to non-university healthcare providers in a more seamless manner to enable the exchange of real-world data among intersectoral medical sites. Methods: We investigated what services are needed to facilitate the provision of harmonized real-world data for cross-site research. On this basis, we designed a Service Platform Prototype that hosts services for data harmonization, adhering to the globally recognized Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) international standard communication format and the Observational Medical Outcomes Partnership (OMOP) common data model (CDM). Leveraging these standards, we implemented additional services facilitating data utilization, exchange and analysis. Throughout the development phase, we collaborated with an interdisciplinary team of experts from the fields of system administration, software engineering and technology acceptance to ensure that the solution is sustainable and reusable in the long term. 
Results: We have developed the pre-built packages "ResearchData-to-FHIR," "FHIR-to-OMOP," and "Addons," which provide services for harmonizing and providing project-related real-world data in both the FHIR MII Core Data Set (CDS) format and the OMOP CDM format, as well as a Service Platform Prototype to streamline data management and use. Conclusion: Our development shows a possible approach to extending the MII concepts to non-university healthcare providers to enable cross-site research on real-world data. Our Service Platform Prototype can thus pave the way for intersectoral data sharing, federated analysis, and the provision of SMART-on-FHIR applications to support clinical decision-making.

12.
Comput Struct Biotechnol J ; 24: 412-419, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38831762

ABSTRACT

In anticipation of potential future pandemics, we examined the challenges and opportunities presented by the COVID-19 outbreak. This analysis highlights how artificial intelligence (AI) and predictive models can support both patients and clinicians in managing subsequent infectious diseases, and how legislators and policymakers could support these efforts, to bring the learning healthcare system (LHS) from guidelines to real-world implementation. This report chronicles the trajectory of the COVID-19 pandemic, emphasizing the diverse data sets generated throughout its course. We propose strategies for harnessing these data via AI and predictive modelling to enhance the functioning of an LHS. The challenges faced by patients and healthcare systems around the world during this unprecedented crisis could have been mitigated with an informed and timely adoption of the three pillars of the LHS: Knowledge, Data, and Practice. By harnessing AI and predictive analytics, we can develop tools that not only detect potential pandemic-prone diseases early on but also assist in patient management, provide decision support, offer treatment recommendations, triage patients by predicted outcome, predict post-recovery long-term disease impacts, monitor viral mutations and variant emergence, and assess vaccine and treatment efficacy in real time. A patient-centric approach remains paramount, ensuring patients are both informed and actively involved in disease mitigation strategies.

13.
Curr Protoc ; 4(6): e1055, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38837690

ABSTRACT

Data harmonization involves combining data from multiple independent sources and processing them to produce one uniform dataset. Merging separate genotype or whole-genome sequencing datasets has been proposed as a strategy to increase the statistical power of association tests by increasing the effective sample size. However, data harmonization is not a widely adopted strategy due to the difficulties of merging data (including confounding produced by batch effects and population stratification). Detailed data harmonization protocols are scarce and often conflicting. Moreover, data harmonization protocols that accommodate samples of admixed ancestry are practically non-existent. Existing data harmonization procedures must be modified to ensure the heterogeneous ancestry of admixed individuals is incorporated into downstream analyses without confounding the results. Here, we propose a set of guidelines for merging multi-platform genetic data from admixed samples that can be adopted by any investigator with elementary bioinformatics experience. We applied these guidelines to aggregate 1544 tuberculosis (TB) case-control samples from six separate in-house datasets and conducted a genome-wide association study (GWAS) of TB susceptibility. The GWAS performed on the merged dataset had improved power over analyzing the datasets individually and produced summary statistics free from bias introduced by batch effects and population stratification. © 2024 Wiley Periodicals LLC. Basic Protocol 1: Processing separate datasets comprising array genotype data. Alternate Protocol 1: Processing separate datasets comprising array genotype and whole-genome sequencing data. Alternate Protocol 2: Performing imputation using a local reference panel. Basic Protocol 2: Merging separate datasets. Basic Protocol 3: Ancestry inference using ADMIXTURE and RFMix. Basic Protocol 4: Batch effect correction using pseudo-case-control comparisons.
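At its core, the merging step (Basic Protocol 2) reduces to intersecting the variant sets across datasets and combining their samples. A toy stand-in for what tools like PLINK do at scale — the data layout is an assumption for illustration, and a real merge must additionally reconcile strand orientation, allele order, and genome build:

```python
def merge_datasets(datasets):
    """Merge genotype datasets on their shared variants (a simplified,
    illustrative stand-in for a PLINK-style merge).
    datasets: list of {sample_id: {snp_id: genotype_dosage}} dicts."""
    shared = None
    for ds in datasets:
        for genos in ds.values():
            snps = set(genos)
            shared = snps if shared is None else shared & snps
    merged = {}
    for ds in datasets:
        for sample, genos in ds.items():
            # Keep only variants typed in every dataset.
            merged[sample] = {snp: genos[snp] for snp in sorted(shared)}
    return merged

ds1 = {"A": {"rs1": 0, "rs2": 1}}
ds2 = {"B": {"rs1": 2, "rs3": 1}}
combined = merge_datasets([ds1, ds2])
```

The batch-effect and ancestry steps (Basic Protocols 3 and 4) then operate on this merged matrix; without them, the larger intersection alone would not remove confounding.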


Subjects
Genome-Wide Association Study, Humans, Genome-Wide Association Study/methods, Genome-Wide Association Study/standards, Genomics/methods, Genomics/standards, Tuberculosis/genetics, Case-Control Studies, Guidelines as Topic, Genetic Predisposition to Disease
14.
Article in English | MEDLINE | ID: mdl-38774479

ABSTRACT

For deep learning-based machine learning, not only are large and sufficiently diverse data crucial, but their quality is equally important. However, in real-world applications, it is very common for raw source data to contain incorrect, noisy, inconsistent, improperly formatted, and sometimes missing elements, particularly when the datasets are large and sourced from many sites. In this paper, we present our work towards preparing image data for the development of AI-driven approaches to studying various aspects of the natural history of oral cancer. Specifically, we focus on two aspects: 1) cleaning the image data; and 2) extracting the annotation information. Data cleaning includes removing duplicates, identifying missing data, correcting errors, standardizing data sets, and removing personal sensitive information, toward combining data sourced from different study sites. These steps are often collectively referred to as data harmonization. Annotation information extraction includes identifying crucial or valuable text manually entered by clinical providers in the image paths/names and standardizing the label text. Both are important for successful deep learning algorithm development and data analysis. Specifically, we provide details on the data under consideration, describe the challenges and issues that motivated our work, and present the specific approaches and methods we used to clean and standardize the image data and extract labelling information. Further, we discuss ways to increase the efficiency of the process and the lessons learned. Research ideas on automating the process with ML-driven techniques are also presented and discussed. Our intent in reporting and discussing this work in detail is to help provide insights into automating or, at minimum, increasing the efficiency of these critical yet often under-reported processes.

15.
J Alzheimers Dis ; 99(4): 1409-1423, 2024.
Article in English | MEDLINE | ID: mdl-38759012

ABSTRACT

Background: Despite numerous past endeavors for the semantic harmonization of Alzheimer's disease (AD) cohort studies, an automatic tool has yet to be developed. Objective: As cohort studies form the basis of data-driven analysis, harmonizing them is crucial for cross-cohort analysis. We aimed to accelerate this task by constructing an automatic harmonization tool. Methods: We created a common data model (CDM) through cross-mapping data from 20 cohorts, three CDMs, and ontology terms, which was then used to fine-tune a BioBERT model. Finally, we evaluated the model using three previously unseen cohorts and compared its performance to a string-matching baseline model. Results: Here, we present our AD-Mapper interface for automatic harmonization of AD cohort studies, which outperformed a string-matching baseline on previously unseen cohort studies. We showcase our CDM comprising 1218 unique variables. Conclusion: AD-Mapper leverages semantic similarities in naming conventions across cohorts to improve mapping performance.
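A string-matching baseline of the kind AD-Mapper is compared against can be sketched with Python's standard library. The cohort and CDM variable names below are hypothetical:

```python
import difflib

def string_match_baseline(cohort_vars, cdm_vars, cutoff=0.6):
    """Baseline harmonization: map each cohort variable name to the most
    similar CDM variable by plain string similarity, or None if nothing
    clears the cutoff. Illustrative of the baseline, not AD-Mapper itself."""
    mapping = {}
    cdm_lower = [c.lower() for c in cdm_vars]
    for var in cohort_vars:
        hits = difflib.get_close_matches(var.lower(), cdm_lower, n=1, cutoff=cutoff)
        mapping[var] = hits[0] if hits else None
    return mapping

# Hypothetical variable names: a near-match succeeds, a semantic match fails.
m = string_match_baseline(["MMSE_total", "apoe4"], ["mmse_total_score", "age"])
```

The failure mode is visible in the toy example: string similarity finds near-identical names but misses semantically equivalent variables with different naming conventions, which is the gap a fine-tuned language model targets.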


Subjects
Alzheimer Disease, Semantics, Alzheimer Disease/diagnosis, Humans, Cohort Studies
16.
Sleep ; 2024 May 16.
Article in English | MEDLINE | ID: mdl-38752786

ABSTRACT

STUDY OBJECTIVES: Harmonizing and aggregating data across studies enable pooled analyses that support external validation and enhance replicability and generalizability. However, the multidimensional nature of sleep poses challenges for data harmonization and aggregation. Here we describe and implement our process for harmonizing self-reported sleep data. METHODS: We established a multi-phase framework to harmonize self-reported sleep data: (1) compile items; (2) group items into domains; (3) harmonize items; and (4) evaluate harmonizability. We applied this process to produce a pooled multi-cohort sample of five United States cohorts plus a separate yet fully harmonized sample from Rotterdam, the Netherlands. Sleep and sociodemographic data are described and compared to demonstrate the utility of harmonization and aggregation. RESULTS: We collected 190 unique self-reported sleep items and grouped them into 15 conceptual domains. Using these domains as guardrails, we developed 14 harmonized items measuring aspects of Satisfaction, Alertness/Sleepiness, Timing, Efficiency, Duration, Insomnia, and Sleep Apnea. External raters determined that 13 of these 14 items had moderate-to-high harmonizability. Alertness/Sleepiness items had lower harmonizability, while continuous, quantitative items (e.g., timing, total sleep time, efficiency) had higher harmonizability. Descriptive statistics identified features that are more consistent (e.g., wake-up time, duration) and more heterogeneous (e.g., time in bed, bedtime) across samples. CONCLUSIONS: Our process can guide researchers and cohort stewards towards effective sleep harmonization and provides a foundation for further methodological development in this expanding field. Broader national and international initiatives promoting common data elements across cohorts are needed to enhance future harmonization and aggregation efforts.

17.
J Biomed Inform ; 155: 104661, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38806105

ABSTRACT

BACKGROUND: Establishing collaborations between cohort studies has been fundamental for progress in health research. However, such collaborations are hampered by heterogeneous data representations across cohorts and legal constraints on data sharing. The first arises from a lack of consensus on standards of data collection and representation across cohort studies and is usually tackled by applying data harmonization processes. The second is increasingly important due to raised awareness of privacy protection and stricter regulations, such as the GDPR. Federated learning has emerged as a privacy-preserving alternative to transferring data between institutions by analyzing data in a decentralized manner. METHODS: In this study, we set up a federated learning infrastructure for a consortium of nine Dutch cohorts with data relevant to the etiology of dementia, including an extract, transform, and load (ETL) pipeline for data harmonization. Additionally, we assessed the challenges of transforming and standardizing cohort data using the Observational Medical Outcomes Partnership (OMOP) common data model (CDM) and evaluated our tool in one of the cohorts employing federated algorithms. RESULTS: We successfully applied our ETL tool and observed complete coverage of the cohorts' data by the OMOP CDM. The OMOP CDM facilitated data representation and standardization, but we identified limitations for cohort-specific data fields and in the scope of the vocabularies available. Specific challenges arise in a multi-cohort federated collaboration due to technical constraints in local environments, data heterogeneity, and lack of direct access to the data. CONCLUSION: In this article, we describe the solutions to the challenges and limitations encountered in our study. Our study shows the potential of federated learning as a privacy-preserving solution for multi-cohort studies that enhances reproducibility and reuse of both data and analyses.
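An ETL step into the OMOP CDM, as described here, essentially maps source columns and codes to standard concept IDs and CDM table rows. A minimal, illustrative sketch — the column names and concept mapping are assumptions for this example, not the consortium's actual pipeline:

```python
def cohort_row_to_omop(row, concept_map):
    """Transform one cohort record into rows for the OMOP CDM PERSON and
    MEASUREMENT tables. Column names and concept IDs are illustrative
    assumptions, not a real vocabulary lookup."""
    person = {
        "person_id": row["participant_id"],
        "year_of_birth": row["birth_year"],
        "gender_concept_id": concept_map["gender"][row["sex"]],
    }
    measurements = [
        {
            "person_id": row["participant_id"],
            "measurement_concept_id": concept_map["measurement"][name],
            "value_as_number": value,
        }
        for name, value in row["measurements"].items()
        # Cohort-specific fields without a standard concept are dropped here,
        # mirroring the vocabulary-coverage limitation the study reports.
        if name in concept_map["measurement"]
    ]
    return person, measurements

concept_map = {"gender": {"F": 8532, "M": 8507},       # assumed mapping table
               "measurement": {"systolic_bp": 3004249}}  # assumed concept ID
row = {"participant_id": 1, "birth_year": 1960, "sex": "F",
       "measurements": {"systolic_bp": 128.0, "cohort_custom_score": 7}}
person, meas = cohort_row_to_omop(row, concept_map)
```

Note how the cohort-specific field falls through the concept map — exactly the kind of gap the authors flag for cohort-specific data fields and vocabulary scope.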


Subjects
Dementia, Humans, Netherlands, Cohort Studies, Algorithms, Information Dissemination/methods, Biomedical Research
18.
Eur J Epidemiol ; 39(7): 773-783, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38805076

ABSTRACT

While its etiology is not fully elucidated, preterm birth represents a major public health concern, as it is the leading cause of child mortality and morbidity. Stress is one of the most common perinatal conditions and may increase the risk of preterm birth. In this paper we aimed to investigate the association of maternal perceived stress and anxiety with length of gestation. We used harmonized data from five birth cohorts from Canada, France, and Norway. A total of 5297 singleton pregnancies were included in the analysis of perceived stress and gestational duration, and 55,775 pregnancies in the analysis of anxiety. Federated analyses were performed through the DataSHIELD platform using Cox regression models within intervals of gestational age. The models were fit for each cohort separately, and the cohort-specific results were combined using random-effects study-level meta-analysis. Moderate and high levels of perceived stress during pregnancy were associated with a shorter length of gestation in the very/moderately preterm interval [moderate: hazard ratio (HR) 1.92 (95% CI 0.83, 4.48); high: 2.04 (95% CI 0.77, 5.37)], albeit not statistically significantly. No association was found for the other intervals. Anxiety was associated with gestational duration in the very/moderately preterm interval [1.66 (95% CI 1.32, 2.08)] and in the early term interval [1.15 (95% CI 1.08, 1.23)]. Our findings suggest that perceived stress and anxiety are associated with an increased risk of earlier birth, but only at the earliest gestational ages. We also found an association in the early term period for anxiety, but this result was driven solely by the largest cohort, which collected this information latest in pregnancy. This raises a potential issue of reverse causality, as anxiety later in pregnancy could be due to concerns about early signs of a possible preterm birth.
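The study-level pooling described — combining cohort-specific Cox results in a random-effects meta-analysis — can be sketched with the DerSimonian-Laird estimator (one common choice; the abstract does not name the exact estimator used):

```python
import math

def dersimonian_laird(log_hrs, ses):
    """Random-effects (DerSimonian-Laird) pooling of cohort-specific log hazard
    ratios and their standard errors; requires at least two cohorts.
    Returns the pooled HR and the standard error of the pooled log HR."""
    w = [1 / se**2 for se in ses]                       # inverse-variance weights
    fixed = sum(wi * y for wi, y in zip(w, log_hrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_hrs))  # Cochran's Q
    df = len(log_hrs) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-cohort variance
    w_star = [1 / (se**2 + tau2) for se in ses]
    pooled = sum(wi * y for wi, y in zip(w_star, log_hrs)) / sum(w_star)
    return math.exp(pooled), math.sqrt(1 / sum(w_star))
```

In a federated setting like DataSHIELD, only these per-cohort log HRs and standard errors need to leave each site, which is what makes the analysis privacy-preserving.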


Subjects
Anxiety, Gestational Age, Premature Birth, Psychological Stress, Humans, Female, Pregnancy, Psychological Stress/epidemiology, Anxiety/epidemiology, Canada/epidemiology, Adult, Premature Birth/epidemiology, Premature Birth/psychology, Birth Cohort, Pregnancy Complications/epidemiology, Pregnancy Complications/psychology, Cohort Studies, Risk Factors, Newborn Infant, Proportional Hazards Models, Norway/epidemiology
20.
J Med Imaging (Bellingham) ; 11(2): 024008, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38571764

ABSTRACT

Purpose: Two-dimensional single-slice abdominal computed tomography (CT) provides a detailed, high-resolution tissue map, allowing quantitative characterization of relationships between health conditions and aging. However, longitudinal analysis of body composition changes using these scans is difficult due to positional variation between slices acquired in different years, which leads to different organs/tissues being captured. Approach: To address this issue, we propose C-SliceGen, which takes an arbitrary axial slice in the abdominal region as a condition and generates a slice at a pre-defined vertebral level by estimating structural changes in the latent space. Results: Our experiments on 2608 volumetric CT datasets from two in-house datasets and 50 subjects from the 2015 Multi-Atlas Abdomen Labeling Challenge Beyond the Cranial Vault (BTCV) dataset demonstrate that our model can generate high-quality images that are realistic and similar to the target slices. We further evaluated our method's capability to harmonize longitudinal positional variation on 1033 subjects from the Baltimore Longitudinal Study of Aging dataset, which contains longitudinal single abdominal slices, and confirmed that our method can harmonize slice positional variance in terms of visceral fat area. Conclusion: This approach provides a promising direction for mapping slices from different vertebral levels to a target slice and reducing positional variance in single-slice longitudinal analysis. The source code is available at: https://github.com/MASILab/C-SliceGen.
