Results 1 - 20 of 905
1.
J Integr Bioinform ; 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39092509

ABSTRACT

This paper provides an overview of the development and operation of the Leonhard Med Trusted Research Environment (TRE) at ETH Zurich. Leonhard Med gives scientific researchers the ability to securely work on sensitive research data. We give an overview of the user perspective, the legal framework for processing sensitive data, design history, current status, and operations. Leonhard Med is an efficient, highly secure Trusted Research Environment for data processing, hosted at ETH Zurich and operated by the Scientific IT Services (SIS) of ETH. It provides a full stack of security controls that allow researchers to store, access, manage, and process sensitive data according to Swiss legislation and ETH Zurich Data Protection policies. In addition, Leonhard Med fulfills the BioMedIT Information Security Policies and is compatible with international data protection laws, and it can therefore be utilized within the scope of national and international collaborative research projects. Initially designed as a "bare-metal" High-Performance Computing (HPC) platform to achieve maximum performance, Leonhard Med was later re-designed as a virtualized, private cloud platform to offer more flexibility to its customers. Sensitive data can be analyzed in secure, segregated spaces called tenants. Technical and Organizational Measures (TOMs) are in place to assure the confidentiality, integrity, and availability of sensitive data. At the same time, Leonhard Med ensures broad access to cutting-edge research software, especially for the analysis of human omics data and other personalized health applications.

2.
J Appl Crystallogr ; 57(Pt 4): 1217-1228, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39108808

ABSTRACT

Presented and discussed here is the implementation of a software solution that provides prompt X-ray diffraction data analysis during fast dynamic compression experiments conducted within the dynamic diamond anvil cell technique. It includes efficient data collection, streaming of data and metadata to a high-performance computing (HPC) cluster, fast azimuthal data integration on the cluster, and tools for controlling the data processing steps and visualizing the data using the DIOPTAS software package. This data processing pipeline is applicable to a wide range of studies. Its potential is illustrated with two examples of data collected on ammonia-water mixtures and multiphase mineral assemblies under high pressure. The pipeline is designed to be generic and could be readily adapted to provide rapid feedback for many other X-ray diffraction techniques, e.g., large-volume press studies, in situ stress/strain studies, phase transformation studies, and chemical reactions studied with high-resolution diffraction.
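
For readers unfamiliar with the azimuthal-integration step mentioned above, the sketch below shows the underlying idea with plain NumPy: pixels of a 2D diffraction image are binned by their radial distance from the beam center and averaged to give a 1D intensity profile. This is a minimal, generic illustration, not the actual DIOPTAS/HPC pipeline; the image array, beam center, and bin count are assumptions for the example.

```python
import numpy as np

def azimuthal_integrate(image, center, n_bins=1000):
    """Average a 2D diffraction image over the azimuth to obtain a 1D radial intensity profile."""
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - center[0], y - center[1])             # radial distance of each pixel (pixels)
    bins = np.linspace(0.0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=image.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    profile = np.divide(sums, counts, out=np.zeros(n_bins), where=counts > 0)
    return 0.5 * (bins[:-1] + bins[1:]), profile           # bin centers, mean intensity per ring

# Example on a synthetic noisy image; a real pipeline would convert radius to 2-theta or q
img = np.random.poisson(5, size=(2048, 2048)).astype(float)
radius, intensity = azimuthal_integrate(img, center=(1024.0, 1024.0))
```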

3.
Proteomics ; : e2300491, 2024 Aug 10.
Article in English | MEDLINE | ID: mdl-39126236

ABSTRACT

State-of-the-art mass spectrometers combined with modern bioinformatics algorithms for peptide-to-spectrum matching (PSM) with robust statistical scoring allow more variable features (i.e., post-translational modifications) to be reliably identified from (tandem) mass spectrometry data, often without the need for biochemical enrichment. Semi-specific proteome searches, which enforce a theoretical enzymatic digestion at solely the N- or C-terminal end, allow identification of native protein termini or those arising from endogenous proteolytic activity (also referred to as "neo-N-termini" analysis or "N-terminomics"). Nevertheless, deriving biological meaning from these search outputs can be challenging in terms of data mining and analysis. Thus, we introduce TermineR, a data analysis approach for the (1) annotation of peptides according to their enzymatic cleavage specificity and known protein processing features, (2) differential abundance and enrichment analysis of N-terminal sequence patterns, and (3) visualization of neo-N-termini location. We illustrate the use of TermineR by applying it to tandem mass tag (TMT)-based proteomics data of a mouse model of polycystic kidney disease, and we assess the semi-specific searches for biological interpretation of cleavage events and the variable contribution of proteolytic products to general protein abundance. The TermineR approach and example data are available as an R package at https://github.com/MiguelCos/TermineR.
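
As a generic illustration of the cleavage-specificity annotation described above (not the TermineR API itself, which is an R package), the following hypothetical Python sketch classifies a peptide as fully specific, semi-specific, or unspecific for a trypsin-like enzyme, given the residue preceding the peptide in the protein sequence.

```python
def cleavage_specificity(prev_aa: str, peptide: str, enzyme_residues: str = "KR") -> str:
    """Classify a peptide for a trypsin-like enzyme that cleaves C-terminal to K/R.
    prev_aa is the residue preceding the peptide in the protein ("-" marks the protein N-terminus)."""
    n_ok = prev_aa in enzyme_residues or prev_aa == "-"
    c_ok = peptide[-1] in enzyme_residues
    if n_ok and c_ok:
        return "fully specific"
    if c_ok:
        return "semi-specific (non-enzymatic N-terminus; candidate neo-N-terminus)"
    if n_ok:
        return "semi-specific (non-enzymatic C-terminus)"
    return "unspecific"

print(cleavage_specificity("K", "SAMPLEK"))   # fully specific
print(cleavage_specificity("A", "SAMPLER"))   # candidate neo-N-terminus from endogenous proteolysis
```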

4.
Cureus ; 16(7): e64263, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39130982

ABSTRACT

Fog computing is a decentralized computing infrastructure that processes data at or near its source, reducing latency and bandwidth usage. This technology is gaining traction in healthcare due to its potential to enhance real-time data processing and decision-making capabilities in critical medical scenarios. A systematic review of existing literature on fog computing in healthcare was conducted. The review included searches in major databases such as PubMed, IEEE Xplore, Scopus, and Google Scholar. The search terms used were "fog computing in healthcare," "real-time diagnostics and fog computing," "continuous patient monitoring fog computing," "predictive analytics fog computing," "interoperability in fog computing healthcare," "scalability issues fog computing healthcare," and "security challenges fog computing healthcare." Articles published between 2010 and 2023 were considered. Inclusion criteria encompassed peer-reviewed articles, conference papers, and review articles focusing on the applications of fog computing in healthcare. Exclusion criteria were articles not available in English, those not related to healthcare applications, and those lacking empirical data. Data extraction focused on the applications of fog computing in real-time diagnostics, continuous monitoring, predictive analytics, and the identified challenges of interoperability, scalability, and security. Fog computing significantly enhances diagnostic capabilities by facilitating real-time data analysis, crucial for urgent diagnostics such as stroke detection, by processing data closer to its source. It also improves monitoring during surgeries by enabling real-time processing of vital signs and physiological parameters, thereby enhancing patient safety. In chronic disease management, continuous data collection and analysis through wearable devices allow for proactive disease management and timely adjustments to treatment plans. Additionally, fog computing supports telemedicine by enabling real-time communication between remote specialists and patients, thereby improving access to specialist care in underserved regions. Fog computing offers transformative potential in healthcare, improving diagnostic precision, patient monitoring, and personalized treatment. Addressing the challenges of interoperability, scalability, and security will be crucial for fully realizing the benefits of fog computing in healthcare, leading to a more connected and efficient healthcare environment.

5.
Sci Rep ; 14(1): 19554, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39174587

ABSTRACT

Long-term losses in distribution networks during their development are caused by outdated approaches to network management, and traditional methods for analyzing and calculating distribution network losses cannot adapt to the current development environment of distribution networks. To improve the accuracy of filling missing values in power load data, a particle swarm optimization algorithm is proposed to optimize the cluster centers of the clustering algorithm. Furthermore, the isolation forest anomaly detection algorithm is used to detect outliers in the load data, and the coefficient of variation of the load data is used to improve the recognition accuracy of the algorithm. Finally, this paper introduces a breadth-first method for calculating line loss in the context of big data. An example is provided using the distribution network system of Yuxi City in Yunnan Province, and a simulation experiment is carried out. The findings revealed that the error of the enhanced fuzzy C-means clustering algorithm averaged -6.35 with a standard deviation of 4.015 when data were partially missing. The area under the receiver operating characteristic curve of the improved isolation forest algorithm in the case of fuzzy abnormal samples was 0.8586, the smallest decrease, owing to the use of the coefficient of variation; the refined analysis found a feeder line loss rate of 7.62%. It is confirmed that the suggested technique can carry out distribution network line loss analysis quickly and accurately and can serve as a guide for managing distribution network line losses.
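
As a minimal illustration of the outlier-detection step described above (without the paper's coefficient-of-variation enhancement), the sketch below applies scikit-learn's standard IsolationForest to synthetic daily load profiles; the data shapes and contamination level are assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic daily load profiles (96 readings at 15-minute resolution) with a few injected anomalies
load = rng.normal(50, 5, size=(200, 96))
load[:5] *= 3                                            # the first five days are anomalous

labels = IsolationForest(contamination=0.05, random_state=0).fit_predict(load)
print("flagged days:", np.where(labels == -1)[0])        # -1 marks suspected outliers
```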

6.
Endosc Int Open ; 12(8): E968-E980, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39184060

ABSTRACT

Rapid climate change, or the climate crisis, is one of the most serious emergencies of the 21st century, accounting for highly impactful and irreversible changes worldwide. The climate crisis can also affect the epidemiology and disease burden of gastrointestinal diseases because these have a connection with environmental factors and nutrition. Gastrointestinal endoscopy is a highly resource-intensive procedure with a significant contribution to greenhouse gas (GHG) emissions. Moreover, endoscopy is the third-highest generator of waste in healthcare facilities, with a significant contribution to the carbon footprint. The main sources of direct carbon emissions in endoscopy are the use of high-power-consumption devices (e.g., computers, anesthesia machines, wash machines for reprocessing, scope processors, and lighting) and waste production derived mainly from the use of disposable devices. Indirect sources of emissions are those derived from heating and cooling of facilities, processing of histological samples, and transportation of patients and materials. Consequently, sustainable endoscopy and climate change have been the focus of discussions between endoscopy providers and professional societies, with the aim of taking action to reduce environmental impact. The term "green endoscopy" refers to the practice of gastroenterology that aims to raise awareness of, assess, and reduce endoscopy's environmental impact. Nevertheless, while awareness has been growing, guidance about practical interventions to reduce the carbon footprint of gastrointestinal endoscopy is lacking. This review aims to summarize current data regarding the impact of endoscopy on GHG emissions and possible strategies to mitigate this phenomenon. Further, we aim to promote the evolution of a more sustainable "green endoscopy".

7.
Parkinsonism Relat Disord ; 127: 107104, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39153421

ABSTRACT

BACKGROUND: Evaluation of disease severity in Parkinson's disease (PD) relies on quantification of motor symptoms. However, during early-stage PD these symptoms are subtle and difficult for experts to quantify, which might result in delayed diagnosis and suboptimal disease management. OBJECTIVE: To evaluate the use of videos and machine learning (ML) for automatic quantification of motor symptoms in early-stage PD. METHODS: We analyzed videos of three movement tasks (Finger Tapping, Hand Movement, and Leg Agility) from 26 age-matched healthy controls and 31 early-stage PD patients. Utilizing ML algorithms for pose estimation, we extracted kinematic features from these videos and trained three classification models based on left-side movements, right-side movements, and right/left symmetry. The models were trained to differentiate healthy controls from early-stage PD from videos. RESULTS: Combining left-side, right-side, and symmetry features resulted in a PD detection accuracy of 79% from Finger Tapping videos, 75% from Hand Movement videos, 79% from Leg Agility videos, and 86% when combining the three tasks using a soft-voting approach. In contrast, the classification accuracy varied between 40% and 72% when the movement side or symmetry was not considered. CONCLUSIONS: Our methodology effectively differentiated between early-stage PD and healthy controls using videos of standardized motor tasks by integrating kinematic analyses of left-side, right-side, and bilateral-symmetry movements. These results demonstrate that ML can detect movement deficits in early-stage PD from videos. This technology is easy to use, highly scalable, and has the potential to improve the management and quantification of motor symptoms in early-stage PD.
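
The soft-voting combination of the three task-specific models can be sketched as follows; this is a hypothetical illustration with random stand-in features, not the authors' code, and the classifier choice and feature dimensions are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 57                                                 # 26 controls + 31 patients, as in the abstract
y = np.r_[np.zeros(26), np.ones(31)].astype(int)

# Hypothetical kinematic feature matrices, one per task (finger tapping, hand movement, leg agility)
tasks = [rng.normal(size=(n, 12)) + y[:, None] * 0.5 for _ in range(3)]

models = [LogisticRegression(max_iter=1000).fit(X, y) for X in tasks]

# Soft voting: average the per-task class probabilities, then take the most probable class
proba = np.mean([m.predict_proba(X) for m, X in zip(models, tasks)], axis=0)
y_pred = proba.argmax(axis=1)
print("training accuracy of the combined vote:", (y_pred == y).mean())
```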

8.
Anal Bioanal Chem ; 416(22): 4833-4848, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39090266

ABSTRACT

The increasing recognition of the health impacts of human exposure to per- and polyfluorinated alkyl substances (PFAS) has heightened the need for sophisticated analytical techniques and advanced data analyses, especially for assessing exposure through food of animal origin. Despite the existence of nearly 15,000 PFAS listed in the CompTox Chemicals Dashboard by the US Environmental Protection Agency, conventional monitoring and suspect screening methods often fall short, covering only a fraction of these substances. This study introduces an innovative automated data processing workflow, named PFlow, for identifying PFAS in environmental samples using direct infusion Fourier transform ion cyclotron resonance mass spectrometry (DI-FT-ICR MS). PFlow's validation on a bream liver sample, representative of low-concentration biota, involves data pre-processing, annotation of PFAS based on their precursor masses, and verification through isotopologues. Notably, PFlow annotated 17 PFAS absent from the comprehensive targeted approach and tentatively identified an additional 53 compounds, thereby demonstrating its efficiency in enhancing PFAS detection coverage. From an initial dataset of 30,332 distinct m/z values, PFlow systematically narrowed down the candidates to 84 potential PFAS compounds, utilizing precise mass measurements and chemical logic criteria, underscoring its potential to advance our understanding of PFAS prevalence and human exposure.
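
A minimal sketch of the precursor-mass annotation idea (matching measured m/z values against a reference list within a ppm tolerance) is shown below; it is not PFlow itself, and the tolerance and example masses are assumptions.

```python
import numpy as np

def match_masses(measured_mz, reference_mz, tol_ppm=2.0):
    """Return (measured_index, reference_index) pairs whose m/z values agree within tol_ppm."""
    measured_mz = np.asarray(measured_mz, dtype=float)
    reference_mz = np.asarray(reference_mz, dtype=float)
    pairs = []
    for i, mz in enumerate(measured_mz):
        ppm_error = 1e6 * (mz - reference_mz) / reference_mz
        hits = np.where(np.abs(ppm_error) <= tol_ppm)[0]
        pairs.extend((i, j) for j in hits)
    return pairs

# Example: the [M-H]- ion of PFOA (C8HF15O2) is ~412.9664; a measured peak at 412.9668 matches within 2 ppm
print(match_masses([412.9668, 500.1234], [412.9664, 498.9302]))
```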


Subject(s)
Fluorocarbons, Mass Spectrometry, Animals, Mass Spectrometry/methods, Fluorocarbons/analysis, Workflow, Biota, Automation, Environmental Monitoring/methods, Humans, Liver/chemistry
9.
J Med Internet Res ; 26: e58502, 2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39178032

ABSTRACT

As digital phenotyping, the capture of active and passive data from consumer devices such as smartphones, becomes more common, the need to properly process the data and derive replicable features from it has become paramount. Cortex is an open-source data processing pipeline for digital phenotyping data, optimized for use with the mindLAMP apps, which is used by nearly 100 research teams across the world. Cortex is designed to help teams (1) assess digital phenotyping data quality in real time, (2) derive replicable clinical features from the data, and (3) enable easy-to-share data visualizations. Cortex offers many options to work with digital phenotyping data, although some common approaches are likely of value to all teams using it. This paper highlights the reasoning, code, and example steps necessary to fully work with digital phenotyping data in a streamlined manner. Covering how to work with the data, assess its quality, derive features, and visualize findings, this paper is designed to offer the reader the knowledge and skills to apply toward analyzing any digital phenotyping data set. More specifically, the paper will teach the reader the ins and outs of the Cortex Python package. This includes background information on its interaction with the mindLAMP platform, some basic commands to learn what data can be pulled and how, and more advanced use of the package mixed with basic Python with the goal of creating a correlation matrix. After the tutorial, different use cases of Cortex are discussed, along with limitations. Toward highlighting clinical applications, this paper also provides 3 easy ways to implement examples of Cortex use in real-world settings. By understanding how to work with digital phenotyping data and providing ready-to-deploy code with Cortex, the paper aims to show how the new field of digital phenotyping can be both accessible to all and rigorous in methodology.
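
Because the Cortex API itself is documented with the package, the sketch below only illustrates the downstream correlation-matrix step on a hypothetical pandas DataFrame of already-derived daily features; the column names and values are stand-ins, not mindLAMP or Cortex output.

```python
import numpy as np
import pandas as pd

# Hypothetical DataFrame of daily features already derived (e.g., via Cortex) for one participant
rng = np.random.default_rng(1)
features = pd.DataFrame({
    "step_count": rng.integers(2000, 12000, 90),
    "screen_time_min": rng.normal(180, 40, 90),
    "home_time_hours": rng.normal(14, 3, 90),
    "survey_mood": rng.integers(0, 10, 90),
})

corr = features.corr(method="spearman")   # rank-based correlation is robust to outliers
print(corr.round(2))
```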


Subject(s)
Phenotype, Software, Humans, Biomarkers, Data Visualization
10.
Article in English | MEDLINE | ID: mdl-39190874

ABSTRACT

OBJECTIVES: Integration of social determinants of health into health outcomes research will allow researchers to study health inequities. The All of Us Research Program has the potential to be a rich source of social determinants of health data. However, user-friendly recommendations for scoring and interpreting the All of Us Social Determinants of Health Survey are needed to return value to communities through advancing researcher competencies in use of the All of Us Research Hub Researcher Workbench. We created a user guide aimed at providing researchers with an overview of the Social Determinants of Health Survey, recommendations for scoring and interpreting participant responses, and readily executable R and Python functions. TARGET AUDIENCE: This user guide targets registered users of the All of Us Research Hub Researcher Workbench, a cloud-based platform that supports analysis of All of Us data, who are currently conducting or planning to conduct analyses using the Social Determinants of Health Survey. SCOPE: We introduce 14 constructs evaluated as part of the Social Determinants of Health Survey and summarize construct operationalization. We offer 30 literature-informed recommendations for scoring participant responses and interpreting scores, with multiple options available for 8 of the constructs. Then, we walk through example R and Python functions for relabeling responses and scoring constructs that can be directly implemented in Jupyter Notebook or RStudio within the Researcher Workbench. Full source code is available in supplemental files and GitHub. Finally, we discuss psychometric considerations related to the Social Determinants of Health Survey for researchers.
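
As a hypothetical illustration of the kind of scoring function the guide describes (not its actual recommendations), the sketch below sums Likert-type items into a construct score with optional reverse coding; the item names and response scale are assumptions.

```python
import pandas as pd

def score_construct(df: pd.DataFrame, items, reverse=(), scale_max=5) -> pd.Series:
    """Sum Likert items into a construct score, reverse-coding selected items;
    returns NaN when any item response is missing."""
    scored = df[items].copy()
    for col in reverse:
        scored[col] = scale_max + 1 - scored[col]
    return scored.sum(axis=1, skipna=False)

# Hypothetical 3-item construct on a 1-5 scale, with item_2 reverse-coded
responses = pd.DataFrame({"item_1": [1, 4, 5], "item_2": [5, 2, 1], "item_3": [2, 4, None]})
responses["construct_score"] = score_construct(responses, ["item_1", "item_2", "item_3"],
                                               reverse=["item_2"])
print(responses)
```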

11.
J Environ Manage ; 368: 122157, 2024 Aug 10.
Article in English | MEDLINE | ID: mdl-39128349

ABSTRACT

With growing concerns about the protection of ecosystem functions and services, governments have developed public policies and organizations have produced an enormous volume of digital data freely available through their websites. At the same time, advances in data acquisition from remotely sensed sources and in processing through geographic information systems (GIS) and statistical tools have provided an unprecedented capacity to manage ecosystems efficiently. However, the real-world scenario remains paradoxically challenging. The reasons are many and diverse, but a strong candidate is the limited engagement among interested parties, which hampers bringing all these assets into action. The aim of this study is to demonstrate that management of ecosystem services can be significantly improved by integrating existing environmental policies with environmental big data and low-cost GIS and data processing tools. Using the Upper Rio das Velhas hydrographic basin in the state of Minas Gerais (Brazil) as an example, the study demonstrated how Principal Component Analysis based on a diversity of environmental variables grouped sub-basins into urban, agriculture, mining, and heterogeneous profiles, directing management of ecosystem services to the most appropriate officially established conservation plans. The use of GIS tools, in turn, allowed narrowing the implementation of each plan to specific sub-basins. This optimized allocation of preferential management plans to priority areas was discussed for a number of conservation plans. A paradigmatic example was the so-called Conservation Use Potential (CUP), devoted to the protection of aquifer recharge (provisioning service), the control of water erosion (regulation service), and the allocation of uses as a function of soil capability (support service). In all cases, the efficiency gains in readiness for plan implementation and in economy of resources were projected to be noteworthy.
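
A minimal sketch of the Principal Component Analysis step, using scikit-learn on hypothetical sub-basin indicators, is shown below; the variable names and data are stand-ins for the study's environmental dataset.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical environmental indicators per sub-basin (rows = sub-basins)
rng = np.random.default_rng(42)
X = pd.DataFrame(rng.normal(size=(30, 5)),
                 columns=["urban_pct", "agriculture_pct", "mining_pct", "forest_pct", "erosion_index"])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
# Sub-basins with similar profiles cluster together in the PC1-PC2 plane, which is the basis
# for assigning them to urban, agriculture, mining, or heterogeneous groups
print(scores[:5])
```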

12.
Environ Sci Pollut Res Int ; 31(35): 48725-48741, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39037623

ABSTRACT

The intelligent predictive and optimized wastewater treatment plant method represents a ground-breaking shift in how we manage wastewater. By capitalizing on data-driven predictive modeling, automation, and optimization strategies, it introduces a comprehensive framework designed to enhance the efficiency and sustainability of wastewater treatment operations. This methodology encompasses various essential phases, including data gathering and training, the integration of innovative computational models such as Chimp-based GoogLeNet (CbG), data processing, and performance prediction, all while fine-tuning operational parameters. The designed model is a hybrid of the Chimp optimization algorithm and GoogLeNet. GoogLeNet is a type of deep convolutional architecture, and Chimp optimization is a bio-inspired optimization model based on chimpanzee behavior. The hybrid optimizes the operational parameters of the wastewater treatment plant, such as pH, dosage rate, effluent quality, and energy consumption, by fixing the optimal settings in the GoogLeNet. The designed model includes processes such as pre-processing and feature analysis for effective prediction and optimization of the operational parameters. Notably, this innovative approach provides several key advantages, including cost reduction in operations, improved environmental outcomes, and more effective resource management. Through continuous adaptation and refinement, this methodology not only optimizes wastewater treatment plant performance but also effectively tackles evolving environmental challenges while conserving resources. It represents a significant step forward in the quest for efficient and sustainable wastewater treatment practices. The RMSE, MAE, MAPE, and R2 scores for the suggested technique are 1.103, 0.233, 0.012, and 0.002. The model also showed that power usage decreased by about 1.4% and greenhouse gas emissions decreased by 0.12% compared with existing techniques.


Asunto(s)
Eliminación de Residuos Líquidos , Aguas Residuales , Aguas Residuales/química , Eliminación de Residuos Líquidos/métodos , Algoritmos , Purificación del Agua/métodos
13.
Sensors (Basel) ; 24(13)2024 Jul 05.
Article in English | MEDLINE | ID: mdl-39001139

ABSTRACT

The paper "Using Absorption Models for Insulin and Carbohydrates and Deep Leaning to Improve Glucose Level Predictions" (Sensors2021, 21, 5273) proposes a novel approach to predicting blood glucose levels for people with type 1 diabetes mellitus (T1DM). By building exponential models from raw carbohydrate and insulin data to simulate the absorption in the body, the authors reported a reduction in their model's root-mean-square error (RMSE) from 15.5 mg/dL (raw) to 9.2 mg/dL (exponential) when predicting blood glucose levels one hour into the future. In this comment, we demonstrate that the experimental techniques used in that paper are flawed, which invalidates its results and conclusions. Specifically, after reviewing the authors' code, we found that the model validation scheme was malformed, namely, the training and test data from the same time intervals were mixed. This means that the reported RMSE numbers in the referenced paper did not accurately measure the predictive capabilities of the approaches that were presented. We repaired the measurement technique by appropriately isolating the training and test data, and we discovered that their models actually performed dramatically worse than was reported in the paper. In fact, the models presented in the that paper do not appear to perform any better than a naive model that predicts future glucose levels to be the same as the current ones.


Asunto(s)
Glucemia , Diabetes Mellitus Tipo 1 , Insulina , Insulina/metabolismo , Humanos , Glucemia/metabolismo , Glucemia/análisis , Diabetes Mellitus Tipo 1/metabolismo , Carbohidratos/química , Modelos Biológicos
14.
Int J Mol Sci ; 25(14)2024 Jul 11.
Article in English | MEDLINE | ID: mdl-39062836

ABSTRACT

Common challenges in cryogenic electron microscopy, such as orientation bias, conformational diversity, and 3D misclassification, complicate single particle analysis and lead to significant resource expenditure. We previously introduced an in silico method using the maximum Feret diameter distribution, the Feret signature, to characterize sample heterogeneity of disc-shaped samples. Here, we expanded the Feret signature methodology to identify preferred orientations of samples containing arbitrary shapes with only about 1000 particles required. This method enables real-time adjustments of data acquisition parameters for optimizing data collection strategies or aiding in decisions to discontinue ineffective imaging sessions. Beyond detecting preferred orientations, the Feret signature approach can serve as an early-warning system for inconsistencies in classification during initial image processing steps, a capability that allows for strategic adjustments in data processing. These features establish the Feret signature as a valuable auxiliary tool in the context of single particle analysis, significantly accelerating the structure determination process.
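
A minimal sketch of the maximum Feret diameter computation underlying the Feret signature is shown below; the particle outlines are simulated, and a real workflow would operate on coordinates extracted from segmented particle images.

```python
import numpy as np
from scipy.spatial.distance import pdist

def max_feret_diameter(points: np.ndarray) -> float:
    """Maximum Feret diameter of a particle outline given as an (N, 2) array of coordinates:
    the largest distance between any two boundary points."""
    return pdist(points).max()

# Hypothetical set of segmented particle outlines; the distribution of their maximum Feret
# diameters (the "Feret signature") is what the abstract uses to flag heterogeneity or orientation bias
rng = np.random.default_rng(0)
particles = [rng.normal(scale=rng.uniform(5, 15), size=(200, 2)) for _ in range(1000)]
signature = np.array([max_feret_diameter(p) for p in particles])
print("median:", np.median(signature), " IQR:", np.percentile(signature, [25, 75]))
```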


Asunto(s)
Microscopía por Crioelectrón , Procesamiento de Imagen Asistido por Computador , Flujo de Trabajo , Microscopía por Crioelectrón/métodos , Procesamiento de Imagen Asistido por Computador/métodos , Algoritmos , Imagenología Tridimensional/métodos
15.
Data Brief ; 54: 110254, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38962210

ABSTRACT

The current work presents the generation of a comprehensive spatial dataset of a lightweight beam element composed of four twisted plywood strips, achieved through the application of Structure-from-Motion (SfM) - Multi-view Stereo (MVS) photogrammetry techniques in controlled laboratory conditions. The data collection process was meticulously conducted to ensure accuracy and precision, employing scale bars of varying lengths. The captured images were then processed using photogrammetric software, leading to the creation of point clouds, meshes, and texture files. These data files represent the 3D model of the beam at different mesh sizes (raw, high-poly, medium-poly, and low-poly), adding a high level of detail to the 3D visualization. The dataset holds significant reuse potential and offers essential resources for further studies in numerical modeling, simulations of complex structures, and training machine learning algorithms. This data can also serve as validation sets for emerging photogrammetry methods and form-finding techniques, especially ones involving large deformations and geometric nonlinearities, particularly within the structural engineering field.

16.
Sensors (Basel) ; 24(13)2024 Jul 05.
Article in English | MEDLINE | ID: mdl-39001161

ABSTRACT

This study aimed to measure the differences in commonly used summary acceleration metrics during elite Australian football games under three different data processing protocols (raw, custom-processed, manufacturer-processed). Estimates of distance, speed and acceleration were collected with 10 Hz GNSS tracking devices across fourteen matches from 38 elite Australian football players from one team. Raw and manufacturer-processed data were exported from the respective proprietary software, and two common summary acceleration metrics (number of efforts and distance within the medium/high-intensity zone) were calculated for the three processing methods. To estimate the effect of the three different data processing methods on the summary metrics, linear mixed models were used. The main findings demonstrated that there were substantial differences between the three processing methods: the manufacturer-processed acceleration data had the lowest reported distance (up to 184 times lower) and efforts (up to 89 times lower), followed by the custom-processed distance (up to 3.3 times lower) and efforts (up to 4.3 times lower), while raw data had the highest reported distance and efforts. The results indicated that different processing methods changed the metric output and, in turn, altered the quantification of the demands of the sport (volume, intensity and frequency of the metrics). Coaches, practitioners and researchers need to understand that various processing methods alter the summary metrics of acceleration data. By being informed about how these metrics are affected by processing methods, they can better interpret the data available and effectively tailor their training programs to match the demands of competition.
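
To make the summary metrics concrete, the sketch below counts acceleration "efforts" (entries into an acceleration zone) and the distance covered while within that zone from a speed trace; the zone thresholds, sampling rate, and crude synthetic trace are assumptions, and none of the three processing protocols compared in the study is reproduced here.

```python
import numpy as np

def accel_zone_summary(speed_ms, hz=10, zone=(2.0, 3.0)):
    """Count acceleration efforts entering a zone (m/s^2) and the distance covered while in it,
    from a speed trace (m/s) sampled at `hz` Hz."""
    speed = np.asarray(speed_ms, dtype=float)
    accel = np.gradient(speed) * hz                             # numerical derivative of speed
    in_zone = (accel >= zone[0]) & (accel < zone[1])
    efforts = int(np.sum(np.diff(in_zone.astype(int)) == 1))    # rising edges = zone entries
    distance = float(np.sum(speed[in_zone]) / hz)               # distance while in the zone
    return efforts, distance

# Crude synthetic 10 Hz speed trace; a real analysis would use filtered GNSS exports
rng = np.random.default_rng(3)
speed = np.clip(4 + np.cumsum(rng.normal(0, 0.2, 6000)), 0, 9)
print(accel_zone_summary(speed))
```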

17.
BMC Med Inform Decis Mak ; 24(1): 194, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39014361

ABSTRACT

This research study demonstrates an efficient scheme for early detection of cardiorespiratory complications in pandemics by utilizing wearable electrocardiogram (ECG) sensors for pattern generation and convolutional neural networks (CNN) for decision analytics. In health-related outbreaks, timely and early diagnosis of such complications is crucial for reducing mortality rates and alleviating the burden on healthcare facilities. Existing methods rely on clinical assessments, medical history reviews, and hospital-based monitoring, which are valuable but have limitations in terms of accessibility, scalability, and timeliness, particularly during pandemics. The proposed scheme commences by deploying wearable ECG sensors on the patient's body. These sensors collect data by continuously monitoring the cardiac activity and respiratory patterns of the patient. The collected raw data are then transmitted securely and wirelessly to a centralized server and stored in a database. Subsequently, the stored data are passed through a preprocessing step that extracts relevant features such as heart rate variability and respiratory rate. The preprocessed data are then used as input to the CNN model for the classification of normal and abnormal cardiorespiratory patterns. To achieve high accuracy in abnormality detection, the CNN model is trained on labeled data with optimized parameters. The performance of the proposed scheme was evaluated under different scenarios and shows robust performance in detecting abnormal cardiorespiratory patterns, with a sensitivity of 95% and a specificity of 92%. Prominent observations, which highlight the potential for early interventions, include subtle changes in heart rate variability and preceding respiratory distress. These findings show the significance of wearable ECG technology in improving pandemic management strategies and informing public health policies, enhancing preparedness and resilience in the face of emerging health threats.
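
As a hedged sketch of the classification stage (not the authors' architecture), a minimal 1D convolutional network in PyTorch that maps fixed-length windows of preprocessed physiological signals to a normal/abnormal decision could look like this; the channel count and window length are assumptions.

```python
import torch
import torch.nn as nn

# Minimal 1D CNN for binary classification of fixed-length physiological signal windows
# (assumed here: 2 channels, e.g. heart-rate variability and respiratory rate, 300 samples each)
class SmallCNN(nn.Module):
    def __init__(self, channels=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, 2)           # two classes: normal vs. abnormal

    def forward(self, x):                            # x shape: (batch, channels, length)
        return self.classifier(self.features(x).squeeze(-1))

model = SmallCNN()
dummy = torch.randn(8, 2, 300)                       # a batch of 8 windows
print(model(dummy).shape)                            # torch.Size([8, 2])
```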


Asunto(s)
Diagnóstico Precoz , Electrocardiografía , Redes Neurales de la Computación , Dispositivos Electrónicos Vestibles , Humanos , Electrocardiografía/instrumentación , COVID-19/diagnóstico
18.
Talanta ; 279: 126616, 2024 Nov 01.
Article in English | MEDLINE | ID: mdl-39067205

ABSTRACT

Exposomics aims to measure human exposures throughout the lifespan and the changes they produce in the human body. Exposome-scale studies have significant potential to explain the interplay of environmental factors with complex multifactorial diseases that are widespread in our society and whose origins remain unclear. In this framework, the study of the chemical exposome aims to cover all chemical exposures and their effects on human health; today, however, this goal still seems unfeasible, or at least very challenging, which makes the exposome for now only a concept. Furthermore, the study of the chemical exposome faces several methodological challenges, such as moving from specific targeted methodologies towards high-throughput multitargeted and non-targeted approaches, guaranteeing the availability and quality of biological samples to obtain quality analytical data, standardizing the applied analytical methodologies, the statistical treatment of increasingly complex datasets, and the identification of (un)known analytes. This review discusses the various steps involved in applying the exposome concept from an analytical perspective. It provides an overview of the wide variety of existing analytical methods and instruments, highlighting their complementarity for developing combined analytical strategies to advance towards chemical exposome characterization. In addition, this review focuses on endocrine disrupting chemicals (EDCs) to show how studying even a minor part of the chemical exposome represents a great challenge. Analytical strategies applied in an exposomics context have shown great potential to elucidate the role of EDCs in health outcomes. However, translating innovative methods into etiological research and chemical risk assessment will require a multidisciplinary effort. Unlike other review articles focused on exposomics, this review offers a holistic view from the perspective of analytical chemistry and discusses the entire analytical workflow needed to finally obtain valuable results.


Asunto(s)
Disruptores Endocrinos , Exposoma , Disruptores Endocrinos/análisis , Humanos , Exposición a Riesgos Ambientales/análisis
19.
Article in English | MEDLINE | ID: mdl-39013167

ABSTRACT

Mass spectrometry is broadly employed to study complex molecular mechanisms in various biological and environmental fields, enabling 'omics' research such as proteomics, metabolomics, and lipidomics. As study cohorts grow larger and more complex with dozens to hundreds of samples, the need for robust quality control (QC) measures through automated software tools becomes paramount to ensure the integrity, high quality, and validity of scientific conclusions from downstream analyses and minimize the waste of resources. Since existing QC tools are mostly dedicated to proteomics, automated solutions supporting metabolomics are needed. To address this need, we developed the software PeakQC, a tool for automated QC of MS data that is independent of omics molecular types (i.e., omics-agnostic). It allows automated extraction and inspection of peak metrics of precursor ions (e.g., errors in mass, retention time, arrival time) and supports various instrumentations and acquisition types, from infusion experiments or using liquid chromatography and/or ion mobility spectrometry front-end separations and with/without fragmentation spectra from data-dependent or independent acquisition analyses. Diagnostic plots for fragmentation spectra are also generated. Here, we describe and illustrate PeakQC's functionalities using different representative data sets, demonstrating its utility as a valuable tool for enhancing the quality and reliability of omics mass spectrometry analyses.

20.
Methods Mol Biol ; 2844: 85-96, 2024.
Article in English | MEDLINE | ID: mdl-39068333

ABSTRACT

Automated high-throughput methods that support tracking of mammalian cell growth are currently needed to advance cell line characterization and identification of desired genetic components required for cell engineering. Here, we describe a high-throughput noninvasive assay based on plate reader measurements. The assay relies on the change in absorbance of the pH indicator phenol red. We show that its basic and acidic absorbance profiles can be converted into a cell growth index consistent with cell count profiles, and that, by adopting a computational pipeline and calibration measurements, it is possible to identify a conversion that enables prediction of cell numbers from plate measurements alone. The assay is suitable for growth characterization of both suspension and adherent cell lines when these are grown under different environmental conditions and treated with chemotherapeutic drugs. The method also supports characterization of stably engineered cell lines and identification of desired promoters based on fluorescence output.
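
A hedged sketch of the calibration idea, converting a phenol red absorbance ratio into a predicted cell number via a fitted calibration curve, is shown below; the wavelengths, calibration values, and log-linear relationship are assumptions for illustration only, not the chapter's protocol.

```python
import numpy as np

# Hypothetical calibration: absorbance ratio (acidic ~430 nm over basic ~560 nm peak of phenol red)
# measured alongside manual cell counts; the actual wavelengths and relationship depend on the assay
ratio_calib = np.array([0.8, 1.0, 1.3, 1.7, 2.2, 2.8])
cells_calib = np.array([0.5e5, 1.0e5, 2.1e5, 4.0e5, 7.9e5, 1.6e6])

# Fit log10(cell count) as a linear function of the absorbance ratio
slope, intercept = np.polyfit(ratio_calib, np.log10(cells_calib), 1)

def predict_cells(ratio):
    """Predict cell numbers from new plate-reader absorbance ratios using the fitted calibration."""
    return 10 ** (slope * np.asarray(ratio) + intercept)

print(predict_cells([1.1, 2.0]))
```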


Subject(s)
Cell Proliferation, Genetic Promoter Regions, Animals, Humans, Cell Engineering/methods, Phenolsulfonphthalein, Cell Line, High-Throughput Screening Assays/methods, Cell Culture Techniques/methods, Hydrogen-Ion Concentration