Results 1 - 20 of 890

1.
Methods ; 222: 81-99, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38185226

ABSTRACT

Many of the health-associated impacts of the microbiome are mediated by its chemical activity, producing and modifying small molecules (metabolites). Thus, microbiome metabolite quantification has a central role in efforts to elucidate and measure microbiome function. In this review, we cover general considerations when designing experiments to quantify microbiome metabolites, including sample preparation, data acquisition and data processing, since these are critical to downstream data quality. We then discuss data analysis and experimental steps to demonstrate that a given metabolite feature is of microbial origin. We further discuss techniques used to quantify common microbial metabolites, including short-chain fatty acids (SCFA), secondary bile acids (BAs), tryptophan derivatives, N-acyl amides and trimethylamine N-oxide (TMAO). Lastly, we conclude with challenges and future directions for the field.


Subject(s)
Gastrointestinal Microbiome , Microbiota , Humans , Microbiota/genetics , Fatty Acids, Volatile/metabolism , Methylamines/metabolism
2.
Proteomics ; 24(8): e2300112, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37672792

ABSTRACT

Machine learning (ML) and deep learning (DL) models for peptide property prediction, such as Prosit, have enabled the creation of high-quality in silico reference libraries. These libraries are used in applications ranging from data-independent acquisition (DIA) data analysis to data-driven rescoring of search engine results. Here, we present Oktoberfest, an open-source Python package of our spectral library generation and rescoring pipeline, originally available only online via ProteomicsDB. Oktoberfest is largely search-engine agnostic and provides access to online peptide property predictions, promoting the adoption of state-of-the-art ML/DL models in proteomics analysis pipelines. We demonstrate its ability to reproduce and even improve on our previously published rescoring analyses in two distinct use cases. Oktoberfest is freely available on GitHub (https://github.com/wilhelm-lab/oktoberfest) and can easily be installed locally from PyPI.


Subject(s)
Proteomics , Software , Proteomics/methods , Peptides , Algorithms
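A core feature in this kind of data-driven rescoring is the similarity between a predicted and an observed fragment-ion spectrum, often expressed as a normalized spectral contrast angle. The sketch below is a generic Python version of that calculation, not Oktoberfest's own code; see the GitHub repository for the package's actual API.

```python
import numpy as np

def spectral_angle(observed: np.ndarray, predicted: np.ndarray) -> float:
    """Normalized spectral contrast angle between two intensity vectors.

    Returns 1.0 for identical spectra and 0.0 for orthogonal ones; commonly
    used as a feature when rescoring search-engine results.
    """
    obs = observed / np.linalg.norm(observed)
    pred = predicted / np.linalg.norm(predicted)
    cos_sim = np.clip(np.dot(obs, pred), -1.0, 1.0)
    return 1.0 - 2.0 * np.arccos(cos_sim) / np.pi

# Toy example: measured vs. predicted fragment intensities (made-up values)
observed = np.array([0.10, 0.80, 0.00, 0.35])
predicted = np.array([0.12, 0.75, 0.05, 0.30])
print(f"spectral angle: {spectral_angle(observed, predicted):.3f}")
```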
3.
Proteomics ; : e2400078, 2024 Jun 02.
Article in English | MEDLINE | ID: mdl-38824665

ABSTRACT

The human gut microbiome plays a vital role in preserving individual health and is intricately involved in essential physiological functions. Imbalances, or dysbiosis, within the microbiome can significantly impact human health and are associated with many diseases. Several metaproteomics platforms are currently available to study microbial proteins within complex microbial communities. In this study, we developed an integrated pipeline to provide deeper insights into both the taxonomic and functional aspects of cultivated human gut microbiomes derived from clinical colon biopsies. We combined rapid peptide search by MSFragger against the Unified Human Gastrointestinal Protein database with taxonomic and functional analyses using Unipept Desktop and MetaLab-MAG. Across seven samples, we identified nearly 36,000 unique peptides and matched them to approximately 300 species and 11 phyla. Unipept Desktop provided gene ontology, InterPro entries, and enzyme commission number annotations, facilitating the identification of relevant metabolic pathways. MetaLab-MAG contributed functional annotations through Clusters of Orthologous Genes and Non-supervised Orthologous Groups categories. These results unveiled functional similarities and differences among the samples. This integrated pipeline can provide deeper insight into the taxonomy and functions of the human gut microbiome and help interrogate the intricate connections between microbiome balance and disease.
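Taxonomic assignment of shared peptides, as performed by Unipept, is based on the lowest common ancestor (LCA) of all organisms whose proteins contain the peptide. A minimal Python illustration of the idea, with hypothetical lineages rather than Unipept's actual implementation:

```python
def lowest_common_ancestor(lineages):
    """Return the deepest taxon shared by all root-to-leaf lineages."""
    lca = None
    for ranks in zip(*lineages):  # walk rank by rank from the root
        if len(set(ranks)) != 1:
            break
        lca = ranks[0]
    return lca

# A peptide matched to proteins from three organisms (hypothetical lineages):
lineages = [
    ["Bacteria", "Bacillota", "Clostridia", "Lachnospirales"],
    ["Bacteria", "Bacillota", "Clostridia", "Eubacteriales"],
    ["Bacteria", "Bacillota", "Clostridia", "Eubacteriales"],
]
print(lowest_common_ancestor(lineages))  # -> Clostridia
```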

4.
Proteomics ; 24(12-13): e2200436, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38438732

ABSTRACT

Ion mobility spectrometry-mass spectrometry (IMS-MS or IM-MS) is a powerful analytical technique that combines the gas-phase separation capabilities of IM with the identification and quantification capabilities of MS. IM-MS can differentiate molecules with indistinguishable masses but different structures (e.g., isomers, isobars, molecular classes, and contaminant ions). The importance of this analytical technique is reflected in the steady increase in the number of applications for molecular characterization across a variety of fields, from MS-based omics (proteomics, metabolomics, lipidomics, etc.) to the structural characterization of glycans, organic matter, proteins, and macromolecular complexes. With the increasing application of IM-MS, there is a pressing need for effective and accessible computational tools. This article presents an overview of the most recent free and open-source software tools specifically tailored to the analysis and interpretation of data from IM-MS instrumentation. The review enumerates these tools and outlines their main algorithmic approaches, highlighting representative applications across different fields. Finally, current limitations and anticipated improvements are discussed.


Subject(s)
Algorithms , Ion Mobility Spectrometry , Mass Spectrometry , Software , Ion Mobility Spectrometry/methods , Mass Spectrometry/methods , Proteomics/methods , Metabolomics/methods , Humans
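Many of the reviewed tools ultimately convert a measured (reduced) mobility into a collision cross-section (CCS) via the Mason-Schamp equation. A minimal sketch of that conversion for drift-tube data; the input values are illustrative, not instrument-specific:

```python
import math

KB = 1.380649e-23           # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C
N0 = 2.6867811e25           # Loschmidt constant (standard gas density), m^-3
DA_TO_KG = 1.66053906660e-27

def ccs_mason_schamp(k0_cm2, z, ion_mass_da, gas_mass_da, temp_k=298.0):
    """CCS in A^2 from reduced mobility K0 (cm^2 V^-1 s^-1) via Mason-Schamp."""
    mu = ion_mass_da * gas_mass_da / (ion_mass_da + gas_mass_da) * DA_TO_KG
    k0 = k0_cm2 * 1e-4  # -> m^2 V^-1 s^-1
    omega = (3.0 / 16.0) * math.sqrt(2.0 * math.pi / (mu * KB * temp_k)) \
            * z * E_CHARGE / (N0 * k0)
    return omega * 1e20  # m^2 -> A^2

# Singly charged 922 Da ion drifting in N2 (28 Da): prints roughly 240 A^2
print(f"CCS = {ccs_mason_schamp(0.84, 1, 922.0, 28.0):.0f} A^2")
```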
5.
J Proteome Res ; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38833568

ABSTRACT

Direct-to-mass-spectrometry and ambient ionization techniques enable rapid biochemical fingerprinting, and data processing is typically accomplished with vendor-provided software tools. Here, a novel open-source tool, Tidy-Direct-to-MS, was developed for processing direct-to-MS data sets. It allows fast and user-friendly processing through modules for optional sample position detection and separation, mass-to-charge ratio drift detection and correction, consensus spectrum calculation and bracketing across sample positions, and feature abundance calculation. The tool also automates the comparison of different parameter sets, assisting the user in the complex task of finding a combination that maximizes the total number of detected features while checking for the detection of user-provided reference features. In addition, Tidy-Direct-to-MS supports data quality review and subsequent data analysis, thereby simplifying the workflow of untargeted ambient MS-based metabolomics studies. Tidy-Direct-to-MS is implemented in Python as part of the TidyMS library and can thus be easily extended. Its capabilities are showcased on a data set from a marine metabolomics study reported in MetaboLights (MTBLS1198), acquired with a transmission-mode Direct Analysis in Real Time mass spectrometry (TM-DART-MS) method.
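Of the modules listed above, m/z drift correction is the simplest to sketch: track a reference ("lock mass") feature across the run and interpolate a per-scan correction factor. The fragment below illustrates this generic strategy; it is not the TidyMS implementation, and all values are hypothetical.

```python
import numpy as np

def correct_mz_drift(mz, scan_time, ref_times, ref_mz_observed, ref_mz_true):
    """Multiplicative m/z drift correction from a reference feature over time."""
    factors = ref_mz_true / np.asarray(ref_mz_observed)    # correction per anchor
    return mz * np.interp(scan_time, ref_times, factors)   # interpolate in time

# Reference ion (true m/z 554.2615) drifting slowly over a 60 s acquisition:
ref_times = np.array([0.0, 30.0, 60.0])
ref_obs = np.array([554.2641, 554.2653, 554.2660])
print(f"{correct_mz_drift(301.1410, 45.0, ref_times, ref_obs, 554.2615):.4f}")
```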

6.
Hum Brain Mapp ; 45(8): e26751, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38864293

ABSTRACT

Effective connectivity (EC) refers to directional or causal influences between interacting neuronal populations or brain regions and can be estimated from functional magnetic resonance imaging (fMRI) data via dynamic causal modeling (DCM). In contrast to functional connectivity, the impact of data processing varieties on DCM estimates of task-evoked EC has hardly ever been addressed. We therefore investigated how task-evoked EC is affected by choices made for data processing. In particular, we considered the impact of global signal regression (GSR), block/event-related design of the general linear model (GLM) used for the first-level task-evoked fMRI analysis, type of activation contrast, and significance thresholding approach. Using DCM, we estimated individual and group-averaged task-evoked EC within a brain network related to spatial conflict processing for all the parameters considered and compared the differences in task-evoked EC between any two data processing conditions via between-group parametric empirical Bayes (PEB) analysis and Bayesian data comparison (BDC). We observed strongly varying patterns of the group-averaged EC depending on the data processing choices. In particular, task-evoked EC and parameter certainty were strongly impacted by GLM design and type of activation contrast as revealed by PEB and BDC, respectively, whereas they were little affected by GSR and the type of significance thresholding. The event-related GLM design appears to be more sensitive to task-evoked modulations of EC, but provides model parameters with lower certainty than the block-based design, while the latter is more sensitive to the type of activation contrast than is the event-related design. Our results demonstrate that applying different reasonable data processing choices can substantially alter task-evoked EC as estimated by DCM. Such choices should be made with care and, whenever possible, varied across parallel analyses to evaluate their impact and identify potential convergence for robust outcomes.


Subject(s)
Bayes Theorem , Brain Mapping , Brain , Magnetic Resonance Imaging , Humans , Brain/physiology , Brain/diagnostic imaging , Male , Female , Brain Mapping/methods , Adult , Young Adult , Models, Neurological , Image Processing, Computer-Assisted/methods , Neural Pathways/physiology , Neural Pathways/diagnostic imaging
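The block and event-related GLM designs compared here differ only in how the stimulus function is built before convolution with a haemodynamic response function (HRF). A minimal numpy/scipy sketch of the two regressor types, with SPM-style double-gamma parameters and made-up onsets:

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Canonical double-gamma haemodynamic response (SPM-style parameters)."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

tr, n_scans = 2.0, 200
t_hrf = np.arange(0.0, 32.0, tr)

block, event = np.zeros(n_scans), np.zeros(n_scans)
for onset in np.arange(20.0, 380.0, 40.0):    # hypothetical task onsets (s)
    idx = int(onset / tr)
    block[idx:idx + int(20.0 / tr)] = 1.0     # 20 s on-blocks
    event[idx] = 1.0                          # brief events at the onsets

block_reg = np.convolve(block, hrf(t_hrf))[:n_scans]  # block-design regressor
event_reg = np.convolve(event, hrf(t_hrf))[:n_scans]  # event-related regressor
print(block_reg.max(), event_reg.max())
```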
7.
Small ; 20(25): e2306585, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38212281

ABSTRACT

Compact yet precise feature extraction is central to processing complex computational tasks in neuromorphic hardware. Physical reservoir computing (RC) offers a robust framework for mapping temporal data into a high-dimensional space using the time dynamics of a material system, such as a volatile memristor. However, conventional physical RC systems have limited dynamics for given material properties, restricting how their dimensionality can be increased. This study proposes an integrated temporal kernel composed of two memristors and one capacitor (2M1C), using a W/HfO2/TiN memristor and a TiN/ZrO2/Al2O3/ZrO2/TiN capacitor, to achieve higher dimensionality and tunable dynamics. The kernel elements are carefully designed and fabricated into an integrated array, whose performance is evaluated under diverse conditions. By optimizing the time dynamics of the 2M1C kernel, each memristor simultaneously extracts complementary information from input signals. On the MNIST benchmark digit classification task, the system achieves a high accuracy of 94.3% with a (196×10) single-layer network. Analog input mapping ability is tested with Mackey-Glass time-series prediction, and the system records a normalized root mean square error of 0.04 with a 20×1 readout network, the smallest readout network ever used for Mackey-Glass prediction in RC. These performances demonstrate its high potential for efficient temporal data analysis.
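The 20×1 readout mentioned above is the only trained part of a reservoir computer: the reservoir itself is fixed, and a linear map from its states to the target is fitted, typically by ridge regression. The sketch below substitutes a 20-node software echo-state reservoir for the physical 2M1C kernel (purely an illustrative assumption) and fits a 20×1 readout on a Mackey-Glass series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mackey-Glass series (tau = 17) via Euler integration
n, tau = 3000, 17
x = np.full(n, 1.2)
for t in range(tau, n - 1):
    x[t + 1] = x[t] + 0.2 * x[t - tau] / (1 + x[t - tau] ** 10) - 0.1 * x[t]

# Stand-in reservoir: 20 leaky tanh nodes with fixed random weights
N = 20
w_in = rng.uniform(-0.5, 0.5, N)
w_res = rng.uniform(-0.5, 0.5, (N, N))
w_res *= 0.9 / np.abs(np.linalg.eigvals(w_res)).max()  # spectral radius < 1
states = np.zeros((n, N))
for t in range(1, n):
    states[t] = 0.7 * states[t - 1] + 0.3 * np.tanh(
        w_in * x[t - 1] + w_res @ states[t - 1])

# The trained part: a 20x1 linear readout fitted by ridge regression
washout, lam = 200, 1e-6
S, y = states[washout:-1], x[washout + 1:]
w_out = np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ y)
print(f"in-sample one-step check: {states[-2] @ w_out:.4f} vs {x[-1]:.4f}")
```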

8.
J Synchrotron Radiat ; 31(Pt 3): 635-645, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38656774

ABSTRACT

With the development of synchrotron radiation sources and high-frame-rate detectors, the amount of experimental data collected at synchrotron radiation beamlines has increased exponentially. As a result, data processing for synchrotron radiation experiments has entered the era of big data. It is becoming increasingly important for beamlines to have the capability to process large-scale data in parallel to keep up with the rapid growth of data. Currently, there is no set of data processing solutions based on the big data technology framework for beamlines. Apache Hadoop is a widely used distributed system architecture for solving the problem of massive data storage and computation. This paper presents a set of distributed data processing schemes for beamlines with experimental data using Hadoop. The Hadoop Distributed File System is utilized as the distributed file storage system, and Hadoop YARN serves as the resource scheduler for the distributed computing cluster. A distributed data processing pipeline that can carry out massively parallel computation is designed and developed using Hadoop Spark. The entire data processing platform adopts a distributed microservice architecture, which makes the system easy to expand, reduces module coupling and improves reliability.
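The massively parallel stage of such a pipeline maps naturally onto Spark's programming model. A minimal PySpark sketch with a hypothetical per-frame reduction step; in production, frames would be read from HDFS rather than generated in driver memory:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("beamline-frame-reduction").getOrCreate()

def reduce_frame(record):
    """Hypothetical reduction: dark-subtract a frame and integrate it."""
    frame_id, pixels = record
    return frame_id, sum(max(p - 100, 0) for p in pixels)  # stand-in dark level

# Synthetic detector frames; numSlices controls the degree of parallelism
frames = [(i, [100 + (i * j) % 50 for j in range(1024)]) for i in range(1000)]
rdd = spark.sparkContext.parallelize(frames, numSlices=64)
print(rdd.map(reduce_frame).take(3))
spark.stop()
```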

9.
J Synchrotron Radiat ; 31(Pt 1): 28-34, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-38095667

ABSTRACT

During X-ray diffraction experiments on single crystals, the diffracted beam intensities may be affected by multiple-beam X-ray diffraction (MBD). This effect is particularly frequent at higher X-ray energies and for larger unit cells. The appearance of this so-called Renninger effect often impairs the interpretation of diffracted intensities. This applies in particular to energy spectra analysed in resonant experiments, since during scans of the incident photon energy the MBD conditions are necessarily met at specific X-ray energies. The effect can be addressed by carefully avoiding multiple-beam reflection conditions at a given X-ray energy and a given position in reciprocal space. However, regions (nearly) free of MBD are not always available. This article presents a universal concept of data acquisition and post-processing for resonant X-ray diffraction experiments. Our concept facilitates the reliable determination of kinematic (MBD-free) resonant diffraction intensities even at relatively high energies, which, in turn, enables the study of higher absorption edges. In this way, the applicability of resonant diffraction, e.g. to reveal the local atomic and electronic structure or chemical environment, is extended to the vast majority of crystalline materials. The potential of this approach compared with conventional data reduction is demonstrated by measurements at the Ta L3 edge of the well-studied lithium tantalate LiTaO3.
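Avoiding multiple-beam conditions amounts to checking, for each candidate secondary reflection g, how far the reciprocal-lattice point k_in + g lies from the Ewald sphere. A small numpy sketch of that test, with hypothetical reciprocal-lattice vectors rather than the authors' acquisition code:

```python
import numpy as np

def excitation_error(k_in, g):
    """Distance of k_in + g from the Ewald sphere (|k_in| = 1/lambda).

    Near-zero values flag a simultaneously excited (multiple-beam) reflection.
    """
    return np.linalg.norm(k_in + g) - np.linalg.norm(k_in)

wavelength = 12.398 / 10.0                   # ~1.24 A at 10 keV
k_in = np.array([1.0 / wavelength, 0.0, 0.0])

candidates = {                               # hypothetical secondary reflections
    "(1 1 0)": np.array([-0.4033, 0.6985, 0.0000]),
    "(2 0 1)": np.array([-0.1000, 0.7900, 0.1100]),
}
for hkl, g in candidates.items():
    err = excitation_error(k_in, g)
    print(hkl, "excited" if abs(err) < 5e-3 else "safe", f"(err = {err:+.4f} 1/A)")
```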

10.
J Synchrotron Radiat ; 31(Pt 4): 670-680, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38838166

ABSTRACT

Deflectometric profilometers are used to precisely measure the form of beam shaping optics of synchrotrons and X-ray free-electron lasers. They often utilize autocollimators which measure slope by evaluating the displacement of a reticle image on a detector. Based on our privileged access to the raw image data of an autocollimator, novel strategies to reduce the systematic measurement errors by using a set of overlapping images of the reticle obtained at different positions on the detector are discussed. It is demonstrated that imaging properties such as, for example, geometrical distortions and vignetting, can be extracted from this redundant set of images without recourse to external calibration facilities. This approach is based on the fact that the properties of the reticle itself do not change - all changes in the reticle image are due to the imaging process. Firstly, by combining interpolation and correlation, it is possible to determine the shift of a reticle image relative to a reference image with minimal error propagation. Secondly, the intensity of the reticle image is analysed as a function of its position on the CCD and a vignetting correction is calculated. Thirdly, the size of the reticle image is analysed as a function of its position and an imaging distortion correction is derived. It is demonstrated that, for different measurement ranges and aperture diameters of the autocollimator, reductions in the systematic errors of up to a factor of four to five can be achieved without recourse to external measurements.
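Sub-pixel estimation of a reticle image's displacement against a reference is commonly done by upsampled phase cross-correlation. The authors describe their own interpolation-plus-correlation scheme; the sketch below shows the generic approach with scikit-image on synthetic data:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(1)
reference = rng.normal(size=(256, 256))                # stand-in reticle image
moved = nd_shift(reference, (3.7, -1.2), mode="wrap")  # known sub-pixel shift

# upsample_factor=100 resolves the displacement to 1/100 of a pixel
estimated, error, _ = phase_cross_correlation(reference, moved,
                                              upsample_factor=100)
print(estimated)  # approximately [-3.7, 1.2]
```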

11.
Brief Bioinform ; 23(6)2022 11 19.
Article in English | MEDLINE | ID: mdl-36274234

ABSTRACT

Large-scale metabolomics is a powerful technique that has attracted widespread attention in biomedical studies focused on identifying biomarkers and interpreting the mechanisms of complex diseases. Despite a rapid increase in the number of large-scale metabolomic studies, the analysis of metabolomic data remains a key challenge. Specifically, diverse unwanted variations and batch effects in processing many samples have a substantial impact on identifying true biological markers, and annotating a plethora of peaks as metabolites in untargeted mass spectrometry-based metabolomics is daunting. Therefore, an out-of-the-box tool is urgently needed to realize data integration and accurate metabolite annotation. In this study, the R-based LargeMetabo package was developed for processing and analyzing large-scale metabolomic data. The package is unique because it is capable of (1) integrating multiple analytical experiments to effectively boost the power of statistical analysis; (2) selecting the appropriate biomarker identification method by intelligent assessment for large-scale metabolic data; and (3) providing metabolite annotation and enrichment analysis based on an enhanced metabolite database. The LargeMetabo package can facilitate flexibility and reproducibility in large-scale metabolomics. The package is freely available from https://github.com/LargeMetabo/LargeMetabo.


Subject(s)
Metabolomics , Software , Reproducibility of Results , Metabolomics/methods , Mass Spectrometry , Biomarkers
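LargeMetabo itself is an R package; purely to illustrate the kind of unwanted-variation removal discussed above, here is a minimal Python sketch that removes per-batch intensity shifts by median centering:

```python
import numpy as np

def median_center_batches(intensity, batch_labels):
    """Align each batch's per-feature median with the global median."""
    intensity = np.asarray(intensity, dtype=float)
    corrected = intensity.copy()
    global_median = np.median(intensity, axis=0)
    for b in np.unique(batch_labels):
        rows = batch_labels == b
        corrected[rows] += global_median - np.median(intensity[rows], axis=0)
    return corrected

# Three features measured in two batches, with an offset in batch "B"
X = np.array([[10.0, 5.0, 8.0], [11.0, 5.5, 8.2],
              [13.0, 7.0, 10.0], [12.5, 7.2, 9.8]])
print(median_center_batches(X, np.array(["A", "A", "B", "B"])))
```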
12.
Brief Bioinform ; 23(1)2022 01 17.
Article in English | MEDLINE | ID: mdl-34505137

ABSTRACT

A comprehensive analysis of omics data can require vast computational resources and access to varied data sources that must be integrated into complex, multi-step analysis pipelines. Execution of many such analyses can be accelerated by applying the cloud computing paradigm, which provides scalable resources for storing data of different types and parallelizing data analysis computations. Moreover, these resources can be reused for different multi-omics analysis scenarios. Traditionally, developers are required to manage a cloud platform's underlying infrastructure, configuration, maintenance and capacity planning. The serverless computing paradigm simplifies these operations by automatically allocating and maintaining both servers and virtual machines, as required for analysis tasks. This paradigm offers highly parallel execution and high scalability without manual management of the underlying infrastructure, freeing developers to focus on operational logic. This paper reviews serverless solutions in bioinformatics and evaluates their usage in omics data analysis and integration. We start by reviewing the application of the cloud computing model to a multi-omics data analysis and exposing some shortcomings of the early approaches. We then introduce the serverless computing paradigm and show its applicability for performing an integrative analysis of multiple omics data sources in the context of the COVID-19 pandemic.


Subject(s)
COVID-19/genetics , COVID-19/metabolism , Cloud Computing , Computational Biology , Genomics , Pandemics , SARS-CoV-2 , Software , COVID-19/epidemiology , Humans , SARS-CoV-2/genetics , SARS-CoV-2/metabolism
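In the serverless (function-as-a-service) model, each pipeline stage becomes a stateless handler that the platform invokes once per data shard, fanning out thousands of invocations in parallel with no servers to manage. A minimal AWS-Lambda-style sketch with a hypothetical omics payload:

```python
import json

def handler(event, context):
    """Lambda-style entry point: aggregate variant counts for one sample."""
    sample_id = event["sample_id"]
    per_gene = {}
    for variant in event.get("variants", []):
        per_gene[variant["gene"]] = per_gene.get(variant["gene"], 0) + 1
    return {"statusCode": 200,
            "body": json.dumps({"sample_id": sample_id,
                                "gene_counts": per_gene})}

# Local smoke test with a made-up event payload
if __name__ == "__main__":
    event = {"sample_id": "S1",
             "variants": [{"gene": "ACE2"}, {"gene": "TMPRSS2"},
                          {"gene": "ACE2"}]}
    print(handler(event, context=None))
```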
13.
Mass Spectrom Rev ; 2023 Feb 06.
Article in English | MEDLINE | ID: mdl-36744702

ABSTRACT

The isotope distribution, which reflects the number and probabilities of occurrence of different isotopologues of a molecule, can be calculated theoretically. With the current generation of (ultra-)high-resolution mass spectrometers, the isotope distribution of molecules can be measured with high sensitivity, resolution, and mass accuracy. However, the observed isotope distribution can differ substantially from the expected one. Although differences between the observed and expected isotope distributions can complicate the analysis and interpretation of mass spectral data, they can be helpful in a number of specific applications. These applications include, but are not limited to, the identification of peptides in proteomics, elucidation of the elemental composition of small organic molecules and metabolites, and wading through peaks in mass spectra of complex bioorganic mixtures such as petroleum and humus. In this review, we give a nonexhaustive overview of factors that affect the observed isotope distribution, such as elemental isotope deviations, ion sampling, ion interactions, electronic noise and dephasing, centroiding, and apodization. These factors occur at different stages of obtaining the isotope distribution: during sample collection, during ionization and intake of a molecule into the mass spectrometer, during mass separation and detection of ionized molecules, and during signal processing.
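The expected isotope distribution against which measurements are compared can be computed by repeatedly convolving the elemental isotope patterns of a molecular formula. A minimal sketch that aggregates isotopologues by nominal mass; fine structure and the small normalization loss from pruning are ignored for brevity:

```python
from itertools import groupby

ISOTOPES = {  # (mass in Da, abundance); principal isotopes, IUPAC values
    "C": [(12.0000000, 0.9893), (13.0033548, 0.0107)],
    "H": [(1.0078250, 0.999885), (2.0141018, 0.000115)],
    "N": [(14.0030740, 0.99636), (15.0001089, 0.00364)],
    "O": [(15.9949146, 0.99757), (16.9991317, 0.00038), (17.9991610, 0.00205)],
}

def isotope_distribution(formula, prune=1e-6):
    """Aggregated isotope distribution for a formula like {'C': 6, 'H': 12}."""
    dist = [(0.0, 1.0)]  # (mass, probability) of the empty molecule
    for element, count in formula.items():
        for _ in range(count):
            dist = [(m + im, p * ip)
                    for m, p in dist for im, ip in ISOTOPES[element]]
            dist.sort(key=lambda mp: mp[0])
            merged = []  # merge isotopologues sharing a nominal mass
            for _nominal, grp in groupby(dist, key=lambda mp: round(mp[0])):
                grp = list(grp)
                p = sum(g[1] for g in grp)
                if p > prune:
                    m = sum(g[0] * g[1] for g in grp) / p  # weighted mean mass
                    merged.append((m, p))
            dist = merged
    return dist

for mass, prob in isotope_distribution({"C": 6, "H": 12, "O": 6}):  # glucose
    print(f"{mass:10.4f}  {prob:.5f}")
```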

14.
J Med Virol ; 96(3): e29545, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38506248

ABSTRACT

A large-scale outbreak of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) occurred in Shanghai, China, in early December 2022. To study the incidence and characteristics of otitis media with effusion (OME) complicating SARS-CoV-2 infection, we collected 267 middle ear effusion (MEE) samples and 172 nasopharyngeal (NP) swabs from patients. SARS-CoV-2 was detected by RT-PCR. Expression of SARS-CoV-2, angiotensin-converting enzyme 2 (ACE2), and transmembrane serine protease 2 (TMPRSS2) in human samples was examined via immunofluorescence. During the COVID-19 epidemic in 2022, the incidence of OME (3%) increased significantly compared with the same period from 2020 to 2022. Ear symptoms in patients with SARS-CoV-2 complicated by OME generally appeared late, even after a negative NP swab, an average of 9.33 ± 6.272 days after COVID-19 infection. SARS-CoV-2 was detected in MEE, which had a higher viral load than NP swabs. The insertion rate of tympanostomy tubes was not significantly higher than in OME patients in 2019-2022. Virus migration led to high viral loads in MEE despite negative NP swabs, indicating that OME lagged behind the respiratory infection but had a favorable prognosis. Furthermore, middle ear tissue from adult humans coexpressed the ACE2 receptor for SARS-CoV-2 and the TMPRSS2 cofactor required for virus entry.


Subject(s)
COVID-19 , Otitis Media with Effusion , Adult , Humans , SARS-CoV-2 , COVID-19/complications , Angiotensin-Converting Enzyme 2 , China/epidemiology
15.
Metabolomics ; 20(2): 42, 2024 Mar 16.
Article in English | MEDLINE | ID: mdl-38491298

ABSTRACT

INTRODUCTION: Untargeted direct mass spectrometric analysis of volatile organic compounds has many potential applications across fields such as healthcare and food safety. However, robust data processing protocols must be employed to ensure that research is replicable and practical applications can be realised. User-friendly data processing and statistical tools are becoming increasingly available; however, the use of these tools has not been analysed, nor are they necessarily suited to every data type. OBJECTIVES: This review aims to analyse the data processing and analytic workflows currently in use and examine whether methodological reporting is sufficient to enable replication. METHODS: Studies identified from the Web of Science and Scopus databases were systematically examined against the inclusion criteria. The experimental, data processing, and data analysis workflows were reviewed for the relevant studies. RESULTS: Of 459 studies identified from the databases, 110 met the inclusion criteria. Very few papers provided enough detail to allow all aspects of the methodology to be replicated accurately, with only three meeting previous guidelines for reporting experimental methods. A wide range of data processing methods was used, with only eight papers (7.3%) employing a largely similar workflow in which direct comparability was achievable. CONCLUSIONS: Standardised workflows and reporting systems need to be developed to ensure research in this area is replicable, comparable, and held to a high standard, thus allowing the wide-ranging potential applications to be realised.


Subject(s)
Metabolomics , Volatile Organic Compounds , Metabolomics/methods , Mass Spectrometry/methods , Reference Standards , Workflow
16.
J Med Internet Res ; 26: e54580, 2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38551633

ABSTRACT

BACKGROUND: The study of disease progression relies on clinical data, including text data, and extracting valuable features from text data has become a research hot spot. With the rise of large language models (LLMs), semantic-based extraction pipelines are gaining acceptance in clinical research. However, the security and feature-hallucination issues of LLMs require further attention. OBJECTIVE: This study aimed to introduce a novel modular LLM pipeline that can semantically extract features from textual patient admission records. METHODS: The pipeline was designed as a systematic succession of concept extraction, aggregation, question generation, corpus extraction, and question-and-answer scale extraction, and was tested with 2 low-parameter LLMs: Qwen-14B-Chat (QWEN) and Baichuan2-13B-Chat (BAICHUAN). A data set of 25,709 pregnancy cases from the People's Hospital of Guangxi Zhuang Autonomous Region, China, was used for evaluation, with annotation by a local expert. The pipeline was evaluated on accuracy and precision, null ratio, and time consumption. Additionally, we evaluated its performance via an INT4-quantized version of Qwen-14B-Chat on a consumer-grade GPU. RESULTS: The pipeline demonstrated a high level of precision in feature extraction, as evidenced by the accuracy and precision of Qwen-14B-Chat (95.52% and 92.93%, respectively) and Baichuan2-13B-Chat (95.86% and 90.08%, respectively). The pipeline also exhibited low null ratios and variable time consumption. The INT4-quantized version of QWEN delivered enhanced performance with 97.28% accuracy and a 0% null ratio. CONCLUSIONS: The pipeline exhibited consistent performance across different LLMs and efficiently extracted clinical features from textual data, with reliable performance on consumer-grade hardware. This approach offers a viable and effective solution for mining clinical research data from textual records.


Subject(s)
Data Mining , Electronic Health Records , Humans , Data Mining/methods , Natural Language Processing , China , Language
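One stage of such a modular pipeline, question-and-answer extraction against a record, can be sketched as below. The `llm_chat` function and the prompt are hypothetical placeholders, not the study's actual prompts or model bindings:

```python
import json

def llm_chat(prompt: str) -> str:
    """Placeholder for a local chat-model call (e.g. Qwen-14B-Chat served
    behind an HTTP endpoint); wire in the real client here."""
    raise NotImplementedError

EXTRACTION_PROMPT = """You are extracting clinical features from an admission record.
Record:
{record}

Question: {question}
Answer with JSON only: {{"value": <answer or null>}}"""

def extract_feature(record: str, question: str):
    """One question-and-answer extraction step of the modular pipeline."""
    raw = llm_chat(EXTRACTION_PROMPT.format(record=record, question=question))
    try:
        return json.loads(raw).get("value")  # None contributes to the null ratio
    except json.JSONDecodeError:
        return None                          # treat malformed output as null

# Usage, once a real llm_chat is wired in:
# extract_feature("G2P1, 32 weeks gestation, BP 150/95 ...",
#                 "What is the patient's blood pressure?")
```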
17.
BMC Med Inform Decis Mak ; 24(1): 194, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39014361

ABSTRACT

This study demonstrates an efficient scheme for early detection of cardiorespiratory complications in pandemics, using wearable electrocardiogram (ECG) sensors for pattern generation and convolutional neural networks (CNNs) for decision analytics. In health-related outbreaks, timely and early diagnosis of such complications is decisive in reducing mortality rates and alleviating the burden on healthcare facilities. Existing methods rely on clinical assessments, medical history reviews, and hospital-based monitoring, which are valuable but have limitations in accessibility, scalability, and timeliness, particularly during pandemics. The proposed scheme begins by deploying wearable ECG sensors on the patient's body. These sensors continuously monitor the patient's cardiac activity and respiratory patterns. The collected raw data are transmitted securely over a wireless link to a centralized server and stored in a database. The stored data are then preprocessed to extract relevant features such as heart rate variability and respiratory rate. The preprocessed data are used as input to a CNN model for the classification of normal and abnormal cardiorespiratory patterns. To achieve high accuracy in abnormality detection, the CNN model is trained on labeled data with optimized parameters. The performance of the proposed scheme was evaluated under different scenarios, showing robust detection of abnormal cardiorespiratory patterns with a sensitivity of 95% and a specificity of 92%. Prominent observations that highlight the potential for early intervention include subtle changes in heart rate variability and preceding respiratory distress. These findings show the significance of wearable ECG technology in improving pandemic management strategies and informing public health policies, enhancing preparedness and resilience in the face of emerging health threats.


Subject(s)
Early Diagnosis , Electrocardiography , Neural Networks, Computer , Wearable Electronic Devices , Humans , Electrocardiography/instrumentation , COVID-19/diagnosis
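A 1-D convolutional network over fixed-length ECG windows is the standard architecture for this kind of binary classification. The Keras sketch below is illustrative only; the window length, sampling rate, and layer sizes are assumptions, not the paper's reported architecture:

```python
import tensorflow as tf

# 10 s ECG windows at 250 Hz (2500 samples, 1 channel), normal vs. abnormal
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2500, 1)),
    tf.keras.layers.Conv1D(16, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Recall(name="sensitivity")])
model.summary()
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=20)
```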
18.
Sensors (Basel) ; 24(9)2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38732931

ABSTRACT

The diversity and concealment of distresses within subgrade structures pose formidable challenges to their accurate detection and effective management. The onset of subgrade distresses may precipitate structural degradation, increasing the frequency of traffic incidents and imposing economic costs. Accurate and timely detection of subgrade distresses is essential for maintaining and repairing affected road sections, which helps prolong the service life of road infrastructure and reduce the financial burden. In recent years, numerous novel technologies and methodologies have propelled significant advances in subgrade distress detection. This review therefore offers a focused examination of subgrade distress detection, methodically consolidating the available techniques while dissecting their respective merits and constraints. By providing comprehensive guidance on subgrade distress detection, the review facilitates the prompt identification and targeted treatment of subgrade distresses, thereby strengthening safety and enhancing durability, and underscores the pivotal role of distress detection in the construction and operation of transportation infrastructure.

19.
Sensors (Basel) ; 24(13)2024 Jul 05.
Article in English | MEDLINE | ID: mdl-39001161

ABSTRACT

This study measured the differences in commonly used summary acceleration metrics during elite Australian football games under three data processing protocols (raw, custom-processed, and manufacturer-processed). Estimates of distance, speed, and acceleration were collected with 10-Hz GNSS tracking devices over fourteen matches from 38 elite Australian football players from one team. Raw and manufacturer-processed data were exported from the respective proprietary software, and two common summary acceleration metrics (number of efforts and distance within the medium/high-intensity zone) were calculated for the three processing methods. Linear mixed models were used to estimate the effect of the three data processing methods on the summary metrics. The main finding was that the three processing methods differed substantially: manufacturer-processed acceleration data had the lowest reported distance (up to 184 times lower) and efforts (up to 89 times lower), followed by custom-processed distance (up to 3.3 times lower) and efforts (up to 4.3 times lower), while raw data had the highest reported distance and efforts. The results indicate that different processing methods change the metric output and in turn alter the quantification of the demands of a sport (the volume, intensity, and frequency of the metrics). Coaches, practitioners, and researchers need to understand that various processing methods alter the summary metrics of acceleration data. By being informed about how these metrics are affected by processing, they can better interpret the available data and tailor their training programs to match the demands of competition.
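The two summary metrics can be computed from a 10 Hz speed trace in a few lines, and the raw-versus-processed discrepancy reproduced qualitatively: differentiating unsmoothed speed inflates in-zone samples, while low-pass filtering suppresses them. The Butterworth filter below is one plausible "custom" choice; the zone boundaries and the synthetic signal are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def accel_summary(speed, hz=10.0, zone=(1.5, 2.5), min_samples=4, smooth=False):
    """Efforts (runs of in-zone samples) and distance within an accel zone."""
    if smooth:
        b, a = butter(2, 1.0 / (hz / 2.0))  # 2nd-order, 1 Hz low-pass
        speed = filtfilt(b, a, speed)
    accel = np.gradient(speed, 1.0 / hz)
    in_zone = (accel >= zone[0]) & (accel < zone[1])
    edges = np.flatnonzero(np.diff(np.r_[0, in_zone, 0])).reshape(-1, 2)
    efforts = int((np.diff(edges, axis=1) >= min_samples).sum())
    distance = float((speed[in_zone] / hz).sum())
    return efforts, round(distance, 1)

rng = np.random.default_rng(3)
t = np.arange(0.0, 600.0, 0.1)  # 10 min of synthetic running at 10 Hz
speed = 4 + 3 * np.sin(2 * np.pi * t / 10) + rng.normal(0, 0.3, t.size)
print("raw:     ", accel_summary(speed))
print("filtered:", accel_summary(speed, smooth=True))
```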

20.
Sensors (Basel) ; 24(6)2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38544121

ABSTRACT

The vast amount of information stemming from the deployment of the Internet of Things and open data portals is poised to provide significant benefits for both the private and public sectors, such as the development of value-added services or increased efficiency of public services. This potential is amplified by semantic information models such as NGSI-LD, which enable semantic data to be enriched and linked, and which carry contextual information by design. In this scenario, advanced data processing techniques need to be defined and developed for processing harmonised datasets and data streams. Our work is based on a structured approach that leverages the principles of linked-data modelling and semantics, as well as a data enrichment toolchain framework developed around NGSI-LD. Within this framework, we show how enrichment and linkage techniques can reshape the way data are exploited in smart cities, with a particular focus on citizen-centred initiatives. Moreover, we demonstrate the effectiveness of these data processing techniques through specific examples of entity transformations. The findings, which focus on improving data comprehension and bolstering smart city advancements, set the stage for future exploration and refinement of the symbiosis between semantic data and smart city ecosystems.
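The enrichment and linkage that NGSI-LD enables are easiest to see in an entity payload, where values are wrapped in Property or Relationship nodes and an @context maps terms to URIs. A hypothetical air-quality entity, with all identifiers invented for illustration:

```python
import json

entity = {
    "id": "urn:ngsi-ld:AirQualityObserved:santander:001",  # hypothetical URN
    "type": "AirQualityObserved",
    "NO2": {
        "type": "Property",
        "value": 22.0,
        "unitCode": "GQ",                     # UN/CEFACT code for ug/m3
        "observedAt": "2024-03-14T08:00:00Z",
    },
    "refRoadSegment": {                       # link to another entity
        "type": "Relationship",
        "object": "urn:ngsi-ld:RoadSegment:santander:A12",
    },
    "@context": [
        "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld",
    ],
}
print(json.dumps(entity, indent=2))
```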
