Results 1 - 20 of 19,147
1.
Sensors (Basel) ; 24(7)2024 Mar 23.
Article in English | MEDLINE | ID: mdl-38610260

ABSTRACT

Wearable technology and neuroimaging equipment using photoplethysmography (PPG) have become increasingly popular in recent years. Several investigations deriving pulse rate variability (PRV) from PPG have demonstrated that a slight bias exists compared to concurrent heart rate variability (HRV) estimates. PPG devices commonly sample at ~20-100 Hz, yet the minimum sampling frequency required to derive valid PRV metrics is unknown. Further, due to different autonomic innervation, it is unknown if PRV metrics are harmonious between the cerebral and peripheral vasculature. Cardiac activity via electrocardiography (ECG) and PPG were obtained concurrently in 54 participants (29 females) in an upright orthostatic position. PPG data were collected at three anatomical locations: left third phalanx, middle cerebral artery, and posterior cerebral artery using a Finapres NOVA device and transcranial Doppler ultrasound. Data were sampled for five minutes at 1000 Hz and downsampled to frequencies ranging from 20 to 500 Hz. HRV (via ECG) and PRV (via PPG) were quantified and compared at 1000 Hz using Bland-Altman plots and coefficient of variation (CoV). A sampling frequency of ~100-200 Hz was required to produce PRV metrics with a bias of less than 2%, while a sampling rate of ~40-50 Hz elicited a bias smaller than 20%. At 1000 Hz, time- and frequency-domain PRV measures were slightly elevated compared to those derived from HRV (mean bias: ~1-8%). Consistent with previous reports, PRV and HRV were not surrogate biomarkers due to the different nature of the collected waveforms. Nevertheless, PRV estimates displayed greater validity at a lower sampling rate compared to HRV estimates.
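A minimal Python sketch (not the authors' pipeline) of the kind of comparison described above: a pulse-like signal sampled at 1000 Hz is downsampled, beat-to-beat intervals are re-derived, and the bias of a time-domain metric (RMSSD) is computed. The synthetic waveform, the ~50 Hz target rate, and the peak-detection settings are illustrative assumptions.

```python
import numpy as np
from scipy.signal import decimate, find_peaks

fs_ref = 1000                                       # reference sampling rate (Hz)
t = np.arange(0, 300, 1 / fs_ref)                   # five minutes of data
beat_rate = 1.0 + 0.05 * np.sin(2 * np.pi * 0.1 * t)        # slowly varying beat rate (Hz)
ppg = np.sin(2 * np.pi * np.cumsum(beat_rate) / fs_ref)     # crude pulse-like waveform

def rmssd_ms(signal, fs):
    """RMSSD of successive peak-to-peak intervals, in milliseconds."""
    peaks, _ = find_peaks(signal, distance=int(0.4 * fs))
    ibi_ms = np.diff(peaks) / fs * 1000.0
    return np.sqrt(np.mean(np.diff(ibi_ms) ** 2))

ref = rmssd_ms(ppg, fs_ref)
ppg_50hz = decimate(decimate(ppg, 10), 2)           # two-stage decimation to ~50 Hz
low = rmssd_ms(ppg_50hz, fs_ref // 20)
print(f"RMSSD at 1000 Hz: {ref:.2f} ms | at 50 Hz: {low:.2f} ms | "
      f"bias: {100 * (low - ref) / ref:+.1f}%")
```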


Subjects
Autonomic Nervous System, Benchmarking, Female, Humans, Heart Rate, Correlation of Data, Electrocardiography
2.
Sensors (Basel) ; 24(7)2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38610312

ABSTRACT

Electrocardiogram (ECG) reconstruction from contact photoplethysmogram (PPG) would be transformative for cardiac monitoring. We investigated the fundamental and practical feasibility of such reconstruction by first replicating pioneering work in the field, with the aim of assessing the methods and evaluation metrics used. We then expanded existing research by investigating different cycle segmentation methods and different evaluation scenarios to robustly verify both fundamental feasibility and practical potential. We found that reconstruction using the discrete cosine transform (DCT) and a linear ridge regression model shows good results when PPG and ECG cycles are semantically aligned (the ECG R peak and PPG systolic peak are aligned) before training the model. Such reconstruction can be useful from a morphological perspective, but loses important physiological information (precise R peak location) due to cycle alignment. We also found better performance when personalization was used in training, while a general model in a leave-one-subject-out evaluation performed poorly, showing that a general mapping between PPG and ECG is difficult to derive. While such reconstruction is valuable, as the ECG contains more fine-grained information about cardiac activity and offers a different modality (electrical signal) compared to the PPG (optical signal), our findings show that the usefulness of such reconstruction depends on the application, with a trade-off between the morphological quality of QRS complexes and the precise temporal placement of the R peak. Finally, we highlight future directions that may resolve existing problems and allow for reliable and robust cross-modal physiological monitoring using just PPG.
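A minimal sketch of the reconstruction scheme described above, using random arrays as stand-ins for semantically aligned PPG/ECG cycles; the cycle length, number of retained DCT coefficients, and ridge penalty are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.fft import dct, idct
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_cycles, cycle_len, n_coef = 200, 256, 40

# Stand-ins for aligned cycles (PPG systolic peak matched to ECG R peak).
ppg_cycles = rng.normal(size=(n_cycles, cycle_len))
ecg_cycles = rng.normal(size=(n_cycles, cycle_len))

X = dct(ppg_cycles, norm="ortho", axis=1)[:, :n_coef]   # truncated DCT features
Y = dct(ecg_cycles, norm="ortho", axis=1)[:, :n_coef]

model = Ridge(alpha=1.0).fit(X[:150], Y[:150])          # per-subject (personalised) fit
Y_hat = model.predict(X[150:])

# Zero-pad predicted coefficients back to full length and invert the DCT.
coef_full = np.zeros((Y_hat.shape[0], cycle_len))
coef_full[:, :n_coef] = Y_hat
ecg_reconstructed = idct(coef_full, norm="ortho", axis=1)
print(ecg_reconstructed.shape)
```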


Subjects
Electrocardiography, Photoplethysmography, Feasibility Studies, Benchmarking, Electricity
3.
Sensors (Basel) ; 24(7)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38610349

ABSTRACT

Seismocardiography (SCG), a method for measuring heart-induced chest vibrations, is gaining attention as a non-invasive, accessible, and cost-effective approach for the diagnosis and monitoring of cardiac pathologies. This study explores the integration of SCG acquired through smartphone technology by assessing the accuracy of metrics derived from smartphone recordings and their consistency when performed by patients. Therefore, we assessed smartphone-derived SCG's reliability in computing median kinetic energy parameters per record in 220 patients with various cardiovascular conditions. The study involved three key procedures: (1) simultaneous measurements with a validated hardware device and a commercial smartphone; (2) consecutive smartphone recordings performed by both clinicians and patients; (3) patients' self-conducted home recordings over three months. Our findings indicate a moderate-to-high reliability of smartphone-acquired SCG metrics compared to those obtained from a validated device, with intraclass correlation (ICC) > 0.77. The reliability of patient-acquired SCG metrics was high (ICC > 0.83). Within the cohort, 138 patients had smartphones that met the compatibility criteria for the study, with an observed at-home compliance rate of 41.4%. This research validates the potential of smartphone-derived SCG acquisition in providing repeatable SCG metrics in telemedicine, thus laying a foundation for future studies to enhance the precision of at-home cardiac data acquisition.
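As an illustration of the agreement statistic used above, a hedged sketch of an intraclass correlation computed with the pingouin package on toy paired recordings; the data-frame layout, values, and choice of ICC variant are assumptions rather than the study's analysis.

```python
import pandas as pd
import pingouin as pg

# Toy paired kinetic-energy metrics: one value per patient from each device.
df = pd.DataFrame({
    "patient": list(range(10)) * 2,
    "device": ["validated"] * 10 + ["smartphone"] * 10,
    "kinetic_energy": [5.1, 4.8, 6.0, 5.5, 4.9, 5.7, 6.2, 5.0, 4.6, 5.9,
                       5.3, 4.7, 6.1, 5.4, 5.0, 5.6, 6.4, 5.1, 4.5, 6.0],
})
icc = pg.intraclass_corr(data=df, targets="patient", raters="device",
                         ratings="kinetic_energy")
print(icc[["Type", "ICC", "CI95%"]])   # inspect the ICC variants and intervals
```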


Subjects
Cardiovascular Diseases, Smartphone, Humans, Reproducibility of Results, Physical Phenomena, Benchmarking, Cardiovascular Diseases/diagnosis
4.
Sensors (Basel) ; 24(7)2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38610403

ABSTRACT

The assessment of fine motor competence plays a pivotal role in neuropsychological examinations for the identification of developmental deficits. Several tests have been proposed for the characterization of fine motor competence, with evaluation metrics primarily based on qualitative observation, limiting quantitative assessment to measures such as test durations. The Placing Bricks (PB) test evaluates fine motor competence across the lifespan, relying on the measurement of time to completion. The present study aims to instrument the PB test with wearable inertial sensors to complement the standard PB assessment with reliable and objective process-oriented measures of performance. Fifty-four primary school children (27 6-year-olds and 27 7-year-olds) performed the PB test according to the standard protocol with their dominant and non-dominant hands, while wearing two tri-axial inertial sensors, one per wrist. An ad hoc algorithm based on the analysis of forearm angular velocity data was developed to automatically identify task events and to quantify phases and their variability. The algorithm's performance was tested against video recordings in data from five children. Cycle and placing durations showed strong agreement between IMU- and video-derived measurements, with a mean difference <0.1 s, 95% confidence intervals <50% of the median phase duration, and a very high positive correlation (ρ > 0.9). Analyzing the whole population, significant differences were found for age: six-year-olds exhibited longer cycle durations and higher variability, indicating a stage of development and potential differences in hand dominance, whereas seven-year-olds demonstrated quicker and less variable performance, aligning with the expected maturation and refined motor control associated with dominant-hand training during the first year of school. The proposed sensor-based approach allowed the quantitative assessment of fine motor competence in children, providing a portable and rapid tool for monitoring developmental progress.
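A small sketch of the agreement analysis mentioned above (mean difference, 95% limits of agreement, Spearman correlation) on made-up IMU and video phase durations; the numbers are placeholders, not study data.

```python
import numpy as np
from scipy.stats import spearmanr

imu = np.array([1.92, 2.10, 1.75, 2.35, 2.02, 1.88, 2.20, 1.95])    # cycle durations (s)
video = np.array([1.90, 2.12, 1.78, 2.30, 2.05, 1.85, 2.24, 1.93])

diff = imu - video
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)        # Bland-Altman style 95% limits of agreement
rho, _ = spearmanr(imu, video)
print(f"mean difference: {bias:+.3f} s, 95% LoA: ±{loa:.3f} s, rho = {rho:.2f}")
```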


Subjects
Algorithms, Benchmarking, Child, Humans, Forearm, Longevity, Neuropsychological Tests
5.
Sensors (Basel) ; 24(7)2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38610580

ABSTRACT

This paper contributes to the development of a Next Generation First Responder (NGFR) communication platform with the key goal of embedding it into a smart city technology infrastructure. The framework of this approach is a concept known as SmartHub, developed by the US Department of Homeland Security. The proposed embedding methodology complies with the standard categories and indicators of smart city performance. This paper offers two practice-centered extensions of the NGFR hub, which are also its main results: first, cognitive workload monitoring of first responders as a basis for their performance assessment and improvement; and second, emergency assistance tools for individuals with disabilities, a highly sensitive problem for human society. Both extensions explore various technological-societal dimensions of smart cities, including interoperability, standardization, and accessibility of assistive technologies for people with disabilities. Regarding cognitive workload monitoring, the core result is a novel AI formalism, an ensemble of machine learning processes aggregated using machine reasoning. This ensemble enables predictive situation assessment and self-aware computing, which is the basis of the digital twin concept. We experimentally demonstrate a specific component of an NGFR digital twin: near-real-time monitoring of the NGFR's cognitive workload. Regarding our second result, emergency assistance for individuals with disabilities, which originated as a question of accessibility to assistive technologies to promote disability inclusion, we provide the NGFR specification focusing on interactions based on the AI formalism and using a unified hub platform. This paper also discusses a technology roadmap using the notion of the Emergency Management Cycle (EMC), a commonly accepted doctrine for managing disasters through the steps of mitigation, preparedness, response, and recovery. It positions the NGFR hub as a benchmark of the smart city emergency service.


Subjects
Disasters, Emergency Medical Services, Emergency Responders, Humans, Cities, Benchmarking
6.
BMJ Open Qual ; 13(2)2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38626936

ABSTRACT

Optimal cord management (OCM), defined as waiting at least 60 seconds (s) before clamping the umbilical cord after birth, is an evidence-based intervention that improves outcomes for both term and preterm babies. All major resuscitation councils recommend OCM for well newborns. National Neonatal Audit Programme (NNAP) benchmarking data identified our tertiary neonatal unit as a negative outlier with regard to OCM practice, with only 12.1% of infants receiving the recommended minimum of 60 s. This inspired a quality improvement project (QIP) to increase OCM rates of ≥ 60 s for infants <34 weeks. A multidisciplinary QIP team (neonatal medical and nursing staff, obstetricians, midwives and anaesthetic colleagues) was formed, and robust evidence-based quality improvement methodologies were employed. Our aim was to increase OCM of ≥ 60 s for infants born at <34 weeks to at least 40%. The percentage of infants <34 weeks receiving OCM increased from 32.4% at baseline (June-September 2022) to 73.6% in the 9 months following QIP commencement (October 2022-June 2023). The intervention period spanned two cohorts of rotational doctors, demonstrating its sustainability. Rates of admission normothermia were maintained following the routine adoption of OCM (89.2% vs 88.5%); loss of normothermia is a complication described by other neonatal units. This project demonstrates the power of a multidisciplinary team approach to embedding an intervention that relies on collaboration between multiple departments. It also highlights the importance of national benchmarking data in allowing departments to focus QIP efforts to achieve long-lasting transformational service improvements.


Subjects
Premature Infant, Quality Improvement, Newborn Infant, Humans, Hospitalization, Benchmarking
7.
Parasit Vectors ; 17(1): 188, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38627870

ABSTRACT

BACKGROUND: Malaria is a serious public health concern worldwide. Early and accurate diagnosis is essential for controlling the disease's spread and avoiding severe health complications. Manual examination of blood smear samples by skilled technicians is a time-consuming aspect of the conventional malaria diagnosis toolbox. Malaria persists in many parts of the world, emphasising the urgent need for sophisticated and automated diagnostic instruments to expedite the identification of infected cells, thereby facilitating timely treatment and reducing the risk of disease transmission. This study aims to introduce a more lightweight and quicker model, but with improved accuracy, for diagnosing malaria using a YOLOv4 (You Only Look Once v. 4) deep learning object detector. METHODS: The YOLOv4 model is modified using direct layer pruning and backbone replacement. The primary objective of layer pruning is the removal and individual analysis of residual blocks within the C3, C4 and C5 (C3-C5) Res-block bodies of the backbone architecture. The CSP-DarkNet53 backbone is simultaneously replaced with a shallower ResNet50 network for enhanced feature extraction. The performance metrics of the models are compared and analysed. RESULTS: The modified models outperform the original YOLOv4 model. The YOLOv4-RC3_4 model, with residual blocks pruned from the C3 and C4 Res-block bodies, achieves the highest mean average precision (mAP) of 90.70%. This mAP is > 9% higher than that of the original model, while saving approximately 22% of the billion floating point operations (B-FLOPS) and 23 MB in model size. The findings also indicate that the YOLOv4-RC3_4 model performs better, with an increase of 9.27% in detecting infected cells, upon pruning the redundant layers from the C3 Res-block bodies of the CSP-DarkNet53 backbone. CONCLUSIONS: The results of this study highlight the use of the YOLOv4 model for detecting infected red blood cells. Pruning the residual blocks from the Res-block bodies helps to determine which Res-block bodies contribute the most and least, respectively, to the model's performance. Our method has the potential to revolutionise malaria diagnosis and pave the way for novel deep learning-based bioinformatics solutions. Developing an effective and automated process for diagnosing malaria will considerably contribute to global efforts to combat this debilitating disease. We have shown that removing undesirable residual blocks can reduce the size of the model and its computational complexity without compromising its precision.
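The authors' YOLOv4 code is not reproduced here; the sketch below only illustrates the general idea of direct layer pruning, dropping residual blocks from one backbone stage and re-using the surviving weights. The block design, channel count, and number of blocks kept are illustrative assumptions.

```python
import torch
from torch import nn

class ResBlock(nn.Module):
    """A toy residual block standing in for one block of a Res-block body."""
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(c, c, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)

def make_stage(channels, n_blocks):
    return nn.Sequential(*[ResBlock(channels) for _ in range(n_blocks)])

full_stage = make_stage(64, 8)      # e.g. a C3-like stage with 8 residual blocks
pruned_stage = make_stage(64, 4)    # keep only the first 4 residual blocks

# Transfer the surviving weights from the full stage into the pruned stage.
pruned_stage.load_state_dict({k: v for k, v in full_stage.state_dict().items()
                              if k in pruned_stage.state_dict()})
x = torch.randn(1, 64, 32, 32)
print(full_stage(x).shape, pruned_stage(x).shape)
```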


Subjects
Deep Learning, Delayed Emergence from Anesthesia, Malaria, Animals, Benchmarking, Computational Biology, Malaria/diagnosis
8.
Brief Bioinform ; 25(3)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38627939

ABSTRACT

The latest breakthroughs in spatially resolved transcriptomics technology offer comprehensive opportunities to delve into gene expression patterns within the tissue microenvironment. However, the precise identification of spatial domains within tissues remains challenging. In this study, we introduce AttentionVGAE (AVGN), which integrates slice images, spatial information and raw gene expression while calibrating low-quality gene expression. By combining the variational graph autoencoder with multi-head attention blocks (MHA blocks), AVGN captures spatial relationships in tissue gene expression, adaptively focusing on key features and alleviating the need for prior knowledge of cluster numbers, thereby achieving superior clustering performance. Particularly, AVGN attempts to balance the model's attention focus on local and global structures by utilizing MHA blocks, an aspect that current graph neural networks have not extensively addressed. Benchmark testing demonstrates its significant efficacy in elucidating tissue anatomy and interpreting tumor heterogeneity, indicating its potential in advancing spatial transcriptomics research and understanding complex biological phenomena.
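A minimal sketch of a multi-head attention block of the kind referenced above, applied to spot embeddings; this uses plain torch.nn.MultiheadAttention rather than the AVGN implementation, and the embedding size, head count, and number of spots are assumptions.

```python
import torch
from torch import nn

spots = torch.randn(1, 500, 64)     # batch of 500 spots with 64-dim latent features
mha = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
attended, attn_weights = mha(spots, spots, spots)   # self-attention over spots
print(attended.shape, attn_weights.shape)           # (1, 500, 64), (1, 500, 500)
```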


Subjects
Benchmarking, Gene Expression Profiling, Cluster Analysis, Neural Networks (Computer)
9.
Brief Bioinform ; 25(3)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38628114

ABSTRACT

Spatial transcriptomics (ST) has become a powerful tool for exploring the spatial organization of gene expression in tissues. Imaging-based methods, though offering superior spatial resolutions at the single-cell level, are limited in either the number of imaged genes or the sensitivity of gene detection. Existing approaches for enhancing ST rely on the similarity between ST cells and reference single-cell RNA sequencing (scRNA-seq) cells. In contrast, we introduce stDiff, which leverages relationships between gene expression abundance in scRNA-seq data to enhance ST. stDiff employs a conditional diffusion model, capturing gene expression abundance relationships in scRNA-seq data through two Markov processes: one introducing noise to transcriptomics data and the other denoising to recover them. The missing portion of ST is predicted by incorporating the original ST data into the denoising process. In our comprehensive performance evaluation across 16 datasets, utilizing multiple clustering and similarity metrics, stDiff stands out for its exceptional ability to preserve topological structures among cells, positioning itself as a robust solution for cell population identification. Moreover, stDiff's enhancement outcomes closely mirror the actual ST data within the batch space. Across diverse spatial expression patterns, our model accurately reconstructs them, delineating distinct spatial boundaries. This highlights stDiff's capability to unify the observed and predicted segments of ST data for subsequent analysis. We anticipate that stDiff, with its innovative approach, will contribute to advancing ST imputation methodologies.
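A minimal numpy sketch of the forward (noising) Markov process underpinning such a conditional diffusion model; the noise schedule is a common choice and the denoising network is omitted, so this is illustrative rather than stDiff's implementation.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)          # linear noise schedule (assumption)
alpha_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) x_0, (1 - alpha_bar_t) I)."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps, eps

rng = np.random.default_rng(0)
x0 = rng.poisson(2.0, size=(100, 2000)).astype(float)   # toy spot-by-gene counts
x_t, eps = q_sample(x0, t=500, rng=rng)
print(x_t.shape)
```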


Subjects
Benchmarking, Gene Expression Profiling, Cluster Analysis, Diffusion, Markov Chains, RNA Sequence Analysis, Transcriptome
10.
J Med Internet Res ; 26: e56655, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38630520

ABSTRACT

BACKGROUND: Although patients have easy access to their electronic health records and laboratory test result data through patient portals, laboratory test results are often confusing and hard to understand. Many patients turn to web-based forums or question-and-answer (Q&A) sites to seek advice from their peers. The quality of answers from social Q&A sites on health-related questions varies significantly, and not all responses are accurate or reliable. Large language models (LLMs) such as ChatGPT have opened a promising avenue for patients to have their questions answered. OBJECTIVE: We aimed to assess the feasibility of using LLMs to generate relevant, accurate, helpful, and unharmful responses to laboratory test-related questions asked by patients and identify potential issues that can be mitigated using augmentation approaches. METHODS: We collected laboratory test result-related Q&A data from Yahoo! Answers and selected 53 Q&A pairs for this study. Using the LangChain framework and ChatGPT web portal, we generated responses to the 53 questions from 5 LLMs: GPT-4, GPT-3.5, LLaMA 2, MedAlpaca, and ORCA_mini. We assessed the similarity of their answers using standard Q&A similarity-based evaluation metrics, including Recall-Oriented Understudy for Gisting Evaluation, Bilingual Evaluation Understudy, Metric for Evaluation of Translation With Explicit Ordering, and Bidirectional Encoder Representations from Transformers Score. We used an LLM-based evaluator to judge whether a target model had higher quality in terms of relevance, correctness, helpfulness, and safety than the baseline model. We performed a manual evaluation with medical experts for all the responses to 7 selected questions on the same 4 aspects. RESULTS: Regarding the similarity of the responses from the 4 LLMs, with the GPT-4 output used as the reference answer, the responses from GPT-3.5 were the most similar, followed by those from LLaMA 2, ORCA_mini, and MedAlpaca. Human answers from the Yahoo data were scored the lowest and, thus, as the least similar to GPT-4-generated answers. The results of the win rate and medical expert evaluation both showed that GPT-4's responses achieved better scores than all the other LLM responses and human responses on all 4 aspects (relevance, correctness, helpfulness, and safety). LLM responses also occasionally suffered from a lack of interpretation in one's medical context, incorrect statements, and a lack of references. CONCLUSIONS: By evaluating LLMs in generating responses to patients' laboratory test result-related questions, we found that, compared to the other 4 LLMs and human answers from a Q&A website, GPT-4's responses were more accurate, helpful, relevant, and safer. There were cases in which GPT-4 responses were inaccurate and not individualized. We identified a number of ways to improve the quality of LLM responses, including prompt engineering, prompt augmentation, retrieval-augmented generation, and response evaluation.
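As an illustration of the similarity metrics listed above, a hedged sketch using the rouge-score and nltk packages; the package choice and the example strings are assumptions, and METEOR and BERTScore would need additional packages.

```python
from rouge_score import rouge_scorer
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "Your TSH is slightly elevated, which can indicate an underactive thyroid."
candidate = "A mildly high TSH value often points to hypothyroidism."

rouge = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
print(rouge.score(reference, candidate))            # ROUGE-1 and ROUGE-L F-scores

bleu = sentence_bleu([reference.split()], candidate.split(),
                     smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {bleu:.3f}")
```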


Subjects
New World Camelids, Humans, Animals, Benchmarking, Electronic Health Records, Engineering, Language
11.
Genome Biol ; 25(1): 97, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38622738

ABSTRACT

BACKGROUND: As most viruses remain uncultivated, metagenomics is currently the main method for virus discovery. Detecting viruses in metagenomic data is not trivial. In the past few years, many bioinformatic virus identification tools have been developed for this task, making it challenging to choose the right tools, parameters, and cutoffs. As all these tools measure different biological signals, and use different algorithms and training and reference databases, it is imperative to conduct an independent benchmarking to give users objective guidance. RESULTS: We compare the performance of nine state-of-the-art virus identification tools in thirteen modes on eight paired viral and microbial datasets from three distinct biomes, including a new complex dataset from Antarctic coastal waters. The tools have highly variable true positive rates (0-97%) and false positive rates (0-30%). PPR-Meta best distinguishes viral from microbial contigs, followed by DeepVirFinder, VirSorter2, and VIBRANT. Different tools identify different subsets of the benchmarking data, and all tools, except for Sourmash, find unique viral contigs. Performance of the tools improved with adjusted parameter cutoffs, indicating that adjusting parameter cutoffs before use should be considered. CONCLUSIONS: Together, our independent benchmarking facilitates the selection of bioinformatic virus identification tools and gives suggestions for parameter adjustments to viromics researchers.
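A small sketch of how the true and false positive rates reported above can be computed from one tool's predictions against a labelled benchmark; the contig identifiers are placeholders.

```python
truth_viral = {"contig_1", "contig_2", "contig_3", "contig_4"}
truth_microbial = {"contig_5", "contig_6", "contig_7", "contig_8", "contig_9"}
predicted_viral = {"contig_1", "contig_2", "contig_6"}    # one tool's viral calls

tp = len(predicted_viral & truth_viral)
fp = len(predicted_viral & truth_microbial)
tpr = tp / len(truth_viral)          # true positive rate (recall on viral contigs)
fpr = fp / len(truth_microbial)      # false positive rate on microbial contigs
print(f"TPR = {tpr:.0%}, FPR = {fpr:.0%}")
```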


Subjects
Benchmarking, Viruses, Metagenome, Ecosystem, Metagenomics/methods, Computational Biology/methods, Genetic Databases, Viruses/genetics
12.
PLoS One ; 19(4): e0290706, 2024.
Article in English | MEDLINE | ID: mdl-38625859

ABSTRACT

In many applications, artificial neural networks are best trained for a task by following a curriculum, in which simpler concepts are learned before more complex ones. This curriculum can be hand-crafted by the engineer or optimised like other hyperparameters, by evaluating many curricula. However, this is computationally intensive and the hyperparameters are unlikely to generalise to new datasets. An attractive alternative, demonstrated in influential prior works, is that the network could choose its own curriculum by monitoring its learning. This would be particularly beneficial for continual learning, in which the network must learn from an environment that is changing over time, relevant both to practical applications and to the modelling of human development. In this paper we test the generality of this approach using a proof-of-principle model, training a network on two sequential tasks under static and continual conditions, and investigating both the benefits of a curriculum and the handicap induced by continual learning. Additionally, we test a variety of prior task-switching metrics, and find that even in this simple scenario the network is often unable to choose the optimal curriculum, as the benefits are sometimes only apparent with hindsight, at the end of training. We discuss the implications of the results for network engineering and models of human development.
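A toy sketch of the self-monitored curriculum idea discussed above: the learner picks whichever task showed the largest recent loss improvement. The simulated loss dynamics and decay rates are assumptions purely for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
losses = {"simple": [2.0], "complex": [4.0]}
decay = {"simple": 0.90, "complex": 0.98}   # toy: the simple task improves faster early on

for step in range(20):
    # Improvement over the previous step for each task (default 1.0 until trained twice).
    gains = {k: (v[-2] - v[-1]) if len(v) > 1 else 1.0 for k, v in losses.items()}
    task = max(gains, key=gains.get)                    # train next on the fastest-improving task
    losses[task].append(losses[task][-1] * decay[task] + rng.normal(0, 0.01))

print({k: round(v[-1], 3) for k, v in losses.items()})
```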


Subjects
Curriculum, Neural Networks (Computer), Humans, Upper Extremity, Continuing Education, Benchmarking
13.
Genome Biol ; 25(1): 91, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38589937

ABSTRACT

BACKGROUND: Although sequencing technologies have boosted the measurement of the genomic diversity of plant crops, it remains challenging to accurately genotype millions of genetic variants, especially structural variations, with only short reads. In recent years, many graph-based variation genotyping methods have been developed to address this issue and tested for human genomes. However, their performance in plant genomes remains largely elusive. Furthermore, pipelines integrating the advantages of current genotyping methods might be required, considering the different complexity of plant genomes. RESULTS: Here we comprehensively evaluate eight such genotypers in different scenarios in terms of variant type and size, sequencing parameters, genomic context, and complexity, as well as graph size, using both simulated and real data sets from representative plant genomes. Our evaluation reveals that there are still great challenges to applying existing methods to plants, such as excessive repeats and variants or high resource consumption. Therefore, we propose a pipeline called Ensemble Variant Genotyper (EVG) that can achieve better genotyping performance in almost all experimental scenarios and comparably higher genotyping recall and precision even using 5× reads. Furthermore, we demonstrate that EVG is more robust with an increasing number of graphed genomes, especially for insertions and deletions. CONCLUSIONS: Our study will provide new insights into the development and application of graph-based genotyping algorithms. We conclude that EVG provides an accurate, unbiased, and cost-effective way for genotyping both small and large variations and will be potentially used in population-scale genotyping for large, repetitive, and heterozygous plant genomes.
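EVG's actual combination rule is not given in the abstract; the sketch below only illustrates the general ensemble idea, merging per-tool genotype calls for one variant by majority vote, with ties treated as missing. Tool names and calls are placeholders.

```python
from collections import Counter

def ensemble_genotype(calls):
    """Majority vote across genotypers; ties are reported as missing ('./.')."""
    counts = Counter(calls).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return "./."
    return counts[0][0]

calls_per_tool = {"toolA": "0/1", "toolB": "0/1", "toolC": "1/1"}
print(ensemble_genotype(list(calls_per_tool.values())))   # -> 0/1
```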


Subjects
Algorithms, Benchmarking, Humans, Genotype, Genomics/methods, Genotyping Techniques/methods, Plant Genome, High-Throughput Nucleotide Sequencing/methods, DNA Sequence Analysis/methods
14.
Sci Rep ; 14(1): 8012, 2024 04 05.
Article in English | MEDLINE | ID: mdl-38580704

ABSTRACT

Human pose estimation (HPE) based on deep learning aims to accurately estimate and predict human body posture in images or videos via the utilization of deep neural networks. However, the accuracy of real-time HPE tasks still needs improvement due to factors such as partial occlusion of body parts and the limited receptive field of the model. To alleviate the accuracy loss caused by these issues, this paper proposes a real-time HPE model called CCAM-Person based on the YOLOv8 framework. Specifically, we improve the backbone and neck of the YOLOv8x-pose real-time HPE model to alleviate feature loss and receptive field constraints. Secondly, we introduce the context coordinate attention module (CCAM) to augment the model's focus on salient features, reduce background noise interference, alleviate key point regression failure caused by limb occlusion, and improve the accuracy of pose estimation. Our approach attains competitive results on multiple metrics of two open-source datasets, MS COCO 2017 and CrowdPose. Compared with the baseline model YOLOv8x-pose, CCAM-Person improves the average precision by 2.8% and 3.5% on the two datasets, respectively.
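The exact CCAM design is not specified in the abstract; below is a hedged sketch of a standard coordinate-attention block (pooling along height and width, a shared 1x1 bottleneck, and per-direction gates), the kind of mechanism such a module builds on. Channel count, reduction ratio, and activation are assumptions.

```python
import torch
from torch import nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # pool over width  -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # pool over height -> (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                            # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)        # (B, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # height-wise gate
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # width-wise gate
        return x * a_h * a_w

feat = torch.randn(1, 64, 80, 80)        # a feature map from the detector neck
print(CoordinateAttention(64)(feat).shape)
```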


Subjects
Benchmarking, Extremities, Humans, Neural Networks (Computer), Posture, Videotape Recording
17.
Brief Bioinform ; 25(3)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38581415

ABSTRACT

Discovering hit molecules with desired biological activity in a directed manner is a promising but challenging task in computer-aided drug discovery. Inspired by recent generative AI approaches, particularly Diffusion Models (DM), we propose the Graph Latent Diffusion Model (GLDM), a latent DM that preserves both the effectiveness of autoencoders for compressing complex chemical data and the DM's capability of generating novel molecules. Specifically, we first develop an autoencoder to encode the molecular data into low-dimensional latent representations and then train the DM on the latent space to generate molecules inducing targeted biological activity defined by gene expression profiles. Manipulating the DM in the latent space rather than the input space avoids complicated operations to map molecule decomposition and reconstruction to diffusion processes, and thus improves training efficiency. Experiments show that GLDM not only achieves outstanding performance on molecular generation benchmarks, but also generates samples with optimal chemical properties and the potential to induce the desired biological activity.


Subjects
Benchmarking, Drug Discovery, Diffusion
18.
Int J Mol Sci ; 25(7)2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38612602

ABSTRACT

Molecular property prediction is an important task in drug discovery, and with the help of self-supervised learning methods, the performance of molecular property prediction can be improved by utilizing large-scale unlabeled datasets. In this paper, we propose a triple generative self-supervised learning method for molecular property prediction, called TGSS. Three encoders, including a bi-directional long short-term memory recurrent neural network (BiLSTM), a Transformer, and a graph attention network (GAT), are used in pre-training the model on molecular sequence and graph structure data to extract molecular features. A variational autoencoder (VAE) is used for reconstructing features from the three models. In the downstream task, in order to balance the information between different molecular features, a feature fusion module is added to assign different weights to each feature. In addition, to improve the interpretability of the model, atomic similarity heat maps are introduced to demonstrate the effectiveness and rationality of molecular feature extraction. We demonstrate the accuracy of the proposed method on chemical and biological benchmark datasets through comparative experiments.
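A minimal sketch of a weighted feature-fusion step like the one described above: softmax-normalised learnable weights combine the per-encoder features. The feature dimensions and the exact weighting scheme are assumptions, not the TGSS implementation.

```python
import torch
from torch import nn

class WeightedFusion(nn.Module):
    """Learnable softmax weights over per-encoder molecular features."""
    def __init__(self, n_sources):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_sources))

    def forward(self, feats):                # feats: list of (B, D) tensors
        w = torch.softmax(self.logits, dim=0)
        return sum(wi * f for wi, f in zip(w, feats))

# Stand-ins for BiLSTM, Transformer, and GAT features for a batch of 4 molecules.
bilstm_f, transformer_f, gat_f = (torch.randn(4, 128) for _ in range(3))
fused = WeightedFusion(3)([bilstm_f, transformer_f, gat_f])
print(fused.shape)                           # (4, 128)
```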


Subjects
Benchmarking, Drug Discovery, Animals, Electric Power Supplies, Estrus, Supervised Machine Learning
19.
Int J Mol Sci ; 25(7)2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38612639

ABSTRACT

Single-cell RNA sequencing (scRNA-seq) has emerged as a powerful technique for investigating biological heterogeneity at the single-cell level in human systems and model organisms. Recent advances in scRNA-seq have enabled the pooling of cells from multiple samples into single libraries, thereby increasing sample throughput while reducing technical batch effects, library preparation time, and the overall cost. However, a comparative analysis of scRNA-seq methods with and without sample multiplexing is lacking. In this study, we benchmarked methods from two representative platforms: Parse Biosciences (Parse; with sample multiplexing) and 10x Genomics (10x; without sample multiplexing). By using peripheral blood mononuclear cells (PBMCs) obtained from two healthy individuals, we demonstrate that demultiplexed scRNA-seq data obtained from Parse showed similar cell type frequencies compared to 10x data where samples were not multiplexed. Despite relatively lower cell capture affecting library preparation, Parse can detect rare cell types (e.g., plasmablasts and dendritic cells) which is likely due to its relatively higher sensitivity in gene detection. Moreover, a comparative analysis of transcript quantification between the two platforms revealed platform-specific distributions of gene length and GC content. These results offer guidance for researchers in designing high-throughput scRNA-seq studies.
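A small pandas sketch of the kind of cell-type frequency comparison reported above; the labels and counts are placeholders, not study data.

```python
import pandas as pd

cells = pd.DataFrame({
    "platform": ["Parse"] * 6 + ["10x"] * 6,
    "cell_type": ["T", "T", "B", "NK", "Monocyte", "Plasmablast",
                  "T", "T", "B", "NK", "Monocyte", "T"],
})
freq = pd.crosstab(cells["cell_type"], cells["platform"], normalize="columns")
print(freq.round(2))        # per-platform cell-type frequencies
```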


Subjects
Benchmarking, Mononuclear Leukocytes, Humans, Gene Library, Genomics, RNA Sequence Analysis
20.
Public Health Nutr ; 27(1): e101, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38557393

ABSTRACT

OBJECTIVE: It is unknown how well menu labelling schemes that enforce the display of kilojoule (kJ) labelling at point-of-sale have been implemented on online food delivery (OFD) services in Australia. This study aimed to examine the prevalence of kJ labelling on the online menus of large food outlets with more than twenty locations in the state or fifty locations nationally. A secondary aim was to evaluate the nutritional quality of menu items on OFD from mid-sized outlets that have fewer locations than what is specified in the current scheme. DESIGN: Cross-sectional analysis. Prevalence of kJ labelling by large food outlets on OFD from August to September 2022 was examined. Proportion of discretionary ('junk food') items on menus from mid-sized outlets was assessed. SETTING: Forty-three unique large food outlets on company (e.g. MyMacca's) and third party OFD (Uber Eats, Menulog, Deliveroo) within Sydney, Australia. Ninety-two mid-sized food outlets were analysed. PARTICIPANTS: N/A. RESULTS: On company OFD apps, 35 % (7/23) had complete kJ labelling for each menu item. In comparison, only 4·8 % (2/42), 5·3 % (2/38) and 3·6 % (1/28) of large outlets on Uber Eats, Menulog and Deliveroo had complete kJ labelling at all locations, respectively. Over three-quarters, 76·3 % (345/452) of menu items from mid-sized outlets were classified as discretionary. CONCLUSIONS: Kilojoule labelling was absent or incomplete on a high proportion of online menus. Mid-sized outlets have abundant discretionary choices and yet escape criteria for mandatory menu labelling laws. Our findings show the need to further monitor the implementation of nutrition policies on OFD.


Subjects
Benchmarking, Energy Intake, Humans, Cross-Sectional Studies, Food Labeling, Restaurants