1.
Sensors (Basel) ; 24(6)2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38544080

ABSTRACT

Commercially available wearable devices (wearables) show promise for continuous physiological monitoring. Previous works have demonstrated that wearables can be used to detect the onset of acute infectious diseases, particularly those characterized by fever. We aimed to evaluate whether these devices could be used for the more general task of syndromic surveillance. We obtained wearable device data (Oura Ring) from 63,153 participants. We constructed a dataset using participants' wearable device data and participants' responses to daily online questionnaires. We included days from the participants if they (1) completed the questionnaire, (2) reported not experiencing fever and reported a self-collected body temperature below 38 °C (negative class), or reported experiencing fever and reported a self-collected body temperature at or above 38 °C (positive class), and (3) wore the wearable device the nights before and after that day. We used wearable device data (i.e., skin temperature, heart rate, and sleep) from the nights before and after participants' fever day to train a tree-based classifier to detect self-reported fevers. We evaluated the performance of our model using a five-fold cross-validation scheme. A total of 16,794 participants provided at least one valid ground-truth day; there were a total of 724 fever days (positive class examples) from 463 participants and 342,430 non-fever days (negative class examples) from 16,687 participants. Our model exhibited an area under the receiver operating characteristic curve (AUROC) of 0.85 and an average precision (AP) of 0.25. At a sensitivity of 0.50, our calibrated model had a false positive rate of 0.8%. Our results suggest that it might be possible to leverage data from these devices at a public health level for live fever surveillance. Implementing these models could increase our ability to detect disease prevalence and spread in real time during infectious disease outbreaks.
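The evaluation setup described above (a tree-based classifier scored with five-fold cross-validated AUROC) can be sketched as follows. This is an illustrative reconstruction, not the study's code: the three features, the synthetic data, and the model choice (gradient boosting) are assumptions.

```python
# Sketch of a tree-based fever classifier evaluated with five-fold
# cross-validation. Synthetic stand-in data; feature names are assumed.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n = 2000
# Assumed per-night features: skin-temperature deviation, resting
# heart rate, and sleep duration around the labeled day.
X = rng.normal(size=(n, 3))
# Simulated labels: fever probability rises with the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n) > 1.5).astype(int)

clf = GradientBoostingClassifier(random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auroc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"mean cross-validated AUROC: {auroc.mean():.2f}")
```

In the study itself, class imbalance (724 fever days vs. 342,430 non-fever days) makes average precision the more informative companion metric, which `scoring="average_precision"` would compute in the same way.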


Subject(s)
Sentinel Surveillance , Wearable Electronic Devices , Humans , Routinely Collected Health Data , Monitoring, Physiologic , Fever/diagnosis , Self Report
3.
Biol Sex Differ ; 14(1): 76, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37915069

ABSTRACT

BACKGROUND: Females have been historically excluded from biomedical research due in part to the documented presumption that results with male subjects will generalize effectively to females. This has been justified in part by the assumption that ovarian rhythms will increase the overall variance of pooled random samples. But not all variance in samples is random. Human biometrics are continuously changing in response to stimuli and biological rhythms; single measurements taken sporadically do not easily support exploration of variance across time scales. Recently we reported that in mice, core body temperature measured longitudinally shows higher variance in males than cycling females, both within and across individuals at multiple time scales. METHODS: Here, we explore longitudinal human distal body temperature, measured by a wearable sensor device (Oura Ring), for 6 months in females and males ranging in age from 20 to 79 years. In this study, we did not limit the comparisons to female versus male, but instead we developed a method for categorizing individuals as cyclic or acyclic depending on the presence of a roughly monthly pattern to their nightly temperature. We then compared structure and variance across time scales using multiple standard instruments. RESULTS: Sex differences exist as expected, but across multiple statistical comparisons and timescales, there was no one group that consistently exceeded the others in variance. When variability was assessed across time, females, whether or not their temperature contained monthly cycles, did not significantly differ from males on either daily or monthly time scales. CONCLUSIONS: These findings contradict the viewpoint that human females are too variable across menstrual cycles to include in biomedical research. The longitudinal temperature of females does not accumulate greater measurement error over time than that of males, and the majority of unexplained variance lies within sex categories, not between them.


Women are still disproportionately excluded from research, due in part to documented concerns that menstrual cycles make them more variable and so harder to study. In the past, we have challenged this claim, finding it does not hold for animal physiology, animal behavior, or human behavior. Here we are able to show that it does not hold in human physiology either. We analyzed 6 months of continuously collected temperature data measured by a commercial wearable device, in order to determine if it is true that females are more variable or less predictable than males. We found that temperatures mostly vary as a function of time of day and whether the individual was awake or asleep. Additionally, for some females, nightly maximum temperature contained a cyclical pattern with a period of around 28 days, consistent with menstrual cycles. The variability was different between cycling females, non-cycling females, and males, but only cycling female temperature contained a monthly structure, making their changes more predictable than those of non-cycling females and males. We found the majority of unexplained variance to be within each sex/cycling category, not between them. All groups had indistinguishable measurement errors across time. This analysis of temperature suggests data-driven characteristics might be more helpful in distinguishing individuals than historical categories such as binary sex. The work also supports the inclusion of females as subjects within biological research, as this inclusion does not weaken statistical comparisons, but does allow research results to generalize more equitably.
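The cyclic-versus-acyclic categorization described above can be illustrated with a simple spectral test: flag a nightly temperature series as cyclic when its dominant periodogram peak falls near a ~28-day period. This is a hypothetical sketch, not the paper's actual method; the thresholds and synthetic series are assumptions.

```python
# Toy classifier: "cyclic" if the dominant non-DC frequency of the
# nightly temperature series corresponds to a ~28-day period and
# clearly stands out from the rest of the spectrum.
import numpy as np

def is_cyclic(temps, period_days=28.0, tol=7.0, power_ratio=3.0):
    """Return True if the strongest spectral peak lies near 1/period_days
    and exceeds power_ratio times the median non-DC spectral power."""
    x = np.asarray(temps, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0)  # cycles per day (nightly samples)
    power[0] = 0.0                          # ignore the DC component
    k = int(np.argmax(power))
    if freqs[k] == 0.0:
        return False                        # no non-DC structure at all
    dominant_period = 1.0 / freqs[k]
    strong = power[k] > power_ratio * np.median(power[1:])
    return bool(strong and abs(dominant_period - period_days) <= tol)

days = np.arange(180)  # ~6 months of nightly maxima
cycling = (36.5 + 0.3 * np.sin(2 * np.pi * days / 28.0)
           + 0.05 * np.random.default_rng(1).normal(size=days.size))
flat = np.full(days.size, 36.5)            # constant series: no cycle
print(is_cyclic(cycling), is_cyclic(flat))
```

Real wearable data would need detrending and gap handling before a test like this is meaningful; the point is only the shape of the decision rule.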


Subject(s)
Menstrual Cycle , Wearable Electronic Devices , Humans , Male , Female , Mice , Animals , Young Adult , Adult , Middle Aged , Aged , Temperature , Periodicity , Sex Characteristics
4.
New Phytol ; 238(3): 952-970, 2023 May.
Article in English | MEDLINE | ID: mdl-36694296

ABSTRACT

Wildfires are a global crisis, but current fire models fail to capture vegetation response to changing climate. With drought and elevated temperature increasing the importance of vegetation dynamics to fire behavior, and the advent of next generation models capable of capturing increasingly complex physical processes, we provide a renewed focus on representation of woody vegetation in fire models. Currently, the most advanced representations of fire behavior and biophysical fire effects are found in distinct classes of fine-scale models and do not capture variation in live fuel (i.e. living plant) properties. We demonstrate that plant water and carbon dynamics, which influence combustion and heat transfer into the plant and often dictate plant survival, provide the mechanistic linkage between fire behavior and effects. Our conceptual framework linking remotely sensed estimates of plant water and carbon to fine-scale models of fire behavior and effects could be a critical first step toward improving the fidelity of the coarse scale models that are now relied upon for global fire forecasting. This process-based approach will be essential to capturing the influence of physiological responses to drought and warming on live fuel conditions, strengthening the science needed to guide fire managers in an uncertain future.


Subject(s)
Fires , Wildfires , Plants , Plant Physiological Phenomena , Water , Carbon , Ecosystem
5.
Vaccines (Basel) ; 10(2)2022 Feb 09.
Article in English | MEDLINE | ID: mdl-35214723

ABSTRACT

There is significant variability in neutralizing antibody responses (which correlate with immune protection) after COVID-19 vaccination, but only limited information is available about predictors of these responses. We investigated whether device-generated summaries of physiological metrics collected by a wearable device correlated with post-vaccination levels of antibodies to the SARS-CoV-2 receptor-binding domain (RBD), the target of neutralizing antibodies generated by existing COVID-19 vaccines. A total of 1,179 participants wore an off-the-shelf wearable device (Oura Ring), reported dates of COVID-19 vaccinations, and completed testing for antibodies to the SARS-CoV-2 RBD during the U.S. COVID-19 vaccination rollout. We found that, on the night immediately following the second mRNA dose (Moderna-NIAID or Pfizer-BioNTech), increases in dermal temperature deviation and resting heart rate, and decreases in heart rate variability (a measure of sympathetic nervous system activation) and deep sleep, were each statistically significantly correlated with greater RBD antibody responses. These associations were stronger in models using metrics adjusted for the pre-vaccination baseline period. Greater temperature deviation emerged as the strongest independent predictor of greater RBD antibody responses in multivariable models. In contrast to data on certain other vaccines, we did not find clear associations between increased sleep surrounding vaccination and antibody responses.
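The "adjusted for the pre-vaccination baseline period" idea above can be sketched as follows: express each post-dose metric as a deviation from that participant's own baseline mean, then correlate the deviations with antibody levels. The data, effect size, and seven-night baseline window are simulated assumptions, not the study's values.

```python
# Illustration of baseline-adjusted wearable metrics correlated with
# antibody response. All numbers are simulated for demonstration.
import numpy as np

rng = np.random.default_rng(0)
n = 500
baseline_temp = rng.normal(36.5, 0.2, size=(n, 7))     # 7 baseline nights
# Post-dose night: each participant runs ~0.5 °C above their own baseline.
post_dose_temp = baseline_temp.mean(axis=1) + rng.normal(0.5, 0.3, size=n)

# Deviation from each participant's own pre-vaccination baseline.
temp_deviation = post_dose_temp - baseline_temp.mean(axis=1)
# Simulated antibody level: partially driven by the temperature deviation.
antibody = 100 + 40 * temp_deviation + rng.normal(0, 20, size=n)

r = np.corrcoef(temp_deviation, antibody)[0, 1]
print(f"correlation between temperature deviation and antibody level: {r:.2f}")
```

Referencing each participant to their own baseline removes stable between-person differences (e.g., habitual skin temperature), which is why the abstract reports stronger associations for adjusted metrics.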

6.
F1000Res ; 10: 897, 2021.
Article in English | MEDLINE | ID: mdl-34804501

ABSTRACT

Scientific data analyses often combine several computational tools in automated pipelines, or workflows. Thousands of such workflows have been used in the life sciences, though their composition has remained a cumbersome manual process due to a lack of standards for annotation, assembly, and implementation. Recent technological advances have returned the long-standing vision of automated workflow composition into focus. This article summarizes a recent Lorentz Center workshop dedicated to automated composition of workflows in the life sciences. We survey previous initiatives to automate the composition process, and discuss the current state of the art and future perspectives. We start by drawing the "big picture" of the scientific workflow development life cycle, before surveying and discussing current methods, technologies and practices for semantic domain modelling, automation in workflow development, and workflow assessment. Finally, we derive a roadmap of individual and community-based actions to work toward the vision of automated workflow development in the forthcoming years. A central outcome of the workshop is a general description of the workflow life cycle in six stages: 1) scientific question or hypothesis, 2) conceptual workflow, 3) abstract workflow, 4) concrete workflow, 5) production workflow, and 6) scientific results. The transitions between stages are facilitated by diverse tools and methods, usually incorporating domain knowledge in some form. Formal semantic domain modelling is hard and often a bottleneck for the application of semantic technologies. However, life science communities have made considerable progress here in recent years and are continuously improving, renewing interest in the application of semantic technologies for workflow exploration, composition and instantiation. Combined with systematic benchmarking with reference data and large-scale deployment of production-stage workflows, such technologies enable a more systematic process of workflow development than we know today. We believe that this can lead to more robust, reusable, and sustainable workflows in the future.


Subject(s)
Biological Science Disciplines , Computational Biology , Benchmarking , Software , Workflow
7.
Sensors (Basel) ; 19(20)2019 Oct 11.
Article in English | MEDLINE | ID: mdl-31614544

ABSTRACT

Discovering the Bayesian network (BN) structure from big datasets containing rich causal relationships is becoming increasingly valuable for modeling and reasoning under uncertainties in many areas with big data gathered from sensors, due to its high volume and velocity. Most of the current BN structure learning algorithms have shortcomings when facing big data. First, learning a BN structure from the entire big dataset is an expensive task which often ends in failure due to memory constraints. Second, it is quite difficult to select a learner from numerous BN structure learning algorithms to consistently achieve good learning accuracy. Lastly, there is a lack of an intelligent method that merges separately learned BN structures into a well-structured BN network. To address these shortcomings, we introduce a novel parallel learning approach called PEnBayes (Parallel Ensemble-based Bayesian network learning). PEnBayes starts with an adaptive data preprocessing phase that calculates the Appropriate Learning Size and intelligently divides a big dataset for fast distributed local structure learning. Then, PEnBayes learns a collection of local BN structures in parallel using a two-layered weighted adjacent matrix-based structure ensemble method. Lastly, PEnBayes merges the local BN structures into a global network structure using the structure ensemble method at the global layer. For the experiment, we generate big datasets by simulating sensor data from patient monitoring, transportation, and disease diagnosis domains. The experimental results show that PEnBayes achieves a significantly improved execution performance with more consistent and stable results compared with three baseline learning algorithms.
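The ensemble-merge step described above can be illustrated in miniature: local structure learners each produce a directed adjacency matrix, and an edge survives into the global structure when its weighted vote exceeds a threshold. This is a toy sketch of the general idea, not PEnBayes itself; the weights and threshold are assumptions.

```python
# Weighted majority vote over binary adjacency matrices, mimicking the
# merge of locally learned BN structures into a global structure.
import numpy as np

def merge_structures(adjacencies, weights=None, threshold=0.5):
    """Keep edge (i, j) when its normalized weighted vote exceeds threshold."""
    stack = np.asarray(adjacencies, dtype=float)   # shape: (learners, n, n)
    if weights is None:
        weights = np.ones(stack.shape[0])
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    vote = np.tensordot(weights, stack, axes=1)    # per-edge vote in [0, 1]
    return (vote > threshold).astype(int)

# Three local learners on a 3-node problem; two of three agree on each
# of the edges 0 -> 1 and 1 -> 2.
locals_ = [
    [[0, 1, 0], [0, 0, 1], [0, 0, 0]],
    [[0, 1, 0], [0, 0, 0], [0, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 0, 0]],
]
print(merge_structures(locals_))
```

A real merge must additionally guarantee acyclicity of the resulting graph (e.g., by dropping the weakest edge on any cycle), which this sketch omits.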

9.
PLoS Comput Biol ; 15(3): e1006856, 2019 03.
Article in English | MEDLINE | ID: mdl-30849072

ABSTRACT

Multi-scale computational modeling is a major branch of computational biology as evidenced by the US federal interagency Multi-Scale Modeling Consortium and major international projects. It invariably involves specific and detailed sequences of data analysis and simulation, often with multiple tools and datasets, and the community recognizes improved modularity, reuse, reproducibility, portability and scalability as critical unmet needs in this area. Scientific workflows are a well-recognized strategy for addressing these needs in scientific computing. While there are good examples of the use of scientific workflows in bioinformatics, medical informatics, biomedical imaging and data analysis, there are fewer examples in multi-scale computational modeling in general and cardiac electrophysiology in particular. Cardiac electrophysiology simulation is a mature area of multi-scale computational biology that serves as an excellent use case for developing and testing new scientific workflows. In this article, we develop, describe and test a computational workflow that serves as a proof of concept of a platform for the robust integration and implementation of a reusable and reproducible multi-scale cardiac cell and tissue model that is expandable, modular and portable. The workflow described leverages Python and the Kepler-Python actor for plotting and pre/post-processing. During all stages of the workflow design, we rely on freely available open-source tools to make our workflow freely usable by scientists.


Subject(s)
Heart/physiology , Models, Cardiovascular , Workflow , Computer Simulation , Humans , Proof of Concept Study , Reproducibility of Results
10.
J Comput Sci ; 20: 205-214, 2017 May.
Article in English | MEDLINE | ID: mdl-29104704

ABSTRACT

The BBDTC (https://biobigdata.ucsd.edu) is a community-oriented platform to encourage high-quality knowledge dissemination with the aim of growing a well-informed biomedical big data community through collaborative efforts on training and education. The BBDTC is an e-learning platform that empowers the biomedical community to develop, launch and share open training materials. It deploys hands-on software training toolboxes through virtualization technologies such as Amazon EC2 and Virtualbox. The BBDTC facilitates migration of courses across other course management platforms. The framework encourages knowledge sharing and content personalization through the playlist functionality that enables unique learning experiences and accelerates information dissemination to a wider community.

11.
Biophys J ; 112(12): 2469-2474, 2017 Jun 20.
Article in English | MEDLINE | ID: mdl-28636905

ABSTRACT

With the drive toward high throughput molecular dynamics (MD) simulations involving ever-greater numbers of simulation replicates run for longer, biologically relevant timescales (microseconds), the need for improved computational methods that facilitate fully automated MD workflows gains more importance. Here we report the development of an automated workflow tool to perform AMBER GPU MD simulations. Our workflow tool capitalizes on the capabilities of the Kepler platform to deliver a flexible, intuitive, and user-friendly environment and the AMBER GPU code for a robust and high-performance simulation engine. Additionally, the workflow tool reduces user input time by automating repetitive processes and facilitates access to GPU clusters, whose high-performance processing power makes simulations of large numerical scale possible. The presented workflow tool facilitates the management and deployment of large sets of MD simulations on heterogeneous computing resources. The workflow tool also performs systematic analysis on the simulation outputs and enhances simulation reproducibility, execution scalability, and MD method development including benchmarking and validation.


Subject(s)
Molecular Dynamics Simulation , Software , Computer Graphics , Electronic Data Processing , Humans , Internet , Principal Component Analysis , Tumor Suppressor Protein p53/metabolism , Workflow
12.
Procedia Comput Sci ; 80: 1791-1800, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27478519

ABSTRACT

The BBDTC (https://biobigdata.ucsd.edu) is a community-oriented platform to encourage high-quality knowledge dissemination with the aim of growing a well-informed biomedical big data community through collaborative efforts on training and education. The BBDTC collaborative is an e-learning platform that supports the biomedical community to access, develop and deploy open training materials. The BBDTC supports Big Data skill training for biomedical scientists at all levels, and from varied backgrounds. The natural hierarchy of courses allows them to be broken into modules. Modules can be reused in the context of multiple courses and reshuffled, producing a new, dynamic course called a playlist. Users may create playlists to suit their learning requirements and share them with individual users or the wider public. BBDTC leverages the maturity and design of the HUBzero content-management platform for delivering educational content. To facilitate the migration of existing content, the BBDTC supports importing and exporting course material from the edX platform. Migration tools will be extended in the future to support other platforms. Hands-on training software packages, i.e., toolboxes, are supported through Amazon EC2 and Virtualbox virtualization technologies, and they are available as: (i) downloadable, lightweight Virtualbox images providing a standardized software tool environment with software packages and test data for users' personal machines, and (ii) remotely accessible Amazon EC2 virtual machines for accessing biomedical big data tools and scalable big data experiments. At the moment, the BBDTC site contains three open biomedical big data training courses with lecture contents, videos and hands-on training utilizing VM toolboxes, covering diverse topics. The courses have enhanced the hands-on learning environment by providing structured content that users can use at their own pace. A four-course biomedical big data series is planned for development in 2016.

13.
Procedia Comput Sci ; 80: 673-679, 2016.
Article in English | MEDLINE | ID: mdl-28232853

ABSTRACT

Modern web technologies facilitate the creation of high-quality data visualizations, and rich, interactive components across a wide variety of devices. Scientific workflow systems can greatly benefit from these technologies by giving scientists a better understanding of their data or model leading to new insights. While several projects have enabled web access to scientific workflow systems, they are primarily organized as a large portal server encapsulating the workflow engine. In this vision paper, we propose the design for Kepler WebView, a lightweight framework that integrates web technologies with the Kepler Scientific Workflow System. By embedding a web server in the Kepler process, Kepler WebView enables a wide variety of usage scenarios that would be difficult or impossible using the portal model.

14.
Proc IEEE Int Conf Big Data ; 2015: 2509-2516, 2015.
Article in English | MEDLINE | ID: mdl-29399671

ABSTRACT

The ability to track provenance is a key feature of scientific workflows, supporting data lineage and reproducibility. The volume, variety and velocity of Big Data also pose related challenges for its provenance and quality, defined as veracity. The increasing size and variety of distributed Big Data provenance information bring new technical challenges and opportunities throughout the provenance lifecycle including recording, querying, sharing and utilization. This paper discusses the challenges and opportunities of Big Data provenance related to the veracity of the datasets themselves and the provenance of the analytical processes that analyze these datasets. It also explains our current efforts towards tracking and utilizing Big Data provenance using workflows as a programming model to analyze Big Data.

15.
BMC Bioinformatics ; 15: 69, 2014 Mar 12.
Article in English | MEDLINE | ID: mdl-24621103

ABSTRACT

BACKGROUND: Mandatory deposit of raw microarray data files for public access, prior to study publication, provides significant opportunities to conduct new bioinformatics analyses within and across multiple datasets. Analysis of raw microarray data files (e.g. Affymetrix CEL files) can be time consuming, complex, and requires fundamental computational and bioinformatics skills. The development of analytical workflows to automate these tasks simplifies the processing of, improves the efficiency of, and serves to standardize multiple and sequential analyses. Once installed, workflows facilitate the tedious steps required to run rapid intra- and inter-dataset comparisons. RESULTS: We developed a workflow to facilitate and standardize Meta-Analysis of Affymetrix Microarray Data analysis (MAAMD) in Kepler. Two freely available stand-alone software tools, R and AltAnalyze, were embedded in MAAMD. The inputs of MAAMD are user-editable csv files, which contain sample information and parameters describing the locations of input files and required tools. MAAMD was tested by analyzing 4 different GEO datasets from mice and Drosophila. MAAMD automates data downloading, data organization, data quality control assessment, differential gene expression analysis, clustering analysis, pathway visualization, gene-set enrichment analysis, and cross-species orthologous-gene comparisons. MAAMD was utilized to identify gene orthologues responding to hypoxia or hyperoxia in both mice and Drosophila. The entire set of analyses for 4 datasets (34 total microarrays) finished in approximately one hour. CONCLUSIONS: MAAMD saves time, minimizes the required computer skills, and offers a standardized procedure for users to analyze microarray datasets and make new intra- and inter-dataset comparisons.


Subject(s)
Computational Biology/methods , Databases, Genetic , Meta-Analysis as Topic , Oligonucleotide Array Sequence Analysis/methods , Software , Animals , Drosophila , Mice , Quality Control
16.
Procedia Comput Sci ; 29: 2162-2167, 2014.
Article in English | MEDLINE | ID: mdl-26605000

ABSTRACT

Increasing numbers of genomic technologies are leading to massive amounts of genomic data, all of which require complex analysis. More and more bioinformatics analysis tools are being developed by scientists to simplify these analyses. However, different pipelines have been developed using different software environments. This makes integration of these diverse bioinformatics tools difficult. Kepler provides an open source environment to integrate these disparate packages. Using Kepler, we integrated several external tools including Bioconductor packages, AltAnalyze, a python-based open source tool, and an R-based comparison tool to build an automated workflow to meta-analyze both online and local microarray data. The automated workflow connects the integrated tools seamlessly, delivers data flow between the tools smoothly, and hence improves efficiency and accuracy of complex data analyses. Our workflow exemplifies the usage of Kepler as a scientific workflow platform for bioinformatics pipelines.

17.
Procedia Comput Sci ; 20: 2295-2305, 2014.
Article in English | MEDLINE | ID: mdl-25621086

ABSTRACT

Scientific workflows integrate data and computing interfaces as configurable, semi-automatic graphs to solve a scientific problem. Kepler is such a software system for designing, executing, reusing, evolving, archiving and sharing scientific workflows. Electron tomography (ET) enables high-resolution views of complex cellular structures, such as cytoskeletons, organelles, viruses and chromosomes. Imaging investigations produce large datasets. For instance, in electron tomography, the size of a 16-fold image tilt series is about 65 gigabytes, with each projection image including 4096 by 4096 pixels. When we use serial sections or montage techniques for large-field ET, the dataset will be even larger. For higher resolution images with multiple tilt series, the data size may be in the terabyte range. The demands of mass data processing and complex algorithms require the integration of diverse codes into flexible software structures. This paper describes a workflow for Electron Tomography Programs in Kepler (EPiK). This EPiK workflow embeds the tracking process of IMOD and realizes the main algorithms including filtered backprojection (FBP) from TxBR and iterative reconstruction methods. We have tested the three-dimensional (3D) reconstruction process using EPiK on ET data. EPiK can be a potential toolkit for biology researchers with the advantage of logical viewing, easy handling, convenient sharing and future extensibility.

18.
Procedia Comput Sci ; 29: 546-556, 2014.
Article in English | MEDLINE | ID: mdl-29399237

ABSTRACT

With more and more workflow systems adopting cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow execution on VM instances. To make the system scalable and easy-to-extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies.
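One classic heuristic in the spirit of the scheduling problem described above (the paper's four actual algorithms are not reproduced here) is the min-load rule: place each incoming workflow on the VM that frees up first. A minimal sketch, with job durations and VM count as assumed inputs:

```python
# Greedy min-load scheduling: each workflow goes to the VM whose
# current finish time is earliest, tracked with a min-heap.
import heapq

def greedy_schedule(durations, n_vms):
    """Assign each workflow duration to the first-free VM.
    Returns (per-job VM assignments, overall makespan)."""
    heap = [(0.0, vm) for vm in range(n_vms)]  # (finish_time, vm_id)
    heapq.heapify(heap)
    assignments = []
    for d in durations:
        finish, vm = heapq.heappop(heap)       # VM that frees up first
        assignments.append(vm)
        heapq.heappush(heap, (finish + d, vm))
    makespan = max(t for t, _ in heap)
    return assignments, makespan

jobs = [5, 3, 8, 2, 4]                         # hypothetical run times
assign, makespan = greedy_schedule(jobs, n_vms=2)
print(assign, makespan)
```

Cost-oriented variants of the same loop would pop the cheapest-feasible VM instead of the earliest-free one, which is how a single skeleton can yield a family of heuristics trading performance against price.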

19.
Procedia Comput Sci ; 29: 1745-1755, 2014.
Article in English | MEDLINE | ID: mdl-29399238

ABSTRACT

We describe the development of automated workflows that support computer-aided drug discovery (CADD) and molecular dynamics (MD) simulations and are included as part of the National Biomedical Computational Resource (NBCR). The main workflow components include: file-management tasks, ligand force field parameterization, receptor-ligand molecular dynamics (MD) simulations, job submission and monitoring on relevant high-performance computing (HPC) resources, receptor structural clustering, virtual screening (VS), and statistical analyses of the VS results. The workflows aim to standardize simulation and analysis and promote best practices within the molecular simulation and CADD communities. Each component is developed as a stand-alone workflow, which allows easy integration into larger frameworks built to suit user needs, while remaining intuitive and easy to extend.
