2.
Sci Rep ; 12(1): 3463, 2022 03 02.
Article En | MEDLINE | ID: mdl-35236896

Early detection of diseases such as COVID-19 could be a critical tool in reducing disease transmission by helping individuals recognize when they should self-isolate, seek testing, and obtain early medical intervention. Consumer wearable devices that continuously measure physiological metrics hold promise as tools for early illness detection. We gathered daily questionnaire data and physiological data using a consumer wearable (Oura Ring) from 63,153 participants, of whom 704 self-reported possible COVID-19 disease. We selected 73 of these 704 participants with reliable confirmation of COVID-19 by PCR testing and high-quality physiological data for algorithm training to identify onset of COVID-19 using machine learning classification. The algorithm identified COVID-19 an average of 2.75 days before participants sought diagnostic testing, with a sensitivity of 82% and specificity of 63%. The receiver operating characteristic (ROC) area under the curve (AUC) was 0.819 (95% CI [0.809, 0.830]). Including continuous temperature yielded an AUC 4.9% higher than without this feature. For further validation, we obtained SARS-CoV-2 antibody test results in a subset of participants and identified 10 additional participants who self-reported COVID-19 disease with antibody confirmation. The algorithm had an overall ROC AUC of 0.819 (95% CI [0.809, 0.830]), with a sensitivity of 90% and specificity of 80% in these additional participants. Finally, we observed substantial variation in accuracy based on age and biological sex. These findings highlight the importance of including temperature assessment, using continuous physiological features for alignment, and including diverse populations in algorithm development to optimize accuracy in COVID-19 detection from wearables.
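The evaluation metrics reported above (sensitivity, specificity, ROC AUC) can be sketched in a few lines. This is an illustrative computation on synthetic labels and scores, not the study's actual model or data:

```python
# Illustrative sketch: sensitivity, specificity, and rank-based ROC AUC
# for a binary "illness onset" classifier. Synthetic data only.

def confusion_rates(y_true, y_pred):
    """Return (sensitivity, specificity) for binary labels/predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def roc_auc(y_true, scores):
    """AUC as the probability that a positive case outranks a negative one."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
y_pred = [1 if s >= 0.5 else 0 for s in scores]  # threshold at 0.5

sens, spec = confusion_rates(y_true, y_pred)
print(sens, spec, roc_auc(y_true, scores))
```

Note that sensitivity and specificity depend on the chosen threshold, while the AUC summarizes performance across all thresholds, which is why the paper reports both.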


Body Temperature , COVID-19/diagnosis , Wearable Electronic Devices , Adolescent , Adult , Aged , Aged, 80 and over , Algorithms , COVID-19/virology , Female , Humans , Male , Middle Aged , SARS-CoV-2/isolation & purification , Young Adult
3.
Vaccines (Basel) ; 10(2)2022 Feb 09.
Article En | MEDLINE | ID: mdl-35214723

There is significant variability in neutralizing antibody responses (which correlate with immune protection) after COVID-19 vaccination, but only limited information is available about predictors of these responses. We investigated whether device-generated summaries of physiological metrics collected by a wearable device correlated with post-vaccination levels of antibodies to the SARS-CoV-2 receptor-binding domain (RBD), the target of neutralizing antibodies generated by existing COVID-19 vaccines. One thousand one hundred and seventy-nine participants wore an off-the-shelf wearable device (Oura Ring), reported dates of COVID-19 vaccinations, and completed testing for antibodies to the SARS-CoV-2 RBD during the U.S. COVID-19 vaccination rollout. We found that, on the night immediately following the second mRNA injection (Moderna-NIAID or Pfizer-BioNTech), increases in dermal temperature deviation and resting heart rate, and decreases in heart rate variability (a measure of sympathetic nervous system activation) and deep sleep, were each statistically significantly correlated with greater RBD antibody responses. These associations were stronger in models using metrics adjusted for the pre-vaccination baseline period. Greater temperature deviation emerged as the strongest independent predictor of greater RBD antibody responses in multivariable models. In contrast to data on certain other vaccines, we did not find clear associations between increased sleep surrounding vaccination and antibody responses.
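The "baseline-adjusted metric" idea above can be sketched simply: express the post-dose night's reading as a deviation from each participant's own pre-vaccination baseline, then correlate that deviation with antibody level. All numbers below are synthetic, and the study's actual features and models are considerably richer:

```python
# Hedged sketch of baseline adjustment and correlation. Synthetic data only.

from statistics import mean

def baseline_adjusted(baseline_readings, post_dose_reading):
    """Deviation of the post-dose night from the pre-dose baseline mean."""
    return post_dose_reading - mean(baseline_readings)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Temperature deviation (deg C) on the night after dose 2, per participant,
# paired with a hypothetical RBD antibody titre.
temp_dev = [baseline_adjusted([36.5, 36.6, 36.5], t)
            for t in (37.4, 36.9, 37.8, 36.7)]
titres = [820, 410, 1150, 300]
print(pearson(temp_dev, titres))
```

Adjusting for the individual baseline removes stable between-person differences (some people simply run warmer), which is consistent with the paper's observation that baseline-adjusted metrics gave stronger associations.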

4.
PLoS Comput Biol ; 15(3): e1006856, 2019 03.
Article En | MEDLINE | ID: mdl-30849072

Multi-scale computational modeling is a major branch of computational biology, as evidenced by the US federal interagency Multi-Scale Modeling Consortium and major international projects. It invariably involves specific and detailed sequences of data analysis and simulation, often with multiple tools and datasets, and the community recognizes improved modularity, reuse, reproducibility, portability and scalability as critical unmet needs in this area. Scientific workflows are a well-recognized strategy for addressing these needs in scientific computing. While there are good examples of the use of scientific workflows in bioinformatics, medical informatics, biomedical imaging and data analysis, there are fewer examples in multi-scale computational modeling in general and cardiac electrophysiology in particular. Cardiac electrophysiology simulation is a mature area of multi-scale computational biology that serves as an excellent use case for developing and testing new scientific workflows. In this article, we develop, describe and test a computational workflow that serves as a proof of concept of a platform for the robust integration and implementation of a reusable and reproducible multi-scale cardiac cell and tissue model that is expandable, modular and portable. The workflow described leverages Python and the Kepler Python actor for plotting and pre/post-processing. During all stages of workflow design, we rely on freely available open-source tools to make our workflow freely usable by scientists.
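The modular-workflow pattern described above can be illustrated with a minimal sketch in which each stage (pre-processing, simulation, post-processing) is a self-contained step that can be swapped or reused. This shows the pattern only; it is not the authors' Kepler/Python implementation, and the step names and parameters are hypothetical:

```python
# Minimal modular-workflow sketch: independent steps share a context dict.

from typing import Callable, Dict, List

Step = Callable[[Dict], Dict]

def preprocess(ctx: Dict) -> Dict:
    # e.g. build a stimulus protocol from input parameters
    ctx["stimulus"] = [ctx["amplitude"]] * ctx["n_beats"]
    return ctx

def simulate(ctx: Dict) -> Dict:
    # stand-in for the cell/tissue model: one "response" per beat
    ctx["trace"] = [s * 0.5 for s in ctx["stimulus"]]
    return ctx

def postprocess(ctx: Dict) -> Dict:
    ctx["mean_response"] = sum(ctx["trace"]) / len(ctx["trace"])
    return ctx

def run_workflow(steps: List[Step], params: Dict) -> Dict:
    ctx = dict(params)
    for step in steps:  # each step reads and extends the shared context
        ctx = step(ctx)
    return ctx

result = run_workflow([preprocess, simulate, postprocess],
                      {"amplitude": 2.0, "n_beats": 4})
print(result["mean_response"])
```

Because each step only depends on the shared context, individual stages can be replaced (for example, swapping in a different simulation engine) without touching the rest of the pipeline, which is the modularity and reuse benefit the abstract emphasizes.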


Heart/physiology , Models, Cardiovascular , Workflow , Computer Simulation , Humans , Proof of Concept Study , Reproducibility of Results
5.
J Comput Sci ; 20: 205-214, 2017 May.
Article En | MEDLINE | ID: mdl-29104704

The BBDTC (https://biobigdata.ucsd.edu) is a community-oriented platform to encourage high-quality knowledge dissemination with the aim of growing a well-informed biomedical big data community through collaborative efforts on training and education. The BBDTC is an e-learning platform that empowers the biomedical community to develop, launch and share open training materials. It deploys hands-on software training toolboxes through virtualization technologies such as Amazon EC2 and VirtualBox. The BBDTC facilitates migration of courses across other course management platforms. The framework encourages knowledge sharing and content personalization through the playlist functionality, which enables unique learning experiences and accelerates information dissemination to a wider community.

6.
Biophys J ; 112(12): 2469-2474, 2017 Jun 20.
Article En | MEDLINE | ID: mdl-28636905

With the drive toward high throughput molecular dynamics (MD) simulations involving ever-greater numbers of simulation replicates run for longer, biologically relevant timescales (microseconds), the need for improved computational methods that facilitate fully automated MD workflows gains more importance. Here we report the development of an automated workflow tool to perform AMBER GPU MD simulations. Our workflow tool capitalizes on the capabilities of the Kepler platform to deliver a flexible, intuitive, and user-friendly environment and the AMBER GPU code for a robust and high-performance simulation engine. Additionally, the workflow tool reduces user input time by automating repetitive processes and facilitates access to GPU clusters, whose high-performance processing power makes simulations of large numerical scale possible. The presented workflow tool facilitates the management and deployment of large sets of MD simulations on heterogeneous computing resources. The workflow tool also performs systematic analysis on the simulation outputs and enhances simulation reproducibility, execution scalability, and MD method development including benchmarking and validation.
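The "automating repetitive processes" point above can be illustrated by generating the per-replicate command lines for a batch of GPU MD runs rather than composing them by hand. The pmemd.cuda flags follow AMBER's standard command-line interface, but the file names are hypothetical and this sketch is not the Kepler workflow tool itself:

```python
# Hedged sketch: generate one pmemd.cuda invocation per simulation replicate,
# each writing to its own output, restart, and trajectory files.

from pathlib import Path

def replicate_commands(system: str, n_replicates: int, mdin: str = "prod.in"):
    """Build pmemd.cuda command lines for n_replicates runs of one system."""
    cmds = []
    for i in range(1, n_replicates + 1):
        out = Path(f"{system}_rep{i}")  # per-replicate output prefix
        cmds.append(
            f"pmemd.cuda -O -i {mdin} -p {system}.prmtop -c {system}.inpcrd "
            f"-o {out}.mdout -r {out}.restrt -x {out}.nc"
        )
    return cmds

for cmd in replicate_commands("p53", 3):
    print(cmd)
```

A workflow engine would then dispatch these commands to GPU cluster nodes and collect the outputs for systematic analysis, which is the role the Kepler-based tool plays in the article.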


Molecular Dynamics Simulation , Software , Computer Graphics , Electronic Data Processing , Humans , Internet , Principal Component Analysis , Tumor Suppressor Protein p53/metabolism , Workflow
7.
Procedia Comput Sci ; 80: 1791-1800, 2016 Jun.
Article En | MEDLINE | ID: mdl-27478519

The BBDTC (https://biobigdata.ucsd.edu) is a community-oriented platform to encourage high-quality knowledge dissemination with the aim of growing a well-informed biomedical big data community through collaborative efforts on training and education. The BBDTC collaborative is an e-learning platform that supports the biomedical community to access, develop and deploy open training materials. The BBDTC supports big data skill training for biomedical scientists at all levels and from varied backgrounds. The natural hierarchy of courses allows them to be broken into and handled as modules. Modules can be reused in the context of multiple courses and reshuffled, producing a new, dynamic course called a playlist. Users may create playlists to suit their learning requirements and share them with individual users or the wider public. The BBDTC leverages the maturity and design of the HUBzero content-management platform for delivering educational content. To facilitate the migration of existing content, the BBDTC supports importing and exporting course material from the edX platform. Migration tools will be extended in the future to support other platforms. Hands-on training software packages, i.e., toolboxes, are supported through Amazon EC2 and VirtualBox virtualization technologies, and they are available as: (i) downloadable lightweight VirtualBox images providing a standardized software tool environment, with software packages and test data, for users' personal machines, and (ii) remotely accessible Amazon EC2 virtual machines for accessing biomedical big data tools and scalable big data experiments. At the moment, the BBDTC site contains three open biomedical big data training courses with lecture content, videos and hands-on training utilizing VM toolboxes, covering diverse topics. The courses have enhanced the hands-on learning environment by providing structured content that users can use at their own pace. A four-course biomedical big data series is planned for development in 2016.
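The course/module/playlist hierarchy described above can be sketched as a small data model: courses decompose into reusable modules, and a playlist is a user-assembled ordering of modules drawn from any course. Names below are illustrative, not the BBDTC's actual data model:

```python
# Sketch of the course -> module -> playlist hierarchy. Hypothetical names.

courses = {
    "Intro to Biomedical Big Data": ["data-types", "ethics", "storage"],
    "Workflows for Genomics": ["storage", "alignment", "variant-calling"],
}

def make_playlist(courses, wanted_modules):
    """Keep only modules that exist in some course, in the user's order."""
    available = {m for mods in courses.values() for m in mods}
    return [m for m in wanted_modules if m in available]

playlist = make_playlist(courses, ["alignment", "storage", "missing-topic"])
print(playlist)
```

Note that "storage" appears in both courses but is a single reusable module, which is the reuse-and-reshuffle property the abstract describes.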

8.
Proc IEEE Int Conf Big Data ; 2015: 2509-2516, 2015.
Article En | MEDLINE | ID: mdl-29399671

The ability to track provenance is a key feature of scientific workflows to support data lineage and reproducibility. The volume, variety and velocity of Big Data also pose related challenges for its provenance and quality, defined as veracity. The increasing size and variety of distributed Big Data provenance information bring new technical challenges and opportunities throughout the provenance lifecycle, including recording, querying, sharing and utilization. This paper discusses the challenges and opportunities of Big Data provenance related to the veracity of the datasets themselves and the provenance of the analytical processes that analyze these datasets. It also explains our current efforts toward tracking and utilizing Big Data provenance using workflows as a programming model to analyze Big Data.
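The provenance-recording idea can be sketched minimally: for every workflow step, log what went in, what came out, and when, so any result can be traced back through its lineage. This is an illustration only; real Big Data provenance systems record far richer metadata (tool versions, hosts, parameters) and distribute the log itself:

```python
# Minimal provenance capture: a decorator logs each step's input/output
# fingerprints and timestamp into an append-only lineage record.

import hashlib
import time

provenance = []  # append-only lineage log

def tracked(step_name):
    def wrap(fn):
        def run(data):
            result = fn(data)
            provenance.append({
                "step": step_name,
                "input_hash": hashlib.sha256(repr(data).encode()).hexdigest()[:12],
                "output_hash": hashlib.sha256(repr(result).encode()).hexdigest()[:12],
                "timestamp": time.time(),
            })
            return result
        return run
    return wrap

@tracked("filter_invalid")
def filter_invalid(records):
    return [r for r in records if r >= 0]

@tracked("rescale")
def rescale(records):
    return [r / 10 for r in records]

out = rescale(filter_invalid([5, -1, 20]))
print(out, [p["step"] for p in provenance])
```

Chaining input and output hashes across steps is what makes lineage queries possible: each step's input fingerprint matches the previous step's output fingerprint, so a questionable result can be traced to the exact inputs and transformations that produced it.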

...