Results 1 - 20 of 548
1.
BMC Bioinformatics ; 21(1): 28, 2020 Jan 28.
Article in English | MEDLINE | ID: mdl-31992182

ABSTRACT

BACKGROUND: Despite the significant contribution of transcriptomics to biological and biomedical research, interpreting long lists of significantly differentially expressed genes remains a challenging step in the analysis process. Gene set enrichment analysis is a standard approach for summarizing differentially expressed genes into pathways or other gene groupings. Here, we explore an alternative to using gene sets from curated databases: deriving custom gene sets that may be relevant to a given experiment from reference data sets of previous transcriptomics studies. We call these data-derived gene sets "gene signatures" for the biological process tested in the previous study. We focus on the feasibility of this approach for analyzing immune-related processes, which are complex in nature but play an important role in medical research. RESULTS: We evaluate several statistical approaches to detecting the activity of a gene signature in a target data set. We compare the performance of the data-derived gene signature approach with comparable GO term gene sets across all of the statistical tests. A total of 61 differential expression comparisons generated from 26 transcriptome experiments were included in the analysis. These experiments covered eight immunological processes in eight types of leukocytes. The data-derived signatures detected the presence of immunological processes in the test data with modest accuracy (AUC = 0.67). The performance of GO and literature-based gene sets was worse (AUC = 0.59). Both approaches were plagued by poor specificity. CONCLUSIONS: When investigators seek to test specific hypotheses, the data-derived signature approach can perform as well as, if not better than, standard gene-set-based approaches for immunological signatures.
Furthermore, data-derived signatures can be generated in cases where well-defined gene sets are lacking from pathway databases, and they also offer the opportunity to define signatures in a cell-type-specific manner. However, neither the data-derived signatures nor standard gene sets could be shown to reliably provide negative predictions for negative cases. We conclude that the data-derived signature approach is a useful and sometimes necessary tool, but analysts should be wary of false positives.
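The study's headline numbers are AUCs; a minimal, dependency-free sketch of how per-sample signature scores are summarized into an AUC (the scores below are hypothetical, not from the paper):

```python
def auc(pos, neg):
    """Probability that a randomly chosen positive case scores higher
    than a randomly chosen negative case (ties count as half a win)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical signature scores for samples where the immunological
# process is truly active (pos) vs. inactive (neg).
pos = [0.9, 0.7, 0.6, 0.4]
neg = [0.8, 0.5, 0.3, 0.2]
print(auc(pos, neg))  # 0.75
```

An AUC of 0.5 corresponds to random guessing, which makes the reported 0.67 vs. 0.59 gap easier to interpret.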


Subjects
Gene Expression Profiling , Leukocytes/metabolism , Animals , Data Curation , Databases, Genetic , Humans , Leukocytes/immunology , Mice , Sensitivity and Specificity
2.
PLoS Biol ; 18(1): e3000581, 2020 01.
Article in English | MEDLINE | ID: mdl-31929523

ABSTRACT

Experimental data can broadly be divided into discrete and continuous data. Continuous data are obtained from measurements that are performed as a function of another quantitative variable, e.g., time, length, concentration, or wavelength. The results from these types of experiments are often used to generate plots that visualize the measured variable on a continuous, quantitative scale. To simplify state-of-the-art data visualization and annotation of data from such experiments, an open-source tool was created with R/shiny that does not require coding skills to operate. The freely available web app accepts wide (spreadsheet) and tidy data and offers a range of options to normalize the data. The data from individual objects can be shown in three different ways: (1) lines with unique colors, (2) small multiples, and (3) heatmap-style display. In addition, the mean can be displayed with a 95% confidence interval for the visual comparison of different conditions. Several color-blind-friendly palettes are available to label the data and/or statistics. The plots can be annotated with graphical features and/or text to indicate any perturbations that are relevant. All user-defined settings can be stored for reproducibility of the data visualization. The app is dubbed PlotTwist and runs locally or online: https://huygens.science.uva.nl/PlotTwist.
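The wide-versus-tidy distinction the abstract mentions can be sketched in a few lines; the conversion shown here is generic (field names and values are illustrative, not PlotTwist's actual internals):

```python
def wide_to_tidy(time_points, wide):
    """Convert a wide table {object_id: [values per time point]} into
    tidy rows (time, object_id, value), one observation per row."""
    return [(t, obj, values[i])
            for obj, values in wide.items()
            for i, t in enumerate(time_points)]

time_points = [0, 30, 60]  # e.g., seconds
wide = {"cell_1": [1.0, 1.4, 1.1], "cell_2": [0.9, 1.6, 1.2]}
tidy = wide_to_tidy(time_points, wide)
# [(0, 'cell_1', 1.0), (30, 'cell_1', 1.4), ...]
```

Tidy rows like these are what plotting tools typically map directly to aesthetics (x, color, facet).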


Subjects
Computational Biology/methods , Computer Graphics , Data Aggregation , Data Curation/methods , Longitudinal Studies , Software , Color , Forms and Records Control , Humans , Internet , Mobile Applications , Time Factors , User-Computer Interface
3.
PLoS Comput Biol ; 16(1): e1007598, 2020 01.
Article in English | MEDLINE | ID: mdl-31929520

ABSTRACT

Passive acoustic monitoring has become an important data collection method, yielding massive datasets replete with biological, environmental and anthropogenic information. Automated signal detectors and classifiers are needed to identify events within these datasets, such as the presence of species-specific sounds or anthropogenic noise. These automated methods, however, are rarely a complete substitute for expert analyst review. The ability to visualize and annotate acoustic events efficiently can enhance scientific insights from large, previously intractable datasets. A MATLAB-based graphical user interface, called DetEdit, was developed to accelerate the editing and annotating of automated detections from extensive acoustic datasets. This tool is highly configurable and multipurpose, with uses ranging from annotation and classification of individual signals or signal clusters and evaluation of signal properties, to identification of false detections and false-positive-rate estimation. DetEdit allows users to step through acoustic events, displaying a range of signal features, including time series of received levels, long-term spectral averages, time intervals between detections, and scatter plots of peak frequency, RMS, and peak-to-peak received levels. Additionally, it displays either individual or averaged sound pressure waveforms and power spectra within each acoustic event. These views simultaneously provide analysts with signal-level detail and encounter-level context. DetEdit creates datasets of signal labels for further analyses, such as training classifiers and quantifying occurrence, abundance, or trends. Although designed for evaluating underwater-recorded odontocete echolocation click detections, DetEdit can be adapted to almost any stereotyped impulsive signal. Our software package complements available tools for the bioacoustic community and is provided open source at https://github.com/MarineBioAcousticsRC/DetEdit.
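Two of the signal features DetEdit displays, RMS and peak-to-peak received levels, reduce to simple waveform statistics. A minimal sketch with a toy waveform (DetEdit itself is MATLAB; Python is used here only for illustration, and real received levels would additionally be converted to dB re a reference pressure):

```python
import math

def rms(waveform):
    """Root-mean-square level of a sampled pressure waveform."""
    return math.sqrt(sum(x * x for x in waveform) / len(waveform))

def peak_to_peak(waveform):
    """Difference between the maximum and minimum sample values."""
    return max(waveform) - min(waveform)

click = [0.0, 3.0, -4.0, 3.0, -4.0, 0.0]  # toy impulsive signal
print(rms(click), peak_to_peak(click))
```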


Subjects
Data Curation/methods , Environmental Monitoring/methods , Sound Spectrography , User-Computer Interface , Vocalization, Animal/classification , Animals , Cetacea/physiology , Databases, Factual , Internet , Signal Processing, Computer-Assisted
4.
BMC Bioinformatics ; 20(1): 542, 2019 Nov 01.
Article in English | MEDLINE | ID: mdl-31675914

ABSTRACT

BACKGROUND: In biological experiments, comprehensive experimental metadata tracking - which comprises experiment, reagent, and protocol annotation with controlled vocabulary from established ontologies - remains a challenge, especially when the experiment involves multiple laboratory scientists who execute different steps of the protocol. Here we describe Annot, a novel web application designed to provide a flexible solution for this task. RESULTS: Annot enforces the use of controlled vocabulary for sample and reagent annotation while enabling robust investigation, study, and protocol tracking. The cornerstone of Annot's implementation is a JSON-syntax-compatible file format, which can capture detailed metadata for all aspects of complex biological experiments. Data stored in this JSON file format can easily be ported into spreadsheet or data-frame files that can be loaded into R ( https://www.r-project.org/ ) or pandas, Python's data analysis library ( https://pandas.pydata.org/ ). Annot is implemented in Python 3 and utilizes the Django web framework, PostgreSQL, Nginx, and Debian. It is deployed via Docker and supports all major browsers. CONCLUSIONS: Annot offers a robust solution to annotate samples, reagents, and experimental protocols for established assays where multiple laboratory scientists are involved. Further, it provides a framework to store and retrieve metadata for data analysis and integration, and therefore ensures that data generated in different experiments can be integrated and jointly analyzed. This type of solution to metadata tracking can enhance the utility of large-scale datasets, which we demonstrate here with a large-scale microenvironment microarray study.
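The JSON-to-data-frame workflow the abstract describes can be illustrated with a toy record (the field names below are hypothetical, not Annot's actual schema):

```python
import json

# Hypothetical Annot-style metadata record (illustrative field names).
record = json.loads("""
{
  "study": "MEMA-01",
  "samples": [
    {"sample_id": "s1", "cell_line": "MCF7", "reagent": "EGF"},
    {"sample_id": "s2", "cell_line": "HMEC", "reagent": "TGFB"}
  ]
}
""")

# Flatten the nested record to spreadsheet-like rows; the same list of
# dicts can be passed directly to pandas.DataFrame(...) in Python or
# written to CSV for loading into R.
rows = [{"study": record["study"], **s} for s in record["samples"]]
print(rows[0]["cell_line"])  # MCF7
```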


Subjects
Computational Biology/methods , Data Curation/methods , Indicators and Reagents/supply & distribution , Metadata , Biological Specimen Banks/statistics & numerical data , Software , Vocabulary, Controlled
5.
Nursing (São Paulo) ; 22(258): 3313-3319, Nov 2019.
Article in Portuguese | LILACS, BDENF - Nursing | ID: biblio-1052257

ABSTRACT

The objective is to describe the stages of adapting the medical record digitization process at a maternity hospital in São Luís, Maranhão, Brazil. Work began in July 2018 with a diagnosis of the documentation situation, organization of records with the Medical Archive Service, and installation of the system, with support from the State Health Department, the Acqua Institute, and the technical IT sector. Manual documentation ran from 1998 to the current year. The process now comprises 3,603 digitized records. Data covering 11 years are catalogued for delivery to the State Health Department for permanent archiving. After one year it is possible to measure the main gains, such as faster access, viewing, and printing of records. The expectation is to better meet demand. The project is in its early stages and is a milestone in the optimization of digital documentation, serving as a reference in administrative efficiency. (AU)


Subjects
Humans , Medical Records , Health Information Management , Hospital Administration , Hospitals, Maternity , Data Curation
6.
PLoS Comput Biol ; 15(10): e1007291, 2019 10.
Article in English | MEDLINE | ID: mdl-31622330

ABSTRACT

As with many other aspects of the modern world, in healthcare the explosion of data and resources opens new opportunities for the development of added-value services. Still, a number of conditions specific to this domain greatly hinder these developments, including ethical and legal issues, fragmentation of the relevant data across different locations, and a level of (meta)data complexity that requires great expertise across technical, clinical, and biological domains. We propose the Patient Dossier paradigm as a way to organize new, innovative healthcare services that sorts out these limitations. The Patient Dossier conceptual framework identifies the different issues and suggests how they can be tackled in a safe, efficient, and responsible way, while opening options for independent development by different players in the healthcare sector. An initial implementation of the Patient Dossier concepts in the Rbbt framework is available as open source at https://github.com/mikisvaz and https://github.com/Rbbt-Workflows.


Subjects
Data Curation/methods , Delivery of Health Care/organization & administration , Medical Records/classification , Humans , Software
7.
Int J Mol Sci ; 20(18)2019 Sep 07.
Article in English | MEDLINE | ID: mdl-31500324

ABSTRACT

Independent component analysis (ICA) is a matrix factorization approach in which the signals captured by the individual matrix factors are optimized to become as mutually independent as possible. Initially suggested for solving blind source separation problems in various fields, ICA was shown to be successful in analyzing functional magnetic resonance imaging (fMRI) and other types of biomedical data. In the last twenty years, ICA has become part of the standard machine learning toolbox, together with other matrix factorization methods such as principal component analysis (PCA) and non-negative matrix factorization (NMF). Here, we review a number of recent works in which ICA was shown to be a useful tool for unraveling the complexity of cancer biology through the analysis of different types of omics data, mainly collected for tumoral samples. Such works highlight the use of ICA in dimensionality reduction, deconvolution, data pre-processing, meta-analysis, and other tasks applied to different data types (transcriptome, methylome, proteome, single-cell data). We particularly focus on the technical aspects of applying ICA in omics studies, such as using different protocols, determining the optimal number of components, assessing and improving the reproducibility of the ICA results, and comparison with other popular matrix factorization techniques. We discuss the emerging applications of ICA to the integrative analysis of multi-level omics datasets and introduce a conceptual view of ICA as a tool for defining functional subsystems of a complex biological system and their interactions under various conditions. Our review is accompanied by a Jupyter notebook which illustrates the discussed concepts and provides a practical tool for applying ICA to the analysis of cancer omics datasets.
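One classic contrast function that kurtosis-based ICA variants optimize is excess kurtosis, a measure of non-Gaussianity; a minimal sketch of that contrast alone (this is not a full ICA implementation, and the review itself does not prescribe a specific contrast):

```python
def excess_kurtosis(xs):
    """Excess kurtosis: 0 for Gaussian data, negative for sub-Gaussian
    signals, positive for super-Gaussian (heavy-tailed) signals.
    Kurtosis-based ICA rotates whitened data to extremize this value."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / var ** 2 - 3.0

# A symmetric two-point signal (e.g., a square wave) is maximally
# sub-Gaussian: excess kurtosis -2.
print(excess_kurtosis([1.0, -1.0] * 50))  # -2.0
```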


Subjects
Computational Biology/methods , Neoplasms/genetics , Neoplasms/metabolism , Algorithms , Data Curation , Databases, Factual , Humans , Machine Learning , Magnetic Resonance Imaging , Neoplasms/diagnostic imaging , Principal Component Analysis
8.
PLoS Biol ; 17(8): e3000384, 2019 08.
Article in English | MEDLINE | ID: mdl-31404057

ABSTRACT

Citation metrics are widely used and misused. We have created a publicly available database of 100,000 top scientists that provides standardized information on citations, h-index, coauthorship-adjusted hm-index, citations to papers in different authorship positions, and a composite indicator. Separate data are shown for career-long and single-year impact. Metrics with and without self-citations and ratio of citations to citing papers are given. Scientists are classified into 22 scientific fields and 176 subfields. Field- and subfield-specific percentiles are also provided for all scientists who have published at least five papers. Career-long data are updated to end of 2017 and to end of 2018 for comparison.
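The h-index reported in the database has a short operational definition; a minimal sketch (the citation counts are hypothetical, and the database's coauthorship-adjusted hm-index additionally uses fractional counting, not shown here):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
```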


Subjects
Authorship/standards , Data Curation/methods , Databases, Factual/standards , Bibliometrics , Humans , Journal Impact Factor , Publications/trends , Publishing/trends , Reference Standards , Research Personnel
9.
Database (Oxford) ; 20192019 01 01.
Article in English | MEDLINE | ID: mdl-31411687

ABSTRACT

The Clinical Drug Experience Knowledgebase (CDEK) is a database and web platform of active pharmaceutical ingredients with evidence of clinical testing as well as the organizations involved in their research and development. CDEK was curated by disambiguating intervention and organization names from ClinicalTrials.gov and cross-referencing these entries with other prominent drug databases. Approximately 43% of active pharmaceutical ingredients in the CDEK database were sourced from ClinicalTrials.gov and cannot be found in any other prominent compound-oriented database. The contents of CDEK are structured around three pillars: active pharmaceutical ingredients (n = 22 292), clinical trials (n = 127 223) and organizations (n = 24 728). The envisioned use of the CDEK is to support the investigation of many aspects of drug development, including discovery, repurposing opportunities, chemo- and bio-informatics, clinical and translational research and regulatory sciences.


Subjects
Data Curation , Databases, Chemical , Knowledge Bases , Pharmaceutical Preparations , User-Computer Interface , Humans
10.
PLoS One ; 14(7): e0216913, 2019.
Article in English | MEDLINE | ID: mdl-31361753

ABSTRACT

Significant progress has been made recently in applying deep learning to natural language processing tasks. However, deep learning models typically require large amounts of annotated training data, while often only small labeled datasets are available for many natural language processing tasks in the biomedical literature. Building large datasets for deep learning is expensive, since it involves considerable human effort and usually requires domain expertise in specialized fields. In this work, we consider augmenting manually annotated data with large amounts of data obtained by distant supervision. Because data obtained by distant supervision is often noisy, we first apply heuristics to remove some of the incorrect annotations. Then, using methods inspired by transfer learning, we show that the resulting models outperform models trained on the original manually annotated sets.
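The abstract does not specify its filtering heuristics; one plausible precision-oriented filter of the kind described, shown with hypothetical examples, is to drop distant annotations whose labelled entity never actually occurs in the sentence:

```python
def filter_distant_labels(examples):
    """Keep only distantly supervised examples whose labelled entity
    string actually appears in the sentence. A simple precision-oriented
    heuristic; the paper's real filtering rules are not specified in
    the abstract."""
    return [ex for ex in examples
            if ex["entity"].lower() in ex["sentence"].lower()]

examples = [
    {"sentence": "BRCA1 mutations increase cancer risk.", "entity": "BRCA1"},
    {"sentence": "The pathway remains poorly understood.", "entity": "TP53"},
]
print(len(filter_distant_labels(examples)))  # 1
```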


Subjects
Data Curation , Data Mining , Deep Learning , Models, Theoretical , Natural Language Processing , Humans
11.
J Biomed Semantics ; 10(1): 13, 2019 07 15.
Article in English | MEDLINE | ID: mdl-31307550

ABSTRACT

BACKGROUND: Microbial genetics has formed a foundation for understanding many aspects of biology. Systematic annotation that supports computational data mining should reveal further insights for microbes, microbiomes, and conserved functions beyond microbes. The Ontology of Microbial Phenotypes (OMP) was created to support such annotation. RESULTS: We define standards for an OMP-based annotation framework that supports the capture of a variety of phenotypes and provides flexibility for different levels of detail based on a combination of pre- and post-composition using OMP and other Open Biomedical Ontology (OBO) projects. A system for entering and viewing OMP annotations has been added to our online, public, web-based data portal. CONCLUSIONS: The annotation framework described here is ready to support projects to capture phenotypes from the experimental literature for a variety of microbes. Defining the OMP annotation standard should support the development of new software tools for data mining and analysis in comparative phenomics.


Subjects
Biological Ontologies , Data Curation/methods , Microbiology , Phenotype , Metadata
12.
Database (Oxford) ; 20192019 01 01.
Article in English | MEDLINE | ID: mdl-31290951

ABSTRACT

The emergence and spread of drug-resistant Mycobacterium tuberculosis is of global concern. To improve the understanding of drug resistance in mycobacteria, numerous studies have been performed to discover diagnostic markers and genetic determinants associated with resistance to anti-tuberculosis drugs. However, the related information is scattered across a massive body of literature, which makes it inconvenient for researchers to investigate the molecular mechanisms of drug resistance. Therefore, we manually collected 1707 curated associations between 73 compounds and 132 molecules (including coding genes and non-coding RNAs) in 6 mycobacterial species from 465 studies. The experimental details of molecular epidemiology and mechanism-exploration research were also summarized and recorded in our work. In addition, molecules associated with multidrug resistance and extensive drug resistance were extracted to help interpret the molecular mechanisms responsible for cross-resistance among anti-tuberculosis drugs. Finally, we constructed an omnibus repository named MycoResistance, with a user-friendly interface to conveniently browse, search, and download all related entries. We hope that this elaborate database will serve as a beneficial resource for mechanism explanation, precise diagnosis, and effective treatment of drug-resistant mycobacterial strains.


Subjects
Antitubercular Agents , Databases, Genetic , Drug Resistance, Bacterial , Mycobacterium , Antitubercular Agents/chemistry , Antitubercular Agents/therapeutic use , Data Curation , Drug Resistance, Bacterial/drug effects , Drug Resistance, Bacterial/genetics , Humans , Mycobacterium/genetics , Mycobacterium/metabolism
13.
Sports Health ; 11(5): 440-445, 2019.
Article in English | MEDLINE | ID: mdl-31265352

ABSTRACT

BACKGROUND: "Research-ready" evidence platforms that link sports data with anonymized electronic health records (EHRs) or other data are important tools for evaluating injury occurrence in response to changes in games, training, rules, and other factors. While there is agreement that high-quality data are essential, there is little evidence to guide data curation. HYPOTHESIS: We hypothesized that an EHR used in the course of clinical care and curated for research readiness can provide a robust evidence platform. Our purpose was to describe the data curation used for active injury surveillance by the National Football League (NFL). STUDY DESIGN: Dynamic cohort study. LEVEL OF EVIDENCE: Level 2. METHODS: Players provide informed consent for research activities through the collective bargaining process. A league-wide EHR is used to record injuries that come to the attention of the teams' athletic trainers and physicians, NFL medical spotters, or unaffiliated neurotrauma consultants. Information about football activities and injuries is linkable by player, setting, and event to other sports-related data, including game statistics and game-day stadium quality measures, using a unique player identification designed to protect player privacy. Ongoing data curation is used to review data completeness and accuracy and is adjusted over time in response to findings. RESULTS: The core data curation activities include monthly injury summaries to team staff, queries to resolve incomplete reporting, and periodic external checks. Experiences derived from producing more than 100 reports per year on diverse topics are used to update coding training and related guidance documents in response to missing data or inconsistent coding. Roughly 20% more injuries were recorded for the same "reportable" injuries after switching from targeted reporting to an EHR.
CONCLUSION: Research-ready databases need systematic curation for quality and completeness, along with related action plans. More injuries were reported through EHR than through targeted reporting. CLINICAL RELEVANCE: Evidence-driven decision-making thrives on reliable data fine-tuned through systematic use, review, and ongoing adjustments to the curation process.


Subjects
Athletic Injuries/diagnosis , Electronic Health Records , Football/injuries , Cohort Studies , Data Curation , Humans , Information Storage and Retrieval , Sports Medicine
14.
Database (Oxford) ; 20192019 01 01.
Article in English | MEDLINE | ID: mdl-31267133

ABSTRACT

In recent years, research focusing on PIWI-interacting RNAs (piRNAs) has increased rapidly. It has been revealed that piRNAs are strongly associated with a wide range of diseases; thus, it becomes very important to understand piRNAs' role(s) in disease diagnosis, prognosis, and assessment of treatment response. We searched more than 2500 articles using keywords such as 'PIWI-interacting RNAs' and 'piRNAs' and further scrutinized the articles to collect piRNA-disease association data. These data are highly complex and heterogeneous due to the various types of piRNA identifiers (IDs) and different reference genome versions. We put considerable effort into removing redundancy and anomalies and thus homogenized the data. Finally, we developed the piRDisease database, which incorporates experimentally supported data on piRNAs' relationships with a wide range of diseases. piRDisease (piRDisease v1.0) is a novel, comprehensive, and exclusive database resource, which provides 7939 manually curated associations of 4796 experimentally supported piRNAs involved in 28 diseases. piRDisease facilitates users by providing detailed information on each piRNA in the respective disease, covering experimental support, a brief description, and sequence and location information. Considering piRNAs' role(s) in a wide range of diseases, it is anticipated that a huge amount of data will be produced in the near future. We therefore offer a submission page through which users and researchers can contribute updates to the piRDisease database.


Subjects
Data Curation , Databases, Nucleic Acid , Disease/genetics , RNA, Small Interfering , Humans , RNA, Small Interfering/genetics , RNA, Small Interfering/metabolism
15.
Database (Oxford) ; 20192019 01 01.
Article in English | MEDLINE | ID: mdl-31267135

ABSTRACT

This study proposes a text similarity model to help biocuration efforts of the Conserved Domain Database (CDD). CDD is a curated resource that catalogs annotated multiple sequence alignment models for ancient domains and full-length proteins. These models allow for fast searching and quick identification of conserved motifs in protein sequences via Reverse PSI-BLAST. In addition, CDD curators prepare summaries detailing the function of these conserved domains and specific protein families, based on published peer-reviewed articles. To facilitate information access for database users, it is desirable to specifically identify the referenced articles that support the assertions of curator-composed sentences. Moreover, CDD curators desire an alert system that scans the newly published literature and proposes related articles of relevance to the existing CDD records. Our approach to address these needs is a text similarity method that automatically maps a curator-written statement to candidate sentences extracted from the list of referenced articles, as well as the articles in the PubMed Central database. To evaluate this proposal, we paired CDD description sentences with the top 10 matching sentences from the literature, which were given to curators for review. Through this exercise, we discovered that we were able to map the articles in the reference list to the CDD description statements with an accuracy of 77%. In the dataset that was reviewed by curators, we were able to successfully provide references for 86% of the curator statements. In addition, we suggested new articles for curator review, which were accepted by curators to be added into the reference list at an acceptance rate of 50%. Through this process, we developed a substantial corpus of similar sentences from biomedical articles on protein sequence, structure and function research, which constitute the CDD text similarity corpus. 
This corpus contains 5159 sentence pairs judged for their similarity on a scale from 1 (low) to 5 (high) doubly annotated by four CDD curators. Curator-assigned similarity scores have a Pearson correlation coefficient of 0.70 and an inter-annotator agreement of 85%. To date, this is the largest biomedical text similarity resource that has been manually judged, evaluated and made publicly available to the community to foster research and development of text similarity algorithms.
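A common baseline for the kind of sentence-similarity mapping described is TF-IDF weighting with cosine similarity; a minimal, dependency-free sketch (not the authors' actual model; the sentences and the idf smoothing choice are illustrative):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Smoothed TF-IDF vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    idf = {t: math.log((1 + n) / (1 + d)) + 1.0 for t, d in df.items()}
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["conserved domain families in proteins".split(),
        "protein families with conserved domains".split(),
        "acoustic monitoring of whales".split()]
vecs = tfidf_vectors(docs)
print(cosine(vecs[0], vecs[1]), cosine(vecs[0], vecs[2]))
```

Stemming or lemmatization (so that "protein" and "proteins" match) typically improves such a baseline considerably.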


Subjects
Algorithms , Data Curation , Databases, Protein , Proteins , PubMed , Sequence Alignment , Protein Domains , Proteins/chemistry , Proteins/genetics
16.
Clin Exp Rheumatol ; 37 Suppl 118(3): 90-96, 2019.
Article in English | MEDLINE | ID: mdl-31287405

ABSTRACT

OBJECTIVES: To address the need for automatically assessing the quality of clinical data in terms of accuracy, relevance, conformity, and completeness, through the concise development and application of an automated method which is able to automatically detect problematic fields and match clinical terms under a specific domain. METHODS: The proposed methodology involves the automated construction of three diagnostic reports that summarise valuable information regarding the types and ranges of each term in the dataset, along with the detected outliers, inconsistencies, and missing values, followed by a set of clinically relevant terms based on a reference model which describes the domain knowledge of a disease of interest. RESULTS: A case study was conducted using anonymised data from 250 patients who were diagnosed with primary Sjögren's syndrome (pSS), yielding reliable outcomes that were highlighted for clinical evaluation. Our method successfully identified 28 features with detected outliers and unknown data types, as well as outliers, missing values, similar terms, and inconsistencies within the dataset. The data standardisation method was able to match 76 out of 85 (89.41%) pSS-related terms according to a standard pSS reference model introduced by the clinicians. CONCLUSIONS: Our results confirm the clinical value of the data curation method for improving dataset quality through the precise identification of outliers, missing values, inconsistencies, and similar terms, as well as through the automated detection of pSS-related relevant terms for data standardisation.
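The kind of diagnostic report described, missing-value counts plus outlier detection, can be sketched with IQR-based Tukey fences (the field names and records below are hypothetical; the paper's exact rules are not given in the abstract):

```python
from statistics import quantiles

def quality_report(records, numeric_fields):
    """Report missing values per field and IQR-based outliers for
    numeric fields (Tukey fences at 1.5 * IQR)."""
    fields = {f for r in records for f in r}
    missing = {f: sum(1 for r in records if r.get(f) is None)
               for f in fields}
    outliers = {}
    for f in numeric_fields:
        values = [r[f] for r in records if r.get(f) is not None]
        q1, _, q3 = quantiles(values, n=4)  # quartile cut points
        iqr = q3 - q1
        lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
        outliers[f] = [v for v in values if v < lo or v > hi]
    return missing, outliers

records = [{"age": a} for a in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 100]]
records.append({"age": None})
missing, outliers = quality_report(records, ["age"])
print(missing["age"], outliers["age"])  # 1 [100]
```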


Subjects
Data Curation , Sjogren's Syndrome , Data Accuracy , Humans
17.
Sensors (Basel) ; 19(14)2019 Jul 13.
Article in English | MEDLINE | ID: mdl-31337029

ABSTRACT

This study investigated to what extent multimodal data can be used to detect mistakes during Cardiopulmonary Resuscitation (CPR) training. We complemented the Laerdal QCPR ResusciAnne manikin with the Multimodal Tutor for CPR, a multi-sensor system consisting of a Microsoft Kinect for tracking body position and a Myo armband for collecting electromyogram information. We collected multimodal data from 11 medical students, each of them performing two sessions of two-minute chest compressions (CCs). We gathered in total 5254 CCs that were all labelled according to five performance indicators, corresponding to common CPR training mistakes. Three out of five indicators, CC rate, CC depth and CC release, were assessed automatically by the ResusciAnne manikin. The remaining two, related to arms and body position, were annotated manually by the research team. We trained five neural networks for classifying each of the five indicators. The results of the experiment show that multimodal data can provide accurate mistake detection as compared to the ResusciAnne manikin baseline. We also show that the Multimodal Tutor for CPR can detect additional CPR training mistakes, such as the correct use of arms and body weight. Thus far, these mistakes were identified only by human instructors. Finally, to inform future implementations of the Multimodal Tutor for CPR, we administered a questionnaire to collect user feedback on aspects of CPR training.


Subjects
Cardiopulmonary Resuscitation/education , Computer-Assisted Instruction/methods , Body Weight , Cardiopulmonary Resuscitation/methods , Computer-Assisted Instruction/instrumentation , Data Curation , Databases, Factual , Education, Medical/methods , Equipment Design , Humans , Information Storage and Retrieval , Manikins , Posture , Surveys and Questionnaires , Thorax
18.
Rev. bioét. derecho ; (46): 117-131, jul. 2019.
Article in Portuguese | IBECS | ID: ibc-184855

ABSTRACT


The creation and emergence of new software based on blockchain technology confront the recent GDPR with new questions, as the regulation is criticized for having been drafted with only centralized data-control architectures in mind. Although both the GDPR and blockchain share the goals of increasing transparency and trust in online data exchange, in several respects the tensions between the two are real: certain notions, such as data controller or data processor, fit poorly; certain rights, such as the right to be forgotten or the right to data portability, risk becoming unenforceable; and certain principles, such as data minimization, are hardly compatible with this new technology.



Subjects
Software/ethics , Software/legislation & jurisprudence , Confidentiality , Computer Security/legislation & jurisprudence , Computer Communication Networks , Computer Communication Networks/legislation & jurisprudence , Data Curation/ethics , Electronic Data Processing/ethics , Electronic Data Processing/legislation & jurisprudence
20.
BMC Bioinformatics ; 20(1): 331, 2019 Jun 13.
Article in English | MEDLINE | ID: mdl-31195976

ABSTRACT

BACKGROUND: Principal component analysis (PCA) is frequently used in genomics applications for quality assessment and exploratory analysis in high-dimensional data, such as RNA sequencing (RNA-seq) gene expression assays. Despite the availability of many software packages developed for this purpose, an interactive and comprehensive interface for performing these operations is lacking. RESULTS: We developed the pcaExplorer software package to enhance commonly performed analysis steps with an interactive and user-friendly application, which provides state saving as well as the automated creation of reproducible reports. pcaExplorer is implemented in R using the Shiny framework and exploits data structures from the open-source Bioconductor project. Users can easily generate a wide variety of publication-ready graphs, while assessing the expression data in the different modules available, including a general overview, dimension reduction on samples and genes, as well as functional interpretation of the principal components. CONCLUSION: pcaExplorer is distributed as an R package in the Bioconductor project ( http://bioconductor.org/packages/pcaExplorer/ ), and is designed to assist a broad range of researchers in the critical step of interactive data exploration.


Subjects
Principal Component Analysis , Sequence Analysis, RNA/methods , Software , Base Sequence , Data Curation , Humans , RNA/genetics , Reproducibility of Results
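The core operation that pcaExplorer wraps, finding the directions of greatest variance in centered data, can be illustrated with a minimal, dependency-free sketch. This is an illustrative toy only (pcaExplorer itself is an R/Shiny package; the function name pca_2d and the restriction to two dimensions are assumptions made here for brevity):

```python
import math

def pca_2d(points):
    """Minimal PCA for 2-D points: center the data, build the 2x2 sample
    covariance matrix, and return its eigenvalues (sorted, largest first)
    together with the unit eigenvector of the leading component."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    xs = [p[0] - mx for p in points]
    ys = [p[1] - my for p in points]
    # Sample covariance matrix entries (denominator n - 1)
    sxx = sum(x * x for x in xs) / (n - 1)
    syy = sum(y * y for y in ys) / (n - 1)
    sxy = sum(x * y for x, y in zip(xs, ys)) / (n - 1)
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] via the characteristic polynomial
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc  # l1 >= l2
    # Eigenvector for the leading eigenvalue (diagonal matrix handled separately)
    if abs(sxy) > 1e-12:
        v = (l1 - syy, sxy)
    else:
        v = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(*v)
    return (l1, l2), (v[0] / norm, v[1] / norm)
```

For points lying on the line y = x, all variance falls on the first component, so the second eigenvalue is zero and the leading eigenvector points along (1/√2, 1/√2); real tools such as pcaExplorer perform the same decomposition on a genes-by-samples expression matrix with thousands of dimensions.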