Results 1 - 20 of 23
1.
J Microsc ; 294(3): 350-371, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38752662

ABSTRACT

Bioimage data are generated in diverse research fields throughout the life and biomedical sciences. Their potential for advancing scientific progress via modern, data-driven discovery approaches reaches beyond disciplinary borders. To fully exploit this potential, it is necessary to make bioimaging data (in general, multidimensional microscopy images and image series) FAIR, that is, findable, accessible, interoperable and reusable. These FAIR principles for research data management are now widely accepted in the scientific community and have been adopted by funding agencies, policymakers and publishers. To remain competitive and at the forefront of research, implementing the FAIR principles into daily routines is an essential but challenging task for researchers and research infrastructures. Imaging core facilities, well-established providers of access to imaging equipment and expertise, are in an excellent position to lead this transformation in bioimaging research data management. They are positioned at the intersection of research groups, IT infrastructure providers, the institution's administration, and microscope vendors. Within the framework of German BioImaging - Society for Microscopy and Image Analysis (GerBI-GMB), cross-institutional working groups and third-party funded projects were initiated in recent years to advance the bioimaging community's capability and capacity for FAIR bioimage data management. Here, we provide an imaging-core-facility-centric perspective outlining the experience and current strategies in Germany to facilitate the practical adoption of the FAIR principles, closely aligned with the international bioimaging community. We highlight which tools and services are ready to be implemented and what the future directions for FAIR bioimage data have to offer.


Subjects
Microscopy; Biomedical Research/methods; Data Management/methods; Image Processing, Computer-Assisted/methods; Microscopy/methods
2.
J Med Internet Res ; 26: e53369, 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39116424

ABSTRACT

BACKGROUND: Digitization is expected to improve the secondary use of health care data. The Government of the Kingdom of Saudi Arabia commissioned a project to compile the National Master Plan for Health Data Analytics, while the Government of Estonia commissioned a project to compile the Person-Centered Integrated Hospital Master Plan. OBJECTIVE: This study aims to map these 2 distinct projects' problems, approaches, and outcomes to find the matching elements for reuse in similar cases. METHODS: We assessed both health care systems' abilities for secondary use of health data by exploratory case studies with purposive sampling and data collection via semistructured interviews and documentation review. The collected content was analyzed qualitatively and coded according to a predefined framework. The analytical framework consisted of data purpose, flow, and sharing. The Estonian project used the Health Information Sharing Maturity Model from the Mitre Corporation as an additional analytical framework. The data collection and analysis in the Kingdom of Saudi Arabia took place in 2019 and covered health care facilities, public health institutions, and health care policy. The project in Estonia collected its inputs in 2020 and covered health care facilities, patient engagement, public health institutions, health care financing, health care policy, and health technology innovations. RESULTS: In both cases, the assessments resulted in a set of recommendations focusing on the governance of health care data. In the Kingdom of Saudi Arabia, the health care system consists of multiple isolated sectors, and there is a need for an overarching body coordinating data sets, indicators, and reports at the national level. The National Master Plan of Health Data Analytics proposed a set of organizational agreements for proper stewardship. Despite Estonia's national Digital Health Platform, the requirements remain uncoordinated between various data consumers. We recommended reconfiguring the stewardship of the national health data to include multipurpose data use in the scope of interoperability standardization. CONCLUSIONS: Proper data governance is the key to improving the secondary use of health data at the national level. The data flows from data providers to data consumers should be coordinated by overarching stewardship structures and supported by interoperable data custodians.


Subjects
Delivery of Health Care; Saudi Arabia; Estonia; Humans; Information Dissemination/methods
3.
Plant J ; 111(2): 335-347, 2022 07.
Article in English | MEDLINE | ID: mdl-35535481

ABSTRACT

The research data life cycle from project planning to data publishing is an integral part of current research. Until the last decade, researchers were responsible for all associated phases in addition to the actual research and were assisted only at certain points by IT specialists or bioinformaticians. Starting with advances in sequencing, the automation of analytical methods in all life science fields, including plant phenotyping, has led to ever-increasing amounts of ever more complex data. The tasks associated with these challenges now often exceed the expertise of and infrastructure available to scientists, leading to an increased risk of data loss over time. The IPK Gatersleben has one of the world's largest germplasm collections and two decades of experience in crop plant research data management. In this article we show how challenges in modern, data-driven research can be addressed by data stewards. Based on concrete use cases, data management processes and best practices from plant phenotyping, we describe which expertise and skills are required and how data stewards as integral actors can enhance the quality of a necessary digital transformation in progressive research.


Subjects
Big Data; Phenomics; Plants; Crops, Agricultural/genetics; Plants/genetics
4.
J Biomed Inform ; 140: 104337, 2023 04.
Article in English | MEDLINE | ID: mdl-36935012

ABSTRACT

Data stewardship is a term that is understood in heterogeneous ways. In recent organisational developments and efforts to build infrastructures and hire professional staff for research data management in various scientific fields in Europe, data stewardship is understood as mainly aiming at optimising data management in line with the FAIR principles (findability, accessibility, interoperability, reusability) for purposes of reuse in the interests of the scientific community and the public. In addition, especially in the health and biomedical sciences, some understandings of data stewardship focus mainly on the responsibility to respect the informational rights of data subjects. Following on from these different understandings and from recent developments to include ever more stakeholders in data stewardship, we propose a comprehensive understanding of data stewardship. According to this comprehensive understanding, data stewardship includes responsibilities towards all pertinent stakeholders, whose legitimate rights and interests must be equally considered and respected in order to build and maintain an efficient, trusted and fair data ecosystem. We also point out some of the practical challenges implied in such a comprehensive understanding.


Subjects
Data Management; Ecosystem; Humans; Europe
5.
J Med Internet Res ; 25: e45013, 2023 08 28.
Article in English | MEDLINE | ID: mdl-37639292

ABSTRACT

BACKGROUND: Thorough data stewardship is a key enabler of comprehensive health research. Processes such as data collection, storage, access, sharing, and analytics require researchers to follow elaborate data management strategies properly and consistently. Studies have shown that findable, accessible, interoperable, and reusable (FAIR) data lead to improved data sharing in different scientific domains. OBJECTIVE: This scoping review identifies and discusses concepts, approaches, implementation experiences, and lessons learned in FAIR initiatives in health research data. METHODS: The Arksey and O'Malley stage-based methodological framework for scoping reviews was applied. PubMed, Web of Science, and Google Scholar were searched to access relevant publications. Articles written in English, published between 2014 and 2020, and addressing FAIR concepts or practices in the health domain were included. The 3 data sources were deduplicated using reference management software. Two independent authors reviewed the eligibility of each article based on defined inclusion and exclusion criteria. A charting tool was used to extract information from the full-text papers. The results were reported using the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. RESULTS: A total of 2.18% (34/1561) of the screened articles were included in the final review. The authors reported FAIRification approaches, which include interpolation, inclusion of comprehensive data dictionaries, repository design, semantic interoperability, ontologies, data quality, linked data, and requirement gathering for FAIRification tools. Challenges and mitigation strategies associated with FAIRification, such as high setup costs, data politics, technical and administrative issues, privacy concerns, and difficulties encountered in sharing health data given its sensitive nature, were also reported. We found various workflows, tools, and infrastructures designed by different groups worldwide to facilitate the FAIRification of health research data. We also uncovered a wide range of problems and questions that researchers are trying to address by using the different workflows, tools, and infrastructures. Although the concept of FAIR data stewardship in the health research domain is relatively new, almost all continents have been reached by at least one network trying to achieve health data FAIRness. Documented outcomes of FAIRification efforts include peer-reviewed publications, improved data sharing, facilitated data reuse, return on investment, and new treatments. Successful FAIRification of data has informed the management and prognosis of various diseases such as cancer, cardiovascular diseases, and neurological diseases. Efforts to FAIRify data on a wider variety of diseases have been ongoing since the COVID-19 pandemic. CONCLUSIONS: This work summarises projects, tools, and workflows for the FAIRification of health research data. The comprehensive review shows that implementing the FAIR concept in health data stewardship carries the promise of improved research data management and transparency in the era of big data and open research publishing. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.2196/22505.


Subjects
COVID-19; Cardiovascular Diseases; Humans; Pandemics; Big Data; Data Accuracy
6.
Ergonomics ; 66(11): 1782-1799, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38054452

ABSTRACT

Participatory data stewardship (PDS) empowers individuals to shape and govern their data via responsible collection and use. As artificial intelligence (AI) requires massive amounts of data, research must assess what factors predict consumers' willingness to provide their data to AI. This mixed-methods study applied the extended Technology Acceptance Model (TAM) with additional predictors of trust and subjective norms. Participants' data donation profile was also measured to assess the influence of individuals' social duty, understanding of the purpose and guilt. Participants (N = 322) completed an experimental survey. Individuals were willing to provide data to AI via PDS when they believed it was their social duty, understood the purpose and trusted AI. However, the TAM may not be a complete model for assessing user willingness. This study establishes that individuals value the importance of trusting and comprehending the broader societal impact of AI when providing their data to AI.

Practitioner summary: To build responsible and representative AI, individuals are needed to participate in data stewardship. The factors driving willingness to participate in such methods were studied via an online survey. Trust, social duty and understanding the purpose significantly predicted willingness to provide data to AI via participatory data stewardship.


Subjects
Artificial Intelligence; Technology; Humans; Trust
7.
Biol Chem ; 403(8-9): 717-730, 2022 07 26.
Article in English | MEDLINE | ID: mdl-35357794

ABSTRACT

Enzyme reactions are highly dependent on reaction conditions. To ensure reproducibility of enzyme reaction parameters, experiments need to be carefully designed and kinetic modeling meticulously executed. Furthermore, to enable quality control of enzyme reaction parameters, the experimental conditions, the modeling process as well as the raw data need to be reported comprehensively. By taking these steps, enzyme reaction parameters can be open and FAIR (findable, accessible, interoperable, re-usable) as well as repeatable, replicable and reproducible. This review discusses these requirements and provides a practical guide to designing initial rate experiments for the determination of enzyme reaction parameters and gives an open, FAIR and re-editable example of the kinetic modeling of an enzyme reaction. Both the guide and example are scripted with Python in Jupyter Notebooks and are publicly available (https://fairdomhub.org/investigations/483/snapshots/1). Finally, the prerequisites of automated data analysis and machine learning algorithms are briefly discussed to provide further motivation for the comprehensive, open and FAIR reporting of enzyme reaction parameters.
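The abstract above pairs kinetic modeling with scripted, shareable analysis. As a minimal, self-contained sketch in the same spirit (this is not the authors' published notebook, and the substrate concentrations and rates below are synthetic), Michaelis-Menten parameters can be estimated from initial-rate data via a Lineweaver-Burk linearisation:

```python
# Estimate Michaelis-Menten parameters (Vmax, Km) from initial-rate data
# using the Lineweaver-Burk linearisation: 1/v = (Km/Vmax)*(1/S) + 1/Vmax.
# All data below are synthetic and purely illustrative.

def fit_michaelis_menten(substrate, rates):
    """Ordinary least-squares fit of the double-reciprocal plot."""
    xs = [1.0 / s for s in substrate]   # 1/S
    ys = [1.0 / v for v in rates]       # 1/v
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    vmax = 1.0 / intercept              # intercept = 1/Vmax
    km = slope * vmax                   # slope = Km/Vmax
    return vmax, km

# Noise-free synthetic initial rates generated with Vmax = 2.0, Km = 0.5
S = [0.1, 0.25, 0.5, 1.0, 2.0, 5.0]
v = [2.0 * s / (0.5 + s) for s in S]
vmax, km = fit_michaelis_menten(S, v)   # recovers Vmax = 2.0, Km = 0.5
```

In practice, direct nonlinear regression on the untransformed rate equation is preferred over double-reciprocal fits because the transformation distorts error weighting; the linearised form is used here only to keep the sketch dependency-free.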


Subjects
Algorithms; Enzymes; Enzymes/chemistry; Kinetics; Reproducibility of Results
8.
Pediatr Radiol ; 52(11): 2111-2119, 2022 10.
Article in English | MEDLINE | ID: mdl-35790559

ABSTRACT

The integration of human and machine intelligence promises to profoundly change the practice of medicine. The rapidly increasing adoption of artificial intelligence (AI) solutions highlights its potential to streamline physician work and optimize clinical decision-making, also in the field of pediatric radiology. Large imaging databases are necessary for training, validating and testing these algorithms. To better promote data accessibility in multi-institutional AI-enabled radiologic research, these databases centralize the large volumes of data required to effect accurate models and outcome predictions. However, such undertakings must consider the sensitivity of patient information and therefore utilize requisite data governance measures to safeguard data privacy and security, to recognize and mitigate the effects of bias and to promote ethical use. In this article we define data stewardship and data governance, review their key considerations and applicability to radiologic research in the pediatric context, and consider the associated best practices along with the ramifications of poorly executed data governance. We summarize several adaptable data governance frameworks and describe strategies for their implementation in the form of distributed and centralized approaches to data management.


Subjects
Artificial Intelligence; Radiology; Algorithms; Child; Databases, Factual; Humans; Radiologists; Radiology/methods
10.
Neuroinformatics ; 21(3): 589-600, 2023 07.
Article in English | MEDLINE | ID: mdl-37344699

ABSTRACT

The sharing of open-access neuroimaging data has increased significantly during the last few years. Sharing neuroimaging data is crucial to accelerating scientific advancement, particularly in the field of neuroscience. A number of big initiatives that will increase the amount of available neuroimaging data are currently in development. The Big Brain Data Initiative project was started by Universiti Sains Malaysia as the first neuroimaging data repository platform in Malaysia for the purpose of data sharing. To ensure that the neuroimaging data in this project are accessible, usable, and secure, and to offer users high-quality data that can be consistently accessed, we first established good data stewardship practices. We then developed MyneuroDB, an online repository database system for data sharing. Here, we describe the Big Brain Data Initiative and MyneuroDB, a data repository that provides the ability to openly share neuroimaging data, currently including magnetic resonance imaging (MRI), electroencephalography (EEG), and magnetoencephalography (MEG), following the FAIR principles for data sharing.


Subjects
Brain; Magnetic Resonance Imaging; Malaysia; Databases, Factual; Brain/diagnostic imaging; Neuroimaging; Information Dissemination
11.
Int J Popul Data Sci ; 8(4): 2142, 2023.
Article in English | MEDLINE | ID: mdl-38419825

ABSTRACT

Introduction: Around the world, many organisations are working on ways to increase the use, sharing, and reuse of person-level data for research, evaluation, planning, and innovation while ensuring that data are secure and privacy is protected. As a contribution to broader efforts to improve data governance and management, in 2020 members of our team published 12 minimum specification essential requirements (min specs) to provide practical guidance for organisations establishing or operating data trusts and other forms of data infrastructure. Approach and Aims: We convened an international team, consisting mostly of participants from Canada and the United States of America, to test and refine the original 12 min specs. Twenty-three (23) data-focused organisations and initiatives recorded the various ways they address the min specs. Sub-teams analysed the results, used the findings to make improvements to the min specs, and identified materials to support organisations/initiatives in addressing the min specs. Results: Analyses and discussion led to an updated set of 15 min specs covering five categories: one min spec for Legal, five for Governance, four for Management, two for Data Users, and three for Stakeholder & Public Engagement. Multiple changes were made to make the min specs language more technically complete and precise. The updated set of 15 min specs has been integrated into a Canadian national standard that, to our knowledge, is the first to include requirements for public engagement and Indigenous Data Sovereignty. Conclusions: The testing and refinement of the min specs led to significant additions and improvements. The min specs helped the 23 organisations/initiatives involved in this project communicate and compare how they achieve responsible and trustworthy data governance and management. By extension, the min specs, and the Canadian national standard based on them, are likely to be useful for other data-focused organisations and initiatives.


Subjects
Privacy; Humans; United States; Canada
12.
Front Big Data ; 5: 883341, 2022.
Article in English | MEDLINE | ID: mdl-35647536

ABSTRACT

Although all the technical components supporting fully orchestrated Digital Twins (DT) currently exist, what remains missing is a conceptual clarification and analysis of a more generalized concept of a DT that is made FAIR, that is, universally machine actionable. This methodological overview is a first step toward this clarification. We present a review of previously developed semantic artifacts and how they may be used to compose a higher-order data model referred to here as a FAIR Digital Twin (FDT). We propose an architectural design to compose, store and reuse FDTs supporting data intensive research, with emphasis on privacy by design and their use in GDPR compliant open science.

13.
Front Big Data ; 5: 888384, 2022.
Article in English | MEDLINE | ID: mdl-35923558

ABSTRACT

As a society, we need to become more sophisticated in assessing and addressing data asymmetries-and their resulting political and economic power inequalities-particularly in the realm of open science, research, and development. This article seeks to start filling the analytical gap regarding data asymmetries globally, with a specific focus on the asymmetrical availability of privately-held data for open science, and a look at current efforts to address these data asymmetries. It provides a taxonomy of asymmetries, as well as both their societal and institutional impacts. Moreover, this contribution outlines a set of solutions that could provide a toolbox for open science practitioners and data demand-side actors that stand to benefit from increased access to data. The concept of data liquidity (and portability) is explored at length in connection with efforts to generate an ecosystem of responsible data exchanges. We also examine how data holders and demand-side actors are experimenting with new and emerging operational models and governance frameworks for purpose-driven, cross-sector data collaboratives that connect previously siloed datasets. Key solutions discussed include professionalizing and re-imagining data steward roles and functions (i.e., individuals or groups who are tasked with managing data and their ethical and responsible reuse within organizations). We present these solutions through case studies on notable efforts to address science data asymmetries. We examine these cases using a repurposable analytical framework that could inform future research. We conclude with recommended actions that could support the creation of an evidence base on work to address data asymmetries and unlock the public value of greater science data liquidity and responsible reuse.

14.
Nanomaterials (Basel) ; 11(6), 2021 Jun 08.
Article in English | MEDLINE | ID: mdl-34201308

ABSTRACT

In this paper we describe the pragmatic approach of initiating, designing and implementing the Data Management Plan (DMP) and the data FAIRification process in the multidisciplinary Horizon 2020 nanotechnology project, Anticipating Safety Issues at the Design Stage of NAno Product Development (ASINA). We briefly describe the general DMP requirements, emphasizing that the initial steps in the direction towards data FAIRification must be conceptualized and visualized in a systematic way. We demonstrate the use of a generic questionnaire to capture primary data and metadata description from our consortium (data creators/experimentalists and data analysts/modelers). We then display the interactive process with external FAIR data initiatives (data curators/quality assessors), regarding guidance for data and metadata capturing and future integration into repositories. After the preliminary data capturing and FAIRification template is formed, the inner-communication process begins between the partners, which leads to developing case-specific templates. This paper assists future data creators, data analysts, stewards and shepherds engaged in the multi-faceted data shepherding process, in any project, by providing a roadmap, demonstrated in the case of ASINA.

15.
JMIR Res Protoc ; 10(2): e22505, 2021 Feb 02.
Article in English | MEDLINE | ID: mdl-33528373

ABSTRACT

BACKGROUND: Data stewardship is an essential driver of research and clinical practice. Data collection, storage, access, sharing, and analytics are dependent on the proper and consistent use of data management principles among the investigators. Since 2016, the FAIR (findable, accessible, interoperable, and reusable) guiding principles for research data management have been resonating in scientific communities. Enabling data to be findable, accessible, interoperable, and reusable is currently believed to strengthen data sharing, reduce duplicated efforts, and move toward harmonization of data from heterogeneous unconnected data silos. FAIR initiatives and implementation trends are rising in different facets of scientific domains. It is important to understand the concepts and implementation practices of the FAIR data principles as applied to human health data by studying the flourishing initiatives and implementation lessons relevant to improved health research, particularly for data sharing during the coronavirus pandemic. OBJECTIVE: This paper aims to conduct a scoping review to identify concepts, approaches, implementation experiences, and lessons learned in FAIR initiatives in the health data domain. METHODS: The Arksey and O'Malley stage-based methodological framework for scoping reviews will be used for this review. PubMed, Web of Science, and Google Scholar will be searched to access relevant primary and grey publications. Articles written in English and published from 2014 onwards with FAIR principle concepts or practices in the health domain will be included. Duplicates among the 3 data sources will be removed using reference management software. The articles will then be exported to systematic review management software. At least two independent authors will review the eligibility of each article based on defined inclusion and exclusion criteria. A pretested charting tool will be used to extract relevant information from the full-text papers. Qualitative thematic synthesis methods will be employed by coding and developing themes. Themes will be derived from the research questions and the contents of the included papers. RESULTS: The results will be reported using the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-analyses Extension for Scoping Reviews) reporting guidelines. We anticipate finalizing the manuscript for this work in 2021. CONCLUSIONS: We believe comprehensive information about the FAIR data principles, initiatives, implementation practices, and lessons learned in the FAIRification process in the health domain is paramount to supporting both evidence-based clinical practice and research transparency in the era of big data and open research publishing. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): PRR1-10.2196/22505.

16.
J Am Med Inform Assoc ; 28(7): 1591-1599, 2021 07 14.
Article in English | MEDLINE | ID: mdl-33496785

ABSTRACT

OBJECTIVE: Data quality (DQ) must be consistently defined in context. The attributes, metadata, and context of longitudinal real-world data (RWD) have not been formalized for quality improvement across the data production and curation life cycle. We sought to complete a literature review on DQ assessment frameworks, indicators and tools for research, public health, service, and quality improvement across the data life cycle. MATERIALS AND METHODS: The review followed PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Databases from health, physical and social sciences were used: Cinahl, Embase, Scopus, ProQuest, Emcare, PsycINFO, Compendex, and Inspec. Embase was used instead of PubMed (an interface to search MEDLINE) because it includes all MeSH (Medical Subject Headings) terms used and journals in MEDLINE as well as additional unique journals and conference abstracts. A combined data life cycle and quality framework guided the search of published and gray literature for DQ frameworks, indicators, and tools. At least 2 authors independently identified articles for inclusion and extracted and categorized DQ concepts and constructs. All authors discussed findings iteratively until consensus was reached. RESULTS: The 120 included articles yielded concepts related to contextual (data source, custodian, and user) and technical (interoperability) factors across the data life cycle. Contextual DQ subcategories included relevance, usability, accessibility, timeliness, and trust. Well-tested computable DQ indicators and assessment tools were also found. CONCLUSIONS: A DQ assessment framework that covers intrinsic, technical, and contextual categories across the data life cycle enables assessment and management of RWD repositories to ensure fitness for purpose. Balancing security, privacy, and FAIR principles requires trust and reciprocity, transparent governance, and organizational cultures that value good documentation.
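Computable DQ indicators of the kind the review found can be quite small in practice. The sketch below illustrates two of the contextual subcategories named above, completeness and timeliness, over a toy record set; the field names, codes, and 365-day freshness threshold are hypothetical and not taken from any reviewed framework:

```python
from datetime import date

# Sketch of computable data-quality (DQ) indicators: completeness and
# timeliness over a small record set. Field names and the 365-day
# threshold are hypothetical, chosen only for illustration.

def completeness(records, field):
    """Fraction of records with a non-missing value for `field`."""
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records)

def timeliness(records, field, as_of, max_age_days=365):
    """Fraction of records whose `field` date is within `max_age_days` of `as_of`."""
    fresh = sum(1 for r in records
                if (as_of - r[field]).days <= max_age_days)
    return fresh / len(records)

records = [
    {"id": 1, "diagnosis": "I10", "updated": date(2023, 6, 1)},
    {"id": 2, "diagnosis": "",    "updated": date(2020, 1, 15)},
    {"id": 3, "diagnosis": "E11", "updated": date(2023, 9, 30)},
]
c = completeness(records, "diagnosis")                    # 2 of 3 filled
t = timeliness(records, "updated", as_of=date(2023, 12, 31))
```

In a fuller framework, such checks would be parameterised per data source and reported alongside intrinsic measures such as validity and consistency across the data life cycle.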


Subjects
Data Accuracy; Quality Improvement; Animals; Life Cycle Stages
17.
Patterns (N Y) ; 1(1): 100004, 2020 Apr 10.
Article in English | MEDLINE | ID: mdl-33205081

ABSTRACT

Entropy is the natural tendency for decline toward disorder over time. Information entropy is the decline in data, information, and understanding that occurs after data are used and results are published. As time passes, the information slowly fades into obscurity. Data discovery is not enough to slow this process. High-quality metadata that support understanding and reuse across domains are a critical antidote to information entropy, particularly as they support reuse of the data, adding to community knowledge and wisdom. Ensuring the creation and preservation of these metadata is a responsibility shared across the entire data life cycle, from creation through analysis and publication to archiving and reuse. Repositories can play an important role in this process by augmenting metadata through time with persistent identifiers and the connections they facilitate. Data providers need to work with repositories to encourage metadata evolution as new capabilities and connections emerge.

18.
Eur J Psychotraumatol ; 11(1): 1739885, 2020.
Article in English | MEDLINE | ID: mdl-32341765

ABSTRACT

This editorial argues that it is time for the traumatic stress field to join the growing international movement towards Findable, Accessible, Interoperable, and Re-usable (FAIR) research data, and that we are well-positioned to do so. The field has a huge, largely untapped resource in the enormous number of rich potentially re-usable datasets that are not currently shared or preserved. We have several promising shared data resources created via international collaborative efforts by traumatic stress researchers, but we do not yet have common standards for data description, sharing, or preservation. And, despite the promise of novel findings from data sharing and re-use, there are a number of barriers to researchers' adoption of FAIR data practices. We present a vision for the future of FAIR traumatic stress data, and a call to action for the traumatic stress research community and individual researchers and research teams to help achieve this vision.



19.
Genetics ; 213(4): 1189-1196, 2019 12.
Article in English | MEDLINE | ID: mdl-31796553

ABSTRACT

Model organisms are essential experimental platforms for discovering gene functions, defining protein and genetic networks, uncovering functional consequences of human genome variation, and for modeling human disease. For decades, researchers who use model organisms have relied on Model Organism Databases (MODs) and the Gene Ontology Consortium (GOC) for expertly curated annotations, and for access to integrated genomic and biological information obtained from the scientific literature and public data archives. Through the development and enforcement of data and semantic standards, these genome resources provide rapid access to the collected knowledge of model organisms in human readable and computation-ready formats that would otherwise require countless hours for individual researchers to assemble on their own. Since their inception, the MODs for the predominant biomedical model organisms [Mus sp (laboratory mouse), Saccharomyces cerevisiae, Drosophila melanogaster, Caenorhabditis elegans, Danio rerio, and Rattus norvegicus] along with the GOC have operated as a network of independent, highly collaborative genome resources. In 2016, these six MODs and the GOC joined forces as the Alliance of Genome Resources (the Alliance). By implementing shared programmatic access methods and data-specific web pages with a unified "look and feel," the Alliance is tackling barriers that have limited the ability of researchers to easily compare common data types and annotations across model organisms. To adapt to the rapidly changing landscape for evaluating and funding core data resources, the Alliance is building a modern, extensible, and operationally efficient "knowledge commons" for model organisms using shared, modular infrastructure.


Subjects
Databases, Genetic; Ecosystem; Genome; Models, Biological; Gene Ontology
20.
Data Sci J ; 18, 2019.
Article in English | MEDLINE | ID: mdl-34764990

ABSTRACT

Data have become the new global currency, and a powerful force in making decisions and wielding power. As the world engages with open data, big data reuse, and data linkage, what do data-driven futures look like for communities plagued by data inequities? Indigenous data stakeholders and non-Indigenous allies have explored this question over the last three years in a series of meetings through the Research Data Alliance (RDA). Drawing on RDA and other gatherings, and a systematic scan of literature and practice, we consider possible answers to this question in the context of Indigenous peoples vis-à-vis two emerging concepts: Indigenous data sovereignty and Indigenous data governance. Specifically, we focus on the data challenges facing Native nations and the intersection of data, tribal sovereignty, and power. Indigenous data sovereignty is the right of each Native nation to govern the collection, ownership, and application of the tribe's data. Native nations exercise Indigenous data sovereignty through the interrelated processes of Indigenous data governance and decolonizing data. This paper explores the implications of Indigenous data sovereignty and Indigenous data governance for Native nations and others. We argue for the repositioning of authority over Indigenous data back to Indigenous peoples. At the same time, we recognize that there are significant obstacles to rebuilding effective Indigenous data systems and the process will require resources, time, and partnerships among Native nations, other governments, and data agents.
