Results 1 - 20 of 1,575
1.
PLoS One ; 19(6): e0306100, 2024.
Article in English | MEDLINE | ID: mdl-38917182

ABSTRACT

Making data FAIR (findable, accessible, interoperable, reusable) has become the recurring theme behind many research data management efforts. dtool is a lightweight data management tool that packages metadata with immutable data to promote accessibility, interoperability, and reproducibility. Each dataset is self-contained and does not require metadata to be stored in a centralised system. This decentralised approach means that finding datasets can be difficult. dtool's lookup server, dserver for short, as defined by a REST API, makes dtool datasets findable, hence rendering the dtool ecosystem fit for a FAIR data management world. Its simplicity, modularity, accessibility and standardisation via an API distinguish dtool and dserver from other solutions and enable them to serve as a common denominator for cross-disciplinary research data management. The dtool ecosystem bridges the gap between standardisation-free data management by individuals and FAIR platform solutions with rigid metadata requirements.
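The self-contained packaging described above can be sketched in a few lines (an illustrative sketch only, not dtool's actual on-disk format or API; the directory layout and function names are hypothetical):

```python
import hashlib
import json
from pathlib import Path

def freeze_dataset(root, name, items, extra_metadata=None):
    """Package data items with their metadata in one self-contained directory.

    `items` maps relative file names to byte content. Alongside the data,
    a manifest records a SHA-256 hash per item so later readers can verify
    that the frozen data is unchanged -- no central metadata store needed.
    """
    base = Path(root) / name
    data_dir = base / "data"
    data_dir.mkdir(parents=True, exist_ok=True)

    manifest = {}
    for rel_name, content in items.items():
        (data_dir / rel_name).write_bytes(content)
        manifest[rel_name] = hashlib.sha256(content).hexdigest()

    metadata = {"name": name, "items": manifest}
    metadata.update(extra_metadata or {})
    (base / "metadata.json").write_text(json.dumps(metadata, indent=2))
    return base

def verify_dataset(base):
    """Re-hash each item and compare against the frozen manifest."""
    metadata = json.loads((Path(base) / "metadata.json").read_text())
    for rel_name, digest in metadata["items"].items():
        content = (Path(base) / "data" / rel_name).read_bytes()
        if hashlib.sha256(content).hexdigest() != digest:
            return False
    return True
```

Because each dataset carries its own metadata and integrity manifest, a separate lookup service (the role dserver plays) only needs to index these self-describing packages to make them findable.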


Subject(s)
Software , Data Management/methods , Metadata , Ecosystem , Reproducibility of Results , Internet
2.
J Med Libr Assoc ; 112(1): 42-47, 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38911529

ABSTRACT

Background: By defining search strategies and related database exports as code/scripts and data, librarians and information professionals can expand the mandate of research data management (RDM) infrastructure to include this work. This new initiative aimed to create a space in McGill University's institutional data repository for our librarians to deposit and share their search strategies for knowledge syntheses (KS). Case Presentation: The authors, a health sciences librarian and an RDM specialist, created a repository collection of librarian-authored KS searches in McGill University's Borealis Dataverse collection. We developed and hosted a half-day "Dataverse-a-thon" where we worked with a team of health sciences librarians to develop a standardized KS data management plan (DMP), search reporting documentation, Dataverse software training, and how-to guidance for the repository. Conclusion: In addition to better documentation and tracking of KS searches at our institution, the KS Dataverse collection enables sharing of searches among colleagues, with discoverable metadata fields for searching within deposited searches. While the initial creation of the DMP and documentation took about six hours, each subsequent deposit of a search strategy into the institutional data repository requires minimal effort (5-10 minutes on average per deposit). The Dataverse collection also empowers librarians to retain intellectual ownership over search strategies as valuable stand-alone research outputs and raises the visibility of their labor. Overall, institutional data repositories provide specific benefits in facilitating compliance both with PRISMA-S guidance and with RDM best practices.


Subject(s)
Information Storage and Retrieval , Humans , Information Storage and Retrieval/methods , Information Dissemination/methods , Data Management/methods , Libraries, Medical/organization & administration , Librarians/statistics & numerical data
3.
Health Informatics J ; 30(2): 14604582241259336, 2024.
Article in English | MEDLINE | ID: mdl-38848696

ABSTRACT

Keeping track of data semantics and data changes in databases is essential to support retrospective studies and the reproducibility of longitudinal clinical analysis by preventing false conclusions from being drawn from outdated data. A knowledge model combined with a temporal model plays an essential role in organizing the data and improving query expressiveness across time and multiple institutions. This paper presents a modelling framework for temporal relational databases using an ontology to derive a shareable and interoperable data model. The framework is based on OntoRela, an ontology-driven database modelling approach, and the Unified Historicization Framework, a temporal database modelling approach. The method was applied to hospital organizational structures to show the impact of tracking organizational changes on data quality assessment, healthcare activities and data access rights. The paper demonstrated the usefulness of an ontology to provide a formal, interoperable, and reusable definition of entities and their relationships, as well as the adequacy of the temporal database to store, trace, and query data over time.


Subject(s)
Databases, Factual , Humans , Hospital Administration/methods , Data Management/methods
4.
BMC Bioinformatics ; 25(1): 210, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38867185

ABSTRACT

BACKGROUND: In the realm of biomedical research, the growing volume and diversity of data have escalated the demand for statistical analysis, which is indispensable for synthesizing, interpreting, and publishing data. Hence, the need for accessible analysis tools has drastically increased. StatiCAL emerges as a user-friendly solution, enabling researchers to conduct basic analyses without extensive programming expertise. RESULTS: StatiCAL includes diverse functionalities: data management, visualization of variables, and statistical analysis. Data management functionalities allow the user to freely add or remove variables, select sub-populations, and visualise the selected data to better perform the analysis. With this tool, users can freely perform statistical analyses such as descriptive, graphical, univariate, and multivariate analysis. All of this can be done without the need to learn R coding, as the software is a graphical user interface where every action can be performed by clicking a button. CONCLUSIONS: StatiCAL represents a valuable contribution to the field of biomedical research. By being open access and providing an intuitive interface with robust features, StatiCAL allows researchers to gain autonomy in conducting their projects.


Subject(s)
Biomedical Research , Software , User-Computer Interface , Computational Biology/methods , Data Management/methods , Data Interpretation, Statistical
5.
Health Informatics J ; 30(2): 14604582241262961, 2024.
Article in English | MEDLINE | ID: mdl-38881290

ABSTRACT

Objectives: This study aims to address the critical challenges of data integrity, accuracy, consistency, and precision in the application of electronic medical record (EMR) data within the healthcare sector, particularly within the context of Chinese medical information data management. The research seeks to propose a solution in the form of a medical metadata governance framework that is efficient and suitable for clinical research and transformation. Methods: The article begins by outlining the background of medical information data management and reviews the advancements in artificial intelligence (AI) technology relevant to the field. It then introduces the "Service, Patient, Regression, base/Away, Yeast" (SPRAY)-type AI application as a case study to illustrate the potential of AI in EMR data management. Results: The research identifies the scarcity of scientific research on the transformation of EMR data in Chinese hospitals and proposes a medical metadata governance framework as a solution. This framework is designed to achieve scientific governance of clinical data by integrating metadata management and master data management, grounded in clinical practices, medical disciplines, and scientific exploration. Furthermore, it incorporates an information privacy security architecture to ensure data protection. Conclusion: The proposed medical metadata governance framework, supported by AI technology, offers a structured approach to managing and transforming EMR data into valuable scientific research outcomes. This framework provides guidance for the identification, cleaning, mining, and deep application of EMR data, thereby addressing the bottlenecks currently faced in the healthcare scenario and paving the way for more effective clinical research and data-driven decision-making.


Subject(s)
Artificial Intelligence , Electronic Health Records , Artificial Intelligence/trends , China , Humans , Electronic Health Records/trends , Data Management/methods , Metadata
6.
J Health Organ Manag ; ahead-of-print(ahead-of-print)2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38865114

ABSTRACT

PURPOSE: Norway, like other welfare states, seeks to leverage data to transform its pressured public healthcare system. While managers will be central to doing so, we lack knowledge about how specifically they would do so and what constraints and expectations they operate under. Public sources, like the Norwegian policy documents investigated here, provide important backdrops against which such managerial work emerges. This article therefore aims to analyze how key Norwegian policy documents construe data use in health management. DESIGN/METHODOLOGY/APPROACH: We analyzed five notable policy documents using a "practice-oriented" framework, considering these as arenas for "organizing visions" (OVs) about managerial use of data in healthcare organizations. This framework considers documents as not just texts that comment on a topic but as discursive tools that formulate, negotiate and shape issues of national importance, such as expectations about data use in health management. FINDINGS: The OVs we identify anticipate a bold future for health management, where data use is supported through interconnected information systems that provide relevant information on demand. These OVs are similar to discourse on "evidence-based management," but differ in important ways. Managers are consistently framed as key stakeholders that can benefit from using secondary data, but this requires better data integration across the health system. Despite forward-looking OVs, we find considerable ambiguity regarding the practical, social and epistemic dimensions of data use in health management. Our analysis calls for a reframing, by moving away from the hype of "data-driven" health management toward an empirically-oriented, "data-centric" approach that recognizes the situated and relational nature of managerial work on secondary data. 
ORIGINALITY/VALUE: By exploring OVs in the Norwegian health policy landscape, this study adds to our growing understanding of expectations towards healthcare managers' use of data. Given Norway's highly digitized health system, our analysis has relevance for health services in other countries.


Subject(s)
Health Policy , Norway , Humans , Data Management
7.
J Allied Health ; 53(2): e77-e91, 2024.
Article in English | MEDLINE | ID: mdl-38834346

ABSTRACT

BACKGROUND: Data management (DM) systems represent an opportunity for innovation in education and data-driven decision-making (DDDM) in allied health education. Understanding clinical education (CE) DM systems in entry-level physical therapy (PT) education programs could provide valuable insight into structure and operation and may represent opportunities to address CE challenges. The purpose of this study is to describe how PT programs are using CE DM systems, to inform recommendations for CE DM and support knowledge sharing and DDDM. SUBJECTS: CE faculty and administrators were recruited from entry-level PT education programs to participate in a cross-sectional survey. METHODS: The authors designed a novel survey which included demographics and use of CE DM systems. Descriptive statistics and content analysis of narrative data were used to examine responses. RESULTS: The survey was distributed to 220 academic PT programs in June 2021, with 111 respondents (50% response rate). Respondents use multiple systems to complete CE tasks (e.g., placement process, on-boarding, agreement tracking, and as a CE site database). Forty-three percent (n=47) use one system; 76% (n=35) of those use the same Software-as-a-Service vendor. Eighty-six percent (n=96) are satisfied with their current CE DM system. Respondents enter data related to CE site information, CE environment, length of the CE experience, and accreditation-required clinical instructor information. Ninety-four percent (n=93) and 70% (n=70) extract data to make decisions about the placement process and curriculum, respectively. CONCLUSION: While variability across CE DM systems presents a challenge, survey respondents indicated common practices related to functionality, data entry, and extraction. Clinical education DM systems house critical data to address challenges in CE. Strategies to improve accessibility and use of this data to support DDDM should be explored.


Subject(s)
Data Management , Humans , Cross-Sectional Studies , Physical Therapy Specialty/education , Surveys and Questionnaires , Physical Therapists/education , Male , Female
8.
Sci Data ; 11(1): 622, 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38871749

ABSTRACT

The demand for open data and open science is on the rise, fueled by expectations from the scientific community, calls to increase transparency and reproducibility in research findings, and developments such as the Final Data Management and Sharing Policy from the U.S. National Institutes of Health and a memorandum on increasing public access to federally funded research, issued by the U.S. Office of Science and Technology Policy. This paper explores the pivotal role of data repositories in biomedical research and open science, emphasizing their importance in managing, preserving, and sharing research data. Our objective is to familiarize readers with the functions of data repositories, set expectations for their services, and provide an overview of methods to evaluate their capabilities. The paper serves to introduce fundamental concepts and community-based guiding principles and aims to equip researchers, repository operators, funders, and policymakers with the knowledge to select appropriate repositories for their data management and sharing needs and foster a foundation for the open sharing and preservation of research data.


Subject(s)
Biomedical Research , Information Dissemination , Data Management
10.
Exp Neurol ; 378: 114815, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38762093

ABSTRACT

Effective data management and sharing have become increasingly crucial in biomedical research; however, many laboratory researchers lack the necessary tools and knowledge to address this challenge. This article, produced by practicing scientists, provides an introductory guide to research data management (RDM) and the importance of FAIR (Findable, Accessible, Interoperable, and Reusable) data-sharing principles for laboratory researchers. We explore the advantages of implementing organized data management strategies and introduce key concepts such as data standards, data documentation, and the distinction between machine- and human-readable data formats. Furthermore, we offer practical guidance for creating a data management plan and establishing efficient data workflows within the laboratory setting, suitable for labs of all sizes. This includes an examination of requirements analysis, the development of a data dictionary for routine data elements, the implementation of unique subject identifiers, and the formulation of standard operating procedures (SOPs) for seamless data flow. To aid researchers in implementing these practices, we present a simple organizational system as an illustrative example, which can be tailored to suit individual needs and research requirements. By presenting a user-friendly approach, this guide serves as an introduction to the field of RDM and offers practical tips to help researchers effortlessly meet the common data management and sharing mandates rapidly becoming prevalent in biomedical research.
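Two of the practices this guide recommends, unique subject identifiers and a data dictionary for routine data elements, can be sketched as follows (the field names, identifier format, and validation rules are hypothetical examples, not the article's own scheme):

```python
import re

# A minimal data dictionary for routine data elements: each entry fixes the
# field's type and allowed values, so every lab member records data the same way.
DATA_DICTIONARY = {
    "subject_id": {"type": str, "pattern": r"^SUBJ-\d{4}$"},
    "age_years":  {"type": int, "min": 0, "max": 120},
    "group":      {"type": str, "allowed": {"control", "treatment"}},
}

def make_subject_id(n):
    """Mint a unique, non-identifying subject identifier (e.g. SUBJ-0007)."""
    return f"SUBJ-{n:04d}"

def validate_record(record):
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for field, rules in DATA_DICTIONARY.items():
        if field not in record:
            problems.append(f"missing field: {field}")
            continue
        value = record[field]
        if not isinstance(value, rules["type"]):
            problems.append(f"{field}: expected {rules['type'].__name__}")
            continue
        if "pattern" in rules and not re.match(rules["pattern"], value):
            problems.append(f"{field}: bad format {value!r}")
        if "allowed" in rules and value not in rules["allowed"]:
            problems.append(f"{field}: {value!r} not in {sorted(rules['allowed'])}")
        if "min" in rules and not (rules["min"] <= value <= rules["max"]):
            problems.append(f"{field}: {value} out of range")
    return problems
```

Running the validator at data-entry time, rather than at publication time, is what makes the downstream sharing mandates cheap to satisfy: records that conform to the dictionary need no retroactive cleaning.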


Subject(s)
Biomedical Research , Data Management , Information Dissemination , Humans , Biomedical Research/methods , Biomedical Research/standards , Data Management/methods , Information Dissemination/methods , Research Personnel
11.
Contemp Clin Trials ; 142: 107573, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38759865

ABSTRACT

INTRODUCTION: Accurately estimating the costs of clinical trials is challenging. There is currently no reference class data to allow researchers to understand the potential costs associated with database change management in clinical trials. METHODS: We used a case-based approach, summarising post-live changes in eleven clinical trial databases managed by Sheffield Clinical Trials Research Unit. We reviewed the database specifications for each trial and summarised the number of changes, change type, change category, and timing of changes. We pooled our experiences and made observations in relation to key themes. RESULTS: Median total number of changes across the eleven trials was 71 (range 40-155) and median number of changes per study week was 0.48 (range 0.32-1.34). The most common change type was modification (median 39, range 20-90), followed by additions (median 32, range 18-55), then deletions (median 7, range 1-12). In our sample, changes were more common in the first half of the trial's lifespan, regardless of its overall duration. Trials which saw continuous changes seemed more likely to be external pilots or trials in areas where the trial team was either less experienced overall or within the particular therapeutic area. CONCLUSIONS: Researchers should plan trials with the expectation that clinical trial databases will require changes within the life of the trial, particularly in the early stages or with a less experienced trial team. More research is required to understand potential differences between clinical trial units and database types.
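The kind of change-log summary reported in this case series can be computed with a few lines of code (the log format and function names are invented for illustration, not the unit's actual tooling):

```python
import statistics
from collections import Counter

def summarise_changes(change_log, study_weeks):
    """Summarise post-live database changes for one trial.

    `change_log` is a list of (change_type, week) tuples, e.g.
    ("modification", 12). Returns counts by type, the total number
    of changes, and the rate of changes per study week.
    """
    by_type = Counter(change_type for change_type, _ in change_log)
    total = len(change_log)
    return {
        "by_type": dict(by_type),
        "total": total,
        "per_week": total / study_weeks,
    }

def pooled_medians(per_trial_totals):
    """Median and range of total changes across a set of trials,
    the reference-class figures a planner would reuse."""
    return {
        "median": statistics.median(per_trial_totals),
        "range": (min(per_trial_totals), max(per_trial_totals)),
    }
```

Collecting such logs routinely is what builds the reference class the authors say is currently missing: once several units publish per-trial totals, `pooled_medians` gives planners a defensible expectation for database change volume.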


Subject(s)
Clinical Trials as Topic , Databases, Factual , Humans , Clinical Trials as Topic/organization & administration , Clinical Trials as Topic/methods , Clinical Trials as Topic/standards , United Kingdom , Data Management/methods
12.
J Microsc ; 294(3): 350-371, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38752662

ABSTRACT

Bioimage data are generated in diverse research fields throughout the life and biomedical sciences. Its potential for advancing scientific progress via modern, data-driven discovery approaches reaches beyond disciplinary borders. To fully exploit this potential, it is necessary to make bioimaging data, in general, multidimensional microscopy images and image series, FAIR, that is, findable, accessible, interoperable and reusable. These FAIR principles for research data management are now widely accepted in the scientific community and have been adopted by funding agencies, policymakers and publishers. To remain competitive and at the forefront of research, implementing the FAIR principles into daily routines is an essential but challenging task for researchers and research infrastructures. Imaging core facilities, well-established providers of access to imaging equipment and expertise, are in an excellent position to lead this transformation in bioimaging research data management. They are positioned at the intersection of research groups, IT infrastructure providers, the institution's administration, and microscope vendors. Within the framework of German BioImaging - Society for Microscopy and Image Analysis (GerBI-GMB), cross-institutional working groups and third-party funded projects were initiated in recent years to advance the bioimaging community's capability and capacity for FAIR bioimage data management. Here, we provide an imaging-core-facility-centric perspective outlining the experience and current strategies in Germany to facilitate the practical adoption of the FAIR principles, closely aligned with the international bioimaging community. We highlight which tools and services are ready to be implemented and what the future directions for FAIR bioimage data have to offer.


Subject(s)
Microscopy , Biomedical Research/methods , Data Management/methods , Image Processing, Computer-Assisted/methods , Microscopy/methods
13.
F1000Res ; 13: 8, 2024.
Article in English | MEDLINE | ID: mdl-38779317

ABSTRACT

Biomedical research projects are becoming increasingly complex and require technological solutions that support all phases of the data lifecycle and application of the FAIR principles. At the Berlin Institute of Health (BIH), we have developed and established a flexible and cost-effective approach to building customized cloud platforms for supporting research projects. The approach is based on a microservice architecture and on the management of a portfolio of supported services. On this basis, we created and maintained cloud platforms for several international research projects. In this article, we present our approach and argue that building customized cloud platforms can offer multiple advantages over using multi-project platforms. Our approach is transferable to other research environments and can be easily adapted by other projects and other service providers.


Subject(s)
Biomedical Research , Cloud Computing , Data Management , Humans , Data Management/methods
14.
Front Health Serv Manage ; 40(4): 10-13, 2024.
Article in English | MEDLINE | ID: mdl-38781506

ABSTRACT

To translate raw data into information that is understandable and actionable, healthcare leaders must leverage decision-making tools that can drive strategic innovation, improve processes, and shape the future of healthcare. Continuous changes in healthcare delivery require constant monitoring of an expanding range of data. Population demographics, psychographics, and availability of care all must be considered, as well as provider practice patterns, patient utilization, clinical and service quality, costs, and many other key variables over time. RWJBarnabas Health is navigating significant changes in its approach to managing data. A unified operating model is driving standardization, continuous quality improvement, and cost reductions across the system. The solution is based on an electronic health record system designed to meet the needs of the entire system, an array of carefully selected external data sources, and a business intelligence tool to enable leaders to quickly draw insights from all the available data.


Subject(s)
Electronic Health Records , Humans , Data Management , Evidence-Based Practice , Organizational Case Studies , Delivery of Health Care/organization & administration
15.
Sci Data ; 11(1): 524, 2024 May 22.
Article in English | MEDLINE | ID: mdl-38778016

ABSTRACT

Datasets consist of measurement data and metadata. Metadata provides context, essential for understanding and (re-)using data. Various metadata standards exist for different methods, systems and contexts. However, relevant information resides at differing stages across the data lifecycle. Often, this information is defined and standardized only at the publication stage, which can lead to data loss and increased workload. In this study, we developed the Metadatasheet, a metadata standard based on interviews with members of two biomedical consortia and systematic screening of data repositories. It aligns with the data lifecycle, allowing synchronous metadata recording within Microsoft Excel, widely used data-recording software. Additionally, we provide an implementation, the Metadata Workbench, that offers user-friendly features like automation, dynamic adaptation, metadata integrity checks, and export options for various metadata standards. By design and due to its extensive documentation, the proposed metadata standard simplifies recording and structuring of metadata for biomedical scientists, promoting practicality and convenience in data management. This framework can accelerate scientific progress by enhancing collaboration and knowledge transfer throughout the intermediate steps of data creation.
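The integrity checks and multi-standard export mentioned in this abstract can be illustrated roughly as follows (the field mappings below are invented for illustration and do not reproduce any real standard's schema or the Metadata Workbench's behaviour):

```python
# Internal field names mapped onto two simplified, illustrative target
# schemas; a real exporter would follow each standard's full specification.
EXPORT_MAPPINGS = {
    "dublin_core_like": {"title": "dc:title", "author": "dc:creator", "date": "dc:date"},
    "dataverse_like":   {"title": "dsTitle", "author": "authorName", "date": "dateOfDeposit"},
}

REQUIRED_FIELDS = {"title", "author", "date"}

def check_integrity(record):
    """Report required metadata fields that are missing or empty."""
    return sorted(f for f in REQUIRED_FIELDS if not record.get(f))

def export(record, standard):
    """Re-key a validated metadata record for the chosen target standard."""
    missing = check_integrity(record)
    if missing:
        raise ValueError(f"cannot export, missing fields: {missing}")
    mapping = EXPORT_MAPPINGS[standard]
    return {mapping[k]: v for k, v in record.items() if k in mapping}
```

Recording metadata once in a lifecycle-aligned sheet and re-keying it per standard at export time is the design choice that avoids redefining metadata at publication stage.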


Subject(s)
Data Management , Metadata , Biomedical Research , Data Management/standards , Metadata/standards , Software
17.
Brief Bioinform ; 25(3)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38557674

ABSTRACT

Quality control in quantitative proteomics is a persistent challenge, particularly in identifying and managing outliers. Unsupervised learning models, which rely on data structure rather than predefined labels, offer potential solutions. However, without clear labels, their effectiveness might be compromised. Single models are susceptible to the randomness of parameters and initialization, which can result in a high rate of false positives. Ensemble models, on the other hand, have shown capabilities in effectively mitigating the impacts of such randomness and assisting in accurately detecting true outliers. Therefore, we introduced SEAOP, a Python toolbox that utilizes an ensemble mechanism by integrating multi-round data management and a statistics-based decision pipeline with multiple models. Specifically, SEAOP uses multi-round resampling to create diverse sub-data spaces and employs outlier detection methods to identify candidate outliers in each space. Candidates are then aggregated as confirmed outliers via a chi-square test, adhering to a 95% confidence level, to ensure the precision of the unsupervised approaches. Additionally, SEAOP introduces a visualization strategy, specifically designed to intuitively and effectively display the distribution of both outlier and non-outlier samples. Optimal hyperparameter models of SEAOP for outlier detection were identified by using a gradient-simulated standard dataset and the Mann-Kendall trend test. The performance of the SEAOP toolbox was evaluated using three experimental datasets, confirming its reliability and accuracy in handling quantitative proteomics data.
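The multi-round resampling and aggregation pipeline described here can be sketched as follows (a simplified stand-in, not SEAOP itself: a single robust z-score detector replaces its model ensemble, and a plain vote-fraction threshold replaces its chi-square aggregation at the 95% level):

```python
import random
import statistics

def detect_outliers_once(values, indices, z_cutoff=3.5):
    """Flag indices whose value deviates strongly (robust z-score based on
    the median absolute deviation) within one resampled sub-space."""
    sample = [values[i] for i in indices]
    med = statistics.median(sample)
    mad = statistics.median(abs(v - med) for v in sample) or 1e-9
    return {i for i in indices if abs(values[i] - med) / (1.4826 * mad) > z_cutoff}

def ensemble_outliers(values, rounds=50, frac=0.8, vote_threshold=0.95, seed=0):
    """Multi-round resampling: each round runs the detector on a random
    sub-space; candidates flagged in nearly every round that sampled them
    are confirmed.  This vote-fraction rule stands in for SEAOP's
    chi-square aggregation."""
    rng = random.Random(seed)
    n = len(values)
    appeared = [0] * n   # rounds in which index i was sampled
    flagged = [0] * n    # rounds in which index i was flagged
    for _ in range(rounds):
        indices = rng.sample(range(n), max(3, int(frac * n)))
        hits = detect_outliers_once(values, indices)
        for i in indices:
            appeared[i] += 1
            if i in hits:
                flagged[i] += 1
    return [i for i in range(n) if appeared[i]
            and flagged[i] / appeared[i] >= vote_threshold]
```

The resampling is what gives the ensemble its robustness: a sample only counts as an outlier if it is flagged consistently across many different sub-spaces, so one unlucky initialization or sub-sample cannot produce a false positive on its own.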


Subject(s)
Data Management , Proteomics , Reproducibility of Results , Quality Control , Data Interpretation, Statistical
18.
BMC Med Inform Decis Mak ; 24(1): 101, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38637746

ABSTRACT

BACKGROUND: The effective management of epilepsy in women of child-bearing age necessitates a concerted effort from multidisciplinary teams. Nevertheless, there exists an inadequacy in the seamless exchange of knowledge among healthcare providers within this context. Consequently, it is imperative to enhance the availability of informatics resources and the development of decision support tools to address this issue comprehensively. MATERIALS AND METHODS: The development of the Women with Epilepsy of Child-Bearing Age Ontology (WWECA) adhered to established ontology construction principles. The ontology's scope and universal terminology were initially established by the development team and subsequently subjected to external evaluation through a rapid Delphi consensus exercise involving domain experts. Additional entities and attribute annotation data were sourced from authoritative guideline documents and specialized terminology databases within the respective field. Furthermore, the ontology has played a pivotal role in steering the creation of an online question-and-answer system, which is actively employed and assessed by a diverse group of multidisciplinary healthcare providers. RESULTS: WWECA successfully integrated a total of 609 entities encompassing various facets related to the diagnosis and medication for women of child-bearing age afflicted with epilepsy. The ontology exhibited a maximum depth of 8 within its hierarchical structure. Each of these entities featured three fundamental attributes, namely Chinese labels, definitions, and synonyms. The evaluation of WWECA involved 35 experts from 10 different hospitals across China, resulting in a favorable consensus among the experts. Furthermore, the ontology-driven online question and answer system underwent evaluation by a panel of 10 experts, including neurologists, obstetricians, and gynecologists. 
This evaluation yielded an average rating of 4.2, signifying a positive reception and endorsement of the system's utility and effectiveness. CONCLUSIONS: Our ontology and the associated online question and answer system hold the potential to serve as a scalable assistant for healthcare providers engaged in the management of women with epilepsy (WWE). In the future, this developmental framework has the potential for broader application in the context of long-term management of more intricate chronic health conditions.


Subject(s)
Epilepsy , Informatics , Female , Humans , Epilepsy/therapy , Databases, Factual , Data Management , China
19.
Methods Mol Biol ; 2787: 3-38, 2024.
Article in English | MEDLINE | ID: mdl-38656479

ABSTRACT

In this chapter, we explore the application of high-throughput crop phenotyping facilities for phenotype data acquisition and the extraction of significant information from the collected data through image processing and data mining methods. Additionally, the construction and outlook of crop phenotype databases are introduced and the need for global cooperation and data sharing is emphasized. High-throughput crop phenotyping significantly improves accuracy and efficiency compared to traditional measurements, making significant contributions to overcoming bottlenecks in the phenotyping field and advancing crop genetics.


Subject(s)
Crops, Agricultural , Data Mining , Image Processing, Computer-Assisted , Phenotype , Crops, Agricultural/genetics , Crops, Agricultural/growth & development , Data Mining/methods , Image Processing, Computer-Assisted/methods , Data Management/methods , High-Throughput Screening Assays/methods