Results 1 - 20 of 25
1.
Bioinformatics ; 40(7), 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38924508

ABSTRACT

MOTIVATION: Citations play a fundamental role in scholarly communication and assessment. Citation accuracy and transparency are crucial for the integrity of scientific evidence. In this work, we focus on quotation errors: errors in citation content that can distort the scientific evidence and that are hard for humans to detect. We construct a corpus and propose natural language processing (NLP) methods to identify such errors in biomedical publications. RESULTS: We manually annotated 100 highly cited biomedical publications (reference articles) and citations to them. The annotation involved labeling the citation context in the citing article, the relevant evidence sentences in the reference article, and the accuracy of the citation. A total of 3063 citation instances were annotated (39.18% with accuracy errors). For NLP, we combined a sentence retriever with a fine-tuned claim verification model to label citations as ACCURATE, NOT_ACCURATE, or IRRELEVANT. We also explored few-shot in-context learning with generative large language models. The best performing model, which uses citation sentences as the citation context, BM25 with a MonoT5 reranker to retrieve the top 20 sentences, and a fine-tuned MultiVerS model for accuracy label classification, yielded 0.59 micro-F1 and 0.52 macro-F1. GPT-4 in-context learning performed better at identifying accurate citations but lagged on erroneous citations (0.65 micro-F1, 0.45 macro-F1). Citation quotation errors are often subtle, and it remains challenging for NLP models to identify erroneous citations. With further improvements, such models could serve to improve citation quality and accuracy. AVAILABILITY AND IMPLEMENTATION: We make the corpus and the best-performing NLP model publicly available at https://github.com/ScienceNLP-Lab/Citation-Integrity/.
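
A minimal sketch of the retrieve-then-verify pipeline described above, assuming the rank_bm25 package for the BM25 stage; rerank() and verify() are hypothetical placeholders for the MonoT5 reranker and the fine-tuned MultiVerS classifier, each of which needs its own model code.

```python
# Hedged sketch: BM25 retrieval of candidate evidence sentences, followed by
# placeholder reranking and claim-verification steps (MonoT5 and MultiVerS
# in the paper; stubbed out here).
from rank_bm25 import BM25Okapi

LABELS = ("ACCURATE", "NOT_ACCURATE", "IRRELEVANT")

def retrieve_top_k(context: str, ref_sentences: list[str], k: int = 20) -> list[str]:
    """Score reference-article sentences against the citation context."""
    bm25 = BM25Okapi([s.lower().split() for s in ref_sentences])
    scores = bm25.get_scores(context.lower().split())
    ranked = sorted(zip(scores, ref_sentences), key=lambda p: p[0], reverse=True)
    return [sent for _, sent in ranked[:k]]

def rerank(context: str, sentences: list[str]) -> list[str]:
    return sentences  # placeholder for a MonoT5 cross-encoder reranker

def verify(context: str, evidence: list[str]) -> str:
    return LABELS[2]  # placeholder for the fine-tuned MultiVerS classifier

def label_citation(context: str, ref_sentences: list[str]) -> str:
    return verify(context, rerank(context, retrieve_top_k(context, ref_sentences)))
```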


Subject(s)
Natural Language Processing, Humans, Publications, Biomedical Research
2.
BMJ Evid Based Med ; 29(2): 121-126, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-37463764

ABSTRACT

The inadvertent incorporation of retracted publications is a risk to reliable evidence synthesis. Retraction is an important mechanism for correcting the literature and protecting its integrity. Within the medical literature, retracted publications continue to be cited for a variety of reasons. Recent evidence suggests that systematic reviews and meta-analyses often unwittingly cite retracted publications, which, at least in some cases, can significantly affect quantitative effect estimates in meta-analyses. There is strong evidence that authors of systematic reviews and meta-analyses are often unaware of the retracted status of publications and treat them as if they were not retracted. These problems are difficult to address for several reasons: identifying retracted publications is important but logistically challenging; publications may be retracted while a review is in preparation or in press; and problems with a publication may be discovered only after the evidence synthesis is published. We propose a set of concrete actions that stakeholders (eg, scientists, peer reviewers, journal editors) might take in the near term, and that research funders, citation management systems, databases, and search engines might take in the longer term, to limit the impact of retracted primary studies on evidence syntheses.


Subject(s)
Scientific Misconduct, Humans, Systematic Reviews as Topic, Meta-Analysis as Topic, Bibliographic Databases
3.
Quant Sci Stud ; 2(4): 1144-1169, 2022 Feb.
Article in English | MEDLINE | ID: mdl-36186715

ABSTRACT

We present the first database-wide study of the citation contexts of retracted papers, covering 7,813 retracted papers indexed in PubMed, 169,434 citations collected from iCite, and 48,134 citation contexts identified from the XML version of the PubMed Central Open Access Subset. Whereas previous citation studies compared citation counts across two time frames (i.e., preretraction and postretraction), our analyses trace the longitudinal trends of citations to retracted papers over the past 60 years (1960-2020). Our temporal analyses show that retracted papers continued to be cited, although older retracted papers gradually stopped being cited over time. Analysis of the textual progression of pre- and postretraction citation contexts shows that retraction did not change the way the retracted papers were cited. Furthermore, among the 13,252 postretraction citation contexts, only 722 (5.4%) acknowledged the retraction. In these 722 citation contexts, the retracted papers were most commonly cited as related work or as an example of problematic science. Our findings deepen the understanding of why retraction does not stop citation and demonstrate that the vast majority of postretraction citations in biomedicine do not document the retraction.

4.
Res Integr Peer Rev ; 7(1): 6, 2022 Sep 19.
Article in English | MEDLINE | ID: mdl-36123607

ABSTRACT

BACKGROUND: Retraction is a mechanism for alerting readers to unreliable material and other problems in the published scientific and scholarly record. Retracted publications generally remain visible and searchable, but the intention of retraction is to mark them as "removed" from the citable record of scholarship. In practice, however, some retracted articles continue to be treated by researchers and the public as valid content, because readers are often unaware of the retraction. Research over the past decade has identified a number of factors contributing to the unintentional spread of retracted research. The goal of the Reducing the Inadvertent Spread of Retracted Science: Shaping a Research and Implementation Agenda (RISRS) project was to develop an actionable agenda for reducing the inadvertent spread of retracted science. This included identifying how retraction status could be more thoroughly disseminated and determining which actions are feasible and relevant for the particular stakeholders who play a role in the distribution of knowledge. METHODS: These recommendations were developed through a year-long process that included a scoping review of the empirical literature and successive rounds of stakeholder consultation, culminating in a three-part online workshop that brought together a diverse body of 65 stakeholders in October-November 2020 to engage in collaborative problem solving and dialogue. Stakeholders held roles such as publishers, editors, researchers, librarians, standards developers, funding program officers, and technologists, and worked for institutions such as universities, governmental agencies, funding organizations, publishing houses, libraries, standards organizations, and technology providers. Workshop discussions were seeded by materials derived from stakeholder interviews (N = 47) and short original discussion pieces contributed by stakeholders. The online workshop resulted in a set of recommendations to address the complexities of retracted research throughout the scholarly communications ecosystem. RESULTS: The RISRS recommendations are: (1) Develop a systematic cross-industry approach to ensure the public availability of consistent, standardized, interoperable, and timely information about retractions; (2) Recommend a taxonomy of retraction categories/classifications and corresponding retraction metadata that can be adopted by all stakeholders; (3) Develop best practices for coordinating the retraction process to enable timely, fair, and unbiased outcomes; and (4) Educate stakeholders about pre- and post-publication stewardship, including retraction and correction of the scholarly record. CONCLUSIONS: Our stakeholder engagement study led to four recommendations for addressing the inadvertent citation of retracted research, and to the formation of a working group to develop the Communication of Retractions, Removals, and Expressions of Concern (CORREC) Recommended Practice. Further work will be needed to determine how well retractions are currently documented, to assess how retraction of code and datasets affects related publications, and to identify where retraction metadata propagates and where it fails to. Together, these efforts should help ensure that retracted papers are never cited without awareness of the retraction and that, in public fora outside of science, retracted papers are not treated as valid scientific outputs.

5.
AMIA Jt Summits Transl Sci Proc ; 2022: 406-413, 2022.
Article in English | MEDLINE | ID: mdl-35854734

ABSTRACT

Systematic reviews (SRs) are extremely time-consuming. The goal of this work is to assess the work savings and recall of a publication type filtering strategy that uses the output of two machine learning models, Multi-Tagger and the web RCT Tagger, applied retrospectively to 10 systematic reviews on drug effectiveness. Our filtering strategy resulted in mean work savings of 33.6% and recall of 98.3%. Of the 363 articles ultimately included in any of the systematic reviews, 7 were filtered out by our strategy, but 1 "error" was actually an article of a publication type that the SR team had not pre-specified as relevant for inclusion. Our analysis suggests that automated publication type filtering can provide substantial work savings with minimal loss of included articles. Publication type filtering should be personalized for each systematic review and might be combined with other filtering or ranking methods to provide additional work savings for manual triage.
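
A short sketch of the arithmetic behind the two reported metrics; the included-article counts (363 included, 7 filtered out, 1 of them out of scope) come from the abstract, while the total number screened is an illustrative placeholder.

```python
# Hedged sketch of the screening metrics. total_screened is illustrative;
# the included-article counts are taken from the abstract (6 true misses
# after discounting the one out-of-scope "error").
def screening_metrics(total_screened, filtered_out, included_total, included_missed):
    work_savings = filtered_out / total_screened      # share of articles not triaged manually
    recall = 1 - included_missed / included_total     # share of included articles retained
    return work_savings, recall

ws, rec = screening_metrics(total_screened=1000, filtered_out=336,
                            included_total=363, included_missed=6)
print(f"work savings: {ws:.1%}, recall: {rec:.1%}")  # 33.6%, 98.3%
```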

6.
JAMIA Open ; 5(1): ooac015, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35571360

ABSTRACT

Objectives: To produce a systematic review (SR), reviewers typically screen thousands of titles and abstracts manually to find the small number of articles that are read in full text and considered for inclusion in the final SR. Here, we evaluate a proposed automated probabilistic publication type screening strategy applied to the randomized controlled trial (RCT) articles (i.e., those that present clinical outcome results of RCT studies) included in a corpus of previously published Cochrane reviews. Materials and Methods: We selected a random subset of 558 published Cochrane reviews whose inclusion criteria specified RCT studies only, containing 7113 included articles that could be matched to PubMed identifiers. These were processed by our automated RCT Tagger tool to estimate the probability that each article reports clinical outcomes of an RCT. Results: Removing articles with low predicted probabilities (P < 0.01) eliminated 288 included articles, of which only 22 were typical RCT articles, and only 18 of those were indexed as RCTs in MEDLINE. Based on our sample set, this screening strategy led to fewer than 0.05 relevant RCT articles being missed on average per Cochrane SR. Discussion: This scenario, based on real SRs, demonstrates that automated tagging can identify RCT articles accurately while maintaining very high recall. However, we also found that even SRs whose inclusion criteria are restricted to RCT studies include not only clinical outcome articles per se but also a variety of ancillary article types. Conclusions: These findings encourage further study of how best to incorporate automated tagging of additional publication types into SR triage workflows.
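
The probability cutoff step can be sketched in a few lines; the predicted probabilities here are mock values standing in for RCT Tagger output.

```python
# Hedged sketch of probability-based publication type screening.
from typing import NamedTuple

class Candidate(NamedTuple):
    pmid: str
    p_rct: float  # predicted probability the article reports RCT clinical outcomes

def screen(candidates: list[Candidate], threshold: float = 0.01):
    keep = [c for c in candidates if c.p_rct >= threshold]
    dropped = [c for c in candidates if c.p_rct < threshold]
    return keep, dropped

keep, dropped = screen([Candidate("11111111", 0.97), Candidate("22222222", 0.004)])
print(f"{len(keep)} kept for manual triage, {len(dropped)} filtered out")
```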

7.
J Med Libr Assoc ; 110(1): 103-108, 2022 Jan 01.
Article in English | MEDLINE | ID: mdl-35210969

ABSTRACT

BACKGROUND: An article's citations are useful for finding related articles that may not be readily found through keyword searches or textual similarity. Citation analysis is also important for studying scientific innovation and the structure of the biomedical literature. We wanted to facilitate citation analysis for the broad community by providing a user-friendly interface for accessing and analyzing citation data for biomedical articles. CASE PRESENTATION: We seeded the Citation Cloud dataset with over 465 million open access citations culled from six sources: PubMed Central, Microsoft Academic Graph, ArnetMiner, Semantic Scholar, Open Citations, and the NIH iCite dataset. We implemented a free, public extension to PubMed that allows any user to visualize and analyze the entire citation cloud around any paper of interest A: the set of articles cited by A, those which cite A, those which are co-cited with A, and those which are bibliographically coupled to A. CONCLUSIONS: Citation Cloud greatly facilitates the study of citations by the scientific community, including relatively advanced analyses (co-citation and bibliographic coupling) that cannot be undertaken with other available tools. The tool can be accessed by running any PubMed query on the Anne O'Tate value-added search interface and clicking the Citations button next to any retrieved article.
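
The four relations can be illustrated on a toy citation graph; this sketch is ours, not the Citation Cloud implementation.

```python
# Toy citation graph: cites[x] is the set of papers x cites.
cites = {
    "A": {"P1", "P2"},
    "B": {"A", "P1"},   # cites A and shares reference P1 with A
    "C": {"A", "P3"},
    "D": {"P2"},        # shares reference P2 with A without citing A
}

def citation_cloud(a, cites):
    cited_by_a = cites.get(a, set())
    citing_a = {x for x, refs in cites.items() if a in refs}
    # Co-cited with A: papers appearing alongside A in someone's reference list.
    co_cited = {r for x in citing_a for r in cites[x]} - {a}
    # Bibliographically coupled: papers sharing at least one reference with A.
    coupled = {x for x, refs in cites.items() if x != a and refs & cited_by_a}
    return cited_by_a, citing_a, co_cited, coupled

print(citation_cloud("A", cites))  # ({'P1','P2'}, {'B','C'}, {'P1','P3'}, {'B','D'})
```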


Subject(s)
Bibliometrics, Publications, Internet, PubMed
8.
J Biomed Inform ; 116: 103717, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33647518

ABSTRACT

OBJECTIVE: To annotate a corpus of randomized controlled trial (RCT) publications with the checklist items of the CONSORT reporting guidelines, and to use the corpus to develop text mining methods for RCT appraisal. METHODS: We annotated a corpus of 50 RCT articles at the sentence level using 37 fine-grained CONSORT checklist items. A subset (31 articles) was double-annotated and adjudicated, while 19 were annotated by a single annotator and reconciled by another. We calculated inter-annotator agreement at the article and section level using MASI (Measuring Agreement on Set-valued Items) and at the CONSORT item level using Krippendorff's α. We experimented with two rule-based methods (phrase-based and section header-based) and two supervised learning approaches (a support vector machine and a BioBERT-based neural network classifier) for recognizing 17 methodology-related items in the RCT Methods sections. RESULTS: We created CONSORT-TM, consisting of 10,709 sentences, 4,845 (45%) of which were annotated with 5,246 labels. A median of 28 CONSORT items (out of a possible 37) were annotated per article. Agreement was moderate at the article and section levels (average MASI: 0.60 and 0.64, respectively). Agreement varied considerably among individual checklist items (Krippendorff's α = 0.06-0.96). The BioBERT-based model performed best overall for recognizing methodology-related items (micro-precision: 0.82, micro-recall: 0.63, micro-F1: 0.71). Combining models using majority vote and label aggregation further improved precision and recall, respectively. CONCLUSION: Our annotated corpus, CONSORT-TM, contains more fine-grained information than earlier RCT corpora. The low frequency of some CONSORT items made it difficult to train effective text mining models to recognize them. For the items commonly reported, CONSORT-TM can serve as a testbed for text mining methods that assess RCT transparency, rigor, and reliability, and can support methods for peer review and authoring assistance. Minor modifications to the annotation scheme and a larger corpus could facilitate improved text mining models. CONSORT-TM is publicly available at https://github.com/kilicogluh/CONSORT-TM.
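
A minimal sketch of the majority-vote combination mentioned in the results; the model outputs and CONSORT item labels are illustrative.

```python
# Hedged sketch: a sentence keeps a CONSORT item label only when a majority
# of models assign it.
from collections import Counter

def majority_vote(model_predictions: list[set[str]]) -> set[str]:
    """model_predictions: one set of CONSORT item labels per model, for one sentence."""
    counts = Counter(label for labels in model_predictions for label in labels)
    quorum = len(model_predictions) // 2 + 1
    return {label for label, n in counts.items() if n >= quorum}

# Three hypothetical models labeling one Methods sentence:
print(majority_vote([{"8a", "9"}, {"8a"}, {"8a", "11a"}]))  # {'8a'}
```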


Subject(s)
Checklist, Serial Publications/standards, Support Vector Machine, Humans, Randomized Controlled Trials as Topic
9.
AMIA Annu Symp Proc ; 2020: 554-563, 2020.
Article in English | MEDLINE | ID: mdl-33936429

ABSTRACT

A longstanding issue with knowledge bases that describe drug-drug interactions (DDIs) is that they are inconsistent with one another. Computerized support might help experts be more objective in assessing DDI evidence. A requirement for such systems is accurate automatic classification of evidence types. In this pilot study, we developed a hierarchical classifier to classify clinical DDI studies into formally defined evidence types. The area under the ROC curve for sub-classifiers in the ensemble ranged from 0.78 to 0.87. The entire system achieved F1 scores of 0.83 and 0.63 on two held-out datasets, the latter consisting of drugs completely novel to the system's training data. The results suggest that it is feasible to accurately automate the classification of a subset of DDI evidence types and that the hierarchical approach shows promise. Future work will test more advanced feature engineering techniques while expanding the system to classify a more complex set of evidence types.
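
The hierarchical idea can be sketched as a cascade of classifiers, here with scikit-learn on toy data; the real system's features, evidence taxonomy, and training corpus differ.

```python
# Hedged sketch of a two-level hierarchy: a root model picks a coarse branch,
# then a branch-specific model assigns the fine evidence type. Labels and
# training texts are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

root = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(
    ["pharmacokinetic trial in healthy volunteers", "case report of a suspected interaction"],
    ["clinical_trial", "case_report"],
)
branch = {
    "clinical_trial": make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(
        ["crossover design pharmacokinetic trial", "parallel group pharmacokinetic trial"],
        ["pk_crossover", "pk_parallel"],
    ),
}

def classify(text: str) -> str:
    coarse = root.predict([text])[0]
    fine_model = branch.get(coarse)
    return fine_model.predict([text])[0] if fine_model else coarse

print(classify("a crossover pharmacokinetic trial of two drugs"))
```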


Subject(s)
Data Mining/methods, Factual Databases, Drug Interactions, Machine Learning, Publications, Computers, Data Mining/statistics & numerical data, Factual Databases/statistics & numerical data, Humans, Natural Language Processing, Pilot Projects
10.
Front Pharmacol ; 11: 608068, 2020.
Article in English | MEDLINE | ID: mdl-33762928

ABSTRACT

Despite the significant health impacts of adverse events associated with drug-drug interactions, no standard models exist for managing and sharing evidence describing potential interactions between medications. Minimal information models have been used in other communities to establish consensus around simple models capable of communicating useful information. This paper reports on a new minimal information model for describing potential drug-drug interactions. A task force of the Semantic Web in Health Care and Life Sciences Community Group of the World Wide Web Consortium engaged informaticians and drug-drug interaction experts in an in-depth examination of recent literature and specific potential interactions. A consensus set of information items was identified, along with example descriptions of selected potential drug-drug interactions (PDDIs). User profiles and use cases were developed to demonstrate the applicability of the model. Ten core information items were identified: drugs involved, clinical consequences, seriousness, operational classification statement, recommended action, mechanism of interaction, contextual information/modifying factors, evidence about a suspected drug-drug interaction, frequency of exposure, and frequency of harm to exposed persons. Eight best practice recommendations suggest how PDDI knowledge artifact creators can best use the 10 information items when synthesizing drug interaction evidence into artifacts intended to aid clinicians. The model has been included in a proposed implementation guide developed by the HL7 Clinical Decision Support Workgroup and in PDDIs published in the CDS Connect repository. The complete description of the model can be found at https://w3id.org/hclscg/pddi.
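
The ten items map naturally onto a simple record type; this dataclass is a hypothetical rendering with field names of our choosing, not part of the published model.

```python
# Hedged sketch of the 10 core information items as a record type.
from dataclasses import dataclass

@dataclass
class PDDIMinimalInfo:
    drugs_involved: list[str]
    clinical_consequences: str
    seriousness: str
    operational_classification: str   # operational classification statement
    recommended_action: str
    mechanism_of_interaction: str
    contextual_factors: str           # contextual information / modifying factors
    evidence: list[str]               # evidence about the suspected interaction
    frequency_of_exposure: str
    frequency_of_harm: str            # frequency of harm to exposed persons
```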

11.
Proc ACM/IEEE Joint Conf Digit Libr ; 2020: 217-226, 2020 Aug.
Article in English | MEDLINE | ID: mdl-34305485

ABSTRACT

Scientific digital libraries speed dissemination of scientific publications, but also the propagation of invalid or unreliable knowledge. Although many papers with known validity problems are highly cited, no auditing process is currently available to determine whether a citing paper's findings fundamentally depend on invalid or unreliable knowledge. To address this, we introduce a new framework, the keystone framework, designed to identify when and how citing unreliable findings impacts a paper, using argumentation theory and citation context analysis. Through two pilot case studies, we demonstrate how the keystone framework can be applied to knowledge maintenance tasks for digital libraries, including addressing citations of a non-reproducible paper and identifying statements most needing validation in a high-impact paper. We identify roles for librarians, database maintainers, knowledgebase curators, and research software engineers in applying the framework to scientific digital libraries.

12.
Article in English | MEDLINE | ID: mdl-34316510

ABSTRACT

Systematic reviews answer specific questions based on the primary literature. However, systematic reviews on the same topic frequently disagree, and there is currently no way to understand why at a glance. Our goal is to provide a visual summary that could help researchers, policy makers, and health care professionals understand why health controversies persist in the expert literature over time. We present a case study of a single controversy in public health, around the question: "Is reducing dietary salt beneficial at a population level?" We define and visualize three new constructs: the overall evidence base, the evidence synthesized by systematic reviews (the inclusion network), and the unused evidence (isolated nodes). Our network visualization shows at a glance what evidence has been synthesized by each systematic review. Visualizing the temporal evolution of the network captures two key moments when new scientific opinions emerged, both associated with a turn to new sets of evidence that had little to no overlap with previously reviewed evidence. Limited overlap between the evidence reviewed was also found for systematic reviews published in the same year. Future work will focus on understanding the reasons for this limited overlap and on automating this methodology for medical literature databases.
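
A toy sketch of the three constructs with invented identifiers: the inclusion network maps each review to the studies it synthesizes, isolated nodes are the unused evidence, and pairwise Jaccard overlap quantifies how little evidence two reviews share.

```python
# Hedged sketch of the evidence base, inclusion network, and isolated nodes.
inclusion = {                   # inclusion network: review -> synthesized studies
    "SR-2011": {"s1", "s2", "s3"},
    "SR-2016": {"s4", "s5"},    # a turn to a new evidence set: no overlap with SR-2011
}
evidence_base = {"s1", "s2", "s3", "s4", "s5", "s6"}

synthesized = set().union(*inclusion.values())
isolated = evidence_base - synthesized      # unused evidence: {'s6'}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

print(isolated, jaccard(inclusion["SR-2011"], inclusion["SR-2016"]))  # {'s6'} 0.0
```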

13.
J Biomed Inform ; 91: 103123, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30753947

ABSTRACT

Quantifying the scientific impact of researchers and journals relies largely on citation counts, despite the acknowledged limitations of this approach. The need for more suitable alternatives has prompted research into advanced metrics, such as the h-index and the Relative Citation Ratio (RCR), as well as better citation categorization schemes that capture the various functions citations serve in a publication. One such scheme involves citation sentiment: whether a reference paper is cited positively (agreement with its findings), negatively (disagreement), or neutrally. The ability to classify citation function in this manner can be viewed as a first step toward more fine-grained bibliometrics. In this study, we compared several approaches, varying in complexity, for classifying citation sentiment in clinical trial publications. Using a corpus of 285 discussion sections from as many publications (4,182 citations in total), we developed a rule-based method as well as supervised machine learning models based on support vector machines (SVMs) and two deep neural network variants, namely a convolutional neural network (CNN) and a bidirectional long short-term memory (BiLSTM) network. A CNN model augmented with hand-crafted features yielded the best performance (0.882 accuracy and 0.721 macro-F1 on the held-out set). Our results show that the baseline performances of traditional supervised learning algorithms and deep neural network architectures are similar, and that hand-crafted features based on sentiment dictionaries and rhetorical structure allow neural network approaches to outperform traditional machine learning approaches on this task. We make the rule-based method and the best-performing neural network model publicly available at https://github.com/kilicogluh/clinical-citation-sentiment.
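
A toy version of a dictionary-driven rule baseline of the kind described; the published rules and sentiment dictionaries are far richer than these illustrative cue lists.

```python
# Hedged sketch of a cue-dictionary citation sentiment rule.
NEG_CUES = {"contrast", "contrary", "however", "failed", "inconsistent"}
POS_CUES = {"consistent", "agreement", "confirmed", "supports", "similar"}

def citation_sentiment(sentence: str) -> str:
    tokens = set(sentence.lower().replace(",", " ").replace(".", " ").split())
    if tokens & NEG_CUES:
        return "negative"
    if tokens & POS_CUES:
        return "positive"
    return "neutral"

print(citation_sentiment("In contrast to [12], we observed no such effect."))  # negative
```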


Subject(s)
Biomedical Research, Machine Learning, Publishing, Algorithms
14.
J Med Internet Res ; 21(1): e11182, 2019 Jan 04.
Article in English | MEDLINE | ID: mdl-30609981

ABSTRACT

BACKGROUND: Preventing drug interactions is an important goal for maximizing patient benefit from medications. Summarizing potential drug-drug interactions (PDDIs) for clinical decision support is challenging, and there is no single repository for PDDI evidence. Additionally, inconsistencies across compendia and other sources have been well documented. Standard search strategies for finding complete and current evidence about PDDIs have not previously been developed or validated. OBJECTIVE: This study aimed to identify common methods for conducting PDDI literature searches used by experts who routinely evaluate such evidence. METHODS: We invited a convenience sample of 70 drug information experts, including compendia editors, knowledge-base vendors, and clinicians, via email to complete a survey on identifying PDDI evidence. We created a Web-based survey that included questions regarding the (1) development and conduct of searches; (2) resources used, for example, databases, compendia, search engines, etc; (3) types of keywords used to search for specific PDDI information; (4) study types included in and excluded from searches; and (5) search terms used. Search strategy questions focused on 6 attributes of PDDI information: (1) that a PDDI exists; (2) seriousness; (3) clinical consequences; (4) management options; (5) mechanism; and (6) health outcomes. RESULTS: Twenty participants (response rate, 20/70, 29%) completed the survey. The majority (17/20, 85%) were drug information specialists, drug interaction researchers, compendia editors, or clinical pharmacists, with 60% (12/20) having more than 10 years' experience. Over half (11/20, 55%) worked for clinical solutions or knowledge-base vendors. Most participants developed (18/20, 90%) and conducted (19/20, 95%) search strategies without librarian assistance. PubMed (20/20, 100%) and Google Scholar (11/20, 55%) were the most commonly searched sources for papers, followed by Google Web Search (7/20, 35%) and EMBASE (3/20, 15%). No respondents reported using Scopus. A variety of subscription and open-access databases were used, most commonly Lexicomp (9/20, 45%), Micromedex (8/20, 40%), Drugs@FDA (17/20, 85%), and DailyMed (13/20, 65%). Facts and Comparisons was the most commonly used compendium (8/20, 40%). Across the 6 attributes of interest, the generic drug name was the most common keyword used. Respondents reported using more types of keywords when searching to establish that a PDDI exists or to determine its mechanism than when searching for the other 4 attributes (seriousness, consequences, management, and health outcomes). Regarding the types of evidence useful for evaluating a PDDI, clinical trials, case reports, and systematic reviews were considered relevant, while animal and in vitro studies were not. CONCLUSIONS: This study suggests that drug interaction experts use varying keyword strategies and a variety of database and Web resources depending on the PDDI evidence they are seeking. Greater automation and standardization of search strategies could improve the ability to identify PDDI evidence. Hence, future research focused on enhancing existing search tools and designing recommended standards is needed.


Subject(s)
Drug Interactions, Humans, Internet, Surveys and Questionnaires
15.
Transform Digit Worlds (2018) ; 10766: 367-377, 2018.
Article in English | MEDLINE | ID: mdl-30637417

ABSTRACT

Systematic review is a type of literature review designed to synthesize all available evidence on a given question. Systematic reviews require significant time and effort, which has led to the continuing development of computer support. This paper seeks to identify the gaps in and opportunities for such support. By interviewing experienced systematic reviewers from diverse fields, we identify the technical problems and challenges reviewers face in conducting a systematic review, as well as their current uses of computer support. We propose potential research directions for how computer support could speed the systematic review process while retaining or improving review quality.

16.
ILAR J ; 58(1): 80-89, 2017 Jul 01.
Article in English | MEDLINE | ID: mdl-28838071

ABSTRACT

Informatics methodologies exploit computer-assisted techniques to help biomedical researchers manage large amounts of information. In this paper, we focus on the biomedical research literature (MEDLINE). We first provide an overview of text mining techniques that assist research by identifying biomedical entities (e.g., genes, substances, and diseases) and the relations between them in text. We then discuss Semantic MEDLINE, an application that integrates PubMed document retrieval, concept and relation identification, and visualization, thus enabling a user to explore concepts and relations within a set of retrieved citations. Semantic MEDLINE provides a roadmap through content and helps users discern patterns in large numbers of retrieved citations. We illustrate its use with an informatics method we call "discovery browsing," which provides a principled way of navigating through selected aspects of a biomedical research area. The method supports an iterative process that accommodates learning and hypothesis formation, in which a user is given high-level connections before delving into details. As a use case, we examine current developments in basic research on mechanisms of Alzheimer's disease. Out of the nearly 90,000 citations returned by the PubMed query "Alzheimer's disease," discovery browsing led us to 73 citations on sortilin and that disorder. We provide a synopsis of the basic research reported in 15 of these. There is widespread consensus among researchers working with a range of animal models and human cells that increased sortilin expression and decreased receptor expression are associated with amyloid beta and/or amyloid precursor protein.


Subject(s)
Data Mining/methods, Information Storage and Retrieval, MEDLINE, Humans, Semantics
17.
Stud Health Technol Inform ; 245: 960-964, 2017.
Article in English | MEDLINE | ID: mdl-29295242

ABSTRACT

In this research, we aim to demonstrate that an ontology-based system can categorize potential drug-drug interaction (PDDI) evidence items into complex types based on a small set of simple questions. Such a method could increase the transparency and reliability of PDDI evidence evaluation while also reducing the variation in content and seriousness ratings present in PDDI knowledge bases. We extended the DIDEO ontology with 44 formal evidence type definitions. We then manually annotated the evidence types of 30 evidence items. We tested an RDF/OWL representation of the answers to a small number of simple questions about each of these 30 evidence items and showed that automatic inference can determine the detailed evidence types from those answers. These results provide proof-of-concept for a decision support infrastructure that frees the evidence evaluator from mastering relatively complex written evidence type definitions.


Subject(s)
Drug Interactions, Knowledge Bases, Biological Ontologies, Humans, Reproducibility of Results
18.
CEUR Workshop Proc ; 1747, 2016 Aug.
Article in English | MEDLINE | ID: mdl-33139971

ABSTRACT

In this poster, we present novel development and extension of the Drug-drug Interaction and Drug-drug Interaction Evidence Ontology (DIDEO). We demonstrate how reasoning over this extension of DIDEO can (a) automatically create a multi-level hierarchy of evidence types from descriptions of the underlying scientific observations and (b) automatically subsume individual evidence items under the correct evidence type. DIDEO will thus enable evidence items added manually by curators to be automatically categorized into a drug-drug interaction framework with precision and minimal curator effort. As with all previous DIDEO development, this extension is consistent with OBO Foundry principles.

19.
CEUR Workshop Proc ; 1309: 16-31, 2014 Oct.
Article in English | MEDLINE | ID: mdl-33139970

ABSTRACT

Inadequate representation of evidence and knowledge about potential drug-drug interactions is a major factor underlying disagreements among sources of drug information that are used by clinicians. In this paper we describe the initial steps toward developing a foundational domain representation that allows tracing the evidence underlying potential drug-drug interaction knowledge. The new representation includes biological and biomedical entities represented in existing ontologies and terminologies to foster integration of data from relevant fields such as physiology, anatomy, and laboratory sciences.

20.
J Biomed Semantics ; 4(1): 5, 2013 Jan 26.
Article in English | MEDLINE | ID: mdl-23351881

ABSTRACT

Out-of-date or incomplete drug product labeling information may increase the risk of otherwise preventable adverse drug events. In recognition of these concerns, the United States Food and Drug Administration (FDA) requires drug product labels to include specific information. Unfortunately, several studies have found that drug product labeling fails to keep current with the scientific literature. We present a novel approach to addressing this issue, whose primary goal is to better meet the information needs of persons who consult the drug product label for information on a drug's efficacy, effectiveness, and safety. Using FDA product label regulations as a guide, the approach links drug claims present in drug information sources available on the Semantic Web to specific product label sections. Here we report on pilot work that establishes the baseline performance characteristics of a proof-of-concept system implementing this approach. Claims from three drug information sources were linked to the Clinical Studies, Drug Interactions, and Clinical Pharmacology sections of the labels for drug products containing one of 29 psychotropic drugs. The resulting Linked Data set maps 409 efficacy/effectiveness study results, 784 drug-drug interactions, and 112 metabolic pathway assertions derived from three clinically oriented drug information sources (ClinicalTrials.gov, the National Drug File - Reference Terminology, and the Drug Interaction Knowledge Base) to the sections of 1,102 product labels. Proof-of-concept web pages were created for all 1,102 drug product labels to demonstrate one possible approach to presenting information that dynamically enhances drug product labeling. We found that approximately one in five efficacy/effectiveness claims were relevant to the Clinical Studies section of a psychotropic drug product, with most relevant claims providing new information. We also identified several cases in which all of the drug-drug interaction claims linked to the Drug Interactions section for a drug were potentially novel. The baseline performance characteristics of the proof-of-concept will enable further technical and user-centered research on robust methods for scaling the approach to the many thousands of product labels currently on the market.
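
A hedged sketch of expressing one claim-to-section link as RDF with rdflib; the namespace, class, and predicate names are invented for illustration and are not the project's actual vocabulary.

```python
# Hedged sketch: one drug claim linked to a product-label section as RDF.
# All URIs and terms below are illustrative, not the published vocabulary.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/linked-spls/")
g = Graph()

claim = EX["claim/escitalopram-cyp2c19"]
section = EX["label/escitalopram#drug-interactions"]

g.add((claim, RDF.type, EX.DrugInteractionClaim))
g.add((claim, EX.source, Literal("Drug Interaction Knowledge Base")))
g.add((claim, EX.relevantToSection, section))

print(g.serialize(format="turtle"))
```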
