Results 1 - 20 of 64
1.
PLoS Biol ; 20(2): e3001562, 2022 02.
Article in English | MEDLINE | ID: mdl-35180228

ABSTRACT

The power of language to shape the reader's interpretation of biomedical results should not be underestimated. Misreporting and misinterpretation are pressing problems in the reporting of randomized controlled trials (RCTs). This may be partially related to the statistical significance paradigm used in clinical trials, centered on a P value cutoff of 0.05. Strict adherence to this cutoff may lead clinical researchers to describe results with P values approaching but not reaching the threshold as "almost significant." The question is how phrases expressing nonsignificant results have been reported in RCTs over the past 30 years. To this end, we conducted a quantitative analysis of the English full texts of 567,758 RCTs recorded in PubMed between 1990 and 2020 (81.5% of all published RCTs in PubMed). We determined the presence of 505 predefined phrases denoting results that approach but do not cross the line of formal statistical significance (P < 0.05). We modeled temporal trends in the phrase data with Bayesian linear regression, and evidence for temporal change was obtained through Bayes factor (BF) analysis. In a randomly sampled subset, the associated P values were manually extracted. We identified 61,741 phrases indicating almost significant results in 49,134 RCTs (8.65%; 95% confidence interval (CI): 8.58% to 8.73%). The overall prevalence of these phrases remained stable over time, with the most prevalent being "marginally significant" (in 7,735 RCTs), "all but significant" (7,015), "a nonsignificant trend" (3,442), "failed to reach statistical significance" (2,578), and "a strong trend" (1,700). The strongest evidence for an increasing temporal prevalence was found for "a numerical trend," "a positive trend," "an increasing trend," and "nominally significant."
In contrast, the phrases "all but significant," "approaches statistical significance," "did not quite reach statistical significance," "difference was apparent," "failed to reach statistical significance," and "not quite significant" decreased over time. In a randomly sampled subset of 29,000 phrases, 11,926 corresponding P values were manually identified; 68.1% of these ranged between 0.05 and 0.15 (CI: 67. to 69.0; median 0.06). Our results show that RCT reports regularly contain specific phrases describing marginally nonsignificant results, that is, P values close to but above the dominant 0.05 cutoff. The stable prevalence of these phrases over time indicates that the practice of broadly interpreting P values close to a predefined threshold remains common. To enhance responsible and transparent interpretation of RCT results, researchers, clinicians, reviewers, and editors should reduce the focus on formal statistical significance thresholds, stimulate reporting of P values with corresponding effect sizes and CIs, and focus on the clinical relevance of the differences found in RCTs.
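The reported prevalence interval above can be reproduced with a standard normal-approximation confidence interval for a proportion. A minimal sketch, assuming a Wald-type interval (the abstract does not state which method the authors used):

```python
import math

def proportion_ci(k, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion k/n."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

# RCTs containing "almost significant" phrases among all analyzed RCTs
p, lo, hi = proportion_ci(49134, 567758)
print(f"{p:.2%} (95% CI: {lo:.2%} to {hi:.2%})")  # 8.65% (95% CI: 8.58% to 8.73%)
```

This recovers the reported 8.65% (8.58% to 8.73%) from the raw counts in the abstract.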


Subject(s)
PubMed/standards , Publications/standards , Randomized Controlled Trials as Topic/standards , Research Design/standards , Research Report/standards , Bayes Theorem , Bias , Humans , Linear Models , Outcome Assessment, Health Care/methods , Outcome Assessment, Health Care/standards , Outcome Assessment, Health Care/statistics & numerical data , PubMed/statistics & numerical data , Publications/statistics & numerical data , Randomized Controlled Trials as Topic/statistics & numerical data , Reproducibility of Results
2.
Health Info Libr J ; 38(1): 72-76, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33684264

ABSTRACT

Teaching students how to conduct bibliographic searches in health sciences databases is essential training. One of the challenges librarians face is how to motivate students during classroom learning. In this article, two hospital libraries in Spain used escape rooms as a method of bringing creativity, teamwork, communication, and critical thinking into bibliographic search instruction. Escape rooms are a series of puzzles that must be solved to exit the game. This article explores the methods used for integrating escape rooms into training programmes and evaluates the results. Escape rooms are a useful tool that can be integrated into residents' training to support their instruction in bibliographic searches. This kind of learning builds competences such as logical thinking and deductive reasoning, which help participants make their own decisions and develop social and intellectual skills.


Subject(s)
Information Dissemination/methods , PubMed/standards , Humans , PubMed/instrumentation , PubMed/trends
3.
Health Info Libr J ; 38(2): 113-124, 2021 Jun.
Article in English | MEDLINE | ID: mdl-31837099

ABSTRACT

BACKGROUND: PubMed is one of the most important basic tools for accessing the medical literature. Semantic query expansion using synonyms can improve retrieval efficacy. OBJECTIVE: The objective was to evaluate the performance of three semantic query expansion strategies. METHODS: Queries were built for forty MeSH descriptors using three semantic expansion strategies (MeSH synonyms, UMLS mappings, and mappings created by the CISMeF team), then sent to PubMed. To evaluate expansion performance, the first twenty citations retrieved for each query were selected, and their relevance was judged by three independent evaluators based on title and abstract. RESULTS: Queries built with the UMLS expansion provided new citations with a slightly higher mean precision (74.19%) than with the CISMeF expansion (70.28%), although the difference was not significant. Inter-rater agreement was 0.28. Results varied greatly depending on the descriptor selected. DISCUSSION: The number of citations retrieved by the three strategies and their precision varied greatly according to the descriptor. This heterogeneity could be explained by the quality of the synonyms. Optimal use of these different expansions would involve various combinations of UMLS and CISMeF intersections or unions. CONCLUSION: Information retrieval tools should propose different semantic expansions depending on the descriptor and the search objectives.
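Semantic query expansion of the kind evaluated above boils down to OR-ing a descriptor with its synonyms in the query sent to PubMed. A simplified sketch (the field tag and quoting conventions follow PubMed's query syntax; the synonym lists themselves would come from MeSH, the UMLS, or CISMeF):

```python
def expand_query(descriptor, synonyms, field="Title/Abstract"):
    """Build a PubMed query that ORs a descriptor with its synonyms.

    Multi-word phrases are quoted so PubMed treats them as phrases
    rather than as separate ANDed terms.
    """
    terms = [descriptor] + list(synonyms)
    quoted = [f'"{t}"[{field}]' if " " in t else f"{t}[{field}]" for t in terms]
    return "(" + " OR ".join(quoted) + ")"

print(expand_query("myocardial infarction", ["heart attack", "MI"]))
# ("myocardial infarction"[Title/Abstract] OR "heart attack"[Title/Abstract] OR MI[Title/Abstract])
```

The resulting string can be submitted via the PubMed web interface or the NCBI E-utilities `esearch` endpoint.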


Subject(s)
Appetitive Behavior , PubMed/standards , Humans , Information Storage and Retrieval/methods , Program Evaluation/methods , PubMed/trends , Semantics
4.
BMC Med Res Methodol ; 20(1): 57, 2020 03 11.
Article in English | MEDLINE | ID: mdl-32160871

ABSTRACT

BACKGROUND: The aims of this study were to assess whether prior registration of a systematic review (SR) is associated with improved reporting quality and whether SR registration reduces outcome reporting bias. METHODS: We searched PubMed for SRs in dentistry indexed in 2017. Data related to SR registration and reporting characteristics were extracted. We analyzed whether the reporting of 21 characteristics of the included SRs was associated with prospective registration of a protocol or reporting of a previously established protocol. The association between prospective protocol registration, reporting of funding, and number of included studies versus outcome reporting bias was tested via multivariable logistic regression. RESULTS: We included 495 SRs. One hundred sixty-two (32.7%) SRs reported registering the SR protocol or working from a previously established protocol. Thirteen reporting characteristics were reported significantly more often in registered than in unregistered SRs. Assessment of publication bias and reporting of the number of participants showed the largest effects favoring registration (RR 1.59, 95% CI 1.19-2.12 and RR 1.58, 95% CI 1.31-1.92, respectively). Moreover, registration was not significantly linked with the articles' reporting of statistical significance (OR 0.96, 95% CI 0.49-1.90). CONCLUSION: Previously registering a protocol has a positive influence on the final reporting quality of SRs in dentistry. However, we did not observe an association between protocol registration and a reduction in outcome reporting bias.
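Risk ratios with confidence intervals like those reported above are computed from 2x2 counts with the standard log-normal approximation. A sketch with hypothetical counts (the abstract reports only the ratios, not the underlying cell counts):

```python
import math

def risk_ratio(a, n1, b, n2, z=1.96):
    """Risk ratio of event proportions a/n1 vs b/n2 with a 95% CI (log method)."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Hypothetical counts: 120/162 registered vs 155/333 unregistered SRs reporting an item
rr, lo, hi = risk_ratio(120, 162, 155, 333)
```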


Subject(s)
Dentistry/standards , PubMed/standards , Research Report/standards , Systematic Reviews as Topic/standards , Humans , Logistic Models , Multivariate Analysis , Outcome Assessment, Health Care , Prospective Studies , Publication Bias , Reference Standards , Research Design/standards
5.
J Med Internet Res ; 22(1): e16816, 2020 01 23.
Article in English | MEDLINE | ID: mdl-32012074

ABSTRACT

BACKGROUND: Natural language processing (NLP) is a long-established field in computer science, but its application in medical research has faced many challenges. With the extensive digitalization of medical information globally and the increasing importance of understanding and mining big data in the medical field, NLP is becoming more crucial. OBJECTIVE: The goal of the research was to perform a systematic review on the use of NLP in medical research with the aim of understanding the global progress on NLP research outcomes, content, methods, and study groups involved. METHODS: A systematic review was conducted using the PubMed database as a search platform. All published studies on the application of NLP in medicine (except biomedicine) during the 20 years between 1999 and 2018 were retrieved. The data obtained from these published studies were cleaned and structured. Excel (Microsoft Corp) and VOSviewer (Nees Jan van Eck and Ludo Waltman) were used to perform bibliometric analysis of publication trends, author orders, countries, institutions, collaboration relationships, research hot spots, diseases studied, and research methods. RESULTS: A total of 3498 articles were obtained during initial screening, and 2336 articles were found to meet the study criteria after manual screening. The number of publications increased every year, with significant growth after 2012 (annual publications ranged from 148 to a maximum of 302). The United States has occupied the leading position since the inception of the field, with the largest number of articles published. The United States contributed to 63.01% (1472/2336) of all publications, followed by France (5.44%, 127/2336) and the United Kingdom (3.51%, 82/2336). Hongfang Liu published the largest number of articles overall (70), while Stéphane Meystre (17) and Hua Xu (33) published the most articles as first author and corresponding author, respectively.
Among first authors' affiliated institutions, Columbia University published the largest number of articles, accounting for 4.54% (106/2336) of the total. Specifically, approximately one-fifth (17.68%, 413/2336) of the articles involved research on specific diseases, and the subject areas primarily focused on mental illness (16.46%, 68/413), breast cancer (5.81%, 24/413), and pneumonia (4.12%, 17/413). CONCLUSIONS: NLP is in a period of robust development in the medical field, with an average of approximately 100 publications annually. Electronic medical records were the most used research materials, but social media such as Twitter have become important research materials since 2015. Cancer (24.94%, 103/413) was the most common subject area in NLP-assisted medical research on diseases, with breast cancers (23.30%, 24/103) and lung cancers (14.56%, 15/103) accounting for the highest proportions of studies. Columbia University and the researchers trained there were the most active and prolific research forces on NLP in the medical field.
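The percentage shares reported throughout the results (e.g., 63.01% for the United States) are simple proportions of record counts. A minimal sketch of that tabulation step, assuming a hypothetical record schema with a `country` field:

```python
from collections import Counter

def country_shares(records):
    """Percentage share of publications per country, rounded to two decimals."""
    counts = Counter(r["country"] for r in records)
    total = sum(counts.values())
    return {c: round(100 * n / total, 2) for c, n in counts.items()}
```

For example, 1472 of 2336 records gives `round(100 * 1472 / 2336, 2)`, i.e., the 63.01% quoted above.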


Subject(s)
Bibliometrics , Natural Language Processing , Precision Medicine/methods , PubMed/standards , Humans , Time Factors
6.
J Med Internet Res ; 22(6): e18457, 2020 06 16.
Article in English | MEDLINE | ID: mdl-32543443

ABSTRACT

BACKGROUND: Studies using Taiwan's National Health Insurance (NHI) claims data expanded rapidly in both quantity and quality during the first decade following the first study published in 2000. However, some of these studies were criticized for being merely data-dredging studies rather than hypothesis-driven. In addition, the use of claims data without explicit authorization from individual patients has incurred litigation. OBJECTIVE: This study aimed to investigate whether the research output during the second decade after the release of the NHI claims database continued to grow, to explore how the emergence of open access mega journals (OAMJs) and litigation against the use of this database affected the research topics and publication volume, and to discuss the underlying reasons. METHODS: PubMed was used to locate publications based on NHI claims data between 1996 and 2017. Concept extraction using MetaMap was employed to mine research topics from article titles. Research trends were analyzed from various aspects, including publication volume, journals, research topics and types, and cooperation between authors. RESULTS: A total of 4473 articles were identified. A rapid growth in publications was witnessed from 2000 to 2015, followed by a plateau. Diabetes, stroke, and dementia were the top 3 most popular research topics, whereas statin therapy, metformin, and Chinese herbal medicine were the most investigated interventions. Approximately one-third of the articles were published in open access journals. Studies with two or more medical conditions, but without any intervention, were the most common study type. Studies of this type tended to be contributed by prolific authors and published in OAMJs. CONCLUSIONS: The growth in publication volume during the second decade after the release of the NHI claims database was different from that during the first decade.
OAMJs appeared to provide fertile soil for the rapid growth of research based on NHI claims data, in particular for studies with two or more medical conditions in the article title. A halt in the growth of publication volume was observed after the use of NHI claims data for research purposes was restricted in response to legal controversy. More efforts are needed to improve the impact of knowledge gained from NHI claims data on medical decisions and policy making.


Subject(s)
Bibliometrics , Data Mining/standards , National Health Programs/standards , PubMed/standards , Databases, Factual , Humans , Taiwan
7.
BMC Med Res Methodol ; 19(1): 132, 2019 06 28.
Article in English | MEDLINE | ID: mdl-31253092

ABSTRACT

BACKGROUND: Stringent requirements exist regarding the transparency of the study selection process and the reliability of results. A 2-step selection process is generally recommended; this is conducted by 2 reviewers independently of each other (conventional double-screening). However, the approach is resource intensive, which can be a problem, as systematic reviews generally need to be completed within a defined period with a limited budget. The aim of the following methodological systematic review was to analyse the available evidence on whether single screening is equivalent to double screening in the screening process conducted in systematic reviews. METHODS: We searched Medline, PubMed and the Cochrane Methodology Register (last search 10/2018). We also used supplementary search techniques and sources ("similar articles" function in PubMed, conference abstracts and reference lists). We included all evaluations comparing single with double screening. Data were summarized in a structured, narrative way. RESULTS: The 4 included evaluations investigated a total of 23 single screenings (12 sets for screening involving 9 reviewers). The median proportion of missed studies was 5% (range 0 to 58%). The median proportion of missed studies was 3% for the 6 experienced reviewers (range: 0 to 21%) and 13% for the 3 reviewers with less experience (range: 0 to 58%). The impact of missing studies on the findings of meta-analyses had been reported in 2 evaluations for 7 single screenings including a total of 18,148 references. In 3 of these 7 single screenings - all conducted by the same reviewer (with less experience) - the findings would have changed substantially. The remaining 4 of these 7 screenings were conducted by experienced reviewers, and the missing studies had no impact or a negligible impact on the findings of the meta-analyses.
CONCLUSIONS: Single screening of the titles and abstracts of studies retrieved in bibliographic searches is not equivalent to double screening, as substantially more studies are missed. However, in our opinion such an approach could still represent an appropriate methodological shortcut in rapid reviews, as long as it is conducted by an experienced reviewer. Further research on single screening is required, for instance, regarding factors influencing the number of studies missed.
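The key metric in the evaluation above, the proportion of relevant studies a single screener misses relative to the double-screened reference set, is straightforward to compute. A sketch with hypothetical screening data (the abstract does not publish the per-reviewer sets):

```python
from statistics import median

def missed_proportion(reference_set, single_screen_hits):
    """Share of relevant studies a single screener missed, vs. double screening."""
    ref = set(reference_set)
    return len(ref - set(single_screen_hits)) / len(ref)

# Hypothetical screenings by three reviewers against a 20-study reference set
reference = range(20)
screens = [set(range(20)) - {1}, set(range(20)) - {2, 3}, set(range(20))]
proportions = [missed_proportion(reference, s) for s in screens]
print(median(proportions))  # 0.05
```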


Subject(s)
Abstracting and Indexing/standards , Information Storage and Retrieval/standards , Information Systems/standards , Systematic Reviews as Topic , Abstracting and Indexing/methods , Abstracting and Indexing/statistics & numerical data , Humans , Information Storage and Retrieval/methods , Information Systems/statistics & numerical data , PubMed/standards , PubMed/statistics & numerical data , Publications/standards , Publications/statistics & numerical data
8.
J Med Libr Assoc ; 107(1): 57-61, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30598649

ABSTRACT

OBJECTIVES: The number of predatory journals is increasing in the scholarly communication realm. These journals use questionable business practices, minimal or no peer review, or limited editorial oversight and, thus, publish articles below a minimally accepted standard of quality. These publications have the potential to alter the results of knowledge syntheses. The objective of this study was to determine the degree to which articles published by a major predatory publisher in the health and biomedical sciences are cited in systematic reviews. METHODS: The authors downloaded citations of articles published by a known predatory publisher. Using forward reference searching in Google Scholar, we examined whether these publications were cited in systematic reviews. RESULTS: The selected predatory publisher published 459 journals in the health and biomedical sciences. Sixty-two of these journal titles had published a total of 120 articles that were cited by at least 1 systematic review, with a total of 157 systematic reviews citing an article from 1 of these predatory journals. DISCUSSION: Systematic review authors should be vigilant for predatory journals that can appear to be legitimate. To reduce the risk of including articles from predatory journals in knowledge syntheses, systematic reviewers should use a checklist to ensure a measure of quality control for included papers and be aware that Google Scholar and PubMed do not provide the same level of quality control as other bibliographic databases.
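A first-pass quality-control check of the kind recommended above can be automated by matching the journal titles of included citations against a list of known predatory journals. A naive sketch using exact matching after normalization; a real workflow would need fuzzier matching and a curated, up-to-date list:

```python
def flag_predatory_citations(citations, predatory_titles):
    """Return citations whose journal title appears in a predatory-journal list.

    citations: list of dicts with a "journal" key (hypothetical schema).
    """
    norm = {t.casefold().strip() for t in predatory_titles}
    return [c for c in citations if c["journal"].casefold().strip() in norm]
```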


Subject(s)
Manuscripts as Topic , Open Access Publishing/standards , Peer Review/standards , Periodicals as Topic/standards , PubMed/standards , Quality Control , Research Report/standards , Animals , Bibliometrics , Humans
9.
J Med Libr Assoc ; 107(1): 16-29, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30598645

ABSTRACT

OBJECTIVE: PubMed's provision of MEDLINE and other National Library of Medicine (NLM) resources has made it one of the most widely accessible biomedical resources globally. The growth of PubMed Central (PMC) and public access mandates have affected PubMed's composition. The authors tested recent claims that content in PMC is of low quality and affects PubMed's reliability, while exploring PubMed's role in the current scholarly communications landscape. METHODS: The percentage of MEDLINE-indexed records was assessed in PubMed and various subsets of records from PMC. Data were retrieved via the National Center for Biotechnology Information (NCBI) interface, and follow-up interviews with a PMC external reviewer and staff at NLM were conducted. RESULTS: Almost all PubMed content (91%) is indexed in MEDLINE; however, since the launch of PMC, the percentage of PubMed records indexed in MEDLINE has slowly decreased. This trend is the result of an increase in PMC content from journals that are not indexed in MEDLINE and not a result of author manuscripts submitted to PMC in compliance with public access policies. Author manuscripts in PMC continue to be published in MEDLINE-indexed journals at a high rate (85%). The interviewees clarified the difference between the sources, with MEDLINE serving as a highly selective index of journals in biomedical literature and PMC serving as an open archive of quality biomedical and life sciences literature and a repository of funded research. CONCLUSION: The differing scopes of PMC and MEDLINE will likely continue to affect their overlap; however, quality control exists in the maintenance and facilitation of both resources, and funding from major grantors is a major component of quality assurance in PMC.


Subject(s)
Abstracting and Indexing/standards , Information Storage and Retrieval/standards , MEDLINE/standards , Periodicals as Topic/standards , PubMed/standards , Scholarly Communication/standards , Humans , National Library of Medicine (U.S.) , Reproducibility of Results , United States
10.
BMC Bioinformatics ; 19(1): 541, 2018 Dec 22.
Article in English | MEDLINE | ID: mdl-30577747

ABSTRACT

BACKGROUND: Biomedical literature is expanding rapidly, and tools that help locate information of interest are needed. To this end, a multitude of different approaches for classifying sentences in biomedical publications according to their coarse semantic and rhetoric categories (e.g., Background, Methods, Results, Conclusions) have been devised, with recent state-of-the-art results reported for a complex deep learning model. Recent evidence showed that shallow and wide neural models such as fastText can provide results that are competitive or superior to complex deep learning models while requiring drastically lower training times and having better scalability. We analyze the efficacy of the fastText model in the classification of biomedical sentences in the PubMed 200k RCT benchmark, and introduce a simple pre-processing step that enables the application of fastText to sentence sequences. Furthermore, we explore the utility of two unsupervised pre-training approaches in scenarios where labeled training data are limited. RESULTS: Our fastText-based methodology yields a state-of-the-art F1 score of 0.917 on the PubMed 200k benchmark when sentence ordering is taken into account, with a training time of only 73 s on standard hardware. Applying fastText to single sentences, without taking sentence ordering into account, yielded an F1 score of 0.852 (training time 13 s). Unsupervised pre-training of N-gram vectors greatly improved the results for small training set sizes, with the F1 score increasing from 0.21 to 0.74 when training on only 1000 randomly picked sentences without taking sentence ordering into account. CONCLUSIONS: Because of its ease of use and performance, fastText should be among the first choices of tools when tackling biomedical text classification problems with large corpora. Unsupervised pre-training of N-gram vectors on domain-specific corpora also makes it possible to apply fastText when labeled training data are limited.
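Because fastText only sees a flat bag of n-grams, applying it to sentence *sequences* requires folding ordering information into the token stream. The abstract does not spell out its pre-processing step, so the following is one plausible encoding, injecting positional tokens and the previous sentence's label as hypothetical extra features:

```python
def encode_sentence_with_context(sentences, labels, i):
    """Encode sentence i of an abstract for a fastText-style classifier.

    Hypothetical sketch: prepend a fastText label, a positional token,
    and the previous sentence's label so ordering survives bag-of-n-grams.
    """
    tokens = sentences[i].lower().split()
    pos_token = f"__pos_{i + 1}_of_{len(sentences)}__"
    prev_label = f"__prev_{labels[i - 1]}__" if i > 0 else "__prev_START__"
    return f"__label__{labels[i]} {pos_token} {prev_label} " + " ".join(tokens)
```

Each encoded line could then be written to a training file in fastText's supervised-learning format.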


Subject(s)
Biomedical Research , Natural Language Processing , Neural Networks, Computer , PubMed/standards , Unified Medical Language System , Humans , Language
11.
BMC Med Res Methodol ; 18(1): 109, 2018 10 19.
Article in English | MEDLINE | ID: mdl-30340533

ABSTRACT

BACKGROUND: Sexual desire is one of the domains of sexual function, with multiple dimensions, and concerns about it commonly affect men and women around the world. Classically, its assessment has been conducted through self-report tools; however, a key issue is the evidence level of these questionnaires and their validity. Therefore, a systematic review of the available questionnaires is highly relevant, since it can show their psychometric properties and evidence levels. METHOD: A systematic review was carried out in the PubMed, EMBASE, PsycINFO, Science Direct, and Web of Science databases. The search strategy combined descriptors and keywords derived from the research question; original studies were included with no limit on publication date, in Portuguese, English, or Spanish. Two reviewers independently selected articles by abstract and full text and analyzed the studies. The methodological quality of the instruments was evaluated with the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) checklist. RESULTS: The search resulted in 1203 articles, of which 15 were included in the review. It identified 10 instruments originally developed in the English language. Unsatisfactory results on methodological quality were evidenced in cultural adaptation studies, with no description of the steps of the processes and inadequate techniques and parameters of model adequacy. Principal Component Analysis with Varimax rotation predominated in the studies. CONCLUSIONS: The limitation of the techniques applied in the validation process of the reviewed instruments was evident. The number of adaptations conducted and the contexts to which the instruments were applied were also limited, making it impossible to reach a better understanding of how the instruments function.
In future studies, the use of robust techniques can ensure the quality of the psychometric properties and the accuracy and stability of instruments. A detailed description of procedures and results in validation studies may facilitate the selection and use of instruments in the academic and/or clinical settings. SYSTEMATIC REVIEW REGISTRATION: PROSPERO CRD42018085706.


Subject(s)
Psychometrics/methods , Self Report , Sexual Behavior/physiology , Surveys and Questionnaires , Data Accuracy , Databases, Bibliographic/standards , Databases, Bibliographic/statistics & numerical data , Female , Humans , Male , PubMed/standards , PubMed/statistics & numerical data , Sexual Behavior/psychology
12.
BMC Med Res Methodol ; 18(1): 171, 2018 12 18.
Article in English | MEDLINE | ID: mdl-30563471

ABSTRACT

BACKGROUND: Little evidence is available on searches for non-randomized studies (NRS) in bibliographic databases within the framework of systematic reviews. For instance, it is currently unclear whether, when searching for NRS, effective restriction of the search strategy to certain study types is possible. The following challenges need to be considered: 1) For non-randomized controlled trials (NRCTs): whether they can be identified by established filters for randomized controlled trials (RCTs). 2) For other NRS types (such as cohort studies): whether study filters exist for each study type and, if so, which performance measures they have. The aims of the present analysis were to identify and validate existing NRS filters in MEDLINE as well as to evaluate established RCT filters using a set of MEDLINE citations. METHODS: Our analysis is a retrospective analysis of study filters based on MEDLINE citations of NRS from Cochrane reviews. In a first step we identified existing NRS filters. For the generation of the reference set, we screened Cochrane reviews evaluating NRS, which covered a broad range of study types. The citations of the studies included in the Cochrane reviews were identified via the reviews' bibliographies and the corresponding PubMed identification numbers (PMIDs) were extracted from PubMed. Random samples comprising up to 200 citations (i.e. 200 PMIDs) each were created for each study type to generate the test sets. RESULTS: A total of 271 Cochrane reviews from 41 different Cochrane groups were eligible for data extraction. We identified 14 NRS filters published since 2001. The study filters generated between 660,000 and 9.5 million hits in MEDLINE. Most filters covered several study types. The reference set included 2890 publications classified as NRS for the generation of the test sets. Twelve test sets were generated (one for each study type), of which 8 included 200 citations each. 
None of the study filters achieved sufficient sensitivity (≥ 92%) for all of the study types targeted. CONCLUSIONS: The performance of current NRS filters is insufficient for effective use in daily practice. It is therefore necessary to develop new strategies (e.g. new NRS filters in combination with other search techniques). The challenges related to NRS should be taken into account.
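The sensitivity assessment above amounts to checking what share of a reference set of citations each filter retrieves, against a fixed threshold. A minimal sketch using PMID sets (the 92% target is taken from the abstract; the PMIDs here are placeholders):

```python
def filter_sensitivity(filter_hits, reference_pmids):
    """Sensitivity of a study filter: share of reference citations it retrieves."""
    ref = set(reference_pmids)
    return len(ref & set(filter_hits)) / len(ref)

def meets_target(sensitivity, target=0.92):
    # The analysis treated >= 92% sensitivity as sufficient.
    return sensitivity >= target
```

For example, a filter retrieving 3 of 4 reference citations has sensitivity 0.75 and fails the target.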


Subject(s)
Databases, Bibliographic/statistics & numerical data , Information Storage and Retrieval/statistics & numerical data , Non-Randomized Controlled Trials as Topic/statistics & numerical data , Databases, Bibliographic/standards , Humans , Information Storage and Retrieval/methods , Information Storage and Retrieval/standards , MEDLINE/standards , MEDLINE/statistics & numerical data , PubMed/standards , PubMed/statistics & numerical data , Reproducibility of Results , Research Design/standards , Retrospective Studies , Review Literature as Topic
13.
J Med Internet Res ; 20(1): e26, 2018 01 22.
Article in English | MEDLINE | ID: mdl-29358159

ABSTRACT

BACKGROUND: Many health care systems now allow patients to access their electronic health record (EHR) notes online through patient portals. Medical jargon in EHR notes can confuse patients, which may interfere with potential benefits of patient access to EHR notes. OBJECTIVE: The aim of this study was to develop and evaluate the usability and content quality of NoteAid, a Web-based natural language processing system that links medical terms in EHR notes to lay definitions, that is, definitions easily understood by lay people. METHODS: NoteAid incorporates two core components: CoDeMed, a lexical resource of lay definitions for medical terms, and MedLink, a computational unit that links medical terms to lay definitions. We developed innovative computational methods, including an adapted distant supervision algorithm to prioritize medical terms important for EHR comprehension to facilitate the effort of building CoDeMed. Ten physician domain experts evaluated the user interface and content quality of NoteAid. The evaluation protocol included a cognitive walkthrough session and a postsession questionnaire. Physician feedback sessions were audio-recorded. We used standard content analysis methods to analyze qualitative data from these sessions. RESULTS: Physician feedback was mixed. Positive feedback on NoteAid included (1) Easy to use, (2) Good visual display, (3) Satisfactory system speed, and (4) Adequate lay definitions. Opportunities for improvement arising from evaluation sessions and feedback included (1) improving the display of definitions for partially matched terms, (2) including more medical terms in CoDeMed, (3) improving the handling of terms whose definitions vary depending on different contexts, and (4) standardizing the scope of definitions for medicines. On the basis of these results, we have improved NoteAid's user interface and a number of definitions, and added 4502 more definitions in CoDeMed. 
CONCLUSIONS: Physician evaluation yielded useful feedback for content validation and refinement of this innovative tool that has the potential to improve patient EHR comprehension and experience using patient portals. Future ongoing work will develop algorithms to handle ambiguous medical terms and test and evaluate NoteAid with patients.
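At its core, linking medical jargon to lay definitions, as MedLink does against the CoDeMed lexicon, is a dictionary-lookup problem over the note text. A greatly simplified sketch using greedy longest-match over a hypothetical lexicon (NoteAid's actual matching and term-prioritization algorithms are more sophisticated):

```python
def link_terms(note, lay_definitions):
    """Greedy longest-match linking of terms in a note to lay definitions.

    lay_definitions: dict mapping lowercase term -> lay definition
    (a hypothetical stand-in for the CoDeMed lexicon).
    """
    words = note.split()
    max_len = max((len(k.split()) for k in lay_definitions), default=1)
    matches, i = [], 0
    while i < len(words):
        for j in range(min(len(words), i + max_len), i, -1):
            phrase = " ".join(words[i:j]).lower()
            if phrase in lay_definitions:
                matches.append((phrase, lay_definitions[phrase]))
                i = j
                break
        else:
            i += 1
    return matches
```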


Subject(s)
Electronic Health Records/standards , PubMed/standards , Unified Medical Language System/standards , Humans , Natural Language Processing , Physicians
14.
J Med Internet Res ; 20(6): e10281, 2018 06 25.
Article in English | MEDLINE | ID: mdl-29941415

ABSTRACT

BACKGROUND: A major barrier to the practice of evidence-based medicine is efficiently finding scientifically sound studies on a given clinical topic. OBJECTIVE: To investigate a deep learning approach to retrieve scientifically sound treatment studies from the biomedical literature. METHODS: We trained a Convolutional Neural Network using a noisy dataset of 403,216 PubMed citations with title and abstract as features. The deep learning model was compared with state-of-the-art search filters, such as PubMed's Clinical Query Broad treatment filter, McMaster's textword search strategy (no Medical Subject Heading, MeSH, terms), and Clinical Query Balanced treatment filter. A previously annotated dataset (Clinical Hedges) was used as the gold standard. RESULTS: The deep learning model obtained significantly lower recall than the Clinical Queries Broad treatment filter (96.9% vs 98.4%; P<.001); and equivalent recall to McMaster's textword search (96.9% vs 97.1%; P=.57) and Clinical Queries Balanced filter (96.9% vs 97.0%; P=.63). Deep learning obtained significantly higher precision than the Clinical Queries Broad filter (34.6% vs 22.4%; P<.001) and McMaster's textword search (34.6% vs 11.8%; P<.001), but was significantly lower than the Clinical Queries Balanced filter (34.6% vs 40.9%; P<.001). CONCLUSIONS: Deep learning performed well compared to state-of-the-art search filters, especially when citations were not indexed. Unlike previous machine learning approaches, the proposed deep learning model does not require feature engineering, or time-sensitive or proprietary features, such as MeSH terms and bibliometrics. Deep learning is a promising approach to identifying reports of scientifically rigorous clinical research. Further work is needed to optimize the deep learning model and to assess generalizability to other areas, such as diagnosis, etiology, and prognosis.
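The precision/recall trade-offs reported above can be summarized with an F1 score, which the abstract does not report but which follows directly from its numbers:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported operating points from the comparison
deep_learning = f1(0.346, 0.969)  # ~0.51
cq_balanced = f1(0.409, 0.970)    # ~0.58
```

On this summary measure the Clinical Queries Balanced filter edges out the deep learning model, consistent with its higher precision at near-identical recall.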


Subject(s)
Deep Learning/standards , Information Storage and Retrieval/methods , Neural Networks, Computer , PubMed/standards , Humans
15.
BMC Med Inform Decis Mak ; 18(1): 27, 2018 05 08.
Article in English | MEDLINE | ID: mdl-29739392

ABSTRACT

BACKGROUND: Although evidence-based practice in healthcare has been facilitated by Internet access through wireless mobile devices, research on the effectiveness of clinical decision support for clinicians at the point of care is lacking. This study examined how evidence, delivered as abstracts and bottom-line summaries accessed through PubMed4Hh on mobile devices, affected clinicians' decision making at the point of care. METHODS: Three iterative steps were taken to evaluate the usefulness of the PubMed4Hh tools at the NIH Clinical Center. First, feasibility testing was conducted using data collected by a librarian. Next, usability testing was carried out by a postdoctoral research fellow shadowing clinicians during rounds for one month in the inpatient setting. Then, a pilot study was conducted from February 2016 to January 2017, with clinicians using a mobile version of PubMed4Hh. Invitations were sent via e-mail lists to clinicians (physicians, physician assistants, and nurse practitioners), along with periodic reminders. Participants rated the usefulness of retrieved bottom-line summaries and abstracts on a 7-point Likert scale and indicated the location of use (office, rounds, etc.). RESULTS: Of the 166 responses collected in the feasibility phase, more than half of the questions (57%, n = 94) were answerable both by the librarian using various resources and by the postdoctoral research fellow using PubMed4Hh. Sixty-six questions were collected during usability testing. More than half of these questions (60.6%) concerned medication or treatment, while 21% concerned diagnosis and 12% were specific to disease entities. During the pilot study, participants reviewed 34 abstracts and 40 bottom-line summaries. The mean usefulness scores of the abstracts were higher (95% CI [6.12, 6.64]) than those of the bottom-line summaries (95% CI [5.25, 6.10]). The most frequent reason given was that the evidence confirmed a current or tentative diagnosis or treatment plan. The bottom-line summaries were used more in the office (79.3%), and the abstracts were used more at the point of care (51.9%). CONCLUSIONS: Clinicians reported that retrieving relevant health information from the biomedical literature using PubMed4Hh was useful both at the point of care and in the office.
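The confidence-interval comparison reported above can be reproduced on any set of ratings. A minimal sketch, assuming a normal approximation for the mean and using hypothetical 7-point Likert ratings (not the study's raw data):

```python
import math
import statistics

def mean_ci95(ratings):
    """Normal-approximation 95% confidence interval for the mean of Likert ratings."""
    n = len(ratings)
    m = statistics.mean(ratings)
    se = statistics.stdev(ratings) / math.sqrt(n)  # standard error of the mean
    half = 1.96 * se                               # z-value for a two-sided 95% CI
    return m, (m - half, m + half)

# Hypothetical 7-point usefulness ratings, for illustration only
ratings = [7, 6, 6, 7, 5, 7, 6, 6, 7, 6]
m, (lo, hi) = mean_ci95(ratings)
```

With small samples a t-distribution critical value would be more appropriate than 1.96; the normal approximation keeps the sketch short.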


Subject(s)
Attitude of Health Personnel , Clinical Decision-Making , Medical Staff, Hospital , Mobile Applications/standards , Nursing Staff, Hospital , Point-of-Care Systems , PubMed/standards , Adult , Feasibility Studies , Female , Humans , Male , Middle Aged , National Institutes of Health (U.S.) , Pilot Projects , United States
17.
BMC Med Inform Decis Mak ; 17(1): 94, 2017 Jul 03.
Article in English | MEDLINE | ID: mdl-28673304

ABSTRACT

BACKGROUND: MEDLINE is the most widely used medical bibliographic database in the world. Most of its citations are in English, which can be an obstacle for some researchers to access the information the database contains. We created a multilingual query builder to facilitate access to the PubMed subset using a language other than English. The aim of our study was to assess the impact of this multilingual query builder on the quality of PubMed queries for non-native English-speaking physicians and medical researchers. METHODS: A randomised controlled study was conducted among French-speaking general practice residents. We designed a multilingual query builder to facilitate information retrieval, based on available MeSH translations and providing users with both an interface and a controlled vocabulary in their own language. Participating residents were randomly allocated either the French or the English version of the query builder and asked to translate 12 short medical questions into MeSH queries. The main outcome was the quality of the query. Two librarians, blind to the study arm, independently evaluated each query using a modified published classification that differentiated eight types of errors. RESULTS: Twenty residents used the French version of the query builder and 22 used the English version; 492 queries were analysed. There were significantly more perfect queries in the French group than in the English group (37.9% vs. 17.9%; p < 0.01). Members of the English group took significantly longer than members of the French group to build each query (194 s vs. 128 s; p < 0.01). CONCLUSIONS: This multilingual query builder is an effective tool to improve the quality of PubMed queries, in particular for researchers whose first language is not English.
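The core mechanic such a query builder needs, mapping user-language terms to MeSH headings before composing the PubMed query string, can be sketched as follows. The three-entry French-to-MeSH dictionary here is a stand-in; a real builder would draw on the full official MeSH translations mentioned in the methods.

```python
# Toy French-to-MeSH mapping; a real builder would load the official MeSH translations.
FR_TO_MESH = {
    "hypertension artérielle": "Hypertension",
    "diabète de type 2": "Diabetes Mellitus, Type 2",
    "grossesse": "Pregnancy",
}

def build_query(terms_fr):
    """Translate user-language terms to MeSH headings and AND them into a PubMed query."""
    parts = []
    for term in terms_fr:
        mesh = FR_TO_MESH.get(term.lower())
        if mesh is None:
            raise KeyError(f"no MeSH translation for {term!r}")
        parts.append(f'"{mesh}"[MeSH Terms]')
    return " AND ".join(parts)

query = build_query(["Diabète de type 2", "Grossesse"])
```

The `[MeSH Terms]` field tag is standard PubMed syntax; the controlled vocabulary shields the user from having to know the English headings, which is exactly the barrier the study targets.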


Subject(s)
Information Storage and Retrieval/standards , Multilingualism , PubMed/standards , Humans , Language , Librarians , Translating
18.
J Med Libr Assoc ; 104(2): 138-42, 2016 Apr.
Article in English | MEDLINE | ID: mdl-27076801

ABSTRACT

OBJECTIVE: The authors sought to determine whether unexpected gaps existed in Scopus's author affiliation indexing of publications written by University of Nebraska Medical Center or Nebraska Medicine (UNMC/NM) authors during 2014. METHODS: First, we compared Scopus affiliation identifier search results to PubMed affiliation keyword search results. Then, we searched Scopus using affiliation keywords (UNMC, etc.) and compared the results to the PubMed affiliation keyword and Scopus affiliation identifier searches. RESULTS: We found that Scopus's records for approximately 7% of UNMC/NM authors' publications lacked appropriate UNMC/NM author affiliation identifiers, and many journals' publishers were supplying incomplete author affiliation information to PubMed. CONCLUSIONS: Institutions relying on Scopus to track their impact should determine whether Scopus's affiliation identifiers will, in fact, identify all articles published by their authors and investigators.
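The gap analysis the authors describe amounts to a set difference between the two search result sets. A minimal sketch with hypothetical PMIDs (a real comparison would use exported search results from both databases):

```python
# Hypothetical PMID sets standing in for the two searches described above
scopus_by_affil_id = {"1001", "1002", "1003", "1005"}          # Scopus affiliation-identifier search
pubmed_by_keyword = {"1001", "1002", "1003", "1004", "1005", "1006"}  # PubMed affiliation-keyword search

# Publications the affiliation identifier fails to capture
missing_from_scopus = pubmed_by_keyword - scopus_by_affil_id
gap_rate = len(missing_from_scopus) / len(pubmed_by_keyword)
```

On these toy sets the gap rate is one third; the study found roughly 7% of UNMC/NM publications fell into this gap.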


Subject(s)
Abstracting and Indexing/methods , Databases, Bibliographic/standards , Information Storage and Retrieval/methods , PubMed/standards , Bibliometrics , Humans
20.
Anaesthesist ; 63(4): 287-93, 2014 Apr.
Article in German | MEDLINE | ID: mdl-24718414

ABSTRACT

AIM: This study assessed the publication performance of university departments of anesthesiology in Austria, Germany and Switzerland. The number of publications, original articles, impact factors and citations were evaluated. MATERIAL AND METHODS: A search was performed in PubMed to identify publications related to anesthesiology from 2001 to 2010. All articles from anesthesiology journals listed in the fields of anesthesia/pain therapy, critical care and emergency medicine by the Journal Citation Reports 2013 in Thomson Reuters ISI Web of Knowledge were included. Articles from non-anesthesiology journals in which the stem of the word anesthesia (anes*, anaes*, anäst*, anast*) appears in the affiliation field of PubMed were included as well. The time periods 2001-2005 and 2006-2010 were compared. Articles were allocated to university departments in Austria, Germany and Switzerland via the affiliation field. RESULTS: A total of 45 university departments in Austria, Germany and Switzerland and 125,979 publications from 2,863 journals (65 anesthesiology journals, 2,798 non-anesthesiology journals) were analyzed. Of the publications, 23% could not be allocated to a given university department of anesthesiology. In the observation period, the university department of anesthesiology in Berlin achieved the most publications (n = 479) and impact points (1,384), whereas Vienna accumulated the most original articles (n = 156). Austria had the most publications per million inhabitants in 2006-2010 (n = 50), followed by Switzerland (n = 49) and Germany (n = 35). The number of publications over the observation period decreased in Germany (0.5%), Austria (7%) and Switzerland (8%). Tables 2 and 4-8 of this article are available at Springer Link under Supplemental. CONCLUSIONS: The research performance varied among the university departments of anesthesiology in Germany, Austria and Switzerland, whereby larger university departments, such as Berlin or Vienna, published the most. Publication output in Germany, Austria and Switzerland has decreased. Data processing in PubMed should be improved.
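The affiliation word-stem strategy described in the methods (anes*, anaes*, anäst*, anast*) can be sketched as a pattern match over affiliation strings. The example affiliations below are hypothetical, for illustration only.

```python
import re

# Word stems from the search strategy described above, matched case-insensitively
STEM = re.compile(r"\b(anes|anaes|anäst|anast)\w*", re.IGNORECASE)

affiliations = [
    "Klinik für Anästhesiologie, Charité Berlin, Germany",
    "Department of Anaesthesia, University of Vienna, Austria",
    "Department of Surgery, University of Zurich, Switzerland",
]
matches = [a for a in affiliations if STEM.search(a)]
```

The first two affiliations match via the "anäst" and "anaes" stems; the surgery department does not, which is how the strategy keeps non-anesthesiology articles from unrelated departments out of the sample.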


Subject(s)
Anesthesiology/trends , Publishing/trends , Universities/trends , Anesthesiology/statistics & numerical data , Austria , Germany , Journal Impact Factor , PubMed/standards , PubMed/statistics & numerical data , Publishing/statistics & numerical data , Switzerland , Universities/statistics & numerical data