Results 1 - 20 of 5,203
1.
BMC Med Res Methodol ; 24(1): 108, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724903

ABSTRACT

OBJECTIVE: Systematic literature reviews (SLRs) are critical for life-science research. However, the manual selection and retrieval of relevant publications can be a time-consuming process. This study aims to (1) develop two disease-specific annotated corpora, one for human papillomavirus (HPV) associated diseases and the other for pneumococcal-associated pediatric diseases (PAPD), and (2) optimize machine- and deep-learning models to facilitate automation of SLR abstract screening. METHODS: This study constructed two disease-specific SLR screening corpora for HPV and PAPD, which contained citation metadata and corresponding abstracts. Performance was evaluated using the precision, recall, accuracy, and F1-score of multiple combinations of machine- and deep-learning algorithms and features such as keywords and MeSH terms. RESULTS AND CONCLUSIONS: The HPV corpus contained 1697 entries, with 538 relevant and 1159 irrelevant articles. The PAPD corpus included 2865 entries, with 711 relevant and 2154 irrelevant articles. Adding features beyond title and abstract improved the performance (measured in accuracy) of machine learning models by 3% for the HPV corpus and 2% for the PAPD corpus. Transformer-based deep learning models consistently outperformed conventional machine learning algorithms, highlighting the strength of domain-specific pre-trained language models for SLR abstract screening. This study provides a foundation for the development of more intelligent SLR systems.


Subjects
Machine Learning , Papillomavirus Infections , Humans , Papillomavirus Infections/diagnosis , Economics, Medical , Algorithms , Outcome Assessment, Health Care/methods , Deep Learning , Abstracting and Indexing/methods
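The screening models in entry 1 are compared on precision, recall, accuracy, and F1-score. A minimal sketch of how the four metrics relate for a binary relevant/irrelevant abstract screen (the labels below are made up for illustration, not the study's data):

```python
def screening_metrics(y_true, y_pred):
    """Precision, recall, accuracy, and F1 for binary
    relevant (1) / irrelevant (0) screening predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / len(y_true)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, accuracy, f1
```

F1 is the harmonic mean of precision and recall, which is why it penalizes a screener that achieves high recall only by marking almost everything relevant.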
2.
PLoS One ; 19(5): e0302108, 2024.
Article in English | MEDLINE | ID: mdl-38696383

ABSTRACT

OBJECTIVE: To assess the reporting quality of published RCT abstracts regarding patients with endometriosis pelvic pain and to investigate the prevalence and characteristics of spin in these abstracts. METHODS: PubMed and Scopus were searched for RCT abstracts addressing endometriosis pelvic pain published from January 1st, 2010 to December 1st, 2023. The reporting quality of RCT abstracts was assessed using the CONSORT statement for abstracts. Additionally, spin was evaluated in the results and conclusions sections of the abstracts, defined as the misleading reporting of study findings to emphasize the perceived benefits of an intervention or to distract readers from statistically non-significant results. To assess factors affecting reporting quality and the presence of spin, linear and logistic regression were used, respectively. RESULTS: A total of 47 RCT abstracts were included. Of 16 checklist items, only three (objective, intervention, and conclusions) were sufficiently reported in most abstracts (more than 95%), and none of the abstracts presented precise data as required by the CONSORT-A guidelines. In the materials and methods section, trial design, type of randomization, generation of the random allocation sequence, allocation concealment, and blinding were the items most often reported suboptimally. The total quality score varied between 5 and 15 (mean: 9.59, SD: 3.03, median: 9, IQR: 5). Word count (beta = 0.015, p-value = 0.005) and publication in open-access journals (beta = 2.023, p-value = 0.023) were significant factors affecting reporting quality. Evaluating spin within each included paper, we found that 18 (51.43%) papers had statistically non-significant results. Of these studies, 12 (66.66%) had spin in both the results and conclusion sections. Furthermore, spin intensity increased during 2010-2023, and 38.29% of abstracts had spin in both the results and conclusion sections.
CONCLUSION: Overall, poor adherence to CONSORT-A was observed, with spin detected in several RCTs featuring non-significant primary endpoints in the obstetrics and gynecology literature.


Subjects
Endometriosis , Randomized Controlled Trials as Topic , Humans , Female , Randomized Controlled Trials as Topic/standards , Research Design/standards , Pelvic Pain , Abstracting and Indexing/standards
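The per-item figures quoted in entry 2 (e.g. "more than 95%" of abstracts reporting objective, intervention, and conclusions) are simple proportions over the 47 included abstracts. A sketch of that tally, with hypothetical item names and data:

```python
def item_reporting_rates(abstract_checklists):
    """Given one boolean dict per abstract (checklist item ->
    sufficiently reported?), return the percentage of abstracts
    reporting each item."""
    n = len(abstract_checklists)
    items = abstract_checklists[0].keys()
    return {item: 100.0 * sum(a[item] for a in abstract_checklists) / n
            for item in items}
```

The same per-abstract dicts can be summed row-wise to get each abstract's total score (the 5-15 range reported above).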
3.
Reg Anesth Pain Med ; 49(5): 381-390, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38719229
4.
Med Ref Serv Q ; 43(2): 106-118, 2024.
Article in English | MEDLINE | ID: mdl-38722606

ABSTRACT

The objective of this study was to examine the accuracy of indexing for "Appalachian Region"[Mesh]. Researchers performed a search in PubMed for articles published in 2019 using "Appalachian Region"[Mesh] or "Appalachia" or "Appalachian" in the title or abstract. Only 17.88% of the articles retrieved by the search were about Appalachia according to the ARC definition. Most retrieved articles appeared because they were indexed with state terms included as part of the MeSH term. Database indexing and searching transparency is of growing importance as indexers rely increasingly on automated systems to catalog information and publications.


Subjects
Abstracting and Indexing , Appalachian Region , Abstracting and Indexing/methods , Humans , Medical Subject Headings , PubMed , Bibliometrics
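The 17.88% figure above is a retrieval precision: the share of articles returned by the MeSH search that are actually about Appalachia under the ARC definition. A sketch with toy article IDs (not the study's data):

```python
def retrieval_precision(retrieved_ids, relevant_ids):
    """Percentage of retrieved articles that a gold standard
    (here, the ARC definition) marks as truly on-topic."""
    retrieved = set(retrieved_ids)
    relevant = set(relevant_ids)
    return 100.0 * len(retrieved & relevant) / len(retrieved)
```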
5.
Pediatr Dent ; 46(2): 89, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38664908
6.
J Biomed Semantics ; 15(1): 3, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38654304

ABSTRACT

BACKGROUND: Systematic reviews of Randomized Controlled Trials (RCTs) are an important part of the evidence-based medicine paradigm. However, the creation of such systematic reviews by clinical experts is costly as well as time-consuming, and results can quickly become outdated after publication. Most RCTs are structured based on the Patient, Intervention, Comparison, Outcomes (PICO) framework, and many approaches exist that aim to extract PICO elements automatically. The automatic extraction of PICO information from RCTs has the potential to significantly speed up the creation process of systematic reviews and thereby also benefit the field of evidence-based medicine. RESULTS: Previous work has addressed the extraction of PICO elements as the task of identifying relevant text spans or sentences, but without populating a structured representation of a trial. In contrast, in this work, we treat PICO elements as structured templates with slots to do justice to the complex nature of the information they represent. We present two different approaches to extract this structured information from the abstracts of RCTs. The first approach is an extractive approach based on our previous work, extended to capture full document representations as well as by a clustering step to infer the number of instances of each template type. The second approach is a generative approach based on a seq2seq model that encodes the abstract describing the RCT and uses a decoder to infer a structured representation of a trial, including its arms, treatments, endpoints, and outcomes. Both approaches are evaluated with different base models on an existing manually annotated dataset comprising 211 clinical trial abstracts for Type 2 Diabetes and Glaucoma. For both diseases, the extractive approach (with flan-t5-base) reached the best F1 score, i.e. 0.547 (±0.006) for type 2 diabetes and 0.636 (±0.006) for glaucoma.
Generally, the F1 scores were higher for glaucoma than for type 2 diabetes, and the standard deviation was higher for the generative approach. CONCLUSION: In our experiments, both approaches show promising performance in extracting structured PICO information from RCTs, especially considering that most related work focuses on the far easier task of predicting less structured objects. In our experimental results, the extractive approach performs best in both cases, although the lead is greater for glaucoma than for type 2 diabetes. For future work, it remains to be investigated how the base model size affects the performance of both approaches in comparison. Although the extractive approach currently leaves more room for direct improvements, the generative approach might benefit from larger models.


Subjects
Abstracting and Indexing , Randomized Controlled Trials as Topic , Humans , Natural Language Processing , Information Storage and Retrieval/methods
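Treating PICO elements as "structured templates with slots", as the paper above does, can be sketched as a small data class plus a slot-level F1 for scoring a predicted template against a gold one. The slot names and the exact scoring detail below are illustrative assumptions, not the paper's schema:

```python
from dataclasses import dataclass

@dataclass
class TrialTemplate:
    """One slot-filled template; empty strings mean unfilled slots."""
    condition: str = ""
    treatment: str = ""
    endpoint: str = ""
    outcome: str = ""

    def slots(self):
        # Only filled slots participate in scoring.
        return {(k, v) for k, v in vars(self).items() if v}

def slot_f1(gold, pred):
    """Micro F1 over filled (slot, value) pairs of two templates."""
    g, p = gold.slots(), pred.slots()
    tp = len(g & p)
    prec = tp / len(p) if p else 0.0
    rec = tp / len(g) if g else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

A full trial would hold one such template per arm, which is where the paper's clustering step (inferring how many instances of each template type exist) comes in.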
9.
Int J Gynecol Cancer ; 34(5): 669-674, 2024 May 06.
Article in English | MEDLINE | ID: mdl-38627032

ABSTRACT

OBJECTIVE: To determine if reviewer experience impacts the ability to discriminate between human-written and ChatGPT-written abstracts. METHODS: Thirty reviewers (10 seniors, 10 juniors, and 10 residents) were asked to differentiate between 10 ChatGPT-written and 10 human-written (fabricated) abstracts. For the study, 10 gynecologic oncology abstracts were fabricated by the authors. For each human-written abstract, we generated a matching ChatGPT abstract using the same title and the fabricated results of the human-generated abstract. A web-based questionnaire was used to gather demographic data and to record the reviewers' evaluation of the 20 abstracts. Comparative statistics and multivariable regression were used to identify factors associated with a higher correct identification rate. RESULTS: The 30 reviewers each evaluated 20 abstracts, giving a total of 600 abstract evaluations. The reviewers correctly identified 300/600 (50%) of the abstracts: 139/300 (46.3%) of the ChatGPT-generated abstracts and 161/300 (53.7%) of the human-written abstracts (p=0.07). Human-written abstracts had a higher rate of correct identification (median (IQR) 56.7% (49.2-64.1%) vs 45.0% (43.2-48.3%), p=0.023). Senior reviewers had a higher correct identification rate (60%) than junior reviewers and residents (45% each; p=0.043 and p=0.002, respectively). In a linear regression model including the experience level of the reviewers, familiarity with artificial intelligence (AI), and the country in which the majority of medical training was achieved (English speaking vs non-English speaking), the experience of the reviewer (ß=10.2 (95% CI 1.8 to 18.7)) and familiarity with AI (ß=7.78 (95% CI 0.6 to 15.0)) were independently associated with the correct identification rate (p=0.019 and p=0.035, respectively). In a correlation analysis, the number of publications by the reviewer was positively correlated with the correct identification rate (r(28) = 0.61, p<0.001).
CONCLUSION: A total of 46.3% of abstracts written by ChatGPT were detected by reviewers. The correct identification rate increased with reviewer and publication experience.


Subjects
Abstracting and Indexing , Humans , Abstracting and Indexing/standards , Female , Peer Review, Research , Writing/standards , Gynecology , Surveys and Questionnaires , Publishing/statistics & numerical data
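The correlation reported above, r(28) = 0.61, is a Pearson coefficient over the 30 reviewers (n - 2 = 28 degrees of freedom). A minimal sketch of the computation, with made-up data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation, e.g. between reviewers' publication
    counts (x) and their correct-identification rates (y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```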
10.
Am J Crit Care ; 33(3): e1-e10, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38688843
11.
Acad Emerg Med ; 31 Suppl 1: 8-401, 2024 May.
Article in English | MEDLINE | ID: mdl-38676388
12.
Otol Neurotol ; 45(5): e363-e365, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38626773

ABSTRACT

OBJECTIVE: To analyze the effect of visual abstracts versus automated tweets on social media participation in Otology & Neurotology. PATIENTS: N/A. INTERVENTIONS: Introduction of visual abstracts developed by the social media editorial team alongside the established automated tweets created by the dlvr.it computer program on the Otology & Neurotology Twitter account. MAIN OUTCOME MEASURES: Twitter analytics including the number of new followers per month, impressions per tweet, and engagements per tweet. The Kruskal-Wallis analysis of variance test was used to compare means. RESULTS: From October 2016 to October 2017 (average of 20 new followers per month), 101 automated tweets averaged 536 impressions and 16 engagements per tweet. The visual abstract was introduced in November 2017. From November 2017 to November 2020 (average of 39 new followers per month), 447 automated tweets averaged 747 impressions and 22 engagements per tweet, whereas 157 visual abstracts averaged 1977 impressions and 78 engagements per tweet. Automated tweets were discontinued in December 2020. From December 2020 to December 2022 (average of 44 new followers per month), 95 visual abstracts averaged 1893 impressions and 103 engagements per tweet. With the introduction of the visual abstract, the average number of followers, impressions per tweet, and engagements per tweet increased significantly (all p-values <0.01; all effect sizes large, at 0.16, 0.47, and 0.47, respectively). CONCLUSIONS: Visual abstracts created by a social media editorial team have a positive impact on social media participation in the field of otology and neurotology. The impact is greater than that of social media content generated by Twitter automation tools.


Subjects
Neurotology , Otolaryngology , Social Media , Humans , Abstracting and Indexing
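The Kruskal-Wallis test used above compares per-tweet metrics (impressions, engagements) across the three posting periods by ranking the pooled values. A sketch of the H statistic, assuming all values are distinct (no tie correction; the test data below are toy values, not the study's):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic over k groups of observations,
    e.g. impressions per tweet in three posting periods.
    Assumes untied values; real implementations average tied ranks."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    return 12.0 / (n * (n + 1)) * sum(
        rs ** 2 / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3 * (n + 1)
```

Larger H means the groups' rank distributions differ more; the p-values in the abstract come from comparing H to a chi-squared distribution with k - 1 degrees of freedom.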
14.
PLoS One ; 19(3): e0297526, 2024.
Article in English | MEDLINE | ID: mdl-38478542

ABSTRACT

The Medical Subject Headings (MeSH) thesaurus is a controlled vocabulary developed by the U.S. National Library of Medicine (NLM) for classifying journal articles. It is increasingly used by researchers studying medical innovation to classify text into disease areas and other categories. Although this process was once manual, human indexers are now assisted by algorithms that automate some of the indexing process. NLM has made one of their algorithms, the Medical Text Indexer (MTI), available to researchers. MTI can be used to easily assign MeSH descriptors to arbitrary text, including from document types other than publications. However, the reliability of extending MTI to other document types has not been studied directly. To assess this, we collected text from grants, patents, and drug indications, and compared MTI's classification to expert manual classification of the same documents. We examined MTI's recall (how often correct terms were identified) and found that MTI identified 78% of expert-classified MeSH descriptors for grants, 78% for patents, and 86% for drug indications. This high recall could be driven merely by excess suggestions (at an extreme, all diseases being assigned to a piece of text); therefore, we also examined precision (how often identified terms were correct) and found that most MTI outputs were also identified by expert manual classification: precision was 53% for grant text, 73% for patent text, and 64% for drug indications. Additionally, we found that recall and precision could be improved by (i) utilizing ranking scores provided by MTI, (ii) excluding long documents, and (iii) aggregating to higher MeSH categories. For simply detecting the presence of any disease, MTI showed > 94% recall and > 87% precision. Our overall assessment is that MTI is a potentially useful tool for researchers wishing to classify texts from a variety of sources into disease areas.


Subjects
Abstracting and Indexing , Medical Subject Headings , United States , Humans , Reproducibility of Results , Algorithms , National Library of Medicine (U.S.)
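Recall and precision as defined in the MTI study above are set comparisons between expert-assigned and MTI-assigned MeSH descriptors for the same document. A sketch with hypothetical term sets:

```python
def recall_precision(expert_terms, mti_terms):
    """Recall: share of expert-assigned MeSH descriptors that MTI
    also identified. Precision: share of MTI's descriptors that
    the expert classification confirms."""
    expert, mti = set(expert_terms), set(mti_terms)
    hits = len(expert & mti)
    recall = hits / len(expert) if expert else 0.0
    precision = hits / len(mti) if mti else 0.0
    return recall, precision
```

This also illustrates the paper's caveat: an over-suggesting tagger can push recall toward 1.0 while precision collapses, which is why both numbers are reported.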
15.
Int J Gynaecol Obstet ; 165(3): 1257-1260, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38234125

ABSTRACT

OBJECTIVES: To use machine learning to optimize the detection of obstetrics and gynecology (OBGYN) Chat Generative Pre-trained Transformer (ChatGPT)-written abstracts across all OBGYN journals. METHODS: We used Web of Science to identify all original articles published in all OBGYN journals in 2022. Seventy-five original articles were randomly selected. For each, we prompted ChatGPT to write an abstract based on the title and results of the original abstract. Each abstract was tested with Grammarly software, and the reports were entered into a database. Machine-learning models were trained and evaluated on the resulting database. RESULTS: Overall, 75 abstracts from 12 different OBGYN journals were randomly selected. There were seven (58%) Q1 journals, one (8%) Q2 journal, two (17%) Q3 journals, and two (17%) Q4 journals. Use of mixed dialects of English, absence of comma misuse, absence of incorrect verb forms, and improper formatting were important predictor variables for ChatGPT-written abstracts. The deep-learning model had the highest predictive performance of all examined models, achieving accuracy 0.90, precision 0.92, recall 0.85, and area under the curve 0.95. CONCLUSIONS: Machine-learning-based tools reach high accuracy in identifying ChatGPT-written OBGYN abstracts.


Subjects
Abstracting and Indexing , Gynecology , Machine Learning , Obstetrics , Humans , Periodicals as Topic
17.
JAMA ; 331(3): 252-253, 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38150261

ABSTRACT

This study assesses affiliation bias in peer review of medical abstracts by a commonly used large language model.


Subjects
Language , Peer Review , Publication Bias , Peer Group , Abstracting and Indexing , Models, Theoretical
18.
Article in English | MEDLINE | ID: mdl-38082894

ABSTRACT

The Medical Subject Headings (MeSH) thesaurus is a controlled vocabulary developed by the U.S. National Library of Medicine (NLM) for classifying journal articles. The MeSH annotation of a document consists of one or more descriptors, the main headings, and of qualifiers, subheadings specific to a descriptor. Currently, there are more than 34 million documents on PubMed, which are manually tagged with MeSH terms. In this paper, we describe a machine-learning procedure that, given a document and its MeSH descriptors, predicts the respective qualifiers. In our experiment, we restricted the dataset to documents with the Heart Transplantation descriptor, and we only used the PubMed abstracts. We trained binary classifiers to predict qualifiers of this descriptor using logistic regression with a tf-idf vectorizer and a fine-tuned DistilBERT model. We carried out a small-scale evaluation of our models with the Mortality qualifier on a test set consisting of 30 articles (15 positives and 15 negatives). This test set was then manually re-annotated by a cardiac surgeon, an expert in thoracic transplantation. On this re-annotated test set, we obtained macro-averaged F1 scores of 0.81 for the logistic regression model and 0.85 for the DistilBERT model. Both scores are higher than the macro-averaged F1 score of 0.76 from the initial PubMed manual annotation. Our procedure would be easily extensible to all MeSH descriptors with sufficient training data and, we believe, would enable human annotators to complete the indexing work more easily. Clinical Relevance: Selecting relevant articles is important for clinicians and researchers, but it is also often a challenge, especially in complex subspecialties such as heart transplantation. In this study, a machine-learning model outperformed PubMed's manual annotation, which is promising for improved quality in information retrieval.


Subjects
Abstracting and Indexing , Medical Subject Headings , Humans , PubMed , Information Storage and Retrieval , Machine Learning
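The macro-averaged F1 scores quoted above average the F1 of the positive and the negative class of a binary qualifier classifier (e.g. Mortality present vs absent). A sketch, with illustrative labels rather than the paper's data:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1 for binary labels: the unweighted mean
    of the F1 scores of class 1 and class 0."""
    def f1_for(cls):
        tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
        fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
        fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return (f1_for(1) + f1_for(0)) / 2
```

Because both classes count equally, macro F1 is a fairer summary than accuracy on the balanced 15/15 test set described above.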