Results 1 - 20 of 73

1.
Eur Heart J ; 45(5): 332-345, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38170821

ABSTRACT

Natural language processing techniques are having an increasing impact on clinical care from the patient, clinician, administrator, and research perspectives. Applications include automated generation of clinical notes and discharge letters, medical term coding for billing, medical chatbots for both patients and clinicians, data enrichment for the identification of disease symptoms or diagnoses, cohort selection for clinical trials, and auditing. This review presents an overview of the history of natural language processing techniques together with a brief technical background. It then discusses implementation strategies for natural language processing tools, focusing specifically on large language models, and concludes with future opportunities for applying such techniques in the field of cardiology.


Subject(s)
Artificial Intelligence; Cardiology; Humans; Natural Language Processing; Patient Discharge
2.
J Biomed Inform ; 151: 104618, 2024 03.
Article in English | MEDLINE | ID: mdl-38431151

ABSTRACT

OBJECTIVE: Goals of care (GOC) discussions are an increasingly used quality metric in serious illness care and research. Wide variation in documentation practices within the Electronic Health Record (EHR) presents challenges for reliable measurement of GOC discussions. Novel natural language processing approaches are needed to capture GOC discussions documented in real-world samples of seriously ill hospitalized patients' EHR notes, a corpus with a very low event prevalence. METHODS: To automatically detect sentences documenting GOC discussions outside of dedicated GOC note types, we proposed an ensemble of classifiers aggregating the predictions of rule-based, feature-based, and three transformer-based classifiers. We trained our classifier on 600 manually annotated EHR notes from patients with serious illnesses. Our corpus exhibited an extremely imbalanced ratio between sentences discussing GOC and sentences that did not, an imbalance that makes standard supervised training of a classifier challenging. We therefore trained our classifier with active learning. RESULTS: Using active learning, we reduced the annotation cost of fine-tuning our ensemble by 70% while improving its performance on our test set of 176 EHR notes, reaching an F1-score of 0.557 for sentence classification and 0.629 for note classification. CONCLUSION: When classifying notes, with a true positive rate of 72% (13/18) and a false positive rate of 8% (13/158), our performance may be sufficient for deploying our classifier in the EHR to facilitate bedside clinicians' access to GOC conversations documented outside of dedicated note types, without overburdening clinicians with false positives. Improvements are needed before using it to enrich trial populations or as an outcome measure.
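
The ensemble-plus-active-learning strategy described in this abstract can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes scikit-learn-style classifiers with `predict`/`predict_proba`, combines sentence-level predictions by majority vote, and uses uncertainty sampling to choose the next sentences to annotate.

```python
import numpy as np

def ensemble_predict(classifiers, sentences):
    """Majority vote over the binary predictions of several classifiers
    (e.g., rule-based, feature-based, and transformer-based)."""
    votes = np.array([clf.predict(sentences) for clf in classifiers])  # shape: (n_clf, n_sent)
    return (votes.mean(axis=0) >= 0.5).astype(int)

def select_for_annotation(classifier, unlabeled_sentences, batch_size=50):
    """Uncertainty sampling: send the sentences the model is least sure about
    to the annotators for the next active-learning round."""
    proba = classifier.predict_proba(unlabeled_sentences)[:, 1]
    uncertainty = np.abs(proba - 0.5)             # 0 = most uncertain
    most_uncertain = np.argsort(uncertainty)[:batch_size]
    return [unlabeled_sentences[i] for i in most_uncertain]
```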


Subject(s)
Communication; Documentation; Humans; Electronic Health Records; Natural Language Processing; Patient Care Planning
3.
Dis Esophagus ; 2024 May 14.
Article in English | MEDLINE | ID: mdl-38745432

ABSTRACT

Patients with chronic diseases have increasingly turned to social media to discuss symptoms and share the challenges they face with disease management. The primary aim of this study is to use naturally occurring data from X (formerly known as Twitter) to identify barriers to care faced by individuals affected by eosinophilic esophagitis (EoE). For this qualitative study, the X application programming interface with academic research access was used to search for posts that referenced EoE between 1 January 2019 and 10 August 2022. The posts were identified as either related to barriers to care for EoE or not. Those related to barriers to care were further categorized by the type of barrier that was expressed. A total of 8636 EoE-related posts were annotated, of which 12.1% were related to barriers to care in EoE. The themes that emerged about barriers to care included: dietary challenges, limited treatment options, lack of community support, lack of physician awareness of disease, misinformation, cost of care, lack of patient belief in disease or trust in physician, and limited access to care. Saturation of themes was achieved. This study highlights barriers to care in EoE using readily accessible social media data that is not derived from a curated research setting. Identifying these obstacles is key to improving care for this chronic disease.

4.
J Med Internet Res ; 26: e50652, 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38526542

ABSTRACT

We manually annotated 9734 tweets that were posted by users who reported their pregnancy on Twitter, and used them to train, evaluate, and deploy deep neural network classifiers (F1-score=0.93) to detect tweets that report having a child with attention-deficit/hyperactivity disorder (678 users), autism spectrum disorders (1744 users), delayed speech (902 users), or asthma (1255 users), demonstrating the potential of Twitter as a complementary resource for assessing associations between pregnancy exposures and childhood health outcomes on a large scale.
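
Classifiers of this kind are commonly built by fine-tuning a pretrained transformer on the annotated tweets. The sketch below is a generic, hedged illustration using the Hugging Face transformers and datasets libraries; the model name, example tweets, and labels are placeholders, not details from the study.

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

# Hypothetical annotated data: tweet text plus a binary label
# (1 = reports the childhood outcome, 0 = does not).
records = [{"text": "my daughter was just diagnosed with asthma", "label": 1},
           {"text": "reading about asthma triggers today", "label": 0}]

model_name = "bert-base-uncased"   # placeholder; any pretrained encoder works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

dataset = Dataset.from_list(records).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ckpt", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=dataset,
)
trainer.train()
```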


Subject(s)
Asthma; Autism Spectrum Disorder; Social Media; Child; Female; Pregnancy; Humans; Asthma/epidemiology; Neural Networks, Computer
5.
J Med Internet Res ; 26: e47923, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38488839

ABSTRACT

BACKGROUND: Patient health data collected from a variety of nontraditional resources, commonly referred to as real-world data, can be a key information source for health and social science research. Social media platforms, such as Twitter (Twitter, Inc), offer vast amounts of real-world data. An important aspect of incorporating social media data in scientific research is identifying the demographic characteristics of the users who posted those data. Age and gender are considered key demographics for assessing the representativeness of the sample and enable researchers to study subgroups and disparities effectively. However, deciphering the age and gender of social media users poses challenges. OBJECTIVE: This scoping review aims to summarize the existing literature on the prediction of the age and gender of Twitter users and provide an overview of the methods used. METHODS: We searched 15 electronic databases and carried out reference checking to identify relevant studies that met our inclusion criteria: studies that predicted the age or gender of Twitter users using computational methods. The screening process was performed independently by 2 researchers to ensure the accuracy and reliability of the included studies. RESULTS: Of the initial 684 studies retrieved, 74 (10.8%) met our inclusion criteria. Among these 74 studies, 42 (57%) focused on predicting gender, 8 (11%) focused on predicting age, and 24 (32%) predicted a combination of both age and gender. Gender prediction was predominantly approached as a binary classification task, with the reported performance of the methods ranging from 0.58 to 0.96 F1-score or 0.51 to 0.97 accuracy. Age prediction approaches varied in terms of classification groups, with a wider range of reported performance (0.31 to 0.94 F1-score or 0.43 to 0.86 accuracy). The heterogeneous nature of the studies and the reporting of dissimilar performance metrics made it challenging to quantitatively synthesize results and draw definitive conclusions. CONCLUSIONS: Our review found that although automated methods for predicting the age and gender of Twitter users have evolved to incorporate techniques such as deep neural networks, a significant proportion of the attempts rely on traditional machine learning methods, suggesting that there is potential to improve the performance of these tasks by using more advanced methods. Gender prediction has generally achieved a higher reported performance than age prediction. However, the lack of standardized reporting of performance metrics or standard annotated corpora to evaluate the methods used hinders any meaningful comparison of the approaches. Potential biases stemming from the collection and labeling of data used in the studies were identified as a problem, emphasizing the need for careful consideration and mitigation of biases in future studies. This scoping review provides valuable insights into the methods used for predicting the age and gender of Twitter users, along with the challenges and considerations associated with these methods.


Subject(s)
Social Media; Humans; Young Adult; Adult; Reproducibility of Results; Neural Networks, Computer; Machine Learning
6.
J Med Internet Res ; 24(4): e35788, 2022 04 29.
Article in English | MEDLINE | ID: mdl-35486433

ABSTRACT

BACKGROUND: A growing amount of health research uses social media data. Those critical of social media research often cite that it may be unrepresentative of the population; however, the suitability of social media data in digital epidemiology is more nuanced. Identifying the demographics of social media users can help establish representativeness. OBJECTIVE: This study aims to identify the different approaches or combination of approaches to extract race or ethnicity from social media and report on the challenges of using these methods. METHODS: We present a scoping review to identify methods used to extract the race or ethnicity of Twitter users from Twitter data sets. We searched 17 electronic databases from the date of inception to May 15, 2021, and carried out reference checking and hand searching to identify relevant studies. Sifting of each record was performed independently by at least two researchers, with any disagreement discussed. Studies were required to extract the race or ethnicity of Twitter users using either manual or computational methods or a combination of both. RESULTS: Of the 1249 records sifted, we identified 67 (5.36%) that met our inclusion criteria. Most studies (51/67, 76%) have focused on US-based users and English language tweets (52/67, 78%). A range of data was used, including Twitter profile metadata, such as names, pictures, information from bios (including self-declarations), or location or content of the tweets. A range of methodologies was used, including manual inference, linkage to census data, commercial software, language or dialect recognition, or machine learning or natural language processing. However, not all studies have evaluated these methods. Those that evaluated these methods found accuracy to vary from 45% to 93% with significantly lower accuracy in identifying categories of people of color. The inference of race or ethnicity raises important ethical questions, which can be exacerbated by the data and methods used. The comparative accuracies of the different methods are also largely unknown. CONCLUSIONS: There is no standard accepted approach or current guidelines for extracting or inferring the race or ethnicity of Twitter users. Social media researchers must carefully interpret race or ethnicity and not overpromise what can be achieved, as even manual screening is a subjective, imperfect method. Future research should establish the accuracy of methods to inform evidence-based best practice guidelines for social media researchers and be guided by concerns of equity and social justice.


Subject(s)
Social Media; Data Collection; Ethnicity; Humans; Machine Learning; Natural Language Processing
7.
Bioinformatics ; 36(20): 5120-5121, 2020 12 22.
Article in English | MEDLINE | ID: mdl-32683454

ABSTRACT

SUMMARY: We present GeoBoost2, a natural language processing pipeline for extracting the locations of infected hosts to enrich metadata in nucleotide sequence repositories, such as the National Center for Biotechnology Information's GenBank, for downstream analyses including phylogeography and genomic epidemiology. The increasing number of pathogen sequences requires complementary information extraction methods for focused research, including surveillance within countries and across borders. In this article, we describe the enhancements since our earlier release, including improvements in end-to-end extraction performance and speed, the availability of a fully functional web interface, and state-of-the-art methods for location extraction using deep learning. AVAILABILITY AND IMPLEMENTATION: The application is freely available on the web at https://zodo.asu.edu/geoboost2. Source code, usage examples, and annotated data for GeoBoost2 are freely available at https://github.com/ZooPhy/geoboost2. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subject(s)
Databases, Nucleic Acid; Metadata; Genomics; Phylogeography; Software
8.
J Med Internet Res ; 23(1): e25314, 2021 01 22.
Article in English | MEDLINE | ID: mdl-33449904

ABSTRACT

BACKGROUND: In the United States, the rapidly evolving COVID-19 outbreak, the shortage of available testing, and the delay of test results present challenges for actively monitoring its spread based on testing alone. OBJECTIVE: The objective of this study was to develop, evaluate, and deploy an automatic natural language processing pipeline to collect user-generated Twitter data as a complementary resource for identifying potential cases of COVID-19 in the United States that are not based on testing and, thus, may not have been reported to the Centers for Disease Control and Prevention. METHODS: Beginning January 23, 2020, we collected English tweets from the Twitter Streaming application programming interface that mention keywords related to COVID-19. We applied handwritten regular expressions to identify tweets indicating that the user potentially has been exposed to COVID-19. We automatically filtered out "reported speech" (eg, quotations, news headlines) from the tweets that matched the regular expressions, and two annotators annotated a random sample of 8976 tweets that are geo-tagged or have profile location metadata, distinguishing tweets that self-report potential cases of COVID-19 from those that do not. We used the annotated tweets to train and evaluate deep neural network classifiers based on bidirectional encoder representations from transformers (BERT). Finally, we deployed the automatic pipeline on more than 85 million unlabeled tweets that were continuously collected between March 1 and August 21, 2020. RESULTS: Interannotator agreement, based on dual annotations for 3644 (41%) of the 8976 tweets, was 0.77 (Cohen κ). A deep neural network classifier, based on a BERT model that was pretrained on tweets related to COVID-19, achieved an F1-score of 0.76 (precision=0.76, recall=0.76) for detecting tweets that self-report potential cases of COVID-19. Upon deploying our automatic pipeline, we identified 13,714 tweets that self-report potential cases of COVID-19 and have US state-level geolocations. CONCLUSIONS: We have made the 13,714 tweets identified in this study, along with each tweet's time stamp and US state-level geolocation, publicly available to download. This data set presents the opportunity for future work to assess the utility of Twitter data as a complementary resource for tracking the spread of COVID-19.
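
The first stage of a pipeline like the one described above can be sketched with a few handwritten regular expressions that flag candidate self-report tweets before the transformer classifier sees them. The patterns below are invented for illustration and are not the study's actual expressions.

```python
import re

# Illustrative patterns only; the study used its own handwritten expressions.
CANDIDATE_PATTERNS = [
    re.compile(r"\bI\s+(?:think\s+)?(?:have|had|got|caught)\s+(?:the\s+)?(?:corona(?:virus)?|covid[- ]?19)\b", re.I),
    re.compile(r"\b(?:tested|testing)\s+positive\s+for\s+covid", re.I),
    re.compile(r"\bexposed\s+to\s+(?:someone\s+with\s+)?covid", re.I),
]

def is_candidate(tweet_text: str) -> bool:
    """Return True if the tweet matches any self-report pattern and should
    therefore be passed on to the BERT-based classifier."""
    return any(p.search(tweet_text) for p in CANDIDATE_PATTERNS)

print(is_candidate("I caught covid-19 at work yesterday"))      # True
print(is_candidate("New covid-19 case counts released today"))  # False
```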


Subject(s)
COVID-19/epidemiology; COVID-19/transmission; Datasets as Topic; Natural Language Processing; Social Media/statistics & numerical data; COVID-19/diagnosis; Disease Outbreaks/statistics & numerical data; Humans; Longitudinal Studies; SARS-CoV-2; Self Report; Speech; United States/epidemiology
9.
J Biomed Inform ; 112S: 100076, 2020.
Article in English | MEDLINE | ID: mdl-34417007

ABSTRACT

BACKGROUND: In the United States, 17% of pregnancies end in fetal loss: miscarriage or stillbirth. Preterm birth affects 10% of live births in the United States and is the leading cause of neonatal death globally. Preterm births with low birthweight are the second leading cause of infant mortality in the United States. Despite their prevalence, the causes of miscarriage, stillbirth, and preterm birth are largely unknown. OBJECTIVE: The primary objectives of this study are to (1) assess whether women report miscarriage, stillbirth, and preterm birth, among others, on Twitter, and (2) develop natural language processing (NLP) methods to automatically identify users from which to select cases for large-scale observational studies. METHODS: We handcrafted regular expressions to retrieve tweets that mention an adverse pregnancy outcome, from a database containing more than 400 million publicly available tweets posted by more than 100,000 users who have announced their pregnancy on Twitter. Two annotators independently annotated 8109 (one random tweet per user) of the 22,912 retrieved tweets, distinguishing those reporting that the user has personally experienced the outcome ("outcome" tweets) from those that merely mention the outcome ("non-outcome" tweets). Inter-annotator agreement was κ = 0.90 (Cohen's kappa). We used the annotated tweets to train and evaluate feature-engineered and deep learning-based classifiers. We further annotated 7512 (of the 8109) tweets to develop a generalizable, rule-based module designed to filter out reported speech-that is, posts containing what was said by others-prior to automatic classification. We performed an extrinsic evaluation assessing whether the reported speech filter could improve the detection of women reporting adverse pregnancy outcomes on Twitter. RESULTS: The tweets annotated as "outcome" include 1632 women reporting miscarriage, 119 stillbirth, 749 preterm birth or premature labor, 217 low birthweight, 558 NICU admission, and 458 fetal/infant loss in general. A deep neural network, BERT-based classifier achieved the highest overall F1-score (0.88) for automatically detecting "outcome" tweets (precision = 0.87, recall = 0.89), with an F1-score of at least 0.82 and a precision of at least 0.84 for each of the adverse pregnancy outcomes. Our reported speech filter significantly (P < 0.05) improved the accuracy of Logistic Regression (from 78.0% to 80.8%) and majority voting-based ensemble (from 81.1% to 82.9%) classifiers. Although the filter did not improve the F1-score of the BERT-based classifier, it did improve precision-a trade-off of recall that may be acceptable for automated case selection of more prevalent outcomes. Without the filter, reported speech is one of the main sources of errors for the BERT-based classifier. CONCLUSION: This study demonstrates that (1) women do report their adverse pregnancy outcomes on Twitter, (2) our NLP pipeline can automatically identify users from which to select cases for large-scale observational studies, and (3) our reported speech filter would reduce the cost of annotating health-related social media data and can significantly improve the overall performance of feature-based classifiers.
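
The reported-speech filter mentioned in this abstract can be approximated with simple rules. The sketch below is a simplified stand-in for the rule-based module described in the paper: it flags posts that look like quotations, attributed speech, or shared news so they can be removed before classification. The cue lists are illustrative assumptions, not the study's rules.

```python
import re

# Illustrative cues for reported speech; the published module is more elaborate.
QUOTE_PATTERN = re.compile(r'["\u201c].{10,}["\u201d]')          # long quoted span
ATTRIBUTION_PATTERN = re.compile(
    r"\b(?:she|he|they|doctor|dr\.?|my (?:mom|sister|friend))\s+(?:said|says|told me)\b", re.I)
NEWS_PATTERN = re.compile(r"\b(?:via|according to)\s+@?\w+", re.I)

def is_reported_speech(post: str) -> bool:
    """True when the post mainly relays what someone else said,
    so it should be filtered out before outcome classification."""
    return any(p.search(post) for p in (QUOTE_PATTERN, ATTRIBUTION_PATTERN, NEWS_PATTERN))

print(is_reported_speech('My doctor said "many pregnancies end in miscarriage"'))  # True
print(is_reported_speech("We lost our baby at 12 weeks"))                           # False
```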

10.
J Med Internet Res ; 22(2): e15861, 2020 02 26.
Article in English | MEDLINE | ID: mdl-32130117

ABSTRACT

BACKGROUND: Social media data are being increasingly used for population-level health research because they provide near real-time access to large volumes of consumer-generated data. Recently, a number of studies have explored the possibility of using social media data, such as from Twitter, for monitoring prescription medication abuse. However, there is a paucity of annotated data or guidelines for data characterization that discuss how information related to abuse-prone medications is presented on Twitter. OBJECTIVE: This study discusses the creation of an annotated corpus suitable for training supervised classification algorithms for the automatic classification of medication abuse-related chatter. The annotation strategies used for improving interannotator agreement (IAA), a detailed annotation guideline, and machine learning experiments that illustrate the utility of the annotated corpus are also described. METHODS: We employed an iterative annotation strategy, with interannotator discussions held and updates made to the annotation guidelines at each iteration to improve IAA for the manual annotation task. Using the grounded theory approach, we first characterized tweets into fine-grained categories and then grouped them into 4 broad classes: abuse or misuse, personal consumption, mention, and unrelated. After the completion of manual annotations, we experimented with several machine learning algorithms to illustrate the utility of the corpus and generate baseline performance metrics for automatic classification on these data. RESULTS: Our final annotated set consisted of 16,443 tweets mentioning at least 20 abuse-prone medications including opioids, benzodiazepines, atypical antipsychotics, central nervous system stimulants, and gamma-aminobutyric acid analogs. Our final overall IAA was 0.86 (Cohen kappa), which represents high agreement. The manual annotation process revealed the variety of ways in which prescription medication misuse or abuse is discussed on Twitter, including expressions indicating coingestion, nonmedical use, nonstandard route of intake, and consumption above the prescribed doses. Among machine learning classifiers, support vector machines obtained the highest automatic classification accuracy of 73.00% (95% CI 71.4-74.5) on the test set (n=3271). CONCLUSIONS: Our manual analysis and annotations of a large number of tweets have revealed types of information posted on Twitter about a set of abuse-prone prescription medications and their distributions. In the interests of reproducible and community-driven research, we have made our detailed annotation guidelines and the training data for the classification experiments publicly available, and the test data will be used in future shared tasks.
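
A baseline of the kind reported above (a support vector machine over annotated tweets) can be reproduced with a standard pipeline. The sketch below is a generic scikit-learn illustration with made-up example tweets and the four broad classes named in the abstract; it is not the study's code or data.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy stand-ins for annotated tweets and their 4 broad classes.
tweets = ["took an extra xanax to get through the flight",
          "picked up my adderall prescription today",
          "new study on benzodiazepine prescribing trends",
          "that concert was my xanax lol"]
labels = ["abuse_misuse", "personal_consumption", "mention", "unrelated"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # word unigrams and bigrams
    ("svm", LinearSVC()),
])
clf.fit(tweets, labels)
print(clf.predict(["just doubled my dose of ativan without asking the doctor"]))
```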


Subject(s)
Prescription Drugs/therapeutic use; Social Media/standards; Data Collection; Guidelines as Topic; Humans; Prescription Drugs/pharmacology
11.
Bioinformatics ; 34(13): i565-i573, 2018 07 01.
Article in English | MEDLINE | ID: mdl-29950020

ABSTRACT

Motivation: Virus phylogeographers rely on DNA sequences of viruses and the locations of the infected hosts found in public sequence databases like GenBank for modeling virus spread. However, the locations in GenBank records are often only at the country or state level, and may require phylogeographers to scan the journal articles associated with the records to identify more localized geographic areas. To automate this process, we present a named entity recognizer (NER) for detecting locations in biomedical literature. We built the NER using a deep feedforward neural network to determine whether a given token is a toponym or not. To overcome the limited human annotated data available for training, we use distant supervision techniques to generate additional samples to train our NER. Results: Our NER achieves an F1-score of 0.910 and significantly outperforms the previous state-of-the-art system. Using the additional data generated through distant supervision further boosts the performance of the NER achieving an F1-score of 0.927. The NER presented in this research improves over previous systems significantly. Our experiments also demonstrate the NER's capability to embed external features to further boost the system's performance. We believe that the same methodology can be applied for recognizing similar biomedical entities in scientific literature.
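
The distant-supervision step described above, generating extra training examples without human annotation, can be sketched as follows. This is a schematic illustration under the assumption that a gazetteer of known place names is available; the gazetteer entries and labeling rule are invented for the example and do not reproduce the paper's procedure.

```python
# Distant supervision sketch: label tokens as toponym / non-toponym by
# looking them up in a gazetteer, producing silver-standard training data
# for a feedforward token classifier.
GAZETTEER = {"guangdong", "lyon", "maricopa county", "buenos aires"}

def silver_label(tokens):
    """Return (token, label) pairs; label 1 if the token (or the bigram it
    starts) appears in the gazetteer, else 0."""
    labels = []
    for i, tok in enumerate(tokens):
        bigram = " ".join(tokens[i:i + 2]).lower()
        is_toponym = tok.lower() in GAZETTEER or bigram in GAZETTEER
        labels.append((tok, int(is_toponym)))
    return labels

sentence = "Samples were collected in Maricopa County in 2016".split()
print(silver_label(sentence))
# [('Samples', 0), ..., ('Maricopa', 1), ('County', 0), ...]
```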


Subject(s)
Deep Learning; Information Storage and Retrieval/methods; Phylogeography/methods; Viruses/genetics; Databases, Nucleic Acid; Humans
12.
Bioinformatics ; 34(9): 1606-1608, 2018 05 01.
Article in English | MEDLINE | ID: mdl-29240889

ABSTRACT

Summary: GeoBoost is a command-line software package developed to address sparse or incomplete metadata in GenBank sequence records that relate to the location of the infected host (LOIH) of viruses. Given a set of GenBank accession numbers corresponding to virus GenBank records, GeoBoost extracts, integrates and normalizes geographic information reflecting the LOIH of the viruses using integrated information from GenBank metadata and related full-text publications. In addition, to facilitate probabilistic geospatial modeling, GeoBoost assigns probability scores for each possible LOIH. Availability and implementation: Binaries and resources required for running GeoBoost are packed into a single zipped file and freely available for download at https://tinyurl.com/geoboost. A video tutorial is included to help users quickly and easily install and run the software. The software is implemented in Java 1.8, and supported on MS Windows and Linux platforms. Contact: gragon@upenn.edu. Supplementary information: Supplementary data are available at Bioinformatics online.


Subject(s)
Metadata; Viruses; Databases, Nucleic Acid; Software
13.
J Biomed Inform ; 98: 103268, 2019 10.
Article in English | MEDLINE | ID: mdl-31421211

ABSTRACT

OBJECTIVE: The assessment of written medical examinations is a tedious and expensive process, requiring significant amounts of time from medical experts. Our objective was to develop a natural language processing (NLP) system that can expedite the assessment of unstructured answers in medical examinations by automatically identifying relevant concepts in the examinee responses. MATERIALS AND METHODS: Our NLP system, Intelligent Clinical Text Evaluator (INCITE), is semi-supervised in nature. Learning from a limited set of fully annotated examples, it sequentially applies a series of customized text comparison and similarity functions to determine if a text span represents an entry in a given reference standard. Combinations of fuzzy matching and set intersection-based methods capture inexact matches and also fragmented concepts. Customizable, dynamic similarity-based matching thresholds allow the system to be tailored for examinee responses of different lengths. RESULTS: INCITE achieved an average F1-score of 0.89 (precision = 0.87, recall = 0.91) against human annotations over held-out evaluation data. Fuzzy text matching, dynamic thresholding, and the incorporation of supervision using annotated data resulted in the largest gains in performance. DISCUSSION: Long and non-standard expressions are difficult for INCITE to detect, but the problem is mitigated by the use of dynamic thresholding (i.e., varying the similarity threshold for a text span to be considered a match). Annotation variations within exams and disagreements between annotators were the primary causes of false positives. Small amounts of annotated data can significantly improve system performance. CONCLUSIONS: The high performance and interpretability of INCITE will likely significantly aid the assessment process and also help mitigate the impact of manual assessment inconsistencies.
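
The core matching idea, fuzzy string similarity combined with token-set overlap under a length-dependent threshold, can be sketched briefly. The snippet below uses only the Python standard library and illustrates the general technique rather than the INCITE implementation; the threshold values are invented.

```python
from difflib import SequenceMatcher

def fuzzy_score(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def token_overlap(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(tb), 1)

def matches(span: str, reference: str) -> bool:
    """Dynamic threshold: shorter reference concepts must match more exactly,
    longer ones may match loosely or as fragmented tokens."""
    threshold = 0.9 if len(reference.split()) <= 2 else 0.7   # invented values
    return fuzzy_score(span, reference) >= threshold or token_overlap(span, reference) >= threshold

print(matches("shortness of breath on exertion", "shortness of breath"))  # True
print(matches("chest pain", "shortness of breath"))                        # False
```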


Subject(s)
Education, Medical/methods; Education, Medical/standards; Educational Measurement/methods; Licensure, Medical/standards; Natural Language Processing; Schools, Medical; Algorithms; Clinical Competence/standards; Data Collection; Data Curation/methods; Fuzzy Logic; Humans; Medical Records; Pattern Recognition, Automated; Reproducibility of Results; Software; Unified Medical Language System
14.
J Biomed Inform ; 88: 98-107, 2018 12.
Article in English | MEDLINE | ID: mdl-30445220

ABSTRACT

BACKGROUND: Data collection and extraction from noisy text sources such as social media typically rely on keyword-based searching/listening. However, health-related terms are often misspelled in such noisy text sources due to their complex morphology, resulting in the exclusion of relevant data for studies. In this paper, we present a customizable data-centric system that automatically generates common misspellings for complex health-related terms, which can improve the data collection process from noisy text sources. MATERIALS AND METHODS: The spelling variant generator relies on a dense vector model learned from large, unlabeled text, which is used to find semantically close terms to the original/seed keyword, followed by the filtering of terms that are lexically dissimilar beyond a given threshold. The process is executed recursively, converging when no new terms similar (lexically and semantically) to the seed keyword are found. The weighting of intra-word character sequence similarities allows further problem-specific customization of the system. RESULTS: On a dataset prepared for this study, our system outperforms the current state-of-the-art medication name variant generator with best F1-score of 0.69 and F14-score of 0.78. Extrinsic evaluation of the system on a set of cancer-related terms demonstrated an increase of over 67% in retrieval rate from Twitter posts when the generated variants are included. DISCUSSION: Our proposed spelling variant generator has several advantages over past spelling variant generators-(i) it is capable of filtering out lexically similar but semantically dissimilar terms, (ii) the number of variants generated is low, as many low-frequency and ambiguous misspellings are filtered out, and (iii) the system is fully automatic, customizable and easily executable. While the base system is fully unsupervised, we show how supervision may be employed to adjust weights for task-specific customizations. CONCLUSION: The performance and relative simplicity of our proposed approach make it a much-needed spelling variant generation resource for health-related text mining from noisy sources. The source code for the system has been made publicly available for research.
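
The recursive generate-and-filter loop described above can be sketched as follows. The example assumes a gensim-style KeyedVectors embedding model trained on noisy social media text and uses a simple character-level ratio in place of the paper's weighted intra-word similarity; both are stand-ins, not the published system.

```python
from difflib import SequenceMatcher

def lexical_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def generate_variants(seed, embeddings, lex_threshold=0.75, topn=50):
    """Recursively expand a seed term: take semantically close neighbors from
    the embedding model, keep only those that are also lexically similar
    (likely misspellings), and repeat until no new variants are found."""
    variants, frontier = {seed}, {seed}
    while frontier:
        term = frontier.pop()
        for neighbor, _ in embeddings.most_similar(term, topn=topn):
            if neighbor not in variants and lexical_similarity(seed, neighbor) >= lex_threshold:
                variants.add(neighbor)
                frontier.add(neighbor)
    return variants

# Usage (assumes `kv` is a gensim KeyedVectors model over social media text):
# generate_variants("metformin", kv)  ->  {"metformin", "metphormin", ...}
```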


Subject(s)
Data Mining/methods; Medical Informatics/methods; Natural Language Processing; Social Media; Algorithms; Data Collection/methods; Electronic Health Records; Fuzzy Logic; Pattern Recognition, Automated; Reproducibility of Results
15.
J Biomed Inform ; 87: 68-78, 2018 11.
Article in English | MEDLINE | ID: mdl-30292855

ABSTRACT

BACKGROUND: Although birth defects are the leading cause of infant mortality in the United States, methods for observing human pregnancies with birth defect outcomes are limited. OBJECTIVE: The primary objectives of this study were (i) to assess whether rare health-related events-in this case, birth defects-are reported on social media, (ii) to design and deploy a natural language processing (NLP) approach for collecting such sparse data from social media, and (iii) to utilize the collected data to discover a cohort of women whose pregnancies with birth defect outcomes could be observed on social media for epidemiological analysis. METHODS: To assess whether birth defects are mentioned on social media, we mined 432 million tweets posted by 112,647 users who were automatically identified via their public announcements of pregnancy on Twitter. To retrieve tweets that mention birth defects, we developed a rule-based, bootstrapping approach, which relies on a lexicon, lexical variants generated from the lexicon entries, regular expressions, post-processing, and manual analysis guided by distributional properties. To identify users whose pregnancies with birth defect outcomes could be observed for epidemiological analysis, inclusion criteria were (i) tweets indicating that the user's child has a birth defect, and (ii) accessibility to the user's tweets during pregnancy. We conducted a semi-automatic evaluation to estimate the recall of the tweet-collection approach, and performed a preliminary assessment of the prevalence of selected birth defects among the pregnancy cohort derived from Twitter. RESULTS: We manually annotated 16,822 retrieved tweets, distinguishing tweets indicating that the user's child has a birth defect (true positives) from tweets that merely mention birth defects (false positives). Inter-annotator agreement was substantial: κ = 0.79 (Cohen's kappa). Analyzing the timelines of the 646 users whose tweets were true positives resulted in the discovery of 195 users that met the inclusion criteria. Congenital heart defects are the most common type of birth defect reported on Twitter, consistent with findings in the general population. Based on an evaluation of 4169 tweets retrieved using alternative text mining methods, the recall of the tweet-collection approach was 0.95. CONCLUSIONS: Our contributions include (i) evidence that rare health-related events are indeed reported on Twitter, (ii) a generalizable, systematic NLP approach for collecting sparse tweets, (iii) a semi-automatic method to identify undetected tweets (false negatives), and (iv) a collection of publicly available tweets by pregnant users with birth defect outcomes, which could be used for future epidemiological analysis. In future work, the annotated tweets could be used to train machine learning algorithms to automatically identify users reporting birth defect outcomes, enabling the large-scale use of social media mining as a complementary method for such epidemiological research.
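
A minimal sketch of the lexicon-driven retrieval step is given below: a small lexicon of terms and their generated lexical variants is compiled into regular expressions for matching tweets. The terms and variants shown are illustrative placeholders, not entries from the study's lexicon.

```python
import re

# Illustrative lexicon entries mapped to generated lexical variants.
LEXICON = {
    "spina bifida": ["spina bifida", "spinabifida", "spina biffida"],
    "cleft palate": ["cleft palate", "cleft pallate"],
}

PATTERNS = {
    term: re.compile(r"\b(?:" + "|".join(map(re.escape, variants)) + r")\b", re.I)
    for term, variants in LEXICON.items()
}

def retrieve(tweet):
    """Return the lexicon terms whose variants appear in the tweet."""
    return [term for term, pat in PATTERNS.items() if pat.search(tweet)]

print(retrieve("Our son was born with spina biffida last spring"))  # ['spina bifida']
```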


Subject(s)
Congenital Abnormalities/diagnosis; Data Collection/methods; Data Mining/methods; Heart Defects, Congenital/diagnosis; Social Media; Algorithms; Congenital Abnormalities/epidemiology; Europe; False Positive Reactions; Female; Georgia; Humans; Illinois; Infant; Infant, Newborn; International Classification of Diseases; Machine Learning; Male; Natural Language Processing; Pregnancy; Reproducibility of Results; Unified Medical Language System; United States
18.
medRxiv ; 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-37503241

ABSTRACT

Background: There has been an unprecedented effort to sequence the SARS-CoV-2 virus and examine its molecular evolution. This has been facilitated by the availability of publicly accessible databases, the Global Initiative on Sharing All Influenza Data (GISAID) and GenBank, which collectively hold millions of SARS-CoV-2 sequence records. Genomic epidemiology, however, seeks to go beyond phylogenetic analysis by linking genetic information to patient characteristics and disease outcomes, enabling a comprehensive understanding of transmission dynamics and disease impact. While these repositories include fields reflecting patient-related metadata for a given sequence, inclusion of these demographic and clinical details is scarce. The extent to which patient-related metadata is reported in published sequencing studies, and its quality, remains largely unexplored. Methods: The NIH's LitCovid collection will be used for automated classification of articles that report having deposited SARS-CoV-2 sequences in public repositories, while an independent search will be conducted in PubMed for validation. Data extraction will be conducted using Covidence. The extracted data will be synthesized and summarized to quantify the availability of patient metadata in the published literature of SARS-CoV-2 sequencing studies. For the bibliometric analysis, relevant data points, such as author affiliations and citation metrics, will be extracted. Discussion: This scoping review will report on the extent and types of patient-related metadata reported in genomic viral sequencing studies of SARS-CoV-2, identify gaps in this reporting, and make recommendations for improving the quality and consistency of reporting in this area. The bibliometric analysis will uncover trends and patterns in the reporting of patient-related metadata, including differences in reporting based on study types or geographic regions. Co-occurrence networks of author keywords will also be presented. The insights gained from this study may help improve the quality and consistency of reporting patient metadata, enhancing the utility of sequence metadata and facilitating future research on infectious diseases.

19.
Drug Saf ; 47(1): 81-91, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37995049

ABSTRACT

INTRODUCTION: Hypertension is the leading cause of heart disease in the world, and discontinuation of or nonadherence to antihypertensive medication constitutes a significant global health concern. Patients with hypertension have high rates of medication nonadherence. Studies of reasons for nonadherence using traditional surveys are limited, can be expensive, and suffer from response, white-coat, and recall biases. Mining relevant posts by patients on social media is inexpensive and less impacted by the pressures and biases of formal surveys, and may provide direct insights into factors that lead to non-compliance with antihypertensive medication. METHODS: This study examined medication ratings posted to WebMD, an online health forum that allows patients to post medication reviews. We used a previously developed natural language processing classifier to extract indications and reasons for changes in angiotensin II receptor blocker (ARB) and angiotensin-converting enzyme inhibitor (ACEI) treatments. After extraction, ratings were manually annotated and compared with data from the US Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS) public database. RESULTS: From a collection of 343,459 WebMD reviews, we automatically extracted 1867 posts mentioning changes in ACEIs or ARBs, and manually reviewed the 300 most recent posts regarding ACEI treatments and the 300 most recent posts regarding ARB treatments. After excluding posts that only mentioned a dose change or were a false-positive mention, 142 posts in the ARBs dataset and 187 posts in the ACEIs dataset remained. The majority of posts (97% ARBs, 91% ACEIs) indicated experiencing an adverse event as the reason for medication change. The most common adverse events reported, mapped to the Medical Dictionary for Regulatory Activities, were "musculoskeletal and connective tissue disorders" like muscle and joint pain for ARBs, and "respiratory, thoracic, and mediastinal disorders" like cough and shortness of breath for ACEIs. These categories also had the largest differences in percentage points, appearing more frequently in the WebMD data than in the FDA data (p < 0.001). CONCLUSION: Musculoskeletal and respiratory symptoms were the most commonly reported adverse effects in social media postings associated with drug discontinuation. Managing such symptoms is a potential target of interventions seeking to improve medication persistence.


Subject(s)
Hypertension; Social Media; Humans; Antihypertensive Agents/adverse effects; Angiotensin-Converting Enzyme Inhibitors/adverse effects; Angiotensin Receptor Antagonists/therapeutic use; Hypertension/drug therapy; Patient Reported Outcome Measures
20.
J Am Med Inform Assoc ; 31(4): 991-996, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38218723

ABSTRACT

OBJECTIVE: The aim of the Social Media Mining for Health Applications (#SMM4H) shared tasks is to take a community-driven approach to address the natural language processing and machine learning challenges inherent to utilizing social media data for health informatics. In this paper, we present the annotated corpora, a technical summary of participants' systems, and the performance results. METHODS: The eighth iteration of the #SMM4H shared tasks was hosted at the AMIA 2023 Annual Symposium and consisted of 5 tasks that represented various social media platforms (Twitter and Reddit), languages (English and Spanish), methods (binary classification, multi-class classification, extraction, and normalization), and topics (COVID-19, therapies, social anxiety disorder, and adverse drug events). RESULTS: In total, 29 teams registered, representing 17 countries. In general, the top-performing systems used deep neural network architectures based on pre-trained transformer models. In particular, the top-performing systems for the classification tasks were based on single models that were pre-trained on social media corpora. CONCLUSION: To facilitate future work, the datasets-a total of 61 353 posts-will remain available by request, and the CodaLab sites will remain active for a post-evaluation phase.


Subject(s)
Social Media; Humans; Data Mining/methods; Machine Learning; Natural Language Processing; Neural Networks, Computer