Results 1 - 20 of 48
1.
Ann Surg ; 272(4): 629-636, 2020 10.
Article in English | MEDLINE | ID: mdl-32773639

ABSTRACT

OBJECTIVES: We present the development and validation of a portable NLP approach for automated surveillance of surgical site infections (SSIs). SUMMARY OF BACKGROUND DATA: The surveillance of SSIs is labor-intensive, limiting the generalizability and scalability of surgical quality surveillance programs. METHODS: We abstracted patient clinical text notes after surgical procedures from 2 independent healthcare systems using different electronic healthcare records. An SSI detected as part of the American College of Surgeons' National Surgical Quality Improvement Program was used as the reference standard. We developed a rules-based NLP system (Easy Clinical Information Extractor [CIE]-SSI) for operative event-level detection of SSIs using a training cohort (4574 operative events) from 1 healthcare system and then conducted internal validation on a blind cohort from the same healthcare system (1850 operative events) and external validation on a blind cohort from the second healthcare system (15,360 operative events). EasyCIE-SSI performance was measured using sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). RESULTS: The prevalence of SSI was 4% and 5% in the internal and external validation corpora, respectively. In internal validation, EasyCIE-SSI had a sensitivity, specificity, and AUC of 94%, 88%, and 0.912 for the detection of SSI. In external validation, EasyCIE-SSI had a sensitivity, specificity, and AUC of 79%, 92%, and 0.852 for the detection of SSI. The sensitivity of EasyCIE-SSI decreased in clean, skin/subcutaneous, and outpatient procedures in the external validation compared to internal validation. CONCLUSION: Automated surveillance of SSIs can be achieved using NLP of clinical notes with high sensitivity and specificity.
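The validation metrics reported above come from a standard confusion-matrix calculation. A minimal sketch, with illustrative counts rather than the study's data:

```python
# Sensitivity and specificity from confusion-matrix counts, as used to
# validate an SSI detector. All counts below are made up for illustration.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true SSI events the system flags (true positive rate)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of SSI-free events the system clears (true negative rate)."""
    return tn / (tn + fp)

# Hypothetical counts for a low-prevalence validation cohort:
tp, fn, fp, tn = 66, 4, 12, 88
print(f"sensitivity={sensitivity(tp, fn):.2f} specificity={specificity(tn, fp):.2f}")
```

Note how a low-prevalence setting like this one (4-5% SSI rate) lets specificity stay high even with a nontrivial number of false positives.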


Subjects
Mobile Applications, Natural Language Processing, Surgical Wound Infection/diagnosis, Adult, Aged, Cohort Studies, Female, Humans, Male, Middle Aged, Population Surveillance/methods, Quality Improvement, Surgical Procedures, Operative/standards
2.
BMC Med Inform Decis Mak ; 19(Suppl 3): 70, 2019 04 04.
Article in English | MEDLINE | ID: mdl-30943963

ABSTRACT

BACKGROUND: A shareable repository of clinical notes is critical for advancing natural language processing (NLP) research; a goal of many NLP researchers is therefore to create such a repository with breadth (notes from multiple institutions) as well as depth (as much individual data as possible). METHODS: We aimed to assess the degree to which individuals would be willing to contribute their health data to such a repository. A compact e-survey probed willingness to share demographic and clinical data categories. Participants were faculty, staff, and students in two geographically diverse major medical centers (Utah and New York). Such a sample could be expected to respond like a typical potential participant from the general public who is given complete and fully informed consent about the pros and cons of participating in a research study. RESULTS: Two thousand one hundred forty respondents completed the surveys. 56% of respondents were "somewhat/definitely willing" to share clinical data with identifiers, while 89% of respondents were "somewhat (17%)/definitely willing (72%)" to share without identifiers. Results were consistent across gender, age, and education, but there were some differences by geographical region. Individuals were most reluctant (50-74%) to share mental health, substance abuse, and domestic violence data. CONCLUSIONS: We conclude that a substantial fraction of potential patient participants, once educated about risks and benefits, would be willing to donate de-identified clinical data to a shared research repository. A slight majority would even be willing to share without de-identification, suggesting that perceptions about data misuse are not a major concern. Such a repository of clinical notes should be invaluable for clinical NLP research and advancement.


Subjects
Academic Medical Centers, Biomedical Research, Health Personnel, Information Dissemination, Natural Language Processing, Adolescent, Adult, Confidentiality, Female, Humans, Informed Consent, Male, Middle Aged, New York, Patient Participation, Surveys and Questionnaires, Young Adult
3.
J Biomed Inform ; 85: 106-113, 2018 09.
Article in English | MEDLINE | ID: mdl-30092358

ABSTRACT

OBJECTIVE: To develop and evaluate an efficient Trie structure for large-scale, rule-based clinical natural language processing (NLP), which we call n-trie. BACKGROUND: Despite the popularity of machine learning techniques in natural language processing, rule-based systems boast important advantages: distinctive transparency, ease of incorporating external knowledge, and less demanding annotation requirements. However, processing efficiency remains a major obstacle to adopting standard rule-based NLP solutions in big data analyses. METHODS: We developed n-trie to specifically address the token-based nature of context detection, an important facet of clinical NLP that is known to slow down NLP pipelines. N-trie, a new rule processing engine using a revised Trie structure, allows fast execution of lexicon-based NLP rules. To determine its applicability and evaluate its performance, we applied the n-trie engine in an implementation (called FastContext) of the ConText algorithm and compared its processing speed and accuracy with JavaConText and GeneralConText, two widely used Java ConText implementations, as well as with a standalone machine learning NegEx implementation, NegScope. RESULTS: The n-trie engine ran two orders of magnitude faster and was far less sensitive to rule set size than the comparison implementations, and it proved faster than the best machine learning negation detector. Additionally, the engine consistently gained accuracy as the rule set grew (the desired outcome of adding new rules), while the other implementations did not. CONCLUSIONS: The n-trie engine is an efficient, scalable engine for NLP rule processing and shows potential for application in other NLP tasks beyond context detection.
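The core idea of a token-level trie for lexicon rules can be sketched in a few lines: phrases share prefixes in a nested-dict trie, so scanning a note costs roughly one dict lookup per token rather than one comparison per rule. This is a hypothetical illustration of the general technique, not the FastContext/n-trie API:

```python
# Token-level trie for lexicon rule matching (illustrative sketch).
# Each phrase is stored token by token; "$end" marks a complete rule.

def build_trie(phrases):
    root = {}
    for phrase in phrases:
        node = root
        for tok in phrase.lower().split():
            node = node.setdefault(tok, {})
        node["$end"] = phrase  # a full rule terminates here
    return root

def find_matches(tokens, root):
    """Return (start_index, matched_phrase) pairs; longest match wins per start."""
    hits = []
    tokens = [t.lower() for t in tokens]
    for i in range(len(tokens)):
        node, best = root, None
        for j in range(i, len(tokens)):
            if tokens[j] not in node:
                break
            node = node[tokens[j]]
            if "$end" in node:
                best = node["$end"]
        if best:
            hits.append((i, best))
    return hits

rules = ["no evidence of", "denies", "ruled out"]
trie = build_trie(rules)
print(find_matches("Patient denies fever and chest pain".split(), trie))  # [(1, 'denies')]
```

Because unmatched tokens fail at the first dict lookup, adding more rules grows the trie but barely affects scan time, which is the scalability property the abstract emphasizes.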


Subjects
Algorithms, Natural Language Processing, Computational Biology, Databases, Factual, Humans, Machine Learning
4.
J Biomed Inform ; 64: 265-272, 2016 12.
Article in English | MEDLINE | ID: mdl-27989816

ABSTRACT

OBJECTIVES: Extracting data from publication reports is a standard process in systematic review (SR) development. However, data extraction still relies heavily on manual effort, which is slow, costly, and subject to human error. In this study, we developed a text summarization system aimed at enhancing productivity and reducing errors in the traditional data extraction process. METHODS: We developed a computer system that used machine learning and natural language processing approaches to automatically generate summaries of full-text scientific publications. The summaries at the sentence and fragment levels were evaluated for finding common clinical SR data elements such as sample size, group size, and PICO values. We compared the computer-generated summaries with human-written summaries (title and abstract) in terms of the presence of the information necessary for data extraction, as presented in the Cochrane review study characteristics tables. RESULTS: At the sentence level, the computer-generated summaries covered more of the information needed for systematic reviews than the human-written summaries (recall 91.2% vs. 83.8%, p<0.001). They also had a better density of relevant sentences (precision 59% vs. 39%, p<0.001). At the fragment level, the ensemble approach combining rule-based, concept mapping, and dictionary-based methods performed better than individual methods alone, achieving an 84.7% F-measure. CONCLUSION: Computer-generated summaries are potential alternative information sources for data extraction in systematic review development. Machine learning and natural language processing are promising approaches to the development of such an extractive summarization system.
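A rule-based component like those combined in the ensemble above can be sketched as sentence scoring against data-element patterns. The patterns and threshold here are hypothetical, not the authors' system:

```python
import re

# Extractive selection sketch: keep sentences that look like they contain
# SR data elements (e.g., sample size, randomization). Patterns are toy rules.

PATTERNS = {
    "sample_size": re.compile(r"\b\d+\s+(patients|participants|subjects)\b", re.I),
    "randomization": re.compile(r"\brandomi[sz]ed\b", re.I),
}

def score_sentence(sentence: str) -> int:
    """Count how many data-element patterns a sentence matches."""
    return sum(1 for p in PATTERNS.values() if p.search(sentence))

def summarize(sentences, threshold=1):
    """Return sentences matching at least `threshold` patterns."""
    return [s for s in sentences if score_sentence(s) >= threshold]

doc = [
    "We randomized 120 patients to two groups.",
    "Funding was provided by the national institute.",
]
print(summarize(doc))  # keeps only the first sentence
```

The recall/precision trade-off in the abstract maps directly onto the threshold: lowering it keeps more sentences (higher recall, lower precision), as with the sentence-level summaries reported above.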


Subjects
Machine Learning, Natural Language Processing, Systematic Reviews as Topic, Humans, Data Mining, Language, Publications
6.
J Biomed Inform ; 52: 121-9, 2014 Dec.
Article in English | MEDLINE | ID: mdl-24929181

ABSTRACT

Institutional Review Boards (IRBs) are a critical component of clinical research and can become a significant bottleneck due to the dramatic increase in both the volume and complexity of clinical research. Despite the interest in developing clinical research informatics (CRI) systems and supporting data standards to increase clinical research efficiency and interoperability, informatics research in the IRB domain has not attracted much attention in the scientific community. The lack of standardized and structured application forms across different IRBs causes inefficient and inconsistent proposal reviews and cumbersome workflows. These issues are even more prominent in multi-institutional clinical research, which is rapidly becoming the norm. This paper proposes and evaluates a domain analysis model for electronic IRB (eIRB) systems, paving the way for streamlined clinical research workflow via integration with other CRI systems and improved IRB application throughput via computer-assisted decision support.


Subjects
Biomedical Research, Ethics Committees, Research, Medical Informatics, Biomedical Research/methods, Biomedical Research/standards, Humans, Medical Informatics/methods, Medical Informatics/standards, Models, Theoretical
7.
Acad Med ; 98(11): 1278-1282, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37506388

ABSTRACT

PROBLEM: Although holistic review has been used successfully in some residency programs to decrease bias, such review is time-consuming and unsustainable for many programs without initial prescreening. The unstructured qualitative data in residency applications, including notable experiences, letters of recommendation, personal statement, and medical student performance evaluations, require extensive time, resources, and metrics to evaluate; therefore, previous applicant screening relied heavily on quantitative metrics, which can be socioeconomically and racially biased. APPROACH: Using residency applications to the University of Utah internal medicine-pediatrics program from 2015 to 2019, the authors extracted relevant snippets of text from the narrative sections of applications. Expert reviewers annotated these snippets into specific values (academic strength; intellectual curiosity; compassion; communication; work ethic; teamwork; leadership; self-awareness; diversity, equity, and inclusion; professionalism; and adaptability) previously identified as associated with resident success. The authors prospectively applied a machine learning model (MLM) to snippets from applications from 2023, and output was compared with a manual holistic review performed without knowledge of MLM results. OUTCOMES: Overall, the MLM had a sensitivity of 0.64, specificity of 0.97, positive predictive value of 0.62, negative predictive value of 0.97, and F1 score of 0.63. The mean (SD) total number of annotations per application was significantly correlated with invited for interview status (invited: 208.6 [59.1]; not invited: 145.2 [57.2]; P < .001). In addition, 8 of the 10 individual values were significantly predictive of an applicant's invited for interview status. NEXT STEPS: The authors created an MLM that can identify several values important for resident success in internal medicine-pediatrics programs with moderate sensitivity and high specificity. 
The authors will continue to refine the MLM by increasing the number of annotations, exploring parameter tuning and feature engineering options, and identifying which application sections have the highest correlation with invited for interview status.


Subjects
Internship and Residency, Humans, Child, Natural Language Processing, Internal Medicine/education, Professionalism, Communication
8.
BMC Med Inform Decis Mak ; 12: 41, 2012 May 23.
Article in English | MEDLINE | ID: mdl-22621674

ABSTRACT

BACKGROUND: PubMed data potentially can provide decision support information, but PubMed was not exclusively designed to be a point-of-care tool. Natural language processing applications that summarize PubMed citations hold promise for extracting decision support information. The objective of this study was to evaluate the efficiency of a text summarization application called Semantic MEDLINE, enhanced with a novel dynamic summarization method, in identifying decision support data. METHODS: We downloaded PubMed citations addressing the prevention and drug treatment of four disease topics. We then processed the citations with Semantic MEDLINE, enhanced with the dynamic summarization method. We also processed the citations with a conventional summarization method, as well as with a baseline procedure. We evaluated the results using clinician-vetted reference standards built from recommendations in a commercial decision support product, DynaMed. RESULTS: For the drug treatment data, Semantic MEDLINE enhanced with dynamic summarization achieved average recall and precision scores of 0.848 and 0.377, while conventional summarization produced 0.583 average recall and 0.712 average precision, and the baseline method yielded average recall and precision values of 0.252 and 0.277. For the prevention data, Semantic MEDLINE enhanced with dynamic summarization achieved average recall and precision scores of 0.655 and 0.329. The baseline technique resulted in recall and precision scores of 0.269 and 0.247. No conventional Semantic MEDLINE summarization method exists for prevention topics. CONCLUSION: Semantic MEDLINE with dynamic summarization outperformed conventional summarization in terms of recall, and outperformed the baseline method in both recall and precision. This new approach to text summarization demonstrates potential in identifying decision support data for multiple needs.


Subjects
Algorithms, Decision Support Techniques, Information Storage and Retrieval/methods, Semantics, Diabetes Mellitus, Type 2/drug therapy, Diabetes Mellitus, Type 2/prevention & control, Heart Failure/drug therapy, Heart Failure/prevention & control, Humans, Hypertension/drug therapy, Hypertension/prevention & control, MEDLINE, Natural Language Processing, Pneumonia, Pneumococcal/drug therapy, PubMed
9.
BMC Med Inform Decis Mak ; 11: 6, 2011 Feb 01.
Article in English | MEDLINE | ID: mdl-21284871

ABSTRACT

BACKGROUND: Traditional information retrieval techniques typically return excessive output when directed at large bibliographic databases. Natural Language Processing applications strive to extract salient content from the excessive data. Semantic MEDLINE, a National Library of Medicine (NLM) natural language processing application, highlights relevant information in PubMed data. However, Semantic MEDLINE implements manually coded schemas, accommodating few information needs. Currently, there are only five such schemas, while many more would be needed to realistically accommodate all potential users. The aim of this project was to develop and evaluate a statistical algorithm that automatically identifies relevant bibliographic data; the new algorithm could be incorporated into a dynamic schema to accommodate various information needs in Semantic MEDLINE, and eliminate the need for multiple schemas. METHODS: We developed a flexible algorithm named Combo that combines three statistical metrics, the Kullback-Leibler Divergence (KLD), Riloff's RlogF metric (RlogF), and a new metric called PredScal, to automatically identify salient data in bibliographic text. We downloaded citations from a PubMed search query addressing the genetic etiology of bladder cancer. The citations were processed with SemRep, an NLM rule-based application that produces semantic predications. SemRep output was processed by Combo, in addition to the standard Semantic MEDLINE genetics schema and independently by the two individual KLD and RlogF metrics. We evaluated each summarization method using an existing reference standard within the task-based context of genetic database curation. RESULTS: Combo asserted 74 genetic entities implicated in bladder cancer development, whereas the traditional schema asserted 10 genetic entities; the KLD and RlogF metrics individually asserted 77 and 69 genetic entities, respectively. Combo achieved 61% recall and 81% precision, with an F-score of 0.69. 
The traditional schema achieved 23% recall and 100% precision, with an F-score of 0.37. The KLD metric achieved 61% recall and 70% precision, with an F-score of 0.65. The RlogF metric achieved 61% recall and 72% precision, with an F-score of 0.66. CONCLUSIONS: Semantic MEDLINE summarization using the new Combo algorithm outperformed a conventional summarization schema in a genetic database curation task. It could potentially streamline information acquisition for other needs without the need to hand-build multiple saliency schemas.
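The Kullback-Leibler divergence component of Combo measures how far the distribution of predication types in the retrieved citations departs from a background distribution; terms that are disproportionately frequent in the result set stand out. A from-scratch sketch with toy distributions (not SemRep output):

```python
import math

# KL divergence D(P || Q) between a foreground distribution P (e.g.,
# predication types in the retrieved citations) and a background Q.
# Distributions below are illustrative, not derived from real data.

def kld(p: dict, q: dict) -> float:
    """D(P || Q) in bits over a shared vocabulary; assumes q[w] > 0 for all w in p."""
    return sum(p[w] * math.log2(p[w] / q[w]) for w in p if p[w] > 0)

foreground = {"gene_assoc": 0.5, "treats": 0.3, "other": 0.2}
background = {"gene_assoc": 0.1, "treats": 0.4, "other": 0.5}
print(round(kld(foreground, background), 3))
```

A large divergence driven by a particular predication type (here the over-represented `gene_assoc`) is the kind of saliency signal a dynamic schema can exploit instead of a hand-coded one.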


Subjects
Algorithms, Databases, Bibliographic, Information Storage and Retrieval/methods, Databases, Factual, Internet, MEDLINE, National Library of Medicine (U.S.), Natural Language Processing, United States
10.
Surgery ; 170(4): 1175-1182, 2021 10.
Article in English | MEDLINE | ID: mdl-34090671

ABSTRACT

BACKGROUND: The objective of this study was to develop a portable natural language processing approach to aid in the identification of postoperative venous thromboembolism events from free-text clinical notes. METHODS: We abstracted clinical notes from 25,494 operative events from 2 independent health care systems. A venous thromboembolism detected as part of the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) was used as the reference standard. A natural language processing engine, easy clinical information extractor-pulmonary embolism/deep vein thrombosis (EasyCIE-PEDVT), was trained to detect pulmonary embolism and deep vein thrombosis from clinical notes. International Classification of Diseases (ICD) discharge diagnosis codes for venous thromboembolism were used as baseline comparators. The classification performance of EasyCIE-PEDVT was compared with that of the ICD codes in terms of sensitivity, specificity, and area under the receiver operating characteristic curve, using internal and external validation cohorts. RESULTS: To detect pulmonary embolism, EasyCIE-PEDVT had a sensitivity of 0.714 and 0.815 in internal and external validation, respectively. To detect deep vein thrombosis, EasyCIE-PEDVT had a sensitivity of 0.846 and 0.849 in internal and external validation, respectively. EasyCIE-PEDVT had significantly higher discrimination for deep vein thrombosis compared with ICD codes in internal validation (area under the receiver operating characteristic curve: 0.920 vs 0.761; P < .001) and external validation (area under the receiver operating characteristic curve: 0.921 vs 0.794; P < .001). There was no significant difference in the discrimination for pulmonary embolism between EasyCIE-PEDVT and ICD codes.
CONCLUSION: Accurate surveillance of postoperative venous thromboembolism may be achieved using natural language processing on clinical notes in 2 independent health care systems. These findings suggest natural language processing may augment manual chart abstraction for large registries such as NSQIP.


Subjects
Natural Language Processing, Postoperative Complications/diagnosis, Quality Improvement, Venous Thrombosis/diagnosis, Cohort Studies, Female, Humans, Male, Middle Aged, ROC Curve, Retrospective Studies
11.
J Med Libr Assoc ; 98(4): 273-81, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20936065

ABSTRACT

OBJECTIVE: This paper examines the development and evaluation of an automatic summarization system in the domain of molecular genetics. The system is a potential component of an advanced biomedical information management application called Semantic MEDLINE and could assist librarians in developing secondary databases of genetic information extracted from the primary literature. METHODS: An existing summarization system was modified for identifying biomedical text relevant to the genetic etiology of disease. The summarization system was evaluated on the task of identifying data describing genes associated with bladder cancer in MEDLINE citations. A gold standard was produced using records from Genetics Home Reference and Online Mendelian Inheritance in Man. Genes in text found by the system were compared to the gold standard. Recall, precision, and F-measure were calculated. RESULTS: The system achieved recall of 46% and precision of 88% (F-measure=0.61) by taking Gene References into Function (GeneRIFs) into account. CONCLUSION: The new summarization schema for genetic etiology has potential as a component in Semantic MEDLINE to support the work of data curators.


Subjects
Databases, Genetic, Information Storage and Retrieval/methods, MEDLINE, Natural Language Processing, Semantics, Terminology as Topic, Genetics, Medical, Humans, Subject Headings, United States
12.
Stud Health Technol Inform ; 160(Pt 2): 944-8, 2010.
Article in English | MEDLINE | ID: mdl-20841823

ABSTRACT

An important proportion of the information about the medications a patient is taking is mentioned only in narrative text in the electronic health record. Automated information extraction can make this information accessible for decision support, research, or any other automated processing. In the context of the "i2b2 medication extraction challenge," we have developed a new NLP application called Textractor to automatically extract medications and details about them (e.g., dosage, frequency, reason for their prescription). This application and its evaluation with part of the reference standard for this "challenge" are presented here, along with an analysis of the development of this reference standard. During this evaluation, Textractor reached a system-level overall F1-measure, the reference metric for this challenge, of about 77% for exact matches. The best performance was measured with medication routes (F1-measure 86.4%), and the worst with prescription reasons (F1-measure 29%). These results are consistent with the agreement observed between human annotators when developing the reference standard, and with other published research.


Subjects
Drug Prescriptions, Information Storage and Retrieval/methods, Electronic Health Records/standards, Humans, Natural Language Processing, Vocabulary, Controlled
13.
J Acad Nutr Diet ; 119(1): 45-56, 2019 01.
Article in English | MEDLINE | ID: mdl-30413342

ABSTRACT

BACKGROUND: Household food purchases are potential indicators of the quality of the home food environment, and grocery purchase behavior is a main focus of US Department of Agriculture (USDA) nutrition education programs; therefore, objective measures of grocery purchases are needed. OBJECTIVE: The objective of the study was to evaluate the Grocery Purchase Quality Index-2016 (GPQI-2016) as a tool for assessing grocery food purchase quality by using the Healthy Eating Index-2015 (HEI-2015) as the reference standard. DESIGN: In 2012, the USDA Economic Research Service conducted the National Household Food Acquisition and Purchase Survey. Members of participating households recorded all foods acquired for a week. Foods purchased at stores were mapped to the 29 food categories used in USDA Food Plans, expenditure shares were estimated, and GPQI-2016 scores were calculated. USDA food codes, provided in the survey database, were used to calculate the HEI-2015. PARTICIPANTS/SETTING: All households in the 48 coterminous states were eligible for the survey. The analytic sample size was 4,276 households. MAIN OUTCOME MEASURES: GPQI-2016 and HEI-2015 scores were compared. STATISTICAL ANALYSES PERFORMED: Correlation of scores was assessed using Spearman's correlation coefficient. Linear regression models with fixed effects were used to determine differences among various subgroups of households. RESULTS: The correlation coefficient for the total GPQI-2016 score and the total HEI-2015 score was 0.70. For the component scores, the strongest correlations were for Total and Whole Fruit (0.89 to 0.90); the weakest were for Dairy (0.67), Refined Grains (0.66), and Sweets and Sodas/Added Sugars (0.65) (all, P<0.01). Both the GPQI-2016 and HEI-2015 were significantly different among subgroups in expected directions. CONCLUSIONS: Overall, the GPQI-2016, estimated from a national survey of households, performed similarly to the HEI-2015. 
The tool has potential for evaluating nutrition education programs and retail-oriented interventions when the nutrient content and gram weights of foods purchased are not available.
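The headline result above is a Spearman rank correlation between the two index scores, which can be computed from scratch via the rank-difference formula (assuming no ties). The household scores below are made up for illustration:

```python
# Spearman rank correlation: rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
# where d_i is the difference between the ranks of paired observations.
# Valid in this simple form only when there are no tied values.

def ranks(xs):
    """Rank of each value (1 = smallest), assuming no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical per-household GPQI-2016 and HEI-2015 total scores:
gpqi = [55, 62, 70, 48, 66]
hei = [60, 58, 72, 50, 69]
print(round(spearman(gpqi, hei), 2))  # 0.9
```

Because Spearman works on ranks, it captures the monotone agreement between the two scoring systems without assuming their scales are linearly related, which suits comparing differently constructed indexes.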


Subjects
Commerce/statistics & numerical data, Consumer Behavior/statistics & numerical data, Diet, Healthy/methods, Food Preferences, Food Quality, Food/statistics & numerical data, Candy/classification, Carbonated Beverages/classification, Dairy Products/classification, Edible Grain/classification, Family Characteristics, Food/classification, Food/economics, Fruit/classification, Health Behavior, Humans, Nutritive Value, Socioeconomic Factors, Surveys and Questionnaires, United States, United States Department of Agriculture
14.
J Biomed Inform ; 41(6): 944-52, 2008 Dec.
Article in English | MEDLINE | ID: mdl-18442951

ABSTRACT

This study predicted graft and recipient survival in kidney transplantation from the USRDS dataset using regression models and artificial neural networks (ANNs). We examined single time-point models (logistic regression and single-output ANNs) versus multiple time-point models (Cox models and multiple-output ANNs). These models in general achieved good prediction discrimination (AUC up to 0.82) and model calibration. This study found that: (1) Single time-point and multiple time-point models can achieve comparable AUC, except for multiple-output ANNs, which may perform poorly when a large proportion of observations are censored; (2) Logistic regression is able to achieve comparable performance to ANNs if there are no strong interactions or non-linear relationships among the predictors and the outcomes; (3) Time-varying effects must be modeled explicitly in Cox models when predictors have significantly different effects on short-term versus long-term survival; and (4) An appropriate baseline survivor function should be specified for Cox models to achieve good model calibration, especially when clinical decision support is designed to provide exact predicted survival rates.


Subjects
Kidney Transplantation, Models, Theoretical, Humans, Logistic Models, Proportional Hazards Models
15.
Arch Intern Med ; 167(10): 1041-9, 2007 May 28.
Article in English | MEDLINE | ID: mdl-17533207

ABSTRACT

BACKGROUND: The Food and Drug Administration (FDA) and pharmaceutical manufacturers conduct most postmarketing pharmaceutical safety investigations. These efforts are frequently based on data mining of databases. In 1998, investigators initiated the Research on Adverse Drug events And Reports (RADAR) project to investigate reports of serious adverse drug reactions (ADRs) and prospectively obtain information on these cases. We compare safety efforts for evaluating serious ADRs conducted by the FDA and pharmaceutical manufacturers vs the RADAR project. METHODS: We evaluated the completeness of serious ADR descriptions in the FDA and RADAR databases and the comprehensiveness of notifications disseminated by pharmaceutical manufacturers and the RADAR investigators. A serious ADR was defined as an event that led to death or required intensive therapies to reverse. RESULTS: The RADAR investigators evaluated 16 serious ADRs. Compared with descriptions of these ADRs in FDA databases (2296 reports), reports in RADAR databases (472 reports) had a 2-fold higher rate of including information on history and physical examination (92% vs 45%; P<.001) and a 9-fold higher rate of including basic science findings (34% vs 4%; P = .08). Safety notifications were disseminated earlier by pharmaceutical suppliers (2 vs 4 years after approval, respectively), although notifications were less likely to include information on incidence (46% vs 93%; P = .02), outcomes (8% vs 100%; P<.001), treatment or prophylaxis (25% vs 93%; P<.001), or references (8% vs 80%; P<.001). CONCLUSION: Proactive safety efforts conducted by the RADAR investigators are more comprehensive than those conducted by the FDA and pharmaceutical manufacturers, but dissemination of related safety notifications is less timely.


Subjects
Drug Industry, Drug-Related Side Effects and Adverse Reactions, Product Surveillance, Postmarketing/methods, United States Food and Drug Administration, Databases, Factual, Humans, Information Dissemination, Prospective Studies, United States
16.
J Am Med Inform Assoc ; 14(4): 391-3, 2007.
Article in English | MEDLINE | ID: mdl-17460125

ABSTRACT

The AMIA Board of Directors has decided to periodically publish AMIA's Code of Professional Ethical Conduct for its members in the Journal of the American Medical Informatics Association. The Code also will be available on the AMIA Web site at www.amia.org as it continues to evolve in response to feedback from the AMIA membership. The AMIA Board acknowledges the continuing work and dedication of the AMIA Ethics Committee. AMIA is the copyright holder of this work.


Subjects
Codes of Ethics, Medical Informatics/ethics, Societies, Medical/ethics, United States
17.
J Biomed Inform ; 40(2): 174-82, 2007 Apr.
Article in English | MEDLINE | ID: mdl-16901760

ABSTRACT

Methods for surveillance of adverse events (AEs) in clinical settings are limited by cost, technology, and appropriate data availability. In this study, two methods for semi-automated review of text records within the Veterans Administration database are utilized to identify AEs related to the placement of central venous catheters (CVCs): a natural language processing program and a phrase-matching algorithm. A sample of manually reviewed records was then compared to the results of both methods to assess sensitivity and specificity. The phrase-matching algorithm was found to be a sensitive but relatively non-specific method, whereas the natural language processing system was significantly more specific but less sensitive. Positive predictive values for each method estimated the CVC-associated AE rate at this institution to be 6.4% and 6.2%, respectively. Using both methods together results in acceptable sensitivity and specificity (72.0% and 80.1%, respectively). All methods, including manual chart review, are limited by incomplete or inaccurate clinician documentation. A secondary finding was related to the completeness of administrative data (ICD-9 and CPT codes) used to identify intensive care unit patients in whom a CVC was placed. Administrative data identified less than 11% of patients who had a CVC placed. This suggests that other methods, including automated methods such as phrase matching, may be more sensitive than administrative data in identifying patients with devices. Considerable potential exists for the use of such methods for the identification of patients at risk, AE surveillance, and prevention of AEs through decision support technologies.
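One way to combine a sensitive phrase matcher with a more specific NLP classifier is to flag a record only when both agree, trading some sensitivity for specificity. The phrase list, combination rule, and the `nlp_flag` stand-in below are all hypothetical; the study does not specify its exact combination logic:

```python
# Combining two AE detectors (illustrative sketch). A phrase matcher
# casts a wide net; requiring agreement with a second, more specific
# signal suppresses its false positives.

PHRASES = ["pneumothorax", "line infection", "arterial puncture"]  # toy lexicon

def phrase_match(note: str) -> bool:
    """Sensitive but non-specific: fire on any AE-related phrase."""
    text = note.lower()
    return any(p in text for p in PHRASES)

def combined(note: str, nlp_flag: bool) -> bool:
    """AND-combination; nlp_flag stands in for the NLP system's judgment."""
    return phrase_match(note) and nlp_flag

note = "CVC placed; small apical pneumothorax noted on CXR."
print(combined(note, nlp_flag=True))
```

An OR-combination would instead maximize sensitivity at the cost of specificity; which rule is appropriate depends on whether missed AEs or review workload is the bigger concern.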


Subjects
Catheterization, Central Venous/adverse effects , Database Management Systems , Information Storage and Retrieval/methods , Medical Errors , Medical Records Systems, Computerized , Natural Language Processing , Pattern Recognition, Automated/methods , Artificial Intelligence , Humans
18.
Am J Health Syst Pharm ; 64(8): 842-9, 2007 Apr 15.
Article in English | MEDLINE | ID: mdl-17420201

ABSTRACT

PURPOSE: A systematic review and meta-analysis were conducted to determine whether studies that used pharmacists as chart reviewers detected higher rates of adverse drug events (ADEs) than studies that used other health care professionals or hospital personnel as chart reviewers. METHODS: A systematic review and meta-analysis of studies using chart review as the method of ADE detection were conducted. Pooled estimates of the ADE rates were calculated using the inverse-variance weight method. Meta-analysis was performed using a random-effects model. Weighted rates of studies in which pharmacists versus other clinicians were the chart reviewers were compared using the Mann-Whitney U test. RESULTS: Thirteen studies satisfied the inclusion criteria. Using random-effects meta-analysis, the mean weighted incidence rate detected by pharmacists was 0.33 ADE per admission (95% confidence interval [CI], 0.17-0.50); the mean was 0.16 ADE per admission (95% CI, 0.11-0.22) with detection by nonpharmacists. Significant heterogeneity was present between studies in both groups. A significant difference (p = 0.003) existed between the ADE rate reported by pharmacists (median = 0.23; interquartile range [IQR], 0.18-0.44) and that of nonpharmacists (median = 0.12; IQR, 0.02-0.49). Although statistical heterogeneity was substantial, the difference between the ADE rates detected by the two groups was large enough to be significant. Despite the heterogeneity, there is strong evidence that pharmacist-led interventions based on chart review report a higher ADE rate among inpatients. CONCLUSION: A review of the literature revealed that pharmacists make a salient contribution as manual chart reviewers in inpatient ADE interventions.
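The inverse-variance pooling step named in the METHODS can be sketched as follows. The per-study rates and standard errors are invented for illustration; and note this shows only the fixed-effect weighting core (w = 1/SE²), whereas the review's random-effects model would additionally add a between-study variance component (e.g. DerSimonian-Laird) to each weight's denominator.

```python
# Sketch of inverse-variance (fixed-effect) pooling of per-study ADE rates.
# The rates and standard errors below are illustrative, not from the review.
studies = [
    {"rate": 0.30, "se": 0.05},
    {"rate": 0.40, "se": 0.10},
    {"rate": 0.25, "se": 0.04},
]

weights = [1.0 / s["se"] ** 2 for s in studies]  # w_i = 1 / SE_i^2
pooled = sum(w * s["rate"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5          # SE of the pooled estimate

print(f"pooled rate = {pooled:.3f} (SE {pooled_se:.3f})")
```

Precise studies (small SE) dominate the pooled estimate, which is why the pooled rate lands near the most precise study rather than at the unweighted mean.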


Subjects
Adverse Drug Reaction Reporting Systems , Medical Audit/methods , Pharmacists , Hospitals , Models, Statistical , Personnel, Hospital , Pharmacy Service, Hospital , Professional Role
19.
Nutrients ; 9(5)2017 May 05.
Article in English | MEDLINE | ID: mdl-28475153

ABSTRACT

This study presents a method laying the groundwork for systematically monitoring food quality and the healthfulness of consumers' point-of-sale grocery purchases. The method automates the process of identifying United States Department of Agriculture (USDA) Food Patterns Equivalent Database (FPED) components of grocery food items. The input to the process is the compact abbreviated descriptions of food items that are similar to those appearing on the point-of-sale sales receipts of most food retailers. The FPED components of grocery food items are identified using Natural Language Processing techniques combined with a collection of food concept maps and relationships that are manually built using the USDA Food and Nutrient Database for Dietary Studies, the USDA National Nutrient Database for Standard Reference, the What We Eat In America food categories, and the hierarchical organization of food items used by many grocery stores. We have established the construct validity of the method using data from the National Health and Nutrition Examination Survey, but further evaluation of validity and reliability will require a large-scale reference standard with known grocery food quality measures. Here we evaluate the method's utility in identifying the FPED components of grocery food items available in a large sample of retail grocery sales data (~190 million transaction records).
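The core task described above — mapping a compact, abbreviated receipt description to a food-pattern component — can be sketched very roughly as below. The abbreviation dictionary and the FPED-style group table are invented for illustration; the study itself builds concept maps from the USDA FNDDS and SR databases and applies NLP techniques, not a flat lookup like this.

```python
# Rough sketch: expand receipt-style abbreviations, then map a recognized
# food word to a food-pattern group. Both tables are invented for illustration.
from typing import Optional

ABBREV = {"whl": "whole", "mlk": "milk", "chkn": "chicken",
          "brst": "breast", "gal": "gallon"}
FOOD_GROUPS = {"milk": "Dairy", "chicken": "Protein Foods", "apple": "Fruits"}

def classify_receipt_item(desc: str) -> Optional[str]:
    """Expand abbreviations, then return the group of the first known food word."""
    words = [ABBREV.get(w, w) for w in desc.lower().split()]
    for w in words:
        if w in FOOD_GROUPS:
            return FOOD_GROUPS[w]
    return None  # non-food or unrecognized item

print(classify_receipt_item("WHL MLK GAL"))  # Dairy
```

The hard part the study addresses — and this sketch does not — is that real receipt strings are ambiguous and retailer-specific, which is why the authors combine NLP with manually built concept maps rather than a fixed dictionary.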


Subjects
Consumer Behavior , Food Quality , Databases, Factual , Diet, Healthy , Humans , Marketing , Nutrition Surveys , Reproducibility of Results , United States , United States Department of Agriculture
20.
Arch Intern Med ; 165(10): 1111-6, 2005 May 23.
Article in English | MEDLINE | ID: mdl-15911723

ABSTRACT

BACKGROUND: Numerous studies have shown that specific computerized interventions may reduce medication errors, but few have examined adverse drug events (ADEs) across all stages of the computerized medication process. We describe the frequency and type of inpatient ADEs that occurred following the adoption of multiple computerized medication ordering and administration systems, including computerized physician order entry (CPOE). METHODS: Using explicit standardized criteria, pharmacists classified inpatient ADEs from prospective daily reviews of electronic medical records from a random sample of all admissions during a 20-week period at a Veterans Administration hospital. We analyzed ADEs that necessitated a changed treatment plan. RESULTS: Among 937 hospital admissions, 483 clinically significant inpatient ADEs were identified, accounting for 52 ADEs per 100 admissions and an incidence density of 70 ADEs per 1000 patient-days. One quarter of the hospitalizations had at least 1 ADE. Of all ADEs, 9% resulted in serious harm, 22% in additional monitoring and interventions, 32% in interventions alone, and 11% in monitoring alone; 27% should have resulted in additional interventions or monitoring. Medication errors contributed to 27% of these ADEs. Errors associated with ADEs occurred in the following stages: 61% ordering, 25% monitoring, 13% administration, 1% dispensing, and 0% transcription. The medical record reflected recognition of 76% of the ADEs. CONCLUSIONS: High rates of ADEs may continue to occur after implementation of CPOE and related computerized medication systems that lack decision support for drug selection, dosing, and monitoring.
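The headline rates in this abstract follow directly from the reported counts, as a quick arithmetic check shows (using only numbers stated in the abstract itself):

```python
# Reproduce the reported rates from the counts given in the abstract.
admissions = 937
ades = 483
rate_per_1000_patient_days = 70  # as reported

per_100_admissions = 100 * ades / admissions          # ~51.6, reported as 52
implied_patient_days = 1000 * ades / rate_per_1000_patient_days

print(round(per_100_admissions))    # 52 ADEs per 100 admissions
print(round(implied_patient_days))  # ~6900 patient-days over the 20-week period
```

The implied patient-day denominator (~6900) is not stated in the abstract; it is back-calculated here from the two reported rates as a consistency check.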


Subjects
Drug Therapy, Computer-Assisted , Drug-Related Side Effects and Adverse Reactions , Hospitals, University , Medical Records Systems, Computerized , Medication Errors/statistics & numerical data , Clinical Pharmacy Information Systems , Decision Support Systems, Clinical , Follow-Up Studies , Humans , Medication Systems, Hospital , Random Allocation , Retrospective Studies