Results 1 - 8 of 8
1.
Ann Surg; 2023 Oct 20.
Article in English | MEDLINE | ID: mdl-37860868

ABSTRACT

OBJECTIVE AND BACKGROUND: Clinically significant posthepatectomy liver failure (PHLF B+C) remains the main cause of mortality after major hepatic resection. This study aimed to establish a multivariable model (MVM) based on APRI+ALBI, the aspartate aminotransferase to platelet ratio index (APRI) combined with the albumin-bilirubin grade (ALBI), to predict PHLF and to compare its performance to indocyanine green clearance (ICG-R15 or ICG-PDR) and albumin-ICG evaluation (ALICE). METHODS: Data from 12,056 patients in the National Surgical Quality Improvement Program (NSQIP) database were used to generate an MVM to predict PHLF B+C. The model was determined using stepwise backward elimination. Performance of the model was tested using receiver operating characteristic curve analysis and validated in an international cohort of 2,525 patients. In 620 patients, the APRI+ALBI MVM, trained on the NSQIP cohort, was compared with MVMs based on other liver function tests (ICG clearance, ALICE) by comparing the areas under the curve (AUC). RESULTS: An MVM including APRI+ALBI, age, sex, tumor type, and extent of resection was found to predict PHLF B+C with an AUC of 0.77, with comparable performance in the validation cohort (AUC 0.74). In direct comparison with other MVMs based on more expensive and time-consuming liver function tests (ICG clearance, ALICE), the APRI+ALBI MVM demonstrated equal predictive potential for PHLF B+C. A smartphone application for calculation of the APRI+ALBI MVM was designed. CONCLUSION: Risk assessment via the APRI+ALBI MVM for PHLF B+C increases preoperative predictive accuracy and represents a universally available and cost-effective risk assessment prior to hepatectomy, facilitated by a freely available smartphone app.
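The abstract does not give the MVM's coefficients, but the two scores it builds on have well-known published formulas. A minimal Python sketch (the 40 U/L AST upper limit of normal and the ALBI grade cut-offs are conventional defaults, not taken from this paper):

```python
import math

def apri(ast_u_l, platelets_10e9_l, ast_uln_u_l=40.0):
    """Aspartate aminotransferase to platelet ratio index (APRI).

    ast_u_l: AST in U/L; platelets_10e9_l: platelet count in 10^9/L;
    ast_uln_u_l: upper limit of normal for AST (commonly 40 U/L).
    """
    return (ast_u_l / ast_uln_u_l) / platelets_10e9_l * 100.0

def albi(bilirubin_umol_l, albumin_g_l):
    """Albumin-bilirubin (ALBI) score from bilirubin (umol/L) and albumin (g/L)."""
    return 0.66 * math.log10(bilirubin_umol_l) - 0.085 * albumin_g_l

def albi_grade(score):
    """Map an ALBI score to grade 1-3 using the published cut-offs."""
    if score <= -2.60:
        return 1
    return 2 if score <= -1.39 else 3
```

The paper's MVM then combines these scores with age, sex, tumor type, and extent of resection; those coefficients are only available via the authors' app.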

2.
J Med Internet Res; 23(10): e30545, 2021 Oct 26.
Article in English | MEDLINE | ID: mdl-34697010

ABSTRACT

One of the greatest strengths of artificial intelligence (AI) and machine learning (ML) approaches in health care is that their performance can be continually improved based on updates from automated learning from data. However, health care ML models are currently essentially regulated under provisions that were developed for an earlier age of slowly updated medical devices, requiring major documentation revision and revalidation with every major update of the model generated by the ML algorithm. This creates minor problems for models that will be retrained and updated only occasionally, but major problems for models that will learn from data in real time or near real time. Regulators have announced action plans for fundamental changes in regulatory approaches. In this Viewpoint, we examine the current regulatory frameworks and developments in this domain. The status quo and recent developments are reviewed, and we argue that these innovative approaches to health care need matching innovative approaches to regulation and that these approaches will bring benefits for patients. International perspectives from the World Health Organization, and the Food and Drug Administration's proposed approach, based around oversight of tool developers' quality management systems and defined algorithm change protocols, offer a much-needed paradigm shift, and strive for a balanced approach to enabling rapid improvements in health care through AI innovation while simultaneously ensuring patient safety. The draft European Union (EU) regulatory framework indicates similar approaches, but no detail has yet been provided on how algorithm change protocols will be implemented in the EU. We argue that detail must be provided, and we describe how this could be done in a manner that would allow the full benefits of AI/ML-based innovation for EU patients and health care systems to be realized.


Subjects
Artificial Intelligence, Machine Learning, Algorithms, Delivery of Health Care, Humans
3.
JAMIA Open; 4(2): ooab025, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33898938

ABSTRACT

OBJECTIVE: We present the Berlin-Tübingen-Oncology corpus (BRONCO), a large and freely available corpus of shuffled sentences from German oncological discharge summaries annotated with diagnoses, treatments, medications, and further attributes including negation and speculation. The aim of BRONCO is to foster reproducible and openly available research on Information Extraction (IE) from German medical texts. MATERIALS AND METHODS: BRONCO consists of 200 manually deidentified discharge summaries of cancer patients. Annotation followed a structured and quality-controlled process involving 2 groups of medical experts to ensure consistency, comprehensiveness, and high quality of annotations. We present results of several state-of-the-art techniques for different IE tasks as baselines for subsequent research. RESULTS: The annotated corpus consists of 11,434 sentences and 89,942 tokens, annotated with 11,124 annotations for medical entities and 3,118 annotations of related attributes. We publish 75% of the corpus as a set of shuffled sentences, and keep 25% as a held-out data set for unbiased evaluation of future IE tools. On this held-out data set, our baselines reach, depending on the specific entity type, F1-scores of 0.72-0.90 for named entity recognition, 0.10-0.68 for entity normalization, 0.55 for negation detection, and 0.33 for speculation detection. DISCUSSION: Medical corpus annotation is a complex and time-consuming task. This makes sharing of such resources even more important. CONCLUSION: To our knowledge, BRONCO is the first sizable and freely available German medical corpus. Our baseline results show that more research efforts are necessary to lift the quality of information extraction in German medical texts to the level already possible for English.
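The F1-scores reported above combine precision and recall into a single number; a generic sketch of the metric as computed from per-entity-type counts (illustrative only, not the authors' evaluation code):

```python
def f1(tp, fp, fn):
    """F1 from true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```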

4.
Wellcome Open Res; 5: 120, 2020.
Article in English | MEDLINE | ID: mdl-32766457

ABSTRACT

Background: Timely diagnosis of dementia is a policy priority in the United Kingdom (UK). Primary care physicians receive incentives to diagnose dementia; however, 33% of patients are still not receiving a diagnosis. We explored automating early detection of dementia using data from patients' electronic health records (EHRs). We investigated: a) how early a machine-learning model could accurately identify dementia before the physician; b) if models could be tuned for dementia subtype; and c) what the best clinical features were for achieving detection. Methods: Using EHRs from the Clinical Practice Research Datalink in a case-control design, we selected patients aged >65y with a diagnosis of dementia recorded 2000-2012 (cases) and matched them 1:1 to controls; we also identified subsets of Alzheimer's and vascular dementia patients. Using 77 coded concepts recorded in the 5 years before diagnosis, we trained random forest classifiers, and evaluated models using the Area Under the Receiver Operating Characteristic Curve (AUC). We examined models by year prior to diagnosis, subtype, and the most important features contributing to classification. Results: 95,202 patients (median age 83y; 64.8% female) were included (50% dementia cases). Classification of dementia cases and controls was poor 2-5 years prior to physician-recorded diagnosis (AUC range 0.55-0.65) but good in the year before (AUC: 0.84). Features indicating increasing cognitive and physical frailty dominated models 2-5 years before diagnosis; in the final year, initiation of the dementia diagnostic pathway (symptoms, screening and referral) explained the sudden increase in accuracy. No substantial differences were seen between all-cause dementia and subtypes. Conclusions: Automated detection of dementia earlier than the treating physician may be problematic if using only primary care data. Future work should investigate more complex modelling, benefits of linking multiple sources of healthcare data and monitoring devices, or contextualising the algorithm to those cases that the GP would need to investigate.
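The AUC values above can be read as a rank statistic: the probability that a randomly chosen case receives a higher model score than a randomly chosen control. A small self-contained sketch of that computation (not the authors' code, which used random forest classifiers on the CPRD data):

```python
def auc(case_scores, control_scores):
    """AUC as the Mann-Whitney rank statistic: the probability that a
    randomly chosen case outscores a randomly chosen control."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5  # ties count as half a win
    return wins / (len(case_scores) * len(control_scores))
```

An AUC of 0.5 corresponds to chance-level ranking, which is why the reported 0.55-0.65 range 2-5 years before diagnosis indicates poor discrimination.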

5.
BMC Bioinformatics; 20(1): 429, 2019 Aug 16.
Article in English | MEDLINE | ID: mdl-31419935

ABSTRACT

BACKGROUND: Diagnosis and treatment decisions in cancer increasingly depend on a detailed analysis of the mutational status of a patient's genome. This analysis relies on previously published information regarding the association of variations to disease progression and possible interventions. Clinicians largely rely on biomedical search engines to obtain such information; however, the vast majority of scientific publications focus on basic science and have no direct clinical impact. We develop the Variant-Information Search Tool (VIST), a search engine designed for the targeted search of clinically relevant publications given an oncological mutation profile. RESULTS: VIST indexes all PubMed abstracts and content from ClinicalTrials.gov. It applies advanced text mining to identify mentions of genes, variants and drugs and uses machine-learning-based scoring to judge the clinical relevance of indexed abstracts. Its functionality is available through a fast and intuitive web interface. We perform several evaluations, showing that VIST's ranking is superior to that of PubMed or a pure vector space model with regard to the clinical relevance of a document's content. CONCLUSION: Different user groups search repositories of scientific publications with different intentions. This diversity is not adequately reflected in the standard search engines, often leading to poor performance in specialized settings. We develop a search engine for the specific case of finding documents that are clinically relevant in the course of cancer treatment. We believe that the architecture of our engine, heavily relying on machine learning algorithms, can also act as a blueprint for search engines in other, equally specific domains. VIST is freely available at https://vist.informatik.hu-berlin.de/.
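The abstract does not specify VIST's learned scoring function, but the pure vector space model it is compared against can be sketched generically as TF-IDF cosine ranking (the tokenized documents below are invented for illustration; this is not VIST's implementation):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency per term
    idf = {t: math.log(n / df[t]) for t in df}
    vecs = [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]
    return vecs, idf

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank(query_tokens, docs):
    """Return document indices ordered by descending query similarity."""
    vecs, idf = tfidf_vectors(docs)
    q = {t: idf.get(t, 0.0) for t in query_tokens}
    return sorted(range(len(docs)),
                  key=lambda i: cosine(q, vecs[i]), reverse=True)
```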


Subjects
Neoplasms/pathology, Precision Medicine, Search Engine, Algorithms, Databases as Topic, Documentation, Humans, Internet, User-Computer Interface
6.
Article in English | MEDLINE | ID: mdl-32914021

ABSTRACT

PURPOSE: Precision oncology depends on the availability of up-to-date, comprehensive, and accurate information about associations between genetic variants and therapeutic options. Recently, a number of knowledge bases (KBs) have been developed that gather such information on the basis of expert curation of the scientific literature. We performed a quantitative and qualitative comparison of Clinical Interpretations of Variants in Cancer, OncoKB, Cancer Gene Census, Database of Curated Mutations, CGI Biomarkers (the Cancer Genome Interpreter biomarker database), Tumor Alterations Relevant for Genomics-Driven Therapy, and the Precision Medicine Knowledge Base. METHODS: We downloaded each KB and restructured their content to describe variants, genes, drugs, and gene-drug associations in a common format. We normalized gene names to Entrez Gene IDs and drug names to ChEMBL and DrugBank IDs. For the analysis of clinically relevant gene-drug associations, we obtained lists of genes affected by genetic alterations and putative drug therapies for 113 patients with cancer whose cases were presented at the Molecular Tumor Board (MTB) of the Charité Comprehensive Cancer Center. RESULTS: Our analysis revealed that the KBs are largely overlapping but also that each source harbors a notable amount of unique information. Although some KBs cover more genes, others contain more data about gene-drug associations. Retrospective comparisons with findings of the Charité MTB at the gene level showed that use of multiple KBs may considerably improve retrieval results. The relative importance of a KB in terms of cancer genes was assessed in more detail by logistic regression, which revealed that all but one source had a notable impact on result quality. We confirmed these findings using a second data set obtained from an independent MTB.
CONCLUSION: To date, none of the existing publicly available KBs on gene-drug associations in precision oncology fully subsumes the others, but all of them exhibit specific strengths and weaknesses. Consideration of multiple KBs, therefore, is essential to obtain comprehensive results.
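The core finding, that no single KB subsumes the others and that pooling them improves retrieval, reduces to set operations once entries are normalized to shared identifiers such as Entrez Gene IDs. A minimal sketch (KB names and gene sets below are invented for illustration):

```python
def kb_coverage(kbs, patient_genes):
    """Per-KB and pooled coverage of a patient's altered genes.

    kbs:            dict mapping KB name to a set of normalized gene IDs
    patient_genes:  set of normalized gene IDs altered in the patient
    """
    per_kb = {name: len(patient_genes & genes) for name, genes in kbs.items()}
    pooled = len(patient_genes & set().union(*kbs.values()))
    return per_kb, pooled
```

When `pooled` exceeds every single-KB count, consulting multiple KBs retrieves associations that any one source alone would miss, which is the paper's central argument.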

7.
BMC Med Inform Decis Mak; 18(1): 107, 2018 Nov 21.
Article in English | MEDLINE | ID: mdl-30463544

ABSTRACT

BACKGROUND: The decreasing cost of obtaining high-quality calls of genomic variants and the increasing availability of clinically relevant data on such variants are important drivers for personalized oncology. To allow rational genome-based decisions in diagnosis and treatment, clinicians need intuitive access to up-to-date and comprehensive variant information, encompassing, for instance, prevalence in populations and diseases, functional impact at the molecular level, associations to druggable targets, or results from clinical trials. In practice, collecting such comprehensive information on genomic variants is difficult since the underlying data is dispersed over a multitude of distributed, heterogeneous, sometimes conflicting, and quickly evolving data sources. To work efficiently, clinicians require powerful Variant Information Systems (VIS) which automatically collect and aggregate available evidence from such data sources without suppressing existing uncertainty. METHODS: We address the most important cornerstones of modeling a VIS: we draw on emerging community standards regarding the necessary breadth of variant information and procedures for their clinical assessment, long-standing experience in implementing biomedical databases and information systems, our own clinical record of diagnosis and treatment of cancer patients based on molecular profiles, and an extensive literature review to derive a set of design principles along which we develop a relational data model for variant-level data. In addition, we characterize a number of public variant data sources, and describe a data integration pipeline to integrate their data into a VIS. RESULTS: We provide a number of contributions that are fundamental to the design and implementation of a comprehensive, operational VIS. In particular, we (a) present a relational data model to accurately reflect data extracted from public databases relevant for clinical variant interpretation, (b) introduce a fault-tolerant and performant integration pipeline for public variant data sources, and (c) offer recommendations regarding a number of intricate challenges encountered when integrating variant data for clinical interpretation. CONCLUSION: The analysis of requirements for representation of variant-level data in an operational data model, together with the implementation-ready relational data model presented here, and the instructional description of methods to acquire comprehensive information to fill it, are an important step towards variant information systems for genomic medicine.
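The published relational data model is far more comprehensive than can be shown here, but its key idea of keeping per-source evidence rows, so that conflicting assessments coexist rather than being suppressed, can be sketched with an in-memory SQLite database (table and column names below are hypothetical, not the paper's schema):

```python
import sqlite3

# Hypothetical, heavily reduced slice of a variant-level relational model.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE gene    (gene_id INTEGER PRIMARY KEY, symbol TEXT NOT NULL);
CREATE TABLE variant (variant_id INTEGER PRIMARY KEY,
                      gene_id INTEGER NOT NULL REFERENCES gene,
                      hgvs TEXT NOT NULL);
CREATE TABLE drug    (drug_id INTEGER PRIMARY KEY, name TEXT NOT NULL);
-- Evidence keeps its source: conflicting assessments from different
-- sources coexist as separate rows instead of being suppressed.
CREATE TABLE evidence (variant_id INTEGER REFERENCES variant,
                       drug_id INTEGER REFERENCES drug,
                       source TEXT NOT NULL,
                       assessment TEXT NOT NULL);
""")
conn.execute("INSERT INTO gene VALUES (673, 'BRAF')")  # Entrez Gene ID
conn.execute("INSERT INTO variant VALUES (1, 673, 'p.V600E')")
conn.execute("INSERT INTO drug VALUES (1, 'vemurafenib')")
conn.executemany("INSERT INTO evidence VALUES (?, ?, ?, ?)",
                 [(1, 1, 'source_a', 'responsive'),
                  (1, 1, 'source_b', 'resistant')])
# Both evidence rows survive aggregation, preserving the disagreement.
rows = conn.execute("""
    SELECT g.symbol, v.hgvs, d.name, e.source, e.assessment
    FROM evidence e
    JOIN variant v USING (variant_id)
    JOIN gene g USING (gene_id)
    JOIN drug d USING (drug_id)
""").fetchall()
```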


Subjects
Genetic Variation, Genomics, Medical Informatics Applications, Medical Oncology, Precision Medicine, Genomics/methods, Humans, Medical Oncology/methods, Precision Medicine/methods
8.
Nucleic Acids Res; 40(Web Server issue): W585-91, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22693219

ABSTRACT

Research results are primarily published in scientific literature, and curation efforts cannot keep up with the rapid growth of published literature. Much of this knowledge remains hidden in large text repositories like MEDLINE. Consequently, life scientists have to spend a great deal of time searching for specific information. The enormous ambiguity among most names of biomedical objects such as genes, chemicals and diseases often produces overly large and unspecific result sets. We present GeneView, a semantic search engine for biomedical knowledge. GeneView is built upon a comprehensively annotated version of PubMed abstracts and openly available PubMed Central full texts. This semi-structured representation of biomedical texts enables a number of features extending classical search engines. For instance, users may search for entities using unique database identifiers, or they may rank documents by the number of specific mentions they contain. Annotation is performed by a multitude of state-of-the-art text-mining tools for recognizing mentions from 10 entity classes and for identifying protein-protein interactions. GeneView currently contains annotations for >194 million entities from 10 classes for ∼21 million citations with 271,000 full-text bodies. GeneView can be searched at http://bc3.informatik.hu-berlin.de/.
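One of the features mentioned, ranking documents by the number of specific mentions they contain, is straightforward once annotations are normalized to database identifiers. A minimal sketch (document IDs and annotations below are invented; this is not GeneView's implementation):

```python
from collections import Counter

def rank_by_mentions(annotated_docs, entity_id):
    """Rank documents by how often a normalized entity ID is mentioned.

    annotated_docs: dict mapping doc ID to a list of
                    (entity_class, entity_id) annotations
    entity_id:      normalized database identifier to search for
    """
    counts = Counter({doc: sum(1 for _, eid in anns if eid == entity_id)
                      for doc, anns in annotated_docs.items()})
    # Most frequently mentioning documents first; drop zero-mention docs.
    return [doc for doc, n in counts.most_common() if n > 0]
```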


Subjects
PubMed, Search Engine, Software, Abstracting and Indexing, Genes, Internet, Single Nucleotide Polymorphism, Protein Interaction Mapping, User-Computer Interface