1.
J Biomed Inform; 46(4): 665-75, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23727053

ABSTRACT

Although technological or organizational systems that enforce systematic procedures and best practices can lead to improvements in quality, these systems must also be designed to allow users to adapt to the inherent uncertainty, complexity, and variation in healthcare. We present a framework called Systematic Yet Flexible Systems Analysis (SYFSA) that supports the design and analysis of Systematic Yet Flexible (SYF) systems (whether organizational or technical) by formally considering the tradeoffs between systematicity and flexibility. SYFSA is based on analyzing a task using three related problem spaces: the idealized space, the natural space, and the system space. The idealized space represents the best practice: how the task is to be accomplished under ideal conditions. The natural space captures the task actions and constraints on how the task is currently done. The system space specifies how the task is done in a redesigned system, including how it may deviate from the idealized space and how the system supports or enforces task constraints. The goal of the framework is to support the design of systems that allow graceful degradation from the idealized space to the natural space. We demonstrate the application of SYFSA to the analysis of a simplified central line insertion task. We also describe several information-theoretic measures of flexibility that can be used to compare alternative designs, to measure how efficiently a system supports a given task, and to assess relative cognitive workload and learnability.
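The information-theoretic flavor of these flexibility measures can be illustrated with a short sketch. Shannon entropy over the distribution of permitted action sequences is one plausible such measure, not necessarily the paper's exact formulation: a fully systematic design permits a single way of doing the task (zero entropy), while a flexible design permits many.

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy in bits of a distribution over permitted action
    sequences. Higher entropy means more (equally likely) ways the
    system allows a task to be done, i.e., more flexibility."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fully systematic design permits one sequence: zero flexibility.
rigid = entropy_bits([1.0])
# Four equally likely permitted sequences: 2 bits of flexibility.
flexible = entropy_bits([0.25, 0.25, 0.25, 0.25])
```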


Subjects
Systems Analysis, Delivery of Health Care/organization & administration, Uncertainty, Workload
2.
BMC Bioinformatics; 13 Suppl 13: S2, 2012.
Article in English | MEDLINE | ID: mdl-23320851

ABSTRACT

BACKGROUND: Electronic Health Records aggregated in Clinical Data Warehouses (CDWs) promise to revolutionize Comparative Effectiveness Research and suggest new avenues of research. However, the effectiveness of CDWs is diminished by the lack of properly labeled data. We present a novel approach that integrates knowledge from the CDW, the biomedical literature, and the Unified Medical Language System (UMLS) to perform high-throughput phenotyping. In this paper, we automatically construct a graphical knowledge model and then use it to phenotype breast cancer patients. We compare the performance of this approach to using MetaMap when labeling records. RESULTS: MetaMap's overall accuracy at identifying breast cancer patients was 51.1% (n=428); recall=85.4%, precision=26.2%, and F1=40.1%. Our unsupervised graph-based high-throughput phenotyping had an accuracy of 84.1%; recall=46.3%, precision=61.2%, and F1=52.8%. CONCLUSIONS: We conclude that our approach is a promising alternative for unsupervised high-throughput phenotyping.
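As a quick arithmetic check on the reported metrics, F1 is the harmonic mean of precision and recall; the MetaMap figures above (recall 85.4%, precision 26.2%) do indeed yield an F1 of about 40.1%:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# The reported MetaMap recall and precision reproduce the reported F1.
metamap_f1 = f1_score(0.262, 0.854)  # ≈ 0.401
```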


Subjects
Breast Neoplasms/classification, Computer Simulation, Electronic Health Records, Biological Models, Female, Humans, Phenotype, Unified Medical Language System
3.
J Biomed Inform; 44 Suppl 1: S69-S77, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21986292

ABSTRACT

Proposal and execution of clinical trials, computation of quality measures, and discovery of correlations between medical phenomena are all applications where an accurate count of patients is needed. However, existing sources of this type of patient information, including Clinical Data Warehouses (CDWs), may be incomplete or inaccurate. This research explores applying probabilistic techniques, supported by the MayBMS probabilistic database, to obtain accurate patient counts from a Clinical Data Warehouse containing synthetic patient data. We present a synthetic Clinical Data Warehouse and populate it with simulated data using a custom patient data generation engine. We then implement, evaluate, and compare different techniques for obtaining patient counts. We model billing as a test for the presence of a condition. We compute billing's sensitivity and specificity both by conducting a "Simulated Expert Review," in which a representative sample of records is reviewed and labeled by experts, and by obtaining the ground truth for every record. We compute the posterior probability of a patient having a condition through a "Bayesian Chain," using Bayes' Theorem to calculate the probability of a patient having a condition after each visit. The second method is a "one-shot" approach that computes the probability of a patient having a condition based on whether the patient is ever billed for the condition. Our results demonstrate the utility of probabilistic approaches, which improve on the accuracy of raw counts. In particular, the simulated review paired with a single application of Bayes' Theorem produces the best results, with an average error rate of 2.1% compared to 43.7% for the straightforward billing counts. Overall, this research demonstrates that Bayesian probabilistic approaches improve patient counts on simulated patient populations. We believe that total patient counts based on billing data are one of many possible applications of our Bayesian framework. Use of these probabilistic techniques will enable more accurate patient counts and better results for applications requiring this metric.
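The "Bayesian Chain" can be sketched as repeated application of Bayes' Theorem, treating each visit's billing status as one diagnostic test result. The prior, sensitivity, and specificity below are illustrative values, not the study's:

```python
def bayes_update(prior, sens, spec, billed):
    """One Bayes' Theorem step: update P(condition) after a visit,
    treating the billing code as a test with known sensitivity and
    specificity."""
    if billed:
        num = sens * prior
        den = sens * prior + (1 - spec) * (1 - prior)
    else:
        num = (1 - sens) * prior
        den = (1 - sens) * prior + spec * (1 - prior)
    return num / den

# "Bayesian Chain": apply the update once per visit.
p = 0.05  # assumed prevalence prior (illustrative)
for billed in [True, True, False]:
    p = bayes_update(p, sens=0.85, spec=0.97, billed=billed)
# p is now the posterior probability after three visits
```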


Subjects
Bayes Theorem, Factual Databases, Electronic Health Records, Humans, Patients
4.
Rev Med Chil; 139(12): 1611-6, 2011 Dec.
Article in Spanish | MEDLINE | ID: mdl-22446710

ABSTRACT

Biomedical Informatics is a new discipline that arose from the need to incorporate information technologies into the generation, storage, distribution, and analysis of information in the domain of the biomedical sciences. The discipline comprises basic biomedical informatics and public health informatics. The development of the discipline in Chile has been modest, and most projects have originated from the interest of individual people or institutions, without systematic and coordinated national development. Considering the unique features of our country's health care system, research in the area of biomedical informatics is becoming an imperative.


Subjects
Medical Informatics/education, Chile, Humans
5.
J Am Med Inform Assoc; 14(2): 212-20, 2007.
Article in English | MEDLINE | ID: mdl-17213501

ABSTRACT

OBJECTIVE: To characterize PubMed usage over a typical day and compare it to previous studies of user behavior on Web search engines. DESIGN: We performed a lexical and semantic analysis of 2,689,166 queries issued on PubMed over 24 consecutive hours on a typical day. MEASUREMENTS: We measured the number of queries, number of distinct users, queries per user, terms per query, common terms, Boolean operator use, common phrases, result set size, and MeSH categories; we used semantic measurements to group queries into sessions; and we studied the addition and removal of terms from consecutive queries to gauge search strategies. RESULTS: The size of the result sets from a sample of queries showed a bimodal distribution, with peaks at approximately 3 and 100 results, suggesting that one large group of queries was tightly focused and another was broad. Like Web search engine sessions, most PubMed sessions consisted of a single query. However, PubMed queries contained more terms. CONCLUSION: PubMed's usage profile should be considered when educating users, building user interfaces, and developing future biomedical information retrieval systems.


Subjects
Information Storage and Retrieval/statistics & numerical data, PubMed/statistics & numerical data, Algorithms, Internet, Medical Subject Headings/statistics & numerical data
6.
J Biomed Inform; 40(2): 93-9, 2007 Apr.
Article in English | MEDLINE | ID: mdl-16469545

ABSTRACT

Databases continue to grow, but the metrics available to evaluate information retrieval systems have not changed. Large collections such as MEDLINE and the World Wide Web contain many relevant documents for common queries. Ranking is therefore increasingly important, and successful information retrieval systems, such as Google, have emphasized it. However, existing evaluation metrics, such as precision and recall, do not directly account for ranking. This paper describes a novel way of measuring information retrieval performance using weighted hit curves, adapted from the field of statistical detection, to reflect multiple desirable characteristics such as relevance, importance, and methodologic quality. In statistical detection, hit curves have been proposed to represent the occurrence of interesting events during a detection process. Similarly, hit curves can be used to study the position of relevant documents within large result sets. We describe hit curves in light of a formal model of information retrieval, show how hit curves represent system performance including ranking, and define ways to statistically compare the performance of multiple systems using hit curves. We provide example scenarios where traditional measures are less suitable than hit curves and conclude that hit curves may be useful for evaluating retrieval from large collections where ranking performance is crucial.
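An unweighted hit curve is simple to compute: the cumulative number of relevant documents encountered as one walks down the ranked result list. A minimal sketch on toy relevance judgments (the weighting by importance and quality described above is omitted here):

```python
def hit_curve(ranked_relevance):
    """Hit curve: cumulative count of relevant documents (1 = relevant,
    0 = not relevant) at each rank position of a result list."""
    curve, total = [], 0
    for rel in ranked_relevance:
        total += rel
        curve.append(total)
    return curve

# System A places relevant documents earlier than system B,
# so its hit curve dominates B's at every rank.
curve_a = hit_curve([1, 1, 0, 1, 0])
curve_b = hit_curve([0, 1, 0, 1, 1])
```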


Subjects
Algorithms, Statistical Data Interpretation, Database Management Systems, Factual Databases, Information Storage and Retrieval/methods, Natural Language Processing, Reproducibility of Results, Sensitivity and Specificity
7.
J Am Med Inform Assoc; 13(1): 96-105, 2006.
Article in English | MEDLINE | ID: mdl-16221938

ABSTRACT

OBJECTIVE: To determine whether algorithms developed for the World Wide Web can be applied to the biomedical literature in order to identify articles that are important as well as relevant. DESIGN AND MEASUREMENTS: A direct comparison of eight algorithms: simple PubMed queries, clinical queries (sensitive and specific versions), vector cosine comparison, citation count, journal impact factor, PageRank, and machine learning based on polynomial support vector machines. The objective was to prioritize important articles, defined as those included in a pre-existing bibliography of important literature in surgical oncology. RESULTS: Citation-based algorithms were more effective than noncitation-based algorithms at identifying important articles. The most effective strategies were simple citation count and PageRank, which on average identified over six important articles in the first 100 results, compared to 0.85 for the best noncitation-based algorithm (p < 0.001). The authors saw similar differences between citation-based and noncitation-based algorithms at 10, 20, 50, 200, 500, and 1,000 results (p < 0.001). Citation lag affects the performance of PageRank more than that of simple citation count. However, in spite of citation lag, citation-based algorithms remain more effective than noncitation-based algorithms. CONCLUSION: Algorithms that have proved successful on the World Wide Web can be applied to biomedical information retrieval. Citation-based algorithms can help identify important articles within large sets of relevant results. Further studies are needed to determine whether citation-based algorithms can effectively meet actual user information needs.
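PageRank on a citation graph can be sketched with plain power iteration; the toy graph and parameters below are illustrative, not the study's corpus or settings:

```python
def pagerank(links, d=0.85, iters=50):
    """Power-iteration PageRank over a citation graph given as
    {article: [articles it cites]}. Dangling nodes (no outgoing
    citations) distribute their rank evenly."""
    nodes = set(links) | {t for targets in links.values() for t in targets}
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - d) / n for v in nodes}
        dangling = d * sum(rank[v] for v in nodes if not links.get(v))
        for src, targets in links.items():
            if targets:
                share = d * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
        for v in nodes:
            new[v] += dangling / n
        rank = new
    return rank

# Article C is cited by both A and B, so it should rank highest.
citations = {"A": ["C"], "B": ["C"], "C": []}
ranks = pagerank(citations)
```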


Subjects
Algorithms, Artificial Intelligence, Information Storage and Retrieval/methods, MEDLINE, Bibliometrics, Evidence-Based Medicine, Internet, PubMed
8.
J Am Med Inform Assoc; 22(5): 962-6, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26063744

ABSTRACT

INTRODUCTION: Automatically identifying specific phenotypes in free-text clinical notes is critically important for the reuse of clinical data. In this study, the authors combine expert-guided feature (text) selection with one-class classification for text processing. OBJECTIVES: To compare the performance of one-class classification to traditional binary classification; to evaluate the utility of feature selection based on expert-selected salient text (snippets); and to determine the robustness of these models with respect to irrelevant surrounding text. METHODS: The authors trained one-class support vector machines (1C-SVMs) and two-class SVMs (2C-SVMs) to identify notes discussing breast cancer. Manually annotated visit summary notes (88 positive and 88 negative for breast cancer) were used to compare the performance of models trained on whole notes labeled as positive or negative to models trained on expert-selected text sections (snippets) relevant to breast cancer status. Model performance was evaluated using a 70:30 split for 20 iterations and on a realistic dataset of 10,000 records with a breast cancer prevalence of 1.4%. RESULTS: When tested on a balanced experimental dataset, 1C-SVMs trained on snippets had results comparable to 2C-SVMs trained on whole notes (F = 0.92 for both approaches). When evaluated on a realistic imbalanced dataset, 1C-SVMs had considerably superior performance (F = 0.61 vs. F = 0.17 for the best performing model), attributable mainly to improved precision (0.88 vs. 0.09 for the best performing model). CONCLUSIONS: 1C-SVMs trained on expert-selected relevant text sections perform better than 2C-SVM classifiers trained on either snippets or whole notes when applied to realistically imbalanced data with a low prevalence of the positive class.
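The one-class idea (learning only from positive snippets, then flagging notes similar to them) can be illustrated with a toy centroid-similarity classifier. This is a simple stand-in for the paper's 1C-SVM, with made-up snippets and an arbitrary similarity threshold:

```python
from collections import Counter
import math

def vectorize(text):
    """Bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class OneClassCentroid:
    """Toy one-class classifier: fit on positive snippets only, then
    flag notes whose similarity to their centroid exceeds a threshold."""
    def __init__(self, threshold=0.3):
        self.threshold = threshold
        self.centroid = Counter()
    def fit(self, snippets):
        for s in snippets:
            self.centroid.update(vectorize(s))
        return self
    def predict(self, note):
        return cosine(vectorize(note), self.centroid) >= self.threshold

snippets = ["history of breast cancer", "breast cancer treated with tamoxifen"]
clf = OneClassCentroid().fit(snippets)
pos = clf.predict("patient with breast cancer follow-up")  # expect True
neg = clf.predict("routine visit for ankle sprain")        # expect False
```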


Subjects
Breast Neoplasms/classification, Natural Language Processing, Support Vector Machine, Female, Humans, Medical Records
9.
J Am Med Inform Assoc; 21(1): 97-104, 2014.
Article in English | MEDLINE | ID: mdl-23703827

ABSTRACT

INTRODUCTION: Clinical databases require accurate entity resolution (ER). One approach is to use algorithms that assign questionable cases to manual review. Few studies have compared the performance of common algorithms for such a task, and previous work has been limited by a lack of objective methods for setting algorithm parameters. We compared the performance of common ER algorithms, using algorithmic optimization rather than manual parameter tuning, on both two-threshold classification (match/manual review/non-match) and single-threshold classification (match/non-match). METHODS: We manually reviewed 20,000 randomly selected potential duplicate record-pairs to identify matches (10,000 training set, 10,000 test set). We evaluated the probabilistic expectation maximization, simple deterministic, and fuzzy inference engine (FIE) algorithms. We used particle swarm optimization to tune algorithm parameters for a single threshold and for two thresholds. We ran 10 iterations of optimization using the training set and report averaged performance against the test set. RESULTS: The overall estimated duplicate rate was 6%. The FIE and simple deterministic algorithms allowed a smaller manual review set than the probabilistic method (FIE 1.9%, simple deterministic 2.5%, probabilistic 3.6%; p<0.001). For a single threshold, the simple deterministic algorithm performed better than the probabilistic method (positive predictive value 0.956 vs 0.887, sensitivity 0.985 vs 0.887, p<0.001). ER with FIE classified 98.1% of record-pairs correctly (1/10,000 error rate), assigning the remainder to manual review. CONCLUSIONS: Optimized deterministic algorithms outperform the probabilistic method. There is a strong case for considering optimized deterministic methods for ER.
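The two-threshold decision rule itself is straightforward to express; the threshold values below are illustrative, not the optimized values found by the study:

```python
def classify_pair(score, low=0.4, high=0.8):
    """Two-threshold entity-resolution rule: a similarity score above
    `high` is an automatic match, below `low` an automatic non-match,
    and anything in between goes to manual review."""
    if score >= high:
        return "match"
    if score <= low:
        return "non-match"
    return "manual review"
```

Optimizing `low` and `high` (e.g., with particle swarm, as in the study) trades off the size of the manual review band against classification errors.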


Subjects
Algorithms, Electronic Health Records, Benchmarking, Fuzzy Logic, Humans, Medical Record Linkage/methods, Probability
10.
AMIA Annu Symp Proc; 2013: 721-30, 2013.
Article in English | MEDLINE | ID: mdl-24551372

ABSTRACT

Clinical databases may contain several records for a single patient. Multiple general entity-resolution algorithms have been developed to identify such duplicate records. To achieve optimal accuracy, algorithm parameters must be tuned to a particular dataset. The purpose of this study was to determine the required training set size for probabilistic, deterministic, and Fuzzy Inference Engine (FIE) algorithms with parameters optimized using the particle swarm approach. Each algorithm classified potential duplicates into three classes: definite match, non-match, and indeterminate (i.e., requiring manual review). Training set sizes ranged from 2,000 to 10,000 randomly selected record-pairs. We also evaluated marginal uncertainty sampling for active learning. Optimization reduced the manual review set size (deterministic 11.6% vs. 2.5%; FIE 49.6% vs. 1.9%; probabilistic 10.5% vs. 3.5%). FIE classified 98.1% of the records correctly (precision=1.0). Best performance required training on all 10,000 randomly selected record-pairs. Active learning achieved comparable results with 3,000 records. Automated optimization is effective, and targeted sampling can reduce the required training set size.


Subjects
Algorithms, Artificial Intelligence, Electronic Health Records, Fuzzy Logic
11.
AMIA Annu Symp Proc; 2013: 1150-9, 2013.
Article in English | MEDLINE | ID: mdl-24551399

ABSTRACT

Medication reconciliation is an important and complex task for which careful user interface design has the potential to help reduce errors and improve quality of care. In this paper we focus on the hospital discharge scenario and first describe a novel interface called Twinlist, which uses spatial layout combined with multi-step animation to help medical providers see what is different and what is similar between lists (e.g., the intake list and the hospital list) and rapidly choose the drugs they want to include in the reconciled list. We then describe a series of variant designs and discuss their comparative advantages and disadvantages. Finally, we report on a pilot study suggesting that animation might help users learn new spatial layouts such as the one used in Twinlist.


Subjects
Computer Graphics, Medication Reconciliation/methods, User-Computer Interface, Electronic Health Records, Humans, Patient Discharge, Pilot Projects
12.
AMIA Annu Symp Proc; 2012: 940-9, 2012.
Article in English | MEDLINE | ID: mdl-23304369

ABSTRACT

The need to maintain accessibility of the biomedical literature has led to development of methods to assist human indexers by recommending index terms for newly encountered articles. Given the rapid expansion of this literature, it is essential that these methods be scalable. Document vector representations are commonly used for automated indexing, and Random Indexing (RI) provides the means to generate them efficiently. However, RI is difficult to implement in real-world indexing systems, as (1) efficient nearest-neighbor search requires retaining all document vectors in RAM, and (2) it is necessary to maintain a store of randomly generated term vectors to index future documents. Motivated by these concerns, this paper documents the development and evaluation of a deterministic binary variant of RI. The increased capacity demonstrated by binary vectors has implications for information retrieval, and the elimination of the need to retain term vectors facilitates distributed implementations, enhancing the scalability of RI.
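A deterministic binary variant of Random Indexing might look like the following sketch: term vectors are derived on demand from a hash of the term, so no term-vector store is needed, and document vectors are binary via bitwise majority vote. The hash function and dimensionality here are assumptions for illustration, not the paper's choices:

```python
import hashlib

DIM = 64  # illustrative dimensionality

def term_vector(term):
    """Deterministic pseudo-random binary term vector: DIM bits taken
    from a hash of the term, so the vector can always be regenerated."""
    digest = hashlib.sha256(term.encode("utf-8")).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(DIM)]

def document_vector(terms):
    """Binary document vector: bitwise majority vote over term vectors."""
    sums = [0] * DIM
    for t in terms:
        for i, bit in enumerate(term_vector(t)):
            sums[i] += bit
    return [1 if 2 * s >= len(terms) else 0 for s in sums]

def hamming(a, b):
    """Hamming distance for nearest-neighbor comparison of binary vectors."""
    return sum(x != y for x, y in zip(a, b))
```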


Subjects
Abstracting and Indexing/methods, Medical Subject Headings, Theoretical Models, Natural Language Processing, PubMed, MEDLINE
13.
J Am Med Inform Assoc; 19(3): 473-8, 2012.
Article in English | MEDLINE | ID: mdl-21917645

ABSTRACT

OBJECTIVE: To determine whether past access to biomedical documents can predict future document access. MATERIALS AND METHODS: The authors used 394 days of query logs (August 1, 2009 to August 29, 2010) from PubMed users in the Texas Medical Center, the largest medical center in the world. The authors evaluated two document access models based on the work of Anderson and Schooler. The first is based on how frequently a document was accessed. The second is based on both frequency and recency. RESULTS: The model based only on frequency of past access was highly correlated with the empirical data (R²=0.932), whereas the model based on frequency and recency had a much lower correlation (R²=0.668). DISCUSSION: The frequency-only model accurately predicted whether a document will be accessed based on past use. Modeling accesses as a function of frequency requires storing only the number of accesses and the creation date for the document. This model has low storage overhead and is computationally efficient, making it scalable to large corpora such as MEDLINE. CONCLUSION: It is feasible to accurately model the probability of a document being accessed in the future based on past accesses.
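A frequency-only access model of this kind can be sketched as follows. In the Anderson and Schooler tradition, the odds of a future access grow as a power function of past access frequency; the functional form and coefficients below are illustrative, not the fitted values from the study:

```python
def predicted_need_odds(frequency, a=0.02, b=1.1):
    """Odds that a document will be accessed again, modeled as a power
    function of its past access frequency (coefficients illustrative)."""
    return a * frequency ** b

def predicted_need_probability(frequency):
    """Convert the modeled odds into a probability of future access."""
    odds = predicted_need_odds(frequency)
    return odds / (1 + odds)
```

Note that only the access count is needed per document, which is what makes the frequency-only model cheap to store and scale.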


Subjects
Bibliometrics, Information Storage and Retrieval, PubMed, Humans, Statistical Models, Probability, Texas
14.
J Am Med Inform Assoc; 19(6): 988-94, 2012.
Article in English | MEDLINE | ID: mdl-22744961

ABSTRACT

OBJECTIVE: To present a framework for combining implicit knowledge acquisition from multiple experts with machine learning and to evaluate this framework in the context of anemia alerts. MATERIALS AND METHODS: Five internal medicine residents reviewed 18 anemia alerts while 'talking aloud.' They identified features that were reviewed by two or more physicians to determine the appropriate alert level, etiology, and treatment recommendation. Based on these features, data were extracted from 100 randomly selected anemia cases for a training set and an additional 82 cases for a test set. Two staff internists assigned an alert level, etiology, and treatment recommendation before and after reviewing the entire electronic medical record. The training set of 118 cases (100 plus 18) and the test set of 82 cases were explored using the RIDOR and JRip algorithms. RESULTS: The feature set was sufficient to assess 93% of anemia cases (intraclass correlations for alert level before and after review of the records by internists 1 and 2 were 0.92 and 0.95, respectively). High-precision classifiers were constructed to identify low-level alerts (precision P=0.87, recall R=0.4), iron deficiency (P=1.0, R=0.73), and anemia associated with kidney disease (P=0.87, R=0.77). DISCUSSION: It was possible to identify low-level alerts and several conditions commonly associated with chronic anemia. This approach may reduce the number of clinically unimportant alerts. The study was limited to anemia alerts. Furthermore, clinicians were aware of the study hypotheses, potentially biasing their evaluation. CONCLUSION: Implicit knowledge acquisition, collaborative filtering, and machine learning were automatically combined to induce clinically meaningful and precise decision rules.


Subjects
Anemia/prevention & control, Artificial Intelligence, Clinical Decision Support Systems, Computer-Assisted Diagnosis, Electronic Health Records, Humans, Internal Medicine, Israel, Physicians' Practice Patterns
15.
AMIA Annu Symp Proc; 2011: 1252-60, 2011.
Article in English | MEDLINE | ID: mdl-22195186

ABSTRACT

Medication reconciliation is a National Patient Safety Goal (NPSG) from The Joint Commission (TJC) that entails reviewing all medications a patient takes after a health care transition. Medication reconciliation is a resource-intensive, error-prone task, and the resources to accomplish it may not be routinely available. Computer-based methods have the potential to overcome these barriers. We designed and explored a rule-based medication reconciliation algorithm to accomplish this task across different healthcare transitions. We tested our algorithm on a random sample of 94 transitions from the Clinical Data Warehouse at the University of Texas Health Science Center at Houston. We found that the algorithm reconciled, on average, 23.4% of the potentially reconcilable medications. Our study did not have sufficient statistical power to establish whether the kind of transition affects reconcilability. We conclude that automated reconciliation is possible and will help accomplish the NPSG.
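A minimal rule-based reconciliation step, matching on normalized drug names and deferring everything else to manual review, can be sketched as follows. The study's rules were richer than this, and the medication names are purely illustrative:

```python
def reconcile(intake, hospital):
    """Toy reconciliation rule: medications whose normalized names
    appear on both lists are auto-reconciled; the rest are flagged
    for manual review."""
    intake_names = {m.strip().lower() for m in intake}
    hospital_names = {m.strip().lower() for m in hospital}
    reconciled = sorted(intake_names & hospital_names)
    needs_review = sorted(intake_names ^ hospital_names)
    return reconciled, needs_review

reconciled, needs_review = reconcile(
    ["Lisinopril", "Metformin"], ["metformin", "Atorvastatin"])
```

A production rule set would also normalize dose, route, and frequency, and map brand names to generics before comparing.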


Subjects
Algorithms, Medication Reconciliation/methods, Patient Handoff, Humans
16.
Int J Med Inform; 80(6): 431-41, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21439897

ABSTRACT

BACKGROUND: As the volume of biomedical text increases exponentially, automatic indexing becomes increasingly important. However, existing approaches do not distinguish central (or core) concepts from concepts that were mentioned in passing. We focus on the problem of indexing MEDLINE records, a process that is currently performed by highly trained humans at the National Library of Medicine (NLM). NLM indexers are assisted by a system called the Medical Text Indexer (MTI) that suggests candidate indexing terms. OBJECTIVE: To improve the ability of MTI to select the core terms in MEDLINE abstracts. These core concepts are deemed to be most important and are designated as "major headings" by MEDLINE indexers. We introduce and evaluate a graph-based indexing methodology called MEDRank that generates concept graphs from biomedical text and then ranks the concepts within these graphs to identify the most important ones. METHODS: We inserted a MEDRank step into MTI and compared MTI's output with and without MEDRank to the MEDLINE indexers' selected terms for a sample of 11,803 PubMed Central articles. We also tested whether human raters prefer terms generated by the MEDLINE indexers, by MTI without MEDRank, or by MTI with MEDRank for a sample of 36 PubMed Central articles. RESULTS: MEDRank improved recall of indexer-designated major headings by 30% over MTI without MEDRank (0.489 vs. 0.376). Overall recall was only slightly (6.5%) higher (0.490 vs. 0.460), as was F(2) (3%, 0.408 vs. 0.396). However, overall precision was 3.9% lower (0.268 vs. 0.279). Human raters preferred terms generated by MTI with MEDRank over terms generated by MTI without MEDRank (by an average of 1.00 more term per article), and preferred terms generated by MTI with MEDRank and by the MEDLINE indexers at the same rate. CONCLUSIONS: The addition of MEDRank to MTI significantly improved the retrieval of core concepts in MEDLINE abstracts and more closely matched human expectations compared to MTI without MEDRank. In addition, MEDRank slightly improved overall recall and F(2).


Subjects
Abstracting and Indexing/methods, Artificial Intelligence, Electronic Data Processing, Information Storage and Retrieval, MEDLINE, Medical Subject Headings, Algorithms, Humans, National Library of Medicine (U.S.), Software, United States
17.
AMIA Annu Symp Proc; 2010: 296-300, 2010 Nov 13.
Article in English | MEDLINE | ID: mdl-21346988

ABSTRACT

Online courses will play a key role in the high-volume informatics education required to train the personnel necessary to meet the country's health IT needs. However, online courses can cause feelings of isolation in students. A common way to address these feelings is to hold synchronous online "chats" for students. Conventional chats, however, can be confusing and impose a high extrinsic cognitive load on their participants that hinders the learning process. In this paper we present a qualitative analysis that identifies the causes of this high cognitive load and describe our solution: a moderated chat system.


Subjects
Computers, Medical Informatics, Communication, Distance Education, Humans, Internet, Learning, Medical Informatics/education, Online Systems, Students
18.
AMIA Annu Symp Proc; 316-20, 2005.
Article in English | MEDLINE | ID: mdl-16779053

ABSTRACT

Information overload is a significant problem for modern medicine. Searching MEDLINE for common topics often retrieves more relevant documents than users can review. Therefore, we must identify documents that are not only relevant, but also important. Our system ranks articles using citation counts and the PageRank algorithm, incorporating data from the Science Citation Index. However, citation data is usually incomplete. Therefore, we explore the relationship between the quantity of citation information available to the system and the quality of the result ranking. Specifically, we test the ability of citation count and PageRank to identify "important articles" as defined by experts from large result sets with decreasing citation information. We found that PageRank performs better than simple citation counts, but both algorithms are surprisingly robust to information loss. We conclude that even an incomplete citation database is likely to be effective for importance ranking.


Subjects
Algorithms, Information Storage and Retrieval/methods, MEDLINE, Bibliometrics, PubMed