ABSTRACT
OBJECTIVE: We describe a novel crowdsourcing method that takes advantage of links between medications and problems manually asserted by clinicians to generate a knowledge base of problem-medication pairs. METHODS: Through iterative review, we developed metrics to estimate the appropriateness of manually entered problem-medication links for inclusion in a knowledge base that can be used to infer previously unasserted links between problems and medications. RESULTS: Clinicians manually linked 231,223 medications (55.30% of prescribed medications) to problems within the electronic health record, generating 41,203 distinct problem-medication pairs, although not all links were accurate. We developed methods to evaluate the accuracy of the pairs; after limiting the set to pairs meeting an estimated 95% appropriateness threshold, 11,166 pairs remained. These pairs accounted for 183,127 of the manually asserted links (76.47% of all links). Retrospective application of the knowledge base linked 68,316 medications that clinicians had not previously linked to an indicated problem (36.53% of unlinked medications). Expert review of the combined knowledge base, comprising both inferred and manually linked problem-medication pairs, found a sensitivity of 65.8% and a specificity of 97.9%. CONCLUSION: Crowdsourcing is an effective, inexpensive method for generating a knowledge base of problem-medication pairs that is automatically mapped to local terminologies, kept up to date, and reflective of local prescribing practices and trends.
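To make the two-step pipeline concrete, the sketch below illustrates one way the filtering and inference steps could be implemented. It is a minimal illustration under stated assumptions, not the study's implementation: the actual appropriateness metrics were developed through iterative review, whereas here appropriateness is approximated by a simple link-frequency ratio, and all names and example data (manual_links, min_count, min_appropriateness) are hypothetical.

```python
from collections import defaultdict

# Hypothetical example data: (problem, medication) links manually asserted
# by clinicians at prescribing time. Names and values are illustrative only.
manual_links = [
    ("hypertension", "lisinopril"),
    ("hypertension", "lisinopril"),
    ("heart failure", "lisinopril"),
    ("type 2 diabetes", "metformin"),
    ("type 2 diabetes", "metformin"),
]

def build_knowledge_base(links, min_count=2, min_appropriateness=0.95):
    """Retain problem-medication pairs whose estimated appropriateness
    clears the threshold.

    Appropriateness is approximated here as the fraction of a medication's
    manual links that point at this problem; the study derived its metrics
    through iterative review, so treat this ratio as a stand-in."""
    pair_counts = defaultdict(int)
    med_totals = defaultdict(int)
    for problem, med in links:
        pair_counts[(problem, med)] += 1
        med_totals[med] += 1
    return {
        (problem, med)
        for (problem, med), n in pair_counts.items()
        if n >= min_count and n / med_totals[med] >= min_appropriateness
    }

def infer_links(kb, active_problems, unlinked_meds):
    """Retrospectively link an unlinked medication to a problem on the
    patient's problem list when the pair appears in the knowledge base."""
    return [
        (problem, med)
        for med in unlinked_meds
        for problem in active_problems
        if (problem, med) in kb
    ]

kb = build_knowledge_base(manual_links)
# lisinopril is linked to hypertension in only 2 of 3 assertions (0.67),
# below the 0.95 threshold, so only ("type 2 diabetes", "metformin")
# survives filtering and can drive retrospective inference.
print(infer_links(kb, {"type 2 diabetes"}, {"metformin"}))
```

In this toy run, the knowledge base keeps only the metformin pair, and the inference step then links a previously unlinked metformin order to the patient's type 2 diabetes problem, mirroring how high-confidence crowdsourced pairs could be reapplied to unlinked medications.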