1.
J Med Internet Res; 21(11): e14809, 2019 Nov 28.
Article in English | MEDLINE | ID: mdl-31778117

ABSTRACT

BACKGROUND: In drug development clinical trials, there is a need to balance restricting variables through eligibility criteria against representing the broader patient population that may use a product once it is approved. Similarly, although recent policy initiatives focusing on the inclusion of historically underrepresented groups are being implemented, barriers still remain. These limitations of clinical trials may mask potential product benefits and side effects. To bridge these gaps, online communication in health communities may serve as an additional population signal for drug side effects.

OBJECTIVE: The aim of this study was to employ a nontraditional dataset to identify drug side-effect signals. The study was designed to apply both natural language processing (NLP) technology and hands-on linguistic analysis to a set of online posts from known statin users to (1) identify any underlying crossover between the use of statins and impairment of memory or cognition and (2) capture the lexicon patients use when describing their experiences with statin medications and memory changes.

METHODS: Researchers analyzed over 11 million user-generated posts on Inspire, written by patients and caregivers belonging to a variety of communities on the platform. After identifying relevant posts, researchers used NLP and hands-on linguistic analysis to draw and expand upon correlations among statin use, memory, and cognition.

RESULTS: NLP analysis of posts identified statistical correlations between statin users and the discussion of memory impairment that were not observed in control groups. Of all members on Inspire, 3.1% had posted about memory or cognition. In a control group of members who had posted about TNF inhibitors, 6.2% had also posted about memory and cognition. In comparison, of all members who had posted about a statin medication, 22.6% (P<.001) also posted about memory and cognition. Linguistic analysis of a sample of posts provided themes and context for these statistical findings. Posts from statin users about memory revealed four key themes, described in detail in the data: memory loss, aphasia, cognitive impairment, and emotional change.

CONCLUSIONS: Correlations from this study point to a need for further research on the impact of statins on memory and cognition. Furthermore, when using nontraditional datasets such as online communities, NLP and linguistic methodologies broaden the population available for identifying side-effect signals. For side effects such as those on memory and cognition, where self-reporting may be unreliable, these methods can provide another avenue to inform patients, providers, and the Food and Drug Administration.
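The statin-versus-control comparison above is, at its core, a test of proportions between cohorts of posters (members who did or did not post about memory/cognition). A minimal sketch of that kind of 2x2 comparison, using placeholder counts rather than the study's data and standard SciPy tooling rather than the authors' pipeline:

```python
# Sketch only: comparing the rate of memory/cognition posts between a drug
# cohort and a control cohort with a chi-squared test on a 2x2 table.
# All counts below are made-up placeholders, not the study's data.
from scipy.stats import chi2_contingency

def compare_mention_rates(drug_mentions, drug_total, control_mentions, control_total):
    """Chi-squared test of memory/cognition mention rates across two cohorts."""
    table = [
        [drug_mentions, drug_total - drug_mentions],            # e.g., statin posters
        [control_mentions, control_total - control_mentions],   # e.g., TNF-inhibitor posters
    ]
    chi2, p, _, _ = chi2_contingency(table)
    return {
        "drug_rate": drug_mentions / drug_total,
        "control_rate": control_mentions / control_total,
        "chi2": chi2,
        "p_value": p,
    }

# Example with hypothetical cohort sizes:
print(compare_mention_rates(2260, 10000, 620, 10000))
```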


Subjects
Cognition/physiology, Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use, Memory/physiology, Patient Reported Outcome Measures, Communication, Female, Humans, Hydroxymethylglutaryl-CoA Reductase Inhibitors/pharmacology, Internet, Male, Qualitative Research
2.
PLoS Med; 15(11): e1002699, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30481176

ABSTRACT

BACKGROUND: Magnetic resonance imaging (MRI) of the knee is the preferred method for diagnosing knee injuries. However, interpretation of knee MRI is time-intensive and subject to diagnostic error and variability. An automated system for interpreting knee MRI could prioritize high-risk patients and assist clinicians in making diagnoses. Deep learning methods, which can automatically learn layers of features, are well suited for modeling the complex relationships between medical images and their interpretations. In this study we developed a deep learning model for detecting general abnormalities and specific diagnoses (anterior cruciate ligament [ACL] tears and meniscal tears) on knee MRI exams. We then measured the effect of providing the model's predictions to clinical experts during interpretation.

METHODS AND FINDINGS: Our dataset consisted of 1,370 knee MRI exams performed at Stanford University Medical Center between January 1, 2001, and December 31, 2012 (mean age 38.0 years; 569 [41.5%] female patients). The majority vote of 3 musculoskeletal radiologists established reference standard labels on an internal validation set of 120 exams. We developed MRNet, a convolutional neural network for classifying MRI series, and combined predictions from 3 series per exam using logistic regression. In detecting abnormalities, ACL tears, and meniscal tears, this model achieved area under the receiver operating characteristic curve (AUC) values of 0.937 (95% CI 0.895, 0.980), 0.965 (95% CI 0.938, 0.993), and 0.847 (95% CI 0.780, 0.914), respectively, on the internal validation set. We also obtained a public dataset of 917 exams with sagittal T1-weighted series and labels for ACL injury from Clinical Hospital Centre Rijeka, Croatia. On the external validation set of 183 exams, the MRNet trained on Stanford sagittal T2-weighted series achieved an AUC of 0.824 (95% CI 0.757, 0.892) in the detection of ACL injuries with no additional training, while an MRNet trained on the rest of the external data achieved an AUC of 0.911 (95% CI 0.864, 0.958). We additionally measured the specificity, sensitivity, and accuracy of 9 clinical experts (7 board-certified general radiologists and 2 orthopedic surgeons) on the internal validation set, both with and without model assistance. Using a 2-sided Pearson's chi-squared test with adjustment for multiple comparisons, we found no significant differences between the performance of the model and that of unassisted general radiologists in detecting abnormalities. General radiologists achieved significantly higher sensitivity in detecting ACL tears (p-value = 0.002; q-value = 0.019) and significantly higher specificity in detecting meniscal tears (p-value = 0.003; q-value = 0.019). Using a 1-tailed t test on the change in performance metrics, we found that providing model predictions significantly increased clinical experts' specificity in identifying ACL tears (p-value < 0.001; q-value = 0.006). The primary limitations of our study include the lack of surgical ground truth and the small size of the panel of clinical experts.

CONCLUSIONS: Our deep learning model can rapidly generate accurate clinical pathology classifications of knee MRI exams from both internal and external datasets. Moreover, our results support the assertion that deep learning models can improve the performance of clinical experts during medical imaging interpretation. Further research is needed to validate the model prospectively and to determine its utility in the clinical setting.
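The exam-level prediction step described above (per-series CNN outputs combined with logistic regression) can be illustrated with a short sketch. This is not the published MRNet code; the per-series probabilities and labels below are synthetic placeholders standing in for the CNN outputs and reference-standard labels:

```python
# Sketch only: combining per-series probabilities from three MRI series
# (e.g., sagittal, coronal, axial) into one exam-level prediction with
# logistic regression, then scoring with AUC. Data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_exams = 120

# Hypothetical per-series probabilities for one task (e.g., ACL tear):
# shape (n_exams, 3), one probability per series per exam.
series_probs = rng.uniform(size=(n_exams, 3))
labels = rng.integers(0, 2, size=n_exams)  # stand-in reference-standard labels

combiner = LogisticRegression()
combiner.fit(series_probs, labels)                      # learn series weights
exam_probs = combiner.predict_proba(series_probs)[:, 1] # exam-level probability

print("AUC:", roc_auc_score(labels, exam_probs))
```

With synthetic inputs the AUC is near chance; the point is only the shape of the combination step, not the reported performance.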


Subjects
Anterior Cruciate Ligament Injuries/diagnostic imaging, Deep Learning, Computer-Assisted Diagnosis/methods, Computer-Assisted Image Interpretation/methods, Knee/diagnostic imaging, Magnetic Resonance Imaging/methods, Tibial Meniscus Injuries/diagnostic imaging, Adult, Automation, Factual Databases, Female, Humans, Male, Middle Aged, Predictive Value of Tests, Reproducibility of Results, Retrospective Studies, Young Adult
3.
NPJ Digit Med; 3: 23, 2020.
Article in English | MEDLINE | ID: mdl-32140566

ABSTRACT

Artificial intelligence (AI) algorithms continue to rival human performance on a variety of clinical tasks, while their actual impact on human diagnosticians, when incorporated into clinical workflows, remains relatively unexplored. In this study, we developed a deep learning-based assistant to help pathologists differentiate between two subtypes of primary liver cancer, hepatocellular carcinoma and cholangiocarcinoma, on hematoxylin and eosin-stained whole-slide images (WSI), and evaluated its effect on the diagnostic performance of 11 pathologists with varying levels of expertise. Our model achieved accuracies of 0.885 on a validation set of 26 WSI and 0.842 on an independent test set of 80 WSI. Although use of the assistant did not change the mean accuracy of the 11 pathologists (p = 0.184, OR = 1.281), it significantly improved the accuracy (p = 0.045, OR = 1.499) of a subset of nine pathologists who fell within well-defined experience levels (GI subspecialists, non-GI subspecialists, and trainees). In the assisted state, model accuracy significantly impacted the diagnostic decisions of all 11 pathologists. As expected, when the model's prediction was correct, assistance significantly improved accuracy (p < 0.001, OR = 4.289), whereas when the model's prediction was incorrect, assistance significantly decreased accuracy (p < 0.001, OR = 0.253), with both effects holding across all pathologist experience levels and case difficulty levels. Our results highlight the challenges of translating AI models into the clinical setting, and emphasize the importance of taking into account potential unintended negative consequences of model assistance when designing and testing medical AI-assistance tools.
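The effect sizes above are reported as odds ratios for correct versus incorrect diagnoses with and without assistance. A minimal sketch of that calculation from a 2x2 table, with made-up counts rather than the study's data:

```python
# Sketch only: odds ratio of a correct diagnosis, assisted vs. unassisted.
# Counts are hypothetical placeholders, not taken from the study.
def odds_ratio(correct_assisted, total_assisted, correct_unassisted, total_unassisted):
    """OR = (a*d) / (b*c) for the 2x2 table [correct, incorrect] x [assisted, unassisted]."""
    a = correct_assisted
    b = total_assisted - correct_assisted
    c = correct_unassisted
    d = total_unassisted - correct_unassisted
    return (a * d) / (b * c)

print(odds_ratio(correct_assisted=95, total_assisted=110,
                 correct_unassisted=85, total_unassisted=110))
```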

4.
JMIR Public Health Surveill; 5(2): e11264, 2019 Jun 03.
Article in English | MEDLINE | ID: mdl-31162134

ABSTRACT

BACKGROUND: Adverse drug reactions (ADRs) occur in nearly all patients on chemotherapy, causing morbidity and therapy disruptions. Detection of such ADRs is limited in clinical trials, which are underpowered to detect rare events. Early recognition of ADRs in the postmarketing phase could substantially reduce morbidity and decrease societal costs. Internet community health forums provide a mechanism for individuals to discuss real-time health concerns and can enable computational detection of ADRs.

OBJECTIVE: The goal of this study is to identify cutaneous ADR signals in social health networks and compare the frequency and timing of these ADRs to clinical reports in the literature.

METHODS: We present a natural language processing (NLP)-based ADR signal-generation pipeline built on patient posts from Internet social health networks. We identified user posts from the Inspire health forums related to two classes of cancer therapy: erlotinib, an epidermal growth factor receptor inhibitor, and the immune checkpoint inhibitors nivolumab and pembrolizumab. We extracted mentions of ADRs from the unstructured content of patient posts and then performed population-level association analyses and time-to-detection analyses.

RESULTS: Our system detected cutaneous ADRs from patient reports with high precision (0.90) and at frequencies comparable to those documented in the literature, but an average of 7 months ahead of their literature reporting. Known ADRs were associated with higher proportional reporting ratios compared to negative controls, demonstrating the robustness of our analyses. Our named entity recognition system achieved a 0.738 microaveraged F-measure in detecting ADR entities, not limited to cutaneous ADRs, in health forum posts. Additionally, we discovered the novel ADR of hypohidrosis reported by 23 patients in erlotinib-related posts; this ADR was absent from 15 years of literature on this medication, and we recently reported the finding in a clinical oncology journal.

CONCLUSIONS: Several hundred million patients report health concerns in social health networks, yet this information is markedly underutilized for pharmacosurveillance. We demonstrated the ability of an NLP-based signal-generation pipeline to accurately detect patient reports of ADRs months in advance of literature reporting, and the robustness of the statistical analyses used to validate system detections. Our findings suggest the important role that social health network data can play in enabling more comprehensive and timely pharmacovigilance.
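The disproportionality statistic mentioned in the results, the proportional reporting ratio (PRR), compares how often an ADR is mentioned alongside the drug of interest versus alongside comparator drugs. A minimal sketch of the standard PRR formula applied to extracted mention counts; the counts and the comparator choice below are hypothetical, not the authors' data or pipeline:

```python
# Sketch only: proportional reporting ratio (PRR) for a drug-ADR pair,
# computed from 2x2 counts of (drug, ADR) mentions. Placeholder counts.
def proportional_reporting_ratio(drug_adr, drug_other, comparator_adr, comparator_other):
    """PRR = [a / (a + b)] / [c / (c + d)], where a,b are ADR / non-ADR
    mentions for the drug of interest and c,d for the comparator group."""
    drug_rate = drug_adr / (drug_adr + drug_other)
    comparator_rate = comparator_adr / (comparator_adr + comparator_other)
    return drug_rate / comparator_rate

# Example: posts mentioning a cutaneous ADR among erlotinib-related posts
# versus posts about an unrelated comparator drug class (made-up numbers).
print(proportional_reporting_ratio(drug_adr=120, drug_other=880,
                                   comparator_adr=30, comparator_other=970))
```

A PRR well above 1 (here 4.0) flags a candidate signal for further review; it does not by itself establish causality.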
