ABSTRACT
INTRODUCTION: There is significant public health interest in providing medical care at mass-gathering events. Furthermore, mass gatherings can strain the availability of already-limited municipal Emergency Medical Services (EMS) resources. This study presents a cross-sectional descriptive analysis of broad trends regarding patients who were transported from National Collegiate Athletic Association (NCAA) Division 1 collegiate football games at a major public university, in order to better inform emergency preparedness and resource planning for mass gatherings. METHODS: Patient care reports (PCRs) from ambulance transports originating from varsity collegiate football games at the University of Minnesota across six years were examined. Pertinent information was abstracted from each PCR. RESULTS: Across the six years of data, there were a total of 73 patient transports originating from NCAA collegiate football games: 45.2% (n = 33) were male, and the median age was 22 years. Alcohol-related chief complaints were involved in 50.7% (n = 37) of transports. In total, 31.5% of patients had an initial Glasgow Coma Scale (GCS) of less than 15. The majority (65.8%; n = 48; 0.11 per 10,000 attendees) were transported by Basic Life Support (BLS) ambulances. The remaining patients (34.2%; n = 25; 0.06 per 10,000 attendees) were transported by Advanced Life Support (ALS) ambulances and were more likely to be older, have abnormal vital signs, and have a lower GCS. CONCLUSIONS: This analysis of ambulance transports from NCAA Division 1 collegiate football games emphasizes the prevalence of alcohol-related chief complaints, but also underscores the likelihood of more life-threatening conditions at mass gatherings. These results and additional research will help inform emergency preparedness at mass-gathering events.
Subjects
Ambulances; Football; Humans; Male; Cross-Sectional Studies; Female; Young Adult; Minnesota; Adult; Emergency Medical Services; Universities; Adolescent; Crowding
ABSTRACT
Importance: The Sentinel System is a key component of the US Food and Drug Administration (FDA) postmarketing safety surveillance commitment and uses clinical health care data to conduct analyses to inform drug labeling and safety communications, FDA advisory committee meetings, and other regulatory decisions. However, observational data are frequently deemed insufficient for reliable evaluation of safety concerns owing to limitations in underlying data or methodology. Advances in large language models (LLMs) provide new opportunities to address some of these limitations. However, careful consideration is necessary for how and where LLMs can be effectively deployed for these purposes. Observations: LLMs may provide new avenues to support signal-identification activities to identify novel adverse event signals from the narrative text of electronic health records. These algorithms may be used to support epidemiologic investigations examining the causal relationship between exposure to a medical product and an adverse event through development of probabilistic phenotyping of health outcomes of interest and extraction of information related to important confounding factors. With additional tailored training, LLMs may perform like traditional natural language processing tools by annotating text with controlled vocabularies. LLMs offer opportunities for enhancing information extraction from adverse event reports, medical literature, and other biomedical knowledge sources. There are several challenges that must be considered when leveraging LLMs for postmarket surveillance. Prompt engineering is needed to ensure that LLM-extracted associations are accurate and specific. LLMs require extensive infrastructure to use, which many health care systems lack; this can impact diversity, equity, and inclusion, and can obscure significant adverse event patterns in some populations.
LLMs are known to generate nonfactual statements, which could lead to false positive signals and downstream evaluation activities by the FDA and other entities, incurring substantial cost. Conclusions and Relevance: LLMs represent a novel paradigm that may facilitate generation of information to support medical product postmarket surveillance activities in ways that have not previously been possible. However, additional work is required to ensure LLMs can be used in a fair and equitable manner, minimize false positive findings, and support the necessary rigor of signal detection needed for regulatory activities.
Subjects
Natural Language Processing; Product Surveillance, Postmarketing; United States Food and Drug Administration; Product Surveillance, Postmarketing/methods; Humans; United States; Electronic Health Records
ABSTRACT
Electronic health record (EHR) data are seen as an important source for pharmacoepidemiology studies. In the US healthcare system, EHR systems often capture only fragments of patients' health information across the care continuum, including primary care, specialist care, hospitalizations, and pharmacy dispensing. This leads to unobservable information in longitudinal evaluations of medication effects, causing unmeasured confounding, misclassification, and truncated follow-up times. A remedy is to link EHR data with longitudinal claims data, which record all encounters during a defined enrollment period across all care settings. We evaluate EHR and claims data sources in three aspects relevant to etiologic studies of medical products: data continuity, data granularity, and data chronology. Reflecting on the strengths and limitations of EHR and insurance claims data, it becomes obvious that they complement each other. The combination of both will improve the validity of etiologic studies and expand the range of questions that can be answered. As the research community transitions toward a future state with access to large-scale combined EHR+claims data, we outline analytic templates to improve the validity and broaden the scope of pharmacoepidemiology studies in the current environment, where EHR data are available only for a subset of patients with claims data.
ABSTRACT
Adverse drug reactions are a common cause of morbidity in health care. The US Food and Drug Administration (FDA) evaluates individual case safety reports of adverse events (AEs) after submission to the FDA Adverse Event Reporting System as part of its surveillance activities. Over the past decade, the FDA has explored the application of artificial intelligence (AI) to evaluate these reports to improve the efficiency and scientific rigor of the process. However, a gap remains between AI algorithm development and deployment. This viewpoint aims to describe the lessons learned from our experience and research needed to address both general issues in case-based reasoning using AI and specific needs for individual case safety report assessment. Beginning with the recognition that the trustworthiness of the AI algorithm is the main determinant of its acceptance by human experts, we apply the Diffusion of Innovations theory to help explain why certain algorithms for evaluating AEs at the FDA were accepted by safety reviewers and others were not. This analysis reveals that the process by which clinicians decide from case reports whether a drug is likely to cause an AE is not well defined beyond general principles. This makes the development of high performing, transparent, and explainable AI algorithms challenging, leading to a lack of trust by the safety reviewers. Even accounting for the introduction of large language models, the pharmacovigilance community needs an improved understanding of causal inference and of the cognitive framework for determining the causal relationship between a drug and an AE. We describe specific future research directions that underpin facilitating implementation and trust in AI for drug safety applications, including improved methods for measuring and controlling of algorithmic uncertainty, computational reproducibility, and clear articulation of a cognitive framework for causal inference in case-based reasoning.
Subjects
Artificial Intelligence; United States Food and Drug Administration; United States; Humans; Drug-Related Side Effects and Adverse Reactions; Clinical Decision-Making; Product Surveillance, Postmarketing/methods; Adverse Drug Reaction Reporting Systems; Algorithms; Trust
ABSTRACT
BACKGROUND: Unmeasured confounding is often raised as a source of potential bias during the design of non-randomized studies, but quantifying such concerns is challenging. METHODS: We developed a simulation-based approach to assess the potential impact of unmeasured confounding during the study design stage. The approach involved generation of hypothetical individual-level cohorts using realistic parameters, including a binary treatment (prevalence 25%), a time-to-event outcome (incidence 5%), 13 measured covariates, a binary unmeasured confounder (u1, 10%), and a binary measured 'proxy' variable (p1) correlated with u1. The strength of unmeasured confounding and the correlations between u1 and p1 were varied across simulation scenarios. Treatment effects were estimated with (a) no adjustment, (b) adjustment for measured confounders (Level 1), and (c) adjustment for measured confounders and their proxy (Level 2). We computed absolute standardized mean differences in u1 and p1 and relative bias with each level of adjustment. RESULTS: Across all scenarios, Level 2 adjustment led to improvement in the balance of u1, but this improvement was highly dependent on the correlation between u1 and p1. Level 2 adjustments also had lower relative bias than Level 1 adjustments (in strong u1 scenarios: relative bias of 9.2%, 12.2%, and 13.5% at correlations of 0.7, 0.5, and 0.3, respectively, versus 16.4%, 15.8%, and 15.0% for Level 1). CONCLUSION: An approach using simulated individual-level data was useful to explicitly convey the potential for bias due to unmeasured confounding while designing non-randomized studies and can be helpful in informing design choices.
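The core idea of the simulation can be sketched in Python. This is an illustrative re-implementation, not the study's actual code: the prevalences, the u1-p1 agreement rate, and the stratified-balance check are all assumptions chosen to show how conditioning on a proxy improves balance in an unmeasured confounder.

```python
import math
import random

def asmd_binary(treated, control):
    """Absolute standardized mean difference for a binary covariate."""
    p_t = sum(treated) / len(treated)
    p_c = sum(control) / len(control)
    pooled = math.sqrt((p_t * (1 - p_t) + p_c * (1 - p_c)) / 2)
    return abs(p_t - p_c) / pooled if pooled > 0 else 0.0

random.seed(42)
n = 50_000
rows = []
for _ in range(n):
    u1 = random.random() < 0.10                       # unmeasured confounder, 10% prevalence
    # proxy p1 agrees with u1 most of the time, inducing a correlation (assumed rate)
    p1 = u1 if random.random() < 0.85 else (random.random() < 0.10)
    # treatment prevalence ~25% overall, higher when u1 is present (assumed effect)
    treat = random.random() < (0.40 if u1 else 0.23)
    rows.append((u1, p1, treat))

# crude imbalance in u1 between treatment groups
crude = asmd_binary([u for u, p, t in rows if t],
                    [u for u, p, t in rows if not t])

# "Level 2"-style adjustment: assess balance of u1 within strata of the proxy p1
stratum_asmds = []
for level in (True, False):
    t_s = [u for u, p, t in rows if t and p == level]
    c_s = [u for u, p, t in rows if not t and p == level]
    stratum_asmds.append(asmd_binary(t_s, c_s))
adjusted = sum(stratum_asmds) / len(stratum_asmds)

print(f"ASMD of u1, crude: {crude:.3f}; averaged within p1 strata: {adjusted:.3f}")
```

With these assumed parameters, the average within-stratum imbalance in u1 is smaller than the crude imbalance, mirroring the study's finding that adjustment for the proxy partially controls the unmeasured confounder, with the residual depending on how strongly p1 tracks u1.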
ABSTRACT
OBJECTIVE: To present a general framework providing high-level guidance to developers of computable algorithms for identifying patients with specific clinical conditions (phenotypes) through a variety of approaches, including but not limited to machine learning and natural language processing methods to incorporate rich electronic health record data. MATERIALS AND METHODS: Drawing on extensive prior phenotyping experiences and insights derived from 3 algorithm development projects conducted specifically for this purpose, our team with expertise in clinical medicine, statistics, informatics, pharmacoepidemiology, and healthcare data science methods conceptualized stages of development and corresponding sets of principles, strategies, and practical guidelines for improving the algorithm development process. RESULTS: We propose 5 stages of algorithm development and corresponding principles, strategies, and guidelines: (1) assessing fitness-for-purpose, (2) creating gold standard data, (3) feature engineering, (4) model development, and (5) model evaluation. DISCUSSION AND CONCLUSION: This framework is intended to provide practical guidance and serve as a basis for future elaboration and extension.
Subjects
Algorithms; Electronic Health Records; Natural Language Processing; Phenotype; Humans; Machine Learning
ABSTRACT
Telehealth is an effective way to increase access to genetic services and can address several challenges, including geographic barriers, a shortage of interpreter services, and workforce issues, especially for prenatal diagnosis. The addition of prenatal telegenetics to current workflows shows promise in enhancing the delivery of genetic counseling and testing in prenatal care, providing accessibility, accuracy, patient satisfaction, and cost-effectiveness. Further research is needed to explore long-term patient outcomes and the evolving role of telehealth for prenatal diagnosis. Future studies should address the accuracy of diagnoses, the impact of receiving a diagnosis in a virtual setting, and patient outcomes in order to make informed decisions about the appropriate use of telemedicine in prenatal genetics service delivery.
Subjects
Telemedicine; Pregnancy; Female; Humans; Genetic Counseling; Patient Satisfaction; Prenatal Diagnosis
ABSTRACT
BACKGROUND: Chronic, intractable, neuropathic pain is readily treatable with spinal cord stimulation (SCS). Technological advancements, including device miniaturization, are advancing the field of neuromodulation. OBJECTIVES: We report here the results of an SCS clinical trial to treat chronic low back and leg pain with a micro-implantable pulse generator (micro-IPG). STUDY DESIGN: This was a single-arm, prospective, multicenter, postmarket, observational study. SETTING: Patients were recruited from 15 US-based comprehensive pain centers. METHODS: This open-label clinical trial was designed to evaluate the performance of the Nalu™ Neurostimulation System (Nalu Medical, Inc., Carlsbad, CA) in the treatment of low back and leg pain. Patients who provided informed consent and were successfully screened for study entry were implanted with temporary trial leads. Patients went on to receive a permanent implant of the leads and micro-IPG if they demonstrated a ≥50% reduction in pain during the temporary trial period. Patient-reported outcomes (PROs), such as pain scores, functional disability, mood, patient impression of change, comfort, therapy use profile, and device ease of use, were captured. RESULTS: At baseline, the average pain Visual Analog Scale (VAS) score was 72.1 ± 17.9 in the leg and 78.0 ± 15.4 in the low back. At 90 days following permanent implant (end of study), pain scores improved by 76% (VAS 18.5 ± 18.8) in the leg and 75% (VAS 19.7 ± 20.8) in the low back. Eighty-six percent of both leg pain and low back pain patients demonstrated a ≥50% reduction in pain at 90 days following implant. The comfort of the external wearable (Therapy Disc and Adhesive Clip) was rated 1.16 ± 1.53, on average, at 90 days on an 11-point rating scale (0 = very comfortable, 10 = very uncomfortable). All PROs demonstrated statistically significant symptomatic improvement at 90 days following implant of the micro-IPG.
LIMITATIONS: Limitations of this study include the lack of long-term results (beyond 90 days) and a relatively small sample size of 35 patients who were part of the analysis; additionally, there was no control arm or randomization as this was a single-arm study, without a comparator, designed to document the efficacy and safety of the device. Therefore, no direct comparisons to other SCS systems were possible. CONCLUSIONS: This clinical study demonstrated profound leg and low back pain relief in terms of overall pain reduction, as well as the proportion of therapy responders. The study patients reported the wearable aspects of the system to be very comfortable.
Subjects
Chronic Pain; Low Back Pain; Neuralgia; Pain, Intractable; Spinal Cord Stimulation; Humans; Low Back Pain/therapy; Prospective Studies; Treatment Outcome; Pain Measurement/methods; Chronic Pain/therapy; Spinal Cord Stimulation/methods; Neuralgia/therapy; Spinal Cord
ABSTRACT
Congress mandated the creation of a postmarket Active Risk Identification and Analysis (ARIA) system containing data on 100 million individuals for monitoring risks associated with drug and biologic products using data from disparate sources to complement the US Food and Drug Administration's (FDA's) existing postmarket capabilities. We report on the first 6 years of ARIA utilization in the Sentinel System (2016-2021). The FDA has used the ARIA system to evaluate 133 safety concerns; 54 of these evaluations have closed with regulatory determinations, whereas the rest remain in progress. If the ARIA system and the FDA's Adverse Event Reporting System are deemed insufficient to address a safety concern, then the FDA may issue a postmarket requirement to a product's manufacturer. One hundred ninety-seven ARIA insufficiency determinations have been made. The most common situation for which ARIA was found to be insufficient is the evaluation of adverse pregnancy and fetal outcomes following in utero drug exposure, followed by neoplasms and death. ARIA was most likely to be sufficient for thromboembolic events, which have high positive predictive value in claims data alone and do not require supplemental clinical data. The lessons learned from this experience illustrate the continued challenges using administrative claims data, especially to define novel clinical outcomes. This analysis can help to identify where more granular clinical data are needed to fill gaps to improve the use of real-world data for drug safety analyses and provide insights into what is needed to efficiently generate high-quality real-world evidence for efficacy.
Subjects
Food; Product Surveillance, Postmarketing; United States; Humans; Pharmaceutical Preparations; United States Food and Drug Administration
ABSTRACT
Identifying patient cohorts meeting the criteria of specific phenotypes is essential in biomedicine and particularly timely in precision medicine. Many research groups deliver pipelines that automatically retrieve and analyze data elements from one or more sources to automate this task and deliver high-performing computable phenotypes. We applied a systematic approach based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines to conduct a thorough scoping review on computable clinical phenotyping. Five databases were searched using a query that combined the concepts of automation, clinical context, and phenotyping. Subsequently, four reviewers screened 7960 records (after removing over 4000 duplicates) and selected 139 that satisfied the inclusion criteria. This dataset was analyzed to extract information on target use cases, data-related topics, phenotyping methodologies, evaluation strategies, and the portability of developed solutions. Most studies supported patient cohort selection without discussing the application to specific use cases, such as precision medicine. Electronic Health Records were the primary source in 87.1% (N = 121) of all studies, and International Classification of Diseases codes were heavily used in 55.4% (N = 77) of all studies; however, only 25.9% (N = 36) of the records described compliance with a common data model. In terms of the presented methods, traditional Machine Learning (ML) was the dominant method, often combined with natural language processing and other approaches, while external validation and portability of computable phenotypes were pursued in many cases. These findings revealed that defining target use cases precisely, moving away from ML-only strategies, and evaluating the proposed solutions in real-world settings are essential opportunities for future work.
There is also momentum and an emerging need for computable phenotyping to support clinical and epidemiological research and precision medicine.
Subjects
Algorithms; Electronic Health Records; Machine Learning; Natural Language Processing; Phenotype
ABSTRACT
We sought to determine whether machine learning and natural language processing (NLP) applied to electronic medical records could improve the performance of automated health-care claims-based algorithms to identify anaphylaxis events, using data on 516 patients with outpatient, emergency department, or inpatient anaphylaxis diagnosis codes during 2015-2019 in 2 integrated health-care institutions in the Northwest United States. We used one site's manually reviewed gold-standard outcomes data for model development and the other's for external validation, based on cross-validated area under the receiver operating characteristic curve (AUC), positive predictive value (PPV), and sensitivity. In the development site, 154 (64%) of 239 potential events met adjudication criteria for anaphylaxis, compared with 180 (65%) of 277 in the validation site. Logistic regression models using only structured claims data achieved a cross-validated AUC of 0.58 (95% CI: 0.54, 0.63). Machine learning improved the cross-validated AUC to 0.62 (0.58, 0.66); incorporating NLP-derived covariates further increased cross-validated AUCs to 0.70 (0.66, 0.75) in development and 0.67 (0.63, 0.71) in external validation data. A classification threshold with a cross-validated PPV of 79% and cross-validated sensitivity of 66% in development data had a cross-validated PPV of 78% and cross-validated sensitivity of 56% in external data. Machine learning and NLP-derived data improved the identification of validated anaphylaxis events.
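The AUC reported above is equivalent to the probability that a randomly chosen adjudicated event receives a higher model score than a randomly chosen non-event. A minimal pure-Python sketch of that rank interpretation (the scores below are hypothetical illustrations, not study data):

```python
def auc(pos_scores, neg_scores):
    """Probability a random positive outranks a random negative (ties count 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# hypothetical scores for validated anaphylaxis events (pos) vs. non-events (neg)
claims_only = auc([0.62, 0.55, 0.48], [0.50, 0.58, 0.45])  # weaker separation
with_nlp    = auc([0.80, 0.74, 0.52], [0.49, 0.60, 0.41])  # NLP features added
print(claims_only, with_nlp)
```

Adding informative covariates, as the NLP-derived features did in the study, raises this pairwise-ranking probability, which is exactly the AUC gain the abstract describes.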
Subjects
Anaphylaxis; Natural Language Processing; Humans; Anaphylaxis/diagnosis; Anaphylaxis/epidemiology; Machine Learning; Algorithms; Emergency Service, Hospital; Electronic Health Records
ABSTRACT
BACKGROUND: Acute pancreatitis is a serious gastrointestinal disease that is an important target for drug safety surveillance. Little is known about the accuracy of ICD-10 codes for acute pancreatitis in the United States, or their performance in specific clinical settings. We conducted a validation study to assess the accuracy of acute pancreatitis ICD-10 diagnosis codes in inpatient, emergency department (ED), and outpatient settings. METHODS: We reviewed electronic medical records for encounters with acute pancreatitis diagnosis codes in an integrated healthcare system from October 2015 to December 2019. Trained abstractors and physician adjudicators determined whether events met criteria for acute pancreatitis. RESULTS: Out of 1,844 eligible events, we randomly sampled 300 for review. Across all clinical settings, 182 events met validation criteria for an overall positive predictive value (PPV) of 61% (95% confidence intervals [CI] = 55, 66). The PPV was 87% (95% CI = 79, 92%) for inpatient codes, but only 45% for ED (95% CI = 35, 54%) and outpatient (95% CI = 34, 55%) codes. ED and outpatient encounters accounted for 43% of validated events. Acute pancreatitis codes from any encounter type with lipase >3 times the upper limit of normal had a PPV of 92% (95% CI = 86, 95%) and identified 85% of validated events (95% CI = 79, 89%), while codes with lipase <3 times the upper limit of normal had a PPV of only 22% (95% CI = 16, 30%). CONCLUSIONS: These results suggest that ICD-10 codes accurately identified acute pancreatitis in the inpatient setting, but not in the ED and outpatient settings. Laboratory data substantially improved algorithm performance.
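The overall estimate above can be reproduced from the reported counts (182 validated of 300 sampled). The sketch below uses a Wilson score interval, one common choice for binomial proportions (the abstract does not state which interval method the authors used):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Point estimate and Wilson score 95% interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, center - half, center + half

# 182 of 300 sampled acute pancreatitis codes met validation criteria
ppv, lo, hi = wilson_ci(182, 300)
print(f"PPV {ppv:.0%} (95% CI {lo:.0%}, {hi:.0%})")  # rounds to 61% (55%, 66%)
```

Rounded to whole percentages, this matches the abstract's overall PPV of 61% (95% CI = 55, 66).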
Subjects
Delivery of Health Care, Integrated; Pancreatitis; Adult; Humans; United States/epidemiology; Acute Disease; Pancreatitis/diagnosis; Pancreatitis/epidemiology; International Classification of Diseases; Predictive Value of Tests; Lipase
ABSTRACT
The US Food and Drug Administration (FDA) created the Sentinel System in response to a requirement in the FDA Amendments Act of 2007 that the agency establish a system for monitoring risks associated with drug and biologic products using data from disparate sources. The Sentinel System has completed hundreds of analyses, including many that have directly informed regulatory decisions. The Sentinel System also was designed to support a national infrastructure for a learning health system. Sentinel governance and guiding principles were designed to facilitate Sentinel's role as a national resource. The Sentinel System infrastructure now supports multiple non-FDA projects for stakeholders ranging from regulated industry to other federal agencies, international regulators, and academics. The Sentinel System is a working example of a learning health system that is expanding with the potential to create a global learning health system that can support medical product safety assessments and other research.
Subjects
Learning Health System; United States; United States Food and Drug Administration; Pharmaceutical Preparations
ABSTRACT
Adverse events (AEs) from drugs, therapeutic biologics, and medical devices are a major public health concern worldwide. Likelihood ratio test (LRT) approaches to pharmacovigilance constitute a class of rigorous statistical tools that permit objective identification of the AEs of a specific drug and/or a class of drugs cataloged in spontaneous reporting system databases. However, the existing LRT approaches encounter certain theoretical and computational challenges when an underlying Poisson model assumption is violated, including in cases of zero-inflated data. We briefly review existing LRT approaches and propose a novel class of (pseudo-) LRT methods to address these challenges. Our approach uses an alternative parametrization to formulate a unified framework with a common test statistic that can handle both Poisson and zero-inflated Poisson (ZIP) models. The proposed framework is computationally efficient, and it reveals deeper insights into the comparative behaviors of the Poisson and the ZIP models for handling AE data. Our extensive simulation studies document notably superior performance of the proposed methods over existing approaches, particularly under zero inflation, in terms of both statistical (e.g., much better control of the nominal level and false discovery rate with substantially enhanced power) and computational (~100- to 500-fold gains in average running times) performance metrics. An application of our method to the statin drug class from the FDA FAERS database reveals interesting insights on potential AEs. An R package, pvLRT, implementing our methods has been released in the public domain.
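The released pvLRT package is written in R. As a simplified illustration of the underlying idea, the basic Poisson log-likelihood-ratio statistic for a single drug-AE pair (observed versus expected report counts) can be sketched in Python. This is not the paper's pseudo-LRT; it omits the zero-inflation handling, and the counts are hypothetical:

```python
import math

def poisson_llr(n_obs, n_expected):
    """Log-likelihood-ratio statistic for one drug-AE cell under a Poisson model.
    Only elevated reporting signals: the statistic is 0 when observed <= expected."""
    if n_obs == 0 or n_obs <= n_expected:
        return 0.0
    # log LR = n*log(n/E) - (n - E), maximizing the Poisson likelihood at rate n
    return n_obs * math.log(n_obs / n_expected) - (n_obs - n_expected)

# hypothetical cell: 40 reports observed where 20 would be expected
print(poisson_llr(40, 20.0))
```

In practice, the maximum of such statistics over all cells is compared against a null distribution (typically via Monte Carlo) to control the overall error rate, which is where the Poisson versus ZIP modeling choice discussed above matters.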
Subjects
Drug-Related Side Effects and Adverse Reactions; Pharmacovigilance; United States; Humans; Likelihood Functions; Adverse Drug Reaction Reporting Systems; United States Food and Drug Administration
ABSTRACT
INTRODUCTION: COVID-19 spurred an unprecedented transition from in-person to telemedicine visits in March 2020 at our institution for all prenatal counseling sessions. This study aims to explore differences in the demographics of expectant mothers evaluated pre- and post-telemedicine implementation and to explore the patient experience with telemedicine. METHODS: A mixed-methods study was completed for mothers with a pregnancy complicated by a fetal surgical anomaly who visited a large tertiary fetal center. Using medical records as quantitative data, patient information was collected for all prenatal visits from 3/2019 to 3/2021. The sample was grouped into pre- and post-telemedicine implementation (based on a transition date of 3/2020). Univariate analysis was used to compare demographics between the study groups. Statistical significance was defined as P < 0.05. Eighteen semi-structured interviews were conducted from 8/2021 to 12/2021 to explore patients' experiences. Line-by-line coding and thematic analysis were performed to develop emerging themes. RESULTS: A total of 292 pregnancies were evaluated from 3/2019 to 3/2021 (pre-telemedicine 123, post-telemedicine 169). There was no significant difference in self-reported race (P = 0.28), ethnicity (P = 0.46), or primary language (P = 0.98). In qualitative interviews, patients reported advantages to telemedicine, including the convenience of the modality with the option to conduct their session in familiar settings (e.g., home) and avoid stressors (e.g., travel to the medical center and finding childcare). Some women reported difficulties establishing a physician-patient connection and a preference for in-person consultations. CONCLUSIONS: There was no difference in patient demographics at our fetal center in the year leading up to, and the time following, a significant transition to telemedicine. However, patients had unique perspectives on the advantages and disadvantages of the telemedicine experience.
To ensure patient-centered care, these findings suggest that patient preference should be considered when scheduling outpatient surgical counseling and visits.
Subjects
COVID-19; Telemedicine; Humans; Female; Pregnancy; Telemedicine/methods; Patient Preference; Counseling; Referral and Consultation
ABSTRACT
There is great interest in the application of 'artificial intelligence' (AI) to pharmacovigilance (PV). Although the US FDA is broadly exploring the use of AI for PV, we focus on the application of AI to the processing and evaluation of Individual Case Safety Reports (ICSRs) submitted to the FDA Adverse Event Reporting System (FAERS). We describe a general framework for considering the readiness of AI for PV, followed by some examples of the application of AI to ICSR processing and evaluation in industry and at the FDA. We conclude that AI can usefully be applied to some aspects of ICSR processing and evaluation, but the performance of current AI algorithms requires a 'human-in-the-loop' to ensure good quality. We identify outstanding scientific and policy issues to be addressed before the full potential of AI can be exploited for ICSR processing and evaluation, including approaches to quality assurance of 'human-in-the-loop' AI systems; large-scale, publicly available training datasets; a well-defined and computable 'cognitive framework'; a formal sociotechnical framework for applying AI to PV; and development of best practices for applying AI to PV. Practical experience with stepwise implementation of AI for ICSR processing and evaluation will likely provide important lessons that will inform the necessary policy and regulatory framework to facilitate widespread adoption and provide a foundation for further development of AI approaches to other aspects of PV.
Subjects
Drug-Related Side Effects and Adverse Reactions; Pharmacovigilance; Adverse Drug Reaction Reporting Systems; Algorithms; Artificial Intelligence; Drug-Related Side Effects and Adverse Reactions/prevention & control; Humans
ABSTRACT
There is an evolving and increasing need to utilize emerging cellular, molecular, and in silico technologies and novel approaches for the safety assessment of food, drugs, and personal care products. Convergence of these emerging technologies is also enabling rapid advances and approaches that may impact regulatory decisions and approvals. Although the development of emerging technologies may allow rapid advances in regulatory decision making, there is concern that these new technologies have not been thoroughly evaluated to determine whether they are ready for regulatory application, singly or in combination. The magnitude of these combined technical advances may outpace the ability to assess fitness for purpose and to allow routine application of these new methods for regulatory purposes. There is a need to develop strategies to evaluate the new technologies to determine which ones are ready for regulatory use. The opportunity to apply these potentially faster, more accurate, and cost-effective approaches remains an important goal to facilitate their incorporation into regulatory use. However, without a clear strategy to evaluate emerging technologies rapidly and appropriately, the value of these efforts may go unrecognized or their adoption may be delayed. It is important for the regulatory science field to keep up with the research in these technically advanced areas and to understand the science behind these new approaches. The regulatory field must understand the critical quality attributes of these novel approaches and learn from each other's experience so that workforces can be trained to prepare for emerging global regulatory challenges. Moreover, it is essential that the regulatory community work with technology developers to harness collective capabilities toward developing a strategy for the evaluation of these new and novel assessment tools.
Subjects
Biomedical Research; Computer Simulation; Humans
ABSTRACT
The Sentinel System is a major component of the United States Food and Drug Administration's (FDA) approach to active medical product safety surveillance. While Sentinel has historically relied on large quantities of health insurance claims data, leveraging longitudinal electronic health records (EHRs) that contain more detailed clinical information, as structured and unstructured features, may address some of the current gaps in capabilities. We identify key challenges when using EHR data to investigate medical product safety in a scalable and accelerated way, outline potential solutions, and describe the Sentinel Innovation Center's initiatives to put solutions into practice by expanding and strengthening the existing system with a query-ready, large-scale data infrastructure of linked EHR and claims data. We describe our initiatives in four strategic priority areas: (1) data infrastructure, (2) feature engineering, (3) causal inference, and (4) detection analytics, with the goal of incorporating emerging data science innovations to maximize the utility of EHR data for medical product safety surveillance.