Results 1 - 18 of 18
1.
J Am Med Inform Assoc ; 31(3): 705-713, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38031481

ABSTRACT

OBJECTIVE: The complexity and rapid pace of development of algorithmic technologies pose challenges for their regulation and oversight in healthcare settings. We sought to improve our institution's approach to evaluation and governance of algorithmic technologies used in clinical care and operations by creating an Implementation Guide that standardizes evaluation criteria so that local oversight is performed in an objective fashion. MATERIALS AND METHODS: Building on a framework that applies key ethical and quality principles (clinical value and safety, fairness and equity, usability and adoption, transparency and accountability, and regulatory compliance), we created concrete guidelines for evaluating algorithmic technologies at our institution. RESULTS: An Implementation Guide articulates evaluation criteria used during review of algorithmic technologies and details what evidence supports the implementation of ethical and quality principles for trustworthy health AI. Application of the processes described in the Implementation Guide can lead to algorithms that are safer as well as more effective, fair, and equitable upon implementation, as illustrated through 4 examples of technologies at different phases of the algorithmic lifecycle that underwent evaluation at our academic medical center. DISCUSSION: By providing clear descriptions/definitions of evaluation criteria and embedding them within standardized processes, we streamlined oversight processes and educated communities using and developing algorithmic technologies within our institution. CONCLUSIONS: We developed a scalable, adaptable framework for translating principles into evaluation criteria and specific requirements that support trustworthy implementation of algorithmic technologies in patient care and healthcare operations.


Subject(s)
Artificial Intelligence, Health Facilities, Humans, Algorithms, Academic Medical Centers, Patient Compliance
2.
Circ Heart Fail ; 16(2): e010158, 2023 02.
Article in English | MEDLINE | ID: mdl-36314130

ABSTRACT

BACKGROUND: Guideline-directed medical therapy (GDMT) for heart failure with reduced ejection fraction (HFrEF) improves clinical outcomes and quality of life. Optimizing GDMT in the hospital is associated with greater long-term use in HFrEF. This study aimed to evaluate the efficacy of a multidisciplinary virtual HF intervention on GDMT optimization among patients with HFrEF admitted for any cause. METHODS: In this pilot randomized, controlled study, consecutive patients with HFrEF admitted to noncardiology medicine services for any cause were identified at a large academic tertiary care hospital between May and September 2021. Major exclusions were end-stage renal disease, hemodynamic instability, concurrent COVID-19 infection, and current enrollment in hospice care. Patients were randomized to a clinician-level virtual peer-to-peer consult intervention providing GDMT recommendations and information on medication costs versus usual care. Primary end points included (1) proportion of patients with new GDMT initiation or use and (2) changes in HF optimal medical therapy scores, which included target dosing (range, 0-9). RESULTS: Of 242 patients identified, 91 (38%) were eligible and randomized to intervention (N=52) or usual care (N=39). Baseline characteristics were similar between intervention and usual care (mean age 63 versus 67 years, 23% versus 26% female, 46% versus 49% Black, mean ejection fraction 33% versus 31%). GDMT use on admission was also similar. There were greater proportions of patients with GDMT initiation or continuation with the intervention compared with usual care. After adjusting for optimal medical therapy score on admission, changes to optimal medical therapy score at discharge were higher for the intervention group compared with usual care (+0.44 versus -0.31, absolute difference +0.75, adjusted estimate 0.86±0.42; P=0.041). 
CONCLUSIONS: Among eligible patients with HFrEF hospitalized for any cause on noncardiology services, a multidisciplinary pilot virtual HF consultation increased new GDMT initiation and dose optimization at discharge.


Subject(s)
COVID-19, Heart Failure, Humans, Female, Middle Aged, Male, Heart Failure/therapy, Quality of Life, Pilot Projects, Stroke Volume, Hospitals, Referral and Consultation
3.
J Am Med Inform Assoc ; 29(9): 1631-1636, 2022 08 16.
Article in English | MEDLINE | ID: mdl-35641123

ABSTRACT

Artificial intelligence/machine learning models are being rapidly developed and used in clinical practice. However, many models are deployed without a clear understanding of clinical or operational impact and frequently lack monitoring plans that can detect potential safety signals. There is a lack of consensus in establishing governance to deploy, pilot, and monitor algorithms within operational healthcare delivery workflows. Here, we describe a governance framework that combines current regulatory best practices and lifecycle management of predictive models being used for clinical care. Since January 2021, we have successfully added models to our governance portfolio and are currently managing 52 models.


Subject(s)
Artificial Intelligence, Machine Learning, Algorithms, Delivery of Health Care
4.
Clin Infect Dis ; 75(3): 503-511, 2022 08 31.
Article in English | MEDLINE | ID: mdl-34739080

ABSTRACT

BACKGROUND: The impact of the US Centers for Medicare & Medicaid Services (CMS) Severe Sepsis and Septic Shock: Management Bundle (SEP-1) core measure on overall antibacterial utilization is unknown. METHODS: We performed a retrospective multicenter longitudinal cohort study with interrupted time-series analysis to determine the impact of SEP-1 implementation on antibacterial utilization and patient outcomes. All adult patients admitted to 26 hospitals between 1 October 2014 and 30 September 2015 (SEP-1 preparation period) and between 1 November 2015 and 31 October 2016 (SEP-1 implementation period) were evaluated for inclusion. The primary outcome was total antibacterial utilization, measured as days of therapy (DOT) per 1000 patient-days. RESULTS: The study cohort included 701 055 eligible patient admissions and 4.2 million patient-days. Overall antibacterial utilization increased 2% each month during SEP-1 preparation (relative rate [RR], 1.02 per month [95% confidence interval {CI}, 1.00-1.04]; P = .02). Cumulatively, the mean monthly DOT per 1000 patient-days increased 24.4% (95% CI, 18.0%-38.8%) over the entire study period (October 2014-October 2016). The rate of sepsis diagnosis per 1000 patients increased 2% each month during SEP-1 preparation (RR, 1.02 per month [95% CI, 1.00-1.04]; P = .04). The all-cause mortality rate per 1000 patients decreased during the study period (RR for SEP-1 preparation, 0.95 [95% CI, .92-.98; P = .001]; RR for SEP-1 implementation, .98 [.97-1.00; P = .01]). Cumulatively, the monthly mean all-cause mortality rate per 1000 patients declined 38.5% (95% CI, 25.9%-48.0%) over the study period. CONCLUSIONS: Announcement and implementation of the CMS SEP-1 process measure were associated with increased diagnosis of sepsis and antibacterial utilization and decreased mortality rate among hospitalized patients.
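A note on interpreting the "2% per month" figures above: a constant monthly relative rate compounds multiplicatively, so the cumulative change over a period is not 2% times the number of months. The sketch below illustrates this arithmetic with the abstract's RR of 1.02 and an assumed 12-month horizon; the abstract's reported 24.4% cumulative increase comes from the fitted time-series model, not from this simple compounding.

```python
# How a constant monthly relative rate (RR) compounds into a cumulative
# percent change over a study period. RR = 1.02 is from the abstract;
# the 12-month horizon is an illustrative assumption.

def cumulative_change(monthly_rr: float, months: int) -> float:
    """Percent change after `months` of compounding at `monthly_rr` per month."""
    return (monthly_rr ** months - 1) * 100

print(round(cumulative_change(1.02, 12), 1))  # 26.8 (% increase over one year)
```

Twelve months of 2% monthly growth yields roughly a 26.8% increase, in the same range as the study's model-based cumulative estimate.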


Subject(s)
Patient Care Bundles, Sepsis, Adult, Aged, Anti-Bacterial Agents/therapeutic use, Cohort Studies, Humans, Longitudinal Studies, Medicaid, Medicare, Retrospective Studies, United States
5.
J Am Coll Cardiol ; 78(20): 2004-2012, 2021 11 16.
Article in English | MEDLINE | ID: mdl-34763778

ABSTRACT

Sodium-glucose cotransporter-2 inhibitor therapy is well suited for initiation during the heart failure hospitalization, owing to clinical benefits that accrue rapidly within days to weeks, a strong safety and tolerability profile, minimal to no effects on blood pressure, and no excess risk of adverse kidney events. There is no evidence to suggest that deferring initiation to the outpatient setting accomplishes anything beneficial. Instead, there is compelling evidence that deferring in-hospital initiation exposes patients to excess risk of early postdischarge clinical worsening and death. Lessons from other heart failure with reduced ejection fraction therapies highlight that deferring initiation of guideline-recommended medications to the U.S. outpatient setting carries a >75% chance they will not be initiated within the next year. Recognizing that 1 in 4 patients hospitalized for worsening heart failure die or are readmitted within 30 days, clinicians should embrace the in-hospital period as an optimal time to initiate sodium-glucose cotransporter-2 inhibitor therapy and treat this population with the urgency it deserves.


Subject(s)
Hospitalization, Patient Readmission, Sodium-Glucose Transporter 2 Inhibitors, Humans, Heart Failure, Hypoglycemic Agents/therapeutic use, Patient Discharge, Patient-Centered Care, Practice Guidelines as Topic, Randomized Controlled Trials as Topic, Risk, Sodium-Glucose Transporter 2 Inhibitors/therapeutic use, Stroke Volume, Ventricular Dysfunction, Left/drug therapy
6.
J Healthc Qual ; 43(6): 347-354, 2021.
Article in English | MEDLINE | ID: mdl-34734919

ABSTRACT

ABSTRACT: This retrospective, cross-sectional study of U.S. hospitals in Medicare's Inpatient Quality Reporting Program aimed to determine whether variation in Sepsis/Septic Shock (Bundle SEP-1) compliance is linked to hospital size and measures of safety and operational efficiency. Two thousand six hundred and fifty-three acute care hospitals in Medicare's Hospital Compare online database were included in the study. Relationships between SEP-1 bundle compliance, hospital size, and indices of operational excellence (including Patient Safety Index [PSI-90], average length of stay [ALOS] and readmission rate) were analyzed. SEP-1 compliance score was inversely associated with staffed bed number (r = -.14, p < .001), PSI-90 (r = -.01, p < .001), and ALOS (r = -.13, p < .001) in a multivariate analysis. Hospitals in the lowest versus highest quartile by bed number had SEP-1 compliance score of 49.8 ± 20.2% versus 46.9 ± 16.8%, p < .001. Hospitals in the lowest versus highest quartile for SEP-1 score had an ALOS of 5.0 ± 1.2 days versus 4.7 ± 1.1 days and PSI-90 rate of 1.03 ± 0.22 versus 0.98 ± 0.16, p < .001 for both. Although this does not establish a causal relationship, it supports the hypothesis that the ability of hospitals to successfully implement SEP-1 is associated with superior performance in key measures of operational excellence.
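The associations above are reported as Pearson correlation coefficients (r). As a minimal sketch of how such a coefficient is computed, the function below implements the standard formula; the bed-count and compliance-score values are invented toy data, not figures from the study.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation: covariance of x and y over the product
    of their standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data (hypothetical): larger hospitals, lower SEP-1 compliance score.
beds = [100, 200, 400, 800]
score = [55, 52, 50, 47]
print(round(pearson_r(beds, score), 2))
```

A negative value here mirrors the direction of the reported bed-number association, though the magnitudes in the study (r around -0.14) are far weaker than this contrived example.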


Subject(s)
Medicare, Sepsis, Aged, Cross-Sectional Studies, Hospitals, Humans, Length of Stay, Retrospective Studies, Sepsis/therapy, United States
8.
J Med Internet Res ; 22(11): e22421, 2020 11 19.
Article in English | MEDLINE | ID: mdl-33211015

ABSTRACT

BACKGROUND: Machine learning models have the potential to improve diagnostic accuracy and management of acute conditions. Despite growing efforts to evaluate and validate such models, little is known about how to best translate and implement these products as part of routine clinical care. OBJECTIVE: This study aims to explore the factors influencing the integration of a machine learning sepsis early warning system (Sepsis Watch) into clinical workflows. METHODS: We conducted semistructured interviews with 15 frontline emergency department physicians and rapid response team nurses who participated in the Sepsis Watch quality improvement initiative. Interviews were audio recorded and transcribed. We used a modified grounded theory approach to identify key themes and analyze qualitative data. RESULTS: A total of 3 dominant themes emerged: perceived utility and trust, implementation of Sepsis Watch processes, and workforce considerations. Participants described their unfamiliarity with machine learning models. As a result, clinician trust was influenced by the perceived accuracy and utility of the model from personal program experience. Implementation of Sepsis Watch was facilitated by the easy-to-use tablet application and communication strategies that were developed by nurses to share model outputs with physicians. Barriers included the flow of information among clinicians and gaps in knowledge about the model itself and broader workflow processes. CONCLUSIONS: This study generated insights into how frontline clinicians perceived machine learning models and the barriers to integrating them into clinical workflows. These findings can inform future efforts to implement machine learning interventions in real-world settings and maximize the adoption of these interventions.


Subject(s)
Machine Learning/standards, Workflow, Humans, Qualitative Research
9.
JAMIA Open ; 3(2): 252-260, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32734166

ABSTRACT

OBJECTIVE: To determine whether deep learning detects sepsis earlier and more accurately than other models, and to evaluate model performance using implementation-oriented metrics that simulate clinical practice. MATERIALS AND METHODS: We trained internally and temporally validated a deep learning model (multi-output Gaussian process and recurrent neural network [MGP-RNN]) to detect sepsis using encounters from adult hospitalized patients at a large tertiary academic center. Sepsis was defined as the presence of 2 or more systemic inflammatory response syndrome (SIRS) criteria, a blood culture order, and at least one element of end-organ failure. The training dataset included demographics, comorbidities, vital signs, medication administrations, and labs from October 1, 2014 to December 1, 2015, while the temporal validation dataset was from March 1, 2018 to August 31, 2018. Comparisons were made to 3 machine learning methods (random forest [RF], Cox regression [CR], and penalized logistic regression [PLR]) and 3 clinical scores used to detect sepsis (SIRS, quick Sequential Organ Failure Assessment [qSOFA], and National Early Warning Score [NEWS]). Traditional discrimination statistics such as the C-statistic as well as metrics aligned with operational implementation were assessed. RESULTS: The training set and internal validation included 42 979 encounters, while the temporal validation set included 39 786 encounters. The C-statistic for predicting sepsis within 4 h of onset was 0.88 for the MGP-RNN compared to 0.836 for RF, 0.849 for CR, 0.822 for PLR, 0.756 for SIRS, 0.619 for NEWS, and 0.481 for qSOFA. MGP-RNN detected sepsis a median of 5 h in advance. In temporal validation, the MGP-RNN continued to outperform all of the clinical risk score and machine learning comparisons. CONCLUSIONS: We developed and validated a novel deep learning model to detect sepsis. 
Using our data elements and feature set, our modeling approach outperformed other machine learning methods and clinical scores.
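The C-statistic reported above is the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case. A minimal pairwise implementation (equivalent to the area under the ROC curve, with ties counted as half) is sketched below on invented toy labels and scores.

```python
def c_statistic(labels, scores):
    """C-statistic (AUC): probability a randomly chosen positive is
    scored above a randomly chosen negative; ties count as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 2 septic (1) and 3 non-septic (0) encounters.
labels = [1, 1, 0, 0, 0]
scores = [0.9, 0.4, 0.5, 0.3, 0.1]
print(c_statistic(labels, scores))  # 5 of 6 pairs ranked correctly -> 0.833...
```

On this scale, the study's values of 0.88 (MGP-RNN) versus 0.481 (qSOFA) mean the deep learning model ranks septic above non-septic encounters far more reliably than chance, while qSOFA performs near or below chance on this task.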

10.
JMIR Med Inform ; 8(7): e15182, 2020 Jul 15.
Article in English | MEDLINE | ID: mdl-32673244

ABSTRACT

BACKGROUND: Successful integrations of machine learning into routine clinical care are exceedingly rare, and barriers to its adoption are poorly characterized in the literature. OBJECTIVE: This study aims to report a quality improvement effort to integrate a deep learning sepsis detection and management platform, Sepsis Watch, into routine clinical care. METHODS: In 2016, a multidisciplinary team consisting of statisticians, data scientists, data engineers, and clinicians was assembled by the leadership of an academic health system to radically improve the detection and treatment of sepsis. This report of the quality improvement effort follows the learning health system framework to describe the problem assessment, design, development, implementation, and evaluation plan of Sepsis Watch. RESULTS: Sepsis Watch was successfully integrated into routine clinical care and reshaped how local machine learning projects are executed. Frontline clinical staff were highly engaged in the design and development of the workflow, machine learning model, and application. Novel machine learning methods were developed to detect sepsis early, and implementation of the model required robust infrastructure. Significant investment was required to align stakeholders, develop trusting relationships, define roles and responsibilities, and to train frontline staff, leading to the establishment of 3 partnerships with internal and external research groups to evaluate Sepsis Watch. CONCLUSIONS: Machine learning models are commonly developed to enhance clinical decision making, but successful integrations of machine learning into routine clinical care are rare. Although there is no playbook for integrating deep learning into clinical care, learnings from the Sepsis Watch integration can inform efforts to develop machine learning technologies at other health care delivery systems.

11.
MDM Policy Pract ; 5(1): 2381468319899663, 2020.
Article in English | MEDLINE | ID: mdl-31976373

ABSTRACT

Background. Identification of patients at risk of deteriorating during their hospitalization is an important concern. However, many off-the-shelf scores have poor in-center performance. In this article, we report our experience developing, implementing, and evaluating an in-hospital score for deterioration. Methods. We abstracted 3 years of data (2014-2016) and identified patients on medical wards who died or were transferred to the intensive care unit. We developed a time-varying risk model and then implemented the model over a 10-week period to assess prospective predictive performance. We compared performance to our currently used tool, National Early Warning Score. In order to aid clinical decision making, we transformed the quantitative score into a three-level clinical decision support tool. Results. The developed risk score had an average area under the curve of 0.814 (95% confidence interval = 0.79-0.83) versus 0.740 (95% confidence interval = 0.72-0.76) for the National Early Warning Score. We found the proposed score was able to respond to acute clinical changes in patients' clinical status. Upon implementing the score, we were able to achieve the desired positive predictive value but needed to retune the thresholds to get the desired sensitivity. Discussion. This work illustrates the potential for academic medical centers to build, refine, and implement risk models that are targeted to their patient population and work flow.
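The transformation of a continuous risk score into a three-level decision support tool can be sketched as a simple threshold mapping. The cutoffs and suggested actions below are invented for illustration; the abstract notes that such thresholds had to be retuned after deployment to recover the desired sensitivity, which in this sketch would mean adjusting the `low`/`high` parameters against local data.

```python
# Hedged sketch: mapping a continuous deterioration risk to a
# three-level alert. Cutoffs (0.1, 0.3) and actions are hypothetical.

def alert_level(risk: float, low: float = 0.1, high: float = 0.3) -> str:
    if risk >= high:
        return "high"    # e.g., prompt bedside evaluation
    if risk >= low:
        return "medium"  # e.g., increase monitoring frequency
    return "low"         # routine care

print([alert_level(r) for r in (0.05, 0.2, 0.5)])  # ['low', 'medium', 'high']
```

Keeping the thresholds as explicit parameters is what makes the post-implementation retuning the authors describe a configuration change rather than a model change.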

12.
JAMA Netw Open ; 2(2): e187571, 2019 02 01.
Article in English | MEDLINE | ID: mdl-30768188

ABSTRACT

Importance: Sepsis is present in many hospitalizations that culminate in death. The contribution of sepsis to these deaths, and the extent to which they are preventable, is unknown. Objective: To estimate the prevalence, underlying causes, and preventability of sepsis-associated mortality in acute care hospitals. Design, Setting, and Participants: Cohort study in which a retrospective medical record review was conducted of 568 randomly selected adults admitted to 6 US academic and community hospitals from January 1, 2014, to December 31, 2015, who died in the hospital or were discharged to hospice and not readmitted. Medical records were reviewed from January 1, 2017, to March 31, 2018. Main Outcomes and Measures: Clinicians reviewed cases for sepsis during hospitalization using Sepsis-3 criteria, hospice-qualifying criteria on admission, immediate and underlying causes of death, and suboptimal sepsis-related care such as inappropriate or delayed antibiotics, inadequate source control, or other medical errors. The preventability of each sepsis-associated death was rated on a 6-point Likert scale. Results: The study cohort included 568 patients (289 [50.9%] men; mean [SD] age, 70.5 [16.1] years) who died in the hospital or were discharged to hospice. Sepsis was present in 300 hospitalizations (52.8%; 95% CI, 48.6%-57.0%) and was the immediate cause of death in 198 cases (34.9%; 95% CI, 30.9%-38.9%). The next most common immediate causes of death were progressive cancer (92 [16.2%]) and heart failure (39 [6.9%]). The most common underlying causes of death in patients with sepsis were solid cancer (63 of 300 [21.0%]), chronic heart disease (46 of 300 [15.3%]), hematologic cancer (31 of 300 [10.3%]), dementia (29 of 300 [9.7%]), and chronic lung disease (27 of 300 [9.0%]). Hospice-qualifying conditions were present on admission in 121 of 300 sepsis-associated deaths (40.3%; 95% CI 34.7%-46.1%), most commonly end-stage cancer. 
Suboptimal care, most commonly delays in antibiotics, was identified in 68 of 300 sepsis-associated deaths (22.7%). However, only 11 sepsis-associated deaths (3.7%) were judged definitely or moderately likely preventable; another 25 sepsis-associated deaths (8.3%) were considered possibly preventable. Conclusions and Relevance: In this cohort from 6 US hospitals, sepsis was the most common immediate cause of death. However, most underlying causes of death were related to severe chronic comorbidities and most sepsis-associated deaths were unlikely to be preventable through better hospital-based care. Further innovations in the prevention and care of underlying conditions may be necessary before a major reduction in sepsis-associated deaths can be achieved.


Subject(s)
Sepsis, Aged, Aged, 80 and over, Female, Hospitalization, Humans, Male, Middle Aged, Prevalence, Retrospective Studies, Sepsis/epidemiology, Sepsis/etiology, Sepsis/mortality, Sepsis/prevention & control, United States/epidemiology
13.
Crit Care Med ; 47(1): 49-55, 2019 01.
Article in English | MEDLINE | ID: mdl-30247239

ABSTRACT

OBJECTIVES: Previous studies have looked at National Early Warning Score performance in predicting in-hospital deterioration and death, but data are lacking with respect to patient outcomes following implementation of National Early Warning Score. We sought to determine the effectiveness of National Early Warning Score implementation on predicting and preventing patient deterioration in a clinical setting. DESIGN: Retrospective cohort study. SETTING: Tertiary care academic facility and a community hospital. PATIENTS: Patients 18 years old or older hospitalized from March 1, 2014, to February 28, 2015 (before National Early Warning Score implementation) and from August 1, 2015, to July 31, 2016 (after National Early Warning Score was implemented). INTERVENTIONS: Implementation of National Early Warning Score within the electronic health record and associated best practice alert. MEASUREMENTS AND MAIN RESULTS: In this study of 85,322 patients (42,402 patients pre-National Early Warning Score and 42,920 patients post-National Early Warning Score implementation), the primary outcome of rate of ICU transfer or death did not change after National Early Warning Score implementation, with adjusted hazard ratio of 0.94 (0.84-1.05) and 0.90 (0.77-1.05) at our academic and community hospital, respectively. In total, 175,357 best practice advisories fired during the study period, with the best practice advisory performing better at the community hospital than at the academic hospital, predicting an event within 12 hours 7.4% versus 2.2% of the time, respectively. Retraining National Early Warning Score with newly generated hospital-specific coefficients improved model performance. CONCLUSIONS: At both our academic and community hospital, National Early Warning Score had poor performance characteristics and was generally ignored by frontline nursing staff. As a result, National Early Warning Score implementation had no appreciable impact on defined clinical outcomes. 
Refitting of the model using site-specific data improved performance and supports validating predictive models on local data.
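The site-specific refitting described above can be sketched as refitting a logistic model P(event) = sigmoid(a + b·score) on local outcomes. The toy data and the plain gradient-descent fit below are illustrative assumptions, not the study's method or data; in practice a standard statistics library and a richer feature set would be used.

```python
import math

# Hedged sketch of site-specific recalibration: fit intercept a and
# slope b of P(event) = sigmoid(a + b * NEWS) on local data.
# Scores/events below are invented; real refitting uses local outcomes.

def sigmoid(z: float) -> float:
    return 1 / (1 + math.exp(-z))

def refit(scores, events, lr=0.05, steps=5000):
    """Gradient descent on the logistic log-loss."""
    a, b = 0.0, 0.0
    n = len(scores)
    for _ in range(steps):
        ga = gb = 0.0
        for s, y in zip(scores, events):
            err = sigmoid(a + b * s) - y  # predicted minus observed
            ga += err / n
            gb += err * s / n
        a -= lr * ga
        b -= lr * gb
    return a, b

scores = [0, 1, 2, 3, 5, 7, 8, 10]   # hypothetical local NEWS values
events = [0, 0, 0, 0, 1, 1, 1, 1]    # hypothetical deterioration outcomes
a, b = refit(scores, events)
assert b > 0  # higher score maps to higher recalibrated risk
```

The point of the sketch is that the score's published weights are replaced by coefficients estimated from the deploying hospital's own data, which is what "newly generated hospital-specific coefficients" refers to.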


Subject(s)
Clinical Alarms, Clinical Deterioration, Patient Acuity, Academic Medical Centers, Adult, Aged, Attitude of Health Personnel, Cohort Studies, Early Diagnosis, Female, Hospital Mortality, Hospitals, Community, Humans, Intensive Care Units, Male, Middle Aged, North Carolina, Nursing Staff, Hospital, Patient Transfer/statistics & numerical data, Retrospective Studies
14.
Crit Care Med ; 46(10): 1585-1591, 2018 10.
Article in English | MEDLINE | ID: mdl-30015667

ABSTRACT

OBJECTIVES: Many septic patients receive care that fails the Centers for Medicare and Medicaid Services' SEP-1 measure, but it is unclear whether this reflects meaningful lapses in care, differences in clinical characteristics, or excessive rigidity of the "all-or-nothing" measure. We compared outcomes in cases that passed versus failed SEP-1 during the first 2 years after the measure was implemented. DESIGN: Retrospective cohort study. SETTING: Seven U.S. hospitals. PATIENTS: Adult patients included in SEP-1 reporting between October 2015 and September 2017. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Of 851 sepsis cases in the cohort, 281 (33%) passed SEP-1 and 570 (67%) failed. SEP-1 failures had higher rates of septic shock (20% vs 9%; p < 0.001), hospital-onset sepsis (11% vs 4%; p = 0.001), and vague presenting symptoms (46% vs 30%; p < 0.001). The most common reasons for failure were omission of 3- and 6-hour lactate measurements (228/570 failures, 40%). Only 86 of 570 failures (15.1%) had greater than 3-hour delays until broad-spectrum antibiotics. Cases that failed SEP-1 had higher in-hospital mortality rates (18.4% vs 11.0%; odds ratio, 1.82; 95% CI, 1.19-2.80; p = 0.006), but this association was no longer significant after adjusting for differences in clinical characteristics and severity of illness (adjusted odds ratio, 1.36; 95% CI, 0.85-2.18; p = 0.205). Delays of greater than 3 hours until antibiotics were significantly associated with death (adjusted odds ratio, 1.94; 95% CI, 1.04-3.62; p = 0.038), whereas failing SEP-1 for any other reason was not (adjusted odds ratio, 1.10; 95% CI, 0.70-1.72; p = 0.674). CONCLUSIONS: Crude mortality rates were higher in sepsis cases that failed versus passed SEP-1, but there was no difference after adjusting for clinical characteristics and severity of illness. Delays in antibiotic administration were associated with higher mortality but only accounted for a small fraction of SEP-1 failures. 
SEP-1 may not clearly differentiate between high- and low-quality care, and detailed risk adjustment is necessary to properly interpret associations between SEP-1 compliance and mortality.


Subject(s)
Hospital Mortality/trends, Quality Indicators, Health Care, Sepsis/mortality, Sepsis/therapy, Time-to-Treatment/statistics & numerical data, Adult, Aged, Anti-Bacterial Agents/therapeutic use, Cohort Studies, Disease Management, Emergency Service, Hospital/organization & administration, Female, Humans, Male, Outcome Assessment, Health Care, Retrospective Studies, Risk Factors, United States
16.
Genet Med ; 9(12): 826-35, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18091432

ABSTRACT

PURPOSE: Cytochrome P450 (CYP450) enzymes metabolize selective serotonin reuptake inhibitor (SSRI) drugs used in treatment of depression. Variants in these genes may impact treatment efficacy and tolerability. The purpose of this study was 2-fold: to systematically review the literature for evidence supporting CYP450 genotyping to guide SSRI treatment for major depression, and, where evidence is inadequate, to suggest future research. METHODS: We searched MEDLINE(R) and other databases for studies addressing five key questions suggested by the Evaluation of Genomic Applications in Practice and Prevention Working Group. Eligibility criteria were defined, and studies were reviewed independently by paired researchers. A conceptual model was developed to guide future research. RESULTS: Review of 1200 abstracts led to the final inclusion of 37 articles. The evidence indicates relatively high analytic sensitivity and specificity of tests detecting a subset of polymorphisms of CYP2D6, 2C19, 2C8, 2C9, and 1A1. We found marginal evidence regarding a clinical association between CYP450 variants and SSRI metabolism, efficacy, and tolerability in the treatment of depression. CONCLUSIONS: Current evidence does not support the use of CYP450 genotyping to guide SSRI treatment of patients with depression. Studies are proposed that will effectively guide decision-making in the area of CYP450 testing in depression, and genetic testing more generally.


Subject(s)
Cytochrome P-450 Enzyme System/genetics, Depressive Disorder/drug therapy, Genetic Testing, Polymorphism, Genetic, Selective Serotonin Reuptake Inhibitors/therapeutic use, Depressive Disorder/genetics, Evidence-Based Medicine, Genetic Variation, Genotype, Humans, MEDLINE, Reproducibility of Results
17.
JAMA ; 293(6): 699-706, 2005 Feb 09.
Article in English | MEDLINE | ID: mdl-15701911

ABSTRACT

CONTEXT: Recent trials have found that ximelagatran and warfarin are equally effective in stroke prevention for patients with atrial fibrillation. Because ximelagatran can be taken in a fixed, oral dose without international normalized ratio monitoring and may have a lower risk of hemorrhage, it might improve quality-adjusted survival compared with dose-adjusted warfarin. OBJECTIVE: To compare quality-adjusted survival and cost among 3 alternative therapies for patients with chronic atrial fibrillation: ximelagatran, warfarin, and aspirin. DESIGN: Semi-Markov decision model. PATIENTS: Hypothetical cohort of 70-year-old patients with chronic atrial fibrillation, varying risk of stroke, and no contraindications to anticoagulation therapy. MAIN OUTCOME MEASURES: Quality-adjusted life-years (QALYs) and costs in US dollars. RESULTS: For patients with atrial fibrillation but no additional risk factors for stroke, both ximelagatran and warfarin cost more than 50,000 dollars per QALY compared with aspirin. For patients with additional stroke risk factors and low hemorrhage risk, ximelagatran modestly increased quality-adjusted survival (0.12 QALY) at a substantial cost (116,000 dollars per QALY) compared with warfarin. For ximelagatran to cost less than 50,000 dollars per QALY it would have to cost less than 1100 dollars per year or be prescribed to patients who have an elevated risk of intracranial hemorrhage (>1.0% per year of warfarin) or a low quality of life with warfarin therapy. CONCLUSION: Assuming equal effectiveness in stroke prevention and decreased hemorrhage risk, ximelagatran is not likely to be cost-effective in patients with atrial fibrillation unless they have a high risk of intracranial hemorrhage or a low quality of life with warfarin.
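The "dollars per QALY" figures above are incremental cost-effectiveness ratios (ICERs): the extra cost of one strategy over another divided by the extra quality-adjusted life-years gained. The sketch below reproduces the ximelagatran-versus-warfarin comparison; the ~$13,920 incremental cost is back-calculated from the stated 0.12 QALY gain and $116,000/QALY ratio, an inference for illustration rather than a figure given in the abstract.

```python
# Incremental cost-effectiveness ratio (ICER) arithmetic behind
# "dollars per QALY" comparisons in the decision model.

def icer(delta_cost: float, delta_qaly: float) -> float:
    """Incremental cost per quality-adjusted life-year gained."""
    return delta_cost / delta_qaly

# 0.12 QALY gained at an inferred ~$13,920 incremental cost:
print(round(icer(13_920, 0.12)))  # 116000 dollars per QALY
```

Against a conventional willingness-to-pay threshold of $50,000 per QALY, an ICER of $116,000 per QALY is why the model concludes ximelagatran is unlikely to be cost-effective for most patients.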


Subject(s)
Anticoagulants/economics, Anticoagulants/therapeutic use, Atrial Fibrillation/drug therapy, Azetidines/economics, Azetidines/therapeutic use, Prodrugs/economics, Prodrugs/therapeutic use, Stroke/prevention & control, Aged, Aged, 80 and over, Aspirin/economics, Aspirin/therapeutic use, Atrial Fibrillation/complications, Atrial Fibrillation/economics, Benzylamines, Chronic Disease, Cost-Benefit Analysis, Decision Support Techniques, Humans, Quality-Adjusted Life Years, Stroke/etiology, United States, Warfarin/economics, Warfarin/therapeutic use
18.
J Comp Neurol ; 456(4): 375-83, 2003 Feb 17.
Article in English | MEDLINE | ID: mdl-12532409

ABSTRACT

Neuritic plaques are one of the stereotypical hallmarks of Alzheimer's disease (AD) pathology. These structures are composed of extracellular accumulations of fibrillar forms of the amyloid-beta peptide (Abeta), a variety of other plaque-associated proteins, activated glial cells, and degenerating nerve processes. To study the neuritic toxicity of different structural forms of Abeta in the context of regional connectivity and the entire cell, we crossed PDAPP transgenic (Tg) mice, a model with AD-like pathology, to Tg mice that stably express yellow fluorescent protein (YFP) in a subset of neurons in the brain. In PDAPP; YFP double Tg mice, markedly enlarged YFP-labeled axonal and dendritic varicosities were associated with fibrillar Abeta deposits. These varicosities were absent in areas where there were nonfibrillar Abeta deposits. Interestingly, YFP-labeled varicosities revealed changes that corresponded with changes seen with electron microscopy and the de Olmos silver staining technique. Other silver staining methods and immunohistochemical localization of phosphorylated neurofilaments or phosphorylated tau were unable to detect the majority of these dystrophic neurites. Some but not all synaptic vesicle markers accumulated abnormally in YFP-labeled varicosities associated with neuritic plaques. In addition to the characterization of the effects of Abeta on axonal and dendritic structure, YFP-labeled neurons in Tg mice should prove to be a valuable tool to interpret the localization patterns of other markers and for future studies examining the dynamics of axons and dendrites in a variety of disease conditions in living tissue both in vitro and in vivo.


Subject(s)
Alzheimer Disease/pathology, Amyloid beta-Peptides/metabolism, Amyloid beta-Protein Precursor/genetics, Bacterial Proteins/genetics, Brain/pathology, Luminescent Proteins/genetics, Neurons/pathology, Alzheimer Disease/metabolism, Animals, Brain/metabolism, Dendrites/pathology, Disease Models, Animal, Immunohistochemistry, Mice, Mice, Transgenic, Microscopy, Electron, Neurites/pathology, Neurons/metabolism, Presynaptic Terminals/pathology, Silver Staining, Synaptic Vesicles/pathology