1.
J Addict Med ; 16(3): 310-316, 2022.
Article in English | MEDLINE | ID: mdl-34282084

ABSTRACT

OBJECTIVES: Distance and travel costs to opioid treatment programs (OTPs), especially in rural communities, are barriers to treatment for opioid use disorder. Twelve-month retention rates at our OTP are 55% (range 53%-61%). We piloted a novel treatment platform that uses a video directly observed therapy (VDOT) smartphone app and a secure medication dispenser to support adherence with take-home doses of methadone or buprenorphine, while enabling patients to maintain prosocial activities, reducing the time and cost of travel, and increasing retention.

METHODS: Participants (n = 58) were adults in a Vermont OTP. Inclusion criteria included travel hardship, access to Wi-Fi or a cellular network, and ownership of an iPhone 4S or Android 4.0 or greater. Patients received a dispenser, the VDOT app, clinic-dispensed medication, counseling, and urine drug testing. Chart reviews assessed VDOT compliance, engagement in prosocial activities, travel cost and time savings, and treatment disposition/retention. Project-associated costs were examined.

RESULTS: Of the 15,831 expected videos, 15,581 (98.4%) were received; only 10 (0.063%) showed signs of medication noncompliance, with 1 (0.0064%) showing an overt attempt at diversion. About 93% of participants engaged in prosocial activities; travel time and costs were reduced by 86% (median $72 and 5.5 hours saved weekly); and 98% of participants were still in treatment 12 months later.

CONCLUSIONS: VDOT participants using dispensers showed high levels of medication ingestion integrity, favorable clinical stability, and lower travel time and costs. These findings suggest that VDOT with dispensers may hold promise as an innovative platform for supporting medication adherence.
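The reported adherence rates can be re-derived from the raw counts in the abstract. A minimal sketch; the choice of denominator for each rate is our assumption, since the abstract does not state which total each percentage is computed against:

```python
# Re-deriving the adherence rates reported in the abstract from its raw
# counts. Denominators are assumptions: noncompliance appears to be rated
# against expected videos, diversion against received videos.
expected_videos = 15_831
received_videos = 15_581
noncompliant_videos = 10
diversion_attempts = 1

received_rate = received_videos / expected_videos           # ~98.4%
noncompliance_rate = noncompliant_videos / expected_videos  # ~0.063%
diversion_rate = diversion_attempts / received_videos       # ~0.0064%
```

Under these assumed denominators, the derived rates agree with the published figures to the precision reported.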


Subjects
Buprenorphine , Opioid-Related Disorders , Adult , Analgesics, Opioid/therapeutic use , Buprenorphine/therapeutic use , Directly Observed Therapy , Humans , Medication Adherence , Opioid-Related Disorders/drug therapy
2.
Front Digit Health ; 4: 943768, 2022.
Article in English | MEDLINE | ID: mdl-36339512

ABSTRACT

Multiple reporting guidelines for artificial intelligence (AI) models in healthcare recommend that models be audited for reliability and fairness. However, there is a gap in operational guidance for performing such audits in practice. Following guideline recommendations, we conducted a reliability audit of two models based on model performance and calibration, as well as a fairness audit based on summary statistics, subgroup performance, and subgroup calibration. We assessed the Epic End-of-Life (EOL) Index model and an internally developed Stanford Hospital Medicine (HM) Advance Care Planning (ACP) model in three practice settings (Primary Care, Inpatient Oncology, and Hospital Medicine), using clinicians' answers to the surprise question ("Would you be surprised if [patient X] passed away in [Y years]?") as a surrogate outcome.

For performance, the models had positive predictive value (PPV) at or above 0.76 in all settings. In Hospital Medicine and Inpatient Oncology respectively, the Stanford HM ACP model had higher sensitivity (0.69, 0.89) than the EOL model (0.20, 0.27) and better calibration (O/E 1.5, 1.7 vs. O/E 2.5, 3.0). The Epic EOL model flagged fewer patients (11%, 21% respectively) than the Stanford HM ACP model (38%, 75%). There were no differences in performance or calibration by sex. Both models had lower sensitivity in Hispanic/Latino male patients with Race listed as "Other."

Ten clinicians were surveyed after a presentation summarizing the audit. All 10 reported that summary statistics, overall performance, and subgroup performance would affect their decision to use the model to guide care; 9 of 10 said the same for overall and subgroup calibration. The most commonly identified barriers to routinely conducting such reliability and fairness audits were poor demographic data quality and lack of data access. This audit required 115 person-hours across 8-10 months.
Our recommendations for performing reliability and fairness audits include verifying data validity, analyzing model performance on intersectional subgroups, and collecting clinician-patient linkages as necessary for label generation by clinicians. Those responsible for AI models should require such audits before model deployment and mediate between model auditors and impacted stakeholders.
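The three metrics this audit reports (PPV, sensitivity, and O/E calibration) can be sketched as simple functions over confusion-matrix counts and predicted risks. A hypothetical illustration; the counts below are invented for demonstration and are not the study's data:

```python
# Hedged sketch of the audit metrics named in the abstract: positive
# predictive value (PPV), sensitivity, and the observed/expected (O/E)
# calibration ratio. All numbers here are invented for illustration.

def ppv(tp: int, fp: int) -> float:
    """Of the patients the model flagged, the fraction who met the outcome."""
    return tp / (tp + fp)

def sensitivity(tp: int, fn: int) -> float:
    """Of the patients who met the outcome, the fraction the model flagged."""
    return tp / (tp + fn)

def oe_ratio(observed_events: int, predicted_risks: list[float]) -> float:
    """Observed event count over the sum of predicted risks; values above 1
    mean the model under-predicts risk in this population."""
    return observed_events / sum(predicted_risks)

# Illustrative subgroup audit with invented confusion-matrix counts.
tp, fp, fn = 76, 24, 34
print(f"PPV = {ppv(tp, fp):.2f}")                  # 0.76
print(f"Sensitivity = {sensitivity(tp, fn):.2f}")  # 0.69
print(f"O/E = {oe_ratio(15, [0.5] * 20):.1f}")     # 1.5
```

Running these per demographic subgroup (as the audit does for sex and race/ethnicity) is what surfaces disparities such as the lower sensitivity reported for one intersectional subgroup above.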
