ABSTRACT
GOAL: Occurrences of physician burnout have reached epidemic levels, and the electronic health record (EHR) is a commonly cited cause of the distress. To enhance current understanding of the relationship between burnout and the EHR, we explored the connections between physicians' distress and the EHR. METHODS: In this qualitative study, physicians and graduate medical trainees from two healthcare organizations in California were interviewed about EHR-related distressing events and the impact on their emotions and actions. We analyzed physician responses to identify themes regarding the negative impact of the EHR on physician experience and actions. EHR "distressing events" were categorized using the Accreditation Council for Graduate Medical Education (ACGME) Physician Professional Competencies. PRINCIPAL FINDINGS: Every participating physician reported EHR-related distress affecting professional activities. Five main themes emerged from our analysis: system blocks to patient care; poor implementation, design, and functionality of the EHR; billing priorities conflicting with ideal workflow and best-practice care; lack of efficiency; and poor teamwork function. When mapped to the ACGME competencies, physician distress frequently stemmed from situations where physicians prioritized systems-based practice above other desired professional actions and behaviors. Physicians also reported a climate of silence in which physicians would not share problems due to fear of retribution or lack of confidence that the problems would be addressed. PRACTICAL APPLICATIONS: Physicians and administrators need to address the hierarchy of values that prioritizes system requirements, such as those imposed by the EHR, above physicians' other desired professional actions and behaviors. Balancing the importance of competing competencies may help to address rising burnout. We also recommend that administrators consider anonymous qualitative interviews as an effective method to uncover and understand physician distress in light of physicians' reported climate of silence.
Subjects
Burnout, Professional; Group Practice; Physicians; Burnout, Professional/prevention & control; Electronic Health Records; Humans; Physicians/psychology; Qualitative Research
ABSTRACT
BACKGROUND: Clinical outcomes in ST-segment elevation myocardial infarction (STEMI) are related to reperfusion times. Given the benefit of early recognition of STEMI and the resulting ability to decrease reperfusion times and improve mortality, current prehospital recommendations are to obtain electrocardiograms (ECGs) in patients with concern for acute coronary syndrome. OBJECTIVES: We sought to determine the effect of wireless transmission of prehospital ECGs on STEMI recognition and reperfusion times. We hypothesized decreased reperfusion times in patients in whom prehospital ECGs were obtained. METHODS: We conducted a retrospective, observational study of patients who presented to our suburban, tertiary care, teaching hospital emergency department with STEMI on a prehospital ECG. RESULTS: Ninety-nine patients underwent reperfusion therapy. Patients with prehospital ECGs had a mean time to angioplasty suite of 43 min (95% confidence interval [CI] 31-54), compared with 49 min (95% CI 41-57) in patients without a prehospital ECG (p = 0.035). Patients with prehospital STEMI identification and catheterization laboratory activation had a mean time to angioplasty suite of 33 min (95% CI 25-41, p = 0.007). Patients with prehospital ECGs had a mean door-to-balloon time of 66 min (95% CI 53-79), whereas the control group had a mean door-to-balloon time of 79 min (95% CI 67-90) (p = 0.024). Patients with prehospital STEMI identification and catheterization laboratory activation had a mean door-to-balloon time of 58 min (95% CI 48-68, p = 0.018). CONCLUSIONS: Prehospital STEMI identification allows for prompt catheterization laboratory activation, leading to decreased reperfusion times.
Subjects
Electrocardiography; Emergency Medical Services/methods; Myocardial Infarction/diagnosis; Myocardial Reperfusion Injury/diagnosis; Time-to-Treatment/statistics & numerical data; Adult; Angioplasty, Balloon, Coronary/statistics & numerical data; Cardiac Catheterization/statistics & numerical data; Case-Control Studies; Emergency Service, Hospital/organization & administration; Female; Humans; Male; Myocardial Infarction/therapy; Retrospective Studies
ABSTRACT
BACKGROUND: Atrial fibrillation (AF), a common cause of stroke, often is asymptomatic. Smartphones and smartwatches can detect AF using heart rate patterns inferred using photoplethysmography (PPG); however, enhanced accuracy is required to reduce false positives in screening populations. OBJECTIVE: The purpose of this study was to test the hypothesis that a deep learning algorithm given raw, smartwatch-derived PPG waveforms would discriminate AF from normal sinus rhythm better than algorithms using heart rate alone. METHODS: Patients presenting for cardioversion of AF (n = 51) were given wrist-worn fitness trackers containing PPG sensors (Jawbone Health). Standard 12-lead electrocardiograms over-read by board-certified cardiac electrophysiologists were used as the reference standard. The accuracy of PPG signals to discriminate AF from sinus rhythm was evaluated by conventional measures of heart rate variability, a long short-term memory (LSTM) neural network given heart rate data only, and a deep convolutional-recurrent neural net (DNN) given the raw PPG data. RESULTS: From among 51 patients with persistent AF (age 63.6 ± 11.3 years; 78% male; 88% white), we randomly assigned 40 to train and 11 to test the algorithms. Whereas logistic regression analysis of heart rate variability yielded an area under the receiver operating characteristic curve (AUC) of 0.717 (sensitivity 0.741; specificity 0.584), the LSTM model given heart rate data exhibited AUC of 0.954 (sensitivity 0.810; specificity 0.921), and the DNN model given raw PPG data yielded the highest AUC of 0.983 (sensitivity 0.985; specificity 0.880). CONCLUSION: A deep learning model given the raw PPG-based signal resulted in AF detection with high accuracy, performing better than conventional analyses relying on heart rate series data alone.
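The heart-rate-only baselines that the study's deep models outperform rest on variability statistics computed from beat-to-beat (RR) intervals. A minimal sketch of one such conventional measure, RMSSD, using synthetic RR series rather than study data (the threshold-free comparison below simply illustrates why AF's "irregularly irregular" rhythm is separable from sinus rhythm by rate alone):

```python
import math

def rmssd(rr_intervals):
    """Root mean square of successive differences between RR intervals,
    a standard heart-rate-variability measure: AF tends to produce
    erratic RR intervals and therefore a high RMSSD."""
    diffs = [b - a for a, b in zip(rr_intervals, rr_intervals[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative (synthetic) RR-interval series in seconds.
sinus = [0.80, 0.81, 0.79, 0.80, 0.82, 0.80]   # regular rhythm
af    = [0.62, 0.95, 0.70, 1.10, 0.55, 0.88]   # irregularly irregular

print(rmssd(sinus) < rmssd(af))  # → True: the AF series is far more variable
```

A rate-only feature like this discards waveform morphology, which is one reason a model given the raw PPG signal can outperform it.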
ABSTRACT
OBJECTIVES: We validate a machine learning-based sepsis-prediction algorithm (InSight) for the detection and prediction of three sepsis-related gold standards, using only six vital signs. We evaluate robustness to missing data, customisation to site-specific data using transfer learning and generalisability to new settings. DESIGN: A machine-learning algorithm with gradient tree boosting. Features for prediction were created from combinations of six vital sign measurements and their changes over time. SETTING: A mixed-ward retrospective dataset from the University of California, San Francisco (UCSF) Medical Center (San Francisco, California, USA) as the primary source, an intensive care unit dataset from the Beth Israel Deaconess Medical Center (Boston, Massachusetts, USA) as a transfer-learning source and four additional institutions' datasets to evaluate generalisability. PARTICIPANTS: 684 443 total encounters, with 90 353 encounters from June 2011 to March 2016 at UCSF. INTERVENTIONS: None. PRIMARY AND SECONDARY OUTCOME MEASURES: Area under the receiver operating characteristic (AUROC) curve for detection and prediction of sepsis, severe sepsis and septic shock. RESULTS: For detection of sepsis and severe sepsis, InSight achieves an AUROC curve of 0.92 (95% CI 0.90 to 0.93) and 0.87 (95% CI 0.86 to 0.88), respectively. Four hours before onset, InSight predicts septic shock with an AUROC of 0.96 (95% CI 0.94 to 0.98) and severe sepsis with an AUROC of 0.85 (95% CI 0.79 to 0.91). CONCLUSIONS: InSight outperforms existing sepsis scoring systems in identifying and predicting sepsis, severe sepsis and septic shock. This is the first sepsis screening system to exceed an AUROC of 0.90 using only vital sign inputs. InSight is robust to missing data, can be customised to novel hospital data using a small fraction of site data and retains strong discrimination across all institutions.
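The abstract describes features built "from combinations of six vital sign measurements and their changes over time." A minimal sketch of that construction, where the vital-sign names and the hour-over-hour delta scheme are illustrative assumptions, not InSight's actual feature set:

```python
def vitals_features(current, previous):
    """Build a simple feature dict from six vital signs: the current
    values plus their changes since the previous observation.
    Names and delta construction are illustrative assumptions."""
    names = ["heart_rate", "resp_rate", "temperature",
             "systolic_bp", "diastolic_bp", "spo2"]
    features = {}
    for name in names:
        features[name] = current[name]
        features[name + "_delta"] = current[name] - previous[name]
    return features

prev = {"heart_rate": 88, "resp_rate": 18, "temperature": 37.1,
        "systolic_bp": 118, "diastolic_bp": 76, "spo2": 97}
curr = {"heart_rate": 112, "resp_rate": 24, "temperature": 38.4,
        "systolic_bp": 95, "diastolic_bp": 60, "spo2": 92}

feats = vitals_features(curr, prev)
print(len(feats))                 # → 12 (six values plus six deltas)
print(feats["heart_rate_delta"])  # → 24
```

Feature vectors of this shape would then be fed to the gradient-boosted tree ensemble.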
Subjects
Algorithms; Machine Learning; Sepsis/diagnosis; Shock, Septic/diagnosis; Vital Signs; Adolescent; Adult; Aged; Area Under Curve; Boston; Databases, Factual; Emergency Service, Hospital/organization & administration; Female; Hospital Mortality; Humans; Intensive Care Units/organization & administration; Length of Stay; Male; Middle Aged; Patients' Rooms/organization & administration; Prognosis; ROC Curve; Retrospective Studies; San Francisco; Sepsis/mortality; Severity of Illness Index; Shock, Septic/mortality; Young Adult
ABSTRACT
Algorithm-based clinical decision support (CDS) systems associate patient-derived health data with outcomes of interest, such as in-hospital mortality. However, the quality of such associations often depends on the availability of site-specific training data. Without sufficient quantities of data, the underlying statistical apparatus cannot differentiate useful patterns from noise and, as a result, may underperform. This initial training data burden limits the widespread, out-of-the-box, use of machine learning-based risk scoring systems. In this study, we implement a statistical transfer learning technique, which uses a large "source" data set to drastically reduce the amount of data needed to perform well on a "target" site for which training data are scarce. We test this transfer technique with AutoTriage, a mortality prediction algorithm, on patient charts from the Beth Israel Deaconess Medical Center (the source) and a population of 48 249 adult inpatients from University of California San Francisco Medical Center (the target institution). We find that the amount of training data required to surpass 0.80 area under the receiver operating characteristic (AUROC) on the target set decreases from more than 4000 patients to fewer than 220. This performance is superior to the Modified Early Warning Score (AUROC: 0.76) and corresponds to a decrease in clinical data collection time from approximately 6 months to less than 10 days. Our results highlight the usefulness of transfer learning in the specialization of CDS systems to new hospital sites, without requiring expensive and time-consuming data collection efforts.
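The core idea, fitting on a large source site and warm-starting the target-site model from those parameters so that far less target data is needed, can be sketched with a toy logistic-regression model trained by gradient descent. Everything here (the single synthetic feature, learning rate, epoch counts) is an illustrative assumption, not the paper's AutoTriage method:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, w=None, epochs=200, lr=0.1):
    """Logistic regression by stochastic gradient descent. Passing `w`
    warm-starts training from an existing model: the transfer step."""
    w = list(w) if w is not None else [0.0] * (len(X[0]) + 1)  # w[0] = bias
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

def predict(w, xi):
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

random.seed(0)
# Synthetic "source" site: plentiful labeled encounters, one informative feature.
src_y = [i % 2 for i in range(400)]
src_X = [[random.gauss(1.0 if y else -1.0, 1.0)] for y in src_y]
# Synthetic "target" site: only a handful of labeled encounters.
tgt_X = [[-1.2], [1.1], [-0.9], [1.3]]
tgt_y = [0, 1, 0, 1]

w_source = train(src_X, src_y)
w_transfer = train(tgt_X, tgt_y, w=w_source, epochs=5)  # brief fine-tune

print(predict(w_transfer, [1.5]) > 0.5)   # → True (high-risk example)
print(predict(w_transfer, [-1.5]) < 0.5)  # → True (low-risk example)
```

The same warm-start-then-fine-tune pattern is what lets a target site reach useful discrimination with hundreds rather than thousands of training patients.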
ABSTRACT
AIMS: To compute the financial and mortality impact of InSight, an algorithm-driven biomarker, which forecasts the onset of sepsis with minimal use of electronic health record data. METHODS: This study compares InSight with existing sepsis screening tools and computes the differential life and cost savings associated with its use in the inpatient setting. To do so, mortality reduction is obtained from an increase in the number of sepsis cases correctly identified by InSight. Early sepsis detection by InSight is also associated with a reduction in length-of-stay, from which cost savings are directly computed. RESULTS: InSight identifies more true positive cases of severe sepsis, with fewer false alarms, than comparable methods. For an individual ICU with 50 beds, for example, it is determined that InSight annually saves 75 additional lives and reduces sepsis-related costs by $560,000. LIMITATIONS: InSight performance results are derived from analysis of a single-center cohort. Mortality reduction results rely on a simplified use case, which fixes prediction times at 0, 1, and 2 h before sepsis onset, likely leading to under-estimates of lives saved. The corresponding cost reduction numbers are based on national averages for daily patient length-of-stay cost. CONCLUSIONS: InSight has the potential to reduce sepsis-related deaths and to lead to substantial cost savings for healthcare facilities.
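The cost-savings arithmetic described (a length-of-stay reduction priced at a national-average daily cost) can be sketched as a back-of-the-envelope calculation. The input numbers below are illustrative assumptions chosen to match the order of magnitude of the reported $560,000 figure, not the paper's actual model:

```python
def annual_sepsis_savings(sepsis_cases_per_year, los_reduction_days,
                          daily_cost_usd):
    """Back-of-the-envelope savings from earlier sepsis detection:
    cases per year x average length-of-stay reduction in days x
    average daily inpatient cost. All inputs are assumptions."""
    return sepsis_cases_per_year * los_reduction_days * daily_cost_usd

# e.g. 140 severe sepsis cases/year, 2-day shorter stay, $2,000/day
print(annual_sepsis_savings(140, 2.0, 2000))  # → 560000.0
```

A real analysis would layer in detection-rate differences and mortality effects, which this toy formula omits.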
Subjects
Algorithms; Sepsis/economics; Sepsis/mortality; Severity of Illness Index; Age Factors; Anti-Bacterial Agents/economics; Anti-Bacterial Agents/therapeutic use; Biomarkers; Clinical Protocols; Cost-Benefit Analysis; Humans; Length of Stay; Organ Dysfunction Scores; Sensitivity and Specificity; Sepsis/diagnosis; Vital Signs
ABSTRACT
BACKGROUND: Sepsis is one of the leading causes of mortality in hospitalized patients. Despite this fact, a reliable means of predicting sepsis onset remains elusive. Early and accurate sepsis onset predictions could allow more aggressive and targeted therapy while maintaining antimicrobial stewardship. Existing detection methods suffer from low performance and often require time-consuming laboratory test results. OBJECTIVE: To study and validate a sepsis prediction method, InSight, for the new Sepsis-3 definitions in retrospective data, make predictions using a minimal set of variables from within the electronic health record data, compare the performance of this approach with existing scoring systems, and investigate the effects of data sparsity on InSight performance. METHODS: We apply InSight, a machine learning classification system that uses multivariable combinations of easily obtained patient data (vitals, peripheral capillary oxygen saturation, Glasgow Coma Score, and age), to predict sepsis using the retrospective Multiparameter Intelligent Monitoring in Intensive Care (MIMIC)-III dataset, restricted to intensive care unit (ICU) patients aged 15 years or more. Following the Sepsis-3 definitions of the sepsis syndrome, we compare the classification performance of InSight versus quick sequential organ failure assessment (qSOFA), modified early warning score (MEWS), systemic inflammatory response syndrome (SIRS), simplified acute physiology score (SAPS) II, and sequential organ failure assessment (SOFA) to determine whether or not patients will become septic at a fixed period of time before onset. We also test the robustness of the InSight system to random deletion of individual input observations. 
RESULTS: In a test dataset with 11.3% sepsis prevalence, InSight produced superior classification performance compared with the alternative scores as measured by area under the receiver operating characteristic curves (AUROC) and area under precision-recall curves (APR). In detection of sepsis onset, InSight attains AUROC = 0.880 (SD 0.006) at onset time and APR = 0.595 (SD 0.016), both of which are superior to the performance attained by SIRS (AUROC: 0.609; APR: 0.160), qSOFA (AUROC: 0.772; APR: 0.277), and MEWS (AUROC: 0.803; APR: 0.327) computed concurrently, as well as SAPS II (AUROC: 0.700; APR: 0.225) and SOFA (AUROC: 0.725; APR: 0.284) computed at admission (P<.001 for all comparisons). Similar results are observed for 1-4 hours preceding sepsis onset. In experiments where approximately 60% of input data are deleted at random, InSight attains an AUROC of 0.781 (SD 0.013) and APR of 0.401 (SD 0.015) at sepsis onset time. Even with 60% of data missing, InSight remains superior to the corresponding SIRS scores (AUROC and APR, P<.001), qSOFA scores (P=.0095; P<.001) and superior to SOFA and SAPS II computed at admission (AUROC and APR, P<.001), where all of these comparison scores (except InSight) are computed without data deletion. CONCLUSIONS: Despite using little more than vitals, InSight is an effective tool for predicting sepsis onset and performs well even with randomly missing data.
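The AUROC values compared throughout these results have a simple rank interpretation: the probability that a randomly chosen septic patient receives a higher risk score than a randomly chosen non-septic patient. A minimal sketch of that Mann-Whitney formulation, with illustrative scores rather than study data:

```python
def auroc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the fraction of positive/negative pairs in which the positive case
    scores higher (ties count half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative risk scores: septic cases (label 1) mostly score higher.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]
print(auroc(scores, labels))  # → 0.888... (8 of 9 pairs correctly ranked)
```

On this reading, an AUROC of 0.880 at onset means InSight ranks a septic patient above a non-septic one 88% of the time.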
ABSTRACT
INTRODUCTION: Emergency department (ED) crowding has been shown to negatively impact patient outcomes. Few studies have addressed the effect of ED crowding on patient satisfaction. Our objective was to evaluate the impact of ED crowding on patient satisfaction in patients discharged from the ED. METHODS: We measured patient satisfaction using Press-Ganey surveys returned by patients who visited our ED between August 1, 2007 and March 31, 2008. We recorded all mean satisfaction scores and obtained the mean ED occupancy rate, mean emergency department work index (EDWIN) score, and hospital diversion status over each 8-hour shift from data archived in our electronic tracking board. Univariate and multivariable logistic regression analyses were performed to determine the effect of ED crowding and hospital diversion status on the odds of achieving a mean satisfaction score ≥ 85, which was the patient satisfaction goal set forth by our ED administration. RESULTS: A total of 1591 surveys were returned over the study period. Mean satisfaction score was 77.6 (standard deviation [SD] ±16) and mean occupancy rate was 1.23 (SD ± 0.31). The likelihood of meeting the patient satisfaction goal decreased with an increase in average ED occupancy rate (odds ratio [OR] 0.32, 95% confidence interval [CI] 0.17 to 0.59, P < 0.001) and with an increase in EDWIN score (OR 0.05, 95% CI 0.004 to 0.55, P = 0.015). Hospital diversion was associated with lower mean satisfaction scores, but this was not statistically significant (OR 0.62, 95% CI 0.36 to 1.05). In multivariable analysis controlling for hospital diversion status and time of shift, ED occupancy rate remained a significant predictor of failure to meet patient satisfaction goals (OR 0.34, 95% CI 0.18 to 0.66, P = 0.001). CONCLUSION: Increased crowding, as measured by ED occupancy rate and EDWIN score, was significantly associated with reduced patient satisfaction. Although causative attribution was limited, our study suggested yet another negative impact resulting from ED crowding.
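The odds ratios reported above are exponentiated logistic-regression coefficients: a one-unit increase in the predictor multiplies the odds of the outcome by exp(beta). A short worked sketch using the study's reported OR of 0.32 per unit of mean occupancy (the half-unit extrapolation is illustrative, not a reported result):

```python
import math

def odds_ratio(beta, delta=1.0):
    """Odds ratio implied by a logistic-regression coefficient:
    a change of `delta` in the predictor multiplies the odds of
    the outcome by exp(beta * delta)."""
    return math.exp(beta * delta)

# An OR of 0.32 per unit occupancy corresponds to a log-odds
# coefficient of ln(0.32) ≈ -1.14 for meeting the satisfaction goal.
beta = math.log(0.32)
print(round(odds_ratio(beta), 2))        # → 0.32 (recovers the reported OR)
print(round(odds_ratio(beta, 0.5), 2))   # → 0.57 (implied odds multiplier
                                         #   for a half-unit occupancy rise)
```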
ABSTRACT
BACKGROUND: Severity-of-illness scoring systems have primarily been developed for, and validated in, younger trauma patients. AIMS: We sought to determine the accuracy of the injury severity score (ISS) and the revised trauma score (RTS) in predicting mortality and hospital length of stay (LOS) in trauma patients over the age of 65 treated in our emergency department (ED). MATERIALS AND METHODS: Using the Illinois Trauma Registry, we identified all patients 65 years and older treated in our level I trauma facility from January 2004 to November 2007. The primary outcome was death; the secondary outcome was overall hospital LOS. We measured associations between scores and outcomes with binary logistic and linear regression. RESULTS: A total of 347 patients aged 65 years and older were treated in our hospital during the study period. Median age was 76 years (IQR 69-82), with median ISS 13 (IQR 8-17) and median RTS 7.8 (IQR 7.1-7.8). Overall mortality was 24%. A higher ISS showed a positive correlation with likelihood of death, which, although statistically significant, was numerically small (OR=1.10, 95% CI 1.06 to 1.13, P<0.001). An elevated RTS had an inverse correlation with likelihood of death that was also statistically significant (OR=0.48, 95% CI 0.39 to 0.58, P<0.001). Total hospital LOS increased with increasing ISS, although statistical significance diminished at the highest ISS levels; an increase in RTS did not consistently predict the expected decrease in total hospital LOS across all ranges of RTS. CONCLUSIONS: The ISS and the RTS were better predictors of mortality than hypothesized, but had limited correlation with hospital LOS in elderly trauma patients. Although there may be some utility in these scores when applied to the elderly population, caution is warranted if attempting to predict the prognosis of patients.
ABSTRACT
BACKGROUND: The current international staging system for lung cancer designates intralobar satellites as T4 disease. In this study, we sought to determine the impact of multifocal, intralobar non-small cell lung cancer (NSCLC) on patient survival and its potential relevance to stage designation. METHODS: We conducted a retrospective review of our thoracic surgical cancer registry from 1990 to 2005. Included were 53 patients with a resected lung cancer containing intralobar satellites detected preoperatively (n = 8) or in the resected specimen (n = 45). Patients with multicentric bronchioloalveolar cancer were excluded. All patients had an anatomic resection with mediastinal lymph node dissection. Median follow-up for the entire group was 31 months. Survival was calculated by the Kaplan-Meier method. A Cox proportional hazards regression model was performed to examine simultaneously the effects on overall survival of age, gender, nodal disease, number of satellite lesions, lymphatic invasion, and T status. RESULTS: The median age of the 53 patients with multifocal, intralobar (T4) disease was 68 years and 31 were women. Ten patients had more than one satellite lesion. Overall 5-year survival was 47.6% (95% confidence interval [CI], 27.36% to 65.30%) for all patients with resected intralobar satellites. Patients without nodal metastases had a 5-year survival of 58.4% (95% CI, 28.76% to 79.30%). The Cox regression identified female gender (adjusted hazard ratio [HR], 0.31; 95% CI, 0.10 to 0.96; p < 0.04) as a significant prognostic variable but only a trend towards significance for nodal status (adjusted HR, 2.3; 95% CI, 0.83 to 6.26; p < 0.11). CONCLUSIONS: Patients with intralobar multifocal NSCLC detected in the resected specimen have a more favorable prognosis after surgical resection than might be predicted by their stage T4 designation. Five-year survival rates, especially in T4N0 patients, more closely approximate those with stages IB or II NSCLC.
Subjects
Carcinoma, Non-Small-Cell Lung/pathology; Carcinoma, Non-Small-Cell Lung/surgery; Lung Neoplasms/pathology; Lung Neoplasms/surgery; Aged; Female; Follow-Up Studies; Humans; Kaplan-Meier Estimate; Lymphatic Metastasis; Male; Neoplasm Staging; Predictive Value of Tests; Proportional Hazards Models; Registries; Retrospective Studies; Sex Factors
ABSTRACT
BACKGROUND: The purposes of this study were to determine the frequency of downstaging of T or N after neoadjuvant chemotherapy and radical resection in patients with carcinoma of the esophagus, and to evaluate the effect of tumor downstaging on survival. METHODS: A cohort of patients who underwent neoadjuvant chemotherapy followed by radical surgical resection for carcinoma of the esophagus was identified from a large, prospectively maintained, single-institution database of esophageal cancer patients. Patients were included if they had an accurate pretreatment clinical stage determined by the authors. Data collected included demographic data, the type of staging regimen, the chemotherapy agents used, clinical and pathologic data and stages, and survival data. Downstaging of T or N was determined by comparing the pretreatment, clinical stage to the postresection, pathologic stage. Downstaging was then evaluated in the context of survival. RESULTS: Seventy-seven patients were identified who had an accurate clinical stage assigned and underwent neoadjuvant chemotherapy followed by radical resection. Patients were clinically staged before treatment using computed tomography, positron emission tomography, and endoscopic ultrasonography. Thirty-seven patients (48%) experienced downstaging of T or N, and this group of patients had a 5-year overall actuarial survival of 63%, compared with 23% for those who were not downstaged (p = 0.002). Three patients had a complete pathologic response to neoadjuvant chemotherapy (3.9%). CONCLUSIONS: Patients who experience downstaging of T or N after neoadjuvant chemotherapy and radical surgical resection for esophageal carcinoma have a significantly higher survival rate compared with those who do not experience downstaging. This enhanced survival is comparable to survival rates reported in complete pathologic responders after neoadjuvant chemoradiation.
Subjects
Esophageal Neoplasms/pathology; Esophagectomy; Adult; Aged; Chemotherapy, Adjuvant; Combined Modality Therapy; Disease-Free Survival; Esophageal Neoplasms/mortality; Esophageal Neoplasms/therapy; Female; Humans; Male; Middle Aged; Neoadjuvant Therapy; Neoplasm Staging; Retrospective Studies
ABSTRACT
OBJECTIVE: With the widespread use of computed tomography and the emergence of screening programs, non-small cell lung cancer is increasingly detected in sizes 1 cm or less. We sought to examine the long-term survival and recurrence patterns after resection of these tumors. METHODS: We conducted a retrospective review over a 15-year period to identify patients with surgically resected non-small cell lung cancer measuring 1 cm or less. Medical records were reviewed, and survival data were analyzed by the Kaplan-Meier method. RESULTS: There were 83 patients (26 men, 57 women) with a median age of 67 years (range 43-88 years). Median tumor size was 0.90 cm. Lobectomy was performed in 71 patients, bilobectomy in 1, pneumonectomy in 1, segmentectomy in 5, and wedge resection in 5. Postoperative stage was IA in 67 patients, IB in 4, IIA in 1, IIB in 4, IIIA in 2, and IIIB in 5. Median follow-up was 31 months. There was 1 operative death (1.2%). In 5 (31.3%) of the 16 patients with non-IA disease, recurrent cancer developed after resection. No recurrences were observed in the 67 patients with stage IA disease. The 5- and 10-year overall survivals for the entire cohort were 86% and 72%, respectively, and the disease-specific survival was 91% at both time points. For patients with stage IA disease, 5- and 10-year survivals were 94% and 75%, respectively, and the disease-specific survival was 100% at both time points. CONCLUSION: Eighty-one percent of patients with resected non-small cell lung cancer measuring 1 cm or less had stage IA disease. After surgical resection, recurrence is rare and long-term survival is excellent.
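Several of the survival figures in these abstracts come from the Kaplan-Meier product-limit method, which handles censored follow-up by multiplying conditional survival fractions at each event time. A minimal stdlib sketch on a tiny invented cohort (not study data):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate. `times` are
    follow-up times; `events` is 1 for an observed event (death or
    recurrence) and 0 for censoring. Returns (time, survival) steps
    at each distinct event time."""
    at_risk = sorted(zip(times, events))
    n = len(at_risk)
    surv, steps, i = 1.0, [], 0
    while i < n:
        t = at_risk[i][0]
        deaths = sum(1 for tt, e in at_risk[i:] if tt == t and e == 1)
        ties = sum(1 for tt, e in at_risk[i:] if tt == t)
        if deaths:
            # Multiply by the fraction surviving among those still at risk.
            surv *= 1.0 - deaths / (n - i)
            steps.append((t, surv))
        i += ties
    return steps

# Tiny illustrative cohort: follow-up in months, event indicator.
times  = [6, 12, 12, 24, 30, 36]
events = [1,  0,  1,  0,  1,  0]
print(kaplan_meier(times, events))
# → steps of roughly (6, 0.833), (12, 0.667), (30, 0.333)
```

The censored subjects (events = 0) shrink the risk set without forcing a drop in the curve, which is what lets 5- and 10-year survival be estimated from cohorts with incomplete follow-up.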