Results 1 - 20 of 43
1.
J Electrocardiol ; 81: 111-116, 2023.
Article in English | MEDLINE | ID: mdl-37683575

ABSTRACT

BACKGROUND: Despite the morbidity associated with acute atrial fibrillation (AF), no models currently exist to forecast its imminent onset. We sought to evaluate the ability of deep learning to forecast the imminent onset of AF with sufficient lead time, which has important implications for inpatient care. METHODS: We utilized the Physiobank Long-Term AF Database, which contains 24-h, labeled ECG recordings from patients with a history of AF. AF episodes were defined as ≥5 min of sustained AF. Three deep learning models incorporating convolutional and transformer layers were created for forecasting, with two models focusing on the predictive nature of sinus rhythm segments and AF epochs separately preceding an AF episode, and one model utilizing all preceding waveform as input. Cross-validated performance was evaluated using area under time-dependent receiver operating characteristic curves (AUC(t)) at 7.5-, 15-, 30-, and 60-min lead times, precision-recall curves, and imminent AF risk trajectories. RESULTS: There were 367 AF episodes from 84 ECG recordings. All models showed average risk trajectory divergence of those with an AF episode from those without ∼15 min before the episode. Highest AUC was associated with the sinus rhythm model [AUC = 0.74; 7.5-min lead time], though the model using all preceding waveform data had similar performance and higher AUCs at longer lead times. CONCLUSIONS: In this proof-of-concept study, we demonstrated the potential utility of neural networks to forecast the onset of AF in long-term ECG recordings with a clinically relevant lead time. External validation in larger cohorts is required before deploying these models clinically.
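The episode definition (≥5 min of sustained AF) and the lead-time evaluation described above can be sketched in a few lines; the per-second sampling, array names, and helper functions below are illustrative assumptions rather than the authors' code.

```python
# Sketch: label AF episodes (>= 5 min sustained AF) and score forecasts at a fixed
# lead time. Assumes `rhythm` is a per-second 0/1 array (1 = AF) and `risk` is a
# per-second model risk score; both are illustrative stand-ins for the study data.
import numpy as np
from sklearn.metrics import roc_auc_score

def af_episode_onsets(rhythm, fs=1, min_dur_s=300):
    """Return onset indices of sustained AF runs lasting >= min_dur_s."""
    padded = np.r_[0, np.asarray(rhythm, int), 0]
    starts = np.where(np.diff(padded) == 1)[0]
    ends = np.where(np.diff(padded) == -1)[0]
    return [s for s, e in zip(starts, ends) if (e - s) / fs >= min_dur_s]

def lead_time_auc(risk, rhythm, lead_min=7.5, fs=1):
    """AUC for 'AF onset within lead_min minutes' against the current risk score."""
    risk, rhythm = np.asarray(risk, float), np.asarray(rhythm, int)
    horizon = int(lead_min * 60 * fs)
    y = np.zeros(len(risk), dtype=int)
    for onset in af_episode_onsets(rhythm, fs):
        y[max(0, onset - horizon):onset] = 1   # positive: within lead time of onset
    keep = rhythm == 0                         # score only samples not already in AF
    return roc_auc_score(y[keep], risk[keep])
```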


Subjects
Atrial Fibrillation, Humans, Atrial Fibrillation/diagnosis, Electrocardiography, Neural Networks, Computer, ROC Curve, Time Factors
2.
J Electrocardiol ; 76: 35-38, 2023.
Article in English | MEDLINE | ID: mdl-36434848

ABSTRACT

The idea that we can detect subacute potentially catastrophic illness earlier by using statistical models trained on clinical data is now well-established. We review evidence that supports the role of continuous cardiorespiratory monitoring in these predictive analytics monitoring tools. In particular, we review how continuous ECG monitoring reflects the patient and not the clinician, is less likely to be biased, is unaffected by changes in practice patterns, captures signatures of illnesses that are interpretable by clinicians, and is an underappreciated and underutilized source of detailed information for new mathematical methods to reveal.


Subjects
Clinical Deterioration, Electrocardiography, Humans, Electrocardiography/methods, Monitoring, Physiologic, Models, Statistical, Artificial Intelligence
3.
Clin Infect Dis ; 75(3): 476-482, 2022 08 31.
Article in English | MEDLINE | ID: mdl-34791136

ABSTRACT

BACKGROUND: Most hospitals use traditional infection prevention (IP) methods for outbreak detection. We developed the Enhanced Detection System for Healthcare-Associated Transmission (EDS-HAT), which combines whole-genome sequencing (WGS) surveillance and machine learning (ML) of the electronic health record (EHR) to identify undetected outbreaks and the responsible transmission routes, respectively. METHODS: We performed WGS surveillance of healthcare-associated bacterial pathogens from November 2016 to November 2018. EHR ML was used to identify the transmission routes for WGS-detected outbreaks, which were investigated by an IP expert. Potential infections prevented were estimated and compared with traditional IP practice during the same period. RESULTS: Of 3165 isolates, 2752 were unique patient isolates; WGS identified 99 clusters involving 297 (10.8%) of these patient isolates, with clusters ranging from 2 to 14 patients. At least 1 transmission route was detected for 65.7% of clusters. During the same time, traditional IP investigation prompted WGS for 15 suspected outbreaks involving 133 patients, for which transmission events were identified for 5 patients (3.8%). If EDS-HAT had been running in real time, 25-63 transmissions could have been prevented. EDS-HAT was found to be cost-saving and more effective than traditional IP practice, with overall savings of $192,408-$692,532. CONCLUSIONS: EDS-HAT detected multiple outbreaks not identified using traditional IP methods, correctly identified the transmission routes for most outbreaks, and would save the hospital substantial costs. Traditional IP practice misidentified outbreaks for which transmission did not occur. WGS surveillance combined with EHR ML has the potential to save costs and enhance patient safety.


Subjects
Cross Infection, Electronic Health Records, Cross Infection/epidemiology, Cross Infection/microbiology, Cross Infection/prevention & control, Delivery of Health Care, Disease Outbreaks, Genome, Bacterial, Humans, Machine Learning, Whole Genome Sequencing/methods
4.
Sensors (Basel) ; 22(4)2022 Feb 12.
Article in English | MEDLINE | ID: mdl-35214310

ABSTRACT

Early recognition of pathologic cardiorespiratory stress and forecasting of cardiorespiratory decompensation in the critically ill are difficult even in highly monitored patients in the Intensive Care Unit (ICU). Instability can be intuitively defined as the overt manifestation of the failure of the host to adequately respond to cardiorespiratory stress. The enormous volume of patient data available in ICU environments, both high-frequency numeric and waveform data accessible from bedside monitors, plus Electronic Health Record (EHR) data, presents a platform ripe for Artificial Intelligence (AI) approaches for the detection and forecasting of instability, and data-driven intelligent clinical decision support (CDS). Building unbiased, reliable, and usable AI-based systems across health care sites is rapidly becoming a high priority, specifically as these systems relate to diagnostics, forecasting, and bedside clinical decision support. The ICU environment is particularly well-positioned to demonstrate the value of AI in saving lives. The goal is to create AI models, embedded in a real-time CDS for forecasting and mitigation of critical instability in ICU patients, of sufficient readiness to be deployed at the bedside. Such a system must leverage multi-source patient data, machine learning, systems engineering, and human action expertise, the latter being key to successful CDS implementation in the clinical workflow and evaluation of bias. We present one approach to create an operationally relevant AI-based forecasting CDS system.


Subjects
Decision Support Systems, Clinical, Artificial Intelligence, Critical Care, Humans, Intensive Care Units, Machine Learning
5.
Sensors (Basel) ; 22(3)2022 Jan 28.
Article in English | MEDLINE | ID: mdl-35161770

ABSTRACT

For fluid resuscitation of critically ill individuals to be effective, it must be well calibrated in terms of timing and dosages of treatments. In current practice, the cardiovascular sufficiency of patients during fluid resuscitation is determined using primarily invasively measured vital signs, including Arterial Pressure and Mixed Venous Oxygen Saturation (SvO2), which may not be available in outside-of-hospital settings, particularly in the field when treating subjects injured in traffic accidents or wounded in combat where only non-invasive monitoring is available to drive care. In this paper, we propose (1) a Machine Learning (ML) approach to estimate the sufficiency utilizing features extracted from non-invasive vital signs and (2) a novel framework to address the detrimental impact of inter-patient diversity on the ability of ML models to generalize well to unseen subjects. Through comprehensive evaluation on the physiological data collected in laboratory animal experiments, we demonstrate that the proposed approaches can achieve competitive performance on new patients using only non-invasive measurements. These characteristics enable effective monitoring of fluid resuscitation in real-world acute settings with limited monitoring resources and can help facilitate broader adoption of ML in this important subfield of healthcare.
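The abstract does not disclose the authors' exact framework for handling inter-patient diversity; one standard way to measure generalization to unseen subjects is subject-grouped cross-validation, sketched below with placeholder data (`X`, `y`, and `subject_id` are illustrative stand-ins).

```python
# Sketch: evaluate generalization to unseen subjects with leave-one-subject-out
# cross-validation. This only illustrates the subject-grouped evaluation idea, not
# the authors' framework; all data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))                 # features from non-invasive vitals
y = rng.integers(0, 2, size=600)              # cardiovascular sufficiency label
subject_id = np.repeat(np.arange(20), 30)     # 20 subjects, 30 samples each

scores = cross_val_score(
    RandomForestClassifier(n_estimators=200, random_state=0),
    X, y, groups=subject_id, cv=LeaveOneGroupOut(), scoring="roc_auc")
print(f"Per-subject AUC: {scores.mean():.2f} ± {scores.std():.2f}")
```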


Subjects
Cardiovascular System, Fluid Therapy, Animals, Critical Illness, Heart, Humans, Oximetry
6.
J Clin Monit Comput ; 36(2): 397-405, 2022 04.
Article in English | MEDLINE | ID: mdl-33558981

ABSTRACT

Big data analytics research using heterogeneous electronic health record (EHR) data requires accurate identification of disease phenotype cases and controls. Overreliance on ground truth determination based on administrative data can lead to biased and inaccurate findings. Hospital-acquired venous thromboembolism (HA-VTE) is challenging to identify due to its temporal evolution and variable EHR documentation. To establish ground truth for machine learning modeling, we compared the accuracy of HA-VTE diagnoses made by administrative coding to manual review of gold standard diagnostic test results. We performed a retrospective analysis of EHR data on 3680 adult stepdown unit patients to identify HA-VTE. International Classification of Diseases, Ninth Revision (ICD-9-CM) codes for VTE were identified. A total of 4544 radiology reports associated with VTE diagnostic tests were screened using terminology extraction and then manually reviewed by a clinical expert to confirm the diagnosis. Of 415 cases with ICD-9-CM codes for VTE, 219 were identified with acute onset type codes. Test report review identified 158 new-onset HA-VTE cases. Only 40% of ICD-9-CM coded cases (n = 87) were confirmed by a positive diagnostic test report, leaving the majority of administratively coded cases unsubstantiated by a confirmatory diagnostic test. Additionally, 45% of diagnostic test-confirmed HA-VTE cases lacked corresponding ICD codes. ICD-9-CM coding missed diagnostic test-confirmed HA-VTE cases and inaccurately assigned cases without confirmed VTE, suggesting that dependence on administrative coding leads to inaccurate HA-VTE phenotyping. Alternative methods to develop more sensitive and specific VTE phenotype solutions portable across EHR vendor data are needed to support case-finding in big-data analytics.
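The agreement figures quoted above follow from simple arithmetic on the reported counts; the sketch below reproduces them, with the PPV/sensitivity framing being our interpretation.

```python
# Sketch: agreement between ICD-9-CM coding and diagnostic-test review, using the
# counts reported in the abstract (219 acute-onset coded cases, 158 test-confirmed
# HA-VTE cases, 87 cases in both). The PPV/sensitivity labels are our reading.
coded_acute = 219        # acute-onset ICD-9-CM VTE codes
test_confirmed = 158     # new-onset HA-VTE confirmed by diagnostic test review
both = 87                # coded cases also confirmed by a positive test

ppv = both / coded_acute              # ~0.40: "Only 40% of ICD-9-CM coded cases ..."
sensitivity = both / test_confirmed  # ~0.55: "45% ... lacked corresponding ICD codes"
print(f"PPV of coding ≈ {ppv:.0%}, sensitivity of coding ≈ {sensitivity:.0%}")
```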


Subjects
Venous Thromboembolism, Big Data, Hospitals, Humans, Machine Learning, Retrospective Studies, Venous Thromboembolism/diagnosis
7.
Clin Infect Dis ; 73(3): e638-e642, 2021 08 02.
Article in English | MEDLINE | ID: mdl-33367518

ABSTRACT

BACKGROUND: Traditional methods of outbreak investigation utilize reactive whole genome sequencing (WGS) to confirm or refute the outbreak. We have implemented WGS surveillance and a machine learning (ML) algorithm for the electronic health record (EHR) to retrospectively detect previously unidentified outbreaks and to determine the responsible transmission routes. METHODS: We performed WGS surveillance to identify and characterize clusters of genetically related Pseudomonas aeruginosa infections during a 24-month period. ML of the EHR was used to identify potential transmission routes. A manual review of the EHR was performed by an infection preventionist to determine the most likely route, and results were compared to the ML algorithm. RESULTS: We identified a cluster of 6 genetically related P. aeruginosa cases that occurred during a 7-month period. The ML algorithm identified gastroscopy as a potential transmission route for 4 of the 6 patients. Manual EHR review confirmed gastroscopy as the most likely route for 5 patients. This transmission route was confirmed by identification of a genetically related P. aeruginosa isolate incidentally cultured from a gastroscope used on 4 of the 5 patients. Three infections, 2 of which were bloodstream infections, could have been prevented if the ML algorithm had been running in real time. CONCLUSIONS: WGS surveillance combined with an ML algorithm of the EHR identified a previously undetected outbreak of gastroscope-associated P. aeruginosa infections. These results underscore the value of WGS surveillance and ML of the EHR for enhancing outbreak detection in hospitals and preventing serious infections.


Subjects
Cross Infection, Pseudomonas Infections, Cross Infection/diagnosis, Cross Infection/epidemiology, Disease Outbreaks, Gastroscopes, Humans, Pseudomonas Infections/diagnosis, Pseudomonas Infections/epidemiology, Pseudomonas aeruginosa/genetics, Retrospective Studies, Whole Genome Sequencing
8.
Crit Care ; 24(1): 661, 2020 11 25.
Article in English | MEDLINE | ID: mdl-33234161

ABSTRACT

BACKGROUND: Even brief hypotension is associated with increased morbidity and mortality. We developed a machine learning model to predict the initial hypotension event among intensive care unit (ICU) patients and designed an alert system for bedside implementation. MATERIALS AND METHODS: From the Medical Information Mart for Intensive Care III (MIMIC-3) dataset, minute-by-minute vital signs were extracted. A hypotension event was defined as at least five measurements within a 10-min period of systolic blood pressure ≤ 90 mmHg and mean arterial pressure ≤ 60 mmHg. Using time series data from 30-min overlapping time windows, a random forest (RF) classifier was used to predict risk of hypotension every minute. Chronologically, the first half of extracted data was used to train the model, and the second half was used to validate the trained model. The model's performance was measured with area under the receiver operating characteristic curve (AUROC) and area under the precision recall curve (AUPRC). Hypotension alerts were generated from the risk score time series using a stacked RF model, and a lockout time was applied for real-life implementation. RESULTS: We identified 1307 subjects (1580 ICU stays) as the hypotension group and 1619 subjects (2279 ICU stays) as the non-hypotension group. The RF model showed AUROC of 0.93 and 0.88 at 15 and 60 min, respectively, before hypotension, and AUPRC of 0.77 at 60 min before. Risk score trajectories revealed that 80% and > 60% of hypotension events were predicted at 15 and 60 min before onset, respectively. The stacked model with 15-min lockout produced on average 0.79 alerts/subject/hour (sensitivity 92.4%). CONCLUSION: Clinically significant hypotension events in the ICU can be predicted at least 1 h before the initial hypotension episode. With a highly sensitive and reliable practical alert system, the vast majority of future hypotension events could be captured, suggesting potential real-life utility.
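The event definition above (at least five minute-level readings within a 10-min window with SBP ≤ 90 mmHg and MAP ≤ 60 mmHg) maps directly onto a sliding-window check; the column names and pandas layout below are assumptions.

```python
# Sketch: flag the first hypotension event per the abstract's definition: at least
# five minute-by-minute measurements within a 10-min window with SBP <= 90 mmHg and
# MAP <= 60 mmHg. Column names (`sbp`, `map`) are illustrative.
import pandas as pd

def first_hypotension_onset(vitals: pd.DataFrame):
    """vitals: one row per minute with columns 'sbp' and 'map'."""
    low = ((vitals["sbp"] <= 90) & (vitals["map"] <= 60)).astype(int)
    # rolling 10-minute window; >= 5 qualifying minutes marks an event
    hits = low.rolling(window=10, min_periods=1).sum()
    onset = hits[hits >= 5].index
    return onset[0] if len(onset) else None
```

In the study, a random forest trained on 30-min overlapping windows then rescored this risk every minute; the helper above only reproduces the event labeling step.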


Subjects
Hypotension/diagnosis, Monitoring, Physiologic/standards, Precision Medicine/methods, Vital Signs/physiology, Aged, Area Under Curve, Female, Humans, Hypotension/physiopathology, Intensive Care Units/organization & administration, Intensive Care Units/statistics & numerical data, Machine Learning/standards, Machine Learning/statistics & numerical data, Male, Middle Aged, Monitoring, Physiologic/methods, Monitoring, Physiologic/statistics & numerical data, ROC Curve, Risk Assessment/methods, Risk Assessment/standards, Risk Assessment/statistics & numerical data
9.
Anesth Analg ; 130(5): 1176-1187, 2020 05.
Article in English | MEDLINE | ID: mdl-32287125

ABSTRACT

BACKGROUND: Individualized hemodynamic monitoring approaches are not well validated. Thus, we evaluated the discriminative performance improvement that might occur when moving from noninvasive monitoring (NIM) to invasive monitoring and with increasing levels of featurization associated with increasing sampling frequency and referencing to a stable baseline to identify bleeding during surgery in a porcine model. METHODS: We collected physiologic waveform (WF) data (250 Hz) from NIM, central venous (CVC), arterial (ART), and pulmonary arterial (PAC) catheters, plus mixed venous O2 saturation and cardiac output from 38 anesthetized Yorkshire pigs bled at 20 mL/min until a mean arterial pressure of 30 mm Hg following a 30-minute baseline period. Prebleed physiologic data defined a personal stable baseline for each subject independently. Nested models were evaluated using simple hemodynamic metrics (SM) averaged over 20-second windows and sampled every minute, beat to beat (B2B), and WF using Random Forest Classification models to identify bleeding with or without normalization to personal stable baseline, using a leave-one-pig-out cross-validation to minimize model overfitting. Model hyperparameters were tuned to detect stable or bleeding states. Bleeding models were compared using both each subject's personal baseline and a grouped-average (universal) baseline. Timeliness of bleed onset detection was evaluated by comparing the tradeoff between a low false-positive rate (FPR) and shortest time to bleed detection. Predictive performance was evaluated using a variant of the receiver operating characteristic focusing on minimizing FPR and false-negative rates (FNR) for true-positive and true-negative rates, respectively. RESULTS: In general, referencing models to a personal baseline resulted in better bleed detection performance for all catheters than using universal baselined data. Increasing granularity from SM to B2B and WF progressively improved bleeding detection. All invasive monitoring outperformed NIM for both time to bleeding detection and low FPR and FNR. In that regard, when referenced to personal baseline with SM analysis, PAC and ART + PAC performed best; for B2B CVC, PAC and ART + PAC performed best; and for WF PAC, CVC, ART + CVC, and ART + PAC performed equally well and better than other monitoring approaches. Without personal baseline, NIM performed poorly at all levels, while all catheters performed similarly for SM, with B2B PAC and ART + PAC performing the best, and for WF PAC, ART, ART + CVC, and ART + PAC performed equally well and better than the other monitoring approaches. CONCLUSIONS: Increasing hemodynamic monitoring featurization by increasing sampling frequency and referencing to personal baseline markedly improves the ability of invasive monitoring to detect bleeding.
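Personal-baseline referencing amounts to normalizing each subject's features against their own pre-bleed window before pooling subjects for leave-one-pig-out cross-validation; the z-score form and variable names below are illustrative assumptions.

```python
# Sketch: reference hemodynamic features to a personal stable baseline and evaluate
# with leave-one-subject-out ("leave-one-pig-out") cross-validation. The z-score form
# of the referencing and all variable names are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

def reference_to_baseline(X, baseline_mask):
    """Z-score each feature against the subject's own pre-bleed baseline window."""
    mu = X[baseline_mask].mean(axis=0)
    sd = X[baseline_mask].std(axis=0) + 1e-9
    return (X - mu) / sd

def leave_one_pig_out_auc(X_per_pig, y_per_pig, baseline_per_pig):
    """X_per_pig: list of (samples x features) arrays, one per animal."""
    Xn = [reference_to_baseline(X, b) for X, b in zip(X_per_pig, baseline_per_pig)]
    X, y = np.vstack(Xn), np.concatenate(y_per_pig)
    groups = np.concatenate([np.full(len(yi), i) for i, yi in enumerate(y_per_pig)])
    aucs = []
    for tr, te in LeaveOneGroupOut().split(X, y, groups):
        clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X[tr], y[tr])
        aucs.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
    return float(np.mean(aucs))
```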


Subjects
Data Analysis, Hemodynamic Monitoring/methods, Hemodynamics/physiology, Hemorrhage/diagnosis, Hemorrhage/physiopathology, Animals, Arterial Pressure/physiology, Cardiac Output, Female, Monitoring, Physiologic/methods, Swine
10.
J Biomed Inform ; 91: 103126, 2019 03.
Article in English | MEDLINE | ID: mdl-30771483

ABSTRACT

We present a statistical inference model for the detection and characterization of outbreaks of hospital-associated infection. The approach combines patient exposures, determined from electronic medical records, and pathogen similarity, determined by whole-genome sequencing, to simultaneously identify probable outbreaks and their root causes. We show how our model can be used to target isolates for whole-genome sequencing, improving outbreak detection and characterization even without comprehensive sequencing. Additionally, we demonstrate how to learn model parameters from reference data of known outbreaks. We demonstrate model performance using semi-synthetic experiments.


Subjects
Cross Infection/microbiology, Disease Outbreaks, Machine Learning, Medical Records, Humans, Models, Theoretical, United States/epidemiology
11.
J Clin Monit Comput ; 33(6): 973-985, 2019 Dec.
Article in English | MEDLINE | ID: mdl-30767136

ABSTRACT

Tachycardia is a strong though non-specific marker of cardiovascular stress that precedes hemodynamic instability. We designed a predictive model of tachycardia using multi-granular intensive care unit (ICU) data by creating a risk score and dynamic trajectory. A subset of clinical and numerical signals were extracted from the Multiparameter Intelligent Monitoring in Intensive Care II database. A tachycardia episode was defined as heart rate ≥ 130/min lasting for ≥ 5 min, with ≥ 10% density. Regularized logistic regression (LR) and random forest (RF) classifiers were trained to create a risk score for upcoming tachycardia. Three different risk score models were compared for tachycardia and control (non-tachycardia) groups. Risk trajectories were generated from time windows moving away from the tachycardia episode in 1-min increments. Trajectories were computed over 3 hours leading up to the episode for three different models. From 2809 subjects, 787 tachycardia episodes and 707 control periods were identified. Patients with tachycardia had greater vasopressor support, longer ICU stays, and higher ICU mortality than controls. In model evaluation, RF was slightly superior to LR, with accuracy ranging from 0.847 to 0.782 and area under the curve from 0.921 to 0.842. Risk trajectory analysis showed average risk for the tachycardia group evolved to 0.78 prior to the tachycardia episodes, while control group risks remained < 0.3. Among the three models, the internal control model demonstrated an evolving trajectory approximately 75 min before the tachycardia episode. Clinically relevant tachycardia episodes can be predicted from vital sign time series using machine learning algorithms.
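The episode definition (HR ≥ 130/min sustained for ≥ 5 min with ≥ 10% density) can be expressed as a window scan; reading "density" as the fraction of in-window samples at or above threshold, and the per-minute sampling, are our assumptions.

```python
# Sketch: scan a minute-by-minute heart-rate series for tachycardia episodes per the
# abstract's definition (HR >= 130/min, sustained >= 5 min, >= 10% density). "Density"
# is interpreted here as the fraction of in-window readings at or above the threshold.
import numpy as np

def tachycardia_episode_starts(hr, thr=130, win_min=5, density=0.10):
    """Return start indices of win_min-long windows meeting threshold and density."""
    hr = np.asarray(hr, dtype=float)
    above = (hr >= thr).astype(float)
    return [s for s in range(len(hr) - win_min + 1)
            if above[s] and above[s:s + win_min].mean() >= density]
```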


Subjects
Cardiovascular Diseases/diagnosis, Critical Care/methods, Lung Diseases/diagnosis, Monitoring, Intraoperative/methods, Tachycardia/diagnosis, Adult, Aged, Algorithms, Area Under Curve, Data Collection, Databases, Factual, Electronic Health Records, Heart Rate, Hospital Mortality, Humans, Intensive Care Units, Logistic Models, Machine Learning, Middle Aged, ROC Curve, Regression Analysis, Reproducibility of Results, Risk, Tertiary Care Centers, Young Adult
12.
J Electrocardiol ; 51(6S): S44-S48, 2018.
Article in English | MEDLINE | ID: mdl-30077422

ABSTRACT

Research demonstrates that the majority of alarms derived from continuous bedside monitoring devices are non-actionable. This avalanche of unreliable alerts causes clinicians to experience sensory overload when attempting to sort real from false alarms, causing desensitization and alarm fatigue, which in turn leads to adverse events when true instability is neither recognized nor attended to despite the alarm. The scope of the problem of alarm fatigue is broad, and its contributing mechanisms are numerous. Current and future approaches to defining and reacting to actionable and non-actionable alarms are being developed and investigated, but challenges in impacting alarm modalities, sensitivity and specificity, and clinical activity in order to reduce alarm fatigue and adverse events remain. A multi-faceted approach involving clinicians, computer scientists, industry, and regulatory agencies is needed to battle alarm fatigue.


Subjects
Clinical Alarms, Patient Safety, Point-of-Care Systems, Diagnostic Errors, Electrocardiography, Equipment Failure, Humans, Sound
13.
J Clin Monit Comput ; 32(1): 117-126, 2018 Feb.
Article in English | MEDLINE | ID: mdl-28229353

ABSTRACT

Cardiorespiratory instability (CRI) in monitored step-down unit (SDU) patients has a variety of etiologies, and likely manifests in patterns of vital sign (VS) changes. We explored the use of clustering techniques to identify patterns in the initial CRI epoch (CRI1; first exceedances of VS beyond stability thresholds after SDU admission) of unstable patients, and inter-cluster differences in admission characteristics and outcomes. Continuously monitored noninvasive heart rate (HR), respiratory rate (RR), and pulse oximetry (SpO2) were sampled at 1/20 Hz. We identified CRI1 in 165 patients, employed hierarchical and k-means clustering, tested several clustering solutions, used 10-fold cross-validation to establish the best solution, and assessed inter-cluster differences in admission characteristics and outcomes. Three clusters (C) were derived: C1) normal/high HR and RR, normal SpO2 (n = 30); C2) normal HR and RR, low SpO2 (n = 103); and C3) low/normal HR, low RR and normal SpO2 (n = 32). Clusters were significantly different based on age (p < 0.001; older patients in C2), number of comorbidities (p = 0.008; more C2 patients had ≥ 2) and hospital length of stay (p = 0.006; C1 patients stayed longer). There were no between-cluster differences in SDU length of stay or mortality. Three different clusters of VS presentations for CRI1 were identified. Clusters varied on age, number of comorbidities and hospital length of stay. Future study is needed to determine if there are common physiologic underpinnings of VS clusters which might inform clinical decision-making when CRI first manifests.
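A minimal version of the k-means part of this workflow is sketched below; choosing the number of clusters by silhouette score, rather than the paper's cross-validated selection, and the summary feature columns are assumptions.

```python
# Sketch: cluster first-instability (CRI1) vital-sign summaries with k-means and pick
# k by silhouette score. The paper also used hierarchical clustering and 10-fold
# cross-validation to settle on its 3-cluster solution; this simplified selection rule
# is an assumption. `features` columns (e.g. mean HR, RR, SpO2 during CRI1) are
# illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

def cluster_cri1(features, k_range=range(2, 6), seed=0):
    X = StandardScaler().fit_transform(features)
    best = None
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
        score = silhouette_score(X, labels)
        if best is None or score > best[0]:
            best = (score, k, labels)
    return best  # (silhouette, k, cluster labels)
```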


Subjects
Critical Care/methods, Monitoring, Physiologic/instrumentation, Signal Processing, Computer-Assisted, Vital Signs, Adult, Aged, Cluster Analysis, Cohort Studies, Comorbidity, Female, Heart Rate, Hospitalization, Humans, Male, Middle Aged, Monitoring, Physiologic/methods, Oximetry, Patient Admission, Reproducibility of Results, Respiratory Rate
14.
Crit Care Med ; 44(7): e456-63, 2016 Jul.
Article in English | MEDLINE | ID: mdl-26992068

ABSTRACT

OBJECTIVE: To use machine-learning algorithms to classify alerts as real or artifact in online noninvasive vital sign data streams, with the aim of reducing alarm fatigue and missed true instability. DESIGN: Observational cohort study. SETTING: Twenty-four-bed trauma step-down unit. PATIENTS: Two thousand one hundred fifty-three patients. INTERVENTION: Noninvasive vital sign monitoring data (heart rate, respiratory rate, peripheral oximetry) were recorded on all admissions at 1/20 Hz, and noninvasive blood pressure less frequently; data were partitioned into training/validation (294 admissions; 22,980 monitoring hours) and test (2,057 admissions; 156,177 monitoring hours) sets. Alerts were vital sign deviations beyond stability thresholds. A four-member expert committee annotated a subset of alerts (576 in the training/validation set, 397 in the test set), selected by active learning, as real or artifact, upon which we trained machine-learning algorithms. The best model was evaluated on test set alerts to enact online alert classification over time. MEASUREMENTS AND MAIN RESULTS: The Random Forest model discriminated between real and artifact as the alerts evolved online in the test set with area under the curve performance of 0.79 (95% CI, 0.67-0.93) for peripheral oximetry at the instant the vital sign first crossed its threshold, increasing to 0.87 (95% CI, 0.71-0.95) at 3 minutes into the alerting period. Blood pressure area under the curve started at 0.77 (95% CI, 0.64-0.95) and increased to 0.87 (95% CI, 0.71-0.98), whereas respiratory rate area under the curve started at 0.85 (95% CI, 0.77-0.95) and increased to 0.97 (95% CI, 0.94-1.00). Heart rate alerts were too few for model development. CONCLUSIONS: Machine-learning models can discern clinically relevant peripheral oximetry, blood pressure, and respiratory rate alerts from artifacts in an online monitoring dataset (area under the curve > 0.87).
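Online classification here means rescoring each alert as more post-threshold data accrue; the sketch below tracks AUC at 0-3 minutes into the alert, with the feature-extraction helper `features_at` being a hypothetical stand-in for the study's featurization.

```python
# Sketch: score real-vs-artifact discrimination as an alert evolves, i.e. AUC at 0,
# 1, 2, and 3 minutes after the vital sign first crosses its threshold. Assumes
# `features_at(alert, t)` returns the feature vector available t minutes into the
# alert -- a hypothetical helper, not the study's actual code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def online_auc(train_alerts, test_alerts, features_at, minutes=(0, 1, 2, 3)):
    aucs = {}
    for t in minutes:
        Xtr = np.array([features_at(a, t) for a in train_alerts])
        ytr = np.array([a["is_real"] for a in train_alerts])
        Xte = np.array([features_at(a, t) for a in test_alerts])
        yte = np.array([a["is_real"] for a in test_alerts])
        clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)
        aucs[t] = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
    return aucs  # e.g. SpO2 alerts rose from ~0.79 at onset to ~0.87 at 3 min
```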


Subjects
Artifacts, Clinical Alarms/classification, Monitoring, Physiologic/methods, Supervised Machine Learning, Vital Signs, Blood Pressure Determination, Cohort Studies, Heart Rate, Humans, Oximetry, Respiratory Rate
15.
J Clin Monit Comput ; 30(6): 875-888, 2016 Dec.
Article in English | MEDLINE | ID: mdl-26438655

ABSTRACT

Huge hospital information system databases can be mined for knowledge discovery and decision support, but artifact in stored non-invasive vital sign (VS) high-frequency data streams limits their use. We used machine-learning (ML) algorithms trained on expert-labeled VS data streams to automatically classify VS alerts as real or artifact, thereby "cleaning" such data for future modeling. A total of 634 admissions to a step-down unit had recorded continuous noninvasive VS monitoring data [heart rate (HR), respiratory rate (RR), peripheral arterial oxygen saturation (SpO2) at 1/20 Hz, and noninvasive oscillometric blood pressure (BP)]. Periods during which VS data crossed stability thresholds defined VS event epochs. Data were divided into Block 1, used as the ML training/cross-validation set, and Block 2, the test set. Expert clinicians annotated Block 1 events as perceived real or artifact. After feature extraction, ML algorithms were trained to create and validate models automatically classifying events as real or artifact. The models were then tested on Block 2. Block 1 yielded 812 VS events, with 214 (26 %) judged by experts as artifact (RR 43 %, SpO2 40 %, BP 15 %, HR 2 %). ML algorithms applied to the Block 1 training/cross-validation set (tenfold cross-validation) gave area under the curve (AUC) scores of 0.97 RR, 0.91 BP and 0.76 SpO2. Performance when applied to Block 2 test data was AUC 0.94 RR, 0.84 BP and 0.72 SpO2. ML-defined algorithms applied to archived multi-signal continuous VS monitoring data allowed accurate automated classification of VS alerts as real or artifact, and could support data mining for future model building.


Subjects
Clinical Alarms, Data Mining/methods, Heart Rate, Monitoring, Physiologic, Adult, Aged, Algorithms, Area Under Curve, Artifacts, Blood Pressure, Data Interpretation, Statistical, Decision Support Systems, Clinical, Female, Hospital Information Systems, Humans, Machine Learning, Male, Middle Aged, Oscillometry, Risk, Vital Signs
16.
Am J Respir Crit Care Med ; 190(6): 606-10, 2014 Sep 15.
Article in English | MEDLINE | ID: mdl-25068389

ABSTRACT

It is often difficult to accurately predict when, why, and which patients develop shock, because signs of shock often occur late, once organ injury is already present. Three levels of aggregation of information can be used to aid the bedside clinician in this task: analysis of derived parameters of existing measured physiologic variables using simple bedside calculations (functional hemodynamic monitoring); prior physiologic data of similar subjects during periods of stability and disease to define quantitative metrics of level of severity; and libraries of responses across large and comprehensive collections of records of diverse subjects whose diagnoses, therapies, and courses are already known, to predict not only disease severity but also the subsequent behavior of the subject if left untreated or treated with one of the many therapeutic options. The problem is in defining the minimal monitoring data set needed to initially identify those patients across all possible processes, and then specifically monitor their responses to targeted therapies known to improve outcome. To address these issues, multivariable models using machine learning data-driven classification techniques can be used to parsimoniously predict cardiorespiratory insufficiency. We briefly describe how these machine learning approaches are presently applied to address earlier identification of cardiorespiratory insufficiency and direct focused, patient-specific management.


Subjects
Hemodynamics, Information Dissemination/methods, Intensive Care Units/organization & administration, Monitoring, Physiologic/methods, Shock/diagnosis, Humans, Models, Theoretical
17.
ArXiv ; 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-37965077

ABSTRACT

Forecasting healthcare time series is crucial for early detection of adverse outcomes and for patient monitoring. Forecasting, however, can be difficult in practice due to noisy and intermittent data. The challenges are often exacerbated by change points induced via extrinsic factors, such as the administration of medication. To address these challenges, we propose a novel hybrid global-local architecture and a pharmacokinetic encoder that informs deep learning models of patient-specific treatment effects. We showcase the efficacy of our approach in achieving significant accuracy gains for a blood glucose forecasting task using both realistically simulated and real-world data. Our global-local architecture improves over patient-specific models by 9.2-14.6%. Additionally, our pharmacokinetic encoder improves over alternative encoding techniques by 4.4% on simulated data and 2.1% on real-world data. The proposed approach can have multiple beneficial applications in clinical practice, such as issuing early warnings about unexpected treatment responses, or helping to characterize patient-specific treatment effects in terms of drug absorption and elimination characteristics.
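A pharmacokinetic encoder supplies the forecaster with a physiologically shaped representation of each dose; a common minimal choice, shown below, is a one-compartment absorption/elimination curve used as an extra input channel. This functional form and its rate constants are assumptions, not the paper's exact encoder.

```python
# Sketch: encode medication doses as a one-compartment absorption/elimination curve
# that can be fed to a forecaster as an additional input channel. The Bateman-style
# form and the rate constants are illustrative assumptions.
import numpy as np

def pk_channel(dose_times, doses, t_grid, ka=1.0, ke=0.2):
    """Superpose per-dose concentration curves C(t) ~ exp(-ke*dt) - exp(-ka*dt)."""
    channel = np.zeros_like(t_grid, dtype=float)
    for t0, d in zip(dose_times, doses):
        dt = np.clip(t_grid - t0, 0.0, None)
        curve = (np.exp(-ke * dt) - np.exp(-ka * dt)) * (t_grid >= t0)
        channel += d * ka / (ka - ke) * curve
    return channel

# Example: an hourly grid over 24 h with doses at t = 2 h and 8 h
t = np.arange(0, 24, 1.0)
enc = pk_channel(dose_times=[2.0, 8.0], doses=[1.0, 0.5], t_grid=t)
```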

18.
Intensive Care Med Exp ; 12(1): 44, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38782787

ABSTRACT

We tested the ability of a physiologically driven minimally invasive closed-loop algorithm, called Resuscitation based on Functional Hemodynamic Monitoring (ReFit), to stabilize for up to 3 h a porcine model of noncompressible hemorrhage induced by severe liver injury, and to do so during both ground and air transport. Twelve animals were resuscitated using ReFit to drive fluid and vasopressor infusion to a mean arterial pressure (MAP) > 60 mmHg and heart rate < 110 min⁻¹, beginning 30 min after MAP < 40 mmHg following liver injury. ReFit was initially validated in 8 animals in the laboratory, then in 4 animals during air (23 nm and 35 nm) and ground (9 mi) to air (9.5 nm and 83 m) transport returning to the laboratory. The ReFit algorithm kept all animals stable for ~3 h. Thus, the ReFit algorithm can diagnose and treat ongoing hemorrhagic shock independent of the site of care, including during transport. These results have implications for the treatment of critically ill patients in remote, austere, and contested environments and during transport to a higher level of care.
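The abstract describes ReFit only at the level of its targets (MAP > 60 mmHg, HR < 110 min⁻¹); the skeleton below shows the general shape of such a closed-loop controller, with every decision rule and dose step being an illustrative assumption rather than the published algorithm.

```python
# Sketch: skeleton of a closed-loop resuscitation controller that titrates fluid and
# vasopressor toward MAP > 60 mmHg and HR < 110 /min. The targets come from the
# abstract; every rule and rate below is an illustrative assumption, not the
# published ReFit logic, and nothing here is clinical guidance.
from dataclasses import dataclass

@dataclass
class Orders:
    fluid_ml_per_min: float = 0.0
    vasopressor_ug_per_min: float = 0.0

def controller_step(map_mmhg: float, hr_bpm: float, prev: Orders) -> Orders:
    new = Orders(prev.fluid_ml_per_min, prev.vasopressor_ug_per_min)
    if map_mmhg < 60:
        new.fluid_ml_per_min = min(prev.fluid_ml_per_min + 10, 100)       # step up fluids first
        if hr_bpm >= 110:                                                 # possibly volume-unresponsive
            new.vasopressor_ug_per_min = min(prev.vasopressor_ug_per_min + 1, 20)
    else:
        new.fluid_ml_per_min = max(prev.fluid_ml_per_min - 10, 0)         # wean when stable
        new.vasopressor_ug_per_min = max(prev.vasopressor_ug_per_min - 1, 0)
    return new
```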

19.
EBioMedicine ; 93: 104681, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37392596

ABSTRACT

BACKGROUND: Healthcare-associated bacterial pathogens frequently carry plasmids that contribute to antibiotic resistance and virulence. The horizontal transfer of plasmids in healthcare settings has been previously documented, but genomic and epidemiologic methods to study this phenomenon remain underdeveloped. The objectives of this study were to apply whole-genome sequencing to systematically resolve and track plasmids carried by nosocomial pathogens in a single hospital, and to identify epidemiologic links that indicated likely horizontal plasmid transfer. METHODS: We performed an observational study of plasmids circulating among bacterial isolates infecting patients at a large hospital. We first examined plasmids carried by isolates sampled from the same patient over time and isolates that caused clonal outbreaks in the same hospital to develop thresholds with which horizontal plasmid transfer within a tertiary hospital could be inferred. We then applied those sequence similarity thresholds to perform a systematic screen of 3074 genomes of nosocomial bacterial isolates from a single hospital for the presence of 89 plasmids. We also collected and reviewed data from electronic health records for evidence of geotemporal links between patients infected with bacteria encoding plasmids of interest. FINDINGS: Our analyses determined that 95% of analyzed genomes maintained roughly 95% of their plasmid genetic content and accumulated fewer than 15 SNPs per 100 kb of plasmid sequence. Applying these similarity thresholds to identify horizontal plasmid transfer identified 45 plasmids that potentially circulated among clinical isolates. Ten highly preserved plasmids met criteria for geotemporal links associated with horizontal transfer. Several plasmids with shared backbones also encoded different additional mobile genetic element content, and these elements were variably present among the sampled clinical isolate genomes. INTERPRETATION: Evidence suggests that the horizontal transfer of plasmids among nosocomial bacterial pathogens appears to be frequent within hospitals and can be monitored with whole genome sequencing and comparative genomics approaches. These approaches should incorporate both nucleotide identity and reference sequence coverage to study the dynamics of plasmid transfer in the hospital. FUNDING: This research was supported by the US National Institute of Allergy and Infectious Disease (NIAID) and the University of Pittsburgh School of Medicine.
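The similarity thresholds derived above (roughly 95% of plasmid content retained and fewer than 15 SNPs per 100 kb) can be applied as a simple screen once per-pair coverage and SNP counts are available; computing those inputs from read mapping or alignment is assumed to happen upstream.

```python
# Sketch: apply the study-derived similarity screen -- roughly >= 95% of the reference
# plasmid covered and < 15 SNPs per 100 kb -- to comparisons between a reference
# plasmid and isolate genomes. `covered_bp` and `snps` are assumed to come from an
# upstream alignment step.
def plasmid_shared(covered_bp: int, plasmid_len_bp: int, snps: int,
                   min_cov: float = 0.95, max_snps_per_100kb: float = 15.0) -> bool:
    coverage = covered_bp / plasmid_len_bp
    snp_density = snps / (plasmid_len_bp / 100_000)
    return coverage >= min_cov and snp_density < max_snps_per_100kb

# Example: a 120 kb plasmid covered over 117 kb with 9 SNPs passes the screen
print(plasmid_shared(covered_bp=117_000, plasmid_len_bp=120_000, snps=9))
```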


Subjects
Anti-Bacterial Agents, Cross Infection, Humans, Plasmids/genetics, Genomics, Bacteria/genetics, Cross Infection/epidemiology, Genome, Bacterial
20.
PLoS One ; 17(2): e0264198, 2022.
Article in English | MEDLINE | ID: mdl-35202422

ABSTRACT

We consider whether one can forecast the emergence of variants of concern in the SARS-CoV-2 outbreak and similar pandemics. We explore methods of population genetics and identify key relevant principles in both deterministic and stochastic models of the spread of infectious disease. Finally, we demonstrate that fitness variation, defined as a trait for which an increase in its value is associated with an increase in net Darwinian fitness if the values of other traits are held constant, is a strong indicator of imminent transition in the viral population.
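The role of fitness variation can be illustrated with the textbook deterministic recursion for the frequency of a variant with relative fitness 1 + s; the numbers below are illustrative, and the stochastic side of the paper's analysis is omitted.

```python
# Sketch: deterministic frequency dynamics of a variant with relative fitness 1 + s
# against a background of fitness 1 (standard haploid selection recursion). The
# selection coefficient and starting frequency are illustrative only.
def variant_frequency(p0: float, s: float, generations: int):
    p, traj = p0, [p0]
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)   # mean fitness of the population = 1 + p*s
        traj.append(p)
    return traj

# A 1e-4 starting frequency with s = 0.3 per generation crosses 50% in roughly 35 generations
traj = variant_frequency(p0=1e-4, s=0.3, generations=60)
print(next(i for i, p in enumerate(traj) if p > 0.5))
```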


Subjects
COVID-19/epidemiology, Forecasting/methods, SARS-CoV-2/genetics, COVID-19/transmission, Epidemiological Models, Genetic Fitness/genetics, Genetics, Population/methods, Humans, Pandemics, SARS-CoV-2/pathogenicity