Results 1 - 10 of 10
1.
Sci Rep ; 14(1): 14611, 2024 06 25.
Article in English | MEDLINE | ID: mdl-38918593

ABSTRACT

Residents learn the vesico-urethral anastomosis (VUA), a key step in robot-assisted radical prostatectomy (RARP), early in their training. VUA assessment and training significantly impact patient outcomes and have high educational value. This study aimed to develop objective prediction models for the Robotic Anastomosis Competency Evaluation (RACE) metrics using electroencephalogram (EEG) and eye-tracking data. Data were recorded from 23 participants performing robot-assisted VUA (henceforth 'anastomosis') on plastic models and animal tissue using the da Vinci surgical robot. EEG and eye-tracking features were extracted, and participants' anastomosis subtask performance was assessed by three raters using the RACE tool and operative videos. Random forest regression (RFR) and gradient boosting regression (GBR) models were developed to predict RACE scores using extracted features, while linear mixed models (LMM) identified associations between features and RACE scores. Overall performance scores significantly differed among inexperienced, competent, and experienced skill levels (P value < 0.0001). For plastic anastomoses, R2 values for predicting unseen test scores were: needle positioning (0.79), needle entry (0.74), needle driving and tissue trauma (0.80), suture placement (0.75), and tissue approximation (0.70). For tissue anastomoses, the values were 0.62, 0.76, 0.65, 0.68, and 0.62, respectively. The models could enhance RARP anastomosis training by offering objective performance feedback to trainees.
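A minimal sketch of the score-prediction pipeline described above, using random forest and gradient boosting regressors as in the study; the feature matrix, coefficients, and noise here are synthetic stand-ins for the extracted EEG and eye-tracking features, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for extracted EEG band powers and eye-tracking features
n_trials, n_features = 200, 12
X = rng.normal(size=(n_trials, n_features))
# Hypothetical RACE subtask score driven by a few features plus noise
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 7] + rng.normal(scale=0.3, size=n_trials)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rfr = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
gbr = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# R^2 on unseen test trials, as reported per anastomosis subtask in the paper
r2_rfr = r2_score(y_test, rfr.predict(X_test))
r2_gbr = r2_score(y_test, gbr.predict(X_test))
```

In the study, one such model would be fit per RACE subtask (needle positioning, needle entry, and so on), with the held-out R² values quoted in the abstract.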


Subject(s)
Anastomosis, Surgical , Clinical Competence , Electroencephalography , Machine Learning , Robotic Surgical Procedures , Urethra , Humans , Anastomosis, Surgical/methods , Robotic Surgical Procedures/education , Robotic Surgical Procedures/methods , Electroencephalography/methods , Male , Urethra/surgery , Eye-Tracking Technology , Prostatectomy/methods , Urinary Bladder/surgery
2.
Artif Intell Med ; 154: 102900, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38878555

ABSTRACT

With Artificial Intelligence (AI) increasingly permeating various aspects of society, including healthcare, the adoption of the Transformer neural network architecture is rapidly changing many applications. The Transformer is a deep learning architecture initially developed for general-purpose Natural Language Processing (NLP) tasks that has subsequently been adapted in many fields, including healthcare. In this survey paper, we provide an overview of how this architecture has been adopted to analyze various forms of healthcare data, including clinical NLP, medical imaging, structured Electronic Health Records (EHR), social media, bio-physiological signals, and biomolecular sequences. Furthermore, we also include articles that used the transformer architecture for generating surgical instructions and predicting adverse outcomes after surgery, under the umbrella of critical care. Across diverse settings, these models have been used for clinical diagnosis, report generation, data reconstruction, and drug/protein synthesis. Finally, we discuss the benefits and limitations of using transformers in healthcare and examine issues such as computational cost, model interpretability, fairness, alignment with human values, ethical implications, and environmental impact.
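The core operation behind the Transformer architecture surveyed here is scaled dot-product self-attention. A minimal single-head NumPy sketch (dimensions, weights, and the "clinical tokens" framing are illustrative, not from any surveyed model):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # token-to-token similarities
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8  # e.g., 5 clinical tokens, 8-dim embeddings
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```

Full Transformers stack several such heads with feed-forward layers and residual connections; the same mechanism is reused across the NLP, imaging, EHR, and signal applications the survey covers.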


Subject(s)
Deep Learning , Natural Language Processing , Humans , Artificial Intelligence , Delivery of Health Care/organization & administration , Neural Networks, Computer , Electronic Health Records
3.
Sci Rep ; 14(1): 17444, 2024 07 29.
Article in English | MEDLINE | ID: mdl-39075127

ABSTRACT

The clock drawing test (CDT) is a neuropsychological assessment tool used to screen an individual's cognitive ability. In this study, we developed a Fair and Interpretable Representation of Clock drawing test (FaIRClocks) to evaluate and mitigate classification bias against people with less than 8 years of education while screening their cognitive function with an array of neuropsychological measures. We represented clock drawings by a previously published 10-dimensional deep learning feature set trained on publicly available data from the National Health and Aging Trends Study (NHATS). These embeddings were further fine-tuned with clocks from a preoperative cognitive screening program at the University of Florida to predict three cognitive scores: the Mini-Mental State Examination (MMSE) total score, an attention composite z-score (ATT-C), and a memory composite z-score (MEM-C). ATT-C and MEM-C scores were developed by averaging z-scores based on normative references. The cognitive screening classifiers were initially tested for differential performance between patients with low education (<= 8 years) and higher education (> 8 years) and across race. Results indicated that the initial unweighted classifiers confounded lower education with cognitive compromise, resulting in a 100% type I error rate for this group. Therefore, the samples were re-weighted using multiple fairness metrics to balance sensitivity/specificity and positive/negative predictive value (PPV/NPV) across groups. In summary, we report the FaIRClocks model, which shows promise in identifying and mitigating bias against people with less than 8 years of education during preoperative cognitive screening.
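Sample re-weighting of the kind described can be sketched with the Kamiran-Calders reweighing scheme, which upweights or downweights each (group, label) cell so that group membership and outcome become independent in the training data. The paper balances multiple fairness metrics, so this is only one illustrative choice, and the example data are hypothetical:

```python
import numpy as np

def reweighing_weights(group, label):
    """Weight each (group, label) cell by P(group)P(label) / P(group, label),
    making group and label statistically independent under the weights."""
    group, label = np.asarray(group), np.asarray(label)
    w = np.empty(len(group), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            cell = (group == g) & (label == y)
            if cell.any():
                expected = (group == g).mean() * (label == y).mean()
                w[cell] = expected / cell.mean()
    return w

# Hypothetical example: education group 0 (<= 8 years) is over-represented
# among positive (impaired) labels in the training data
group = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
label = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])
w = reweighing_weights(group, label)
```

The resulting weights would then be passed to a classifier via its `sample_weight` argument in `fit`, as supported by most scikit-learn estimators.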


Subject(s)
Educational Status , Racism , Humans , Male , Female , Aged , Neuropsychological Tests , Cognition/physiology , Cognitive Dysfunction/diagnosis , Aged, 80 and over , Mental Status and Dementia Tests , Middle Aged , Deep Learning
4.
Assessment ; : 10731911241236336, 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38494894

ABSTRACT

Graphomotor and time-based variables from the digital Clock Drawing Test (dCDT) characterize cognitive functions. However, no prior publications have quantified the strength of the associations between digital clock variables as they are produced. We hypothesized that the production of clock features and their interrelationships would differ between the command and copy test conditions. Older adults aged 65+ completed digital clock drawings under command and copy conditions. Using a Bayesian hill-climbing algorithm and bootstrapping (10,000 samples), we derived directed acyclic graphs (DAGs) to examine network structure for command and copy dCDT variables. Although the command condition showed stronger associations between variables (µ|βz| = 0.34) than the copy condition (µ|βz| = 0.25), the copy-condition network had more connections (18/18 versus 15/18 for command). Network connectivity across command and copy conditions was most influenced by five of the 18 variables. The direction of dependencies followed the order of instructions more closely in the command-condition network. Digitally acquired clock variables relate to one another but differ in network structure when derived from command versus copy conditions. Continued analyses of clock drawing production should improve understanding of quintessentially normal features and aid early detection of neurodegenerative disease.
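A score-based hill-climbing search over DAGs can be sketched as follows. This is a simplified stand-in for the paper's approach: it uses a linear-Gaussian BIC score, omits the bootstrapping step, and runs on a synthetic three-variable chain rather than real clock features:

```python
import numpy as np

def bic_node(X, child, parents):
    """Linear-Gaussian BIC contribution of one node given its parents."""
    n = X.shape[0]
    A = np.column_stack([X[:, sorted(parents)], np.ones(n)]) if parents else np.ones((n, 1))
    resid = X[:, child] - A @ np.linalg.lstsq(A, X[:, child], rcond=None)[0]
    sigma2 = max(resid @ resid / n, 1e-12)
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1) - 0.5 * A.shape[1] * np.log(n)

def creates_cycle(parents, u, v):
    """Would adding edge u -> v create a cycle (i.e., is u reachable from v)?"""
    stack, seen = [v], set()
    while stack:
        node = stack.pop()
        if node == u:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(k for k, ps in parents.items() if node in ps)
    return False

def hill_climb_dag(X):
    """Greedily add/remove single edges while the BIC score improves."""
    d = X.shape[1]
    parents = {i: set() for i in range(d)}
    score = {i: bic_node(X, i, parents[i]) for i in range(d)}
    while True:
        best = (0.0, None)
        for u in range(d):
            for v in range(d):
                if u == v:
                    continue
                if u in parents[v]:                      # candidate removal
                    delta = bic_node(X, v, parents[v] - {u}) - score[v]
                elif not creates_cycle(parents, u, v):   # candidate addition
                    delta = bic_node(X, v, parents[v] | {u}) - score[v]
                else:
                    continue
                if delta > best[0]:
                    best = (delta, (u, v))
        if best[1] is None:
            return parents
        u, v = best[1]
        parents[v] ^= {u}  # toggle: applies the addition or removal
        score[v] = bic_node(X, v, parents[v])

# Synthetic chain x0 -> x1 -> x2 (stand-ins for sequentially produced features)
rng = np.random.default_rng(0)
x0 = rng.normal(size=500)
x1 = 0.9 * x0 + rng.normal(scale=0.5, size=500)
x2 = 0.9 * x1 + rng.normal(scale=0.5, size=500)
dag = hill_climb_dag(np.column_stack([x0, x1, x2]))
```

Because the BIC score is equivalent across Markov-equivalent DAGs, the recovered edge orientations are not uniquely determined, but the skeleton (the 0-1 and 1-2 adjacencies) is; the paper's bootstrap over 10,000 resamples estimates how stable each edge is.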

5.
Front Neurol ; 15: 1386728, 2024.
Article in English | MEDLINE | ID: mdl-38784909

ABSTRACT

Acuity assessments are vital for timely interventions and fair resource allocation in critical care settings. Conventional acuity scoring systems depend heavily on subjective patient assessments, leaving room for implicit bias and error. These assessments are often manual, time-consuming, intermittent, and challenging to interpret accurately, and the risk of bias and error is likely most pronounced in time-constrained, high-stakes environments such as critical care settings. Furthermore, such scores do not incorporate other information, such as patients' mobility level, which can indicate recovery or deterioration in the intensive care unit (ICU), especially at a granular level. We hypothesized that wearable sensor data could assist in assessing patient acuity at this granularity, especially in conjunction with clinical data from electronic health records (EHR). In this prospective study, we evaluated the impact of integrating mobility data collected from wrist-worn accelerometers with clinical data obtained from the EHR for estimating acuity. Accelerometry data were collected from 87 patients wearing accelerometers on their wrists in an academic hospital setting. The data were evaluated using five deep neural network models: VGG, ResNet, MobileNet, SqueezeNet, and a custom Transformer network. These models outperformed a rule-based clinical score (Sequential Organ Failure Assessment, SOFA) used as a baseline when predicting acuity state (for ground truth, patients were labeled unstable if they required life-supporting therapies and stable otherwise), particularly in precision, sensitivity, and F1 score. The results demonstrate that integrating accelerometer data with demographics and clinical variables improves predictive performance compared with traditional scoring systems.
Deep learning models consistently outperformed the SOFA baseline across scenarios, with notable gains in the area under the receiver operating characteristic curve (AUC), precision, sensitivity, specificity, and F1 score. The most comprehensive scenario, combining accelerometer, demographic, and clinical data, achieved the highest AUC of 0.73, compared with 0.53 for the SOFA baseline, with significant improvements in precision (0.80 vs. 0.23), specificity (0.79 vs. 0.73), and F1 score (0.77 vs. 0.66). This study demonstrates a novel approach that goes beyond a simplistic differentiation between stable and unstable conditions. By incorporating mobility and comprehensive patient information, we distinguish between these states in critically ill patients and capture essential nuances in physiology and functional status. Unlike rudimentary definitions, such as equating low blood pressure with instability, our methodology offers a more holistic understanding and potentially valuable insights for acuity assessment.
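A minimal sketch of fusing wearable and EHR features for acuity classification and comparing against a rule-based severity score. All data are synthetic, and logistic regression stands in for the deep networks the study actually used:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600

# Hypothetical inputs: per-patient accelerometer summaries and clinical variables
accel = rng.normal(size=(n, 4))     # e.g., mean/variance of wrist activity counts
clinical = rng.normal(size=(n, 6))  # e.g., vitals and labs from the EHR
sofa_like = rng.normal(size=n)      # stand-in for a rule-based severity score

# Synthetic acuity label depends on both mobility and clinical signals
logit = 1.2 * accel[:, 0] - 1.0 * clinical[:, 2] + 0.4 * sofa_like
y = (logit + rng.normal(scale=0.8, size=n) > 0).astype(int)

X = np.column_stack([accel, clinical])
X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(X, y, sofa_like, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc_fused = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
auc_score_only = roc_auc_score(y_te, s_te)  # baseline: the raw score alone
```

The point of the comparison mirrors the abstract: a model that sees both mobility and clinical features can outperform a single rule-based score used on its own.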

6.
Res Sq ; 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39149454

ABSTRACT

On average, more than 5 million patients are admitted to intensive care units (ICUs) in the US, with mortality rates ranging from 10 to 29%. The acuity state of patients in the ICU can quickly change from stable to unstable, sometimes leading to life-threatening conditions. Early detection of deteriorating conditions can assist in more timely interventions and improved survival rates. While Artificial Intelligence (AI)-based models show potential for assessing acuity in a more granular and automated manner, they typically use mortality as a proxy of acuity in the ICU. Furthermore, these methods do not determine the acuity state of a patient (i.e., stable or unstable), the transition between acuity states, or the need for life-sustaining therapies. In this study, we propose APRICOT-M (Acuity Prediction in Intensive Care Unit-Mamba), a 1M-parameter state space-based neural network to predict acuity state, transitions, and the need for life-sustaining therapies in real-time among ICU patients. The model integrates ICU data in the preceding four hours (including vital signs, laboratory results, assessment scores, and medications) and patient characteristics (age, sex, race, and comorbidities) to predict the acuity outcomes in the next four hours. Our state space-based model can process sparse and irregularly sampled data without manual imputation, thus reducing the noise in input data and increasing inference speed. The model was trained on data from 107,473 patients (142,062 ICU admissions) from 55 hospitals between 2014-2017 and validated externally on data from 74,901 patients (101,356 ICU admissions) from 143 hospitals. Additionally, it was validated temporally on data from 12,927 patients (15,940 ICU admissions) from one hospital in 2018-2019 and prospectively on data from 215 patients (369 ICU admissions) from one hospital in 2021-2023. 
Three datasets were used for training and evaluation: the University of Florida Health (UFH) dataset, the electronic ICU Collaborative Research Database (eICU), and the Medical Information Mart for Intensive Care (MIMIC)-IV dataset. APRICOT-M significantly outperforms the baseline acuity assessment, Sequential Organ Failure Assessment (SOFA), for mortality prediction in both external (AUROC 0.95, CI 0.94-0.95, compared to 0.78, CI 0.78-0.79) and prospective (AUROC 0.99, CI 0.97-1.00, compared to 0.80, CI 0.65-0.92) cohorts, as well as for instability prediction (external AUROC 0.75, CI 0.74-0.75, compared to 0.51, CI 0.51-0.51; prospective AUROC 0.69, CI 0.64-0.74, compared to 0.53, CI 0.50-0.57). This tool has the potential to help clinicians make timely interventions by predicting transitions between acuity states and supporting decision-making on life-sustaining therapies within the next four hours in the ICU.
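One common way sequence and state-space models consume sparse, irregularly sampled ICU data without imputation is as a single time-sorted stream of (time, variable, value) event triplets. A sketch of that input preparation (the variable vocabulary and measurements are hypothetical):

```python
import numpy as np

VARIABLES = {"heart_rate": 0, "lactate": 1, "map": 2}  # hypothetical vocabulary

def to_event_triplets(records):
    """Flatten irregular per-variable observations into one time-sorted
    (time, variable_id, value) sequence -- no resampling or imputation."""
    events = [
        (t, VARIABLES[name], v)
        for name, series in records.items()
        for t, v in series
    ]
    events.sort(key=lambda e: e[0])
    return np.array(events, dtype=float)

# Four hours of sparse, irregularly timed observations for one patient
records = {
    "heart_rate": [(0.2, 91.0), (1.7, 104.0), (3.9, 118.0)],
    "lactate": [(2.5, 3.1)],
    "map": [(0.9, 72.0), (3.0, 58.0)],
}
seq = to_event_triplets(records)
print(seq.shape)  # one row per observation: (6, 3)
```

Representations like this avoid both the noise introduced by manual imputation and the cost of densifying the grid, consistent with the inference-speed claim in the abstract.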

7.
Sci Rep ; 14(1): 8442, 2024 04 10.
Article in English | MEDLINE | ID: mdl-38600110

ABSTRACT

Clustering analysis of early vital signs may reveal unique patient phenotypes with distinct pathophysiological signatures and clinical outcomes, supporting early clinical decision-making. Phenotyping using early vital signs has proven challenging, as vital signs are typically sampled sporadically. We proposed a novel deep temporal interpolation and clustering network to simultaneously extract latent representations from irregularly sampled vital signs and derive phenotypes. Four distinct clusters were identified. Phenotype A (18%) had the greatest prevalence of comorbid disease, with increased prevalence of prolonged respiratory insufficiency, acute kidney injury, sepsis, and long-term (3-year) mortality. Phenotypes B (33%) and C (31%) had a diffuse pattern of mild organ dysfunction. Phenotype B's favorable short-term clinical outcomes were tempered by the second-highest rate of long-term mortality. Phenotype C had favorable clinical outcomes. Phenotype D (17%) exhibited early and persistent hypotension, a high incidence of early surgery, and substantial biomarker evidence of inflammation. Despite early and severe illness, phenotype D had the second-lowest long-term mortality. Comparison with Sequential Organ Failure Assessment scores showed that the clustering results were not simply a recapitulation of previous acuity assessments. This tool may impact triage decisions and has significant implications for clinical decision support under time constraints and uncertainty.
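The interpolate-then-cluster idea can be sketched with linear interpolation onto a common time grid followed by k-means. The paper learns interpolation and clustering jointly with a deep network, so this is only a simplified illustration on synthetic heart-rate trajectories:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
grid = np.linspace(0, 6, 13)  # common 6-hour grid, 30-minute steps

def interpolate_vitals(times, values, grid):
    """Linearly interpolate one irregularly sampled vital onto a fixed grid
    (a simple stand-in for the paper's learned deep interpolation)."""
    return np.interp(grid, times, values)

# Two synthetic groups of patients: stable vs. rising heart rate
patients = []
for i in range(40):
    t = np.sort(rng.uniform(0, 6, size=rng.integers(4, 9)))  # sporadic samples
    trend = 0.0 if i < 20 else 6.0  # second group deteriorates over time
    hr = 80 + trend * t + rng.normal(scale=2.0, size=t.size)
    patients.append(interpolate_vitals(t, hr, grid))

X = np.vstack(patients)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

With clearly separated trajectories, k-means recovers the two synthetic "phenotypes"; the study's contribution is handling the irregular sampling inside the model rather than with a fixed interpolation rule.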


Subject(s)
Organ Dysfunction Scores , Sepsis , Humans , Acute Disease , Phenotype , Biomarkers , Cluster Analysis
8.
Ann Surg Open ; 5(2): e429, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38911666

ABSTRACT

Objective: To determine whether certain patients are vulnerable to errant triage decisions immediately after major surgery and whether there are unique sociodemographic phenotypes within overtriaged and undertriaged cohorts. Background: In a fair system, overtriage of low-acuity patients to intensive care units (ICUs) and undertriage of high-acuity patients to general wards would affect all sociodemographic subgroups equally. Methods: This multicenter, longitudinal cohort study of hospital admissions immediately after major surgery compared hospital mortality and value of care (risk-adjusted mortality/total costs) across 4 cohorts: overtriage (N = 660), risk-matched overtriage controls admitted to general wards (N = 3077), undertriage (N = 2335), and risk-matched undertriage controls admitted to ICUs (N = 4774). K-means clustering identified sociodemographic phenotypes within the overtriage and undertriage cohorts. Results: Compared with controls, overtriaged admissions had a predominance of male patients (56.2% vs 43.1%, P < 0.001) and commercial insurance (6.4% vs 2.5%, P < 0.001); undertriaged admissions had a predominance of Black patients (28.4% vs 24.4%, P < 0.001) and greater socioeconomic deprivation. Overtriage was associated with increased total direct costs [$16.2K ($11.4K-$23.5K) vs $14.1K ($9.1K-$20.7K), P < 0.001] and low value of care; undertriage was associated with increased hospital mortality (1.5% vs 0.7%, P = 0.002), increased hospice care (2.2% vs 0.6%, P < 0.001), and low value of care. Unique sociodemographic phenotypes within both the overtriage and undertriage cohorts had similar outcomes and value of care, suggesting that triage decisions, rather than patient characteristics, drive outcomes and value of care. Conclusions: Postoperative triage decisions should ensure equality across sociodemographic groups by anchoring them to objective patient acuity assessments, circumventing cognitive shortcuts and mitigating bias.
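Risk adjustment of mortality, the numerator of the value-of-care metric above, is often expressed as an observed-to-expected (O/E) ratio against a risk model's predictions. A minimal sketch with hypothetical cohorts (the matched-risk values and death indicators are invented for illustration):

```python
import numpy as np

def observed_to_expected(deaths, predicted_risk):
    """O/E mortality ratio: observed deaths over the sum of model-predicted
    risks; values > 1 indicate worse-than-expected outcomes."""
    return np.sum(deaths) / np.sum(predicted_risk)

# Hypothetical cohorts: undertriaged patients vs. risk-matched ICU controls
undertriage_deaths = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])
undertriage_risk = np.full(10, 0.10)  # matched on predicted risk
control_deaths = np.array([0, 0, 1, 0, 0, 0, 0, 0, 0, 0])
control_risk = np.full(10, 0.10)

oe_under = observed_to_expected(undertriage_deaths, undertriage_risk)
oe_ctrl = observed_to_expected(control_deaths, control_risk)
print(round(oe_under, 2), round(oe_ctrl, 2))  # 3.0 1.0
```

Dividing the risk-adjusted mortality by total costs per cohort then gives a value-of-care comparison of the kind the study reports.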

9.
Article in English | MEDLINE | ID: mdl-38585187

ABSTRACT

Delirium is a syndrome of acute brain failure that is prevalent among older adults in the Intensive Care Unit (ICU). Delirium can significantly worsen prognosis and increase mortality, necessitating rapid and continual assessment in the ICU. Currently, the common approach to delirium assessment is manual and sporadic. Hence, there is a critical need for a robust and automated system for predicting delirium in the ICU. In this work, we develop a machine learning (ML) system for real-time prediction of delirium using Electronic Health Record (EHR) data. Unlike prior approaches, which provide one delirium prediction label per ICU stay, our approach provides predictions every 12 hours. We use the latest 12 hours of ICU data, along with patient demographics and medical history, to predict delirium risk in the next 12-hour window. This enables delirium risk prediction as soon as 12 hours after ICU admission. We train and test four ML classification algorithms on longitudinal EHR data from 16,327 ICU stays of 13,395 patients, covering a total of 56,297 12-hour windows, to predict the dynamic incidence of delirium. The best-performing algorithm was Categorical Boosting, which achieved an area under the receiver operating characteristic curve (AUROC) of 0.87 (95% confidence interval 0.86-0.87). Deployment of this ML system in ICUs could enable early identification of delirium, reducing its deleterious impact on long-term adverse outcomes such as ICU cost, length of stay, and mortality.
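The 12-hour sliding-window setup (latest 12 hours of data used to predict risk in the next 12 hours) can be sketched as follows; the admission time and stay length are hypothetical:

```python
from datetime import datetime, timedelta

def window_pairs(admit, discharge, width_hours=12):
    """Yield (feature_window, prediction_window) pairs: the latest 12 hours
    of ICU data are used to predict delirium risk in the next 12 hours."""
    step = timedelta(hours=width_hours)
    t = admit + step  # first prediction possible 12 h after admission
    while t + step <= discharge:
        yield (t - step, t), (t, t + step)
        t += step

admit = datetime(2024, 1, 1, 8, 0)
discharge = admit + timedelta(hours=50)
pairs = list(window_pairs(admit, discharge))
print(len(pairs))  # 3 complete 12-hour prediction windows in a 50-hour stay
```

Each pair yields one training example: features aggregated over the first interval, a delirium label observed in the second, which is how one stay expands into many 12-hour windows (56,297 in the study).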

10.
IEEE Int Conf Bioinform Biomed Workshops ; 2023: 2207-2212, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38463539

ABSTRACT

Quantifying pain in patients admitted to intensive care units (ICUs) is challenging due to the increased prevalence of communication barriers in this patient population. Previous research has posited a positive correlation between pain and physical activity in critically ill patients. In this study, we advance this hypothesis by building machine learning classifiers to examine the ability of accelerometer data collected from wearable devices to predict self-reported pain levels experienced by patients in the ICU. We trained multiple Machine Learning (ML) models, including Logistic Regression, CatBoost, and XGBoost, on statistical features extracted from the accelerometer data combined with previous pain measurements and patient demographics. Following previous studies that showed a change in pain sensitivity in ICU patients at night, we performed pain classification separately for daytime and nighttime pain reports. In the pain versus no-pain setting, logistic regression gave the best classifier during the daytime (AUC: 0.72, F1-score: 0.72) and CatBoost at nighttime (AUC: 0.82, F1-score: 0.82). Performance of logistic regression dropped to 0.61 AUC and 0.62 F1-score (mild vs. moderate pain, nighttime), and CatBoost's performance was similarly affected, with 0.61 AUC and 0.60 F1-score (moderate vs. severe pain, daytime). The inclusion of analgesic information benefited the classification between moderate and severe pain. SHAP analysis was conducted to find the most significant features in each setting; it assigned the highest importance to accelerometer-related features in all evaluated settings but also showed the contribution of other features, such as age and medications, in specific contexts.
In conclusion, accelerometer data combined with patient demographics and previous pain measurements can be used to distinguish painful from painless episodes in the ICU, and can be combined with analgesic information to provide moderate classification between painful episodes of different severities.
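A minimal sketch of the feature-extraction-plus-classifier approach: statistical summaries of accelerometer windows, concatenated with extra covariates, fed to a simple classifier. All signals and covariates here are synthetic, and the specific feature set is an illustrative assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def accel_features(signal):
    """Simple statistical summary of one accelerometer magnitude window."""
    return np.array([signal.mean(), signal.std(), signal.min(),
                     signal.max(), np.abs(np.diff(signal)).mean()])

rng = np.random.default_rng(0)

# Synthetic windows: 'pain' episodes here have higher, more variable activity
no_pain = [rng.normal(1.0, 0.2, 300) for _ in range(60)]
pain = [rng.normal(1.6, 0.5, 300) for _ in range(60)]
X = np.vstack([accel_features(s) for s in no_pain + pain])

# Append hypothetical demographic / prior-pain covariates, as in the study
extra = rng.normal(size=(120, 3))
X = np.column_stack([X, extra])
y = np.array([0] * 60 + [1] * 60)

clf = LogisticRegression(max_iter=1000).fit(X, y)
acc = clf.score(X, y)
```

In the study, such feature matrices were built separately for daytime and nighttime windows and fed to logistic regression, CatBoost, and XGBoost, with SHAP used afterward to rank feature importance.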
