Results 1 - 20 of 43
1.
Radiology ; 308(1): e222937, 2023 07.
Article in English | MEDLINE | ID: mdl-37489991

ABSTRACT

Background An artificial intelligence (AI) algorithm has been developed for fully automated body composition assessment of noncontrast low-dose chest CT (LDCT) scans from lung cancer screening, but the utility of these measurements in disease risk prediction models has not been assessed. Purpose To evaluate the added value of CT-based AI-derived body composition measurements in risk prediction of lung cancer incidence, lung cancer death, cardiovascular disease (CVD) death, and all-cause mortality in the National Lung Screening Trial (NLST). Materials and Methods In this secondary analysis of the NLST, body composition measurements, including area and attenuation attributes of skeletal muscle and subcutaneous adipose tissue, were derived from baseline LDCT examinations by using a previously developed AI algorithm. The added value of these measurements was assessed with sex- and cause-specific Cox proportional hazards models with and without the AI-derived body composition measurements for predicting lung cancer incidence, lung cancer death, CVD death, and all-cause mortality. Models were adjusted for confounding variables including age; body mass index; quantitative emphysema; coronary artery calcification; history of diabetes, heart disease, hypertension, and stroke; and other PLCOM2012 lung cancer risk factors. Goodness-of-fit improvements were assessed with the likelihood ratio test. Results Among 20 768 included participants (median age, 61 years [IQR, 57-65 years]; 12 317 men), 865 were diagnosed with lung cancer and 4180 died during follow-up. Including the AI-derived body composition measurements improved risk prediction for lung cancer death (male participants: χ2 = 23.09, P < .001; female participants: χ2 = 15.04, P = .002), CVD death (males: χ2 = 69.94, P < .001; females: χ2 = 16.60, P < .001), and all-cause mortality (males: χ2 = 248.13, P < .001; females: χ2 = 94.54, P < .001), but not for lung cancer incidence (male participants: χ2 = 2.53, P = .11; female participants: χ2 = 1.73, P = .19). Conclusion The body composition measurements automatically derived from baseline low-dose CT examinations added predictive value for lung cancer death, CVD death, and all-cause death, but not for lung cancer incidence in the NLST. Clinical trial registration no. NCT00047385. © RSNA, 2023. Supplemental material is available for this article. See also the editorial by Fintelmann in this issue.


Subject(s)
Cardiovascular Diseases, Lung Neoplasms, Female, Male, Humans, Middle Aged, Early Detection of Cancer, Artificial Intelligence, Body Composition, Lung
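
The added-value analysis above reduces to likelihood ratio tests between nested Cox proportional hazards models. A minimal sketch of that comparison using the lifelines library follows; the column names (time, event, age, bmi, muscle_area, ...) are hypothetical stand-ins for the NLST variables, not the study's actual data dictionary.

```python
# Sketch: likelihood ratio test for the added value of body composition
# covariates in a Cox model (hypothetical column names).
import pandas as pd
from lifelines import CoxPHFitter
from scipy.stats import chi2

def lr_test(df: pd.DataFrame, base_cols, extra_cols):
    """2 * (logL_full - logL_base) ~ chi-squared, df = number of extra covariates."""
    base = CoxPHFitter().fit(df[base_cols + ["time", "event"]],
                             duration_col="time", event_col="event")
    full = CoxPHFitter().fit(df[base_cols + extra_cols + ["time", "event"]],
                             duration_col="time", event_col="event")
    stat = 2 * (full.log_likelihood_ - base.log_likelihood_)
    return stat, chi2.sf(stat, df=len(extra_cols))

# Hypothetical usage: compare models with vs. without AI-derived measurements.
# stat, p = lr_test(cohort, ["age", "bmi", "cac_score"],
#                   ["muscle_area", "muscle_hu", "sat_area", "sat_hu"])
```
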
2.
J Biomed Inform ; 112: 103611, 2020 12.
Article in English | MEDLINE | ID: mdl-33157313

ABSTRACT

Model calibration, critical to the success and safety of clinical prediction models, deteriorates over time in response to the dynamic nature of clinical environments. To support informed, data-driven model updating strategies, we present and evaluate a calibration drift detection system. Methods are developed for maintaining dynamic calibration curves with optimized online stochastic gradient descent and for detecting increasing miscalibration with adaptive sliding windows. These methods are generalizable to support diverse prediction models developed using a variety of learning algorithms and customizable to address the unique needs of clinical use cases. In both simulation and case studies, our system accurately detected calibration drift. When drift is detected, our system further provides actionable alerts by including information on a window of recent data that may be appropriate for model updating. Simulations showed these windows were primarily composed of data accruing after drift onset, supporting the potential utility of the windows for model updating. By promoting model updating as calibration deteriorates rather than on pre-determined schedules, implementations of our drift detection system may minimize interim periods of insufficient model accuracy and focus analytic resources on those models most in need of attention.


Subject(s)
Algorithms, Models, Statistical, Calibration, Prognosis
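
To make the mechanism concrete, here is a toy sketch of the two ingredients named in the abstract: a logistic recalibration curve maintained by online stochastic gradient descent, and a sliding window that raises an alarm when calibration-in-the-large degrades. It illustrates the idea under assumed settings; the authors' system uses adaptive windows and is not reproduced here.

```python
# Toy sketch: online recalibration by SGD plus a sliding-window drift alarm.
# Learning rate, window size, and threshold are illustrative values only.
from collections import deque
import numpy as np

class CalibrationDriftDetector:
    def __init__(self, lr=0.01, window=500, threshold=0.05):
        self.a, self.b = 1.0, 0.0            # recalibrated p = sigmoid(a*logit + b)
        self.lr, self.threshold = lr, threshold
        self.window = deque(maxlen=window)   # recent (outcome, probability) pairs

    def update(self, logit, y):
        p = 1.0 / (1.0 + np.exp(-(self.a * logit + self.b)))
        grad = p - y                         # gradient of log loss w.r.t. linear term
        self.a -= self.lr * grad * logit     # one online SGD step
        self.b -= self.lr * grad
        self.window.append((y, p))
        if len(self.window) < self.window.maxlen:
            return False
        ys, ps = np.array(self.window).T
        # Alarm when observed and predicted event rates diverge in the window.
        return abs(ys.mean() - ps.mean()) > self.threshold
```
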
3.
J Med Syst ; 42(7): 123, 2018 May 30.
Article in English | MEDLINE | ID: mdl-29846806

ABSTRACT

The widely used American Society of Anesthesiologists Physical Status (ASA PS) classification is subjective, requires manual clinician review to score, and has limited granularity. Our objective was to develop a system that automatically generates an ASA PS with finer granularity by creating a continuous ASA PS score. Supervised machine learning methods were used to create a model that predicts a patient's ASA PS on a continuous scale using the patient's home medications and comorbidities. Three different types of predictive models were trained: regression models, ordinal models, and classification models. The performance of each model and its agreement with anesthesiologists were compared by calculating the mean squared error (MSE), rounded MSE, and Cohen's Kappa on a holdout set. To assess model performance on continuous ASA PS, model rankings were compared to those of two anesthesiologists on a subset of ASA PS 3 case pairs. The random forest regression model achieved the best MSE and rounded MSE. A model consisting of three random forest classifiers (split model) achieved the best Cohen's Kappa. The model's agreement with our anesthesiologists on the ASA PS 3 case pairs yielded fair to moderate Kappa values. The results suggest that the random forest split classification model can predict ASA PS with agreement similar to that reported among anesthesiologists in the literature and can produce a continuous score for which agreement in judging granularity is fair to moderate.


Subject(s)
Anesthesiology, Patient Acuity, Supervised Machine Learning, Automation, Comorbidity, Humans, Models, Theoretical, Retrospective Studies
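
As a concrete illustration of the evaluation described above, the sketch below fits a random forest regressor to synthetic stand-ins for medication/comorbidity features and scores MSE, rounded MSE, and Cohen's Kappa with scikit-learn; all data and sizes are fabricated for the example.

```python
# Sketch of the evaluation loop: continuous ASA PS from a random forest,
# scored by MSE, rounded MSE, and Cohen's Kappa (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((2000, 30))                    # stand-in medication/comorbidity features
y = rng.integers(1, 5, 2000)                  # stand-in clinician ASA PS labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)                       # continuous ASA PS score

mse = mean_squared_error(y_te, pred)
rounded = np.clip(np.round(pred), 1, 6)       # snap back to the ordinal 1-6 scale
rounded_mse = mean_squared_error(y_te, rounded)
kappa = cohen_kappa_score(y_te, rounded.astype(int))
```
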
4.
J Biomed Inform ; 58: 11-18, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26385377

ABSTRACT

OBJECTIVES: Named entity recognition (NER), a sequential labeling task, is one of the fundamental tasks for building clinical natural language processing (NLP) systems. Machine learning (ML) based approaches can achieve good performance, but they often require large amounts of annotated samples, which are expensive to build due to the requirement of domain experts in annotation. Active learning (AL), a sample selection approach integrated with supervised ML, aims to minimize the annotation cost while maximizing the performance of ML-based models. In this study, our goal was to develop and evaluate both existing and new AL methods for a clinical NER task to identify concepts of medical problems, treatments, and lab tests from clinical notes. METHODS: Using the annotated NER corpus from the 2010 i2b2/VA NLP challenge, which contained 349 clinical documents with 20,423 unique sentences, we simulated AL experiments using a number of existing and novel algorithms in three categories: uncertainty-based, diversity-based, and baseline sampling strategies. They were compared with passive learning, which uses random sampling. Learning curves that plot the performance of the NER model against the estimated annotation cost (based on the number of sentences or words in the training set) were generated to evaluate the active learning and passive learning methods, and the area under the learning curve (ALC) score was computed. RESULTS: Based on the learning curves of F-measure vs. number of sentences, uncertainty sampling algorithms outperformed all other methods in ALC. Most diversity-based methods also performed better than random sampling in ALC. To achieve an F-measure of 0.80, the best uncertainty-based method could save 66% of the annotation effort, measured in sentences, compared with random sampling. For the learning curves of F-measure vs. number of words, uncertainty sampling methods again outperformed all other methods in ALC. To achieve an F-measure of 0.80, in comparison to random sampling, the best uncertainty-based method saved 42% of the annotation effort measured in words, whereas the best diversity-based method reduced annotation effort by only 7%. CONCLUSION: In the simulated setting, AL methods, particularly approaches based on uncertainty sampling, appeared to substantially reduce annotation cost for the clinical NER task. The actual benefit of active learning in clinical NER should be further evaluated in a real-time setting.


Subject(s)
Learning, Machine Learning, Humans, Natural Language Processing
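
For intuition, the sketch below implements least-confidence uncertainty sampling, the family that performed best above, in a deliberately simplified form: a generic classifier on synthetic vectors rather than a sequence labeler on clinical text.

```python
# Sketch of least-confidence uncertainty sampling (simplified to a plain
# classifier; the paper's task is sequence labeling on clinical notes).
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling_round(model, X_pool, batch_size=50):
    """Pick the pool items whose top predicted class is least confident."""
    confidence = model.predict_proba(X_pool).max(axis=1)
    return np.argsort(confidence)[:batch_size]

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))
y = (X[:, 0] + rng.normal(size=5000) > 0).astype(int)

labeled = list(range(100))                    # seed set of "annotated" items
model = LogisticRegression().fit(X[labeled], y[labeled])
for _ in range(5):                            # AL loop: select, "annotate", retrain
    pool = np.setdiff1d(np.arange(len(X)), labeled)
    picked = pool[uncertainty_sampling_round(model, X[pool])]
    labeled.extend(picked.tolist())
    model = LogisticRegression().fit(X[labeled], y[labeled])
```
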
5.
Comput Biol Med ; 171: 108122, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38417381

ABSTRACT

Treatments ideally mitigate pathogenesis, or the detrimental effects of the root causes of disease. However, existing definitions of treatment effect fail to account for pathogenic mechanism. We therefore introduce the Treated Root causal Effects (TRE) metric which measures the ability of a treatment to modify root causal effects. We leverage TREs to automatically identify treatment targets and cluster patients who respond similarly to treatment. The proposed algorithm learns a partially linear causal model to extract the root causal effects of each variable and then estimates TREs for target discovery and downstream subtyping. We maintain interpretability even without assuming an invertible structural equation model. Experiments across a range of datasets corroborate the generality of the proposed approach.


Subject(s)
Algorithms, Models, Theoretical, Humans
6.
NPJ Digit Med ; 7(1): 53, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38429353

ABSTRACT

The rising popularity of artificial intelligence in healthcare is highlighting the problem that a computational model achieving super-human clinical performance at its training sites may perform substantially worse at new sites. In this perspective, we argue that we should typically expect this failure to transport, and we present common sources for it, divided into those under the control of the experimenter and those inherent to the clinical data-generating process. Among the inherent sources, we look more closely at site-specific clinical practices that can affect the data distribution, and we propose a potential solution intended to isolate the imprint of those practices on the data from the patterns of disease cause and effect that are the usual target of probabilistic clinical models.

7.
Cancer Biomark ; 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38517780

ABSTRACT

BACKGROUND: Large community cohorts are useful for lung cancer research, allowing for the analysis of risk factors and development of predictive models. OBJECTIVE: A robust methodology for (1) identifying lung cancer and pulmonary nodule diagnoses and (2) associating multimodal longitudinal data with these events from electronic health records (EHRs) is needed to optimally curate cohorts at scale. METHODS: In this study, we leveraged (1) SNOMED concepts to develop ICD-based decision rules for building a cohort that captured lung cancer and pulmonary nodules and (2) clinical knowledge to define time windows for collecting longitudinal imaging and clinical concepts. We curated three cohorts with clinical data and repeated imaging for subjects with pulmonary nodules at Vanderbilt University Medical Center. RESULTS: Our approach achieved an estimated sensitivity of 0.930 (95% CI: [0.879, 0.969]), specificity of 0.996 (95% CI: [0.989, 1.00]), positive predictive value of 0.979 (95% CI: [0.959, 1.000]), and negative predictive value of 0.987 (95% CI: [0.976, 0.994]) for distinguishing subjects with lung cancer from subjects with solitary pulmonary nodules (SPNs). CONCLUSION: This work represents a general strategy for high-throughput curation of multi-modal longitudinal cohorts at risk for lung cancer from routinely collected EHRs.
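
The reported operating characteristics are simple functions of a validation confusion matrix. A sketch of that arithmetic follows, with bootstrap percentile intervals standing in for whatever CI method the study used (the abstract does not say); the demo labels are synthetic.

```python
# Sketch: sensitivity/specificity/PPV/NPV from binary arrays, with
# bootstrap percentile CIs (the study's exact CI method is not stated).
import numpy as np

def operating_characteristics(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

def bootstrap_ci(y_true, y_pred, metric, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        stats.append(operating_characteristics(y_true[idx], y_pred[idx])[metric])
    return np.percentile(stats, [2.5, 97.5])

# Synthetic demo: a rule that agrees with truth 95% of the time.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = np.where(rng.random(1000) < 0.95, y_true, 1 - y_true)
print(operating_characteristics(y_true, y_pred))
print(bootstrap_ci(y_true, y_pred, "sensitivity"))
```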

8.
J Am Med Inform Assoc ; 31(4): 968-974, 2024 04 03.
Article in English | MEDLINE | ID: mdl-38383050

ABSTRACT

OBJECTIVE: To develop and evaluate a data-driven process to generate suggestions for improving alert criteria using explainable artificial intelligence (XAI) approaches. METHODS: We extracted data on alerts generated from January 1, 2019 to December 31, 2020, at Vanderbilt University Medical Center. We developed machine learning models to predict user responses to alerts. We applied XAI techniques to generate global and local explanations. We evaluated the generated suggestions by comparing them with the alerts' historical change logs and through stakeholder interviews. Suggestions that either matched (or partially matched) changes already made to an alert or were considered clinically correct were classified as helpful. RESULTS: The final dataset included 2 991 823 firings with 2689 features. Among the 5 machine learning models, the LightGBM model achieved the highest area under the ROC curve: 0.919 [0.918, 0.920]. We identified 96 helpful suggestions. A total of 278 807 firings (9.3%) could have been eliminated. Some of the suggestions also revealed workflow and education issues. CONCLUSION: We developed a data-driven process to generate suggestions for improving alert criteria using XAI techniques. Our approach could identify improvements regarding clinical decision support (CDS) that might be overlooked or delayed in manual reviews. It also unveils a secondary purpose for XAI: to improve quality by discovering scenarios where CDS alerts are not accepted due to workflow, education, or staffing issues.


Subject(s)
Artificial Intelligence, Clinical Decision Support Systems, Humans, Machine Learning, Academic Medical Centers, Educational Status
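
A minimal sketch of the modeling-plus-explanation pattern described above, pairing a LightGBM classifier with SHAP values. SHAP is assumed here as the XAI technique and the features are synthetic stand-ins; the abstract does not name the exact explanation method used.

```python
# Sketch: predict alert responses with LightGBM, then derive global and
# local explanations. SHAP is one common XAI choice, assumed here.
import numpy as np
import lightgbm as lgb
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 25))                     # stand-in alert features
y = (X[:, 0] - X[:, 3] + rng.normal(size=10_000) > 0).astype(int)

model = lgb.LGBMClassifier(n_estimators=300).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)
# Older shap versions return a per-class list for binary classifiers.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values

# Global explanation: mean |SHAP| per feature suggests which alert criteria
# drive dismissals; individual rows of `vals` explain single firings.
global_importance = np.abs(vals).mean(axis=0)
top_features = np.argsort(global_importance)[::-1][:10]
```
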
9.
Article in English | MEDLINE | ID: mdl-37465096

ABSTRACT

Features learned from single radiologic images are unable to provide information about whether and how much a lesion may be changing over time. Time-dependent features computed from repeated images can capture those changes and help identify malignant lesions by their temporal behavior. However, longitudinal medical imaging presents the unique challenge of sparse, irregular time intervals in data acquisition. While self-attention has been shown to be a versatile and efficient learning mechanism for time series and natural images, its potential for interpreting temporal distance between sparse, irregularly sampled spatial features has not been explored. In this work, we propose two interpretations of a time-distance vision transformer (ViT) by using (1) vector embeddings of continuous time and (2) a temporal emphasis model to scale self-attention weights. The two algorithms are evaluated based on benign versus malignant lung cancer discrimination of synthetic pulmonary nodules and lung screening computed tomography studies from the National Lung Screening Trial (NLST). Experiments evaluating the time-distance ViTs on synthetic nodules show a fundamental improvement in classifying irregularly sampled longitudinal images when compared to standard ViTs. In cross-validation on screening chest CTs from the NLST, our methods (0.785 and 0.786 AUC respectively) significantly outperform a cross-sectional approach (0.734 AUC) and match the discriminative performance of the leading longitudinal medical imaging algorithm (0.779 AUC) on benign versus malignant classification. This work represents the first self-attention-based framework for classifying longitudinal medical images. Our code is available at https://github.com/tom1193/time-distance-transformer.
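
For readers unfamiliar with the idea, the sketch below shows one way to scale self-attention weights by temporal distance (PyTorch). The exponential decay exp(-|ti - tj|/tau) is an illustrative choice of temporal emphasis function, not necessarily the authors'; see their repository for the exact formulation.

```python
# Sketch: self-attention whose weights are rescaled by time distance,
# so scans acquired far apart attend to each other less.
import torch
import torch.nn.functional as F

def time_distance_attention(q, k, v, times, tau=365.0):
    """q, k, v: (batch, seq, dim); times: (batch, seq), e.g. days from baseline."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5               # (batch, seq, seq)
    dist = (times.unsqueeze(-1) - times.unsqueeze(-2)).abs()  # pairwise |t_i - t_j|
    attn = F.softmax(scores, dim=-1) * torch.exp(-dist / tau) # de-emphasize distant scans
    attn = attn / attn.sum(dim=-1, keepdim=True)              # renormalize rows
    return attn @ v

q = k = v = torch.randn(2, 4, 64)                    # 4 scans per subject
times = torch.tensor([[0., 180., 400., 750.]] * 2)   # irregular intervals (days)
out = time_distance_attention(q, k, v, times)        # (2, 4, 64)
```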

10.
Med Image Anal ; 90: 102939, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37725868

ABSTRACT

Transformer-based models, capable of learning better global dependencies, have recently demonstrated exceptional representation learning capabilities in computer vision and medical image analysis. Transformers reformat the image into separate patches and realize global communication via the self-attention mechanism. However, positional information between patches is hard to preserve in such 1D sequences, and its loss can lead to sub-optimal performance when dealing with large amounts of heterogeneous tissues of various sizes in 3D medical image segmentation. Additionally, current methods are not robust and efficient for heavy-duty medical segmentation tasks such as predicting a large number of tissue classes or modeling globally inter-connected tissue structures. To address such challenges, and inspired by the nested hierarchical structures of vision transformers, we propose a novel 3D medical image segmentation method (UNesT), employing a simplified and faster-converging transformer encoder design that achieves local communication among spatially adjacent patch sequences by aggregating them hierarchically. We extensively validate our method on multiple challenging datasets, consisting of multiple modalities, anatomies, and a wide range of tissue classes, including 133 structures in the brain, 14 organs in the abdomen, 4 hierarchical components in the kidneys, and inter-connected kidney tumors and brain tumors. We show that UNesT consistently achieves state-of-the-art performance and evaluate its generalizability and data efficiency. In particular, the model achieves whole-brain segmentation with 133 tissue classes in a single network, outperforming the prior state-of-the-art method SLANT27, an ensemble of 27 networks. Our model increases the mean DSC score on the publicly available Colin and CANDI datasets from 0.7264 to 0.7444 and from 0.6968 to 0.7025, respectively. Code, pre-trained models, and use case pipeline are available at: https://github.com/MASILab/UNesT.

11.
Med Image Comput Comput Assist Interv ; 14221: 649-659, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38779102

ABSTRACT

The accuracy of predictive models for solitary pulmonary nodule (SPN) diagnosis can be greatly increased by incorporating repeat imaging and medical context, such as electronic health records (EHRs). However, clinically routine modalities such as imaging and diagnostic codes can be asynchronous and irregularly sampled over different time scales, which are obstacles to longitudinal multimodal learning. In this work, we propose a transformer-based multimodal strategy to integrate repeat imaging with longitudinal clinical signatures from routinely collected EHRs for SPN classification. We perform unsupervised disentanglement of latent clinical signatures and leverage time-distance scaled self-attention to jointly learn from clinical signature expressions and chest computed tomography (CT) scans. Our classifier is pretrained on 2,668 scans from a public dataset and 1,149 subjects with longitudinal chest CTs, billing codes, medications, and laboratory tests from EHRs of our home institution. Evaluation on 227 subjects with challenging SPNs revealed a significant AUC improvement over a longitudinal multimodal baseline (0.824 vs 0.752 AUC), as well as improvements over a single cross-sectional multimodal scenario (0.809 AUC) and a longitudinal imaging-only scenario (0.741 AUC). This work demonstrates significant advantages with a novel approach for co-learning longitudinal imaging and non-imaging phenotypes with transformers. Code available at https://github.com/MASILab/lmsignatures.

13.
Comput Biol Med ; 150: 106113, 2022 11.
Article in English | MEDLINE | ID: mdl-36198225

ABSTRACT

OBJECTIVE: Patients with indeterminate pulmonary nodules (IPNs) with an intermediate to high probability of lung cancer generally undergo invasive diagnostic procedures. Chest computed tomography (CT) images and clinical data have been used to estimate the pretest probability of lung cancer. In this study, we apply a deep learning network to integrate multi-modal data from CT images and clinical data (including blood-based biomarkers) to improve lung cancer diagnosis. Our goal is to reduce uncertainty and to avoid morbidity, mortality, and over- and undertreatment of patients with IPNs. METHOD: We use a retrospective study design with cross-validation and external validation from four different sites. We introduce a deep learning framework with a two-path structure to learn from CT images and clinical data. The proposed model can learn and predict with a single modality if the multi-modal data are incomplete. We use 1284 patients in the learning cohort for model development. Three external sites (with 155, 136, and 96 patients, respectively) provided patient data for external validation. We compare our model to widely applied clinical prediction models (Mayo and Brock models) and image-only methods (e.g., the Liao et al. model). RESULTS: Our co-learning model improves upon the performance of clinical-factor-only (Mayo and Brock) and image-only (Liao et al.) models in both cross-validation on the learning cohort (e.g., AUC: 0.787 (ours) vs. 0.707-0.719 (baselines), results reported on validation folds) and external validation using three datasets: University of Pittsburgh Medical Center (e.g., 0.918 (ours) vs. 0.828-0.886 (baselines)), Detection of Early Cancer Among Military Personnel (e.g., 0.712 (ours) vs. 0.576-0.709 (baselines)), and University of Colorado Denver (e.g., 0.847 (ours) vs. 0.679-0.746 (baselines)). In addition, our model achieves better re-classification performance (cNRI 0.04 to 0.20) in all cross- and external-validation sets compared to the Mayo model. CONCLUSIONS: Lung cancer risk estimation in patients with IPNs can benefit from the co-learning of CT images and clinical data. Learning from more subjects, even those with only a single modality, can improve prediction accuracy. An integrated deep learning model can achieve reasonable discrimination and re-classification performance.


Subject(s)
Deep Learning, Lung Neoplasms, Multiple Pulmonary Nodules, Humans, Retrospective Studies, Uncertainty, Multiple Pulmonary Nodules/diagnostic imaging, Lung Neoplasms/diagnostic imaging
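
A schematic of the two-path structure described above: separate encoders for imaging and clinical features whose outputs are fused when both are present, so a single available modality still yields a prediction. Layer sizes and the fusion rule are illustrative, not the paper's architecture.

```python
# Sketch of a two-path co-learning design with optional modalities.
import torch
import torch.nn as nn

class TwoPathNet(nn.Module):
    def __init__(self, img_dim=512, clin_dim=40, hidden=128):
        super().__init__()
        self.img_enc = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.clin_enc = nn.Sequential(nn.Linear(clin_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 1)

    def forward(self, img=None, clin=None):
        feats = []
        if img is not None:
            feats.append(self.img_enc(img))
        if clin is not None:
            feats.append(self.clin_enc(clin))
        # Average the available paths so one modality alone still scores.
        return self.head(torch.stack(feats).mean(dim=0))

net = TwoPathNet()
score = net(img=torch.randn(8, 512))   # image-only prediction still works
```
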
14.
J Am Med Inform Assoc ; 28(3): 596-604, 2021 03 01.
Article in English | MEDLINE | ID: mdl-33277896

ABSTRACT

OBJECTIVE: Simulating electronic health record data offers an opportunity to resolve the tension between data sharing and patient privacy. Recent techniques based on generative adversarial networks have shown promise but neglect the temporal aspect of healthcare. We introduce a generative framework for simulating the trajectory of patients' diagnoses, along with measures to evaluate utility and privacy. MATERIALS AND METHODS: The framework simulates date-stamped diagnosis sequences based on a 2-stage process that 1) sequentially extracts temporal patterns from clinical visits and 2) generates synthetic data conditioned on the learned patterns. We designed 3 utility measures to characterize the extent to which the framework maintains feature correlations and temporal patterns in clinical events. We evaluated the framework with billing codes, represented as phenome-wide association study codes (phecodes), from over 500 000 Vanderbilt University Medical Center electronic health records. We further assessed the privacy risks based on membership inference and attribute disclosure attacks. RESULTS: The simulated temporal sequences exhibited characteristics similar to real sequences on the utility measures. Notably, diagnosis prediction models based on real versus synthetic temporal data exhibited an average relative difference in area under the ROC curve of 1.6% with a standard deviation of 3.8% for 1276 phecodes. Additionally, the relative differences in the mean occurrence age and time between visits were 4.9% and 4.2%, respectively. The privacy risks in the synthetic data with respect to membership and attribute inference were negligible. CONCLUSION: This investigation indicates that temporal diagnosis code sequences can be simulated in a manner that provides utility and respects privacy.


Subject(s)
Computer Simulation, Confidentiality, Electronic Health Records, Models, Statistical, Academic Medical Centers, Current Procedural Terminology, Diagnosis, Disease/classification, Hospital Charges/classification, Humans, Information Dissemination, Tennessee, Time Factors
15.
J Biol Rhythms ; 36(6): 595-601, 2021 12.
Article in English | MEDLINE | ID: mdl-34696614

ABSTRACT

False negative tests for SARS-CoV-2 are common and have important public health and medical implications. We tested the hypothesis of diurnal variation in viral shedding by assessing the proportion of positive versus negative SARS-CoV-2 reverse transcription polymerase chain reaction (RT-PCR) tests, and cycle threshold (Ct) values among positive samples, by the time of day. Among 86,342 clinical tests performed among symptomatic and asymptomatic patients in a regional health care network in the southeastern United States from March to August 2020, we found evidence for diurnal variation in the proportion of positive SARS-CoV-2 tests, with a peak around 1400 h and 1.7-fold variation over the day after adjustment for age, sex, race, testing location, month, and day of week, as well as lower Ct values during the day for positive samples. These findings have important implications for public health testing and vaccination strategies.


Subject(s)
COVID-19, SARS-CoV-2, COVID-19 Testing, Circadian Rhythm, Humans, Polymerase Chain Reaction
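
One standard way to model and test diurnal variation of this kind is logistic regression with harmonic (sine/cosine) hour-of-day terms. The sketch below uses synthetic data and omits the study's covariate adjustments; it is a generic illustration, not the paper's analysis code.

```python
# Sketch: harmonic hour-of-day terms in a logistic model of test positivity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"hour": rng.uniform(0, 24, 20_000)})
peak = np.cos(2 * np.pi * (df.hour - 14) / 24)        # simulate a 1400 h peak
df["positive"] = rng.binomial(1, 0.08 * np.exp(0.3 * peak))

df["h_sin"] = np.sin(2 * np.pi * df.hour / 24)
df["h_cos"] = np.cos(2 * np.pi * df.hour / 24)
fit = smf.logit("positive ~ h_sin + h_cos", data=df).fit(disp=0)

# Peak-to-trough fold change in odds implied by the harmonic amplitude.
amp = np.hypot(fit.params["h_sin"], fit.params["h_cos"])
print(np.exp(2 * amp))
```
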
16.
Appl Clin Inform ; 12(1): 164-169, 2021 01.
Article in English | MEDLINE | ID: mdl-33657635

ABSTRACT

BACKGROUND: The data visualization literature asserts that the details of the optimal data display must be tailored to the specific task, the background of the user, and the characteristics of the data. The general organizing principle of a concept-oriented display is known to be useful for many tasks and data types. OBJECTIVES: In this project, we used general principles of data visualization and a co-design process to produce a clinical display tailored to a specific cognitive task, chosen from the anesthesia domain but with clear generalizability to other clinical tasks. To support the work of the anesthesia-in-charge (AIC), our task was, for a given day, to depict the acuity level and complexity of each patient scheduled for surgery the following day. The AIC uses this information to optimally allocate anesthesia staff and providers across operating rooms. METHODS: We used a co-design process to collaborate with participants who work in the AIC role. We conducted two in-depth interviews with AICs and engaged them in subsequent input on iterative design solutions. RESULTS: Through the co-design process, we found (1) the need to carefully match the level of detail in the display to the level required by the clinical task, (2) the impedance caused by irrelevant information on the screen, such as icons relevant only to other tasks, and (3) the desire for a specific but optional trajectory of increasingly detailed textual summaries. CONCLUSION: This study reports a real-world clinical informatics development project that engaged users as co-designers. Our process led to the user-preferred design of a single binary flag to identify the subset of patients needing further investigation, and then a trajectory of increasingly detailed, text-based abstractions for each patient that can be displayed when more information is needed.


Subject(s)
Data Display, Medical Informatics, Delivery of Health Care, Humans, Operating Rooms, Perioperative Care
17.
Sci Rep ; 11(1): 18618, 2021 09 20.
Article in English | MEDLINE | ID: mdl-34545125

ABSTRACT

Heart failure (HF) has no cure and, for HF with preserved ejection fraction (HFpEF), no life-extending treatments. Defining the clinical epidemiology of HF could facilitate earlier identification of high-risk individuals. We define the clinical epidemiology of HF subtypes (HFpEF and HF with reduced ejection fraction [HFrEF]) identified among 2.7 million individuals receiving routine clinical care. Differences in patterns and rates of comorbidity accumulation, frequency of hospitalization, and use of specialty care were defined for each HF subtype. Among 28,156 HF cases, 8322 (30%) were HFpEF and 11,677 (42%) were HFrEF. HFpEF was the more prevalent subtype among older women. A total of 177 phenotypes were differentially associated with HFpEF versus HFrEF. HFrEF was more frequently associated with diagnoses related to ischemic cardiac injury, while HFpEF was associated more with non-cardiac comorbidities and HF symptoms. These comorbidity patterns were frequently present 3 years prior to an HFpEF diagnosis. HF subtypes demonstrated distinct patterns of clinical comorbidities and disease progression. For HFpEF, these comorbidities were often non-cardiac and manifested prior to the onset of an HF diagnosis. Recognizing these comorbidity patterns along the care continuum may present a window of opportunity to identify individuals at risk for developing incident HFpEF.


Subject(s)
Heart Failure/classification, Adult, Aged, Aged, 80 and over, Algorithms, Comorbidity, Disease Progression, Female, Heart Disease Risk Factors, Heart Failure/epidemiology, Heart Failure/physiopathology, Humans, Machine Learning, Male, Middle Aged, Phenotype, Risk Reduction Behavior, Stroke Volume
18.
J Clin Anesth ; 68: 110114, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33142248

ABSTRACT

STUDY OBJECTIVE: A challenge in reducing unwanted care variation is effectively managing the wide variety of performed surgical procedures. While an organization may perform thousands of types of cases, privacy and logistical constraints prevent review of previous cases to learn about prior practices. To bridge this gap, we developed a system for extracting key data from anesthesia records. Our objective was to determine whether usage of the system would improve case planning performance for anesthesia residents. DESIGN: Randomized, cross-over trial. SETTING: Vanderbilt University Medical Center. MEASUREMENTS: We developed a web-based data visualization tool for reviewing de-identified anesthesia records. First-year anesthesia residents were recruited and, after a baseline assessment, performed simulated case planning tasks (e.g., selecting an anesthetic type) across six case scenarios using a randomized, cross-over design. An algorithm scored case planning performance on a 0-4 point scale, based on whether the care components selected by residents occurred frequently among prior anesthetics. Linear mixed effects regression quantified the tool's effect on the average performance score, adjusting for potential confounders. MAIN RESULTS: We analyzed 516 survey questionnaires from 19 residents. The mean performance score was 2.55 ± SD 0.32. Utilization of the tool was associated with an average score improvement of 0.120 points (95% CI 0.060 to 0.179; p < 0.001). Additionally, a 0.055-point improvement due to a "learning effect" was observed from each assessment to the next (95% CI 0.034 to 0.077; p < 0.001). Assessment score was also significantly associated with specific case scenarios (p < 0.001). CONCLUSIONS: This study demonstrated the feasibility of developing a clinical data visualization system that aggregates key anesthetic information, and it found that usage of the tool modestly improved residents' performance in simulated case planning.


Subject(s)
Anesthesia, Internship and Residency, Academic Medical Centers, Anesthesia/adverse effects, Clinical Competence, Cross-Over Studies, Humans
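
The analysis model above is a linear mixed effects regression. A sketch with a random intercept per resident follows, using statsmodels with synthetic data; all variable names and effect sizes are hypothetical stand-ins chosen to mirror the abstract.

```python
# Sketch: linear mixed effects model of performance score with a random
# intercept per resident (synthetic data, hypothetical column names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 516                                       # one row per questionnaire
df = pd.DataFrame({
    "resident": rng.integers(0, 19, n),       # 19 residents
    "tool": rng.integers(0, 2, n),            # tool used for this assessment?
    "assessment": rng.integers(1, 7, n),      # assessment order (learning effect)
    "scenario": rng.integers(0, 6, n),        # six case scenarios
})
df["score"] = (2.5 + 0.12 * df.tool + 0.055 * df.assessment
               + rng.normal(0, 0.3, n))       # synthetic 0-4 performance score

model = smf.mixedlm("score ~ tool + assessment + C(scenario)",
                    data=df, groups=df["resident"])
print(model.fit().summary())                  # 'tool' coefficient ≈ tool effect
```
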
19.
IEEE Trans Knowl Data Eng ; 22(3): 437-446, 2010 Mar 01.
Article in English | MEDLINE | ID: mdl-21373375

ABSTRACT

The goal of data anonymization is to allow the release of scientifically useful data in a form that protects the privacy of its subjects. This requires more than simply removing personal identifiers from the data, because an attacker can still use auxiliary information to infer sensitive individual information. Additional perturbation is necessary to prevent these inferences, and the challenge is to perturb the data in a way that preserves its analytic utility. No existing anonymization algorithm provides both perfect privacy protection and perfect analytic utility. We make the new observation that anonymization algorithms are not required to operate in the original vector-space basis of the data, and many algorithms can be improved by operating in a judiciously chosen alternate basis. A spectral basis derived from the data's eigenvectors is one that can provide substantial improvement. We introduce the term spectral anonymization to refer to an algorithm that uses a spectral basis for anonymization, and we give two illustrative examples. We also propose new measures of privacy protection that are more general and more informative than existing measures, and a principled reference standard with which to define adequate privacy protection.
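
To illustrate the core idea, the sketch below rotates data into its eigenvector (spectral) basis, perturbs it there, and rotates back. The particular perturbation chosen, independently permuting each spectral coordinate, is only one option among many; the paper presents its own two examples, which are not reproduced here.

```python
# Sketch of spectral anonymization: perturb in the eigenvector basis,
# then map back to the original basis.
import numpy as np

def spectral_anonymize(X, seed=0):
    rng = np.random.default_rng(seed)
    mu = X.mean(axis=0)
    Xc = X - mu
    # Spectral basis from the data's covariance eigenvectors (via SVD).
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    S = Xc @ Vt.T                           # coordinates in the spectral basis
    for j in range(S.shape[1]):             # independently permute each
        S[:, j] = rng.permutation(S[:, j])  # spectral coordinate (cell swap)
    return S @ Vt + mu                      # back to the original basis

# Column means and covariance structure are approximately preserved,
# while individual records are decoupled from their original rows.
X = np.random.default_rng(1).normal(size=(200, 5))
X_anon = spectral_anonymize(X)
```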

20.
Appl Clin Inform ; 11(5): 700-709, 2020 10.
Article in English | MEDLINE | ID: mdl-33086396

ABSTRACT

BACKGROUND: Suboptimal information display in electronic health records (EHRs) is a notorious pain point for users. Designing an effective display is difficult, due in part to the complex and varied nature of clinical practice. OBJECTIVE: This article aims to understand the goals, constraints, frustrations, and mental models of inpatient medical providers when accessing EHR data, to better inform the display of clinical information. METHODS: A multidisciplinary ethnographic study of inpatient medical providers. RESULTS: Our participants' primary goal was usually to assemble a clinical picture around a given question, under the constraints of time pressure and incomplete information. To do so, they tend to use a mental model of multiple layers of abstraction when thinking of patients and disease; they prefer immediate pattern recognition strategies for answering clinical questions, with breadth-first or depth-first search strategies used subsequently if needed; and they are sensitive to data relevance, completeness, and reliability when reading a record. CONCLUSION: These results conflict with the ubiquitous display design practice of separating data by type (test results, medications, notes, etc.), a mismatch that is known to encumber efficient mental processing by increasing both navigation burden and memory demands on users. A popular and obvious solution is to select or filter the data to display exactly what is presumed to be relevant to the clinical question, but this solution is both brittle and mistrusted by users. A less brittle approach that is more aligned with our users' mental model could use abstraction to summarize details instead of filtering to hide data. An abstraction-based approach could allow clinicians to more easily assemble a clinical picture, to use immediate pattern recognition strategies, and to adjust the level of displayed detail to their particular needs. It could also help the user notice unanticipated patterns and fluidly shift attention as understanding evolves.


Subject(s)
Electronic Health Records, Inpatients, Humans, Reproducibility of Results, User-Centered Design