Results 1 - 3 of 3
1.
J Healthc Inform Res ; 7(2): 225-253, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37377633

ABSTRACT

One of the hindrances to the widespread acceptance of deep learning-based decision support systems in healthcare is bias. Bias in its many forms occurs in the datasets used to train and test deep learning models and is amplified when those models are deployed in the real world, leading to challenges such as model drift. Recent advancements in deep learning have led to the deployment of automated healthcare diagnosis decision support systems at hospitals as well as in telemedicine through IoT devices. Research has focused primarily on the development and improvement of these systems, leaving a gap in the analysis of their fairness. The domain of FAccT ML (fairness, accountability, and transparency) covers the analysis of such deployable machine learning systems. In this work, we present a framework for bias analysis in healthcare time series (BAHT) signals such as electrocardiogram (ECG) and electroencephalogram (EEG). BAHT provides a graphical interpretive analysis of bias in the training and testing datasets in terms of protected variables, as well as an analysis of bias amplification by the trained supervised learning model for time series healthcare decision support systems. We thoroughly investigate three prominent time series ECG and EEG healthcare datasets used for model training and research. We show that the extensive presence of bias in these datasets leads to potentially biased or unfair machine learning models. Our experiments also demonstrate the amplification of identified bias, with an observed maximum of 66.66%. We investigate the effect of model drift due to unanalyzed bias in datasets and algorithms. Bias mitigation, though prudent, is a nascent area of research. We present experiments on, and analyze, the most prevalently accepted bias mitigation strategies: under-sampling, oversampling, and the use of synthetic data for balancing the dataset through augmentation. It is important that healthcare models, datasets, and bias mitigation strategies be properly analyzed for a fair, unbiased delivery of service.
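The two resampling strategies the abstract names, under-sampling and oversampling, can be sketched generically as follows. This is a minimal illustration of balancing a dataset across a protected variable, not a reproduction of the BAHT pipeline; the group labels and counts are hypothetical.

```python
import random

def oversample(samples_by_group):
    """Duplicate samples from smaller groups until every group
    matches the size of the largest group."""
    target = max(len(s) for s in samples_by_group.values())
    balanced = {}
    for group, samples in samples_by_group.items():
        extra = random.choices(samples, k=target - len(samples))
        balanced[group] = samples + extra
    return balanced

def undersample(samples_by_group):
    """Randomly drop samples from larger groups down to the
    size of the smallest group."""
    target = min(len(s) for s in samples_by_group.values())
    return {g: random.sample(s, target) for g, s in samples_by_group.items()}

# Hypothetical imbalance in a protected variable (e.g. sex) of a training set
data = {"male": list(range(90)), "female": list(range(10))}
over = oversample(data)    # every group grows to 90
under = undersample(data)  # every group shrinks to 10
```

Oversampling risks overfitting to duplicated minority samples, which is why the abstract's third strategy, synthetic-data augmentation, is often preferred over plain duplication.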

2.
Article in English | MEDLINE | ID: mdl-38226393

ABSTRACT

Skeletonization is a popular shape analysis technique that models both the interior and exterior of an object. Existing template-based calculations of skeletal models from anatomical structures are a time-consuming manual process. Recently, learning-based methods have been used to extract skeletons from 3D shapes. In this work, we propose novel additional geometric terms for calculating skeletal structures of objects. The results are similar to traditional fitted s-reps but are produced much more quickly. Evaluation on real clinical data shows that the learned model predicts accurate skeletal representations and demonstrates the impact of the proposed geometric losses along with using s-reps as weak supervision.

3.
Phys Med Rehabil Clin N Am ; 32(2): 437-449, 2021 05.
Article in English | MEDLINE | ID: mdl-33814068

ABSTRACT

This article discusses the use of physical and biometric sensors in telerehabilitation. It also discusses synchronous tele-physical assessment using haptics and augmented reality, and asynchronous physical assessment using remote pose estimation. The article additionally focuses on computational models that have the potential to monitor and evaluate changes in kinematic and kinetic properties during telerehabilitation using biometric sensors such as electromyography and other wearable and noncontact sensors based on force and speed. Finally, the article discusses how virtual reality environments can be incorporated into telerehabilitation.
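A core computation behind the remote pose estimation the abstract mentions is deriving joint angles from estimated body keypoints. The sketch below is a generic, hypothetical example (the article's own methods are not reproduced): given three 2D keypoints, it computes the angle at the middle joint, as one might for an elbow during a remote range-of-motion assessment.

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at point b, formed by the segments b->a and b->c
    (e.g. a=shoulder, b=elbow, c=wrist from a pose-estimation model)."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical keypoints: shoulder directly above elbow, wrist to the side
angle = joint_angle((0, 1), (0, 0), (1, 0))  # 90.0 degrees
```

Tracking such angles over repeated sessions is one way an asynchronous system could quantify changes in a patient's kinematic properties without a clinician present.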


Subject(s)
Exercise Therapy/methods , Health Services Accessibility , Physiological Monitoring/methods , Physical Examination/methods , Telemedicine/methods , Humans