Results 1 - 14 of 14
1.
J Med Internet Res; 25: e43633, 2023 Jun 26.
Article in English | MEDLINE | ID: mdl-37358890

ABSTRACT

BACKGROUND: Engagement is key to interventions that achieve successful behavior change and improvements in health. There is limited literature on applying predictive machine learning (ML) models to data from commercially available weight loss programs to predict disengagement; such predictions could help participants achieve their goals. OBJECTIVE: This study aimed to use explainable ML to predict the risk of member disengagement week by week over 12 weeks on a commercially available web-based weight loss program. METHODS: Data were available from 59,686 adults who participated in the weight loss program between October 2014 and September 2019. Data included year of birth, sex, height, weight, motivation to join the program, use statistics (eg, weight entries, entries into the food diary, views of the menu, and program content), program type, and weight loss. Random forest, extreme gradient boosting, and logistic regression with L1 regularization models were developed and validated using a 10-fold cross-validation approach. In addition, temporal validation was performed on a test cohort of 16,947 members who participated in the program between April 2018 and September 2019, with the remaining data used for model development. Shapley values were used to identify globally relevant features and explain individual predictions. RESULTS: The average age of the participants was 49.60 (SD 12.54) years, the average starting BMI was 32.43 (SD 6.19), and 81.46% (39,594/48,604) of the participants were female. The class distributions (active and inactive members) changed from 39,369 and 9235 in week 2 to 31,602 and 17,002 in week 12, respectively. With 10-fold cross-validation, extreme gradient boosting models had the best predictive performance, ranging from 0.85 (95% CI 0.84-0.85) to 0.93 (95% CI 0.93-0.93) for the area under the receiver operating characteristic curve and from 0.57 (95% CI 0.56-0.58) to 0.95 (95% CI 0.95-0.96) for the area under the precision-recall curve across the 12 weeks of the program. They also showed good calibration. With temporal validation, results ranged from 0.51 to 0.95 for the area under the precision-recall curve and from 0.84 to 0.93 for the area under the receiver operating characteristic curve across the 12 weeks. The area under the precision-recall curve improved considerably, by 20%, at week 3 of the program. On the basis of the computed Shapley values, the most important features for predicting disengagement in the following week were those related to total activity on the platform and weight entries in the previous weeks. CONCLUSIONS: This study showed the potential of applying ML predictive algorithms to predict and understand participants' disengagement with a web-based weight loss program. Given the association between engagement and health outcomes, these findings can prove valuable in providing better support to individuals to enhance their engagement and potentially achieve greater weight loss.
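As a hedged illustration of the modelling approach described above, the sketch below trains an extreme gradient boosting classifier with 10-fold cross-validation and computes Shapley values. It is a minimal sketch under stated assumptions: the feature names (weight_entries, food_diary_entries, menu_views, platform_activity) are hypothetical placeholders for the program's usage statistics, the data are synthetic, and the xgboost and shap packages are assumed available.

```python
# Sketch: weekly disengagement model with XGBoost, 10-fold CV, and SHAP.
# Feature names are hypothetical; X and y are synthetic stand-ins.
import numpy as np
import pandas as pd
import xgboost as xgb
import shap
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "weight_entries": rng.poisson(2, 1000),
    "food_diary_entries": rng.poisson(5, 1000),
    "menu_views": rng.poisson(3, 1000),
    "platform_activity": rng.poisson(10, 1000),
})
y = rng.binomial(1, 0.3, 1000)  # 1 = inactive (disengaged) next week

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"10-fold AUROC: {auc.mean():.3f}")

# Shapley values: global feature relevance and per-member explanations.
model.fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)
print(pd.Series(np.abs(shap_values).mean(0), index=X.columns).sort_values())
```

In this pattern, the mean absolute Shapley value per feature gives the global ranking, while each row of shap_values explains one member's predicted disengagement risk.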


Subjects
Internet-Based Intervention, Weight Reduction Programs, Adult, Female, Humans, Male, Middle Aged, Cross-Sectional Studies, Internet, Machine Learning, Weight Loss
2.
Comput Biol Med; 177: 108658, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38833801

ABSTRACT

Bradycardia is a common condition in premature infants, often causing serious consequences and cardiovascular complications. Reliable and accurate detection of bradycardia events is pivotal for timely intervention and effective treatment. Excessive false alarms pose a critical problem in bradycardia event detection, eroding trust in machine learning (ML)-based clinical decision support tools designed for such detection. This could result in the algorithm's accurate recommendations being disregarded and workflows being disrupted, potentially compromising the quality of patient care. This article introduces an ML-based approach incorporating an output correction element, designed to minimise false alarms, and applies it to bradycardia detection in preterm infants. We evaluated five ML-based autoencoder techniques, using a recurrent neural network (RNN), long short-term memory (LSTM), a gated recurrent unit (GRU), a 1D convolutional neural network (1D CNN), and a combination of 1D CNN and LSTM. The analysis is performed on ∼440 hours of real-time preterm infant data. The proposed approach achieved an AUC-ROC of 0.978, an AUC-PRC of 0.73, a recall of 0.992, an F1 score of 0.671, and a false positive rate (FPR) of 0.007, and it reduced false alarms by 36% compared with methods without the correction approach. This study underscores the imperative of cultivating solutions that alleviate alarm fatigue and encourage active engagement among healthcare professionals.
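A minimal PyTorch sketch of one of the listed techniques, an LSTM autoencoder scored by reconstruction error, is shown below. The output-correction element is simplified here to a consecutive-window rule that suppresses isolated error spikes; this rule, the window length, and the threshold are assumptions for illustration, not the paper's exact method.

```python
# Sketch: LSTM autoencoder for bradycardia event detection (PyTorch).
# The output-correction step is simplified to a consecutive-window rule;
# the paper's actual correction element may differ.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):
        _, (h, _) = self.encoder(x)                  # summarise the window
        rep = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoder(rep)
        return self.out(dec)                         # reconstruct the window

def alarms(errors, threshold, consecutive=3):
    """Raise an alarm only after `consecutive` high-error windows,
    suppressing isolated spikes that would otherwise be false alarms."""
    run, flags = 0, []
    for e in errors:
        run = run + 1 if e > threshold else 0
        flags.append(run >= consecutive)
    return flags

model = LSTMAutoencoder()
x = torch.randn(8, 250, 1)           # 8 windows of 250 samples (synthetic)
recon = model(x)
err = ((recon - x) ** 2).mean(dim=(1, 2))
print(alarms(err.tolist(), threshold=1.0))
```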


Subjects
Bradycardia, Machine Learning, Humans, Bradycardia/diagnosis, Bradycardia/physiopathology, Infant, Newborn, Infant, Premature/physiology, Neural Networks, Computer, Male, Female, Electrocardiography/methods, Signal Processing, Computer-Assisted, Algorithms
3.
Interact J Med Res; 13: e46946, 2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39163610

ABSTRACT

BACKGROUND: Computational signal preprocessing is a prerequisite for developing data-driven predictive models for clinical decision support. Identifying best practices that adhere to clinical principles is therefore critical to ensure the transparency and reproducibility needed to drive clinical adoption, and it further fosters reproducible, ethical, and reliable conduct of studies. This procedure is also crucial for setting up a software quality management system to ensure regulatory compliance when developing software as a medical device aimed at early preclinical detection of clinical deterioration. OBJECTIVE: This scoping review focuses on the neonatal intensive care unit setting and summarizes the state-of-the-art computational methods used for preprocessing neonatal clinical physiological signals, which are used to develop machine learning models that predict the risk of adverse outcomes. METHODS: Five databases (PubMed, Web of Science, Scopus, IEEE, and ACM Digital Library) were searched using a combination of keywords and MeSH (Medical Subject Headings) terms. A total of 3585 papers from 2013 to January 2023 were identified based on the defined search terms and inclusion criteria. After removing duplicates, 2994 (83.51%) papers were screened by title and abstract, and 81 (2.71%) were selected for full-text review. Of these, 52 (64%) were eligible for inclusion in the detailed analysis. RESULTS: Of the 52 articles reviewed, 24 (46%) focused on diagnostic models and 28 (54%) on prognostic models. The studies analyzed various physiological signals, with electrocardiograms being the most prevalent. Different programming languages were used, MATLAB and Python being the most common. Diverse systems were used to monitor and capture the physiological data, affecting data quality and introducing study heterogeneity. Outcomes of interest included sepsis, apnea, bradycardia, mortality, necrotizing enterocolitis, and hypoxic-ischemic encephalopathy, with some studies analyzing combinations of adverse outcomes. We found a partial or complete lack of transparency in reporting the setting and the methods used for signal preprocessing, including the methods used to handle missing data, the segment size used for analysis, and the details of how state-of-the-art physiological signal processing methods were modified to align with clinical principles for neonates. Only 7 (13%) of the 52 reviewed studies reported all the recommended preprocessing steps, which can affect the downstream analysis. CONCLUSIONS: The review found heterogeneity in the techniques used and inconsistent reporting of the parameters and procedures used for preprocessing neonatal physiological signals; such reporting is necessary to confirm adherence to clinical and software quality management system practices and to identify best practices. Enhancing transparency in reporting and standardizing procedures will improve the interpretation and reproducibility of studies and expedite clinical adoption, instilling confidence in the research findings and streamlining the translation of research outcomes into clinical practice, ultimately advancing neonatal care and patient outcomes.
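To make the reporting gap concrete, the sketch below shows the kind of preprocessing steps the review asks studies to document: missing-data handling, filtering, and segmentation. All parameter choices (250 Hz sampling, 0.5-40 Hz band, 4th-order filter, 30 s segments) are illustrative assumptions, not values drawn from any reviewed study.

```python
# Sketch: explicitly reported preprocessing steps for a physiological
# signal. Parameter values are illustrative assumptions only.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250            # sampling frequency (Hz), assumed
SEGMENT_S = 30      # segment length in seconds, assumed

def preprocess(ecg: np.ndarray) -> list:
    # 1. Missing data: report how NaNs are handled (here: interpolation).
    idx = np.arange(len(ecg))
    mask = np.isnan(ecg)
    ecg = np.interp(idx, idx[~mask], ecg[~mask])

    # 2. Band-pass filter: report the band and filter order used.
    b, a = butter(4, [0.5, 40], btype="band", fs=FS)
    ecg = filtfilt(b, a, ecg)

    # 3. Segmentation: report the segment size used downstream.
    n = FS * SEGMENT_S
    return [ecg[i:i + n] for i in range(0, len(ecg) - n + 1, n)]

segments = preprocess(np.random.randn(FS * 120))
print(len(segments), "segments of shape", segments[0].shape)
```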

4.
Stud Health Technol Inform; 310: 224-228, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269798

ABSTRACT

Accurate identification of the QRS complex is critical to analysing heart rate variability (HRV), which is linked to various adverse outcomes in premature infants. Reliable and accurate extraction of HRV characteristics at scale in the neonatal context remains a challenge. In this paper, we investigate the capabilities of 15 state-of-the-art QRS complex detection implementations using two real-world preterm neonatal datasets. To improve accuracy and reliability, we introduce a weighted ensemble-based method as an alternative. The results indicate the superiority of the proposed method over the state of the art on both datasets, with F1-scores of 0.966 (95% CI 0.962-0.970) and 0.893 (95% CI 0.892-0.894). This motivates the deployment of ensemble-based methods for any HRV-based analysis to ensure robust and accurate QRS complex detection.
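The sketch below shows one plausible reading of a weighted ensemble over QRS detectors: each detector contributes R-peak indices, and candidate peaks are kept when the weighted vote within a tolerance window clears a threshold. The fusion rule, the tolerance, and the weights are assumptions; the paper's exact combination scheme may differ.

```python
# Sketch: weighted ensemble of QRS detectors. Each detector returns
# R-peak sample indices; peaks are fused by a weighted vote within a
# tolerance window. This is an assumed fusion rule, not the paper's.
import numpy as np

def ensemble_qrs(detections, weights, tolerance=10, threshold=0.5):
    """detections: list of arrays of R-peak indices, one per detector.
    weights: per-detector weights (e.g., validation F1), summing to 1.
    A candidate peak is kept if its weighted vote exceeds `threshold`."""
    candidates = np.unique(np.concatenate(detections))
    fused = []
    for c in candidates:
        vote = sum(w for det, w in zip(detections, weights)
                   if np.any(np.abs(det - c) <= tolerance))
        # keep the peak if supported, and avoid duplicate nearby peaks
        if vote >= threshold and (not fused or c - fused[-1] > tolerance):
            fused.append(c)
    return np.array(fused)

# Three hypothetical detectors with slightly disagreeing annotations;
# the spurious peak at sample 850 (one weak detector) is rejected.
d1 = np.array([100, 350, 600])
d2 = np.array([102, 348, 850])
d3 = np.array([99, 601])
print(ensemble_qrs([d1, d2, d3], weights=[0.4, 0.35, 0.25]))
```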


Subjects
Algorithms, Infant, Premature, Infant, Infant, Newborn, Humans, Heart Rate, Reproducibility of Results, Electrocardiography
5.
Stud Health Technol Inform; 310: 865-869, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269932

ABSTRACT

The lack of transparency and explainability hinders the clinical adoption of machine learning (ML) algorithms. While explainable artificial intelligence (XAI) methods have been proposed, little research has focused on the agreement between these methods and expert clinical knowledge. This study applies current state-of-the-art explainability methods to clinical decision support algorithms developed for electronic medical record (EMR) data, analyses the concordance between the resulting explanations and expert clinical knowledge, and discusses causes of the identified discrepancies from clinical and technical perspectives. Important factors for achieving trustworthy XAI solutions for clinical decision support are also discussed.
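One simple way to quantify such concordance, sketched below, is to compare a model-derived feature importance ranking (for example, mean absolute SHAP values) against a clinician-elicited ranking with a rank correlation. Both sets of scores and the feature names here are hypothetical; the paper's actual agreement analysis may use different measures.

```python
# Sketch: agreement between model explanations and clinical knowledge.
# All importance values and feature names below are hypothetical.
import numpy as np
from scipy.stats import spearmanr

features = ["heart_rate", "resp_rate", "spo2", "temperature", "age"]

# Mean |SHAP| per feature from a fitted model (hypothetical values).
shap_importance = np.array([0.42, 0.31, 0.15, 0.08, 0.04])

# Clinician-elicited importance for the same features (hypothetical).
clinical_importance = np.array([0.35, 0.20, 0.30, 0.10, 0.05])

rho, p = spearmanr(shap_importance, clinical_importance)
print(f"Spearman rank agreement: {rho:.2f} (p={p:.2f})")

# Flag the feature where model and clinicians disagree most, as a
# candidate for the discrepancy analysis the abstract describes.
gap = np.abs(shap_importance - clinical_importance)
print("largest discrepancy:", features[int(np.argmax(gap))])
```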


Subjects
Artificial Intelligence, Electronic Health Records, Algorithms, Knowledge, Machine Learning
6.
Sci Rep; 14(1): 5760, 2024 Mar 8.
Article in English | MEDLINE | ID: mdl-38459073

ABSTRACT

Stroke is a leading cause of death and disability worldwide, so early diagnosis and prompt medical intervention are crucial. Frequent monitoring of stroke patients is also essential to assess treatment efficacy and detect complications earlier. While computed tomography (CT) and magnetic resonance imaging (MRI) are commonly used for stroke diagnosis, they cannot easily be used onsite or for frequent monitoring. To meet those requirements, an electromagnetic imaging (EMI) device, which is portable, non-invasive, and non-ionizing, has been developed. It uses a headset with an antenna array that irradiates the head with a safe low-frequency EM field and captures the scattered fields to map the brain using a complementary set of physics-based and data-driven algorithms, enabling quasi-real-time detection, two-dimensional localization, and classification of strokes. This study reports clinical findings from the first use of the device on stroke patients. The clinical results on 50 patients indicate an overall accuracy of 98% in classification and 80% in two-dimensional quadrant localization. With its lightweight design and potential for use by a single paramedical staff member at the point of care, the device can be used in intensive care units, emergency departments, and by paramedics for onsite diagnosis.


Subjects
Brain, Stroke, Humans, Brain/diagnostic imaging, Electromagnetic Phenomena, Head, Stroke/diagnostic imaging, Tomography, X-Ray Computed/methods, Magnetic Resonance Imaging
7.
Article in English | MEDLINE | ID: mdl-39480722

ABSTRACT

Hepatic steatosis, a key factor in chronic liver diseases, is difficult to diagnose early. This study introduces a classifier for hepatic steatosis based on microwave technology, validated through clinical trials. Our method uses microwave signals and deep learning to improve detection and deliver reliable results. It comprises a pipeline with simulation data, a new deep learning model called HepNet, and transfer learning. The simulation data, created with 3D electromagnetic tools, are used for training and evaluating the model. HepNet uses skip connections in convolutional layers and two fully connected layers for better feature extraction and generalization. Calibration and uncertainty assessments ensure the model's robustness. In simulation, HepNet achieved an F1-score of 0.91 and a confidence level of 0.97 for classifications with entropy ≤0.1, outperforming traditional models like LeNet (0.81) and ResNet (0.87). We also use transfer learning to adapt HepNet to clinical data with limited patient samples. Using 1H-MRS as the reference standard for two microwave liver scanners, HepNet achieved high F1-scores of 0.95 and 0.88 for 94 and 158 patient samples, respectively, showing its clinical potential.
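The abstract evaluates classifications with entropy ≤0.1. The sketch below reproduces only that entropy-based confidence filtering step, not the HepNet architecture itself; the class probabilities are invented for illustration, and the deferral policy for high-entropy cases is an assumption.

```python
# Sketch: entropy-based confidence filtering, as used to select the
# confident predictions reported in the abstract. Probabilities below
# are illustrative, not model output.
import numpy as np

def entropy(p: np.ndarray) -> np.ndarray:
    """Shannon entropy (nats) of each row of class probabilities."""
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

probs = np.array([
    [0.98, 0.02],   # confident steatosis call
    [0.55, 0.45],   # ambiguous case: defer rather than classify
])
H = entropy(probs)
confident = H <= 0.1   # the cut-off quoted in the abstract
for p, h, keep in zip(probs, H, confident):
    print(f"p={p}, entropy={h:.3f}, {'classify' if keep else 'defer'}")
```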

8.
Article in English | MEDLINE | ID: mdl-38082857

ABSTRACT

Premature babies and those born with a medical condition are cared for in the neonatal intensive care unit (NICU) in hospitals. Monitoring physiological signals, and their subsequent analysis and interpretation, can reveal acute and chronic conditions in these neonates. Several advanced algorithms using physiological signals have been built into existing monitoring systems to allow clinicians to analyse signals in real time and anticipate patient deterioration. However, limited work has been done to visualise these signals interactively and adapt such tools to neonatal monitoring systems. To bridge this gap, we describe the development of a user-friendly, interactive dashboard for neonatal vital signs analysis, written in the Python programming language, in which the analysis can be performed without prior computing knowledge. To ensure practicality, the dashboard was designed in consultation with a neonatologist to visualise electrocardiogram, heart rate, respiratory rate, and oxygen saturation data in a time-series format. The resulting dashboard includes interactive visualisations, advanced electrocardiogram analysis, and statistical analysis, which can be used to extract important information on patients' conditions. Clinical Relevance: This work supports the care of preterm infants by allowing clinicians to visualise and interpret physiological data in greater granularity, aiding patient monitoring and the detection of adverse conditions. Detecting adverse conditions could allow timely and potentially life-saving interventions for conditions such as sepsis and brain injury.
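The abstract states only that the dashboard is written in Python. As a hedged sketch, the example below uses Plotly Dash to render two vital signs in a time-series format; the framework choice is an assumption (the paper does not name one), and the data are synthetic.

```python
# Sketch: a minimal vital-signs dashboard in Python. Plotly Dash is an
# assumed stack; the paper does not state which framework was used.
# All data below are synthetic.
import numpy as np
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

t = pd.date_range("2024-01-01", periods=600, freq="s")
vitals = pd.DataFrame({
    "time": t,
    "heart_rate": 150 + np.random.randn(600).cumsum() * 0.5,
    "spo2": np.clip(96 + np.random.randn(600) * 0.8, 85, 100),
})

app = Dash(__name__)
app.layout = html.Div([
    html.H3("Neonatal vital signs (synthetic demo)"),
    dcc.Graph(figure=px.line(vitals, x="time", y="heart_rate",
                             title="Heart rate (bpm)")),
    dcc.Graph(figure=px.line(vitals, x="time", y="spo2",
                             title="Oxygen saturation (%)")),
])

if __name__ == "__main__":
    app.run(debug=True)
```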


Subjects
Infant, Premature, Intensive Care Units, Neonatal, Infant, Infant, Newborn, Humans, Heart Rate, Monitoring, Physiologic, Algorithms
9.
Sci Rep; 12(1): 16592, 2022 Oct 5.
Article in English | MEDLINE | ID: mdl-36198757

ABSTRACT

Preventing unplanned hospitalisations, including readmissions and re-presentations to the emergency department (ED), is an important strategy for addressing the growing demand for hospital care. Significant successes have been reported from interventions put in place by hospitals to reduce their incidence. However, data-driven algorithms are rarely used in hospital services to identify patients for enrolment into these intervention programs. Here we present the results of a study that developed algorithms deployable at scale as part of a state government's initiative to address rehospitalisations, filling several gaps identified in the state-of-the-art literature. To the best of our knowledge, our study involves the largest-ever sample size for developing risk models of this kind. Logistic regression, random forests, and gradient boosted techniques were explored as model candidates and validated retrospectively on five years of data from 27 hospitals in Queensland, Australia. The models used a range of predictor variables sourced from state-wide ED, inpatient, hospital-dispensed medications, and hospital-requested pathology databases. The investigation led to several findings: (i) there is an advantage in considering a longer patient data history; (ii) ED and inpatient datasets alone provide useful information for predicting hospitalisation risk, and adding medications and pathology test results yields only trivial performance improvements; (iii) predicting readmissions to hospital was slightly easier than predicting re-presentations to the ED after an inpatient stay, which in turn was slightly easier than predicting re-presentations to the ED after an ED stay; and (iv) a gradient boosted approach (XGBoost) was consistently the most powerful modelling approach across the various tests.
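A hedged sketch of the model comparison follows: the study's three model families evaluated on a retrospective temporal split, with earlier years for development and the most recent year held out. The feature names, split year, and synthetic data are illustrative assumptions, not the study's variables.

```python
# Sketch: comparing the study's three model families on a retrospective
# temporal split. Feature names, split year, and data are hypothetical.
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "admit_year": rng.integers(2014, 2019, 5000),
    "prior_ed_visits": rng.poisson(1.5, 5000),
    "prior_admissions": rng.poisson(0.8, 5000),
    "length_of_stay": rng.exponential(3, 5000),
})
df["readmitted_28d"] = rng.binomial(1, 0.15, 5000)

train = df[df.admit_year < 2018]    # develop on earlier years,
test = df[df.admit_year >= 2018]    # validate on the most recent year
X_cols = ["prior_ed_visits", "prior_admissions", "length_of_stay"]

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200),
    "xgboost": xgb.XGBClassifier(n_estimators=200, eval_metric="logloss"),
}
for name, m in models.items():
    m.fit(train[X_cols], train["readmitted_28d"])
    auc = roc_auc_score(test["readmitted_28d"],
                        m.predict_proba(test[X_cols])[:, 1])
    print(f"{name}: AUROC={auc:.3f}")
```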


Subjects
Electronic Health Records, Hospitalization, Emergency Service, Hospital, Hospitals, Humans, Retrospective Studies
10.
Sci Rep; 12(1): 11734, 2022 Jul 11.
Article in English | MEDLINE | ID: mdl-35817885

ABSTRACT

The electronic medical record (EMR) provides an opportunity to manage patient care efficiently and accurately, including through clinical decision support tools for the timely identification of adverse events or acute illnesses preceded by deterioration. This paper presents a machine learning-driven tool, developed using real-time EMR data, for identifying patients at high risk of reaching critical conditions that may demand immediate intervention. The tool provides a pre-emptive solution that can help busy clinicians prioritize their efforts while evaluating each patient's risk of deterioration. It also provides a visual explanation of the main factors contributing to its decisions, which can guide the choice of intervention. When applied to a test cohort of 18,648 patient records, the tool achieved 100% sensitivity for prediction windows 2-8 h in advance for patients identified at 95%, 85%, and 70% risk of deterioration.
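The sketch below shows how sensitivity can be evaluated at the fixed risk thresholds quoted in the abstract (95%, 85%, 70%). The risk scores and labels are synthetic and deliberately simple, so the printed numbers illustrate only the computation, not the tool's reported performance.

```python
# Sketch: sensitivity of a deterioration model at fixed risk thresholds,
# mirroring the thresholds in the abstract. Scores/labels are synthetic.
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.binomial(1, 0.05, 20000)              # deterioration events
risk = np.where(y_true == 1,
                rng.uniform(0.7, 1.0, 20000),      # events score high...
                rng.uniform(0.0, 0.9, 20000))      # ...in this toy setup

for threshold in (0.95, 0.85, 0.70):
    flagged = risk >= threshold
    sensitivity = (flagged & (y_true == 1)).sum() / (y_true == 1).sum()
    print(f"risk>={threshold:.2f}: sensitivity={sensitivity:.2f}, "
          f"fraction flagged={flagged.mean():.3f}")
```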


Subjects
Electronic Health Records, Machine Learning, Cohort Studies, Humans
11.
Front Neurol; 12: 765412, 2021.
Article in English | MEDLINE | ID: mdl-34777233

ABSTRACT

Introduction: Electromagnetic imaging is an emerging technology that promises to provide a mobile and rapid neuroimaging modality for pre-hospital and bedside evaluation of stroke patients based on the dielectric properties of tissue. It has become feasible thanks to technological advances in materials, antenna design and manufacture, rapid portable computing power, network analysers, and the development of processing algorithms for image reconstruction. The purpose of this report is to introduce images from a novel, portable electromagnetic scanner being trialled for bedside and mobile imaging of ischaemic and haemorrhagic stroke. Methods: A prospective convenience study enrolled patients (January 2020 to August 2020) with known stroke to undergo brain electromagnetic imaging in addition to usual imaging and medical care. The images are obtained by processing signals from encircling transceiver antennae, which emit and detect low-energy signals in the microwave frequency spectrum between 0.5 and 2.0 GHz. The purpose of the study was to refine the imaging algorithms. Results: Examples of haemorrhagic and ischaemic stroke are presented, with comparisons made to CT, perfusion, and MRI T2 FLAIR images. Conclusion: Given the speed of imaging, the size and mobility of the device, and its negligible environmental risks, the electromagnetic scanner provides a promising additional modality for mobile and bedside neuroimaging.

12.
IEEE/ACM Trans Comput Biol Bioinform; 16(6): 1802-1815, 2019.
Article in English | MEDLINE | ID: mdl-29993889

ABSTRACT

DNA microarray datasets are characterized by a large number of features and very few samples, a typical cause of overfitting and poor generalization in the classification task. Here, we introduce a novel feature selection (FS) approach which employs distance correlation (dCor) as a criterion for evaluating the dependence of the class on a given feature subset. The dCor index provides a reliable dependence measure among random vectors of arbitrary dimension, without any assumption on their distribution. Moreover, it is sensitive to the presence of redundant terms. The proposed FS method is based on a probabilistic representation of the feature subset model, which is progressively refined by a repeated process of model extraction and evaluation. A key element of the approach is a distributed optimization scheme based on a vertical partitioning of the dataset, which alleviates the negative effects of its unbalanced dimensions. The proposed method has been tested on several microarray datasets, yielding quite compact and accurate models at a reasonable computational cost.
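The dCor criterion itself has a closed form. The sketch below implements the standard sample distance correlation statistic with NumPy and uses it to score feature subsets, as the selection criterion above would; the synthetic data and subset choices are illustrative, and the third-party dcor package on PyPI offers an equivalent, more thoroughly tested implementation.

```python
# Sketch: sample distance correlation (dCor) between a feature subset X
# and class labels y, usable as the FS evaluation criterion described
# above. Synthetic data for illustration only.
import numpy as np

def _centered_dist(a: np.ndarray) -> np.ndarray:
    """Double-centred pairwise Euclidean distance matrix."""
    d = np.linalg.norm(a[:, None, :] - a[None, :, :], axis=-1)
    return d - d.mean(0) - d.mean(1)[:, None] + d.mean()

def distance_correlation(x: np.ndarray, y: np.ndarray) -> float:
    x = np.atleast_2d(x.T).T          # ensure shape (n_samples, n_dims)
    y = np.atleast_2d(y.T).T
    A, B = _centered_dist(x), _centered_dist(y)
    dcov2 = max((A * B).mean(), 0.0)  # guard against float round-off
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return float(np.sqrt(dcov2 / denom)) if denom > 0 else 0.0

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(float)  # nonlinear link

# Score two candidate subsets: dCor favours the informative one even
# though the dependence on X[:, 1] is nonlinear.
print(distance_correlation(X[:, :2], y))   # relevant -> higher score
print(distance_correlation(X[:, 3:], y))   # irrelevant -> lower score
```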


Subjects
Biomarkers, Tumor/genetics, Computational Biology/methods, Neoplasms/genetics, Oligonucleotide Array Sequence Analysis/methods, Algorithms, Cell Line, Tumor, Databases, Factual, False Positive Reactions, Humans, Leukemia/genetics, Models, Statistical, Multivariate Analysis
13.
IEEE Trans Cybern; 48(4): 1151-1162, 2018 Apr.
Article in English | MEDLINE | ID: mdl-28371789

ABSTRACT

Here we introduce a novel classification approach adopted from the nonlinear model identification framework, which jointly addresses the feature selection (FS) and classifier design tasks. The classifier is constructed as a polynomial expansion of the original features, and a selection process is applied to find the relevant model terms. The selection method progressively refines a probability distribution defined on the model structure space: sample models are extracted from the current distribution, and the aggregate information obtained from evaluating this population of models is used to reinforce the probability of extracting the most important terms. To reduce the initial search space, distance correlation filtering can optionally be applied as a preprocessing step. The proposed method is compared with other well-known FS and classification methods on standard benchmark problems. Besides the favorable classification accuracy, the obtained models have a simple structure, easily amenable to interpretation and analysis.
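To convey the flavour of "polynomial expansion plus term selection", the sketch below expands features to degree-2 terms and keeps the relevant ones with an L1 penalty. This substitution is named plainly: the paper's probabilistic structure-selection scheme is different and more elaborate, so this is an analogue under stated assumptions, not the actual method.

```python
# Sketch: polynomial feature expansion followed by term selection.
# L1-penalised logistic regression stands in for the paper's
# probabilistic reinforcement scheme - an analogue, not the method.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

X, y = make_classification(n_samples=300, n_features=6, random_state=0)

expand = PolynomialFeatures(degree=2, include_bias=False)
clf = make_pipeline(
    expand,
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.3),
)
clf.fit(X, y)

# Inspect which polynomial terms survived the sparsity-inducing penalty;
# the surviving terms form a compact, interpretable model structure.
coefs = clf.named_steps["logisticregression"].coef_.ravel()
names = expand.get_feature_names_out()
selected = [(n, c) for n, c in zip(names, coefs) if abs(c) > 1e-6]
print(f"{len(selected)} of {len(names)} polynomial terms retained:")
for n, c in selected[:10]:
    print(f"  {n}: {c:+.2f}")
```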
