Results 1 - 13 of 13
1.
Sensors (Basel); 20(18), 2020 Sep 06.
Article in English | MEDLINE | ID: mdl-32899979

ABSTRACT

(1) Background: People living with type 1 diabetes (T1D) require self-management to maintain blood glucose (BG) levels in a therapeutic range through the delivery of exogenous insulin. However, due to the considerable variability, uncertainty, and complexity of glucose dynamics, optimizing insulin doses to minimize the risk of hyperglycemia and hypoglycemia remains an open problem. (2) Methods: In this work, we propose a novel insulin bolus advisor that uses deep reinforcement learning (DRL) and continuous glucose monitoring to optimize insulin dosing at mealtime. In particular, an actor-critic model based on the deep deterministic policy gradient is designed to compute mealtime insulin doses. The proposed system architecture uses a two-step learning framework, in which a population model is first obtained and then personalized with subject-specific data. Prioritized memory replay is adopted to accelerate the training process in clinical practice. To validate the algorithm, we employ a customized version of the FDA-accepted UVA/Padova T1D simulator to perform in silico trials on 10 adult and 10 adolescent subjects. (3) Results: Compared to a standard bolus calculator as the baseline, the DRL insulin bolus advisor significantly improved the average percentage time in target range (70-180 mg/dL) from 74.1%±8.4% to 80.9%±6.9% (p<0.01) in the adult cohort and from 54.9%±12.4% to 61.6%±14.1% (p<0.01) in the adolescent cohort, while reducing hypoglycemia. (4) Conclusions: The proposed algorithm has the potential to improve mealtime bolus insulin delivery in people with T1D and is a feasible candidate for future clinical validation.
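The deterministic policy-gradient idea behind the actor-critic bolus advisor described above can be illustrated on a toy one-dimensional dosing problem. Everything below (the reward shape, the scalar "dose", the learning rates) is a hypothetical sketch, not the paper's simulator or model:

```python
import random

random.seed(0)

OPT = 2.0                      # hypothetical optimal mealtime dose (arbitrary units)

def reward(dose):
    # illustrative reward: penalize squared distance from the optimal dose
    return -(dose - OPT) ** 2

# Local quadratic critic around the current policy action:
# Q(actor + d) ~ q0 + q1*d + q2*d^2, fitted by stochastic gradient descent.
q0 = q1 = q2 = 0.0
actor = 0.0                    # deterministic policy output (the dose)
lr_critic, lr_actor = 0.05, 0.005

for _ in range(4000):
    d = random.gauss(0.0, 0.5)             # exploration noise on the action
    r = reward(actor + d)
    pred = q0 + q1 * d + q2 * d * d
    err = r - pred                          # one-step (bandit) TD error
    q0 += lr_critic * err
    q1 += lr_critic * err * d
    q2 += lr_critic * err * d * d
    actor += lr_actor * q1                  # DPG step: ascend the critic's action-gradient

print(round(actor, 1))                      # converges near OPT
```

The actor never sees the reward directly; it only follows the critic's estimated gradient with respect to the action, which is the defining trait of deterministic policy-gradient methods such as DDPG.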


Subjects
Diabetes Mellitus, Type 1; Adolescent; Adult; Algorithms; Blood Glucose; Blood Glucose Self-Monitoring; Diabetes Mellitus, Type 1/drug therapy; Humans; Hypoglycemic Agents/therapeutic use; Insulin; Insulin Infusion Systems
2.
Sci Rep; 14(1): 15245, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956183

ABSTRACT

In hybrid automatic insulin delivery (HAID) systems, meal disturbances are compensated by feedforward control, which requires the patient with type 1 diabetes (DM1) to announce the meal in order to achieve the desired glycemic control performance. The insulin bolus in the HAID system is calculated from the amount of carbohydrates (CHO) in the meal and patient-specific parameters, i.e., the carbohydrate-to-insulin ratio (CR) and the insulin sensitivity-related correction factor (CF). The estimation of CHO in a meal is prone to errors and is burdensome for patients. This study proposes a fully automatic insulin delivery (FAID) system that eliminates patient intervention by compensating for unannounced meals. The study exploits a deep reinforcement learning (DRL) algorithm to calculate insulin boluses for unannounced meals without using information on CHO content. The DRL bolus calculator is integrated with a closed-loop controller and a meal detector (both previously developed by our group) to implement the FAID system. An adult cohort of 68 virtual patients based on the modified UVa/Padova simulator was used for in silico trials. The percentage of the overall duration spent in the target range of 70-180 mg/dL was 71.2% and 76.2%, below 70 mg/dL was 0.9% and 0.1%, and above 180 mg/dL was 26.7% and 21.1%, respectively, for the FAID system and for the HAID system using a standard bolus calculator (SBC) with CHO misestimation. The proposed algorithm can be exploited to realize FAID systems in the future.


Subjects
Deep Learning; Diabetes Mellitus, Type 1; Insulin Infusion Systems; Insulin; Insulin/administration & dosage; Humans; Diabetes Mellitus, Type 1/drug therapy; Diabetes Mellitus, Type 1/blood; Algorithms; Blood Glucose/analysis; Adult; Hypoglycemic Agents/administration & dosage
3.
IEEE Trans Biomed Circuits Syst; 18(2): 236-246, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38163299

ABSTRACT

Leveraging continuous glucose monitoring (CGM) systems, real-time blood glucose (BG) forecasting is essential for proactive interventions, playing a crucial role in enhancing the management of type 1 diabetes (T1D) and type 2 diabetes (T2D). However, developing a model generalized to a population and subsequently embedding it within a microchip of a wearable device presents significant technical challenges. Furthermore, the domain of BG prediction in T2D remains under-explored in the literature. In light of this, we propose a population-specific BG prediction model, leveraging the capabilities of the temporal fusion Transformer (TFT) to adjust predictions based on personal demographic data. The trained model is then embedded within a system-on-chip, integral to our low-power and low-cost customized wearable device. This device seamlessly communicates with CGM systems through Bluetooth and provides timely BG predictions using edge computing. When evaluated on two publicly available clinical datasets with a total of 124 participants with T1D or T2D, the embedded TFT model consistently demonstrated superior performance, achieving the lowest prediction errors when compared with a range of machine learning baseline methods. Executing the TFT model on our wearable device requires minimal memory and power consumption, enabling continuous decision support for more than 51 days on a single Li-Poly battery charge. These findings demonstrate the significant potential of the proposed TFT model and wearable device in enhancing the quality of life for people with diabetes and effectively addressing real-world challenges.
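The core operation of a Transformer-style forecaster such as the TFT is attention over the input window. A minimal single-head sketch in pure Python follows; the scalar projection weights are illustrative stand-ins, not anything trained or taken from the paper:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_forecast(window, w_q=0.01, w_k=0.01, w_v=1.0):
    """One attention head over a CGM window (mg/dL); weights are illustrative."""
    q = window[-1] * w_q                       # query: the most recent reading
    keys = [x * w_k for x in window]
    values = [x * w_v for x in window]
    weights = softmax([q * k for k in keys])   # similarity of each step to "now"
    return sum(w * v for w, v in zip(weights, values))

print(attention_forecast([110.0, 118.0, 126.0, 135.0]))
```

Because the output is a convex combination of the (scaled) window values, the forecast always lies within the range of the recent readings; real TFT heads add learned projections, gating, and static-covariate conditioning on top of this mechanism.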


Subjects
Deep Learning; Diabetes Mellitus, Type 1; Diabetes Mellitus, Type 2; Humans; Glucose; Diabetes Mellitus, Type 1/therapy; Blood Glucose; Diabetes Mellitus, Type 2/therapy; Blood Glucose Self-Monitoring/methods; Quality of Life
4.
Article in English | MEDLINE | ID: mdl-39012743

ABSTRACT

Real-time continuous glucose monitoring (CGM), augmented with accurate glucose prediction, offers an effective strategy for maintaining blood glucose levels within a therapeutically appropriate range. This is particularly crucial for individuals with type 1 diabetes (T1D) who require long-term self-management. However, with extensive glycemic variability, developing a prediction algorithm applicable across diverse populations remains a significant challenge. Leveraging meta-learning for domain generalization, we propose GPFormer, a Transformer-based zero-shot learning method designed for multi-horizon glucose prediction. We developed GPFormer on the REPLACE-BG dataset, comprising 226 participants with T1D, and proceeded to evaluate its performance using three external clinical datasets with CGM data. These included the OhioT1DM dataset, a publicly available dataset including 12 T1D participants, as well as two proprietary datasets. The first proprietary dataset included 22 participants, while the second contained 45 participants, encompassing a diverse group with T1D, type 2 diabetes, and those without diabetes, including patients admitted to hospitals. These four datasets include both outpatient and inpatient settings, various intervention strategies, and demographic variability, which effectively reflect real-world scenarios of CGM usage. When compared with a group of machine learning baseline methods, GPFormer consistently demonstrated superior performance and achieved the lowest root mean square error for all the evaluated datasets up to a prediction horizon of two hours. These experimental results highlight the effectiveness and generalizability of the proposed model across a variety of populations, demonstrating its substantial potential to enhance glucose management in a wide range of practical clinical settings.
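The abstract does not specify GPFormer's exact meta-learning procedure. As one simple instance of meta-learning an initialization that adapts across domains, a Reptile-style loop on toy linear-regression "tasks" looks like this (the task slopes, step counts, and learning rates are all hypothetical):

```python
import random

random.seed(1)

def inner_sgd(theta, slope, steps=5, lr=0.1):
    # inner loop: adapt a scalar linear model y = theta*x to one task (y = slope*x)
    for _ in range(steps):
        x = random.uniform(-1.0, 1.0)
        grad = 2.0 * (theta * x - slope * x) * x
        theta -= lr * grad
    return theta

theta = 0.0                      # meta-initialization being learned
task_slopes = [1.0, 2.0, 3.0]    # hypothetical per-domain regression tasks
for _ in range(300):
    slope = random.choice(task_slopes)
    adapted = inner_sgd(theta, slope)
    theta += 0.2 * (adapted - theta)    # Reptile meta-update toward the adapted weights

print(round(theta, 1))
```

The meta-parameters settle near a point from which a few gradient steps reach any of the task solutions quickly, which is the property domain-generalization methods exploit for unseen populations.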

5.
IEEE J Biomed Health Inform; 27(10): 5087-5098, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37607154

ABSTRACT

Recent advancements in hybrid closed-loop systems, also known as the artificial pancreas (AP), have been shown to optimize glucose control and reduce the self-management burden for people living with type 1 diabetes (T1D). AP systems can adjust the basal infusion rates of insulin pumps, facilitated by real-time communication with continuous glucose monitoring. Deep reinforcement learning (DRL) has introduced new paradigms of basal insulin control algorithms. However, all the existing DRL-based AP controllers require extensive random online interactions between the agent and environment. While this can be validated in T1D simulators, it becomes impractical in real-world clinical settings. To this end, we propose an offline DRL framework that can develop and validate models for basal insulin control entirely offline. It comprises a DRL model based on the twin delayed deep deterministic policy gradient and behavior cloning, as well as off-policy evaluation (OPE) using fitted Q evaluation. We evaluated the proposed framework on an in silico dataset generated by the UVA/Padova T1D simulator, and the OhioT1DM dataset, a real clinical dataset. The performance on the in silico dataset shows that the offline DRL algorithm significantly increased time in range while reducing time below range and time above range for both adult and adolescent groups. Then, we used the OPE to estimate model performance on the clinical dataset, where a notable increase in policy values was observed for each subject. The results demonstrate that the proposed framework is a viable and safe method for improving personalized basal insulin control in T1D.
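Combining the twin delayed deep deterministic policy gradient with behavior cloning is commonly done via the TD3+BC actor objective (Fujimoto and Gu): maximize the critic's value while penalizing deviation from the logged action. A scalar sketch, with the batch-level Q normalization reduced to a single value (the real method normalizes by the batch mean of |Q|):

```python
def td3_bc_actor_loss(q_value, policy_action, dataset_action, alpha=2.5):
    """TD3+BC actor objective, simplified to scalars: maximize Q while
    staying close to the logged action; lam rescales the Q term so the
    two parts are comparable in magnitude."""
    lam = alpha / (abs(q_value) + 1e-8)
    return -lam * q_value + (policy_action - dataset_action) ** 2

print(td3_bc_actor_loss(4.0, 1.0, 0.0))
```

The behavior-cloning penalty is what makes purely offline training safe here: the learned basal-insulin policy cannot drift far from actions actually present in the logged data, where the critic's estimates are unreliable.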


Subjects
Diabetes Mellitus, Type 1; Pancreas, Artificial; Adult; Adolescent; Humans; Diabetes Mellitus, Type 1/drug therapy; Insulin/therapeutic use; Blood Glucose; Blood Glucose Self-Monitoring; Algorithms; Insulin Infusion Systems; Hypoglycemic Agents/therapeutic use
6.
IEEE Trans Biomed Eng; 70(1): 193-204, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35776825

ABSTRACT

The availability of large amounts of data from continuous glucose monitoring (CGM), together with the latest advances in deep learning techniques, has opened the door to a new paradigm of algorithm design for personalized blood glucose (BG) prediction in type 1 diabetes (T1D) with superior performance. However, there are several challenges that prevent the widespread implementation of deep learning algorithms in actual clinical settings, including unclear prediction confidence and limited training data for new T1D subjects. To this end, we propose a novel deep learning framework, Fast-adaptive and Confident Neural Network (FCNN), to meet these clinical challenges. In particular, an attention-based recurrent neural network is used to learn representations from CGM input and forward a weighted sum of hidden states to an evidential output layer, aiming to compute personalized BG predictions with theoretically supported model confidence. Model-agnostic meta-learning is employed to enable fast adaptation for a new T1D subject with limited training data. The proposed framework has been validated on three clinical datasets. In particular, for a dataset including 12 subjects with T1D, FCNN achieved a root mean square error of 18.64±2.60 mg/dL and 31.07±3.62 mg/dL for 30- and 60-minute prediction horizons, respectively, which outperformed all the considered baseline methods with significant improvements. These results indicate that FCNN is a viable and effective approach for predicting BG levels in T1D. The well-trained models can be implemented in smartphone apps to improve glycemic control by enabling proactive actions through real-time glucose alerts.
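An evidential output layer of the kind referenced above is often implemented as a Normal-Inverse-Gamma head, following the deep evidential regression formulation (Amini et al.). Assuming that formulation (the abstract does not spell it out), the point forecast and its uncertainty decomposition follow directly from the four output parameters:

```python
def evidential_prediction(gamma, nu, alpha, beta):
    """Normal-Inverse-Gamma evidential output: gamma is the point forecast;
    aleatoric variance = beta / (alpha - 1); epistemic = beta / (nu * (alpha - 1)).
    Valid only for nu > 0, alpha > 1, beta > 0."""
    assert nu > 0 and alpha > 1 and beta > 0
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (nu * (alpha - 1.0))
    return gamma, aleatoric, epistemic

print(evidential_prediction(120.0, 2.0, 3.0, 4.0))  # → (120.0, 2.0, 1.0)
```

A single forward pass thus yields both a glucose forecast (mg/dL) and a confidence estimate, with no sampling or ensembling, which is what makes such heads attractive for on-device decision support.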


Subjects
Deep Learning; Diabetes Mellitus, Type 1; Blood Glucose/analysis; Diabetes Mellitus, Type 1/blood; Diabetes Mellitus, Type 1/diagnosis; Humans
7.
IEEE J Biomed Health Inform; 27(10): 5122-5133, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37134028

ABSTRACT

Time series data generated by continuous glucose monitoring sensors offer unparalleled opportunities for developing data-driven approaches, especially deep learning-based models, in diabetes management. Although these approaches have achieved state-of-the-art performance in various fields such as glucose prediction in type 1 diabetes (T1D), challenges remain in the acquisition of large-scale individual data for personalized modeling due to the elevated cost of clinical trials and data privacy regulations. In this work, we introduce GluGAN, a framework specifically designed for generating personalized glucose time series based on generative adversarial networks (GANs). Employing recurrent neural network (RNN) modules, the proposed framework uses a combination of unsupervised and supervised training to learn temporal dynamics in latent spaces. To assess the quality of synthetic data, we apply clinical metrics, distance scores, and discriminative and predictive scores computed by post-hoc RNNs for evaluation. Across three clinical datasets with 47 T1D subjects (one publicly available and two proprietary), GluGAN achieved better performance for all the considered metrics when compared with four baseline GAN models. The performance of data augmentation is evaluated by three machine learning-based glucose predictors. Using the training sets augmented by GluGAN significantly reduced the root mean square error for the predictors over 30- and 60-minute horizons. The results suggest that GluGAN is an effective method for generating high-quality synthetic glucose time series and has the potential to be used for evaluating the effectiveness of automated insulin delivery algorithms and as a digital twin to substitute for pre-clinical trials.
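The adversarial part of such a framework reduces to the standard discriminator/generator objectives. A minimal sketch of the non-saturating losses follows; note GluGAN's full objective also includes supervised and latent-space terms not shown here, and the probabilities are illustrative:

```python
import math

def gan_losses(d_real, d_fake):
    """Standard non-saturating GAN objectives, given the discriminator's
    probability of 'real' on a real glucose window (d_real) and on a
    generated one (d_fake)."""
    d_loss = -math.log(d_real) - math.log(1.0 - d_fake)  # discriminator BCE
    g_loss = -math.log(d_fake)                           # non-saturating generator loss
    return d_loss, g_loss

print(gan_losses(0.9, 0.1))
```

When the discriminator is confident (d_real near 1, d_fake near 0), its loss is small while the generator's is large, driving the generator to produce windows the discriminator cannot tell from real CGM traces.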


Subjects
Blood Glucose; Diabetes Mellitus, Type 1; Humans; Blood Glucose Self-Monitoring; Diabetes Mellitus, Type 1/drug therapy; Time Factors; Glucose
8.
IEEE J Biomed Health Inform; 27(5): 2536-2544, 2023 May.
Article in English | MEDLINE | ID: mdl-37027579

ABSTRACT

Mealtime insulin dosing is a major challenge for people living with type 1 diabetes (T1D). This task is typically performed using a standard formula that, despite containing some patient-specific parameters, often leads to sub-optimal glucose control due to lack of personalization and adaptation. To overcome these limitations, we propose an individualized and adaptive mealtime insulin bolus calculator based on double deep Q-learning (DDQ), which is tailored to the patient through a personalization procedure relying on a two-step learning framework. The DDQ-learning bolus calculator was developed and tested using the UVA/Padova T1D simulator, modified to reliably mimic real-world scenarios by introducing multiple variability sources impacting glucose metabolism and technology. The learning phase included long-term training of eight sub-population models, one for each representative subject, selected via a clustering procedure applied to the training set. Then, for each subject of the testing set, a personalization procedure was performed by initializing the models based on the cluster to which the patient belongs. We evaluated the effectiveness of the proposed bolus calculator on a 60-day simulation, using several metrics representing the goodness of glycemic control, and comparing the results with the standard guidelines for mealtime insulin dosing. The proposed method improved the time in target range from 68.35% to 70.08% and significantly reduced the time in hypoglycemia (from 8.78% to 4.17%). The overall glycemic risk index decreased from 8.2 to 7.3, indicating the benefit of our method when applied for insulin dosing compared to standard guidelines.
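Double deep Q-learning differs from vanilla deep Q-learning only in how the bootstrap target is formed: the online network selects the next action and the target network evaluates it, which reduces over-estimation bias. A minimal sketch with illustrative Q-values:

```python
def double_q_target(reward, gamma, q_online_next, q_target_next):
    """Double DQN target: the online network picks the greedy next action,
    the target network supplies its value (selection decoupled from evaluation)."""
    a_star = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[a_star]

# Online net prefers action 1, but the target net rates it conservatively:
print(double_q_target(1.0, 0.9, [1.0, 3.0, 2.0], [5.0, 0.5, 4.0]))
```

A single-network max over `q_target_next` would have bootstrapped from 5.0 here; decoupling selection from evaluation yields the more conservative 1 + 0.9·0.5 target, which matters when Q-values drive insulin dosing decisions.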


Subjects
Diabetes Mellitus, Type 1; Insulin; Humans; Insulin/therapeutic use; Diabetes Mellitus, Type 1/drug therapy; Hypoglycemic Agents/therapeutic use; Blood Glucose; Blood Glucose Self-Monitoring/methods; Insulin Infusion Systems
9.
NPJ Digit Med; 5(1): 78, 2022 Jun 27.
Article in English | MEDLINE | ID: mdl-35760819

ABSTRACT

People living with type 1 diabetes (T1D) require lifelong self-management to maintain glucose levels in a safe range. Failure to do so can lead to adverse glycemic events with short- and long-term complications. Continuous glucose monitoring (CGM) is widely used in T1D self-management for real-time glucose measurements, while smartphone apps are adopted as basic electronic diaries, data visualization tools, and simple decision support tools for insulin dosing. Applying a mixed effects logistic regression analysis to the outcomes of a six-week longitudinal study in 12 T1D adults using CGM and a clinically validated wearable sensor wristband (NCT ID: NCT03643692), we identified several significant associations between physiological measurements and hypo- and hyperglycemic events measured an hour later. We proceeded to develop a new smartphone-based platform, ARISES (Adaptive, Real-time, and Intelligent System to Enhance Self-care), with an embedded deep learning algorithm utilizing multi-modal data from CGM, daily entries of meal and bolus insulin, and the sensor wristband to predict glucose levels and hypo- and hyperglycemia. For a 60-minute prediction horizon, the proposed algorithm achieved an average root mean square error (RMSE) of 35.28 ± 5.77 mg/dL with Matthews correlation coefficients for detecting hypoglycemia and hyperglycemia of 0.56 ± 0.07 and 0.70 ± 0.05, respectively. The use of wristband data significantly reduced the RMSE by 2.25 mg/dL (p < 0.01). The well-trained model is implemented on the ARISES app to provide real-time decision support. These results indicate that ARISES has great potential to mitigate the risk of severe complications and enhance self-management for people with T1D.
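The Matthews correlation coefficient reported above for hypo- and hyperglycemia detection is computed from the binary confusion matrix; the counts below are illustrative, not the study's:

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Matthews correlation coefficient from a binary confusion matrix.
    Ranges from -1 to 1; 0 means no better than chance."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

print(round(matthews_corrcoef(6, 3, 1, 2), 3))
```

Unlike accuracy, the MCC stays informative when the classes are heavily imbalanced, which is why it suits hypoglycemia detection, where positive events are rare relative to in-range readings.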

10.
IEEE J Biomed Health Inform; 25(4): 1223-1232, 2021 Apr.
Article in English | MEDLINE | ID: mdl-32755873

ABSTRACT

People with Type 1 diabetes (T1D) require regular exogenous infusion of insulin to maintain their blood glucose concentration in a therapeutically adequate target range. Although the artificial pancreas and continuous glucose monitoring have been proven to be effective in achieving closed-loop control, significant challenges still remain due to the high complexity of glucose dynamics and limitations in the technology. In this work, we propose a novel deep reinforcement learning model for single-hormone (insulin) and dual-hormone (insulin and glucagon) delivery. In particular, the delivery strategies are developed by double Q-learning with dilated recurrent neural networks. For designing and testing purposes, the FDA-accepted UVA/Padova Type 1 simulator was employed. First, we performed long-term generalized training to obtain a population model. Then, this model was personalized with a small set of subject-specific data. In silico results show that the single- and dual-hormone delivery strategies achieve good glucose control when compared to a standard basal-bolus therapy with low-glucose insulin suspension. Specifically, in the adult cohort (n = 10), percentage time in target range (70-180 mg/dL) improved from 77.6% to 80.9% with single-hormone control, and to 85.6% with dual-hormone control. In the adolescent cohort (n = 10), percentage time in target range improved from 55.5% to [Formula: see text] with single-hormone control, and to 78.8% with dual-hormone control. In all scenarios, a significant decrease in hypoglycemia was observed. These results show that the use of deep reinforcement learning is a viable approach for closed-loop glucose control in T1D.


Subjects
Diabetes Mellitus, Type 1; Pancreas, Artificial; Adolescent; Adult; Algorithms; Blood Glucose; Blood Glucose Self-Monitoring; Computer Simulation; Diabetes Mellitus, Type 1/drug therapy; Humans; Hypoglycemic Agents/therapeutic use; Insulin/therapeutic use; Insulin Infusion Systems
11.
IEEE J Biomed Health Inform; 25(7): 2744-2757, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33232247

ABSTRACT

Diabetes is a chronic metabolic disorder that affects an estimated 463 million people worldwide. Aiming to improve the treatment of people with diabetes, digital health has been widely adopted in recent years and has generated a huge amount of data that could be used for further management of this chronic disease. Taking advantage of this, approaches that use artificial intelligence, and specifically deep learning, an emerging type of machine learning, have been widely adopted with promising results. In this paper, we present a comprehensive review of the applications of deep learning within the field of diabetes. We conducted a systematic literature search and identified three main areas that use this approach: diagnosis of diabetes, glucose management, and diagnosis of diabetes-related complications. The search resulted in the selection of 40 original research articles, for which we have summarized the key information about the employed learning models, development process, main outcomes, and baseline methods for performance evaluation. Among the analyzed literature, it is notable that various deep learning techniques and frameworks have achieved state-of-the-art performance in many diabetes-related tasks by outperforming conventional machine learning approaches. Meanwhile, we identify some limitations in the current literature, such as a lack of data availability and model interpretability. The rapid developments in deep learning and the increase in available data offer the possibility to meet these challenges in the near future and allow the widespread deployment of this technology in clinical settings.


Subjects
Deep Learning; Diabetes Mellitus; Artificial Intelligence; Diabetes Mellitus/diagnosis; Diabetes Mellitus/therapy; Humans; Machine Learning
12.
J Healthc Inform Res; 4(3): 308-324, 2020 Sep.
Article in English | MEDLINE | ID: mdl-35415447

ABSTRACT

Diabetes is a chronic disease affecting 415 million people worldwide. People with type 1 diabetes mellitus (T1DM) need to self-administer insulin to maintain blood glucose (BG) levels in a normal range, which is usually a very challenging task. Developing a reliable glucose forecasting model would have a profound impact on diabetes management, since it could provide predictive glucose alarms or low-glucose insulin suspension for hypoglycemia minimization. Recently, deep learning has shown great potential in healthcare and medical research for diagnosis, forecasting, and decision-making. In this work, we introduce a deep learning model based on a dilated recurrent neural network (DRNN) to provide 30-minute forecasts of future glucose levels. Using dilation, the DRNN model gains a much larger receptive field in terms of neurons, aiming to capture long-term dependencies. A transfer learning technique is also applied to make use of the data from multiple subjects. The proposed approach outperforms existing glucose forecasting algorithms, including autoregressive models (ARX), support vector regression (SVR), and conventional neural networks for predicting glucose (NNPG) (e.g. RMSE = NNPG, 22.9 mg/dL; SVR, 21.7 mg/dL; ARX, 20.1 mg/dL; DRNN, 18.9 mg/dL on the OhioT1DM dataset). The results suggest that dilated connections can improve glucose forecasting performance efficiently.
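The dilation mechanism described above replaces the usual connection to the previous hidden state with a connection d steps back, so deeper layers can skip further into the past. A minimal single-layer sketch with illustrative weights (a real DRNN stacks layers with exponentially growing dilations and learned matrices):

```python
import math

def dilated_rnn_layer(xs, dilation, w_x=0.5, w_h=0.5):
    """One dilated recurrent layer: the hidden state at step t connects back
    to step t - dilation, widening the receptive field (weights illustrative)."""
    hs = []
    for t, x in enumerate(xs):
        h_prev = hs[t - dilation] if t >= dilation else 0.0
        hs.append(math.tanh(w_x * x + w_h * h_prev))
    return hs

# An impulse at t=0 reaches t=2 (one dilation hop) but never the odd steps:
print(dilated_rnn_layer([1.0, 0.0, 0.0, 0.0], dilation=2))
```

With dilation 2, information from step 0 propagates only along the even sub-sequence, so each layer covers twice the temporal span of an ordinary recurrence for the same number of sequential steps.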

13.
IEEE J Biomed Health Inform; 24(2): 414-423, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31369390

ABSTRACT

For people with Type 1 diabetes (T1D), forecasting of blood glucose (BG) can be used to effectively avoid hyperglycemia, hypoglycemia, and associated complications. The latest continuous glucose monitoring (CGM) technology allows people to observe glucose in real time. However, an accurate glucose forecast remains a challenge. In this work, we introduce GluNet, a framework that leverages a personalized deep neural network to predict the probabilistic distribution of short-term (30-60 minutes) future CGM measurements for subjects with T1D based on their historical data, including glucose measurements, meal information, insulin doses, and other factors. It adopts the latest deep learning techniques, consisting of four components: data pre-processing, label transform/recover, multiple layers of dilated convolutional neural networks (CNN), and post-processing. The method is evaluated in silico for both adult and adolescent subjects. The results show significant improvements over existing methods in the literature through a comprehensive comparison in terms of root mean square error (RMSE) ([Formula: see text] mg/dL) with short time lag ([Formula: see text] minutes) for a prediction horizon (PH) of 30 minutes, and RMSE ([Formula: see text] mg/dL) with time lag ([Formula: see text] minutes) for PH = 60 minutes for virtual adult subjects. In addition, GluNet is also tested on two clinical data sets. Results show that it achieves an RMSE ([Formula: see text] mg/dL) with time lag ([Formula: see text] minutes) for PH = 30 minutes and an RMSE ([Formula: see text] mg/dL) with time lag ([Formula: see text] minutes) for PH = 60 minutes. These are the best reported results for glucose forecasting when compared with other methods, including the neural network for predicting glucose (NNPG), support vector regression (SVR), the latent variable with exogenous input (LVX), and the autoregression with exogenous input (ARX) algorithm.
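The dilated convolution layers at the heart of GluNet-style architectures can be sketched as a causal 1-D convolution whose taps are spaced by the dilation factor, so the output at time t never sees future samples. Kernel values here are illustrative, not trained weights:

```python
def causal_dilated_conv(xs, kernel, dilation):
    """Causal dilated 1-D convolution: output at t combines past samples
    spaced `dilation` apart; taps falling before the start are skipped."""
    k = len(kernel)
    out = []
    for t in range(len(xs)):
        acc = 0.0
        for i, w in enumerate(kernel):
            idx = t - (k - 1 - i) * dilation
            if idx >= 0:
                acc += w * xs[idx]
        out.append(acc)
    return out

# Kernel [1, 1] with dilation 2 sums each sample with the one two steps back:
print(causal_dilated_conv([1.0, 2.0, 3.0, 4.0], [1.0, 1.0], dilation=2))  # → [1.0, 2.0, 4.0, 6.0]
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially with depth, letting a short CNN summarize hours of CGM history in a single forward pass.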


Subjects
Blood Glucose/analysis; Deep Learning; Diabetes Mellitus, Type 1/blood; Algorithms; Blood Glucose Self-Monitoring/methods; Humans; Probability