ABSTRACT
Current artificial pancreas (AP) systems are hybrid closed-loop systems that require manual meal announcements to manage postprandial glucose control effectively. This places a cognitive burden on users with type 1 diabetes (T1D), since tight glucose control relies on frequent user engagement. To move towards fully automated closed-loop glucose control, we propose an algorithm based on a deep learning framework that performs multitask quantile regression for both meal detection and carbohydrate estimation. Our proposed method is evaluated in silico on 10 adult subjects from the UVa/Padova simulator with a Bio-inspired Artificial Pancreas (BiAP) control algorithm over a 2-month period. Three different configurations of the AP are evaluated: BiAP without meal announcement (BiAP-NMA), BiAP with meal announcement (BiAP-MA), and BiAP with meal detection (BiAP-MD). We present results showing an improvement of BiAP-MD over BiAP-NMA, demonstrating 144.5 ± 6.8 mg/dL mean blood glucose level (-4.4 mg/dL, p < 0.01) and 77.8 ± 6.3% mean time between 70 and 180 mg/dL (+3.9%, p < 0.001). This improvement in control is realised without a significant increase in mean time in hypoglycaemia (+0.1%, p = 0.4). In terms of detection of meals and snacks, the proposed method on average achieves 93% precision and 76% recall with a detection delay time of 38 ± 15 min (92% precision, 92% recall, and 37 min detection time for meals only). Furthermore, BiAP-MD handles hypoglycaemia better than BiAP-MA based on CVGA assessment, with fewer control errors (10% vs. 20%). This study suggests that multitask quantile regression can improve the capability of AP systems for postprandial glucose control without increasing hypoglycaemia.
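Quantile regression of the kind described above is trained with the pinball loss. As a hedged illustration (the function names, the quantile set, and the idea of summing the per-quantile losses are ours, not details from the abstract), a minimal NumPy sketch of such an objective for the carbohydrate-estimation task might look like:

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) loss: under-prediction is weighted by tau,
    over-prediction by (1 - tau)."""
    err = y_true - y_pred
    return float(np.mean(np.maximum(tau * err, (tau - 1.0) * err)))

def multitask_quantile_loss(y_true, preds_by_tau):
    """Sum of pinball losses over several quantiles, one way a multitask
    quantile objective could combine them (illustrative only)."""
    return sum(pinball_loss(y_true, p, tau) for tau, p in preds_by_tau.items())

carbs = np.array([40.0, 60.0, 80.0])  # grams of carbohydrate per meal
preds = {0.1: carbs - 10.0, 0.5: carbs, 0.9: carbs + 10.0}
loss = multitask_quantile_loss(carbs, preds)
```

Predicting several quantiles at once yields an uncertainty interval around the carbohydrate estimate rather than a single point value.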
Subjects
Deep Learning, Diabetes Mellitus, Type 1, Artificial Pancreas, Adult, Algorithms, Blood Glucose, Blood Glucose Self-Monitoring, Diabetes Mellitus, Type 1/drug therapy, Humans, Insulin, Insulin Infusion Systems, Meals
ABSTRACT
BACKGROUND: A locally developed case-based reasoning (CBR) algorithm, designed to augment antimicrobial prescribing in secondary care, was evaluated. METHODS: Prescribing recommendations made by the CBR algorithm were compared to decisions made by physicians in clinical practice. Comparisons were examined in 2 patient populations: first, in patients with confirmed Escherichia coli blood stream infections ("E. coli patients"), and second, in ward-based patients presenting with a range of potential infections ("ward patients"). Prescribing recommendations were compared against the Antimicrobial Spectrum Index (ASI) and the World Health Organization Essential Medicine List Access, Watch, Reserve (AWaRe) classification system. A prescription was defined as appropriate if its spectrum covered the known or most-likely organism's antimicrobial sensitivity profile. RESULTS: In total, 224 patients (145 E. coli patients and 79 ward patients) were included. Mean (standard deviation) age was 66 (18) years, and 108/224 (48%) were female. The CBR recommendations were appropriate in 202/224 (90%) compared to 186/224 (83%) in practice (odds ratio [OR]: 1.24; 95% confidence interval [CI]: 0.392-3.936; P = .71). CBR recommendations had a smaller ASI compared to practice, with a median (range) of 6 (0-13) compared to 8 (0-12) (P < .01). CBR recommendations were more likely to be classified as Access class antimicrobials compared to physicians' prescriptions, at 110/224 (49%) vs. 79/224 (35%) (OR: 1.77; 95% CI: 1.212-2.588; P < .01). Results were similar for E. coli and ward patients on subgroup analysis. CONCLUSIONS: A CBR-driven decision support system provided appropriate recommendations within a narrower spectrum compared to current clinical practice. Future work must investigate the impact of this intervention on prescribing behaviors more broadly and on patient outcomes.
Subjects
Anti-Infective Agents, Antimicrobial Stewardship, Aged, Algorithms, Anti-Bacterial Agents/therapeutic use, Anti-Infective Agents/therapeutic use, Escherichia coli, Female, Humans, Inappropriate Prescribing, Practice Patterns, Physicians'
ABSTRACT
(1) Background: People living with type 1 diabetes (T1D) require self-management to maintain blood glucose (BG) levels in a therapeutic range through the delivery of exogenous insulin. However, due to the various variability, uncertainty and complex glucose dynamics, optimizing the doses of insulin delivery to minimize the risk of hyperglycemia and hypoglycemia is still an open problem. (2) Methods: In this work, we propose a novel insulin bolus advisor which uses deep reinforcement learning (DRL) and continuous glucose monitoring to optimize insulin dosing at mealtime. In particular, an actor-critic model based on deep deterministic policy gradient is designed to compute mealtime insulin doses. The proposed system architecture uses a two-step learning framework, in which a population model is first obtained and then personalized by subject-specific data. Prioritized memory replay is adopted to accelerate the training process in clinical practice. To validate the algorithm, we employ a customized version of the FDA-accepted UVA/Padova T1D simulator to perform in silico trials on 10 adult subjects and 10 adolescent subjects. (3) Results: Compared to a standard bolus calculator as the baseline, the DRL insulin bolus advisor significantly improved the average percentage time in target range (70-180 mg/dL) from 74.1%±8.4% to 80.9%±6.9% (p<0.01) and 54.9%±12.4% to 61.6%±14.1% (p<0.01) in the the adult and adolescent cohorts, respectively, while reducing hypoglycemia. (4) Conclusions: The proposed algorithm has the potential to improve mealtime bolus insulin delivery in people with T1D and is a feasible candidate for future clinical validation.
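The abstract does not specify the reward signal driving the actor-critic model, but DRL glucose controllers typically score each CGM sample against the 70-180 mg/dL target range, penalising hypoglycemia more heavily than hyperglycemia. A purely hypothetical, zone-based reward of that shape:

```python
def glucose_reward(bg_mg_dl: float) -> float:
    """Illustrative reward: +1 inside the 70-180 mg/dL target range;
    hypoglycemia is penalised more steeply than hyperglycemia, since
    its clinical risk is greater (all constants are assumptions)."""
    if 70.0 <= bg_mg_dl <= 180.0:
        return 1.0
    if bg_mg_dl < 70.0:
        return -2.0 - (70.0 - bg_mg_dl) / 10.0    # steep hypo penalty
    return -1.0 - (bg_mg_dl - 180.0) / 100.0      # milder hyper penalty
```

An episode return would then sum such rewards over the post-meal CGM samples that follow each bolus decision.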
Subjects
Diabetes Mellitus, Type 1, Adolescent, Adult, Algorithms, Blood Glucose, Blood Glucose Self-Monitoring, Diabetes Mellitus, Type 1/drug therapy, Humans, Hypoglycemic Agents/therapeutic use, Insulin, Insulin Infusion Systems
ABSTRACT
In the daily management of type 1 diabetes (T1D), determining the correct insulin dose to be injected at meal-time is fundamental to achieve optimal glycemic control. Wearable sensors, such as continuous glucose monitoring (CGM) devices, are instrumental to achieve this purpose. In this paper, we show how CGM data, together with commonly recorded inputs (carbohydrate intake and bolus insulin), can be used to develop an algorithm that allows classifying, at meal-time, the post-prandial glycemic status (i.e., blood glucose concentration being too low, too high, or within target range). Such an outcome can then be used to improve the efficacy of insulin therapy by reducing or increasing the corresponding meal bolus dose. A state-of-the-art T1D simulation environment, including intraday variability and a behavioral model, was used to generate a rich in silico dataset corresponding to 100 subjects over a two-month scenario. Then, an extreme gradient-boosted tree (XGB) algorithm was employed to classify the post-prandial glycemic status. Finally, we demonstrate how the XGB algorithm outcome can be exploited to improve glycemic control in T1D through real-time adjustment of the meal insulin bolus. The proposed XGB algorithm obtained good accuracy at classifying post-prandial glycemic status (AUROC = 0.84 [0.78, 0.87]). Consequently, when used to adjust, in real-time, meal insulin boluses obtained with a bolus calculator, the proposed approach improves glycemic control when compared to the baseline bolus calculator. In particular, percentage time in target [70, 180] mg/dL was improved from 61.98 (± 13.89) to 67.00 (± 11.54; p < 0.01) without increasing hypoglycemia.
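To make the adjustment step concrete: a standard bolus calculator combines a meal dose, a correction dose, and insulin on board, and the classifier's predicted post-prandial status can then scale that dose down (predicted low) or up (predicted high). The formula below is the conventional calculator; the 20% adjustment factor, class labels, and function names are our illustrative assumptions, not values from the paper.

```python
def standard_bolus(cho_g, bg, icr, isf, target_bg=110.0, iob=0.0):
    """Conventional bolus calculator: meal dose (carbs / insulin-to-carb
    ratio) plus correction ((BG - target) / insulin sensitivity factor),
    minus insulin on board; never negative."""
    return max(0.0, cho_g / icr + (bg - target_bg) / isf - iob)

def adjusted_bolus(cho_g, bg, icr, isf, predicted_class, factor=0.2, **kw):
    """Scale the calculator dose by the predicted post-prandial status
    ('hypo' / 'target' / 'hyper'); the 20% factor is illustrative."""
    dose = standard_bolus(cho_g, bg, icr, isf, **kw)
    if predicted_class == "hypo":
        return dose * (1.0 - factor)
    if predicted_class == "hyper":
        return dose * (1.0 + factor)
    return dose
```

For example, a 60 g meal at 160 mg/dL with ICR 10 g/U and ISF 50 mg/dL/U gives a 7 U calculator dose, reduced to 5.6 U if the classifier predicts a post-prandial low.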
Subjects
Diabetes Mellitus, Type 1/drug therapy, Hyperglycemia/drug therapy, Hypoglycemic Agents/administration & dosage, Insulin/administration & dosage, Algorithms, Blood Glucose/drug effects, Blood Glucose Self-Monitoring, Computer Simulation, Diabetes Mellitus, Type 1/blood, Diabetes Mellitus, Type 1/pathology, Dose-Response Relationship, Drug, Humans, Hyperglycemia/blood, Hyperglycemia/pathology, Insulin Infusion Systems, Postprandial Period, Proof of Concept Study
ABSTRACT
(1) Objective: Blood glucose forecasting in type 1 diabetes (T1D) management is a maturing field with numerous algorithms being published and a few of them having reached the commercialisation stage. However, accurate long-term glucose predictions (e.g., >60 min), which are usually needed in applications such as precision insulin dosing (e.g., an artificial pancreas), still remain a challenge. In this paper, we present a novel glucose forecasting algorithm that is well-suited for long-term prediction horizons. The proposed algorithm is currently being used as the core component of a modular safety system for an insulin dose recommender developed within the EU-funded PEPPER (Patient Empowerment through Predictive PERsonalised decision support) project. (2) Methods: The proposed blood glucose forecasting algorithm is based on a compartmental composite model of glucose-insulin dynamics, which uses a deconvolution technique applied to the continuous glucose monitoring (CGM) signal for state estimation. In addition to the inputs commonly employed by glucose forecasting methods (i.e., CGM data, insulin, carbohydrates), the proposed algorithm allows the optional input of meal absorption information to enhance prediction accuracy. Clinical data corresponding to 10 adult subjects with T1D were used for evaluation purposes. In addition, in silico data obtained with a modified version of the UVa-Padova simulator were used to further evaluate the impact of accounting for meal absorption information on prediction accuracy. Finally, a comparison with two well-established glucose forecasting algorithms, the autoregressive exogenous (ARX) model and the latent variable-based statistical (LVX) model, was carried out. (3) Results: For prediction horizons beyond 60 min, the performance of the proposed physiological model-based (PM) algorithm is superior to that of the LVX and ARX algorithms. When comparing the performance of PM against the second-ranked method (ARX) on a 120 min prediction horizon, the percentage improvements in prediction accuracy measured with the root mean square error, A-region of error grid analysis (EGA), and hypoglycaemia prediction calculated by the Matthews correlation coefficient were 18.8%, 17.9%, and 80.9%, respectively. Although showing a trend towards improvement, the addition of meal absorption information did not provide clinically significant improvements. (4) Conclusion: The proposed glucose forecasting algorithm is potentially well-suited for T1D management applications which require long-term glucose predictions.
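For readers unfamiliar with the ARX baseline mentioned above: it regresses the next glucose value on lagged glucose and lagged exogenous inputs (insulin, carbohydrates) and can be fitted by ordinary least squares. A minimal sketch, in which the lag orders and function names are illustrative rather than the compared model's actual configuration:

```python
import numpy as np

def fit_arx(glucose, insulin, carbs, na=3, nb=2):
    """Fit a simple ARX model, g[k+1] ~ recent glucose + recent insulin
    and carbohydrate inputs, by ordinary least squares (no regularisation;
    lag orders na/nb are illustrative)."""
    lag = max(na, nb)
    rows, targets = [], []
    for k in range(lag, len(glucose) - 1):
        row = np.concatenate([
            glucose[k - na + 1:k + 1],   # na most recent glucose samples
            insulin[k - nb + 1:k + 1],   # nb most recent insulin inputs
            carbs[k - nb + 1:k + 1],     # nb most recent carb inputs
        ])
        rows.append(row)
        targets.append(glucose[k + 1])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta  # one coefficient per lagged regressor
```

One-step predictions are then the dot product of the fitted coefficients with the current lag vector; multi-step forecasts feed predictions back in recursively.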
Subjects
Blood Glucose Self-Monitoring/methods, Blood Glucose, Diabetes Mellitus, Type 1/blood, Forecasting/methods, Adult, Algorithms, Female, Humans, Hypoglycemia/blood, Insulin/blood, Insulin Infusion Systems, Male, Models, Biological
ABSTRACT
The artificial pancreas (AP) system is designed to regulate blood glucose in subjects with type 1 diabetes using a continuous glucose monitor-informed controller that adjusts insulin infusion via an insulin pump. However, current AP developments are mainly hybrid closed-loop systems that include feed-forward actions triggered by the announcement of meals or exercise. The first step towards fully closing the loop in the AP is removing meal announcement, which is currently the most effective way to alleviate postprandial hyperglycemia given the delay in insulin action. Here, a novel approach to meal detection in the AP is presented using a sliding window and computing the normalized cross-covariance between measured glucose and the forward difference of a disturbance term, estimated from an augmented minimal model using an Unscented Kalman Filter. Three different tunings were applied to the same meal detection algorithm: (1) a high-sensitivity tuning, (2) a trade-off tuning that detects a high proportion of meals while producing few false positives (FP), and (3) a low-FP tuning. For the three tunings, sensitivities of 99 ± 2%, 93 ± 5%, and 47 ± 12% were achieved, respectively. A sensitivity analysis was also performed, finding that higher carbohydrate quantities and faster rates of glucose appearance result in more favorable meal detection outcomes.
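The detection statistic described above — normalized cross-covariance between measured glucose and the forward difference of the estimated disturbance over a sliding window — can be sketched as follows. The threshold is illustrative, and in the actual system the disturbance trace would come from the Unscented Kalman Filter on the augmented minimal model rather than being passed in directly:

```python
import numpy as np

def normalized_cross_cov(x, y):
    """Normalized cross-covariance of two equal-length windows
    (covariance divided by the product of standard deviations)."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    denom = np.std(x) * np.std(y)
    return float(np.mean(x * y) / denom) if denom > 0 else 0.0

def detect_meal(glucose_win, disturbance_win, threshold=0.75):
    """Flag a meal when measured glucose and the forward difference of the
    estimated disturbance co-vary strongly within the sliding window."""
    d_diff = np.diff(disturbance_win)                 # forward difference
    return normalized_cross_cov(glucose_win[1:], d_diff) > threshold
```

A rising glucose trace accompanied by a rising disturbance estimate scores close to 1 and triggers detection; a flat trace scores 0 and does not.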
Subjects
Artificial Pancreas, Algorithms, Glucose, Humans, Insulin, Insulin Infusion Systems, Meals
ABSTRACT
BACKGROUND: Antimicrobial resistance is threatening our ability to treat common infectious diseases, and overuse of antimicrobials to treat human infections in hospitals is accelerating this process. Clinical Decision Support Systems (CDSSs) have been proven to enhance quality of care by promoting change in prescription practices through antimicrobial selection advice. However, bypassing an initial assessment to determine whether an underlying disease justifies the need for antimicrobial therapy can lead to indiscriminate and often unnecessary prescriptions. METHODS: From pathology laboratory tests, six biochemical markers were selected and combined with microbiology outcomes from susceptibility tests to create a unique dataset with over one and a half million daily profiles on which to perform infection risk inference. Outliers were discarded using the inter-quartile range rule, and several sampling techniques were studied to tackle the class imbalance problem. Evaluation proceeded in two phases: the first selects the most effective and robust model during training using ten-fold stratified cross-validation; the second evaluates the final model, after isotonic calibration, in scenarios with missing inputs and imbalanced class distributions. RESULTS: More than 50% of infected profiles have daily requested laboratory tests for the six biochemical markers, with very promising infection inference results: area under the receiver operating characteristic curve (0.80-0.83), sensitivity (0.64-0.75), and specificity (0.92-0.97). Standardization consistently outperforms normalization, and sensitivity is enhanced by the SMOTE sampling technique. Furthermore, models operated without noticeable loss in performance if at least four biomarkers were available. CONCLUSION: The selected biomarkers carry enough information to perform infection risk inference with a high degree of confidence, even in the presence of incomplete and imbalanced data. Since these biomarkers are commonly available in hospitals, Clinical Decision Support Systems could benefit from these findings to assist clinicians in deciding whether or not to initiate antimicrobial therapy, thereby improving prescription practices.
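The outlier-discarding step mentioned in the methods is the standard inter-quartile range rule, which in code amounts to:

```python
import numpy as np

def iqr_filter(values, k=1.5):
    """Keep only points inside [Q1 - k*IQR, Q3 + k*IQR]; k = 1.5 is the
    conventional choice for the inter-quartile range rule."""
    v = np.asarray(values, float)
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    mask = (v >= q1 - k * iqr) & (v <= q3 + k * iqr)
    return v[mask]
```

In the study's pipeline this filter would be applied per biomarker before the sampling and cross-validation phases.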
Subjects
Anti-Infective Agents, Biomarkers, Decision Support Systems, Clinical, Drug Resistance, Microbial, Risk Assessment/methods, Support Vector Machine, Decision Support Systems, Clinical/statistics & numerical data, Humans, Risk Assessment/statistics & numerical data
ABSTRACT
BACKGROUND: The inappropriate use of antimicrobials drives antimicrobial resistance. We conducted a study to map physician decision-making processes for acute infection management in secondary care, to identify potential targets for quality improvement interventions. METHODS: Physicians ranging from newly qualified to consultant level participated in semi-structured interviews. Interviews were audio recorded and transcribed verbatim for analysis using NVivo 11.0 software. Grounded theory methodology was applied. Analytical categories were created using a constant-comparison approach to the data, and participants were recruited to the study until thematic saturation was reached. RESULTS: Twenty physicians were interviewed. The decision pathway for the management of acute infections follows a Bayesian-like step-wise approach, with information processed and systematically added to prior assumptions to guide management. The main emerging themes identified as determinants of individual physicians' decision-making were (1) perceptions of providing 'optimal' care for the patient with infection by providing rapid and often intravenous therapy; (2) perceptions that stopping/de-escalating therapy was a senior doctor's decision, with junior trainees not expected to contribute; and (3) expectations of interactions with local guidelines and microbiology service advice. Feedback on review of junior doctors' prescribing decisions was often lacking, causing frustration and confusion about appropriate practice within this cohort. CONCLUSION: Interventions to improve infection management must incorporate mechanisms to promote distribution of responsibility for decisions made. The disparity between expecting prescribers to start therapy but not to review/stop it must be urgently addressed, with mechanisms to improve communication and feedback to junior prescribers to facilitate their continued development as prudent antimicrobial prescribers.
Subjects
Anti-Infective Agents/therapeutic use, Attitude of Health Personnel, Infections/drug therapy, Practice Patterns, Physicians'/statistics & numerical data, Bayes Theorem, Communication, Decision Making, Humans, Male, Physicians, Practice Patterns, Physicians'/standards, Qualitative Research, Secondary Care/standards, Secondary Care/statistics & numerical data
ABSTRACT
BACKGROUND: Despite abundant evidence demonstrating the benefits of continuous glucose monitoring (CGM) in diabetes management, a significant proportion of people using this technology still struggle to achieve glycemic targets. To address this challenge, we propose the Accu-Chek® SmartGuide Predict app, an innovative CGM digital companion that incorporates a suite of advanced glucose predictive functionalities aiming to inform users earlier about acute glycemic situations. METHODS: The app's functionalities, powered by three machine learning models, include a two-hour glucose forecast, a 30-minute low glucose detection, and a nighttime low glucose prediction for bedtime interventions. Evaluation of the models' performance included three data sets, comprising subjects with type 1 diabetes (T1D) on multiple daily injections (MDI) (n = 21), subjects with type 2 diabetes (T2D) on MDI (n = 59), and subjects with T1D on insulin pump therapy (n = 226). RESULTS: On an aggregated data set, the two-hour glucose prediction model, at forecasting horizons of 30, 45, 60, and 120 minutes, achieved percentages of data points in zones A and B of the Consensus Error Grid of 99.8%, 99.3%, 98.7%, and 96.3%, respectively. The 30-minute low glucose prediction model achieved an accuracy, sensitivity, specificity, mean lead time, and area under the receiver operating characteristic curve (ROC AUC) of 98.9%, 95.2%, 98.9%, 16.2 minutes, and 0.958, respectively. The nighttime low glucose prediction model achieved an accuracy, sensitivity, specificity, and ROC AUC of 86.5%, 55.3%, 91.6%, and 0.859, respectively. CONCLUSIONS: The consistency of the performance of the three predictive models when evaluated on different cohorts of subjects with T1D and T2D on different insulin therapies, including real-world data, offers reassurance for real-world efficacy.
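The accuracy, sensitivity, specificity, and ROC AUC figures reported above are standard binary-classification metrics; for reference, they can be computed from first principles as follows (ROC AUC via the Mann-Whitney formulation, i.e., the probability that a random positive outscores a random negative, with ties counting one half):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (recall on the positive class) and specificity
    (recall on the negative class) from binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def roc_auc(y_true, scores):
    """ROC AUC as the fraction of positive/negative score pairs where the
    positive wins (ties count 0.5). O(n^2), fine for small examples."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Perfectly separated scores give an AUC of 1.0; a classifier no better than chance gives roughly 0.5.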
Subjects
Blood Glucose Self-Monitoring, Blood Glucose, Diabetes Mellitus, Type 1, Diabetes Mellitus, Type 2, Mobile Applications, Humans, Blood Glucose Self-Monitoring/instrumentation, Blood Glucose Self-Monitoring/methods, Blood Glucose/analysis, Diabetes Mellitus, Type 2/blood, Diabetes Mellitus, Type 2/drug therapy, Diabetes Mellitus, Type 1/blood, Diabetes Mellitus, Type 1/drug therapy, Female, Male, Machine Learning, Middle Aged, Adult, Insulin Infusion Systems, Continuous Glucose Monitoring
ABSTRACT
OBJECTIVE: Artificial intelligence and machine learning are transforming many fields, including medicine. In diabetes, robust biosensing technologies and automated insulin delivery therapies have created a substantial opportunity to improve health. While the number of manuscripts addressing the application of machine learning to diabetes has grown in recent years, there has been a lack of consistency in the methods, metrics, and data used to train and evaluate these algorithms. This manuscript provides consensus guidelines for machine learning practitioners in the field of diabetes, including best-practice recommended approaches and warnings about pitfalls to avoid. METHODS: Algorithmic approaches are reviewed and the benefits of different algorithms are discussed, including the importance of clinical accuracy, explainability, interpretability, and personalization. We review the most common features used in machine learning applications in diabetes glucose control and provide an open-source library of functions for calculating features, as well as a framework for specifying data sets using data sheets. A review of current data sets available for training algorithms is provided, as well as an online repository of data sources. SIGNIFICANCE: These consensus guidelines are designed to improve the performance and translatability of new machine learning algorithms developed in the field of diabetes by engineers and data scientists.
Subjects
Artificial Intelligence, Diabetes Mellitus, Humans, Glycemic Control, Machine Learning, Diabetes Mellitus/drug therapy, Algorithms
ABSTRACT
The availability of large amounts of data from continuous glucose monitoring (CGM), together with the latest advances in deep learning techniques, have opened the door to a new paradigm of algorithm design for personalized blood glucose (BG) prediction in type 1 diabetes (T1D) with superior performance. However, there are several challenges that prevent the widespread implementation of deep learning algorithms in actual clinical settings, including unclear prediction confidence and limited training data for new T1D subjects. To this end, we propose a novel deep learning framework, Fast-adaptive and Confident Neural Network (FCNN), to meet these clinical challenges. In particular, an attention-based recurrent neural network is used to learn representations from CGM input and forward a weighted sum of hidden states to an evidential output layer, aiming to compute personalized BG predictions with theoretically supported model confidence. The model-agnostic meta-learning is employed to enable fast adaptation for a new T1D subject with limited training data. The proposed framework has been validated on three clinical datasets. In particular, for a dataset including 12 subjects with T1D, FCNN achieved a root mean square error of 18.64±2.60 mg/dL and 31.07±3.62 mg/dL for 30 and 60-minute prediction horizons, respectively, which outperformed all the considered baseline methods with significant improvements. These results indicate that FCNN is a viable and effective approach for predicting BG levels in T1D. The well-trained models can be implemented in smartphone apps to improve glycemic control by enabling proactive actions through real-time glucose alerts.
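The attention mechanism described above — a weighted sum of RNN hidden states forwarded to the output layer — reduces to a softmax over alignment scores. A minimal NumPy sketch; dot-product scoring against a single query vector is our simplifying assumption, not necessarily the scoring function FCNN uses:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a score vector."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def attention_pool(hidden_states, query):
    """Weighted sum of RNN hidden states: dot-product scores against a
    learned query vector, normalised by softmax. Returns the pooled
    context vector and the attention weights (which sum to 1)."""
    scores = hidden_states @ query           # one score per time step
    weights = softmax(scores)
    return weights @ hidden_states, weights
```

The context vector then feeds the output layer, so time steps with high scores dominate the prediction while the rest are softly suppressed.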
Subjects
Deep Learning, Diabetes Mellitus, Type 1, Blood Glucose/analysis, Diabetes Mellitus, Type 1/blood, Diabetes Mellitus, Type 1/diagnosis, Humans
ABSTRACT
Time series data generated by continuous glucose monitoring sensors offer unparalleled opportunities for developing data-driven approaches, especially deep learning-based models, in diabetes management. Although these approaches have achieved state-of-the-art performance in various fields such as glucose prediction in type 1 diabetes (T1D), challenges remain in the acquisition of large-scale individual data for personalized modeling due to the elevated cost of clinical trials and data privacy regulations. In this work, we introduce GluGAN, a framework specifically designed for generating personalized glucose time series based on generative adversarial networks (GANs). Employing recurrent neural network (RNN) modules, the proposed framework uses a combination of unsupervised and supervised training to learn temporal dynamics in latent spaces. Aiming to assess the quality of synthetic data, we apply clinical metrics, distance scores, and discriminative and predictive scores computed by post-hoc RNNs in evaluation. Across three clinical datasets with 47 T1D subjects (including one publicly available and two proprietary datasets), GluGAN achieved better performance for all the considered metrics when compared with four baseline GAN models. The performance of data augmentation is evaluated by three machine learning-based glucose predictors. Using the training sets augmented by GluGAN significantly reduced the root mean square error for the predictors over 30 and 60-minute horizons. The results suggest that GluGAN is an effective method in generating high-quality synthetic glucose time series and has the potential to be used for evaluating the effectiveness of automated insulin delivery algorithms and as a digital twin to substitute for pre-clinical trials.
Subjects
Blood Glucose, Diabetes Mellitus, Type 1, Humans, Blood Glucose Self-Monitoring, Diabetes Mellitus, Type 1/drug therapy, Time Factors, Glucose
ABSTRACT
BACKGROUND: One of the biggest challenges for people with type 1 diabetes (T1D) using multiple daily injections (MDIs) is nocturnal hypoglycemia (NH). Recurrent NH can lead to serious complications; hence, prevention is of high importance. In this work, we develop and externally validate device-agnostic machine learning (ML) models to provide bedtime decision support to people with T1D and minimize the risk of NH. METHODS: We present the design and development of binary classifiers to predict NH (blood glucose levels occurring below 70 mg/dL). Using data collected from a 6-month study of 37 adult participants with T1D under free-living conditions, we extract daytime features from continuous glucose monitor (CGM) sensors and from administered insulin, meal, and physical activity information. We use these features to train and test the performance of two ML algorithms: Random Forests (RF) and Support Vector Machines (SVMs). We further evaluate our model in an external population of 20 adults with T1D using MDI insulin therapy and wearing CGM and flash glucose monitoring sensors for two periods of eight weeks each. RESULTS: At the population level, the SVM outperforms the RF algorithm with a receiver operating characteristic-area under curve (ROC-AUC) of 79.36% (95% CI: 76.86%, 81.86%). The proposed SVM model generalizes well to an unseen population (ROC-AUC = 77.06%), as well as between the two different glucose sensors (ROC-AUC = 77.74%). CONCLUSIONS: Our model shows state-of-the-art performance, generalizability, and robustness across sensor devices from different manufacturers. We believe it is a potentially viable approach to inform people with T1D about their risk of NH before it occurs.
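As a hedged sketch of the label and feature construction: the NH label follows the 70 mg/dL definition given above, while the features below are only a small illustrative subset of daytime CGM metrics (the study's full feature set also draws on insulin, meal, and physical activity information).

```python
import numpy as np

def nh_label(night_bg):
    """Binary nocturnal-hypoglycemia label: 1 if any overnight CGM
    reading falls below 70 mg/dL, else 0."""
    return int(np.min(night_bg) < 70)

def daytime_features(day_bg):
    """A few standard daytime CGM features: mean glucose, coefficient of
    variation, and percentage time in the 70-180 mg/dL range."""
    bg = np.asarray(day_bg, float)
    in_range = (bg >= 70) & (bg <= 180)
    return {"mean": float(bg.mean()),
            "cv": float(bg.std() / bg.mean()),
            "tir_pct": float(in_range.mean() * 100.0)}
```

Each training example pairs one day's feature dictionary with the following night's label, which is what the RF and SVM classifiers are fitted on.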
ABSTRACT
Background: The Advanced Bolus Calculator for Type 1 Diabetes (ABC4D) is a decision support system that uses the artificial intelligence technique of case-based reasoning to adapt and personalize insulin bolus doses. The integrated system comprises a smartphone application and a clinical web portal. We aimed to assess the safety and efficacy of the ABC4D (intervention) compared with a nonadaptive bolus calculator (control). Methods: This was a prospective randomized controlled crossover study. Following a 2-week run-in period, participants were randomized to ABC4D or control for 12 weeks. After a 6-week washout period, participants crossed over for 12 weeks. The primary outcome was the difference between groups in the change in daytime (07:00-22:00) percentage time in range (%TIR; 3.9-10.0 mmol/L [70-180 mg/dL]). Results: Thirty-seven adults with type 1 diabetes on multiple daily injections of insulin were randomized; median (interquartile range [IQR]) age was 44.7 (28.2-55.2) years, diabetes duration 15.0 (9.5-29.0) years, and glycated hemoglobin 61.0 (58.0-67.0) mmol/mol (7.7 [7.5-8.3]%). Data from 33 participants were analyzed. There was no significant difference in daytime %TIR change with ABC4D compared with control (median [IQR] +0.1 [-2.6 to +4.0]% vs. +1.9 [-3.8 to +10.1]%; P = 0.53). Participants accepted fewer meal dose recommendations in the intervention compared with control (78.7 [55.8-97.6]% vs. 93.5 [73.8-100]%; P = 0.009), with a greater reduction in insulin dosage from that recommended. Conclusion: The ABC4D is safe for adapting insulin bolus doses and provided the same level of glycemic control as the nonadaptive bolus calculator. Results suggest that participants did not follow the ABC4D recommendations as frequently as the control recommendations, impacting its effectiveness. Clinical Trials Registration: clinicaltrials.gov NCT03963219 (Phase 5).
Subjects
Diabetes Mellitus, Type 1, Adult, Humans, Diabetes Mellitus, Type 1/drug therapy, Hypoglycemic Agents/therapeutic use, Cross-Over Studies, Blood Glucose, Artificial Intelligence, Prospective Studies, Insulin/therapeutic use, Insulin, Regular, Human/therapeutic use
ABSTRACT
Background and Aims: The recent increase in wearable devices for diabetes care, and in particular the use of continuous glucose monitoring (CGM), generates large data sets and associated cybersecurity challenges. In this study, we demonstrate that it is possible to identify CGM data at an individual level by using standard machine learning techniques. Methods: The publicly available REPLACE-BG data set (NCT02258373) containing 226 adult participants with type 1 diabetes (T1D) wearing CGM over 6 months was used. A support vector machine (SVM) binary classifier aiming to determine if a CGM data stream belongs to an individual participant was trained and tested for each of the subjects in the data set. To generate the feature vector used for classification, 12 standard glycemic metrics were selected and evaluated at different time periods of the day (24 h, day, night, breakfast, lunch, and dinner). Different window lengths of CGM data (3, 7, 15, and 30 days) were chosen to evaluate their impact on the classification performance. A recursive feature selection method was employed to select the minimum subset of features that did not significantly degrade performance. Results: A total of 40 features were generated as a result of evaluating the glycemic metrics over the selected time periods (24 h, day, night, breakfast, lunch, and dinner). A window length of 15 days was found to perform the best in terms of accuracy (86.8% ± 12.8%) and F1 score (0.86 ± 0.16). The corresponding sensitivity and specificity were 85.7% ± 19.5% and 87.9% ± 17.5%, respectively. Through recursive feature selection, a subset of 9 features was shown to perform similarly to the 40 features. Conclusion: It is possible to determine with a relatively high accuracy if a CGM data stream belongs to an individual. The proposed approach can be used as a digital CGM "fingerprint" or for detecting glycemic changes within an individual, for example during intercurrent illness.
Subjects
Diabetes Mellitus, Type 1, Wearable Electronic Devices, Adult, Blood Glucose/metabolism, Blood Glucose Self-Monitoring, Diabetes Mellitus, Type 1/drug therapy, Humans, Machine Learning
ABSTRACT
Blood glucose prediction algorithms are key tools in the development of decision support systems and closed-loop insulin delivery systems for blood glucose control in diabetes. Deep learning models have provided leading results among machine learning algorithms to date in glucose prediction. However, these models typically require large amounts of data to obtain the best personalised glucose prediction results. Multitask learning facilitates an approach for leveraging data from multiple subjects while still learning accurate personalised models. In this work we present results comparing the effectiveness of multitask learning against sequential transfer learning and against learning only on subject-specific data, using neural networks and support vector regression. The multitask learning approach shows consistently leading performance in predictive metrics at both short-term and long-term prediction horizons. We obtain a predictive accuracy (RMSE) of 18.8 ± 2.3, 25.3 ± 2.9, 31.8 ± 3.9, 41.2 ± 4.5, and 47.2 ± 4.6 mg/dL at 30, 45, 60, 90, and 120 min prediction horizons, respectively, with at least 93% clinically acceptable predictions using Clarke error grid analysis (EGA) at each prediction horizon. We also identify relevant prior information, such as glycaemic variability, that can be incorporated to improve predictive performance at long-term prediction horizons. Furthermore, we show consistent performance (≤5% change in both RMSE and EGA Zone A) in rare cases of adverse glycaemic events with 1-6 weeks of training data. In conclusion, a multitask approach can allow for deploying personalised models even with significantly less subject-specific data without compromising performance.
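A common way to realise the multitask arrangement described above is hard parameter sharing: one shared encoder trained on every subject's data, plus a small per-subject head. The toy below uses a shared linear encoder as a stand-in for the shared network layers; all sizes, the gradient-descent loop, and the function name are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def train_multitask(tasks, dim, n_hidden=4, lr=0.01, epochs=500, seed=0):
    """Hard parameter sharing in miniature: a shared linear encoder W plus
    one head per subject, trained jointly by gradient descent on the
    squared error of every subject's data."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(dim, n_hidden))            # shared
    heads = [rng.normal(scale=0.1, size=n_hidden) for _ in tasks]
    for _ in range(epochs):
        for i, (X, y) in enumerate(tasks):
            h = X @ W                          # shared representation
            err = h @ heads[i] - y             # per-subject residual
            grad_head = h.T @ err / len(y)
            grad_W = np.outer(X.T @ err, heads[i]) / len(y)
            heads[i] = heads[i] - lr * grad_head
            W -= lr * grad_W                   # updated by every subject
    return W, heads
```

Because W receives gradients from all subjects while each head sees only its own subject's data, the shared part benefits from pooled data and the heads keep the models personalised.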
Subjects
Blood Glucose , Type 1 Diabetes Mellitus , Algorithms , Blood Glucose Self-Monitoring , Humans , Insulin/therapeutic use , Insulin Infusion Systems

ABSTRACT
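The hard-parameter-sharing idea behind multitask learning (a shared trunk trained on every subject's data, plus a small per-subject head) can be sketched as below. This is a toy illustration, not the paper's model: the "CGM histories" are synthetic standardised vectors, the network is a tiny tanh trunk trained by hand-written gradient descent, and all sizes and learning rates are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for standardised glucose histories: each "subject"
# maps the last 6 samples to a future value with slightly different,
# subject-specific dynamics. The real work trains on clinical CGM data.
base = np.array([0.05, 0.05, 0.1, 0.1, 0.2, 0.5])
subjects = []
for _ in range(3):
    coef = base + rng.normal(0, 0.02, 6)
    X = rng.normal(0, 1, (400, 6))
    y = X @ coef + rng.normal(0, 0.1, 400)
    subjects.append((X, y))

# Hard parameter sharing: one trunk W shared by all subjects,
# plus a small linear head per subject.
W = rng.normal(0, 0.1, (6, 8))
heads = [rng.normal(0, 0.1, 8) for _ in subjects]

def predict(X, s):
    return np.tanh(X @ W) @ heads[s]

def rmse(s):
    X, y = subjects[s]
    return float(np.sqrt(np.mean((predict(X, s) - y) ** 2)))

rmse_before = [rmse(s) for s in range(3)]

lr = 0.1
for _ in range(3000):
    for s, (X, y) in enumerate(subjects):
        h = np.tanh(X @ W)
        err = h @ heads[s] - y
        grad_head = h.T @ err / len(y)
        grad_W = X.T @ (np.outer(err, heads[s]) * (1 - h ** 2)) / len(y)
        heads[s] -= lr * grad_head
        W -= lr * grad_W  # the shared trunk sees every subject's data

rmse_after = [rmse(s) for s in range(3)]
print("per-subject RMSE before:", rmse_before)
print("per-subject RMSE after: ", rmse_after)
```

Because the trunk accumulates gradients from all subjects, each personalised head needs comparatively little subject-specific data, which is the property the abstract exploits.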
BACKGROUND: User-developed automated insulin delivery systems, also referred to as do-it-yourself artificial pancreas systems (DIY APS), are in use by people living with type 1 diabetes. In this work, we evaluate, in silico, the DIY APS Loop control algorithm and compare it head-to-head with the bio-inspired artificial pancreas (BiAP) controller for which clinical data are available. METHODS: The Python version of the Loop control algorithm, called PyLoopKit, was employed for evaluation purposes. A Python-MATLAB interface was created to integrate PyLoopKit with the UVa-Padova simulator. Two configurations of BiAP (non-adaptive and adaptive) were evaluated. In addition, the Tandem Basal-IQ predictive low-glucose suspend was used as a baseline algorithm. Two scenarios with different levels of variability were used to challenge the algorithms on the adult (n = 10) and adolescent (n = 10) virtual cohorts of the simulator. RESULTS: Both BiAP and Loop improve, or maintain, glycemic control when compared with Basal-IQ. Under the scenario with lower variability, BiAP and Loop perform relatively similarly. However, BiAP, and in particular its adaptive configuration, outperformed Loop in the scenario with higher variability by increasing the percentage time in the glucose target range 70-180 mg/dL (BiAP-Adaptive vs Loop vs Basal-IQ) (adults: 89.9% ± 3.2%* vs 79.5% ± 5.3%* vs 67.9% ± 8.3%; adolescents: 74.6% ± 9.5%* vs 53.0% ± 7.7% vs 55.4% ± 12.0%, where * indicates significance at P < .05, calculated in sequential order) while maintaining the percentage time below range (adults: 0.89% ± 0.37% vs 1.72% ± 1.26% vs 3.41% ± 1.92%; adolescents: 2.87% ± 2.77% vs 4.90% ± 1.92% vs 4.17% ± 2.74%). CONCLUSIONS: Both Loop and BiAP algorithms are safe and improve glycemic control when compared, in silico, with Basal-IQ. However, BiAP appears significantly more robust to real-world challenges, outperforming Loop and Basal-IQ in the more challenging scenario.
Subjects
Type 1 Diabetes Mellitus , Artificial Pancreas , Adolescent , Adult , Algorithms , Blood Glucose , Blood Glucose Self-Monitoring , Type 1 Diabetes Mellitus/drug therapy , Humans , Hypoglycemic Agents/therapeutic use , Insulin/therapeutic use , Insulin Infusion Systems

ABSTRACT
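The headline comparisons above rest on two standard CGM outcome metrics: percentage time in the 70-180 mg/dL target range and percentage time below range. A minimal sketch of how these are computed from a CGM trace (illustrative values, not simulator output):

```python
import numpy as np

def percent_time_in_range(cgm, low=70.0, high=180.0):
    """Percentage of CGM samples within [low, high] mg/dL."""
    cgm = np.asarray(cgm, dtype=float)
    return 100.0 * np.mean((cgm >= low) & (cgm <= high))

def percent_time_below(cgm, threshold=70.0):
    """Percentage of CGM samples below the hypoglycemia threshold."""
    cgm = np.asarray(cgm, dtype=float)
    return 100.0 * np.mean(cgm < threshold)

trace = np.array([65, 90, 150, 200, 110, 60, 175, 250])  # mg/dL
print(percent_time_in_range(trace))  # 50.0
print(percent_time_below(trace))     # 25.0
```

In the study these percentages are computed per virtual subject over the whole scenario and then summarised as mean ± SD across the cohort.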
People living with type 1 diabetes (T1D) require lifelong self-management to maintain glucose levels in a safe range. Failure to do so can lead to adverse glycemic events with short- and long-term complications. Continuous glucose monitoring (CGM) is widely used in T1D self-management for real-time glucose measurements, while smartphone apps are adopted as basic electronic diaries, data visualization tools, and simple decision support tools for insulin dosing. Applying a mixed effects logistic regression analysis to the outcomes of a six-week longitudinal study in 12 T1D adults using CGM and a clinically validated wearable sensor wristband (NCT ID: NCT03643692), we identified several significant associations between physiological measurements and hypo- and hyperglycemic events measured an hour later. We proceeded to develop a new smartphone-based platform, ARISES (Adaptive, Real-time, and Intelligent System to Enhance Self-care), with an embedded deep learning algorithm utilizing multi-modal data from CGM, daily entries of meal and bolus insulin, and the sensor wristband to predict glucose levels and hypo- and hyperglycemia. For a 60-minute prediction horizon, the proposed algorithm achieved an average root mean square error (RMSE) of 35.28 ± 5.77 mg/dL, with Matthews correlation coefficients for detecting hypoglycemia and hyperglycemia of 0.56 ± 0.07 and 0.70 ± 0.05, respectively. The use of wristband data significantly reduced the RMSE by 2.25 mg/dL (p < 0.01). The well-trained model is implemented on the ARISES app to provide real-time decision support. These results indicate that ARISES has great potential to mitigate the risk of severe complications and enhance self-management for people with T1D.
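The two evaluation metrics reported above (RMSE of the glucose forecast, and the Matthews correlation coefficient for hypo-/hyperglycemia detection) can be sketched as follows. The glucose values are illustrative placeholders, not study data:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between reference and predicted glucose."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mcc(event_true, event_pred):
    """Matthews correlation coefficient for binary event detection."""
    t = np.asarray(event_true, bool)
    p = np.asarray(event_pred, bool)
    tp = np.sum(t & p)
    tn = np.sum(~t & ~p)
    fp = np.sum(~t & p)
    fn = np.sum(t & ~p)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return float(tp * tn - fp * fn) / denom if denom else 0.0

# Hypoglycemia detection: compare "glucose < 70 mg/dL" flags derived
# from reference values and from 60-min-ahead predictions.
ref = np.array([65, 120, 180, 60, 95, 200])   # mg/dL, illustrative
pred = np.array([70, 110, 190, 55, 90, 185])  # mg/dL, illustrative
print(f"RMSE: {rmse(ref, pred):.2f} mg/dL")
print(f"MCC (hypoglycemia): {mcc(ref < 70, pred < 70):.2f}")
```

MCC is preferred over plain accuracy here because hypo- and hyperglycemic events are rare, so a classifier that never flags an event can still score high accuracy.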
ABSTRACT
Background and objective: Sub-therapeutic dosing of piperacillin-tazobactam in critically ill patients is associated with poor clinical outcomes and may promote the emergence of drug-resistant infections. In this paper, an in silico investigation of whether closed-loop control can improve pharmacokinetic-pharmacodynamic (PK-PD) target attainment is described. Method: An in silico platform was developed using PK data from 20 critically ill patients receiving piperacillin-tazobactam, in which serum and tissue interstitial fluid (ISF) PK were defined. Intra-day variability in renal clearance, ISF sensor error, and infusion constraints were taken into account. Proportional-integral-derivative (PID) control was selected for drug delivery modulation. Dose adjustment was made based on ISF sensor data with a 30-min sampling period, targeting a serum piperacillin concentration between 32 and 64 mg/L. A single tuning parameter set was employed across the virtual population. The PID controller was compared to standard therapy, including bolus and continuous infusion of piperacillin-tazobactam. Results: Despite significant inter-subject and simulated intra-day PK variability and sensor error, PID demonstrated a significant improvement in target attainment compared with traditional bolus and continuous infusion approaches. Conclusion: A PID controller driven by ISF drug concentration measurements has the potential to precisely deliver piperacillin-tazobactam in critically ill patients undergoing treatment for sepsis.
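The closed-loop idea above can be sketched with a PID controller acting on a one-compartment PK model. This is a minimal illustration, not the study's platform: the PK parameters, PID gains, and the noiseless 30-min sampling are all invented for demonstration, and the study's ISF compartment, sensor error, and infusion constraints are omitted.

```python
# Minimal PID dosing sketch (illustrative parameters, not the study's model).
V = 15.0       # L, volume of distribution (assumed)
CL = 10.0      # L/h, clearance (assumed)
k = CL / V     # 1/h, elimination rate constant
dt = 0.5       # h, 30-min sampling period
target = 48.0  # mg/L, midpoint of the 32-64 mg/L target window

Kp, Ki, Kd = 2.0, 0.5, 0.2  # illustrative PID gains
integral, prev_err = 0.0, 0.0
conc, history = 0.0, []

for step in range(96):  # simulate 48 h
    err = target - conc
    integral += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    # Infusion rate (mg/h), clamped at zero: the pump cannot remove drug.
    rate = max(0.0, Kp * err + Ki * integral + Kd * deriv) * V
    # One-compartment kinetics, forward-Euler step: dC/dt = rate/V - k*C
    conc += dt * (rate / V - k * conc)
    history.append(conc)

print(f"final serum concentration: {history[-1]:.1f} mg/L")
```

The integral term is what removes the steady-state offset that a purely proportional controller would leave against continuous elimination; in the study, the controller additionally has to cope with sensor error and intra-day clearance variability.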
ABSTRACT
Aims: To determine whether a longer duration of continuous glucose monitoring (CGM) sampling is needed to correctly assess the quality of glycemic control given different types of data loss. Materials and Methods: Data loss was generated using two different methods until the desired percentage of data loss (10-50%) was achieved: (1) eliminating individual CGM values at random and (2) eliminating gaps of a predefined time length (1-5 h). For CGM metrics, the days required to cross predetermined targets for median absolute percentage error (MdAPE) under the different data loss strategies were calculated and compared with the current international consensus recommendation of >70% of optimal data sampling. Results: Up to 90 days of CGM data from 291 adults with type 1 diabetes were analyzed. The MdAPE threshold crossing remained virtually constant for random CGM data loss up to 50% for all CGM metrics. However, the MdAPE crossing threshold increased when losing data in longer gaps. For all CGM metrics assessed in our study (%T70-180, %T < 70, %T < 54, %T > 180, and %T > 250), up to 50% data loss in a random manner did not cause any significant change in optimal sampling duration; however, >30% data loss in gaps of up to 5 h required a longer optimal sampling duration. Conclusions: Optimal sampling duration for CGM metrics depends on the percentage of data loss as well as its duration. The international consensus recommendation of 70% CGM data adequacy is sufficient to report %T70-180 with 2 weeks of data without large data gaps.
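The two data-loss strategies and the MdAPE comparison can be sketched as follows. This is an illustration under stated assumptions, not the study's analysis: the CGM trace is synthetic and, lacking real diurnal structure, will not reproduce the study's finding that gap loss degrades metrics more than random loss; all sizes and thresholds are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def drop_random(cgm, frac):
    """Strategy 1: remove individual CGM samples at random."""
    keep = rng.random(cgm.size) >= frac
    return cgm[keep]

def drop_gaps(cgm, frac, gap_len):
    """Strategy 2: remove contiguous gaps of gap_len samples until
    roughly `frac` of the data is gone."""
    mask = np.ones(cgm.size, bool)
    while mask.mean() > 1 - frac:
        start = rng.integers(0, cgm.size - gap_len)
        mask[start:start + gap_len] = False
    return cgm[mask]

def pct_time_in_range(cgm):
    """%T70-180, the headline CGM metric."""
    return 100.0 * np.mean((cgm >= 70) & (cgm <= 180))

def mdape(estimates, reference):
    """Median absolute percentage error of degraded-data estimates
    against the full-data reference value."""
    estimates = np.asarray(estimates, float)
    return float(np.median(np.abs(estimates - reference) / reference) * 100)

# 90 days of synthetic 5-min CGM data (288 samples/day).
cgm = np.clip(rng.normal(145, 45, 288 * 90), 40, 400)
ref = pct_time_in_range(cgm)

rand_est = [pct_time_in_range(drop_random(cgm, 0.3)) for _ in range(50)]
gap_est = [pct_time_in_range(drop_gaps(cgm, 0.3, 12 * 5)) for _ in range(50)]
print(f"MdAPE, 30% random loss:      {mdape(rand_est, ref):.2f}%")
print(f"MdAPE, 30% loss in 5-h gaps: {mdape(gap_est, ref):.2f}%")
```

In the study, this MdAPE is tracked as a function of sampling duration (days of CGM), and the "days to cross a predetermined MdAPE target" is what lengthens when data are lost in long gaps rather than at random.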