1.
IEEE Rev Biomed Eng ; 17: 19-41, 2024.
Article En | MEDLINE | ID: mdl-37943654

OBJECTIVE: Artificial intelligence and machine learning are transforming many fields including medicine. In diabetes, robust biosensing technologies and automated insulin delivery therapies have created a substantial opportunity to improve health. While the number of manuscripts addressing the topic of applying machine learning to diabetes has grown in recent years, there has been a lack of consistency in the methods, metrics, and data used to train and evaluate these algorithms. This manuscript provides consensus guidelines for machine learning practitioners in the field of diabetes, including best practice recommended approaches and warnings about pitfalls to avoid. METHODS: Algorithmic approaches are reviewed and the benefits of different algorithms are discussed, including the importance of clinical accuracy, explainability, interpretability, and personalization. We review the most common features used in machine learning applications in diabetes glucose control and provide an open-source library of functions for calculating features, as well as a framework for specifying data sets using data sheets. A review of data sets currently available for training algorithms is provided, along with an online repository of data sources. SIGNIFICANCE: These consensus guidelines are designed to help engineers and data scientists improve the performance and translatability of new machine learning algorithms developed in the field of diabetes.


Artificial Intelligence , Diabetes Mellitus , Humans , Glycemic Control , Machine Learning , Diabetes Mellitus/drug therapy , Algorithms
2.
J Diabetes Sci Technol ; : 19322968231185796, 2023 Jul 11.
Article En | MEDLINE | ID: mdl-37434362

BACKGROUND: One of the biggest challenges for people with type 1 diabetes (T1D) using multiple daily injections (MDIs) is nocturnal hypoglycemia (NH). Recurrent NH can lead to serious complications; hence, prevention is of high importance. In this work, we develop and externally validate device-agnostic machine learning (ML) models to provide bedtime decision support to people with T1D and minimize the risk of NH. METHODS: We present the design and development of binary classifiers to predict NH (blood glucose levels below 70 mg/dL). Using data collected from a 6-month study of 37 adult participants with T1D under free-living conditions, we extract daytime features from continuous glucose monitor (CGM) sensors, administered insulin, meal, and physical activity information. We use these features to train and test the performance of two ML algorithms: random forests (RF) and support vector machines (SVMs). We further evaluate our model in an external population of 20 adults with T1D using MDI insulin therapy and wearing CGM and flash glucose monitoring sensors for two periods of eight weeks each. RESULTS: At the population level, the SVM outperforms the RF algorithm with a receiver operating characteristic-area under curve (ROC-AUC) of 79.36% (95% CI: 76.86%, 81.86%). The proposed SVM model generalizes well to an unseen population (ROC-AUC = 77.06%), as well as between the two different glucose sensors (ROC-AUC = 77.74%). CONCLUSIONS: Our model shows state-of-the-art performance, generalizability, and robustness across sensor devices from different manufacturers. We believe it is a potentially viable approach to informing people with T1D about their risk of NH before it occurs.
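The binary prediction target described in this abstract (a night labeled positive when any overnight glucose reading drops below 70 mg/dL) can be sketched in a few lines of Python; the function name and toy traces below are illustrative, not taken from the study:

```python
def label_nocturnal_hypo(overnight_bg, threshold_mg_dl=70):
    """Label a night as positive (1) if any overnight blood glucose
    reading falls below the hypoglycemia threshold, else negative (0)."""
    return 1 if min(overnight_bg) < threshold_mg_dl else 0

# Toy overnight CGM traces (mg/dL), one list per night
nights = [
    [110, 95, 82, 65, 78],    # dips below 70 -> nocturnal hypoglycemia
    [140, 120, 105, 98, 101], # stays in range -> no event
]
labels = [label_nocturnal_hypo(n) for n in nights]
```

In the study, labels like these would pair with daytime features (CGM statistics, insulin, meals, activity) to train the RF and SVM classifiers.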

3.
IEEE J Biomed Health Inform ; 27(10): 5122-5133, 2023 10.
Article En | MEDLINE | ID: mdl-37134028

Time series data generated by continuous glucose monitoring sensors offer unparalleled opportunities for developing data-driven approaches, especially deep learning-based models, in diabetes management. Although these approaches have achieved state-of-the-art performance in various fields such as glucose prediction in type 1 diabetes (T1D), challenges remain in the acquisition of large-scale individual data for personalized modeling due to the elevated cost of clinical trials and data privacy regulations. In this work, we introduce GluGAN, a framework specifically designed for generating personalized glucose time series based on generative adversarial networks (GANs). Employing recurrent neural network (RNN) modules, the proposed framework uses a combination of unsupervised and supervised training to learn temporal dynamics in latent spaces. Aiming to assess the quality of synthetic data, we apply clinical metrics, distance scores, and discriminative and predictive scores computed by post-hoc RNNs in evaluation. Across three clinical datasets with 47 T1D subjects (including one publicly available and two proprietary datasets), GluGAN achieved better performance for all the considered metrics when compared with four baseline GAN models. The performance of data augmentation is evaluated by three machine learning-based glucose predictors. Using the training sets augmented by GluGAN significantly reduced the root mean square error for the predictors over 30 and 60-minute horizons. The results suggest that GluGAN is an effective method in generating high-quality synthetic glucose time series and has the potential to be used for evaluating the effectiveness of automated insulin delivery algorithms and as a digital twin to substitute for pre-clinical trials.


Blood Glucose , Diabetes Mellitus, Type 1 , Humans , Blood Glucose Self-Monitoring , Diabetes Mellitus, Type 1/drug therapy , Time Factors , Glucose
4.
Diabetes Technol Ther ; 25(6): 414-425, 2023 06.
Article En | MEDLINE | ID: mdl-37017468

Background: The Advanced Bolus Calculator for Type 1 Diabetes (ABC4D) is a decision support system using the artificial intelligence technique of case-based reasoning to adapt and personalize insulin bolus doses. The integrated system comprises a smartphone application and clinical web portal. We aimed to assess the safety and efficacy of the ABC4D (intervention) compared with a nonadaptive bolus calculator (control). Methods: This was a prospective randomized controlled crossover study. Following a 2-week run-in period, participants were randomized to ABC4D or control for 12 weeks. After a 6-week washout period, participants crossed over for 12 weeks. The primary outcome was difference in % time in range (%TIR) (3.9-10.0 mmol/L [70-180 mg/dL]) change during the daytime (07:00-22:00) between groups. Results: Thirty-seven adults with type 1 diabetes on multiple daily injections of insulin were randomized, median (interquartile range [IQR]) age 44.7 (28.2-55.2) years, diabetes duration 15.0 (9.5-29.0) years, and glycated hemoglobin 61.0 (58.0-67.0) mmol/mol (7.7 [7.5-8.3]%). Data from 33 participants were analyzed. There was no significant difference in daytime %TIR change with ABC4D compared with control (median [IQR] +0.1 [-2.6 to +4.0]% vs. +1.9 [-3.8 to +10.1]%; P = 0.53). Participants accepted fewer meal dose recommendations in the intervention compared with control (78.7 [55.8-97.6]% vs. 93.5 [73.8-100]%; P = 0.009), with a greater reduction in insulin dosage from that recommended. Conclusion: The ABC4D is safe for adapting insulin bolus doses and provided the same level of glycemic control as the nonadaptive bolus calculator. Results suggest that participants did not follow the ABC4D recommendations as frequently as control, impacting its effectiveness. Clinical Trials Registration: clinicaltrials.gov NCT03963219 (Phase 5).


Diabetes Mellitus, Type 1 , Adult , Humans , Diabetes Mellitus, Type 1/drug therapy , Hypoglycemic Agents/therapeutic use , Cross-Over Studies , Blood Glucose , Artificial Intelligence , Prospective Studies , Insulin/therapeutic use , Insulin, Regular, Human/therapeutic use
5.
IEEE Trans Biomed Eng ; 70(1): 193-204, 2023 01.
Article En | MEDLINE | ID: mdl-35776825

The availability of large amounts of data from continuous glucose monitoring (CGM), together with the latest advances in deep learning techniques, have opened the door to a new paradigm of algorithm design for personalized blood glucose (BG) prediction in type 1 diabetes (T1D) with superior performance. However, there are several challenges that prevent the widespread implementation of deep learning algorithms in actual clinical settings, including unclear prediction confidence and limited training data for new T1D subjects. To this end, we propose a novel deep learning framework, Fast-adaptive and Confident Neural Network (FCNN), to meet these clinical challenges. In particular, an attention-based recurrent neural network is used to learn representations from CGM input and forward a weighted sum of hidden states to an evidential output layer, aiming to compute personalized BG predictions with theoretically supported model confidence. The model-agnostic meta-learning is employed to enable fast adaptation for a new T1D subject with limited training data. The proposed framework has been validated on three clinical datasets. In particular, for a dataset including 12 subjects with T1D, FCNN achieved a root mean square error of 18.64±2.60 mg/dL and 31.07±3.62 mg/dL for 30 and 60-minute prediction horizons, respectively, which outperformed all the considered baseline methods with significant improvements. These results indicate that FCNN is a viable and effective approach for predicting BG levels in T1D. The well-trained models can be implemented in smartphone apps to improve glycemic control by enabling proactive actions through real-time glucose alerts.
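The root mean square error used here to report 30- and 60-minute horizon accuracy is the standard metric for blood glucose prediction; a minimal sketch (toy values, not the paper's data):

```python
import math

def rmse(predicted, reference):
    """Root mean square error between predicted and reference glucose
    values (mg/dL), the headline metric for BG prediction accuracy."""
    assert len(predicted) == len(reference)
    squared = [(p - r) ** 2 for p, r in zip(predicted, reference)]
    return math.sqrt(sum(squared) / len(squared))

# Toy 30-minute-horizon predictions vs. CGM reference (mg/dL)
error = rmse([102, 118, 131], [100, 120, 128])
```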


Deep Learning , Diabetes Mellitus, Type 1 , Blood Glucose/analysis , Diabetes Mellitus, Type 1/blood , Diabetes Mellitus, Type 1/diagnosis , Humans
6.
Front Bioeng Biotechnol ; 10: 1015389, 2022.
Article En | MEDLINE | ID: mdl-36338121

Background and objective: Sub-therapeutic dosing of piperacillin-tazobactam in critically-ill patients is associated with poor clinical outcomes and may promote the emergence of drug-resistant infections. In this paper, an in silico investigation of whether closed-loop control can improve pharmacokinetic-pharmacodynamic (PK-PD) target attainment is described. Method: An in silico platform was developed using PK data from 20 critically-ill patients receiving piperacillin-tazobactam where serum and tissue interstitial fluid (ISF) PK were defined. Intra-day variability on renal clearance, ISF sensor error, and infusion constraints were taken into account. Proportional-integral-derivative (PID) control was selected for drug delivery modulation. Dose adjustment was made based on ISF sensor data with a 30-min sampling period, targeting a serum piperacillin concentration between 32 and 64 mg/L. A single tuning parameter set was employed across the virtual population. The PID controller was compared to standard therapy, including bolus and continuous infusion of piperacillin-tazobactam. Results: Despite significant inter-subject and simulated intra-day PK variability and sensor error, PID demonstrated a significant improvement in target attainment compared to traditional bolus and continuous infusion approaches. Conclusion: A PID controller driven by ISF drug concentration measurements has the potential to precisely deliver piperacillin-tazobactam in critically-ill patients undergoing treatment for sepsis.
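The closed-loop scheme described here can be sketched as a textbook PID controller driving a toy one-compartment drug model; the gains, clearance constant, and plant below are illustrative assumptions, not the study's tuned parameters (the derivative term is omitted for brevity):

```python
class PID:
    """Proportional-integral controller; gains are illustrative only."""
    def __init__(self, kp, ki, setpoint):
        self.kp, self.ki, self.setpoint = kp, ki, setpoint
        self.integral = 0.0

    def step(self, measurement):
        error = self.setpoint - measurement
        self.integral += error
        # Infusion rate cannot be negative
        return max(0.0, self.kp * error + self.ki * self.integral)

# Toy one-compartment model: 10% of drug cleared per 30-min step
pid = PID(kp=0.5, ki=0.1, setpoint=48.0)  # mid-point of 32-64 mg/L target
conc = 0.0
for _ in range(200):
    infusion = pid.step(conc)
    conc = 0.9 * conc + infusion
```

With these (hypothetical) gains the simulated concentration settles near the setpoint despite the constant clearance, which is the behavior the in silico study tested under PK variability and sensor error.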

7.
Diabetes Technol Ther ; 24(10): 749-753, 2022 10.
Article En | MEDLINE | ID: mdl-35653736

Aims: To determine if a longer duration of continuous glucose monitoring (CGM) sampling is needed to correctly assess the quality of glycemic control given different types of data loss. Materials and Methods: Data loss was generated using two different methods until the desired percentage of data loss (10-50%) was achieved: (1) eliminating random individual CGM values and (2) eliminating gaps of a predefined time length (1-5 h). For CGM metrics, days required to cross predetermined targets for median absolute percentage error (MdAPE) for the different data loss strategies were calculated and compared with the current international consensus recommendation of >70% of optimal data sampling. Results: Up to 90 days of CGM data from 291 adults with type 1 diabetes were analyzed. MdAPE threshold crossing remained virtually constant for random CGM data loss up to 50% for all CGM metrics. However, the MdAPE crossing threshold increased when losing data with longer gaps. For all CGM metrics assessed in our study (%T70-180, %T < 70, %T < 54, %T > 180, and %T > 250), up to 50% data loss in a random manner did not cause any significant change in optimal sampling duration; however, >30% data loss in gaps of up to 5 h required a longer optimal sampling duration. Conclusions: Optimal sampling duration for CGM metrics depends on the percentage of data loss as well as the duration of data loss. The international consensus recommendation of 70% CGM data adequacy is sufficient to report %T70-180 with 2 weeks of data without large data gaps.
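The two data-loss mechanisms compared here, random point removal versus a contiguous gap, can be sketched as follows (function names and the seeded generator are illustrative, not the study's code):

```python
import random

def random_point_loss(trace, fraction, seed=0):
    """Remove a fraction of individual CGM readings at random,
    preserving the original order of the surviving samples."""
    rng = random.Random(seed)
    keep = set(rng.sample(range(len(trace)), int(len(trace) * (1 - fraction))))
    return [v for i, v in enumerate(trace) if i in keep]

def gap_loss(trace, gap_len, start):
    """Remove one contiguous gap of gap_len readings starting at start."""
    return trace[:start] + trace[start + gap_len:]

trace = list(range(288))                  # one day of 5-minute CGM samples
thinned = random_point_loss(trace, 0.30)  # 30% random point loss
gapped = gap_loss(trace, 60, 100)         # one 5-hour gap (60 samples)
```

The study's finding is that metrics tolerate the first kind of loss far better than the second, even at the same overall percentage.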


Diabetes Mellitus, Type 1 , Hypoglycemia , Adult , Blood Glucose , Blood Glucose Self-Monitoring/methods , Diabetes Mellitus, Type 1/drug therapy , Glycated Hemoglobin/analysis , Humans
8.
NPJ Digit Med ; 5(1): 78, 2022 Jun 27.
Article En | MEDLINE | ID: mdl-35760819

People living with type 1 diabetes (T1D) require lifelong self-management to maintain glucose levels in a safe range. Failure to do so can lead to adverse glycemic events with short- and long-term complications. Continuous glucose monitoring (CGM) is widely used in T1D self-management for real-time glucose measurements, while smartphone apps are adopted as basic electronic diaries, data visualization tools, and simple decision support tools for insulin dosing. Applying a mixed effects logistic regression analysis to the outcomes of a six-week longitudinal study in 12 T1D adults using CGM and a clinically validated wearable sensor wristband (NCT ID: NCT03643692), we identified several significant associations between physiological measurements and hypo- and hyperglycemic events measured an hour later. We proceeded to develop a new smartphone-based platform, ARISES (Adaptive, Real-time, and Intelligent System to Enhance Self-care), with an embedded deep learning algorithm utilizing multi-modal data from CGM, daily entries of meal and bolus insulin, and the sensor wristband to predict glucose levels and hypo- and hyperglycemia. For a 60-minute prediction horizon, the proposed algorithm achieved an average root mean square error (RMSE) of 35.28 ± 5.77 mg/dL, with Matthews correlation coefficients of 0.56 ± 0.07 and 0.70 ± 0.05 for detecting hypoglycemia and hyperglycemia, respectively. The use of wristband data significantly reduced the RMSE by 2.25 mg/dL (p < 0.01). The well-trained model is implemented on the ARISES app to provide real-time decision support. These results indicate that ARISES has great potential to mitigate the risk of severe complications and enhance self-management for people with T1D.
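The Matthews correlation coefficient used to score hypo- and hyperglycemia detection here is computed from confusion-matrix counts; a minimal sketch with toy counts:

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts:
    +1 is perfect detection, 0 is chance-level, -1 is total disagreement."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

score = matthews_corrcoef(tp=2, tn=3, fp=1, fn=1)
```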

9.
Sensors (Basel) ; 22(2)2022 Jan 08.
Article En | MEDLINE | ID: mdl-35062427

Current artificial pancreas (AP) systems are hybrid closed-loop systems that require manual meal announcements to manage postprandial glucose control effectively. This poses a cognitive burden and challenge to users with T1D since it relies on frequent user engagement to maintain tight glucose control. In order to move towards fully automated closed-loop glucose control, we propose an algorithm based on a deep learning framework that performs multitask quantile regression for both meal detection and carbohydrate estimation. Our proposed method is evaluated in silico on 10 adult subjects from the UVa/Padova simulator with a Bio-inspired Artificial Pancreas (BiAP) control algorithm over a 2 month period. Three different configurations of the AP are evaluated: BiAP without meal announcement (BiAP-NMA), BiAP with meal announcement (BiAP-MA), and BiAP with meal detection (BiAP-MD). We present results showing an improvement of BiAP-MD over BiAP-NMA, demonstrating 144.5 ± 6.8 mg/dL mean blood glucose level (-4.4 mg/dL, p < 0.01) and 77.8 ± 6.3% mean time between 70 and 180 mg/dL (+3.9%, p < 0.001). This improvement in control is realised without a significant increase in mean time in hypoglycaemia (+0.1%, p = 0.4). In terms of detection of meals and snacks, the proposed method on average achieves 93% precision and 76% recall with a detection delay time of 38 ± 15 min (92% precision, 92% recall, and 37 min detection time for meals only). Furthermore, BiAP-MD handles hypoglycaemia better than BiAP-MA based on CVGA assessment with fewer control errors (10% vs. 20%). This study suggests that multitask quantile regression can improve the capability of AP systems for postprandial glucose control without increasing hypoglycaemia.
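Quantile regression, as used here for carbohydrate estimation, is trained with the pinball loss, which penalizes under- and over-prediction asymmetrically; a minimal sketch (the carbohydrate example is illustrative, not from the paper):

```python
def pinball_loss(y_true, y_pred, quantile):
    """Quantile (pinball) loss: for quantile q, under-prediction costs
    q per unit of error while over-prediction costs (1 - q) per unit."""
    diff = y_true - y_pred
    return max(quantile * diff, (quantile - 1) * diff)

# Estimating the 0.9 quantile of meal carbohydrates (g): under-prediction
# is penalized 9x more heavily than over-prediction of the same size.
under = pinball_loss(60, 50, 0.9)  # true 60 g, predicted 50 g
over = pinball_loss(50, 60, 0.9)   # true 50 g, predicted 60 g
```

Minimizing this loss pushes the regressor toward the chosen conditional quantile rather than the mean, which is useful when over- and under-dosing carry different clinical risks.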


Deep Learning , Diabetes Mellitus, Type 1 , Pancreas, Artificial , Adult , Algorithms , Blood Glucose , Blood Glucose Self-Monitoring , Diabetes Mellitus, Type 1/drug therapy , Humans , Insulin , Insulin Infusion Systems , Meals
10.
Diabetes Technol Ther ; 24(6): 403-408, 2022 06.
Article En | MEDLINE | ID: mdl-35099288

Background and Aims: The recent increase in wearable devices for diabetes care, and in particular the use of continuous glucose monitoring (CGM), generates large data sets and associated cybersecurity challenges. In this study, we demonstrate that it is possible to identify CGM data at an individual level by using standard machine learning techniques. Methods: The publicly available REPLACE-BG data set (NCT02258373) containing 226 adult participants with type 1 diabetes (T1D) wearing CGM over 6 months was used. A support vector machine (SVM) binary classifier aiming to determine if a CGM data stream belongs to an individual participant was trained and tested for each of the subjects in the data set. To generate the feature vector used for classification, 12 standard glycemic metrics were selected and evaluated at different time periods of the day (24 h, day, night, breakfast, lunch, and dinner). Different window lengths of CGM data (3, 7, 15, and 30 days) were chosen to evaluate their impact on the classification performance. A recursive feature selection method was employed to select the minimum subset of features that did not significantly degrade performance. Results: A total of 40 features were generated as a result of evaluating the glycemic metrics over the selected time periods (24 h, day, night, breakfast, lunch, and dinner). A window length of 15 days was found to perform the best in terms of accuracy (86.8% ± 12.8%) and F1 score (0.86 ± 0.16). The corresponding sensitivity and specificity were 85.7% ± 19.5% and 87.9% ± 17.5%, respectively. Through recursive feature selection, a subset of 9 features was shown to perform similarly to the 40 features. Conclusion: It is possible to determine with a relatively high accuracy if a CGM data stream belongs to an individual. The proposed approach can be used as a digital CGM "fingerprint" or for detecting glycemic changes within an individual, for example during intercurrent illness.
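The feature vector described here is built from standard glycemic metrics evaluated over time windows; a sketch computing a few such metrics (the study used 12 metrics over six daily periods; the subset and names below are illustrative):

```python
def glycemic_features(cgm):
    """A handful of standard glycemic metrics over a CGM window (mg/dL);
    in the study, metrics like these over six daily periods formed the
    40-dimensional per-subject 'fingerprint' feature vector."""
    n = len(cgm)
    return {
        "mean": sum(cgm) / n,
        "pct_tir_70_180": 100 * sum(70 <= g <= 180 for g in cgm) / n,
        "pct_below_70": 100 * sum(g < 70 for g in cgm) / n,
        "pct_above_180": 100 * sum(g > 180 for g in cgm) / n,
    }

feats = glycemic_features([50, 100, 150, 200])  # toy 4-sample window
```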


Diabetes Mellitus, Type 1 , Wearable Electronic Devices , Adult , Blood Glucose/metabolism , Blood Glucose Self-Monitoring , Diabetes Mellitus, Type 1/drug therapy , Humans , Machine Learning
11.
IEEE J Biomed Health Inform ; 26(1): 436-445, 2022 01.
Article En | MEDLINE | ID: mdl-34314367

Blood glucose prediction algorithms are key tools in the development of decision support systems and closed-loop insulin delivery systems for blood glucose control in diabetes. Deep learning models have provided leading results among machine learning algorithms to date in glucose prediction. However, these models typically require large amounts of data to obtain the best personalised glucose prediction results. Multitask learning facilitates an approach for leveraging data from multiple subjects while still learning accurate personalised models. In this work, we present results comparing the effectiveness of multitask learning against sequential transfer learning and against learning only on subject-specific data with neural networks and support vector regression. The multitask learning approach shows consistently leading performance in predictive metrics at both short-term and long-term prediction horizons. We obtain a predictive accuracy (RMSE) of 18.8 ± 2.3, 25.3 ± 2.9, 31.8 ± 3.9, 41.2 ± 4.5, and 47.2 ± 4.6 mg/dL at 30, 45, 60, 90, and 120 min prediction horizons respectively, with at least 93% clinically acceptable predictions using Clarke Error Grid Analysis (EGA) at each prediction horizon. We also identify relevant prior information, such as glycaemic variability, that can be incorporated to improve predictive performance at long-term prediction horizons. Furthermore, we show consistent performance (≤5% change in both RMSE and EGA Zone A) in rare cases of adverse glycaemic events with 1-6 weeks of training data. In conclusion, a multitask approach can allow for deploying personalised models even with significantly less subject-specific data without compromising performance.
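The "clinically acceptable" criterion reported here comes from Clarke Error Grid Analysis; its Zone A condition can be sketched in simplified form (this is a common simplification of the full grid, not the paper's implementation):

```python
def clarke_zone_a(reference, predicted):
    """Simplified Clarke error grid Zone A test: a prediction counts as
    clinically accurate if it lies within 20% of the reference value,
    or if both values are in the hypoglycemic range (< 70 mg/dL)."""
    if reference < 70 and predicted < 70:
        return True
    return abs(predicted - reference) <= 0.2 * reference

# Toy (reference, predicted) pairs in mg/dL
pairs = [(100, 115), (100, 130), (60, 65)]
zone_a_pct = 100 * sum(clarke_zone_a(r, p) for r, p in pairs) / len(pairs)
```

The remaining zones (B-E) grade progressively more dangerous errors; publications usually report the percentage of predictions falling in Zone A (or A+B).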


Blood Glucose , Diabetes Mellitus, Type 1 , Algorithms , Blood Glucose Self-Monitoring , Humans , Insulin/therapeutic use , Insulin Infusion Systems
12.
J Diabetes Sci Technol ; 16(1): 29-39, 2022 Jan.
Article En | MEDLINE | ID: mdl-34861785

BACKGROUND: User-developed automated insulin delivery systems, also referred to as do-it-yourself artificial pancreas systems (DIY APS), are in use by people living with type 1 diabetes. In this work, we evaluate, in silico, the DIY APS Loop control algorithm and compare it head-to-head with the bio-inspired artificial pancreas (BiAP) controller for which clinical data are available. METHODS: The Python version of the Loop control algorithm called PyLoopKit was employed for evaluation purposes. A Python-MATLAB interface was created to integrate PyLoopKit with the UVa-Padova simulator. Two configurations of BiAP (non-adaptive and adaptive) were evaluated. In addition, the Tandem Basal-IQ predictive low-glucose suspend was used as a baseline algorithm. Two scenarios with different levels of variability were used to challenge the algorithms on the adult (n = 10) and adolescent (n = 10) virtual cohorts of the simulator. RESULTS: Both BiAP and Loop improve, or maintain, glycemic control when compared with Basal-IQ. Under the scenario with lower variability, BiAP and Loop perform relatively similarly. However, BiAP, and in particular its adaptive configuration, outperformed Loop in the scenario with higher variability by increasing the percentage time in glucose target range 70-180 mg/dL (BiAP-Adaptive vs Loop vs Basal-IQ) (adults: 89.9% ± 3.2%* vs 79.5% ± 5.3%* vs 67.9% ± 8.3%; adolescents: 74.6% ± 9.5%* vs 53.0% ± 7.7% vs 55.4% ± 12.0%, where * indicates significance at P < .05, calculated in sequential order) while maintaining the percentage time below range (adults: 0.89% ± 0.37% vs 1.72% ± 1.26% vs 3.41% ± 1.92%; adolescents: 2.87% ± 2.77% vs 4.90% ± 1.92% vs 4.17% ± 2.74%). CONCLUSIONS: Both Loop and BiAP algorithms are safe and improve glycemic control when compared, in silico, with Basal-IQ. However, BiAP appears significantly more robust to real-world challenges by outperforming Loop and Basal-IQ in the more challenging scenario.


Diabetes Mellitus, Type 1 , Pancreas, Artificial , Adolescent , Adult , Algorithms , Blood Glucose , Blood Glucose Self-Monitoring , Diabetes Mellitus, Type 1/drug therapy , Humans , Hypoglycemic Agents/therapeutic use , Insulin/therapeutic use , Insulin Infusion Systems
13.
Comput Methods Programs Biomed ; 208: 106205, 2021 Sep.
Article En | MEDLINE | ID: mdl-34118493

BACKGROUND: There are several medical devices used in Colombia for diabetes management, most of which have an associated telemedicine platform to access the data. In this work, we present the results of a pilot study evaluating the use of the Tidepool telemedicine platform for providing remote diabetes health services in Colombia across multiple devices. METHOD: Individuals with Type 1 and Type 2 diabetes using multiple diabetes devices were recruited to evaluate the user experience with Tidepool over three months. Two endocrinologists used the Tidepool software to maintain weekly communication with participants, reviewing the device data remotely. Demographic, clinical, psychological, and usability data were collected at several stages of the study. RESULTS: Six of the ten participants enrolled at baseline (five using MDI and five using CSII) completed this pilot study. Three different diabetes devices were employed by the participants: a glucose meter (Abbott), an intermittently scanned glucose monitor (Abbott), and an insulin pump (Medtronic). A score of 81.3 on the System Usability Scale revealed that, overall, most participants found the system easy to use, especially the web interface. The system also compared highly favourably against the proprietary platforms. The ability to upload and share data and communicate remotely with clinicians was consistently highlighted by participants. Clinicians cited the lockdown imposed by the COVID-19 pandemic as a valuable test for this platform. Inability to upload data from mobile devices was identified as one of the main limitations. CONCLUSION: Tidepool has the potential to be used as a tool to facilitate remote diabetes care in Colombia. Users, both participants and clinicians, agreed they would recommend the use of platforms like Tidepool to achieve better disease management and communication with the health care team. Some improvements were identified to enhance the user experience.


COVID-19 , Diabetes Mellitus, Type 2 , Telemedicine , Cloud Computing , Colombia , Communicable Disease Control , Diabetes Mellitus, Type 2/therapy , Humans , Pandemics , Pilot Projects , SARS-CoV-2
14.
Nat Rev Microbiol ; 19(12): 747-758, 2021 12.
Article En | MEDLINE | ID: mdl-34158654

An optimal antimicrobial dose provides enough drug to achieve a clinical response while minimizing toxicity and development of drug resistance. There can be considerable variability in pharmacokinetics, for example, owing to comorbidities or other medications, which affects antimicrobial pharmacodynamics and, thus, treatment success. Although current approaches to antimicrobial dose optimization address fixed variability, better methods to monitor and rapidly adjust antimicrobial dosing are required to understand and react to residual variability that occurs within and between individuals. We review current challenges to the wider implementation of antimicrobial dose optimization and highlight novel solutions, including biosensor-based, real-time therapeutic drug monitoring and computer-controlled, closed-loop control systems. Precision antimicrobial dosing promises to improve patient outcome and is important for antimicrobial stewardship and the prevention of antimicrobial resistance.


Anti-Infective Agents/pharmacokinetics , Antimicrobial Stewardship , Bacterial Infections/drug therapy , Drug Monitoring/methods , Artificial Intelligence , Biosensing Techniques , Decision Support Systems, Clinical , Drug Resistance, Microbial , Humans
15.
JAC Antimicrob Resist ; 3(1): dlab002, 2021 Mar.
Article En | MEDLINE | ID: mdl-34192255

BACKGROUND: Bacterial infection has been challenging to diagnose in patients with COVID-19. We developed and evaluated supervised machine learning algorithms to support the diagnosis of secondary bacterial infection in hospitalized patients during the COVID-19 pandemic. METHODS: Inpatient data at three London hospitals for the first COVID-19 wave in March and April 2020 were extracted. Demographic, blood test and microbiology data for individuals with and without SARS-CoV-2-positive PCR were obtained. A Gaussian Naive Bayes, Support Vector Machine (SVM) and Artificial Neural Network were trained and compared using the area under the receiver operating characteristic curve (AUCROC). The best performing algorithm (SVM with 21 blood test variables) was prospectively piloted in July 2020. AUCROC was calculated for the prediction of a positive microbiological sample within 48 h of admission. RESULTS: A total of 15 599 daily blood profiles for 1186 individual patients were identified to train the algorithms; 771/1186 (65%) individuals were SARS-CoV-2 PCR positive. Clinically significant microbiology results were present for 166/1186 (14%) patients during admission. An SVM algorithm trained with 21 routine blood test variables and over 8000 individual profiles had the best performance. AUCROC was 0.913, sensitivity 0.801 and specificity 0.890. Prospective testing on 54 patients on admission (28/54, 52% SARS-CoV-2 PCR positive) demonstrated an AUCROC of 0.960 (95% CI: 0.90-1.00). CONCLUSIONS: An SVM using 21 routine blood test variables had excellent performance at inferring the likelihood of positive microbiology. Further prospective evaluation of the algorithm's ability to support decision making for the diagnosis of bacterial infection in COVID-19 cohorts is underway.
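The AUCROC used to compare the classifiers here has a simple rank-based (Mann-Whitney) formulation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch with toy scores:

```python
def auc_roc(positive_scores, negative_scores):
    """AUC-ROC via pairwise comparison: fraction of (positive, negative)
    pairs where the positive outranks the negative, ties counting half."""
    wins = 0.0
    for p in positive_scores:
        for n in negative_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positive_scores) * len(negative_scores))

# Toy classifier scores for infected vs. non-infected patients
auc = auc_roc([0.9, 0.8, 0.7], [0.6, 0.4, 0.8])
```

The O(P x N) pairwise loop is fine for illustration; production code sorts once and uses rank sums.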

16.
Clin Infect Dis ; 72(12): 2103-2111, 2021 06 15.
Article En | MEDLINE | ID: mdl-32246143

BACKGROUND: A locally developed case-based reasoning (CBR) algorithm, designed to augment antimicrobial prescribing in secondary care, was evaluated. METHODS: Prescribing recommendations made by a CBR algorithm were compared to decisions made by physicians in clinical practice. Comparisons were examined in 2 patient populations: first, in patients with confirmed Escherichia coli bloodstream infections ("E. coli patients"), and second, in ward-based patients presenting with a range of potential infections ("ward patients"). Prescribing recommendations were compared against the Antimicrobial Spectrum Index (ASI) and the World Health Organization Essential Medicine List Access, Watch, Reserve (AWaRe) classification system. Appropriateness of a prescription was defined as the spectrum of the prescription covering the known or most likely organism's antimicrobial sensitivity profile. RESULTS: In total, 224 patients (145 E. coli patients and 79 ward patients) were included. Mean (standard deviation) age was 66 (18) years, and 108/224 (48%) were female. The CBR recommendations were appropriate in 202/224 (90%) compared to 186/224 (83%) in practice (odds ratio [OR]: 1.24; 95% confidence interval [CI]: .392-3.936; P = .71). CBR recommendations had a smaller ASI compared to practice, with a median (range) of 6 (0-13) compared to 8 (0-12) (P < .01). CBR recommendations were more likely to be classified as Access class antimicrobials compared to physicians' prescriptions, at 110/224 (49%) vs. 79/224 (35%) (OR: 1.77; 95% CI: 1.212-2.588; P < .01). Results were similar for E. coli and ward patients on subgroup analysis. CONCLUSIONS: A CBR-driven decision support system provided appropriate recommendations within a narrower spectrum compared to current clinical practice. Future work must investigate the impact of this intervention on prescribing behaviors more broadly and on patient outcomes.
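The odds ratios with Wald confidence intervals reported above come from standard 2x2 contingency-table arithmetic; a sketch with a generic toy table (not the study's counts, whose published ORs may be adjusted):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table
    (a = intervention events, b = intervention non-events,
     c = comparator events,   d = comparator non-events)
    with a Wald 95% confidence interval computed on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(10, 10, 5, 20)  # toy counts
```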


Anti-Infective Agents , Antimicrobial Stewardship , Aged , Algorithms , Anti-Bacterial Agents/therapeutic use , Anti-Infective Agents/therapeutic use , Escherichia coli , Female , Humans , Inappropriate Prescribing , Practice Patterns, Physicians'
17.
IEEE J Biomed Health Inform ; 25(7): 2744-2757, 2021 07.
Article En | MEDLINE | ID: mdl-33232247

Diabetes is a chronic metabolic disorder that affects an estimated 463 million people worldwide. Aiming to improve the treatment of people with diabetes, digital health has been widely adopted in recent years and has generated a huge amount of data that could be used for further management of this chronic disease. Taking advantage of this, approaches based on artificial intelligence, and specifically deep learning, an emerging type of machine learning, have been widely adopted with promising results. In this paper, we present a comprehensive review of the applications of deep learning within the field of diabetes. We conducted a systematic literature search and identified three main areas that use this approach: diagnosis of diabetes, glucose management, and diagnosis of diabetes-related complications. The search resulted in the selection of 40 original research articles, for which we have summarized the key information about the employed learning models, development process, main outcomes, and baseline methods for performance evaluation. Notably, across the analyzed literature, various deep learning techniques and frameworks have achieved state-of-the-art performance in many diabetes-related tasks by outperforming conventional machine learning approaches. Meanwhile, we identify some limitations in the current literature, such as a lack of data availability and model interpretability. The rapid developments in deep learning and the increase in available data offer the possibility of meeting these challenges in the near future and allowing the widespread deployment of this technology in clinical settings.


Deep Learning , Diabetes Mellitus , Artificial Intelligence , Diabetes Mellitus/diagnosis , Diabetes Mellitus/therapy , Humans , Machine Learning
18.
Diabetes Technol Ther ; 23(3): 175-186, 2021 03.
Article En | MEDLINE | ID: mdl-33048581

Background: The Patient Empowerment through Predictive Personalized Decision Support (PEPPER) system provides personalized bolus advice for people with type 1 diabetes. The system incorporates an adaptive insulin recommender system (based on case-based reasoning, an artificial intelligence methodology), coupled with a safety system, which includes predictive glucose alerts and alarms, predictive low-glucose suspend, personalized carbohydrate recommendations, and dynamic bolus insulin constraint. We evaluated the safety and efficacy of the PEPPER system compared to a standard bolus calculator. Methods: This was an open-label multicenter randomized controlled crossover study. Following a 4-week run-in, participants were randomized to PEPPER/Control or Control/PEPPER in a 1:1 ratio for 12 weeks. Participants then crossed over after a washout period. The primary endpoint was percentage time in range (TIR, 3.9-10.0 mmol/L [70-180 mg/dL]). Secondary outcomes included glycemic variability, quality of life, and outcomes on the safety system and insulin recommender. Results: Fifty-four participants on multiple daily injections (MDI) or insulin pump completed the run-in period, comprising the intention-to-treat population. Median (interquartile range) age was 41.5 (32.3-49.8) years, diabetes duration 21.0 (11.5-26.0) years, and HbA1c 61.0 (58.0-66.1) mmol/mol. No significant difference was observed for percentage TIR between the PEPPER and Control groups (62.5 [52.1-67.8] % vs. 58.4 [49.6-64.3] %, respectively, P = 0.27). For quality of life, participants reported higher perceived hypoglycemia with the PEPPER system despite no objective difference in time spent in hypoglycemia. Conclusions: The PEPPER system was safe, but did not change glycemic outcomes compared to control. There is wide scope for integrating PEPPER into routine diabetes management for pump and MDI users. Further studies are required to confirm overall effectiveness.
Clinical trial registration: ClinicalTrials.gov NCT03849755.
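The primary endpoint above, percentage time in range (TIR), is typically computed as the fraction of CGM readings falling inside the target band. A minimal sketch (assuming evenly spaced samples, so the fraction of readings approximates the fraction of time) could look like:

```python
def time_in_range(glucose_mgdl, lo=70.0, hi=180.0):
    """Percentage of CGM readings within [lo, hi] mg/dL.

    Assumes readings are evenly spaced in time, so the share of
    in-range samples approximates the share of time in range.
    """
    if not glucose_mgdl:
        raise ValueError("no CGM readings")
    in_range = sum(lo <= g <= hi for g in glucose_mgdl)
    return 100.0 * in_range / len(glucose_mgdl)

# Example: two of four readings fall in 70-180 mg/dL.
tir = time_in_range([65, 90, 150, 200])  # 50.0
```

Real CGM traces have gaps and irregular sampling, so production pipelines usually weight by sample interval rather than counting readings; this sketch ignores that.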


Diabetes Mellitus, Type 1 , Quality of Life , Adult , Artificial Intelligence , Blood Glucose , Cross-Over Studies , Diabetes Mellitus, Type 1/drug therapy , Feasibility Studies , Glycated Hemoglobin/analysis , Humans , Hypoglycemic Agents/therapeutic use , Insulin/therapeutic use , Insulin Infusion Systems , Middle Aged
19.
Diabetes Technol Ther ; 23(4): 314-319, 2021 04.
Article En | MEDLINE | ID: mdl-33064025

Objective: Consensus continuous glucose monitoring (CGM) guidance includes a recommendation that a minimum of 14 days of CGM data be used to report times in ranges. Previously employed approaches to determine the optimal duration of CGM data have limitations. In this study, we present a robust approach to define the minimum duration of CGM data needed to report times in ranges, as well as other glycemic metrics. Methods: The approach is based on the median absolute percentage error and employs a sliding time window to reduce the impact of inter-time interval variability, hence allowing smaller data sets to be used. Thresholds of 10% and 5% were employed to assess the optimal duration of CGM data for a set of commonly employed metrics of quality of glycemic control and glycemic variability. To evaluate the impact of data set size and type of intervention, data from two randomized controlled trials involving participants with type 1 diabetes were used (n = 236 and n = 25). Results: Results suggest that mean glucose reaches the 5% threshold for median absolute percentage error within 2 weeks, whereas percentage time in target range (70-180 mg/dL), mean absolute glucose, standard deviation, and coefficient of variation reach the same threshold within 4 weeks in both data sets, suggesting that these metrics can be robustly assessed from CGM data over a 4-week period. Some other metrics require much longer window lengths, especially those evaluating hypoglycemia. Conclusions: Our data suggest that there is no single optimal duration of CGM data for robustly assessing all outcomes, and that the duration required depends on the population being studied, the sampling frequency, and the primary outcomes selected.
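The sliding-window procedure described above can be sketched roughly as follows: compute a glycemic metric on every window of a given length, then take the median absolute percentage error of those window values relative to the value from the full record. The exact windowing and per-metric details here are assumptions; the study's implementation may differ.

```python
import statistics

def median_ape(readings, window, metric=statistics.fmean):
    """Median absolute percentage error of a glycemic metric over
    sliding windows, relative to the full-record value.

    readings: list of per-day lists of glucose values (mg/dL)
    window:   window length in days
    metric:   function mapping a flat list of readings to a scalar
    """
    full = metric([g for day in readings for g in day])
    errors = []
    for start in range(len(readings) - window + 1):
        chunk = [g for day in readings[start:start + window] for g in day]
        errors.append(abs(metric(chunk) - full) / full * 100.0)
    return statistics.median(errors)
```

A metric would then be judged robust at a given duration if its `median_ape` falls below the chosen 5% or 10% threshold at that window length.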


Blood Glucose Self-Monitoring , Diabetes Mellitus, Type 1 , Blood Glucose , Diabetes Mellitus, Type 1/drug therapy , Glucose , Glycemic Control , Humans
20.
IEEE J Biomed Health Inform ; 25(4): 1223-1232, 2021 04.
Article En | MEDLINE | ID: mdl-32755873

People with Type 1 diabetes (T1D) require regular exogenous infusion of insulin to maintain their blood glucose concentration in a therapeutically adequate target range. Although the artificial pancreas and continuous glucose monitoring have been proven to be effective in achieving closed-loop control, significant challenges remain due to the high complexity of glucose dynamics and limitations in the technology. In this work, we propose a novel deep reinforcement learning model for single-hormone (insulin) and dual-hormone (insulin and glucagon) delivery. In particular, the delivery strategies are developed by double Q-learning with dilated recurrent neural networks. For designing and testing purposes, the FDA-accepted UVA/Padova Type 1 simulator was employed. First, we performed long-term generalized training to obtain a population model. Then, this model was personalized with a small data set of subject-specific data. In silico results show that the single and dual-hormone delivery strategies achieve good glucose control when compared to a standard basal-bolus therapy with low-glucose insulin suspension. Specifically, in the adult cohort (n = 10), percentage time in target range (70-180 mg/dL) improved from 77.6% to 80.9% with single-hormone control, and to 85.6% with dual-hormone control. In the adolescent cohort (n = 10), percentage time in target range improved from 55.5% to [Formula: see text] with single-hormone control, and to 78.8% with dual-hormone control. In all scenarios, a significant decrease in hypoglycemia was observed. These results show that the use of deep reinforcement learning is a viable approach for closed-loop glucose control in T1D.
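The paper couples double Q-learning with dilated recurrent neural networks; as a much-simplified illustration of just the double-Q idea (not the paper's implementation), a single tabular update step can be sketched as below. One of two value tables is picked at random to be updated; it selects the greedy next action, while the other table evaluates that action, which reduces the maximization bias of plain Q-learning.

```python
import random

def double_q_update(QA, QB, s, a, r, s2, alpha=0.1, gamma=0.99):
    """One tabular double Q-learning step.

    QA, QB: dicts mapping state -> {action: value}
    s, a:   current state and action taken
    r, s2:  observed reward and next state
    """
    # Randomly choose which table to update this step.
    if random.random() < 0.5:
        update, evaluate = QA, QB
    else:
        update, evaluate = QB, QA
    a2 = max(update[s2], key=update[s2].get)  # greedy action per updating table
    target = r + gamma * evaluate[s2][a2]     # ...evaluated by the other table
    update[s][a] += alpha * (target - update[s][a])
```

In the glucose-control setting, states would be derived from recent CGM and insulin history and actions from the insulin (and glucagon) dosing choices, with function approximation replacing the tables; those mappings are well beyond this sketch.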


Diabetes Mellitus, Type 1 , Pancreas, Artificial , Adolescent , Adult , Algorithms , Blood Glucose , Blood Glucose Self-Monitoring , Computer Simulation , Diabetes Mellitus, Type 1/drug therapy , Humans , Hypoglycemic Agents/therapeutic use , Insulin/therapeutic use , Insulin Infusion Systems