Results 1 - 17 of 17
1.
Radiology ; 310(2): e231319, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38319168

ABSTRACT

Filters are commonly used to enhance specific structures and patterns in images, such as vessels or peritumoral regions, to enable clinical insights beyond the visible image using radiomics. However, their lack of standardization restricts reproducibility and clinical translation of radiomics decision support tools. In this special report, teams of researchers who developed radiomics software participated in a three-phase study (September 2020 to December 2022) to establish a standardized set of filters. The first two phases focused on finding reference filtered images and reference feature values for commonly used convolutional filters: mean, Laplacian of Gaussian, Laws and Gabor kernels, separable and nonseparable wavelets (including decomposed forms), and Riesz transformations. In the first phase, 15 teams used digital phantoms to establish reference filtered images for 33 of the 36 filter configurations. In phase 2, 11 teams used a chest CT image to derive reference values for 323 of 396 features computed from filtered images using 22 filter and image processing configurations. Reference filtered images and feature values for Riesz transformations were not established. Reproducibility of standardized convolutional filters was validated on a public data set of multimodal imaging (CT, fluorodeoxyglucose PET, and T1-weighted MRI) in 51 patients with soft-tissue sarcoma. At validation, reproducibility of 486 features computed from filtered images using nine configurations × three imaging modalities was assessed using the lower bounds of 95% CIs of intraclass correlation coefficients. Of the 486 features, 458 were found to be reproducible across nine teams, with lower bounds of 95% CIs of intraclass correlation coefficients greater than 0.75. In conclusion, eight filter types were standardized with reference filtered images and reference feature values for verifying and calibrating radiomics software packages. A web-based tool is available for compliance checking.


Subject(s)
Image Processing, Computer-Assisted , Radiomics , Humans , Reproducibility of Results , Biomarkers , Multimodal Imaging
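The calibration workflow described in this record boils down to comparing a package's computed feature values against published reference values within a tolerance. A minimal sketch of such a compliance check follows; the feature names, reference values, and default 1% tolerance are illustrative placeholders, not the study's actual benchmark values.

```python
# Hypothetical compliance check: does a radiomics package reproduce
# reference feature values within a per-feature tolerance band?

def check_compliance(computed, reference, tolerance):
    """Return the subset of features whose computed value falls within
    reference ± tolerance; features outside the band fail calibration."""
    compliant = {}
    for name, ref in reference.items():
        # Default tolerance (assumption): 1% of the reference magnitude.
        tol = tolerance.get(name, abs(ref) * 0.01)
        if name in computed and abs(computed[name] - ref) <= tol:
            compliant[name] = computed[name]
    return compliant

# Illustrative values only — not taken from the published benchmark.
reference_values = {"glcm_contrast": 5.12, "mean_intensity": -47.3}
tolerances = {"glcm_contrast": 0.05, "mean_intensity": 0.5}
my_values = {"glcm_contrast": 5.14, "mean_intensity": -48.1}

ok = check_compliance(my_values, reference_values, tolerances)
# "glcm_contrast" passes (|5.14 - 5.12| <= 0.05); "mean_intensity" fails.
```

A real verification tool would read both sets of values from files and report the failing features with their deviations, but the pass/fail logic is this simple.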
2.
Eur Radiol ; 33(10): 7199-7208, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37079030

ABSTRACT

AIM: To study the feasibility of radiomic analysis of baseline [18F]fluoromethylcholine positron emission tomography/computed tomography (PET/CT) for the prediction of biochemical recurrence (BCR) in a cohort of intermediate- and high-risk prostate cancer (PCa) patients. MATERIAL AND METHODS: Seventy-four patients were prospectively enrolled. We analyzed three prostate gland (PG) segmentations (i.e., PGwhole: whole PG; PG41%: prostate having standardized uptake value - SUV > 0.41*SUVmax; PG2.5: prostate having SUV > 2.5) together with three SUV discretization steps (i.e., 0.2, 0.4, and 0.6). For each segmentation/discretization step, we trained a logistic regression model to predict BCR using radiomic and/or clinical features. RESULTS: The median baseline prostate-specific antigen was 11 ng/mL, the Gleason score was > 7 for 54% of patients, and the clinical stage was T1/T2 for 89% and T3 for 9% of patients. The baseline clinical model achieved an area under the receiver operating characteristic curve (AUC) of 0.73. Performance improved when clinical data were combined with radiomic features, in particular for PG2.5 and 0.4 discretization, for which the median test AUC was 0.78. CONCLUSION: Radiomics reinforces clinical parameters in predicting BCR in intermediate- and high-risk PCa patients. These first data strongly encourage further investigation of radiomic analysis to identify patients at risk of BCR. CLINICAL RELEVANCE STATEMENT: The application of AI combined with radiomic analysis of [18F]fluoromethylcholine PET/CT images has proven to be a promising tool to stratify patients with intermediate- or high-risk PCa in order to predict biochemical recurrence and tailor the best treatment options. KEY POINTS:
• Stratification of patients with intermediate- and high-risk prostate cancer at risk of biochemical recurrence before initial treatment would help determine the optimal curative strategy.
• Artificial intelligence combined with radiomic analysis of [18F]fluoromethylcholine PET/CT images allows prediction of biochemical recurrence, especially when radiomic features are complemented with patients' clinical information (highest median AUC of 0.78).
• Radiomics reinforces the information of conventional clinical parameters (i.e., Gleason score and initial prostate-specific antigen level) in predicting biochemical recurrence.


Subject(s)
Positron Emission Tomography Computed Tomography , Prostatic Neoplasms , Male , Humans , Positron Emission Tomography Computed Tomography/methods , Prostate-Specific Antigen , Artificial Intelligence , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/therapy , Retrospective Studies
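The modelling step in this record — a logistic regression scored by AUC, first on clinical data alone and then with radiomic features added — can be sketched as below. All data here are synthetic stand-ins; the feature names in the comments are assumptions for illustration, not the study's variables.

```python
# Sketch (synthetic data): clinical-only vs. clinical + radiomic logistic
# regression, compared by cross-validated ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 74                                   # cohort size from the abstract
clinical = rng.normal(size=(n, 3))       # e.g. PSA, Gleason, stage (simulated)
radiomic = rng.normal(size=(n, 10))      # e.g. SUV-histogram/texture features

# Synthetic BCR outcome weakly driven by both feature blocks.
logit = clinical[:, 0] + 0.8 * radiomic[:, 0]
y = (logit + rng.normal(scale=1.0, size=n) > 0).astype(int)

auc_clin = cross_val_score(LogisticRegression(max_iter=1000), clinical, y,
                           cv=5, scoring="roc_auc").mean()
combined = np.hstack([clinical, radiomic])
auc_comb = cross_val_score(LogisticRegression(max_iter=1000), combined, y,
                           cv=5, scoring="roc_auc").mean()
```

With real data, the same comparison would be repeated per segmentation/discretization combination, keeping the configuration with the highest held-out AUC.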
3.
Radiology ; 303(3): 533-541, 2022 06.
Article in English | MEDLINE | ID: mdl-35230182

ABSTRACT

Background The translation of radiomic models into clinical practice is hindered by the limited reproducibility of features across software and studies. Standardization is needed to accelerate this process and to bring radiomics closer to clinical deployment. Purpose To assess the standardization level of seven radiomic software programs and investigate software agreement as a function of built-in image preprocessing (eg, interpolation and discretization), feature aggregation methods, and the morphological characteristics (ie, volume and shape) of the region of interest (ROI). Materials and Methods The study was organized into two phases: In phase I, the two Image Biomarker Standardization Initiative (IBSI) phantoms were used to evaluate the IBSI compliance of seven software programs. In phase II, the reproducibility of all IBSI-standardized radiomic features across tools was assessed with two custom Italian multicenter Shared Understanding of Radiomic Extractors (ImSURE) digital phantoms that allowed, in conjunction with a systematic feature extraction, observations on whether and how feature matches between program pairs varied depending on the preprocessing steps, aggregation methods, and ROI characteristics. Results In phase I, the software programs showed different levels of completeness (ie, the number of computable IBSI benchmark values). However, the IBSI-compliance assessment revealed that they were all standardized in terms of feature implementation. When considering additional preprocessing steps, for each individual program, match percentages fell by up to 30%. In phase II, the ImSURE phantoms showed that software agreement was dependent on discretization and aggregation as well as on ROI shape and volume factors. Conclusion The agreement of radiomic software varied in relation to factors that had already been standardized (eg, interpolation and discretization methods) and factors that need standardization. 
Both dependencies must be resolved to ensure the reproducibility of radiomic features and to pave the way toward the clinical adoption of radiomic models. Published under a CC BY 4.0 license. Online supplemental material is available for this article. See also the editorial by Steiger in this issue. An earlier incorrect version appeared online and in print. This article was corrected on March 2, 2022.


Subject(s)
Benchmarking , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Reproducibility of Results , Software
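The "match percentage" between program pairs reported in this record is essentially the share of shared features whose values agree within a tolerance. A minimal sketch, with illustrative feature names and a relative tolerance chosen as an assumption:

```python
# Sketch: percentage of matching feature values between two radiomics
# programs; a "match" is agreement within a relative tolerance.

def match_percentage(values_a, values_b, rel_tol=1e-3):
    shared = set(values_a) & set(values_b)
    if not shared:
        return 0.0
    matches = sum(
        1 for f in shared
        if abs(values_a[f] - values_b[f])
        <= rel_tol * max(abs(values_a[f]), abs(values_b[f]), 1e-12)
    )
    return 100.0 * matches / len(shared)

# Illustrative outputs of two hypothetical programs on the same phantom.
prog_a = {"entropy": 4.071, "contrast": 12.50, "energy": 0.045}
prog_b = {"entropy": 4.072, "contrast": 13.10, "energy": 0.045}
pct = match_percentage(prog_a, prog_b)  # "contrast" disagrees -> 2 of 3 match
```

Repeating this across preprocessing configurations and ROIs is what reveals the up-to-30% drops in match percentage the study describes.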
5.
Diagnostics (Basel) ; 13(13), 2023 Jun 23.
Article in English | MEDLINE | ID: mdl-37443547

ABSTRACT

Lung cancer represents the second most common malignancy worldwide and lymph node (LN) involvement serves as a crucial prognostic factor for tailoring treatment approaches. Invasive methods, such as mediastinoscopy and endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA), are employed for preoperative LN staging. Among the preoperative non-invasive diagnostic methods, computed tomography (CT) and, recently, positron emission tomography (PET)/CT with fluorine-18-fludeoxyglucose ([18F]FDG) are routinely recommended by several guidelines; however, they can both miss pathologically proven LN metastases, with an incidence up to 26% for patients staged with [18F]FDG PET/CT. These undetected metastases, known as occult LN metastases (OLMs), are usually cases of micro-metastasis or small LN metastasis (shortest radius below 10 mm). Hence, it is crucial to find novel approaches to increase their discovery rate. Radiomics is an emerging field that seeks to uncover and quantify the concealed information present in biomedical images by utilising machine or deep learning approaches. The extracted features can be integrated into predictive models, as numerous reports have emphasised their usefulness in the staging of lung cancer. However, there is a paucity of studies examining the detection of OLMs using quantitative features derived from images. Hence, the objective of this review was to investigate the potential application of PET- and/or CT-derived quantitative radiomic features for the identification of OLMs.

6.
Radiother Oncol ; 188: 109896, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37660751

ABSTRACT

PURPOSE: To investigate the potential of dosiomics in predicting radiotherapy-induced taste distortion (dysgeusia) in head & neck (H&N) cancer. METHODS: A cohort of 80 H&N cancer patients treated with radical or adjuvant radiotherapy and with a follow-up of at least 24 months was enrolled. Treatment information, as well as tobacco and alcohol consumption, was also collected. The whole tongue was manually delineated on the planning CT and mapped to the dose map retrieved from the treatment planning system. For every patient, 6 regions of the tongue were examined; for each of them, 145 dosiomic features were extracted from the dose map and fed to a logistic regression model to predict the grade of dysgeusia at follow-up, with and without including clinical features. A mean dose-based model was considered for reference. RESULTS: Both the dosiomic and mean dose models achieved good prediction performance for acute dysgeusia, with AUC up to 0.88. For the dosiomic model, the central and anterior ⅔ regions of the tongue were the most predictive. For all models, a gradual reduction in performance was observed at later times for chronic dysgeusia prediction, with higher values for dosiomics. The inclusion of smoking and alcohol habits did not improve model performance. CONCLUSION: The dosiomic analysis of the dose to the tongue identified features able to predict acute dysgeusia. Dosiomics proved superior to the conventional mean dose-based model for chronic dysgeusia prediction. Larger, prospective studies are needed to support these results before integrating dosiomics into radiotherapy planning.
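Dosiomics treats the 3D dose map like an image and extracts intensity statistics inside an ROI, in contrast to the single mean-dose predictor. A toy sketch on a synthetic dose map (the ROI geometry and feature set are illustrative, not the study's 145-feature pipeline):

```python
# Minimal sketch (synthetic dose map): mean dose vs. a few simple
# "dosiomic" intensity features computed inside an ROI mask.
import numpy as np

rng = np.random.default_rng(1)
dose = rng.gamma(shape=4.0, scale=5.0, size=(20, 20, 10))  # Gy, simulated
roi = np.zeros_like(dose, dtype=bool)
roi[5:15, 5:15, 2:8] = True  # a tongue-like subregion (illustrative)

voxels = dose[roi]
features = {
    "mean_dose": voxels.mean(),                 # the conventional predictor
    "dose_std": voxels.std(),                   # spread of the dose
    "dose_p90": np.percentile(voxels, 90),      # near-maximum dose
    "coeff_var": voxels.std() / voxels.mean(),  # dose heterogeneity
}
```

The vector of such features (one per tongue subregion) is what would then feed the logistic regression, with or without clinical covariates.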

7.
Phys Imaging Radiat Oncol ; 26: 100435, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37089905

ABSTRACT

Background and purpose: Prediction models may be reliable decision-support tools to reduce the workload associated with the measurement-based patient-specific quality assurance (PSQA) of radiotherapy plans. This study compared the effectiveness of three different models based on delivery parameters, complexity metrics and sinogram radiomics features as tools for virtual-PSQA (vPSQA) of helical tomotherapy (HT) plans. Materials and methods: A dataset including 881 radiotherapy plans created with two different treatment planning systems (TPSs) was collected. Sixty-five indicators, comprising 12 delivery parameters (DP) and 53 complexity metrics (CM), were extracted using a dedicated software library. Additionally, 174 radiomics features (RF) were extracted from the plans' sinograms. Three groups of variables were formed: A (DP), B (DP + CM) and C (DP + CM + RF). Regression models were trained to predict the gamma index passing rate PRγ (3%G, 2 mm) and the impact of each group of variables was investigated. ROC-AUC analysis measured the ability of the models to accurately discriminate between 'deliverable' and 'non-deliverable' plans. Results: The best performance was achieved by model C, which detected around 16% and 63% of the 'deliverable' plans with 100% sensitivity for the two TPSs, respectively. In a real clinical scenario, this would have decreased the whole PSQA workload by approximately 35%. Conclusions: The combination of delivery parameters, complexity metrics and sinogram radiomics features allows for robust and reliable PSQA gamma passing rate predictions and high-sensitivity detection of a fraction of deliverable plans for one of the two TPSs. Promising yet improvable results were obtained for the other one. The results foster a future adoption of vPSQA programs for HT.
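The "100% sensitivity" operating point in this record means the decision threshold is set so that no non-deliverable plan would ever skip measurement; only plans predicted safely above that threshold avoid QA. A sketch of that thresholding logic on synthetic passing rates (the 95% action limit and noise levels are assumptions):

```python
# Sketch: given predicted gamma passing rates, pick a decision threshold
# that keeps 100% sensitivity for 'non-deliverable' plans (none skipped),
# then count how many deliverable plans could skip measurement-based QA.
import numpy as np

rng = np.random.default_rng(2)
true_pr = np.clip(rng.normal(97, 2, size=200), 85, 100)          # true PR (%)
pred_pr = np.clip(true_pr + rng.normal(0, 1, size=200), 85, 100) # model output
deliverable = true_pr >= 95.0  # action limit (illustrative)

# Highest prediction among non-deliverable plans: anything predicted above
# it is, on this dataset, guaranteed to be deliverable.
threshold = pred_pr[~deliverable].max()
skippable = (pred_pr > threshold) & deliverable
fraction_skipped = skippable.sum() / deliverable.sum()
```

A sharper regression model narrows the gap between predictions of the two classes, raising `fraction_skipped` — which is exactly the workload reduction the study quantifies.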

8.
Sci Data ; 9(1): 695, 2022 11 12.
Article in English | MEDLINE | ID: mdl-36371503

ABSTRACT

In radiology and oncology, radiomic models are increasingly employed to predict clinical outcomes, but their clinical deployment has been hampered by lack of standardisation. This hindrance has driven the international Image Biomarker Standardisation Initiative (IBSI) to define guidelines for image pre-processing, standardise the formulation and nomenclature of 169 radiomic features and share two benchmark digital phantoms for software calibration. However, to better assess the concordance of radiomic tools, more heterogeneous phantoms are needed. We created two digital phantoms, called ImSURE phantoms, having isotropic and anisotropic voxel size, respectively, and 90 regions of interest (ROIs) each. To use these phantoms, we designed a systematic feature extraction workflow including 919 different feature values (obtained from the 169 IBSI-standardised features considering all possible combinations of feature aggregation and intensity discretisation methods). The ImSURE phantoms will make it possible to assess the concordance of radiomic software depending on interpolation, discretisation and aggregation methods, as well as on ROI volume and shape. Finally, we provide the feature values extracted from these phantoms using five open-source IBSI-compliant software packages.
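The intensity-discretisation methods whose combinations these phantoms probe come in two IBSI flavours: fixed bin number (FBN), which rescales the ROI intensity range into a set number of levels, and fixed bin size (FBS), which uses constant-width bins from a fixed lower bound. A sketch of both (to the best of my reading of the IBSI definitions; edge-case conventions may differ):

```python
# Sketch of the two IBSI-style intensity-discretisation schemes.
import numpy as np

def discretise_fbn(intensities, n_bins):
    """Fixed bin number: rescale the ROI intensity range into n_bins levels."""
    lo, hi = intensities.min(), intensities.max()
    binned = np.floor(n_bins * (intensities - lo) / (hi - lo)) + 1
    return np.clip(binned, 1, n_bins).astype(int)  # map the maximum to n_bins

def discretise_fbs(intensities, bin_width, lower_bound=0.0):
    """Fixed bin size: constant-width bins from a fixed lower bound."""
    return (np.floor((intensities - lower_bound) / bin_width) + 1).astype(int)

roi = np.array([0.0, 2.5, 5.0, 7.5, 10.0])
fbn = discretise_fbn(roi, n_bins=4)       # levels in 1..4
fbs = discretise_fbs(roi, bin_width=4.0)  # level count depends on the range
```

Because texture features are computed on these discretised levels, two packages that agree on everything else can still diverge if they implement either scheme differently — which is what the phantom's 919-value extraction grid is designed to expose.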

9.
Sci Rep ; 12(1): 14132, 2022 08 19.
Article in English | MEDLINE | ID: mdl-35986072

ABSTRACT

In this study, we tested and compared radiomics and deep learning-based approaches on the public LUNG1 dataset, for the prediction of 2-year overall survival (OS) in non-small cell lung cancer patients. Radiomic features were extracted from the gross tumor volume using Pyradiomics, while deep features were extracted from bi-dimensional tumor slices by convolutional autoencoder. Both radiomic and deep features were fed to 24 different pipelines formed by the combination of four feature selection/reduction methods and six classifiers. Direct classification through convolutional neural networks (CNNs) was also performed. Each approach was investigated with and without the inclusion of clinical parameters. The maximum area under the receiver operating characteristic on the test set improved from 0.59, obtained for the baseline clinical model, to 0.67 ± 0.03, 0.63 ± 0.03 and 0.67 ± 0.02 for models based on radiomic features, deep features, and their combination, and to 0.64 ± 0.04 for direct CNN classification. Despite the high number of pipelines and approaches tested, results were comparable and in line with previous works, hence confirming that it is challenging to extract further imaging-based information from the LUNG1 dataset for the prediction of 2-year OS.


Subject(s)
Carcinoma, Non-Small-Cell Lung , Deep Learning , Lung Neoplasms , Carcinoma, Non-Small-Cell Lung/diagnostic imaging , Humans , Lung Neoplasms/diagnostic imaging , Neural Networks, Computer , ROC Curve
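The 24 pipelines in this record come from crossing feature selection/reduction methods with classifiers. A scaled-down sketch of that grid on synthetic features (two selectors × two classifiers instead of four × six; all names and data are illustrative):

```python
# Sketch (synthetic data): crossing feature-selection/reduction methods
# with classifiers, in the spirit of the 24-pipeline comparison.
from itertools import product
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 50))  # stand-in for radiomic or deep features
y = (X[:, 0] + rng.normal(scale=2, size=120) > 0).astype(int)

selectors = {"kbest": SelectKBest(f_classif, k=10),
             "pca": PCA(n_components=10)}
classifiers = {"logreg": LogisticRegression(max_iter=1000),
               "rf": RandomForestClassifier(n_estimators=50, random_state=0)}

results = {}
for (s_name, sel), (c_name, clf) in product(selectors.items(),
                                            classifiers.items()):
    pipe = Pipeline([("select", sel), ("clf", clf)])
    results[(s_name, c_name)] = cross_val_score(pipe, X, y, cv=5,
                                                scoring="roc_auc").mean()
```

Keeping selection inside the cross-validated pipeline, as here, avoids the feature-selection leakage that inflates AUCs when selection is done on the full dataset first.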
10.
J Neural Eng ; 18(5), 2021 10 05.
Article in English | MEDLINE | ID: mdl-34544051

ABSTRACT

Objective. The N2pc is a small-amplitude transient interhemispheric voltage asymmetry used in cognitive neuroscience to investigate subjects' allocation of selective visuo-spatial attention. The N2pc is typically estimated by averaging the sweeps of the electroencephalographic (EEG) signal but, in the absence of explicit normative indications, the number of sweeps is often based on arbitrariness or personal experience. With the final aim of reducing the duration and cost of experimental protocols, here we developed a new approach to reliably predict N2pc amplitude from a minimal EEG dataset. Approach. First, features predictive of N2pc amplitude were identified in the time-frequency domain. Then, an artificial neural network (NN) was trained to predict N2pc mean amplitude at the individual level. Using simulated data, the accuracy of the NN was assessed by computing the mean squared error (MSE) and the amplitude discretization error (ADE) and compared to the standard time-averaging (TA) technique. The NN was then tested against two real datasets consisting of 14 and 12 subjects, respectively. Main results. In simulated scenarios entailing different numbers of sweeps (between 10 and 100), the MSE obtained with the proposed method was, on average, one-fifth of that obtained with the TA technique. Implementation on real EEG datasets showed that N2pc amplitude could be reliably predicted with as few as 40 EEG sweeps per cell of the experimental design. Significance. The developed approach makes it possible to reduce the duration and cost of experiments involving the N2pc, for instance in studies investigating attention deficits in pathological subjects.


Subject(s)
Attention Deficit Disorder with Hyperactivity , Electroencephalography , Humans , Neural Networks, Computer
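The time-averaging (TA) baseline this record's NN is compared against is just the mean over sweeps, with the component amplitude read out in a latency window. A toy sketch on synthetic sweeps (amplitudes, window, and noise level are all illustrative):

```python
# Sketch of the conventional time-averaging (TA) estimate of a small
# component buried in background EEG (synthetic data).
import numpy as np

rng = np.random.default_rng(4)
n_sweeps, n_samples = 40, 200
true_n2pc = -1.5               # µV, simulated component amplitude
window = slice(90, 120)        # samples covering the assumed N2pc latency

sweeps = rng.normal(0.0, 10.0, size=(n_sweeps, n_samples))  # background EEG
sweeps[:, window] += true_n2pc  # add the component to every sweep

average = sweeps.mean(axis=0)           # TA estimate of the waveform
amp_estimate = average[window].mean()   # mean amplitude in the window
error = abs(amp_estimate - true_n2pc)
```

With single-sweep noise an order of magnitude larger than the component, averaging 40 sweeps shrinks the noise by a factor of √40, which is what makes the window mean usable; the paper's NN aims to do better than this baseline at the same sweep count.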
11.
Cancers (Basel) ; 13(23), 2021 Nov 30.
Article in English | MEDLINE | ID: mdl-34885135

ABSTRACT

We performed a systematic review of the literature to provide an overview of the application of PET radiomics for the prediction of the initial staging of prostate cancer (PCa), and to discuss the additional value of radiomic features over clinical data. The most relevant databases and web sources were queried using the search string "prostate AND radiomic* AND PET". English-language original articles published before July 2021 were considered. A total of 28 studies were screened for eligibility and 6 of them met the inclusion criteria and were, therefore, included for further analysis. All studies were based on human patients. The average number of patients included in the studies was 72 (range 52-101), and the average number of high-order features calculated per study was 167 (range 50-480). The radiotracers used were [68Ga]Ga-PSMA-11 (in four of six studies), [18F]DCFPyL (one of six studies), and [11C]Choline (one of six studies). Considering the imaging modality, three of the six studies used a PET/CT scanner and the other three a PET/MRI tomograph. Heterogeneous results were reported regarding radiomic methods (e.g., segmentation modality) and considered features. The studies reported several predictive markers including first-, second-, and high-order features, such as "kurtosis", "grey-level uniformity", and "HLL wavelet mean", respectively, as well as PET-based metabolic parameters. The strengths and weaknesses of PET radiomics in this setting of disease are discussed at length, together with a critical analysis of the available data. In our review, radiomic analysis proved to add useful information for lesion detection and the prediction of tumor grading of prostatic lesions, even when they were missed at visual qualitative assessment due to their small size; furthermore, PET radiomics could play a synergistic role with the mpMRI radiomic features in lesion evaluation.
The most common limitations of the studies were the small sample size, retrospective design, lack of validation on external datasets, and unavailability of univocal cut-off values for the selected radiomic features.

12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1019-1022, 2020 07.
Article in English | MEDLINE | ID: mdl-33018158

ABSTRACT

The N2pc event-related potential component measures the direction and time course of selective visual attention and represents an important biomarker in cognitive neuroscience. While its subtractive origin strongly influences the amplitude, thus hindering its detection, other external factors, such as a subject's failure to efficiently allocate attention to the cued target, or the heterogeneity of the visual context, may strongly affect the elicitation of the component itself. It would therefore be extremely important to create a tool that, using as few sweeps as possible, could reliably establish whether an N2pc is present in an individual subject. In the present work, we propose an approach based on a time-frequency analysis of N2pc individual signals; in particular, power in each frequency band (α/ß/δ/θ) was computed in the N2 time range and correlated with the estimated amplitude of the N2pc. Preliminary results from a visual search design with fourteen human volunteers showed a very high correlation coefficient (over 0.9) between the low-frequency band power and the mean absolute amplitude of the component, using only 40 sweeps. Results also seemed to suggest that N2pc amplitude values higher than 0.5 µV could be accurately classified according to time-frequency indices. Clinical Relevance: The online detection of the N2pc presence in individual EEG datasets would make it possible not only to study the factors responsible for N2pc variability across subjects and conditions, but also to investigate novel search variants on participants with a predisposition to show an N2pc, reducing time, costs, and the risk of obtaining biased results.


Subject(s)
Cognitive Neuroscience , Electroencephalography , Attention , Cues , Evoked Potentials , Humans
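The core computation in this record — band power in the N2 time range correlated with component amplitude across subjects — can be sketched as follows. All signals are synthetic, and the sampling rate, band edges, and 14-subject amplitude range are assumptions for illustration:

```python
# Sketch: correlating theta-band power in an N2-range window with the
# (simulated) component amplitude across subjects.
import numpy as np

rng = np.random.default_rng(5)
fs = 250                        # Hz, sampling rate (assumed)
t = np.arange(0, 0.3, 1 / fs)   # a 300 ms window around the N2 range

amplitudes, theta_powers = [], []
for amp in np.linspace(0.2, 2.0, 14):  # 14 simulated "subjects"
    signal = amp * np.sin(2 * np.pi * 6 * t)       # 6 Hz (theta) component
    signal += rng.normal(0.0, 0.05, size=t.size)   # small residual noise
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    band = (freqs >= 4) & (freqs <= 8)             # theta band
    theta_powers.append(np.sum(np.abs(spectrum[band]) ** 2))
    amplitudes.append(amp)

r = np.corrcoef(amplitudes, theta_powers)[0, 1]
```

Since band power grows with the square of the component amplitude, a strong positive correlation is expected whenever the component dominates the band — consistent with the >0.9 coefficients the abstract reports for low-frequency bands.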
13.
J Neural Eng ; 17(3): 036024, 2020 06 22.
Article in English | MEDLINE | ID: mdl-32240993

ABSTRACT

OBJECTIVE: Event-related potentials (ERPs) evoked by visual stimulations comprise several components, with different amplitudes and latencies. Among them, the N2 and N2pc components have been demonstrated to be a measure of subjects' allocation of visual attention to possible targets and to be involved in the suppression of irrelevant items. Unfortunately, the N2 and N2pc components have smaller amplitudes compared with those of the background electroencephalogram (EEG), and their measurement requires employing techniques such as conventional averaging, which in turn necessitates several sweeps to provide acceptable estimates. In visual search studies, the number of sweeps (Nswp) used to extrapolate reliable estimates of N2/N2pc components has always been somehow arbitrary, with studies using 50-500 sweeps. In-silico studies relying on synthetic data providing a close-to-realistic fit to the variability of the visual N2 component and background EEG signals are therefore needed to go beyond arbitrary choices in this context. APPROACH: In the present work, we sought to take a step in this direction by developing a simulator of ERP variations in the N2 time range based on real experimental data while monitoring variations in the estimation accuracy of N2/N2pc components as a function of two factors, i.e. signal-to-noise ratio (SNR) and number of averaged sweeps. MAIN RESULTS: The results revealed that both Nswp and SNR had a strong impact on the accuracy of N2/N2pc estimates. Critically, the present simulation showed that, for a given level of SNR, a non-arbitrary Nswp could be parametrically determined, after which no additional significant improvements in noise suppression and N2/N2pc accuracy estimation were observed. SIGNIFICANCE: The present simulator is thought to provide investigators with quantitative guidelines for designing experimental protocols aimed at improving the detection accuracy of N2/N2pc components. 
The parameters of the simulator can be tuned, adapted, or integrated to fit other ERP modulations.


Subject(s)
Electroencephalography , Evoked Potentials , Computers , Humans , Photic Stimulation
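The simulation logic in this record — estimation accuracy of an averaged component as a function of the number of sweeps — reduces to a Monte Carlo over noisy single-sweep amplitudes. A minimal sketch (the amplitude, noise level, and sweep counts are illustrative, not the paper's parameters):

```python
# Sketch: MSE of the averaged-component estimate as a function of the
# number of sweeps, estimated by Monte Carlo (synthetic data).
import numpy as np

rng = np.random.default_rng(6)
true_amp, noise_sd, n_rep = -2.0, 8.0, 500

def mse_of_average(n_sweeps):
    # Each of n_rep estimates averages n_sweeps noisy single-sweep amplitudes.
    est = true_amp + rng.normal(0, noise_sd,
                                size=(n_rep, n_sweeps)).mean(axis=1)
    return np.mean((est - true_amp) ** 2)

mse = {n: mse_of_average(n) for n in (10, 50, 200)}
# MSE falls roughly as noise_sd**2 / n_sweeps as more sweeps are averaged.
```

Plotting such curves for several SNR levels is what lets one read off the sweep count beyond which further averaging yields no meaningful accuracy gain — the non-arbitrary Nswp the paper argues for.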
14.
J Diabetes Sci Technol ; 13(1): 103-110, 2019 01.
Article in English | MEDLINE | ID: mdl-29848104

ABSTRACT

BACKGROUND: The standard formula (SF) used in bolus calculators (BCs) determines the meal insulin bolus using a "static" measurement of blood glucose concentration (BG) obtained by a self-monitoring of blood glucose (SMBG) fingerprick device. Some methods have been proposed to improve the efficacy of the SF using "dynamic" information provided by continuous glucose monitoring (CGM), and, in particular, the glucose rate of change (ROC). This article compares, in silico and in an ideal framework limiting exposure to possible confounding factors (such as CGM noise), the performance of three popular techniques devised for this purpose, that is, the methods of Buckingham et al (BU), Scheiner (SC), and Pettus and Edelman (PE). METHOD: Using the UVa/Padova Type 1 diabetes simulator, we generated data of 100 virtual subjects in noise-free, single-meal scenarios having different preprandial BG and ROC values. The meal insulin bolus was computed using SF, BU, SC, and PE. Performance was assessed with the blood glucose risk index (BGRI) over the 9 hours after the meal. RESULTS: On average, BU, SC, and PE improve BGRI compared to the SF. When BG is rapidly decreasing, PE obtains the best performance. In the other ROC scenarios, none of the considered methods prevails in all the preprandial BG conditions tested. CONCLUSION: Our study showed that, at least in the considered ideal framework, none of the methods to correct the SF according to ROC is globally better than the others. Critical analysis of the results also suggests that further investigations are needed to develop more effective formulas to account for ROC information in BCs.


Subject(s)
Blood Glucose Self-Monitoring/methods , Blood Glucose/analysis , Chemistry, Pharmaceutical/methods , Diabetes Mellitus, Type 1/blood , Insulin/administration & dosage , Insulin/pharmacology , Computer Simulation , Humans , Hypoglycemic Agents/administration & dosage , Hypoglycemic Agents/pharmacology , Insulin Infusion Systems , Postprandial Period
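The standard formula (SF) referenced throughout these records is the familiar carbohydrate-plus-correction bolus. The sketch below shows the SF and one generic way ROC information can modify it, by projecting glucose forward before applying the formula; the projection horizon and all parameter values are illustrative placeholders, not the published Buckingham, Scheiner, or Pettus-Edelman rules.

```python
# Minimal sketch of the standard bolus formula and a generic ROC adjustment.

def standard_formula(cho, cr, bg, target, cf, iob):
    """SF: carbohydrate bolus + correction bolus - insulin on board."""
    return cho / cr + (bg - target) / cf - iob

def roc_adjusted_bolus(cho, cr, bg, target, cf, iob, roc, horizon_min=30):
    """Generic ROC handling (assumption): project BG forward over a short
    horizon before applying the SF. roc is in mg/dL per minute."""
    projected_bg = bg + roc * horizon_min
    return standard_formula(cho, cr, projected_bg, target, cf, iob)

# Same meal, flat vs. rapidly rising glucose (units: g, g/U, mg/dL, mg/dL/U).
flat = roc_adjusted_bolus(cho=60, cr=10, bg=150, target=110, cf=40,
                          iob=0, roc=0.0)
rising = roc_adjusted_bolus(cho=60, cr=10, bg=150, target=110, cf=40,
                            iob=0, roc=2.0)
```

With a flat trend the adjustment reduces to the plain SF; a rising trend inflates the correction term, which is the shared intuition behind all three compared methods even though their specific adjustment rules differ.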
15.
Article in English | MEDLINE | ID: mdl-30440244

ABSTRACT

Type 1 diabetes (T1D) therapy is based on multiple daily injections of exogenous insulin. The so-called insulin bolus calculators facilitate insulin dose calculation for patients by implementing a standard formula (SF) which, besides some patient-related parameters, also considers the current value of blood glucose concentration (BG), normally measured by the patient with a fingerprick device. The recent approval by the U.S. Food and Drug Administration of the use of measurements collected by wearable continuous glucose monitoring (CGM) sensors for insulin dosing offers new perspectives. Indeed, CGM sensors provide real-time information on both glucose concentration and rate of change, currently not considered in the SF. The purpose of this work is to preliminarily investigate the possibility of using neural networks (NNs) for the calculation of the meal insulin bolus dose exploiting CGM-based information. Using the UVa/Padova T1D Simulator, we generated data of 100 subjects in 9-h, single-meal, noise-free scenarios. In particular, for each subject we analyzed different meal conditions in terms of carbohydrate intake, preprandial BG, and glucose rate-of-change. Then, a fully-connected feedforward NN was trained with the aim of estimating the insulin bolus needed to obtain the best glycemic outcome according to the blood glucose risk index (BGRI). Preliminary results show that using the NN to calculate insulin doses yields lower BGRI values, on average, compared to the SF. These results encourage further development of the approach and its assessment in more challenging scenarios.


Subject(s)
Diabetes Mellitus, Type 1/drug therapy , Insulin/administration & dosage , Blood Glucose/analysis , Blood Glucose Self-Monitoring/methods , Diabetes Mellitus, Type 1/blood , Humans , Insulin Infusion Systems , Neural Networks, Computer
16.
J Diabetes Sci Technol ; 12(2): 265-272, 2018 03.
Article in English | MEDLINE | ID: mdl-29493356

ABSTRACT

BACKGROUND: In type 1 diabetes (T1D) therapy, the calculation of the meal insulin bolus is performed according to a standard formula (SF) exploiting carbohydrate intake, carbohydrate-to-insulin ratio, correction factor, insulin on board, and target glucose. Recently, some approaches were proposed to account for the preprandial glucose rate of change (ROC) in the SF, including those by Scheiner and by Pettus and Edelman. Here, the aim is to develop a new approach, based on neural networks (NNs), to optimize and personalize the bolus calculation using continuous glucose monitoring information and some easily accessible patient parameters. METHOD: The UVa/Padova T1D Simulator was used to simulate data of 100 virtual adults in a single-meal, noise-free scenario with different conditions in terms of meal amount and preprandial blood glucose and ROC values. An NN was trained to learn the optimal insulin dose using the SF parameters, ROC, body weight, insulin pump basal infusion rate, and insulin sensitivity as features. The performance of the NN for meal bolus calculation was assessed by the blood glucose risk index (BGRI) and compared to the methods by Scheiner and by Pettus and Edelman. RESULTS: The NN approach yields a small but statistically significant (P < .001) reduction in BGRI, equal to 0.37, 0.23, and 0.20 versus SF, Scheiner, and Pettus and Edelman, respectively. CONCLUSION: This preliminary study showed the potential of using NNs for the personalization and optimization of the meal insulin bolus calculation. Future work will deal with more realistic scenarios including technological and physiological/behavioral sources of variability.


Subject(s)
Blood Glucose Self-Monitoring/methods , Diabetes Mellitus, Type 1/blood , Hypoglycemic Agents/administration & dosage , Insulin/administration & dosage , Neural Networks, Computer , Blood Glucose/analysis , Datasets as Topic , Humans
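A scaled-down sketch of the NN bolus-calculator idea follows: a small feedforward network regresses a bolus-like target from meal and CGM features. Everything here is synthetic and illustrative — the feature set is a subset of the paper's, the "optimal" boluses are generated from an assumed formula purely to create training targets, and no claim is made about the paper's actual architecture.

```python
# Sketch (synthetic data): a small neural network learning a bolus-like
# mapping from carbohydrates, preprandial glucose and glucose rate of change.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 1000
cho = rng.uniform(20, 100, n)   # g of carbohydrate
bg = rng.uniform(70, 250, n)    # mg/dL, preprandial glucose
roc = rng.uniform(-3, 3, n)     # mg/dL per minute
X = np.column_stack([cho, bg, roc])

# Assumed "optimal" bolus used only to generate training targets (U):
y = cho / 10 + (bg - 110) / 40 + 0.5 * roc

model = make_pipeline(
    StandardScaler(),  # scale inputs so the network trains stably
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0),
).fit(X, y)

pred = model.predict(X)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

In the actual studies the targets come from simulator runs optimised for BGRI rather than from a closed-form rule, but the supervised-regression structure is the same.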