Results 1 - 2 of 2
1.
Sensors (Basel); 22(20), 2022 Oct 14.
Article in English | MEDLINE | ID: mdl-36298158

ABSTRACT

The exponential increase in internet data poses several challenges to cloud systems and data centers, such as scalability, power overheads, network load, and data security. To overcome these limitations, research is focusing on the development of edge computing systems, i.e., systems based on a distributed computing model in which data are processed as close as possible to where they are collected. Edge computing mitigates the limitations of cloud computing by implementing artificial intelligence algorithms directly on embedded devices, enabling low-latency responses without network overhead or high costs and improving solution scalability. Today, hardware improvements make edge devices capable of performing, albeit with some constraints, complex computations such as those required by Deep Neural Networks. Nevertheless, to efficiently implement deep learning algorithms on devices with limited computing power, it is necessary to minimize production time and to quickly identify, deploy, and, if necessary, optimize the best Neural Network solution. This study focuses on developing a universal method to identify the best Neural Network and port it to an edge system, valid regardless of the device, the Neural Network, and the task typology. The method is based on three steps: a trade-off step to select the best Neural Network among the different solutions under investigation; an optimization step to find the best parameter configurations under different acceleration techniques; and, finally, an explainability step using local interpretable model-agnostic explanations (LIME), which provides a global approach to quantifying the quality of the classifier's decision criteria. We evaluated several MobileNets on the Fudan-ShanghaiTech dataset to test the proposed approach.
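As an illustration of the method's second and third steps, the sketch below shows post-training quantization of a candidate network to TFLite followed by a LIME explanation of a single prediction. It is a minimal, hypothetical sketch assuming TensorFlow/Keras and the lime Python package; the pretrained MobileNetV2 and the random stand-in image are illustrative assumptions, not the paper's actual models or data.

```python
import numpy as np
import tensorflow as tf
from lime import lime_image

# Candidate network from the trade-off step (a pretrained MobileNetV2 here).
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Optimization step: convert to TFLite with default dynamic-range quantization,
# one of several acceleration techniques that could be compared at this stage.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("mobilenet_v2_quant.tflite", "wb") as f:
    f.write(converter.convert())

# Explainability step: LIME perturbs superpixels of the input and fits a local
# surrogate to show which regions drive the classifier's decision.
def predict_fn(images):
    # LIME passes perturbed copies of the [0, 1] float image; rescale to the
    # [-1, 1] range MobileNetV2 expects.
    return model.predict(images.astype(np.float32) * 2.0 - 1.0)

image = np.random.rand(224, 224, 3)  # stand-in for a real test image
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, predict_fn,
                                         top_labels=1, num_samples=100)
```

Aggregating such per-image explanations over a test set is one way to obtain the global view of the decision criteria that the abstract describes.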


Subjects
Artificial Intelligence, Neural Networks (Computer), Cloud Computing, Algorithms, Computers
2.
Bioengineering (Basel); 9(5), 2022 Apr 21.
Article in English | MEDLINE | ID: mdl-35621461

ABSTRACT

BACKGROUND: Type 1 Diabetes Mellitus (T1D) is an autoimmune disease whose serious complications can be avoided by keeping glycemic levels within the physiological range. Accordingly, many data-driven models have been developed to forecast future glycemic levels and allow patients to avoid adverse events. Most models are tuned on data from adult patients, whereas glycemic-level prediction for pediatric patients, the most challenging T1D population, has rarely been investigated. METHODS: A Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) Recurrent Neural Network were optimized on glucose, insulin, and meal data from 10 virtual pediatric patients. The trained models were then implemented on two edge-computing boards to evaluate the feasibility of an edge system for glucose forecasting in terms of prediction accuracy and inference time. RESULTS: The LSTM model achieved the best numerical and clinical accuracy when tested in the .tflite format, whereas the CNN achieved the best clinical accuracy in uint8. The inference time for each prediction was well below the limit set by the sampling period. CONCLUSION: Both models effectively predict glucose in pediatric patients in terms of numerical and clinical accuracy. The edge implementation did not show a significant performance decrease, and the inference time was largely adequate for a real-time application.
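The sketch below illustrates the kind of pipeline the study evaluates: a small 1D CNN trained on sliding windows of glucose, insulin, and meal signals, then converted to a full-integer (uint8) .tflite model for edge deployment. It assumes TensorFlow; the window length, network size, and synthetic data are illustrative placeholders, not the authors' configuration.

```python
import numpy as np
import tensorflow as tf

WINDOW, FEATURES = 6, 3  # e.g., 6 past samples x (glucose, insulin, meal)
x = np.random.rand(256, WINDOW, FEATURES).astype(np.float32)  # synthetic windows
y = np.random.rand(256, 1).astype(np.float32)  # glucose some minutes ahead

# Small 1D CNN forecaster mapping a signal window to one future glucose value.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.Conv1D(16, kernel_size=3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=2, verbose=0)

# Full-integer (uint8) post-training quantization needs a representative
# dataset so the converter can calibrate activation ranges.
def representative_data():
    for i in range(100):
        yield [x[i:i + 1]]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
with open("glucose_cnn_uint8.tflite", "wb") as f:
    f.write(converter.convert())
```

On the edge board, the .tflite file would be loaded with the TFLite interpreter, and the per-prediction inference time compared against the CGM sampling period.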
