1.
Environ Monit Assess; 196(8): 720, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38985219

ABSTRACT

Managing e-waste involves collecting it, extracting valuable metals at low cost, and ensuring environmentally safe disposal. However, monitoring this process has become challenging due to the expansion of e-waste. With IoT technology like LoRa-LPWAN, pre-collection monitoring becomes more cost-effective. Our paper presents an e-waste collection and recovery system utilizing the LoRa-LPWAN standard, integrating intelligence at the edge and fog layers. The system incentivizes WEEE holders, encouraging participation in the innovative collection process. The city administration oversees this process using smart trucks equipped with GPS, LoRaWAN, RFID, and BLE technologies. Analysis of IoT performance factors and quantitative assessments (latency and collision probability on LoRa, Sigfox, and NB-IoT) demonstrate the effectiveness of our incentive-driven IoT solution, particularly with the LoRa standard and Edge AI integration. Additionally, cost estimates show the advantage of LoRaWAN. Moreover, the proposed IoT-based e-waste management solution promises cost savings, stakeholder trust, and long-term effectiveness through streamlined processes and human resource training. Integration with government databases involves data standardization, API development, security measures, and functionality testing for efficient management.
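Collision-probability comparisons of this kind are commonly grounded in a pure-ALOHA channel model, which fits LoRaWAN's unslotted uplink: the probability that a given uplink collides is 1 - e^(-2G), where G is the normalized offered load. A hedged sketch of that arithmetic (the node count, message rate, and time-on-air below are invented illustrative values, not figures from the paper):

```python
import math

def aloha_collision_probability(n_nodes: int, msgs_per_hour: float,
                                time_on_air_s: float) -> float:
    """Collision probability under pure (unslotted) ALOHA, the usual
    first-order model for LoRaWAN uplink traffic."""
    per_node_rate = msgs_per_hour / 3600.0                  # msgs/s per node
    offered_load = n_nodes * per_node_rate * time_on_air_s  # normalized load G
    return 1.0 - math.exp(-2.0 * offered_load)              # P = 1 - e^(-2G)

# Example: 1000 e-waste bins, 2 status reports/hour, ~370 ms time-on-air (SF9)
print(f"{aloha_collision_probability(1000, 2, 0.37):.2%}")
```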


Subject(s)
Electronic Waste, Waste Management, Waste Management/methods, Artificial Intelligence, Environmental Monitoring/methods, Internet of Things, Conservation of Natural Resources/methods
2.
Sensors (Basel); 23(3), 2023 Jan 22.
Article in English | MEDLINE | ID: mdl-36772319

ABSTRACT

Artificial Intelligence (AI) models are being produced and used to solve a variety of current and future business and technical problems. Therefore, AI model engineering processes, platforms, and products are acquiring special significance across industry verticals. To achieve deeper automation, AI models are generated from a large number of data features, and the resulting models are correspondingly bulky. Such heavyweight models consume a lot of computation, storage, networking, and energy resources. On the other hand, AI models are increasingly being deployed in IoT devices to ensure real-time knowledge discovery and dissemination. Real-time insights are of paramount importance in producing and releasing real-time and intelligent services and applications. Thus, edge intelligence through on-device data processing has laid a stimulating foundation for real-time intelligent enterprises and environments. With these emerging requirements, the focus has turned towards unearthing competent and cognitive techniques for maximally compressing huge AI models without sacrificing performance. AI researchers have therefore come up with a number of powerful optimization techniques and tools. This paper digs deep into and describes model optimization at different levels and layers. Having surveyed these optimization methods, the work highlights the importance of an enabling AI model optimization framework.
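One of the simplest compression techniques in this family is unstructured magnitude pruning: the smallest-magnitude weights are zeroed so the model can be stored and executed sparsely. A minimal framework-free sketch (the thresholding scheme below is a generic illustration, not a method attributed to this paper):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude `sparsity` fraction of the weights."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # The k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

layer = np.random.randn(256, 128).astype(np.float32)
pruned = magnitude_prune(layer, sparsity=0.8)   # keep roughly the top 20%
print(f"sparsity achieved: {np.mean(pruned == 0):.1%}")
```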

3.
Sensors (Basel); 23(5), 2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36904586

ABSTRACT

Over the last few years, several studies have appeared that employ Artificial Intelligence (AI) techniques to improve sustainable development in the agricultural sector. Specifically, these intelligent techniques provide mechanisms and procedures to facilitate decision-making in the agri-food industry. One application area has been the automatic detection of plant diseases. These techniques, mainly based on deep learning models, allow plants to be analysed and classified to determine possible diseases, facilitating early detection and thus preventing the propagation of the disease. Accordingly, this paper proposes an Edge-AI device that incorporates the necessary hardware and software components for automatically detecting plant diseases from a set of images of a plant leaf. The main goal of this work is to design an autonomous device that can detect potential diseases in plants by capturing multiple images of the leaves and applying data fusion techniques to enhance the classification process and improve its robustness. Several tests have been carried out showing that the use of this device significantly increases the robustness of the classification responses to possible plant diseases.
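The abstract does not spell out the fusion rule, but a common choice for combining per-image classifications of the same leaf is late fusion: averaging the softmax outputs across captures before taking the argmax. A hedged sketch (the class labels and scores are invented for illustration):

```python
import numpy as np

def late_fusion(probabilities: np.ndarray) -> tuple[int, float]:
    """Average per-image class probabilities and return (class, confidence).

    probabilities: shape (n_images, n_classes); each row is a softmax output.
    """
    fused = probabilities.mean(axis=0)
    return int(fused.argmax()), float(fused.max())

# Three captures of one leaf, scores over {healthy, rust, mildew} (toy numbers)
p = np.array([[0.55, 0.40, 0.05],
              [0.30, 0.65, 0.05],
              [0.25, 0.70, 0.05]])
label, confidence = late_fusion(p)
print(label, round(confidence, 2))   # -> 1 ("rust"), 0.58
```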


Subject(s)
Agriculture, Artificial Intelligence, Consensus, Intelligence, Plant Diseases
4.
Sensors (Basel); 23(4), 2023 Feb 14.
Article in English | MEDLINE | ID: mdl-36850763

ABSTRACT

Deep Learning models have presented promising results when applied to Agriculture 4.0. Among other applications, these models can be used in disease detection and fruit counting. Deep Learning models usually have many layers in the architecture and millions of parameters. This aspect hinders the use of Deep Learning on mobile devices, as they require a large amount of processing power for inference. In addition, the lack of high-quality Internet connectivity in the field impedes the usage of cloud computing, pushing the processing towards edge devices. This work describes the proposal of an edge AI application to detect and map diseases in citrus orchards. The proposed system has low computational demand, enabling the use of low-footprint models for both detection and classification tasks. We initially compared AI algorithms to detect fruits on trees, analyzing and comparing YOLO and Faster R-CNN. Then, we studied lean AI models to perform the classification task, testing and comparing the performance of MobileNetV2, EfficientNetV2-B0, and NASNet-Mobile. In the detection task, YOLO and Faster R-CNN had similar AI performance metrics, but YOLO was significantly faster. In the image classification task, MobileNetV2 and EfficientNetV2-B0 obtained an accuracy of 100%, while NASNet-Mobile reached 98%. As for timing performance, MobileNetV2 and EfficientNetV2-B0 were the best candidates, while NASNet-Mobile was significantly worse; furthermore, MobileNetV2 was 10% faster than EfficientNetV2-B0. Finally, we provide a method to evaluate the results from these algorithms towards describing the disease spread, using statistical parametric models and a genetic algorithm to perform the parameter regression. With these results, we validated the proposed pipeline, enabling the usage of adequate AI models to develop a mobile edge AI solution.
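For the classification stage, a low-footprint model such as MobileNetV2 is typically fine-tuned from ImageNet weights with a small classification head. A hedged Keras sketch of that setup (the class count, input size, and frozen-backbone choice are assumptions, not details from the paper):

```python
import tensorflow as tf

NUM_CLASSES = 4   # assumed disease classes; the paper's label set may differ

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False   # freeze the backbone, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```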


Subject(s)
Agriculture, Citrus, Algorithms, Benchmarking, Artificial Intelligence
5.
Sensors (Basel); 23(10), 2023 May 11.
Article in English | MEDLINE | ID: mdl-37430583

ABSTRACT

Over the past few years, several applications have been extensively exploiting the advantages of deep learning, in particular when using convolutional neural networks (CNNs). The intrinsic flexibility of such models makes them widely adopted in a variety of practical applications, from medical to industrial. In the latter scenario, however, consumer Personal Computer (PC) hardware is not always suitable for the potentially harsh conditions of the working environment and the strict timing that industrial applications typically have. Therefore, the design of custom FPGA (Field Programmable Gate Array) solutions for network inference is gaining massive attention from researchers and companies alike. In this paper, we propose a family of network architectures composed of three kinds of custom layers working with integer arithmetic with a customizable precision (down to just two bits). Such layers are designed to be effectively trained on classical GPUs (Graphics Processing Units) and then synthesized to FPGA hardware for real-time inference. The idea is to provide a trainable quantization layer, called Requantizer, acting both as a non-linear activation for neurons and a value rescaler to match the desired bit precision. This way, the training is not only quantization-aware, but also capable of estimating the optimal scaling coefficients to accommodate both the non-linear nature of the activations and the constraints imposed by the limited precision. In the experimental section, we test the performance of this kind of model while working both on classical PC hardware and a case-study implementation of a signal peak detection device running on a real FPGA. We employ TensorFlow Lite for training and comparison, and use Xilinx FPGAs and Vivado for synthesis and implementation. The results show an accuracy of the quantized networks close to that of the floating-point version, without the need for representative data for calibration as in other approaches, and performance that is better than dedicated peak detection algorithms. The FPGA implementation is able to run in real time at a rate of four gigapixels per second with moderate hardware resources, while achieving a sustained efficiency of 0.5 TOPS/W (tera operations per second per watt), in line with custom integrated hardware accelerators.
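The Requantizer idea — a layer that clips, rescales, and rounds activations while still letting gradients flow to the scaling coefficient — can be sketched as a custom Keras layer with a straight-through estimator. This is a loose reconstruction from the abstract alone; the scale parameterization and bit-width handling below are guesses, not the authors' exact design:

```python
import tensorflow as tf

class Requantizer(tf.keras.layers.Layer):
    """Toy trainable quantizer: clips, scales, and rounds activations to n bits.

    Rounding uses a straight-through estimator so gradients reach `scale`.
    A sketch of the idea in the abstract, not the paper's published layer.
    """
    def __init__(self, bits: int = 2, **kwargs):
        super().__init__(**kwargs)
        self.levels = 2 ** bits - 1

    def build(self, input_shape):
        self.scale = self.add_weight(
            name="scale", shape=(), initializer="ones", trainable=True)

    def call(self, x):
        y = tf.clip_by_value(x / self.scale, 0.0, 1.0)   # non-linear clipping
        q = tf.round(y * self.levels) / self.levels      # quantize to n bits
        q = y + tf.stop_gradient(q - y)                  # straight-through grad
        return q * self.scale
```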

6.
Sensors (Basel); 23(9), 2023 Apr 27.
Article in English | MEDLINE | ID: mdl-37177550

ABSTRACT

This paper delves into image detection based on distributed deep-learning techniques for intelligent traffic systems or self-driving cars. The accuracy and precision of neural networks deployed on edge devices (e.g., CCTV (closed-circuit television) for road surveillance) with small datasets may be compromised, leading to the misjudgment of targets. To address this challenge, TensorFlow and PyTorch were used to implement various distributed model-parallel and data-parallel techniques. Despite the success of these techniques, communication constraints were observed, along with certain speed issues. As a result, a hybrid pipeline was proposed, combining both dataset and model distribution through an all-reduce algorithm and NVLink to prevent miscommunication among gradients. The proposed approach was tested on both an edge cluster and a Google cluster environment, demonstrating superior performance compared to other test settings, with the quality of the bounding-box detection system meeting expectations with increased reliability. Performance metrics, including total training time, images per second, cross-entropy loss, and total loss against the number of epochs, were evaluated, revealing robust competition between TensorFlow and PyTorch. The PyTorch environment's hybrid pipeline outperformed the other test settings.
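The data-parallel half of such a pipeline is typically built on PyTorch's DistributedDataParallel, whose NCCL backend all-reduces gradients across GPUs (over NVLink where available). A hedged sketch with a placeholder model and random batches standing in for the detection workload (one process per GPU, launched e.g. with torchrun):

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank: int, world_size: int):
    # NCCL all-reduces gradients across GPUs (using NVLink where available)
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(1024, 10).cuda(rank)   # placeholder for the detector
    ddp_model = DDP(model, device_ids=[rank])
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(100):
        x = torch.randn(32, 1024, device=rank)     # stand-in for image batches
        y = torch.randint(0, 10, (32,), device=rank)
        optimizer.zero_grad()
        loss = loss_fn(ddp_model(x), y)
        loss.backward()                            # gradients all-reduced here
        optimizer.step()

    dist.destroy_process_group()
```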

7.
Sensors (Basel); 23(6), 2023 Mar 10.
Article in English | MEDLINE | ID: mdl-36991712

ABSTRACT

This research describes the use of high-performance computing (HPC) and deep learning to create prediction models that can be deployed on edge AI devices equipped with cameras and installed in poultry farms. The main idea is to leverage an existing IoT farming platform and use HPC offline to run deep learning to train models for object detection and object segmentation, where the objects are chickens in images taken on the farm. The models can be ported from HPC to edge AI devices to create a new type of computer vision kit to enhance the existing digital poultry farm platform. Such new sensors enable functions such as counting chickens, detecting dead chickens, and even assessing their weight or detecting uneven growth. These functions, combined with the monitoring of environmental parameters, could enable early disease detection and improve the decision-making process. The experiment focused on Faster R-CNN architectures, and AutoML was used to identify the most suitable architecture for chicken detection and segmentation for the given dataset. For the selected architectures, further hyperparameter optimization was carried out, achieving an accuracy of AP = 85%, AP50 = 98%, and AP75 = 96% for object detection and AP = 90%, AP50 = 98%, and AP75 = 96% for instance segmentation. These models were installed on edge AI devices and evaluated in online mode on actual poultry farms. Initial results are promising, but further development of the dataset and improvements to the prediction models are needed.
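On the edge device, the detection stage reduces to running the trained detector on camera frames and counting confident boxes. A hedged sketch using torchvision's stock COCO-pretrained Faster R-CNN as a stand-in for the paper's fine-tuned chicken model (the confidence threshold is an arbitrary illustrative choice):

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Stand-in weights; the paper fine-tunes on farm images instead
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

@torch.no_grad()
def count_detections(image: torch.Tensor, score_thresh: float = 0.8) -> int:
    """image: float tensor (3, H, W) in [0, 1]; returns boxes above threshold."""
    output = model([image])[0]
    return int((output["scores"] > score_thresh).sum())

frame = torch.rand(3, 480, 640)   # placeholder for a camera frame
print(count_detections(frame))
```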


Subject(s)
Deep Learning, Poultry, Animals, Farms, Chickens, Computers
8.
Sensors (Basel); 23(15), 2023 Aug 03.
Article in English | MEDLINE | ID: mdl-37571678

ABSTRACT

Smart wearable devices enable personalized at-home healthcare by unobtrusively collecting patient health data and facilitating the development of intelligent platforms to support patient care and management. The accurate analysis of data obtained from wearable devices is crucial for interpreting and contextualizing health data and facilitating the reliable diagnosis and management of critical and chronic diseases. The combination of edge computing and artificial intelligence has provided real-time, time-critical, and privacy-preserving data analysis solutions. However, based on the envisioned service, evaluating the additive value of edge intelligence to the overall architecture is essential before implementation. This article aims to comprehensively analyze the current state of the art on smart health infrastructures implementing wearable and AI technologies at the far edge to support patients with chronic heart failure (CHF). In particular, we highlight the contribution of edge intelligence in supporting the integration of wearable devices into IoT-aware technology infrastructures that provide services for patient diagnosis and management. We also offer an in-depth analysis of open challenges and provide potential solutions to facilitate the integration of wearable devices with edge AI solutions to provide innovative technological infrastructures and interactive services for patients and doctors.


Subject(s)
Heart Failure, Wearable Electronic Devices, Humans, Artificial Intelligence, Awareness, Chronic Disease, Heart Failure/diagnosis, Heart Failure/therapy
9.
Sensors (Basel); 23(6), 2023 Mar 08.
Article in English | MEDLINE | ID: mdl-36991659

ABSTRACT

Internet of Things (IoT)-enabled wireless body area networks (WBANs) are an emerging technology that combines medical devices, wireless devices, and non-medical devices for healthcare management applications. Speech emotion recognition (SER) is an active research field in the healthcare domain and machine learning. It is a technique that can be used to automatically identify speakers' emotions from their speech. However, SER systems, especially in the healthcare domain, are confronted with a few challenges: for example, low prediction accuracy, high computational complexity, delays in real-time prediction, and the difficulty of identifying appropriate features from speech. Motivated by these research gaps, we propose an emotion-aware IoT-enabled WBAN system within the healthcare framework, where data processing and long-range data transmissions are performed by an edge AI system for real-time prediction of patients' speech emotions, as well as to capture the changes in emotions before and after treatment. Additionally, we investigated the effectiveness of different machine learning and deep learning algorithms in terms of classification performance, feature extraction methods, and normalization methods. We developed a hybrid deep learning model combining a convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM), as well as a regularized CNN model. We combined the models with different optimization strategies and regularization techniques to improve the prediction accuracy, reduce generalization error, and reduce the computational complexity of the neural networks in terms of their computational time, power, and space. Different experiments were performed to check the efficiency and effectiveness of the proposed machine learning and deep learning algorithms. The proposed models were compared with a related existing model for evaluation and validation, using standard performance metrics such as prediction accuracy, precision, recall, F1 score, confusion matrix, and the differences between the actual and predicted values. The experimental results showed that one of the proposed models outperformed the existing model with an accuracy of about 98%.
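A hybrid CNN-BiLSTM for SER usually convolves over a time-frequency representation (e.g., MFCCs) and lets the recurrent layer model temporal context. A hedged Keras sketch of the idea; the feature shape, layer sizes, and dropout rate are assumptions, not the paper's reported configuration:

```python
import tensorflow as tf

N_FRAMES, N_MFCC, N_EMOTIONS = 200, 40, 6   # assumed input/label shapes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_FRAMES, N_MFCC)),       # MFCC sequence
    tf.keras.layers.Conv1D(64, 5, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(128, 5, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dropout(0.3),           # one of the regularization options
    tf.keras.layers.Dense(N_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```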


Subject(s)
Internet of Things, Speech, Humans, Neural Networks, Computer, Machine Learning, Emotions
10.
Sensors (Basel); 23(9), 2023 Apr 25.
Article in English | MEDLINE | ID: mdl-37177461

ABSTRACT

The paper presents a comprehensive overview of intelligent video analytics and human action recognition methods, surveying the current state of knowledge in the field of human activity recognition, including pose-based, tracking-based, spatio-temporal, and deep learning-based approaches, as well as visual transformers. We also discuss the challenges and limitations of these techniques and the potential of modern edge AI architectures to enable real-time human action recognition in resource-constrained environments.


Subject(s)
Human Activities, Pattern Recognition, Automated, Humans, Pattern Recognition, Automated/methods
11.
Sensors (Basel); 22(2), 2022 Jan 07.
Article in English | MEDLINE | ID: mdl-35062410

ABSTRACT

Edge Computing (EC) is a new architecture that extends Cloud Computing (CC) services closer to data sources. EC combined with Deep Learning (DL) is a promising technology and is widely used in several applications. However, in conventional DL architectures with EC enabled, data producers must frequently send and share data with third parties, edge or cloud servers, to train their models. This architecture is often impractical due to high bandwidth requirements, regulatory constraints, and privacy vulnerabilities. The Federated Learning (FL) concept has recently emerged as a promising solution for mitigating the problems of unwanted bandwidth consumption, data privacy, and regulatory compliance. FL can co-train models across distributed clients, such as mobile phones, automobiles, hospitals, and more, through a centralized server, while maintaining data localization. FL can therefore be viewed as a stimulating factor in the EC paradigm, as it enables collaborative learning and model optimization. Although existing surveys have taken into account applications of FL in EC environments, there has not been any systematic survey discussing FL implementation and challenges in the EC paradigm. This paper aims to provide a systematic survey of the literature on the implementation of FL in EC environments, with a taxonomy to identify advanced solutions and other open problems. In this survey, we review the fundamentals of EC and FL, then we review the existing related work on FL in EC. Furthermore, we describe the protocols, architecture, framework, and hardware requirements for FL implementation in the EC environment. Moreover, we discuss the applications, challenges, and related existing solutions in edge FL. Finally, we detail two relevant case studies of applying FL in EC, and we identify open issues and potential directions for future research. We believe this survey will help researchers better understand the connection between FL and EC enabling technologies and concepts.
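At the heart of most FL deployments is the Federated Averaging (FedAvg) aggregation step: each client trains locally, and the server combines the resulting parameters weighted by local dataset size. A framework-free sketch of just that step (toy shapes and sample counts):

```python
import numpy as np

def federated_average(client_weights: list[list[np.ndarray]],
                      client_sizes: list[int]) -> list[np.ndarray]:
    """FedAvg: weight each client's parameters by its local dataset size.

    client_weights: one list of parameter arrays per client (matching shapes).
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        acc = sum(w[layer] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        averaged.append(acc)
    return averaged

# Two clients with a single 2x2 "layer" each (toy numbers)
c1 = [np.array([[1.0, 2.0], [3.0, 4.0]])]
c2 = [np.array([[3.0, 4.0], [5.0, 6.0]])]
print(federated_average([c1, c2], client_sizes=[100, 300])[0])
```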


Subject(s)
Cloud Computing, Privacy, Forecasting, Humans
12.
Sensors (Basel); 22(24), 2022 Dec 09.
Article in English | MEDLINE | ID: mdl-36560026

ABSTRACT

Edge artificial intelligence (EDGE-AI) refers to the execution of artificial intelligence algorithms on hardware devices while processing sensor data/signals in order to extract information and identify patterns, without utilizing the cloud. In the field of predictive maintenance for industrial applications, EDGE-AI systems can provide operational state recognition for machines and production chains, almost in real time. This work presents two methodological approaches for the detection of the operational states of a DC motor, based on sound data. Initially, features were extracted from an audio dataset. Two different Convolutional Neural Network (CNN) models were trained for this particular classification problem. The two models were subjected to post-training quantization and an appropriate conversion/compression so they could be deployed on microcontroller units (MCUs) using suitable software tools. A real-time validation experiment was conducted, including the simulation of a custom stress-test environment, to check the deployed models' performance in recognizing the motor's operational states and the response time for transitions between states. Finally, the two implementations were compared in terms of classification accuracy, latency, and resource utilization, leading to promising results.
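The post-training quantization and conversion step described here is commonly done with the TensorFlow Lite converter in full-integer mode. A hedged sketch; the saved-model path, feature shape, and representative-data generator are placeholders, not artifacts from the paper:

```python
import numpy as np
import tensorflow as tf

def representative_audio_features():
    # Placeholder: yield a few tensors shaped like the training features
    for _ in range(100):
        yield [np.random.rand(1, 40, 32, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("cnn_motor_states")  # assumed path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_audio_features
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8    # full-integer model for the MCU
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("motor_states_int8.tflite", "wb") as f:
    f.write(tflite_model)   # typically then embedded as a C array on the MCU
```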


Subject(s)
Algorithms, Artificial Intelligence, Neural Networks, Computer, Software, Computer Simulation
13.
Sensors (Basel); 22(6), 2022 Mar 10.
Article in English | MEDLINE | ID: mdl-35336335

ABSTRACT

Artificial Intelligence (AI) in Cyber-Physical Systems allows machine learning inference on acquired data with ever greater accuracy, thanks to models trained with massive amounts of information generated by Internet of Things devices. Edge Intelligence is increasingly adopted to execute inference on data at the border of local networks, exploiting models trained in the Cloud. However, training tasks on Edge nodes are not yet supported by flexible dynamic migration between Edge and Cloud. This paper proposes a Cloud-Edge AI microservice architecture based on Osmotic Computing principles. Notable features include: (i) a containerized architecture enabling training and inference on the Edge, Cloud, or both, exploiting computational resources opportunistically to reach the best prediction accuracy; and (ii) microservice encapsulation of each architectural module, allowing a direct mapping with Commercial-Off-The-Shelf (COTS) components. Building on the proposed architecture, a prototype was realized with commodity hardware leveraging open-source software technologies, and was then used in a small-scale intelligent manufacturing case study to carry out experiments. The obtained results validate the feasibility and key benefits of the approach.


Subject(s)
Artificial Intelligence, Software, Intelligence, Osmosis
14.
Sensors (Basel); 22(13), 2022 Jun 28.
Article in English | MEDLINE | ID: mdl-35808373

ABSTRACT

The digital transformation of agriculture is a pressing necessity for tackling the increasing nutritional needs of the population on Earth and the degradation of natural resources. Focusing on the "hot" area of natural resource preservation, this work exploits the recent appearance of more efficient and cheaper microcontrollers, the advances in low-power and long-range radios, and the availability of accompanying software tools in order to monitor water consumption and to detect and report misuse events, with reduced power and network bandwidth requirements. Quite often, large quantities of water are wasted for a variety of reasons, from broken irrigation pipes to people's negligence. To tackle this problem, the necessary design and implementation details are highlighted for an experimental water usage reporting system that exhibits Edge Artificial Intelligence (Edge AI) functionality. By combining modern technologies, such as the Internet of Things (IoT), Edge Computing (EC), and Machine Learning (ML), the deployment of a compact automated detection mechanism becomes easier than before, while the information that has to travel from the edges of the network to the cloud, and thus the corresponding energy footprint, is drastically reduced. In parallel, characteristic implementation challenges are discussed, and a first set of corresponding evaluation results is presented.
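At its simplest, the kind of on-device misuse detection described here can be approximated by a rule that flags uninterrupted flow over a long window, with only the alert leaving the device. A deliberately naive sketch (the window length and flow floor are invented thresholds; the paper's Edge AI classifier would replace this rule with a trained model):

```python
from collections import deque

class LeakDetector:
    """Flag a suspected leak when flow never drops below a floor for a window.

    A simple edge heuristic with illustrative thresholds, standing in for the
    trained on-device model described in the abstract.
    """
    def __init__(self, window: int = 60, min_flow_lpm: float = 0.5):
        self.readings = deque(maxlen=window)   # last `window` per-minute samples
        self.min_flow_lpm = min_flow_lpm

    def update(self, flow_lpm: float) -> bool:
        self.readings.append(flow_lpm)
        full = len(self.readings) == self.readings.maxlen
        # Continuous non-zero flow for the whole window => probable leak/misuse
        return full and min(self.readings) > self.min_flow_lpm

detector = LeakDetector()
for minute, flow in enumerate([0.8] * 90):     # 90 min of uninterrupted flow
    if detector.update(flow):
        print(f"leak suspected at minute {minute}")  # only this alert is sent
        break
```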


Subject(s)
Artificial Intelligence, Internet of Things, Agriculture, Humans, Machine Learning, Water
15.
World Wide Web; 25(5): 1883-1903, 2022.
Article in English | MEDLINE | ID: mdl-35002476

ABSTRACT

With the development of telemedicine and edge computing, edge artificial intelligence (AI) will become a new development trend for smart medicine. At the same time, nearly one-third of children suffer from sleep disorders, yet existing sleep staging methods are designed for adults. Therefore, we adapted edge AI to develop a lightweight automatic sleep staging method for children using single-channel EEG. The trained sleep staging model is deployed to edge smart devices so that sleep staging can be implemented on the edge, greatly saving network resources and improving the performance and privacy of the sleep staging application. The results and hypnogram are then uploaded to the cloud server for further analysis by physicians to produce sleep disease diagnosis reports and treatment opinions. We utilized a 1D convolutional neural network (1D-CNN) and long short-term memory (LSTM) to build our sleep staging model, named CSleepNet. We tested the model on our children's sleep (CS) dataset and the Sleep-EDFX dataset. For the CS dataset, we experimented with F4-M1 channel EEG using four different loss functions, and logcosh performed best, with an overall accuracy of 83.06% and an F1-score of 76.50%. We used Fpz-Cz and Pz-Oz channel EEG to train our model on the Sleep-EDFX dataset, achieving an accuracy of 86.41% without manual feature extraction. The experimental results show that our method has great potential: it not only plays an important role in sleep-related research, but can also be widely used in the classification of other time-series physiological signals.
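A 1D-CNN followed by an LSTM over 30-second single-channel EEG epochs is the usual shape of such a model. A hedged Keras sketch of the idea, not CSleepNet's published configuration (the sampling rate, filter sizes, and stage count are assumptions); the compile step mirrors the abstract's finding that logcosh worked best, which implies one-hot stage targets:

```python
import tensorflow as tf

EPOCH_SAMPLES, N_STAGES = 3000, 5   # assumed: 30 s @ 100 Hz, 5 sleep stages

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(EPOCH_SAMPLES, 1)),      # single-channel EEG
    tf.keras.layers.Conv1D(64, 50, strides=6, activation="relu"),
    tf.keras.layers.MaxPooling1D(8),
    tf.keras.layers.Conv1D(128, 8, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.LSTM(64),                             # temporal context
    tf.keras.layers.Dense(N_STAGES, activation="softmax"),
])
# logcosh is the loss the abstract reports as best (targets one-hot encoded)
model.compile(optimizer="adam", loss=tf.keras.losses.LogCosh(),
              metrics=["accuracy"])
```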

16.
Sensors (Basel); 22(1), 2021 Dec 28.
Article in English | MEDLINE | ID: mdl-35009712

ABSTRACT

There is a constant risk of iron ore collapsing during its transfer between processing stages in beneficiation plants. Existing instrumentation is not only expensive but also complex and challenging to maintain. In this research, we propose using edge artificial intelligence for early detection of landslide risk based on images of iron ore transported on conveyor belts. During this work, we defined the edge device and the deep neural network model. Then, we built a prototype to collect the images used for training the model. The model was compressed for use on the edge device, and the same prototype was used for field tests of the model under operational conditions. In building the prototype, a real-time clock was used to ensure the synchronization of image records with the plant's process information, ensuring the correct classification of images by the process specialist. The results obtained in the field tests of the prototype, with an accuracy of 91% and a recall of 96%, indicate the feasibility of using deep learning at the edge to detect the type of iron ore and prevent the risk of collapse.


Subject(s)
Artificial Intelligence, Deep Learning, Iron, Neural Networks, Computer
17.
Sensors (Basel); 21(15), 2021 Jul 27.
Article in English | MEDLINE | ID: mdl-34372319

ABSTRACT

Research on ecological environments helps to assess the impacts on forests and to manage them. Novel software and hardware technologies support the tasks related to this problem, and the lack of connectivity for large data throughput raises the demand for edge-computing-based solutions towards this goal. Therefore, in this work, we evaluate the opportunity of using a wearable edge AI concept in a forest environment. For this purpose, we propose a new approach to the hardware/software co-design process. We also address the possibility of creating wearable edge AI, where wireless personal and body area networks serve as platforms for building applications using edge AI. Finally, we evaluate a case study to test the possibility of performing an edge AI task in a wearable-based environment, assessing the system's ability to achieve the desired task, the hardware resources and performance, and the network latency associated with each part of the process. Through this work, we validated both the design pattern review and the case study. In the case study, the developed algorithms could classify diseased leaves with roughly 90% accuracy using the proposed technique in the field; these results could be improved in the laboratory with more modern models that reached up to 96% global accuracy. The system could also perform the desired tasks with a quality factor of 0.95, considering the usage of three devices. Finally, it detected a disease epicenter with an offset of roughly 0.5 m in a 6 m × 6 m × 12 m space. These results support the usage of the proposed methods in the targeted environment and the proposed changes to the co-design pattern.


Subject(s)
Algorithms, Wearable Electronic Devices, Artificial Intelligence, Equipment Design, Humans, Software
18.
Sensors (Basel); 21(17), 2021 Aug 26.
Article in English | MEDLINE | ID: mdl-34502637

ABSTRACT

The Internet of Things (IoT) can help to pave the way to the circular economy and to a more sustainable world by enabling the digitalization of many operations and processes, such as water distribution, preventive maintenance, or smart manufacturing. Paradoxically, although IoT technologies and paradigms such as edge computing have huge potential for the digital transition towards sustainability, they are not yet contributing to the sustainable development of the IoT sector itself. In fact, the sector has a significant carbon footprint due to the use of scarce raw materials and its energy consumption in manufacturing, operating, and recycling processes. To tackle these issues, the Green IoT (G-IoT) paradigm has emerged as a research area aimed at reducing this carbon footprint; however, its sustainable vision collides directly with the advent of Edge Artificial Intelligence (Edge AI), which imposes additional energy consumption. This article deals with this problem by exploring the different aspects that impact the design and development of Edge-AI G-IoT systems. Moreover, it presents a practical Industry 5.0 use case that illustrates the different concepts analyzed throughout the article. Specifically, the proposed scenario consists of an Industry 5.0 smart workshop that seeks to improve operator safety and operation tracking. The application case makes use of a mist computing architecture composed of AI-enabled IoT nodes. After describing the application case, its energy consumption is evaluated, and the impact it may have on the carbon footprint of different countries is analyzed. Overall, this article provides guidelines that will help future developers to face the challenges that will arise when creating the next generation of Edge-AI G-IoT systems.
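The per-country carbon analysis mentioned above amounts to multiplying the fleet's energy consumption by each grid's carbon intensity. A hedged sketch of that arithmetic (the intensity figures and node power below are illustrative placeholders, not values from the article):

```python
# Illustrative grid carbon intensities in gCO2/kWh (placeholder values,
# not the figures used in the article)
GRID_INTENSITY = {"ES": 150.0, "DE": 350.0, "PL": 650.0}

def yearly_footprint_kg(avg_power_w: float, n_nodes: int, country: str) -> float:
    """CO2 footprint of a fleet of AI-enabled IoT nodes running 24/7 for a year."""
    kwh_per_node = avg_power_w * 24 * 365 / 1000.0          # Wh -> kWh per year
    grams_co2 = kwh_per_node * n_nodes * GRID_INTENSITY[country]
    return grams_co2 / 1000.0                               # g -> kg

for country in GRID_INTENSITY:
    kg = yearly_footprint_kg(avg_power_w=2.5, n_nodes=100, country=country)
    print(country, round(kg, 1), "kg CO2/yr")
```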


Subject(s)
Internet of Things, Artificial Intelligence, Diagnostic Tests, Routine, Industry, Technology
19.
Sensors (Basel); 20(24), 2020 Dec 21.
Article in English | MEDLINE | ID: mdl-33371514

ABSTRACT

Telemedicine and all types of monitoring systems have proven to be useful and low-cost tools with a high level of applicability in cardiology. The objective of this work is to present an IoT-based monitoring system for cardiovascular patients. The system sends the ECG signal to a Fog-layer service using the LoRa communication protocol. It also includes an AI algorithm based on deep learning for the detection of atrial fibrillation and other heart rhythms. The automatic detection of arrhythmias can be complementary to the diagnosis made by the physician, achieving a better clinical picture that improves therapeutic decision-making. The performance of the proposed system is evaluated on a dataset of 8,528 short single-lead ECG records using two merged MobileNet networks, which classify data with an accuracy of 90% for atrial fibrillation.


Subject(s)
Atrial Fibrillation, Cardiovascular Diseases/diagnosis, Electrocardiography, Internet of Things, Neural Networks, Computer, Algorithms, Atrial Fibrillation/diagnosis, Cloud Computing, Humans, Monitoring, Physiologic
20.
Heliyon; 10(12): e32609, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38975192

ABSTRACT

Closed-loop neuromodulation with intelligent methods has shown great potential for providing novel neuro-technology to treat neurological and psychiatric diseases. The development of brain-machine interactive neuromodulation strategies could lead to breakthroughs in precision and personalized electronic medicine. A neuromodulation research tool that integrates artificial intelligence computing and performs neural sensing and stimulation in real time could accelerate the development of closed-loop neuromodulation strategies and translational research into clinical application. In this study, we developed a brain-machine interactive neuromodulation research tool (BMINT), which has capabilities of sensing neurophysiological signals, computing with mainstream machine learning algorithms, and delivering electrical stimulation pulse by pulse in real time. The BMINT research tool achieved a system time delay under 3 ms, and provided computing capabilities with feasible computational cost, efficient deployment of machine learning algorithms, and acceleration. The intelligent computing framework embedded in the BMINT enables real-time closed-loop neuromodulation to be developed with mainstream AI ecosystem resources. The BMINT could make a timely contribution to accelerating translational research in intelligent neuromodulation by integrating neural sensing, edge AI computing, and stimulation with AI ecosystems.
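The sub-3 ms figure is a budget on one pass through the sense-compute-stimulate loop. A hedged sketch of such a loop with a deadline check (the three callbacks are placeholders for the tool's sensing, embedded-ML, and stimulation stages, not BMINT's actual interfaces):

```python
import time

LATENCY_BUDGET_S = 0.003   # the BMINT reports < 3 ms end-to-end delay

def closed_loop(read_neural_sample, classify, stimulate, n_iterations=1000):
    """One pulse-by-pulse closed-loop pass per iteration; counts deadline misses."""
    overruns = 0
    for _ in range(n_iterations):
        t0 = time.perf_counter()
        sample = read_neural_sample()          # sense
        decision = classify(sample)            # edge AI compute
        if decision:
            stimulate()                        # deliver a stimulation pulse
        if time.perf_counter() - t0 > LATENCY_BUDGET_S:
            overruns += 1                      # missed the real-time deadline
    return overruns

# Toy run with no-op stages, just to exercise the timing harness
print(closed_loop(lambda: 0.0, lambda s: s > 1.0, lambda: None))
```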
