Results 1 - 15 of 15
1.
J Anim Sci ; 102, 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-38587413

ABSTRACT

The characteristics of chicken droppings are closely linked to their health status. In prior studies, chicken dropping recognition was treated as an object detection task, leading to challenges in labeling and missed detections due to the diverse shapes, overlapping boundaries, and dense distribution of chicken droppings. Additionally, the use of intelligent monitoring equipment equipped with edge devices in farms can significantly reduce manual labor. However, the limited computational power of edge devices presents challenges in deploying real-time segmentation algorithms for field applications. Therefore, this study redefines the task as a segmentation task, with the main objective being the development of a lightweight segmentation model for the automated monitoring of abnormal chicken droppings. A total of 60 Arbor Acres broilers were housed in 5 specific pathogen-free cages for over 3 wk, and 1650 RGB images of chicken droppings were randomly divided into training and testing sets in an 8:2 ratio to develop and test the model. First, the segmentation accuracy of DDRNet was enhanced by incorporating an attention mechanism, a multi-loss function, and an auxiliary segmentation head. Then, by employing group convolution and an advanced knowledge-distillation algorithm, a lightweight segmentation model named DDRNet-s-KD was obtained, which achieved a mean Dice coefficient (mDice) of 79.43% and an inference speed of 86.10 frames per second (FPS), a 2.91% and 61.2% increase in mDice and FPS, respectively, compared to the benchmark model. Furthermore, the DDRNet-s-KD model was quantized from 32-bit floating-point values to 8-bit integers and then converted to TensorRT format. Impressively, the weight size of the quantized model was only 13.7 MB, an 82.96% reduction compared to the benchmark model, making it well suited for deployment on edge devices and achieving an inference speed of 137.51 FPS on a Jetson Xavier NX. In conclusion, the methods proposed in this study show significant potential for monitoring abnormal chicken droppings and can provide an effective reference for the implementation of other agricultural embedded systems.


The characteristics of chicken droppings are closely related to their health. In this study, we developed a lightweight segmentation model for chicken droppings and evaluated its inference speed on an edge device with limited computational power. The results show that the proposed model has significant potential for the early warning of abnormal chicken droppings, which can help producers implement interventions before disease outbreaks and thereby avoid major economic losses. Additionally, the model optimization and compression processes proposed in this study can provide an effective reference for the implementation of other embedded systems.


Subjects
Chickens; Feces; Animals; Algorithms; Animal Husbandry/methods; Image Processing, Computer-Assisted/methods; Specific Pathogen-Free Organisms
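
As an editorial illustration of the knowledge-distillation step described above, here is a minimal PyTorch sketch of a response-based distillation loss for a student segmentation network; the function name, temperature, and weighting are assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """student_logits, teacher_logits: (N, C, H, W); labels: (N, H, W) class ids."""
    # Hard-label term: ordinary cross-entropy against the ground-truth masks.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label term: KL divergence between temperature-softened class
    # distributions of teacher and student, scaled by T^2 as is conventional.
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    kd = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
    return alpha * ce + (1.0 - alpha) * kd
```

During training, the frozen teacher's logits would be computed under torch.no_grad() and only the student updated; the 8-bit quantization and TensorRT conversion mentioned in the abstract are separate post-training steps applied to the distilled student.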
2.
Article in English | MEDLINE | ID: mdl-38472722

ABSTRACT

This study introduces two models, the ConvLSTM2D-liquid time-constant network (CLTC) and the ConvLSTM2D-closed-form continuous-time neural network (CCfC), designed for abnormality identification using electrocardiogram (ECG) data. Trained on a subset of the Telehealth Network of Minas Gerais (TNMG) dataset, both models were evaluated for their performance, generalization capacity, and resilience. They demonstrated comparable results in terms of F1 scores and AUROC values. The CCfC model achieved slightly higher accuracy, while the CLTC model showed better handling of empty channels. Remarkably, the models were successfully deployed on a resource-constrained microcontroller, proving their suitability for edge device applications. Generalization capabilities were confirmed through evaluation on the China Physiological Signal Challenge 2018 (CPSC) dataset. The models' efficient resource utilization, occupying 70.6% of memory and 9.4% of flash memory, makes them promising candidates for real-world healthcare applications. Overall, this research advances abnormality identification in ECG data, contributing to the progress of AI in healthcare.
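
For orientation, a rough Keras sketch of a ConvLSTM2D front-end of the kind named above is shown below; the liquid time-constant / closed-form continuous-time head is replaced here by a plain pooling-and-dense classifier purely as a stand-in, and the input framing and class count are assumptions.

```python
import tensorflow as tf

def build_ecg_model(n_frames=16, n_leads=12, samples_per_frame=256, n_classes=6):
    # Treat the ECG record as a short sequence of (leads x samples) "frames".
    inputs = tf.keras.Input(shape=(n_frames, n_leads, samples_per_frame, 1))
    x = tf.keras.layers.ConvLSTM2D(8, kernel_size=(3, 7), padding="same",
                                   return_sequences=False)(inputs)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    # Placeholder head; the paper's CLTC/CCfC variants use liquid/CfC cells here.
    outputs = tf.keras.layers.Dense(n_classes, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)
```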

3.
Sensors (Basel) ; 24(3), 2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38339490

ABSTRACT

Cameras are used increasingly in smart city domains, notably for monitoring outdoor urban and rural areas such as farms and forests to deter theft of farming machinery and livestock, as well as for monitoring workers to guarantee their safety. However, anomaly detection becomes much more challenging in low-light conditions, making it difficult to recognise surrounding behaviours and events efficiently. Therefore, this research developed a technique to enhance images captured in poor visibility, with the aim of boosting object detection accuracy and mitigating false positive detections. The proposed technique consists of several stages. In the first stage, features are extracted from the input images. Subsequently, a classifier assigns a label indicating the optimal model among multiple enhancement networks; it can also distinguish scenes captured with sufficient light from low-light ones. Finally, a detection algorithm is applied to identify objects. Each task was implemented on a separate IoT edge device, improving detection performance on the ExDark database with a response time of nearly one second across all stages.
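
The staged pipeline described above (classify the lighting condition, pick an enhancement network, then detect) can be summarized in a minimal sketch; all models here are hypothetical callables supplied by the caller, not the authors' implementations.

```python
def detect_with_adaptive_enhancement(frame, classifier, enhancers, detector):
    """classifier(frame) -> a key of `enhancers`, or "none" for well-lit scenes;
    enhancers: dict mapping each key to an image-enhancement callable;
    detector(image) -> list of detections."""
    label = classifier(frame)                                      # pick the optimal model
    image = frame if label == "none" else enhancers[label](frame)  # enhance if needed
    return detector(image)                                         # detect objects
```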

4.
Micromachines (Basel) ; 14(9), 2023 Aug 31.
Article in English | MEDLINE | ID: mdl-37763876

ABSTRACT

Personalized PageRank (PPR) is a widely used graph-processing algorithm that calculates the importance of nodes in a graph with respect to source nodes. Generally, PPR is executed on a high-performance server microprocessor, but it needs to be executed on edge devices to guarantee data privacy and low network latency. However, since PPR has computation and memory characteristics that vary with the graph dataset, it suffers performance and energy inefficiency when executed on edge devices with limited hardware resources. In this paper, we propose HedgeRank, a heterogeneity-aware, energy-efficient partitioning technique for personalized PageRank at the edge. HedgeRank partitions the PPR subprocesses and allocates them to appropriate edge devices by considering their computation capability and energy efficiency. When combining low-power and high-performance edge devices, HedgeRank improves the execution time and energy consumption of PPR by up to 26.7% and 15.2%, respectively, compared to the state-of-the-art PPR technique.
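
For reference, the personalized PageRank computation that HedgeRank partitions can be written as a textbook power iteration; this NumPy sketch is a generic formulation, not the authors' partitioned implementation.

```python
import numpy as np

def personalized_pagerank(A, source, alpha=0.85, tol=1e-8, max_iter=100):
    """A: (n, n) adjacency matrix; source: index of the personalization node.
    Dangling nodes (zero out-degree) are ignored here for brevity."""
    n = A.shape[0]
    out_deg = A.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix (rows with no out-edges stay zero).
    P = np.divide(A, out_deg, out=np.zeros_like(A, dtype=float), where=out_deg > 0)
    e = np.zeros(n)
    e[source] = 1.0                      # restart (personalization) distribution
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = alpha * (P.T @ r) + (1.0 - alpha) * e
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r
```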

5.
Heliyon ; 9(8): e18606, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37593642

ABSTRACT

The global food crisis is becoming increasingly severe, and frequent grain bin fires lead to significant additional food losses. Accordingly, this paper proposes a model-compression technique for promptly detecting the small, thin smoke that appears in the early stages of a fire in grain bins. The proposed technique involves three key stages: (1) conducting smoke experiments in a back-up bin to acquire a dataset; (2) building a real-time detection model based on YOLO v5s with sparse training, channel pruning, and model fine-tuning; and (3) deploying the proposed model on different current edge devices. The experimental results indicate that the proposed model can detect smoke in grain bins effectively, with a mAP of 94.90% and a detection speed of 109.89 FPS, while the model size is reduced by 5.11 MB. Furthermore, the proposed model was deployed on an edge device and achieved a detection speed of 49.26 FPS, allowing for real-time detection.
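
The "sparse training" step that precedes channel pruning is typically an L1 penalty on BatchNorm scale factors (network slimming). The PyTorch sketch below illustrates that idea under the assumption that the detector's training loop adds this term to its detection loss; it is not the authors' code.

```python
import torch.nn as nn

def bn_l1_penalty(model, lam=1e-4):
    """L1 penalty on BatchNorm scale factors; near-zero channels are pruned later."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            penalty = penalty + m.weight.abs().sum()   # m.weight is the BN gamma
    return lam * penalty

# Hypothetical use inside the training loop:
#   loss = detection_loss + bn_l1_penalty(model)
```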

6.
Natl Sci Rev ; 10(8): nwac266, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37396141

ABSTRACT

Intelligent indoor robotics is expected to rapidly gain importance in crucial areas of our modern society such as at-home health care and factories. Yet, existing mobile robots are limited in their ability to perceive and respond to dynamically evolving complex indoor environments because of their inherently limited sensing and computing resources that are, moreover, traded off against their cruise time and payload. To address these formidable challenges, here we propose intelligent indoor metasurface robotics (I2MR), where all sensing and computing are relegated to a centralized robotic brain endowed with microwave perception; and I2MR's limbs (motorized vehicles, airborne drones, etc.) merely execute the wirelessly received instructions from the brain. The key aspect of our concept is the centralized use of a computation-enabled programmable metasurface that can flexibly mold microwave propagation in the indoor wireless environment, including a sensing and localization modality based on configurational diversity and a communication modality to establish a preferential high-capacity wireless link between the I2MR's brain and limbs. The metasurface-enhanced microwave perception is capable of realizing low-latency and high-resolution three-dimensional imaging of humans, even around corners and behind thick concrete walls, which is the basis for action decisions of the I2MR's brain. I2MR is thus endowed with real-time and full-context awareness of its operating indoor environment. We implement, experimentally, a proof-of-principle demonstration at ∼2.4 GHz, in which I2MR provides health-care assistance to a human inhabitant. The presented strategy opens a new avenue for the conception of smart and wirelessly networked indoor robotics.

7.
Sensors (Basel) ; 23(12), 2023 Jun 14.
Article in English | MEDLINE | ID: mdl-37420727

ABSTRACT

A Binarized Neural Network (BNN) is a quantized Convolutional Neural Network (CNN) that reduces the precision of network parameters for a much smaller model size. In BNNs, the Batch Normalisation (BN) layer is essential. When running BN on edge devices, floating-point instructions take up a significant number of cycles. This work leverages the fixed nature of a model during inference to reduce the full-precision memory footprint by half. This was achieved by pre-computing the BN parameters prior to quantization. The proposed BNN was validated by modeling the network on the MNIST dataset. Compared to the traditional method of computation, the proposed BNN reduced memory utilization by 63%, to 860 bytes, without any significant impact on accuracy. By pre-computing portions of the BN layer, the number of cycles required for the computation is reduced to two on an edge device.


Subjects
Neural Networks, Computer; Running
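
A worked sketch of the pre-computation described above: because the next BNN layer only keeps the sign of the BatchNorm output, BN followed by sign() collapses to a per-channel threshold comparison. The NumPy code below is illustrative; the variable names and the omission of zero-gamma handling are assumptions.

```python
import numpy as np

def fold_bn_to_threshold(gamma, beta, mean, var, eps=1e-5):
    """Fold inference-time BatchNorm into a per-channel sign threshold.
    Channels with gamma == 0 are not handled here, for brevity."""
    sigma = np.sqrt(var + eps)
    # sign(gamma * (x - mean) / sigma + beta) == sign(x - tau) when gamma > 0,
    # and the comparison flips when gamma < 0, so store a threshold plus a flag.
    tau = mean - beta * sigma / gamma
    flip = gamma < 0
    return tau, flip

def bn_then_sign(x, tau, flip):
    out = np.where(x >= tau, 1, -1)
    return np.where(flip, -out, out)
```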
8.
Sci Prog ; 106(2): 368504231177551, 2023.
Article in English | MEDLINE | ID: mdl-37229758

ABSTRACT

Rivets are used to assemble layers in the air intakes, fuselages, and wings of an aircraft. After long periods of operation under extreme conditions, pitting corrosion can appear in the rivets, which may then fail and threaten the safety of the aircraft. In this paper, we propose an ultrasonic testing method integrated with a convolutional neural network (CNN) for detecting corrosion in rivets. The CNN model was designed to be lightweight enough to run on edge devices and was trained with a very limited sample of rivets, from 3 to 9 rivets with artificial pitting corrosion. The results show that the proposed approach could detect up to 95.2% of pitting corrosion on experimental data with three training rivets, and the detection accuracy could be improved to 99% with nine training rivets. The CNN model was implemented and run on an edge device (Jetson Nano) in real time with a small latency of 1.65 ms.
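
A hypothetical lightweight 1-D CNN for classifying ultrasonic A-scan signals as corroded versus intact is sketched below; the layer sizes and signal length are illustrative guesses, not the published architecture.

```python
import tensorflow as tf

def build_rivet_classifier(signal_len=1000):
    return tf.keras.Sequential([
        tf.keras.Input(shape=(signal_len, 1)),                  # one ultrasonic A-scan
        tf.keras.layers.Conv1D(8, 16, strides=2, activation="relu"),
        tf.keras.layers.MaxPooling1D(4),
        tf.keras.layers.Conv1D(16, 8, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),         # P(corroded)
    ])
```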

9.
Sensors (Basel) ; 23(3), 2023 Jan 20.
Article in English | MEDLINE | ID: mdl-36772225

ABSTRACT

Tiny machine learning (TinyML) has become an emerging field owing to the rapid growth of the internet of things (IoT). However, most deep learning algorithms are too complex, require large amounts of memory to store data, and consume an enormous amount of energy for calculation and data movement; therefore, they are not suitable for IoT devices such as various sensors and imaging systems. Furthermore, typical hardware accelerators cannot be embedded in these resource-constrained edge devices, and they struggle to drive real-time inference processing. To perform real-time processing on these battery-operated devices, deep learning models should be compact and hardware-optimized, and hardware accelerator designs also have to be lightweight and consume extremely little energy. Therefore, we present a network model optimized through model simplification and compression for hardware implementation, and propose a hardware architecture for a lightweight and energy-efficient deep learning accelerator. The experimental results demonstrate that our optimized model successfully performs object detection, and the proposed hardware design achieves 1.25× and 4.27× smaller logic and BRAM size, respectively, and approximately 10.37× lower energy consumption than previous similar works, while running at 43.95 fps in real time at an operating frequency of 100 MHz on a Xilinx ZC702 FPGA.

10.
Micromachines (Basel) ; 13(6), 2022 May 29.
Article in English | MEDLINE | ID: mdl-35744466

ABSTRACT

Recently, the Internet of Things (IoT) has gained a lot of attention, since IoT devices are deployed in many fields. Many of these devices are based on machine learning (ML) models, which render them intelligent and able to make decisions. IoT devices typically have limited resources, which restricts the execution of complex ML models such as deep learning (DL) on them. In addition, connecting IoT devices to the cloud to transfer raw data and perform processing causes delayed system responses, exposes private data, and increases communication costs. To tackle these issues, a new technology called Tiny Machine Learning (TinyML) has paved the way to meeting the challenges of IoT devices. TinyML allows data to be processed locally on the device without the need to send it to the cloud, and it permits the inference of ML models, including DL models, on devices with limited resources such as microcontrollers. The aim of this paper is to provide an overview of the TinyML revolution and a review of TinyML studies; the main contribution is an analysis of the types of ML models used in TinyML studies, together with details of the datasets and of the types and characteristics of the devices, in order to clarify the state of the art and envision development requirements.

11.
AI Ethics ; 2(4): 623-630, 2022.
Article in English | MEDLINE | ID: mdl-34790960

ABSTRACT

Artificial intelligence and edge devices have been used at an increasing rate in managing the COVID-19 pandemic. In this article we review the lessons learned from COVID-19 to postulate possible solutions for a Disease X event. The overall purpose of the study, and the research problems investigated, is the integration of artificial intelligence functions in digital healthcare systems. The basic design of the study includes a systematic state-of-the-art review, followed by an evaluation of different approaches to managing global pandemics. The study then constructs a new methodology for integrating algorithms in healthcare systems, followed by analysis of the new methodology and a discussion. Action research is applied to review the existing state of the art, and a qualitative case study method is used to analyse the knowledge acquired from the COVID-19 pandemic. Major trends found as a result of the study derive from the synthesis of COVID-19 knowledge, presenting new insights in the form of a conceptual methodology that includes six phases for managing a future Disease X event and resulting in a summary map of various problems, solutions and expected results from integrating functional AI in healthcare systems.

12.
Sensors (Basel) ; 21(23), 2021 Dec 02.
Article in English | MEDLINE | ID: mdl-34884076

ABSTRACT

As autonomous driving techniques become increasingly valued and widespread, real-time semantic segmentation has become a popular and challenging topic in deep learning and computer vision in recent years. However, in order to apply deep learning models to the edge devices that accompany sensors on vehicles, we need to design a structure with the best trade-off between accuracy and inference time. In previous works, several methods sacrificed accuracy to obtain a faster inference time, while others aimed to find the best accuracy under real-time constraints. Nevertheless, the accuracy of previous real-time semantic segmentation methods still shows a large gap compared with general semantic segmentation methods. As a result, we propose a network architecture based on a dual encoder and a self-attention mechanism. Compared with preceding works, we achieved 78.6% mIoU at a speed of 39.4 FPS at 1024 × 2048 resolution on a Cityscapes test submission.


Subjects
Automobile Driving; Neural Networks, Computer; Image Processing, Computer-Assisted; Semantics
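
Below is a minimal PyTorch sketch of the kind of 2-D self-attention block such a dual-encoder network might use to capture long-range context; the channel sizes, reduction factor, and residual scaling are assumptions, not the paper's module.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // reduction, 1)
        self.k = nn.Conv2d(channels, channels // reduction, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learnable residual weight

    def forward(self, x):
        n, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)    # (n, hw, c')
        k = self.k(x).flatten(2)                    # (n, c', hw)
        attn = torch.softmax(q @ k, dim=-1)         # (n, hw, hw) pairwise weights
        v = self.v(x).flatten(2)                    # (n, c, hw)
        out = (v @ attn.transpose(1, 2)).view(n, c, h, w)
        return x + self.gamma * out                 # residual connection
```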
13.
Front Plant Sci ; 12: 740936, 2021.
Article in English | MEDLINE | ID: mdl-34721466

ABSTRACT

In recent years, deep-learning-based fruit-detection technology has exhibited excellent performance in modern horticulture research. However, deploying deep learning algorithms in real-time field applications is still challenging, owing to the relatively low image-processing capability of edge devices. Such limitations are becoming a new bottleneck and hinder the utilization of AI algorithms in modern horticulture. In this paper, we propose a lightweight fruit-detection algorithm specifically designed for edge devices. The algorithm is based on Light-CSPNet as the backbone network, with an improved feature-extraction module, a down-sampling method, and a feature-fusion module, and it ensures real-time detection on edge devices while maintaining fruit-detection accuracy. The proposed algorithm was tested on three edge devices: the NVIDIA Jetson Xavier NX, NVIDIA Jetson TX2, and NVIDIA Jetson NANO. The experimental results show that the average detection precision of the proposed algorithm on the orange, tomato, and apple datasets is 0.93, 0.847, and 0.850, respectively. With the algorithm deployed, the detection speed on the NVIDIA Jetson Xavier NX reaches 21.3, 24.8, and 22.2 FPS, that on the NVIDIA Jetson TX2 reaches 13.9, 14.1, and 14.5 FPS, and that on the NVIDIA Jetson NANO reaches 6.3, 5.0, and 8.5 FPS for the three datasets. Additionally, the proposed algorithm provides a component add/remove function to flexibly adjust the model structure, considering the trade-off between detection accuracy and speed in practical usage.
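
FPS figures like those above are usually obtained with a small timing harness; the sketch below shows a generic PyTorch version, where the model and input resolution are placeholders rather than Light-CSPNet itself.

```python
import time
import torch

def measure_fps(model, input_shape=(1, 3, 416, 416), warmup=10, iters=100):
    """Average forward-pass throughput in frames per second."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.eval().to(device)
    x = torch.randn(*input_shape, device=device)
    with torch.no_grad():
        for _ in range(warmup):          # warm-up runs are excluded from timing
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return iters / (time.time() - start)
```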

14.
Bioengineering (Basel) ; 8(11), 2021 Oct 21.
Article in English | MEDLINE | ID: mdl-34821716

ABSTRACT

The success of deep machine learning (DML) models in gaming and robotics has increased their trial in clinical and public healthcare solutions. In applying DML to healthcare problems, a special challenge of inadequate electrical energy and computing resources exists in regional and developing areas of the world. In this paper, we evaluate and report the computational and predictive performance design trade-offs for four candidate deep learning models that can be deployed for rapid malaria case finding. The goal is to maximise malaria detection accuracy while reducing computing resource and energy consumption. Based on our experimental results using a blood-smear malaria test dataset, the quantised versions of a Basic Convolutional Neural Network (B-CNN) and MobileNetV2 have better malaria detection performance (up to 99% recall), lower memory usage (a 2 MB 8-bit quantised model) and shorter inference time (33-95 microseconds on mobile phones) than fine-tuned and quantised VGG-19 models. Hence, we have implemented MobileNetV2 in our mobile application, as it has an even lower memory requirement than B-CNN. This work will help to counter the negative effects of COVID-19 on the previous successes towards global malaria elimination.
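
Post-training 8-bit quantization of a MobileNetV2 classifier with TensorFlow Lite, of the kind reported above, can be sketched as follows; the binary head, the random calibration images, and the file name are assumptions for illustration, not the study's pipeline.

```python
import numpy as np
import tensorflow as tf

# Binary classifier head on a MobileNetV2 backbone (weights omitted here;
# the study fine-tuned a pretrained model before quantization).
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False, pooling="avg",
                                         weights=None)
model = tf.keras.Sequential([base, tf.keras.layers.Dense(1, activation="sigmoid")])

# Stand-in calibration images; in practice these would be real blood-smear crops.
rep_images = np.random.rand(8, 224, 224, 3).astype("float32")

def representative_data():
    for img in rep_images:
        yield [img[None, ...]]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()
open("malaria_mobilenetv2_int8.tflite", "wb").write(tflite_model)
```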

15.
Sensors (Basel) ; 20(9), 2020 Apr 29.
Article in English | MEDLINE | ID: mdl-32365645

ABSTRACT

In a few years, the world will be populated by billions of connected devices placed in our homes, cities, vehicles, and industries. Devices with limited resources will interact with the surrounding environment and users. Many of these devices will be based on machine learning models that decode meaning and behavior from sensor data, implement accurate predictions and make decisions. The bottleneck will be the sheer number of connected things, which could congest the network; hence the need to incorporate intelligence on end devices using machine learning algorithms. Deploying machine learning on such edge devices reduces network congestion by allowing computations to be performed close to the data sources. The aim of this work is to review the main techniques that guarantee the execution of machine learning models on low-performance hardware in the Internet of Things paradigm, paving the way to the Internet of Conscious Things. A detailed review of models, architectures, and requirements for solutions that implement edge machine learning on Internet of Things devices is presented, with the main goal of defining the state of the art and envisioning development requirements. Furthermore, an example of edge machine learning implementation on a microcontroller is provided, commonly regarded as the machine learning "Hello World".
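
The machine learning "Hello World" referenced above is commonly a tiny network trained to approximate sin(x) and then converted to a TensorFlow Lite flatbuffer for a microcontroller; the sketch below is a generic version of that exercise, not the article's exact code.

```python
import numpy as np
import tensorflow as tf

# Synthetic training data: learn y = sin(x) on [0, 2*pi].
x = np.random.uniform(0, 2 * np.pi, 2000).astype("float32")
y = np.sin(x)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=100, batch_size=32, verbose=0)

# Convert to a TensorFlow Lite flatbuffer; on-device it would typically be
# compiled in as a C array and run with the TFLite Micro interpreter.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
open("sine_model.tflite", "wb").write(tflite_model)
```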
