Results 1 - 20 of 26
1.
Front Big Data ; 7: 1359906, 2024.
Article in English | MEDLINE | ID: mdl-38953011

ABSTRACT

Persuasive technologies, in connection with human-factor engineering requirements for healthy workplaces, have played a significant role in changing human behavior. Healthy-workplace guidance covers best practices for body posture, proximity to the computer system, movement, lighting conditions, computer system layout, and other significant psychological and cognitive aspects. Most importantly, body posture guidance suggests how users should sit or stand in workplaces in line with healthy best practices. In this study, we developed two study phases (pilot and main) using two deep learning models: a convolutional neural network (CNN) and Yolo-V3. To train the two models, we collected posture datasets from Creative Commons-licensed YouTube videos and Kaggle, and classified the data into comfortable and uncomfortable postures. Results show that our YOLO-V3 model outperformed the CNN model, with a mean average precision of 92%. Based on this finding, we recommend that the YOLO-V3 model be integrated into the design of persuasive technologies for a healthy workplace. We also discuss future implications for integrating proximity detection, taking into consideration the ideal distance in centimeters users should maintain in a healthy workplace.
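The reported mean average precision (mAP) is the mean over classes of per-class average precision. A minimal sketch of the per-class computation, using hypothetical detections rather than the study's data:

```python
def average_precision(detections, num_gt):
    """All-points-interpolated AP for one class.

    detections: list of (confidence, is_true_positive) pairs.
    num_gt: number of ground-truth boxes for this class.
    """
    ranked = sorted(detections, key=lambda d: -d[0])
    tp = fp = 0
    points = []  # (recall, precision) after each detection is counted
    for _, is_tp in ranked:
        tp, fp = (tp + 1, fp) if is_tp else (tp, fp + 1)
        points.append((tp / num_gt, tp / (tp + fp)))
    # area under the precision envelope, swept over increasing recall
    ap, prev_recall = 0.0, 0.0
    for i, (recall, _) in enumerate(points):
        envelope = max(p for _, p in points[i:])
        ap += (recall - prev_recall) * envelope
        prev_recall = recall
    return ap
```

mAP is then the mean of this value across all classes.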

2.
Sci Rep ; 14(1): 8627, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38622182

ABSTRACT

A bridge disease identification approach based on an enhanced YOLO v3 algorithm is proposed to increase the accuracy of apparent disease detection on concrete bridges against complex backgrounds. First, the YOLO v3 network structure is enhanced to better accommodate the dense distribution and large scale variation of disease features: the detection layers incorporate the squeeze-and-excitation (SE) attention mechanism module and a spatial pyramid pooling module to strengthen semantic feature extraction. Secondly, CIoU, which has better localization ability, is selected as the loss function for training. Finally, the K-means algorithm is used for anchor-box clustering on the bridge surface disease dataset. To test the efficacy of the algorithm, a dataset of 1363 images containing exposed reinforcement, spalling, and water erosion damage was produced, and the network was trained after manual labelling and data augmentation. The trial results show that the enhanced YOLO v3 model improves on the original model in precision rate, recall rate, Average Precision (AP), and other indicators; its overall mean Average Precision (mAP) grew by 5.5%. On an RTX2080Ti graphics card, the detection frame rate reaches 84 frames per second, enabling more precise and real-time bridge disease detection.
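Anchor-box clustering with K-means, as used here, typically clusters ground-truth box sizes under a 1 - IoU distance rather than a Euclidean one. A minimal numpy sketch with made-up box sizes (not the paper's dataset):

```python
import numpy as np

def iou_wh(boxes, anchors):
    # IoU between (w, h) pairs, treating all boxes as anchored at the origin
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    # assign each box to the anchor it overlaps most (i.e. smallest 1 - IoU)
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else anchors[i] for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors
```

The resulting (w, h) pairs replace YOLO v3's default anchors at training time.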

3.
Graefes Arch Clin Exp Ophthalmol ; 262(1): 231-247, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37548671

ABSTRACT

BACKGROUND: In this article, we present a computerized system for the analysis and assessment of diabetic retinopathy (DR) based on retinal fundus photographs. DR is a chronic ophthalmic disease and a major cause of blindness in people with diabetes. Consistent examination and prompt diagnosis are vital to controlling DR. METHODS: With the aim of enhancing the reliability of DR diagnosis, we utilized the deep learning model You Only Look Once V3 (YOLO V3) to recognize and classify DR from retinal images. DR was classified into five major stages: normal, mild, moderate, severe, and proliferative. We evaluated the performance of the YOLO V3 algorithm on color fundus images. RESULTS: We achieved high precision and sensitivity on the training and test data for DR classification, and mean average precision (mAP) was calculated for DR lesion detection. CONCLUSIONS: The results indicate that the suggested model distinguishes all phases of DR and performs better than existing models in terms of accuracy and implementation time.


Subjects
Deep Learning, Diabetes Mellitus, Diabetic Retinopathy, Humans, Diabetic Retinopathy/diagnosis, Reproducibility of Results, Fundus Oculi, Algorithms
4.
PeerJ Comput Sci ; 9: e1673, 2023.
Article in English | MEDLINE | ID: mdl-38077557

ABSTRACT

To address the insufficient small-target detection ability of existing network models, this article proposes a vehicle target detection method based on an improved YOLO V3 network model. The improvements effectively raise the detection ability for small vehicle targets in aerial photography: optimizing and adjusting the anchor boxes and improving the network's residual modules both improve the small-target detection effect. Furthermore, introducing a rectangular prediction frame with an orientation angle improves the vehicle positioning efficiency of the algorithm and greatly reduces wrong and missed vehicle detections, providing ideas for solving related problems. Experiments show that the accuracy rate of the improved algorithm model is 89.3%, an improvement of 15.9% over the YOLO V3 algorithm. The recall rate is improved by 16% and the F1 value by 15.9%, which greatly increases the detection efficiency for aerial vehicle imagery.
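The oriented rectangular prediction frame mentioned above can be described by a centre, a size, and a rotation angle; its four corners follow from a standard 2D rotation. A generic geometry sketch, not the paper's code:

```python
import math

def oriented_box_corners(cx, cy, w, h, angle_rad):
    # corners of a w-by-h rectangle centred at (cx, cy), rotated by angle_rad,
    # listed counter-clockwise starting from the (-w/2, -h/2) corner
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    corners = []
    for dx, dy in [(-w / 2, -h / 2), (w / 2, -h / 2),
                   (w / 2, h / 2), (-w / 2, h / 2)]:
        corners.append((cx + dx * c - dy * s, cy + dx * s + dy * c))
    return corners
```

With angle zero this reduces to the usual axis-aligned corner layout.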

5.
PeerJ Comput Sci ; 9: e1502, 2023.
Article in English | MEDLINE | ID: mdl-37705641

ABSTRACT

Ecological biodiversity is declining at an unprecedented rate. To combat such irreversible changes in natural ecosystems, biodiversity conservation initiatives are being conducted globally. However, the lack of a feasible methodology for quantifying biodiversity in real-time and investigating population dynamics at spatiotemporal scales prevents the use of ecological data in environmental planning. Traditionally, ecological studies rely on a census of the animal population via the "capture, mark and recapture" technique, in which human field workers manually count, tag, and observe tagged individuals, making it time-consuming, expensive, and cumbersome to patrol the entire area. Recent research has also demonstrated the potential of inexpensive and accessible sensors for ecological data monitoring. However, stationary sensors collect localised data that is highly dependent on the placement of the setup. In this research, we propose a methodology for biodiversity monitoring that utilises state-of-the-art deep learning (DL) methods operating in real-time on sample payloads of mobile robots. The trained DL algorithms demonstrate a mean average precision (mAP) of 90.51% at an average inference time of 67.62 milliseconds within 6,000 training epochs. We claim that the use of such mobile platform setups inferring real-time ecological data can help us achieve our goal of quick and effective biodiversity surveys. An experimental test payload is fabricated, and online as well as offline field surveys are conducted, validating the proposed methodology for species identification, which can be further extended to geo-localisation of flora and fauna in any ecosystem.
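Inference-time figures like the 67.62 ms reported above are usually obtained by timing repeated runs after a warm-up. A small, generic harness; the workload below is a stand-in for a real model:

```python
import time

def benchmark(fn, warmup=3, runs=20):
    """Mean latency (ms) and frames per second of a zero-argument callable."""
    for _ in range(warmup):
        fn()  # warm-up runs are excluded (caches, JIT, lazy init)
    t0 = time.perf_counter()
    for _ in range(runs):
        fn()
    ms = (time.perf_counter() - t0) / runs * 1000.0
    return ms, 1000.0 / ms
```

On a GPU-backed model, each `fn()` would also need to synchronize the device so the timer measures completed work.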

6.
J Colloid Interface Sci ; 651: 59-67, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37540930

ABSTRACT

An artificial intelligence (AI)-integrated, smartphone-based handheld determination platform, built from a 3D-printed accessory, an Al3+-triggered aggregation-induced red-emission-enhanced carbon dot (CD) test strip, and a smartphone application based on a self-developed YOLO v3 AI algorithm, demonstrates the feasibility of intelligent, real-time, on-site quantitation of F- by tracking a continuous fluorescence (FL) colour change. The CDs, manifesting dual emission (moderate green emission at 512 nm and weak red emission at 620 nm under 365 nm excitation), were synthesized hydrothermally from alizarin carmine and citric acid. CDs@Al3+, with distinct aggregation-induced red-emission enhancement and green-emission quenching, were prepared by adding Al3+ to the CD solution. Owing to the intrinsic ratiometric FL variation (I620/I512), CDs@Al3+ produce a continuous FL colour change from red to green in response to different concentrations of F-, with a low limit of detection of 7.998 µM and a wide linear range of 150-1200 µM, based on the excellent linear correlation between the R/G value and the F- concentration. Furthermore, the F- content in tap water, toothpaste, and milk could be analyzed intelligently, speedily, and straightforwardly through the AI-integrated smartphone-based handheld detection platform. We hope this study will motivate a new perspective for the promotion of effective detection strategies and the extension of practical applications.
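The calibration described above rests on a linear relationship between the R/G value and the F- concentration; a least-squares fit plus its inversion sketches the idea. The calibration points below are synthetic, not the paper's measurements:

```python
import numpy as np

def fit_ratiometric_calibration(concentrations_uM, rg_ratios):
    # least-squares line: R/G = slope * concentration + intercept
    slope, intercept = np.polyfit(concentrations_uM, rg_ratios, 1)

    def predict_concentration(rg):
        # invert the calibration line to read a concentration off a ratio
        return (rg - intercept) / slope

    return slope, intercept, predict_concentration
```

In the reported system the R/G value itself would come from the camera image of the test strip.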

7.
Multimed Tools Appl ; : 1-16, 2023 Mar 04.
Article in English | MEDLINE | ID: mdl-37362733

ABSTRACT

The ability of Advanced Driving Assistance Systems (ADAS) to identify and understand all objects around the vehicle under varying driving conditions and environmental factors is critical. Today's vehicles are equipped with advanced driving assistance systems that make driving safer and more comfortable. A camera mounted on the car helps the system recognise and detect traffic signs and alerts the driver about various road conditions, such as construction work ahead or changed speed limits. The goal is to identify the traffic sign and process the image in minimal time. A custom convolutional neural network model is used to classify the traffic signs with higher accuracy than existing models. Image augmentation techniques are used to expand the dataset artificially, which allows the model to learn how an image looks from different perspectives, such as when viewed from different angles or when it is blurred by poor weather conditions. The algorithms used to detect traffic signs are YOLO v3 and YOLO v4-tiny. The proposed solution for detecting a specific set of traffic signs performed well, with an accuracy rate of 95.85%.
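Augmentations of the kind described (flips, rotations, lighting shifts, and noise standing in for blur) can be sketched with numpy-only array transforms; a real pipeline would more likely use a library such as OpenCV or Albumentations:

```python
import numpy as np

def augment(image, rng):
    """Return simple augmented variants of an H x W x 3 uint8 image."""
    out = [np.fliplr(image)]            # horizontal flip (mirrored viewpoint)
    out.append(np.rot90(image, k=1))    # 90-degree rotation
    bright = np.clip(image.astype(np.int16) + 40, 0, 255).astype(np.uint8)
    out.append(bright)                  # brightness shift (lighting change)
    noisy = np.clip(image + rng.normal(0, 10, image.shape), 0, 255)
    out.append(noisy.astype(np.uint8))  # Gaussian noise (degraded capture)
    return out
```

Each variant keeps the original label, multiplying the effective dataset size.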

8.
Sensors (Basel) ; 23(9)2023 Apr 29.
Article in English | MEDLINE | ID: mdl-37177589

ABSTRACT

With the development of industrial automation, articulated robots have gradually replaced manual labor in bolt installation. Although installation efficiency has improved, installation defects may still occur. Bolt installation defects can considerably affect the mechanical properties of structures and even lead to safety accidents. Therefore, to ensure the success rate of bolt assembly, an efficient and timely method for detecting incorrect or missing assembly is needed. At present, automatic detection of bolt installation defects mainly depends on a single type of sensor, which is prone to mis-inspection. Visual sensors can identify the incorrect or missing installation of bolts, but they cannot detect torque defects. Torque sensors can judge only from torque and angle information, and cannot accurately identify incorrect or missing bolts. To solve this problem, a detection method for bolt installation defects based on multiple sensors is proposed. A trained YOLO (You Only Look Once) v3 network judges the images collected by the visual sensor, with a visual recognition rate of up to 99.75% and an average output confidence of 0.947. The detection speed is 48 FPS, which meets the real-time requirement. At the same time, torque and angle sensors are used to judge torque defects and whether bolts have slipped. Combining the multi-sensor judgments, this method can effectively identify defects such as missing bolts and slipped threads. Finally, experiments were carried out on bolt installation defects such as incorrect installation, missing bolts, torque defects, and bolt slips. In these cases, the traditional detection method based on a single type of sensor cannot identify the defects effectively, while the multi-sensor method identifies them accurately.
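The multi-sensor combination reduces to rule-based fusion of the vision verdict with the torque/angle readings. A sketch with hypothetical thresholds (the abstract does not report specific torque or angle values):

```python
def bolt_status(vision_detected, torque_Nm, angle_deg,
                torque_range=(8.0, 12.0), min_angle=30.0):
    """Fuse vision and torque/angle judgements into one verdict.

    torque_range and min_angle are illustrative placeholders, not
    values from the paper.
    """
    if not vision_detected:
        return "missing bolt"          # vision sensor: bolt absent
    if not (torque_range[0] <= torque_Nm <= torque_range[1]):
        return "torque defect"         # torque sensor: out of tolerance
    if angle_deg < min_angle:
        return "slipped thread"        # angle sensor: bolt turned too little
    return "ok"
```

Either sensor alone would miss the defect classes the other one covers, which is the point of the fusion.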

9.
Front Cardiovasc Med ; 10: 1101765, 2023.
Article in English | MEDLINE | ID: mdl-36910524

ABSTRACT

Introduction: Atherosclerosis is the primary factor in cardiovascular disease and upcoming cardiovascular events. Carotid plaque texture, as observed on ultrasonography, is varied and difficult to classify with the human eye due to substantial inter-observer variability. High-resolution magnetic resonance (MR) plaque imaging offers naturally superior soft-tissue contrast to computed tomography (CT) and ultrasonography, and combining different contrast weightings may provide more useful information. Radiation freeness and operator independence are two additional benefits of MRI. However, other than preliminary research on MR texture analysis of basilar artery plaque, there is currently no information addressing MR radiomics of carotid plaque. Methods: Automatic segmentation of MRI scans to detect carotid plaque for stroke risk assessment requires a computer-aided autonomous framework that classifies MRI scans automatically. We used pre-trained models to detect carotid plaque from MRI scans for stroke risk assessment, fine-tuned them, and adjusted their hyperparameters to our problem. Results: Our trained YOLO V3 model achieved 94.81% accuracy, RCNN achieved 92.53%, and MobileNet achieved 90.23% in identifying carotid plaque from MRI scans for stroke risk assessment. Our approach will prevent incorrect diagnoses brought on by poor image quality and limited personal experience. Conclusion: The evaluations in this work demonstrate that this methodology produces acceptable results for classifying magnetic resonance imaging (MRI) data.

10.
Animals (Basel) ; 13(3)2023 Jan 26.
Article in English | MEDLINE | ID: mdl-36766301

ABSTRACT

There are problems with estrus detection in ewes in large-scale meat sheep farming: the manual detection method is labor-intensive, and the contact-sensor detection method causes stress reactions in ewes. To solve these problems, we propose a multi-detection-layer neural network-based method for recognizing ewe estrus crawling behavior. The approach has four main parts. Firstly, to address the mismatch between our constructed ewe estrus dataset and the YOLO v3 anchor box sizes, we obtain new anchor box sizes by clustering the dataset with the K-means++ algorithm. Secondly, to address the low recognition precision caused by the small imaging of distant ewes, we added a 104 × 104 target detection layer, bringing the total number of detection layers to four, strengthening the model's ability to learn shallow information and to detect small targets. Then, we added residual units to the residual structure of the model, so that deep feature information is not easily lost and is further fused with shallow feature information, speeding up training. Finally, we maintain the aspect ratio of the images in the data-loading module to reduce distortion of the image information and increase the model's precision. The experimental results show that our proposed model achieves 98.56% recognition precision, 98.04% recall, an F1 value of 98%, an mAP of 99.78%, and 41 FPS, with a model size of 276 MB, which meets the requirements for accurate, real-time recognition of ewe estrus behavior in large-scale meat sheep farming.
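Maintaining the image aspect ratio in the data-loading module is commonly implemented as letterbox resizing: scale the image to fit, then pad the remainder to the network's square input. A numpy-only nearest-neighbour sketch (416 is the usual YOLO v3 input size, assumed here):

```python
import numpy as np

def letterbox(image, size, pad_value=128):
    """Resize keeping aspect ratio, pad to a size x size square."""
    h, w = image.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # nearest-neighbour resample via integer index maps
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = image[ys][:, xs]
    # grey padding around the centred, undistorted image
    canvas = np.full((size, size) + image.shape[2:], pad_value,
                     dtype=image.dtype)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas
```

Bounding-box labels must be shifted and scaled by the same (scale, top, left) values.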

11.
Sensors (Basel) ; 22(22)2022 Nov 17.
Article in English | MEDLINE | ID: mdl-36433474

ABSTRACT

Road discrepancies such as potholes and road cracks are often present in our day-to-day commuting and travel. The cost of repairing damage caused by potholes has always been a concern for vehicle owners. Thus, an early detection process can contribute to the swift response of road maintenance services and the prevention of pothole-related accidents. In this paper, automatic detection of potholes is performed using the computer vision model You Only Look Once version 3, also known as Yolo v3. Light and weather during driving naturally affect our ability to observe road damage, and such adverse conditions also negatively influence the performance of visual object detectors. The aim of this work was to examine the effect adverse conditions have on pothole detection. The basic design of this study is therefore composed of two main parts: (1) dataset creation and data processing, and (2) dataset experiments using Yolo v3. Additionally, Sparse R-CNN was incorporated into our experiments. For this purpose, a dataset consisting of subsets of images recorded under different light and weather conditions was developed. To the best of our knowledge, there exists no detailed analysis of pothole detection performance under adverse conditions. Despite the existence of newer architectures, Yolo v3 is still competitive and provides good results with lower hardware requirements.


Subjects
Automobile Driving, Computers, Computer Simulation
12.
Diagnostics (Basel) ; 12(7)2022 Jul 10.
Article in English | MEDLINE | ID: mdl-35885584

ABSTRACT

Teeth detection and tooth segmentation are essential for processing Cone Beam Computed Tomography (CBCT) images. Their accuracy determines the credibility of subsequent applications, such as diagnosis, treatment planning in clinical practice, or other research that depends on automatic dental identification. The main problems are complex noise and metal artefacts, which affect the accuracy of teeth detection and segmentation with traditional algorithms. In this study, we propose a teeth-detection method that avoids these problems and accelerates operation. In our method, (1) a Convolutional Neural Network (CNN) classifies layer classes; (2) images are chosen for Region of Interest (ROI) cropping; (3) within the ROI regions, a YOLO v3-based, multi-level combined teeth-detection method locates each tooth's bounding box; (4) tooth bounding boxes are obtained on all layers. We compared our method with a Faster R-CNN method commonly used in previous studies. The training and prediction times were shortened by 80% and 62%, respectively. The Object Inclusion Ratio (OIR) of our method was 96.27%, versus 91.40% for the Faster R-CNN method. When testing images with severe noise or various missing teeth, our method delivers a stable result. In conclusion, our teeth-detection method on dental CBCT is practical and reliable given its high prediction speed and robust detection.
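The ROI cropping in steps (2)-(3) can be sketched as expanding a detected bounding box by a small margin and slicing the image array. A generic illustration, not the study's implementation:

```python
import numpy as np

def crop_roi(image, box, margin=0.1):
    """Crop (x1, y1, x2, y2) from an image, expanded by a relative margin."""
    h, w = image.shape[:2]
    x1, y1, x2, y2 = box
    mx, my = (x2 - x1) * margin, (y2 - y1) * margin
    # expand by the margin, then clip to the image bounds
    x1, y1 = max(0, int(x1 - mx)), max(0, int(y1 - my))
    x2, y2 = min(w, int(x2 + mx)), min(h, int(y2 + my))
    return image[y1:y2, x1:x2]

# example: crop a 40 x 40 detection from a 100 x 100 image with a 10% margin
roi = crop_roi(np.arange(10000).reshape(100, 100), (10, 10, 50, 50))
# roi.shape == (48, 48)
```

The margin keeps a little context around the detection, which helps the downstream per-tooth model.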

13.
Pest Manag Sci ; 78(5): 1861-1869, 2022 May.
Article in English | MEDLINE | ID: mdl-35060294

ABSTRACT

BACKGROUND: Precision weed control in vegetable fields can substantially reduce the required weed control inputs. Rapid and accurate weed detection in vegetable fields is a challenging task due to the presence of a wide variety of weed species at various growth stages and densities. This paper presents a novel deep-learning-based method for weed detection that recognizes vegetable crops and classifies all other green objects as weeds. RESULTS: The optimal confidence threshold values for YOLO-v3, CenterNet, and Faster R-CNN were 0.4, 0.6, and 0.4/0.5, respectively. These deep-learning models had average precision (AP) above 97% in the testing dataset. YOLO-v3 was the most accurate model for detection of vegetables and yielded the highest F1 score of 0.971, along with high precision and recall values of 0.971 and 0.970, respectively. The inference time of YOLO-v3 was similar to CenterNet, but significantly shorter than that of Faster R-CNN. Overall, YOLO-v3 showed the highest accuracy and computational efficiency among the deep-learning architectures evaluated in this study. CONCLUSION: These results demonstrate that deep-learning-based methods can reliably detect weeds in vegetable crops. The proposed method avoids dealing with various weed species, and thus greatly reduces the overall complexity of weed detection in vegetable fields. Findings have implications for advancing site-specific robotic weed control in vegetable crops.


Subjects
Deep Learning, Vegetables, Crops, Agricultural, Plant Weeds, Weed Control/methods
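The crop-versus-weed strategy above, combined with a confidence threshold such as the 0.4 used for YOLO-v3, amounts to a small post-processing step over the detector's output; the class names here are hypothetical:

```python
def label_green_objects(detections, crop_classes=("vegetable",), threshold=0.4):
    """Keep confident detections; anything that is not a known crop is a weed.

    detections: list of (class_name, confidence, box) tuples.
    """
    labelled = []
    for cls, conf, box in detections:
        if conf < threshold:
            continue  # below the operating confidence threshold
        labelled.append(("crop" if cls in crop_classes else "weed", conf, box))
    return labelled
```

This is what lets the method sidestep per-species weed classification entirely.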
14.
BMC Med Inform Decis Mak ; 21(1): 324, 2021 11 22.
Article in English | MEDLINE | ID: mdl-34809632

ABSTRACT

BACKGROUND: The correct identification of pills is very important to ensure the safe administration of drugs to patients. Here, we use three current mainstream object detection models, namely RetinaNet, Single Shot Multi-Box Detector (SSD), and You Only Look Once v3 (YOLO v3), to identify pills and compare their performance. METHODS: In this paper, we introduce the basic principles of the three object detection models. We trained each algorithm on a pill image dataset and analyzed the performance of the three models to determine the best pill recognition model. The models were then used to detect difficult samples, and we compared the results. RESULTS: The mean average precision (mAP) of RetinaNet reached 82.89%, but its frames per second (FPS) is only one third that of YOLO v3, which makes real-time performance difficult to achieve. SSD does not perform as well on either mAP or FPS. Although the mAP of YOLO v3 is slightly lower than the others (80.69%), it has a significant advantage in detection speed. YOLO v3 also performed better on hard-sample detection, and is therefore more suitable for deployment in hospital equipment. CONCLUSION: Our study reveals that object detection can be applied for real-time pill identification in a hospital pharmacy, and YOLO v3 exhibits an advantage in detection speed while maintaining a satisfactory mAP.


Subjects
Neural Networks, Computer, Silver Sulfadiazine, Algorithms, Humans
15.
Math Biosci Eng ; 18(4): 3491-3501, 2021 04 21.
Article in English | MEDLINE | ID: mdl-34198397

ABSTRACT

PURPOSE: To improve the accuracy of liquid level detection in left auxiliary-vein intravenous infusion and reduce patient discomfort caused by blood return during infusion, we propose a deep-learning-based model for detecting infusion liquid levels. METHOD: We implemented a Yolo v3-based detection model for infusion level images in intravenous infusion and compared it with the SURF image processing technique and the RCNN and Fast-RCNN methods. RESULTS: Our model outperforms the comparison algorithms in Intersection over Union (IoU), precision, recall, and test time. The Yolo v3-based liquid level detection model has a precision of 0.9768, a recall of 0.9688, an IoU of 0.8943, and a test time of 2.9 s. CONCLUSION: The experimental results show that the deep-learning-based liquid level detection method offers high accuracy and good real-time performance. This method can play an auxiliary role in the hospital environment and improve the work efficiency of medical workers.


Subjects
Algorithms, Neural Networks, Computer, Humans, Image Processing, Computer-Assisted, Infusions, Intravenous
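The Intersection over Union (IoU) metric reported above measures the overlap between a predicted box and a reference box:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero when disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

An IoU of 0.8943, as reported, means the predicted liquid-level box overlaps the ground truth almost completely.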
16.
Sensors (Basel) ; 21(5)2021 Feb 26.
Article in English | MEDLINE | ID: mdl-33652633

ABSTRACT

This paper proposes a new Image-to-Image Translation (Pix2Pix)-enabled deep learning method for traveling-wave-based fault location. Unlike previous methods that require a high PMU sampling frequency, the proposed method can translate the scale-1 detail component image provided by low-frequency PMU data into higher-frequency ones via Pix2Pix. This significantly improves fault location accuracy. Test results via the YOLO v3 object recognition algorithm show that the images generated by Pix2Pix can be accurately identified. This improves the estimation accuracy of the arrival time of the traveling wave head, leading to better fault location outcomes.

17.
Multimed Tools Appl ; 80(13): 19753-19768, 2021.
Article in English | MEDLINE | ID: mdl-33679209

ABSTRACT

There are many ways to prevent the spread of the COVID-19 virus, and one of the most effective is wearing a face mask. Almost everyone wears a face mask in public places during the coronavirus pandemic, which encourages us to explore face mask detection technology for monitoring people wearing masks in public. Most recent and advanced face mask detection approaches are designed using deep learning. In this article, two state-of-the-art object detection models, namely YOLOv3 and Faster R-CNN, are used to achieve this task. The authors trained both models on a dataset consisting of images of people in two categories: with and without face masks. This work proposes a technique that draws bounding boxes (red or green) around people's faces, based on whether a person is wearing a mask, and keeps a record of the ratio of people wearing face masks on a daily basis. The authors also compared the performance of the two models, i.e., their precision rates and inference times.
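The red/green box colouring and the daily mask-wearing ratio reduce to simple bookkeeping over the detector's output; the label strings below are assumptions:

```python
def mask_report(detections):
    """Colour each detected face and compute the mask-wearing ratio.

    detections: list of (label, box) with label "mask" or "no_mask".
    """
    boxes = [("green" if label == "mask" else "red", box)
             for label, box in detections]
    with_mask = sum(1 for label, _ in detections if label == "mask")
    ratio = with_mask / len(detections) if detections else 0.0
    return boxes, ratio
```

Accumulating the per-frame ratios over a day yields the daily record the article describes.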

18.
Sensors (Basel) ; 20(24)2020 Dec 18.
Article in English | MEDLINE | ID: mdl-33352867

ABSTRACT

Countries around the world have paid increasing attention to marine security, and sea target detection is a key task in ensuring marine safety. It is therefore of great significance to propose an efficient and accurate sea-surface target detection algorithm. The anchor-setting method of traditional YOLO v3 uses only the degree of overlap between the anchor and the ground-truth box as its criterion. As a result, the information in some feature maps cannot be used, and the required target detection accuracy is hard to achieve in a complex sea environment. Therefore, two new anchor-setting methods for the visual detection of sea targets are proposed in this paper: the average method and the select-all method. In addition, cross PANet, a feature fusion structure for cross-feature maps, was developed and used to obtain a better baseline, cross YOLO v3, in which the different anchor-setting methods were combined with a focal loss for experimental comparison on a sea-buoy dataset, SeaBuoys, and an existing ship dataset, SeaShips. The results showed that the proposed method can significantly improve the accuracy of YOLO v3 in detecting sea-surface targets; the highest mAP values on the two datasets were 98.37% and 90.58%, respectively.

19.
Sensors (Basel) ; 20(24)2020 Dec 21.
Article in English | MEDLINE | ID: mdl-33371291

ABSTRACT

With the recent development of artificial intelligence and information and communications infrastructure, a new paradigm of online services is emerging. Whereas in the past a service system could only exchange the service provider's information at the user's request, information can now be provided by automatically analyzing a particular need, even without a direct request. This also holds for online platforms for used-vehicle sales. In the past, consumers had to inconveniently assess and classify the quality of information from static data provided by service and information providers; as a result, this service field has harmed consumers through problems such as false sales, fraud, and exaggerated advertising. Despite the significant efforts of platform providers, human resources for censoring the vast amounts of data uploaded by sellers are limited. Therefore, in this study, an algorithm called YOLOv3+MSSIM Type 2 was developed to automatically censor used-vehicle sales data on an online platform. To this end, an artificial intelligence system that can automatically analyze objects in a vehicle video uploaded by a seller, and one that can filter vehicle-specific terms and profanity from the seller's video presentation, were also developed. In evaluation, the average execution speed of the proposed YOLOv3+MSSIM Type 2 algorithm was 78.6 ms faster than that of the pure YOLOv3 algorithm, the average frame rate was improved by 40.22 fps, and the average GPU utilization rate was improved by 23.05%, demonstrating its efficiency.
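The MSSIM part of YOLOv3+MSSIM suggests comparing consecutive frames for structural similarity so that near-identical frames need not be re-processed. A single-window (global) SSIM sketch; the published MSSIM averages SSIM over sliding windows, which this simplification omits:

```python
import numpy as np

def ssim_global(a, b, L=255.0):
    """SSIM computed over the whole greyscale frame as one window."""
    a, b = a.astype(np.float64), b.astype(np.float64)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2  # standard stabilizers
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

def should_skip(prev, cur, threshold=0.95):
    # skip re-running the detector when frames are nearly identical;
    # the 0.95 threshold is an assumption, not a value from the paper
    return ssim_global(prev, cur) >= threshold
```

Skipping structurally similar frames is one plausible route to the throughput gains the study reports.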

20.
Entropy (Basel) ; 22(9)2020 Aug 27.
Article in English | MEDLINE | ID: mdl-33286711

ABSTRACT

Visually impaired people face numerous difficulties in their daily life, and technological interventions may assist them in meeting these challenges. This paper proposes an artificial intelligence-based, fully automatic assistive technology that recognizes different objects and provides auditory feedback to the user in real time, giving the visually impaired person a better understanding of their surroundings. A deep-learning model is trained with multiple images of objects that are highly relevant to the visually impaired person. Training images are augmented and manually annotated to bring more robustness to the trained model. In addition to computer vision-based techniques for object recognition, a distance-measuring sensor is integrated to make the device more comprehensive by recognizing obstacles while the user navigates from one place to another. The auditory information conveyed after scene segmentation and obstacle identification is optimized to deliver more information in less time for faster processing of video frames. The average accuracy of the proposed method is 95.19% for object detection and 99.69% for recognition. The time complexity is low, allowing the user to perceive the surrounding scene in real time.
