Results 1 - 20 of 63
1.
Sensors (Basel); 24(7), 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38610516

ABSTRACT

In recent years, the development of intelligent sensor systems has experienced remarkable growth, particularly in the domain of microwave and millimeter wave sensing, thanks to the increased availability of affordable hardware components. Having developed a smart Ground-Based Synthetic Aperture Radar (GBSAR) system called GBSAR-Pi, we previously explored object classification applications based on raw radar data. Building upon this foundation, in this study we analyze the potential of utilizing polarization information to improve the performance of deep learning models based on raw GBSAR data. The data are obtained with a GBSAR operating at 24 GHz with both vertical (VV) and horizontal (HH) polarization, resulting in two matrices (VV and HH) per observed scene. We present several approaches for integrating such data into classification models based on a modified ResNet18 architecture, and we introduce a novel Siamese architecture tailored to accommodate the dual-input radar data. The results indicate that a simple concatenation method is the most promising approach, and they underscore the importance of considering antenna polarization and merging strategies in deep learning applications based on radar data.
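
A minimal sketch of the concatenation idea, assuming the VV and HH matrices are stacked as two input channels of a ResNet18 whose first convolution is widened accordingly; the input size and class count below are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Adapt ResNet18 to accept a 2-channel (VV, HH) radar input.
model = resnet18(num_classes=4)  # number of scene classes is hypothetical
model.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)

vv = torch.randn(1, 1, 224, 224)  # VV polarization matrix (dummy data)
hh = torch.randn(1, 1, 224, 224)  # HH polarization matrix (dummy data)
x = torch.cat([vv, hh], dim=1)    # merge polarizations as input channels
logits = model(x)
```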

2.
Sensors (Basel) ; 24(3)2024 Jan 25.
Artigo em Inglês | MEDLINE | ID: mdl-38339499

RESUMO

This paper is on the autonomous detection of humans in off-limits mountains. In off-limits mountains, a human rarely exists, thus human detection is an extremely rare event. Due to the advances in artificial intelligence, object detection-classification algorithms based on a Convolution Neural Network (CNN) can be used for this application. However, considering off-limits mountains, there should be no person in general. Thus, it is not desirable to run object detection-classification algorithms continuously, since they are computationally heavy. This paper addresses a time-efficient human detector system, based on both motion detection and object classification. The proposed scheme is to run a motion detection algorithm from time to time. In the camera image, we define a feasible human space where a human can appear. Once motion is detected inside the feasible human space, one enables the object classification, only inside the bounding box where motion is detected. Since motion detection inside the feasible human space runs much faster than an object detection-classification method, the proposed approach is suitable for real-time human detection with low computational loads. As far as we know, no paper in the literature used the feasible human space, as in our paper. The outperformance of our human detector system is verified by comparing it with other state-of-the-art object detection-classification algorithms (HOG detector, YOLOv7 and YOLOv7-tiny) under experiments. This paper demonstrates that the accuracy of the proposed human detector system is comparable to other state-of-the-art algorithms, while outperforming in computational speed. Our experiments show that in environments with no humans, the proposed human detector runs 62 times faster than YOLOv7 method, while showing comparable accuracy.
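
A rough sketch of the gating idea: cheap background subtraction restricted to a predefined feasible human space, with the heavy classifier invoked only on motion bounding boxes. The polygon, thresholds, and classifier hook are placeholder assumptions, not the paper's components.

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
feasible_mask = np.zeros((480, 640), dtype=np.uint8)
cv2.fillPoly(feasible_mask, [np.array([[0, 200], [640, 200], [640, 480], [0, 480]])], 255)

def process(frame, classify):
    fg = subtractor.apply(frame)
    fg = cv2.bitwise_and(fg, feasible_mask)          # ignore motion outside the space
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:                 # reject small noise blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        classify(frame[y:y + h, x:x + w])            # heavy model runs only here
```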


Subjects
Algorithms; Artificial Intelligence; Humans; Motion; Neural Networks, Computer
3.
J Sci Food Agric; 104(10): 6018-6034, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38483173

ABSTRACT

BACKGROUND: Accurate recognition and early warning of plant diseases and pests are prerequisites for their intelligent prevention and control. Because plants affected by different diseases and pests often exhibit similar phenotypes, and because of interference from the external environment, traditional deep learning models often face overfitting in the phenotype recognition of plant diseases and pests, which leads not only to slow network convergence but also to low recognition accuracy. RESULTS: Motivated by these problems, the present study proposes a deep learning model, EResNet-support vector machine (SVM), to alleviate overfitting in the recognition and classification of plant diseases and pests. First, the feature extraction capability of the model is improved by adding feature extraction layers to the convolutional neural network. Second, order-reduced modules are embedded and a sparsely activated function is introduced to reduce model complexity and alleviate overfitting. Finally, a classifier that fuses an SVM with fully connected layers is introduced to transform the original non-linear classification problem into a linear classification problem in high-dimensional space, further alleviating overfitting and improving recognition accuracy. Ablation experiments demonstrate that the fused structure effectively alleviates overfitting and improves recognition accuracy. Experimental results for typical plant diseases and pests show that the proposed EResNet-SVM model achieves 99.30% test accuracy for eight conditions (seven plant diseases and one normal), 5.90% higher than the original ResNet18. Compared with the classic AlexNet, GoogLeNet, Xception, SqueezeNet, and DenseNet201 models, the accuracy of the EResNet-SVM model is higher by 5.10%, 7%, 8.10%, 6.20%, and 1.90%, respectively. The test accuracy of the EResNet-SVM model for six insect pests is 100%, 3.90% higher than that of the original ResNet18 model. CONCLUSION: This research provides useful references for alleviating the overfitting problem in deep learning as well as theoretical and technical support for the intelligent detection and control of plant diseases and pests. © 2024 Society of Chemical Industry.


Subjects
Deep Learning; Neural Networks, Computer; Plant Diseases; Support Vector Machine; Plant Diseases/parasitology; Plant Diseases/prevention & control; Animals; Insects; Pest Control/methods
4.
Small; 19(39): e2301593, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37259272

ABSTRACT

Electronic skin (E-skin) with multimodal sensing ability shows great promise for object classification by intelligent robots. However, realizing object classification with E-skin faces severe challenges because multiple types of output signals are involved. Herein, a hierarchical pressure-temperature bimodal sensing E-skin based on all-resistive output signals is developed for accurate object classification; it consists of a laser-induced graphene/silicone rubber (LIG/SR) pressure-sensing layer and a NiO temperature-sensing layer. The highly conductive LIG is employed as the pressure-sensitive material as well as the interdigital electrode. Benefiting from the high conductivity of LIG, pressure perception exhibits an excellent sensitivity of -34.15 kPa⁻¹. Meanwhile, a high temperature coefficient of resistance of -3.84% °C⁻¹ is obtained in the range of 24-40 °C. More importantly, with electrical resistance as the only output signal, the bimodal sensing E-skin can simultaneously perceive pressure and temperature with negligible crosstalk. Furthermore, a smart glove based on this E-skin can classify various objects with different shapes, sizes, and surface temperatures, achieving over 92% accuracy with the assistance of deep learning. Consequently, the hierarchical pressure-temperature bimodal sensing E-skin demonstrates potential applications in human-machine interfaces, intelligent robots, and smart prosthetics.
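
A worked example of how the reported temperature coefficient of resistance (TCR) converts a resistance change into a temperature reading via dR/R0 = alpha * dT; the baseline resistance and measured value are hypothetical.

```python
# TCR model: dR/R0 = alpha * dT, with alpha = -3.84 %/degC over 24-40 degC.
alpha = -0.0384            # TCR per degC (reported value)
r0 = 1000.0                # baseline resistance at 24 degC (hypothetical, ohms)
r_measured = 961.6         # measured resistance (hypothetical, ohms)

delta_t = (r_measured - r0) / (r0 * alpha)
print(f"Estimated temperature: {24.0 + delta_t:.1f} degC")  # -> 25.0 degC
```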

5.
Sensors (Basel); 23(13), 2023 Jul 02.
Article in English | MEDLINE | ID: mdl-37447943

ABSTRACT

In this study, a comprehensive approach for sensing object stiffness through the pincer grasping of soft pneumatic grippers (SPGs) is presented. The study was inspired by the haptic sensing of human hands, which allows us to perceive object properties through grasping, a capability many researchers have tried to imitate in robotic grippers. Doing so requires determining the association between gripper performance and object reaction. However, soft pneumatic actuators (SPAs), the main components of SPGs, are extremely compliant, and this compliance makes determining the association challenging. Methodologically, the connection between the behaviors of grasped objects and those of SPAs was clarified, and a new concept of SPA modeling was introduced. A method for stiffness sensing through SPG pincer grasping was developed based on this connection, demonstrated on four samples, and validated through compression testing on the same samples. The results indicate that the proposed method yielded stiffness trends similar to those from compression testing, with slight deviations. The main limitation of this study is the occlusion effect, which leads to dramatic deviations when grasped objects deform greatly. This is the first study to enable stiffness sensing and SPG grasping in the same attempt. The work contributes to research on soft robotics by advancing the role of sensing in SPG grasping, and to object classification by offering an efficient method for acquiring another effective class of classification input. Ultimately, the proposed framework shows promise for future applications in inspecting and classifying visually indistinguishable objects.


Subjects
Hand; Robotics; Humans; Equipment Design; Pressure; Robotics/methods; Hand Strength
6.
Sensors (Basel); 23(23), 2023 Dec 03.
Article in English | MEDLINE | ID: mdl-38067967

ABSTRACT

Simultaneous localization and mapping (SLAM) technology is key to robot autonomous navigation. Most visual SLAM (VSLAM) algorithms for dynamic environments cannot achieve sufficient positioning accuracy and real-time performance simultaneously, and when the proportion of dynamic objects is too high, the VSLAM algorithm collapses. To solve these problems, this paper proposes an indoor dynamic VSLAM algorithm called YDD-SLAM based on ORB-SLAM3, which introduces the YOLOv5 object detection algorithm and integrates depth information. First, the objects detected by YOLOv5 are divided into eight subcategories according to their motion characteristics and depth values. Second, the depth ranges of dynamic objects and of potentially dynamic objects in a moving state are calculated. The depth value of each feature point in a detection box is then compared against the corresponding depth range to determine whether the point is a dynamic feature point; if so, the feature point is eliminated. Further, multiple feature point optimization strategies were developed for VSLAM in dynamic environments. A public dataset and an actual dynamic scenario were used for testing. The accuracy of the proposed algorithm was significantly improved compared with that of ORB-SLAM3. This work provides a theoretical foundation for the practical application of dynamic VSLAM algorithms.
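
A rough sketch of the depth-based culling step: feature points inside a detection box whose depth falls within the dynamic object's depth range are discarded. The helper, box format, and thresholds are illustrative assumptions, not the paper's values.

```python
import numpy as np

def keep_static_points(points_uv, depths, box, dyn_depth_range):
    x1, y1, x2, y2 = box
    lo, hi = dyn_depth_range
    keep = []
    for (u, v), d in zip(points_uv, depths):
        in_box = x1 <= u <= x2 and y1 <= v <= y2
        if in_box and lo <= d <= hi:
            continue            # dynamic feature point: eliminate
        keep.append((u, v))
    return np.array(keep)

pts = [(100, 120), (300, 240)]
depths = [1.2, 3.5]             # meters, from the depth image (dummy values)
static = keep_static_points(pts, depths, box=(80, 100, 200, 300),
                            dyn_depth_range=(1.0, 1.5))
```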

7.
Sensors (Basel); 23(21), 2023 Nov 05.
Article in English | MEDLINE | ID: mdl-37960681

ABSTRACT

Efficient recognition and classification of personal protective equipment are essential for ensuring the safety of personnel in complex industrial settings. With existing methods, manually performing macro-level classification and identification of personnel in intricate spheres is tedious, time-consuming, and inefficient. The recent availability of several artificial intelligence models presents a new paradigm for object classification and tracking in complex settings. In this study, several compact and efficient deep learning model architectures are explored, and a new efficient model is constructed by fusing the learning capabilities of the individual efficient models for better object feature learning and optimal inference. The proposed model ensures rapid identification of personnel in complex working environments so that appropriate safety measures can be taken. The new model construct follows contributory learning theory, whereby each fused model brings its learned features, which are then combined to obtain a more accurate and faster model using normalized quantization-aware learning. The major contribution of the work is the introduction of a normalized quantization-aware learning strategy to fuse the features learned by each contributing model. During the investigation, a separable-convolution-driven model was constructed as a base model, and the various efficient architectures were then combined for rapid identification and classification of the various hardhat classes used in complex industrial settings. Remarkably rapid classification and high accuracy were recorded with the resulting model.
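
A speculative sketch of the general shape of this approach: two efficient backbones are fused by concatenating and normalizing their pooled features, and PyTorch's quantization-aware training (QAT) machinery is attached. The backbone choices, normalization, head, and class count are illustrative; the paper's exact construct differs.

```python
import torch
import torch.nn as nn
import torch.ao.quantization as tq
from torchvision.models import mobilenet_v3_small, squeezenet1_0

class FusedNet(nn.Module):
    def __init__(self, num_classes=5):   # number of hardhat classes is hypothetical
        super().__init__()
        self.a = mobilenet_v3_small(weights=None).features   # 576-ch output
        self.b = squeezenet1_0(weights=None).features        # 512-ch output
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.norm = nn.LayerNorm(576 + 512)    # normalize the fused features
        self.head = nn.Linear(576 + 512, num_classes)

    def forward(self, x):
        fa = self.pool(self.a(x)).flatten(1)
        fb = self.pool(self.b(x)).flatten(1)
        return self.head(self.norm(torch.cat([fa, fb], dim=1)))

model = FusedNet().train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
tq.prepare_qat(model, inplace=True)     # insert fake-quant observers for QAT
out = model(torch.randn(2, 3, 224, 224))
```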

8.
Sensors (Basel); 23(9), 2023 May 04.
Article in English | MEDLINE | ID: mdl-37177672

ABSTRACT

An intelligent transportation system is one of the fundamental goals of the smart city concept, and the Internet of Things (IoT) is a basic instrument for digitalizing and automating its processes. Digitalization via the IoT enables the automatic collection of data usable for management of the transportation system. The IoT concept includes a system of sensors, actuators, and control units, with computation distributed among the edge, fog, and cloud layers. This study proposes a taxonomy of sensors used for monitoring tasks based on motion detection and object tracking in intelligent transportation systems. The taxonomy helps categorize sensors by working principle, installation or maintenance method, and other criteria, and this categorization enables comparison of the effectiveness of different sensor systems. Monitoring tasks in intelligent transportation systems are analyzed and categorized based on a literature review focusing on motion detection and object tracking methods. A literature survey of sensor systems used for such monitoring tasks was performed according to the sensor and monitoring task categorizations, and the results achieved in measuring, sensing, and classifying events in intelligent transportation system monitoring tasks were analyzed. The conclusions of the review were used to propose an architecture for a universal sensor system for common monitoring tasks based on motion detection and object tracking in intelligent transportation tasks. The proposed architecture was built and tested to obtain first experimental results in a case-study scenario. Finally, we propose methods that could significantly improve the results in future research.

9.
Sensors (Basel); 23(3), 2023 Jan 17.
Article in English | MEDLINE | ID: mdl-36772135

ABSTRACT

Digital holographically sensed 3D data processing, which is useful for AI-based vision, is demonstrated. Three prominent ways of learning from such datasets were utilized for the proposed multi-class classification and multi-output regression tasks on the chosen 3D objects in supervised learning: sensed holograms; computationally retrieved intensity and phase from the holograms, forming concatenated intensity-phase ("whole information") images; and phase-only images (depth information). Each dataset comprised 2268 images obtained from the eighteen chosen 3D objects. The efficacy of our approaches was validated on experimentally generated digital holographic data, then quantified and compared using specific evaluation metrics. The machine learning classifiers had better AUC values for different classes on the hologram and whole-information datasets compared with the CNN, whereas the CNN performed better on the phase-only image dataset. The MLP regressor was found to have stable predictions in the test and validation sets, with a fixed explained variance (EV) regression score of 0.00, compared with the CNN and the other regressors for the hologram and phase-only image datasets, whereas the RF regressor showed better performance in the validation set for the whole-information dataset, with a fixed EV regression score of 0.01, compared with the CNN and other regressors.
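
A minimal sketch of assembling one "whole information" sample by stacking retrieved intensity and phase as channels; the data, sizes, and per-modality normalization are assumptions.

```python
import numpy as np

# intensity and phase retrieved from a hologram (synthetic stand-ins here)
intensity = np.random.rand(64, 64).astype(np.float32)
phase = (np.random.rand(64, 64).astype(np.float32) * 2 - 1) * np.pi  # [-pi, pi]

# normalize each modality to [0, 1], then concatenate as channels -> (64, 64, 2)
whole = np.stack([intensity, (phase + np.pi) / (2 * np.pi)], axis=-1)
```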

10.
Environ Monit Assess; 195(4): 469, 2023 Mar 15.
Article in English | MEDLINE | ID: mdl-36920539

ABSTRACT

The rapid expansion of cities and continuous urban population growth underscore the need for sustainable urban development. Sustainable development addresses human needs, contributes to well-being, is economically viable, and utilizes natural resources at a rate sustainable by the surrounding environmental systems. Urban green spaces, green roofs, and solar panels are examples of environmentally sustainable urban development (ESUD): development that focuses on environmental impact but also presents the potential to achieve social and economic sustainability. The aim of this study was to map and compare amounts of ESUD circa 2010 and circa 2019 through an object-based image analysis (OBIA) approach using National Agricultural Imagery Program (NAIP) aerial orthoimagery for six mid- to large-size cities in the USA. The results indicate that a hybrid OBIA and manual interpretation approach applied to NAIP orthoimagery may allow reliable mapping and areal estimation of urban green space and green roof changes in urban areas. The reliability of OBIA-only mapping and estimation of the areal extents of existing green roofs, and of new and existing solar panels, is inconclusive due to low mapping accuracy and the coarse spatial resolution of aerial orthoimagery relative to some ESUD features. The three urban study areas in humid continental climate zones (Dfa) were estimated to have greater areal extents of new and existing urban green space and existing green roofs, but smaller areal extents of new green roofs and existing solar panels, compared with the three study areas in humid subtropical climate zones (Cfa).


Subjects
Environmental Monitoring; Urban Renewal; Humans; Cities; Reproducibility of Results; Environment; Conservation of Natural Resources
11.
Entropy (Basel); 25(4), 2023 Apr 10.
Article in English | MEDLINE | ID: mdl-37190423

ABSTRACT

The computer vision, graphics, and machine learning research communities have devoted significant attention to 3D object recognition (segmentation, detection, and classification). Deep learning approaches have lately emerged as the preferred method for 3D segmentation problems owing to their outstanding performance in 2D computer vision, and many innovative approaches have consequently been proposed and validated on multiple benchmark datasets. This study offers an in-depth assessment of the latest developments in deep learning-based 3D object recognition, discussing the most well-known 3D object recognition models along with evaluations of their distinctive qualities.

12.
Sensors (Basel); 22(9), 2022 Apr 22.
Article in English | MEDLINE | ID: mdl-35590899

ABSTRACT

Object classification and part segmentation are hot topics in computer vision, robotics, and virtual reality. With the emergence of depth cameras, point clouds have become easier to collect and increasingly important because of their simple, unified structure. Recently, a considerable number of studies have been carried out on deep learning for 3D point clouds. However, data captured directly by sensors in the real world often suffer from severely incomplete sampling. Classical networks can learn deep point-set features efficiently, but they are not robust enough when point clouds are sparse. In this work, a novel and general network was proposed whose performance does not depend on a large amount of point cloud input data. The mutual learning of neighboring points and the fusion of high- and low-level feature layers promote the integration of local features, making the network more robust. Experiments were conducted on the ScanNet and ModelNet40 datasets with 84.5% and 92.8% accuracy, respectively, showing that our model is comparable to or even better than most existing methods for classification and segmentation tasks and has good local feature integration ability. Notably, it still maintains 87.4% accuracy when the number of input points is reduced to 128. The proposed model bridges the gap between classical networks and point cloud processing.
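
A sketch of the robustness check described above: randomly subsample a cloud to smaller and smaller point budgets and re-evaluate the classifier at each budget. The cloud, budgets, and evaluation hook are placeholders.

```python
import numpy as np

def subsample(cloud, n_points, rng):
    idx = rng.choice(len(cloud), size=n_points, replace=len(cloud) < n_points)
    return cloud[idx]

rng = np.random.default_rng(0)
cloud = rng.standard_normal((1024, 3))       # dummy point cloud (N, xyz)
for budget in (1024, 512, 256, 128):         # 128 matches the paper's test
    reduced = subsample(cloud, budget, rng)
    # accuracy = evaluate(model, reduced)    # hypothetical evaluation hook
```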


Subjects
Robotics; Virtual Reality; Cloud Computing; Neural Networks, Computer
13.
Sensors (Basel); 22(19), 2022 Oct 02.
Article in English | MEDLINE | ID: mdl-36236574

ABSTRACT

Ground-object classification using high-resolution remote-sensing images is widely used in land planning, ecological monitoring, and resource protection. Traditional image segmentation technology performs poorly on complex scenes in high-resolution remote-sensing images, and in the field of deep learning, some deep neural networks are being applied to high-resolution remote-sensing image segmentation. The DeeplabV3+ network is a deep neural network based on an encoder-decoder architecture commonly used to segment images with high precision; however, its segmentation accuracy on high-resolution remote-sensing images is poor, its number of parameters is large, and its training cost is high. Therefore, this paper improves the DeeplabV3+ network: the MobileNetV2 network is used as the backbone feature-extraction network, an attention-mechanism module is added after the feature-extraction module and the ASPP module, and focal loss is introduced for class balance. This design enhances the network's ability to extract image features, reduces training costs, and achieves better semantic segmentation accuracy. Experiments on high-resolution remote-sensing image datasets show that the mIoU of the proposed method on the WHDLD dataset is 64.76%, 4.24% higher than that of the traditional DeeplabV3+ network, and its mIoU on the CCF BDCI dataset is 64.58%, 5.35% higher than that of the traditional DeeplabV3+ network, outperforming the traditional DeeplabV3+, U-Net, PSPNet, and MACU-Net networks.
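
A minimal sketch of the focal loss used for class balance; gamma and alpha below are the customary defaults from the focal loss literature, not values reported by this paper.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)                      # probability of the true class
    return (alpha * (1 - pt) ** gamma * ce).mean()

logits = torch.randn(4, 6, 256, 256)          # (batch, classes, H, W) dummy scores
targets = torch.randint(0, 6, (4, 256, 256))  # per-pixel labels
loss = focal_loss(logits, targets)
```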


Subjects
Image Processing, Computer-Assisted; Remote Sensing Technology; Attention; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Remote Sensing Technology/methods
14.
Sensors (Basel); 22(23), 2022 Nov 23.
Article in English | MEDLINE | ID: mdl-36501783

ABSTRACT

Economic and environmental sustainability is becoming increasingly important in today's world. Electronic waste (e-waste) is on the rise, and options to reuse parts should be explored. Hence, this paper presents the development of vision-based methods for the detection and classification of used electronic parts. In particular, the problem of classifying commonly used and relatively expensive electronic project parts such as capacitors, potentiometers, and voltage regulator ICs is investigated. A multiple-object workspace scenario with an overhead camera is considered. A customized object detection algorithm determines regions of interest and extracts data for classification. Three classification methods are explored: (a) shallow neural networks (SNNs), (b) support vector machines (SVMs), and (c) deep learning with convolutional neural networks (CNNs). All three methods use 30 × 30-pixel grayscale image inputs. The shallow neural networks achieved the lowest overall accuracy, 85.6%. The SVM implementation produced its best results using a cubic kernel and principal component analysis (PCA) with 20 features, achieving an overall accuracy of 95.2%. The deep learning CNN model has three convolution layers, two pooling layers, one fully connected layer, a softmax layer, and a classification layer. The convolution filter size was set to four, and adjusting the number of filters produced little variation in accuracy. An overall accuracy of 98.1% was achieved with the CNN model.
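
A sketch of a CNN matching the stated shape (three 4×4 convolution layers, two pooling layers, one fully connected layer, softmax) on 30×30 grayscale input; the filter counts, padding, and exact layer ordering are assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=4, padding=2), nn.ReLU(),    # 30x30 -> 31x31
    nn.MaxPool2d(2),                                         # -> 15x15
    nn.Conv2d(8, 16, kernel_size=4, padding=2), nn.ReLU(),   # -> 16x16
    nn.MaxPool2d(2),                                         # -> 8x8
    nn.Conv2d(16, 32, kernel_size=4, padding=2), nn.ReLU(),  # -> 9x9
    nn.Flatten(),
    nn.Linear(32 * 9 * 9, 3),   # three part classes (capacitor, pot, regulator)
    nn.Softmax(dim=1),
)
probs = model(torch.randn(1, 1, 30, 30))
```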


Subjects
Algorithms; Neural Networks, Computer; Support Vector Machine; Electronics
15.
BMC Med Imaging; 21(1): 6, 2021 Jan 06.
Article in English | MEDLINE | ID: mdl-33407213

ABSTRACT

BACKGROUND: Melanoma has become more widespread over the past 30 years, and early detection is a major factor in reducing mortality rates associated with this type of skin cancer. Therefore, an automatic, reliable system that can detect the presence of melanoma from a dermatoscopic image of lesions and/or skin pigmentation would be a very useful tool for medical diagnosis. METHODS: Among the state-of-the-art methods for automated or computer-assisted medical diagnosis, attention should be drawn to Deep Learning based on Convolutional Neural Networks, with which segmentation, classification, and detection systems for several diseases have been implemented. The method proposed in this paper involves an initial stage that automatically crops the region of interest within a dermatoscopic image using the Mask and Region-based Convolutional Neural Network technique, and a second stage, based on a ResNet152 structure, that classifies lesions as either "benign" or "malignant". RESULTS: Training, validation, and testing of the proposed model were carried out using the database associated with the challenge set out at the 2017 International Symposium on Biomedical Imaging. On the test dataset, the proposed model achieves increases in accuracy and balanced accuracy of 3.66% and 9.96%, respectively, with respect to the best accuracy and the best sensitivity/specificity ratio reported to date for melanoma detection in this challenge. Additionally, unlike previous models, the specificity and sensitivity simultaneously achieve high scores (greater than 0.8), which indicates that the model discriminates accurately between benign and malignant lesions and is not biased towards either class. CONCLUSIONS: The results achieved with the proposed model suggest a significant improvement over the state of the art as far as the performance of skin lesion classifiers (malignant/benign) is concerned.
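
For reference, the metrics discussed above can be computed as below for a binary benign/malignant task; the labels are toy values, not the ISBI 2017 challenge data.

```python
from sklearn.metrics import confusion_matrix, balanced_accuracy_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]   # 1 = malignant (toy labels)
y_pred = [0, 0, 1, 0, 1, 0, 1, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)         # true positive rate
specificity = tn / (tn + fp)         # true negative rate
bal_acc = balanced_accuracy_score(y_true, y_pred)  # mean of the two rates
```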


Subjects
Deep Learning; Dermoscopy/methods; Image Interpretation, Computer-Assisted/methods; Melanoma/diagnosis; Skin Neoplasms/diagnosis; Humans; Melanoma/classification; Melanoma/diagnostic imaging; Melanoma/pathology; ROC Curve; Skin Neoplasms/classification; Skin Neoplasms/diagnostic imaging; Skin Neoplasms/pathology
16.
Sensors (Basel); 21(23), 2021 Nov 26.
Article in English | MEDLINE | ID: mdl-34883865

ABSTRACT

Robot vision is an essential research field that enables machines to perform various tasks by classifying, detecting, and segmenting objects as humans do. The classification accuracy of machine learning algorithms already exceeds that of a well-trained human, and performance has become rather saturated. Hence, in recent years, many studies have focused on reducing model size for deployment on mobile devices. For this purpose, we propose a multipath lightweight deep network using randomly selected dilated convolutions. The proposed network consists of two sets of multipath networks (minimum 2, maximum 8), where the output feature maps of one path are concatenated with the input feature maps of the other path so that features are reusable and abundant. We also replace the 3×3 standard convolution of each path with a randomly selected dilated convolution, which has the effect of increasing the receptive field. The proposed network lowers the number of floating-point operations (FLOPs) and parameters by more than 50% and the classification error by 0.8% compared with the state-of-the-art, showing that it is efficient.
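
A minimal sketch of the replacement step: swap a 3×3 standard convolution for a 3×3 convolution with a randomly chosen dilation, where padding equal to the dilation preserves the spatial size. The candidate dilation set is an assumption.

```python
import random
import torch
import torch.nn as nn

def random_dilated_conv(in_ch, out_ch, dilations=(1, 2, 3)):
    d = random.choice(dilations)     # candidate dilation set is an assumption
    return nn.Conv2d(in_ch, out_ch, kernel_size=3, dilation=d, padding=d)

conv = random_dilated_conv(32, 64)
y = conv(torch.randn(1, 32, 56, 56))   # output stays 56x56 for any chosen d
```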


Subjects
Algorithms; Neural Networks, Computer; Humans; Machine Learning
17.
Sensors (Basel); 22(1), 2021 Dec 29.
Article in English | MEDLINE | ID: mdl-35009765

ABSTRACT

Smart textiles have found numerous applications ranging from health monitoring to smart homes. Their main allure is their flexibility, which allows sensing to be seamlessly integrated into everyday objects such as clothing. The application domain also includes robotics: smart textiles have been used to improve human-robot interaction, to solve the problem of state estimation of soft robots, and for state estimation to enable robots to learn textile manipulation. The latter application provides an alternative to computationally expensive vision-based pipelines, and we believe it is key to accelerating robotic learning of textile manipulation. Current smart textiles, however, maintain wired connections to external units, which impedes robotic manipulation, and they lack the modularity needed to facilitate state estimation of large cloths. In this work, we propose an open-source, fully wireless, highly flexible, light, and modular version of a piezoresistive smart textile. Its output stability was experimentally quantified and determined to be sufficient for classification tasks. Its functionality as a state sensor for larger cloths was also verified in a classification task in which two of the smart textiles were sewn onto a piece of clothing for which three states were defined. The modular smart textile system was able to recognize these states with average per-class F1-scores ranging from 85.7% to 94.6% with a basic linear classifier.
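
A sketch of the evaluation style described above: a basic linear classifier over textile resistance readings scored with per-class F1. The data shape (16 readings per sample) and three-state labels are synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 16))   # 16 taxel readings per sample (assumed)
y = rng.integers(0, 3, 300)          # three clothing states

clf = LogisticRegression(max_iter=1000).fit(X[:200], y[:200])
per_class_f1 = f1_score(y[200:], clf.predict(X[200:]), average=None)
```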


Subjects
Robotics; Textiles; Humans
18.
Sensors (Basel); 21(19), 2021 Sep 28.
Article in English | MEDLINE | ID: mdl-34640794

ABSTRACT

The capability to estimate the pose of known geometry from point cloud data is a frequently arising requirement in robotics and automation applications. This problem is directly addressed by Iterative Closest Point (ICP); however, that method has several limitations and lacks robustness. This paper makes the case for an alternative method that seeks the most likely solution based on available evidence. Specifically, an evidence-based metric is described that seeks the pose of the object that would maximize the conditional likelihood of reproducing the observed range measurements, and a seedless search heuristic is provided to find the most likely pose estimate in light of these measurements. The method is demonstrated for pose estimation (2D and 3D shape poses as well as joint-space searches), object identification/classification, and platform localization. Furthermore, the method is shown to be robust in cluttered or non-segmented point cloud data as well as to measurement uncertainty and extrinsic sensor calibration errors.
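
A toy sketch of the evidence-based idea, not the paper's algorithm: candidate 2D poses are scored by the Gaussian log-likelihood of reproducing observed range measurements, and the maximum-likelihood pose is kept. The landmark model, noise level, and grid search are assumptions.

```python
import numpy as np

landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
observed = np.array([2.9, 2.3, 2.1])     # measured ranges (toy values)
sigma = 0.05                             # assumed range noise (std. dev.)

def log_likelihood(pose):
    predicted = np.linalg.norm(landmarks - pose, axis=1)
    return -np.sum((observed - predicted) ** 2) / (2 * sigma**2)

xs, ys = np.meshgrid(np.linspace(0, 4, 200), np.linspace(0, 3, 200))
candidates = np.column_stack([xs.ravel(), ys.ravel()])
best = candidates[np.argmax([log_likelihood(p) for p in candidates])]
```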


Subjects
Algorithms; Imaging, Three-Dimensional; Joints
19.
Sensors (Basel); 21(4), 2021 Feb 20.
Article in English | MEDLINE | ID: mdl-33672452

ABSTRACT

This paper proposes an object classification method using a flexion glove and machine learning, with classification performed based on the information obtained from a single grasp of a target object. The flexion glove is built with five flex sensors mounted on five finger sleeves and is used to measure the flexion of individual fingers while grasping an object. Flexion signals are divided into three phases: picking, holding, and releasing. Grasping features are extracted from the holding phase to train a support vector machine. Two sets of objects were prepared for the classification test: a printed-object set, for investigating grasp patterns with specified shapes and sizes, and a daily-life object set comprising nine objects randomly chosen from daily life, for demonstrating that the proposed method can identify a wide range of objects. Classification accuracies of 95.56% and 88.89% were achieved for the printed-object and daily-life object sets, respectively. A flexion glove that can perform object classification was thus successfully developed, aimed at potential grasp-to-see applications such as visual impairment aid and recognition in dark spaces.
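
A sketch of the pipeline described above: take the holding-phase segment of the five flex signals, summarize each finger, and train an SVM. The phase boundaries, feature choice (mean flexion), and data are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def holding_features(grasp, hold_slice):
    return grasp[hold_slice].mean(axis=0)      # mean flexion per finger

rng = np.random.default_rng(0)
grasps = rng.random((90, 120, 5))              # 90 grasps, 120 samples, 5 fingers
labels = rng.integers(0, 9, 90)                # nine daily-life objects
X = np.array([holding_features(g, slice(40, 80)) for g in grasps])

svm = SVC(kernel="rbf").fit(X[:70], labels[:70])
accuracy = svm.score(X[70:], labels[70:])
```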

20.
Sensors (Basel); 21(11), 2021 Jun 02.
Article in English | MEDLINE | ID: mdl-34199676

ABSTRACT

Automotive millimeter-wave (MMW) radar is essential in autonomous vehicles due to its robustness in all weather conditions. Traditional commercial automotive radars are limited by their resolution, which makes the object classification task difficult; this motivated the concept of a new generation of four-dimensional (4D) imaging radar with high azimuth and elevation resolution that also captures Doppler information, producing a high-quality point cloud. In this paper, we propose an object classification network named Radar Transformer. The algorithm takes the attention mechanism as its core and combines vector attention with scalar attention to make full use of the spatial information, Doppler information, and reflection intensity information of the radar point cloud, realizing a deep fusion of local and global attention features. We generated an imaging radar classification dataset and completed manual annotation. The experimental results show that our proposed method achieved an overall classification accuracy of 94.9%, which is more suitable for processing radar point clouds than popular deep learning frameworks and shows promising performance.
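
A simplified contrast between the two attention forms combined above: scalar attention assigns one weight per point pair, while vector attention assigns a weight per pair and per channel. Dimensions are illustrative, and this is a bare-bones rendering rather than the Radar Transformer's actual layers.

```python
import torch
import torch.nn.functional as F

q = torch.randn(16, 64)   # queries: 16 radar points, 64-d features
k = torch.randn(16, 64)
v = torch.randn(16, 64)

# Scalar attention: one weight per point pair (standard dot-product attention).
w_scalar = F.softmax(q @ k.T / 64**0.5, dim=-1)                # (16, 16)
out_scalar = w_scalar @ v

# Vector attention: one weight per point pair AND per channel.
w_vector = F.softmax(q.unsqueeze(1) - k.unsqueeze(0), dim=1)   # (16, 16, 64)
out_vector = (w_vector * v.unsqueeze(0)).sum(dim=1)
```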
