ABSTRACT
Lower-limb exoskeletons (LLEs) can provide rehabilitation training and walking assistance for individuals with lower-limb dysfunction or those in need of functional enhancement. Adapting and personalizing LLEs is crucial for forming an intelligent human-machine system (HMS). However, many LLEs do not adequately account for individual differences during motion planning, which degrades human performance. Prioritizing the human physiological response is therefore a critical objective of trajectory optimization for the HMS. This paper proposes a human-in-the-loop (HITL) motion planning method that uses surface electromyography (sEMG) signals as biofeedback for the HITL optimization. The proposed method combines offline trajectory optimization with HITL trajectory selection. Based on the derived hybrid dynamical model of the HMS, offline trajectories are optimized using a direct collocation method, while HITL trajectory selection is based on Thompson sampling. The direct collocation method optimizes a set of gait trajectories and constructs a gait library according to the energy-optimality principle, subject to dynamics and walking constraints. Thompson sampling then selects the gait trajectory best suited to the wearer. The selected gait trajectory is implemented on the LLE under a hybrid zero dynamics control strategy. HITL optimization and control experiments verify the effectiveness and superiority of the proposed method.
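As a rough illustration of the trajectory-selection step, the sketch below runs Thompson sampling over a discrete gait library with Gaussian reward posteriors, where a negated sEMG activation level serves as the reward signal. The library size, reward model, prior, and noise precision are assumptions made for illustration; the abstract does not specify the paper's actual reward definition or posterior model.

import numpy as np

# Hypothetical sketch: Thompson sampling over a precomputed gait library,
# rewarding gaits that elicit lower (simulated) muscle activation.
rng = np.random.default_rng(0)
n_gaits = 10                      # size of the gait library (assumed)
mu = np.zeros(n_gaits)            # posterior mean reward per gait
prec = np.ones(n_gaits)           # posterior precision (1/variance), unit prior
obs_noise_prec = 4.0              # assumed observation-noise precision

def emg_reward(gait_idx: int) -> float:
    # Placeholder for one walking trial: lower sEMG effort -> higher reward.
    true_effort = 0.5 + 0.04 * abs(gait_idx - 6)      # fake ground truth
    return -true_effort + rng.normal(scale=0.05)

for trial in range(50):
    # 1) Sample a plausible reward for every gait from its Gaussian posterior.
    theta = rng.normal(mu, 1.0 / np.sqrt(prec))
    k = int(np.argmax(theta))                         # 2) pick the most promising gait
    r = emg_reward(k)                                 # 3) run a trial, measure the reward
    # 4) Conjugate Gaussian update of the chosen gait's posterior.
    prec[k] += obs_noise_prec
    mu[k] += obs_noise_prec * (r - mu[k]) / prec[k]

print("selected gait index:", int(np.argmax(mu)))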
Subjects
Electromyography, Exoskeleton Device, Gait, Lower Extremity, Walking, Humans, Electromyography/methods, Gait/physiology, Lower Extremity/physiology, Walking/physiology, Algorithms, Biofeedback, Psychology/methods, Male, Adult, Biomechanical Phenomena/physiology

ABSTRACT
In the field of welding robotics, visual sensors, which mainly consist of a camera and a laser, have proven to be promising devices because of their high precision, good stability, and high safety factor. In real welding environments, the diversity of workpieces produces many kinds of weld joints. Different weld joint types require different location algorithms and different welding parameters. Manually changing the image processing algorithm and welding parameters according to the weld joint type before each welding task is very inefficient. The efficiency and automation of the welding system would therefore improve greatly if the visual sensor could automatically identify the weld joint before welding. However, few studies address this problem, and the accuracy and applicability of existing methods are limited. This paper therefore proposes a weld joint identification method for visual sensors based on image features and a support vector machine (SVM). The deformation of the laser stripe around a weld joint is used as the recognition information. Two kinds of features are extracted as feature vectors to enrich the identification information. Based on the extracted feature vectors, the optimal SVM model for weld joint type identification is then established. The proposed and conventional identification strategies are compared via a comparison experiment and a robustness test. The experimental results show that the identification accuracy reaches 98.4%, verifying the validity and robustness of the proposed method.
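A minimal sketch of the classification stage, assuming the laser-deformation feature vectors have already been extracted: an RBF-kernel SVM tuned by a small grid search (scikit-learn). The data shapes, number of joint types, and hyperparameter grid are placeholders, not the paper's settings.

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: X holds laser-deformation feature vectors, y holds joint-type labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 12))        # 300 samples, 12-dim feature vectors (assumed)
y = rng.integers(0, 4, size=300)      # 4 weld joint types (assumed)

# RBF-kernel SVM with a small grid search over C and gamma, one common way to
# obtain an "optimal SVM model"; the paper's exact settings are not given.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = GridSearchCV(
    model,
    param_grid={"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.01, 0.1]},
    cv=5,
)
grid.fit(X, y)
print("best params:", grid.best_params_, "cv accuracy:", grid.best_score_)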
ABSTRACT
3-D lane detection is a challenging task due to the diversity of lanes, occlusion, dazzling light, and other factors. Traditional methods usually rely on highly specialized handcrafted features and carefully designed postprocessing to detect lanes. However, such methods are built on strong assumptions and a single modality, so they scale poorly and perform poorly. In this article, a multimodal fusion network (MFNet) that uses multihead nonlocal attention and a feature pyramid is proposed for 3-D lane detection. It comprises three parts: a multihead deformable transformation (MDT) module, a multidirectional attention feature pyramid fusion (MA-FPF) module, and a top-view lane prediction (TLP) module. First, MDT learns and mines multimodal features from RGB images, depth maps, and point cloud data (PCD) to achieve effective lane feature extraction. Then, MA-FPF fuses multiscale features to prevent lane features from vanishing as the network deepens. Finally, TLP estimates 3-D lanes and predicts their positions. Experimental results on the 3-D lane synthetic and ONCE-3DLanes datasets demonstrate that the proposed MFNet outperforms state-of-the-art methods in both quantitative analyses and qualitative visual comparisons.
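A minimal PyTorch sketch of the multimodal fusion idea only: RGB, depth, and point-cloud feature maps are concatenated, projected to a common width, and refined with multihead self-attention plus a residual connection. The module name, channel sizes, and attention layout are assumptions for illustration, not the actual MFNet architecture.

import torch
import torch.nn as nn

class SimpleMultimodalFusion(nn.Module):
    # Hypothetical fusion block: not the paper's MDT/MA-FPF implementation.
    def __init__(self, channels: int = 64, heads: int = 4):
        super().__init__()
        self.proj = nn.Conv2d(3 * channels, channels, kernel_size=1)  # fuse 3 modalities
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, rgb_feat, depth_feat, pcd_feat):
        x = self.proj(torch.cat([rgb_feat, depth_feat, pcd_feat], dim=1))
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)          # (B, H*W, C) token sequence
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)          # residual + layer norm
        return tokens.transpose(1, 2).reshape(b, c, h, w)

feat = lambda: torch.randn(2, 64, 32, 32)              # placeholder per-modality features
out = SimpleMultimodalFusion()(feat(), feat(), feat())
print(out.shape)                                       # torch.Size([2, 64, 32, 32])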
ABSTRACT
Surface defect detection plays an essential role in industry, but it is challenging for two reasons: 1) defect and nondefect textures are often highly similar, which leads to recognition or classification errors, and 2) many defects are tiny and therefore much harder to detect than larger ones. To address these problems, this article proposes an adaptive image segmentation network (AIS-Net) for pixelwise segmentation of surface defects. It consists of three main parts: a multishuffle-block dilated convolution (MSDC) module, a dual attention context guidance (DACG) module, and an adaptive category prediction (ACP) module. MSDC merges multiscale defect features to avoid losing tiny defect features as the model deepens, DACG captures additional contextual information from the defect feature map to locate defect regions and obtain clear segmentation boundaries, and ACP performs classification and regression to predict defect categories. Experimental results show that the proposed AIS-Net is superior to state-of-the-art approaches on four real surface defect datasets (NEU-DET: 98.38% ± 0.03%, DAGM: 99.25% ± 0.02%, Magnetic-tile: 98.73% ± 0.13%, and MVTec: 99.72% ± 0.02%).
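A minimal PyTorch sketch of the multiscale dilated-convolution idea behind MSDC: parallel 3x3 branches with different dilation rates are concatenated and fused with a residual connection so that fine, tiny-defect detail is retained across receptive-field sizes. Layer names, channel counts, and dilation rates are assumptions, not the published AIS-Net configuration.

import torch
import torch.nn as nn

class DilatedMultiScaleBlock(nn.Module):
    # Hypothetical multiscale block inspired by the MSDC description in the abstract.
    def __init__(self, in_ch: int = 32, branch_ch: int = 16, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        self.fuse = nn.Conv2d(branch_ch * len(rates), in_ch, kernel_size=1)

    def forward(self, x):
        multi = torch.cat([b(x) for b in self.branches], dim=1)   # multiscale features
        return self.fuse(multi) + x                               # residual keeps fine detail

x = torch.randn(1, 32, 128, 128)
print(DilatedMultiScaleBlock()(x).shape)                          # torch.Size([1, 32, 128, 128])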