Results 1 - 20 of 70
1.
Neurobiol Dis ; 201: 106680, 2024 Sep 24.
Article in English | MEDLINE | ID: mdl-39326464

ABSTRACT

Despite effective antiretroviral therapy, cognitive impairment remains prevalent among people with HIV (PWH), and decrements in executive function are particularly prominent. One component of executive function is cognitive flexibility, which integrates a variety of executive functions to dynamically adapt one's behavior in response to changing contextual demands. Though substantial work has illuminated HIV-related aberrations in brain function, it remains unclear how the neural oscillatory dynamics serving cognitive flexibility are affected by HIV-related alterations in neural functioning. Herein, 149 participants (PWH: 74; seronegative controls: 75) aged 29-76 years completed a perceptual feature matching task that probes cognitive flexibility during high-density magnetoencephalography (MEG). Neural responses were decomposed into the time-frequency domain, and significant oscillatory responses in the theta (4-8 Hz), alpha (10-16 Hz), and gamma (74-98 Hz) spectral windows were imaged using a beamforming approach. Whole-brain voxel-wise comparisons were then conducted on these dynamic functional maps to identify HIV-related differences in the neural oscillatory dynamics supporting cognitive flexibility. Our findings indicated group differences in alpha oscillatory activity in the cingulo-opercular cortices, and differences in gamma activity were found in the cerebellum. Across all participants, alpha and gamma activity in these regions were associated with performance on the cognitive flexibility task. Further, PWH who had been treated with antiretroviral therapy for a longer duration and those with higher current CD4 counts had alpha responses that more closely resembled those of seronegative controls, suggesting that optimal clinical management of HIV infection is associated with preserved neural dynamics supporting cognitive flexibility.

2.
Mol Cell Proteomics ; 21(1): 100169, 2022 01.
Article in English | MEDLINE | ID: mdl-34742921

ABSTRACT

Comprehensive proteome analysis of rare cell phenotypes remains a significant challenge. We report a method for low cell number MS-based proteomics using protease digestion of mildly formaldehyde-fixed cells in cellulo, which we call the "in-cell digest." We combined this with averaged MS1 precursor library matching to quantitatively characterize proteomes from low cell numbers of human lymphoblasts. About 4500 proteins were detected from 2000 cells, and 2500 proteins were quantitated from 200 lymphoblasts. The ease of sample processing and high sensitivity make this method exceptionally suited for the proteomic analysis of rare cell states, including immune cell subsets and cell cycle subphases. To demonstrate the method, we characterized the proteome changes across 16 cell cycle states (CCSs) isolated from asynchronous TK6 cells, avoiding synchronization. States included late mitotic cells present at extremely low frequency. We identified 119 pseudoperiodic proteins that vary across the cell cycle. Clustering of the pseudoperiodic proteins showed abundance patterns consistent with "waves" of protein degradation in late S, at the G2/M border, in mid-mitosis, and at mitotic exit. These clusters were distinguished by significant differences in predicted nuclear localization and interaction with the anaphase-promoting complex/cyclosome. The dataset also identifies putative anaphase-promoting complex/cyclosome substrates in mitosis and the temporal order in which they are targeted for degradation. We demonstrate that a protein signature made of these 119 high-confidence cell cycle-regulated proteins can be used to perform unbiased classification of proteomes into CCSs. We applied this signature to 296 proteomes that encompass a range of quantitation methods, cell types, and experimental conditions. The analysis confidently assigns a CCS for 49 proteomes, including correct classification for proteomes from synchronized cells.
We anticipate that this robust cell cycle protein signature will be crucial for classifying cell states in single-cell proteomes.
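As an illustration of the signature-based classification idea, the sketch below assigns a proteome to the cell cycle state whose signature centroid it correlates with best. The toy data and the nearest-centroid-by-correlation rule are assumptions for illustration, not the paper's exact classifier:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length abundance vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def classify_ccs(query, centroids):
    """Assign the cell cycle state whose signature centroid correlates
    best with the query proteome (abundances of the signature proteins)."""
    return max(centroids, key=lambda s: pearson(query, centroids[s]))

# Toy example: three states described by three signature-protein abundances
# (a real signature would span the 119 cell cycle-regulated proteins).
centroids = {"G1": [1.0, 0.2, 0.1], "S": [0.2, 1.0, 0.3], "M": [0.1, 0.3, 1.0]}
print(classify_ccs([0.9, 0.25, 0.15], centroids))  # -> G1
```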


Subject(s)
Peptide Hydrolases , Proteomics , Cell Count , Cell Cycle , Cell Cycle Proteins/metabolism , Mitosis , Proteomics/methods
3.
Sensors (Basel) ; 24(3)2024 Feb 04.
Article in English | MEDLINE | ID: mdl-38339725

ABSTRACT

Visual Simultaneous Localization and Mapping (VSLAM) estimates the robot's pose in three-dimensional space by analyzing the depth variations of inter-frame feature points. Inter-frame feature point mismatches can lead to tracking failure, impacting the accuracy of the mobile robot's self-localization and mapping. This paper proposes a method for removing mismatches of image features in dynamic scenes in visual SLAM. First, the Grid-based Motion Statistics (GMS) method was introduced for fast coarse screening of mismatched image features. Second, an Adaptive Error Threshold RANSAC (ATRANSAC) method, determined by the internal matching rate, was proposed to improve the accuracy of removing mismatched image features in dynamic and static scenes. Third, the GMS-ATRANSAC method was tested for removing mismatched image features, and experimental results showed that GMS-ATRANSAC can remove mismatches of image features on moving objects. It achieved an average error reduction of 29.4% and 32.9% compared to RANSAC and GMS-RANSAC, with a corresponding reduction in error variance of 63.9% and 58.0%, respectively. The processing time was reduced by 78.3% and 38%, respectively. Finally, the effectiveness of inter-frame feature mismatch removal in the initialization thread of ORB-SLAM2 and the tracking thread of ORB-SLAM3 was verified for the proposed algorithm.
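The adaptive-threshold idea can be sketched as follows. The specific rule tying the threshold to the internal matching (inlier) rate is a hypothetical stand-in, since the abstract does not give ATRANSAC's exact formula:

```python
def adaptive_threshold(residuals, base=3.0, min_t=1.0, max_t=6.0):
    """Choose a RANSAC inlier threshold from the internal matching rate:
    the denser the consensus at the base threshold, the tighter the
    threshold for the next round (hypothetical rule for illustration)."""
    inlier_rate = sum(r < base for r in residuals) / len(residuals)
    t = base * (1.5 - inlier_rate)  # high agreement -> tighten, low -> loosen
    return max(min_t, min(max_t, t))

print(adaptive_threshold([0.5, 0.8, 1.2, 4.0]))  # mostly inliers -> 2.25
print(adaptive_threshold([4.0, 5.0, 6.0, 0.5]))  # mostly outliers -> 3.75
```

A fixed threshold tends to over-reject in clean static scenes and under-reject in dynamic ones; adapting it to the observed consensus addresses both.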

4.
Sensors (Basel) ; 24(6)2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38544070

ABSTRACT

To address the issues of low measurement accuracy and unstable results when using binocular cameras to detect objects with sparse surface textures, weak surface textures, occluded surfaces, low-contrast surfaces, and surfaces with intense lighting variations, a three-dimensional measurement method based on an improved feature matching algorithm is proposed. Initially, features are extracted from the left and right images obtained by the binocular camera. The extracted feature points serve as seed points, and a one-dimensional search space is established accurately based on the disparity continuity and epipolar constraints. The optimal search range and seed point quantity are obtained using the particle swarm optimization algorithm. The zero-mean normalized cross-correlation coefficient is employed as a similarity measure function for region growing. Subsequently, the left and right images are matched based on the grayscale information of the feature regions, and seed point matching is performed within each matching region. Finally, the obtained matching pairs are used to calculate the three-dimensional information of the target object using the triangulation formula. The proposed algorithm significantly enhances matching accuracy while reducing algorithm complexity. Experimental results on the Middlebury dataset show an average relative error of 0.75% and an average measurement time of 0.82 s. The error matching rate of the proposed image matching algorithm is 2.02%, and the PSNR is 34 dB. The algorithm improves the measurement accuracy for objects with sparse or weak textures, demonstrating robustness against brightness variations and noise interference.
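The zero-mean normalized cross-correlation (ZNCC) similarity measure used for region growing can be sketched in a few lines. The patches here are flattened toy intensity lists; a real implementation would operate on image windows:

```python
from math import sqrt

def zncc(p, q):
    """Zero-mean normalized cross-correlation between two equal-size
    patches (flattened to intensity lists); values near 1.0 mean the
    patches are identical up to an affine brightness change."""
    n = len(p)
    mp, mq = sum(p) / n, sum(q) / n
    num = sum((a - mp) * (b - mq) for a, b in zip(p, q))
    den = sqrt(sum((a - mp) ** 2 for a in p)) * sqrt(sum((b - mq) ** 2 for b in q))
    return num / den if den else 0.0

patch = [10, 20, 30, 40]
brighter = [60, 80, 100, 120]  # same pattern under different gain and offset
print(zncc(patch, brighter))   # ~1.0: invariant to the brightness change
```

This invariance to gain and offset is exactly why ZNCC is a sensible choice under the intense lighting variations the abstract mentions.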

5.
Sensors (Basel) ; 24(3)2024 Jan 28.
Article in English | MEDLINE | ID: mdl-38339570

ABSTRACT

The goal of visual place recognition (VPR) is to determine the location of a query image by identifying its place in a collection of image databases. Visual sensor technologies are crucial for visual place recognition as they allow for precise identification and location of query images within a database. Global descriptor-based VPR methods face the challenge of accurately capturing the local specific regions within a scene, which increases the probability of confusion during localization in such scenarios. To tackle feature extraction and feature matching challenges in VPR, we propose a modified Patch-NetVLAD strategy that includes two new modules: a context-aware patch descriptor and a context-aware patch matching mechanism. First, we propose a context-driven patch feature descriptor to overcome the limitations of global and local descriptors in visual place recognition. This descriptor aggregates features from each patch's surrounding neighborhood. Second, we introduce a context-driven feature matching mechanism that utilizes cluster and saliency context-driven weighting rules to assign higher weights to patches that are less similar to densely populated or locally similar regions for improved localization performance. We further incorporate both of these modules into the Patch-NetVLAD framework, resulting in a new approach called contextual Patch-NetVLAD. Experimental results show that our proposed approach outperforms other state-of-the-art methods, achieving a Recall@10 score of 99.82 on Pittsburgh30k, 99.82 on FMDataset, and 97.68 on our benchmark dataset.

6.
Sensors (Basel) ; 24(12)2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38931562

ABSTRACT

Efficient image stitching plays a vital role in the Non-Destructive Evaluation (NDE) of infrastructures. An essential challenge in the NDE of infrastructures is precisely visualizing defects within large structures. The existing literature predominantly relies on high-resolution close-distance images to detect surface or subsurface defects. While the automatic detection of all defect types represents a significant advancement, understanding the location and continuity of defects is imperative. It is worth noting that some defects may be too small to capture from a considerable distance. Consequently, multiple image sequences are captured and processed using image stitching techniques. Additionally, visible and infrared data fusion strategies prove essential for acquiring comprehensive information to detect defects across vast structures. Hence, there is a need for an effective image stitching method appropriate for infrared and visible images of structures and industrial assets, facilitating enhanced visualization and automated inspection for structural maintenance. This paper proposes an advanced image stitching method appropriate for dual-sensor inspections. The proposed image stitching technique employs self-supervised feature detection to enhance the quality and quantity of feature detection. Subsequently, a graph neural network is employed for robust feature matching. Ultimately, the proposed method results in image stitching that effectively eliminates perspective distortion in both infrared and visible images, a prerequisite for subsequent multi-modal fusion strategies. Our results substantially enhance the visualization capabilities for infrastructure inspection. Comparative analysis with popular state-of-the-art methods confirms the effectiveness of the proposed approach.

7.
Sensors (Basel) ; 23(15)2023 Aug 03.
Article in English | MEDLINE | ID: mdl-37571712

ABSTRACT

Greenhouse ventilation has always been an important concern for agricultural workers. This paper introduces a low-cost wind speed estimation method based on SURF (Speeded-Up Robust Features) feature matching and the schlieren technique, aimed at airflow mixing with large temperature and density differences, such as the conditions at a greenhouse vent. The fluid motion is described directly by pixel displacement through fluid kinematics analysis. Combining image morphology analysis with the SURF feature matching algorithm, feature points in schlieren images of adjacent frames are matched to estimate the velocity from pixel changes. Experiments show that this method is suitable for speed estimation in turbulent or disturbed fluid images. When the supply air speed remains constant, the method obtains 760 sets of effective feature matching point groups from 150 frames of video, and approximately 500 of these are within 0.1 of the theoretical dimensionless speed. Under supply conditions with high-frequency wind speed changes, and compared with the digital fan-speed signal and data from wind speed sensors, the estimated trend of wind speed changes basically matches the actual changes. The estimation error of wind speed is basically within 10%, except when the wind speed supply suddenly stops or the wind speed is 0 m/s. This method can estimate the wind speed of air mixing at different densities, but further research is still needed in terms of statistical methods and experimental equipment.
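The core conversion from matched-feature pixel displacement to flow speed can be sketched as follows; the spatial scale and frame rate are made-up illustrative values, not the paper's calibration:

```python
def speed_from_pixels(dx_pixels, metres_per_pixel, frame_rate_hz):
    """Convert an inter-frame feature displacement (pixels) into a flow
    speed (m/s): displacement * spatial scale * frames per second."""
    return dx_pixels * metres_per_pixel * frame_rate_hz

# 8 px shift between adjacent frames, 0.5 mm/px optics, 30 fps video.
print(speed_from_pixels(8, 0.0005, 30))  # about 0.12 m/s
```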

8.
Sensors (Basel) ; 23(15)2023 Aug 04.
Article in English | MEDLINE | ID: mdl-37571724

ABSTRACT

Visual positioning is a basic component of UAV operation. Structure-based methods, widely applied in the literature, rely on local feature matching between a query image that needs to be localized and a reference image with a known pose and feature points. However, existing methods still struggle with illumination and seasonal changes. In outdoor regions, the feature points and descriptors are similar, and the number of mismatches increases rapidly, making visual positioning unreliable. Moreover, as the database grows, image retrieval and feature matching become time-consuming. Therefore, in this paper, we propose a novel hierarchical visual positioning method, which includes map construction, landmark matching, and pose calculation. First, we combine brain-inspired mechanisms and landmarks to construct a cognitive map, which makes image retrieval efficient. Second, a graph neural network is utilized to learn the inner relations of the feature points. To improve matching accuracy, the network uses semantic confidence in matching score calculations. Besides, the system can eliminate mismatches by analyzing all the matching results in the same landmark. Finally, we calculate the pose using a PnP solver. Furthermore, we evaluate both the matching algorithm and the visual positioning method experimentally on simulation datasets, where the matching algorithm performs better in some scenes. The results demonstrate that the retrieval time can be shortened by about two-thirds, with an average positioning error of 10.8 m.

9.
Sensors (Basel) ; 23(19)2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37836972

ABSTRACT

This paper designs a fast image-based indoor localization method based on an anchor control network (FILNet) to improve localization accuracy and shorten the duration of feature matching. Two stages are developed for the proposed algorithm. The offline stage constructs an anchor feature fingerprint database based on the concept of an anchor control network: detailed surveys are introduced to infer anchor features from the information of control anchors using visual-inertial odometry (VIO) based on Google ARCore. In addition, an affine invariance enhancement algorithm based on feature multi-angle screening and supplementation is developed to solve the image perspective transformation problem and complete the feature fingerprint database construction. In the online stage, a fast spatial indexing approach is adopted to improve the feature matching speed by searching for active anchors and matching only anchor features around the active anchors. Further, to improve the correct matching rate, a homography matrix filter model is used to verify the correctness of feature matching, and the correct matching points are selected. Extensive experiments in real-world scenarios are performed to evaluate the proposed FILNet. The experimental results show that, in terms of affine invariance, compared with the initial local features, FILNet significantly improves the recall of feature matching from 26% to 57% when the angular deviation is less than 60 degrees. In the image feature matching stage, compared with the initial K-D tree algorithm, FILNet significantly improves the efficiency of feature matching, and the average time on the test image dataset is reduced from 30.3 ms to 12.7 ms. In terms of localization accuracy, compared with the benchmark method based on image localization, FILNet significantly improves the localization accuracy, and the percentage of images with a localization error of less than 0.1 m increases from 31.61% to 55.89%.
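A grid-based spatial index of the kind the online stage describes can be sketched like this; the cell layout and 8-neighbour lookup rule are assumptions for illustration, not FILNet's exact index:

```python
from collections import defaultdict

class AnchorGrid:
    """Toy spatial index: bucket anchors into square grid cells so that
    matching only considers features near the active anchor."""

    def __init__(self, cell_size):
        self.cell = cell_size
        self.buckets = defaultdict(list)

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def add(self, anchor_id, x, y):
        self.buckets[self._key(x, y)].append(anchor_id)

    def near(self, x, y):
        """Anchors in the query's cell and its 8 neighbouring cells."""
        cx, cy = self._key(x, y)
        out = []
        for i in (-1, 0, 1):
            for j in (-1, 0, 1):
                out.extend(self.buckets.get((cx + i, cy + j), ()))
        return out

grid = AnchorGrid(cell_size=10.0)
grid.add("A1", 3.0, 4.0)
grid.add("A2", 12.0, 4.0)
grid.add("A3", 55.0, 60.0)
print(grid.near(9.0, 5.0))  # -> ['A1', 'A2'] (A3 is far away and skipped)
```

Restricting candidate matches to nearby buckets is what turns a full database scan into a constant-time neighbourhood lookup.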

10.
Sensors (Basel) ; 23(18)2023 Sep 21.
Article in English | MEDLINE | ID: mdl-37766058

ABSTRACT

Today, hyperspectral imaging plays an integral part in the remote sensing and precision agriculture field. Identifying the matching key points between hyperspectral images is an important step in tasks such as image registration, localization, object recognition, and object tracking. Low-pixel resolution hyperspectral imaging is a recent introduction to the field, bringing benefits such as lower cost and form factor compared to traditional systems. However, the use of limited pixel resolution challenges even state-of-the-art feature detection and matching methods, leading to difficulties in generating robust feature matches for images with repeated textures, low textures, low sharpness, and low contrast. Moreover, the use of narrower optics in these cameras adds to the challenges during the feature-matching stage, particularly for images captured during low-altitude flight missions. In order to enhance the robustness of feature detection and matching in low pixel resolution images, in this study we propose a novel approach utilizing 3D Convolution-based Siamese networks. Compared to state-of-the-art methods, this approach takes advantage of all the spectral information available in hyperspectral imaging in order to filter out incorrect matches and produce a robust set of matches. The proposed method initially generates feature matches through a combination of Phase Stretch Transformation-based edge detection and SIFT features. Subsequently, a 3D Convolution-based Siamese network is utilized to filter out inaccurate matches, producing a highly accurate set of feature matches. Evaluation of the proposed method demonstrates its superiority over state-of-the-art approaches in cases where they fail to produce feature matches. Additionally, it competes effectively with the other evaluated methods when generating feature matches in low-pixel resolution hyperspectral images. 
This research contributes to the advancement of low pixel resolution hyperspectral imaging techniques, and we believe it can specifically aid in mosaic generation of low pixel resolution hyperspectral images.

11.
Sensors (Basel) ; 22(24)2022 Dec 15.
Article in English | MEDLINE | ID: mdl-36560267

ABSTRACT

Local feature matching is a part of many large vision tasks and usually consists of three parts: feature detection, description, and matching. The matching task usually serves a downstream task, such as camera pose estimation, so geometric information is crucial for matching. We propose the geometric feature embedding matching method (GFM) for local feature matching. We propose an adaptive keypoint geometric embedding module that dynamically adjusts keypoint position information, and an orientation geometric embedding module that explicitly models geometric information about rotation. Subsequently, we interleave the use of self-attention and cross-attention for local feature enhancement. The correspondence score matrix is computed from the product of the local features, and correspondences are solved by computing a dual-softmax. An intuitive human extraction and matching scheme is also implemented. To verify the effectiveness of our proposed method, we performed validation on three datasets (MegaDepth, HPatches, Aachen Day-Night v1.1) according to their respective metrics, and the results showed that our method achieved satisfactory results in all scenes.
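The dual-softmax step can be sketched directly: softmax a score matrix across rows and across columns, then multiply, so a pair gets high probability only when it wins in both directions (a minimal list-based sketch; the matrix values are invented):

```python
from math import exp

def dual_softmax(scores):
    """Softmax a score matrix across rows and across columns, then take
    the elementwise product: a pair scores highly only when it is the
    best match in both matching directions."""
    e = [[exp(v) for v in row] for row in scores]
    nrow, ncol = len(e), len(e[0])
    row_sm = [[e[i][j] / sum(e[i]) for j in range(ncol)] for i in range(nrow)]
    col_sum = [sum(e[i][j] for i in range(nrow)) for j in range(ncol)]
    col_sm = [[e[i][j] / col_sum[j] for j in range(ncol)] for i in range(nrow)]
    return [[row_sm[i][j] * col_sm[i][j] for j in range(ncol)] for i in range(nrow)]

# Query feature 0 and reference feature 1 match strongly in both directions.
P = dual_softmax([[0.1, 5.0], [0.2, 0.3]])
print(max(max(row) for row in P))  # the (0, 1) entry dominates
```

The mutual requirement is what suppresses one-sided matches that a single softmax would still reward.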


Subject(s)
Algorithms , Humans
12.
Sensors (Basel) ; 22(13)2022 Jun 24.
Article in English | MEDLINE | ID: mdl-35808287

ABSTRACT

Image registration based on features is a commonly used approach due to its robustness to complex geometric deformation and large gray-level differences. However, in practical application, due to the effect of various noises, occlusions, shadows, gray differences, and even changes of image contents, the corresponding feature point set may be contaminated, which may degrade the accuracy of the transformation model estimated by Random Sample Consensus (RANSAC). In this work, we proposed a semi-automated method to create the image registration training data, which greatly reduced the workload of labeling and made it possible to train a deep neural network. In addition, for the model estimation based on RANSAC, we formulated the process from a probabilistic perspective and presented a formulation of RANSAC with learned guidance of hypothesis sampling. At the same time, a deep convolutional neural network, ProbNet, was built to generate a sampling probability for each corresponding feature point, which was then used to guide the sampling of the minimal set of RANSAC to acquire a more accurate estimation model. To illustrate the effectiveness and advantages of the proposed method, qualitative and quantitative experiments were conducted. In the qualitative experiment, the effectiveness of the proposed method was illustrated by a checkerboard visualization of image pairs before and after being registered by the proposed method. In the quantitative experiment, three other representative and popular methods (vanilla RANSAC, LMedS-RANSAC, and PROSAC-RANSAC) were compared, and seven different measures were introduced to comprehensively evaluate the performance of the proposed method. The quantitative experimental results showed that the proposed method had better performance than the other methods.
Furthermore, with the integration of the model estimation of the image registration into the deep-learning framework, it was possible to jointly optimize all the processes of image registration via end-to-end learning to further improve the accuracy of image registration.
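Guided hypothesis sampling can be sketched as weighted draws of a minimal set, where the weights stand in for the per-correspondence probabilities a network such as ProbNet would output (the data and weights here are invented):

```python
import random

def guided_minimal_sample(pairs, probs, k, rng=None):
    """Draw a minimal set of k distinct correspondences for one RANSAC
    hypothesis, biased towards pairs scored as likely inliers."""
    rng = rng or random.Random(0)
    chosen = set()
    while len(chosen) < k:
        # Weighted draw; the set discards duplicates until k are distinct.
        chosen.add(rng.choices(range(len(pairs)), weights=probs, k=1)[0])
    return [pairs[i] for i in sorted(chosen)]

pairs = ["p0", "p1", "p2", "p3"]
probs = [0.9, 0.05, 0.9, 0.05]  # the network trusts p0 and p2
sample = guided_minimal_sample(pairs, probs, k=2)
print(sample)  # two distinct pairs, with high-probability ones favoured
```

Biasing the minimal-set draw this way raises the chance that an all-inlier hypothesis appears early, which is where guided RANSAC gains its accuracy and speed.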


Subject(s)
Algorithms , Remote Sensing Technology , Consensus , Neural Networks, Computer , Probability
13.
Sensors (Basel) ; 23(1)2022 Dec 28.
Article in English | MEDLINE | ID: mdl-36616922

ABSTRACT

Random Sample Consensus, most commonly abbreviated as RANSAC, is a robust estimation method for the parameters of a model contaminated by a sizable percentage of outliers. In its simplest form, the process starts with a sampling of the minimum data needed to perform an estimation, followed by an evaluation of its adequacy, and further repetitions of this process until some stopping criterion is met. Multiple variants have been proposed in which this workflow is modified, typically tweaking one or several of these steps for improvements in computing time or the quality of the estimation of the parameters. RANSAC is widely applied in the field of robotics, for example, for finding geometric shapes (planes, cylinders, spheres, etc.) in point clouds or for estimating the best transformation between different camera views. In this paper, we present a review of the current state of the art of RANSAC family methods with a special interest in applications in robotics.
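The workflow described above (sample a minimal set, evaluate the hypothesis, repeat) can be sketched with the simplest useful model, a 2-D line, where the minimal set is two points:

```python
import random

def ransac_line(points, iters=200, thresh=0.5, rng=None):
    """Canonical RANSAC: sample the minimal set (2 points define a line),
    score the hypothesis by the size of its consensus set, keep the best."""
    rng = rng or random.Random(1)
    model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate minimal sample, skip
        a = (y2 - y1) / (x2 - x1)  # slope
        b = y1 - a * x1            # intercept
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < thresh]
        if len(inliers) > len(best_inliers):
            model, best_inliers = (a, b), inliers
    return model, best_inliers

# Ten points on y = 2x + 1 plus two gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -9)]
model, inliers = ransac_line(pts)
print(model, len(inliers))  # -> (2.0, 1.0) 10
```

A fixed iteration count is used here for brevity; the stopping criterion mentioned above is usually adaptive, derived from the best inlier ratio seen so far.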


Subject(s)
Algorithms , Robotics , Research Design
14.
Sensors (Basel) ; 22(14)2022 Jul 17.
Article in English | MEDLINE | ID: mdl-35891019

ABSTRACT

Multi-target tracking (MTT) is one of the most important functions of radar systems. Traditional multi-target tracking methods based on data association convert multi-target tracking problems into single-target tracking problems. When the number of targets is large, the amount of computation increases exponentially. The Gaussian mixture probability hypothesis density (GM-PHD) filtering based on a random finite set (RFS) provides an effective method to solve multi-target tracking problems without the requirement of explicit data association. However, it is difficult to track targets accurately in real-time with dense clutter and low detection probability. To solve this problem, this paper proposes a multi-feature matching GM-PHD (MFGM-PHD) filter for radar multi-target tracking. Using Doppler and amplitude information contained in radar echo to modify the weights of Gaussian components, the weight of the clutter can be greatly reduced and the target can be distinguished from clutter. Simulations show that the proposed MFGM-PHD filter can improve the accuracy of multi-target tracking as well as the real-time performance with high clutter density and low detection probability.
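The weight-modification idea can be sketched with toy Gaussian likelihoods: a component's weight is scaled by how well the echo's Doppler and amplitude match the predicted target. The likelihood models and all parameters below are illustrative assumptions, not the paper's exact update:

```python
from math import exp

def gaussian_like(x, mean, sigma):
    """Unnormalized Gaussian likelihood of a measured echo feature."""
    return exp(-0.5 * ((x - mean) / sigma) ** 2)

def reweight(weight, doppler, amplitude, target_doppler, target_amp,
             sig_d=2.0, sig_a=1.0):
    """Scale a Gaussian component's weight by the Doppler and amplitude
    likelihoods, so clutter-driven components are suppressed."""
    return weight * gaussian_like(doppler, target_doppler, sig_d) \
                  * gaussian_like(amplitude, target_amp, sig_a)

w_target = reweight(0.5, doppler=10.1, amplitude=5.0,
                    target_doppler=10.0, target_amp=5.2)
w_clutter = reweight(0.5, doppler=1.0, amplitude=0.5,
                     target_doppler=10.0, target_amp=5.2)
print(w_target > w_clutter)  # -> True: the clutter weight collapses
```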


Subject(s)
Radar , Normal Distribution
15.
Sensors (Basel) ; 22(20)2022 Oct 12.
Article in English | MEDLINE | ID: mdl-36298100

ABSTRACT

The affine scale-invariant feature transform (ASIFT) algorithm is a feature extraction algorithm with affine and scale invariance, which is suitable for image feature matching using unmanned aerial vehicles (UAVs). However, there are many problems in the matching process, such as low efficiency and mismatching. In order to improve the matching efficiency, the proposed algorithm first simulates image distortion based on the position and orientation system (POS) information from real-time UAV measurements to reduce the number of simulated images. Then, the scale-invariant feature transform (SIFT) algorithm is used for feature point detection, and the extracted feature points are combined with the binary robust invariant scalable keypoints (BRISK) descriptor to generate a binary feature descriptor, which is matched using the Hamming distance. Finally, in order to improve the matching accuracy of the UAV images, a false-match elimination algorithm based on random sample consensus (RANSAC) is proposed. Through four groups of experiments, the proposed algorithm is compared with SIFT and ASIFT. The results show that the algorithm can optimize the matching effect and improve the matching speed.
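Hamming-distance matching of binary descriptors can be sketched as follows, with 8-bit toy descriptors standing in for real 512-bit BRISK descriptors:

```python
def hamming(d1, d2):
    """Hamming distance between two binary descriptors packed as ints:
    XOR the bit strings and count the set bits."""
    return bin(d1 ^ d2).count("1")

def match_hamming(query, refs, max_dist=20):
    """Nearest reference descriptor per query descriptor, kept only when
    the distance is below a threshold."""
    out = []
    for qi, q in enumerate(query):
        ri = min(range(len(refs)), key=lambda i: hamming(q, refs[i]))
        if hamming(q, refs[ri]) <= max_dist:
            out.append((qi, ri))
    return out

query = [0b10110010, 0b00001111]
refs = [0b10110011, 0b11110000]
print(match_hamming(query, refs, max_dist=2))  # -> [(0, 0)]
```

Because Hamming distance is a single XOR plus a popcount, binary descriptors like BRISK match far faster than the floating-point Euclidean comparisons SIFT descriptors require.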

16.
Sensors (Basel) ; 22(20)2022 Oct 11.
Article in English | MEDLINE | ID: mdl-36298069

ABSTRACT

Feature matching for 3D point clouds is a fundamental yet challenging problem in remote sensing and 3D computer vision. However, due to a number of nuisances, the initial feature correspondences generated by matching local keypoint descriptors may contain many outliers (incorrect correspondences). To remove outliers, this paper presents a robust method called progressive consistency voting (PCV). PCV aims at assigning a reliable confidence score to each correspondence such that reasonable correspondences can be achieved by simply finding top-scored ones. To compute the confidence score, we suggest fully utilizing the geometric consistency cue between correspondences and propose a voting-based scheme. In addition, we progressively mine convincing voters from the initial correspondence set and optimize the scoring result by considering top-scored correspondences at the last iteration. Experiments on several standard datasets verify that PCV outperforms five state-of-the-art methods under almost all tested conditions and is robust to noise, data decimation, clutter, occlusion, and data modality change. We also apply PCV to point cloud registration and show that it can significantly improve the registration performance.
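The geometric consistency cue can be sketched for the rigid 2-D case: a correct correspondence preserves pairwise distances between the two point sets, so each correspondence is scored by votes from the others. This is a simplified, non-progressive version of the voting idea, not PCV itself:

```python
from math import dist

def consistency_score(corrs, tol=0.1):
    """Score each correspondence (p, q) by how many other correspondences
    preserve the pairwise distance between the two point clouds; a rigid
    motion keeps all such distances, so outliers collect few votes."""
    scores = []
    for i, (p_i, q_i) in enumerate(corrs):
        votes = 0
        for j, (p_j, q_j) in enumerate(corrs):
            if i != j and abs(dist(p_i, p_j) - dist(q_i, q_j)) < tol:
                votes += 1
        scores.append(votes)
    return scores

# Three consistent correspondences (a pure translation) plus one outlier.
corrs = [((0, 0), (5, 5)), ((1, 0), (6, 5)), ((0, 2), (5, 7)), ((3, 3), (9, 1))]
print(consistency_score(corrs))  # -> [2, 2, 2, 0]: the outlier gets no votes
```

Keeping the top-scored correspondences then yields a clean set, which is the core of the confidence-scoring step described above.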


Subject(s)
Algorithms , Imaging, Three-Dimensional , Imaging, Three-Dimensional/methods
17.
Entropy (Basel) ; 24(9)2022 Sep 09.
Article in English | MEDLINE | ID: mdl-36141153

ABSTRACT

Image stitching refers to combining two or more images with overlapping areas through feature point matching to generate a panoramic image, which plays an important role in geological survey, military reconnaissance, and other fields. Existing image stitching techniques mostly assume images with good lighting conditions, but the lack of feature points in weakly lit scenes, such as morning or night, degrades the stitching result, making it difficult to meet the needs of practical applications. When a nighttime image contains concentrated bright areas such as lights alongside large dark areas, image details are further lost and feature point matching becomes unreliable: the resulting perspective transformation matrix cannot reflect the mapping relationship of the entire image, leading to poor stitching. Therefore, an adaptive image enhancement algorithm based on guided filtering is proposed to preprocess the nighttime image, and the enhanced image is used for feature registration. The experimental results show that preprocessing the nighttime image with the proposed enhancement algorithm yields better detail and color restoration and greatly improves image quality. Feature registration on the enhanced image produces more matched feature pairs, achieving high-accuracy image stitching.

18.
Pacing Clin Electrophysiol ; 44(4): 633-640, 2021 04.
Article in English | MEDLINE | ID: mdl-33687744

ABSTRACT

AIMS: Identifying the manufacturer and the type of cardiac implantable electronic devices (CIEDs) is important in emergent clinical settings. Recent studies have illustrated that artificial neural network models can successfully recognize CIEDs from chest X-ray images. However, all existing methods require a vast amount of medical data to train the classification model. Here, we propose a novel method to retrieve an identical CIED image from an image database by employing a feature point matching algorithm. METHODS AND RESULTS: A total of 653 unique X-ray images from 456 patients who visited our pacemaker clinic between April 2012 and August 2020 were collected. The device images were manually cropped to squares and thereafter resized to 224 × 224 pixels. A scale-invariant feature transform (SIFT) algorithm was used to extract the keypoints from the query image and reference images. Paired feature points were selected via brute-force matching, and the average Euclidean distance was calculated. The image with the shortest average distance was defined as the most similar image. The classification performance was indicated by accuracy, precision, recall, and F1-score for detecting the manufacturers and model groups, respectively. The average accuracy, precision, recall, and F1-score for the manufacturer classification were 97.0%, 0.97, 0.96, and 0.96, respectively. For the model classification task, the average accuracy, precision, recall, and F1-score were 93.2%, 0.94, 0.92, and 0.93, respectively, all of which were higher than those of the previously reported machine learning models. CONCLUSION: Feature point matching is useful for identifying CIEDs from X-ray images.
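The retrieval rule described above (average nearest-descriptor Euclidean distance, shortest average wins) can be sketched as follows, with toy 2-D descriptors standing in for 128-D SIFT vectors:

```python
from math import dist

def avg_nn_distance(query_desc, ref_desc):
    """Brute-force matching: for each query keypoint descriptor take its
    nearest reference descriptor, then average those distances."""
    return sum(min(dist(q, r) for r in ref_desc) for q in query_desc) / len(query_desc)

def retrieve(query_desc, database):
    """Return the reference image whose descriptors give the shortest
    average matched distance to the query image."""
    return min(database, key=lambda name: avg_nn_distance(query_desc, database[name]))

# Toy database of two device images with 2-D "descriptors".
db = {"device_A": [(0, 0), (1, 1)], "device_B": [(9, 9), (8, 8)]}
print(retrieve([(0.1, 0.1), (1.2, 0.9)], db))  # -> device_A
```

Because retrieval needs no training step, new device models can be supported simply by adding their reference images to the database.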


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Pacemaker, Artificial , Radiography, Thoracic , Humans , X-Rays
19.
Sensors (Basel) ; 21(13)2021 Jun 30.
Article in English | MEDLINE | ID: mdl-34209396

ABSTRACT

Loop Closure Detection (LCD) is an important technique to improve the accuracy of Simultaneous Localization and Mapping (SLAM). In this paper, we propose an LCD algorithm based on binary classification for feature matching between similar images with deep learning, which greatly improves the accuracy of the LCD algorithm. Meanwhile, a novel lightweight convolutional neural network (CNN) is proposed and applied to the target detection task on key frames. On this basis, the key frames are binary-classified according to their labels. Finally, similar frames are input into the improved lightweight feature matching network based on Transformer to judge whether the current position is a loop closure. The experimental results show that, compared with the traditional method, LFM-LCD has higher accuracy and recall in the LCD task of indoor SLAM while keeping the number of parameters and amount of computation low. The research in this paper provides a new direction for LCD in robotic SLAM, which will be further improved with the development of deep learning.


Subject(s)
Algorithms , Robotics , Neural Networks, Computer
20.
Sensors (Basel) ; 21(3)2021 Feb 02.
Article in English | MEDLINE | ID: mdl-33540791

ABSTRACT

RGB-D cameras have been commercialized, and many applications using them have been proposed. In this paper, we propose a robust registration method of multiple RGB-D cameras. We use a human body tracking system provided by Azure Kinect SDK to estimate a coarse global registration between cameras. As this coarse global registration has some error, we refine it using feature matching. However, the matched feature pairs include mismatches, hindering good performance. Therefore, we propose a registration refinement procedure that removes these mismatches and uses the global registration. In an experiment, the ratio of inliers among the matched features is greater than 95% for all tested feature matchers. Thus, we experimentally confirm that mismatches can be eliminated via the proposed method even in difficult situations and that a more precise global registration of RGB-D cameras can be obtained.
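The refinement idea, rejecting matched pairs that disagree with the coarse registration, can be sketched in 2-D. The transform (standing in for the body-tracking pose) and the residual threshold are illustrative assumptions:

```python
from math import dist

def filter_matches(pairs, coarse_transform, max_residual=0.5):
    """Keep only feature pairs consistent with the coarse registration:
    transform the source point and drop the pair when it lands far from
    its claimed partner in the other camera's frame."""
    return [(p, q) for p, q in pairs
            if dist(coarse_transform(p), q) <= max_residual]

def coarse(p):
    """Coarse registration from body tracking: the second camera sees
    everything shifted by (2, 0) in this toy setup."""
    return (p[0] + 2.0, p[1])

pairs = [((0, 0), (2.1, 0.0)),  # consistent with the coarse shift
         ((1, 1), (3.0, 1.1)),  # consistent
         ((0, 1), (5.0, 5.0))]  # mismatch
print(filter_matches(pairs, coarse))  # the mismatch is dropped
```

The surviving pairs can then refine the registration, which is why even an approximate initial pose is enough to reach a high inlier ratio.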


Subject(s)
Monitoring, Physiologic , Calibration , Humans , Movement