Results 1 - 13 of 13
1.
Phys Med Biol ; 69(9)2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38537303

ABSTRACT

Cardiac magnetic resonance imaging (MRI) usually requires a long acquisition time, and patient movement during acquisition produces image artifacts. Previous studies have shown that clear MR image texture edges are of great significance for pathological diagnosis. In this paper, a motion artifact reduction method for cardiac MRI based on an edge enhancement network is proposed. First, a four-plane normal vector adaptive fractional differential mask is applied to extract the edge features of blurred images; the four-plane normal vector method reduces the noise in the edge feature maps, and the adaptive fractional order is selected according to the normal mean gradient and the local Gaussian curvature entropy of the images. Second, the extracted edge feature maps and the blurred images are fed into the de-artifact network, in which an edge fusion feature extraction network and an edge fusion transformer network are specially designed. The former combines the edge feature maps with the blurred feature maps to extract edge information; the latter combines an edge attention network and a blur attention network to focus on the blurred image edges. Finally, extensive experiments show that the proposed method achieves a higher peak signal-to-noise ratio and structural similarity index measure than state-of-the-art methods, and the artifact-reduced images have clear texture edges.
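As a rough illustration of the fractional differential idea (not the paper's four-plane normal vector mask), a Grünwald-Letnikov fractional derivative can be truncated to a short convolution kernel and applied along image rows and columns; the function names, the kernel length, and the fixed order v are illustrative assumptions:

```python
import numpy as np

def gl_kernel(v, n_terms=3):
    # Gruenwald-Letnikov coefficients c_k = (-1)^k * C(v, k), truncated
    c = [1.0]
    for k in range(1, n_terms):
        c.append(-c[-1] * (v - k + 1) / k)
    return np.array(c)

def fractional_edge_map(img, v=0.5):
    # apply the 1D fractional mask along rows and columns, combine magnitudes
    k = gl_kernel(v)
    gx = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    gy = np.apply_along_axis(lambda c_: np.convolve(c_, k, mode="same"), 0, img)
    return np.hypot(gx, gy)
```

For v = 1 the kernel reduces to an ordinary first difference; a fractional v between 0 and 1 responds to edges while retaining more texture detail, which is the property the adaptive-order selection exploits.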


Subjects
Artifacts, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Signal-To-Noise Ratio, Motion, Movement, Image Processing, Computer-Assisted/methods
2.
Sensors (Basel) ; 23(15)2023 Jul 31.
Article in English | MEDLINE | ID: mdl-37571608

ABSTRACT

Three-dimensional measurement is a high-throughput method that can record a large amount of information. Three-dimensional modelling of plants can not only automate dimensional measurement but also quantify visual assessment, eliminating ambiguity in human judgment. In this study, we developed new methods for the morphological analysis of plants from the information contained in 3D data. Specifically, we investigated characteristics that can be measured by scale (dimension) and/or assessed visually by humans; the latter is particularly novel in this paper. The scale-related characteristics were tested based on the bounding box, convex hull, column solid, and voxel representations. For characteristics usually evaluated visually, we propose a new method using normal vectors and local curvature (LC) data. For these examinations, we used our highly accurate all-around 3D plant modelling system. The coefficients of determination between manual measurements and the scale-related methods were all above 0.9. Furthermore, the differences in LC calculated from the normal vector data allowed us to visualise and quantify the concavity and convexity of leaves; this technique revealed differences among varieties in the time point at which leaf blistering began to develop. The precise 3D model made it possible to quantitatively measure lettuce size and morphological characteristics, and the newly proposed LC-based analysis quantified characteristics that otherwise rely on visual assessment. This work demonstrates two outcomes: (1) the automation of conventional manual measurements, and (2) the elimination of variability caused by human subjectivity, rendering evaluations by skilled experts unnecessary.
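The scale-related descriptors mentioned above (bounding box and convex hull) can be sketched generically with SciPy; this is a minimal illustration of such size descriptors, not the authors' pipeline, and the function name is an assumption:

```python
import numpy as np
from scipy.spatial import ConvexHull

def size_descriptors(points):
    # points: (N, 3) plant point cloud
    mins, maxs = points.min(axis=0), points.max(axis=0)
    hull = ConvexHull(points)
    return {
        "bbox_volume": float(np.prod(maxs - mins)),  # axis-aligned bounding box
        "hull_volume": float(hull.volume),           # convex hull volume
        "hull_area": float(hull.area),               # convex hull surface area
    }
```

For a 3D input, SciPy's `ConvexHull.volume` is the enclosed volume and `ConvexHull.area` the surface area, which map directly onto scale-related plant size measures.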


Subjects
Imaging, Three-Dimensional, Lactuca, Lactuca/growth & development, Computer Simulation
3.
Micromachines (Basel) ; 14(8)2023 Aug 03.
Article in English | MEDLINE | ID: mdl-37630091

ABSTRACT

To automatically measure the surface profile of a cylindrical workpiece, a high-precision multi-beam optical method is proposed in this paper. First, successive images of the cylindrical workpiece's surface are acquired by a multi-beam angle sensor under different light directions. Then, the light directions are estimated from the feature regions in the images to calculate surface normal vectors. Finally, from the relationship between the surface normal vector and the vertical section of the workpiece's surface, a depth map is reconstructed to recover the curved surface, from which the curvature radius of the workpiece's surface can be measured. Experimental results indicate that the proposed method achieves good measurement precision, with a mean curvature-radius error of 0.89%, in a reasonable time of 10.226 s, which is superior to some existing methods.
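The final step, going from per-pixel surface normals to a depth map, can be sketched by converting normals to surface gradients and naively integrating them; this simple path integration (depth recovered up to an additive constant) stands in for whatever reconstruction the authors actually use:

```python
import numpy as np

def normals_to_depth(normals):
    # normals: (H, W, 3) unit normals; gradients p = dz/dx = -nx/nz, q = dz/dy = -ny/nz
    p = -normals[..., 0] / normals[..., 2]
    q = -normals[..., 1] / normals[..., 2]
    h, w = p.shape
    z = np.zeros((h, w))
    z[:, 0] = np.cumsum(q[:, 0])                       # integrate down the first column
    z[:, 1:] = z[:, :1] + np.cumsum(p[:, 1:], axis=1)  # then across each row
    return z
```

Naive path integration is sensitive to noise in the normals; robust variants integrate in the Fourier domain or solve a least-squares Poisson problem instead.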

4.
Sensors (Basel) ; 23(14)2023 Jul 12.
Article in English | MEDLINE | ID: mdl-37514625

ABSTRACT

China is the largest producer and consumer of rice, and the classification of filled/unfilled rice grains is of great significance for rice breeding and genetic analysis. The traditional method for filled/unfilled grain identification is generally manual, with low efficiency, poor repeatability, and low precision. In this study, we propose a novel method for filled/unfilled grain classification based on structured light imaging and Improved PointNet++. First, 3D point cloud data of rice grains were obtained by structured light imaging. Dedicated processing algorithms were then developed for single-grain segmentation and normal-vector-based data augmentation. Finally, the PointNet++ network was improved by adding an additional Set Abstraction layer and combining the maximum pooling of normal vectors to classify filled/unfilled rice grain point clouds. To verify model performance, Improved PointNet++ was compared with six machine learning methods, PointNet, and PointConv. The best machine learning model was XGBoost, with a classification accuracy of 91.99%, while Improved PointNet++ reached 98.50%, outperforming PointNet (93.75%) and PointConv (92.25%). In conclusion, this study demonstrates a novel and effective method for filled/unfilled grain recognition.
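The per-point normal vectors used for augmentation can be estimated by local PCA over the k nearest neighbors of each point; this generic sketch (the function name and k are assumptions) is not the paper's exact preprocessing:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=8):
    # points: (N, 3); the normal is the eigenvector of the smallest
    # eigenvalue of the local neighborhood covariance matrix
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        q = points[nb] - points[nb].mean(axis=0)
        _, vecs = np.linalg.eigh(q.T @ q)  # eigenvalues ascending
        normals[i] = vecs[:, 0]
    return normals
```

The sign of each estimated normal is ambiguous; consistent orientation (e.g., toward a viewpoint) is a separate step.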

5.
Sensors (Basel) ; 23(11)2023 May 27.
Article in English | MEDLINE | ID: mdl-37299856

ABSTRACT

Three-dimensional (3D) reconstruction of objects using the polarization properties of diffuse light on the object surface has become a crucial technique. Owing to the unique mapping between the degree of polarization of diffuse light and the zenith angle of the surface normal vector, polarization 3D reconstruction based on diffuse reflection is theoretically highly accurate. In practice, however, its accuracy is limited by the performance parameters of the polarization detector, and improper selection of these parameters can result in large errors in the normal vector. In this paper, mathematical models relating the polarization 3D reconstruction errors to the detector performance parameters, including polarizer extinction ratio, polarizer installation error, full-well capacity, and analog-to-digital (A2D) bit depth, are established. Detector parameters suitable for polarization 3D reconstruction are then obtained by simulation: an extinction ratio ≥ 200, an installation error ∈ [-1°, 1°], a full-well capacity ≥ 100 Ke-, and an A2D bit depth ≥ 12 bits. The models provided in this paper are of great significance for improving the accuracy of polarization 3D reconstruction.
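The mapping between the diffuse degree of polarization and the zenith angle that these error models build on is commonly written with the Atkinson-Hancock diffuse reflection model; a sketch with a brute-force numeric inversion, assuming a refractive index of n = 1.5 (both the model choice and n are assumptions, not details from this abstract):

```python
import numpy as np

def dop_diffuse(theta, n=1.5):
    # Atkinson-Hancock degree of polarization of diffuse reflection
    # as a function of the zenith angle theta (radians)
    s2 = np.sin(theta) ** 2
    num = (n - 1 / n) ** 2 * s2
    den = (2 + 2 * n**2 - (n + 1 / n) ** 2 * s2
           + 4 * np.cos(theta) * np.sqrt(n**2 - s2))
    return num / den

def zenith_from_dop(rho, n=1.5):
    # the mapping is monotonic on [0, pi/2), so a dense grid search suffices
    thetas = np.linspace(0.0, np.pi / 2 - 1e-6, 10000)
    return thetas[np.argmin(np.abs(dop_diffuse(thetas, n) - rho))]
```

Because the degree of polarization enters this inversion directly, any detector-induced error in the measured DoP (finite extinction ratio, quantization) propagates into the zenith angle, which is exactly what the paper's error models quantify.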


Subjects
Imaging, Three-Dimensional, Models, Theoretical, Imaging, Three-Dimensional/methods, Computer Simulation
6.
Sensors (Basel) ; 23(7)2023 Mar 31.
Article in English | MEDLINE | ID: mdl-37050696

ABSTRACT

There are six possible solutions for the surface normal vectors obtained from polarization information during 3D reconstruction. To resolve this ambiguity, previous work has introduced additional cues, such as shading information, which makes the 3D reconstruction task burdensome. To make polarization 3D reconstruction more generally applicable, this paper proposes a complete framework that reconstructs the surface of an object using only polarized images. To resolve the ambiguity of the surface normal vectors, a jump-compensated U-shaped generative adversarial network (RU-Gan) is designed to fuse the six candidate normal vectors; among other changes, jump compensation is introduced in the encoder and decoder, and the content loss function is redesigned. Because specular regions in the original images cause the estimated normal vectors to deviate from the true ones, a specular reflection model is proposed to optimize the dataset and reduce these regions. Experiments show that the estimated normal vectors improve accuracy by about 20° over previous conventional work and by about 1.5° over a recent neural network model, indicating that the proposed network is better suited to the normal vector estimation task. Furthermore, the proposed surface reconstruction framework is simple to implement and reconstructs texture with high accuracy.
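The ambiguity arises partly because the polarization azimuth is only known modulo π, so each zenith solution yields two candidate normals; a minimal sketch of enumerating the candidate set from three assumed zenith solutions and the two azimuth branches (an illustration of where the six candidates come from, not RU-Gan's fusion):

```python
import numpy as np

def candidate_normals(zeniths, azimuth):
    # each zenith angle combined with azimuth and azimuth + pi gives one candidate;
    # three zenith solutions x two azimuth branches -> six candidate normals
    cands = []
    for theta in zeniths:
        for phi in (azimuth, azimuth + np.pi):
            cands.append([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
    return np.array(cands)
```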

7.
Sensors (Basel) ; 23(2)2023 Jan 11.
Article in English | MEDLINE | ID: mdl-36679649

ABSTRACT

Building reconstruction using high-resolution satellite-based synthetic aperture radar tomography (TomoSAR) is of great importance in urban planning and city modeling applications. However, because SAR imaging is side-looking, the TomoSAR point cloud of a single orbit cannot achieve a complete observation of buildings, and it is difficult for existing methods to extract common features or to use the overlap rate to align homologous and cross-source TomoSAR point clouds. Therefore, this paper proposes a robust alignment method for TomoSAR point clouds in urban areas. First, noise and outlier points are removed by statistical filtering, and density of projection point (DoPP)-based projection is used to extract TomoSAR building point clouds, with the facade points for subsequent calculations obtained by density clustering. Next, coarse alignment of the source and target point clouds is performed using principal component analysis (PCA). Lastly, the rotation and translation coefficients are calculated from the angle between the normal vectors of the opposite building facades and the distance between the outer ends of the facade projections. The experimental results verify the feasibility and robustness of the proposed method: for homologous TomoSAR point clouds, the average rotation error was less than 0.1° and the average translation error less than 0.25 m; for cross-source TomoSAR point clouds, the defined angle and distance errors were less than 0.2° and 0.25 m, respectively.
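The PCA coarse-alignment step can be sketched as aligning the principal axes of the two clouds; note the inherent sign ambiguity of PCA axes, which a refinement stage like the authors' facade-based step would have to resolve (this generic sketch only guarantees a proper rotation):

```python
import numpy as np

def pca_coarse_align(source, target):
    # align principal axes: R maps the source's eigenbasis onto the target's
    sc, tc = source.mean(axis=0), target.mean(axis=0)
    _, vs = np.linalg.eigh(np.cov((source - sc).T))
    _, vt = np.linalg.eigh(np.cov((target - tc).T))
    R = vt @ vs.T
    if np.linalg.det(R) < 0:   # flip one axis to enforce a proper rotation
        vt[:, 0] *= -1
        R = vt @ vs.T
    t = tc - R @ sc
    return R, t
```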


Subjects
City Planning, Tomography, X-Ray Computed, Menogaril, Cluster Analysis, Principal Component Analysis
8.
Sensors (Basel) ; 22(21)2022 Oct 28.
Article in English | MEDLINE | ID: mdl-36365950

ABSTRACT

Estimating camera pose is one of the key steps in computer vision, photogrammetry, and SLAM (Simultaneous Localization and Mapping). It is mainly calculated from 2D-3D correspondences of features, including 2D-3D point and line correspondences; if a zoom lens is equipped, the focal length needs to be estimated simultaneously. In this paper, a new method for fast and accurate pose estimation with unknown focal length using two 2D-3D line correspondences and the camera position is proposed. Our core contribution is to convert the PnL (perspective-n-line) problem with 2D-3D line correspondences into an estimation problem with 3D-3D point correspondences. A 3D line and the camera position define a plane in the world frame, and the 2D projection of that line and the camera position define a plane in the camera frame; these are in fact the same plane, which is the key geometric characteristic exploited here for estimating focal length and pose. We establish the transform between the normal vectors of the two planes, which can be regarded as the camera projection of a 3D point. Pose estimation from 2D-3D line correspondences is thus converted into pose estimation from 3D-3D point correspondences in intermediate frames and can be finished quickly. In addition, using the property that the angle between two planes is invariant in both the camera frame and the world frame, the camera focal length can be estimated quickly and accurately. Experimental results on synthetic data and real scenarios show that the proposed method performs well in numerical stability, noise sensitivity, and computational speed, and is strongly robust to camera position noise.
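The key construction above, a plane defined by a 3D line and the camera position, reduces to a cross product; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def plane_normal_from_line(p0, d, c):
    # plane through camera position c containing the line x = p0 + t*d:
    # its normal is perpendicular to both the line direction and (p0 - c)
    n = np.cross(d, p0 - c)
    return n / np.linalg.norm(n)
```

Computing this normal once in the world frame and once in the camera frame (from the projected 2D line) gives the pair of vectors whose transform the paper treats as a 3D-3D point correspondence.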

9.
Sensors (Basel) ; 22(17)2022 Aug 24.
Article in English | MEDLINE | ID: mdl-36080833

ABSTRACT

Accurately detecting the tooth profile parameters of a synchronous belt is crucial for the transmission's load distribution and service life. However, existing detection methods have low efficiency, depend heavily on operator experience, and cannot be automated. To solve this, a measurement method based on point cloud data is proposed. Surface points of the synchronous belt are acquired by a line-structured light sensor, and the raw point clouds are preprocessed to remove outliers and reduce the number of points. The point clouds are then divided into plane and arc regions, which are fitted with different methods, and finally the parameters of each tooth are calculated. The experimental results show that the method has high measurement accuracy and reliable stability and can replace the original detection method to realize automatic detection.
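Fitting the arc regions can be sketched with the classic Kåsa algebraic circle fit; this generic least-squares fit is an assumption for illustration, not necessarily the authors' fitting method:

```python
import numpy as np

def fit_circle(x, y):
    # Kasa fit: solve x^2 + y^2 + D*x + E*y + F = 0 in least squares
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2, -E / 2
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r
```

The plane regions would get an analogous linear least-squares line/plane fit, and the tooth parameters follow from the fitted centers, radii, and plane intersections.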


Subjects
Tooth
10.
Sensors (Basel) ; 21(11)2021 May 26.
Article in English | MEDLINE | ID: mdl-34073498

ABSTRACT

Due to the complexity of surrounding environments, lidar point cloud data (PCD) are often degraded by plane noise. To eliminate this noise, this paper proposes a filtering scheme based on a grid principal component analysis (PCA) technique and a ground splicing method. The 3D PCD is first projected onto a suitable 2D plane, within which the ground and wall data are separated from the PCD via a prescribed index based on the statistics of the points in all 2D mesh grids. Then, a KD-tree is constructed for the ground data, and rough unsupervised segmentation is conducted to obtain the true ground data, using the normal vector as a distinctive feature. To improve noise removal, we propose an elaborate K-nearest-neighbor (KNN)-based segmentation method via an optimization strategy. Finally, the denoised wall and ground data are spliced for further 3D reconstruction. The experimental results show that the proposed method removes noise efficiently and is superior to several traditional methods in both denoising performance and running speed.
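A KD-tree KNN-based outlier test of the kind such pipelines rely on can be sketched as a mean-neighbor-distance filter; the parameters k and std_ratio are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_filter(points, k=8, std_ratio=2.0):
    # drop points whose mean distance to their k nearest neighbors is
    # more than std_ratio standard deviations above the global mean
    tree = cKDTree(points)
    d, _ = tree.query(points, k=k + 1)   # first neighbor is the point itself
    mean_d = d[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]
```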

11.
Sensors (Basel) ; 20(13)2020 Jul 03.
Article in English | MEDLINE | ID: mdl-32635370

ABSTRACT

We propose a completely unsupervised approach to simultaneously estimate scene depth, ego-pose, ground segmentation, and the ground normal vector from only monocular RGB video sequences. In our approach, the estimates for different scene structures mutually benefit each other through joint optimization. Specifically, we use a mutual information loss to pre-train the ground segmentation network before adding the corresponding self-learning labels obtained by a geometric method. By exploiting the static nature of the ground and its normal vector, the scene depth and ego-motion can be efficiently learned by the self-supervised learning procedure. Extensive experiments on both the Cityscapes and KITTI benchmarks demonstrate significant improvements in the estimation accuracy of both scene depth and ego-pose. We also achieve an average error of about 3° for the estimated ground normal vectors, and by deploying the proposed geometric constraints, the IoU accuracy of unsupervised ground segmentation is increased by 35% on the Cityscapes dataset.
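Estimating a ground normal vector from a segmented ground patch can be sketched as a least-squares plane fit; this generic PCA fit is an assumption for illustration, not the paper's geometric method:

```python
import numpy as np

def ground_normal(points):
    # normal of the best-fit plane = eigenvector of the smallest
    # eigenvalue of the centered covariance; oriented upward
    q = points - points.mean(axis=0)
    _, vecs = np.linalg.eigh(q.T @ q)  # eigenvalues ascending
    n = vecs[:, 0]
    return n if n[2] > 0 else -n
```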

12.
Sensors (Basel) ; 19(4)2019 Feb 21.
Article in English | MEDLINE | ID: mdl-30795500

ABSTRACT

In this paper, we introduce an approach for measuring human gait symmetry where the input is a sequence of depth maps of a subject walking on a treadmill. Body surface normals are used to describe the 3D information of the walking subject in each frame. Two different schemes for embedding the temporal factor into a symmetry index are proposed. Experiments on the whole body, as well as on the lower limbs alone, were conducted to assess the usefulness of upper body information in this task. The potential of our method was demonstrated on a dataset of 97,200 depth maps of nine different walking gaits. An ROC analysis for abnormal gait detection gave the best result (AUC = 0.958) compared with other related studies. The experimental results confirm the contribution of the upper body in gait analysis, as well as the reliability of approximating an average gait symmetry index without explicitly considering individual gait cycles for asymmetry detection.
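One simple way to turn surface normals into a symmetry score (a hedged sketch, not either of the paper's two proposed schemes) is a histogram intersection of the lateral normal components after mirroring one side across the sagittal plane:

```python
import numpy as np

def symmetry_index(normals_left, normals_right, bins=20):
    # compare distributions of the lateral (x) components; mirroring the
    # right side across the sagittal plane flips the sign of x
    a = normals_left[:, 0]
    b = -normals_right[:, 0]
    ha, _ = np.histogram(a, bins=bins, range=(-1, 1), density=True)
    hb, _ = np.histogram(b, bins=bins, range=(-1, 1), density=True)
    return np.minimum(ha, hb).sum() * (2.0 / bins)  # 1.0 = perfectly symmetric
```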


Subjects
Exercise Test/methods, Gait/physiology, Lower Extremity/physiology, Walking, Adult, Biomechanical Phenomena/physiology, Female, Humans, Image Processing, Computer-Assisted, Male, Video Recording
13.
Sensors (Basel) ; 18(11)2018 Oct 31.
Article in English | MEDLINE | ID: mdl-30384481

ABSTRACT

This paper presents a stereo camera-based head-eye calibration method that aims to find the globally optimal transformation between a robot's head and its eye. This method is highly intuitive and simple, so it can be used in a vision system for humanoid robots without any complex procedures. To achieve this, we introduce an extended minimum variance approach for head-eye calibration using surface normal vectors instead of 3D point sets. The presented method considers both positional and orientational error variances between visual measurements and kinematic data in head-eye calibration. Experiments using both synthetic and real data show the accuracy and efficiency of the proposed method.
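A globally optimal rotation between measured and kinematically predicted normal vectors can be sketched with the SVD-based Kabsch solution; this standard least-squares step is an illustration of rotation fitting on normal vectors, not the paper's full minimum-variance formulation:

```python
import numpy as np

def rotation_from_normals(a, b):
    # Kabsch: rotation R minimizing sum ||R a_i - b_i||^2 over unit normal
    # pairs (a_i, b_i), given as rows of a and b
    H = a.T @ b
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```

Because normals carry no positional information, only the rotational part of the head-eye transform is constrained this way; recovering the translation requires the positional error terms the paper adds.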
