Results 1 - 20 of 62
1.
Sensors (Basel) ; 24(2)2024 Jan 05.
Article in English | MEDLINE | ID: mdl-38257420

ABSTRACT

Hyperspectral images (HSIs) contain abundant spectral and spatial structural information, but they are inevitably contaminated by a variety of noises during data reception and transmission, degrading image quality and hindering subsequent applications. Hence, removing mixed noise from hyperspectral images is an important step in improving the performance of subsequent image processing. It is well established that the data in hyperspectral images can be effectively represented by a global spectral low-rank subspace due to the high redundancy and correlation (RAC) in the spatial and spectral domains. Taking advantage of this property, a new algorithm based on subspace representation and nonlocal low-rank tensor decomposition is proposed to filter the mixed noise of hyperspectral images. The algorithm first exploits the spectral low-rank property to obtain a subspace representation of the hyperspectral image, yielding an orthogonal basis and a representation coefficient image (RCI). Then, the representation coefficient image is grouped and denoised using tensor decomposition and wavelet decomposition, respectively, according to spatial nonlocal self-similarity. Afterward, the orthogonal basis and the denoised representation coefficient image are optimized using the alternating direction method of multipliers (ADMM). Finally, iterative regularization is used to update the image and obtain the final denoised hyperspectral image. Experiments on both simulated and real datasets demonstrate that the proposed algorithm is superior to related mainstream methods in both quantitative metrics and visual quality. Because denoising is performed in the image subspace, the time complexity is greatly reduced, and the computational cost is lower than that of related denoising algorithms.
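As a rough illustration of the subspace step described in this abstract (not the authors' implementation; the cube size and rank below are arbitrary assumptions), the spectral low-rank projection that yields the orthogonal basis and the representation coefficient image can be sketched with a truncated SVD:

```python
import numpy as np

def spectral_subspace(cube, k):
    """cube: (rows, cols, bands) -> orthogonal basis E (bands, k)
    and representation coefficient image RCI (rows, cols, k)."""
    r, c, b = cube.shape
    Y = cube.reshape(r * c, b)          # each pixel is a spectral vector
    # top-k right singular vectors span the spectral subspace
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)
    E = Vt[:k].T                        # (bands, k) orthogonal basis
    rci = (Y @ E).reshape(r, c, k)      # representation coefficient image
    return E, rci

# synthetic exactly-rank-3 cube: projection then back-projection is lossless
rng = np.random.default_rng(0)
low_rank = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 20))
cube = low_rank.reshape(5, 10, 20)
E, rci = spectral_subspace(cube, 3)
recon = (rci.reshape(-1, 3) @ E.T).reshape(cube.shape)
```

Denoising then operates on the k-band `rci` instead of all spectral bands, which is where the reported reduction in computational cost comes from.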

2.
Sensors (Basel) ; 23(16)2023 Aug 21.
Article in English | MEDLINE | ID: mdl-37631827

ABSTRACT

Zirconium sheet has been widely used in various fields, e.g., chemistry and aerospace. Surface scratches on zirconium sheets, caused by the complex processing environment, have a negative impact on performance, e.g., working life and fatigue fracture resistance. Therefore, it is necessary to detect defects on zirconium sheets. However, it is difficult to detect such scratch images due to abundant scattered additive noise and complex interlaced structural texture. Hence, we propose a framework for adaptively detecting scratches on surface images of zirconium sheets, comprising noise removal and texture suppression. First, the noise removal algorithm, i.e., an optimized threshold function based on the dual-tree complex wavelet transform, uses selected parameters to remove the numerous scattered noise points. Second, the texture suppression algorithm, i.e., an optimized relative total variation enhancement model, employs selected parameters to suppress interlaced texture. Finally, by connecting discontinuities with two types of connection algorithms and replacing the Gaussian filter in the standard Canny edge detection algorithm with our proposed framework, we can detect the scratches more robustly. The experimental results show that the proposed framework achieves higher accuracy.

3.
Sensors (Basel) ; 23(17)2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37687987

ABSTRACT

Satellite sensors often capture remote sensing images that contain various types of stripe noise. The presence of these stripes significantly reduces the quality of the remote images and severely affects their subsequent applications in other fields. Despite the existence of many stripe noise removal methods in the research, they often result in the loss of fine details during the destriping process, and some methods even generate artifacts. In this paper, we propose a new unidirectional variational model to remove horizontal stripe noise. The proposed model fully considers the directional characteristics and structural sparsity of the stripe noise, as well as the prior features of the underlying image, to design different sparse constraints, and the ℓp quasinorm is introduced in these constraints to better describe the sparse characteristics, thus achieving a better destriping effect. Moreover, we employ the fast alternating direction method of multipliers (ADMM) to solve the proposed non-convex model, which significantly improves the efficiency and robustness of the proposed method. The qualitative and quantitative results from simulated and real data experiments confirm that our method outperforms existing destriping approaches in terms of stripe noise removal and preservation of image details.
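The ℓp-quasinorm variational model itself is too involved for a short sketch, but the additive horizontal-stripe setup it targets can be illustrated with a far cruder moment-matching baseline (my own simplification, assuming the true scene has no vertical trend):

```python
import numpy as np

def destripe_rows(img):
    # Assume the true scene's row means are constant (no vertical trend):
    # any deviation of a row mean from the global mean is treated as an
    # additive stripe offset and subtracted from that row.
    row_means = img.mean(axis=1, keepdims=True)
    return img - (row_means - img.mean())

clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))  # horizontal gradient scene
stripe = np.zeros(64)
stripe[::8] = 0.5                                    # periodic row offsets
noisy = clean + stripe[:, None]
out = destripe_rows(noisy)
```

Variational methods such as the one in this paper exist precisely because this baseline fails on scenes with real vertical structure; it is shown only to make the noise model concrete.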

4.
Sensors (Basel) ; 23(21)2023 Oct 24.
Article in English | MEDLINE | ID: mdl-37960360

ABSTRACT

LiDAR point clouds are significantly impacted by snow in driving scenarios, which introduces scattered noise points and phantom objects, thereby compromising the perception capabilities of autonomous driving systems. Current effective methods for removing snow from point clouds largely rely on outlier filters, which mechanically eliminate isolated points. This research proposes a novel translation model for LiDAR point clouds, 'L-DIG' (LiDAR depth images GAN), built upon refined generative adversarial networks (GANs). This model not only reduces snow noise in point clouds but can also artificially synthesize snow points onto clear data. The model is trained using depth-image representations of point clouds derived from unpaired datasets, complemented by customized loss functions for depth images to ensure scale and structural consistency. To amplify the efficacy of snow capture, particularly in the region surrounding the ego vehicle, we have developed a pixel-attention discriminator that operates without downsampling convolutional layers. Concurrently, another discriminator equipped with two-step downsampling convolutional layers has been engineered to effectively handle snow clusters. This dual-discriminator approach ensures robust and comprehensive performance in tackling diverse snow conditions. The proposed model displays a superior ability to capture snow and object features within LiDAR point clouds. A 3D clustering algorithm is employed to adaptively evaluate different levels of snow conditions, including scattered snowfall and snow swirls. Experimental findings demonstrate an evident de-snowing effect and the ability to synthesize snow effects.

5.
Sensors (Basel) ; 23(21)2023 Oct 27.
Article in English | MEDLINE | ID: mdl-37960452

ABSTRACT

Laser altimetry data from the Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) contain substantial noise, necessitating a signal photon extraction method. In this study, we propose a density clustering method that combines slope and elevation information from optical stereo images and adaptively adjusts the neighborhood search direction along the track. The local classification density threshold is calculated adaptively according to the uneven spatial distribution of noise and signal density, and reliable surface signal points are extracted. The performance of the algorithm was validated for strong- and weak-beam laser altimetry data using optical stereo images with different resolutions and positioning accuracies. The results were compared qualitatively and quantitatively with those obtained using the ATL08 algorithm. The signal extraction quality was better than that of the ATL08 algorithm in regions with steep slopes and low signal-to-noise ratios (SNRs). The proposed method better balances the trade-off between recall and precision, and its F1-score was higher than that of the ATL08 algorithm. The method can accurately extract continuous and reliable surface signals for both strong and weak beams across different terrains and land cover types.
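A minimal sketch of the density-clustering idea (a generic neighborhood-count filter; the window sizes, threshold, and synthetic photons are invented, and the paper's slope-adaptive search direction and adaptive threshold are omitted):

```python
import numpy as np

def density_filter(x, h, dx=5.0, dh=2.0, min_neighbors=3):
    """Keep a photon as signal if enough other photons fall inside an
    along-track (dx) x elevation (dh) neighborhood around it."""
    keep = np.zeros(x.size, dtype=bool)
    for i in range(x.size):
        near = (np.abs(x - x[i]) < dx) & (np.abs(h - h[i]) < dh)
        keep[i] = near.sum() - 1 >= min_neighbors  # exclude the photon itself
    return keep

rng = np.random.default_rng(1)
x_sig = np.linspace(0, 100, 80)
h_sig = 0.1 * x_sig                      # dense photons along a sloped ground track
x_noise = rng.uniform(0, 100, 15)
h_noise = rng.uniform(20, 120, 15)       # sparse noise well above the surface
x = np.concatenate([x_sig, x_noise])
h = np.concatenate([h_sig, h_noise])
keep = density_filter(x, h)
```

Signal photons lie densely along the surface and pass the neighbor count, while isolated noise photons do not; the paper's contribution is making the window orientation and threshold adapt to slope and local density instead of fixing them as here.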

6.
Sensors (Basel) ; 23(7)2023 Mar 28.
Article in English | MEDLINE | ID: mdl-37050587

ABSTRACT

In bio-signal denoising, current methods reported in the literature consider purely simulated environments and require high computational power, with signal processing algorithms that may introduce signal distortion. To achieve efficient noise reduction, such methods require prior knowledge of the noise signals, or the noise must exhibit a certain periodicity and stability, making noise estimation difficult. In this paper, we address these challenges through the development of an experimental method applied to bio-signal denoising using a combined approach. It is based on the implementation of unconventional electric field sensors used to create a noise replica, which is required to obtain the ideal Wiener filter transfer function and achieve further noise reduction. This work aims to investigate the suitability of the proposed approach for real-time reduction of noise affecting bio-signal recordings. The experimental evaluation presented here considers two scenarios: (a) human bio-signal trials including electrocardiogram, electromyogram and electrooculogram; and (b) bio-signal recordings from the MIT-BIH arrhythmia database. The performance of the proposed method is evaluated using qualitative criteria (i.e., power spectral density) and quantitative criteria (i.e., signal-to-noise ratio and mean square error), followed by a comparison between the proposed methodology and state-of-the-art denoising methods. The results indicate that the combined approach proposed in this paper can be used for noise reduction in electrocardiogram, electromyogram and electrooculogram signals, achieving noise attenuation levels of 26.4 dB, 21.2 dB and 40.8 dB, respectively.
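The central Wiener-filter step can be sketched for the case the abstract describes, where a separately recorded noise replica is available; the signals, frequencies, and numerical guards below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

def wiener_denoise(noisy, noise_ref):
    """Wiener filtering with a measured noise reference: the transfer
    function is the estimated signal power over the total power."""
    N = len(noisy)
    S_yy = np.abs(np.fft.rfft(noisy)) ** 2      # power spectrum of recording
    S_nn = np.abs(np.fft.rfft(noise_ref)) ** 2  # power spectrum of noise replica
    H = np.maximum(S_yy - S_nn, 0) / np.maximum(S_yy, 1e-12)
    return np.fft.irfft(H * np.fft.rfft(noisy), n=N)

t = np.linspace(0, 1, 512, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)              # synthetic "bio-signal"
noise = 0.8 * np.sin(2 * np.pi * 60 * t)       # mains-like interference
denoised = wiener_denoise(clean + noise, noise)
```

With an ideal replica (identical to the interference in the mixture) the 60 Hz component is suppressed almost entirely while the 5 Hz component passes unchanged; in practice the replica only approximates the in-band noise, which is why the paper pairs this step with dedicated electric field sensors.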

7.
Sensors (Basel) ; 23(24)2023 Dec 10.
Article in English | MEDLINE | ID: mdl-38139587

ABSTRACT

The photon point clouds collected by the high-sensitivity single-photon detector on the Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) are utilized in various applications. However, the discretely distributed noise among the signal photons greatly increases the difficulty of signal extraction, especially the edge noise adjacent to signals. To detect signal photons in vegetation-covered areas at different slopes, this paper proposes a density-based multilevel terrain-adaptive noise removal method (MTANR) that identifies noise in a coarse-to-fine strategy based on the distribution of noise photons and is evaluated with high-precision airborne LiDAR data. First, a histogram-based successive denoising method was used as a coarse denoising step to remove distant noise and part of the sparse noise, thereby increasing the fault tolerance of the subsequent steps. Second, a rotatable ellipse that adaptively corrects its direction and shape based on the slope was used to search for the optimal filtering direction (OFD). Based on this direction, sparse noise removal was accomplished robustly using Otsu's method in conjunction with ordering points to identify the clustering structure (OPTICS), providing a nearly noise-free environment for edge searching. Finally, the edge noise was removed by near-ground edge searching, and the signal photons were better preserved by the surface lines. The proposed MTANR was validated in four typical experimental areas: two in Baishan, China, and two in Taranaki, New Zealand. A comparison was made with three other representative methods: differential, regressive, and Gaussian adaptive nearest neighbor (DRAGANN), which is used in ATL08 products; local distance statistics (LDS); and horizontal-ellipse-based OPTICS. The results demonstrated that the F1 scores for signal photon identification achieved by the proposed MTANR in the four areas were 0.9762, 0.9857, 0.9839, and 0.9534, respectively, higher than those of the other methods mentioned above. In addition, the qualitative and quantitative results demonstrated that MTANR performed better in scenes with steep slopes, abrupt terrain changes, and uneven vegetation coverage.

8.
Sensors (Basel) ; 22(10)2022 May 23.
Article in English | MEDLINE | ID: mdl-35632370

ABSTRACT

Despite all the expectations for photoacoustic endoscopy (PAE), several technical issues must still be resolved before the technique can be successfully translated into clinics. Among these, electromagnetic interference (EMI) noise, in addition to the limited signal-to-noise ratio (SNR), has hindered the rapid development of related technologies. Unlike endoscopic ultrasound, in which the SNR can be increased by simply applying a higher pulsing voltage, there is a fundamental limitation in improving the SNR of PAE signals because it is mostly determined by the applied optical pulse energy, which must remain within safety limits. Moreover, a typical PAE hardware setup requires a wide separation between the ultrasonic sensor and the amplifier, meaning that it is not easy to build an ideal PAE system that would be unaffected by EMI noise. With the intention of expediting the progress of related research, in this study, we investigated the feasibility of deep-learning-based EMI noise removal in PAE image processing. In particular, we selected four fully convolutional neural network architectures, U-Net, SegNet, FCN-16s, and FCN-8s, and observed that a modified U-Net architecture outperformed the others in EMI noise removal. Classical filter methods were also compared to confirm the superiority of the deep-learning-based approach. With the U-Net architecture, we successfully produced a denoised 3D vasculature map that could even depict the mesh-like capillary networks distributed in the wall of a rat colorectum. As the development of low-cost laser diode or LED-based photoacoustic tomography (PAT) systems is now emerging as an important topic in PAT, we expect that the presented AI strategy for EMI noise removal could be broadly applicable to many areas of PAT in which hardware-based prevention is limited and EMI noise therefore appears more prominently due to poor SNR.


Subject(s)
Deep Learning , Algorithms , Animals , Electromagnetic Phenomena , Endoscopy , Image Processing, Computer-Assisted/methods , Rats
9.
Sensors (Basel) ; 21(16)2021 Aug 10.
Article in English | MEDLINE | ID: mdl-34450832

ABSTRACT

Complementary metal-oxide-semiconductor (CMOS) image sensors can cause noise in images collected or transmitted in unfavorable environments, especially low-illumination scenarios. Numerous approaches have been developed to solve the problem of image noise removal. However, producing natural and high-quality denoised images remains a crucial challenge. To meet this challenge, we introduce a novel approach for image denoising with the following three main contributions. First, we devise a deep image prior-based module that can produce a noise-reduced image as well as a contrast-enhanced denoised one from a noisy input image. Second, the produced images are passed through a proposed image fusion (IF) module based on Laplacian pyramid decomposition to combine them and prevent noise amplification and color shift. Finally, we introduce a progressive refinement (PR) module, which adopts the summed-area tables to take advantage of spatially correlated information for edge and image quality enhancement. Qualitative and quantitative evaluations demonstrate the efficiency, superiority, and robustness of our proposed method.
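The summed-area (integral) table underlying the progressive refinement module is standard and easy to sketch (a generic version, not the authors' code):

```python
import numpy as np

def summed_area_table(img):
    # sat[r, c] = sum of img[0..r, 0..c]
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(sat, r0, c0, r1, c1):
    """Sum over the inclusive box [r0..r1] x [c0..c1] in O(1) lookups."""
    total = sat[r1, c1]
    if r0 > 0:
        total -= sat[r0 - 1, c1]
    if c0 > 0:
        total -= sat[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += sat[r0 - 1, c0 - 1]
    return total

img = np.arange(16.0).reshape(4, 4)
sat = summed_area_table(img)
```

Once the table is built, any box average used for spatially correlated refinement costs four lookups regardless of window size, which is what makes the module cheap.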


Subject(s)
Algorithms , Image Enhancement , Signal-To-Noise Ratio
10.
Sensors (Basel) ; 21(24)2021 Dec 07.
Article in English | MEDLINE | ID: mdl-34960263

ABSTRACT

Today's wearable medical devices are becoming popular because of their price and ease of use. Most wearable medical devices allow users to continuously collect and check their health data, such as electrocardiograms (ECG). Therefore, many of these devices have been used to monitor patients with potential heart pathology as they perform their daily activities. However, one major challenge of collecting heart data using mobile ECG is baseline wander and motion artifacts created by the patient's daily activities, resulting in false diagnoses. This paper proposes a new algorithm that automatically removes the baseline wander and suppresses most motion artifacts in mobile ECG recordings. This algorithm clearly shows a significant improvement compared to the conventional noise removal method. Two signal quality metrics are used to compare a reference ECG with its noisy version: correlation coefficients and mean squared error. For both metrics, the experimental results demonstrate that the noisy signal filtered by our algorithm is improved by a factor of ten.
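The paper's algorithm is not described in enough detail to reproduce, but a conventional baseline-wander removal of the kind it is compared against can be sketched with a wide moving median (the window size and test signal are arbitrary assumptions):

```python
import numpy as np

def remove_baseline(ecg, win=101):
    """Estimate the slow baseline with a moving median and subtract it.
    The median ignores sparse QRS-like spikes while tracking slow drift."""
    half = win // 2
    padded = np.pad(ecg, half, mode="edge")
    baseline = np.array([np.median(padded[i:i + win])
                         for i in range(ecg.size)])
    return ecg - baseline

t = np.linspace(0, 4, 1024)
wander = 0.5 * np.sin(2 * np.pi * 0.25 * t)   # slow respiratory-like drift
spikes = np.zeros_like(t)
spikes[64::128] = 1.0                          # crude stand-ins for QRS peaks
out = remove_baseline(spikes + wander)
```

Because at most one spike falls inside any window, the median follows the drift and the spikes survive subtraction nearly unchanged.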


Subject(s)
Artifacts , Signal Processing, Computer-Assisted , Algorithms , Electrocardiography , Electrocardiography, Ambulatory , Humans , Motion (Physics)
11.
Sensors (Basel) ; 21(17)2021 Aug 24.
Article in English | MEDLINE | ID: mdl-34502575

ABSTRACT

Since remote sensing images are one of the main sources of required information, their quality becomes particularly important. Nevertheless, noise often inevitably exists in the image, and targets are usually blurred during acquisition by the imaging system, resulting in degraded image quality. In this paper, a novel preprocessing algorithm is proposed to simultaneously smooth noise and enhance edges, which can improve the visual quality of remote sensing images. It consists of an improved adaptive spatial filter, a weighted filter integrating the functions of both noise removal and edge sharpening. Its processing parameters are flexible and adjustable for different images. The experimental results confirm that the proposed method outperforms existing spatial algorithms both visually and quantitatively. It can play an important role in the remote sensing field by helping to extract more information about targets of interest.
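The abstract does not give the weighting rule, so the sketch below is a hypothetical weighted filter that blends smoothing and sharpening from a local high-frequency residual, only to illustrate the idea of combining noise removal with edge sharpening in one pass:

```python
import numpy as np

def adaptive_filter(img, alpha=0.8):
    """Flat regions get the box-smoothed value; pixels with a strong
    local residual get an unsharp-masking boost instead."""
    k = np.ones((3, 3)) / 9.0
    pad = np.pad(img, 1, mode="edge")
    # 3x3 box smoothing via explicit windows (no SciPy dependency)
    win = np.lib.stride_tricks.sliding_window_view(pad, (3, 3))
    smooth = (win * k).sum(axis=(2, 3))
    detail = img - smooth                                 # high-frequency residual
    w = np.abs(detail) / (np.abs(detail).max() + 1e-12)   # edge weight in [0, 1]
    # w ~ 0 (flat) -> smoothed output; w ~ 1 (edge) -> sharpened output
    return smooth + (1 + alpha) * w * detail
```

On a step edge this overshoots on both sides (sharpening) while constant regions pass through untouched; `alpha` and the weight rule are made-up parameters standing in for the paper's adaptive ones.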


Subject(s)
Algorithms , Remote Sensing Technology , Humans
12.
Zhongguo Yi Liao Qi Xie Za Zhi ; 45(5): 473-478, 2021 Sep 30.
Article in Chinese | MEDLINE | ID: mdl-34628755

ABSTRACT

We developed a portable non-specific low back pain measurement system, EasiLBP, and evaluated its performance in collecting EMG signals. During the wearer's movement, without the assistance of a doctor, the collection of EMG signals by portable devices faces problems such as large noise interference, difficulty in accurately calibrating the start and end points of the action interval, and imbalanced samples for feature recognition. To address these problems, we propose a small-group-based noise removal method, a dynamic dual-threshold method for automatically identifying the start and end points of the motion interval, and a sampling method to balance group samples, respectively. The portable device and a medical EMG acquisition system (Thought Technology FlexComp Infiniti 10) were used to perform EMG measurements on 15 patients with non-specific low back pain and 15 normal controls. Clinical experiments and statistical analysis show that the portable EMG acquisition system reveals significant differences in EMG signal characteristics between normal controls and non-specific low back pain patients, and that it shows good measurement consistency and accuracy compared with the medical EMG acquisition equipment.


Subject(s)
Low Back Pain , Electromyography , Humans , Motion (Physics) , Movement , Pain Measurement
13.
J Sep Sci ; 43(9-10): 1998-2010, 2020 May.
Article in English | MEDLINE | ID: mdl-32108426

ABSTRACT

Wavelet transform is a versatile time-frequency analysis technique, which allows localization of useful signals in time or space and separates them from noise. The detector output from any analytical instrument is mathematically equivalent to a digital image. Signals obtained in chemical separations that vary in time (e.g., high-performance liquid chromatography) or space (e.g., planar chromatography) are amenable to wavelet analysis. This article gives an overview of wavelet analysis, and graphically explains all the relevant concepts. Continuous wavelet transform and discrete wavelet transform concepts are pictorially explained along with their chromatographic applications. An example is shown for qualitative peak overlap detection in a noisy chromatogram using continuous wavelet transform. The concept of signal decomposition, denoising, and then signal reconstruction is graphically discussed for discrete wavelet transform. All the digital filters in chromatographic instruments used today potentially broaden and distort narrow peaks. Finally, a low signal-to-noise ratio chromatogram is denoised using the procedure. Significant gains (>tenfold) in signal-to-noise ratio are shown with wavelet analysis. Peaks that were not initially visible were recovered with good accuracy. Since discrete wavelet transform denoising analysis applies to any detector used in separation science, researchers should strongly consider using wavelets for their research.
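The decompose-threshold-reconstruct cycle described here can be sketched with a one-level Haar transform (real chromatographic work would use more levels and a smoother wavelet, e.g., via PyWavelets; the peak and noise below are synthetic):

```python
import numpy as np

def haar_denoise(y, thresh):
    """One-level orthonormal Haar DWT, soft-threshold the details,
    then invert. len(y) must be even."""
    a = (y[0::2] + y[1::2]) / np.sqrt(2)                 # approximation coeffs
    d = (y[0::2] - y[1::2]) / np.sqrt(2)                 # detail coeffs
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0)   # soft threshold
    out = np.empty_like(y)
    out[0::2] = (a + d) / np.sqrt(2)                     # inverse transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 256)
peak = np.exp(-x**2 / 0.02)                  # Gaussian "chromatographic peak"
noisy = peak + 0.05 * rng.standard_normal(256)
denoised = haar_denoise(noisy, thresh=0.1)
```

A smooth peak concentrates in the approximation coefficients while white noise spreads evenly, so zeroing small detail coefficients removes noise with little peak distortion; with `thresh=0` the transform reconstructs the input exactly.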

14.
Sensors (Basel) ; 20(19)2020 Sep 28.
Article in English | MEDLINE | ID: mdl-32998346

ABSTRACT

Short-time (sliding) transform based on discrete Hartley transform (DHT) is often used to estimate the power spectrum of a quasi-stationary process such as speech, audio, radar, communication, and biomedical signals. Sliding transform calculates the transform coefficients of the signal in a fixed-size moving window. In order to speed up the spectral analysis of signals with slowly changing spectra, the window can slide along the signal with a step of more than one. A fast algorithm for computing the discrete Hartley transform in windows that are equidistant from each other is proposed. The algorithm is based on a second-order recursive relation between subsequent equidistant local transform spectra. The performance of the proposed algorithm with respect to computational complexity is compared with the performance of known fast Hartley transform and sliding algorithms.
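For reference, the brute-force computation that the proposed recursion accelerates, a direct DHT evaluated in hop-spaced windows, looks like this (pure-Python sketch; the window and hop sizes are arbitrary):

```python
import math

def dht(window):
    """Direct discrete Hartley transform: H[k] = sum x[n] * cas(2*pi*k*n/N),
    where cas(x) = cos(x) + sin(x)."""
    N = len(window)
    return [sum(window[n] * (math.cos(2 * math.pi * k * n / N)
                             + math.sin(2 * math.pi * k * n / N))
                for n in range(N))
            for k in range(N)]

def sliding_dht(signal, win, hop):
    # one local Hartley spectrum per hop-spaced window position;
    # the paper's algorithm relates consecutive spectra recursively
    # instead of recomputing each one from scratch
    return [dht(signal[s:s + win])
            for s in range(0, len(signal) - win + 1, hop)]

sig = [math.sin(2 * math.pi * 3 * n / 16) for n in range(64)]
spectra = sliding_dht(sig, win=16, hop=4)
```

Each window here costs O(N^2); the second-order recursion between equidistant spectra described in the abstract reduces the per-position cost substantially for slowly varying signals.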

15.
J Digit Imaging ; 33(2): 504-515, 2020 04.
Article in English | MEDLINE | ID: mdl-31515756

ABSTRACT

Low-dose CT denoising is a challenging task that has been studied by many researchers. Some studies have used deep neural networks to improve the quality of low-dose CT images and achieved fruitful results. In this paper, we propose a deep neural network that uses dilated convolutions with different dilation rates instead of standard convolutions, which helps capture more contextual information in fewer layers. We also employ residual learning by creating shortcut connections to transmit image information from early layers to later ones. To further improve the performance of the network, we introduce a non-trainable edge detection layer that extracts edges in the horizontal, vertical, and diagonal directions. Finally, we demonstrate that optimizing the network with a combination of mean-square error loss and perceptual loss preserves many structural details in the CT image. This objective function does not suffer from the over-smoothing and blurring effects caused by per-pixel loss or the grid-like artifacts resulting from perceptual loss. The experiments show that each modification improves the outcome while minimally changing the complexity of the network.


Subject(s)
Deep Learning , Artifacts , Computer Systems , Humans , Neural Networks, Computer , Tomography, X-Ray Computed
16.
Entropy (Basel) ; 22(6)2020 Jun 04.
Article in English | MEDLINE | ID: mdl-33286393

ABSTRACT

Entropy, the key factor of information theory, is one of the most important research areas in computer science [...].

17.
Pattern Recognit ; 90: 134-146, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31327876

ABSTRACT

In many applications, image deblurring is a prerequisite to improve the sharpness of an image before it can be further processed. Iterative methods are widely used for deblurring images, but care must be taken to ensure that the iterative process is robust, meaning that the process does not diverge and reaches the solution reasonably fast, two goals that sometimes compete with each other. In practice, it remains challenging to choose parameters that make the iterative process robust. We propose a new approach consisting of relaxed initialization and pixel-wise updates of the step size for iterative methods to achieve robustness. The first novel design of the approach is to modify the initialization of existing iterative methods to stop a noise term from being propagated throughout the iterative process. The second is the introduction of a vectorized step size that is adaptively determined through the iterations to achieve higher stability and accuracy in the whole iterative process. The vectorized step size updates each pixel of an image individually, instead of updating all pixels by the same factor. In this work, we implemented the above designs based on the Landweber method to test and demonstrate the new approach. Test results showed that the new approach can deblur images from noisy observations and achieve a low mean squared error with more robust performance.
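A toy version of the Landweber iteration with a per-pixel step vector can be sketched on a 1-D deblurring problem (the constant step vector below is a simplified stand-in for the paper's adaptive rule, which the abstract describes only qualitatively):

```python
import numpy as np

def landweber(A, b, tau, iters=500):
    """Landweber iteration x <- x + tau * A^T (b - A x), i.e. gradient
    descent on ||Ax - b||^2; tau may be a per-pixel vector."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + tau * (A.T @ (b - A @ x))
    return x

n = 32
# simple symmetric 3-tap blur operator as a tridiagonal matrix
A = np.eye(n) * 0.5 + np.eye(n, k=1) * 0.25 + np.eye(n, k=-1) * 0.25
x_true = np.zeros(n)
x_true[10] = 1.0
x_true[20] = 1.0
b = A @ x_true                      # blurred observation
tau_vec = np.full(n, 0.9)           # uniform here; the paper varies it per pixel
x_hat = landweber(A, b, tau_vec)
```

Convergence requires `tau < 2 / lambda_max(A^T A)` in each component, which holds here since the blur operator's eigenvalues lie in (0, 1); the vectorized step lets well-conditioned pixels take larger steps than ill-conditioned ones.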

18.
Sensors (Basel) ; 19(14)2019 Jul 18.
Article in English | MEDLINE | ID: mdl-31323876

ABSTRACT

Multiplicative speckle noise removal is a challenging task in image processing. Motivated by the performance of anisotropic diffusion in additive noise removal and by the structure of the standard deviation of a compressed speckle-noisy image, we address this problem with anisotropic diffusion theory. First, an anisotropic diffusion model based on image statistics, including information on the gradient of the image, gray levels, and the noise standard deviation of the image, is proposed. Although the proposed model can effectively remove multiplicative speckle noise, it does not consider the noise at edges during the denoising process. Hence, we decompose the divergence term so that diffusion at an edge occurs along the boundary rather than perpendicular to it, and improve the model to meet our requirements. Second, iteration stopping criteria based on kurtosis and correlation are proposed, in view of the lack of ground truth in real-image experiments. The optimal values of the parameters in the model are obtained by learning. To improve the denoising effect, post-processing is performed. Finally, simulation results show that the proposed model can effectively remove speckle noise and retain minute details in real ultrasound and RGB color images.
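The classical Perona-Malik scheme that such models build on can be sketched in a few lines (this is the standard additive-noise diffusion, not the paper's speckle-specific model; the parameters and the periodic border handling via `np.roll` are simplifications):

```python
import numpy as np

def pm_step(u, kappa=0.5, dt=0.2):
    """One explicit Perona-Malik step: diffuse strongly across small
    differences (noise) and weakly across large ones (edges)."""
    dn = np.roll(u, 1, 0) - u
    ds = np.roll(u, -1, 0) - u
    de = np.roll(u, 1, 1) - u
    dw = np.roll(u, -1, 1) - u
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

rng = np.random.default_rng(3)
img = np.zeros((32, 32))
img[:, 16:] = 1.0                             # clean step edge
noisy = img + 0.1 * rng.standard_normal(img.shape)
u = noisy.copy()
for _ in range(20):
    u = pm_step(u)
```

The edge-stopping function is what the paper replaces: its speckle model folds gray levels and the noise standard deviation into `g`, and the decomposed divergence term steers the remaining diffusion along, rather than across, boundaries.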

19.
Sensors (Basel) ; 20(1)2019 Dec 28.
Article in English | MEDLINE | ID: mdl-31905692

ABSTRACT

Speech is the most significant mode of communication among human beings and a potential method for human-computer interaction (HCI) using a microphone sensor. Quantifiable emotion recognition from speech signals captured by these sensors is an emerging area of research in HCI, with applications such as human-robot interaction, virtual reality, behavior assessment, healthcare, and emergency call centers, where the speaker's emotional state must be determined from an individual's speech. In this paper, we present major contributions for (i) increasing the accuracy of speech emotion recognition (SER) compared to the state of the art and (ii) reducing the computational complexity of the presented SER model. We propose an artificial-intelligence-assisted deep stride convolutional neural network (DSCNN) architecture using the plain-nets strategy to learn salient and discriminative features from spectrograms of speech signals that are enhanced in prior steps to perform better. Local hidden patterns are learned in convolutional layers with special strides to down-sample the feature maps rather than pooling layers, and global discriminative features are learned in fully connected layers. A SoftMax classifier is used for the classification of emotions in speech. The proposed technique is evaluated on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) and Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) datasets, improving accuracy by 7.85% and 4.5%, respectively, with the model size reduced by 34.5 MB. This proves the effectiveness and significance of the proposed SER technique and reveals its applicability in real-world applications.


Subject(s)
Auditory Perception/physiology , Emotions/physiology , Neural Networks, Computer , Pattern Recognition, Automated , Signal Processing, Computer-Assisted , Speech/physiology , Databases as Topic , Humans , Sound Spectrography
20.
Sensors (Basel) ; 18(4)2018 Apr 02.
Article in English | MEDLINE | ID: mdl-29614821

ABSTRACT

The aim of this study is to investigate motion noise removal techniques using a two-accelerometer sensor system and various placements of the sensors during gentle movement and walking of the patients. A Wi-Fi-based data acquisition system and a framework in Matlab were developed to collect and process data while the subjects are in motion. The tests include eight volunteers who have no record of heart disease. The walking and running data from the subjects were analyzed to find the minimal-noise bandwidth of the SCG signal. This bandwidth is used to design filters for the motion noise removal techniques and peak signal detection. There are two main techniques for combining signals from the two sensors to mitigate the motion artifact: analog processing and digital processing. The analog processing comprises analog circuits performing adding or subtracting functions and a bandpass filter to remove artifact noise before entering the data acquisition system. The digital processing processes all the data using combinations of total acceleration and z-axis-only acceleration. The two techniques were tested on three placements of the accelerometer sensors, including horizontal, vertical, and diagonal, during gentle motion and walking. In general, total acceleration and z-axis acceleration are the best techniques for gentle motion on all sensor placements, improving the average systolic signal-to-noise ratio (SNR) around 2 times and the average diastolic SNR around 3 times compared to traditional methods using only one accelerometer. With walking motion, the ADDER and z-axis acceleration are the best techniques for all placements of the sensors on the body, enhancing the average systolic SNR about 7 times and the average diastolic SNR about 11 times compared to the one-accelerometer method. Among the sensor placements, the horizontal placement of the sensors performed outstandingly compared with the other positions across all motions.
