Results 1 - 20 of 79
1.
Sensors (Basel) ; 24(18)2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39338621

ABSTRACT

Stereo high dynamic range imaging (SHDRI) offers a more temporally stable route to high dynamic range (HDR) imaging from low dynamic range inputs than exposure bracketing, and avoids the accuracy loss of single-image HDR solutions. However, few existing solutions take advantage of the different (asymmetric) lenses commonly found on modern smartphones to achieve SHDRI. This paper presents a method that achieves single-shot asymmetric HDR fusion via a reference-based deep learning approach. Results demonstrate a system that is more robust to aperture and image signal processing pipeline differences than existing solutions.

2.
Sensors (Basel) ; 24(1)2024 Jan 04.
Article in English | MEDLINE | ID: mdl-38203161

ABSTRACT

Recently, advancements in image sensor technology have paved the way for the proliferation of high-dynamic-range television (HDRTV). Consequently, there has been a surge in demand for conversion of standard-dynamic-range television (SDRTV) to HDRTV, especially given the dearth of native HDRTV content. However, since SDRTV often carries video encoding artifacts, SDRTV-to-HDRTV conversion tends to amplify them, reducing the visual quality of the output video. To solve this problem, this paper proposes a multi-frame content-aware mapping network (MCMN) that improves conversion from low-quality SDRTV to high-quality HDRTV. Specifically, we exploit the temporal-spatial characteristics of videos to design a content-aware temporal-spatial alignment module for the initial alignment of video features. In the feature prior extraction stage, we propose a hybrid prior extraction module covering cross-temporal priors, local spatial priors, and global spatial priors. Finally, we design a temporal-spatial transformation module to generate an improved tone mapping result. From time to space and from local to global, our method makes full use of multi-frame information to perform inverse tone mapping of single-frame images while also better repairing coding artifacts.

3.
Sensors (Basel) ; 23(12)2023 Jun 06.
Article in English | MEDLINE | ID: mdl-37420537

ABSTRACT

In computational photography, high dynamic range (HDR) imaging refers to the family of techniques used to recover a wider range of intensity values than the limited range provided by standard sensors. Classical techniques acquire the scene at varying exposures to compensate for saturated and underexposed regions, followed by a non-linear compression of intensity values called tone mapping. Recently, there has been growing interest in estimating HDR images from a single exposure. Some methods exploit data-driven models trained to estimate values outside the camera's visible intensity levels. Others use polarimetric cameras to reconstruct HDR information without exposure bracketing. In this paper, we present a novel HDR reconstruction method that employs a single PFA (polarimetric filter array) camera with an additional external polarizer to increase the scene's dynamic range across the acquired channels and to mimic different exposures. Our contribution is a pipeline that effectively combines standard bracketing-based HDR algorithms with data-driven solutions designed to work with polarimetric images. In this regard, we present a novel CNN (convolutional neural network) model that exploits the underlying mosaiced pattern of the PFA, in combination with the external polarizer, to estimate the original scene properties, and a second model designed to further improve the final tone mapping step. The combination of these techniques lets us take advantage of the light attenuation given by the filters while producing an accurate reconstruction. We present an extensive experimental section in which we validate the proposed method on both synthetic and real-world datasets specifically acquired for the task. Quantitative and qualitative results show the effectiveness of the approach compared to state-of-the-art methods. In particular, our technique achieves a PSNR (peak signal-to-noise ratio) of 23 dB over the whole test set, 18% better than the second-best alternative.
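The PSNR figure quoted above is the standard peak signal-to-noise ratio. As a reference for how the metric is computed (our illustration, not the authors' code; intensities assumed normalised to a peak of 1.0), a minimal sketch:

```python
import math

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB between two equal-size images,
    given here as flat lists of intensity values."""
    mse = sum((r - e) ** 2 for r, e in zip(reference, estimate)) / len(reference)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)
```

A uniform error of 0.1 on a unit-peak image, for instance, corresponds to 20 dB.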


Subjects
Data Compression, Algorithms, Neural Networks (Computer), Photography, Signal-to-Noise Ratio
4.
Sensors (Basel) ; 23(12)2023 Jun 20.
Article in English | MEDLINE | ID: mdl-37420931

ABSTRACT

Intelligent driver assistance systems are becoming increasingly popular in modern passenger vehicles. A crucial component of intelligent vehicles is the ability to detect vulnerable road users (VRUs) for an early and safe response. However, standard imaging sensors perform poorly in conditions of strong illumination contrast, such as approaching a tunnel or at night, due to their dynamic range limitations. In this paper, we focus on the use of high-dynamic-range (HDR) imaging sensors in vehicle perception systems and the subsequent need for tone mapping of the acquired data into a standard 8-bit representation. To our knowledge, no previous studies have evaluated the impact of tone mapping on object detection performance. We investigate the potential for optimizing HDR tone mapping to achieve a natural image appearance while facilitating object detection by state-of-the-art detectors designed for standard dynamic range (SDR) images. Our proposed approach relies on a lightweight convolutional neural network (CNN) that tone maps HDR video frames into a standard 8-bit representation. We introduce a novel training approach called detection-informed tone mapping (DI-TM) and evaluate its effectiveness and robustness in various scene conditions, as well as its performance relative to an existing state-of-the-art tone mapping method. The results show that DI-TM achieves the best detection performance metrics in challenging dynamic range conditions, while both methods perform well in typical, non-challenging conditions. In challenging conditions, our method improves the detection F2 score by 13%; compared to SDR images, the increase in F2 score is 49%.
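The F2 score referenced above is the recall-weighted member of the F-beta family; a minimal sketch of the metric (our illustration, not the authors' evaluation code):

```python
def f_beta(precision, recall, beta=2.0):
    """F-beta score: beta > 1 weights recall more heavily than precision,
    which is why F2 is common in safety-critical detection tasks."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

With beta=1 this reduces to the familiar F1 harmonic mean of precision and recall.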

5.
Sensors (Basel) ; 23(4)2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36850683

ABSTRACT

In conventional optical remote sensors there is a contradiction between the two requirements of high radiometric sensitivity and high dynamic range. The problem can be solved with pixel-level adaptive-gain technology, which integrates multilevel integration capacitors into the photodetector pixels and performs multiple nondestructive read-outs of the target charge within a single exposure. Each pixel then has four gains: high gain (HG), medium gain (MG), low gain (LG), and ultralow gain (ULG). This study analyzes the requirements for laboratory radiometric calibration, and we designed a laboratory calibration scheme for this distinctive pixel-level adaptive-gain imaging method. We obtained calibration coefficients for general application using a single gain output, as well as the dynamic range switching points and the proportional conversion relationships between adjacent gains for the adaptive-gain output. These results underpin on-orbit quantitative applications of spectrometers that adopt pixel-level automatic gain adaptation.
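Conceptually, the adaptive-gain read-out picks the most sensitive unsaturated gain for each pixel and maps it onto a common radiometric scale through the calibrated inter-gain ratios. The gain ratios, saturation threshold, and 12-bit full scale below are hypothetical placeholders for illustration, not values from the paper:

```python
# Hypothetical inter-gain ratios relative to high gain (HG); in practice
# these come from the laboratory calibration described in the abstract.
GAIN_RATIO = {"HG": 1.0, "MG": 4.0, "LG": 16.0, "ULG": 64.0}
FULL_SCALE = 4095  # assumed 12-bit read-out

def hg_equivalent(readouts, sat_level=0.95 * FULL_SCALE):
    """Select the most sensitive unsaturated read-out for a pixel and
    scale it to a common HG-referenced signal level."""
    for gain in ("HG", "MG", "LG", "ULG"):
        if readouts[gain] < sat_level:
            return readouts[gain] * GAIN_RATIO[gain]
    return readouts["ULG"] * GAIN_RATIO["ULG"]  # everything saturated
```

A pixel whose HG channel clips thus falls back to MG, LG, or ULG while staying on one radiometric scale.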

6.
Sensors (Basel) ; 23(9)2023 Apr 27.
Article in English | MEDLINE | ID: mdl-37177520

ABSTRACT

Restorers and curators in museums sometimes find it difficult to accurately segment areas of paintings that have been contaminated with other pigments or that need restoration, and work on the painting must be carried out with the minimum possible damage. It is therefore necessary to develop measurement systems and methods that facilitate this task in the least invasive way possible. The aim of this study was to obtain high-dynamic-range (HDR) spectral reflectance with high spatial resolution for Dalí's painting Two Figures (1926), in order to segment a small area of black and white pigment affected by contact transfer of reddish pigment from another painting. Using the Hypermatrixcam, an HDR spectral reflectance measurement system developed by this research team, an HDR multispectral cube of 12 images was obtained for the 470-690 nm band in steps of 20 nm. With the spectral reflectance values of the HDR cube, the colour of the affected area was studied by calculating the a*b* components in the CIELAB system. These a*b* values were then used to define two methods for segmenting the exact areas with transferred reddish pigment. The studied area of the painting was originally black, and the contamination with reddish pigment occupied from 13.87% to 32% of the total area depending on the selected method. These different results arise because the lower limit segments on pure pigment while the upper limit treats red as the exclusion of non-black pigment. Over- and under-segmentation is a common problem in the literature on pigment selection. In this application, since the red pigment is not original and should be removed, curators will choose the method that selects the largest red area.
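The a*b* components used for the segmentation follow the standard CIELAB transform from CIE XYZ. A minimal sketch (D65 white point; the a* threshold for "reddish" is a hypothetical illustration, not the paper's criterion):

```python
def xyz_to_ab(x, y, z, white=(95.047, 100.0, 108.883)):
    """CIELAB a* and b* from CIE XYZ, using the standard piecewise f()."""
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip((x, y, z), white))
    return 500 * (fx - fy), 200 * (fy - fz)

def is_reddish(a_star, a_min=15.0):
    """Hypothetical rule: flag a pixel as transferred red pigment when a*
    exceeds a threshold (positive a* is red, negative is green)."""
    return a_star > a_min
```

A neutral (achromatic) pixel maps to a* = b* = 0 and is never flagged, while strongly red pigment pushes a* well positive.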

7.
Sensors (Basel) ; 23(22)2023 Nov 09.
Article in English | MEDLINE | ID: mdl-38005458

ABSTRACT

Infrared image sensing technology has received widespread attention thanks to its robustness to environmental conditions, good target recognition, and high anti-interference ability. However, as the integration density of infrared focal planes improves, the dynamic range of the photoelectric system becomes difficult to improve; that is, the restrictive trade-off between noise and full well capacity becomes particularly prominent. Since the capacitance of an inversion MOS capacitor changes adaptively with the gate-source voltage, using it as the capacitor in the infrared pixel circuit can resolve the contradiction between noise under low illumination and full well capacity under high illumination. To this end, a high-dynamic-range pixel structure based on adaptive capacitance is proposed, so that the capacitance of the infrared image sensor automatically changes from 6.5 fF to 37.5 fF as the light intensity increases. Based on a 55 nm CMOS process, the performance parameters of an infrared image sensor with a 12,288 × 12,288 pixel array are studied. The results show that a small 5.5 µm × 5.5 µm pixel has a large full well capacity of 1.31 Me- and a variable conversion gain, with a noise of less than 0.43 e- and a dynamic range of more than 130 dB.
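The quoted dynamic range is consistent with the usual definition as the ratio of full well capacity to noise floor, expressed in decibels; a quick check of the abstract's own figures:

```python
import math

def dynamic_range_db(full_well_e, noise_e):
    """Sensor dynamic range in dB: 20*log10(full well / temporal noise)."""
    return 20 * math.log10(full_well_e / noise_e)

# 1.31 Me- full well and 0.43 e- noise give roughly 130 dB:
dr = dynamic_range_db(1.31e6, 0.43)
```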

8.
Sensors (Basel) ; 23(21)2023 Oct 29.
Article in English | MEDLINE | ID: mdl-37960502

ABSTRACT

Thin-film photodiodes (TFPD) monolithically integrated on Si Read-Out Integrated Circuitry (ROIC) are promising imaging platforms when beyond-silicon optoelectronic properties are required. Although TFPD device performance has improved significantly, pixel development has lagged in noise characteristics compared to Si-based image sensors. Here, a thin-film-based pinned photodiode (TF-PPD) structure is presented, showing reduced kTC noise and dark current, accompanied by a high conversion gain (CG). Indium-gallium-zinc oxide (IGZO) thin-film transistors and quantum dot photodiodes are integrated sequentially on the Si ROIC in a fully monolithic scheme, with a photogate (PG) introduced to achieve PPD operation. The PG brings not only low noise performance but also a high full well capacity (FWC), owing to the large capacitance of its metal-oxide-semiconductor (MOS) structure. Hence, the FWC of the pixel is boosted to 1.37 Me- with a 5 µm pixel pitch, 8.3 times larger than the FWC the TFPD junction capacitor can store. This large FWC, along with the inherent low-noise characteristics of the TF-PPD, leads to a three-digit dynamic range (DR) of 100.2 dB. Unlike a Si-based PG pixel, the dark current contribution from the depleted semiconductor interfaces is limited, thanks to the wide energy band gap of the IGZO channel material used in this work. We expect that this novel 4 T pixel architecture can accelerate the deployment of monolithic TFPD imaging technology, as it has for CMOS image sensors (CIS).

9.
Sensors (Basel) ; 23(6)2023 Mar 15.
Article in English | MEDLINE | ID: mdl-36991843

ABSTRACT

In high dynamic range scenes, fringe projection profilometry (FPP) may encounter fringe saturation, and the calculated phase is then corrupted by errors. This paper proposes a saturated fringe restoration method to solve this problem, taking the four-step phase shift as an example. Firstly, according to the saturation of the fringe group, the concepts of reliable area, shallow saturated area, and deep saturated area are introduced. Then, the parameter A, related to the reflectivity of the object, is calculated in the reliable area and interpolated into the shallow and deep saturated areas. In actual experiments, the shallow and deep saturated areas are not known a priori; however, morphological operations can be used to dilate and erode the reliable areas, producing cubic spline interpolation (CSI) areas and biharmonic spline interpolation (BSI) areas that roughly correspond to the shallow and deep saturated areas. After A is restored, it can be used as a known quantity to restore the saturated fringe from the unsaturated fringe at the same position; the remaining unrecoverable part of the fringe can be completed using CSI, and then the same part of the symmetrical fringe can be further restored. To further reduce the influence of nonlinear error, the Hilbert transform is also used in the phase calculation of the actual experiment. Simulation and experimental results validate that the proposed method obtains correct results without additional equipment or an increased number of projections, demonstrating its feasibility and robustness.
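For reference, in standard four-step phase shifting the fringes are shifted by π/2, so I_k = A + B·cos(φ + (k−1)π/2), and the wrapped phase follows from a four-quadrant arctangent while the reflectivity-related term A is simply the mean of the four samples. This is the textbook formula, not the paper's full restoration pipeline:

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four pi/2-shifted fringe intensities:
    I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi)."""
    return math.atan2(i4 - i2, i1 - i3)

def background_a(i1, i2, i3, i4):
    """The reflectivity-related parameter A is the mean of the samples,
    since the cosine terms cancel over a full period."""
    return (i1 + i2 + i3 + i4) / 4
```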

10.
Sensors (Basel) ; 23(20)2023 Oct 17.
Article in English | MEDLINE | ID: mdl-37896600

ABSTRACT

High dynamic range (HDR) imaging technology is increasingly used in automated driving systems (ADS) to improve the safety of traffic participants in scenes with strong differences in illumination. A combination of HDR video, i.e., video with details in all illumination regimes, and (HDR) object perception techniques that can deal with this variety in illumination is therefore highly desirable. Although progress has been made in both HDR imaging solutions and object detection algorithms in recent years, they have progressed independently of each other, leading to a situation in which object detection algorithms are typically designed, and constantly improved, to operate on 8-bit-per-channel content. This makes them poorly suited to HDR data processing, which natively encodes to a higher bit depth (12 or 16 bits per channel). In this paper, we present and evaluate two novel convolutional neural network (CNN) architectures that intelligently convert high-bit-depth HDR images into 8-bit images, optimizing reconstruction quality with a focus on ADS object detection quality. The first research novelty is jointly performing tone mapping and demosaicing while successfully suppressing noise and demosaicing artifacts: the first CNN performs tone mapping with noise suppression on a full-color HDR input, while the second performs joint demosaicing and tone mapping with noise suppression on a raw HDR input. The focus is to increase the detectability of traffic-related objects in the reconstructed 8-bit content while preserving the realism of the standard dynamic range (SDR) content in diverse conditions. The second research novelty is that, for the first time to the best of our knowledge, a thorough comparative analysis against state-of-the-art tone mapping and demosaicing methods is performed with respect to ADS object detection accuracy on traffic-related content that abounds with diverse, challenging (i.e., boundary-case) scenes. The evaluation results show that the two proposed networks outperform both SDR content and content obtained with state-of-the-art tone mapping and demosaicing algorithms in object detection accuracy and image quality.

11.
Sensors (Basel) ; 23(20)2023 Oct 11.
Artigo em Inglês | MEDLINE | ID: mdl-37896477

RESUMO

We present a 2D-stitched, 316 MP, 120 FPS, high dynamic range CMOS image sensor with 92 CML output ports operating at a cumulative data rate of 515 Gbit/s. The total die size is 9.92 cm × 8.31 cm, and the chip is fabricated in a 65 nm, 4-metal BSI process with an overall power consumption of 23 W. A 4.3 µm dual-gain pixel has high and low conversion gain full wells of 6600e- and 41,000e-, respectively, with a total high-gain temporal noise of 1.8e-, achieving a composite dynamic range of 87 dB.

12.
Sensors (Basel) ; 22(18)2022 Sep 17.
Article in English | MEDLINE | ID: mdl-36146397

ABSTRACT

High-dynamic-range (HDR) image reconstruction methods fuse multiple low-dynamic-range (LDR) images captured with different exposure values into a single HDR image. Recent CNN-based methods mostly perform local attention- or alignment-based fusion of multiple LDR images to create HDR content. Depending on a single attention mechanism or on alignment alone fails to compensate for ghosting artifacts, which can arise in the synthesized HDR images due to object motion or camera movement across the different LDR inputs. In this study, we propose a multi-scale attention-guided non-local network called MSANLnet for efficient HDR image reconstruction. To mitigate ghosting artifacts, MSANLnet performs implicit alignment of LDR image features with multi-scale spatial attention modules and then reconstructs pixel intensity values using long-range dependencies through non-local-means-based fusion. These modules adaptively select useful information that is not damaged by object movement or unfavorable lighting conditions for pixel fusion. Quantitative evaluations against several current state-of-the-art methods show that the proposed approach achieves higher performance than existing methods. Moreover, comparative visual results show the effectiveness of the proposed method in restoring saturated information from the original inputs and mitigating ghosting artifacts caused by large object movement. Ablation studies show the effectiveness of the proposed architectural choices and modules for efficient HDR reconstruction.

13.
Sensors (Basel) ; 22(13)2022 Jun 21.
Article in English | MEDLINE | ID: mdl-35808165

ABSTRACT

Commercial hyperspectral imaging systems typically use CCD or CMOS sensors. These sensors have a limited dynamic range and a non-linear response, which means that when evaluating an artwork under uncontrolled lighting conditions, with light and dark areas in the same scene, the hyperspectral images would contain underexposed or saturated areas at low or high exposure times, respectively. To overcome this problem, this article presents a hyperspectral image capture system consisting of a matrix of twelve spectral filters placed in twelve cameras which, after processing of the images, makes it possible to obtain a high dynamic range image and measure the spectral reflectance of the work of art being evaluated. We show the developed system and describe all of its components, the calibration processes, and the algorithm implemented to obtain the high dynamic range spectral reflectance measurement. To validate the system, high dynamic range spectral reflectance measurements of Labsphere's Spectralon Reflectance Standards were performed and compared with the same reflectance measurements obtained from low dynamic range images. High dynamic range hyperspectral imaging improves colorimetric accuracy and decreases the uncertainty of the spectral reflectance measurement compared with low dynamic range imaging.
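The principle behind fusing differently exposed captures into one high-dynamic-range estimate can be sketched per pixel as: keep only well-exposed readings, normalise each by its exposure time, and average. This is a simplified illustration of the general technique, not the article's calibrated algorithm:

```python
def hdr_fuse(values, exposure_times, floor=0.02, ceiling=0.98):
    """Naive per-pixel HDR estimate from normalised sensor values (0..1):
    discard under- and over-exposed readings, divide the rest by exposure
    time, and average the resulting radiance estimates."""
    usable = [v / t for v, t in zip(values, exposure_times) if floor < v < ceiling]
    return sum(usable) / len(usable) if usable else None
```

A reading that clips at a long exposure is simply dropped, so the radiance estimate falls back on the shorter exposures.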

14.
Sensors (Basel) ; 22(20)2022 Oct 16.
Article in English | MEDLINE | ID: mdl-36298202

ABSTRACT

Multi-exposure image fusion (MEF) methods for high dynamic range (HDR) imaging suffer from ghosting artifacts when dealing with moving objects in dynamic scenes. The state-of-the-art methods use optical flow to align low dynamic range (LDR) images before merging, introducing distortion into the aligned LDR images from inaccurate motion estimation due to large motion and occlusion. In place of pre-alignment, attention-based methods calculate the correlation between the reference LDR image and non-reference LDR images, thus excluding misaligned regions in LDR images. Nevertheless, they also exclude the saturated details at the same time. Taking advantage of both the alignment and attention-based methods, we propose an efficient Deep HDR Deghosting Fusion Network (DDFNet) guided by optical flow and image correlation attentions. Specifically, the DDFNet estimates the optical flow of the LDR images by a motion estimation module and encodes that optical flow as a flow feature. Additionally, it extracts correlation features between the reference LDR and other non-reference LDR images. The optical flow and correlation features are employed to adaptively combine information from the LDR inputs in an attention-based fusion module. Following the merging of features, a decoder composed of dense networks reconstructs the HDR image without ghosting. Experimental results indicate that the proposed DDFNet achieves state-of-the-art image fusion performance on different public datasets.


Subjects
Artifacts, Motion (Physics)
15.
Sensors (Basel) ; 22(21)2022 Nov 04.
Article in English | MEDLINE | ID: mdl-36366211

ABSTRACT

A high dynamic range (HDR) stereoscopic omnidirectional vision system can provide users with more realistic binocular and immersive perception, but the HDR stereoscopic omnidirectional image (HSOI) suffers distortions during encoding and visualization, making its quality evaluation more challenging. To address this, this paper proposes a client-oriented blind HSOI quality metric based on visual perception. The proposed metric consists mainly of a monocular perception module (MPM) and a binocular perception module (BPM), which combine monocular/binocular, omnidirectional, and HDR/tone-mapping perception. The MPM extracts features from three aspects: global color distortion, symmetric/asymmetric distortion, and scene distortion. In the BPM, the binocular fusion map and binocular difference map are generated by joint image filtering. Brightness segmentation is then performed on the binocular fusion image, and distinctive features are extracted from the segmented high-, low-, and middle-brightness regions. For the binocular difference map, natural scene statistical features are extracted by multi-coefficient derivative maps. Finally, feature screening is used to remove redundancy between the extracted features. Experimental results on the HSOID database show that the proposed metric generally outperforms representative quality metrics and is more consistent with subjective perception.


Subjects
Depth Perception, Binocular Vision, Humans, Visual Perception
16.
Sensors (Basel) ; 22(11)2022 Jun 02.
Article in English | MEDLINE | ID: mdl-35684878

ABSTRACT

With the development of superframe high-dynamic-range infrared imaging technology, which extends the dynamic range of thermal imaging systems, a key issue is how to choose different integration times to obtain an HDR fusion image that contains more information. To address the lack of objective evaluation methods for selecting superframe infrared images, this paper proposes a multi-integration-time adaptive method consisting of the following steps: image evaluation indicators are used to obtain the best globally exposed image (the optimal integration time); images are segmented by region growing to obtain the ambient- and high-temperature regions, selecting for each region the locally optimal image whose grayscale is closest to the medium grayscale of the IR imaging system (the lowest and highest integration times, respectively); finally, the three images above are fused and enhanced to achieve HDR infrared imaging. Comparing this method with existing integration time selection methods and applying it to several typical fusion methods, subjective and objective evaluation shows that the proposed method has clear advantages over existing algorithms: it optimally selects images from different integration-time series to form the combination containing the fullest image information, expanding the dynamic range of the IR imaging system.


Subjects
Algorithms, Computer-Assisted Image Interpretation, Computer-Assisted Image Interpretation/methods
17.
Sensors (Basel) ; 22(20)2022 Oct 11.
Article in English | MEDLINE | ID: mdl-36298068

ABSTRACT

Ubiquitous computing has enabled the proliferation of low-cost solutions for capturing information about the user's environment or biometric parameters. In this sense, the do-it-yourself (DIY) approach, whether building new low-cost systems or verifying their correspondence with professional devices, broadens the range of possible applications. Following this trend, the authors present a complete, replicable DIY procedure to evaluate the performance of a low-cost video luminance meter consisting of a Raspberry Pi and a camera module. The method initially consists of designing and developing an LED panel and a light cube that serve as reference illuminance sources. The luminance distribution across the two reference light sources is determined using a Konica Minolta luminance meter, making it possible to identify, for each light source, an area with an almost uniform luminance value. By applying a frame that covers part of the panel and shows only the area with nearly homogeneous luminance values, and by operating the two systems in a dark space, with the low-cost video luminance meter mounted alongside a professional reference camera photometer (LMK mobile air), the discrepancy in luminance values between the low-cost and professional systems can be checked when aiming at different homogeneous light sources. In doing so, we primarily account for peripheral shading, better known as the vignetting effect. We then adjust the correction factor S of the Radiance Pcomb function to better match the luminance values of the low-cost system to the professional device, and we introduce an algorithm to differentiate the S factor depending on the light source. In general, the DIY calibration process described in the paper is time-consuming. However, subsequent applications in various real-life scenarios verify the satisfactory performance of the low-cost system in terms of luminance mapping and glare evaluation compared to a professional device.
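The role of the correction factor S can be illustrated as a per-source scale that maps the low-cost readings onto the professional photometer's luminance scale. This is a simplified linear model of the calibration, with made-up luminance values:

```python
def correction_factor(reference_luminance, lowcost_luminance):
    """S such that S * (low-cost reading) reproduces the reference
    photometer's reading for the same homogeneous source."""
    return reference_luminance / lowcost_luminance

def corrected_luminance(lowcost_reading, s):
    """Apply the per-source factor S to a low-cost luminance reading."""
    return s * lowcost_reading
```

Differentiating S per light source, as the paper does, amounts to keeping one such factor for each source type.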


Subjects
Photometry, Ocular Vision, Humans
18.
Sensors (Basel) ; 21(12)2021 Jun 11.
Article in English | MEDLINE | ID: mdl-34208062

ABSTRACT

Inverse Tone Mapping (ITM) methods attempt to reconstruct High Dynamic Range (HDR) information from Low Dynamic Range (LDR) image content. The dynamic range of well-exposed areas must be expanded and any missing information due to over/under-exposure must be recovered (hallucinated). The majority of methods focus on the former and are relatively successful, while most attempts on the latter are not of sufficient quality, even ones based on Convolutional Neural Networks (CNNs). A major factor for the reduced inpainting quality in some works is the choice of loss function. Work based on Generative Adversarial Networks (GANs) shows promising results for image synthesis and LDR inpainting, suggesting that GAN losses can improve inverse tone mapping results. This work presents a GAN-based method that hallucinates missing information from badly exposed areas in LDR images and compares its efficacy with alternative variations. The proposed method is quantitatively competitive with state-of-the-art inverse tone mapping methods, providing good dynamic range expansion for well-exposed areas and plausible hallucinations for saturated and under-exposed areas. A density-based normalisation method, targeted for HDR content, is also proposed, as well as an HDR data augmentation method targeted for HDR hallucination.


Subjects
Computer-Assisted Image Processing, Neural Networks (Computer), Hallucinations, Humans
19.
Microsc Microanal ; 26(5): 938-943, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32778194

ABSTRACT

We report an approach to expand the effective number of pixels available to small, two-dimensional electron detectors. To do so, we acquire subsections of a diffraction pattern that are then accurately stitched together in post-processing. Using an electron microscopy pixel array detector (EMPAD) that has only 128 × 128 pixels, we show that the field of view can be expanded while achieving high reciprocal-space sampling. Further, we highlight the need to properly account for the detector position (rotation) and the non-orthonormal diffraction shift axes to achieve an accurate reconstruction. Applying the method, we provide examples of spot and convergent beam diffraction patterns acquired with a pixelated detector.

20.
Sensors (Basel) ; 20(24)2020 Dec 18.
Article in English | MEDLINE | ID: mdl-33352954

ABSTRACT

Riveted workpieces are widely used in manufacturing; however, current inspection sensors are mainly limited to nondestructive testing, and automatically obtaining high-accuracy dimensions is difficult. We developed a 3-D sensor for rivet inspection using fringe projection profilometry (FPP) with a texture constraint. We used a multi-intensity high dynamic range (HDR) FPP method to address the varying reflectance of the metal surface, then utilized an additional constraint calculated from the fused HDR texture to compensate for the artifacts caused by phase mixture around stepwise edges. By combining the 2-D contours and 3-D FPP data, rivets can be easily segmented, and the edge points can be further refined for diameter measurement. We tested the performance on a sample riveted aluminum frame and evaluated the accuracy using standard objects. Experiments show that denser 3-D data of a riveted metal workpiece can be acquired with high accuracy. Compared with the traditional FPP method, diameter measurement accuracy improved by 50%.
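Refining edge points into a diameter is typically done with a least-squares circle fit; below is a self-contained Kåsa fit, our illustration of the general technique rather than the paper's exact refinement step:

```python
def _solve3(m, v):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    a = [row[:] + [v[i]] for i, row in enumerate(m)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            for c in range(col, 4):
                a[r][c] -= f * a[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (a[r][3] - sum(a[r][c] * x[c] for c in range(r + 1, 3))) / a[r][r]
    return x

def fit_circle(points):
    """Kaasa least-squares circle fit: solve [x y 1]*[A B C]^T = x^2 + y^2
    in the least-squares sense; then cx = A/2, cy = B/2, r^2 = C + cx^2 + cy^2.
    Returns (cx, cy, diameter)."""
    m = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        t = x * x + y * y
        for i in range(3):
            for j in range(3):
                m[i][j] += row[i] * row[j]
            v[i] += row[i] * t
    a, b, c = _solve3(m, v)
    cx, cy = a / 2, b / 2
    r = (c + cx * cx + cy * cy) ** 0.5
    return cx, cy, 2 * r
```

Given segmented edge points of a rivet head, the fitted diameter is read directly from the returned tuple.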
