Results 1 - 20 of 75
1.
Data Brief ; 55: 110658, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39049970

ABSTRACT

This paper details an imagery dataset of interior and exterior ambiances to assess and represent photobiological outcomes of the built environment in northern territories. The images were obtained using a Raspberry Pi Camera Module (RPiCM) mounted in a holder that fixes the camera in place. This holder allows the camera to be rotated in 30° steps to take 12 high dynamic range (HDR) images, which are then combined to create a panoramic image. The HDR images enable the calculation of photobiological effects, namely photopic light intensity for vision and spectral dominance for vision and circadian stimulation. This dataset includes 13 captures in 7 interior and 6 exterior settings, each divided into 4 subfolders containing the photographic data: the sequence of low dynamic range (LDR) images, the tone-mapped images obtained from the HDR calculation, the analysis of photopic luminance and false color, and 360° panoramic images (tone-mapped HDR, false-color luminance, and spectral dominance). Each space is also supplemented with photometric data presented as a .csv file containing lux and EML units obtained with a radiometer. This dataset is valuable for architects, designers, and neuroscientists to identify opportunities for enhancing human-centric lighting in existing architecture and landscape, as well as to propose solutions that promote vision and circadian stimulation in northern territories. Part of this research was used in previous studies [10]. The dataset is published and shared through a Mendeley repository [9].
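As background to the capture pipeline described above, the following is a minimal sketch of how a set of bracketed LDR exposures is typically merged into an HDR image and tone-mapped using OpenCV; it is not the authors' pipeline (which also involves panorama stitching), and the file names and exposure times are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical bracketed captures from one 30-degree camera orientation.
files = ["view_00_ev-2.jpg", "view_00_ev0.jpg", "view_00_ev+2.jpg"]
exposure_times = np.array([1 / 250, 1 / 60, 1 / 15], dtype=np.float32)
images = [cv2.imread(f) for f in files]

# Recover the camera response curve and merge the exposures into HDR radiance.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, exposure_times)
merge = cv2.createMergeDebevec()
hdr = merge.process(images, exposure_times, response)

# Tone-map the HDR result to an 8-bit image for visual inspection.
tonemap = cv2.createTonemapReinhard(gamma=2.2)
ldr = np.clip(tonemap.process(hdr) * 255, 0, 255).astype(np.uint8)

cv2.imwrite("view_00_hdr.hdr", hdr)
cv2.imwrite("view_00_tonemapped.png", ldr)
```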

2.
Sci Rep ; 14(1): 15176, 2024 07 02.
Article in English | MEDLINE | ID: mdl-38956114

ABSTRACT

Assessing programmed death ligand 1 (PD-L1) expression through immunohistochemistry (IHC) is the gold standard for predicting the immunotherapy response of non-small cell lung cancer (NSCLC). However, observing the heterogeneous spatial distribution of PD-L1 within tumors is challenging using IHC alone. Meanwhile, immunofluorescence (IF) can support both planar and three-dimensional (3D) histological analyses by combining tissue optical clearing with confocal microscopy. We optimized clinical tissue preparation for the IF assay, focusing on staining, imaging, and post-processing, to achieve quality identical to the traditional IHC assay. To overcome the limited dynamic range of the fluorescence microscope's detection system, we incorporated a high dynamic range (HDR) algorithm to restore the IF expression pattern after imaging and to extend it to 3D IF images. Following HDR processing, pathologists achieved a noticeable improvement in diagnostic accuracy (85.7%) using IF images. Moreover, 3D IF images revealed a 25% change in the tumor proportion score for PD-L1 expression at various depths within tumors. We have established an optimal and reproducible process for PD-L1 IF imaging in NSCLC, yielding high-quality data comparable to traditional IHC assays. The ability to discern the accurate spatial distribution of PD-L1 through 3D pathology analysis could provide more precise evaluation and prediction for immunotherapy targeting advanced NSCLC.


Subjects
B7-H1 Antigen, Non-Small Cell Lung Carcinoma, Immunofluorescence, Three-Dimensional Imaging, Lung Neoplasms, Humans, Non-Small Cell Lung Carcinoma/metabolism, Non-Small Cell Lung Carcinoma/pathology, B7-H1 Antigen/metabolism, Lung Neoplasms/pathology, Lung Neoplasms/metabolism, Lung Neoplasms/diagnosis, Three-Dimensional Imaging/methods, Immunofluorescence/methods, Immunohistochemistry/methods, Confocal Microscopy/methods, Tumor Biomarkers/metabolism
3.
Micromachines (Basel) ; 15(5)2024 May 18.
Article in English | MEDLINE | ID: mdl-38793235

ABSTRACT

High-dynamic-range integrated magnetometers have extensive potential applications in fields involving complex and changing magnetic fields. Among them, the diamond nitrogen-vacancy (NV) color-center magnetometer offers outstanding performance for wide-range, high-precision magnetic field measurement owing to its inherently high spatial resolution and high sensitivity. This study therefore proposes an innovative frequency-tracking scheme, which continuously monitors the resonant frequency shift of the NV color center induced by a time-varying magnetic field and feeds it back to the microwave source. This scheme successfully expands the dynamic range to 6.4 mT, approximately 34 times the intrinsic dynamic range of the NV center. Additionally, it achieves efficient detection of rapidly changing magnetic field signals at a rate of 0.038 T/s.
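A minimal sketch of the closed-loop idea behind such frequency tracking is given below; it is illustrative only, and the error-signal model, loop gain, and constants are assumptions rather than the authors' implementation.

```python
# Illustrative closed-loop frequency tracking of an ODMR resonance.
# The "error signal" is a toy stand-in for a lock-in measurement that is
# proportional to the detuning between the microwave drive and the resonance.

GAMMA_NV = 28.0e9  # NV gyromagnetic ratio, Hz per tesla (approximate)

def lockin_error(f_drive_hz, f_resonance_hz, slope=1.0):
    """Toy error signal: proportional to detuning near resonance."""
    return slope * (f_resonance_hz - f_drive_hz)

def track(f_initial_hz, field_trace_t, gain=0.5):
    """Follow a time-varying field by feeding the error back to the MW source."""
    f_drive = f_initial_hz
    inferred_fields = []
    for b_field in field_trace_t:
        f_resonance = f_initial_hz + GAMMA_NV * b_field        # Zeeman-shifted resonance
        f_drive += gain * lockin_error(f_drive, f_resonance)   # feedback to MW source
        inferred_fields.append((f_drive - f_initial_hz) / GAMMA_NV)
    return inferred_fields

if __name__ == "__main__":
    # Ramp from 0 to 6.4 mT, far beyond a single resonance linewidth.
    ramp = [i * 6.4e-3 / 999 for i in range(1000)]
    fields = track(2.87e9, ramp)
    print(f"final inferred field: {fields[-1] * 1e3:.2f} mT")
```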

4.
ACS Nano ; 18(20): 12760-12770, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38728257

ABSTRACT

Phototransistors are light-sensitive devices featuring a high dynamic range, low-light detection, and mechanisms to adapt to different ambient light conditions. These features are of interest for bioinspired applications such as artificial and restored vision. In this work, we report on a graphene-based phototransistor exploiting the photogating effect that features picowatt- to microwatt-level photodetection, a dynamic range covering six orders of magnitude from 7 to 10⁷ lux, and a responsivity of up to 4.7 × 10³ A/W. The proposed device offers the highest dynamic range and lowest detected optical power compared to the state of the art in interfacial photogating, and it furthermore operates stably in air. These results were achieved through a combination of developments. For example, by optimizing the geometry of our devices with respect to the graphene channel aspect ratio and by introducing a semitransparent top-gate electrode, we obtain a 20-30× improvement in responsivity over unoptimized reference devices. Furthermore, we use built-in dynamic range compression based on a partially logarithmic optical power dependence in combination with control of the responsivity. These features enable adaptation to changing lighting conditions and support high dynamic range operation, similar to what is known in human visual perception. The enhanced performance of our devices therefore holds potential for bioinspired applications, such as retinal implants.

5.
J Imaging ; 10(4)2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38667990

ABSTRACT

This study considers a method for reconstructing a high dynamic range (HDR) original image from a single saturated low dynamic range (LDR) image of metallic objects. A deep neural network approach was adopted for the direct mapping of an 8-bit LDR image to HDR. An HDR image database was first constructed using a large number of metallic objects with various shapes. Each captured HDR image was clipped to create a set of 8-bit LDR images. All pairs of HDR and LDR images were used to train and test the network. Subsequently, a convolutional neural network (CNN) was designed in the form of a deep U-Net-like architecture. The network consisted of an encoder, a decoder, and a skip connection to maintain high image resolution. The CNN algorithm was constructed using the learning functions in MATLAB. The entire network consisted of 32 layers and 85,900 learnable parameters. The performance of the proposed method was examined in experiments using a test image set. The proposed method was also compared with other methods and confirmed to be significantly superior in terms of reconstruction accuracy, histogram fitting, and psychological evaluation.
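To illustrate the general structure of an encoder-decoder with a skip connection for LDR-to-HDR mapping, here is a minimal PyTorch sketch; the channel counts, depth, and activations are illustrative assumptions and do not reproduce the paper's 32-layer MATLAB network.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy encoder-decoder with one skip connection, mapping an 8-bit LDR image
    (normalized to [0, 1]) to a linear HDR estimate."""

    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU()
        )
        # The skip connection concatenates encoder features to keep full resolution.
        self.out = nn.Conv2d(32, 3, 3, padding=1)

    def forward(self, x):
        e = self.enc(x)
        d = self.up(self.down(e))
        return torch.relu(self.out(torch.cat([e, d], dim=1)))

if __name__ == "__main__":
    model = TinyUNet()
    ldr = torch.rand(1, 3, 64, 64)   # fake LDR batch already scaled to [0, 1]
    print(model(ldr).shape)          # torch.Size([1, 3, 64, 64])
```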

6.
Sensors (Basel) ; 24(1)2024 Jan 04.
Article in English | MEDLINE | ID: mdl-38203161

ABSTRACT

Recently, advancements in image sensor technology have paved the way for the proliferation of high-dynamic-range television (HDRTV). Consequently, there has been a surge in demand for converting standard-dynamic-range television (SDRTV) content to HDRTV, especially given the dearth of native HDRTV content. However, since SDRTV content often contains video coding artifacts, SDRTV-to-HDRTV conversion tends to amplify them, thereby reducing the visual quality of the output video. To solve this problem, this paper proposes a multi-frame content-aware mapping network (MCMN) aiming to improve the conversion of low-quality SDRTV to high-quality HDRTV. Specifically, we utilize the temporal-spatial characteristics of videos to design a content-aware temporal-spatial alignment module for the initial alignment of video features. In the feature prior extraction stage, we propose a hybrid prior extraction module, including cross-temporal priors, local spatial priors, and global spatial prior extraction. Finally, we design a temporal-spatial transformation module to generate an improved tone mapping result. Across time and space, and from local to global scales, our method makes full use of multi-frame information to perform inverse tone mapping of single frames while also better repairing coding artifacts.

7.
Ultramicroscopy ; 257: 113902, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38086289

ABSTRACT

Diffraction pattern analysis can be used to reveal the crystalline structure of materials, and this information is used to understand the nano- and micro-structure of the advanced engineering materials that enable modern life. For nano-structured materials, diffraction pattern analysis is typically performed in the transmission electron microscope (TEM); TEM diffraction patterns have a limited angular range (less than a few degrees) due to the long camera length, which requires analysis of multiple patterns to probe a unit cell. As a different approach, wide-angle Kikuchi patterns can be captured using an on-axis detector in the scanning electron microscope (SEM) with a shorter camera length. These 'transmission Kikuchi diffraction' (TKD) patterns present a direct projection of the unit cell and can be routinely analysed using EBSD-based methods and dynamical diffraction theory. In the present work, we enhance this analysis significantly and present a multi-exposure diffraction pattern fusion method that increases the dynamic range of the detected patterns captured with a Timepix3-based direct electron detector (DED). This method uses an easy-to-apply exposure fusion routine to collect data, extend the dynamic range, and normalise the intensity distribution within these very wide (>95°) angle patterns. The potential of this method is demonstrated with full diffraction sphere reprojection, highlighting its ability to rapidly probe the structure of nano-structured materials in the scanning electron microscope.
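As a generic illustration of exposure fusion (not the specific routine used in the paper), the sketch below merges several exposures of the same pattern with Mertens fusion in OpenCV; the file names and exposure values are placeholders.

```python
import cv2
import numpy as np

# Hypothetical multi-exposure Kikuchi pattern frames from a direct electron
# detector. Exposure fusion keeps detail in both the intense central features
# and the weak wide-angle bands without needing exposure-time metadata.
frames = [cv2.imread(f"pattern_exp{ms}ms.png") for ms in (1, 10, 100)]

fusion = cv2.createMergeMertens()
fused = fusion.process(frames)   # float image, roughly in [0, 1]

# Normalise the fused intensity distribution to 8 bits for further analysis.
fused_8bit = cv2.normalize(fused, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("pattern_fused.png", fused_8bit)
```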

8.
Sensors (Basel) ; 23(22)2023 Nov 09.
Article in English | MEDLINE | ID: mdl-38005458

ABSTRACT

Infrared image sensing technology has received widespread attention due to its robustness to environmental conditions, good target recognition, and high anti-interference ability. However, as the integration density of the infrared focal plane increases, the dynamic range of the photoelectric system becomes difficult to improve; that is, the restrictive trade-off between noise and full well capacity is particularly prominent. Since the capacitance of an inversion-mode MOS capacitor adapts to the gate-source voltage, it is used as the integration capacitor in the infrared pixel circuit, which resolves the conflict between noise in low light and full well capacity in high light. To this end, a high-dynamic-range pixel structure based on adaptive capacitance is proposed, in which the capacitance of the infrared image sensor automatically changes from 6.5 fF to 37.5 fF as the light intensity increases. Based on 55 nm CMOS process technology, the performance parameters of an infrared image sensor with a 12,288 × 12,288 pixel array are studied. The results show that a small 5.5 µm × 5.5 µm pixel has a large full well capacity of 1.31 Me-, a variable conversion gain, a noise of less than 0.43 e-, and a dynamic range of more than 130 dB.
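The dynamic-range figure quoted above follows from the usual definition DR = 20·log10(full well capacity / read noise); a quick check with the reported numbers:

```python
import math

full_well_e = 1.31e6   # full well capacity, electrons (reported)
read_noise_e = 0.43    # upper bound on read noise, electrons (reported as "< 0.43 e-")

dr_db = 20 * math.log10(full_well_e / read_noise_e)
print(f"dynamic range ≈ {dr_db:.1f} dB")
# ≈ 129.7 dB at exactly 0.43 e-; since the noise is below 0.43 e-,
# the ratio exceeds 130 dB, consistent with the reported figure.
```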

9.
Sensors (Basel) ; 23(21)2023 Oct 29.
Article in English | MEDLINE | ID: mdl-37960502

ABSTRACT

Thin-film photodiodes (TFPDs) monolithically integrated on Si read-out integrated circuitry (ROIC) are promising imaging platforms when beyond-silicon optoelectronic properties are required. Although TFPD device performance has improved significantly, pixel development has lagged behind Si-based image sensors in terms of noise characteristics. Here, a thin-film-based pinned photodiode (TF-PPD) structure is presented, showing reduced kTC noise and dark current together with a high conversion gain (CG). Indium-gallium-zinc oxide (IGZO) thin-film transistors and quantum dot photodiodes are integrated sequentially on the Si ROIC in a fully monolithic scheme, with a photogate (PG) introduced to achieve PPD operation. This PG brings not only low noise performance but also a high full well capacity (FWC), owing to the large capacitance of its metal-oxide-semiconductor (MOS) structure. Hence, the FWC of the pixel is boosted to 1.37 Me- with a 5 µm pixel pitch, which is 8.3 times larger than the FWC that the TFPD junction capacitor can store. This large FWC, along with the inherently low noise of the TF-PPD, leads to a three-digit dynamic range (DR) of 100.2 dB. Unlike a Si-based PG pixel, the dark current contribution from the depleted semiconductor interfaces is limited, thanks to the wide energy band gap of the IGZO channel material used in this work. We expect that this novel 4T pixel architecture can accelerate the deployment of monolithic TFPD imaging technology, as it did for CMOS image sensors (CIS).

10.
Sensors (Basel) ; 23(20)2023 Oct 11.
Article in English | MEDLINE | ID: mdl-37896477

ABSTRACT

We present a 2D-stitched, 316 MP, 120 FPS, high dynamic range CMOS image sensor with 92 CML output ports operating at a cumulative data rate of 515 Gbit/s. The total die size is 9.92 cm × 8.31 cm, and the chip is fabricated in a 65 nm, 4-metal BSI process with an overall power consumption of 23 W. A 4.3 µm dual-gain pixel has high and low conversion gain full wells of 6600 e- and 41,000 e-, respectively, with a total high-gain temporal noise of 1.8 e-, achieving a composite dynamic range of 87 dB.

11.
Sensors (Basel) ; 23(20)2023 Oct 17.
Article in English | MEDLINE | ID: mdl-37896600

ABSTRACT

High dynamic range (HDR) imaging technology is increasingly being used in automated driving systems (ADS) to improve the safety of traffic participants in scenes with strong differences in illumination. Therefore, a combination of HDR video, that is, video with details in all illumination regimes, and (HDR) object perception techniques that can deal with this variety in illumination is highly desirable. Although progress has been made in both HDR imaging solutions and object detection algorithms in recent years, they have progressed independently of each other. This has led to a situation in which object detection algorithms are typically designed and constantly improved to operate on 8-bit-per-channel content. This makes these algorithms not ideally suited for HDR data processing, which natively encodes to a higher bit depth (12 or 16 bits per channel). In this paper, we present and evaluate two novel convolutional neural network (CNN) architectures that intelligently convert high-bit-depth HDR images into 8-bit images. We attempt to optimize reconstruction quality by focusing on ADS object detection quality. The first research novelty is to perform tone mapping jointly with demosaicing while also successfully suppressing noise and demosaicing artifacts. The first CNN performs tone mapping with noise suppression on a full-color HDR input, while the second performs joint demosaicing and tone mapping with noise suppression on a raw HDR input. The focus is to increase the detectability of traffic-related objects in the reconstructed 8-bit content while ensuring that the realism of the standard dynamic range (SDR) content in diverse conditions is preserved. The second research novelty is that, for the first time to the best of our knowledge, a thorough comparative analysis against state-of-the-art tone-mapping and demosaicing methods is performed with respect to ADS object detection accuracy on traffic-related content that abounds with diverse, challenging (i.e., boundary-case) scenes. The evaluation results show that the two proposed networks achieve better object detection accuracy and image quality than both SDR content and content obtained with state-of-the-art tone-mapping and demosaicing algorithms.

12.
Sensors (Basel) ; 23(12)2023 Jun 06.
Article in English | MEDLINE | ID: mdl-37420537

ABSTRACT

In computational photography, high dynamic range (HDR) imaging refers to the family of techniques used to recover a wider range of intensity values than the limited range provided by standard sensors. Classical techniques consist of acquiring the scene at varying exposures to compensate for saturated and underexposed regions, followed by a non-linear compression of intensity values called tone mapping. Recently, there has been growing interest in estimating HDR images from a single exposure. Some methods exploit data-driven models trained to estimate values outside the camera's visible intensity levels. Others make use of polarimetric cameras to reconstruct HDR information without exposure bracketing. In this paper, we present a novel HDR reconstruction method that employs a single PFA (polarimetric filter array) camera with an additional external polarizer to increase the scene's dynamic range across the acquired channels and to mimic different exposures. Our contribution is a pipeline that effectively combines standard bracketing-based HDR algorithms with data-driven solutions designed to work with polarimetric images. In this regard, we present a novel CNN (convolutional neural network) model that exploits the underlying mosaiced pattern of the PFA in combination with the external polarizer to estimate the original scene properties, and a second model designed to further improve the final tone mapping step. The combination of these techniques allows us to take advantage of the light attenuation introduced by the filters while producing an accurate reconstruction. We present an extensive experimental section in which we validate the proposed method on both synthetic and real-world datasets specifically acquired for the task. Quantitative and qualitative results show the effectiveness of the approach compared to state-of-the-art methods. In particular, our technique exhibits a PSNR (peak signal-to-noise ratio) of 23 dB on the whole test set, which is 18% better than the second-best alternative.
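A minimal sketch of why a polarimetric filter array plus an external polarizer can mimic an exposure bracket: by Malus's law, each analyzer orientation attenuates the light transmitted by the external polarizer differently, so the four channels behave like different exposures of the same scene. The angles and numbers below are illustrative assumptions, not the paper's calibration.

```python
import math

# Analyzer orientations (degrees) found on typical polarimetric filter arrays.
pfa_angles = [0, 45, 90, 135]
external_polarizer_angle = 20.0   # hypothetical orientation of the added polarizer

# Malus's law: for fully polarized light, the transmitted fraction through an
# analyzer at relative angle theta is cos^2(theta).
for a in pfa_angles:
    theta = math.radians(a - external_polarizer_angle)
    transmission = math.cos(theta) ** 2
    print(f"analyzer {a:3d}°: relative exposure ≈ {transmission:.3f}")
```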


Subjects
Data Compression, Algorithms, Neural Networks (Computer), Photography, Signal-to-Noise Ratio
13.
Sensors (Basel) ; 23(12)2023 Jun 20.
Article in English | MEDLINE | ID: mdl-37420931

ABSTRACT

Intelligent driver assistance systems are becoming increasingly popular in modern passenger vehicles. A crucial capability of intelligent vehicles is detecting vulnerable road users (VRUs) for an early and safe response. However, standard imaging sensors perform poorly in conditions of strong illumination contrast, such as when approaching a tunnel or at night, due to their dynamic range limitations. In this paper, we focus on the use of high-dynamic-range (HDR) imaging sensors in vehicle perception systems and the consequent need to tone map the acquired data into a standard 8-bit representation. To our knowledge, no previous studies have evaluated the impact of tone mapping on object detection performance. We investigate the potential for optimizing HDR tone mapping to achieve a natural image appearance while facilitating object detection by state-of-the-art detectors designed for standard dynamic range (SDR) images. Our proposed approach relies on a lightweight convolutional neural network (CNN) that tone maps HDR video frames into a standard 8-bit representation. We introduce a novel training approach called detection-informed tone mapping (DI-TM) and evaluate its effectiveness and robustness in various scene conditions, as well as its performance relative to an existing state-of-the-art tone mapping method. The results show that the proposed DI-TM method achieves the best detection performance metrics in challenging dynamic range conditions, while both methods perform well in typical, non-challenging conditions. In challenging conditions, our method improves the detection F2 score by 13%; compared to SDR images, the increase in F2 score is 49%.
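For reference, the F2 score mentioned above is the F-beta measure with beta = 2, which weights recall more heavily than precision; the precision/recall values below are made up for illustration only.

```python
def f_beta(precision: float, recall: float, beta: float = 2.0) -> float:
    """F-beta score; beta = 2 emphasises recall, which suits safety-critical VRU detection."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative numbers only (not from the paper).
print(f"F2 = {f_beta(precision=0.70, recall=0.85):.3f}")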

14.
Front Psychol ; 14: 1088975, 2023.
Article in English | MEDLINE | ID: mdl-37333576

ABSTRACT

Visual distractions pose a significant risk to transportation safety, with laser attacks against aircraft pilots being a common example. This study used a research-grade High Dynamic Range (HDR) display to produce bright-light distractions for 12 volunteer participants performing a combined visual task across central and peripheral visual fields. The visual scene had an average luminance of 10 cd·m⁻² with targets of approximately 0.5° angular size, while the distractions had a maximum luminance of 9,000 cd·m⁻² and were 3.6° in size. The dependent variables were the mean fixation duration during task execution (representative of information processing time), and the critical stimulus duration required to support a target level of performance (representative of task efficiency). The experiment found a statistically significant increase in mean fixation duration, rising from 192 ms without distractions to 205 ms with bright-light distractions (p = 0.023). This indicates a decrease in visibility of the low contrast targets or an increase in cognitive workload that required greater processing time for each fixation in the presence of the bright-light distractions. Mean critical stimulus duration was not significantly affected by the distraction conditions used in this study. Future experiments are suggested to replicate driving and/or piloting tasks and employ bright-light distractions based on real-world data, and we advocate the use of eye-tracking metrics as sensitive measures of changes in performance.

15.
Sensors (Basel) ; 23(9)2023 Apr 27.
Article in English | MEDLINE | ID: mdl-37177520

ABSTRACT

Restorers and curators in museums sometimes find it difficult to accurately segment areas of paintings that have been contaminated with other pigments or that need to be restored, and work on the painting needs to be carried out with the minimum possible damage. It is therefore necessary to develop measurement systems and methods that facilitate this task in the least invasive way possible. The aim of this study was to obtain high-dynamic-range (HDR) spectral reflectance and spatial resolution for Dalí's painting entitled Two Figures (1926) in order to segment a small area of black and white pigment that was affected by the contact transfer of reddish pigment from another painting. Using the Hypermatrixcam developed by this research team to measure HDR spectral reflectance, an HDR multispectral cube of 12 images was obtained for the 470-690 nm band in steps of 20 nm. With the spectral reflectance values of the HDR cube, the colour of the paint area affected by the transfer was studied by calculating the a*b* components in the CIELAB system. These a*b* values were then used to define two methods of segmenting the exact areas in which there was a transfer of reddish pigment. The studied area of the painting was originally black, and the contamination with reddish pigment occupied 13.87% to 32% of the total area depending on the selected method. These different results arise because the lower limit corresponds to segmentation based on pure pigment, while the upper limit treats everything that is not black pigment as red. Over- and under-segmentation is a common problem described in the literature on pigment selection. In this application case, as the red pigment is not original and should be removed, curators will choose the method that selects the largest red area.
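A minimal sketch of segmenting a reddish-pigment transfer using a*b* chromaticity in CIELAB, in the spirit of the two thresholding methods described above; the RGB input, reference chromaticity, and thresholds are placeholders and do not reproduce the study's HDR spectral-reflectance pipeline.

```python
import numpy as np
from skimage import color, io

# Placeholder input: an ordinary RGB image of the affected area (the study works
# from HDR spectral reflectance, which is not reproduced here).
rgb = io.imread("painting_region.png")[:, :, :3] / 255.0
lab = color.rgb2lab(rgb)
a, b = lab[:, :, 1], lab[:, :, 2]

# Method 1 (stricter): pixels close to an assumed "pure red pigment" chromaticity.
red_ref_a, red_ref_b, radius = 35.0, 25.0, 15.0
pure_red = (a - red_ref_a) ** 2 + (b - red_ref_b) ** 2 < radius ** 2

# Method 2 (looser): anything that is clearly not a neutral/black pigment.
not_black = np.sqrt(a ** 2 + b ** 2) > 10.0   # assumed chroma threshold

for name, mask in (("pure-red method", pure_red), ("non-black method", not_black)):
    print(f"{name}: {100 * mask.mean():.2f}% of the area flagged")
```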

16.
J Imaging ; 9(4)2023 Apr 14.
Article in English | MEDLINE | ID: mdl-37103234

ABSTRACT

The images we commonly use are RGB images, which contain three pieces of information: red, green, and blue. Hyperspectral (HS) images, on the other hand, retain wavelength information. HS images are utilized in various fields due to their rich information content, but acquiring them requires specialized and expensive equipment that is not easily accessible to everyone. Recently, Spectral Super-Resolution (SSR), which generates spectral images from RGB images, has been studied. Conventional SSR methods target Low Dynamic Range (LDR) images; however, some practical applications require High Dynamic Range (HDR) images. In this paper, an SSR method for HDR images is proposed. As a practical example, we use the HDR-HS images generated by the proposed method as environment maps and perform spectral image-based lighting. The rendering results of our method are more realistic than those of conventional renderers and LDR SSR methods, and this is the first attempt to utilize SSR for spectral rendering.

17.
Sensors (Basel) ; 23(6)2023 Mar 15.
Article in English | MEDLINE | ID: mdl-36991843

ABSTRACT

In high-dynamic scenes, fringe projection profilometry (FPP) may encounter fringe saturation, and the calculated phase will also be affected, producing errors. This paper proposes a saturated fringe restoration method to solve this problem, taking the four-step phase shift as an example. Firstly, according to the saturation of the fringe group, the concepts of the reliable area, the shallow saturated area, and the deep saturated area are proposed. Then, the parameter A, related to the reflectivity of the object, is calculated in the reliable area and interpolated into the shallow and deep saturated areas. The theoretical shallow and deep saturated areas are not known in actual experiments; however, morphological operations can be used to dilate and erode the reliable areas to produce cubic spline interpolation (CSI) areas and biharmonic spline interpolation (BSI) areas, which roughly correspond to the shallow and deep saturated areas. After A is restored, it can be used as a known quantity to restore the saturated fringe from the unsaturated fringe at the same position; the remaining unrecoverable part of the fringe can be completed using CSI, and the same part of the symmetrical fringe can then be further restored. To further reduce the influence of nonlinear error, the Hilbert transform is also used in the phase calculation of the actual experiment. The simulation and experimental results validate that the proposed method obtains correct results without additional equipment or an increased number of projections, which proves the feasibility and robustness of the method.
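For orientation, the standard four-step phase-shift relation that such methods build on is φ = arctan[(I4 − I2)/(I1 − I3)], where the fringe intensities are In = A + B·cos(φ + (n−1)·π/2). A toy sketch with synthetic, unsaturated fringes is shown below; the saturation handling and Hilbert-transform correction from the paper are not reproduced.

```python
import numpy as np

# Synthetic four-step phase-shifted fringes: I_n = A + B*cos(phi + n*pi/2), n = 0..3.
x = np.linspace(0, 4 * np.pi, 512)
phi_true = x % (2 * np.pi)
A, B = 120.0, 100.0                      # background and modulation (arbitrary units)
I = [A + B * np.cos(phi_true + n * np.pi / 2) for n in range(4)]

# Four-step phase retrieval (wrapped phase).
phi = np.arctan2(I[3] - I[1], I[0] - I[2])

# Compare against the true phase modulo 2*pi.
err = np.abs(np.angle(np.exp(1j * (phi - phi_true))))
print("max wrapped-phase error:", err.max())
```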

18.
Sensors (Basel) ; 23(4)2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36850683

ABSTRACT

In a routine optical remote sensor, there is a contradiction between the two requirements of high radiation sensitivity and high dynamic range. This problem can be solved by pixel-level adaptive-gain technology, which integrates multilevel integration capacitors into the photodetector pixels and performs multiple nondestructive read-outs of the target charge within a single exposure. Any one pixel then has four gains: high gain (HG), medium gain (MG), low gain (LG), and ultralow gain (ULG). This study analyzes the requirements for laboratory radiometric calibration, and we designed a laboratory calibration scheme for this distinctive pixel-level adaptive-gain imaging method. We obtained calibration coefficients for general application using a single gain output, as well as the dynamic-range switching points and the proportional conversion relationship between adjacent gains for the adaptive-gain output. With these results, on-orbit quantitative applications of spectrometers adopting pixel-level adaptive-gain technology are supported.
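A minimal sketch of how a calibrated adaptive-gain read-out might be linearized using per-gain conversion ratios referenced to the high-gain output; all numbers are illustrative assumptions, not the paper's calibration results.

```python
# Hypothetical calibration: conversion ratios of each gain relative to high gain (HG),
# i.e. the proportional relationship between adjacent gains, plus a radiance
# coefficient for the HG output.
GAIN_RATIO = {"HG": 1.0, "MG": 4.0, "LG": 16.0, "ULG": 64.0}   # assumed ratios
HG_COEFF = 0.02                                                # assumed radiance per DN at HG

def to_radiance(dn: float, gain: str) -> float:
    """Convert a raw digital number read out under a given gain to radiance units."""
    return dn * HG_COEFF * GAIN_RATIO[gain]

# Example: 900 DN read out at MG corresponds to the same radiance as 3600 DN at HG.
print(to_radiance(900, "MG"), to_radiance(3600, "HG"))
```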

19.
Sensors (Basel) ; 22(21)2022 Nov 04.
Article in English | MEDLINE | ID: mdl-36366211

ABSTRACT

A high dynamic range (HDR) stereoscopic omnidirectional vision system can provide users with more realistic binocular and immersive perception, but the HDR stereoscopic omnidirectional image (HSOI) suffers distortions during encoding and visualization, making its quality evaluation more challenging. To solve this problem, this paper proposes a client-oriented blind HSOI quality metric based on visual perception. The proposed metric mainly consists of a monocular perception module (MPM) and a binocular perception module (BPM), which combine monocular/binocular, omnidirectional, and HDR/tone-mapping perception. The MPM extracts features from three aspects: global color distortion, symmetric/asymmetric distortion, and scene distortion. In the BPM, a binocular fusion map and a binocular difference map are generated by joint image filtering. Then, brightness segmentation is performed on the binocular fusion image, and distinctive features are extracted from the segmented high-, low-, and middle-brightness regions. For the binocular difference map, natural scene statistical features are extracted using multi-coefficient derivative maps. Finally, feature screening is used to remove redundancy between the extracted features. Experimental results on the HSOID database show that the proposed metric is generally better than representative quality metrics and is more consistent with subjective perception.


Subjects
Depth Perception, Binocular Vision, Humans, Visual Perception
20.
Sensors (Basel) ; 22(20)2022 Oct 11.
Article in English | MEDLINE | ID: mdl-36298068

ABSTRACT

Ubiquitous computing has enabled the proliferation of low-cost solutions for capturing information about the user's environment or biometric parameters. In this sense, the do-it-yourself (DIY) approach, whether used to build new low-cost systems or to verify the correspondence of low-cost systems with professional devices, broadens the range of possible applications. Following this trend, the authors present a complete DIY and replicable procedure to evaluate the performance of a low-cost video luminance meter consisting of a Raspberry Pi and a camera module. The method initially consists of designing and developing an LED panel and a light cube that serve as reference illuminance sources. The luminance distribution across the two reference light sources is determined using a Konica Minolta luminance meter. With this approach, it is possible to identify, for each light source, an area with an almost uniform luminance value. By applying a frame that covers part of the panel and shows only the area with nearly homogeneous luminance values, and by operating the two systems in a dark space with the low-cost video luminance meter mounted on a professional reference camera photometer (LMK mobile air), it is possible to check the discrepancy in luminance values between the low-cost and professional systems when aimed at different homogeneous light sources. In doing so, we primarily consider the peripheral shading effect, better known as the vignetting effect. We then adjust the correction factor S of the Radiance pcomb function to better match the luminance values of the low-cost system to the professional device, and we introduce an algorithm to differentiate the S factor depending on the light source. In general, the DIY calibration process described in the paper is time-consuming. However, subsequent applications in various real-life scenarios verify the satisfactory performance of the low-cost system in terms of luminance mapping and glare evaluation compared to a professional device.
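A minimal sketch of the two corrections described above: a radial vignetting compensation applied to the low-cost frames, and a per-light-source scale factor S matching them to the reference photometer. The falloff polynomial and S values are placeholders, not the calibration the authors derived.

```python
import numpy as np

def correct_vignetting(img: np.ndarray, k2: float = 0.35, k4: float = 0.10) -> np.ndarray:
    """Divide out an assumed radial falloff model 1 / (1 + k2*r^2 + k4*r^4)."""
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(x - w / 2, y - h / 2) / np.hypot(w / 2, h / 2)   # normalised radius
    falloff = 1.0 / (1.0 + k2 * r ** 2 + k4 * r ** 4)
    return img / falloff[..., None] if img.ndim == 3 else img / falloff

# Hypothetical per-source scale factors, playing the role of the pcomb "S" correction.
S_FACTOR = {"led_panel": 1.12, "light_cube": 0.97}

def calibrated_luminance(raw_luminance: np.ndarray, source: str) -> np.ndarray:
    """Scale a vignetting-corrected luminance map to the reference device for a given source."""
    return S_FACTOR[source] * raw_luminance

luminance_map = correct_vignetting(np.random.rand(480, 640) * 200.0)   # fake cd/m2 map
print(calibrated_luminance(luminance_map, "led_panel").mean())
```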


Subjects
Photometry, Ocular Vision, Humans