1.
ACS EST Air ; 1(4): 283-293, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38633206

ABSTRACT

Global ground-level measurements of elements in ambient particulate matter (PM) can provide valuable information to understand the distribution of dust and trace elements, assess health impacts, and investigate emission sources. We use X-ray fluorescence spectroscopy to characterize the elemental composition of PM samples collected from 27 globally distributed sites in the Surface PARTiculate mAtter Network (SPARTAN) over 2019-2023. Consistent protocols are applied to collect all samples and analyze them at one central laboratory, which facilitates comparison across sites. Multiple quality assurance measures are performed, including the use of reference materials that resemble typical PM samples, acceptance testing, and routine quality control. Method detection limits and uncertainties are estimated. Concentrations of dust and trace element oxides (TEO) are determined from the elemental dataset. In addition to sites in arid regions, a moderately high mean dust concentration (6 µg/m³) in PM2.5 is also found in Dhaka (Bangladesh), along with a high average TEO level (6 µg/m³). High carcinogenic risk (>1 cancer case per 100,000 adults) from airborne arsenic is observed in Dhaka (Bangladesh), Kanpur (India), and Hanoi (Vietnam). Informal lead-acid battery and e-waste recycling industries, as well as coal-fired brick kilns, likely contribute to the elevated trace element concentrations found in Dhaka.
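For readers unfamiliar with how a threshold like ">1 cancer case per 100,000 adults" is typically obtained, the sketch below shows the common unit-risk calculation (lifetime excess risk = concentration × inhalation unit risk). It is a hedged illustration, not the paper's exact method; the unit-risk value and the example concentration are assumptions to be verified against the relevant risk-assessment tables.

```python
# Illustrative lifetime inhalation cancer-risk estimate (not the paper's exact method).
# Assumes the common unit-risk formulation: risk = concentration * inhalation unit risk (IUR).
# The IUR value below is the one commonly cited for arsenic (~4.3e-3 per ug/m^3); verify before use.

AS_IUR_PER_UG_M3 = 4.3e-3  # assumed inhalation unit risk for arsenic, (ug/m^3)^-1

def lifetime_cancer_risk(concentration_ug_m3: float, iur: float = AS_IUR_PER_UG_M3) -> float:
    """Lifetime excess cancer risk for continuous exposure at the given concentration."""
    return concentration_ug_m3 * iur

if __name__ == "__main__":
    c_as = 0.005  # hypothetical ambient arsenic concentration in PM2.5, ug/m^3
    risk = lifetime_cancer_risk(c_as)
    print(f"Excess risk: {risk:.2e}  (~{risk * 1e5:.1f} cases per 100,000 adults)")
    # A risk above 1e-5 (1 per 100,000) would exceed the threshold discussed in the abstract.
```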

2.
Article in English | MEDLINE | ID: mdl-37498755

ABSTRACT

We propose that spaceborne polarimetric imagers can be calibrated, or self-calibrated, using zodiacal light (ZL). ZL is created by a cloud of interplanetary dust particles. It has a significant degree of polarization over a wide field of view. From space, ZL is unaffected by terrestrial disturbances. ZL is insensitive to the camera location, so it is suited for simultaneous cross-calibration of satellite constellations. ZL changes on a scale of months, thus being a quasi-constant target over realistic calibration sessions. We derive a forward model for polarimetric image formation. Based on it, we formulate an inverse problem for polarimetric calibration and self-calibration, as well as an algorithm for its solution. The methods are demonstrated in simulations. For these simulations, we render polarized images of the sky, including ZL from space, polarimetric disturbances, and imaging noise.
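To make the "forward model plus inverse problem" pattern concrete, here is a minimal sketch of fitting camera calibration parameters against a target with a known Stokes vector. It is not the paper's forward model; the ideal-analyzer equation, the gain/offset parameterization, and the reference Stokes values are illustrative assumptions.

```python
# Minimal sketch of polarimetric self-calibration against a known target (here standing in for
# zodiacal light). Only illustrates the idea of fitting a radiometric gain and analyzer-angle
# offset so that measurements match a reference Stokes vector.
import numpy as np
from scipy.optimize import least_squares

def predicted_intensity(theta, gain, offset, s0, s1, s2):
    # Ideal linear analyzer at angle (theta + offset), scaled by a radiometric gain.
    a = 2.0 * (theta + offset)
    return gain * 0.5 * (s0 + s1 * np.cos(a) + s2 * np.sin(a))

# Reference (assumed known) Stokes vector of the calibration target.
s0, s1, s2 = 1.0, 0.15, 0.05

# Simulated measurements with "true" unknown calibration parameters plus noise.
rng = np.random.default_rng(0)
angles = np.deg2rad([0, 45, 90, 135])
true_gain, true_offset = 1.8, np.deg2rad(2.0)
meas = predicted_intensity(angles, true_gain, true_offset, s0, s1, s2)
meas += rng.normal(scale=0.005, size=meas.shape)

def residuals(p):
    gain, offset = p
    return predicted_intensity(angles, gain, offset, s0, s1, s2) - meas

fit = least_squares(residuals, x0=[1.0, 0.0])
print("estimated gain, offset [deg]:", fit.x[0], np.rad2deg(fit.x[1]))
```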

3.
Article in English | MEDLINE | ID: mdl-35917574

ABSTRACT

Scattering-based computed tomography (CT) recovers a heterogeneous volumetric scattering medium using images taken from multiple directions. It is a nonlinear problem. Prior art mainly approached it by explicit physics-based optimization of image fitting, which is slow and difficult to scale. Scale is particularly important when the objects constitute large cloud fields, where volumetric recovery is important for climate studies. Besides speed, imaging and recovery need to be flexible, to efficiently handle variable viewing geometries and resolutions. These can be caused by perturbations in camera poses or by fusion of data from different types of observational sensors. There is thus a need for fast variable imaging projection scattering tomography of clouds (VIP-CT). We develop a learning-based solution, using a deep neural network (DNN) that trains on a large physics-based labeled volumetric dataset. The DNN parameters are oblivious to the domain scale, hence the DNN can work with arbitrarily large domains. VIP-CT offers much better quality than the state of the art. The inference speed and flexibility of VIP-CT make it effectively real-time in the context of spaceborne observations. The paper is the first to demonstrate CT of a real cloud using empirical data directly in a DNN. VIP-CT may offer a model for learning-based solutions to nonlinear CT problems in other scientific domains. Our code is available online.
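The toy network below illustrates only the input/output relationship a learned scattering-tomography model must realize: a stack of multi-view images in, a volumetric extinction field out. The architecture, sizes, and names here are placeholders and bear no relation to the actual VIP-CT design, losses, or training data, which are described in the paper and its released code.

```python
# A toy network in the spirit of learned scattering tomography: map a stack of multi-view
# cloud images to a volumetric extinction field. Hedged illustration only.
import torch
import torch.nn as nn

class ToyViewToVolume(nn.Module):
    def __init__(self, n_views=9, volume_shape=(16, 16, 16)):
        super().__init__()
        self.volume_shape = volume_shape
        self.encoder = nn.Sequential(                 # 2D encoder over the stacked views
            nn.Conv2d(n_views, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.decoder = nn.Sequential(                 # MLP decoder to voxel values
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 512), nn.ReLU(),
            nn.Linear(512, int(torch.tensor(volume_shape).prod())),
            nn.Softplus(),                            # extinction is non-negative
        )

    def forward(self, images):                        # images: (batch, n_views, H, W)
        return self.decoder(self.encoder(images)).view(-1, *self.volume_shape)

model = ToyViewToVolume()
dummy = torch.rand(2, 9, 64, 64)                      # two samples of 9 views, 64x64 pixels
print(model(dummy).shape)                             # -> torch.Size([2, 16, 16, 16])
```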

4.
IEEE Trans Pattern Anal Mach Intell ; 44(12): 8728-8739, 2022 Dec.
Article in English | MEDLINE | ID: mdl-30843801

ABSTRACT

Night beats with alternating current (AC) illumination. By passively sensing this beat, we reveal new scene information which includes: the type of bulbs in the scene, the phases of the electric grid up to city scale, and the light transport matrix. This information yields unmixing of reflections and semi-reflections, nocturnal high dynamic range, and scene rendering with bulbs not observed during acquisition. The latter is facilitated by a dataset of bulb response functions for a range of sources, which we collected and provide. To do all this, we built a novel coded-exposure high-dynamic-range imaging technique, specifically designed to operate on the grid's AC lighting.
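The flicker signal underlying this abstract can be illustrated with a simple least-squares fit: lamp intensity on an AC grid beats at twice the grid frequency, so the amplitude and phase of that beat can be estimated from a time series of intensity samples. The sketch below shows only this signal model; it is not the paper's coded-exposure acquisition, and the sampling parameters are assumptions.

```python
# Hedged sketch: estimating the flicker amplitude and phase of a light source from a time
# series of intensity samples. Lamp intensity driven by an AC grid typically beats at twice
# the grid frequency (e.g. 100 Hz for a 50 Hz grid).
import numpy as np

GRID_HZ = 50.0
BEAT_HZ = 2.0 * GRID_HZ          # intensity beats at twice the grid frequency

def fit_beat(t, intensity, beat_hz=BEAT_HZ):
    """Least-squares fit of I(t) = a + b*cos(w t) + c*sin(w t); returns (dc, amplitude, phase)."""
    w = 2.0 * np.pi * beat_hz
    basis = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    a, b, c = np.linalg.lstsq(basis, intensity, rcond=None)[0]
    return a, np.hypot(b, c), np.arctan2(-c, b)

# Synthetic example: samples of a flickering bulb with noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.1, 400)                       # 100 ms of samples
truth = 1.0 + 0.3 * np.cos(2 * np.pi * BEAT_HZ * t + 0.7)
samples = truth + rng.normal(scale=0.02, size=t.shape)
print(fit_beat(t, samples))                          # dc ~1.0, amplitude ~0.3, phase ~0.7
```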

5.
J Opt Soc Am A Opt Image Sci Vis ; 38(9): 1320-1331, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34613139

ABSTRACT

Plankton interact with the environment according to their size and three-dimensional (3D) structure. To study them outdoors, these translucent specimens are imaged in situ. Light projects through a specimen in each image. Each specimen has a random scale, drawn from the population's size distribution, and a random, unknown pose. A specimen appears only once before drifting away. We achieve 3D tomography using such a random ensemble to statistically estimate an average volumetric distribution of the plankton type and specimen size. To counter errors due to non-rigid deformations, we weight the data, drawing on advanced models developed for cryo-electron microscopy. The weights convey the confidence in the quality of each datum. This confidence relies on a statistical error model. We demonstrate the approach on live plankton using an underwater field microscope.


Subject(s)
Models, Theoretical; Plankton; Tomography, Optical; Cryoelectron Microscopy; Models, Biological
6.
Opt Express ; 27(12): A766-A778, 2019 Jun 10.
Article in English | MEDLINE | ID: mdl-31252853

ABSTRACT

We present a wide-field imaging approach to optically sense underwater sediment resuspension events. It uses wide-field multi-directional views and diffuse backlight. Our approach algorithmically quantifies the amount of resuspended material and its spatiotemporal distribution. The suspended particles affect the radiation that reaches the cameras, and hence the captured images. By measuring the radiance during and prior to resuspension, we extract the optical depth along the line of sight per pixel. Using computed tomography (CT) principles, the optical depths yield an estimate of the extinction coefficient of the suspension, per voxel. The suspended density is then derived from the reconstructed extinction coefficient.
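The two basic steps implied by this abstract are easy to state: optical depth per pixel from the ratio of radiance during and before resuspension, followed by a linear tomographic solve for the per-voxel extinction coefficient. The sketch below shows that pattern under simplifying assumptions; the toy ray/path-length matrix is a placeholder, not the real acquisition geometry.

```python
# Hedged sketch: per-pixel optical depth from two images, then a linear tomographic solve
# for per-voxel extinction. Toy stand-in geometry only.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

def optical_depth(i_during, i_before, eps=1e-6):
    """tau = -ln(I / I0) per pixel, clipped to avoid log of non-positive values."""
    ratio = np.clip(i_during / np.maximum(i_before, eps), eps, None)
    return -np.log(ratio)

# Toy geometry: 4 voxels, 4 rays with known path lengths through each voxel.
path_lengths = csr_matrix(np.array([
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 1.0, 1.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
    [1.0, 0.0, 1.0, 0.0],
]))
true_extinction = np.array([0.1, 0.3, 0.2, 0.05])     # per-voxel extinction [1/m]
tau = path_lengths @ true_extinction                   # forward model: line integrals

beta = lsqr(path_lengths, tau)[0]                      # least-squares tomographic estimate
print("estimated extinction per voxel:", np.round(beta, 3))
```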

7.
IEEE Trans Pattern Anal Mach Intell ; 39(3): 603-616, 2017 03.
Article in English | MEDLINE | ID: mdl-27071162

ABSTRACT

Random refraction occurs in turbulence and through a wavy water-air interface. It creates distortion that changes in space, time and with viewpoint. Localizing objects in three dimensions (3D) despite this random distortion is important to some predators and also to submariners avoiding the salient use of periscopes. We take a multiview approach to this task. Refracted distortion statistics induce a probabilistic relation between any pixel location and a line of sight in space. Measurements of an object's random projection from multiple views and times lead to a likelihood function of the object's 3D location. The likelihood leads to estimates of the 3D location and its uncertainty. Furthermore, multiview images acquired simultaneously in a wide stereo baseline have uncorrelated distortions. This helps reduce the acquisition time needed for localization. The method is demonstrated in stereoscopic video sequences, both in a lab and a swimming pool.
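A toy version of the likelihood reasoning in this abstract is given below: each view's observed pixel is modeled as the true projection plus zero-mean Gaussian distortion, so the log-likelihood of a candidate 3D point sums squared reprojection errors over views. The pinhole camera model, the distortion standard deviation, and the grid search are placeholders, not the paper's calibrated setup.

```python
# Hedged toy version of likelihood-based 3D localization under random refraction.
import numpy as np

def project(point3d, cam_center, focal=500.0):
    """Trivial pinhole projection along the z-axis (placeholder camera model)."""
    rel = point3d - cam_center
    return focal * rel[:2] / rel[2]

def log_likelihood(candidate, observations, cam_centers, sigma_px=3.0):
    """Sum of Gaussian log-likelihoods of observed pixels given a candidate 3D location."""
    ll = 0.0
    for obs, c in zip(observations, cam_centers):
        err = obs - project(candidate, c)
        ll += -0.5 * np.dot(err, err) / sigma_px**2
    return ll

rng = np.random.default_rng(2)
cam_centers = [np.array([-0.5, 0.0, 0.0]), np.array([0.5, 0.0, 0.0])]   # wide stereo baseline
true_point = np.array([0.1, -0.05, 2.0])
observations = [project(true_point, c) + rng.normal(scale=3.0, size=2) for c in cam_centers]

# Exhaustive search over a small 3D grid of candidates (a real system would be smarter).
grid = [np.array([x, y, z]) for x in np.linspace(-0.3, 0.3, 13)
                            for y in np.linspace(-0.3, 0.3, 13)
                            for z in np.linspace(1.0, 3.0, 21)]
best = max(grid, key=lambda p: log_likelihood(p, observations, cam_centers))
print("most likely 3D location:", best)
```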

8.
J Opt Soc Am A Opt Image Sci Vis ; 31(12): 2711-8, 2014 Dec 01.
Article in English | MEDLINE | ID: mdl-25606760

ABSTRACT

Many studies analyze resolution limits in single-channel, panchromatic systems. However, color imaging is widespread, so there is a need to model its resolving capacity under noise. This work analyzes the probability of resolving details as a function of spatial frequency in color imaging. The analysis introduces theoretical bounds on performance, using optimal linear filtering and fusion operations. The work focuses on resolution loss caused strictly by noise, without the presence of imaging blur. It applies to full-field color systems, which do not compromise resolution by spatial multiplexing. The framework allows us to assess and optimize the ability of an imaging system to distinguish an object of given size and color under image noise.

9.
Opt Express ; 21(22): 25820-33, 2013 Nov 04.
Article in English | MEDLINE | ID: mdl-24216808

ABSTRACT

Aerosols affect climate, health and aviation. Currently, their retrieval assumes a plane-parallel atmosphere and solely vertical radiative transfer. We propose a principle to estimate the aerosol distribution as it really is: a three-dimensional (3D) volume. The principle is a type of tomography. The process involves wide-angle integral imaging of the sky on a very large scale. The imaging can use an array of cameras in visible light. We formulate an image formation model based on 3D radiative transfer. Model inversion is done using optimization methods, exploiting a closed-form gradient which we derive for the model-fit cost function. The tomography model is distinct, as the radiation source is unidirectional and uncontrolled, while off-axis scattering dominates the images.
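The optimization pattern described here, minimizing a model-fit cost with a closed-form gradient, is sketched below on a deliberately simplified forward model (exponential attenuation along rays). The paper's actual forward model is full 3D radiative transfer with dominant off-axis scattering; the toy operator, sizes, and bounds are assumptions for illustration only.

```python
# Hedged toy illustration of tomography by model inversion with a closed-form gradient.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
L = rng.uniform(0.0, 1.0, size=(40, 10))      # toy ray/voxel path-length matrix
beta_true = rng.uniform(0.0, 0.5, size=10)    # per-voxel extinction to recover
I0 = 1.0
y = I0 * np.exp(-L @ beta_true)               # simulated measurements

def cost_and_grad(beta):
    f = I0 * np.exp(-L @ beta)                # simplified forward model
    r = f - y                                 # residual
    cost = 0.5 * np.dot(r, r)
    grad = L.T @ (-f * r)                     # closed-form gradient of the model-fit cost
    return cost, grad

res = minimize(cost_and_grad, x0=np.zeros(10), jac=True, method="L-BFGS-B",
               bounds=[(0.0, None)] * 10)     # extinction is non-negative
print("max abs error:", np.max(np.abs(res.x - beta_true)))
```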


Subject(s)
Aerosols/analysis; Algorithms; Atmosphere/analysis; Atmosphere/chemistry; Environmental Monitoring/methods; Imaging, Three-Dimensional/methods; Tomography, Optical/methods; Reproducibility of Results; Sensitivity and Specificity
10.
IEEE Trans Pattern Anal Mach Intell ; 35(1): 245-51, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23154325

ABSTRACT

Refraction causes random dynamic distortions in atmospheric turbulence and in views across a water interface. The latter scenario is experienced by submerged animals seeking to detect prey or avoid predators, which may be airborne or on land. Humans encounter it when surveying a scene from a submarine or as divers, while wishing to avoid the use of an attention-drawing periscope. The problem of inverting random refracted dynamic distortions is difficult, particularly when some of the objects in the field of view (FOV) are moving. On the other hand, in many cases precisely those moving objects are of interest, as they reveal animal, human, or machine activity. Furthermore, detecting and tracking these objects does not necessitate the difficult task of completely recovering the scene. We show that moving objects can be detected very simply, with low false-positive rates, even when the distortions are very strong and dominate the object motion. Moreover, a moving object can be detected even if it has zero mean motion. While the object and distortion motions are random and unknown, they are mutually independent. This is expressed by a simple motion feature which enables discrimination of moving-object points from the background.


Subject(s)
Algorithms; Artifacts; Artificial Intelligence; Image Interpretation, Computer-Assisted/methods; Motion; Pattern Recognition, Automated/methods; Refractometry/methods
11.
J Opt Soc Am A Opt Image Sci Vis ; 29(8): 1516-28, 2012 Aug 01.
Article in English | MEDLINE | ID: mdl-23201866

ABSTRACT

Image recovery under noise is widely studied. However, there is little emphasis on performance as a function of object size. In this work we analyze the probability of recovery as a function of object spatial frequency. The analysis uses a physical model for the acquired signal and noise, and also accounts for potential postacquisition noise filtering. Linear-systems analysis yields an effective cutoff frequency, which is induced by noise, despite having no optical blur in the imaging model. This means that a low signal-to-noise ratio (SNR) in images causes resolution loss, similar to image blur. We further consider the effect on SNR of pointwise image formation models, such as added specular or indirect reflections, additive scattering, radiance attenuation in haze, and flash photography. The result is a tool that assesses the ability to recover (within a desirable success rate) an object or feature having a certain size, distance from the camera, and radiance difference from its nearby background, per attenuation coefficient of the medium. The bounds rely on the camera specifications.

12.
Opt Lett ; 37(15): 3207-9, 2012 Aug 01.
Article in English | MEDLINE | ID: mdl-22859134

ABSTRACT

Systems in which the point spread function (PSF) is a rotating beam have increasing use in three-dimensional (3D) microscopy and depth estimation. We analyze, in several ways, the 3D optical transfer function (OTF) of Gauss-Laguerre modes and rotating beams. This is based on an analysis of the 3D OTFs of general aperture functions. Consequently, we suggest a criterion for depth resolution based on an effective cutoff of the axial frequency response. This criterion can be used to optimize PSFs explicitly and directly, to maximize axial resolution.


Subject(s)
Imaging, Three-Dimensional/methods; Microscopy/methods; Optical Phenomena; Rotation
13.
IEEE Trans Image Process ; 21(11): 4662-7, 2012 Nov.
Article in English | MEDLINE | ID: mdl-22829404

ABSTRACT

Ambient light is strongly attenuated in turbid media. Moreover, natural light is often attenuated more strongly in some spectral bands than in others. Hence, imaging in turbid media often relies heavily on artificial sources for illumination. Scenes irradiated by an off-axis single point source have enhanced local object shadow edges, which may increase object visibility. However, the images may suffer from severe nonuniformity, regions of low signal (being distant from the source), and regions of strong backscatter. On the other hand, simultaneously illuminating the scene from multiple directions increases the backscatter and fills in shadows, both of which degrade local contrast. Some previous methods tackle backscatter by scanning the scene, either temporally or spatially, requiring a large number of frames. We suggest using a few frames, in each of which wide-field scene irradiance originates from a different direction. This way, shadow contrast can be maintained and backscatter can be minimized in each frame, while the sequence at large has a wider, more spatially uniform illumination. The frames are then fused by post-processing into a single, clearer image. We demonstrate significant visibility enhancement underwater using as few as two frames.
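One simple fusion heuristic in the spirit of this abstract is shown below: weight each differently-lit frame per pixel by its local contrast, so shadow edges and low-backscatter regions dominate the fused result. This is a hedged stand-in, not the paper's fusion algorithm; the window size and contrast measure are arbitrary choices.

```python
# Contrast-weighted fusion of frames lit from different directions (illustrative heuristic).
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(img, size=9):
    """Local standard deviation as a crude contrast measure."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def fuse(frames, size=9, eps=1e-6):
    """Per-pixel contrast-weighted average of frames lit from different directions."""
    frames = [f.astype(np.float64) for f in frames]
    weights = [local_contrast(f, size) + eps for f in frames]
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, frames)) / total

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    frame_a = rng.uniform(size=(128, 128))    # stand-ins for frames lit from different sides
    frame_b = rng.uniform(size=(128, 128))
    print(fuse([frame_a, frame_b]).shape)
```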

14.
Appl Opt ; 50(28): F89-101, 2011 Oct 01.
Article in English | MEDLINE | ID: mdl-22016251

ABSTRACT

Underwater, natural illumination typically varies strongly in time and space. The reason is that waves on the water surface refract light into the water in a spatiotemporally varying manner. The resulting underwater illumination field forms a caustic network and is known as flicker. This work shows that caustics can be useful for stereoscopic vision, naturally leading to range mapping of the scene. Range triangulation by stereoscopic vision requires determining the correspondence between image points in different viewpoints, which is often a difficult problem. We show that the spatiotemporal caustic pattern very effectively establishes stereo correspondences. Thus, we term the use of this effect CauStereo. The temporal radiance variations due to flicker are unique to each object point, thus disambiguating the correspondence with very simple calculations. Theoretical limitations of the method are analyzed using ray-tracing simulations. The method is demonstrated by underwater in situ experiments.
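The core matching idea, that each point's flicker signature over time is distinctive, can be sketched as a temporal normalized cross-correlation search along an epipolar line. Rectified cameras and the simple disparity loop below are assumptions for illustration; the paper's own correspondence procedure and its limits are analyzed there.

```python
# Hedged sketch of flicker-based stereo correspondence via temporal correlation.
import numpy as np

def normalized_correlation(a, b, eps=1e-9):
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float(np.mean(a * b))

def best_disparity(left_series, right_row_series, x, max_disp=32):
    """left_series: (T,) flicker signal of pixel x in the left view.
    right_row_series: (T, W) flicker signals of the matching row in the right view."""
    scores = [normalized_correlation(left_series, right_row_series[:, x - d])
              for d in range(0, min(max_disp, x) + 1)]
    return int(np.argmax(scores))

# Synthetic check: pixel at x=50 in the left view corresponds to x=42 in the right view.
rng = np.random.default_rng(5)
T, W = 60, 100
right = rng.uniform(size=(T, W))
left_pixel = right[:, 42] + rng.normal(scale=0.05, size=T)
print("estimated disparity:", best_disparity(left_pixel, right, x=50))   # expect 8
```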

15.
Philos Trans R Soc Lond B Biol Sci ; 366(1565): 638-48, 2011 Mar 12.
Article in English | MEDLINE | ID: mdl-21282167

ABSTRACT

Polarization may be sensed by imaging modules. This is done in various engineering systems as well as in biological systems, specifically by insects and some marine species. However, polarization per pixel is usually not the direct variable of interest. Rather, polarization-related data serve as a cue for recovering task-specific scene information. How should polarization-picture post-processing (P⁴) be done for the best scene understanding? Answering this question is helpful not only for advanced engineering (computer vision), but also for prompting hypotheses about the processing occurring within biological systems. In various important cases, the answer is found by a principled expression of scene recovery as an inverse problem. Such an expression relies directly on a physics-based model of effects in the scene. The model includes analysis that depends on the different polarization components, thus facilitating the use of these components during the inversion, in a proper, even if non-trivial, manner. We describe several examples of this approach. These include automatic removal of path radiance in haze or underwater; overcoming partial semi-reflections and visual reverberations; three-dimensional recovery; and distance-adaptive denoising. The resulting inversion algorithms rely on signal-processing methods, such as independent component analysis, deconvolution and optimization.


Subject(s)
Image Enhancement/methods; Light; Models, Theoretical; Algorithms; Computer Simulation; Scattering, Radiation
16.
IEEE Trans Pattern Anal Mach Intell ; 31(3): 385-99, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19147870

ABSTRACT

Vision in scattering media is important but challenging. Images suffer from poor visibility due to backscattering and attenuation. Most prior methods for scene recovery use active illumination scanners (structured and gated), which can be slow and cumbersome, while natural illumination is inapplicable to dark environments. The current paper addresses the need for a non-scanning recovery method that uses active scene irradiance. We study the formation of images under wide-field artificial illumination. Based on the formation model, the paper presents an approach for recovering the object signal. It also yields rough information about the 3D scene structure. The approach can work with compact, simple hardware, having active wide-field, polychromatic polarized illumination. The camera is fitted with a polarization analyzer. Two frames of the scene are taken, with different states of the analyzer or polarizer. A recovery algorithm follows the acquisition. It allows both the backscatter and the object reflection to be partially polarized. It thus unifies and generalizes prior polarization-based methods, which had assumed exclusive polarization of either of these components. The approach is limited to an effective range, due to image noise and illumination falloff. Thus, the limits and noise sensitivity are analyzed. We demonstrate the approach in underwater field experiments.
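As context, the simpler special case that this paper generalizes (backscatter assumed solely responsible for the polarization difference between the two frames) can be written in a few lines. The sketch below shows that classic two-frame recovery; the degree-of-polarization value and the synthetic frames are assumptions, and the paper's generalized algorithm differs.

```python
# Hedged sketch of classic two-frame polarization-based descattering (the special case of
# exclusively polarized backscatter). p_scat is the backscatter degree of polarization,
# assumed known or calibrated; i_best/i_worst are frames at the two analyzer states.
import numpy as np

def descatter(i_best, i_worst, p_scat, eps=1e-6):
    """Estimate backscatter and object signal from two polarization-filtered frames."""
    i_best = i_best.astype(np.float64)
    i_worst = i_worst.astype(np.float64)
    backscatter = np.clip((i_worst - i_best) / max(p_scat, eps), 0.0, None)
    total = i_best + i_worst
    signal = np.clip(total - backscatter, 0.0, None)
    return signal, backscatter

rng = np.random.default_rng(6)
i_best = rng.uniform(0.2, 0.5, size=(64, 64))             # analyzer blocking most backscatter
i_worst = i_best + rng.uniform(0.1, 0.3, size=(64, 64))   # analyzer passing more backscatter
signal, backscatter = descatter(i_best, i_worst, p_scat=0.8)
print(signal.mean(), backscatter.mean())
```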


Subject(s)
Algorithms; Artifacts; Artificial Intelligence; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Refractometry/methods; Light; Reproducibility of Results; Scattering, Radiation; Sensitivity and Specificity
17.
Opt Express ; 17(2): 472-93, 2009 Jan 19.
Article in English | MEDLINE | ID: mdl-19158860

ABSTRACT

Outdoor imaging in haze is plagued by poor visibility. A major problem is the spatially varying reduction of contrast by airlight, which is scattered by the haze particles towards the camera. However, images can be compensated for haze, and can even yield a depth map of the scene. A key step in such scene recovery is subtraction of the airlight. In particular, this can be achieved by analyzing polarization-filtered images. This analysis requires parameters of the airlight, particularly its degree of polarization (DOP). In past studies, these parameters were estimated by measuring pixels in sky areas. However, the sky is often not visible in the field of view. This paper derives several methods for estimating these parameters when the sky is not in view. The methods are based on minor prior knowledge about a couple of scene points. Moreover, we propose blind estimation of the DOP, based on the image data. This estimation is based on independent component analysis (ICA). The methods were demonstrated in field experiments.

18.
Ultrasound Med Biol ; 34(6): 981-1000, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18374469

ABSTRACT

Ultrasound images are very noisy. Along with system noise, a significant noise source is the speckle phenomenon, caused by interference in the viewed object. Most past approaches for denoising ultrasound images essentially blur the image, and they do not handle attenuation. We discuss an approach that does not blur the image and that handles attenuation. It is based on frequency compounding, in which images of the same object are acquired at different acoustic frequencies and then compounded. Existing frequency compounding methods have been based on simple averaging and have achieved only limited enhancement. The reason is that the statistical and physical characteristics of the signal and noise vary with depth, and the noise is correlated between acoustic frequencies. Hence, we suggest two spatially varying frequency compounding methods, based on an understanding of these characteristics. As demonstrated in experiments, the proposed approaches suppress various noise sources and also recover attenuated objects while maintaining high resolution.
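A minimal sketch of spatially varying compounding is given below: envelope images acquired at different acoustic frequencies are averaged with per-pixel weights, here taken inversely proportional to a crude local-variance noise proxy. This only illustrates the general idea; the paper's weights are derived from the measured depth-dependent signal and noise statistics, not from this heuristic.

```python
# Hedged sketch of spatially varying frequency compounding with inverse-variance weights.
import numpy as np
from scipy.ndimage import uniform_filter

def compound(images, noise_floor=1e-3, win=15):
    """images: list of (depth, lateral) envelope images at different acoustic frequencies."""
    images = [im.astype(np.float64) for im in images]
    # Crude per-pixel noise proxy: local variance in a sliding window, per frequency image.
    variances = [np.maximum(uniform_filter(im**2, win) - uniform_filter(im, win)**2, noise_floor)
                 for im in images]
    weights = [1.0 / v for v in variances]
    wsum = sum(weights)
    return sum(w * im for w, im in zip(weights, images)) / wsum

rng = np.random.default_rng(7)
low_freq = rng.rayleigh(scale=1.0, size=(256, 128))    # stand-in speckle images
high_freq = rng.rayleigh(scale=1.0, size=(256, 128))
print(compound([low_freq, high_freq]).shape)
```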


Subject(s)
Image Enhancement/methods; Models, Statistical; Ultrasonography/methods; Artifacts; Humans
19.
J Opt Soc Am A Opt Image Sci Vis ; 24(7): 1920-9, 2007 Jul.
Article in English | MEDLINE | ID: mdl-17728814

ABSTRACT

Some scenarios require performance estimation of an imaging or computer vision system prior to its actual operation, such as during system design, as well as in tasks of high risk or cost. To predict the performance, we propose an image-based approach that accounts for the underlying image-formation processes while using real image data. We give a detailed description of image formation, from scene photons to image gray levels. This analysis includes all the optical, electrical, and digital sources of signal distortion and noise. On the basis of this analysis and our access to the camera parameters, we devise a simple image-based algorithm. It transforms a baseline high-quality image to render an estimated outcome of the system we wish to operate or design. We demonstrate our approach on thermal imaging systems (infrared spectrum, 3-5 µm).
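The general pattern described here, pushing a high-quality baseline image through a simplified camera model to preview a target system's output, is sketched below. The blur, full-well, read-noise, and bit-depth values are placeholders, not those of any specific thermal imager, and the chain is far coarser than the paper's photon-to-gray-level analysis.

```python
# Hedged sketch: degrade a baseline image with a simplified camera model
# (optical blur, photon shot noise, read noise, quantization).
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_camera(radiance, full_well=20000, read_noise_e=30.0, blur_sigma_px=1.2,
                    bits=12, rng=None):
    rng = rng or np.random.default_rng()
    # 1) optics: approximate the system PSF with a Gaussian blur
    blurred = gaussian_filter(radiance, blur_sigma_px)
    # 2) photon shot noise: scale to electrons and draw Poisson counts
    electrons = rng.poisson(np.clip(blurred, 0.0, 1.0) * full_well).astype(np.float64)
    # 3) sensor read noise
    electrons += rng.normal(scale=read_noise_e, size=electrons.shape)
    # 4) quantization to digital numbers
    dn = np.clip(electrons / full_well, 0.0, 1.0) * (2**bits - 1)
    return np.round(dn).astype(np.uint16)

baseline = np.tile(np.linspace(0.0, 1.0, 256), (256, 1))   # synthetic high-quality ramp image
print(simulate_camera(baseline).dtype, simulate_camera(baseline).max())
```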


Subject(s)
Algorithms; Artificial Intelligence; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Models, Theoretical; Pattern Recognition, Automated/methods; Cluster Analysis; Computer Simulation; Reproducibility of Results; Sensitivity and Specificity
20.
IEEE Trans Pattern Anal Mach Intell ; 29(9): 1655-60, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17627052

ABSTRACT

When imaging in scattering media, visibility degrades as objects become more distant. Visibility can be significantly restored by computer vision methods that account for physical processes occurring during image formation. Nevertheless, such recovery is prone to noise amplification in pixels corresponding to distant objects, where the medium transmittance is low. We present an adaptive filtering approach that counters the above problems: while significantly improving visibility relative to raw images, it inhibits noise amplification. Essentially, the recovery formulation is regularized, where the regularization adapts to the spatially varying medium transmittance. Thus, this regularization does not blur close objects. We demonstrate the approach in atmospheric and underwater experiments, based on an automatic method for determining the medium transmittance.
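The idea of regularization that adapts to the spatially varying transmittance can be illustrated as follows: after the standard contrast restoration, noise is amplified where transmittance is low, so the denoising strength is made to grow as transmittance shrinks. In the sketch, Gaussian smoothing and the blending rule stand in for the paper's regularizer purely for illustration, and the haze parameters are synthetic.

```python
# Hedged sketch of transmittance-adaptive regularized recovery.
import numpy as np
from scipy.ndimage import gaussian_filter

def restore(image, transmittance, airlight, t_min=0.1, max_sigma=3.0):
    t = np.clip(transmittance, t_min, 1.0)
    recovered = (image - airlight * (1.0 - t)) / t          # physics-based visibility recovery
    # Blend between light and heavy smoothing according to local transmittance.
    heavy = gaussian_filter(recovered, max_sigma)
    alpha = (1.0 - t) ** 2                                   # more smoothing where t is low
    return alpha * heavy + (1.0 - alpha) * recovered

rng = np.random.default_rng(8)
clean = np.tile(np.linspace(0.2, 0.8, 200), (200, 1))
t_map = np.tile(np.linspace(0.95, 0.15, 200), (200, 1))     # transmittance falls with distance
hazy = clean * t_map + 0.9 * (1.0 - t_map) + rng.normal(scale=0.01, size=clean.shape)
print(restore(hazy, t_map, airlight=0.9).shape)
```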


Subject(s)
Algorithms; Artificial Intelligence; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Nephelometry and Turbidimetry/methods; Light; Reproducibility of Results; Scattering, Radiation; Sensitivity and Specificity