Results 1 - 15 of 15
1.
Opt Express ; 31(15): 23729-23745, 2023 Jul 17.
Article in English | MEDLINE | ID: mdl-37475217

ABSTRACT

3D single-photon LiDAR imaging has an important role in many applications. However, full deployment of this modality will require the analysis of low signal-to-noise-ratio target returns and very high volumes of data. This is particularly evident when imaging through obscurants or in high ambient background light conditions. This paper proposes a multiscale approach for 3D surface detection from the photon timing histogram to permit a significant reduction in data volume. The resulting surfaces are background-free and can be used to infer depth and reflectivity information about the target. We demonstrate this by proposing a hierarchical Bayesian model for 3D reconstruction and spectral classification of multispectral single-photon LiDAR data. The reconstruction method promotes spatial correlation between point-cloud estimates and uses a coordinate gradient descent algorithm for parameter estimation. Results on simulated and real data show the benefits of the proposed target detection and reconstruction approaches when compared to state-of-the-art processing algorithms.
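The histogram-based surface detection described above can be illustrated with a minimal sketch (not the authors' algorithm; the function name, histogram length and coarsening factor are invented for illustration): coarsen the photon-timing histogram by summing blocks of bins, pick the strongest coarse block, then refine the peak position at full resolution.

```python
import numpy as np

def detect_surface_multiscale(hist, factor=8):
    """Toy multiscale peak search on a photon-timing histogram.

    Coarsen the histogram by summing blocks of `factor` bins,
    locate the strongest coarse block, then refine the peak
    position within that block at full resolution.
    """
    n = len(hist) // factor * factor
    coarse = hist[:n].reshape(-1, factor).sum(axis=1)
    block = int(np.argmax(coarse))              # coarse-scale candidate
    lo, hi = block * factor, (block + 1) * factor
    return lo + int(np.argmax(hist[lo:hi]))     # fine-scale refinement

# Simulated histogram: flat Poisson background plus a return at bin 300.
rng = np.random.default_rng(0)
hist = rng.poisson(1.0, size=1024)
hist[300] += 50
print(detect_surface_multiscale(hist))          # → 300
```

The coarse pass touches 8x fewer values than a full scan, which is the kind of data-volume reduction the abstract argues for.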

2.
Neuroimage ; 241: 118418, 2021 11 01.
Article in English | MEDLINE | ID: mdl-34303793

ABSTRACT

Whole-brain estimation of the haemodynamic response function (HRF) in functional magnetic resonance imaging (fMRI) is critical to gain insight into the global status of the neurovascular coupling of an individual in healthy or pathological conditions. Most existing approaches in the literature work on task-fMRI data and rely on the experimental paradigm as a surrogate of neural activity, hence remaining inoperative on resting-state fMRI (rs-fMRI) data. To cope with this issue, recent works have performed either a two-step analysis to detect large neural events and then characterize the HRF shape, or a joint estimation of both the neural and haemodynamic components in a univariate fashion. In this work, we express the neural activity signals as a combination of piecewise-constant temporal atoms associated with sparse spatial maps and introduce a haemodynamic parcellation of the brain featuring a temporally dilated version of a given HRF model in each parcel, with unknown dilation parameters. We formulate the joint estimation of the HRF shapes and spatio-temporal neural representations as a multivariate semi-blind deconvolution problem in a paradigm-free setting and introduce constraints inspired by the dictionary-learning literature to ease its identifiability. A fast alternating minimization algorithm, along with its efficient implementation, is proposed and validated on both synthetic and real rs-fMRI data at the subject level. To demonstrate its significance at the population level, we apply this new framework to the UK Biobank data set, first for the discrimination of haemodynamic territories between balanced groups (n=24 individuals in each) of patients with a history of stroke and healthy controls, and second for the analysis of the effect of normal aging on neurovascular coupling. Overall, we statistically demonstrate that a pathology like stroke or a condition like normal brain aging induces longer haemodynamic delays in certain brain areas (e.g. the circle of Willis and the occipital, temporal and frontal cortices) and that this haemodynamic feature may predict an individual's age with an accuracy of 74% in a supervised classification task performed on n=459 subjects.


Subject(s)
Brain/diagnostic imaging, Brain/physiology, Hemodynamics/physiology, Magnetic Resonance Imaging/methods, Psychomotor Performance/physiology, Aged, Aged 80 and over, Aging/physiology, Databases Factual/statistics & numerical data, Female, Humans, Male, Middle Aged, Multivariate Analysis, Stroke/diagnostic imaging, Stroke/physiopathology
3.
Opt Express ; 29(21): 33184-33196, 2021 Oct 11.
Article in English | MEDLINE | ID: mdl-34809135

ABSTRACT

3D time-of-flight (ToF) imaging is used in a variety of applications such as augmented reality (AR), computer interfaces, robotics and autonomous systems. Single-photon avalanche diodes (SPADs) are one of the enabling technologies providing accurate depth data even over long ranges. By developing SPADs in array format with integrated processing combined with pulsed, flood-type illumination, high-speed 3D capture is possible. However, array sizes tend to be relatively small, limiting the lateral resolution of the resulting depth maps and, consequently, the information that can be extracted from the image for applications such as object detection. In this paper, we demonstrate that these limitations can be overcome through the use of convolutional neural networks (CNNs) for high-performance object detection. We present outdoor results from a portable SPAD camera system that outputs 16-bin photon timing histograms with 64×32 spatial resolution, with each histogram containing thousands of photons. The results, obtained with exposure times down to 2 ms (equivalent to 500 FPS) and at signal-to-background ratios (SBR) as low as 0.05, point to the advantages of providing the CNN with full histogram data rather than point clouds alone. Alternatively, a combination of point-cloud and active intensity data may be used as input for a similar level of performance. In either case, the GPU-accelerated processing time is less than 1 ms per frame, leading to an overall latency (image acquisition plus processing) in the millisecond range, making the results relevant for safety-critical computer vision applications which would benefit from faster-than-human reaction times.
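The difference between feeding a CNN full histograms versus point clouds can be made concrete with a toy sketch (array shapes mirror the 16-bin histogram format described; the data and variable names are invented for illustration):

```python
import numpy as np

# Hypothetical SPAD frame: 32 x 64 pixels, 16 timing bins per pixel,
# mirroring the histogram format described in the abstract.
rng = np.random.default_rng(1)
frames = rng.poisson(2.0, size=(32, 64, 16))

# Point-cloud reduction: keep only the most-populated bin per pixel.
depth_bin = frames.argmax(axis=-1)        # (32, 64) depth indices
peak_cnt = frames.max(axis=-1)            # photon count at the peak

# The full histogram retains everything the reduction throws away:
total = frames.sum(axis=-1)               # total photons per pixel
background = total - peak_cnt             # photons outside the peak bin

print(depth_bin.shape, frames.size, depth_bin.size)   # → (32, 64) 32768 2048
```

Collapsing each 16-bin histogram to a single depth index discards a factor of 16 in data, including the background and multi-return structure that the abstract suggests the CNN can exploit.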

4.
Opt Express ; 29(8): 11917-11937, 2021 Apr 12.
Article in English | MEDLINE | ID: mdl-33984963

ABSTRACT

The number of applications that use depth imaging is increasing rapidly, e.g. autonomous vehicles and auto-focus assist on smartphone cameras. Light detection and ranging (LIDAR) via single-photon avalanche diode (SPAD) arrays is an emerging technology that enables the acquisition of depth images at high frame rates. However, the spatial resolution of this technology is typically low in comparison to the intensity images recorded by conventional cameras. To increase the native resolution of depth images from a SPAD camera, we develop a deep network built to take advantage of the multiple features that can be extracted from a camera's histogram data. The network is designed for a SPAD camera operating in a dual mode such that it captures alternate low-resolution depth and high-resolution intensity images at high frame rates, so the system does not require any additional sensor to provide intensity images. The network then uses the intensity images and multiple features extracted from down-sampled histograms to guide the up-sampling of the depth. Our network provides significant image resolution enhancement and image denoising across a wide range of signal-to-noise ratios and photon levels. Additionally, we show that the network can be applied to other types of SPAD data, demonstrating the generality of the algorithm.

5.
Opt Express ; 28(2): 1330-1344, 2020 Jan 20.
Article in English | MEDLINE | ID: mdl-32121846

ABSTRACT

We present a scanning light detection and ranging (LIDAR) system incorporating an individual Ge-on-Si single-photon avalanche diode (SPAD) detector for depth and intensity imaging in the short-wavelength infrared region. The time-correlated single-photon counting technique was used to determine the return photon time-of-flight for target depth information. In laboratory demonstrations, depth and intensity reconstructions were made of targets at short range, using advanced image processing algorithms tailored for the analysis of single-photon time-of-flight data. These laboratory measurements were used to predict the performance of the single-photon LIDAR system at longer ranges, providing estimates that sub-milliwatt average power levels would be required for kilometer-range depth measurements.
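The time-of-flight-to-depth relationship underlying this time-correlated single-photon counting approach is the standard d = c·t/2; a minimal sketch (the function name is illustrative):

```python
# Converting a measured photon time-of-flight into target depth.
# The factor of 2 accounts for the round trip to the target and back.
C = 299_792_458.0  # speed of light, m/s

def tof_to_depth(t_seconds):
    return C * t_seconds / 2.0

# A round-trip time of about 6.671 ns corresponds to roughly 1 m of range:
print(round(tof_to_depth(6.671e-9), 3))   # → 1.0
```

The picosecond timing resolution the abstracts mention translates, via the same formula, to sub-millimetre depth increments (1 ps ≈ 0.15 mm).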

6.
Opt Express ; 27(4): 4590-4611, 2019 Feb 18.
Article in English | MEDLINE | ID: mdl-30876075

ABSTRACT

We investigate the depth imaging of objects through various densities of different obscurants (water fog, glycol-based vapor, and incendiary smoke) using a time-correlated single-photon detection system which had an operating wavelength of 1550 nm and an average optical output power of approximately 1.5 mW. It consisted of a monostatic scanning transceiver unit used in conjunction with a picosecond laser source and an individual Peltier-cooled InGaAs/InP single-photon avalanche diode (SPAD) detector. We acquired depth and intensity data of targets imaged through distances of up to 24 meters for the different obscurants. We compare several statistical algorithms which reconstruct both the depth and intensity images for short data acquisition times, including very low signal returns in the photon-starved regime.

7.
Opt Express ; 26(5): 5541-5557, 2018 Mar 05.
Article in English | MEDLINE | ID: mdl-29529757

ABSTRACT

A CMOS single-photon avalanche diode (SPAD) quanta image sensor is used to reconstruct depth and intensity profiles when operating in a range-gated mode used in conjunction with pulsed laser illumination. By designing the CMOS SPAD array to acquire photons within a pre-determined temporal gate, the need for timing circuitry was avoided and it was therefore possible to achieve an enhanced fill factor (61% in this case) and a frame rate (100,000 frames per second) that would be more difficult to achieve in a SPAD array using time-correlated single-photon counting. When coupled with appropriate image reconstruction algorithms, millimeter resolution depth profiles were achieved by iterating through a sequence of temporal delay steps in synchronization with laser illumination pulses. For photon data with high signal-to-noise ratios, depth images with millimeter scale depth uncertainty can be estimated using a standard cross-correlation approach. To enhance the estimation of depth and intensity images in the sparse photon regime, we used a bespoke clustering-based image restoration strategy, taking into account the binomial statistics of the photon data and non-local spatial correlations within the scene. For sparse photon data with total exposure times of 75 ms or less, the bespoke algorithm can reconstruct depth images with millimeter scale depth uncertainty at a stand-off distance of approximately 2 meters. We demonstrate a new approach to single-photon depth and intensity profiling using different target scenes, taking full advantage of the high fill factor, high frame rate and large array format of this range-gated CMOS SPAD array.
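The "standard cross-correlation approach" mentioned for high-SNR data can be sketched in one dimension (a toy illustration, not the paper's implementation; the histogram length, IRF shape and bin positions are invented):

```python
import numpy as np

def xcorr_depth(hist, irf):
    """Estimate the return-peak position by cross-correlating the
    photon-timing histogram with the instrumental response (IRF)."""
    corr = np.correlate(hist.astype(float), irf.astype(float), mode="same")
    return int(np.argmax(corr))

# Toy data: a 9-bin Gaussian-shaped IRF and a return centred on bin 120.
bins = np.arange(256)
irf = np.exp(-0.5 * ((bins[:9] - 4) / 1.5) ** 2)
rng = np.random.default_rng(2)
hist = rng.poisson(0.5, 256).astype(float)
hist[116:125] += 20 * irf               # embed the return at bins 116-124
print(xcorr_depth(hist, irf))           # → 120
```

The cross-correlation acts as a matched filter: its argmax lands on the bin where the IRF best overlaps the embedded return, which is then converted to depth via d = c·t/2.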

8.
Opt Express ; 25(10): 11919-11931, 2017 May 15.
Article in English | MEDLINE | ID: mdl-28788749

ABSTRACT

Depth and intensity profiling of targets at ranges of up to 10 km is demonstrated using the time-of-flight time-correlated single-photon counting technique. The system comprised a pulsed laser source at 1550 nm wavelength, a monostatic scanning transceiver and a single-element InGaAs/InP single-photon avalanche diode (SPAD) detector. High-resolution three-dimensional images of various targets acquired over ranges between 800 metres and 10.5 km demonstrate long-range depth and intensity profiling, feature extraction and the potential for target recognition. Using a total variation restoration optimization algorithm, the acquisition time necessary for each pixel could be reduced by at least a factor of ten compared to a pixel-wise image processing approach. Kilometer-range depth profiles are reconstructed with average signal returns of less than one photon per pixel.

9.
Sci Adv ; 8(48): eade0123, 2022 Dec 02.
Article in English | MEDLINE | ID: mdl-36449608

ABSTRACT

Single-photon-sensitive depth sensors are being increasingly used in next-generation electronics for human pose and gesture recognition. However, cost-effective sensors typically have a low spatial resolution, restricting their use to basic motion identification and simple object detection. Here, we perform a temporal-to-spatial mapping that drastically increases the resolution of a simple time-of-flight sensor, i.e., from an initial resolution of 4 × 4 pixels to depth images of 32 × 32 pixels. The output depth maps can then be used for accurate three-dimensional human pose estimation of multiple people. We develop a new explainable framework that provides intuition into how our network uses its input data and provides key information about the relevant parameters. Our work greatly expands the use cases of simple single-photon avalanche detector time-of-flight sensors and opens up promising possibilities for future super-resolution techniques applied to other types of sensors with similar data types, e.g., radar and sonar.

10.
Sci Rep ; 11(1): 11236, 2021 May 27.
Article in English | MEDLINE | ID: mdl-34045553

ABSTRACT

Recently, time-of-flight LiDAR using the single-photon detection approach has emerged as a potential solution for three-dimensional imaging in challenging measurement scenarios, such as over distances of many kilometres. The high sensitivity and picosecond timing resolution afforded by single-photon detection offers high-resolution depth profiling of remote, complex scenes while maintaining low power optical illumination. These properties are ideal for imaging in highly scattering environments such as through atmospheric obscurants, for example fog and smoke. In this paper we present the reconstruction of depth profiles of moving objects through high levels of obscurant equivalent to five attenuation lengths between transceiver and target at stand-off distances up to 150 m. We used a robust statistically based processing algorithm designed for the real time reconstruction of single-photon data obtained in the presence of atmospheric obscurant, including providing uncertainty estimates in the depth reconstruction. This demonstration of real-time 3D reconstruction of moving scenes points a way forward for high-resolution imaging from mobile platforms in degraded visual environments.

11.
Article in English | MEDLINE | ID: mdl-31831417

ABSTRACT

This paper presents a new algorithm for the learning of spatial correlation and non-local restoration of single-photon 3D Lidar images acquired in the photon-starved regime (on the order of one photon per pixel or less) or with a reduced number of scanned spatial points (pixels). The algorithm alternates between three steps: (i) extraction of multi-scale information, (ii) construction of a robust graph of non-local spatial correlations between pixels, and (iii) restoration of the depth and reflectivity images. A non-uniform sampling approach, which assigns larger patches to homogeneous regions and smaller ones to heterogeneous regions, is adopted to reduce the computational cost associated with the graph. The restoration of the 3D images is achieved by minimizing a cost function accounting for the multi-scale information and the non-local spatial correlation between patches. This minimization problem is efficiently solved using the alternating direction method of multipliers (ADMM), which has fast convergence properties. Various results based on simulated and real Lidar data show the benefits of the proposed algorithm, which improves the quality of the estimated depth and reflectivity images, especially in the photon-starved regime or when the data contain a reduced number of spatial points.
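ADMM is a generic splitting scheme; as a hedged illustration of its alternating-update structure only, here it is applied to a standard lasso problem rather than the paper's multi-scale cost function (all names and sizes are invented):

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Generic ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1,
    shown only to illustrate the alternating-update structure."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.inv(AtA + rho * np.eye(n))   # cached x-update solve
    x = z = u = np.zeros(n)
    for _ in range(iters):
        x = L @ (Atb + rho * (z - u))          # quadratic subproblem
        z = soft(x + u, lam / rho)             # sparsity subproblem
        u = u + x - z                          # dual (scaled) update
    return z

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = admm_lasso(A, b)
print(np.argsort(-np.abs(x_hat))[:2])   # indices of the two largest coefficients
```

Each iteration splits the objective into an easy quadratic solve and a cheap proximal step, which is the property that makes ADMM attractive for the large image-restoration problems described above.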

12.
Sci Rep ; 9(1): 8075, 2019 May 30.
Article in English | MEDLINE | ID: mdl-31147564

ABSTRACT

The ability to measure and record high-resolution depth images at long stand-off distances is important for a wide range of applications, including connected and autonomous vehicles, defense and security, and agriculture and mining. In LIDAR (light detection and ranging) applications, single-photon sensitive detection is an emerging approach, offering high sensitivity to light and picosecond temporal resolution, and consequently excellent surface-to-surface resolution. The use of large format CMOS (complementary metal-oxide semiconductor) single-photon detector arrays provides high spatial resolution and allows the timing information to be acquired simultaneously across many pixels. In this work, we combine state-of-the-art single-photon detector array technology with non-local data fusion to generate high resolution three-dimensional depth information of long-range targets. The system is based on a visible pulsed illumination system at a wavelength of 670 nm and a 240 × 320 array sensor, achieving sub-centimeter precision in all three spatial dimensions at a distance of 150 meters. The non-local data fusion combines information from an optical image with sparse sampling of the single-photon array data, providing accurate depth information at low signature regions of the target.

13.
IEEE Trans Image Process ; 25(10): 4565-79, 2016 10.
Article in English | MEDLINE | ID: mdl-27416597

ABSTRACT

This paper presents three hyperspectral mixture models jointly with Bayesian algorithms for supervised hyperspectral unmixing. Based on the residual component analysis model, the proposed general formulation assumes the linear model to be corrupted by an additive term whose expression can be adapted to account for nonlinearities (NLs), endmember variability (EV), or mismodeling effects (MEs). The NL effect is introduced by considering a polynomial expression that is related to bilinear models. The proposed new formulation of EV accounts for shape and scale endmember changes while enforcing a smooth spectral/spatial variation. The ME formulation considers the effect of outliers and copes with some types of EV and NL. The known constraints on the parameter of each observation model are modeled via suitable priors. The posterior distribution associated with each Bayesian model is optimized using a coordinate descent algorithm, which allows the computation of the maximum a posteriori estimator of the unknown model parameters. The proposed mixture and Bayesian models and their estimation algorithms are validated on both synthetic and real images showing competitive results regarding the quality of the inferences and the computational complexity, when compared with the state-of-the-art algorithms.

14.
IEEE Trans Image Process ; 24(12): 4904-17, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26302517

ABSTRACT

This paper presents an unsupervised Bayesian algorithm for hyperspectral image unmixing, accounting for endmember variability. The pixels are modeled by a linear combination of endmembers weighted by their corresponding abundances. However, the endmembers are assumed random to consider their variability in the image. An additive noise is also considered in the proposed model, generalizing the normal compositional model. The proposed algorithm exploits the whole image to benefit from both spectral and spatial information. It estimates both the mean and the covariance matrix of each endmember in the image. This allows the behavior of each material to be analyzed and its variability to be quantified in the scene. A spatial segmentation is also obtained based on the estimated abundances. In order to estimate the parameters associated with the proposed Bayesian model, we propose to use a Hamiltonian Monte Carlo algorithm. The performance of the resulting unmixing strategy is evaluated through simulations conducted on both synthetic and real data.
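The linear mixing model underlying this family of unmixing methods can be sketched with a toy forward model and an unconstrained least-squares inversion (the paper's Bayesian treatment of endmember variability is far richer; all sizes and values here are invented):

```python
import numpy as np

# Linear mixing model: each pixel is a combination of endmember
# spectra weighted by abundances, plus additive noise.
rng = np.random.default_rng(4)
bands, n_end = 30, 3
E = rng.uniform(0.0, 1.0, size=(bands, n_end))   # endmember spectra (columns)
a = np.array([0.6, 0.3, 0.1])                    # abundances, sum to one
pixel = E @ a + 0.001 * rng.standard_normal(bands)

# Unconstrained least-squares inversion as a baseline abundance estimate.
a_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(np.round(a_hat, 2))
```

In practice abundances are additionally constrained to be non-negative and to sum to one, and treating the endmembers themselves as random (as in the abstract above) is what captures their variability across the scene.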

15.
IEEE Trans Image Process ; 21(6): 3017-25, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22345533

ABSTRACT

This paper presents a nonlinear mixing model for hyperspectral image unmixing. The proposed model assumes that the pixel reflectances are nonlinear functions of pure spectral components contaminated by an additive white Gaussian noise. These nonlinear functions are approximated using polynomial functions leading to a polynomial postnonlinear mixing model. A Bayesian algorithm and optimization methods are proposed to estimate the parameters involved in the model. The performance of the unmixing strategies is evaluated by simulations conducted on synthetic and real data.
