Results 1 - 17 of 17
1.
Opt Express ; 32(10): 16645-16656, 2024 May 06.
Article in English | MEDLINE | ID: mdl-38858865

ABSTRACT

Single-Photon Avalanche Diode (SPAD) direct Time-of-Flight (dToF) sensors provide depth imaging over long distances, enabling the detection of objects even in the absence of contrast in colour or texture. However, distant objects are represented by just a few pixels and are subject to noise from solar interference, limiting the applicability of existing computer vision techniques for high-level scene interpretation. We present a new SPAD-based vision system for human activity recognition, based on convolutional and recurrent neural networks, which is trained entirely on synthetic data. In tests using real data from a 64×32 pixel SPAD, captured over a distance of 40 m, the scheme successfully overcomes the limited transverse resolution (in which human limbs are approximately one pixel across), achieving an average accuracy of 89% in distinguishing between seven different activities. The approach analyses continuous streams of video-rate depth data at a maximum rate of 66 FPS when executed on a GPU, making it well suited for real-time applications such as surveillance or situational awareness in autonomous systems.


Subject(s)
Photons; Humans; Human Activities; Neural Networks, Computer; Pattern Recognition, Automated/methods; Equipment Design
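
As an illustration of the kind of architecture the abstract describes (per-frame convolutional features fed into a recurrent layer), here is a minimal sketch in PyTorch, assuming 64×32 single-channel depth frames and seven activity classes. The layer sizes and the 16-frame clip length are assumptions for illustration, not the authors' design.

```python
import torch
import torch.nn as nn

class DepthActivityNet(nn.Module):
    """CNN feature extractor per depth frame + LSTM over time (illustrative)."""
    def __init__(self, n_classes: int = 7):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64x32 -> 32x16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32x16 -> 16x8
            nn.Flatten(),                                                 # 32*16*8 = 4096
        )
        self.rnn = nn.LSTM(input_size=4096, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                     # x: (batch, time, 1, 64, 32)
        b, t = x.shape[:2]
        f = self.cnn(x.flatten(0, 1))         # per-frame features
        _, (h, _) = self.rnn(f.view(b, t, -1))
        return self.head(h[-1])               # logits over the activity classes

logits = DepthActivityNet()(torch.randn(2, 16, 1, 64, 32))  # dummy 16-frame clips
```
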
2.
Nature ; 563(7733): 701-704, 2018 11.
Article in English | MEDLINE | ID: mdl-30429614

ABSTRACT

Human pluripotent cell lines hold enormous promise for the development of cell-based therapies. Safety, however, is a crucial prerequisite for clinical applications. Numerous groups have attempted to eliminate potentially harmful cells through the use of suicide genes, but none has quantitatively defined the safety level of transplant therapies. Here, using genome-engineering strategies, we demonstrate the protection of a suicide system from inactivation in dividing cells. We created a transcriptional link between the suicide gene herpes simplex virus thymidine kinase (HSV-TK) and a cell-division gene (CDK1); this combination is designated the safe-cell system. Furthermore, we used a mathematical model to quantify the safety level of the cell therapy as a function of the number of cells needed for the therapy and the type of genome editing that is performed. Even with the highly conservative estimates described here, we anticipate that our solution will rapidly accelerate the entry of cell-based medicine into the clinic.


Subject(s)
CDC2 Protein Kinase/genetics; Cell Division/genetics; Cell- and Tissue-Based Therapy/methods; Genes, Transgenic, Suicide/genetics; Patient Safety; Animals; Cell Proliferation; Cell- and Tissue-Based Therapy/standards; Embryonic Stem Cells/cytology; Embryonic Stem Cells/metabolism; Female; Ganciclovir/pharmacology; Humans; Male; Mice; Mice, Inbred C57BL; Simplexvirus/enzymology; Simplexvirus/genetics; Thymidine Kinase/genetics; Thymidine Kinase/metabolism
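
The abstract's quantitative model is not given here; as a loudly hypothetical toy illustration of how a safety level can scale with dose, one can compute the probability that at least one of N transplanted cells escapes, assuming independent per-cell suicide-gene inactivation with probability p. Both N and p below are invented for illustration, not figures from the paper.

```python
# Toy escape-probability model: P(at least one escaper) = 1 - (1 - p)^N.
# p is a hypothetical per-cell suicide-gene inactivation probability.
def escape_probability(n_cells: float, p: float) -> float:
    return 1.0 - (1.0 - p) ** n_cells

for n in (1e6, 1e8, 1e10):
    print(f"N = {n:.0e}: P(escape) = {escape_probability(n, 1e-9):.4f}")
```
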
3.
Opt Express ; 31(5): 7060-7072, 2023 Feb 27.
Article in English | MEDLINE | ID: mdl-36859845

ABSTRACT

3D time-of-flight (ToF) image sensors are used widely in applications such as self-driving cars, augmented reality (AR), and robotics. When implemented with single-photon avalanche diodes (SPADs), compact, array format sensors can be made that offer accurate depth maps over long distances, without the need for mechanical scanning. However, array sizes tend to be small, leading to low lateral resolution, which, combined with low signal-to-background ratio (SBR) levels under high ambient illumination, may lead to difficulties in scene interpretation. In this paper, we use synthetic depth sequences to train a 3D convolutional neural network (CNN) for denoising and upscaling (×4) of depth data. Experimental results, based on synthetic as well as real ToF data, are used to demonstrate the effectiveness of the scheme. With GPU acceleration, frames are processed at >30 frames per second, making the approach suitable for low-latency imaging, as required for obstacle avoidance.
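
A minimal sketch of a 3D convolutional denoise-and-upscale stage of the kind the abstract describes, assuming short depth sequences shaped (batch, 1, time, height, width) and a ×4 spatial factor; the layer counts and channel widths are assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class DepthUpscaler3D(nn.Module):
    """3D convs over (time, y, x) depth sequences, then x4 spatial upsampling."""
    def __init__(self):
        super().__init__()
        self.denoise = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.upsample = nn.Upsample(scale_factor=(1, 4, 4), mode='trilinear',
                                    align_corners=False)   # spatial only, not time
        self.refine = nn.Conv3d(16, 1, kernel_size=3, padding=1)

    def forward(self, x):                  # x: (batch, 1, T, H, W) noisy depth
        f = self.denoise(x)
        f = self.upsample(f)               # (batch, 16, T, 4H, 4W)
        return self.refine(f)              # (batch, 1, T, 4H, 4W) clean depth

out = DepthUpscaler3D()(torch.randn(1, 1, 8, 32, 16))  # -> (1, 1, 8, 128, 64)
```
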

4.
Sensors (Basel) ; 23(21)2023 Nov 03.
Article in English | MEDLINE | ID: mdl-37960642

ABSTRACT

Self-driving vehicles demand efficient and reliable depth-sensing technologies. Lidar, with its capability for long-distance, high-precision measurement, is a crucial component in this pursuit. However, conventional mechanical scanning implementations suffer from reliability, cost, and frame rate limitations. Solid-state lidar solutions have emerged as a promising alternative, but the vast amount of photon data processed and stored using conventional direct time-of-flight (dToF) techniques prevents long-distance sensing unless power-intensive partial histogram approaches are used. In this paper, we introduce a groundbreaking 'guided' dToF approach, harnessing external guidance from other onboard sensors to narrow down the depth search space for a power- and data-efficient solution. This approach centers around a dToF sensor in which the exposed time window of independent pixels can be dynamically adjusted. We utilize a 64-by-32 macropixel dToF sensor and a pair of vision cameras to provide the guiding depth estimates. Our demonstrator captures a dynamic outdoor scene at 3 fps with distances up to 75 m. Compared to a conventional full histogram approach, on-chip data is reduced by over twenty times, while the total laser cycles in each frame are reduced by at least six times compared to any partial histogram approach. The capability of guided dToF to mitigate multipath reflections is also demonstrated. For self-driving vehicles where a wealth of sensor data is already available, guided dToF opens new possibilities for efficient solid-state lidar.
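
The core of the guided approach can be illustrated in a few lines: a coarse depth estimate from another sensor is converted into a narrow per-pixel exposure window, so each pixel only collects and histograms photons near the expected return. This sketch assumes a 2 m search margin and is not the paper's implementation.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def guided_gate(guide_depth_m, margin_m=2.0):
    """Per-pixel ToF time windows from a guide depth map (illustrative).

    guide_depth_m: (H, W) coarse depth estimates, e.g. from stereo cameras.
    margin_m: half-width of the search window around the guide (assumed value).
    Returns (t_start, t_stop) arrays in seconds for each pixel's exposed window.
    """
    t_center = 2.0 * guide_depth_m / C        # round-trip time to the guide depth
    dt = 2.0 * margin_m / C
    return t_center - dt, t_center + dt

t0, t1 = guided_gate(np.full((32, 64), 75.0))   # every pixel guided to ~75 m
print(f"{t0[0, 0] * 1e9:.1f} ns to {t1[0, 0] * 1e9:.1f} ns")  # ~487 to ~513 ns
```
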

5.
Opt Express ; 29(14): 22504-22516, 2021 Jul 05.
Article in English | MEDLINE | ID: mdl-34266012

ABSTRACT

Light-in-flight (LIF) imaging is the measurement and reconstruction of light's path as it moves and interacts with objects. It is well known that relativistic effects can result in apparent velocities that differ significantly from the speed of light. However, less well known is that Rayleigh scattering and the effects of imaging optics can lead to observed intensities changing by several orders of magnitude along light's path. We develop a model that enables us to correct for all of these effects, allowing us to accurately invert the observed data and reconstruct the true intensity-corrected optical path of a laser pulse as it travels in air. We demonstrate the validity of our model by observing the photon arrival time and intensity distribution obtained from single-photon avalanche detector (SPAD) array data for a laser pulse propagating towards and away from the camera. We can then reconstruct the true intensity-corrected path of the light in four dimensions (three spatial dimensions and time).
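
A toy version of the correction model, under an assumed geometry (pulse travelling along a straight line viewed by a camera at a fixed stand-off distance): the apparent arrival time adds the scattering-point-to-camera delay r/c, and the observed intensity is modulated by the Rayleigh phase function (1 + cos²θ) and a 1/r² imaging falloff, both of which can be divided out.

```python
import numpy as np

C = 3e8  # m/s

# Pulse travels along the x-axis; the camera sits at (0, d) observing the path.
x = np.linspace(-5.0, 5.0, 201)   # pulse positions along its path (m)
d = 1.0                           # camera stand-off distance (assumed)
r = np.hypot(x, d)                # scattering point -> camera distance
cos_theta = x / r                 # scattering angle w.r.t. propagation direction

t_true = (x - x[0]) / C           # time at which the pulse actually reaches x
t_obs = t_true + r / C            # camera sees each event delayed by r/c

# Rayleigh phase function (1 + cos^2 theta) and 1/r^2 imaging falloff.
intensity_obs = (1.0 + cos_theta**2) / r**2
intensity_corrected = intensity_obs * r**2 / (1.0 + cos_theta**2)  # ~constant
```
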

6.
Opt Express ; 29(21): 33184-33196, 2021 Oct 11.
Article in English | MEDLINE | ID: mdl-34809135

ABSTRACT

3D time-of-flight (ToF) imaging is used in a variety of applications such as augmented reality (AR), computer interfaces, robotics and autonomous systems. Single-photon avalanche diodes (SPADs) are one of the enabling technologies providing accurate depth data even over long ranges. By developing SPADs in array format with integrated processing combined with pulsed, flood-type illumination, high-speed 3D capture is possible. However, array sizes tend to be relatively small, limiting the lateral resolution of the resulting depth maps and, consequently, the information that can be extracted from the image for applications such as object detection. In this paper, we demonstrate that these limitations can be overcome through the use of convolutional neural networks (CNNs) for high-performance object detection. We present outdoor results from a portable SPAD camera system that outputs 16-bin photon timing histograms with 64×32 spatial resolution, with each histogram containing thousands of photons. The results, obtained with exposure times down to 2 ms (equivalent to 500 FPS) and at signal-to-background ratios (SBR) as low as 0.05, point to the advantages of providing the CNN with full histogram data rather than point clouds alone. Alternatively, a combination of point cloud and active intensity data may be used as input, for a similar level of performance. In either case, the GPU-accelerated processing time is less than 1 ms per frame, leading to an overall latency (image acquisition plus processing) in the millisecond range, making the results relevant for safety-critical computer vision applications which would benefit from faster than human reaction times.
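
The comparison between input formats can be made concrete: reducing each pixel's 16-bin histogram to a single peak position (a point cloud) discards photon counts and background level, whereas handing the network the full histogram preserves them. A small sketch with simulated counts, with shapes matching the 64×32×16 sensor output:

```python
import numpy as np

# Simulated sensor output: 16-bin timing histograms at 64x32 resolution.
rng = np.random.default_rng(0)
histograms = rng.poisson(lam=5.0, size=(32, 64, 16))   # photon counts per bin

# Point-cloud-style reduction: keep only each pixel's peak bin as a depth index.
# This discards photon counts, pulse shape, and background level.
depth_index = histograms.argmax(axis=-1)               # (32, 64)

# Full-histogram input: hand the CNN every bin, normalised per pixel,
# so it can also see signal strength and ambient background.
full_input = histograms / histograms.sum(axis=-1, keepdims=True)
print(depth_index.shape, full_input.shape)             # (32, 64) (32, 64, 16)
```
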

7.
Opt Express ; 29(8): 11917-11937, 2021 Apr 12.
Article in English | MEDLINE | ID: mdl-33984963

ABSTRACT

The number of applications that use depth imaging is increasing rapidly, e.g. self-driving vehicles and auto-focus assist on smartphone cameras. Light detection and ranging (LIDAR) via single-photon avalanche diode (SPAD) arrays is an emerging technology that enables the acquisition of depth images at high frame rates. However, the spatial resolution of this technology is typically low in comparison to the intensity images recorded by conventional cameras. To increase the native resolution of depth images from a SPAD camera, we develop a deep network built to take advantage of the multiple features that can be extracted from a camera's histogram data. The network is designed for a SPAD camera operating in a dual mode such that it captures alternate low resolution depth and high resolution intensity images at high frame rates; thus the system does not require any additional sensor to provide intensity images. The network then uses the intensity images and multiple features extracted from down-sampled histograms to guide the up-sampling of the depth. Our network provides significant image resolution enhancement and image denoising across a wide range of signal-to-noise ratios and photon levels. Additionally, we show that the network can be applied to other types of SPAD data, demonstrating the generality of the algorithm.
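
The paper's method is a deep network, but the guidance idea it relies on can be sketched with a classical stand-in: joint bilateral upsampling, where each high-resolution depth value is a weighted average of nearby low-resolution depth samples, with weights set by spatial distance and by similarity in the high-resolution intensity image. The parameters below are assumed.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, intensity_hr, scale=4,
                             sigma_s=2.0, sigma_i=0.1):
    """Classical joint bilateral upsampling (a stand-in for the paper's network).

    depth_lr:     (h, w) low-resolution depth map.
    intensity_hr: (h*scale, w*scale) high-resolution intensity in [0, 1].
    """
    H, W = intensity_hr.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / scale, x / scale          # position on the LR grid
            y0, x0 = int(yl), int(xl)
            w_sum = v_sum = 0.0
            for dy in (-1, 0, 1):                  # 3x3 LR neighbourhood
                for dx in (-1, 0, 1):
                    yy = min(max(y0 + dy, 0), depth_lr.shape[0] - 1)
                    xx = min(max(x0 + dx, 0), depth_lr.shape[1] - 1)
                    ds2 = (yl - yy) ** 2 + (xl - xx) ** 2
                    di = intensity_hr[y, x] - intensity_hr[min(yy * scale, H - 1),
                                                           min(xx * scale, W - 1)]
                    w = np.exp(-ds2 / (2 * sigma_s**2)) * np.exp(-di**2 / (2 * sigma_i**2))
                    w_sum += w
                    v_sum += w * depth_lr[yy, xx]
            out[y, x] = v_sum / w_sum
    return out

depth_hr = joint_bilateral_upsample(np.random.rand(8, 8), np.random.rand(32, 32))
```
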

8.
Opt Express ; 26(5): 5541-5557, 2018 Mar 05.
Article in English | MEDLINE | ID: mdl-29529757

ABSTRACT

A CMOS single-photon avalanche diode (SPAD) quanta image sensor is used to reconstruct depth and intensity profiles when operating in a range-gated mode in conjunction with pulsed laser illumination. Designing the CMOS SPAD array to acquire photons within a pre-determined temporal gate avoids the need for timing circuitry, making it possible to achieve an enhanced fill factor (61% in this case) and a frame rate (100,000 frames per second) that are more difficult to achieve in a SPAD array which uses time-correlated single-photon counting. When coupled with appropriate image reconstruction algorithms, millimeter-resolution depth profiles were achieved by iterating through a sequence of temporal delay steps in synchronization with laser illumination pulses. For photon data with high signal-to-noise ratios, depth images with millimeter-scale depth uncertainty can be estimated using a standard cross-correlation approach. To enhance the estimation of depth and intensity images in the sparse photon regime, we used a bespoke clustering-based image restoration strategy, taking into account the binomial statistics of the photon data and non-local spatial correlations within the scene. For sparse photon data with total exposure times of 75 ms or less, the bespoke algorithm can reconstruct depth images with millimeter-scale depth uncertainty at a stand-off distance of approximately 2 meters. We demonstrate a new approach to single-photon depth and intensity profiling using different target scenes, taking full advantage of the high fill factor, high frame rate and large array format of this range-gated CMOS SPAD array.
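
A minimal sketch of the standard cross-correlation estimator mentioned above: photon counts gathered across the sequence of temporal delay steps are correlated against a sampled instrument response, and the correlation peak gives the round-trip time. The pulse shape, 50 ps step size and photon numbers below are assumptions for illustration.

```python
import numpy as np

C = 3e8  # m/s

def depth_by_cross_correlation(counts, delays_s, irf):
    """Standard cross-correlation depth estimate (illustrative).

    counts:   photon counts per gate-delay step for one pixel.
    delays_s: the temporal delay of each step (seconds).
    irf:      instrument response sampled at the same step spacing, centred in
              the delay axis so the correlation peak maps directly to the return.
    """
    corr = np.correlate(counts - counts.mean(), irf - irf.mean(), mode='same')
    t_peak = delays_s[np.argmax(corr)]
    return C * t_peak / 2.0           # round-trip time -> one-way distance

# Simulated pixel: Gaussian return centred at ~13.3 ns (a target near 2 m).
delays = np.arange(0, 25e-9, 50e-12)                    # 50 ps delay steps
irf = np.exp(-0.5 * ((delays - delays.mean()) / 0.3e-9) ** 2)
counts = np.random.poisson(5 + 80 * np.exp(-0.5 * ((delays - 13.3e-9) / 0.3e-9) ** 2))
print(f"estimated depth: {depth_by_cross_correlation(counts, delays, irf):.3f} m")
```
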

9.
Opt Express ; 26(3): 2280-2291, 2018 Feb 05.
Article in English | MEDLINE | ID: mdl-29401768

ABSTRACT

Single-photon avalanche photodiode (SPAD) image sensors offer time-gated photon counting at high binary frame rates of >100 kFPS and with no readout noise. This makes them well suited to a range of scientific applications, including microscopy, sensing and quantum optics. However, due to the complex electronics required, the fill factor tends to be significantly lower (<10%) than that of EMCCD and sCMOS cameras (>90%), whilst the pixel size is typically larger, impacting the sensitivity and practicality of SPAD devices. This paper presents the first characterisation of a cylindrical microlens array applied to a small-pixel (8 µm) SPAD imager. The enhanced fill factor, ≈50% for collimated light, is the highest reported value amongst SPAD sensors with comparable resolution and pixel pitch. We demonstrate the impact of the increased sensitivity in single-molecule localisation microscopy, obtaining a resolution of below 40 nm, the best reported figure for a SPAD sensor.

10.
Sensors (Basel) ; 18(2)2018 Jan 23.
Article in English | MEDLINE | ID: mdl-29360795

ABSTRACT

Quanta Image Sensors provide photon detections at high frame rates, with negligible read-out noise, making them ideal for high-speed optical tracking. At the basic level of bit-planes, or binary maps of photon detections, objects may present limited detail. However, through motion estimation and spatial reassignment of photon detections, the objects can be reconstructed with minimal motion artefacts. Here we present the first demonstration of high-speed two-dimensional (2D) tracking and reconstruction of rigid, planar objects with a Quanta Image Sensor, including a demonstration of depth-resolved tracking.
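
A minimal sketch of the reassignment step: given an estimated rigid 2D velocity, each binary bit-plane is shifted back by the motion accumulated up to its time stamp before summing, so photon detections from a moving object land on the same pixels. Nearest-pixel shifts and the random test data are illustrative simplifications.

```python
import numpy as np

def reassign_and_sum(bit_planes, velocity_px_per_frame):
    """Shift each binary photon frame back along the estimated motion, then sum.

    bit_planes: (T, H, W) array of 0/1 photon detections.
    velocity_px_per_frame: (vy, vx) estimated rigid 2D motion of the object.
    """
    T, H, W = bit_planes.shape
    vy, vx = velocity_px_per_frame
    out = np.zeros((H, W))
    for t in range(T):
        # Undo the motion accumulated by frame t (nearest-pixel shift).
        out += np.roll(bit_planes[t],
                       shift=(-round(vy * t), -round(vx * t)), axis=(0, 1))
    return out  # motion-compensated intensity estimate

frames = np.random.default_rng(1).random((100, 64, 64)) < 0.02  # sparse detections
image = reassign_and_sum(frames.astype(np.uint8), (0.1, 0.3))
```
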

11.
Sensors (Basel) ; 18(4)2018 Apr 11.
Article in English | MEDLINE | ID: mdl-29641479

ABSTRACT

This paper examines methods to best exploit the high dynamic range (HDR) of the single-photon avalanche diode (SPAD) in a high fill-factor HDR photon counting pixel that is scalable to megapixel arrays. The proposed method combines multi-exposure HDR with in-pixel temporal oversampling. We present a silicon demonstration IC with a 96 × 40 array of 8.25 µm pitch, 66% fill-factor SPAD-based pixels, achieving >100 dB dynamic range with 3 back-to-back exposures (short, mid, long). Each pixel sums 15 bit-planes, or binary field images, internally to constitute one frame, providing 3.75× data compression; hence the 1k frames per second (FPS) output off-chip represents 45,000 individual field images per second on chip. Two future projections of this work are described: scaling SPAD-based image sensors to HDR 1 MPixel formats and shrinking the pixel pitch to 1-3 µm.
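
A sketch of how back-to-back exposures of a photon-counting pixel might be merged into one HDR flux estimate: for each pixel, use only the exposures that did not saturate, and normalise the summed counts by the summed integration time. The saturation level of 15 matches the 15 summed bit-planes, but the merging rule itself is an assumption, not the paper's method.

```python
import numpy as np

def combine_hdr(counts, exposures, full_scale=15):
    """Merge back-to-back photon-counting exposures into an HDR flux map.

    counts:     list of (H, W) photon-count frames, shortest exposure first.
    exposures:  matching exposure times (s).
    full_scale: per-frame saturation level; 15 here, matching 15 summed
                bit-planes (an assumption for illustration).
    """
    num = np.zeros(counts[0].shape)
    den = np.zeros(counts[0].shape)
    for c, t in zip(counts, exposures):
        valid = c < full_scale                 # ignore saturated exposures
        num += np.where(valid, c, 0.0)         # photons from usable exposures
        den += np.where(valid, t, 0.0)         # matching integration time
    return num / np.maximum(den, exposures[0] * 1e-6)   # photons per second

rng = np.random.default_rng(0)
frames = [np.minimum(rng.poisson(lam, (40, 96)), 15) for lam in (0.5, 4.0, 30.0)]
flux = combine_hdr(frames, exposures=[1e-5, 8e-5, 6e-4])
```
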

12.
Sensors (Basel) ; 16(7)2016 Jul 20.
Article in English | MEDLINE | ID: mdl-27447643

ABSTRACT

SPAD-based solid-state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN), permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single-photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single- and multiple-photon spatio-temporal oversampling techniques are reviewed.
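
The PSW idea can be demonstrated on simulated data: when single-photon peaks are resolved in the photon counting histogram, the read noise in electrons is the Gaussian width of a peak divided by the separation between adjacent peaks. This sketch cheats by using the simulation's ground-truth photon numbers to isolate the peaks; a real analysis would fit the histogram instead, and the gain and noise values are assumed.

```python
import numpy as np

# PSW idea: with deep sub-electron read noise, the output histogram shows
# discrete single-photon peaks. Read noise in e- is the Gaussian width of a
# peak divided by the separation between adjacent peaks (one electron).
rng = np.random.default_rng(0)
gain = 20.0             # DN per electron (assumed conversion gain)
read_noise_e = 0.25     # ground truth for this simulation

photons = rng.poisson(1.5, size=200_000)                 # photoelectrons
signal_dn = gain * (photons + rng.normal(0, read_noise_e, photons.size))

# Estimate peak separation and width (using ground-truth labels, a simulation
# convenience that is not available on real data).
zero_peak = signal_dn[photons == 0]                      # samples of the 0 e- peak
one_peak = signal_dn[photons == 1]                       # samples of the 1 e- peak
separation = one_peak.mean() - zero_peak.mean()          # ~= gain
width = zero_peak.std()                                  # ~= gain * read noise
print(f"read noise ~ {width / separation:.3f} e-")       # ~0.25
```
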

13.
Sci Rep ; 13(1): 176, 2023 01 05.
Article in English | MEDLINE | ID: mdl-36604441

ABSTRACT

Single-Photon Avalanche Detector (SPAD) arrays are a rapidly emerging technology. These multi-pixel sensors have single-photon sensitivity and picosecond temporal resolution, allowing them to rapidly generate depth images with millimeter precision. Such sensors are a key enabling technology for future autonomous systems, as they provide guidance and situational awareness. However, to fully exploit the capabilities of SPAD array sensors, it is crucial to establish the quality of depth images they are able to generate in a wide range of scenarios. Given a particular optical system and a finite image acquisition time, what is the best-case depth resolution, and what are realistic images generated by SPAD arrays? In this work, we establish a robust yet simple numerical procedure that rapidly establishes the fundamental limits to depth imaging with SPAD arrays under real-world conditions. Our approach accurately generates realistic depth images in a wide range of scenarios, allowing the performance of an optical depth imaging system to be established without the need for costly and laborious field testing. This procedure has applications in object detection and tracking for autonomous systems and could be easily extended to systems for underwater imaging or for imaging around corners.


Subject(s)
Optical Devices; Semiconductors; Optical Imaging; Photons; Time Factors
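
A sketch of the flavour of numerical procedure described: simulate a single pixel's timing histogram from Poisson-distributed signal and background photons, estimate depth from the peak bin, and repeat to obtain the depth error for a given photon budget. All parameters (photon numbers, timing jitter, bin width) are assumed values, not those of the paper.

```python
import numpy as np

C = 3e8  # speed of light, m/s
rng = np.random.default_rng(42)

def simulated_depth_std(depth_m=40.0, n_sig=20, n_bkg=200, n_bins=4096,
                        bin_s=100e-12, jitter_s=150e-12, n_trials=2000):
    """Monte Carlo estimate of single-pixel depth error (assumed parameters)."""
    t_true = 2 * depth_m / C                  # round-trip time (~267 ns)
    t_max = n_bins * bin_s                    # histogram span (~410 ns)
    errors = []
    for _ in range(n_trials):
        sig = rng.normal(t_true, jitter_s, rng.poisson(n_sig))   # laser returns
        bkg = rng.uniform(0.0, t_max, rng.poisson(n_bkg))        # solar photons
        hist, edges = np.histogram(np.concatenate([sig, bkg]),
                                   bins=n_bins, range=(0.0, t_max))
        t_est = edges[np.argmax(hist)] + bin_s / 2               # peak-bin estimate
        errors.append(C * (t_est - t_true) / 2)
    return float(np.std(errors))

print(f"depth standard deviation: {simulated_depth_std() * 1e3:.1f} mm")
```
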
14.
Adv Sci (Weinh) ; 9(31): e2203018, 2022 11.
Article in English | MEDLINE | ID: mdl-36068166

ABSTRACT

Establishing the biological basis of cognition and its disorders will require high-precision spatiotemporal measurements of neural activity. Recently developed genetically encoded voltage indicators (GEVIs) report both spiking and subthreshold activity of identified neurons. However, maximally capitalizing on the potential of GEVIs will require imaging at millisecond time scales, which remains challenging with standard camera systems. Here, the application of single-photon avalanche diode (SPAD) sensors to image neural activity at kilohertz frame rates is reported. SPADs are electronic devices that, when activated by a single photon, produce an avalanche of electrons and a large electric current. An array of SPAD sensors is used to image individual neurons expressing the GEVI Voltron-JF525-HTL. It is shown that subthreshold and spiking activity can be resolved with shot-noise-limited signals at frame rates of up to 10 kHz. SPAD imaging is able to reveal millisecond-scale synchronization of neural activity in an ex vivo seizure model. SPAD sensors may have widespread applications for the investigation of millisecond-timescale neural dynamics.


Subject(s)
Neurons; Photons; Neurons/physiology; Diagnostic Imaging; Electronics
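
What "shot noise limited" implies at these frame rates can be seen from a back-of-envelope photon budget; every number below is an assumption for illustration, not a measurement from the paper.

```python
# Back-of-envelope shot-noise SNR for kilohertz voltage imaging.
photon_rate = 5e6     # detected photons/s from one neuron's ROI (assumed)
frame_rate = 1e3      # 1 kHz imaging
dff = 0.10            # fractional fluorescence change per spike (assumed)

photons_per_frame = photon_rate / frame_rate     # 5,000 photons
shot_noise = photons_per_frame ** 0.5            # ~71 photons rms
signal = dff * photons_per_frame                 # 500 photons
print(f"single-frame SNR ~ {signal / shot_noise:.1f}")  # ~7, spikes resolvable
```
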
15.
Sci Adv ; 8(48): eade0123, 2022 Dec 02.
Article in English | MEDLINE | ID: mdl-36449608

ABSTRACT

Single-photon-sensitive depth sensors are being increasingly used in next-generation electronics for human pose and gesture recognition. However, cost-effective sensors typically have a low spatial resolution, restricting their use to basic motion identification and simple object detection. Here, we perform a temporal-to-spatial mapping that drastically increases the resolution of a simple time-of-flight sensor, from an initial resolution of 4 × 4 pixels to depth images with a resolution of 32 × 32 pixels. The output depth maps can then be used for accurate three-dimensional human pose estimation of multiple people. We develop a new explainable framework that provides intuition into how our network uses its input data and provides key information about the relevant parameters. Our work greatly expands the use cases of simple single-photon avalanche detector time-of-flight sensors and opens up promising possibilities for future super-resolution techniques applied to other sensors with similar data types, such as radar and sonar.

16.
Sci Rep ; 9(1): 8075, 2019 May 30.
Article in English | MEDLINE | ID: mdl-31147564

ABSTRACT

The ability to measure and record high-resolution depth images at long stand-off distances is important for a wide range of applications, including connected and autonomous vehicles, defense and security, and agriculture and mining. In LIDAR (light detection and ranging) applications, single-photon sensitive detection is an emerging approach, offering high sensitivity to light and picosecond temporal resolution, and consequently excellent surface-to-surface resolution. The use of large-format CMOS (complementary metal-oxide semiconductor) single-photon detector arrays provides high spatial resolution and allows the timing information to be acquired simultaneously across many pixels. In this work, we combine state-of-the-art single-photon detector array technology with non-local data fusion to generate high-resolution three-dimensional depth information of long-range targets. The system is based on a visible pulsed illumination system at a wavelength of 670 nm and a 240 × 320 array sensor, achieving sub-centimeter precision in all three spatial dimensions at a distance of 150 meters. The non-local data fusion combines information from an optical image with sparse sampling of the single-photon array data, providing accurate depth information at low-signature regions of the target.

17.
Sci Rep ; 6: 37349, 2016 11 23.
Article in English | MEDLINE | ID: mdl-27876857

ABSTRACT

Single molecule localisation microscopy (SMLM) has become an essential part of the super-resolution toolbox for probing cellular structure and function. The rapid evolution of these techniques has outstripped detector development, and faster, more sensitive cameras are required to further improve localisation certainty. Single-photon avalanche photodiode (SPAD) array cameras offer single-photon sensitivity, very high frame rates and zero readout noise, making them a potentially ideal detector for ultra-fast imaging and SMLM experiments. However, performance traditionally falls behind that of emCCD and sCMOS devices due to lower photon detection efficiency. Here we demonstrate, both experimentally and through simulations, that the sensitivity of a binary SPAD camera in SMLM experiments can be improved significantly by aggregating only frames containing signal, and that this leads to smaller datasets and performance competitive with that of existing detectors. The simulations also indicate that with predicted future advances in SPAD camera technology, SPAD devices will outperform existing scientific cameras when capturing fast temporal dynamics.
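
A minimal sketch of the aggregation strategy described: count photons in each binary frame, keep only frames whose count exceeds a background-level threshold, and sum those. The threshold rule and the random test data are illustrative assumptions.

```python
import numpy as np

def aggregate_signal_frames(bit_planes, threshold):
    """Sum only binary SPAD frames that contain signal (illustrative).

    bit_planes: (T, H, W) array of 0/1 photon detections.
    threshold:  minimum photon count for a frame to be treated as containing
                an emitter burst rather than background alone (assumed rule).
    """
    counts = bit_planes.sum(axis=(1, 2))     # photons per frame
    keep = counts > threshold
    # Aggregating only these frames boosts signal-to-background and shrinks
    # the dataset relative to summing every frame.
    return bit_planes[keep].sum(axis=0), int(keep.sum())

# Random background-only test data, just to exercise the code path.
frames = (np.random.default_rng(3).random((1000, 32, 32)) < 0.001).astype(np.uint8)
image, n_used = aggregate_signal_frames(frames, threshold=3)
```
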
