Results 1 - 14 of 14
1.
Sensors (Basel) ; 23(3)2023 Jan 18.
Article in English | MEDLINE | ID: mdl-36772173

ABSTRACT

In this work, a new method for aerial robot remote sensing using stereo vision is proposed. A variable-baseline, flexible-configuration stereo setup is achieved by separating the left and right cameras onto two separate quadrotor aerial robots. Monocular cameras, one on each aerial robot, are used as a stereo pair, allowing the pose of each camera to be adjusted independently. In contrast to conventional stereo vision, in which the two cameras are rigidly fixed, the flexible configuration allows the setup to be changed freely to suit a wide range of applications. Larger baselines can be used for stereo vision of distant targets, while a vertical stereo configuration can be used in tasks where horizontal overlap would otherwise be lost for lack of a suitable horizontal configuration. Additionally, a method for the practical use of variable-baseline stereo vision is introduced that combines multiple point clouds obtained at multiple stereo baselines. This avoids the problems of an inappropriate baseline, such as the estimation error induced by an insufficient baseline and the occlusions caused by an excessively large baseline.
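As a rough, illustrative aside (not taken from the paper; the focal length, disparity noise, target distance, and baselines below are assumed values), the standard pinhole-stereo relations Z = fB/d and dZ ≈ Z²·dd/(fB) show why a larger baseline reduces depth error for distant targets:

# Illustrative only: depth and depth error of an ideal pinhole stereo pair
# as a function of baseline, using Z = f*B/d and dZ ~ Z^2 * dd / (f*B).
f_px = 1000.0          # focal length in pixels (assumed)
disp_noise_px = 0.5    # disparity estimation noise in pixels (assumed)
target_depth_m = 50.0  # distance to the target (assumed)

for baseline_m in (0.5, 2.0, 10.0):
    disparity_px = f_px * baseline_m / target_depth_m
    depth_error_m = target_depth_m ** 2 * disp_noise_px / (f_px * baseline_m)
    print(f"B = {baseline_m:5.1f} m  disparity = {disparity_px:6.1f} px  "
          f"depth error ~ {depth_error_m:5.2f} m")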

2.
Sensors (Basel) ; 21(7)2021 Apr 03.
Article in English | MEDLINE | ID: mdl-33916733

ABSTRACT

A burst image sensor named Hanabi, meaning fireworks in Japanese, includes a branching CCD and multiple CMOS readout circuits. The sensor is backside-illuminated with a light/charge guide pipe to minimize the temporal resolution by suppressing the horizontal motion of signal carriers. On the front side, each pixel has a guide gate at the center, branching into six first-branching gates, each bifurcating into second-branching gates, finally connected to 12 (= 6 × 2) floating diffusions. The signals are either read out after an image-capture operation to replay 12 to 48 consecutive images, or continuously transferred to a memory chip stacked on the front side of the sensor chip and converted to digital signals. A CCD burst image sensor enables noiseless signal transfer from a photodiode to the in-situ storage even at very high frame rates; however, the pixel count conflicts with the frame count because of the large pixel size required by the relatively large in-pixel CCD memory elements. A CMOS burst image sensor can use small trench-type capacitors for memory elements instead of CCD channels; however, the transfer noise from a floating diffusion to a memory element increases in proportion to the square root of the frame rate. The Hanabi chip overcomes this trade-off between the two approaches.
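A minimal numeric sketch of the square-root scaling mentioned above; the reference frame rate and reference noise figure are assumed values, not measurements from the paper:

# Illustrative only: if CMOS transfer noise scales with the square root of
# the frame rate, raising the rate by 100x raises the noise by 10x.
# The reference values below are assumed, not figures from the paper.
import math

ref_rate_fps = 1e6   # reference frame rate (assumed)
ref_noise_e = 5.0    # transfer noise at the reference rate, electrons rms (assumed)

for rate_fps in (1e6, 1e7, 1e8, 1e9):
    noise_e = ref_noise_e * math.sqrt(rate_fps / ref_rate_fps)
    print(f"{rate_fps:.0e} fps -> ~{noise_e:.1f} e- rms")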

3.
Sensors (Basel) ; 20(23)2020 Dec 02.
Article in English | MEDLINE | ID: mdl-33276651

ABSTRACT

The theoretical temporal resolution limit tT of a silicon photodiode (Si PD) is 11.1 ps. We call a temporal resolution shorter than this limit a "super temporal resolution". To achieve it, germanium is selected as a candidate photodiode material (Ge PD) for visible light, since the absorption coefficient of Ge at visible wavelengths is several tens of times higher than that of Si, allowing a very thin PD. On the other hand, the saturation drift velocity of electrons in Ge is about 2/3 of that in Si; this ratio suggests an ultra-short propagation time of electrons in the Ge PD. However, the diffusion coefficient of electrons in Ge is four times higher than that in Si. Therefore, Monte Carlo simulations were applied to analyze the temporal resolution of the Ge PD. The estimated theoretical temporal resolution limit is 0.26 ps, while the practical limit is 1.41 ps. To achieve a super temporal resolution better than 11.1 ps, the driver circuit must operate at 100 GHz or higher. It is thus proposed to first develop a short-wavelength infrared (SWIR) ultra-high-speed image sensor with a thicker and wider Ge PD, and then gradually decrease the size as the driver circuits advance.
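The following is only a rough sketch of the kind of Monte Carlo analysis described: a heavily simplified 1-D drift-diffusion model of electron arrival-time spread in a thin photodiode. The layer thickness, absorption depth, drift velocity, and diffusion coefficient are approximate assumed values, not the paper's parameters.

# Simplified 1-D drift-diffusion Monte Carlo for the spread of electron
# arrival times in a thin photodiode (illustrative only; all material
# parameters below are rough assumptions, not the paper's values).
import random, math, statistics

thickness_cm = 0.5e-4    # 0.5 um photodiode thickness (assumed)
abs_depth_cm = 0.02e-4   # mean photon absorption depth (assumed)
v_sat_cm_s = 6e6         # electron saturation drift velocity (assumed)
diff_cm2_s = 100.0       # electron diffusion coefficient (assumed)
n_electrons = 100_000

arrival_s = []
for _ in range(n_electrons):
    # exponential absorption profile, clipped to the layer thickness
    z0 = min(random.expovariate(1.0 / abs_depth_cm), thickness_cm)
    drift_t = (thickness_cm - z0) / v_sat_cm_s
    # diffusion adds a Gaussian spread around the drift time
    sigma_z = math.sqrt(2.0 * diff_cm2_s * drift_t)
    jitter_t = random.gauss(0.0, sigma_z) / v_sat_cm_s
    arrival_s.append(drift_t + jitter_t)

spread_ps = statistics.pstdev(arrival_s) * 1e12
print(f"arrival-time spread ~ {spread_ps:.2f} ps (illustrative)")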

4.
Sensors (Basel) ; 19(18)2019 Sep 12.
Article in English | MEDLINE | ID: mdl-31547285

ABSTRACT

A tactile image sensor employing a camera can obtain rich tactile information through image sequences with high spatial resolution. Tactile image sensors have been studied for more than 30 years and have recently been applied in the field of robotics. They can be classified into three typical categories according to how physical contact is converted into light signals: light-conductive-plate-based, marker-displacement-based, and reflective-membrane-based sensors. Other important elements of the sensor, such as the optical system, image sensor, and post-image-analysis algorithm, have also been developed. In this work, the literature is surveyed, and an overview of tactile image sensors employing a camera is provided, focusing on the sensing principle, typical designs, and variations in sensor configuration.

5.
Sensors (Basel) ; 19(10)2019 May 15.
Article in English | MEDLINE | ID: mdl-31096653

ABSTRACT

Light in flight was captured in a single shot by a newly developed backside-illuminated multi-collection-gate image sensor at a frame interval of 10 ns, without high-speed gating devices such as a streak camera and without post-processing of the data. This paper reports the achievement and the further evolution of the image sensor toward the theoretical temporal resolution limit of 11.1 ps derived by the authors. The theoretical analysis revealed the conditions for minimizing the temporal resolution. Simulations show that an image sensor designed to satisfy these conditions and fabricated with existing technology will achieve a frame interval of 50 ps. Such a sensor, 200 times faster than our latest sensor, will bring innovation to advanced analytical apparatuses that use time-of-flight or lifetime measurements, such as imaging TOF-MS, FLIM, pulse neutron tomography, PET, and LIDAR, and to applications beyond these.

6.
Sensors (Basel) ; 18(8)2018 Jul 24.
Article in English | MEDLINE | ID: mdl-30042368

ABSTRACT

This paper summarizes the evolution of backside-illuminated multi-collection-gate (BSI MCG) image sensors, from the proposed fundamental structure to the development of a practical, ultimate-high-speed silicon image sensor. A test chip of the BSI MCG image sensor achieves a temporal resolution of 10 ns. The authors have derived an expression for the temporal resolution limit of photoelectric conversion layers; for silicon image sensors, the limit is 11.1 ps. Guided by this theoretical derivation, a high-speed image sensor can be designed to achieve a frame rate close to the theoretical limit. However, some of the required conditions conflict with performance indices other than the frame rate, such as sensitivity and crosstalk. After adjusting these trade-offs, a simple pixel model of the image sensor was designed and evaluated by simulation. The results reveal that the sensor can achieve a temporal resolution of 50 ps with existing technology.

7.
Sensors (Basel) ; 18(9)2018 Sep 15.
Article in English | MEDLINE | ID: mdl-30223542

ABSTRACT

This paper presents an ultra-high-speed image sensor for motion pictures of reproducible events emitting very weak light. The sensor is backside-illuminated. Each pixel is equipped with multiple collection gates (MCG) at the center of the front side. Each collection gate is connected to a large in-pixel memory unit, which can accumulate image signals captured by repetitive imaging. The combination of backside illumination, image-signal accumulation, and slow readout from the in-pixel signal storage after the image-capture operation offers very high sensitivity. Pipelined signal transfer from the multiple collection gates to the in-pixel memory units enables the sensor to achieve a large frame count and a very high frame rate at the same time. A test sensor was fabricated with 32 × 32 pixels. Each pixel is equipped with four collection gates, each connected to a memory unit with 305 elements, giving a total frame count of 1220 (305 × 4) frames. The test camera achieved 25 Mfps, while the sensor was designed to operate at 50 Mfps.

8.
Sensors (Basel) ; 17(3)2017 Feb 28.
Article in English | MEDLINE | ID: mdl-28264527

ABSTRACT

The frame rate of digital high-speed video cameras was 2000 frames per second (fps) in 1989 and has been increasing exponentially. A simulation study showed that a silicon image sensor made with a 130 nm process technology can achieve about 10^10 fps, so the frame rate seems to be approaching its upper bound. Rayleigh proposed an expression for the theoretical spatial resolution limit when the resolution of lenses approached that limit. In this paper, the temporal resolution limit of silicon image sensors is analyzed theoretically. It is revealed that the limit is mainly governed by the mixing of charges with different travel times, caused by the distribution of the penetration depth of light. The derived expression for the limit is extremely simple, yet accurate. For example, the limit for green light of 550 nm incident on silicon image sensors at 300 K is 11.1 picoseconds. Therefore, the theoretical highest frame rate is 90.1 Gfps (about 10^11 fps).
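The highest frame rate quoted here is simply the reciprocal of the temporal resolution limit, as a quick check shows:

# The theoretical highest frame rate is the reciprocal of the 11.1 ps limit:
# 1 / 11.1e-12 s ~ 9.0e10 frames per second, i.e. about 90.1 Gfps.
limit_s = 11.1e-12
max_fps = 1.0 / limit_s
print(f"{max_fps:.3g} fps  (~{max_fps / 1e9:.1f} Gfps)")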

9.
Front Robot AI ; 8: 774080, 2021.
Article in English | MEDLINE | ID: mdl-34926592

ABSTRACT

In inspection work involving foodstuffs in food factories, workers not only inspect the food visually but in some cases must also touch it with their hands to find foreign or undesirable objects mixed into the product. To contribute to the automation of this inspection process, this paper proposes a method for detecting foreign objects in food based on differences in hardness, using a camera-based tactile image sensor. Because the foreign objects to be detected are often small, the tactile sensor requires a high spatial resolution; in addition, inspection work in food factories requires sufficient inspection speed. The proposed cylindrical tactile image sensor meets these requirements because it can efficiently acquire high-resolution tactile images with a camera mounted inside while the cylindrical sensor surface rolls over the target object. By analyzing the images obtained from the tactile image sensor, we detected the presence of foreign objects and their locations. By using a highly sensitive reflective-membrane-type sensor surface, small, hard foreign bodies of sub-millimeter size mixed in with soft food were successfully detected. The effectiveness of the proposed method was confirmed through experiments detecting shell fragments left on the surface of raw shrimp and bones left in fish fillets.
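One plausible, purely illustrative way to flag such inclusions from a tactile image is to difference against a no-contact reference, threshold, and report blob locations. The sketch below uses standard OpenCV calls, but the file names, threshold, and blob-size criterion are assumptions, not the authors' pipeline:

# Minimal sketch: flag small hard inclusions in a tactile image by differencing
# against a no-contact reference, thresholding, and listing blob centroids.
# Not the authors' pipeline; file names and thresholds are assumed.
import cv2
import numpy as np

reference = cv2.imread("tactile_reference.png", cv2.IMREAD_GRAYSCALE)  # assumed file
frame = cv2.imread("tactile_frame.png", cv2.IMREAD_GRAYSCALE)          # assumed file

diff = cv2.absdiff(frame, reference)
diff = cv2.GaussianBlur(diff, (5, 5), 0)
_, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # threshold is assumed

n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
for i in range(1, n_labels):                 # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] >= 5:      # ignore tiny noise blobs (assumed size)
        x, y = centroids[i]
        print(f"candidate foreign object near ({x:.0f}, {y:.0f}) px")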

10.
Neurosci Res ; 148: 28-33, 2019 Nov.
Article in English | MEDLINE | ID: mdl-30529110

ABSTRACT

The critical flicker-fusion frequency (CFF), defined as the frequency at which a flickering light becomes indistinguishable from a continuous light, is a useful measure of visual temporal resolution. The mouse CFF has been studied with electrophysiological approaches such as recordings of the electroretinogram (ERG) and the visually evoked potential (VEP), but it has not been measured behaviorally. Here we estimated the mouse CFF using a touchscreen-based operant system. A test with an ascending series of frequencies and a test with randomized frequencies yielded about 17 Hz and 14 Hz, respectively, as the frequencies that could no longer be distinguished from steady light. Since the ascending method of limits tends to overestimate the threshold relative to the descending method, we estimate the mouse CFF to be about 14 Hz. Our results highlight the usefulness of operant conditioning methods for measuring visual temporal resolution in mice.
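A minimal sketch of how a threshold can be read off an ascending method of limits; the response table and criterion below are invented for illustration and are not data from the study:

# Illustrative only: take the CFF as the lowest flicker frequency at which
# discrimination performance drops below a criterion. The numbers are invented.
responses = {          # frequency in Hz -> fraction of correct discriminations
    8: 0.95, 10: 0.92, 12: 0.85, 14: 0.74, 16: 0.60, 18: 0.52, 20: 0.49,
}
criterion = 0.625      # assumed criterion between chance (0.5) and ceiling

threshold = next(
    (hz for hz, p in sorted(responses.items()) if p < criterion),
    None,
)
print(f"estimated CFF ~ {threshold} Hz (illustrative)")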


Subject(s)
Discrimination, Psychological; Visual Perception; Animals; Conditioning, Operant; Evoked Potentials, Visual; Male; Mice; Mice, Inbred C57BL
11.
Neural Netw ; 21(8): 1197-204, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18723317

ABSTRACT

The visual system of the brain can perceive an external scene in real time with extremely low power dissipation, although the response speed of an individual neuron is considerably lower than that of semiconductor devices. The neurons in the visual pathway generate their receptive fields using a parallel and hierarchical architecture. This architecture of the visual cortex is interesting and important for designing novel perception systems from an engineering perspective. The aim of this study is to develop vision-system hardware, inspired by the hierarchical visual processing in V1, for real-time texture segregation. The system consists of a silicon retina, an orientation chip, and a field-programmable gate array (FPGA) circuit. The silicon retina emulates the neural circuits of the vertebrate retina and exhibits a Laplacian-Gaussian-like receptive field. The orientation chip selectively aggregates multiple pixels of the silicon retina to produce Gabor-like receptive fields tuned to various orientations, mimicking the feed-forward model proposed by Hubel and Wiesel. The FPGA circuit receives the output of the orientation chip and computes the responses of complex cells. Using this system, the neural images of simple cells were computed in real time for various orientations and spatial frequencies. Using the orientation-selective outputs obtained from the multi-chip system, real-time texture segregation was conducted based on a computational model inspired by psychophysics and neurophysiology. The texture image was filtered by the two orthogonally oriented receptive fields of the multi-chip system, and the filtered images were combined, with the aid of the FPGA, to segregate areas of different texture orientation. The present system is also useful for investigating the functions of higher-order cells that can be obtained by combining simple and complex cells.
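A small software sketch of the texture-segregation idea (the paper implements it in a silicon retina, an orientation chip, and an FPGA, not in software): filter an image with two orthogonal Gabor kernels and compare the local filter energies. The kernel size, spatial frequency, and test image are assumptions:

# Software sketch of texture segregation by two orthogonal Gabor filters.
# Parameters and the synthetic test image are assumed, purely for illustration.
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def gabor_kernel(size=15, wavelength=6.0, sigma=3.0, theta=0.0):
    """Real (even) Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

# Synthetic test image: vertical gratings on the left, horizontal on the right.
y, x = np.mgrid[0:128, 0:128]
image = np.where(x < 64, np.sin(2 * np.pi * x / 6.0), np.sin(2 * np.pi * y / 6.0))

# Local energy of each orientation channel, then a pixel-wise comparison.
e_vertical = uniform_filter(convolve(image, gabor_kernel(theta=0.0)) ** 2, size=11)
e_horizontal = uniform_filter(convolve(image, gabor_kernel(theta=np.pi / 2)) ** 2, size=11)

segmentation = e_vertical > e_horizontal   # True where the vertical texture dominates
print("fraction labelled 'vertical texture':", segmentation.mean())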


Subject(s)
Models, Neurological; Neurons/physiology; Retina/physiology; Visual Cortex/physiology; Computers; Eye, Artificial; Humans; Microcomputers; Orientation; Pattern Recognition, Automated; Semiconductors; Space Perception; Visual Cortex/cytology; Visual Fields; Visual Pathways/cytology
12.
Neural Netw ; 21(2-3): 331-40, 2008.
Article in English | MEDLINE | ID: mdl-18272330

ABSTRACT

We designed a VLSI binocular vision system that emulates the disparity computation in the primary visual cortex (V1). The system consists of two silicon retinas, orientation chips, and a field-programmable gate array (FPGA), mimicking the hierarchical architecture of visual information processing in the disparity energy model. The silicon retinas emulate the Laplacian-Gaussian-like receptive field of the vertebrate retina. The orientation chips generate an orientation-selective receptive field by aggregating multiple pixels of the silicon retina, mimicking the Hubel-Wiesel-type feed-forward model, in order to emulate the Gabor-like receptive field of simple cells. The FPGA receives the outputs of the orientation chips corresponding to the left and right eyes and calculates the responses of complex cells based on the disparity energy model. The system provides the responses of complex cells tuned to five different disparities and a disparity map obtained by comparing these energy outputs. Owing to the combination of spatial filtering by analog parallel circuits and pixel-wise computation by hard-wired digital circuits, the present system can execute the disparity computation in real time using compact hardware.
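A minimal software sketch of the disparity-energy computation at a single receptive-field location, assuming a quadrature (even/odd) Gabor pair and a synthetic grating patch; the filter parameters, the candidate-disparity range, and the test stimulus are assumptions and do not reproduce the five-disparity hardware:

# Disparity-energy sketch: the complex-cell response is the sum of squared
# outputs of a quadrature Gabor pair applied to the left patch and to the
# right patch seen through a position-shifted receptive field.
import numpy as np

size, wavelength, sigma = 17, 8.0, 4.0
half = size // 2
yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
envelope = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
gabor_even = envelope * np.cos(2 * np.pi * xx / wavelength)
gabor_odd = envelope * np.sin(2 * np.pi * xx / wavelength)

def patch(offset):
    """A vertical grating sampled at a horizontal offset (stands in for an image patch)."""
    return np.cos(2 * np.pi * (xx + offset) / wavelength)

true_disparity = 2            # the right image is the left image shifted by 2 px
left = patch(0)
for d in range(-4, 5):        # candidate disparities of the right receptive field
    right = patch(d - true_disparity)      # what the shifted right RF sees
    even = np.sum(gabor_even * left) + np.sum(gabor_even * right)
    odd = np.sum(gabor_odd * left) + np.sum(gabor_odd * right)
    print(f"disparity {d:+d}: energy {even**2 + odd**2:10.1f}")
# The printed energy peaks at the true disparity of +2 px.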


Subject(s)
Electronic Data Processing/methods; Models, Neurological; Robotics; Vision Disparity/physiology; Vision, Binocular; Visual Cortex/physiology; Humans; Photic Stimulation/methods; Visual Fields
13.
IEEE Trans Neural Netw ; 16(4): 972-9, 2005 Jul.
Article in English | MEDLINE | ID: mdl-16121737

ABSTRACT

In this paper, we designed and fabricated a multichip neuromorphic analog very-large-scale integrated (aVLSI) system that emulates the orientation-selective response of simple cells in the primary visual cortex. The system consists of a silicon retina and an orientation chip. An image filtered by the concentric center-surround (CS) antagonistic receptive field of the silicon retina is transferred to the orientation chip; the transfer is carried out with analog signals. The orientation chip selectively aggregates multiple pixels of the silicon retina, mimicking the feed-forward model proposed by Hubel and Wiesel. The chip provides orientation-selective (OS) outputs tuned to 0, 60, and 120 degrees. The feed-forward aggregation reduces the fixed-pattern noise caused by transistor mismatch in the orientation chip. The spatial properties of the orientation-selective response were examined with respect to the adjustable parameters of the chip, i.e., the number of aggregated pixels and the size of the receptive field of the silicon retina. The multichip aVLSI architecture used in the present study can be applied to implement higher-order cells such as the complex cells of the primary visual cortex.
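A software sketch of the feed-forward aggregation idea the orientation chip mimics: summing center-surround (difference-of-Gaussians) outputs along a line at the preferred orientation yields an orientation-tuned response. All sizes, spacings, and the test pattern below are assumed:

# Hubel-Wiesel-style feed-forward sketch: orientation selectivity from
# summing center-surround outputs along an oriented line. Illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(image, sigma_c=1.0, sigma_s=2.0):
    """Difference-of-Gaussians, a stand-in for the silicon-retina output."""
    return gaussian_filter(image, sigma_c) - gaussian_filter(image, sigma_s)

def oriented_response(cs_image, center, theta_deg, n_taps=7, spacing=2):
    """Sum CS outputs at n_taps points along a line through `center`."""
    theta = np.deg2rad(theta_deg)
    cy, cx = center
    total = 0.0
    for k in range(-(n_taps // 2), n_taps // 2 + 1):
        y = int(round(cy + k * spacing * np.sin(theta)))
        x = int(round(cx + k * spacing * np.cos(theta)))
        total += cs_image[y, x]
    return total

# Test pattern: a bright horizontal bar on a dark background.
image = np.zeros((64, 64))
image[31:34, 10:54] = 1.0
cs = center_surround(image)
for angle in (0, 60, 120):   # the 0-degree unit responds most to the horizontal bar
    print(f"{angle:3d} deg: response {oriented_response(cs, (32, 32), angle):+.3f}")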


Subject(s)
Action Potentials/physiology; Nerve Net/physiology; Neural Networks, Computer; Neurons/physiology; Retina/physiology; Space Perception/physiology; Visual Cortex/physiology; Animals; Biomimetics/instrumentation; Biomimetics/methods; Electronics; Equipment Design; Equipment Failure Analysis; Humans; Orientation/physiology; Semiconductors; Visual Fields/physiology
14.
IEEE Trans Neural Netw ; 22(9): 1482-93, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21803687

ABSTRACT

A silicon retina is an intelligent vision sensor that can execute real-time image preprocessing using a parallel analog circuit that mimics the structure of the neuronal circuits in the vertebrate retina. To enhance the sensor's robustness to changes in illumination in a practical environment, we have designed and fabricated a silicon retina based on a computational model of brightness constancy. The chip has a wide dynamic range and shows a constant response against changes in illumination intensity. The photosensor in the present chip approximates logarithmic illumination-to-voltage transfer characteristics through a time-modulated reset-voltage technique. Two types of image processing, Laplacian-Gaussian-like spatial filtering and frame differencing, are carried out by resistive networks and sample/hold circuits in the chip. As a result of this processing, the chip exhibits brightness constancy over a wide range of illumination. The chip is fabricated using a 0.25-µm complementary metal-oxide-semiconductor image sensor technology. The number of pixels is 64 × 64, and the power consumption is 32 mW at a frame rate of 30 fps. We show that our chip not only has a wide dynamic range but also responds consistently to changes in illumination.
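A software sketch of the two on-chip operations applied to a log-compressed image; the synthetic frames and filter parameter are assumptions, and the point is only that, after logarithmic compression, a global illumination change becomes an additive offset that both operations cancel:

# Brightness-constancy sketch: log compression + Laplacian-of-Gaussian spatial
# filtering + frame differencing. Synthetic data, illustrative only.
import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(0)
scene = rng.uniform(10.0, 100.0, size=(64, 64))       # arbitrary radiance map

frame_dim = np.log(1.0 * scene)        # same scene under two illumination levels;
frame_bright = np.log(50.0 * scene)    # log compression makes the change additive

spatial = gaussian_laplace(frame_bright, sigma=1.5)   # LoG-like spatial filtering
temporal = frame_bright - frame_dim                   # frame difference

# A global illumination change is a constant offset after the log, so the
# frame difference is flat and the LoG output is essentially unchanged.
print("frame-difference std:", float(temporal.std()))
print("LoG difference std:", float((gaussian_laplace(frame_dim, 1.5) - spatial).std()))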


Subject(s)
Eye, Artificial; Nonlinear Dynamics; Photic Stimulation/adverse effects; Retina/physiology; Algorithms; Computer Simulation; Contrast Sensitivity; Humans; Models, Neurological; Neural Networks, Computer; Silicon