Results 1 - 3 of 3
1.
IEEE Trans Pattern Anal Mach Intell; 43(12): 4272-4290, 2021 Dec.
Article in English | MEDLINE | ID: mdl-32750769

ABSTRACT

What is the current state-of-the-art for image restoration and enhancement applied to degraded images acquired under less than ideal circumstances? Can the application of such algorithms as a pre-processing step improve image interpretability for manual analysis or automatic visual recognition to classify scene content? While there have been important advances in the area of computational photography to restore or enhance the visual quality of an image, the capabilities of such techniques have not always translated in a useful way to visual recognition tasks. Consequently, there is a pressing need for the development of algorithms that are designed for the joint problem of improving visual appearance and recognition, which will be an enabling factor for the deployment of visual recognition tools in many real-world scenarios. To address this, we introduce the UG² dataset as a large-scale benchmark composed of video imagery captured under challenging conditions, and two enhancement tasks designed to test algorithmic impact on visual quality and automatic object recognition. Furthermore, we propose a set of metrics to evaluate the joint improvement of such tasks as well as individual algorithmic advances, including a novel psychophysics-based evaluation regime for human assessment and a realistic set of quantitative measures for object recognition performance. We introduce six new algorithms for image restoration or enhancement, which were created as part of the IARPA-sponsored UG² Challenge workshop held at CVPR 2018. Under the proposed evaluation regime, we present an in-depth analysis of these algorithms and a host of deep learning-based and classic baseline approaches. From the observed results, it is evident that we are in the early days of building a bridge between computational photography and visual recognition, leaving many opportunities for innovation in this area.

2.
IEEE Trans Pattern Anal Mach Intell; 41(9): 2280-2286, 2019 Sep.
Article in English | MEDLINE | ID: mdl-29994469

ABSTRACT

By providing substantial amounts of data and standardized evaluation protocols, datasets in computer vision have helped fuel advances across all areas of visual recognition. But even in light of breakthrough results on recent benchmarks, it is still fair to ask if our recognition algorithms are doing as well as we think they are. The vision sciences at large make use of a very different evaluation regime known as Visual Psychophysics to study visual perception. Psychophysics is the quantitative examination of the relationships between controlled stimuli and the behavioral responses they elicit in experimental test subjects. Instead of using summary statistics to gauge performance, psychophysics directs us to construct item-response curves made up of individual stimulus responses to find perceptual thresholds, thus allowing one to identify the exact point at which a subject can no longer reliably recognize the stimulus class. In this article, we introduce a comprehensive evaluation framework for visual recognition models that is underpinned by this methodology. Over millions of procedurally rendered 3D scenes and 2D images, we compare the performance of well-known convolutional neural networks. Our results bring into question recent claims of human-like performance, and provide a path forward for correcting newly surfaced algorithmic deficiencies.
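The item-response methodology described above can be illustrated with a minimal sketch. This is not the paper's evaluation framework; it is a hypothetical example of the core idea: measure recognition accuracy at each level of a controlled stimulus perturbation (e.g., increasing blur), build an item-response curve from the per-level accuracies, and read off the perceptual threshold as the first perturbation level at which accuracy falls below a criterion.

```python
# Sketch of a psychophysics-style item-response analysis (hypothetical data,
# not the authors' framework). Correct/incorrect recognition flags are
# aggregated per perturbation level; the perceptual threshold is the first
# level at which accuracy drops below a criterion (here 50%).

def item_response_curve(responses):
    """responses: {perturbation_level: list of 0/1 correct flags}.
    Returns {level: mean accuracy}, ordered by increasing perturbation."""
    return {
        level: sum(flags) / len(flags)
        for level, flags in sorted(responses.items())
    }

def perceptual_threshold(curve, criterion=0.5):
    """First perturbation level whose accuracy falls below the criterion,
    or None if performance stays above criterion at every tested level."""
    for level, acc in curve.items():
        if acc < criterion:
            return level
    return None

# Hypothetical model responses under increasing blur strength:
responses = {
    0.0: [1, 1, 1, 1, 1],  # clean images: perfect recognition
    1.0: [1, 1, 1, 1, 0],
    2.0: [1, 1, 0, 1, 0],
    3.0: [1, 0, 0, 0, 0],  # accuracy 0.2, below the 0.5 criterion
}
curve = item_response_curve(responses)
print(perceptual_threshold(curve))  # -> 3.0
```

Unlike a single summary statistic, the full curve exposes exactly where along the perturbation axis a model's recognition breaks down, which is what makes comparisons against human observers meaningful.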

3.
J Neurosci Methods; 262: 133-41, 2016 Mar 15.
Article in English | MEDLINE | ID: mdl-26778609

ABSTRACT

BACKGROUND: The marking technique in microneurography uses stimulus-induced changes in neural conduction velocity to characterize human C-fibers. Changes in conduction velocity are manifested as variations in the temporal latency between periodic electrical stimuli and the resulting action potentials (APs). When successive recorded sweeps are displayed vertically in a "waterfall" format, APs correlated with the stimulus form visible vertical tracks. Automated detection of these latency tracks is made difficult by the sometimes poor signal-to-noise ratio in recordings, spontaneous neural firings uncorrelated with the stimuli, and multi-unit recordings with crossing or closely parallel tracks.

NEW METHOD: We developed an automated track-detection technique based on a local linearization of the latency tracks of stimulus-correlated APs. This technique enhances latency tracks, eliminates transient noise spikes and spontaneous neural activity not correlated with the stimulus, and automatically detects latency tracks across successive sweeps in a recording.

RESULTS: We evaluated our method on microneurography recordings showing varying signal quality, spontaneous firing, and multiple tracks that run closely parallel and cross. The method showed excellent detection of latency tracks in all of our recordings.

COMPARISON WITH EXISTING METHOD(S): We compare our method to the commonly used track-detection method of Hammarberg as implemented in the Drever program.

CONCLUSIONS: Our method is a robust means of automatically detecting latency tracks in typical microneurography recordings.
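The local-linearization idea behind the track detection can be sketched in a few lines. This is an illustrative simplification, not the published algorithm: given candidate AP latencies detected in each sweep, a track is extended by fitting a line to its last few (sweep, latency) points, predicting the latency in the next sweep, and accepting the nearest candidate within a tolerance. Candidates far from the prediction (spontaneous firings, noise spikes) are ignored; all names and parameter values here are assumptions for the example.

```python
# Minimal sketch of latency-track following via local linearization
# (illustrative only; not the authors' implementation).

def predict_next(track, window=3):
    """Linearly extrapolate the next latency from the last `window`
    (sweep_index, latency_ms) points of a track."""
    pts = track[-window:]
    if len(pts) < 2:
        return pts[-1][1]  # too few points for a fit: assume constant latency
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    n = len(pts)
    mx, my = sum(xs) / n, sum(ys) / n
    denom = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in pts) / denom if denom else 0.0
    next_sweep = pts[-1][0] + 1
    return my + slope * (next_sweep - mx)

def follow_track(candidates_per_sweep, start_latency, tol=2.0):
    """candidates_per_sweep: one list of candidate AP latencies (ms) per
    stimulus sweep. Seeds a track at `start_latency` in sweep 0 and extends
    it sweep by sweep, keeping the candidate closest to the local linear
    prediction and discarding anything farther than `tol` ms."""
    track = [(0, start_latency)]
    for sweep in range(1, len(candidates_per_sweep)):
        pred = predict_next(track)
        hits = [c for c in candidates_per_sweep[sweep] if abs(c - pred) <= tol]
        if hits:
            track.append((sweep, min(hits, key=lambda c: abs(c - pred))))
    return track

# A slowly drifting unit at ~100 ms plus an uncorrelated event at 250 ms:
candidates = [[100.0, 250.0], [100.5, 251.0], [101.0, 250.3]]
print(follow_track(candidates, 100.0))
```

Because the fit is local (a short sliding window), the predicted latency can follow the gradual, stimulus-induced drift of a track while still rejecting activity uncorrelated with the stimulus.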


Subject(s)
Action Potentials/physiology; Electronic Data Processing; Nerve Fibers, Unmyelinated/physiology; Neurons/physiology; Reaction Time/physiology; Animals; Humans