1.
IEEE Trans Pattern Anal Mach Intell ; 43(12): 4272-4290, 2021 Dec.
Article in English | MEDLINE | ID: mdl-32750769

ABSTRACT

What is the current state-of-the-art for image restoration and enhancement applied to degraded images acquired under less than ideal circumstances? Can the application of such algorithms as a pre-processing step improve image interpretability for manual analysis or automatic visual recognition to classify scene content? While there have been important advances in the area of computational photography to restore or enhance the visual quality of an image, the capabilities of such techniques have not always translated in a useful way to visual recognition tasks. Consequently, there is a pressing need for the development of algorithms that are designed for the joint problem of improving visual appearance and recognition, which will be an enabling factor for the deployment of visual recognition tools in many real-world scenarios. To address this, we introduce the UG² dataset as a large-scale benchmark composed of video imagery captured under challenging conditions, and two enhancement tasks designed to test algorithmic impact on visual quality and automatic object recognition. Furthermore, we propose a set of metrics to evaluate the joint improvement of such tasks as well as individual algorithmic advances, including a novel psychophysics-based evaluation regime for human assessment and a realistic set of quantitative measures for object recognition performance. We introduce six new algorithms for image restoration or enhancement, which were created as part of the IARPA-sponsored UG² Challenge workshop held at CVPR 2018. Under the proposed evaluation regime, we present an in-depth analysis of these algorithms and a host of deep learning-based and classic baseline approaches. From the observed results, it is evident that we are in the early days of building a bridge between computational photography and visual recognition, leaving many opportunities for innovation in this area.
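To illustrate the kind of joint evaluation the abstract describes, the following is a minimal sketch of measuring whether an enhancement step helps downstream recognition. It is not the UG² benchmark code: `enhance` (a simple contrast stretch) and `classify` (a deterministic placeholder) are hypothetical stand-ins, and the data is synthetic.

```python
# Hedged sketch: does enhancement as pre-processing change recognition accuracy?
# `enhance` and `classify` are placeholders, not the UG2 reference implementation.
import numpy as np

def enhance(frame: np.ndarray) -> np.ndarray:
    """Placeholder enhancement: simple min-max contrast stretch."""
    lo, hi = frame.min(), frame.max()
    return (frame - lo) / (hi - lo + 1e-8)

def classify(frame: np.ndarray, n_classes: int = 5) -> int:
    """Placeholder classifier: deterministic function of frame statistics."""
    return int(frame.mean() * 1000) % n_classes

def recognition_rate(frames, labels, preprocess=None) -> float:
    """Fraction of frames whose predicted class matches its label."""
    correct = 0
    for frame, label in zip(frames, labels):
        if preprocess is not None:
            frame = preprocess(frame)
        correct += int(classify(frame) == label)
    return correct / len(frames)

rng = np.random.default_rng(0)
frames = [rng.random((64, 64)) for _ in range(20)]
labels = [classify(f) for f in frames]          # pseudo ground truth for the sketch
delta = recognition_rate(frames, labels, enhance) - recognition_rate(frames, labels)
print(f"change in recognition rate after enhancement: {delta:+.3f}")
```

A positive delta would indicate the enhancement helped the (placeholder) recognizer; the benchmark in the paper pairs such quantitative measures with a psychophysics-based assessment of visual quality.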

2.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 694-697, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440491

ABSTRACT

The age of a subject can be estimated from a brain MR image by evaluating the morphological changes that accompany healthy aging. In this paper, we consider two types of local features for estimating age from T1-weighted images: handcrafted features and automatically extracted features. The handcrafted local features are the volumes of brain tissue parcellated into 90 or 1,024 local regions defined by the automated anatomical labeling (AAL) atlas. The automatically extracted features are obtained with a convolutional neural network (CNN). This paper explores the differences between the handcrafted and the automatically extracted features. Through a set of experiments using 1,099 T1-weighted images from a Japanese MR image database, we demonstrate the effectiveness of the proposed methods, analyze the contribution of each local region to age estimation, and discuss the medical implications.
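A minimal sketch of the handcrafted-feature route is given below: regress age on regional volumes (e.g., the 90 AAL regions mentioned in the abstract) with closed-form ridge regression. The data here is synthetic and the model is an assumption for illustration, not the authors' pipeline.

```python
# Hedged sketch: age regression from regional brain volumes (synthetic data).
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_regions = 200, 90                              # 90 AAL regions
volumes = rng.normal(1.0, 0.1, (n_subjects, n_regions))      # placeholder volumes
true_w = rng.normal(0.0, 1.0, n_regions)
ages = 50 + volumes @ true_w + rng.normal(0, 2, n_subjects)  # synthetic ages

# Closed-form ridge regression: w = (X^T X + lambda * I)^(-1) X^T y
X = np.hstack([volumes, np.ones((n_subjects, 1))])           # append intercept
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ ages)

pred = X @ w
mae = np.abs(pred - ages).mean()
print(f"in-sample mean absolute error: {mae:.2f} years")
```

The CNN-based route in the paper replaces these engineered regional volumes with features learned directly from the image; the comparison between the two is the paper's central question.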


Subject(s)
Aging , Brain/diagnostic imaging , Magnetic Resonance Imaging , Neural Networks, Computer , Humans
3.
IEEE Trans Pattern Anal Mach Intell ; 26(3): 397-402, 2004 Mar.
Article in English | MEDLINE | ID: mdl-15376885

ABSTRACT

Optimization methods based on iterative schemes can be divided into two classes: line-search methods and trust-region methods. While line-search techniques are commonly used in vision applications, trust-region methods have received much less attention. Motivated by the fact that line-search methods can be considered special cases of trust-region methods, we propose a trust-region framework for real-time tracking. Our approach makes three key contributions. First, the trust-region tracking system is more effective: it often outperforms trackers that rely on iterative optimization, e.g., a line-search-based mean-shift tracker. Second, we formulate a representation model that uses two coupled weighting schemes derived from the covariance ellipse to integrate an object's color probability distribution and edge density information. As a result, the system can handle rotation and nonuniform scaling in a continuous space, rather than working with a presumed discrete set of rotation angles and scales. Third, the framework is flexible in that a variety of distance functions can be adapted easily. Experimental results and comparative studies demonstrate the efficiency of the proposed method.
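For readers unfamiliar with the distinction the abstract draws, the following is a minimal sketch of a generic trust-region iteration on a toy quadratic objective: solve a local model within a radius, then grow or shrink the radius depending on how well the model predicted the actual decrease. It is an assumed textbook scheme for illustration, not the paper's tracker or its cost function.

```python
# Hedged sketch of a generic trust-region iteration (toy 2D objective).
import numpy as np

def f(x):                                      # toy cost standing in for a tracking objective
    return (x[0] - 3) ** 2 + 10 * (x[1] + 1) ** 2

def grad(x):
    return np.array([2 * (x[0] - 3), 20 * (x[1] + 1)])

hess = np.diag([2.0, 20.0])                    # constant Hessian of the toy quadratic

x = np.zeros(2)
radius = 0.5
for _ in range(30):
    g = grad(x)
    step = np.linalg.solve(hess, -g)           # full Newton step of the local model
    if np.linalg.norm(step) > radius:          # clip the step to the trust region
        step *= radius / np.linalg.norm(step)
    predicted = -(g @ step + 0.5 * step @ hess @ step)
    actual = f(x) - f(x + step)
    rho = actual / predicted if predicted > 0 else 0.0
    if rho > 0.75:                             # model trustworthy: expand the region
        radius = min(2 * radius, 10.0)
    elif rho < 0.25:                           # model poor: shrink the region
        radius *= 0.5
    if rho > 0:                                # accept the step only if the cost decreased
        x = x + step

print("minimizer estimate:", x, "f(x) =", f(x))
```

A line-search method would instead fix the step direction first and search along it; constraining the step length before choosing it is what distinguishes the trust-region family the paper builds on.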


Subject(s)
Algorithms , Artificial Intelligence , Image Interpretation, Computer-Assisted/methods , Movement/physiology , Pattern Recognition, Automated , Subtraction Technique , Computer Graphics , Image Enhancement/methods , Information Storage and Retrieval/methods , Models, Statistical , Numerical Analysis, Computer-Assisted , Reproducibility of Results , Sensitivity and Specificity , Signal Processing, Computer-Assisted