1.
Article in English | MEDLINE | ID: mdl-31535999

ABSTRACT

This paper presents a special matrix factorization, based on sparse representation, that detects anomalies in video sequences generated with moving cameras. The representation is built by associating the frames of the target video, that is, the sequence to be tested for the presence of anomalies, with the frames of an anomaly-free reference video, a previously validated sequence. The factorization is carried out through a sparse coefficient matrix, and any target-video anomaly is encapsulated in a residue term. In order to cope with camera trepidations, domain transformations are incorporated into the sparse representation process. Approximations of the transformed-domain optimization problem are introduced to turn it into a feasible iterative process. Results obtained on a comprehensive video database acquired with moving cameras in a visually cluttered environment indicate that the proposed algorithm provides better geometric registration between reference and target videos, greatly improving the overall performance of the anomaly-detection system.
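
To make the mechanism concrete, below is a minimal sketch of the sparse-coding step, assuming frames are flattened into vectors and the reference frames form the columns of the dictionary; the function names are hypothetical, and the paper's domain transformations and iterative approximations are omitted.

```python
import numpy as np

def sparse_residue(target, reference, lam=0.1, n_iter=200):
    """Approximate a target frame as a sparse combination of reference-frame
    columns via ISTA; whatever the reference cannot explain ends up in the
    residue, which is where anomalies are expected to show up."""
    D = reference / (np.linalg.norm(reference, axis=0, keepdims=True) + 1e-12)
    s = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ s - target)        # gradient of the data-fidelity term
        s = s - step * grad
        s = np.sign(s) * np.maximum(np.abs(s) - lam * step, 0.0)  # soft threshold
    return target - D @ s                    # residue term
```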

2.
IEEE Trans Image Process ; 26(7): 3410-3424, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28422660

ABSTRACT

In this paper, we propose a fast weak classifier that can detect and track eyes in video sequences. The approach relies on a least-squares detector, based on the inner product detector (IPD), that can estimate a probability density distribution for a feature's location, which fits naturally into a Bayesian estimation cycle such as a Kalman or particle filter. As a least-squares sliding-window detector, it tolerates small variations in the desired pattern while maintaining good generalization capabilities and computational efficiency. We propose two approaches to integrating the IPD with a particle-filter tracker. We use the BioID, FERET, LFPW, and COFW public datasets, as well as five manually annotated high-definition video sequences, to quantitatively evaluate the algorithms' performance. The video dataset contains four subjects, different types of backgrounds, blurring due to fast motion, and occlusions. All code and data are available.
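
As a rough illustration of the least-squares detector idea (the names, the ridge term, and the 0/1 labels are assumptions; the particle-filter integration is not shown):

```python
import numpy as np

def train_ipd(patches, labels, ridge=1e-3):
    """Least-squares template h minimizing ||X h - y||^2 + ridge*||h||^2,
    where rows of X are flattened training windows and y is 1 for eye
    patches and 0 otherwise."""
    X = np.asarray(patches, float)
    y = np.asarray(labels, float)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)

def response_map(image, h, win):
    """Slide the template over the image; the inner product at each window
    position gives a score surface a Bayesian tracker can treat as a
    (pseudo-)likelihood."""
    im = np.asarray(image, float)
    wh, ww = win
    out = np.zeros((im.shape[0] - wh + 1, im.shape[1] - ww + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = h @ im[i:i + wh, j:j + ww].ravel()
    return out
```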

3.
IEEE Trans Image Process ; 25(9): 4046-60, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27333603

ABSTRACT

Directional intra prediction plays an important role in current state-of-the-art video coding standards. In directional prediction, neighbouring samples are projected along a specific direction to predict a block of samples. Ultimately, each prediction mode can be regarded as a set of very simple linear predictors, a different one for each pixel of a block. A natural question, therefore, is whether the theory of linear prediction could be used to generate intra prediction modes with increased coding efficiency. However, interpreting each directional mode merely as a set of linear predictors is too limited to provide useful insights for their design. In this paper, we introduce an interpretation of directional prediction as a particular case of linear prediction that uses first-order linear filters and a set of geometric transformations. This interpretation motivates a generalized intra prediction framework, whereby the first-order linear filters are replaced by adaptive linear filters with sparsity constraints. In this context, we investigate the use of efficient sparse linear models, adaptively estimated for each block through different algorithms, such as matching pursuit, least angle regression, the least absolute shrinkage and selection operator (LASSO), or the elastic net. The proposed intra prediction framework was implemented and evaluated within the state-of-the-art High Efficiency Video Coding (HEVC) standard. Experiments demonstrated the advantage of this predictive solution, mainly for images with complex features and textured areas, achieving higher average bitrate savings than other related sparse representation methods proposed in the literature.
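
A toy sketch of the sparsity-constrained linear prediction idea, here using scikit-learn's LASSO on a causal neighbourhood; the context template, training window, and all names are illustrative choices rather than the paper's configuration, and the block is assumed not to touch the image border:

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_intra_predict(recon, block_top, block_left, bs=8, alpha=0.01):
    """Fit a sparse linear predictor on previously reconstructed samples,
    then predict a bs x bs block pixel by pixel, feeding predictions back."""
    ctx = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (-2, 0), (0, -2)]  # causal offsets
    X, y = [], []
    # Training set: a causal window above and to the left of the block.
    for i in range(max(2, block_top - 16), block_top):
        for j in range(max(2, block_left - 16), block_left + bs):
            X.append([recon[i + dy, j + dx] for dy, dx in ctx])
            y.append(recon[i, j])
    model = Lasso(alpha=alpha, fit_intercept=True).fit(np.array(X), np.array(y))
    pred = recon.astype(float)
    for i in range(block_top, block_top + bs):
        for j in range(block_left, block_left + bs):
            feats = np.array([pred[i + dy, j + dx] for dy, dx in ctx])
            pred[i, j] = model.predict(feats[None, :])[0]
    return pred[block_top:block_top + bs, block_left:block_left + bs]
```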

4.
IEEE Trans Image Process ; 24(11): 4055-68, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26353355

ABSTRACT

A complete encoding solution for efficient intra-based depth map compression is proposed in this paper. The algorithm, named predictive depth coding (PDC), was specifically developed to efficiently represent the characteristics of depth maps, which are mostly composed of smooth areas delimited by sharp edges. At its core, PDC combines a directional intra prediction framework and a straightforward residue coding method with an optimized flexible block partitioning scheme. To handle depth edges that cannot be efficiently predicted by the directional modes, a constrained depth modeling mode based on explicit edge representation was developed. For residue coding, a simple, low-complexity approach was investigated, using constant or linear residue modeling depending on the prediction mode. The performance of the proposed intra depth map coding approach was evaluated through the quality of views synthesized from the encoded depth maps and original texture views. Experimental tests with an all-intra configuration demonstrated the superior rate-distortion performance of PDC, with average bitrate savings of 6% over the current state-of-the-art intra depth map coding solution in the 3D extension of the High Efficiency Video Coding standard (3D-HEVC). When view synthesis optimization is enabled in both the PDC and 3D-HEVC encoders, the average bitrate savings increase to 14.3%. This suggests that the proposed method, which does not use transform-based residue coding, is an efficient alternative to the current 3D-HEVC algorithm for intra depth map coding.
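
The residue-modeling step admits a very small sketch: depending on the prediction mode, the residue is approximated by a constant (DC) value or by a least-squares plane instead of transform coefficients. The selection rule and names below are assumptions.

```python
import numpy as np

def model_residue(residue, mode="constant"):
    """Approximate a prediction residue with a constant (DC) value or a
    least-squares plane, instead of transform coefficients."""
    r = np.asarray(residue, float)
    if mode == "constant":
        return np.full(r.shape, r.mean())
    h, w = r.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.stack([np.ones(h * w), yy.ravel(), xx.ravel()], axis=1)
    coef, *_ = np.linalg.lstsq(A, r.ravel(), rcond=None)  # fit a plane a + b*y + c*x
    return (A @ coef).reshape(h, w)
```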

5.
IEEE Trans Image Process ; 22(3): 1005-17, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23144033

ABSTRACT

Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size minimizes the unwanted spreading of coefficient values around overlapping image singularities, which usually complicates the feature selection process and may introduce reconstruction errors into the fused image. Moreover, we show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks that are more robust to artifacts introduced during fusion, further improving the obtained results. The combination of these techniques leads to a fusion framework that provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.
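
For orientation, a minimal undecimated (à trous) wavelet fusion sketch with max-absolute detail selection; note that the paper's key point, fusing after the first pair of the spectrally factorized analysis filters, is not reproduced here:

```python
import numpy as np
from scipy.ndimage import convolve1d

def atrous_decompose(img, levels=3):
    """Undecimated wavelet decomposition with the B3-spline kernel:
    returns the detail planes plus the final approximation."""
    h = np.array([1, 4, 6, 4, 1], float) / 16.0
    details, approx = [], img.astype(float)
    for lev in range(levels):
        kernel = np.zeros(4 * 2 ** lev + 1)
        kernel[:: 2 ** lev] = h  # insert 2^lev - 1 zeros between taps
        smooth = convolve1d(convolve1d(approx, kernel, axis=0, mode='reflect'),
                            kernel, axis=1, mode='reflect')
        details.append(approx - smooth)
        approx = smooth
    return details, approx

def fuse(img_a, img_b, levels=3):
    """Average the approximations; keep the larger-magnitude detail coefficient."""
    da, aa = atrous_decompose(img_a, levels)
    db, ab = atrous_decompose(img_b, levels)
    fused = (aa + ab) / 2.0
    for pa, pb in zip(da, db):
        fused += np.where(np.abs(pa) >= np.abs(pb), pa, pb)
    return fused
```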


Subject(s)
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Subtraction Technique; Wavelet Analysis; Artificial Intelligence; Reproducibility of Results; Sensitivity and Specificity; Signal Processing, Computer-Assisted
6.
IEEE Trans Image Process ; 21(12): 4758-69, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22997263

ABSTRACT

Infrared focal-plane array (IRFPA) detectors suffer from fixed-pattern noise (FPN), also known as spatial nonuniformity, which degrades image quality. FPN remains a serious problem despite recent advances in IRFPA technology. This paper proposes new scene-based correction algorithms for continuous compensation of bias and gain nonuniformity in FPA sensors. The proposed schemes use recursive least-squares and affine projection techniques that jointly compensate for the bias and gain of each image pixel, presenting rapid convergence and robustness to noise. Experiments with synthetic and real IRFPA videos show that the proposed solutions are competitive with the state of the art in FPN reduction, producing recovered images with higher fidelity.
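
A compact sketch of per-pixel recursive least-squares gain/bias estimation (the class name and the choice of a spatially smoothed frame as the regression target are assumptions; the paper's RLS and affine projection algorithms include further refinements):

```python
import numpy as np

class NUCRls:
    """Per-pixel RLS estimate of the correction: corrected = gain*observed + bias,
    driven towards a target such as a local spatial mean of the frame
    (a common scene-based proxy for the true irradiance)."""
    def __init__(self, shape, lam=0.99):
        self.lam = lam                                   # forgetting factor
        self.w = np.zeros(shape + (2,)); self.w[..., 0] = 1.0  # [gain, bias]
        self.P = np.tile(np.eye(2) * 1e3, shape + (1, 1))      # inverse correlation

    def step(self, frame, target):
        frame = np.asarray(frame, float)
        u = np.stack([frame, np.ones_like(frame)], axis=-1)    # regressor [x, 1]
        Pu = np.einsum('...ij,...j->...i', self.P, u)
        k = Pu / (self.lam + np.einsum('...i,...i->...', u, Pu))[..., None]
        err = target - np.einsum('...i,...i->...', self.w, u)
        self.w += k * err[..., None]
        self.P = (self.P - k[..., :, None] * Pu[..., None, :]) / self.lam
        return frame * self.w[..., 0] + self.w[..., 1]         # corrected frame
```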


Subject(s)
Algorithms; Image Processing, Computer-Assisted/methods; Infrared Rays; Video Recording/methods; Humans
7.
IEEE Trans Image Process ; 20(1): 64-75, 2011 Jan.
Article in English | MEDLINE | ID: mdl-21172744

ABSTRACT

In this paper, we address the problem of no-reference quality assessment for digital pictures corrupted with blur. We start by generating a large database of real images taken by human users in a variety of situations, and by conducting subjective tests to produce the ground truth associated with those images. Based upon this ground truth, we select a number of high-quality pictures and artificially degrade them with different intensities of simulated blur (Gaussian and linear motion), totalling 6000 simulated blurred images. We extensively evaluate the performance of state-of-the-art strategies for no-reference blur quantification in different blurring scenarios, and propose a paradigm for blur evaluation in which an effective method is pursued by combining several metrics and low-level image features. We test this paradigm by designing a no-reference quality assessment algorithm for blurred images that combines different metrics in a classifier based upon a neural network structure. Experimental results show that this leads to improved performance that better reflects the images' ground truth. Finally, based upon the real image database, we show that the proposed method also outperforms other algorithms and metrics in realistic blur scenarios.
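
The metric-combination paradigm can be sketched as follows: compute a handful of classic sharpness cues per image and let a small neural network map them to subjective scores. The specific features, the regression variant, and all names are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def blur_features(img):
    """Classic no-reference sharpness cues (illustrative stand-ins for the
    metrics the paper combines)."""
    b = img.astype(float)
    gy, gx = np.gradient(b)
    lap = (np.roll(b, 1, 0) + np.roll(b, -1, 0) +
           np.roll(b, 1, 1) + np.roll(b, -1, 1) - 4.0 * b)
    spec = np.abs(np.fft.fft2(b))
    return [np.mean(gx ** 2 + gy ** 2),         # gradient energy
            np.var(lap),                        # variance of the Laplacian
            spec[0, 1:b.shape[1] // 2].mean()]  # crude spectral cue

def train_quality_model(images, scores):
    """Map feature vectors to subjective quality scores with a small neural
    network (a regressor here; the paper uses a classifier structure)."""
    X = np.array([blur_features(im) for im in images])
    return MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, np.asarray(scores))
```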

8.
Water Res ; 44(13): 3946-58, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20605620

ABSTRACT

Ca-loaded Pelvetia canaliculata biomass was used to remove Pb²⁺ from aqueous solution in batch and continuous systems. Physicochemical characterization of the algal Pelvetia particles by potentiometric titration and FTIR analysis revealed a gel structure with two major binding groups, carboxylic (2.8 mmol g⁻¹) and hydroxyl (0.8 mmol g⁻¹), with an affinity constant distribution for hydrogen ions well described by a quasi-Gaussian distribution. Equilibrium adsorption (pH 3 and 5) and desorption (eluents: HNO₃ and CaCl₂) experiments showed that the biosorption mechanism is ion exchange among calcium, lead, and hydrogen ions, with stoichiometries of 1:1 (Ca:Pb) and 1:2 (Ca:H and Pb:H). The uptake capacity for lead ions decreased with decreasing pH, suggesting competition between H⁺ and Pb²⁺ for the same binding sites. A mass action law for the ternary mixture was able to predict the equilibrium data, with selectivity constants α_Ca^H = 9 ± 1 and α_Ca^Pb = 44 ± 5, revealing a higher affinity of the biomass towards lead ions. Adsorption (initial solution pH 4.5 and 2.5) and desorption (0.3 M HNO₃) kinetics were measured in batch and continuous systems. A mass transfer model using the Nernst-Planck approximation for the ionic flux of each counter-ion was used to predict the ion concentration profiles in batch systems and packed-bed columns. The effective intraparticle diffusion coefficients were determined as 3.73×10⁻⁷ cm² s⁻¹ for H⁺, 7.56×10⁻⁸ cm² s⁻¹ for Pb²⁺, and 6.37×10⁻⁸ cm² s⁻¹ for Ca²⁺.
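
For reference, the reported selectivity constants correspond to a binary mass-action law of the following form (the equivalent-fraction notation y/x is an assumption; the paper fits the full ternary H⁺/Ca²⁺/Pb²⁺ system):

```latex
% Mass-action law for the 1:1 Ca/Pb exchange, written with equivalent
% fractions y (biomass phase) and x (solution phase) -- notation assumed.
\alpha_{\mathrm{Ca}}^{\mathrm{Pb}}
  = \frac{y_{\mathrm{Pb}}\, x_{\mathrm{Ca}}}{y_{\mathrm{Ca}}\, x_{\mathrm{Pb}}}
  = 44 \pm 5,
\qquad
\alpha_{\mathrm{Ca}}^{\mathrm{H}} = 9 \pm 1 .
```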


Subject(s)
Calcium/chemistry; Lead/isolation & purification; Models, Chemical; Phaeophyceae/metabolism; Adsorption; Biodegradation, Environmental; Biomass; Bioreactors; Hydrogen/analysis; Hydrogen-Ion Concentration; Ion Exchange; Ions; Kinetics; Phaeophyceae/growth & development; Potentiometry; Spectroscopy, Fourier Transform Infrared
9.
IEEE Trans Image Process ; 19(10): 2712-24, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20423803

ABSTRACT

In this paper, we propose a new encoder for scanned compound documents, based upon a recently introduced coding paradigm called the multidimensional multiscale parser (MMP). MMP uses approximate pattern matching with adaptive multiscale dictionaries that contain concatenations of scaled versions of previously encoded image blocks. These features give MMP the ability to adjust to the input image's characteristics, resulting in high coding efficiency for a wide range of image types. This versatility makes MMP a good candidate for encoding compound digital documents. The proposed algorithm first classifies the image blocks as smooth (texture) or nonsmooth (text and graphics). Smooth and nonsmooth blocks are then compressed using different MMP-based encoders, each adapted to its block type. The adaptive use of these two encoders yields performance gains over the original MMP algorithm, further increasing its advantage over current state-of-the-art image encoders for scanned compound images, without compromising performance for other image types.
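
A minimal stand-in for the block classification step (the gradient-activity criterion and threshold are assumptions; the paper's actual classifier may differ):

```python
import numpy as np

def classify_block(block, grad_thresh=30.0):
    """Route a block to the text/graphics or texture encoder. Text and
    graphics blocks show many strong transitions per pixel; smooth and
    textured blocks do not."""
    b = block.astype(float)
    activity = np.abs(np.diff(b, axis=0)).sum() + np.abs(np.diff(b, axis=1)).sum()
    return "nonsmooth" if activity / b.size > grad_thresh else "smooth"
```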

10.
IEEE Trans Biomed Eng ; 56(3): 896-900, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19389688

ABSTRACT

This paper presents the results of a multiscale pattern-matching-based ECG encoder that employs simple preprocessing techniques to adapt the input signal. Experiments carried out with records from the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) database show that the proposed scheme is effective, outperforming some state-of-the-art schemes described in the literature.


Subject(s)
Data Compression/methods; Electrocardiography; Pattern Recognition, Automated/methods; Algorithms
11.
IEEE Trans Image Process ; 17(9): 1640-53, 2008 Sep.
Article in English | MEDLINE | ID: mdl-18701400

ABSTRACT

In this paper, we exploit a recently introduced coding algorithm called the multidimensional multiscale parser (MMP) as an alternative to traditional transform-quantization-based methods. MMP uses approximate pattern matching with adaptive multiscale dictionaries that contain concatenations of scaled versions of previously encoded image blocks. We propose the use of predictive coding schemes that modify the source's probability distribution in order to favour the efficiency of MMP's dictionary adaptation. Statistical conditioning is also used, allowing for increased coding efficiency of the dictionaries' symbols. New dictionary design methods, which strike an effective compromise between introducing new dictionary elements and reducing codebook redundancy, are also proposed. Experimental results validate the proposed techniques by showing consistent improvements in PSNR performance over the original MMP algorithm. Compared with state-of-the-art methods such as JPEG2000 and H.264/AVC, the proposed algorithm achieves relevant gains (up to 6 dB) for nonsmooth images and very competitive results for smooth images. These results strongly suggest that the new paradigm posed by MMP can be regarded as an alternative to the one traditionally used in image coding, for a wide range of image types.
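
Central to MMP is the scale transform that lets one previously coded pattern match blocks of other sizes; below is a bilinear-resampling sketch of that idea (the interpolation choice and names are assumptions):

```python
import numpy as np

def scale_block(block, out_shape):
    """Resample a dictionary element to another block size with separable
    bilinear interpolation, so one coded pattern can match blocks at any scale."""
    b = np.asarray(block, float)
    h, w = b.shape
    ys = np.linspace(0, h - 1, out_shape[0])
    xs = np.linspace(0, w - 1, out_shape[1])
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    fy, fx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = b[y0][:, x0] * (1 - fx) + b[y0][:, x1] * fx
    bot = b[y1][:, x0] * (1 - fx) + b[y1][:, x1] * fx
    return top * (1 - fy) + bot * fy
```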


Subject(s)
Algorithms; Data Compression/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted; Video Recording/methods; Reproducibility of Results; Sensitivity and Specificity
12.
IEEE Trans Biomed Eng ; 55(7): 1920-3, 2008 Jul.
Article in English | MEDLINE | ID: mdl-18595812

ABSTRACT

In this paper, the multidimensional multiscale parser (MMP) is employed to encode electromyographic signals. Experiments carried out with real signals acquired in the laboratory show that the proposed scheme is effective, outperforming even the wavelet-based state-of-the-art schemes in the literature in terms of percent root-mean-square difference (PRD) versus compression ratio.
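
For completeness, the distortion measure quoted above is commonly defined as follows (this standard definition is assumed):

```python
import numpy as np

def prd(original, reconstructed):
    """Percent root-mean-square difference between a signal and its
    reconstruction; lower is better."""
    x = np.asarray(original, float)
    err = x - np.asarray(reconstructed, float)
    return 100.0 * np.sqrt(np.sum(err ** 2) / np.sum(x ** 2))

def compression_ratio(raw_bits, coded_bits):
    """Ratio of the uncompressed to the compressed size."""
    return raw_bits / coded_bits
```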


Subject(s)
Action Potentials/physiology; Algorithms; Data Compression/methods; Electromyography/methods; Isometric Contraction/physiology; Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted; Humans; Reproducibility of Results; Sensitivity and Specificity
13.
IEEE Trans Biomed Eng ; 55(7): 1923-6, 2008 Jul.
Article in English | MEDLINE | ID: mdl-18595813

ABSTRACT

In this brief, we present new preprocessing techniques for electrocardiogram signals, namely DC equalization and complexity sorting, which can improve current 2-D compression algorithms. Experimental results with signals from the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) database show that the resulting schemes outperform many state-of-the-art schemes described in the literature.
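
A small sketch of the two preprocessing steps as they are commonly understood (the variance-based complexity measure is an assumed proxy; beats are assumed already segmented and length-normalized):

```python
import numpy as np

def preprocess_beats(beats):
    """Stack segmented heartbeats as rows of a 2-D array, remove each beat's
    mean (DC equalization), then reorder rows by a complexity proxy so that
    similar beats become neighbours for the 2-D encoder. The ordering is
    returned so the decoder can undo the permutation."""
    B = np.asarray(beats, float)
    B -= B.mean(axis=1, keepdims=True)   # DC equalization
    order = np.argsort(B.var(axis=1))    # complexity sorting (variance proxy)
    return B[order], order
```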


Subject(s)
Algorithms; Data Compression/methods; Electrocardiography/methods; Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted; Humans; Reproducibility of Results; Sensitivity and Specificity