1.
Sensors (Basel) ; 21(10), 2021 May 19.
Article in English | MEDLINE | ID: mdl-34069517

ABSTRACT

Microplastics (MPs) have been found in aqueous environments ranging from rural ponds and lakes to the deep ocean. Despite the ubiquity of MPs, our ability to characterize MPs in the environment is limited by the lack of technologies for rapidly and accurately identifying and quantifying MPs. Although standards exist for MP sample collection and preparation, methods of MP analysis vary considerably and produce data with a broad range of data content and quality. The need for extensive analysis-specific sample preparation in current technology approaches has hindered the emergence of a single technique which can operate on aqueous samples in the field, rather than on dried laboratory preparations. In this perspective, we consider MP measurement technologies with a focus on both their eventual field-deployability and their respective data products (e.g., MP particle count, size, and/or polymer type). We present preliminary demonstrations of several prospective MP measurement techniques, with an eye towards developing a solution or solutions that can transition from the laboratory to the field. Specifically, experimental results are presented from multiple prototype systems that measure various physical properties of MPs: pyrolysis-differential mobility spectroscopy, short-wave infrared imaging, aqueous Nile Red labeling and counting, acoustophoresis, ultrasound, impedance spectroscopy, and dielectrophoresis.

2.
IEEE Trans Image Process ; 20(11): 3014-27, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21435973

ABSTRACT

Real-time, two-way transmission of American Sign Language (ASL) video over cellular networks provides natural communication among members of the Deaf community. As a communication tool, compressed ASL video must be evaluated according to the intelligibility of the conversation, not according to conventional definitions of video quality. Guided by linguistic principles and human perception of ASL, this paper proposes a full-reference computational model of intelligibility for ASL (CIM-ASL) that is suitable for evaluating compressed ASL video. The CIM-ASL measures distortions only in regions relevant for ASL communication, using spatial and temporal pooling mechanisms that vary the contribution of distortions according to their relative impact on the intelligibility of the compressed video. The model is trained and evaluated using ground truth experimental data collected in three separate studies. The CIM-ASL provides accurate estimates of subjective intelligibility and demonstrates statistically significant improvements over computational models traditionally used to estimate video quality. The CIM-ASL is incorporated into an H.264-compliant video coding framework, creating a closed-loop encoding system optimized explicitly for ASL intelligibility. The ASL-optimized encoder achieves bitrate reductions between 10% and 42%, without reducing intelligibility, when compared to a general-purpose H.264 encoder.
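
As a rough illustration of the pooling idea only (not the published CIM-ASL), the sketch below weights per-frame distortion by a relevance mask over ASL-critical regions and then pools over time; the mask source, pooling exponents, and toy data are assumptions made for the example.

```python
# Illustrative sketch, not the authors' CIM-ASL: region-weighted distortion
# pooling for sign-language video. Mask, exponents, and data are assumed.
import numpy as np

def region_weighted_distortion(ref, dist, relevance, p_spatial=2.0, p_temporal=1.0):
    """ref, dist: (T, H, W) luminance arrays; relevance: (T, H, W) weights in [0, 1]
    emphasizing regions that matter for ASL (e.g., hands and face)."""
    err = (ref.astype(np.float64) - dist.astype(np.float64)) ** 2
    # Spatial pooling: distortion counts only where the relevance mask is high.
    per_frame = ((relevance * err).sum(axis=(1, 2)) /
                 (relevance.sum(axis=(1, 2)) + 1e-12)) ** (1.0 / p_spatial)
    # Temporal pooling: a generalized mean over frames (p=1 is the plain average).
    return (np.mean(per_frame ** p_temporal)) ** (1.0 / p_temporal)

# Toy usage with random data standing in for decoded video.
T, H, W = 8, 64, 64
ref = np.random.rand(T, H, W)
dist = ref + 0.05 * np.random.randn(T, H, W)
mask = np.zeros((T, H, W)); mask[:, 16:48, 16:48] = 1.0   # pretend hand/face region
print(region_weighted_distortion(ref, dist, mask))        # lower = less relevant distortion
```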


Subject(s)
Computer Simulation , Data Compression/methods , Sign Language , Humans , Manual Communication , Videotape Recording
3.
J Opt Soc Am A Opt Image Sci Vis ; 28(2): 157-88, 2011 Feb 01.
Article in English | MEDLINE | ID: mdl-21293521

ABSTRACT

Quality estimators aspire to quantify the perceptual resemblance, but not the usefulness, of a distorted image when compared to a reference natural image. However, humans can successfully accomplish tasks (e.g., object identification) using visibly distorted images that are not necessarily of high quality. A suite of novel subjective experiments reveals that quality does not accurately predict utility (i.e., usefulness). Thus, even accurate quality estimators cannot accurately estimate utility. In the absence of utility estimators, leading quality estimators are assessed as both quality and utility estimators and dismantled to understand those image characteristics that distinguish utility from quality. A newly proposed utility estimator demonstrates that a measure of contour degradation is sufficient to accurately estimate utility and is argued to be compatible with shape-based theories of object perception.
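
A hedged sketch of a contour-degradation measure in this spirit (not the paper's estimator): compare binary gradient-magnitude maps of the reference and distorted images and report the fraction of reference contours that survive. The gradient operator and threshold are assumptions made for the example.

```python
# Minimal contour-degradation sketch: gradient-magnitude "contour" maps are
# thresholded and compared; the 0.1 threshold is an assumed parameter.
import numpy as np

def contour_map(img, thresh=0.1):
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    m = mag.max()
    return (mag / m > thresh).astype(np.float64) if m > 0 else np.zeros_like(mag)

def contour_fidelity(ref, dist, thresh=0.1):
    """Fraction of reference contour pixels preserved in the distorted image:
    1.0 means contours intact, values near 0 mean heavy contour degradation."""
    cr, cd = contour_map(ref, thresh), contour_map(dist, thresh)
    return (cr * cd).sum() / (cr.sum() + 1e-12)

ref = np.random.rand(128, 128)
blurred = 0.25 * (ref + np.roll(ref, 1, 0) + np.roll(ref, 1, 1) + np.roll(ref, 1, (0, 1)))
print(contour_fidelity(ref, blurred))
```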

4.
IEEE Trans Image Process ; 16(9): 2284-98, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17784602

ABSTRACT

This paper presents an efficient metric for quantifying the visual fidelity of natural images based on near-threshold and suprathreshold properties of human vision. The proposed metric, the visual signal-to-noise ratio (VSNR), operates via a two-stage approach. In the first stage, contrast thresholds for detection of distortions in the presence of natural images are computed via wavelet-based models of visual masking and visual summation in order to determine whether the distortions in the distorted image are visible. If the distortions are below the threshold of detection, the distorted image is deemed to be of perfect visual fidelity (VSNR = infinity) and no further analysis is required. If the distortions are suprathreshold, a second stage is applied which operates based on the low-level visual property of perceived contrast, and the mid-level visual property of global precedence. These two properties are modeled as Euclidean distances in distortion-contrast space of a multiscale wavelet decomposition, and VSNR is computed based on a simple linear sum of these distances. The proposed VSNR metric is generally competitive with current metrics of visual fidelity; it is efficient both in terms of its low computational complexity and in terms of its low memory requirements; and it operates based on physical luminances and visual angle (rather than on digital pixel values and pixel-based dimensions) to accommodate different viewing conditions.
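
The two-stage structure can be sketched as follows; this is a simplification, not the published VSNR computation. Stage 1 declares the image visually lossless if the distortion's RMS contrast falls below a detection threshold, and stage 2 combines a perceived-contrast distance with a crude stand-in for the global-precedence term. The contrast definition, threshold value, and weighting alpha are all assumptions.

```python
# Simplified two-stage skeleton in the spirit of a VSNR-like metric.
import numpy as np

def rms_contrast(err, mean_lum):
    return np.sqrt(np.mean(err ** 2)) / (mean_lum + 1e-12)

def vsnr_like(ref, dist, detection_threshold=0.01, alpha=0.5):
    ref = ref.astype(np.float64); dist = dist.astype(np.float64)
    err = dist - ref
    c_err = rms_contrast(err, ref.mean())
    # Stage 1: below-threshold distortions are treated as visually lossless.
    if c_err < detection_threshold:
        return np.inf
    # Stage 2: combine a perceived-contrast distance with a crude "global
    # precedence" term (contrast of a coarsely downsampled error image).
    d_precedence = rms_contrast(err[::4, ::4], ref.mean())
    distortion = alpha * c_err + (1.0 - alpha) * d_precedence
    signal = rms_contrast(ref - ref.mean(), ref.mean())
    return 20.0 * np.log10(signal / (distortion + 1e-12))

ref = 128 + 32 * np.random.rand(256, 256)
noisy = ref + 2.0 * np.random.randn(256, 256)
print(vsnr_like(ref, noisy))
```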


Subject(s)
Algorithms , Biomimetics/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Visual Perception/physiology , Humans , Reproducibility of Results , Sensitivity and Specificity
5.
IEEE Trans Image Process ; 16(4): 967-81, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17405430

ABSTRACT

Under a rate constraint, wavelet-based image coding involves strategic discarding of information such that the remaining data can be described with a given amount of rate. In a practical coding system, this task requires knowledge of the relationship between quantization step size and compressed rate for each group of wavelet coefficients, the R-Q curve. A common approach to this problem is to fit each subband with a scalar probability distribution and compute entropy estimates based on the model. This approach is not effective at rates below 1.0 bits-per-pixel because the distributions of quantized data do not reflect the dependencies in coefficient magnitudes. These dependencies can be addressed with doubly stochastic models, which have been previously proposed to characterize more localized behavior, though there are tradeoffs between storage, computation time, and accuracy. Using a doubly stochastic generalized Gaussian model, it is demonstrated that the relationship between step size and rate is accurately described by a low degree polynomial in the logarithm of the step size. Based on this observation, an entropy estimation scheme is presented which offers an excellent tradeoff between speed and accuracy; after a simple data-gathering step, estimates are computed instantaneously by evaluating a single polynomial for each group of wavelet coefficients quantized with the same step size. These estimates are on average within 3% of a desired target rate for several state-of-the-art coders.
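
The estimation scheme itself is easy to illustrate. In the hedged sketch below, first-order entropy stands in for the coder's true rate and a cubic is fit in the logarithm of the step size; both choices are assumptions made for the example, not the paper's model.

```python
# Sketch of the polynomial R-Q idea: gather a few (step size, rate) samples,
# fit rate as a polynomial in log(step size), then evaluate instantly.
import numpy as np

def entropy_bits_per_sample(q):
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def fit_rq_polynomial(coeffs, step_sizes, degree=3):
    """Data-gathering step: quantize one group of coefficients at a few step
    sizes, measure rate, and fit rate as a polynomial in log(step size)."""
    rates = [entropy_bits_per_sample(np.round(coeffs / s)) for s in step_sizes]
    return np.polyfit(np.log(step_sizes), rates, degree)

def estimate_rate(poly, step_size):
    # Instant estimate: evaluate the fitted polynomial at log(step size).
    return float(np.polyval(poly, np.log(step_size)))

# Toy "subband": heavy-tailed coefficients standing in for wavelet data.
coeffs = np.random.laplace(scale=5.0, size=100_000)
poly = fit_rq_polynomial(coeffs, step_sizes=[0.5, 1, 2, 4, 8, 16])
print(estimate_rate(poly, 3.0), entropy_bits_per_sample(np.round(coeffs / 3.0)))
```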


Subject(s)
Algorithms , Data Compression/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Models, Statistical , Signal Processing, Computer-Assisted , Artifacts , Computer Simulation , Data Interpretation, Statistical , Entropy , Reproducibility of Results , Sensitivity and Specificity , Stochastic Processes
6.
IEEE Trans Image Process ; 16(4): 982-96, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17405431

ABSTRACT

Many modern wavelet quantization schemes specify wavelet coefficient step sizes as continuous functions of an input step-size selection criterion; rate control is achieved by selecting an appropriate set of step sizes. In embedded wavelet coders, however, rate control is achieved simply by truncating the coded bit stream at the desired rate. The order in which wavelet data are coded implicitly controls quantization step sizes applied to create the reconstructed image. Since these step sizes are effectively discontinuous, piecewise-constant functions of rate, this paper examines the problem of designing a coding order for such a coder, guided by a quantization scheme where step sizes evolve continuously with rate. In particular, it formulates an optimization problem that minimizes the average relative difference between the piecewise-constant implicit step sizes associated with a layered coding strategy and the smooth step sizes given by a quantization scheme. The solution to this problem implies a coding order. Elegant, near-optimal solutions are presented to optimize step sizes over a variety of regions of rates, either continuous or discrete. This method can be used to create layers of coded data using any scalar quantization scheme combined with any wavelet bit-plane coder. It is illustrated using a variety of state-of-the-art coders and quantization schemes. In addition, the proposed method is verified with objective and subjective testing.
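
One way to picture the idea is the greedy heuristic below (an illustration only, not the paper's near-optimal solutions): repeatedly code a refinement pass for the subband whose implicit step size most overshoots its smooth target, and the sequence of choices is the coding order. The target schedule, initial step sizes, and greedy rule are assumptions made for the example.

```python
# Greedy coding-order sketch: each coded bit-plane pass halves a subband's
# implicit step size; passes are ordered to track a smooth target schedule.
import numpy as np

def greedy_coding_order(initial_steps, target_steps, n_passes):
    """initial_steps[b]: step size implied before any refinement of subband b.
    target_steps[b]: desired step size for subband b at the final rate."""
    current = dict(initial_steps)
    order = []
    for _ in range(n_passes):
        # Pick the subband whose implicit step size overshoots its target most,
        # measured as a relative difference.
        b = max(current, key=lambda k: (current[k] - target_steps[k]) / target_steps[k])
        order.append(b)
        current[b] /= 2.0
    return order

initial = {"LL": 8.0, "HL3": 16.0, "LH3": 16.0, "HH1": 64.0}
target = {"LL": 1.0, "HL3": 2.0, "LH3": 2.0, "HH1": 16.0}
print(greedy_coding_order(initial, target, n_passes=8))
```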


Subject(s)
Algorithms , Data Compression/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Signal Processing, Computer-Assisted , Artifacts , Reproducibility of Results , Sensitivity and Specificity
7.
IEEE Trans Image Process ; 16(3): 649-63, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17357726

ABSTRACT

Real-time rate-control for wavelet image coding requires characterization of the rate required to code quantized wavelet data. An ideal robust solution can be used with any wavelet coder and any quantization scheme. A large number of wavelet quantization schemes (perceptual and otherwise) are based on scalar dead-zone quantization of wavelet coefficients. A key to performing rate-control is, thus, fast, accurate characterization of the relationship between rate and quantization step size, the R-Q curve. A solution is presented using two invocations of the coder that estimates the slope of each R-Q curve via probability modeling. The method is robust to choices of probability models, quantization schemes and wavelet coders. Because of extreme robustness to probability modeling, a fast approximation to spatially adaptive probability modeling can be used in the solution, as well. With respect to achieving a target rate, the proposed approach and associated fast approximation yield average percentage errors around 0.5% and 1.0% on images in the test set. By comparison, 2-coding-pass rho-domain modeling yields errors around 2.0%, and post-compression rate-distortion optimization yields average errors of around 1.0% at rates below 0.5 bits-per-pixel (bpp) that decrease down to about 0.5% at 1.0 bpp; both methods exhibit more competitive performance on the larger images. The proposed method and fast approximation approach are also similar in speed to the other state-of-the-art methods. In addition to possessing speed and accuracy, the proposed method does not require any training and can maintain precise control over wavelet step sizes, which adds flexibility to a wavelet-based image-coding system.
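
The flavor of two-pass rate control can be sketched as follows. This is a crude interpolation, not the paper's probability-modeling estimator (so its accuracy figures do not apply): first-order entropy stands in for a coder invocation, and rate is assumed to be roughly linear in the logarithm of the step size between the two trial points.

```python
# Two-pass rate-control sketch: measure rate at two trial step sizes and
# interpolate in log(step size) to hit a target rate.
import numpy as np

def measure_rate(coeffs, step):
    # Stand-in for one coder invocation: first-order entropy of the
    # dead-zone-quantized coefficients, in bits per sample.
    q = np.sign(coeffs) * np.floor(np.abs(coeffs) / step)
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def step_for_target_rate(coeffs, target_rate, s1=8.0, s2=32.0):
    r1, r2 = measure_rate(coeffs, s1), measure_rate(coeffs, s2)
    slope = (r2 - r1) / (np.log(s2) - np.log(s1))        # bits per unit log-step
    log_step = np.log(s1) + (target_rate - r1) / slope   # linear interpolation
    return float(np.exp(log_step))

coeffs = np.random.laplace(scale=10.0, size=200_000)
step = step_for_target_rate(coeffs, target_rate=1.0)
print(step, measure_rate(coeffs, step))   # compare achieved rate with the 1.0 bpp target
```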


Subject(s)
Algorithms , Data Compression/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Models, Statistical , Signal Processing, Computer-Assisted , Computer Simulation , Computer Systems , Data Interpretation, Statistical , Numerical Analysis, Computer-Assisted , Reproducibility of Results , Sensitivity and Specificity
8.
IEEE Trans Image Process ; 14(4): 397-410, 2005 Apr.
Article in English | MEDLINE | ID: mdl-15825476

ABSTRACT

This paper presents a contrast-based quantization strategy for use in lossy wavelet image compression that attempts to preserve visual quality at any bit rate. Based on the results of recent psychophysical experiments using near-threshold and suprathreshold wavelet subband quantization distortions presented against natural-image backgrounds, subbands are quantized such that the distortions in the reconstructed image exhibit root-mean-squared contrasts selected based on image, subband, and display characteristics and on a measure of total visual distortion so as to preserve the visual system's ability to integrate edge structure across scale space. Within a single, unified framework, the proposed contrast-based strategy yields images which are competitive in visual quality with results from current visually lossless approaches at high bit rates and which demonstrate improved visual quality over current visually lossy approaches at low bit rates. This strategy operates in the context of both nonembedded and embedded quantization, the latter of which yields a highly scalable codestream which attempts to maintain visual quality at all bit rates; a specific application of the proposed algorithm to JPEG-2000 is presented.
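
A minimal sketch of contrast-targeted step-size selection (not the published strategy): pick, by bisection, the step size whose quantization distortion has a specified RMS contrast. The synthesis gain, display mean luminance, and bisection bounds below are assumptions made for the example.

```python
# Contrast-targeted quantization sketch for one wavelet subband.
import numpy as np

def distortion_rms_contrast(coeffs, step, synthesis_gain, mean_luminance):
    q = step * np.round(coeffs / step)                 # uniform scalar quantization
    rms_error = np.sqrt(np.mean((coeffs - q) ** 2)) * synthesis_gain
    return rms_error / mean_luminance

def step_for_target_contrast(coeffs, target_contrast, synthesis_gain=1.0,
                             mean_luminance=128.0, lo=1e-3, hi=1e3, iters=60):
    # Distortion contrast increases with step size (monotone enough for
    # bisection in this sketch).
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        c = distortion_rms_contrast(coeffs, mid, synthesis_gain, mean_luminance)
        lo, hi = (mid, hi) if c < target_contrast else (lo, mid)
    return 0.5 * (lo + hi)

subband = np.random.laplace(scale=12.0, size=50_000)
step = step_for_target_contrast(subband, target_contrast=0.01)
print(step, distortion_rms_contrast(subband, step, 1.0, 128.0))
```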


Subject(s)
Algorithms , Computer Graphics , Data Compression/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Signal Processing, Computer-Assisted , Artificial Intelligence , Multimedia , Numerical Analysis, Computer-Assisted , Reproducibility of Results , Sensitivity and Specificity
9.
J Opt Soc Am A Opt Image Sci Vis ; 20(7): 1164-80, 2003 Jul.
Article in English | MEDLINE | ID: mdl-12868624

ABSTRACT

Quantization of the coefficients within a discrete wavelet transform subband gives rise to distortions in the reconstructed image that are localized in spatial frequency and orientation and are spatially correlated with the image. We investigated the detectability of these distortions: Contrast thresholds were measured for both simple and compound distortions presented in the unmasked paradigm and against two natural-image maskers. Simple and compound distortions were generated through uniform scalar quantization of one or two subbands. Unmasked detection thresholds for simple distortions yielded contrast sensitivity functions similar to those reported for 1-octave Gabor patches. Detection thresholds for simple distortions presented against two natural-image backgrounds revealed that thresholds were elevated across the frequency range of 1.15-18.4 cycles per degree with the greatest elevation for low-frequency distortions. Unmasked thresholds for compound distortions revealed relative sensitivities of 1.1-1.2, suggesting that summation of responses to wavelet distortions is similar to summation of responses to gratings. Masked thresholds for compound distortions revealed relative sensitivities of 1.5-1.7, suggesting greater summation when distortions are masked by natural images.
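
For readers unfamiliar with such stimuli, the hedged sketch below generates a simple (one-subband) and a compound (two-subband) distortion by uniform scalar quantization of wavelet subbands. The 'bior4.4' wavelet, the level count, the step sizes, and the random stand-in for a natural-image masker are assumptions made for the example, not the experimental conditions of the study.

```python
# Generating simple and compound quantization distortions with PyWavelets.
import numpy as np
import pywt  # PyWavelets

def quantize(band, step):
    return step * np.round(band / step)

def distorted_image(img, targets):
    """targets: dict mapping (detail level index, orientation 0/1/2) -> step size."""
    coeffs = pywt.wavedec2(img.astype(np.float64), 'bior4.4', level=4)
    for (lvl, ori), step in targets.items():
        bands = list(coeffs[lvl])
        bands[ori] = quantize(bands[ori], step)
        coeffs[lvl] = tuple(bands)
    return pywt.waverec2(coeffs, 'bior4.4')

img = np.random.rand(256, 256) * 255                           # stand-in for a natural image
simple = distorted_image(img, {(2, 0): 40.0})                   # one quantized subband
compound = distorted_image(img, {(2, 0): 40.0, (3, 1): 40.0})   # two quantized subbands
print(np.sqrt(np.mean((simple - img) ** 2)), np.sqrt(np.mean((compound - img) ** 2)))
```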


Subject(s)
Discrimination, Psychological/physiology , Vision, Ocular/physiology , Adult , Contrast Sensitivity , Humans , Male , Models, Biological , Photic Stimulation/methods , Psychophysics , Sensory Thresholds
10.
IEEE Trans Image Process ; 12(4): 420-30, 2003.
Article in English | MEDLINE | ID: mdl-18237920

ABSTRACT

Wavelet transform coefficients are defined by both a magnitude and a sign. While efficient algorithms exist for coding the transform coefficient magnitudes, current wavelet image coding algorithms are not as efficient at coding the sign of the transform coefficients. It is generally assumed that there is no compression gain to be obtained from entropy coding of the sign. Only recently have some authors begun to investigate this component of wavelet image coding. In this paper, sign coding is examined in detail in the context of an embedded wavelet image coder. In addition to using intraband wavelet coefficients in a sign coding context model, a projection technique is described that allows nonintraband wavelet coefficients to be incorporated into the context model. At the decoder, accumulated sign prediction statistics are also used to derive improved reconstruction estimates for zero-quantized coefficients. These techniques are shown to yield PSNR improvements averaging 0.3 dB, and are applicable to any genre of embedded wavelet image codec.
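
A rough sketch of the context-modeling idea (not the paper's coder): condition each coefficient's sign on the sign states of its left and upper neighbors, and measure the conditional entropy of the sign, which bounds the gain over spending one raw bit per sign. The two-neighbor context is an assumption; on real wavelet data, neighboring signs are correlated and the gain is positive, whereas the random toy data below gives a gain near zero.

```python
# Context-conditioned sign statistics for one quantized subband.
import numpy as np
from collections import Counter

def sign_state(x):
    return 0 if x == 0 else (1 if x > 0 else 2)     # zero / positive / negative

def sign_coding_gain(band):
    counts = Counter()
    for i in range(1, band.shape[0]):
        for j in range(1, band.shape[1]):
            if band[i, j] == 0:
                continue                             # zero-quantized: no sign is coded
            ctx = (sign_state(band[i - 1, j]), sign_state(band[i, j - 1]))
            counts[(ctx, band[i, j] > 0)] += 1
    # Conditional entropy H(sign | context), in bits per coded sign.
    total = sum(counts.values())
    h = 0.0
    for ctx in {c for (c, _) in counts}:
        n_pos, n_neg = counts[(ctx, True)], counts[(ctx, False)]
        n = n_pos + n_neg
        for k in (n_pos, n_neg):
            if k:
                h -= (k / total) * np.log2(k / n)
    return 1.0 - h                                   # bits saved per sign vs. raw coding

band = np.round(np.random.laplace(scale=1.0, size=(128, 128)))
print(sign_coding_gain(band))
```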

11.
IEEE Trans Image Process ; 12(5): 489-99, 2003.
Article in English | MEDLINE | ID: mdl-18237926

ABSTRACT

Reversible integer wavelet transforms are increasingly popular in lossless image compression, as evidenced by their use in the recently developed JPEG2000 image coding standard. In this paper, a projection-based technique is presented for decreasing the first-order entropy of transform coefficients and improving the lossless compression performance of reversible integer wavelet transforms. The projection technique is developed and used to predict a wavelet transform coefficient as a linear combination of other wavelet transform coefficients. It yields optimal fixed prediction steps for lifting-based wavelet transforms and unifies many wavelet-based lossless image compression results found in the literature. Additionally, the projection technique is used in an adaptive prediction scheme that varies the final prediction step of the lifting-based transform based on a modeling context. Compared to current fixed and adaptive lifting-based transforms, the projection technique produces improved reversible integer wavelet transforms with superior lossless compression performance. It also provides a generalized framework that explains and unifies many previous results in wavelet-based lossless image compression.
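
The projection can be illustrated with ordinary least squares (a sketch under assumed neighbor choices, not the paper's derivation): the optimal fixed prediction weights are those that orthogonally project a detail coefficient onto the span of its predictors, and the resulting prediction is floored so the lifting step remains exactly reversible on integers.

```python
# Least-squares ("projection") choice of a final lifting prediction step.
import numpy as np

def optimal_prediction_weights(detail, neighbors):
    """detail: (N,) coefficients to predict; neighbors: (N, K) predictor matrix.
    Returns weights w minimizing ||detail - neighbors @ w||^2, i.e. the
    orthogonal projection of `detail` onto the span of the predictors."""
    w, *_ = np.linalg.lstsq(neighbors, detail, rcond=None)
    return w

def apply_integer_prediction(detail, neighbors, w):
    # Reversible integer lifting: subtract the floored prediction, so the step
    # can be undone exactly by adding the same floored prediction back.
    pred = np.floor(neighbors @ w + 0.5).astype(np.int64)
    return detail.astype(np.int64) - pred

# Toy data: integer "detail" samples correlated with two neighboring samples.
rng = np.random.default_rng(0)
neighbors = rng.integers(-50, 50, size=(10_000, 2)).astype(np.float64)
detail = np.round(0.6 * neighbors[:, 0] - 0.3 * neighbors[:, 1] + rng.normal(0, 2, 10_000))
w = optimal_prediction_weights(detail, neighbors)
residual = apply_integer_prediction(detail, neighbors, w)
print(w, detail.var(), residual.var())    # residual variance should be much smaller
```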
