ABSTRACT
In practical media distribution systems, visual content usually undergoes multiple stages of quality degradation along the delivery chain, but the pristine source content is rarely available at most quality monitoring points along the chain to serve as a reference for quality assessment. As a result, full-reference (FR) and reduced-reference (RR) image quality assessment (IQA) methods are generally infeasible. Although no-reference (NR) methods are readily applicable, their performance is often unreliable. On the other hand, intermediate references of degraded quality are often available, e.g., at the input of video transcoders, but how to make the best use of them has not been deeply investigated. Here we make one of the first attempts to establish a new paradigm named degraded-reference IQA (DR IQA). Specifically, using a two-stage distortion pipeline, we lay out the architectures of DR IQA and introduce a 6-bit code to denote the configuration choices. We construct the first large-scale databases dedicated to DR IQA and will make them publicly available. We make novel observations on distortion behavior in multi-stage distortion pipelines by comprehensively analyzing five multiple-distortion combinations. Based on these observations, we develop novel DR IQA models and make extensive comparisons with a series of baseline models derived from top-performing FR and NR models. The results suggest that DR IQA may offer significant performance improvement in multiple-distortion environments, thereby establishing DR IQA as a valid IQA paradigm that is worth further exploration.
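For illustration only, the sketch below shows one hypothetical way that six binary configuration choices of a two-stage distortion pipeline could be packed into a 6-bit code. The flag names and bit assignments are placeholders introduced here, not the scheme defined in the paper.

```python
# Hypothetical sketch (not the paper's scheme): packing six binary pipeline
# configuration choices into a single 6-bit code. Flag names are placeholders.

FLAGS = [
    "pristine_ref_available",   # bit 0: pristine source usable as a reference?
    "stage1_ref_available",     # bit 1: stage-1 degraded image usable as a reference?
    "stage1_distortion_known",  # bit 2: first-stage distortion type known?
    "stage2_distortion_known",  # bit 3: second-stage distortion type known?
    "stage1_level_known",       # bit 4: first-stage distortion level known?
    "stage2_level_known",       # bit 5: second-stage distortion level known?
]

def encode(config: dict) -> int:
    """Pack the six boolean choices into one 6-bit integer."""
    code = 0
    for i, name in enumerate(FLAGS):
        code |= int(bool(config.get(name, False))) << i
    return code

def decode(code: int) -> dict:
    """Unpack a 6-bit integer back into the six boolean choices."""
    return {name: bool((code >> i) & 1) for i, name in enumerate(FLAGS)}

if __name__ == "__main__":
    cfg = {name: False for name in FLAGS}
    cfg["stage1_ref_available"] = True   # DR IQA case: only a degraded reference exists
    code = encode(cfg)
    print(f"6-bit configuration code: {code:06b}")
    assert decode(code) == cfg
```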
ABSTRACT
Peer-based aggression following social rejection is a costly and prevalent problem for which existing treatments have had little success. This may be because aggression is a complex process influenced by current states of attention and arousal, which are difficult to measure on a moment-to-moment basis via self-report. It is therefore crucial to identify nonverbal behavioral indices of attention and arousal that predict subsequent aggression. We used Support Vector Machines (SVMs) with eye-gaze duration and pupillary response features, measured during positive and negative peer-based social interactions, to predict subsequent aggressive behavior toward those same peers. We found that eye gaze and pupillary reactivity not only predicted aggressive behavior but also outperformed models that included information about the participant's exposure to harsh parenting or trait aggression. Eye gaze and pupillary reactivity models also performed as well as those that included information about peer reputation (i.e., whether the peer was rejecting or accepting). This is the first study to decode nonverbal eye behavior during social interaction to predict social rejection-elicited aggression.
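As a minimal sketch of the kind of analysis described, the example below trains an SVM on two features (gaze duration and pupillary reactivity) to predict an aggressive response. It assumes scikit-learn and uses synthetic stand-in data; feature definitions, hyperparameters, and the cross-validation setup are illustrative, not the study's actual pipeline.

```python
# Illustrative sketch (not the study's analysis code): SVM classification of
# aggressive vs. non-aggressive responses from eye-tracking features.

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in features: per-trial gaze duration toward the peer (s)
# and pupil diameter change from baseline (mm) during the interaction.
n_trials = 200
X = np.column_stack([
    rng.normal(2.0, 0.5, n_trials),   # gaze duration
    rng.normal(0.1, 0.05, n_trials),  # pupillary reactivity
])
y = rng.integers(0, 2, n_trials)      # 1 = aggressive response toward the peer

# Standardize features before the SVM, then estimate out-of-sample accuracy.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```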