1.
Basic Clin Neurosci ; 13(4): 455-463, 2022.
Article in English | MEDLINE | ID: mdl-36561232

ABSTRACT

Introduction: This study investigates attentional bias toward drug-related stimuli, along with subjective craving after encountering such stimuli, in methamphetamine users. Cue-reactivity studies have confirmed a bias in attention and gaze toward drug-related stimuli for most substances; however, methamphetamine has been studied less often with a direct measure such as eye tracking. Methods: A total of 30 male subjects in the case group (methamphetamine users) and 36 subjects in the control group (no prior drug use) participated in this study. The participants' eye-movement data were collected while they viewed pairs of drug-related and non-drug images in a dot-probe paradigm. Craving was assessed via a self-report questionnaire on a scale of 0 to 10 before and after the psychophysical task. Results: The analysis of eye-movement data showed a significant gaze bias toward cue (drug-related) images in the case group. Gaze duration on cue images was also significantly higher in the case group than in the control group. The same effect was observed in the dot-probe task; that is, the mean reaction time to a probe that replaced a cue image was significantly lower. The mean of the first-fixation measure in the control group was not significantly higher than chance, whereas the percentage of first fixations on cue images in the drug users was significantly biased. Reported craving was significantly greater after performing the task than before. Conclusion: Our results indicate an attentional bias toward drug-related cues in methamphetamine users, as well as subjective craving after encountering such cues. Highlights: Gaze duration on cue images was significantly higher in methamphetamine users. The mean reaction time to a probe that replaced a cue image was significantly lower in methamphetamine users than in the control group. The mean of the first-fixation measure in the case group was significantly better than chance. Craving was reported to be significantly greater after performing the task. Plain Language Summary: Substance users tend to focus on stimuli associated with substances; this is known as attention bias, and it leads to increased craving. Attention bias for various substances has been reported previously; however, attention bias for methamphetamine has not been evaluated so far. In this study, we measured attention bias toward methamphetamine-related stimuli in methamphetamine users and control subjects with direct (eye tracking) and indirect (dot-probe paradigm) methods. In addition, we measured the level of craving in the case group. Our results confirmed the bias in attention toward methamphetamine-related stimuli in the case group compared with the control group.
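The dot-probe and gaze measures reported above reduce to two simple indices: a reaction-time bias (probes replacing cue images are answered faster) and a gaze-duration bias (a larger share of dwell time falls on cue images). The Python sketch below illustrates how such indices could be computed; the trial fields, timings, and values are invented for illustration and are not from the study's data.

    # Illustrative sketch (not the authors' code): two attentional-bias indices
    # commonly derived from dot-probe / eye-tracking trials, with assumed fields.
    import statistics

    # Hypothetical trials: whether the probe replaced the drug-related (cue) image,
    # reaction time in ms, and dwell time (ms) on the cue vs. neutral image.
    trials = [
        {"probe_on_cue": True,  "rt_ms": 412, "dwell_cue_ms": 610, "dwell_neutral_ms": 390},
        {"probe_on_cue": False, "rt_ms": 455, "dwell_cue_ms": 580, "dwell_neutral_ms": 420},
        {"probe_on_cue": True,  "rt_ms": 398, "dwell_cue_ms": 640, "dwell_neutral_ms": 360},
        {"probe_on_cue": False, "rt_ms": 470, "dwell_cue_ms": 555, "dwell_neutral_ms": 445},
    ]

    # Reaction-time bias: faster responses to probes replacing cue images
    # (congruent trials) suggest attention was already at the cue location.
    rt_congruent = statistics.mean(t["rt_ms"] for t in trials if t["probe_on_cue"])
    rt_incongruent = statistics.mean(t["rt_ms"] for t in trials if not t["probe_on_cue"])
    rt_bias = rt_incongruent - rt_congruent  # positive value -> bias toward cues

    # Gaze-duration bias: proportion of total dwell time spent on cue images.
    total_cue = sum(t["dwell_cue_ms"] for t in trials)
    total_all = total_cue + sum(t["dwell_neutral_ms"] for t in trials)
    gaze_bias = total_cue / total_all  # > 0.5 -> longer gaze on drug-related images

    print(f"RT bias: {rt_bias:.1f} ms, gaze-duration bias: {gaze_bias:.2f}")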

2.
Article in English | MEDLINE | ID: mdl-30507532

ABSTRACT

Just noticeable difference (JND) models are widely used for perceptual redundancy estimation in images and videos. A common way to measure the accuracy of a JND model is to inject random noise into an image according to the model and check whether the JND-noise-contaminated image is perceptually distinguishable from the original. Likewise, when comparing two JND models, the model that produces the JND-noise-contaminated image with better quality at the same level of noise energy is the better one. In both cases, however, a subjective test is necessary, which is time consuming and costly. In this paper, we present a full-reference metric called PDP (perceptual distinguishability predictor), which can be used to determine whether a given JND-noise-contaminated image is perceptually distinguishable from the reference image. The proposed metric employs the concept of sparse coding and extracts a feature vector from a given image pair. The feature vector is then fed to a multilayer neural network for classification. To train the network, we built a public database of 999 natural images with distinguishability thresholds for four different JND models, obtained from an extensive subjective experiment. The results indicate that PDP achieves a high classification accuracy of 97.1%. The proposed method can be used to objectively compare various JND models without performing any subjective test. It can also be used to obtain proper scaling factors to improve the JND thresholds estimated by an arbitrary JND model.
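As a rough illustration of the pipeline described above, the following Python sketch extracts a feature vector from a (reference, JND-noise-contaminated) image pair and trains a small neural-network classifier to predict distinguishability. The paper's sparse-coding feature extraction is replaced here by simple difference statistics, and the synthetic training pairs and labels are assumptions made purely for illustration.

    # Hedged sketch of a PDP-style classifier; not the authors' implementation.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def pair_features(ref, dist):
        """Placeholder feature vector for an image pair (not the paper's sparse codes)."""
        diff = dist.astype(float) - ref.astype(float)
        return np.array([diff.mean(), diff.std(), np.abs(diff).max(), (diff ** 2).mean()])

    rng = np.random.default_rng(0)
    X, y = [], []
    for _ in range(200):                       # synthetic training pairs
        ref = rng.integers(0, 256, (32, 32))
        noise_level = rng.choice([1.0, 8.0])   # low noise ~ indistinguishable, high ~ distinguishable
        dist = ref + rng.normal(0, noise_level, ref.shape)
        X.append(pair_features(ref, dist))
        y.append(int(noise_level > 4.0))       # label: 1 = perceptually distinguishable

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
    print("training accuracy:", clf.score(X, y))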

3.
IEEE Trans Image Process ; 26(6): 2882-2891, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28391196

ABSTRACT

In this paper, a novel method is presented for producing energy-efficient images, i.e., images that consume less electrical energy on energy-adaptive displays yet have the same or very similar perceptual quality as their originals. The proposed method relies on the fact that the energy consumption of pixels in modern energy-adaptive displays, such as OLED displays, is directly proportional to the luminance of the pixels. Hence, to reduce the energy consumption of an image while preserving its perceptual quality, we propose to reduce the luminance of the pixels in the image by one just-noticeable-difference (JND) threshold. To determine the JND thresholds, an adaptive saliency-modulated JND (SJND) model is developed. In the proposed model, the JND thresholds of each block in the given image are elevated by two non-linear saliency modulation functions using the visual saliency of the block. The parameters of the saliency modulation functions are estimated through an adaptive optimization framework, which utilizes a state-of-the-art saliency-based objective image quality assessment method. To evaluate the proposed method, a set of subjective experiments was conducted, and the real energy consumption of the produced energy-efficient images was measured with accurate power-monitoring equipment on an OLED display. The experimental results demonstrate that, on average, the proposed method reduces energy consumption by about 14.1% while preserving the perceptual quality of the displayed images.
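The core idea above, that OLED power scales with pixel luminance, so lowering luminance by roughly one JND saves energy at little perceptual cost, can be sketched as follows in Python. The per-block threshold used here is a crude stand-in for the paper's saliency-modulated JND model; its constants and the uniform block grid are assumptions for illustration only.

    # Minimal sketch of the luminance-reduction idea; not the paper's SJND model.
    import numpy as np

    def block_jnd(block):
        """Toy JND threshold: brighter backgrounds tolerate more change (assumption)."""
        mean_lum = block.mean()
        return 3.0 + 0.02 * mean_lum            # illustrative numbers only

    def energy_efficient(luma, block=16):
        out = luma.astype(float).copy()
        h, w = luma.shape
        for y in range(0, h, block):
            for x in range(0, w, block):
                b = out[y:y + block, x:x + block]
                b -= block_jnd(b)               # subtract one JND-like threshold per block
        return np.clip(out, 0, 255).astype(np.uint8)

    img = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(np.uint8)
    saved = 1.0 - energy_efficient(img).mean() / img.mean()   # OLED power ~ luminance
    print(f"approximate energy saving: {saved:.1%}")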

4.
IEEE Trans Image Process ; 23(1): 19-33, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24107933

ABSTRACT

In region-of-interest (ROI)-based video coding, ROI parts of the frame are encoded with higher quality than non-ROI parts. At low bit rates, such encoding may produce attention-grabbing coding artifacts, which can draw the viewer's attention away from the ROI and thereby degrade visual quality. In this paper, we present a saliency-aware video compression method for ROI-based video coding. The proposed method aims to reduce salient coding artifacts in non-ROI parts of the frame in order to keep the user's attention on the ROI. Further, the method allows saliency to increase in high-quality parts of the frame and to decrease in non-ROI parts. Experimental results indicate that the proposed method improves the visual quality of encoded video relative to conventional rate-distortion-optimized video coding, as well as to two state-of-the-art perceptual video coding methods.


Subject(s)
Algorithms , Artifacts , Data Compression/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Signal Processing, Computer-Assisted , Video Recording/methods , Photography/methods , Reproducibility of Results , Sensitivity and Specificity
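A minimal sketch of the bit-allocation idea behind such ROI-based coding follows (this is not the authors' encoder): each macroblock receives a quantization parameter so that ROI blocks get finer quantization, while non-ROI blocks whose predicted saliency is high are not quantized so coarsely that their artifacts attract attention. The function name, QP offsets, and grid size below are assumptions.

    # Illustrative per-macroblock QP assignment guided by an ROI mask and saliency.
    import numpy as np

    def assign_qp(roi_mask, saliency, base_qp=32, roi_drop=6, max_rise=8):
        """roi_mask, saliency: per-macroblock arrays; saliency values in [0, 1]."""
        qp = np.full(roi_mask.shape, base_qp, dtype=float)
        qp[roi_mask] -= roi_drop                      # finer quantization inside the ROI
        rise = max_rise * (1.0 - saliency)            # coarser only where non-ROI saliency is low
        qp[~roi_mask] += rise[~roi_mask]
        return np.clip(qp, 0, 51).astype(int)         # H.264/HEVC QP range

    mb = (9, 11)                                      # example macroblock grid
    rng = np.random.default_rng(2)
    roi = np.zeros(mb, dtype=bool); roi[3:6, 4:8] = True   # assumed ROI (e.g., a face region)
    qp_map = assign_qp(roi, rng.random(mb))
    print(qp_map)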
5.
IEEE Trans Image Process ; 21(2): 898-903, 2012 Feb.
Article in English | MEDLINE | ID: mdl-21859619

ABSTRACT

This correspondence describes a publicly available database of eye-tracking data, collected on a set of standard video sequences that are frequently used in video compression, processing, and transmission simulations. A unique feature of this database is that it contains eye-tracking data for both the first and second viewings of the sequence. We have made available the uncompressed video sequences and the raw eye-tracking data for each sequence, along with different visualizations of the data and a preliminary analysis based on two well-known visual attention models.


Subject(s)
Algorithms , Databases, Factual , Eye Movements/physiology , Video Recording , Adult , Female , Humans , Image Processing, Computer-Assisted , Internet , Male
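A small sketch of the kind of preliminary analysis such an eye-tracking database enables: build fixation-density maps for the first and second viewings from raw gaze samples and compare them. The grid size and synthetic gaze samples below are assumptions for illustration, not the database's actual format.

    # Illustrative fixation-density comparison between two viewings of a sequence.
    import numpy as np

    def fixation_map(gaze_xy, width=352, height=288, grid=(18, 22)):
        """Histogram gaze samples (pixel coordinates) into a coarse spatial grid."""
        gy = np.clip((gaze_xy[:, 1] / height * grid[0]).astype(int), 0, grid[0] - 1)
        gx = np.clip((gaze_xy[:, 0] / width * grid[1]).astype(int), 0, grid[1] - 1)
        m = np.zeros(grid)
        np.add.at(m, (gy, gx), 1)
        return m / m.sum()

    rng = np.random.default_rng(3)
    first = rng.normal([176, 144], 40, (500, 2))     # synthetic gaze samples, viewing 1
    second = rng.normal([200, 150], 60, (500, 2))    # synthetic gaze samples, viewing 2
    m1, m2 = fixation_map(first), fixation_map(second)
    similarity = np.corrcoef(m1.ravel(), m2.ravel())[0, 1]
    print(f"first vs. second viewing map correlation: {similarity:.2f}")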
6.
IEEE Trans Image Process ; 20(11): 3195-206, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21435974

ABSTRACT

In video transmission over packet-based networks, packet losses often occur in bursts. In this paper, we present a novel packetization method for increasing the robustness of compressed video against bursty packet losses. The proposed method is based on creating a coding order of macroblocks (MBs) so that the blocks that are close to each other in the coding order end up being far from each other in the frame. We formulate this idea as a discrete optimization problem, prove its NP-hardness, and discuss several possible solution methods. Experimental results indicate that the proposed method improves the quality of reconstructed frames under burst loss by several decibels compared to conventional flexible MB ordering techniques, and about 0.7 dB compared to the state-of-the-art method called explicit chessboard wipe.
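The packetization idea can be illustrated with a simple greedy stand-in for the paper's discrete optimization (which the authors show to be NP-hard): build a macroblock coding order in which blocks that are adjacent in coding order lie far apart in the frame, so a burst loss of consecutive packets removes spatially scattered blocks that are easier to conceal from surviving neighbors. The window size and greedy criterion below are assumptions, not the paper's method.

    # Greedy sketch of a loss-robust macroblock coding order (illustration only).
    def dispersed_order(rows, cols, window=4):
        coords = [(r, c) for r in range(rows) for c in range(cols)]
        order = [coords.pop(0)]
        while coords:
            recent = order[-window:]     # blocks nearby in coding order
            # pick the block farthest (in summed squared distance) from recently coded blocks
            best = max(coords, key=lambda p: sum((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 for q in recent))
            coords.remove(best)
            order.append(best)
        return order

    order = dispersed_order(6, 8)
    print(order[:8])    # consecutive macroblocks in coding order are spatially scattered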
