Results 1 - 5 of 5
1.
IEEE Trans Image Process; 33: 3606-3619, 2024.
Article in English | MEDLINE | ID: mdl-38814774

ABSTRACT

We conducted a large-scale study of human perceptual quality judgments of High Dynamic Range (HDR) and Standard Dynamic Range (SDR) videos subjected to varied scaling and compression levels and viewed on three different display devices. While the conventional expectation is that HDR quality is better than SDR quality, we found that subject preference for HDR versus SDR depends heavily on the display device, as well as on resolution scaling and bitrate. To study this question, we collected more than 23,000 quality ratings from 67 volunteers who watched 356 videos on OLED, QLED, and LCD televisions and, among many other findings, observed that HDR videos were often rated as lower quality than SDR videos at lower bitrates, particularly when viewed on LCD and QLED displays. Since it is of interest to be able to measure the quality of videos under these scenarios, e.g., to inform decisions regarding scaling, compression, and SDR versus HDR, we tested several well-known full-reference and no-reference video quality models on the new database. Towards advancing progress on this problem, we also developed a novel no-reference model called HDRPatchMAX, which uses a contrast-based analysis of classical and bit-depth features to predict quality more accurately than existing metrics.
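
The abstract does not spell out HDRPatchMAX's implementation, but a contrast-based patch analysis of the kind it describes might begin by ranking local patches of each frame by contrast and keeping the most contrastive ones for feature extraction. A minimal, hypothetical sketch in Python (the patch size and the RMS-contrast proxy are illustrative assumptions, not details from the paper):

    import numpy as np

    def top_contrast_patches(luma, patch=32, k=10):
        """Rank non-overlapping patches of a luma frame by an RMS-contrast
        proxy and return the k highest-contrast patches. The patch size and
        contrast measure are illustrative choices, not from the paper."""
        h, w = luma.shape
        scored = []
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                p = luma[y:y + patch, x:x + patch].astype(np.float64)
                scored.append((p.std() / (p.mean() + 1e-6), p))
        scored.sort(key=lambda t: t[0], reverse=True)
        return [p for _, p in scored[:k]]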

2.
IEEE Trans Image Process; 33: 42-57, 2024.
Article in English | MEDLINE | ID: mdl-37988212

ABSTRACT

Compared to standard dynamic range (SDR) videos, high dynamic range (HDR) content can represent and display much wider and more accurate ranges of brightness and color, leading to more engaging and enjoyable visual experiences. HDR also implies increased data volume, further challenging existing limits on bandwidth consumption and on the quality of delivered content. Perceptual quality models are used to monitor and control the compression of streamed SDR content. A similar strategy should be useful for HDR content, yet there has been limited work on building HDR video quality assessment (VQA) algorithms. One reason for this is a scarcity of high-quality HDR VQA databases representative of contemporary HDR standards. Towards filling this gap, we created the first publicly available HDR VQA database dedicated to HDR10 videos, called the Laboratory for Image and Video Engineering (LIVE) HDR Database. It comprises 310 videos from 31 distinct source sequences processed by ten different compression and resolution combinations, simulating bitrate ladders used by the streaming industry. We used this data to conduct a subjective quality study, gathering more than 20,000 human quality judgments under two different illumination conditions. To demonstrate the usefulness of this new psychometric data resource, we also designed a new framework for creating HDR quality-sensitive features, using a nonlinear transform to emphasize distortions occurring in spatial portions of videos that are enhanced by HDR, e.g., having darker blacks and brighter whites. We applied this new method, which we call HDRMAX, to modify the widely deployed Video Multimethod Assessment Fusion (VMAF) model. We show that VMAF+HDRMAX provides significantly elevated performance on both HDR and SDR videos, exceeding prior state-of-the-art model performance. The database is now accessible at: https://live.ece.utexas.edu/research/LIVEHDR/LIVEHDR_index.html. The model will be made available at a later date at: https://live.ece.utexas.edu//research/Quality/index_algorithms.htm.
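
The HDRMAX transform itself is not specified in this abstract, but the idea of emphasizing distortions in the darkest and brightest regions can be illustrated with an expansive nonlinearity applied to luma rescaled to [-1, 1]: values near the extremes are amplified while mid-tones are compressed. A hedged sketch (the exponential form and its parameter are assumptions, not the published transform):

    import numpy as np

    def expansive_nonlinearity(luma, delta=4.0):
        """Rescale luma to [-1, 1], then apply a sign-preserving exponential
        expansion that amplifies values near the extremes (darkest blacks,
        brightest whites) and compresses mid-tones. The exact form used by
        HDRMAX may differ; this is an illustrative choice."""
        x = 2.0 * (luma - luma.min()) / (np.ptp(luma) + 1e-12) - 1.0
        return np.sign(x) * (np.exp(delta * np.abs(x)) - 1.0) / (np.exp(delta) - 1.0)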

3.
IEEE Trans Image Process; 31: 1027-1041, 2022.
Article in English | MEDLINE | ID: mdl-34951848

ABSTRACT

Video livestreaming is gaining prevalence among video streaming services, especially for the delivery of live, high-motion content such as sporting events. The quality of these livestreamed videos can be adversely affected by any of a wide variety of events, including capture artifacts and distortions incurred during coding and transmission. High-motion content can cause or exacerbate many kinds of distortion, such as motion blur and stutter. Because of this, the development of objective Video Quality Assessment (VQA) algorithms that can predict the perceptual quality of high-motion, livestreamed videos is greatly desired. Important resources for developing these algorithms are appropriate databases that exemplify the kinds of livestreaming video distortions encountered in practice. Towards making progress in this direction, we built a video quality database specifically designed for livestreaming VQA research. The new video database is called the Laboratory for Image and Video Engineering (LIVE) Livestream Database. The LIVE Livestream Database includes 315 videos of 45 source sequences from 33 original contents impaired by 6 types of distortions. We also performed a subjective quality study using the new database, whereby more than 12,000 human opinions were gathered from 40 subjects. We demonstrate the usefulness of the new resource by performing a holistic evaluation of the performance of current state-of-the-art (SOTA) VQA models. We envision that researchers will find the dataset useful for the development, testing, and comparison of future VQA models. The LIVE Livestream database is being made publicly available for these purposes at https://live.ece.utexas.edu/research/LIVE_APV_Study/apv_index.html.
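
Benchmarking VQA models on a database like this one conventionally comes down to correlating each model's predictions with the mean opinion scores (MOS) gathered from subjects. A brief sketch of that standard protocol, assuming scipy is available (this is common practice in the VQA literature, not a procedure quoted from the paper):

    from scipy.stats import pearsonr, spearmanr

    def evaluate_vqa_model(predicted, mos):
        """Correlate model predictions with mean opinion scores:
        SROCC measures monotonic agreement, PLCC linear agreement."""
        srocc = spearmanr(predicted, mos).correlation
        plcc = pearsonr(predicted, mos)[0]
        return srocc, plcc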


Subject(s)
Algorithms, Artifacts, Databases, Factual, Humans, Motion, Video Recording
4.
IEEE Trans Image Process; 30: 8059-8074, 2021.
Article in English | MEDLINE | ID: mdl-34534087

ABSTRACT

We propose a new model for no-reference video quality assessment (VQA). Our approach rests on a new idea: highly localized space-time (ST) slices called Space-Time Chips (ST Chips), which are localized cuts of video data along directions that implicitly capture motion. We use perceptually motivated bandpass and normalization models to first process the video data, and then select oriented ST Chips based on how closely they fit parametric models of natural video statistics. We show that the parameters that describe these statistics can be used to reliably predict the quality of videos, without the need for a reference video. The proposed method implicitly models ST video naturalness, and deviations from naturalness. We train and test our model on several large VQA databases, and show that it achieves state-of-the-art performance at reduced cost, without requiring motion computation.
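
The pipeline the abstract outlines, bandpass filtering, divisive normalization, and parametric fits of natural video statistics, is commonly realized with mean-subtracted, contrast-normalized (MSCN) coefficients fit to a generalized Gaussian distribution (GGD). A generic sketch of those two steps (the Gaussian window width and the moment-matching estimator are standard choices from the natural scene statistics literature, not details specific to ST Chips):

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.special import gamma

    def mscn(frame, sigma=7.0 / 6.0):
        """Mean-subtracted, contrast-normalized coefficients of a frame."""
        mu = gaussian_filter(frame, sigma)
        var = gaussian_filter(frame * frame, sigma) - mu * mu
        return (frame - mu) / (np.sqrt(np.abs(var)) + 1.0)

    def fit_ggd_shape(coeffs):
        """Estimate the GGD shape parameter by moment matching:
        rho = E[|x|]^2 / E[x^2] is inverted over a grid of candidate shapes."""
        rho = np.mean(np.abs(coeffs)) ** 2 / (np.mean(coeffs ** 2) + 1e-12)
        shapes = np.arange(0.2, 10.0, 0.001)
        ratios = gamma(2.0 / shapes) ** 2 / (gamma(1.0 / shapes) * gamma(3.0 / shapes))
        return shapes[np.argmin(np.abs(ratios - rho))]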

5.
Biosens Bioelectron; 120: 77-84, 2018 Nov 30.
Article in English | MEDLINE | ID: mdl-30149216

ABSTRACT

Conventional analytical techniques, which have been developed for highly sensitive and selective detection and quantification of relevant biomarkers, may be less suitable for medical diagnosis in resource-scarce environments than point-of-care (POC) devices. We have developed a new reactive sensing material, consisting of ionic gold entrapped within an agarose gel scaffold, for POC quantification of ascorbic acid (AA) in tear fluid. Pathologically elevated concentrations of AA in human tear fluid can serve as a biomarker for full-thickness injuries to the ocular surface, which are a medical emergency. This reactive sensing material undergoes colorimetric changes, quantitatively dependent on the applied endogenous bio-reductants, as the entrapped ionic gold is reduced to form plasmonic nanoparticles. The capacity of this reactive material to function as a plasmonically driven biosensor, called 'OjoGel' (Spanish ojo, 'eye'), was demonstrated with the endogenous reducing agent AA. By applying AA at varied concentrations to the OjoGel, we demonstrated a quantitative colorimetric relationship between red (R) hexadecimal values and the AA concentrations of those treatments. This colorimetric relationship results directly from plasmonic gold nanoparticle formation within the OjoGel scaffold. Read with a commercially available mobile phone-based Pixel Picker® application, the OjoGel plasmonic sensing platform opens a new avenue for easy-to-use, rapid, and quantitative biosensing at low cost with accurate results.
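
Quantification of this kind typically rests on a calibration curve relating the measured red-channel values to known AA concentrations, which is then inverted for unknown samples. A minimal sketch assuming a linear response over the working range (the calibration numbers below are hypothetical placeholders, not data from the paper):

    import numpy as np

    # Hypothetical calibration data: known AA concentrations and the
    # red-channel values read from OjoGel photos (placeholders only).
    conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # assumed units, e.g. mM
    red = np.array([210.0, 195.0, 170.0, 130.0, 80.0])

    slope, intercept = np.polyfit(red, conc, 1)  # linear calibration fit

    def aa_concentration(r_value):
        """Map a measured red-channel value to an estimated AA
        concentration via the inverted calibration curve."""
        return slope * r_value + intercept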


Subject(s)
Ascorbic Acid/analysis, Biosensing Techniques/methods, Eye Injuries/diagnosis, Gels/chemistry, Gold/chemistry, Tears/chemistry, Colorimetry, Humans