Results 1 - 20 of 52
1.
J Med Internet Res ; 26: e55403, 2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39163110

ABSTRACT

BACKGROUND: In China, mitral valve regurgitation (MR) is the most common cardiovascular valve disease. Patients in China face a high incidence of this condition, coupled with a low level of health knowledge and a relatively low rate of surgical treatment. TikTok hosts a vast amount of content related to diseases and health knowledge, providing viewers with access to relevant information. However, the quality of videos specifically addressing MR has not been investigated or evaluated. OBJECTIVE: This study aims to assess the quality of videos about MR on TikTok in China. METHODS: A cross-sectional study was conducted on the Chinese version of TikTok on September 9, 2023. The top 100 videos on MR were included and evaluated using quantitative scoring tools such as the modified DISCERN (mDISCERN), the Journal of the American Medical Association (JAMA) benchmark criteria, the Global Quality Score (GQS), and the Patient Education Materials Assessment Tool for Audio-Visual Content (PEMAT-A/V). Correlation and stepwise regression analyses were performed to examine the relationships between video quality and various characteristics. RESULTS: We obtained 88 valid video files, of which most (n=81, 92%) were uploaded by certified physicians, primarily cardiac surgeons and cardiologists. News agencies/organizations and physicians had higher GQS scores compared with individuals (news agencies/organizations vs individuals, P=.001; physicians vs individuals, P=.03). Additionally, news agencies/organizations had higher PEMAT understandability scores than individuals (P=.01). Videos focused on disease knowledge scored higher in GQS (P<.001), PEMAT understandability (P<.001), and PEMAT actionability (P<.001) compared with videos covering surgical cases. PEMAT actionability scores were higher for outpatient cases compared with surgical cases (P<.001).
Additionally, videos focused on surgical techniques had lower PEMAT actionability scores than those about disease knowledge (P=.04). The strongest correlations observed were between thumbs up and comments (r=0.92, P<.001), thumbs up and favorites (r=0.89, P<.001), thumbs up and shares (r=0.87, P<.001), comments and favorites (r=0.81, P<.001), comments and shares (r=0.87, P<.001), and favorites and shares (r=0.83, P<.001). Stepwise regression analysis identified "length (P<.001)," "content (P<.001)," and "physicians (P=.004)" as significant predictors of GQS. The final model (model 3) explained 50.1% of the variance in GQSs. The predictive equation for GQS is as follows: GQS = 3.230 - 0.294 × content - 0.274 × physicians + 0.005 × length. This model was statistically significant (P=.004) and showed no issues with multicollinearity or autocorrelation. CONCLUSIONS: Our study reveals that while most MR-related videos on TikTok were uploaded by certified physicians, ensuring professional and scientific content, the overall quality scores were suboptimal. Despite the educational value of these videos, the guidance provided was often insufficient. The predictive equation for GQS developed from our analysis offers valuable insights but should be applied with caution beyond the study context. It suggests that creators should focus on improving both the content and presentation of their videos to enhance the quality of health information shared on social media.
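The abstract's predictive equation can be restated as a small function. Note that the 0/1 coding of the dummy variables `content` and `physicians` is an assumption (the abstract does not spell it out), and `length` is taken to be the video duration in seconds:

```python
def predict_gqs(length_seconds: float, content: int, physicians: int) -> float:
    """Predict the Global Quality Score using the abstract's regression model.

    Coefficients are taken verbatim from the abstract; the 0/1 dummy coding
    of `content` and `physicians` is an assumption, as the abstract does not
    define it.
    """
    return 3.230 - 0.294 * content - 0.274 * physicians + 0.005 * length_seconds

# e.g. a hypothetical 120-second video with content=1 and physicians=1
score = predict_gqs(120, content=1, physicians=1)
```

Under this coding, longer videos raise the predicted GQS slightly (0.005 per second), consistent with the abstract's finding that length is a significant predictor.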


Subject(s)
Mitral Valve Insufficiency, Cross-Sectional Studies, Humans, Mitral Valve Insufficiency/physiopathology, Mitral Valve Insufficiency/surgery, China, Video Recording, Patient Education as Topic/methods, Patient Education as Topic/standards, Information Sources
2.
Article in English | MEDLINE | ID: mdl-38852711

ABSTRACT

BACKGROUND: Patients and healthcare professionals extensively rely on the internet for medical information. Low-quality videos can significantly impact the patient-doctor relationship, potentially affecting consultation efficiency and the decision-making process. Chat Generative Pre-Trained Transformer (ChatGPT) is an artificial intelligence application with the potential to improve medical reports, provide medical information, and supplement orthopedic knowledge acquisition. This study aimed to assess the ability of ChatGPT-4 to detect deficiencies in these videos, assuming it would be successful in identifying such deficiencies. MATERIALS AND METHODS: YouTube was searched for "rotator cuff surgery" and "rotator cuff surgery clinic" videos. A total of 90 videos were evaluated, with 40 included in the study after exclusions. Using the Google Chrome extension "YouTube Summary with ChatGPT & Claude," transcripts of these videos were accessed. Two senior orthopedic surgeons and ChatGPT-4 evaluated the videos using the rotator cuff surgery YouTube score (RCSS) system and the DISCERN criteria. RESULTS: ChatGPT-4's RCSS evaluations were comparable to those of the observers in 25% of instances, and 40% for DISCERN. The interobserver agreement between human observers and ChatGPT-4 was fair (AC1: 0.575 for DISCERN and AC1: 0.516 for RCSS). Even after correcting ChatGPT-4's incorrect answers, the agreement did not change significantly. ChatGPT-4 tended to give higher scores than the observers, particularly in sections related to anatomy, surgical technique, and indications for surgery. CONCLUSION: The use of ChatGPT-4 as an observer in evaluating rotator cuff surgery-related videos and identifying deficiencies is not currently recommended. Future studies with trained ChatGPT models may address these deficiencies and enable ChatGPT to evaluate videos at a human observer level.
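The AC1 values quoted above are Gwet's first-order agreement coefficient. For two raters and binary items it reduces to a short formula; the toy data below is purely illustrative (the study itself scored multi-item instruments):

```python
def gwet_ac1(rater_a, rater_b):
    """Gwet's AC1 for two raters and binary (0/1) ratings.

    pa = observed agreement
    pe = chance agreement 2*q*(1-q), where q is the overall proportion
         of '1' ratings across both raters.
    """
    n = len(rater_a)
    pa = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    q = (sum(rater_a) + sum(rater_b)) / (2 * n)
    pe = 2 * q * (1 - q)
    return (pa - pe) / (1 - pe)

# hypothetical yes/no judgments by two raters on four items
ac1 = gwet_ac1([1, 1, 0, 1], [1, 0, 0, 1])
```

Unlike Cohen's kappa, AC1 remains stable when one category strongly dominates, which is why it is often preferred for checklist-style video scoring.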

3.
J Med Internet Res ; 25: e39162, 2023 02 08.
Article in English | MEDLINE | ID: mdl-36753307

ABSTRACT

BACKGROUND: TikTok is an important channel through which consumers access and adopt health information, but the quality of health content on TikTok remains underinvestigated. OBJECTIVE: Our study aimed to identify the upload sources, contents, and feature information of gallstone disease videos on TikTok and to evaluate the factors related to video quality. METHODS: We investigated the first 100 gallstone-related videos on TikTok and analyzed these videos' upload sources, content, and characteristics. The quality of the videos was evaluated using quantitative scoring tools such as the DISCERN instrument, the Journal of the American Medical Association (JAMA) benchmark criteria, and the Global Quality Score (GQS). Moreover, the correlation between video quality and video characteristics, including duration, likes, comments, and shares, was further investigated. RESULTS: According to video sources, 81% of the videos were posted by doctors. Furthermore, disease knowledge was the most dominant video content, accounting for 56% of all the videos. The mean DISCERN, JAMA, and GQS scores of all 100 videos were 39.61 (SD 11.36), 2.00 (SD 0.40), and 2.76 (SD 0.95), respectively. According to DISCERN and GQS, the quality of gallstone-related videos on TikTok was not high, with most rated fair (43/100, 43%) or moderate (46/100, 46%). The total DISCERN scores of doctors were significantly higher than those of individuals and news agencies; scores for surgical technique videos were significantly higher than those for lifestyle and news videos; and scores for disease knowledge videos were significantly higher than those for news videos. DISCERN scores and video duration were positively correlated. Negative correlations were found between DISCERN scores and the likes and shares of videos. In the GQS analysis, no significant differences were found between groups based on different sources or different contents. JAMA was excluded from the video quality and correlation analysis due to a lack of discrimination and an inability to evaluate video quality accurately.
CONCLUSIONS: Although the gallstone videos on TikTok are mainly provided by doctors and contain disease knowledge, they are of low quality. We found a positive correlation between video duration and video quality. High-quality videos received little attention, and popular videos were of low quality. Medical information on TikTok is currently not rigorous enough to guide patients to make accurate judgments. TikTok is not an appropriate source of knowledge for educating patients, owing to the low quality and reliability of the information.


Subject(s)
Gallstones, Social Media, Humans, Gallstones/diagnosis, Cross-Sectional Studies, Reproducibility of Results, Benchmarking, Emotions, Video Recording, Information Dissemination
4.
Sensors (Basel) ; 23(5)2023 Mar 02.
Article in English | MEDLINE | ID: mdl-36904947

ABSTRACT

Video delivered over IP networks in real-time applications that use the RTP protocol over unreliable UDP, such as videotelephony or live streaming, is often prone to degradation from multiple sources. The most significant is the combined effect of video compression and its transmission over the communication channel. This paper analyzes the adverse impact of packet loss on video quality encoded with various combinations of compression parameters and resolutions. For the purposes of the research, a dataset containing 11,200 full HD and ultra HD video sequences encoded in the H.264 and H.265 formats at five bit rates was compiled, with simulated packet loss rates (PLR) ranging from 0 to 1%. Objective assessment was conducted using the peak signal-to-noise ratio (PSNR) and Structural Similarity Index (SSIM) metrics, whereas the well-known absolute category rating (ACR) was used for subjective evaluation. Analysis of the results confirmed the presumption that video quality decreases as the packet loss rate rises, regardless of compression parameters. The experiments further showed that the quality of sequences affected by packet loss declines with increasing bit rate. Additionally, the paper includes recommendations of compression parameters for use under various network conditions.
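PSNR, one of the objective metrics used above, can be computed directly from frame data. A minimal numpy sketch on toy frames (the frame sizes and pixel values are illustrative only, not from the study's dataset):

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two 8-bit frames."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames: infinite PSNR by convention
    return 10.0 * np.log10(peak ** 2 / mse)

# toy frames: the "distorted" frame is offset by a constant 10 gray levels
ref = np.full((64, 64), 128, dtype=np.uint8)
dist = ref + 10  # 138 everywhere; stays within the uint8 range here
value = round(psnr(ref, dist), 2)  # MSE = 100, so about 28.13 dB
```

A full video score is usually the per-frame PSNR averaged over the sequence; SSIM is typically taken from a library implementation rather than re-derived.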

5.
Sensors (Basel) ; 23(3)2023 Jan 29.
Article in English | MEDLINE | ID: mdl-36772550

ABSTRACT

Ultra-high-definition (UHD) video has brought new challenges to objective video quality assessment (VQA) due to its high resolution and high frame rate. Most existing VQA methods are designed for non-UHD videos; when they are employed to deal with UHD videos, the processing speed is slow and global spatial features cannot be fully extracted. In addition, these VQA methods usually segment the video into multiple segments, predict the quality score of each segment, and then average the segment scores to obtain the quality score of the whole video. This breaks the temporal correlation of the video sequences and is inconsistent with the characteristics of human visual perception. In this paper, we present a no-reference VQA method aiming to effectively and efficiently predict quality scores for UHD videos. First, we construct a spatial distortion feature network based on a super-resolution model (SR-SDFNet), which can quickly extract the global spatial distortion features of UHD videos. Then, to aggregate the spatial distortion features of each UHD frame, we propose a time fusion network based on a reinforcement learning model (RL-TFNet), in which the actor network continuously combines multiple frame features extracted by SR-SDFNet and outputs an action to adjust the current quality score to approximate the subjective score, and the critic network outputs action values to optimize the quality perception of the actor network. Finally, we conduct large-scale experiments on UHD VQA databases and the results reveal that, compared to other state-of-the-art VQA methods, our method achieves competitive quality prediction performance with a shorter runtime and fewer model parameters.

6.
Sensors (Basel) ; 23(12)2023 Jun 15.
Article in English | MEDLINE | ID: mdl-37420788

ABSTRACT

This article describes an empirical exploration of the effect of information loss in compressed representations of dynamic point clouds on the subjective quality of the reconstructed point clouds. The study involved compressing a set of test dynamic point clouds using the MPEG V-PCC (Video-based Point Cloud Compression) codec at five different levels of compression and applying simulated packet losses at three packet loss rates (0.5%, 1%, and 2%) to the V-PCC sub-bitstreams prior to decoding and reconstructing the dynamic point clouds. The quality of the recovered dynamic point clouds was then assessed by human observers in experiments conducted at two research laboratories in Croatia and Portugal, to collect MOS (Mean Opinion Score) values. These scores were subjected to a set of statistical analyses to measure the degree of correlation of the data from the two laboratories, as well as the degree of correlation between the MOS values and a selection of objective quality measures, while taking into account compression level and packet loss rates. The objective quality measures considered, all of the full-reference type, included point cloud-specific measures as well as others adapted from image and video quality measures. Among the image-based quality measures, FSIM (Feature Similarity index), MSE (Mean Squared Error), and SSIM (Structural Similarity index) yielded the highest correlation with subjective scores in both laboratories, while PCQM (Point Cloud Quality Metric) showed the highest correlation among all point cloud-specific objective measures. The study showed that even a 0.5% packet loss rate reduces the subjective quality of the decoded point clouds by 1 to 1.5 MOS scale units, pointing out the need to adequately protect the bitstreams against losses.
The results also showed that degradations in the V-PCC occupancy and geometry sub-bitstreams have a significantly higher (negative) impact on the subjective quality of the decoded point clouds than degradations of the attribute sub-bitstream.
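The lab-versus-lab and MOS-versus-metric correlations reported above are typically Pearson correlations; a minimal numpy sketch with hypothetical MOS and metric values (the numbers are illustrative, not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two score vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# hypothetical MOS values and an objective metric for six test sequences
mos    = [4.5, 4.1, 3.6, 3.0, 2.2, 1.5]
metric = [0.98, 0.95, 0.90, 0.83, 0.70, 0.55]
r = pearson_r(mos, metric)
```

Pearson r captures linear agreement; benchmarking studies usually also report Spearman's rank correlation, which only requires the metric to be monotonic in MOS.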


Subject(s)
Data Compression, Humans, Data Compression/methods, Croatia, Portugal
7.
Sensors (Basel) ; 23(4)2023 Feb 04.
Article in English | MEDLINE | ID: mdl-36850368

ABSTRACT

In the five years between 2017 and 2022, IP video traffic tripled, according to Cisco. User-Generated Content (UGC) is responsible for the bulk of this traffic. Although at the beginning of UGC creation this content was frequently characterized by amateur acquisition conditions and unprofessional processing, the development of widely accessible knowledge and affordable equipment now makes it possible to produce UGC of quality practically indistinguishable from professional content. In this research, we focus only on UGC content whose quality is clearly different from that of professional content. For the purposes of this paper, we treat "in the wild" content as a particular case of the general idea of UGC. Studies on UGC recognition are scarce. According to the literature, there are currently no operational algorithms that distinguish UGC content from other content. In this study, we demonstrate that the XGBoost (Extreme Gradient Boosting) machine learning algorithm can be used to develop a novel objective "in the wild" video content recognition model. The final model is trained and tested using video sequence databases with professional content and "in the wild" content. We achieved an accuracy of 0.916 for our model. Owing to the comparatively high accuracy of the model, a free implementation is made accessible to the research community, provided as an easy-to-use Python package installable with pip (Pip Installs Packages).

8.
Indian J Public Health ; 67(3): 422-427, 2023.
Article in English | MEDLINE | ID: mdl-37929385

ABSTRACT

Background: Providing health-care services through telemedicine for musculoskeletal ailments after the first wave of COVID-19 may help reduce the burden on the already-strained health-care system. Objectives: The objectives of this study were (1) to assess the satisfaction levels of orthopedic surgeons and patients with respect to telemedicine and (2) to determine the factors governing the overall efficacy of telemedicine consultations. Materials and Methods: A cross-sectional study was conducted to ascertain the perception of telemedicine (among both doctors and patients) under the following domains: (1) information provided and ease of usage; (2) doctor-patient communication; (3) ease of prescribing and understanding treatment; and (4) audio-video quality of the consultation. The influence of these factors on overall satisfaction was determined using multinomial logistic regression analysis. Results: Of the 204 patients and 27 surgeons who completed the questionnaire, 77% (patients) and 89% (surgeons) were satisfied with the overall efficacy of telemedicine. Maximum satisfaction was noted with the ease of obtaining a telemedicine appointment (168/204), and 68.6% of patients stated they would prefer to conduct future visits virtually. While all four factors were found to have a significant correlation (P < 0.001) with the overall efficacy of teleconsultation services, the quality of the telephone call (odds ratio [OR] = 90.15) and good doctor-patient communication (OR = 15.5) were the most important. Conclusion: Our study not only demonstrates the high degree of satisfaction with telehealth services but also pinpoints the areas where improvement is needed to enhance the overall experience with this technology.
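An odds ratio such as OR = 15.5 compares the odds of satisfaction between two groups. For a 2x2 table it is the cross-product ratio; the counts below are hypothetical, not the study's data:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 contingency table:

                        satisfied   not satisfied
    good communication      a             b
    poor communication      c             d
    """
    return (a * d) / (b * c)

# hypothetical counts: 150/20 satisfied vs not with good communication,
# 30/40 with poor communication
or_comm = odds_ratio(150, 20, 30, 40)
```

An OR of 10 would mean the odds of being satisfied are ten times higher with good communication; in a fitted multinomial logistic model the same quantity is obtained as exp(coefficient).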


Subject(s)
COVID-19, Orthopedic Surgeons, Telemedicine, Humans, Cross-Sectional Studies, Pandemics, India, Perception, Patient Satisfaction
9.
Cluster Comput ; 26(2): 1159-1167, 2023.
Article in English | MEDLINE | ID: mdl-36619851

ABSTRACT

Availability is one of the primary goals of smart networks, especially if the network is under heavy video streaming traffic. In this paper, we propose a deep learning-based methodology to enhance the availability of video streaming systems by developing a prediction model for video streaming quality, required power consumption, and required bandwidth based on video codec parameters. The H.264/AVC codec, one of the most popular codecs used in video streaming and conferencing communications, is chosen as a case study. We model the predicted consumed power, the predicted perceived video quality, and the predicted required bandwidth for the video codec based on video resolution and quantization parameters. We train, validate, and test the developed models through extensive experiments using several video contents. Results show that an accurate model can be built for the intended purpose: video streaming quality, required power consumption, and required bandwidth can be predicted accurately and used to enhance network availability in a cooperative environment.

10.
Med Teach ; 44(3): 287-293, 2022 03.
Article in English | MEDLINE | ID: mdl-34666585

ABSTRACT

PURPOSE: Medical education instructional videos are more popular and easier to create than ever before. Standard quality measures for this medium do not exist, leaving educators, learners, and content creators unable to assess these videos. MATERIALS AND METHODS: Drawing from the literature on video quality and popularity, reusable learning objects, and multimedia and curriculum development principles, we developed a 26-item instructional video quality checklist (IVQC) to capture aspects of educational design (six items), source reliability (four items), multimedia principle adherence (10 items), and accessibility (six items). Two raters applied the IVQC to 206 videos from five producers across topics from two organ systems (cardiology and pulmonology) encompassing four disciplines (anatomy, physiology, pathology, and pharmacology). RESULTS: Inter-rater reliability was strong. According to two-rater means, eight multimedia items were present in over 80% of videos. A minority of videos included learning objectives (46%), alternative language translations (41%), the date the video was updated (40%), analogies (37%), or references (9%). Producer ratings varied significantly (p < .001) across 17 of 26 items. There were no significant differences according to video topic. CONCLUSIONS: The IVQC detected differences in elements of instructional video quality. Future work can apply this instrument to a broader array of videos and in authentic educational settings.


Subject(s)
Checklist, Medical Education, Humans, Learning, Reproducibility of Results, Video Recording
11.
Sensors (Basel) ; 22(24)2022 Dec 10.
Article in English | MEDLINE | ID: mdl-36560065

ABSTRACT

During acquisition, storage, and transmission, the quality of digital videos degrades significantly. Low-quality videos lead to the failure of many computer vision applications, such as object tracking or detection, intelligent surveillance, etc. Over the years, many different features have been developed to resolve the problem of no-reference video quality assessment (NR-VQA). In this paper, we propose a novel NR-VQA algorithm that integrates the fusion of temporal statistics of local and global image features with an ensemble learning framework in a single architecture. Namely, the temporal statistics of global features reflect all parts of the video frames, while the temporal statistics of local features reflect the details. Specifically, we apply a broad spectrum of statistics of local and global features to characterize the variety of possible video distortions. In order to study the effectiveness of the method introduced in this paper, we conducted experiments on two large benchmark databases, i.e., KoNViD-1k and LIVE VQC, which contain authentic distortions, and we compared it to 14 other well-known NR-VQA algorithms. The experimental results show that the proposed method is able to achieve greatly improved results on the considered benchmark datasets. Namely, the proposed method exhibits significant progress in performance over other recent NR-VQA approaches.


Subject(s)
Algorithms, Video Recording/methods
12.
Sensors (Basel) ; 22(6)2022 Mar 12.
Article in English | MEDLINE | ID: mdl-35336380

ABSTRACT

With the constantly growing popularity of video-based services and applications, no-reference video quality assessment (NR-VQA) has become a very active research topic. Over the years, many different approaches have been introduced in the literature to evaluate the perceptual quality of digital videos. Due to the advent of large benchmark video quality assessment databases, deep learning has attracted a significant amount of attention in this field in recent years. This paper presents a novel deep learning-based approach for NR-VQA that relies on a set of pre-trained convolutional neural networks (CNNs) operating in parallel to characterize a broad range of potential image and video distortions. Specifically, temporally pooled and saliency-weighted video-level deep features are extracted with the help of a set of pre-trained CNNs and mapped onto perceptual quality scores independently from each other. Finally, the quality scores coming from the different regressors are fused together to obtain the perceptual quality of a given video sequence. Extensive experiments demonstrate that the proposed method sets a new state of the art on two large benchmark video quality assessment databases with authentic distortions. Moreover, the presented results underline that the decision fusion of multiple deep architectures can significantly benefit NR-VQA.


Subject(s)
Attention, Computer Neural Networks, Factual Databases
13.
Sensors (Basel) ; 22(21)2022 Oct 22.
Article in English | MEDLINE | ID: mdl-36365783

ABSTRACT

Objective stereo video quality assessment (SVQA) strives to be consistent with human visual perception while ensuring a low time and labor cost of evaluation. The temporal-spatial characteristics of video cause the data processing volume of quality evaluation to surge, making SVQA more challenging. Targeting the effect of distortion on the stereoscopic temporal domain, a stereo video quality assessment method based on the temporal-spatial relation is proposed in this paper. Specifically, a temporal adaptive model (TAM) for a video is established to describe the space-time domain of the video at both local and global levels. This model can be easily embedded into any 2D CNN backbone network. Compared with an improved model based on 3D CNN, this model has obvious advantages in operating efficiency. Experimental results on the NAMA3DS1-COSPAD1, WaterlooIVC 3D Video Phase I, QI-SVQA, and SIAT depth quality databases show that the model has excellent performance.


Subject(s)
Depth Perception, Visual Perception, Humans, Video Recording, Ocular Vision, Communication
14.
Sensors (Basel) ; 22(23)2022 Nov 29.
Article in English | MEDLINE | ID: mdl-36502009

ABSTRACT

Recently, there has been an increase in research interest in the seamless streaming of video over the Hypertext Transfer Protocol (HTTP) in cellular networks (3G/4G). The main challenges involved are the variation in available bit rates on the Internet caused by resource sharing and the dynamic nature of wireless communication channels. State-of-the-art techniques, such as Dynamic Adaptive Streaming over HTTP (DASH), support the streaming of stored video, but they struggle with live video content due to fluctuating bit rates in the network. In this work, a novel dynamic bit rate analysis technique is proposed to model client-server architecture using attention-based long short-term memory (A-LSTM) networks for solving the problem of smooth video streaming over HTTP networks. The proposed client system analyzes the bit rate dynamically, and a status report is sent to the server to adjust the ongoing session parameters. The server assesses the dynamics of the bit rate on the fly and calculates the status for each video sequence. The bit rate and buffer length are given as sequential inputs to the LSTM to produce feature vectors. These feature vectors are given different weights to produce updated feature vectors, which are fed to multi-layer feed-forward neural networks to predict six output class labels (144p, 240p, 360p, 480p, 720p, and 1080p). Finally, the proposed A-LSTM work is evaluated in real time on a CDMA2000 1xEV-DO Rev-A (code division multiple access evolution-data optimized) network with the help of an Internet dongle. Furthermore, the performance is analyzed with a full-reference quality metric for streaming video to validate our proposed work. Experimental results show an average improvement of 37.53% in peak signal-to-noise ratio (PSNR) and 5.7% in the structural similarity (SSIM) index over the commonly used buffer-filling technique during live video streaming.


Subject(s)
Computer Neural Networks, Video Recording/methods
15.
J Obstet Gynaecol ; 42(5): 1325-1330, 2022 Jul.
Article in English | MEDLINE | ID: mdl-34704513

ABSTRACT

With increasing numbers of laparoscopic hysterectomies, surgical trainees are compelled to learn more about endoscopy. Owing to coronavirus disease-related social distancing requirements, online education has gained prominence. Here, we aimed to investigate the laparoscopic hysterectomy video quality on YouTube using the LAParoscopic surgery Video Educational GuidelineS (LAP-VEGaS). YouTube was searched on June 7, 2020 using 'laparoscopic hysterectomy'. Three examiners evaluated videos using the Global Operative Assessment of Laparoscopic Skills (GOALS). Subsequently, videos were assessed for their conformity to the LAP-VEGaS and the LAP-VEGaS Video Assessment Tool. Interobserver reliability was estimated using intraclass coefficients and Cronbach's alpha. Cochran's Q test was used to determine correlations among quantitative data. The median GOALS score was 21.50. The observers' GOALS scores were significantly correlated. The results showed low conformity to the LAP-VEGaS. YouTube is the most used platform among trainees. The low YouTube video educational quality highlights the necessity for peer review, as trainees increasingly seek such resources during the pandemic.

IMPACT STATEMENT

What is already known on this subject? YouTube is the most commonly used online resource for educational material among surgical trainees. Online videos usually do not undergo a peer-review process. The LAParoscopic surgery Video Educational GuidelineS (LAP-VEGaS) may be used to assess the educational quality of surgical videos.

What do the results of this study add? To our knowledge, this is the first study on the quality of laparoscopic hysterectomy videos available on YouTube and the first study to evaluate YouTube laparoscopic surgery videos using the LAP-VEGaS Video Assessment Tool (VAT). Our study revealed the low educational quality of YouTube laparoscopic hysterectomy videos. The LAP-VEGaS VAT seems to be a valid and practical tool for assessing online laparoscopic hysterectomy videos.

What are the implications of these findings for clinical practice and/or further research? Medical communities, especially tertiary care or academic centres, may upload educational peer-reviewed videos for trainees seeking this type of resource, especially during the coronavirus disease pandemic, as surgical education alternatives are limited.


Subject(s)
COVID-19, Laparoscopy, Social Media, COVID-19/prevention & control, Female, Humans, Hysterectomy, Laparoscopy/education, Reproducibility of Results, Video Recording/methods
16.
Entropy (Basel) ; 24(6)2022 May 26.
Article in English | MEDLINE | ID: mdl-35741477

ABSTRACT

Mobile multimedia communication requires considerable resources, such as bandwidth and efficiency, to support Quality-of-Service (QoS) and user Quality-of-Experience (QoE). To increase the available bandwidth, 5G network designers have incorporated Cognitive Radio (CR), which can adjust communication parameters according to the needs of an application. Transmission errors occur in wireless networks and, without remedial action, result in degraded video quality. Secure transmission is also a challenge for such channels. Therefore, this paper's innovative scheme, "VQProtect", focuses on the visual quality protection of compressed videos by detecting and correcting channel errors while at the same time maintaining video end-to-end confidentiality so that the content remains unwatchable. For this purpose, a two-round secure process is implemented on selected syntax elements of the compressed H.264/AVC bitstreams. To uphold the visual quality of data affected by channel errors, a computationally efficient Forward Error Correction (FEC) method using Random Linear Block coding (with complexity O(k(n-1))) is implemented to correct the erroneous data bits, effectively eliminating the need for retransmission. Errors affecting an average of 7-10% of the video data bits were simulated with the Gilbert-Elliott model, and experimental results demonstrated that 90% of the resulting channel errors were recoverable by correctly inferring the values of erroneous bits. The proposed solution's effectiveness over selectively encrypted and error-prone video has been validated through a range of Video Quality Assessment (VQA) metrics.
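The paper's FEC is based on random linear block coding; as a deliberately simplified stand-in, a single XOR parity packet illustrates the same recover-without-retransmission principle for one lost packet (all packets are assumed to have equal length):

```python
from functools import reduce

def xor_parity(packets):
    """Compute one parity packet as the byte-wise XOR of k equal-length packets."""
    return reduce(lambda x, y: bytes(a ^ b for a, b in zip(x, y)), packets)

def recover(packets_with_one_loss, parity):
    """Recover the single missing packet (marked None) by XORing the survivors
    with the parity packet."""
    present = [p for p in packets_with_one_loss if p is not None]
    return xor_parity(present + [parity])

data = [b"abcd", b"efgh", b"ijkl"]
parity = xor_parity(data)           # sent alongside the data packets
recovered = recover([b"abcd", None, b"ijkl"], parity)  # b"efgh"
```

Real random linear block codes generalize this: n coded packets are random GF(2) combinations of k data packets, and any k of them suffice to decode via Gaussian elimination, tolerating more than one loss.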

17.
Sensors (Basel) ; 21(6)2021 Mar 10.
Article in English | MEDLINE | ID: mdl-33802202

ABSTRACT

Video quality evaluation needs a combined approach that includes subjective and objective metrics, testing, and monitoring of the network. This paper presents a novel approach to mapping quality of service (QoS) to quality of experience (QoE): QoE metrics are used to determine user satisfaction limits, and QoS tools are applied to provide the minimum QoE expected by users. Our aim was to connect objective estimates of video quality with subjective ones. A comprehensive tool for estimating the subjective evaluation is proposed. The idea is based on evaluating and marking video sequences using a sentinel flag derived from the spatial information (SI) and temporal information (TI) of individual video frames. The authors created a video database for quality evaluation and derived SI and TI from each video sequence to classify the scenes. Video scenes from the database were evaluated by objective and subjective assessment. Based on the results, a new model for predicting subjective quality is defined and presented. The quality is predicted by an artificial neural network from the objective evaluation and the type of video sequence, described by qualitative parameters such as resolution, compression standard, and bitstream. Furthermore, the authors created an optimal mapping function that defines the threshold for the variable bitrate setting based on the sentinel flag, which determines the type of scene in the proposed model. This function allows a bitrate to be allocated dynamically for a particular segment of the scene while maintaining the desired quality. The proposed model can help video service providers increase the comfort of end users: the variable bitstream ensures consistent video quality and customer satisfaction while network resources are used effectively. The model can also predict the appropriate bitrate for the required quality of video sequences, defined using either objective or subjective assessment.
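The SI and TI indicators used above to classify scenes follow the standard ITU-T P.910 definitions: SI is the maximum over frames of the standard deviation of the Sobel-filtered frame, and TI is the maximum standard deviation of successive frame differences. A minimal sketch (the synthetic "video" of a moving block is an illustrative stand-in for real frames):

```python
# Sketch of the ITU-T P.910 spatial-information (SI) and temporal-information
# (TI) indicators used to classify scenes. No external deps beyond numpy.
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # Sobel kernels
KY = KX.T

def conv2_valid(img, k):
    """Direct 2-D 'valid' convolution with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:h - 2 + i, j:w - 2 + j]
    return out

def si_ti(frames):
    """SI = max over frames of std(Sobel gradient magnitude);
    TI = max over successive frame differences of their std."""
    si = max(np.hypot(conv2_valid(f, KX), conv2_valid(f, KY)).std() for f in frames)
    ti = max((b - a).std() for a, b in zip(frames, frames[1:]))
    return si, ti

# Illustrative clip: a bright block moving right by 4 pixels per frame.
frames = []
for t in range(5):
    f = np.zeros((32, 32))
    f[8:16, 4 + 4 * t:12 + 4 * t] = 1.0
    frames.append(f)
si, ti = si_ti(frames)
# A static clip of repeated frames would give TI == 0; faster motion raises TI.
```

The sentinel-flag idea in the abstract amounts to thresholding (SI, TI) pairs like these to pick a scene class, which then drives the per-segment bitrate choice.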

18.
Sensors (Basel); 21(16), 2021 Aug 06.
Article in English | MEDLINE | ID: mdl-34450761

ABSTRACT

Over the past few decades, video quality assessment (VQA) has become a valuable research field. Perceiving in-the-wild video quality without a reference is challenged mainly by hybrid distortions with dynamic variations and by the movement of the content. To address this barrier, we propose a no-reference video quality assessment (NR-VQA) method that adds enhanced awareness of dynamic information to the perception of static objects. Specifically, we use convolutional networks of different dimensions to extract low-level static-dynamic fusion features from video clips and subsequently align them, followed by a temporal memory module, consisting of recurrent neural network branches and fully connected (FC) branches, that constructs feature associations over the time series. Meanwhile, to simulate human visual habits, we build a parametric adaptive network structure to obtain the final score. We validated the proposed method on four datasets (CVD2014, KoNViD-1k, LIVE-Qualcomm, and LIVE-VQC) to test its generalization ability. Extensive experiments demonstrate that the proposed method not only outperforms other NR-VQA methods in overall performance on mixed datasets but also achieves competitive performance on individual datasets compared with existing state-of-the-art methods.
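The temporal-memory idea above, recurrent pooling of per-frame information so that recent quality drops linger in the score, can be illustrated with a simple hysteresis pooling step: each frame's pooled score blends the current frame score with the worst score in a short memory window. This hand-rolled sketch stands in for the paper's learned recurrent module; the window size and blend weight are illustrative assumptions.

```python
# Sketch of memory-based temporal pooling of per-frame quality scores:
# viewers remember recent quality drops, so the pooled score for frame t
# mixes the current score with the minimum over the previous `mem` frames.
import numpy as np

def hysteresis_pool(frame_scores, mem=3, alpha=0.7):
    """Return a clip-level score from per-frame scores with a memory effect.
    alpha weights the remembered (worst recent) score against the current one."""
    s = np.asarray(frame_scores, dtype=float)
    pooled = np.empty_like(s)
    for t in range(len(s)):
        memory = s[max(0, t - mem):t].min() if t > 0 else s[0]
        pooled[t] = alpha * memory + (1 - alpha) * s[t]
    return pooled.mean()

# A fluctuating clip scores worse than a steady clip with the same mean,
# because the memory term keeps pulling the pooled score toward the dips.
steady = hysteresis_pool([0.6] * 10)
fluct = hysteresis_pool([0.8, 0.4] * 5)
```

A learned recurrent branch, as in the abstract, effectively discovers weighting behavior of this kind from data instead of hard-coding it.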


Subject(s)
Movement, Neural Networks (Computer), Humans
19.
Sensors (Basel); 21(16), 2021 Aug 15.
Article in English | MEDLINE | ID: mdl-34450931

ABSTRACT

Video has become the most popular medium of communication over the past decade, with nearly 90 percent of Internet bandwidth being used for video transmission. Thus, evaluating the quality of an acquired or compressed video has become increasingly important. The goal of video quality assessment (VQA) is to measure the quality of a video clip as perceived by a human observer. Since manually rating every video clip to evaluate quality is infeasible, researchers have attempted to develop various quantitative metrics that estimate the perceptual quality of video. In this paper, we propose a new region-based average video quality assessment (RAVA) technique that extends image quality assessment (IQA) metrics. In our experiments, we extend two full-reference (FR) image quality metrics to assess the feasibility of the proposed RAVA technique. Results on three different datasets show that our RAVA method is practical for predicting objective video scores.
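The region-based averaging idea behind RAVA can be sketched by applying a full-reference image metric per region of each frame and averaging over regions and frames. Plain PSNR is used here as an illustrative stand-in for the two FR metrics the abstract extends; grid size and frame data are assumptions for the demo.

```python
# Sketch of region-based averaging of a full-reference image metric over a
# video: score each grid region of each frame, average regions, then frames.
import numpy as np

def psnr(ref, dist, peak=1.0):
    """Peak signal-to-noise ratio in dB (infinite for identical inputs)."""
    mse = np.mean((ref - dist) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def rava_score(ref_frames, dist_frames, grid=(2, 2)):
    """Average a per-region FR metric over a grid of regions, then over frames."""
    scores = []
    for ref, dist in zip(ref_frames, dist_frames):
        rows = np.array_split(np.arange(ref.shape[0]), grid[0])
        cols = np.array_split(np.arange(ref.shape[1]), grid[1])
        regions = [psnr(ref[np.ix_(r, c)], dist[np.ix_(r, c)])
                   for r in rows for c in cols]
        scores.append(np.mean(regions))
    return float(np.mean(scores))

# Heavier distortion should yield a lower (worse) score than mild distortion.
rng = np.random.default_rng(1)
ref = [rng.random((32, 32)) for _ in range(4)]
mild = [f + rng.normal(0, 0.01, f.shape) for f in ref]
heavy = [f + rng.normal(0, 0.10, f.shape) for f in ref]
```

Averaging per region rather than over the whole frame lets locally concentrated distortions influence the score even when the global error is small, which is the motivation the abstract gives for extending IQA metrics this way.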


Subject(s)
Algorithms, Humans
20.
Sensors (Basel); 21(19), 2021 Sep 26.
Article in English | MEDLINE | ID: mdl-34640751

ABSTRACT

Video coding technology reduces the storage and transmission bandwidth required by video services by lowering the bitrate of the video stream. However, the compressed video signal may suffer perceivable information loss, especially when the video is overcompressed. In such cases, viewers can observe visually annoying artifacts, namely Perceivable Encoding Artifacts (PEAs), which degrade their perceived video quality. To monitor and measure these PEAs (including blurring, blocking, ringing, and color bleeding), we propose an objective video quality metric named Saliency-Aware Artifact Measurement (SAAM), which requires no reference information. The SAAM metric first applies video saliency detection to extract regions of interest and then splits these regions into a finite number of image patches. For each image patch, a data-driven model evaluates the intensities of the PEAs. Finally, these intensities are fused into an overall metric using Support Vector Regression (SVR). In the experiments, we compared the SAAM metric with other popular video quality metrics on four publicly available databases: LIVE, CSIQ, IVP, and FERIT-RTRK. The results reveal the promising quality-prediction performance of the SAAM metric, which is superior to most popular compressed-video quality evaluation models.
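One of the PEAs measured above is blocking. A simple hand-rolled indicator (not the paper's data-driven model) compares gradient magnitudes at 8x8 block boundaries, where block-based coders concentrate discontinuities, against gradients inside the blocks; a ratio near 1 suggests a natural image, while a ratio well above 1 suggests visible blocking.

```python
# Sketch of a no-reference blockiness indicator: the ratio of mean absolute
# gradient at 8x8 block-boundary rows/columns to the mean gradient elsewhere.
import numpy as np

def blockiness(img, b=8):
    """~1.0 for images with evenly spread gradients; >1 when gradient energy
    concentrates at block boundaries (a signature of blocking artifacts)."""
    gx = np.abs(np.diff(img, axis=1))         # gradients between columns j, j+1
    gy = np.abs(np.diff(img, axis=0))         # gradients between rows i, i+1
    bx = np.arange(gx.shape[1]) % b == b - 1  # column pairs straddling a boundary
    by = np.arange(gy.shape[0]) % b == b - 1  # row pairs straddling a boundary
    edge = np.concatenate([gx[:, bx].ravel(), gy[by].ravel()]).mean()
    rest = np.concatenate([gx[:, ~bx].ravel(), gy[~by].ravel()]).mean()
    return edge / rest

# A smooth linear ramp has identical gradients everywhere, so its score is 1.0.
ramp = np.add.outer(np.arange(32), np.arange(32)) / 64.0
print(round(blockiness(ramp), 3))  # prints 1.0
```

In the abstract's pipeline, per-patch indicators of this kind (learned from data rather than hand-designed) are computed for each artifact type in salient regions and then fused by SVR into the overall score.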


Subject(s)
Algorithms, Artifacts, Pressure, Video Recording