1.
J Clin Nurs ; 2024 May 19.
Article in English | MEDLINE | ID: mdl-38764248

ABSTRACT

AIM: To map the commonly used quantitative blood loss measurement methods in clinical practice and provide a solid foundation for future studies. DESIGN AND METHOD: This study adhered to the JBI methodology for scoping reviews and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for scoping reviews. We conducted a literature search using five databases to retrieve articles published between January 2012 and September 2022. The search was repeated on 29 February 2024. Data extraction and verification were carried out by two independent researchers using a self-designed data extraction form. RESULTS: Ultimately, 26 studies published between 2012 and 2024 were considered eligible for inclusion. Six categories of methods were identified from the 26 articles. Among the included studies, only two were randomized controlled trials, with the majority being observational studies. The World Health Organization (2012) postpartum haemorrhage diagnostic criteria were used in most studies. Gravimetric and volumetric methods emerged as the most commonly used methods for quantifying postpartum haemorrhage. The timing of blood collection was inconsistent among the included studies. Only 12 studies mentioned measures for the management of amniotic fluid. CONCLUSIONS: This scoping review supports replacing the visual estimation of blood loss with quantitative assessment methods. Endorsing a single specific assessment approach is not feasible because of the variability across the included studies. Future research should focus on establishing best practices for specific quantitative methods to standardize the management of postpartum haemorrhage and reduce the incidence of postpartum haemorrhage-related adverse outcomes. RELEVANCE TO CLINICAL PRACTICE: Healthcare professionals need to acknowledge the low accuracy of visual estimation methods and implement quantitative methods to assess postpartum blood loss. Given the limitations inherent in each assessment method, quantification of blood loss should be combined with assessment of maternal vital signs, physiologic indicators and other factors.
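The gravimetric method highlighted above converts the weight gained by drapes and sponges into a blood volume. The sketch below illustrates that arithmetic only; the blood density constant, the item weights, and the amniotic-fluid correction are illustrative assumptions, not values taken from the review.

    # Minimal gravimetric blood-loss sketch (hypothetical values, not from the review).
    BLOOD_DENSITY_G_PER_ML = 1.06  # commonly cited approximation for whole blood

    def gravimetric_blood_loss_ml(dry_weights_g, wet_weights_g, amniotic_fluid_g=0.0):
        """Estimate blood loss from pre/post weights of sponges and drapes.

        Subtracting an amniotic-fluid estimate reflects the 'management of
        amniotic fluid' issue the review raises; how that estimate is obtained
        varies between studies.
        """
        gained_g = sum(wet_weights_g) - sum(dry_weights_g) - amniotic_fluid_g
        return max(gained_g, 0.0) / BLOOD_DENSITY_G_PER_ML

    # Example: three sponges and one drape, weights in grams.
    print(gravimetric_blood_loss_ml([12.0, 12.0, 12.0, 150.0],
                                    [95.0, 110.0, 88.0, 520.0],
                                    amniotic_fluid_g=80.0))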

2.
Article in English | MEDLINE | ID: mdl-38625773

ABSTRACT

Blind video quality assessment (BVQA) plays an indispensable role in monitoring and improving the end-users' viewing experience in various real-world video-enabled media applications. As an experimental field, the improvements of BVQA models have been measured primarily on a few human-rated VQA datasets. Thus, it is crucial to gain a better understanding of existing VQA datasets in order to properly evaluate the current progress in BVQA. Towards this goal, we conduct a first-of-its-kind computational analysis of VQA datasets via designing minimalistic BVQA models. By minimalistic, we restrict our family of BVQA models to build only upon basic blocks: a video preprocessor (for aggressive spatiotemporal downsampling), a spatial quality analyzer, an optional temporal quality analyzer, and a quality regressor, all with the simplest possible instantiations. By comparing the quality prediction performance of different model variants on eight VQA datasets with realistic distortions, we find that nearly all datasets suffer from the easy dataset problem of varying severity, some of which even admit blind image quality assessment (BIQA) solutions. We additionally justify our claims by comparing our model generalization capabilities on these VQA datasets, and by ablating a dizzying set of BVQA design choices related to the basic building blocks. Our results cast doubt on the current progress in BVQA, and meanwhile shed light on good practices of constructing next-generation VQA datasets and models.
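The "basic blocks" enumerated above can be read as a four-stage pipeline. The sketch below uses deliberately simplistic placeholder implementations (coarse downsampling, a Laplacian-variance spatial score, a frame-difference temporal score, and a linear regressor) purely to illustrate that structure; it is not the authors' model.

    import numpy as np

    def preprocess(video, spatial_step=4, temporal_step=2):
        # Aggressive spatiotemporal downsampling (placeholder).
        return video[::temporal_step, ::spatial_step, ::spatial_step]

    def spatial_quality(frame):
        # Placeholder spatial analyzer: variance of a Laplacian-like response.
        lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0) +
               np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4 * frame)
        return lap.var()

    def temporal_quality(video):
        # Optional temporal analyzer: mean absolute frame difference.
        return np.abs(np.diff(video, axis=0)).mean()

    def quality_regressor(features, w, b):
        # Simplest possible regressor: a linear map to one score.
        return float(np.dot(w, features) + b)

    video = np.random.rand(30, 240, 320)          # stand-in for a decoded clip
    clip = preprocess(video)
    feats = np.array([np.mean([spatial_quality(f) for f in clip]),
                      temporal_quality(clip)])
    print(quality_regressor(feats, w=np.array([0.7, -0.3]), b=3.0))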

3.
IEEE Trans Image Process ; 32: 3847-3861, 2023.
Article in English | MEDLINE | ID: mdl-37428674

ABSTRACT

In recent years, User Generated Content (UGC) has grown dramatically in video sharing applications. It is necessary for service providers to use video quality assessment (VQA) to monitor and control users' Quality of Experience when watching UGC videos. However, most existing UGC VQA studies only focus on the visual distortions of videos, ignoring that the perceptual quality also depends on the accompanying audio signals. In this paper, we conduct a comprehensive study of UGC audio-visual quality assessment (AVQA) from both subjective and objective perspectives. Specifically, we construct the first UGC AVQA database, named the SJTU-UAV database, which includes 520 in-the-wild UGC audio and video (A/V) sequences collected from the YFCC100m database. A subjective AVQA experiment is conducted on the database to obtain the mean opinion scores (MOSs) of the A/V sequences. To demonstrate the content diversity of the SJTU-UAV database, we give a detailed analysis of the SJTU-UAV database as well as two other synthetically distorted AVQA databases and one authentically distorted VQA database, from both the audio and video aspects. Then, to facilitate the development of the AVQA field, we construct a benchmark of AVQA models on the proposed SJTU-UAV database and the two other AVQA databases, where the benchmark models consist of AVQA models designed for synthetically distorted A/V sequences and AVQA models built by combining popular VQA methods and audio features via a support vector regressor (SVR). Finally, considering that the benchmark AVQA models perform poorly in assessing in-the-wild UGC videos, we further propose an effective AVQA model that jointly learns quality-aware audio and visual feature representations in the temporal domain, which is seldom investigated by existing AVQA models. Our proposed model outperforms the aforementioned benchmark AVQA models on the SJTU-UAV database and the two synthetically distorted AVQA databases. The SJTU-UAV database and the code of the proposed model will be released to facilitate further research.
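One family of benchmark models described above fuses outputs of popular VQA methods with audio features through a support vector regressor. A minimal, hypothetical illustration of that fusion using scikit-learn follows; the feature values and MOS labels are random stand-ins, not SJTU-UAV data.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n_clips = 200
    video_scores = rng.uniform(0, 100, (n_clips, 1))   # e.g. outputs of a VQA method
    audio_feats = rng.normal(size=(n_clips, 8))        # e.g. spectral audio features
    X = np.hstack([video_scores, audio_feats])
    mos = rng.uniform(1, 5, n_clips)                   # stand-in subjective scores

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    model.fit(X[:150], mos[:150])                      # train on 150 clips
    print(model.predict(X[150:155]))                   # predicted A/V quality for held-out clips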


Subjects
Learning; Databases, Factual; Video Recording/methods; Humans
4.
IEEE Trans Med Imaging ; 42(11): 3295-3306, 2023 11.
Article in English | MEDLINE | ID: mdl-37267133

ABSTRACT

High-quality pathological microscopic images are essential for physicians and pathologists to make correct diagnoses. Image quality assessment (IQA) can quantify the degree of visual distortion in images and guide the imaging system to improve image quality, thus raising the quality of pathological microscopic images. Current IQA methods are not ideal for pathological microscopic images because of the specificity of such images. In this paper, we present a deep learning-based blind image quality assessment model with a saliency block and a patch block for pathological microscopic images. The saliency block and patch block handle local and global distortions, respectively. To better capture the areas of interest of pathologists when viewing pathological images, the saliency block is fine-tuned on eye-movement data of pathologists. The patch block captures a large amount of global information strongly related to image quality via the interaction between image patches from different positions. The performance of the developed model is validated on our self-built Pathological Microscopic Image Quality Database under Screen and Immersion Scenarios (PMIQD-SIS) and cross-validated on five public datasets. The results of ablation experiments demonstrate the contribution of the added blocks. The dataset and the corresponding code are publicly available at: https://github.com/mikugyf/PMIQD-SIS.


Subjects
Immersion; Microscopy; Databases, Factual
5.
Article in English | MEDLINE | ID: mdl-37030730

ABSTRACT

With the popularity of the mobile Internet, audio and video (A/V) have become the main media through which people entertain themselves and socialize daily. However, to reduce the cost of media storage and transmission, A/V signals are compressed by service providers before they are transmitted to end-users, which inevitably causes distortions in the A/V signals and degrades the end-user's Quality of Experience (QoE). This motivates us to study objective audio-visual quality assessment (AVQA). In the field of AVQA, most previous works focus only on single-mode audio or visual signals, ignoring that the perceptual quality of users depends on both the audio and video signals. Therefore, we propose an objective AVQA architecture for multi-mode signals based on attentional neural networks. Specifically, we first utilize an attention prediction model to extract the salient regions of video frames. Then, a pre-trained convolutional neural network is used to extract short-time features of the salient regions and the corresponding audio signals. Next, the short-time features are fed into Gated Recurrent Unit (GRU) networks to model the temporal relationship between adjacent frames. Finally, fully connected layers are utilized to fuse the temporally related features of the A/V signals modeled by the GRU networks into a final quality score. The proposed architecture is flexible and can be applied to both full-reference and no-reference AVQA. Experimental results on the LIVE-SJTU Database and UnB-AVC Database demonstrate that our model outperforms state-of-the-art AVQA methods. The code of the proposed method will be made publicly available to promote the development of the AVQA field.
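The pipeline described (pre-trained CNN short-time features fed into GRUs, then fully connected fusion) can be summarized as a small PyTorch module. This is a schematic sketch with made-up feature dimensions, not the published network.

    import torch
    import torch.nn as nn

    class AVQASketch(nn.Module):
        """Schematic A/V quality model: per-modality GRUs + fully connected fusion."""
        def __init__(self, v_dim=512, a_dim=128, hidden=64):
            super().__init__()
            self.v_gru = nn.GRU(v_dim, hidden, batch_first=True)
            self.a_gru = nn.GRU(a_dim, hidden, batch_first=True)
            self.fuse = nn.Sequential(nn.Linear(2 * hidden, 32), nn.ReLU(),
                                      nn.Linear(32, 1))

        def forward(self, v_feats, a_feats):
            # v_feats / a_feats: (batch, time, dim) short-time CNN features.
            _, v_h = self.v_gru(v_feats)
            _, a_h = self.a_gru(a_feats)
            return self.fuse(torch.cat([v_h[-1], a_h[-1]], dim=1)).squeeze(1)

    model = AVQASketch()
    score = model(torch.randn(2, 10, 512), torch.randn(2, 10, 128))
    print(score.shape)  # torch.Size([2]) -- one quality score per clip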

6.
IEEE Trans Cybern ; 53(6): 3651-3664, 2023 Jun.
Article in English | MEDLINE | ID: mdl-34847052

ABSTRACT

Existing no-reference (NR) image quality assessment (IQA) metrics are still not convincing for evaluating the quality of camera-captured images. To tackle this issue, in this article we establish a novel NR quality metric for reliably quantifying the quality of camera-captured images. Since image quality is perceived hierarchically, from low-level preliminary visual perception to high-level semantic comprehension in the human brain, our proposed metric characterizes image quality by exploiting both the low-level properties and the high-level semantics of the image. Specifically, we extract a series of low-level features to characterize fundamental image properties, including brightness, saturation, contrast, noisiness, sharpness, and naturalness, which are highly indicative of camera-captured image quality. Correspondingly, the high-level features are designed to characterize the semantics of the image. The low-level and high-level perceptual features play complementary roles in measuring image quality. To infer the image quality, we employ support vector regression (SVR) to map all the informative features to a single quality score. Thorough tests conducted on two standard camera-captured image databases demonstrate the effectiveness of the proposed quality metric in assessing image quality and its superiority over state-of-the-art NR quality metrics. The source code of the proposed metric for camera-captured images is released at https://github.com/YT2015?tab=repositories.
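A few of the low-level properties listed above (brightness, contrast, sharpness) are straightforward to compute, and mapping them to a score with SVR mirrors the regression stage of such metrics. The snippet below is a rough sketch with placeholder feature definitions and random labels, not the authors' feature set.

    import numpy as np
    from sklearn.svm import SVR

    def low_level_features(gray):
        """Crude brightness / contrast / sharpness descriptors for one grayscale image."""
        brightness = gray.mean()
        contrast = gray.std()
        gx, gy = np.gradient(gray)
        sharpness = np.sqrt(gx ** 2 + gy ** 2).mean()
        return np.array([brightness, contrast, sharpness])

    rng = np.random.default_rng(1)
    images = rng.random((50, 64, 64))                  # stand-in grayscale images
    X = np.stack([low_level_features(im) for im in images])
    y = rng.uniform(1, 5, 50)                          # stand-in MOS labels

    svr = SVR(kernel="rbf").fit(X, y)                  # map features to a single quality score
    print(svr.predict(X[:3]))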

7.
IEEE Trans Image Process ; 31: 7206-7221, 2022.
Article in English | MEDLINE | ID: mdl-36367913

ABSTRACT

With the development of multimedia technology, Augmented Reality (AR) has become a promising next-generation mobile platform. The primary value of AR is to promote the fusion of digital content and real-world environments; however, studies on how this fusion influences the Quality of Experience (QoE) of these two components are lacking. To achieve better QoE for AR, whose two layers influence each other, it is important to first evaluate its perceptual quality. In this paper, we consider AR technology as the superimposition of virtual scenes and real scenes, and introduce visual confusion as its basic theory. A more general problem is first proposed: evaluating the perceptual quality of superimposed images, i.e., confusing image quality assessment. A ConFusing Image Quality Assessment (CFIQA) database is established, which includes 600 reference images and 300 distorted images generated by mixing reference images in pairs. Then a subjective quality perception experiment is conducted towards attaining a better understanding of how humans perceive confusing images. Based on the CFIQA database, several benchmark models and a specifically designed CFIQA model are proposed for solving this problem. Experimental results show that the proposed CFIQA model achieves state-of-the-art performance compared to the other benchmark models. Moreover, an extended ARIQA study is further conducted based on the CFIQA study. We establish an ARIQA database to better simulate real AR application scenarios, which contains 20 AR reference images, 20 background (BG) reference images, and 560 distorted images generated from the AR and BG references, as well as the correspondingly collected subjective quality ratings. Three types of full-reference (FR) IQA benchmark variants are designed to study whether visual confusion should be considered when designing the corresponding IQA algorithms. An ARIQA metric is finally proposed for better evaluating the perceptual quality of AR images. Experimental results demonstrate the good generalization ability of the CFIQA model and the state-of-the-art performance of the ARIQA model. The databases, benchmark models, and proposed metrics are available at: https://github.com/DuanHuiyu/ARIQA.
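The "confusing image" stimuli are produced by mixing two reference images; a simple alpha superimposition such as the one below conveys the idea. The 50/50 mixing ratio and the absence of any pre-processing are assumptions for illustration, not the CFIQA generation protocol.

    import numpy as np

    def superimpose(img_a, img_b, alpha=0.5):
        """Blend two same-sized images to mimic AR-style visual confusion."""
        assert img_a.shape == img_b.shape
        mixed = alpha * img_a.astype(np.float32) + (1.0 - alpha) * img_b.astype(np.float32)
        return np.clip(mixed, 0, 255).astype(np.uint8)

    scene = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)    # "real" layer
    overlay = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # "virtual" layer
    confusing = superimpose(overlay, scene, alpha=0.5)
    print(confusing.shape, confusing.dtype)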


Subjects
Augmented Reality; Humans; Algorithms; Databases, Factual
9.
IEEE Trans Cybern ; 52(7): 7094-7106, 2022 Jul.
Article in English | MEDLINE | ID: mdl-33315574

ABSTRACT

In the era of multimedia and the Internet, the quick response (QR) code helps people move quickly from offline information to online content. However, the QR code is often of limited use in many scenarios because of its random and dull appearance. Therefore, this article proposes a novel approach to embed hyperlinks into common images, making the hyperlinks invisible to human eyes but detectable by mobile devices equipped with a camera. Our approach is an end-to-end neural network with an encoder to hide messages and a decoder to extract them. To keep the hidden message resilient to cameras, we build a distortion network between the encoder and the decoder to augment the encoded images. The distortion network uses differentiable 3-D rendering operations, which can simulate the distortion introduced by camera imaging in both printing and display scenarios. To maintain the visual appeal of the image carrying the hyperlink, a loss function conforming to the human visual system (HVS) is used to supervise the training of the encoder. Experimental results show that the proposed approach outperforms previous work in both robustness and quality. Based on the proposed approach, many applications become possible, for example, "image hyperlinks" for advertisements on TV, websites, or posters, and "invisible watermarks" for copyright protection of digital resources or product packaging.
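The encode → distort → decode training loop can be pictured with a skeletal PyTorch example. The tiny networks, the additive-noise "distortion", the 8-bit message, and the loss weights below are all placeholders standing in for the actual architecture, the differentiable 3-D rendering distortions, and the HVS-based loss.

    import torch
    import torch.nn as nn

    MSG_BITS = 8
    enc = nn.Sequential(nn.Conv2d(3 + MSG_BITS, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 3, 3, padding=1))            # cover image + message -> encoded image
    dec = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, MSG_BITS))

    def distort(img):
        # Placeholder for the differentiable print/display + camera distortion network.
        return img + 0.05 * torch.randn_like(img)

    cover = torch.rand(4, 3, 64, 64)
    bits = torch.randint(0, 2, (4, MSG_BITS)).float()
    bit_planes = bits[:, :, None, None].expand(-1, -1, 64, 64)     # broadcast message spatially
    encoded = enc(torch.cat([cover, bit_planes], dim=1))
    logits = dec(distort(encoded))
    loss = nn.functional.mse_loss(encoded, cover) + \
           nn.functional.binary_cross_entropy_with_logits(logits, bits)
    print(loss.item())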


Subjects
Neural Networks, Computer; Humans
10.
IEEE Trans Neural Netw Learn Syst ; 33(3): 1051-1065, 2022 03.
Article in English | MEDLINE | ID: mdl-33296311

ABSTRACT

Deep neural networks are vulnerable to adversarial attacks. More importantly, some adversarial examples crafted against an ensemble of source models transfer to other target models and thus pose a security threat to black-box applications (where attackers have no access to the target models). Current transfer-based ensemble attacks, however, consider only a limited number of source models when crafting an adversarial example and thus achieve poor transferability. Moreover, recent query-based black-box attacks, which require numerous queries to the target model, not only arouse suspicion at the target model but also incur high query costs. In this article, we propose a novel transfer-based black-box attack, dubbed serial-minigroup-ensemble-attack (SMGEA). Concretely, SMGEA first divides a large number of pretrained white-box source models into several "minigroups." For each minigroup, we design three new ensemble strategies to improve the intragroup transferability. Moreover, we propose a new algorithm that recursively accumulates the "long-term" gradient memories of the previous minigroup into the subsequent minigroup. In this way, the learned adversarial information can be preserved and the intergroup transferability improved. Experiments indicate that SMGEA not only achieves state-of-the-art black-box attack ability over several datasets but also deceives two online black-box saliency prediction systems in the real world, i.e., DeepGaze-II (https://deepgaze.bethgelab.org/) and SALICON (http://salicon.net/demo/). Finally, we contribute a new code repository to promote research on adversarial attack and defense over ubiquitous pixel-to-pixel computer vision tasks. We share our code, together with the pretrained substitute model zoo, at https://github.com/CZHQuality/AAA-Pix2pix.
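The core idea of splitting source models into minigroups and carrying gradient "memory" across them can be sketched as a momentum-style update. The toy linear models, step size, decay factor, and iteration counts below are illustrative assumptions, not the SMGEA hyperparameters or ensemble strategies.

    import torch
    import torch.nn as nn

    # Toy white-box source models split into two "minigroups".
    models = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)) for _ in range(4)]
    minigroups = [models[:2], models[2:]]

    x = torch.rand(1, 3, 32, 32)
    label = torch.tensor([3])
    adv, memory = x.clone(), torch.zeros_like(x)
    eps, step, decay = 8 / 255, 2 / 255, 0.9

    for group in minigroups:
        for _ in range(5):                                   # iterations within one minigroup
            adv.requires_grad_(True)
            loss = sum(nn.functional.cross_entropy(m(adv), label) for m in group)
            grad, = torch.autograd.grad(loss, adv)
            memory = decay * memory + grad / grad.abs().mean()   # accumulate long-term gradient memory
            adv = torch.min(torch.max(adv.detach() + step * memory.sign(), x - eps), x + eps)
            adv = adv.clamp(0, 1)                            # keep the perturbed image valid

    print((adv - x).abs().max().item())                      # bounded by eps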


Assuntos
Algoritmos , Redes Neurais de Computação , Aprendizagem , Memória de Longo Prazo
11.
IEEE Trans Image Process ; 30: 517-531, 2021.
Article in English | MEDLINE | ID: mdl-33201815

ABSTRACT

Virtual viewpoint synthesis is an essential process for many immersive applications, including Free-viewpoint TV (FTV). A widely used technique for viewpoint synthesis is the Depth-Image-Based Rendering (DIBR) technique. However, this technique may introduce challenging non-uniform spatial-temporal structure-related distortions. Most existing state-of-the-art quality metrics fail to handle these distortions, especially the temporal structure inconsistencies observed during switches between viewpoints. To tackle this problem, an elastic metric and multi-scale trajectory based video quality metric (EM-VQM) is proposed in this paper. Dense motion trajectories are first used as a proxy for selecting temporally sensitive regions, where local geometric distortions might significantly diminish the perceived quality. Afterwards, the amount of temporal structure inconsistency and unsmooth viewpoint transitions is quantified by calculating 1) the amount of motion trajectory deformation with the elastic metric and 2) the spatial-temporal structural dissimilarity. According to comprehensive experimental results on two FTV video datasets, the proposed metric significantly outperforms the state-of-the-art metrics designed for free-viewpoint videos and achieves gains of 12.86% and 16.75% in median Pearson linear correlation coefficient values on the two datasets, respectively, compared to the best competing metric.

12.
IEEE Trans Image Process ; 30: 277-292, 2021.
Article in English | MEDLINE | ID: mdl-33180725

ABSTRACT

Video frame interpolation aims to improve users' viewing experience by generating high-frame-rate videos from low-frame-rate ones. Existing approaches typically focus on synthesizing intermediate frames from high-quality reference images. However, the captured reference frames may suffer from inevitable spatial degradations such as motion blur, sensor noise, etc. Few studies have approached the joint video enhancement problem, namely synthesizing high-frame-rate and high-quality results from low-frame-rate degraded inputs. In this paper, we propose a unified optimization framework for video frame interpolation with spatial degradations. Specifically, we develop a frame interpolation module with a pyramid structure to cyclically synthesize high-quality intermediate frames. The pyramid module features an adjustable spatial receptive field and temporal scope, thus providing controllable computational complexity and restoration ability. In addition, we propose an inter-pyramid recurrent module that connects sequential models to exploit the temporal relationship. The pyramid module integrates the recurrent module and can therefore iteratively synthesize temporally smooth results. Because the pyramid modules share weights across iterations, this does not expand the model's parameter size. Our model can be generalized to several applications, such as up-converting the frame rate of videos with motion blur, reducing compression artifacts, and jointly super-resolving low-resolution videos. Extensive experimental results demonstrate that our method performs favorably against state-of-the-art methods on various video frame interpolation and enhancement tasks.

13.
Article in English | MEDLINE | ID: mdl-32324554

ABSTRACT

The topics of visual and audio quality assessment (QA) have been widely researched for decades, yet nearly all of this prior work has focused only on single-mode visual or audio signals. However, visual signals are rarely presented without accompanying audio, including in high-bandwidth video streaming applications. Moreover, the distortions that may separately (or conjointly) afflict the visual and audio signals collectively shape the user-perceived quality of experience (QoE). This motivated us to conduct a subjective study of audio and video (A/V) quality, which we then used to compare and develop A/V quality measurement models and algorithms. The new LIVE-SJTU Audio and Video Quality Assessment (A/V-QA) Database includes 336 A/V sequences that were generated from 14 original source contents by applying 24 different A/V distortion combinations to them. We then conducted a subjective A/V quality perception study on the database towards attaining a better understanding of how humans perceive the overall combined quality of A/V signals. We also designed four different families of objective A/V quality prediction models using a multimodal fusion strategy. The different types of A/V quality models differ both in the unimodal audio and video quality prediction models that perform the direct signal measurements and in the way the two perceptual signal modes are combined. The objective models are built using both existing state-of-the-art audio and video quality prediction models and some new prediction models, as well as quality-predictive features delivered by a deep neural network. The methods of fusing audio and video quality predictions that are considered include simple product combinations as well as learned mappings. Using the new subjective A/V database as a tool, we validated and tested all of the objective A/V quality prediction models. We will make the database publicly available to facilitate further research.
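The "simple product combination" fusion mentioned above reduces to a weighted product of unimodal predictions, and a learned mapping can be as small as a regression on the two scores. Both are sketched below with arbitrary example numbers and synthetic labels; the weights and the 0-1 score scale are assumptions.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def product_fusion(audio_q, video_q, w=0.5):
        # Weighted-product combination of unimodal quality predictions (0-1 scale assumed).
        return (audio_q ** w) * (video_q ** (1.0 - w))

    print(product_fusion(0.8, 0.6, w=0.4))

    # Learned mapping: regress subjective A/V scores from the two unimodal predictions.
    rng = np.random.default_rng(0)
    audio = rng.uniform(0, 1, 100)
    video = rng.uniform(0, 1, 100)
    mos = 0.3 * audio + 0.7 * video + rng.normal(0, 0.05, 100)   # synthetic ground truth
    reg = LinearRegression().fit(np.column_stack([audio, video]), mos)
    print(reg.predict([[0.8, 0.6]]))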

14.
Article in English | MEDLINE | ID: mdl-31976897

ABSTRACT

Owing to the recorded light ray distributions, the light field contains much richer information and enables a number of promising applications, and it has become increasingly popular. To facilitate these applications, many light field processing techniques have been proposed recently. These operations also introduce visual quality loss, so a light field quality metric is needed to quantify it. To reduce processing complexity and resource consumption, light fields are generally sparsely sampled, compressed, and finally reconstructed and displayed to the users. We consider the distortions introduced in this typical light field processing chain and propose a full-reference light field quality metric. Specifically, we measure light field quality from three aspects: global spatial quality based on view structure matching, local spatial quality based on near-edge mean square error, and angular quality based on multi-view quality analysis. These three aspects capture the most common distortions introduced in light field processing, including global distortions such as blur and blocking, local geometric distortions such as ghosting and stretching, and angular distortions such as flickering and sampling. Experimental results show that the proposed method can estimate light field quality accurately and that it outperforms state-of-the-art quality metrics that could be applied to light fields.
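The metric aggregates three component scores (global spatial, local spatial, angular). The sketch below shows such an aggregation with crude placeholder component functions and an assumed weighted sum; the paper's actual component definitions and pooling are more elaborate.

    import numpy as np

    def global_spatial_quality(ref_views, dist_views):
        # Placeholder for view-structure matching: correlation of the mean views.
        a, b = ref_views.mean(axis=0).ravel(), dist_views.mean(axis=0).ravel()
        return float(np.corrcoef(a, b)[0, 1])

    def local_spatial_quality(ref_views, dist_views):
        # Placeholder for near-edge MSE, mapped to a similarity in (0, 1].
        mse = np.mean((ref_views - dist_views) ** 2)
        return 1.0 / (1.0 + mse)

    def angular_quality(ref_views, dist_views):
        # Placeholder for multi-view analysis: consistency of view-to-view differences.
        d_ref = np.diff(ref_views, axis=0).mean()
        d_dist = np.diff(dist_views, axis=0).mean()
        return 1.0 / (1.0 + abs(d_ref - d_dist))

    def light_field_quality(ref_views, dist_views, weights=(0.4, 0.4, 0.2)):
        parts = (global_spatial_quality(ref_views, dist_views),
                 local_spatial_quality(ref_views, dist_views),
                 angular_quality(ref_views, dist_views))
        return float(np.dot(weights, parts))

    ref = np.random.rand(9, 64, 64)                    # 9 sub-aperture views
    dist = ref + 0.02 * np.random.randn(9, 64, 64)     # mildly distorted copy
    print(light_field_quality(ref, dist))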

15.
Article in English | MEDLINE | ID: mdl-31976898

ABSTRACT

Audio information has been overlooked by most current visual attention prediction studies. However, sound can influence visual attention, and this influence has been widely investigated and demonstrated by many psychological studies. In this paper, we propose a novel multi-modal saliency (MMS) model for videos containing scenes with high audio-visual correspondence. In such scenes, humans tend to be attracted by the sound sources, and it is also possible to localize the sound sources via cross-modal analysis. Specifically, we first detect the spatial and temporal saliency maps from the visual modality by using a novel free energy principle. Then we propose to detect the audio saliency map from both the audio and visual modalities by localizing the moving, sounding objects using cross-modal kernel canonical correlation analysis, which is the first of its kind in the literature. Finally, we propose a new two-stage adaptive audio-visual saliency fusion method to integrate the spatial, temporal, and audio saliency maps into our audio-visual saliency map. The proposed MMS model captures the influence of audio, which is not considered in the latest deep learning based saliency models. To take advantage of both deep saliency modeling and audio-visual saliency modeling, we propose to combine deep saliency models and the MMS model via late fusion, and we find that an average performance gain of 5% is obtained. Experimental results on audio-visual attention databases show that the introduced models incorporating audio cues have significant superiority over state-of-the-art image and video saliency models that utilize a single visual modality.
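The final step integrates spatial, temporal, and audio saliency maps. The snippet below shows one plausible adaptive weighting scheme for such a fusion; the actual two-stage rule in the paper is more sophisticated, so treat the weights and the correspondence term purely as assumptions for illustration.

    import numpy as np

    def fuse_saliency(spatial, temporal, audio, av_correspondence=0.8):
        """Adaptive fusion sketch: visual maps first, then audio weighted by how
        strongly the scene exhibits audio-visual correspondence (0..1)."""
        visual = 0.5 * spatial + 0.5 * temporal                                        # stage 1
        fused = (1 - 0.5 * av_correspondence) * visual + 0.5 * av_correspondence * audio  # stage 2
        return fused / (fused.max() + 1e-8)                                            # normalize to [0, 1]

    h, w = 60, 80
    maps = [np.random.rand(h, w) for _ in range(3)]    # stand-in spatial, temporal, audio maps
    sal = fuse_saliency(*maps, av_correspondence=0.9)
    print(sal.shape, sal.max())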

16.
J Oral Maxillofac Surg ; 78(4): 662.e1-662.e13, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31857063

ABSTRACT

PURPOSE: The aim of the present study was to redetermine the position of the key points (skeletal marker points) in damaged female and male jaws to improve the accuracy of jaw reconstruction. MATERIALS AND METHODS: To develop a personalized jaw reconstruction guidance program for each patient, we first performed 3 statistical analyses to compare gender differences in the jaw. Next, we proposed and compared 3 methods for restoring the key skeletal marker points of a damaged jaw according to our statistics. RESULTS: We collected 111 sets of computed tomography data of the jaw from normal people as experimental material. Our statistical analyses showed that gender differences are present in the shape of the jaw. In addition, some key angles and distances of the jaw followed a Gaussian distribution. The reconstruction results showed that our methods yield better results than the widely used method. CONCLUSIONS: To reduce errors, gender differences should be considered when designing a reconstruction approach for the jaw. In addition, our methods can improve the accuracy of jaw reconstruction.
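The claim that key angles and distances follow a Gaussian distribution is the kind of statement a normality test makes concrete, and the gender comparison suggests a two-sample test. The example below uses simulated measurements and invented parameters, not the authors' CT data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    angle_deg = rng.normal(loc=122.0, scale=6.0, size=111)   # simulated jaw angle, n = 111 as in the study

    # Shapiro-Wilk: a non-significant p-value is consistent with a Gaussian fit.
    stat, p = stats.shapiro(angle_deg)
    print(f"W = {stat:.3f}, p = {p:.3f}")

    # Compare male vs. female subgroups (labels simulated here) with a t-test.
    is_male = rng.random(111) < 0.5
    t, p_sex = stats.ttest_ind(angle_deg[is_male], angle_deg[~is_male])
    print(f"t = {t:.3f}, p = {p_sex:.3f}")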


Subjects
Jaw; Tomography, X-Ray Computed; Female; Humans; Male
17.
Biomed Eng Online ; 18(1): 111, 2019 Nov 15.
Article in English | MEDLINE | ID: mdl-31729983

ABSTRACT

BACKGROUND: Head-mounted displays (HMDs) and virtual reality (VR) have been used frequently in recent years, and a user's experience and computational efficiency can be assessed by mounting eye-trackers. However, in addition to visually induced motion sickness (VIMS), eye fatigue has increasingly emerged during and after the viewing experience, highlighting the necessity of quantitative assessment of these detrimental effects. As no measurement method for the eye fatigue caused by HMDs has been widely accepted, we measured parameters related to optometry tests. We propose a novel computational approach for estimating eye fatigue by providing several verifiable models. RESULTS: We implemented three classifications and two regressions to investigate different feature sets, which led us to present two valid assessment models for eye fatigue that employ blinking features and eye-movement features, with indicators from the optometry tests as ground truth. Three graded results and one continuous result were provided by the respective models, making the overall results repeatable and comparable. CONCLUSION: We showed the differences between VIMS and eye fatigue, and we also presented a new scheme to assess the eye fatigue of HMD users by analyzing eye-tracker parameters.
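A graded assessment model built from blinking and eye-movement features can be illustrated with a generic classifier. The feature names, fatigue grades, and classifier choice below are invented for the example and are not the study's actual variables or models.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    n = 120
    # Hypothetical per-session features: blink rate, blink duration, saccade amplitude, fixation duration.
    X = np.column_stack([rng.normal(15, 5, n), rng.normal(0.3, 0.1, n),
                         rng.normal(4.0, 1.5, n), rng.normal(0.25, 0.08, n)])
    y = rng.integers(0, 3, n)        # three fatigue grades (0 = none, 1 = mild, 2 = severe)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())   # chance-level here, since the labels are random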


Subjects
Asthenopia/diagnosis; Eye Movements; Head; Adult; Asthenopia/physiopathology; Female; Humans; Male; Middle Aged; Young Adult
18.
Article in English | MEDLINE | ID: mdl-31613763

ABSTRACT

Data size is the bottleneck for developing deep saliency models, because collecting eye-movement data is very time-consuming and expensive. Most current studies on human attention and saliency modeling have used high-quality stereotyped stimuli. In the real world, however, captured images undergo various types of transformations. Can we use these transformations to augment existing saliency datasets? Here, we first create a novel saliency dataset including fixations of 10 observers over 1900 images degraded by 19 types of transformations. Second, by analyzing eye movements, we find that observers look at different locations over transformed versus original images. Third, we utilize the new data over transformed images, called data augmentation transformation (DAT), to train deep saliency models. We find that label-preserving DATs with negligible impact on human gaze boost saliency prediction, whereas some other DATs that severely impact human gaze degrade the performance. These label-preserving valid augmentation transformations provide a solution for enlarging existing saliency datasets. Finally, we introduce a novel saliency model based on generative adversarial networks (dubbed GazeGAN). A modified U-Net is utilized as the generator of the GazeGAN, which combines the classic "skip connection" with a novel "center-surround connection" (CSC) module. Our proposed CSC module mitigates trivial artifacts while emphasizing semantically salient regions, and increases model nonlinearity, thus demonstrating better robustness against transformations. Extensive experiments and comparisons indicate that GazeGAN achieves state-of-the-art performance over multiple datasets. We also provide a comprehensive comparison of 22 saliency models on various transformed scenes, which contributes a new robustness benchmark to the saliency community. Our code and dataset are available at.
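A "center-surround connection" can be read as contrasting a feature map against a blurred (surround) version of itself before it enters the skip path. The PyTorch module below is a speculative reading of that idea for illustration, not the released GazeGAN code; the kernel size and mixing layer are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CenterSurroundConnection(nn.Module):
        """Skip connection that emphasizes center-surround contrast in feature maps."""
        def __init__(self, channels, surround_kernel=7):
            super().__init__()
            self.kernel = surround_kernel
            self.pad = surround_kernel // 2
            self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1)

        def forward(self, x):
            surround = F.avg_pool2d(x, self.kernel, stride=1, padding=self.pad)
            contrast = x - surround                       # center minus surround
            return self.mix(torch.cat([x, contrast], dim=1))

    csc = CenterSurroundConnection(channels=32)
    print(csc(torch.randn(1, 32, 64, 64)).shape)          # torch.Size([1, 32, 64, 64])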

19.
J Oral Maxillofac Surg ; 77(3): 664.e1-664.e16, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30598300

ABSTRACT

PURPOSE: For severe mandibular or maxillary defects across the midline, doctors often lack data on the shape of the jaws when designing virtual surgery. This study sought to reconstruct the personalized 3-dimensional shape of the jaw, particularly when the jaw is severely damaged. MATERIALS AND METHODS: Two linear regression methods, denoted method I and method II, were used to reconstruct key points of a severely damaged maxilla or mandible based on the remaining jaw. The predictor variable was the position of the key points. Outcome variables were the position of the key points and the error between the predicted and actual positions. Another variable was the average error. In the final data analysis, the effect of each method was judged based on the mean error and the error probability distribution. RESULTS: Computed tomographic data of jaws from 44 normal adults in East China were collected over 2 years by the Shanghai Jiao Tong University School of Medicine (Shanghai, China). Sixteen key points were extracted for each jaw. Method I showed that 2-dimensional regression yields the best overall result and that the position error of most points can be reduced to less than 5 mm. The result of method II was similar to that of method I but showed cumulative errors. CONCLUSIONS: Linear regression can be used to locate the key points. Two-dimensional regression has the best effect and can be used as a reference to develop a surgical plan and perform surgery.
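Predicting missing key-point coordinates from the remaining ones is, at its core, a multi-output linear regression. The sketch below mirrors that setup with simulated coordinates (44 jaws, 16 points, 3 coordinates each); the split into "known" and "missing" points and the synthetic data are assumptions, not the CT-derived measurements.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(3)
    n_jaws, n_points = 44, 16
    coords = rng.normal(size=(n_jaws, n_points, 3)) + np.arange(n_points)[None, :, None]

    known_idx = list(range(12))        # points still present on the damaged jaw
    missing_idx = list(range(12, 16))  # points to be restored

    X = coords[:, known_idx, :].reshape(n_jaws, -1)
    Y = coords[:, missing_idx, :].reshape(n_jaws, -1)

    reg = LinearRegression().fit(X[:40], Y[:40])          # fit on 40 jaws
    pred = reg.predict(X[40:])                            # predict for the held-out jaws
    errors = np.linalg.norm(pred.reshape(-1, len(missing_idx), 3) -
                            coords[40:, missing_idx, :], axis=2)
    print(errors.mean(), "mean point error (arbitrary units)")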


Subjects
Mandible; Maxilla; Adult; Cephalometry; China; Humans; Linear Models
20.
IEEE Trans Vis Comput Graph ; 24(10): 2689-2701, 2018 10.
Article in English | MEDLINE | ID: mdl-29990169

ABSTRACT

With the quick development and popularity of computers, computer-generated signals have rapidly permeated our daily lives. The screen content image is a typical example: unlike natural scene images, which have been deeply explored, it also includes graphic and textual components, and it has thus posed novel challenges to current research on topics such as compression, transmission, display, and quality assessment. In this paper, we focus our attention on evaluating the quality of screen content images based on the analysis of structural variation, which is caused by compression, transmission, and other processes. We classify structures into global and local structures, which correspond to the basic and detailed perceptions of humans, respectively. The characteristics of graphic and textual images, e.g., limited color variation, and the human visual system are taken into consideration. Based on these concerns, we systematically combine the measurements of variation in the above-stated two types of structures to yield the final quality estimate of screen content images. Thorough experiments are conducted on three screen content image quality databases, in which the images are corrupted during capturing, compression, transmission, etc. The results demonstrate the superiority of our proposed quality model compared with state-of-the-art relevant methods.
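The final estimate combines a global-structure term with a local-structure term. The toy functions below (a gradient-histogram comparison for the global part and a windowed gradient-energy comparison for the local part) merely stand in for the paper's actual measurements, and the equal weighting is an assumption.

    import numpy as np

    def gradient_magnitude(img):
        gx, gy = np.gradient(img.astype(np.float64))
        return np.sqrt(gx ** 2 + gy ** 2)

    def global_structure_similarity(ref, dist, bins=32):
        # Placeholder: compare gradient-magnitude histograms over the whole image.
        h_r, _ = np.histogram(gradient_magnitude(ref), bins=bins, range=(0, 255), density=True)
        h_d, _ = np.histogram(gradient_magnitude(dist), bins=bins, range=(0, 255), density=True)
        return 1.0 / (1.0 + np.abs(h_r - h_d).sum())

    def local_structure_similarity(ref, dist, win=16):
        # Placeholder: mean similarity of gradient energy over non-overlapping windows.
        g_r, g_d = gradient_magnitude(ref), gradient_magnitude(dist)
        sims = []
        for i in range(0, ref.shape[0] - win + 1, win):
            for j in range(0, ref.shape[1] - win + 1, win):
                a = g_r[i:i + win, j:j + win].mean()
                b = g_d[i:i + win, j:j + win].mean()
                sims.append((2 * a * b + 1e-6) / (a ** 2 + b ** 2 + 1e-6))
        return float(np.mean(sims))

    def screen_content_quality(ref, dist, w_global=0.5):
        return w_global * global_structure_similarity(ref, dist) + \
               (1 - w_global) * local_structure_similarity(ref, dist)

    ref = np.random.randint(0, 256, (128, 128))
    dist = np.clip(ref + np.random.randint(-20, 20, (128, 128)), 0, 255)
    print(screen_content_quality(ref, dist))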
