Results 1 - 4 of 4
1.
Sensors (Basel); 21(4), 2021 Feb 22.
Article in English | MEDLINE | ID: mdl-33671615

ABSTRACT

In this paper, a novel device identification method is proposed to improve the security of Visible Light Communication (VLC) in 5G networks. This method extracts the fingerprints of Light-Emitting Diodes (LEDs) to identify the devices accessing the 5G network. The extraction and identification mechanisms have been investigated from the theoretical perspective as well as verified experimentally. Moreover, a demonstration in a practical indoor VLC-based 5G network has been carried out to evaluate the feasibility and accuracy of this approach. The fingerprints of four identical white LEDs were extracted successfully from the received 5G NR (New Radio) signals. To perform identification, four types of machine-learning-based classifiers were employed and the resulting accuracy was up to 97.1%.
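The identification step described above can be illustrated with a small sketch. The fingerprint features, LED names, and the nearest-centroid classifier below are all illustrative assumptions; the paper's actual fingerprint-extraction mechanism and its four machine-learning classifiers are not reproduced here.

```python
import random

random.seed(0)

# Hypothetical per-LED fingerprints (e.g. harmonic-distortion ratios);
# the values and names are invented for illustration.
leds = {
    "LED-A": [0.10, 0.31, 0.05],
    "LED-B": [0.12, 0.28, 0.07],
    "LED-C": [0.09, 0.35, 0.04],
    "LED-D": [0.14, 0.30, 0.06],
}

def make_samples(centre, n, noise=0.01):
    """Draw noisy feature vectors around a device's true fingerprint."""
    return [[c + random.gauss(0, noise) for c in centre] for _ in range(n)]

train = {name: make_samples(c, 50) for name, c in leds.items()}

def centroid(samples):
    dims = len(samples[0])
    return [sum(s[i] for s in samples) / len(samples) for i in range(dims)]

# The stored "fingerprint" of each LED is the centroid of its training features.
fingerprints = {name: centroid(s) for name, s in train.items()}

def identify(feature):
    """Return the LED whose stored fingerprint is closest to the feature."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(fingerprints, key=lambda n: dist2(fingerprints[n], feature))

probe = make_samples(leds["LED-C"], 1)[0]   # a fresh signal from LED-C
assert identify(probe) == "LED-C"
```

Nearest-centroid is only a stand-in for the classifiers evaluated in the paper; the point of the sketch is the workflow, in which each device's stored fingerprint is compared against features extracted from a newly received signal.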

2.
Sensors (Basel); 20(6), 2020 Mar 14.
Article in English | MEDLINE | ID: mdl-32183258

ABSTRACT

Wireless Capsule Endoscopy is a state-of-the-art technology for the diagnosis of gastrointestinal diseases. An endoscopic capsule camera produces a huge amount of data, which is impractical to store internally because of the capsule's power and size constraints. The data must therefore be transmitted wirelessly out of the human body for further processing, and should be compressed and transmitted in a power-efficient manner. In this paper, a new approach to the design and implementation of a low-complexity, multiplier-less compression algorithm is proposed. Statistical analysis of capsule endoscopy images improved the performance of traditional lossless techniques such as Huffman coding and DPCM coding. Furthermore, a Huffman implementation based on simple logic gates, without the use of memory tables, further increases the speed and reduces the power consumption of the proposed system. Analysis and comparison with existing state-of-the-art methods showed that the proposed method performs better.


Subjects
Capsule Endoscopy/methods; Gastrointestinal Diseases/diagnostic imaging; Image Processing, Computer-Assisted/methods; Wireless Technology/trends; Algorithms; Data Compression; Gastrointestinal Diseases/diagnosis; Humans
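The DPCM-plus-Huffman pipeline named in this abstract can be sketched at a high level on a toy one-dimensional "scanline". This is only a software model of the general technique; the paper's multiplier-less, gate-level, memory-table-free design is not captured here, and the pixel values are invented.

```python
import heapq
from collections import Counter

def dpcm_encode(pixels):
    # Each pixel is predicted by its left neighbour; only the
    # (typically small) residual is passed on to entropy coding.
    prev, residuals = 0, []
    for p in pixels:
        residuals.append(p - prev)
        prev = p
    return residuals

def dpcm_decode(residuals):
    prev, out = 0, []
    for r in residuals:
        prev += r
        out.append(prev)
    return out

def huffman_codes(symbols):
    # Standard Huffman tree built with a heap; each heap entry carries a
    # unique tiebreak index so the code dicts are never compared directly.
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): "0"}
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

scanline = [100, 101, 101, 102, 101, 100, 100, 99]
residuals = dpcm_encode(scanline)            # small values cluster near 0
codes = huffman_codes(residuals)
bitstream = "".join(codes[r] for r in residuals)
assert dpcm_decode(residuals) == scanline    # DPCM is lossless
assert len(bitstream) < 8 * len(scanline)    # fewer bits than raw 8-bit pixels
```

Because neighbouring endoscopy pixels are strongly correlated, the DPCM residuals concentrate around zero, which is exactly the skewed distribution Huffman coding exploits.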
3.
Heliyon; 5(12): e02778, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31867450

ABSTRACT

The human eyes and their surrounding features are capable of conveying an array of emotional and social information through expressions. Producing virtual human eyes that can communicate these complex mental states remains a challenging research topic in computer graphics (CG), as subtle inaccuracies can be the difference between realistic and uncanny. With the recent emergence of virtual customer service agents, the demand for expressive virtual eyes is increasing. One essential question that remains to be answered is: can virtual human eyes effectively transmit emotion? Through a combination of 3D scanning and manual hand-modelling techniques, we developed an efficient pipeline to realise a virtual model of the human eye area that displays real-world characteristics. From this model, eye-expression renders of the six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) were generated (Ekman et al., 1969). The perceptual quality of the model was evaluated by showing respondents from two age groups the six eye-expression renders and corresponding real-world photos; respondents then judged which of the six emotions best described each image. Our findings indicate a clear relationship between the recognition levels for photographic and virtual stimuli, and a significant level of emotional perception was found for the virtual eye expressions of sadness and anger. This research on human cognition and CG is a starting point for investigating the use of artificial human eye expressions as an effective research tool in the perceptual community.

4.
IEEE Trans Cybern; 46(4): 916-29, 2016 Apr.
Article in English | MEDLINE | ID: mdl-25910269

ABSTRACT

Automatic continuous prediction of affective states from naturalistic facial expressions is a challenging but important research topic in human-computer interaction. One of the main challenges is modeling the dynamics that characterize naturalistic expressions. In this paper, a novel two-stage automatic system is proposed to continuously predict affective dimension values from facial expression videos. In the first stage, traditional regression methods are used to classify each individual video frame, while in the second stage, a time-delay neural network (TDNN) is proposed to model the temporal relationships between consecutive predictions. The two-stage approach separates the emotional state dynamics modeling from an individual emotional state prediction step based on input features. In doing so, the temporal information used by the TDNN is not biased by the high variability between features of consecutive frames, and the network can more easily exploit the slowly changing dynamics between emotional states. The system was fully tested and evaluated on three different facial expression video datasets. Our experimental results demonstrate that the use of a two-stage approach combined with the TDNN, which takes previously classified frames into account, significantly improves the overall performance of continuous emotional state estimation in naturalistic facial expressions. The proposed approach won the affect recognition sub-challenge of the Third International Audio/Visual Emotion Recognition Challenge.


Subjects
Emotions/classification; Facial Expression; Neural Networks, Computer; Pattern Recognition, Automated/methods; Algorithms; Databases, Factual; Face/diagnostic imaging; Humans; Image Processing, Computer-Assisted
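The two-stage idea in this abstract can be sketched with a minimal stand-in: stage 1 emits a noisy per-frame prediction of an affective dimension, and stage 2 is a single linear time-delay unit (a degenerate TDNN) trained to map a window of consecutive stage-1 outputs to the smoothly varying target. The synthetic signal, window size, and training setup below are assumptions; the paper's features, datasets, and network architecture are not reproduced.

```python
import math
import random

random.seed(1)

# Synthetic ground truth: a slowly varying affective dimension,
# and a noisy frame-wise (stage-1) prediction of it.
T = 400
target = [math.sin(t / 25.0) for t in range(T)]
stage1 = [v + random.gauss(0, 0.3) for v in target]

WIN = 9                 # time-delay window of consecutive stage-1 outputs
w = [0.0] * WIN
b = 0.0
lr = 0.01

# Plain SGD on squared error: learn to map a window of noisy
# predictions to the current smooth target value.
for epoch in range(200):
    for t in range(WIN - 1, T):
        window = stage1[t - WIN + 1 : t + 1]
        pred = sum(wi * xi for wi, xi in zip(w, window)) + b
        err = pred - target[t]
        for i in range(WIN):
            w[i] -= lr * err * window[i]
        b -= lr * err

def mse(pred, true):
    return sum((p - q) ** 2 for p, q in zip(pred, true)) / len(pred)

smoothed = [sum(wi * xi for wi, xi in zip(w, stage1[t - WIN + 1 : t + 1])) + b
            for t in range(WIN - 1, T)]
raw_mse = mse(stage1[WIN - 1:], target[WIN - 1:])
tdnn_mse = mse(smoothed, target[WIN - 1:])
assert tdnn_mse < raw_mse   # the temporal stage reduces frame-level noise
```

The stand-in shows why the two stages are separated: the temporal filter operates on stage-1 predictions, whose frame-to-frame variability is far lower than that of raw facial features, so the slow emotional dynamics are easy to recover.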