Results 1 - 9 of 9
1.
Front Robot AI ; 9: 1028271, 2022.
Article in English | MEDLINE | ID: mdl-36212613

ABSTRACT

Human action recognition has become an essential task in health care and other fields. Over the last decade, many authors have developed algorithms for human activity detection and recognition, exploiting high-performance computing devices to improve the quality and efficiency of their results. In real-time, practical human action recognition applications, however, these algorithms exceed the capacity of current computer systems once factors such as camera movement, complex scenes and occlusion are considered. One potential way to reduce the computational complexity of human action detection and recognition can be found in human visual perception, specifically in the process of selective visual attention. Inspired by this neural phenomenon, we propose, for the first time, a spiking neural P system for efficient feature extraction from human motion. We propose this neural structure as a pre-processing stage, since many studies have revealed that the human brain analyzes visual information in a sequence of operations, each applied to a specific location or locations. This specialized processing allows objects to be recognized in a simpler manner. To create a compact and high-speed spiking neural P system, we use its cutting-edge variants, such as rules on the synapses, communication on request and astrocyte-like control. Our results demonstrate that the proposed neural P system significantly increases the performance of low-complexity neural classifiers, to more than 97% accuracy in human action recognition.
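The paper's SN P system uses specialized rules (rules on synapses, communication on request, astrocyte-like control) that are not reproduced here; the general idea of spike-gated motion feature extraction can, however, be sketched with a toy spiking layer driven by frame differences. Everything below (function name, threshold, decay constant) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def spiking_motion_features(frames, threshold=0.5, decay=0.8):
    """Toy spiking layer: each pixel accumulates frame-difference energy
    into a leaky membrane potential and emits a binary spike when the
    potential crosses a threshold; fired units are reset to zero."""
    potential = np.zeros_like(frames[0], dtype=float)
    spike_maps = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        potential = decay * potential + np.abs(curr - prev)
        spikes = potential >= threshold   # fire where motion energy accumulated
        potential[spikes] = 0.0           # reset fired units
        spike_maps.append(spikes)
    return spike_maps

# Example: a bright blob moving one pixel to the right per frame
frames = [np.zeros((5, 5)) for _ in range(3)]
for t, f in enumerate(frames):
    f[2, t + 1] = 1.0
maps = spiking_motion_features(frames, threshold=0.5)
```

The spike maps are sparse and concentrate on moving regions, which is the property that makes such a stage attractive as a cheap front end for a classifier.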

2.
J Imaging ; 8(6)2022 Jun 09.
Article in English | MEDLINE | ID: mdl-35735965

ABSTRACT

The main purpose of this paper is to study the correlations between Image Aesthetic (IA) and Image Naturalness (IN) and to analyze the influence of IA and IN on Image Quality (IQ) in different contexts. The first contribution is a study of the potential relationships between IA and IN, which considers two sub-questions: first, whether IA and IN are indeed uncorrelated with each other; second, how IA and IN features influence Image Naturalness Assessment (INA) and Image Aesthetic Assessment (IAA), respectively. Second, although IQ is clearly related to IA and IN, the exact influence of IA and IN on IQ has not been evaluated, nor has the impact of context on those influences been clarified; the second contribution is therefore to investigate the influence of IA and IN on IQ in different contexts. The results of rigorous experiments show that, although there are moderate and weak correlations between IA and IN, they remain two distinct components of IQ. It also appears that viewers' perception of IQ is affected by some contextual factors, and that the influence of IA and IN on IQ depends on the considered context.
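The paper's correlation analysis between IA and IN scores can be sketched as below. The scores are hypothetical placeholders (the paper's actual data and rating scales are not reproduced), and both a Pearson and a rank-based Spearman coefficient are computed, since "moderate and weak correlations" can differ between the two:

```python
import numpy as np

# Hypothetical per-image aesthetic (IA) and naturalness (IN) scores
ia = np.array([3.1, 4.2, 2.8, 3.9, 4.5, 2.5, 3.3, 4.0])
in_ = np.array([4.0, 3.5, 4.2, 3.1, 3.8, 4.4, 3.9, 3.2])

# Pearson correlation: strength of linear association
r = np.corrcoef(ia, in_)[0, 1]

# Spearman correlation: rank-based, robust to monotone nonlinearity
rank = lambda x: np.argsort(np.argsort(x))
rho = np.corrcoef(rank(ia), rank(in_))[0, 1]
```

A coefficient near zero would support treating IA and IN as separate components of IQ; a strong coefficient would argue for a shared underlying factor.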

3.
Z Med Phys ; 31(3): 316-326, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33612389

ABSTRACT

PURPOSE: In this study, two intraocular lenses (spherical IOL SA60AT and aspherical IOL SN60WF) are examined in an eye model under conditions of misalignment (defocus, decentration and tilt). The lenses are rated using the contrast sensitivity function (CSF) based on Barten's physical model. The square root integral (SQRI) method is used as a quality criterion comparable to the subjective image quality assessment of the human eye. METHODS: The IOLs to be tested are decentered from 0 to 1mm and tilted from -5 to +5 degrees in the Navarro eye model (optimized for far-point 6m and pupil aperture 3mm). The defocus of the IOLs is ±0.1mm at the anterior chamber depth (ACD). The optical modulation transfer function (MTF) is simulated with a ray tracing program. The SQRI is calculated using this MTF and the Barten CSF model (for in-focus at apertures of 3 and 4.5mm and for defocus at 3mm). RESULTS: With increasing decentration, the spherical IOL shows a significantly smaller loss of quality for both apertures compared to the aspherical lens. With an aperture of 4.5mm, the image quality of the aspherical IOL is better for small decentration and tilt. The loss of quality of the spherical IOL increases with increasing tilt in both directions. In contrast, the image quality of the aspherical IOL is reduced under decentration for certain tilt values. For ACD-0.1mm, both IOLs behave similarly to the in-focus situation. For ACD+0.1mm, the influence of tilt without decentration is small for both IOLs. With increasing decentration, the quality loss of the aspherical IOL is similar to that in-focus and greater than that of the spherical lens. CONCLUSION: In general, under the same conditions, the spherical SA60AT shows a smaller loss of subjective image quality under lens alignment errors than the aspherical SN60WF, except for certain combinations of decentration and tilt identified in this study.
This study shows a way to evaluate IOLs based on the subjective visual performance of the eye.
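The SQRI criterion used above can be sketched numerically. Barten's square-root integral is J = (1/ln 2) ∫ √(MTF(u)·CSF(u)) du/u over spatial frequency u; the CSF below is a simplified illustrative form (Mannos-Sakrison), not Barten's full parametric model, and the two MTF curves are hypothetical stand-ins for a well-aligned and a decentered IOL:

```python
import numpy as np

def sqri(mtf, csf, freqs):
    """Square-root integral: J = (1/ln 2) * integral of sqrt(MTF*CSF) du/u,
    evaluated with the trapezoidal rule on a discrete frequency grid."""
    integrand = np.sqrt(mtf * csf) / freqs
    return np.sum((integrand[:-1] + integrand[1:]) / 2 * np.diff(freqs)) / np.log(2)

# Spatial frequencies in cycles/degree and a simplified CSF
freqs = np.linspace(0.5, 60.0, 200)
csf = 2.6 * (0.0192 + 0.114 * freqs) * np.exp(-(0.114 * freqs) ** 1.1)

# Hypothetical MTFs: a well-aligned IOL vs. a strongly decentered one
mtf_aligned = np.exp(-freqs / 40.0)
mtf_decentered = np.exp(-freqs / 15.0)

j_aligned = sqri(mtf_aligned, csf, freqs)
j_decentered = sqri(mtf_decentered, csf, freqs)
```

A misaligned lens depresses the MTF at mid and high frequencies, so its SQRI score drops, mirroring the subjective quality loss reported in the study.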


Subject(s)
Lens, Crystalline , Lenses, Intraocular , Humans , Models, Theoretical , Prosthesis Design , Visual Acuity
4.
J Vis Lang Comput ; 552019 Dec.
Article in English | MEDLINE | ID: mdl-31827316

ABSTRACT

We propose an image-space contrast enhancement method for color-encoded visualization. The contrast of an image is enhanced through a perceptually guided approach that exposes a single, intuitive parameter to the user: the virtual viewing distance. To this end, we analyze a multiscale contrast model of the input image and test the visibility of bandpass images at all scales for a given virtual viewing distance. By adapting the weights of the bandpass images with a threshold model of spatial vision, this image-based method enhances contrast to compensate for the contrast loss caused by viewing the image at that distance. Relevant features in the color image can be further emphasized by the user through overcompensation. The weights can be assigned with a simple band-based approach, or with an efficient pixel-based approach that reduces ringing artifacts. The method is efficient and can be integrated into any visualization tool, as it is a generic image-based post-processing technique. Using highly diverse datasets, we show the usefulness of perception compensation across a wide range of typical visualizations.
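The band-based variant of this idea can be sketched as follows: decompose the image into bandpass levels as differences of progressively blurred copies, reweight each band, and recombine. The blur scales and weights below are illustrative assumptions; the paper derives its weights from a threshold model of spatial vision, which is not reproduced here:

```python
import numpy as np

def blur(img, sigma):
    """Separable Gaussian blur with reflect padding (rows, then columns)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    def conv_rows(a):
        padded = np.pad(a, [(0, 0), (radius, radius)], mode="reflect")
        return np.array([np.convolve(row, kernel, mode="valid") for row in padded])
    return conv_rows(conv_rows(img).T).T

def enhance(img, weights, sigmas=(1.0, 2.0, 4.0)):
    """Multiscale band reweighting: each bandpass image is the difference of
    successive blur levels; a weight > 1 boosts that band (overcompensation),
    a weight of 1 leaves it unchanged."""
    levels = [img] + [blur(img, s) for s in sigmas]
    bands = [levels[i] - levels[i + 1] for i in range(len(sigmas))]
    base = levels[-1]
    return base + sum(w * b for w, b in zip(weights, bands))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
boosted = enhance(img, (2.0, 2.0, 2.0))
```

With all weights equal to 1 the decomposition telescopes back to the input exactly, which makes the scheme easy to validate before applying perceptual weights.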

5.
Cortex ; 99: 273-280, 2018 02.
Article in English | MEDLINE | ID: mdl-29306707

ABSTRACT

Numerous studies have reported that temporal order perception is biased in neurological patients with extinction and neglect. These individuals tend to perceive two objectively simultaneous stimuli as occurring asynchronously, with the ipsilesional item perceived as appearing prior to the contralesional item. Likewise, they report that two stimuli occurred simultaneously in situations where the contralesional item is presented substantially before the ipsilesional item. They therefore exhibit a biased point of subjective simultaneity (PSS). Here we demonstrate that the magnitude of this effect is modulated by the relative position of the stimuli with respect to the patient's trunk. This effect was only observed in patients who still exhibited neglect symptoms; neither the pathological bias nor its substantial modulation was observed in individuals who had recovered from neglect, in those who never had neglect, or in neurologically healthy controls. Crucially, our design kept the retinal and head-centered coordinates of these stimuli constant, providing a pure measure of the influence of egocentric trunk position. This finding highlights the influence of egocentric spatial position on the temporal symptoms observed in these individuals.


Subject(s)
Attention , Perceptual Disorders/physiopathology , Space Perception , Stroke/physiopathology , Time Perception , Acute Disease , Aged , Chronic Disease , Extinction, Psychological , Female , Humans , Judgment , Male , Middle Aged , Recovery of Function , Torso , Visual Perception
6.
Curr Biol ; 27(10): 1514-1520.e3, 2017 May 22.
Article in English | MEDLINE | ID: mdl-28479319

ABSTRACT

Interacting with the natural environment leads to complex stimulation of our senses. Here we focus on the estimation of visual speed, a critical source of information for the survival of many animal species as they monitor moving prey or approaching dangers. In mammals, and in particular in primates, speed information is thought to be represented by a set of channels sensitive to different spatial and temporal characteristics of the optic flow [1-5]. However, it is still largely unknown how the brain accurately infers the speed of complex natural scenes from this set of spatiotemporal channels [6-14]. As complex stimuli, we chose a set of well-controlled moving naturalistic textures called "compound motion clouds" (CMCs) [15, 16] that simultaneously activate multiple spatiotemporal channels. We found that CMC stimuli with the same physical speed are perceived as moving at different speeds depending on which channel combinations are activated. We developed a computational model demonstrating that the activity in a given channel is both boosted and weakened by neighboring channels following a systematic pattern. This pattern of interactions can be understood as a combination of two components, oriented in speed (consistent with a slow-speed prior) and in scale (sharpening of similar features). Interestingly, the interaction along scale implements a lateral inhibition mechanism, a canonical principle that had hitherto been found to operate mainly in early sensory processing. Overall, the speed-scale normalization mechanism may reflect the natural tendency of the visual system to integrate complex inputs into one coherent percept.
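The lateral inhibition along scale described above is a form of divisive normalization, which can be sketched in a few lines. The channel drives, pool width, and semisaturation constant below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def normalized_responses(drive, inhibition_width=1.0, sigma=0.1):
    """Divisive normalization across neighboring scale channels: each
    channel's drive is divided by a Gaussian-weighted pool of its neighbors,
    implementing lateral inhibition along the scale axis."""
    idx = np.arange(len(drive))
    pool_weights = np.exp(-(idx[:, None] - idx[None, :])**2
                          / (2.0 * inhibition_width**2))
    pool = pool_weights @ drive   # each channel's inhibitory pool
    return drive / (sigma + pool)

drive = np.array([0.2, 1.0, 0.6, 0.1])   # hypothetical channel activations
resp = normalized_responses(drive)
```

Normalization preserves which channel dominates while compressing the overall response range, which is the sharpening effect attributed here to the scale component of the interaction pattern.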


Subject(s)
Brain/physiology , Models, Statistical , Motion Perception/physiology , Visual Pathways/physiology , Adult , Female , Humans , Male , Psychophysics
7.
J Microsc ; 266(2): 153-165, 2017 05.
Article in English | MEDLINE | ID: mdl-28117893

ABSTRACT

Partitioning epidermis surface microstructure (ESM) images into skin ridge and skin furrow regions is an important preprocessing step before quantitative analysis of ESM images. Binarization is a promising technique for partitioning ESM images because of its computational simplicity and ease of implementation. However, even for state-of-the-art binarization methods, automatically segmenting ESM images remains a challenge, because the grey-level histograms of ESM images have no obvious features to guide the automatic selection of appropriate thresholds. Inspired by the human visual perceptual functions of structural feature extraction and comparison, we propose a structure similarity-guided image binarization method. The proposed method seeks the binary image that best approximates the input ESM image in terms of structural features. It is validated by comparison with two recently developed automatic binarization techniques, as well as a manual binarization method, on 20 synthetic noisy images and 30 ESM images. The experimental results show that: (1) the proposed method adapts to different images with the same grey-level histogram; (2) compared with the two automatic binarization techniques, it significantly improves average accuracy in segmenting ESM images at an acceptable cost in computational efficiency; and (3) it is applicable to segmenting practical ESM images. (Matlab code of the proposed method can be obtained by contacting the corresponding author.)
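The idea of selecting a threshold by structural similarity can be sketched as below. The similarity measure here is a single-window SSIM-style score over the whole image, a simplification of the usual locally windowed SSIM, and the search procedure is an illustrative sketch, not the paper's exact method:

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM over the whole image (luminance, contrast and
    structure terms computed once, not per local window)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def binarize_by_similarity(img, n_thresholds=64):
    """Pick the threshold whose two-level approximation (each class replaced
    by its mean gray value) is most similar to the input image."""
    best_t, best_score = None, -np.inf
    for t in np.linspace(img.min(), img.max(), n_thresholds)[1:-1]:
        mask = img >= t
        if mask.all() or not mask.any():
            continue  # skip degenerate single-class splits
        approx = np.where(mask, img[mask].mean(), img[~mask].mean())
        score = ssim_global(img, approx)
        if score > best_score:
            best_t, best_score = t, score
    return img >= best_t

# Synthetic test image: two gray levels plus mild noise
rng = np.random.default_rng(1)
truth = rng.random((16, 16)) > 0.5
img = np.where(truth, 0.8, 0.2) + rng.normal(0.0, 0.02, truth.shape)
result = binarize_by_similarity(img)
```

Because the criterion compares structure rather than histogram shape, two images with identical histograms but different spatial layouts can receive different thresholds, which matches the self-adaptation property claimed in the abstract.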


Subject(s)
Automation, Laboratory/methods , Epidermis/ultrastructure , Image Processing, Computer-Assisted/methods , Optical Imaging/methods , Surface Properties , Humans
8.
Sensors (Basel) ; 16(6)2016 Jun 22.
Article in English | MEDLINE | ID: mdl-27338412

ABSTRACT

Image enhancement methods are widely used to improve the visual quality of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the most common methods for enhancing image contrast. However, HE may cause over-enhancement and feature loss, which lead to an unnatural look and loss of detail in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem, but have largely ignored the feature loss problem. We therefore propose a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE). It builds on a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves on VCEA's enhancement effects. CegaHE adjusts the gaps between two gray values according to an adjustment equation that takes the properties of human visual perception into account, thereby solving the over-enhancement problem. In addition, it alleviates the feature loss problem and further enhances the textures in the dark regions of images, improving the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable contrast enhancement method that significantly outperforms VCEA and other methods.
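For reference, the plain HE baseline that CegaHE builds on can be written in a few lines; the gap-adjustment equation itself is not reproduced here. The mapping sends each gray level through the normalized cumulative histogram:

```python
import numpy as np

def histogram_equalization(img):
    """Classic HE on an 8-bit grayscale image: build a lookup table from the
    normalized cumulative histogram and remap every pixel through it.
    (CegaHE additionally adjusts the gaps between adjacent output levels.)"""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[cdf > 0][0]                       # first nonzero CDF value
    lut = np.clip((cdf - cdf_min) / (cdf[-1] - cdf_min), 0.0, 1.0)
    lut = np.round(lut * 255).astype(np.uint8)
    return lut[img]

# Low-contrast example: gray values clustered in [100, 130]
rng = np.random.default_rng(0)
img = rng.integers(100, 131, size=(64, 64), dtype=np.uint8)
out = histogram_equalization(img)
```

The output stretches the occupied levels across the full 0-255 range; the over-enhancement HE is criticized for arises precisely because this stretch is driven only by pixel counts, with no perceptual constraint on the resulting gaps.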

9.
Psychon Bull Rev ; 23(2): 432-8, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26450627

ABSTRACT

The ability to recognize the same image projected to different retinal locations is critical for visual object recognition in natural contexts. According to many theories, the translation invariance for objects extends only to trained retinal locations, so that a familiar object projected to a nontrained location should not be identified. In another approach, invariance is achieved "online," such that learning to identify an object in one location immediately affords generalization to other locations. We trained participants to name novel objects at one retinal location using eyetracking technology and then tested their ability to name the same images presented at novel retinal locations. Across three experiments, we found robust generalization. These findings provide a strong constraint for theories of vision.


Subject(s)
Generalization, Psychological/physiology , Pattern Recognition, Visual/physiology , Recognition, Psychology/physiology , Space Perception/physiology , Adult , Humans