Dynamic Nonlinear Spatial Integrations on Encoding Contrasting Stimuli of Tectal Neurons.
Huang, Shuman; Hu, Pingge; Zhao, Zhenmeng; Shi, Li.
Affiliation
  • Huang S; Key Laboratory of Artificial Intelligence and Personalized Learning in Education of Henan Province, College of Computer and Information Engineering, Henan Normal University, Xinxiang 453007, China.
  • Hu P; Department of Automation, Tsinghua University, Beijing 100084, China.
  • Zhao Z; School of Software, Henan Normal University, Xinxiang 453007, China.
  • Shi L; Department of Automation, Tsinghua University, Beijing 100084, China.
Animals (Basel); 14(11), 2024 May 26.
Article in En | MEDLINE | ID: mdl-38891623
ABSTRACT
Animals detect targets using a variety of visual cues, with the visual salience of these cues determining which environmental features receive priority attention and further processing. Surround modulation plays a crucial role in generating visual saliency and has been extensively studied in avian tectal neurons. Recent work has reported that the suppression of tectal neurons induced by a motion-contrasting stimulus is stronger than that induced by a luminance-contrasting stimulus; however, the underlying mechanism remains poorly understood. In this study, we built a computational model, called Generalized Linear-Dynamic Modulation (GL_DM), that incorporates independent nonlinear tuning mechanisms for excitatory and inhibitory inputs, with the aim of describing how tectal neurons encode contrasting stimuli. The results showed that (1) the dynamic nonlinear integration structure substantially improved the accuracy of the predicted responses to contrasting stimuli (the goodness of fit differed significantly between the two models; p < 0.001, paired t-test), verifying the nonlinear processing performed by tectal neurons, and (2) the modulation difference between luminance- and motion-contrasting stimuli emerged in the responses predicted by the full model but not in those predicted by the variant with only excitatory synaptic input (luminance-contrasting stimuli: 89 ± 2.8% (GL_DM) vs. 87 ± 2.1% (GL_DMexc); motion-contrasting stimuli: 87 ± 1.7% (GL_DM) vs. 83 ± 2.2% (GL_DMexc)). These results validate the proposed model and further suggest a role for dynamic nonlinear spatial integration in contextual visual information processing, which is important for the object detection performed by birds.
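The abstract does not give the GL_DM equations, but the structure it describes (separate nonlinear tuning of excitatory and inhibitory inputs, with a full model compared against an excitation-only variant) can be sketched roughly as follows. This is an illustrative assumption, not the authors' implementation: the softplus nonlinearities, the parameterization, the Poisson likelihood, and the synthetic data are all placeholders.

import numpy as np
from scipy.optimize import minimize

def softplus(x):
    # Smooth rectifying nonlinearity, used here as a stand-in tuning function.
    return np.log1p(np.exp(x))

def predict_rate(params, exc_input, inh_input, use_inhibition=True):
    # Predicted firing rate: a nonlinear excitatory drive minus, optionally,
    # an independently tuned nonlinear inhibitory drive (full vs. exc-only model).
    w_e, b_e, w_i, b_i, baseline = params
    drive = softplus(w_e * exc_input + b_e)
    if use_inhibition:
        drive = drive - softplus(w_i * inh_input + b_i)
    return np.maximum(drive + baseline, 0.0)

def neg_log_likelihood(params, exc_input, inh_input, spikes, use_inhibition=True):
    # Poisson negative log-likelihood (constant terms dropped); a small floor
    # keeps the log finite where the rectified rate reaches zero.
    rate = predict_rate(params, exc_input, inh_input, use_inhibition) + 1e-6
    return np.sum(rate - spikes * np.log(rate))

# Toy usage with synthetic data (illustration only, not the paper's recordings).
rng = np.random.default_rng(0)
exc = rng.normal(size=500)   # e.g. summed center (excitatory) stimulus drive
inh = rng.normal(size=500)   # e.g. summed surround (inhibitory) stimulus drive
true_rate = np.maximum(softplus(1.5 * exc) - softplus(1.0 * inh) + 0.5, 0.0)
spikes = rng.poisson(true_rate)

x0 = np.ones(5)
full = minimize(neg_log_likelihood, x0, args=(exc, inh, spikes, True))
exc_only = minimize(neg_log_likelihood, x0, args=(exc, inh, spikes, False))
print("full-model NLL:", full.fun, " excitatory-only NLL:", exc_only.fun)

In this sketch, the gap in fit quality between the full and excitation-only variants plays the role of the goodness-of-fit comparison reported in the abstract; the actual GL_DM model and its evaluation metric may differ.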
Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: Animals (Basel) Year: 2024 Document type: Article Affiliation country: China Country of publication: Switzerland
