1.
PLoS Comput Biol ; 20(6): e1012192, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38857280

ABSTRACT

Conventional and electron microscopy visualize structures in the micrometer to nanometer range, and such visualizations contribute decisively to our understanding of biological processes. Due to different factors in the recording process, microscopy images are subject to noise. Especially at their respective resolution limits, a high degree of noise can negatively affect both image interpretation by experts and further automated processing. However, the deteriorating effects of strong noise can be alleviated to a large extent by image enhancement algorithms. Because of the inherently high noise, a requirement for such algorithms is their applicability directly to noisy images or, in the extreme case, to just a single noisy image without a priori noise level information (referred to as the blind zero-shot setting). This work investigates blind zero-shot algorithms for microscopy image denoising. The denoising strategies applied by the investigated approaches include filtering methods, recent feed-forward neural networks amended to be trainable on noisy images, and recent probabilistic generative models. As datasets we consider transmission electron microscopy images, including images of SARS-CoV-2 viruses, and fluorescence microscopy images. A natural goal of denoising algorithms is to reduce noise while preserving the original image features, e.g., the sharpness of structures. In practice, however, a tradeoff between both aspects often has to be found. Our performance evaluations therefore focus not only on noise removal but set noise removal in relation to a metric which is instructive about sharpness. For all considered approaches, we numerically investigate their performance, report their denoising/sharpness tradeoff on different images, and discuss future developments. We observe that, depending on the data, the different algorithms can provide significant advantages or disadvantages in terms of their noise removal vs. sharpness preservation capabilities, which may be very relevant for different virological applications, e.g., virological analysis or image segmentation.
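To make the denoising/sharpness tradeoff concrete, here is a minimal sketch of a blind, single-image evaluation: a median filter stands in for the far more elaborate zero-shot methods the paper studies, and the variance of the Laplacian serves as a common sharpness proxy. Both choices, and all names below, are illustrative assumptions rather than the paper's actual methods or metrics.

```python
# Minimal sketch: blind single-image denoising plus a noise-vs-sharpness
# report. The median filter and variance-of-Laplacian sharpness proxy
# are illustrative stand-ins, not the methods evaluated in the paper.
import numpy as np
from scipy import ndimage

def denoise_blind(img, size=3):
    """Median filter: needs no a priori noise-level information."""
    return ndimage.median_filter(img, size=size)

def sharpness(img):
    """Variance of the Laplacian: higher values mean sharper structures."""
    return float(np.var(ndimage.laplace(img.astype(np.float64))))

rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[32:96, 32:96] = 1.0                          # toy structure
noisy = clean + rng.normal(0.0, 0.3, clean.shape)  # synthetic noise
out = denoise_blind(noisy)
print(f"removed-noise std: {np.std(noisy - out):.3f}")
print(f"sharpness noisy/denoised: {sharpness(noisy):.2f} / {sharpness(out):.2f}")
```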


Subject(s)
Algorithms , Image Processing, Computer-Assisted , SARS-CoV-2 , Signal-To-Noise Ratio , Image Processing, Computer-Assisted/methods , Humans , COVID-19/diagnostic imaging , Neural Networks, Computer , Microscopy, Electron, Transmission/methods , Computational Biology/methods , Microscopy/methods
2.
Neuroimage ; 237: 118106, 2021 08 15.
Article in English | MEDLINE | ID: mdl-33991696

ABSTRACT

Speech comprehension in natural soundscapes rests on the ability of the auditory system to extract speech information from a complex acoustic signal with overlapping contributions from many sound sources. Here we reveal the canonical processing of speech in natural soundscapes on multiple scales, using data-driven modeling approaches to characterize sounds and to analyze ultra-high-field fMRI recorded while participants listened to the audio soundtrack of a movie. We show that at the functional level the neuronal processing of speech in natural soundscapes can be surprisingly low dimensional in the human cortex, highlighting the functional efficiency of the auditory system for a seemingly complex task. In particular, we find that a model comprising three functional dimensions of auditory processing in the temporal lobes is shared across participants' fMRI activity. We further demonstrate that the three functional dimensions are implemented in anatomically overlapping networks that process different aspects of speech in natural soundscapes. One is most sensitive to complex auditory features present in speech, another to complex auditory features and fast temporal modulations that are not specific to speech, and a third mainly codes sound level. These results were derived with few a priori assumptions and provide a detailed and computationally reproducible account of the cortical activity in the temporal lobe elicited by the processing of speech in natural soundscapes.
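As a loose illustration of how a low-dimensional shared account of voxel responses can be obtained, the sketch below reduces simulated voxel time courses to three components with PCA. PCA is only an assumption for illustration here; the paper's data-driven modeling approach is more elaborate.

```python
# Loose illustrative stand-in: recover a three-dimensional account of
# simulated temporal-lobe voxel time courses with PCA. PCA is an
# assumption for illustration, not the paper's exact method.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
T, V = 500, 2000                          # time points, voxels
latent = rng.normal(size=(T, 3))          # three functional dimensions
mixing = rng.normal(size=(3, V))          # anatomically overlapping loadings
bold = latent @ mixing + 0.5 * rng.normal(size=(T, V))  # noisy voxel data
pca = PCA(n_components=3).fit(bold)
print("variance explained by 3 components:",
      pca.explained_variance_ratio_.sum().round(2))
```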


Subject(s)
Auditory Perception/physiology , Brain Mapping/methods , Models, Theoretical , Speech Perception/physiology , Temporal Lobe/physiology , Unsupervised Machine Learning , Adult , Female , Humans , Magnetic Resonance Imaging , Male , Motion Pictures , Temporal Lobe/diagnostic imaging , Young Adult
3.
Entropy (Basel) ; 23(5)2021 Apr 29.
Article in English | MEDLINE | ID: mdl-33947060

ABSTRACT

Latent Variable Models (LVMs) are well established tools to accomplish a range of different data processing tasks. Applications exploit the ability of LVMs to identify latent data structure in order to improve data (e.g., through denoising) or to estimate the relation between latent causes and measurements in medical data. In the latter case, LVMs in the form of noisy-OR Bayes nets represent the standard approach to relate binary latents (which represent diseases) to binary observables (which represent symptoms). Bayes nets with a binary representation for symptoms may be perceived as a coarse approximation, however. In practice, real disease symptoms can range from absent through mild and intermediate to very severe. Therefore, using disease/symptom relations as motivation, we here ask how standard noisy-OR Bayes nets can be generalized to incorporate continuous observables, e.g., variables that model symptom severity in an interval from healthy to pathological. This transition from binary to interval data poses a number of challenges, including a transition from a Bernoulli to a Beta distribution to model symptom statistics. While noisy-OR-like approaches are constrained to model how causes determine the observables' mean values, the use of Beta distributions additionally provides (and also requires) that the causes determine the observables' variances. To meet the challenges emerging when generalizing from Bernoulli to Beta distributed observables, we investigate a novel LVM that uses a maximum non-linearity to model how the latents determine the means and variances of the observables. Given the model and the goal of likelihood maximization, we then leverage recent theoretical results to derive an Expectation Maximization (EM) algorithm for the suggested LVM. We further show how variational EM can be used to efficiently scale the approach to large networks. Experimental results finally illustrate the efficacy of the proposed model using both synthetic and real data sets. Importantly, we show that the model produces reliable results in estimating causes using proofs of concept and first tests based on real medical data and on images.
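A minimal generative sketch of the model family described here: binary latents (diseases), a max nonlinearity setting the mean of Beta-distributed observables (symptom severities in the unit interval). The mean/precision Beta parameterization and all sizes are illustrative assumptions, not necessarily the paper's exact choices.

```python
# Generative sketch: binary causes, a max combination rule setting the
# mean of Beta-distributed symptom severities. Parameterization and
# sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
H, D = 4, 10                       # latent causes, observable symptoms
pi = 0.3                           # prior activation probability per cause
W = rng.uniform(0.1, 0.9, (D, H))  # cause-to-symptom weights
nu = 20.0                          # Beta precision (higher = less spread)
base = 0.05                        # severity mean when no cause is active

s = rng.random(H) < pi                       # sample binary causes
mu = np.maximum(base, (W * s).max(axis=1))   # maximum non-linearity
y = rng.beta(mu * nu, (1.0 - mu) * nu)       # Beta-distributed severities
print("active causes:", np.flatnonzero(s))
print("severities:", y.round(2))
```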

4.
PLoS Comput Biol ; 15(1): e1006595, 2019 01.
Article in English | MEDLINE | ID: mdl-30653497

ABSTRACT

We investigate how the neural processing in auditory cortex is shaped by the statistics of natural sounds. Hypothesising that auditory cortex (A1) represents the structural primitives out of which sounds are composed, we employ a statistical model to extract such components. The inputs to the model are cochleagrams, which approximate the non-linear transformations a sound undergoes from the outer ear, through the cochlea, to the auditory nerve. Cochleagram components do not superimpose linearly, but rather according to a rule which can be approximated using the max function. This is a consequence of the compression inherent in the cochleagram and the sparsity of natural sounds. Furthermore, cochleagrams do not have negative values. Cochleagrams are therefore not matched well by the assumptions of standard linear approaches such as sparse coding or ICA. We therefore consider a new encoding approach for natural sounds, which combines a model of early auditory processing with maximal causes analysis (MCA), a sparse coding model which captures both the non-linear combination rule and the non-negativity of the data. An efficient truncated EM algorithm is used to fit the MCA model to cochleagram data. We characterize the generative fields (GFs) inferred by MCA with respect to in vivo neural responses in A1 by applying reverse correlation to estimate the spectro-temporal receptive fields (STRFs) implied by the learned GFs. Despite the GFs being non-negative, the STRF estimates are found to contain both positive and negative subfields, where the negative subfields can be attributed to explaining-away effects as captured by the applied inference method. A direct comparison with ferret A1 shows many similar forms, and the spectral and temporal modulation tuning of both ferret and model STRFs shows similar ranges over the population. In summary, our model represents an alternative to linear approaches to biological auditory encoding that captures salient data properties and links inhibitory subfields to explaining-away effects.
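The sketch below illustrates the reverse-correlation step in isolation: a toy unit with a known non-negative generative field is probed with random cochleagram-like stimuli, and a response-weighted stimulus average recovers a receptive-field estimate. The rectified-linear toy response is a stand-in for MCA inference; all names and sizes are assumptions.

```python
# Sketch of reverse correlation for STRF estimation: average the
# stimuli, weighted by the responses they evoke, for a toy unit with a
# known non-negative generative field. The rectified-linear response
# is an illustrative stand-in for the MCA model's inference.
import numpy as np

rng = np.random.default_rng(1)
N, F, T = 20000, 16, 8                 # samples, frequency bins, time lags
gf = np.zeros((F, T))
gf[6:10, 2:5] = 1.0                    # toy non-negative generative field
stim = rng.random((N, F, T))           # random non-negative stimuli
drive = (stim * gf).sum(axis=(1, 2))
resp = np.maximum(0.0, drive - 0.55 * gf.sum())  # rectified toy responses
strf = np.tensordot(resp, stim - stim.mean(), axes=(0, 0)) / resp.sum()
print("estimated field peaks at:", np.unravel_index(strf.argmax(), strf.shape))
```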


Subject(s)
Auditory Cortex/physiology , Cochlea/physiology , Models, Neurological , Models, Statistical , Signal Processing, Computer-Assisted , Acoustic Stimulation , Algorithms , Animals , Female , Ferrets , Hearing Tests , Humans , Male
5.
Neural Comput ; 30(8): 2113-2174, 2018 08.
Article in English | MEDLINE | ID: mdl-29894656

ABSTRACT

We explore classifier training for data sets with very few labels. We investigate this task using a neural network for nonnegative data. The network is derived from a hierarchical normalized Poisson mixture model with one observed and two hidden layers. With the single objective of likelihood optimization, both labeled and unlabeled data are naturally incorporated into learning. The neural activation and learning equations resulting from our derivation are concise and local. As a consequence, the network can be scaled using standard deep learning tools for parallelized GPU implementation. Using standard benchmarks for nonnegative data, such as text document representations, MNIST, and NIST SD19, we study the classification performance when very few labels are used for training. In different settings, the network's performance is compared to standard and recently suggested semisupervised classifiers. While other recent approaches are more competitive for many labels or fully labeled data sets, we find that the network studied here can be applied with numbers of labels so small that no other system has so far been reported to operate on them.
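A compact sketch of how one likelihood objective can cover labeled and unlabeled data in a flat (non-hierarchical) Poisson mixture: the E-step computes responsibilities for unlabeled points, while labeled points have theirs clamped to their class. This simplification of the paper's hierarchical network is an illustrative assumption.

```python
# Sketch of a semisupervised E-step in a Poisson mixture: unlabeled
# points get soft responsibilities, labeled points get clamped ones.
# A flat simplification of the paper's hierarchical model.
import numpy as np
from scipy.special import logsumexp

def e_step(X, log_pi, lam, labels=None):
    """Responsibilities under a Poisson mixture with rates lam[c, d]."""
    # log p(x | c) up to an x-dependent constant (the x! term cancels)
    ll = X @ np.log(lam).T - lam.sum(axis=1) + log_pi
    R = np.exp(ll - logsumexp(ll, axis=1, keepdims=True))
    if labels is not None:                         # clamp labeled examples
        lab = labels >= 0
        R[lab] = np.eye(lam.shape[0])[labels[lab]]
    return R

rng = np.random.default_rng(0)
lam_true = rng.uniform(1, 8, (3, 5))
z = rng.integers(0, 3, 200)
X = rng.poisson(lam_true[z])                       # nonnegative count data
labels = np.where(rng.random(200) < 0.05, z, -1)   # very few labels
R = e_step(X, np.log(np.full(3, 1 / 3)), rng.uniform(1, 8, (3, 5)), labels)
print("responsibility matrix shape:", R.shape)
```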

6.
Neural Comput ; 29(11): 2979-3013, 2017 11.
Article in English | MEDLINE | ID: mdl-28957027

ABSTRACT

Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.
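For illustration, a generative sketch of discrete sparse coding as described here: each latent takes a value from a finite set, with zero dominating so that sparsity emerges, under prior probabilities that the algorithm would learn from data. The value set, prior, and sizes below are assumptions.

```python
# Generative sketch of discrete sparse coding: latents take values
# from a finite set (zero included, enforcing sparsity) with learnable
# prior probabilities; observations are a noisy linear combination.
import numpy as np

rng = np.random.default_rng(0)
H, D = 8, 16
values = np.array([0.0, 1.0, 2.0])        # finite latent value set
prior = np.array([0.85, 0.10, 0.05])      # sparse: zero dominates
W = rng.normal(0.0, 1.0, (D, H))          # dictionary
s = rng.choice(values, size=H, p=prior)   # discrete sparse latents
y = W @ s + rng.normal(0.0, 0.1, D)       # noisy observation
print("non-zero latents:", np.flatnonzero(s), "| observation dim:", y.shape)
```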

7.
Neural Comput ; 29(8): 2177-2202, 2017 08.
Article in English | MEDLINE | ID: mdl-28562214

ABSTRACT

We propose a nonparametric procedure to achieve fast inference in generative graphical models when the number of latent states is very large. The approach is based on iterative latent variable preselection, where we alternate between learning a selection function to reveal the relevant latent variables and using this to obtain a compact approximation of the posterior distribution for EM. This can make inference possible where the number of possible latent states is, for example, exponential in the number of latent variables, whereas an exact approach would be computationally infeasible. We learn the selection function entirely from the observed data and current expectation-maximization state via gaussian process regression. This is in contrast to earlier approaches, where selection functions were manually designed for each problem setting. We show that our approach performs as well as these bespoke selection functions on a wide variety of inference problems. In particular, for the challenging case of a hierarchical model for object localization with occlusion, we achieve results that match a customized state-of-the-art selection method at a far lower computational cost.
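A minimal sketch of the preselection idea, assuming scikit-learn's GaussianProcessRegressor as the regressor: a gaussian process maps observed data to per-latent relevance scores, and only the top-G latents enter the compact posterior approximation. The synthetic relevance targets used for training here are stand-ins for the quantities derived from the EM state.

```python
# Sketch of learned latent preselection: a gaussian process predicts a
# relevance score per latent; only the top-G latents enter the
# truncated posterior in the E-step. Targets here are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
N, D, H, G = 200, 10, 50, 5
X = rng.normal(size=(N, D))                   # observed data
rel = np.tanh(X @ rng.normal(size=(D, H)))    # stand-in relevance targets
gp = GaussianProcessRegressor().fit(X[:100], rel[:100])
scores = gp.predict(X[100:101])[0]            # relevance for a new point
selected = np.argsort(scores)[-G:]            # compact latent subset
print("preselected latents:", np.sort(selected))
```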

8.
Cytometry A ; 85(6): 501-11, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24733633

ABSTRACT

Personalized medicine is a modern healthcare approach in which information on each person's unique clinical constitution is exploited to realize early disease intervention based on more informed medical decisions. The application of diagnostic tools in combination with measurement evaluation that can be performed in a reliable and automated fashion plays a key role in this context. As the progression of various cancers and the effectiveness of their treatments are related to a varying number of tumor cells that circulate in blood, the determination of their extremely low numbers by liquid biopsy is a decisive prognostic marker. To detect and enumerate circulating tumor cells (CTCs) in a reliable and automated fashion, we apply methods from machine learning using a naive Bayesian classifier (NBC) based on a probabilistic generative mixture model. Cells are collected with a functionalized medical wire and are stained for fluorescence microscopy so that their color signature can be used for classification through the construction of Red-Green-Blue (RGB) color histograms. Exploiting the information in the fluorescence signature of CTCs via the NBC not only allows going beyond previous approaches but also provides a method of unsupervised learning, as required for unlabeled training data. A quantitative comparison with a state-of-the-art support vector machine, which requires labeled data, demonstrates the competitiveness of the NBC method.
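To illustrate the unsupervised route described here, the sketch below builds RGB colour histograms as cell features and fits a two-component generative mixture without labels; a scikit-learn Gaussian mixture stands in for the paper's generative mixture model, and the synthetic "cells" are purely illustrative.

```python
# Sketch: RGB colour histograms as features, a two-component mixture
# fitted without labels, cluster assignment as CTC / non-CTC decision.
# The Gaussian mixture is a stand-in for the paper's generative model.
import numpy as np
from sklearn.mixture import GaussianMixture

def rgb_histogram(patch, bins=8):
    """Concatenate per-channel histograms into one feature vector."""
    return np.concatenate([
        np.histogram(patch[..., c], bins=bins, range=(0, 1))[0]
        for c in range(3)
    ]).astype(float)

rng = np.random.default_rng(0)
# Two synthetic cell populations with different colour signatures
ctc = rng.beta(5, 2, (50, 16, 16, 3))      # brighter fluorescence signature
other = rng.beta(2, 5, (50, 16, 16, 3))    # dimmer background cells
feats = np.array([rgb_histogram(p) for p in np.concatenate([ctc, other])])
gmm = GaussianMixture(n_components=2, random_state=0).fit(feats)
print("cluster assignments:", gmm.predict(feats[:5]), gmm.predict(feats[-5:]))
```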


Subject(s)
Bayes Theorem , Early Detection of Cancer , Neoplasms/diagnosis , Neoplastic Cells, Circulating/pathology , Algorithms , Artificial Intelligence , Humans , Neoplasms/pathology , Neoplastic Cells, Circulating/ultrastructure , Pattern Recognition, Automated/methods , Precision Medicine , Support Vector Machine
9.
PLoS Comput Biol ; 9(6): e1003062, 2013.
Article in English | MEDLINE | ID: mdl-23754938

ABSTRACT

Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of 'globular' receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of 'globular' fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here with optimal sparsity, only low proportions of 'globular' fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of 'globular' fields well. Our computational study, therefore, suggests that 'globular' fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex.


Subject(s)
Vision, Ocular , Visual Cortex/cytology , Animals , Computational Biology , Humans , Models, Theoretical
10.
PLoS Comput Biol ; 8(3): e1002432, 2012.
Article in English | MEDLINE | ID: mdl-22457610

ABSTRACT

Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing.


Subject(s)
Action Potentials/physiology , Feedback, Physiological/physiology , Models, Neurological , Nerve Net/physiology , Neural Inhibition/physiology , Neurons/physiology , Synaptic Transmission/physiology , Animals , Computer Simulation , Humans
11.
IEEE Trans Pattern Anal Mach Intell ; 44(12): 9787-9801, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34882546

ABSTRACT

How can we efficiently find very large numbers of clusters C in very large datasets N of potentially high dimensionality D? Here we address the question by using a novel variational approach to optimize Gaussian mixture models (GMMs) with diagonal covariance matrices. The variational method approximates expectation maximization (EM) by applying truncated posteriors as variational distributions and partial E-steps in combination with coresets. The run time complexity to optimize the clustering objective then reduces from O(NCD) per conventional EM iteration to O(N'G²D) for a variational EM iteration on coresets (with coreset size N' ≤ N and truncation parameter G ≪ C). Based on the strongly reduced run time complexity per iteration, which scales sublinearly with NC, we then provide a concrete, practically applicable, parallelized and highly efficient clustering algorithm. In numerical experiments on standard large-scale benchmarks we (A) show that overall clustering times also scale sublinearly with NC, and (B) observe substantial wall-clock speedups compared to already highly efficient recently reported results. The algorithm's sublinear scaling allows for applications at scales where alternative methods cease to be applicable. We demonstrate such very large-scale applicability using the YFCC100M benchmark, for which we use a GMM with up to 50,000 clusters to optimize a data density model with up to 150 million parameters.
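The core saving comes from the truncated E-step. Below is a minimal sketch, with brute-force candidate search for clarity (the paper additionally uses coresets and efficient candidate updates): each data point keeps only G candidate clusters, so responsibilities live on an N×G instead of an N×C array.

```python
# Sketch of a truncated variational E-step for a diagonal GMM: each
# point keeps G candidate clusters and responsibilities are computed
# only there. Candidate search is brute force here for clarity.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
N, C, D, G = 1000, 100, 2, 5
X = rng.normal(size=(N, D))
mu = rng.normal(size=(C, D))
var = np.ones(C)                                   # isotropic for simplicity

d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)     # (N, C) squared distances
cand = np.argsort(d2, axis=1)[:, :G]               # top-G candidate clusters
ll = -0.5 * (np.take_along_axis(d2, cand, 1) / var[cand]
             + D * np.log(var[cand]))
R = np.exp(ll - logsumexp(ll, axis=1, keepdims=True))  # truncated posteriors
print("responsibilities per point:", R.shape)      # (N, G) instead of (N, C)
```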

12.
Phys Rev E ; 104(4-1): 044105, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34781434

ABSTRACT

We study a phase transition in parameter learning of hidden Markov models (HMMs). We do this by generating sequences of observed symbols from given discrete HMMs with uniformly distributed transition probabilities and a noise level encoded in the output probabilities. We apply the Baum-Welch (BW) algorithm, an expectation-maximization algorithm from the field of machine learning. By using the BW algorithm we then try to estimate the parameters of each investigated realization of an HMM. We study HMMs with n = 4, 8, and 16 states. By changing the amount of accessible learning data and the noise level, we observe a phase-transition-like change in the performance of the learning algorithm. For bigger HMMs and more learning data, the learning behavior improves tremendously below a certain threshold in the noise strength. For a noise level above the threshold, learning is not possible. Furthermore, we use an overlap parameter applied to the results of a maximum a posteriori (Viterbi) algorithm to investigate the accuracy of the hidden state estimation around the phase transition.
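A sketch of the data-generation setup described here: a discrete HMM with uniformly drawn transition probabilities and output probabilities that reveal the hidden state with probability 1 - noise. The exact noise encoding is an assumption for illustration; the generated sequences would then be the input to Baum-Welch.

```python
# Sketch of the experimental setup: generate observation sequences
# from a random discrete HMM whose emissions reveal the hidden state
# with probability (1 - noise). Noise encoding is illustrative.
import numpy as np

def random_hmm(n, noise, rng):
    A = rng.random((n, n))
    A /= A.sum(axis=1, keepdims=True)          # uniform-ish transitions
    B = np.full((n, n), noise / (n - 1))       # emission probabilities
    np.fill_diagonal(B, 1.0 - noise)           # state-revealing diagonal
    return A, B

def sample(A, B, T, rng):
    n = A.shape[0]
    states, obs = np.empty(T, int), np.empty(T, int)
    s = rng.integers(n)
    for t in range(T):
        states[t] = s
        obs[t] = rng.choice(n, p=B[s])
        s = rng.choice(n, p=A[s])
    return states, obs

rng = np.random.default_rng(0)
A, B = random_hmm(n=8, noise=0.2, rng=rng)
states, obs = sample(A, B, T=5000, rng=rng)
print("fraction of symbols matching the hidden state:",
      (states == obs).mean().round(3))
```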

13.
J Vis ; 8(7): 34.1-18, 2008 Dec 29.
Article in English | MEDLINE | ID: mdl-19146266

ABSTRACT

Our aim here is to create a fully neural, functionally competitive, and correspondence-based model for invariant face recognition. By recurrently integrating information about feature similarities, spatial feature relations, and facial structure stored in memory, the system evaluates face identity ("what"-information) and face position ("where"-information) using explicit representations for both. The network consists of three functional layers of processing, (1) an input layer for image representation, (2) a middle layer for recurrent information integration, and (3) a gallery layer for memory storage. Each layer consists of cortical columns as functional building blocks that are modeled in accordance with recent experimental findings. In numerical simulations we apply the system to standard benchmark databases for face recognition. We find that recognition rates of our biologically inspired approach lie in the same range as recognition rates of recent and purely functionally motivated systems.


Subject(s)
Computer Simulation , Models, Theoretical , Pattern Recognition, Visual/physiology , Cognition/physiology , Face , Humans
14.
Sci Rep ; 8(1): 10038, 2018 07 03.
Article in English | MEDLINE | ID: mdl-29968764

ABSTRACT

In natural data, the class and intensity of stimuli are correlated. Current machine learning algorithms ignore this ubiquitous statistical property of stimuli, usually by requiring normalized inputs. From a biological perspective, it remains unclear how neural circuits may account for these dependencies in inference and learning. Here, we use a probabilistic framework to model class-specific intensity variations, and we derive approximate inference and online learning rules which reflect common hallmarks of neural computation. Concretely, we show that a neural circuit equipped with specific forms of synaptic and intrinsic plasticity (IP) can learn the class-specific features and intensities of stimuli simultaneously. Our model provides a normative interpretation of IP as a critical part of sensory learning and predicts that neurons can represent nontrivial input statistics in their excitabilities. Computationally, our approach yields improved statistical representations for realistic datasets in the visual and auditory domains. In particular, we demonstrate the utility of the model in estimating the contrastive stress of speech.

15.
Front Comput Neurosci ; 11: 54, 2017.
Article in English | MEDLINE | ID: mdl-28690509

ABSTRACT

Biological and artificial neural networks (ANNs) represent input signals as patterns of neural activity. In biology, neuromodulators can trigger important reorganizations of these neural representations. For instance, pairing a stimulus with the release of either acetylcholine (ACh) or dopamine (DA) evokes long-lasting increases in the responses of neurons to the paired stimulus. The functional roles of ACh and DA in rearranging representations remain largely unknown. Here, we address this question using a Hebbian-learning neural network model. Our aim is both to gain a functional understanding of ACh and DA transmission in shaping biological representations and to explore neuromodulator-inspired learning rules for ANNs. We model the effects of ACh and DA on synaptic plasticity and confirm that stimuli coinciding with greater neuromodulator activation are overrepresented in the network. We then simulate the physiological release schedules of ACh and DA. We measure the impact of neuromodulator release on the network's representation and on its performance on a classification task. We find that ACh and DA trigger distinct changes in neural representations that both improve performance. The putative ACh signal redistributes neural preferences so that more neurons encode stimulus classes that are challenging for the network. The putative DA signal adapts synaptic weights so that they better match the classes of the task at hand. Our model thus offers a functional explanation for the effects of ACh and DA on cortical representations. Additionally, our learning algorithm yields performances comparable to those of state-of-the-art optimisation methods in multi-layer perceptrons while requiring weaker supervision signals and interacting with synaptically local weight updates.
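A minimal sketch of a neuromodulator-gated Hebbian rule of the kind modeled here: the update of the winning unit is scaled by a scalar modulator level at stimulus time, so stimuli paired with more release become overrepresented. The winner-take-all circuit, gating form, and learning rate are illustrative assumptions.

```python
# Sketch of a neuromodulator-gated Hebbian update: the winner's weight
# change is scaled by the modulator level (standing in for ACh or DA),
# so frequently paired stimuli claim more units over time.
import numpy as np

rng = np.random.default_rng(0)
D, K, eta = 20, 5, 0.1
W = rng.random((K, D))
W /= np.linalg.norm(W, axis=1, keepdims=True)  # normalized preferences

def step(x, modulator):
    """Winner-take-all Hebbian step scaled by the neuromodulator level."""
    k = np.argmax(W @ x)                    # competition: winning unit
    W[k] += eta * modulator * (x - W[k])    # gated Hebbian move
    W[k] /= np.linalg.norm(W[k])            # synaptic normalization

for _ in range(1000):
    x = rng.random(D)
    # Stimuli of one kind are "paired" with higher modulator release
    step(x, modulator=1.5 if x[:5].sum() > 2.5 else 0.5)
print("learned preference norms:", np.linalg.norm(W, axis=1).round(2))
```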

16.
PLoS One ; 10(5): e0124088, 2015.
Article in English | MEDLINE | ID: mdl-25954947

ABSTRACT

Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and a nonlinear combination of components. With the prior, our model can easily represent exact zeros, e.g., for the absence of an image component such as an edge, as well as a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. The model assumptions made by the linear and nonlinear approaches have major consequences, so the main goal of this paper is to isolate and highlight the differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference, which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components at any level of sparsity. This suggests that our model can adaptively approximate and characterize the underlying generation process well.
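A generative sketch of the proposed model's two ingredients: a spike-and-slab prior (a binary spike yields exact zeros, a continuous slab sets intensity) and the nonlinear max combination rule targeting occlusions. Distributions and sizes below are illustrative assumptions.

```python
# Generative sketch: spike-and-slab prior (exact zeros plus continuous
# intensities) combined through a pointwise max to mimic occlusion.
# All distributions and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
H, D = 6, 64
pi, slab_mu, slab_sd = 0.2, 1.0, 0.3
W = rng.random((H, D))                        # component templates
spikes = rng.random(H) < pi                   # presence: exact zeros otherwise
slabs = rng.normal(slab_mu, slab_sd, H)       # intensities when present
y = (spikes[:, None] * slabs[:, None] * W).max(axis=0)  # occlusive max
print("active components:", np.flatnonzero(spikes))
```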


Subject(s)
Algorithms , Image Interpretation, Computer-Assisted , Nonlinear Dynamics , Databases as Topic , Dictionaries as Topic , Image Processing, Computer-Assisted , Learning
17.
Neural Netw ; 17(8-9): 1377-89, 2004.
Article in English | MEDLINE | ID: mdl-15555872

ABSTRACT

We study the self-organization of receptive fields (RFs) of cortical minicolumns. Input-driven self-organization is induced by Hebbian synaptic plasticity of afferent fibers to model minicolumns based on spiking neurons and background oscillations. If input in the form of spike patterns is presented during learning, the RFs of minicolumns hierarchically specialize to increasingly small groups of similar RFs in a series of nested group subdivisions. In a number of experiments we show that the system finds clusters of similar spike patterns, that it is capable of evenly covering the input space if the input is continuously distributed, and that it extracts basic features from input consisting of superpositions of spike patterns. With a continuous version of the bars test, we furthermore demonstrate the system's ability to evenly cover the space of extracted basic input features. Its hierarchical nature and its flexibility with respect to input distinguish the presented type of self-organization from others, including similar but non-hierarchical self-organization as discussed in [Lucke, J., & von der Malsburg, C. (2004). Rapid processing and unsupervised learning in a model of the cortical macrocolumn. Neural Computation, 16, 501-533]. The capabilities of the presented system match crucial properties of the plasticity of cortical RFs, and we suggest it as a model for their hierarchical formation.


Subject(s)
Artificial Intelligence , Neural Networks, Computer , Pattern Recognition, Visual/physiology , Animals , Brain Mapping , Cerebral Cortex/physiology , Humans , Neuronal Plasticity/physiology , Neurons, Afferent/physiology , Visual Fields/physiology
18.
IEEE Trans Pattern Anal Mach Intell ; 36(10): 1950-62, 2014 Oct.
Article in English | MEDLINE | ID: mdl-26352627

ABSTRACT

We study the task of cleaning scanned text documents that are strongly corrupted by dirt such as manual line strokes, spilled ink, etc. We aim at autonomously removing such corruptions from a single letter-size page based only on the information the page contains. Our approach first learns character representations from document patches without supervision. For learning, we use a probabilistic generative model parameterizing pattern features, their planar arrangements and their variances. The model's latent variables describe pattern position and class, and feature occurrences. Model parameters are efficiently inferred using a truncated variational EM approach. Based on the learned representation, a clean document can be recovered by identifying, for each patch, pattern class and position while a quality measure allows for discrimination between character and non-character patterns. For a full Latin alphabet we found that a single page does not contain sufficiently many character examples. However, even if heavily corrupted by dirt, we show that a page containing a lower number of character types can efficiently and autonomously be cleaned solely based on the structural regularity of the characters it contains. In different example applications with different alphabets, we demonstrate and discuss the effectiveness, efficiency and generality of the approach.

19.
Neural Comput ; 21(10): 2805-45, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19548804

ABSTRACT

We study a dynamical model of processing and learning in the visual cortex, which reflects the anatomy of V1 cortical columns and properties of their neuronal receptive fields. Based on recent results on the fine-scale structure of columns in V1, we model the activity dynamics in subpopulations of excitatory neurons and their interaction with systems of inhibitory neurons. We find that a dynamical model based on these aspects of columnar anatomy can give rise to specific types of computations that result in self-organization of afferents to the column. For a given type of input, self-organization reliably extracts the basic input components represented by neuronal receptive fields. Self-organization is very noise tolerant and can robustly be applied to different types of input. To quantitatively analyze the system's component extraction capabilities, we use two standard benchmarks: the bars test and natural images. In the bars test, the system shows the highest noise robustness reported so far. If natural image patches are used as input, self-organization results in Gabor-like receptive fields. In quantitative comparison with in vivo measurements, we find that the obtained receptive fields capture statistical properties of V1 simple cells that algorithms such as independent component analysis or sparse coding do not reproduce.


Subject(s)
Nerve Net/physiology , Sensory Receptor Cells/physiology , Visual Cortex/physiology , Visual Fields/physiology , Visual Pathways/physiology , Visual Perception/physiology , Algorithms , Animals , Computer Simulation , Humans , Interneurons/physiology , Neural Inhibition/physiology , Neural Networks, Computer , Pattern Recognition, Visual/physiology , Synaptic Transmission/physiology
20.
Clin Neurophysiol ; 120(5): 1003-8, 2009 May.
Article in English | MEDLINE | ID: mdl-19329358

ABSTRACT

OBJECTIVE: Input-output (IO) curves of motor evoked potentials (MEP) are widely used to assess corticospinal excitability by transcranial magnetic stimulation (TMS). Here we sought to determine hysteresis effects on IO curves, i.e. their short-term dependence on prior corticospinal activation. METHODS: IO curves were measured from the first dorsal interosseous (FDI) muscle of 14 healthy volunteers in three different conditions of stimulus intensity order: increase from lowest to highest, decrease from highest to lowest, and random. Intensities ranged from 80% to 170% of the resting motor threshold (RMT). IO curves were measured in the resting vs. active FDI and at two different intertrial intervals (ITI, 5 s and 20 s). RESULTS: In the resting FDI and at ITI = 5 s, the IO curve in the condition "decrease" shifted significantly to the left compared to the condition "increase". The IO curve in the condition "random" ran in between the other two curves. Hysteresis was most pronounced in the high-intensity part of the IO curves. Hysteresis did not occur at ITI = 20 s or in the active FDI. CONCLUSIONS: These findings indicate that hysteresis can significantly influence IO curves. One possible underlying mechanism might be short-term synaptic enhancement. SIGNIFICANCE: Consideration of IO curve hysteresis effects is important to avoid systematic data bias in clinical and research TMS applications.
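IO curves such as these are commonly summarized by sigmoid (Boltzmann) fits; a hedged sketch of such a fit is shown below and could be applied separately to the "increase" and "decrease" conditions to quantify the leftward shift. The functional form and parameter values follow common convention and are not taken from this paper.

```python
# Sketch of a conventional Boltzmann fit of MEP amplitude vs. stimulus
# intensity; fitting each intensity-order condition separately would
# expose a hysteresis shift in the midpoint I50. Values are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(i, mep_max, i50, slope):
    """MEP amplitude as a sigmoid function of stimulus intensity."""
    return mep_max / (1.0 + np.exp((i50 - i) / slope))

rng = np.random.default_rng(0)
intensity = np.linspace(80, 170, 19)                 # % of RMT
mep = boltzmann(intensity, 3.0, 120.0, 8.0) + rng.normal(0, 0.1, 19)
params, _ = curve_fit(boltzmann, intensity, mep, p0=[3.0, 120.0, 8.0])
print("fitted MEP_max, I50, slope:", np.round(params, 2))
```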


Subject(s)
Artifacts , Evoked Potentials, Motor/physiology , Motor Cortex/physiology , Pyramidal Tracts/physiology , Transcranial Magnetic Stimulation/methods , Adult , Brain Mapping , Electromyography , Female , Hand/innervation , Hand/physiology , Humans , Male , Motor Neurons/physiology , Muscle Contraction/physiology , Muscle, Skeletal/innervation , Muscle, Skeletal/physiology , Neural Conduction/physiology , Signal Processing, Computer-Assisted , Young Adult