ABSTRACT
We study the Fisher information matrix (FIM) of one-hidden-layer networks with the ReLU activation function. For such a network, let W denote the d×p weight matrix from the d-dimensional input to the hidden layer consisting of p neurons, and v the p-dimensional weight vector from the hidden layer to the scalar output. We focus on the FIM of v, which we denote by I. Under certain conditions, we characterize the first three clusters of eigenvalues and eigenvectors of the FIM. Specifically, we show that the following holds approximately. (1) Since the entries of I are non-negative owing to the ReLU, the first eigenvalue is the Perron-Frobenius eigenvalue. (2) For the cluster of the next largest eigenvalues, the eigenspace is spanned by the row vectors of W. (3) The direct sum of the eigenspace of the first eigenvalue and that of the third cluster is spanned by the set of all vectors obtained as the Hadamard product of any pair of row vectors of W. We confirmed by numerical calculation that the above is approximately correct when the number of hidden nodes is about 10,000.
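The eigenstructure claimed in the abstract can be probed numerically. The sketch below assumes a Gaussian input distribution and a Gaussian-noise output model (assumptions of this sketch, not stated in the abstract), under which the FIM of v reduces to E[relu(Wᵀx) relu(Wᵀx)ᵀ]; it estimates I by Monte Carlo and checks claims (1) and (2):

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, n = 20, 200, 50_000   # input dim, hidden width, Monte Carlo samples

# W is the d x p input-to-hidden weight matrix, as in the abstract; the
# hidden activation for input x is h = relu(W^T x).  For f(x) = v . h with
# Gaussian output noise, the gradient of f w.r.t. v is h itself, so the
# FIM of v is I = E_x[h h^T].
W = rng.standard_normal((d, p)) / np.sqrt(d)
X = rng.standard_normal((n, d))
H = np.maximum(X @ W, 0.0)            # n x p hidden activations
I_fim = (H.T @ H) / n                 # Monte Carlo estimate of the p x p FIM

# (1) Entrywise non-negativity, so the Perron-Frobenius theorem applies.
assert np.all(I_fim >= 0)

evals, evecs = np.linalg.eigh(I_fim)  # eigenvalues in ascending order
top = evecs[:, -1]
top *= np.sign(top.sum())             # fix the overall sign
assert np.all(top > 0)                # Perron vector is entrywise positive

# (2) The next d eigenvectors should lie close to span{rows of W}.
Q, _ = np.linalg.qr(W.T)              # orthonormal basis of the row space
second_cluster = evecs[:, -1 - d:-1]
overlap = np.linalg.norm(Q.T @ second_cluster, axis=0)
print(overlap.min())                  # near 1 when the claim holds
```

Claim (3) could be checked the same way, replacing the row space by the span of the pairwise Hadamard products of the rows of W; the abstract's p ≈ 10,000 regime is reduced to p = 200 here to keep the sketch cheap.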
Subject(s)
Neural Networks (Computer) , Neurons
ABSTRACT
Midbrain dopamine neurons are activated directly by cholinergic agonists or by stimulation of the cholinergic neurons in the laterodorsal tegmental nucleus (LDT) of the pons in rats. In urethane-anesthetized mice, electrical stimulation of the LDT resulted in a rapid, stimulus-time-locked increase in dopamine release in the nucleus accumbens (NAc), followed several minutes later by a prolonged increase in dopamine release. In mutant mice with truncated M5 receptors, the prolonged phase of dopamine release was absent, but the initial, rapid phase of dopamine release was fully observed. We conclude that M5 muscarinic receptors on midbrain dopamine neurons mediate a prolonged facilitation of dopamine release in the NAc. These results imply that M5 muscarinic receptors play an important role in motivational behaviors driven by dopamine activity in the accumbens.
Subject(s)
Dopamine/metabolism , Nucleus Accumbens/metabolism , Pons/physiology , Muscarinic Receptors/metabolism , Animals , Electric Stimulation , Implanted Electrodes , Kinetics , Male , Mice , Mutant Mice , Motivation , Muscarinic Antagonists/pharmacology , Nucleus Accumbens/drug effects , Muscarinic M5 Receptor , Muscarinic Receptors/drug effects , Muscarinic Receptors/genetics , Reward , Scopolamine/pharmacology
ABSTRACT
The rarest and least understood of the muscarinic receptors is the M5 subtype. Recombinant methods were used to create mutant mice with a deletion in the third intracellular loop of the M5 receptor gene. Salivation induced by the nonselective muscarinic agonist pilocarpine (1 mg/kg s.c.) was reduced in homozygous mutants from 15 to 60 min after injection as compared with wild-type mice. After 18-h food and water deprivation, drinking was increased in homozygous mutants, but feeding was not. The mutant and wild-type mice had similar responses in tests of open-field exploration, seizures induced by pilocarpine (300 mg/kg), and hypothermia induced by pilocarpine (1-3 mg/kg). These results indicate that M5 muscarinic receptors are important for fluid intake and suggest that M5 receptors are involved in slow secretory processes.
Subject(s)
Drinking/genetics , Gene Deletion , Muscarinic Receptors/deficiency , Muscarinic Receptors/genetics , Animals , Genotype , Mice , Mutant Mice , Mutation , Rats , Muscarinic M5 Receptor
ABSTRACT
We are interested in developing a safe semi-supervised learning method that works in any situation. Semi-supervised learning postulates that n′ unlabeled data are available in addition to n labeled data. However, almost all previous semi-supervised methods require additional assumptions (beyond the availability of unlabeled data) to improve on supervised learning. If such assumptions are not met, these methods can perform worse than supervised learning. Sokolovska, Cappé, and Yvon (2008) proposed a semi-supervised method based on a weighted likelihood approach. They proved that this method asymptotically never performs worse than supervised learning (i.e., it is safe) without any additional assumption. Their method is attractive because it is easy to implement and is potentially general. Moreover, it is deeply related to a certain statistical paradox. However, the method of Sokolovska et al. (2008) assumes a very limited situation: classification, discrete covariates, n′ → ∞, and a maximum likelihood estimator. In this paper, we extend their method by modifying the weight. We prove that our proposal is safe in a significantly wider range of situations as long as n ≤ n′. Further, we give a geometrical interpretation of the proof of safety through its relationship with the above-mentioned statistical paradox. Finally, we show that the above proposal is asymptotically safe even when n′ < n.
Subject(s)
Artificial Intelligence , Algorithms , Likelihood Functions
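The base weighted-likelihood method of Sokolovska et al. (2008) described in the last abstract can be sketched for discrete covariates: weight each labeled example by the ratio of the covariate frequency estimated from the unlabeled sample to that estimated from the labeled sample, then maximize the weighted log-likelihood. The toy data, model, and all variable names below are illustrative, and this shows only the base weights, not the modified weight of the paper's extension (which the abstract does not specify):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: discrete covariate x in {0, 1, 2}, binary label y.
K = 3
n, n_unl = 200, 2000                      # labeled / unlabeled sample sizes
px_true = np.array([0.2, 0.3, 0.5])
x_lab = rng.choice(K, size=n, p=px_true)
x_unl = rng.choice(K, size=n_unl, p=px_true)
y = (rng.random(n) < np.array([0.1, 0.5, 0.9])[x_lab]).astype(float)

# Weighted-likelihood weights: marginal covariate frequencies estimated
# from the (large) unlabeled sample over those from the labeled sample.
p_hat = np.bincount(x_lab, minlength=K) / n        # labeled marginal
q_hat = np.bincount(x_unl, minlength=K) / n_unl    # unlabeled marginal
w = q_hat[x_lab] / p_hat[x_lab]                    # per-example weight

# Weighted MLE for a simple logistic model P(y=1|x) = sigmoid(a*x + b),
# fitted by gradient ascent on the weighted log-likelihood.
theta = np.zeros(2)                                # (a, b)
feats = np.stack([x_lab.astype(float), np.ones(n)], axis=1)
for _ in range(2000):
    pr = 1.0 / (1.0 + np.exp(-feats @ theta))
    grad = feats.T @ (w * (y - pr)) / n            # weighted score
    theta += 0.5 * grad
```

With n′ ≫ n the weights converge to 1 under no covariate shift, which is why the approach cannot hurt asymptotically; the abstract's contribution is a modified weight that keeps this safety when n′ is only of the same order as n.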