Results 1 - 4 of 4
1.
Neuroimage ; 181: 734-747, 2018 Nov 01.
Article in English | MEDLINE | ID: mdl-30055372

ABSTRACT

This work presents a novel approach to finding linkage/association between multimodal brain imaging data, such as structural MRI (sMRI) and functional MRI (fMRI). Motivated by the machine translation domain, we employ a deep learning model and treat two different imaging views of the same brain as two different languages conveying some common facts. That analogy enables finding linkages between the two modalities. The proposed translation-based fusion model contains a computing layer that learns "alignments" (or links) between dynamic connectivity features from fMRI data and static gray matter patterns from sMRI data. The approach is evaluated on a multi-site dataset consisting of eyes-closed resting-state imaging data collected from 298 subjects (age- and gender-matched: 154 healthy controls and 144 patients with schizophrenia). Results are further confirmed on an independent dataset consisting of eyes-open resting-state imaging data from 189 subjects (age- and gender-matched: 91 healthy controls and 98 patients with schizophrenia). We used dynamic functional connectivity (dFNC) states as the functional features and ICA-based sources from gray matter densities as the structural features. The dFNC states characterized by weakly correlated intrinsic connectivity networks (ICNs) were found to have a stronger association with the putamen and insular gray matter patterns, while the dFNC states dominated by strongly correlated ICNs exhibited stronger links with the gray matter patterns in the precuneus, posterior cingulate cortex (PCC), and temporal cortex. Further investigation with the estimated link strength (or alignment score) showed significant group differences between healthy controls and patients with schizophrenia in several key regions, including the temporal lobe, and linked these to connectivity states showing less occupancy in healthy controls.
Moreover, this novel approach revealed a significant correlation between a cognitive score (attention/vigilance) and the function/structure alignment score that was not detected when the data modalities were considered separately.
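The "alignment" idea described in the abstract above can be illustrated, in heavily simplified form, as a dot-product attention score between functional and structural feature vectors. All dimensions, the random features, and the softmax normalization below are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

# Hypothetical sketch: score how strongly each dynamic functional
# connectivity (dFNC) state links to each structural gray-matter
# component, using dot-product attention as in machine translation.
rng = np.random.default_rng(0)

n_states, n_components, d = 5, 3, 16        # illustrative sizes
F = rng.standard_normal((n_states, d))      # functional features (one row per dFNC state)
S = rng.standard_normal((n_components, d))  # structural features (one row per component)

# Raw compatibility scores and softmax-normalized alignment weights:
scores = F @ S.T                            # shape (n_states, n_components)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)

# Each row is a distribution over structural components for one dFNC
# state; the largest weight marks the most strongly "linked" pattern.
print(weights.shape)                           # (5, 3)
print(bool(np.allclose(weights.sum(axis=1), 1.0)))  # True
```

Group comparisons would then be run on these alignment scores rather than on either modality alone.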


Subject(s)
Connectome/methods , Deep Learning , Gray Matter/physiology , Nerve Net/physiopathology , Psychotic Disorders/physiopathology , Schizophrenia/physiopathology , Adult , Female , Gray Matter/diagnostic imaging , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Nerve Net/diagnostic imaging , Psychotic Disorders/diagnostic imaging , Schizophrenia/diagnostic imaging
2.
Neural Netw ; 32: 257-66, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22386786

ABSTRACT

Functional link networks are single-layered neural networks that impose nonlinearity in the input layer using nonlinear functions of the original input variables. In this paper, we present a fully complex-valued functional link network (CFLN) with multivariate polynomials as the nonlinear functions. Unlike multilayer neural networks, the CFLN is free from the local-minima problem, and it offers very fast learning of parameters because of its linear structure. The polynomial-based CFLN does not require an activation function, which is a major concern in complex-valued neural networks. However, it is important to select a small subset of polynomial terms (monomials) for faster and better performance, since the number of all possible monomials may be quite large. Here, we use the orthogonal least squares (OLS) method in a constructive fashion (starting from lower degrees and moving to higher ones) to select a parsimonious subset of monomials. It is argued here that computing the CFLN purely in the complex domain is more advantageous than computing it in the double-dimensional real domain, in terms of the number of connection parameters, design speed, and possibly generalization performance. Simulation results on a function approximation task, wind prediction with real-world data, and a nonlinear channel equalization problem show that the OLS-based CFLN yields a very simple structure with favorable performance.
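Because the CFLN is linear in its parameters, training reduces to a single least-squares solve over a polynomial expansion of the input. The sketch below uses a fixed degree-3 monomial set in place of the paper's OLS term selection; the target function and sample sizes are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a polynomial functional link network in the complex
# domain: expand the input with monomial terms, then fit the linear
# output weights in one least-squares solve (no iterative training).
rng = np.random.default_rng(1)

# Complex-valued inputs and a known polynomial target to approximate:
z = rng.standard_normal(200) + 1j * rng.standard_normal(200)
y = 0.5 * z + 0.2 * z**2 + 0.05 * z**3

# Functional expansion: bias plus monomials up to degree 3.
Phi = np.column_stack([np.ones_like(z), z, z**2, z**3])

# Linear structure -> closed-form complex least-squares fit.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

pred = Phi @ w
err = np.max(np.abs(pred - y))
print(bool(err < 1e-10))   # True: exact recovery of a polynomial target
```

The OLS procedure in the paper would instead add monomials one at a time, from low degree to high, keeping only those that reduce the residual significantly.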


Subject(s)
Least-Squares Analysis , Neural Networks, Computer , Algorithms , Artificial Intelligence , Computer Simulation , Databases, Factual , Forecasting , Meteorology/methods , Nonlinear Dynamics , Wind
3.
IEEE Trans Syst Man Cybern B Cybern ; 39(3): 705-22, 2009 Jun.
Article in English | MEDLINE | ID: mdl-19203888

ABSTRACT

This paper presents a new algorithm, called the adaptive merging and growing algorithm (AMGA), for designing artificial neural networks (ANNs). The algorithm merges and adds hidden neurons during the training process of an ANN. The merge operation introduced in AMGA is a mixed-mode operation, equivalent to pruning two neurons and adding one neuron. Unlike most previous studies, AMGA puts emphasis on autonomous functioning in the ANN design process. This is the main reason why AMGA uses an adaptive rather than a predefined fixed strategy in designing ANNs. The adaptive strategy merges or adds hidden neurons based on the learning ability of the hidden neurons or the training progress of the ANN. To reduce the amount of retraining after modifying ANN architectures, AMGA prunes hidden neurons by merging correlated hidden neurons and adds hidden neurons by splitting existing ones. The proposed AMGA has been tested on a number of benchmark problems in machine learning and ANNs, including the breast cancer, Australian credit card assessment, diabetes, gene, glass, heart, iris, and thyroid problems. The experimental results show that AMGA can design compact ANN architectures with good generalization ability compared with other algorithms.
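The merge operation, pruning two neurons and adding one, can be sketched directly on the weight matrices of a single hidden layer. The specific merge rule below (average the incoming weights, sum the outgoing weights) is an assumed heuristic for illustration, not necessarily AMGA's exact rule.

```python
import numpy as np

# Illustrative sketch of merging two correlated hidden neurons into one.
rng = np.random.default_rng(2)

W_in = rng.standard_normal((4, 3))   # 3 inputs  -> 4 hidden neurons
W_out = rng.standard_normal((2, 4))  # 4 hidden  -> 2 outputs

def merge_neurons(W_in, W_out, i, j):
    """Merge hidden neurons i and j into one (neuron i keeps the merged role)."""
    W_in = W_in.copy(); W_out = W_out.copy()
    W_in[i] = 0.5 * (W_in[i] + W_in[j])      # average incoming weights
    W_out[:, i] = W_out[:, i] + W_out[:, j]  # sum outgoing contributions
    keep = [k for k in range(W_in.shape[0]) if k != j]
    return W_in[keep], W_out[:, keep]

W_in2, W_out2 = merge_neurons(W_in, W_out, 0, 3)
print(W_in2.shape, W_out2.shape)  # (3, 3) (2, 3): one fewer hidden neuron
```

In AMGA the pair to merge would be chosen by activation correlation, and the grow step would split an existing neuron rather than add a randomly initialized one, precisely to keep retraining cheap.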


Subject(s)
Algorithms , Neural Networks, Computer , Disease , Genes , Statistics as Topic
4.
IEEE Trans Syst Man Cybern B Cybern ; 39(6): 1590-605, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19502131

ABSTRACT

The generalization ability of artificial neural networks (ANNs) is greatly dependent on their architectures. Constructive algorithms provide an attractive, automatic way of determining a near-optimal ANN architecture for a given problem. Several such algorithms have been proposed in the literature and have shown their effectiveness. This paper presents a new constructive algorithm (NCA) for automatically determining ANN architectures. Unlike most previous studies on determining ANN architectures, NCA puts emphasis on both architectural adaptation and functional adaptation in its architecture determination process. It uses a constructive approach to determine the number of hidden layers in an ANN and the number of neurons in each hidden layer. To achieve functional adaptation, NCA trains the hidden neurons in the ANN using different training sets, created by employing a concept similar to that used in the boosting algorithm. The purpose of using different training sets is to encourage the hidden neurons to learn different parts or aspects of the training data so that the ANN can learn the whole training data better. In this paper, the convergence and computational issues of NCA are studied analytically. The computational complexity of NCA is found to be O(W × P_t × τ), where W is the number of weights in the ANN, P_t is the number of training examples, and τ is the number of training epochs. This complexity has the same order as that required by the backpropagation learning algorithm for training a fixed ANN architecture. A set of eight classification and two approximation benchmark problems was used to evaluate the performance of NCA. The experimental results show that NCA can produce ANN architectures with fewer hidden neurons and better generalization ability compared with existing constructive and nonconstructive algorithms.
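The stated complexity can be read as a per-epoch cost proportional to weights × training examples, summed over epochs, i.e. the same order as backpropagation on a fixed architecture. A back-of-envelope check with purely illustrative numbers:

```python
# Reading of the complexity O(W * P_t * tau): each epoch touches every
# weight once per training example, and there are tau epochs in total.
W   = 1_000   # connection weights in the ANN (illustrative)
P_t = 500     # training examples (illustrative)
tau = 100     # training epochs (illustrative)

ops = W * P_t * tau   # total weight-example updates
print(ops)            # prints 50000000
```

Doubling any one of the three factors doubles the total cost, which is why the construction order (adding neurons gradually, so W is small early on) matters in practice.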


Subject(s)
Algorithms , Neural Networks, Computer