Results 1 - 5 of 5
1.
Neuroimage; 184: 417-430, 2019 Jan 1.
Article in English | MEDLINE | ID: mdl-30240902

ABSTRACT

Multivoxel pattern analysis (MVPA) methods have been widely applied in recent years to classify human brain states in functional magnetic resonance imaging (fMRI) data analysis. Voxel selection plays an important role in MVPA studies, not only because it can improve decoding accuracy but also because it is useful for understanding brain function. Many voxel selection methods have been proposed in the fMRI literature, but most either overlook the structural information of fMRI data or require additional cross-validation procedures to determine the hyperparameters of the models. In the present work, we propose a voxel selection method for binary brain decoding called group sparse Bayesian logistic regression (GSBLR). This method exploits the group sparse property of fMRI data by using grouped automatic relevance determination (GARD) as a prior for the model parameters. All parameters in GSBLR can be estimated automatically, thereby avoiding additional cross-validation. Experimental results on two publicly available fMRI datasets and on simulated datasets demonstrate that GSBLR achieves better classification accuracy and yields more stable solutions than several state-of-the-art methods.
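The grouped-ARD idea in this abstract can be illustrated with a toy sketch. The code below is not the paper's variational GSBLR; it is a simplified MAP stand-in in which each predefined voxel group shares one Gaussian prior precision, updated MacKay-style between gradient passes. The data, group structure, and all names are hypothetical.

```python
import numpy as np

def grouped_ard_logreg(X, y, groups, n_outer=20, n_inner=100, lr=0.1,
                       alpha_max=100.0):
    """Alternate MAP weight updates with per-group ARD precision updates."""
    n, d = X.shape
    w = np.zeros(d)
    alpha = np.ones(d)  # Gaussian prior precisions, tied within each group
    for _ in range(n_outer):
        # MAP estimate of w under the current prior N(0, 1/alpha)
        for _ in range(n_inner):
            p = 1.0 / (1.0 + np.exp(-(X @ w)))        # sigmoid
            w -= lr * (X.T @ (p - y) + alpha * w) / n
        # ARD update: one shared precision per group (capped for stability)
        for g in np.unique(groups):
            idx = groups == g
            alpha[idx] = min(idx.sum() / (w[idx] @ w[idx] + 1e-8), alpha_max)
    return w, alpha

# Hypothetical toy data: only the first group of "voxels" is informative.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
groups = np.array([0, 0, 1, 1, 2, 2])
w, alpha = grouped_ard_logreg(X, y, groups)
```

Groups whose weights stay near zero receive large precisions and are shrunk away, which is the voxel-selection effect the abstract describes.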


Subject(s)
Brain Mapping/methods; Brain/physiology; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Pattern Recognition, Automated/methods; Bayes Theorem; Datasets as Topic; Humans
2.
Pattern Recognit Lett; 38: 132-141, 2014 Mar 1.
Article in English | MEDLINE | ID: mdl-24532862

ABSTRACT

Coupled training of dimensionality reduction and classification has previously been proposed to improve prediction performance in single-label problems. Following this line of research, in this paper we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning, and we present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the method to find the intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel data sets, comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance in terms of Hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. Finally, we show the effectiveness of our approach in finding the intrinsic subspace dimensionality and in semi-supervised learning tasks.
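The coupled training described above can be sketched with point estimates in place of the paper's variational Bayesian treatment: a shared linear projection and per-label logistic classifiers are optimised jointly by gradient descent. The toy task and all parameter values are hypothetical.

```python
import numpy as np

def coupled_multilabel(X, Y, r=2, n_iter=1500, lr=0.1, lam=1e-3, seed=1):
    """Jointly learn projection Q (d x r) and per-label weights W (r x L)."""
    n, d = X.shape
    L = Y.shape[1]
    rng = np.random.default_rng(seed)
    Q = rng.normal(scale=0.5, size=(d, r))   # linear dimensionality reduction
    W = rng.normal(scale=0.5, size=(r, L))   # one linear classifier per label
    for _ in range(n_iter):
        Z = X @ Q                              # low-dimensional embedding
        P = 1.0 / (1.0 + np.exp(-(Z @ W)))     # per-label probabilities
        G = (P - Y) / n                        # logistic-loss gradient
        W_new = W - lr * (Z.T @ G + lam * W)   # update classifiers...
        Q -= lr * (X.T @ (G @ W.T) + lam * Q)  # ...and projection, coupled
        W = W_new
    return Q, W

# Hypothetical toy task: two labels, each driven by one input feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
Y = np.stack([X[:, 0] > 0, X[:, 1] > 0], axis=1).astype(float)
Q, W = coupled_multilabel(X, Y)
acc = (((X @ Q @ W) > 0) == (Y > 0.5)).mean()  # per-label accuracy
```

Because the projection is trained against the classification loss rather than a reconstruction objective, the learned subspace keeps exactly the directions the labels depend on, which is the benefit the abstract claims for coupled training.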

3.
J Appl Stat; 47(7): 1220-1234, 2020.
Article in English | MEDLINE | ID: mdl-35707022

ABSTRACT

Appointment no-shows have a negative impact on patient health and cause substantial losses in resources and revenue for health care systems. Intervention strategies to reduce no-show rates can be more effective if they are targeted at the subpopulations of patients at higher risk of not showing up for their appointments. We use electronic health records (EHR) from a large medical center to predict no-show patients from demographic and health care features. We apply sparse Bayesian modeling approaches based on the Lasso and on automatic relevance determination to predict no-shows and to identify the most relevant risk factors at the provider level.
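The Lasso half of the sparse modeling approach can be sketched with a plain proximal-gradient (ISTA) solver. The synthetic features below are hypothetical stand-ins for EHR variables, not the paper's data; the sketch only shows how L1 shrinkage zeroes out irrelevant predictors.

```python
import numpy as np

def lasso_ista(X, y, lam=12.0, n_iter=500):
    """Minimise 0.5*||Xw - y||^2 + lam*||w||_1 by proximal gradient (ISTA)."""
    w = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of gradient
    for _ in range(n_iter):
        w = w - step * (X.T @ (X @ w - y))                        # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft threshold
    return w

# Hypothetical synthetic "risk factors": only features 0 and 1 actually
# influence the outcome, so Lasso should select exactly those two.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.3 * rng.normal(size=100)
w = lasso_ista(X, y)
```

The surviving nonzero coefficients play the role of the "most relevant risk factors" in the abstract; an ARD prior would achieve a similar selection with per-feature precisions instead of a single L1 penalty.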

4.
Ultramicroscopy; 170: 43-59, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27529804

ABSTRACT

Advances in scanning transmission electron microscopy (STEM) have enabled the automatic acquisition of electron energy-loss spectroscopy (EELS) and energy-dispersive X-ray (EDX) spectral datasets from a specified region of interest (ROI) at an arbitrary step width, a technique called spectral imaging (SI). Rather than manually identifying the potential constituent chemical components in the ROI and determining the chemical state of each spectral component from the SI data, which are stored in a huge three-dimensional matrix, it is more effective and efficient to use a statistical approach to automatically resolve and extract the underlying chemical components. Among the many possible statistical approaches, we adopt non-negative matrix factorization (NMF), mainly because of its natural assumption of non-negativity: the spectra and the cardinalities of chemical components are always positive in actual data. This paper proposes a new NMF model with two penalty terms: (i) an automatic relevance determination (ARD) prior, which optimizes the number of components, and (ii) a soft orthogonal constraint, which cleanly resolves each spectral component. For the factorization, we further propose a fast optimization algorithm based on hierarchical alternating least squares. Numerical experiments using both phantom and real STEM-EDX/EELS SI datasets demonstrate that the ARD prior successfully identifies the correct number of physically meaningful components. The soft orthogonal constraint is also shown to be effective, particularly for STEM-EELS SI data, where neither the spatial nor the spectral entries of the matrices are sparse.

5.
Neural Netw; 53: 69-80, 2014 May.
Article in English | MEDLINE | ID: mdl-24561452

ABSTRACT

Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense.
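First-level ARD kernel learning can be sketched as follows: an LS-SVM is solved in closed form, and the per-feature relevance weights of an ARD-RBF kernel are tuned by descending a regularised training criterion (squared errors plus the smoothness term `alpha' K alpha`, plus a penalty on the relevances). Finite-difference gradients keep the sketch short; the criterion, penalty form, and toy data are simplified assumptions, not the paper's exact formulation.

```python
import numpy as np

def ard_kernel(A, B, s):
    """RBF kernel with per-feature relevance weights s (ARD)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2 * s).sum(-1)
    return np.exp(-0.5 * d2)

def lssvm_criterion(X, y, log_s, gamma=10.0, mu=0.5):
    """Regularised training criterion: errors + smoothness + ARD penalty."""
    s = np.exp(log_s)
    K = ard_kernel(X, X, s)
    alpha = np.linalg.solve(K + np.eye(len(y)) / gamma, y)  # LS-SVM dual
    e = y - K @ alpha
    return gamma * (e ** 2).sum() + alpha @ K @ alpha + mu * s.sum()

def tune_relevances(X, y, n_iter=100, lr=0.1, eps=1e-4):
    """First-level tuning of the ARD weights by (finite-diff) gradient descent."""
    log_s = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = np.zeros_like(log_s)
        for j in range(log_s.size):
            up = log_s.copy(); up[j] += eps
            dn = log_s.copy(); dn[j] -= eps
            grad[j] = (lssvm_criterion(X, y, up)
                       - lssvm_criterion(X, y, dn)) / (2 * eps)
        log_s -= lr * np.clip(grad, -1.0, 1.0)  # clipped for stability
    return np.exp(log_s)

# Hypothetical toy problem: only feature 0 determines the +-1 label,
# so its relevance should stay high while feature 1's decays.
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 2))
y = np.where(X[:, 0] > 0, 1.0, -1.0)
s = tune_relevances(X, y)
```

Only `gamma` and `mu` remain as model-selection parameters here, which is the point the abstract makes: the many kernel parameters are absorbed into first-level optimisation.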


Subject(s)
Support Vector Machine; Bayes Theorem; Least-Squares Analysis; Models, Theoretical