Results 1 - 9 of 9
1.
J Water Health ; 20(9): 1364-1379, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36170191

ABSTRACT

This study aimed to develop an empirical model to predict the spatial distribution of Aphanizomenon in the Ridiyagama reservoir in Sri Lanka using a dual-model strategy. In December 2020, a bloom with a high density of Aphanizomenon and a high chlorophyll-a concentration was detected. We generated a set of algorithms by relating in situ chlorophyll-a data to same-day surface reflectance in Sentinel-2 bands using linear regression analysis. The in situ chlorophyll-a concentration was best regressed against the reflectance ratio (1 + R665)/(1 - R705) derived from the B4 and B5 bands of Sentinel-2, with high reliability (R2 = 0.81, p < 0.001). A second regression model was developed to predict Aphanizomenon cell density using chlorophyll-a as the proxy, and the relationship was strong and significant (R2 = 0.75, p < 0.001). By coupling these two regression models, an empirical model was derived to predict Aphanizomenon cell density in the same reservoir with high reliability (R2 = 0.71, p < 0.001). Furthermore, the predicted and observed spatial distributions of Aphanizomenon agreed fairly well. Our results highlight that the present empirical model can accurately predict Aphanizomenon cell density and its spatial distribution in fresh waters, which helps in the management of toxic algal blooms and the associated health impacts.


Subject(s)
Aphanizomenon; Cyanobacteria; Chlorophyll; Fresh Water/microbiology; Reproducibility of Results; Satellite Imagery
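
The dual-model strategy above chains two linear regressions: Sentinel-2 band-ratio reflectance is regressed to chlorophyll-a, and chlorophyll-a is then used as a proxy for Aphanizomenon cell density. A minimal sketch of such a chained model in Python (the data, coefficients, and band-ratio values below are synthetic placeholders, not the paper's fitted model):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical in situ samples: a Sentinel-2 band ratio, matched chlorophyll-a
# (ug/L), and Aphanizomenon cell density (cells/mL) -- all synthetic here.
rng = np.random.default_rng(0)
band_ratio = rng.uniform(0.5, 2.0, size=(40, 1))
chl_a = 12.0 * band_ratio[:, 0] + 3.0 + rng.normal(0, 1.0, 40)   # synthetic
cells = 8.0e4 * chl_a + 5.0e3 + rng.normal(0, 2.0e4, 40)         # synthetic

# Model 1: reflectance ratio -> chlorophyll-a
m1 = LinearRegression().fit(band_ratio, chl_a)
# Model 2: chlorophyll-a -> Aphanizomenon cell density
m2 = LinearRegression().fit(chl_a.reshape(-1, 1), cells)

# Coupled empirical model: predict cell density per pixel from reflectance alone
pixel_ratios = np.array([[0.8], [1.3], [1.9]])        # e.g. one value per pixel
predicted_chl = m1.predict(pixel_ratios)
predicted_cells = m2.predict(predicted_chl.reshape(-1, 1))
print(predicted_cells)
```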
2.
IEEE Trans Neural Netw ; 17(4): 1039-49, 2006 Jul.
Article in English | MEDLINE | ID: mdl-16856665

ABSTRACT

Sequential minimal optimization (SMO) is a popular algorithm for training support vector machines (SVMs), but it still requires a large amount of computation time to solve large problems. This paper proposes a parallel implementation of SMO for training SVMs. The parallel SMO is developed using the message passing interface (MPI). Specifically, the parallel SMO first partitions the entire training data set into smaller subsets and then runs multiple CPU processors simultaneously, each dealing with one of the partitioned data sets. Experiments show a large speedup on the Adult data set and the Modified National Institute of Standards and Technology (MNIST) data set when many processors are used, and satisfactory results on the Web data set.


Subject(s)
Artificial Intelligence; Numerical Analysis, Computer-Assisted; Algorithms
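
The core idea in the abstract above is partitioning the training set so that each processor handles one subset. The sketch below illustrates only that partitioning step, using Python's multiprocessing instead of MPI and combining independently trained sub-SVMs by majority vote; this is a simplification for illustration, not the paper's scheme, in which the processors cooperatively execute a single SMO optimization:

```python
import numpy as np
from multiprocessing import Pool
from sklearn.svm import SVC
from sklearn.datasets import make_classification

def train_partition(args):
    """Train one SVM on a single partition of the training data."""
    X_part, y_part = args
    return SVC(kernel="rbf", C=1.0).fit(X_part, y_part)

if __name__ == "__main__":
    X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
    n_workers = 4
    parts = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))

    with Pool(n_workers) as pool:            # one process per data partition
        models = pool.map(train_partition, parts)

    # Combine sub-models by majority vote (a simplification, not the paper's method)
    votes = np.stack([m.predict(X) for m in models])
    combined = (votes.mean(axis=0) > 0.5).astype(int)
    print("training accuracy of the voted ensemble:", (combined == y).mean())
```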
3.
Bioinformatics ; 19(17): 2246-53, 2003 Nov 22.
Article in English | MEDLINE | ID: mdl-14630653

ABSTRACT

MOTIVATION: This paper gives a new and efficient algorithm for the sparse logistic regression problem. The proposed algorithm is based on the Gauss-Seidel method and is asymptotically convergent. It is simple and extremely easy to implement; it neither uses any sophisticated mathematical programming software nor needs any matrix operations. It can be applied to a variety of real-world problems, such as identifying marker genes and building a classifier in the context of cancer diagnosis using microarray data. RESULTS: The gene selection method suggested in this paper is demonstrated on two real-world data sets, and the results are consistent with the literature. AVAILABILITY: The implementation of this algorithm is available at http://guppy.mpe.nus.edu.sg/~mpessk/SparseLOGREG.shtml SUPPLEMENTARY INFORMATION: Supplementary material is available at the same site.


Subject(s)
Algorithms; Cluster Analysis; Gene Expression Profiling/methods; Genetic Testing/methods; Neoplasms/diagnosis; Neoplasms/genetics; Oligonucleotide Array Sequence Analysis/methods; Regression Analysis; Biomarkers, Tumor/genetics; Breast Neoplasms/classification; Breast Neoplasms/diagnosis; Breast Neoplasms/genetics; Colonic Neoplasms/classification; Colonic Neoplasms/diagnosis; Colonic Neoplasms/genetics; Humans; Neoplasms/classification; Pattern Recognition, Automated; Reproducibility of Results; Sensitivity and Specificity
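
Sparse (L1-penalized) logistic regression drives most coefficients to exactly zero, so the remaining non-zero coefficients identify candidate marker genes. A minimal sketch using scikit-learn's liblinear solver rather than the paper's Gauss-Seidel algorithm, on a synthetic stand-in for microarray data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

# Synthetic stand-in for a microarray: 100 samples x 500 genes, 10 informative
X, y = make_classification(n_samples=100, n_features=500, n_informative=10,
                           n_redundant=0, random_state=0)

# The L1 penalty yields a sparse weight vector; non-zero weights flag candidate genes
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)

selected = np.flatnonzero(clf.coef_[0])
print(f"{selected.size} genes selected out of {X.shape[1]}:", selected[:20])
```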
4.
Neural Comput ; 15(2): 487-507, 2003 Feb.
Article in English | MEDLINE | ID: mdl-12590817

ABSTRACT

This article extends the well-known SMO algorithm for support vector machines (SVMs) to least-squares SVM formulations, which include LS-SVM classification, kernel ridge regression, and a particular form of regularized kernel Fisher discriminant. The algorithm is shown to be asymptotically convergent. It is also extremely easy to implement. Computational experiments show that the algorithm is fast and scales efficiently (quadratically) with the number of examples.


Subject(s)
Algorithms; Least-Squares Analysis
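
For the kernel ridge regression case mentioned above, the dual problem that the extended SMO solves iteratively also has a closed-form solution on small data, alpha = (K + lambda*I)^(-1) y, which is convenient as a correctness check. A sketch with an arbitrary RBF kernel and regularization strength (this is the textbook closed form, not the paper's algorithm, which deliberately avoids such matrix operations):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)

lam = 0.1                                    # ridge regularization strength
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # dual coefficients

X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
y_pred = rbf_kernel(X_test, X) @ alpha       # prediction: k(x*, X) @ alpha
print(y_pred)
```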
5.
IEEE Trans Neural Netw ; 13(5): 1225-9, 2002.
Article in English | MEDLINE | ID: mdl-18244520

ABSTRACT

The paper discusses implementation issues related to tuning the hyperparameters of a support vector machine (SVM) with an L2 soft margin, for which the radius/margin bound is taken as the index to be minimized and iterative techniques are employed to compute the radius and margin. The implementation is shown to be feasible and efficient, even for large problems with more than 10,000 support vectors.

6.
Neural Comput ; 13(5): 1103-18, 2001 May.
Article in English | MEDLINE | ID: mdl-11359646

ABSTRACT

Gaussian processes are powerful regression models specified by parameterized mean and covariance functions. Standard approaches to choosing these parameters (known as hyperparameters) are maximum likelihood and maximum a posteriori. In this article, we propose and investigate predictive approaches based on Geisser's predictive sample reuse (PSR) methodology and the related cross-validation (CV) methodology of Stone. More specifically, we derive results for Geisser's surrogate predictive probability (GPP), Geisser's predictive mean square error (GPE), and the standard CV error, and make a comparative study. Within an approximation, we arrive at generalized cross-validation (GCV) and establish its relationship with the GPP and GPE approaches. These approaches are tested on a number of problems, and experimental results show that they are strongly competitive with existing approaches.


Subject(s)
Models, Statistical; Normal Distribution; Regression Analysis; Likelihood Functions; Probability; Reproducibility of Results
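
One family of criteria compared above, the cross-validation-style predictive approaches, can be evaluated in closed form for Gaussian process regression using the standard leave-one-out identities mu_i = y_i - [K^(-1) y]_i / [K^(-1)]_ii and var_i = 1 / [K^(-1)]_ii. The sketch below selects a kernel lengthscale by maximizing the summed log leave-one-out predictive density; the kernel, grid, and noise level are arbitrary choices, and this criterion is only loosely analogous to the GPP/GPE quantities defined in the paper:

```python
import numpy as np

def rbf(X, lengthscale):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def loo_log_predictive(X, y, lengthscale, noise):
    """Sum of log leave-one-out predictive densities for GP regression,
    via mu_i = y_i - [K^-1 y]_i / [K^-1]_ii and var_i = 1 / [K^-1]_ii."""
    K = rbf(X, lengthscale) + noise * np.eye(len(X))
    K_inv = np.linalg.inv(K)
    var = 1.0 / np.diag(K_inv)
    mu = y - (K_inv @ y) * var
    return np.sum(-0.5 * np.log(2 * np.pi * var) - 0.5 * (y - mu) ** 2 / var)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=60)

# Pick the lengthscale that maximizes the CV-style predictive criterion
grid = [0.1, 0.3, 1.0, 3.0]
best = max(grid, key=lambda ls: loo_log_predictive(X, y, ls, noise=0.01))
print("selected lengthscale:", best)
```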
7.
Article in English | MEDLINE | ID: mdl-18244725

ABSTRACT

In this paper, a stochastic connectionist approach is proposed for solving function optimization problems with real-valued parameters. With the assumption of increased processing capability at each node of the connectionist network, we show how a broader class of problems can be solved. Because the proposed approach is a stochastic search technique, it avoids getting stuck in local optima. The robustness of the approach is demonstrated on several multimodal functions with different numbers of variables. Optimization of a well-known partitional clustering criterion, the squared-error criterion (SEC), is formulated as a function optimization problem and solved using the proposed approach. This approach is used to cluster selected data sets, and the results obtained are compared with those of the K-means algorithm and a simulated annealing (SA) approach. The amenability of the connectionist approach to parallelization enables effective use of parallel hardware.
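
The squared-error criterion (SEC) referred to above is the total within-cluster sum of squared distances to the cluster centroids. The sketch below evaluates SEC and minimizes it with a naive stochastic perturbation search on synthetic data; it stands in for, but is not, the paper's connectionist update rule:

```python
import numpy as np

def sec(X, assign, k):
    """Squared-error criterion: sum of squared distances to cluster centroids."""
    total = 0.0
    for j in range(k):
        pts = X[assign == j]
        if len(pts):
            total += ((pts - pts.mean(axis=0)) ** 2).sum()
    return total

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ((0, 0), (3, 3), (0, 3))])
k = 3

# Naive stochastic search: perturb one assignment at a time, keep improvements
assign = rng.integers(0, k, size=len(X))
best_cost = sec(X, assign, k)
for _ in range(5000):
    trial = assign.copy()
    trial[rng.integers(len(X))] = rng.integers(k)   # random single-point move
    cost = sec(X, trial, k)
    if cost < best_cost:            # greedy acceptance; simulated annealing would
        assign, best_cost = trial, cost   # sometimes accept worse moves as well

print("final SEC:", best_cost)
```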

8.
IEEE Trans Neural Netw ; 11(1): 124-36, 2000.
Article in English | MEDLINE | ID: mdl-18249745

ABSTRACT

In this paper we give a new fast iterative algorithm for support vector machine (SVM) classifier design. The basic problem treated is one that does not allow classification violations. This problem is converted into a problem of computing the nearest point between two convex polytopes. The suitability of two classical nearest-point algorithms, due to Gilbert and to Mitchell et al., is studied. Ideas from both algorithms are combined and modified to derive our fast algorithm. For problems that require classification violations to be allowed, the violations are penalized quadratically, and an idea due to Cortes and Vapnik, and to Friess, is used to convert the problem into one with no classification violations. A comparative computational evaluation of our algorithm against powerful SVM methods, such as Platt's sequential minimal optimization, shows that our algorithm is very competitive.
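
The reformulation described above turns hard-margin SVM training into finding the point of a convex polytope (here, the set of differences between points of the two class hulls) that is nearest the origin. A toy sketch of Gilbert's classical nearest-point iteration on such a polytope (only the textbook Gilbert step, not the combined and modified algorithm the paper derives):

```python
import numpy as np

def gilbert_nearest_point(U, n_iters=200):
    """Gilbert's algorithm: approximate the point of conv(U) nearest the origin.
    U holds the polytope's vertices, one per row."""
    w = U[0].astype(float)
    for _ in range(n_iters):
        z = U[np.argmin(U @ w)]          # support vertex minimizing <z, w>
        d = z - w
        denom = d @ d
        if denom == 0:
            break
        t = np.clip(-(w @ d) / denom, 0.0, 1.0)   # min-norm point on segment [w, z]
        w = w + t * d
    return w

rng = np.random.default_rng(0)
A = rng.normal((0, 0), 0.5, size=(20, 2))    # class +1 points
B = rng.normal((3, 3), 0.5, size=(20, 2))    # class -1 points

# Nearest point to the origin in conv(A) - conv(B) = conv({a - b})
diffs = (A[:, None, :] - B[None, :, :]).reshape(-1, 2)
w = gilbert_nearest_point(diffs)
print("separating direction (up to scale):", w)
print("distance between the class hulls:", np.linalg.norm(w))
```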

9.
IEEE Trans Neural Netw ; 11(5): 1188-93, 2000.
Article in English | MEDLINE | ID: mdl-18249845

ABSTRACT

This paper points out an important source of inefficiency in Smola and Schölkopf's sequential minimal optimization (SMO) algorithm for support vector machine (SVM) regression, which is caused by the use of a single threshold value. Using clues from the KKT conditions for the dual problem, two threshold parameters are employed to derive modifications of SMO for regression. These modified algorithms perform significantly faster than the original SMO on the data sets tried.
