Results 1 - 20 of 25
1.
Entropy (Basel) ; 26(3)2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38539782

ABSTRACT

The partial information decomposition (PID) framework is concerned with decomposing the information that a set of (two or more) random variables (the sources) has about another variable (the target) into three types of information: unique, redundant, and synergistic. Classical information theory alone does not provide a unique way to decompose information in this manner, and additional assumptions have to be made. One often overlooked way to achieve this decomposition is using a so-called measure of union information (which quantifies the information that is present in at least one of the sources), from which a synergy measure stems. In this paper, we introduce a new measure of union information based on adopting a communication channel perspective, compare it with existing measures, and study some of its properties. We also include a comprehensive critical review of characterizations of union information and synergy measures that have been proposed in the literature.
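As a concrete illustration of why classical information theory alone cannot separate these components, consider the XOR gate, the canonical example of pure synergy: each source alone is independent of the target, yet together the sources determine it. The sketch below uses plain mutual-information computations only; it is not the union-information measure proposed in the paper.

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits from a 2D joint probability table."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# XOR: T = S1 xor S2, with S1, S2 uniform and independent.
states = [(s1, s2, s1 ^ s2) for s1 in (0, 1) for s2 in (0, 1)]
joint_s1_t = np.zeros((2, 2))
joint_s2_t = np.zeros((2, 2))
joint_pair_t = np.zeros((4, 2))   # rows index (s1, s2) pairs, columns index t
for i, (s1, s2, t) in enumerate(states):
    joint_s1_t[s1, t] += 0.25
    joint_s2_t[s2, t] += 0.25
    joint_pair_t[i, t] += 0.25

print(mutual_information(joint_s1_t))   # 0.0 bits: S1 alone is useless
print(mutual_information(joint_s2_t))   # 0.0 bits: S2 alone is useless
print(mutual_information(joint_pair_t)) # 1.0 bit: jointly they determine T
```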

2.
Entropy (Basel) ; 25(7)2023 Jun 25.
Article in English | MEDLINE | ID: mdl-37509922

ABSTRACT

The partial information decomposition (PID) framework is concerned with decomposing the information that a set of random variables has with respect to a target variable into three types of components: redundant, synergistic, and unique. Classical information theory alone does not provide a unique way to decompose information in this manner, and additional assumptions have to be made. Recently, Kolchinsky proposed a new general axiomatic approach to obtain measures of redundant information based on choosing an order relation between information sources (equivalently, order between communication channels). In this paper, we exploit this approach to introduce three new measures of redundant information (and the resulting decompositions) based on well-known preorders between channels, contributing to the enrichment of the PID landscape. We relate the new decompositions to existing ones, study several of their properties, and provide examples illustrating their novelty. As a side result, we prove that any preorder that satisfies Kolchinsky's axioms yields a decomposition that meets the axioms originally introduced by Williams and Beer when they first proposed PID.
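For background, here is a hedged sketch of the redundancy measure I_min that Williams and Beer introduced alongside their axioms; it is not one of the three new channel-order measures proposed in this paper, and the discrete-distribution layout is an assumption of the sketch.

```python
import numpy as np

def i_min(p_ts):
    """Williams-Beer redundancy I_min for sources S1..Sk and target T.

    p_ts: array of shape (T_states, S1_states, ..., Sk_states) holding
    the joint distribution p(t, s1, ..., sk).
    """
    k = p_ts.ndim - 1
    p_t = p_ts.reshape(p_ts.shape[0], -1).sum(axis=1)
    redundancy = 0.0
    for t in range(p_ts.shape[0]):
        specific = []
        for i in range(k):
            axes = tuple(j + 1 for j in range(k) if j != i)  # marginalize other sources
            p_t_si = p_ts.sum(axis=axes)          # p(t, s_i)
            p_si = p_t_si.sum(axis=0)             # p(s_i)
            info = 0.0                            # specific information I(T=t; S_i)
            for s in range(p_t_si.shape[1]):
                if p_t_si[t, s] > 0:
                    p_s_given_t = p_t_si[t, s] / p_t[t]
                    p_t_given_s = p_t_si[t, s] / p_si[s]
                    info += p_s_given_t * (np.log2(p_t_given_s) - np.log2(p_t[t]))
            specific.append(info)
        redundancy += p_t[t] * min(specific)
    return redundancy

# Two identical sources carrying one bit each: redundancy is the full bit.
p = np.zeros((2, 2, 2))
p[0, 0, 0] = p[1, 1, 1] = 0.5      # t = s1 = s2
print(i_min(p))                    # 1.0
```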

3.
Anal Chem ; 82(4): 1462-9, 2010 Feb 15.
Article in English | MEDLINE | ID: mdl-20095581

ABSTRACT

Rapid detection of the non-authenticity of suspect tablets is a key first step in the fight against pharmaceutical counterfeiting. The chemical characterization of these tablets is the logical next step, to evaluate their impact on patient health and to help authorities track their source. Hyperspectral unmixing of near-infrared (NIR) image data is an emerging and effective technology to infer the number of compounds, their spectral signatures, and the mixing fractions in a given tablet, with a resolution of a few tens of micrometers. In a linear mixing scenario, hyperspectral vectors belong to a simplex whose vertices correspond to the spectra of the compounds present in the sample. SISAL (simplex identification via split augmented Lagrangian), MVSA (minimum volume simplex analysis), and MVES (minimum-volume enclosing simplex) are recent algorithms designed to identify the vertices of the minimum-volume simplex containing the spectral vectors, and thereby the mixing fractions at each pixel (vector). This work demonstrates the usefulness of these minimum-volume techniques for unmixing NIR hyperspectral data of tablets. The reported experiments show that SISAL/MVSA and MVES largely outperform MCR-ALS (multivariate curve resolution-alternating least squares), which is considered the state of the art in spectral unmixing for analytical chemistry. These experiments are based on synthetic data (studying the effect of noise and the presence/absence of pure pixels) and on a real data set composed of NIR images of counterfeit tablets.


Subjects
Fraud, Pharmaceutical Preparations/analysis, Pharmaceutical Preparations/chemistry, Spectrophotometry, Infrared, Tablets, Time Factors
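To make the linear mixing model concrete, here is a minimal sketch with hypothetical endmember spectra: observed pixels are convex combinations of the pure-compound signatures (points in a simplex), and, once the vertices are known, abundances follow from non-negative least squares. SISAL/MVSA/MVES solve the harder problem of estimating the vertices themselves, which this sketch takes as given.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical setup: 3 compounds, 50 spectral bands, 200 pixels.
n_compounds, n_bands, n_pixels = 3, 50, 200
endmembers = rng.random((n_bands, n_compounds))          # columns = pure spectra

# Mixing fractions live on the probability simplex.
fractions = rng.dirichlet(np.ones(n_compounds), size=n_pixels).T
pixels = endmembers @ fractions + 0.01 * rng.standard_normal((n_bands, n_pixels))

# With known endmembers, abundances follow from non-negative least squares;
# renormalizing enforces the sum-to-one constraint approximately.
estimates = np.column_stack([nnls(endmembers, y)[0] for y in pixels.T])
estimates /= estimates.sum(axis=0, keepdims=True)

print(np.abs(estimates - fractions).max())   # small reconstruction error
```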
4.
J Acoust Soc Am ; 128(4): 1747-54, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20968348

ABSTRACT

Low-noise surfaces have increasingly been considered a viable and cost-effective alternative to acoustical barriers. However, road planners and administrators frequently lack information on the correlation between the type of road surface and the resulting noise emission profile. To address this problem, a method to identify and classify different types of road pavements was developed, whereby near-field road noise is analyzed using statistical learning methods. The vehicle rolling sound signal near the tires and close to the road surface was acquired by two microphones in a special arrangement implementing the Close-Proximity method. A set of features characterizing the properties of the road pavement was extracted from the corresponding sound profiles. A feature selection method was used to automatically select those most relevant for predicting the type of pavement, while reducing the computational cost. A set of road segments with different pavement types was tested, and the performance of the classifier was evaluated. Results of pavement classification performed during a road journey are presented on a map, together with geographical data. This procedure leads to a considerable improvement in the quality of road pavement noise data, thereby increasing the accuracy of road traffic noise prediction models.


Subjects
Automobiles, City Planning, Hydrocarbons, Models, Statistical, Noise, Transportation, Signal Processing, Computer-Assisted, Acoustics/instrumentation, Fourier Analysis, Porosity, Pressure, Sound Spectrography
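A hedged sketch of the classification pipeline the abstract describes, with generic choices standing in for the paper's actual features and classifier: log band energies extracted from each sound frame, fed to a random forest. The synthetic "pavement types" are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fs, n_frames = 8192, 300

def band_energies(frame, n_bands=16):
    """Log-energy in equal-width frequency bands of one audio frame."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.log([b.sum() + 1e-12 for b in bands])

# Two synthetic "pavement types" as stand-ins for real CPX recordings:
# noise shaped by different spectral tilts.
frames, labels = [], []
for label, tilt in [(0, 1.0), (1, 2.0)]:
    for _ in range(n_frames):
        white = rng.standard_normal(fs)
        shaped = np.fft.irfft(np.fft.rfft(white)
                              / (1 + np.arange(fs // 2 + 1)) ** (tilt / 2))
        frames.append(band_energies(shaped))
        labels.append(label)

X, y = np.array(frames), np.array(labels)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # near-perfect on this toy task
```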
5.
Neural Netw ; 127: 193-203, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32387926

ABSTRACT

In this paper, we introduce a neural network framework for semi-supervised clustering with pairwise (must-link or cannot-link) constraints. In contrast to existing approaches, we decompose semi-supervised clustering into two simpler classification tasks: the first stage uses a pair of Siamese neural networks to label the unlabeled pairs of points as must-link or cannot-link; the second stage uses the fully pairwise-labeled dataset produced by the first stage in a supervised, neural-network-based clustering method. The proposed approach is motivated by the observation that binary classification (such as assigning pairwise relations) is usually easier than multi-class clustering with partial supervision. Moreover, being classification-based, our method solves only well-defined classification problems, rather than less well-specified clustering tasks. Extensive experiments on various datasets demonstrate the high performance of the proposed method.


Subjects
Neural Networks, Computer, Supervised Machine Learning, Cluster Analysis, Databases, Factual/trends, Supervised Machine Learning/trends
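A much-simplified stand-in for the first stage, assuming generic toy data: instead of a Siamese network, an off-the-shelf classifier is trained on symmetric pair encodings (absolute feature differences) to predict must-link versus cannot-link.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Build labeled pairs: must-link (same cluster) vs cannot-link (different).
idx = rng.integers(0, len(X), size=(2000, 2))
pair_features = np.abs(X[idx[:, 0]] - X[idx[:, 1]])   # symmetric pair encoding
pair_labels = (y[idx[:, 0]] == y[idx[:, 1]]).astype(int)

train, test = slice(0, 1500), slice(1500, None)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(pair_features[train], pair_labels[train])
print(clf.score(pair_features[test], pair_labels[test]))  # pairwise accuracy
```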
6.
Article in English | MEDLINE | ID: mdl-31021796

ABSTRACT

This paper introduces a new approach to patch-based image restoration based on external datasets and importance sampling. The minimum mean squared error (MMSE) estimate of the image patches, the computation of which requires solving a multidimensional (typically intractable) integral, is approximated using samples from an external dataset. The new method, which can be interpreted as a generalization of the external non-local means (NLM), uses self-normalized importance sampling to efficiently approximate the MMSE estimates. The use of self-normalized importance sampling endows the proposed method with great flexibility, particularly regarding the statistical properties of the measurement noise. The effectiveness of the proposed method is shown in a series of experiments using both generic large-scale and class-specific external datasets.
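A minimal sketch of the core estimator under a Gaussian noise assumption: self-normalized importance-sampling weights, proportional to the likelihood of the noisy patch given each external sample, yield the approximate MMSE estimate as a weighted average. The uniform "external dataset" below is an idealized placeholder.

```python
import numpy as np

rng = np.random.default_rng(3)
patch_dim, n_external, sigma = 64, 5000, 0.1

# External dataset of clean patches (random here; image patches in practice).
external = rng.random((n_external, patch_dim))

clean = external[0]        # idealized: the true clean patch is in the set
noisy = clean + sigma * rng.standard_normal(patch_dim)

# Self-normalized importance sampling: weights proportional to the Gaussian
# likelihood p(noisy | x_i), normalized to sum to one.
log_w = -0.5 * ((external - noisy) ** 2).sum(axis=1) / sigma ** 2
log_w -= log_w.max()                                # numerical stability
w = np.exp(log_w)
w /= w.sum()

mmse_estimate = w @ external                        # weighted patch average
print(np.mean((mmse_estimate - clean) ** 2))        # well below sigma**2
```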

7.
Article in English | MEDLINE | ID: mdl-30222572

ABSTRACT

We propose a new approach to image fusion, inspired by the recent plug-and-play (PnP) framework. In PnP, a denoiser is treated as a black-box and plugged into an iterative algorithm, taking the place of the proximity operator of some convex regularizer, which is formally equivalent to a denoising operation. This approach offers flexibility and excellent performance, but convergence may be hard to analyze, as most state-of-the-art denoisers lack an explicit underlying objective function. Here, we propose using a scene-adapted denoiser (i.e., targeted to the specific scene being imaged) plugged into the iterations of the alternating direction method of multipliers (ADMM). This approach, which is a natural choice for image fusion problems, not only yields state-of-the-art results, but it also allows proving convergence of the resulting algorithm. The proposed method is tested on two different problems: hyperspectral fusion/sharpening and fusion of blurred-noisy image pairs.
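A generic PnP-ADMM loop for the simplest case (denoising, i.e., an identity forward operator), with a Gaussian filter standing in for the black-box denoiser; the paper instead plugs in a scene-adapted denoiser and tackles fusion problems, so this is only a structural sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)

x_true = gaussian_filter(rng.random((64, 64)), 3)         # smooth test image
y = x_true + 0.05 * rng.standard_normal(x_true.shape)     # noisy observation
rho = 1.0

def denoiser(v):
    # Black-box denoiser; the paper plugs in a scene-adapted (learned)
    # denoiser here, which is what enables the convergence proof.
    return gaussian_filter(v, 1.0)

x, z, u = y.copy(), y.copy(), np.zeros_like(y)
for _ in range(30):
    x = (y + rho * (z - u)) / (1 + rho)   # data term: closed form for identity A
    z = denoiser(x + u)                   # proximity operator replaced by denoiser
    u = u + x - z                         # dual update

print(np.mean((x - x_true) ** 2), np.mean((y - x_true) ** 2))
```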

8.
IEEE Trans Image Process ; 16(12): 2992-3004, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18092598

ABSTRACT

Iterative shrinkage/thresholding (IST) algorithms have recently been proposed to handle a class of convex unconstrained optimization problems arising in image restoration and other linear inverse problems. This class of problems results from combining a linear observation model with a nonquadratic regularizer (e.g., total variation or wavelet-based regularization). It happens that the convergence rate of these IST algorithms depends heavily on the linear observation operator, becoming very slow when this operator is ill-conditioned or ill-posed. In this paper, we introduce two-step IST (TwIST) algorithms, which exhibit a much faster convergence rate than IST for ill-conditioned problems. For a vast class of nonquadratic convex regularizers (ℓp norms, some Besov norms, and total variation), we show that TwIST converges to a minimizer of the objective function for a given range of values of its parameters. For noninvertible observation operators, we introduce a monotonic version of TwIST (MTwIST); although the convergence proof does not apply to this scenario, we give experimental evidence that MTwIST exhibits similar speed gains over IST. The effectiveness of the new methods is experimentally confirmed on problems of image deconvolution and of restoration with missing samples.


Subjects
Algorithms, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Pattern Recognition, Automated/methods, Signal Processing, Computer-Assisted, Reproducibility of Results, Sensitivity and Specificity
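A minimal sketch contrasting the IST map with the two-step TwIST recursion on an ℓ1-regularized least-squares problem; the parameter values below are illustrative, whereas the paper derives the admissible ranges.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, lam = 100, 200, 0.05

A = rng.standard_normal((m, n)) / np.sqrt(m)
A /= np.linalg.norm(A, 2)                 # normalize so the IST step is stable
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)
y = A @ x_true + 0.01 * rng.standard_normal(m)

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0)

def ist_step(x):
    # IST map: gradient step on the data term, then soft thresholding.
    return soft(x + A.T @ (y - A @ x), lam)

# IST: x_{t+1} = Gamma(x_t).  TwIST adds a two-step (second-order) recursion:
# x_{t+1} = (1 - alpha) x_{t-1} + (alpha - beta) x_t + beta * Gamma(x_t).
alpha, beta = 1.5, 1.0                    # illustrative values
x_prev, x = np.zeros(n), np.zeros(n)
for _ in range(200):
    x, x_prev = (1 - alpha) * x_prev + (alpha - beta) * x + beta * ist_step(x), x

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```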
9.
IEEE Trans Image Process ; 16(12): 2980-91, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18092597

ABSTRACT

Standard formulations of image/signal deconvolution under wavelet-based priors/regularizers lead to very high-dimensional optimization problems involving the following difficulties: the non-Gaussian (heavy-tailed) wavelet priors lead to objective functions which are nonquadratic, usually nondifferentiable, and sometimes even nonconvex; the presence of the convolution operator destroys the separability which underlies the simplicity of wavelet-based denoising. This paper presents a unified view of several recently proposed algorithms for handling this class of optimization problems, placing them in a common majorization-minimization (MM) framework. One of the classes of algorithms considered (when using quadratic bounds on nondifferentiable log-priors) shares the infamous "singularity issue" (SI) of "iteratively reweighted least squares" (IRLS) algorithms: the possibility of having to handle infinite weights, which may cause both numerical and convergence issues. In this paper, we prove several new results which strongly support the claim that the SI does not compromise the usefulness of this class of algorithms. Exploiting the unified MM perspective, we introduce a new algorithm, resulting from using ℓ1 bounds for nonconvex regularizers; the experiments confirm the superior performance of this method, when compared to the one based on quadratic majorization. Finally, an experimental comparison of the several algorithms reveals their relative merits in different standard types of scenarios.


Subjects
Algorithms, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Pattern Recognition, Automated/methods, Signal Processing, Computer-Assisted, Information Storage and Retrieval/methods, Numerical Analysis, Computer-Assisted, Reproducibility of Results, Sensitivity and Specificity
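To make the singularity issue concrete, here is a small sketch of the IRLS algorithm obtained from the quadratic bound on an ℓ1 regularizer: the majorizer's weights are inversely proportional to the current coefficient magnitudes, so they blow up as coefficients approach zero. The epsilon guard below is a common practical fix, not the paper's resolution of the issue.

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, lam, eps = 80, 40, 0.1, 1e-8

A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[:5] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(m)

# Quadratic majorization of lam*|x_i| at the current iterate x_i:
#   lam*|x_i| <= lam/(2|x_i|) * x_i^2 + const,
# giving IRLS weights w_i = lam/|x_i|. As x_i -> 0 the weight blows up
# (the "singularity issue"); clipping at eps keeps the solve well-posed.
x = np.ones(n)
for _ in range(50):
    w = lam / np.maximum(np.abs(x), eps)
    x = np.linalg.solve(A.T @ A + np.diag(w), A.T @ y)

print(np.round(x[:8], 3))   # near-sparse solution
```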
10.
Appl Spectrosc ; 71(6): 1148-1156, 2017 Jun.
Article in English | MEDLINE | ID: mdl-27852875

ABSTRACT

The monitoring of biopharmaceutical products using Fourier transform infrared (FT-IR) spectroscopy relies on calibration techniques involving the acquisition of spectra of bioprocess samples along the process. The most commonly used method for this purpose is partial least squares (PLS) regression, under the assumption that a linear model is valid. Despite being successful in the presence of small nonlinearities, linear methods may fail in the presence of strong nonlinearities. This paper studies the potential usefulness of nonlinear regression methods for predicting, from in situ near-infrared (NIR) and mid-infrared (MIR) spectra acquired in high-throughput mode, biomass and plasmid concentrations in Escherichia coli DH5-α cultures producing the plasmid model pVAX-LacZ. The linear methods PLS and ridge regression (RR) are compared with their kernel (nonlinear) versions, kPLS and kRR, as well as with the (also nonlinear) relevance vector machine (RVM) and Gaussian process regression (GPR). For the systems studied, RR provided better predictive performance than the remaining methods. The results also point to further investigation on larger data sets for those cases in which no difference in predictive accuracy between a linear method and its kernelized version was found. The use of nonlinear methods, however, should be weighed against the additional computational cost of tuning their extra parameters, especially when the less computationally demanding linear methods studied here are able to successfully monitor the variables of interest.


Subjects
Bioreactors, Nonlinear Dynamics, Plasmids, Spectroscopy, Fourier Transform Infrared, Biomass, Escherichia coli/genetics, Escherichia coli/metabolism, Plasmids/genetics, Plasmids/metabolism
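A hedged sketch of the kind of comparison reported, using scikit-learn's ridge and kernel ridge regression on stand-in data; the actual NIR/MIR spectra, preprocessing, and hyperparameter tuning are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

# Hypothetical stand-in for calibration data: 120 spectra, 400 wavenumbers,
# a mildly nonlinear map from spectrum to analyte concentration.
X = rng.random((120, 400))
signal = X[:, :10].sum(axis=1)
y = signal + 0.3 * np.sin(signal)

for name, model in [("RR", Ridge(alpha=1.0)),
                    ("kRR", KernelRidge(alpha=1.0, kernel="rbf"))]:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(name, round(score, 3))   # compares linear vs kernel on the same folds
```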
11.
Ultrasound Med Biol ; 31(2): 243-50, 2005 Feb.
Article in English | MEDLINE | ID: mdl-15708464

ABSTRACT

This paper describes a new method for the segmentation of fetal anatomic structures from echographic images. More specifically, we estimate the contours of the femur and of cranial cross-sections of fetal bodies, which can then be measured automatically. Contour estimation is formulated as a statistical estimation problem, where both the contour and the observation model parameters are unknown. The observation model (or likelihood function) relates, in probabilistic terms, the observed image with the underlying contour. This likelihood function is derived from a region-based statistical image model. The contour and the observation model parameters are estimated according to the maximum likelihood (ML) criterion, via deterministic iterative algorithms. Experiments reported in the paper, using synthetic and real images, attest to the adequacy and good performance of the proposed approach.


Subjects
Algorithms, Fetus/anatomy & histology, Ultrasonography, Prenatal/methods, Femur/diagnostic imaging, Femur/embryology, Humans, Image Interpretation, Computer-Assisted/methods, Likelihood Functions, Models, Biological, Skull/diagnostic imaging, Skull/embryology
12.
IEEE Trans Pattern Anal Mach Intell ; 27(6): 957-68, 2005 Jun.
Article in English | MEDLINE | ID: mdl-15943426

ABSTRACT

Recently developed methods for learning sparse classifiers are among the state-of-the-art in supervised learning. These methods learn classifiers that incorporate weighted sums of basis functions with sparsity-promoting priors encouraging the weight estimates to be either significantly large or exactly zero. From a learning-theoretic perspective, these methods control the capacity of the learned classifier by minimizing the number of basis functions used, resulting in better generalization. This paper presents three contributions related to learning sparse classifiers. First, we introduce a true multiclass formulation based on multinomial logistic regression. Second, by combining a bound optimization approach with a component-wise update procedure, we derive fast exact algorithms for learning sparse multiclass classifiers that scale favorably in both the number of training samples and the feature dimensionality, making them applicable even to large data sets in high-dimensional feature spaces. To the best of our knowledge, these are the first algorithms to perform exact multinomial logistic regression with a sparsity-promoting prior. Third, we show how nontrivial generalization bounds can be derived for our classifier in the binary case. Experimental results on standard benchmark data sets attest to the accuracy, sparsity, and efficiency of the proposed methods.


Subjects
Algorithms, Artificial Intelligence, Information Storage and Retrieval/methods, Models, Statistical, Pattern Recognition, Automated/methods, Cluster Analysis, Computer Simulation, Models, Biological, Regression Analysis
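For a readily available counterpart to sparse multinomial logistic regression, here is an ℓ1-penalized multiclass fit in scikit-learn; this uses a generic solver, not the paper's bound-optimization algorithms.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X = X / 16.0                       # scale features to [0, 1] for the saga solver
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The l1 penalty promotes exactly-zero weights: a sparse multiclass classifier.
clf = LogisticRegression(penalty="l1", solver="saga", C=0.05, max_iter=5000)
clf.fit(X_tr, y_tr)

sparsity = np.mean(clf.coef_ == 0)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}, zero weights: {sparsity:.0%}")
```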
13.
IEEE Trans Pattern Anal Mach Intell ; 27(5): 822-7, 2005 May.
Article in English | MEDLINE | ID: mdl-15875804

ABSTRACT

The problem of inferring 3D orientation of a camera from video sequences has been mostly addressed by first computing correspondences of image features. This intermediate step is now seen as the main bottleneck of those approaches. In this paper, we propose a new 3D orientation estimation method for urban (indoor and outdoor) environments, which avoids correspondences between frames. The scene property exploited by our method is that many edges are oriented along three orthogonal directions; this is the recently introduced Manhattan world (MW) assumption. The main contributions of this paper are: the definition of equivalence classes of equiprojective orientations, the introduction of a new small rotation model, formalizing the fact that the camera moves smoothly, and the decoupling of elevation and twist angle estimation from that of the compass angle. We build a probabilistic sequential orientation estimation method, based on an MW likelihood model, with the above-listed contributions allowing a drastic reduction of the search space for each orientation estimate. We demonstrate the performance of our method using real video sequences.


Subjects
Algorithms, Artificial Intelligence, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Pattern Recognition, Automated/methods, Space Perception, Video Recording/methods, Cluster Analysis, Image Enhancement/methods, Information Storage and Retrieval/methods, Photography/methods, Subtraction Technique
14.
IEEE Trans Pattern Anal Mach Intell ; 26(9): 1105-11, 2004 Sep.
Article in English | MEDLINE | ID: mdl-15742887

ABSTRACT

This paper adopts a Bayesian approach to simultaneously learn both an optimal nonlinear classifier and a subset of predictor variables (or features) that are most relevant to the classification task. The approach uses heavy-tailed priors to promote sparsity in the utilization of both basis functions and features; these priors act as regularizers for the likelihood function that rewards good classification on the training data. We derive an expectation-maximization (EM) algorithm to efficiently compute a maximum a posteriori (MAP) point estimate of the various parameters. The algorithm is an extension of recent state-of-the-art sparse Bayesian classifiers, which in turn can be seen as Bayesian counterparts of support vector machines. Experimental comparisons using kernel classifiers demonstrate both parsimonious feature selection and excellent classification accuracy on a range of synthetic and benchmark data sets.


Subjects
Algorithms, Artificial Intelligence, Bayes Theorem, Diagnosis, Computer-Assisted/methods, Gene Expression Profiling/methods, Models, Biological, Pattern Recognition, Automated/methods, Biomarkers, Tumor/genetics, Cluster Analysis, Colonic Neoplasms/diagnosis, Colonic Neoplasms/genetics, Computer Simulation, Humans, Information Storage and Retrieval/methods, Leukemia/diagnosis, Leukemia/genetics, Models, Statistical, Reproducibility of Results, Sensitivity and Specificity
15.
IEEE Trans Pattern Anal Mach Intell ; 26(9): 1154-66, 2004 Sep.
Article in English | MEDLINE | ID: mdl-15742891

ABSTRACT

Clustering is a common unsupervised learning technique used to discover group structure in a set of data. While there exist many algorithms for clustering, the important issue of feature selection, that is, what attributes of the data should be used by the clustering algorithms, is rarely touched upon. Feature selection for clustering is difficult because, unlike in supervised learning, there are no class labels for the data and, thus, no obvious criteria to guide the search. Another important problem in clustering is the determination of the number of clusters, which clearly impacts and is influenced by the feature selection issue. In this paper, we propose the concept of feature saliency and introduce an expectation-maximization (EM) algorithm to estimate it, in the context of mixture-based clustering. Due to the introduction of a minimum message length model selection criterion, the saliency of irrelevant features is driven toward zero, which corresponds to performing feature selection. The criterion and algorithm are then extended to simultaneously estimate the feature saliencies and the number of clusters.


Subjects
Algorithms, Artificial Intelligence, Cluster Analysis, Image Interpretation, Computer-Assisted/methods, Information Storage and Retrieval/methods, Models, Biological, Pattern Recognition, Automated/methods, Image Enhancement/methods, Models, Statistical, Reproducibility of Results, Sensitivity and Specificity
16.
IEEE Trans Image Process ; 12(8): 906-16, 2003.
Article in English | MEDLINE | ID: mdl-18237964

ABSTRACT

This paper introduces an expectation-maximization (EM) algorithm for image restoration (deconvolution) based on a penalized likelihood formulated in the wavelet domain. Regularization is achieved by promoting a low-complexity reconstruction, expressed in the wavelet coefficients, taking advantage of the well-known sparsity of wavelet representations. Previous works have investigated wavelet-based restoration but, except for certain special cases, the resulting criteria are solved approximately or require demanding optimization methods. The EM algorithm herein proposed combines the efficient image representation offered by the discrete wavelet transform (DWT) with the diagonalization of the convolution operator obtained in the Fourier domain. Thus, it is a general-purpose approach to wavelet-based image restoration with computational complexity comparable to that of standard wavelet denoising schemes or of frequency-domain deconvolution methods. The algorithm alternates between an E-step based on the fast Fourier transform (FFT) and a DWT-based M-step, resulting in an efficient iterative process requiring O(N log N) operations per iteration. The convergence behavior of the algorithm is investigated, and it is shown that under mild conditions the algorithm converges to a globally optimal restoration. Moreover, our new approach performs competitively with, and in some cases better than, the best existing methods in benchmark tests.
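A 1D sketch of the alternation described, under illustrative blur and parameter choices: a Fourier-domain Landweber-type update (E-step) followed by soft thresholding of the wavelet detail coefficients (M-step), using PyWavelets.

```python
import numpy as np
import pywt

rng = np.random.default_rng(8)
n, lam = 256, 0.02

x_true = np.cumsum(rng.standard_normal(n)) / 10       # piecewise-smooth signal
h = np.zeros(n); h[:9] = 1 / 9                        # moving-average blur
H = np.fft.fft(np.roll(h, -4))                        # centered; diagonal in Fourier
y = np.real(np.fft.ifft(H * np.fft.fft(x_true))) + 0.01 * rng.standard_normal(n)

x = y.copy()
for _ in range(100):
    # E-step (Fourier domain): gradient/Landweber update with the blur operator.
    residual = y - np.real(np.fft.ifft(H * np.fft.fft(x)))
    x = x + np.real(np.fft.ifft(np.conj(H) * np.fft.fft(residual)))
    # M-step (wavelet domain): soft-threshold the detail coefficients.
    coeffs = pywt.wavedec(x, "db4", level=4)
    coeffs[1:] = [pywt.threshold(c, lam, mode="soft") for c in coeffs[1:]]
    x = pywt.waverec(coeffs, "db4")

print(np.mean((x - x_true) ** 2), np.mean((y - x_true) ** 2))
```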

17.
IEEE Trans Image Process ; 23(1): 466-77, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24144664

ABSTRACT

This paper presents a new method to estimate the parameters of two types of blur, linear uniform motion (approximated by a line characterized by angle and length) and out-of-focus (modeled as a uniform disk characterized by its radius), for the blind restoration of natural images. The method is based on the spectrum of the blurred images and is supported by a weak assumption that is valid for most natural images: the power spectrum is approximately isotropic and has a power-law decay with spatial frequency. We introduce two modifications to the Radon transform, which allow the identification of the blur spectrum pattern of the two types of blur mentioned above. The blur parameters are identified by fitting an appropriate function that accounts separately for the natural image spectrum and the blur frequency response. The accuracy of the proposed method is validated by simulations, and its effectiveness is assessed by testing the algorithm on real blurred natural images and comparing it with state-of-the-art blind deconvolution methods.


Subjects
Algorithms, Artifacts, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Pattern Recognition, Automated/methods, Computer Simulation, Linear Models, Reproducibility of Results, Sensitivity and Specificity
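A hedged sketch of the underlying idea using the standard Radon transform from scikit-image (the paper introduces modified versions): uniform motion blur imprints parallel ripples on the log-magnitude spectrum, and the projection angle with the largest variance exposes their orientation. The synthetic setup and the simple variance criterion are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import rotate, uniform_filter1d
from skimage.transform import radon

rng = np.random.default_rng(9)

# Synthetic "natural" image with an isotropic power-law spectrum.
freq = np.hypot(*np.meshgrid(np.fft.fftfreq(256), np.fft.fftfreq(256)))
img = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((256, 256)))
                           / (1 + freq)))

# Apply uniform motion blur along a 30-degree direction.
tilted = rotate(img, -30, reshape=False)
blurred = rotate(uniform_filter1d(tilted, 15, axis=1), 30, reshape=False)

# The log-magnitude spectrum shows stripes tied to the motion direction; the
# Radon projection with the highest variance reveals the stripe orientation
# (the motion angle follows, up to a 90-degree offset).
spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(blurred))))
angles = np.arange(0, 180)
sinogram = radon(spectrum - spectrum.mean(), theta=angles, circle=False)
print(angles[np.argmax(sinogram.var(axis=0))])   # stripe-orientation estimate
```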
18.
IEEE Trans Image Process ; 22(5): 1712-25, 2013 May.
Article in English | MEDLINE | ID: mdl-23193235

ABSTRACT

The analysis of moving objects in image sequences (video) has been one of the major themes in computer vision. In this paper, we focus on video-surveillance tasks; more specifically, we consider pedestrian trajectories and propose modeling them through a small set of motion/vector fields together with a space-varying switching mechanism. Despite the diversity of motion patterns that can occur in a given scene, we show that it is often possible to find a relatively small number of typical behaviors, and model each of these behaviors by a "simple" motion field. We increase the expressiveness of the formulation by allowing the trajectories to switch from one motion field to another, in a space-dependent manner. We present an expectation-maximization algorithm to learn all the parameters of the model, and apply it to trajectory classification tasks. Experiments with both synthetic and real data support the claims about the performance of the proposed approach.

19.
IEEE Trans Image Process ; 22(7): 2751-63, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23591491

ABSTRACT

Image deblurring (ID) is an ill-posed problem typically addressed by using regularization, or prior knowledge, on the unknown image (and also on the blur operator, in the blind case). ID is often formulated as an optimization problem, where the objective function includes a data term encouraging the estimated image (and blur, in blind ID) to explain the observed data well (typically, the squared norm of a residual) plus a regularizer that penalizes solutions deemed undesirable. The performance of this approach depends critically (among other things) on the relative weight of the regularizer (the regularization parameter) and on the number of iterations of the algorithm used to address the optimization problem. In this paper, we propose new criteria for adjusting the regularization parameter and/or the number of iterations of ID algorithms. The rationale is that if the recovered image (and blur, in blind ID) is well estimated, the residual image is spectrally white; conversely, a poorly deblurred image typically exhibits structured artifacts (e.g., ringing, oversmoothness), yielding residuals that are not spectrally white. The proposed criterion is particularly well suited to a recent blind ID algorithm that uses continuation, i.e., slowly decreases the regularization parameter along the iterations; in this case, choosing this parameter and deciding when to stop are one and the same thing. Our experiments show that the proposed whiteness-based criteria yield improvements in SNR, on average, only 0.15 dB below those obtained by (clairvoyantly) stopping the algorithm at the best SNR. We also illustrate the proposed criteria on non-blind ID, reporting results that are competitive with state-of-the-art criteria (such as Monte Carlo-based GSURE and projected SURE), which, however, are not applicable to blind ID.
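A minimal sketch of a residual-whiteness statistic and its behavior as a tuning knob varies, with plain smoothing standing in for an ID algorithm; the exact whiteness measure used in the paper may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(10)

x_true = gaussian_filter(rng.random((64, 64)), 2)
y = x_true + 0.05 * rng.standard_normal(x_true.shape)

def whiteness(residual):
    """Closer to 1 when the residual autocorrelation is a delta (white)."""
    r = residual - residual.mean()
    power = np.abs(np.fft.fft2(r)) ** 2
    acf = np.real(np.fft.ifft2(power))
    acf /= acf.flat[0]                       # normalize by the zero-lag value
    return 1 - np.mean(acf.ravel()[1:] ** 2)

# "Restoration" here is just smoothing with strength s (a stand-in for the
# regularization parameter): over-regularizing leaves image structure in the
# residual, which shows up as reduced whiteness.
for s in [0.2, 1.0, 3.0]:
    estimate = gaussian_filter(y, s)
    print(s, round(whiteness(y - estimate), 4))
```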

20.
J Integr Bioinform ; 9(3): 207, 2012 Jul 24.
Article in English | MEDLINE | ID: mdl-22829578

ABSTRACT

Biclustering has been recognized as a remarkably effective method for discovering local temporal expression patterns and unraveling potential regulatory mechanisms, essential to understanding complex biomedical processes, such as disease progression and drug response. In this work, we propose a classification approach based on meta-biclusters (a set of similar biclusters) applied to prognostic prediction. We use real clinical expression time series to predict the response of patients with multiple sclerosis to treatment with interferon-β. As compared to previous approaches, the main advantages of this strategy are the interpretability of the results and the reduction of data dimensionality, due to biclustering. This would allow the identification of the genes and time points which are most promising for explaining different types of response profiles, according to clinical knowledge. We assess the impact of different unsupervised and supervised discretization techniques on the classification accuracy. The experimental results show that, in many cases, the use of these discretization methods improves the classification accuracy, as compared to the use of the original features.


Subjects
Algorithms, Computational Biology/methods, Gene Expression Regulation, Cluster Analysis, Humans, Time Factors, Workflow
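A generic sketch of measuring the effect of a discretization step on classification accuracy, with synthetic data standing in for the expression time series and a quantile binner as one unsupervised discretization technique:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer

X, y = make_classification(n_samples=200, n_features=50, n_informative=10,
                           random_state=0)

raw = RandomForestClassifier(random_state=0)
discretized = make_pipeline(
    KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="quantile"),
    RandomForestClassifier(random_state=0),
)

# Compare cross-validated accuracy with and without the discretization step.
for name, model in [("raw features", raw), ("3-bin discretized", discretized)]:
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```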