Results 1 - 10 of 10
1.
Article in English | MEDLINE | ID: mdl-37906493

ABSTRACT

Networks found with neural architecture search (NAS) achieve state-of-the-art performance in a variety of tasks, outperforming human-designed networks. However, most NAS methods rely heavily on human-defined assumptions that constrain the search: the architecture's outer skeleton, number of layers, parameter heuristics, and search spaces. In addition, common search spaces consist of repeatable modules (cells), instead of fully exploring the architecture's search space by designing entire architectures (macro-search). Imposing such constraints requires deep human expertise and restricts the search to predefined settings. In this article, we propose less constrained macro-neural architecture search (LCMNAS), a method that pushes NAS to less constrained search spaces by performing macro-search without relying on predefined heuristics or bounded search spaces. LCMNAS introduces three components for the NAS pipeline: 1) a method that leverages information about well-known architectures to autonomously generate complex search spaces based on weighted directed graphs (WDGs) with hidden properties; 2) an evolutionary search strategy that generates complete architectures from scratch; and 3) a mixed-performance estimation approach that combines information about architectures at the initialization stage with lower-fidelity estimates to infer their trainability and capacity to model complex functions. We present experiments on 14 different datasets showing that LCMNAS is capable of generating both cell- and macro-based architectures with minimal GPU computation and state-of-the-art results. Moreover, we conduct extensive studies on the importance of different NAS components in both cell- and macro-based settings. The code for reproducibility is publicly available at https://github.com/VascoLopes/LCMNAS.
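
As an illustration of the first component, a weighted directed graph over layer types can be sampled by a random walk to produce a macro-architecture. The sketch below is a minimal toy, not the authors' implementation; the layer vocabulary and transition weights are assumptions for illustration (the paper derives them from well-known architectures).

    # Minimal sketch: sampling a macro-architecture as a random walk over a
    # weighted directed graph whose nodes are layer types and whose edge
    # weights reflect how often one layer follows another. Illustrative only.
    import random

    wdg = {
        "input": [("conv", 0.9), ("pool", 0.1)],
        "conv":  [("conv", 0.5), ("pool", 0.3), ("fc", 0.2)],
        "pool":  [("conv", 0.7), ("fc", 0.3)],
        "fc":    [("fc", 0.4), ("output", 0.6)],
    }

    def sample_architecture(max_layers=12, seed=None):
        rng = random.Random(seed)
        layers, node = [], "input"
        while node != "output" and len(layers) < max_layers:
            successors, weights = zip(*wdg[node])
            node = rng.choices(successors, weights=weights, k=1)[0]
            if node != "output":
                layers.append(node)
        return layers

    print(sample_architecture(seed=42))  # e.g. ['conv', 'conv', 'pool', ...]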

2.
SN Appl Sci ; 3(5): 590, 2021.
Article in English | MEDLINE | ID: mdl-33942027

ABSTRACT

In this paper, we propose three methods for door state classification with the goal of improving robot navigation in indoor spaces. These methods were also designed to be usable in other areas and applications, since they are not limited to door detection as other related works are. Our methods work offline, on low-powered computers such as the Jetson Nano, in real time, and can differentiate between open, closed, and semi-open doors. We use the 3D object classification network PointNet; real-time semantic segmentation algorithms such as FastFCN, FC-HarDNet, SegNet, and BiSeNet; the object detection algorithm DetectNet; and the 2D object classification networks AlexNet and GoogLeNet. We built a 3D and RGB door dataset with images from several indoor environments using a RealSense D435 3D camera. This dataset is freely available online. All methods are analysed in terms of their accuracy and speed on a low-powered computer. We conclude that it is possible to have a door classification algorithm running in real time on a low-power device.
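
As a rough sketch of the 2D classification route mentioned above, a pretrained network can be adapted to the three door states by replacing its final layer. The AlexNet backbone, pretrained weights, and three-class head below are assumptions for illustration, not the paper's exact setup.

    # Hedged sketch: adapting a pretrained 2D classifier to three door states
    # (closed / semi-open / open). Fine-tuning data and training loop omitted.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
    model.classifier[6] = nn.Linear(4096, 3)  # replace 1000-way head with 3 classes

    # Example forward pass on a dummy batch of RGB images (N, 3, 224, 224).
    logits = model(torch.randn(2, 3, 224, 224))
    states = ["closed", "semi-open", "open"]
    print([states[i] for i in logits.argmax(dim=1)])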

3.
Neural Comput ; 22(10): 2698-728, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20569178

ABSTRACT

This letter focuses on the issue of whether risk functionals derived from information-theoretic principles, such as Shannon's or Rényi's entropies, are able to cope with the data classification problem in both the sense of attaining the risk functional minimum and of implying the minimum probability of error allowed by the family of functions implemented by the classifier, here denoted min Pe. The analysis of this so-called minimization of error entropy (MEE) principle is carried out in a single perceptron with continuous activation functions, yielding continuous error distributions. Although the analysis is restricted to single perceptrons, it reveals a large spectrum of behaviors that MEE can be expected to exhibit in both theory and practice. Regarding theoretical MEE, our study clarifies the role of the parameters controlling the perceptron activation function (of the squashing type) in often reaching the minimum probability of error. Our study also clarifies the role of the kernel density estimator of the error density in achieving the minimum probability of error in practice.


Subject(s)
Artificial Intelligence; Neural Networks, Computer; Pattern Recognition, Automated; Algorithms; Computer Simulation/standards; Entropy; Mathematical Concepts; Models, Statistical
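
The practical side of MEE hinges on a kernel density estimate of the error distribution. A common choice in this literature, used here as an assumption, is Rényi's quadratic entropy, whose Parzen estimate has a closed form (the negative log of the "information potential"); the Gaussian kernel and bandwidth below are illustrative.

    import numpy as np

    def renyi_quadratic_entropy(errors, h=0.1):
        # Parzen estimate of Renyi's quadratic entropy: H2 = -log(V), where V
        # (the information potential) is the mean Gaussian kernel evaluation
        # over all pairs of error samples; convolving two kernels of std h
        # gives an effective kernel of variance 2*h**2.
        e = np.asarray(errors, dtype=float)
        diff = e[:, None] - e[None, :]
        v = np.mean(np.exp(-diff**2 / (4 * h**2)) / np.sqrt(4 * np.pi * h**2))
        return -np.log(v)

    errors = 0.3 * np.random.randn(200)  # stand-in for e_i = t_i - y_i
    print(renyi_quadratic_entropy(errors))
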
4.
Neural Netw ; 21(9): 1302-10, 2008 Nov.
Article in English | MEDLINE | ID: mdl-18572384

ABSTRACT

The learning process of a multilayer perceptron requires the optimization of an error function E(y,t) comparing the predicted output, y, with the observed target, t. We review some common error functions, analyze their mathematical properties for data classification purposes, and introduce a new one, E_Exp, inspired by the Z-EDM algorithm that we recently proposed. An important property of E_Exp is its ability to emulate the behavior of other error functions through the sole adjustment of a real-valued parameter. In other words, E_Exp is a generalized error function embodying complementary features of the others. The experimental results show that the flexibility of the new, generalized error function allows one to obtain the best results achievable with the other functions, with a performance improvement in some cases.


Subject(s)
Algorithms; Data Interpretation, Statistical; Neural Networks, Computer; Classification; Entropy; Models, Statistical
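
Based on the abstract's description, E_Exp is an exponential-of-squared-error cost with one real parameter tau controlling its shape; the exact expression below is an assumption, not necessarily the paper's constants. For large tau, exp(d²/tau) ≈ 1 + d²/tau, so the cost behaves like mean-square error up to an additive constant, which illustrates the emulation property.

    import numpy as np

    def e_exp(y, t, tau=1.0):
        # Assumed exponential error function; tau is the real-valued
        # shape parameter that tunes its behavior.
        y, t = np.asarray(y, dtype=float), np.asarray(t, dtype=float)
        return tau * np.sum(np.exp((y - t) ** 2 / tau))

    y = np.array([0.9, 0.2, 0.7])
    t = np.array([1.0, 0.0, 1.0])
    for tau in (0.5, 1.0, 10.0):
        print(tau, e_exp(y, t, tau))
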
5.
IEEE Trans Pattern Anal Mach Intell ; 30(1): 62-75, 2008 Jan.
Article in English | MEDLINE | ID: mdl-18000325

ABSTRACT

Hierarchical clustering is a stepwise clustering method usually based on proximity measures between objects or sets of objects from a given data set. The most common proximity measures are distance measures. The derived proximity matrices can be used to build graphs, which provide the basic structure for some clustering methods. We present a new proximity matrix based on an entropic measure, together with a clustering algorithm (LEGClust) that builds layers of subgraphs from this matrix and combines them with a hierarchical agglomerative clustering technique to form the clusters. Our approach capitalizes on both a graph structure and a hierarchical construction. Moreover, by using entropy as a proximity measure we are able, with no assumption about cluster shapes, to capture the local structure of the data, forcing the clustering method to reflect this structure. We present several experiments on artificial and real data sets that provide evidence of the superior performance of this new algorithm compared with competing ones.


Subject(s)
Algorithms; Artificial Intelligence; Cluster Analysis; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Pattern Recognition, Automated/methods; Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
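
The entropic proximity matrix is the paper's contribution and is not reproduced here, but given any precomputed dissimilarity matrix, the hierarchical agglomerative stage can be sketched with standard tooling; the matrix values and the average-linkage choice below are illustrative assumptions.

    # Sketch of the agglomerative stage only, starting from a precomputed
    # symmetric dissimilarity matrix D for four objects.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    D = np.array([[0.0, 0.2, 0.9, 0.8],
                  [0.2, 0.0, 0.7, 0.9],
                  [0.9, 0.7, 0.0, 0.1],
                  [0.8, 0.9, 0.1, 0.0]])

    Z = linkage(squareform(D), method="average")   # condensed form expected
    print(fcluster(Z, t=2, criterion="maxclust"))  # e.g. [1 1 2 2]
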
6.
IEEE Trans Pattern Anal Mach Intell ; 29(4): 607-12, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17299218

ABSTRACT

This paper focuses on noncooperative iris recognition, i.e., the capture of iris images at large distances, under less controlled lighting conditions, and without active participation of the subjects. This increases the probability of capturing very heterogeneous images (regarding focus, contrast, or brightness) affected by several noise factors (iris obstructions and reflections). Current iris recognition systems are unable to deal with noisy data and substantially increase their error rates, especially the false rejections, under these conditions. We propose an iris classification method that divides the segmented and normalized iris image into six regions, performs independent feature extraction and comparison for each region, and combines the resulting dissimilarity values through a classification rule. Experiments show a substantial decrease, higher than 40 percent, in the false rejection rates for the recognition of noisy iris images.


Subject(s)
Artificial Intelligence; Biometry/methods; Cluster Analysis; Image Interpretation, Computer-Assisted/methods; Iris/anatomy & histology; Pattern Recognition, Automated/methods; Subtraction Technique; Algorithms; Humans; Reproducibility of Results; Sensitivity and Specificity
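
A minimal sketch of the region-based matching idea: split a normalized iris template into six regions, compare each independently, and fuse the per-region dissimilarities. The Hamming distance and the mean-fusion rule are assumptions; the paper's feature extraction and classification rule may differ.

    import numpy as np

    def region_dissimilarities(code_a, code_b, n_regions=6):
        # codes: binary arrays of equal length (a normalized iris template).
        regions_a = np.array_split(code_a, n_regions)
        regions_b = np.array_split(code_b, n_regions)
        # Fraction of disagreeing bits (Hamming distance) per region.
        return [np.mean(a != b) for a, b in zip(regions_a, regions_b)]

    rng = np.random.default_rng(0)
    a, b = rng.integers(0, 2, 2048), rng.integers(0, 2, 2048)
    d = region_dissimilarities(a, b)
    print(d, "fused:", float(np.mean(d)))
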
7.
J Biomol Screen ; 21(3): 252-9, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26746583

ABSTRACT

High-content analysis has revolutionized cancer drug discovery by identifying substances that alter the phenotype of a cell in ways that prevent tumor growth and metastasis. The high-resolution biofluorescence images from such assays allow precise quantitative measures enabling the distinction of small molecules of a host cell from a tumor. In this work, we are particularly interested in applying deep neural networks (DNNs), a cutting-edge machine learning method, to the classification of compounds into chemical mechanisms of action (MOAs). Compound classification has previously been performed using image-based profiling methods, sometimes combined with feature reduction methods such as principal component analysis or factor analysis. In this article, we map the input features of each cell to a particular MOA class without using any treatment-level profiles or feature reduction methods. To the best of our knowledge, this is the first application of DNNs in this domain, leveraging single-cell information. Furthermore, we use deep transfer learning (DTL) to alleviate the computationally demanding effort of searching the huge parameter space of a DNN. Results show that with this approach we obtain a 30% speedup and a 2% accuracy improvement.


Subject(s)
Antineoplastic Agents/pharmacology; Drug Discovery/methods; High-Throughput Screening Assays; Cell Line, Tumor; Computational Biology/methods; Female; Genetic Engineering; Humans; Phenotype; Reproducibility of Results; Small Molecule Libraries
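
The deep-transfer-learning step can be sketched as reusing a pretrained backbone, freezing the transferred layers, and optimizing only a new head for the MOA classes; the ResNet-18 backbone, the hypothetical 12-class head, and the learning rate below are illustrative assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad = False                       # reuse features as-is
    model.fc = nn.Linear(model.fc.in_features, 12)    # trainable MOA head only

    head_params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.Adam(head_params, lr=1e-3)
    print(sum(p.numel() for p in head_params), "trainable parameters")
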
8.
IEEE Trans Image Process ; 24(1): 163-75, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25420258

ABSTRACT

One of the major problems found when developing a 3D recognition system is the choice of keypoint detector and descriptor. To help solve this problem, we present a new method for the detection of 3D keypoints on point clouds, and we benchmark each pair of 3D keypoint detector and 3D descriptor to evaluate their performance on object and category recognition. These evaluations are done on a public database of real 3D objects. Our keypoint detector is inspired by the behavior and neural architecture of the primate visual system. The 3D keypoints are extracted based on a bottom-up 3D saliency map, that is, a map that encodes the saliency of objects in the visual environment. The saliency map is determined by computing conspicuity maps (a combination across different modalities) of the orientation, intensity, and color information in a bottom-up and purely stimulus-driven manner. These three conspicuity maps are fused into a 3D saliency map and, finally, the focus of attention (or keypoint location) is sequentially directed to the most salient points in this map. Inhibiting this location automatically allows the system to attend to the next most salient location. The main conclusions are: with a similar average number of keypoints, our 3D keypoint detector outperforms the other eight 3D keypoint detectors evaluated, achieving the best result in 32 of the evaluated metrics in the category and object recognition experiments, whereas the second-best detector obtained the best result in only eight of these metrics. The only drawback is computational time, since the biologically inspired 3D keypoint detector based on bottom-up saliency is slower than the other detectors. Given the large differences in recognition performance, size, and time requirements, the selection of the keypoint detector and descriptor has to be matched to the desired task, and we give some directions to facilitate this choice.


Subject(s)
Algorithms; Imaging, Three-Dimensional/methods; Models, Neurological; Pattern Recognition, Automated/methods; Software; Animals; Databases, Factual; Primates; ROC Curve; Visual Perception
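
The keypoint-selection loop described above (attend to the most salient location, then inhibit it so attention moves on) can be sketched on a toy 2D grid; the paper operates on 3D point clouds, and the random saliency map and inhibition radius here are assumptions.

    import numpy as np

    def select_keypoints(saliency, n_keypoints=5, radius=3):
        s = saliency.copy()
        points = []
        for _ in range(n_keypoints):
            # Winner-take-all: pick the currently most salient location.
            y, x = np.unravel_index(np.argmax(s), s.shape)
            points.append((y, x))
            # Inhibition of return: suppress a neighbourhood around it.
            y0, y1 = max(0, y - radius), y + radius + 1
            x0, x1 = max(0, x - radius), x + radius + 1
            s[y0:y1, x0:x1] = -np.inf
        return points

    print(select_keypoints(np.random.rand(32, 32)))
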
9.
IEEE Trans Pattern Anal Mach Intell ; 32(8): 1529-35, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20558882

ABSTRACT

The iris is regarded as one of the most useful traits for biometric recognition, and the dissemination of nationwide iris-based recognition systems is imminent. However, currently deployed systems rely on heavy imaging constraints to capture near-infrared images with enough quality. Also, all of the publicly available iris image databases contain data corresponding to such imaging constraints and are therefore suitable only for evaluating methods designed to operate in this type of environment. The main purpose of this paper is to announce the availability of the UBIRIS.v2 database, a multisession iris image database which singularly contains data captured in the visible wavelength, at a distance (between four and eight meters), and on the move. This database is freely available to researchers concerned with visible-wavelength iris recognition and will be useful in assessing the feasibility and specifying the constraints of this type of biometric recognition.


Subject(s)
Biometry/methods; Databases, Factual; Image Processing, Computer-Assisted/methods; Iris/anatomy & histology; Humans; Light
10.
Neural Comput ; 18(9): 2036-61, 2006 Sep.
Article in English | MEDLINE | ID: mdl-16846386

ABSTRACT

Entropy-based cost functions are attracting growing interest in unsupervised and supervised classification tasks, and better performance in terms of both error rate and speed of convergence has been reported. In this letter, we study the principle of error entropy minimization (EEM) from a theoretical point of view. We use Shannon's entropy and study univariate data splitting in two-class problems. In this setting, the error variable is a discrete random variable, leading to a tractable mathematical analysis of the error entropy. We start by showing that for uniformly distributed data there is an equivalence between the EEM split and the optimal classifier. In a more general setting, we prove the necessary conditions for this equivalence and show the existence of class configurations where the optimal classifier corresponds to maximum error entropy. The presented theoretical results provide practical guidelines that are illustrated with a set of experiments on both real and simulated data sets, in which the effectiveness of EEM is compared with the usual mean-square-error minimization.


Subject(s)
Classification; Entropy; Models, Theoretical; Classification/methods; Normal Distribution
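
A worked toy version of the setting analysed in the paper: univariate two-class data split by a threshold, where the error is a discrete random variable. The sketch below compares the threshold minimizing Shannon error entropy with the one minimizing error rate; the data distributions and equal class priors are illustrative assumptions.

    import numpy as np

    def error_probs(x1, x2, thr):
        # Per-class error probabilities, predicting class 1 when x <= thr
        # and assuming equal priors of 1/2.
        p1 = np.mean(x1 > thr) * 0.5   # class-1 samples misclassified
        p2 = np.mean(x2 <= thr) * 0.5  # class-2 samples misclassified
        return p1, p2

    def shannon_error_entropy(p1, p2):
        # Entropy of the discrete error variable with outcomes
        # (class-1 error, class-2 error, correct decision).
        probs = np.array([p1, p2, 1.0 - p1 - p2])
        probs = probs[probs > 0]
        return -np.sum(probs * np.log(probs))

    rng = np.random.default_rng(1)
    x1, x2 = rng.uniform(0, 1, 500), rng.uniform(0.6, 1.6, 500)
    thrs = np.linspace(0, 1.6, 161)
    ent = [shannon_error_entropy(*error_probs(x1, x2, t)) for t in thrs]
    err = [sum(error_probs(x1, x2, t)) for t in thrs]
    print("min-entropy split:", thrs[int(np.argmin(ent))],
          "min-error split:", thrs[int(np.argmin(err))])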