Results 1 - 6 of 6
1.
Nature ; 625(7995): 468-475, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38096900

ABSTRACT

Large language models (LLMs) have demonstrated tremendous capabilities in solving complex tasks, from quantitative reasoning to understanding natural language. However, LLMs sometimes suffer from confabulations (or hallucinations), which can result in them making plausible but incorrect statements [1,2]. This hinders the use of current large models in scientific discovery. Here we introduce FunSearch (short for searching in the function space), an evolutionary procedure based on pairing a pretrained LLM with a systematic evaluator. We demonstrate the effectiveness of this approach to surpass the best-known results in important problems, pushing the boundary of existing LLM-based approaches [3]. Applying FunSearch to a central problem in extremal combinatorics, the cap set problem, we discover new constructions of large cap sets going beyond the best-known ones, both in finite-dimensional and asymptotic cases. This shows that it is possible to make discoveries for established open problems using LLMs. We showcase the generality of FunSearch by applying it to an algorithmic problem, online bin packing, finding new heuristics that improve on widely used baselines. In contrast to most computer search approaches, FunSearch searches for programs that describe how to solve a problem, rather than what the solution is. Beyond being an effective and scalable strategy, discovered programs tend to be more interpretable than raw solutions, enabling feedback loops between domain experts and FunSearch, and the deployment of such programs in real-world applications.
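The loop the abstract describes, pairing a program proposer with a systematic evaluator, can be sketched for the online bin-packing case. This is a minimal illustration, not the authors' implementation: the LLM proposer is replaced by a fixed pool of hand-written priority functions (`first_fit` and `best_fit` are stand-ins for LLM-generated programs), and the evaluator scores a heuristic by how few bins it opens.

```python
def evaluate(heuristic, items, capacity=10):
    """Systematic evaluator: pack items greedily using the heuristic's
    bin priorities; fewer bins used means a higher score."""
    bins = []  # current fill level of each open bin
    for item in items:
        feasible = [(heuristic(item, capacity - fill), i)
                    for i, fill in enumerate(bins) if fill + item <= capacity]
        if feasible:
            _, best = max(feasible, key=lambda p: p[0])  # first bin on ties
            bins[best] += item
        else:
            bins.append(item)
    return -len(bins)

# Hand-written stand-ins for LLM-proposed priority functions.
def first_fit(item, space):
    return 0.0       # no preference: ties resolve to the first feasible bin

def best_fit(item, space):
    return -space    # prefer the bin with the tightest remaining space

def funsearch_step(candidates, items):
    """One selection step of the evolutionary loop: score every candidate
    program and keep the best; FunSearch would next ask the LLM to mutate it."""
    return max(candidates, key=lambda h: evaluate(h, items))

items = [4, 8, 1, 4, 2, 1, 7, 3]
winner = funsearch_step([first_fit, best_fit], items)
```

Note that what evolves is the *program* (the priority function), matching the abstract's point that FunSearch searches for how to solve the problem rather than for a single packing.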

2.
IEEE Trans Pattern Anal Mach Intell ; 37(12): 2545-57, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26539857

ABSTRACT

Many tasks in computer vision, such as action classification and object detection, require us to rank a set of samples according to their relevance to a particular visual category. The performance of such tasks is often measured in terms of the average precision (AP). Yet it is common practice to employ the support vector machine (SVM) classifier, which optimizes a surrogate 0-1 loss. The popularity of SVM can be attributed to its empirical performance. Specifically, in fully supervised settings, SVM tends to provide similar accuracy to AP-SVM, which directly optimizes an AP-based loss. However, we hypothesize that in the significantly more challenging and practically useful setting of weakly supervised learning, it becomes crucial to optimize the right accuracy measure. In order to test this hypothesis, we propose a novel latent AP-SVM that minimizes a carefully designed upper bound on the AP-based loss function over weakly supervised samples. Using publicly available datasets, we demonstrate the advantage of our approach over standard loss-based learning frameworks on three challenging problems: action classification, character recognition and object detection.
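The metric at the heart of this abstract, average precision over a ranking, is straightforward to compute; a minimal sketch (this is only the evaluation metric, not the paper's latent AP-SVM learning code):

```python
def average_precision(scores, labels):
    """AP of a ranking: rank by decreasing score, then average the
    precision measured at the rank of each positive example."""
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    hits, precisions = 0, []
    for rank, (_, label) in enumerate(ranked, start=1):
        if label == 1:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0
```

A 0-1 loss treats every misclassified sample equally, whereas AP weights mistakes near the top of the ranking far more heavily; that gap between the surrogate and the true measure is what the paper argues becomes critical under weak supervision.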

3.
IEEE Trans Pattern Anal Mach Intell ; 37(7): 1373-86, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26352446

ABSTRACT

We consider the problem of parameter estimation and energy minimization for a region-based semantic segmentation model. The model divides the pixels of an image into non-overlapping connected regions, each of which is assigned to a semantic class. In the context of energy minimization, the main problem we face is the large number of putative pixel-to-region assignments. We address this problem by designing an accurate linear programming based approach for selecting the best set of regions from a large dictionary. The dictionary is constructed by merging and intersecting segments obtained from multiple bottom-up over-segmentations. The linear program is solved efficiently using dual decomposition. In the context of parameter estimation, the main problem we face is the lack of fully supervised data. We address this issue by developing a principled framework for parameter estimation using diverse data. More precisely, we propose a latent structural support vector machine formulation, where the latent variables model any missing information in the human annotation. Of particular interest to us are three types of annotations: (i) images segmented using generic foreground or background classes; (ii) images with bounding boxes specified for objects; and (iii) images labeled to indicate the presence of a class. Using large, publicly available datasets we show that our methods are able to significantly improve the accuracy of the region-based model.
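The latent structural SVM training the abstract relies on alternates between imputing the missing annotation information and updating the parameters. A toy sketch of that alternation, using a loss-augmented-perceptron update in place of the full structural SVM solve; the `feature` map, label set and latent set here are hypothetical, not the paper's segmentation model:

```python
def dot(w, f):
    return sum(wi * fi for wi, fi in zip(w, f))

def feature(x, y, h):
    # Hypothetical joint feature map over input x, label y, latent h;
    # h stands in for whatever the annotation leaves unspecified.
    return [x * y, y * h, h]

def train_latent_ssvm(data, label_set, latent_set=(0, 1), epochs=20, lr=0.1):
    """CCCP-style alternation: (1) impute the best latent completion of the
    ground truth, (2) find the loss-augmented violator, (3) update w."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            h_star = max(latent_set, key=lambda h: dot(w, feature(x, y, h)))
            y_hat, h_hat = max(
                ((yy, h) for yy in label_set for h in latent_set),
                key=lambda p: dot(w, feature(x, *p)) + (p[0] != y))
            if (y_hat, h_hat) != (y, h_star):
                good, bad = feature(x, y, h_star), feature(x, y_hat, h_hat)
                w = [wi + lr * (g - b) for wi, g, b in zip(w, good, bad)]
    return w

w = train_latent_ssvm([(1.0, 1), (2.0, 1), (-1.0, -1), (-2.0, -1)], (-1, 1))

def predict(x):
    return max(((yy, h) for yy in (-1, 1) for h in (0, 1)),
               key=lambda p: dot(w, feature(x, *p)))[0]
```

In the paper the latent variables are far richer (the pixel-to-region assignments a bounding box or image-level tag leaves undetermined), but the two-step structure is the same.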

4.
Inf Process Med Imaging ; 23: 414-25, 2013.
Article in English | MEDLINE | ID: mdl-24683987

ABSTRACT

Magneto- and electroencephalography (M/EEG) measure the electromagnetic signals produced by brain activity. In order to address the issue of limited signal-to-noise ratio (SNR) with raw data, acquisitions consist of multiple repetitions of the same experiment. An important challenge arising from such data is the variability of brain activations over the repetitions. It hinders statistical analysis such as prediction performance in a supervised learning setup. One such confounding variability is the time offset of the peak of the activation, which varies across repetitions. We propose to address this misalignment issue by explicitly modeling time shifts of different brain responses in a classification setup. To this end, we use the latent support vector machine (LSVM) formulation, where the latent shifts are inferred while learning the classifier parameters. The inferred shifts are further used to improve the SNR of the M/EEG data, and to infer the chronometry and the sequence of activations across the brain regions that are involved in the experimental task. Results are validated on a long-term memory retrieval task, showing significant improvement using the proposed latent discriminative method.
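The latent-shift idea, inferring each repetition's time offset and then averaging the realigned trials to raise SNR, can be sketched on synthetic 1-D signals. This is a toy stand-in, not the paper's LSVM classifier: shifts are scored by correlation with a known template, and the circular shift is a simplifying assumption.

```python
def shift_score(trial, template, shift):
    """Match quality of a trial against the template under a candidate
    circular shift (real epochs are not circular; this is a toy)."""
    n = len(template)
    return sum(trial[(t + shift) % n] * template[t] for t in range(n))

def align_trials(trials, template, max_shift=3):
    """Infer each repetition's latent time shift, realign, and average;
    averaging the aligned repetitions is what improves the SNR."""
    n = len(template)
    aligned = []
    for trial in trials:
        best = max(range(-max_shift, max_shift + 1),
                   key=lambda s: shift_score(trial, template, s))
        aligned.append([trial[(t + best) % n] for t in range(n)])
    return [sum(col) / len(col) for col in zip(*aligned)]

template = [0, 0, 1, 0, 0, 0]        # canonical evoked response
trials = [[0, 0, 0, 1, 0, 0],        # peak delayed by one sample
          [0, 1, 0, 0, 0, 0],        # peak advanced by one sample
          [0, 0, 1, 0, 0, 0]]        # already aligned
evoked = align_trials(trials, template)
```

In the paper the shifts are latent variables inferred jointly with the classifier weights rather than against a fixed template, and the inferred shifts additionally reveal the chronometry of activations across brain regions.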


Subject(s)
Brain Mapping/methods , Brain/physiology , Electroencephalography/methods , Information Storage and Retrieval/methods , Magnetoencephalography/methods , Memory, Long-Term/physiology , Pattern Recognition, Automated/methods , Algorithms , Artificial Intelligence , Humans , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Reproducibility of Results , Sensitivity and Specificity
5.
Med Image Comput Comput Assist Interv ; 16(Pt 3): 219-26, 2013.
Article in English | MEDLINE | ID: mdl-24505764

ABSTRACT

The Random Walks (RW) algorithm is one of the most efficient and easy-to-use probabilistic segmentation methods. By combining contrast terms with prior terms, it provides accurate segmentations of medical images in a fully automated manner. However, one of the main drawbacks of using the RW algorithm is that its parameters have to be hand-tuned. We propose a novel discriminative learning framework that estimates the parameters using a training dataset. The main challenge we face is that the training samples are not fully supervised. Specifically, they provide a hard segmentation of the images, instead of a probabilistic segmentation. We overcome this challenge by treating the optimal probabilistic segmentation that is compatible with the given hard segmentation as a latent variable. This allows us to employ the latent support vector machine formulation for parameter estimation. We show that our approach significantly outperforms the baseline methods on a challenging dataset consisting of real clinical 3D MRI volumes of skeletal muscles.
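For context, the Random Walks step itself solves a Dirichlet problem on the image graph: each unseeded pixel's foreground probability is the contrast-weighted average of its neighbours' probabilities. A minimal 1-D sketch using Gauss-Seidel relaxation (a toy stand-in for the full 3-D solver; the weights here are fixed by hand, whereas the paper's contribution is learning such parameters):

```python
def random_walker_1d(edge_weights, seeds, sweeps=2000):
    """Random Walks segmentation on a 1-D chain of pixels.
    edge_weights[i] is the contrast weight between pixels i and i+1;
    seeds maps pixel index -> fixed probability (1.0 fg seed, 0.0 bg seed).
    Each unseeded pixel is relaxed to the weighted average of its
    neighbours, the Gauss-Seidel solution of the Dirichlet problem."""
    n = len(edge_weights) + 1
    p = [seeds.get(i, 0.5) for i in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            if i in seeds:
                continue
            num = den = 0.0
            if i > 0:
                num += edge_weights[i - 1] * p[i - 1]
                den += edge_weights[i - 1]
            if i < n - 1:
                num += edge_weights[i] * p[i + 1]
                den += edge_weights[i]
            p[i] = num / den
    return p

# Weights are high where neighbouring pixels look alike; the low weight
# between pixels 2 and 3 marks an intensity boundary, so the foreground
# probability drops sharply across it.
probs = random_walker_1d([1.0, 1.0, 0.01, 1.0, 1.0], {0: 1.0, 5: 0.0})
```

Thresholding `probs` at 0.5 yields the hard segmentation; the paper treats the probabilistic map compatible with a given hard segmentation as the latent variable during training.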


Subject(s)
Algorithms , Data Interpretation, Statistical , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Muscle, Skeletal/anatomy & histology , Pattern Recognition, Automated/methods , Artificial Intelligence , Discriminant Analysis , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
6.
IEEE Trans Pattern Anal Mach Intell ; 32(3): 530-45, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20075476

ABSTRACT

We present a probabilistic method for segmenting instances of a particular object category within an image. Our approach overcomes the deficiencies of previous segmentation techniques based on traditional grid conditional random fields (CRF), namely that 1) they require the user to provide seed pixels for the foreground and the background and 2) they provide a poor prior for specific shapes due to the small neighborhood size of grid CRFs. Specifically, we automatically obtain the pose of the object in a given image instead of relying on manual interaction. Furthermore, we employ a probabilistic model which includes shape potentials for the object to incorporate top-down information that is global across the image, in addition to the grid clique potentials which provide the bottom-up information used in previous approaches. The shape potentials are provided by the pose of the object obtained using an object category model. We represent articulated object categories using a novel layered pictorial structures model. Nonarticulated object categories are modeled using a set of exemplars. These object category models have the advantage that they can handle large intraclass shape, appearance, and spatial variation. We develop an efficient method, OBJCUT, to obtain segmentations using our probabilistic framework. Novel aspects of this method include: 1) efficient algorithms for sampling the object category models of our choice and 2) the observation that a sampling-based approximation of the expected log-likelihood of the model can be increased by a single graph cut. Results are presented on several articulated (e.g., animals) and nonarticulated (e.g., fruits) object categories. We provide a favorable comparison of our method with the state of the art in object-category-specific image segmentation, specifically the methods of Leibe and Schiele and Schoenemann and Cremers.
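The energy such a model minimizes combines per-pixel unary terms (here, where the shape/pose information enters) with pairwise smoothness terms. On a tiny example the optimum can be found by exhaustive search, standing in for the single graph cut the abstract refers to; the unary costs below are made up for illustration:

```python
from itertools import product

def crf_energy(labels, unary, smoothness, edges):
    """Pairwise CRF energy: shape/pose unary costs plus a Potts penalty
    for each neighbouring pixel pair assigned different labels."""
    e = sum(unary[i][lab] for i, lab in enumerate(labels))
    e += sum(smoothness for i, j in edges if labels[i] != labels[j])
    return e

def min_energy_labeling(unary, smoothness, edges):
    """Exhaustive search over fg/bg labelings; for this submodular energy
    a single s-t graph cut finds the same optimum in polynomial time."""
    n = len(unary)
    return min(product((0, 1), repeat=n),
               key=lambda labels: crf_energy(labels, unary, smoothness, edges))

# Made-up unary costs for a 4-pixel chain: (cost of background, cost of
# foreground) per pixel; pixels 2 and 3 prefer foreground.
unary = [(0, 4), (0, 4), (4, 0), (3, 0)]
edges = [(0, 1), (1, 2), (2, 3)]
best = min_energy_labeling(unary, 1.0, edges)
```

The smoothness term pays once per cut edge, so the optimum places a single foreground/background transition rather than a fragmented labeling, which is exactly why the grid terms alone make a poor shape prior and the top-down shape potentials help.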
