Results 1 - 2 of 2
1.
Article in English | MEDLINE | ID: mdl-38949947

ABSTRACT

Training with more data has always been the most stable and effective way of improving performance in the deep learning era. The Open Images dataset, the largest object detection dataset, presents significant opportunities and challenges for general and sophisticated scenarios. However, its semi-automatic collection and labeling process, designed to manage the huge data scale, leads to label-related problems, including explicit or implicit multiple labels per object and a highly imbalanced label distribution. In this work, we quantitatively analyze the major problems in large-scale object detection and provide a detailed and comprehensive demonstration of our solutions. First, we design a concurrent softmax to handle the multi-label problems in object detection and propose a soft-balance sampling method with a hybrid training scheduler to address the label imbalance. This approach yields a notable improvement of 3.34 points, achieving the best single-model performance with a mAP of 60.90% on the public object detection test set of Open Images. Then, we introduce a well-designed ensemble mechanism that substantially enhances the performance of the single model, achieving an overall mAP of 67.17%, which is 4.29 points higher than the best result from the Open Images public test 2018. Our results are published at https://www.kaggle.com/c/open-images-2019-object-detection/leaderboard.
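The abstract names a "concurrent softmax" for objects that legitimately carry several labels. The paper's exact formulation is not reproduced here; the sketch below shows the general idea under one plausible reading: classes flagged as concurrent with the ground-truth class are excluded from the softmax denominator, so the ground-truth probability is not suppressed by a sibling label. The function name and the mask convention are illustrative assumptions.

```python
import math

def concurrent_softmax(logits, denom_mask):
    """Sketch (not the paper's exact loss): a softmax whose normalizing
    denominator only sums over classes with denom_mask[j] == 1. Setting the
    mask to 0 for classes concurrent with the ground truth means a
    multi-labeled object is not penalized for its other valid labels."""
    exps = [math.exp(l) for l in logits]
    denom = sum(e * m for e, m in zip(exps, denom_mask))
    return [e / denom for e in exps]

# Example: class 0 is the ground truth, class 1 is (hypothetically) a
# concurrent label of class 0, class 2 is an ordinary negative.
logits = [2.0, 1.9, 0.1]
plain = concurrent_softmax(logits, [1, 1, 1])   # ordinary softmax
masked = concurrent_softmax(logits, [1, 0, 1])  # concurrent class dropped
```

With the concurrent class removed from the denominator, the ground-truth probability rises, which is the qualitative effect the abstract attributes to the concurrent softmax.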

2.
IEEE Trans Pattern Anal Mach Intell ; 45(10): 11856-11868, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37192026

ABSTRACT

Pre-training on large-scale datasets has recently played an increasingly significant role in computer vision and natural language processing. However, since numerous application scenarios have distinctive demands, such as particular latency constraints and specialized data distributions, it is prohibitively expensive to leverage large-scale pre-training for per-task requirements. In this work, we focus on two fundamental perception tasks (object detection and semantic segmentation) and present a complete and flexible system named GAIA-Universe (GAIA), which automatically and efficiently produces customized solutions for heterogeneous downstream needs through data union and super-net training. GAIA is capable of providing powerful pre-trained weights, searching for models that conform to downstream demands such as hardware constraints, computation constraints, and specified data domains, and identifying relevant data for practitioners who have very few datapoints for their tasks. With GAIA, we achieve promising results on COCO, Objects365, Open Images, BDD100K, and UODB, which is a collection of datasets including KITTI, VOC, WiderFace, DOTA, Clipart, Comic, and more. Taking COCO as an example, GAIA efficiently produces models covering a wide range of latencies from 16 ms to 53 ms, yielding AP from 38.2 to 46.5 without bells and whistles. GAIA is released at https://github.com/GAIA-vision.
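The abstract describes extracting, from one super-net, models that satisfy downstream latency budgets. The outermost step of such a search can be sketched as filtering candidate subnets by predicted latency and keeping the best-scoring feasible one. The candidate names and the exact latency/AP pairings below are illustrative (only the overall 16-53 ms and 38.2-46.5 AP range comes from the abstract), and `score` stands in for whatever accuracy proxy the real system uses.

```python
def select_subnet(candidates, latency_budget_ms):
    """Return the highest-scoring candidate subnet whose predicted latency
    fits within the given budget. Raises if nothing is feasible."""
    feasible = [c for c in candidates if c["latency_ms"] <= latency_budget_ms]
    if not feasible:
        raise ValueError("no subnet satisfies the latency budget")
    return max(feasible, key=lambda c: c["score"])

# Hypothetical subnets spanning the range reported in the abstract.
candidates = [
    {"name": "tiny",  "latency_ms": 16, "score": 38.2},
    {"name": "base",  "latency_ms": 31, "score": 43.0},
    {"name": "large", "latency_ms": 53, "score": 46.5},
]
best = select_subnet(candidates, 35)
```

A real super-net search evaluates far more candidates (and often predicts latency with a learned cost model), but the constraint-then-rank structure is the same.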
