Results 1 - 9 of 9
1.
Article in English | MEDLINE | ID: mdl-37104112

ABSTRACT

Despite their simplicity, stochastic gradient descent (SGD)-like algorithms are successful in training deep neural networks (DNNs). Among various attempts to improve SGD, weight averaging (WA), which averages the weights of multiple models, has recently received much attention in the literature. Broadly, WA falls into two categories: 1) online WA, which averages the weights of multiple models trained in parallel and is designed to reduce the gradient communication overhead of parallel mini-batch SGD, and 2) offline WA, which averages the weights of one model at different checkpoints and is typically used to improve the generalization ability of DNNs. Although online and offline WA are similar in form, they are seldom associated with each other; existing methods perform either online or offline parameter averaging, but not both. In this work, we first incorporate online and offline WA into a general training framework termed hierarchical WA (HWA). By leveraging both averaging manners, HWA achieves both faster convergence and superior generalization performance without any elaborate learning-rate adjustment. We also empirically analyze the issues faced by existing WA methods and show how HWA addresses them. Finally, extensive experiments verify that HWA significantly outperforms state-of-the-art methods.
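The two averaging modes described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: weights are flattened to plain Python lists, and the function names (`average_weights`, `online_sync`, `offline_average`) are hypothetical.

```python
def average_weights(models):
    """Element-wise average of a list of weight vectors."""
    n = len(models)
    return [sum(w) / n for w in zip(*models)]

# Online WA: periodically replace each parallel worker's weights with
# the average across workers (reduces gradient communication overhead).
def online_sync(workers):
    avg = average_weights(workers)
    return [list(avg) for _ in workers]

# Offline WA: average the checkpoints of a single training run
# (typically used to improve generalization).
def offline_average(checkpoints):
    return average_weights(checkpoints)
```

A hierarchical scheme would interleave the two: sync workers online at a short period, and accumulate the synced weights into an offline running average at a longer period.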

2.
Plants (Basel) ; 11(3)2022 Feb 02.
Article in English | MEDLINE | ID: mdl-35161394

ABSTRACT

Rapeseed is a significant oil-bearing cash crop. As a hybrid crop, Brassica napus L. produces a high yield, but it also has drawbacks: a tall stalk, susceptibility to lodging, and unsuitability for mechanized production. To address these concerns, we created the dwarf rapeseed variety DW871, which has a high yield and high oil content and is suitable for mechanized production. To fully understand the dwarfing mechanism of DW871 and provide a theoretical foundation for future applications of the variety, we used transcriptome and proteome sequencing to identify genes and proteins associated with the dwarf phenotype, using the homologous high-stalk material HW871 as a control. By RNA-seq and iTRAQ, we discovered 8665 differentially expressed genes (DEGs) and 50 differentially abundant proteins (DAPs). Comprehensive analysis at the transcription and translation levels revealed 25 correlations, 23 of which share the same expression trend, involving monolignol synthesis, pectin-lignin assembly, lignification, glucose modification, cell wall composition and architecture, cell morphology, vascular bundle development, and stalk tissue composition and architecture. Based on these results, we can formulate a hypothesis about the DW871 dwarf phenotype: plant hormone signal transduction, such as by IAA and BRs, is linked to the formation of the dwarf phenotype, and metabolic pathways related to lignin synthesis, such as phenylpropanoid biosynthesis, also play a role. Our work will contribute to a better understanding of the genes and proteins involved in the rapeseed dwarf phenotype, and we propose new insights into the dwarfing mechanism of Brassica napus L.

3.
IEEE Trans Image Process ; 31: 1120-1133, 2022.
Article in English | MEDLINE | ID: mdl-34878975

ABSTRACT

Recent advanced methods for fashion landmark detection are mainly driven by training convolutional neural networks on large-scale fashion datasets with a large number of annotated landmarks. However, such large-scale annotations are difficult and expensive to obtain in real-world applications, so models that generalize well from a small amount of labelled data are desired. We investigate this problem of few-shot fashion landmark detection, where only a few labelled samples are available for an unseen task. This work proposes a novel meta-learning framework named MetaCloth, which can learn unseen tasks of dense fashion landmark detection with only a few annotated samples. Unlike previous meta-learning work that focuses on solving "N-way K-shot" tasks, where each task predicts N classes by training with K annotated samples per class (N is fixed for all seen and unseen tasks), a task in MetaCloth detects N different landmarks for a clothing category using K samples, where N varies across tasks, because different clothing categories usually have different numbers of landmarks. The number of parameters therefore varies across seen and unseen tasks in MetaCloth. MetaCloth is carefully designed to dynamically generate different numbers of parameters for different tasks and to learn a generalizable feature extraction network from a few annotated samples with a set of good initialization parameters. Extensive experiments show that MetaCloth outperforms its counterparts by a large margin.

4.
Article in English | MEDLINE | ID: mdl-33989151

ABSTRACT

Reducing the complexity of the instance segmentation pipeline is crucial for real-world applications. This work addresses this problem by introducing an anchor-box-free and single-shot instance segmentation framework, termed PolarMask++, which reformulates instance segmentation as predicting the contours of objects in polar coordinates, leading to several appealing benefits. (1) The polar representation unifies instance segmentation (masks) and object detection (bounding boxes) into a single framework, reducing design and computational complexity. (2) We carefully design two modules (soft polar centerness and polar IoU loss) to sample high-quality center examples and optimize polar contour regression, so that the performance of PolarMask++ does not depend on bounding-box prediction and is thus more efficient in training. (3) PolarMask++ is fully convolutional and can easily be embedded into most off-the-shelf detectors. To further improve accuracy, a Refined Feature Pyramid is introduced to improve the feature representation at different scales. Extensive experiments demonstrate the effectiveness of PolarMask++, which achieves competitive results on the COCO dataset and new state-of-the-art results on text detection and cell segmentation datasets. We hope the polar representation can provide a new perspective for designing algorithms for single-shot instance segmentation. Code is released at: github.com/xieenze/PolarMask.
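A contour in the polar representation is a vector of ray lengths sampled at fixed angles from the object center, so an IoU-style loss can be computed directly on ray lengths. The sketch below shows one common formulation of a polar IoU loss (the ratio of element-wise minima to maxima of the two ray vectors, in negative log form); it is an illustration consistent with the description above, not the released code.

```python
import math

def polar_iou_loss(pred, target):
    """Polar IoU loss between predicted and ground-truth contours,
    each given as ray lengths sampled at the same polar angles.
    Approximates 1 - IoU in log form: 0 when contours coincide."""
    assert len(pred) == len(target) and len(pred) > 0
    num = sum(min(p, t) for p, t in zip(pred, target))  # intersection proxy
    den = sum(max(p, t) for p, t in zip(pred, target))  # union proxy
    return math.log(den / num)
```

Because the loss couples all rays through a single ratio, it optimizes the contour as a whole rather than regressing each ray independently.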

5.
IEEE Trans Pattern Anal Mach Intell ; 43(2): 712-728, 2021 Feb.
Article in English | MEDLINE | ID: mdl-31380746

ABSTRACT

We address a learning-to-normalize problem by proposing Switchable Normalization (SN), which learns to select different normalizers for different normalization layers of a deep neural network. SN employs three distinct scopes to compute statistics (means and variances): a channel, a layer, and a minibatch. SN switches among them by learning their importance weights in an end-to-end manner. It has several good properties. First, it adapts to various network architectures and tasks (see Fig. 1). Second, it is robust to a wide range of batch sizes, maintaining high performance even when a small minibatch is used (e.g., 2 images/GPU). Third, SN has no sensitive hyper-parameters, unlike group normalization, which searches over the number of groups as a hyper-parameter. Without bells and whistles, SN outperforms its counterparts on various challenging benchmarks, such as ImageNet, COCO, CityScapes, ADE20K, MegaFace, and Kinetics. Analyses of SN are also presented to answer three questions: (a) Is it useful to allow each normalization layer to select its own normalizer? (b) What impacts the choice of normalizer? (c) Do different tasks and datasets prefer different normalizers? We hope SN will help ease the use and understanding of normalization techniques in deep learning. The code of SN has been released at https://github.com/switchablenorms.
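The three scopes map to the statistics of instance, layer, and batch normalization, combined by importance weights. The following is a simplified NumPy sketch of that idea, assuming the weights have already been softmax-normalized and omitting the learnable scale/shift parameters a real layer would have.

```python
import numpy as np

def switchable_norm(x, w_mean, w_var, eps=1e-5):
    """Simplified SN for an input of shape (N, C, H, W).
    w_mean, w_var: three importance weights over (IN, LN, BN) statistics,
    assumed to sum to 1 (softmax outputs in the real layer)."""
    # Instance-norm scope: per sample, per channel.
    mu_in = x.mean(axis=(2, 3), keepdims=True)
    var_in = x.var(axis=(2, 3), keepdims=True)
    # Layer-norm scope: per sample, across channels.
    mu_ln = x.mean(axis=(1, 2, 3), keepdims=True)
    var_ln = x.var(axis=(1, 2, 3), keepdims=True)
    # Batch-norm scope: per channel, across the minibatch.
    mu_bn = x.mean(axis=(0, 2, 3), keepdims=True)
    var_bn = x.var(axis=(0, 2, 3), keepdims=True)
    mu = w_mean[0] * mu_in + w_mean[1] * mu_ln + w_mean[2] * mu_bn
    var = w_var[0] * var_in + w_var[1] * var_ln + w_var[2] * var_bn
    return (x - mu) / np.sqrt(var + eps)
```

With `w_mean = w_var = [1, 0, 0]` the layer reduces to instance normalization; learning the weights lets each layer interpolate among the three normalizers.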

6.
IEEE Trans Cybern ; 50(3): 1120-1131, 2020 Mar.
Article in English | MEDLINE | ID: mdl-30582564

ABSTRACT

Context representations have been widely used to benefit semantic image segmentation. The emergence of depth data provides additional information for constructing more discriminative context representations. Depth data preserves the geometric relationships of objects in a scene, which are generally hard to infer from RGB images. While deep convolutional neural networks (CNNs) have been successful in semantic segmentation, we tackle the problem of training CNNs to exploit the informative context in depth data to enhance segmentation accuracy. In this paper, we present a novel switchable context network (SCN) to facilitate semantic segmentation of RGB-D images. Depth data is used to identify objects existing in multiple image regions. The network analyzes the information in these image regions to identify their different characteristics, which are then used selectively by switching among network branches. With the content extracted from the inherent image structure, we are able to generate effective context representations that are aware of both image structures and object relationships, leading to more coherent learning of the semantic segmentation network. We demonstrate that our SCN outperforms state-of-the-art methods on two public datasets.
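The switching idea can be reduced to routing each region through a branch chosen by a depth-derived label. This is a deliberately tiny, hypothetical sketch of that control flow (region features as lists, branches as callables); the actual SCN learns the branches and the switching end-to-end.

```python
def switchable_context(region_features, region_labels, branches):
    """Route each region's features through the branch selected by its
    depth-derived characteristic label (illustrative sketch only)."""
    return [branches[label](feat)
            for feat, label in zip(region_features, region_labels)]

# Hypothetical branches specialized for different region characteristics.
branches = {
    "near": lambda f: [2.0 * v for v in f],   # stand-in for a "near-object" branch
    "far":  lambda f: [v + 1.0 for v in f],   # stand-in for a "far-object" branch
}
```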

7.
IEEE Trans Image Process ; 28(10): 4870-4882, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31056496

ABSTRACT

Video person re-identification has attracted much attention in recent years. It aims to match image sequences of pedestrians across different camera views. Previous approaches usually improve this task from three aspects: 1) selecting more discriminative frames; 2) generating more informative temporal representations; and 3) developing more effective distance metrics. To address these issues, we present a novel and practical deep architecture for video person re-identification termed the self-and-collaborative attention network (SCAN), which takes video pairs as input and outputs their matching scores. SCAN has several appealing properties. First, SCAN adopts a non-parametric attention mechanism to refine the intra-sequence and inter-sequence feature representations of videos and outputs a self-and-collaborative feature representation for each video, aligning the discriminative frames between the probe and gallery sequences. Second, going beyond existing models, a generalized pairwise similarity measurement is proposed to generate the similarity feature representation of a video pair by calculating the Hadamard product of their self-representation difference and collaborative-representation difference; the matching result can then be predicted by a binary classifier. Third, a dense clip segmentation strategy is introduced to generate rich probe-gallery pairs to optimize the model. In the test phase, the final matching score of two videos is determined by averaging the scores of the top-ranked clip pairs. Extensive experiments demonstrate the effectiveness of SCAN, which outperforms the best-performing baselines in top-1 accuracy on the iLIDS-VID, PRID2011, and MARS datasets.
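The pairwise similarity feature described above can be sketched directly from its definition: an element-wise (Hadamard) product of the two representation differences. Vectors are plain Python lists and the function name is illustrative; a classifier would then be trained on the resulting feature.

```python
def pair_similarity_feature(probe_self, gallery_self,
                            probe_collab, gallery_collab):
    """Similarity feature of a video pair: Hadamard product of the
    self-representation difference and the collaborative-representation
    difference (illustrative sketch of the measurement described above)."""
    d_self = [p - g for p, g in zip(probe_self, gallery_self)]
    d_collab = [p - g for p, g in zip(probe_collab, gallery_collab)]
    return [a * b for a, b in zip(d_self, d_collab)]
```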

8.
IEEE Trans Pattern Anal Mach Intell ; 41(3): 596-610, 2019 Mar.
Article in English | MEDLINE | ID: mdl-29993474

ABSTRACT

This paper investigates a fundamental problem of scene understanding: how to parse a scene image into a structured configuration (i.e., a semantic object hierarchy with object interaction relations). We propose a deep architecture consisting of two networks: i) a convolutional neural network (CNN) extracting the image representation for pixel-wise object labeling and ii) a recursive neural network (RsNN) discovering the hierarchical object structure and the inter-object relations. Rather than relying on elaborate annotations (e.g., manually labeled semantic maps and relations), we train our deep model in a weakly-supervised manner by leveraging the descriptive sentences of the training images. Specifically, we decompose each sentence into a semantic tree consisting of nouns and verb phrases, and apply these tree structures to discover the configurations of the training images. Once these scene configurations are determined, the parameters of both the CNN and RsNN are updated accordingly by back-propagation. The entire model is trained through an Expectation-Maximization method. Extensive experiments show that our model is capable of producing meaningful scene configurations and achieving more favorable scene labeling results on two benchmarks (i.e., PASCAL VOC 2012 and SYSU-Scenes) compared with other state-of-the-art weakly-supervised deep learning methods. In particular, SYSU-Scenes, which we created to advance research on scene parsing, contains more than 5,000 scene images with semantic sentence descriptions.

9.
IEEE Trans Image Process ; 24(12): 4766-79, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26276992

ABSTRACT

Extracting informative image features and learning effective approximate hashing functions are two crucial steps in image retrieval. Conventional methods often study these two steps separately, e.g., learning hash functions from a predefined hand-crafted feature space. Moreover, the bit lengths of the output hashing codes are preset in most previous methods, neglecting the different significance levels of individual bits and restricting practical flexibility. To address these issues, we propose a supervised learning framework to generate compact and bit-scalable hashing codes directly from raw images. We pose hashing learning as a problem of regularized similarity learning. In particular, we organize the training images into a batch of triplet samples, each containing two images with the same label and one with a different label. With these triplet samples, we maximize the margin between the matched pairs and the mismatched pairs in the Hamming space. In addition, a regularization term is introduced to enforce adjacency consistency, i.e., images of similar appearance should have similar codes. A deep convolutional neural network is utilized to train the model in an end-to-end fashion, where discriminative image features and hash functions are simultaneously optimized. Furthermore, each bit of our hashing codes is unequally weighted, so that we can manipulate the code length by truncating the insignificant bits. Our framework outperforms state-of-the-art methods on public benchmarks of similar-image search and also achieves promising results in person re-identification in surveillance. It is also shown that the generated bit-scalable hashing codes preserve discriminative power well at shorter code lengths.
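The triplet margin objective and the bit-truncation idea above can be sketched as follows. This is a hedged illustration, not the paper's implementation: codes are binary lists, bit weights are given, and the hinge form of the margin objective is one standard choice.

```python
def weighted_hamming(a, b, w):
    """Weighted Hamming distance between two binary codes."""
    return sum(wi for ai, bi, wi in zip(a, b, w) if ai != bi)

def triplet_loss(anchor, pos, neg, w, margin=2.0):
    """Hinge loss pushing the matched pair closer than the mismatched
    pair by `margin` in the weighted Hamming space."""
    return max(0.0, margin + weighted_hamming(anchor, pos, w)
                          - weighted_hamming(anchor, neg, w))

def truncate(code, w, k):
    """Bit-scalable retrieval: keep only the k most heavily weighted
    (most significant) bits, preserving their original order."""
    keep = sorted(sorted(range(len(w)), key=lambda i: -w[i])[:k])
    return [code[i] for i in keep]
```

Truncating by weight rather than position is what lets one trained model serve several code lengths at retrieval time.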


Subjects
Biometric Identification/methods , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Algorithms , Animals , Databases, Factual , Humans