Results 1 - 7 of 7

1.
Sensors (Basel) ; 24(5)2024 Feb 22.
Article in English | MEDLINE | ID: mdl-38474944

ABSTRACT

In this paper, we introduce a novel panoptic segmentation method called the Mask-Pyramid Network. Existing Mask R-CNN-based methods first generate a large number of box proposals and then filter them at each feature level, which consumes substantial computational resources even though most of the proposals are suppressed and discarded during Non-Maximum Suppression. In addition, panoptic segmentation requires properly fusing the semantic segmentation results with the instance segmentation results produced by Mask R-CNN, which is itself a non-trivial problem. To address these issues, we propose a new mask pyramid mechanism that distinguishes objects and generates far fewer proposals by referring to already segmented masks, thereby reducing computational cost. The Mask-Pyramid Network generates object proposals and predicts masks from larger to smaller sizes: it records the pixel area occupied by larger object masks and then generates proposals only in the unoccupied areas. Each object mask is represented as an H × W × 1 logit map, which matches the format of the semantic segmentation logits. By applying SoftMax to the concatenated semantic and instance segmentation logits, the two segmentation results can be fused easily and naturally. We empirically demonstrate that the proposed Mask-Pyramid Network achieves accuracy comparable to existing methods on the Cityscapes and COCO datasets. Furthermore, we demonstrate the computational efficiency of the proposed method while obtaining competitive results.
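
The fusion step described above can be illustrated with a minimal sketch, assuming PyTorch tensors; the function and tensor names below are illustrative and not taken from the paper's code.

import torch

def fuse_panoptic(semantic_logits, instance_logits):
    # semantic_logits: (B, C_stuff, H, W); instance_logits: (B, N_inst, H, W),
    # with each instance mask stored as one H x W logit map along dim 1.
    combined = torch.cat([semantic_logits, instance_logits], dim=1)
    probs = torch.softmax(combined, dim=1)   # joint per-pixel distribution
    return probs.argmax(dim=1)               # indices < C_stuff are stuff, the rest are instances

sem = torch.randn(1, 19, 64, 64)             # e.g. 19 stuff classes
ins = torch.randn(1, 5, 64, 64)              # e.g. 5 instance masks
print(fuse_panoptic(sem, ins).shape)         # torch.Size([1, 64, 64])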

2.
IEEE Trans Image Process ; 32: 4443-4458, 2023.
Article in English | MEDLINE | ID: mdl-37527316

ABSTRACT

In this paper, we propose a scribble-based video colorization network with temporal aggregation called SVCNet. It colorizes monochrome videos based on different user-given color scribbles and addresses three common issues in scribble-based video colorization: colorization vividness, temporal consistency, and color bleeding. To improve colorization quality and strengthen temporal consistency, SVCNet adopts two sequential sub-networks for precise colorization and temporal smoothing, respectively. The first stage includes a pyramid feature encoder that incorporates color scribbles with a grayscale frame, and a semantic feature encoder that extracts semantics. The second stage refines the output of the first stage by aggregating information from neighboring colorized frames (as short-range connections) and the first colorized frame (as a long-range connection). To alleviate color bleeding artifacts, we learn video colorization and segmentation simultaneously. Furthermore, we perform the majority of operations at a fixed small image resolution and use a Super-resolution Module at the tail of SVCNet to recover the original size, which allows SVCNet to handle different image resolutions at inference time. Finally, we evaluate the proposed SVCNet on the DAVIS and Videvo benchmarks. The experimental results demonstrate that SVCNet produces both higher-quality and more temporally consistent videos than other well-known video colorization approaches. The code and models are available at https://github.com/zhaoyuzhi/SVCNet.
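
A rough sketch of the two-stage dataflow is given below, assuming PyTorch; the single placeholder convolutions stand in for the pyramid/semantic encoders and the temporal aggregation network, whose actual layers are not reproduced here.

import torch
import torch.nn as nn

class ColorizationStage(nn.Module):
    # Stage 1: colorize one frame from its grayscale channel plus ab-channel scribbles.
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(1 + 2, 2, kernel_size=3, padding=1)
    def forward(self, gray, scribbles):
        return self.net(torch.cat([gray, scribbles], dim=1))

class TemporalStage(nn.Module):
    # Stage 2: refine the colorization using a neighboring frame (short range)
    # and the first colorized frame (long range).
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(2 * 3, 2, kernel_size=3, padding=1)
    def forward(self, current_ab, neighbor_ab, first_ab):
        return self.net(torch.cat([current_ab, neighbor_ab, first_ab], dim=1))

gray = torch.randn(1, 1, 128, 128)
scrib = torch.zeros(1, 2, 128, 128)
stage1, stage2 = ColorizationStage(), TemporalStage()
ab = stage1(gray, scrib)
ab_refined = stage2(ab, ab, ab)   # neighbor/first frames reused here only to show shapes
print(ab_refined.shape)           # torch.Size([1, 2, 128, 128])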

3.
Article in English | MEDLINE | ID: mdl-37561623

ABSTRACT

Hyperspectral (HS) reconstruction from RGB images denotes the recovery of whole-scene HS information, and it has attracted much attention recently. State-of-the-art approaches often adopt convolutional neural networks to learn the mapping from RGB images for HS reconstruction. However, they often fail to achieve consistently high HS reconstruction performance across different scenes, and their accuracy differs when recovering HS images from clean versus real-world noisy RGB images. To improve the accuracy and robustness of HS reconstruction across different scenes and input images, we present an effective HSGAN framework with a two-stage adversarial training strategy. The generator is a four-level top-down architecture that extracts and combines features at multiple scales. To generalize well to real-world noisy images, we further propose a spatial-spectral attention block (SSAB) that learns both spatial-wise and channel-wise relations. We conduct HS reconstruction experiments from both clean and real-world noisy RGB images on five well-known HS datasets. The results demonstrate that HSGAN achieves superior performance to existing methods. The code is available at https://github.com/zhaoyuzhi/HSGAN.
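
As a hedged illustration of a block that learns both spatial-wise and channel-wise relations, the sketch below follows a common squeeze-excitation plus spatial-attention pattern; it is an assumption, not the paper's exact SSAB configuration.

import torch
import torch.nn as nn

class SpatialSpectralAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.channel_fc = nn.Sequential(          # channel-wise (spectral) attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(        # spatial attention over pooled maps
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_fc(x)
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * self.spatial_conv(pooled)

feat = torch.randn(1, 32, 64, 64)
print(SpatialSpectralAttention(32)(feat).shape)   # torch.Size([1, 32, 64, 64])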

4.
IEEE Trans Image Process ; 31: 2541-2556, 2022.
Article in English | MEDLINE | ID: mdl-35275819

ABSTRACT

In this paper, we present a novel end-to-end pose transfer framework that transforms a source person image to an arbitrary pose with controllable attributes. Due to the spatial misalignment caused by occlusions and multiple viewpoints, maintaining high-quality shape and texture appearance remains a challenging problem for pose-guided person image synthesis. Without accounting for the deformation of shape and texture, existing solutions for controllable pose transfer still cannot generate high-fidelity texture in the target image. To solve this problem, we design a new image reconstruction decoder, ShaTure, which formulates shape and texture in a braided manner: it interchanges discriminative features in both feature-level and pixel-level space so that shape and texture can be mutually refined. In addition, we develop a new bottleneck module, the Adaptive Style Selector (AdaSS), which enhances multi-scale feature extraction by self-recalibrating the feature map through channel-wise attention. Both quantitative and qualitative results show that the proposed framework outperforms state-of-the-art human pose and attribute transfer methods. Detailed ablation studies confirm the effectiveness of each contribution and demonstrate the robustness and efficacy of the proposed framework.
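
The sketch below illustrates one plausible form of such a bottleneck, combining multi-scale branches with channel-wise self-recalibration; the branch and kernel choices are assumptions rather than the published AdaSS design.

import torch
import torch.nn as nn

class AdaptiveStyleSelector(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)   # multi-scale branches
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.recalibrate = nn.Sequential(                            # channel-wise attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        fused = self.branch3(x) + self.branch5(x)
        return fused * self.recalibrate(fused)    # self-recalibrated feature map

print(AdaptiveStyleSelector(16)(torch.randn(1, 16, 32, 32)).shape)   # torch.Size([1, 16, 32, 32])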


Subjects
Image Processing, Computer-Assisted; Humans
5.
IEEE Trans Neural Netw Learn Syst ; 33(4): 1638-1649, 2022 Apr.
Article in English | MEDLINE | ID: mdl-33361012

ABSTRACT

Most deep quantization methods adopt unsupervised approaches, and quantization is usually performed in the Euclidean space on top of the deep features and their approximations. When this approach is applied to retrieval tasks, the inner product space used during retrieval differs from the Euclidean space used for quantization, so minimizing the quantization error (QE) does not necessarily lead to good performance on maximum inner product search (MIPS). To address this problem, we treat Softmax classification as vector quantization (VQ) with angular decision boundaries and propose angular deep supervised VQ (ADSVQ) for image retrieval. Our approach simultaneously learns a discriminative feature representation and an updatable codebook, both lying on a hypersphere. To reduce the QE between centroids and deep features, two regularization terms are proposed as supervision signals to encourage intra-class compactness and inter-class balance, respectively. ADSVQ explicitly reformulates the asymmetric distance computation in MIPS, transforming the image retrieval process into a two-stage classification process. Moreover, we discuss the extension to multi-label cases from the perspective of quantization with binary classification. Extensive experiments demonstrate that the proposed ADSVQ performs excellently on four well-known image datasets when compared with state-of-the-art hashing methods.
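
The view of Softmax classification as vector quantization with angular decision boundaries can be sketched as follows, assuming L2-normalized features and codewords; the scale factor and codebook size are illustrative, not values from the paper.

import torch
import torch.nn.functional as F

def angular_logits(features, codebook, scale=16.0):
    f = F.normalize(features, dim=1)   # (B, D) features on the unit hypersphere
    c = F.normalize(codebook, dim=1)   # (K, D) codewords / class centroids
    return scale * f @ c.t()           # (B, K) scaled cosine similarities

feats = torch.randn(8, 64)
codebook = torch.randn(10, 64)
logits = angular_logits(feats, codebook)
assignments = logits.argmax(dim=1)     # quantization = angular nearest centroid
probs = torch.softmax(logits, dim=1)   # the same logits drive Softmax classification
print(assignments.shape, probs.shape)  # torch.Size([8]) torch.Size([8, 10])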

6.
Sensors (Basel) ; 21(14)2021 Jul 10.
Article in English | MEDLINE | ID: mdl-34300460

ABSTRACT

Human action recognition methods for videos based on deep convolutional neural networks usually use random cropping or its variants for data augmentation. However, this traditional data augmentation approach may generate many non-informative samples (video patches covering only a small part of the foreground or only the background) that are not related to a specific action. These samples can be regarded as noisy samples with incorrect labels, which reduces overall action recognition performance. In this paper, we mitigate the impact of noisy samples by proposing an Auto-augmented Siamese Neural Network (ASNet). In this framework, we backpropagate salient patches and randomly cropped samples in the same iteration to perform gradient compensation, alleviating the adverse gradient effects of non-informative samples. Salient patches are samples that contain information critical for human action recognition. The generation of salient patches is formulated as a Markov decision process, and a reinforcement learning agent called the Salient Patch Agent (SPA) is introduced to extract patches in a weakly supervised manner without extra labels. Extensive experiments were conducted on two well-known datasets, UCF-101 and HMDB-51, to verify the effectiveness of the proposed SPA and ASNet.
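
The gradient compensation idea, backpropagating a salient patch together with a random crop in the same iteration, can be sketched as below; the toy model, equal loss weighting, and tensor shapes are assumptions for illustration only.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 101))   # toy action classifier
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

random_crop = torch.randn(4, 3, 112, 112)     # traditional augmentation sample
salient_patch = torch.randn(4, 3, 112, 112)   # patch proposed by the salient-patch agent
labels = torch.randint(0, 101, (4,))

# Both samples contribute to a single parameter update, so informative salient
# patches compensate the gradient of possibly non-informative random crops.
loss = criterion(model(random_crop), labels) + criterion(model(salient_patch), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()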


Subjects
Neural Networks, Computer; Recognition, Psychology; Human Activities; Humans; Learning; Markov Chains
7.
Sensors (Basel) ; 21(7)2021 Mar 29.
Article in English | MEDLINE | ID: mdl-33805558

ABSTRACT

Deep reinforcement learning (DRL) has been applied to numerous computer vision tasks, such as object detection and autonomous driving. However, relatively few DRL methods have been proposed for image segmentation, particularly left ventricle (LV) segmentation. Reinforcement learning-based methods in earlier works often rely on learning proper thresholds to perform segmentation, and the segmentation results are inaccurate due to the sensitivity of the threshold. To tackle this problem, a novel DRL agent is designed to imitate the human process of performing LV segmentation. For this purpose, we formulate the segmentation problem as a Markov decision process and optimize it through DRL. The proposed DRL agent consists of two neural networks, First-P-Net and Next-P-Net. First-P-Net locates the initial edge point, and Next-P-Net locates the remaining edge points successively, ultimately producing a closed segmentation contour. The experimental results show that the proposed model outperforms previous reinforcement learning methods and achieves performance comparable to deep learning baselines on two widely used LV endocardium segmentation datasets, namely the Automated Cardiac Diagnosis Challenge (ACDC) 2017 dataset and the Sunnybrook 2009 dataset. Moreover, the proposed model achieves higher F-measure accuracy than deep learning methods when trained with a very limited number of samples.
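
A conceptual sketch of the two-network contour tracing loop is shown below; the stub networks, stopping criterion, and step count are illustrative assumptions, not the published First-P-Net/Next-P-Net architectures.

import torch
import torch.nn as nn

class PointNetStub(nn.Module):
    # Predicts the next edge point from the image and the previous point.
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(64 * 64 + 2, 2)
    def forward(self, image, prev_point):
        feat = torch.cat([image.flatten(1), prev_point], dim=1)
        return torch.tanh(self.head(feat))   # normalized (row, col) coordinates

first_p_net, next_p_net = PointNetStub(), PointNetStub()
image = torch.randn(1, 1, 64, 64)

point = first_p_net(image, torch.zeros(1, 2))   # initial edge point (no prior point)
contour = [point]
for _ in range(20):                             # fixed horizon for the sketch
    point = next_p_net(image, point)            # propose the next edge point
    contour.append(point)
    if torch.linalg.norm(point - contour[0]) < 1e-2:
        break                                   # treat the contour as closed
print(len(contour))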


Subjects
Heart Ventricles; Neural Networks, Computer; Heart; Heart Ventricles/diagnostic imaging; Humans