1.
Article in English | MEDLINE | ID: mdl-39093671

ABSTRACT

Recently, fast Magnetic Resonance Imaging (MRI) reconstruction technology has emerged as a promising way to improve the clinical diagnostic experience by significantly reducing scan times. While existing studies have used Generative Adversarial Networks (GANs) to achieve impressive results in reconstructing MR images, they still suffer from challenges such as blurred zones/boundaries and abnormal spots caused by inevitable noise in the reconstruction process. To this end, we propose a novel deep framework termed Anisotropic Diffusion-Assisted Generative Adversarial Networks, which aims to maximally preserve valid high-frequency information and structural details while minimizing noise in reconstructed images by optimizing a joint loss function in a unified framework. In doing so, it enables more authentic and accurate MR image generation. To specifically handle unforeseeable noise, an Anisotropic Diffused Reconstruction Module is developed and added alongside the backbone network as a denoising assistant, which improves the final image quality by minimizing reconstruction losses between targets and iteratively denoised generative outputs, with no extra computational complexity during the testing phase. To make the most of valuable MRI data, we extend the framework to multi-modal learning, boosting reconstructed image quality by aggregating more valid information from images of diverse modalities. Extensive experiments on public datasets show that the proposed framework achieves superior performance in improving the quality of reconstructed MR images. For example, the proposed method obtains average PSNR and mSSIM values of 35.785 dB and 0.9765 on the MRNet dataset, which are at least about 2.9 dB and 0.07 higher than those of the baselines.
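The abstract does not give implementation details for the iterative denoising inside the Anisotropic Diffused Reconstruction Module. As a rough, hedged illustration of the kind of edge-preserving iterative denoising it describes, the sketch below applies classic Perona-Malik anisotropic diffusion to a 2D image; the function name, the parameters (n_iter, kappa, gamma), and the NumPy formulation are assumptions for illustration, not the authors' code.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, gamma=0.1):
    """Classic Perona-Malik anisotropic diffusion (illustrative only).

    Smooths homogeneous regions while largely preserving edges, i.e.
    high-frequency structural detail. Boundaries are treated as periodic
    via np.roll, which is acceptable for a sketch.
    """
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours.
        dn = np.roll(u, -1, axis=0) - u  # north
        ds = np.roll(u,  1, axis=0) - u  # south
        de = np.roll(u, -1, axis=1) - u  # east
        dw = np.roll(u,  1, axis=1) - u  # west
        # Edge-stopping conduction coefficients (exponential variant):
        # large gradients (edges) get small coefficients and diffuse little.
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        # Explicit update: diffuse strongly in flat regions, weakly across edges.
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

In a training loop along the lines the abstract describes, such a denoised output could enter an auxiliary reconstruction loss against the target image (e.g. an L1 or L2 term added to the adversarial loss), leaving the generator unchanged at test time.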

2.
Front Neurosci; 17: 1280831, 2023.
Article in English | MEDLINE | ID: mdl-37736267
3.
Article in English | MEDLINE | ID: mdl-32976101

ABSTRACT

Despite the thrilling success achieved by existing binary descriptors, most of them still suffer from three limitations: 1) vulnerability to geometric transformations; 2) inability to preserve the manifold structure when learning binary codes; 3) no guarantee of finding the true match if multiple candidates happen to have the same Hamming distance to a given query. Together, these make binary descriptors less effective for large-scale visual recognition tasks. In this paper, we propose a novel learning-based feature descriptor, namely Unsupervised Deep Binary Descriptor (UDBD), which learns transformation-invariant binary descriptors by projecting the original data and their transformed sets into a joint binary space. Moreover, we incorporate an ℓ2,1-norm loss term in the binary embedding process to simultaneously gain robustness against data noise and reduce the probability of mistakenly flipping bits of the binary descriptor; on top of this, a graph constraint is used to preserve the original manifold structure in the binary space. Furthermore, a weak bit mechanism is adopted to find the true match among candidates sharing the same minimum Hamming distance, thus enhancing matching performance. Extensive experimental results on public datasets show the superiority of UDBD over state-of-the-art methods in terms of matching and retrieval accuracy.
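To make the matching idea concrete, the sketch below shows one plausible reading of Hamming-distance retrieval with a weak-bit tie-break: when several database codes tie at the minimum Hamming distance, bits whose real-valued activation lay close to the binarization threshold are treated as "weak" (unreliable) and down-weighted. The ℓ2,1-norm helper is included as a reminder of that loss term's definition. Function names, the weighting scheme, and the exact definition of weak bits are assumptions; UDBD's actual mechanism may differ.

```python
import numpy as np

def l21_norm(M):
    # ℓ2,1 norm of a matrix: sum of the L2 norms of its rows.
    return np.sqrt((M ** 2).sum(axis=1)).sum()

def match_with_weak_bits(query_real, db_codes):
    """query_real: real-valued embedding of the query (before thresholding).
    db_codes: (N, d) array of database binary codes in {0, 1}.
    Returns the index of the best-matching database item."""
    query_code = (query_real > 0).astype(np.uint8)
    # Plain Hamming distances to every database code.
    hamming = (db_codes != query_code).sum(axis=1)
    candidates = np.flatnonzero(hamming == hamming.min())
    if len(candidates) == 1:
        return candidates[0]
    # Tie-break: weight each disagreeing bit by how confidently the query
    # committed to it; near-threshold ("weak") bits contribute little.
    reliability = np.abs(query_real)
    weighted = ((db_codes[candidates] != query_code) * reliability).sum(axis=1)
    return candidates[np.argmin(weighted)]
```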

4.
Article in English | MEDLINE | ID: mdl-30452370

ABSTRACT

This paper proposes a deep hashing framework, namely Unsupervised Deep Video Hashing (UDVH), for large-scale video similarity search, with the aim of learning compact yet effective binary codes. UDVH produces hash codes in a self-taught manner by jointly integrating discriminative video representation with optimal code learning, where an efficient alternating approach is adopted to optimize the objective function. The key differences from most existing video hashing methods are: 1) UDVH is an unsupervised hashing method that generates hash codes by cooperatively utilizing feature clustering and a specifically designed binarization that preserves the original neighborhood structure in the binary space; 2) a specific rotation is developed and applied to the video features so that the variance of each dimension is balanced, thus facilitating the subsequent quantization step. Extensive experiments on three popular video datasets show that UDVH substantially outperforms state-of-the-art methods on various evaluation metrics, making it practical for real-world applications.
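To illustrate the shape of the rotate-then-quantize step, the sketch below centers a feature matrix, applies an orthogonal rotation, binarizes by sign, and reports the per-dimension variance that the rotation is supposed to balance. UDVH learns a specific variance-balancing rotation; the random orthogonal matrix used here is only a stand-in, and the function name and parameters are hypothetical.

```python
import numpy as np

def rotate_and_binarize(features, seed=0):
    """features: (N, d) real-valued video representations.
    Returns rotated features, their sign-binarized codes, and the
    per-dimension variance after rotation (to check balance)."""
    rng = np.random.default_rng(seed)
    X = features - features.mean(axis=0)        # zero-center each dimension
    # Stand-in rotation: a random orthogonal matrix from a QR decomposition.
    # UDVH instead learns a rotation that balances per-dimension variance;
    # the random choice here only illustrates the pipeline shape.
    Q, _ = np.linalg.qr(rng.standard_normal((X.shape[1], X.shape[1])))
    Xr = X @ Q
    codes = (Xr > 0).astype(np.uint8)           # sign binarization
    per_dim_var = Xr.var(axis=0)                # ideally roughly uniform
    return Xr, codes, per_dim_var
```

Balanced per-dimension variance matters because sign quantization loses the least information when every bit carries a comparable share of the signal.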
