Results 1 - 6 of 6
1.
Sensors (Basel) ; 22(14)2022 Jul 14.
Article in English | MEDLINE | ID: mdl-35890934

ABSTRACT

Dense multi-view image reconstruction has long been an active research area, and interest has recently increased. Multi-view images can solve many problems and improve the efficiency of many applications. This paper presents a solution for reconstructing high-density light field (LF) images, targeting images captured by Lytro Illum cameras, where the trade-off between angular and spatial resolution imposed by limited sensor resolution is an inherent problem. We introduce the residual channel attention light field (RCA-LF) structure to address different LF reconstruction tasks. In our approach, view images are grouped into one stack in which epipolar information is available, and 2D convolution layers process and extract features from the stacked views. Our method adopts a channel attention mechanism to learn the relations between different views and assign higher weights to the most important features, restoring more texture detail. Experimental results indicate that the proposed model outperforms earlier state-of-the-art methods in both visual and numerical evaluation.
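A minimal sketch of the kind of residual channel-attention block described above, applied to features extracted from a stack of LF view images. The layer widths, squeeze-and-excitation ratio, and tensor shapes are illustrative assumptions, not the authors' exact RCA-LF architecture.

```python
# Hypothetical residual channel-attention block over stacked light-field view features.
import torch
import torch.nn as nn

class ResidualChannelAttentionBlock(nn.Module):
    def __init__(self, channels: int = 64, reduction: int = 16):
        super().__init__()
        # 2D convolutions extract features from the stacked views
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        # Channel attention: global pooling + small bottleneck produces per-channel weights
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.body(x)
        weights = self.attention(features)   # emphasize the most informative channels
        return x + features * weights        # residual connection

# Example: features from a stack of LF views (64 feature maps, 64x64 pixels)
block = ResidualChannelAttentionBlock()
out = block(torch.randn(1, 64, 64, 64))
print(out.shape)  # torch.Size([1, 64, 64, 64])
```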

2.
Sensors (Basel) ; 22(5)2022 Mar 02.
Article in English | MEDLINE | ID: mdl-35271103

ABSTRACT

Although Light-Field (LF) technology attracts attention due to its large number of applications, especially with the introduction and increasingly frequent use of consumer LF cameras, reconstructing densely sampled LF images remains a major challenge for the use and development of LF technology. Our paper proposes a learning-based method to reconstruct densely sampled LF images from a sparse set of input views. We trained our model with raw LF images rather than multiple images of the same scene; a raw LF image represents the two-dimensional array of views captured in a single exposure. This enables the network to model the relationships between different views of the same scene well, and thus to restore more texture detail and deliver better quality. Using raw images transforms the task from image reconstruction into image-to-image translation. The small baseline of LF images was exploited to initialize each view to be reconstructed with its nearest input view. Our network was trained end-to-end to minimize the sum of absolute errors between the reconstructed and ground-truth images. Experimental results on three challenging real-world datasets demonstrate the high performance of our proposed method and its advantage over state-of-the-art methods.
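A minimal sketch of the training setup described above: initialize each view to be reconstructed from its nearest sampled input view, then train end-to-end to minimize the sum of absolute errors (L1) against ground truth. The stand-in network, angular positions, and tensor shapes are placeholder assumptions, not the paper's implementation.

```python
# Hypothetical nearest-view initialization plus L1 (sum of absolute errors) training step.
import torch
import torch.nn as nn

def nearest_view_init(sparse_views: torch.Tensor, target_positions, input_positions) -> torch.Tensor:
    """For every target angular position, copy the closest sampled input view."""
    init = []
    for (u, v) in target_positions:
        distances = [(u - iu) ** 2 + (v - iv) ** 2 for (iu, iv) in input_positions]
        init.append(sparse_views[:, distances.index(min(distances))])
    return torch.stack(init, dim=1)  # (batch, num_target_views, C, H, W)

model = nn.Conv3d(3, 3, kernel_size=3, padding=1)   # stand-in reconstruction network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
l1 = nn.L1Loss(reduction="sum")                      # sum of absolute errors

sparse = torch.randn(2, 4, 3, 32, 32)                # 4 corner input views
target = torch.randn(2, 9, 3, 32, 32)                # 9 ground-truth views on a 3x3 grid
init = nearest_view_init(sparse,
                         [(u, v) for u in range(3) for v in range(3)],
                         [(0, 0), (0, 2), (2, 0), (2, 2)])
pred = model(init.permute(0, 2, 1, 3, 4)).permute(0, 2, 1, 3, 4)
loss = l1(pred, target)
loss.backward()
optimizer.step()
```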


Subjects
Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods
3.
Sensors (Basel) ; 22(23)2022 Dec 04.
Article in English | MEDLINE | ID: mdl-36502186

ABSTRACT

Computer vision tasks such as motion estimation, depth estimation, and object detection are better served by light field images, which carry more structural information than traditional 2D monocular images. However, because the costly acquisition instruments are difficult to calibrate, real-world light field images are hard to obtain. Most available static light field datasets are modest in size and cannot be used by methods such as transformers to fully exploit local and global correlations. In addition, studies of dynamic situations, such as object tracking and motion estimation based on 4D light field images, have been rare, and we anticipate superior performance there. In this paper, we first propose a new static light field dataset that contains up to 50 scenes with 8 to 10 perspectives per scene, with ground truth including disparities, depths, surface normals, segmentations, and object poses. This dataset is larger in scale than current mainstream datasets for depth estimation refinement, and it focuses on indoor and some outdoor scenarios. Second, to provide optical flow ground truth describing the 3D motion of objects, beyond what can be obtained from static scenes, and to enable more precise pixel-level motion estimation, we release a light field scene flow dataset with dense 3D motion ground truth for every pixel; each scene has 150 frames. Third, using DistgDisp and DistgASR, which decouple the angular and spatial domains of the light field, we perform disparity estimation and angular super-resolution to evaluate our dataset. Experimental results demonstrate the performance and potential of our dataset for disparity estimation and angular super-resolution.
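A hypothetical sketch of how per-scene ground truth from such a dataset might be loaded. The directory layout and file names are illustrative assumptions; the actual dataset format should be taken from its documentation.

```python
# Hypothetical per-scene loader for a light-field dataset with multiple ground-truth modalities.
from pathlib import Path
import numpy as np

def load_scene(scene_dir: str) -> dict:
    scene = Path(scene_dir)
    views = sorted(scene.glob("view_*.npy"))            # 8-10 perspectives per scene
    return {
        "views": [np.load(v) for v in views],
        "disparity": np.load(scene / "disparity.npy"),
        "depth": np.load(scene / "depth.npy"),
        "normals": np.load(scene / "normals.npy"),
        "segmentation": np.load(scene / "segmentation.npy"),
    }
```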


Subjects
Algorithms; Motion (Physics)
4.
Sensors (Basel) ; 18(3)2018 Mar 19.
Article in English | MEDLINE | ID: mdl-29562722

ABSTRACT

Deconvolution provides an efficient technique for achieving angular super-resolution in scanning radar forward-looking imaging. However, deconvolution is an ill-posed problem: its solution is not only sensitive to noise but is also easily degraded by noise amplification when too many iterations are performed. In this paper, a penalized maximum likelihood angular super-resolution method is proposed to tackle these problems. First, a new likelihood function is derived by modeling the noise in the I and Q channels separately, improving the accuracy of the noise model for the radar imaging system. Then, to suppress noise amplification while preserving the resolving ability of the method, a joint square-Laplace penalty is formulated, exploiting the outlier sensitivity of the square constraint together with the sparsity-promoting nature of the Laplace distribution. Finally, to facilitate engineering application, an accelerated iterative strategy is adopted to solve the resulting convex optimization problem. Experiments on both synthetic and real data demonstrate the effectiveness and superior performance of the proposed method.
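An illustrative sketch of penalized maximum-likelihood deconvolution with a combined square (L2) and Laplace (L1) penalty, echoing the square-plus-Laplace pairing described above. It uses a generic Gaussian-noise formulation solved by proximal gradient descent, not the paper's I/Q-channel likelihood or its accelerated solver.

```python
# Generic penalized deconvolution sketch: least-squares data term + L2 + L1 penalties.
import numpy as np

def deconvolve(y, A, lam_l2=0.1, lam_l1=0.05, step=1e-3, iters=500):
    """Minimize ||A x - y||^2 + lam_l2 * ||x||^2 + lam_l1 * ||x||_1
    by proximal gradient descent (soft-thresholding handles the L1 term)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2 * A.T @ (A @ x - y) + 2 * lam_l2 * x                  # smooth part
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam_l1, 0.0)    # prox of L1
    return x

# Toy example: recover a sparse target blurred by a known system matrix A
rng = np.random.default_rng(0)
A = rng.normal(size=(128, 64))
x_true = np.zeros(64)
x_true[[10, 30, 50]] = [1.0, -0.5, 0.8]
y = A @ x_true + 0.01 * rng.normal(size=128)
x_hat = deconvolve(y, A)
```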

5.
Biomed J ; 46(1): 154-162, 2023 02.
Article in English | MEDLINE | ID: mdl-35026475

ABSTRACT

BACKGROUND: Rotational angiography acquires radiographs at multiple projection angles to demonstrate superimposed vasculature. However, this comes at the expense of an inherent increase in ionizing radiation. In this paper, building upon a successful deep learning model, we developed a novel technique to super-resolve radiographs at intermediate projection angles, reducing the number of actual projections needed for a diagnosable radiographic procedure. METHODS: Ten models were trained for different levels of angular super-resolution (ASR), denoted ASRN, where for every N+2 frames, the first and last frames were submitted as inputs to super-resolve the intermediate N frames. RESULTS: Large arterial structures were well preserved at all ASR levels. Small arteries were adequately visualized at lower ASR levels but became progressively blurred at higher ASR levels. Noninferiority of image quality was demonstrated for ASR1-4 (99.75% confidence intervals: -0.16-0.03, -0.19-0.04, -0.17-0.01, -0.15-0.05, respectively). CONCLUSIONS: The ASR technique is capable of super-resolving rotational angiographic frames at intermediate projection angles.
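A minimal sketch of the ASRN sampling scheme described above: within every window of N+2 consecutive angular frames, the first and last frames serve as inputs and the intermediate N frames are the super-resolution targets. The windowing helper below is a hypothetical illustration; the interpolation model itself is not shown.

```python
# Hypothetical indexing helper for ASR-N windows over a rotational acquisition.
def asr_pairs(num_frames: int, n: int):
    """Yield (input_indices, target_indices) windows for ASR level n."""
    step = n + 1
    for start in range(0, num_frames - step, step):
        yield (start, start + step), list(range(start + 1, start + step))

# Example: ASR2 over 7 rotational projections
for inputs, targets in asr_pairs(7, 2):
    print("inputs:", inputs, "-> super-resolve:", targets)
# inputs: (0, 3) -> super-resolve: [1, 2]
# inputs: (3, 6) -> super-resolve: [4, 5]
```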


Subjects
Angiography; Neural Networks, Computer; Humans; X-Rays; Radiography; Image Processing, Computer-Assisted/methods
6.
Med Image Anal ; 79: 102431, 2022 07.
Article in English | MEDLINE | ID: mdl-35397471

ABSTRACT

Mapping the human connectome using fiber tracking permits the study of brain connectivity and yields new insights into neuroscience. However, reliable connectome reconstruction from diffusion magnetic resonance imaging (dMRI) data acquired with widely available clinical protocols remains challenging, limiting the clinical applications of connectomics and tractography. Here we develop the fiber orientation distribution (FOD) network (FOD-Net), a deep-learning-based framework for FOD angular super-resolution. Our method enhances the angular resolution of FOD images computed from common clinical-quality dMRI data to obtain FODs whose quality is comparable to those produced by advanced research scanners. Super-resolved FOD images enable superior tractography and structural connectome reconstruction from clinical protocols. The method was trained and tested with high-quality data from the Human Connectome Project (HCP) and further validated with a local clinical 3.0T scanner as well as another publicly available multicenter, multiscanner dataset. Using this method, we improve the angular resolution of FOD images acquired with typical single-shell, low-angular-resolution dMRI data (e.g., 32 directions, b = 1000 s/mm²) to approximate the quality of FODs derived from time-consuming, multi-shell, high-angular-resolution dMRI research protocols. We also demonstrate tractography improvement, removing spurious connections and bridging missing connections. We further show that connectomes reconstructed from super-resolved FODs achieve results comparable to those obtained with more advanced dMRI acquisition protocols, on both HCP and clinical 3.0T data. The deep-learning advances used in FOD-Net facilitate high-quality tractography and connectome analysis in existing clinical MRI environments. Our code is freely available at https://github.com/ruizengalways/FOD-Net.
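A hypothetical sketch of the FOD angular super-resolution idea described above: regress higher-order spherical-harmonic (SH) FOD coefficients from the lower-order coefficients obtainable with single-shell, low-angular-resolution dMRI. The patch size, layer widths, and SH orders (lmax 4 to lmax 8) are illustrative assumptions; the authors' actual implementation is available at the linked repository.

```python
# Hypothetical per-voxel FOD angular super-resolution: low-order SH -> high-order SH.
import torch
import torch.nn as nn

N_SH_IN, N_SH_OUT = 15, 45   # number of even-order SH coefficients for lmax=4 and lmax=8

class FODSuperResolver(nn.Module):
    def __init__(self):
        super().__init__()
        # Small 3D CNN over a local voxel neighborhood of low-resolution FODs
        self.net = nn.Sequential(
            nn.Conv3d(N_SH_IN, 64, kernel_size=3),   # 3x3x3 patch -> center voxel
            nn.ReLU(inplace=True),
            nn.Conv3d(64, 128, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(128, N_SH_OUT, kernel_size=1),
        )

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        # patch: (batch, N_SH_IN, 3, 3, 3) -> (batch, N_SH_OUT) for the center voxel
        return self.net(patch).flatten(1)

model = FODSuperResolver()
high_res = model(torch.randn(8, N_SH_IN, 3, 3, 3))
print(high_res.shape)  # torch.Size([8, 45])
```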


Subjects
Connectome; Deep Learning; Brain/diagnostic imaging; Connectome/methods; Diffusion Magnetic Resonance Imaging/methods; Humans; Image Processing, Computer-Assisted/methods