Results 1 - 5 of 5
1.
IEEE Trans Vis Comput Graph ; 26(10): 2994-3007, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32870780

ABSTRACT

State-of-the-art methods for diminished reality propagate pixel information from a keyframe to subsequent frames for real-time inpainting. However, these approaches produce artifacts if the scene geometry is not sufficiently planar. In this article, we present InpaintFusion, a new real-time method that extends inpainting to non-planar scenes by considering both color and depth information in the inpainting process. We use an RGB-D sensor for simultaneous localization and mapping, in order to both track the camera and obtain a surfel map in addition to RGB images. We use the RGB-D information in a cost function for both the color and the geometric appearance to derive a global optimization for simultaneous inpainting of color and depth. The inpainted depth is merged into a global map by depth fusion. For the final rendering, we project the map model into image space, where we can use it for effects such as relighting and stereo rendering of otherwise hidden structures. We demonstrate the capabilities of our method by comparing it to the inpainting results of methods that use planar geometric proxies.
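
To make the joint color-and-depth cost concrete, the following is a minimal sketch of a patch-based cost that penalizes disagreement in both color and depth outside the hole. The sum-of-squared-differences form, the patch size, and the lambda_depth weight are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch of a joint color + depth patch cost for RGB-D inpainting.
    # Not the authors' implementation: the patch size, the `lambda_depth` weight, and
    # the simple sum-of-squared-differences form are assumptions for illustration.
    import numpy as np

    def patch_cost(target_rgb, target_depth, source_rgb, source_depth,
                   valid_mask, lambda_depth=0.5):
        """Cost of explaining a target RGB-D patch with a source patch.

        Only pixels marked valid in the target (i.e., outside the inpainting
        region) contribute, so the optimizer is unconstrained inside the hole.
        """
        color_term = np.sum(((target_rgb - source_rgb) ** 2)[valid_mask])
        depth_term = np.sum(((target_depth - source_depth) ** 2)[valid_mask])
        return color_term + lambda_depth * depth_term

    # Toy usage: 7x7 patches with a 3x3 hole in the middle.
    rng = np.random.default_rng(0)
    tgt_rgb, src_rgb = rng.random((7, 7, 3)), rng.random((7, 7, 3))
    tgt_d, src_d = rng.random((7, 7)), rng.random((7, 7))
    mask = np.ones((7, 7), dtype=bool)
    mask[2:5, 2:5] = False   # unknown pixels to be inpainted
    print(patch_cost(tgt_rgb, tgt_d, src_rgb, src_d, mask))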

2.
IEEE Trans Vis Comput Graph ; 25(11): 3063-3072, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31403421

ABSTRACT

We propose an algorithm for generating an unstructured lumigraph in real time from an image stream. This problem has important applications in mixed reality, such as telepresence, interior design, or as-built documentation. Unlike conventional texture optimization in structure from motion, our method must choose views from the input stream in a strictly incremental manner, since only a small number of views can be stored or transmitted. This requires formulating an online variant of the well-known view-planning problem, which must take into account which parts of the scene have already been seen and how the lumigraph sample distribution could improve in the future. We address this highly unconstrained problem by regularizing the scene structure with a regular grid. On this grid, we define a coverage metric describing how well the lumigraph samples cover the grid in terms of spatial and angular resolution, and we greedily keep incoming views if they improve the coverage. We evaluate the performance of our algorithm quantitatively and qualitatively on a variety of synthetic and real scenes, and demonstrate visually appealing results obtained at real-time frame rates (in the range of 3 Hz to 100 Hz per incoming image, depending on the configuration).
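
The greedy view-keeping step can be illustrated with a small sketch: a regular grid whose cells track which angular bins have already been observed, and an incoming view is kept only if it covers enough new (cell, angular-bin) pairs. The grid resolution, the angular binning, and the min_gain threshold below are assumptions for illustration, not the paper's exact coverage metric.

    # Minimal sketch of the greedy "keep a view if it improves coverage" idea:
    # a regular grid whose cells track which angular bins have been observed.
    # GRID, ANGULAR_BINS, and min_gain are assumed values, not the paper's.
    import numpy as np

    GRID = 8            # assumed grid resolution per axis (scene scaled to [0,1)^3)
    ANGULAR_BINS = 12   # assumed number of view-direction bins per cell
    covered = np.zeros((GRID, GRID, GRID, ANGULAR_BINS), dtype=bool)

    def view_bins(cam_pos, observed_points):
        """Map each observed 3D point to a (cell, angular bin) pair for this camera."""
        cells = np.clip((observed_points * GRID).astype(int), 0, GRID - 1)
        dirs = cam_pos - observed_points                      # point-to-camera directions
        azimuth = np.arctan2(dirs[:, 1], dirs[:, 0])          # in (-pi, pi]
        bins = ((azimuth + np.pi) / (2 * np.pi) * ANGULAR_BINS).astype(int) % ANGULAR_BINS
        return cells, bins

    def maybe_keep_view(cam_pos, observed_points, min_gain=10):
        """Greedily keep the view only if it covers enough new (cell, bin) pairs."""
        cells, bins = view_bins(cam_pos, observed_points)
        flat = np.ravel_multi_index(
            (cells[:, 0], cells[:, 1], cells[:, 2], bins), covered.shape)
        new_pairs = np.unique(flat[~covered.flat[flat]])
        if new_pairs.size < min_gain:
            return False                      # discard: too little coverage improvement
        covered.flat[new_pairs] = True        # mark the newly covered pairs
        return True

    # Toy usage with random geometry in the unit cube.
    rng = np.random.default_rng(1)
    print(maybe_keep_view(rng.random(3), rng.random((500, 3))))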

3.
IEEE Trans Vis Comput Graph ; 24(4): 1437-1446, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29543162

ABSTRACT

Drones allow exploring dangerous or impassable areas safely from a distant point of view. However, flight control from an egocentric view in narrow or constrained environments can be challenging. Arguably, an exocentric view would afford a better overview and, thus, more intuitive flight control of the drone. Unfortunately, such an exocentric view is unavailable when exploring indoor environments. This paper investigates the potential of drone-augmented human vision, i.e., of exploring the environment and controlling the drone indirectly from an exocentric viewpoint. If used with a see-through display, this approach can simulate X-ray vision to provide a natural view into an otherwise occluded environment. The user's view is synthesized from a three-dimensional reconstruction of the indoor environment using image-based rendering. This user interface is designed to reduce the cognitive load of the drone's flight control. The user can concentrate on the exploration of the inaccessible space, while flight control is largely delegated to the drone's autopilot system. We assess our system with a first experiment showing how drone-augmented human vision supports spatial understanding and improves natural interaction with the drone.
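
As a rough illustration of synthesizing an exocentric view from a reconstruction, the sketch below projects a reconstructed point cloud into a user-chosen virtual camera with z-buffered point splatting. The actual system uses image-based rendering; the camera intrinsics and the splatting scheme here are assumptions.

    # Toy sketch of rendering a reconstructed point cloud from a user-chosen
    # exocentric virtual camera (pinhole projection with z-buffered point splatting).
    # The real system uses image-based rendering; the intrinsics (f, w, h) and the
    # splatting approach are assumptions for illustration.
    import numpy as np

    def render_exocentric(points, colors, R, t, f=500.0, w=640, h=480):
        """Project world-space points into a virtual camera (R, t) and splat them."""
        cam = points @ R.T + t                         # world -> camera coordinates
        in_front = cam[:, 2] > 0.1                     # drop points behind the camera
        cam, colors = cam[in_front], colors[in_front]
        u = (f * cam[:, 0] / cam[:, 2] + w / 2).astype(int)
        v = (f * cam[:, 1] / cam[:, 2] + h / 2).astype(int)
        ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)   # keep points inside the image
        image = np.zeros((h, w, 3))
        zbuf = np.full((h, w), np.inf)
        for x, y, z, c in zip(u[ok], v[ok], cam[ok, 2], colors[ok]):
            if z < zbuf[y, x]:                         # keep the closest point per pixel
                zbuf[y, x] = z
                image[y, x] = c
        return image

    # Toy usage: random colored points in front of an identity-pose camera.
    rng = np.random.default_rng(2)
    pts = rng.uniform([-1, -1, 2], [1, 1, 5], (2000, 3))
    img = render_exocentric(pts, rng.random((2000, 3)), np.eye(3), np.zeros(3))
    print(img.shape)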


Subjects
Aircraft , Computer Graphics , Data Display , Imaging, Three-Dimensional/methods , Man-Machine Systems , Adult , Humans , Male , Video Recording , Virtual Reality , X-Rays , Young Adult
4.
PLoS One ; 8(6): e67329, 2013.
Article in English | MEDLINE | ID: mdl-23825652

ABSTRACT

Brain tissue changes in autism spectrum disorders seem to be subtle and widespread rather than anatomically distinct. A multimodal, whole-brain imaging approach therefore appears appropriate for investigating whether alterations in white and gray matter integrity relate to consistent changes in functional resting-state connectivity in individuals with high-functioning autism (HFA). We applied diffusion tensor imaging (DTI), voxel-based morphometry (VBM), and resting-state functional connectivity magnetic resonance imaging (fcMRI) to assess differences in brain structure and function between 12 individuals with HFA (mean age 35.5, SD 11.4, 9 male) and 12 healthy controls (mean age 33.3, SD 9.0, 8 male). Psychological measures of empathy and emotionality were obtained and correlated with the most significant DTI, VBM, and fcMRI findings. We found three regions of convergent structural and functional differences between HFA participants and controls. The right temporo-parietal junction area and the left frontal lobe showed decreased fractional anisotropy (FA) values along with decreased functional connectivity and a trend towards decreased gray matter volume. The bilateral superior temporal gyrus displayed significantly decreased functional connectivity, accompanied by the strongest trend towards gray matter volume decrease in the temporal lobe of HFA individuals. The FA decrease in the right temporo-parietal region was correlated with psychological measures of decreased emotionality. In conclusion, our results indicate common sites of structural and functional alterations in higher-order association cortex areas and may therefore provide multimodal imaging support for the long-standing hypothesis of autism as a disorder of impaired higher-order multisensory integration.
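
The two statistical steps described above, a group comparison of regional FA values and a correlation of FA with an emotionality score, can be sketched as follows. The data are synthetic; only the group sizes (n = 12 per group) match the study, and all values and effect sizes are made up for illustration.

    # Schematic of the two statistical steps: a group comparison of regional FA and a
    # correlation of FA with an emotionality score. Synthetic data; only the group
    # sizes (n = 12) match the study, all other values are made up for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    fa_hfa = rng.normal(0.42, 0.03, 12)    # FA in a right temporo-parietal ROI, HFA group
    fa_ctrl = rng.normal(0.46, 0.03, 12)   # same ROI, control group
    emotionality = 50 + 300 * (fa_hfa - 0.42) + rng.normal(0, 3, 12)  # HFA scores

    t, p_group = stats.ttest_ind(fa_hfa, fa_ctrl)        # HFA vs. controls
    r, p_corr = stats.pearsonr(fa_hfa, emotionality)     # FA vs. emotionality (HFA)
    print(f"group difference: t = {t:.2f}, p = {p_group:.3f}")
    print(f"FA-emotionality correlation: r = {r:.2f}, p = {p_corr:.3f}")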


Subjects
Autistic Disorder/physiopathology , Brain/physiopathology , Adult , Autistic Disorder/diagnostic imaging , Brain/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Neuropsychological Tests , Young Adult
5.
Med Image Comput Comput Assist Interv ; 15(Pt 2): 609-16, 2012.
Article in English | MEDLINE | ID: mdl-23286099

ABSTRACT

The alignment of the lower limb in high tibial osteotomy (HTO) or total knee arthroplasty (TKA) must be determined intraoperatively. One way to do so is to measure the mechanical axis deviation (MAD), for which a tolerance of 10 mm is widely accepted. Many techniques have been proposed in clinical practice, such as visual inspection, the cable method, a grid with lead-impregnated reference lines, or, more recently, navigation systems. Each has its disadvantages, including the reliability of the MAD measurement, excess radiation, prolonged operation time, complicated setup, and high cost. To alleviate these shortcomings, we propose a novel clinical protocol that allows quick and accurate intraoperative calculation of the MAD. This is achieved by an X-ray stitching method requiring only three X-ray images placed into a panoramic image frame during the entire procedure. The method has been systematically analyzed in a simulation framework in order to investigate its accuracy and robustness. Furthermore, we validated our protocol in a preclinical study comprising 19 human cadaver legs. Four surgeons determined MAD measurements using our X-ray panorama and compared these values to a gold-standard CT-based technique. The maximum average MAD error was 3.5 mm, which shows great potential for the technique.
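
The core MAD computation on a stitched, calibrated panorama reduces to the perpendicular distance from the knee center to the hip-to-ankle line. The sketch below assumes the landmark coordinates and the millimeter-per-pixel scale are already known; the stitching step itself is not shown.

    # Sketch of the MAD computation on a stitched, calibrated panorama: the
    # perpendicular distance from the knee center to the hip-to-ankle line.
    # Landmark pixel coordinates and the mm-per-pixel scale are assumed inputs;
    # the X-ray stitching step itself is not shown.
    import numpy as np

    def mechanical_axis_deviation(hip, knee, ankle, mm_per_px=1.0):
        """Perpendicular distance (mm) from the knee center to the hip-ankle line."""
        hip, knee, ankle = (np.asarray(p, dtype=float) for p in (hip, knee, ankle))
        axis = ankle - hip                               # mechanical axis direction (2D)
        rel = knee - hip
        cross_z = axis[0] * rel[1] - axis[1] * rel[0]    # 2D cross product (z component)
        return abs(cross_z) / np.linalg.norm(axis) * mm_per_px

    # Toy usage with landmark positions (pixels) picked on the panorama.
    print(mechanical_axis_deviation(hip=(120, 40), knee=(131, 520), ankle=(128, 980),
                                    mm_per_px=0.30))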


Subjects
Arthroplasty, Replacement/methods , Imaging, Three-Dimensional/methods , Leg/diagnostic imaging , Leg/surgery , Pattern Recognition, Automated/methods , Surgery, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Cadaver , Humans , Radiographic Image Interpretation, Computer-Assisted/methods , Reproducibility of Results , Sensitivity and Specificity