Results 1 - 3 of 3
1.
IEEE Trans Vis Comput Graph; 29(1): 171-181, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36166532

ABSTRACT

What can we learn about a scene by watching it for months or years? A video recorded over a long timespan will depict interesting phenomena at multiple timescales, but identifying and viewing them presents a challenge. The video is too long to watch in full, and some things are too slow to experience in real-time, such as glacial retreat or the gradual shift from summer to fall. Timelapse videography is a common approach to summarizing long videos and visualizing slow timescales. However, a timelapse is limited to a single chosen temporal frequency, and often appears flickery due to aliasing. Also, the length of the timelapse video is directly tied to its temporal resolution, which necessitates tradeoffs between those two facets. In this paper, we propose Video Temporal Pyramids, a technique that addresses these limitations and expands the possibilities for visualizing the passage of time. Inspired by spatial image pyramids from computer vision, we developed an algorithm that builds video pyramids in the temporal domain. Each level of a Video Temporal Pyramid visualizes a different timescale; for instance, videos from the monthly timescale are usually good for visualizing seasonal changes, while videos from the one-minute timescale are best for visualizing sunrise or the movement of clouds across the sky. To help explore the different pyramid levels, we also propose a Video Spectrogram to visualize the amount of activity across the entire pyramid, providing a holistic overview of the scene dynamics and the ability to explore and discover phenomena across time and timescales. To demonstrate our approach, we have built Video Temporal Pyramids from ten outdoor scenes, each containing months or years of data. We compare Video Temporal Pyramid layers to naive timelapse and find that our pyramids enable alias-free viewing of longer-term changes. We also demonstrate that the Video Spectrogram facilitates exploration and discovery of phenomena across pyramid levels, by enabling both overview and detail-focused perspectives.
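
The pyramid construction described above parallels a spatial Gaussian pyramid applied along the time axis: low-pass filter the frame sequence in time, then subsample, so each level halves the temporal resolution without the aliasing of a naive timelapse. A minimal sketch of that idea, assuming frames arrive as a NumPy array of shape (T, H, W, C); the binomial kernel and level count are illustrative choices, not taken from the paper:

```python
# Temporal Gaussian pyramid sketch: filter along time before subsampling.
# A naive timelapse corresponds to frames[::2] with no low-pass step,
# which is exactly what produces flicker (temporal aliasing).
import numpy as np
from scipy.ndimage import convolve1d

def temporal_pyramid(frames: np.ndarray, levels: int = 4) -> list[np.ndarray]:
    """Return a list of videos; level k has roughly T / 2**k frames."""
    kernel = np.array([1, 4, 6, 4, 1], dtype=np.float64) / 16.0  # binomial low-pass
    pyramid = [frames.astype(np.float64)]
    for _ in range(levels - 1):
        smoothed = convolve1d(pyramid[-1], kernel, axis=0, mode="nearest")
        pyramid.append(smoothed[::2])  # subsample time by 2 after filtering
    return pyramid
```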

2.
IEEE Trans Vis Comput Graph; 27(4): 2495-2501, 2021 Apr.
Article in English | MEDLINE | ID: mdl-32396092

ABSTRACT

When observing the visual world, temporal phenomena are ubiquitous: people walk, cars drive, rivers flow, clouds drift, and shadows elongate. Some of these, like water splashing and cloud motion, occur over time intervals that are either too short or too long for humans to easily observe. High-speed and timelapse videos provide a popular and compelling way to visualize these phenomena, but many real-world scenes exhibit motions occurring at a variety of rates. Once a framerate is chosen, phenomena at other rates are at best invisible, and at worst create distracting artifacts. In this article, we propose to automatically normalize the pixel-space speed of different motions in an input video to produce a seamless output with spatiotemporally varying framerate. To achieve this, we propose to analyze scenes at different timescales to isolate and analyze motions that occur at vastly different rates. Our method optionally allows a user to specify additional constraints according to artistic preferences. The motion normalized output provides a novel way to compactly visualize the changes occurring in a scene over a broad range of timescales.
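
A global simplification of this speed normalization is a time remapping: given a per-frame motion magnitude m[t] (e.g., the mean optical-flow magnitude between consecutive frames), pick output frames so that each step covers an equal amount of accumulated motion. A minimal sketch under that assumption; the method described above varies the rate spatiotemporally, which this global version does not attempt:

```python
# Global time-remapping sketch: equal accumulated motion per output frame.
import numpy as np

def motion_normalized_indices(motion: np.ndarray, n_out: int) -> np.ndarray:
    """motion: (T,) per-frame motion magnitude. Returns n_out frame indices."""
    cum = np.cumsum(motion + 1e-8)                 # epsilon avoids flat plateaus
    targets = np.linspace(cum[0], cum[-1], n_out)  # evenly spaced motion targets
    idx = np.searchsorted(cum, targets)            # first frame reaching each target
    return np.clip(idx, 0, len(motion) - 1)
```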

3.
IEEE Trans Pattern Anal Mach Intell; 38(4): 639-651, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26959670

ABSTRACT

We present a method for computing ambient occlusion (AO) for a stack of images of a Lambertian scene from a fixed viewpoint. Ambient occlusion, a concept common in computer graphics, characterizes the local visibility at a point: it approximates how much light can reach that point from different directions without getting blocked by other geometry. While AO has received surprisingly little attention in vision, we show that it can be approximated using simple, per-pixel statistics over image stacks, based on a simplified image formation model. We use our derived AO measure to compute reflectance and illumination for objects without relying on additional smoothness priors, and demonstrate state-of-the-art performance on the MIT Intrinsic Images benchmark. We also demonstrate our method on several synthetic and real scenes, including 3D-printed objects with known ground truth geometry.
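
As an illustration of what "simple, per-pixel statistics over image stacks" can look like, here is a rough proxy, not the paper's actual derivation: under a crude Lambertian model with varying illumination, a heavily occluded point stays dark across the entire stack, so its mean intensity relative to a robust per-pixel maximum (a stand-in for its appearance under unoccluded lighting) gives a coarse AO estimate in [0, 1]:

```python
# Illustrative per-pixel AO proxy over an image stack (not the paper's method).
import numpy as np

def rough_ao(stack: np.ndarray) -> np.ndarray:
    """stack: (N, H, W) grayscale images of a static scene, fixed viewpoint."""
    mean = stack.mean(axis=0)                  # average brightness per pixel
    peak = np.percentile(stack, 95, axis=0)    # robust per-pixel maximum
    return np.clip(mean / np.maximum(peak, 1e-6), 0.0, 1.0)
```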
