Results 1 - 4 of 4
1.
Article in English | MEDLINE | ID: mdl-37938966

ABSTRACT

Image alignment and registration methods typically rely on visual correspondences across common regions and boundaries to guide the alignment process. Without them, the problem becomes significantly more challenging. Nevertheless, in the real world, image fragments may be corrupted, with no common boundaries and little or no overlap. In this work, we address the problem of learning the alignment of image fragments with gaps (i.e., without common boundaries or overlapping regions). Our setting is unsupervised: we have only the fragments at hand, with no ground truth to guide the alignment process. This is usually the situation in the restoration of unique archaeological artifacts such as frescoes and mosaics. Hence, we suggest a self-supervised approach utilizing self-examples, which we generate from the existing data and then feed into an adversarial neural network. Our idea is that the information available inside fragments is often sufficiently rich to guide their alignment with good accuracy. Following this observation, our method splits the initial fragments into sub-fragments, yielding a set of aligned pieces. Sub-fragmentation thus exposes new alignment relations and reveals inner structures and feature statistics. In effect, the new sub-fragments construct true and false alignment relations between fragments. We feed this data to a spatial transformer GAN, which learns to predict the alignment across fragment gaps. We test our technique on various synthetic datasets as well as large-scale frescoes and mosaics. Results demonstrate our method's capability to learn the alignment of deteriorated image fragments in a self-supervised manner, by examining inner image statistics, for both synthetic and real data.
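The self-example generation step described in this abstract can be sketched as follows: a fragment is split into two sub-fragments separated by a gap, producing a "true" pair (correct relative offset) and a "false" pair (random misalignment) for training. The function name, gap width, and offset range below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def make_alignment_pairs(fragment, gap=4, rng=None):
    """Split a fragment into left/right sub-fragments separated by a gap.

    Returns a "true" pair (correct relative offset, 0) and a "false"
    pair (random vertical misalignment), mimicking the self-supervised
    generation of true/false alignment relations. Illustrative only.
    """
    rng = rng or np.random.default_rng(0)
    h, w = fragment.shape[:2]
    mid = w // 2
    left = fragment[:, : mid - gap // 2]              # left sub-fragment
    right = fragment[:, mid + gap // 2 :]             # right sub-fragment
    true_pair = (left, right, 0)                      # correctly aligned
    shift = int(rng.integers(1, h // 4))              # random wrong offset
    false_right = np.roll(right, shift, axis=0)       # misaligned copy
    false_pair = (left, false_right, shift)
    return true_pair, false_pair

frag = np.arange(64 * 64, dtype=float).reshape(64, 64)
(tl, tr, t_off), (fl, fr, f_off) = make_alignment_pairs(frag)
```

In the full method such pairs would supervise a spatial transformer GAN; here they simply demonstrate how labeled alignment examples arise from the fragments themselves.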

2.
IEEE Trans Vis Comput Graph ; 27(7): 3238-3249, 2021 07.
Article in English | MEDLINE | ID: mdl-31985422

ABSTRACT

In this article, we introduce a differentiable rendering module that allows neural networks to efficiently process 3D data. The module is composed of continuous, piecewise differentiable functions defined by a sensor array of cells embedded in 3D space. Our module is learnable and can be easily integrated into neural networks, allowing data rendering to be optimized for specific learning tasks with gradient-based methods in an end-to-end fashion. Essentially, the module's sensor cells are allowed to transform independently and to locally focus on and sense different parts of the 3D data. Thus, through their optimization process, cells learn to focus on important parts of the data, bypassing occlusions, clutter, and noise. Since the sensor cells originally lie on a grid, this amounts to a highly non-linear rendering of the scene into a 2D image. Our module performs especially well in the presence of clutter and occlusions, and it handles non-linear deformations, improving classification accuracy through proper rendering of the data. In our experiments, we apply our module to various learning tasks and demonstrate that, using our rendering module, we accomplish efficient classification, localization, and segmentation on 2D/3D cluttered and non-cluttered data.
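A minimal sketch of the sensing idea, assuming a Gaussian-weighted readout (the kernel, sigma, and function names are my assumptions, not the paper's): each cell of a grid embedded in 3D accumulates nearby point values with smooth distance weights, so the resulting 2D image is differentiable with respect to the cell positions, which is what makes them optimizable end-to-end.

```python
import numpy as np

def sense(points, values, cells, sigma=0.5):
    """Render 3D point data to a 2D image via a grid of sensor cells.

    Each cell accumulates point values weighted by a smooth Gaussian of
    distance, so the output is differentiable w.r.t. cell positions.
    A sketch under assumed choices, not the paper's exact formulation.
    """
    # pairwise squared distances, shape (n_cells, n_points)
    d2 = ((cells[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    # weighted average of sensed values per cell (eps avoids 0/0)
    return (w * values[None, :]).sum(-1) / (w.sum(-1) + 1e-8)

# a 4x4 sensor grid embedded in the z = 0 plane
g = np.linspace(-1, 1, 4)
cells = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)
cells = np.concatenate([cells, np.zeros((16, 1))], 1)   # (16, 3)

points = np.random.default_rng(0).normal(size=(100, 3))
values = np.ones(100)                                   # constant signal
img = sense(points, values, cells).reshape(4, 4)
```

In training, the cell coordinates would be parameters updated by gradient descent so that cells drift toward informative, unoccluded parts of the scene.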

3.
IEEE Trans Vis Comput Graph ; 26(10): 3037-3050, 2020 Oct.
Article in English | MEDLINE | ID: mdl-31056499

ABSTRACT

3D printed objects are rapidly becoming prevalent in science, technology, and daily life. An important question is how to obtain strong and durable 3D models using standard printing techniques. This question is often translated into computing smartly designed interior structures that provide strong support and yield resistant 3D models. In this paper, we suggest a combination of 3D printing and material injection to achieve strong 3D printed objects. We utilize triply periodic minimal surfaces (TPMS) to define novel interior support structures. TPMS have closed-form expressions and can be computed in a simple and straightforward manner. Since TPMS are smooth and connected, we utilize them to define channels that adequately distribute injected materials in the shape interior. To account for weak regions, the TPMS channels are locally optimized according to the shape's stress field. After the object is printed, we simply inject the TPMS channels with materials that solidify and yield a strong inner structure that supports the shape. Our method allows injecting a wide range of materials into an object's interior in a fast and easy manner. Results demonstrate the efficiency of strong printing achieved by combining 3D printing and injection.
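The closed-form nature of TPMS is easy to see with the gyroid, a classic example. The sketch below voxelizes a thickened gyroid level set as an injectable channel volume; the fixed `thickness` stands in for the locally, stress-aware optimized channel width of the paper, and all parameter values are illustrative.

```python
import numpy as np

def gyroid(x, y, z):
    """Closed-form gyroid, a classic triply periodic minimal surface."""
    return (np.sin(x) * np.cos(y)
            + np.sin(y) * np.cos(z)
            + np.sin(z) * np.cos(x))

def channel_mask(n=32, periods=2.0, thickness=0.3):
    """Voxelize |gyroid| < thickness as an injectable channel volume.

    A uniform 'thickness' is an illustrative simplification; a
    stress-aware variant would vary it per voxel near weak regions.
    """
    t = np.linspace(0.0, 2.0 * np.pi * periods, n)
    x, y, z = np.meshgrid(t, t, t, indexing="ij")
    return np.abs(gyroid(x, y, z)) < thickness

mask = channel_mask()
frac = mask.mean()   # fraction of the interior occupied by channels
```

Because the gyroid is smooth and globally connected, the resulting channel network reaches the whole interior, which is exactly the property the injection step relies on.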

4.
IEEE Trans Vis Comput Graph ; 22(12): 2608-2618, 2016 12.
Article in English | MEDLINE | ID: mdl-26731769

ABSTRACT

We introduce a 3D tree modeling technique that utilizes examples of real trees to enhance tree creation with realistic structures and fine-level details. In contrast to previous works that use smooth generalized cylinders to represent tree branches, our method generates realistic-looking tree models with complex branching geometry by employing an exemplar database of real-life trees reconstructed from scanned data. These trees are sliced into representative parts (denoted tree-cuts) representing trunk logs and branching structures. In the modeling process, tree-cuts are positioned in space in an intuitive manner, serving as efficient proxies that guide the creation of the complete tree. Allometry rules are taken into account to ensure reasonable relations between adjacent branches. Realism is further enhanced by automatically transferring geometric textures from our database onto tree branches, as well as by guided growing of foliage. Our results demonstrate the complexity and variety of trees that can be generated with our method within a few minutes. We carried out a user study to test the effectiveness of our modeling technique.
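An allometry rule of the kind mentioned above can be sketched with a da Vinci-style constraint, where the parent branch's cross-sectional area is preserved across its children (r_p**a == sum of r_c**a, with exponent a ≈ 2). The equal-children split, exponent, and function name are my illustrative assumptions, not details from the paper.

```python
import math

def child_radii(parent_radius, n_children, exponent=2.0):
    """Distribute a parent branch radius among equal children using an
    allometric (da Vinci-style) rule: r_p**a == sum(r_c**a).

    Equal children are an illustrative simplification; real branching
    splits radii unevenly.
    """
    rc = parent_radius * (1.0 / n_children) ** (1.0 / exponent)
    return [rc] * n_children

# with exponent 2.0, two children of radius 1/sqrt(2) each preserve
# the parent's cross-sectional area
kids = child_radii(1.0, 2)
```

Rules like this keep adjacent branches in plausible proportion when tree-cuts from different exemplars are combined into one tree.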
