Results 1 - 7 of 7
1.
iScience ; 21: 428-435, 2019 Nov 22.
Article in English | MEDLINE | ID: mdl-31706138

ABSTRACT

Oncogene amplification is one of the most common genetic driver events in cancer, potently promoting tumor development, growth, and progression. The recent discovery that amplified oncogenes commonly reside on extrachromosomal DNA (ecDNA), which drives intratumoral genetic heterogeneity and high copy number owing to its non-chromosomal mechanism of inheritance, raises important questions about how the subnuclear location of amplified oncogenes mediates tumor pathogenesis. Next-generation sequencing is powerful but provides no spatial resolution for amplified oncogenes, and new approaches are needed to accurately quantify oncogenes located on ecDNA. Here, we introduce ecSeg, an image analysis tool that integrates conventional microscopy with deep neural networks to accurately resolve ecDNA and oncogene amplification at the single-cell level.
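ecSeg itself performs deep-network segmentation of microscopy images; as a minimal, hypothetical sketch of the downstream quantification step only, the function below counts small connected components in a binary segmentation mask (small foci as an ecDNA stand-in, larger components treated as chromosomes). The function name and the `max_area` cutoff are illustrative assumptions, not part of the published tool.

```python
import numpy as np
from collections import deque

def count_foci(mask, max_area=4):
    """Count small 4-connected components in a binary segmentation mask
    (toy stand-in for ecDNA foci); components larger than max_area are
    treated as chromosomes and ignored."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    foci = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and not seen[i, j]:
                area, queue = 0, deque([(i, j)])
                seen[i, j] = True
                while queue:                      # flood fill one component
                    r, c = queue.popleft()
                    area += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                                and mask[rr, cc] and not seen[rr, cc]):
                            seen[rr, cc] = True
                            queue.append((rr, cc))
                if area <= max_area:
                    foci += 1
    return foci
```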

2.
IEEE Trans Pattern Anal Mach Intell ; 41(8): 1828-1843, 2019 08.
Article in English | MEDLINE | ID: mdl-30106706

ABSTRACT

Recent data-driven approaches to scene interpretation predominantly pose inference as an end-to-end black-box mapping, commonly performed by a Convolutional Neural Network (CNN). However, decades of work on perceptual organization in both human and machine vision suggest that there are often intermediate representations that are intrinsic to an inference task, and which provide essential structure to improve generalization. In this work, we explore an approach for injecting prior domain structure into neural network training by supervising hidden layers of a CNN with intermediate concepts that normally are not observed in practice. We formulate a probabilistic framework which formalizes these notions and predicts improved generalization via this deep supervision method. One advantage of this approach is that we are able to train only from synthetic CAD renderings of cluttered scenes, where concept values can be extracted, but apply the results to real images. Our implementation achieves state-of-the-art performance on 2D/3D keypoint localization and image classification on real-image benchmarks including KITTI, PASCAL VOC, PASCAL3D+, IKEA, and CIFAR100. We provide additional evidence that our approach outperforms alternative forms of supervision, such as multi-task networks.
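The core training idea, an auxiliary loss that ties a hidden layer to an intermediate concept, can be sketched with a toy two-layer network. All names, shapes, and the weight `lam` below are illustrative assumptions, not the paper's notation or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network x -> h -> y, plus a linear "concept head" c = C @ h that
# reads an intermediate concept out of the hidden layer for supervision.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
C = rng.normal(size=(1, 4))            # hypothetical concept head

def losses(x, y_true, concept_true, lam=0.5):
    """Return (total, main, aux): the deep-supervised total loss is the
    main task loss plus a weighted concept loss on the hidden layer."""
    h = np.tanh(W1 @ x)                # hidden representation
    y = W2 @ h                         # main task output
    c = C @ h                          # predicted intermediate concept
    main = float(np.mean((y - y_true) ** 2))
    aux = float(np.mean((c - concept_true) ** 2))
    return main + lam * aux, main, aux
```

During training, the gradient of the auxiliary term shapes the hidden representation toward the concept; at test time the concept head is simply discarded.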

3.
IEEE Trans Pattern Anal Mach Intell ; 40(3): 740-754, 2018 03.
Article in English | MEDLINE | ID: mdl-28320650

ABSTRACT

Light-field cameras have recently emerged as a powerful tool for one-shot passive 3D shape capture. However, obtaining the shape of glossy objects like metals or plastics remains challenging, since standard Lambertian cues like photo-consistency cannot be easily applied. In this paper, we derive a spatially-varying (SV)BRDF-invariant theory for recovering 3D shape and reflectance from light-field cameras. Our key theoretical insight is a novel analysis of diffuse plus single-lobe SVBRDFs under a light-field setup. We show that, although direct shape recovery is not possible, an equation relating depths and normals can still be derived. Using this equation, we then propose using a polynomial (quadratic) shape prior to resolve the shape ambiguity. Once shape is estimated, we also recover the reflectance. We present extensive synthetic data on the entire MERL BRDF dataset, as well as a number of real examples to validate the theory, where we simultaneously recover shape and BRDFs from a single image taken with a Lytro Illum camera.
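The polynomial (quadratic) shape prior mentioned above amounts to restricting depth to a low-order surface. A minimal sketch, assuming a plain least-squares fit (the function name is mine; the paper's solver combines this prior with the derived depth-normal equation rather than fitting depth directly):

```python
import numpy as np

def fit_quadratic_surface(x, y, z):
    """Least-squares fit of z = a x^2 + b x y + c y^2 + d x + e y + f,
    the kind of quadratic shape prior used to resolve the shape
    ambiguity once only a depth-normal relation is available."""
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs  # (a, b, c, d, e, f)
```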

4.
IEEE Trans Pattern Anal Mach Intell ; 38(7): 1283-97, 2016 07.
Article in English | MEDLINE | ID: mdl-26415156

ABSTRACT

Psychophysical studies show that motion cues inform shape perception even when reflectance is unknown. Recent work in computer vision has considered shape recovery for an object of unknown BRDF using light source or object motion. This paper proposes a theory that addresses the remaining problem of determining shape from the (small or differential) motion of the camera, for unknown isotropic BRDFs. Our theory derives a differential stereo relation that relates camera motion to surface depth, which generalizes traditional Lambertian assumptions. Under orthographic projection, we show that differential stereo may not determine shape for general BRDFs, but suffices to yield an invariant for several restricted (still unknown) BRDFs exhibited by common materials. For the perspective case, we show that differential stereo yields the surface depth for unknown isotropic BRDF and unknown directional lighting, while additional constraints are obtained with restrictions on the BRDF or lighting. The limits imposed by our theory are intrinsic to the shape recovery problem and independent of the choice of reconstruction method. We also illustrate trends shared by theories on shape from differential motion of light source, object, or camera, to relate the hardness of surface reconstruction to the complexity of the imaging setup.
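As a much-simplified instance of the perspective result, the classical Lambertian brightness-constancy case already relates differential flow to depth: for a point (x, y) in normalized image coordinates and a small camera translation t, the flow is u = (x·tz - tx)/Z, v = (y·ty-analogue)/Z. The sketch below assumes that case and a pure translation, whereas the paper's theory covers unknown isotropic BRDFs; the function name and least-squares combination are my own.

```python
import numpy as np

def depth_from_flow(x, y, u, v, t):
    """Recover perspective depth Z from the differential flow (u, v) at
    normalized image point (x, y) under a small camera translation
    t = (tx, ty, tz), assuming u = (x*tz - tx)/Z and v = (y*tz - ty)/Z.
    The two components are combined by least squares."""
    tx, ty, tz = t
    num = np.array([x * tz - tx, y * tz - ty])   # equals Z * (u, v)
    flow = np.array([u, v])
    return float(num @ flow / (flow @ flow))
```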

5.
IEEE Trans Pattern Anal Mach Intell ; 38(4): 730-43, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26513777

ABSTRACT

We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use the known height of the camera above the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average-case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
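The scale-drift correction reduces to comparing the known camera height with the camera-to-ground distance in the up-to-scale reconstruction. A minimal sketch, assuming a plain SVD plane fit to triangulated ground points in the camera frame (the paper's actual estimator is the covariance-adaptive cue combination described above; the function name is mine):

```python
import numpy as np

def scale_from_ground_plane(points, camera_height):
    """Fit a plane to up-to-scale 3D ground points (camera at origin),
    then return the factor that rescales the reconstruction so the
    camera-to-plane distance equals the known camera height."""
    centroid = points.mean(axis=0)
    # Plane normal = singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    h_est = abs(n @ centroid)          # distance from camera to plane
    return camera_height / h_est       # multiply SFM translations by this
```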

6.
IEEE Trans Pattern Anal Mach Intell ; 35(12): 2941-55, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24136432

ABSTRACT

This paper presents a comprehensive theory of photometric surface reconstruction from image derivatives in the presence of a general, unknown isotropic BRDF. We derive precise topological classes up to which the surface may be determined and specify exact priors for a full geometric reconstruction. These results are the culmination of a series of fundamental observations. First, we exploit the linearity of chain rule differentiation to discover photometric invariants that relate image derivatives to the surface geometry, regardless of the form of isotropic BRDF. For the problem of shape-from-shading, we show that a reconstruction may be performed up to isocontours of constant magnitude of the gradient. For the problem of photometric stereo, we show that just two measurements of spatial and temporal image derivatives, from unknown light directions on a circle, suffice to recover surface information from the photometric invariant. Surprisingly, the form of the invariant bears a striking resemblance to optical flow; however, it does not suffer from the aperture problem. This photometric flow is shown to determine the surface up to isocontours of constant magnitude of the surface gradient, as well as isocontours of constant depth. Further, we prove that specification of the surface normal at a single point completely determines the surface depth from these isocontours. In addition, we propose practical algorithms that require additional initial or boundary information, but recover depth from lower order derivatives. Our theoretical results are illustrated with several examples on synthetic and real data.
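The "up to isocontours of constant gradient magnitude" ambiguity can be visualized on synthetic data. The toy script below (my own illustration, not the paper's algorithm) builds a paraboloid, for which |∇z| depends only on the radius, so the isocontours that shading pins down are concentric circles:

```python
import numpy as np

# Synthetic surface z = (x^2 + y^2) / 2, for which |grad z| = sqrt(x^2 + y^2):
# the shape-from-shading result says an unknown isotropic BRDF determines
# the surface only up to these isocontours (circles here).
n = 64
xs = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(xs, xs)
Z = 0.5 * (X ** 2 + Y ** 2)
dzdy, dzdx = np.gradient(Z, xs, xs)    # axis 0 is y, axis 1 is x
grad_mag = np.hypot(dzdx, dzdy)        # constant along each circle of radius r
```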

7.
IEEE Trans Pattern Anal Mach Intell ; 33(10): 2122-8, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21670483

ABSTRACT

Inverse light transport seeks to undo global illumination effects, such as interreflections, that pervade images of most scenes. This paper presents the theoretical and computational foundations for inverse light transport as a dual of forward rendering. Mathematically, this duality is established through the existence of underlying Neumann series expansions. Physically, it can be shown that each term of our inverse series cancels an interreflection bounce, just as the forward series adds them. While the convergence properties of the forward series are well known, we show that the oscillatory convergence of the inverse series leads to more interesting conditions on material reflectance. Conceptually, the inverse problem requires the inversion of a large light transport matrix, which is impractical for realistic resolutions using standard techniques. A natural consequence of our theoretical framework is a suite of fast computational algorithms for light transport inversion, analogous to finite element radiosity, Monte Carlo, and wavelet-based methods in forward rendering, that rely at most on matrix-vector multiplications. We demonstrate two practical applications, namely, separation of individual bounces of the light transport and fast projector radiometric compensation, to display images free of global illumination artifacts in real-world environments.
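The forward/inverse Neumann duality can be demonstrated numerically at toy scale. With a one-bounce operator A of spectral radius below one, the forward series I + A + A² + ... sums the interreflection bounces; writing the total transport as T = I + S, the inverse series I - S + S² - ... converges (with the oscillating signs noted above) to T⁻¹. This dense-matrix sketch is illustrative only; the paper's algorithms work at realistic resolutions via matrix-vector products.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = 0.05 * rng.random((n, n))          # toy one-bounce operator, spectral radius < 1
I = np.eye(n)

# Forward Neumann series: total transport T = I + A + A^2 + ...
T = I.copy()
term = I.copy()
for _ in range(50):
    term = term @ A
    T += term

# Inverse series: with S = T - I (all indirect bounces),
# T^{-1} = I - S + S^2 - S^3 + ...  (oscillatory convergence).
S = T - I
Tinv = I.copy()
term = I.copy()
for k in range(1, 50):
    term = term @ S
    Tinv += (-1) ** k * term

emitted = rng.random(n)
observed = T @ emitted                 # forward rendering adds interreflections
recovered = Tinv @ observed            # inversion cancels them bounce by bounce
```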
