Results 1 - 8 of 8
1.
Appl Opt ; 54(11): 3422-7, 2015 Apr 10.
Article in English | MEDLINE | ID: mdl-25967333

ABSTRACT

We report a new technique for building a wide-angle, lightweight, thin-form-factor, cost-effective, easy-to-manufacture near-eye head-mounted display (HMD) for virtual reality applications. Our approach combines an aperture mask containing an array of pinholes with a screen that serves as the source of imagery. We demonstrate proof-of-concept HMD prototypes with a binocular field of view (FOV) of 70°×45°, or a total diagonal FOV of 83°. This FOV should increase with increasing display panel size. The optical angular resolution supported by our prototype reaches 1.4-2.1 arcmin when a display with a 20-30 µm pixel pitch is adopted.
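As a rough check of the quoted numbers: the angle subtended by one pixel at the eye is approximately pitch/distance. Assuming a hypothetical optical distance of about 49 mm (this distance is not stated in the abstract), the quoted pixel pitches reproduce the quoted angular resolutions:

```python
import math

def angular_resolution_arcmin(pixel_pitch_m, viewing_distance_m):
    """Angle subtended by one pixel at the eye, in arcminutes."""
    return math.degrees(pixel_pitch_m / viewing_distance_m) * 60.0

# With an assumed ~49 mm optical distance, 20 um and 30 um pitches
# map onto roughly 1.4 and 2.1 arcmin respectively.
for pitch_um in (20, 30):
    arcmin = angular_resolution_arcmin(pitch_um * 1e-6, 0.049)
    print(f"{pitch_um} um pitch -> {arcmin:.1f} arcmin")
```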

2.
IEEE Trans Med Imaging ; 42(7): 2044-2056, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37021996

ABSTRACT

Federated learning (FL) allows the collaborative training of AI models without needing to share raw data. This capability makes it especially interesting for healthcare applications, where patient and data privacy are of utmost concern. However, recent works on the inversion of deep neural networks from model gradients have raised concerns about the security of FL in preventing the leakage of training data. In this work, we show that the attacks presented in the literature are impractical in FL use cases where the clients' training involves updating the Batch Normalization (BN) statistics, and we provide a new baseline attack that works for such scenarios. Furthermore, we present new ways to measure and visualize potential data leakage in FL. Our work is a step towards establishing reproducible methods of measuring data leakage in FL and could help determine the optimal tradeoffs between privacy-preserving techniques, such as differential privacy, and model accuracy based on quantifiable metrics.
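The kind of leakage such gradient-inversion attacks exploit can be illustrated with a toy example (this is a generic illustration, not the paper's BN-aware baseline attack): for a single linear layer y = Wx + b, the shared gradients alone determine the private input exactly, since dL/dW is the outer product of dL/db with x.

```python
import numpy as np

# Toy single-layer leakage: for y = W @ x + b and any loss L,
# dL/dW = outer(dL/dy, x) and dL/db = dL/dy, so the private input x
# can be read off as an elementwise ratio of the two shared gradients.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)          # a client's private input
W = rng.standard_normal((3, 4))
b = rng.standard_normal(3)

g_y = (W @ x + b) - 1.0             # grad of 0.5*||y - 1||^2 w.r.t. y
g_W = np.outer(g_y, x)              # gradient the server would receive
g_b = g_y

x_recovered = g_W[0] / g_b[0]       # exact recovery from gradients alone
```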


Subject(s)
Neural Networks, Computer; Supervised Machine Learning; Humans; Privacy; Medical Informatics
3.
Med Image Anal ; 82: 102624, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36208571

ABSTRACT

An important challenge and limiting factor in deep learning methods for medical image segmentation is the lack of available annotated data to properly train models. For the specific task of tumor segmentation, the process entails clinicians labeling every slice of volumetric scans for every patient, which becomes prohibitive at the scale of datasets required to train neural networks to optimal performance. To address this, we propose a novel semi-supervised framework that allows training any segmentation (encoder-decoder) model using only information readily available in radiological data, namely the presence of a tumor in the image, in addition to a few annotated images. Specifically, we conjecture that a generative model performing domain translation on this weak label (healthy vs. diseased scans) helps achieve tumor segmentation. The proposed GenSeg method first disentangles tumoral tissue from healthy "background" tissue. The latent representation is separated into (1) the common background information across both domains, and (2) the unique tumoral information. GenSeg then achieves diseased-to-healthy image translation by decoding a healthy version of the image from just the common representation, as well as a residual image that allows adding back the tumors. The same decoder that produces this residual tumor image also outputs a tumor segmentation. Implicit data augmentation is achieved by re-using the same framework for healthy-to-diseased image translation, where a residual tumor image is produced from a prior distribution. By performing both image translation and segmentation simultaneously, GenSeg allows training on only partially annotated datasets. To test the framework, we trained U-Net-like architectures using GenSeg and evaluated their performance on 3 variants of a synthetic task, as well as on 2 benchmark datasets: brain tumor segmentation in MRI (derived from BraTS) and liver metastasis segmentation in CT (derived from LiTS).
Our method outperforms the baseline semi-supervised (autoencoder and mean teacher) and supervised segmentation methods, with improvements of 8-14% in Dice score on the brain task and 5-8% on the liver task, when only 1% of the training images were annotated. These results show that the proposed framework is well suited to training deep segmentation models when a large portion of the available data is unlabeled and unpaired, a common issue in tumor segmentation.
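The additive decomposition the framework relies on can be sketched schematically (shapes and values are purely illustrative, not the actual GenSeg decoders): the diseased image is modeled as a decoded healthy background plus a tumor residual, and the segmentation falls out of that residual.

```python
import numpy as np

# Schematic of GenSeg's additive decomposition: diseased = healthy + residual,
# where the residual carries only tumoral information.
healthy = np.zeros((8, 8))           # stand-in for the decoded healthy image
diseased = healthy.copy()
diseased[2:4, 2:4] = 1.0             # a synthetic 2x2 "tumor"

residual = diseased - healthy        # what the residual decoder must output
segmentation = (np.abs(residual) > 0.5).astype(int)  # mask read off the residual
```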


Subject(s)
Deep Learning; Humans; Image Processing, Computer-Assisted/methods; Neoplasm, Residual; Neural Networks, Computer; Magnetic Resonance Imaging
4.
IEEE Trans Pattern Anal Mach Intell ; 43(7): 2360-2372, 2021 Jul.
Article in English | MEDLINE | ID: mdl-31995476

ABSTRACT

Generating computer graphics (CG) rendered synthetic images has been widely used to create simulation environments for robotics/autonomous driving and to generate labeled data. Yet the problem of training models purely with synthetic data remains challenging due to the considerable domain gaps caused by current limitations on rendering. In this paper, we propose a simple yet effective domain adaptation framework toward closing this gap at the image level. Unlike many GAN-based approaches, our method aims to match the covariance of the universal feature embeddings across domains, making the adaptation a fast, convenient step and avoiding the need for potentially difficult GAN training. To align domains more precisely, we further propose a conditional covariance matching framework which iteratively estimates semantic segmentation regions and conditionally matches the class-wise feature covariance given the segmentation regions. We demonstrate that both tasks can mutually refine and considerably improve each other, leading to state-of-the-art domain adaptation results. Extensive experiments under multiple synthetic-to-real settings show that our approach exceeds the performance of the latest domain adaptation approaches. In addition, we offer a quantitative analysis in which our framework shows a considerable reduction in Fréchet Inception distance between source and target domains, demonstrating the effectiveness of this work in bridging the synthetic-to-real domain gap.
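Unconditional feature-covariance matching can be sketched with a whitening-coloring transform; the class-conditional version described above applies the same idea per estimated segmentation region. All shapes here are illustrative, and this is a sketch of the general technique rather than the paper's implementation:

```python
import numpy as np

def match_covariance(f_src, f_tgt):
    """Whiten source features, then color them with target statistics,
    so the adapted features share the target's channel covariance."""
    mu_s = f_src.mean(1, keepdims=True)
    mu_t = f_tgt.mean(1, keepdims=True)
    cs = (f_src - mu_s) @ (f_src - mu_s).T / (f_src.shape[1] - 1)
    ct = (f_tgt - mu_t) @ (f_tgt - mu_t).T / (f_tgt.shape[1] - 1)
    Es, ds, _ = np.linalg.svd(cs)
    Et, dt, _ = np.linalg.svd(ct)
    whiten = Es @ np.diag(ds ** -0.5) @ Es.T   # removes source correlations
    color = Et @ np.diag(dt ** 0.5) @ Et.T     # imposes target correlations
    return color @ whiten @ (f_src - mu_s) + mu_t

rng = np.random.default_rng(0)
f_src = rng.standard_normal((3, 500))              # C x N source features
f_tgt = 2.0 * rng.standard_normal((3, 500)) + 1.0  # differently distributed target
f_adapted = match_covariance(f_src, f_tgt)
```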

5.
IEEE Trans Pattern Anal Mach Intell ; 42(6): 1408-1423, 2020 Jun.
Article in English | MEDLINE | ID: mdl-30676944

ABSTRACT

We investigate two crucial and closely related aspects of CNNs for optical flow estimation: models and training. First, we design a compact but effective CNN model, called PWC-Net, according to simple and well-established principles: pyramidal processing, warping, and cost volume processing. PWC-Net is 17 times smaller in size, 2 times faster in inference, and 11 percent more accurate on Sintel final than the recent FlowNet2 model. It is the winning entry in the optical flow competition of the Robust Vision Challenge. Next, we experimentally analyze the sources of our performance gains. In particular, we use the same training procedure for PWC-Net to retrain FlowNetC, a sub-network of FlowNet2. The retrained FlowNetC is 56 percent more accurate on Sintel final than the previously trained one and even 5 percent more accurate than the FlowNet2 model. We further improve the training procedure and increase the accuracy of PWC-Net on Sintel by 10 percent and on KITTI 2012 and 2015 by 20 percent. Our newly trained model parameters and training protocols are available at https://github.com/NVlabs/PWC-Net.
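Of the three named principles, cost volume processing amounts to correlating one feature map with the (warped) other over a small window of integer displacements. A naive loop-based sketch of this operation, with illustrative shapes and no claim to match PWC-Net's exact implementation:

```python
import numpy as np

def cost_volume(f1, f2, max_disp):
    """Correlation cost volume between feature maps f1, f2 of shape (C, H, W),
    over integer displacements in [-max_disp, max_disp]^2."""
    C, H, W = f1.shape
    d = max_disp
    f2p = np.pad(f2, ((0, 0), (d, d), (d, d)))   # zero-pad spatial dims
    vol = np.empty(((2 * d + 1) ** 2, H, W))
    k = 0
    for dy in range(2 * d + 1):
        for dx in range(2 * d + 1):
            # channel-averaged dot product of f1 with shifted f2
            vol[k] = (f1 * f2p[:, dy:dy + H, dx:dx + W]).sum(0) / C
            k += 1
    return vol

rng = np.random.default_rng(0)
f1 = rng.standard_normal((4, 5, 6))
vol = cost_volume(f1, f1, 2)   # self-correlation as a sanity check
```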


Subject(s)
Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Databases, Factual; Humans; Software
6.
IEEE Trans Image Process ; 28(2): 723-738, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30222562

ABSTRACT

Non-local-means image denoising is based on processing a set of neighbors for a given reference patch. A few nearest neighbors (NN) can be used to limit the computational burden of the algorithm. Resorting to a toy problem, we show analytically that sampling neighbors with the NN approach introduces a bias in the denoised patch. We propose a different neighbor-collection criterion to alleviate this issue, which we name statistical NN (SNN). Our approach outperforms the traditional one for both white and colored noise: fewer SNNs can be used to generate images of superior quality, at a lower computational cost. A detailed investigation of our toy problem explains the differences between NN and SNN from a grounded point of view. The intuition behind SNN is quite general, and it leads to image quality improvement also in the case of bilateral filtering. The MATLAB code to replicate the results presented in the paper is freely available.
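A schematic reading of the SNN criterion, assuming additive white Gaussian noise with known sigma (a sketch of the idea, not the paper's MATLAB code): rather than selecting the smallest patch distances, select distances closest to the value expected between two independently noisy copies of the same clean patch, which is 2·sigma² per pixel. The nearest neighbors of a noisy reference are biased toward patches sharing its noise, which the exact-duplicate candidate below makes explicit.

```python
import numpy as np

def snn_select(ref, candidates, sigma, k):
    """Pick k neighbor patches. Classic NN would take the smallest distances;
    SNN takes distances closest to 2*sigma^2, the expected per-pixel squared
    distance between two independently noisy copies of the same patch."""
    d = ((candidates - ref) ** 2).mean(axis=1)
    return np.argsort(np.abs(d - 2.0 * sigma ** 2))[:k]

rng = np.random.default_rng(0)
sigma = 0.1
clean = rng.standard_normal(64)                 # a clean 8x8 patch, flattened
ref = clean + sigma * rng.standard_normal(64)   # noisy reference
# Candidate 0 duplicates the reference (distance 0: the biased NN pick);
# the rest are independently noisy copies of the same clean patch.
candidates = np.stack(
    [ref] + [clean + sigma * rng.standard_normal(64) for _ in range(6)]
)
picked = snn_select(ref, candidates, sigma, 3)  # avoids the duplicate
```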

7.
IEEE Comput Graph Appl ; 33(6): 48-57, 2013.
Article in English | MEDLINE | ID: mdl-24808130

ABSTRACT

A new method fabricates custom surface reflectance and spatially varying bidirectional reflectance distribution functions (svBRDFs). Researchers optimize a microgeometry for a range of normal distribution functions and simulate the resulting surface's effective reflectance. Using the simulation's results, they reproduce an input svBRDF's appearance by distributing the microgeometry on the printed material's surface. This method lets people print svBRDFs on planar samples with current 3D printing technology, even with a limited set of printing materials. It extends naturally to printing svBRDFs on arbitrary shapes.
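The optimization above targets normal distribution functions (NDFs), which describe the density of microfacet orientations on a surface. As generic background (a standard analytic NDF, not the paper's fabricated microgeometry model), the GGX/Trowbridge-Reitz NDF maps a roughness parameter to such a density:

```python
import math

def ndf_ggx(cos_theta_h, alpha):
    """GGX (Trowbridge-Reitz) normal distribution function: density of
    microfacet normals at angle theta_h from the macroscopic surface normal,
    with roughness alpha in (0, 1]."""
    c2 = cos_theta_h ** 2
    denom = c2 * (alpha ** 2 - 1.0) + 1.0
    return alpha ** 2 / (math.pi * denom ** 2)
```

Smaller alpha concentrates the density at the surface normal (a glossier appearance); larger alpha spreads it out (a rougher appearance).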

8.
IEEE Comput Graph Appl ; 32(6): 10-7, 2012.
Article in English | MEDLINE | ID: mdl-24807305

ABSTRACT

The Beaming project recreates, virtually, a real environment; using immersive VR, remote participants can visit the virtual model and interact with the people in the real environment. The real environment doesn't need extensive equipment and can be a space such as an office or meeting room, domestic environment, or social space.


Subject(s)
Robotics/instrumentation; User-Computer Interface; Videoconferencing/instrumentation; Humans; Imaging, Three-Dimensional