1.
Article in English | MEDLINE | ID: mdl-38875083

ABSTRACT

Recent telepresence systems have shown significant improvements in quality compared to prior systems. However, they struggle to achieve both low cost and high quality at the same time. In this work, we envision a future where telepresence systems become a commodity and can be installed on typical desktops. To this end, we present a high-quality view synthesis method that uses a cost-effective capture system that consists of commodity hardware accessible to the general public. We propose a neural renderer that uses a few RGBD cameras as input to synthesize novel views of a user and their surroundings. At the core of the renderer is Multi-Layer Point Cloud (MPC), a novel 3D representation that improves reconstruction accuracy by removing non-linear biases in depth cameras. Our temporally-aware renderer further improves the stability of synthesized videos by conditioning on past information. Additionally, we propose Spatial Skip Connections (SSC) to improve image upsampling under limited GPU memory. Experimental results show that our renderer outperforms recent methods in terms of view synthesis quality. Our method generalizes to new users and challenging content (e.g., hand gestures and clothing deformation) without costly per-video optimization, object templates, or heavy pre-processing. The code and dataset will be made available.
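
The abstract credits MPC with removing non-linear biases in depth cameras before view synthesis. A minimal sketch of that idea, assuming a per-camera cubic bias model fit offline against ground-truth depth (the coefficients, intrinsics, and function names below are illustrative placeholders, not the paper's):

```python
import numpy as np

# Hypothetical per-pixel depth-bias correction in the spirit of the
# Multi-Layer Point Cloud idea: undo a calibrated non-linear sensor bias,
# then back-project to 3D points. Coefficients are placeholders that a
# calibration step against ground-truth depth would supply.
CALIB_COEFFS = np.array([1.0e-9, -2.0e-6, 1.003, -1.5])  # cubic fit, in mm

def correct_depth(raw_depth_mm: np.ndarray) -> np.ndarray:
    """Apply the calibrated cubic correction to a raw depth map (mm)."""
    corrected = np.polyval(CALIB_COEFFS, raw_depth_mm.astype(np.float64))
    return np.where(raw_depth_mm > 0, corrected, 0.0)  # keep invalid pixels at 0

def backproject(depth_mm: np.ndarray, fx, fy, cx, cy) -> np.ndarray:
    """Back-project a depth map into an (H*W, 3) camera-space point cloud."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm / 1000.0                                # mm -> m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

raw = np.full((480, 640), 1500.0)                        # fake 1.5 m scene
pts = backproject(correct_depth(raw), fx=580, fy=580, cx=320, cy=240)
print(pts.shape)                                         # (307200, 3)
```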

2.
bioRxiv ; 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38645018

ABSTRACT

Over-activation of the epidermal growth factor receptor (EGFR) is a hallmark of glioblastoma. However, EGFR-targeted therapies have led to minimal clinical response. While delivery of EGFR inhibitors (EGFRis) to the brain constitutes a major challenge, how additional drug-specific features alter efficacy remains poorly understood. We apply highly multiplex single-cell chemical genomics to define the molecular response of glioblastoma to EGFRis. Using a deep generative framework, we identify shared and drug-specific transcriptional programs that group EGFRis into distinct molecular classes. We identify programs that differ by the chemical properties of EGFRis, including induction of adaptive transcription and modulation of immunogenic gene expression. Finally, we demonstrate that pro-immunogenic expression changes associated with a subset of tyrphostin family EGFRis increase the ability of T-cells to target glioblastoma cells.

3.
Stud Health Technol Inform ; 173: 372-8, 2012.
Article in English | MEDLINE | ID: mdl-22357021

ABSTRACT

We introduce the notion of Shader Lamps Virtual Patients (SLVP) - the combination of projector-based Shader Lamps Avatars and interactive virtual humans. This paradigm uses Shader Lamps Avatars technology to give a 3D physical presence to conversational virtual humans, improving their social interactivity and enabling them to share the physical space with the user. The paradigm scales naturally to multiple viewers, allowing for scenarios where an instructor and multiple students are involved in the training. We have developed a physical-virtual patient for medical students to conduct ophthalmic exams, in an interactive training experience. In this experience, the trainee practices multiple skills simultaneously, including using a surrogate optical instrument in front of a physical head, conversing with the patient about his fears, observing realistic head motion, and practicing patient safety. Here we present a prototype system and results from a preliminary formative evaluation of the system.


Subject(s)
Computer Simulation , Patients , User-Computer Interface , Clinical Competence , Diagnostic Techniques, Ophthalmological , Humans , Imaging, Three-Dimensional
4.
J Imaging ; 8(12)2022 Nov 29.
Article in English | MEDLINE | ID: mdl-36547484

ABSTRACT

This work aims to leverage medical augmented reality (AR) technology to counter the shortage of medical experts in low-resource environments. We present a complete and cross-platform proof-of-concept AR system that enables remote users to teach and train medical procedures without expensive medical equipment or external sensors. By seeing the 3D viewpoint and head movements of the teacher, the student can follow the teacher's actions on the real patient. Alternatively, it is possible to stream the 3D view of the patient from the student to the teacher, allowing the teacher to guide the student during the remote session. A pilot study of our system shows that it is easy to transfer detailed instructions through this remote teaching system and that the interface is easily accessible and intuitive for users. We provide a performant pipeline that synchronizes, compresses, and streams sensor data across parallel stages for efficiency, as sketched below.
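
A minimal sketch of how such a staged pipeline can run its synchronize/compress/stream steps in parallel; the frame contents, packet format, and queue sizes are illustrative assumptions, not details from the paper:

```python
import queue, struct, threading, time, zlib

# Capture, compression, and sending run as parallel stages joined by
# bounded queues, so a slow stage backpressures instead of dropping work.
frames = queue.Queue(maxsize=8)     # capture -> compressor
packets = queue.Queue(maxsize=8)    # compressor -> sender

def compressor():
    while True:
        ts, payload = frames.get()
        blob = zlib.compress(payload, 1)                  # fast lossless pass
        # Hypothetical wire format: timestamp + length prefix + payload.
        packets.put(struct.pack("!dI", ts, len(blob)) + blob)

def sender():
    # Stand-in for a socket writer: drain packets and report their sizes.
    while True:
        pkt = packets.get()
        print(f"streamed {len(pkt)} bytes")

threading.Thread(target=compressor, daemon=True).start()
threading.Thread(target=sender, daemon=True).start()

fake_depth_frame = bytes(640 * 480 * 2)                   # one 16-bit depth map
for _ in range(3):
    frames.put((time.time(), fake_depth_frame))
time.sleep(0.5)                                           # let the stages drain
```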

5.
J Vasc Interv Radiol ; 22(11): 1613-1618.e1, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21959057

ABSTRACT

PURPOSE: To develop a consistent and reproducible method in an animal model for studies of radiofrequency (RF) ablation of primary hepatocellular carcinoma (HCC). MATERIALS AND METHODS: Fifteen woodchucks were inoculated with woodchuck hepatitis virus (WHV) to establish chronic infections. When serum γ-glutamyl transpeptidase levels became elevated, the animals were evaluated with ultrasound, and, in most cases, preoperative magnetic resonance (MR) imaging to confirm tumor development. Ultimately, RF ablation of tumors was performed by using a 1-cm probe with the animal submerged in a water bath for grounding. Ablation effectiveness was evaluated with contrast-enhanced MR imaging and gross and histopathologic analysis. RESULTS: RF ablation was performed in 15 woodchucks. Modifications were made to the initial study design to adapt methodology for the woodchuck. The last 10 of these animals were treated with a standardized protocol using a 1-cm probe that produced a consistent area of tumor necrosis (mean size of ablation, 10.2 mm × 13.1 mm) and led to no complications. CONCLUSIONS: A safe, reliable and consistent method was developed to study RF ablation of spontaneous primary HCC using chronically WHV-infected woodchucks, an animal model of hepatitis B virus-induced HCC.


Subject(s)
Carcinoma, Hepatocellular/surgery , Catheter Ablation , Hepatitis B Virus, Woodchuck/pathogenicity , Hepatitis B/virology , Liver Neoplasms, Experimental/surgery , Animals , Biopsy , Carcinoma, Hepatocellular/pathology , Carcinoma, Hepatocellular/virology , Catheter Ablation/instrumentation , Contrast Media , Equipment Design , Hepatitis B/complications , Liver Neoplasms, Experimental/pathology , Liver Neoplasms, Experimental/virology , Magnetic Resonance Imaging , Marmota , Necrosis , Reproducibility of Results , Time Factors
6.
IEEE Trans Vis Comput Graph ; 27(11): 4194-4203, 2021 11.
Article in English | MEDLINE | ID: mdl-34449368

ABSTRACT

Computer-generated holographic (CGH) displays show great potential and are emerging as the next-generation displays for augmented and virtual reality and automotive heads-up displays. One of the critical problems harming the wide adoption of such displays is the presence of speckle noise inherent to holography, which compromises quality by introducing perceptible artifacts. Although speckle noise suppression has been an active research area, previous works have not considered the perceptual characteristics of the Human Visual System (HVS), which receives the final displayed imagery. However, it is well established that the sensitivity of the HVS is not uniform across the visual field, which has led to gaze-contingent rendering schemes for maximizing the perceptual quality of various computer-generated imagery. Inspired by this, we present the first method that reduces the "perceived speckle noise" by integrating the foveal and peripheral vision characteristics of the HVS, along with the retinal point spread function, into the phase hologram computation. Specifically, we introduce the anatomical and statistical retinal receptor distribution into our computational hologram optimization, which places a higher priority on reducing perceived foveal speckle noise while being adaptable to any individual's optical aberration on the retina. Our method demonstrates superior perceptual quality on our emulated holographic display. Our evaluations with objective measurements and subjective studies demonstrate a significant reduction in human-perceived noise.
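
As an illustration of prioritizing foveal error, here is a sketch of an eccentricity-weighted loss term of the kind such a hologram optimization could minimize; the falloff constant and function names are assumptions, not the paper's values:

```python
import numpy as np

# Eccentricity-based weight map for a perceptually weighted speckle loss:
# weight falls off with retinal eccentricity, echoing cone-density falloff.
# px_per_deg and e2 (half-density eccentricity, deg) are illustrative.
def retinal_weight_map(h, w, gaze_xy, px_per_deg=60.0, e2=2.3):
    """Weight ~ 1 / (1 + eccentricity / e2), peaking at the gaze point."""
    y, x = np.mgrid[0:h, 0:w]
    ecc_deg = np.hypot(x - gaze_xy[0], y - gaze_xy[1]) / px_per_deg
    return 1.0 / (1.0 + ecc_deg / e2)

def weighted_speckle_loss(recon, target, weights):
    """Mean weighted squared intensity error between reconstruction and target."""
    return np.mean(weights * (recon - target) ** 2)

W = retinal_weight_map(1080, 1920, gaze_xy=(960, 540))
target = np.random.rand(1080, 1920)
recon = target + 0.05 * np.random.randn(1080, 1920)   # speckled estimate
print(weighted_speckle_loss(recon, target, W))
```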


Subject(s)
Holography , Artifacts , Computer Graphics , Humans , Retina , Visual Fields
7.
IEEE Trans Vis Comput Graph ; 15(3): 383-94, 2009.
Article in English | MEDLINE | ID: mdl-19282546

ABSTRACT

Virtual Environments (VEs) that use a real-walking locomotion interface have typically been restricted in size to the area of the tracked lab space. Techniques proposed to lift this size constraint, enabling real walking in VEs that are larger than the tracked lab space, all require reorientation techniques (ROTs) in the worst-case situation, when a user is close to walking out of the tracked space. We propose a new ROT using visual and audial distractors (objects in the VE that the user focuses on while the VE rotates) and compare our method to current ROTs through three user studies. ROTs using distractors were preferred and ranked more natural by users. Users were also less aware of the rotating VE when ROTs with distractors were used. Our findings also suggest that improving visual realism and adding sound increased a user's feeling of presence.
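
A sketch of the core mechanic under stated assumptions: the scene rotates only while the user attends to a distractor, and rotation is injected faster during head turns, which help mask it. The rate limits here are illustrative, not the study's calibrated thresholds:

```python
# Distractor-based reorientation step: rotate the VE toward the safe
# direction only while gaze is held on the distractor, scaling the
# injected rate with head motion that masks it.
MAX_INJECT_DEG_PER_S = 10.0

def reorient_step(scene_yaw_deg, head_yaw_rate_deg_s, on_distractor, dt):
    if not on_distractor:
        return scene_yaw_deg                 # only rotate while gaze is held
    masking = min(1.0, 0.2 + abs(head_yaw_rate_deg_s) / 90.0)
    return scene_yaw_deg + MAX_INJECT_DEG_PER_S * masking * dt

yaw = 0.0
for _ in range(60):                          # one second at 60 Hz
    yaw = reorient_step(yaw, head_yaw_rate_deg_s=45.0, on_distractor=True,
                        dt=1 / 60)
print(f"{yaw:.2f} deg injected in 1 s")      # ~7 deg
```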


Subject(s)
Gait/physiology , Imaging, Three-Dimensional/methods , Orientation/physiology , Perceptual Masking/physiology , User-Computer Interface , Visual Perception/physiology , Walking/physiology , Computer Graphics , Ecosystem
8.
IEEE Trans Vis Comput Graph ; 25(11): 3125-3134, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31502977

ABSTRACT

Optical see-through augmented reality (AR) systems are a next-generation computing platform that offers unprecedented user experiences by seamlessly combining physical and digital content. Many of the traditional challenges of these displays have been significantly mitigated over the last few years, but the AR experiences offered by today's systems are far from seamless and perceptually realistic. Mutually consistent occlusions between physical and digital objects are typically not supported; when mutual occlusion is supported, it works only at a fixed depth. We propose a new optical see-through AR display system that renders mutual occlusion in a depth-dependent, perceptually realistic manner. To this end, we introduce varifocal occlusion displays based on focus-tunable optics, which comprise a varifocal lens system and spatial light modulators that enable depth-corrected hard-edge occlusions for AR experiences. We derive formal optimization methods and closed-form solutions for driving this tunable lens system and demonstrate a monocular varifocal occlusion-capable optical see-through AR display capable of perceptually realistic occlusion across a large depth range.
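
For intuition about driving such a tunable lens, a back-of-the-envelope sketch using the thin-lens equation: to make a fixed spatial light modulator optically conjugate to an occluder at a given depth, the lens must supply the matching power. The geometry below is invented for illustration and is not the paper's derivation:

```python
# Thin-lens bookkeeping for a varifocal occlusion stage: imaging an
# occluder at distance d_o onto an SLM at distance d_i requires
# 1/f = 1/d_o + 1/d_i (real object and real image, distances positive).
D_SLM = 0.05           # lens-to-SLM distance in meters (fixed by the design)

def lens_power_for_depth(d_occluder_m: float) -> float:
    """Diopters needed to image an occluder at d_occluder_m onto the SLM."""
    return 1.0 / d_occluder_m + 1.0 / D_SLM

for d in (0.25, 0.5, 1.0, 4.0):
    print(f"occluder at {d:4.2f} m -> {lens_power_for_depth(d):5.2f} D")
```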

9.
Laryngoscope ; 129 Suppl 3: S1-S11, 2019 10.
Article in English | MEDLINE | ID: mdl-31260127

ABSTRACT

OBJECTIVES/HYPOTHESIS: Augmented reality (AR) allows for the addition of transparent virtual images and video to one's view of a physical environment. Our objective was to develop a head-worn AR system for accurate, intraoperative localization of pathology and normal anatomic landmarks during open head and neck surgery. STUDY DESIGN: Face validity and case study. METHODS: A protocol was developed for the creation of three-dimensional (3D) virtual models based on computed tomography scans. Using the HoloLens AR platform, a novel system of registration and tracking was developed. Accuracy was determined in relation to actual physical landmarks. A face validity study was then performed in which otolaryngologists were asked to evaluate the technology and perform a simulated surgical task using AR image guidance. A case study highlighting the potential usefulness of the technology is also presented. RESULTS: An AR system was developed for intraoperative 3D visualization and localization. The average measured error was 2.47 ± 0.46 millimeters (1.99, 3.30). The face validity study supports the potential of this system to improve safety and efficiency in open head and neck surgical procedures. CONCLUSIONS: An AR system for accurate localization of pathology and normal anatomic landmarks of the head and neck is feasible with current technology. A face validity study reveals the potential value of the system in intraoperative image guidance. This application of AR, among others in the field of otolaryngology-head and neck surgery, promises to improve surgical efficiency and patient safety in the operating room. LEVEL OF EVIDENCE: 2b Laryngoscope, 129:S1-S11, 2019.
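
The registration step is not detailed in the abstract; a standard point-based approach to aligning a virtual model with tracked physical landmarks (not necessarily the one used here) is the Kabsch least-squares rigid fit:

```python
import numpy as np

# Kabsch algorithm: find the rotation R and translation t that best map
# paired model-space landmarks onto their tracked physical positions.
def kabsch(P, Q):
    """Find R, t minimizing ||R @ P_i + t - Q_i|| over paired 3D points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

model = np.random.rand(6, 3)                    # landmarks in CT/model space
th = np.deg2rad(30)
true_R = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0,           0,          1]])
true_t = np.array([1.0, 2.0, 3.0])
physical = model @ true_R.T + true_t            # same landmarks, tracked in OR
R, t = kabsch(model, physical)
print(np.allclose(R @ model.T + t[:, None], physical.T))  # True
```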


Subject(s)
Imaging, Three-Dimensional/methods , Otolaryngology/methods , Otorhinolaryngologic Surgical Procedures/methods , Tomography, X-Ray Computed/methods , Virtual Reality , Anatomic Landmarks/surgery , Computer Simulation , Feasibility Studies , Humans
10.
IEEE Trans Vis Comput Graph ; 25(5): 1928-1939, 2019 05.
Article in English | MEDLINE | ID: mdl-30794179

ABSTRACT

Traditional optical manufacturing poses a great challenge to near-eye display designers due to lead times on the order of multiple weeks, limiting the ability of optical designers to iterate quickly and explore beyond conventional designs. We present a complete near-eye display manufacturing pipeline with a one-day lead time using commodity hardware. Our novel manufacturing pipeline consists of several innovations, including a rapid production technique that brings the surface of a 3D-printed component to optical quality suitable for near-eye display applications, a computational design methodology using machine learning and ray tracing to create freeform static projection-screen surfaces for near-eye displays that can represent arbitrary focal surfaces, and a custom projection lens design that distributes pixels non-uniformly for a foveated near-eye display hardware candidate. We have demonstrated untethered augmented reality near-eye display prototypes to assess the success of our technique, and show that a ski-goggles form factor, a large monocular field of view (30°×55°), and a resolution of 12 cycles per degree can be achieved.

11.
IEEE Trans Vis Comput Graph ; 25(5): 1970-1980, 2019 05.
Article in English | MEDLINE | ID: mdl-30843843

ABSTRACT

This paper presents the implementation and evaluation of a 50,000-pose-sample-per-second, 6-degree-of-freedom optical head tracking instrument with motion-to-pose latency of 28 µs and dynamic precision of 1-2 arcminutes. The instrument uses high-intensity infrared emitters and two duo-lateral photodiode-based optical sensors to triangulate pose. This instrument serves two purposes: it is the first step towards the requisite head tracking component in sub-100 µs motion-to-photon latency optical see-through augmented reality (OST AR) head-mounted display (HMD) systems; and it enables new avenues of research into human visual perception, including measuring the thresholds for perceptible real-virtual displacement during head rotation and other human research requiring high-sample-rate motion tracking. The instrument's tracking volume is limited to about 120×120×250 but allows for the full range of natural head rotation and is sufficient for research involving seated users. We discuss how the instrument's tracking volume is scalable in multiple ways and some of the trade-offs involved therein. Finally, we introduce a novel laser-pointer-based measurement technique for assessing the instrument's tracking latency and repeatability. We show that the instrument's motion-to-pose latency is 28 µs and that it is repeatable within 1-2 arcminutes at mean rotational velocities (yaw) in excess of 500°/sec.
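
For intuition about triangulating position from two optical sensors, a 2D sketch: each sensor reports a bearing angle to an infrared emitter, and intersecting the two rays recovers the emitter's position. Full 6-DoF pose would combine several emitters; the sensor geometry below is made up:

```python
import numpy as np

# Intersect two bearing rays, p0 + s*dir(theta0) and p1 + t*dir(theta1),
# by solving the 2x2 linear system for the ray parameters.
def triangulate_2d(p0, theta0, p1, theta1):
    d0 = np.array([np.cos(theta0), np.sin(theta0)])
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    A = np.column_stack([d0, -d1])
    s, _ = np.linalg.solve(A, np.asarray(p1) - np.asarray(p0))
    return np.asarray(p0) + s * d0

# Two sensors 200 mm apart on a baseline, emitter at (60, 250) mm.
emitter = np.array([60.0, 250.0])
s0, s1 = np.array([0.0, 0.0]), np.array([200.0, 0.0])
a0 = np.arctan2(emitter[1] - s0[1], emitter[0] - s0[0])
a1 = np.arctan2(emitter[1] - s1[1], emitter[0] - s1[0])
print(triangulate_2d(s0, a0, s1, a1))   # ~[ 60. 250.]
```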


Subject(s)
Imaging, Three-Dimensional/instrumentation , Smart Glasses , User-Computer Interface , Virtual Reality , Computer Graphics , Equipment Design , Head Movements/physiology , Humans , Imaging, Three-Dimensional/methods , Time Factors
12.
IEEE Trans Vis Comput Graph ; 25(11): 3114-3124, 2019 11.
Article in English | MEDLINE | ID: mdl-31403422

ABSTRACT

In this paper, we present a novel design for switchable AR/VR near-eye displays that helps resolve the vergence-accommodation conflict. The principal idea is to time-multiplex virtual imagery and real-world imagery and to use a tunable lens to adjust focus for the virtual display and the see-through scene separately. With this design, prescription eyeglasses for near- and far-sighted users become unnecessary: the wearer's corrective optical prescription is integrated into the tunable lens for both the virtual display and the see-through environment. We built a prototype based on the design, comprising a microdisplay, optical systems, a tunable lens, and active shutters. The experimental results confirm that the proposed near-eye display can switch between AR and VR and can provide correct accommodation for both.
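
A sketch of the prescription-folding arithmetic under a simple thin-lens assumption: in each time-multiplexed phase, the tunable lens adds the wearer's corrective power to the power needed to focus that phase's content. The values are illustrative, and real designs account for more optical detail:

```python
# Per-phase lens power = focus term for the target depth + wearer's
# prescription, so no separate eyeglasses are needed.
def tunable_lens_power(target_depth_m: float, prescription_d: float) -> float:
    """Total diopters for one time-multiplexed phase."""
    return 1.0 / target_depth_m + prescription_d

RX = -2.5                                  # a myopic wearer, in diopters
vr_phase = tunable_lens_power(0.5, RX)     # virtual image placed at 0.5 m
ar_phase = tunable_lens_power(10.0, RX)    # see-through scene, far focus
print(vr_phase, ar_phase)                  # -0.5 D and -2.4 D
```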


Subject(s)
Augmented Reality , Computer Graphics , Image Processing, Computer-Assisted/methods , Virtual Reality , Equipment Design , Eyeglasses , Holography , Humans
13.
Stud Health Technol Inform ; 132: 126-31, 2008.
Article in English | MEDLINE | ID: mdl-18391272

ABSTRACT

Radio frequency ablation is a minimally invasive intervention that introduces high-frequency electrical current into non-resectable hepatic tumors under 2D ultrasound guidance, via a needle-like probe. These tumors recur mostly at the periphery, indicating errors in probe placement. Hypothesizing that a contextually correct 3D display will aid targeting and decrease recurrence, we have developed a prototype guidance system based on a head-tracked 3D display and motion-tracked instruments. We describe our reasoning and our experience in selecting components for, designing, and constructing the 3D display. The initial candidates were an augmented reality see-through head-mounted display and a virtual reality "fish tank" system. We describe the system requirements and explain how we arrived at the final decision. We show the operational guidance system in use on phantoms and animals.


Subject(s)
Catheter Ablation , Computer Terminals , Head , Liver Neoplasms/surgery , User-Computer Interface , Equipment Design , Humans , United States
14.
IEEE Trans Vis Comput Graph ; 24(11): 2906-2916, 2018 11.
Article in English | MEDLINE | ID: mdl-30207958

ABSTRACT

We describe a system that dynamically corrects the focus of the real world surrounding the user's near-eye display and, simultaneously, the focus of the internal display for augmented synthetic imagery, with the aim of completely replacing the user's prescription eyeglasses. The ability to adjust focus for both real and virtual stimuli will be useful for a wide variety of users, but especially for users over 40 years of age who have a limited accommodation range. Our proposed solution employs a tunable-focus lens for dynamic prescription vision correction, and a varifocal internal display for setting the virtual imagery at appropriate spatially registered depths. We also demonstrate a proof-of-concept prototype to verify our design and discuss the challenges of building auto-focus augmented reality eyeglasses for both real and virtual content.


Subject(s)
Computer Graphics , Eyeglasses , Image Processing, Computer-Assisted/instrumentation , Image Processing, Computer-Assisted/methods , User-Computer Interface , Virtual Reality , Adult , Humans
15.
IEEE Trans Vis Comput Graph ; 24(11): 2857-2866, 2018 11.
Article in English | MEDLINE | ID: mdl-30207960

ABSTRACT

We introduce an optical design and a rendering pipeline for a full-color volumetric near-eye display that simultaneously presents imagery with near-accurate per-pixel focus across an extended volume ranging from 15 cm (6.7 diopters) to 4 m (0.25 diopters), allowing the viewer to accommodate freely across this entire depth range. This is achieved using a focus-tunable lens that continuously sweeps a sequence of 280 synchronized binary images from a high-speed digital micromirror device (DMD) projector, and a high-speed, high-dynamic-range (HDR) light source that illuminates the DMD images with a distinct color and brightness at each binary frame. Our rendering pipeline converts 3-D scene information into a 2-D surface of color voxels, which are decomposed into 280 binary images in a voxel-oriented manner, such that 280 distinct depth positions for full-color voxels can be displayed.
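
A sketch of the voxel-oriented decomposition step, assuming frames are spaced linearly in diopters across the lens sweep; the spacing and helper names are assumptions, and the paper's exact assignment may differ:

```python
import numpy as np

# Assign each color voxel to one of 280 binary frames according to where
# its depth falls in the diopter sweep (0.25 D at 4 m to 6.7 D at 15 cm).
N_FRAMES, D_MIN, D_MAX = 280, 0.25, 6.7

def frame_index(depth_m: np.ndarray) -> np.ndarray:
    diopters = 1.0 / depth_m
    t = (diopters - D_MIN) / (D_MAX - D_MIN)           # 0..1 across the sweep
    return np.clip((t * (N_FRAMES - 1)).round(), 0, N_FRAMES - 1).astype(int)

depths = np.array([0.15, 0.5, 1.0, 4.0])               # meters
print(frame_index(depths))                              # [278  76  32   0]
```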


Subject(s)
Computer Graphics , Imaging, Three-Dimensional/methods , Virtual Reality , Algorithms , Equipment Design , Photography , Video Recording
16.
IEEE Trans Vis Comput Graph ; 24(11): 2993-3004, 2018 11.
Article in English | MEDLINE | ID: mdl-30207957

ABSTRACT

We propose a new approach for 3D reconstruction of dynamic indoor and outdoor scenes in everyday environments, leveraging only cameras worn by a user. This approach allows 3D reconstruction of experiences at any location and virtual tours from anywhere. The key innovation of the proposed ego-centric reconstruction system is to capture the wearer's body pose and facial expression from near-body views, e.g., cameras on the user's glasses, and to capture the surrounding environment using outward-facing views. The main challenge of the ego-centric reconstruction, however, is the poor coverage of the near-body views: the user's body and face are observed from vantage points that are convenient for wear but inconvenient for capture. To overcome these challenges, we propose a parametric-model-based approach to user motion estimation. This approach utilizes convolutional neural networks (CNNs) for near-view body pose estimation, and we introduce a CNN-based approach for facial expression estimation that combines audio and video. For each time-point during capture, the intermediate model-based reconstructions from these systems are used to re-target a high-fidelity pre-scanned model of the user. We demonstrate that the proposed self-sufficient, head-worn capture system is capable of reconstructing the wearer's movements and their surrounding environment in both indoor and outdoor situations without any additional views. As a proof of concept, we show how the resulting 3D-plus-time reconstruction can be immersively experienced within a virtual reality system (e.g., the HTC Vive). We expect that the size of the proposed egocentric capture-and-reconstruction system will eventually be reduced to fit within future AR glasses, and will be widely useful for immersive 3D telepresence, virtual tours, and general use-anywhere 3D content creation.


Subject(s)
Facial Expression , Imaging, Three-Dimensional/methods , Posture/physiology , User-Computer Interface , Video Recording/methods , Humans , Internet , Neural Networks, Computer
17.
IEEE Trans Vis Comput Graph ; 23(4): 1322-1331, 2017 04.
Article in English | MEDLINE | ID: mdl-28129167

ABSTRACT

Accommodative depth cues, a wide field of view, and ever-higher resolutions all present major hardware design challenges for near-eye displays. Optimizing a design to overcome one of these challenges typically leads to a trade-off in the others. We tackle this problem by introducing an all-in-one solution: a new wide-field-of-view, gaze-tracked near-eye display for augmented reality applications. The key component of our solution is the use of a single see-through, varifocal deformable membrane mirror for each eye, each reflecting a display. The mirrors are controlled by airtight cavities and change their effective focal power to present a virtual image at a target depth plane determined by the gaze tracker. The benefits of using the membranes include a wide field of view (100° diagonal) and fast depth switching (from 20 cm to infinity within 300 ms). Our subjective experiment verifies the prototype and demonstrates its potential benefits for near-eye see-through displays.

18.
IEEE Trans Vis Comput Graph ; 22(4): 1367-76, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26780797

ABSTRACT

We describe an augmented reality, optical see-through display based on a DMD chip with an extremely fast (16 kHz) binary update rate. We combine the techniques of post-rendering 2-D offsets and just-in-time tracking updates with a novel modulation technique for turning binary pixels into perceived gray scale. These processing elements, implemented in an FPGA, are physically mounted along with the optical display elements in a head-tracked rig through which users view synthetic imagery superimposed on their real environment. The combination of mechanical tracking at near-zero latency with reconfigurable display processing has given us a measured average of 80 µs of end-to-end latency (from head motion to change in photons from the display) and also a versatile test platform for extremely-low-latency display systems. We have used it to examine the trade-offs between image quality and cost (i.e., power and logical complexity) and have found that quality can be maintained with a fairly simple display modulation scheme.
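
The abstract does not spell out the modulation scheme; one classic way to turn binary pixels into perceived gray with this flavor (not necessarily theirs) is first-order sigma-delta modulation, where each frame emits one bit and carries the residual error forward:

```python
import numpy as np

# First-order sigma-delta: accumulate the target gray level each frame,
# emit a 1 whenever the accumulator crosses 1.0, and subtract the emitted
# bit so the running mean of the bit stream tracks the gray level.
def sigma_delta_stream(gray: float, n_frames: int) -> np.ndarray:
    """Binary frame sequence whose running mean approaches `gray` (0..1)."""
    bits = np.empty(n_frames, dtype=np.uint8)
    acc = 0.0
    for i in range(n_frames):
        acc += gray
        bits[i] = 1 if acc >= 1.0 else 0
        acc -= bits[i]
    return bits

stream = sigma_delta_stream(0.3, 16)      # at 16 kHz this spans 1 ms
print(stream, stream.mean())              # mean 0.25 here; -> 0.3 as n grows
```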

19.
Stud Health Technol Inform ; 220: 55-62, 2016.
Article in English | MEDLINE | ID: mdl-27046554

ABSTRACT

This paper introduces a computer-based system that is designed to record a surgical procedure with multiple depth cameras and reconstruct in three dimensions the dynamic geometry of the actions and events that occur during the procedure. The resulting 3D-plus-time data takes the form of dynamic, textured geometry and can be immersively examined at a later time; equipped with a Virtual Reality headset such as the Oculus Rift DK2, a user can walk around the reconstruction of the procedure room while controlling playback of the recorded surgical procedure with simple VCR-like controls (play, pause, rewind, fast forward). The reconstruction can be annotated in space and time to provide users with more information about the scene. We expect such a system to be useful in applications such as the training of medical students and nurses.


Subject(s)
Educational Measurement/methods , General Surgery/education , Imaging, Three-Dimensional/methods , Operating Rooms/methods , Photography/methods , Surgery, Computer-Assisted/methods , Computer-Assisted Instruction , Humans , Imaging, Three-Dimensional/instrumentation , Pattern Recognition, Automated/methods , Photography/instrumentation , Reproducibility of Results , Sensitivity and Specificity , Software , Surgery, Computer-Assisted/instrumentation , Systems Integration , Video Games , Whole Body Imaging/instrumentation , Whole Body Imaging/methods