ABSTRACT
Mesh Colors provide an effective alternative to standard texture mapping. They significantly simplify the asset production pipeline by removing the need to define a mapping, and they eliminate rendering artifacts caused by seams. This article addresses the problem that Mesh Colors have not been practical for real-time rendering due to the absence of hardware support. We show that full hardware texture filtering support for Mesh Colors can be provided with minimal changes to existing GPUs by introducing a hardware-friendly representation of Mesh Colors that we call Patch Textures, which can have quadrilateral or triangular topology. We discuss the hardware modifications needed for storing and filtering Patch Textures, including anisotropic filtering. This article extends our previous work by discussing and comparing patch edge-handling approaches, including an option for sampling the textures of neighboring patches using an adjacency map. We also provide extensive discussions of data duplication, a partial implementation present in existing hardware, and the difficulties of providing similar hardware support for Ptex.
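To make the filtering problem concrete, the sketch below shows a minimal, hypothetical software emulation (not the hardware design from the article) of linearly sampling one triangular Mesh Colors patch of resolution R, whose texels lie on a barycentric grid with (R+1)(R+2)/2 samples. The function names `tri_index` and `sample_patch` are our own, not from the article.

```python
def tri_index(i, j, R):
    # Flatten barycentric grid coordinates (i, j), with i + j <= R,
    # into a 1D texel index (rows stored in order of increasing i).
    return i * (R + 1) - i * (i - 1) // 2 + j

def sample_patch(texels, R, u, v):
    # Linearly interpolate a triangular patch of resolution R at
    # barycentric (u, v), with u >= 0, v >= 0, and u + v <= 1.
    x, y = u * R, v * R
    i, j = min(int(x), R - 1), min(int(y), R - 1)
    fu, fv = x - i, y - j
    if i + j >= R:              # lookup lands exactly on the far edge row
        i, fu = i - 1, fu + 1.0
    if fu + fv <= 1.0:          # lower sub-triangle of the grid cell
        w = (1.0 - fu - fv, fu, fv)
        c = ((i, j), (i + 1, j), (i, j + 1))
    else:                       # upper sub-triangle of the grid cell
        w = (fu + fv - 1.0, 1.0 - fv, 1.0 - fu)
        c = ((i + 1, j + 1), (i + 1, j), (i, j + 1))
    return sum(wk * texels[tri_index(ik, jk, R)]
               for wk, (ik, jk) in zip(w, c))
```

Note that every texel access stays inside this one patch; handling lookups whose filter footprint crosses a patch edge is exactly the edge-handling problem the article discusses.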
ABSTRACT
INTRODUCTION: A 20-year-old man with a reported history of asthma presented to the emergency department in cardiac arrest presumed to be caused by respiratory failure. CASE REPORT: The patient was discovered to have central airway obstruction and concomitant superior vena cava compression caused by a large mediastinal mass, a condition termed mediastinal mass syndrome. Although the patient regained spontaneous circulation after endotracheal intubation, he was challenging to ventilate, requiring escalating interventions to maintain adequate ventilation. CONCLUSION: We describe the complications of mediastinal mass syndrome and an approach to resuscitation, including ventilator adjustments, patient repositioning, double-lumen endotracheal tubes, specialty consultation, and extracorporeal life support.
ABSTRACT
We introduce a new motion blur computation method for ray tracing that provides an analytical approximation of motion-blurred visibility per ray. Rather than relying on timestamped rays and Monte Carlo sampling to resolve the motion blur, we associate a time interval with each ray and directly evaluate when and where it intersects the animated object faces. Based on our simplifications, the volume swept by each animated face is represented by a triangulation of the surface of that volume. Thus, we can resolve motion blur through ray intersections with stationary triangles, and we can use any standard ray tracing acceleration structure without modifications to account for the time dimension. Rays are intersected with these triangles to analytically determine the time intervals and positions of the intersections with the moving objects. Furthermore, we describe an adaptive strategy for efficiently shading the intersection intervals. As a result, we can produce noise-free motion blur for both primary and secondary rays. We also provide a general framework for emulating various camera shutter mechanisms, as well as an artistic modification that amplifies the visibility of moving objects to emphasize motion in videos or static images.
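As a rough illustration of this idea (not the paper's actual swept-volume construction, which covers more general motion), the sketch below triangulates the boundary swept by a single linearly translating triangle into stationary triangles whose vertices carry shutter times, intersects a ray with them via the standard Möller–Trumbore test, and recovers the shutter time of each hit by barycentric interpolation of the vertex times. All function names are our own.

```python
def sub(a, b): return [a[k] - b[k] for k in range(3)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def intersect(orig, dirn, v0, v1, v2):
    # Moller-Trumbore ray/triangle test: returns (ray_t, u, v) or None.
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(dirn, e2)
    det = dot(e1, p)
    if abs(det) < 1e-12:          # parallel or degenerate triangle
        return None
    inv = 1.0 / det
    s = sub(orig, v0)
    u = dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(dirn, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return (t, u, v) if t > 0.0 else None

def swept_triangles(tri0, tri1):
    # Boundary of the volume swept by a triangle moving linearly from
    # tri0 (shutter time 0) to tri1 (shutter time 1): the two caps plus
    # two triangles per side quad. Each vertex carries its shutter time.
    tris = [[(tri0[0], 0.0), (tri0[1], 0.0), (tri0[2], 0.0)],
            [(tri1[0], 1.0), (tri1[1], 1.0), (tri1[2], 1.0)]]
    for a, b in ((0, 1), (1, 2), (2, 0)):
        tris.append([(tri0[a], 0.0), (tri0[b], 0.0), (tri1[b], 1.0)])
        tris.append([(tri0[a], 0.0), (tri1[b], 1.0), (tri1[a], 1.0)])
    return tris

def motion_hits(orig, dirn, tri0, tri1):
    # All hits of a stationary ray against the swept surface; each hit
    # reports the ray parameter and the interpolated shutter time.
    hits = []
    for tri in swept_triangles(tri0, tri1):
        (p0, s0), (p1, s1), (p2, s2) = tri
        h = intersect(orig, dirn, p0, p1, p2)
        if h:
            t, u, v = h
            hits.append((t, (1.0 - u - v) * s0 + u * s1 + v * s2))
    return sorted(hits)
```

Consecutive entry/exit hits then bound a shutter interval during which the ray sees the moving face, which is the per-ray visibility interval the abstract refers to; note this sketch assumes linear translation and approximates curved swept side surfaces with flat triangles.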