Results 1 - 20 of 56
1.
Opt Express ; 28(5): 6734-6739, 2020 Mar 02.
Article in English | MEDLINE | ID: mdl-32225914

ABSTRACT

Foveation and (de)focus are two important visual factors in designing near-eye displays. Foveation can reduce computational load by lowering display detail towards the visual periphery, while focal cues can reduce vergence-accommodation conflict, thereby lessening visual discomfort when using near-eye displays. We performed two psychophysical experiments to investigate the relationship between foveation and focus cues. The first study measured blur discrimination sensitivity as a function of visual eccentricity, where we found discrimination thresholds significantly lower than previously reported. The second study measured depth discrimination thresholds, where we found a clear dependency on visual eccentricity. We discuss the study results and suggest further investigation.
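The abstract does not state the exact threshold-estimation procedure; a common choice for blur- and depth-discrimination experiments is an adaptive 2-down/1-up staircase. The sketch below simulates such a staircase against a hypothetical observer; the stimulus function, step size, and threshold are illustrative only, not the study's actual parameters.

```python
import random

def simulated_observer(blur_increment, threshold=0.3):
    """Hypothetical observer: responds correctly more often as the
    blur increment grows relative to an assumed internal threshold."""
    p_correct = 0.5 + 0.5 * min(blur_increment / (2 * threshold), 1.0)
    return random.random() < p_correct

def staircase_threshold(start=1.0, step=0.1, reversals_needed=8):
    """2-down/1-up adaptive staircase; converges near the ~70.7%-correct level."""
    level, correct_streak, reversals, last_direction = start, 0, [], None
    while len(reversals) < reversals_needed:
        if simulated_observer(level):
            correct_streak += 1
            if correct_streak == 2:                 # two correct in a row -> make task harder
                correct_streak = 0
                if last_direction == "up":
                    reversals.append(level)
                level, last_direction = max(level - step, 0.0), "down"
        else:                                        # one error -> make task easier
            correct_streak = 0
            if last_direction == "down":
                reversals.append(level)
            level, last_direction = level + step, "up"
    return sum(reversals) / len(reversals)           # threshold estimate from reversal levels

print(f"estimated blur-discrimination threshold: {staircase_threshold():.3f}")
```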


Subject(s)
Depth Perception/physiology, Visual Perception/physiology, Adult, Humans, Middle Aged, Photic Stimulation, Sensory Thresholds, Young Adult
2.
IEEE Trans Vis Comput Graph ; 29(12): 4832-4844, 2023 12.
Article in English | MEDLINE | ID: mdl-35914058

ABSTRACT

MD-Cave is an immersive analytics system that provides enhanced stereoscopic visualizations to support visual diagnoses performed by radiologists. The system harnesses contemporary paradigms in immersive visualization and 3D interaction, which are better suited for investigating 3D volumetric data. The system remains practical by using desk space efficiently and staying comfortable for radiologists during frequent, long-duration use. MD-Cave is general and incorporates: (1) high-resolution stereoscopic visualization through a surround triple-monitor setup, (2) 3D interaction through head and hand tracking, and (3) a general framework that supports 3D visualization of deep-seated anatomical structures without the need for explicit segmentation algorithms. Such a general framework expands the utility of our system to many diagnostic scenarios. We developed MD-Cave through close collaboration with, and feedback from, two expert radiologists who evaluated the utility of MD-Cave and its 3D interactions in the context of radiological examinations. We also evaluate MD-Cave through case studies performed by an expert radiologist and concrete examples of multiple real-world diagnostic scenarios, such as pancreatic cancer, shoulder CT, and COVID-19 chest CT examinations.


Subject(s)
Algorithms, Computer Graphics, Humans, Tomography, X-Ray Computed, Feedback, Radiologists
3.
IEEE Trans Vis Comput Graph ; 29(7): 3182-3194, 2023 Jul.
Article in English | MEDLINE | ID: mdl-35213310

ABSTRACT

The growing complexity of spatial and structural information in 3D data makes data inspection and visualization a challenging task. We describe a method to create a planar embedding of 3D treelike structures using their skeleton representations. Our method maintains the original geometry, without overlaps, to the best extent possible, allowing exploration of the topology within a single view. We present a novel camera-view generation method that maximizes the visible geometric attributes (segment shape and relative placement between segments). Camera views are created for individual segments and are used to determine local bending angles at each node by projecting them to 2D. The final embedding is generated by minimizing an energy function (whose weights are user-adjustable) based on branch length and the 2D angles, while avoiding intersections. The user can also interactively modify segment placement within the 2D embedding, and the overall embedding updates accordingly. Global-to-local interactive exploration is provided using hierarchical camera views created for subtrees within the structure. We evaluate our method both qualitatively and quantitatively and demonstrate our results by constructing planar visualizations of line data (traced neurons) and volume data (CT vascular and bronchial data).

4.
IEEE Trans Vis Comput Graph ; 29(3): 1651-1663, 2023 Mar.
Article in English | MEDLINE | ID: mdl-34780328

ABSTRACT

We present a novel approach for volume exploration that is versatile yet effective in isolating semantic structures in both noisy and clean data. Specifically, we describe a hierarchical active-contours approach based on Bhattacharyya gradient flow that is easier to control, is robust to noise, and can incorporate various types of statistical information to drive an edge-agnostic exploration process. To facilitate a time-bound, user-driven volume exploration process that is applicable to a wide variety of data sources, we present an efficient multi-GPU implementation that (1) is approximately 400 times faster than a single-threaded CPU implementation, (2) allows hierarchical exploration of 2D and 3D images, (3) supports customization through multidimensional attribute spaces, and (4) is applicable to a variety of data sources and semantic structures. The exploration system follows a two-step process. It first applies active contours to isolate semantically meaningful subsets of the volume. It then applies transfer functions to the isolated regions locally to produce clear and clutter-free visualizations. We show the effectiveness of our approach in isolating and visualizing structures of interest without needing any specialized segmentation methods on a variety of data sources, including 3D optical microscopy, multi-channel optical volumes, abdominal and chest CT, micro-CT, MRI, simulation, and synthetic data. We also gathered feedback from a medical trainee regarding the usefulness of our approach and discuss potential applications in clinical workflows.
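As a rough illustration of the two-step process (isolate a region with an active contour, then apply a transfer function locally), the sketch below uses scikit-image's morphological Chan-Vese flow as a stand-in; the paper's hierarchical Bhattacharyya gradient flow and multi-GPU implementation are not reproduced here, and the transfer function is a toy example.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.segmentation import morphological_chan_vese

# Step 1: isolate a semantically meaningful region with an active contour.
# Chan-Vese is used here only as a stand-in for the paper's Bhattacharyya flow.
img = img_as_float(data.camera())
mask = morphological_chan_vese(img, 60, init_level_set="checkerboard", smoothing=3)

# Step 2: apply a simple local transfer function (grayscale -> RGBA) only inside
# the isolated region, leaving the rest of the image/volume de-emphasized.
def transfer_function(values):
    rgba = np.zeros(values.shape + (4,))
    rgba[..., 0] = values                      # red channel follows intensity
    rgba[..., 3] = np.clip(values * 2, 0, 1)   # opacity ramps with intensity
    return rgba

rendered = np.zeros(img.shape + (4,))
region = mask.astype(bool)
rendered[region] = transfer_function(img[region])
print("pixels inside the isolated region:", int(region.sum()))
```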

5.
IEEE Trans Vis Comput Graph ; 29(3): 1625-1637, 2023 03.
Article in English | MEDLINE | ID: mdl-34757909

ABSTRACT

Recent advances in high-resolution microscopy have allowed scientists to better understand the underlying brain connectivity. However, because biological specimens can only be imaged at a single timepoint, studying changes to neural projections over time is limited to observations gathered using population analysis. In this article, we introduce NeuRegenerate, a novel end-to-end framework for the prediction and visualization of changes in neural fiber morphology within a subject across specified age-timepoints. To predict projections, we present neuReGANerator, a deep-learning network based on a cycle-consistent generative adversarial network (GAN) that translates features of neuronal structures across age-timepoints for large brain microscopy volumes. We improve the reconstruction quality of the predicted neuronal structures by implementing a density multiplier and a new loss function, called the hallucination loss. Moreover, to alleviate artifacts that occur due to tiling of large input volumes, we introduce a spatial-consistency module in the training pipeline of neuReGANerator. Finally, to visualize the change in projections predicted using neuReGANerator, NeuRegenerate offers two modes: (i) neuroCompare, to simultaneously visualize the difference in the structures of the neuronal projections from two age domains (using structural and bounded views), and (ii) neuroMorph, a vesselness-based morphing technique to interactively visualize the transformation of the structures from one age-timepoint to the other. Our framework is designed specifically for volumes acquired using wide-field microscopy. We demonstrate our framework by visualizing the structural changes within the cholinergic system of the mouse brain between a young and an old specimen.


Subject(s)
Computer Graphics, Image Processing, Computer-Assisted, Animals, Mice, Image Processing, Computer-Assisted/methods, Brain/diagnostic imaging, Head, Microscopy
6.
Article in English | MEDLINE | ID: mdl-38096098

ABSTRACT

We present VoxAR, a method to facilitate effective visualization of volume-rendered objects in optical see-through head-mounted displays (OST-HMDs). The potential of augmented reality (AR) to integrate digital information into the physical world provides new opportunities for visualizing and interpreting scientific data. However, a limitation of OST-HMD technology is that the rendered pixels of a virtual object can interfere with the colors of the real world, making it challenging to perceive the augmented virtual information accurately. We address this challenge in a two-step approach. First, VoxAR determines an appropriate placement of the volume-rendered object in the real-world scene by evaluating a set of spatial and environmental objectives, managed as user-selected preferences and predefined constraints. We achieve a real-time solution by implementing the objectives in a GPU shader language. Next, VoxAR adjusts the colors of the input transfer function (TF) based on the real-world placement region. Specifically, we introduce a novel optimization method that adjusts the TF colors such that the resulting volume-rendered pixels are discernible against the background and the TF maintains the perceptual mapping between colors and data intensity values. Finally, we present an assessment of our approach through objective evaluations and subjective user studies.
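A minimal sketch of the TF-recoloring idea, assuming a simple luminance-contrast criterion: colors whose luminance is too close to the background are nudged brighter or darker. The paper formulates this as an optimization that also preserves the perceptual color-to-intensity mapping; this toy version only approximates that intent, and all values below are hypothetical.

```python
import numpy as np

def relative_luminance(rgb):
    """Approximate luminance of linear-RGB colors (Rec. 709 weights)."""
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def adjust_tf_colors(tf_colors, background_rgb, min_contrast=0.25):
    """Nudge transfer-function colors away from the background luminance
    until each differs from it by at least `min_contrast`."""
    bg_lum = relative_luminance(background_rgb)
    adjusted = tf_colors.copy()
    for i, color in enumerate(adjusted):
        lum = relative_luminance(color)
        if abs(lum - bg_lum) < min_contrast:
            target = bg_lum + min_contrast if bg_lum < 0.5 else bg_lum - min_contrast
            scale = target / max(lum, 1e-6)            # brighten or darken toward the target
            adjusted[i] = np.clip(color * scale, 0.0, 1.0)
    return adjusted

tf = np.array([[0.8, 0.2, 0.2], [0.7, 0.7, 0.1], [0.2, 0.4, 0.9]])  # hypothetical TF control points
print(adjust_tf_colors(tf, background_rgb=np.array([0.75, 0.70, 0.65])))
```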

7.
Article in English | MEDLINE | ID: mdl-37966931

ABSTRACT

We present Submerse, an end-to-end framework for visualizing flooding scenarios on large and immersive display ecologies. Specifically, we reconstruct a surface mesh from input flood simulation data and generate a to-scale 3D virtual scene by incorporating geographical data such as terrain, textures, buildings, and additional scene objects. To optimize computation and memory performance for large simulation datasets, we discretize the data on an adaptive grid using dynamic quadtrees and support level-of-detail-based rendering. Moreover, to convey the flooding direction at a given time instance, we animate the surface mesh by synthesizing water waves. As interaction is key for effective decision-making and analysis, we introduce two novel techniques for flood visualization in immersive systems: (1) an automatic scene-navigation method using optimal camera viewpoints generated for marked points of interest based on the display layout, and (2) an AR-based focus+context technique using an auxiliary display system. Submerse was developed in collaboration between computer scientists and atmospheric scientists. We evaluate the effectiveness of our system and application by conducting workshops with emergency managers, domain experts, and concerned stakeholders in the Stony Brook Reality Deck, an immersive gigapixel facility, to visualize a superstorm flooding scenario in New York City.
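The adaptive-grid idea can be illustrated with a small dynamic quadtree: near-uniform flood-depth regions stay as coarse cells while detailed regions are refined, which is what makes level-of-detail rendering possible. The sketch below is illustrative only; the cell structure, variance threshold, and field names are hypothetical, not Submerse's actual data structures.

```python
import numpy as np

def build_quadtree(depth_grid, x0, y0, size, max_var=0.05, min_size=4):
    """Recursively subdivide a flood-depth tile until it is nearly uniform."""
    tile = depth_grid[y0:y0 + size, x0:x0 + size]
    if size <= min_size or tile.var() <= max_var:
        return {"x": x0, "y": y0, "size": size, "depth": float(tile.mean())}
    half = size // 2
    return {"children": [build_quadtree(depth_grid, x0 + dx, y0 + dy, half, max_var, min_size)
                         for dy in (0, half) for dx in (0, half)]}

def count_leaves(node):
    return 1 if "children" not in node else sum(count_leaves(c) for c in node["children"])

# Synthetic 256x256 depth field with one detailed flooded strip.
grid = np.zeros((256, 256))
grid[100:140, 60:200] = np.random.rand(40, 140)
tree = build_quadtree(grid, 0, 0, 256)
print("leaf cells:", count_leaves(tree), "vs full-resolution cells:", 256 * 256)
```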

8.
J Clin Pediatr Dent ; 37(2): 137-41, 2012.
Article in English | MEDLINE | ID: mdl-23534318

ABSTRACT

AIM: To compare the long-term clinical and radiographic outcomes of pulpotomies in primary molars performed with white or gray mineral trioxide aggregate (MTA) in combination with ferric sulfate (FS) when one package of MTA is used for multiple treatments. DESIGN: Sixty-eight children with 86 vital carious primary molars underwent pulpotomy with FS and gray or white MTA. One package of MTA was used for 7-8 treatments. Clinical and radiographic evaluations were performed before treatment and 6 to 47 months after treatment. RESULTS: Success rates were similar for pulpotomies performed with white (60 teeth) and gray MTA (16 teeth) (p > 0.05), and for those performed with the addition of FS to white or gray MTA when one package of MTA was used for multiple pulpotomies compared to one package of MTA alone. CONCLUSION: Gray and white MTA in conjunction with FS induce comparable clinical and radiographic success rates. The use of one package of MTA for multiple pulpotomies, combined with FS, is a cost-effective treatment.


Subject(s)
Aluminum Compounds/therapeutic use, Calcium Compounds/therapeutic use, Ferric Compounds/therapeutic use, Molar/surgery, Oxides/therapeutic use, Pulp Capping and Pulpectomy Agents/therapeutic use, Pulpotomy/methods, Silicates/therapeutic use, Tooth, Deciduous/surgery, Child, Child, Preschool, Dental Care for Children/methods, Dental Caries/surgery, Dental Pulp Calcification/diagnostic imaging, Dental Pulp Test, Dentin, Secondary/growth & development, Drug Combinations, Female, Follow-Up Studies, Hemostatic Techniques, Humans, Kaplan-Meier Estimate, Male, Molar/diagnostic imaging, Postoperative Complications, Radiography, Root Canal Filling Materials/therapeutic use, Tooth, Deciduous/diagnostic imaging, Treatment Outcome
9.
Med Image Comput Comput Assist Interv ; 2022: 519-529, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36178456

ABSTRACT

Automated analysis of optical colonoscopy (OC) video frames (to assist endoscopists during OC) is challenging due to variations in color, lighting, texture, and specular reflections. Previous methods either remove some of these variations via preprocessing (making pipelines cumbersome) or add diverse training data with annotations (which is expensive and time-consuming). We present CLTS-GAN, a new deep learning model that gives fine control over color, lighting, texture, and specular reflection synthesis for OC video frames. We show that adding these colonoscopy-specific augmentations to the training data can improve state-of-the-art polyp detection/segmentation methods as well as drive the next generation of OC simulators for training medical students. The code and pre-trained models for CLTS-GAN are available on the Computational Endoscopy Platform GitHub (https://github.com/nadeemlab/CEP).

10.
IEEE Trans Vis Comput Graph ; 28(1): 227-237, 2022 01.
Article in English | MEDLINE | ID: mdl-34587075

ABSTRACT

Significant work has been done on deep learning (DL) models for automatic lung and lesion segmentation and classification of COVID-19 on chest CT data. However, comprehensive visualization systems that support the dual visual+DL diagnosis of COVID-19 are non-existent. We present COVID-view, a visualization application specially tailored for radiologists to diagnose COVID-19 from chest CT data. The system incorporates a complete pipeline of automatic lung segmentation and localization/isolation of lung abnormalities, followed by visualization, visual and DL analysis, and measurement/quantification tools. Our system combines the traditional 2D workflow of radiologists with newer 2D and 3D visualization techniques, with DL support, for a more comprehensive diagnosis. COVID-view incorporates a novel DL model for classifying patients into positive/negative COVID-19 cases; the model acts as a reading aid for the radiologist and provides an attention heatmap as an explainable-DL view of the model output. We designed and evaluated COVID-view through suggestions and close feedback from, and case studies of real-world patient data conducted by, expert radiologists who have substantial experience diagnosing chest CT scans for COVID-19, pulmonary embolism, and other forms of lung infection. We present requirements and task analysis for the diagnosis of COVID-19 that motivate our design choices and result in a practical system capable of handling real-world patient cases.
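The abstract does not specify how the attention heatmap is produced; one common, generic way to attach such a heatmap to a CNN classifier is a Grad-CAM-style weighting of the last convolutional feature maps. The sketch below uses a stock ResNet-18 and random input purely as stand-ins; it is not COVID-view's network or pipeline.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()          # stand-in classifier, not the paper's model
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out
    out.register_hook(lambda grad: gradients.update(feat=grad))

model.layer4.register_forward_hook(fwd_hook)    # hook the last conv stage

x = torch.randn(1, 3, 224, 224, requires_grad=True)   # stand-in for a CT-derived input
score = model(x)[0].max()                              # score of the predicted class
score.backward()

weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)        # global-average-pooled gradients
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)          # normalized attention heatmap
print(cam.shape)
```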


Subject(s)
COVID-19, Computer Graphics, Humans, Lung/diagnostic imaging, SARS-CoV-2, Tomography, X-Ray Computed
11.
IEEE Trans Vis Comput Graph ; 28(3): 1457-1468, 2022 03.
Article in English | MEDLINE | ID: mdl-32870794

ABSTRACT

We present 3D virtual pancreatography (VP), a novel visualization procedure and application for non-invasive diagnosis and classification of pancreatic lesions, the precursors of pancreatic cancer. Currently, non-invasive screening of patients is performed through visual inspection of 2D axis-aligned CT images, though the relevant features are often neither clearly visible nor automatically detected. VP is an end-to-end visual diagnosis system that includes: a machine-learning-based automatic segmentation of the pancreatic gland and the lesions, a semi-automatic approach to extract the primary pancreatic duct, a machine-learning-based automatic classification of lesions into four prominent types, and specialized 3D and 2D exploratory visualizations of the pancreas, lesions, and surrounding anatomy. We combine volume rendering with pancreas- and lesion-centric visualizations and measurements for effective diagnosis. We designed VP through close collaboration with and feedback from expert radiologists, and evaluated it on multiple real-world CT datasets with various pancreatic lesions, along with case studies examined by the expert radiologists.


Subject(s)
Pancreatic Neoplasms, Tomography, X-Ray Computed, Computer Graphics, Humans, Machine Learning, Pancreatic Neoplasms/diagnostic imaging, Tomography, X-Ray Computed/methods
12.
IEEE Trans Vis Comput Graph ; 28(12): 4951-4965, 2022 12.
Article in English | MEDLINE | ID: mdl-34478372

ABSTRACT

We introduce NeuroConstruct, a novel end-to-end application for the segmentation, registration, and visualization of brain volumes imaged using wide-field microscopy. NeuroConstruct offers a Segmentation Toolbox with various annotation helper functions that help experts effectively and precisely annotate micrometer-resolution neurites. It also offers automatic neurite segmentation using convolutional neural networks (CNNs) trained on the Toolbox annotations, and soma segmentation using thresholding. To visualize neurites in a given volume, NeuroConstruct offers hybrid rendering that combines iso-surface rendering of high-confidence classified neurites with real-time rendering of the raw volume using a 2D transfer function over voxel classification score versus voxel intensity value. For a complete reconstruction of the 3D neurites, we introduce a Registration Toolbox that provides automatic coarse-to-fine alignment of serially sectioned samples. Quantitative and qualitative analyses show that NeuroConstruct outperforms the state of the art in all design aspects. NeuroConstruct was developed as a collaboration between computer scientists and neuroscientists, with an application to the study of cholinergic neurons, which are severely affected in Alzheimer's disease.
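The hybrid-rendering idea of a 2D transfer function over (classification score, intensity) can be illustrated with a toy opacity function: high-confidence neurite voxels stay near-opaque while uncertain voxels contribute only a faint raw-volume context. The threshold and blending curve below are hypothetical, not NeuroConstruct's actual mapping.

```python
import numpy as np

def hybrid_opacity(intensity, cnn_score, score_cut=0.5):
    """Toy 2D transfer function: voxel opacity depends jointly on the raw
    intensity and the per-voxel CNN classification score."""
    confident = cnn_score >= score_cut
    opacity = np.where(confident,
                       0.2 + 0.8 * cnn_score,           # near-opaque for classified neurites
                       0.1 * intensity * cnn_score)     # faint raw-volume context otherwise
    return np.clip(opacity, 0.0, 1.0)

intensity = np.random.rand(64, 64, 64)        # synthetic wide-field volume
cnn_score = np.random.rand(64, 64, 64)        # synthetic per-voxel classification scores
alpha = hybrid_opacity(intensity, cnn_score)
print("mean / max opacity:", float(alpha.mean()), float(alpha.max()))
```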


Subject(s)
Brain, Imaging, Three-Dimensional, Microscopy, Neural Networks, Computer, Brain/diagnostic imaging, Computer Graphics, Image Processing, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Neurites
13.
Nat Mach Intell ; 4(4): 401-412, 2022 Apr.
Article in English | MEDLINE | ID: mdl-36118303

ABSTRACT

Reporting biomarkers assessed by routine immunohistochemical (IHC) staining of tissue is broadly used in diagnostic pathology laboratories for patient care. To date, clinical reporting is predominantly qualitative or semi-quantitative. By creating a multitask deep learning framework referred to as DeepLIIF, we present a single-step solution to stain deconvolution/separation, cell segmentation, and quantitative single-cell IHC scoring. Leveraging a unique de novo dataset of co-registered IHC and multiplex immunofluorescence (mpIF) staining of the same slides, we segment and translate low-cost and prevalent IHC slides to more expensive-yet-informative mpIF images, while simultaneously providing the essential ground truth for the superimposed brightfield IHC channels. Moreover, a new nuclear-envelope stain, LAP2beta, with high (>95%) cell coverage is introduced to improve cell delineation/segmentation and protein-expression quantification on IHC slides. By simultaneously translating input IHC images to clean/separated mpIF channels and performing cell segmentation/classification, we show that our model, trained on clean IHC Ki67 data, can generalize to noisier and artifact-ridden images as well as to other nuclear and non-nuclear markers such as CD3, CD8, BCL2, BCL6, MYC, MUM1, CD10, and TP53. We thoroughly evaluate our method on publicly available benchmark datasets as well as against pathologists' semi-quantitative scoring. The code, the pre-trained models, easy-to-run containerized Docker files, and a Google Colab project are available at https://github.com/nadeemlab/deepliif.

14.
Comput Graph ; 35(3): 726-732, 2011 Jun 01.
Article in English | MEDLINE | ID: mdl-21765563

ABSTRACT

Volumetric colon wall unfolding is a novel method for virtual colon analysis and visualization, with valuable applications in virtual colonoscopy (VC) and computer-aided detection (CAD) systems. A volumetrically unfolded colon enables doctors to visualize the entire colon structure without occlusions due to haustral folds, and it is critical for performing efficient and accurate texture analysis on the volumetric colon wall. Though conventional colon surface flattening has been employed for these purposes, volumetric colon unfolding offers the advantage of providing the needed quantity of information at the needed accuracy. This work presents an efficient and effective volumetric colon unfolding method based on harmonic differentials. The colon volumes are reconstructed from CT images and represented as tetrahedral meshes. Three harmonic 1-forms, which are linearly independent everywhere, are computed on the tetrahedral mesh. Through integration of the harmonic 1-forms, the colon volume is mapped periodically to a canonical cuboid. The method presented is automatic, simple, and practical. Experimental results are reported to show the performance of the algorithm on real medical datasets. Though applied here specifically to the colon, the method is general and can be applied to other volumes.

15.
Proc IEEE Int Symp Biomed Imaging ; 2021: 329-333, 2021 Apr.
Article in English | MEDLINE | ID: mdl-34642595

ABSTRACT

Optical colonoscopy (OC), the most prevalent colon cancer screening tool, has a high miss rate due to a number of factors, including the geometry of the colon (occlusions from haustral folds and sharp bends), endoscopist inexperience or fatigue, and the endoscope's limited field of view. We present a framework that visualizes the missed regions per frame during OC and provides a workable clinical solution. Specifically, we make use of 3D reconstructed virtual colonoscopy (VC) data and the insight that VC and OC share the same underlying geometry but differ in the color, texture, and specular reflections embedded in the OC. A lossy unpaired image-to-image translation model is introduced with an enforced shared latent space for OC and VC. This shared space captures the geometric information while deferring the creation of color, texture, and specular information to an additional Gaussian noise input. The latter can be utilized to generate one-to-many mappings from VC to OC and from OC to OC. The code, data, and trained models will be released via our Computational Endoscopy Platform at https://github.com/nadeemlab/CEP.

16.
Article in English | MEDLINE | ID: mdl-35403172

ABSTRACT

Haustral folds are colon wall protrusions implicated in the high polyp miss rate during optical colonoscopy procedures. If segmented accurately, haustral folds can allow for better estimation of the missed surface and can also serve as valuable landmarks for registering pre-treatment virtual (CT) and optical colonoscopies, to guide navigation towards the anomalies found in pre-treatment scans. We present a novel generative adversarial network, FoldIt, for feature-consistent image translation of optical colonoscopy videos to virtual colonoscopy renderings with haustral fold overlays. A new transitive loss is introduced to leverage ground-truth information between haustral fold annotations and virtual colonoscopy renderings. We demonstrate the effectiveness of our model on real, challenging optical colonoscopy videos as well as on textured virtual colonoscopy videos with clinician-verified haustral fold annotations. All code and scripts to reproduce the experiments of this paper will be made available via our Computational Endoscopy Platform at https://github.com/nadeemlab/CEP.

17.
IEEE Trans Vis Comput Graph ; 27(6): 2869-2880, 2021 06.
Article in English | MEDLINE | ID: mdl-31751242

ABSTRACT

We present a visual analytics framework, CMed, for exploring medical image data annotations acquired from crowdsourcing. CMed can be used to visualize, classify, and filter crowdsourced clinical data based on a number of different metrics, such as detection rate, logged events, and clustering of the annotations. CMed provides several interactive linked visualization components to analyze the crowd annotation results for a particular video and the associated workers. Additionally, all results of an individual worker can be inspected using multiple linked views in our CMed framework. We allow a crowdsourcing application analyst to observe patterns and gather insights into the crowdsourced medical data, helping them design future crowdsourcing applications for optimal output from the workers. We demonstrate the efficacy of our framework with two medical crowdsourcing studies: polyp detection in virtual colonoscopy videos and lung nodule detection in CT thin-slab maximum intensity projection videos. We also provide experts' feedback to show the effectiveness of our framework. Lastly, we share the lessons we learned from our framework, with suggestions for integrating it into a clinical workflow.
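As a toy illustration of one of the metrics mentioned (detection rate), the snippet below computes a per-worker detection rate against expert ground truth. The data layout, field names, and frame tolerance are hypothetical, not CMed's actual data model.

```python
from collections import defaultdict

# Hypothetical crowd annotations: (worker_id, video_id, frame) tuples, plus
# expert ground-truth polyp frames per video.
annotations = [("w1", "v1", 120), ("w1", "v1", 180), ("w2", "v1", 118)]
ground_truth = {"v1": [119, 180, 240]}

def detection_rate(annotations, ground_truth, tolerance=3):
    """Fraction of ground-truth events each worker marked within `tolerance` frames."""
    per_worker = defaultdict(set)
    for worker, video, frame in annotations:
        for gt_frame in ground_truth.get(video, []):
            if abs(frame - gt_frame) <= tolerance:
                per_worker[worker].add((video, gt_frame))
    total = sum(len(frames) for frames in ground_truth.values())
    return {worker: len(hits) / total for worker, hits in per_worker.items()}

print(detection_rate(annotations, ground_truth))   # e.g. w1 detects 2/3, w2 detects 1/3
```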


Subject(s)
Crowdsourcing, Data Curation, Diagnostic Imaging, Colonography, Computed Tomographic, Computer Graphics, Humans, Image Processing, Computer-Assisted, Lung Neoplasms/diagnostic imaging, Video Recording
18.
IEEE Trans Vis Comput Graph ; 27(3): 2174-2185, 2021 03.
Article in English | MEDLINE | ID: mdl-31613771

ABSTRACT

Machine learning is a powerful and effective tool for medical image analysis to perform computer-aided diagnosis (CAD). Although CAD systems have great potential to improve diagnostic accuracy, they are often analyzed only in terms of final accuracy, leading to a limited understanding of the internal decision process, an inability to gain insights, and ultimately skepticism from clinicians. We present a visual analytics approach to uncover the decision-making process of a CAD system for classifying pancreatic cystic lesions. This CAD algorithm consists of two distinct components: a random forest (RF), which classifies a set of predefined features, including demographic features, and a convolutional neural network (CNN), which analyzes radiological (imaging) features of the lesions. We study the class probabilities generated by the RF and the semantic meaning of the features learned by the CNN. We also use an eye tracker to better understand which radiological features are particularly useful for a radiologist to make a diagnosis, and to quantitatively compare them with the features that lead the CNN to its final classification decision. Additionally, we evaluate the effects and benefits of supplying the CAD system with a case-based visual aid in a second-reader setting.
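A toy sketch of the two-component structure described above (an RF over predefined/demographic features alongside a CNN over imaging features, each producing class probabilities that can be inspected). The features, data, binary labels, and the averaging rule below are all hypothetical stand-ins, not the paper's CAD algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_demo = rng.normal(size=(200, 4))                    # toy demographic/predefined features
y = (X_demo[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Component 1: random forest over the predefined features.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_demo[:150], y[:150])
rf_probs = rf.predict_proba(X_demo[150:])             # per-class probabilities to visualize

# Component 2: stand-in for CNN (imaging) class probabilities on the same cases.
cnn_probs = rng.dirichlet([1, 1], size=50)

# One simple way a reader or dashboard might juxtapose/combine the two outputs.
fused = 0.5 * rf_probs + 0.5 * cnn_probs
print(fused[:3])
```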


Subject(s)
Image Interpretation, Computer-Assisted/methods, Machine Learning, Pancreatic Neoplasms/diagnostic imaging, Adult, Algorithms, Female, Humans, Male, Neural Networks, Computer, Pancreas/diagnostic imaging, Radiography, Abdominal, Young Adult
19.
IEEE Trans Vis Comput Graph ; 16(6): 1348-57, 2010.
Article in English | MEDLINE | ID: mdl-20975175

ABSTRACT

In virtual colonoscopy, CT scans are typically acquired with the patient in both supine (facing up) and prone (facing down) positions. The registration of these two scans is desirable so that the user can clarify situations or confirm polyp findings at a location in one scan with the same location in the other, thereby improving polyp detection rates and reducing false positives. However, this supine-prone registration is challenging because of the substantial distortions in the colon shape due to the patient's change in position. We present an efficient algorithm and framework for performing this registration through the use of conformal geometry, guaranteeing that the registration is a diffeomorphism (a one-to-one and onto mapping). The taeniae coli and colon flexures are automatically extracted for each supine and prone surface, exploiting the colon geometry. The two colon surfaces are then divided into several segments using the flexures, and each segment is cut along a taenia coli and conformally flattened to a rectangular domain using holomorphic differentials. The mean curvature is color-encoded as texture images, from which feature points are automatically detected using graph-cut segmentation, mathematical morphology operations, and principal component analysis. Corresponding feature points are found between supine and prone and are used to adjust the conformal flattening to be quasi-conformal, such that the features become aligned. We present multiple methods of visualizing our results, including 2D flattened rendering, corresponding 3D endoluminal views, and rendering of distortion measurements. We demonstrate the efficiency and efficacy of our registration method by illustrating matched views on both the 2D flattened colon images and in the 3D volume-rendered colon endoluminal view. We analytically evaluate the correctness of the results by measuring the distance between features on the registered colons.


Subject(s)
Colon/anatomy & histology, Colonography, Computed Tomographic/statistics & numerical data, Computer Graphics, Algorithms, Computer Simulation, Humans, Imaging, Three-Dimensional, Models, Anatomic, Prone Position, Supine Position
20.
Article in English | MEDLINE | ID: mdl-33456298

ABSTRACT

Colorectal cancer screening modalities, such as optical colonoscopy (OC) and virtual colonoscopy (VC), are critical for diagnosing and ultimately removing polyps (precursors of colon cancer). The non-invasive VC is normally used to inspect a 3D reconstructed colon (from CT scans) for polyps and, if any are found, the OC procedure is performed to physically traverse the colon via endoscope and remove these polyps. In this paper, we present a deep learning framework, Extended and Directional CycleGAN, for lossy unpaired image-to-image translation between OC and VC, to augment OC video sequences with scale-consistent depth information from VC and to augment VC with patient-specific textures, color, and specular highlights from OC (e.g., for realistic polyp synthesis). Both OC and VC contain structural information, but it is obscured in OC by additional patient-specific texture and specular highlights, hence making the translation from OC to VC lossy. Existing CycleGAN approaches do not handle lossy transformations. To address this shortcoming, we introduce an extended cycle consistency loss, which compares the geometric structures from OC in the VC domain. This loss removes the need for the CycleGAN to embed OC information in the VC domain. To handle a stronger removal of the textures and lighting, a Directional Discriminator is introduced to differentiate the direction of translation (by creating paired information for the discriminator), as opposed to the standard CycleGAN, which is direction-agnostic. Combining the extended cycle consistency loss and the Directional Discriminator, we show state-of-the-art results on scale-consistent depth inference for phantom and textured VC, and for real polyp and normal colon video sequences. We also present results for realistic pedunculated and flat polyp synthesis from bumps introduced in 3D VC models.
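One plausible reading of the extended cycle-consistency idea, namely checking consistency in the VC (geometry-only) domain rather than forcing the lossy OC frame to be reconstructed exactly, can be sketched as follows. The generators below are toy stand-ins and the exact loss formulation in the paper may differ.

```python
import torch
import torch.nn as nn

# Toy stand-in generators for OC->VC and VC->OC (not the paper's architectures).
G_oc2vc = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())
G_vc2oc = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())
l1 = nn.L1Loss()

oc = torch.randn(4, 3, 128, 128)                 # batch of optical-colonoscopy frames

vc_from_oc = G_oc2vc(oc)                         # geometry extracted from OC
oc_reconstructed = G_vc2oc(vc_from_oc)           # OC-like frame resynthesized from geometry
vc_again = G_oc2vc(oc_reconstructed)             # map the resynthesized frame back to VC

standard_cycle_loss = l1(oc_reconstructed, oc)   # what vanilla CycleGAN enforces (lossy here)
extended_cycle_loss = l1(vc_again, vc_from_oc)   # compare geometric structure in the VC domain instead
print(float(standard_cycle_loss), float(extended_cycle_loss))
```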
