1.
NPJ Precis Oncol; 8(1): 134, 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38898127

ABSTRACT

While alterations in nucleus size, shape, and color are ubiquitous in cancer, comprehensive quantification of nuclear morphology across a whole-slide histologic image remains a challenge. Here, we describe the development of a pan-tissue, deep learning-based digital pathology pipeline for exhaustive nucleus detection, segmentation, and classification, and the utility of this pipeline for nuclear morphologic biomarker discovery. Manually collected nucleus annotations were used to train an object detection and segmentation model for identifying nuclei, which was deployed to segment nuclei in H&E-stained slides from the BRCA, LUAD, and PRAD TCGA cohorts. Interpretable features describing the shape, size, color, and texture of each nucleus were extracted from the segmented nuclei and compared to measurements of genomic instability, gene expression, and prognosis. The nuclear segmentation and classification model trained herein performed comparably to previously reported models. Features extracted from the model revealed differences sufficient to distinguish between BRCA, LUAD, and PRAD. Furthermore, cancer cell nuclear area was associated with increased aneuploidy score and homologous recombination deficiency. In BRCA, increased fibroblast nuclear area was indicative of poor progression-free and overall survival and was associated with gene expression signatures related to extracellular matrix remodeling and anti-tumor immunity. Thus, we developed a pan-tissue approach for nucleus segmentation and featurization, enabling the construction of predictive models and the identification of features linking nuclear morphology to clinically relevant prognostic biomarkers across multiple cancer types.
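The abstract does not specify the feature-extraction tooling. As a minimal sketch, assuming a labeled segmentation mask is already available, the interpretable shape, size, and color features it describes could be computed per nucleus with scikit-image's regionprops; all function and variable names below are illustrative, not the authors' implementation:

```python
# Hypothetical sketch: per-nucleus interpretable features from a labeled
# segmentation mask, in the spirit of the pipeline described above.
import numpy as np
from skimage.measure import regionprops

def nucleus_features(label_mask, rgb_image):
    """Compute shape, size, and color features for each segmented nucleus.

    label_mask : (H, W) int array; 0 = background, k = nucleus k
    rgb_image  : (H, W, 3) H&E image aligned with the mask
    """
    gray = rgb_image.mean(axis=-1)  # simple intensity proxy for stain color
    rows = []
    for region in regionprops(label_mask, intensity_image=gray):
        rows.append({
            "label": region.label,
            "area": region.area,                      # size
            "eccentricity": region.eccentricity,      # shape elongation
            "solidity": region.solidity,              # shape irregularity
            "mean_intensity": region.mean_intensity,  # color / stain darkness
        })
    return rows
```

In a whole-slide setting, such a function would be applied tile by tile over the segmentation output, with the per-nucleus rows aggregated into cohort-level feature tables for comparison against genomic and survival measurements.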

2.
IEEE Trans Biomed Eng; 69(4): 1310-1317, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34543188

ABSTRACT

OBJECTIVE: A craniotomy is the removal of part of the skull to give surgeons access to the brain, for example to treat tumors. When the brain is accessed, tissue deformation occurs and can negatively influence the outcome of the procedure. In this work, we present a novel Augmented Reality neurosurgical system that superimposes pre-operative 3D meshes derived from MRI onto a view of the brain surface acquired during surgery. METHODS: Our method uses cortical vessels as the main features to drive a rigid and then a non-rigid 3D/2D registration. We first use a feature extractor network to produce probability maps, which are fed to a pose estimator network to infer the 6-DoF rigid pose. Then, to account for brain deformation, we add a non-rigid refinement step, formulated as a Shape-from-Template problem with physics-based constraints, which helps propagate the deformation to the sub-cortical level and update the tumor location. RESULTS: We tested our method retrospectively on six clinical datasets and obtained low pose error, and we showed on a synthetic dataset that considerable brain-shift compensation and low target registration error (TRE) can be achieved at both cortical and sub-cortical levels. CONCLUSION: Our solution achieved accuracy below actual clinical errors, demonstrating the feasibility of practical use of the system. SIGNIFICANCE: This work shows that coherent Augmented Reality visualization of 3D cortical vessels observed through the craniotomy can be provided from a single camera view, and that cortical vessels provide strong features for both rigid and non-rigid registration.
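The abstract names the two networks of the rigid step but not their architectures. The following is a minimal sketch of that two-stage structure, assuming small PyTorch models; the layer choices, shapes, and names are illustrative placeholders, not the authors' implementation, and the non-rigid Shape-from-Template refinement is omitted:

```python
# Hypothetical sketch: feature extractor -> vessel probability map ->
# pose estimator -> 6-DoF rigid pose, as outlined in the METHODS above.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Maps an intraoperative camera frame to a vessel probability map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # per-pixel vessel prob.
        )
    def forward(self, frame):
        return self.net(frame)

class PoseEstimator(nn.Module):
    """Regresses a 6-DoF rigid pose (3 rotation + 3 translation parameters)
    from the vessel probability map."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 6)
    def forward(self, prob_map):
        return self.head(self.backbone(prob_map))

frame = torch.rand(1, 3, 256, 256)                      # single camera view
pose6dof = PoseEstimator()(FeatureExtractor()(frame))   # shape (1, 6)
```

The six predicted parameters would typically encode a rotation (e.g., axis-angle) plus a translation; this rigid estimate is then refined non-rigidly before reprojecting the MRI-derived meshes into the camera view.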


Subject(s)
Augmented Reality; Neurosurgery; Surgery, Computer-Assisted; Brain/diagnostic imaging; Brain/surgery; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging; Retrospective Studies; Surgery, Computer-Assisted/methods