Results 1 - 3 of 3
1.
Med Image Anal; 60: 101588, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31739281

ABSTRACT

We propose an image guidance system for robot-assisted laparoscopic radical prostatectomy (RALRP). A virtual 3D reconstruction of the surgical scene is displayed underneath the endoscope's feed on the surgeon's console. The scene consists of an annotated preoperative Magnetic Resonance Image (MRI) registered to intraoperative 3D Trans-rectal Ultrasound (TRUS), real-time sagittal 2D TRUS images of the prostate, and 3D models of the prostate, the surgical instrument, and the TRUS transducer. We display these components with accurate real-time coordinates with respect to the robot system. Since the scene is rendered from the viewpoint of the endoscope, given correct camera parameters, an augmented scene can be overlaid on the video output. The surgeon can rotate the ultrasound transducer and determine the position of the projected axial plane in the MRI using one of the registered da Vinci instruments. The system was tested in the laboratory on custom-made agar prostate phantoms, where we achieved an average total registration accuracy of 3.2 ± 1.3 mm. We also report on the successful application of the system in the operating room in 12 patients. For the last 8 patients, the average registration error between the TRUS and the da Vinci system was 1.4 ± 0.3 mm, with an average target registration error of 2.1 ± 0.8 mm, resulting in an in vivo overall robot system to MRI mean registration error of 3.5 mm or less, which is consistent with our laboratory studies.
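
As a rough illustration of the kind of error metric reported above, the Python sketch below computes a target registration error between corresponding TRUS-frame and robot-frame fiducials under a given rigid transform. The transform, point sets, and helper names (apply_rigid, target_registration_error) are hypothetical placeholders, not the authors' implementation.

import numpy as np

# Minimal sketch: target registration error (TRE) between corresponding
# fiducials after applying an estimated rigid TRUS-to-robot transform.
# All values below are synthetic placeholders.

def apply_rigid(T, pts):
    """Apply a 4x4 homogeneous rigid transform T to an (N, 3) point array."""
    homo = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N, 4) homogeneous coordinates
    return (homo @ T.T)[:, :3]

def target_registration_error(T, moving_pts, fixed_pts):
    """Mean and std of Euclidean distances (mm) between transformed moving points and fixed targets."""
    residuals = np.linalg.norm(apply_rigid(T, moving_pts) - fixed_pts, axis=1)
    return residuals.mean(), residuals.std()

# Hypothetical example: identity transform, 8 target points with ~1 mm noise.
T_trus_to_robot = np.eye(4)
trus_targets = np.random.rand(8, 3) * 50.0                        # mm
robot_targets = trus_targets + np.random.normal(0.0, 1.0, (8, 3))
mean_tre, std_tre = target_registration_error(T_trus_to_robot, trus_targets, robot_targets)
print(f"TRE: {mean_tre:.1f} ± {std_tre:.1f} mm")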


Subjects
Augmented Reality, Laparoscopy/methods, Prostatectomy, Robotic Surgical Procedures/methods, Computer-Assisted Surgery/methods, Ultrasonography/methods, Equipment Design, Humans, Three-Dimensional Imaging, Magnetic Resonance Imaging, Male, Imaging Phantoms
2.
Med Image Anal; 50: 167-180, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30340027

ABSTRACT

Prostate cancer (PCa) is a heterogeneous disease that manifests in a diverse range of histologic patterns, and its grading is therefore subject to inter-observer variability among pathologists, which may lead to under- or over-treatment of patients. In this work, we develop a computer-aided diagnosis system for automatic grading of PCa in digitized histopathology images using supervised learning methods. Our pipeline comprises the extraction of multi-scale features that include glandular, cellular, and image-based features. A number of novel features are proposed based on intra- and inter-nuclei properties; these features are shown to be among the most important for classification. We train our classifiers on 333 tissue microarray (TMA) cores that were sampled from 231 radical prostatectomy patients and annotated in detail by six pathologists for different Gleason grades. We also demonstrate the TMA-trained classifier's performance on an additional 230 whole-mount slides from 56 patients, independent of the training dataset, by examining the automatic grading on manually marked lesions and on a randomly sampled 10% of the benign tissue. For the first time, we incorporate a probabilistic approach for supervised learning from multiple experts to account for the inter-observer grading variability. Through cross-validation experiments, the overall grading agreement of the classifier with the pathologists was an unweighted kappa of 0.51, while the overall agreements between each pathologist and the others ranged from 0.45 to 0.62. These results suggest that our classifier's performance is within the inter-observer grading variability levels across the pathologists in our study, which are also consistent with those reported in the literature.
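
For concreteness, the short Python sketch below computes the agreement statistic reported above, unweighted Cohen's kappa, between automatic and pathologist grades; the label arrays are synthetic placeholders and only the metric itself is taken from the abstract.

from sklearn.metrics import cohen_kappa_score

# Unweighted Cohen's kappa between classifier and pathologist Gleason grades.
# The per-core grades below are made-up illustrative values.
classifier_grades  = [3, 3, 4, 4, 5, 3, 4, 3, 5, 4]
pathologist_grades = [3, 4, 4, 4, 5, 3, 3, 3, 5, 4]

kappa = cohen_kappa_score(classifier_grades, pathologist_grades)  # unweighted by default
print(f"Unweighted kappa: {kappa:.2f}")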


Subjects
Neoplasm Grading/methods, Prostatic Neoplasms/pathology, Automation, Computer-Aided Design, Computer-Assisted Diagnosis/methods, Humans, Male, Prostatic Neoplasms/diagnostic imaging, Tissue Array Analysis
3.
Med Image Anal; 34: 30-41, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27498016

ABSTRACT

Incomplete and inconsistent datasets often pose difficulties in multimodal studies. We introduce the concept of scandent decision trees to tackle these difficulties. Scandent trees are decision trees that optimally mimic the partitioning of the data determined by another decision tree while, crucially, using only a subset of the feature set. We show how scandent trees can be used to enhance the performance of decision forests trained on a small number of multimodal samples when we have access to larger datasets with vastly incomplete feature sets. Additionally, we introduce the concept of tree-based feature transforms in the decision forest paradigm. When combined with scandent trees, tree-based feature transforms enable us to train a classifier on a rich multimodal dataset and use it to classify samples with only a subset of the features of the training data. Using this methodology, we build a model trained on MRI and PET images of the ADNI dataset and then test it on cases with only MRI data. We show that this is significantly more effective for staging cognitive impairment than a similar decision forest model trained and tested on MRI only, or one that uses other kinds of feature transforms applied to the MRI data.
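
The Python sketch below is a rough, simplified rendering of the scandent-tree idea, not the paper's exact algorithm: a tree restricted to a feature subset (MRI-only) is trained to mimic the leaf partitioning of a "host" tree grown on the full MRI+PET feature set, and its predicted leaf then acts as a tree-based feature transform at test time. All data, hyperparameters, and variable names are assumptions made for illustration.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_mri = rng.normal(size=(200, 10))   # synthetic MRI-derived features
X_pet = rng.normal(size=(200, 5))    # synthetic PET-derived features
y = rng.integers(0, 2, size=200)     # synthetic diagnostic labels
X_full = np.hstack([X_mri, X_pet])   # complete multimodal feature set

# 1. Grow a host tree on the complete multimodal data.
host = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_full, y)
leaf_ids = host.apply(X_full)        # partition of the data induced by the host tree

# 2. Train a scandent-style tree on the MRI-only subset to reproduce that partition.
scandent = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_mri, leaf_ids)

# 3. At test time (MRI only), the predicted host leaf serves as a tree-based
#    feature transform standing in for the missing PET information.
X_mri_test = rng.normal(size=(20, 10))
transformed_feature = scandent.predict(X_mri_test)
print(transformed_feature[:5])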


Subjects
Algorithms, Cognitive Dysfunction/diagnostic imaging, Decision Trees, Alzheimer Disease/diagnostic imaging, Humans, Magnetic Resonance Imaging, Positron-Emission Tomography, Reproducibility of Results, Sensitivity and Specificity