Results 1 - 9 of 9
1.
J Urol; 193(1): 302-12, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25150644

ABSTRACT

PURPOSE: To provide unencumbered real-time ultrasound image guidance during robot-assisted laparoscopic radical prostatectomy, we developed a robotic transrectal ultrasound system that tracks the da Vinci® Surgical System instruments. We describe our initial clinical experience with this system. MATERIALS AND METHODS: After an initial evaluation in a canine model, the system was used in 20 patients enrolled in the study. During each procedure the transrectal ultrasound transducer was manually positioned using a brachytherapy stabilizer to provide good imaging of the prostate. The transrectal ultrasound was then registered to the da Vinci robot by a previously validated procedure. Finally, automatic rotation of the transrectal ultrasound was enabled such that the transrectal ultrasound imaging plane safely tracked the tip of the da Vinci instrument controlled by the surgeon, while real-time transrectal ultrasound images were relayed to the surgeon at the da Vinci console. Tracking was activated during all critical stages of the surgery. RESULTS: The transrectal ultrasound robot was easy to set up and use, adding 7 minutes (range 5 to 14) to the procedure. It required neither an assistant nor additional control devices. Qualitative feedback was acquired from the surgeons, who found transrectal ultrasound useful in identifying the urethra while passing the dorsal venous complex suture, defining the prostate-bladder interface during bladder neck dissection, identifying the seminal vesicles and their location with respect to the rectal wall, and identifying the distal prostate boundary at the apex. CONCLUSIONS: Real-time, registered robotic transrectal ultrasound guidance with automatic instrument tracking during robot-assisted laparoscopic radical prostatectomy is feasible and potentially useful. The results justify further studies to establish whether the approach can improve procedure outcomes.
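Once the TRUS-to-da Vinci registration is available, the tracking step reduces to a small geometric computation: map the instrument tip into the TRUS frame and roll the probe so the sagittal imaging plane passes through it. A minimal sketch of that computation in Python/NumPy (names are illustrative; it assumes the registration is given as a 4 x 4 homogeneous transform and that the probe roll axis is the TRUS z-axis, neither of which is stated in the abstract):

    import numpy as np

    def trus_roll_angle(tip_davinci, T_trus_from_davinci):
        """Roll angle (rad) about the probe axis (assumed +z of the TRUS
        frame) that brings the sagittal imaging plane through the tip."""
        x, y, _z, _ = T_trus_from_davinci @ np.append(tip_davinci, 1.0)
        return np.arctan2(y, x)

A real controller would also rate-limit and bound the commanded angle for safety; the abstract does not detail that logic, so it is omitted here.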


Subjects
Intraoperative Care; Laparoscopy; Prostatectomy/methods; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/surgery; Robotic Surgical Procedures; Surgery, Computer-Assisted; Ultrasonography, Interventional; Aged; Humans; Male; Middle Aged; Rectum; Ultrasonography, Interventional/methods
2.
IEEE Trans Med Imaging; 43(7): 2634-2645, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38437151

ABSTRACT

Quantifying the performance of methods for tracking and mapping tissue in endoscopic environments is essential for enabling image guidance and automation of medical interventions and surgery. Datasets developed so far either use rigid environments, use visible markers, or require annotators to label salient points in videos after collection. These are, respectively, not general, visible to algorithms, or costly and error-prone. We introduce a novel labeling methodology along with a dataset that uses it, Surgical Tattoos in Infrared (STIR). STIR has labels that are persistent but invisible to visible-spectrum algorithms. This is achieved by labeling tissue points with an IR-fluorescent dye, indocyanine green (ICG), and then collecting visible-light video clips. STIR comprises hundreds of stereo video clips in both in vivo and ex vivo scenes with start and end points labeled in the IR spectrum. With over 3,000 labeled points, STIR will help to quantify and enable better analysis of tracking and mapping methods. After introducing STIR, we analyze multiple frame-based tracking methods on STIR using both 3D and 2D endpoint error and accuracy metrics. STIR is available at https://dx.doi.org/10.21227/w8g4-g548.
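The endpoint metrics described are standard: the Euclidean distance between each tracked point and its ICG-labeled ground-truth endpoint, plus the fraction of points within a distance threshold. A minimal sketch (array shapes and the threshold convention are assumptions; the paper's exact definitions may differ):

    import numpy as np

    def endpoint_error(pred, gt):
        """Per-point Euclidean endpoint error; pred, gt: (N, 2) or (N, 3)."""
        return np.linalg.norm(pred - gt, axis=1)

    def accuracy_within(err, thresh):
        """Fraction of points tracked to within a distance threshold."""
        return float(np.mean(err <= thresh))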


Subjects
Algorithms; Indocyanine Green; Tattooing; Tattooing/methods; Infrared Rays; Animals; Surgery, Computer-Assisted/methods; Humans; Image Processing, Computer-Assisted/methods; Video Recording/methods
3.
Med Image Anal; 94: 103131, 2024 May.
Article in English | MEDLINE | ID: mdl-38442528

ABSTRACT

As computer vision algorithms increase in capability, their applications in clinical systems will become more pervasive. These applications include: diagnostics, such as colonoscopy and bronchoscopy; guiding biopsies, minimally invasive interventions, and surgery; automating instrument motion; and providing image guidance using preoperative scans. Many of these applications depend on the specific visual nature of medical scenes and require algorithms designed to perform in this environment. In this review, we provide an update on camera-based tracking and scene mapping for surgery and diagnostics in medical computer vision. We begin by describing our review process, which yields a final list of 515 papers. We then give a high-level summary of the state of the art and provide relevant background for those who need tracking and mapping for their clinical applications. Next, we review the datasets provided in the field and the clinical needs that motivate their design. We then delve into the algorithmic side and summarize recent developments; this summary should be especially useful for algorithm designers and for those looking to understand the capability of off-the-shelf methods. We focus on algorithms for deformable environments while also reviewing the essential building blocks of rigid tracking and mapping, since there is substantial crossover in methods. With the field summarized, we discuss the current state of tracking and mapping methods along with the needs for future algorithms, the needs for quantification, and the viability of clinical applications. We then provide some research directions and questions. We conclude that new methods need to be designed or combined to support clinical applications in deformable environments, and that more focus needs to be put on collecting datasets for training and evaluation.


Subjects
Surgery, Computer-Assisted; Humans; Surgery, Computer-Assisted/methods; Algorithms; Computers
4.
Int J Comput Assist Radiol Surg; 17(8): 1469-1476, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35471624

ABSTRACT

PURPOSE: Semantic segmentation and activity classification are key components for creating intelligent surgical systems able to understand and assist clinical workflow. In the operating room (OR), semantic segmentation is at the core of creating robots aware of their clinical surroundings, whereas activity classification aims at understanding OR workflow at a higher level. State-of-the-art semantic segmentation and activity recognition approaches are fully supervised, which is not scalable. Self-supervision can decrease the amount of annotated data needed. METHODS: We propose a new 3D self-supervised task for OR scene understanding utilizing OR scene images captured with time-of-flight (ToF) cameras. In contrast to other self-supervised approaches, where handcrafted pretext tasks focus on 2D image features, our proposed task consists of predicting the relative 3D distance of image patches by exploiting the depth maps. By learning 3D spatial context, it generates discriminative features for our downstream tasks. RESULTS: Our approach is evaluated on two tasks and datasets containing multiview data captured from clinical scenarios. We demonstrate a noteworthy improvement in performance on both tasks, particularly in low-data regimes, where the utility of self-supervised learning is highest. CONCLUSION: We propose a novel privacy-preserving self-supervised approach utilizing depth maps. Our proposed method shows performance on par with other self-supervised approaches and could be an interesting way to alleviate the burden of full supervision.
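One plausible way to form the pretext label is to back-project two patch centres through a pinhole camera model using the ToF depth map and take their 3D distance; the abstract does not specify the exact parametrization, so the sketch below, including the intrinsics and patch-coordinate conventions, is an assumption:

    import numpy as np

    def backproject(u, v, depth, fx, fy, cx, cy):
        """Pinhole back-projection of pixel (u, v) to a 3-D camera-frame point."""
        z = depth[v, u]
        return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

    def relative_distance_label(depth, patch_a, patch_b, intrinsics):
        """Self-supervised regression target: 3-D distance between patch centres."""
        fx, fy, cx, cy = intrinsics
        return np.linalg.norm(backproject(*patch_a, depth, fx, fy, cx, cy)
                              - backproject(*patch_b, depth, fx, fy, cx, cy))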


Subjects
Operating Rooms; Supervised Machine Learning; Humans
5.
Med Image Anal; 60: 101588, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31739281

ABSTRACT

We propose an image guidance system for robot-assisted laparoscopic radical prostatectomy (RALRP). A virtual 3D reconstruction of the surgical scene is displayed underneath the endoscope's feed on the surgeon's console. This scene consists of an annotated preoperative Magnetic Resonance Image (MRI) registered to intraoperative 3D transrectal ultrasound (TRUS), real-time sagittal 2D TRUS images of the prostate, and 3D models of the prostate, the surgical instrument, and the TRUS transducer. We display these components with accurate real-time coordinates with respect to the robot system. Since the scene is rendered from the viewpoint of the endoscope, given correct camera parameters, an augmented scene can be overlaid on the video output. The surgeon can rotate the ultrasound transducer and determine the position of the projected axial plane in the MRI using one of the registered da Vinci instruments. This system was tested in the laboratory on custom-made agar prostate phantoms, where we achieved an average total registration accuracy of 3.2 ± 1.3 mm. We also report on the successful application of this system in the operating room in 12 patients. For the last 8 patients, the average registration error between the TRUS and the da Vinci system was 1.4 ± 0.3 mm and the average target registration error was 2.1 ± 0.8 mm, resulting in an in vivo overall robot-system-to-MRI mean registration error of 3.5 mm or less, which is consistent with our laboratory studies.
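Rendering the scene from the endoscope viewpoint, and hence overlaying it on the video, amounts to a standard pinhole projection once every component is expressed in the robot frame. A minimal sketch (names are hypothetical; it assumes known 4 x 4 camera extrinsics and a 3 x 3 intrinsic matrix K, with lens distortion already corrected):

    import numpy as np

    def project_to_endoscope(p_robot, T_cam_from_robot, K):
        """Project a 3-D point given in the robot frame to endoscope pixels."""
        p_cam = (T_cam_from_robot @ np.append(p_robot, 1.0))[:3]
        u, v, w = K @ p_cam
        return np.array([u / w, v / w])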


Subjects
Augmented Reality; Laparoscopy/methods; Prostatectomy; Robotic Surgical Procedures/methods; Surgery, Computer-Assisted/methods; Ultrasonography/methods; Equipment Design; Humans; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Male; Phantoms, Imaging
6.
IEEE Trans Med Imaging; 37(8): 1877-1886, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29994583

ABSTRACT

We present a novel technique for real-time deformable registration of 3-D to 2.5-D transrectal ultrasound (TRUS) images for image-guided, robot-assisted laparoscopic radical prostatectomy (RALRP). For RALRP, a preoperatively acquired 3-D TRUS image is registered to thin volumes composed of consecutive intraoperative 2-D TRUS images, where the optimal transformation is found using a gradient descent method based on analytical first- and second-order derivatives. Our method relies on an efficient algorithm for real-time extraction of arbitrary slices from a 3-D image deformed according to a discrete mesh representation. We also propose and demonstrate an evaluation method that generates simulated models and images for RALRP by modeling tissue deformation through patient-specific finite-element models (FEM). We evaluated our method on in vivo data from 11 patients collected during RALRP and focal therapy interventions. In the presence of an average landmark deformation of 3.89 and 4.62 mm, we achieved accuracies of 1.15 and 0.72 mm on the synthetic and in vivo data sets, respectively, with an average registration computation time of 264 ms, using MATLAB on a conventional PC. The results show that real-time tracking of prostate motion and deformation is feasible, enabling a real-time augmented-reality-based guidance system for RALRP.
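As a simplified illustration of gradient descent with analytic derivatives, the sketch below differentiates an SSD cost through the image gradient for a translation-only warp; the paper optimizes a richer deformable transform, also uses second-order derivatives, and relies on its fast slice-extraction routine, none of which is reproduced here, and the step size is arbitrary:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def ssd_and_grad(moving, grad_moving, fixed, t):
        """SSD between the fixed thin volume and the moving volume shifted by t,
        plus the analytic gradient w.r.t. t (chain rule through image gradients)."""
        grid = np.stack(np.meshgrid(*map(np.arange, fixed.shape), indexing="ij"))
        coords = grid + t[:, None, None, None]
        r = map_coordinates(moving, coords, order=1) - fixed
        g = np.array([2.0 * np.sum(r * map_coordinates(gm, coords, order=1))
                      for gm in grad_moving])
        return np.sum(r * r), g

    # grad_moving = np.gradient(moving)  # precomputed once, then iterate:
    # for _ in range(50): t -= 1e-6 * ssd_and_grad(moving, grad_moving, fixed, t)[1]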


Subjects
Imaging, Three-Dimensional/methods; Prostate/diagnostic imaging; Prostate/surgery; Prostatectomy/methods; Surgery, Computer-Assisted/methods; Ultrasonography/methods; Algorithms; Databases, Factual; Finite Element Analysis; Humans; Male; Motion; Prostatic Neoplasms/drug therapy; Prostatic Neoplasms/surgery
7.
Article in English | MEDLINE | ID: mdl-25333163

ABSTRACT

In this article, we describe a system for detecting dominant prostate tumors based on a combination of features extracted from a novel multi-parametric quantitative ultrasound elastography technique. The performance of the system was validated on a dataset acquired from n = 10 patients undergoing radical prostatectomy. Multi-frequency steady-state mechanical excitations were applied to each patient's prostate through the perineum, and prostate tissue displacements were captured by a transrectal ultrasound system. 3D volumetric data including the absolute value of tissue elasticity, strain, and frequency response were computed for each patient. Based on the combination of all extracted features, a random forest classification algorithm was used to separate cancerous regions from normal tissue and to compute a measure of cancer probability. Registered whole-mount histopathology images of the excised prostate gland were used as the ground truth of cancer distribution for classifier training. An area under the receiver operating characteristic curve of 0.82 ± 0.01 was achieved in a leave-one-patient-out cross-validation. Our results show the potential of multi-parametric quantitative elastography for prostate cancer detection for the first time in a clinical setting, and justify further studies to establish whether the approach can have clinical use.
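The evaluation protocol maps directly onto grouped cross-validation. A sketch of the leave-one-patient-out AUC loop with a random forest (hyperparameters and variable names are assumptions, not values from the paper; X holds the per-region elasticity, strain, and frequency-response features, y the histopathology labels):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import LeaveOneGroupOut

    def leave_one_patient_out_auc(X, y, patient_ids):
        """Mean/std AUC over folds, holding out one patient per fold."""
        aucs = []
        for train, test in LeaveOneGroupOut().split(X, y, groups=patient_ids):
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            clf.fit(X[train], y[train])
            prob = clf.predict_proba(X[test])[:, 1]   # per-region cancer probability
            aucs.append(roc_auc_score(y[test], prob))
        return float(np.mean(aucs)), float(np.std(aucs))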


Subjects
Algorithms; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Palpation; Pattern Recognition, Automated/methods; Prostatic Neoplasms/diagnosis; Elasticity Imaging Techniques; Humans; Image Enhancement/methods; Male; Multimodal Imaging/methods; Reproducibility of Results; Sensitivity and Specificity
8.
Med Phys; 41(7): 073505, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24989419

ABSTRACT

PURPOSE: Ultrasound-based solutions for the diagnosis and prognosis of prostate cancer are highly desirable. The authors have devised a method for detecting prostate cancer using a vibroelastography (VE) system developed by their group and a tissue classification approach based on texture analysis of VE images. METHODS: The VE method applies wide-band mechanical vibrations to the tissue. Here, the authors report on the use of this system for cancer detection and show that the texture of VE images, characterized by first- and second-order statistics of the pixel intensities, forms a promising set of features for tissue typing to detect prostate cancer. The system was used to image patients prior to radical surgery. The removed specimens were sectioned and studied by an experienced histopathologist. The authors registered the whole-mount histology sections to the ultrasound images using an automatic registration algorithm. This enabled quantitative, unbiased evaluation of the performance of the authors' imaging method in cancer detection. The authors used support vector machine (SVM) classification to measure the cancer detection performance of the VE method. Regions of tissue of size 5 × 5 mm, labeled as cancer or noncancer based on automatic registration to histology slides, were classified using the SVM. RESULTS: The authors report an area under the ROC curve of 0.81 ± 0.10 in cancer detection on 1066 tissue regions from 203 images. All cancer tumors in all zones were included in this analysis and were classified versus noncancer tissue in the peripheral zone. This outcome was obtained in leave-one-patient-out validation. CONCLUSIONS: The developed 3D prostate vibroelastography system and the proposed multiparametric approach based on statistical texture parameters from the VE images result in a promising cancer detection method.
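As a rough sketch of the texture pipeline, the snippet below computes simple first-order intensity statistics per region and trains an SVM; the paper additionally uses second-order (co-occurrence) statistics, and `regions` and `labels` are placeholders for the histology-labeled 5 × 5 mm patches:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def first_order_features(region):
        """Mean, standard deviation, and skewness of pixel intensities."""
        r = region.ravel().astype(float)
        m, s = r.mean(), r.std()
        return np.array([m, s, ((r - m) ** 3).mean() / (s ** 3 + 1e-9)])

    X = np.array([first_order_features(r) for r in regions])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(X, labels)   # labels from registered whole-mount histology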


Subjects
Elasticity Imaging Techniques/methods; Imaging, Three-Dimensional/methods; Multimodal Imaging/methods; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/physiopathology; Vibration; Algorithms; Area Under Curve; Humans; Male; Prostate/diagnostic imaging; Prostate/pathology; Prostate/physiopathology; Prostatic Neoplasms/diagnosis; Prostatic Neoplasms/surgery; ROC Curve; Support Vector Machine
9.
IEEE Trans Biomed Eng; 60(9): 2663-72, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23674418

ABSTRACT

Robot-assisted laparoscopic radical prostatectomy (RALRP) using the da Vinci surgical system is the current state-of-the-art treatment option for clinically confined prostate cancer. Given the limited field of view of the surgical site in RALRP, several groups have proposed the integration of transrectal ultrasound (TRUS) imaging in the surgical workflow to assist with accurate resection of the prostate and the sparing of the neurovascular bundles (NVBs). We previously introduced a robotic TRUS manipulator and a method for automatically tracking da Vinci surgical instruments with the TRUS imaging plane, in order to facilitate the integration of intraoperative TRUS in RALRP. Rapid and automatic registration of the kinematic frames of the da Vinci surgical system and the robotic TRUS probe manipulator is a critical component of the instrument tracking system. In this paper, we propose a fully automatic registration technique based on automatic 3-D TRUS localization of robot instrument tips pressed against the air-tissue boundary anterior to the prostate. The detection approach uses a multiscale filtering technique to identify and localize surgical instrument tips in the TRUS volume, and could also be used to detect other surface fiducials in 3-D ultrasound. Experiments have been performed using a tissue phantom and two ex vivo tissue samples to show the feasibility of the proposed methods. Also, an initial in vivo evaluation of the system has been carried out on a live anaesthetized dog with a da Vinci Si surgical system and a target registration error (defined as the root mean square distance of corresponding points after registration) of 2.68 mm has been achieved. Results show this method's accuracy and consistency for automatic registration of TRUS images to the da Vinci surgical system.
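Given the instrument-tip fiducials localized in both the TRUS volume and the da Vinci kinematic frame, the registration itself is a least-squares rigid fit (orthogonal Procrustes via SVD), and the reported target registration error is the RMS distance of corresponding points after applying it. A minimal sketch with hypothetical names:

    import numpy as np

    def rigid_register(src, dst):
        """Least-squares rigid transform (R, t) mapping src onto dst;
        src, dst: (N, 3) corresponding fiducial points."""
        cs, cd = src.mean(0), dst.mean(0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
        R = Vt.T @ D @ U.T
        return R, cd - R @ cs

    def tre_rms(R, t, src_pts, dst_pts):
        """Target registration error: RMS distance after registration."""
        d = src_pts @ R.T + t - dst_pts
        return float(np.sqrt((d ** 2).sum(axis=1).mean()))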


Subjects
Imaging, Three-Dimensional/methods; Rectum/diagnostic imaging; Robotics/instrumentation; Surgical Instruments; High-Intensity Focused Ultrasound Ablation, Transrectal/methods; Animals; Cattle; Dogs; Humans; Male; Models, Biological; Phantoms, Imaging; Prostatectomy; Ultrasonography; High-Intensity Focused Ultrasound Ablation, Transrectal/instrumentation; User-Computer Interface