Results 1 - 4 of 4
1.
Int J Comput Assist Radiol Surg ; 17(8): 1419-1427, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35503394

ABSTRACT

PURPOSE: Automation of sub-tasks during robotic surgery is challenging due to the high variability of surgical scenes within and between patients. For example, the pick-and-place task can be executed multiple times during the same operation and for distinct purposes. Hence, it becomes hard to design automation solutions that can generalise a skill over different contexts. All the experiments are conducted using the Pneumatic Attachable Flexible (PAF) rail, a novel surgical tool designed for robotic-assisted intraoperative organ manipulation. METHODS: We build upon a previous open-source surgical Reinforcement Learning (RL) training environment to develop rlman, a new RL framework for manipulation skills. In rlman, contextual RL agents are trained to solve different aspects of the pick-and-place task using the PAF rail system. rlman is implemented to support both low- and high-dimensional state information for solving surgical sub-tasks in a simulation environment. RESULTS: We use rlman to train state-of-the-art RL agents to solve four different surgical sub-tasks involving manipulation skills using the PAF rail, and we compare the results with state-of-the-art benchmarks found in the literature. We evaluate the agents' ability to generalise over different aspects of the targeted surgical environment. CONCLUSION: We have shown that the rlman framework can support the training of different RL algorithms for solving surgical sub-tasks, analysing the importance of context information for generalisation capabilities. We aim to deploy the trained policies on the real da Vinci robot using the dVRK and to show that their generalisation capabilities transfer to the real world.


Subjects
Learning, Robotic Surgical Procedures, Algorithms, Computer Simulation, Humans, Robotic Surgical Procedures/education
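The abstract does not detail the rlman API, so as an illustration only, a contextual RL task of this kind can be sketched with a Gym-style interface in which the goal is part of the observed context; every name and value below is hypothetical:

```python
import numpy as np

class ContextualPickPlaceEnv:
    """Toy stand-in for a contextual manipulation task: the agent moves a
    gripper in 3D towards a goal whose position is given by the context."""

    def __init__(self, workspace=1.0, tol=0.05, max_steps=50):
        self.workspace = workspace  # half-width of the cubic workspace
        self.tol = tol              # success threshold on goal distance
        self.max_steps = max_steps

    def reset(self, context=None):
        # The context encodes task variability (here: the goal position).
        rng = np.random.default_rng()
        self.goal = (np.asarray(context, dtype=float) if context is not None
                     else rng.uniform(-self.workspace, self.workspace, 3))
        self.pos = np.zeros(3)
        self.steps = 0
        return self._obs()

    def _obs(self):
        # Low-dimensional state: gripper position concatenated with the context.
        return np.concatenate([self.pos, self.goal])

    def step(self, action):
        # Actions are small displacements, clipped to +/-0.1 per axis.
        self.pos = np.clip(self.pos + np.clip(action, -0.1, 0.1),
                           -self.workspace, self.workspace)
        self.steps += 1
        dist = np.linalg.norm(self.goal - self.pos)
        done = dist < self.tol or self.steps >= self.max_steps
        return self._obs(), -dist, done, {"success": dist < self.tol}

# A trivial proportional policy reaches the contextual goal.
env = ContextualPickPlaceEnv()
obs = env.reset(context=[0.3, -0.2, 0.1])
done = False
while not done:
    obs, reward, done, info = env.step(obs[3:] - obs[:3])  # move toward the goal
```

Because the goal enters the observation, a single policy can in principle be trained across many contexts rather than one fixed target, which is the generalisation question the paper studies.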
2.
Int J Comput Assist Radiol Surg ; 16(7): 1141-1149, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33991305

ABSTRACT

PURPOSE: Robotic-assisted partial nephrectomy (RAPN) is a tissue-preserving approach to treating renal cancer, where ultrasound (US) imaging is used for intra-operative identification of tumour margins and localisation of blood vessels. With the da Vinci Surgical System (Sunnyvale, CA), the US probe is inserted through an auxiliary access port, grasped by the robotic tool and moved over the surface of the kidney. Images from the US probe are displayed separately from the surgical site video within the surgical console, leaving the surgeon to interpret and co-register the information, which is challenging and complicates the procedural workflow. METHODS: We introduce a novel software architecture to support a soft robotic rail, a hardware device designed to automate intra-operative US acquisition. As a preliminary step towards complete task automation, the rail is automatically grasped and positioned on the tissue surface so that the surgeon can then manually manipulate the US probe along it. RESULTS: A preliminary clinical study, involving five surgeons, was carried out to evaluate the potential performance of the system. Results indicate that the proposed semi-autonomous approach reduced the time needed to complete a US scan compared to manual tele-operation. CONCLUSION: Procedural automation can be an important workflow-enhancing functionality in future robotic surgery systems. We have presented a preliminary study on semi-autonomous US imaging, which could support more efficient data acquisition.


Subjects
Kidney Neoplasms/surgery, Kidney/surgery, Laparoscopy/methods, Nephrectomy/methods, Robotic Surgical Procedures/methods, Ultrasonography/instrumentation, Equipment Design, Humans, Kidney/diagnostic imaging, Kidney Neoplasms/diagnosis
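The semi-autonomous workflow above (autonomous grasping and positioning of the rail, then hand-over to the surgeon for the manual US sweep) can be illustrated as a simple phase sequence; the phase names and transitions below are hypothetical, not the actual system architecture:

```python
from enum import Enum, auto

class ScanPhase(Enum):
    GRASP_RAIL = auto()     # robot autonomously grasps the PAF rail
    POSITION_RAIL = auto()  # robot places the rail on the tissue surface
    MANUAL_SCAN = auto()    # control handed to the surgeon for the US sweep
    DONE = auto()

# Hypothetical linear phase sequence for the semi-autonomous scan.
TRANSITIONS = {
    ScanPhase.GRASP_RAIL: ScanPhase.POSITION_RAIL,
    ScanPhase.POSITION_RAIL: ScanPhase.MANUAL_SCAN,
    ScanPhase.MANUAL_SCAN: ScanPhase.DONE,
}

def run_workflow(start=ScanPhase.GRASP_RAIL):
    """Advance through the phases and return the visited sequence."""
    phase, visited = start, [start]
    while phase is not ScanPhase.DONE:
        phase = TRANSITIONS[phase]
        visited.append(phase)
    return visited
```

The point of the split is visible in the sequence: only the first two phases are automated, so timing gains over full tele-operation come from the grasp-and-position steps.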
3.
IEEE Trans Med Imaging ; 40(5): 1450-1460, 2021 05.
Article in English | MEDLINE | ID: mdl-33556005

ABSTRACT

Producing manual, pixel-accurate image segmentation labels is tedious and time-consuming. This is often a rate-limiting factor when large numbers of labeled images are required, such as for training deep convolutional networks for instrument-background segmentation in surgical scenes. No large datasets comparable to industry standards in the computer vision community are available for this task. To circumvent this problem, we propose to automate the creation of a realistic training dataset by exploiting techniques from special effects and harnessing them to target training performance rather than visual appeal. Foreground data is captured by placing sample surgical instruments over a chroma key (a.k.a. green screen) in a controlled environment, making extraction of the relevant image segment straightforward. Multiple lighting conditions and viewpoints can be captured and introduced in the simulation by moving the instruments and camera and modulating the light source. Background data is captured by collecting videos that do not contain instruments. In the absence of pre-existing instrument-free background videos, minimal labeling effort is required: only frames that do not contain surgical instruments need to be selected from videos of surgical interventions freely available online. We compare different methods of blending instruments over tissue and propose a novel data augmentation approach that takes advantage of the plurality of options. We show that by training a vanilla U-Net on semi-synthetic data only and applying simple post-processing, we are able to match the results of the same network trained on a publicly available, manually labeled real dataset.


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Surgical Instruments
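A minimal sketch of the chroma-key idea, assuming an RGB frame whose background pixels are dominated by the green channel; the threshold value and helper names are illustrative, not the paper's actual pipeline:

```python
import numpy as np

def chroma_key_mask(rgb, g_margin=40):
    """Foreground mask for a green-screen frame: a pixel counts as background
    when its green channel exceeds both red and blue by more than g_margin."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    background = (g - np.maximum(r, b)) > g_margin
    return ~background  # True where the instrument (foreground) is

def blend(foreground, background, mask):
    """Paste masked foreground pixels over a tissue background frame."""
    out = background.copy()
    out[mask] = foreground[mask]
    return out

# Tiny synthetic example: a 2x2 "green screen" with one instrument pixel.
fg = np.zeros((2, 2, 3), dtype=np.uint8)
fg[..., 1] = 255           # pure green everywhere...
fg[0, 0] = [200, 30, 30]   # ...except one reddish instrument pixel
bg = np.full((2, 2, 3), 120, dtype=np.uint8)  # uniform "tissue" background

mask = chroma_key_mask(fg)
composite = blend(fg, bg, mask)
```

The hard cut-and-paste shown here is only the simplest of the blending options the paper compares; softer alpha blending at the mask border is the kind of variation their augmentation exploits.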
4.
Int J Comput Assist Radiol Surg ; 15(7): 1147-1155, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32385597

ABSTRACT

PURPOSE: In robotic-assisted partial nephrectomy (RAPN), the use of intraoperative ultrasound (IOUS) helps to localise and outline the tumours as well as the blood vessels within the kidney. The aim of this work is to evaluate the use of the pneumatically attachable flexible (PAF) rail system for US 3D reconstruction of malignant masses in RAPN. The PAF rail system is a novel device, developed and previously presented by the authors, that enables track-guided US scanning. METHODS: We present a comparison study of US 3D reconstruction of masses based on the da Vinci Surgical System kinematics and on single- and stereo-camera tracking of visual markers embedded on the probe. A US-realistic kidney phantom embedding a mass is used for testing. A new design for the US probe attachment is presented to enhance the performance of the kinematic approach. A feature extraction algorithm is proposed to detect the margins of the targeted mass in US images. RESULTS: To evaluate the performance of the investigated approaches, the resulting 3D reconstructions have been compared to a CT scan of the phantom. The data collected indicate that single-camera reconstruction outperformed the other approaches, reconstructing the targeted mass with sub-millimetre accuracy. CONCLUSIONS: This work demonstrates that the PAF rail system provides a reliable platform for accurate US 3D reconstruction of masses in RAPN procedures. The proposed system also has the potential to be employed in other surgical procedures such as hepatectomy or laparoscopic liver resection.


Subjects
Laparoscopy/methods; Nephrectomy/methods; Robotic Surgical Procedures/methods; Ultrasonography, Interventional/methods; Humans; Imaging, Three-Dimensional; Tomography, X-Ray Computed; Treatment Outcome
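Both the kinematics-based and the camera-tracking-based reconstructions rest on the same geometric step: mapping each US image pixel into 3D through the tracked probe pose. A minimal sketch, assuming a calibrated pixel size and a 4x4 homogeneous probe pose; the function name and numbers are hypothetical:

```python
import numpy as np

def us_pixel_to_world(px, py, pixel_size_mm, T_world_probe):
    """Map a 2D US image pixel into 3D world coordinates.

    The pixel is first scaled to millimetres in the image plane (taken as
    z = 0 in the probe frame), then transformed by the tracked probe pose,
    a 4x4 homogeneous matrix from kinematics or camera tracking."""
    p_probe = np.array([px * pixel_size_mm, py * pixel_size_mm, 0.0, 1.0])
    return (T_world_probe @ p_probe)[:3]

# Hypothetical pose: probe translated 10 mm along x, no rotation.
T = np.eye(4)
T[0, 3] = 10.0
point = us_pixel_to_world(100, 50, 0.1, T)  # 0.1 mm per pixel
```

Accumulating such points over a sweep along the rail yields the 3D point cloud of the mass margins; the accuracy comparison in the paper then comes down to how well each tracking source estimates `T_world_probe` per frame.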