Results 1 - 2 of 2
Sensors (Basel) ; 21(15)2021 Jul 30.
Article in English | MEDLINE | ID: mdl-34372398


Accurate semantic image segmentation of medical imaging can enable intelligent vision-based assistance in robot-assisted minimally invasive surgery. The human body and surgical procedures are highly dynamic. While machine vision presents a promising approach, training image sets large enough for robust performance are either costly or unavailable. This work examines three novel generative adversarial network (GAN) methods for producing usable synthetic tool images from only surgical background images and a few real tool images. The best of the three approaches generates realistic tool textures while preserving local background content by incorporating both a style-preservation and a content-loss component into the proposed multi-level loss function. The approach is quantitatively evaluated, and the results suggest that the synthetically generated training tool images enhance UNet tool-segmentation performance. More specifically, on a random set of 100 cadaver and live endoscopic images from the University of Washington Sinus Dataset, the UNet trained with images synthesized by the presented method achieved 35.7% and 30.6% improvements over training on purely real images in mean Dice coefficient and Intersection over Union scores, respectively. These results are promising for using more widely available, routine screening endoscopy to preoperatively generate synthetic training tool images for intraoperative UNet tool segmentation.
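The abstract reports segmentation quality as mean Dice coefficient and Intersection over Union (IoU). As a minimal illustration of how these two metrics relate (not the paper's evaluation code), both can be computed from a pair of flattened binary masks:

```python
def dice_iou(pred, truth):
    """Compute Dice coefficient and IoU for two binary masks
    given as equal-length sequences of 0/1 values."""
    inter = sum(p & t for p, t in zip(pred, truth))  # overlap pixels
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    # Define both metrics as 1.0 when both masks are empty.
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

# Example: half the predicted pixels overlap the ground truth.
d, i = dice_iou([1, 1, 0, 0], [1, 0, 1, 0])  # d = 0.5, i = 1/3
```

Dice always exceeds IoU for partial overlap (Dice = 2·IoU/(1+IoU)), so the two reported improvement percentages track the same underlying overlap gains.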

Endoscopy, Image Processing, Computer-Assisted, Humans, Semantics
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 4903-4908, 2020 07.
Article in English | MEDLINE | ID: mdl-33019088


Haptic feedback can render real-time force interactions with computer-simulated objects. In several telerobotic applications, it is desired that a haptic simulation reflect a physical task space or interaction accurately. This is particularly true when excessive applied force can have disastrous consequences, as with tissue damage in robot-assisted minimally invasive surgery (RMIS). Since force cannot be directly measured in RMIS, non-contact methods are desired. A promising direction for non-contact force estimation is the use of vision sensors to estimate deformation. However, the fidelity of non-contact force rendering of deformable interactions required to maintain surgical operator performance is not well established. This work empirically evaluates the degree to which haptic feedback may deviate from ground truth while still yielding acceptable teleoperated performance in a simulated RMIS-based palpation task. An adaptive thresholding method is used to collect the minimum and maximum tolerable errors in the force orientation and magnitude of the presented haptic feedback that maintain sufficient performance. A preliminary user study is conducted to verify the utility of the simulation platform, and the results of this work have implications for haptic feedback in RMIS and inform guidelines for vision-based tool-tissue force estimation.
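The adaptive thresholding procedure described here resembles a standard 1-up/1-down staircase from psychophysics, which converges on the level where the participant's response flips. A minimal sketch under that assumption (the function names and the deterministic observer are hypothetical, for illustration only):

```python
def staircase_threshold(respond, start, step, reversals_needed=6):
    """Estimate the stimulus level at which respond(level) flips
    between True (tolerated/detected) and False, using a simple
    1-up/1-down staircase. Returns the mean level over the
    reversal points, a common threshold estimate."""
    level = start
    prev = None
    reversal_levels = []
    while len(reversal_levels) < reversals_needed:
        detected = respond(level)
        if prev is not None and detected != prev:
            reversal_levels.append(level)  # response flipped: a reversal
        prev = detected
        # Step down after a detection, up after a miss.
        level += -step if detected else step
    return sum(reversal_levels) / len(reversal_levels)

# Deterministic observer with a true threshold of 5.0:
estimate = staircase_threshold(lambda x: x >= 5.0, start=0.0, step=1.0)
# The staircase oscillates between 4 and 5, so estimate = 4.5.
```

In practice the observer response is noisy and the step size is often shrunk after each reversal; this fixed-step version only shows the convergence mechanism.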

Robotics, User-Computer Interface, Feedback, Feedback, Sensory, Palpation