Results 1 - 5 of 5
1.
Article in English | MEDLINE | ID: mdl-38753135

ABSTRACT

PURPOSE: Preoperative imaging plays a pivotal role in sinus surgery, where CT offers patient-specific insights into complex anatomy, enabling real-time intraoperative navigation to complement endoscopic imaging. However, surgery elicits anatomical changes that are not represented in the preoperative model, making the basis for navigation increasingly inaccurate as surgery progresses. METHODS: We propose a first vision-based approach to updating the preoperative 3D anatomical model using intraoperative endoscopic video in navigated sinus surgery, where relative camera poses are known. We compare intraoperative monocular depth estimates against preoperative depth renders to identify modified regions. In these regions, the new depths are integrated through volumetric fusion in a truncated signed distance function (TSDF) representation to generate an intraoperative 3D model that reflects tissue manipulation. RESULTS: We quantitatively evaluate our approach by sequentially updating models over a five-step surgical progression in an ex vivo specimen. We compute the error between correspondences from the updated model and ground-truth intraoperative CT in the region of anatomical modification. The error of the resulting models decreases over the surgical progression, whereas it increases when no update is employed. CONCLUSION: Our findings suggest that preoperative 3D anatomical models can be updated using intraoperative endoscopic video in navigated sinus surgery. Future work will investigate improvements to monocular depth estimation as well as removing the need for external navigation systems. The resulting ability to continuously update the patient model may provide surgeons with a more precise understanding of the current anatomical state and paves the way toward a digital twin paradigm for sinus surgery.
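The update step described in this abstract lends itself to a compact illustration. The following Python sketch assumes known intrinsics and camera poses and uses illustrative names and thresholds rather than the authors' code: it flags pixels where the monocular depth estimate disagrees with the preoperative depth render, then integrates the new depths into a TSDF volume with the standard weighted running average used in volumetric fusion.

```python
import numpy as np

def modified_region_mask(depth_mono, depth_render, thresh=0.003):
    """Flag pixels where intraoperative depth departs from the preop render."""
    valid = (depth_mono > 0) & (depth_render > 0)
    return valid & (np.abs(depth_mono - depth_render) > thresh)

def update_tsdf(tsdf, weights, depth, mask, K, T_wc, origin, voxel_size,
                trunc=0.01):
    """Integrate masked depth into a TSDF volume (metres, world frame).

    tsdf, weights : (X, Y, Z) arrays; origin : world position of voxel (0,0,0);
    K : 3x3 intrinsics; T_wc : 4x4 camera-to-world pose (assumed known from
    the external navigation system mentioned in the abstract).
    """
    X, Y, Z = tsdf.shape
    ii, jj, kk = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z),
                             indexing="ij")
    pts_w = origin + voxel_size * np.stack([ii, jj, kk], axis=-1)
    T_cw = np.linalg.inv(T_wc)                       # world -> camera
    pts_c = pts_w @ T_cw[:3, :3].T + T_cw[:3, 3]
    z = pts_c[..., 2]
    z_safe = np.where(z > 1e-6, z, 1e-6)
    u = np.round(pts_c[..., 0] * K[0, 0] / z_safe + K[0, 2]).astype(int)
    v = np.round(pts_c[..., 1] * K[1, 1] / z_safe + K[1, 2]).astype(int)
    h, w = depth.shape
    in_view = (z > 1e-6) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u_c, v_c = np.clip(u, 0, w - 1), np.clip(v, 0, h - 1)
    observed = in_view & mask[v_c, u_c]              # only modified regions
    d = depth[v_c, u_c]
    sdf = np.clip(d - z, -trunc, trunc) / trunc      # truncated signed distance
    upd = observed & (d - z > -trunc)                # skip voxels far behind surface
    # Weighted running average, as in standard volumetric fusion.
    tsdf[upd] = (tsdf[upd] * weights[upd] + sdf[upd]) / (weights[upd] + 1)
    weights[upd] += 1
    return tsdf, weights
```

A mesh of the updated anatomy can then be extracted from the TSDF with marching cubes, which is the usual final step in fusion pipelines of this kind.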

2.
Article in English | MEDLINE | ID: mdl-38775904

ABSTRACT

PURPOSE: Monocular SLAM algorithms are the key enabling technology for image-based surgical navigation systems in endoscopic procedures. Due to the scarcity of visual features and the unique lighting conditions encountered in endoscopy, classical SLAM approaches perform inconsistently. Many recent approaches to endoscopic SLAM rely on deep learning models. They show promising results when optimized for a single domain such as arthroscopy, sinus endoscopy, colonoscopy, or laparoscopy, but are limited by an inability to generalize to different domains without retraining. METHODS: To address this generality issue, we propose OneSLAM, a monocular SLAM algorithm for surgical endoscopy that works out of the box across several endoscopic domains, including sinus endoscopy, colonoscopy, arthroscopy, and laparoscopy. Our pipeline builds upon robust tracking-any-point (TAP) foundation models to reliably track sparse correspondences across multiple frames, and runs local bundle adjustment to jointly optimize camera poses and a sparse 3D reconstruction of the anatomy. RESULTS: We compare the performance of our method against three strong baselines previously proposed for monocular SLAM in endoscopy and general scenes. In all four tested domains, OneSLAM performs better than or comparably to existing approaches targeted at that specific data, generalizing across domains without the need for retraining. CONCLUSION: OneSLAM benefits from the convincing performance of TAP foundation models and generalizes to endoscopic sequences of different anatomies, all while performing better than or comparably to domain-specific SLAM approaches. Future research on global loop closure will investigate how to reliably detect loops in endoscopic scenes to reduce accumulated drift and enhance long-term navigation capabilities.
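As a rough illustration of the pipeline this abstract describes (not OneSLAM's actual interface), the sketch below assumes the sparse pixel tracks are already produced by a TAP foundation model and handed over as an array, and runs a local bundle adjustment over them with SciPy; OpenCV is used only for point projection.

```python
import numpy as np
import cv2                      # OpenCV, used here only for point projection
from scipy.optimize import least_squares

def project(points_w, rvecs, tvecs, K):
    """Project world points into each camera (Rodrigues rotation vectors)."""
    uv = []
    for r, t in zip(rvecs, tvecs):
        p, _ = cv2.projectPoints(points_w, r, t, K, None)
        uv.append(p.reshape(-1, 2))
    return np.stack(uv)                       # (n_frames, n_points, 2)

def ba_residuals(params, n_frames, n_points, K, observed, visible):
    """Reprojection residuals for all poses and points in the local window."""
    rvecs = params[:n_frames * 3].reshape(n_frames, 3)
    tvecs = params[n_frames * 3:n_frames * 6].reshape(n_frames, 3)
    pts = params[n_frames * 6:].reshape(n_points, 3)
    return (project(pts, rvecs, tvecs, K) - observed)[visible].ravel()

def local_bundle_adjustment(observed, visible, K, rvecs0, tvecs0, pts0):
    """Jointly refine camera poses and sparse structure.

    observed : (n_frames, n_points, 2) pixel tracks, assumed to come from a
    TAP foundation model; visible : boolean (n_frames, n_points) mask, since
    TAP models also report when a point is occluded or off-frame.
    """
    n_frames, n_points = observed.shape[:2]
    x0 = np.concatenate([rvecs0.ravel(), tvecs0.ravel(), pts0.ravel()])
    res = least_squares(ba_residuals, x0,
                        args=(n_frames, n_points, K, observed, visible))
    n = n_frames
    return (res.x[:n * 3].reshape(n, 3),
            res.x[n * 3:n * 6].reshape(n, 3),
            res.x[n * 6:].reshape(-1, 3))
```

In a full SLAM system this optimization would run repeatedly over a sliding window of keyframes; the occlusion mask is what lets long TAP tracks survive the brief disappearances common in endoscopic video.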

3.
Article in English | MEDLINE | ID: mdl-38704792

ABSTRACT

PURPOSE: Eye gaze tracking and pupillometry are evolving areas within the field of tele-robotic surgery, particularly in the context of estimating cognitive load (CL). However, this is a recent field, and current solutions for gaze and pupil tracking in robotic surgery require assessment. Given that reliable cognitive load estimation requires stable pupillometry signals, we compare the accuracy of three eye trackers, spanning head-mounted and console-mounted designs. METHODS: We conducted a user study with the da Vinci Research Kit (dVRK) to compare the three designs. We collected eye tracking and dVRK video data while participants observed nine markers distributed over the dVRK screen. We computed and analyzed pupil detection stability and gaze prediction accuracy for the three designs. RESULTS: Head-worn devices present better stability and accuracy of gaze prediction and pupil detection than console-mounted systems. Tracking stability varies across the field of view and between trackers, with some gaze predictions falling in invalid zones of the image despite high confidence. CONCLUSION: While head-worn solutions show benefits in confidence and stability, our results demonstrate the need to improve eye tracker performance regarding pupil detection, stability, and gaze accuracy in tele-robotic scenarios.
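The abstract does not spell out the exact metric definitions, so the sketch below uses common conventions to illustrate what such a comparison computes: angular gaze error against the known marker positions, pupil detection rate, and pupil-size stability during fixations. All names and thresholds here are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def gaze_accuracy_deg(gaze_px, marker_px, mm_per_px, view_dist_mm):
    """Mean angular error (degrees) between gaze samples and marker targets,
    converting on-screen pixel error to visual angle at the viewing distance."""
    err_mm = np.linalg.norm(gaze_px - marker_px, axis=1) * mm_per_px
    return float(np.degrees(np.arctan2(err_mm, view_dist_mm)).mean())

def pupil_detection_rate(confidence, min_conf=0.8):
    """Fraction of samples where the pupil was detected with high confidence."""
    return float(np.mean(confidence >= min_conf))

def pupil_stability(diameter_mm, confidence, min_conf=0.8):
    """Std. dev. of pupil diameter over confidently detected samples; lower is
    more stable when the stimulus (a fixated marker) is held constant."""
    d = diameter_mm[confidence >= min_conf]
    return float(np.std(d)) if d.size else np.nan
```

Computing these per marker position also exposes the effect reported in the abstract: accuracy and stability that degrade toward particular zones of the field of view.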

4.
Int J Comput Assist Radiol Surg ; 19(6): 1113-1120, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38589579

ABSTRACT

PURPOSE: Gaze tracking and pupillometry are established proxies for cognitive load, giving insights into a user's mental effort. In tele-robotic surgery, knowing a user's cognitive load can inspire novel human-machine interaction designs, fostering contextual surgical assistance systems and personalized training programs. While pupillometry-based methods for estimating cognitive effort have been proposed, their application in surgery is limited by the pupil's sensitivity to brightness changes, which can mask the pupil's response to cognitive load. Methods that account for both pupil and brightness conditions are therefore essential for detecting cognitive effort in unconstrained scenarios. METHODS: To contend with this challenge, we introduce a personalized pupil response model integrating pupil- and brightness-based features. Discrepancies between predicted and measured pupil diameter indicate dilations due to non-brightness-related sources, i.e., cognitive effort. Combined with gaze entropy, the model can detect cognitive load using a random forest classifier. To test our model, we performed a user study with the da Vinci Research Kit in which 17 users performed pick-and-place tasks in addition to auditory tasks known to elicit cognitive effort responses. RESULTS: We compare our method to two baselines (BCPD and CPD), demonstrating favorable performance under varying brightness conditions. Our method achieves an average true positive rate of 0.78, outperforming the baselines (0.57 and 0.64). CONCLUSION: We present a personalized, brightness-aware model for cognitive effort detection that operates under unconstrained brightness conditions and compares favorably to competing approaches, contributing to the advancement of cognitive effort detection in tele-robotic surgery. Future work will consider alternative learning strategies and handling the difficult positive-unlabeled scenario in user studies, where only some positive and no negative events are reliably known.
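A minimal sketch of this idea follows, with the model form and feature choices assumed rather than taken from the paper: fit a per-user brightness-to-pupil regressor on baseline data, treat the prediction residual as non-brightness dilation, and pair it with gaze entropy as input to the random forest classifier the abstract mentions. Each row is one time window.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def gaze_entropy(gaze_px, bins=8, screen=(1920, 1080)):
    """Shannon entropy of the spatial gaze distribution over a coarse grid;
    higher entropy means more dispersed scanning."""
    h, _, _ = np.histogram2d(gaze_px[:, 0], gaze_px[:, 1], bins=bins,
                             range=[[0, screen[0]], [0, screen[1]]])
    p = h.ravel() / max(h.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def fit_personal_pupil_model(brightness_feats, pupil_diam):
    """Calibrate the per-user brightness->pupil mapping on task-free baseline
    data, where dilation is driven by brightness alone (an assumption)."""
    return RandomForestRegressor(n_estimators=100).fit(
        brightness_feats, pupil_diam)

def cognitive_features(model, brightness_feats, pupil_diam, gaze_windows):
    """One feature row per window: residual dilation (measured minus
    brightness-predicted diameter) plus gaze entropy for that window."""
    residual = pupil_diam - model.predict(brightness_feats)
    entropy = np.array([gaze_entropy(g) for g in gaze_windows])
    return np.column_stack([residual, entropy])

# Downstream, a classifier maps windowed features to load / no-load labels:
# clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
```

The key design point is that the regressor is personalized: pupil light responses differ across users, so calibrating per user is what lets the residual be read as cognitive rather than photometric dilation.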


Subjects
Cognition , Pupil , Robotic Surgical Procedures , Humans , Pupil/physiology , Cognition/physiology , Robotic Surgical Procedures/methods , Telemedicine , Male , Adult , Female
5.
Sci Rep ; 10(1): 2748, 2020 02 17.
Article in English | MEDLINE | ID: mdl-32066744

ABSTRACT

We present a comprehensive analysis of the submissions to the first edition of the Endoscopy Artefact Detection challenge (EAD). Through crowd-sourced submissions, this initiative is a step towards understanding the limitations of existing state-of-the-art computer vision methods applied to endoscopy and promoting the development of new approaches suitable for clinical translation. Endoscopy is a routine imaging technique for the detection, diagnosis, and treatment of diseases in hollow organs: the esophagus, stomach, colon, uterus, and bladder. However, the nature of these organs prevents imaged tissue from being free of imaging artefacts such as bubbles, pixel saturation, organ specularity, and debris, all of which pose substantial challenges for any quantitative analysis. Consequently, the potential for improved clinical outcomes through quantitative assessment of the abnormal mucosal surfaces observed in endoscopy videos remains largely unrealized. The EAD challenge promotes awareness of and addresses this key bottleneck by investigating methods that can accurately classify, localize, and segment artefacts in endoscopy frames as critical prerequisite tasks. Using a diverse, curated multi-institutional, multi-modality, multi-organ dataset of video frames, the accuracy and performance of 23 algorithms were objectively ranked for artefact detection and segmentation. The ability of methods to generalize to unseen datasets was also evaluated. The best-performing methods (top 15%) propose deep learning strategies to reconcile variability in artefact appearance with respect to size, modality, occurrence, and organ type. However, no single method performed best across all tasks. Detailed analyses reveal the shortcomings of current training strategies and highlight the need to develop new optimal metrics to accurately quantify the clinical applicability of methods.
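The challenge's exact ranking formula is not reproduced here, but detection and segmentation scoring of this kind rests on two per-frame quantities, sketched below with illustrative names: IoU-based matching of predicted boxes to ground-truth boxes (the building block of average precision) and mask IoU per artefact class.

```python
import numpy as np

def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def match_detections(pred_boxes, gt_boxes, iou_thresh=0.5):
    """Greedy one-to-one matching; returns the true-positive count.
    pred_boxes is assumed sorted by descending confidence."""
    used, tp = set(), 0
    for p in pred_boxes:
        ious = [box_iou(p, g) if i not in used else 0.0
                for i, g in enumerate(gt_boxes)]
        if ious and max(ious) >= iou_thresh:
            used.add(int(np.argmax(ious)))
            tp += 1
    return tp

def mask_iou(pred_mask, gt_mask):
    """IoU between binary segmentation masks for one artefact class."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union > 0 else 1.0
```

Aggregating these per class, per frame, and per IoU threshold is what produces the leaderboard-style rankings the paper analyzes; the generalization test simply repeats the aggregation on held-out institutions and modalities.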


Subjects
Algorithms , Artifacts , Endoscopy/standards , Image Interpretation, Computer-Assisted/standards , Imaging, Three-Dimensional/standards , Neural Networks, Computer , Colon/diagnostic imaging , Colon/pathology , Datasets as Topic , Endoscopy/statistics & numerical data , Esophagus/diagnostic imaging , Esophagus/pathology , Female , Humans , Image Interpretation, Computer-Assisted/statistics & numerical data , Imaging, Three-Dimensional/statistics & numerical data , International Cooperation , Male , Stomach/diagnostic imaging , Stomach/pathology , Urinary Bladder/diagnostic imaging , Urinary Bladder/pathology , Uterus/diagnostic imaging , Uterus/pathology