Results 1 - 20 of 29
1.
BJU Int ; 133(6): 709-716, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38294145

ABSTRACT

OBJECTIVE: To report the learning curve of multiple operators for fusion magnetic resonance imaging (MRI) targeted biopsy and to determine the number of cases needed to achieve proficiency. MATERIALS AND METHODS: All adult males who underwent fusion MRI targeted biopsy between February 2012 and July 2021 for clinically suspected prostate cancer (PCa) in a single centre were included. Fusion transrectal MRI targeted biopsy was performed under local anaesthesia using the Koelis platform. Learning curves for segmentation of transrectal ultrasonography (TRUS) images and the overall MRI targeted biopsy procedure were estimated with locally weighted scatterplot smoothing by computing each operator's timestamps for consecutive procedures. Non-risk-adjusted cumulative sum (CUSUM) methods were used to create learning curves for clinically significant (i.e., International Society of Urological Pathology grade ≥ 2) PCa detection. RESULTS: Overall, 1721 patients underwent MRI targeted biopsy in our centre during the study period. The median (interquartile range) times for TRUS segmentation and for the MRI targeted biopsy procedure were 4.5 (3.5, 6.0) min and 13.2 (10.6, 16.9) min, respectively. Among the 14 operators with experience of more than 50 cases, a plateau was reached after 40 cases for TRUS segmentation time and 50 cases for overall MRI targeted biopsy procedure time. CUSUM analysis showed that the learning curve for clinically significant PCa detection required 25 to 45 procedures to achieve clinical proficiency. Pain scores ranged between 0 and 1 for 84% of patients, and a plateau phase was reached after 20 to 100 cases. CONCLUSIONS: A minimum of 50 cases of MRI targeted biopsy are necessary to achieve clinical and technical proficiency and to reach reproducibility in terms of timing, clinically significant PCa detection, and pain.


Subjects
Image-Guided Biopsy , Learning Curve , Prostate , Prostatic Neoplasms , Humans , Male , Prostatic Neoplasms/pathology , Prostatic Neoplasms/diagnostic imaging , Image-Guided Biopsy/methods , Aged , Middle Aged , Prostate/pathology , Prostate/diagnostic imaging , Ultrasonography, Interventional/methods , Imaging, Three-Dimensional , Magnetic Resonance Imaging , Magnetic Resonance Imaging, Interventional , Clinical Competence , Retrospective Studies
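The non-risk-adjusted CUSUM learning curves used in the study above can be sketched in a few lines: each case adds (1 - p0) on failure and subtracts p0 on success, so the curve starts descending once an operator performs better than the acceptable failure rate. The p0 value and the outcome sequence below are illustrative, not data from the paper.

```python
def cusum_failures(outcomes, p0=0.4):
    """Non-risk-adjusted CUSUM over a sequence of binary outcomes.

    outcomes: 1 = failure (e.g., no clinically significant PCa found
    on a suspicious target), 0 = success. p0 is the acceptable failure
    rate (illustrative value). Each failure adds (1 - p0); each
    success subtracts p0. A descending curve signals performance
    better than p0 -- the plateau marks the end of the learning phase.
    """
    s, curve = 0.0, []
    for x in outcomes:
        s += x - p0
        curve.append(s)
    return curve

# Early cases fail more often (learning phase), later ones succeed.
curve = cusum_failures([1, 1, 0, 1, 0, 0, 0, 0], p0=0.4)
```

Plotting such a curve per operator, the case index where it turns downward for good approximates the proficiency threshold.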
2.
Lasers Surg Med ; 55(2): 226-232, 2023 02.
Article in English | MEDLINE | ID: mdl-36573443

ABSTRACT

OBJECTIVES: Nerve-sparing techniques during radical prostatectomy have been associated with an increased risk of positive surgical margins. The intra-operative detection of residual prostatic tissue could help mitigate this risk. The objectives of the present study were to assess the feasibility of using an anti-prostate-specific membrane antigen (anti-PSMA) antibody conjugated with a fluorophore to characterize fresh prostate tissue as prostatic or non-prostatic for intra-operative surgical margin detection. METHODS: Fresh prostatic tissue samples were collected from transurethral resections of the prostate (TURP) or prostate biopsies, and either immunolabelled with anti-PSMA antibody conjugated with Alexa Fluor 488 or used as controls. A dedicated, laparoscopy-compliant fluorescence device was developed for real-time fluorescence detection. Confocal microscopy was used as the gold standard for comparison. Spectral unmixing was used to distinguish specific, Alexa Fluor 488 fluorescence from nonspecific autofluorescence. RESULTS: The average peak wavelength of the immuno-labeled TURP samples (n = 4) was 541.7 ± 0.9 nm and of the control samples (n = 4) was 540.8 ± 2.2 nm. Spectral unmixing revealed that these similar measures were explained by significant autofluorescence, linked to electrocautery. Three biopsy samples were then obtained from seven patients and also displayed significant nonspecific fluorescence, raising questions regarding the reproducibility of the fixation of the anti-PSMA antibodies on the samples. Comparing the fluorescence results with final pathology proved challenging due to the small sample size and tissue alterations. CONCLUSIONS: This study showed similar fluorescence of immuno-labeled prostate tissue samples and controls, failing to demonstrate the feasibility of intra-operative margin detection using PSMA immuno-labeling, due to marked tissue autofluorescence. 
We successfully developed a fluorescence device that could be used intraoperatively in a laparoscopic setting. The use of the infrared range, as well as newly available antibodies, could be an interesting option for future research.


Subjects
Margins of Excision , Prostatic Neoplasms , Male , Humans , Prostatic Neoplasms/surgery , Prostatic Neoplasms/pathology , Reproducibility of Results , Prostatectomy/methods
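The spectral unmixing described above amounts to a linear least-squares decomposition of a measured spectrum into known component spectra. A minimal two-component sketch, solving the 2x2 normal equations directly; the spectra and coefficients below are toy values, not real Alexa Fluor 488 or autofluorescence profiles:

```python
def unmix(measured, comp1, comp2):
    """Linear spectral unmixing of a measured spectrum into two known
    component spectra, via ordinary least squares on the 2x2 normal
    equations. Returns the abundance of each component."""
    s11 = sum(a * a for a in comp1)
    s12 = sum(a * b for a, b in zip(comp1, comp2))
    s22 = sum(b * b for b in comp2)
    r1 = sum(a * m for a, m in zip(comp1, measured))
    r2 = sum(b * m for b, m in zip(comp2, measured))
    det = s11 * s22 - s12 * s12
    return (s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det

specific = [0.1, 0.9, 0.4, 0.1]   # peaked "fluorophore" spectrum (toy)
autofl   = [0.5, 0.5, 0.5, 0.5]   # flat "autofluorescence" spectrum (toy)
measured = [v * 2.0 + w * 3.0 for v, w in zip(specific, autofl)]
a, b = unmix(measured, specific, autofl)
```

A large recovered autofluorescence abundance relative to the specific component is exactly the situation the study reports.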
3.
Surg Endosc ; 35(5): 2403-2415, 2021 05.
Article in English | MEDLINE | ID: mdl-33650002

ABSTRACT

BACKGROUND: For many abdominal surgical interventions, laparotomy has gradually been replaced by laparoscopy, with numerous benefits for the patient in terms of post-operative recovery. However, during laparoscopy, the endoscope only provides a single viewpoint to the surgeon, leaving numerous blind spots and opening the way to peri-operative adverse events. Alternative camera systems have been proposed, but many lack the requisite resolution/robustness for use during surgery or cannot provide real-time images. Here, we present the added value of the Enhanced Laparoscopic Vision System (ELViS), which overcomes these limitations and provides a broad view of the surgical field in addition to the usual high-resolution endoscope. METHODS: Experienced laparoscopy surgeons performed several typical procedure steps on a live pig model. The time-to-completion for surgical exercises performed under conventional endoscopy and ELViS-assisted surgery was measured. A debriefing interview following each operating session was conducted by an ergonomist, and a System Usability Scale (SUS) score was determined. RESULTS: Proof of concept of the ELViS was achieved in an animal model with seven expert surgeons, without peri-operative adverse events related to the surgical device. No differences were found in time-to-completion. Mean SUS score was 74.7, classifying the usability of the ELViS as "good". During the debriefing interview, surgeons highlighted several situations where the ELViS provided a real advantage (such as during instrument insertion, exploration of the abdominal cavity or for orientation during close work) and also suggested avenues for improvement of the system. CONCLUSIONS: This first test of the ELViS prototype on a live animal model demonstrated its usability and provided promising and useful feedback for further development.


Subjects
Laparoscopy/instrumentation , Animals , Endoscopes , Equipment Design , Laparoscopy/methods , Proof of Concept Study , Surgeons , Swine
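The SUS score reported above follows the standard scoring rule for the ten-item questionnaire (a mean of 74.7 falls in the "good" usability band). A sketch of the computation, with an invented response sheet rather than data from the study:

```python
def sus_score(ratings):
    """Standard System Usability Scale score from ten 1-5 item ratings.
    Odd-numbered items are positively worded (item score = rating - 1),
    even-numbered items negatively worded (item score = 5 - rating);
    the sum of item scores is multiplied by 2.5 to give a 0-100 scale."""
    assert len(ratings) == 10
    total = 0
    for i, r in enumerate(ratings, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Illustrative response sheet, not a respondent from the study.
score = sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 2])
```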
4.
J Biomed Inform ; 67: 34-41, 2017 03.
Article in English | MEDLINE | ID: mdl-28179119

ABSTRACT

OBJECTIVE: Each surgical procedure is unique due to the patient's and the surgeon's particularities. In this study, we propose a new approach to distinguish surgical behaviors between surgical sites, levels of expertise and individual surgeons using a pattern discovery method. METHODS: The developed approach distinguishes surgical behaviors based on the longest frequent sequential patterns shared between surgical process models. To allow clustering, we propose a new metric called SLFSP. The approach was validated by comparison with a clustering method using Dynamic Time Warping as a metric to characterize the similarity between surgical process models. RESULTS: Our method outperformed the existing approach. It made a perfect distinction between surgical sites (accuracy of 100%) and reached accuracies above 90% and 85% for distinguishing levels of expertise and individual surgeons, respectively. CONCLUSION: Clustering based on shared longest frequent sequential patterns outperformed the previous study based on time analysis. SIGNIFICANCE: The proposed method shows the feasibility of comparing surgical process models not only by their duration but also by their structure of activities. Furthermore, patterns may reveal risky behaviors, which could be valuable information for surgical training to prevent adverse events.


Subjects
Clinical Competence , Cluster Analysis , General Surgery/education , General Surgery/methods , Surgical Procedures, Operative , Humans , Models, Anatomic , Risk , Time Factors
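As a rough illustration of sequence-based similarity between surgical process models, one can normalize the longest common subsequence of two activity sequences. This is only a crude stand-in for the paper's SLFSP metric (which mines longest *frequent* sequential patterns across a whole corpus), and the activity names are invented:

```python
from functools import lru_cache

def lcs_len(a, b):
    """Length of the longest common subsequence of two activity tuples."""
    @lru_cache(maxsize=None)
    def rec(i, j):
        if i == len(a) or j == len(b):
            return 0
        if a[i] == b[j]:
            return 1 + rec(i + 1, j + 1)
        return max(rec(i + 1, j), rec(i, j + 1))
    return rec(0, 0)

def similarity(a, b):
    """Shared-pattern similarity in [0, 1]: LCS length over the length
    of the longer sequence. Hypothetical surrogate for SLFSP."""
    return lcs_len(tuple(a), tuple(b)) / max(len(a), len(b))

p1 = ["incise", "dissect", "coagulate", "dissect", "suture"]
p2 = ["incise", "dissect", "dissect", "coagulate", "suture"]
```

Such a pairwise similarity matrix is what a clustering step (e.g., over sites or expertise levels) would consume.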
5.
J Urol ; 196(1): 244-50, 2016 07.
Article in English | MEDLINE | ID: mdl-26820551

ABSTRACT

PURPOSE: To guide the surgeon during laparoscopic or robot-assisted radical prostatectomy, an innovative laparoscopic/ultrasound fusion platform was developed using a motorized 3-dimensional transurethral ultrasound probe. We present what is, to our knowledge, the first preclinical evaluation of 3-dimensional prostate visualization using transurethral ultrasound and the preliminary results of this new augmented reality. MATERIALS AND METHODS: The transurethral probe and laparoscopic/ultrasound registration were tested on realistic prostate phantoms made of standard polyvinyl chloride. The quality of transurethral ultrasound images and the detection of passive markers placed on the prostate surface were evaluated on 2-dimensional dynamic views and 3-dimensional reconstructions. The feasibility, precision and reproducibility of laparoscopic/transurethral ultrasound registration were then determined using 4, 5, 6 and 7 markers to assess the optimal number needed. The root mean square error was calculated for each registration, and the median root mean square error and IQR were calculated according to the number of markers. RESULTS: The transurethral ultrasound probe was easy to manipulate and the prostatic capsule was well visualized in 2 and 3 dimensions. Passive markers could be precisely localized in the volume. Laparoscopic/transurethral ultrasound registration procedures were performed on 74 phantoms of various sizes and shapes. All were successful. The median root mean square error of 1.1 mm (IQR 0.8-1.4) was significantly associated with the number of landmarks (p = 0.001). The highest accuracy was achieved using 6 markers. However, prostate volume did not affect registration precision. CONCLUSIONS: Transurethral ultrasound provided high quality prostate reconstruction and easy marker detection. Laparoscopic/ultrasound registration was successful with acceptable millimeter-level precision. Further investigations are necessary to achieve sub-millimeter accuracy and to assess feasibility in a human model.


Subjects
Laparoscopy/methods , Prostate/diagnostic imaging , Prostate/surgery , Prostatectomy/methods , Surgery, Computer-Assisted/methods , Ultrasonography, Interventional/methods , Feasibility Studies , Humans , Imaging, Three-Dimensional , Male , Models, Anatomic
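The root mean square error scored above for each registration is simply the RMS distance between paired marker positions after the two modalities are aligned. A minimal sketch with toy marker coordinates (not the phantom data from the study):

```python
from math import sqrt, dist

def rms_error(points_a, points_b):
    """Root mean square distance between paired 3D marker positions,
    the per-registration accuracy measure used above."""
    assert len(points_a) == len(points_b)
    return sqrt(sum(dist(p, q) ** 2 for p, q in zip(points_a, points_b))
                / len(points_a))

# Toy marker pairs in mm: each registered marker lands 1 mm off.
us_markers  = [(0, 0, 0), (10, 0, 0), (0, 10, 0)]
cam_markers = [(1, 0, 0), (10, 1, 0), (0, 10, 1)]
```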
6.
Med Phys ; 51(6): 4056-4068, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38687086

ABSTRACT

BACKGROUND: Accurate tomographic reconstructions require knowledge of the actual acquisition geometry. Many mobile C-arm CT scanners have poorly reproducible acquisition geometries and thus need acquisition-specific calibration procedures. Most geometric self-calibration methods based on projection data either need prior information or are limited to estimating a small number of geometric calibration parameters. Other self-calibration methods generally use a calibration pattern with known geometry and are hardly implementable in practice for clinical applications. PURPOSE: We present a three-step marker-based self-calibration method which does not require prior knowledge of the calibration pattern and thus enables the use of calibration patterns with arbitrary marker positions. METHODS: The first step detects the set of markers of the calibration pattern in each projection of the CT scan using the YOLO (You Only Look Once) convolutional neural network. The projected marker trajectories are then estimated by a sequential, projection-wise marker association scheme based on the Linear Assignment Problem, which uses Kalman filters to predict the markers' 2D positions in the projections. The acquisition geometry is finally estimated from the marker trajectories using the bundle-adjustment algorithm. RESULTS: The calibration method was tested on realistic simulated images of the ICRP (International Commission on Radiological Protection) phantom, using calibration patterns with 10 and 20 markers. The backprojection error was used to evaluate the self-calibration method and exhibited sub-millimeter errors. Real images of two human knees with 10- and 30-marker calibration patterns were then used for a qualitative evaluation of the method, which showed a remarkable reduction in artifacts and improved visibility of bone structures. CONCLUSIONS: The proposed calibration method gave promising results that pave the way to patient-specific geometric self-calibration in clinical practice.


Subjects
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Calibration , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Humans
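The marker-association step above predicts each marker's 2D position with a Kalman filter before solving the assignment. A heavily simplified, mean-only sketch of a constant-velocity predict/update cycle; the covariance propagation is omitted and a fixed scalar gain stands in for the full Kalman gain:

```python
def predict_marker(state, dt=1.0):
    """Predict step of a constant-velocity motion model for one marker.
    state = (x, y, vx, vy) in detector pixels; only the mean is
    propagated here (covariances omitted for brevity)."""
    x, y, vx, vy = state
    return (x + vx * dt, y + vy * dt, vx, vy)

def update_marker(pred, meas, gain=0.5):
    """Simplified measurement update: pull the predicted position
    toward the detected position with a fixed scalar gain, a
    hypothetical stand-in for the full Kalman gain."""
    x, y, vx, vy = pred
    mx, my = meas
    return (x + gain * (mx - x), y + gain * (my - y), vx, vy)

state = (100.0, 50.0, 2.0, -1.0)       # marker drifting right and up
state = predict_marker(state)           # predicted position for this projection
state = update_marker(state, (103.0, 49.5))  # corrected by the YOLO detection
```

In the full pipeline, these predictions feed the cost matrix of the Linear Assignment Problem that matches detections to tracks.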
7.
IEEE Trans Biomed Eng ; 70(8): 2338-2349, 2023 08.
Article in English | MEDLINE | ID: mdl-37022829

ABSTRACT

OBJECTIVE: The accuracy of biopsy targeting is a major issue for prostate cancer diagnosis and therapy. However, navigation to biopsy targets remains challenging due to the limitations of transrectal ultrasound (TRUS) guidance, compounded by prostate motion. This article describes a rigid 2D/3D deep registration method which provides continuous tracking of the biopsy location with respect to the prostate for enhanced navigation. METHODS: A spatiotemporal registration network (SpT-Net) is proposed to localize the live 2D US image relative to a previously acquired US reference volume. The temporal context relies on prior trajectory information based on previous registration results and probe tracking. Different forms of spatial context were compared through inputs (local, partial or global) or using an additional spatial penalty term. The proposed 3D CNN architecture was evaluated in an ablation study with all combinations of spatial and temporal context. For a realistic clinical validation, a cumulative error was computed through series of registrations along trajectories, simulating a complete clinical navigation procedure. We also propose two dataset generation processes with increasing levels of registration complexity and clinical realism. RESULTS: The experiments show that a model using local spatial information combined with temporal information performs better than more complex spatiotemporal combinations. CONCLUSION: The best proposed model demonstrates robust real-time 2D/3D US cumulative registration performance on trajectories. These results meet clinical requirements and application feasibility, and outperform similar state-of-the-art methods. SIGNIFICANCE: Our approach seems promising for clinical prostate biopsy navigation assistance and other US image-guided procedures.


Subjects
Prostate , Prostatic Neoplasms , Male , Humans , Prostate/diagnostic imaging , Prostate/pathology , Imaging, Three-Dimensional/methods , Biopsy , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Ultrasonography/methods
8.
Eur Urol Oncol ; 2023 Aug 19.
Article in English | MEDLINE | ID: mdl-37599199

ABSTRACT

BACKGROUND: Segmentation of three-dimensional (3D) transrectal ultrasound (TRUS) images is known to be challenging, and the clinician often lacks a reliable and easy-to-use indicator to assess its accuracy during the fusion magnetic resonance imaging (MRI)-targeted prostate biopsy procedure. OBJECTIVE: To assess the effect of the relative volume difference between 3D-TRUS and MRI segmentation on the outcome of a targeted biopsy. DESIGN, SETTING, AND PARTICIPANTS: All adult males who underwent an MRI-targeted prostate biopsy for clinically suspected prostate cancer between February 2012 and July 2021 were consecutively included. INTERVENTION: All patients underwent a fusion MRI-targeted prostate biopsy with a Koelis device. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: Three-dimensional TRUS and MRI prostate volumes were calculated using 3D prostate models derived from the segmentations. The primary outcome was the relative segmentation volume difference (SVD) between transrectal ultrasound and MRI, defined as the difference divided by the MRI volume (SVD = [MRI volume - TRUS volume]/MRI volume), and its correlation with clinically significant prostate cancer (i.e., International Society of Urological Pathology [ISUP] grade ≥2) positivity on targeted biopsy cores. RESULTS AND LIMITATIONS: Overall, 1721 patients underwent a targeted biopsy, resulting in a total of 5593 targeted cores. The median relative SVD was significantly lower in patients diagnosed with clinically significant prostate cancer than in those with ISUP 0-1 (6.7% [interquartile range {IQR} -2.7, 13.6] vs 8.0% [IQR 3.3, 16.4]; p < 0.01). A multivariate regression analysis showed that a relative SVD of >10% of the MRI volume was associated with a lower detection rate of clinically significant prostate cancer (odds ratio = 0.74 [95% confidence interval: 0.55-0.98]; p = 0.038). CONCLUSIONS: A relative SVD of >10% of the MRI segmented volume was associated with a lower detection rate of clinically significant prostate cancer on targeted biopsy cores. The relative SVD can be used as a per-procedure quality indicator of 3D-TRUS segmentation. PATIENT SUMMARY: A discrepancy of ≥10% between the segmented magnetic resonance imaging and transrectal ultrasound volumes is associated with a reduced ability to detect significant prostate cancer on targeted biopsy cores.
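The relative SVD indicator defined above is a one-line formula; a sketch in code, with invented volumes:

```python
def relative_svd(mri_volume, trus_volume):
    """Relative segmentation volume difference as defined in the study:
    (MRI volume - TRUS volume) / MRI volume, returned here in percent."""
    return 100.0 * (mri_volume - trus_volume) / mri_volume

# A 50 mL MRI segmentation matched to a 44 mL 3D-TRUS segmentation
# gives a 12% discrepancy, above the 10% quality threshold reported.
svd = relative_svd(50.0, 44.0)
```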

9.
J Imaging ; 8(3)2022 Feb 23.
Article in English | MEDLINE | ID: mdl-35324607

ABSTRACT

Multi-camera systems were recently introduced into laparoscopy to increase the narrow field of view of the surgeon. The video streams are stitched together to create a panorama that is easier for the surgeon to comprehend. Multi-camera prototypes for laparoscopy use quite basic algorithms and have only been evaluated on simple laparoscopic scenarios. The more recent state-of-the-art algorithms, mainly designed for the smartphone industry, have not yet been evaluated in laparoscopic conditions. We developed a simulated environment to generate a dataset of multi-view images displaying a wide range of laparoscopic situations, which is adaptable to any multi-camera system. We evaluated classical and state-of-the-art image stitching techniques used in non-medical applications on this dataset, including one unsupervised deep learning approach. We show that classical techniques that use global homography fail to provide a clinically satisfactory rendering and that even the most recent techniques, despite providing high quality panorama images in non-medical situations, may suffer from poor alignment or severe distortions in simulated laparoscopic scenarios. We highlight the main advantages and flaws of each algorithm within a laparoscopic context, identify the main remaining challenges that are specific to laparoscopy, and propose methods to improve these approaches. We provide public access to the simulated environment and dataset.
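The classical global-homography stitching evaluated above warps one camera's pixels into another camera's frame with a single 3x3 matrix; its core operation is mapping a point through the homography in homogeneous coordinates. A sketch with a toy translation homography (the matrix values are invented):

```python
def apply_homography(H, pt):
    """Map a 2D point through a 3x3 homography in homogeneous
    coordinates -- the per-pixel operation behind global-homography
    panorama stitching."""
    x, y = pt
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)

# A pure-translation homography shifting camera B's pixels by 320 px
# horizontally before blending them into camera A's panorama.
H_shift = [[1, 0, 320],
           [0, 1, 0],
           [0, 0, 1]]
```

A single global homography is only exact for planar scenes or pure camera rotation, which is precisely why it breaks down on the non-planar, close-range anatomy of laparoscopic scenes.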

10.
Med Phys ; 48(3): 1144-1156, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33511658

ABSTRACT

PURPOSE: New radiation therapy protocols, in particular adaptive, focal or boost brachytherapy treatments, require precisely determining the position and orientation of the implanted radioactive seeds from real-time ultrasound (US) images. This is necessary to compare them to the planned ones and to automatically adjust the dosimetric plan accordingly for the next seed implantations. The image modality, the small size of the seeds, and the artifacts they produce make this a very challenging problem. The objective of the presented work is to set up and evaluate a robust and automatic method for seed localization in three-dimensional (3D) US images. METHODS: The presented method is based on a prelocalization of the needles through which the seeds are injected into the prostate. This prelocalization allows focusing the search on a region of interest (ROI) around the needle tip. Seed localization starts by binarizing the ROI and removing false positives using, respectively, a Bayesian classifier and a support vector machine (SVM). This is followed by a registration stage using first an iterative closest point (ICP) algorithm to localize the connected set of seeds (named a strand) inserted through a needle, and second a refinement of each seed position using the sum of squared differences (SSD) as a similarity criterion. ICP registers a geometric model of the strand to the candidate voxels, while SSD compares an appearance model of a single seed to a subset of the image. The method was evaluated both on 3D images of an agar-agar phantom and on a dataset of clinical 3D images. It was tested on stranded and on loose seeds. RESULTS: Results on phantom and clinical images were compared with a manual localization, giving mean errors of 1.09 ± 0.61 mm on the phantom image and 1.44 ± 0.45 mm on clinical images. On clinical images, the mean error of individual seed orientation was 4.33 ± 8.51°. CONCLUSIONS: The proposed algorithm for radioactive seed localization is robust and accurate, tested on different US images with small mean errors, and returns the five degrees of freedom of the cylindrical seeds.


Subjects
Brachytherapy , Machine Learning , Prostatic Neoplasms , Bayes Theorem , Humans , Male , Phantoms, Imaging , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/radiotherapy
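The ICP stage above alternates closest-point matching with a transform update until the strand model settles onto the candidate voxels. A deliberately reduced, translation-only sketch (the paper's ICP also estimates rotation), with an invented three-seed strand:

```python
from math import dist

def icp_translation(model, cloud, iters=5):
    """Minimal ICP variant: match each model point to its closest
    candidate voxel, then update a pure-translation estimate from the
    mean match offset. Rotation handling is omitted for brevity."""
    tx, ty, tz = 0.0, 0.0, 0.0
    for _ in range(iters):
        moved = [(x + tx, y + ty, z + tz) for x, y, z in model]
        matches = [min(cloud, key=lambda c, p=p: dist(c, p)) for p in moved]
        dx = sum(m[0] - p[0] for m, p in zip(matches, moved)) / len(model)
        dy = sum(m[1] - p[1] for m, p in zip(matches, moved)) / len(model)
        dz = sum(m[2] - p[2] for m, p in zip(matches, moved)) / len(model)
        tx, ty, tz = tx + dx, ty + dy, tz + dz
    return tx, ty, tz

# Strand model of 3 seeds spaced 5 mm apart, observed shifted by (2, 1, 0).
strand = [(0, 0, 0), (0, 0, 5), (0, 0, 10)]
voxels = [(2, 1, 0), (2, 1, 5), (2, 1, 10)]
```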
11.
Int J Comput Assist Radiol Surg ; 16(11): 2009-2019, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34143373

ABSTRACT

PURPOSE: Surgical Data Science (SDS) is an emerging research domain offering data-driven answers to challenges encountered by clinicians during training and practice. We previously developed a framework to assess quality of practice based on two aspects: exposure of the surgical scene (ESS) and the surgeon's profile of practice (SPP). Here, we wished to investigate the clinical relevance of the parameters learned by this model by (1) interpreting these parameters and identifying associated representative video samples and (2) presenting this information to surgeons in the form of a video-enhanced questionnaire. To our knowledge, this is the first approach in the field of SDS for laparoscopy linking the choices made by a machine learning model predicting surgical quality to clinical expertise. METHOD: Spatial features and quality of practice scores extracted from labeled and segmented frames in 30 laparoscopic videos were used to predict the ESS and the SPP. The relationships between the inputs and outputs of the model were then analyzed and translated into meaningful sentences (statements, e.g., "To optimize the ESS, it is very important to correctly handle the spleen"). Representative video clips illustrating these statements were semi-automatically identified. Eleven statements and video clips were used in a survey presented to six experienced digestive surgeons to gather their opinions on the algorithmic analyses. RESULTS: All but one of the surgeons agreed with the proposed questionnaire overall. On average, surgeons agreed with 7/11 statements. CONCLUSION: This proof-of-concept study provides preliminary validation of our model which has a high potential for use to analyze and understand surgical practices.


Subjects
Laparoscopy , Surgeons , Clinical Competence , Humans , Video Recording
12.
Int J Comput Assist Radiol Surg ; 15(7): 1195-1203, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32436131

ABSTRACT

PURPOSE: Percutaneous procedures are increasingly used for the treatment of tumors in abdominal structures. Most of the time, these procedures are planned on static preoperative images and do not take any motion into account, while breathing control is not always applicable. In this paper, we present a method to automatically adjust the planned path in real time according to breathing. METHODS: First, the organ motions during breathing are estimated during an observation phase. We then propose an approach named Real Time Intelligent Trajectory (RTIT), which consists of finding the most appropriate moments to push the needle along the initially planned path, based on the motions and the distance to surrounding organs. We also propose a second approach, called Real Time Straight Trajectory (RTST), that examines sixteen scenarios of needle insertion at constant speed, starting at eight different moments of the breathing cycle with two different speeds. RESULTS: We evaluated our methods on six 3D models of abdominal structures built from image datasets and a real-time simulation of breathing movements. We measured the deviation from the initial path, the target positioning error, and the distance of the actual path to risky structures. The path proposed by the RTIT approach was compared to the best path proposed by RTST. CONCLUSIONS: We show that the RTIT approach is relevant and adapted to breathing movements. The modification of the path remains minimal while collisions with obstacles are avoided. This simulation study constitutes a first step towards intelligent robotic insertion under real-time image guidance.


Subjects
Abdomen/surgery , Organ Motion , Respiration , Humans , Models, Anatomic
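The RTIT idea of advancing the needle only at favorable moments of the breathing cycle can be illustrated with a toy motion model. The sinusoid, period and threshold below are invented placeholders for the study's 3D abdominal models and distance-to-risk criteria:

```python
from math import cos, pi

def organ_displacement(t, period=4.0):
    """Toy breathing motion: 0 at end-exhale (the planning reference),
    1 at peak inhale. Stand-in for the observed organ trajectory."""
    return 0.5 * (1.0 - cos(2.0 * pi * t / period))

def can_push(t, threshold=0.3, period=4.0):
    """Allow a needle advance only while the organ is close to its
    planned position, mimicking RTIT's choice of safe moments."""
    return organ_displacement(t, period) < threshold

# Sample one breathing cycle at 0.1 s and list the safe instants.
safe = [round(k * 0.1, 1) for k in range(40) if can_push(k * 0.1)]
```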
13.
Artif Intell Med ; 104: 101837, 2020 04.
Article in English | MEDLINE | ID: mdl-32499005

ABSTRACT

OBJECTIVE: According to a meta-analysis of 7 studies, a median of 14.4% of patients experience at least one adverse event during surgery, and a third of those adverse events were preventable. The occurrence of adverse events forces surgeons to implement corrective strategies and thus deviate from the standard surgical process. The automatic identification of adverse events is therefore a major challenge for patient safety. In this paper, we propose a method to identify such deviations. We focus on identifying surgeons' deviations from standard surgical processes due to surgical events rather than anatomic specificities. This is particularly challenging, given the high variability in typical surgical procedure workflows. METHODS: We introduce a new approach designed to automatically detect and distinguish surgical process deviations based on multi-dimensional non-linear temporal scaling with a hidden semi-Markov model, using manual annotation of surgical processes. The approach was then evaluated using cross-validation. RESULTS: The best results have over 90% accuracy. Recall and precision for event deviations, i.e., those related to adverse events, are below 80% and 40%, respectively. To understand these results, we provide a detailed analysis of the incorrectly detected observations. CONCLUSION: Multi-dimensional non-linear temporal scaling with a hidden semi-Markov model provides promising results for detecting deviations. Our error analysis of the incorrectly detected observations offers several leads to further improve our method. SIGNIFICANCE: Our method demonstrates the feasibility of automatically detecting surgical deviations, which could be used both for skill analysis and for developing situation-awareness-based computer-assisted surgical systems.


Subjects
Laparoscopy , Surgeons , Computer Systems , Humans , Workflow
14.
Int J Comput Assist Radiol Surg ; 15(1): 59-67, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31673963

ABSTRACT

PURPOSE: Evaluating the quality of surgical procedures is a major concern in minimally invasive surgery. We propose a bottom-up approach based on the study of sleeve gastrectomy procedures, for which we analyze what we assume to be an important indicator of surgical expertise: the exposure of the surgical scene. We first aim to predict this indicator with features extracted from the laparoscopic video feed, and second to analyze how the extracted features describing the surgical practice influence this indicator. METHOD: Twenty-nine patients underwent sleeve gastrectomy performed by two confirmed surgeons in a monocentric study. Features were extracted from spatial and procedural annotations of the videos, and an expert surgeon evaluated the quality of the surgical exposure at specific instants. The features were used as input to a classifier (linear discriminant analysis followed by a support vector machine) to predict the expertise indicator. Features selected in different configurations of the algorithm were compared to understand their relationships with the surgical exposure and the surgeon's practice. RESULTS: The optimized algorithm giving the best performance used spatial features as input ([Formula: see text]). It also predicted the two classes of the indicator equally well, despite their strong imbalance. Analyzing the selection of input features allowed a comparison of different configurations of the algorithm and showed a link between the surgical exposure and the surgeon's practice. CONCLUSION: This preliminary study validates that the surgical exposure can be predicted from spatial features. The analysis of the clusters of features selected by the algorithm also shows encouraging results and potential clinical interpretations.


Subjects
Algorithms , Gastrectomy/methods , Laparoscopy/methods , Support Vector Machine/standards , Video Recording/methods , Humans
15.
Orthop Traumatol Surg Res ; 106(6): 1153-1157, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32917579

ABSTRACT

INTRODUCTION: Certain structures and pathologies can be difficult to reveal under videoscopy alone during arthroscopic surgery. Ultrasound can be a useful contribution in arthroscopic diagnostic and therapeutic procedures. The main aim of the present study was to assess equivalence between endoscopic and external ultrasound for shoulder exploration. Secondary objectives comprised qualitative assessment of endoscopic ultrasound images and comparative assessment of acquisition time between the two techniques. MATERIAL AND METHODS: An anatomic non-inferiority study was conducted on 6 shoulders from 3 subjects with a mean age of 84 years. After ultrasound examination by a radiologist specializing in osteoarticular imaging, shoulder arthroscopy was performed by a single specialized surgeon, using an ultrasound endoscope. Number of visualized structures and image quality were assessed by independent observers. RESULTS: Ten of the 11 structures of interest (91%) were visualizable on endoscopic ultrasound, versus 4 (36%) on external ultrasound (p<0.05). Mean endoscopic acquisition time was 9.5 ± 6.3 minutes [range, 5-22]. In the 11 structures, image quality was better on endoscopic than external ultrasound, except for the acromioclavicular joint, where quality was better on external ultrasound, and the lateral side of the rotator cuff, where quality was equivalent. CONCLUSION: The present study demonstrated equivalence between endoscopic and external ultrasound for shoulder exploration. LEVEL OF EVIDENCE: IV, non-inferiority cadaver study.


Subjects
Rotator Cuff Injuries , Shoulder Joint , Aged, 80 and over , Arthroscopy , Humans , Rotator Cuff , Rotator Cuff Injuries/diagnostic imaging , Rotator Cuff Injuries/surgery , Shoulder , Shoulder Joint/diagnostic imaging , Shoulder Joint/surgery , Treatment Outcome
16.
Comput Assist Surg (Abingdon) ; 24(sup1): 20-29, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30760050

ABSTRACT

Real-time tool tracking in minimally invasive surgery (MIS) has numerous applications for computer-assisted interventions (CAIs). Visual tracking approaches are a promising solution to real-time surgical tool tracking; however, many approaches fail when the tracker suffers from motion blur, adverse lighting, specular reflections, shadows, or occlusions. We propose an automatic real-time method for two-dimensional tool detection and tracking based on a spatial transformer network (STN) and spatio-temporal context (STC). Our method exploits the ability of a convolutional neural network (CNN) with an in-house trained STN, combined with STC, to locate the tool accurately at high speed. We then compared our method experimentally with four other general visual tracking methods for CAIs on eight online and in-house datasets, covering in vivo abdominal, cardiac, and retinal clinical cases in which different surgical instruments were employed. The experiments demonstrate that our method achieves strong performance in both accuracy and speed. It can track a surgical tool without labels in real time in the most challenging cases, with an accuracy that matches and sometimes surpasses most state-of-the-art tracking algorithms. Further improvements to our method will focus on occlusion and multi-instrument conditions.
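The CNN-plus-STC idea above can be sketched in miniature: a detector produces a per-frame confidence map, and a temporal-context prior centred on the previous tool position re-weights it before the peak is taken. This is a hypothetical toy illustration, not the paper's code; the Gaussian prior, its width, and the toy confidence map are all assumptions.

```python
# Toy sketch of tracking-by-detection with a spatio-temporal context prior.
# NOT the paper's implementation: prior shape and parameters are invented.
import numpy as np

def temporal_prior(shape, prev_xy, sigma=5.0):
    """Gaussian prior around the previous position (the temporal 'context')."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (xs - prev_xy[0]) ** 2 + (ys - prev_xy[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def track_step(conf_map, prev_xy, sigma=5.0):
    """Re-weight the detector's confidence map by the temporal prior and
    return the most likely tool position (x, y) in this frame."""
    fused = conf_map * temporal_prior(conf_map.shape, prev_xy, sigma)
    y, x = np.unravel_index(np.argmax(fused), fused.shape)
    return (int(x), int(y))

# Toy frame: two detector peaks; the prior resolves the ambiguity in
# favour of the peak consistent with the previous position.
conf = np.zeros((40, 40))
conf[10, 10] = 0.9   # spurious response (e.g. a specular reflection)
conf[22, 21] = 0.8   # true tool tip, close to the previous position
print(track_step(conf, prev_xy=(20, 20)))  # -> (21, 22)
```

The design point the abstract makes is exactly this fusion: the detector alone would pick the stronger spurious peak, while the temporal context suppresses responses far from the recent trajectory.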


Subjects
Deep Learning, Minimally Invasive Surgical Procedures/instrumentation, Spatio-Temporal Analysis, Surgical Instruments, Algorithms, Humans, Neural Networks, Computer
17.
Int J Comput Assist Radiol Surg ; 13(1): 95-103, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28825199

ABSTRACT

PURPOSE: Evaluation of surgical technical abilities is a major issue in minimally invasive surgery. Devices such as training benches offer specific scores to evaluate surgeons but do not transfer to the operating room (OR). Conversely, several scores measure performance in the OR but have not been evaluated on training benches. Our aim was to demonstrate that the GOALS score, which effectively grades laparoscopic abilities in the OR, can be used for evaluation on a laparoscopic test bench (MISTELS). This could lead to training systems that identify more precisely the skills that have been acquired or must still be worked on. METHODS: 32 volunteers (surgeons, residents and medical students) performed the 5 tasks of the MISTELS training bench and were simultaneously video-recorded. Their performance was evaluated with the MISTELS score and with the GOALS score, based on review of the recordings by two experienced, blinded laparoscopic surgeons. The concurrent validity of the GOALS score was assessed using Pearson and Spearman correlation coefficients with the MISTELS score. The construct validity of the GOALS score was assessed with k-means clustering and accuracy rates. Lastly, the abilities explored by each MISTELS task were identified with multiple linear regression. RESULTS: The GOALS and MISTELS scores are strongly correlated (Pearson correlation coefficient = 0.85 and Spearman correlation coefficient = 0.82 for the overall score). The GOALS score shows construct validity for the training-bench tasks, with a higher accuracy rate in separating skill-level groups after k-means clustering than the original MISTELS score (accuracy rates of 0.75 and 0.56, respectively). CONCLUSION: The GOALS score is well suited to evaluating the performance of surgeons of different levels during completion of the MISTELS training-bench tasks.
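The concurrent-validity analysis described above reduces to computing Pearson and Spearman correlations between paired scores; Spearman's coefficient is simply Pearson's applied to ranks. A minimal sketch in plain Python, with entirely made-up GOALS/MISTELS score pairs (the study's data are not reproduced here):

```python
# Pearson and Spearman correlation from scratch; the score lists below are
# illustrative stand-ins, not the study's measurements.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(x):
    """1-based ranks, averaging over ties."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

goals = [10, 14, 18, 22, 25, 12, 20, 23]          # hypothetical GOALS totals
mistels = [210, 300, 380, 460, 520, 260, 410, 480]  # hypothetical MISTELS totals
print(round(pearson(goals, mistels), 2), round(spearman(goals, mistels), 2))
```

A strong correlation on both coefficients, as the study reports (0.85 and 0.82), is what supports using one score in place of the other on the bench.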


Subjects
Clinical Competence, Laparoscopy/education, Surgeons/education, Goals, Humans, Internship and Residency, Operating Rooms, Students, Medical
18.
J Endourol ; 21(8): 911-4, 2007 Aug.
Article in English | MEDLINE | ID: mdl-17867952

ABSTRACT

PURPOSE: We conducted experiments with an innovatively designed, small, low-cost robotic endoscope holder for laparoscopic surgery. MATERIALS AND METHODS: A compact light endoscope robot (LER) that is placed on the patient's skin and can be used with the patient in the lateral or dorsal supine position was tested on cadavers and laboratory pigs to allow successive modifications. The current control system is based on voice recognition. The range of vision is 360 degrees with an angle of 160 degrees. Twenty-three procedures were performed. RESULTS: The tests made it possible to advance the prototype in a variety of respects, including reliability, steadiness, ergonomics, and dimensions. The ease of installation of the robot, which takes only 5 minutes, and its easy handling made it possible for 21 of the 23 procedures to be performed without an assistant. CONCLUSION: The LER is a camera holder guided by the surgeon's voice that can eliminate the need for an assistant during laparoscopic surgery. Its ease of installation and manufacture should make it an effective and inexpensive system for use on patients in the lateral and dorsal supine positions. Randomized clinical trials will soon validate a new version of this robot prior to marketing.


Subjects
Equipment Design, Laparoscopes, Robotics, Urologic Surgical Procedures/instrumentation, User-Computer Interface, Animals, Cadaver, Humans, Miniaturization, Speech Recognition Software, Swine
19.
Comput Assist Surg (Abingdon) ; 22(sup1): 26-35, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28937281

ABSTRACT

BACKGROUND: Worldwide adoption of minimally invasive surgery (MIS) is hindered by its drawback of indirect observation and manipulation, and the monitoring of surgical instruments moving inside the operated body, which surgeons require, remains a challenging problem. Vision-based tracking of surgical instruments is attractive because of its flexible, software-based implementation, with no need to modify the instruments or the surgical workflow. METHODS: An MIS instrument is conventionally split into a shaft portion and an end-effector portion, and we propose a 2D/3D tracking-by-detection framework that tracks the shaft first, followed by the end-effector. The shaft is described by line features extracted via a RANSAC scheme, while the end-effector is described by image features learned by a well-trained convolutional neural network. RESULTS: The method is verified in its 2D and 3D formulations through experiments on ex vivo video sequences, and qualitative validation is obtained on in vivo video sequences. CONCLUSION: The proposed method provides robust and accurate tracking, as confirmed by the experimental results: its 3D performance on ex vivo video sequences exceeds that of the available state-of-the-art methods. Moreover, the experiments on in vivo sequences demonstrate that the proposed method can handle the difficult condition of tracking with unknown camera parameters. Further refinements of the method will address occlusion and multi-instrument MIS applications.
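The shaft-tracking step described above rests on RANSAC line fitting: the instrument shaft appears as the dominant line among candidate points, and the estimator must tolerate outliers from the background. A minimal, self-contained sketch (illustrative only, not the authors' implementation; the point sets, iteration count, and tolerance are invented):

```python
# Minimal RANSAC line fit: repeatedly sample two points, count how many
# points lie within `tol` of the line through them, keep the best line.
import random
from math import hypot

def ransac_line(points, iters=200, tol=1.5, seed=0):
    """Return ((p, q), n_inliers): two points defining the line with the
    most inliers, and the inlier count."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        p, q = rng.sample(points, 2)
        dx, dy = q[0] - p[0], q[1] - p[1]
        norm = hypot(dx, dy)
        if norm == 0:
            continue  # degenerate sample: coincident points
        # Perpendicular distance of each point to the line through p and q.
        inliers = sum(
            1 for (x, y) in points
            if abs(dy * (x - p[0]) - dx * (y - p[1])) / norm <= tol
        )
        if inliers > best_inliers:
            best, best_inliers = (p, q), inliers
    return best, best_inliers

# Synthetic "shaft": collinear points along y = 2x + 1, plus clutter.
shaft = [(x, 2 * x + 1) for x in range(20)]
noise = [(3, 30), (15, 2), (7, 40), (12, -5)]
line, n_in = ransac_line(shaft + noise)
print(n_in)  # the 20 collinear shaft points survive as inliers
```

In the paper's pipeline the recovered line then constrains where the CNN searches for the end-effector, which is what makes the two-stage split efficient.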


Subjects
Deep Learning, Imaging, Three-Dimensional, Minimally Invasive Surgical Procedures/instrumentation, Neural Networks, Computer, Surgical Instruments, Algorithms, Endoscopes, Humans, Laparoscopes, Minimally Invasive Surgical Procedures/methods
20.
Prog Urol ; 16(1): 45-51, 2006 Feb.
Article in French | MEDLINE | ID: mdl-16526539

ABSTRACT

INTRODUCTION: The authors participated in the development of an innovative endoscope robot for laparoscopic surgery designed by TIMC-GMCAO, providing a solution to the disadvantages of currently available systems, i.e. their cost and large dimensions. MATERIAL AND METHODS: A compact robot (LER) placed on the patient's skin, usable in the lateral and dorsal supine positions, was tested on cadavers and laboratory pigs to allow successive modifications. The current control system is based on voice recognition. The amplitude of vision is 360 degrees with an angle of 160 degrees. Twenty-three procedures were performed (2 radical prostatectomies, 4 pelvic lymph node dissections, 6 nephrectomies, 2 adrenalectomies, 3 cholecystectomies, 1 small bowel resection-anastomosis, 1 cystectomy, 1 splenectomy, and 3 appendicectomies). RESULTS: Among the various control systems tested, we adopted voice recognition because of its intuitive nature and the fact that it leaves one hand free. In light of these tests, several aspects of the prototype were modified: reliability, fixation, ergonomics, and dimensions. The ease of installation, which takes only 5 minutes, and the easy handling of the robot allowed 21 of the 23 laparoscopic procedures to be performed without the need for an assistant. CONCLUSION: The LER robot is an endoscope robot guided by the surgeon's voice that can eliminate the need for an assistant to hold the camera during laparoscopic surgery in the lateral and dorsal supine positions. The ease of installation and manufacture should make this an effective and inexpensive system. The gain in operating time was not evaluated during these trials on cadavers and pigs, as various prototypes were tested and several reliability problems were successively resolved. Ongoing randomized, prospective clinical trials should soon validate this robot prior to marketing.
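The voice-driven control loop described in the two LER reports can be caricatured as a dispatcher that maps recognised words to pan/tilt moves within the reported working envelope (360 degrees of pan, 160 degrees of tilt amplitude). Everything below, including the command vocabulary and step sizes, is a hypothetical sketch and not the LER's actual software:

```python
# Hypothetical voice-command dispatcher for an endoscope holder.
# Command names and step sizes are invented for illustration.
PAN_STEP, TILT_STEP = 10, 5  # degrees per recognised command (assumed)

class EndoscopeHolder:
    def __init__(self):
        self.pan = 0    # unlimited rotation, wraps modulo 360
        self.tilt = 0   # clamped to +/- 80 deg (160 deg total amplitude)

    def command(self, word):
        if word == "left":
            self.pan = (self.pan - PAN_STEP) % 360
        elif word == "right":
            self.pan = (self.pan + PAN_STEP) % 360
        elif word == "up":
            self.tilt = min(self.tilt + TILT_STEP, 80)
        elif word == "down":
            self.tilt = max(self.tilt - TILT_STEP, -80)
        else:
            raise ValueError(f"unrecognised command: {word!r}")

holder = EndoscopeHolder()
for w in ["right", "right", "up", "left"]:
    holder.command(w)
print(holder.pan, holder.tilt)  # -> 10 5
```

Clamping the tilt in software mirrors the hands-free safety argument in the abstracts: the holder can never be driven outside its mechanical cone, whatever the recogniser hears.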


Subjects
Laparoscopy, Robotics/instrumentation, Animals, Equipment Design, Humans, Light, Swine