Results 1 - 18 of 18
1.
Cancers (Basel) ; 16(5)2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38473404

ABSTRACT

The aim of "precision surgery" is to reduce the impact of surgery on patients' global health. In this context, in recent years, the use of three-dimensional virtual models (3DVMs) of organs has enabled intraoperative guidance, revealing hidden anatomical targets and thus limiting dissection of healthy tissue and the consequent damage during an operation. To provide automatic 3DVM overlapping in the surgical field, we developed and tested new software, called "iKidney", based on convolutional neural networks (CNNs). From January 2022 to April 2023, patients with organ-confined renal masses amenable to robot-assisted partial nephrectomy (RAPN) were enrolled. A bioengineer, a software developer, and a surgeon collaborated to create hyper-accurate 3D models for automatic 3D augmented reality (AR)-guided RAPN, using CNNs. For each patient, demographic and clinical data were collected. A total of 13 patients were included in the present study. The average anchoring time was 11 (6-13) s. Unintended temporary failures of the automatic 3D-model co-registration occurred in one patient in a static setting and in one patient in a dynamic setting. There was one complete failure; in this single case, an ultrasound drop-in probe was used to detect the neoplasm, and the surgery was performed under ultrasound guidance instead of AR guidance. No major intraoperative or postoperative complications (i.e., Clavien-Dindo > 2) were recorded. The use of AI has opened up several new scenarios in clinical practice, thanks to its ability to perform specific tasks autonomously. We employed CNNs for automatic 3DVM overlapping during RAPN, thus improving the accuracy of the superimposition process.

2.
Technol Cancer Res Treat ; 23: 15330338241229368, 2024.
Article in English | MEDLINE | ID: mdl-38374643

ABSTRACT

OBJECTIVES: The purpose of this research is to develop software that automatically integrates and overlays 3D virtual models of kidneys harboring renal masses into the Da Vinci robotic console, assisting the surgeon during the intervention. INTRODUCTION: Precision medicine, especially in the field of minimally invasive partial nephrectomy, aims to use 3D virtual models as guidance for augmented reality robotic procedures. However, the co-registration of the virtual images over the real operative field is currently performed manually. METHODS: In this prospective study, two strategies for automatic overlapping of the model onto the real kidney were explored: computer vision technology, leveraging the super-enhancement of the kidney produced by intraoperative injection of indocyanine green, and convolutional neural network (CNN) technology, based on processing live images from the endoscope after training the software on frames from prerecorded videos of the same surgery. The team, comprising a bioengineer, a software developer, and a surgeon, collaborated to create hyper-accurate 3D models for automatic 3D-AR-guided robot-assisted partial nephrectomy (RAPN). For each patient, demographic and clinical data were collected. RESULTS: Two groups were defined: group A for the first technology (12 patients) and group B for the second (8 patients). They showed comparable preoperative and postoperative characteristics. The average co-registration time was 7 (3-11) seconds with the first technology and 11 (6-13) seconds with the second. No major intraoperative or postoperative complications were recorded. There were no differences in functional outcomes between the groups at any time point considered. CONCLUSION: The first technology allowed successful anchoring of the 3D model to the kidney, despite requiring minimal manual refinements. The second technology improved automatic kidney detection without relying on indocyanine green injection, resulting in better identification of organ boundaries during tests. Further studies are needed to confirm this preliminary evidence.
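
The first strategy above relies on the kidney being the brightest structure in the fluorescence-enhanced view. As a rough illustration of that idea only (the function names, threshold, and toy frame are hypothetical, not taken from the paper's software), a bright region in a frame can be thresholded into a binary mask whose centroid serves as a 2D anchor point for superimposing the virtual model:

```python
# Illustrative sketch, not the authors' code: reduce an ICG-enhanced frame
# (a 2D grid of fluorescence intensities, 0-255) to an anchor point.

def kidney_mask(frame, threshold=200):
    """Binary mask of pixels whose fluorescence exceeds the threshold."""
    return [[1 if px >= threshold else 0 for px in row] for row in frame]

def mask_centroid(mask):
    """Centroid (row, col) of the mask: a candidate 2D anchor for the 3D model."""
    pts = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    if not pts:
        return None  # no fluorescent region detected in this frame
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)

# Toy 4x4 "frame": the bright 2x2 block stands in for the ICG-enhanced kidney.
frame = [
    [10, 10, 10, 10],
    [10, 250, 240, 10],
    [10, 230, 255, 10],
    [10, 10, 10, 10],
]
anchor = mask_centroid(kidney_mask(frame))
print(anchor)  # (1.5, 1.5)
```

In practice the paper's system tracks the whole organ rather than a single point, but the thresholding step is the part that the indocyanine green super-enhancement makes reliable.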


Subject(s)
Augmented Reality, Kidney Neoplasms, Robotic Surgical Procedures, Computer-Assisted Surgery, Humans, Robotic Surgical Procedures/methods, Computer-Assisted Surgery/methods, Prospective Studies, Nephrectomy/methods, Three-Dimensional Imaging/methods, Kidney Neoplasms/diagnostic imaging, Kidney Neoplasms/surgery, Computers
3.
J Clin Med ; 12(23)2023 Nov 28.
Article in English | MEDLINE | ID: mdl-38068407

ABSTRACT

BACKGROUND: Addressing intraoperative bleeding remains a significant challenge in the field of robotic surgery. This research proposes a novel solution utilizing convolutional neural networks (CNNs). The objective is to establish a system capable of forecasting instances of intraoperative bleeding during robot-assisted radical prostatectomy (RARP) and promptly notifying the surgeon of bleeding risks. METHODS: To achieve this, a multi-task learning (MTL) CNN was introduced, leveraging a modified version of the U-Net architecture. The aim was to categorize video input as either "absence of blood accumulation" (0) or "presence of blood accumulation" (1). To facilitate seamless interaction with the neural networks, the Bleeding Artificial Intelligence-based Detector (BLAIR) software was created using the Python Keras API and built upon the PyQt framework. A subsequent clinical assessment of BLAIR's efficacy was performed, comparing its bleeding-identification performance against that of a urologist. Various perioperative variables were also gathered. For optimal MTL-CNN training parameterization, a multi-task loss function was adopted to enhance the accuracy of event detection by taking advantage of semantic segmentation of the surgical tools. Additionally, the Multiple Correspondence Analysis (MCA) approach was employed to assess software performance. RESULTS: The MTL-CNN demonstrated a remarkable event-recognition accuracy of 90.63%. When evaluating BLAIR's predictive ability and its capacity to pre-warn surgeons of potential bleeding incidents, the density plot highlighted a striking similarity between BLAIR and human assessments; in fact, BLAIR exhibited a faster response. Notably, the MCA analysis revealed no discernible distinction between software and human performance in accurately identifying instances of bleeding. CONCLUSION: The BLAIR software proved its competence by achieving over 90% accuracy in predicting bleeding events during RARP. This accomplishment underscores the potential of AI to assist surgeons during interventions and exemplifies the positive impact AI applications can have on surgical procedures.
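
The multi-task loss described above combines a per-pixel segmentation term with a frame-level event-classification term (blood accumulation present/absent). The following is a minimal illustrative sketch of that general idea; the weights, names, and the plain binary cross-entropy terms are assumptions for the example, not the BLAIR implementation:

```python
import math

def bce(p, y, eps=1e-7):
    """Binary cross-entropy for one prediction p in (0,1) and label y in {0,1}."""
    p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def multi_task_loss(seg_pred, seg_true, cls_pred, cls_true, w_seg=1.0, w_cls=1.0):
    """Weighted sum of mean per-pixel segmentation BCE (tool masks)
    and frame-level classification BCE (bleeding event)."""
    seg_terms = [bce(p, y) for p, y in zip(seg_pred, seg_true)]
    seg_loss = sum(seg_terms) / len(seg_terms)
    return w_seg * seg_loss + w_cls * bce(cls_pred, cls_true)

# Flattened 4-pixel mask prediction plus a frame-level "bleeding" score.
loss = multi_task_loss([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0], 0.95, 1)
print(round(loss, 3))  # 0.216
```

The benefit of training both heads jointly, as the abstract notes, is that the segmentation signal from the surgical tools regularizes the event-detection head.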

4.
Diagnostics (Basel) ; 13(22)2023 Nov 16.
Article in English | MEDLINE | ID: mdl-37998590

ABSTRACT

More than ever, precision surgery is making its way into modern surgical practice for functional organ preservation. This is possible mainly due to the increasing number of available technologies, including 3D models, virtual reality, augmented reality, and artificial intelligence. Intraoperative surgical navigation is an interesting application of these technologies, allowing the surgeon to understand the surgical anatomy in detail and to plan a patient-tailored approach. Automatic superimposition enters this context as a means of performing surgery as accurately as possible. Through dedicated software (in its first version) called iKidney, it is possible to superimpose 3D models onto live endoscopic images during partial nephrectomy, targeting the renal mass only. The patient was a 31-year-old with a 28 mm totally endophytic right-sided renal mass and a PADUA score of 9. Thanks to automatic superimposition and selective clamping, enucleoresection of the renal mass alone was performed with no major postoperative complications (i.e., Clavien-Dindo < 2). iKidney-guided partial nephrectomy is safe and feasible and yields excellent results in terms of organ preservation and functional outcomes. Further validation studies are needed to improve the prototype software, particularly to improve the rotational axes and to remove the need for manual assistance. It is also important to reduce the costs associated with these technologies to increase their use in smaller hospitals.

5.
Asian J Urol ; 10(4): 407-415, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38024433

ABSTRACT

Objective: To evaluate the accuracy of our new three-dimensional (3D) automatic augmented reality (AAR) system, guided by artificial intelligence, in identifying the tumour's location at the level of the preserved neurovascular bundle (NVB) at the end of the extirpative phase of nerve-sparing robot-assisted radical prostatectomy. Methods: In this prospective study, we enrolled patients with prostate cancer (clinical stages cT1c-3, cN0, and cM0) with a positive index lesion at target biopsy, suspicious for capsular contact or extracapsular extension at preoperative multiparametric magnetic resonance imaging (mpMRI). Patients underwent robot-assisted radical prostatectomy at San Luigi Gonzaga Hospital (Orbassano, Turin, Italy) from December 2020 to December 2021. At the end of the extirpative phase, thanks to our new artificial intelligence-driven AAR system, the virtual prostate 3D model made it possible to identify the tumour's location at the level of the preserved NVB and to perform a selective excisional biopsy, sparing the remaining portion of the bundle. Perioperative and postoperative data were evaluated, focusing especially on positive surgical margin (PSM) rates, potency, continence recovery, and biochemical recurrence. Results: Thirty-four patients were enrolled. In 15 (44.1%) cases, the target lesion was in contact with the prostatic capsule at mpMRI (Wheeler grade L2), while in 19 (55.9%) cases extracapsular extension was detected (Wheeler grade L3). 3D AAR-guided biopsies were negative in all pathological tumour stage 2 (pT2) patients, while they revealed the presence of cancer in 14 cases in the pT3 cohort (14/16; 87.5%). PSM rates were 0% and 7.1% in pathological stages pT2 and pT3 (<3 mm, Gleason score 3), respectively. Conclusion: With the proposed 3D AAR system, it is possible to correctly identify the lesion's location on the NVB in 87.5% of pT3 patients and to perform 3D-guided tailored nerve-sparing surgery even in locally advanced disease, without compromising oncological safety in terms of PSM rates.

6.
World J Urol ; 40(9): 2221-2229, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35790535

ABSTRACT

PURPOSE: To evaluate the effect of 3D models on positive surgical margin (PSM) rates in patients who underwent robot-assisted radical prostatectomy (RARP), compared with a no-3D control group. Secondarily, we evaluated postoperative functional and oncological outcomes. METHODS: Prospective study enrolling patients with localized prostate cancer (PCa) undergoing RARP with mpMRI-based 3D model reconstruction, displayed in a cognitive or augmented-reality fashion, at our centre from 01/2016 to 01/2020. A no-3D control group was extracted from the last two years of our institutional RARP database. PSM rates between the two groups were compared, and multivariable linear regression (MLR) models were applied. Finally, the Kaplan-Meier estimator was used to estimate biochemical recurrence at 12 months after the intervention. RESULTS: 160 patients were enrolled in the 3D group, while 640 were selected for the control group. A more conservative nerve-sparing (NS) approach was registered in the 3D group (full NS 20.6% vs 12.7%; intermediate NS 38.1% vs 38.0%; standard NS 41.2% vs 49.2%; p = 0.02). 3D group patients had lower PSM rates (25% vs 35.1%, p = 0.01). In MLR models, the availability of 3D technology (p = 0.005) and the absence of extracapsular extension (ECE, p = 0.004) at mpMRI were independent predictors of lower PSM rates. Moreover, the 3D model was a significant protective factor against PSM in patients with ECE or pT3 disease. CONCLUSION: The availability of 3D models during the intervention allows the surgeon to modulate the NS approach, limiting the occurrence of PSM, especially in patients with ECE at mpMRI or pT3 PCa.
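
The Kaplan-Meier estimator mentioned in the methods multiplies, at each event time, the fraction of at-risk patients who remain event-free. A minimal sketch on toy data (the follow-up values are invented for illustration, not the study's data):

```python
# Kaplan-Meier survival curve from (time, event) pairs:
# event = 1 for biochemical recurrence, 0 for censoring.
# Returns [(event_time, survival_probability)] at each event time.

def kaplan_meier(data):
    data = sorted(data)          # process follow-up times in ascending order
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        events = sum(1 for time, ev in data if time == t and ev == 1)
        ties = sum(1 for time, ev in data if time == t)
        if events:
            surv *= 1 - events / n_at_risk   # step down at each event time
            curve.append((t, surv))
        n_at_risk -= ties                     # events and censorings leave the risk set
        i += ties
    return curve

# Toy follow-up in months: recurrences at 3 and 9, censorings at 6 and 12.
print(kaplan_meier([(3, 1), (6, 0), (9, 1), (12, 0)]))  # [(3, 0.75), (9, 0.375)]
```

The censored patient at month 6 still contributes to the risk set at month 3 but not at month 9, which is exactly what distinguishes Kaplan-Meier from a naive proportion.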


Subject(s)
Prostatic Neoplasms, Robotic Surgical Procedures, Robotics, Humans, Male, Margins of Excision, Prospective Studies, Prostatectomy, Prostatic Neoplasms/surgery
7.
Urology ; 164: e316, 2022 06.
Article in English | MEDLINE | ID: mdl-35710185
8.
Int J Med Robot ; 18(3): e2387, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35246913

ABSTRACT

INTRODUCTION: The current study presents a deep learning framework to determine, in real time, the position and rotation of a target organ from an endoscopic video. These inferred data are used to overlay the 3D model of the patient's organ onto its real counterpart. The resulting augmented video stream is sent back to the surgeon as support during laparoscopic robot-assisted procedures. METHODS: The framework exploits semantic segmentation; thereafter, two techniques, based on convolutional neural networks and motion analysis, are used to infer the rotation. RESULTS: The segmentation shows optimal accuracy, with a mean IoU score greater than 80% in all tests. Different performance levels are obtained for rotation, depending on the surgical procedure. DISCUSSION: Even if the presented methodology has varying degrees of precision depending on the testing scenario, this work sets the first step towards the adoption of deep learning and augmented reality to generalise the automatic registration process.
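
The IoU (Intersection over Union) score quoted in the results measures the overlap between a predicted segmentation mask and the ground truth. A minimal sketch for binary masks, flattened to 0/1 lists:

```python
# IoU = |prediction AND truth| / |prediction OR truth| for binary masks.
# An IoU of 1.0 is a perfect match; the paper reports mean IoU > 0.8.

def iou(pred, true):
    inter = sum(1 for p, t in zip(pred, true) if p and t)
    union = sum(1 for p, t in zip(pred, true) if p or t)
    return inter / union if union else 1.0  # both masks empty: perfect agreement

pred = [1, 1, 1, 0, 0, 0]   # predicted organ pixels
true = [0, 1, 1, 1, 0, 0]   # ground-truth organ pixels
print(iou(pred, true))  # 0.5
```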


Subject(s)
Deep Learning, Laparoscopy, Robotic Surgical Procedures, Robotics, Humans, Computer-Assisted Image Processing/methods, Laparoscopy/methods, Neural Networks (Computer)
9.
Urology ; 164: e312-e316, 2022 06.
Article in English | MEDLINE | ID: mdl-35063460

ABSTRACT

Augmented reality robot-assisted partial nephrectomy (AR-RAPN) is limited by the need for constant manual overlapping of the hyper-accuracy 3D (HA3D) virtual models onto the real anatomy. We present our preliminary experience with automatic 3D virtual model overlapping during AR-RAPN. To reach fully automated HA3D model overlapping, we pursued computer vision strategies based on the identification of landmarks to which the virtual model is linked. Due to the limited field of view of RAPN, we used the whole kidney as a marker. Moreover, to overcome the colour similarity between the kidney and its neighbouring structures, we super-enhanced the organ using NIRF Firefly fluorescence imaging technology. Specifically developed software named "IGNITE" (Indocyanine GreeN automatIc augmenTed rEality) allowed automatic anchoring of the HA3D model to the real organ, leveraging the enhanced view offered by NIRF technology. Ten automatic AR-RAPNs were performed. For each patient an HA3D model was produced and visualized as an AR image inside the robotic console. During all the surgical procedures, the automatic ICG-guided AR technology successfully anchored the virtual model to the real organ without hand assistance (mean anchorage time: 7 seconds), even when the camera was moved throughout the operative field and the organ was zoomed or translated. In 7 patients with totally endophytic or posterior lesions, the renal masses were correctly identified with automatic AR technology, and a successful enucleoresection was performed. No intraoperative or postoperative Clavien > 2 complications or positive surgical margins were recorded. Our pilot study provides the first demonstration of the application of computer vision technology to AR procedures, with software automatically maintaining visual concordance between the 3D model and the in vivo anatomy during the overlap. Its current limitations, related to kidney deformations during surgery that alter the automatic anchorage, will be overcome by implementing organ recognition with deep learning algorithms.


Subject(s)
Augmented Reality, Robotic Surgical Procedures, Robotics, Computer-Assisted Surgery, Computers, Humans, Three-Dimensional Imaging/methods, Indocyanine Green, Nephrectomy/methods, Pilot Projects, Robotic Surgical Procedures/methods, Computer-Assisted Surgery/methods
10.
Int J Comput Assist Radiol Surg ; 16(9): 1435-1445, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34165672

ABSTRACT

PURPOSE: The current study aimed to propose a deep learning (DL) and augmented reality (AR) based solution for in vivo robot-assisted radical prostatectomy (RARP), improving on the precision of previously published work from our group. We implemented a two-step automatic system to align a 3D virtual ad hoc model of a patient's organ with its 2D endoscopic image, to assist surgeons during the procedure. METHODS: The approach uses a convolutional neural network (CNN) based structure for semantic segmentation and a subsequent elaboration of the obtained output, which produces the parameters needed for attaching the 3D model. We used a dataset obtained from 5 endoscopic videos (A, B, C, D, E), selected and tagged by our team's specialists. We then evaluated the best-performing combination of segmentation architecture and neural network and tested the overlay performance. RESULTS: U-Net stood out as the most effective architecture for segmentation. ResNet and MobileNet obtained similar Intersection over Union (IoU) results, but MobileNet was able to perform almost twice as many operations per second. This segmentation technique outperformed the former work, obtaining an average IoU for the catheter of 0.894 (σ = 0.076) compared with 0.339 (σ = 0.195). These modifications also led to an improvement in the 3D overlay performance, in particular in the Euclidean distance between the predicted and actual model's anchor point, from 12.569 (σ = 4.456) to 4.160 (σ = 1.448), and in the geodesic distance between the predicted and actual model's rotations, from 0.266 (σ = 0.131) to 0.169 (σ = 0.073). CONCLUSION: This work is a further step towards the adoption of DL and AR in the surgical domain. In future work, we will overcome the limits of this approach and further improve every step of the surgical procedure.
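
The two overlay-error metrics reported above can be computed as follows. This is an illustrative sketch using the standard formulas, not the authors' code: Euclidean distance between predicted and actual anchor points, and geodesic distance between rotations, i.e. the angle of the relative rotation R1ᵀR2, given by arccos((trace(R1ᵀR2) − 1) / 2).

```python
import math

def euclidean(p, q):
    """Straight-line distance between two 3D anchor points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def geodesic(R1, R2):
    """Rotation angle (radians) of R1^T @ R2 for 3x3 rotation matrices.
    trace(R1^T R2) = sum over i, k of R1[k][i] * R2[k][i]."""
    trace = sum(sum(R1[k][i] * R2[k][i] for k in range(3)) for i in range(3))
    val = max(-1.0, min(1.0, (trace - 1.0) / 2.0))  # clamp for float safety
    return math.acos(val)

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]        # identity: no rotation
Rz90 = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]    # 90-degree rotation about z

print(euclidean((0, 0, 0), (3, 4, 0)))   # 5.0
print(round(geodesic(I, Rz90), 4))       # 1.5708 (pi/2)
```

With these definitions, the improvement the paper reports (geodesic error 0.266 → 0.169) reads directly as a smaller residual rotation angle, in radians, between the predicted and actual model poses.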


Subject(s)
Augmented Reality, Deep Learning, Humans, Computer-Assisted Image Processing, Male, Neural Networks (Computer), Semantics
11.
Minerva Urol Nephrol ; 73(3): 367-375, 2021 06.
Article in English | MEDLINE | ID: mdl-31486325

ABSTRACT

BACKGROUND: 3D reconstructions are gaining wide diffusion in nephron-sparing surgery (NSS) planning. They have usually been studied on common 2D flat supports, with limitations regarding real depth comprehension and interaction. Nowadays, it is possible to visualize kidney 3D reconstructions as holograms in a "mixed reality" (MR) setting. The aim of this study was to test the face and content validity of this technology, and to assess the role of 3D holograms in aiding preoperative planning for highly complex renal tumors amenable to NSS. METHODS: We evaluated surgeons' perception of mixed reality for partial nephrectomy during an international urological meeting organized at our Institution in January 2019. From preoperative CT images, hyper-accuracy 3D (HA3D™) reconstructions were produced. A virtual environment was then created, and attendees interacted with the models in a mixed reality setting using HoloLens. All attendees were given a questionnaire, scored on a Likert scale (1-10), on their opinion of the use and application of MR. Moreover, the attendees had the chance to gain first-hand MR experience; they were then asked to choose their clamping and resection approach. RESULTS: Overall, 172 questionnaires were collected. The scores obtained for both surgical planning (scored 8/10) and anatomical accuracy (9/10) were very positive. High satisfaction with the potential role of this technology in surgical planning and in understanding surgical complexity (both scored 9/10) was expressed. After first-hand experience with HoloLens and MR, 64.4% and 44.4% of the surgeons changed their clamping and resection approach, respectively, compared with CT image visualization only, choosing a more selective one. CONCLUSIONS: Our study suggests that surgeons perceive holograms and MR as a useful and interesting tool for the preoperative setting before partial nephrectomy, in the direction of ever more precise surgery.


Subject(s)
Augmented Reality, Holography, Three-Dimensional Imaging, Kidney Neoplasms/surgery, Kidney/diagnostic imaging, Nephrectomy/methods, Preoperative Care/methods, Attitude of Health Personnel, Congresses as Topic, Humans, Kidney/surgery, Kidney Neoplasms/diagnostic imaging, Anatomic Models, Pilot Projects, Reproducibility of Results, Surgeons, Surveys and Questionnaires
12.
Int J Med Robot ; 16(5): 1-12, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32510857

ABSTRACT

PURPOSE: The current study aimed to systematically review the literature addressing the use of deep learning (DL) methods in intraoperative surgery applications, focusing on data collection, the objectives of these tools and, more technically, the DL-based paradigms utilized. METHODS: A literature search of classic databases was performed: using specific keywords, we identified a total of 996 papers. Among them, we selected 52 for full analysis, focusing on articles published after January 2015. RESULTS: The preliminary results of the implementation of DL in the clinical setting are encouraging. Almost all surgical sub-fields have seen the advent of artificial intelligence (AI) applications, and the results outperformed previous techniques in the majority of cases. From these results, a conceptualization of an intelligent operating room (IOR) is also presented. CONCLUSION: This evaluation outlined how AI and, in particular, DL are revolutionizing the surgical field, with numerous applications such as context detection and room management. This process is evolving year by year towards the realization of an IOR, equipped with technologies perfectly suited to drastically improving the surgical workflow.


Subject(s)
Artificial Intelligence, Deep Learning, Humans, Operating Rooms
13.
Comput Methods Programs Biomed ; 191: 105505, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32387863

ABSTRACT

BACKGROUND AND OBJECTIVE: We present an original approach to the development of augmented reality (AR) real-time solutions for robotic surgery navigation. The surgeon operating the robotic system through a console and a visor experiences reduced awareness of the operatory scene. In order to improve the surgeon's spatial perception during robot-assisted minimally invasive procedures, we provide a robust automatic software system that positions, rotates, and scales, in real time, the 3D virtual model of a patient's organ so that it stays aligned over its image captured by the endoscope. METHODS: We observed that the surgeon may benefit differently from the 3D augmentation during each stage of the surgical procedure; moreover, each stage may present different visual elements that provide specific challenges and opportunities to exploit when implementing organ-detection strategies. Hence we integrate different solutions, each dedicated to a specific stage of the surgical procedure, into a single software system. RESULTS: We present a formal model that generalizes our approach, describing a system composed of integrated solutions for AR in robot-assisted surgery. Following the proposed framework, an application has been developed that is currently used during in vivo surgery, for extensive testing, by the Urology unit of San Luigi Hospital in Orbassano (Turin), Italy. CONCLUSIONS: The main contribution of this paper is a modular approach to the tracking problem during in vivo robotic surgery, whose efficacy from a medical point of view has been assessed in the cited works. Segmenting the whole procedure into a set of stages allows the best tracking strategy to be associated with each of them, and allows implemented software mechanisms to be reused in stages with similar features.


Subject(s)
Image Enhancement/methods, Three-Dimensional Imaging, Robotic Surgical Procedures, Humans, Minimally Invasive Surgical Procedures, Robotic Surgical Procedures/methods, Software
14.
Eur Urol ; 78(2): 229-238, 2020 08.
Article in English | MEDLINE | ID: mdl-31898992

ABSTRACT

BACKGROUND: Despite technical improvements introduced with robotic surgery, management of complex tumours (PADUA score ≥10) is still a matter of debate within the field of transperitoneal robot-assisted partial nephrectomy (RAPN). OBJECTIVE: To evaluate the accuracy of our three-dimensional (3D) static and elastic augmented reality (AR) systems based on hyperaccuracy models (HA3D) in identifying tumours and intrarenal structures during transperitoneal RAPN (AR-RAPN), compared with standard ultrasound (US). DESIGN, SETTING, AND PARTICIPANTS: A retrospective study was conducted, including 91 patients who underwent RAPN for complex renal tumours, 48 with 3D AR guidance and 43 with 2D US guidance, from July 2017 to May 2019. SURGICAL PROCEDURE: In patients who underwent 3D AR-RAPN, virtual image overlapping guided the surgeon during the resection and suture phases. In the 2D US group, interventions were driven by US only. MEASUREMENTS: Patient characteristics were tested using Fisher's exact test for categorical variables and the Mann-Whitney test for continuous ones. Intraoperative, postoperative, and surgical outcomes were collected. All results for continuous variables were expressed as medians (range), and frequencies and proportions were reported as percentages. RESULTS AND LIMITATIONS: The use of 3D AR guidance made it possible to correctly identify the lesion and intraparenchymal structures, with more accurate 3D perception of the location and nature of the different structures than with standard 2D US guidance. This translates to a lower rate of global ischaemia (45.8% in the 3D group vs 69.7% in the US group; p = 0.03), a higher rate of enucleation (62.5% vs 37.5% in the 3D and US groups, respectively; p = 0.02), and a lower rate of collecting-system violation (10.4% vs 45.5%; p = 0.003). Postoperatively, 3D AR guidance correlated with a lower risk of surgery-related complications and a smaller drop in estimated renal plasma flow at renal scan at 3 mo of follow-up (-12.38 in the 3D group vs -18.14 in the US group; p = 0.01). The main limitations of this study are the short follow-up time and small sample size. CONCLUSIONS: HA3D models overlapped onto the in vivo anatomy during AR-RAPN for complex tumours can be useful for identifying the lesion and intraparenchymal structures that are difficult to visualise with US only. This translates to a potential improvement in the quality of the resection phase and a reduction in postoperative complications, with better functional recovery. PATIENT SUMMARY: Based on our findings, three-dimensional augmented reality robot-assisted partial nephrectomy seems to help surgeons in the management of complex renal tumours, with potential early postoperative benefits.


Subject(s)
Augmented Reality, Three-Dimensional Imaging, Kidney Neoplasms/diagnostic imaging, Kidney Neoplasms/surgery, Intraoperative Monitoring, Nephrectomy/methods, Robotic Surgical Procedures, Aged, Female, Humans, Male, Middle Aged, Retrospective Studies
15.
World J Urol ; 38(4): 869-881, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31456017

ABSTRACT

CONTEXT: Despite the current era of precision surgery in robotics, an unmet need remains for optimal surgical planning and navigation for most genitourinary diseases. 3D virtual reconstruction of 2D cross-sectional imaging has been increasingly adopted to help surgeons better understand the surgical anatomy. OBJECTIVES: To provide a short overview of the most recent evidence on current applications of 3D imaging in robotic urologic surgery. EVIDENCE ACQUISITION: A non-systematic review of the literature was performed. Medline, PubMed, the Cochrane Database and Embase were screened for studies regarding the use of 3D models in robotic urology. EVIDENCE SYNTHESIS: 3D reconstruction technology creates 3D virtual and printed models, which first appeared in urology as aids to surgical planning and intraoperative navigation, especially in the treatment of oncological diseases of the prostate and kidneys. The latest revolution in the field involves overlapping the models onto the real anatomy to perform augmented reality procedures. CONCLUSION: 3D virtual/printing technology has entered daily practice in some tertiary centres, especially for the management of urological tumours. The 3D models can be virtual or printed and can help the surgeon in surgical planning, physician education and training, and patient counselling. Moreover, integration of robotic platforms with 3D models, and the possibility of performing augmented reality surgery, increases the surgeon's confidence with the pathology, with potential benefits in the precision and tailoring of procedures.


Subject(s)
Three-Dimensional Imaging, Robotic Surgical Procedures/methods, Computer-Assisted Surgery/methods, Urologic Surgical Procedures/methods, Forecasting, Humans
16.
Minerva Urol Nefrol ; 72(1): 49-57, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31833725

ABSTRACT

INTRODUCTION: As we enter the era of "big data," an increasing amount of complex health-care data will become available. These data are often redundant, "noisy," and characterized by wide variability. In order to offer a precise and transversal view of a clinical scenario, artificial intelligence (AI), with machine learning (ML) algorithms and artificial neural networks (ANNs), has been adopted, with promising wide diffusion in the near future. The present work aims to provide a comprehensive and critical overview of the current and potential applications of AI and ANNs in urology. EVIDENCE ACQUISITION: A non-systematic review of the literature was performed by screening Medline, PubMed, the Cochrane Database, and Embase to detect pertinent studies regarding the application of AI and ANNs in urology. EVIDENCE SYNTHESIS: The main application of AI in urology is in the field of genitourinary cancers. In prostate cancer, AI has been applied to the prediction of prostate biopsy results. For bladder cancer, prediction of recurrence-free probability and diagnostic evaluation were analysed with ML algorithms. For kidney and testis cancer, anecdotal experiences were reported for staging and prediction of disease recurrence. More recently, AI has been applied to non-oncological conditions such as urinary stones and functional urology. CONCLUSIONS: AI technologies are playing a growing role in health care but, up to now, their "real-life" implementation remains limited. In the near future, however, the AI-driven era could change clinical practice in urology, improving overall patient outcomes.


Subject(s)
Artificial Intelligence, Neural Networks (Computer), Urology/methods, Big Data, Female, Humans, Male
17.
Eur Urol ; 76(4): 505-514, 2019 10.
Article in English | MEDLINE | ID: mdl-30979636

ABSTRACT

BACKGROUND: In prostate cancer (PCa) surgical procedures, in order to maximize potency recovery, a nerve-sparing (NS) procedure is preferred. However, cancer abutting or focally extending beyond the prostate capsule increases the risk of a positive surgical margin. OBJECTIVE: To evaluate the accuracy of our new three-dimensional (3D) elastic augmented-reality (AR) system in identifying capsular involvement (CI) location of PCa during the NS phase of robot-assisted radical prostatectomy (RARP). Secondarily, the accuracy of this technology was compared with two-dimensional (2D)-based cognitive procedures. DESIGN, SETTING, AND PARTICIPANTS: A prospective study, enrolling 40 patients with PCa undergoing RARP at our center, from May to October 2018. SURGICAL PROCEDURE: Patients underwent 3D AR RARP or, in case of unavailability of this technology, 2D cognitive RARP. In all patients, total anatomical reconstruction was used. MEASUREMENTS: Clinical data were collected. In order to compare the two groups, nonparametric Mann-Whitney and chi-square tests were performed. A metallic clip was placed at the level of suspicious CI on the basis of images given by the 3D AR or magnetic resonance imaging (MRI) report. The pathological analysis evaluated the presence of tumor at the level of the clip. RESULTS AND LIMITATIONS: Twenty patients were enrolled in each group. Focusing on the 3D AR group at macroscopic evaluation, the metallic clip was placed at the tumor and capsular bulging in all cases. At microscopic assessment, cancer presence was confirmed in the suspicious area in 95.4% of the cases. Moreover, CI was correctly identified in 100.0% of the cases, thanks to the 3D image overlap. These results were compared with the 2D MRI cognitive group, showing, at microscopic analysis, statistically significant superiority of the 3D AR group in CI detection during the NS phase (100% vs 47.0%; p<0.05). 
The main limitation of this technique is that segmentation and overlapping of the images are performed manually. CONCLUSIONS: Our findings suggest that, with the introduction of elastic 3D virtual models, prostate deformation is correctly simulated during surgery and lesion location is correctly identified even in dynamic reality, with a potential reduction of the positive surgical margin rate and, at the same time, maximization of functional outcomes. PATIENT SUMMARY: On the basis of our findings, three-dimensional elastic augmented-reality technology seems to help the surgeon identify lesion location even in a dynamic phase of the intervention, optimizing oncological outcomes.
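The reported superiority of 3D AR over 2D cognitive guidance in CI detection (100% vs 47.0%; p<0.05, 20 patients per group) rests on a standard 2x2 comparison of detection rates. The abstract does not give raw counts, so the sketch below uses hypothetical counts chosen only to approximate the reported rates, and an exact (Fisher-type) test rather than the study's chi-square, implemented in plain Python:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    # Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    # sum the hypergeometric probabilities of every table with the same
    # margins that is no more likely than the observed one.
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d
    def p_table(x):
        # Probability of a table with x in the top-left cell.
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts: 3D AR group 20/20 correct; 2D cognitive group
# 9/20 correct (~45%, approximating the reported 47.0%).
p = fisher_exact_two_sided(20, 0, 9, 11)
```

With counts of this magnitude the difference is significant well below the conventional 0.05 threshold, consistent with the reported p<0.05.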


Subject(s)
Augmented Reality , Imaging, Three-Dimensional , Prostatectomy/methods , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/surgery , Robotic Surgical Procedures , Surgery, Computer-Assisted , Aged , Elasticity , Humans , Image Processing, Computer-Assisted , Male , Middle Aged , Prospective Studies , Prostatic Neoplasms/pathology
18.
BJU Int ; 123(5): 834-845, 2019 May.
Article in English | MEDLINE | ID: mdl-30246936

ABSTRACT

OBJECTIVES: To assess the use of hyper-accuracy three-dimensional (HA3D™; MEDICS, Moncalieri, Turin, Italy) reconstruction based on multiparametric magnetic resonance imaging (mpMRI) and superimposed imaging during augmented-reality robot-assisted radical prostatectomy (AR-RARP). PATIENTS AND METHODS: Patients with prostate cancer (clinical stages cT1-3, cN0, cM0) undergoing RARP at our Centre from June 2017 to April 2018 were enrolled. In all cases, cancer was diagnosed by targeted biopsy of the index lesion identified on high-resolution (1-mm slices) mpMRI. HA3D reconstruction was created with dedicated software to obtain a 3D virtual model of the prostate and surrounding structures. A specific system was used to overlay the virtual data on the endoscopic video displayed by the remote da Vinci® surgical console (Intuitive Surgical Inc., Sunnyvale, CA, USA), and the virtual images were superimposed by the surgeon by means of the TilePro™ multi-input display technology (Intuitive Surgical Inc.). The AR technology was used in four standardised key steps during RARP. The procedure was modulated according to whether mpMRI showed prostate cancer without extracapsular extension (ECE; Group A) or with ECE (Group B). In Group A, the virtual image of the prostate was overlaid on the endoscopic view and the intraprostatic lesion was marked on the prostate surface with a metallic clip at the level of the suspicious lesion identified on the 3D virtual AR image. In Group B, the same step was performed; in addition, a metallic clip was placed at the level of the suspected ECE on the neurovascular bundles (NVBs) according to the virtual images. Finally, selective biopsies were taken from the NVBs at this level, and the entire NVBs were then removed for final pathological examination, according to standard clinical indications.
For Group A, the pathologist performed a targeted needle biopsy at the level of the metallic clip on the prostate surface before sample reduction. For Group B, the presence of tumour was evaluated during the reduction phase at the level of the metallic clip on the prostate surface and at the level of the NVBs, which were sent separately. Finally, a 3D image scanner (Kinect; Microsoft) was used to perform a dimensional comparison between the mpMRI-based 3D virtual reconstruction and the whole-mount specimen. RESULTS: In all, 30 patients were enrolled in the present study, 11 (36.6%) in Group A and 19 (63.4%) in Group B. In all cases (30/30), final pathology confirmed the location of the index lesion, as cancer was found at the level of the metallic clip. Suspected ECE was confirmed on final pathology in 15/19 cases (79%). The AR-guided selective biopsies at the level of the NVBs confirmed the ECE location, with 11/15 (73.3%) biopsies positive for cancer. The mismatch between the 3D virtual reconstruction and the 3D scan of the whole-mount specimen was <3 mm over >85% of the gland. CONCLUSION: Our results suggest that HA3D virtual reconstruction of the prostate based on mpMRI data, with real-time superimposed imaging, allows an effective AR-RARP to be performed. Potentially, this approach translates into better outcomes, as the surgeon can tailor the procedure to each patient.
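The closing dimensional comparison (mismatch <3 mm over >85% of the gland) amounts to a nearest-neighbour surface-distance measurement between the mpMRI-based model and the scanned specimen. A brute-force sketch under that assumption, with made-up point coordinates since the abstract reports no raw data:

```python
import math

def surface_match_fraction(model_pts, specimen_pts, tol_mm=3.0):
    # For each model surface point, find the distance to its nearest
    # specimen point; report the fraction falling within tol_mm.
    def nearest(p):
        return min(math.dist(p, q) for q in specimen_pts)
    return sum(nearest(p) <= tol_mm for p in model_pts) / len(model_pts)

# Toy point clouds (millimetres): two model points lie within 3 mm of the
# specimen surface, one does not.
model = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
specimen = [(0.5, 0.0, 0.0), (10.0, 2.0, 0.0), (0.0, 50.0, 0.0)]
frac = surface_match_fraction(model, specimen)  # 2/3 of points within 3 mm
```

A real pipeline would use dense meshes and a spatial index (e.g. a k-d tree) rather than this O(n·m) loop, but the reported ">85% of the gland within 3 mm" is exactly this kind of fraction.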


Subject(s)
Magnetic Resonance Imaging , Prostate/pathology , Prostatectomy , Prostatic Neoplasms/surgery , Robotic Surgical Procedures , Virtual Reality , Humans , Imaging, Three-Dimensional , Male , Middle Aged , Prostate/diagnostic imaging , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Reproducibility of Results , Surgery, Computer-Assisted