Results 1 - 19 of 19
1.
Surg Innov ; 25(1): 69-76, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29303068

ABSTRACT

BACKGROUND: Combining the strengths of surgical robotics and minimally invasive surgery (MIS) holds the potential to revolutionize surgical interventions. The advantages of MIS for patients are obvious, but the use of instrumentation suitable for MIS often translates into limited surgeon capabilities (e.g., reduced dexterity and maneuverability, and demanding navigation around organs). Soft robotics technologies and approaches can help overcome these shortcomings. Devices based on soft materials have already demonstrated advantages in application areas where dexterity and safe interaction are needed. In this article, the authors demonstrate that soft robotics can be used synergistically with traditional rigid tools to improve the capabilities of a robotic system without affecting the usability of the platform. MATERIALS AND METHODS: A bioinspired soft manipulator equipped with a miniaturized camera was integrated with the Endoscopic Camera Manipulator arm of the da Vinci Research Kit, from both hardware and software viewpoints. Usability of the integrated system was evaluated with nonexpert users through a standard protocol to highlight difficulties in controlling the soft manipulator. RESULTS AND CONCLUSION: This is the first time an endoscopic tool based on soft materials has been integrated into a surgical robot. The soft endoscopic camera can be easily operated through the da Vinci Research Kit master console, increasing workspace and dexterity without limiting intuitive, user-friendly operation.


Subject(s)
Endoscopes; Endoscopy/education; Endoscopy/instrumentation; Robotic Surgical Procedures/education; Robotic Surgical Procedures/instrumentation; Adult; Equipment Design; Female; Humans; Male; Task Performance and Analysis; Young Adult
2.
Int J Comput Assist Radiol Surg ; 19(3): 531-539, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37934401

ABSTRACT

PURPOSE: Computer-assisted surgical systems provide support information to the surgeon, which can improve the execution and overall outcome of the procedure. These systems are based on deep learning models that are trained on complex data that are challenging to annotate. Generating synthetic data can overcome these limitations, but it is necessary to reduce the domain gap between real and synthetic data. METHODS: We propose a method for image-to-image translation based on a Stable Diffusion model, which generates realistic images starting from synthetic data. Compared to previous works, the proposed method is better suited for clinical application, as it requires a much smaller amount of input data and allows finer control over the generation of details by introducing different variants of supporting control networks. RESULTS: The proposed method is applied in the context of laparoscopic cholecystectomy, using synthetic and real data from public datasets. It achieves a mean Intersection over Union of 69.76%, significantly improving on the baseline results (69.76% vs. 42.21%). CONCLUSIONS: The proposed method for translating synthetic images into images with realistic characteristics will enable the training of deep learning methods that can generalize optimally to real-world contexts, thereby improving computer-assisted intervention guidance systems.
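The segmentation quality above is reported as mean Intersection over Union (IoU). As a reference, a minimal sketch of how this metric is commonly computed over integer label maps; the function name and toy data are illustrative, not from the paper:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union between two integer label maps.

    Classes absent from both maps are skipped so they do not
    distort the average.
    """
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent in both maps
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 label maps with two classes: per-class IoUs are 1/2 and 2/3,
# so the mean IoU is 7/12.
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
score = mean_iou(pred, target, num_classes=2)
```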


Subject(s)
Endoscopy; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods
3.
Article in English | MEDLINE | ID: mdl-38761319

ABSTRACT

PURPOSE: Most studies on surgical activity recognition using artificial intelligence (AI) have focused on recognizing one type of activity from small, mono-centric surgical video datasets. It remains speculative whether those models would generalize to other centers. METHODS: In this work, we introduce a large multi-centric, multi-activity dataset consisting of 140 surgical videos (MultiBypass140) of laparoscopic Roux-en-Y gastric bypass (LRYGB) surgeries performed at two medical centers: the University Hospital of Strasbourg, France (StrasBypass70) and Inselspital, Bern University Hospital, Switzerland (BernBypass70). The dataset has been fully annotated with phases and steps by two board-certified surgeons. Furthermore, we assess the generalizability of, and benchmark, different deep learning models for phase and step recognition in 7 experimental studies: (1) training and evaluation on BernBypass70; (2) training and evaluation on StrasBypass70; (3) training and evaluation on the joint MultiBypass140 dataset; (4) training on BernBypass70, evaluation on StrasBypass70; (5) training on StrasBypass70, evaluation on BernBypass70; and training on MultiBypass140 with evaluation on (6) BernBypass70 and (7) StrasBypass70. RESULTS: Model performance is markedly influenced by the training data. The worst results were obtained in experiments (4) and (5), confirming the limited generalization capabilities of models trained on mono-centric data. The use of multi-centric training data, experiments (6) and (7), improves the generalization capabilities of the models, bringing them beyond the level of independent mono-centric training and validation (experiments (1) and (2)). CONCLUSION: MultiBypass140 shows considerable variation in surgical technique and workflow of LRYGB procedures between centers. Accordingly, the generalization experiments demonstrate a remarkable difference in model performance.
These results highlight the importance of multi-centric datasets for AI model generalization to account for variance in surgical technique and workflows. The dataset and code are publicly available at https://github.com/CAMMA-public/MultiBypass140.
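The seven train/evaluation splits described above can be sketched as a simple experiment loop; `train_fn` and `eval_fn` are hypothetical stand-ins for a full phase/step recognition pipeline, not the authors' code:

```python
def run_experiments(train_fn, eval_fn, bern, stras):
    """Sketch of the seven train/evaluation splits described above.

    `bern` and `stras` are lists of videos from the two centers;
    `train_fn` and `eval_fn` are hypothetical stand-ins for a full
    phase/step recognition pipeline.
    """
    multi = bern + stras  # joint multi-centric pool
    experiments = {
        1: (bern, [bern]),
        2: (stras, [stras]),
        3: (multi, [multi]),
        4: (bern, [stras]),   # cross-center: generalization stress test
        5: (stras, [bern]),
        6: (multi, [bern]),   # multi-centric training
        7: (multi, [stras]),
    }
    results = {}
    for idx, (train_set, eval_sets) in experiments.items():
        model = train_fn(train_set)
        results[idx] = [eval_fn(model, s) for s in eval_sets]
    return results
```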

4.
Int J Comput Assist Radiol Surg ; 18(7): 1295-1302, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37259011

ABSTRACT

PURPOSE: A computer-assisted surgical system must provide up-to-date and accurate information about the patient's anatomy during the procedure to improve clinical outcomes. It is therefore essential to consider tissue deformations, and a patient-specific biomechanical model (PBM) is usually adopted. The predictive capability of the PBM is highly influenced by the proper definition of attachments to the surrounding anatomy, which are difficult to estimate preoperatively. METHODS: We propose to predict the location of attachments using a deep neural network fed with multiple partial views of the intraoperative deformed organ surface, directly encoded as point clouds. Compared to previous works, providing a sequence of deformed views as input allows the network to consider the temporal evolution of deformations and to handle the intrinsic ambiguity of estimating attachments from a single view. RESULTS: The method is applied to computer-assisted hepatic surgery and tested on both a synthetic and an in vivo human open-surgery scenario. The network is trained on a patient-specific synthetic dataset in less than 5 h and produces a more accurate intraoperative estimation of attachments than those generally used in liver surgery (i.e., fixing the vena cava or falciform ligament). The obtained predictions are 26% more accurate than those of previously proposed solutions. CONCLUSIONS: Trained with patient-specific simulated data, the proposed network estimates the attachments in a fast and accurate manner, also considering the temporal evolution of the deformations, improving patient-specific intraoperative guidance in computer-assisted surgical systems.


Subject(s)
Liver Diseases; Surgery, Computer-Assisted; Humans; Neural Networks, Computer; Surgery, Computer-Assisted/methods
5.
Int J Comput Assist Radiol Surg ; 18(9): 1665-1672, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36944845

ABSTRACT

PURPOSE: Automatic recognition of surgical activities from intraoperative surgical videos is crucial for developing intelligent support systems for computer-assisted interventions. Current state-of-the-art recognition methods are based on deep learning, where data augmentation has shown the potential to improve generalization. This has spurred work on automated and simplified augmentation strategies for image classification and object detection on datasets of still images. Extending such augmentation methods to videos is not straightforward, as the temporal dimension needs to be considered. Furthermore, surgical videos pose additional challenges, as they are composed of multiple, interconnected, and long-duration activities. METHODS: This work proposes a new simplified augmentation method, called TRandAugment, specifically designed for long surgical videos. It treats each video as an assembly of temporal segments and applies consistent but random transformations to each segment. The proposed augmentation method is used to train an end-to-end spatiotemporal model consisting of a CNN (ResNet50) followed by a TCN. RESULTS: The effectiveness of the proposed method is demonstrated on two surgical video datasets, namely Bypass40 and CATARACTS, and two tasks, surgical phase and step recognition. TRandAugment adds a performance boost of 1-6% over previous state-of-the-art methods, which use manually designed augmentations. CONCLUSION: This work presents a simplified and automated augmentation method for long surgical videos. The proposed method has been validated on different datasets and tasks, indicating the importance of devising temporal augmentation methods for long surgical videos.
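The core idea of TRandAugment, as described in the abstract, can be sketched as follows; this is an illustrative reimplementation with toy integer "frames" and trivial "transforms", not the authors' code:

```python
import random

def trand_augment(frames, n_segments, transforms, seed=None):
    """Split a video into temporal segments and apply one randomly
    chosen transform consistently to all frames of each segment."""
    rng = random.Random(seed)
    seg_len = max(1, len(frames) // n_segments)
    out = []
    for start in range(0, len(frames), seg_len):
        t = rng.choice(transforms)  # same transform within a segment
        out.extend(t(f) for f in frames[start:start + seg_len])
    return out

# Toy example: integer "frames", identity or negation as "transforms"
frames = list(range(8))
augmented = trand_augment(frames, n_segments=2,
                          transforms=[lambda x: x, lambda x: -x], seed=0)
```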


Subject(s)
Cataract Extraction; Neural Networks, Computer; Humans; Algorithms; Cataract Extraction/methods
6.
IEEE Trans Med Imaging ; 42(9): 2592-2602, 2023 09.
Article in English | MEDLINE | ID: mdl-37030859

ABSTRACT

Automatic recognition of fine-grained surgical activities, called steps, is a challenging but crucial task for intelligent intra-operative computer assistance. The development of current vision-based activity recognition methods relies heavily on a high volume of manually annotated data. This data is difficult and time-consuming to generate and requires domain-specific knowledge. In this work, we propose to use coarser and easier-to-annotate activity labels, namely phases, as weak supervision to learn step recognition with fewer step annotated videos. We introduce a step-phase dependency loss to exploit the weak supervision signal. We then employ a Single-Stage Temporal Convolutional Network (SS-TCN) with a ResNet-50 backbone, trained in an end-to-end fashion from weakly annotated videos, for temporal activity segmentation and recognition. We extensively evaluate and show the effectiveness of the proposed method on a large video dataset consisting of 40 laparoscopic gastric bypass procedures and the public benchmark CATARACTS containing 50 cataract surgeries.
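One plausible form of such a step-phase dependency loss, sketched under the assumption that each step maps to a single known phase; the exact loss used in the paper may differ:

```python
import math

def step_phase_dependency_loss(step_probs, step_to_phase, phase_label, eps=1e-8):
    """Aggregate predicted step probabilities into phase probabilities
    via a known step -> phase mapping, then apply cross-entropy against
    the (cheaper to annotate) phase label."""
    n_phases = max(step_to_phase) + 1
    phase_probs = [0.0] * n_phases
    for s, p in enumerate(step_probs):
        phase_probs[step_to_phase[s]] += p
    return -math.log(phase_probs[phase_label] + eps)
```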


Subject(s)
Neural Networks, Computer; Surgery, Computer-Assisted
7.
IEEE Trans Biomed Eng ; 69(1): 209-219, 2022 01.
Article in English | MEDLINE | ID: mdl-34156935

ABSTRACT

In robot-assisted minimally invasive surgery, discriminating critical subsurface structures is essential to making the surgical procedure safer and more efficient. In this paper, a novel robot-assisted electrical bio-impedance scanning (RAEIS) system is developed and validated through a series of experiments. The proposed system constructs a tri-polar sensing configuration for tissue homogeneity inspection. Specifically, two robotic forceps are used as electrodes for applying electric current and measuring reciprocal voltages relative to a ground electrode placed distal to the measuring site. Compared to existing electrical bio-impedance sensing technology, the proposed system can use miniaturized electrodes to measure a site flexibly, with enhanced subsurface detection capability. This paper presents the concept, the modeling of the sensing method, the hardware design, and the system calibration. Subsequently, a series of experiments are conducted for system evaluation, including finite element simulation, saline solution bath experiments, and experiments on ex vivo animal tissues. The experimental results demonstrate that the proposed system can measure the resistivity of the material with high accuracy and detect a subsurface non-homogeneous object with a 100% success rate. The proposed parameter estimation algorithm approximates the resistivity and the depth of the subsurface object effectively with one fast scan.


Subject(s)
Robotics; Algorithms; Animals; Calibration; Electric Impedance; Minimally Invasive Surgical Procedures
8.
Med Image Anal ; 77: 102355, 2022 04.
Article in English | MEDLINE | ID: mdl-35139483

ABSTRACT

Optical Coherence Tomography (OCT) is increasingly used in endoluminal procedures since it provides high-speed, high-resolution imaging. Distortion and instability of images obtained with a proximal scanning endoscopic OCT system are significant due to motor rotation irregularity, friction between the rotating probe and the outer sheath, and synchronization issues. Online compensation of artefacts is essential to ensure image quality suitable for real-time assistance during diagnosis or minimally invasive treatment. In this paper, we propose a new online correction method to tackle B-scan distortion, video stream shaking, and the drift problem of endoscopic OCT linked to A-line-level image shifting. The proposed computational approach for OCT scanning video correction integrates a Convolutional Neural Network (CNN) to improve the estimation of the azimuthal shift of each A-line. To suppress the accumulative error of integral estimation, we also introduce another CNN branch to estimate a dynamic overall orientation angle. We train the network with semi-synthetic OCT videos obtained by intentionally adding rotational distortion to real OCT scanning images. The results show that networks trained on this semi-synthetic data generalize to stabilize real OCT videos, and the algorithm's efficacy is demonstrated on both ex vivo and in vivo data, where strong scanning artifacts are successfully corrected.


Subject(s)
Deep Learning; Tomography, Optical Coherence; Algorithms; Artifacts; Humans; Neural Networks, Computer; Tomography, Optical Coherence/methods
9.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3729-3733, 2021 11.
Article in English | MEDLINE | ID: mdl-34892047

ABSTRACT

Electrical impedance tomography (EIT) is an important medical imaging approach for showing the electrical characteristics and homogeneity of a tissue region noninvasively. Recently, this technology has been introduced to robot-assisted minimally invasive surgery (RAMIS) to assist the detection of surgical margins, with relevant clinical benefits. Nevertheless, most EIT technologies are based on a fixed multiple-electrode probe, which significantly limits sensing flexibility and capability. In this study, we present a method for acquiring EIT measurements during a RAMIS procedure using two existing robotic forceps as electrodes. The robot moves the forceps tips to a series of predefined positions for injecting excitation current and measuring electric potentials. Given the relative positions of the electrodes and the measured electric potentials, the spatial distribution of electrical conductivity in a section view can be reconstructed. Realistic experiments are designed and conducted to simulate two tasks: subsurface abnormal tissue detection and surgical margin localization. According to the reconstructed images, the system is demonstrated to display the location of the abnormal tissue and the contrast of the tissues' conductivity with an accuracy suitable for clinical applications.


Subject(s)
Robotics; Tomography; Electric Conductivity; Electric Impedance; Tomography, X-Ray Computed
10.
Int J Comput Assist Radiol Surg ; 16(8): 1287-1295, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33886045

ABSTRACT

PURPOSE: The automatic extraction of knowledge about intervention execution from surgical manuals would be of the utmost importance for developing expert surgical systems and assistants. In this work we assess the feasibility of automatically identifying the sentences of a surgical intervention text that contain procedural information, a subtask of the broader goal of extracting intervention workflows from surgical manuals. METHODS: We frame the problem as a binary classification task. We first introduce a new public dataset of 1958 sentences from robotic surgery texts, manually annotated as procedural or non-procedural. We then apply different classification methods, from classical machine learning algorithms to more recent neural-network approaches and classification methods exploiting transformers (e.g., BERT, ClinicalBERT). We also analyze the benefits of applying balancing techniques to the dataset. RESULTS: The neural-network architectures fed with FastText embeddings and the one based on ClinicalBERT outperform all the other tested methods, empirically confirming the feasibility of the task. Adopting balancing techniques does not lead to substantial improvements in classification. CONCLUSION: This is the first work experimenting with machine/deep learning algorithms for automatically identifying procedural sentences in surgical texts. It also introduces the first public dataset that can be used for benchmarking different classification methods for this task.
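As a sketch of the classical end of the baseline spectrum, a tiny bag-of-words perceptron for procedural vs. non-procedural sentence classification; the sentences, labels, and model here are illustrative, and the paper's actual baselines are stronger:

```python
from collections import Counter

def train_perceptron(sentences, labels, epochs=10):
    """Tiny bag-of-words perceptron: labels are +1 (procedural)
    or -1 (non-procedural). Illustrative only."""
    w = Counter()
    bias = 0.0
    for _ in range(epochs):
        for text, y in zip(sentences, labels):
            score = bias + sum(w[tok] for tok in text.lower().split())
            if y * score <= 0:  # misclassified: perceptron update
                for tok in text.lower().split():
                    w[tok] += y
                bias += y
    return w, bias

def predict(model, text):
    w, bias = model
    score = bias + sum(w[tok] for tok in text.lower().split())
    return +1 if score > 0 else -1

# Illustrative toy data, not from the paper's dataset
sents = ["insert the trocar into the abdomen",
         "grasp and divide the tissue",
         "the da vinci system was introduced in 1999",
         "robotic surgery has a long history"]
labels = [1, 1, -1, -1]
model = train_perceptron(sents, labels)
```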


Subject(s)
Algorithms; Machine Learning; Neural Networks, Computer; Robotic Surgical Procedures/methods; Humans
11.
Int J Comput Assist Radiol Surg ; 16(7): 1111-1119, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34013464

ABSTRACT

PURPOSE: Automatic segmentation and classification of surgical activity is crucial for providing advanced support in computer-assisted interventions and autonomous functionalities in robot-assisted surgeries. Prior works have focused on recognizing either coarse activities, such as phases, or fine-grained activities, such as gestures. This work aims at jointly recognizing two complementary levels of granularity directly from videos, namely phases and steps. METHODS: We introduce two correlated surgical activities, phases and steps, for the laparoscopic gastric bypass procedure. We propose a multi-task multi-stage temporal convolutional network (MTMS-TCN) along with a multi-task convolutional neural network (CNN) training setup to jointly predict the phases and steps and benefit from their complementarity to better evaluate the execution of the procedure. We evaluate the proposed method on a large video dataset consisting of 40 surgical procedures (Bypass40). RESULTS: We present experimental results from several baseline models for both phase and step recognition on Bypass40. The proposed MTMS-TCN method outperforms single-task methods in both phase and step recognition by 1-2% in accuracy, precision, and recall. Furthermore, for step recognition, MTMS-TCN outperforms LSTM-based models by 3-6% on all metrics. CONCLUSION: In this work, we present a multi-task multi-stage temporal convolutional network for surgical activity recognition, which shows improved results compared to single-task models on a gastric bypass dataset with multi-level annotations. The proposed method shows that joint modeling of phases and steps is beneficial for improving the overall recognition of each type of activity.


Subject(s)
Gastric Bypass/methods; Laparoscopy/methods; Neural Networks, Computer; Robotic Surgical Procedures/methods; Humans
12.
Int J Comput Assist Radiol Surg ; 15(8): 1379-1387, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32445126

ABSTRACT

PURPOSE: Biomechanical simulation of anatomical deformations caused by ultrasound probe pressure is of outstanding importance for several applications, from the testing of robotic acquisition systems to multi-modal image fusion and the development of ultrasound training platforms. Different approaches can be exploited for modelling the probe-tissue interaction, each achieving a different trade-off among accuracy, computation time, and stability. METHODS: We assess the performance of different strategies based on the finite element method for modelling the interaction between the rigid probe and soft tissues. Probe-tissue contact is modelled using (i) penalty forces, (ii) constraint forces, and (iii) prescribed displacements of the mesh surface nodes. These methods are tested in the challenging context of ultrasound scanning of the breast, an organ undergoing large nonlinear deformations during the procedure. RESULTS: The obtained results are evaluated against those of a non-physically based method. While all methods achieve similar accuracy, performance in terms of stability and speed shows high variability, especially for the methods modelling the contacts explicitly. Overall, prescribing surface displacements is the approach with the best performance, but it requires prior knowledge of the contact area and probe trajectory. CONCLUSIONS: In this work, we present different strategies for modelling probe-tissue interaction, each achieving a different compromise among accuracy, speed, and stability. The choice of the preferred approach depends strongly on the requirements of the specific clinical application. Since the presented methodologies can be applied to describe general tool-tissue interactions, this work can serve as a reference for researchers seeking the most appropriate strategy to model anatomical deformation induced by interaction with medical tools.
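Approach (i), penalty forces, can be sketched in a few lines: the restoring force grows linearly with how far the probe penetrates the tissue surface. The function name and parameters are illustrative, not from the paper:

```python
def penalty_contact_force(penetration_depth, normal, stiffness):
    """Penalty-based contact: a restoring force proportional to the
    probe's penetration into the tissue, directed along the surface
    normal. No force is applied when there is no penetration."""
    if penetration_depth <= 0.0:
        return (0.0, 0.0, 0.0)  # probe not in contact
    return tuple(stiffness * penetration_depth * n for n in normal)
```

The stiffness constant trades accuracy for stability: large values reduce visible interpenetration but can make explicit time integration unstable, which matches the variability reported above.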


Subject(s)
Models, Anatomic; Ultrasonography/methods; Biomechanical Phenomena; Computer Simulation; Humans
13.
Int J Comput Assist Radiol Surg ; 14(11): 2043, 2019 11.
Article in English | MEDLINE | ID: mdl-31250254

ABSTRACT

The original version of this article unfortunately contained a mistake.

14.
Int J Comput Assist Radiol Surg ; 14(8): 1329-1339, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31161556

ABSTRACT

PURPOSE: Although ultrasound (US) images represent the most popular modality for guiding breast biopsy, malignant regions are often missed by sonography, preventing the accurate lesion localization that is essential for a successful procedure. Biomechanical models can support the localization of suspicious areas identified on a preoperative image during US scanning, since they are able to account for anatomical deformations resulting from US probe pressure. We propose a deformation model that relies on the position-based dynamics (PBD) approach to predict the displacement of internal targets induced by probe interaction during US acquisition. METHODS: The PBD implementation available in NVIDIA FleX is exploited to create an anatomical model capable of deforming online. Simulation parameters are initialized on a calibration phantom under different levels of probe-induced deformation; they are then fine-tuned by minimizing the localization error of a US-visible landmark of a realistic breast phantom. The updated model is used to estimate the displacement of other internal lesions due to probe-tissue interaction. RESULTS: The localization error obtained when applying the PBD model remains below 11 mm for all the tumors, even for input displacements on the order of 30 mm. The proposed method obtains results aligned with those of finite element models, with faster computational performance suitable for real-time applications. In addition, it outperforms the rigid model used to track lesion position in US-guided breast biopsies, at least halving the localization error for all the displacement ranges considered. CONCLUSION: The position-based dynamics approach has proven successful in modeling breast tissue deformations during US acquisition. Its stability, accuracy, and real-time performance make the model suitable for tracking lesion displacement during US-guided breast biopsy.
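At the heart of PBD solvers such as NVIDIA FleX is iterative constraint projection. A minimal standalone sketch of the classic distance-constraint step (illustrative, not FleX's actual implementation):

```python
import math

def project_distance_constraint(p1, p2, rest_length, w1=1.0, w2=1.0):
    """One PBD solver step: move two particles so their distance returns
    to rest_length, weighted by inverse masses w1 and w2."""
    d = [b - a for a, b in zip(p1, p2)]
    dist = math.sqrt(sum(x * x for x in d))
    if dist == 0.0:
        return list(p1), list(p2)  # coincident points: no defined direction
    corr = (dist - rest_length) / (dist * (w1 + w2))
    new_p1 = [a + w1 * corr * x for a, x in zip(p1, d)]
    new_p2 = [b - w2 * corr * x for b, x in zip(p2, d)]
    return new_p1, new_p2
```

Because constraints act directly on positions rather than forces, the solver stays stable under large deformations, which is consistent with the stability reported above.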


Subject(s)
Breast Neoplasms/diagnostic imaging; Breast/diagnostic imaging; Image-Guided Biopsy; Imaging, Three-Dimensional; Ultrasonography, Mammary; Algorithms; Biopsy; Calibration; Computer Simulation; Humans; Models, Anatomic; Patient Positioning; Phantoms, Imaging; Robotics; Software
15.
Med Biol Eng Comput ; 57(4): 913-924, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30483912

ABSTRACT

The modeling of breast deformations is of interest in medical applications such as image-guided biopsy and image registration for diagnostic purposes. Obtaining such information requires extracting the mechanical properties of the tissues. In this work, we propose an iterative technique based on finite element analysis that estimates the elastic modulus of realistic breast phantoms, starting from MRI images acquired in different positions (prone and supine), when deformed only by gravity. We validated the method using both a single-modality evaluation, in which we simulated the effect of gravity to generate four different configurations (prone, supine, lateral, and vertical), and a multi-modality evaluation, in which we simulated a series of changes in orientation (prone to supine). Validation is performed, respectively, on surface points and lesions using ground-truth data from MRI images, and on target lesions inside the breast phantom compared with the actual target segmented from the US image. The use of pre-operative images is currently limited to diagnostic purposes. Using our method, we can compute patient-specific mechanical properties that allow deformations to be compensated. Graphical abstract: workflow of the proposed method and comparative results of the prone-to-supine simulation (red volumes), validated using MRI data (blue volumes).
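The iterative estimation described above can be sketched as a one-dimensional search over the elastic modulus, assuming the simulated displacement decreases monotonically with stiffness; `simulate_displacement` is a hypothetical stand-in for the paper's finite element solve:

```python
def estimate_elastic_modulus(simulate_displacement, observed, lo, hi, tol=1e-6):
    """Bisect on Young's modulus E until the simulated gravity-induced
    displacement matches the observed one. Assumes displacement
    decreases monotonically with E (stiffer tissue deforms less)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if simulate_displacement(mid) > observed:
            lo = mid  # too soft: deforms more than observed, raise E
        else:
            hi = mid
    return 0.5 * (lo + hi)
```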


Subject(s)
Computer Simulation; Elasticity; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Ultrasonography; Female; Finite Element Analysis; Humans; Models, Biological; Phantoms, Imaging
16.
Front Robot AI ; 6: 55, 2019.
Article in English | MEDLINE | ID: mdl-33501070

ABSTRACT

The integration of intra-operative sensors into surgical robots is a hot research topic, since it can significantly facilitate complex surgical procedures by enhancing surgical awareness with real-time tissue information. However, currently available intra-operative sensing technologies are mainly based on image processing and force feedback, which normally require heavy computation or complicated hardware modifications of existing surgical tools. This paper presents the design and integration of electrical bio-impedance sensing into a commercial surgical robot tool, leading to a novel smart instrument that allows the identification of tissues by simply touching them. In addition, an advanced user interface is designed to provide guidance during the use of the system and to allow augmented-reality visualization of the tissue identification results. The proposed system requires only minor hardware modifications to an existing surgical tool, yet adds the capability to provide a wealth of data about the tissue being manipulated. This has great potential to allow the surgeon (or an autonomous robotic system) to better understand the surgical environment. To evaluate the system, a series of ex vivo experiments was conducted. The experimental results demonstrate that the proposed sensing system can successfully identify different tissue types with 100% classification accuracy. In addition, the user interface was shown to effectively and intuitively guide the user in measuring the electrical impedance of the target tissue, presenting the identification results as augmented-reality markers for simple and immediate recognition.

17.
Int J Comput Assist Radiol Surg ; 13(10): 1641-1650, 2018 Oct.
Article in English | MEDLINE | ID: mdl-29869320

ABSTRACT

PURPOSE: Patient-specific biomedical modeling of the breast is of interest for medical applications such as image registration, image-guided procedures, and alignment for biopsy or surgery purposes. The computation of elastic properties is essential to simulate deformations in a realistic way. This study presents an innovative analytical method to compute the elastic modulus and evaluate the elasticity of a breast using magnetic resonance imaging (MRI) of breast phantoms. METHODS: An analytical method for elasticity computation was developed and subsequently validated on a series of geometric shapes and on four physical breast phantoms supported by a planar frame. The method can compute the elasticity of a shape directly from a set of MRI scans. For comparison, elasticity values were also computed numerically using two different simulation software packages. RESULTS: Application of the different methods to the geometric shapes shows that the analytically derived elongation differs from simulated elongation by less than 9% for cylindrical shapes, and by up to 18% for other shapes that are also substantially vertically supported by a planar base. For the four physical breast phantoms, the analytically derived elasticity differs from the numerical elasticity by 18% on average, which is in accordance with the difference in elongation estimation for the geometric shapes. The analytical method has been shown to be multiple orders of magnitude faster than the numerical methods. CONCLUSION: The analytical elasticity computation method has good potential to supplement or replace numerical elasticity simulations for gravity-induced deformations of shapes that are substantially supported by a planar base perpendicular to the gravitational field. The error is manageable, while the calculation procedure takes less than one second, as opposed to multiple minutes with numerical methods.
The results will be used in the MRI and Ultrasound Robotic Assisted Biopsy (MURAB) project.
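For the cylindrical case mentioned above, a closed-form self-weight elongation of the kind an analytical method can exploit is delta = rho * g * L^2 / (2E) for a uniform vertical column; this sketch is illustrative and not the paper's exact formulation:

```python
def self_weight_elongation(rho, g, length, E):
    """Elongation of a uniform vertical column under its own weight:
    delta = rho * g * L**2 / (2 * E).

    rho: density [kg/m^3], g: gravity [m/s^2],
    length: column height [m], E: Young's modulus [Pa].
    """
    return rho * g * length ** 2 / (2.0 * E)
```

Evaluating such a closed form takes microseconds, which illustrates why the analytical route is orders of magnitude faster than a full numerical simulation.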


Subject(s)
Breast Neoplasms/diagnostic imaging; Breast/diagnostic imaging; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Phantoms, Imaging; Robotic Surgical Procedures; Algorithms; Biopsy; Calibration; Computer Simulation; Diagnosis, Computer-Assisted; Elasticity; Female; Finite Element Analysis; Humans; Imaging, Three-Dimensional; Models, Statistical; Pattern Recognition, Automated; Ultrasonography
18.
J Vis Surg ; 3: 23, 2017.
Article in English | MEDLINE | ID: mdl-29078586

ABSTRACT

Comparing the training developments achieved in aviation with those achieved in surgery highlights the effort still required to define shared and validated training curricula for surgeons. This work focuses on robot-assisted surgery and the related training systems, analyzing current approaches to surgical training based on virtual environments. The limits of current simulation technology are highlighted, and the systems currently on the market are compared in terms of their mechanical design and the characteristics of the virtual environments they offer. In particular, the analysis focuses on the level of realism, both graphical and physical, and on the set of training tasks proposed. Multimedia material is provided to support the analysis and to highlight the differences between the simulations and the approaches to training. From this analysis it is clear that, although there are several training systems on the market, some supported by substantial scientific literature demonstrating their validity, there is no consensus on the tasks to include in a training curriculum or on the level of realism required of virtual environments for them to be useful.

19.
Int J Comput Assist Radiol Surg ; 10(6): 843-54, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25930712

ABSTRACT

PURPOSE: Detection of feature points in medical ultrasound (US) images is the starting point of many clinical tasks, such as segmentation of lesions in pathological areas, estimation of organ deformation, and multimodality image fusion. However, obtaining reliable feature point localization is a complex task even for an expert radiologist, due to the characteristics of US images: a strong presence of noise, insidious artifacts, and low contrast. In this work, we describe a feature detector based on phase congruency (PhC) combined with a binary pattern descriptor. METHODS: We introduce a feature detector specifically designed for US images and based on PhC analysis. We also introduce a descriptor based on the local binary pattern (LBP) operator to improve and simplify the matching between feature points extracted from different images. LBP is not applied directly to the intensity values; instead, it is applied to the PhC output obtained during the detection step, to improve robustness to intensity transformations and noise rejection. RESULTS: We tested the proposed approach against state-of-the-art methods on real US images subject to realistic synthetic transformations. The results of the proposed method, in terms of accuracy and precision, outperform the state-of-the-art approaches that are not designed for US data. CONCLUSIONS: The methods described in this work will enable the development of US-based navigation systems that support automatic feature point detection and matching from US images acquired at different times during the procedure.
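The LBP operator applied in the descriptor step can be sketched as follows, here on a plain 2-D array standing in for the PhC output (illustrative implementation, not the paper's code):

```python
def lbp_code(img, r, c):
    """8-neighbour Local Binary Pattern code of pixel (r, c): each
    neighbour whose value is >= the centre contributes one bit to an
    8-bit code, making the code invariant to monotonic intensity
    changes."""
    center = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code
```

Applying this to the PhC map rather than raw intensities, as described above, inherits PhC's robustness to contrast changes while keeping the descriptor compact.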


Subject(s)
Image Processing, Computer-Assisted/methods; Ultrasonography/methods; Algorithms; Humans