ABSTRACT
Robot-assisted minimally invasive surgery is the gold standard for the surgical treatment of many pathological conditions, since it guarantees patients shorter hospital stays and quicker recovery. Several manuals and academic papers describe how to perform these interventions and thus contain important domain-specific knowledge. This information, if automatically extracted and processed, could be used to extract or summarize surgical practices, or to develop decision-making systems that help surgeons and nurses optimize patient management before, during, and after surgery by providing theoretically grounded suggestions. However, general-English natural language understanding algorithms suffer from lower efficacy and coverage when applied to domains other than the ones they are typically trained on, and a domain-specific annotated textual corpus has been missing. To overcome this problem, we annotated the first robotic-surgery procedural corpus with PropBank-style semantic labels. Starting from the original PropBank framebank, we enriched it by adding the new lemmas, frames, and semantic arguments required to cover information missing in general English but needed in procedural surgical language, releasing the Robotic-Surgery Procedural Framebank (RSPF). We then collected verbatim sentences from robotic-surgery textbooks, for a total of 32,448 tokens, and annotated them with RSPF labels. We thus obtained and publicly released the first annotated corpus of the robotic-surgery domain, which can be used to foster further research on language understanding and on the extraction of procedural entities and relations from clinical and surgical scientific literature.
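As a rough illustration of the annotation scheme described above, a PropBank-style frame for a surgical sentence might be represented as follows. The sentence, the predicate sense, and the argument labels are invented examples for illustration only, not actual RSPF entries:

```python
# Illustrative only: a PropBank-style annotation of one surgical sentence.
# The sense label and argument spans below are hypothetical examples,
# not real RSPF frame definitions.
sentence = "Divide the renal artery with a stapler."

annotation = {
    "predicate": {"lemma": "divide", "sense": "divide.01"},
    "arguments": [
        {"label": "ARG1", "span": "the renal artery"},  # the thing divided
        {"label": "ARG2", "span": "with a stapler"},    # the instrument
    ],
}

def to_labels(annotation):
    """Flatten the frame into (label, span) pairs, as used for training data."""
    return [(a["label"], a["span"]) for a in annotation["arguments"]]

print(to_labels(annotation))
```

A sequence labeler trained on such data learns to recover the predicate sense and argument spans from raw text.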
ABSTRACT
Surgical robots have been widely adopted, with over 4000 robots in daily use. However, these are telerobots that are fully controlled by skilled human surgeons. Introducing "surgeon-assist" functions, that is, some forms of autonomy, has the potential to reduce tedium and increase consistency, analogous to driver-assist functions for lane keeping, cruise control, and parking. This article examines the scientific and technical background of robotic autonomy in surgery and some of its ethical, social, and legal implications. We describe several surgical tasks that have been automated in laboratory settings, as well as research concepts and trends.
ABSTRACT
Clinical laboratory-based nucleic acid amplification tests (NAT) play an important role in diagnosing viral infections. However, laboratory infrastructure requirements and their failure to diagnose at the point-of-need (PON) limit their clinical utility in both resource-rich and -limited clinical settings. The development of fast and sensitive PON viral NATs may overcome these limitations. The scalability of silicon microchip manufacturing combined with advances in silicon microfluidics presents an opportunity for the development of rapid and sensitive PON NATs on silicon microchips. In the present study, we demonstrate rapid and sensitive NATs for a number of RNA and DNA viruses on the same silicon microchip platform. We first developed sensitive (4 copies per reaction) one-step RT-qPCR and qPCR assays detecting HCV, HIV, Zika, HPV 16, and HPV 18 on a benchtop real-time PCR instrument. A silicon microchip was designed with an etched 1.3 µL meandering microreactor, integrated aluminum heaters, thermal insulation trenches, and microfluidic channels; this chip was used in all on-chip experiments. Melting curve analysis confirmed precise and localized heating of the microreactor. Following minimal optimization of reaction conditions, the bench-scale assays were successfully transferred to 1.3 µL silicon microreactors with reaction times of 25 min and no reduction in sensitivity, reproducibility, or reaction efficiency. Taken together, these results demonstrate that rapid and sensitive detection of multiple viruses on the same silicon microchip platform is feasible. Further development of this technology, coupled with silicon microchip-based nucleic acid extraction solutions, could potentially shift viral nucleic acid detection and diagnosis from centralized clinical laboratories to the PON.
Subjects
DNA, Viral/analysis , Microfluidic Analytical Techniques , RNA, Viral/analysis , Silicon , Nucleic Acid Amplification Techniques , Reproducibility of Results , Sensitivity and Specificity
ABSTRACT
BACKGROUND: Combining the strengths of surgical robotics and minimally invasive surgery (MIS) holds the potential to revolutionize surgical interventions. The advantages of MIS for patients are obvious, but the use of instrumentation suitable for MIS often translates into limited surgeon capabilities (eg, reduced dexterity and maneuverability and demanding navigation around organs). To overcome these shortcomings, the application of soft robotics technologies and approaches can be beneficial. Devices based on soft materials are already demonstrating several advantages in all application areas where dexterity and safe interaction are needed. In this article, the authors demonstrate that soft robotics can be used synergistically with traditional rigid tools to improve the capabilities of the robotic system without affecting the usability of the robotic platform. MATERIALS AND METHODS: A bioinspired soft manipulator equipped with a miniaturized camera was integrated with the Endoscopic Camera Manipulator arm of the da Vinci Research Kit, from both hardware and software viewpoints. Usability of the integrated system was evaluated with nonexpert users through a standard protocol to highlight difficulties in controlling the soft manipulator. RESULTS AND CONCLUSION: This is the first time that an endoscopic tool based on soft materials has been integrated into a surgical robot. The soft endoscopic camera can be easily operated through the da Vinci Research Kit master console, increasing the workspace and dexterity without limiting intuitive and friendly use.
Subjects
Endoscopes , Endoscopy/education , Endoscopy/instrumentation , Robotic Surgical Procedures/education , Robotic Surgical Procedures/instrumentation , Adult , Equipment Design , Female , Humans , Male , Task Performance and Analysis , Young Adult
ABSTRACT
In this paper, we present a low-cost, adaptable, and flexible pressure sensor that can be applied as a smart skin over both stiff and deformable media. The sensor can be easily adapted for use in applications related to robotics, rehabilitation, or consumer electronic devices. In order to remove most of the stiff components that limit the flexibility of the sensor, we based its sensing capability on a tomographic technique known as Electrical Impedance Tomography. The technique infers the internal structure of the domain under study by reconstructing its conductivity map. By applying the technique to a material whose resistivity changes according to the applied forces, it is possible to identify these changes and then localise the area where the force was applied. We tested the system on flat and curved surfaces. For all configurations, we evaluated the artificial skin's ability to detect forces applied at a single point, forces applied at multiple points, and changes in the underlying geometry. The results are promising and open the way for the application of such sensors in different robotic contexts where deformability is the key requirement.
Subjects
Electric Impedance , Tomography/methods , Wearable Electronic Devices
ABSTRACT
PURPOSE: Most studies on surgical activity recognition utilizing artificial intelligence (AI) have focused mainly on recognizing one type of activity from small and mono-centric surgical video datasets. It remains speculative whether those models would generalize to other centers. METHODS: In this work, we introduce a large multi-centric multi-activity dataset consisting of 140 surgical videos (MultiBypass140) of laparoscopic Roux-en-Y gastric bypass (LRYGB) surgeries performed at two medical centers, i.e., the University Hospital of Strasbourg, France (StrasBypass70) and Inselspital, Bern University Hospital, Switzerland (BernBypass70). The dataset has been fully annotated with phases and steps by two board-certified surgeons. Furthermore, we assess the generalizability of, and benchmark, different deep learning models for the task of phase and step recognition in 7 experimental studies: (1) Training and evaluation on BernBypass70; (2) Training and evaluation on StrasBypass70; (3) Training and evaluation on the joint MultiBypass140 dataset; (4) Training on BernBypass70, evaluation on StrasBypass70; (5) Training on StrasBypass70, evaluation on BernBypass70; (6) Training on MultiBypass140, evaluation on BernBypass70; and (7) Training on MultiBypass140, evaluation on StrasBypass70. RESULTS: Model performance is markedly influenced by the training data. The worst results were obtained in experiments (4) and (5), confirming the limited generalization capabilities of models trained on mono-centric data. The use of multi-centric training data, experiments (6) and (7), improves the generalization capabilities of the models, bringing them beyond the level of independent mono-centric training and validation (experiments (1) and (2)). CONCLUSION: MultiBypass140 shows considerable variation in surgical technique and workflow of LRYGB procedures between centers. Therefore, generalization experiments demonstrate a remarkable difference in model performance.
These results highlight the importance of multi-centric datasets for AI model generalization to account for variance in surgical technique and workflows. The dataset and code are publicly available at https://github.com/CAMMA-public/MultiBypass140.
ABSTRACT
The automatic extraction of procedural surgical knowledge from surgery manuals, academic papers, or other high-quality textual resources is of the utmost importance to develop knowledge-based clinical decision support systems, to automatically execute some of a procedure's steps, or to summarize the procedural information spread throughout the texts in a structured form usable as a study resource by medical students. In this work, we propose a first benchmark on extracting detailed surgical actions from available intervention procedure textbooks and papers. We frame the problem as a Semantic Role Labeling task. Exploiting a manually annotated dataset, we apply different Transformer-based information extraction methods. Starting from the RoBERTa and BioMedRoBERTa pre-trained language models, we first investigate a zero-shot scenario and compare the obtained results with a full fine-tuning setting. We then introduce a new ad hoc surgical language model, named SurgicBERTa, pre-trained on a large collection of surgical materials, and compare it with the previous ones. In the assessment, we explore different dataset splits (one in-domain and two out-of-domain) and also investigate the effectiveness of the approach in a few-shot learning scenario. Performance is evaluated on three correlated sub-tasks: predicate disambiguation, semantic argument disambiguation, and predicate-argument disambiguation. Results show that fine-tuning a pre-trained domain-specific language model achieves the highest performance on all splits and all sub-tasks. All models are publicly released.
Subjects
Information Storage and Retrieval , Natural Language Processing , Humans , Semantics , Language
ABSTRACT
PURPOSE: Automatic recognition of surgical activities from intraoperative surgical videos is crucial for developing intelligent support systems for computer-assisted interventions. Current state-of-the-art recognition methods are based on deep learning, where data augmentation has shown the potential to improve the generalization of these methods. This has spurred work on automated and simplified augmentation strategies for image classification and object detection on datasets of still images. Extending such augmentation methods to videos is not straightforward, as the temporal dimension needs to be considered. Furthermore, surgical videos pose additional challenges as they are composed of multiple, interconnected, and long-duration activities. METHODS: This work proposes a new simplified augmentation method, called TRandAugment, specifically designed for long surgical videos, that treats each video as an assembly of temporal segments and applies consistent but random transformations to each segment. The proposed augmentation method is used to train an end-to-end spatiotemporal model consisting of a CNN (ResNet50) followed by a TCN. RESULTS: The effectiveness of the proposed method is demonstrated on two surgical video datasets, namely Bypass40 and CATARACTS, and two tasks, surgical phase and step recognition. TRandAugment adds a performance boost of 1-6% over previous state-of-the-art methods, which use manually designed augmentations. CONCLUSION: This work presents a simplified and automated augmentation method for long surgical videos. The proposed method has been validated on different datasets and tasks, indicating the importance of devising temporal augmentation methods for long surgical videos.
Subjects
Cataract Extraction , Neural Networks, Computer , Humans , Algorithms , Cataract Extraction/methods
ABSTRACT
Automatic recognition of fine-grained surgical activities, called steps, is a challenging but crucial task for intelligent intra-operative computer assistance. The development of current vision-based activity recognition methods relies heavily on a high volume of manually annotated data. This data is difficult and time-consuming to generate and requires domain-specific knowledge. In this work, we propose to use coarser and easier-to-annotate activity labels, namely phases, as weak supervision to learn step recognition with fewer step annotated videos. We introduce a step-phase dependency loss to exploit the weak supervision signal. We then employ a Single-Stage Temporal Convolutional Network (SS-TCN) with a ResNet-50 backbone, trained in an end-to-end fashion from weakly annotated videos, for temporal activity segmentation and recognition. We extensively evaluate and show the effectiveness of the proposed method on a large video dataset consisting of 40 laparoscopic gastric bypass procedures and the public benchmark CATARACTS containing 50 cataract surgeries.
Subjects
Neural Networks, Computer , Surgery, Computer-Assisted
ABSTRACT
Objective: During nerve-sparing robot-assisted radical prostatectomy (RARP), bipolar electrocoagulation is often used, but its use is controversial because of possible thermal damage to the neurovascular bundles. The aim of the study was to evaluate the spatial-temporal thermal distribution in the tissue and its correlation with electrosurgery-induced tissue damage in a controlled, CO2-rich environment modelling laparoscopy conditions. Methods: We manufactured a sealed plexiglass chamber (SPC) equipped with sensors to experimentally reproduce the environmental conditions of pneumoperitoneum during RARP. In 64 pig musculofascial tissue (PMT) samples of approximately 3 cm × 3 cm × 2 cm, we evaluated the spatial-temporal thermal distribution in the tissue and its correlation with the electrosurgery-induced tissue damage. Critical heat spread of bipolar cauterizing during the surgical procedure was assessed by employing a compact thermal camera (C2) with a small core sensor (60 × 80 microbolometer array, sensitive in the 7-14 µm range). Results: Bipolar instruments used at 30 W showed a thermal spread area of 18 mm² when applied for 2 s and 28 mm² when applied for 4 s. At 60 W, bipolar instruments showed a mean thermal spread area of 19 mm² when applied for 2 s and 21 mm² when applied for 4 s. Finally, histopathological analysis showed that thermal damage is distributed predominantly on the surface rather than in depth. Conclusions: These results are valuable for defining an accurate use of bipolar cautery during nerve-sparing RARP. The study also demonstrates the feasibility of using miniaturized thermal sensors, thus opening the way to the design of thermal endoscopic devices for robotic use.
ABSTRACT
This study aims to evaluate the abdominal aortic atherosclerotic plaque index (API)'s predictive role in patients with pre-operatively or post-operatively developed chronic kidney disease (CKD) treated with robot-assisted partial nephrectomy (RAPN) for renal cell carcinoma (RCC). One hundred and eighty-three patients (134 with no pre- and post-operative CKD (no CKD) and 49 with persistent or post-operative CKD development (post-op CKD)) who underwent RAPN between January 2019 and January 2022 were deemed eligible for the analysis. The API was calculated using dedicated software by assessing the ratio between the CT scan atherosclerotic plaque volume and the abdominal aortic volume. The ROC regression model demonstrated the influence of API on CKD development, with an increasing effect according to its value (coefficient 0.13; 95% CI 0.04-0.23; p = 0.006). The Model 1 multivariable analysis of the predictors of post-op CKD found that the following are independently associated with post-op CKD: Charlson Comorbidity Index (OR 1.31; p = 0.01), last follow-up (FU) Δ%eGFR (OR 0.95; p < 0.01), and API ≥ 10 (OR 25.4; p = 0.01). Model 2 showed API ≥ 10 as the only factor associated with CKD development (OR 25.2; p = 0.04). The median follow-up was 22 months. Our results demonstrate API to be a strong predictor of post-operative CKD, allowing the surgeon to tailor the best treatment for each patient, especially in those who might be at higher risk of CKD.
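The API described above is a volume ratio derived from CT segmentations; a minimal sketch of the computation follows, expressed as a percentage so that a cut-off like "API ≥ 10" is meaningful (that reading, and the example volumes, are assumptions for illustration, not values from the study):

```python
# Sketch of the abdominal aortic atherosclerotic plaque index (API):
# the ratio of CT-derived plaque volume to abdominal aortic volume,
# expressed here as a percentage (assumption, consistent with the
# "API >= 10" cut-off above). Volumes are hypothetical (mL).
def api(plaque_volume_ml, aortic_volume_ml):
    """Return the plaque index as a percentage of the aortic volume."""
    return 100.0 * plaque_volume_ml / aortic_volume_ml

plaque, aorta = 12.5, 98.0           # hypothetical segmented volumes
index = api(plaque, aorta)
high_risk = index >= 10              # threshold associated with post-op CKD
print(round(index, 1), high_risk)
```

In the study's multivariable model, crossing the cut-off computed this way was the strongest independent predictor of post-operative CKD.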
ABSTRACT
We report the implementation of a thermal endoscope based on the Lepton LWIR camera core and custom miniaturized electronics. The sensor and the PCB can be inserted into a cylindrical protective case with a diameter down to 15 mm, either a stainless-steel (inox) tube or a plastic, 3D-printable envelope, with an optical window in germanium. Two PCBs were developed for assembling the endoscope in two different schemes, enabling frontal or lateral thermal vision. The thermal endoscope unit is controlled by an external Raspberry Pi unit. The Infrared Vision Software is provided for controlling the acquisition of thermal frames and for the thermographic calculation of the object temperature from input parameters describing the object's surface emissivity and the environment. In general, the device enables thermography in applications in which traditional, larger equipment cannot be employed, such as nondestructive diagnostics in confined spaces in the engineering field. The thermal endoscope was also designed with dimensions compatible with robotic-assisted and traditional minimally invasive surgery.
ABSTRACT
In Robot-Assisted Minimally Invasive Surgery, discriminating critical subsurface structures is essential to make the surgical procedure safer and more efficient. In this paper, a novel robot-assisted electrical bio-impedance scanning (RAEIS) system is developed and validated through a series of experiments. The proposed system constructs a tri-polar sensing configuration for tissue homogeneity inspection. Specifically, two robotic forceps are used as electrodes for applying electric current and measuring reciprocal voltages relative to a ground electrode placed distal to the measurement site. Compared to existing electrical bio-impedance sensing technology, the proposed system can use miniaturized electrodes to measure a site flexibly, with enhanced subsurface detection capability. This paper presents the concept, the modeling of the sensing method, the hardware design, and the system calibration. Subsequently, a series of experiments are conducted for system evaluation, including finite element simulation, saline solution bath experiments, and experiments on ex vivo animal tissues. The experimental results demonstrate that the proposed system can measure the resistivity of the material with high accuracy and detect a subsurface non-homogeneous object with a 100% success rate. The proposed parameter estimation algorithm can approximate the resistivity and the depth of the subsurface object effectively with a single fast scan.
Subjects
Robotics , Algorithms , Animals , Calibration , Electric Impedance , Minimally Invasive Surgical Procedures
ABSTRACT
Optical Coherence Tomography (OCT) is increasingly used in endoluminal procedures since it provides high-speed, high-resolution imaging. Distortion and instability of images obtained with a proximal scanning endoscopic OCT system are significant, due to motor rotation irregularity, friction between the rotating probe and the outer sheath, and synchronization issues. Online compensation of artifacts is essential to ensure image quality suitable for real-time assistance during diagnosis or minimally invasive treatment. In this paper, we propose a new online correction method to tackle B-scan distortion, video-stream shaking, and the drift problem of endoscopic OCT linked to A-line-level image shifting. The proposed computational approach for OCT scanning video correction integrates a Convolutional Neural Network (CNN) to improve the estimation of the azimuthal shifting of each A-line. To suppress the cumulative error of integral estimation, we also introduce another CNN branch to estimate a dynamic overall orientation angle. We train the network with semi-synthetic OCT videos by intentionally adding rotational distortion to real OCT scanning images. The results show that networks trained on this semi-synthetic data generalize to stabilize real OCT videos, and the algorithm's efficacy is demonstrated on both ex vivo and in vivo data, where strong scanning artifacts are successfully corrected.
Subjects
Deep Learning , Tomography, Optical Coherence , Algorithms , Artifacts , Humans , Neural Networks, Computer , Tomography, Optical Coherence/methods
ABSTRACT
To assess differences between laparoscopic hysterectomy performed with or without robot assistance, we performed meta-analyses of 5 key indices strongly associated with societal and hospital costs, patient safety, and intervention quality. The 5 indices were estimated blood loss (EBL), operative time, number of conversions to laparotomy, hospital length of stay (LOS), and number of postoperative complications. A search of the PubMed, Medline, Embase, and Science Citation Index online databases yielded a total of 605 studies. After a systematic review, we proceeded with meta-analysis of 14 articles for EBL, with a summary effect of -0.61 (95% confidence interval [CI], -42.42 to 46.20); 20 for operative time, with a summary effect of 0.66 (95% CI, -15.72 to 17.04); 17 for LOS, with a summary effect of -0.43 (95% CI, -0.68 to -0.17); 15 for conversion to laparotomy (odds ratio, 0.50; 95% CI, 0.31 to 0.79 with a random model); and 14 for postoperative complications (odds ratio, 0.69; 95% CI, 0.43 to 1.09 with a random model). In conclusion, compared with traditional laparoscopic hysterectomy, robot-assisted laparoscopic hysterectomy was associated with shorter LOS and fewer postoperative complications and conversions to laparotomy; there were no differences in EBL and operative time. These results confirm that robot-assisted laparoscopy has a less deleterious effect on hospitals, society, and patient stress, and leads to better intervention quality.
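The "random model" odds ratios above come from random-effects pooling across studies. A minimal sketch of one common variant, the DerSimonian-Laird estimator on log odds ratios, is shown below; the study counts are invented for illustration and are not data from the cited meta-analysis:

```python
import math

# Hedged sketch of a random-effects (DerSimonian-Laird) pooled odds ratio.
# Each tuple: (events_robot, n_robot, events_laparoscopy, n_laparoscopy).
# Counts are hypothetical examples, not the meta-analysis data.
studies = [(4, 120, 9, 115), (2, 80, 6, 78), (5, 150, 10, 140)]

def pooled_or(studies, z=1.96):
    ys, vs = [], []
    for a, n1, c, n2 in studies:
        b, d = n1 - a, n2 - c
        ys.append(math.log((a * d) / (b * c)))   # per-study log odds ratio
        vs.append(1/a + 1/b + 1/c + 1/d)         # its sampling variance
    w = [1/v for v in vs]                        # fixed-effect weights
    ybar = sum(wi*yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi*(yi - ybar)**2 for wi, yi in zip(w, ys))  # heterogeneity Q
    k = len(studies)
    tau2 = max(0.0, (q - (k-1)) / (sum(w) - sum(wi**2 for wi in w)/sum(w)))
    wstar = [1/(v + tau2) for v in vs]           # random-effects weights
    mu = sum(wi*yi for wi, yi in zip(wstar, ys)) / sum(wstar)
    se = 1 / math.sqrt(sum(wstar))
    return math.exp(mu), math.exp(mu - z*se), math.exp(mu + z*se)

or_, lo, hi = pooled_or(studies)
print(f"OR={or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An odds ratio below 1 with a confidence interval excluding 1, as for conversion to laparotomy above, indicates a significant advantage for the robot-assisted arm.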
Subjects
Hysterectomy/methods , Laparoscopy/methods , Robotics , Surgery, Computer-Assisted/methods , Female , Humans , Hysterectomy/instrumentation , Treatment Outcome
ABSTRACT
Electrical impedance tomography (EIT) is an important medical imaging approach for noninvasively revealing the electrical characteristics and homogeneity of a tissue region. Recently, this technology has been introduced into Robot-Assisted Minimally Invasive Surgery (RAMIS) to assist the detection of surgical margins, with relevant clinical benefits. Nevertheless, most EIT technologies are based on a fixed multiple-electrode probe, which significantly limits sensing flexibility and capability. In this study, we present a method for acquiring EIT measurements during a RAMIS procedure using two existing robotic forceps as electrodes. The robot moves the forceps tips through a series of predefined positions for injecting excitation current and measuring electric potentials. Given the relative positions of the electrodes and the measured electric potentials, the spatial distribution of electrical conductivity in a section view can be reconstructed. Realistic experiments are designed and conducted to simulate two tasks: subsurface abnormal tissue detection and surgical margin localization. According to the reconstructed images, the system is demonstrated to display the location of the abnormal tissue and the contrast of the tissues' conductivity with an accuracy suitable for clinical applications.
Subjects
Robotics , Tomography , Electric Conductivity , Electric Impedance , Tomography, X-Ray Computed
ABSTRACT
PURPOSE: The automatic extraction of knowledge about intervention execution from surgical manuals would be of the utmost importance to develop expert surgical systems and assistants. In this work we assess the feasibility of automatically identifying the sentences of a surgical intervention text that contain procedural information, a subtask of the broader goal of extracting intervention workflows from surgical manuals. METHODS: We frame the problem as a binary classification task. We first introduce a new public dataset of 1958 sentences from robotic surgery texts, manually annotated as procedural or non-procedural. We then apply different classification methods, from classical machine learning algorithms to more recent neural-network approaches and classification methods exploiting transformers (e.g., BERT, ClinicalBERT). We also analyze the benefits of applying balancing techniques to the dataset. RESULTS: The architectures based on neural networks fed with FastText embeddings and the one based on ClinicalBERT outperform all the other tested methods, empirically confirming the feasibility of the task. Adopting balancing techniques does not lead to substantial improvements in classification. CONCLUSION: This is the first work experimenting with machine/deep learning algorithms for automatically identifying procedural sentences in surgical texts. It also introduces the first public dataset that can be used for benchmarking different classification methods for the task.
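The binary classification task above can be illustrated with a far simpler baseline than the neural models benchmarked in the paper: a bag-of-words Naive Bayes classifier over a handful of made-up training sentences. Everything below (sentences, tokens) is a hypothetical sketch, not the actual dataset:

```python
from collections import Counter
import math

# Minimal bag-of-words Naive Bayes baseline for procedural (1) vs
# non-procedural (0) sentence classification. Training sentences are
# invented examples, not the public dataset described above.
train = [
    ("insert the trocar under direct vision", 1),
    ("dissect the plane between bladder and prostate", 1),
    ("ligate the vessel with a clip", 1),
    ("robotic surgery reduces hospital stay", 0),
    ("the da vinci system is widely adopted", 0),
    ("complications are rare in expert hands", 0),
]

def fit(data):
    counts = {0: Counter(), 1: Counter()}
    priors = Counter(label for _, label in data)
    for text, label in data:
        counts[label].update(text.split())
    vocab = set(counts[0]) | set(counts[1])
    return counts, priors, vocab

def predict(text, counts, priors, vocab):
    scores = {}
    for label in (0, 1):
        total = sum(counts[label].values())
        score = math.log(priors[label] / sum(priors.values()))
        for tok in text.split():
            # Laplace-smoothed token likelihood
            score += math.log((counts[label][tok] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

model = fit(train)
print(predict("ligate the renal vessel", *model))  # expected: 1 (procedural)
```

Such lexical baselines are exactly what the transformer-based methods in the paper are compared against.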
Subjects
Algorithms , Machine Learning , Neural Networks, Computer , Robotic Surgical Procedures/methods , Humans
ABSTRACT
PURPOSE: Automatic segmentation and classification of surgical activity is crucial for providing advanced support in computer-assisted interventions and autonomous functionalities in robot-assisted surgeries. Prior works have focused on recognizing either coarse activities, such as phases, or fine-grained activities, such as gestures. This work aims at jointly recognizing two complementary levels of granularity directly from videos, namely phases and steps. METHODS: We introduce two correlated surgical activities, phases and steps, for the laparoscopic gastric bypass procedure. We propose a multi-task multi-stage temporal convolutional network (MTMS-TCN) along with a multi-task convolutional neural network (CNN) training setup to jointly predict the phases and steps and benefit from their complementarity to better evaluate the execution of the procedure. We evaluate the proposed method on a large video dataset consisting of 40 surgical procedures (Bypass40). RESULTS: We present experimental results from several baseline models for both phase and step recognition on Bypass40. The proposed MTMS-TCN method outperforms single-task methods in both phase and step recognition by 1-2% in accuracy, precision, and recall. Furthermore, for step recognition, MTMS-TCN outperforms LSTM-based models by 3-6% on all metrics. CONCLUSION: In this work, we present a multi-task multi-stage temporal convolutional network for surgical activity recognition, which shows improved results compared to single-task models on a gastric bypass dataset with multi-level annotations. The proposed method shows that the joint modeling of phases and steps is beneficial to improve the overall recognition of each type of activity.
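The accuracy, precision, and recall figures reported above are standard frame-level metrics over a sequence of predicted activity labels. A minimal sketch of their computation on toy phase IDs (invented, not Bypass40 annotations) follows:

```python
# Frame-level accuracy, plus per-class precision and recall, for a
# multi-class activity recognition output. Labels are toy phase IDs.
def frame_metrics(y_true, y_pred, positive):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return acc, prec, rec

y_true = [0, 0, 1, 1, 1, 2, 2, 2]   # ground-truth phase per frame
y_pred = [0, 1, 1, 1, 2, 2, 2, 2]   # model prediction per frame
print(frame_metrics(y_true, y_pred, positive=1))
```

Multi-class scores such as those in the abstract are then obtained by averaging the per-class precision and recall over all phases or steps.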
Subjects
Gastric Bypass/methods , Laparoscopy/methods , Neural Networks, Computer , Robotic Surgical Procedures/methods , Humans
ABSTRACT
PURPOSE: We present the validation of PROST, a robotic device for prostate biopsy. PROST is designed to minimize human error by introducing some autonomy into the execution of the key steps of the procedure, i.e., target selection, image fusion, and needle positioning. The robot allows a targeted biopsy to be executed under ultrasound (US) guidance fused with magnetic resonance (MR) images, on which the target was defined. METHODS: PROST is a parallel robot with 4 degrees of freedom (DOF) to orient the needle and 1 DOF to rotate the US probe. We reached a calibration error of less than 2 mm, computed as the difference between the needle position in robot coordinates and in the US image. The autonomy of the robot is provided by the image analysis software, which employs deep learning techniques, the integrated image fusion algorithms, and the automatic computation of the needle trajectory. For safety reasons, the insertion of the needle is assigned to the doctor. RESULTS: System performance was evaluated in terms of positioning accuracy. Tests were performed on a 3D-printed object with nine 2-mm spherical targets and on a commercial anatomical phantom that simulates the human prostate with three lesions and the surrounding structures. The average accuracy reached in the laboratory experiments was [Formula: see text] in the first test and [Formula: see text] in the second test. CONCLUSIONS: We introduced a first prototype of a prostate biopsy robot that has the potential to increase the detection of clinically significant prostate cancer and, by including some level of autonomy, to simplify the procedure, reduce human errors, and shorten training time. The use of a robot for prostate biopsy will also create the possibility of delivering a treatment, such as focal ablation, through the same system.
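A point-target positioning accuracy like the one evaluated above is typically the Euclidean distance between the planned target and the reached needle-tip position. A minimal sketch, with hypothetical coordinates (not measurements from the PROST experiments):

```python
import math

# Positioning error as the Euclidean distance between a planned target
# and the measured needle-tip position. Coordinates are hypothetical (mm).
def positioning_error(target, reached):
    return math.dist(target, reached)

target = (10.0, 24.0, 5.0)       # planned target in robot coordinates (mm)
reached = (10.8, 23.4, 5.5)      # measured needle-tip position (mm)
print(round(positioning_error(target, reached), 2))
```

Averaging this error over repeated insertions on the spherical targets gives the accuracy figures of the kind reported in the RESULTS section.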
Subjects
Image Processing, Computer-Assisted/methods , Image-Guided Biopsy/methods , Prostatic Neoplasms/diagnosis , Robotics/methods , Software , Needle Biopsy/methods , Humans , Magnetic Resonance Imaging/methods , Male , Phantoms, Imaging , Pilot Projects , Ultrasonography
ABSTRACT
PURPOSE: Biomechanical simulation of anatomical deformations caused by ultrasound probe pressure is of outstanding importance for several applications, from the testing of robotic acquisition systems to multi-modal image fusion and the development of ultrasound training platforms. Different approaches can be exploited for modelling the probe-tissue interaction, each achieving a different trade-off among accuracy, computation time, and stability. METHODS: We assess the performance of different strategies based on the finite element method for modelling the interaction between a rigid probe and soft tissues. Probe-tissue contact is modelled using (i) penalty forces, (ii) constraint forces, and (iii) prescribed displacements of the mesh surface nodes. These methods are tested in the challenging context of ultrasound scanning of the breast, an organ undergoing large nonlinear deformations during the procedure. RESULTS: The obtained results are evaluated against those of a non-physically based method. While all methods achieve similar accuracy, performance in terms of stability and speed shows high variability, especially for the methods modelling the contacts explicitly. Overall, prescribing surface displacements is the approach with the best performance, but it requires prior knowledge of the contact area and probe trajectory. CONCLUSIONS: In this work, we present different strategies for modelling probe-tissue interaction, each able to achieve a different compromise among accuracy, speed, and stability. Since the presented methodologies can be applied to describe general tool-tissue interactions, this work can serve as a reference for researchers seeking the most appropriate strategy to model anatomical deformation induced by interaction with medical tools. The choice of the preferred approach depends highly on the requirements of the specific clinical application.
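The penalty-force contact strategy listed as option (i) above can be sketched in a few lines: when a probe node penetrates the tissue surface by some depth, a restoring force proportional to that depth is applied along the surface normal. The stiffness value and geometry below are illustrative assumptions, not parameters from the study:

```python
# Hedged sketch of a penalty-force contact model: a restoring force
# proportional to the penetration depth g (stiffness k, a tunable
# penalty parameter) acts along the surface normal. Values illustrative.
def penalty_force(penetration_depth, normal, k=1e4):
    """Return the contact force vector; zero when there is no penetration."""
    if penetration_depth <= 0.0:
        return (0.0, 0.0, 0.0)
    return tuple(k * penetration_depth * n for n in normal)

print(penalty_force(0.002, (0.0, 0.0, 1.0)))   # 2 mm penetration
```

The trade-off noted in the abstract follows directly from this form: a large k reduces visible interpenetration but stiffens the system of equations, which is one reason the explicit-contact methods showed the most variable stability.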