Results 1 - 20 of 35
1.
Article in English | MEDLINE | ID: mdl-38761319

ABSTRACT

PURPOSE: Most studies on surgical activity recognition utilizing artificial intelligence (AI) have focused mainly on recognizing one type of activity from small and mono-centric surgical video datasets. It remains speculative whether those models would generalize to other centers. METHODS: In this work, we introduce a large multi-centric multi-activity dataset consisting of 140 surgical videos (MultiBypass140) of laparoscopic Roux-en-Y gastric bypass (LRYGB) surgeries performed at two medical centers, i.e., the University Hospital of Strasbourg, France (StrasBypass70) and Inselspital, Bern University Hospital, Switzerland (BernBypass70). The dataset has been fully annotated with phases and steps by two board-certified surgeons. Furthermore, we assess the generalizability and benchmark different deep learning models for the task of phase and step recognition in 7 experimental studies: (1) Training and evaluation on BernBypass70; (2) Training and evaluation on StrasBypass70; (3) Training and evaluation on the joint MultiBypass140 dataset; (4) Training on BernBypass70, evaluation on StrasBypass70; (5) Training on StrasBypass70, evaluation on BernBypass70; Training on MultiBypass140, (6) evaluation on BernBypass70 and (7) evaluation on StrasBypass70. RESULTS: The model's performance is markedly influenced by the training data. The worst results were obtained in experiments (4) and (5) confirming the limited generalization capabilities of models trained on mono-centric data. The use of multi-centric training data, experiments (6) and (7), improves the generalization capabilities of the models, bringing them beyond the level of independent mono-centric training and validation (experiments (1) and (2)). CONCLUSION: MultiBypass140 shows considerable variation in surgical technique and workflow of LRYGB procedures between centers. Therefore, generalization experiments demonstrate a remarkable difference in model performance. These results highlight the importance of multi-centric datasets for AI model generalization to account for variance in surgical technique and workflows. The dataset and code are publicly available at https://github.com/CAMMA-public/MultiBypass140.
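A minimal sketch of how the seven training/evaluation configurations listed above could be organized in code; `train_model`, `evaluate` and the dataset variables are placeholders, not part of the released MultiBypass140 code.

```python
# Hypothetical sketch of the 7 train/evaluate configurations described in the
# abstract; `train_model` and `evaluate` stand in for any phase/step recognizer.

def run_benchmark(train_model, evaluate, bern70, stras70):
    multi140 = bern70 + stras70  # joint multi-centric dataset
    experiments = {
        1: (bern70,   bern70),     # mono-centric Bern
        2: (stras70,  stras70),    # mono-centric Strasbourg
        3: (multi140, multi140),   # joint training and evaluation
        4: (bern70,   stras70),    # cross-center: Bern -> Strasbourg
        5: (stras70,  bern70),     # cross-center: Strasbourg -> Bern
        6: (multi140, bern70),     # multi-centric training, Bern evaluation
        7: (multi140, stras70),    # multi-centric training, Strasbourg evaluation
    }
    results = {}
    for exp_id, (train_set, test_set) in experiments.items():
        model = train_model(train_set)
        results[exp_id] = evaluate(model, test_set)
    return results
```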

2.
Diagnostics (Basel) ; 13(21)2023 Oct 27.
Article in English | MEDLINE | ID: mdl-37958223

ABSTRACT

This study aims to evaluate the predictive role of the abdominal aortic atherosclerotic plaque index (API) in patients with pre-operatively or post-operatively developed chronic kidney disease (CKD) treated with robot-assisted partial nephrectomy (RAPN) for renal cell carcinoma (RCC). One hundred and eighty-three patients (134 with no pre- and post-operative CKD (no CKD) and 49 with persistent or post-operative CKD development (post-op CKD)) who underwent RAPN between January 2019 and January 2022 were deemed eligible for the analysis. The API was calculated using dedicated software by assessing the ratio between the CT scan atherosclerotic plaque volume and the abdominal aortic volume. The ROC regression model demonstrated the influence of API on CKD development, with an increasing effect according to its value (coefficient 0.13; 95% CI 0.04-0.23; p = 0.006). The Model 1 multivariable analysis of the predictors of post-op CKD found that the following are independently associated with post-op CKD: Charlson Comorbidity Index (OR 1.31; p = 0.01), last follow-up (FU) Δ%eGFR (OR 0.95; p < 0.01), and API ≥ 10 (OR 25.4; p = 0.01). Model 2 showed API ≥ 10 as the only factor associated with CKD development (OR 25.2; p = 0.04). The median follow-up was 22 months. Our results demonstrate API to be a strong predictor of post-operative CKD, allowing the surgeon to tailor the best treatment for each patient, especially in those who might be at higher risk of CKD.
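For illustration, the API described above (ratio of CT plaque volume to abdominal aortic volume) can be computed as below; expressing it as a percentage is an assumption made here so that the reported API ≥ 10 cut-off is meaningful, and the function and variable names are not from the study.

```python
# Illustrative computation of the atherosclerotic plaque index (API) as the
# ratio between plaque volume and abdominal aortic volume measured on CT.
# Scaling to a percentage is an assumption consistent with the API >= 10 cut-off.

def plaque_index(plaque_volume_ml: float, aortic_volume_ml: float) -> float:
    """Return API as the percentage of the aortic volume occupied by plaque."""
    if aortic_volume_ml <= 0:
        raise ValueError("aortic volume must be positive")
    return 100.0 * plaque_volume_ml / aortic_volume_ml

api = plaque_index(plaque_volume_ml=3.2, aortic_volume_ml=28.0)
high_risk = api >= 10  # API >= 10 was independently associated with post-op CKD
```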

3.
Front Surg ; 10: 1115570, 2023.
Article in English | MEDLINE | ID: mdl-37383383

ABSTRACT

Objective: During nerve-sparing robot-assisted radical prostatectomy (RARP), bipolar electrocoagulation is often used, but its use is controversial because of the possible thermal damage to the neurovascular bundles. The aim of the study was to evaluate the spatial-temporal thermal distribution in the tissue and its correlation with electrosurgery-induced tissue damage in a controlled, CO2-rich environment modelling laparoscopy conditions. Methods: We manufactured a sealed plexiglass chamber (SPC) equipped with sensors to experimentally reproduce the environmental conditions of pneumoperitoneum during RARP. In 64 pig musculofascial tissues (PMTs) of approximately 3 cm × 3 cm × 2 cm, we evaluated the spatial-temporal thermal distribution in the tissue and its correlation with electrosurgery-induced tissue damage in this controlled, CO2-rich environment. The critical heat spread of bipolar cauterization during the procedure was assessed with a compact thermal camera (C2) with a small core sensor (60 × 80 microbolometer array in the 7-14 µm range). Results: Bipolar instruments used at 30 W showed a thermal spread area of 18 mm2 when applied for 2 s and 28 mm2 when applied for 4 s. At 60 W, bipolar instruments showed a mean thermal spread of 19 mm2 when applied for 2 s and 21 mm2 when applied for 4 s. Finally, histopathological analysis showed that thermal damage is distributed predominantly on the surface rather than in depth. Conclusions: These results are highly relevant for defining an accurate use of bipolar cautery during nerve-sparing RARP. The study also demonstrates the feasibility of using miniaturized thermal sensors, paving the way for the design of thermal endoscopic devices for robotic use.

5.
IEEE Trans Med Imaging ; 42(9): 2592-2602, 2023 09.
Article in English | MEDLINE | ID: mdl-37030859

ABSTRACT

Automatic recognition of fine-grained surgical activities, called steps, is a challenging but crucial task for intelligent intra-operative computer assistance. The development of current vision-based activity recognition methods relies heavily on a high volume of manually annotated data. This data is difficult and time-consuming to generate and requires domain-specific knowledge. In this work, we propose to use coarser and easier-to-annotate activity labels, namely phases, as weak supervision to learn step recognition with fewer step annotated videos. We introduce a step-phase dependency loss to exploit the weak supervision signal. We then employ a Single-Stage Temporal Convolutional Network (SS-TCN) with a ResNet-50 backbone, trained in an end-to-end fashion from weakly annotated videos, for temporal activity segmentation and recognition. We extensively evaluate and show the effectiveness of the proposed method on a large video dataset consisting of 40 laparoscopic gastric bypass procedures and the public benchmark CATARACTS containing 50 cataract surgeries.
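One plausible reading of the step-phase dependency loss is to aggregate predicted step probabilities into phase probabilities through a known step-to-phase mapping and supervise them with the coarser phase labels. The PyTorch sketch below follows that reading; it is an assumption-based illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def step_phase_dependency_loss(step_logits, phase_labels, step_to_phase):
    """Weakly supervise step predictions with phase labels.

    step_logits:   (batch, num_steps) raw scores from the step head.
    phase_labels:  (batch,) ground-truth phase indices (the weak signal).
    step_to_phase: (num_steps,) long tensor mapping each step to its parent phase.
    """
    num_phases = int(step_to_phase.max().item()) + 1
    step_probs = F.softmax(step_logits, dim=1)
    # Sum the probabilities of all steps belonging to the same phase.
    phase_probs = torch.zeros(step_probs.size(0), num_phases,
                              device=step_probs.device)
    phase_probs.index_add_(1, step_to_phase, step_probs)
    # Negative log-likelihood of the annotated phase under the aggregated steps.
    return F.nll_loss(torch.log(phase_probs + 1e-8), phase_labels)
```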


Subjects
Neural Networks, Computer; Surgery, Computer-Assisted
6.
Int J Comput Assist Radiol Surg ; 18(9): 1665-1672, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36944845

ABSTRACT

PURPOSE: Automatic recognition of surgical activities from intraoperative surgical videos is crucial for developing intelligent support systems for computer-assisted interventions. Current state-of-the-art recognition methods are based on deep learning, where data augmentation has shown the potential to improve the generalization of these methods. This has spurred work on automated and simplified augmentation strategies for image classification and object detection on datasets of still images. Extending such augmentation methods to videos is not straightforward, as the temporal dimension needs to be considered. Furthermore, surgical videos pose additional challenges as they are composed of multiple, interconnected, and long-duration activities. METHODS: This work proposes a new simplified augmentation method, called TRandAugment, specifically designed for long surgical videos, that treats each video as an assembly of temporal segments and applies consistent but random transformations to each segment. The proposed augmentation method is used to train an end-to-end spatiotemporal model consisting of a CNN (ResNet50) followed by a TCN. RESULTS: The effectiveness of the proposed method is demonstrated on two surgical video datasets, namely Bypass40 and CATARACTS, and two tasks, surgical phase and step recognition. TRandAugment adds a performance boost of 1-6% over previous state-of-the-art methods, which use manually designed augmentations. CONCLUSION: This work presents a simplified and automated augmentation method for long surgical videos. The proposed method has been validated on different datasets and tasks, indicating the importance of devising temporal augmentation methods for long surgical videos.
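The core idea of TRandAugment can be sketched as follows: split the video into temporal segments, sample one transform and one magnitude per segment, and apply them consistently to every frame of that segment. The transform set below is illustrative; the paper's actual operations and magnitude sampling may differ.

```python
import random
import numpy as np

# Simple frame-level transforms used only for illustration.
def brightness(frame, m): return np.clip(frame + m, 0, 255)
def hflip(frame, m):      return frame[:, ::-1]
def identity(frame, m):   return frame

TRANSFORMS = [brightness, hflip, identity]

def trand_augment(frames, num_segments=4, magnitude=30):
    """Split a video into temporal segments and apply one randomly chosen
    transform consistently to every frame of each segment."""
    segments = np.array_split(np.arange(len(frames)), num_segments)
    out = list(frames)
    for seg in segments:
        op = random.choice(TRANSFORMS)    # sampled once per segment
        m = random.uniform(0, magnitude)  # sampled once per segment
        for i in seg:
            out[i] = op(out[i], m)
    return out

# Example: a toy "video" of 16 grayscale frames of size 64x64.
video = [np.random.randint(0, 256, (64, 64)).astype(np.float32) for _ in range(16)]
augmented = trand_augment(video)
```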


Subjects
Cataract Extraction; Neural Networks, Computer; Humans; Algorithms; Cataract Extraction/methods
7.
Comput Biol Med ; 152: 106415, 2023 01.
Article in English | MEDLINE | ID: mdl-36527782

ABSTRACT

The automatic extraction of procedural surgical knowledge from surgery manuals, academic papers or other high-quality textual resources is of the utmost importance to develop knowledge-based clinical decision support systems, to automatically execute some of a procedure's steps, or to summarize the procedural information spread throughout the texts in a structured form usable as a study resource by medical students. In this work, we propose a first benchmark on extracting detailed surgical actions from available intervention procedure textbooks and papers. We frame the problem as a Semantic Role Labeling task. Exploiting a manually annotated dataset, we apply different Transformer-based information extraction methods. Starting from RoBERTa and BioMedRoBERTa pre-trained language models, we first investigate a zero-shot scenario and compare the obtained results with a full fine-tuning setting. We then introduce a new ad hoc surgical language model, named SurgicBERTa, pre-trained on a large collection of surgical materials, and we compare it with the previous ones. In the assessment, we explore different dataset splits (one in-domain and two out-of-domain) and we also investigate the effectiveness of the approach in a few-shot learning scenario. Performance is evaluated on three correlated sub-tasks: predicate disambiguation, semantic argument disambiguation and predicate-argument disambiguation. Results show that the fine-tuning of a pre-trained domain-specific language model achieves the highest performance on all splits and on all sub-tasks. All models are publicly released.
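As a rough illustration of the Semantic Role Labeling framing, the snippet below runs a RoBERTa-style encoder as a token classifier over a surgical sentence with Hugging Face Transformers. The label set and the checkpoint name are placeholders; the released SurgicBERTa weights, once downloaded, would be substituted for the base model and fine-tuned on the annotated dataset.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

# Placeholder tag set and checkpoint; substitute the authors' released weights.
LABELS = ["O", "B-PRED", "I-PRED", "B-ARG0", "I-ARG0", "B-ARG1", "I-ARG1"]
checkpoint = "roberta-base"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(
    checkpoint, num_labels=len(LABELS))

sentence = "Divide the mesentery close to the bowel wall."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # (1, seq_len, num_labels)
pred_ids = logits.argmax(-1).squeeze(0).tolist()
print([LABELS[i] for i in pred_ids])           # meaningful only after fine-tuning
```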


Subjects
Information Storage and Retrieval; Natural Language Processing; Humans; Semantics; Language
9.
Proc IEEE Inst Electr Electron Eng ; 110(7): 993-1011, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35911127

ABSTRACT

Surgical robots have been widely adopted, with over 4000 robots being used in practice daily. However, these are telerobots that are fully controlled by skilled human surgeons. Introducing "surgeon-assist" features, i.e., some forms of autonomy, has the potential to reduce tedium and increase consistency, analogous to driver-assist functions for lane keeping, cruise control, and parking. This article examines the scientific and technical backgrounds of robotic autonomy in surgery and some ethical, social, and legal implications. We describe several surgical tasks that have been automated in laboratory settings, as well as research concepts and trends.

10.
HardwareX ; 11: e00300, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35509906

ABSTRACT

We report the implementation of a thermal endoscope based on the Lepton LWIR camera core and custom miniaturized electronics. The sensor and the PCB can be inserted into a cylindrical protective case with a diameter down to 15 mm, either a stainless-steel tube or a 3D-printable plastic envelope, with an optical window in germanium. Two PCBs were developed for assembling the endoscope in two different schemes, enabling either a frontal or a lateral thermal vision setup. The thermal endoscope unit is controlled by an external Raspberry Pi unit. The Infrared Vision Software is provided for controlling the acquisition of thermal frames and for the thermographic calculation of the object temperature from input parameters describing the object surface emissivity and the environment. In general, the device makes it possible to perform thermography in applications in which traditional, larger equipment cannot be employed, such as nondestructive diagnostics in confined spaces in the engineering field. The thermal endoscope was designed with dimensions also compatible with robot-assisted and traditional minimally invasive surgery.
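A common simplified grey-body correction shows how object temperature can be recovered from the camera reading, the surface emissivity and the reflected ambient temperature. This generic formula (atmospheric transmission neglected) is given only as an illustration and is not necessarily the exact model implemented in the Infrared Vision Software.

```python
# Simplified grey-body emissivity correction often used in thermography:
#   sigma*T_apparent^4 = eps*sigma*T_object^4 + (1-eps)*sigma*T_reflected^4
# solved for the true object temperature (atmospheric transmission neglected).

def object_temperature(t_apparent_k: float, emissivity: float,
                       t_reflected_k: float) -> float:
    """Recover the object temperature (K) from the blackbody-equivalent reading."""
    if not 0.0 < emissivity <= 1.0:
        raise ValueError("emissivity must be in (0, 1]")
    w_total = t_apparent_k ** 4
    w_object = (w_total - (1.0 - emissivity) * t_reflected_k ** 4) / emissivity
    return w_object ** 0.25

# Example: camera reads 308 K off a surface with emissivity 0.95 at 296 K ambient.
print(object_temperature(308.0, 0.95, 296.0))
```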

11.
Med Image Anal ; 77: 102355, 2022 04.
Article in English | MEDLINE | ID: mdl-35139483

ABSTRACT

Optical Coherence Tomography (OCT) is increasingly used in endoluminal procedures since it provides high-speed and high-resolution imaging. Distortion and instability of images obtained with a proximal scanning endoscopic OCT system are significant, due to motor rotation irregularity, friction between the rotating probe and the outer sheath, and synchronization issues. On-line compensation of artefacts is essential to ensure image quality suitable for real-time assistance during diagnosis or minimally invasive treatment. In this paper, we propose a new online correction method to tackle B-scan distortion, video stream shaking and the drift problem of endoscopic OCT, all linked to A-line-level image shifting. The proposed computational approach for OCT scanning video correction integrates a Convolutional Neural Network (CNN) to improve the estimation of the azimuthal shift of each A-line. To suppress the cumulative error of the integral estimation, we also introduce another CNN branch to estimate a dynamic overall orientation angle. We train the network with semi-synthetic OCT videos obtained by intentionally adding rotational distortion to real OCT scanning images. The results show that networks trained on this semi-synthetic data generalize to stabilize real OCT videos, and the algorithm's efficacy is demonstrated on both ex vivo and in vivo data, where strong scanning artifacts are successfully corrected.
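Purely as an illustration of how the estimated per-A-line shifts and the overall orientation angle could be applied, the sketch below re-samples a B-scan so that each A-line lands at its corrected angular position; the CNN estimators themselves are not shown and the resampling scheme is an assumption, not the paper's implementation.

```python
import numpy as np

def correct_bscan(bscan: np.ndarray, per_aline_shift: np.ndarray,
                  global_angle_alines: float = 0.0) -> np.ndarray:
    """bscan: (depth, n_alines) image; per_aline_shift: (n_alines,) estimated
    azimuthal shift of each A-line (in A-line units); global_angle_alines:
    overall orientation correction against drift."""
    depth, n = bscan.shape
    # Corrected angular position of each acquired A-line (integrated shifts).
    positions = (np.arange(n) + np.cumsum(per_aline_shift)
                 + global_angle_alines) % n
    corrected = np.zeros_like(bscan)
    # Nearest-neighbour placement of each A-line at its corrected position.
    for src in np.argsort(positions):
        corrected[:, int(round(positions[src])) % n] = bscan[:, src]
    return corrected
```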


Subjects
Deep Learning; Tomography, Optical Coherence; Algorithms; Artifacts; Humans; Neural Networks, Computer; Tomography, Optical Coherence/methods
12.
IEEE Trans Biomed Eng ; 69(1): 209-219, 2022 01.
Article in English | MEDLINE | ID: mdl-34156935

ABSTRACT

In Robot-Assisted Minimally Invasive Surgery, discriminating critical subsurface structures is essential to make the surgical procedure safer and more efficient. In this paper, a novel robot-assisted electrical bio-impedance scanning (RAEIS) system is developed and validated through a series of experiments. The proposed system constructs a tri-polar sensing configuration for tissue homogeneity inspection. Specifically, two robotic forceps are used as electrodes for applying electric current and measuring reciprocal voltages relative to a ground electrode placed distal to the measuring site. Compared to existing electrical bio-impedance sensing technology, the proposed system is able to use miniaturized electrodes to measure a site flexibly, with enhanced subsurface detection capability. This paper presents the concept, the modeling of the sensing method, the hardware design, and the system calibration. Subsequently, a series of experiments are conducted for system evaluation, including finite element simulation, saline solution bath experiments and experiments based on ex vivo animal tissues. The experimental results demonstrate that the proposed system can measure the resistivity of the material with high accuracy and detect a subsurface non-homogeneous object with a 100% success rate. The proposed parameter estimation algorithm is able to effectively approximate the resistivity and the depth of the subsurface object with a single fast scan.
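As a rough idea of how resistivity can be related to a single voltage/current measurement, the idealized point-source half-space model gives V = ρI/(2πr); the sketch below inverts it. This homogeneous-medium formula ignores the finite electrode geometry and the full tri-polar calibration used by the actual system.

```python
import math

# Idealized half-space model: the potential at distance r from a point current
# source on the surface of a semi-infinite homogeneous medium is
#   V = rho * I / (2 * pi * r),  so  rho = 2 * pi * r * V / I.
# This ignores electrode geometry and the system's calibration procedure.

def estimate_resistivity(voltage_v: float, current_a: float,
                         electrode_distance_m: float) -> float:
    return 2.0 * math.pi * electrode_distance_m * voltage_v / current_a

# Example: 5 mm spacing, 1 mA excitation, 95 mV measured -> ~3 ohm*m,
# within the typical range of soft tissue resistivity.
rho = estimate_resistivity(voltage_v=0.095, current_a=1e-3,
                           electrode_distance_m=5e-3)
print(rho)
```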


Subjects
Robotics; Algorithms; Animals; Calibration; Electric Impedance; Minimally Invasive Surgical Procedures
13.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3729-3733, 2021 11.
Article in English | MEDLINE | ID: mdl-34892047

ABSTRACT

Electrical impedance tomography (EIT) is an important medical imaging approach that noninvasively shows the electrical characteristics and the homogeneity of a tissue region. Recently, this technology has been introduced into Robot-Assisted Minimally Invasive Surgery (RAMIS) to assist the detection of surgical margins, with relevant clinical benefits. Nevertheless, most EIT technologies are based on a fixed multi-electrode probe, which significantly limits sensing flexibility and capability. In this study, we present a method for acquiring EIT measurements during a RAMIS procedure using two already existing robotic forceps as electrodes. The robot controls the forceps tips to a series of predefined positions for injecting excitation current and measuring electric potentials. Given the relative positions of the electrodes and the measured electric potentials, the spatial distribution of electrical conductivity in a section view can be reconstructed. Realistic experiments are designed and conducted to simulate two tasks: subsurface abnormal tissue detection and surgical margin localization. According to the reconstructed images, the system is demonstrated to display the location of the abnormal tissue and the contrast of the tissues' conductivity with an accuracy suitable for clinical applications.


Subjects
Robotics; Tomography; Electric Conductivity; Electric Impedance; Tomography, X-Ray Computed
14.
Int J Comput Assist Radiol Surg ; 16(8): 1393-1401, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34224068

ABSTRACT

PURPOSE: We present the validation of PROST, a robotic device for prostate biopsy. PROST is designed to minimize human error by introducing some autonomy in the execution of the key steps of the procedure, i.e., target selection, image fusion and needle positioning. The robot allows executing a targeted biopsy under ultrasound (US) guidance and fusion with the magnetic resonance (MR) images on which the target was defined. METHODS: PROST is a parallel robot with 4 degrees of freedom (DOF) to orient the needle and 1 DOF to rotate the US probe. We reached a calibration error of less than 2 mm, computed as the difference between the needle position in robot coordinates and in the US image. The autonomy of the robot is given by the image analysis software, which employs deep learning techniques, the integrated image fusion algorithms and the automatic computation of the needle trajectory. For safety reasons, the insertion of the needle is assigned to the doctor. RESULTS: System performance was evaluated in terms of positioning accuracy. Tests were performed on a 3D-printed object with nine 2-mm spherical targets and on a commercial anatomical phantom that simulates the human prostate with three lesions and the surrounding structures. The average accuracy reached in the laboratory experiments was [Formula: see text] in the first test and [Formula: see text] in the second test. CONCLUSIONS: We introduced a first prototype of a prostate biopsy robot that has the potential to increase the detection of clinically significant prostate cancer and, by including some level of autonomy, to simplify the procedure, reduce human errors and shorten training time. The use of a robot for prostate biopsy will also create the possibility of delivering a treatment, such as focal ablation, through the same system.


Subjects
Image Processing, Computer-Assisted/methods; Image-Guided Biopsy/methods; Prostatic Neoplasms/diagnosis; Robotics/methods; Software; Biopsy, Needle/methods; Humans; Magnetic Resonance Imaging/methods; Male; Phantoms, Imaging; Pilot Projects; Ultrasonography
15.
Int J Comput Assist Radiol Surg ; 16(7): 1111-1119, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34013464

ABSTRACT

PURPOSE: Automatic segmentation and classification of surgical activity is crucial for providing advanced support in computer-assisted interventions and autonomous functionalities in robot-assisted surgeries. Prior works have focused on recognizing either coarse activities, such as phases, or fine-grained activities, such as gestures. This work aims at jointly recognizing two complementary levels of granularity directly from videos, namely phases and steps. METHODS: We introduce two correlated surgical activities, phases and steps, for the laparoscopic gastric bypass procedure. We propose a multi-task multi-stage temporal convolutional network (MTMS-TCN) along with a multi-task convolutional neural network (CNN) training setup to jointly predict the phases and steps and benefit from their complementarity to better evaluate the execution of the procedure. We evaluate the proposed method on a large video dataset consisting of 40 surgical procedures (Bypass40). RESULTS: We present experimental results from several baseline models for both phase and step recognition on Bypass40. The proposed MTMS-TCN method outperforms single-task methods in both phase and step recognition by 1-2% in accuracy, precision and recall. Furthermore, for step recognition, MTMS-TCN outperforms LSTM-based models by 3-6% on all metrics. CONCLUSION: In this work, we present a multi-task multi-stage temporal convolutional network for surgical activity recognition, which shows improved results compared to single-task models on a gastric bypass dataset with multi-level annotations. The proposed method shows that the joint modeling of phases and steps is beneficial for improving the overall recognition of each type of activity.
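A minimal sketch of the multi-task objective: a shared feature vector feeds a phase head and a step head, and the two cross-entropy losses are combined. Layer sizes, class counts and the loss weighting below are illustrative, not the paper's values.

```python
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    """Two classification heads on top of a shared temporal feature."""
    def __init__(self, feat_dim, num_phases, num_steps):
        super().__init__()
        self.phase_head = nn.Linear(feat_dim, num_phases)
        self.step_head = nn.Linear(feat_dim, num_steps)

    def forward(self, features):               # features: (batch, feat_dim)
        return self.phase_head(features), self.step_head(features)

def multitask_loss(phase_logits, step_logits, phase_labels, step_labels,
                   step_weight=1.0):
    ce = nn.functional.cross_entropy
    return ce(phase_logits, phase_labels) + step_weight * ce(step_logits, step_labels)

# Toy usage with illustrative sizes.
head = MultiTaskHead(feat_dim=256, num_phases=11, num_steps=44)
feats = torch.randn(8, 256)
phase_logits, step_logits = head(feats)
loss = multitask_loss(phase_logits, step_logits,
                      torch.randint(0, 11, (8,)), torch.randint(0, 44, (8,)))
```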


Subjects
Gastric Bypass/methods; Laparoscopy/methods; Neural Networks, Computer; Robotic Surgical Procedures/methods; Humans
16.
Int J Comput Assist Radiol Surg ; 16(8): 1287-1295, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33886045

ABSTRACT

PURPOSE: The automatic extraction of knowledge about intervention execution from surgical manuals would be of the utmost importance to develop expert surgical systems and assistants. In this work we assess the feasibility of automatically identifying the sentences of a surgical intervention text containing procedural information, a subtask of the broader goal of extracting intervention workflows from surgical manuals. METHODS: We frame the problem as a binary classification task. We first introduce a new public dataset of 1958 sentences from robotic surgery texts, manually annotated as procedural or non-procedural. We then apply different classification methods, from classical machine learning algorithms to more recent neural-network approaches and classification methods exploiting transformers (e.g., BERT, ClinicalBERT). We also analyze the benefits of applying balancing techniques to the dataset. RESULTS: The architectures based on neural networks fed with FastText embeddings and the one based on ClinicalBERT outperform all the other tested methods, empirically confirming the feasibility of the task. Adopting balancing techniques does not lead to substantial improvements in classification. CONCLUSION: This is the first work experimenting with machine/deep learning algorithms for automatically identifying procedural sentences in surgical texts. It also introduces the first public dataset that can be used for benchmarking different classification methods for the task.
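As a sketch of the binary procedural/non-procedural sentence classifier, the snippet below uses a Hugging Face sequence-classification head; the checkpoint name is a placeholder for a ClinicalBERT-style model, the example sentences are invented, and meaningful predictions require fine-tuning on the annotated dataset.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

checkpoint = "bert-base-uncased"  # placeholder; substitute a ClinicalBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

sentences = [
    "Incise the peritoneum along the line of dissection.",       # procedural
    "Robotic surgery has grown rapidly over the last decade.",   # non-procedural
]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**batch).logits.softmax(-1)
print(probs[:, 1])  # probability of the "procedural" class (after fine-tuning)
```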


Subjects
Algorithms; Machine Learning; Neural Networks, Computer; Robotic Surgical Procedures/methods; Humans
17.
IEEE Trans Neural Syst Rehabil Eng ; 28(9): 2053-2062, 2020 09.
Article in English | MEDLINE | ID: mdl-32746325

ABSTRACT

Selecting actuators for assistive exoskeletons involves decisions in which designers usually face contrasting requirements. While certain choices may depend on the application context or design philosophy, it is generally desirable to avoid oversizing actuators in order to obtain more lightweight and transparent systems, ultimately promoting the adoption of a given device. In many cases, the torque and power requirements can be relaxed by exploiting the contribution of an elastic element acting in mechanical parallel. This contribution considers one such case and introduces a methodology for the evaluation of different actuator choices resulting from the combination of different motors, reduction gears, and parallel stiffness profiles, helping to match actuator capabilities to the task requirements. Such methodology is based on a graphical tool showing how different design choices affect the actuator as a whole. To illustrate the approach, a back-support exoskeleton for lifting tasks is considered as a case study.
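The torque-relief effect of a parallel elastic element can be sketched numerically: the spring contributes k·(q − q0) at joint angle q, so the motor only needs to supply the remainder of the task torque. The task profile and spring values below are illustrative, not taken from the case study.

```python
import numpy as np

q = np.linspace(0.0, 1.2, 200)            # joint angle over a lifting cycle (rad)
tau_task = 60.0 * np.sin(q)               # illustrative task torque profile (N*m)

def motor_torque(tau_task, q, stiffness, q_rest):
    tau_spring = stiffness * (q - q_rest)  # parallel elastic contribution
    return tau_task - tau_spring           # what the motor must deliver

tau_motor = motor_torque(tau_task, q, stiffness=40.0, q_rest=0.0)
print("peak task torque :", tau_task.max())
print("peak motor torque:", np.abs(tau_motor).max())  # markedly lower peak
```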


Subjects
Exoskeleton Device; Equipment Design; Humans; Orthotic Devices; Torque
18.
Int J Comput Assist Radiol Surg ; 15(8): 1379-1387, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32445126

ABSTRACT

PURPOSE: Biomechanical simulation of anatomical deformations caused by ultrasound probe pressure is of outstanding importance for several applications, from the testing of robotic acquisition systems to multi-modal image fusion and the development of ultrasound training platforms. Different approaches can be exploited for modelling the probe-tissue interaction, each achieving different trade-offs among accuracy, computation time and stability. METHODS: We assess the performance of different strategies based on the finite element method for modelling the interaction between the rigid probe and soft tissues. Probe-tissue contact is modelled using (i) penalty forces, (ii) constraint forces, and (iii) prescribed displacements of the mesh surface nodes. These methods are tested in the challenging context of ultrasound scanning of the breast, an organ undergoing large nonlinear deformations during the procedure. RESULTS: The obtained results are evaluated against those of a non-physically based method. While all methods achieve similar accuracy, performance in terms of stability and speed shows high variability, especially for those methods modelling the contacts explicitly. Overall, prescribing surface displacements is the approach with the best performance, but it requires prior knowledge of the contact area and probe trajectory. CONCLUSIONS: In this work, we present different strategies for modelling probe-tissue interaction, each able to achieve a different compromise among accuracy, speed and stability. The choice of the preferred approach highly depends on the requirements of the specific clinical application. Since the presented methodologies can be applied to describe general tool-tissue interactions, this work can be seen as a reference for researchers seeking the most appropriate strategy to model anatomical deformation induced by the interaction with medical tools.
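As an example of strategy (i), a penalty-force contact model applies to each penetrating surface node a restoring force proportional to its penetration depth along the contact normal; the sketch below assumes a spherical probe tip and an illustrative stiffness value.

```python
import numpy as np

def penalty_forces(node_positions, probe_center, probe_radius, stiffness=1e4):
    """Penalty contact forces for a spherical probe tip.

    node_positions: (n_nodes, 3) surface node coordinates (m).
    Returns one force vector per node (N); zero for non-penetrating nodes."""
    diff = node_positions - probe_center
    dist = np.linalg.norm(diff, axis=1)
    penetration = np.maximum(probe_radius - dist, 0.0)
    normals = diff / np.maximum(dist[:, None], 1e-9)
    return stiffness * penetration[:, None] * normals

# Toy usage: random surface nodes in a 5 cm cube, 1 cm probe radius.
nodes = np.random.rand(100, 3) * 0.05
forces = penalty_forces(nodes, probe_center=np.array([0.025, 0.025, 0.0]),
                        probe_radius=0.01)
```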


Subjects
Models, Anatomic; Ultrasonography/methods; Biomechanical Phenomena; Computer Simulation; Humans
19.
Int J Comput Assist Radiol Surg ; 14(11): 2043, 2019 11.
Article in English | MEDLINE | ID: mdl-31250254

ABSTRACT

The original version of this article unfortunately contained a mistake.

20.
Int J Comput Assist Radiol Surg ; 14(8): 1329-1339, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31161556

ABSTRACT

PURPOSE: Although ultrasound (US) images represent the most popular modality for guiding breast biopsy, malignant regions are often missed by sonography, thus preventing accurate lesion localization, which is essential for a successful procedure. Biomechanical models can support the localization of suspicious areas identified on a preoperative image during US scanning since they are able to account for anatomical deformations resulting from US probe pressure. We propose a deformation model that relies on a position-based dynamics (PBD) approach to predict the displacement of internal targets induced by probe interaction during US acquisition. METHODS: The PBD implementation available in NVIDIA FleX is exploited to create an anatomical model capable of deforming online. Simulation parameters are initialized on a calibration phantom under different levels of probe-induced deformation; then, they are fine-tuned by minimizing the localization error of a US-visible landmark of a realistic breast phantom. The updated model is used to estimate the displacement of other internal lesions due to probe-tissue interaction. RESULTS: The localization error obtained when applying the PBD model remains below 11 mm for all the tumors, even for input displacements in the order of 30 mm. The proposed method obtains results aligned with those of FE models, with faster computational performance suitable for real-time applications. In addition, it outperforms the rigid model used to track lesion position in US-guided breast biopsies, at least halving the localization error for all the displacement ranges considered. CONCLUSION: The position-based dynamics approach has proved successful in modeling breast tissue deformations during US acquisition. Its stability, accuracy and real-time performance make such a model suitable for tracking lesion displacement during US-guided breast biopsy.
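For context, the basic position-based dynamics step that engines such as NVIDIA FleX build on is the direct projection of particle positions so that a constraint is satisfied; the distance-constraint projection below is the textbook PBD formulation, not the calibrated breast model itself.

```python
import numpy as np

def project_distance_constraint(p1, p2, w1, w2, rest_length, stiffness=1.0):
    """Project two particles so their distance approaches rest_length.

    w1, w2 are inverse masses; stiffness in [0, 1] scales the correction."""
    d = p2 - p1
    length = np.linalg.norm(d)
    if length < 1e-9 or (w1 + w2) == 0.0:
        return p1, p2
    n = d / length
    c = length - rest_length                       # constraint violation
    p1 = p1 + stiffness * (w1 / (w1 + w2)) * c * n
    p2 = p2 - stiffness * (w2 / (w1 + w2)) * c * n
    return p1, p2

# Toy usage: two unit-inverse-mass particles 1.5 apart with rest length 1.0.
a, b = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.5])
a, b = project_distance_constraint(a, b, w1=1.0, w2=1.0, rest_length=1.0)
print(np.linalg.norm(b - a))   # -> 1.0 after one full-stiffness projection
```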


Subjects
Breast Neoplasms/diagnostic imaging; Breast/diagnostic imaging; Image-Guided Biopsy; Imaging, Three-Dimensional; Ultrasonography, Mammary; Algorithms; Biopsy; Calibration; Computer Simulation; Humans; Models, Anatomic; Patient Positioning; Phantoms, Imaging; Robotics; Software