1.
Neurosurg Rev ; 44(4): 1853-1867, 2021 Aug.
Article in English | MEDLINE | ID: mdl-32944808

ABSTRACT

At a time of significant global unrest and uncertainty surrounding how the delivery of clinical training will unfold over the coming years, we offer a systematic review, meta-analysis, and bibliometric analysis of global studies showing the crucial role simulation will play in training. Our aim was to determine the types of simulators in use, their effectiveness in improving clinical skills, and whether we have reached a point of global acceptance. We performed a PRISMA-guided global systematic review of the neurosurgical simulators available, a meta-analysis of their effectiveness, and an extended analysis of progressive scholarly acceptance, on studies meeting our inclusion criteria for simulation in neurosurgical education. Improvement in procedural knowledge and technical skills was evaluated. Of the 7405 studies identified, 56 met the inclusion criteria, collectively reporting 50 simulator types ranging from cadaveric, low-fidelity, and part-task to virtual reality (VR) simulators. In all, 32 studies were included in the meta-analysis, including 7 randomised controlled trials. A random-effects, ratio-of-means effects measure quantified statistically significant improvement in procedural knowledge by 50.2% (ES 0.502; CI 0.355 to 0.649; p < 0.001), technical skill including accuracy by 32.5% (ES -0.325; CI -0.482 to -0.167; p < 0.001), and speed by 25% (ES -0.250; CI -0.399 to -0.107; p < 0.001), where negative effect sizes reflect reductions in error and task time. The initial number of VR studies (n = 91) was approximately double the number of refining studies (n = 45), indicating that VR simulation has yet to reach progressive scholarly acceptance. There is strong evidence for a beneficial impact of adopting simulation on the improvement of procedural knowledge and technical skill. We show a growing trend towards the adoption of neurosurgical simulators, although progressive scholarly acceptance has not yet been fully gained for VR-based simulation technologies in neurosurgical education.
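
The pooled effects reported above come from a random-effects, ratio-of-means model. As a rough illustration (not the authors' code), the sketch below implements DerSimonian-Laird random-effects pooling in Python; the per-study effect sizes and variances are hypothetical.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effect sizes."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w_fixed = 1.0 / variances                           # inverse-variance weights
    pooled_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
    q = np.sum(w_fixed * (effects - pooled_fixed) ** 2) # Cochran's Q heterogeneity
    df = len(effects) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_rand = 1.0 / (variances + tau2)                   # random-effects weights
    pooled = np.sum(w_rand * effects) / np.sum(w_rand)
    se = np.sqrt(1.0 / np.sum(w_rand))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study effects and variances, for illustration only
es, ci = random_effects_pool([0.42, 0.55, 0.48], [0.010, 0.020, 0.015])
print(f"pooled ES = {es:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```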


Subjects
Neurosurgery, Virtual Reality, Clinical Competence, Computer Simulation, Humans, Neurosurgery/education, Neurosurgical Procedures
2.
IEEE Trans Med Imaging ; PP, 2024 May 27.
Article in English | MEDLINE | ID: mdl-38801689

ABSTRACT

Probe-based confocal laser endomicroscopy (pCLE) has a role in characterising tissue intraoperatively to guide tumour resection during surgery. To capture good-quality pCLE data, which is important for diagnosis, the probe-tissue contact needs to be maintained within a working range at the micrometre scale. This can be achieved through micro-surgical robotic manipulation, which requires automatic estimation of the probe-tissue distance. In this paper, we propose a novel deep regression framework composed of a Deep Regression Generative Adversarial Network (DR-GAN) and a Sequence Attention (SA) module. The aim of DR-GAN is to train the network using an enhanced image-based supervision approach. It extends the standard generator by using a well-defined function for image generation instead of a learnable decoder. DR-GAN also uses a novel learnable neural perceptual loss which, for the first time, combines spatial and frequency-domain features, effectively suppressing the adverse effects of noise in the pCLE data. To incorporate temporal information, we have designed the SA module, a cross-attention module enhanced with Radial Basis Function-based encoding (SA-RBF). Furthermore, to train the regression framework, we designed a multi-step training mechanism. During inference, the trained network is used to generate data representations which are fused along time in the SA-RBF module to boost regression stability. Our proposed network advances SOTA networks by addressing the challenge of excessive noise in pCLE data and enhancing regression stability. It outperforms SOTA networks applied to the pCLE Regression dataset (PRD) in terms of accuracy, data quality and stability.
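
One plausible reading of the spatial-plus-frequency perceptual loss is sketched below in PyTorch; the equal weighting, the use of L1 distances, and the magnitude-spectrum comparison are our assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def spatial_frequency_loss(pred, target, alpha=0.5):
    """Combine a spatial L1 term with an FFT-magnitude L1 term.

    pred, target: (B, C, H, W) image batches. alpha balances the two
    domains; 0.5 is an illustrative value, not the paper's.
    """
    spatial = F.l1_loss(pred, target)
    # Compare magnitude spectra so the loss also penalises
    # frequency-domain discrepancies (e.g. structured noise).
    freq_pred = torch.fft.rfft2(pred).abs()
    freq_target = torch.fft.rfft2(target).abs()
    frequency = F.l1_loss(freq_pred, freq_target)
    return alpha * spatial + (1 - alpha) * frequency

loss = spatial_frequency_loss(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
```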

3.
Healthc Technol Lett ; 11(2-3): 108-116, 2024.
Article in English | MEDLINE | ID: mdl-38638493

ABSTRACT

Generalizable and accurate stereo depth estimation is vital for 3D reconstruction, especially in surgery. Supervised learning methods obtain the best performance; however, the limited ground-truth data available for surgical scenes limits generalizability. Self-supervised methods do not need ground truth but suffer from scale ambiguity and incorrect disparity prediction due to the inconsistency of the photometric loss. This work proposes a two-phase training procedure that is generalizable and retains the high performance of supervised methods. It entails: (1) performing self-supervised representation learning of left and right views via masked image modelling (MIM) to learn generalizable semantic stereo features; (2) utilizing the MIM pre-trained model to learn a robust depth representation via supervised learning for disparity estimation on synthetic data only. To improve the stereo representations learnt via MIM, perceptual loss terms are introduced, explicitly encouraging the learning of higher scene-level features. Qualitative and quantitative performance evaluation on surgical and natural scenes shows that the approach achieves sub-millimetre accuracy and the lowest errors, respectively, setting a new state of the art despite not being trained on surgical or natural scene data for disparity estimation.
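
A minimal sketch of the MIM masking step described in phase (1), assuming simple random square-patch masking; the patch size and mask ratio are illustrative values, not the paper's.

```python
import torch

def random_patch_mask(images, patch=16, mask_ratio=0.75):
    """Randomly mask square patches for masked image modelling (MIM).

    images: (B, C, H, W) with H, W divisible by `patch`. Returns the
    masked images and the boolean patch mask (True = hidden patch).
    """
    b, c, h, w = images.shape
    gh, gw = h // patch, w // patch
    n = gh * gw
    # Each row of `mask` hides a random subset of mask_ratio * n patches.
    mask = torch.rand(b, n).argsort(dim=1) < int(n * mask_ratio)
    mask2d = (mask.view(b, 1, gh, gw)
                  .repeat_interleave(patch, 2)
                  .repeat_interleave(patch, 3))
    masked = images.masked_fill(mask2d, 0.0)  # zero out hidden patches
    return masked, mask.view(b, gh, gw)

masked, mask = random_patch_mask(torch.rand(2, 3, 64, 64))
```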

4.
Int J Comput Assist Radiol Surg ; 19(6): 1061-1073, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38538880

ABSTRACT

PURPOSE: Probe-based confocal laser endomicroscopy (pCLE) enables intraoperative tissue characterization with improved resection rates of brain tumours. Although a plethora of deep learning models have been developed for automating tissue characterization, their lack of transparency is a concern. To tackle this issue, techniques like Class Activation Map (CAM) and its variations highlight image regions related to model decisions. However, they often fall short of providing human-interpretable visual explanations for surgical decision support, primarily due to the shattered gradient problem or insufficient theoretical underpinning. METHODS: In this paper, we introduce XRelevanceCAM, an explanation method rooted in a better backpropagation approach, incorporating the sensitivity and conservation axioms. This enhanced method offers a greater theoretical foundation and effectively mitigates the shattered gradient issue compared with other CAM variants. RESULTS: Qualitative and quantitative evaluations are based on ex vivo pCLE data of brain tumours. XRelevanceCAM effectively highlights clinically relevant areas that characterize the tissue type. Specifically, it yields a remarkable 56% improvement over our closest baseline, RelevanceCAM, in the network's shallowest layer as measured by the mean Intersection over Union (mIoU) metric based on ground-truth annotations (from 18% to 28.07%). Furthermore, a 6% improvement in mIoU is observed when generating the final saliency map from all network layers. CONCLUSION: We introduce a new CAM variation, XRelevanceCAM, for precise identification of clinically important structures in pCLE data. This can aid intraoperative decision support in brain tumour resection surgery, as validated in our performance study.
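
The mIoU figure quoted above can be illustrated with a short sketch that thresholds saliency maps and scores them against binary ground-truth annotations; the 0.5 threshold is our assumption.

```python
import numpy as np

def mean_iou(saliency_maps, gt_masks, threshold=0.5):
    """Mean Intersection over Union between thresholded saliency maps
    and binary ground-truth annotations.

    saliency_maps: (N, H, W) floats in [0, 1]; gt_masks: (N, H, W) bools.
    """
    ious = []
    for sal, gt in zip(saliency_maps, gt_masks):
        pred = sal >= threshold
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        if union > 0:                 # skip frames with no annotation or prediction
            ious.append(inter / union)
    return float(np.mean(ious))
```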


Subjects
Brain Neoplasms, Confocal Microscopy, Confocal Microscopy/methods, Humans, Brain Neoplasms/surgery, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/pathology, Deep Learning
5.
Article in English | MEDLINE | ID: mdl-38900623

ABSTRACT

Conventional approaches to dietary assessment are primarily grounded in self-reporting methods or structured interviews conducted under the supervision of dietitians. These methods, however, are often subjective, potentially inaccurate, and time-intensive. Although artificial intelligence (AI)-based solutions have been devised to automate the dietary assessment process, prior AI methodologies tackle dietary assessment in a fragmented landscape (e.g., merely recognizing food types or estimating portion size) and encounter challenges in generalizing across a diverse range of food categories, dietary behaviors, and cultural contexts. Recently, the emergence of multimodal foundation models, such as GPT-4V, has exhibited transformative potential across a wide range of tasks (e.g., scene understanding and image captioning) in various research domains. These models have demonstrated remarkable generalist intelligence and accuracy, owing to their large-scale pre-training on broad datasets and substantially scaled model size. In this study, we explore the application of GPT-4V, the model powering multimodal ChatGPT, to dietary assessment, along with prompt engineering and passive monitoring techniques. We evaluated the proposed pipeline using a self-collected, semi-free-living dietary intake dataset comprising 16 real-life eating episodes, captured through wearable cameras. Our findings reveal that GPT-4V excels in food detection under challenging conditions without any fine-tuning or adaptation using food-specific datasets. By guiding the model with specific language prompts (e.g., African cuisine), it shifts from recognizing common staples like rice and bread to accurately identifying regional dishes like banku and ugali. Another standout feature of GPT-4V is its contextual awareness: it can leverage surrounding objects as scale references to deduce the portion sizes of food items, further facilitating the process of dietary assessment.
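
A hedged sketch of the prompting strategy described above, using the OpenAI Python client; the exact prompt wording and the model identifier (here gpt-4o as a stand-in for a GPT-4V-class model) are our assumptions, not the study's protocol.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def assess_meal(image_path, cuisine_hint="African cuisine"):
    """Ask a GPT-4V-class model to identify foods and estimate portions,
    steering recognition with a regional-cuisine hint as described above.
    Prompt wording and model name are illustrative assumptions.
    """
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # any GPT-4V-class multimodal model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"This meal is from {cuisine_hint}. List each food "
                         "item and estimate its portion size, using nearby "
                         "objects (plates, cutlery) as scale references."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content
```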

6.
Med Image Anal ; 91: 102985, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37844472

ABSTRACT

This paper introduces the "SurgT: Surgical Tracking" challenge, which was organized in conjunction with the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2022). There were two purposes for the creation of this challenge: (1) to establish the first standardized benchmark for the research community to assess soft-tissue trackers; and (2) to encourage the development of unsupervised deep learning methods, given the lack of annotated data in surgery. A dataset of 157 stereo endoscopic videos from 20 clinical cases, along with stereo camera calibration parameters, was provided. Participants were assigned the task of developing algorithms to track the movement of soft tissues, represented by bounding boxes, in stereo endoscopic videos. At the end of the challenge, the developed methods were assessed on a previously hidden test subset. This assessment uses benchmarking metrics that were purposely developed for this challenge, to verify the efficacy of unsupervised deep learning algorithms in tracking soft tissue. The metric used for ranking the methods was the Expected Average Overlap (EAO) score, which measures the average overlap between a tracker's bounding boxes and the ground truth. First in the challenge was the deep learning submission by ICVS-2Ai, with a superior EAO score of 0.617. This method employs ARFlow to estimate unsupervised dense optical flow from cropped images, using photometric and regularization losses. The second-placed method, Jmees (EAO 0.583), uses deep learning for surgical tool segmentation on top of a non-deep-learning baseline method, CSRT; CSRT by itself scores a similar EAO of 0.563. The results from this challenge show that non-deep-learning methods are currently still competitive. The dataset and benchmarking tool created for this challenge have been made publicly available at https://surgt.grand-challenge.org/. This challenge is expected to contribute to the development of autonomous robotic surgery and other digital surgical technologies.
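
A simplified sketch of the ranking metric: per-frame IoU between predicted and ground-truth boxes, averaged over a sequence. The full EAO protocol additionally averages over sequence lengths; this reduction is our simplification.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def average_overlap(pred_boxes, gt_boxes):
    """Mean per-frame IoU between a tracker's boxes and the ground truth."""
    overlaps = [iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    return sum(overlaps) / len(overlaps)
```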


Subjects
Robotic Surgical Procedures, Humans, Benchmarking, Algorithms, Endoscopy, Computer-Assisted Image Processing/methods
7.
IEEE Trans Pattern Anal Mach Intell ; 44(7): 3779-3790, 2022 Jul.
Article in English | MEDLINE | ID: mdl-33566758

ABSTRACT

Among the greatest challenges of minimally invasive surgery (MIS) is the inadequate visualisation of the surgical field through keyhole incisions. Moreover, occlusions caused by instruments or bleeding can completely obfuscate anatomical landmarks, reduce surgical vision and lead to iatrogenic injury. The aim of this paper is to propose an unsupervised end-to-end deep learning framework, based on fully convolutional neural networks, to reconstruct the view of the surgical scene under occlusions and provide the surgeon with intraoperative see-through vision in these areas. A novel generative densely connected encoder-decoder architecture has been designed which enables the incorporation of temporal information by introducing a new type of 3D convolution, the so-called 3D partial convolution, to enhance the learning capabilities of the network and fuse temporal and spatial information. To train the proposed framework, a unique loss function has been proposed which combines feature matching, reconstruction, style, temporal and adversarial loss terms for generating high-fidelity image reconstructions. Advancing the state of the art, our method can reconstruct the underlying view obstructed by irregularly shaped occlusions of divergent size, location and orientation. The proposed method has been validated on in vivo MIS video data, as well as natural scenes, across a range of occlusion-to-image ratios (OIR). It has also been compared against the latest video inpainting models in terms of image reconstruction quality using different assessment metrics. The performance evaluation analysis verifies the superiority of our proposed method and its potential clinical value.
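
The 3D partial convolution extends the 2D partial convolution, which convolves only over valid (non-occluded) pixels and renormalises by the valid-pixel count. Below is a 2D sketch in PyTorch (our simplification; the paper's version adds the temporal dimension).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Conv2d):
    """2D partial convolution: convolve only over valid (non-occluded)
    pixels and renormalise by the number of valid pixels per window.
    Instantiate with bias=False to keep the renormalisation exact.
    """
    def forward(self, x, mask):
        # x: (B, C, H, W) images; mask: (B, 1, H, W), 1 = valid, 0 = hole.
        with torch.no_grad():
            ones = torch.ones(1, 1, *self.kernel_size, device=x.device)
            valid = F.conv2d(mask, ones, stride=self.stride,
                             padding=self.padding)   # valid count per window
        out = super().forward(x * mask)              # convolve masked input
        scale = ones.numel() / valid.clamp(min=1.0)  # renormalisation factor
        out = out * scale * (valid > 0)              # zero where nothing valid
        return out, (valid > 0).float()              # output and updated mask

pconv = PartialConv2d(3, 16, kernel_size=3, padding=1, bias=False)
y, new_mask = pconv(torch.rand(1, 3, 64, 64), torch.ones(1, 1, 64, 64))
```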


Subjects
Algorithms, Neural Networks (Computer), Computer-Assisted Image Processing/methods
8.
IEEE Trans Med Robot Bionics ; 4(2): 335-338, 2022 May.
Article in English | MEDLINE | ID: mdl-36148137

ABSTRACT

Surgical instrument segmentation and depth estimation are crucial steps towards improving autonomy in robotic surgery. Most recent works treat these problems separately, making deployment challenging. In this paper, we propose a unified framework for depth estimation and surgical tool segmentation in laparoscopic images. The network has an encoder-decoder architecture and comprises two branches for simultaneously performing depth estimation and segmentation. To train the network end to end, we propose a new multi-task loss function that effectively learns to estimate depth in an unsupervised manner, while requiring only semi-ground truth for surgical tool segmentation. We conducted extensive experiments on different datasets to validate the framework. The results showed that the end-to-end network successfully improved the state of the art for both tasks while reducing deployment complexity.
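
A hedged sketch of how such a multi-task loss might combine an unsupervised photometric depth term with a supervised segmentation term; the weights and the specific terms are our assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def multi_task_loss(pred_disp, warped_right, left_img,
                    pred_seg, semi_gt_seg, w_depth=1.0, w_seg=1.0):
    """Joint loss sketch for unsupervised depth plus supervised segmentation.

    pred_disp: (B, 1, H, W) predicted disparity; warped_right: the right
    image warped into the left view using pred_disp; pred_seg: (B, 1, H, W)
    segmentation logits; semi_gt_seg: float tool masks in [0, 1].
    """
    # Unsupervised depth: warping the right image with the predicted
    # disparity should reproduce the left image (photometric consistency).
    photometric = F.l1_loss(warped_right, left_img)
    # Simple horizontal smoothness prior on the disparity map.
    smoothness = (pred_disp[..., :, 1:] - pred_disp[..., :, :-1]).abs().mean()
    # Supervised segmentation against (semi) ground-truth tool masks.
    seg = F.binary_cross_entropy_with_logits(pred_seg, semi_gt_seg)
    return w_depth * (photometric + 0.1 * smoothness) + w_seg * seg
```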

9.
Eur Urol Focus ; 8(2): 613-622, 2022 Mar.
Article in English | MEDLINE | ID: mdl-33941503

ABSTRACT

CONTEXT: As the role of AI in healthcare continues to expand, there is increasing awareness of the potential pitfalls of AI and the need for guidance to avoid them. OBJECTIVES: To provide ethical guidance on developing narrow AI applications for surgical training curricula. We define standardised approaches to developing AI-driven applications in surgical training that address currently recognised ethical implications of utilising AI on surgical data. We aim to describe an ethical approach based on the current evidence, understanding of AI, and available technologies, by seeking consensus from an expert committee. EVIDENCE ACQUISITION: The project was carried out in 3 phases: (1) a steering group was formed to review the literature and summarise the current evidence; (2) a larger expert panel convened and discussed the ethical implications of AI application based on the current evidence, and a survey was created with input from panel members; (3) panel-based consensus findings were determined using an online Delphi process to formulate guidance. Thirty experts in AI implementation and/or training, including clinicians, academics, and industry representatives, contributed. The Delphi process underwent 3 rounds. Additions to the second- and third-round surveys were formulated based on the answers and comments from previous rounds. Consensus opinion was defined as ≥ 80% agreement. EVIDENCE SYNTHESIS: There was a 100% response rate across all 3 rounds. The resulting guidance showed good internal consistency, with a Cronbach alpha of >0.8. There was 100% consensus that there is currently a lack of guidance on the utilisation of AI in the setting of robotic surgical training. Consensus was reached in multiple areas, including: (1) data protection and privacy; (2) reproducibility and transparency; (3) predictive analytics; (4) inherent biases; (5) areas of training most likely to benefit from AI. CONCLUSIONS: Using the Delphi methodology, we achieved international consensus among experts and reached content validation for guidance on the ethical implications of AI in surgical training, providing an ethical foundation for launching narrow AI applications in surgical training. This guidance will require further validation. PATIENT SUMMARY: As the role of AI in healthcare continues to expand, there is increasing awareness of the potential pitfalls of AI and the need for guidance to avoid them. In this paper we provide guidance on the ethical implications of AI in surgical training.
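
The internal-consistency figure can be reproduced with a standard Cronbach's alpha computation, sketched below on hypothetical panel ratings.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency.

    item_scores: (n_respondents, n_items) array of ratings.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    """
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical ratings from 5 panellists on 4 guidance statements
alpha = cronbach_alpha([[4, 5, 4, 5], [5, 5, 4, 4], [4, 4, 4, 5],
                        [5, 4, 5, 5], [4, 5, 5, 4]])
```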


Subjects
Robotic Surgical Procedures, Artificial Intelligence, Consensus, Delphi Technique, Humans, Reproducibility of Results
10.
Med Image Anal ; 76: 102306, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34879287

ABSTRACT

Recent developments in data science in general and machine learning in particular have transformed the way experts envision the future of surgery. Surgical Data Science (SDS) is a new research field that aims to improve the quality of interventional healthcare through the capture, organization, analysis and modeling of data. While an increasing number of data-driven approaches and clinical applications have been studied in the fields of radiological and clinical data science, translational success stories are still lacking in surgery. In this publication, we shed light on the underlying reasons and provide a roadmap for future advances in the field. Based on an international workshop involving leading researchers in the field of SDS, we review current practice, key achievements and initiatives, as well as available standards and tools, for a number of topics relevant to the field, namely (1) infrastructure for data acquisition, storage and access in the presence of regulatory constraints, (2) data annotation and sharing, and (3) data analytics. We further complement this technical perspective with (4) a review of currently available SDS products and the translational progress from academia and (5) a roadmap for faster clinical translation and exploitation of the full potential of SDS, based on an international multi-round Delphi process.


Subjects
Data Science, Machine Learning, Humans
11.
World Neurosurg ; 149: e669-e686, 2021 May.
Article in English | MEDLINE | ID: mdl-33588081

ABSTRACT

BACKGROUND/OBJECTIVE: Technical skill acquisition is an essential component of neurosurgical training. Educational theory suggests that optimal learning and improvement in performance depend on the provision of objective feedback. Therefore, the aim of this study was to develop a vision-based framework, based on a novel representation of surgical tool motion and interactions, capable of automated and objective assessment of microsurgical skill. METHODS: Videos were obtained from 1 expert, 6 intermediate, and 12 novice surgeons performing arachnoid dissection in a validated clinical model using a standard operating microscope. A mask region-based convolutional neural network framework was used to segment the tools present within the operative field in each recorded video frame. Tool motion analysis was achieved using novel triangulation metrics. Performance of the framework in classifying skill levels was evaluated using the area under the curve and accuracy. Objective measures of the surgeons' skill levels were also compared using the Mann-Whitney U test, with P < 0.05 considered statistically significant. RESULTS: The area under the curve was 0.977 and the accuracy was 84.21%. A number of differences were found, including experts having a lower median dissector velocity than novices (P = 0.0004; 116.38 vs. 190.38 ms⁻¹) and a smaller inter-tool tip distance (median 46.78 vs. 75.92; P = 0.0002) compared with novices. CONCLUSIONS: Automated and objective analysis of microsurgery is feasible using a mask region-based convolutional neural network and a novel representation of tool motion and interaction. This may support technical skills training and assessment in neurosurgery.
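
A rough sketch of the kind of tool-motion metrics and Mann-Whitney U comparison described above, computed from per-frame tool-tip coordinates; the frame rate, units, and sample values are hypothetical.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def median_tip_speed(tip_xy, fps=25.0):
    """Median speed of one instrument tip from per-frame (x, y) positions."""
    tip_xy = np.asarray(tip_xy, dtype=float)
    step = np.linalg.norm(np.diff(tip_xy, axis=0), axis=1)  # displacement/frame
    return float(np.median(step * fps))                     # units per second

def median_inter_tool_distance(tip_a, tip_b):
    """Median distance between two instrument tips across frames."""
    diffs = np.asarray(tip_a, dtype=float) - np.asarray(tip_b, dtype=float)
    return float(np.median(np.linalg.norm(diffs, axis=1)))

# Hypothetical per-surgeon metric samples; compare groups with Mann-Whitney U
novice_speeds = [190.1, 205.3, 188.9, 197.4]
expert_speeds = [115.2, 120.4, 118.8, 113.9]
u_stat, p_value = mannwhitneyu(expert_speeds, novice_speeds,
                               alternative="two-sided")
```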


Subjects
Clinical Competence, Deep Learning, Microsurgery/standards, Neurosurgery/education, Neurosurgical Procedures/standards, Video Recording, Artificial Intelligence, Automation, Humans, Anatomic Models, Reproducibility of Results, Simulation Training
12.
Int J Comput Assist Radiol Surg ; 15(5): 819-826, 2020 May.
Article in English | MEDLINE | ID: mdl-32333360

ABSTRACT

PURPOSE: In the last decade, there has been a great effort to bring mixed reality (MR) into the operating room to assist surgeons intraoperatively. However, progress towards this goal is still at an early stage. The aim of this paper is to propose an MR visualisation platform which projects multiple imaging modalities to assist intraoperative surgical guidance. METHODOLOGY: In this work, an MR visualisation platform has been developed for the Microsoft HoloLens. The platform contains three visualisation components, namely a 3D organ model, volumetric data, and tissue morphology captured with intraoperative imaging modalities. Furthermore, a set of novel interactive functionalities has been designed, including scrolling through volumetric data and adjusting the transparency of the virtual objects. A pilot user study has been conducted to evaluate the usability of the proposed platform in the operating room. The participants were allowed to interact with the visualisation components and test the different functionalities. Each surgeon answered a questionnaire on the usability of the platform and provided their feedback and suggestions. RESULTS: The analysis of the surgeons' scores showed that the 3D model is the most popular MR visualisation component and neurosurgery is the most relevant speciality for this platform. The majority of the surgeons found the proposed visualisation platform intuitive and would use it in their operating rooms for intraoperative surgical guidance. Our platform has several promising potential clinical applications, including vascular neurosurgery. CONCLUSION: The presented pilot study verified the potential of the proposed visualisation platform and its usability in the operating room. Our future work will focus on enhancing the platform by incorporating the surgeons' suggestions and conducting an extensive evaluation with a large group of surgeons.


Subjects
Augmented Reality, Neurosurgical Procedures/methods, Computer-Assisted Surgery/methods, Humans, Pilot Projects, User-Computer Interface
13.
Int J Comput Assist Radiol Surg ; 15(8): 1389-1397, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32556919

ABSTRACT

PURPOSE: In surgical oncology, complete cancer resection and lymph node identification are challenging due to the lack of reliable intraoperative visualization. Recently, endoscopic radio-guided cancer resection has been introduced, in which a novel tethered laparoscopic gamma detector can be used to determine the location of tracer activity, complementing preoperative nuclear imaging data and endoscopic imaging. However, these probes do not clearly indicate where on the tissue surface the activity originates, making localization of pathological sites difficult and increasing the mental workload of the surgeons. Therefore, a robust real-time gamma probe tracking system integrated with augmented reality is proposed. METHODS: A dual-pattern marker has been attached to the gamma probe, which combines chessboard vertices and circular dots for higher detection accuracy. Both patterns are detected simultaneously, based on blob detection and a pixel intensity-based vertices detector, and used to estimate the pose of the probe. Temporal information is incorporated into the framework to reduce tracking failure. Furthermore, we utilized the 3D point cloud generated from structure from motion to find the intersection between the probe axis and the tissue surface, which, when presented as an augmented image, can provide visual feedback to the surgeons. RESULTS: The method has been validated against ground-truth probe pose data generated using the OptiTrack system. When detecting the orientation of the pose using the circular dots and the chessboard vertices alone, the mean errors obtained are [Formula: see text] and [Formula: see text], respectively. As for the translation, the mean errors for the two patterns are 1.78 mm and 1.81 mm. The detection limits for pitch, roll and yaw are [Formula: see text] and [Formula: see text]-[Formula: see text]-[Formula: see text]. CONCLUSION: The performance evaluation results show that this dual-pattern marker can provide high detection rates, as well as more accurate pose estimation and a larger workspace than previously proposed hybrid markers. The augmented reality will be used to provide visual feedback to the surgeons on the location of the affected lymph nodes or tumor.
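
Pose estimation from the detected dual-pattern points can be sketched with OpenCV's PnP solver, as below; the specific solver flag and the detector supplying the 2D-3D correspondences are our assumptions.

```python
import cv2
import numpy as np

def estimate_probe_pose(object_pts, image_pts, camera_matrix, dist_coeffs):
    """Estimate the marker pose from detected 2D pattern points and their
    known 3D positions on the marker, via OpenCV's PnP solver. The marker
    geometry and point correspondences are assumed given by the detector.
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_pts, dtype=np.float32),   # (N, 3) marker-frame points
        np.asarray(image_pts, dtype=np.float32),    # (N, 2) detected pixels
        camera_matrix, dist_coeffs,
        flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed")
    rot, _ = cv2.Rodrigues(rvec)   # rotation matrix of marker w.r.t. camera
    return rot, tvec
```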


Subjects
Laparoscopy/methods, Prostatic Neoplasms/surgery, Computer-Assisted Surgery/methods, Gamma Rays, Humans, Male, Minimally Invasive Surgical Procedures/methods
14.
Int J Comput Assist Radiol Surg ; 13(8): 1187-1199, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29948845

ABSTRACT

PURPOSE: Probe-based confocal laser endomicroscopy (pCLE) enables in vivo, in situ tissue characterisation without changes to the surgical setting and simplifies the oncological surgical workflow. The potential of this technique in identifying residual cancer tissue and improving resection rates of brain tumours has recently been verified in pilot studies. The interpretation of endomicroscopic information is challenging, particularly for surgeons who do not themselves routinely review histopathology. Also, the diagnosis can be examiner-dependent, leading to considerable inter-observer variability. Therefore, automatic tissue characterisation with pCLE would support the surgeon in establishing a diagnosis as well as guide robot-assisted intervention procedures. METHODS: The aim of this work is to propose a deep learning-based framework for brain tissue characterisation for context-aware diagnosis support in neurosurgical oncology. An efficient representation of the context information of pCLE data is presented by exploring state-of-the-art CNN models with different tuning configurations. A novel video classification framework, based on the combination of convolutional layers with long-range temporal recursion, has been proposed to estimate the probability of each tumour class. The video classification accuracy is compared for different network architectures, data representations, and video segmentation methods. RESULTS: We demonstrate the application of the proposed deep learning framework to classify glioblastoma and meningioma brain tumours based on endomicroscopic data. Results show significant improvement of our proposed image classification framework over state-of-the-art feature-based methods. The use of video data further improves the classification performance, achieving an accuracy of 99.49%. CONCLUSIONS: This work demonstrates that deep learning can provide an efficient representation of pCLE data and accurately classify glioblastoma and meningioma tumours. The performance evaluation analysis shows the potential clinical value of the technique.
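
A minimal sketch of the convolutional-plus-recurrent video classifier idea (frame-wise CNN features fed to an LSTM); the ResNet-18 backbone and layer sizes are our assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CNNLSTMClassifier(nn.Module):
    """Frame-wise CNN features fed to an LSTM for video-level classification,
    in the spirit of combining convolutional layers with long-range temporal
    recursion. Backbone and sizes are illustrative assumptions.
    """
    def __init__(self, num_classes=2, hidden=256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()               # 512-d feature per frame
        self.cnn = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):                     # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])                 # one logit vector per video

logits = CNNLSTMClassifier()(torch.rand(2, 8, 3, 224, 224))
```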


Subjects
Brain Neoplasms/diagnostic imaging, Brain Neoplasms/surgery, Endoscopy, Confocal Microscopy, Clinical Decision Support Systems, Humans, Observer Variation
15.
Med Image Anal ; 30: 144-157, 2016 May.
Article in English | MEDLINE | ID: mdl-26970592

ABSTRACT

With recent advances in biophotonics, techniques such as narrow-band imaging, confocal laser endomicroscopy, fluorescence spectroscopy, and optical coherence tomography can be combined with normal white-light endoscopes to provide in vivo microscopic tissue characterisation, potentially avoiding the need for offline histological analysis. Despite the advantages of these techniques in providing online optical biopsy in situ, it is challenging for gastroenterologists to retarget the optical biopsy sites during endoscopic examinations, because optical biopsy does not leave any mark on the tissue. Furthermore, typical endoscopic cameras have only a limited field-of-view, and the biopsy sites often enter or exit the camera view as the endoscope moves. In this paper, a framework for online tracking and retargeting is proposed based on the concept of tracking-by-detection. An online detection cascade is proposed in which a random forest classifier operates on a random binary descriptor built from Haar-like features. For robust retargeting, we have also proposed a RANSAC-based location verification component that incorporates shape context. The proposed detection cascade can be readily integrated with other temporal trackers. Detailed performance evaluation on in vivo gastrointestinal video sequences demonstrates the performance advantage of the proposed method over the current state-of-the-art.


Subjects
Algorithms, Gastrointestinal Endoscopy/methods, Computer-Assisted Image Interpretation/methods, Image-Guided Biopsy/methods, Automated Pattern Recognition/methods, Image Enhancement/methods, Online Systems, Reproducibility of Results, Sensitivity and Specificity
16.
Int J Comput Assist Radiol Surg ; 11(6): 929-36, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27008473

ABSTRACT

PURPOSE: In microsurgery, accurate recovery of the deformation of the surgical environment is important for mitigating the risk of inadvertent tissue damage and avoiding instrument maneuvers that may cause injury. The analysis of intraoperative microscopic data can allow the estimation of tissue deformation and provide the surgeon with useful feedback on the instrument forces exerted on the tissue. In practice, vision-based recovery of tissue deformation during tool-tissue interaction can be challenging due to tissue elasticity and unpredictable motion. METHODS: The aim of this work is to propose an approach for deformation recovery based on quasi-dense 3D stereo reconstruction. The proposed framework incorporates a new stereo correspondence method for estimating the underlying 3D structure. Probabilistic tracking and surface mapping are used to estimate 3D point correspondences across time and recover localized tissue deformations in the surgical site. RESULTS: We demonstrate the application of this method to estimating forces exerted on tissue surfaces. A clinically relevant experimental setup was used to validate the proposed framework on phantom data. The quantitative and qualitative performance evaluation results show that the proposed 3D stereo reconstruction and deformation recovery methods achieve submillimeter accuracy. The force-displacement model also provides accurate estimates of the exerted forces. CONCLUSIONS: A novel approach for tissue deformation recovery has been proposed based on reliable quasi-dense stereo correspondences. The proposed framework does not rely on additional equipment, allowing seamless integration with the existing surgical workflow. The performance evaluation analysis shows the potential clinical value of the technique.


Subjects
Three-Dimensional Imaging/methods, Microsurgery/methods, Neurosurgical Procedures/methods, Computer-Assisted Surgery/methods, Humans, Theoretical Models, Imaging Phantoms
17.
Int J Comput Assist Radiol Surg ; 11(4): 553-68, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26450107

ABSTRACT

PURPOSE: Advances in technology and computing play an increasingly important role in the evolution of modern surgical techniques and paradigms. This article reviews the current role of machine learning (ML) techniques in the context of surgery, with a focus on surgical robotics (SR). We also provide a perspective on the future possibilities for enhancing the effectiveness of procedures by integrating ML in the operating room. METHODS: The review is focused on ML techniques directly applied to surgery, surgical robotics, surgical training and assessment. The widespread use of ML methods in diagnosis and medical image computing is beyond the scope of the review. Searches were performed on PubMed and IEEE Xplore using combinations of keywords: ML, surgery, robotics, surgical and medical robotics, skill learning, skill analysis, and learning to perceive. RESULTS: Studies making use of ML methods in the context of surgery are increasingly being reported. In particular, there is an increasing interest in using ML to develop tools for understanding and modelling surgical skill and competence or for extracting surgical workflow. Many researchers are beginning to integrate this understanding into the control of recent surgical robots and devices. CONCLUSION: ML is an expanding field. It is popular as it allows efficient processing of vast amounts of data for interpretation and real-time decision making. Already widely used in imaging and diagnosis, ML is believed to have an important role to play in surgery and interventional treatments as well. In particular, ML could become a game changer in the conception of cognitive surgical robots. Such robots, endowed with cognitive skills, would assist the surgical team on a cognitive level as well, for example by lowering the team's mental load. ML could, for instance, help extract surgical skill, learned through demonstration by human experts, and transfer it to robotic skills. Such intelligent surgical assistance would significantly surpass the state of the art in surgical robotics. Current devices possess no intelligence whatsoever and are merely advanced and expensive instruments.


Subjects
Machine Learning, Robotics/instrumentation, Equipment Design, Humans
19.
Int J Comput Assist Radiol Surg ; 10(6): 801-13, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25903774

ABSTRACT

PURPOSE: Bronchoscopy is a standard technique for airway examination, providing a minimally invasive approach to both the diagnosis and treatment of pulmonary diseases. To target lesions identified preoperatively, it is necessary to register the location of the bronchoscope to the CT bronchial model during the examination. Existing vision-based techniques rely on registration between virtually rendered endobronchial images and videos based on image intensity or surface geometry. However, intensity-based approaches are sensitive to illumination artefacts, while gradient-based approaches are vulnerable to surface texture. METHODS: In this paper, depth information is employed in a novel way to achieve continuous and robust camera localisation. Surface shading is used to recover depth from endobronchial images. The pose of the bronchoscopic camera is then estimated by maximising the similarity between the depth recovered from a video image and that captured from a virtual camera projection of the CT model. Normalised cross-correlation and mutual information have both been used and compared as similarity measures. RESULTS: The proposed depth-based tracking approach has been validated on both phantom and in vivo data. It outperforms existing vision-based registration methods, resulting in a smaller pose estimation error for the bronchoscopic camera. It is shown that the proposed approach is more robust to illumination artefacts and surface texture and less sensitive to camera pose initialisation. CONCLUSIONS: A reliable camera localisation technique has been proposed based on depth information for bronchoscopic navigation. Qualitative and quantitative performance evaluations show the clinical value of the proposed framework.
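
The similarity maximisation can be illustrated with a normalised cross-correlation between the two depth maps, sketched below; the render_depth helper in the usage comment is hypothetical.

```python
import numpy as np

def normalised_cross_correlation(depth_video, depth_virtual):
    """NCC between a depth map recovered from the video frame and one
    rendered from the CT model; the pose search maximises this score.
    """
    a = depth_video - depth_video.mean()
    b = depth_virtual - depth_virtual.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

# Pose search sketch: evaluate candidate camera poses and keep the best
# (render_depth is a hypothetical renderer for the CT bronchial model).
# best_pose = max(candidate_poses,
#                 key=lambda p: normalised_cross_correlation(frame_depth,
#                                                            render_depth(p)))
```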


Subjects
Bronchoscopes, Bronchoscopy/methods, Three-Dimensional Imaging/methods, Algorithms, Humans, Lighting, Reproducibility of Results
20.
Pulm Circ ; 5(3): 498-505, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26401250

ABSTRACT

In a subgroup of patients with systemic sclerosis (SSc), vasospasm affecting the pulmonary circulation may contribute to worsening respiratory symptoms, including dyspnea. Noninvasive assessment of pulmonary blood flow (PBF), utilizing inert-gas rebreathing (IGR) and dual-energy computed tomography pulmonary angiography (DE-CTPA), may be useful for identifying pulmonary vasospasm. Thirty-one participants (22 SSc patients and 9 healthy volunteers) underwent PBF assessment with IGR and DE-CTPA at baseline and after provocation with a cold-air inhalation challenge (CACh). Before the study investigations, participants were assigned to subgroups: group A included SSc patients who reported increased breathlessness after exposure to cold air (n = 11), group B included SSc patients without cold-air sensitivity (n = 11), and group C comprised the healthy volunteers. Median change in PBF from baseline was compared between groups A, B, and C after CACh. Compared with groups B and C, group A showed a significant decline in median PBF from baseline at 10 minutes (-10%; range: -52.2% to 4.0%; P < 0.01), 20 minutes (-17.4%; -27.9% to 0.0%; P < 0.01), and 30 minutes (-8.5%; -34.4% to 2.0%; P < 0.01) after CACh. There was no significant difference in median PBF change for groups B or C at any time point, and no change in pulmonary perfusion on DE-CTPA. The reduction in pulmonary blood flow following CACh suggests that pulmonary vasospasm may be present in a subgroup of patients with SSc and may contribute to worsening dyspnea on exposure to cold.
