1.
Article in English | MEDLINE | ID: mdl-38652416

ABSTRACT

PURPOSE: Obtaining large volumes of medical images, required for deep learning development, can be challenging in rare pathologies. Image augmentation and preprocessing offer viable solutions. This work explores the case of necrotising enterocolitis (NEC), a rare but life-threatening condition affecting premature neonates, with challenging radiological diagnosis. We investigate data augmentation and preprocessing techniques and propose two optimised pipelines for developing reliable computer-aided diagnosis models on a limited NEC dataset. METHODS: We present a NEC dataset of 1090 Abdominal X-rays (AXRs) from 364 patients and investigate the effect of geometric augmentations, colour scheme augmentations and their combination for NEC classification based on the ResNet-50 backbone. We introduce two pipelines based on colour contrast and edge enhancement, to increase the visibility of subtle, difficult-to-identify, critical NEC findings on AXRs and achieve robust accuracy in a challenging three-class NEC classification task. RESULTS: Our results show that geometric augmentations improve performance, with Translation achieving +6.2%, while Flipping and Occlusion decrease performance. Colour augmentations, like Equalisation, yield modest improvements. The proposed Pr-1 and Pr-2 pipelines enhance model accuracy by +2.4% and +1.7%, respectively. Combining Pr-1/Pr-2 with geometric augmentation, we achieve a maximum performance increase of 7.1%, achieving robust NEC classification. CONCLUSION: Based on an extensive validation of preprocessing and augmentation techniques, our work showcases the previously unreported potential of image preprocessing in AXR classification tasks with limited datasets. Our findings can be extended to other medical tasks for designing reliable classifier models with limited X-ray datasets. Ultimately, we also provide a benchmark for automated NEC detection and classification from AXRs.
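The geometric and colour augmentations explored above can be illustrated with a minimal NumPy sketch. This is not the paper's Pr-1/Pr-2 pipelines, whose exact operations are not specified in the abstract; it shows only a zero-padded translation and histogram equalisation for an 8-bit greyscale radiograph.

```python
import numpy as np

def translate(img, dx, dy, fill=0):
    """Shift an 8-bit greyscale image by (dx, dy) pixels, padding vacated areas with `fill`."""
    out = np.full_like(img, fill)
    h, w = img.shape
    ys, xs = slice(max(dy, 0), min(h + dy, h)), slice(max(dx, 0), min(w + dx, w))
    ys_src, xs_src = slice(max(-dy, 0), min(h - dy, h)), slice(max(-dx, 0), min(w - dx, w))
    out[ys, xs] = img[ys_src, xs_src]
    return out

def equalise(img):
    """Histogram equalisation for an 8-bit greyscale image (uint8 in, uint8 out)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalise to [0, 1]
    lut = (cdf * 255).astype(np.uint8)                 # lookup table mapping old -> new intensity
    return lut[img]
```

Contrast-stretching operations like `equalise` are one simple way to make subtle, low-contrast findings more visible before classification.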

2.
Cancers (Basel) ; 16(6)2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38539533

ABSTRACT

Post-operative tumour progression in patients with non-functioning pituitary neuroendocrine tumours (NF PitNETs) is variable. The aim of this study was to use machine learning (ML) models to improve the prediction of post-operative outcomes in patients with NF PitNETs. We studied data from 383 patients who underwent surgery with or without radiotherapy, with a follow-up period of between 6 months and 15 years. ML models, including k-nearest neighbour (KNN), support vector machine (SVM), and decision tree, outperformed parametric statistical modelling using logistic regression in predicting tumour progression, with SVM achieving the highest performance. The strongest predictor of tumour progression was the extent of surgical resection; patient age, tumour volume, and the use of radiotherapy were also influential. No features were associated with tumour recurrence following a complete resection. In conclusion, this study demonstrates the potential of ML models in predicting post-operative outcomes for patients with NF PitNETs. Future work should include additional, more granular, multicentre data, including imaging and operative video data.
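The model comparison described above can be sketched with scikit-learn on synthetic stand-in data. The study's real features (extent of resection, age, tumour volume, radiotherapy use) and hyperparameters are not given in the abstract, so everything below is illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for tabular clinical features: 383 "patients", binary outcome.
X, y = make_classification(n_samples=383, n_features=8, n_informative=4,
                           random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),  # parametric baseline
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(),
    "decision tree": DecisionTreeClassifier(random_state=0),
}

# 5-fold cross-validated accuracy for each model.
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.3f}")
```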

3.
Br J Surg ; 111(1)2024 Jan 03.
Article in English | MEDLINE | ID: mdl-37951600

ABSTRACT

BACKGROUND: There is a need to standardize training in robotic surgery, including objective assessment for accreditation. This systematic review aimed to identify objective tools for technical skills assessment, providing evaluation statuses to guide research and inform implementation into training curricula. METHODS: A systematic literature search was conducted in accordance with the PRISMA guidelines. Ovid Embase/Medline, PubMed and Web of Science were searched. Inclusion criterion: robotic surgery technical skills tools. Exclusion criteria: non-technical, laparoscopy or open skills only. Manual tools and automated performance metrics (APMs) were analysed using Messick's concept of validity and the Oxford Centre of Evidence-Based Medicine (OCEBM) Levels of Evidence and Recommendation (LoR). A bespoke tool analysed artificial intelligence (AI) studies. The Modified Downs-Black checklist was used to assess risk of bias. RESULTS: Two hundred and forty-seven studies were analysed, identifying: 8 global rating scales, 26 procedure-/task-specific tools, 3 main error-based methods, 10 simulators, 28 studies analysing APMs and 53 AI studies. The Global Evaluative Assessment of Robotic Skills and the da Vinci Skills Simulator were the most evaluated tools at LoR 1 (OCEBM). Three procedure-specific tools, 3 error-based methods and 1 non-simulator APM reached LoR 2. AI models estimated outcomes (skill or clinical), demonstrating superior accuracy in the laboratory, where 60 per cent of methods reported accuracies over 90 per cent, compared with real surgery, where accuracies ranged from 67 to 100 per cent. CONCLUSIONS: Manual and automated assessment tools for robotic surgery are not well validated and require further evaluation before use in accreditation processes. PROSPERO registration ID: CRD42022304901.


BACKGROUND: Robotic surgery is increasingly used worldwide to treat many different diseases. The robot is controlled by a surgeon, which may give them greater precision and better outcomes for patients. However, surgeons' robotic skills should be assessed properly, to make sure patients are safe, to improve feedback and for exam assessments for certification to indicate competency. This should be done by experts, using assessment tools that have been agreed upon and proven to work. AIM: This review's aim was to find and explain which training and examination tools are best for assessing surgeons' robotic skills and to find out what gaps remain requiring future research. METHOD: This review searched for all available studies looking at assessment tools in robotic surgery and summarized their findings using several different methods. FINDINGS AND CONCLUSION: Two hundred and forty-seven studies were looked at, finding many assessment tools. Further research is needed for operation-specific and automatic assessment tools before they should be used in the clinical setting.


Subjects
Laparoscopy, Robotic Surgical Procedures, Robotics, Humans, Robotic Surgical Procedures/education, Artificial Intelligence, Clinical Competence, Laparoscopy/education
4.
Anesthesiology ; 140(1): 85-101, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-37944114

ABSTRACT

BACKGROUND: The utilization of artificial intelligence and machine learning as diagnostic and predictive tools in perioperative medicine holds great promise, and many studies have been performed in recent years to explore their potential. The purpose of this systematic review is to assess the current state of machine learning in perioperative medicine, its utility in the prediction of complications and prognostication, and its limitations related to bias and validation. METHODS: A multidisciplinary team of clinicians and engineers conducted a systematic review using the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) protocol. Multiple databases were searched, including Scopus, Cumulative Index to Nursing and Allied Health Literature (CINAHL), the Cochrane Library, PubMed, Medline, Embase, and Web of Science. The systematic review focused on study design, type of machine learning model used, validation techniques applied, and reported model performance on prediction of complications and prognostication. This review further classified outcomes and machine learning applications using an ad hoc classification system. The Prediction model Risk Of Bias Assessment Tool (PROBAST) was used to assess risk of bias and applicability of the studies. RESULTS: A total of 103 studies were identified. The models reported in the literature were primarily based on single-center validations (75%), with only 13% externally validated across multiple centers. Most of the mortality models demonstrated a limited ability to discriminate and classify effectively. The PROBAST assessment indicated a high risk of systematic errors in predicted outcomes and artificial intelligence or machine learning applications. CONCLUSIONS: This systematic review indicates that the application of machine learning in perioperative medicine is still at an early stage. While many studies suggest potential utility, several key challenges must first be overcome before its introduction into clinical practice.


Subjects
Artificial Intelligence, Perioperative Medicine, Bias, Databases, Factual, Machine Learning
5.
Br J Hosp Med (Lond) ; 84(12): 1-5, 2023 Dec 02.
Article in English | MEDLINE | ID: mdl-38153019

ABSTRACT

Artificial intelligence is paving the way in contemporary medical advances, with the potential to revolutionise orthopaedic surgical care. By harnessing the power of complex algorithms, artificial intelligence yields outputs that have diverse applications including, but not limited to, identifying implants, diagnostic imaging for fracture and tumour recognition, prognostic tools through the use of electronic medical records, assessing arthroplasty outcomes, length of hospital stay and economic costs, monitoring the progress of functional rehabilitation, and innovative surgical training via simulation. However, amid the promising potential and enthusiasm surrounding artificial intelligence, clinicians should understand its limitations, and caution is needed before artificial intelligence-driven tools are introduced to clinical practice.


Subjects
Artificial Intelligence, Orthopedics, Humans, Machine Learning, Algorithms, Arthroplasty
6.
Sensors (Basel) ; 23(21)2023 Nov 03.
Article in English | MEDLINE | ID: mdl-37960645

ABSTRACT

Microsurgery serves as the foundation for numerous operative procedures. Given its highly technical nature, the assessment of surgical skill becomes an essential component of clinical practice and microsurgery education. The interaction forces between surgical tools and tissues play a pivotal role in surgical success, making them a valuable indicator of surgical skill. In this study, we employ six distinct deep learning architectures (LSTM, GRU, Bi-LSTM, CLDNN, TCN, Transformer) specifically designed for the classification of surgical skill levels. We use force data obtained from a novel sensorized surgical glove utilized during a microsurgical task. To enhance the performance of our models, we propose six data augmentation techniques. The proposed frameworks are accompanied by a comprehensive analysis, both quantitative and qualitative, including experiments conducted with two cross-validation schemes and interpretable visualizations of the network's decision-making process. Our experimental results show that CLDNN and TCN are the top-performing models, achieving impressive accuracy rates of 96.16% and 97.45%, respectively. This not only underscores the effectiveness of our proposed architectures, but also serves as compelling evidence that the force data obtained through the sensorized surgical glove contains valuable information regarding surgical skill.
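The abstract does not enumerate the six augmentation techniques proposed for the force data, but common time-series augmentations can be sketched in NumPy. Jittering, per-channel scaling and window warping below are assumptions for illustration, not the paper's methods.

```python
import numpy as np

rng = np.random.default_rng(0)

def jitter(x, sigma=0.03):
    """Add Gaussian noise to a force trace of shape (samples, channels)."""
    return x + rng.normal(0.0, sigma, x.shape)

def scale(x, sigma=0.1):
    """Multiply each channel by a random factor drawn around 1."""
    return x * rng.normal(1.0, sigma, (1, x.shape[1]))

def window_warp(x, lo=0.5, hi=2.0):
    """Resample a random window of the trace to a random new length,
    then interpolate the whole trace back to its original length."""
    n = len(x)
    a, b = sorted(rng.integers(0, n, 2))
    if b - a < 2:
        return x.copy()
    new_len = max(2, int((b - a) * rng.uniform(lo, hi)))
    seg = np.array([np.interp(np.linspace(0, b - a - 1, new_len),
                              np.arange(b - a), x[a:b, c])
                    for c in range(x.shape[1])]).T
    warped = np.concatenate([x[:a], seg, x[b:]])
    # stretch/compress back to the original number of samples
    return np.array([np.interp(np.linspace(0, len(warped) - 1, n),
                               np.arange(len(warped)), warped[:, c])
                     for c in range(x.shape[1])]).T
```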


Subjects
Deep Learning, Microsurgery, Microsurgery/education, Microsurgery/methods, Clinical Competence, Surgical Gloves
7.
Bone Joint Res ; 12(7): 447-454, 2023 Jul 10.
Article in English | MEDLINE | ID: mdl-37423607

ABSTRACT

The use of artificial intelligence (AI) is rapidly growing across many domains, of which the medical field is no exception. AI is an umbrella term defining the practical application of algorithms to generate useful output, without the need for human cognition. Owing to the expanding volume of patient information collected, known as 'big data', AI is showing promise as a useful tool in healthcare research and across all aspects of patient care pathways. Practical applications in orthopaedic surgery include: diagnostics, such as fracture recognition and tumour detection; predictive models of clinical and patient-reported outcome measures, such as calculating mortality rates and length of hospital stay; and real-time rehabilitation monitoring and surgical training. However, clinicians should remain cognizant of AI's limitations, as the development of robust reporting and validation frameworks is of paramount importance to prevent avoidable errors and biases. The aim of this review article is to provide a comprehensive understanding of AI and its subfields, as well as to delineate its existing clinical applications in trauma and orthopaedic surgery. Furthermore, this narrative review expands upon the limitations of AI and future directions.

8.
Cancers (Basel) ; 15(10)2023 May 16.
Article in English | MEDLINE | ID: mdl-37345108

ABSTRACT

Post-operative endocrine outcomes in patients with non-functioning pituitary adenoma (NFPA) are variable. The aim of this study was to use machine learning (ML) models to better predict medium- and long-term post-operative hypopituitarism in patients with NFPAs. We included data from 383 patients who underwent surgery with or without radiotherapy for NFPAs, with a follow-up period between 6 months and 15 years. ML models, including k-nearest neighbour (KNN), support vector machine (SVM), and decision tree models, showed a superior ability to predict panhypopituitarism compared with non-parametric statistical modelling (mean accuracy: 0.89; mean AUC-ROC: 0.79), with SVM achieving the highest performance (mean accuracy: 0.94; mean AUC-ROC: 0.88). Pre-operative endocrine function was the strongest feature for predicting panhypopituitarism within 1 year post-operatively, while endocrine outcomes at 1 year post-operatively supported strong predictions of panhypopituitarism at 5 and 10 years post-operatively. Other features found to contribute to panhypopituitarism prediction were age, tumour volume, and the use of radiotherapy. In conclusion, our study demonstrates that ML models show potential in predicting post-operative panhypopituitarism in the medium and long term in patients with NFPAs. Future work will include incorporating additional, more granular data, including imaging and operative video data, across multiple centres.

9.
Int J Comput Assist Radiol Surg ; 18(6): 1033-1041, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37002466

ABSTRACT

PURPOSE: Microsurgical Aneurysm Clipping Surgery (MACS) carries a high risk of intraoperative aneurysm rupture. Automated recognition of instances when the aneurysm is exposed in the surgical video would be a valuable reference point for neuronavigation, indicating phase transitions and, more importantly, designating moments of high rupture risk. This article introduces the MACS dataset, containing 16 surgical videos with frame-level expert annotations, and proposes a learning methodology for surgical scene understanding that identifies video frames with the aneurysm present in the operating microscope's field of view. METHODS: Despite the dataset imbalance (80% aneurysm absent, 20% present), and without explicit region-level annotations, we demonstrate the applicability of Transformer-based deep learning architectures (MACSSwin-T, vidMACSSwin-T) to detect the aneurysm and classify MACS frames accordingly. We evaluate the proposed models in multiple-fold cross-validation experiments with independent sets and in an unseen set of 15 images against 10 human experts (neurosurgeons). RESULTS: Average (across folds) accuracy of 80.8% (range 78.5-82.4%) and 87.1% (range 85.1-91.3%) is obtained for the image- and video-level approaches, respectively, demonstrating that the models effectively learn the classification task. Qualitative evaluation of the models' class activation maps shows these to be localized on the aneurysm's actual location. Depending on the decision threshold, MACSSwin-T achieves 66.7-86.7% accuracy on the unseen images, compared with 82% for the human raters, with moderate to strong correlation. CONCLUSIONS: The proposed architectures show robust performance and, with a threshold adjusted to promote detection of the underrepresented (aneurysm presence) class, accuracy comparable to that of human experts. Our work represents a first step towards landmark detection in MACS, with the aim of alerting surgical teams to high-risk moments so that precautionary measures can be taken to avoid rupture.


Subjects
Aneurysm, Ruptured, Intracranial Aneurysm, Humans, Intracranial Aneurysm/diagnosis, Intracranial Aneurysm/surgery, Microsurgery/methods, Aneurysm, Ruptured/diagnosis, Aneurysm, Ruptured/surgery, Neuronavigation/methods
10.
J Gastroenterol Hepatol ; 38(5): 768-774, 2023 May.
Article in English | MEDLINE | ID: mdl-36652526

ABSTRACT

BACKGROUND AND AIM: Lack of visual recognition of colorectal polyps may lead to interval cancers. The mechanisms contributing to perceptual variation, particularly for subtle and advanced colorectal neoplasia, have scarcely been investigated. We aimed to evaluate visual recognition errors and provide novel mechanistic insights. METHODS: Eleven participants (seven trainees and four medical students) evaluated images from the UCL polyp perception dataset, containing 25 polyps, using eye-tracking equipment. Gaze errors were defined as those where the lesion was not observed according to eye-tracking technology. Cognitive errors occurred when lesions were observed but not recognized as polyps by participants. A video study was also performed including 39 subtle polyps, where polyp recognition performance was compared with a convolutional neural network. RESULTS: Cognitive errors occurred more frequently than gaze errors overall (65.6%), with a significantly higher proportion in trainees (P = 0.0264). In the video validation, the convolutional neural network detected significantly more polyps than trainees and medical students, with per-polyp sensitivities of 79.5%, 30.0%, and 15.4%, respectively. CONCLUSIONS: Cognitive errors were the most common reason for visual recognition errors. The impact of interventions such as artificial intelligence, particularly on different types of perceptual errors, needs further investigation including potential effects on learning curves. To facilitate future research, a publicly accessible visual perception colonoscopy polyp database was created.


Subjects
Colonic Polyps, Colorectal Neoplasms, Humans, Colonic Polyps/diagnosis, Colonic Polyps/pathology, Eye-Tracking Technology, Artificial Intelligence, Colonoscopy/methods, Colorectal Neoplasms/diagnosis, Colorectal Neoplasms/pathology
11.
Med Image Anal ; 84: 102709, 2023 02.
Article in English | MEDLINE | ID: mdl-36549045

ABSTRACT

We propose an endoscopic image mosaicking algorithm that is robust to lighting condition changes, specular reflections, and feature-less scenes. These conditions are especially common in minimally invasive surgery, where the light source moves with the camera to dynamically illuminate close-range scenes. This makes it difficult for a single image registration method to robustly track camera motion and generate consistent mosaics of the expanded surgical scene across different and heterogeneous environments. Instead of relying on one specialised feature extractor or image registration method, we propose to fuse different image registration algorithms according to their uncertainties, formulating the problem as affine pose graph optimisation. This allows us to combine landmarks, dense intensity registration, and learning-based approaches in a single framework. To demonstrate our application we consider deep learning-based optical flow, hand-crafted features, and intensity-based registration; however, the framework is general and could take as input other sources of motion estimation, including other sensor modalities. We validate the performance of our approach on three datasets with very different characteristics to highlight its generalisability, demonstrating the advantages of our proposed fusion framework. While each individual registration algorithm eventually fails drastically on certain surgical scenes, the fusion approach flexibly determines which algorithms to use and in which proportion, to more robustly obtain consistent mosaics.
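The paper formulates the fusion as affine pose-graph optimisation; the core intuition of weighting each registration method by its uncertainty can be sketched, in a much-simplified form, as inverse-variance weighting of per-method motion estimates. This is illustrative only and not the paper's actual optimisation.

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Inverse-variance weighted fusion of per-method motion estimates.

    estimates: (n_methods, d) array, e.g. 2-D translation estimates
    variances: (n_methods,) uncertainty per method (larger = less trusted)
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()  # normalise weights to sum to 1
    return w @ np.asarray(estimates, dtype=float)

# Three hypothetical registration methods estimating the same camera shift:
est = [[1.0, 0.0],   # optical flow
       [1.2, 0.1],   # feature matching
       [5.0, 3.0]]   # intensity method failing on a specular scene
var = [0.1, 0.2, 10.0]  # the failing method reports high uncertainty
print(fuse_estimates(est, var))  # dominated by the two confident methods
```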


Subjects
Algorithms, Endoscopy, Humans, Endoscopy/methods, Motion (Physics), Minimally Invasive Surgical Procedures, Image Processing, Computer-Assisted/methods
12.
IEEE Trans Med Imaging ; 41(11): 3218-3230, 2022 11.
Article in English | MEDLINE | ID: mdl-35675257

ABSTRACT

Reconstructing the 3D geometry of the surgical site and detecting instruments within it are important tasks for surgical navigation systems and robotic surgery automation. Traditional approaches treat each problem in isolation and do not account for the intrinsic relationship between segmentation and stereo matching. In this paper, we present a learning-based framework that jointly estimates disparity and binary tool segmentation masks. The core component of our architecture is a shared feature encoder which allows strong interaction between the aforementioned tasks. Experimentally, we train two variants of our network with different capacities and explore different training schemes, including both multi-task and single-task learning. Our results show that supervising the segmentation task improves our network's disparity estimation accuracy. We demonstrate a domain adaptation scheme where we supervise the segmentation task with monocular data and achieve domain adaptation of the adjacent disparity task, reducing disparity End-Point-Error and depth mean absolute error by 77.73% and 61.73% respectively compared to the pre-trained baseline model. Our best overall multi-task model, trained with both disparity and segmentation data in subsequent phases, achieves 89.15% mean Intersection-over-Union on the RIS test set and 3.18 millimetre depth mean absolute error on the SCARED test set. Our proposed multi-task architecture is real-time, able to process (1280×1024) stereo input and simultaneously estimate disparity maps and segmentation masks at 22 frames per second. The model code and pre-trained models are made available at https://github.com/dimitrisPs/msdesis.


Subjects
Image Processing, Computer-Assisted, Robotic Surgical Procedures, Image Processing, Computer-Assisted/methods, Surgical Instruments, Automation
13.
Endosc Int Open ; 9(2): E171-E180, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33532555

ABSTRACT

Background and study aims: Colonoscopy is a technically challenging procedure that requires extensive training to minimize discomfort and avoid trauma due to its drive mechanism. Our academic team developed a magnetic flexible endoscope (MFE) actuated by magnetic coupling under supervisory robotic control to enable a front-pull maneuvering mechanism, with a motion controller user interface, to minimize colon wall stress and potentially reduce the learning curve. We aimed to evaluate this learning curve and understand the user experience. Methods: Five novices (no endoscopy experience), five experienced endoscopists, and five experienced MFE users each performed 40 trials on a model colon using 1:1 block randomization between a pediatric colonoscope (PCF) and the MFE. Cecal intubation (CI) success, time to cecum, and user experience (NASA task load index) were measured. Learning curves were determined by the number of trials needed to reach minimum and average proficiency, defined as the slowest average CI time by an experienced user and the average CI time by all experienced users, respectively. Results: MFE minimum proficiency was achieved by all five novices (median 3.92 trials) and all five experienced endoscopists (median 2.65 trials). MFE average proficiency was achieved by four novices (median 14.21 trials) and four experienced endoscopists (median 7.00 trials). PCF minimum and average proficiency levels were achieved by only one novice. Novices' perceived workload with the MFE improved significantly after they obtained minimum proficiency. Conclusions: The MFE has a short learning curve for users with no prior experience, requiring relatively few attempts to reach proficiency, and at a reduced perceived workload.

14.
Prenat Diagn ; 41(2): 271-277, 2021 01.
Article in English | MEDLINE | ID: mdl-33103808

ABSTRACT

OBJECTIVE: Widely accepted, validated and objective measures of ultrasound competency have not been established for clinical practice. Outcomes of training curricula are often based on arbitrary thresholds, such as the number of clinical cases completed. We aimed to define metrics against which competency could be measured. METHOD: We undertook a prospective, observational study of obstetric sonographers at a UK University Teaching Hospital. Participants were either experienced in fetal ultrasound (n = 10, >200 ultrasound examinations) or novice operators (n = 10, <25 ultrasound examinations). We recorded probe motion data during the performance of biometry on a commercially available mid-trimester phantom. RESULTS: Dimensionless squared jerk, an assessment of the deliberateness of hand movements that is independent of movement duration, extent, spurious peaks and dimension, differed significantly between groups: 19.26 (SD 3.02) for experienced and 22.08 (SD 1.05, p = 0.01) for novice operators, respectively. Experienced operator performance was associated with a shorter time to task completion, 176.46 s (SD 47.31) compared with 666.94 s (SD 490.36, p = 0.0004) for novice operators. Probe travel was also shorter for experienced operators, 521.23 mm (SD 27.41) versus 2234.82 mm (SD 188.50, p = 0.007). CONCLUSION: Our results represent progress toward an objective assessment of technical skill in obstetric ultrasound. Repeating this methodology in a clinical environment may develop insight into the generalisability of these findings to ultrasound education.
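Dimensionless squared jerk has several formulations in the motor-control literature; one common log-dimensionless variant normalises the integrated squared jerk by movement duration and path length. The exact normalisation used in the study is not given in the abstract, so treat the sketch below as illustrative. Smoother (more deliberate) motion yields a lower value.

```python
import numpy as np

def log_dimensionless_jerk(pos, dt):
    """Log dimensionless squared jerk of a 1-D position trace.

    One common formulation: ln( (T^5 / L^2) * integral of |x'''(t)|^2 dt ),
    with T the movement duration and L the total path length.
    """
    vel = np.gradient(pos, dt)
    jerk = np.gradient(np.gradient(vel, dt), dt)  # numerical third derivative
    T = dt * (len(pos) - 1)                       # movement duration
    L = np.sum(np.abs(np.diff(pos)))              # total path length
    return np.log((T ** 5 / L ** 2) * np.sum(jerk ** 2) * dt)
```

Applied to probe position traces sampled at a fixed rate, a jittery trace scores higher than a smooth one of the same duration and extent.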


Subjects
Clinical Competence, Fetus/diagnostic imaging, Hands, Movement, Ultrasonography, Prenatal/standards, Biometry, Female, Fetus/anatomy & histology, Humans, Phantoms, Imaging, Pregnancy
15.
Comput Methods Programs Biomed ; 192: 105420, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32171151

ABSTRACT

Background and objectives: Automated segmentation and tracking of surgical instruments and catheters under X-ray fluoroscopy hold the potential for enhanced image guidance in catheter-based endovascular procedures. This article presents a novel method for real-time segmentation of catheters and guidewires in 2D X-ray images. We employ Convolutional Neural Networks (CNNs) and propose a transfer learning approach, using synthetic fluoroscopic images, to develop a lightweight version of the U-Net architecture. Our strategy, requiring a small amount of manually annotated data, streamlines the training process and results in a U-Net model which achieves performance comparable to state-of-the-art segmentation with a decreased number of trainable parameters. Methods: The proposed transfer learning approach exploits high-fidelity synthetic images generated from real fluoroscopic backgrounds. We implement a two-stage process, initial end-to-end training and fine-tuning, to develop two versions of our model, using synthetic and phantom fluoroscopic images independently. A small number of manually annotated in-vivo images is employed to fine-tune the deepest 7 layers of the U-Net architecture, producing a network specialized for pixel-wise catheter/guidewire segmentation. The network takes as input a single grayscale image and outputs the segmentation result as a binary mask against the background. Results: Evaluation is carried out with images from in-vivo fluoroscopic video sequences from six endovascular procedures, with different surgical setups. We validate the effectiveness of developing the U-Net models using synthetic data, in tests where fine-tuning and testing in-vivo take place both by dividing data from all procedures into independent fine-tuning/testing subsets and by using different in-vivo sequences. Accurate catheter/guidewire segmentation (average Dice coefficients of ~0.55, ~0.26 and ~0.17) is obtained with both U-Net models. Compared to state-of-the-art CNN models, the proposed U-Net achieves comparable segmentation accuracy (average Dice coefficients within ±5%) while yielding an 84% reduction in testing time. This adds flexibility for real-time operation and makes our network adaptable to increased input resolution. Conclusions: This work presents a new approach to developing CNN models for pixel-wise segmentation of surgical catheters in X-ray fluoroscopy, exploiting synthetic images and transfer learning. Our methodology reduces the need to manually annotate large volumes of training data. This is an important advantage, given that manual pixel-wise annotation is a key bottleneck in developing CNN segmentation models. Combined with a simplified U-Net model, our work yields significant advantages compared with current state-of-the-art solutions.
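The Dice coefficient used above to score catheter/guidewire segmentations is a standard overlap metric; a minimal NumPy implementation (not code from the paper) looks like this:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks (1 = catheter/guidewire pixel).

    Dice = 2 * |pred AND target| / (|pred| + |target|); eps avoids 0/0
    when both masks are empty.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A value of 1.0 means perfect overlap; disjoint masks score near 0, which makes the reported ~0.55 vs ~0.17 spread easy to interpret.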


Subjects
Catheters, Deep Learning, Fluoroscopy, Neural Networks, Computer, Surgery, Computer-Assisted, X-Rays, Humans, Image Processing, Computer-Assisted
16.
Visc Med ; 36(6): 456-462, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33447601

ABSTRACT

BACKGROUND: Multiple types of surgical cameras are used in modern surgical practice and provide a rich visual signal that is used by surgeons to visualize the clinical site and make clinical decisions. This signal can also be used by artificial intelligence (AI) methods to provide support in identifying instruments, structures, or activities both in real-time during procedures and postoperatively for analytics and understanding of surgical processes. SUMMARY: In this paper, we provide a succinct perspective on the use of AI and especially computer vision to power solutions for the surgical operating room (OR). The synergy between data availability and technical advances in computational power and AI methodology has led to rapid developments in the field and promising advances. KEY MESSAGES: With the increasing availability of surgical video sources and the convergence of technologies around video storage, processing, and understanding, we believe clinical solutions and products leveraging vision are going to become an important component of modern surgical capabilities. However, both technical and clinical challenges remain to be overcome before vision-based approaches can be efficiently translated into the clinic.

17.
Pattern Recognit Lett ; 120: 75-81, 2019 Apr 01.
Article in English | MEDLINE | ID: mdl-31007321

ABSTRACT

Computational stereo is one of the classical problems in computer vision. Numerous algorithms and solutions have been reported in recent years, focusing on developing methods for computing similarity, aggregating it to obtain spatial support, and finally optimizing an energy function to find the final disparity. In this paper, we focus on the feature extraction component of the stereo matching architecture and show that standard CNN operations can be used to improve the quality of the features used to find point correspondences. Furthermore, we use a simple spatial aggregation that hugely simplifies the correlation learning problem, allowing us to better evaluate the quality of the features extracted. Our results on benchmark data are compelling and show promising potential even without refining the solution.
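For contrast with the learned features discussed above, the classical similarity/aggregation/selection pipeline can be sketched as naive SAD block matching with winner-takes-all disparity selection. This is a textbook baseline illustrating the pipeline that learned features slot into, not the paper's method.

```python
import numpy as np

def block_match(left, right, max_disp, radius=1):
    """Naive SAD block matching on rectified greyscale stereo pairs.

    For each left-image pixel, compare a (2*radius+1)^2 window against
    horizontally shifted windows in the right image (similarity + spatial
    aggregation), then pick the shift with minimum cost (winner-takes-all).
    """
    h, w = left.shape
    win = 2 * radius + 1
    L = np.pad(left.astype(float), radius, mode="edge")
    R = np.pad(right.astype(float), radius, mode="edge")
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            patch = L[y:y + win, x:x + win]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):   # candidate disparities
                cand = R[y:y + win, x - d:x - d + win]
                cost = np.abs(patch - cand).sum()   # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Replacing the raw-intensity windows with CNN feature maps, as the paper does, keeps this overall structure while making the similarity far more discriminative.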

18.
Lancet Gastroenterol Hepatol ; 4(1): 71-80, 2019 01.
Artigo em Inglês | MEDLINE | ID: mdl-30527583

ABSTRACT

Computer-aided diagnosis offers a promising solution to reduce variation in colonoscopy performance. Pooled miss rates for polyps are as high as 22%, and associated interval colorectal cancers after colonoscopy are of concern. Optical biopsy, whereby in-vivo classification of polyps based on enhanced imaging replaces histopathology, has not been incorporated into routine practice because it is limited by interobserver variability and generally only meets accepted standards in expert settings. Real-time decision-support software has been developed to detect and characterise polyps, and also to offer feedback on the technical quality of inspection. Some of the current algorithms, particularly with recent advances in artificial intelligence techniques, match human expert performance for optical biopsy. In this Review, we summarise the evidence for clinical applications of computer-aided diagnosis and artificial intelligence in colonoscopy.


Subjects
Colonoscopy/methods , Colorectal Neoplasms/diagnosis , Deep Learning , Diagnosis, Computer-Assisted/methods , Intestinal Polyps/diagnosis , Algorithms , Colonoscopy/standards , Colorectal Neoplasms/pathology , Decision Support Techniques , Diagnosis, Computer-Assisted/standards , Humans , Intestinal Polyps/pathology , Quality Control , Software , Spectrometry, Fluorescence
19.
Knee ; 25(6): 1214-1221, 2018 Dec.
Article in English | MEDLINE | ID: mdl-29933932

ABSTRACT

PURPOSE: This study aimed to determine the effect of a simulation course on the gaze fixation strategies of participants performing arthroscopy. METHODS: Participants (n = 16) were recruited from two one-day simulation-based knee arthroscopy courses and were asked to perform a task before and after the course, which involved identifying a series of arthroscopic landmarks. The gaze fixation of the participants was recorded with a wearable eye-tracking system. The time taken to complete the task and the proportion of time participants spent with their gaze fixated on the arthroscopic stack, on the knee model, and away from the stack or knee model were recorded. RESULTS: Participants demonstrated a statistically significant decrease in completion time in their second attempt compared with the first (P = 0.001). In their second attempt, they also demonstrated improved gaze fixation strategies, with a significantly increased amount (P = 0.008) and proportion of time (P = 0.003) spent fixated on the screen versus the knee model. CONCLUSION: Simulation improved arthroscopic skills in orthopaedic surgeons, specifically by improving their gaze control strategies and decreasing the time taken to identify and mark landmarks in an arthroscopic task.


Subjects
Arthroscopy/education , Fixation, Ocular , Knee Joint/surgery , Simulation Training , Clinical Competence , Female , Humans , Male , Models, Anatomic , Orthopedics
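The dwell-time outcome measure in the study above (proportion of time fixated on each area of interest) reduces to a simple normalized count over eye-tracker samples. A minimal sketch, assuming gaze samples have already been labeled with an area of interest; the function name and labels are hypothetical:

```python
from collections import Counter

def dwell_proportions(labels):
    """Proportion of gaze samples falling on each area of interest (AOI).

    labels: sequence of AOI names, one per eye-tracker sample at a fixed
    sampling rate (e.g. "screen", "knee", "away").
    Returns a dict mapping each AOI to its fraction of total samples.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return {aoi: n / total for aoi, n in counts.items()}
```

With a constant sampling rate, sample fractions are equivalent to time fractions, which is why per-AOI proportions can be compared across attempts of different durations.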
20.
Endosc Int Open ; 6(2): E205-E210, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29399619

ABSTRACT

BACKGROUND AND STUDY AIMS: Capsule endoscopy (CE) is invaluable for minimally invasive endoscopy of the gastrointestinal tract; however, several technological limitations remain, including the lack of reliable lesion localization. We present an approach to 3D reconstruction and localization using visual information from 2D CE images. PATIENTS AND METHODS: Colored thumbtacks were secured in rows to the internal wall of a LifeLike bowel model. A PillCam SB3 was calibrated and navigated linearly through the lumen by a high-precision robotic arm. The motion estimation algorithm used data (light falling on the object, the fraction of reflected light, and surface geometry) from 2D CE images in the video sequence to achieve 3D reconstruction of the bowel model at various frames. The ORB-SLAM technique was used for 3D reconstruction and CE localization within the reconstructed model. This algorithm compared pairs of points between images for reconstruction and localization. RESULTS: As the capsule moved through the model bowel, 42 to 66 video frames were obtained per pass. The mean absolute error in the estimated distance travelled by the CE was 4.1 ± 3.9 cm. Our algorithm was able to reconstruct the cylindrical shape of the model bowel with details of the attached thumbtacks. ORB-SLAM successfully reconstructed the bowel wall from simultaneous frames of the CE video. The "track" in the reconstruction corresponded well with the linear forwards-backwards movement of the capsule through the model lumen. CONCLUSION: The reconstruction methods detailed above were able to achieve good-quality reconstruction of the bowel model and localization of the capsule trajectory using information from the CE video and images alone.
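ORB-SLAM, as used in the study above, compares pairs of points between images by matching binary ORB descriptors under the Hamming distance. The sketch below illustrates only that matching principle with plain NumPy (a real SLAM front end such as ORB-SLAM would extract the descriptors from images and feed the matches to pose estimation and triangulation); the function name and threshold are hypothetical.

```python
import numpy as np

def hamming_match(desc_a, desc_b, max_dist=30):
    """Brute-force nearest-neighbour matching of binary descriptors.

    desc_a: (N, 32) uint8 array, desc_b: (M, 32) uint8 array; each row packs
    a 256-bit descriptor (the format ORB uses). Returns (i, j) index pairs
    whose Hamming distance is below max_dist - the point correspondences a
    SLAM system uses for reconstruction and localization.
    """
    # XOR every descriptor pair, then count differing bits per pair.
    xor = desc_a[:, None, :] ^ desc_b[None, :, :]  # shape (N, M, 32)
    dist = np.unpackbits(xor, axis=2).sum(axis=2)  # (N, M) Hamming distances
    best = dist.argmin(axis=1)                     # nearest neighbour per query
    return [(i, int(j)) for i, j in enumerate(best) if dist[i, j] < max_dist]
```

Binary descriptors are popular in capsule and endoscopic settings because the XOR-and-popcount distance is cheap enough to match hundreds of features per frame in real time.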
