1.
Article in English | MEDLINE | ID: mdl-38758289

ABSTRACT

PURPOSE: The recent segment anything model (SAM) has demonstrated impressive performance with point, text or bounding box prompts in various applications. However, in safety-critical surgical tasks, prompting is not possible because (1) per-frame prompts are unavailable for supervised learning, (2) prompting frame-by-frame is unrealistic in a real-time tracking application, and (3) annotating prompts for offline applications is expensive. METHODS: We develop Surgical-DeSAM to generate automatic bounding box prompts for decoupling SAM to obtain instrument segmentation in real-time robotic surgery. We utilise a commonly used detection architecture, DETR, and fine-tune it to obtain bounding box prompts for the instruments. We then employ decoupled SAM (DeSAM) by replacing the image encoder with the DETR encoder, and fine-tune the prompt encoder and mask decoder to obtain instance segmentation of the surgical instruments. To improve detection performance, we adopted the Swin transformer for better feature representation. RESULTS: The proposed method has been validated on two publicly available datasets from the MICCAI surgical instrument segmentation challenges EndoVis 2017 and 2018. Its performance was also compared with SOTA instrument segmentation methods, demonstrating significant improvements with Dice metrics of 89.62 and 90.70 for EndoVis 2017 and 2018, respectively. CONCLUSION: Our extensive experiments and validations demonstrate that Surgical-DeSAM enables real-time instrument segmentation without any additional prompting and outperforms other SOTA segmentation methods.
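The prompt-free design described above can be illustrated with a toy sketch: a detector supplies bounding boxes that are fed directly to a mask decoder as prompts, so no human prompting is needed per frame. The function names and the fake detection below are illustrative stand-ins, not the paper's actual API.

```python
# Toy sketch of prompt-free instance segmentation: a detector generates
# bounding-box prompts automatically for a mask decoder.

def detect_instruments(frame):
    """Stand-in for a DETR-style detector: returns (x0, y0, x1, y1) boxes."""
    return [(1, 1, 3, 3)]          # one fake detection for illustration

def decode_mask(frame, box, shape):
    """Stand-in for a SAM-style mask decoder: fills the boxed region."""
    x0, y0, x1, y1 = box
    return [[1 if x0 <= x <= x1 and y0 <= y <= y1 else 0
             for x in range(shape[1])] for y in range(shape[0])]

def segment_frame(frame, shape=(5, 5)):
    """Detector output is fed straight in as the prompt for each instance."""
    return [decode_mask(frame, box, shape) for box in detect_instruments(frame)]

masks = segment_frame(frame=None)
print(sum(sum(row) for row in masks[0]))  # 9 pixels inside the 3x3 box
```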

2.
Article in English | MEDLINE | ID: mdl-38528306

ABSTRACT

PURPOSE: Endoscopic pituitary surgery entails navigating through the nasal cavity and sphenoid sinus to access the sella using an endoscope. The procedure is intricate due to the proximity of crucial anatomical structures (e.g. carotid arteries and optic nerves) to pituitary tumours, and any unintended damage can lead to severe complications including blindness and death. Intraoperative guidance during this surgery could support improved localization of the critical structures, thereby reducing the risk of complications. METHODS: A deep learning network, PitSurgRT, is proposed for real-time localization of critical structures in endoscopic pituitary surgery. The network uses high-resolution net (HRNet) as a backbone with multiple heads that jointly localize critical anatomical structures while simultaneously segmenting larger structures. Moreover, the trained model is optimized and accelerated using TensorRT. Finally, the model predictions are shown to neurosurgeons to test their guidance capabilities. RESULTS: Compared with the state-of-the-art method, our model significantly reduces the mean error in landmark detection of the critical structures from 138.76 to 54.40 pixels in a 1280 × 720-pixel image. Furthermore, the semantic segmentation of the most critical structure, the sella, is improved by 4.39% IoU. The inference speed of the accelerated model reaches 298 frames per second with floating-point-16 precision. In a study with 15 neurosurgeons, 88.67% of predictions were considered accurate enough for real-time guidance. CONCLUSION: The results from the quantitative evaluation, real-time acceleration, and neurosurgeon study demonstrate that the proposed method is highly promising for providing real-time intraoperative guidance on the critical anatomical structures in endoscopic pituitary surgery.
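The landmark-error figures quoted above follow the usual definition: the mean Euclidean pixel distance between predicted and ground-truth landmark positions. A minimal sketch, with invented coordinates:

```python
import math

# Mean landmark error: average pixel distance between predicted and
# ground-truth landmark positions (coordinates below are illustrative).

def mean_landmark_error(pred, gt):
    assert len(pred) == len(gt)
    dists = [math.dist(p, g) for p, g in zip(pred, gt)]
    return sum(dists) / len(dists)

pred = [(100, 200), (640, 360)]
gt = [(103, 204), (640, 350)]
print(mean_landmark_error(pred, gt))  # (5 + 10) / 2 = 7.5
```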

3.
Int J Comput Assist Radiol Surg ; 19(3): 481-492, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38066354

ABSTRACT

PURPOSE: In twin-to-twin transfusion syndrome (TTTS), abnormal vascular anastomoses in the monochorionic placenta can produce uneven blood flow between the two fetuses. In current practice, TTTS is treated surgically by closing abnormal anastomoses using laser ablation. This surgery is minimally invasive and relies on fetoscopy. The limited field of view makes anastomosis identification a challenging task for the surgeon. METHODS: To tackle this challenge, we propose a learning-based framework for in vivo fetoscopy frame registration for field-of-view expansion. The novelty of this framework lies in a learning-based keypoint proposal network and an encoding strategy that filters (i) irrelevant keypoints, based on fetoscopic semantic image segmentation, and (ii) inconsistent homographies. RESULTS: We validate our framework on a dataset of six intraoperative sequences from six TTTS surgeries in six different women, against the most recent state-of-the-art algorithm, which relies on the segmentation of placental vessels. CONCLUSION: The proposed framework achieves higher performance than the state of the art, paving the way for robust mosaicking that provides surgeons with context awareness during TTTS surgery.


Subjects
Fetofetal Transfusion, Laser Therapy, Pregnancy, Female, Humans, Fetoscopy/methods, Fetofetal Transfusion/diagnostic imaging, Fetofetal Transfusion/surgery, Placenta/surgery, Placenta/blood supply, Laser Therapy/methods, Algorithms
5.
Med Image Anal ; 92: 103066, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38141453

ABSTRACT

Fetoscopic laser photocoagulation is a widely adopted procedure for treating twin-to-twin transfusion syndrome (TTTS). The procedure involves photocoagulation of pathological anastomoses to restore a physiological blood exchange between the twins. The procedure is particularly challenging for the surgeon due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility caused by amniotic fluid turbidity, and variability in illumination. These challenges may lead to increased surgery time and incomplete ablation of pathological anastomoses, resulting in persistent TTTS. Computer-assisted intervention (CAI) can provide TTTS surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking. Research in this domain has been hampered by the lack of high-quality data with which to design, develop and test CAI algorithms. Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg2021) challenge, organized as part of the MICCAI2021 Endoscopic Vision (EndoVis) challenge, we released the first large-scale multi-center TTTS dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms, with a focus on creating drift-free mosaics from long-duration fetoscopy videos. For this challenge, we released a dataset of 2060 images, pixel-annotated for vessel, tool, fetus and background classes, from 18 in vivo TTTS fetoscopy procedures, together with 18 short video clips of an average length of 411 frames, for developing placental scene segmentation and frame registration techniques for mosaicking. Seven teams participated in this challenge and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopic procedures and 6 short clips. For the segmentation task, the baseline was the overall top performer (aggregated mIoU of 0.6763) and was best on the vessel class (mIoU of 0.5817), while team RREB was best on the tool (mIoU of 0.6335) and fetus (mIoU of 0.5178) classes. For the registration task, the baseline performed better than team SANO, with an overall mean 5-frame SSIM of 0.9348. Qualitatively, team SANO performed better in planar scenarios, while the baseline was better in non-planar scenarios. Detailed analysis showed that no single team outperformed the others on all 6 test fetoscopic videos. The challenge provided an opportunity to create generalized solutions for fetoscopic scene understanding and mosaicking. In this paper, we present the findings of the FetReg2021 challenge, alongside a detailed literature review of CAI in TTTS fetoscopy. Through this challenge, its analysis and the release of multi-center fetoscopic data, we provide a benchmark for future research in this field.
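The mIoU figures above follow the standard definition: per-class intersection-over-union, averaged over classes. A minimal sketch on flat label lists (the toy labels are invented):

```python
# Per-class IoU and its mean (mIoU), the metric used to rank segmentation
# entries; pred and gt are flat lists of per-pixel class ids.

def class_iou(pred, gt, cls):
    inter = sum(1 for p, g in zip(pred, gt) if p == cls and g == cls)
    union = sum(1 for p, g in zip(pred, gt) if p == cls or g == cls)
    return inter / union if union else 0.0

def mean_iou(pred, gt, classes):
    return sum(class_iou(pred, gt, c) for c in classes) / len(classes)

pred = [0, 1, 1, 2, 2, 0]
gt   = [0, 1, 2, 2, 2, 0]
print(round(mean_iou(pred, gt, classes=[0, 1, 2]), 4))
```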


Subjects
Fetofetal Transfusion, Placenta, Female, Humans, Pregnancy, Algorithms, Fetofetal Transfusion/diagnostic imaging, Fetofetal Transfusion/surgery, Fetofetal Transfusion/pathology, Fetoscopy/methods, Fetus, Placenta/diagnostic imaging
6.
Med Image Anal ; 91: 102985, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37844472

ABSTRACT

This paper introduces the "SurgT: Surgical Tracking" challenge, which was organized in conjunction with the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2022). The challenge had two purposes: (1) to establish the first standardized benchmark for the research community to assess soft-tissue trackers; and (2) to encourage the development of unsupervised deep learning methods, given the lack of annotated data in surgery. A dataset of 157 stereo endoscopic videos from 20 clinical cases, along with stereo camera calibration parameters, was provided. Participants were assigned the task of developing algorithms to track the movement of soft tissues, represented by bounding boxes, in stereo endoscopic videos. At the end of the challenge, the developed methods were assessed on a previously hidden test subset, using benchmarking metrics purposely developed for this challenge to verify the efficacy of unsupervised deep learning algorithms in tracking soft tissue. The metric used for ranking the methods was the Expected Average Overlap (EAO) score, which measures the average overlap between a tracker's bounding boxes and the ground truth. First place went to the deep learning submission by ICVS-2Ai, with a superior EAO score of 0.617; this method employs ARFlow to estimate unsupervised dense optical flow from cropped images, using photometric and regularization losses. Second came Jmees, with an EAO of 0.583, using deep learning for surgical tool segmentation on top of a non-deep-learning baseline method, CSRT; CSRT by itself scores a similar EAO of 0.563. The results from this challenge show that non-deep-learning methods are currently still competitive. The dataset and benchmarking tool created for this challenge have been made publicly available at https://surgt.grand-challenge.org/. This challenge is expected to contribute to the development of autonomous robotic surgery and other digital surgical technologies.
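The overlap being averaged by the EAO ranking metric can be sketched as below; the real EAO also averages over sequence lengths and tracker resets, so this only shows the per-frame bounding-box IoU being averaged over a sequence, with invented boxes.

```python
# Simplified sketch of average bounding-box overlap, the quantity at the
# heart of the EAO score; boxes are (x0, y0, x1, y1).

def box_iou(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    iw = max(0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union else 0.0

def average_overlap(tracker_boxes, gt_boxes):
    return sum(box_iou(t, g) for t, g in zip(tracker_boxes, gt_boxes)) / len(gt_boxes)

track = [(0, 0, 2, 2), (1, 1, 3, 3)]
truth = [(0, 0, 2, 2), (2, 2, 4, 4)]
print(round(average_overlap(track, truth), 4))  # perfect frame, then 1/7
```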


Subjects
Robotic Surgical Procedures, Humans, Benchmarking, Algorithms, Endoscopy, Image Processing, Computer-Assisted/methods
7.
Int J Comput Assist Radiol Surg ; 18(7): 1245-1252, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37233893

ABSTRACT

PURPOSE: Robotic ophthalmic microsurgery has significant potential to improve the success of challenging procedures and overcome the physical limitations of the surgeon. Intraoperative optical coherence tomography (iOCT) has been reported for the visualisation of ophthalmic surgical manoeuvres, where deep learning methods can be used for real-time tissue segmentation and surgical tool tracking. However, many of these methods rely heavily on labelled datasets, and producing annotated segmentation datasets is a time-consuming and tedious task. METHODS: To address this challenge, we propose a robust and efficient semi-supervised method for boundary segmentation in retinal OCT to guide a robotic surgical system. The proposed method uses U-Net as the base model and implements a pseudo-labelling strategy that combines the labelled data with unlabelled OCT scans during training. After training, the model is optimised and accelerated with TensorRT. RESULTS: Compared with fully supervised learning, the pseudo-labelling method improves the generalisability of the model and shows better performance on unseen data from a different distribution while using only 2% of the labelled training samples. The accelerated GPU inference takes less than 1 millisecond per frame at FP16 precision. CONCLUSION: Our approach demonstrates the potential of pseudo-labelling strategies in real-time OCT segmentation tasks to guide robotic systems. Furthermore, the accelerated GPU inference of our network is highly promising for segmenting OCT images and guiding the position of a surgical tool (e.g. needle) for sub-retinal injections.
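The pseudo-labelling loop described above can be sketched in a few lines: the current model labels unlabelled scans, and only confident predictions are folded back into the training set. The confidence threshold and the toy "model" below are illustrative assumptions, not the paper's values.

```python
# Toy sketch of pseudo-labelling: keep only the model's confident
# predictions on unlabelled data as extra training pairs.

def pseudo_label(model, unlabelled, threshold=0.9):
    extra = []
    for x in unlabelled:
        label, confidence = model(x)
        if confidence >= threshold:       # keep only confident predictions
            extra.append((x, label))
    return extra

# A stand-in "model" that is confident only on even inputs.
toy_model = lambda x: (x % 2, 0.95 if x % 2 == 0 else 0.5)
print(pseudo_label(toy_model, [1, 2, 3, 4]))  # [(2, 0), (4, 0)]
```

In the real method the confident pseudo-labelled pairs would be mixed with the labelled set for the next training round.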


Subjects
Ophthalmologic Surgical Procedures, Retina, Humans, Retina/diagnostic imaging, Retina/surgery, Tomography, Optical Coherence/methods, Microsurgery, Image Processing, Computer-Assisted/methods
8.
Comput Biol Med ; 152: 106424, 2023 01.
Article in English | MEDLINE | ID: mdl-36543005

ABSTRACT

Gastrointestinal stromal tumour (GIST) lesions are mesenchymal neoplasms commonly found in the upper gastrointestinal tract, but non-invasive GIST detection during an endoscopy remains challenging because their ultrasonic images resemble those of several benign lesions. Techniques for automatically detecting GISTs and other lesions in endoscopic ultrasound (EUS) images offer great potential to advance the precision and automation of traditional endoscopy and treatment procedures. However, GIST recognition faces several intrinsic challenges, including the restriction to a single input image modality and the mismatch between tasks and models. To address these challenges, we propose a novel Query2 (Query over Queries) framework to identify GISTs in ultrasound images. The proposed Query2 framework applies an anatomical location embedding layer to move beyond the single image modality. A cross-attention module then queries the queries generated by the basic detection head, and a single-object restricted detection head infers the lesion categories. To drive this network, we present GIST514-DB, a GIST dataset that will be made publicly available, comprising ultrasound images, bounding boxes, categories and anatomical locations from 514 cases. Extensive experiments on GIST514-DB demonstrate that the proposed Query2 outperforms most of the state-of-the-art methods.


Subjects
Gastrointestinal Stromal Tumors, Humans, Gastrointestinal Stromal Tumors/diagnostic imaging, Gastrointestinal Stromal Tumors/pathology, Endosonography/methods, Endoscopy, Gastrointestinal
9.
Surg Innov ; 30(1): 45-49, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36377296

ABSTRACT

BACKGROUND: Fluorescence angiography in colorectal surgery is a technique that may lead to lower anastomotic leak rates. However, interpretation of the fluorescent signal is not standardised and there is a paucity of data regarding interobserver agreement. The aim of this study was to assess interobserver variability in selection of the transection point during fluorescence angiography before anastomosis. METHODS: An online survey containing still images of fluorescence angiography from 13 patients, each displaying several candidate transection areas for raters to choose from, was distributed through colorectal surgery channels. Agreement was assessed overall and between pre-planned rater cohorts (experts vs non-experts; trainees vs consultants; colorectal specialists vs non-colorectal specialists) using Fleiss' kappa statistic. RESULTS: 101 raters completed all image ratings. No significant difference was found between raters when choosing a point of optimal bowel transection based on fluorescence angiography still images, and there was no difference between the pre-planned cohorts. Agreement within these cohorts was poor (kappa < .26). CONCLUSION: Whilst there is no learning curve for the technical adoption of fluorescence angiography, understanding the fluorescent signal characteristics is key to successful use. We found that significant variation exists in the interpretation of static fluorescence angiography data. Further efforts should be made to standardise fluorescence angiography assessment.
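Fleiss' kappa, the agreement statistic used above, can be computed as follows; `ratings[i][j]` holds the number of raters assigning category j to image i, and the toy data are invented for illustration.

```python
# Hedged sketch of Fleiss' kappa for multi-rater agreement.

def fleiss_kappa(ratings):
    n = len(ratings)                      # rated items
    r = sum(ratings[0])                   # raters per item
    k = len(ratings[0])                   # categories
    p_bar = sum((sum(c * c for c in row) - r) / (r * (r - 1))
                for row in ratings) / n   # mean per-item agreement
    p_j = [sum(row[j] for row in ratings) / (n * r) for j in range(k)]
    p_e = sum(p * p for p in p_j)         # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)

# Perfect agreement of 3 raters on two images yields kappa = 1.0.
print(fleiss_kappa([[3, 0], [0, 3]]))
```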


Subjects
Colorectal Neoplasms, Humans, Fluorescein Angiography/methods, Observer Variation, Colorectal Neoplasms/surgery, Indocyanine Green, Anastomosis, Surgical/methods, Anastomotic Leak, Coloring Agents
10.
Int J Comput Assist Radiol Surg ; 17(6): 1125-1134, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35503395

ABSTRACT

PURPOSE: Fetoscopic laser photocoagulation is a minimally invasive procedure to treat twin-to-twin transfusion syndrome during pregnancy by stopping irregular blood flow in the placenta. Building an image mosaic of the placenta and its network of vessels could assist surgeons in navigating the challenging fetoscopic environment during the procedure. METHODOLOGY: We propose a fetoscopic mosaicking approach that combines deep learning-based optical flow with robust estimation to filter out inconsistent motions caused by floating particles and specularities. While the current state of the art for fetoscopic mosaicking relies on clearly visible vessels for registration, our approach overcomes this limitation by considering the motion of all consistent pixels within consecutive frames. We also overcome the challenges of applying off-the-shelf optical flow to fetoscopic mosaicking through robust estimation and local refinement. RESULTS: We compare our proposed method against state-of-the-art vessel-based and optical flow-based image registration methods, and against robust estimation alternatives. We also evaluate our pipeline with different optical flow and robust estimation components. CONCLUSIONS: Our analysis shows that our method outperforms both the vessel-based state of the art and LK, notably when vessels are either poorly visible or too thin to be reliably identified. Our approach is thus able to build consistent placental vessel mosaics in challenging cases where currently available alternatives fail.
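The "robust estimation" step above can be sketched in miniature: given per-pixel motion vectors from optical flow, estimate a dominant translation with the median and discard vectors (floating particles, specularities) that disagree with it. A real pipeline would fit a full homography, e.g. with RANSAC; the tolerance and data below are illustrative.

```python
import statistics

# Median-based robust motion estimation: outlying flow vectors are rejected.

def robust_translation(flows, tol=1.0):
    dx = statistics.median(f[0] for f in flows)
    dy = statistics.median(f[1] for f in flows)
    inliers = [f for f in flows
               if abs(f[0] - dx) <= tol and abs(f[1] - dy) <= tol]
    return (dx, dy), inliers

flows = [(1.0, 0.0), (1.1, 0.1), (0.9, -0.1), (8.0, 5.0)]  # last: a particle
(dx, dy), inliers = robust_translation(flows)
print(len(inliers))  # the outlying particle motion is rejected
```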


Subjects
Fetofetal Transfusion, Placenta, Female, Fetofetal Transfusion/diagnostic imaging, Fetofetal Transfusion/surgery, Fetoscopy/methods, Humans, Laser Coagulation/methods, Motion (Physics), Placenta/surgery, Pregnancy
11.
Int J Comput Assist Radiol Surg ; 17(8): 1445-1452, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35362848

ABSTRACT

PURPOSE: Workflow recognition can aid surgeons before an operation when used as a training tool, during an operation by increasing operating room efficiency, and after an operation in the completion of operation notes. Although several methods have been applied to this task, they have been tested on few surgical datasets. Their generalisability is therefore not well established, particularly for surgical approaches utilising smaller working spaces, which are susceptible to occlusion and necessitate frequent withdrawal of the endoscope. This leads to rapidly changing predictions, which reduces clinical confidence in the methods and hence limits their suitability for clinical translation. METHODS: Firstly, the optimal neural network is found using established methods, with endoscopic pituitary surgery as an exemplar. Then, prediction volatility is formally defined as a new evaluation metric serving as a proxy for uncertainty, and two temporal smoothing functions are created. The first (modal, [Formula: see text]) mode-averages over the previous n predictions, and the second (threshold, [Formula: see text]) only changes a class after it has been continuously predicted for n predictions. Both functions are independently applied to the predictions of the optimal network. RESULTS: The methods are evaluated on a 50-video dataset using fivefold cross-validation, with the weighted-[Formula: see text] score as the optimised evaluation metric. The optimal model is ResNet-50+LSTM, achieving 0.84 in 3-phase classification and 0.74 in 7-step classification. Applying threshold smoothing further improves these results, achieving 0.86 in 3-phase classification and 0.75 in 7-step classification, while also drastically reducing the prediction volatility. CONCLUSION: The results confirm that the established methods generalise to endoscopic pituitary surgery, and show that simple temporal smoothing not only reduces prediction volatility but actively improves performance.
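The two smoothing functions described above can be sketched directly: "modal" replaces each prediction by the mode of the last n predictions, and "threshold" only switches class after n consecutive identical predictions. The toy prediction sequence is invented.

```python
from collections import Counter

# Temporal smoothing of per-frame phase predictions.

def modal_smooth(preds, n):
    """Replace each prediction with the mode of the last n predictions."""
    out = []
    for i in range(len(preds)):
        window = preds[max(0, i - n + 1): i + 1]
        out.append(Counter(window).most_common(1)[0][0])
    return out

def threshold_smooth(preds, n):
    """Only change class after it is continuously predicted n times."""
    current = preds[0]
    candidate, run = preds[0], 0
    out = []
    for p in preds:
        if p == candidate:
            run += 1
        else:
            candidate, run = p, 1
        if candidate != current and run >= n:
            current = candidate
        out.append(current)
    return out

raw = [0, 0, 1, 0, 0, 1, 1, 1, 0]        # flickers before settling on 1
print(modal_smooth(raw, 3))               # [0, 0, 0, 0, 0, 0, 1, 1, 1]
print(threshold_smooth(raw, 3))           # [0, 0, 0, 0, 0, 0, 0, 1, 1]
```

Both variants suppress the single-frame flickers that drive up prediction volatility, at the cost of a short lag when a genuine phase change occurs.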


Subjects
Endoscopy, Neural Networks, Computer, Humans, Workflow
12.
Int J Comput Assist Radiol Surg ; 17(5): 885-893, 2022 May.
Article in English | MEDLINE | ID: mdl-35355212

ABSTRACT

PURPOSE: Robotic-assisted laparoscopic surgery has become increasingly common thanks to its convenience and lower risk of infection compared with traditional open surgery. However, visibility during these procedures may severely deteriorate due to electrocauterisation, which generates smoke in the operating cavity. This decreased visibility prolongs procedural time and hinders surgical performance. Recent deep learning-based techniques have shown potential for smoke and glare removal, but few target laparoscopic videos. METHOD: We propose DeSmoke-LAP, a new method for removing smoke from real robotic laparoscopic hysterectomy videos. The proposed method is based on an unpaired image-to-image cycle-consistent generative adversarial network into which two novel loss functions, based on inter-channel discrepancies and the dark channel prior, are integrated to facilitate smoke removal while maintaining the true semantics and illumination of the scene. RESULTS: DeSmoke-LAP is compared with several state-of-the-art desmoking methods qualitatively and quantitatively, using reference-free image quality metrics on 10 laparoscopic hysterectomy videos through 5-fold cross-validation. CONCLUSION: DeSmoke-LAP outperformed existing methods and generated smoke-free images without requiring ground truths (paired images) or an atmospheric scattering model. This is a distinctive achievement for dehazing in surgery, even in scenarios with partially inhomogeneous smoke. Our code and hysterectomy dataset will be made publicly available at https://www.ucl.ac.uk/interventional-surgical-sciences/weiss-open-research/weiss-open-data-server/desmoke-lap .
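The dark channel prior mentioned above rests on a simple observation: smoke lifts the minimum intensity across colour channels in a local patch, whereas haze-free images keep this "dark channel" near zero. A toy sketch on nested-list images (the pixel values are invented; the paper uses it as a loss term, not as shown here):

```python
# Dark channel of an RGB image: per-pixel minimum over channels and a
# local patch. Smoke raises it; clear tissue keeps it near zero.

def dark_channel(image, patch=1):
    """image[y][x] = (r, g, b); returns the patch-wise channel minimum."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [c
                    for yy in range(max(0, y - patch), min(h, y + patch + 1))
                    for xx in range(max(0, x - patch), min(w, x + patch + 1))
                    for c in image[yy][xx]]
            out[y][x] = min(vals)
    return out

clear = [[(0.0, 0.5, 0.9), (0.1, 0.2, 0.3)]]   # some near-zero channels
smoky = [[(0.6, 0.7, 0.9), (0.5, 0.6, 0.8)]]   # smoke lifts every channel
print(dark_channel(clear)[0][0], dark_channel(smoky)[0][0])  # 0.0 0.5
```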


Subjects
Image Processing, Computer-Assisted, Laparoscopy, Female, Humans, Image Processing, Computer-Assisted/methods, Semantics
13.
Surgery ; 172(1): 69-73, 2022 07.
Article in English | MEDLINE | ID: mdl-35168814

ABSTRACT

BACKGROUND: Traditional methods of assessing colonic perfusion are based on the surgeon's visual inspection of tissue. Fluorescence angiography provides qualitative information, but there remains disagreement on how the observed signal should be interpreted. It is unclear whether fluorescence correlates with physiological properties of the tissue, such as tissue oxygen saturation. The aim of this study was to correlate fluorescence intensity and colonic tissue oxygen saturation. METHODS: Prospective cohort study performed in a single academic tertiary referral center. Patients undergoing colorectal surgery who required an anastomosis underwent dual-modality perfusion assessment of a segment of bowel before transection and creation of the anastomosis, using near-infrared and multispectral imaging. Perfusion was assessed using maximal fluorescence intensity measurement during fluorescence angiography, and its correlation with tissue oxygen saturation was calculated. RESULTS: In total, 18 patients were included. Maximal fluorescence intensity occurred at a mean of 101 seconds after indocyanine green injection. The correlation coefficient was 0.73 (95% confidence interval of 0.65-0.79) with P < .0001, showing a statistically significant strong positive correlation between normalized fluorescence intensity and tissue oxygen saturation. The use of time averaging improved the correlation coefficient to 0.78. CONCLUSION: Fluorescence intensity is a potential surrogate for tissue oxygenation. This is expected to lead to improved decision making when transecting the bowel and, consequently, a reduction in anastomotic leak rates. A larger, phase II study is needed to confirm this result and form the basis of computational algorithms to infer biological or physiological information from the fluorescence imaging data.
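The correlation reported above is a standard Pearson coefficient between fluorescence intensity and tissue oxygen saturation; a minimal sketch with invented readings:

```python
import math

# Pearson's r between paired measurements.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A perfectly linear intensity/StO2 relationship gives r = 1.0.
print(round(pearson_r([0.1, 0.4, 0.7], [55.0, 70.0, 85.0]), 6))
```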


Subjects
Colorectal Neoplasms, Colorectal Surgery, Anastomosis, Surgical/methods, Anastomotic Leak/diagnosis, Anastomotic Leak/etiology, Anastomotic Leak/prevention & control, Cohort Studies, Colorectal Neoplasms/surgery, Fluorescein Angiography/methods, Humans, Indocyanine Green, Perfusion, Prospective Studies
14.
Int J Comput Assist Radiol Surg ; 17(3): 467-477, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35050468

ABSTRACT

PURPOSE: Laparoscopic sacrocolpopexy is the gold standard procedure for the management of vaginal vault prolapse. Studying surgical skills and different approaches to this procedure requires analysis at the level of each of its individual phases, motivating investigation of automated surgical workflow segmentation to expedite this research. Phase durations in this procedure are significantly longer and more variable than in commonly available benchmarks such as Cholec80, and we assess these differences. METHODOLOGY: We introduce sequence-to-sequence (seq2seq) models for coarse-level phase segmentation in order to deal with the highly variable phase durations in sacrocolpopexy. Multiple architectures (LSTM and transformer), configurations (time-shifted, time-synchronous) and training strategies are tested with this novel framework to explore its flexibility. RESULTS: We perform 7-fold cross-validation on a dataset of 14 complete videos of sacrocolpopexy. We perform both a frame-based (accuracy, F1-score) and an event-based (Ward metric) evaluation of our algorithms, and show that different architectures present a trade-off between a higher number of accurate frames (LSTM, mode average) and more consistent ordering of phase transitions (transformer). We compare the implementations on the widely used Cholec80 dataset and verify that relative performances differ from those on sacrocolpopexy. CONCLUSIONS: We show that workflow segmentation of sacrocolpopexy videos has specific challenges that differ from the widely used Cholec80 benchmark and require dedicated approaches to deal with the significantly longer phase durations. We demonstrate the feasibility of seq2seq models for sacrocolpopexy, a broad framework that can be further explored with new configurations. We show that an event-based evaluation metric is useful for evaluating workflow segmentation algorithms, providing complementary insight to more commonly used metrics such as accuracy or F1-score.


Subjects
Laparoscopy, Pelvic Organ Prolapse, Algorithms, Female, Humans, Laparoscopy/methods, Pelvic Organ Prolapse/diagnostic imaging, Pelvic Organ Prolapse/surgery, Workflow
15.
Nanoscale Adv ; 3(22): 6403-6414, 2021 Nov 09.
Article in English | MEDLINE | ID: mdl-34913024

ABSTRACT

Intraoperative frozen section analysis can be used to improve the accuracy of tumour margin estimation during cancer resection surgery through rapid processing and pathological assessment of excised tissue. Its applicability is limited in some cases due to the additional risks associated with prolonged surgery, largely from the time-consuming staining procedure. Our work uses a measurable property of bulk tissue to bypass the staining process: as tumour cells proliferate, they influence the surrounding extra-cellular matrix, and the resulting change in elastic modulus provides a signature of the underlying pathology. In this work we accurately localise atomic force microscopy measurements of human liver tissue samples and train a generative adversarial network to infer elastic modulus from low-resolution images of unstained tissue sections. Pathology is predicted through unsupervised clustering of parameters characterizing the distributions of inferred values, achieving 89% accuracy for all samples based on the nominal assessment (n = 28), and 95% for samples that have been validated by two independent pathologists through post hoc staining (n = 20). Our results demonstrate that this technique could increase the feasibility of intraoperative frozen section analysis for use during resection surgery and improve patient outcomes.

16.
J Biomed Opt ; 26(10)2021 10.
Article in English | MEDLINE | ID: mdl-34628734

ABSTRACT

SIGNIFICANCE: The early detection of dysplasia in patients with Barrett's esophagus could improve outcomes by enabling curative intervention; however, dysplasia is often inconspicuous under conventional white-light endoscopy. AIM: We sought to determine whether multispectral imaging (MSI) could be applied in endoscopy to improve detection of dysplasia in the upper gastrointestinal (GI) tract. APPROACH: We used a commercial fiberscope to relay imaging data from within the upper GI tract to a snapshot MSI camera capable of collecting data from nine spectral bands. The system was deployed in a pilot clinical study of 20 patients (ClinicalTrials.gov NCT03388047) to capture 727 in vivo image cubes matched with gold-standard diagnoses from histopathology. We compared the performance of seven learning-based methods for data classification, including linear discriminant analysis, k-nearest-neighbor classification, and a neural network. RESULTS: Validation of our approach using a Macbeth color chart achieved an image-based classification accuracy of 96.5%. Although our patient cohort showed significant intra- and interpatient variance, we were able to resolve disease-specific contributions to the recorded MSI data. In classification, a combined principal component analysis and k-nearest-neighbor approach performed best, achieving accuracies of 95.8%, 90.7%, and 76.1% for squamous tissue, non-dysplastic Barrett's esophagus and neoplasia, respectively, based on majority decisions per image. CONCLUSIONS: MSI shows promise for disease classification in Barrett's esophagus and merits further investigation as a tool in high-definition "chip-on-tip" endoscopes.
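The best-performing classifier above is a k-nearest-neighbour vote over dimensionality-reduced spectral features. A toy sketch (feature values and labels are made up; the PCA step is omitted):

```python
import math
from collections import Counter

# k-nearest-neighbour voting on spectral feature vectors.

def knn_predict(train, query, k=3):
    """train is a list of (feature_vector, label) pairs."""
    by_dist = sorted(train, key=lambda t: math.dist(t[0], query))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

train = [((0.1, 0.2), "squamous"), ((0.15, 0.25), "squamous"),
         ((0.8, 0.9), "neoplasia"), ((0.85, 0.95), "neoplasia"),
         ((0.12, 0.22), "squamous")]
print(knn_predict(train, (0.11, 0.21)))  # "squamous"
```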


Subjects
Barrett Esophagus, Esophageal Neoplasms, Barrett Esophagus/diagnostic imaging, Cohort Studies, Esophageal Neoplasms/diagnostic imaging, Esophagoscopy, Humans, Pilot Projects
17.
Int J Comput Assist Radiol Surg ; 16(7): 1189-1199, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34152567

ABSTRACT

PURPOSE: Periodontitis is the sixth most prevalent disease worldwide, and periodontal bone loss (PBL) detection is crucial for its early recognition and for establishing the correct diagnosis and prognosis. Current radiographic assessment by clinicians exhibits substantial interobserver variation. Computer-assisted radiographic assessment can calculate bone loss objectively and aid in its early detection. Understanding the rate of disease progression can guide the choice of treatment and lead to early initiation of periodontal therapy. METHODOLOGY: We propose an end-to-end system that uses a deep neural network with an hourglass architecture to predict dental landmarks in single-, double- and triple-rooted teeth in periapical radiographs. We then estimate the PBL and disease severity stage from the predicted landmarks. We also introduce a novel adaptation of MixUp data augmentation that improves landmark localisation. RESULTS: We evaluate the proposed system using cross-validation on 340 radiographs from 63 patient cases containing 463 single-, 115 double- and 56 triple-rooted teeth. Landmark localisation achieved a Percentage of Correct Keypoints (PCK) of 88.9%, 73.9% and 74.4%, respectively, and a combined PCK of 83.3% across all root morphologies, outperforming the next best architecture by 1.7%. When compared with clinicians' visual evaluations of full radiographs, the average PBL error was 10.69%, with a severity stage accuracy of 58%. This mirrors current interobserver variation, suggesting that more diverse data could improve accuracy. CONCLUSIONS: The system showed promising capability to localise landmarks and estimate periodontal bone loss on periapical radiographs. In agreement with other literature, non-CEJ (cemento-enamel junction) landmarks were the hardest to localise. Honing the system's clinical pipeline will allow its use in intervention applications.
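The PCK metric quoted above counts a predicted landmark as correct when it lies within a tolerance radius of the ground truth. A minimal sketch with an invented tolerance and coordinates:

```python
import math

# Percentage of Correct Keypoints: fraction of landmarks within tolerance.

def pck(pred, gt, tol):
    correct = sum(1 for p, g in zip(pred, gt) if math.dist(p, g) <= tol)
    return 100.0 * correct / len(gt)

pred = [(10, 10), (52, 40), (90, 95)]
gt   = [(11, 10), (50, 41), (70, 70)]
print(pck(pred, gt, tol=5.0))  # 2 of 3 landmarks fall within tolerance
```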


Subjects
Alveolar Bone Loss/diagnosis, Neural Networks, Computer, Periodontitis/diagnosis, Radiography/methods, Humans, Observer Variation
18.
Med Image Anal; 70: 102002, 2021 May.
Article in English | MEDLINE | ID: mdl-33657508

ABSTRACT

The Endoscopy Computer Vision Challenge (EndoCV) is a crowd-sourcing initiative to address prominent problems in developing reliable computer-aided detection and diagnosis systems for endoscopy and to suggest a pathway for clinical translation of these technologies. Whilst endoscopy is a widely used diagnostic and treatment tool for hollow organs, endoscopists often face several core challenges, mainly: 1) the presence of multi-class artefacts that hinder visual interpretation, and 2) difficulty in identifying subtle precancerous precursors and cancer abnormalities. Artefacts often affect the robustness of deep learning methods applied to the gastrointestinal tract, as they can be confused with tissue of interest. The EndoCV2020 challenges are designed to address research questions in these remits. In this paper, we present a summary of the methods developed by the top 17 teams and provide an objective comparison of state-of-the-art methods and methods designed by the participants for two sub-challenges: i) artefact detection and segmentation (EAD2020), and ii) disease detection and segmentation (EDD2020). Multi-center, multi-organ, multi-class, and multi-modal clinical endoscopy datasets were compiled for both sub-challenges. The out-of-sample generalization ability of the detection algorithms was also evaluated. Whilst most teams focused on accuracy improvements, only a few methods hold credibility for clinical usability. The best-performing teams provided solutions to tackle class imbalance and variability in size, origin, modality and occurrence by exploring data augmentation, data fusion, and optimal class-thresholding techniques.
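As one generic illustration of the optimal class thresholding mentioned in this summary (a sketch under assumed names and a grid-search scheme, not any specific team's method), a per-class detection threshold can be chosen by maximising F1 on a validation set:

```python
import numpy as np

def per_class_thresholds(probs, labels, grid=None):
    """Pick one detection threshold per class by maximising validation F1,
    a common remedy for class imbalance. probs: (N, C) scores in [0, 1];
    labels: (N, C) binary ground truth."""
    grid = grid if grid is not None else np.linspace(0.05, 0.95, 19)
    thresholds = []
    for c in range(probs.shape[1]):
        best_t, best_f1 = 0.5, -1.0
        for t in grid:
            pred = probs[:, c] >= t
            tp = np.sum(pred & (labels[:, c] == 1))
            fp = np.sum(pred & (labels[:, c] == 0))
            fn = np.sum(~pred & (labels[:, c] == 1))
            f1 = 2 * tp / max(2 * tp + fp + fn, 1)  # guard empty classes
            if f1 > best_f1:
                best_t, best_f1 = t, f1
        thresholds.append(best_t)
    return np.array(thresholds)
```

Tuning the threshold per class, rather than using a global 0.5, lets rare classes trade precision for recall independently.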


Subjects
Artifacts, Deep Learning, Algorithms, Endoscopy, Gastrointestinal, Humans
19.
Biomed Opt Express; 12(12): 7556-7567, 2021 Dec 01.
Article in English | MEDLINE | ID: mdl-35003852

ABSTRACT

In colorectal surgery, an anastomosis performed using poorly perfused, ischaemic bowel segments may result in a leak and consequent morbidity. Traditional measures of perfusion assessment rely on clinical judgement and are mainly subjective, based on tissue appearance, leading to variability between clinicians. This paper describes a multispectral imaging (MSI) laparoscope that can derive quantitative measures of tissue oxygen saturation (SO2). The system uses a xenon surgical light source and a fast filter-wheel camera to capture eight narrow-waveband images across the visible range in approximately 0.3 s. Spectral validation measurements were performed by imaging standardised colour tiles and comparing reflectance with ground-truth spectrometer data. Tissue spectra were decomposed into individual contributions from haemoglobin, adipose tissue and scattering using a previously developed regression approach. Initial clinical results from seven patients undergoing colorectal surgery are presented and used to characterise measurement stability and reproducibility in vivo. Strategies to improve the signal-to-noise ratio and correct for motion are described. Images of healthy bowel tissue (in vivo) indicate that baseline SO2 is approximately 75 ± 6%. The SO2 profile along a bowel segment following ligation of the inferior mesenteric artery (IMA) shows a decrease from the proximal to the distal end. In the clinical cases shown, imaging results concurred with clinical judgements of the location of well-perfused tissue. Adipose tissue, visibly yellow in the RGB images, is shown to surround the mesentery and cover some of the serosa. SO2 in this tissue is consistently high, with a mean value of 90%. These results show that MSI is a potential intraoperative guidance tool for perfusion assessment. Mapping of SO2 in the colon could be used by surgeons to guide the choice of transection points and ensure that well-perfused tissue is used to form an anastomosis. The observation of high mesenteric SO2 agrees with work in the literature and warrants further exploration. Larger studies incorporating a wider cohort of clinicians will help to provide retrospective evidence of how this imaging technique may reduce inter-operator variability.
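The regression-based spectral decomposition is only named in the abstract. As a hedged sketch of the general idea, a modified Beer-Lambert least-squares fit of attenuation onto oxy- and deoxy-haemoglobin extinction spectra (the basis spectra, constant offset term, and function name are illustrative assumptions, not the authors' exact model) could look like:

```python
import numpy as np

def estimate_so2(reflectance, eps_hbo2, eps_hb):
    """Fit attenuation A = -log(R) to a linear mix of HbO2 and Hb extinction
    spectra plus a constant offset; return SO2 = c_HbO2 / (c_HbO2 + c_Hb).
    All inputs are 1-D arrays sampled at the same wavelengths."""
    atten = -np.log(np.clip(reflectance, 1e-6, None))  # avoid log(0)
    basis = np.stack([eps_hbo2, eps_hb, np.ones_like(eps_hb)], axis=1)
    coef, *_ = np.linalg.lstsq(basis, atten, rcond=None)
    c_hbo2, c_hb = coef[0], coef[1]
    return c_hbo2 / (c_hbo2 + c_hb)
```

A real system would add further basis terms (e.g. adipose absorption and a wavelength-dependent scattering model, as the paper describes) and fit per pixel.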

20.
Int J Comput Assist Radiol Surg; 15(11): 1807-1816, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32808148

ABSTRACT

PURPOSE: Fetoscopic laser photocoagulation is a minimally invasive surgical procedure used to treat twin-to-twin transfusion syndrome (TTTS), which involves localization and ablation of abnormal vascular connections on the placenta to regulate the blood flow in both fetuses. This procedure is particularly challenging due to the limited field of view, poor visibility, occasional bleeding, and poor image quality. Fetoscopic mosaicking can help create an image with an expanded field of view, which could assist clinicians during the TTTS procedure. METHODS: We propose a deep learning-based mosaicking framework for diverse fetoscopic videos captured in different settings such as simulation, phantom, ex vivo, and in vivo environments. The proposed framework extends an existing deep image homography model to handle video data by introducing controlled data generation and consistent homography estimation modules. Training is performed on a small subset of fetoscopic images that is independent of the testing videos. RESULTS: We perform both quantitative and qualitative evaluations on 5 diverse fetoscopic videos (2400 frames) capturing different environments. To demonstrate the robustness of the proposed framework, a comparison is performed with existing feature-based and deep image homography methods. CONCLUSION: The proposed mosaicking framework outperformed existing methods and generated meaningful mosaics while reducing the accumulated drift, even in the presence of visual challenges such as specular highlights, reflections, texture paucity, and low video resolution.
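Mosaicking of the kind described above rests on accumulating frame-to-frame homographies into frame-to-reference transforms. A minimal sketch of that accumulation step (independent of any particular deep homography estimator; the function name and renormalisation choice are assumptions):

```python
import numpy as np

def chain_homographies(pairwise_h):
    """Given 3x3 homographies mapping frame t+1 into frame t, accumulate
    them into homographies mapping every frame into frame 0 (the mosaic
    reference), renormalising so H[2, 2] == 1 to keep the chain stable."""
    global_h = [np.eye(3)]
    for h in pairwise_h:
        g = global_h[-1] @ h
        global_h.append(g / g[2, 2])  # fix the projective scale each step
    return global_h
```

Small per-pair errors compound multiplicatively along the chain, which is exactly the accumulated drift that the paper's consistent homography estimation aims to reduce.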


Subjects
Deep Learning, Fetofetal Transfusion/surgery, Fetoscopy/methods, Laser Coagulation/methods, Placenta/surgery, Computer Simulation, Female, Humans, Phantoms, Imaging, Pregnancy