Results 1 - 20 of 22
1.
Med Image Anal ; 96: 103195, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38815359

ABSTRACT

Colorectal cancer is one of the most common cancers in the world. While colonoscopy is an effective screening technique, navigating an endoscope through the colon to detect polyps is challenging. A 3D map of the observed surfaces could enhance the identification of unscreened colon tissue and serve as a training platform. However, reconstructing the colon from video footage remains difficult. Learning-based approaches hold promise as robust alternatives, but necessitate extensive datasets. By establishing a benchmark dataset, the 2022 EndoVis sub-challenge SimCol3D aimed to facilitate data-driven depth and pose prediction during colonoscopy. The challenge was hosted as part of MICCAI 2022 in Singapore. Six teams from around the world, with representatives from academia and industry, participated in the three sub-challenges: synthetic depth prediction, synthetic pose prediction, and real pose prediction. This paper describes the challenge, the submitted methods, and their results. We show that depth prediction from synthetic colonoscopy images is robustly solvable, while pose estimation remains an open research question.


Subject(s)
Colonoscopy , Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Colorectal Neoplasms/diagnostic imaging , Colonic Polyps/diagnostic imaging
2.
IEEE Trans Med Imaging ; PP: 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38669168

ABSTRACT

Many tissues and lesions in medical images are ambiguous. Therefore, medical segmentation is typically annotated by a group of clinical experts to mitigate personal bias. A common solution to fuse different annotations is the majority vote, e.g., taking the average of multiple labels. However, such a strategy ignores differences in grader expertise. Inspired by the observation that medical image segmentation is usually used to assist disease diagnosis in clinical practice, we propose the diagnosis-first principle, which takes disease diagnosis as the criterion to calibrate the inter-observer segmentation uncertainty. Following this idea, we propose a framework named the Diagnosis-First segmentation Framework (DiFF). Specifically, DiFF first learns to fuse the multi-rater segmentation labels into a single ground-truth that maximizes disease diagnosis performance. We dub the fused ground-truth the Diagnosis-First Ground-truth (DF-GT). Then, a Take and Give Model (T&G Model) is proposed to segment DF-GT from the raw image. With the T&G Model, DiFF can learn the segmentation with calibrated uncertainty that facilitates disease diagnosis. We verify the effectiveness of DiFF on three different medical segmentation tasks: optic-disc/optic-cup (OD/OC) segmentation on fundus images, thyroid nodule segmentation on ultrasound images, and skin lesion segmentation on dermoscopic images. Experimental results show that the proposed DiFF can effectively calibrate the segmentation uncertainty and thus significantly facilitate the corresponding disease diagnosis, outperforming previous state-of-the-art multi-rater learning methods.

3.
IEEE Trans Med Imaging ; 43(1): 297-308, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37494156

ABSTRACT

Personalized federated learning (PFL) addresses the data heterogeneity challenge faced by general federated learning (GFL). Rather than learning a single global model, PFL adapts a collection of models to the unique feature distribution of each site. However, current PFL methods rarely consider self-attention networks, which can handle data heterogeneity through long-range dependency modeling, and they do not utilize prediction inconsistencies in local models as an indicator of site uniqueness. In this paper, we propose FedDP, a novel federated learning scheme with dual personalization, which improves model personalization from both the feature and prediction aspects to boost image segmentation results. We leverage long-range dependencies by designing a local query (LQ) that decouples the query embedding layer from each local model, whose parameters are trained privately to better adapt to the respective feature distribution of the site. We then propose inconsistency-guided calibration (IGC), which exploits inter-site prediction inconsistencies to adjust the model's learning concentration. By encouraging a model to penalize pixels with larger inconsistencies, we better tailor prediction-level patterns to each local site. Experimentally, we compare FedDP with state-of-the-art PFL methods on two popular medical image segmentation tasks with different modalities, where our results consistently outperform others on both tasks. Our code and models are available at https://github.com/jcwang123/PFL-Seg-Trans.
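
As an illustration of the prediction-level idea above, the following is a minimal, hypothetical sketch of an inconsistency-weighted pixel loss: pixels where a local and a global model disagree more are penalized more. The function name, the L1-based inconsistency measure, and the weighting scheme are assumptions for illustration, not the published IGC formulation.

```python
import torch
import torch.nn.functional as F

def inconsistency_weighted_loss(local_logits, global_logits, target):
    """local_logits, global_logits: (B, C, H, W); target: (B, H, W) long."""
    p_local = torch.softmax(local_logits, dim=1)
    p_global = torch.softmax(global_logits, dim=1)
    # Per-pixel inconsistency: L1 distance between the two predictive distributions.
    inconsistency = (p_local - p_global).abs().sum(dim=1)          # (B, H, W)
    weight = 1.0 + inconsistency / inconsistency.max().clamp(min=1e-6)
    ce = F.cross_entropy(local_logits, target, reduction="none")   # (B, H, W)
    return (weight * ce).mean()

logits_local = torch.randn(2, 3, 64, 64)
logits_global = torch.randn(2, 3, 64, 64)
mask = torch.randint(0, 3, (2, 64, 64))
print(inconsistency_weighted_loss(logits_local, logits_global, mask))
```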


Subject(s)
Calibration
4.
Med Image Anal ; 91: 102985, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37844472

ABSTRACT

This paper introduces the "SurgT: Surgical Tracking" challenge, which was organized in conjunction with the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2022). There were two purposes for the creation of this challenge: (1) to establish the first standardized benchmark for the research community to assess soft-tissue trackers; and (2) to encourage the development of unsupervised deep learning methods, given the lack of annotated data in surgery. A dataset of 157 stereo endoscopic videos from 20 clinical cases, along with stereo camera calibration parameters, was provided. Participants were assigned the task of developing algorithms to track the movement of soft tissues, represented by bounding boxes, in stereo endoscopic videos. At the end of the challenge, the developed methods were assessed on a previously hidden test subset. This assessment uses benchmarking metrics that were purposely developed for this challenge to verify the efficacy of unsupervised deep learning algorithms in tracking soft tissue. The metric used for ranking the methods was the Expected Average Overlap (EAO) score, which measures the average overlap between a tracker's and the ground-truth bounding boxes. First in the challenge was the deep learning submission by ICVS-2Ai, with a superior EAO score of 0.617. This method employs ARFlow to estimate unsupervised dense optical flow from cropped images, using photometric and regularization losses. Second was Jmees, with an EAO of 0.583, which uses deep learning for surgical tool segmentation on top of a non-deep-learning baseline method, CSRT. CSRT by itself scores a similar EAO of 0.563. The results from this challenge show that, currently, non-deep-learning methods are still competitive. The dataset and benchmarking tool created for this challenge have been made publicly available at https://surgt.grand-challenge.org/. This challenge is expected to contribute to the development of autonomous robotic surgery and other digital surgical technologies.
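
For readers unfamiliar with the ranking metric, the snippet below shows the standard intersection-over-union between a predicted and a ground-truth bounding box and its average over frames. The challenge's EAO follows a VOT-style protocol over sequences, which this toy example does not reproduce.

```python
import numpy as np

def iou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2) in pixels."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

pred = [(10, 10, 50, 50), (12, 14, 52, 54)]   # tracker output per frame
gt   = [(12, 12, 52, 52), (12, 12, 52, 52)]   # ground-truth boxes per frame
overlaps = [iou(p, g) for p, g in zip(pred, gt)]
print(np.mean(overlaps))                      # average overlap across frames
```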


Subject(s)
Robotic Surgical Procedures , Humans , Benchmarking , Algorithms , Endoscopy , Image Processing, Computer-Assisted/methods
5.
Nat Commun ; 14(1): 6676, 2023 10 21.
Article in English | MEDLINE | ID: mdl-37865629

ABSTRACT

Recent advancements in artificial intelligence have witnessed human-level performance; however, AI-enabled cognitive assistance for therapeutic procedures has not been fully explored nor pre-clinically validated. Here we propose AI-Endo, an intelligent surgical workflow recognition suite for endoscopic submucosal dissection (ESD). Our AI-Endo is trained on high-quality ESD cases from an expert endoscopist, spanning a decade and consisting of 201,026 labeled frames. The learned model demonstrates outstanding performance on validation data, including cases from relatively junior endoscopists with various skill levels, procedures conducted with different endoscopy systems and therapeutic skills, and cohorts from multiple international centers. Furthermore, we integrate our AI-Endo with the Olympus endoscopic system and validate the AI-enabled cognitive assistance system with animal studies in live ESD training sessions. Dedicated data analysis from surgical phase recognition results is summarized in an automatically generated report for skill assessment.


Subject(s)
Endometriosis , Endoscopic Mucosal Resection , Animals , Female , Humans , Endoscopic Mucosal Resection/education , Endoscopic Mucosal Resection/methods , Artificial Intelligence , Workflow , Endoscopy , Learning
6.
Med Image Anal ; 86: 102770, 2023 05.
Article in English | MEDLINE | ID: mdl-36889206

ABSTRACT

PURPOSE: Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of the operation through context-sensitive warnings and semi-autonomous robotic assistance or improve training of surgeons via data-driven feedback. In surgical workflow analysis, up to 91% average precision has been reported for phase recognition on an open-data, single-center video dataset. In this work, we investigated the generalizability of phase recognition algorithms in a multicenter setting, including more difficult recognition tasks such as surgical action and surgical skill. METHODS: To achieve this goal, a dataset with 33 laparoscopic cholecystectomy videos from three surgical centers with a total operation time of 22 h was created. Labels included framewise annotation of seven surgical phases with 250 phase transitions, 5514 occurrences of four surgical actions, 6980 occurrences of 21 surgical instruments from seven instrument categories, and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision challenge, sub-challenge for surgical workflow and skill analysis. Here, 12 research teams trained and submitted their machine learning algorithms for recognition of phase, action, instrument and/or skill assessment. RESULTS: F1-scores between 23.9% and 67.7% were achieved for phase recognition (n = 9 teams) and between 38.5% and 63.8% for instrument presence detection (n = 8 teams), but only between 21.8% and 23.3% for action recognition (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team). CONCLUSION: Surgical workflow and skill analysis are promising technologies to support the surgical team, but there is still room for improvement, as shown by our comparison of machine learning algorithms. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work. In future studies, it is of utmost importance to create more open, high-quality datasets in order to allow the development of artificial intelligence and cognitive robotics in surgery.


Subject(s)
Artificial Intelligence , Benchmarking , Humans , Workflow , Algorithms , Machine Learning
7.
Int J Comput Assist Radiol Surg ; 17(12): 2193-2202, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36129573

ABSTRACT

PURPOSE: Real-time surgical workflow analysis has been a key component of computer-assisted intervention systems to improve cognitive assistance. Most existing methods rely solely on conventional temporal models and encode features with a successive spatial-temporal arrangement. The supportive benefits of intermediate features are partially lost from both the visual and temporal aspects. In this paper, we rethink feature encoding to attend to and preserve the critical information for accurate workflow recognition and anticipation. METHODS: We introduce the Transformer in surgical workflow analysis to reconsider the complementary effects of spatial and temporal representations. We propose a hybrid embedding aggregation Transformer, named Trans-SVNet, to effectively interact with the designed spatial and temporal embeddings, by employing the spatial embedding to query the temporal embedding sequence. We jointly optimize the model with loss objectives from both analysis tasks to leverage their high correlation. RESULTS: We extensively evaluate our method on three large surgical video datasets. Our method consistently outperforms the state of the art across three datasets on the workflow recognition task. When jointly learned with anticipation, recognition results gain a large improvement. Our approach also shows its effectiveness on anticipation, with promising performance achieved. Our model achieves a real-time inference speed of 0.0134 seconds per frame. CONCLUSION: Experimental results demonstrate the efficacy of our hybrid embedding integration by rediscovering the crucial cues from complementary spatial-temporal embeddings. The better performance from multi-task learning indicates that the anticipation task brings additional knowledge to the recognition task. The promising effectiveness and efficiency of our method also show its potential for use in the operating room.
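
The core aggregation idea, a spatial embedding querying a sequence of temporal embeddings, can be conveyed with a generic cross-attention call. The dimensions, head count, and use of nn.MultiheadAttention below are assumptions for illustration and do not reproduce the published Trans-SVNet architecture.

```python
import torch
import torch.nn as nn

d_model = 256
attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=8, batch_first=True)

spatial_emb  = torch.randn(1, 1, d_model)    # current frame's spatial feature (query)
temporal_emb = torch.randn(1, 10, d_model)   # sequence of past temporal features (key/value)

fused, _ = attn(query=spatial_emb, key=temporal_emb, value=temporal_emb)
print(fused.shape)   # (1, 1, 256): temporal context aggregated into the current frame
```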


Subject(s)
Operating Rooms , Humans , Workflow
8.
IEEE Trans Med Imaging ; 41(11): 2991-3002, 2022 11.
Article in English | MEDLINE | ID: mdl-35604967

ABSTRACT

Automatic surgical scene segmentation is fundamental for facilitating cognitive intelligence in the modern operating theatre. Previous works rely on conventional aggregation modules (e.g., dilated convolution, convolutional LSTM), which only make use of the local context. In this paper, we propose a novel framework, STswinCL, that explores the complementary intra- and inter-video relations to boost segmentation performance by progressively capturing the global context. We first develop a hierarchical Transformer to capture intra-video relations, including richer spatial and temporal cues from neighboring pixels and previous frames. A joint space-time window shift scheme is proposed to efficiently aggregate these two cues into each pixel embedding. Then, we explore inter-video relations via pixel-to-pixel contrastive learning, which structures the global embedding space well. A multi-source contrastive training objective is developed to group the pixel embeddings across videos with ground-truth guidance, which is crucial for learning the global property of the whole dataset. We extensively validate our approach on two public surgical video benchmarks, the EndoVis18 Challenge and the CaDIS dataset. Experimental results demonstrate the promising performance of our method, which consistently exceeds previous state-of-the-art approaches. Code is available at https://github.com/YuemingJin/STswinCL.
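
The inter-video component described above is a form of supervised pixel contrastive learning. The following is a minimal, generic InfoNCE-style sketch in which sampled pixel embeddings of the same ground-truth class are pulled together; the multi-source objective and sampling strategy of STswinCL are not reproduced, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def pixel_contrastive_loss(embeddings, labels, temperature=0.1):
    """embeddings: (N, D) sampled pixel features; labels: (N,) class ids.
    Assumes each sampled class appears at least twice."""
    z = F.normalize(embeddings, dim=1)
    n = z.size(0)
    sim = z @ z.t() / temperature
    self_mask = torch.eye(n, dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float("-inf"))           # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    return -log_prob[pos_mask].mean()                         # pull same-class pixels together

feats = torch.randn(64, 128)
lbls = torch.randint(0, 4, (64,))
print(pixel_contrastive_loss(feats, lbls))
```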


Subject(s)
Neural Networks, Computer , Semantics , Endoscopy
9.
IEEE Trans Med Imaging ; 41(3): 621-632, 2022 03.
Article in English | MEDLINE | ID: mdl-34633927

ABSTRACT

Multimodal learning usually requires a complete set of modalities during inference to maintain performance. Although training data can be well prepared with multiple high-quality modalities, in many cases of clinical practice only one modality can be acquired, and important clinical evaluations have to be made based on the limited single-modality information. In this work, we propose a privileged knowledge learning framework with the 'Teacher-Student' architecture, in which the complete multimodal knowledge that is only available in the training data (called privileged information) is transferred from a multimodal teacher network to a unimodal student network via both a pixel-level and an image-level distillation scheme. Specifically, for the pixel-level distillation, we introduce a regularized knowledge distillation loss which encourages the student to mimic the teacher's softened outputs in a pixel-wise manner and incorporates a regularization factor to reduce the effect of incorrect predictions from the teacher. For the image-level distillation, we propose a contrastive knowledge distillation loss which encodes image-level structured information to enrich the knowledge encoding in combination with the pixel-level distillation. We extensively evaluate our method on two different multi-class segmentation tasks, i.e., cardiac substructure segmentation and brain tumor segmentation. Experimental results on both tasks demonstrate that our privileged knowledge learning is effective in improving unimodal segmentation and outperforms previous methods.
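
The pixel-level distillation term can be pictured as a temperature-softened KL divergence computed per pixel. In the sketch below, the paper's regularization factor against incorrect teacher predictions is replaced by a simple teacher-confidence weight, which is an assumption for illustration only.

```python
import torch
import torch.nn.functional as F

def pixelwise_distillation(student_logits, teacher_logits, T=2.0):
    """Both logits: (B, C, H, W). Returns a scalar distillation loss."""
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    kl = F.kl_div(log_p_student, p_teacher, reduction="none").sum(dim=1)  # (B, H, W)
    teacher_conf = p_teacher.max(dim=1).values   # assumed stand-in: down-weight uncertain teacher pixels
    return (teacher_conf * kl).mean() * (T * T)

student = torch.randn(1, 4, 32, 32)
teacher = torch.randn(1, 4, 32, 32)
print(pixelwise_distillation(student, teacher))
```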


Subject(s)
Heart , Neural Networks, Computer , Humans
10.
Med Image Anal ; 75: 102291, 2022 01.
Article in English | MEDLINE | ID: mdl-34753019

ABSTRACT

We propose a novel shape-aware relation network for accurate and real-time landmark detection in endoscopic submucosal dissection (ESD) surgery. This task is of great clinical significance but extremely challenging due to bleeding, lighting reflection, and motion blur in the complicated surgical environment. Compared with existing solutions, which either neglect geometric relationships among targeting objects or capture the relationships by using complicated aggregation schemes, the proposed network is capable of achieving satisfactory accuracy while maintaining real-time performance by taking full advantage of the spatial relations among landmarks. We first devise an algorithm to automatically generate relation keypoint heatmaps, which are able to intuitively represent the prior knowledge of spatial relations among landmarks without any extra manual annotation effort. We then develop two complementary regularization schemes to progressively incorporate the prior knowledge into the training process. While one scheme introduces pixel-level regularization by multi-task learning, the other integrates global-level regularization by harnessing a newly designed grouped consistency evaluator, which adds relation constraints to the proposed network in an adversarial manner. Both schemes are beneficial to the model in training, and can be readily unloaded in inference to achieve real-time detection. We establish a large in-house dataset of ESD surgery for esophageal cancer to validate the effectiveness of our proposed method. Extensive experimental results demonstrate that our approach outperforms state-of-the-art methods in terms of accuracy and efficiency, achieving better detection results faster. Promising results on two downstream applications further corroborate the great potential of our method in ESD clinical practice.
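
A relation keypoint heatmap can be thought of as a 2-D Gaussian rendered at a location derived from a pair of landmarks. The sketch below uses the landmarks' midpoint and a fixed sigma purely for illustration; the actual relation definition and heatmap-generation algorithm in the paper are not reproduced.

```python
import numpy as np

def gaussian_heatmap(h, w, center, sigma=4.0):
    """Render a unit-peak 2-D Gaussian at center = (x, y) on an h x w grid."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - center[0]) ** 2 + (ys - center[1]) ** 2) / (2 * sigma ** 2))

landmark_a, landmark_b = (30, 40), (70, 60)                  # two annotated landmarks (x, y)
midpoint = ((landmark_a[0] + landmark_b[0]) / 2,
            (landmark_a[1] + landmark_b[1]) / 2)
relation_map = gaussian_heatmap(96, 96, midpoint)            # heatmap encoding their spatial relation
print(relation_map.shape, round(float(relation_map.max()), 3))
```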


Subject(s)
Endoscopic Mucosal Resection , Algorithms , Humans
11.
Med Image Anal ; 75: 102296, 2022 01.
Article in English | MEDLINE | ID: mdl-34781159

ABSTRACT

In this paper, we propose a novel method of Unsupervised Disentanglement of Scene and Motion (UDSM) representations for minimally invasive surgery video retrieval within large databases, which has the potential to advance intelligent and efficient surgical teaching systems. To extract more discriminative video representations, two encoders with a triplet ranking loss and an adversarial learning mechanism are established to capture the spatial and temporal information, respectively, yielding disentangled features from each frame with promising interpretability. In addition, the long-range temporal dependencies are improved at an integrated video level using a temporal aggregation module, and a set of compact binary codes that carries representative features is then generated to enable fast retrieval. The entire framework is trained in an unsupervised scheme, i.e., purely learning from raw surgical videos without using any annotation. We construct two large-scale minimally invasive surgery video datasets based on the public Cholec80 dataset and our in-house dataset of laparoscopic hysterectomy to establish the learning process and validate the effectiveness of our proposed method qualitatively and quantitatively on the surgical video retrieval task. Extensive experiments show that our approach significantly outperforms state-of-the-art video retrieval methods on both datasets, revealing a promising future for injecting intelligence into the next generation of surgical teaching systems.
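
The triplet ranking component mentioned above can be summarized by the standard margin-based formulation: an anchor clip embedding should lie closer to a positive than to a negative. The margin value and the way positives and negatives are sampled below are assumptions, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

def triplet_ranking_loss(anchor, positive, negative, margin=0.3):
    """All inputs: (B, D) embeddings; positives share content with the anchor."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()   # hinge: positive must be closer by the margin

a, p, n = (torch.randn(8, 256) for _ in range(3))
print(triplet_ranking_loss(a, p, n))
```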


Subject(s)
Minimally Invasive Surgical Procedures , Databases, Factual , Humans , Motion
12.
Med Image Anal ; 74: 102240, 2021 12.
Article in English | MEDLINE | ID: mdl-34614476

ABSTRACT

The scarcity of annotated surgical data in robot-assisted surgery (RAS) motivates prior works to borrow related domain knowledge to achieve promising segmentation results in surgical images by adaptation. For dense instrument tracking in a robotic surgical video, collecting one initial scene to specify target instruments (or parts of tools) is desirable and feasible during preoperative preparation. In this paper, we study the challenging one-shot instrument segmentation for robotic surgical videos, in which only the first-frame mask of each video is provided at test time, such that the pre-trained model (learned from easily accessible sources) can adapt to the target instruments. Straightforward methods transfer the domain knowledge by fine-tuning the model on each given mask. Such one-shot optimization takes hundreds of iterations, and the test runtime is infeasible. We present anchor-guided online meta adaptation (AOMA) for this problem. We achieve fast one-shot test-time optimization by meta-learning a good model initialization and learning rates from source videos, avoiding laborious and handcrafted fine-tuning. The two trainable components are optimized in a video-specific task space with a matching-aware loss. Furthermore, we design an anchor-guided online adaptation to tackle the performance drop throughout a robotic surgical sequence. The model is continuously adapted on motion-insensitive pseudo-masks supported by anchor matching. AOMA achieves state-of-the-art results on two practical scenarios: (1) general videos to surgical videos, and (2) public surgical videos to in-house surgical videos, while substantially reducing the test runtime.
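
The fast test-time adaptation idea, starting from a meta-learned initialization and taking a few gradient steps on the single annotated first frame, is sketched below on a toy one-layer model. The per-parameter meta-learned learning rates of AOMA are collapsed into one scalar here, and all other details are simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Conv2d(3, 2, kernel_size=1)      # stand-in segmentation head (would be meta-initialized)
inner_lr = 0.1                               # would itself be meta-learned in the full method
params = list(model.parameters())

first_frame = torch.randn(1, 3, 64, 64)      # the single annotated frame at test time
first_mask = torch.randint(0, 2, (1, 64, 64))

for _ in range(5):                           # a handful of test-time adaptation steps
    loss = F.cross_entropy(model(first_frame), first_mask)
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= inner_lr * g                # plain SGD update on the one-shot mask

new_frame = torch.randn(1, 3, 64, 64)
print(model(new_frame).argmax(dim=1).shape)  # adapted model segments subsequent frames
```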


Subject(s)
Robotic Surgical Procedures , Humans , Learning , Motion , Surgical Instruments
13.
Med Image Anal ; 73: 102158, 2021 10.
Article in English | MEDLINE | ID: mdl-34325149

ABSTRACT

Surgical workflow recognition is a fundamental task in computer-assisted surgery and a key component of various applications in operating rooms. Existing deep learning models have achieved promising results for surgical workflow recognition, but they rely heavily on a large amount of annotated videos. However, obtaining annotations is time-consuming and requires the domain knowledge of surgeons. In this paper, we propose a novel two-stage semi-supervised learning method for label-efficient surgical workflow recognition, named SurgSSL. Our proposed SurgSSL progressively leverages the inherent knowledge held in the unlabeled data to a larger extent: from implicit unlabeled data excavation via motion knowledge excavation, to explicit unlabeled data excavation via pre-knowledge pseudo labeling. Specifically, we first propose a novel intra-sequence Visual and Temporal Dynamic Consistency (VTDC) scheme for implicit excavation. It enforces prediction consistency of the same data under perturbations in both the spatial and temporal spaces, encouraging the model to capture rich motion knowledge. We further perform explicit excavation by optimizing the model towards our pre-knowledge pseudo labels. These are naturally generated by the VTDC-regularized model with prior knowledge of the unlabeled data encoded, and demonstrate superior reliability for model supervision compared with labels generated by existing methods. We extensively evaluate our method on two public surgical datasets, Cholec80 and the M2CAI challenge dataset. Our method surpasses state-of-the-art semi-supervised methods by a large margin, e.g., improving accuracy by 10.5% under the severest annotation regime of the M2CAI dataset. Using only 50% of labeled videos on Cholec80, our approach achieves competitive performance compared with the full-data training method.
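
The implicit-excavation step rests on a consistency objective over unlabeled clips. The toy sketch below enforces agreement between predictions on a clip and a noise-perturbed view of it; the actual spatial/temporal perturbations and the two-stage training of SurgSSL are not reproduced, and the tiny model is a placeholder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 7))   # toy 7-phase classifier

clip = torch.randn(4, 3, 32, 32)                  # 4 unlabeled frames
perturbed = clip + 0.1 * torch.randn_like(clip)   # assumed stand-in perturbation

p_clean = F.softmax(model(clip), dim=1)
log_p_pert = F.log_softmax(model(perturbed), dim=1)
consistency = F.kl_div(log_p_pert, p_clean.detach(), reduction="batchmean")
print(consistency)   # added to the supervised loss computed on the labeled subset
```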


Subject(s)
Neural Networks, Computer , Surgery, Computer-Assisted , Reproducibility of Results , Supervised Machine Learning , Workflow
14.
Int J Comput Assist Radiol Surg ; 16(9): 1607-1614, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34173182

ABSTRACT

PURPOSE: Automatic segmentation of surgical instruments in robot-assisted minimally invasive surgery plays a fundamental role in improving context awareness. In this work, we present an instance segmentation model based on a refined Mask R-CNN for accurately segmenting the instruments as well as identifying their types. METHODS: We re-formulate the instrument segmentation task as an instance segmentation task. We then optimize the Mask R-CNN with anchor optimization and an improved Region Proposal Network for instrument segmentation. Moreover, we perform cross-dataset evaluation with different sampling strategies. RESULTS: We evaluate our model on a public dataset from the MICCAI 2017 Endoscopic Vision Challenge with two segmentation tasks, and both achieve new state-of-the-art performance. In addition, cross-dataset training improved the performance on both segmentation tasks compared with training on the public dataset alone. CONCLUSION: The results demonstrate the effectiveness of the proposed instance segmentation network for surgical instrument segmentation. The cross-dataset evaluation shows that our instance segmentation model exhibits a certain cross-dataset generalization capability, and cross-dataset training can significantly improve the segmentation performance. Our empirical study also provides guidance on how to allocate the annotation cost for surgeons when labelling a new dataset in practice.


Subject(s)
Robotic Surgical Procedures , Endoscopy , Humans , Image Processing, Computer-Assisted , Minimally Invasive Surgical Procedures , Surgical Instruments
15.
Med Image Anal ; 70: 101920, 2021 05.
Article in English | MEDLINE | ID: mdl-33676097

ABSTRACT

Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions. While numerous methods for detecting, segmenting and tracking of medical instruments based on endoscopic video images have been proposed in the literature, key limitations remain to be addressed: Firstly, robustness, that is, the reliable performance of state-of-the-art methods when run on challenging images (e.g. in the presence of blood, smoke or motion artifacts). Secondly, generalization; algorithms trained for a specific intervention in a specific hospital should generalize to other interventions or institutions. In an effort to promote solutions for these limitations, we organized the Robust Medical Instrument Segmentation (ROBUST-MIS) challenge as an international benchmarking competition with a specific focus on the robustness and generalization capabilities of algorithms. For the first time in the field of endoscopic image processing, our challenge included a task on binary segmentation and also addressed multi-instance detection and segmentation. The challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures from three different types of surgery. The validation of the competing methods for the three tasks (binary segmentation, multi-instance detection and multi-instance segmentation) was performed in three different stages with an increasing domain gap between the training and the test data. The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap. While the average detection and segmentation quality of the best-performing algorithms is high, future research should concentrate on detection and segmentation of small, crossing, moving and transparent instrument(s) (parts).


Subject(s)
Image Processing, Computer-Assisted , Laparoscopy , Algorithms , Artifacts
16.
IEEE Trans Med Imaging ; 40(7): 1911-1923, 2021 07.
Article in English | MEDLINE | ID: mdl-33780335

ABSTRACT

Automatic surgical workflow recognition is a key component for developing context-aware computer-assisted systems in the operating theatre. Previous works either jointly modeled the spatial features with short fixed-range temporal information, or separately learned visual and long temporal cues. In this paper, we propose a novel end-to-end temporal memory relation network (TMRNet) for relating long-range and multi-scale temporal patterns to augment the present features. We establish a long-range memory bank to serve as a memory cell storing the rich supportive information. Through our designed temporal variation layer, the supportive cues are further enhanced by multi-scale temporal-only convolutions. To effectively incorporate the two types of cues without disturbing the joint learning of spatio-temporal features, we introduce a non-local bank operator to attentively relate the past to the present. In this regard, our TMRNet enables the current feature to view the long-range temporal dependency, as well as to tolerate complex temporal extents. We have extensively validated our approach on two benchmark surgical video datasets, the M2CAI challenge dataset and the Cholec80 dataset. Experimental results demonstrate the outstanding performance of our method, consistently exceeding the state-of-the-art methods by a large margin (e.g., 67.0% vs. 78.9% Jaccard on the Cholec80 dataset).
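
The non-local read from the long-range memory bank can be pictured as the current frame feature attending over stored past features. The single-head dot-product form and dimensions below are assumptions for illustration; the paper's bank operator and temporal variation layer are richer.

```python
import torch
import torch.nn.functional as F

d = 512
memory_bank = torch.randn(30, d)     # features of the past 30 frames
current = torch.randn(1, d)          # present frame feature

attn = F.softmax(current @ memory_bank.t() / d ** 0.5, dim=1)   # (1, 30) attention over history
augmented = current + attn @ memory_bank                        # present feature enriched with history
print(augmented.shape)
```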


Subject(s)
Computer Systems , Workflow
17.
Int J Comput Assist Radiol Surg ; 15(9): 1573-1584, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32588246

ABSTRACT

PURPOSE: Automatic surgical workflow recognition in video is a fundamental yet challenging problem for developing computer-assisted and robotic-assisted surgery. Existing approaches with deep learning have achieved remarkable performance on the analysis of surgical videos; however, they rely heavily on large-scale labelled datasets. Unfortunately, the annotation is often not available in abundance, because it requires the domain knowledge of surgeons. Even for experts, it is very tedious and time-consuming to do a sufficient amount of annotation. METHODS: In this paper, we propose a novel active learning method for cost-effective surgical video analysis. Specifically, we propose a non-local recurrent convolutional network, which introduces a non-local block to capture the long-range temporal dependency (LRTD) among continuous frames. We then formulate an intra-clip dependency score to represent the overall dependency within each clip. By ranking scores among clips in the unlabelled data pool, we select the clips with weak dependencies to annotate, which indicates the most informative ones that better benefit network training. RESULTS: We validate our approach on a large surgical video dataset (Cholec80) by performing the surgical workflow recognition task. Using our LRTD-based selection strategy, we outperform other state-of-the-art active learning methods that only consider neighboring-frame information. Using only up to 50% of the samples, our approach can exceed the performance of full-data training. CONCLUSION: By modeling the intra-clip dependency, our LRTD-based strategy shows a stronger capability to select informative video clips for annotation compared with other active learning methods, through the evaluation on a popular public surgical dataset. The results also show the promising potential of our framework for reducing the annotation workload in clinical practice.
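
The selection logic, scoring each unlabeled clip by its internal dependency and annotating the weakest ones, can be sketched as follows. The mean pairwise cosine similarity used here as the dependency score is a stand-in assumption; the paper derives its score from the non-local LRTD module.

```python
import torch
import torch.nn.functional as F

def dependency_score(frame_feats):
    """frame_feats: (T, D) features of one clip; returns mean pairwise cosine similarity."""
    z = F.normalize(frame_feats, dim=1)
    sim = z @ z.t()
    t = sim.size(0)
    off_diag = sim.sum() - sim.diagonal().sum()
    return off_diag / (t * (t - 1))

clips = [torch.randn(16, 128) for _ in range(5)]            # unlabeled pool of 5 clips
scores = torch.stack([dependency_score(c) for c in clips])
to_annotate = scores.argsort()[:2]                          # pick the 2 weakest-dependency clips
print(to_annotate)
```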


Subject(s)
Pattern Recognition, Automated , Problem-Based Learning , Robotic Surgical Procedures , Surgery, Computer-Assisted/methods , Workflow , Algorithms , Computer Simulation , Humans , Learning , Models, Statistical , Neural Networks, Computer , Reproducibility of Results , Surgeons , Surgery, Computer-Assisted/instrumentation , Video Recording
18.
Med Image Anal ; 59: 101572, 2020 01.
Article in English | MEDLINE | ID: mdl-31639622

ABSTRACT

Surgical tool presence detection and surgical phase recognition are two fundamental yet challenging tasks in surgical video analysis, as well as essential components of various applications in modern operating rooms. While these two analysis tasks are highly correlated in clinical practice, as the surgical process is typically well defined, most previous methods tackled them separately, without making full use of their relatedness. In this paper, we present a novel method, a multi-task recurrent convolutional network with correlation loss (MTRCNet-CL), to exploit their relatedness and simultaneously boost the performance of both tasks. Specifically, our proposed MTRCNet-CL model has an end-to-end architecture with two branches, which share earlier feature encoders to extract general visual features while holding respective higher layers targeting the specific tasks. Given that temporal information is crucial for phase recognition, a long short-term memory (LSTM) network is explored to model the sequential dependencies in the phase recognition branch. More importantly, a novel and effective correlation loss is designed to model the relatedness between tool presence and phase identification of each video frame, by minimizing the divergence of predictions from the two branches. By mutually leveraging both low-level feature sharing and high-level prediction correlation, our MTRCNet-CL method encourages interaction between the two tasks to a large extent, and hence each can benefit the other. Extensive experiments on a large surgical video dataset (Cholec80) demonstrate the outstanding performance of our proposed method, consistently exceeding the state-of-the-art methods by a large margin, e.g., 89.1% vs. 81.0% mAP in tool presence detection and 87.4% vs. 84.5% F1 score in phase recognition.
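
One way to picture a correlation-style coupling between the two branches is to map phase probabilities through a phase-to-tool co-occurrence matrix and compare the result with the tool-presence branch output, as sketched below. The random mapping matrix and the MSE comparison are assumptions made purely for illustration; this is not the paper's correlation loss.

```python
import torch
import torch.nn.functional as F

n_phases, n_tools = 7, 7
phase_logits = torch.randn(4, n_phases)          # phase branch output for 4 frames
tool_logits = torch.randn(4, n_tools)            # tool-presence branch output
phase_to_tool = torch.rand(n_phases, n_tools)    # assumed phase-tool co-occurrence mapping

expected_tools = torch.softmax(phase_logits, dim=1) @ phase_to_tool   # tools implied by phase
corr_loss = F.mse_loss(torch.sigmoid(tool_logits), expected_tools.clamp(0, 1))
print(corr_loss)   # added to the two task losses during joint training
```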


Subject(s)
Cholecystectomy , Image Interpretation, Computer-Assisted/methods , Neural Networks, Computer , Video Recording , Datasets as Topic , Humans
19.
Radiology ; 291(3): 677-686, 2019 06.
Article in English | MEDLINE | ID: mdl-30912722

ABSTRACT

Background Nasopharyngeal carcinoma (NPC) may be cured with radiation therapy. Tumor proximity to critical structures demands accuracy in tumor delineation to avoid toxicities from radiation therapy; however, tumor target contouring for head and neck radiation therapy is labor intensive and highly variable among radiation oncologists. Purpose To construct and validate an artificial intelligence (AI) contouring tool to automate primary gross tumor volume (GTV) contouring in patients with NPC. Materials and Methods In this retrospective study, MRI data sets covering the nasopharynx from 1021 patients (median age, 47 years; 751 male, 270 female) with NPC between September 2016 and September 2017 were collected and divided into training, validation, and testing cohorts of 715, 103, and 203 patients, respectively. GTV contours were delineated for 1021 patients and were defined by consensus of two experts. A three-dimensional convolutional neural network was applied to 818 training and validation MRI data sets to construct the AI tool, which was tested in 203 independent MRI data sets. Next, the AI tool was compared against eight qualified radiation oncologists in a multicenter evaluation by using a random sample of 20 test MRI examinations. The Wilcoxon matched-pairs signed rank test was used to compare the Dice similarity coefficient (DSC) before versus after AI assistance. Results The AI-generated contours demonstrated a high level of accuracy when compared with ground truth contours at testing in 203 patients (DSC, 0.79; 2.0-mm difference in average surface distance). In multicenter evaluation, AI assistance improved contouring accuracy (five of eight oncologists had a higher median DSC after AI assistance; average median DSC, 0.74 vs 0.78; P < .001), reduced intra- and interobserver variation (by 36.4% and 54.5%, respectively), and reduced contouring time (by 39.4%). Conclusion The AI contouring tool improved primary gross tumor contouring accuracy of nasopharyngeal carcinoma, which could have a positive impact on tumor control and patient survival. © RSNA, 2019 Online supplemental material is available for this article. See also the editorial by Chang in this issue.
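
The Dice similarity coefficient (DSC) reported in the study is the standard overlap measure DSC = 2|A∩B|/(|A|+|B|); the snippet below computes it on toy binary masks standing in for an AI-generated and an expert GTV contour.

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

auto_contour = np.zeros((100, 100), dtype=np.uint8);   auto_contour[20:60, 20:60] = 1
expert_contour = np.zeros((100, 100), dtype=np.uint8); expert_contour[25:65, 25:65] = 1
print(round(dice(auto_contour, expert_contour), 3))    # overlap of the two masks, ~0.77
```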


Subject(s)
Deep Learning , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Nasopharyngeal Carcinoma/diagnostic imaging , Nasopharyngeal Neoplasms/diagnostic imaging , Adolescent , Adult , Algorithms , Female , Humans , Male , Middle Aged , Nasopharynx/diagnostic imaging , Retrospective Studies , Young Adult
20.
IEEE Trans Med Imaging ; 37(5): 1114-1126, 2018 05.
Article in English | MEDLINE | ID: mdl-29727275

ABSTRACT

We propose an analysis of surgical videos based on a novel recurrent convolutional network (SV-RCNet), specifically for automatic online workflow recognition from surgical videos, which is a key component for developing context-aware computer-assisted intervention systems. Different from previous methods, which harness visual and temporal information separately, the proposed SV-RCNet seamlessly integrates a convolutional neural network (CNN) and a recurrent neural network (RNN) to form a novel recurrent convolutional architecture, in order to take full advantage of the complementary visual and temporal features learned from surgical videos. We effectively train the SV-RCNet in an end-to-end manner so that the visual representations and sequential dynamics can be jointly optimized in the learning process. In order to produce more discriminative spatio-temporal features, we exploit a deep residual network (ResNet) and a long short-term memory (LSTM) network to extract visual features and temporal dependencies, respectively, and integrate them into the SV-RCNet. Moreover, based on the phase-transition-sensitive predictions from the SV-RCNet, we propose a simple yet effective inference scheme, namely prior knowledge inference (PKI), by leveraging the natural characteristics of surgical videos. Such a strategy further improves the consistency of results and largely boosts the recognition performance. Extensive experiments have been conducted on the MICCAI 2016 Modeling and Monitoring of Computer Assisted Interventions Workflow Challenge dataset and the Cholec80 dataset to validate SV-RCNet. Our approach not only achieves superior performance on these two datasets but also outperforms the state-of-the-art methods by a significant margin.
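
The recurrent convolutional idea, per-frame CNN features fed to an LSTM whose hidden state carries temporal context for online phase prediction, can be sketched with a toy model. The tiny CNN, layer sizes, and seven-phase head below are placeholders, not the ResNet/LSTM configuration of SV-RCNet.

```python
import torch
import torch.nn as nn

class TinyRecurrentConvNet(nn.Module):
    def __init__(self, n_phases=7, feat_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(                       # stand-in for a deep residual network
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        self.lstm = nn.LSTM(feat_dim, 128, batch_first=True)
        self.head = nn.Linear(128, n_phases)

    def forward(self, clip):                            # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)   # per-frame visual features
        out, _ = self.lstm(feats)                             # temporal context via recurrence
        return self.head(out)                                 # (B, T, n_phases): a phase per frame

model = TinyRecurrentConvNet()
print(model(torch.randn(2, 8, 3, 64, 64)).shape)
```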


Subject(s)
Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Software , Video-Assisted Surgery/classification , Algorithms , Databases, Factual , Humans , Workflow