ABSTRACT
BACKGROUND & AIMS: Benign ulcerative colorectal diseases (UCDs) such as ulcerative colitis, Crohn's disease, ischemic colitis, and intestinal tuberculosis share similar phenotypes but differ in etiology and treatment strategy. To accurately diagnose closely related diseases like UCDs, we hypothesized that contextual learning is critical for enhancing the ability of artificial intelligence models to discriminate subtle lesion differences amid vastly divergent spatial contexts. METHODS: White-light colonoscopy datasets of patients with confirmed UCDs and healthy controls were retrospectively collected. We developed a Multiclass Contextual Classification (MCC) model that differentiates among these UCDs and healthy controls by incorporating, in a unified framework, the tissue object contexts surrounding the individual lesion region in a scene and spatial information from other endoscopic frames (video-level). Internal and external datasets were used to validate the model's performance. RESULTS: Training datasets included 762 patients; the internal and external testing cohorts included 257 and 293 patients, respectively. Our MCC model provided a rapid reference diagnosis on internal test sets with a high averaged area under the receiver operating characteristic curve (image-level: 0.950; video-level: 0.973) and balanced accuracy (image-level: 76.1%; video-level: 80.8%), which was superior to junior endoscopists (accuracy: 71.8%, P < .0001) and similar to experts (accuracy: 79.7%, P = .732). The MCC model achieved an area under the receiver operating characteristic curve of 0.988 and a balanced accuracy of 85.8% on the external testing datasets. CONCLUSIONS: These results enable this model to fit into the routine endoscopic workflow, and the contextual framework to be adopted for diagnosing other closely related diseases.
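The balanced accuracy reported above is the mean of per-class recalls, which avoids rewarding a classifier for over-predicting the majority class in an imbalanced multiclass setting. A minimal sketch with made-up labels (not the study's data):

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; robust to class imbalance."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return sum(correct[c] / total[c] for c in total) / len(total)

# Toy 3-class example (UC / Crohn's / healthy), purely illustrative:
y_true = ["UC", "UC", "UC", "CD", "CD", "healthy"]
y_pred = ["UC", "UC", "CD", "CD", "CD", "healthy"]
print(balanced_accuracy(y_true, y_pred))  # (2/3 + 1 + 1) / 3 ≈ 0.889
```

Plain accuracy on the same toy data would be 5/6 ≈ 0.833; balanced accuracy weights each class equally regardless of its prevalence.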
Subjects
Artificial Intelligence , Ulcerative Colitis , Colonoscopy , Humans , Ulcerative Colitis/diagnosis , Retrospective Studies , Female , Male , Middle Aged , Adult , Computer-Assisted Image Interpretation/methods , ROC Curve , Aged , Reproducibility of Results , Colon/pathology , Colon/diagnostic imaging , Predictive Value of Tests , Differential Diagnosis , Video Recording , Machine Learning , Case-Control Studies
ABSTRACT
OBJECTIVES: We proposed a new approach for training a deep learning model for aneurysm rupture prediction that requires only a limited amount of labeled data. METHOD: Using segmented aneurysm masks as input, a backbone model was pretrained using a self-supervised method to learn deep embeddings of aneurysm morphology from 947 unlabeled cases of angiographic images. Subsequently, the backbone model was finetuned using 120 labeled cases with known rupture status. Clinical information was integrated with the deep embeddings to further improve prediction performance. The proposed model was compared with radiomics and conventional morphology models in prediction performance. An assistive diagnosis system was also developed based on the model and was tested with five neurosurgeons. RESULT: Our method achieved an area under the receiver operating characteristic curve (AUC) of 0.823, outperforming a deep learning model trained from scratch (0.787). By integrating clinical information, the proposed model's performance further improved to an AUC of 0.853, significantly better than the radiomics model (AUC = 0.805, p = 0.007) or the conventional morphology model (AUC = 0.766, p = 0.001). Our model also achieved the highest sensitivity, PPV, NPV, and accuracy among the compared models. Neurosurgeons' prediction performance improved from an AUC of 0.877 to 0.945 (p = 0.037) with the assistive diagnosis system. CONCLUSION: Our proposed method can produce a competitive deep learning model for rupture prediction using only a limited amount of data. The assistive diagnosis system could be useful for neurosurgeons to predict rupture. KEY POINTS: • A self-supervised learning method was proposed to mitigate the data-hungry nature of deep learning, enabling training of a deep neural network with a limited amount of data. • Using the proposed method, deep embeddings were extracted to represent intracranial aneurysm morphology. • The prediction model based on deep embeddings was significantly better than the conventional morphology model and the radiomics model. • An assistive diagnosis system was developed using deep embeddings for case-based reasoning, which was shown to significantly improve neurosurgeons' performance in predicting rupture.
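The key points mention case-based reasoning over deep embeddings. A hedged sketch of the retrieval step — nearest neighbors by cosine similarity over hypothetical 3-D embedding vectors (the paper's actual embedding dimension and similarity measure are not given here):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve_similar(query, library, k=2):
    """Return the k labeled cases whose embeddings are closest to the query."""
    ranked = sorted(library, key=lambda case: cosine(query, case["emb"]), reverse=True)
    return ranked[:k]

# Hypothetical embeddings with known rupture status:
library = [
    {"id": "A", "emb": [0.9, 0.1, 0.0], "ruptured": True},
    {"id": "B", "emb": [0.1, 0.9, 0.1], "ruptured": False},
    {"id": "C", "emb": [0.8, 0.2, 0.1], "ruptured": True},
]
hits = retrieve_similar([1.0, 0.1, 0.05], library, k=2)
print([h["id"] for h in hits])  # ['A', 'C'] -- the two morphologically closest cases
```

The retrieved cases, with their known outcomes, are what a clinician would consult as precedents; the ranking function and library here are purely illustrative.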
Subjects
Ruptured Aneurysm , Intracranial Aneurysm , Ruptured Aneurysm/diagnostic imaging , Humans , Intracranial Aneurysm/diagnostic imaging , Neural Networks (Computer) , ROC Curve
ABSTRACT
Raman spectroscopy is a non-destructive analysis technique that provides detailed information about the chemical structure of tumors. Raman spectra of 52 giant cell tumors of bone (GCTB) and 21 adjacent normal tissues, from formalin-fixed paraffin-embedded (FFPE) and frozen specimens, were obtained using a confocal Raman spectrometer and analyzed with machine learning and deep learning algorithms. We discovered characteristic Raman shifts in the GCTB specimens, which were assigned to phenylalanine and tyrosine. Based on the spectroscopic data, classification algorithms including support vector machine, k-nearest neighbors, and long short-term memory (LSTM) were successfully applied to discriminate GCTB from adjacent normal tissue in both the FFPE and frozen specimens, with accuracy ranging from 82.8% to 94.5%. Importantly, our LSTM algorithm showed the best performance in discriminating the frozen specimens, with a sensitivity of 93.9%, a specificity of 95.1%, and an AUC of 0.97. The results of our study suggest that confocal Raman spectroscopy combined with an LSTM network could non-destructively evaluate a tumor margin through its inherent biochemical specificity, which may allow intraoperative assessment of the adequacy of tumor clearance.
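Of the classifiers listed, k-nearest neighbors is the simplest to illustrate: a spectrum is assigned the majority label of its closest training spectra. A toy sketch on made-up intensity vectors standing in for Raman spectra (not the study's data):

```python
from collections import Counter

def knn_predict(query, train, k=3):
    """Classify a spectrum by majority vote of its k nearest training spectra
    (Euclidean distance over intensity vectors)."""
    dist = lambda s: sum((a - b) ** 2 for a, b in zip(query, s)) ** 0.5
    nearest = sorted(train, key=lambda item: dist(item[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy 3-bin "spectra" with class labels, purely illustrative:
train = [
    ([1.0, 0.2, 0.1], "GCTB"), ([0.9, 0.3, 0.1], "GCTB"),
    ([0.1, 0.2, 1.0], "normal"), ([0.2, 0.1, 0.9], "normal"),
]
print(knn_predict([0.95, 0.25, 0.1], train, k=3))  # GCTB
```

Real spectra would have hundreds of wavenumber bins and typically need baseline correction and normalization before any distance-based comparison is meaningful.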
Subjects
Deep Learning , Giant Cell Tumors , Algorithms , Humans , Raman Spectroscopy/methods , Support Vector Machine
ABSTRACT
Background Nasopharyngeal carcinoma (NPC) may be cured with radiation therapy. Tumor proximity to critical structures demands accuracy in tumor delineation to avoid toxicities from radiation therapy; however, tumor target contouring for head and neck radiation therapy is labor intensive and highly variable among radiation oncologists. Purpose To construct and validate an artificial intelligence (AI) contouring tool to automate primary gross tumor volume (GTV) contouring in patients with NPC. Materials and Methods In this retrospective study, MRI data sets covering the nasopharynx from 1021 patients (median age, 47 years; 751 male, 270 female) with NPC between September 2016 and September 2017 were collected and divided into training, validation, and testing cohorts of 715, 103, and 203 patients, respectively. GTV contours were delineated for 1021 patients and were defined by consensus of two experts. A three-dimensional convolutional neural network was applied to 818 training and validation MRI data sets to construct the AI tool, which was tested in 203 independent MRI data sets. Next, the AI tool was compared against eight qualified radiation oncologists in a multicenter evaluation by using a random sample of 20 test MRI examinations. The Wilcoxon matched-pairs signed rank test was used to compare the difference of Dice similarity coefficient (DSC) of pre- versus post-AI assistance. Results The AI-generated contours demonstrated a high level of accuracy when compared with ground truth contours at testing in 203 patients (DSC, 0.79; 2.0-mm difference in average surface distance). In multicenter evaluation, AI assistance improved contouring accuracy (five of eight oncologists had a higher median DSC after AI assistance; average median DSC, 0.74 vs 0.78; P < .001), reduced intra- and interobserver variation (by 36.4% and 54.5%, respectively), and reduced contouring time (by 39.4%). 
Conclusion The AI contouring tool improved primary gross tumor contouring accuracy of nasopharyngeal carcinoma, which could have a positive impact on tumor control and patient survival. © RSNA, 2019 Online supplemental material is available for this article. See also the editorial by Chang in this issue.
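The Dice similarity coefficient (DSC) used above to score the AI-generated contours measures the overlap of two binary masks: 2|A∩B| / (|A| + |B|). A minimal sketch on flattened toy masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(mask_a) + sum(mask_b)
    return 2 * inter / size if size else 1.0

# Two flattened toy contour masks (a real GTV mask is a 3D volume):
a = [1, 1, 1, 0, 0, 0]
b = [1, 1, 0, 1, 0, 0]
print(dice(a, b))  # 2*2 / (3+3) ≈ 0.667
```

A DSC of 1.0 means perfect overlap and 0.0 means none, so the reported 0.74 vs 0.78 pre- versus post-AI change reflects a genuine gain in voxel-level agreement with the consensus contour.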
Subjects
Deep Learning , Computer-Assisted Image Interpretation/methods , Magnetic Resonance Imaging/methods , Nasopharyngeal Carcinoma/diagnostic imaging , Nasopharyngeal Neoplasms/diagnostic imaging , Adolescent , Adult , Algorithms , Female , Humans , Male , Middle Aged , Nasopharynx/diagnostic imaging , Retrospective Studies , Young Adult
ABSTRACT
BACKGROUND: The usefulness of 3D deep learning-based classification of breast cancer and malignancy localization from MRI has been reported. This work can potentially be very useful in the clinical domain and aid radiologists in breast cancer diagnosis. PURPOSE: To evaluate the efficacy of a 3D deep convolutional neural network (CNN) for diagnosing breast cancer and localizing lesions on dynamic contrast-enhanced (DCE) MRI data in a weakly supervised manner. STUDY TYPE: Retrospective study. SUBJECTS: A total of 1537 female study cases (mean age 47.5 ± 11.8 years) were collected from March 2013 to December 2016. All cases had pathology-result labels as well as BI-RADS categories assessed by radiologists. FIELD STRENGTH/SEQUENCE: 1.5 T dynamic contrast-enhanced MRI. ASSESSMENT: Deep 3D densely connected networks were trained under image-level supervision to automatically classify the images and localize the lesions. The dataset was randomly divided into training (1073), validation (157), and testing (307) subsets. STATISTICAL TESTS: Accuracy, sensitivity, specificity, area under the receiver operating characteristic (ROC) curve, and the McNemar test for breast cancer classification; Dice similarity for breast cancer localization. RESULTS: The final algorithm performance for breast cancer diagnosis showed 83.7% (257 of 307) accuracy (95% confidence interval [CI]: 79.1%, 87.4%), 90.8% (187 of 206) sensitivity (95% CI: 80.6%, 94.1%), and 69.3% (70 of 101) specificity (95% CI: 59.7%, 77.5%), with an area under the ROC curve of 0.859. The weakly supervised cancer detection showed an overall Dice similarity of 0.501 ± 0.274. DATA CONCLUSION: 3D CNNs demonstrated high accuracy for diagnosing breast cancer. The weakly supervised learning method showed promise for localizing lesions in volumetric radiology images with only image-level labels. LEVEL OF EVIDENCE: 4 Technical Efficacy: Stage 1 J. Magn. Reson. Imaging 2019;50:1144-1151.
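The abstract does not state how its confidence intervals were computed; the Wilson score interval is one standard choice for a binomial proportion, and for the 257/307 accuracy it yields values close to the reported 79.1%–87.4%:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% when z = 1.96)."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

lo, hi = wilson_ci(257, 307)  # the reported 83.7% (257 of 307) accuracy
print(f"{lo:.3f}, {hi:.3f}")  # 0.792, 0.874
```

The Wilson interval is preferred over the naive normal approximation for proportions near 0 or 1 or for small n, since it never extends outside [0, 1].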
Subjects
Breast Neoplasms/diagnostic imaging , Computer-Assisted Image Interpretation/methods , Three-Dimensional Imaging/methods , Magnetic Resonance Imaging/methods , Breast/diagnostic imaging , Contrast Media , Deep Learning , Female , Humans , Image Enhancement/methods , Middle Aged , Neural Networks (Computer) , Retrospective Studies , Sensitivity and Specificity
ABSTRACT
Segmentation of key brain tissues from 3D medical images is of great significance for brain disease diagnosis, progression assessment, and monitoring of neurologic conditions. While manual segmentation is time-consuming, laborious, and subjective, automated segmentation is quite challenging due to the complicated anatomical environment of the brain and the large variations of brain tissues. We propose a novel voxelwise residual network (VoxResNet) with a set of effective training schemes to cope with this challenging problem. The main merit of residual learning is that it can alleviate the degradation problem when training a deep network, so that the performance gains achieved by increasing the network depth can be fully leveraged. With this technique, our VoxResNet is built with 25 layers and hence can generate more representative features to deal with the large variations of brain tissues than its rivals using hand-crafted features or shallower networks. In order to effectively train such a deep network with limited training data for brain segmentation, we seamlessly integrate multi-modality and multi-level contextual information into our network, so that the complementary information of different modalities can be harnessed and features of different scales can be exploited. Furthermore, an auto-context version of the VoxResNet is proposed by combining low-level image appearance features, implicit shape information, and high-level context together to further improve segmentation performance. Extensive experiments on the well-known benchmark (i.e., MRBrainS) of brain segmentation from 3D magnetic resonance (MR) images corroborated the efficacy of the proposed VoxResNet. Our method achieved the first place in the challenge out of 37 competitors, including several state-of-the-art brain segmentation methods.
Our method is inherently general and can be readily applied as a powerful tool to many brain-related studies, where accurate segmentation of brain structures is critical.
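The degradation-mitigating property of residual learning comes from the identity shortcut y = x + F(x). A toy sketch with plain lists standing in for feature maps (the real VoxResNet uses 3D convolutions, not the dense layers assumed here):

```python
def relu(v):
    return [max(0.0, x) for x in v]

def linear(v, w, b):
    """Dense layer: one output per row of the (hypothetical) weight matrix w."""
    return [sum(wi * xi for wi, xi in zip(row, v)) + bi for row, bi in zip(w, b)]

def residual_unit(x, w1, b1, w2, b2):
    """y = x + F(x): the identity shortcut that residual learning relies on."""
    f = linear(relu(linear(x, w1, b1)), w2, b2)
    return [xi + fi for xi, fi in zip(x, f)]

# With zero-initialized weights the unit is exactly the identity mapping,
# which is why very deep residual stacks remain easy to optimize:
x = [0.5, -1.0, 2.0]
zeros = [[0.0] * 3 for _ in range(3)]
print(residual_unit(x, zeros, [0.0] * 3, zeros, [0.0] * 3))  # [0.5, -1.0, 2.0]
```

Because each unit only needs to learn a residual correction F(x) rather than the full mapping, adding layers cannot make the representable function worse than the identity, which is the intuition behind training a 25-layer volumetric network successfully.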
Subjects
Brain/anatomy & histology , Brain/diagnostic imaging , Deep Learning , Computer-Assisted Image Processing/methods , Three-Dimensional Imaging/methods , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Brain/pathology , Humans
ABSTRACT
Importance: Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency. Objective: Assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin-stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists' diagnoses in a diagnostic setting. Design, Setting, and Participants: Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining were provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC). Exposures: Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation. Main Outcomes and Measures: The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor. Results: The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. 
The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false-positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P < .001). The top 5 algorithms had a mean AUC that was comparable with the pathologist interpreting the slides in the absence of time constraints (mean AUC, 0.960 [range, 0.923-0.994] for the top 5 algorithms vs 0.966 [95% CI, 0.927-0.998] for the pathologist WOTC). Conclusions and Relevance: In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.
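The pathologists' five-level confidence ratings (definitely normal through definitely tumor) yield an ROC curve by sweeping a threshold over the ordinal scale, giving one (FPR, TPR) operating point per cutoff. A sketch with invented ratings, not CAMELYON16 data:

```python
def roc_from_ratings(ratings, truth, levels=(1, 2, 3, 4, 5)):
    """Sweep thresholds over an ordinal confidence scale (1 = definitely normal,
    5 = definitely tumor); return (FPR, TPR) points and the trapezoidal AUC."""
    pos = sum(truth)
    neg = len(truth) - pos
    pts = [(0.0, 0.0), (1.0, 1.0)]
    for t in levels:  # call "tumor" whenever rating >= t
        tp = sum(1 for r, y in zip(ratings, truth) if y and r >= t)
        fp = sum(1 for r, y in zip(ratings, truth) if not y and r >= t)
        pts.append((fp / neg, tp / pos))
    pts = sorted(set(pts))
    auc = sum((x2 - x1) * (y1 + y2) / 2 for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
    return pts, auc

# Toy ratings for 4 metastatic (truth=1) and 4 normal (truth=0) slides:
ratings = [5, 4, 4, 2, 1, 2, 3, 1]
truth   = [1, 1, 1, 1, 0, 0, 0, 0]
pts, auc = roc_from_ratings(ratings, truth)
print(round(auc, 3))  # 0.906
```

This trapezoidal AUC agrees with the Mann-Whitney formulation (fraction of positive-negative pairs ranked correctly, counting ties as half), which is how a discrete rating scale still produces a meaningful area.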
Subjects
Breast Neoplasms/pathology , Lymphatic Metastasis/diagnosis , Machine Learning , Pathologists , Algorithms , Female , Humans , Lymphatic Metastasis/pathology , Clinical Pathology , ROC Curve
ABSTRACT
Augmented Reality (AR) holds the potential to revolutionize surgical procedures by allowing surgeons to visualize critical structures within the patient's body. This is achieved by superimposing preoperative organ models onto the actual anatomy. Challenges arise from dynamic deformations of organs during surgery, making preoperative models inadequate for faithfully representing intraoperative anatomy. To enable reliable navigation in augmented surgery, modeling intraoperative deformation to obtain an accurate alignment of the preoperative organ model with the intraoperative anatomy is indispensable. Despite the variety of methods proposed to model intraoperative organ deformation, there are still few literature reviews that systematically categorize and summarize these approaches. This review aims to fill this gap by providing a comprehensive and technically oriented overview of modeling methods for intraoperative organ deformation in augmented reality in surgery. Through a systematic search and screening process, 112 closely relevant papers were included in this review. By presenting the current status of organ deformation modeling methods and their clinical applications, this review seeks to enhance the understanding of organ deformation modeling in AR-guided surgery and to discuss potential topics for future advancements.
Subjects
Augmented Reality , Computer-Assisted Surgery , Humans , Computer-Assisted Surgery/methods , Anatomic Models , Three-Dimensional Imaging
ABSTRACT
Intraoperative imaging techniques for reconstructing deformable tissues in vivo are pivotal for advanced surgical systems. Existing methods either compromise on rendering quality or are excessively computationally intensive, often demanding dozens of hours to perform, which significantly hinders their practical application. In this paper, we introduce Fast Orthogonal Plane (Forplane), a novel, efficient framework based on neural radiance fields (NeRF) for the reconstruction of deformable tissues. We conceptualize surgical procedures as 4D volumes, and break them down into static and dynamic fields comprised of orthogonal neural planes. This factorization discretizes the four-dimensional space, leading to a decreased memory usage and faster optimization. A spatiotemporal importance sampling scheme is introduced to improve performance in regions with tool occlusion as well as large motions and accelerate training. An efficient ray marching method is applied to skip sampling among empty regions, significantly improving inference speed. Forplane accommodates both binocular and monocular endoscopy videos, demonstrating its extensive applicability and flexibility. Our experiments, carried out on two in vivo datasets, the EndoNeRF and Hamlyn datasets, demonstrate the effectiveness of our framework. In all cases, Forplane substantially accelerates both the optimization process (by over 100 times) and the inference process (by over 15 times) while maintaining or even improving the quality across a variety of non-rigid deformations. This significant performance improvement promises to be a valuable asset for future intraoperative surgical applications. The code of our project is now available at https://github.com/Loping151/ForPlane.
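The empty-space skipping used to accelerate ray marching can be illustrated in one dimension: an occupancy grid marks which cells may contain tissue, and candidate samples falling in empty cells are discarded before any expensive network evaluation. A simplified sketch (the actual method operates on 3D grids with its own occupancy criteria, which are assumptions not detailed in the abstract):

```python
def march_ray(origin, step, n_steps, occupancy):
    """Sample points along a 1-D ray, skipping cells the occupancy grid marks
    empty. occupancy[i] is True when grid cell i may contain tissue."""
    samples = []
    for i in range(n_steps):
        t = origin + i * step
        cell = int(t)  # which grid cell this sample falls in
        if 0 <= cell < len(occupancy) and occupancy[cell]:
            samples.append(t)
    return samples

# 8-cell grid where only cells 2-4 are occupied; a dense 16-step ray
# collapses to the 6 samples that actually hit tissue:
occ = [False, False, True, True, True, False, False, False]
pts = march_ray(origin=0.0, step=0.5, n_steps=16, occupancy=occ)
print(pts)  # [2.0, 2.5, 3.0, 3.5, 4.0, 4.5]
```

Since the radiance field must be queried once per surviving sample, discarding 10 of 16 candidates here translates directly into fewer network evaluations, which is the source of the inference speedup the abstract reports.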
Subjects
Algorithms , Humans , Computer-Assisted Image Processing/methods , Computer-Assisted Surgery/methods , Endoscopy/methods , Neural Networks (Computer)
ABSTRACT
PURPOSE: The healthcare industry has a growing need for realistic modeling and efficient simulation of surgical scenes. With effective models of deformable surgical scenes, clinicians are able to conduct surgical planning and surgery training on scenarios close to real-world cases. However, a significant challenge in achieving such a goal is the scarcity of high-quality soft tissue models with accurate shapes and textures. To address this gap, we present a data-driven framework that leverages emerging neural radiance field technology to enable high-quality surgical reconstruction and explore its application for surgical simulations. METHOD: We first focus on developing a fast NeRF-based surgical scene 3D reconstruction approach that achieves state-of-the-art performance. This method can significantly outperform traditional 3D reconstruction methods, which have failed to capture large deformations and produce fine-grained shapes and textures. We then propose an automated creation pipeline of interactive surgical simulation environments through a closed mesh extraction algorithm. RESULTS: Our experiments have validated the superior performance and efficiency of our proposed approach in surgical scene 3D reconstruction. We further utilize our reconstructed soft tissues to conduct FEM and MPM simulations, showcasing the practical application of our method in data-driven surgical simulations. CONCLUSION: We have proposed a novel NeRF-based reconstruction framework with an emphasis on simulation purposes. Our reconstruction framework facilitates the efficient creation of high-quality surgical soft tissue 3D models. With multiple soft tissue simulations demonstrated, we show that our work has the potential to benefit downstream clinical tasks, such as surgical education.
Subjects
Computer Simulation , Three-Dimensional Imaging , Humans , Three-Dimensional Imaging/methods , Algorithms , Computer-Assisted Surgery/methods
ABSTRACT
Background: Gallbladder cancer (GBC) differs from other biliary tract cancers in molecular phenotype and microenvironment, and specific treatments for GBC urgently need to be explored. This study preliminarily investigated the clinical value of hepatic artery infusion chemotherapy (HAIC) combined with bevacizumab plus a programmed death receptor-1 (PD-1) inhibitor for the treatment of GBC with hepatic oligometastasis. Methods: We retrospectively collected data on GBC patients with hepatic oligometastasis who received this combination therapy. The clinical data, conversion rate, treatment response, adverse events (AEs), and short-term survival were summarized. The responses of primary gallbladder lesions and hepatic metastases, and their effect on prognosis, were investigated. Results: A total of 27 patients were included in the analysis. No grade 4 AEs were observed. The overall objective response rate (ORR) was 55.6% and the disease control rate (DCR) was 85.2%. Median overall survival (OS) was 15.0 months and the 1-year survival rate was 64.0%. Median progression-free survival (PFS) was 7.0 months and the 1-year PFS rate was 16.2%. Six patients (22.2%) were successfully converted to resection. Hepatic metastases appeared less likely than primary gallbladder lesions to achieve remission (ORR: 40.7% vs. 77.8%; P=0.012), but their response appeared closely related to prognosis [median OS: 16.0 months in the complete response (CR) or partial response (PR) group vs. 11.0 months in the stable disease (SD) or progressive disease (PD) group, P=0.070; median PFS: 12.0 months in the CR or PR group vs. 6.5 months in the SD or PD group, P<0.001]. Preoperative CA19-9 >1,900 U/mL and metastatic lesions >5 cm were associated with an unsatisfactory response, whereas a significant decrease in 18F-fluorodeoxyglucose (18F-FDG) uptake may be a marker of tumor remission.
Conclusions: The combination of HAIC, a PD-1 inhibitor, and bevacizumab shows potential for advanced GBC with hepatic oligometastasis. The therapeutic response of hepatic metastasis had a greater influence on prognosis than that of primary gallbladder lesions.
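The ORR and DCR above follow the usual definitions over RECIST-style best responses: ORR counts CR and PR, DCR additionally counts SD. A sketch using a hypothetical response breakdown chosen to reproduce the reported rates (the actual CR/PR split is not given in the abstract):

```python
def response_rates(responses):
    """ORR = (CR + PR) / N; DCR = (CR + PR + SD) / N over evaluable patients."""
    n = len(responses)
    orr = sum(r in ("CR", "PR") for r in responses) / n
    dcr = sum(r in ("CR", "PR", "SD") for r in responses) / n
    return orr, dcr

# 27 hypothetical best responses consistent with the reported 55.6% ORR / 85.2% DCR:
responses = ["CR"] * 2 + ["PR"] * 13 + ["SD"] * 8 + ["PD"] * 4
orr, dcr = response_rates(responses)
print(f"ORR={orr:.1%}, DCR={dcr:.1%}")  # ORR=55.6%, DCR=85.2%
```

With n = 27, 15 responders give 15/27 ≈ 55.6% and 23 disease-controlled patients give 23/27 ≈ 85.2%, matching the reported figures.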
ABSTRACT
In this article, we present a novel and generic data-driven method to servo-control the 3-D shape of continuum and soft robots based on proprioceptive sensing feedback. The development of 3-D shape perception and control technologies is crucial for continuum and soft robots to perform tasks autonomously in surgical interventions. However, owing to their nonlinear properties, continuum robots are difficult to model, especially soft robots with variable stiffness. To address this problem, we propose a versatile learning-based adaptive shape controller that leverages proprioception of the 3-D configuration from fiber Bragg grating (FBG) sensors; it can estimate the unknown model of a continuum robot online under unexpected disturbances and exhibits adaptive behavior toward the unmodeled system without a priori data exploration. Based on a new composite adaptation algorithm, asymptotic convergence of the closed-loop system with learning parameters is proven by Lyapunov theory. To validate the proposed method, we present a comprehensive experimental study using two continuum and soft robots, both integrated with multicore FBGs: a robotic-assisted colonoscope and multisection extensible soft manipulators. The results demonstrate the feasibility, adaptability, and superiority of our controller in various unstructured environments, as well as in phantom experiments.
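The controller's online estimation of an unknown robot model can be caricatured by the simplest adaptive scheme, a least-mean-squares update that refines weight estimates from streaming measurements. This is a toy analogue, not the paper's composite adaptation law:

```python
def lms_fit(samples, lr=0.1, epochs=200):
    """Estimate unknown weights w in y = w·x from streaming (x, y) samples,
    a toy stand-in for online model estimation in adaptive control."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in samples:
            err = y - sum(wi * xi for wi, xi in zip(w, x))  # prediction error
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]  # gradient step
    return w

# Measurements generated by the (hidden) model y = 2*x0 - 1*x1:
samples = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]
w = lms_fit(samples)
print([round(wi, 3) for wi in w])  # close to [2.0, -1.0]
```

The real controller must additionally prove that both the tracking error and the parameter estimates converge under disturbances, which is what the Lyapunov analysis in the paper establishes for its composite adaptation law.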
ABSTRACT
Although the segment anything model (SAM) has achieved impressive results on general-purpose semantic segmentation with strong generalization on everyday images, its performance on medical image segmentation is less precise and less stable, especially for tumor segmentation tasks involving objects of small size, irregular shape, and low contrast. Notably, the original SAM architecture is designed for 2D natural images and therefore cannot effectively extract 3D spatial information from volumetric medical data. In this paper, we propose a novel adaptation method for transferring SAM from 2D to 3D for promptable medical image segmentation. Through a holistically designed scheme for architecture modification, we transfer the SAM to support volumetric inputs while retaining the majority of its pre-trained parameters for reuse. The fine-tuning process is conducted in a parameter-efficient manner, wherein most of the pre-trained parameters remain frozen and only a few lightweight spatial adapters are introduced and tuned. Despite the domain gap between natural and medical data and the disparity in spatial arrangement between 2D and 3D, the transformer trained on natural images can effectively capture the spatial patterns present in volumetric medical images with only lightweight adaptations. We conduct experiments on four open-source tumor segmentation datasets, and with a single click prompt, our model outperforms domain-specific state-of-the-art medical image segmentation models and interactive segmentation models. We also compared our adaptation method with existing popular adapters and observed significant performance improvements on most datasets. Our code and models are available at: https://github.com/med-air/3DSAM-adapter.
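Lightweight adapters of the kind described typically follow a bottleneck pattern: down-project, apply a nonlinearity, up-project, then add the result back to the frozen backbone's output. A sketch with hypothetical dimensions — for hidden size d and bottleneck rank r, only about 2·d·r adapter weights are trained instead of updating the d²-scale backbone:

```python
def adapter(x, w_down, w_up):
    """Bottleneck adapter: x + up(relu(down(x))). Only w_down and w_up are
    trained; the frozen backbone output x passes through via the skip path."""
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in w_down]
    up = [sum(w * hi for w, hi in zip(row, h)) for row in w_up]
    return [xi + ui for xi, ui in zip(x, up)]

# Zero-initializing the up-projection makes the adapter start as an exact
# identity, so fine-tuning begins from the pre-trained model's behavior:
x = [1.0, -2.0, 0.5, 3.0]          # hypothetical 4-dim feature
w_down = [[0.1, 0.0, 0.2, 0.0],    # rank-2 bottleneck
          [0.0, 0.3, 0.0, 0.1]]
w_up = [[0.0, 0.0]] * 4
print(adapter(x, w_down, w_up))  # [1.0, -2.0, 0.5, 3.0]
```

At realistic transformer widths (e.g. d = 768, r = 16) the adapter adds roughly 25 thousand weights per layer against the hundreds of thousands in a full projection, which is where the parameter efficiency comes from.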
Subjects
Three-Dimensional Imaging , Humans , Three-Dimensional Imaging/methods , Algorithms , Neoplasms/diagnostic imaging
ABSTRACT
Background: Preoperative conversion therapy for advanced hepatocellular carcinoma (HCC) is still being explored. This study reports the potential of combining transarterial chemoembolization (TACE), hepatic arterial infusion chemotherapy (HAIC), programmed cell death protein-1 (PD-1) inhibitors, and lenvatinib as preoperative conversion therapy for nonmetastatic advanced HCC. Methods: This retrospective study gathered data on patients with nonmetastatic advanced HCC who received this combination therapy. We used drug-eluting beads (DEB) instead of conventional iodized oil in TACE. The clinical data, conversion rate, adverse events (AEs), and short-term survival were summarized. A stratified analysis was conducted based on whether patients underwent surgery. Results: A total of 28 patients were included in the analysis. No grade 4 AEs were observed. The overall objective response rate (ORR) was 64.3%. Ten (35.7%) patients eventually received R0 resection after 2 cycles of combination therapy. Patients who proceeded to resection (surgery group) had a significantly higher ORR (90.0% vs. 50.0%, P=0.048). The proportion of patients with alpha-fetoprotein (AFP) >1,000 µg/L was significantly lower in the surgery group (10.0% vs. 66.7%, P=0.006). After combination therapy, more patients in the surgery group experienced a significant reduction of >90% in AFP levels (75.0% vs. 23.1%, P=0.03), as well as in the standardized uptake value (SUV) of 18F-fluorodeoxyglucose (18F-FDG) in both primary tumors and portal vein tumor thrombosis (PVTT) (60.0% vs. 5.6%, P=0.003; 57.1% vs. 8.3%, P=0.04). Of note, 85.7% of PVTT exhibited a major pathological response (MPR) on pathological examination, although only 28.6% achieved downstaging on preoperative imaging. MPR was more commonly observed in PVTT than in main tumors (85.7% vs. 20.0%). In the non-surgery group, the median overall survival (OS) was 7 months with a 1-year survival rate of 27.8%, whereas in the surgery group the median OS was not reached and the 1-year survival rate was 60.0%. Conclusions: The combination of TACE-HAIC, PD-1 inhibitors, and lenvatinib showed benefit as a preoperative conversion therapy for nonmetastatic advanced HCC. In addition to imaging evaluation, significant reductions in 18F-FDG uptake and AFP can be used as predictors of successful conversion, especially for PVTT.
ABSTRACT
The ability to recover tissue deformation from visual features is fundamental for many robotic surgery applications. This has been a long-standing research topic in computer vision; however, it remains unsolved due to the complex dynamics of soft tissues when they are manipulated by surgical instruments. The ambiguous pixel correspondence caused by homogeneous texture makes dense and accurate tissue tracking even more challenging. In this paper, we propose a novel self-supervised framework to recover tissue deformations from stereo surgical videos. Our approach integrates semantics, cross-frame motion flow, and long-range temporal dependencies so that the recovered deformations represent actual tissue dynamics. Moreover, we incorporate diffeomorphic mapping to regularize the warping field to be physically realistic. To comprehensively evaluate our method, we collected stereo surgical video clips containing three types of tissue manipulation (i.e., pushing, dissection, and retraction) from two different types of surgeries (i.e., hemicolectomy and mesorectal excision). Our method achieves impressive results in capturing deformations as 3D meshes and generalizes well across manipulations and surgeries. It also outperforms current state-of-the-art methods on non-rigid registration and optical flow estimation. To the best of our knowledge, this is the first work on self-supervised learning for dense tissue deformation modeling from stereo surgical videos. Our code will be released.
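Warp-field regularizers commonly penalize differences between neighboring displacements so the recovered deformation stays physically plausible; the diffeomorphic mapping mentioned above is a stronger constraint (guaranteeing invertibility), so the following is only a simplified stand-in:

```python
def smoothness_penalty(flow):
    """Sum of squared differences between horizontally and vertically adjacent
    displacement vectors of a 2-D flow field (rows x cols of (dx, dy) tuples)."""
    penalty = 0.0
    rows, cols = len(flow), len(flow[0])
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((0, 1), (1, 0)):  # right and down neighbors
                ni, nj = i + di, j + dj
                if ni < rows and nj < cols:
                    penalty += sum((a - b) ** 2
                                   for a, b in zip(flow[i][j], flow[ni][nj]))
    return penalty

# A constant (rigid-shift) field is perfectly smooth; a jumpy one is penalized:
rigid = [[(1.0, 0.0)] * 3 for _ in range(3)]
jumpy = [[(1.0, 0.0), (0.0, 1.0), (1.0, 0.0)] for _ in range(3)]
print(smoothness_penalty(rigid), smoothness_penalty(jumpy))  # 0.0 12.0
```

Adding such a term to a photometric loss discourages the ambiguous pixel correspondences on homogeneous texture from producing physically impossible tearing or folding in the warp.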
ABSTRACT
Automated classification of breast cancer subtypes from digital pathology images has been an extremely challenging task due to the complicated spatial patterns of cells in the tissue microenvironment. While recently proposed graph transformers can capture more long-range dependencies to enhance accuracy, they largely ignore the topological connectivity between graph nodes, which is nevertheless critical for extracting more representative features for this difficult task. In this paper, we propose a novel connectivity-aware graph transformer (CGT) for phenotyping the topological connectivity of the tissue graph constructed from digital pathology images for breast cancer classification. Our CGT seamlessly integrates connectivity embedding into the node features at every graph transformer layer by using local connectivity aggregation, in order to yield more comprehensive graph representations that distinguish different breast cancer subtypes. Motivated by realistic intercellular communication patterns, we then encode the spatial distance between two arbitrary nodes as a connectivity bias in the self-attention calculation, thereby allowing the CGT to distinctively harness the connectivity embedding based on the distance between two nodes. We extensively evaluate the proposed CGT on a large cohort of breast carcinoma digital pathology images stained with haematoxylin and eosin. Experimental results demonstrate the effectiveness of our CGT, which outperforms state-of-the-art methods by a large margin. Codes are released on https://github.com/wang-kang-6/CGT.
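The distance-based connectivity bias described above amounts to subtracting a distance term from the raw attention scores before the softmax, so nearby nodes (e.g. neighboring cells in the tissue graph) receive more weight. A sketch where the scalar scores and the fixed decay rate alpha are illustrative assumptions, not CGT's exact parameterization:

```python
import math

def softmax(scores):
    m = max(scores)  # shift for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def biased_attention_row(raw_scores, distances, alpha=1.0):
    """One query's attention weights with a distance bias subtracted from each
    raw score, discounting attention paid to far-away graph nodes."""
    return softmax([s - alpha * d for s, d in zip(raw_scores, distances)])

# With equal raw scores, attention follows proximity alone:
weights = biased_attention_row([0.0, 0.0, 0.0], distances=[0.0, 1.0, 4.0])
print([round(w, 3) for w in weights])  # [0.721, 0.265, 0.013]
```

Because the bias enters before the softmax, it reshapes the attention distribution smoothly rather than hard-masking distant nodes, so long-range dependencies can still surface when their raw scores are strong enough.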
Subjects
Algorithms , Breast Neoplasms , Image Interpretation, Computer-Assisted , Humans , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Breast Neoplasms/classification , Female , Image Interpretation, Computer-Assisted/methods , Breast/diagnostic imaging , Breast/pathology
ABSTRACT
Colorectal cancer is a leading cause of cancer-related deaths, primarily because obvious early-stage symptoms are absent. Whole-stage colorectal disease diagnosis is crucial for assessing lesion evolution and determining treatment plans. However, differences in lesion location and disease progression lead to intra-class disparities and inter-class similarities in colorectal lesion representation. In addition, interpretable algorithms explaining lesion progression are still lacking, making the prediction process a "black box". In this paper, we propose IPNet, a dual-branch interpretable network with a progressive loss for whole-stage colorectal disease diagnosis. The dual-branch architecture captures unbiased features representing diverse localities to suppress intra-class variation. The progressive loss function considers inter-class relationships, using prior knowledge of disease evolution to guide classification. Furthermore, a novel Grain-CAM is designed to interpret IPNet by visualizing pixel-wise attention maps from shallow to deep layers, providing regions semantically related to IPNet's progressive classification. We conducted whole-stage diagnosis on two image modalities, i.e., colorectal lesion classification on 129,893 endoscopic optical images and rectal tumor T-staging on 11,072 endoscopic ultrasound images. IPNet surpasses other state-of-the-art algorithms, achieving accuracies of 93.15% and 89.62%, respectively. In particular, it establishes effective decision boundaries for challenging distinctions such as polyp vs. adenoma and T2 vs. T3. These results represent an explainable approach to whole-stage colorectal lesion classification, and rectal tumor T-staging by endoscopic ultrasound is explored for the first time. IPNet is expected to see further application, assisting physicians in whole-stage disease diagnosis and enhancing diagnostic interpretability.
ABSTRACT
Histopathological tissue classification is a fundamental task in computational pathology. Deep learning (DL)-based models have achieved superior performance, but centralized training suffers from privacy leakage. Federated learning (FL) can safeguard privacy by keeping training samples local, while existing FL-based frameworks require a large number of well-annotated training samples and numerous rounds of communication, hindering their viability in real-world clinical scenarios. In this article, we propose a lightweight and universal FL framework, named federated deep-broad learning (FedDBL), to achieve superior classification performance with limited training samples and only one round of communication. By simply combining a pretrained DL feature extractor and a fast, lightweight broad-learning inference system with a classical federated aggregation approach, FedDBL can dramatically reduce data dependency and improve communication efficiency. Five-fold cross-validation demonstrates that FedDBL greatly outperforms the competitors with only one round of communication and limited training samples, and even achieves performance comparable to frameworks trained over multiple communication rounds. Furthermore, due to its lightweight design and one-round communication, FedDBL reduces the communication burden from 4.6 GB to only 138.4 KB per client with a ResNet-50 backbone at 50-round training. Extensive experiments also show the scalability of FedDBL in model generalization to unseen datasets, various client numbers, model personalization, and other image modalities. Since no data or deep models are shared across clients, privacy is preserved and there is no risk of model inversion attacks. Code is available at https://github.com/tianpeng-deng/FedDBL.
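The one-round communication pattern can be sketched as a single sample-size-weighted aggregation of locally fitted lightweight heads. `fed_aggregate` is a hypothetical name for this FedAvg-style step, a sketch of the communication pattern rather than FedDBL's actual API:

```python
def fed_aggregate(client_weights, client_sizes):
    """One-round FedAvg-style aggregation: each client sends its locally
    fitted lightweight head (a flat weight vector) exactly once; the server
    returns the sample-size-weighted average. Only these small heads travel
    over the network, never raw data or the deep feature extractor."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[k] * n for w, n in zip(client_weights, client_sizes)) / total
            for k in range(dim)]

# two clients; the larger one dominates the average
global_w = fed_aggregate([[1.0, 0.0], [0.0, 1.0]], [30, 10])
assert global_w == [0.75, 0.25]
```

Because the aggregated object is a small classifier head rather than a deep network, the per-client payload stays in the kilobyte range, which is the communication saving the abstract quantifies.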
ABSTRACT
Aneurysmal subarachnoid hemorrhage is a medical emergency of the brain with high mortality and poor prognosis. Estimating the causal effect of treatment strategies on patient outcomes is crucial for treatment decision-making in aneurysmal subarachnoid hemorrhage. However, most existing studies on treatment decision support for this disease cannot simultaneously compare the potential outcomes of different treatments for a given patient. Furthermore, these studies fail to integrate imaging data with non-imaging clinical data, both of which are useful in clinical practice. In this paper, we estimate the causal effect of various treatments on patients with aneurysmal subarachnoid hemorrhage by integrating plain CT with non-imaging clinical data, represented as structured tabular data. Specifically, we first propose a novel scheme that uses a multi-modality confounder-distillation architecture to predict treatment outcome and treatment assignment simultaneously. With these distilled confounder features, we design an imaging and non-imaging interaction representation learning strategy that uses the complementary information extracted from the different modalities to balance the feature distributions of the different treatment groups. We conducted extensive experiments on a clinical dataset of 656 subarachnoid hemorrhage cases collected from the Hospital Authority Data Collaboration Laboratory in Hong Kong. Our method shows consistent improvements on the evaluation metrics of treatment effect estimation, achieving state-of-the-art results over strong competitors. Code is released at https://github.com/med-air/TOP-aSAH.
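The idea of predicting treatment outcome and treatment assignment from one shared representation can be sketched as a two-head forward pass: training both heads on the same hidden features encourages those features to retain confounders that influence both quantities. This toy network is our own drastic simplification of the multi-modality confounder-distillation scheme, with all names and shapes assumed for illustration:

```python
import math

def two_head_forward(x, w_shared, w_outcome, w_treat):
    """Forward pass of a shared-representation network with two heads:
    one predicts the (binary) treatment outcome, the other the treatment
    assignment. x is a flat feature vector standing in for fused imaging
    and tabular inputs; w_shared is a list of hidden-unit weight rows."""
    # shared hidden layer with ReLU activation
    h = [max(0.0, sum(wi * xi for wi, xi in zip(row, x))) for row in w_shared]
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    outcome = sigmoid(sum(w * hi for w, hi in zip(w_outcome, h)))
    treat = sigmoid(sum(w * hi for w, hi in zip(w_treat, h)))
    return outcome, treat
```

In the paper's setting the two heads are trained jointly, and the shared features are then used to balance the feature distributions of the treatment groups; this sketch only shows the shared-representation wiring.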
Subjects
Subarachnoid Hemorrhage , Humans , Subarachnoid Hemorrhage/diagnostic imaging , Tomography, X-Ray Computed/methods , Databases, Factual , Algorithms , Decision Support Systems, Clinical
ABSTRACT
Importance: Intracerebral hemorrhage (ICH) associated with direct oral anticoagulant (DOAC) use carries extremely high morbidity and mortality. The clinical effectiveness of hemostatic therapy is unclear. Objective: To compare the clinical and radiological outcomes of DOAC-associated ICH treated with prothrombin complex concentrate (PCC) vs conservative management. Design, Setting, and Participants: In this population-based, propensity score-weighted retrospective cohort study, patients who developed DOAC-associated ICH from January 1, 2016, to December 31, 2021, in Hong Kong were identified. The outcomes of patients who received 25 to 50 IU/kg PCC were compared with those of patients who received no hemostatic agents. Data were analyzed from May 1, 2022, to June 30, 2023. Main Outcomes and Measures: The primary outcome was a modified Rankin scale score of 0 to 3 or return to baseline functional status at 3 months. Secondary outcomes were mortality at 90 days, in-hospital mortality, and hematoma expansion. Weighted logistic regression was performed to evaluate the association of PCC with study outcomes. In unweighted logistic regression models, factors associated with good neurological outcome and hematoma expansion in DOAC-associated ICH were identified. Results: A total of 232 patients with DOAC-associated ICH, with a mean (SD) age of 77.2 (9.3) years and 101 (44%) female patients, were included. Among these, 116 (50%) received conservative treatment and 102 (44%) received PCC. Overall, 74 patients (31%) had good neurological recovery and 92 (39%) died within 90 days. Median (IQR) baseline hematoma volume was 21.7 (3.6-66.1) mL.
Compared with conservative management, PCC was not associated with improved neurological recovery (adjusted odds ratio [aOR], 0.62; 95% CI, 0.33-1.16; P = .14), mortality at 90 days (aOR, 1.03; 95% CI, 0.70-1.53; P = .88), in-hospital mortality (aOR, 1.11; 95% CI, 0.69-1.79; P = .66), or reduced hematoma expansion (aOR, 0.94; 95% CI, 0.38-2.31; P = .90). Higher baseline hematoma volume, lower Glasgow Coma Scale score, and intraventricular hemorrhage were associated with lower odds of good neurological outcome but not with hematoma expansion. Conclusions and Relevance: In this cohort study, Chinese patients with DOAC-associated ICH had large baseline hematoma volumes and high rates of mortality and functional disability. PCC treatment was not associated with improved functional outcome, hematoma expansion, or mortality. Further studies on novel hemostatic agents as well as neurosurgical and adjunctive medical therapies are needed to identify the best management algorithm for DOAC-associated ICH.
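The propensity score weighting used in the study design reweights treated and control patients by their estimated probability of receiving treatment, so that the two groups become comparable on measured covariates before the weighted logistic regression is fitted. The sketch below shows stabilised inverse-probability-of-treatment weights under the assumption that propensity scores have already been estimated; it illustrates the weighting step only, not the study's exact model:

```python
def ipw_weights(treated, propensity):
    """Stabilised inverse-probability-of-treatment weights: a treated patient
    gets p_t / e(x) and a control gets (1 - p_t) / (1 - e(x)), where e(x) is
    the patient's estimated propensity score and p_t the marginal treatment
    rate. Stabilisation by p_t keeps the weights near 1 on average."""
    p_t = sum(treated) / len(treated)  # marginal probability of treatment
    return [p_t / e if t else (1 - p_t) / (1 - e)
            for t, e in zip(treated, propensity)]

# when every propensity equals the marginal rate, all weights are 1
assert ipw_weights([1, 0], [0.5, 0.5]) == [1.0, 1.0]
```

These weights would then be passed to a weighted logistic regression of the outcome on treatment to obtain adjusted odds ratios of the kind reported above.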