Results 1 - 6 of 6
1.
IEEE Trans Med Imaging ; PP, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38781068

ABSTRACT

Multiple Instance Learning (MIL) has demonstrated promise in Whole Slide Image (WSI) classification. However, a major challenge persists due to the high computational cost associated with processing these gigapixel images. Existing methods generally adopt a two-stage approach, comprising a non-learnable feature embedding stage and a classifier training stage. Though it can greatly reduce memory consumption by using a fixed feature embedder pre-trained on other domains, such a scheme also results in a disparity between the two stages, leading to suboptimal classification accuracy. To address this issue, we propose that a bag-level classifier can be a good instance-level teacher. Based on this idea, we design Iteratively Coupled Multiple Instance Learning (ICMIL) to couple the embedder and the bag classifier at a low cost. ICMIL initially fixes the patch embedder to train the bag classifier, followed by fixing the bag classifier to fine-tune the patch embedder. The refined embedder can then generate better representations in return, leading to a more accurate classifier for the next iteration. To realize more flexible and more effective embedder fine-tuning, we also introduce a teacher-student framework that efficiently distills the category knowledge in the bag classifier to guide instance-level embedder fine-tuning. Extensive experiments were conducted on four distinct datasets to validate the effectiveness of ICMIL. The experimental results consistently demonstrated that our method significantly improves the performance of existing MIL backbones, achieving state-of-the-art results. The code and the organized datasets can be accessed at: https://github.com/Dootmaan/ICMIL/tree/confidence-based.
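The alternating scheme described in this abstract can be sketched in miniature. In the toy code below, the "embedder" is a single scalar weight and the "bag classifier" a threshold on mean bag features; these stand-ins, and the names `train_classifier` and `fine_tune_embedder`, are illustrative assumptions, not the paper's networks or API.

```python
# Toy stand-ins: the "embedder" is a single scalar weight w applied to
# instance values; the "bag classifier" is a threshold on the mean
# embedded value of a bag. Both are drastically simplified for clarity.

def bag_feature(w, bag):
    # mean of embedded instances = bag-level representation
    return sum(w * x for x in bag) / len(bag)

def train_classifier(w, bags, labels):
    # Stage 1: embedder fixed; "train" the classifier by placing the
    # threshold midway between the class means of the bag features.
    pos = [bag_feature(w, b) for b, y in zip(bags, labels) if y == 1]
    neg = [bag_feature(w, b) for b, y in zip(bags, labels) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def fine_tune_embedder(w, thresh, bags, labels, lr=0.05):
    # Stage 2: classifier fixed; nudge w so each bag's feature moves to
    # the correct side of the threshold (a crude margin update).
    for bag, y in zip(bags, labels):
        sign = 1 if y == 1 else -1
        if sign * (bag_feature(w, bag) - thresh) < 1.0:  # margin violated
            w += lr * sign * (sum(bag) / len(bag))
    return w

w = 0.1                  # "pre-trained" embedder from another domain
bags = [[3.0, 4.0], [5.0, 6.0], [-4.0, -3.0], [-6.0, -5.0]]
labels = [1, 1, 0, 0]

for _ in range(5):       # iterative coupling rounds
    thresh = train_classifier(w, bags, labels)       # classifier update
    w = fine_tune_embedder(w, thresh, bags, labels)  # embedder update

acc = sum((bag_feature(w, b) > thresh) == bool(y)
          for b, y in zip(bags, labels)) / len(bags)
print(acc)  # 1.0 on this toy data
```

After a few coupling rounds the toy embedder separates the bags perfectly; in the actual method this role is played by teacher-student distillation from the bag classifier into the patch embedder.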

2.
IEEE J Biomed Health Inform ; 28(8): 4737-4750, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38768004

ABSTRACT

Although contrast-enhanced computed tomography (CE-CT) images significantly improve the accuracy of diagnosing focal liver lesions (FLLs), the administration of contrast agents imposes a considerable physical burden on patients. The utilization of generative models to synthesize CE-CT images from non-contrasted CT images offers a promising solution. However, existing image synthesis models tend to overlook the importance of critical regions, inevitably reducing their effectiveness in downstream tasks. To overcome this challenge, we propose an innovative CE-CT image synthesis model called Segmentation Guided Crossing Dual Decoding Generative Adversarial Network (SGCDD-GAN). Specifically, the SGCDD-GAN involves a crossing dual decoding generator comprising an attention decoder and an improved transformation decoder. The attention decoder is designed to highlight critical regions within the abdominal cavity, while the improved transformation decoder is responsible for synthesizing CE-CT images. These two decoders are interconnected using a crossing technique to enhance each other's capabilities. Furthermore, we employ a multi-task learning strategy to guide the generator to focus more on the lesion area. To evaluate the performance of the proposed SGCDD-GAN, we test it on an in-house CE-CT dataset. In both CE-CT image synthesis tasks-namely, synthesizing arterial phase (ART) images and synthesizing portal venous phase (PV) images-the proposed SGCDD-GAN demonstrates superior performance metrics across the entire image and the liver region, including SSIM, PSNR, MSE, and PCC scores. Furthermore, CE-CT images synthesized by our SGCDD-GAN achieve remarkable accuracy rates of 82.68%, 94.11%, and 94.11% in a deep learning-based FLLs classification task, and were further validated in a pilot assessment conducted by two radiologists.
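The idea of steering a synthesis model toward critical regions is commonly implemented as a mask-weighted reconstruction loss, in which errors inside the segmented organ or lesion count more than background errors. The sketch below is a generic illustration of that idea, not the SGCDD-GAN's actual objective; `region_weight` is an assumed hyperparameter.

```python
# A generic mask-weighted L1 reconstruction loss: voxels inside the
# organ/lesion mask are weighted region_weight times more than the
# rest, so the generator is pushed to get critical regions right first.
# region_weight=5.0 is an assumed value, not taken from the paper.

def weighted_l1(pred, target, mask, region_weight=5.0):
    assert len(pred) == len(target) == len(mask)
    total, norm = 0.0, 0.0
    for p, t, m in zip(pred, target, mask):
        w = region_weight if m else 1.0
        total += w * abs(p - t)
        norm += w
    return total / norm

# the same absolute error is penalised more when it falls inside the mask
inside = weighted_l1([1.0, 0.0], [0.0, 0.0], [1, 0])   # error in masked voxel
outside = weighted_l1([1.0, 0.0], [0.0, 0.0], [0, 1])  # error in background
print(inside > outside)  # True
```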


Subject(s)
Contrast Media , Liver , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Liver/diagnostic imaging , Neural Networks, Computer , Algorithms , Liver Neoplasms/diagnostic imaging , Deep Learning , Image Processing, Computer-Assisted/methods
3.
Biomed Phys Eng Express ; 10(3)2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38457851

ABSTRACT

Contrast-enhanced computed tomography (CE-CT) images are vital for the clinical diagnosis of focal liver lesions (FLLs). However, the use of CE-CT images imposes a significant burden on patients due to the injection of contrast agents and extended scanning time. Deep learning-based image synthesis models offer a promising solution that synthesizes CE-CT images from non-contrasted CT (NC-CT) images. Unlike natural images, medical image synthesis requires a specific focus on certain organs or localized regions to ensure accurate diagnosis. Determining how to effectively emphasize target organs poses a challenging issue in medical image synthesis. To solve this challenge, we present a novel CE-CT image synthesis model called Organ-Aware Generative Adversarial Network (OA-GAN). The OA-GAN comprises an organ-aware (OA) network and a dual decoder-based generator. First, the OA network learns the most discriminative spatial features of the target organ (i.e., the liver) by utilizing the ground truth organ mask as localization cues. Subsequently, the NC-CT image and the captured features are fed into the dual decoder-based generator, which employs a local and a global decoder network to simultaneously synthesize the organ and the entire CE-CT image. Moreover, the semantic information extracted from the local decoder is transferred to the global decoder to facilitate better reconstruction of the organ in the entire CE-CT image. The qualitative and quantitative evaluation on a CE-CT dataset demonstrates that the OA-GAN outperforms state-of-the-art approaches for synthesizing two types of CE-CT images, namely arterial phase and portal venous phase. Additionally, subjective evaluations by expert radiologists and a deep learning-based FLLs classification task also affirm that CE-CT images synthesized by the OA-GAN exhibit a remarkable resemblance to real CE-CT images.


Subject(s)
Arteries , Liver , Humans , Liver/diagnostic imaging , Semantics , Tomography, X-Ray Computed
4.
Liver Int ; 44(6): 1351-1362, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38436551

ABSTRACT

BACKGROUND AND AIMS: Accurate preoperative prediction of microvascular invasion (MVI) and recurrence-free survival (RFS) is vital for personalised hepatocellular carcinoma (HCC) management. We developed a multitask deep learning model to predict MVI and RFS using preoperative MRI scans. METHODS: Utilising a retrospective dataset of 725 HCC patients from seven institutions, we developed and validated a multitask deep learning model focused on predicting MVI and RFS. The model employs a transformer architecture to extract critical features from preoperative MRI scans. It was trained on a set of 234 patients and internally validated on a set of 58 patients. External validation was performed using three independent sets (n = 212, 111, 110). RESULTS: The multitask deep learning model yielded high MVI prediction accuracy, with AUC values of 0.918 for the training set and 0.800 for the internal test set. In external test sets, AUC values were 0.837, 0.815 and 0.800. Radiologists' sensitivity and inter-rater agreement for MVI prediction improved significantly when integrated with the model. For RFS, the model achieved C-index values of 0.763 in the training set and ranged between 0.628 and 0.728 in external test sets. Notably, postoperative adjuvant transarterial chemoembolisation (PA-TACE) improved RFS only in patients predicted to have high MVI risk and low survival scores (p < .001). CONCLUSIONS: Our deep learning model allows accurate MVI and survival prediction in HCC patients. Prospective studies are warranted to assess the clinical utility of this model in guiding personalised treatment in conjunction with clinical criteria.
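The RFS results above are reported as C-index values. For reference, Harrell's concordance index can be computed as follows; this is a plain-Python sketch of the standard definition, not code from the study.

```python
# Harrell's concordance index, the RFS metric reported above, in plain
# Python. times: observed follow-up times; events: 1 if recurrence was
# observed, 0 if censored; risks: predicted risk scores (a higher score
# means earlier recurrence is expected).

def c_index(times, events, risks):
    concordant, permissible = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable only if the earlier time is an event
            if times[i] < times[j] and events[i] == 1:
                permissible += 1
                if risks[i] > risks[j]:
                    concordant += 1.0   # model ranks the pair correctly
                elif risks[i] == risks[j]:
                    concordant += 0.5   # ties count half
    return concordant / permissible

print(c_index([2, 5, 8], [1, 1, 0], [0.9, 0.4, 0.1]))  # 1.0: perfectly ordered
```

A C-index of 0.5 corresponds to random ranking, and 1.0 to perfect concordance between predicted risk and observed recurrence order.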


Subject(s)
Carcinoma, Hepatocellular , Deep Learning , Liver Neoplasms , Magnetic Resonance Imaging , Neoplasm Invasiveness , Humans , Carcinoma, Hepatocellular/diagnostic imaging , Carcinoma, Hepatocellular/pathology , Carcinoma, Hepatocellular/mortality , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/pathology , Liver Neoplasms/mortality , Magnetic Resonance Imaging/methods , Retrospective Studies , Female , Male , Middle Aged , Aged , Microvessels/diagnostic imaging , Microvessels/pathology , Disease-Free Survival , Neoplasm Recurrence, Local
5.
Stud Health Technol Inform ; 310: 901-905, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269939

ABSTRACT

Object detection using convolutional neural networks (CNNs) has achieved state-of-the-art performance on natural images. Compared to natural images, medical images present several challenges for lesion detection. First, the sizes of lesions vary tremendously, from several millimeters to several centimeters. Scale variations significantly affect lesion detection accuracy, especially for the detection of small lesions. Moreover, the effective extraction of temporal and spatial features from multi-phase CT images is also an important issue. In this paper, we propose a group-based deep layer aggregation method with multi-phase attention for liver lesion detection in multi-phase CT images. The method, which is called MSPA-DLA++, is a backbone feature extraction network for anchor-free liver lesion detection in multi-phase CT images that addresses scale variations and extracts hidden features from such images. The effectiveness of the proposed method is demonstrated on a public dataset (LiTS2017) and our private multi-phase dataset. The results of the experiments show that MSPA-DLA++ can improve upon the performance of state-of-the-art networks by approximately 3.7%.
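One common way to realise phase-wise attention is to weight per-phase features by softmax-normalised relevance scores, so that more informative contrast phases contribute more to the fused representation. The sketch below illustrates only that general idea; the details of MSPA-DLA++'s attention differ, and `fuse_phases` with its inputs is hypothetical.

```python
import math

# Hypothetical sketch of phase-wise attention: features from each CT
# phase are fused with softmax-normalised relevance scores, so more
# informative phases contribute more. All values are illustrative.

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_phases(phase_feats, phase_scores):
    # phase_feats: one feature vector per phase (equal lengths)
    # phase_scores: one relevance score per phase (e.g. from a small MLP)
    w = softmax(phase_scores)
    dim = len(phase_feats[0])
    return [sum(w[p] * phase_feats[p][d] for p in range(len(w)))
            for d in range(dim)]

# three phases, two feature dimensions; the first phase is scored highest
fused = fuse_phases([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]], [2.0, 0.0, 0.0])
```

Because the first phase receives the largest attention weight, its feature pattern dominates the fused vector.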


Subject(s)
Liver Neoplasms , Neural Networks, Computer , Humans , Tomography, X-Ray Computed
6.
Stud Health Technol Inform ; 310: 936-940, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269946

ABSTRACT

Microvascular invasion (MVI) of hepatocellular carcinoma (HCC) is an important factor affecting postoperative recurrence and patient prognosis. Preoperative diagnosis of MVI is therefore of great significance for improving the prognosis of HCC. Currently, the diagnosis of MVI is mainly based on histopathological examination after surgery, which cannot meet the need for preoperative diagnosis. Moreover, the sensitivity, specificity and accuracy of MVI diagnosis based on a single imaging feature are low. In this paper, a robust, high-precision cross-modality unified framework for clinical diagnosis, CMIR, is proposed for the prediction of microvascular invasion of hepatocellular carcinoma. It can effectively extract, fuse and localize features from multi-phase MR images and clinical data, enrich the semantic context, and comprehensively improve prediction metrics across different hospitals. The state-of-the-art performance of the approach was validated on a dataset of HCC patients with confirmed pathological types. Moreover, CMIR provides a possible solution for related multimodality tasks in the medical field.
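A minimal way to picture cross-modality fusion is late fusion by concatenation: per-phase MR feature vectors and normalised clinical variables are joined into a single representation before classification. This sketch is illustrative only and does not reproduce CMIR's architecture; the feature values, variable ranges, and function names are all assumptions.

```python
# Late fusion by concatenation: per-phase MR feature vectors plus
# min-max normalised clinical variables become one joint representation.
# Feature values, variable ranges, and names are all illustrative.

def normalize(values, lo, hi):
    # simple min-max scaling of clinical variables to [0, 1]
    return [(v - lo) / (hi - lo) for v in values]

def fuse(mr_phase_feats, clinical):
    fused = []
    for feats in mr_phase_feats:     # e.g. arterial, portal venous phases
        fused.extend(feats)
    fused.extend(clinical)
    return fused

# two MR phases with two features each, plus two clinical variables
vec = fuse([[0.2, 0.8], [0.5, 0.1]], normalize([62.0, 3.4], 0.0, 100.0))
print(len(vec))  # 6
```

In practice, richer fusion (e.g. cross-attention between modalities) usually outperforms plain concatenation, which is why unified frameworks like the one described are of interest.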


Subject(s)
Carcinoma, Hepatocellular , Liver Neoplasms , Humans , Carcinoma, Hepatocellular/diagnostic imaging , Carcinoma, Hepatocellular/surgery , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/surgery , Hospitals , Postoperative Period , Semantics