Results 1 - 20 of 46
1.
IEEE Trans Med Imaging ; PP, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38781068

ABSTRACT

Multiple Instance Learning (MIL) has demonstrated promise in Whole Slide Image (WSI) classification. However, a major challenge persists due to the high computational cost of processing these gigapixel images. Existing methods generally adopt a two-stage approach comprising a non-learnable feature embedding stage and a classifier training stage. Although using a fixed feature embedder pre-trained on other domains greatly reduces memory consumption, this scheme also creates a disparity between the two stages, leading to suboptimal classification accuracy. To address this issue, we propose that a bag-level classifier can be a good instance-level teacher. Based on this idea, we design Iteratively Coupled Multiple Instance Learning (ICMIL) to couple the embedder and the bag classifier at a low cost. ICMIL initially fixes the patch embedder to train the bag classifier, then fixes the bag classifier to fine-tune the patch embedder. The refined embedder in turn generates better representations, leading to a more accurate classifier in the next iteration. To enable more flexible and effective embedder fine-tuning, we also introduce a teacher-student framework that efficiently distills the category knowledge in the bag classifier to guide instance-level embedder fine-tuning. Extensive experiments were conducted on four distinct datasets to validate the effectiveness of ICMIL. The results consistently demonstrate that our method significantly improves the performance of existing MIL backbones, achieving state-of-the-art results. The code and the organized datasets are available at: https://github.com/Dootmaan/ICMIL/tree/confidence-based.
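The alternating scheme described above (freeze the embedder to train the bag classifier, then freeze the classifier to refine the embedder) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the linear embedder, mean-pooled bag classifier, toy bags, and all hyperparameters are hypothetical stand-ins for the deep networks used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy bags: each bag is (n_instances, n_features); a positive bag
# contains a few instances drawn from a shifted distribution.
def make_bag(positive):
    X = rng.normal(0, 1, (8, 4))
    if positive:
        X[:2] += 2.0
    return X

bags = [make_bag(i % 2 == 0) for i in range(20)]
labels = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(20)])

W_embed = rng.normal(0, 0.1, (4, 4))   # instance-level "embedder"
w_clf = rng.normal(0, 0.1, 4)          # bag-level classifier on mean-pooled embeddings

lr = 0.1
for outer in range(3):                 # ICMIL-style coupling iterations
    # Stage 1: freeze the embedder, train the bag classifier.
    for _ in range(100):
        grad = np.zeros_like(w_clf)
        for X, y in zip(bags, labels):
            h = (X @ W_embed).mean(axis=0)        # bag embedding
            p = sigmoid(h @ w_clf)
            grad += (p - y) * h
        w_clf -= lr * grad / len(bags)
    # Stage 2: freeze the classifier, fine-tune the embedder
    # (the bag classifier acts as the instance-level "teacher").
    for _ in range(100):
        gW = np.zeros_like(W_embed)
        for X, y in zip(bags, labels):
            h = (X @ W_embed).mean(axis=0)
            p = sigmoid(h @ w_clf)
            gW += np.outer(X.mean(axis=0), (p - y) * w_clf)
        W_embed -= lr * gW / len(bags)

# After coupling, bag predictions should separate positive and negative bags.
preds = [sigmoid((X @ W_embed).mean(axis=0) @ w_clf) for X in bags]
```

In the paper this alternation is applied to deep networks on WSI patches, with a teacher-student distillation step in place of the naive gradient pass-through used here.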

2.
Article in English | MEDLINE | ID: mdl-38768004

ABSTRACT

Although contrast-enhanced computed tomography (CE-CT) images significantly improve the accuracy of diagnosing focal liver lesions (FLLs), the administration of contrast agents imposes a considerable physical burden on patients. Using generative models to synthesize CE-CT images from non-contrast CT images offers a promising solution. However, existing image synthesis models tend to overlook critical regions, inevitably reducing their effectiveness in downstream tasks. To overcome this challenge, we propose an innovative CE-CT image synthesis model, the Segmentation Guided Crossing Dual Decoding Generative Adversarial Network (SGCDD-GAN). Specifically, the SGCDD-GAN uses a crossing dual-decoding generator comprising an attention decoder and an improved transformation decoder. The attention decoder highlights critical regions within the abdominal cavity, while the improved transformation decoder synthesizes the CE-CT images. The two decoders are interconnected with a crossing technique so that each enhances the other's capabilities. Furthermore, we employ a multi-task learning strategy to guide the generator to focus more on the lesion area. We evaluate the proposed SGCDD-GAN on an in-house CE-CT dataset. In both synthesis tasks, namely synthesizing arterial (ART) phase and portal venous (PV) phase images, the SGCDD-GAN achieves superior SSIM, PSNR, MSE, and PCC scores over both the entire image and the liver region. Furthermore, CE-CT images synthesized by the SGCDD-GAN achieve remarkable accuracy rates of 82.68%, 94.11%, and 94.11% in a deep learning-based FLLs classification task, along with a pilot assessment conducted by two radiologists.

3.
Liver Int ; 44(6): 1351-1362, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38436551

ABSTRACT

BACKGROUND AND AIMS: Accurate preoperative prediction of microvascular invasion (MVI) and recurrence-free survival (RFS) is vital for personalised hepatocellular carcinoma (HCC) management. We developed a multitask deep learning model to predict MVI and RFS using preoperative MRI scans. METHODS: Utilising a retrospective dataset of 725 HCC patients from seven institutions, we developed and validated a multitask deep learning model focused on predicting MVI and RFS. The model employs a transformer architecture to extract critical features from preoperative MRI scans. It was trained on a set of 234 patients and internally validated on a set of 58 patients. External validation was performed using three independent sets (n = 212, 111, 110). RESULTS: The multitask deep learning model yielded high MVI prediction accuracy, with AUC values of 0.918 for the training set and 0.800 for the internal test set. In external test sets, AUC values were 0.837, 0.815 and 0.800. Radiologists' sensitivity and inter-rater agreement for MVI prediction improved significantly when integrated with the model. For RFS, the model achieved C-index values of 0.763 in the training set and ranged between 0.628 and 0.728 in external test sets. Notably, PA-TACE improved RFS only in patients predicted to have high MVI risk and low survival scores (p < .001). CONCLUSIONS: Our deep learning model allows accurate MVI and survival prediction in HCC patients. Prospective studies are warranted to assess the clinical utility of this model in guiding personalised treatment in conjunction with clinical criteria.
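The RFS results above are reported as C-index values. As a reference for how such values are computed, here is a minimal NumPy sketch of Harrell's concordance index; the toy survival data are hypothetical, and this is an illustration rather than the study's evaluation code.

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: among comparable pairs, the fraction where the
    patient with the earlier observed event also has the higher predicted risk.
    Ties in risk count as half-concordant."""
    n_concordant, n_comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if patient i had an event before time j.
            if events[i] == 1 and times[i] < times[j]:
                n_comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    n_concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    n_concordant += 0.5
    return n_concordant / n_comparable

times = np.array([5.0, 10.0, 3.0, 8.0])   # follow-up times (e.g. months)
events = np.array([1, 0, 1, 1])           # 1 = recurrence observed, 0 = censored
risk = np.array([0.9, 0.1, 0.8, 0.4])     # model-predicted risk scores
cindex = concordance_index(times, events, risk)
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is how values such as 0.763 in the abstract are read.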


Subject(s)
Carcinoma, Hepatocellular ; Deep Learning ; Liver Neoplasms ; Magnetic Resonance Imaging ; Neoplasm Invasiveness ; Humans ; Carcinoma, Hepatocellular/diagnostic imaging ; Carcinoma, Hepatocellular/pathology ; Carcinoma, Hepatocellular/mortality ; Liver Neoplasms/diagnostic imaging ; Liver Neoplasms/pathology ; Liver Neoplasms/mortality ; Magnetic Resonance Imaging/methods ; Retrospective Studies ; Female ; Male ; Middle Aged ; Aged ; Microvessels/diagnostic imaging ; Microvessels/pathology ; Disease-Free Survival ; Neoplasm Recurrence, Local
4.
Biomed Phys Eng Express ; 10(3), 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38457851

ABSTRACT

Contrast-enhanced computed tomography (CE-CT) images are vital for the clinical diagnosis of focal liver lesions (FLLs). However, acquiring CE-CT images imposes a significant burden on patients due to contrast agent injection and prolonged scanning. Deep learning-based image synthesis models offer a promising solution by synthesizing CE-CT images from non-contrast CT (NC-CT) images. Unlike natural images, medical image synthesis requires a specific focus on certain organs or localized regions to ensure accurate diagnosis, and determining how to effectively emphasize target organs is a challenging issue. To solve this challenge, we present a novel CE-CT image synthesis model called the Organ-Aware Generative Adversarial Network (OA-GAN). The OA-GAN comprises an organ-aware (OA) network and a dual decoder-based generator. First, the OA network learns the most discriminative spatial features of the target organ (i.e., the liver) by using the ground-truth organ mask as a localization cue. Subsequently, the NC-CT image and the captured features are fed into the dual decoder-based generator, which employs a local decoder and a global decoder to simultaneously synthesize the organ and the entire CE-CT image. Moreover, semantic information extracted from the local decoder is transferred to the global decoder to facilitate better reconstruction of the organ in the entire CE-CT image. Qualitative and quantitative evaluation on a CE-CT dataset demonstrates that the OA-GAN outperforms state-of-the-art approaches for synthesizing two types of CE-CT images, i.e., arterial phase and portal venous phase. Additionally, subjective evaluations by expert radiologists and a deep learning-based FLLs classification affirm that CE-CT images synthesized by the OA-GAN bear a remarkable resemblance to real CE-CT images.


Asunto(s)
Arterias , Hígado , Humanos , Hígado/diagnóstico por imagen , Semántica , Tomografía Computarizada por Rayos X
5.
Stud Health Technol Inform ; 310: 901-905, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269939

ABSTRACT

Object detection using convolutional neural networks (CNNs) has achieved state-of-the-art performance on natural images. Compared to natural images, medical images present several challenges for lesion detection. First, lesion sizes vary tremendously, from several millimeters to several centimeters; such scale variations significantly affect detection accuracy, especially for small lesions. Moreover, effectively extracting temporal and spatial features from multi-phase CT images is also an important issue. In this paper, we propose a group-based deep layer aggregation method with multi-phase attention for liver lesion detection in multi-phase CT images. The method, called MSPA-DLA++, is a backbone feature extraction network for anchor-free liver lesion detection that addresses scale variations and extracts hidden features from multi-phase CT images. The effectiveness of the proposed method is demonstrated on a public dataset (LiTS2017) and our private multi-phase dataset. The experimental results show that MSPA-DLA++ improves upon the performance of state-of-the-art networks by approximately 3.7%.


Subject(s)
Liver Neoplasms ; Neural Networks, Computer ; Humans ; Tomography, X-Ray Computed
6.
Stud Health Technol Inform ; 310: 936-940, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269946

ABSTRACT

Microvascular invasion (MVI) of hepatocellular carcinoma (HCC) is an important factor affecting postoperative recurrence and patient prognosis, so preoperative diagnosis of MVI is of great significance for improving the prognosis of HCC. Currently, MVI is mainly diagnosed by histopathological examination after surgery, which cannot meet the requirement of preoperative diagnosis; moreover, the sensitivity, specificity and accuracy of MVI diagnosis based on a single imaging feature are low. In this paper, we propose a robust, high-precision cross-modality unified framework (CMIR) for the clinical prediction of microvascular invasion of hepatocellular carcinoma. It effectively extracts, fuses and localizes features from multi-phase MR images and clinical data, enriches the semantic context, and comprehensively improves the prediction metrics across different hospitals. The state-of-the-art performance of the approach was validated on a dataset of HCC patients with pathologically confirmed MVI status. Moreover, CMIR provides a possible solution for related multi-modality tasks in the medical field.


Subject(s)
Carcinoma, Hepatocellular ; Liver Neoplasms ; Humans ; Carcinoma, Hepatocellular/diagnostic imaging ; Carcinoma, Hepatocellular/surgery ; Liver Neoplasms/diagnostic imaging ; Liver Neoplasms/surgery ; Hospitals ; Postoperative Period ; Semantics
7.
Article in English | MEDLINE | ID: mdl-38082813

ABSTRACT

MRI is crucial for the diagnosis of HCC patients, and when combined with CT images for MVI prediction, richer complementary information can be learned. Many studies have shown that vascular invasion accompanying hepatocellular carcinoma can be evidenced by imaging examinations such as CT or MR, so the two modalities can be used jointly to improve the accuracy of MVI prediction. However, current clinical acquisition is high-risk, time-consuming and expensive due to the injection of gadolinium-based contrast agent (CA). If MRI could be synthesized without CA injection, it would undoubtedly greatly optimize diagnosis. On this basis, this paper proposes a high-quality image synthesis network, MVI-Wise GAN, that can be used to improve the prediction of microvascular invasion in HCC. Starting from the underlying imaging perspective, it introduces K-space and feature-level constraints and combines three related networks (an attention-aware generator, a convolutional neural network-based discriminator and a region-based convolutional neural network detector) to achieve precise tumor region detection from the synthesized tumor-specific MRI. Accurate MRI synthesis is achieved through backpropagation, the feature representation and context learning of HCC MVI are enhanced, and loss convergence is improved through residual learning. The model was tested on a dataset of 256 subjects from Run Run Shaw Hospital of Zhejiang University. Experimental results and quantitative evaluation show that MVI-Wise GAN achieves high-quality MRI synthesis with a tumor detection accuracy of 92.3%, which is helpful for the clinical diagnosis of liver tumor MVI.


Subject(s)
Carcinoma, Hepatocellular ; Liver Neoplasms ; Humans ; Carcinoma, Hepatocellular/diagnostic imaging ; Liver Neoplasms/diagnostic imaging ; Neoplasm Invasiveness ; Magnetic Resonance Imaging/methods ; Contrast Media/pharmacology ; Radiopharmaceuticals
8.
Article in English | MEDLINE | ID: mdl-38082913

ABSTRACT

Computer-aided diagnostic methods, such as automatic and precise liver tumor detection, have a significant impact on healthcare. In recent years, deep learning-based liver tumor detection methods for multi-phase computed tomography (CT) images have achieved noticeable performance. Deep learning frameworks require a substantial amount of annotated training data, but obtaining enough training data with high-quality annotations is a major issue in medical imaging. Additionally, deep learning frameworks suffer from domain shift when they are trained on one dataset (source domain) and applied to new test data (target domain). To address the lack of training data and the domain shift in multi-phase CT images, we present an adversarial learning-based strategy to mitigate the domain gap across different phases of multi-phase CT scans. We introduce the use of the Fourier phase component of CT images to improve the semantic information and to more reliably identify the tumor tissues. Our approach eliminates the need for separate annotations for each phase of the CT scans. The experimental results show that our proposed method performs noticeably better than conventional training and other methods.
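The abstract builds on the observation that the Fourier phase component of an image carries its structural (semantic) content while the amplitude carries intensity style. A small NumPy sketch of that decomposition; the 64x64 random array is a hypothetical stand-in for a CT slice, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
ct_slice = rng.random((64, 64))          # stand-in for one CT slice

spectrum = np.fft.fft2(ct_slice)
amplitude = np.abs(spectrum)             # intensity/style information
phase = np.angle(spectrum)               # structural (semantic) information

# Phase-only reconstruction: unit amplitude, original phase. Object layout
# and edges survive while the intensity style is discarded.
phase_only = np.real(np.fft.ifft2(np.exp(1j * phase)))

# Sanity check: amplitude * e^{i*phase} recovers the original image.
recon = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
```

Because the phase component is comparatively stable across contrast phases, it is a natural signal to exploit when bridging the per-phase domain gap.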


Subject(s)
Image Processing, Computer-Assisted ; Liver Neoplasms ; Humans ; Image Processing, Computer-Assisted/methods ; Tomography, X-Ray Computed/methods ; Liver Neoplasms/diagnostic imaging
9.
Article in English | MEDLINE | ID: mdl-38083232

ABSTRACT

As one of the most common malignant tumors worldwide, hepatocellular carcinoma (HCC) has high death and recurrence rates, and microvascular invasion (MVI) is considered an independent risk factor for its early recurrence and poor survival. Accurate preoperative prediction of MVI is therefore of great significance for formulating individualized treatment plans and assessing long-term prognosis for HCC patients. However, as the mechanism of MVI is still unclear, existing studies train deep learning methods directly on CT or MR images, with limited predictive performance and a lack of explanation. We map the pathological "7-point" baseline sampling method used to confirm MVI diagnosis onto MR images, propose a vision-guided attention-enhanced network to improve MVI prediction performance, and validate the reliability of the prediction results on the corresponding pathological images. Specifically, we design a learnable online class activation map (CAM) to guide the network to focus on high-incidence regions of MVI under the guidance of an extended tumor mask. Further, an attention-enhanced module is proposed to force the network to learn image regions that can explain the MVI results. The generated attention maps capture long-distance dependencies and can be used as spatial priors for MVI to promote the learning of the vision-guided module. Experimental results on a constructed multi-center dataset show that the proposed algorithm achieves state-of-the-art performance compared to other models.
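A class activation map of the kind used above is, in its basic form, a classifier-weighted sum of the last convolutional feature maps. A minimal NumPy sketch with hypothetical shapes; the paper's learnable online CAM and tumor-mask guidance are more involved than this.

```python
import numpy as np

rng = np.random.default_rng(0)

features = rng.random((16, 8, 8))   # (channels, H, W) from the last conv layer
w_class = rng.random(16)            # classifier weights for the target class

# CAM: weight each channel by its class weight and sum over channels.
cam = np.tensordot(w_class, features, axes=1)       # (H, W)
cam = (cam - cam.min()) / (cam.max() - cam.min())   # normalise to [0, 1]

# The normalised map can then be thresholded into a spatial prior / pseudo-mask.
prior_mask = cam > 0.5
```

In the paper's setting, such a map is computed online during training and supervised toward the extended tumor mask, rather than extracted post hoc as sketched here.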


Subject(s)
Carcinoma, Hepatocellular ; Liver Neoplasms ; Humans ; Carcinoma, Hepatocellular/diagnosis ; Liver Neoplasms/diagnosis ; Reproducibility of Results ; Retrospective Studies ; Neoplasm Invasiveness/pathology
10.
Article in English | MEDLINE | ID: mdl-38083328

ABSTRACT

A high early recurrence (ER) rate is the main factor leading to poor outcomes in patients with hepatocellular carcinoma (HCC); accurate preoperative prediction of ER is thus highly desired for HCC treatment. Many radiomics solutions based on machine learning and deep learning have been proposed for preoperative prediction of HCC ER from CT images. Nevertheless, most current radiomics approaches extract features only from segmented tumor regions, neglecting liver tissue information that is useful for HCC prognosis. In this work, we propose a deep prediction network based on CT images of the full liver combined with a tumor mask, which provides tumor location information for better feature extraction, to predict the ER of HCC. However, due to the complex imaging characteristics of HCC, image-based ER prediction methods have limited capability. Therefore, on the one hand, we propose to jointly train the deep prediction model with a supervised contrastive loss and a cross-entropy loss to alleviate the intra-class variation and inter-class similarity of HCC. On the other hand, we incorporate clinical data to further improve the predictive ability of the model. Extensive experiments verify the effectiveness of our proposed deep prediction model and the contribution of liver tissue to HCC prognosis assessment.
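The joint objective above pairs cross-entropy with a supervised contrastive loss. Here is a minimal NumPy sketch of the supervised contrastive term following the general SupCon formulation; the batch of embeddings, labels, and temperature are hypothetical, and the paper's exact loss variant may differ.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss: for each anchor, pull same-label
    embeddings together and push different-label ones apart."""
    # L2-normalise so similarity is cosine similarity.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    loss, n_anchors = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue                      # anchors without positives are skipped
        others = [j for j in range(n) if j != i]
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        # Average negative log-likelihood over the anchor's positives.
        loss += -np.mean([sim[i, j] - log_denom for j in positives])
        n_anchors += 1
    return loss / n_anchors

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))            # hypothetical batch of embeddings
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
contrastive_term = supervised_contrastive_loss(emb, labels)
# In training this would be combined as: total = ce_loss + lam * contrastive_term
```

The term is always positive (the denominator includes every positive), and it shrinks as same-class embeddings cluster, which is the mechanism used to counter intra-class variation.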


Subject(s)
Carcinoma, Hepatocellular ; Liver Neoplasms ; Humans ; Carcinoma, Hepatocellular/diagnostic imaging ; Carcinoma, Hepatocellular/surgery ; Liver Neoplasms/diagnostic imaging ; Liver Neoplasms/surgery ; Tomography, X-Ray Computed/methods ; Machine Learning
11.
Article in English | MEDLINE | ID: mdl-38083412

ABSTRACT

Compared to non-contrast computed tomography (NC-CT) scans, contrast-enhanced (CE) CT scans provide more abundant information about focal liver lesions (FLLs) and play a crucial role in FLLs diagnosis. However, CE-CT scans require the patient to be injected with a contrast agent, which increases the physical and economic burden on the patient. In this paper, we propose a spatial attention-guided generative adversarial network (SAG-GAN) that can directly obtain the corresponding CE-CT images from a patient's NC-CT images. In the SAG-GAN, we devise a spatial attention-guided generator, which utilizes a lightweight spatial attention module to highlight synthesis task-related areas in the NC-CT image and neglect unrelated areas. To assess the performance of our approach, we test it on two tasks: synthesizing CE-CT images in the arterial phase and the portal venous phase. Both qualitative and quantitative results demonstrate that SAG-GAN is superior to existing GAN-based image synthesis methods.
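A lightweight spatial attention module of the kind mentioned above typically pools the feature map across channels, scores each spatial location, and reweights the features. Below is a CBAM-style NumPy sketch with hypothetical shapes and weights; it illustrates the mechanism, not the paper's exact module.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat, w):
    """Pool across channels, score each pixel, and reweight the feature map."""
    avg_pool = feat.mean(axis=0)                # (H, W) average over channels
    max_pool = feat.max(axis=0)                 # (H, W) max over channels
    score = w[0] * avg_pool + w[1] * max_pool   # a 1x1 "conv" over the 2 pooled maps
    attn = sigmoid(score)                       # attention map in (0, 1)
    return feat * attn[None, :, :], attn        # broadcast over channels

rng = np.random.default_rng(0)
nc_ct_feat = rng.normal(size=(8, 16, 16))       # hypothetical (C, H, W) encoder features
out, attn = spatial_attention(nc_ct_feat, w=np.array([0.5, 0.5]))
```

The attention map leaves feature shapes unchanged, so the module can be dropped between any two generator layers at negligible cost, which is what makes it "lightweight".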


Subject(s)
Image Processing, Computer-Assisted ; Tomography, X-Ray Computed ; Humans ; Image Processing, Computer-Assisted/methods ; Tomography, X-Ray Computed/methods
12.
Comput Biol Med ; 166: 107467, 2023 Sep 11.
Article in English | MEDLINE | ID: mdl-37725849

ABSTRACT

Multi-organ segmentation, which identifies and separates different organs in medical images, is a fundamental task in medical image analysis. Recently, the immense success of deep learning has motivated its wide adoption in multi-organ segmentation tasks. However, due to expensive labor costs and required expertise, the availability of multi-organ annotations is usually limited, which poses a challenge in obtaining sufficient training data for deep learning-based methods. In this paper, we address this issue by combining off-the-shelf single-organ segmentation models to develop a multi-organ segmentation model on the target dataset, removing the dependence on annotated multi-organ data. To this end, we propose a novel dual-stage method that consists of a Model Adaptation stage and a Model Ensemble stage. The first stage enhances the generalization of each off-the-shelf segmentation model on the target domain, while the second stage distills and integrates knowledge from the multiple adapted single-organ segmentation models. Extensive experiments on four abdominal datasets demonstrate that our proposed method can effectively leverage off-the-shelf single-organ segmentation models to obtain a tailored model for multi-organ segmentation with high accuracy.

13.
IEEE J Biomed Health Inform ; 27(10): 4854-4865, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37585323

ABSTRACT

High-resolution (HR) 3D medical image segmentation is vital for accurate diagnosis. However, in medical imaging it remains challenging to achieve high segmentation performance with cost-effective, feasible computational resources. Previous methods commonly use patch sampling to reduce the input size, but this inevitably harms the global context and decreases the model's performance. In recent years, a few patch-free strategies have been presented to deal with this issue, but they either have limited performance due to over-simplified model structures or follow a complicated training process. In this study, to effectively address these issues, we present Adaptive Decomposition (A-Decomp) and Shared-Weight Volumetric Transformer Blocks (SW-VTB). A-Decomp adaptively decomposes features and reduces their spatial size, which greatly lowers GPU memory consumption. SW-VTB captures long-range dependencies at low cost with its lightweight design and cross-scale weight-sharing mechanism; the cross-scale weight sharing enhances the network's ability to capture scale-invariant core semantic information while reducing the parameter count. Combining these two designs, we present a novel patch-free segmentation framework named VolumeFormer. Experimental results on two datasets show that VolumeFormer outperforms existing patch-based and patch-free methods with a comparatively fast inference speed and a relatively compact design.

14.
IEEE Trans Med Imaging ; 42(10): 3091-3103, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37171932

ABSTRACT

Multi-modal tumor segmentation exploits complementary information from different modalities to help recognize tumor regions. Existing multi-modal segmentation methods mainly have deficiencies in two aspects. First, the adopted fusion strategies are built upon well-aligned input images, making them vulnerable to spatial misalignment between modalities (caused by respiratory motion, different scanning parameters, registration errors, etc.). Second, their performance remains subject to segmentation uncertainty, which is particularly acute in tumor boundary regions. To tackle these issues, in this paper we propose a novel multi-modal tumor segmentation method with deformable feature fusion and uncertain region refinement. Concretely, we introduce a deformable aggregation module, which integrates feature alignment and feature aggregation in an ensemble, to reduce inter-modality misalignment and make full use of cross-modal information. Moreover, we devise an uncertain region inpainting module to refine uncertain pixels using neighboring discriminative features. Experiments on two clinical multi-modal tumor datasets demonstrate that our method achieves promising tumor segmentation results and outperforms state-of-the-art methods.


Subject(s)
Neoplasms ; Humans ; Uncertainty ; Neoplasms/diagnostic imaging ; Motion (Physics) ; Respiratory Rate
15.
J Neurosci Res ; 101(6): 930-951, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36720002

ABSTRACT

Interleukin-1β (IL-1β) has been described to exert important effects on synapses in the brain. Here, we explored whether synapses in the hippocampus would be adversely affected following intracerebral IL-1β injection and, if so, sought to clarify the underlying molecular mechanisms. Adult male Sprague-Dawley rats were divided into control, IL-1β, IL-1β + PD98059, and IL-1β + MG132 groups and then sacrificed for detection of synaptophysin (syn) protein levels, synaptosomal glutamate release, and synapse ultrastructure by western blotting, a glutamate assay kit, and electron microscopy, respectively. The rats were tested in the Morris water maze for learning and memory ability. Western blotting was used to determine whether IL-1β exerted its effect on syn and siah1 expression in primary neurons via the extracellular signal-regulated kinase (ERK) signaling pathway. Intrahippocampal injection of IL-1β in male rats sacrificed at day 8 resulted in a significant decrease in syn protein, damage to synapse structure, and abnormal release of the neurotransmitter glutamate. ERK-inhibitor and proteasome-inhibitor treatment reversed these IL-1β-induced changes both in vivo and in vitro. In primary cultured neurons incubated with IL-1β, synaptophysin expression was significantly downregulated, coupled with abnormal glutamate release. Furthermore, the use of PD98059 confirmed that the ERK signaling pathway is implicated in the synaptic disorders caused by IL-1β treatment. The present results suggest that exogenous IL-1β can suppress syn protein levels and glutamate release. A possible mechanism is that IL-1β induces syn degradation regulated by the E3 ligase siah1 via the ERK signaling pathway.


Subject(s)
Protein Kinases ; Signal Transduction ; Animals ; Male ; Rats ; Glutamates ; Interleukin-1beta/metabolism ; Protein Kinases/metabolism ; Rats, Sprague-Dawley ; Synaptophysin/metabolism
16.
Eur J Surg Oncol ; 49(1): 156-164, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36333180

ABSTRACT

BACKGROUND: Accurate preoperative identification of microvascular invasion (MVI) can ease personalized treatment adaptation and improve the poor prognosis of hepatocellular carcinoma (HCC). This study aimed to develop and validate a novel multimodal deep learning (DL) model for predicting MVI based on multi-parameter magnetic resonance imaging (MRI) and contrast-enhanced computed tomography (CT). METHODS: A total of 397 HCC patients underwent both CT and MRI examinations before surgery. We established radiological models (RCT, RMRI) using a support vector machine (SVM) and DL models (DLCT_ALL, DLMRI_ALL, DLCT + MRI) using ResNet18. The comprehensive model (CALL), involving multi-modality DL features and clinical and radiological features, was constructed using SVM. Model performance was quantified by the area under the receiver operating characteristic curve (AUC) and compared by the net reclassification index (NRI) and integrated discrimination improvement (IDI). RESULTS: The DLCT + MRI model exhibited superior predictive efficiency over the single-modality models, especially over the DLCT_ALL model (AUC: 0.819 vs. 0.742, NRI > 0, IDI > 0). The DLMRI_ALL model improved performance over the RMRI model (AUC: 0.794 vs. 0.766, NRI > 0, IDI < 0), but no such difference was found between the DLCT_ALL and RCT models (AUC: 0.742 vs. 0.710, NRI < 0, IDI < 0). Furthermore, both the DLCT + MRI and CALL models showed prognostic power in recurrence-free survival stratification (P < 0.001). CONCLUSION: The proposed DLCT + MRI model showed robust capability in predicting MVI and outcomes for HCC. Moreover, the identification ability of the multi-modality DL model was better than that of any single modality, especially for CT.


Subject(s)
Carcinoma, Hepatocellular ; Deep Learning ; Liver Neoplasms ; Humans ; Carcinoma, Hepatocellular/diagnostic imaging ; Carcinoma, Hepatocellular/surgery ; Carcinoma, Hepatocellular/pathology ; Liver Neoplasms/diagnostic imaging ; Liver Neoplasms/surgery ; Tomography, X-Ray Computed/methods ; Magnetic Resonance Imaging/methods ; Retrospective Studies
17.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1536-1539, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36085648

ABSTRACT

Automatic and efficient liver tumor detection in multi-phase CT images is essential for computer-aided diagnosis of liver tumors. Deep learning is now widely used in medical applications, and deep learning-based AI systems normally need a large quantity of training data; in the medical field, however, acquiring sufficient training data with high-quality annotations is a significant challenge. To address the lack of training data, domain adaptation-based methods have recently been developed to bridge the domain gap across datasets with different feature characteristics and data distributions. This paper presents a domain adaptation-based method for detecting liver tumors in multi-phase CT images, transferring knowledge learned from PV-phase images to ART- and NC-phase images. To minimize the domain gap, we employ an adversarial learning scheme with the maximum square loss on mid-level output feature maps using an anchorless detector. Experiments show that our proposed method performs much better on the various CT phases than normal training.
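The maximum square loss referenced above (Chen et al., ICCV 2019) replaces entropy minimization with the negative mean of squared class probabilities, giving a gentler gradient for already-confident predictions on the target domain. A NumPy sketch on hypothetical softmax maps:

```python
import numpy as np

def maximum_square_loss(prob_maps):
    """Maximum square loss: -1/2 * mean over pixels of the sum of squared
    class probabilities. Minimising it pushes target-domain predictions
    toward confident one-hot outputs."""
    return -0.5 * np.mean(np.sum(prob_maps ** 2, axis=0))

# Hypothetical 3-class softmax outputs on a 4x4 map.
uniform = np.full((3, 4, 4), 1.0 / 3.0)   # maximally uncertain predictions
confident = np.zeros((3, 4, 4))
confident[0] = 1.0                        # one-hot (fully confident) everywhere

loss_uniform = maximum_square_loss(uniform)
loss_confident = maximum_square_loss(confident)
```

Confident predictions give a lower (more negative) loss than uncertain ones, so gradient descent on target-domain outputs sharpens them without the vanishing-gradient issue of entropy on near-one-hot maps.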


Subject(s)
Acclimatization ; Liver Neoplasms ; Humans ; Liver Neoplasms/diagnostic imaging ; Radiopharmaceuticals ; Tomography, X-Ray Computed
18.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 2097-2100, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086312

ABSTRACT

Contrast-enhanced computed tomography (CE-CT) images are used extensively for the diagnosis of liver cancer in clinical practice. Compared with non-contrast CT (NC-CT) images (CT scans without contrast injection), CE-CT images are obtained after injecting a contrast agent, which increases the physical burden on patients. To address this limitation, we propose an improved conditional generative adversarial network (improved cGAN) to generate CE-CT images from non-contrast CT images. In the improved cGAN, we incorporate a pyramid pooling module and an elaborate feature fusion module into the generator to improve the encoder's ability to capture multi-scale semantic features and to prevent the dilution of information during decoding. We evaluate the proposed method on a contrast-enhanced CT dataset comprising three phases of CT images (i.e., non-contrast images and CE-CT images in the arterial and portal venous phases). Experimental results suggest that the proposed method is superior to existing GAN-based models in both quantitative and qualitative terms.
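A pyramid pooling module, as adopted in the generator above, aggregates context by average-pooling the feature map to several grid sizes and concatenating the upsampled results with the input. A PSPNet-style NumPy sketch with hypothetical shapes; the paper's module and its feature fusion are more elaborate.

```python
import numpy as np

def pyramid_pooling(feat, bin_sizes=(1, 2, 4)):
    """Average-pool the map into a few grid sizes, upsample back by
    repetition, and concatenate channel-wise with the input."""
    c, h, w = feat.shape
    outs = [feat]
    for b in bin_sizes:
        bh, bw = h // b, w // b
        # Block-average pooling to a (c, b, b) summary...
        pooled = feat.reshape(c, b, bh, b, bw).mean(axis=(2, 4))
        # ...then nearest-neighbour upsampling back to (c, h, w).
        up = np.repeat(np.repeat(pooled, bh, axis=1), bw, axis=2)
        outs.append(up)
    return np.concatenate(outs, axis=0)

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 16, 16))       # hypothetical encoder feature map
fused = pyramid_pooling(feat)             # (8 * 4, 16, 16): input + 3 pooled scales
```

The 1x1 bin contributes a global average (image-level context) while the finer bins preserve coarse spatial layout, which is what gives the encoder its multi-scale semantic view.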


Subject(s)
Arteries ; Tomography, X-Ray Computed ; Humans ; Tomography, X-Ray Computed/methods
19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1552-1555, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36083929

ABSTRACT

Multi-phase computed tomography (CT) images are widely used for the diagnosis of liver disease. Since each phase has different contrast enhancement (i.e., a different domain), multi-phase CT images must be annotated for all phases to perform liver or tumor segmentation, which is a time-consuming and labor-expensive task. In this paper, we propose a dual discriminator-based unsupervised domain adaptation (DD-UDA) method for liver segmentation on multi-phase CT images without annotations. Our framework consists of three modules: a task-specific generator and two discriminators. We perform domain adaptation at two levels, the feature level and the output level, to improve accuracy by reducing the difference in distributions between the source and target domains. Experimental results using public data (PV phase only) as the source domain and private multi-phase CT data as the target domain show the effectiveness of the proposed DD-UDA method: segmentation accuracy improved by 5%, 8%, and 6%, respectively, across the phases of CT images compared to training without UDA. Clinical Relevance: this study helps to efficiently and accurately segment the liver in multi-phase CT images, an important preprocessing step for diagnosis and surgical support.


Subject(s)
Image Processing, Computer-Assisted ; Neoplasms ; Humans ; Image Processing, Computer-Assisted/methods ; Liver/diagnostic imaging ; Tomography, X-Ray Computed/methods
20.
IEEE J Biomed Health Inform ; 26(8): 3988-3998, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35213319

ABSTRACT

Organ segmentation is one of the most important steps in various medical image analysis tasks. Recently, semi-supervised learning (SSL) has attracted much attention for its ability to reduce labeling cost. However, most existing SSL methods neglect the prior shape and position information specific to medical images, leading to unsatisfactory localization and non-smooth object boundaries. In this paper, we propose a novel atlas-based semi-supervised segmentation network with multi-task learning for medical organs, named MTL-ABS3Net, which incorporates anatomical priors and makes full use of unlabeled data in a self-training and multi-task learning manner. The MTL-ABS3Net consists of two components: an Atlas-Based Semi-Supervised Segmentation Network (ABS3Net) and a Reconstruction-Assisted Module (RAM). Specifically, the ABS3Net improves on existing SSL methods by utilizing an atlas prior to generate credible pseudo labels in a self-training manner, while the RAM further assists the segmentation network by capturing the anatomical structures from the original images in a multi-task learning manner. Better reconstruction quality is achieved by using an MS-SSIM loss function, which further improves segmentation accuracy. Experimental results on liver and spleen datasets demonstrate that the performance of our method is significantly improved compared to existing state-of-the-art methods.
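The RAM's reconstruction branch above is trained with an MS-SSIM loss. As a simplified illustration, here is a single-scale SSIM computed from global image statistics in NumPy; the real MS-SSIM applies this with local Gaussian windows over multiple scales, and the data here are hypothetical.

```python
import numpy as np

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    """Single-scale SSIM from global statistics (luminance, contrast,
    structure), with the standard stabilising constants for images in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((32, 32))                             # stand-in "original" image
noisy = np.clip(img + rng.normal(0, 0.2, img.shape), 0, 1)

# Reconstruction loss used alongside the segmentation loss: 1 - SSIM.
loss_identical = 1.0 - global_ssim(img, img)           # ~0: perfect reconstruction
loss_noisy = 1.0 - global_ssim(img, noisy)             # larger: degraded reconstruction
```

Unlike a plain L2 loss, the SSIM family rewards preserved local structure, which is why it helps the reconstruction branch capture anatomical shape rather than just pixel intensities.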


Subject(s)
Abdomen ; Supervised Machine Learning ; Humans ; Image Processing, Computer-Assisted/methods ; Spleen/diagnostic imaging