Results 1 - 20 of 46
1.
Liver Int ; 44(6): 1351-1362, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38436551

ABSTRACT

BACKGROUND AND AIMS: Accurate preoperative prediction of microvascular invasion (MVI) and recurrence-free survival (RFS) is vital for personalised hepatocellular carcinoma (HCC) management. We developed a multitask deep learning model to predict MVI and RFS using preoperative MRI scans. METHODS: Utilising a retrospective dataset of 725 HCC patients from seven institutions, we developed and validated a multitask deep learning model focused on predicting MVI and RFS. The model employs a transformer architecture to extract critical features from preoperative MRI scans. It was trained on a set of 234 patients and internally validated on a set of 58 patients. External validation was performed using three independent sets (n = 212, 111, 110). RESULTS: The multitask deep learning model yielded high MVI prediction accuracy, with AUC values of 0.918 for the training set and 0.800 for the internal test set. In external test sets, AUC values were 0.837, 0.815 and 0.800. Radiologists' sensitivity and inter-rater agreement for MVI prediction improved significantly when integrated with the model. For RFS, the model achieved C-index values of 0.763 in the training set and ranged between 0.628 and 0.728 in external test sets. Notably, PA-TACE improved RFS only in patients predicted to have high MVI risk and low survival scores (p < .001). CONCLUSIONS: Our deep learning model allows accurate MVI and survival prediction in HCC patients. Prospective studies are warranted to assess the clinical utility of this model in guiding personalised treatment in conjunction with clinical criteria.
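The survival metric reported above (the C-index) can be reproduced in a few lines of NumPy. This is a generic sketch of Harrell's concordance index, not the authors' implementation:

```python
import numpy as np

def harrell_c_index(times, events, risk_scores):
    """Harrell's concordance index: the fraction of comparable patient pairs
    whose predicted risk ordering agrees with the observed survival ordering."""
    concordant, comparable = 0.0, 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if patient i had an observed event
            # strictly before patient j's follow-up time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# toy example: higher risk score should mean earlier recurrence
times = np.array([5.0, 10.0, 15.0, 20.0])
events = np.array([1, 1, 0, 1])
risks = np.array([0.9, 0.7, 0.2, 0.1])
print(harrell_c_index(times, events, risks))  # perfectly concordant -> 1.0
```

A C-index of 0.5 corresponds to random ranking; the 0.763 (training) and 0.628-0.728 (external) values above sit between random and perfect concordance.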


Subjects
Hepatocellular Carcinoma , Deep Learning , Liver Neoplasms , Magnetic Resonance Imaging , Neoplasm Invasiveness , Humans , Hepatocellular Carcinoma/diagnostic imaging , Hepatocellular Carcinoma/pathology , Hepatocellular Carcinoma/mortality , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/pathology , Liver Neoplasms/mortality , Magnetic Resonance Imaging/methods , Retrospective Studies , Female , Male , Middle Aged , Aged , Microvessels/diagnostic imaging , Microvessels/pathology , Disease-Free Survival , Local Neoplasm Recurrence
2.
J Neurosci Res ; 101(6): 930-951, 2023 06.
Article in English | MEDLINE | ID: mdl-36720002

ABSTRACT

Interleukin-1ß (IL-1ß) has been described to exert important effects on synapses in the brain. Here, we explored whether synapses in the hippocampus would be adversely affected following intracerebral IL-1ß injection and, if so, sought to clarify the underlying molecular mechanisms. Adult male Sprague-Dawley rats were divided into control, IL-1ß, IL-1ß + PD98059, and IL-1ß + MG132 groups and then sacrificed for detection of synaptophysin (syn) protein level, synaptosomal glutamate release, and synapse ultrastructure by western blotting, a glutamate assay kit, and electron microscopy, respectively. These rats were tested in the Morris water maze for learning and memory ability. Western blotting was used to determine whether IL-1ß exerted its effects on syn and siah1 expression in primary neurons via the extracellular regulated protein kinase (ERK) signaling pathway. Intrahippocampal injection of IL-1ß in male rats sacrificed at day 8 resulted in a significant decrease in syn protein, damage to synapse structure, and abnormal release of the neurotransmitter glutamate. ERK inhibitor and proteasome inhibitor treatment reversed the above changes induced by IL-1ß both in vivo and in vitro. In primary cultured neurons incubated with IL-1ß, the expression level of synaptophysin was significantly downregulated, coupled with abnormal glutamate release. Furthermore, the use of PD98059 confirmed that the ERK signaling pathway is implicated in the synaptic disorders caused by IL-1ß treatment. The present results suggest that exogenous IL-1ß can suppress syn protein level and glutamate release. A possible mechanism is that IL-1ß induces syn degradation regulated by the E3 ligase siah1 via the ERK signaling pathway.


Subjects
Protein Kinases , Signal Transduction , Animals , Male , Rats , Glutamates , Interleukin-1beta/metabolism , Protein Kinases/metabolism , Sprague-Dawley Rats , Synaptophysin/metabolism
3.
BMC Bioinformatics ; 22(1): 91, 2021 Feb 26.
Article in English | MEDLINE | ID: mdl-33637042

ABSTRACT

BACKGROUND: To effectively detect and investigate various cell-related diseases, it is essential to understand cell behaviour. The ability to detect mitotic cells is a fundamental step in diagnosing cell-related diseases. Convolutional neural networks (CNNs) have been successfully applied to object detection tasks; however, when applied to mitotic cell detection, most existing methods generate high false-positive rates due to the complex characteristics that differentiate normal cells from mitotic cells. Cell size and orientation variations in each stage make detecting mitotic cells difficult in 2D approaches. Therefore, effective extraction of the spatial and temporal features from mitotic data is an important and challenging task. The computational time required for detection is another major concern for mitotic detection in 4D microscopic images. RESULTS: In this paper, we propose a backbone feature extraction network named full-scale connected recurrent deep layer aggregation (RDLA++) for anchor-free mitotic detection. We utilize a 2.5D method that includes 3D spatial information extracted from several 2D images of neighbouring slices that form a multi-stream input. CONCLUSIONS: Our proposed technique addresses the scale variation problem and can efficiently extract spatial and temporal features from 4D microscopic images, resulting in improved detection accuracy and reduced computation time compared with those of other state-of-the-art methods.
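The 2.5D multi-stream input described in the RESULTS can be illustrated as follows. This is a minimal sketch (the function name and boundary-clamping policy are assumptions, not from the paper) of stacking neighbouring slices as channels:

```python
import numpy as np

def make_2_5d_input(volume, center_z, n_neighbors=1):
    """Build a 2.5D input by stacking a center slice with its neighbouring
    slices along the channel axis, clamping indices at the volume boundary.
    `volume` has shape (depth, height, width)."""
    depth = volume.shape[0]
    zs = [min(max(center_z + dz, 0), depth - 1)
          for dz in range(-n_neighbors, n_neighbors + 1)]
    return np.stack([volume[z] for z in zs], axis=0)  # (channels, H, W)

vol = np.random.rand(30, 64, 64)
x = make_2_5d_input(vol, center_z=0, n_neighbors=2)
print(x.shape)  # (5, 64, 64); out-of-range neighbours clamped to slice 0
```

Each 2D network stream then receives one channel, so the model sees local 3D context at roughly the cost of 2D inference.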


Subjects
Microscopy , Neural Networks (Computer) , Cell Physiological Phenomena
4.
Sensors (Basel) ; 21(14)2021 Jul 12.
Article in English | MEDLINE | ID: mdl-34300504

ABSTRACT

Depression is a severe psychological condition that affects millions of people worldwide. As depression has received more attention in recent years, it has become imperative to develop automatic methods for detecting depression. Although numerous machine learning methods have been proposed for estimating the levels of depression via audio, visual, and audiovisual emotion sensing, several challenges still exist. For example, it is difficult to extract long-term temporal context information from long sequences of audio and visual data, and it is also difficult to select and fuse useful multi-modal information or features effectively. In addition, how to incorporate other information or tasks to enhance estimation accuracy remains a challenge. In this study, we propose a multi-modal adaptive fusion transformer network for estimating the levels of depression. Transformer-based models have achieved state-of-the-art performance in language understanding and sequence modeling; thus, the proposed transformer-based network is utilized to extract long-term temporal context information from uni-modal audio and visual data in our work. This is the first transformer-based approach for depression detection. We also propose an adaptive fusion method for adaptively fusing useful multi-modal features. Furthermore, inspired by current multi-task learning work, we incorporate an auxiliary task (depression classification) to enhance the main task of depression level regression (estimation). The effectiveness of the proposed method has been validated on a public dataset (the AVEC 2019 Detecting Depression with AI Sub-challenge) in terms of PHQ-8 scores. Experimental results indicate that the proposed method performs better than current state-of-the-art methods: it achieves a concordance correlation coefficient (CCC) of 0.733 on AVEC 2019, which is 6.2% higher than the CCC (0.696) of the previous state-of-the-art method.
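The CCC figures quoted above follow Lin's concordance correlation coefficient, the standard AVEC evaluation metric. A minimal NumPy sketch of the metric (not the authors' evaluation code):

```python
import numpy as np

def concordance_correlation_coefficient(y_true, y_pred):
    """Lin's concordance correlation coefficient: like Pearson correlation,
    but it also penalizes systematic shifts in mean and scale, which matters
    for absolute PHQ-8 score regression."""
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mean_t) * (y_pred - mean_p)).mean()
    return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

y = np.array([4.0, 8.0, 15.0, 16.0, 23.0])
print(concordance_correlation_coefficient(y, y))        # 1.0 for perfect agreement
print(concordance_correlation_coefficient(y, y + 5.0))  # < 1.0: a constant offset is penalized
```

Unlike Pearson's r, predictions that are perfectly correlated but biased (e.g. shifted by a constant) score below 1, which is why CCC is preferred for depression-severity estimation.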


Subjects
Depression , Machine Learning , Depression/diagnosis , Humans
5.
IEEE Trans Med Imaging ; PP, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38781068

ABSTRACT

Multiple Instance Learning (MIL) has demonstrated promise in Whole Slide Image (WSI) classification. However, a major challenge persists due to the high computational cost associated with processing these gigapixel images. Existing methods generally adopt a two-stage approach, comprising a non-learnable feature embedding stage and a classifier training stage. Though it can greatly reduce memory consumption by using a fixed feature embedder pre-trained on other domains, such a scheme also results in a disparity between the two stages, leading to suboptimal classification accuracy. To address this issue, we propose that a bag-level classifier can be a good instance-level teacher. Based on this idea, we design Iteratively Coupled Multiple Instance Learning (ICMIL) to couple the embedder and the bag classifier at a low cost. ICMIL initially fixes the patch embedder to train the bag classifier, followed by fixing the bag classifier to fine-tune the patch embedder. The refined embedder can then generate better representations in return, leading to a more accurate classifier for the next iteration. To realize more flexible and more effective embedder fine-tuning, we also introduce a teacher-student framework to efficiently distill the category knowledge in the bag classifier to help the instance-level embedder fine-tuning. Extensive experiments were conducted on four distinct datasets to validate the effectiveness of ICMIL. The experimental results consistently demonstrated that our method significantly improves the performance of existing MIL backbones, achieving state-of-the-art results. The code and the organized datasets can be accessed at: https://github.com/Dootmaan/ICMIL/tree/confidence-based.
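The iterative coupling described above alternates between freezing the patch embedder and freezing the bag classifier. A purely structural sketch of that loop, using hypothetical stand-in classes rather than the actual ICMIL models:

```python
# Minimal structural sketch of the ICMIL alternating schedule.
# `Embedder` and `BagClassifier` are illustrative stand-ins; `version`
# counters stand in for gradient updates.

class Embedder:
    def __init__(self):
        self.frozen = False
        self.version = 0
    def fine_tune(self):
        assert not self.frozen
        self.version += 1

class BagClassifier:
    def __init__(self):
        self.frozen = False
        self.version = 0
    def train(self):
        assert not self.frozen
        self.version += 1

def icmil_loop(embedder, classifier, n_iterations=3):
    for _ in range(n_iterations):
        # Stage 1: fix the patch embedder, train the bag-level classifier
        embedder.frozen, classifier.frozen = True, False
        classifier.train()
        # Stage 2: fix the bag classifier; it acts as an instance-level
        # teacher guiding fine-tuning of the patch embedder
        embedder.frozen, classifier.frozen = False, True
        embedder.fine_tune()
    return embedder, classifier

emb, clf = icmil_loop(Embedder(), BagClassifier())
print(emb.version, clf.version)  # 3 3
```

The point of the alternation is that only one component carries gradients at a time, so the memory cost per stage stays close to that of the original two-stage pipeline.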

6.
Article in English | MEDLINE | ID: mdl-38768004

ABSTRACT

Although contrast-enhanced computed tomography (CE-CT) images significantly improve the accuracy of diagnosing focal liver lesions (FLLs), the administration of contrast agents imposes a considerable physical burden on patients. The utilization of generative models to synthesize CE-CT images from non-contrasted CT images offers a promising solution. However, existing image synthesis models tend to overlook the importance of critical regions, inevitably reducing their effectiveness in downstream tasks. To overcome this challenge, we propose an innovative CE-CT image synthesis model called Segmentation Guided Crossing Dual Decoding Generative Adversarial Network (SGCDD-GAN). Specifically, the SGCDD-GAN involves a crossing dual decoding generator comprising an attention decoder and an improved transformation decoder. The attention decoder is designed to highlight critical regions within the abdominal cavity, while the improved transformation decoder is responsible for synthesizing CE-CT images. These two decoders are interconnected using a crossing technique to enhance each other's capabilities. Furthermore, we employ a multi-task learning strategy to guide the generator to focus more on the lesion area. To evaluate the performance of the proposed SGCDD-GAN, we test it on an in-house CE-CT dataset. In both CE-CT image synthesis tasks, namely synthesizing arterial (ART) phase and portal venous (PV) phase images, the proposed SGCDD-GAN demonstrates superior performance across the entire image and the liver region in terms of SSIM, PSNR, MSE, and PCC scores. Furthermore, CE-CT images synthesized by our SGCDD-GAN achieve remarkable accuracy rates of 82.68%, 94.11%, and 94.11% in a deep learning-based FLLs classification task, along with a pilot assessment conducted by two radiologists.

7.
Biomed Phys Eng Express ; 10(3)2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38457851

ABSTRACT

Contrast-enhanced computed tomography (CE-CT) images are vital for the clinical diagnosis of focal liver lesions (FLLs). However, the use of CE-CT images imposes a significant burden on patients due to the injection of contrast agents and the extended scanning time. Deep learning-based image synthesis models offer a promising solution that synthesizes CE-CT images from non-contrasted CT (NC-CT) images. Unlike natural images, medical image synthesis requires a specific focus on certain organs or localized regions to ensure accurate diagnosis. Determining how to effectively emphasize target organs poses a challenging issue in medical image synthesis. To solve this challenge, we present a novel CE-CT image synthesis model called Organ-Aware Generative Adversarial Network (OA-GAN). The OA-GAN comprises an organ-aware (OA) network and a dual decoder-based generator. First, the OA network learns the most discriminative spatial features of the target organ (i.e. the liver) by utilizing the ground-truth organ mask as localization cues. Subsequently, the NC-CT image and the captured features are fed into the dual decoder-based generator, which employs a local and a global decoder network to simultaneously synthesize the organ and the entire CE-CT image. Moreover, the semantic information extracted from the local decoder is transferred to the global decoder to facilitate better reconstruction of the organ in the entire CE-CT image. Qualitative and quantitative evaluation on a CE-CT dataset demonstrates that the OA-GAN outperforms state-of-the-art approaches in synthesizing two types of CE-CT images, namely the arterial phase and the portal venous phase. Additionally, subjective evaluations by expert radiologists and a deep learning-based FLLs classification also affirm that CE-CT images synthesized by the OA-GAN exhibit a remarkable resemblance to real CE-CT images.


Subjects
Arteries , Liver , Humans , Liver/diagnostic imaging , Semantics , X-Ray Computed Tomography
8.
Stud Health Technol Inform ; 310: 901-905, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269939

ABSTRACT

Object detection using convolutional neural networks (CNNs) has achieved high performance and state-of-the-art results on natural images. Compared with natural images, medical images present several challenges for lesion detection. First, the sizes of lesions vary tremendously, from several millimeters to several centimeters. Scale variations significantly affect lesion detection accuracy, especially for the detection of small lesions. Moreover, the effective extraction of temporal and spatial features from multi-phase CT images is also an important issue. In this paper, we propose a group-based deep layer aggregation method with multi-phase attention for liver lesion detection in multi-phase CT images. The method, called MSPA-DLA++, is a backbone feature extraction network for anchor-free liver lesion detection in multi-phase CT images that addresses scale variations and extracts hidden features from such images. The effectiveness of the proposed method is demonstrated on a public dataset (LiTS2017) and our private multi-phase dataset. The experimental results show that MSPA-DLA++ improves upon the performance of state-of-the-art networks by approximately 3.7%.


Subjects
Liver Neoplasms , Neural Networks (Computer) , Humans , X-Ray Computed Tomography
9.
Stud Health Technol Inform ; 310: 936-940, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269946

ABSTRACT

Microvascular invasion (MVI) of hepatocellular carcinoma (HCC) is an important factor affecting postoperative recurrence and patient prognosis. Preoperative diagnosis of MVI is of great significance for improving the prognosis of HCC. Currently, the diagnosis of MVI is mainly based on histopathological examination after surgery, which cannot meet the requirement of preoperative diagnosis. In addition, the sensitivity, specificity and accuracy of MVI diagnosis based on a single imaging feature are low. In this paper, a robust, high-precision cross-modality unified framework for clinical diagnosis, CMIR, is proposed for the prediction of microvascular invasion of hepatocellular carcinoma. It can effectively extract, fuse and localize multi-phase MR images and clinical data, enrich the semantic context, and comprehensively improve the prediction indicators across different hospitals. The state-of-the-art performance of the approach was validated on a dataset of HCC patients with confirmed pathological types. Moreover, CMIR provides a possible solution for related multimodality tasks in the medical field.


Subjects
Hepatocellular Carcinoma , Liver Neoplasms , Humans , Hepatocellular Carcinoma/diagnostic imaging , Hepatocellular Carcinoma/surgery , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/surgery , Hospitals , Postoperative Period , Semantics
10.
IEEE J Biomed Health Inform ; 27(10): 4854-4865, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37585323

ABSTRACT

High resolution (HR) 3D medical image segmentation is vital for an accurate diagnosis. However, in the field of medical imaging, it is still a challenging task to achieve a high segmentation performance with cost-effective and feasible computation resources. Previous methods commonly use patch-sampling to reduce the input size, but this inevitably harms the global context and decreases the model's performance. In recent years, a few patch-free strategies have been presented to deal with this issue, but either they have limited performance due to their over-simplified model structures or they follow a complicated training process. In this study, to effectively address these issues, we present Adaptive Decomposition (A-Decomp) and Shared Weight Volumetric Transformer Blocks (SW-VTB). A-Decomp can adaptively decompose features and reduce their spatial size, which greatly lowers GPU memory consumption. SW-VTB is able to capture long-range dependencies at a low cost with its lightweight design and cross-scale weight-sharing mechanism. Our proposed cross-scale weight-sharing approach enhances the network's ability to capture scale-invariant core semantic information in addition to reducing parameter numbers. By combining these two designs together, we present a novel patch-free segmentation framework named VolumeFormer. Experimental results on two datasets show that VolumeFormer outperforms existing patch-based and patch-free methods with a comparatively fast inference speed and relatively compact design.

11.
Eur J Surg Oncol ; 49(1): 156-164, 2023 01.
Article in English | MEDLINE | ID: mdl-36333180

ABSTRACT

BACKGROUND: Accurate preoperative identification of microvascular invasion (MVI) can relieve the pressure of personalized treatment adaptation and improve the poor prognosis of hepatocellular carcinoma (HCC). This study aimed to develop and validate a novel multimodal deep learning (DL) model for predicting MVI based on multi-parameter magnetic resonance imaging (MRI) and contrast-enhanced computed tomography (CT). METHODS: A total of 397 HCC patients underwent both CT and MRI examinations before surgery. We established radiological models (RCT, RMRI) using support vector machines (SVM) and DL models (DLCT_ALL, DLMRI_ALL, DLCT + MRI) using ResNet18. The comprehensive model (CALL), involving multi-modality DL features and clinical and radiological features, was constructed using SVM. Model performance was quantified by the area under the receiver operating characteristic curve (AUC) and compared by the net reclassification index (NRI) and integrated discrimination improvement (IDI). RESULTS: The DLCT + MRI model exhibited superior predictive efficiency over single-modality models, especially over the DLCT_ALL model (AUC: 0.819 vs. 0.742, NRI > 0, IDI > 0). The DLMRI_ALL model improved the performance over the RMRI model (AUC: 0.794 vs. 0.766, NRI > 0, IDI < 0), but no such difference was found between the DLCT_ALL model and the RCT model (AUC: 0.742 vs. 0.710, NRI < 0, IDI < 0). Furthermore, both the DLCT + MRI and CALL models demonstrated prognostic power in recurrence-free survival stratification (P < 0.001). CONCLUSION: The proposed DLCT + MRI model showed robust capability in predicting MVI and outcomes for HCC. Moreover, the identification ability of the multi-modality DL model was better than that of any single modality, especially for CT.


Subjects
Hepatocellular Carcinoma , Deep Learning , Liver Neoplasms , Humans , Hepatocellular Carcinoma/diagnostic imaging , Hepatocellular Carcinoma/surgery , Hepatocellular Carcinoma/pathology , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/surgery , X-Ray Computed Tomography/methods , Magnetic Resonance Imaging/methods , Retrospective Studies
12.
IEEE Trans Med Imaging ; 42(10): 3091-3103, 2023 10.
Article in English | MEDLINE | ID: mdl-37171932

ABSTRACT

Multi-modal tumor segmentation exploits complementary information from different modalities to help recognize tumor regions. Known multi-modal segmentation methods mainly have deficiencies in two aspects: First, the adopted multi-modal fusion strategies are built upon well-aligned input images, which are vulnerable to spatial misalignment between modalities (caused by respiratory motions, different scanning parameters, registration errors, etc). Second, the performance of known methods remains subject to the uncertainty of segmentation, which is particularly acute in tumor boundary regions. To tackle these issues, in this paper, we propose a novel multi-modal tumor segmentation method with deformable feature fusion and uncertain region refinement. Concretely, we introduce a deformable aggregation module, which integrates feature alignment and feature aggregation in an ensemble, to reduce inter-modality misalignment and make full use of cross-modal information. Moreover, we devise an uncertain region inpainting module to refine uncertain pixels using neighboring discriminative features. Experiments on two clinical multi-modal tumor datasets demonstrate that our method achieves promising tumor segmentation results and outperforms state-of-the-art methods.


Subjects
Neoplasms , Humans , Uncertainty , Neoplasms/diagnostic imaging , Motion (Physics) , Respiratory Rate
13.
Article in English | MEDLINE | ID: mdl-38082813

ABSTRACT

MRI is crucial for the diagnosis of HCC patients; in particular, when combined with CT images for MVI prediction, richer complementary information can be learned. Many studies have shown that whether hepatocellular carcinoma is accompanied by vascular invasion can be evidenced by imaging examinations such as CT or MR, so these modalities can be used jointly in multimodal prediction to improve MVI prediction accuracy. However, current clinical diagnosis is high-risk, time-consuming and expensive due to the use of gadolinium-based contrast agent (CA) injection. If MRI could be synthesized without CA injection, it would undoubtedly greatly optimize the diagnosis. To this end, this paper proposes a high-quality image synthesis network, MVI-Wise GAN, that can be used to improve the prediction of microvascular invasion in HCC. Starting from the underlying imaging perspective, it introduces K-space and feature-level constraints and combines three related networks (an attention-aware generator, a convolutional neural network-based discriminator and a region-based convolutional neural network detector) to achieve precise tumor region detection from synthetic tumor-specific MRI. Accurate MRI synthesis is achieved through backpropagation, the feature representation and context learning of HCC MVI are enhanced, and loss convergence is improved through residual learning. The model was tested on a dataset of 256 subjects from Run Run Shaw Hospital of Zhejiang University. Experimental results and quantitative evaluation show that MVI-Wise GAN achieves high-quality MRI synthesis with a tumor detection accuracy of 92.3%, which is helpful for the clinical diagnosis of liver tumor MVI.


Subjects
Hepatocellular Carcinoma , Liver Neoplasms , Humans , Hepatocellular Carcinoma/diagnostic imaging , Liver Neoplasms/diagnostic imaging , Neoplasm Invasiveness , Magnetic Resonance Imaging/methods , Contrast Media/pharmacology , Radiopharmaceuticals
14.
Article in English | MEDLINE | ID: mdl-38083232

ABSTRACT

As one of the most common malignant tumors worldwide, hepatocellular carcinoma (HCC) has a high rate of death and recurrence, and microvascular invasion (MVI) is considered an independent risk factor for its early recurrence and poor survival. Accurate preoperative prediction of MVI is of great significance for the formulation of individualized treatment plans and long-term prognosis assessment for HCC patients. However, as the mechanism of MVI is still unclear, existing studies use deep learning methods to train directly on CT or MR images, with limited predictive performance and a lack of explanation. We map the pathological "7-point" baseline sampling method used to confirm the diagnosis of MVI onto MR images, propose a vision-guided attention-enhanced network to improve the prediction performance of MVI, and validate the reliability of the prediction results on the corresponding pathological images. Specifically, we design a learnable online class activation map (CAM) to guide the network to focus on high-incidence regions of MVI, guided by an extended tumor mask. Further, an attention-enhanced module is proposed to force the network to learn image regions that can explain the MVI results. The generated attention maps capture long-distance dependencies and can be used as spatial priors for MVI to promote the learning of the vision-guided module. Experimental results on the constructed multi-center dataset show that the proposed algorithm achieves state-of-the-art performance compared with other models.


Subjects
Hepatocellular Carcinoma , Liver Neoplasms , Humans , Hepatocellular Carcinoma/diagnosis , Liver Neoplasms/diagnosis , Reproducibility of Results , Retrospective Studies , Neoplasm Invasiveness/pathology
15.
Article in English | MEDLINE | ID: mdl-38083328

ABSTRACT

A high early recurrence (ER) rate is the main factor leading to poor outcomes in patients with hepatocellular carcinoma (HCC). Accurate preoperative prediction of ER is thus highly desired for HCC treatment. Many radiomics solutions have been proposed for the preoperative prediction of HCC ER using CT images based on machine learning and deep learning methods. Nevertheless, most current radiomics approaches extract features only from segmented tumor regions and neglect the liver tissue information that is useful for HCC prognosis. In this work, we propose a deep prediction network based on CT images of the full liver combined with a tumor mask that provides tumor location information for better feature extraction to predict the ER of HCC. However, due to the complex imaging characteristics of HCC, image-based ER prediction methods suffer from limited capability. Therefore, on the one hand, we propose to employ a supervised contrastive loss, trained jointly with the cross-entropy loss, to alleviate the problem of intra-class variation and inter-class similarity of HCC. On the other hand, we incorporate clinical data to further improve the prediction ability of the model. Extensive experiments verify the effectiveness of our proposed deep prediction model and the contribution of liver tissue to prognosis assessment of HCC.
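The joint objective described above (cross-entropy plus a supervised contrastive term) can be sketched as follows. The weighting factor and toy inputs are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def cross_entropy(logits, labels):
    """Softmax cross-entropy averaged over a batch."""
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def supervised_contrastive(features, labels, temperature=0.1):
    """Supervised contrastive loss: pull together L2-normalized features
    that share a label, push apart features with different labels."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature
    n = len(labels)
    loss = 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        denom = sum(np.exp(sim[i, k]) for k in range(n) if k != i)
        loss += -sum(np.log(np.exp(sim[i, j]) / denom) for j in positives) / len(positives)
    return loss / n

feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([0, 0, 1, 1])
logits = np.array([[2.0, 0.0], [1.5, 0.0], [0.0, 2.0], [0.0, 1.0]])
total = cross_entropy(logits, labels) + 0.5 * supervised_contrastive(feats, labels)
print(round(total, 3))
```

The contrastive term shapes the embedding space (tight ER / non-ER clusters) while cross-entropy drives the classification decision, which is how the joint training addresses intra-class variation and inter-class similarity.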


Subjects
Hepatocellular Carcinoma , Liver Neoplasms , Humans , Hepatocellular Carcinoma/diagnostic imaging , Hepatocellular Carcinoma/surgery , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/surgery , X-Ray Computed Tomography/methods , Machine Learning
16.
Article in English | MEDLINE | ID: mdl-38083412

ABSTRACT

Compared with non-contrast computed tomography (NC-CT) scans, contrast-enhanced (CE) CT scans provide more abundant information about focal liver lesions (FLLs), which plays a crucial role in FLLs diagnosis. However, CE-CT scans require patients to be injected with a contrast agent, which increases their physical and economic burden. In this paper, we propose a spatial attention-guided generative adversarial network (SAG-GAN), which can directly obtain corresponding CE-CT images from a patient's NC-CT images. In the SAG-GAN, we devise a spatial attention-guided generator, which utilizes a lightweight spatial attention module to highlight synthesis task-related areas in the NC-CT image and neglect unrelated areas. To assess the performance of our approach, we test it on two tasks: synthesizing CE-CT images in the arterial phase and the portal venous phase. Both qualitative and quantitative results demonstrate that SAG-GAN is superior to existing GAN-based image synthesis methods.


Subjects
Computer-Assisted Image Processing , X-Ray Computed Tomography , Humans , Computer-Assisted Image Processing/methods , X-Ray Computed Tomography/methods
17.
Comput Biol Med ; 166: 107467, 2023 Sep 11.
Article in English | MEDLINE | ID: mdl-37725849

ABSTRACT

Multi-organ segmentation, which identifies and separates different organs in medical images, is a fundamental task in medical image analysis. Recently, the immense success of deep learning motivated its wide adoption in multi-organ segmentation tasks. However, due to the expensive labor and expertise required, the availability of multi-organ annotations is usually limited and hence poses a challenge in obtaining sufficient training data for deep learning-based methods. In this paper, we aim to address this issue by combining off-the-shelf single-organ segmentation models to develop a multi-organ segmentation model on the target dataset, which removes the dependence on annotated data for multi-organ segmentation. To this end, we propose a novel dual-stage method that consists of a Model Adaptation stage and a Model Ensemble stage. The first stage enhances the generalization of each off-the-shelf segmentation model on the target domain, while the second stage distills and integrates knowledge from multiple adapted single-organ segmentation models. Extensive experiments on four abdomen datasets demonstrate that our proposed method can effectively leverage off-the-shelf single-organ segmentation models to obtain a tailored model for multi-organ segmentation with high accuracy.
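The Model Ensemble stage must reconcile several single-organ outputs into one label map. A minimal sketch of one plausible merging rule (confidence-based argmax; the threshold and tie-breaking policy are assumptions, not the paper's distillation procedure):

```python
import numpy as np

def merge_single_organ_predictions(prob_maps, threshold=0.5):
    """Combine per-organ probability maps (each from a separate single-organ
    model) into one multi-organ label map: label 0 is background, label k
    is organ k; where organ predictions overlap, the most confident wins."""
    stacked = np.stack(prob_maps, axis=0)      # (n_organs, H, W)
    best = stacked.argmax(axis=0) + 1          # most confident organ per pixel
    merged = np.where(stacked.max(axis=0) >= threshold, best, 0)
    return merged

# toy 2x2 "images": organ 1 (liver-like) and organ 2 (kidney-like) maps
liver = np.array([[0.9, 0.8], [0.2, 0.1]])
kidney = np.array([[0.1, 0.6], [0.7, 0.2]])
merged = merge_single_organ_predictions([liver, kidney])
print(merged)
# [[1 1]
#  [2 0]]
```

In the paper this fused output would serve as the supervision signal distilled into a single tailored multi-organ model, rather than being the final deliverable itself.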

18.
Article in English | MEDLINE | ID: mdl-38082913

ABSTRACT

Computer-aided diagnostic methods, such as automatic and precise liver tumor detection, have a significant impact on healthcare. In recent years, deep learning-based liver tumor detection methods for multi-phase computed tomography (CT) images have achieved noticeable performance. Deep learning frameworks require a substantial amount of annotated training data, but obtaining enough training data with high-quality annotations is a major issue in medical imaging. Additionally, deep learning frameworks suffer from domain shift when they are trained on one dataset (source domain) and applied to new test data (target domain). To address the lack of training data and the domain shift issue in multi-phase CT images, we present an adversarial learning-based strategy to mitigate the domain gap across different phases of multi-phase CT scans. We introduce the use of the Fourier phase component of CT images to improve the semantic information and more reliably identify tumor tissues. Our approach eliminates the requirement for distinct annotations for each phase of the CT scans. The experimental results show that our proposed method performs noticeably better than conventional training and other methods.
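The Fourier phase component mentioned above can be extracted with NumPy's FFT. A small sketch illustrating why phase is a contrast-robust cue (the phase-only reconstruction is an illustrative aside, not the paper's pipeline):

```python
import numpy as np

def fourier_phase(image):
    """Extract the phase component of an image's 2D Fourier transform.
    Phase encodes structural layout, while amplitude carries much of the
    contrast style that differs between CT phases."""
    spectrum = np.fft.fft2(image)
    return np.angle(spectrum)

def phase_only_reconstruction(image):
    """Rebuild an image from unit amplitude and the original phase; edges
    and anatomical structure remain visible, which illustrates why phase
    is a useful phase-invariant cue for tumor detection."""
    phase = fourier_phase(image)
    return np.real(np.fft.ifft2(np.exp(1j * phase)))

img = np.random.rand(64, 64)
print(fourier_phase(img).shape, phase_only_reconstruction(img).shape)
```

Because the structural content survives in the phase spectrum regardless of the contrast-enhancement phase, a detector conditioned on it can transfer annotations learned on one CT phase to the others.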


Subjects
Computer-Assisted Image Processing , Liver Neoplasms , Humans , Computer-Assisted Image Processing/methods , X-Ray Computed Tomography/methods , Liver Neoplasms/diagnostic imaging
19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1552-1555, 2022 07.
Article in English | MEDLINE | ID: mdl-36083929

ABSTRACT

Multiphase computed tomography (CT) images are widely used for the diagnosis of liver disease. Since each phase has different contrast enhancement (i.e., a different domain), multiphase CT images must be annotated for all phases to perform liver or tumor segmentation, which is a time-consuming and labor-expensive task. In this paper, we propose a dual discriminator-based unsupervised domain adaptation (DD-UDA) method for liver segmentation on multiphase CT images without annotations. Our framework consists of three modules: a task-specific generator and two discriminators. We perform domain adaptation at two levels, the feature level and the output level, to improve accuracy by reducing the difference in distributions between the source and target domains. Experimental results using public data (PV phase only) as the source domain and private multiphase CT data as the target domain show the effectiveness of the proposed DD-UDA method. Clinical relevance: This study helps to efficiently and accurately segment the liver on multiphase CT images, which is an important preprocessing step for diagnosis and surgical support. With the proposed DD-UDA method, segmentation accuracy improved by 5%, 8%, and 6%, respectively, across the phases of CT images compared with the results obtained without UDA.


Subjects
Computer-Assisted Image Processing , Neoplasms , Humans , Computer-Assisted Image Processing/methods , Liver/diagnostic imaging , X-Ray Computed Tomography/methods
20.
Front Radiol ; 2: 856460, 2022.
Article in English | MEDLINE | ID: mdl-37492657

ABSTRACT

Hepatocellular carcinoma (HCC) is a primary liver cancer with a high mortality rate. It is one of the most common malignancies worldwide, especially in Asia, Africa, and southern Europe. Although surgical resection is an effective treatment, patients with HCC are at risk of recurrence after surgery. Preoperative early-recurrence prediction for patients with liver cancer can help physicians develop treatment plans and guide patients in postoperative follow-up. However, conventional methods based on clinical data ignore patients' imaging information. Some studies have used radiomic models for early recurrence prediction in HCC patients with good results, and patients' medical images have been shown to be effective in predicting HCC recurrence. In recent years, deep learning models have demonstrated the potential to outperform radiomics-based models. In this paper, we propose a deep learning-based prediction model that contains intra-phase attention and inter-phase attention. Intra-phase attention focuses on important channel and spatial information within the same phase, whereas inter-phase attention focuses on important information between different phases. We also propose a fusion model to combine the image features with clinical data. Our experimental results show that the fusion model outperforms models that use clinical data only or the CT images only. Our model achieved a prediction accuracy of 81.2%, and the area under the curve was 0.869.
