Results 1 - 20 of 47
1.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(2): 220-227, 2024 Apr 25.
Article in Chinese | MEDLINE | ID: mdl-38686401

ABSTRACT

In computer-aided medical diagnosis, obtaining labeled medical image data is expensive, while there is a high demand for model interpretability. However, most current deep learning models require a large amount of data and lack interpretability. To address these challenges, this paper proposes a novel data augmentation method for medical image segmentation. The uniqueness and advantage of this method lie in the use of gradient-weighted class activation mapping to extract data-efficient features, which are then fused with the original image. Subsequently, a new channel-weight feature extractor is constructed to learn the weights between different channels. This approach achieves a non-destructive data augmentation effect, enhancing the model's performance, data efficiency, and interpretability. When the method is applied to the Hyper-Kvasir dataset, the intersection over union (IoU) and Dice scores of U-Net are improved; on the ISIC-Archive dataset, the IoU and Dice scores of DeepLabV3+ are also improved. Furthermore, even when the training data is reduced to 70%, the proposed method still achieves 95% of the performance obtained with the entire dataset, indicating good data efficiency. Moreover, the data-efficient features used in the method carry built-in interpretable information, which enhances the interpretability of the model. The method is highly general and plug-and-play: it is applicable to various segmentation methods and does not require modification of the network structure, so it is easy to integrate into existing medical image segmentation methods, improving the convenience of future research and applications.
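
As an illustration of the kind of pipeline the abstract describes, the sketch below computes a Grad-CAM style activation map, fuses it with the input image as an extra channel, and re-weights the fused channels with a small squeeze-and-excitation block. It is a generic PyTorch sketch, not the authors' code; `model`, `layer`, and the channel sizes are assumptions.

    # Hypothetical sketch of Grad-CAM-guided feature fusion for augmentation (not the authors' code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def grad_cam_map(model, layer, image, target_class):
        """Return a [0,1] class-activation map for `image` w.r.t. `target_class` (assumes class logits)."""
        feats, grads = [], []
        h1 = layer.register_forward_hook(lambda m, i, o: feats.append(o))
        h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
        score = model(image)[:, target_class].sum()
        model.zero_grad()
        score.backward()
        h1.remove(); h2.remove()
        w = grads[0].mean(dim=(2, 3), keepdim=True)            # per-channel gradient weights
        cam = F.relu((w * feats[0]).sum(dim=1, keepdim=True))  # weighted sum of feature maps
        cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
        return (cam - cam.amin()) / (cam.amax() - cam.amin() + 1e-8)

    class ChannelWeight(nn.Module):
        """Squeeze-and-excitation style channel re-weighting of the fused input."""
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.fc = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                    nn.Linear(channels // reduction, channels), nn.Sigmoid())
        def forward(self, x):
            w = self.fc(x.mean(dim=(2, 3)))                    # global average pool -> channel weights
            return x * w[:, :, None, None]

    # Fusion: concatenate the CAM with the original image, then re-weight channels (toy tensors).
    fused = ChannelWeight(4)(torch.cat([torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64)], dim=1))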


Subject(s)
Algorithms; Deep Learning; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Diagnostic Imaging/methods; Diagnosis, Computer-Assisted/methods; Neural Networks, Computer
2.
Eur Radiol ; 31(3): 1391-1400, 2021 Mar.
Article in English | MEDLINE | ID: mdl-32901300

ABSTRACT

OBJECTIVE: To explore the value of intravoxel incoherent motion diffusion-weighted imaging (IVIM-DWI) for the prediction of pathologic response to neoadjuvant chemotherapy (NAC) in locally advanced esophageal squamous cell carcinoma (ESCC). MATERIALS AND METHODS: Forty patients with locally advanced ESCC who were treated with NAC followed by radical resection were prospectively enrolled from September 2015 to May 2018. MRI and IVIM were performed within 1 week before and 2-3 weeks after NAC, prior to surgery. Parameters including the apparent diffusion coefficient (ADC), true diffusion coefficient (D), pseudodiffusion coefficient (D*), and pseudodiffusion fraction (f) before and after NAC were measured. Pathologic response was evaluated according to the AJCC tumor regression grade (TRG) system. The changes in IVIM values before and after therapy in the different TRG groups were assessed. Receiver operating characteristic (ROC) curve analysis was used to determine the best cutoff value for predicting the pathologic response to NAC. RESULTS: Twenty-two patients were identified as TRG 2 (responders) and eighteen as TRG 3 (non-responders) on pathologic evaluation. The ADC, D, and f values increased significantly after NAC. The post-NAC D and ΔD values of responders were significantly higher than those of non-responders. The area under the curve (AUC) was 0.722 for post-NAC D and 0.859 for ΔD in predicting pathologic response. The cutoff values of post-NAC D and ΔD were 1.685 × 10⁻³ mm²/s and 0.350 × 10⁻³ mm²/s, respectively. CONCLUSION: IVIM-DWI may be used as an effective functional imaging technique to predict pathologic response to NAC in locally advanced ESCC. KEY POINTS: • The optimal cutoff values of post-NAC D and ΔD for predicting pathologic response to NAC in locally advanced ESCC were 1.685 × 10⁻³ mm²/s and 0.350 × 10⁻³ mm²/s, respectively. • Pathologic response to NAC in locally advanced ESCC was favorable in patients whose post-NAC D and ΔD values were higher than the optimal cutoff values. • IVIM-DWI can potentially be used to preoperatively predict pathologic response to NAC in esophageal carcinoma. Accurate quantification of the D value derived from IVIM-DWI may eventually translate into an effective and non-invasive marker of therapeutic efficacy.
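
A toy illustration of the two quantitative steps the abstract relies on: fitting the bi-exponential IVIM signal model S(b) = S0 * (f * exp(-b*D*) + (1 - f) * exp(-b*D)) to a diffusion signal, and choosing a ΔD cutoff by the Youden index on a ROC curve. The b-values and the responder/non-responder numbers below are synthetic stand-ins, not study data.

    # Illustrative sketch of an IVIM bi-exponential fit and a Youden-index cutoff (not the study's code).
    import numpy as np
    from scipy.optimize import curve_fit
    from sklearn.metrics import roc_curve, roc_auc_score

    def ivim(b, s0, f, d_star, d):
        """Bi-exponential IVIM signal model."""
        return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

    b_values = np.array([0, 25, 50, 75, 100, 200, 400, 600, 800], dtype=float)  # s/mm^2, assumed protocol
    signal = ivim(b_values, 1.0, 0.15, 20e-3, 1.5e-3) + 0.01 * np.random.randn(b_values.size)
    popt, _ = curve_fit(ivim, b_values, signal,
                        p0=[1.0, 0.1, 10e-3, 1e-3],
                        bounds=([0, 0, 1e-3, 1e-4], [2, 1, 1, 1e-2]))
    s0, f, d_star, d = popt          # D is the true diffusion coefficient used in the cutoff analysis

    # Cutoff on delta-D between responders (1) and non-responders (0), Youden index J = TPR - FPR.
    delta_d = np.array([0.41, 0.38, 0.22, 0.36, 0.12, 0.08, 0.30, 0.15])   # x10^-3 mm^2/s, toy values
    label   = np.array([1,    1,    0,    1,    0,    0,    1,    0])
    fpr, tpr, thr = roc_curve(label, delta_d)
    print("AUC =", roc_auc_score(label, delta_d), "cutoff =", thr[np.argmax(tpr - fpr)])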


Subject(s)
Esophageal Neoplasms; Esophageal Squamous Cell Carcinoma; Head and Neck Neoplasms; Diffusion Magnetic Resonance Imaging; Esophageal Neoplasms/diagnostic imaging; Esophageal Neoplasms/drug therapy; Esophageal Squamous Cell Carcinoma/diagnostic imaging; Esophageal Squamous Cell Carcinoma/drug therapy; Humans; Motion; Neoadjuvant Therapy
3.
Sensors (Basel) ; 21(7)2021 Mar 28.
Article in English | MEDLINE | ID: mdl-33800532

ABSTRACT

Hyperspectral image (HSI) super-resolution (SR) is a challenging task due to its ill-posed nature and has attracted extensive attention from the research community. Previous methods concentrated on leveraging various hand-crafted image priors of a latent high-resolution hyperspectral (HR-HS) image to regularize the degradation model of the observed low-resolution hyperspectral (LR-HS) and HR-RGB images, and exploited different optimization strategies to search for a plausible solution, which usually leads to limited reconstruction performance. Recently, deep-learning-based methods have evolved to automatically learn the abundant image priors of a latent HR-HS image and have made great progress in HS image super-resolution. However, current deep-learning methods face difficulties in designing ever more complicated and deeper neural network architectures to boost performance, and they require large-scale training triplets, such as the LR-HS, HR-RGB, and corresponding HR-HS images, for network training; this requirement significantly limits their applicability to real scenarios. In this work, a deep unsupervised fusion-learning framework is proposed for generating a latent HR-HS image using only the observed LR-HS and HR-RGB images, without preparing any other training triplets. Based on the fact that a convolutional neural network architecture is capable of capturing a large number of low-level statistics (priors) of images, the framework automatically learns the underlying priors of the spatial structures and spectral attributes in the latent HR-HS image using only its degraded observations. Specifically, the parameter space of a generative neural network is searched to produce the required HR-HS image by minimizing the reconstruction errors of the observations according to the mathematical relations between the data. Moreover, special convolutional layers that approximate the degradation operations between the observations and the latent HR-HS image are designed to construct an end-to-end unsupervised learning framework for HS image super-resolution. Experiments on two benchmark HS datasets, CAVE and Harvard, demonstrate that the proposed method is capable of producing very promising results, even under a large upscaling factor; furthermore, it outperforms other unsupervised state-of-the-art methods by a large margin, which manifests its superiority and efficiency.
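
A minimal sketch of the unsupervised fusion idea (in the spirit of a deep image prior, not the paper's implementation): a small generator produces the latent HR-HS image, fixed degradation operators map it back to the observed LR-HS and HR-RGB images, and only the reconstruction errors on the observations are minimized. The shapes, the scale factor, and the camera response matrix `srf` are illustrative assumptions.

    # Unsupervised fusion sketch: generator + spatial/spectral degradation losses (illustrative only).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    bands, H, W, scale = 31, 64, 64, 8
    lr_hs  = torch.rand(1, bands, H // scale, W // scale)    # observed low-res hyperspectral image
    hr_rgb = torch.rand(1, 3, H, W)                          # observed high-res RGB image
    srf = torch.rand(3, bands); srf = srf / srf.sum(1, keepdim=True)   # assumed camera response function

    generator = nn.Sequential(                               # learns image priors from a fixed random code
        nn.Conv2d(bands, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, bands, 3, padding=1), nn.Sigmoid())
    z = torch.rand(1, bands, H, W)
    opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

    for step in range(500):
        hr_hs = generator(z)
        # spatial degradation: blur/downsample approximated here by average pooling
        loss_spatial = F.l1_loss(F.avg_pool2d(hr_hs, scale), lr_hs)
        # spectral degradation: project the bands to RGB with the camera response function
        rgb_hat = torch.einsum("cb,nbhw->nchw", srf, hr_hs)
        loss_spectral = F.l1_loss(rgb_hat, hr_rgb)
        opt.zero_grad(); (loss_spatial + loss_spectral).backward(); opt.step()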

4.
Sensors (Basel) ; 19(24)2019 Dec 07.
Article in English | MEDLINE | ID: mdl-31817912

ABSTRACT

Hyperspectral imaging is capable of acquiring the rich spectral information of scenes and has great potential for understanding the characteristics of different materials in many applications, ranging from remote sensing to medical imaging. However, due to hardware limitations, existing hyper-/multi-spectral imaging devices usually cannot obtain high spatial resolution. This study aims to generate a high-resolution hyperspectral image from the available low-resolution hyperspectral and high-resolution RGB images. We propose a novel hyperspectral image super-resolution method based on non-negative sparse representation of reflectance spectra with a data-guided sparsity constraint. The proposed method first learns the hyperspectral dictionary from the low-resolution hyperspectral image and then transforms it into an RGB dictionary with the camera response function, which is determined by the physical properties of the RGB imaging camera. Given an RGB vector and the RGB dictionary, the sparse representation of each pixel in the high-resolution image is calculated with the guidance of a sparsity map, which measures pixel material purity. The sparsity map is generated by analyzing the local content similarity around a focused pixel in the available high-resolution RGB image and quantifying the degree of spectral mixing, motivated by the fact that the spectrum of a pure-material pixel should have a sparse representation over the spectral dictionary. Since the proposed method adaptively adjusts the sparsity of the spectral representation based on the local content of the available high-resolution RGB image, it produces a more robust spectral representation for recovering the target high-resolution hyperspectral image. Comprehensive experiments on two public hyperspectral datasets and three real remote sensing images validate that the proposed method achieves promising performance compared with existing state-of-the-art methods.
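
The core per-pixel step can be sketched as a non-negative sparse coding problem whose penalty is scaled by the sparsity map; here it is solved with scikit-learn's Lasso purely for illustration, and the dictionary, camera response function, and sparsity value are random stand-ins.

    # Rough sketch of per-pixel non-negative sparse coding with a data-guided sparsity weight.
    import numpy as np
    from sklearn.linear_model import Lasso

    bands, atoms = 31, 60
    D_hs  = np.abs(np.random.rand(bands, atoms))     # hyperspectral dictionary (learned from LR-HS)
    crf   = np.abs(np.random.rand(3, bands))         # camera response function of the RGB sensor (assumed)
    D_rgb = crf @ D_hs                                # the same dictionary as seen by the RGB camera

    rgb_pixel = np.abs(np.random.rand(3))             # one pixel of the HR-RGB image
    sparsity  = 0.7                                    # from the sparsity map: closer to 1 = purer material
    lam       = 0.01 * sparsity                        # purer pixels get a stronger sparsity penalty

    coder = Lasso(alpha=lam, positive=True, fit_intercept=False, max_iter=5000)
    coder.fit(D_rgb, rgb_pixel)                        # non-negative sparse code of the RGB pixel
    hr_hs_pixel = D_hs @ coder.coef_                   # recovered high-resolution spectrum for that pixel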

5.
ScientificWorldJournal ; 2014: 903160, 2014.
Article in English | MEDLINE | ID: mdl-24693253

ABSTRACT

Face hallucination is a learning-based super-resolution technique focused on resolution enhancement of facial images. Although face hallucination is a powerful and useful technique, some detailed high-frequency components cannot be recovered, and it requires accurate alignment between training samples. In this paper, we propose a high-frequency compensation framework based on residual images for face hallucination in order to improve reconstruction performance. The basic idea of the proposed framework is to reconstruct or estimate a residual image, which is used to compensate the high-frequency components of the reconstructed high-resolution image. Three approaches based on this framework are developed. We also propose a patch-based, alignment-free face hallucination method. In the patch-based face hallucination, we first segment facial images into overlapping patches and construct training patch pairs. For an input low-resolution (LR) image, the overlapping patches are likewise used to obtain the corresponding high-resolution (HR) patches by face hallucination, and the whole HR image is then reconstructed by combining all of the HR patches. Experimental results show that the high-resolution images obtained using our proposed approaches improve on those obtained by the conventional face hallucination method, even when the training data set is unaligned.
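
A toy sketch of the patch-based idea: each overlapping LR patch (on the upsampled grid, so LR and HR patches share a size) is matched to its nearest training LR patch, the paired HR patches are stitched back together, and the same machinery can hallucinate a residual image that compensates the missing high-frequency detail. Everything below is synthetic and only illustrates the mechanics, not the authors' algorithm.

    # Nearest-neighbour patch hallucination sketch with synthetic data.
    import numpy as np

    def extract_patches(img, size, step):
        ps = []
        for y in range(0, img.shape[0] - size + 1, step):
            for x in range(0, img.shape[1] - size + 1, step):
                ps.append(((y, x), img[y:y + size, x:x + size]))
        return ps

    def hallucinate(lr_img, train_lr_patches, train_hr_patches, size=8, step=4):
        out = np.zeros_like(lr_img, dtype=float)
        weight = np.zeros_like(lr_img, dtype=float)
        flat_train = np.stack([p.ravel() for p in train_lr_patches])
        for (y, x), patch in extract_patches(lr_img, size, step):
            idx = np.argmin(((flat_train - patch.ravel()) ** 2).sum(axis=1))  # nearest training LR patch
            out[y:y + size, x:x + size] += train_hr_patches[idx]
            weight[y:y + size, x:x + size] += 1.0
        return out / np.maximum(weight, 1.0)          # average the overlapping patches

    rng = np.random.default_rng(0)
    blur = lambda a: (a + np.roll(a, 1, 0) + np.roll(a, 1, 1)) / 3.0   # crude stand-in for LR degradation
    train_hr = [rng.random((8, 8)) for _ in range(200)]
    train_lr = [blur(p) for p in train_hr]
    coarse_hr = hallucinate(blur(rng.random((32, 32))), train_lr, train_hr)
    # Residual compensation: hallucinate (HR - coarse HR) the same way and add it back to coarse_hr.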


Subject(s)
Algorithms; Biometric Identification/methods; Face/anatomy & histology; Hallucinations; Computer Graphics; Humans
6.
Article in English | MEDLINE | ID: mdl-38768004

ABSTRACT

Although contrast-enhanced computed tomography (CE-CT) images significantly improve the accuracy of diagnosing focal liver lesions (FLLs), the administration of contrast agents imposes a considerable physical burden on patients. The use of generative models to synthesize CE-CT images from non-contrast CT images offers a promising solution. However, existing image synthesis models tend to overlook the importance of critical regions, which inevitably reduces their effectiveness in downstream tasks. To overcome this challenge, we propose an innovative CE-CT image synthesis model called the Segmentation Guided Crossing Dual Decoding Generative Adversarial Network (SGCDD-GAN). Specifically, the SGCDD-GAN uses a crossing dual-decoding generator comprising an attention decoder and an improved transformation decoder. The attention decoder is designed to highlight critical regions within the abdominal cavity, while the improved transformation decoder is responsible for synthesizing the CE-CT images; the two decoders are interconnected using a crossing technique so that they enhance each other's capabilities. Furthermore, we employ a multi-task learning strategy to guide the generator to focus more on the lesion area. To evaluate the performance of the proposed SGCDD-GAN, we test it on an in-house CE-CT dataset. In both CE-CT image synthesis tasks, namely synthesizing arterial-phase (ART) and portal venous-phase (PV) images, the proposed SGCDD-GAN demonstrates superior performance across the entire image and the liver region in terms of SSIM, PSNR, MSE, and PCC scores. Furthermore, CE-CT images synthesized by our SGCDD-GAN achieve accuracy rates of 82.68%, 94.11%, and 94.11% in a deep-learning-based FLL classification task, along with a pilot assessment conducted by two radiologists.

7.
Biochim Biophys Acta Gene Regul Mech ; 1866(2): 194911, 2023 06.
Article in English | MEDLINE | ID: mdl-36804477

ABSTRACT

BACKGROUND: A gene regulatory network (GRN) is a model that characterizes the complex relationships between genes and thereby provides an informatics environment in which to measure the importance of nodes. Evaluating important nodes in a GRN can reveal their functional implications as key players in particular biological processes, such as master regulators and driver genes. Currently, such evaluation is mainly based on network topological parameters and focuses only on single nodes considered individually. However, genes and their products perform their functions by interacting with each other, and it is worth noting that the effects of gene combinations in a GRN are not simply additive; discovering key combinations is therefore significant for revealing gene sets with important functions. Recently, with the development of single-cell RNA-sequencing (scRNA-seq) technology, we can quantify the gene expression profiles of individual cells, which provides the potential to identify crucial nodes in gene regulation under specific conditions, e.g., stem cell differentiation. RESULTS: In this paper, we propose a bioinformatics method, called Pseudo Knockout Importance (PKI), to quantify the importance of nodes and node sets in a specific GRN structure using time-course scRNA-seq data. First, we construct ordinary differential equations (ODEs) to approximate the gene regulations during cell differentiation. Then we design gene pseudo-knockout experiments and define PKI score evaluation criteria based on the coefficient of determination. The importance of a node is described as the influence that removing the corresponding variable has on the ODE system. For key gene combinations, PKI is formulated as a combinatorial optimization problem that quantifies the in silico gene knockout effects. CONCLUSIONS: Here, we focus our analyses on a specific GRN of embryonic stem cells with time-series gene expression profiles. To verify the effectiveness and advantages of PKI, we compare its node importance rankings with those of twelve centrality-based methods, such as degree and Latora closeness. For key node combinations, we compare the results with a method based on the minimum dominating set. Moreover, well-known combinations of transcription factors in induced pluripotent stem cells are used to verify the vital gene combinations identified by PKI. These results demonstrate the reliability and superiority of the proposed method.
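
One hedged way to operationalize the pseudo-knockout score: fit a linear ODE dx/dt = Ax to the time-course expression matrix, then delete a gene and measure how much the coefficient of determination of the remaining system drops. This is an interpretation for illustration, not the PKI implementation, and the expression matrix below is random.

    # Pseudo-knockout importance sketch based on a linear ODE fit and R^2 degradation.
    import numpy as np

    def fit_r2(X, dt=1.0):
        """Least-squares fit of dX/dt = X A and the R^2 of the fitted derivatives."""
        dX = np.gradient(X, dt, axis=0)                 # finite-difference derivatives, shape (T, G)
        A, *_ = np.linalg.lstsq(X, dX, rcond=None)      # (G, G) interaction matrix
        resid = dX - X @ A
        return 1.0 - resid.var() / dX.var()

    rng = np.random.default_rng(0)
    X = rng.random((50, 6))                             # 50 pseudo-time points x 6 genes (toy data)
    r2_full = fit_r2(X)
    pki = {j: r2_full - fit_r2(np.delete(X, j, axis=1)) for j in range(X.shape[1])}
    print(sorted(pki.items(), key=lambda kv: -kv[1]))   # genes ranked by knockout impact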


Subject(s)
Gene Expression Regulation; Gene Regulatory Networks; Reproducibility of Results; Computational Biology/methods; Transcription Factors/metabolism
8.
IEEE J Biomed Health Inform ; 27(10): 4878-4889, 2023 10.
Article in English | MEDLINE | ID: mdl-37585324

ABSTRACT

Accurate segmentation of the hepatic veins can improve the precision of liver disease diagnosis and treatment. Because the hepatic venous system is a small, sparsely distributed target with highly variable morphology, data labeling is difficult, and automatic hepatic vein segmentation is therefore extremely challenging. We propose a lightweight contextual and morphological awareness network, designing a novel morphology-aware module based on an attention mechanism and a 3D reconstruction module. The morphology-aware module obtains a slice-similarity awareness mapping, which enhances the continuous regions of the hepatic veins in two adjacent slices through attention weighting. The 3D reconstruction module connects the 2D encoder and the 3D decoder to obtain 3D contextual learning ability with a very small number of parameters. Compared with other state-of-the-art methods, the proposed method improves the Dice coefficient on the two datasets while using few parameters. The small number of parameters reduces hardware requirements and potentially yields stronger generalization, which is an advantage for clinical deployment.
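
A loose sketch of a slice-similarity attention of the kind described: feature maps of two adjacent slices are compared per spatial location, and regions that persist across both slices are emphasized. Module and tensor names are assumptions made for the example, not the paper's module.

    # Slice-similarity attention sketch (illustrative shapes and names).
    import torch
    import torch.nn as nn

    class SliceSimilarityAttention(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.proj = nn.Conv2d(channels, channels, kernel_size=1)
        def forward(self, feat_cur, feat_adj):
            # cosine similarity per spatial location between the current and adjacent slice features
            sim = nn.functional.cosine_similarity(feat_cur, feat_adj, dim=1, eps=1e-6)
            attn = torch.sigmoid(sim).unsqueeze(1)         # (B,1,H,W) awareness map
            return feat_cur + self.proj(feat_cur * attn)   # emphasize vessel regions continuous in 3D

    feats = torch.rand(2, 1, 32, 64, 64)                   # features of two adjacent slices (toy shapes)
    out = SliceSimilarityAttention(32)(feats[0], feats[1])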


Subject(s)
Hepatic Veins; Image Processing, Computer-Assisted; Humans; Hepatic Veins/diagnostic imaging
9.
Bioengineering (Basel) ; 10(8)2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37627784

ABSTRACT

Multi-phase computed tomography (CT) images have gained significant popularity in the diagnosis of hepatic disease. There are several challenges in the liver segmentation of multi-phase CT images. (1) Annotation: due to the distinct contrast enhancements observed in different phases (i.e., each phase is considered a different domain), annotating all phase images in multi-phase CT images for liver or tumor segmentation is a task that consumes substantial time and labor resources. (2) Poor contrast: some phase images may have poor contrast, making it difficult to distinguish the liver boundary. In this paper, we propose a boundary-enhanced liver segmentation network for multi-phase CT images with unsupervised domain adaptation. The first contribution is that we propose DD-UDA, a dual discriminator-based unsupervised domain adaptation, for liver segmentation on multi-phase images without multi-phase annotations, effectively tackling the annotation problem. To improve accuracy by reducing distribution differences between the source and target domains, we perform domain adaptation at two levels by employing two discriminators, one at the feature level and the other at the output level. The second contribution is that we introduce an additional boundary-enhanced decoder to the encoder-decoder backbone segmentation network to effectively recognize the boundary region, thereby addressing the problem of poor contrast. In our study, we employ the public LiTS dataset as the source domain and our private MPCT-FLLs dataset as the target domain. The experimental findings validate the efficacy of our proposed methods, producing substantially improved results when tested on each phase of the multi-phase CT image even without the multi-phase annotations. As evaluated on the MPCT-FLLs dataset, the existing baseline (UDA) method achieved IoU scores of 0.785, 0.796, and 0.772 for the PV, ART, and NC phases, respectively, while our proposed approach exhibited superior performance, surpassing both the baseline and other state-of-the-art methods. Notably, our method achieved remarkable IoU scores of 0.823, 0.811, and 0.800 for the PV, ART, and NC phases, respectively, emphasizing its effectiveness in achieving accurate image segmentation.
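
A condensed sketch of dual-discriminator adaptation as described: one discriminator sees encoder features, the other sees softmax segmentation maps, and the segmenter is trained to fool both on target-domain images while being supervised only on the source domain. The networks and loss weights below are placeholders, not the paper's code.

    # Dual-discriminator unsupervised domain adaptation sketch (placeholder networks).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def patch_discriminator(in_ch):
        return nn.Sequential(nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2),
                             nn.Conv2d(64, 1, 4, 2, 1))

    d_feat, d_out = patch_discriminator(256), patch_discriminator(2)
    bce = nn.BCEWithLogitsLoss()

    def adaptation_step(segmenter, src_img, src_lbl, tgt_img):
        src_feat, src_pred = segmenter(src_img)            # segmenter assumed to return (features, logits)
        tgt_feat, tgt_pred = segmenter(tgt_img)
        seg_loss = F.cross_entropy(src_pred, src_lbl)      # supervised loss on the source domain only
        # the segmenter tries to make target features / predictions look like source ("real" = 1)
        pf = d_feat(tgt_feat);                        adv_feat = bce(pf, torch.ones_like(pf))
        po = d_out(F.softmax(tgt_pred, dim=1));       adv_out  = bce(po, torch.ones_like(po))
        return seg_loss + 0.001 * adv_feat + 0.001 * adv_out

    class DummySegmenter(nn.Module):
        """Placeholder encoder/decoder returning (features, logits), as assumed above."""
        def __init__(self):
            super().__init__()
            self.enc = nn.Conv2d(1, 256, 3, padding=1)
            self.dec = nn.Conv2d(256, 2, 3, padding=1)
        def forward(self, x):
            f = torch.relu(self.enc(x))
            return f, self.dec(f)

    loss = adaptation_step(DummySegmenter(), torch.rand(1, 1, 64, 64),
                           torch.randint(0, 2, (1, 64, 64)), torch.rand(1, 1, 64, 64))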

10.
Article in English | MEDLINE | ID: mdl-38083412

ABSTRACT

Compared with non-contrast computed tomography (NC-CT) scans, contrast-enhanced (CE) CT scans provide more abundant information about focal liver lesions (FLLs), which plays a crucial role in FLL diagnosis. However, CE-CT scans require the patient to receive an injection of contrast agent, which increases the physical and economic burden on the patient. In this paper, we propose a spatial attention-guided generative adversarial network (SAG-GAN), which can obtain corresponding CE-CT images directly from a patient's NC-CT images. In the SAG-GAN, we devise a spatial attention-guided generator, which uses a lightweight spatial attention module to highlight synthesis-related areas in the NC-CT image and neglect unrelated areas. To assess the performance of our approach, we test it on two tasks: synthesizing CE-CT images in the arterial phase and in the portal venous phase. Both qualitative and quantitative results demonstrate that SAG-GAN is superior to existing GAN-based image synthesis methods.
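
A simplified sketch of a lightweight spatial attention block of the kind the abstract mentions: channel-wise average and max maps are combined into a single spatial mask that highlights task-related regions. This is a generic CBAM-style block used only as an illustration, not the SAG-GAN module itself.

    # Generic spatial attention block (illustrative stand-in for the lightweight module).
    import torch
    import torch.nn as nn

    class SpatialAttention(nn.Module):
        def __init__(self, kernel_size=7):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        def forward(self, x):
            avg = x.mean(dim=1, keepdim=True)              # (B,1,H,W) channel-wise average
            mx, _ = x.max(dim=1, keepdim=True)             # (B,1,H,W) channel-wise max
            mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
            return x * mask                                 # attend to synthesis-relevant areas

    nc_ct_feat = torch.rand(1, 64, 128, 128)                # features from an NC-CT encoder (toy tensor)
    attended = SpatialAttention()(nc_ct_feat)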


Subject(s)
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods
11.
Article in English | MEDLINE | ID: mdl-38082913

ABSTRACT

Computer-aided diagnostic methods, such as automatic and precise liver tumor detection, have a significant impact on healthcare. In recent years, deep-learning-based liver tumor detection methods for multi-phase computed tomography (CT) images have achieved noticeable performance. Deep learning frameworks require a substantial amount of annotated training data, but obtaining enough training data with high-quality annotations is a major issue in medical imaging. Additionally, deep learning frameworks suffer from domain shift when they are trained on one dataset (source domain) and applied to new test data (target domain). To address the lack of training data and the domain shift issue in multiphase CT images, we present an adversarial learning-based strategy to mitigate the domain gap across the different phases of multiphase CT scans. We introduce the use of the Fourier phase component of CT images to enhance the semantic information and more reliably identify tumor tissue. Our approach eliminates the requirement for distinct annotations for each phase of the CT scans. The experimental results show that our proposed method performs noticeably better than conventional training and other methods.
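
A small sketch of extracting the Fourier phase component of a CT slice: the phase-only reconstruction keeps the structural layout while discarding the amplitude, which carries much of the phase-specific contrast. Illustrative only; the slice below is random.

    # Fourier phase-component extraction sketch.
    import numpy as np

    def phase_component(ct_slice):
        spectrum = np.fft.fft2(ct_slice)
        phase = np.angle(spectrum)                          # keep the phase, drop the amplitude
        return np.real(np.fft.ifft2(np.exp(1j * phase)))    # phase-only reconstruction

    slice_nc = np.random.rand(256, 256)                      # stand-in for one CT slice
    phase_img = phase_component(slice_nc)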


Subject(s)
Image Processing, Computer-Assisted; Liver Neoplasms; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Liver Neoplasms/diagnostic imaging
12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1552-1555, 2022 07.
Article in English | MEDLINE | ID: mdl-36083929

ABSTRACT

Multiphase computed tomography (CT) images are widely used for the diagnosis of liver disease. Since each phase has a different contrast enhancement (i.e., a different domain), multiphase CT images must be annotated for all phases to perform liver or tumor segmentation, which is a time-consuming and labor-intensive task. In this paper, we propose a dual discriminator-based unsupervised domain adaptation (DD-UDA) framework for liver segmentation on multiphase CT images without annotations. Our framework consists of three modules: a task-specific generator and two discriminators. We perform domain adaptation at two levels, the feature level and the output level, to improve accuracy by reducing the difference in distributions between the source and target domains. Experimental results using public data (PV phase only) as the source domain and private multiphase CT data as the target domain show the effectiveness of the proposed DD-UDA method. Clinical Relevance: This study helps to efficiently and accurately segment the liver on multiphase CT images, which is an important preprocessing step for diagnosis and surgical support. With the proposed DD-UDA method, segmentation accuracy improved by 5%, 8%, and 6%, respectively, across the phases of the CT images compared with training without UDA.


Subject(s)
Image Processing, Computer-Assisted; Neoplasms; Humans; Image Processing, Computer-Assisted/methods; Liver/diagnostic imaging; Tomography, X-Ray Computed/methods
13.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 447-450, 2022 07.
Article in English | MEDLINE | ID: mdl-36086485

ABSTRACT

Non-small cell lung cancer (NSCLC) is a malignant tumor with high morbidity and mortality and a high recurrence rate after surgery, which directly affects patients' lives and health. Many recent studies are based on computed tomography (CT) images, which are inexpensive but offer low prediction accuracy. In contrast, using gene expression data to predict the recurrence of NSCLC gives high accuracy; however, the acquisition of gene data is expensive and invasive and cannot meet the recurrence prediction requirements of all patients. In this paper, we propose a low-cost, high-accuracy recurrence prediction method based on residual multilayer perceptrons (ResMLP). First, several proposed ResMLP modules are used to construct a deep regression estimation model. Then, we build a mapping function from mixed features (handcrafted features and deep features) to gene data via this model. Finally, the recurrence prediction task is realized by utilizing the gene estimates obtained from the regression model to learn an information representation related to recurrence. The experimental results show that the proposed method has strong generalization ability and reaches 86.38% prediction accuracy. Clinical Relevance: This study improved the accuracy of preoperative NSCLC recurrence prediction from 78.61% with the conventional method to 86.38% with the proposed method using only CT images.
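
A bare-bones sketch of a residual multilayer-perceptron block stacked into a regressor that maps image-derived features to a gene-expression estimate, in the spirit of the described pipeline. Layer sizes and the feature/gene dimensions are assumptions.

    # Residual MLP regression sketch (assumed dimensions).
    import torch
    import torch.nn as nn

    class ResMLPBlock(nn.Module):
        def __init__(self, dim, hidden):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, dim))
            self.norm = nn.LayerNorm(dim)
        def forward(self, x):
            return self.norm(x + self.net(x))          # residual connection around the MLP

    n_features, n_genes = 256, 1000                     # handcrafted + deep CT features -> gene estimates
    gene_regressor = nn.Sequential(
        nn.Linear(n_features, 512),
        ResMLPBlock(512, 1024), ResMLPBlock(512, 1024),
        nn.Linear(512, n_genes))
    est_genes = gene_regressor(torch.rand(8, n_features))   # estimates fed to the recurrence classifier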


Subject(s)
Carcinoma, Non-Small-Cell Lung; Lung Neoplasms; Carcinoma, Non-Small-Cell Lung/diagnostic imaging; Carcinoma, Non-Small-Cell Lung/genetics; Disease Progression; Genotype; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/genetics; Neoplasm Recurrence, Local/pathology; Neural Networks, Computer
14.
Front Radiol ; 2: 856460, 2022.
Article in English | MEDLINE | ID: mdl-37492657

ABSTRACT

Hepatocellular carcinoma (HCC) is a primary liver cancer with a high mortality rate. It is one of the most common malignancies worldwide, especially in Asia, Africa, and southern Europe. Although surgical resection is an effective treatment, patients with HCC are at risk of recurrence after surgery. Preoperative early-recurrence prediction for patients with liver cancer can help physicians develop treatment plans and guide patients in postoperative follow-up. However, conventional methods based on clinical data ignore the patients' imaging information. Several studies have used radiomic models for early-recurrence prediction in HCC patients with good results, and patients' medical images have been shown to be effective in predicting HCC recurrence. In recent years, deep learning models have demonstrated the potential to outperform radiomics-based models. In this paper, we propose a deep-learning-based prediction model that contains intra-phase attention and inter-phase attention. Intra-phase attention focuses on important channel and spatial information within the same phase, whereas inter-phase attention focuses on important information between different phases. We also propose a fusion model to combine the image features with clinical data. Our experimental results show that the fusion model outperforms models that use only clinical data or only CT images. Our model achieved a prediction accuracy of 81.2%, and the area under the curve was 0.869.

15.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1536-1539, 2022 07.
Article in English | MEDLINE | ID: mdl-36085648

ABSTRACT

Automatic and efficient liver tumor detection in multi-phase CT images is essential for the computer-aided diagnosis of liver tumors. Deep learning is now widely used in medical applications; deep-learning-based AI systems normally need a large quantity of training data, but in the medical field, acquiring sufficient training data with high-quality annotations is a significant challenge. To address the lack of training data, domain adaptation-based methods have recently been developed to bridge the domain gap across datasets with different feature characteristics and data distributions. This paper presents a domain adaptation-based method for detecting liver tumors in multi-phase CT images. We transfer knowledge learned from PV-phase images to ART- and NC-phase images. Clinical Relevance: To minimize the domain gap, we employ an adversarial learning scheme with the maximum square loss on mid-level output feature maps using an anchorless detector. Experiments show that our proposed method performs much better on various CT-phase images than normal training.
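
The maximum square loss mentioned above can be written in a few lines: for target-domain softmax probabilities p, minimizing -mean(p^2)/2 pushes predictions toward confident values with a gentler gradient than entropy minimization. The shapes below are illustrative.

    # Maximum square loss sketch.
    import torch
    import torch.nn.functional as F

    def maximum_square_loss(logits):
        p = F.softmax(logits, dim=1)        # (B, C, H, W) class probabilities
        return -(p ** 2).mean() / 2

    loss = maximum_square_loss(torch.randn(2, 3, 64, 64))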


Subject(s)
Acclimatization; Liver Neoplasms; Humans; Liver Neoplasms/diagnostic imaging; Radiopharmaceuticals; Tomography, X-Ray Computed
16.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 2097-2100, 2022 07.
Article in English | MEDLINE | ID: mdl-36086312

ABSTRACT

Contrast-enhanced computed tomography (CE-CT) images are used extensively for the diagnosis of liver cancer in clinical practice. Compared with non-contrast CT (NC-CT) images (CT scans without injection), CE-CT images are obtained after injecting a contrast agent, which increases the physical burden on patients. To address this limitation, we propose an improved conditional generative adversarial network (improved cGAN) to generate CE-CT images from non-contrast CT images. In the improved cGAN, we incorporate a pyramid pooling module and an elaborate feature fusion module into the generator to improve the encoder's ability to capture multi-scale semantic features and to prevent the dilution of information during decoding. We evaluate the performance of our proposed method on a contrast-enhanced CT dataset containing three phases of CT images (i.e., non-contrast images and CE-CT images in the arterial and portal venous phases). Experimental results suggest that the proposed method is superior to existing GAN-based models in both quantitative and qualitative results.


Subject(s)
Arteries; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods
17.
IEEE J Biomed Health Inform ; 26(8): 3988-3998, 2022 08.
Article in English | MEDLINE | ID: mdl-35213319

ABSTRACT

Organ segmentation is one of the most important steps in various medical image analysis tasks. Recently, semi-supervised learning (SSL) has attracted much attention because it reduces labeling cost. However, most existing SSL methods neglect the prior shape and position information particular to medical images, leading to unsatisfactory localization and non-smooth object boundaries. In this paper, we propose a novel atlas-based semi-supervised segmentation network with multi-task learning for medical organs, named MTL-ABS3Net, which incorporates anatomical priors and makes full use of unlabeled data in a self-training and multi-task learning manner. The MTL-ABS3Net consists of two components: an Atlas-Based Semi-Supervised Segmentation Network (ABS3Net) and a Reconstruction-Assisted Module (RAM). Specifically, the ABS3Net improves on existing SSL methods by utilizing an atlas prior, which generates credible pseudo labels in a self-training manner, while the RAM further assists the segmentation network by capturing the anatomical structures from the original images in a multi-task learning manner. Better reconstruction quality is achieved by using the MS-SSIM loss function, which further improves segmentation accuracy. Experimental results on the liver and spleen datasets demonstrate that our method significantly outperforms existing state-of-the-art methods.


Subject(s)
Abdomen; Supervised Machine Learning; Humans; Image Processing, Computer-Assisted/methods; Spleen/diagnostic imaging
18.
Front Genet ; 12: 814073, 2021.
Article in English | MEDLINE | ID: mdl-35186016

ABSTRACT

lncRNA-protein interactions play essential roles in a variety of cellular processes. However, experimental methods for systematically mapping lncRNA-protein interactions remain time-consuming and expensive, so it is urgent to develop reliable computational methods for predicting lncRNA-protein interactions. In this study, we propose a computational method called LncPNet to predict potential lncRNA-protein interactions by embedding an lncRNA-protein heterogeneous network. The experimental results indicate that LncPNet achieves promising performance on benchmark datasets extracted from the NPInter database, with an accuracy of 0.930 and an area under the ROC curve (AUC) of 0.971. In addition, we compare our method with eight other state-of-the-art methods, and the results illustrate that our method achieves superior prediction performance. LncPNet provides an effective method based on a new perspective of representing the lncRNA-protein heterogeneous network, which will greatly benefit the prediction of lncRNA-protein interactions.

19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3309-3312, 2021 11.
Article in English | MEDLINE | ID: mdl-34891948

ABSTRACT

Convolutional neural networks have become popular in medical image segmentation, and one of their most notable achievements is the ability to learn discriminative features from large labeled datasets. Two-dimensional (2D) networks typically extract multiscale features with deep convolutional feature extractors such as ResNet-101. However, 2D networks are inefficient at extracting spatial features from volumetric images, and although most 2D segmentation networks can be extended to three-dimensional (3D) networks, the extended 3D methods are resource- and time-intensive. In this paper, we propose an efficient and accurate network for fully automatic 3D segmentation. We design a 3D multiple-contextual extractor (MCE) that performs multiscale feature extraction and feature fusion to capture rich global contextual dependencies from different feature levels, together with a light 3D ResU-Net for efficient volumetric image segmentation. The proposed multiple-contextual extractor and light 3D ResU-Net constitute a complete segmentation network: by feeding the multiple-contextual features to the light 3D ResU-Net, we achieve 3D medical image segmentation with high efficiency and accuracy. To validate the 3D segmentation performance of our method, we evaluated the proposed network on semantic segmentation of a private spleen dataset and a public liver dataset. The spleen dataset contains CT scans of 50 patients, and the liver dataset contains CT scans of 131 patients.


Subject(s)
Image Processing, Computer-Assisted; Semantics; Humans; Imaging, Three-Dimensional; Neural Networks, Computer; Tomography, X-Ray Computed
20.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3561-3564, 2021 11.
Article in English | MEDLINE | ID: mdl-34892008

ABSTRACT

Non-small cell lung cancer (NSCLC) is a type of lung cancer with a high recurrence rate after surgery. Precise preoperative prediction of NSCLC recurrence helps in preparing suitable treatment. Many studies have predicted the recurrence of NSCLC based on computed tomography (CT) images or genetic data: CT images are inexpensive but less accurate, while gene data are more expensive but highly accurate. In this study, we propose genotype-guided radiomics methods, called GGR and GGR_Fusion, to build a higher-accuracy prediction model that requires only CT images. GGR is a two-step method consisting of two models: a gene estimation model based on deep learning and a recurrence prediction model based on the estimated genes. We further propose an improved model, GGR_Fusion, which uses the features extracted by the gene estimation model to enhance the recurrence prediction model and improve accuracy. The experiments showed that prediction performance can be improved significantly, from 78.61% accuracy with AUC = 0.66 (existing radiomics method) and 79.09% accuracy with AUC = 0.68 (deep learning method), to 83.28% accuracy with AUC = 0.77 for the proposed GGR and 84.39% accuracy with AUC = 0.79 for the proposed GGR_Fusion. Clinical Relevance: This study improved the accuracy of preoperative NSCLC recurrence prediction from 78.61% with the conventional method to 84.39% with our proposed method using only CT images.
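
A compressed sketch of the two-step idea: (1) regress gene expression from CT-derived features, (2) predict recurrence from the estimated genes, with the fusion variant also reusing the regressor's hidden features. All layer sizes and data are placeholders, not the study's model.

    # Two-step genotype-guided prediction sketch (placeholder dimensions and data).
    import torch
    import torch.nn as nn

    ct_dim, gene_dim, hidden = 128, 500, 256
    gene_estimator = nn.Sequential(nn.Linear(ct_dim, hidden), nn.ReLU(), nn.Linear(hidden, gene_dim))
    recurrence_head = nn.Sequential(nn.Linear(gene_dim + hidden, 64), nn.ReLU(), nn.Linear(64, 2))

    ct_features = torch.rand(4, ct_dim)                 # handcrafted + deep features from CT (toy batch)
    h = gene_estimator[0:2](ct_features)                # hidden features of the gene estimator
    est_genes = gene_estimator[2](h)                    # step 1: estimated gene expression
    logits = recurrence_head(torch.cat([est_genes, h], dim=1))   # step 2, GGR_Fusion-style input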


Subject(s)
Carcinoma, Non-Small-Cell Lung; Lung Neoplasms; Carcinoma, Non-Small-Cell Lung/diagnostic imaging; Carcinoma, Non-Small-Cell Lung/genetics; Genotype; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/genetics; Tomography, X-Ray Computed