Results 1 - 20 of 29
1.
Brief Bioinform; 24(1), 2023 Jan 19.
Article in English | MEDLINE | ID: mdl-36445207

ABSTRACT

Driven by multi-omics data, multi-view clustering algorithms have been successfully applied to cancer subtype prediction, aiming to identify subtypes with biometric differences within the same cancer and thereby improve patients' clinical prognosis and support personalized treatment plans. Because the number of patients in omics data is much smaller than the number of genes, multi-view spectral clustering based on similarity learning has been widely developed. However, these algorithms still suffer from several problems: over-reliance on the quality of pre-defined similarity matrices, inability to reasonably handle noise and redundant information in high-dimensional omics data, and neglect of complementary information between omics data. This paper proposes a multi-view spectral clustering with latent representation learning (MSCLRL) method to alleviate these problems. First, MSCLRL generates a low-dimensional latent representation for each omics data type, which effectively retains the unique information of each omics layer and improves the robustness and accuracy of the similarity matrix. Second, MSCLRL assigns appropriate weights to the latent representations and performs global similarity learning to generate an integrated similarity matrix. Third, the integrated similarity matrix is fed back to update the low-dimensional representation of each omics layer. Finally, the resulting integrated similarity matrix is used for clustering. On 10 benchmark multi-omics datasets and in 2 separate cancer case studies, experiments confirmed that the proposed method obtains statistically and biologically meaningful cancer subtypes.
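As a generic illustration of the final step these similarity-learning methods share (a sketch, not code from the paper), spectral clustering of an integrated similarity matrix can be written in NumPy; the deterministic farthest-point initialization and the toy Lloyd iteration are simplifications of my own:

```python
import numpy as np

def spectral_clustering(W, k):
    """Cluster samples from a similarity matrix W via the normalized Laplacian."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt  # symmetric normalized Laplacian
    vals, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]                                   # k smallest eigenvectors
    U /= np.linalg.norm(U, axis=1, keepdims=True)     # row-normalize the embedding
    # deterministic farthest-point init, then simple Lloyd iterations
    centers = [U[0]]
    for _ in range(k - 1):
        d2 = np.min([((U - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(U[np.argmax(d2)])
    centers = np.array(centers)
    for _ in range(50):
        labels = np.argmin(((U[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([U[labels == j].mean(0) for j in range(k)])
    return labels
```

On a similarity matrix with two strong diagonal blocks, the two blocks are recovered as the two clusters.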


Subjects
Multiomics; Neoplasms; Humans; Algorithms; Neoplasms/genetics; Cluster Analysis
2.
Guang Pu Xue Yu Guang Pu Fen Xi; 33(1): 65-8, 2013 Jan.
Article in Chinese | MEDLINE | ID: mdl-23586226

ABSTRACT

To reduce errors in near-infrared spectral acquisition, analytical models of coal spectra at four particle sizes (0.2, 1, 3, and 13 mm) were studied in this paper. Spectral feature information was extracted by PCA, and two quantitative analytical models were then established based on GA-BP and GA-Elman neural network algorithms. After spectral preprocessing with data normalization and multiplicative scatter correction, the results showed that at the 0.2 mm size the correlations between spectra and the standard values were strongest and the analytical precision of the models was best. For smoothed spectra, however, the models at the 1 mm size outperformed the others. Smoothing was not suitable for spectra with less pronounced peak characteristics, whereas multiplicative scatter correction performed better. For the original spectra, the 0.2 mm particle size gave the highest accuracy, followed by 1 and 3 mm, with 13 mm the worst. Overall, the larger the coal particle size, the more unstable factors appear in the spectra, increasing the negative influence on analytical models.
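Multiplicative scatter correction (MSC), the preprocessing step named above, regresses each spectrum against a reference (here the mean spectrum) and removes the fitted offset and slope. A minimal NumPy sketch, not tied to the paper's data:

```python
import numpy as np

def msc(spectra):
    """Multiplicative scatter correction: for each spectrum x, fit
    x ~ a + b * ref against the mean spectrum, return (x - a) / b."""
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra, dtype=float)
    for i, x in enumerate(spectra):
        b, a = np.polyfit(ref, x, 1)   # slope b, intercept a
        corrected[i] = (x - a) / b
    return corrected
```

Two spectra that differ only by additive offset and multiplicative scaling collapse onto the same corrected curve.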

3.
IEEE J Biomed Health Inform; 27(7): 3478-3488, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37126618

ABSTRACT

Atrial fibrillation (AF) is a growing medical burden worldwide, and its pathological manifestations are atrial tissue remodeling and low-pressure atrial tissue fibrosis. Due to inherent limitations of medical image acquisition systems, obtaining high-resolution cardiac magnetic resonance imaging (CMRI) faces many problems. To address them, we propose the Progressive Feedback Residual Attention Network (PFRN) for CMRI super-resolution. Specifically, we perform feature extraction directly on low-resolution images, retaining feature information to a large extent, and then build multiple independent progressive feedback modules to extract high-frequency details. To accelerate network convergence and improve reconstruction quality, we adopt the MS-SSIM-L1 loss function. Furthermore, we use a residual attention stack module to explore the internal relevance of the image and extract detailed features from the low-resolution input. Extensive benchmark evaluation shows that PFRN improves the detail of SR reconstruction results and that the reconstructed CMRI has better visual quality.


Subjects
Atrial Fibrillation; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Feedback; Magnetic Resonance Imaging/methods; Heart Atria
4.
Comput Methods Programs Biomed; 238: 107590, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37201252

ABSTRACT

BACKGROUND AND OBJECTIVE: With the high-resolution (HR) requirements of medical images in clinical practice, super-resolution (SR) reconstruction algorithms based on low-resolution (LR) medical images have become a research hotspot. These methods can significantly improve image resolution without upgrading hardware, so a review is of great value. METHODS: We surveyed SR reconstruction algorithms specific to medical imaging, organized by modality: magnetic resonance (MR) images, computed tomography (CT) images, and ultrasound images. First, we analyzed the research progress of SR reconstruction algorithms and summarized and compared the different types of algorithms. Second, we introduced the evaluation metrics used with SR reconstruction algorithms. Finally, we discussed expected development trends of SR reconstruction technology in the medical field. RESULTS: Deep-learning-based medical image SR reconstruction can provide richer lesion information, relieve experts' diagnostic workload, and improve diagnostic efficiency and accuracy. CONCLUSION: Deep-learning-based medical image SR reconstruction helps improve the quality of medical imaging, supports expert diagnosis, and lays a solid foundation for subsequent computer analysis and recognition tasks, which is of great significance for improving diagnostic efficiency and realizing intelligent medical care.
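The evaluation metrics mentioned here (and used throughout the SR entries below) center on peak signal-to-noise ratio. A minimal reference implementation, independent of any particular paper:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10 * np.log10(max_val ** 2 / mse)
```

Identical images give infinite PSNR; the worst case for 8-bit images (all-zero vs. all-255) gives 0 dB.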


Subjects
Deep Learning; Image Processing, Computer-Assisted/methods; Algorithms; Magnetic Resonance Imaging/methods; Tomography, X-Ray Computed
5.
IEEE Trans Cybern; 53(10): 6421-6432, 2023 Oct.
Article in English | MEDLINE | ID: mdl-35560090

ABSTRACT

Since sample data from one exploration process can be used to update network parameters only once in on-policy deep reinforcement learning (DRL), high sample efficiency is necessary to accelerate training. In the proposed method, a submartingale criterion is derived from the equivalence between the optimal policy and a martingale, and an advanced value iteration (AVI) method is then proposed to conduct value iteration with high accuracy. On this foundation, an anti-martingale (AM) reinforcement learning framework is established to efficiently select the sample data that are conducive to policy optimization. Subsequently, an AM proximal policy optimization (AMPPO) method, which combines the AM framework with proximal policy optimization (PPO), is proposed to reasonably accelerate the update of state values that satisfy the submartingale criterion. Experimental results on the MuJoCo platform show that AMPPO achieves better performance than several state-of-the-art DRL methods.

6.
IEEE Trans Neural Netw Learn Syst; 34(11): 9054-9063, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35286268

ABSTRACT

Accurate estimation of the Q-function and enhancement of the agent's exploration ability have long been challenges for off-policy actor-critic algorithms. To address these two concerns, a novel robust actor-critic (RAC) is developed in this article. We first derive a robust policy improvement mechanism (RPIM) that uses the local optimal policy of the current estimated Q-function to guide policy improvement. By constraining the relative entropy between the new policy and the previous one during policy improvement, the proposed RPIM enhances the stability of the policy update process. Theoretical analysis shows that the policy update carries an incentive to increase policy entropy, which is conducive to enhancing the agent's exploration ability. RAC is then developed by applying the proposed RPIM to regulate the actor improvement process, and it is proven to be convergent. Finally, RAC is evaluated on several continuous-action control tasks on the MuJoCo platform, and the experimental results show that it outperforms several state-of-the-art reinforcement learning algorithms.

7.
BMC Bioinformatics; 13 Suppl 7: S4, 2012 May 08.
Article in English | MEDLINE | ID: mdl-22595001

ABSTRACT

BACKGROUND: Previous studies have shown modular structures in protein-protein interaction (PPI) networks. More recently, many genome and metagenome investigations have focused on identifying modules in PPI networks. However, most existing methods are insufficient for networks with overlapping modular structures. In this study, we describe a novel overlapping module identification method (OMIM) to address this problem. RESULTS: Our method is an agglomerative clustering method that merges modules according to their contributions to modularity. Nodes that have positive effects on more than two modules are defined as overlapping parts. In addition, we designed de-noising steps based on the clustering coefficient and hub-finding steps based on nodal weight. CONCLUSIONS: The low computational complexity and few control parameters make our method suitable for large-scale PPI network analysis. First, we verified OMIM on a small artificial word-association network, which allowed a comprehensive evaluation. We then carried out experiments on real PPI networks from the MIPS Saccharomyces cerevisiae dataset. The results show that OMIM outperforms several other popular methods in identifying high-quality modular structures.
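The quantity OMIM's merge step optimizes is Newman modularity: the fraction of edge ends inside each module minus the fraction expected at random. A small NumPy sketch of that score (generic, not the paper's implementation):

```python
import numpy as np

def modularity(adj, labels):
    """Newman modularity Q of a node partition of an undirected graph."""
    m2 = adj.sum()                                   # = 2m for a symmetric adjacency
    Q = 0.0
    for c in np.unique(labels):
        idx = labels == c
        e_in = adj[np.ix_(idx, idx)].sum() / m2      # fraction of edge ends inside c
        a = adj[idx].sum() / m2                      # fraction of edge ends touching c
        Q += e_in - a ** 2
    return Q
```

Two triangles joined by one bridge edge, split into their natural modules, score Q = 5/14; the trivial one-module partition scores 0.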


Subjects
Algorithms; Protein Interaction Maps; Saccharomyces cerevisiae/metabolism; Cluster Analysis; Saccharomyces cerevisiae Proteins/chemistry; Saccharomyces cerevisiae Proteins/metabolism
8.
Comput Methods Programs Biomed; 218: 106707, 2022 May.
Article in English | MEDLINE | ID: mdl-35255374

ABSTRACT

BACKGROUND AND OBJECTIVE: Heart disease is a serious threat to human health and a leading cause of death, and with the added influence of recent health factors, its incidence continues to rise. Cardiac magnetic resonance (CMR) imaging provides comprehensive structural and functional information about the heart and has become an important tool for diagnosing and treating heart disease. Improving the resolution of CMR images therefore has important medical value for diagnosis and condition assessment. At present, most single-image super-resolution (SISR) reconstruction methods suffer from serious problems, such as insufficient mining of feature information, difficulty determining the interdependence of feature-map channels, and reconstruction error when generating high-resolution images. METHODS: To solve these problems, we propose a dual U-Net residual network (DURN) for super-resolution of CMR images. Specifically, we first propose a U-Net residual network (URN) model comprising an up-branch and a down-branch: the up-branch uses residual blocks and up-blocks to extract and upsample deep features, while the down-branch uses residual blocks and down-blocks to extract and downsample deep features. Building on the URN model, the dual U-Net residual network (DURN) combines deep features extracted at corresponding positions of a first and a second URN through residual connections, making full use of the first URN's features to extract deeper features from low-resolution images.
RESULTS: At scale factors of 2, 3, and 4, DURN obtains 37.86 dB, 33.96 dB, and 31.65 dB on the Set5 dataset, which is (i) a maximum improvement of 4.17 dB, 3.55 dB, and 3.22 dB over the bicubic algorithm and (ii) a minimum improvement of 0.34 dB, 0.14 dB, and 0.11 dB over the LapSRN algorithm. CONCLUSION: Comprehensive experiments on benchmark datasets demonstrate that DURN not only achieves better peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values than other state-of-the-art SR algorithms but also reconstructs clearer super-resolution CMR images with richer details, edges, and texture.


Subjects
Heart Diseases/diagnostic imaging; Image Processing, Computer-Assisted/standards; Algorithms; Disease Progression; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Signal-To-Noise Ratio
9.
Comput Methods Programs Biomed; 219: 106779, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35397410

ABSTRACT

BACKGROUND AND OBJECTIVE: Cataract is one of the most common causes of vision loss. Light scattering caused by clouding of the lens makes it extremely difficult to image the retina of cataract patients with fundus cameras, seriously degrading the quality of the retinal images obtained. Furthermore, cataract patients are generally elderly and often have other retinal diseases in addition to cataracts, which poses great challenges for clinical diagnosis from retinal imaging. METHODS: We present an End-to-End Residual Attention Network (ERAN) for cataractous retinal image dehazing, which includes four modules: encoding, multi-scale feature extraction, feature fusion, and decoding. The encoding module encodes the input hazy cataract image, facilitating subsequent feature extraction and reducing memory usage. The multi-scale feature extraction module includes a dilated convolution module, a residual block, and an adaptive skip connection, which can expand the receptive field and extract features at different scales through weighted screening for fusion. The feature fusion module uses adaptive skip connections to enhance the network's ability to estimate the haze density image, making haze removal more thorough. Finally, the decoding module performs a non-linear mapping on the fused features to obtain the haze density image and then restores the haze-free image. RESULTS: Experimental results show that the proposed method achieves better objective and subjective evaluation results and a better dehazing effect. CONCLUSION: The proposed ERAN method not only produces visually better images but also helps experts diagnose other retinal diseases in cataract patients, leading to better care and treatment.


Subjects
Cataract; Retinal Diseases; Cataract/diagnostic imaging; Disease Progression; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Retina/diagnostic imaging
10.
Comput Methods Programs Biomed; 225: 106995, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35970055

ABSTRACT

BACKGROUND AND OBJECTIVE: The retina is the only organ in the body that can be observed non-invasively using visible light. By analyzing retinal images, many ophthalmological and systemic diseases can be screened, diagnosed, and prevented early, helping patients avoid the risk of blindness. Owing to their powerful feature extraction capabilities, many deep learning super-resolution networks have been applied to retinal image analysis with excellent results. METHODS: Given the lack of high-frequency information and poor visual perception in current reconstruction results at large scale factors, we present an improved generative adversarial network (IGAN) algorithm for retinal image super-resolution reconstruction. First, we construct a novel residual attention block to improve reconstructions that lack high-frequency information and texture detail at large scale factors. Second, we remove the batch normalization layers, which degrade the quality of generated images in the residual network. Finally, we use the more robust Charbonnier loss function in place of the mean squared error loss and a total variation (TV) regularization term to smooth the training results. RESULTS: Experimental results show that our proposed method significantly improves objective evaluation metrics such as peak signal-to-noise ratio and structural similarity. The resulting images have rich texture details and a better visual experience than state-of-the-art image super-resolution methods. CONCLUSION: Our proposed method better learns the mapping between low-resolution and high-resolution retinal images. It can be applied effectively and stably to retinal image analysis, providing an effective basis for early clinical treatment.
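The Charbonnier loss named here is a standard, differentiable near-L1 penalty; for a vanishing residual it tends to the smoothing constant eps, and for large residuals it approaches |diff|. A minimal NumPy sketch (the eps value is illustrative, not from the paper):

```python
import numpy as np

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier loss: a smooth, outlier-robust variant of the L1 loss."""
    diff = pred - target
    return np.mean(np.sqrt(diff ** 2 + eps ** 2))
```

Unlike mean squared error, large residuals are penalized linearly rather than quadratically, which reduces over-smoothing in SR training.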


Subjects
Algorithms; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Signal-To-Noise Ratio
11.
IEEE Trans Cybern; 52(9): 9428-9438, 2022 Sep.
Article in English | MEDLINE | ID: mdl-33705327

ABSTRACT

In recent years, the proximal policy optimization (PPO) algorithm has received considerable attention for its excellent performance on many challenging tasks. However, there is still much room for theoretical explanation of PPO's clipping operation, a key means of improving its performance. In addition, while PPO is inspired by the learning theory of trust region policy optimization (TRPO), the theoretical connection between PPO's clipping operation and TRPO's trust region constraint has not been well studied. In this article, we first analyze the effect of PPO's clipping operation on the objective function of conservative policy iteration and rigorously establish the theoretical relationship between PPO and TRPO. Then, a novel first-order policy gradient algorithm called authentic boundary PPO (ABPPO), based on an authentic boundary setting rule, is proposed. To better keep the difference between the new and old policies within the clipping range, we borrow the idea of ABPPO and propose two improved PPO algorithms: rollback mechanism-based ABPPO (RMABPPO) and penalized point policy difference-based ABPPO (P3DABPPO), based on the ideas of rollback clipping and penalized point policy difference, respectively. Experiments on continuous robotic control tasks implemented in MuJoCo show that our improved PPO algorithms effectively improve learning stability and accelerate learning compared with the original PPO.


Subjects
Algorithms; Preferred Provider Organizations; Policies
12.
IEEE Trans Cybern; 52(5): 3006-3017, 2022 May.
Article in English | MEDLINE | ID: mdl-33027029

ABSTRACT

Encouraging the agent to explore has always been an important and challenging topic in the field of reinforcement learning (RL). Distributional representation of network parameters or value functions is usually an effective way to improve the exploration ability of an RL agent. However, directly changing the representation of network parameters from fixed values to distributions may cause algorithm instability and low learning efficiency. Therefore, to accelerate and stabilize parameter distribution learning, a novel inference-based posteriori parameter distribution optimization (IPPDO) algorithm is proposed. From the perspective of solving the evidence lower bound of probability, we design inference-based objective functions for parameter distribution optimization for continuous-action and discrete-action tasks, respectively. To alleviate overestimation of the value function, we use multiple neural networks with Retrace to estimate value functions, and the smaller estimate participates in the network parameter update, so that the network parameter distribution can be learned. We then design a method for sampling weights from the network parameter distribution by adding an activation function to the standard deviation of the parameter distribution, which achieves adaptive adjustment between fixed values and distributions. Furthermore, IPPDO is an off-policy deep RL (DRL) algorithm, which means it can effectively improve data efficiency via off-policy techniques such as experience replay. We compare IPPDO with other prevailing DRL algorithms on the OpenAI Gym and MuJoCo platforms. Experiments on both continuous-action and discrete-action tasks indicate that IPPDO explores more of the action space, obtains higher rewards faster, and ensures algorithm stability.
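The "smaller estimate participates in the update" idea can be sketched abstractly: build the TD target from the element-wise minimum over several value networks' next-state estimates. This is an illustrative simplification of my own, not IPPDO's actual Retrace-based update:

```python
import numpy as np

def conservative_target(q_estimates, rewards, gamma=0.99):
    """Build a conservative TD target from the element-wise minimum over
    several Q-networks' next-state estimates (mitigates overestimation)."""
    q_min = np.min(q_estimates, axis=0)   # shape: (n_samples,)
    return rewards + gamma * q_min
```

Taking the minimum biases the target downward, trading a little underestimation for protection against the compounding overestimation of a single max-based estimator.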


Subjects
Algorithms; Neural Networks, Computer; Learning; Reinforcement, Psychology; Research Design
13.
Article in English | MEDLINE | ID: mdl-36094996

ABSTRACT

In this article, a novel coupled policy improvement mechanism is developed for improving policy iteration (PI) algorithms. In contrast to common PI, the developed dual parallel policy iteration (DPPI) with a coupled policy improvement mechanism consists of two parallel PIs. At each PI step, the performances of the two parallel policies are evaluated and the better one is defined as the dominant policy. The dominant policy then guides the parallel policy improvement in a soft manner by constraining the Kullback-Leibler (KL) divergence between the dominant policy and the policy to be updated. It is proven that the convergence of DPPI is guaranteed under the designed coupled policy improvement mechanism. Moreover, under certain conditions, the Q-functions of the two new policies obtained in each parallel policy improvement are larger than those of all previous dominant policies, which is conducive to accelerating the PI process and improving policy learning efficiency. Furthermore, by combining DPPI with the twin delayed deep deterministic (TD3) policy gradient, we propose a reinforcement learning (RL) algorithm: parallel TD3 (PTD3). Experimental results on continuous-action control tasks on the MuJoCo and OpenAI Gym platforms show that PTD3 outperforms state-of-the-art RL algorithms.

14.
IEEE Trans Cybern; 52(5): 3111-3122, 2022 May.
Article in English | MEDLINE | ID: mdl-33027028

ABSTRACT

Estimation bias is an important index for evaluating the performance of reinforcement learning (RL) algorithms. Popular RL algorithms such as Q-learning and deep Q-network (DQN) often suffer from overestimation due to the maximum operation used to estimate the maximum expected action values of next states, while double Q-learning (DQ) and double DQN may fall into underestimation by using a double estimator (DE) to avoid overestimation. To balance overestimation and underestimation, we propose a novel integrated DE (IDE) architecture that combines the maximum operation and the DE operation to estimate the maximum expected action value. Based on IDE, two RL algorithms are proposed: 1) integrated DQ (IDQ) and 2) its deep network version, integrated double DQN (IDDQN). The main idea is to integrate the maximum and DE operations to eliminate estimation bias: one estimator is stochastically chosen to perform action selection via the maximum operation, and a convex combination of the two estimators is used for action evaluation. We theoretically analyze why using a non-maximum operation to estimate the maximum expected value causes estimation bias and investigate possible reasons for underestimation in DQ. We also prove the unbiasedness of IDE and the convergence of IDQ. Experiments on grid world and Atari 2600 games indicate that IDQ and IDDQN can reduce or even eliminate estimation bias, make learning more stable and balanced, and improve performance effectively.
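The select-then-evaluate structure described here can be sketched for a single state; the mixing weight `beta` is an illustrative parameter of mine, not the paper's notation:

```python
import numpy as np

def ide_target(qa, qb, beta=0.5):
    """Integrated double estimator (sketch): select the action with the
    max of one estimator, evaluate it with a convex combination of both."""
    a_star = np.argmax(qa)                              # selection via max on A
    return beta * qa[a_star] + (1 - beta) * qb[a_star]  # evaluation via both
```

With `beta=1` this degenerates to max-based evaluation (overestimation-prone, as in Q-learning); with `beta=0` it is pure double-estimator evaluation (underestimation-prone, as in DQ); intermediate values interpolate between the two biases.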


Subjects
Algorithms; Reinforcement, Psychology; Learning
15.
Comput Intell Neurosci; 2021: 9990297, 2021.
Article in English | MEDLINE | ID: mdl-34925501

ABSTRACT

Clustering of tumor samples can help identify cancer types and discover new cancer subtypes, which is essential for effective cancer treatment. Although many traditional clustering methods have been proposed for tumor sample clustering, advanced algorithms with better performance are still needed. Low-rank subspace clustering has become popular in recent years. In this paper, we propose a novel one-step robust low-rank subspace segmentation method (ORLRS) for clustering tumor samples. For a gene expression dataset, we seek its lowest-rank representation matrix and a noise matrix. By imposing a discrete constraint on the low-rank matrix, ORLRS learns the cluster indicators of the subspaces directly, without performing spectral clustering, i.e., it completes the clustering task in one step. To improve robustness, a capped norm is adopted to remove extreme outliers in the noise matrix. Furthermore, we derive an efficient algorithm to solve the ORLRS problem. Experiments on several tumor gene expression datasets demonstrate the effectiveness of ORLRS.


Subjects
Algorithms; Neoplasms; Cluster Analysis; Gene Expression; Humans
16.
Front Genet; 12: 718915, 2021.
Article in English | MEDLINE | ID: mdl-34552619

ABSTRACT

Designing an integrated machine learning model to discover cancer subtypes and understand the heterogeneity of cancer from multiple omics data is a vital task. In recent years, several multi-view clustering algorithms have been proposed and applied to cancer subtype prediction, among which graph-learning-based multi-view clustering methods have attracted wide attention. These multi-view approaches usually suffer from one or more of the following problems: many use the original omics data matrix to construct the similarity matrix and ignore similarity matrix learning; they separate the clustering process from graph learning, making clustering performance highly dependent on the predefined graph; and in graph fusion, they simply average the affinity graphs of multiple views, leaving rich heterogeneous information underused. To solve these problems, a Multi-view Spectral Clustering Based on Multi-smooth Representation Fusion (MRF-MSC) method is proposed in this paper. First, MRF-MSC constructs a smooth representation for each data type, which can be viewed as a sample (patient) similarity matrix; the smooth representation explicitly enhances the grouping effect. Second, MRF-MSC integrates the smooth representations of multiple omics data types through graph fusion to form a similarity matrix containing all the biological information. In addition, MRF-MSC adaptively assigns weight factors to the smooth regularized representation of each omics data type via a self-weighting method. Finally, MRF-MSC imposes a constrained Laplacian rank on the fused similarity matrix to obtain a better cluster structure. These problems can be transformed into a spectral clustering problem, from which the clustering results are obtained. MRF-MSC unifies graph construction, graph fusion, and spectral clustering in one framework, learning better data representations and high-quality graphs and thus achieving a better clustering effect. In experiments, MRF-MSC obtained good results on TCGA cancer datasets.
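The self-weighting idea — each view's weight inversely proportional to its distance from the current consensus graph — can be illustrated with a simple alternating scheme. This is a generic sketch under that assumption, not MRF-MSC's actual optimization:

```python
import numpy as np

def self_weighted_fusion(graphs, n_iter=20):
    """Fuse per-view similarity graphs into a consensus S, with each view's
    weight inversely proportional to its Frobenius distance from S."""
    S = np.mean(graphs, axis=0)                  # start from the plain average
    for _ in range(n_iter):
        w = np.array([1.0 / (2 * np.linalg.norm(S - G) + 1e-12) for G in graphs])
        w /= w.sum()                             # normalize weights to sum to 1
        S = sum(wi * G for wi, G in zip(w, graphs))
    return S, w
```

Views that agree with the consensus are upweighted, so a single outlier view is automatically downweighted rather than dragging the average, which is the advantage over the plain mean fusion criticized above.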

17.
Comput Math Methods Med; 2021: 8214304, 2021.
Article in English | MEDLINE | ID: mdl-34422096

ABSTRACT

Deep neural networks have achieved good results in medical image super-resolution. However, due to limitations of medical equipment and the complexity of human anatomy, it is difficult to reconstruct clear cardiac magnetic resonance (CMR) super-resolution images. To reconstruct clearer CMR images, we propose a CMR image super-resolution (SR) algorithm based on a multichannel residual attention network (MCRN), which uses residual learning to ease training and fully exploit image feature information, and uses a back-projection learning mechanism to learn the interdependence between high-resolution and low-resolution images. Furthermore, MCRN introduces an attention mechanism that dynamically allocates different attention resources to each feature map, discovering more high-frequency information and learning the dependency between feature-map channels. Extensive benchmark evaluation shows that, compared with state-of-the-art image SR methods, MCRN not only significantly improves objective metrics but also provides richer texture information in the reconstructed CMR images, and it outperforms the bicubic algorithm on the information entropy and average gradient of the reconstructed images.


Subjects
Heart/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Algorithms; Computational Biology; Databases, Factual/statistics & numerical data; Deep Learning; Humans; Image Interpretation, Computer-Assisted/statistics & numerical data; Magnetic Resonance Imaging/statistics & numerical data
18.
Genes (Basel); 12(4), 2021 Apr 3.
Article in English | MEDLINE | ID: mdl-33916856

ABSTRACT

Integrating multigenomic data to recognize cancer subtypes is an important task in bioinformatics. In recent years, several multiview clustering algorithms have been proposed and applied to cancer subtype identification. However, these algorithms ignore that each data type contributes differently to the clustering result during fusion, and they require an additional clustering step to generate the final labels. In this paper, a new one-step method for cancer subtype recognition based on a graph learning framework is designed, called Laplacian Rank Constrained Multiview Clustering (LRCMC). LRCMC first forms a graph for each biological data type to reveal the relationships between data points, using an affinity matrix to encode the graph structure. It then adds weights to measure the contribution of each graph and merges the individual graphs into a consensus graph. In addition, LRCMC constructs adaptive neighbors to adjust the similarity of sample points and uses a rank constraint on the Laplacian matrix to ensure that each graph structure has the same number of connected components. Experiments on several benchmark datasets and The Cancer Genome Atlas (TCGA) datasets demonstrate the effectiveness of the proposed algorithm compared with state-of-the-art methods.
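The Laplacian rank constraint rests on a standard spectral fact: the multiplicity of eigenvalue 0 of a graph Laplacian equals the number of connected components, so constraining the rank of the Laplacian forces the consensus graph into exactly k components (one per cluster). A small NumPy check of that property:

```python
import numpy as np

def n_connected_components(adj, tol=1e-9):
    """Count connected components of an undirected graph as the
    multiplicity of eigenvalue 0 of its (unnormalized) Laplacian."""
    L = np.diag(adj.sum(axis=1)) - adj
    eigvals = np.linalg.eigvalsh(L)      # ascending, real (L is symmetric PSD)
    return int(np.sum(eigvals < tol))
```

Adding a single bridging edge between two components removes one zero eigenvalue, which is exactly the structure the rank constraint manipulates.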


Subjects
Algorithms; Biomarkers, Tumor/metabolism; Computational Biology/methods; Gene Expression Regulation, Neoplastic; Models, Theoretical; Neoplasms/classification; Biomarkers, Tumor/genetics; Cluster Analysis; Humans; Neoplasms/genetics; Neoplasms/metabolism; Neoplasms/pathology; Pattern Recognition, Automated; Prognosis; Survival Rate
19.
Comput Methods Programs Biomed ; 200: 105934, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33454574

ABSTRACT

BACKGROUND AND OBJECTIVE: With the growing worldwide burden of coronavirus disease 2019 (COVID-19), improving the resolution of COVID-19 computed tomography (CT) images has become an important task. Current single-image super-resolution (SISR) models based on convolutional neural networks (CNNs) generally suffer from loss of high-frequency information and from large model size caused by deep network structures. METHODS: In this work, we propose an optimization model based on a multi-window back-projection residual network (MWSR), which outperforms most state-of-the-art methods. First, we use multiple windows to refine the same feature map simultaneously, obtaining richer high- and low-frequency information, then fuse and select the features needed by the deep network. Second, we develop a back-projection network based on dilated convolution, using up-projection and down-projection modules to extract image features. Finally, we merge several repeated, consecutive residual modules with global features, combine the information flows through the network, and feed the result to the reconstruction module. RESULTS: The proposed method outperforms state-of-the-art methods on the benchmark dataset and generates clear COVID-19 CT super-resolution images. CONCLUSION: Both subjective visual quality and objective evaluation metrics are improved, and the model size is reduced. The MWSR method can therefore improve the clarity of COVID-19 CT images and effectively assist in the diagnosis and quantitative assessment of COVID-19.
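The up-projection/down-projection modules in the abstract generalize the classical iterative back-projection (IBP) recurrence. A minimal sketch of that recurrence, with crude average-pool/nearest-neighbor operators standing in for the learned dilated-convolution projections:

```python
import numpy as np

def downsample(img, s=2):
    """Crude degradation model: average-pool by factor s."""
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(img, s=2):
    """Nearest-neighbor upsampling (stand-in for a learned up-projection)."""
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

def iterative_back_projection(lr, s=2, iters=10, step=0.5):
    """Classical IBP loop: repeatedly project the LR-space residual back
    into the HR estimate; projection modules unroll this recurrence."""
    hr = np.zeros((lr.shape[0] * s, lr.shape[1] * s))  # deliberately poor start
    for _ in range(iters):
        residual = lr - downsample(hr, s)       # error measured in LR space
        hr = hr + step * upsample(residual, s)  # back-project the error
    return hr
```

Because `downsample(upsample(x)) == x` for these operators, the residual shrinks by a factor of `1 - step` per iteration, so the reconstruction becomes consistent with the LR input even from an all-zero start.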


Subjects
COVID-19/diagnostic imaging , Image Enhancement/methods , Tomography, X-Ray Computed/methods , Algorithms , Deep Learning , Humans , SARS-CoV-2
20.
Comput Methods Programs Biomed ; 208: 106252, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34252814

ABSTRACT

BACKGROUND AND OBJECTIVE: Magnetic resonance image (MRI) analysis provides an anatomical examination of internal organs, which is helpful for disease diagnosis. MRI super-resolution (SR) reconstruction suffers from insufficient mining of feature information, difficulty in determining the interdependence between channels of the feature map, and reconstruction error when producing high-resolution (HR) images; we propose an SR method to address these problems. METHODS: In this work, we propose a gradual back-projection residual attention network for MRI super-resolution (GRAN), which outperforms most state-of-the-art methods. First, we use gradual upsampling to scale the low-resolution (LR) image to the target magnification in stages, alleviating the high-frequency information loss caused by upsampling. Second, we incorporate the idea of iterative back-projection at each stage of gradual upsampling, learning the mapping between HR and LR feature maps and reducing the noise introduced during upsampling. Finally, we use an attention mechanism to dynamically allocate attention to the feature maps generated at different stages of the gradual back-projection network, so that the model can learn the interdependence between feature maps. RESULTS: For 2× and 4× enlargement, the proposed GRAN method outperforms state-of-the-art methods on the Set5, Set14, and Urban100 benchmark datasets; extensive benchmark experiments and analysis show the superiority of GRAN in terms of peak signal-to-noise ratio and structural similarity index. CONCLUSION: The MRI results reconstructed by the gradual back-projection residual attention network on the public IDI dataset show good sharpness, rich texture detail, and good visual quality. In addition, the reconstructed images are closest to the real images, enabling medical experts to see biological tissue structure and its early pathological changes more clearly, supporting diagnosis and treatment of disease.
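The attention mechanism described above learns interdependence between channels of a feature map. A minimal squeeze-and-excitation-style sketch of such channel attention (not GRAN's exact module; the bottleneck weight shapes `w1` and `w2` are illustrative assumptions):

```python
import numpy as np

def channel_attention(feats, w1, w2):
    """SE-style channel attention on a (C, H, W) feature map.
    w1: (C//r, C) bottleneck weights; w2: (C, C//r) expansion weights."""
    z = feats.mean(axis=(1, 2))                 # squeeze: global average pool -> (C,)
    h = np.maximum(w1 @ z, 0.0)                 # excite: bottleneck + ReLU -> (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ h)))      # sigmoid gate in (0, 1) -> (C,)
    return feats * gate[:, None, None]          # reweight each channel
```

Since the gate lies in (0, 1) per channel, the module never amplifies a channel; it redistributes emphasis across channels based on their pooled global statistics, which is how interdependence between feature maps is modeled.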


Subjects
Algorithms , Magnetic Resonance Imaging , Humans , Signal-To-Noise Ratio