Results 1 - 8 of 8
1.
Contrast Media Mol Imaging ; 2021: 3474921, 2021.
Article in English | MEDLINE | ID: mdl-35002567

ABSTRACT

To improve the accuracy of target detection in remote sensing images, this paper proposes DFS, a deep-learning-based target detection algorithm. First, a dimension clustering module, a loss function, and sliding-window segmentation detection are designed. The dataset used in the experiments comes from Google Earth and contains six object types: airplanes, boats, warehouses, large ships, bridges, and ports. The training, validation, and test sets contain 73,490, 22,722, and 2,138 images, respectively. Let A and B denote the numbers of detected positive and negative samples, and C and D the numbers of undetected positive and negative samples, respectively. The precision-recall curves of DFS across the six target types show that DFS detects bridges best and boats worst. The main reason is that bridges are relatively large and clearly distinguished from the background, so they are easy to detect, whereas boats are very small and easily blend into the background, making them difficult to detect. Compared with YOLOv2, the mAP of DFS is improved by 12.82% and the detection accuracy by 13%, while the recall rate decreases slightly by 1%. Relative to the number of detected targets, DFS produces far fewer false positives (FPs) than YOLOv2, greatly reducing the false-positive rate. In addition, the average IoU of DFS is 11.84% higher than that of YOLOv2. DFS therefore offers clear advantages for detecting small targets and for processing large remote sensing images.
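As a concrete reading of the counts defined above, the precision, recall, and IoU metrics can be computed as in the following Python sketch; the box format and the example counts are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the evaluation metrics described above.
# A = detected positives (true positives), B = detected negatives
# (false positives), C = undetected positives (false negatives).

def precision_recall(A: int, B: int, C: int) -> tuple[float, float]:
    """Precision = A / (A + B); Recall = A / (A + C)."""
    precision = A / (A + B) if A + B > 0 else 0.0
    recall = A / (A + C) if A + C > 0 else 0.0
    return precision, recall

def iou(box1, box2) -> float:
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter) if inter > 0 else 0.0

if __name__ == "__main__":
    p, r = precision_recall(A=850, B=120, C=95)  # illustrative counts only
    print(f"precision={p:.3f}, recall={r:.3f}")
    print(f"IoU={iou((10, 10, 60, 60), (30, 30, 80, 80)):.3f}")
```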


Subjects
Deep Learning , Algorithms , Remote Sensing Technology
2.
Comput Intell Neurosci ; 2020: 4737969, 2020.
Article in English | MEDLINE | ID: mdl-33178256

ABSTRACT

BACKGROUND: Breast invasive carcinoma (BRCA) is not a single disease; each subtype has a distinct morphological structure. Although several computational methods have been proposed for breast cancer subtype identification, the specific interaction mechanisms of the genes involved in each subtype remain incompletely understood. Identifying and exploring the gene interaction mechanisms of each breast cancer subtype could have an important impact on personalized treatment for different patients. METHODS: We integrate the biological importance of genes, derived from gene regulatory networks, into differential expression analysis to obtain weighted differentially expressed genes (weighted DEGs). A gene with a high weight regulates more target genes and thus holds more biological importance. In addition, we constructed gene coexpression networks for the control and experiment groups, and the significantly different interaction structures led us to design a corresponding Gene Ontology (GO) enrichment based on gene coexpression networks (GOEGCN). GOEGCN performs a two-sided distinction analysis between the coexpression networks of the control and experiment groups, allowing us to study how modulated coexpressed gene couples affect biological functions at the GO level. RESULTS: We modeled binary classification with weighted DEGs for each subtype. Each binary classifier predicted unseen samples well, and the experimental results validated the effectiveness of our proposed approaches. The novel GO terms enriched by GOEGCN for the control and experiment groups of each subtype explain, to some extent, the specific biological function changes implied by the two-sided distinction of the coexpression network structures. CONCLUSION: The weighted DEGs carry biological importance derived from the gene regulatory network. Based on the weighted DEGs, five binary classifiers were learned and showed good performance on the "Sensitivity," "Specificity," "Accuracy," "F1," and "AUC" metrics. GOEGCN with weighted DEGs for the control and experiment groups produced novel GO enrichment results, and the newly enriched GO terms further unveil, to some extent, the changes in specific biological functions across the BRCA subtypes. The R code for this research is available at https://github.com/yxchspring/GOEGCN_BRCA_Subtypes.
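The weighting idea — boosting a gene's differential-expression signal by its regulatory importance — can be sketched as below. This is a hypothetical Python illustration (the authors' actual implementation is the R code at the GitHub link above); the toy network, fold-change values, and exact weighting rule are assumptions.

```python
# Hypothetical sketch of "weighted DEGs": scale each gene's
# differential-expression statistic by its regulatory importance,
# approximated here as the gene's out-degree (number of targets)
# in a gene regulatory network.
from collections import defaultdict

# toy regulatory network: regulator -> list of target genes
grn = {"TP53": ["MDM2", "CDKN1A", "BAX"], "ESR1": ["PGR", "GREB1"], "GATA3": ["ESR1"]}

# toy differential-expression results: gene -> |log2 fold change|
abs_log2fc = {"TP53": 1.2, "ESR1": 2.5, "GATA3": 0.9, "MDM2": 0.4}

out_degree = defaultdict(int)
for regulator, targets in grn.items():
    out_degree[regulator] = len(targets)

# weighted score: genes regulating more targets are weighted up
weighted = {g: fc * (1 + out_degree[g]) for g, fc in abs_log2fc.items()}
for gene, score in sorted(weighted.items(), key=lambda kv: -kv[1]):
    print(f"{gene}\t{score:.2f}")
```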


Subjects
Breast Neoplasms , Breast Neoplasms/genetics , Computational Biology , Female , Gene Expression Profiling , Gene Regulatory Networks , Humans , Machine Learning , RNA-Seq
3.
Sci Rep ; 10(1): 10624, 2020 06 30.
Article in English | MEDLINE | ID: mdl-32606385

ABSTRACT

A novel method is developed for predicting the stage of a cancer tissue based on the level of consistency between the co-expression patterns in a given sample and those of samples in a specific stage. The method rests on the observation that cancer samples of the same stage share common functionalities, as reflected by co-expression patterns that are distinct from those of samples in other stages. Test results reveal that our predictions are as good as, and potentially better than, the stages manually annotated by cancer pathologists. This co-expression-based capability enables us to study how the functionalities of cancer samples change as they evolve from early to advanced stages. New and exciting results are discovered through such functional analyses, offering new insights into which functions tend to be lost at which stage relative to the control tissues and, similarly, which new functions emerge as a cancer advances. To the best of our knowledge, this capability represents the first computational method for accurately staging a cancer sample. The R source code used in this study is available at GitHub (https://github.com/yxchspring/CECS).
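A plausible sketch of the consistency scoring is given below, assuming Pearson gene-gene correlations per stage and a quadratic agreement score between a sample's z-scored expression and those correlations. The scoring rule is an illustrative assumption, not the paper's exact formulation (the R source is at the GitHub link above), and the data are synthetic.

```python
# Hypothetical sketch of stage assignment by co-expression consistency.
# For each stage, a gene-gene Pearson correlation matrix is built from
# that stage's samples; a new sample is scored by how well the products
# of its z-scored expression deviations agree with those correlations.
import numpy as np

def stage_consistency(sample: np.ndarray, stage_expr: np.ndarray) -> float:
    """sample: (genes,); stage_expr: (samples, genes) for one stage."""
    mu = stage_expr.mean(axis=0)
    sd = stage_expr.std(axis=0) + 1e-8
    z = (sample - mu) / sd                        # z-score vs. stage profile
    corr = np.corrcoef(stage_expr, rowvar=False)  # gene-gene correlations
    np.fill_diagonal(corr, 0.0)
    # agreement between observed pairwise deviations and stage correlations
    return float(z @ corr @ z) / len(z)

rng = np.random.default_rng(0)
stages = {s: rng.normal(size=(30, 50)) for s in ["I", "II", "III", "IV"]}
query = stages["II"][0] + rng.normal(scale=0.1, size=50)  # synthetic sample
scores = {s: stage_consistency(query, X) for s, X in stages.items()}
print(max(scores, key=scores.get), scores)  # predicted stage and scores
```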


Subjects
Gene Expression , Neoplasms/pathology , Computational Biology , Databases, Genetic , Humans , Neoplasm Staging , Neoplasms/genetics , Prognosis , Software
4.
Front Physiol ; 11: 612928, 2020.
Article in English | MEDLINE | ID: mdl-33424635

ABSTRACT

BACKGROUND: Magnetic resonance imaging (MRI) has a wide range of applications in medical imaging. Recently, studies based on deep learning algorithms have demonstrated powerful processing capabilities for medical imaging data. Previous studies have mostly focused on common diseases, which usually have large datasets and lesions concentrated in the brain. In this paper, we used deep learning models to process MRI images to automatically differentiate the rare neuromyelitis optica spectrum disorder (NMOSD) from multiple sclerosis (MS), both of which are characterized by scattered, overlapping lesions. METHODS: We proposed a novel model structure that captures the essential information of 3D MRI images and converts it into lower dimensions. To empirically demonstrate the efficiency of our model, we first used a conventional three-dimensional (3D) model to classify T2-weighted fluid-attenuated inversion recovery (T2-FLAIR) images and showed that traditional 3D convolutional neural network (CNN) models lack the learning capacity to distinguish NMOSD from MS. We then compressed the 3D T2-FLAIR images with a two-view compression block so that 2D models of two different depths (18 and 34 layers) could be applied for disease diagnosis, and we also applied transfer learning by pre-training our model on the ImageNet dataset. RESULTS: Our models performed best when pre-trained on the ImageNet dataset: the average accuracies of the 34-layer and 18-layer models were 0.75 and 0.725, sensitivities were 0.707 and 0.708, and specificities were 0.759 and 0.719, respectively. Meanwhile, the traditional 3D CNN models lacked the learning capacity to distinguish NMOSD from MS. CONCLUSION: The novel CNN model we propose can automatically differentiate the rare NMOSD from MS and, in particular, performs better than traditional 3D CNN models. This indicates that our 3D compressed CNN models are applicable to diseases with small-scale datasets and scattered, overlapping lesions.
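The abstract does not specify the internals of the two-view compression block; the following PyTorch sketch assumes mean-intensity projections along two axes feeding a shared ImageNet-pretrained ResNet-18, which is one plausible reading rather than the paper's exact design.

```python
# Hypothetical sketch of a "two-view compression" front end: a 3D
# T2-FLAIR volume is collapsed into two 2D projections, each fed to a
# shared ImageNet-pretrained ResNet-18 trunk. The projection choice
# (mean intensity along two axes) is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

class TwoViewNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
        backbone.fc = nn.Identity()              # keep 512-d features
        self.backbone = backbone
        self.head = nn.Linear(512 * 2, num_classes)

    def forward(self, vol: torch.Tensor) -> torch.Tensor:
        # vol: (B, 1, D, H, W) -> two 2D views by mean projection
        axial = vol.mean(dim=2)                  # (B, 1, H, W)
        coronal = vol.mean(dim=3)                # (B, 1, D, W)
        coronal = F.interpolate(coronal, size=axial.shape[-2:])
        feats = [self.backbone(v.repeat(1, 3, 1, 1))  # grey -> 3 channels
                 for v in (axial, coronal)]
        return self.head(torch.cat(feats, dim=1))

model = TwoViewNet()
logits = model(torch.randn(2, 1, 24, 224, 224))  # NMOSD-vs-MS logits
print(logits.shape)  # torch.Size([2, 2])
```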

5.
Med Biol Eng Comput ; 57(1): 107-121, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30003400

ABSTRACT

With the advent of biomedical imaging technology, the number of captured and stored biomedical images in hospitals, imaging laboratories, and biomedical institutions is rapidly increasing. More robust biomedical image analysis technology is therefore needed to meet the requirements of diagnosing and classifying various diseases from biomedical images. However, current biomedical image classification methods and general non-biomedical image classifiers cannot extract sufficiently compact biomedical image features or capture the tiny differences between similar images showing different diseases of the same category. In this paper, we propose a novel fused convolutional neural network to build a more accurate and highly efficient classifier for biomedical images, one that combines shallow-layer and deep-layer features from the proposed deep neural network architecture. In our analysis, the shallow layers provided more detailed local features that could distinguish different diseases within the same category, while the deep layers conveyed higher-level semantic information used to classify diseases across categories. A detailed comparison of our approach with traditional classification algorithms and popular deep classifiers on several public biomedical image datasets showed the superior performance of our method for biomedical image classification. In addition, we evaluated our method on modality classification of medical images using the ImageCLEFmed dataset. Graphical abstract: the graphical abstract shows the fused deep convolutional neural network architecture proposed for biomedical image classification, in which the feature-fusion process from the shallow and deep layers can be clearly seen.
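A minimal PyTorch sketch of the shallow/deep fusion idea follows; the layer sizes and block structure are illustrative assumptions rather than the paper's exact architecture.

```python
# Minimal sketch of shallow/deep feature fusion: global-pooled features
# from an early (shallow) block and the final (deep) block are
# concatenated before classification. Layer sizes are illustrative.
import torch
import torch.nn as nn

class FusedCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.shallow = nn.Sequential(                 # detailed local features
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.deep = nn.Sequential(                    # high-level semantics
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(32 + 128, num_classes)    # fused feature vector

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = self.shallow(x)
        d = self.deep(s)
        fused = torch.cat([self.pool(s).flatten(1),
                           self.pool(d).flatten(1)], dim=1)
        return self.fc(fused)

print(FusedCNN()(torch.randn(4, 3, 64, 64)).shape)  # torch.Size([4, 10])
```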


Subjects
Diagnostic Imaging/classification , Image Interpretation, Computer-Assisted/methods , Neural Networks, Computer , Algorithms , Deep Learning , Humans
6.
Comput Methods Programs Biomed ; 158: 53-69, 2018 May.
Article in English | MEDLINE | ID: mdl-29544790

ABSTRACT

BACKGROUND AND OBJECTIVES: Traditional biomedical image retrieval methods, as well as content-based image retrieval (CBIR) methods originally designed for non-biomedical images, either describe an image only with pixel-level and low-level features or use deep features but still leave considerable room for improvement in both accuracy and efficiency. In this work, we propose a new approach that exploits deep learning to extract high-level, compact features from biomedical images. The deep feature extraction process leverages multiple hidden layers to capture the substantial feature structures of high-resolution images and to represent them at different levels of abstraction, improving the indexing and retrieval of biomedical images. METHODS: We exploit current popular multi-layered deep neural networks, namely stacked denoising autoencoders (SDAE) and convolutional neural networks (CNN), to represent the discriminative features of biomedical images by transferring the feature representations and parameters of deep neural networks pre-trained on another domain. Moreover, in order to index all images for finding similar reference images, we introduce preference learning to train a preference model for the query image, which outputs a similarity ranking of images from a biomedical image database. To the best of our knowledge, this paper is the first to introduce preference learning into biomedical image retrieval. RESULTS: We evaluate two powerful algorithms based on our proposed system and compare them with popular biomedical image indexing approaches and existing general-purpose image retrieval methods in detailed experiments on several well-known public biomedical image databases. Across different retrieval-performance criteria, the experimental results demonstrate that our algorithms outperform state-of-the-art techniques for indexing biomedical images. CONCLUSIONS: We propose a novel automated indexing system based on deep preference learning that characterizes biomedical images for developing computer-aided diagnosis (CAD) systems in healthcare. The proposed system shows outstanding indexing ability and high efficiency for biomedical image retrieval and can be used to collect and annotate high-resolution images in a biomedical database for further research and applications.
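Preference learning over deep features can be sketched as a pairwise RankSVM, as below. The deep features are assumed to be already extracted by a pre-trained CNN or SDAE (here replaced by random vectors), and the toy relevance structure and hyperparameters are assumptions; the paper's actual preference model may differ.

```python
# Hypothetical sketch of pairwise preference learning (RankSVM-style)
# for image retrieval: a linear SVM is trained on feature differences
# of (relevant, irrelevant) pairs, and the database is then ranked by
# the learned scoring function.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
db = rng.normal(size=(100, 128))          # stand-in for deep features
relevant = db[:10] + 0.5                  # toy relevance structure

# pairwise transform: each (relevant, irrelevant) pair yields two
# difference vectors with opposite labels
pairs, labels = [], []
for r in relevant:
    for irr in db[50:60]:
        pairs += [r - irr, irr - r]
        labels += [1, -1]

ranker = LinearSVC(C=1.0).fit(np.array(pairs), np.array(labels))
scores = db @ ranker.coef_.ravel()        # score every database image
ranking = np.argsort(-scores)             # most relevant first
print(ranking[:10])
```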


Subjects
Information Storage and Retrieval/methods , Machine Learning , Radiology Information Systems , Algorithms , Databases, Factual , Diagnostic Imaging , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer
7.
Comput Methods Programs Biomed ; 140: 283-293, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28254085

ABSTRACT

BACKGROUND AND OBJECTIVES: Highly accurate classification of biomedical images is essential for the clinical diagnosis of the numerous diseases identified from those images. Traditional classification methods that combine hand-crafted image feature descriptors with various classifiers cannot effectively improve accuracy or meet the high requirements of biomedical image classification. The same holds for artificial neural network models either trained directly on the limited biomedical images available as training data or used as black boxes to extract deep features learned on another, distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images, built on deep learning and transfer learning. METHODS: We first apply a domain-transferred deep convolutional neural network to build a deep model, and then develop an overall deep learning architecture trained in a supervised fashion on the raw pixels of the original biomedical images. Our model requires neither manual design of the feature space, nor a hand-picked feature-vector classifier, nor segmentation of specific detection objects and image patches, which are the main technical difficulties in traditional image classification methods. Moreover, we need not worry about obtaining large training sets of annotated biomedical images, affording parallel computing resources with GPUs, or waiting a long time to train a perfect deep model, which are the main obstacles to training deep neural networks for biomedical image classification noted in recent works. RESULTS: With a simple data augmentation method and fast convergence, our algorithm achieves the best accuracy rate and outstanding classification ability for biomedical images. We evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches. CONCLUSIONS: We propose a robust, automated, end-to-end classifier for biomedical images based on a domain-transferred deep convolutional neural network model; its highly reliable and accurate performance has been confirmed on several public biomedical image datasets.
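A minimal sketch of the domain-transfer recipe — fine-tuning an ImageNet-pretrained CNN end to end on raw biomedical images with simple augmentation — is shown below. The dataset path, backbone choice, and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical sketch: an ImageNet-pretrained CNN is fine-tuned end to
# end on raw biomedical images with simple data augmentation.
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),       # simple data augmentation
    transforms.RandomRotation(10),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# "biomedical_images/train" is a placeholder path, one class per folder
dataset = datasets.ImageFolder("biomedical_images/train", transform=augment)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                       # short fine-tuning run
    for images, targets in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
```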


Subjects
Diagnostic Imaging , Neural Networks, Computer , Machine Learning , Models, Theoretical
8.
Comput Intell Neurosci ; 2016: 6749325, 2016.
Article in English | MEDLINE | ID: mdl-27872639

ABSTRACT

We propose a new gist feature extraction method for building recognition and name the resulting feature the histogram-of-oriented-gradient-based gist (HOG-gist). The proposed method computes the normalized histograms of multi-orientation gradients of the same image at four different scales. The traditional approach uses Gabor filters with four angles at four different scales to extract orientation gist feature vectors from an image. Our method instead uses the normalized histograms of oriented gradients as the orientation gist feature vectors of the image. These HOG-based orientation gist vectors, combined with intensity and color gist feature vectors, form the proposed HOG-gist. In general, the HOG-gist contains four multi-orientation histograms (four orientation gist feature vectors), and its texture-description ability is stronger than that of the traditional gist using Gabor filters with four angles. Experimental results on the Sheffield Buildings Database verify the feasibility and effectiveness of the proposed HOG-gist.
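The multi-scale orientation histograms at the core of HOG-gist can be sketched in NumPy as follows, assuming 16 orientation bins, block-averaged downsampling for the four scales, and L1 normalization; these details, and the omission of the intensity and color gist vectors, are simplifying assumptions.

```python
# Minimal NumPy sketch of multi-scale orientation histograms: at four
# image scales, gradient orientations are histogrammed (weighted by
# gradient magnitude) and L1-normalized, then concatenated.
import numpy as np

def orientation_histogram(img: np.ndarray, bins: int = 16) -> np.ndarray:
    gy, gx = np.gradient(img.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)                     # range [-pi, pi]
    hist, _ = np.histogram(angle, bins=bins, range=(-np.pi, np.pi),
                           weights=magnitude)
    return hist / (hist.sum() + 1e-8)              # normalized histogram

def hog_gist(img: np.ndarray, scales=(1, 2, 4, 8)) -> np.ndarray:
    feats = []
    for s in scales:                               # four scales via
        h, w = img.shape[0] // s * s, img.shape[1] // s * s
        small = img[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        feats.append(orientation_histogram(small))  # block-averaged image
    return np.concatenate(feats)                   # 4 x bins feature vector

img = np.random.default_rng(0).random((128, 128))  # stand-in grey image
print(hog_gist(img).shape)                         # (64,)
```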


Subjects
Image Processing, Computer-Assisted , Orientation, Spatial , Pattern Recognition, Automated , Algorithms , Databases, Factual , Humans , Support Vector Machine