Results 1 - 3 of 3
1.
Comput Intell Neurosci; 2022: 2836486, 2022.
Article in English | MEDLINE | ID: mdl-35449738

ABSTRACT

The information-processing and storage capabilities of modern computers have improved greatly, which in turn supports neural network technology. Convolutional neural networks offer strong representational power in computer vision tasks such as image recognition. Targeting the problem of recognizing and classifying highly similar images within a specific domain, this paper proposes a high-similarity image recognition and classification algorithm built on convolutional neural networks. First, we extract image texture features, train on image sets of different types and resolutions, and determine the optimal texture-difference parameter values. Second, we decompose each image into subimages according to texture difference, extract the energy features of each subimage, and perform classification. The input image features are then converted into a one-dimensional vector through five alternating convolution layers and three pooling layers. On this basis, convolution kernels of different sizes extract features at different scales, and a further convolution fuses these multi-scale features. Finally, by increasing the number of training iterations and the amount of training data, the network parameters are continuously optimized to improve classification accuracy on both the training and test sets. The trained weights are verified, and the convolutional neural network model with the highest classification accuracy is retained. In the experiments, two image data sets, gems and apples, are used to classify gems and to determine the origin of apples. The results show that the average identification accuracy of the algorithm exceeds 90%.
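The multi-scale fusion step, where kernels of different sizes run in parallel and a further convolution merges the resulting feature maps, can be sketched in a few lines. Below is a minimal PyTorch illustration of that idea; the kernel sizes, channel counts, and classifier head are assumptions for demonstration, not the architecture reported in the abstract.

```python
# Minimal sketch of multi-scale convolutional feature fusion.
# Kernel sizes, channel counts, and class count are illustrative
# guesses, not the paper's exact architecture.
import torch
import torch.nn as nn

class MultiScaleFusionCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Parallel branches with different kernel sizes capture
        # features at different receptive fields.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=k, padding=k // 2),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            for k in (3, 5, 7)
        ])
        # A 1x1 convolution fuses the concatenated multi-scale maps,
        # mirroring the "convolution for feature fusion" step.
        self.fuse = nn.Conv2d(16 * 3, 32, kernel_size=1)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.head(torch.relu(self.fuse(feats)))

# Usage: logits = MultiScaleFusionCNN()(torch.randn(4, 3, 64, 64))
```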


Subject(s)
Algorithms; Neural Networks, Computer
2.
IEEE Trans Med Imaging; 41(9): 2207-2216, 2022 09.
Article in English | MEDLINE | ID: mdl-35286257

ABSTRACT

Benefiting from the powerful expressive capability of graphs, graph-based approaches have been widely applied to multi-modal medical data and have achieved impressive performance in various biomedical applications. For disease prediction tasks, most existing graph-based methods define the graph manually from a specified modality (e.g., demographic information) and then integrate the other modalities to obtain the patient representation via Graph Representation Learning (GRL). However, constructing an appropriate graph in advance is not simple for these methods, and the complex correlations between modalities are ignored. These factors inevitably prevent such methods from providing sufficient information about the patient's condition for a reliable diagnosis. To this end, we propose an end-to-end Multi-modal Graph Learning framework (MMGL) for disease prediction with multi-modality. To effectively exploit the rich information across the modalities associated with the disease, modality-aware representation learning aggregates the features of each modality by leveraging the correlation and complementarity between modalities. Furthermore, instead of defining the graph manually, the latent graph structure is captured through adaptive graph learning, which can be jointly optimized with the prediction model and thus reveals the intrinsic connections among samples. Our model is also applicable to inductive learning on unseen data. Extensive experiments on two disease prediction tasks demonstrate that the proposed MMGL achieves more favorable performance. The code of MMGL is available at https://github.com/SsGood/MMGL.
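The adaptive graph learning component, deriving the latent adjacency from patient features rather than fixing it in advance, can be sketched as a learnable similarity module. The following PyTorch snippet is a hedged illustration; the projection dimension, cosine similarity, and ReLU sparsification are assumptions, not MMGL's exact formulation (see the linked repository for the authors' code).

```python
# Hedged sketch of adaptive graph learning: the adjacency is computed
# from learnable projections of patient embeddings and can be trained
# end-to-end with the disease-prediction loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGraphLearner(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Learnable metric space in which similarity is measured.
        self.proj = nn.Linear(in_dim, hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_patients, in_dim) fused multi-modal features.
        h = F.normalize(self.proj(x), dim=-1)
        sim = h @ h.t()        # pairwise cosine similarity
        adj = F.relu(sim)      # keep only positive links (sparsification)
        # Row-normalize so the adjacency acts as an averaging operator
        # in downstream graph representation learning.
        return adj / adj.sum(dim=-1, keepdim=True).clamp_min(1e-8)

# One graph-convolution step with the learned adjacency would then be
# x_next = AdaptiveGraphLearner(d)(x) @ (x @ weight), optimized jointly
# with the prediction model.
```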


Subject(s)
Machine Learning; Humans
3.
IEEE J Biomed Health Inform; 26(2): 638-647, 2022 02.
Article in English | MEDLINE | ID: mdl-34990372

ABSTRACT

To bridge the gap between the source and target domains in unsupervised domain adaptation (UDA), the most common strategy focuses on matching the marginal distributions in the feature space through adversarial learning. However, such category-agnostic global alignment fails to exploit the class-level joint distributions, leaving the aligned distribution less discriminative. To address this issue, we propose a novel Margin Preserving Self-paced Contrastive Learning (MPSCL) model for cross-modal medical image segmentation. Unlike the conventional construction of contrastive pairs in contrastive learning, domain-adaptive category prototypes are used to constitute the positive and negative sample pairs. Guided by progressively refined semantic prototypes, a novel margin-preserving contrastive loss boosts the discriminability of the embedded representation space. To strengthen the supervision for contrastive learning, more informative pseudo-labels are generated in the target domain in a self-paced way, benefiting category-aware distribution alignment for UDA. Furthermore, domain-invariant representations are learned through joint contrastive learning between the two domains. Extensive experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance and outperforms a wide variety of state-of-the-art methods by a large margin.
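The core loss, a contrastive objective over domain-adaptive category prototypes with a preserved margin, can be approximated as a margin-adjusted cross-entropy over prototype similarities. The sketch below is an illustrative reading of the abstract; the margin form, temperature, and normalization are assumptions rather than the paper's exact loss.

```python
# Hedged sketch of a prototype-based contrastive loss with a margin,
# in the spirit of MPSCL: positives/negatives are category prototypes
# rather than augmented views. Margin and temperature values are
# illustrative assumptions.
import torch
import torch.nn.functional as F

def margin_prototype_contrastive_loss(
    feats: torch.Tensor,       # (N, D) pixel/sample embeddings
    labels: torch.Tensor,      # (N,) class ids (possibly pseudo-labels)
    prototypes: torch.Tensor,  # (C, D) domain-adaptive category prototypes
    margin: float = 0.2,
    temperature: float = 0.1,
) -> torch.Tensor:
    feats = F.normalize(feats, dim=-1)
    prototypes = F.normalize(prototypes, dim=-1)
    logits = feats @ prototypes.t() / temperature  # (N, C) similarities
    # Subtract a margin from the positive (same-class) logit so each
    # embedding must beat every negative prototype by at least `margin`.
    pos_mask = F.one_hot(labels, num_classes=prototypes.size(0)).to(logits.dtype)
    logits = logits - pos_mask * (margin / temperature)
    return F.cross_entropy(logits, labels)
```

In a self-paced scheme, only target-domain pixels whose pseudo-labels exceed a growing confidence threshold would contribute to this loss, which matches the abstract's description of progressively refined supervision.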


Subject(s)
Heart; Semantics; Humans