1.
J Digit Imaging ; 36(4): 1608-1623, 2023 08.
Article in English | MEDLINE | ID: mdl-37012446

ABSTRACT

Segmentation of tumor regions in H&E-stained slides is an important task for a pathologist when diagnosing different types of cancer, including oral squamous cell carcinoma (OSCC). Histological image segmentation is often constrained by the availability of labeled training data, since labeling histological images is a highly skilled, complex, and time-consuming task. Data augmentation strategies therefore become essential for training convolutional neural network models and overcoming overfitting when only a few training samples are available. This paper proposes a new data augmentation strategy, named Random Composition Augmentation (RCAug), to train fully convolutional networks (FCNs) to segment OSCC tumor regions in H&E-stained histological images. Given an input image and its corresponding label, a pipeline with a random composition of geometric, distortion, color-transfer, and generative image transformations is executed on the fly. Experimental evaluations were performed using an FCN-based method to segment OSCC regions with a set of different data augmentation transformations. By using RCAug, we improved the FCN-based segmentation method from 0.51 to 0.81 intersection-over-union (IoU) on a whole-slide image dataset and from 0.65 to 0.69 IoU on a tissue microarray image dataset.
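The abstract describes RCAug as a pipeline that draws a random composition of transformations and applies it to the image and its label together on the fly. The paper's exact transform set is not given here, so the following is a minimal sketch of the idea with a few illustrative, hypothetical transforms (flips, rotation, brightness jitter); geometric transforms are applied identically to image and mask, while intensity transforms touch only the image.

```python
import random
import numpy as np

# Hypothetical simplified transforms, each taking (image, mask) and returning
# the transformed pair. These stand in for the geometric, distortion,
# color-transfer, and generative transforms mentioned in the abstract.
def hflip(img, mask):
    # Geometric: mirror both image and mask horizontally.
    return img[:, ::-1], mask[:, ::-1]

def vflip(img, mask):
    # Geometric: mirror both image and mask vertically.
    return img[::-1, :], mask[::-1, :]

def rot90(img, mask):
    # Geometric: rotate both image and mask by 90 degrees.
    return np.rot90(img), np.rot90(mask)

def jitter_brightness(img, mask):
    # Intensity-style transform: perturb the image only; the mask is unchanged.
    return np.clip(img * random.uniform(0.8, 1.2), 0, 255), mask

TRANSFORMS = [hflip, vflip, rot90, jitter_brightness]

def random_composition(img, mask, k=2, rng=random):
    """Apply a randomly chosen composition of k transforms on the fly,
    in the spirit of RCAug (a sketch, not the paper's implementation)."""
    for t in rng.sample(TRANSFORMS, k):
        img, mask = t(img, mask)
    return img, mask
```

Because a fresh composition is sampled for every training example, the network never sees the same augmented slide twice, which is what lets a small labeled set behave like a much larger one.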


Subjects
Squamous Cell Carcinoma , Mouth Neoplasms , Humans , Image Processing, Computer-Assisted/methods , Squamous Cell Carcinoma/diagnostic imaging , Mouth Neoplasms/diagnostic imaging , Neural Networks, Computer
2.
Entropy (Basel) ; 26(1)2023 Dec 28.
Article in English | MEDLINE | ID: mdl-38248160

ABSTRACT

In this work, a computational scheme is proposed to identify the main combinations of handcrafted descriptors and deep-learned features capable of classifying histological images stained with hematoxylin and eosin. The handcrafted descriptors were representatives of multiscale and multidimensional fractal techniques (fractal dimension, lacunarity, and percolation) applied to quantify the histological images, with the corresponding representations obtained via explainable artificial intelligence (xAI) approaches. The deep-learned features were obtained from different convolutional neural networks (DenseNet-121, EfficientNet-b2, Inception-V3, ResNet-50, and VGG-19). The descriptors were investigated through different associations. The most relevant combinations, defined through a ranking algorithm, were analyzed via a heterogeneous ensemble of classifiers built from the support vector machine, naive Bayes, random forest, and K-nearest neighbors algorithms. The proposed scheme was applied to histological samples representative of breast cancer, colorectal cancer, oral dysplasia, and liver tissue. The best results were accuracy rates of 94.83% to 100%, with the identification of pattern ensembles for classifying multiple histological images. The computational scheme indicated solutions that explore a reduced number of features (at most 25 descriptors) while achieving better performance than values reported in the literature. The information presented in this study is useful for complementing and improving the development of computer-aided diagnosis focused on histological images.
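The heterogeneous ensemble named in the abstract combines four classifier families over a compact feature matrix (at most 25 selected descriptors per image). A minimal sketch of that arrangement, using scikit-learn's soft-voting ensemble and a synthetic stand-in for the fractal-plus-deep feature matrix (the actual descriptors and ranking step are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def build_ensemble():
    """Heterogeneous soft-voting ensemble over the four classifier
    algorithms named in the abstract: SVM, naive Bayes, random forest,
    and K-nearest neighbors (a sketch, not the paper's exact setup)."""
    return VotingClassifier(
        estimators=[
            ("svm", SVC(probability=True, random_state=0)),
            ("nb", GaussianNB()),
            ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
            ("knn", KNeighborsClassifier(n_neighbors=5)),
        ],
        voting="soft",  # average class probabilities across members
    )

# Stand-in feature matrix: rows are images, columns are the (at most 25)
# selected descriptors (fractal + deep-learned); labels are tissue classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 25))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

clf = build_ensemble().fit(X, y)
train_acc = clf.score(X, y)
```

Soft voting averages the members' predicted class probabilities, so a classifier that is confident on a given tissue type can outweigh members that are uncertain, which is one common motivation for mixing classifier families in this way.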
