Results 1 - 3 of 3

1.
Sensors (Basel); 21(20), 2021 Oct 09.
Article in English | MEDLINE | ID: mdl-34695922

ABSTRACT

Prostate cancer is a significant cause of morbidity and mortality in the USA. In this paper, we develop a computer-aided diagnostic (CAD) system for automated grade group (GG) classification using digitized prostate biopsy specimens (PBSs). Our CAD system first classifies the Gleason pattern (GP) and then identifies the Gleason score (GS) and GG. The GP classification pipeline is based on a pyramidal deep learning system that uses three convolutional neural networks (CNNs) to produce both patch- and pixel-wise classifications. The analysis starts with sequential preprocessing steps: histogram equalization to adjust intensity values, followed by edge enhancement of the PBSs. The digitized PBSs are then divided into overlapping patches of three sizes, 100 × 100 (CNNS), 150 × 150 (CNNM), and 200 × 200 (CNNL) pixels, with 75% overlap. The three patch sizes represent the three pyramidal levels. This pyramidal technique extracts rich information: the larger patches capture more global context, while the smaller patches provide local detail. The patch-wise stage then assigns each overlapping patch one of the five GP categories (1 to 5). Majority voting converts these patch labels into a pixel-wise classification, producing a single label for every pixel covered by overlapping patches. This yields three label images of the same size as the original, one per pyramidal level, and majority voting is applied again across the three images to obtain a single final map. The proposed framework is trained, validated, and tested on 608 whole slide images (WSIs) of digitized PBSs. Diagnostic performance is evaluated with precision, recall, F1-score, accuracy, and macro- and weighted-averaged metrics. Among the three CNNs, CNNL achieves the best patch classification accuracy, 0.76, and the macro- and weighted-averaged metrics fall in the range 0.70-0.77. For GG classification, our CAD system reaches about 80% precision, 60% to 80% recall and F1-score, and around 94% accuracy and negative predictive value (NPV). To put these results in context, we compare our CNNs' patch-wise classification against the standard ResNet50 and VGG-16 architectures, and we compare our GG results with previous work.
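The pyramidal patch-and-vote procedure described in this abstract can be summarized in a short sketch. The following is a minimal illustration, not the authors' code: the patch sizes (100, 150, 200 px) and the 75% overlap come from the abstract, while `classify_patch`, `pixel_wise_from_patches`, and `fuse_pyramid_levels` are hypothetical names standing in for the trained CNNs and the two majority-voting steps.

```python
# Hypothetical sketch of the patch-wise to pixel-wise majority-voting step
# described in the abstract above. Patch sizes and the 75% overlap come from
# the abstract; classify_patch stands in for the trained CNN at each level.
import numpy as np

NUM_CLASSES = 5  # Gleason patterns 1-5

def pixel_wise_from_patches(image, patch_size, classify_patch):
    """Slide overlapping patches (75% overlap), vote per pixel, return a label map."""
    h, w = image.shape[:2]
    stride = max(1, patch_size // 4)          # 75% overlap => stride = 25% of patch
    votes = np.zeros((h, w, NUM_CLASSES), dtype=np.int32)
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patch = image[y:y + patch_size, x:x + patch_size]
            label = classify_patch(patch)     # CNN prediction in {0..4}
            votes[y:y + patch_size, x:x + patch_size, label] += 1
    return votes.argmax(axis=-1)              # majority vote per pixel

def fuse_pyramid_levels(label_maps):
    """Majority vote across the per-level label maps (CNN_S, CNN_M, CNN_L)."""
    stacked = np.stack(label_maps, axis=0)    # (levels, H, W)
    onehot = np.eye(NUM_CLASSES, dtype=np.int32)[stacked]
    return onehot.sum(axis=0).argmax(axis=-1)

# Usage with a dummy classifier, for illustration only:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((400, 400))
    dummy = lambda patch: int(patch.mean() * NUM_CLASSES) % NUM_CLASSES
    maps = [pixel_wise_from_patches(image, s, dummy) for s in (100, 150, 200)]
    final_map = fuse_pyramid_levels(maps)
    print(final_map.shape)                    # (400, 400)
```

The stride of one quarter of the patch size encodes the 75% overlap; every pixel covered by a patch receives one vote per patch label, and the three per-level label maps are then fused by a second per-pixel vote.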


Subjects
Deep Learning , Prostate , Biopsy , Humans , Male , Neoplasm Grading , Neural Networks, Computer , Prostate/diagnostic imaging
2.
Cancers (Basel); 14(23), 2022 Nov 29.
Article in English | MEDLINE | ID: mdl-36497378

ABSTRACT

In this work, we introduce an automated diagnostic system for Gleason grading and grade group (GG) classification using whole slide images (WSIs) of digitized prostate biopsy specimens (PBSs). Our system first classifies the Gleason pattern (GP) from PBSs and then identifies the Gleason score (GS) and GG. We develop a comprehensive deep learning (DL) grading pipeline for digitized PBSs and treat GP as a classification problem rather than as a segmentation problem, as most current studies do. A multilevel binary classification scheme is implemented to enhance the accuracy of GP labeling, and three levels of analysis (pyramidal levels) are used to extract different types of features. Each level has four shallow binary CNNs that together classify the five GP labels. Majority fusion is applied per pixel across a total of 39 labeled images to create the final GP output. The proposed framework is trained, validated, and tested on 3080 WSIs of PBSs. The diagnostic accuracy of each CNN is evaluated using precision (PR), recall (RE), and accuracy, documented by confusion matrices. The results demonstrate our system's potential for classifying all five GPs and, thus, the GG. GG performance is evaluated with PR and RE, both ranging from 50% to 92%. A comparison between our CNN architecture and a standard CNN (ResNet50) highlights our system's advantage, and our deep learning system achieves agreement with the consensus grade groups.
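As a rough illustration of how binary per-pixel decisions could be reduced to a single Gleason-pattern map and then fused by majority, consider the sketch below. The abstract does not specify the exact binary decomposition used by the four shallow CNNs per level, so a simple one-vs-rest reduction is assumed here; `binary_maps_to_labels` and `majority_fusion` are illustrative names, and the 39 fused maps only mirror the count mentioned above.

```python
# Hypothetical sketch: reduce several binary per-pixel score maps to one
# multiclass Gleason-pattern map, then fuse many labeled maps by majority.
# The binary decomposition is an assumption (one-vs-rest), not the authors' exact scheme.
import numpy as np

def binary_maps_to_labels(binary_score_maps):
    """binary_score_maps: (num_classes, H, W) per-pixel 'is this class?' scores."""
    return np.asarray(binary_score_maps).argmax(axis=0)      # strongest class per pixel

def majority_fusion(label_maps, num_classes=5):
    """Per-pixel majority vote over any number of labeled maps (e.g. the 39 above)."""
    stacked = np.stack(label_maps, axis=0)                    # (n_maps, H, W)
    counts = np.eye(num_classes, dtype=np.int32)[stacked].sum(axis=0)
    return counts.argmax(axis=-1)

# Illustration with random scores standing in for the shallow CNN outputs:
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    maps = [binary_maps_to_labels(rng.random((5, 64, 64))) for _ in range(39)]
    fused = majority_fusion(maps)
    print(fused.shape)                                        # (64, 64)
```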

3.
Cardiovasc Eng Technol; 13(1): 170-180, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34402037

ABSTRACT

PURPOSE: Drug-induced cardiac toxicity is a disruption of cardiomyocyte function that is highly correlated with the organization of subcellular structures. Cellular structures can be analyzed from microscopy imaging data, but conventional image analysis methods may miss structural deteriorations that are difficult to perceive. Here, we propose an image-based deep learning pipeline for the automated quantification of drug-induced structural deterioration using a 3D heart slice culture model. METHODS: In our deep learning pipeline, we quantify the structural deterioration induced by three anticancer drugs (doxorubicin, sunitinib, and Herceptin) with known adverse cardiac effects. The proposed deep learning framework is composed of three convolutional neural networks that process three different image sizes. The results of the three networks are combined to produce a classification map that shows the locations of structural deterioration in the input cardiac image. RESULTS: Our technique produces classification maps that accurately detect drug-induced structural deterioration at the pixel level. CONCLUSION: This technology could be widely applied to perform unbiased quantification of the structural effects of cardiotoxins on heart slices.
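As a minimal sketch of the fusion step described in METHODS, the snippet below combines per-pixel outputs from three networks (one per input size) into a single classification map. The fusion rule is an assumption, since the abstract does not state how the three outputs are combined; simple probability averaging with a 0.5 threshold is used for illustration, and `fuse_probability_maps` is a hypothetical name.

```python
# Assumed fusion of three per-pixel probability maps into one classification map.
# Averaging + thresholding is illustrative only; the abstract does not give the rule.
import numpy as np

def fuse_probability_maps(prob_maps, threshold=0.5):
    """prob_maps: list of (H, W) arrays of per-pixel deterioration probabilities."""
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return (mean_prob >= threshold).astype(np.uint8)    # 1 = structural deterioration

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    maps = [rng.random((256, 256)) for _ in range(3)]   # placeholder network outputs
    classification_map = fuse_probability_maps(maps)
    print(classification_map.shape, classification_map.sum())
```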


Subjects
Artificial Intelligence , Myocytes, Cardiac , Image Processing, Computer-Assisted/methods , Neural Networks, Computer