Results 1 - 7 of 7
1.
Bioengineering (Basel) ; 11(6)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38927865

ABSTRACT

Prostate cancer is a significant health concern with high mortality rates and substantial economic impact. Early detection plays a crucial role in improving patient outcomes. This study introduces a non-invasive computer-aided diagnosis (CAD) system that leverages intravoxel incoherent motion (IVIM) parameters for the detection and diagnosis of prostate cancer (PCa). IVIM imaging enables the differentiation of water molecule diffusion within capillaries and outside vessels, offering valuable insights into tumor characteristics. The proposed approach uses a two-step segmentation pipeline built on three U-Net architectures to extract tumor-containing regions of interest (ROIs) from the segmented images. The performance of the CAD system is thoroughly evaluated, considering the optimal classifier and IVIM parameters for differentiation and comparing the diagnostic value of IVIM parameters with the commonly used apparent diffusion coefficient (ADC). The results demonstrate that combining central zone (CZ) and peripheral zone (PZ) features with a Random Forest Classifier (RFC) yields the best performance. The CAD system achieves an accuracy of 84.08% and a balanced accuracy of 82.60%. This combination shows high sensitivity (93.24%) and reasonable specificity (71.96%), along with good precision (81.48%) and F1 score (86.96%). These findings highlight the effectiveness of the proposed CAD system in accurately segmenting and diagnosing PCa. This study represents a significant advancement in non-invasive methods for the early detection and diagnosis of PCa, showcasing the potential of IVIM parameters in combination with machine learning techniques. The developed solution has the potential to substantially improve PCa diagnosis, leading to better patient outcomes and reduced healthcare costs.
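As a minimal sketch of the classification stage described in this abstract, the snippet below trains a Random Forest on per-ROI features and reports the same metrics the study quotes. The feature matrix, labels, and feature names (IVIM parameters from the CZ and PZ) are placeholders, not the study's data or exact configuration.

```python
# Minimal sketch: Random Forest classification of IVIM-derived ROI features.
# X and y are placeholders standing in for per-ROI IVIM features (CZ + PZ)
# and binary PCa labels; they are not the study's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                             recall_score, precision_score, f1_score)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))          # placeholder IVIM features per ROI
y = rng.integers(0, 2, size=200)       # placeholder labels (1 = PCa)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=5)

print("accuracy          ", accuracy_score(y, y_pred))
print("balanced accuracy ", balanced_accuracy_score(y, y_pred))
print("sensitivity       ", recall_score(y, y_pred))               # positive-class recall
print("specificity       ", recall_score(y, y_pred, pos_label=0))  # negative-class recall
print("precision         ", precision_score(y, y_pred))
print("F1 score          ", f1_score(y, y_pred))
```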

2.
Front Med (Lausanne) ; 11: 1380405, 2024.
Article in English | MEDLINE | ID: mdl-38741771

ABSTRACT

Introduction: Non-melanoma skin cancer, comprising basal cell carcinoma (BCC), squamous cell carcinoma (SCC), and intraepidermal carcinoma (IEC), has the highest incidence rate among skin cancers. Intelligent decision support systems may address the limited number of subject experts and help mitigate the disparity in health services between urban centers and remote areas. Method: In this research, we propose a transformer-based model for the segmentation of histopathology images not only into inflammation and cancers such as BCC, SCC, and IEC, but also into the skin tissues and boundaries that are important in decision-making. Accurate segmentation of these tissue types will ultimately lead to accurate detection and classification of non-melanoma skin cancer. Segmenting by tissue type and visualizing the result before classification enhances the trust of pathologists and doctors, since it mirrors how most pathologists approach this problem. Visualizing the model's confidence in its predictions through uncertainty maps also distinguishes this study from most deep learning methods. Results: The proposed system is evaluated on a publicly available dataset. It demonstrated good performance, with an F1 score of 0.908, a mean intersection over union (mIoU) of 0.653, and an average accuracy of 83.1%, indicating that the system can be used successfully as a decision support system and has the potential to mature into a fully automated system. Discussion: This study is an attempt to automate the segmentation of the most common non-melanoma skin cancers using a transformer-based deep learning technique applied to histopathology skin images. The highly accurate segmentation and visual representation of histopathology images by tissue type imply that the system can be used for routine skin pathology tasks, including the detection and classification of cancer and other anomalies and the measurement of surgical margins in cancer cases.
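One common way to produce per-pixel uncertainty maps of the kind mentioned above is Monte Carlo dropout: keep dropout active at inference, run several stochastic forward passes, and visualise the predictive entropy. The sketch below assumes a generic PyTorch segmentation model with dropout layers that outputs per-class logits; it is not the paper's transformer architecture or its exact uncertainty method.

```python
# Minimal sketch: per-pixel uncertainty via Monte Carlo dropout for a
# semantic-segmentation model (generic PyTorch model, not the paper's transformer).
import torch
import torch.nn.functional as F

def mc_dropout_uncertainty(model, image, n_passes=20):
    """Return (mean class probabilities, per-pixel predictive entropy)."""
    model.eval()
    # Re-enable dropout layers so the forward passes stay stochastic.
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()
    probs = []
    with torch.no_grad():
        for _ in range(n_passes):
            logits = model(image)                  # shape: (1, n_classes, H, W)
            probs.append(F.softmax(logits, dim=1))
    mean_p = torch.stack(probs).mean(dim=0)        # (1, n_classes, H, W)
    entropy = -(mean_p * torch.log(mean_p + 1e-8)).sum(dim=1)  # (1, H, W)
    return mean_p, entropy                         # high entropy = uncertain pixel
```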

3.
Cancers (Basel) ; 14(23)2022 Nov 29.
Article in English | MEDLINE | ID: mdl-36497378

ABSTRACT

In this work, we introduce an automated diagnostic system for Gleason grading and grade group (GG) classification using whole slide images (WSIs) of digitized prostate biopsy specimens (PBSs). Our system first classifies the Gleason pattern (GP) from PBSs and then identifies the Gleason score (GS) and GG. We developed a comprehensive deep learning (DL)-based grading pipeline for digitized PBSs that treats GP identification as a classification problem, in contrast to current research studies, which treat it as a segmentation problem. A multilevel binary classification scheme was implemented to enhance the segmentation accuracy for GP. We also created three levels of analysis (pyramidal levels) to extract different types of features. Each level has four shallow binary CNNs to classify the five GP labels. Majority fusion is then applied at each pixel, which has a total of 39 labeled images, to create the final GP output. The proposed framework is trained, validated, and tested on 3080 WSIs of PBSs. The overall diagnostic accuracy of each CNN is evaluated using several metrics: precision (PR), recall (RE), and accuracy, documented by confusion matrices. The results demonstrate our system's potential for classifying all five GPs and, thus, GGs. The overall GG performance is evaluated using two metrics, PR and RE, with results ranging from 50% to 92% for RE and from 50% to 92% for PR. A comparison between our CNN architecture and a standard CNN (ResNet50) also highlights our system's advantage. Finally, our deep learning system achieved agreement with the consensus grade groups.
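The pixel-wise majority fusion step can be illustrated as follows: given a stack of label maps produced by the individual binary CNNs (the abstract mentions 39 labeled images per pixel), each pixel takes the most frequent label. The NumPy helper below is a sketch under that assumption, not the authors' implementation.

```python
# Minimal sketch: pixel-wise majority-vote fusion of several Gleason-pattern
# label maps into a single final GP map.
import numpy as np

def majority_fusion(label_maps):
    """label_maps: array of shape (n_maps, H, W) with integer GP labels.
    Returns an (H, W) map where each pixel takes the most frequent label."""
    label_maps = np.asarray(label_maps)
    n_maps, h, w = label_maps.shape
    n_labels = int(label_maps.max()) + 1
    # Count votes per label for every pixel, then pick the label with most votes.
    votes = np.zeros((n_labels, h, w), dtype=np.int32)
    for lbl in range(n_labels):
        votes[lbl] = (label_maps == lbl).sum(axis=0)
    return votes.argmax(axis=0)

# Example: fuse 39 random 5-label maps (placeholder data).
maps = np.random.randint(0, 5, size=(39, 64, 64))
fused = majority_fusion(maps)
```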

4.
Comput Biol Med ; 151(Pt A): 106222, 2022 12.
Article in English | MEDLINE | ID: mdl-36343406

ABSTRACT

The high priority of epidemiological examination of skin lesions necessitates well-performing, efficient classification and segmentation models. In the past two decades, various algorithms, especially machine/deep learning-based methods, have replicated the classical visual examination to accomplish these tasks. These automated models demand clearly visible lesions, with little background and noise affecting the region of interest. However, even after the proposal of these advanced techniques, gaps remain in achieving the desired efficacy. Recently, many preprocessors have been proposed to enhance lesion contrast, which further aids skin lesion segmentation and classification. Metaheuristics are methods used to solve search-space optimisation problems. We propose a novel hybrid metaheuristic, the Differential Evolution-Bat Algorithm (DE-BA), which estimates the parameters used in a brightness-preserving contrast-stretching transformation function. For extensive experimentation, we tested the proposed algorithm on various publicly available databases such as ISIC 2016, 2017, 2018 and PH2, and validated the proposed model against some state-of-the-art existing segmentation models. The tabular and visual comparison of the results shows that DE-BA as a preprocessor positively enhances the segmentation results.
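To illustrate the general idea of a metaheuristic tuning a contrast-stretching preprocessor, the sketch below uses plain differential evolution from SciPy to pick the two cut-offs of a simple linear stretch that maximise image standard deviation. The actual DE-BA hybrid and the brightness-preserving transformation of the paper are not reproduced here; the objective function and parameterisation are illustrative assumptions.

```python
# Minimal sketch: a metaheuristic (plain differential evolution, not the paper's
# DE-BA hybrid) tuning the (low, high) cut-offs of a linear contrast stretch so
# that the stretched image has maximal standard deviation (a simple contrast proxy).
import numpy as np
from scipy.optimize import differential_evolution

def stretch(img, low, high):
    out = (img - low) / max(high - low, 1e-6)
    return np.clip(out, 0.0, 1.0)

def fit_stretch(img):
    def neg_contrast(params):
        low, high = params
        if high <= low:                       # invalid ordering -> penalise heavily
            return 1e9
        return -stretch(img, low, high).std()
    bounds = [(0.0, 1.0), (0.0, 1.0)]         # intensities assumed normalised to [0, 1]
    result = differential_evolution(neg_contrast, bounds, seed=0, maxiter=50)
    return result.x                           # best (low, high) cut-offs found

img = np.random.rand(128, 128)                # placeholder grayscale lesion image
low, high = fit_stretch(img)
enhanced = stretch(img, low, high)
```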


Subjects
Melanoma , Skin Diseases , Skin Neoplasms , Humans , Dermoscopy/methods , Melanoma/diagnosis , Skin Neoplasms/diagnosis , Heuristics , Algorithms , Skin Diseases/diagnostic imaging , Image Processing, Computer-Assisted/methods
5.
Bioengineering (Basel) ; 9(10)2022 Oct 07.
Article in English | MEDLINE | ID: mdl-36290500

ABSTRACT

Gliomas are the most common type of primary brain tumors and one of the leading causes of cancer mortality worldwide. Accurate grading of gliomas is of immense importance for administering proper treatment plans. In this paper, we develop a comprehensive non-invasive multimodal magnetic resonance (MR)-based computer-aided diagnostic (CAD) system to precisely differentiate between different grades of gliomas (Grades I, II, III, and IV). A total of 99 patients with gliomas (M = 49, F = 50, age range = 1-79 years) were included after providing their informed consent to participate in this study. The proposed imaging-based glioma grading (GG-CAD) system utilizes three different MR imaging modalities, namely contrast-enhanced T1-MR, T2-MR known as fluid-attenuated inversion-recovery (FLAIR), and diffusion-weighted (DW-MR) imaging, to extract the following imaging features: (i) morphological features based on constructing the histogram of oriented gradients (HOG) and estimating the glioma volume, (ii) first- and second-order textural features obtained by constructing the histogram, gray-level run length matrix (GLRLM), and gray-level co-occurrence matrix (GLCM), and (iii) functional features obtained by estimating voxel-wise apparent diffusion coefficients (ADC) and the contrast-enhancement slope. These features are then integrated and processed using a Gini impurity-based selection approach to find the optimal set of significant features. The reduced significant features are then fed to a multi-layer perceptron artificial neural network (MLP-ANN) classification model to obtain the final diagnosis of a glioma tumor as Grade I, II, III, or IV. The GG-CAD system was evaluated on the enrolled 99 gliomas (Grade I = 13, Grade II = 22, Grade III = 22, and Grade IV = 42) using leave-one-subject-out (LOSO) and stratified k-fold (k = 5 and 10) cross-validation approaches. The GG-CAD achieved 0.96 ± 0.02 quadratic-weighted Cohen's kappa and 95.8% ± 1.9% overall diagnostic accuracy at LOSO and an outstanding diagnostic performance at k = 10 and 5. Alternative classifiers, including RFs and SVMlin, produced inferior results compared with the proposed MLP-ANN GG-CAD system. These findings demonstrate the feasibility of the proposed CAD system as a novel tool to objectively characterize gliomas using the comprehensively extracted and selected imaging features. The developed GG-CAD system holds promise as a non-invasive diagnostic tool for precise grading of gliomas.
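A hedged sketch of the feature-selection and classification stages described above: Gini-impurity importances from a random forest select the most informative radiomic features, which are then fed to a multilayer-perceptron classifier. The feature matrix, grade labels, and importance threshold are placeholders, not the study's data or exact pipeline.

```python
# Minimal sketch: Gini-impurity-based feature selection followed by an
# MLP-ANN classifier for four glioma grades (placeholder data throughout).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(99, 300))                 # 99 subjects x 300 radiomic features (placeholder)
y = rng.integers(1, 5, size=99)                # grades I-IV encoded as labels 1..4

pipeline = make_pipeline(
    SelectFromModel(RandomForestClassifier(n_estimators=300, random_state=0),
                    threshold="median"),        # keep features above median Gini importance
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
scores = cross_val_score(pipeline, X, y, cv=5)
print("mean CV accuracy:", scores.mean())
```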

6.
Sensors (Basel) ; 21(14)2021 Jul 20.
Article in English | MEDLINE | ID: mdl-34300667

ABSTRACT

Renal cell carcinoma (RCC) is the most common and a highly aggressive type of malignant renal tumor. In this manuscript, we aim to identify and integrate the optimal discriminating morphological, textural, and functional features that best describe the malignancy status of a given renal tumor. The integrated discriminating features may lead to the development of a novel comprehensive renal cancer computer-assisted diagnosis (RC-CAD) system with the ability to discriminate between benign and malignant renal tumors and specify the malignancy subtypes for optimal medical management. Informed consent was obtained from a total of 140 biopsy-proven patients who participated in the study (male = 72 and female = 68, age range = 15 to 87 years). There were 70 patients who had RCC (40 clear cell RCC (ccRCC), 30 non-clear cell RCC (nccRCC)), while the other 70 had benign angiomyolipoma tumors. Contrast-enhanced computed tomography (CE-CT) images were acquired, and renal tumors were segmented for all patients to allow the extraction of discriminating imaging features. The RC-CAD system incorporates the following major steps: (i) applying a new parametric spherical harmonic technique to estimate the morphological features, (ii) modeling a novel angular invariant gray-level co-occurrence matrix to estimate the textural features, and (iii) constructing wash-in/wash-out slopes to estimate the functional features by quantifying enhancement variations across different CE-CT phases. These features were subsequently combined and processed using a two-stage multilayer perceptron artificial neural network (MLP-ANN) classifier to classify the renal tumor as benign or malignant and to identify the malignancy subtype as well. Using the combined features and a leave-one-subject-out cross-validation approach, the developed RC-CAD system achieved a sensitivity of 95.3% ± 2.0%, a specificity of 99.9% ± 0.4%, and a Dice similarity coefficient of 0.98 ± 0.01 in differentiating malignant from benign tumors, as well as an overall accuracy of 89.6% ± 5.0% in discriminating ccRCC from nccRCC. The diagnostic abilities of the developed RC-CAD system were further validated using a randomly stratified 10-fold cross-validation approach. The results obtained using the proposed MLP-ANN classification model outperformed other machine learning classifiers (e.g., support vector machine, random forests, relational functional gradient boosting, etc.). Hence, integrating morphological, textural, and functional features enhances the diagnostic performance, making the proposed system a reliable noninvasive diagnostic tool for renal tumors.
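The two-stage classification scheme described above can be sketched as follows: a first MLP separates benign from malignant tumors, and a second MLP, trained only on the malignant cases, separates ccRCC from nccRCC; leave-one-subject-out evaluation is shown for the first stage. The feature vectors and labels below are placeholders, not the study's CE-CT features.

```python
# Minimal sketch: two-stage MLP-ANN classification (benign vs. malignant, then
# ccRCC vs. nccRCC) with leave-one-out evaluation of the first stage.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(140, 50))                 # 140 tumors x 50 combined features (placeholder)
y_malignant = rng.integers(0, 2, size=140)     # 1 = malignant, 0 = benign (placeholder)
y_subtype = rng.integers(0, 2, size=140)       # 1 = ccRCC, 0 = nccRCC (meaningful only if malignant)

# Stage 1: benign vs. malignant, evaluated leave-one-subject-out (slow but simple).
stage1 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
acc = cross_val_score(stage1, X, y_malignant, cv=LeaveOneOut()).mean()
print("stage-1 LOSO accuracy:", acc)

# Stage 2: ccRCC vs. nccRCC, trained on the malignant cases only.
mal = y_malignant == 1
stage2 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
stage2.fit(X[mal], y_subtype[mal])
```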


Subjects
Angiomyolipoma , Carcinoma, Renal Cell , Kidney Neoplasms , Adolescent , Adult , Aged , Aged, 80 and over , Carcinoma, Renal Cell/diagnostic imaging , Diagnosis, Computer-Assisted , Diagnosis, Differential , Female , Humans , Kidney Neoplasms/diagnostic imaging , Male , Middle Aged , Young Adult
7.
Sensors (Basel) ; 21(11)2021 May 25.
Article in English | MEDLINE | ID: mdl-34070290

ABSTRACT

Background and Objective: The use of computer-aided detection (CAD) systems can help radiologists make objective decisions and reduce the dependence on invasive techniques. In this study, a CAD system that detects and identifies prostate cancer from diffusion-weighted imaging (DWI) is developed. Methods: The proposed system first uses non-negative matrix factorization (NMF) to integrate three different types of features for the accurate segmentation of prostate regions. Then, discriminatory features in the form of apparent diffusion coefficient (ADC) volumes are estimated from the segmented regions. The ADC maps that constitute these volumes are labeled by a radiologist to identify the ADC maps with malignant or benign tumors. Finally, transfer learning is used to fine-tune two different previously trained convolutional neural network (CNN) models (AlexNet and VGGNet) for detecting and identifying prostate cancer. Results: Multiple experiments were conducted to evaluate the accuracy of different CNN models using DWI datasets acquired at nine distinct b-values, including both high and low b-values. The average accuracy of AlexNet over the nine b-values was 89.2 ± 1.5%, with average sensitivity and specificity of 87.5 ± 2.3% and 90.9 ± 1.9%, respectively. These results improved with the use of the deeper CNN model (VGGNet): its average accuracy was 91.2 ± 1.3%, with sensitivity and specificity of 91.7 ± 1.7% and 90.1 ± 2.8%, respectively. Conclusions: The results of the conducted experiments emphasize the feasibility and accuracy of the developed system and the improvement in accuracy achieved with the deeper CNN.
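Transfer learning of the kind described here can be sketched with torchvision (recent versions): a pretrained VGG network has its final layer replaced for the two-class (benign vs. malignant) task, and only that layer is trained at first. The ADC-volume data pipeline is omitted; this is a generic fine-tuning skeleton under those assumptions, not the authors' exact configuration.

```python
# Minimal sketch: fine-tuning a pretrained VGG16 for a two-class
# (benign vs. malignant) task, as a generic stand-in for the transfer-learning
# step described above. The ADC-map data loading is intentionally omitted.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                   # freeze all pretrained weights
model.classifier[6] = nn.Linear(4096, 2)          # replace final layer with a 2-class head

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimisation step on a batch of (N, 3, 224, 224) images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```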


Subjects
Diffusion Magnetic Resonance Imaging , Prostatic Neoplasms , Algorithms , Humans , Machine Learning , Male , Neural Networks, Computer , Prostatic Neoplasms/diagnostic imaging , Reproducibility of Results , Sensitivity and Specificity