Results 1 - 5 of 5
1.
Diagnostics (Basel); 14(9), 2024 May 01.
Article in English | MEDLINE | ID: mdl-38732368

ABSTRACT

BACKGROUND: At the time of cancer diagnosis, it is crucial to accurately classify malignant gastric tumors and to estimate patients' likelihood of survival. OBJECTIVE: This study aims to investigate the feasibility of identifying and applying a new feature extraction technique to predict the survival of gastric cancer patients. METHODS: A retrospective dataset including the computed tomography (CT) images of 135 patients was assembled; 68 of these patients survived longer than three years. Several sets of radiomics features were extracted and incorporated into a machine learning model, and their classification performance was characterized. To improve the classification performance, we further extracted another 27 texture and roughness parameters with 2484 superficial and spatial features to propose a new feature pool. This new feature set was added to the machine learning model and its performance was analyzed. To determine the best model for our experiment, four of the most popular machine learning models were evaluated: the Random Forest (RF) classifier, Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Naïve Bayes (NB). The models were trained and tested using five-fold cross-validation. RESULTS: Using the area under the ROC curve (AUC) as the evaluation index, the model generated using the new feature pool yields AUC = 0.98 ± 0.01, significantly higher than the models created using the traditional radiomics feature set (p < 0.04). The RF classifier performed better than the other machine learning models. CONCLUSIONS: This study demonstrates that although radiomics features produced good classification performance, creating new feature sets significantly improved model performance.
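The evaluation loop described in this abstract, five-fold cross-validation scored by the area under the ROC curve, can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: the feature matrix, labels, and `fit_predict` callable are placeholders for the radiomics features and the RF/SVM/KNN/NB models being compared.

```python
import numpy as np

def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive case outscores a random
    negative case (ties count as half)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def five_fold_auc(features, labels, fit_predict, seed=0):
    """Shuffle cases, split into 5 folds, train on 4 folds, score the
    held-out fold, and return the per-fold AUC values."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    folds = np.array_split(idx, 5)
    aucs = []
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        scores = fit_predict(features[train], labels[train], features[test])
        aucs.append(auc_score(labels[test], scores))
    return aucs
```

Any scoring classifier can be dropped in as `fit_predict`; a nearest-centroid scorer, for instance, can stand in for the four models compared in the paper.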

2.
Med Phys; 50(12): 7670-7683, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37083190

ABSTRACT

BACKGROUND: Developing computer-aided diagnosis (CAD) schemes of mammograms to classify between malignant and benign breast lesions has attracted substantial research attention over the last several decades. However, unlike radiologists, who make diagnostic decisions based on the fusion of image features extracted from multi-view mammograms, most CAD schemes are single-view-based, which limits CAD performance and clinical utility. PURPOSE: This study aims to develop and test a novel CAD framework that optimally fuses information extracted from ipsilateral views of bilateral mammograms using both deep transfer learning (DTL) and radiomics feature extraction methods. METHODS: An image dataset containing 353 benign and 611 malignant cases is assembled. Each case contains four images: the craniocaudal (CC) and mediolateral oblique (MLO) views of the left and right breast. First, we extract four matching regions of interest (ROIs) from images that surround the centers of two suspicious lesion regions seen in the CC and MLO views, as well as matching ROIs in the contralateral breasts. Next, handcrafted radiomics (HCR) features and VGG16 model-generated automated features are extracted from each ROI, resulting in eight feature vectors. Then, after reducing feature dimensionality and quantifying the bilateral and ipsilateral asymmetry of the four ROIs to yield four new feature vectors, we test four fusion methods to build three support vector machine (SVM) classifiers by an optimal fusion of asymmetrical image features extracted from the four view images. RESULTS: Using a 10-fold cross-validation method, results show that an SVM classifier trained using an optimal fusion of the four view images yields the highest classification performance (AUC = 0.876 ± 0.031), which significantly outperforms SVM classifiers trained using one projection view alone, AUC = 0.817 ± 0.026 and 0.792 ± 0.026 for the CC and MLO views of bilateral mammograms, respectively (p < 0.001).
CONCLUSIONS: The study demonstrates that the shift from single-view to four-view CAD and the inclusion of both DTL and radiomics features significantly increase CAD performance in distinguishing between malignant and benign breast lesions.
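The asymmetry quantification described above can be sketched in a framework-free way. The helper names and the choice of absolute element-wise difference as the asymmetry measure are illustrative assumptions, not the paper's exact formulation; the paper additionally reduces dimensionality and tests several fusion rules before the SVM stage.

```python
import numpy as np

def asymmetry_vector(feat_a, feat_b):
    """Quantify feature asymmetry between two matching ROIs as the
    absolute element-wise difference of their feature vectors
    (an illustrative choice of asymmetry measure)."""
    return np.abs(np.asarray(feat_a, float) - np.asarray(feat_b, float))

def fuse_four_views(cc_left, cc_right, mlo_left, mlo_right):
    """Build one fused vector from the four ROI feature vectors:
    bilateral asymmetry within each projection view (left vs. right),
    plus ipsilateral asymmetry between the two views of each breast,
    all concatenated for a downstream classifier."""
    bilateral = [asymmetry_vector(cc_left, cc_right),
                 asymmetry_vector(mlo_left, mlo_right)]
    ipsilateral = [asymmetry_vector(cc_left, mlo_left),
                   asymmetry_vector(cc_right, mlo_right)]
    return np.concatenate(bilateral + ipsilateral)
```

The fused vector would then feed an SVM in place of any single-view feature vector.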


Subjects
Algorithms; Deep Learning; Mammography/methods; Diagnosis, Computer-Assisted
3.
Tomography; 8(5): 2411-2425, 2022 Sep 28.
Article in English | MEDLINE | ID: mdl-36287799

ABSTRACT

Background: The accurate classification between malignant and benign breast lesions detected on mammograms is a crucial but difficult challenge for reducing false-positive recall rates and improving the efficacy of breast cancer screening. Objective: This study aims to optimize a new deep transfer learning model by implementing a novel attention mechanism in order to improve the accuracy of breast lesion classification. Methods: ResNet50 is selected as the base model to develop a new deep transfer learning model. To enhance the accuracy of breast lesion classification, we propose adding a convolutional block attention module (CBAM) to the standard ResNet50 model and optimizing the new model for this task. We assembled a large dataset of 4280 mammograms depicting suspicious soft-tissue, mass-type lesions. A region of interest (ROI) is extracted from each image based on the lesion center. Among them, 2480 and 1800 ROIs depict verified benign and malignant lesions, respectively. The image dataset is randomly split into two subsets with a ratio of 9:1 five times to train and test the two ResNet50 models, with and without CBAM. Results: Using the area under the ROC curve (AUC) as the evaluation index, the new CBAM-based ResNet50 model yields AUC = 0.866 ± 0.015, significantly higher than that obtained by the standard ResNet50 model (AUC = 0.772 ± 0.008) (p < 0.01). Conclusion: This study demonstrates that although deep transfer learning technology has attracted broad research interest in medical-imaging informatics, adding a new attention mechanism to optimize deep transfer learning models for specific application tasks can play an important role in further improving model performance.
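The channel-attention half of a CBAM block can be illustrated with a framework-free sketch: squeeze each channel by global average- and max-pooling, pass both summaries through a shared two-layer MLP, sum, squash with a sigmoid, and reweight the channels. The MLP weights here are placeholders; a real implementation would sit inside the ResNet50 graph (e.g., as a PyTorch module) with learned weights, and CBAM also includes a spatial-attention stage omitted here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feature_map, w1, w2):
    """CBAM-style channel attention on a (C, H, W) feature map.
    w1: (C//r, C) and w2: (C, C//r) form the shared bottleneck MLP
    with reduction ratio r."""
    avg = feature_map.mean(axis=(1, 2))         # (C,) average-pooled summary
    mx = feature_map.max(axis=(1, 2))           # (C,) max-pooled summary
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0)  # shared MLP with ReLU
    weights = sigmoid(mlp(avg) + mlp(mx))       # (C,) attention weights in (0, 1)
    return feature_map * weights[:, None, None]
```

Because the weights lie in (0, 1), the block can only attenuate channels; the network learns which channels to keep near full strength.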


Subjects
Breast Neoplasms; Neural Networks, Computer; Humans; Female; Machine Learning; Mammography/methods; Breast Neoplasms/diagnostic imaging; Area Under Curve
4.
Front Oncol; 12: 980793, 2022.
Article in English | MEDLINE | ID: mdl-36119479

ABSTRACT

Breast cancer remains the most commonly diagnosed cancer in women. Advances in medical imaging modalities and technologies have greatly aided the early detection of breast cancer and contributed to the decline in patient mortality rates. However, reading and interpreting breast images remains difficult due to the high heterogeneity of breast tumors and fibro-glandular tissue, which results in lower cancer detection sensitivity and specificity and large inter-reader variability. To help overcome these clinical challenges, researchers have made great efforts to develop computer-aided detection and/or diagnosis (CAD) schemes of breast images to provide radiologists with decision-making support tools. Recent rapid advances in high-throughput data analysis methods and artificial intelligence (AI) technologies, particularly radiomics and deep learning techniques, have led to an exponential increase in the development of new AI-based models of breast images covering a broad range of application topics. In this review paper, we focus on recent advances in understanding the association between radiomics features and the tumor microenvironment, and on progress in developing new AI-based quantitative image feature analysis models in three realms of breast cancer: predicting breast cancer risk, the likelihood of tumor malignancy, and tumor response to treatment. The outlook and three major challenges of applying new AI-based models of breast images to clinical practice are also discussed. Through this review, we conclude that although the development of new AI-based models of breast images has achieved significant progress and promising results, several obstacles to applying these models in clinical practice remain, and more research effort is needed in future studies.

5.
Bioengineering (Basel); 9(6), 2022 Jun 15.
Article in English | MEDLINE | ID: mdl-35735499

ABSTRACT

Objective: Radiomics and deep transfer learning are two popular technologies used to develop computer-aided detection and diagnosis (CAD) schemes of medical images. This study aims to investigate and compare the advantages and potential limitations of applying these two technologies in developing CAD schemes. Methods: A relatively large and diverse retrospective dataset including 3000 digital mammograms was assembled, in which 1496 images depicted malignant lesions and 1504 images depicted benign lesions. Two CAD schemes were developed to classify breast lesions. The first scheme was developed in four steps: applying an adaptive multi-layer topographic region growing algorithm to segment lesions, computing initial radiomics features, applying a principal component analysis algorithm to generate an optimal feature vector, and building a support vector machine classifier. The second CAD scheme was built on a pre-trained residual network architecture (ResNet50) as a transfer learning model to classify breast lesions. Both CAD schemes were trained and tested using a 10-fold cross-validation method. Several score fusion methods were also investigated to classify breast lesions. CAD performance was evaluated and compared using the area under the ROC curve (AUC). Results: The ResNet50 model-based CAD scheme yielded AUC = 0.85 ± 0.02, significantly higher than the radiomics feature-based CAD scheme with AUC = 0.77 ± 0.02 (p < 0.01). Additionally, the fusion of classification scores generated by the two CAD schemes did not further improve classification performance. Conclusion: This study demonstrates that deep transfer learning is a more efficient way to develop CAD schemes and enables higher lesion classification performance than CAD schemes developed using radiomics-based technology.
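The score-fusion step investigated above could be sketched as follows. The abstract does not specify which fusion rules were tested, so the three rules here (unweighted mean, maximum, and a weighted mean) and their weights are illustrative assumptions only.

```python
import numpy as np

def fuse_scores(radiomics_scores, dtl_scores, method="mean"):
    """Fuse per-case malignancy scores from two CAD schemes into one
    score vector. The rules and the 0.3/0.7 weights are placeholders,
    not the study's actual fusion methods."""
    r = np.asarray(radiomics_scores, dtype=float)
    d = np.asarray(dtl_scores, dtype=float)
    if method == "mean":
        return (r + d) / 2.0           # unweighted average of the two schemes
    if method == "max":
        return np.maximum(r, d)        # take the more suspicious score per case
    if method == "weighted":
        return 0.3 * r + 0.7 * d       # favor the deep-transfer-learning scheme
    raise ValueError(f"unknown fusion method: {method}")
```

The fused scores would then be evaluated with the same 10-fold cross-validated AUC used for the individual schemes; as the study reports, such fusion did not improve on the ResNet50 scheme alone.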
