1.
Sci Rep ; 14(1): 22467, 2024 09 28.
Article in English | MEDLINE | ID: mdl-39341957

ABSTRACT

The study aims to investigate the potential of training efficient deep learning models using 2.5D (2.5-dimension) masks of spontaneous intracerebral hemorrhage (sICH). Furthermore, it evaluates and compares the predictive performance of a joint model incorporating four types of features against standalone 2.5D deep learning, radiomics, radiology, and clinical models for early expansion in sICH. A total of 254 sICH patients were enrolled retrospectively and divided into two groups according to whether the hematoma had enlarged. The 2.5D mask of sICH was constructed from the maximum axial, coronal, and sagittal planes of the hematoma and used to train the deep learning model and extract deep learning features. Predictive models were built on the clinical, radiology, radiomics, and deep learning features separately, and on all four feature types jointly. The diagnostic performance of each model was measured using the area under the receiver operating characteristic curve (AUC), accuracy, recall, F1 score, and decision curve analysis (DCA). The AUCs of the clinical model, radiology model, radiomics model, deep learning model, joint model, and nomogram model on the training set (training and cross-validation) were 0.639, 0.682, 0.859, 0.807, 0.939, and 0.942, respectively, while the AUCs on the test set (external validation) were 0.680, 0.758, 0.802, 0.857, 0.929, and 0.926. Decision curve analysis showed that the joint model was superior to the other models and demonstrated good consistency between the predicted and actual probabilities of early hematoma expansion. Our study demonstrates that the joint model is a more efficient and robust prediction model, as verified by multicenter data. This finding highlights the potential clinical utility of a multifactorial prediction model that integrates various data sources for prognostication in patients with intracerebral hemorrhage.
The Critical Relevance Statement: Combining 2.5D deep learning features with clinical features, radiology markers, and radiomics signatures establishes a joint model that enables physicians to better individualize assessment of the risk of early expansion of sICH.
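The joint-model idea above (train per-feature-type models, then concatenate all blocks) can be sketched as follows. This is a minimal illustration with synthetic stand-in features and a logistic-regression classifier; the block sizes and effect strengths are assumptions, not the paper's actual data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 254  # cohort size from the abstract; the features below are synthetic stand-ins
y = rng.integers(0, 2, n)

# Hypothetical feature blocks: clinical, radiology, radiomics, deep learning.
blocks = {
    "clinical": rng.normal(size=(n, 5)) + y[:, None] * 0.3,
    "radiology": rng.normal(size=(n, 8)) + y[:, None] * 0.4,
    "radiomics": rng.normal(size=(n, 20)) + y[:, None] * 0.5,
    "deep": rng.normal(size=(n, 32)) + y[:, None] * 0.5,
}

def auc_of(X, y):
    """Hold-out AUC of a logistic-regression model on one feature block."""
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    return roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])

for name, X in blocks.items():
    print(f"{name:10s} AUC: {auc_of(X, y):.3f}")

# Joint model: concatenate all four feature blocks into one matrix.
X_joint = np.hstack(list(blocks.values()))
print(f"{'joint':10s} AUC: {auc_of(X_joint, y):.3f}")
```

On such synthetic data the joint model typically outperforms each standalone block, mirroring the ordering reported in the abstract.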


Subject(s)
Deep Learning , Humans , Male , Female , Retrospective Studies , Middle Aged , ROC Curve , Aged , Early Diagnosis
2.
Quant Imaging Med Surg ; 14(8): 5396-5407, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39144035

ABSTRACT

Background: Deep learning features (DLFs), derived by fusing radiomics features (RFs) with deep learning, have shown potential for enhancing diagnostic capability. However, the limited repeatability and reproducibility of DLFs across multiple centers is a challenge for the clinical validation of these features. This study therefore aimed to evaluate the repeatability and reproducibility of DLFs and their potential efficiency in differentiating subtypes of lung adenocarcinoma less than 10 mm in size and manifesting as ground-glass nodules (GGNs). Methods: A chest phantom with nodules was scanned repeatedly using different thin-slice computed tomography (TSCT) scanners with varying acquisition and reconstruction parameters. The robustness of the DLFs was measured using the concordance correlation coefficient (CCC) and intraclass correlation coefficient (ICC). A deep learning approach was used to visualize the DLFs. To assess the clinical effectiveness and generalizability of the stable and informative DLFs, 275 patients from three hospitals, with 405 nodules pathologically diagnosed as GGN lung adenocarcinoma less than 10 mm in size, were retrospectively reviewed for clinical validation. Results: A total of 64 DLFs were analyzed, revealing that slice thickness and slice interval (ICC, 0.79±0.18) and reconstruction kernel (ICC, 0.82±0.07) were significantly associated with the robustness of the DLFs. Feature visualization showed that the DLFs were mainly focused on the nodule areas. In the external validation, a subset of 28 robust DLFs identified as stable under all sources of variability achieved the highest area under the curve [AUC = 0.65, 95% confidence interval (CI): 0.53-0.76] compared with the other DLF models and the radiomics model.
Conclusions: Although different manufacturers and scanning schemes affect the reproducibility of DLFs, certain DLFs demonstrated excellent stability and effectively improved diagnostic efficacy in identifying subtypes of lung adenocarcinoma. Therefore, screening for stable DLFs as a first step in multicenter DLF research may improve diagnostic efficacy and promote the application of these features.
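The stability screening described above can be illustrated with Lin's concordance correlation coefficient. This sketch uses synthetic feature matrices standing in for two repeat phantom scans; the noise level, feature count, and CCC > 0.9 cutoff are illustrative assumptions.

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement runs."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(1)
n_nodules, n_feats = 50, 64

scan_a = rng.normal(size=(n_nodules, n_feats))      # features from protocol A
scan_b = scan_a + rng.normal(scale=0.2, size=scan_a.shape)  # repeat scan, protocol B
scan_b[:, :10] = rng.normal(size=(n_nodules, 10))   # first 10 features made unstable

# Screen: keep only features whose CCC across protocols exceeds the cutoff.
cccs = np.array([ccc(scan_a[:, j], scan_b[:, j]) for j in range(n_feats)])
stable = np.flatnonzero(cccs > 0.9)
print(f"{stable.size} of {n_feats} features pass CCC > 0.9")
```

Only the surviving stable features would then be carried into downstream model building, as the conclusion recommends.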

3.
Comput Biol Med ; 174: 108461, 2024 May.
Article in English | MEDLINE | ID: mdl-38626509

ABSTRACT

BACKGROUND: Positron emission tomography (PET) is extensively employed for diagnosing and staging various tumors, including liver cancer, lung cancer, and lymphoma. Accurate subtype classification of tumors plays a crucial role in formulating effective treatment plans for patients. Notably, lymphoma comprises subtypes such as diffuse large B-cell lymphoma and Hodgkin's lymphoma, while lung cancer encompasses adenocarcinoma, small cell carcinoma, and squamous cell carcinoma. Similarly, liver cancer consists of subtypes such as cholangiocarcinoma and hepatocellular carcinoma. Consequently, the subtype classification of tumors based on PET images holds immense clinical significance. However, in clinical practice, the number of cases available for each subtype is often limited and imbalanced. Therefore, the primary challenge lies in achieving precise subtype classification using a small dataset. METHOD: This paper presents a novel approach for tumor subtype classification in small datasets using RA-DL (Radiomics-DeepLearning) attention. To address the limited sample size, a support vector machine (SVM) is employed as the classifier for tumor subtypes instead of deep learning methods. Emphasizing the importance of texture information in tumor subtype recognition, radiomics features are extracted from the tumor regions during the feature extraction stage. These features are compressed using an autoencoder to reduce redundancy. In addition to radiomics features, deep features are also extracted from the tumors to leverage the feature extraction capabilities of deep learning. In contrast to existing methods, our proposed approach utilizes the RA-DL-Attention mechanism to guide the deep network in extracting complementary deep features that enhance the expressive capacity of the final features while minimizing redundancy.
To address the challenges of limited and imbalanced data, our method avoids using classification labels during deep feature extraction and instead incorporates 2D Region of Interest (ROI) segmentation and image reconstruction as auxiliary tasks. Subsequently, all lesion features of a single patient are aggregated into a feature vector using a multi-instance aggregation layer. RESULT: Validation experiments were conducted on three PET datasets, specifically the liver cancer dataset, lung cancer dataset, and lymphoma dataset. In the context of lung cancer, our proposed method achieved impressive performance with Area Under Curve (AUC) values of 0.82, 0.84, and 0.83 for the three-classification task. For the binary classification task of lymphoma, our method demonstrated notable results with AUC values of 0.95 and 0.75. Moreover, in the binary classification task of liver tumor, our method exhibited promising performance with AUC values of 0.84 and 0.86. CONCLUSION: The experimental results clearly indicate that our proposed method outperforms alternative approaches significantly. Through the extraction of complementary radiomics features and deep features, our method achieves a substantial improvement in tumor subtype classification performance using small PET datasets.
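The lesion-level feature compression and multi-instance aggregation described above can be sketched as follows. This is a simplified stand-in: PCA replaces the paper's autoencoder as a linear compressor, the features are synthetic, and mean pooling is one plausible choice of aggregation layer.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Synthetic stand-in: each patient has a variable number of lesions, each with
# a combined radiomics + deep feature vector.
n_patients, n_feats = 60, 100
labels = rng.integers(0, 2, n_patients)
patients = []
for y in labels:
    n_lesions = rng.integers(1, 5)
    patients.append(rng.normal(size=(n_lesions, n_feats)) + y * 0.6)

# Multi-instance aggregation: mean-pool lesion features into one patient vector.
X = np.vstack([p.mean(axis=0) for p in patients])

# Compress (PCA as a linear analogue of the autoencoder), then classify with SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=16), SVC())
clf.fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```

The SVM-on-compressed-features design reflects the paper's motivation: with few cases per subtype, a shallow classifier on compact features is less prone to overfitting than an end-to-end deep classifier.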


Subject(s)
Positron-Emission Tomography , Support Vector Machine , Humans , Positron-Emission Tomography/methods , Neoplasms/diagnostic imaging , Neoplasms/classification , Databases, Factual , Deep Learning , Image Interpretation, Computer-Assisted/methods , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/classification , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/classification , Radiomics
4.
Diagnostics (Basel) ; 13(10)2023 May 11.
Article in English | MEDLINE | ID: mdl-37238180

ABSTRACT

BACKGROUND: Although handcrafted radiomics features (RF) are commonly extracted via radiomics software, employing deep features (DF) extracted from deep learning (DL) algorithms merits significant investigation. Moreover, a "tensor" radiomics paradigm, in which various flavours of a given feature are generated and explored, can provide added value. We aimed to employ conventional and tensor DFs and compare their outcome prediction performance to conventional and tensor RFs. METHODS: 408 patients with head and neck cancer were selected from TCIA. PET images were first registered to CT, enhanced, normalized, and cropped. We employed 15 image-level fusion techniques (e.g., the dual tree complex wavelet transform (DTCWT)) to combine PET and CT images. Subsequently, 215 RFs were extracted from each tumor in 17 images (or flavours), including CT only, PET only, and 15 fused PET-CT images, through the standardized SERA radiomics software. Furthermore, a 3-dimensional autoencoder was used to extract DFs. To predict the binary progression-free survival outcome, an end-to-end CNN algorithm was first employed. Subsequently, we applied conventional and tensor DFs vs. RFs, as extracted from each image, to three standalone classifiers, namely a multilayer perceptron (MLP), random forest, and logistic regression (LR), linked with dimension-reduction algorithms. RESULTS: DTCWT fusion linked with the CNN resulted in accuracies of 75.6 ± 7.0% and 63.4 ± 6.7% in five-fold cross-validation and external nested testing, respectively. For the tensor RF framework, polynomial transform algorithms + the analysis-of-variance feature selector (ANOVA) + LR achieved 76.67 ± 3.3% and 70.6 ± 6.7% on the same tests. For the tensor DF framework, PCA + ANOVA + MLP reached 87.0 ± 3.5% and 85.3 ± 5.2% on both tests.
CONCLUSIONS: This study showed that tensor DF combined with proper machine learning approaches enhanced survival prediction performance compared to conventional DF, tensor and conventional RF, and end-to-end CNN frameworks.
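The best-performing chain reported above (PCA for dimension reduction, ANOVA F-test selection, MLP classifier) maps directly onto a scikit-learn pipeline. This sketch uses a synthetic classification problem as a stand-in for the tensor deep-feature matrix; the component counts are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Stand-in data: 408 patients (as in the abstract), 215 features per image flavour.
X, y = make_classification(n_samples=408, n_features=215, n_informative=20,
                           random_state=0)

# PCA -> ANOVA F-test selection -> MLP, mirroring the reported tensor-DF chain.
pipe = make_pipeline(
    PCA(n_components=50),
    SelectKBest(f_classif, k=20),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
scores = cross_val_score(pipe, X, y, cv=5)
print("5-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```

Wrapping selection inside the cross-validated pipeline (rather than selecting features on the full dataset first) avoids the optimistic bias that otherwise inflates reported accuracies.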

5.
Viruses ; 14(8)2022 07 28.
Article in English | MEDLINE | ID: mdl-36016288

ABSTRACT

COVID-19, which was declared a pandemic on 11 March 2020, is still infecting millions to date, as the vaccines that have been developed do not prevent the disease but rather reduce the severity of the symptoms. Until a vaccine that can prevent COVID-19 infection is developed, the testing of individuals will be a continuous process. Medical personnel must monitor and treat all health conditions; hence, monitoring and testing every individual for COVID-19 is an impossibly time-consuming task, especially as COVID-19 shares similar symptoms with the common cold and pneumonia. Some over-the-counter tests have been developed and sold, but they are unreliable and add an additional burden, because false-positive cases have to visit hospitals and undergo specialized diagnostic tests to confirm the diagnosis. Therefore, systems that can detect and diagnose COVID-19 automatically, without human intervention, remain an urgent priority and will remain so, because the same technology can be used for future pandemics and other health conditions. In this paper, we propose a modified machine learning (ML) process that integrates deep learning (DL) algorithms for feature extraction with well-known classifiers to accurately detect and diagnose COVID-19 from chest CT scans. Publicly available datasets from the China Consortium for Chest CT Image Investigation (CC-CCII) were used. The highest average accuracy obtained was 99.9%, achieved with the modified ML process when 2000 features were extracted using GoogleNet and ResNet18 and classified with a support vector machine (SVM). The results obtained using the modified ML process were higher than those of similar methods reported in the extant literature using the same or comparably sized datasets; thus, this study adds value to the current body of knowledge.
Further research in this field is required to develop methods that can be applied in hospitals and can better equip mankind to be prepared for any future pandemics.
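The core pattern above — concatenate features from two pretrained CNN backbones, then classify with an SVM — can be sketched at the array level. Real features would come from the networks' penultimate layers; here synthetic vectors of the standard GoogleNet (1024-d) and ResNet18 (512-d) output sizes stand in, and the class-signal strength is an assumption.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 200
y = rng.integers(0, 2, n)

# Stand-ins for penultimate-layer activations of the two backbones.
feats_googlenet = rng.normal(size=(n, 1024)) + y[:, None] * 0.15
feats_resnet18 = rng.normal(size=(n, 512)) + y[:, None] * 0.15

# Concatenate both feature sets, then feed a linear SVM, as in the modified
# ML process described in the abstract.
X = np.hstack([feats_googlenet, feats_resnet18])
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print("5-fold CV accuracy: %.3f" % scores.mean())
```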


Subject(s)
COVID-19 , Deep Learning , Pneumonia , COVID-19/diagnostic imaging , Humans , Pneumonia/diagnostic imaging , SARS-CoV-2 , Tomography, X-Ray Computed/methods
6.
Diagnostics (Basel) ; 12(7)2022 Jul 01.
Article in English | MEDLINE | ID: mdl-35885512

ABSTRACT

Diabetic Retinopathy (DR) is a medical condition present in patients suffering from long-term diabetes. If a diagnosis is not made at an early stage, it can lead to vision impairment. High blood sugar in diabetic patients is the main cause of DR, affecting the blood vessels within the retina. Manual detection of DR is a difficult task, since the disease causes structural changes in the retina such as microaneurysms (MAs), exudates (EXs), hemorrhages (HMs), and extra blood vessel growth. In this work, a hybrid technique for the detection and classification of diabetic retinopathy in fundus images of the eye is proposed. Transfer learning (TL) is used on pre-trained convolutional neural network (CNN) models to extract features, which are combined to generate a hybrid feature vector. This feature vector is passed to various classifiers for binary and multiclass classification of fundus images. System performance is measured using various metrics, and the results are compared with recent approaches to DR detection. The proposed method provides significant performance improvement in DR detection for fundus images, achieving its highest accuracies of 97.8% for binary classification and 89.29% for multiclass classification.
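The hybrid feature vector and dual classification tasks can be sketched as follows. The features are synthetic stand-ins for two transfer-learned CNN backbones, and the five-grade labelling (no DR plus four severity grades) is an assumption about the multiclass setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n = 250
grades = rng.integers(0, 5, n)  # hypothetical DR severity grades 0-4

# Stand-ins for features from two pre-trained CNN backbones after transfer learning.
f1 = rng.normal(size=(n, 256)) + grades[:, None] * 0.1
f2 = rng.normal(size=(n, 256)) + grades[:, None] * 0.1

hybrid = np.hstack([f1, f2])        # the combined hybrid feature vector
binary = (grades > 0).astype(int)   # DR vs. no-DR for the binary task

acc_mc = cross_val_score(LogisticRegression(max_iter=2000), hybrid, grades, cv=5).mean()
acc_bin = cross_val_score(LogisticRegression(max_iter=2000), hybrid, binary, cv=5).mean()
print(f"multiclass: {acc_mc:.3f}  binary: {acc_bin:.3f}")
```

As in the abstract, the binary task is markedly easier than the multiclass one, since adjacent severity grades overlap in feature space.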

7.
Environ Sci Pollut Res Int ; 29(34): 51909-51926, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35257344

ABSTRACT

Environmental microorganisms (EMs) offer a highly efficient, harmless, and low-cost solution to environmental pollution. They are used in sanitation, monitoring, and the decomposition of environmental pollutants. However, this depends on the proper identification of suitable microorganisms. To speed up identification, lower its cost, and increase its consistency and accuracy, we propose novel pairwise deep learning features (PDLFs) for analyzing microorganisms. The PDLF technique combines the capabilities of handcrafted and deep learning features. In this technique, we leverage Shi-Tomasi interest points by extracting deep learning features from patches centered at the interest points' locations. Then, to increase the number of potential features with intermediate spatial characteristics between nearby interest points, we use Delaunay triangulation and straight-line geometry to pair nearby deep learning features. The potential of pairwise features is demonstrated on the classification of EMs using SVM, linear discriminant analysis, logistic regression, XGBoost, and random forest classifiers. The pairwise features obtain outstanding results of 99.17%, 91.34%, 91.32%, 91.48%, and 99.56%, which represent increases of about 5.95%, 62.40%, 62.37%, 61.84%, and 3.23% in accuracy, F1-score, recall, precision, and specificity, respectively, compared to non-paired deep learning features.
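The pairing step described above can be sketched with SciPy's Delaunay triangulation. The interest-point locations and per-patch feature vectors are synthetic stand-ins, and concatenating the two endpoint vectors is one plausible pairing rule, not necessarily the paper's exact formulation.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(4)

# Stand-ins: interest-point locations (e.g. Shi-Tomasi corners) and a deep
# feature vector extracted from the patch around each point.
points = rng.uniform(0, 256, size=(30, 2))
feats = rng.normal(size=(30, 64))

# Delaunay triangulation connects each point to its natural neighbours; its
# edges define which nearby deep features get paired.
tri = Delaunay(points)
edges = set()
for simplex in tri.simplices:
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))

# One simple pairing rule: concatenate the two endpoint feature vectors per edge.
pairwise = np.array([np.concatenate([feats[a], feats[b]]) for a, b in edges])
print(f"{len(edges)} pairwise features of dimension {pairwise.shape[1]}")
```

The triangulation restricts pairing to spatial neighbours, which keeps the pairwise feature count linear in the number of interest points rather than quadratic.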


Subject(s)
Deep Learning , Environmental Microbiology , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods
8.
Math Biosci Eng ; 18(5): 5790-5815, 2021 06 25.
Article in English | MEDLINE | ID: mdl-34517512

ABSTRACT

A brain tumor is an abnormal growth of brain cells inside the head, which reduces the patient's chance of survival if it is not diagnosed at an early stage. Brain tumors vary in size, type, and shape, and require distinct therapies for different patients. Manual diagnosis of brain tumors is inefficient, prone to error, and time-consuming. Besides, it is a strenuous task that depends on the radiologist's experience and proficiency. Therefore, a modern, efficient automated computer-assisted diagnosis (CAD) system that can appropriately address these problems with high accuracy is needed. Aiming to enhance performance and minimise human effort, in this manuscript, the brain MRI image is first pre-processed to improve its visual quality, and the sample images are augmented to avoid over-fitting in the network. Second, tumor proposals, or locations, are obtained with an agglomerative clustering-based method. Third, the image proposals and the enhanced input image are passed to a backbone architecture for feature extraction. Fourth, high-quality image proposals are retained based on a refinement network, and the others are discarded. Next, these refined proposals are aligned to the same size and finally passed to the head network to perform the desired classification task. The proposed method is a potent tumor grading tool, assessed on a publicly available brain tumor dataset. Extensive experimental results show that the proposed method outperformed the existing approaches evaluated on the same dataset, achieving an overall classification accuracy of 98.04%. Besides, the model yielded accuracies of 98.17%, 98.66%, and 99.24%, sensitivities (recall) of 96.89%, 97.82%, and 99.24%, and specificities of 98.55%, 99.38%, and 99.25% for the meningioma, glioma, and pituitary classes, respectively.
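The proposal-generation step (second in the pipeline above) can be sketched with agglomerative clustering on candidate pixel coordinates. The two Gaussian blobs are a toy stand-in for bright tumor-like regions in an MRI slice, and taking each cluster's bounding box as a proposal is an illustrative choice.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(5)

# Toy stand-in for an MRI slice: bright pixels form two blob-like regions.
blob1 = rng.normal(loc=(40, 40), scale=3, size=(80, 2))
blob2 = rng.normal(loc=(100, 90), scale=4, size=(80, 2))
coords = np.vstack([blob1, blob2])

# Agglomerative clustering groups the bright-pixel coordinates into candidate
# regions; the bounding box of each cluster becomes a tumor proposal.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(coords)
proposals = []
for k in np.unique(labels):
    pts = coords[labels == k]
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    proposals.append((x0, y0, x1, y1))
print("proposal boxes:", proposals)
```

In the full pipeline these boxes would then be scored by the refinement network, with low-quality proposals discarded before classification.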


Subject(s)
Brain Neoplasms , Glioma , Brain/diagnostic imaging , Brain Neoplasms/diagnostic imaging , Diagnosis, Computer-Assisted , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging
9.
J Clin Med ; 10(14)2021 Jul 14.
Article in English | MEDLINE | ID: mdl-34300266

ABSTRACT

The COVID-19 pandemic continues to spread globally at a rapid pace, and its detection remains a challenge due to its high infectivity and limited testing availability. One of the most readily available imaging modalities in clinical routine is chest X-ray (CXR), which is often used for diagnostic purposes. Here, we propose computer-aided detection of COVID-19 in CXR imaging using deep and conventional radiomic features. First, we used a 2D U-Net model to segment the lung lobes. Then, we extracted deep latent-space radiomics by applying a deep convolutional autoencoder (ConvAE) with internal dense layers to obtain low-dimensional deep radiomics. We used the Johnson-Lindenstrauss (JL) lemma, Laplacian scoring (LS), and principal component analysis (PCA) to reduce the dimensionality of the conventional radiomics. The resulting low-dimensional deep and conventional radiomics were integrated to classify COVID-19 versus pneumonia and healthy patients. We used 704 CXR images for training the entire model (i.e., U-Net, ConvAE, and feature selection in conventional radiomics). Afterward, we independently validated the whole system using a study cohort of 1597 cases. We trained and tested a random forest model for detecting COVID-19 cases through multivariate binary-class and multiclass classification. The maximal (full multivariate) model, using a combination of the two radiomic groups, yielded cross-validated classification accuracies of 72.6% (69.4-74.4%) for multiclass and 89.6% (88.4-90.7%) for binary-class classification.
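The JL-lemma and PCA reduction steps mentioned above are both available in scikit-learn. This sketch applies them to a random stand-in for the high-dimensional conventional radiomics matrix; the feature count and eps value are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.random_projection import (GaussianRandomProjection,
                                       johnson_lindenstrauss_min_dim)

rng = np.random.default_rng(6)
X = rng.normal(size=(704, 2000))  # stand-in: 704 training images, 2000 features

# JL lemma: minimum number of random dimensions preserving pairwise
# distances within a factor of (1 +/- eps).
k = johnson_lindenstrauss_min_dim(n_samples=704, eps=0.3)
print("JL minimum dimension for eps=0.3:", k)

# Random projection (capped for practicality) vs. PCA reduction.
X_jl = GaussianRandomProjection(n_components=min(int(k), 500),
                                random_state=0).fit_transform(X)
X_pca = PCA(n_components=64).fit_transform(X)
print("JL:", X_jl.shape, " PCA:", X_pca.shape)
```

Random projection is data-independent and cheap, while PCA adapts to the data's variance structure; the study combines such reductions with Laplacian scoring before fusing the feature groups.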

10.
Biosensors (Basel) ; 10(11)2020 Oct 31.
Article in English | MEDLINE | ID: mdl-33142939

ABSTRACT

Breast cancer is the most common cancer in women. Early diagnosis, the cornerstone of breast cancer treatment, improves outcome and survival. Thermography has been utilized as a complementary diagnostic technique in breast cancer detection. Artificial intelligence (AI) has the capacity to capture and analyze the information concealed in thermography. In this study, we propose a method to potentially detect the immunohistochemical response to breast cancer by finding thermal heterogeneous patterns in the targeted area. For breast cancer screening, 208 subjects participated, and normal and abnormal (diagnosed by mammography or clinical examination) conditions were analyzed. High-dimensional deep thermomic features were extracted with a pre-trained ResNet-50 model from a low-rank thermal matrix approximation obtained using sparse principal component analysis. Then, a sparse deep autoencoder, designed and trained for such data, reduced the dimensionality to 16 latent-space thermomic features. A random forest model was used to classify the participants. The proposed method preserves thermal heterogeneity, which leads to successful classification between normal and abnormal subjects with an accuracy of 78.16% (73.3-81.07%). By non-invasively capturing a thermal map of the entire tumor, the proposed method can assist in screening and diagnosing this malignancy. These thermal signatures may preoperatively stratify patients for personalized treatment planning and potentially allow monitoring of patients during treatment.


Subject(s)
Breast Neoplasms/diagnosis , Deep Learning , Vasodilation , Artificial Intelligence , Biomarkers , Early Detection of Cancer , Female , Humans , Mammography , Thermography
11.
Diagnostics (Basel) ; 10(8)2020 Aug 06.
Article in English | MEDLINE | ID: mdl-32781795

ABSTRACT

Manual identification of brain tumors is an error-prone and tedious process for radiologists; therefore, it is crucial to adopt an automated system. The binary classification task, such as malignant versus benign, is relatively trivial, whereas multimodal brain tumor classification (T1, T2, T1CE, and FLAIR) is a challenging task for radiologists. Here, we present an automated multimodal classification method using deep learning for brain tumor type classification. The proposed method consists of five core steps. In the first step, linear contrast stretching is applied using edge-based histogram equalization and the discrete cosine transform (DCT). In the second step, deep learning feature extraction is performed: utilizing transfer learning, two pre-trained convolutional neural network (CNN) models, namely VGG16 and VGG19, are used for feature extraction. In the third step, a correntropy-based joint learning approach is implemented along with an extreme learning machine (ELM) for the selection of the best features. In the fourth step, the partial least squares (PLS)-based robust covariant features are fused into one matrix. Finally, the combined matrix is fed to the ELM for classification. The proposed method was validated on the BraTS datasets, achieving accuracies of 97.8%, 96.9%, and 92.5% on BraTS2015, BraTS2017, and BraTS2018, respectively.
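The ELM classifier used in the final step above is simple enough to sketch from scratch: a fixed random hidden layer whose output weights are solved in closed form by least squares. The fused feature matrix here is a synthetic stand-in (the PLS fusion is replaced by a plain labelled feature matrix), and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

# Stand-in fused feature matrix with binary labels.
n, d, hidden = 300, 40, 200
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d)) + y[:, None] * 0.5

# Extreme learning machine: random, untrained input weights; only the
# readout weights beta are fitted, via a single least-squares solve.
W = rng.normal(size=(d, hidden))          # fixed random input weights
b = rng.normal(size=hidden)               # fixed random biases
H = np.tanh(X @ W + b)                    # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, 2 * y - 1, rcond=None)  # closed-form readout

pred = (H @ beta > 0).astype(int)
print("train accuracy:", (pred == y).mean())
```

Because no gradient descent is involved, training an ELM costs one matrix factorization, which is why it is attractive as a final-stage classifier over pre-computed deep features.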

12.
Sensors (Basel) ; 19(19)2019 Sep 24.
Article in English | MEDLINE | ID: mdl-31554229

ABSTRACT

The fields of human activity analysis have recently begun to diversify. Many researchers have taken a strong interest in developing action recognition or action prediction methods. Research on human action evaluation differs in that it aims to design computational models and evaluation approaches for automatically assessing the quality of human actions. This line of study has become popular because of its rapidly emerging real-world applications, such as physical rehabilitation, assistive living for elderly people, skill training on self-learning platforms, and sports activity scoring. This paper presents a comprehensive survey of approaches and techniques in action evaluation research, including motion detection and preprocessing using skeleton data, handcrafted feature representation methods, and deep learning-based feature representation methods. The benchmark datasets from this research field and the evaluation criteria employed to validate the algorithms' performance are introduced. Finally, the authors present several promising directions for further study.


Subject(s)
Deep Learning , Algorithms , Humans , Machine Learning