Results 1 - 20 of 85
1.
Bioengineering (Basel) ; 11(7)2024 Jun 24.
Article in English | MEDLINE | ID: mdl-39061725

ABSTRACT

This study evaluates the reproducibility of machine learning models that integrate radiomics and deep features (features extracted from a 3D autoencoder neural network) to classify various brain hemorrhages effectively. Using a dataset of 720 patients, we extracted 215 radiomics features (RFs) and 15,680 deep features (DFs) from CT brain images. With rigorous screening based on Intraclass Correlation Coefficient (ICC) thresholds (>0.75), we identified 135 RFs and 1054 DFs for analysis. Feature selection techniques such as Boruta, Recursive Feature Elimination (RFE), XGBoost, and ExtraTreesClassifier were utilized alongside 11 classifiers, including AdaBoost, CatBoost, Decision Trees, LightGBM, Logistic Regression, Naive Bayes, Neural Networks, Random Forest, Support Vector Machines (SVM), and k-Nearest Neighbors (k-NN). Evaluation metrics included Area Under the Curve (AUC), Accuracy (ACC), Sensitivity (SEN), and F1-score. The model evaluation involved hyperparameter optimization, a 70:30 train-test split, and bootstrapping, further validated with the Wilcoxon signed-rank test and q-values. Notably, DFs showed higher accuracy. In the case of RFs, the Boruta + SVM combination emerged as the optimal model for AUC, ACC, and SEN, while XGBoost + Random Forest excelled in F1-score. Specifically, RFs achieved AUC, ACC, SEN, and F1-scores of 0.89, 0.85, 0.82, and 0.80, respectively. Among DFs, the ExtraTreesClassifier + Naive Bayes combination demonstrated remarkable performance, attaining an AUC of 0.96, ACC of 0.93, SEN of 0.92, and an F1-score of 0.92. Distinguished models in the RF category included SVM with Boruta, Logistic Regression with XGBoost, SVM with ExtraTreesClassifier, CatBoost with XGBoost, and Random Forest with XGBoost, each yielding 42 significant q-values. In the DFs realm, ExtraTreesClassifier + Naive Bayes, ExtraTreesClassifier + Random Forest, and Boruta + k-NN exhibited robustness, with 43, 43, and 41 significant q-values, respectively.
This investigation underscores the potential of synergizing DFs with machine learning models to serve as valuable screening tools, thereby enhancing the interpretation of head CT scans for patients with brain hemorrhages.
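The ICC-based reproducibility screening described above can be sketched in a few lines of NumPy. This is a hedged illustration only, not the authors' code: it assumes a two-way mixed, consistency, single-measurement ICC(3,1) computed on test/retest feature matrices, with the 0.75 threshold from the abstract.

```python
import numpy as np

def icc_3_1(x):
    """Two-way mixed, consistency, single-measurement ICC(3,1).

    x: (n_subjects, k_repeats) array of one feature measured k times.
    """
    n, k = x.shape
    grand = x.mean()
    subj_means = x.mean(axis=1)
    rep_means = x.mean(axis=0)
    ss_total = ((x - grand) ** 2).sum()
    ss_between_subj = k * ((subj_means - grand) ** 2).sum()
    ss_between_rep = n * ((rep_means - grand) ** 2).sum()
    ss_error = ss_total - ss_between_subj - ss_between_rep
    msr = ss_between_subj / (n - 1)            # between-subject mean square
    mse = ss_error / ((n - 1) * (k - 1))       # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse)

def screen_features(features, threshold=0.75):
    """Keep indices of features whose test/retest ICC exceeds the threshold.

    features: (n_subjects, k_repeats, n_features) array.
    """
    return [j for j in range(features.shape[2])
            if icc_3_1(features[:, :, j]) > threshold]
```

A perfectly reproducible feature (identical repeat measurements) scores ICC = 1 and survives the screen; an unreproducible one falls below 0.75 and is dropped.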

2.
Interdiscip Sci ; 16(2): 503-518, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38733473

ABSTRACT

Cancer remains a severe illness, and current research indicates that tumor homing peptides (THPs) play an important part in cancer therapy. The identification of THPs can provide crucial insights for the drug discovery and pharmaceutical industries, as they allow for tailored medication delivery toward cancer cells. These peptides have a high affinity for particular receptors present on tumor surfaces, allowing for the creation of precision medications that reduce off-target consequences and enhance treatment results for cancer patients. Wet-lab techniques are considered essential tools for studying THPs; however, they are labor-intensive and time-consuming, making THP prediction a challenging task for researchers. Computational techniques, on the other hand, are considered significant tools for identifying THPs from sequence data. Although many strategies have been presented to predict new THPs, there is still a need to develop a robust method with higher rates of success. In this paper, we developed a novel framework, THP-DF, for accurately identifying THPs on a large scale. First, the peptide sequences are encoded through various sequential features. Second, each feature is passed to BiLSTM and attention layers to extract simplified deep features. Finally, an ensemble framework is formed by integrating the sequential and deep features, which are fed to a support vector machine, with 10-fold cross-validation carried out to validate efficiency. The experimental results showed that THP-DF performed better on both the [Formula: see text] and [Formula: see text] datasets, achieving accuracies of >95%, higher than existing predictors on both datasets. This indicates that the proposed predictor could be a beneficial tool to precisely and rapidly identify THPs and will contribute to cutting-edge cancer treatment strategies and pharmaceuticals.
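The final ensemble stage (fused sequential and deep features evaluated under 10-fold cross-validation) can be illustrated with a small NumPy sketch. This is an assumption-laden stand-in, not the THP-DF implementation: a nearest-centroid classifier replaces the SVM, since the point here is the fold construction and concatenation-based fusion rather than the exact model.

```python
import numpy as np

def k_fold_indices(n_samples, k=10, seed=0):
    """Shuffle sample indices and split them into k near-equal folds."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    return np.array_split(idx, k)

def cross_validate(X, y, k=10):
    """Mean k-fold accuracy of a nearest-centroid stand-in classifier."""
    accs = []
    for fold in k_fold_indices(len(y), k):
        mask = np.ones(len(y), dtype=bool)
        mask[fold] = False                   # held-out test fold
        Xtr, ytr, Xte, yte = X[mask], y[mask], X[fold], y[fold]
        cents = np.stack([Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)])
        pred = np.argmin(((Xte[:, None, :] - cents) ** 2).sum(-1), axis=1)
        accs.append((pred == yte).mean())
    return float(np.mean(accs))
```

Fusion itself would just be `X = np.hstack([sequential_feats, deep_feats])` before calling `cross_validate`; both feature-block names here are hypothetical.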


Subject(s)
Computational Biology , Neoplasms , Peptides , Support Vector Machine , Peptides/chemistry , Humans , Computational Biology/methods , Algorithms
3.
Comput Methods Biomech Biomed Engin ; 27(9): 1181-1205, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38629714

ABSTRACT

Cardiovascular disease (CVD) is one of the most dangerous diseases in the world, and a large share of the global population is affected by it. In under-developed countries, CVD prediction remains a difficult job that takes considerable time and cost. Diagnosing this illness is an intricate task that has to be performed precisely to extend patients' life spans. In this research, an advanced deep-model-based CVD prediction and risk analysis framework is proposed to minimize the human death rate worldwide. The data required for CVD prediction is collected from online data sources. The input data is then preprocessed using data cleaning, data scaling, and NaN/null value removal techniques. From the preprocessed data, three sets of features are extracted: deep features, Principal Component Analysis (PCA)-based features, and Support Vector Machine (SVM)-based features. A Multi-scale Weighted Feature Fusion-based Deep Structure Network (MWFF-DSN) is developed to predict CVD. This structure is composed of a multi-scale weighted feature fusion-based Convolutional Neural Network (CNN) with a Residual Gated Recurrent Unit (GRU). The retrieved features are given as input to the MWFF-DSN, and a Modernized Plum Tree Algorithm (MPTA) is developed to optimize the weights. In the overall analysis, the developed model attained an accuracy of 96% and a specificity of 95.95%. It requires minimal time for CVD prediction and gives highly accurate detection results.
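The preprocessing step above (data cleaning, scaling, NaN/null removal) is standard; a minimal NumPy sketch, assuming row-wise NaN dropping and min-max scaling to [0, 1] (the paper does not specify its exact scaler):

```python
import numpy as np

def clean_and_scale(X):
    """Drop rows containing NaN, then min-max scale each column to [0, 1]."""
    X = X[~np.isnan(X).any(axis=1)]          # remove incomplete records
    mn, mx = X.min(axis=0), X.max(axis=0)
    span = np.where(mx > mn, mx - mn, 1.0)   # guard against constant columns
    return (X - mn) / span
```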


Subject(s)
Cardiovascular Diseases , Neural Networks, Computer , Humans , Principal Component Analysis , Support Vector Machine , Algorithms
4.
Article in English | MEDLINE | ID: mdl-38584483

ABSTRACT

Heart attack is among the most prevalent of all ruinous ailments, and the number of affected people is increasing globally day by day. The medical field struggles to detect heart disease at an early stage, yet early prediction can help save patients' lives. Thus, this paper implements a novel heart disease prediction model with the help of a hybrid deep learning strategy. The developed framework consists of several steps: (i) data collection, (ii) deep feature extraction, and (iii) disease prediction. Initially, standard medical data from various patients are acquired from clinical standard datasets. A One-Dimensional Convolutional Neural Network (1DCNN) is utilized to extract deep features from the acquired medical data, reducing redundancy in the gathered large-scale data. The acquired deep features are fed directly to the Hybrid Optimized Deep Classifier (HODC), which integrates Temporal Convolutional Networks (TCN) with Long Short-Term Memory (LSTM); the parameters in both classifiers are optimized using the newly suggested Enhanced Forensic-Based Investigation (EFBI) meta-optimization algorithm. In the result analysis, the accuracy and precision of the offered approach are 98.67% and 99.48%, respectively. The evaluation outcomes show that the recommended system outperforms existing systems across the examined performance metrics.

5.
Med Image Anal ; 92: 103067, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38141454

ABSTRACT

We present a system for anomaly detection in histopathological images. In histology, normal samples are usually abundant, whereas anomalous (pathological) cases are scarce or not available. Under such settings, one-class classifiers trained on healthy data can detect out-of-distribution anomalous samples. Such approaches combined with pre-trained Convolutional Neural Network (CNN) representations of images were previously employed for anomaly detection (AD). However, pre-trained off-the-shelf CNN representations may not be sensitive to abnormal conditions in tissues, while natural variations of healthy tissue may result in distant representations. To adapt representations to relevant details in healthy tissue we propose training a CNN on an auxiliary task that discriminates healthy tissue of different species, organs, and staining reagents. Almost no additional labeling workload is required, since healthy samples come automatically with aforementioned labels. During training we enforce compact image representations with a center-loss term, which further improves representations for AD. The proposed system outperforms established AD methods on a published dataset of liver anomalies. Moreover, it provided comparable results to conventional methods specifically tailored for quantification of liver anomalies. We show that our approach can be used for toxicity assessment of candidate drugs at early development stages and thereby may reduce expensive late-stage drug attrition.
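The center-loss term used above to compact healthy-tissue representations has a simple closed form; a hedged NumPy sketch (the batch-mean normalization is an assumption, since the abstract does not specify it):

```python
import numpy as np

def center_loss(embeddings, labels, centers):
    """0.5 * mean squared distance of each embedding to its class center.

    embeddings: (batch, dim); labels: (batch,) int class ids;
    centers: (n_classes, dim) current per-class center estimates.
    """
    diffs = embeddings - centers[labels]     # deviation from own-class center
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))
```

In training, this term is added to the auxiliary classification loss so that representations of the same class (species, organ, stain) cluster tightly, which in turn sharpens the out-of-distribution anomaly score.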


Subject(s)
Drug Development , Neural Networks, Computer , Humans
6.
Biomed Eng Online ; 22(1): 125, 2023 Dec 15.
Article in English | MEDLINE | ID: mdl-38102586

ABSTRACT

BACKGROUND: Multi-omics research has the potential to holistically capture intra-tumor variability, thereby improving therapeutic decisions by incorporating the key principles of precision medicine. The purpose of this study is to identify a robust method of integrating features from different sources, such as imaging, transcriptomics, and clinical data, to predict the survival and therapy response of non-small cell lung cancer patients. METHODS: 2996 radiomics, 5268 transcriptomics, and 8 clinical features were extracted from the NSCLC Radiogenomics dataset. Radiomics and deep features were calculated based on the volume of interest in pre-treatment, routine CT examinations, and then combined with RNA-seq and clinical data. Several machine learning classifiers were used to perform survival analysis and assess the patient's response to adjuvant chemotherapy. The proposed analysis was evaluated on an unseen testing set in a k-fold cross-validation scheme. Score- and concatenation-based multi-omics were used as feature integration techniques. RESULTS: Six radiomics features (elongation, cluster shade, entropy, variance, gray-level non-uniformity, and maximal correlation coefficient), six deep features (NasNet-based activations), and three transcriptomics features (OTUD3, SUCGL2, and RQCD1) were found to be significant for therapy response. The examined score-based multi-omics improved the AUC by up to 0.10 on the unseen testing set (0.74 ± 0.06) and the balance between sensitivity and specificity for predicting therapy response for 106 patients, resulting in less biased models and improving upon single-source models that were either highly sensitive or highly specific. Six radiomics features (kurtosis, GLRLM- and GLSZM-based non-uniformity from images with no filtering, biorthogonal, and Daubechies wavelets), seven deep features (ResNet-based activations), and seven transcriptomics features (ELP3, ZZZ3, PGRMC2, TRAK1, ATIC, USP7, and PNPLA2) were found to be significant for the survival analysis.
Accordingly, the survival analysis for 115 patients was also enhanced up to 0.20 by the proposed score-based multi-omics in terms of the C-index (0.79 ± 0.03). CONCLUSIONS: Compared to single-source models, multi-omics integration has the potential to improve prediction performance, increase model stability, and reduce bias for both treatment response and survival analysis.
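Score-based multi-omics fusion as described above (combining per-modality prediction scores) can be sketched as follows; the equal weighting and 0.5 decision threshold are assumptions for illustration, not details from the paper:

```python
import numpy as np

def score_fusion(prob_list, weights=None):
    """Fuse per-modality predicted probabilities at the score level.

    prob_list: list of (n_samples,) probability arrays, one per modality
    (e.g. radiomics, deep features, transcriptomics).
    Returns fused scores and thresholded binary predictions.
    """
    probs = np.stack(prob_list)                      # (n_modalities, n_samples)
    if weights is None:
        weights = np.full(len(prob_list), 1.0 / len(prob_list))
    fused = np.tensordot(weights, probs, axes=1)     # weighted average
    return fused, (fused >= 0.5).astype(int)
```

A highly sensitive model and a highly specific one pull the fused score toward the middle, which is the mechanism behind the less biased behavior reported above.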


Subject(s)
Carcinoma, Non-Small-Cell Lung , Lung Neoplasms , Humans , Carcinoma, Non-Small-Cell Lung/diagnostic imaging , Carcinoma, Non-Small-Cell Lung/genetics , Carcinoma, Non-Small-Cell Lung/therapy , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/genetics , Entropy , Gene Expression Profiling , Machine Learning , Ubiquitin-Specific Peptidase 7 , Ubiquitin-Specific Proteases
7.
Math Biosci Eng ; 20(9): 15859-15882, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37919992

ABSTRACT

We propose a deep feature-based sparse approximation classification technique for classification of breast masses into benign and malignant categories in film screen mammographs. This is a significant application as breast cancer is a leading cause of death in the modern world and improvements in diagnosis may help to decrease rates of mortality for large populations. While deep learning techniques have produced remarkable results in the field of computer-aided diagnosis of breast cancer, there are several aspects of this field that remain under-studied. In this work, we investigate the applicability of deep-feature-generated dictionaries to sparse approximation-based classification. To this end we construct dictionaries from deep features and compute sparse approximations of Regions Of Interest (ROIs) of breast masses for classification. Furthermore, we propose block and patch decomposition methods to construct overcomplete dictionaries suitable for sparse coding. The effectiveness of our deep feature spatially localized ensemble sparse analysis (DF-SLESA) technique is evaluated on a merged dataset of mass ROIs from the CBIS-DDSM and MIAS datasets. Experimental results indicate that dictionaries of deep features yield more discriminative sparse approximations of mass characteristics than dictionaries of imaging patterns and dictionaries learned by unsupervised machine learning techniques such as K-SVD. Of note is that the proposed block and patch decomposition strategies may help to simplify the sparse coding problem and to find tractable solutions. The proposed technique achieves competitive performances with state-of-the-art techniques for benign/malignant breast mass classification, using 10-fold cross-validation in merged datasets of film screen mammograms.
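The block decomposition used above to build overcomplete dictionaries can be illustrated with NumPy; a sketch under the assumption of non-overlapping square blocks flattened into dictionary columns (the paper's exact block geometry is not specified):

```python
import numpy as np

def block_decompose(roi, block):
    """Split a 2-D ROI into non-overlapping block x block dictionary columns.

    Each block is flattened to one column of the returned matrix; trailing
    pixels that do not fill a complete block are cropped.
    """
    h, w = roi.shape
    h, w = h - h % block, w - w % block
    roi = roi[:h, :w]
    cols = (roi.reshape(h // block, block, w // block, block)
               .transpose(0, 2, 1, 3)          # group pixels by block
               .reshape(-1, block * block).T)
    return cols                                 # (block*block, n_blocks)
```

Stacking such columns from many training ROIs yields the overcomplete dictionary against which test-ROI blocks are sparsely approximated.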


Subject(s)
Breast Neoplasms , Breast , Humans , Female , Breast/diagnostic imaging , Breast Neoplasms/diagnosis , Mammography/methods , Diagnosis, Computer-Assisted , Mass Media
8.
BMC Med Imaging ; 23(1): 195, 2023 11 22.
Article in English | MEDLINE | ID: mdl-37993801

ABSTRACT

BACKGROUND: The purpose of this study is to investigate the use of radiomics and deep features obtained from multiparametric magnetic resonance imaging (mpMRI) for grading prostate cancer. We propose a novel approach called multi-flavored feature extraction or tensor, which combines four mpMRI images using eight different fusion techniques to create 52 images or datasets for each patient. We evaluate the effectiveness of this approach in grading prostate cancer and compare it to traditional methods. METHODS: We used the PROSTATEx-2 dataset consisting of 111 patients' images from T2W-transverse, T2W-sagittal, DWI, and ADC images. We used eight fusion techniques to merge T2W, DWI, and ADC images, namely Laplacian Pyramid, Ratio of the low-pass pyramid, Discrete Wavelet Transform, Dual-Tree Complex Wavelet Transform, Curvelet Transform, Wavelet Fusion, Weighted Fusion, and Principal Component Analysis. Prostate cancer images were manually segmented, and radiomics features were extracted using the Pyradiomics library in Python. We also used an Autoencoder for deep feature extraction. We used five different feature sets to train the classifiers: all radiomics features, all deep features, radiomics features linked with PCA, deep features linked with PCA, and a combination of radiomics and deep features. We processed the data, including balancing, standardization, PCA, correlation, and Least Absolute Shrinkage and Selection Operator (LASSO) regression. Finally, we used nine classifiers to classify different Gleason grades. RESULTS: Our results show that the SVM classifier with deep features linked with PCA achieved the most promising results, with an AUC of 0.94 and a balanced accuracy of 0.79. Logistic regression performed best when using only the deep features, with an AUC of 0.93 and balanced accuracy of 0.76. Gaussian Naive Bayes had lower performance compared to other classifiers, while KNN achieved high performance using deep features linked with PCA. 
Random Forest performed well with the combination of deep features and radiomics features, achieving an AUC of 0.94 and balanced accuracy of 0.76. The Voting classifiers showed higher performance when using only the deep features, with Voting 2 achieving the highest performance, with an AUC of 0.95 and balanced accuracy of 0.78. CONCLUSION: Our study concludes that the proposed multi-flavored feature extraction or tensor approach using radiomics and deep features can be an effective method for grading prostate cancer. Our findings suggest that deep features may be more effective than radiomics features alone in accurately classifying prostate cancer.
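"Deep features linked with PCA" above suggests a standard SVD-based projection; a minimal sketch (the centering step and component count are assumptions, since the paper states only that PCA was applied):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project features onto the top principal components via SVD.

    Returns the projected data and the explained-variance ratio of the
    retained components.
    """
    Xc = X - X.mean(axis=0)                       # center columns
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S ** 2) / (S ** 2).sum()
    return Xc @ Vt[:n_components].T, explained[:n_components]
```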


Subject(s)
Multiparametric Magnetic Resonance Imaging , Prostatic Neoplasms , Male , Humans , Multiparametric Magnetic Resonance Imaging/methods , Bayes Theorem , Magnetic Resonance Imaging/methods , Prostatic Neoplasms/diagnostic imaging , Logistic Models , Retrospective Studies
9.
Multimed Tools Appl ; : 1-19, 2023 May 11.
Article in English | MEDLINE | ID: mdl-37362723

ABSTRACT

Yellow rust is a devastating disease that causes significant losses in wheat production worldwide and significantly affects wheat quality. It can be controlled by cultivating resistant cultivars, applying fungicides, and following appropriate agricultural practices. The degree of precaution depends on the extent of the disease, so it is critical to detect the disease as early as possible. The disease causes deformations in the wheat leaf texture that reveal its severity. The gray-level co-occurrence matrix (GLCM) is a conventional texture feature descriptor extracted from gray-level images, and numerous studies in the literature attempt to incorporate texture color with GLCM features to reveal hidden patterns that exist in color channels. Meanwhile, recent advances in image analysis have led to the extraction of data-representative features, so-called deep features. In particular, convolutional neural networks (CNNs) have a remarkable capability for recognizing patterns and show promising results for image classification when fed with image texture. Herein, the feasibility of using a combination of textural features and deep features to determine the severity of yellow rust disease in wheat was investigated. Textural features include both gray-level and color-level information, and a pre-trained DenseNet was employed for deep features. The dataset, called Yellow-Rust-19 and composed of wheat leaf images, was employed. Different classification models were developed using different color spaces (RGB, HSV, and L*a*b*) and two classification methods (SVM and KNN). The combined model named CNN-CGLCM_HSV, which employed HSV and SVM, outperformed the other models with an accuracy of 92.4%.
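A GLCM for one pixel offset, plus the contrast feature derived from it, can be computed directly; a small sketch assuming an already-quantized gray-level image (the study's exact offsets, quantization, and feature set are not specified here):

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4, symmetric=True):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    g = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            g[image[y, x], image[y + dy, x + dx]] += 1
    if symmetric:
        g = g + g.T          # count each pair in both directions
    return g / g.sum()

def glcm_contrast(p):
    """Contrast texture feature: sum of p(i, j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())
```

Leaf deformations show up as off-diagonal mass in the GLCM, which raises contrast-type features and, per the abstract, correlates with disease severity.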

10.
Front Oncol ; 13: 1151257, 2023.
Article in English | MEDLINE | ID: mdl-37346069

ABSTRACT

Skin cancer is a serious disease that affects people all over the world. Melanoma is an aggressive form of skin cancer, and early detection can significantly reduce human mortality. In the United States, approximately 97,610 new cases of melanoma will be diagnosed in 2023. However, challenges such as lesion irregularities, low-contrast lesions, intraclass color similarity, redundant features, and imbalanced datasets make improved recognition accuracy using computerized techniques extremely difficult. This work presents a new framework for skin lesion recognition using data augmentation, deep learning, and explainable artificial intelligence. In the proposed framework, data augmentation is performed at the initial step to increase the dataset size, and then two pretrained deep learning models (Xception and ShuffleNet) are employed. Both models have been fine-tuned and trained using deep transfer learning, and both utilize the global average pooling layer for deep feature extraction. Analysis of this step showed that some important information was missing, so feature fusion was performed. Because fusion increased the computational time, an improved Butterfly Optimization Algorithm was developed; using this algorithm, only the best features are selected and classified using machine learning classifiers. In addition, a GradCAM-based visualization is performed to analyze the important regions in the image. Two publicly available datasets (ISIC2018 and HAM10000) were utilized, achieving improved accuracies of 99.3% and 91.5%, respectively. Comparison of the proposed framework with state-of-the-art methods reveals improved accuracy and lower computational time.

11.
Front Neurosci ; 17: 1145526, 2023.
Article in English | MEDLINE | ID: mdl-37284662

ABSTRACT

Introduction: In the clinical setting, it is increasingly important to detect epileptic seizures automatically, since this could significantly reduce the burden of care for patients suffering from intractable epilepsy. Electroencephalography (EEG) signals record the brain's electrical activity and contain rich information about brain dysfunction. Although EEG is a non-invasive and inexpensive tool for detecting epileptic seizures, visual evaluation of EEG recordings is labor-intensive and subjective and requires significant improvement. Methods: This study aims to develop a new approach to recognize seizures automatically from EEG recordings. During feature extraction from raw EEG input, we construct a new deep neural network (DNN) model. Deep feature maps derived from hierarchically placed layers in a convolutional neural network are fed into several kinds of shallow classifiers to detect anomalies, and the feature maps are reduced in dimensionality using Principal Component Analysis (PCA). Results: By analyzing the EEG Epilepsy dataset and the Bonn epilepsy dataset, we conclude that our proposed method is both effective and robust. These datasets vary significantly in data acquisition, clinical protocols, and digital information storage, making processing and analysis challenging. On both datasets, extensive experiments were performed using a 10-fold cross-validation strategy, demonstrating approximately 100% accuracy for binary and multi-category classification. Discussion: In addition to demonstrating that our methodology outperforms other up-to-date approaches, the results of this study suggest that it can be applied in clinical practice.

12.
Health Inf Sci Syst ; 11(1): 22, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37151916

ABSTRACT

Recognizing emotions accurately in real life is crucial in human-computer interaction (HCI) systems, and electroencephalogram (EEG) signals have been extensively employed to identify emotions. Researchers have used several EEG-based emotion identification datasets to validate their proposed models. In this paper, we employ a novel metaheuristic optimization approach for accurate emotion classification by applying it to select both the channel and rhythm of EEG data. We propose the particle swarm with visit table strategy (PS-VTS) metaheuristic technique to improve the effectiveness of EEG-based human emotion identification. First, the EEG signals are denoised using a low-pass filter, and then rhythm extraction is done using the discrete wavelet transform (DWT). The continuous wavelet transform (CWT) approach transforms each rhythm signal into a rhythm image. A pre-trained MobileNetV2 model is used for deep feature extraction, and a support vector machine (SVM) classifies the emotions. Two models are developed for optimal channel and rhythm sets. In Model 1, optimal channels are selected separately for each rhythm, and global optima are determined in the optimization process according to the best channel sets of the rhythms. In Model 2, the best rhythms are first determined for each channel, and then the optimal channel-rhythm set is selected. Our proposed model obtained accuracies of 99.2871% and 97.8571% for the classification of HA (high arousal) vs. LA (low arousal) and HV (high valence) vs. LV (low valence), respectively, on the DEAP dataset, the highest classification accuracy compared to previously reported methods.
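Rhythm extraction via the DWT can be illustrated with a single-level Haar transform; this is a stand-in for the study's (unspecified) wavelet, chosen because Haar keeps the sketch short and exactly invertible:

```python
import numpy as np

def haar_dwt(signal):
    """Single-level Haar DWT: approximation and detail coefficients."""
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:                          # pad odd-length signals
        x = np.append(x, x[-1])
    a = (x[0::2] + x[1::2]) / np.sqrt(2)    # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)    # high-pass (detail)
    return a, d

def haar_idwt(a, d):
    """Inverse single-level Haar DWT (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x
```

Cascading the low-pass output through further levels yields the standard EEG rhythm sub-bands (delta, theta, alpha, beta, gamma), each of which the paper then renders as a rhythm image via the CWT.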

13.
Front Public Health ; 11: 1123581, 2023.
Article in English | MEDLINE | ID: mdl-37139387

ABSTRACT

Variations in the size and texture of melanoma make the classification procedure more complex in a computer-aided diagnostic (CAD) system. This research proposes an innovative hybrid deep learning-based layer-fusion and neutrosophic-set technique for identifying skin lesions. Off-the-shelf networks are examined to categorize eight types of skin lesions using transfer learning on the International Skin Imaging Collaboration (ISIC) 2019 skin lesion dataset. The top two networks, GoogleNet and DarkNet, achieved accuracies of 77.41% and 82.42%, respectively. The proposed method works in two successive stages: first, boosting the classification accuracy of the trained networks individually. A suggested feature fusion methodology is applied to enrich the extracted features' descriptive power, which promotes the accuracies to 79.2% and 84.5%, respectively. The second stage explores how to combine these networks for further improvement. The error-correcting output codes (ECOC) paradigm is utilized for constructing a set of well-trained true and false support vector machine (SVM) classifiers via fused DarkNet and GoogleNet feature maps, respectively. The ECOC's coding matrices are designed to train each true classifier and its opponent in a one-versus-other fashion. Consequently, contradictions between true and false classifiers in terms of their classification scores create an ambiguity zone quantified by the indeterminacy set. Recent neutrosophic techniques resolve this ambiguity to tilt the balance toward the correct skin cancer class. As a result, the classification score is increased to 85.74%, outperforming recent proposals by a clear margin. The trained models alongside the implementation of the proposed single-valued neutrosophic sets (SVNSs) will be publicly available for aiding relevant research fields.


Subject(s)
Melanoma , Skin Neoplasms , Humans , Skin Neoplasms/diagnosis , Melanoma/diagnosis , Diagnosis, Differential , Support Vector Machine
14.
Med Phys ; 50(7): 4220-4233, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37102270

ABSTRACT

BACKGROUND: Cancer prognosis before and after treatment is key for patient management and decision making. Handcrafted imaging biomarkers (radiomics) have shown potential in predicting prognosis. PURPOSE: However, given the recent progress in deep learning, it is timely and relevant to pose the question: could deep learning based 3D imaging features be used as imaging biomarkers and outperform radiomics? METHODS: Effectiveness, reproducibility in test/retest and across modalities, and correlation of deep features with clinical features such as tumor volume and TNM staging were tested in this study. Radiomics was introduced as the reference image biomarker. For deep feature extraction, we transformed the CT scans into videos and adopted the pre-trained Inflated 3D ConvNet (I3D) video classification network as the architecture. We used four datasets (LUNG 1, n = 422; LUNG 4, n = 106; OPC, n = 605; H&N 1, n = 89) with 1270 samples from different centers and cancer types (lung and head and neck cancer) to test the predictiveness of deep features, and two additional datasets to assess their reproducibility. RESULTS: The top 100 deep features selected by Support Vector Machine-Recursive Feature Elimination (SVM-RFE) achieved concordance indices (CIs) of 0.67 in survival prediction in LUNG 1, 0.87 in LUNG 4, 0.76 in OPC, and 0.87 in H&N 1, while the top 100 radiomics features selected by SVM-RFE achieved CIs of 0.64, 0.77, 0.73, and 0.74, respectively, all statistically significant differences (p < 0.01, Wilcoxon's test). Most selected deep features are not correlated with tumor volume and TNM staging. However, full radiomics features show higher reproducibility than full deep features in a test/retest setting (0.89 vs. 0.62, concordance correlation coefficient). CONCLUSION: The results show that deep features can outperform radiomics while providing different views of tumor prognosis compared to tumor volume and TNM staging.
However, deep features suffer from lower reproducibility than radiomic features and lack the interpretability of the latter.
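The concordance index reported throughout this entry is Harrell's C; a reference NumPy sketch handling right-censoring (scoring tied risk predictions as 0.5 is a common convention, assumed here):

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: fraction of comparable pairs ordered correctly.

    times: observed follow-up times; events: 1 if the event was observed,
    0 if censored; risk_scores: higher score should mean earlier event.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable if i's event occurred strictly
            # before j's observed time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 corresponds to random ranking and 1.0 to a perfect one, which is the scale on which the 0.67 vs. 0.64 (and similar) comparisons above are made.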


Subject(s)
Lung Neoplasms , Tomography, X-Ray Computed , Humans , Reproducibility of Results , Feasibility Studies , Lung Neoplasms/diagnostic imaging , Biomarkers
15.
Sensors (Basel) ; 23(5)2023 Feb 23.
Article in English | MEDLINE | ID: mdl-36904702

ABSTRACT

In recent years, different deep learning frameworks have been introduced for hyperspectral image (HSI) classification. However, the proposed network models have high model complexity and do not provide high classification accuracy when few-shot learning is used. This paper presents an HSI classification method that combines a random patches network (RPNet) and recursive filtering (RF) to obtain informative deep features. The proposed method first convolves image bands with random patches to extract multi-level deep RPNet features. Thereafter, the RPNet feature set is subjected to dimension reduction through principal component analysis (PCA), and the extracted components are filtered using the RF procedure. Finally, the HSI spectral features and the obtained RPNet-RF features are combined to classify the HSI using a support vector machine (SVM) classifier. To test the performance of the proposed RPNet-RF method, experiments were performed on three widely known datasets using a few training samples per class, and classification results were compared with those obtained by other advanced HSI classification methods adapted for small training samples. The comparison showed that RPNet-RF classification achieves higher values of evaluation metrics such as overall accuracy and the Kappa coefficient.

16.
Sensors (Basel) ; 23(4)2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36850708

ABSTRACT

Image tracking and retrieval strategies are of vital importance in visual Simultaneous Localization and Mapping (SLAM) systems. For most state-of-the-art systems, hand-crafted features and bag-of-words (BoW) algorithms are the common solutions, but recent research reports the vulnerability of these traditional algorithms in complex environments. To replace these methods, this work proposes HFNet-SLAM, an accurate and real-time monocular SLAM system built on the ORB-SLAM3 framework and incorporating deep convolutional neural networks (CNNs). The work provides a pipeline of feature extraction, keypoint matching, and loop detection fully based on features from CNNs. The performance of the system has been validated on public datasets against other state-of-the-art algorithms. The results reveal that HFNet-SLAM achieves the lowest errors among systems available in the literature. Notably, HFNet-SLAM obtains an average accuracy of 2.8 cm on the EuRoC dataset in a pure visual configuration, and it doubles the accuracy of ORB-SLAM3 in medium and large environments on the TUM-VI dataset. Furthermore, with the optimisation of TensorRT technology, the entire system can run in real time at 50 FPS.

17.
Neural Netw ; 160: 238-258, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36701878

ABSTRACT

BACKGROUND: The idea of smart healthcare has gradually gained attention as a result of the rapid development of the information technology industry. Smart healthcare uses next-generation technologies such as artificial intelligence (AI) and the Internet of Things (IoT) to intelligently transform current medical practice and make it more efficient, dependable, and individualized. One of the most prominent uses of telemedicine and e-health in medical image analysis is teledermatology, in which telecommunications technologies are used to send medical information to professionals. Because the skin is visually perceptible, teledermatology is a useful method for identifying skin lesions, particularly in rural locations. Dermoscopy is one of the most recent tools for diagnosing skin cancer, and numerous computational approaches for classifying skin malignancies have been proposed in the literature. However, difficulties remain, such as lesions with low contrast, imbalanced datasets, high memory complexity, and the extraction of redundant features. METHODS: In this work, a unified CAD model based on a deep learning framework is proposed for skin lesion segmentation and classification. In the proposed approach, the source dermoscopic images are first pre-processed using a contrast-enhancement-based modified bio-inspired multiple exposure fusion approach. In the second stage, a custom 26-layer convolutional neural network (CNN) architecture is designed to segment the skin lesion regions. In the third stage, four pre-trained CNN models (Xception, ResNet-50, ResNet-101, and VGG16) are modified and trained with transfer learning on the segmented lesion images. In the fourth stage, deep feature vectors are extracted from all the CNN models and fused using the convolutional sparse image decomposition fusion approach.
In the fifth stage, a univariate measurement and Poisson distribution feature selection approach is used to select the best features for classification. Finally, the selected features are fed to a multi-class support vector machine (MC-SVM) for the final classification. RESULTS: The proposed approach was applied to the HAM10000, ISIC2018, ISIC2019, and PH2 datasets and achieved accuracies of 98.57%, 98.62%, 93.47%, and 98.98%, respectively, exceeding previous works. CONCLUSION: Compared with renowned state-of-the-art methods, the experimental results show that the proposed skin lesion detection and classification approach achieves higher performance in both visual and quantitative evaluation.
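The fusion-selection-classification tail of such a pipeline can be sketched with scikit-learn on synthetic feature vectors; the random features stand in for the backbone outputs, and `SelectKBest` with an F-test is a generic stand-in for the paper's univariate/Poisson selection, not the authors' method:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Pretend deep-feature vectors from two backbones for 120 lesion images, 3 classes.
n = 120
y = rng.integers(0, 3, size=n)
feat_net_a = rng.normal(size=(n, 64)) + y[:, None] * 0.5    # weak class signal
feat_net_b = rng.normal(size=(n, 64)) + y[:, None] * 0.5

fused = np.hstack([feat_net_a, feat_net_b])                 # serial feature fusion
X = SelectKBest(f_classif, k=20).fit_transform(fused, y)    # keep 20 best features

# Multi-class SVM (one-vs-one internally) on the selected features.
mean_acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
```

Selecting before classifying keeps the SVM's input compact, which is the point of the paper's fifth stage.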


Subject(s)
Deep Learning , Melanoma , Skin Neoplasms , Humans , Artificial Intelligence , Algorithms , Dermoscopy/methods , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology , Delivery of Health Care
18.
J Digit Imaging ; 36(3): 1248-1261, 2023 06.
Article in English | MEDLINE | ID: mdl-36702987

ABSTRACT

Systems for retrieving and managing content-based medical images are becoming more important, especially as medical imaging technology advances and medical image databases grow. Beyond diagnosis, these systems can also use medical images to gain a deeper understanding of the causes and treatments of different diseases. Achieving these purposes requires an efficient and accurate content-based medical image retrieval (CBMIR) method. This paper proposes an efficient method (RbQE) for the retrieval of computed tomography (CT) and magnetic resonance (MR) images. RbQE is based on expanding the query features and exploiting the pre-trained learning models AlexNet and VGG-19 to extract compact, deep, high-level features from medical images. RbQE comprises two searching procedures: a rapid search and a final search. In the rapid search, the original query is expanded by retrieving the top-ranked images from each class; the query is then reformulated by calculating the mean of the deep features of those top-ranked images, yielding a new query for each class. In the final search, the new query most similar to the original query is used for retrieval from the database. The performance of the proposed method has been compared to state-of-the-art methods on four publicly available standard databases: TCIA-CT, EXACT09-CT, NEMA-CT, and OASIS-MRI. Experimental results show that the proposed method exceeds the compared methods by 0.84%, 4.86%, 1.24%, and 14.34% in average retrieval precision (ARP) for the TCIA-CT, EXACT09-CT, NEMA-CT, and OASIS-MRI databases, respectively.
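The query-expansion step described here (averaging the deep features of the per-class top-ranked images to form class-specific new queries, then keeping the one closest to the original) can be sketched in a few lines of NumPy; the feature dimension and class structure are toy stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy database: 3 classes x 40 images, each a 128-D "deep feature" vector.
centers = rng.normal(size=(3, 128)) * 3.0
db_labels = np.repeat(np.arange(3), 40)
db = centers[db_labels] + rng.normal(size=(120, 128))

query = centers[1] + rng.normal(size=128)        # a query image from class 1

def expand_query(query, db, db_labels, top=5):
    """Rapid search: per class, average the deep features of the
    top-ranked images for the original query to form a new query."""
    dist = np.linalg.norm(db - query, axis=1)
    new_queries = []
    for c in np.unique(db_labels):
        idx = np.flatnonzero(db_labels == c)
        best = idx[np.argsort(dist[idx])[:top]]
        new_queries.append(db[best].mean(axis=0))
    return np.stack(new_queries)

expanded = expand_query(query, db, db_labels)
# Final search: use the expanded query most similar to the original one.
chosen = int(np.argmin(np.linalg.norm(expanded - query, axis=1)))
```

Averaging several relevant neighbours denoises the query vector, which is why expansion tends to raise retrieval precision.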


Subject(s)
Algorithms , Information Storage and Retrieval , Humans , Tomography, X-Ray Computed
19.
Ultrasound Med Biol ; 49(2): 560-568, 2023 02.
Article in English | MEDLINE | ID: mdl-36376157

ABSTRACT

We evaluated the performance of ultrasound image-based deep features and radiomics for differentiating small fat-poor angiomyolipoma (sfp-AML) from small renal cell carcinoma (SRCC). This retrospective study included 194 patients with pathologically proven small renal masses (diameter ≤4 cm; 67 in the sfp-AML group and 127 in the SRCC group). We obtained 206 and 364 images from the sfp-AML and SRCC groups, respectively, with lesions identified by an experienced radiologist. We extracted 4024 deep features using an autoencoder neural network and 1497 radiomics features using the Pyradiomics toolbox; the latter included first-order, shape, high-order, Laplacian-of-Gaussian, and wavelet features. All subjects were allocated to training and testing sets at a ratio of 3:1 using stratified sampling. The least absolute shrinkage and selection operator (LASSO) regression model was applied to select the most diagnostic features, and a support vector machine (SVM) was adopted as the discriminative classifier. An optimal feature subset of 45 deep and 7 radiomics features was screened by the LASSO model. The SVM classifier achieved good performance in discriminating between sfp-AMLs and SRCCs, with areas under the curve (AUCs) of 0.96 and 0.85 in the training and testing sets, respectively. The classifier built from deep and radiomics features can accurately differentiate sfp-AMLs from SRCCs on ultrasound imaging.
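The LASSO-then-SVM pattern used in this study can be sketched with scikit-learn on synthetic data; the feature counts, `alpha`, and class rule below are illustrative choices, not the study's values:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# 194 synthetic "patients": 5 informative features out of 60 stand-ins
# for the pooled deep + radiomics features.
n, p = 194, 60
X = rng.normal(size=(n, p))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

# LASSO drives uninformative coefficients to exactly zero; keep the rest.
keep = np.flatnonzero(Lasso(alpha=0.05).fit(X, y).coef_)

# Train the discriminative classifier only on the surviving features.
X_tr, X_te, y_tr, y_te = train_test_split(
    X[:, keep], y, test_size=0.25, stratify=y, random_state=0)
test_acc = SVC(kernel="rbf").fit(X_tr, y_tr).score(X_te, y_te)
```

The L1 penalty's exact zeros are what make LASSO a feature selector rather than just a regulariser, mirroring the 52-feature subset the study reports.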


Subject(s)
Angiomyolipoma , Carcinoma, Renal Cell , Kidney Neoplasms , Humans , Carcinoma, Renal Cell/diagnostic imaging , Carcinoma, Renal Cell/pathology , Retrospective Studies , Angiomyolipoma/diagnostic imaging , Angiomyolipoma/pathology , Kidney Neoplasms/diagnostic imaging , Kidney Neoplasms/pathology , Ultrasonography
20.
J Clin Med ; 11(24)2022 Dec 09.
Article in English | MEDLINE | ID: mdl-36555950

ABSTRACT

Radiomics investigates the predictive role of quantitative parameters calculated from radiological images. In oncology, tumour segmentation constitutes a crucial step of the radiomic workflow; manual segmentation is time-consuming and prone to inter-observer variability. In this study, a state-of-the-art deep-learning network for automatic segmentation (nnU-Net) was applied to computed tomography images of lung tumour patients, and its impact on the performance of survival radiomic models was assessed. In total, 899 patients were included, from two proprietary datasets and one public dataset. Different network architectures (2D, 3D) were trained and tested on different combinations of the datasets. Automatic segmentations were compared to reference manual segmentations performed by physicians using the DICE similarity coefficient. Subsequently, the accuracy of radiomic models for survival classification based on either manual or automatic segmentations was compared, considering both hand-crafted and deep-learning features. The best agreement between automatic and manual contours (DICE = 0.78 ± 0.12) was achieved by averaging 2D and 3D predictions and applying customised post-processing. The accuracy of the survival classifier (ranging between 0.65 and 0.78) was not statistically different when using manual versus automatic contours, with either hand-crafted or deep features. These results support the promising role nnU-Net can play in automatic segmentation, accelerating the radiomic workflow without impairing model accuracy. Further investigations on different clinical endpoints and populations are encouraged to confirm and generalise these findings.
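The DICE similarity coefficient used above to score agreement between automatic and manual contours is simple to compute; a minimal NumPy version, with toy 2D masks in place of real CT contours:

```python
import numpy as np

def dice(a, b):
    """DICE similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total else 1.0

# Manual vs automatic contour on a toy 2D slice.
manual = np.zeros((10, 10), dtype=bool)
manual[2:8, 2:8] = True               # 36 pixels
auto = np.zeros((10, 10), dtype=bool)
auto[3:8, 2:8] = True                 # 30 pixels, fully inside the manual mask
score = dice(manual, auto)            # 2*30 / (36+30) ≈ 0.909
```

A DICE of 1.0 means identical masks; the study's 0.78 ± 0.12 indicates substantial but imperfect overlap between automatic and manual contours.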
