Results 1 - 9 of 9
1.
Diagnostics (Basel); 13(8), 2023 Apr 12.
Article in English | MEDLINE | ID: mdl-37189496

ABSTRACT

Imaging data fusion is becoming a bottleneck in clinical applications and translational research in medical imaging. This study incorporates a novel multimodality medical image fusion technique in the shearlet domain. The proposed method uses the non-subsampled shearlet transform (NSST) to extract both low- and high-frequency image components. A novel approach is proposed for fusing the low-frequency components using a modified sum-modified Laplacian (MSML)-based clustered dictionary learning technique, while the high-frequency coefficients are fused using directed contrast in the NSST domain. The fused multimodal medical image is then reconstructed with the inverse NSST. Compared with state-of-the-art fusion techniques, the proposed method provides superior edge preservation. According to performance metrics such as standard deviation and mutual information, it is approximately 10% better than existing methods. It also produces excellent visual results in terms of edge preservation, texture preservation, and information content.
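
For readers who want a concrete handle on the fusion rule, the sketch below implements a plain sum-modified Laplacian activity map and a choose-max rule for two low-pass sub-bands in Python/SciPy. It is a simplified illustration only: the paper's MSML modification, the clustered dictionary learning, and the NSST decomposition itself are not reproduced here.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def modified_laplacian(img):
    # ML(x,y) = |2I(x,y) - I(x-1,y) - I(x+1,y)| + |2I(x,y) - I(x,y-1) - I(x,y+1)|
    img = np.asarray(img, dtype=float)
    kx = np.array([[0, 0, 0], [-1, 2, -1], [0, 0, 0]], dtype=float)
    return np.abs(convolve(img, kx)) + np.abs(convolve(img, kx.T))

def sum_modified_laplacian(img, window=3):
    # SML: local sum of the modified Laplacian over a sliding window
    return uniform_filter(modified_laplacian(img), size=window) * window ** 2

def fuse_lowpass(lp_a, lp_b):
    # choose-max rule: keep the coefficient with the higher local activity
    act_a, act_b = sum_modified_laplacian(lp_a), sum_modified_laplacian(lp_b)
    return np.where(act_a >= act_b, lp_a, lp_b)
```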

2.
Artif Intell Med; 139: 102542, 2023 May.
Article in English | MEDLINE | ID: mdl-37100511

ABSTRACT

BACKGROUND/INTRODUCTION: Manual detection and localization of the brain's epileptogenic areas from electroencephalogram (EEG) signals is time-intensive and error-prone, so an automated detection system is highly desirable to support clinical diagnosis. A set of relevant, significant non-linear features plays a major role in building a reliable automated focal-detection system. METHODS: A new feature extraction method is designed to classify focal EEG signals using eleven non-linear geometrical attributes derived from the second-order difference plot (SODP) of rhythms segmented by the Fourier-Bessel series expansion-based empirical wavelet transform (FBSE-EWT). A total of 132 features (2 channels × 6 rhythms × 11 geometrical attributes) were computed. However, some of these features may be non-significant or redundant. Hence, to obtain an optimal set of relevant non-linear features, a new hybridization of the Kruskal-Wallis statistical test (KWS) with VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR), termed the KWS-VIKOR approach, was adopted. KWS-VIKOR operates in two stages. First, significant features are selected using the KWS test with a p-value less than 0.05. Next, the multi-attribute decision-making (MADM)-based VIKOR method ranks the selected features. Several classification methods further validate the efficacy of the top n% ranked features. RESULTS: The proposed framework was evaluated on the Bern-Barcelona dataset. The highest classification accuracy, 98.7%, was achieved with the top 35% ranked features in classifying focal and non-focal EEG signals with the least-squares support vector machine (LS-SVM) classifier. CONCLUSIONS: The achieved results exceed those reported by other methods, so the proposed framework can more effectively assist clinicians in localizing epileptogenic areas.
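
A minimal sketch of the two KWS-VIKOR stages in Python/SciPy, under the assumption of a larger-is-better decision matrix and user-supplied criterion weights (the abstract does not spell these out):

```python
import numpy as np
from scipy.stats import kruskal

def kws_filter(X_focal, X_nonfocal, alpha=0.05):
    # keep feature indices whose focal/non-focal distributions differ
    # significantly under the Kruskal-Wallis test (p < alpha)
    keep = []
    for j in range(X_focal.shape[1]):
        _, p = kruskal(X_focal[:, j], X_nonfocal[:, j])
        if p < alpha:
            keep.append(j)
    return keep

def vikor_rank(D, weights, v=0.5):
    # D: (n_features x n_criteria) decision matrix, larger-is-better criteria
    # returns feature indices ordered best-first by the VIKOR Q index
    f_best, f_worst = D.max(axis=0), D.min(axis=0)
    span = np.where(f_best == f_worst, 1.0, f_best - f_worst)
    W = weights * (f_best - D) / span
    S, R = W.sum(axis=1), W.max(axis=1)  # group utility, individual regret
    Sn = (S - S.min()) / max(np.ptp(S), 1e-12)
    Rn = (R - R.min()) / max(np.ptp(R), 1e-12)
    Q = v * Sn + (1 - v) * Rn
    return np.argsort(Q)
```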


Subject(s)
Signal Processing, Computer-Assisted; Wavelet Analysis; Algorithms; Electroencephalography/methods; Support Vector Machine
3.
Curr Med Imaging; 2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36779492

ABSTRACT

BACKGROUND: Novel solutions for conspicuous, steadily growing neurodegenerative disorders such as Alzheimer's disease rely on tracking and identifying disease development, improvement, and progression. Compared with many clinical or survey-based detection methods, early detection of the Alzheimer's stage is possible from computer-based MR brain images and discrete stochastic processes. AIM: For Alzheimer's stage progression, existing models show that the learning problem comprises two issues: estimating the posterior probabilities of the Alzheimer's stage and computing conditioned statistics of the Alzheimer's end-stage. The proposed model overcomes these issues by restructuring the estimation problem as an EM-centered CT-HMM. METHODS: This paper proposes a novel two-phase framework. The first phase covers feature extraction from magnetic resonance images using a collection of computer vision methods known as bag-of-features (BoF). In the second phase, an EM-centered learning method is used for the continuous-time hidden Markov model (CT-HMM), an efficient approach to modeling Alzheimer's disease progression over time and stages. The proposed CT-HMM is implemented with eight Alzheimer's stages (source: ADNI) to visualize and predict stage progression on the ADNI MRI dataset. RESULTS: The proposed model reported transition posterior probabilities of 0.765 (high-to-low stage progression) and 0.234 (low-to-high stage progression). The model's accuracy and F1 score are estimated as 97.13 and 96.51, respectively. CONCLUSION: The proposed model's accuracy and evaluation metrics are higher than those of previous work on Alzheimer's stage progression and prediction.
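
The continuous-time aspect of a CT-HMM rests on one identity: transition probabilities over an arbitrary observation gap come from the matrix exponential of the generator. Below is a minimal sketch with a hypothetical 8-stage generator matrix; the EM parameter estimation and BoF feature extraction are not shown.

```python
import numpy as np
from scipy.linalg import expm

def stage_transition_probs(Q, dt):
    # CT-HMM core identity: over a gap of length dt between observations,
    # the stage-transition probability matrix is P(dt) = expm(Q * dt)
    return expm(Q * dt)

# hypothetical 8-stage generator (intensity) matrix Q: non-negative
# off-diagonal rates, each row summing to zero
n_stages = 8
rng = np.random.default_rng(0)
Q = rng.uniform(0.01, 0.1, size=(n_stages, n_stages))
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))

P = stage_transition_probs(Q, dt=6.0)   # e.g., six months between MRI visits
assert np.allclose(P.sum(axis=1), 1.0)  # each row is a probability distribution
```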

4.
Curr Med Imaging; 19(2): 182-193, 2023.
Article in English | MEDLINE | ID: mdl-35379137

ABSTRACT

Noise in computed tomography (CT) images may occur due to low radiation doses; the main aim of this paper is therefore to reduce the noise in low-dose CT images so that the risk of a high radiation dose can be avoided. BACKGROUND: The novel coronavirus outbreak has opened new areas of research in medical instrumentation and technology. Medical diagnostics and imaging are one way in which the area and level of infection can be detected. OBJECTIVE: COVID-19 attacks people with weaker immunity, so infants, children, and pregnant women are more vulnerable to the infection and may need CT scanning to determine the infection level. But high-radiation diagnostics are also harmful for them, so the radiation intensity must be reduced significantly, which in turn introduces noise into the CT images. METHOD: This paper introduces a new denoising technique for low-dose COVID-19 CT images using a convolutional neural network (CNN) and a noise-based thresholding method. The main concern of the methodology is reducing the risk associated with radiation during diagnosis. RESULTS: The results are evaluated visually and with standard performance metrics. Comparative analysis shows that the proposed work gives better outcomes. CONCLUSION: The proposed low-dose COVID-19 CT image denoising model therefore has the potential to be effective in various practical medical image processing applications, for both noise suppression and clinical edge preservation.
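
The abstract does not specify the CNN architecture or the noise-based thresholding step, so the following is only a plausible stand-in: a minimal DnCNN-style residual denoiser in PyTorch that learns to predict and subtract the noise.

```python
import torch
import torch.nn as nn

class DenoiseCNN(nn.Module):
    """Residual denoising CNN (DnCNN-style): predict the noise, subtract it."""
    def __init__(self, channels=1, features=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # x: noisy low-dose CT slice, shape (N, 1, H, W)
        return x - self.body(x)
```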


Subject(s)
COVID-19; Pregnancy; Female; Humans; Radiation Dosage; Signal-To-Noise Ratio; COVID-19/diagnostic imaging; Neural Networks, Computer; Tomography, X-Ray Computed/methods
5.
Med Biol Eng Comput; 59(11-12): 2297-2310, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34545514

ABSTRACT

Advances in high-throughput techniques have produced a large number of unknown protein sequences (UPS). Functional characterization of UPS is significant for investigating disease symptoms and for drug repositioning, and protein subcellular localization is imperative for such characterization. Diverse techniques are used to extract features from protein sequences; however, a single feature extraction technique often leads to poor prediction performance. In this paper, two feature augmentations are described, built from sequence-induced, physicochemical, and evolutionary information of the amino acid residues. The augmented features preserve the sequence order information and protein residue properties. Two bacterial protein datasets, Gram-positive (G+) and Gram-negative (G-), are used for the experimental work. After essential preprocessing of the protein datasets, two sets of feature vectors are obtained. These feature vectors are used separately to train different individual and ensemble classifiers, namely decision tree (C4.5), k-nearest neighbor (k-NN), multi-layer perceptron (MLP), Naïve Bayes (NB), support vector machine (SVM), AdaBoost, gradient boosting machine (GBM), and random forest (RF), with fivefold cross-validation. Prediction results show that C4.5 reports the highest overall accuracy: 99.57% on the G+ and 97.47% on the G- dataset with known protein sequences. For the UPS, the overall accuracy is 85.17% on G+ with SVM and 82.45% on G- with MLP.
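
A minimal sketch of the classifier benchmark with fivefold cross-validation in scikit-learn. An entropy-criterion decision tree stands in for C4.5, which scikit-learn does not provide, and X, y are assumed to be the augmented feature vectors and localization labels.

```python
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

CLASSIFIERS = {
    "C4.5 (entropy-tree stand-in)": DecisionTreeClassifier(criterion="entropy"),
    "k-NN": KNeighborsClassifier(),
    "MLP": MLPClassifier(max_iter=1000),
    "NB": GaussianNB(),
    "SVM": SVC(),
    "AdaBoost": AdaBoostClassifier(),
    "GBM": GradientBoostingClassifier(),
    "RF": RandomForestClassifier(),
}

def evaluate(X, y):
    # fivefold cross-validated accuracy for every classifier in the benchmark
    return {name: cross_val_score(clf, X, y, cv=5).mean()
            for name, clf in CLASSIFIERS.items()}
```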


Subject(s)
Neural Networks, Computer; Support Vector Machine; Algorithms; Amino Acid Sequence; Bayes Theorem; Proteins
6.
Comput Biol Med; 136: 104708, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34358996

ABSTRACT

Epilepsy is a neurological disorder that has severely affected many people's lives across the world. Electroencephalogram (EEG) signals are used to characterize the brain's state and detect various disorders. Because EEG signals are non-stationary and non-linear in nature, it is challenging to accurately process and learn from them to detect disorders like epilepsy. This paper proposes an automated learning framework based on the Fourier-Bessel series expansion-based empirical wavelet transform (FBSE-EWT) for detecting epileptic seizures from EEG signals. The scale-space boundary detection method was adopted to segment the Fourier-Bessel series expansion (FBSE) spectrum of time-segmented EEG signals at multiple frame sizes: full, half, quarter, and half-quarter of the recorded signal length. Two time-segmentation approaches were investigated: (1) segmenting signals at multiple frame sizes and (2) segmenting signals at multiple frame sizes with zero-padding of the remaining signal. The FBSE-EWT method was applied to decompose the EEG signals into narrow sub-band signals. Features such as line-length (LL), log-energy-entropy (LEnt), and norm-entropy (NEnt) were computed from sub-band signals across various frequency ranges. The ReliefF feature ranking method was employed to select the most significant features, reducing the computational burden of the models. The top-ranked accumulated features were used for classification with least-squares support vector machine (LS-SVM), support vector machine (SVM), k-nearest neighbor (k-NN), and ensemble bagged tree classifiers. The proposed framework was evaluated on two publicly available benchmark EEG datasets: the Bonn EEG dataset and the Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) scalp EEG dataset. Training and testing of the models were performed using 10-fold cross-validation. The FBSE-EWT based learning framework was compared with other state-of-the-art methods on both datasets. Experimental results showed that the proposed framework achieved 100% classification accuracy on the Bonn EEG dataset and 99.84% on the CHB-MIT scalp EEG dataset.
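
The three feature families have simple standard definitions; below is a sketch applying them to FBSE-EWT sub-bands. The decomposition itself, ReliefF ranking, and the classifiers are omitted, and the norm-entropy exponent p is an assumed value.

```python
import numpy as np

def line_length(x):
    # LL: sum of absolute first differences of the sub-band signal
    return np.sum(np.abs(np.diff(x)))

def log_energy_entropy(x, eps=1e-12):
    # LEnt: sum of log(x[n]^2); eps guards against log(0)
    return np.sum(np.log(x ** 2 + eps))

def norm_entropy(x, p=1.1):
    # NEnt: sum of |x[n]|^p with 1 <= p < 2
    return np.sum(np.abs(x) ** p)

def features_from_subbands(subbands):
    # subbands: iterable of 1-D arrays, e.g., the narrow sub-band
    # signals produced by an FBSE-EWT decomposition (not shown here)
    return np.array([f(sb) for sb in subbands
                     for f in (line_length, log_energy_entropy, norm_entropy)])
```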


Subject(s)
Epilepsy; Wavelet Analysis; Algorithms; Child; Electroencephalography; Epilepsy/diagnosis; Humans; Seizures; Signal Processing, Computer-Assisted; Support Vector Machine
7.
Curr Med Imaging; 16(4): 355-370, 2020.
Article in English | MEDLINE | ID: mdl-32410538

ABSTRACT

BACKGROUND: Biomedical data are filled with continuous real values; such values in the feature set tend to create problems like underfitting, the curse of dimensionality, and an increased misclassification rate due to higher variance. In response, pre-processing techniques applied to the dataset minimize these side effects and have proven successful in maintaining adequate accuracy. AIMS: Feature selection and discretization are two necessary preprocessing steps that have been effectively employed to handle redundancy in biomedical data. However, previous work has lacked a unified effort integrating feature selection and discretization to solve the data redundancy problem, leaving the field disjoint and fragmented. This paper proposes a novel multi-objective dimensionality reduction framework that incorporates both discretization and feature reduction in an ensemble model. Selection of optimal features, and the categorization of features in the subset into discretized and non-discretized groups, are governed by the multi-objective genetic algorithm NSGA-II. Two objectives, minimizing the error rate during feature selection and maximizing the information gain during discretization, are used as the fitness criteria. METHODS: The proposed model uses a wrapper-based feature selection algorithm to select optimal features and categorizes the selected features into two blocks: discretized and non-discretized. Features in the discretized block participate in binary discretization, while features in the second block are not discretized and are used in their original form. RESULTS: To establish the acceptability of the proposed ensemble model, experiments were conducted on fifteen medical datasets, and metrics such as accuracy, mean, and standard deviation were computed to evaluate the classifiers' performance. CONCLUSION: After extensive experiments on these datasets, it can be said that the proposed model improves the classification rate and outperforms the base learners.
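
A hedged sketch of how the two fitness objectives could be evaluated for one NSGA-II candidate, encoding selection and discretization as boolean masks. The wrapper classifier (Naïve Bayes here), the median cut point, and mutual information as the information-gain proxy are all assumptions, and the NSGA-II loop itself is omitted.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

def fitness(select_mask, discretize_mask, X, y):
    # Objective 1 (minimize): wrapper error rate of the selected subset.
    # Objective 2 (minimize): negative information gain of the subset,
    # with the features flagged for discretization binarized first.
    cols = np.flatnonzero(select_mask)
    if cols.size == 0:
        return 1.0, 0.0
    Xs = X[:, cols].astype(float)
    for i, c in enumerate(cols):
        if discretize_mask[c]:
            # median cut as a simple binary discretization (assumption;
            # the paper's GA governs the discretization itself)
            Xs[:, i] = (Xs[:, i] > np.median(Xs[:, i])).astype(float)
    error = 1.0 - cross_val_score(GaussianNB(), Xs, y, cv=5).mean()
    gain = mutual_info_classif(Xs, y, random_state=0).sum()
    return error, -gain  # NSGA-II minimizes both objectives
```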


Subject(s)
Algorithms; Datasets as Topic; Pattern Recognition, Automated/methods; Artificial Intelligence; Humans
8.
J Biomed Inform; 102: 103376, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31935461

ABSTRACT

Inadequate patient samples and costly annotated data generation result in small datasets in the biomedical domain, and predictions from a model trained on a single small dataset fail to yield robust insights. To cope with this data sparsity, a promising strategy of combining data from different related tasks has been exercised in various applications. Motivated by successful work in various bioinformatics applications, we propose a multitask learning model based on multiple kernels that exploits the dependencies among related tasks. This work aims to combine knowledge from experimental studies of different datasets to build stronger predictive models for HIV-1 protease cleavage site prediction. In this study, a set of peptide data from one source is referred to as a 'task'; to integrate interactions from multiple tasks, our method exploits the common features and parameter sharing across the data sources. The proposed framework uses feature integration, feature selection, multiple kernels, and a multifactorial evolutionary algorithm to model multitask learning. The framework considers seven different feature descriptors and four different kernel variants of support vector machines to form the optimal multi-kernel learning model. To validate the effectiveness of the model, performance parameters such as average accuracy and area under the curve were evaluated. We also carried out Friedman and post hoc statistical tests to substantiate the significant improvement achieved by the proposed framework. The results of the extensive experiments confirm that multitask learning can improve performance in cleavage site identification.
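
The multi-kernel part can be illustrated with a precomputed-kernel SVM in scikit-learn. The base kernels, their parameters, and the combination weights below are placeholders for what the paper's multifactorial evolutionary algorithm would learn.

```python
import numpy as np
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel
from sklearn.svm import SVC

def combined_kernel(X1, X2, weights):
    # convex combination of base kernels; the weights (and their sharing
    # across tasks) would be tuned by the evolutionary search, not shown here
    kernels = (linear_kernel(X1, X2),
               polynomial_kernel(X1, X2, degree=2),
               rbf_kernel(X1, X2, gamma=0.1))
    return sum(w * K for w, K in zip(weights, kernels))

def fit_multikernel_svm(X_train, y_train, weights):
    K_train = combined_kernel(X_train, X_train, weights)
    return SVC(kernel="precomputed").fit(K_train, y_train)

# prediction on new peptides:
#   clf.predict(combined_kernel(X_test, X_train, weights))
```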


Subject(s)
Algorithms; Computational Biology; HIV Protease; HIV Protease/chemistry; Learning
9.
Artif Intell Med; 101: 101757, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31813491

ABSTRACT

Multitask learning has established its prominence in the machine learning field through diversified applications, including but not limited to bioinformatics and pattern recognition. Bioinformatics provides a wide range of applications for multitask learning (MTL) methods. Identification of bacterial virulent proteins is one such application, helping in understanding the virulence mechanism for the design of drugs and vaccines. However, the limiting factor for a reliable prediction model is the scarcity of experimentally verified training data; casting the problem in a multitask learning scenario can help. The primary objective of a multitask learning model is to reuse auxiliary data from multiple related domains when predicting a target domain with limited labeled data. Because multiple related datasets are amalgamated, the probability distributions of the features may vary across tasks. Therefore, to deal with shifts in feature distribution, this paper proposes a composite multitask learning framework based on two principles: discovering shared parameters to identify the relationships between tasks, and learning a common underlying feature representation among the related tasks. Through multiple kernels and factorial evolution, the proposed framework is able to discover the shared kernel parameters and the latent feature representation common among the tasks. To examine the benefits of the proposed model, extensive experiments were performed on the freely available dataset from the VirulentPred web server. Based on the results, we found that the multitask learning model performs better than the conventional single-task model. Additionally, our findings indicate that if the distribution difference between tasks is high, training separate models yields slightly better predictions; however, if the distribution difference is low, multitask learning significantly outperforms individual learning.
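
As a generic illustration of the two principles (shared parameters plus a common latent representation), here is a hard-parameter-sharing sketch in PyTorch. Note that the paper's actual model is built on multi-kernel SVMs with factorial evolution, not a neural network; this is only an analogy for the shared-representation idea.

```python
import torch
import torch.nn as nn

class SharedRepMTL(nn.Module):
    """Hard parameter sharing: one common encoder, one head per task."""
    def __init__(self, in_dim, hidden=64, n_tasks=3):
        super().__init__()
        # shared encoder plays the role of the common latent representation
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # task-specific output heads keep what is not shared between tasks
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, 1) for _ in range(n_tasks)])

    def forward(self, x, task_id):
        # tasks share the latent representation but use their own head
        return self.heads[task_id](self.shared(x))
```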


Subject(s)
Bacterial Proteins/physiology; Computer Simulation; Machine Learning; Virulence/physiology; Algorithms