1.
Biomed Tech (Berl) ; 69(2): 181-192, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-37871189

ABSTRACT

Automatic segmentation of abnormal regions from head MRI is a challenging task in medical image analysis. The abnormality, in the form of a tumor, comprises uncontrolled cell growth. Automatic identification of the affected cells by computerized software has been in demand for several years as a way to provide radiologists with a second opinion. In this paper, a new machine-learning-based clustering approach is introduced that delineates the tumor region in the input MRI using disjoint tree generation followed by tree merging. The algorithm is further improved by incorporating joint probabilities and nearest neighbors, and is then automated to determine the number of clusters, together with their nearest neighbors, required for semantic segmentation of the tumor cells. The proposed algorithm achieves good semantic segmentation results, with a DB index of 0.11 and a Dunn index of 13.18 on the SMS dataset, while experiments on the BRATS 2015 dataset yield Dice complete = 80.5%, Dice core = 73.2%, and Dice enhanced = 62.8%. Comparative analysis against benchmark models and algorithms demonstrates the model's significance and its applicability to semantic segmentation of tumor cells, with an average accuracy increment of around ±2.5% over machine learning baselines.
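As a point of reference for the Dice figures quoted above, the sketch below computes the standard Dice overlap between a predicted and a ground-truth binary mask; the function name and the toy masks are illustrative and not taken from the paper.

```python
# Minimal sketch: Dice coefficient between two binary segmentation masks,
# the metric behind the BRATS "complete/core/enhanced" scores quoted above.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks of the same shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2-D "slices": 1 marks tumour voxels.
pred = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
truth = np.array([[0, 1, 1], [1, 1, 0], [0, 0, 0]])
print(f"Dice = {dice(pred, truth):.3f}")  # 0.857 for this toy pair
```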


Subjects
Brain Neoplasms, Humans, Brain Neoplasms/diagnostic imaging, Magnetic Resonance Imaging/methods, Algorithms, Machine Learning, Cluster Analysis, Image Processing, Computer-Assisted/methods
2.
Article in English | MEDLINE | ID: mdl-37104103

ABSTRACT

The boundaries and regions between individual classes in biomedical image classification are hazy and overlapping. These overlapping features make predicting the correct class for biomedical imaging data a difficult diagnostic task, so precise classification frequently requires gathering all available information before a decision is made. This paper presents a novel deep-layered architecture based on neuro-fuzzy-rough intuition to predict hemorrhages from fractured bone images and head CT scans. To deal with data uncertainty, the proposed design employs a parallel pipeline with rough-fuzzy layers, in which the rough-fuzzy function acts as a membership function capable of processing rough-fuzzy uncertainty information. This not only improves the deep model's overall learning process but also reduces feature dimensionality, strengthening the model's learning and self-adaptation capabilities. In experiments, the proposed model performed well, with training and testing accuracies of 96.77% and 94.52%, respectively, in detecting hemorrhages from fractured head images. Comparative analysis shows that the model outperforms existing models by an average of 2.6 ± 0.90% across various performance metrics.
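A hedged sketch of the rough-fuzzy membership idea mentioned above: each feature value receives a fuzzy membership degree, and a rough lower/upper interval expresses how certain that degree is. The Gaussian membership and the 0.2 tolerance below are assumptions made for illustration, not the layer the paper defines.

```python
import numpy as np

def fuzzy_membership(x, centre, sigma):
    """Gaussian fuzzy membership of x in a class centred at `centre`."""
    return np.exp(-((x - centre) ** 2) / (2.0 * sigma ** 2))

def rough_fuzzy_interval(x, centre, sigma, tolerance=0.2):
    """(lower, upper) rough approximations of the membership degree."""
    mu = fuzzy_membership(x, centre, sigma)
    lower = np.clip(mu - tolerance, 0.0, 1.0)  # "definitely a member" degree
    upper = np.clip(mu + tolerance, 0.0, 1.0)  # "possibly a member" degree
    return lower, upper

features = np.array([0.1, 0.45, 0.9])
print(rough_fuzzy_interval(features, centre=0.5, sigma=0.2))
```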

3.
IEEE J Biomed Health Inform ; 27(2): 664-672, 2023 02.
Article in English | MEDLINE | ID: mdl-35394919

ABSTRACT

The use of machine-learning-enabled dynamic Internet of Medical Things (IoMT) systems, combining multiple technologies for digital healthcare applications, has been growing steadily in practice. Machine learning plays a vital role in IoMT systems in balancing the trade-off between delay and energy. However, data fraud against traditional learning models in distributed IoMT systems for healthcare applications remains a critical research problem. This study devises a federated learning-based, blockchain-enabled task scheduling (FL-BETS) framework with different dynamic heuristics. It considers healthcare applications that have both hard constraints (e.g., deadlines) and soft constraints (e.g., resource energy consumption) during execution on distributed fog and cloud nodes. The goal of FL-BETS is to detect data fraud and ensure privacy preservation at various levels, such as local fog nodes and remote clouds, with minimum energy consumption and delay, while satisfying the deadlines of healthcare workloads. The study also formulates the underlying mathematical model. In the performance evaluation, FL-BETS outperforms existing machine learning and blockchain mechanisms in fraud analysis, data validation, and energy and delay constraints for healthcare applications.
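A minimal scheduling sketch in the spirit of the deadline (hard) and energy (soft) constraints described above: each task goes to the fog or cloud node with the lowest energy cost among those that can still meet its deadline, with a fallback to the fastest node. Node speeds, energy figures, and task sizes are invented for illustration; the federated-learning and blockchain layers of FL-BETS are not modelled here.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    mips: float           # processing speed (million instructions per second)
    energy_per_mi: float  # energy cost per million instructions

@dataclass
class Task:
    name: str
    length_mi: float      # workload in million instructions
    deadline_s: float     # hard constraint

nodes = [Node("fog-1", mips=500, energy_per_mi=0.4),
         Node("fog-2", mips=800, energy_per_mi=0.6),
         Node("cloud", mips=4000, energy_per_mi=1.5)]

def schedule(task: Task) -> Node:
    """Cheapest node that meets the deadline; fastest node if none can."""
    feasible = [n for n in nodes if task.length_mi / n.mips <= task.deadline_s]
    if feasible:
        return min(feasible, key=lambda n: task.length_mi * n.energy_per_mi)
    return min(nodes, key=lambda n: task.length_mi / n.mips)

for t in [Task("ecg-batch", 1200, 2.0), Task("ct-segmentation", 9000, 3.0)]:
    n = schedule(t)
    print(f"{t.name} -> {n.name}: "
          f"time {t.length_mi / n.mips:.2f}s, "
          f"energy {t.length_mi * n.energy_per_mi:.0f} units")
```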


Subjects
Blockchain, Internet of Things, Humans, Privacy, Delivery of Health Care, Computer Communication Networks
4.
Comput Electr Eng ; 101: 108113, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35692868

ABSTRACT

The world's outlook on health infrastructure has changed drastically due to COVID-19, creating the need for emerging technologies that minimize interactions between patients and health workers. Consequently, a secure and energy-efficient Internet of Medical Things (IoMT)-enabled wireless sensor network (WSN) for communicable infectious diseases is proposed that utilizes a genetic algorithm. The proposed system, called OptiGeA, makes use of movable sinks in IoT-enabled WSNs for healthcare. The OptiGeA protocol elects cluster heads (CHs) by combining energy, density, distance, and heterogeneous node capacity into its fitness function. Additionally, a novel deployment technique and a multiple-mobile-sink approach are proposed to reduce the transmission distance between sink and CH during system operation, mitigating hotspot issues. Simulations show that the OptiGeA protocol outperforms state-of-the-art protocols across different performance metrics.
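An illustrative cluster-head fitness of the kind the OptiGeA genetic algorithm could score candidates with, combining residual energy, neighbour density, distance to the mobile sink, and node capacity. The weighted-sum form and the weights are assumptions made for this sketch, not the published formula.

```python
import math

def ch_fitness(node, sink, w_energy=0.4, w_density=0.3, w_distance=0.3):
    """Higher is better; `node` is a dict with the fields used below."""
    dist = math.dist(node["pos"], sink)
    distance_term = 1.0 / (1.0 + dist)          # closer to the sink -> higher
    score = (w_energy * node["residual_energy"] +
             w_density * node["neighbour_density"] +
             w_distance * distance_term)
    return score * node["capacity"]             # heterogeneous capacity factor

node = {"pos": (20.0, 35.0), "residual_energy": 0.8,
        "neighbour_density": 0.6, "capacity": 1.2}
print(f"fitness = {ch_fitness(node, sink=(25.0, 40.0)):.3f}")
```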

5.
Soft comput ; : 1-29, 2022 May 09.
Article in English | MEDLINE | ID: mdl-35574265

ABSTRACT

The rapid growth of data generated by applications in engineering, biotechnology, energy, and other fields has made high-dimensional data mining a crucial challenge. Large amounts of data, especially high-dimensional data, may contain many irrelevant, redundant, or noisy features, which can negatively affect the accuracy and efficiency of the industrial data mining process. Recently, several meta-heuristic optimization algorithms have been used to build feature selection techniques for dealing with the vast dimensionality problem. Despite their ability to find near-optimal feature subsets of the search space, these algorithms still face some global optimization challenges. This paper proposes an improved version of the sooty tern optimization algorithm (STOA), named ST-AL, to improve search performance on high-dimensional industrial optimization problems. ST-AL boosts the performance of STOA through four strategies. The first is a controlled randomization parameter that balances the exploration and exploitation stages during the search and avoids falling into local optima. The second is a new exploration phase based on the ant lion (AL) algorithm. The third improves the STOA exploitation phase by modifying the main position-update equation. Finally, greedy selection discards poorly generated populations and keeps the search from diverging from existing promising regions. To evaluate the proposed ST-AL algorithm, it was employed as a global optimization method to find the optima of ten CEC2020 benchmark functions. It was also applied as a feature selection approach on 16 benchmark datasets from the UCI repository and compared with seven well-known optimization-based feature selection methods. The experimental results reveal the algorithm's superiority in avoiding local minima and increasing the convergence rate. Compared with state-of-the-art algorithms, i.e., ALO, STOA, PSO, GWO, HHO, MFO, and MPA, the mean accuracy achieved is in the range 0.94-1.00.
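A sketch of two of the ingredients above under common wrapper feature-selection conventions (assuming scikit-learn is available; these are not the exact ST-AL formulas): a fitness that trades classification error against subset size, and the greedy step that keeps an offspring subset only if it improves fitness.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fs_fitness(mask, X, y, alpha=0.99):
    """Lower is better: alpha * error + (1 - alpha) * selected/total features."""
    if not mask.any():
        return 1.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, mask], y, cv=3).mean()
    return alpha * (1.0 - acc) + (1.0 - alpha) * mask.sum() / mask.size

def greedy_keep(old_mask, new_mask, X, y):
    """Greedy selection: discard the offspring if it does not improve fitness."""
    return new_mask if fs_fitness(new_mask, X, y) < fs_fitness(old_mask, X, y) \
        else old_mask

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
y = (X[:, 0] + X[:, 3] > 0).astype(int)          # only features 0 and 3 matter
old = np.ones(8, dtype=bool)                     # all features selected
new = np.array([1, 0, 0, 1, 0, 0, 0, 0], dtype=bool)
print(greedy_keep(old, new, X, y))
```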

6.
Comput Math Methods Med ; 2022: 7751263, 2022.
Article in English | MEDLINE | ID: mdl-35096136

ABSTRACT

Epileptic seizures occur due to brain abnormalities that can indirectly affect a patient's health. They occur abruptly, without warning symptoms, and thus increase mortality. Almost 1% of the world's population suffers from epileptic seizures. Predicting seizures before onset is beneficial because it allows them to be prevented by medication. Nowadays, modern computational tools, machine learning, and deep learning methods are used to predict seizures from EEG. However, EEG signals may be corrupted by background noise, and artifacts such as eye blinks and muscle movements can cause "pops" and electrical interference in the signal, which are cumbersome to detect by visual inspection in long-duration recordings. Because of these limitations, automated detection of interictal spikes and epileptic seizures is preferred, as it is an essential tool for examining and scrutinizing EEG recordings more precisely. These considerations motivate this review of automated schemes that can help neurologists categorize epileptic and non-epileptic signals. In preparing this review, it was observed that feature selection and classification are the main challenges in epilepsy prediction algorithms. This paper surveys techniques from the last few years, organized by the features and classifiers they use; the methods presented give a detailed understanding of seizure prediction and of future research directions.


Assuntos
Aprendizado Profundo , Diagnóstico por Computador/métodos , Eletroencefalografia/métodos , Aprendizado de Máquina , Convulsões/diagnóstico , Algoritmos , Teorema de Bayes , Biologia Computacional , Bases de Dados Factuais/estatística & dados numéricos , Diagnóstico por Computador/estatística & dados numéricos , Eletroencefalografia/estatística & dados numéricos , Epilepsia/diagnóstico , Humanos , Modelos Logísticos , Redes Neurais de Computação , Convulsões/classificação , Processamento de Sinais Assistido por Computador , Razão Sinal-Ruído , Máquina de Vetores de Suporte
7.
Comput Intell Neurosci ; 2022: 1070697, 2022.
Article in English | MEDLINE | ID: mdl-35047027

ABSTRACT

Chronic illnesses like chronic respiratory disease, cancer, heart disease, and diabetes are threats to humans around the world. Among them, heart disease, with its disparate features and symptoms, complicates diagnosis. With the emergence of smart wearable gadgets, fog computing and Internet of Things (IoT) solutions have become necessary for diagnosis. The proposed model integrates edge, fog, and cloud computing for accurate and fast delivery of outcomes. The hardware components collect data from different patients; significant heart features are extracted from the signals, and features from other attributes are gathered as well. All these features are fed into a diagnostic system based on an Optimized Cascaded Convolutional Neural Network (CCNN), whose hyperparameters are tuned by Galactic Swarm Optimization (GSO). In the performance analysis, the precision of the suggested GSO-CCNN is 3.7%, 3.7%, 3.6%, 7.6%, 67.9%, 48.4%, 33%, 10.9%, and 7.6% higher than that of PSO-CCNN, GWO-CCNN, WOA-CCNN, DHOA-CCNN, DNN, RNN, LSTM, CNN, and CCNN, respectively. The comparative analysis thus confirms the efficiency of the suggested system over conventional models.
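A skeletal view of the hyperparameter search described above: a swarm of candidate CCNN configurations is scored and pulled toward the best one found so far. The CCNN is replaced by a stub objective, the plain attraction update below is a stand-in for Galactic Swarm Optimization, and the bounds are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
bounds = np.array([[1e-4, 1e-1],   # learning rate
                   [8, 64],        # filters per layer
                   [2, 6]])        # number of cascaded blocks

def validation_accuracy(params):
    """Stub for 'train the CCNN with these hyperparameters and validate it'."""
    lr, filters, depth = params    # integer hyperparameters kept continuous here
    return (1.0 - 0.1 * abs(np.log10(lr) + 2.5)
                - 0.002 * abs(filters - 32)
                - 0.02 * abs(depth - 4))         # synthetic, single-peak score

swarm = rng.uniform(bounds[:, 0], bounds[:, 1], size=(10, 3))
best = max(swarm, key=validation_accuracy).copy()
for _ in range(30):
    swarm += 0.3 * (best - swarm) + rng.normal(scale=0.05, size=swarm.shape)
    swarm = np.clip(swarm, bounds[:, 0], bounds[:, 1])
    best = max(np.vstack([swarm, [best]]), key=validation_accuracy).copy()
print("best hyperparameters found:", np.round(best, 4))
```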


Subjects
Deep Learning, Heart Diseases, Internet of Things, Cloud Computing, Humans, Neural Networks, Computer
8.
Comput Intell Neurosci ; 2022: 6390260, 2022.
Article in English | MEDLINE | ID: mdl-35082843

ABSTRACT

Understanding the surrounding scene is a critical component of any self-driving system. Accurate real-time processing of the visual signal into pixel-wise classified images, known as semantic segmentation, is critical for scene comprehension and for subsequent acceptance of this new technology. Because of the intricate interaction between pixels in each frame of the incoming camera data, such efficiency in terms of processing time and accuracy could not be achieved before recent advances in deep learning. In this study we present an effective approach to semantic segmentation for self-driving automobiles. We combine deep learning architectures such as convolutional neural networks and autoencoders with state-of-the-art components such as feature pyramid networks and bottleneck residual blocks to build our model. The CamVid dataset, with considerable data augmentation, is used to train and test the model. To validate the proposed model, we compare our results with various baseline models reported in the literature.
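A hedged PyTorch sketch of one building block named above, the bottleneck residual block (1x1 reduce, 3x3 convolution, 1x1 expand, identity skip); the channel sizes are illustrative, and the paper's full encoder-decoder and feature-pyramid wiring are not reproduced.

```python
import torch
import torch.nn as nn

class BottleneckResidual(nn.Module):
    """1x1 -> 3x3 -> 1x1 convolutions with an identity skip connection."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        mid = channels // reduction
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))

block = BottleneckResidual(64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```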


Subjects
Deep Learning, Autonomous Vehicles, Image Processing, Computer-Assisted, Neural Networks, Computer, Semantics
9.
Biomed Tech (Berl) ; 66(2): 201-208, 2021 Apr 27.
Article in English | MEDLINE | ID: mdl-32776890

ABSTRACT

The quality of a medical image plays a major role in radiologists' decision making. There is a clear visual difference between natural-scene color images and medical images: because of low illumination and the absence of color information, medical images demand more attention from radiologists during decision making. In this paper a new approach is proposed that enhances the quality of Magnetic Resonance (MR) images. The proposed approach uses the spectral information, in the form of amplitude and frequency, present within the MR image slices to perform the enhancement. The extracted and enhanced spectral information gives better visualization than the original image generated by the MR scanner. Quantitative analysis suggests that the new method outperforms traditional state-of-the-art image enhancement methods.
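A generic illustration of amplitude/frequency-based enhancement of a 2-D slice (not the specific procedure this paper proposes): the slice is taken to the Fourier domain, its amplitude spectrum is mildly boosted at higher spatial frequencies, and the result is transformed back.

```python
import numpy as np

def spectral_enhance(img: np.ndarray, gain: float = 1.5) -> np.ndarray:
    """Boost high-frequency amplitude while keeping the phase unchanged."""
    F = np.fft.fftshift(np.fft.fft2(img))
    amplitude, phase = np.abs(F), np.angle(F)
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt(xx ** 2 + yy ** 2) / np.sqrt((h / 2) ** 2 + (w / 2) ** 2)
    boosted = amplitude * (1.0 + (gain - 1.0) * radius)  # emphasise high freq.
    out = np.fft.ifft2(np.fft.ifftshift(boosted * np.exp(1j * phase))).real
    return np.clip(out, img.min(), img.max())

slice_ = np.random.default_rng(0).random((64, 64))  # stand-in MR slice
print(spectral_enhance(slice_).shape)
```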


Subjects
Brain Neoplasms/physiopathology, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Algorithms, Humans, Image Enhancement/methods
10.
J Med Syst ; 43(3): 55, 2019 Jan 29.
Article in English | MEDLINE | ID: mdl-30694396

ABSTRACT

Gait analysis is considered the most systematic study of human motion. It combines visual and analytical assessment of the individual with mechanical instrumentation for measuring body movement, muscle activity, and body mechanics. Past studies have focused on the gait of various animal locomotion patterns and, for humans, mainly on sports biomechanics. This paper aims to quantify gait performance with the Jaipur Knee, one of the most widely used prostheses in the Indian population; gait data for the Jaipur knee joint have not been available to date. The study seeks to predict the performance of the Jaipur knee joint in terms of gait symmetry in transfemoral Indian amputees, since gait symmetry may form the basis for recommending a knee joint to prosthetic patients. Kinematic and kinetic parameters are used together to quantify the joint's performance and evaluate gait symmetry. This research will help clinicians predict, and further prevent, the degenerative musculoskeletal effects generally seen in unilateral transfemoral amputees.
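One common way to express the gait symmetry discussed above is the symmetry index, which compares a kinematic or kinetic quantity between the prosthetic and the intact limb (0% means perfect symmetry). The formula below is a standard one from the gait literature, not necessarily the exact metric used in this study, and the step-length values are made up.

```python
def symmetry_index(prosthetic: float, intact: float) -> float:
    """SI = 2 * (Xp - Xi) / (Xp + Xi) * 100, in percent."""
    return 2.0 * (prosthetic - intact) / (prosthetic + intact) * 100.0

# Hypothetical step lengths in metres for the prosthetic and intact sides.
print(f"step-length SI = {symmetry_index(0.58, 0.64):+.1f} %")  # about -9.8 %
```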


Subjects
Amputees/rehabilitation, Artificial Limbs, Gait/physiology, Knee Joint/physiology, Leg/physiology, Adult, Aged, Biomechanical Phenomena, Female, Humans, Male, Middle Aged, Movement/physiology, Range of Motion, Articular
11.
Comput Methods Programs Biomed ; 137: 195-201, 2016 Dec.
Article in English | MEDLINE | ID: mdl-28110724

ABSTRACT

BACKGROUND AND OBJECTIVES: In machine learning, the accuracy of a system depends on its classification results, and classification accuracy plays an imperative role in many domains. Non-parametric classifiers like K-Nearest Neighbor (KNN) are among the most widely used classifiers for pattern analysis. Besides its ease, simplicity, and effectiveness, the main problem with the KNN classifier is selecting the number of nearest neighbors, i.e., "k", used in the computation. At present it is hard to find, with any statistical algorithm, the optimal value of "k" that yields the best accuracy in terms of a low misclassification error rate. METHOD: Motivated by this problem, a new sample-space-reduction weighted voting rule (AVNM) is proposed for classification in machine learning. The proposed AVNM rule is non-parametric, like KNN. AVNM uses a weighted voting mechanism with sample space reduction to learn and assign the predicted class label of an unidentified sample. Unlike the KNN algorithm, AVNM requires no initial selection of a predefined variable or number of neighbors, and it also reduces the effect of outliers. RESULTS: To verify the performance of the proposed AVNM classifier, experiments were conducted on 10 standard datasets from the UCI repository and one manually created dataset. The experimental results show that the proposed AVNM rule outperforms the KNN classifier and its variants, with confusion-matrix-based accuracy confirming the higher accuracy of the AVNM rule. CONCLUSIONS: The proposed AVNM rule is based on a sample space reduction mechanism for identifying an optimal number of nearest neighbors. AVNM achieves better classification accuracy and a lower error rate than the state-of-the-art KNN algorithm and its variants. The proposed rule automates nearest-neighbor selection and improves the classification rate on the UCI datasets and the manually created dataset.
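To make the weighted-voting idea concrete, the sketch below implements a plain distance-weighted vote over all training samples; the sample-space-reduction step and the parameter-free neighbour selection that distinguish AVNM from ordinary KNN are not reproduced here.

```python
import numpy as np

def weighted_vote_predict(X_train, y_train, x, eps=1e-9):
    """Each training sample votes for its class with weight 1 / distance."""
    d = np.linalg.norm(X_train - x, axis=1)
    weights = 1.0 / (d + eps)
    classes = np.unique(y_train)
    scores = np.array([weights[y_train == c].sum() for c in classes])
    return classes[scores.argmax()]

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(weighted_vote_predict(X, y, np.array([0.15, 0.05])))  # predicts class 0
```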


Subjects
Machine Learning, Mathematics, Algorithms, Models, Theoretical