Results 1-13 of 13
1.
Heliyon; 10(3): e25257, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38327435

ABSTRACT

Image encryption applies cryptographic techniques to convert the content of an image into an unreadable format, ensuring that unauthorized users cannot simply interpret or access the actual visual details. Commonly employed schemes use symmetric-key algorithms to encrypt the image data, requiring a secret key for decryption. This study introduces a new Chaotic Image Encryption Algorithm with an Improved Bonobo Optimizer and DNA Coding (CIEAIBO-DNAC) for enhanced security. The presented CIEAIBO-DNAC technique involves several processes: initial value generation, substitution, diffusion, and decryption. First, the key is derived from the input image's pixel values via the MD5 hash function, and the resulting hash value is used as the initial value of the chaotic model to boost key sensitivity. The technique then uses the Improved Bonobo Optimizer (IBO) algorithm to scramble pixel positions within each block, and scrambling among the blocks also takes place. In the diffusion stage, DNA encoding, obfuscation, and decoding are carried out to produce the encrypted image. Extensive experimental evaluations and security analyses assess the outcome of the CIEAIBO-DNAC technique. The simulation results demonstrate excellent security properties, including resistance to several attacks, indicating that the technique can be applied to real-time image encryption scenarios.
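
A minimal sketch of the plaintext-dependent key idea described above: the MD5 digest of the image seeds a chaotic map whose output drives XOR diffusion. The logistic map stands in for the paper's chaotic model, and the IBO block scrambling and DNA coding stages are omitted; all parameters here are illustrative assumptions.

```python
import hashlib
import numpy as np

def keystream_from_digest(digest: bytes, n: int) -> np.ndarray:
    """Iterate a logistic map seeded from an MD5 digest to get n key bytes."""
    x = (int.from_bytes(digest, "big") % (10**8)) / 10**8 or 0.5
    ks = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = 3.99 * x * (1.0 - x)          # chaotic regime of the logistic map
        ks[i] = int(x * 256) % 256
    return ks

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
digest = hashlib.md5(img.tobytes()).digest()   # key depends on the plaintext
ks = keystream_from_digest(digest, img.size)
cipher = (img.reshape(-1) ^ ks).reshape(img.shape)
# XOR diffusion is self-inverse: a receiver holding the digest recovers the image.
assert np.array_equal((cipher.reshape(-1) ^ ks).reshape(img.shape), img)
```

Because the digest depends on the plaintext, it must be conveyed to the receiver as part of the secret key; this dependence is what gives the scheme its key sensitivity.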

2.
Sensors (Basel); 24(3), 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38339452

ABSTRACT

Advancements in sensing technology have expanded the capabilities of both wearable devices and smartphones, which are now commonly equipped with inertial sensors such as accelerometers and gyroscopes. Initially, these sensors served device-level features, but they can now support a wide variety of applications. Human activity recognition (HAR) is an active research area with applications in health monitoring, sports, fitness, and medicine. In this research, we designed an advanced system that recognizes different human locomotion and localization activities. The data were collected from raw sensors and therefore contain noise. In the first step, a Chebyshev type 1 filter cleans the raw sensor data, and the signal is then segmented using Hamming windows. Features are then extracted for the different sensors, and recursive feature elimination selects the most informative ones. SMOTE data augmentation addresses the imbalanced nature of the ExtraSensory dataset. Finally, the augmented and balanced data are sent to a long short-term memory (LSTM) deep learning classifier. The datasets used in this research were Real-World HAR, Real-Life HAR, and ExtraSensory. The presented system achieved 89% accuracy on Real-Life HAR, 85% on Real-World HAR, and 95% on ExtraSensory, outperforming the available state-of-the-art methods.
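
A sketch of the described denoising and segmentation front end, assuming an illustrative 50 Hz sampling rate, 10 Hz cutoff, and 128-sample Hamming windows (the abstract does not give the actual parameters):

```python
import numpy as np
from scipy.signal import cheby1, filtfilt

def preprocess(signal, fs=50.0, cutoff=10.0, win_len=128, step=64):
    # 4th-order Chebyshev type 1 low-pass filter with 1 dB passband ripple.
    b, a = cheby1(N=4, rp=1, Wn=cutoff / (fs / 2), btype="low")
    clean = filtfilt(b, a, signal)

    # Slide a Hamming window over the cleaned signal to form segments.
    window = np.hamming(win_len)
    segments = [clean[s:s + win_len] * window
                for s in range(0, len(clean) - win_len + 1, step)]
    return np.array(segments)

acc_x = np.random.randn(1000)    # stand-in for one accelerometer axis
segments = preprocess(acc_x)     # shape: (num_windows, 128)
```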


Subjects
Exercise, Wearable Electronic Devices, Humans, Locomotion, Human Activities, Recognition (Psychology)
3.
Environ Res; 246: 118171, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38215925

ABSTRACT

Coastal arid regions resemble deserts in that they receive significantly less rainfall, under 10 cm. Coastal droughts, perhaps the worst of natural disasters, can only be detected using reliable monitoring systems. Creating a reliable drought forecast model and determining how well various models can analyze drought factors in coastal arid regions are two of the biggest obstacles in this field. Different time-series methods and machine-learning models have traditionally been utilized in forecasting strategies. Deep learning is promising for describing the complex interplay between coastal drought and its contributing variables, yet despite its potential to enhance our understanding of drought features, it has not been widely applied; the current investigation employs a deep learning strategy. Drought indices are commonly used to characterize conditions, and the Standardized Precipitation Evapotranspiration Index (SPEI) was chosen because it incorporates both temperature and precipitation in its computation. An integrated coastal drought monitoring model was presented and validated using convolutional long short-term memory with self-attention (SA-CLSTM). The Climatic Research Unit (CRU) dataset, which spans 1901-2018, was mined for the drought index and predictor data. To learn how LSTM forecasting could enhance drought forecasting, we analyzed the findings across numerous drought parameters (drought severity, drought category, and geographic variation). The model's ability to predict drought intensity was assessed using the Coefficient of Determination (R2), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). The model achieved R2 values above 0.99 for both SPEI 1 and SPEI 3. The range of predicted outcomes for each drought group was analyzed using a multi-class Receiver Operating Characteristic Area Under the Curve (ROC-AUC) method; the AUC was 0.99 for both SPEI 1 and SPEI 3. The results indicate an improvement over machine learning models for one-month-ahead forecasts across various drought conditions. These findings may be used to mitigate drought, and further improvement can be achieved by testing other models.
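
As a small worked example of the reported evaluation protocol, the sketch below scores a forecast series with R2, RMSE, and MAE via scikit-learn; the SPEI values are placeholders, not data from the study.

```python
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

y_true = np.array([-1.2, -0.4, 0.3, 1.1, -0.8])   # observed SPEI (illustrative)
y_pred = np.array([-1.1, -0.5, 0.2, 1.0, -0.7])   # model forecasts (illustrative)

r2 = r2_score(y_true, y_pred)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
mae = mean_absolute_error(y_true, y_pred)
print(f"R2={r2:.3f}  RMSE={rmse:.3f}  MAE={mae:.3f}")
```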


Subjects
Deep Learning, Droughts, Temperature, Forecasting, Machine Learning
4.
PeerJ Comput Sci; 9: e1663, 2023.
Article in English | MEDLINE | ID: mdl-38077610

ABSTRACT

The neurological ailment known as Parkinson's disease (PD) affects people throughout the globe. This neurodegenerative disorder primarily affects people in middle to late life. Motor symptoms such as tremors, muscle rigidity, and slow, clumsy movement are common in patients with this disorder. Genetic and environmental variables play significant roles in the development of PD, yet despite much investigation, the root cause of this neurodegenerative disease is still unidentified. Clinical diagnostics rely heavily on promptly detecting such irregularities to slow or stop disease progression. Because of its direct correlation with brain activity, electroencephalography (EEG) is an essential PD diagnostic technique, and EEG data serve as biomarkers of changes in brain activity. However, these signals are non-linear, non-stationary, and complicated, making analysis difficult. Traditional machine-learning approaches often require lengthy manual effort across stages such as signal decomposition, feature extraction, and classification. To overcome these obstacles, we present a novel deep-learning model for the automated identification of PD. The Gabor transform, a standard method in EEG signal processing, was used to turn the raw EEG recordings into spectrograms. We propose a densely linked bidirectional long short-term memory (DLBLSTM) network, which represents each layer as the sum of its own hidden state and the hidden states of all layers above it, then recursively transmits that representation to all layers below it. The proposed model was trained using these spectrograms as input data. Under a robust sixfold cross-validation protocol, the model achieved a classification accuracy of 99.6%. The results indicate that the suggested algorithm can automatically identify PD.
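
A sketch of the spectrogram stage: the Gabor transform is a short-time Fourier transform with a Gaussian window, which scipy's spectrogram can approximate. The sampling rate, window width, and overlap are illustrative assumptions; the DLBLSTM itself is not shown.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 256                              # assumed EEG sampling rate (Hz)
eeg = np.random.randn(10 * fs)        # stand-in for one EEG channel

f, t, Sxx = spectrogram(eeg, fs=fs, window=("gaussian", 32),
                        nperseg=256, noverlap=192)
log_spec = np.log1p(Sxx)              # log-scaled spectrogram, the kind of
                                      # image a recurrent classifier would consume
```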

5.
Cancers (Basel); 15(20), 2023 Oct 17.
Article in English | MEDLINE | ID: mdl-37894383

ABSTRACT

Internet of Things (IoT)-assisted skin cancer recognition integrates several connected devices and sensors to support the primary analysis and monitoring of skin conditions. Preliminary analysis of skin cancer images is extremely difficult because of factors such as the distinct sizes and shapes of lesions, differences in color illumination, and light reflections on the skin surface. In recent times, IoT-based skin cancer recognition utilizing deep learning (DL) has been used to enhance early analysis and monitoring. This article presents an optimal deep learning-based skin cancer detection and classification (ODL-SCDC) methodology for the IoT environment. The goal of the ODL-SCDC technique is to exploit metaheuristic-based hyperparameter selection approaches with a DL model for skin cancer classification. The methodology involves an arithmetic optimization algorithm (AOA) with the EfficientNet model for feature extraction. For skin cancer detection, a stacked denoising autoencoder (SDAE) classification model is used. Lastly, the dragonfly algorithm (DFA) is utilized for optimal hyperparameter selection of the SDAE algorithm. The ODL-SCDC methodology was validated on a benchmark ISIC skin lesion database. The extensive results show that it outperforms other models, with a maximum sensitivity of 97.74%, specificity of 99.71%, and accuracy of 99.55%. The proposed model can assist medical professionals, specifically dermatologists and potentially other healthcare practitioners, in the skin cancer diagnosis process.
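
A sketch of the feature-extraction stage, using a pre-trained EfficientNetB0 from Keras as a frozen backbone over stand-in lesion images. The AOA tuning, SDAE classifier, and DFA hyperparameter search described above are beyond this snippet; the input size is an assumption.

```python
import numpy as np
import tensorflow as tf

backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", pooling="avg")
backbone.trainable = False            # frozen feature extractor

images = (np.random.rand(4, 224, 224, 3) * 255).astype("float32")  # stand-ins
x = tf.keras.applications.efficientnet.preprocess_input(images)
features = backbone(x, training=False)   # shape: (4, 1280) lesion features
```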

6.
Sensors (Basel); 23(17), 2023 Aug 23.
Article in English | MEDLINE | ID: mdl-37687819

ABSTRACT

Ubiquitous computing is an active research area that has attracted and sustained researchers' attention for some time. Human activity recognition and localization, as ubiquitous computing applications, have likewise been widely studied; they are used in healthcare monitoring, behavior analysis, personal safety, and entertainment. This article proposes a robust model that works over IoT data extracted from smartphone and smartwatch sensors to recognize the activities performed by the user and, at the same time, classify the location at which each activity was performed. The system starts by denoising the input signal using a second-order Butterworth filter and then uses a Hamming window to divide the signal into small data chunks. Multiple stacked windows are generated using three windows per stack, which in turn prove helpful in producing more reliable features. The stacked data are then transferred to two parallel feature extraction blocks, i.e., human activity recognition and human localization. The respective features are extracted for both modules, reinforcing the system's accuracy. Recursive feature elimination is applied to the features of both categories independently to select the most informative ones. After feature selection, a genetic algorithm generates ten different generations of each feature vector for data augmentation, which directly impacts the system's performance. Finally, a deep neural decision forest is trained to classify the activity and the subject's location, working on both attributes in parallel. For evaluation and testing, two openly accessible benchmark datasets, the ExtraSensory dataset and the Sussex-Huawei Locomotion dataset, were used. The system outperformed the available state-of-the-art systems, recognizing human activities with an accuracy of 88.25% and classifying the location with an accuracy of 90.63% on the ExtraSensory dataset; for the Sussex-Huawei Locomotion dataset, the respective accuracies were 96.00% and 90.50%.
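
A sketch of this paper's front end: second-order Butterworth denoising, Hamming-window chunking, and stacks of three consecutive windows. The sampling rate, cutoff, and window length are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs, cutoff, win_len = 50.0, 10.0, 100
b, a = butter(N=2, Wn=cutoff / (fs / 2), btype="low")  # second-order filter
clean = filtfilt(b, a, np.random.randn(2000))          # stand-in sensor stream

window = np.hamming(win_len)
chunks = [clean[s:s + win_len] * window
          for s in range(0, len(clean) - win_len + 1, win_len)]
# Three windows per stack, as described above, for more reliable features.
stacks = [np.stack(chunks[i:i + 3]) for i in range(len(chunks) - 2)]
```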


Subjects
Human Activities, Recognition (Psychology), Humans, Memory, Benchmarking, Intelligence
7.
Sensors (Basel); 23(17), 2023 Aug 24.
Article in English | MEDLINE | ID: mdl-37687826

ABSTRACT

Smart grids (SGs) play a vital role in the smart city environment, which exploits digital technology, communication systems, and automation to manage electricity generation, distribution, and consumption effectively. SGs are a fundamental module of smart cities, which aim to leverage technology and data to enhance citizens' quality of life and optimize resource consumption. The biggest challenge for SGs and smart cities is the potential for cyberattacks, including Distributed Denial of Service (DDoS) attacks, which overwhelm a system with a huge volume of traffic, causing disruptions and potentially leading to service outages. Detecting and mitigating DDoS attacks in SGs is therefore of great significance for their stability and reliability. This study develops a new White Shark Equilibrium Optimizer with a Hybrid Deep-Learning-based Cybersecurity Solution (WSEO-HDLCS) technique for smart city environments. The goal of the WSEO-HDLCS technique is to recognize the presence of DDoS attacks and thereby ensure cybersecurity. In the presented technique, the high-dimensionality data problem is resolved by a WSEO-based feature selection (WSEO-FS) approach. In addition, a stacked deep autoencoder (SDAE) model is employed for DDoS attack detection, and the gravitational search algorithm (GSA) is utilized for optimal selection of the SDAE hyperparameters. The WSEO-HDLCS system was validated on the CICIDS-2017 dataset, and the extensive simulation results highlight its promise over existing methods.
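
A minimal Keras sketch of a stacked deep autoencoder of the kind used here for DDoS detection: the encoder compresses flow features, and a small head classifies attack versus benign traffic. Layer sizes, the feature count after WSEO-FS, and the training data are illustrative assumptions; the GSA hyperparameter search is not shown.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

n_features = 40                                # post-feature-selection size
inputs = tf.keras.Input(shape=(n_features,))
h = layers.Dense(32, activation="relu")(inputs)      # stacked encoder
code = layers.Dense(16, activation="relu")(h)
h = layers.Dense(32, activation="relu")(code)        # decoder
recon = layers.Dense(n_features, activation="linear")(h)

autoencoder = models.Model(inputs, recon)
autoencoder.compile(optimizer="adam", loss="mse")
X = np.random.rand(512, n_features).astype("float32")   # stand-in flows
autoencoder.fit(X, X, epochs=3, batch_size=64, verbose=0)

# Attack/benign head on the learned code (labels are placeholders).
clf = models.Model(inputs, layers.Dense(1, activation="sigmoid")(code))
clf.compile(optimizer="adam", loss="binary_crossentropy")
y = np.random.randint(0, 2, 512).astype("float32")
clf.fit(X, y, epochs=3, batch_size=64, verbose=0)
```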

8.
Sensors (Basel); 23(17), 2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37687978

ABSTRACT

Gestures have long been used for nonverbal communication, and human-computer interaction (HCI) via gestures is becoming more common in the modern era. To obtain a greater recognition rate, traditional interfaces rely on various devices, such as gloves, physical controllers, and markers. This study provides a new markerless technique for capturing gestures without the need for such equipment or pricey hardware. Dynamic gestures are first converted into frames; noise is removed and intensity adjusted for feature extraction. The hand is then detected in the images, and its skeleton is computed mathematically. From the skeleton, features are extracted, including the joint color cloud, neural gas, and directional active model. The features are then optimized, and the selected feature set is passed to a recurrent neural network (RNN) classifier to obtain higher-accuracy classification results. The proposed model is trained and experimentally assessed on three datasets, HaGRI, Egogesture, and Jester, achieving accuracies of 92.57%, 91.86%, and 91.57%, respectively. To check the model's reliability, the method was also tested on the WLASL dataset, attaining 90.43% accuracy. The paper also compares the model with other state-of-the-art recognition methods. The model delivers a higher accuracy rate with a markerless approach, saving cost and time in gesture classification for better interaction.
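
A sketch of the final classification stage: an RNN over per-frame feature vectors. Feature extraction (joint color cloud, neural gas, directional active model) is assumed to have already produced fixed-length vectors; the sequence length, feature width, and class count are illustrative.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

n_frames, n_feats, n_gestures = 30, 64, 18     # assumed shapes
model = models.Sequential([
    layers.Input(shape=(n_frames, n_feats)),
    layers.SimpleRNN(128),                     # recurrent classifier core
    layers.Dense(n_gestures, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.randn(100, n_frames, n_feats).astype("float32")  # stand-ins
y = np.random.randint(0, n_gestures, 100)
model.fit(X, y, epochs=2, verbose=0)
```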


Subjects
Gestures, Neurotoxic Agents, Humans, Automation, Neural Networks (Computer), Recognition (Psychology)
9.
Sensors (Basel); 23(18), 2023 Sep 16.
Article in English | MEDLINE | ID: mdl-37765984

ABSTRACT

Smart home monitoring systems based on the Internet of Things (IoT) are needed to take care of elders at home, giving families and caregivers the flexibility to monitor them remotely. Activities of daily living are an efficient way to monitor elderly people at home and patients at caregiving facilities, and monitoring such actions depends largely on IoT-based devices, either wireless or installed at different places. This paper proposes an effective and robust layered architecture using multisensory devices to recognize activities of daily living from anywhere. Multimodality refers to sensory devices of multiple types working together to achieve the objective of remote monitoring; the proposed multimodal approach therefore fuses data from IoT devices such as wearable inertial sensors with videos recorded during daily routines. Data from these multi-sensors pass through a pre-processing layer with stages such as data filtration, segmentation, landmark detection, and 2D stick-model construction. In the next layer, feature processing, different features from the multimodal sensors are extracted, fused, and optimized. The final layer, classification, recognizes the activities of daily living via a deep learning technique, the convolutional neural network. The proposed IoT-based multimodal layered system achieves an acceptable mean accuracy of 84.14%.
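
A sketch of the feature-processing and classification layers under stated assumptions: wearable and video feature vectors fused by concatenation per window, then a small 1D convolutional classifier. Feature dimensions and the number of activity classes are illustrative placeholders.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

inertial = np.random.randn(300, 24).astype("float32")  # wearable features
video = np.random.randn(300, 40).astype("float32")     # stick-model features
fused = np.concatenate([inertial, video], axis=1)[..., None]  # (300, 64, 1)
y = np.random.randint(0, 8, 300)                       # 8 assumed ADL classes

cnn = models.Sequential([
    layers.Input(shape=(64, 1)),
    layers.Conv1D(16, 3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(8, activation="softmax"),
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
cnn.fit(fused, y, epochs=2, verbose=0)
```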

10.
Article in English | MEDLINE | ID: mdl-36768060

ABSTRACT

Big Data analytics is a technique for examining huge and varied datasets to uncover hidden patterns, trends, and correlations, and it can therefore support superior decisions in healthcare. Drug-drug interactions (DDIs) are a main concern in drug discovery; precise forecasting of DDIs increases safety, particularly in drug research when multiple drugs are co-prescribed. Prevailing conventional machine learning (ML) approaches mainly depend on handcrafted features and lack generalization. Today, deep learning (DL) techniques that automatically learn drug features from drug-related networks or molecular graphs have enhanced the capability of computational approaches to forecast unknown DDIs. Therefore, in this study, we develop a sparrow search optimization with deep learning-based DDI prediction (SSODL-DDIP) technique for healthcare decision making in big data environments. The presented SSODL-DDIP technique identifies the relationships and properties of drugs from various sources to make predictions. A multilabel long short-term memory with an autoencoder (MLSTM-AE) model is employed for the DDI prediction process, and a lexicon-based approach determines the severity of interactions among the DDIs. To improve the prediction outcomes of the MLSTM-AE model, the SSO algorithm is adopted. To assure the performance of the SSODL-DDIP technique, a wide range of simulations was performed; the experimental results show its promise over recent state-of-the-art algorithms.
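
A sketch of the multilabel prediction idea: an LSTM over drug-pair feature sequences with one sigmoid output per interaction type. The autoencoder pretraining of MLSTM-AE, the SSO tuning, and the lexicon-based severity scoring are omitted; all shapes are assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

seq_len, n_feats, n_interaction_types = 20, 32, 5   # assumed shapes
model = models.Sequential([
    layers.Input(shape=(seq_len, n_feats)),
    layers.LSTM(64),
    layers.Dense(n_interaction_types, activation="sigmoid"),  # multilabel
])
model.compile(optimizer="adam", loss="binary_crossentropy")

X = np.random.randn(256, seq_len, n_feats).astype("float32")  # drug-pair features
Y = np.random.randint(0, 2, (256, n_interaction_types)).astype("float32")
model.fit(X, Y, epochs=2, verbose=0)
```

With sigmoid outputs and binary cross-entropy, each interaction type is predicted independently, which is what makes the head multilabel rather than multiclass.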


Subjects
Clinical Decision Support Systems, Short-Term Memory, Drug Interactions, Algorithms, Machine Learning
11.
Front Public Health; 11: 1338215, 2023.
Article in English | MEDLINE | ID: mdl-38192545

ABSTRACT

This paper pioneers the exploration of ocular cancer and its management with the help of Artificial Intelligence (AI) technology. Existing literature reports a significant increase in new eye cancer cases in 2023, reflecting a higher incidence rate. Extensive research was conducted using online databases such as PubMed, ACM Digital Library, ScienceDirect, and Springer, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Of the 62 studies collected, only 20 met the inclusion criteria. The review identifies seven ocular cancer types and highlights important challenges associated with ocular cancer, including limited awareness of eye cancer, restricted healthcare access, financial barriers, and insufficient infrastructure support. Financial barriers are among the most widely examined of these challenges in the literature. The potential role and limitations of ChatGPT are discussed, emphasizing its usefulness in providing general information to physicians while noting its inability to deliver up-to-date information. The paper concludes by presenting potential future applications of ChatGPT to advance research on ocular cancer globally.


Subjects
Eye Neoplasms, Physicians, Humans, Artificial Intelligence, Factual Databases, Eye Neoplasms/epidemiology, Eye Neoplasms/therapy, Technology
12.
Healthcare (Basel); 10(4), 2022 Apr 03.
Article in English | MEDLINE | ID: mdl-35455854

ABSTRACT

Decision-making medical systems (DMS) refer to the design of decision techniques in the healthcare sector, employing ideas and decisions across processes such as data acquisition, processing, judgment, and conclusion. Pancreatic cancer is a lethal type of cancer, and its prediction is ineffective with current techniques. Automated detection and classification of pancreatic tumors can be provided by computer-aided diagnosis (CAD) models using radiological images such as computed tomography (CT) and magnetic resonance imaging (MRI), and recently developed machine learning (ML) and deep learning (DL) models can be utilized for automated and timely detection. In light of this, this article introduces an intelligent deep-learning-enabled decision-making medical system for pancreatic tumor classification (IDLDMS-PTC) using CT images. The major intention of the IDLDMS-PTC technique is to examine CT images for the existence of pancreatic tumors. The model derives an emperor penguin optimizer with multilevel thresholding (EPO-MLT) technique for pancreatic tumor segmentation. Additionally, the MobileNet model is applied as a feature extractor with an optimally tuned autoencoder (AE) for pancreatic tumor classification. To optimally adjust the weight and bias values of the AE, the multileader optimization (MLO) technique is utilized. The novelty lies in the design of the EPO algorithm for optimal threshold selection and the MLO algorithm for parameter tuning. A wide range of simulations was executed on benchmark datasets, and the outcomes report the promising performance of the IDLDMS-PTC model over existing methods.
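
A sketch of the multilevel-thresholding role that EPO-MLT plays in segmentation, with scikit-image's multi-Otsu standing in for the emperor penguin optimizer's threshold search; the CT slice here is random stand-in data.

```python
import numpy as np
from skimage.filters import threshold_multiotsu

ct_slice = np.random.randint(0, 256, (128, 128)).astype(np.uint8)  # stand-in CT
thresholds = threshold_multiotsu(ct_slice, classes=3)   # two optimal cut points
regions = np.digitize(ct_slice, bins=thresholds)        # 0/1/2 region labels
```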

13.
Sensors (Basel); 22(1), 2021 Dec 29.
Article in English | MEDLINE | ID: mdl-35009747

ABSTRACT

Diabetic retinopathy (DR) is an eye disease that affects people suffering from diabetes, damaging their eyes and potentially causing vision loss. It is treatable; however, diagnosis takes a long time and may require many eye exams. Early detection of DR may prevent or delay vision loss, so a robust, automatic, computer-based diagnosis of DR is essential. Deep neural networks are currently utilized in numerous medical areas to diagnose various diseases; consequently, this article employs deep transfer learning. We use five convolutional-neural-network-based designs (AlexNet, GoogleNet, Inception V4, Inception ResNet V2, and ResNeXt-50). A collection of DR images is created and labeled with an appropriate treatment approach, which automates the diagnosis and assists patients through subsequent therapies. Furthermore, to identify the severity of DR in retina images, we train deep convolutional neural networks (CNNs) on our own dataset. Experimental results reveal that the pre-trained SE-ResNeXt-50 model obtains the best classification accuracy, 97.53%, on our dataset among all pre-trained models. Moreover, we perform five different experiments on each CNN architecture; a minimum accuracy of 84.01% is achieved for five-degree classification.
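
A sketch of the transfer-learning setup under a common recipe: a pre-trained ResNeXt-50 backbone from torchvision, frozen, with its final layer replaced for the five DR severity grades. The paper's own dataset, training schedule, and SE variant are not reproduced; the frozen-backbone choice and shapes are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnext50_32x4d, ResNeXt50_32X4D_Weights

model = resnext50_32x4d(weights=ResNeXt50_32X4D_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                  # freeze the pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 5)  # five-degree classification head

images = torch.randn(4, 3, 224, 224)         # stand-in fundus image batch
logits = model(images)                       # shape: (4, 5)
```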


Subjects
Diabetes Mellitus, Diabetic Retinopathy, Humans, Neural Networks (Computer), Retina