Results 1 - 20 of 31
1.
Sci Rep ; 14(1): 10219, 2024 05 03.
Article in English | MEDLINE | ID: mdl-38702373

ABSTRACT

Detecting diseases in maize leaves is difficult because lesion characteristics must be collected in an environment that changes frequently, suffers varying illumination, and is influenced by a variety of other factors. It is critical to monitor and identify plant leaf diseases during the initial growing period so that suitable preventative measures can be taken. In this work, we propose an automated maize leaf disease recognition system built on the PRF-SVM model, which combines three components: PSPNet, ResNet50, and a Fuzzy Support Vector Machine (Fuzzy SVM). The combination of PSPNet and ResNet50 ensures that the model captures delicate visual features while allowing end-to-end training for smooth integration, and the Fuzzy SVM serves as the final classification layer to accommodate the inherent fuzziness and uncertainty in real-world image data. Five maize crop diseases (common rust, southern rust, grey leaf spot, maydis leaf blight, and turcicum leaf blight), along with healthy leaves, are selected from the PlantVillage dataset for the algorithm's evaluation. The PRF-SVM model achieves an average accuracy of approximately 96.67% and a mAP of 0.81, demonstrating the efficacy of our approach for detecting and classifying various forms of maize leaf disease.
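As a rough illustration of the fuzzy classification layer, the sketch below (a minimal sketch, not the authors' implementation) assigns each training sample a fuzzy membership weight based on its distance from its class centroid and passes the weights to a standard SVM; the random feature matrix stands in for CNN (e.g., ResNet50) features, and all sizes and parameter values are illustrative assumptions.

```python
# Hedged sketch: fuzzy membership weights for an SVM, assuming feature
# vectors already extracted by a CNN backbone; sizes are placeholders.
import numpy as np
from sklearn.svm import SVC

def fuzzy_memberships(X, y, floor=0.1, eps=1e-8):
    """Membership in (0, 1]: samples far from their class centroid get smaller weights."""
    w = np.empty(len(y), dtype=float)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
        w[idx] = np.clip(1.0 - d / (d.max() + eps), floor, 1.0)
    return w

rng = np.random.default_rng(0)
X = rng.random((120, 2048))            # placeholder CNN features
y = rng.integers(0, 6, size=120)       # 5 diseases + healthy class
clf = SVC(kernel="rbf", C=10.0)
clf.fit(X, y, sample_weight=fuzzy_memberships(X, y))
print(clf.score(X, y))
```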


Subject(s)
Plant Diseases , Plant Leaves , Support Vector Machine , Zea mays , Zea mays/microbiology , Zea mays/growth & development , Plant Diseases/microbiology , Plant Leaves/microbiology , Algorithms , Fuzzy Logic
2.
Sci Rep ; 14(1): 7841, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38570648

ABSTRACT

Recent research has focused on applying blockchain technology to solve security-related problems in Internet of Things (IoT) networks. However, the inherent scalability issues of blockchain technology become apparent in the presence of a vast number of IoT devices and the substantial data generated by these networks. In this paper, we therefore adopt a lightweight consensus algorithm to address these problems and propose a scalable blockchain-based framework for managing IoT data from a large number of devices. The framework uses the Delegated Proof of Stake (DPoS) consensus algorithm to ensure enhanced performance and efficiency in resource-constrained IoT networks. DPoS, being a lightweight consensus algorithm, relies on a small set of elected delegates to validate and confirm transactions, thus mitigating the performance and efficiency degradation seen in blockchain-based IoT networks. We use the InterPlanetary File System (IPFS) for distributed storage and Docker to evaluate network performance in terms of throughput, latency, and resource utilization. The analysis is divided into four parts: latency, throughput, resource utilization, and file upload time and speed for the distributed storage evaluation. Our empirical findings show that the framework exhibits low latency, measuring less than 0.976 ms, and that the proposed technique outperforms Proof of Stake (PoS), a state-of-the-art consensus technique. We also demonstrate that the proposed approach is useful in IoT applications where low latency or resource efficiency is required.
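To make the consensus step concrete, the following minimal sketch shows the core of a DPoS round under simple assumptions: stake-weighted voting elects a small delegate set, and block production rotates round-robin among the elected delegates. The node identifiers and stake values are made up for illustration.

```python
# Hedged sketch of Delegated Proof of Stake: stake-weighted delegate election
# followed by round-robin block production among the elected delegates.
from collections import defaultdict

def elect_delegates(votes, stakes, k=5):
    """votes: {voter: candidate}, stakes: {voter: stake}. Returns the top-k candidates."""
    tally = defaultdict(float)
    for voter, candidate in votes.items():
        tally[candidate] += stakes.get(voter, 0.0)
    return [c for c, _ in sorted(tally.items(), key=lambda kv: kv[1], reverse=True)[:k]]

def producer_for_slot(delegates, slot):
    """Round-robin assignment of block-production slots."""
    return delegates[slot % len(delegates)]

votes = {"n1": "d1", "n2": "d2", "n3": "d1", "n4": "d3"}
stakes = {"n1": 40.0, "n2": 25.0, "n3": 10.0, "n4": 30.0}
delegates = elect_delegates(votes, stakes, k=2)
print(delegates, producer_for_slot(delegates, slot=7))
```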

3.
PeerJ Comput Sci ; 10: e1933, 2024.
Article in English | MEDLINE | ID: mdl-38660154

ABSTRACT

The robust development of the blockchain distributed ledger, the Internet of Things (IoT), and fog-computing-enabled connected devices and nodes has changed our daily lives. The resulting increase in device sales and utilization raises the demand for edge computing technology with collaborative procedures. Edge computing is a well-established paradigm designed to optimize various distinct quality-of-service requirements, including bandwidth, latency, transmission power, delay, duty cycle, throughput, response, and edge sensing, by bringing computation and data storage closer to devices and edges, while preserving ledger security and privacy during transmission. In this article, we present a systematic review of blockchain Hyperledger frameworks that enable fog and edge computing and integrate outsourced computation over a serverless consortium network environment. The main objective of this article is to classify recently published articles and survey reports on the current status of edge distributed computing and outsourced computation in fog and edge settings. In addition, we propose a blockchain (Hyperledger Sawtooth)-enabled serverless, edge-based, distributed outsourcing computation architecture. This theoretical architecture delivers robust data security in terms of integrity, transparency, provenance, and privacy-preserving protection in immutable storage for the outsourced computational ledgers. The article also highlights the differences between the proposed taxonomy and current systems on distinct parameters, such as system security and privacy. Finally, open research issues and limitations, along with promising future directions, are listed for future research.

5.
Sci Rep ; 14(1): 4299, 2024 02 21.
Article in English | MEDLINE | ID: mdl-38383520

ABSTRACT

Skin cancer is a frequently occurring and potentially deadly disease that necessitates prompt and precise diagnosis to ensure effective treatment. This paper introduces an approach for accurately identifying skin cancer by utilizing a Convolutional Neural Network architecture with optimized hyperparameters. The proposed approach aims to increase the precision and efficacy of skin cancer recognition and consequently improve patient outcomes. The investigation tackles several significant challenges in skin cancer recognition, encompassing feature extraction, model architecture design, and hyperparameter optimization. The proposed model utilizes advanced deep-learning methodologies to extract complex features and patterns from skin cancer images. We enhance the learning procedure by integrating a standard U-Net and an improved MobileNet-V3 with optimization techniques, allowing the model to differentiate malignant and benign skin cancers. We also substitute the cross-entropy loss function of the MobileNet-V3 framework with a bias loss function to improve accuracy, and we replace the squeeze-and-excitation component with a practical channel attention component to reduce the number of parameters. Cross-layer connections among Mobile modules are introduced to leverage synthetic features effectively, and dilated convolutions are incorporated to enlarge the receptive field. Hyperparameter optimization is of utmost importance in improving the efficiency of deep learning models; to fine-tune the model's hyperparameters, we employ Bayesian optimization with the pre-trained MobileNet-V3 CNN architecture. The proposed model is compared with existing models, i.e., MobileNet, VGG-16, MobileNet-V2, ResNet-152v2, and VGG-19, on the HAM10000 melanoma skin cancer dataset. The empirical findings show that the proposed optimized hybrid MobileNet-V3 model outperforms existing skin cancer detection and segmentation techniques, with a precision of 97.84%, sensitivity of 96.35%, accuracy of 98.86%, and specificity of 97.32%. This enhanced performance enables timelier and more precise diagnoses, potentially contributing to life-saving outcomes and reduced healthcare expenditure.
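A minimal sketch of Bayesian hyperparameter optimization is shown below using scikit-optimize's gp_minimize (an assumed library choice, not necessarily the authors'); the objective function is a placeholder standing in for training the MobileNet-V3-based model and returning 1 - validation accuracy.

```python
# Hedged sketch: Bayesian optimization over three training hyperparameters.
from skopt import gp_minimize
from skopt.space import Real, Integer

space = [
    Real(1e-5, 1e-2, prior="log-uniform", name="learning_rate"),
    Real(0.1, 0.5, name="dropout"),
    Integer(16, 128, name="batch_size"),
]

def objective(params):
    lr, dropout, batch_size = params
    # Placeholder for: build the U-Net + MobileNet-V3 model, train, evaluate.
    val_accuracy = 1.0 / (1.0 + abs(lr - 1e-3) * 1e3 + abs(dropout - 0.3) + abs(batch_size - 64) / 64)
    return 1.0 - val_accuracy          # gp_minimize minimizes the objective

result = gp_minimize(objective, space, n_calls=20, random_state=42)
print("best hyperparameters:", result.x, "best objective:", result.fun)
```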


Subject(s)
Accidental Injuries , Melanoma , Skin Neoplasms , Humans , Bayes Theorem , Skin Neoplasms/diagnosis , Skin , Melanoma/diagnosis
6.
Sci Rep ; 14(1): 4533, 2024 02 24.
Article in English | MEDLINE | ID: mdl-38402249

ABSTRACT

Postpartum Depression Disorder (PPDD) is a prevalent mental health condition that can result in severe depression and suicide attempts. Prompt action is crucial in tackling PPDD, which requires quick recognition and accurate analysis of the probability factors associated with the condition. The primary aim of our research is to investigate the feasibility of anticipating an individual's mental state by separating individuals with depression from those without, using a dataset consisting of text along with audio recordings from patients diagnosed with PPDD. This research proposes a hybrid PPDD framework that combines an Improved Bi-directional Long Short-Term Memory (IBi-LSTM) network with Transfer Learning (TL) based on two Convolutional Neural Network (CNN) architectures, CNN-text and CNN-audio. In the proposed model, the CNN section efficiently utilizes TL to obtain crucial knowledge from text and audio characteristics, whereas the improved Bi-LSTM module combines written material and sound data to capture intricate temporal relationships. The model incorporates an attention mechanism to augment the effectiveness of the Bi-LSTM scheme. An experimental analysis is conducted on the PPDD textual and speech audio dataset collected from the UCI repository, which includes textual features such as age, women's health tracks, medical histories, demographic information, daily life metrics, psychological evaluations, and speech records of PPDD patients. Data pre-processing is applied to maintain data integrity and achieve reliable model performance. The proposed model demonstrates better precision, recall, accuracy, and F1-score than existing deep learning models, including VGG-16, Base-CNN, and CNN-LSTM; these metrics indicate the model's ability to differentiate women at risk of PPDD from those who are not. In addition, the feature importance analysis demonstrates that specific risk factors substantially impact the prediction of PPDD. The findings establish a basis for improved precision and promptness in assessing the risk of PPDD, which may ultimately result in earlier interventions and support networks for women who are susceptible to PPDD.
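A minimal PyTorch sketch of the kind of dual-branch architecture described above is given below: a text CNN branch and an audio CNN branch are fused, passed through a bidirectional LSTM, and pooled with a simple additive attention layer. Layer sizes, sequence alignment, and the attention form are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch (PyTorch) of a dual-branch text/audio Bi-LSTM with attention.
import torch
import torch.nn as nn

class DualBranchBiLSTM(nn.Module):
    def __init__(self, text_dim=300, audio_dim=40, hidden=128, n_classes=2):
        super().__init__()
        self.text_cnn = nn.Sequential(nn.Conv1d(text_dim, 64, 3, padding=1), nn.ReLU())
        self.audio_cnn = nn.Sequential(nn.Conv1d(audio_dim, 64, 3, padding=1), nn.ReLU())
        self.bilstm = nn.LSTM(128, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, text, audio):
        # text: (B, T, text_dim), audio: (B, T, audio_dim), aligned per time step
        t = self.text_cnn(text.transpose(1, 2)).transpose(1, 2)
        a = self.audio_cnn(audio.transpose(1, 2)).transpose(1, 2)
        fused, _ = self.bilstm(torch.cat([t, a], dim=-1))     # (B, T, 2*hidden)
        weights = torch.softmax(self.attn(fused), dim=1)      # attention over time
        context = (weights * fused).sum(dim=1)                # weighted pooling
        return self.head(context)

model = DualBranchBiLSTM()
logits = model(torch.randn(4, 50, 300), torch.randn(4, 50, 40))
print(logits.shape)                                           # torch.Size([4, 2])
```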


Subject(s)
Deep Learning , Depression, Postpartum , Depressive Disorder , Humans , Female , Depression, Postpartum/diagnosis , Depression, Postpartum/epidemiology , Prevalence , Risk Factors
7.
Sci Rep ; 14(1): 1337, 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38228707

ABSTRACT

Virtual machine (VM) integration methods have proven effective for optimized load balancing in cloud data centers. The main challenge with VM integration methods is the trade-off among cost effectiveness, quality of service, performance, optimal resource utilization, and compliance with service level agreements. Deep learning methods are widely used in existing research on cloud load balancing; however, capturing the noisy, multilayered fluctuations in workload remains a problem because of limited resource-level provisioning. The long short-term memory (LSTM) model plays a vital role in the prediction of server load and workload provisioning. This research presents a hybrid model using deep learning with Particle Swarm Optimization and a Genetic Algorithm ("DPSO-GA") for dynamic workload provisioning in cloud computing. The proposed model works in two phases. The first phase uses the hybrid PSO-GA approach to address the prediction challenge by combining the benefits of the two methods in fine-tuning the hyperparameters. In the second phase, a CNN-LSTM is utilized; before the CNN-LSTM forecasts resource consumption, it is trained using the hybrid PSO-GA approach. In the proposed framework, a one-dimensional CNN and an LSTM forecast cloud resource utilization at subsequent time steps: the LSTM module models the temporal information that predicts the upcoming VM workload, while the CNN module extracts complex distinguishing features from VM workload statistics. The model integrates multi-resource utilization simultaneously, which helps overcome load balancing and over-provisioning issues. Comprehensive simulations on the Google cluster traces benchmark dataset verify the efficiency of the proposed DPSO-GA technique in enhancing resource distribution and load balancing for the cloud. The proposed model achieves outstanding results in terms of precision, accuracy, and load allocation.
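The sketch below illustrates, under simplifying assumptions, a hybrid PSO-GA search over two hyperparameters (learning rate and LSTM units); the fitness function is a stand-in for the validation loss of the CNN-LSTM forecaster, and the inertia, acceleration, and mutation constants are arbitrary illustrative choices.

```python
# Hedged sketch: PSO velocity/position updates combined with a GA-style
# crossover-and-mutation step on the best personal-best particles.
import numpy as np

rng = np.random.default_rng(0)

def fitness(p):                        # placeholder for CNN-LSTM validation loss
    lr, units = p
    return (np.log10(lr) + 3) ** 2 + ((units - 96) / 64) ** 2

bounds = np.array([[1e-4, 1e-1], [16, 256]], dtype=float)
pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(20, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()]

for _ in range(50):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    # GA-style step: crossover the two best personal bests, mutate one gene,
    # and let the child replace a randomly chosen particle.
    child = pbest[pbest_val.argsort()[:2]].mean(axis=0)
    child[rng.integers(2)] *= rng.uniform(0.9, 1.1)
    pos[rng.integers(len(pos))] = np.clip(child, bounds[:, 0], bounds[:, 1])
    vals = np.array([fitness(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()]

print("best (learning rate, LSTM units):", gbest)
```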

8.
Nanoscale Adv ; 5(22): 6216-6227, 2023 Nov 07.
Article in English | MEDLINE | ID: mdl-37941957

ABSTRACT

Applications: the study of highly advanced hybrid nanofluids has aroused the interest of academics and engineers, particularly those working in the fields of chemical and applied thermal engineering. The improved properties of hybrid nanoliquids are superior to those of the earlier class of simple nanofluids. Therefore, it is essential to report on the analysis of nanofluids passing over elastic surfaces, as this is a typical configuration in engineering and industrial applications. Purpose and methodology: the investigation of hybrid nanoliquids over a stretched sheet was the sole focus of this research. Using supporting correlations, estimates were made of the improved thermal conductivity, density, heat capacitance, and viscosity. In addition, the distinctiveness of the model was increased by incorporating a variety of distinct physical effects, such as thermal slip, radiation, micropolarity, uniform surface convection, and stretching. The model was then analyzed numerically, and the physical results are presented. Core findings: the results show that the desired momentum of hybrid nanofluids can be attained by keeping the fluidic system at uniform suction, and that this momentum may be enhanced by increasing the force of the injected fluid through the stretched sheet. Surface convection, thermal radiation, and high dissipative energy are all effective physical instruments for acquiring heat in hybrid nanofluids; this heat acquisition is significant from both applied thermal engineering and chemical engineering perspectives. The features of simple nanofluids and common hybrid nanoliquids were compared, and the results indicate that hybrid nanofluids exhibit dominant behavior as the percentage concentration of nanoparticles increases, which enables their use in large-scale practical applications.
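For orientation only, the volume-fraction mixture rules and the Brinkman viscosity relation commonly used in hybrid-nanofluid studies to estimate such effective properties take the form below; the specific correlations adopted in this paper may differ.

```latex
% Common effective-property correlations for a hybrid nanofluid with particle
% volume fractions \phi_1, \phi_2 suspended in a base fluid (subscript f).
\rho_{hnf} = (1-\phi_2)\bigl[(1-\phi_1)\rho_f + \phi_1\rho_{s1}\bigr] + \phi_2\rho_{s2}
\qquad
(\rho c_p)_{hnf} = (1-\phi_2)\bigl[(1-\phi_1)(\rho c_p)_f + \phi_1(\rho c_p)_{s1}\bigr] + \phi_2(\rho c_p)_{s2}
\qquad
\mu_{hnf} = \frac{\mu_f}{(1-\phi_1)^{2.5}\,(1-\phi_2)^{2.5}}
```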

9.
Diagnostics (Basel) ; 13(18)2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37761285

ABSTRACT

Speckle noise is a pervasive problem in medical imaging, and conventional methods for despeckling often lead to loss of edge information due to smoothing. To address this issue, we propose a novel approach that combines a nature-inspired minibatch water wave swarm optimization (NIMWVSO) framework with an invertible sparse fuzzy wavelet transform (ISFWT) in the frequency domain. The ISFWT learns a non-linear redundant transform with a perfect reconstruction property that effectively removes noise while preserving structural and edge information in medical images. The resulting threshold is then used by the NIMWVSO to further reduce multiplicative speckle noise. Our approach was evaluated using the MSTAR dataset, and objective functions were based on two contrasting reference metrics, namely the peak signal-to-noise ratio (PSNR) and the mean structural similarity index metric (MSSIM). Our results show that the suggested approach outperforms modern filters and has significant generalization ability to unknown noise levels, while also being highly interpretable. By providing a new framework for despeckling medical images, our work has the potential to improve the accuracy and reliability of medical imaging diagnosis and treatment planning.
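The two reference metrics named above can be computed as in the brief sketch below using scikit-image; the images and the multiplicative noise are synthetic placeholders, and the despeckling filter itself is not reproduced here.

```python
# Hedged sketch: PSNR and MSSIM evaluation of a despeckled image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
clean = rng.random((128, 128))                                        # placeholder reference image
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape) # multiplicative speckle
denoised = np.clip(speckled, 0, 1)                                    # stand-in for the filter output

psnr = peak_signal_noise_ratio(clean, denoised, data_range=1.0)
mssim = structural_similarity(clean, denoised, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, MSSIM = {mssim:.4f}")
```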

10.
Sensors (Basel) ; 23(18)2023 Sep 13.
Article in English | MEDLINE | ID: mdl-37765912

ABSTRACT

Industrial automation systems are undergoing a revolutionary change with the use of Internet-connected operating equipment and the adoption of cutting-edge technologies such as AI, IoT, cloud computing, and deep learning within business organizations. These innovative solutions are facilitating Industry 4.0. However, these technological advances also introduce unique security challenges whose consequences need to be identified. This research presents a hybrid intrusion detection model (HIDM) that uses OCNN-LSTM and transfer learning (TL) for Industry 4.0. The proposed model uses a CNN optimized via the grey wolf optimizer (GWO), which fine-tunes the CNN parameters and helps to improve the model's prediction accuracy. Transfer learning enhances the training process by carrying the knowledge acquired by the OCNN-LSTM model into each subsequent cycle, which helps to improve detection accuracy. To measure the performance of the proposed model, we conducted a multi-class classification analysis on two online industrial IDS datasets, ToN-IoT and UNSW-NB15. We conducted two experiments, one for each dataset, and calculated various performance metrics, i.e., precision, F-measure, recall, accuracy, and detection rate, for the OCNN-LSTM model with and without TL, as well as for the CNN and LSTM models. For the ToN-IoT dataset, the OCNN-LSTM with TL achieved a precision of 92.7%; for the UNSW-NB15 dataset, the precision was 94.25%, which is higher than OCNN-LSTM without TL.
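A compact sketch of the grey wolf optimizer update rule is given below for orientation; the fitness function is a placeholder for the CNN's validation error on the intrusion-detection data, and the search dimensionality and iteration counts are illustrative assumptions.

```python
# Hedged sketch of the grey wolf optimizer (GWO) used for parameter tuning.
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):                                    # placeholder objective
    return np.sum((x - np.array([0.3, 0.7])) ** 2)

dim, n_wolves, n_iter = 2, 15, 60
wolves = rng.uniform(0, 1, size=(n_wolves, dim))

for t in range(n_iter):
    scores = np.array([fitness(w) for w in wolves])
    alpha, beta, delta = wolves[scores.argsort()[:3]]   # three leading wolves
    a = 2 - 2 * t / n_iter                              # decreases linearly 2 -> 0
    new_wolves = []
    for w in wolves:
        candidates = []
        for leader in (alpha, beta, delta):
            A = 2 * a * rng.random(dim) - a
            C = 2 * rng.random(dim)
            D = np.abs(C * leader - w)
            candidates.append(leader - A * D)
        new_wolves.append(np.mean(candidates, axis=0))
    wolves = np.clip(np.array(new_wolves), 0, 1)

best = wolves[np.argmin([fitness(w) for w in wolves])]
print("best parameters:", best)
```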

11.
Sci Rep ; 13(1): 12473, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37528148

ABSTRACT

Hepatitis C Virus (HCV) is a viral infection that causes liver inflammation. Approximately 3.4 million cases of HCV are reported worldwide each year, and diagnosing HCV at an early stage helps to save lives. Earlier HCV research has relied on single ML-based prediction models, which encounter several issues, i.e., poor accuracy, data imbalance, and overfitting. This research proposes a Hybrid Predictive Model (HPM) based on an improved random forest (IRF) and support vector machine (SVM) to overcome these limitations. The random forest method is improved by adding a bootstrapping process that iteratively eliminates a tree's minor features to build a strong forest, which improves the performance of the HPM. The proposed HPM uses a ranker method to rank the dataset features and applies the IRF with SVM to the higher-ranked features to build the prediction model. This research uses the online HCV dataset from UCI to measure the proposed model's performance. The dataset is highly imbalanced; to deal with this issue, we used the synthetic minority over-sampling technique (SMOTE). Two experiments are performed. The first experiment compares data-splitting methods, K-fold cross-validation and training/testing splits: the proposed method achieved an accuracy of 95.89% for k = 5 and 96.29% for k = 10, and 91.24% for an 80:20 split and 92.39% for a 70:30 split, which is the best compared to the existing SVM, MARS, RF, DT, and BGLM methods. In the second experiment, the analysis is performed using feature selection with and without SMOTE: the proposed method achieves an accuracy of 41.541% without SMOTE and 96.82% with SMOTE-based feature selection, which is better than existing ML methods. The experimental results underline the importance of feature selection and class balancing for achieving higher accuracy in HCV prediction.
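A minimal sketch of the overall pipeline idea, under assumptions, is shown below: SMOTE oversampling, ranking-based feature selection, and a random forest / SVM combination (rendered here as a soft-voting ensemble, which may differ from the paper's exact hybridization); the synthetic dataset merely stands in for the UCI HCV data.

```python
# Hedged sketch: SMOTE balancing, feature ranking, and an RF + SVM ensemble.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=12, weights=[0.9, 0.1], random_state=0)
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)          # balance the classes

model = Pipeline([
    ("rank", SelectKBest(f_classif, k=8)),                        # keep higher-ranked features
    ("clf", VotingClassifier(
        estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                    ("svm", SVC(probability=True, random_state=0))],
        voting="soft")),
])
scores = cross_val_score(model, X_res, y_res, cv=5)
print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```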


Subject(s)
Hepacivirus , Hepatitis C , Humans , Random Forest , Support Vector Machine , Algorithms
12.
Front Physiol ; 14: 1143249, 2023.
Article in English | MEDLINE | ID: mdl-37064899

ABSTRACT

The novel coronavirus that produced the COVID-19 pandemic has been spreading across the world for some time. Nearly every area of development has been impacted by COVID-19, and there is an urgent need for improvement in the healthcare system. This contagious illness can be controlled if people maintain adequate social distance and properly wear face masks. This paper proposes a method for detecting violations of these measures, namely failing to wear a face mask and failing to maintain social distancing. A dataset compiled from several sources is used to train a deep learning architecture. The proposed system makes use of the YOLOv3 architecture and computer vision to compute the distance between two people in a given area and to predict whether people are wearing masks. The goal of this research is to provide a valuable tool for reducing the transmission of this contagious disease in various environments, including streets and supermarkets. The proposed system is evaluated using the COCO dataset. The experimental analysis shows that the proposed system performs well in detecting people wearing masks, achieving an accuracy of 99.2% and an F1-score of 0.99.
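The distance check applied to detector output can be sketched as below: given person bounding boxes (as YOLOv3 would return), pairs whose centroid distance falls below a threshold are flagged. The pixel threshold is an assumed calibration constant, not a value from the paper.

```python
# Hedged sketch of the social-distancing check over detected person boxes.
import itertools
import numpy as np

def centroid(box):                        # box = (x1, y1, x2, y2) in pixels
    x1, y1, x2, y2 = box
    return np.array([(x1 + x2) / 2, (y1 + y2) / 2])

def violations(boxes, min_distance_px=120):
    """Return index pairs of people standing closer than the threshold."""
    return [(i, j) for i, j in itertools.combinations(range(len(boxes)), 2)
            if np.linalg.norm(centroid(boxes[i]) - centroid(boxes[j])) < min_distance_px]

person_boxes = [(100, 200, 180, 420), (150, 210, 240, 430), (600, 180, 690, 410)]
print(violations(person_boxes))           # -> [(0, 1)]
```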

13.
Front Physiol ; 14: 1125952, 2023.
Article in English | MEDLINE | ID: mdl-36793418

ABSTRACT

Cloud computing is commonly integrated with wireless sensor networks to enable monitoring systems and improve quality of service. Patient data sensed by biosensors are monitored regardless of the patient data type, which reduces the workload of hospitals and physicians. Wearable sensor devices and the Internet of Medical Things (IoMT) have changed health services, resulting in faster monitoring, prediction, diagnosis, and treatment. Nevertheless, difficulties remain that need to be resolved with AI methods. The primary goal of this study is to introduce an AI-powered IoMT telemedicine infrastructure for E-healthcare. In the proposed system, data are first collected from the patient's body using sensing devices, transmitted through a gateway/Wi-Fi, and stored in an IoMT cloud repository. The stored information is then acquired and preprocessed to refine the collected data. Features are extracted from the preprocessed data by means of high-dimensional Linear Discriminant Analysis (LDA), and the optimal features are selected using a reconfigured multi-objective cuckoo search algorithm (CSA). The prediction of abnormal/normal data is made using a hybrid ResNet-18 and GoogleNet classifier (HRGC). A decision is then made on whether to send an alert to hospitals/healthcare personnel. If the expected results are satisfactory, the participant information is saved online for later use. Finally, a performance analysis is carried out to validate the efficiency of the proposed mechanism.
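The LDA feature-extraction step can be sketched with scikit-learn as below; the synthetic data stand in for the preprocessed IoMT readings, and the cuckoo-search selection and ResNet-18/GoogleNet classifier stages that follow are not shown.

```python
# Hedged sketch: supervised dimensionality reduction with LDA.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_classification(n_samples=500, n_features=30, n_informative=10,
                           n_classes=2, random_state=0)
lda = LinearDiscriminantAnalysis(n_components=1)   # at most n_classes - 1 components
X_reduced = lda.fit_transform(X, y)
print(X_reduced.shape)                             # (500, 1)
```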

14.
Sensors (Basel) ; 22(23)2022 Dec 04.
Article in English | MEDLINE | ID: mdl-36502183

ABSTRACT

Emotion charting using multimodal signals is in great demand for stroke-affected patients, for psychiatrists while examining patients, and for neuromarketing applications. Multimodal signals for emotion charting include electrocardiogram (ECG), electroencephalogram (EEG), and galvanic skin response (GSR) signals. EEG, ECG, and GSR are also known as physiological signals, which can be used to identify human emotions. Because physiological signals are generated autonomously by the human central nervous system and are therefore unbiased, this field has attracted great interest in recent research. Researchers have developed multiple methods for classifying these signals for emotion detection. However, due to the non-linear nature of these signals and the noise introduced during recording, accurate classification of physiological signals remains a challenge for emotion charting. Valence and arousal are two important states for emotion detection; therefore, this paper presents a novel ensemble learning method based on deep learning for classifying four emotional states: high valence and high arousal (HVHA), low valence and low arousal (LVLA), high valence and low arousal (HVLA), and low valence and high arousal (LVHA). In the proposed method, the multimodal signals (EEG, ECG, and GSR) are preprocessed using bandpass filtering and independent component analysis (ICA) for noise removal in EEG signals, followed by a discrete wavelet transform for conversion from the time domain to the frequency domain. The discrete wavelet transform yields spectrograms of the physiological signals, from which features are extracted using stacked autoencoders. A feature vector obtained from the bottleneck layer of the autoencoder is fed to three classifiers, SVM (support vector machine), RF (random forest), and LSTM (long short-term memory), followed by majority voting for ensemble classification. The proposed system is trained and tested on the AMIGOS dataset with k-fold cross-validation and achieves the highest accuracy of 94.5%, an improvement over other state-of-the-art methods.
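The final ensemble step can be sketched as simple hard majority voting over the three classifiers' label predictions, as below; the label arrays are illustrative placeholders rather than outputs from trained models.

```python
# Hedged sketch: hard majority voting over SVM, RF, and LSTM predictions
# for the four valence/arousal classes (HVHA=0, LVLA=1, HVLA=2, LVHA=3).
import numpy as np

def majority_vote(*prediction_arrays):
    """Each array holds per-sample class labels from one classifier."""
    stacked = np.stack(prediction_arrays, axis=0)            # (n_models, n_samples)
    return np.array([np.bincount(col).argmax() for col in stacked.T])

svm_pred  = np.array([0, 1, 2, 3, 1])
rf_pred   = np.array([0, 1, 2, 2, 1])
lstm_pred = np.array([1, 1, 2, 3, 3])
print(majority_vote(svm_pred, rf_pred, lstm_pred))           # -> [0 1 2 3 1]
```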


Subject(s)
Arousal , Emotions , Humans , Emotions/physiology , Arousal/physiology , Wavelet Analysis , Electroencephalography/methods , Support Vector Machine
15.
Comput Intell Neurosci ; 2022: 3145956, 2022.
Article in English | MEDLINE | ID: mdl-36238674

ABSTRACT

Effective software cost estimation significantly contributes to decision-making, and nature-inspired meta-heuristic algorithms are increasingly being applied to software cost estimation problems. The constructive cost model (COCOMO) method is a well-known regression-based algorithmic technique for estimating software costs. The limitation of COCOMO models is that the values of their coefficients are constant for similar kinds of projects, whereas in reality these parameters vary from one organization to another; accurate estimation therefore requires fine-tuning the coefficients. The research community is now examining deep learning (DL) as a forward-looking solution to improve cost estimation. Although deep learning architectures provide improvements over existing flat techniques, they also have shortcomings, such as long training delays, over-fitting, and under-fitting, and they usually require fine-tuning a large number of parameters. Meta-heuristic algorithms support finding a good optimal solution at a reasonable computational cost, so they can be used with deep neural networks to minimize training delays. The hybrid ant colony optimization with BAT (HACO-BA) algorithm combines the most common global-optimum search technique for ant colonies (ACO) with one of the newest search techniques, the BAT algorithm (BA). This technique supports the solution of multivariable problems and has been applied to the optimization of a large number of engineering problems. This work performs a two-fold assessment: (i) comparing the efficacy of ACO, BA, and HACO-BA in optimizing the COCOMO II coefficients; and (ii) using HACO-BA to optimize and improve the deep learning training process. The experimental results show that the hybrid HACO-BA performs better than ACO and BA for tuning COCOMO II, and also performs better in the optimization of the DNN in terms of execution time and accuracy. The process is executed for up to 100 epochs, and the accuracy achieved by the proposed DNN approach is almost 98%, while the NN achieved an accuracy of up to 85% on the same datasets.
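For reference, the COCOMO II post-architecture effort equation whose coefficients are being tuned has the general form below; A ≈ 2.94 and B ≈ 0.91 are the published COCOMO II.2000 nominal constants, while the effort multipliers EM_i and scale factors SF_j are project attributes.

```latex
% COCOMO II post-architecture effort model (effort in person-months)
PM = A \times \text{Size}^{E} \times \prod_{i} EM_i,
\qquad
E = B + 0.01 \sum_{j} SF_j
```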


Subject(s)
Deep Learning , Heuristics , Algorithms , Neural Networks, Computer , Software
16.
Sensors (Basel) ; 22(17)2022 Aug 31.
Article in English | MEDLINE | ID: mdl-36081022

ABSTRACT

In the recent past, a huge number of cameras have been placed in a variety of public and private areas for surveillance, the monitoring of abnormal human actions, and traffic surveillance. The detection and recognition of abnormal activity in a real-world environment is a big challenge, as there can be many types of alarming and abnormal activities, such as theft, violence, and accidents. This research deals with accidents in traffic videos. In the modern world, video traffic surveillance systems (VTSS) are used for traffic surveillance and monitoring, and as the population increases drastically, the likelihood of accidents also increases. The VTSS is used to detect abnormal traffic events or incidents on roads and highways, such as traffic jams, traffic congestion, and vehicle accidents. In many accidents, people are helpless and some die because emergency treatment is unavailable on long highways and in places far from cities. This research proposes a methodology for detecting accidents automatically from surveillance videos. A review of the literature suggests that convolutional neural networks (CNNs), a specialized deep learning approach pioneered for grid-like data, are effective in image and video analysis. This research uses CNNs to find anomalies (accidents) in videos captured by the VTSS and implements a rolling prediction algorithm to achieve high accuracy. To train the CNN model, a vehicle accident image dataset (VAID) composed of images with anomalies was constructed and used. For testing, the trained CNN model was checked on multiple videos, and the results were collected and analyzed. The results show successful detection of traffic accident events with an accuracy of 82% on traffic surveillance system videos.
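The rolling prediction step can be sketched as a moving average over per-frame CNN probabilities, as below; the window length, threshold, and probability values are illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch: per-frame accident probabilities smoothed over a sliding
# window before the accident/normal decision, suppressing single-frame flicker.
from collections import deque
import numpy as np

def rolling_decisions(frame_probs, window=10, threshold=0.5):
    """frame_probs: iterable of per-frame accident probabilities from the CNN."""
    buffer, decisions = deque(maxlen=window), []
    for p in frame_probs:
        buffer.append(p)
        decisions.append(np.mean(buffer) > threshold)
    return decisions

probs = [0.1, 0.2, 0.9, 0.2, 0.8, 0.9, 0.95, 0.9, 0.85, 0.9]
print(rolling_decisions(probs, window=5))
```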


Subject(s)
Deep Learning , Accidents, Traffic , Algorithms , Cities , Humans , Neural Networks, Computer
17.
Comput Intell Neurosci ; 2022: 2664901, 2022.
Article in English | MEDLINE | ID: mdl-35958769

ABSTRACT

The world's population is growing, and diseases are increasing alongside it, partly due to adulterated and chemically treated food. People may suffer from minor illnesses such as colds and coughs, or from major diseases such as cancer. This work focuses on brain (encephalon) tumors, which are a serious problem today: worldwide, there is a shortage of clinical experts relative to the number of people affected by brain tumors. We therefore propose an automatic tumor classification system based on a particle swarm optimization (PSO)-tuned extreme learning machine (ELM), with segmentation performed by an improved fast and robust fuzzy C-means (IFRFCM) algorithm and feature reduction using the widely used gray-level co-occurrence matrix (GLCM), which may assist clinical experts. The BraTS ("Multimodal Brain Tumor Segmentation Challenge 2020") dataset is used for both training and testing. The system achieved a classification accuracy of approximately 99.47%, which can be considered a good outcome.
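The GLCM feature-reduction step can be sketched with scikit-image as below; the random array stands in for a segmented tumour region, and the PSO-tuned ELM classifier that would consume these features is not shown.

```python
# Hedged sketch: GLCM texture descriptors from a (placeholder) tumour ROI.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # placeholder segmented ROI

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2], levels=256,
                    symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```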


Subject(s)
Algorithms , Neoplasms , Brain , Humans
18.
Sensors (Basel) ; 22(14)2022 Jul 08.
Article in English | MEDLINE | ID: mdl-35890830

ABSTRACT

Underwater wireless sensor networks (UWSNs) have emerged as the most widely used wireless network infrastructure in many applications. Sensing nodes are frequently deployed in hostile aquatic environments to collect data, with resources that are severely limited in terms of transmission time and bandwidth. Since underwater information is very sensitive and unique, user authentication is very important for accessing the data and information. UWSNs have unique communication and computation needs that are not met by existing digital signature techniques; as a result, a lightweight signature scheme is required to meet the communication and computation requirements. In this research, we present a Certificateless Online/Offline Signature (COOS) mechanism for UWSNs. The proposed scheme is based on a hyperelliptic curve cryptosystem, which offers the same degree of security as RSA, bilinear pairing, and elliptic curve cryptosystems (ECC) but with a smaller key size. In addition, the proposed scheme is proven secure in the random oracle model under the hyperelliptic curve discrete logarithm problem. A security analysis was also carried out, along with comparisons with relevant existing online/offline signature schemes. The comparison demonstrates that the proposed scheme is superior to the existing schemes in terms of both security and efficiency. Additionally, we employ the fuzzy-based Evaluation based on Distance from Average Solution (EDAS) technique to demonstrate the effectiveness of the proposed scheme.

19.
Front Psychol ; 13: 920594, 2022.
Article in English | MEDLINE | ID: mdl-35719580

ABSTRACT

Consumers' decision-making is complex and diverse in terms of gender. Different social, psychological, and economic factors mold the decision-making preferences of consumers. Most researchers have used a variance-based approach to explain consumer decision-making, which assumes a symmetric relationship between variables. We collected data from 468 smartwatch users and applied fuzzy-set qualitative comparative analysis (fsQCA) to explain and compare male and female consumers' decision-making complexity. fsQCA assumes that asymmetric relationships between variables can exist in the real world and that different combinations of variables can lead to the same output. The results show that different variables have core and secondary levels of impact on consumer decision-making; hence, we cannot simply label certain factors as significant or insignificant for decision-making. The fsQCA results reveal that cost value, performance expectancy, and social influence play a key role in consumers' buying decisions. This study contributes to the existing literature by explaining consumer decision-making through configuration and complexity theories and by identifying unique solutions for both genders. It also makes a theoretical contribution by revealing the complexity of consumer purchasing decisions for new products.

20.
Front Public Health ; 10: 885212, 2022.
Article in English | MEDLINE | ID: mdl-35548086

ABSTRACT

Percentage mammographic breast density (MBD) is one of the most notable biomarkers. It is assessed visually by radiologists using the four qualitative Breast Imaging Reporting and Data System (BIRADS) categories, and it is demanding for radiologists to differentiate between the two variably allocated classes, BIRADS C and BIRADS D. Recently, convolutional neural networks have been found superior in classification tasks due to their ability to extract local features with shared-weight architecture and space-invariance characteristics. The proposed study examines an artificial intelligence (AI)-based MBD classifier toward developing a computer-assisted tool that helps radiologists distinguish the BIRADS class in modern clinical practice. This article proposes a multichannel DenseNet architecture for MBD classification. The architecture consists of a four-channel DenseNet transfer learning network that extracts significant features from a single patient's two mediolateral oblique (MLO) and two craniocaudal (CC) views of digital mammograms. The performance of the proposed classifier is evaluated using 200 cases consisting of 800 digital mammograms of the different BIRADS density classes with validated density ground truth, and it is assessed with quantitative metrics such as precision, sensitivity, specificity, and the area under the curve (AUC). The preliminary outcomes reveal that the proposed multichannel model delivers good performance, with an accuracy of 96.67% during training and 90.06% during testing and an average AUC of 0.9625. The results are also validated qualitatively with the help of a radiologist expert in the field of MBD. The proposed architecture achieves state-of-the-art results with fewer images and less computation power.
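A minimal PyTorch sketch of a four-channel DenseNet of the kind described is given below: one DenseNet-121 backbone per view (two MLO, two CC), with pooled features concatenated before a shared classification head. The backbone variant, feature sizes, and head are illustrative assumptions rather than the authors' exact configuration.

```python
# Hedged sketch (PyTorch): four DenseNet-121 branches, one per mammographic view.
import torch
import torch.nn as nn
from torchvision import models

class MultiViewDenseNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.branches = nn.ModuleList()
        for _ in range(4):
            backbone = models.densenet121(weights=None)   # load pretrained weights in practice
            backbone.classifier = nn.Identity()           # keep the 1024-d pooled features
            self.branches.append(backbone)
        self.head = nn.Linear(4 * 1024, n_classes)        # BIRADS A-D

    def forward(self, views):
        # views: sequence of four tensors, each (B, 3, H, W): two MLO and two CC
        feats = [branch(v) for branch, v in zip(self.branches, views)]
        return self.head(torch.cat(feats, dim=1))

model = MultiViewDenseNet()
views = [torch.randn(2, 3, 224, 224) for _ in range(4)]
print(model(views).shape)                                  # torch.Size([2, 4])
```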


Subject(s)
Breast Density , Breast Neoplasms , Artificial Intelligence , Breast Neoplasms/diagnostic imaging , Female , Humans , Mammography/methods , Neural Networks, Computer