Results 1 - 20 of 30
1.
Sensors (Basel) ; 22(10)2022 May 13.
Article in English | MEDLINE | ID: mdl-35632142

ABSTRACT

Blockchain technology is gaining attention in fields such as intellectual property, finance, and smart agriculture. Its security features have been widely used and integrated with artificial intelligence, the Internet of Things (IoT), software-defined networks (SDN), and more. The consensus mechanism is the core of a blockchain and ultimately determines its performance. In the past few years, many consensus algorithms, such as proof of work (PoW), Ripple, proof of stake (PoS), and practical Byzantine fault tolerance (PBFT), have been designed to improve blockchain performance. However, their high energy requirements, memory utilization, and processing times remain unsatisfactory. This paper proposes a PoW-based consensus approach in which a single miner is selected to perform the mining task, and the mining task is offloaded to the network edge. The miner is selected on the basis of a digitization of the specifications of the candidate machines. The proposed model makes the consensus approach more energy efficient and reduces both memory utilization and processing time: energy consumption improves by approximately 21% and memory utilization by 24%. Efficiency in the block generation rate was observed at fixed time intervals of 20, 40, and 60 min.
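The abstract does not give the exact digitization scheme, so the sketch below is only an illustration of the core idea: score each edge node by a weighted combination of its machine specifications and elect the single best-scoring node as the miner. The field names and weights are assumptions, not the authors' formula.

```python
# Hypothetical single-miner selection by "digitized" machine specs.
# Spec fields and weights are illustrative assumptions.

def select_miner(nodes, weights=None):
    """Pick the node with the best weighted spec score."""
    weights = weights or {"cpu_ghz": 0.4, "ram_gb": 0.3, "energy_eff": 0.3}

    def score(specs):
        return sum(weights[k] * specs[k] for k in weights)

    return max(nodes, key=lambda n: score(n["specs"]))

edge_nodes = [
    {"id": "edge-1", "specs": {"cpu_ghz": 2.4, "ram_gb": 8, "energy_eff": 0.7}},
    {"id": "edge-2", "specs": {"cpu_ghz": 3.2, "ram_gb": 16, "energy_eff": 0.9}},
    {"id": "edge-3", "specs": {"cpu_ghz": 1.8, "ram_gb": 4, "energy_eff": 0.95}},
]
print(select_miner(edge_nodes)["id"])  # edge-2 scores highest here
```

Because only one node mines per round, the whole-network energy cost of redundant hashing disappears, which is consistent with the reported savings.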

2.
Sensors (Basel) ; 22(16)2022 Aug 16.
Article in English | MEDLINE | ID: mdl-36015869

ABSTRACT

Wireless sensor networks (WSNs) have recently been viewed as the basic architecture that paved the way for the Internet of Things (IoT). However, when WSNs are linked with the IoT, a difficult issue arises from excessive energy utilization in their nodes and short network lifetime. As a result, energy constraints in sensor nodes, sensor data sharing, and routing protocols are fundamental topics in WSN research. This work presents an enhanced smart-energy-efficient routing protocol (ESEERP) that extends the lifetime of the network and improves its connectivity to address the aforementioned deficiencies. It selects the Cluster Head (CH) using an efficient optimization method derived from several objectives, which reduces the number of sleepy sensor nodes and decreases energy utilization. After CH selection, a Sail Fish Optimizer (SFO) is used to find an appropriate route to the sink node for data transfer. The proposed methodology is analyzed mathematically with respect to energy utilization, bandwidth, packet delivery ratio, and network lifetime, and the results are compared with similar existing approaches such as the Genetic Algorithm (GA), Ant Lion Optimization (ALO), and Particle Swarm Optimization (PSO). In simulations with 500 nodes, the proposed approach sustains a network lifetime of 3500 rounds, energy utilization peaks at 0.5 Joules, bandwidth reaches a data rate of 0.52 Mbps, and the packet delivery ratio (PDR) is 96%.
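The abstract states that the cluster head is chosen by an optimization over several objectives but does not publish the fitness function, so the combination below (residual energy and neighbor density rewarded, distance to sink penalized) is purely an assumed illustration of how such a multi-objective CH election could look.

```python
# Illustrative cluster-head election; weights and cost model are assumptions,
# not the ESEERP paper's actual fitness function.

def ch_fitness(node, w_energy=0.5, w_dist=0.3, w_density=0.2):
    # Reward residual energy and neighbor density; penalize distance to sink.
    return (w_energy * node["energy"]
            + w_density * node["neighbors"] / 10.0
            - w_dist * node["dist_to_sink"] / 100.0)

def elect_cluster_head(cluster):
    return max(cluster, key=ch_fitness)

cluster = [
    {"id": 1, "energy": 0.9, "neighbors": 4, "dist_to_sink": 80},
    {"id": 2, "energy": 0.6, "neighbors": 9, "dist_to_sink": 30},
    {"id": 3, "energy": 0.8, "neighbors": 6, "dist_to_sink": 50},
]
print(elect_cluster_head(cluster)["id"])  # node 2 balances the three terms best
```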


Subjects
Computer Communication Networks, Internet of Things, Algorithms, Animals, Conservation of Energy Resources, Wireless Technology
3.
Sensors (Basel) ; 22(8)2022 Apr 13.
Article in English | MEDLINE | ID: mdl-35458972

ABSTRACT

Lymph node metastasis in breast cancer may be accurately predicted using a DenseNet-169 model. However, the current system for identifying metastases in a lymph node is manual and tedious: a pathologist well-versed in the detection and characterization of lymph nodes spends hours investigating histological slides. Furthermore, because of the massive size of most whole-slide images (WSI), it is practical to divide a slide into batches of small image patches and apply methods independently to each patch. The present work introduces a novel method for the automated diagnosis and detection of metastases from whole-slide images using the Fast AI framework and the 1-cycle policy, and compares this new approach to previous methods. The proposed model surpasses other state-of-the-art methods with more than 97.4% accuracy. In addition, a mobile application is developed for a prompt and quick response; it collects user information and applies the model to flag metastases in the early stages of cancer. These results indicate that the suggested model may assist general practitioners in accurately analyzing breast cancer cases, hence preventing future complications and mortality. With digital image processing, histopathologic interpretation and diagnostic accuracy have improved considerably.
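The 1-cycle policy mentioned above warms the learning rate up to a maximum and then anneals it back down within a single training run. This is a minimal standalone sketch of that schedule; the particular `lr_max`, warmup fraction, and divisor are illustrative choices, not values from the paper.

```python
import math

# Minimal 1-cycle learning-rate schedule (as popularized by fastai):
# cosine warmup from lr_max/div up to lr_max, then cosine anneal toward 0.

def one_cycle_lr(step, total_steps, lr_max=1e-3, pct_warmup=0.3, div=25.0):
    warmup_steps = int(total_steps * pct_warmup)
    lr_min = lr_max / div
    if step < warmup_steps:
        t = step / max(1, warmup_steps)
        return lr_min + (lr_max - lr_min) * (1 - math.cos(math.pi * t)) / 2
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return lr_max * (1 + math.cos(math.pi * t)) / 2

schedule = [one_cycle_lr(s, 100) for s in range(100)]
print(round(max(schedule), 6))  # the peak never exceeds lr_max
```

The large-then-small learning rate acts as a regularizer and typically lets patch classifiers like the one described converge in fewer epochs.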


Subjects
Breast Neoplasms, Breast Neoplasms/diagnosis, Breast Neoplasms/pathology, Female, Humans, Image Processing, Computer-Assisted/methods, Lymph Nodes/pathology, Lymphatic Metastasis/pathology, Policies
4.
Sensors (Basel) ; 22(13)2022 Jul 02.
Article in English | MEDLINE | ID: mdl-35808508

ABSTRACT

Cloud providers create a vendor lock-in environment by offering proprietary and non-standard APIs, resulting in a lack of interoperability and portability among clouds. To overcome this deterrent, solutions must be developed to exploit multiple clouds efficaciously. This paper proposes a middleware platform to mitigate the application-portability issue among clouds, together with a literature review of existing solutions for application portability. The middleware allows an application to be ported across various platform-as-a-service (PaaS) clouds and supports deploying different services of an application on disparate clouds. The efficiency of the abstraction layer is validated by experimentation on an application that uses the message queue, Binary Large Object (BLOB), email, and short message service (SMS) services of various clouds via the proposed middleware, against the same application using these services via their native code. The experimental results show that adding this middleware mildly affects latency but dramatically reduces the developer's overhead of implementing each service separately for different clouds to make it portable.
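An abstraction layer of this kind is commonly realized as a provider-neutral interface with one adapter per cloud, so application code never touches a proprietary API directly. The class and method names below are illustrative; the paper's actual interface is not reproduced here, and the cloud SDK calls are stubbed out.

```python
from abc import ABC, abstractmethod

# Provider-neutral queue interface; each cloud gets an adapter.
class QueueService(ABC):
    @abstractmethod
    def send(self, message: str) -> None: ...

class AwsQueueAdapter(QueueService):
    def send(self, message: str) -> None:
        # A real adapter would call the AWS SDK here; stubbed for the sketch.
        print(f"[aws] sent: {message}")

class AzureQueueAdapter(QueueService):
    def send(self, message: str) -> None:
        print(f"[azure] sent: {message}")

def get_queue(provider: str) -> QueueService:
    adapters = {"aws": AwsQueueAdapter, "azure": AzureQueueAdapter}
    return adapters[provider]()

# Application code stays identical when the target cloud changes:
get_queue("aws").send("hello")
get_queue("azure").send("hello")
```

The mild latency cost reported in the abstract corresponds to the extra indirection through such adapters; the portability gain is that switching clouds changes one configuration string, not the application code.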


Subjects
Software
5.
Sensors (Basel) ; 22(9)2022 Apr 30.
Article in English | MEDLINE | ID: mdl-35591142

ABSTRACT

As a result of the proliferation of digital and network technologies in all facets of modern society, including healthcare systems, the widespread adoption of Electronic Healthcare Records (EHRs) has become the norm. At the same time, Blockchain has been widely accepted as a potent solution for addressing security issues in untrusted, distributed, decentralized applications, and has thus prompted a slew of works on Blockchain-enabled EHRs. However, most such prototypes ignore the performance aspects of the proposed designs. In this paper, a prototype for a Blockchain-based EHR is presented that employs smart contracts on Hyperledger Fabric 2.0 and provides a unified performance analysis with Hyperledger Caliper 0.4.2. An additional contribution of this paper lies in the use of a multi-hosted testbed for the performance analysis, alongside a far more realistic Gossip-based traffic scenario analysis with Tcpdump tools. Moreover, the prototype is tested for performance with transaction-ordering schemes such as Kafka and RAFT, which offer superior fault tolerance, unlike most of the literature, which uses SOLO. These additional features make the performance evaluation presented herein much more realistic and add to the credibility of the results obtained. Across the multi-host instances, the proposed framework maintains high throughput, low latency, and low resource utilization for opening, querying, and transferring transactions in a healthcare Blockchain network. The results obtained in various rounds of evaluation demonstrate the superiority of the proposed framework.


Subjects
Blockchain, Benchmarking, Delivery of Health Care, Technology
6.
Sensors (Basel) ; 22(23)2022 Dec 02.
Article in English | MEDLINE | ID: mdl-36502150

ABSTRACT

Wearable healthcare equipment is primarily designed to alert patients to specific health conditions or to act as a useful tool for treatment or follow-up. With the growth of technologies and connectivity, the security of these devices has become a growing concern. The lack of security awareness among novice users and the risk of several intermediary attacks for accessing health information severely endanger the use of IoT-enabled healthcare systems. In this paper, a blockchain-based secure data storage system is proposed along with user authentication and health-status prediction. Firstly, this work utilizes the reversed public-private keys combined Rivest-Shamir-Adleman (RP2-RSA) algorithm for providing security. Secondly, feature selection is performed using the correlation-factor-induced salp swarm optimization algorithm (CF-SSOA). Finally, health-status classification is performed using an artificial neural network with advanced weight initialization and a SignReLU activation function (ASR-ANN), which classifies the status as normal or abnormal. Abnormal measurements are stored in the corresponding patient's blockchain, where blockchain technology keeps medical data secure for further analysis. The proposed model achieves an accuracy of 95.893% and is validated against other baseline techniques. On the security front, the proposed RP2-RSA attains a 96.123% security level.
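The RP2-RSA variant is not specified in the abstract, so no attempt is made to reproduce it here. For orientation only, this is textbook RSA with the classic toy primes, showing the key mechanics any RSA-based scheme builds on; keys this small are for illustration and must never be used in practice.

```python
# Textbook RSA with toy primes (p=61, q=53), for illustration only.
p, q = 61, 53
n = p * q                      # modulus: 3233
phi = (p - 1) * (q - 1)        # Euler totient: 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent via modular inverse (Python 3.8+)

msg = 65
cipher = pow(msg, e, n)        # encrypt: msg^e mod n
plain = pow(cipher, d, n)      # decrypt: cipher^d mod n
assert plain == msg
print(n, d, cipher)            # 3233 2753 2790
```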


Subjects
Blockchain, Humans, Neural Networks, Computer, Algorithms, Technology, Delivery of Health Care, Computer Security
7.
Sensors (Basel) ; 21(16)2021 Aug 18.
Article in English | MEDLINE | ID: mdl-34451013

ABSTRACT

In machine learning and data science, feature selection is considered a crucial step of data preprocessing. When raw data are applied directly for classification or clustering, learning algorithms sometimes perform poorly; one possible reason is the presence of redundant, noisy, and non-informative features in the datasets. Hence, feature selection methods are used to identify the subset of relevant features that maximizes model performance. Moreover, reducing the feature dimension also reduces both training time and the storage required by the model. In this paper, we present a tri-stage wrapper-filter feature selection framework for medical-report-based disease detection. In the first stage, an ensemble is formed from four filter methods (Mutual Information, ReliefF, Chi Square, and Xvariance); each feature from the union set is then assessed by three classification algorithms (support vector machine, naïve Bayes, and k-nearest neighbors) and an average accuracy is calculated. The features with higher accuracy are selected to obtain a preliminary subset of optimal features. In the second stage, Pearson correlation is used to discard highly correlated features. In these two stages, the XGBoost classification algorithm is applied to obtain the most contributing features, which in turn provide the best optimal subset. In the final stage, the obtained feature subset is fed to a meta-heuristic, the whale optimization algorithm, to further reduce the feature set and achieve higher accuracy. We evaluated the proposed framework on four publicly available disease datasets from the UCI machine learning repository: arrhythmia, leukemia, DLBCL, and prostate cancer. The results confirm that the proposed method can outperform many state-of-the-art methods while still detecting the important features. Fewer features mean fewer medical tests for a correct diagnosis, saving both time and cost.
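Of the three stages, the second (Pearson-correlation redundancy filtering) is simple enough to sketch directly. The greedy keep/drop policy and the 0.9 threshold below are illustrative assumptions; the paper does not state its exact threshold here.

```python
import math

# Stage-2 sketch: drop a feature if it is highly Pearson-correlated
# with a feature already kept. Threshold 0.9 is an illustrative choice.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def drop_correlated(features, threshold=0.9):
    """features: dict name -> column of values. Greedy forward pass."""
    kept = []
    for name, col in features.items():
        if all(abs(pearson(col, features[k])) < threshold for k in kept):
            kept.append(name)
    return kept

cols = {
    "f1": [1, 2, 3, 4, 5],
    "f2": [2, 4, 6, 8, 10],   # perfectly correlated with f1
    "f3": [5, 3, 6, 2, 7],
}
print(drop_correlated(cols))  # f2 is dropped as redundant
```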


Subjects
Algorithms, Support Vector Machine, Bayes Theorem, Cluster Analysis, Humans, Machine Learning, Male
8.
Sensors (Basel) ; 21(23)2021 Dec 03.
Article in English | MEDLINE | ID: mdl-34884099

ABSTRACT

Diabetes is a serious disease that currently has no cure. However, early diagnosis helps patients start timely treatment and thus reduces or eliminates the risk of severe complications. The prevalence of diabetes has been rising rapidly worldwide. Several methods have been introduced to diagnose diabetes at an early stage; however, most of them lack interpretability, so the diagnostic process cannot be explained. In this paper, fuzzy logic is employed to develop an interpretable model for the early diagnosis of diabetes. Fuzzy logic is combined with the cosine amplitude method to construct two fuzzy classifiers, and fuzzy rules are then designed based on these classifiers. Lastly, a publicly available diabetes dataset is used to evaluate the performance of the proposed fuzzy rule-based model. The results show that the proposed model outperforms existing techniques, achieving an accuracy of 96.47%. This prediction accuracy suggests that the model can be utilized in the healthcare sector for the accurate diagnosis of diabetes.
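The cosine amplitude method named above is a standard fuzzy similarity measure. The function below implements that standard formula; how the paper turns the resulting similarities into its two classifiers and rules is not specified in the abstract, so the final nearest-prototype decision step here is an illustrative assumption with made-up toy prototypes.

```python
import math

def cosine_amplitude(x, y):
    """Standard cosine amplitude similarity, in [0, 1]:
    r = |sum(x_k * y_k)| / sqrt(sum(x_k^2) * sum(y_k^2))."""
    num = abs(sum(a * b for a, b in zip(x, y)))
    den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return num / den

# Toy normalized prototypes for two classes (illustrative values only):
proto_pos = [0.9, 0.8, 0.7]
proto_neg = [0.2, 0.3, 0.1]
sample = [0.85, 0.75, 0.65]

label = ("positive"
         if cosine_amplitude(sample, proto_pos) > cosine_amplitude(sample, proto_neg)
         else "negative")
print(label)
```

An interpretable rule then reads naturally, e.g. "IF the sample's similarity to the diabetic prototype is high THEN risk is high", which is the transparency advantage the paper emphasizes.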


Subjects
Algorithms, Diabetes Mellitus, Diabetes Mellitus/diagnosis, Fuzzy Logic, Humans
9.
Sensors (Basel) ; 22(1)2021 Dec 30.
Article in English | MEDLINE | ID: mdl-35274628

ABSTRACT

The paper presents a new security aspect for a Mobile Ad-Hoc Network (MANET)-based IoT model using the concept of artificial intelligence. The Black Hole Attack (BHA) is considered one of the most damaging threats in a MANET: the attacker node drops the entire data traffic and hence degrades the network performance. This necessitates an algorithm that can protect the network from BHA nodes. This article introduces an updated Ad-hoc On-Demand Distance Vector (AODV) routing protocol that combines the Artificial Bee Colony (ABC), Artificial Neural Network (ANN), and Support Vector Machine (SVM) techniques. The combination of the SVM with the ANN is the novelty of the proposed model, helping to identify attackers within the route discovered by the AODV routing mechanism. The model is trained using an ANN, but the selection of training data is performed using the ABC fitness function followed by the SVM. The role of the ABC is to provide a better route for data transmission between the source and the destination node. The optimized route suggested by the ABC is then passed to the SVM model along with the nodes' properties, and based on those properties the ANN decides whether a node is normal or an attacker. Simulation analysis in MATLAB shows that the proposed work improves the Packet Delivery Ratio (PDR), throughput, and delay. To validate the system's efficiency, a comparative analysis against existing approaches such as Decision Tree and Random Forest indicates that combining the SVM with the ANN is a beneficial step for detecting BHA attackers in MANET-based IoT networks.


Subjects
Algorithms, Artificial Intelligence, Computer Simulation, Neural Networks, Computer, Support Vector Machine
10.
Sensors (Basel) ; 21(16)2021 Aug 15.
Article in English | MEDLINE | ID: mdl-34450933

ABSTRACT

The substantial advancements offered by edge computing have spurred serious evolutionary improvements in Internet of Things (IoT) technology. The rigid design philosophy of the traditional network architecture limits its scope to meet future demands, whereas information-centric networking (ICN) is envisioned as a promising architecture to bridge these gaps and sustain IoT networks, commonly referred to as ICN-IoT. The edge-enabled ICN-IoT architecture demands efficient in-network caching techniques to support a better user quality of experience (QoE). In this paper, we propose an enhanced ICN-IoT content caching strategy that enables artificial intelligence (AI)-based collaborative filtering within the edge cloud to support heterogeneous IoT architectures. This collaborative-filtering-based strategy intelligently caches content on edge nodes to manage traffic at cloud databases. Evaluations were conducted to compare the proposed strategy with benchmark strategies such as LCE, LCD, CL4M, and ProbCache. The analytical results demonstrate better performance for our proposed strategy, with an average gain of 15% in cache hit ratio, a 12% reduction in content retrieval delay, and a 28% reduction in average hop count compared with the best baseline considered, LCD. We believe the proposed strategy will contribute an effective solution to related studies in this domain.

11.
Sci Rep ; 14(1): 6589, 2024 03 19.
Article in English | MEDLINE | ID: mdl-38504098

ABSTRACT

Identifying and recognizing food from its eating sounds is a challenging task, and it plays an important role in avoiding allergenic foods, providing dietary preferences to people restricted to a particular diet, showcasing cultural significance, and more. The aim of this research is to design a novel methodology that identifies food items by analyzing their eating sounds with various deep learning models. To achieve this objective, a system is proposed that extracts meaningful features from food-eating sounds with the help of signal processing techniques and classifies them with deep learning models into their respective food classes. Initially, 1200 labeled audio files covering 20 food items were collected and visualized to find relationships between the sound files of different food items. To extract meaningful features, techniques such as spectrograms, spectral rolloff, spectral bandwidth, and mel-frequency cepstral coefficients were used both to clean the audio files and to capture the unique characteristics of different food items. In the next phase, deep learning models such as GRU, LSTM, InceptionResNetV2, and a customized CNN were trained to learn spectral and temporal patterns in the audio signals. The models were also hybridized (Bidirectional LSTM + GRU, RNN + Bidirectional LSTM, and RNN + Bidirectional GRU) to analyze their performance on the same labeled data and to associate particular sound patterns with the corresponding food class. During evaluation, the highest accuracy, precision, F1 score, and recall were obtained by GRU with 99.28%, Bidirectional LSTM + GRU with 97.7% and 97.3%, and RNN + Bidirectional LSTM with 97.45%, respectively. The results of this study demonstrate that deep learning models have the potential to precisely identify foods on the basis of their sound.
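Two of the features named above, spectral rolloff and spectral bandwidth, have compact standard definitions. The sketch below computes them from a small hand-made magnitude spectrum; a real pipeline would apply these per STFT frame via an audio library, and the toy frequencies and magnitudes here are made up for illustration.

```python
def spectral_rolloff(freqs, mags, pct=0.85):
    """Lowest frequency below which pct of the total spectral energy lies."""
    total = sum(mags)
    acc = 0.0
    for f, m in zip(freqs, mags):
        acc += m
        if acc >= pct * total:
            return f
    return freqs[-1]

def spectral_bandwidth(freqs, mags):
    """Magnitude-weighted standard deviation around the spectral centroid."""
    total = sum(mags)
    centroid = sum(f * m for f, m in zip(freqs, mags)) / total
    var = sum(m * (f - centroid) ** 2 for f, m in zip(freqs, mags)) / total
    return var ** 0.5

# Toy magnitude spectrum (frequencies in Hz, made-up magnitudes):
freqs = [100, 200, 300, 400, 500]
mags = [0.1, 0.5, 1.0, 0.3, 0.1]
print(spectral_rolloff(freqs, mags), round(spectral_bandwidth(freqs, mags), 1))
```

Crunchy foods concentrate energy at higher frequencies than soft ones, so per-frame rolloff and bandwidth trajectories give the recurrent models a discriminative temporal signature.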


Subjects
Deep Learning, Humans, Recognition, Psychology, Food, Mental Recall, Records
12.
Heliyon ; 10(5): e26416, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38468957

ABSTRACT

The emergence of the federated learning (FL) technique in fog-enabled healthcare systems has enhanced privacy in safeguarding sensitive patient information over heterogeneous computing platforms. In this paper, we introduce the FedHealthFog framework, developed to overcome the difficulties of distributed learning in resource-constrained IoT-enabled healthcare systems, particularly those sensitive to delay and energy efficiency. Conventional federated learning approaches face substantial compute requirements and significant communication costs, primarily because they rely on a single server for global aggregation, which results in inefficient training. We address these problems by elevating strategically placed fog nodes to the role of local aggregators within the federated learning architecture. A greedy heuristic optimizes the choice of a fog node as the global aggregator in each communication cycle between edge devices and the cloud. FedHealthFog reduces communication latency by 87.01%, 26.90%, and 71.74%, and energy consumption by 57.98%, 34.36%, and 35.37%, respectively, against the three benchmark algorithms analyzed in this study, while simultaneously reducing the number of global aggregation cycles. These findings highlight FedHealthFog's potential to transform federated learning in resource-constrained IoT environments for delay-sensitive applications.
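The abstract says a greedy heuristic picks a fog node as global aggregator each cycle but does not publish the cost model, so the weighted latency-plus-energy cost below is purely an assumed illustration of what "greedy per-round aggregator selection" could look like.

```python
# Illustrative per-round aggregator choice: minimize a combined
# delay + energy cost. Weights and fields are assumptions, not the
# FedHealthFog paper's actual heuristic.

def pick_aggregator(fog_nodes, w_delay=0.6, w_energy=0.4):
    def cost(node):
        return w_delay * node["latency_ms"] + w_energy * node["energy_mj"]
    return min(fog_nodes, key=cost)

fog_nodes = [
    {"id": "fog-a", "latency_ms": 40, "energy_mj": 120},
    {"id": "fog-b", "latency_ms": 25, "energy_mj": 150},
    {"id": "fog-c", "latency_ms": 60, "energy_mj": 80},
]
print(pick_aggregator(fog_nodes)["id"])  # fog-c has the lowest combined cost
```

Re-running the selection every communication cycle lets the aggregator role migrate as node load and link conditions change, which is what distinguishes this design from a fixed central server.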

13.
Sci Rep ; 14(1): 5753, 2024 03 08.
Article in English | MEDLINE | ID: mdl-38459096

ABSTRACT

Parasitic organisms pose a major global health threat, mainly in regions that lack advanced medical facilities. Early and accurate detection of parasitic organisms is vital to saving lives. Deep learning models have uplifted the medical sector by providing promising results in diagnosing, detecting, and classifying diseases. This paper explores the role of deep learning techniques in detecting and classifying various parasitic organisms. The work uses a dataset of 34,298 samples of parasites such as Toxoplasma gondii, Trypanosome, Plasmodium, Leishmania, Babesia, and Trichomonad, along with host cells such as red blood cells and white blood cells. The images are first converted from RGB to grayscale, followed by the computation of morphological features such as perimeter, height, area, and width. Otsu thresholding and watershed techniques are then applied to separate foreground from background and create markers on the images for identifying regions of interest. Deep transfer learning models such as VGG19, InceptionV3, ResNet50V2, ResNet152V2, EfficientNetB3, EfficientNetB0, MobileNetV2, Xception, DenseNet169, and a hybrid model, InceptionResNetV2, are employed, with parameters fine-tuned using three optimizers: SGD, RMSprop, and Adam. Experimental results reveal that with RMSprop, VGG19, InceptionV3, and EfficientNetB0 achieve the highest accuracy of 99.1% with a loss of 0.09. Similarly, with the SGD optimizer, InceptionV3 performs exceptionally well, achieving the highest accuracy of 99.91% with a loss of 0.98. Finally, with the Adam optimizer, InceptionResNetV2 excels, achieving the highest accuracy of 99.96% with a loss of 0.13, outperforming the other optimizers. The findings signify that deep learning models coupled with image processing methods provide a highly accurate and efficient way to detect and classify parasitic organisms.
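Otsu thresholding, named in the preprocessing step above, picks the gray-level threshold that maximizes between-class variance. This is the standard algorithm on a toy 8-bit "image" flattened to a pixel list (no image library needed for the sketch); the pixel values are made up.

```python
# Standard Otsu's method: choose the threshold maximizing
# between-class variance of foreground vs. background.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    sum_bg, w_bg, best_t, best_var = 0.0, 0, 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Two clearly separated intensity clusters (cells vs. background):
img = [10, 12, 11, 13] * 10 + [200, 210, 205, 198] * 10
t = otsu_threshold(img)
print(t)
assert 13 <= t < 198  # threshold lands between the two clusters
```

The resulting binary mask then seeds the watershed markers mentioned in the abstract, separating touching cells into distinct regions of interest.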


Subjects
Babesia, Deep Learning, Parasites, Toxoplasma, Animals, Microscopy
14.
Sci Rep ; 13(1): 20918, 2023 Nov 27.
Article in English | MEDLINE | ID: mdl-38017082

ABSTRACT

In this article, a low-complexity VLSI architecture based on a radix-4 hyperbolic COordinate Rotation DIgital Computer (CORDIC) is proposed to compute the [Formula: see text] root and [Formula: see text] power of a fixed-point number. The most recent techniques use the radix-2 CORDIC algorithm to compute the root and power, and the high computation latency of radix-2 CORDIC is the primary concern for designers. In the proposed architecture, the [Formula: see text] root and [Formula: see text] power computations are divided into three phases, each performed by a different class of the proposed modified radix-4 CORDIC algorithms. Although radix-4 CORDIC converges faster with fewer iterations, it demands more hardware resources and computational steps due to its intricate angle-selection logic and variable scale factor. We employ the modified radix-4 hyperbolic vectoring (R4HV) CORDIC to compute logarithms, radix-4 linear vectoring (R4LV) to perform division, and the modified scaling-free radix-4 hyperbolic rotation (R4HR) CORDIC to compute the exponential. The criterion for selecting the amount of rotation in R4HV CORDIC is complicated and depends on the coordinates [Formula: see text] and [Formula: see text] of the rotating vector. In the proposed modified R4HV CORDIC, we derive simple selection criteria based on the fact that the inputs to the R4HV CORDIC are related; the proposed criteria depend only on the coordinate [Formula: see text], which reduces the hardware complexity of the R4HV CORDIC. The R4HR CORDIC has a complex scale factor whose compensation requires complex hardware; its complexity is reduced by pre-computing the scale factor for the initial iterations and employing scaling-free rotations for the later iterations. Quantitative hardware analysis suggests better hardware utilization than recent approaches. The proposed architecture is implemented on a Virtex-6 FPGA, and the FPGA implementation demonstrates [Formula: see text] less hardware utilization with better error performance than the approach based on the radix-2 CORDIC algorithm.
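The three-phase decomposition described above (logarithm, then division, then exponential) rests on the identity x^(1/n) = exp(ln(x)/n). The float sketch below mirrors that flow with library math standing in for the three CORDIC hardware stages; it is a functional reference, not a model of the fixed-point pipeline.

```python
import math

# x^(1/n) computed via the same three-phase flow as the architecture:
# ln -> divide -> exp. Library math replaces the CORDIC stages here.

def nth_root(x, n):
    assert x > 0
    log_x = math.log(x)      # phase 1: hyperbolic vectoring (R4HV) computes ln(x)
    scaled = log_x / n       # phase 2: linear vectoring (R4LV) performs the division
    return math.exp(scaled)  # phase 3: hyperbolic rotation (R4HR) computes exp

print(round(nth_root(27, 3), 6))   # 3.0
print(round(nth_root(32, 5), 6))   # 2.0
```

Replacing the division by a multiplication in phase 2 yields x^n instead of the root, which is why one datapath serves both operations.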

15.
Sci Rep ; 13(1): 5372, 2023 Apr 01.
Article in English | MEDLINE | ID: mdl-37005398

ABSTRACT

The Industrial Internet of Things (IIoT) is attracting attention for the enormous opportunities it opens in Industry 4.0. However, severe challenges related to data privacy and security arise when processing automatic, practical data collection and monitoring over industrial applications in the IIoT. Traditional user authentication strategies in the IIoT rely on single-factor authentication, which adapts poorly as the user count and the variety of user categories grow. To address this issue, this paper implements a privacy-preservation model for the IIoT using advanced artificial intelligence techniques. The two major stages of the designed system are the sanitization and restoration of IIoT data. Data sanitization hides sensitive information in the IIoT to prevent information leakage. The sanitization procedure performs optimal key generation with a new Grasshopper-Black Hole Optimization (G-BHO) algorithm. A multi-objective function involving parameters such as the degree of modification, hiding rate, correlation coefficient between the actual and restored data, and information preservation rate is derived and utilized for generating the optimal key. The simulation results establish the dominance of the proposed model over other state-of-the-art models across various performance metrics. In terms of privacy preservation, the proposed G-BHO algorithm achieved results 1%, 15.2%, 12.6%, and 1% better than JA, GWO, GOA, and BHO, respectively.

16.
Sci Rep ; 13(1): 22204, 2023 Dec 14.
Article in English | MEDLINE | ID: mdl-38097756

ABSTRACT

The steady two-dimensional (2D) ternary nanofluid (TNF) flow across an inclined permeable cylinder/plate is analyzed in the present study. The TNF flow is examined under the effects of a heat source/sink, a permeable medium, and mixed convection. For the preparation of the TNF, magnesium oxide (MgO), cobalt ferrite (CoFe2O4), and titanium dioxide (TiO2) are dispersed in water. The rising need for highly efficient cooling mechanisms in several sectors and energy-related processes inspired the current work. The fluid flow and energy propagation are mathematically described as coupled PDEs, which are reduced to non-dimensional ODEs and handled numerically through the Matlab package bvp4c. The results show that the porosity factor raises the thermal curve but lowers the fluid velocity, while the heat source/sink raises the energy field. Furthermore, the plate surface shows greater energy transport than the cylinder geometry as the ternary nanoparticle (NP) loading varies. As the porosity factor varies from 0.3 to 0.9, the energy dissemination rate for the cylinder rises from 4.73% to 11.421%, whereas for the plate it rises from 6.37% to 13.91%.

17.
Sci Rep ; 13(1): 18475, 2023 10 27.
Article in English | MEDLINE | ID: mdl-37891188

ABSTRACT

Agriculture plays a pivotal role in the economies of developing countries by providing livelihoods, sustenance, and employment opportunities in rural areas. However, crop diseases pose a significant threat both to farmers' incomes and to food security, and can also adversely affect human health by causing various illnesses. To date, only a limited number of studies have been conducted to identify and classify diseased cauliflower plants, and they face challenges such as insufficient disease-surveillance mechanisms, the lack of comprehensive, properly labelled, high-quality datasets, and the considerable computational resources necessary for thorough analysis. In view of these challenges, the primary objective of this manuscript is to enhance the identification and detection of cauliflower disease in rural agriculture through advanced deep transfer learning techniques. The work covers four cauliflower classes taken from the VegNet dataset: Bacterial spot rot, Black rot, Downy Mildew, and No disease. Ten deep transfer learning models (EfficientNetB0, Xception, EfficientNetB1, MobileNetV2, EfficientNetB2, DenseNet201, EfficientNetB3, InceptionResNetV2, EfficientNetB4, and ResNet152V2) are trained and examined on the basis of root mean square error, recall, precision, F1-score, accuracy, and loss. Remarkably, EfficientNetB1 achieved the highest validation accuracy (99.90%), the lowest loss (0.16), and a root mean square error of 0.40 during experimentation. This research highlights the critical role of advanced CNN models in automating cauliflower disease detection and classification; such models can lead to robust applications for cauliflower disease management in agriculture, ultimately benefiting both farmers and consumers.


Subjects
Deep Learning, Drug-Related Side Effects and Adverse Reactions, Humans, Agriculture, Disease Management, Empirical Research
18.
Sci Rep ; 12(1): 7974, 2022 05 13.
Article in English | MEDLINE | ID: mdl-35562362

ABSTRACT

Software effort estimation is a significant part of software development and project management. The accuracy of effort estimation and scheduling results determines whether a project succeeds or fails. Many studies have focused on improving the accuracy of predicted results, yet accurate estimation of effort has proven to be a challenging task for researchers and practitioners, particularly when it comes to projects that use agile approaches. This work investigates the application of the adaptive neuro-fuzzy inference system (ANFIS) along with the novel Energy-Efficient BAT (EEBAT) technique for effort prediction in the Scrum environment. The proposed ANFIS-EEBAT approach is evaluated using real agile datasets. It provides the best results in all the evaluation criteria used. The proposed approach is also statistically validated using nonparametric tests, and it is found that ANFIS-EEBAT worked best as compared to various state-of-the-art meta-heuristic and machine learning (ML) algorithms such as fireworks, ant lion optimizer (ALO), bat, particle swarm optimization (PSO), and genetic algorithm (GA).


Subjects
Algorithms, Machine Learning, Physical Phenomena, Software
19.
Comput Intell Neurosci ; 2022: 7086632, 2022.
Article in English | MEDLINE | ID: mdl-35800676

ABSTRACT

To analyze the severity of a disease, it is vital to develop an appropriate prediction model linked carefully to measurable events such as clinical parameters and patient outcomes. Timely identification of retinal diseases is becoming more important to prevent blindness in young people and adults, and investigation of blood vessels delivers preliminary information on the existence and treatment of glaucoma, retinopathy, and related conditions. During the analysis of diabetic retinopathy, one of the essential steps is to extract the retinal blood vessels accurately. This study presents an improved Gabor filter through various enhancement approaches: degraded images with certain features enhanced can simplify image interpretation both for a human observer and for machine recognition. The enhancement approaches considered are gamma correction with adaptively distributed weight (GCADW), joint equalization of histogram (JEH), the homomorphic filter, the unsharp masking filter, the adaptive unsharp masking filter, and a particle swarm optimization (PSO)-based unsharp masking filter. This paper aims to improve the performance of the Gabor filter by combining it with these different enhancement methods to enhance the detection of blood vessels. The performance of all the suggested approaches is assessed on publicly available databases such as DRIVE and CHASE_DB1, and the results of all the integrated enhancement techniques are analyzed, discussed, and compared. The best result is delivered by the PSO-based unsharp masking filter combined with the Gabor filter, with an accuracy of 0.9593 on the DRIVE database and 0.9685 on CHASE_DB1. The results illustrate the robustness of the recommended model in automatic blood-vessel segmentation, making it a possible clinical decision-support tool in diabetic retinopathy diagnosis.
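Unsharp masking, one of the enhancement approaches listed above, sharpens by adding back a scaled difference between a signal and its blurred version. The 1-D sketch below shows the mechanism on a soft step edge (the 2-D image case applies the same idea per row/column); the box-blur kernel and `amount` value are illustrative choices, not the paper's PSO-tuned parameters.

```python
# Unsharp masking on a 1-D signal: out = s + amount * (s - blurred(s)).

def box_blur(signal, radius=1):
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(signal, amount=1.0, radius=1):
    blurred = box_blur(signal, radius)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

edge = [0, 0, 0, 10, 10, 10]        # a soft step edge
print([round(v, 2) for v in unsharp_mask(edge)])
# Overshoot/undershoot appears around the step, which is what sharpens it.
```

In the paper's pipeline the PSO step tunes such enhancement parameters before the Gabor filter runs, so thin vessels stand out more against the retinal background.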


Subjects
Diabetic Retinopathy, Algorithms, Databases, Factual, Humans, Image Enhancement/methods, Image Processing, Computer-Assisted/methods, Retinal Vessels
20.
Comput Intell Neurosci ; 2022: 3854635, 2022.
Article in English | MEDLINE | ID: mdl-35528334

ABSTRACT

Recent advances in imaging science and technology have brought hyperspectral imagery and remote sensing to the fore. Current intelligent techniques, such as support vector machines, sparse representations, active learning, extreme learning machines, transfer learning, and deep learning, are all based on machine learning, and they enrich the processing of such three-dimensional, multi-band, high-resolution images with precision and fidelity. This article presents an extensive survey of the contributions of machine-learning and deep-learning technologies to landcover classification based on hyperspectral images. The objective of this study is three-fold. First, after reading a large pool of Web of Science (WoS), Scopus, SCI, and SCIE-indexed and SCIE-related articles, we provide a novel, entirely systematic approach to the review that aids in identifying research gaps and developing embedded questions. Second, we emphasize contemporary advances in machine learning (ML) methods for identifying hyperspectral images, with a brief, organized overview and a thorough assessment of the literature involved. Finally, we draw conclusions to assist researchers in expanding their understanding of the relationship between machine learning and hyperspectral images for future research.


Subjects
Machine Learning, Support Vector Machine