Results 1 - 20 of 70
1.
J Med Internet Res ; 26: e47430, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38241075

ABSTRACT

BACKGROUND: Diabetes mellitus (DM) is a major health concern among children with the widespread adoption of advanced technologies. However, concerns are growing about the transparency, replicability, bias, and overall validity of artificial intelligence studies in medicine. OBJECTIVE: We aimed to systematically review the reporting quality of machine learning (ML) studies of pediatric DM using the Minimum Information About Clinical Artificial Intelligence Modelling (MI-CLAIM) checklist, a general reporting guideline for medical artificial intelligence studies. METHODS: We searched the PubMed and Web of Science databases from 2016 to 2020. Studies were included if the use of ML was reported in children with DM aged 2 to 18 years, including studies on complications, screening studies, and in silico samples. In studies following the ML workflow of training, validation, and testing of results, reporting quality was assessed via MI-CLAIM by consensus judgments of independent reviewer pairs. Positive answers to the 17 binary items regarding sufficient reporting were qualitatively summarized and counted as a proxy measure of reporting quality. The synthesis of results included testing the association of reporting quality with publication and data type, participants (human or in silico), research goals, level of code sharing, and the scientific field of publication (medical or engineering), as well as with expert judgments of clinical impact and reproducibility. RESULTS: After screening 1043 records, 28 studies were included. The sample size of the training cohort ranged from 5 to 561. Six studies featured only in silico patients. The reporting quality was low, with great variation among the 21 studies assessed using MI-CLAIM. The number of items with sufficient reporting ranged from 4 to 12 (mean 7.43, SD 2.62). The items on research questions and data characterization were reported adequately most often, whereas items on patient characteristics and model examination were reported adequately least often. The representativeness of the training and test cohorts to real-world settings and the adequacy of model performance evaluation were the most difficult to judge. Reporting quality improved over time (r=0.50; P=.02); it was higher than average in prognostic biomarker and risk factor studies (P=.04) and lower in noninvasive hypoglycemia detection studies (P=.006), higher in studies published in medical versus engineering journals (P=.004), and higher in studies sharing any code of the ML pipeline versus not sharing (P=.003). The association between expert judgments and MI-CLAIM ratings was not significant. CONCLUSIONS: The reporting quality of ML studies in the pediatric population with DM was generally low. Important details for clinicians, such as patient characteristics; comparison with the state-of-the-art solution; and model examination for valid, unbiased, and robust results, were often the weak points of reporting. To assess their clinical utility, the reporting standards of ML studies must evolve, and algorithms for this challenging population must become more transparent and replicable.


Subjects
Artificial Intelligence; Diabetes Mellitus; Humans; Child; Reproducibility of Results; Machine Learning; Diabetes Mellitus/diagnosis; Checklist
2.
Sensors (Basel) ; 22(12)2022 Jun 11.
Article in English | MEDLINE | ID: mdl-35746208

ABSTRACT

The convolutional neural network (CNN) has become a powerful machine learning (ML) tool used to solve complex problems such as image recognition, natural language processing, and video analysis. Notably, the idea of exploring CNN architectures has gained substantial attention and popularity. This study focuses on five CNN architectures: LeNet, AlexNet, VGG16, ResNet-50, and Inception-V1, which were scrutinized and compared with each other for the detection of lung cancer using the publicly available LUNA16 dataset. Furthermore, three performance optimizers: root mean square propagation (RMSProp), adaptive moment estimation (Adam), and stochastic gradient descent (SGD), were applied in this comparative study. The performance of each CNN architecture was measured in terms of accuracy, specificity, sensitivity, positive predictive value, false omission rate, negative predictive value, and F1 score. The experimental results showed that the AlexNet architecture with the SGD optimizer achieved the highest validation accuracy for CT lung cancer detection, with an accuracy of 97.42%, misclassification rate of 2.58%, 97.58% sensitivity, 97.25% specificity, 97.58% positive predictive value, 97.25% negative predictive value, false omission rate of 2.75%, and F1 score of 97.58%. AlexNet with the SGD optimizer performed best, outperforming the other state-of-the-art CNN architectures.
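To make the reported metrics concrete, the following sketch (invented labels, not the study's data) derives each of them from a binary confusion matrix:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical binary predictions (1 = cancer, 0 = benign); not the study's data.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1, 1])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)            # true positive rate (recall)
specificity = tn / (tn + fp)            # true negative rate
ppv         = tp / (tp + fp)            # positive predictive value (precision)
npv         = tn / (tn + fn)            # negative predictive value
forate      = fn / (fn + tn)            # false omission rate = 1 - NPV
f1          = 2 * ppv * sensitivity / (ppv + sensitivity)

print(f"accuracy={accuracy:.3f} sens={sensitivity:.3f} spec={specificity:.3f} "
      f"ppv={ppv:.3f} npv={npv:.3f} FOR={forate:.3f} F1={f1:.3f}")
```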


Subjects
Lung Neoplasms; Neural Networks, Computer; Humans; Lung Neoplasms/diagnosis; Machine Learning; Tomography, X-Ray Computed
3.
Sensors (Basel) ; 22(9)2022 May 04.
Article in English | MEDLINE | ID: mdl-35591194

ABSTRACT

Precipitation in any form, such as rain, snow, and hail, can affect day-to-day outdoor activities. Rainfall prediction is one of the challenging tasks in the weather forecasting process. Accurate rainfall prediction is now more difficult than before due to extreme climate variations. Machine learning techniques can predict rainfall by extracting hidden patterns from historical weather data. The selection of an appropriate classification technique for prediction is a difficult job. This research proposes a novel real-time rainfall prediction system for smart cities using a machine learning fusion technique. The proposed framework uses four widely used supervised machine learning techniques, i.e., decision tree, Naïve Bayes, K-nearest neighbors, and support vector machines. For effective prediction of rainfall, the technique of fuzzy logic is incorporated in the framework to integrate the predictive accuracies of the machine learning techniques, also known as fusion. For prediction, 12 years of historical weather data (2005 to 2017) for the city of Lahore are considered. Pre-processing tasks such as cleaning and normalization were performed on the dataset before the classification process. The results reflect that the proposed machine learning fusion-based framework outperforms the other models.
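The fusion idea can be sketched as follows; synthetic data stand in for the Lahore weather records, and a plain probability average stands in for the paper's fuzzy-logic fusion rule:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for historical weather features (rain / no-rain labels).
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_tr)          # normalization step from the abstract
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

models = [DecisionTreeClassifier(random_state=0), GaussianNB(),
          KNeighborsClassifier(), SVC(probability=True, random_state=0)]
probs = []
for m in models:
    m.fit(X_tr, y_tr)
    probs.append(m.predict_proba(X_te)[:, 1])

# Simple averaging stands in here for the paper's fuzzy-logic fusion rule.
fused = (np.mean(probs, axis=0) >= 0.5).astype(int)
print("fused accuracy:", accuracy_score(y_te, fused))
```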


Subjects
Fuzzy Logic; Machine Learning; Bayes Theorem; Cities; Support Vector Machine
4.
Sensors (Basel) ; 22(10)2022 May 18.
Article in English | MEDLINE | ID: mdl-35632242

ABSTRACT

Oral cancer is a dangerous and widespread disease with a high death rate and is among the most common cancers in the world, causing more than 300,335 deaths every year. The cancerous tumor appears in the neck, oral glands, face, and mouth. A common way to detect this cancer is a biopsy, in which small chunks of tissue are taken from the mouth and examined under a microscope in secure and hygienic conditions. However, microscopic examination of tissues is not always sufficient to detect oral cancer, since cancerous cells cannot easily be distinguished from normal cells. Detection of cancerous cells from microscopic biopsy images can help in predicting the disease and gives better results when such approaches are applied accurately, but manual examination of microscopic biopsy images leaves a large margin for human error. With the development of technology, deep learning algorithms therefore play a major role in medical image diagnosis and have been developed to predict breast cancer, oral cancer, lung cancer, and other diseases from medical images. In this study, a transfer learning model using AlexNet as the convolutional neural network was proposed to extract rank features from oral squamous cell carcinoma (OSCC) biopsy images and train the model. Simulation results showed that the proposed model achieved classification accuracies of 97.66% and 90.06% for training and testing, respectively.
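A minimal transfer-learning sketch in the spirit of this study follows (PyTorch/torchvision; the frozen layers, two-class head, learning rate, and random batch are assumptions, not the authors' configuration):

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained AlexNet and replace the final layer for a
# binary OSCC-vs-normal decision; these layer choices are assumptions.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                 # freeze the convolutional feature extractor
model.classifier[6] = nn.Linear(4096, 2)    # new head for 2 classes

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch shaped like RGB patches.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```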


Subjects
Carcinoma, Squamous Cell; Head and Neck Neoplasms; Mouth Neoplasms; Biopsy; Carcinoma, Squamous Cell/diagnosis; Humans; Image Processing, Computer-Assisted/methods; Machine Learning; Mouth Neoplasms/diagnosis; Squamous Cell Carcinoma of Head and Neck
5.
Sensors (Basel) ; 21(1)2021 Jan 02.
Article in English | MEDLINE | ID: mdl-33401741

ABSTRACT

In this paper, the multi-state synchronization of chaotic systems with non-identical, unknown, and time-varying delays in the presence of external perturbations and parametric uncertainties was studied. The presence of unknown delays, unknown bounds of disturbance and uncertainty, and changes in system parameters complicates the determination of the control function and the synchronization. In the proposed synchronization scheme, a robust adaptive control procedure based on the Lyapunov stability theorem drives the errors to zero, and updating rules are set to estimate the system parameters and delays. To investigate the performance of the proposed design, simulations were carried out on two Chen hyper-chaotic systems as the slaves and one Chua hyper-chaotic system as the master. Our results showed that the proposed controller outperformed state-of-the-art techniques in terms of the convergence speed of the synchronization, parameter estimation, and delay estimation processes. The parameters and time delays were estimated with good approximation. Finally, secure communication was realized with a chaotic masking method, and our results revealed the effectiveness of the proposed method in secure telecommunications.

6.
J Environ Manage ; 298: 113551, 2021 Nov 15.
Article in English | MEDLINE | ID: mdl-34435571

ABSTRACT

This study predicts current and future flood risk in the Kalvan watershed of northwestern Markazi Province, Iran. To do this, 512 flood and non-flood locations were identified and mapped. Twenty flood-risk factors were selected to model flood risk using several machine learning techniques: conditional inference random forest (CIRF), the gradient boosting model (GBM), extreme gradient boosting (XGB), and their ensemble. To investigate the future (year 2050) effects of changing climate and changing land use on flood risk, a general circulation model (GCM) under the representative concentration pathway (RCP) 2.6 and 8.5 scenarios was used to project 8 precipitation variables for 2050. In addition, a future land-use map for 2050 was prepared using a CA-Markov model. The performance of the flood risk models was validated with the Receiver Operating Characteristic-Area Under Curve (ROC-AUC) and other statistical analyses. The AUC values of the ROC curves indicate that the ensemble model had the highest predictive power (AUC = 0.83), followed by GBM (AUC = 0.80), XGB (AUC = 0.79), and CIRF (AUC = 0.78). The projected effects of climate and land-use change showed that the areas classified as having moderate to very high flood risk will increase by 2050; this increase appeared in the predictions from all four models. Under the RCP 2.6 scenario, the areal proportions of the risk zones in 2050 from the ensemble model changed from the current distribution as follows: Very Low = -12.04%, Low = -8.56%, Moderate = +1.56%, High = +11.55%, and Very High = +7.49%. The RCP 8.5 scenario caused the following changes from the present percentages: Very Low = -14.48%, Low = -6.35%, Moderate = +4.54%, High = +10.61%, and Very High = +5.67%. The results of current and future flood risk mapping can aid planners and flood hazard managers in their efforts to mitigate impacts.
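The model-comparison step can be sketched with scikit-learn; here synthetic data stand in for the 512 mapped locations, a random forest stands in for CIRF, and the ensemble rule (probability averaging) is an assumption:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic flood / non-flood points with 20 conditioning factors (stand-in data).
X, y = make_classification(n_samples=512, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = {
    "GBM": GradientBoostingClassifier(random_state=1),
    "RF (stand-in for CIRF)": RandomForestClassifier(random_state=1),
}
probs = {}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    probs[name] = m.predict_proba(X_te)[:, 1]
    print(name, "AUC =", round(roc_auc_score(y_te, probs[name]), 3))

# Ensemble: average the member probabilities; the paper's exact combination
# rule is an assumption here.
ens = np.mean(list(probs.values()), axis=0)
print("Ensemble AUC =", round(roc_auc_score(y_te, ens), 3))
```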


Subjects
Floods; Machine Learning; Climate; Forecasting; ROC Curve
7.
Molecules ; 26(1)2020 Dec 31.
Article in English | MEDLINE | ID: mdl-33396329

ABSTRACT

Accurate determination of the physicochemical characteristics of ionic liquids (ILs), especially viscosity, over widespread operating conditions plays a vital role in various fields. In this study, the viscosity of pure ILs is modeled using three approaches: (I) a simple group contribution method based on temperature, pressure, boiling temperature, acentric factor, molecular weight, critical temperature, critical pressure, and critical volume; (II) a model based on thermodynamic properties, pressure, and temperature; and (III) a model based on chemical structure, pressure, and temperature. Furthermore, Eyring's absolute rate theory is used to predict viscosity based on boiling temperature and temperature. To develop Model (I), a simple correlation was applied, while for Models (II) and (III), smart approaches such as multilayer perceptron networks optimized by a Levenberg-Marquardt algorithm (MLP-LMA) and Bayesian Regularization (MLP-BR), decision tree (DT), and least square support vector machine optimized by bat algorithm (BAT-LSSVM) were utilized to establish robust and accurate predictive paradigms. These approaches were implemented using a large database consisting of 2813 experimental viscosity points from 45 different ILs under an extensive range of pressure and temperature. Afterward, the four most accurate models were selected to construct a committee machine intelligent system (CMIS). The results of Eyring's theory for predicting the viscosity demonstrated that, although the theory is not precise, its simplicity is still beneficial. The proposed CMIS model provides the most precise responses, with an absolute average relative deviation (AARD) of less than 4% for predicting the viscosity of ILs based on Models (II) and (III). Lastly, the applicability domain of the CMIS model and the quality of the experimental data were assessed through the Leverage statistical method. It is concluded that intelligent predictive models are powerful alternatives to the time-consuming and expensive experimental measurement of IL viscosity.
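As a sketch of the ML side of this pipeline (synthetic data; scikit-learn has no Levenberg-Marquardt trainer, so 'lbfgs' stands in for MLP-LMA), the AARD metric used in the paper can be computed as follows:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 2813-point IL viscosity database (T, P, structure...).
X, y = make_regression(n_samples=2813, n_features=8, noise=5.0, random_state=0)
y = np.abs(y) + 1.0                                  # keep targets positive, like viscosity
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 'lbfgs' stands in for the paper's Levenberg-Marquardt training (MLP-LMA).
mlp = MLPRegressor(hidden_layer_sizes=(20, 20), solver="lbfgs",
                   max_iter=2000, random_state=0).fit(X_tr, y_tr)

# AARD: absolute average relative deviation, the paper's headline metric.
pred = mlp.predict(X_te)
aard = 100.0 * np.mean(np.abs((pred - y_te) / y_te))
print(f"AARD = {aard:.2f}%")
```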


Subjects
Algorithms; Artificial Intelligence; Bayes Theorem; Ionic Liquids/chemistry; Solvents/chemistry; Support Vector Machine; Temperature; Thermodynamics; Viscosity
8.
Entropy (Basel) ; 22(12)2020 Dec 16.
Article in English | MEDLINE | ID: mdl-33339406

ABSTRACT

The advancement of accurate models for predicting real estate prices is of utmost importance for urban development and several critical economic functions. Due to significant uncertainties and dynamic variables, real estate has been modeled as a complex system. In this study, a novel machine learning method is proposed to tackle the complexity of real estate modeling. Call detail records (CDR) provide excellent opportunities for in-depth investigation of mobility characterization. This study explores the potential of CDR for predicting real estate prices with the aid of artificial intelligence (AI). Several essential mobility entropy factors, including dweller entropy, dweller gyration, workers' entropy, worker gyration, dwellers' work distance, and workers' home distance, are used as input variables. The prediction model is developed using the machine learning method of the multi-layered perceptron (MLP) trained with the evolutionary algorithm of particle swarm optimization (PSO). Model performance is evaluated using the mean square error (MSE), sustainability index (SI), and Willmott's index (WI). The proposed model showed promising results, revealing that the workers' entropy and the dwellers' work distances directly influence the real estate price, whereas the dweller gyration, dweller entropy, workers' gyration, and workers' home distance had a minimal effect on the price. Furthermore, it is shown that the flow of activities and the entropy of mobility are often associated with regions with lower real estate prices.
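To make the mobility-entropy inputs concrete, here is a small sketch (toy CDR trace with invented tower IDs and coordinates) computing a Shannon location entropy and a radius of gyration, two of the factor types named above:

```python
import numpy as np
from collections import Counter

def location_entropy(visits):
    """Shannon entropy (bits) of a user's visited cell-tower sequence."""
    counts = np.array(list(Counter(visits).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def radius_of_gyration(coords):
    """RMS distance of visited points from their centroid (gyration)."""
    pts = np.asarray(coords, dtype=float)
    centroid = pts.mean(axis=0)
    return float(np.sqrt(((pts - centroid) ** 2).sum(axis=1).mean()))

# Toy CDR trace: tower IDs per call and tower coordinates (illustrative only).
visits = ["A", "A", "B", "A", "C", "B", "A"]
coords = [(0, 0), (0, 0), (1, 2), (0, 0), (3, 1), (1, 2), (0, 0)]
print("entropy:", round(location_entropy(visits), 3))
print("gyration:", round(radius_of_gyration(coords), 3))
```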

9.
Entropy (Basel) ; 22(11)2020 Oct 31.
Article in English | MEDLINE | ID: mdl-33287007

ABSTRACT

Predicting stock market (SM) trends is an issue of great interest among researchers, investors, and traders, since the successful prediction of SMs' direction may promise various benefits. Because of the fairly nonlinear nature of the historical data, accurate estimation of the SM direction is a rather challenging issue. The aim of this study is to present a novel machine learning (ML) model to forecast the movement of the Borsa Istanbul (BIST) 100 index. Modeling was performed by multilayer perceptron-genetic algorithm (MLP-GA) and multilayer perceptron-particle swarm optimization (MLP-PSO) in two scenarios, considering Tanh(x) and the default Gaussian function as the output function. The historical financial time series data utilized in this research span from 1996 to 2020 and consist of nine technical indicators. Results are assessed using Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), and correlation coefficient values to compare the accuracy and performance of the developed models. Based on the results, using Tanh(x) as the output function significantly improved the accuracy of the models compared with the default Gaussian function. MLP-PSO with population size 125, followed by MLP-GA with population size 50, provided the highest testing accuracy, with RMSEs of 0.732583 and 0.733063, MAPEs of 28.16% and 29.09%, and correlation coefficients of 0.694 and 0.695, respectively. According to the results, using the hybrid ML method could successfully improve the prediction accuracy.
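A minimal sketch of training an MLP's weights with particle swarm optimization follows; the network size, PSO coefficients, and synthetic data are assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for the nine BIST-100 technical indicators.
X = rng.normal(size=(200, 9))
y = np.tanh(X @ rng.normal(size=9))          # synthetic target in [-1, 1]

def mlp_forward(w, X, n_hidden=8):
    """Tiny 9-8-1 MLP with tanh output, weights unpacked from vector w."""
    d = X.shape[1]
    W1 = w[:d * n_hidden].reshape(d, n_hidden)
    b1 = w[d * n_hidden:d * n_hidden + n_hidden]
    W2 = w[d * n_hidden + n_hidden:-1]
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return np.tanh(h @ W2 + b2)              # Tanh output, as in the paper

def rmse(w):
    return np.sqrt(np.mean((mlp_forward(w, X) - y) ** 2))

# Plain global-best PSO over the flattened weight vector.
dim = 9 * 8 + 8 + 8 + 1
n_particles, iters = 30, 200
pos = rng.normal(size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([rmse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([rmse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best RMSE:", round(float(pbest_f.min()), 4))
```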

10.
Entropy (Basel) ; 22(11)2020 Oct 26.
Article in English | MEDLINE | ID: mdl-33286986

ABSTRACT

This paper presents an extensive and practical study of the estimation of stable channel bank shape and dimensions using the maximum entropy principle. The transverse slope (St) distribution of threshold channel bank cross-sections satisfies the properties of a probability space. The entropy of St is subject to two constraint conditions, and the principle of maximum entropy must be applied to find the least biased probability distribution. Accordingly, the Lagrange multiplier (λ), a critical parameter in the entropy equation, is calculated numerically based on the maximum entropy principle. The main goal of the present paper is a comprehensive investigation of the influence of the hydraulic parameters governing the mean transverse slope (S̄t) using Gene Expression Programming (GEP), given only the initial information (discharge (Q) and mean sediment size (d50)) of the problem at hand. An explicit and simple equation relating the S̄t of the banks to the geometric and hydraulic parameters of the flow is introduced based on GEP, in combination with the shape profile equation of previous researchers. On this basis, a reliable numerical hybrid model, the Entropy-based Design Model of Threshold Channels (EDMTC), is designed by combining entropy theory with the evolutionary GEP algorithm for estimating the bank profile shape and the dimensions of threshold channels. A wide range of laboratory and field data are utilized to verify the proposed EDMTC. The results demonstrate that the Shannon entropy model is accurate, with a lower average Mean Absolute Relative Error (MARE) of 0.317 in estimating the bank profile shape of threshold channels than a previous entropy-based model proposed by Cao and Knight (1997) (MARE = 0.98). Furthermore, the proposed EDMTC has acceptable accuracy in predicting the shape profile and, consequently, the dimensions of threshold channel banks over a wide range of laboratory and field data when only the channel hydraulic characteristics (e.g., Q and d50) are known. Thus, EDMTC can be used in threshold channel design and implementation applications when the channel characteristics are unknown. Furthermore, the uncertainty analysis of the EDMTC supports the model's high reliability, with a Width of Uncertainty Bound (WUB) of ±0.03 and a standard deviation (Sd) of 0.24.
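The maximum-entropy step in the abstract can be written out as a short derivation; the following LaTeX sketch reconstructs it from standard maximum-entropy theory (integration limits and symbols are assumptions):

```latex
% Maximum-entropy sketch for the transverse-slope distribution p(S_t)
% (notation reconstructed from the abstract; integration limits assumed).
\[
  \max_{p}\; H = -\int p(S_t)\,\ln p(S_t)\,\mathrm{d}S_t
  \quad \text{subject to} \quad
  \int p(S_t)\,\mathrm{d}S_t = 1, \qquad
  \int S_t\, p(S_t)\,\mathrm{d}S_t = \bar{S}_t .
\]
% Stationarity of the Lagrangian yields the least-biased exponential form
\[
  p(S_t) = \exp\!\left(-\lambda_0 - \lambda\, S_t\right),
\]
% where the multiplier \lambda is obtained numerically from the mean-slope
% constraint, as described in the abstract.
```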

11.
Environ Res ; 179(Pt A): 108770, 2019 12.
Article in English | MEDLINE | ID: mdl-31577962

ABSTRACT

Earth fissures are cracks on the surface of the earth, mainly formed in arid and semi-arid basins. The excessive withdrawal of groundwater, as well as of other underground natural resources, has been identified as a significant cause of land subsidence and, potentially, of earth fissuring. Fissuring is rapidly turning into a major national disaster responsible for significant economic, social, and environmental damage with devastating consequences. Modeling the earth fissure hazard is particularly important for identifying vulnerable groundwater areas, informing water management, and effectively enforcing groundwater recharge policies toward sustainable conservation plans that preserve existing groundwater resources. Modeling the formation of earth fissures, and ultimately predicting hazardous areas, has been greatly challenged by the complexity of the phenomenon and the multidisciplinary knowledge involved. This paper aims at proposing novel machine learning models for the prediction of earth fissuring hazards. The simulated annealing feature selection (SAFS) method was applied to identify key features, and the generalized linear model (GLM), multivariate adaptive regression splines (MARS), classification and regression tree (CART), random forest (RF), and support vector machine (SVM) were used for the first time to build the prediction models. Results indicated that all the models had good accuracy (>86%) and precision (>81%) in the prediction of the earth fissure hazard. The GLM model (as a linear model) had the lowest performance, while the RF model was the best model in the modeling process. Sensitivity analysis indicated that the hazardous class in the study area was mainly related to low elevations with characteristics of high groundwater withdrawal, drops in groundwater level, high well density, high road density, low precipitation, and Quaternary sediment distribution.
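As a sketch of simulated-annealing feature selection in the spirit of SAFS (the original schedule, move set, and wrapper classifier are not specified here, so these are assumptions), one can search over feature masks with a cross-validated random forest as the score:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Synthetic stand-in for the fissure dataset: 20 candidate conditioning factors.
X, y = make_classification(n_samples=400, n_features=20, n_informative=6,
                           random_state=42)

def score(mask):
    """Cross-validated accuracy of an RF on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=42)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Plain simulated annealing over boolean feature masks.
mask = rng.random(20) < 0.5
best_mask, best_f = mask.copy(), score(mask)
cur_f, temp = best_f, 1.0
for step in range(100):
    cand = mask.copy()
    cand[rng.integers(20)] ^= True            # flip one feature in/out
    f = score(cand)
    if f > cur_f or rng.random() < np.exp((f - cur_f) / temp):
        mask, cur_f = cand, f
        if f > best_f:
            best_mask, best_f = cand.copy(), f
    temp *= 0.95                              # geometric cooling

print("selected features:", np.flatnonzero(best_mask),
      "cv-accuracy:", round(best_f, 3))
```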


Subjects
Geological Phenomena; Groundwater; Proportional Hazards Models; Environmental Monitoring/methods; Machine Learning
12.
Sensors (Basel) ; 19(16)2019 Aug 15.
Article in English | MEDLINE | ID: mdl-31443244

ABSTRACT

Clients are increasingly looking for fast and effective means to quickly and frequently survey and communicate the condition of their buildings so that essential repairs and maintenance work can be done in a proactive and timely manner before it becomes too dangerous and expensive. Traditional methods for this type of work commonly involve engaging building surveyors to undertake a condition assessment, a lengthy site inspection that produces a systematic record of the physical condition of the building elements, including estimates of the immediate and projected long-term costs of renewal, repair, and maintenance of the building. Current asset condition assessment procedures are extensively time-consuming, laborious, and expensive, and pose health and safety threats to surveyors, particularly at height and at roof levels that are difficult to access. This paper evaluates the application of convolutional neural networks (CNN) to the automated detection and localisation of key building defects, e.g., mould, deterioration, and stain, from images. The proposed model is based on the pre-trained CNN classifier VGG-16 (later compared with ResNet-50 and Inception models), with class activation mapping (CAM) for object localisation. The challenges and limitations of the model in real-life applications have been identified. The proposed model has proven to be robust and able to accurately detect and localise building defects. The approach is being developed with the potential to scale up and further advance to support automated detection of defects and deterioration of buildings in real time using mobile devices and drones.
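A hedged sketch of the class-activation-mapping idea on a VGG-16 backbone follows; the 4-class defect head, the input size, and the random test image are all assumptions, so the map here only demonstrates the mechanics, not a trained detector:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

# Minimal CAM head on a VGG-16 backbone: global average pooling followed by
# a dense classifier, so each class has one weight per conv feature map.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
gap = layers.GlobalAveragePooling2D()(base.output)
out = layers.Dense(4, activation="softmax", name="defect_head")(gap)
model = Model(base.input, out)

# CAM for one (random) image: weight the last conv maps by the dense weights
# of the predicted class, then upsample to image size for localisation.
img = np.random.rand(1, 224, 224, 3).astype("float32")
conv_model = Model(base.input, base.output)
fmaps = conv_model.predict(img, verbose=0)[0]            # (7, 7, 512)
W = model.get_layer("defect_head").get_weights()[0]      # (512, 4)
cls = int(model.predict(img, verbose=0).argmax())
cam = fmaps @ W[:, cls]                                  # (7, 7) activation map
cam = tf.image.resize(cam[..., None], (224, 224)).numpy().squeeze()
print("CAM shape:", cam.shape, "predicted class:", cls)
```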

13.
Sensors (Basel) ; 19(21)2019 Nov 01.
Article in English | MEDLINE | ID: mdl-31683885

ABSTRACT

Despite the many conveniences of Radio Frequency Identification (RFID) systems, the underlying open architecture for communication between RFID devices may lead to various security threats. Recently, many solutions have been proposed to secure RFID systems, and many such systems are based only on lightweight primitives, including symmetric encryption, hash functions, and the exclusive OR operation. Many solutions based only on lightweight primitives have been proved insecure, whereas, due to the resource-constrained nature of RFID devices, public key-based cryptographic solutions are unsuitable for RFID systems. Very recently, Gope and Hwang proposed an authentication protocol for RFID systems based only on lightweight primitives and claimed their protocol can withstand all known attacks. However, as per the analysis in this article, their protocol is infeasible and is vulnerable to collision, denial-of-service (DoS), and stolen-verifier attacks. This article then presents an improved, realistic, and lightweight authentication protocol to ensure protection against known attacks. The security of the proposed protocol is formally analyzed using Burrows-Abadi-Needham (BAN) logic and under the attack model of the automated security verification tool ProVerif. Moreover, the security features are also analyzed informally. The proposed protocol outperforms the competing protocols in terms of security.

14.
J Air Waste Manag Assoc ; 73(1): 40-49, 2023 01.
Article in English | MEDLINE | ID: mdl-35905292

ABSTRACT

Due to the high consumption of medium-density fiberboard (MDF), waste products of this material are growing worldwide. In this research, the feasibility of using medium-density fiberboard waste ash (MDFWA) as a partial replacement for cement in concrete was investigated. For this purpose, 0, 5, 10, 15, 20, and 25% of the cement in concrete was substituted with MDFWA. For all design mixes, the water/binder ratio and the volume of aggregates were the same. Slump, compressive strength, SEM, EDX, TGA, DSC, and FTIR tests were conducted on the samples. At 28 days, the results demonstrated that the compressive strength of the sample containing 20% MDFWA increased by 13.6% compared to the control sample. Furthermore, the microstructure of the concrete shows that the voids of the sample containing 20% MDFWA were reduced compared to the control sample and that more calcium silicate hydrate (C-S-H) crystals were formed. Implications: The significance of the present paper is to solve the environmental issue caused by the large amount of medium-density fiberboard waste ash (MDFWA) and also to produce sustainable concrete. In addition, the replacement of cement with MDFWA increases the compressive strength and enhances the microstructure of the concrete due to the extra C-S-H products. Therefore, the findings confirm that by using 20% MDFWA, a denser, more eco-friendly, sustainable, economical, and stronger concrete can be achieved.


Subjects
Construction Materials; Silicates; Waste Products; Calcium Compounds
15.
ACS Omega ; 7(14): 11578-11586, 2022 Apr 12.
Article in English | MEDLINE | ID: mdl-35449927

ABSTRACT

Identifying the number of oil families in petroleum basins provides practical and valuable information in petroleum geochemistry studies from exploration to development. Oil family grouping helps us track migration pathways, identify the number of active source rock(s), and examine reservoir continuity. To date, in almost all oil family typing studies, common statistical methods such as principal component analysis (PCA) and hierarchical clustering analysis (HCA) have been used. However, there is no publication regarding the use of artificial neural networks (ANNs) for examining oil families in petroleum basins. Hence, oil family typing calls for novel techniques beyond these common and overused methods. This paper is the first report of oil family typing using ANNs as robust computational methods. To this end, a self-organizing map (SOM) neural network associated with three clustering validity indexes was employed on oil samples belonging to the Iranian part of the Persian Gulf oilfields. For the SOM network, 10 default clusters were selected at first. Afterward, three effective clustering validity coefficients, namely Calinski-Harabasz (CH), Silhouette (SH), and Davies-Bouldin (DB), were studied to find the optimum number of clusters. Accordingly, among the 10 default clusters, the maximum CH (62) and SH (0.58) were acquired for 4 clusters. Similarly, the lowest DB (0.8) was obtained for four clusters. Thus, all three validation coefficients identified four clusters as the optimum number of clusters or oil families. According to the geochemical parameters, it can be deduced that the corresponding source rocks of the four oil families were deposited in a marine carbonate depositional environment under dysoxic-anoxic conditions. However, the oil families show some differences based on the geochemical data. The number of oil families identified in the present report is consistent with those previously reported by other researchers in the same study area. However, the techniques used in the present paper, which have not been implemented so far, can be introduced as a more straightforward approach to clustering for oil family typing than the common and overused methods of PCA and HCA.
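The cluster-validity workflow described above is easy to reproduce with scikit-learn; in this sketch KMeans stands in for the SOM and the data are synthetic, but the CH/SH/DB scan over candidate cluster counts is the same idea:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import (calinski_harabasz_score, silhouette_score,
                             davies_bouldin_score)

# Synthetic geochemical feature matrix; KMeans stands in for the SOM here.
X, _ = make_blobs(n_samples=120, centers=4, n_features=6, random_state=7)

for k in range(2, 11):                       # scan up to the 10 default clusters
    labels = KMeans(n_clusters=k, n_init=10, random_state=7).fit_predict(X)
    print(f"k={k:2d}  CH={calinski_harabasz_score(X, labels):7.1f}  "
          f"SH={silhouette_score(X, labels):.2f}  "
          f"DB={davies_bouldin_score(X, labels):.2f}")
# Pick k maximizing CH and SH while minimizing DB (four clusters in the study).
```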

16.
Math Biosci Eng ; 19(10): 9749-9768, 2022 07 06.
Article in English | MEDLINE | ID: mdl-36031966

ABSTRACT

The main aim of the study is to investigate the growth of oyster mushrooms (OM) in two substrates, namely waste paper and wheat straw. The study then moves toward modeling and optimization of the production yield, considering the energy consumption, water consumption, total income, and environmental impacts as the dependent variables. Accordingly, a life cycle assessment (LCA) platform was developed to obtain the environmental impacts of the studied scenarios. In the next step, an ANN-based model was developed for the prediction of the dependent variables. Finally, optimization was performed using response surface methodology (RSM) by fitting quadratic equations to generate the required factors. According to the results, the optimum condition for the production of OM from waste paper is found at a paper portion of about 20% and a wheat straw portion of about 80%, with a production yield of about 4.5 kg and a higher net income of $16.54, together with lower energy and water consumption of about 361.5 kWh and 29.53 kg, respectively. The optimum condition delivers lower environmental impacts on Human Health, Ecosystem Quality, Climate Change, and Resources of about 5.64 DALY, 8.18 PDF*m2*yr, 89.77 g CO2 eq, and 1707.05 kJ, respectively. It can be concluded that sustainable production of OM can be achieved in line with the policy of producing alternative food sources from waste management techniques.


Subjects
Pleurotus; Ecosystem; Environment; Humans; Neural Networks, Computer
17.
Comput Intell Neurosci ; 2022: 5054641, 2022.
Article in English | MEDLINE | ID: mdl-36268157

ABSTRACT

With the emergence of the Internet of Things (IoT), the investigation of different diseases in healthcare has improved, and cloud computing has helped to centralize data and to access patient records throughout the world. In this context, the electrocardiogram (ECG) is used to diagnose heart diseases and abnormalities. Machine learning techniques have been used previously, but they are feature-based and not as accurate as transfer learning; this study therefore proposes the development and validation of an embedded device for ECG arrhythmia detection using a transfer learning (DVEEA-TL) model. The model is a combination of hardware, software, and two datasets that are augmented and fused, and it achieves higher accuracy than previous work and research. In the proposed model, a new dataset is created by combining a Kaggle dataset with a second dataset built from real-time healthy and unhealthy recordings; the AlexNet transfer learning approach is then applied to obtain more accurate readings of the ECG signals. In this research, the DVEEA-TL model diagnoses heart abnormalities with accuracies of 99.9% and 99.8% during the training and validation stages, respectively, making it the most reliable approach compared to previous research in this field.


Subjects
Arrhythmias, Cardiac; Electrocardiography; Humans; Electrocardiography/methods; Arrhythmias, Cardiac/diagnosis; Cloud Computing; Machine Learning; Software
18.
Comput Biol Med ; 150: 106019, 2022 11.
Article in English | MEDLINE | ID: mdl-36162198

ABSTRACT

In recent years, the global Internet of Medical Things (IoMT) industry has evolved at a tremendous speed. Security and privacy are key concerns in the IoMT, owing to the huge scale and deployment of IoMT networks. Machine learning (ML) and blockchain (BC) technologies have significantly enhanced the capabilities and facilities of healthcare 5.0, spawning a new area known as "Smart Healthcare." By identifying concerns early, a smart healthcare system can help avoid long-term damage. This will enhance the quality of life of patients while reducing their stress and healthcare costs. The IoMT enables a range of functionalities in the field of information technology, one of which is smart and interactive health care. However, combining medical data into a single storage location to train a powerful machine learning model raises serious concerns about privacy, ownership, and compliance. Federated learning (FL) overcomes these difficulties by utilizing a centralized aggregation server to disseminate a global learning model, while each local participant keeps control of its patient information, assuring data confidentiality and security. This article conducts a comprehensive analysis of the findings on blockchain technology entangled with federated learning in healthcare 5.0. The purpose of this study is to construct a secure health monitoring system for healthcare 5.0 by utilizing blockchain technology and an Intrusion Detection System (IDS) to detect any malicious activity in a healthcare network and to enable physicians to monitor patients through medical sensors and take necessary measures periodically by predicting diseases. The proposed system demonstrates that the approach is optimized effectively for healthcare monitoring: the proposed healthcare 5.0 system entangled with the FL approach achieves 93.22% accuracy for disease prediction, and the proposed RTS-DELM-based secure healthcare 5.0 system achieves 96.18% accuracy for intrusion detection.
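Since the abstract hinges on federated learning's aggregation step, here is a minimal FedAvg sketch (invented toy data and a linear model; the paper's RTS-DELM model is far richer) showing how a server averages client updates without ever seeing raw records:

```python
import numpy as np

rng = np.random.default_rng(3)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """Each client trains on its own private data only."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient for a linear model
        w -= lr * grad
    return w

# Three clients (e.g., hospitals) with private synthetic datasets of different sizes.
w_true = np.array([1.0, -2.0, 0.5])
clients = []
for n in (40, 60, 100):
    X = rng.normal(size=(n, 3))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=n)))

w_global = np.zeros(3)
for rnd in range(10):                            # communication rounds
    updates = [local_sgd(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # FedAvg: weight each client's model by its local sample count.
    w_global = np.average(updates, axis=0, weights=sizes)

print("recovered weights:", np.round(w_global, 2))
```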


Subjects
Blockchain; Humans; Quality of Life; Technology; Health Facilities; Delivery of Health Care
19.
Comput Biol Med ; 149: 105975, 2022 10.
Article in English | MEDLINE | ID: mdl-36057197

ABSTRACT

In this study, a novel approach is proposed for glucose regulation in type-I diabetes patients. Unlike most studies, the glucose-insulin metabolism is considered to be uncertain. A new approach based on the Immersion and Invariance (I&I) theorem is presented to derive the adaptation rules for the unknown parameters. Also, a new deep-learned type-II fuzzy logic system (T2FLS) is proposed to compensate for the estimation errors and guarantee stability. The suggested T2FLS is tuned by the singular value decomposition (SVD) method and by adaptive tuning rules that are extracted from the stability investigation. To evaluate the performance, the modified Bergman model (BM) is applied. Besides the dynamic uncertainties, the meal effect on the glucose level is also considered. The meal effect is defined as the effect of edibles, which, similar to patient activities, can have a major impact on the glucose level. Furthermore, to assess the effect of informal patient activities and of other illnesses, a strong random perturbation is applied to the glucose-insulin dynamics. The effectiveness of the suggested approach is demonstrated by comparing the simulation results with those of some other methods. Simulations show that the glucose level is well regulated by the suggested method after a short time. Examination of patients with various diabetic conditions shows that the suggested approach is effective, and the glucose level of the patients lies in the desired range more than 99% of the time.


Subjects
Deep Learning; Diabetes Mellitus, Type 1; Algorithms; Blood Glucose/metabolism; Computer Simulation; Fuzzy Logic; Humans; Immersion; Insulin
20.
Front Oncol ; 12: 834028, 2022.
Article in English | MEDLINE | ID: mdl-35769710

ABSTRACT

Breast cancer is the most menacing cancer among all types of cancer in women around the globe. Early diagnosis is the only way to increase treatment options, which then decreases the death rate and increases the chance of survival in patients. However, it is a challenging task to differentiate abnormal breast tissues from normal tissues because of their structure and unclear boundaries. Therefore, the early and accurate diagnosis and classification of breast lesions into malignant or benign lesions is an active domain of research. Over the past decade, numerous artificial neural network (ANN)-based techniques have been adopted to diagnose and classify breast cancer, owing to their unique ability to learn key features from complex data via a training process. However, these schemes have limitations such as slow convergence and long training times. To address the above-mentioned issues, this paper employs a meta-heuristic algorithm for tuning the parameters of the neural network. The main novelty of this work is a computer-aided diagnosis scheme for detecting abnormalities in breast ultrasound images by integrating a wavelet neural network (WNN) and the grey wolf optimization (GWO) algorithm. Here, breast ultrasound (US) images are preprocessed with a sigmoid filter followed by interference-based despeckling and then by anisotropic diffusion. An automatic segmentation algorithm is adopted to extract the region of interest, and subsequently morphological and texture features are computed. Finally, the GWO-tuned WNN is exploited to accomplish the classification task. The classification performance of the proposed scheme is validated on 346 ultrasound images. The efficiency of the proposed methodology is evaluated by computing the confusion matrix and the receiver operating characteristic (ROC) curve. Numerical analysis revealed that the proposed work yields higher classification accuracy than the prevailing methods, thereby proving its potential for effective breast tumor detection and classification. The proposed GWO-WNN method (98%) gives better accuracy than other methods such as SOM-SVM (87.5%), LOFA-SVM (93.62%), MBA-RF (96.85%), and BAS-BPNN (96.3%).
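The grey wolf optimization step can be illustrated with a minimal, generic implementation; this sketch minimizes a stand-in objective rather than the paper's WNN loss, and all hyperparameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def gwo(objective, dim, n_wolves=20, iters=200, lb=-5.0, ub=5.0):
    """Bare-bones grey wolf optimizer: the pack moves toward the three best
    wolves (alpha, beta, delta) with an exploration factor 'a' decaying from
    2 to 0. Here it tunes a generic parameter vector; the paper applies the
    same idea to wavelet-neural-network parameters."""
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fitness = np.array([objective(x) for x in X])
        alpha, beta, delta = X[np.argsort(fitness)[:3]]
        a = 2.0 * (1 - t / iters)                       # linearly decreasing
        for i in range(n_wolves):
            Xnew = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                Xnew += leader - A * np.abs(C * leader - X[i])
            X[i] = np.clip(Xnew / 3.0, lb, ub)          # average of the 3 pulls
    fitness = np.array([objective(x) for x in X])
    return X[fitness.argmin()], fitness.min()

# Demo objective (sphere function) standing in for the WNN classification loss.
best_x, best_f = gwo(lambda x: float(np.sum(x ** 2)), dim=5)
print("best fitness:", round(best_f, 6))
```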
