Results 1 - 20 of 65
1.
Clin Chem Lab Med ; 62(3): 436-441, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-37782817

ABSTRACT

OBJECTIVES: To create a supervised machine learning algorithm aimed at predicting an optimal cerebrospinal fluid (CSF) dilution when determining virus-specific antibody indices, to reduce the need for repeated tests. METHODS: The CatBoost model was trained, optimized, and tested on a dataset with five input variables: albumin quotient, immunoglobulin G (IgG) in CSF, IgG quotient (QIgG), intrathecal synthesis (ITS) and limes quotient (LIM IgG). Albumin and IgG concentrations in CSF and serum were measured by immunonephelometry on an Atellica NEPH 630 (Siemens Healthineers, Erlangen, Germany), and ITS and LIM IgG were calculated according to Reiber. Concentrations of IgG antibodies to measles, rubella, varicella zoster and herpes simplex 1/2 viruses were analysed in CSF and serum by ELISA (Euroimmun, Lübeck, Germany). The optimal CSF dilution was defined for each virus and used as the classification variable, while the standard operating procedure was set to start at a 2× dilution of CSF. RESULTS: The dataset included 571 samples with an imbalanced distribution of optimal CSF dilutions: 2× dilution n=440, 3× dilution n=109, 4× dilution n=22. The optimized CatBoost model achieved an area under the curve (AUC) score of 0.971 and a test accuracy of 0.900. The model misclassified 14 (9.9%) samples of the testing set but reduced the need for repeated testing by 42% compared to the standard protocol. The output of the CatBoost model depends mostly on the QIgG, ITS and CSF IgG variables. CONCLUSIONS: An accurate algorithm was achieved for predicting the optimal CSF dilution, which reduces the number of test repeats.


Subjects
Multiple Sclerosis, Rubella (German Measles), Humans, Immunoglobulin G, Enzyme-Linked Immunosorbent Assay, Machine Learning, Albumins, Antibodies, Viral, Cerebrospinal Fluid, Multiple Sclerosis/cerebrospinal fluid
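The classification setup in the abstract above can be sketched with a gradient-boosted classifier. CatBoost itself may not be installed here, so scikit-learn's GradientBoostingClassifier stands in, and the synthetic features and class balance (440/109/22 across the 2×/3×/4× dilutions) only mimic the reported dataset:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 571  # dataset size reported in the abstract
# Five synthetic stand-ins for the real inputs: albumin quotient, CSF IgG,
# IgG quotient (QIgG), intrathecal synthesis (ITS), limes quotient (LIM IgG)
X = rng.normal(size=(n, 5))
# Imbalanced target mimicking the reported 2x/3x/4x dilution counts
y = rng.choice([2, 3, 4], size=n, p=[440 / 571, 109 / 571, 22 / 571])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
acc = (pred == y_te).mean()
```

With real laboratory features instead of noise, the same pipeline would reproduce the study's workflow up to hyperparameter tuning.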
2.
Arch Pharm (Weinheim) ; 357(10): e2400486, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38996352

ABSTRACT

AlphaFold is an artificial intelligence approach for predicting the three-dimensional (3D) structures of proteins with atomic accuracy. One challenge that limits the use of AlphaFold models for drug discovery is that folding is predicted in the absence of ligands and cofactors, which compromises the models' direct use. We have previously described the optimization and use of the histone deacetylase 11 (HDAC11) AlphaFold model for the docking of selective inhibitors such as FT895 and SIS17. Based on the predicted binding mode of FT895 in the optimized HDAC11 AlphaFold model, a new scaffold for HDAC11 inhibitors was designed, and the resulting compounds were tested in vitro against various HDAC isoforms. Compound 5a proved to be the most active, with an IC50 of 365 nM, and selectively inhibited HDAC11. Furthermore, docking showed that 5a adopts a binding mode comparable to that of FT895 but cannot adopt any reasonable pose in the other HDAC isoforms. We further supported the docking results with molecular dynamics simulations that confirmed the predicted binding mode. Compound 5a also showed promising activity on neuroblastoma cells, with an EC50 of 3.6 µM.


Subjects
Antineoplastic Agents, Drug Design, Histone Deacetylase Inhibitors, Histone Deacetylases, Molecular Docking Simulation, Neuroblastoma, Histone Deacetylases/metabolism, Histone Deacetylase Inhibitors/pharmacology, Histone Deacetylase Inhibitors/chemistry, Histone Deacetylase Inhibitors/chemical synthesis, Humans, Neuroblastoma/drug therapy, Neuroblastoma/pathology, Structure-Activity Relationship, Antineoplastic Agents/pharmacology, Antineoplastic Agents/chemistry, Antineoplastic Agents/chemical synthesis, Cell Line, Tumor, Molecular Dynamics Simulation, Molecular Structure, Dose-Response Relationship, Drug, Artificial Intelligence
3.
Sensors (Basel) ; 23(3)2023 Jan 22.
Article in English | MEDLINE | ID: mdl-36772319

ABSTRACT

Artificial Intelligence (AI) models are being produced and used to solve a variety of current and future business and technical problems. Therefore, AI model engineering processes, platforms, and products are acquiring special significance across industry verticals. To achieve deeper automation, the number of data features used while generating highly promising and productive AI models is large, and hence the resulting AI models are bulky. Such heavyweight models consume a lot of computation, storage, networking, and energy resources. On the other hand, AI models are increasingly being deployed in IoT devices to ensure real-time knowledge discovery and dissemination. Real-time insights are of paramount importance in producing and releasing real-time, intelligent services and applications. Thus, edge intelligence through on-device data processing has laid a stimulating foundation for real-time intelligent enterprises and environments. With these emerging requirements, the focus has turned towards unearthing competent and cognitive techniques for maximally compressing huge AI models without sacrificing model performance, and AI researchers have come up with a number of powerful optimization techniques and tools. This paper digs deep into and describes all kinds of model optimization at different levels and layers, and, having surveyed the optimization methods, highlights the importance of an enabling AI model optimization framework.

4.
Int J Mol Sci ; 24(9)2023 Apr 26.
Article in English | MEDLINE | ID: mdl-37175594

ABSTRACT

As one of the most important post-transcriptional modifications, m6Am plays a fairly important role in conferring mRNA stability and in the progression of cancers. The accurate identification of m6Am sites is critical for explaining their biological significance and developing applications in the medical field. However, conventional experimental approaches are time-consuming and expensive, making them unsuitable for the large-scale identification of m6Am sites. To address this challenge, we propose a CatBoost-based method, m6Aminer, to identify m6Am sites on mRNA. For feature extraction, nine different feature-encoding schemes (pseudo electron-ion interaction potential, hash decimal conversion method, dinucleotide binary encoding, nucleotide chemical properties, pseudo k-tuple composition, dinucleotide numerical mapping, K monomeric units, series correlation pseudo trinucleotide composition, and K-spaced nucleotide pair frequency) were utilized to form the initial feature space. To obtain the optimized feature subset, the ExtraTreesClassifier algorithm was adopted to perform feature importance ranking, and the top 300 features were selected as the optimal feature subset. Under 10-fold cross-validation and an independent test, m6Aminer achieved average AUCs of 0.913 and 0.754, respectively, demonstrating performance competitive with the state-of-the-art models m6AmPred (0.905 and 0.735) and DLm6Am (0.897 and 0.730). The prediction model developed in this study can be used to identify m6Am sites in the whole transcriptome, laying a foundation for functional research on m6Am.


Subjects
Algorithms, Nucleotides, RNA, Messenger/genetics, Transcriptome, Computational Biology
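The ExtraTreesClassifier-based importance ranking described above (keeping the top 300 features of a much larger encoded space) can be sketched as follows; the synthetic data and the choice of keeping 20 of 50 features are illustrative stand-ins:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier

# Synthetic stand-in for the encoded m6Am feature space
X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           random_state=0)

# Rank features by impurity-based importance, then keep the top k
ranker = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)
order = np.argsort(ranker.feature_importances_)[::-1]   # most important first
top_k = order[:20]
X_sel = X[:, top_k]
```

The reduced matrix `X_sel` would then be fed to the downstream classifier (CatBoost in the paper).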
5.
Entropy (Basel) ; 25(3)2023 Mar 02.
Article in English | MEDLINE | ID: mdl-36981331

ABSTRACT

Fault diagnosis of complex equipment has become a hot field in recent years. Owing to its excellent uncertainty processing capability and small-sample modeling capability, the belief rule base (BRB) has been widely used in fault diagnosis. However, previous BRB models rarely considered the diverse distributions of observation data, which may reduce diagnostic accuracy. In this paper, a new fault diagnosis model based on BRB is proposed. Since the previous triangular membership function cannot address the diverse distributions of observation data, a new nonlinear membership function is proposed to transform the input information. Then, since the model parameters initially determined by experts are inaccurate, a new parameter optimization model incorporating the parameters of the nonlinear membership function is proposed, driven by the gradient descent method so as to prevent the expert knowledge from being destroyed. A fault diagnosis case of a laser gyro is used to verify the validity of the proposed model. In the case study, the diagnostic accuracy of the new BRB-based fault diagnosis model reached 95.56%, showing better fault diagnosis performance than other methods.
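The membership-function idea above can be illustrated by contrasting the classic triangular input transformation used in BRB with a smooth nonlinear alternative; the Gaussian form below is an assumption for illustration, not the paper's exact function:

```python
import numpy as np

def triangular_membership(x, refs):
    # Classic BRB input transformation: linear interpolation between
    # adjacent reference points; the degrees sum to 1.
    m = np.zeros(len(refs))
    for i in range(len(refs) - 1):
        lo, hi = refs[i], refs[i + 1]
        if lo <= x <= hi:
            m[i + 1] = (x - lo) / (hi - lo)
            m[i] = 1 - m[i + 1]
    return m

def gaussian_membership(x, refs, sigma=0.5):
    # A smooth nonlinear alternative (illustrative only): degrees follow a
    # normalized Gaussian kernel around each reference point.
    m = np.exp(-((x - np.asarray(refs)) ** 2) / (2 * sigma ** 2))
    return m / m.sum()

refs = [0.0, 1.0, 2.0]
tri = triangular_membership(1.5, refs)
gau = gaussian_membership(1.5, refs)
```

Both transforms produce a normalized belief distribution over the reference points; the nonlinear one can be reshaped (via sigma) to match skewed observation-data distributions.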

6.
Environ Res ; 208: 112759, 2022 05 15.
Article in English | MEDLINE | ID: mdl-35077716

ABSTRACT

PM2.5 pollution endangers human health and sustainable urban development. Land use regression (LUR) is one of the most important methods for revealing the temporal and spatial heterogeneity of PM2.5, and the introduction of characteristic variables of geographical factors and the improvement of model construction methods are important research directions for its optimization. However, the complex non-linear correlations between PM2.5 and influencing indicators often go unrecognized by traditional regression models, and the two-dimensional landscape pattern index struggles to reflect real surface information, so the research accuracy cannot meet requirements. As such, a novel LUR model (LTX) integrating a three-dimensional landscape pattern index (TDLPI) and machine learning extreme gradient boosting (XGBOOST) is developed to estimate the spatiotemporal heterogeneity of fine particle concentrations in Shaanxi, China, and the health risks of exposure to and inhalation of PM2.5 were explored. The LTX model performed well, with R2 = 0.88, an RMSE of 8.73 µg/m3, and an MAE of 5.85 µg/m3. Our findings suggest that integrated three-dimensional landscape pattern information and XGBOOST approaches can accurately estimate annual and seasonal variations of PM2.5 pollution. The Guanzhong Plain and northern Shaanxi always feature high PM2.5 values, which exhibit distribution trends similar to those of the observed PM2.5 pollution. This study demonstrates the outstanding performance of the LTX model, which outperforms most models in past research. On the whole, the LTX approach is reliable and can improve the accuracy of pollutant concentration prediction. The health risks of human exposure to fine particles are relatively high in winter; the central part of the region is a high-risk area, while the northern area is low-risk.
Our study provides a new method for assessing atmospheric pollutants, which is important for LUR model optimization, high-precision PM2.5 pollution prediction, and landscape pattern planning. These results can also contribute to studies of human health exposure risks and future epidemiological studies of air pollution.


Subjects
Air Pollutants, Air Pollution, Air Pollutants/analysis, Air Pollution/analysis, China, Environmental Monitoring/methods, Humans, Machine Learning, Particulate Matter/analysis
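A minimal sketch of the boosted-regression core of an LTX-style model, reporting the three error metrics used above (R2, RMSE, MAE). XGBoost may not be installed here, so scikit-learn's GradientBoostingRegressor stands in, and the predictors are synthetic placeholders for the land-use and 3-D landscape variables:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 400
# Hypothetical predictors standing in for land-use shares, elevation,
# and three-dimensional landscape pattern indices
X = rng.normal(size=(n, 6))
y = 40 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=3, size=n)  # synthetic PM2.5

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = GradientBoostingRegressor(random_state=1).fit(X_tr, y_tr)
pred = model.predict(X_te)
r2 = r2_score(y_te, pred)
rmse = mean_squared_error(y_te, pred) ** 0.5
mae = mean_absolute_error(y_te, pred)
```

Swapping in `xgboost.XGBRegressor` and real monitoring-station data would recover the study's evaluation loop.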
7.
Sensors (Basel) ; 22(9)2022 May 05.
Article in English | MEDLINE | ID: mdl-35591208

ABSTRACT

COVID-19 has caused millions of infections and deaths over the last 2 years. Machine learning models have been proposed as an alternative to conventional epidemiologic models in an effort to optimize short- and medium-term forecasts that will help health authorities to optimize the use of policies and resources to tackle the spread of the SARS-CoV-2 virus. Although previous machine learning models based on time pattern analysis for COVID-19 sensed data have shown promising results, the spread of the virus has both spatial and temporal components. This manuscript proposes a new deep learning model that combines a time pattern extraction based on the use of a Long-Short Term Memory (LSTM) Recurrent Neural Network (RNN) over a preceding spatial analysis based on a Convolutional Neural Network (CNN) applied to a sequence of COVID-19 incidence images. The model has been validated with data from the 286 health primary care centers in the Comunidad de Madrid (Madrid region, Spain). The results show improved scores in terms of both root mean square error (RMSE) and explained variance (EV) when compared with previous models that have mainly focused on the temporal patterns and dependencies.


Subjects
COVID-19, COVID-19/epidemiology, Forecasting, Humans, Machine Learning, Neural Networks, Computer, SARS-CoV-2
8.
Sensors (Basel) ; 22(3)2022 Feb 03.
Article in English | MEDLINE | ID: mdl-35161914

ABSTRACT

'Resilience' is a new concept in the research and application of urban construction. From the perspective of building adaptability in a mountainous environment and maintaining safety performance over time, this paper innovatively proposes machine learning methods for evaluating the resilience of buildings in a mountainous area. Firstly, after considering the comprehensive effects of geographical and geological conditions, meteorological and hydrological factors, environmental factors and building factors, the database of building resilience evaluation models in a mountainous area is constructed. Then, machine learning methods such as random forest and support vector machine are used to complete model training and optimization. Finally, the test data are substituted into models, and the models' effects are verified by the confusion matrix. The results show the following: (1) Twelve dominant impact factors are screened. (2) Through the screening of dominant factors, the models are comprehensively optimized. (3) The accuracy of the optimization models based on random forest and support vector machine are both 97.4%, and the F1 scores are greater than 94.4%. Resilience has important implications for risk prevention and the control of buildings in a mountainous environment.


Subjects
Machine Learning, Support Vector Machine, China, Databases, Factual, Geography, Geology
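The train-then-verify-with-a-confusion-matrix workflow above can be sketched with scikit-learn; the synthetic 12-feature dataset merely mirrors the twelve dominant impact factors mentioned, and the two-class resilience labels are an assumption:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, f1_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in: 12 features mirror the twelve dominant impact factors
X, y = make_classification(n_samples=500, n_features=12, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for name, clf in [("random forest", RandomForestClassifier(random_state=0)),
                  ("SVM", SVC(random_state=0))]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    cm = confusion_matrix(y_te, pred)          # rows: true, cols: predicted
    f1 = f1_score(y_te, pred)
    results[name] = (cm, f1)
```

Accuracy and F1 read directly off the confusion matrix, which is how the study verifies its two optimized models.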
9.
Sensors (Basel) ; 22(19)2022 Oct 08.
Article in English | MEDLINE | ID: mdl-36236723

ABSTRACT

Building information modeling (BIM), a common technology contributing to information processing, is extensively applied in construction fields. BIM integration with augmented reality (AR) is flourishing in the construction industry, as it provides an effective solution for the lifecycle of a project. However, when applying BIM to AR data transfer, large and complicated models require large storage spaces, increase the model transfer time and data processing workload during rendering, and reduce visualization efficiency when using AR devices. The geometric optimization of the model using mesh reconstruction is a potential solution that can reduce the required storage while maintaining the shape of the components. In this study, a 3D engine-based mesh reconstruction algorithm that can pre-process BIM shape data and implement an AR-based full-size model is proposed, which is likely to increase the efficiency of decision making and project processing for construction management. As shown in the experimental validation, the proposed algorithm significantly reduces the number of vertices, triangles, and storage for geometric models while maintaining the overall shape. Moreover, the model elements and components of the optimized model have the same visual quality as the original model; thus, a high performance can be expected for BIM visualization in AR devices.

10.
J Environ Manage ; 300: 113785, 2021 Dec 15.
Article in English | MEDLINE | ID: mdl-34562818

ABSTRACT

Palms are iconic plants. Oil palms are economically very important and originate in Africa, where they can act as a model for palms in general. The effect of future climate on the growth of oil palm is expected to be very detrimental. Latitudinal migration of tropical crops to climate refuges may be impossible, and longitudinal migration has been confirmed only for oil palm among tropical crops. The previous method to determine the longitudinal trend for oil palm used the longitudes of various countries in Africa and plotted these against the percentage of climate suitable for growing oil palms in each country. An increasing longitudinal trend was observed from west to east. However, the longitudes of the countries were randomly distributed, which may have introduced bias, and the procedure was time consuming. The present report presents an optimised and systematic procedure that divided the regions, as presented on a map derived from a CLIMEX model, into ten equal sectors, and the percentage of suitable climate for growing oil palm was determined for each sector. This approach was quicker, systematic, and straightforward, and will be useful for the management of oil palm plantations under climate change. The method confirmed and validated the trends reported with the original method, although the suitability values were often lower and there was less spread of values around the trend. The values for the CSIRO MK3.0 and MIROC H models demonstrated considerable similarity to each other, contributing to validation of the method. The procedure of dividing model-derived maps equally into sectors could be used for other crops, regions, or systems more generally, where the alternative may be a more superficial visual examination of the maps. Methods are required to mitigate the effects of climate change, and stakeholders need to contribute more actively to the current climate debate with tangible actions.


Subjects
Arecaceae, Africa, Climate Change, Crops, Agricultural, Forecasting, Palm Oil
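The sector-division procedure above is simple to reproduce: split a rasterized suitability map into ten equal longitudinal sectors and compute the percentage of suitable cells in each, west to east. The random map below is only a placeholder for the CLIMEX-derived maps:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical rasterized map: True = climate suitable for oil palm
suitability = rng.random((100, 200)) < 0.3

# Ten equal longitudinal (west-to-east) sectors; percent suitable per sector
sectors = np.array_split(suitability, 10, axis=1)
pct_suitable = [100 * float(s.mean()) for s in sectors]
```

Plotting `pct_suitable` against sector index gives the west-to-east trend the study reports.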
11.
J Environ Manage ; 277: 111449, 2021 Jan 01.
Article in English | MEDLINE | ID: mdl-33035942

ABSTRACT

A response surface methodology was used to investigate the flocculation performance of an amphoteric flocculant (acrylamide-methacrylic acid ester-acrylic acid copolymer [ACPAM]) for harvesting microalgae. After three potential influencing factors (pH, dosage, and the stirring speed of an intensive mixing step, ω1) passed screening in experiments using a Plackett-Burman design, steepest ascent experiments were conducted to identify the parameters for Box-Behnken assessments. In those assessments, ω1, dosage, ω1², dosage², and ω1 ∙ dosage were identified as significant factors. This model was optimized by removing nonsignificant factors and applying a Box-Cox transformation, both of which significantly improved the adequacy of the model. An optimized set of conditions (pH = 9.0, ω1 = 339.3 rpm, and dosage = 28.54 mg/L) was obtained under which flocculation efficiency (FE) was predicted to be 95.85% and 98.00% for the nonsignificant-factors-removed and Box-Cox-transformed models, respectively, compared to an experimentally determined value of 98.06%. Thermal stability analyses showed that ACPAM was generally stable below 100 °C, with some weight loss caused by moisture evaporation. However, crosslinking of its molecules by imidization and condensation started to occur at 120 °C, resulting in lower flocculation performance. Finally, the applicability of ACPAM was studied by comparing its FE to those of two other flocculants (AlCl3 and chitosan) when harvesting three microalgal species. The results showed that the flocculation performance of ACPAM varied with microalgal species: the required ACPAM dosage was highest for one species and lowest for another.


Subjects
Chlorella vulgaris, Chlorophyta, Microalgae, Biomass, Flocculation
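The reduced second-order response surface retained in the study (ω1, dosage, their squares, and the interaction) can be fitted by ordinary least squares; the synthetic flocculation-efficiency data below merely imitate the reported optimum region and are not the experimental runs:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical flocculation runs: stirring speed w1 (rpm), dosage d (mg/L)
w1 = rng.uniform(200, 450, 40)
d = rng.uniform(10, 40, 40)
# Synthetic flocculation efficiency peaking near the reported optimum
fe = (98 - 0.001 * (w1 - 340) ** 2 - 0.05 * (d - 28.5) ** 2
      + rng.normal(0, 0.5, 40))

# Reduced second-order model with the retained terms:
# w1, dosage, w1^2, dosage^2, and the w1*dosage interaction
A = np.column_stack([np.ones_like(w1), w1, d, w1 ** 2, d ** 2, w1 * d])
coef, *_ = np.linalg.lstsq(A, fe, rcond=None)
fe_hat = A @ coef
```

Maximizing the fitted surface over (ω1, dosage) would recover the optimized operating point, as response surface methodology prescribes.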
12.
Sensors (Basel) ; 20(16)2020 Aug 18.
Article in English | MEDLINE | ID: mdl-32824712

ABSTRACT

Deep learning-based artificial intelligence models are widely used in various computing fields. In particular, Convolutional Neural Network (CNN) models perform very well for image recognition and classification. In this paper, we propose an optimized CNN-based recognition model to recognize Caoshu characters. In the proposed scheme, image pre-processing and data augmentation techniques for our Caoshu dataset were applied to optimize and enhance the recognition performance of the CNN-based Caoshu character recognition model. In the performance evaluation, Caoshu character recognition performance was compared and analyzed according to the proposed performance optimization. Based on the model validation results, the recognition accuracy was up to about 98.0% in the TOP-1 case. Based on the testing results of the optimized model, the accuracy, precision, recall, and F1 score are 88.12%, 81.84%, 84.20%, and 83.0%, respectively. Finally, we designed and implemented a Caoshu recognition service as an Android application based on the optimized CNN-based Caoshu recognition model, and verified that the service could be performed in real time.

13.
Sensors (Basel) ; 18(2)2018 Feb 21.
Article in English | MEDLINE | ID: mdl-29466312

ABSTRACT

In recent research, microwave sensors have been used to follow up on the recovery of lower extremity trauma patients. This is done mainly by monitoring the changes in the dielectric properties of lower limb tissues such as skin, fat, muscle, and bone. As part of the characterization of the microwave sensor, it is crucial to assess the signal penetration in in vivo tissues. This work presents a new approach for investigating the penetration depth of planar microwave sensors based on the Split-Ring Resonator in the in vivo context of the femoral area. The approach is based on the optimization of a 3D simulation model, using the CST Microwave Studio platform, consisting of a sensor of the considered type and a multilayered material representing the femoral area. The geometry of the layered material is built from information in ultrasound images and mainly comprises the thicknesses of the skin, fat, and muscle tissues. The optimization target is the measured S11 parameters at the sensor connector, and the fitting parameters are the permittivities of each layer of the material. Four positions in the femoral area (two distal and two at the thigh) in four volunteers are considered for the in vivo study. The penetration depths are finally calculated with the help of the electric field distribution in simulations of the optimized model for each of the 16 considered positions. The numerical results show that positions at the thigh yield the highest penetration values, of up to 17.5 mm. This finding is highly significant for planning in vitro penetration depth measurements and other tests to be performed in the future.

14.
J Comput Neurosci ; 42(1): 71-85, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27726048

ABSTRACT

Particle swarm optimization (PSO) has gained widespread use as a general mathematical programming paradigm and seen use in a wide variety of optimization and machine learning problems. In this work, we introduce a new variant on the PSO social network and apply this method to the inverse problem of input parameter selection from recorded auditory neuron tuning curves. The topology of a PSO social network is a major contributor to optimization success. Here we propose a new social network which draws influence from winner-take-all coding found in visual cortical neurons. We show that the winner-take-all network performs exceptionally well on optimization problems with greater than 5 dimensions and runs at a lower iteration count as compared to other PSO topologies. Finally we show that this variant of PSO is able to recreate auditory frequency tuning curves and modulation transfer functions, making it a potentially useful tool for computational neuroscience models.


Subjects
Algorithms, Computer Simulation, Social Support, Acoustic Stimulation, Models, Neurological, Neurons
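A compact sketch of a PSO whose social term follows only the single best particle each iteration, loosely in the winner-take-all spirit described above; the inertia and acceleration constants are conventional defaults, not the paper's settings:

```python
import numpy as np

def wta_pso(f, dim=6, n_particles=30, iters=200, seed=0):
    """PSO variant whose social term follows only the current best
    ("winner") particle; an illustrative sketch, not the paper's code."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    for _ in range(iters):
        winner = pbest[pbest_f.argmin()]       # winner-take-all leader
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (winner - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = fx[improved]
    return pbest[pbest_f.argmin()], float(pbest_f.min())

# Smoke test: recover the minimum of a 6-D sphere function
best_x, best_f = wta_pso(lambda z: float(np.sum(z ** 2)))
```

In the study the objective would instead score how well model parameters reproduce a recorded tuning curve, with the same social-network machinery.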
15.
Hum Brain Mapp ; 35(9): 4499-517, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24639383

ABSTRACT

In recent years, a variety of multivariate classifier models have been applied to fMRI, with different modeling assumptions. When classifying high-dimensional fMRI data, we must also regularize to improve model stability, and the interactions between classifier and regularization techniques are still being investigated. Classifiers are usually compared on large, multisubject fMRI datasets. However, it is unclear how classifier/regularizer models perform for within-subject analyses, as a function of signal strength and sample size. We compare four standard classifiers: Linear and Quadratic Discriminants, Logistic Regression and Support Vector Machines. Classification was performed on data in the linear kernel (covariance) feature space, and classifiers are tuned with four commonly used regularizers: Principal Component and Independent Component Analysis, and penalization of kernel features using L1 and L2 norms. We evaluated prediction accuracy (P) and spatial reproducibility (R) of all classifier/regularizer combinations on single-subject analyses, over a range of three different block task contrasts and sample sizes for a BOLD fMRI experiment. We show that the classifier model has a small impact on signal detection, compared to the choice of regularizer. PCA maximizes reproducibility and global SNR, whereas Lp-norms tend to maximize prediction. ICA produces low reproducibility, and prediction accuracy is classifier-dependent. However, trade-offs in (P,R) depend partly on the optimization criterion, and PCA-based models are able to explore the widest range of (P,R) values. These trends are consistent across task contrasts and data sizes (training samples range from 6 to 96 scans). In addition, the trends in classifier performance are consistent for ROI-based classifier analyses.


Subjects
Magnetic Resonance Imaging/methods, Sample Size, Adult, Brain/blood supply, Brain/physiology, Brain Mapping/methods, Cerebrovascular Circulation/physiology, Female, Humans, Logistic Models, Male, Oxygen/blood, Principal Component Analysis, Reproducibility of Results, Signal Processing, Computer-Assisted, Signal-to-Noise Ratio, Support Vector Machine, Young Adult
16.
Neural Netw ; 176: 106340, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38713967

ABSTRACT

Vision transformers have achieved remarkable success in computer vision tasks by using multi-head self-attention modules to capture long-range dependencies within images. However, the high inference computation cost poses a new challenge. Several methods have been proposed to address this problem, mainly by slimming patches. In the inference stage, these methods classify patches into two classes, one to keep and the other to discard, in multiple layers. This approach results in additional computation at every layer where patches are discarded, which hinders inference acceleration. In this study, we tackle the patch slimming problem from a different perspective by proposing a life regression module that determines the lifespan of each image patch in one go. During inference, a patch is discarded once the current layer index exceeds its life. Our proposed method avoids additional computation and parameters in multiple layers to enhance inference speed while maintaining competitive performance. Additionally, our approach requires fewer training epochs than other patch slimming methods.


Subjects
Algorithms, Humans, Neural Networks, Computer, Image Processing, Computer-Assisted/methods
17.
MethodsX ; 12: 102679, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38577406

ABSTRACT

The U.S. Geological Survey (USGS) has published a guideline to improve the quality of digital photogrammetric reconstructions created with the widely used Agisoft Metashape Professional software. The suggested workflows aim at filtering out low-quality tie points from the tie point cloud to optimize the camera model. However, the optimization procedure relies on an iteratively performed trial-and-error approach. If performed manually, the time expenditure required from the operator can be significant, and the optimization process can be affected by the degree of diligence applied. To minimize the time expenditure and attentiveness required from the operator, and to provide a framework for improved reproducibility of camera model optimization workflows, we present here a Python script serving as an extension for Agisoft Metashape Professional (tested on version 2.1.0) that automates the iterative point filtering procedure proposed by the USGS. As a result, the entire processing cycle can be performed largely unattended.
• A graphical user interface allows the operator to individually adjust important camera model optimization parameters.
• Main tie point cloud quality measures can be directly assessed.
• The reproducibility of the automated camera model optimization, as tested in this study, is generally above 99%.

18.
Sci Rep ; 14(1): 22119, 2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39333614

ABSTRACT

This study explores the prediction of mechanical characteristics of linear polyethylene based on oven residence time, employing various regression models and hyper-parameter tuning through the Whale Optimization Algorithm. The dataset comprises one input variable (oven residence time) and three output parameters (Tensile Strength, Impact Strength, and Flexure Strength). The models investigated include Multilayer Perceptron, K-Nearest Neighbors, Support Vector Regression, Polynomial Regression, and Theil-Sen Regression. The results showcased distinct performances across the models for each output parameter. The Polynomial Regression (WOA-PR) method has been identified as the most suitable option for predicting Tensile Strength due to its ability to achieve the lowest errors in terms of Mean Absolute Error, Root Mean Square Error, and Average Absolute Relative Deviation. K-Nearest Neighbors (WOA-KNN) outperforms other models in predicting Impact Strength due to its superior accuracy and reliability. Additionally, Support Vector Regression (WOA-SVR) emerges as the best model for predicting Flexure Strength, showcasing notable performance in minimizing prediction errors. These findings underscore the significance of model selection and optimization techniques in accurately predicting the mechanical properties of polymers, paving the way for enhanced manufacturing processes and material design.
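A compact Whale Optimization Algorithm sketch (shrinking encirclement, random-whale search, and the logarithmic spiral move) minimizing a toy one-dimensional objective that stands in for a cross-validated error over a single hyperparameter; the coefficients follow the commonly published WOA form, not necessarily this study's configuration:

```python
import numpy as np

def whale_optimize(f, bounds, n_whales=20, iters=100, seed=0):
    """Compact WOA sketch: shrinking encirclement, random-whale search,
    and the logarithmic spiral move (illustrative, 1-D)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, n_whales)
    best = min(x, key=f)
    for t in range(iters):
        a = 2 - 2 * t / iters                  # decreases linearly 2 -> 0
        for i in range(n_whales):
            if rng.random() < 0.5:
                A = 2 * a * rng.random() - a
                C = 2 * rng.random()
                if abs(A) < 1:                 # exploit: encircle the best
                    x[i] = best - A * abs(C * best - x[i])
                else:                          # explore: follow a random whale
                    rand = x[rng.integers(n_whales)]
                    x[i] = rand - A * abs(C * rand - x[i])
            else:                              # spiral update around the best
                l = rng.uniform(-1, 1)
                x[i] = abs(best - x[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            x[i] = np.clip(x[i], lo, hi)
        cand = min(x, key=f)
        if f(cand) < f(best):
            best = cand
    return float(best)

# Toy objective standing in for a cross-validated error curve over one knob
best = whale_optimize(lambda v: (v - 3.2) ** 2, bounds=(0.0, 10.0))
```

In the study, `f` would wrap model training and validation error for each candidate hyperparameter set.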

19.
Sci Rep ; 14(1): 14030, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38890360

ABSTRACT

The classification of coal bursting liability (CBL) is essential for the mitigation and management of coal bursts in mining operations. This study establishes an index system for CBL classification, incorporating dynamic fracture duration (DT), elastic strain energy index (WET), bursting energy index (KE), and uniaxial compressive strength (RC). Utilizing a dataset comprising 127 CBL measurement groups, the impacts of various optimization algorithms were assessed, and two prominent machine learning techniques, namely the back propagation neural network (BPNN) and the support vector machine (SVM), were employed to develop twelve distinct models. The models' efficacy was evaluated based on accuracy, F1-score, Kappa coefficient, and sensitivity analysis. Among these, the Levenberg-Marquardt back propagation neural network (LM-BPNN) model was identified as superior, achieving an accuracy of 96.85%, an F1-score of 0.9113, and a Kappa coefficient of 0.9417. Further validation at the Wudong Coal Mine and the Yvwu Coal Industry confirmed the model's performance, achieving 100% accuracy. These findings underscore the LM-BPNN model's potential as a viable tool for enhancing coal burst prevention strategies in coal mining sectors.

20.
Front Plant Sci ; 15: 1411485, 2024.
Article in English | MEDLINE | ID: mdl-39301154

ABSTRACT

Introduction: Mechanical damage significantly reduces the market value of fruits, making the early detection of such damage a critical aspect of agricultural management. This study focuses on the early detection of mechanical damage in blueberries (variety: Sapphire) through a non-destructive method. Methods: The proposed method integrates hyperspectral image fusion with a multi-strategy improved support vector machine (SVM) model. Initially, spectral features and image features were extracted from the hyperspectral information using the successive projections algorithm (SPA) and Grey Level Co-occurrence Matrix (GLCM), respectively. Different models including SVM, RF (Random Forest), and PLS-DA (Partial Least Squares Discriminant Analysis) were developed based on the extracted features. To refine the SVM model, its hyperparameters were optimized using a multi-strategy improved Beluga Whale Optimization (BWO) algorithm. Results: The SVM model, upon optimization with the multi-strategy improved BWO algorithm, demonstrated superior performance, achieving the highest classification accuracy among the models tested. The optimized SVM model achieved a classification accuracy of 95.00% on the test set. Discussion: The integration of hyperspectral image information through feature fusion proved highly efficient for the early detection of bruising in blueberries. However, the effectiveness of this technology is contingent upon specific conditions in the detection environment, such as light intensity and temperature. The high accuracy of the optimized SVM model underscores its potential utility in post-harvest assessment of blueberries for early detection of bruising. Despite these promising results, further studies are needed to validate the model under varying environmental conditions and to explore its applicability to other fruit varieties.
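The successive projections algorithm (SPA) used above for spectral feature selection can be sketched in a few lines: starting from one wavelength, it repeatedly adds the column with the largest norm after projecting out the span of those already chosen, which suppresses collinearity among the selected bands. The random matrix stands in for real hyperspectral data:

```python
import numpy as np

def spa_select(X, k, start=0):
    # Minimal successive projections algorithm (SPA) sketch: greedily pick
    # the column with the largest norm in the orthogonal complement of the
    # columns already selected.
    Xp = X.astype(float).copy()
    selected = [start]
    for _ in range(k - 1):
        v = Xp[:, selected[-1]].copy()
        v /= np.linalg.norm(v)
        Xp = Xp - np.outer(v, v @ Xp)   # project out the last selected column
        norms = np.linalg.norm(Xp, axis=0)
        norms[selected] = -1            # never re-pick a selected column
        selected.append(int(norms.argmax()))
    return selected

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 25))           # hypothetical stand-in for spectral bands
bands = spa_select(X, k=5)
```

The selected band indices would then feed a classifier such as the study's BWO-tuned SVM.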
