Results 1 - 3 of 3
1.
J Sep Sci; 46(21): e2300582, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37675810

ABSTRACT

Berberine was extracted from Berberis vulgaris, Berberis aquifolium, and Hydrastis canadensis plants using ethanol-water (70:30, v/v). The extracted berberine was characterized by ultraviolet-visible and Fourier-transform infrared spectroscopy. Its purity was ascertained by thin-layer chromatography (TLC) using n-propanol-formic acid-water (95:1:4 and 90:1:9) solvent systems; hRf values were in the range of 44-49 with compact spots (diameter 0.2-0.4 cm). High-performance liquid chromatography (HPLC) was carried out with an ammonium acetate buffer-acetonitrile gradient on a Zodiac column (4.6 × 150 mm, 3 µm) at a flow rate of 1.0 mL/min with detection at 220 nm. The separation and resolution factors of the standard and the extracted berberine were in the range of 1.13-1.16 and 1.40-1.71, respectively. A comparison showed that both TLC and HPLC are applicable, each suited to different situations and requirements. The extracted berberine samples were used to treat leishmaniasis and showed better activity than the standard drug amphotericin B. In brief, the reported work is novel and may be used to extract berberine from plants, to separate and identify it by TLC and HPLC, and to treat leishmaniasis.
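The hRf, separation-factor, and resolution values quoted above follow from the standard chromatographic definitions. The short Python sketch below illustrates those definitions with made-up migration distances, retention times, and peak widths; none of the inputs are measurements from the study.

# Illustrative only: standard TLC/HPLC figures of merit; all numbers are invented.

def h_rf(spot_distance_cm: float, solvent_front_cm: float) -> float:
    """hRf = 100 x (distance travelled by the spot / distance travelled by the solvent front)."""
    return 100.0 * spot_distance_cm / solvent_front_cm

def separation_factor(t_r1: float, t_r2: float, t_0: float) -> float:
    """alpha = k2 / k1, with retention factor k = (tR - t0) / t0 and t_r2 > t_r1."""
    k1 = (t_r1 - t_0) / t_0
    k2 = (t_r2 - t_0) / t_0
    return k2 / k1

def resolution(t_r1: float, t_r2: float, w1: float, w2: float) -> float:
    """Rs = 2 (tR2 - tR1) / (w1 + w2), using baseline peak widths."""
    return 2.0 * (t_r2 - t_r1) / (w1 + w2)

if __name__ == "__main__":
    print(f"hRf = {h_rf(3.6, 8.0):.0f}")                      # 45, within the reported 44-49 range
    print(f"alpha = {separation_factor(4.2, 4.8, 1.1):.2f}")  # invented retention times (min)
    print(f"Rs = {resolution(4.2, 4.8, 0.40, 0.42):.2f}")     # invented peak widths (min)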


Subjects
Berberine, Berberine/chemistry, Chromatography, High Pressure Liquid/methods, Chromatography, Thin Layer/methods, Solvents/analysis, Water
2.
Cluster Comput; 24(3): 2581-2595, 2021.
Article in English | MEDLINE | ID: mdl-33880074

ABSTRACT

Defects are the major problems in the current situation and predicting them is also a difficult task. Researchers and scientists have developed many software defects prediction techniques to overcome this very helpful issue. But to some extend there is a need for an algorithm/method to predict defects with more accuracy, reduce time and space complexities. All the previous research conducted on the data without feature reduction lead to the curse of dimensionality. We brought up a machine learning hybrid approach by combining Principal component Analysis (PCA) and Support vector machines (SVM) to overcome the ongoing problem. We have employed PROMISE (CM1: 344 observations, KC1: 2109 observations) data from the directory of NASA to conduct our research. We split the dataset into training (CM1: 240 observations, KC1: 1476 observations) dataset and testing (CM1: 104 observations, KC1: 633 observations) datasets. Using PCA, we find the principal components for feature optimization which reduce the time complexity. Then, we applied SVM for classification due to very native qualities over traditional and conventional methods. We also employed the GridSearchCV method for hyperparameter tuning. In the proposed hybrid model we have found better accuracy (CM1: 95.2%, KC1: 86.6%) than other methods. The proposed model also presents higher evaluation in the terms of other criteria. As a limitation, the only problem with SVM is there is no probabilistic explanation for classification which may very rigid towards classifications. In the future, some other method may also introduce which can overcome this limitation and keep a soft probabilistic based margin for classification on the optimal hyperplane.
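The abstract describes a PCA-then-SVM pipeline tuned with GridSearchCV. The following is a minimal sketch of that kind of workflow, assuming scikit-learn; a synthetic dataset stands in for the NASA PROMISE CM1/KC1 data, and the parameter grid and split ratio (about 70/30, as reported) are illustrative choices, not the paper's settings.

# Sketch: PCA for feature reduction + SVM classifier, tuned with GridSearchCV.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic, imbalanced stand-in roughly the size of KC1 (2109 observations).
X, y = make_classification(n_samples=2109, n_features=21, weights=[0.85, 0.15],
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

pipeline = Pipeline([
    ("scale", StandardScaler()),      # PCA is scale-sensitive
    ("pca", PCA()),                   # feature reduction against the curse of dimensionality
    ("svm", SVC(kernel="rbf")),       # classification on the reduced features
])

param_grid = {
    "pca__n_components": [5, 10, 15],
    "svm__C": [0.1, 1, 10],
    "svm__gamma": ["scale", 0.01, 0.1],
}
search = GridSearchCV(pipeline, param_grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)
print("best params:", search.best_params_)
print("test accuracy:", search.score(X_test, y_test))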

3.
PLoS One; 19(7): e0307112, 2024.
Article in English | MEDLINE | ID: mdl-38990978

ABSTRACT

Maintaining quality in software development projects is becoming very difficult because the complexity of software modules is growing exponentially. Software defects are the primary concern, and software defect prediction (SDP) plays a crucial role in detecting faulty modules early and planning effective testing to reduce maintenance costs. However, SDP faces challenges such as imbalanced data, high-dimensional features, model overfitting, and outliers. Moreover, traditional SDP models lack transparency and interpretability, which undermines stakeholder confidence in the Software Development Life Cycle (SDLC). We propose SPAM-XAI, a hybrid model integrating novel sampling, feature selection, and eXplainable-AI (XAI) algorithms to address these challenges. The SPAM-XAI model reduces features, optimizes the model, and reduces time and space complexity, enhancing its robustness. In experiments on datasets from the NASA PROMISE repository, SPAM-XAI achieved an accuracy of 98.13% on CM1, 96.00% on PC1, and 98.65% on PC2, surpassing previous state-of-the-art and baseline models on the other evaluation metrics as well. The SPAM-XAI model increases transparency and clarifies how individual features relate to the predicted error status, enabling coherent and comprehensible predictions. This enhancement optimizes the decision-making process and strengthens the model's trustworthiness in the SDLC.
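The abstract does not specify which sampling, feature-selection, or XAI components SPAM-XAI uses, so the sketch below is only a generic illustration of that style of pipeline: SMOTE oversampling, univariate feature selection, a random forest, and SHAP explanations are stand-ins, and the dataset is synthetic rather than PROMISE CM1/PC1/PC2.

# Sketch: rebalance -> select features -> train -> explain predictions.
import shap
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split

# Imbalanced, high-dimensional stand-in for a defect dataset.
X, y = make_classification(n_samples=1000, n_features=30, weights=[0.9, 0.1],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Oversample the minority (defective) class on the training split only.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)

# Keep the 10 most informative features to curb dimensionality and overfitting.
selector = SelectKBest(f_classif, k=10).fit(X_bal, y_bal)
X_sel, X_test_sel = selector.transform(X_bal), selector.transform(X_test)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_sel, y_bal)
print("test accuracy:", model.score(X_test_sel, y_test))

# Post-hoc explanation: SHAP values show how each selected feature pushes a
# module toward the "defective" or "clean" class for every test prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test_sel)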


Subjects
Algorithms, Software, Models, Theoretical, Artificial Intelligence, Humans