Results 1 - 2 of 2
1.
Toxics; 12(2), 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38393213

ABSTRACT

Japan's recent discharge of wastewater from the Fukushima nuclear disaster into the ocean has attracted widespread attention. To address the challenge of separating uranium, attention has turned to biochar as a safe and environmentally friendly adsorbent. In this paper, a BP (back-propagation) neural network is combined with each of four meta-heuristic algorithms, namely Particle Swarm Optimization (PSO), Differential Evolution (DE), Cheetah Optimization (CO) and the Fick's Law Algorithm (FLA), to construct four models for predicting the uranium adsorption capacity of biochar in radioactive wastewater treatment: PSO-BP, DE-BP, CO-BP and FLA-BP. The coefficient of determination (R²), the error rate and the CEC benchmark test set are used to judge the accuracy of the BP-based models. The results show that the Fick's Law Algorithm (FLA) has better search ability and a faster convergence speed than the other algorithms. To analyze which parameters most strongly influence the models' predictions, the importance of the input parameters is quantitatively assessed and ranked using XGBoost; the parameters with the greatest impact are the initial uranium concentration (C0, mg/L) and the mass percentage of total carbon (C, %). In summary, all four prediction models can be applied to study the adsorption of uranium by biochar materials in actual experiments, with the FLA-BP model showing the clearest advantage. Model-based prediction can significantly reduce the radiation risk that uranium poses to human health during experiments and provides a reference for the efficient treatment of uranium-bearing wastewater with biochar.
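To make the hybrid-model idea concrete, here is a minimal sketch of one of the four combinations, PSO-BP: particle swarm optimization searches the weight space of a small back-propagation-style feed-forward network that maps adsorption conditions to uranium uptake. The feature set (C0, total carbon, pH, temperature), the synthetic data, the network size and the PSO hyperparameters are all illustrative assumptions, not the paper's actual configuration.

```python
# Minimal PSO-BP sketch: PSO optimizes the flattened weights of a
# one-hidden-layer network. Data, features, and hyperparameters are
# stand-ins for the paper's experimental dataset and settings.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: C0 (mg/L), total carbon C (%), pH, temperature (K);
# target: adsorption capacity q (mg/g). Synthetic data for illustration.
X = rng.uniform([1, 10, 3, 288], [200, 90, 9, 328], size=(120, 4))
y = 0.4 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 5, 120)
Xs = (X - X.mean(0)) / X.std(0)          # standardize features

H = 8                                     # hidden-layer width (assumption)
n_w = 4 * H + H + H + 1                   # W1, b1, W2, b2 flattened

def predict(w, Xin):
    """One-hidden-layer network with tanh activation."""
    W1 = w[:4 * H].reshape(4, H)
    b1 = w[4 * H:5 * H]
    W2 = w[5 * H:6 * H].reshape(H, 1)
    b2 = w[6 * H]
    return (np.tanh(Xin @ W1 + b1) @ W2).ravel() + b2

def mse(w):
    return np.mean((predict(w, Xs) - y) ** 2)

# Plain global-best PSO over the flattened weight vector.
n_particles, iters = 30, 200
pos = rng.normal(0, 0.5, (n_particles, n_w))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best training MSE:", pbest_f.min())
```

Swapping PSO for DE, CO or FLA only changes the update rule in the loop; the network and loss stay the same, which is what makes the four-model comparison in the abstract straightforward.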
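The abstract also describes ranking input importance with XGBoost. A minimal sketch of that step, again on stand-in data with hypothetical feature names:

```python
# Hypothetical illustration of the XGBoost importance-ranking step; the
# feature names and data are placeholders, not the paper's dataset.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(1)
names = ["C0 (mg/L)", "C (%)", "pH", "T (K)"]
X = rng.uniform([1, 10, 3, 288], [200, 90, 9, 328], size=(120, 4))
y = 0.4 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 5, 120)

model = XGBRegressor(n_estimators=200, max_depth=3).fit(X, y)
for name, imp in sorted(zip(names, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```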

2.
IEEE Trans Vis Comput Graph; 24(1): 468-477, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28866529

ABSTRACT

Visualizations often appear in multiples, either in a single display (e.g., small multiples, a dashboard) or across time or space (e.g., a slideshow, a set of dashboards). However, existing visualization design guidelines typically focus on single rather than multiple views. Following these guidelines alone can lead to views that are individually effective yet mutually inconsistent (e.g., the same field has different axis domains across charts), making interpretation slow and error-prone. Moreover, little is known about how consistency balances against other design considerations, making it difficult to incorporate consistency mechanisms into visualization authoring software. We present a wizard-of-oz study in which we observed how Tableau users achieve and sacrifice consistency in an exploration-to-presentation visualization design scenario. We extend, from our prior work, a set of encoding-specific constraints that define consistency across multiple views. Using the constraints as a checklist in our study, we observed cases where participants spontaneously maintained consistent encodings and issued warnings in cases where consistency was overlooked. In response to the warnings, participants either revised views for consistency or stated why they thought consistency should be overridden. We categorize participants' actions and responses as constraint validations and exceptions, illustrating the relative importance of consistency and other design considerations under various circumstances (e.g., data cardinality, available encoding resources, chart layout). We discuss automatic consistency checking as a constraint-satisfaction problem and provide design implications for communicating inconsistencies to users.
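The closing idea, treating consistency checking as constraint satisfaction over chart encodings, can be made concrete with a small sketch. The spec format, chart names, fields and the two constraints below are invented for illustration; they are not the formalism or the constraint set from the paper.

```python
# Minimal sketch of consistency-as-constraint-checking: each chart maps a
# field to its encoding (channel, axis domain, color palette). The checker
# warns when the same field is encoded differently across views, mirroring
# the axis-domain example in the abstract. Spec format is hypothetical.
from itertools import combinations

charts = {
    "bar_chart": {"sales": {"channel": "y", "domain": (0, 500)},
                  "region": {"channel": "color",
                             "palette": {"east": "blue", "west": "red"}}},
    "line_chart": {"sales": {"channel": "y", "domain": (0, 300)},
                   "region": {"channel": "color",
                              "palette": {"east": "blue", "west": "red"}}},
}

def check_consistency(charts):
    """Yield a warning for every shared field whose encoding differs."""
    for (n1, c1), (n2, c2) in combinations(charts.items(), 2):
        for field in c1.keys() & c2.keys():
            e1, e2 = c1[field], c2[field]
            if e1.get("domain") != e2.get("domain"):
                yield (f"'{field}' has different axis domains in "
                       f"{n1} {e1['domain']} and {n2} {e2['domain']}")
            if e1.get("palette") != e2.get("palette"):
                yield f"'{field}' has different color mappings in {n1} and {n2}"

for warning in check_consistency(charts):
    print("warning:", warning)
```

In this toy example the checker flags the mismatched axis domains for "sales"; a user could then either align the domains or record an exception, the two responses the study observed from participants.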
