Results 1 - 2 of 2
1.
Water Res ; 263: 122179, 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39096812

ABSTRACT

The operation of modern wastewater treatment facilities is a balancing act in which a multitude of variables are controlled to achieve a wide range of objectives, many of which conflict. This is especially true within secondary activated sludge systems, where significant research and industry effort has been devoted to advancing control optimization strategies, both domain-driven and data-driven. Among data-driven control strategies, reinforcement learning (RL) stands out for its ability to achieve better-than-human performance in complex environments. While RL has been applied to activated sludge process optimization in the existing literature, these applications have typically been limited in scope, never controlling more than three actions. Expanding the scope of RL control could increase optimization gains while concurrently reducing the number of control systems that operations staff must tune and maintain. This study examined several facets of implementing multi-action, multi-objective RL agents, namely how many actions a single agent could successfully control and how much environment data was necessary to train such agents. Control optimization improved with increasing action scope, though control of waste activated sludge remains a challenge. Furthermore, agents maintained a high level of performance under decreased observation scope, up to a point. Compared to baseline control of the Benchmark Simulation Model No. 1 (BSM1), an RL agent controlling seven individual actions improved the average BSM1 performance metric by 8.3%, equivalent to an annual cost savings of $40,200 after accounting for the cost of additional sensors.
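The multi-action, multi-objective setup described above can be sketched as a minimal environment interface. This is an illustrative stand-in only: the class name, seven-action layout, state dimensions, dynamics, and reward weights below are assumptions, not the study's implementation, and the reward merely mimics the general shape of a BSM1-style cost index (effluent quality plus energy terms).

```python
import numpy as np

class ActivatedSludgeEnv:
    """Toy stand-in for a BSM1-style environment exposing a multi-action
    continuous control interface. Names and dynamics are illustrative."""

    N_ACTIONS = 7   # e.g. aeration setpoints, recycle and wastage flows
    N_SENSORS = 10  # e.g. ammonia, nitrate, DO, and flow measurements

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.state = self.rng.uniform(0.0, 1.0, size=self.N_SENSORS)

    def reset(self):
        self.state = self.rng.uniform(0.0, 1.0, size=self.N_SENSORS)
        return self.state

    def step(self, action):
        action = np.clip(np.asarray(action, dtype=float), 0.0, 1.0)
        # Placeholder dynamics: the state drifts with random disturbance.
        self.state = 0.9 * self.state + 0.1 * self.rng.uniform(
            0.0, 1.0, self.N_SENSORS)
        # Multi-objective reward: penalize effluent-quality and energy
        # proxies, echoing the cost terms of a BSM1 performance index.
        effluent_penalty = float(np.sum(self.state[:5] ** 2))
        energy_penalty = float(np.sum(action ** 2))
        reward = -(effluent_penalty + 0.5 * energy_penalty)
        return self.state, reward

env = ActivatedSludgeEnv()
obs = env.reset()
obs, reward = env.step(np.full(ActivatedSludgeEnv.N_ACTIONS, 0.5))
```

Reducing the observation scope in this framing corresponds to exposing only a subset of the sensor vector to the agent, which is the kind of ablation the study describes.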

2.
Environ Sci Technol ; 57(46): 18382-18390, 2023 Nov 21.
Article in English | MEDLINE | ID: mdl-37405782

ABSTRACT

Treatment of wastewater using activated sludge relies on several complex, nonlinear processes. While activated sludge systems can provide high levels of treatment, including nutrient removal, operating these systems is often challenging and energy intensive. Significant research investment has been made in recent years into improving control optimization of such systems, through both domain knowledge and, more recently, machine learning. This study leverages a novel interface between a widely used process modeling package and a Python reinforcement learning environment to evaluate four common reinforcement learning algorithms for their ability to minimize treatment energy use while maintaining effluent compliance within the Benchmark Simulation Model No. 1 (BSM1) simulation. Three of the algorithms tested, deep Q-learning, proximal policy optimization, and synchronous advantage actor-critic, generally performed poorly over the scenarios tested in this study. In contrast, the twin delayed deep deterministic policy gradient (TD3) algorithm consistently produced a high level of control optimization while maintaining the treatment requirements. Under the best selection of state observation features, TD3 control optimization reduced aeration and pumping energy requirements by 14.3% compared to the BSM1 benchmark control, outperforming the advanced domain-based control strategy of ammonia-based aeration control, although future work is necessary to improve the robustness of the RL implementation.
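The distinguishing feature of TD3, the algorithm that succeeded here, is its critic target: target-policy smoothing with clipped noise plus the minimum of two target critics to curb value overestimation. A minimal numerical sketch of that target computation is below; the linear policy and critic functions are stand-ins for illustration only, not the study's networks.

```python
import numpy as np

def td3_target(reward, next_state, q1, q2, policy, gamma=0.99,
               noise_std=0.2, noise_clip=0.5, rng=None):
    """Compute the TD3 critic target: target-policy smoothing with
    clipped Gaussian noise, then the minimum of two target critics
    (clipped double-Q learning)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    noise = np.clip(rng.normal(0.0, noise_std), -noise_clip, noise_clip)
    next_action = np.clip(policy(next_state) + noise, -1.0, 1.0)
    # Taking the minimum of twin critics curbs the value overestimation
    # that destabilizes single-critic deterministic policy gradients.
    q_min = min(q1(next_state, next_action), q2(next_state, next_action))
    return reward + gamma * q_min

# Stand-in linear policy and critics, for illustration only.
policy = lambda s: float(np.tanh(0.5 * s.sum()))
q1 = lambda s, a: float(s.sum() + a)
q2 = lambda s, a: float(s.sum() - a)

y = td3_target(reward=1.0, next_state=np.array([0.2, 0.1]),
               q1=q1, q2=q2, policy=policy)
```

In a continuous-action setting like aeration and pumping control, this deterministic-policy, twin-critic structure is a plausible reason TD3 outperformed the discrete-action and on-policy alternatives, though the abstract itself does not attribute the result to a specific mechanism.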


Subjects
Sewage; Water Purification; Waste Disposal, Fluid; Algorithms; Wastewater