Results 1 - 3 of 3

1.
J Environ Manage; 365: 121530, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38905799

ABSTRACT

Atrazine is a widely used herbicide in agriculture that has garnered significant attention because of its potential risks to the environment and human health. Its extensive use, combined with its persistence in water and soil, underscores the critical need for safe and efficient removal strategies. This comprehensive review spotlights atrazine's potential impact on ecosystems and public health, particularly its enduring presence in soil, water, and plants, and its role as a toxic endocrine disruptor. The review then surveys innovative removal techniques across soil and water environments, elucidating microbial degradation, phytoremediation, and advanced methodologies such as electrokinetic-assisted phytoremediation (EKPR) and photocatalysis. Particular emphasis is placed on the complexity of atrazine degradation and on the ongoing scientific efforts to address it.

2.
Article in English | MEDLINE | ID: mdl-38289836

ABSTRACT

Model compression methods are being developed to bridge the gap between the massive scale of neural networks and the limited hardware resources of edge devices. Because most real-world applications deployed on resource-limited hardware must satisfy several hardware constraints simultaneously, existing model compression approaches that optimize only a single hardware objective are ineffective. In this article, we propose an automated pruning method called multi-constrained model compression (MCMC) that optimizes multiple hardware targets, such as latency, floating point operations (FLOPs), and memory usage, while minimizing the impact on accuracy. Specifically, we propose an improved multi-objective reinforcement learning (MORL) algorithm, the one-stage envelope deep deterministic policy gradient (DDPG) algorithm, to determine the pruning strategy for neural networks. The improved one-stage envelope DDPG algorithm reduces exploration time and offers greater flexibility in adjusting target priorities, making it well suited to pruning tasks. For instance, on the visual geometry group (VGG)-16 network, our method achieved an 80% reduction in FLOPs, a 2.31× reduction in memory usage, and a 1.92× acceleration, with a 0.09% accuracy improvement over the baseline. On larger datasets, such as ImageNet, we reduced FLOPs by 50% for MobileNet-V1, yielding a 4.7× speedup and 1.48× memory compression while maintaining the same accuracy. On edge devices, such as the JETSON XAVIER NX, our method reduced FLOPs by 71% for MobileNet-V1, delivering a 1.63× speedup, 1.64× memory compression, and an accuracy improvement.
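
Editor's note: the sketch below illustrates, in Python, how several hardware budgets could be folded into a preference-weighted multi-objective reward for a pruning agent, in the spirit of the envelope-style MORL the abstract describes. It is an assumption-laden illustration only: the names (ConstraintTarget, vector_reward, scalarize), the budget normalization, and the preference weights are not taken from the MCMC article.

# Minimal sketch of a multi-objective reward for hardware-constrained pruning.
# Assumptions: one reward component per objective, linear preference scalarization.
from dataclasses import dataclass
import numpy as np

@dataclass
class ConstraintTarget:
    name: str        # e.g. "flops_g", "latency_ms", "memory_mb" (illustrative)
    budget: float    # hardware budget the pruned model must meet
    measured: float  # value measured for the current pruned candidate

def vector_reward(accuracy: float, targets: list) -> np.ndarray:
    """One component per objective: accuracy, plus one per hardware constraint.
    Constraint components are zero when the budget is met and negative in
    proportion to the overshoot, nudging the agent toward candidates that
    satisfy all budgets at once."""
    parts = [accuracy]
    for t in targets:
        parts.append(min(0.0, (t.budget - t.measured) / t.budget))
    return np.array(parts)

def scalarize(reward_vec: np.ndarray, preference: np.ndarray) -> float:
    """Envelope-style scalarization: weight each objective by a preference
    vector that trades accuracy off against the hardware targets."""
    preference = preference / preference.sum()
    return float(preference @ reward_vec)

# Example: one candidate pruning strategy checked against FLOPs and memory budgets.
targets = [
    ConstraintTarget("flops_g", budget=4.0, measured=3.1),      # within budget
    ConstraintTarget("memory_mb", budget=60.0, measured=72.0),  # over budget
]
r = vector_reward(accuracy=0.71, targets=targets)
print(scalarize(r, preference=np.array([0.6, 0.2, 0.2])))

Adjusting the preference vector shifts the agent's priorities among accuracy, FLOPs, and memory without retraining the reward logic, which is the flexibility the abstract attributes to the envelope formulation.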

3.
Article in English | MEDLINE | ID: mdl-37071511

ABSTRACT

Recently, value-based centralized training with decentralized execution (CTDE) multi-agent reinforcement learning (MARL) methods have achieved excellent performance in cooperative tasks. However, the most representative of these methods, Q-network MIXing (QMIX), restricts the joint action Q value to a monotonic mixing of each agent's utility. Furthermore, current methods cannot generalize to unseen environments or different agent configurations, a setting known as ad hoc team play. In this work, we propose a novel Q-value decomposition that considers both the return of an agent acting on its own and the return of cooperating with other observable agents, addressing the nonmonotonicity problem. Based on this decomposition, we propose a greedy action-searching method that improves exploration and is unaffected by changes in the set of observable agents or in the order of agents' actions, allowing our method to adapt to ad hoc team play. We further use an auxiliary loss that enforces environmental cognition consistency and a modified prioritized experience replay (PER) buffer to assist training. Extensive experimental results show that our method achieves significant performance improvements in both challenging monotonic and nonmonotonic domains and handles ad hoc team play effectively.
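
Editor's note: for context on the restriction this abstract relaxes, the sketch below shows a standard QMIX-style monotonic mixing network, assuming PyTorch. The absolute-value hypernetwork weights are what force the joint Q value to be monotonic in each agent's utility; this is the conventional formulation, not the decomposition proposed in the article, and the class name and dimensions are illustrative.

# Minimal QMIX-style mixer: joint Q is a state-conditioned, monotonic mix of
# per-agent utilities (mixing weights kept non-negative via torch.abs).
import torch
import torch.nn as nn

class MonotonicMixer(nn.Module):
    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        # Hypernetworks generate mixing weights/biases conditioned on the global state.
        self.w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.b1 = nn.Linear(state_dim, embed_dim)
        self.w2 = nn.Linear(state_dim, embed_dim)
        self.b2 = nn.Linear(state_dim, 1)
        self.n_agents, self.embed_dim = n_agents, embed_dim

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents), state: (batch, state_dim)
        w1 = torch.abs(self.w1(state)).view(-1, self.n_agents, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs.unsqueeze(1), w1) + self.b1(state).unsqueeze(1))
        w2 = torch.abs(self.w2(state)).view(-1, self.embed_dim, 1)
        q_tot = torch.bmm(hidden, w2) + self.b2(state).unsqueeze(1)
        return q_tot.squeeze(-1).squeeze(-1)

# Example: 4 agents, a batch of 8 transitions.
mixer = MonotonicMixer(n_agents=4, state_dim=16)
q_tot = mixer(torch.rand(8, 4), torch.rand(8, 16))
print(q_tot.shape)  # torch.Size([8])

Because every mixing weight is non-negative, increasing any single agent's utility can never decrease q_tot; this is exactly the monotonicity constraint that prevents QMIX from representing the nonmonotonic payoffs the article's decomposition targets.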
