Sensors (Basel) ; 24(14)2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39066137

ABSTRACT

As the number of agents grows and task scenarios shift in multi-agent collaborative systems, existing collaborative strategies struggle to adapt to new tasks. To address this challenge, this paper proposes a knowledge distillation method combined with a domain separation network (DSN-KD). The method takes the well-performing policy network from a source task as the teacher model, uses a domain-separated neural network structure to correct the teacher model's outputs, and applies the corrected outputs as supervision to guide agent learning in the new task. Because it requires no pre-designed or pre-trained state-action mappings, the method reduces the cost of transfer. Experiments in a particle simulation environment, covering UAV surveillance, UAV cooperative target occupation, robot cooperative box pushing, UAV cooperative target strike, and multi-agent cooperative resource recovery, show that DSN-KD accelerates the learning of new task policies and brings the learned policy closer to the theoretically optimal policy in practical tasks.
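The core idea, distilling a source-task teacher whose outputs are first corrected before serving as supervision, can be sketched as a temperature-scaled KL distillation loss. This is a minimal illustration, not the paper's implementation: the `correction` term stands in for the domain-separation adjustment, and simply adding it to the teacher's logits is an assumed, simplified formulation.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, correction, T=2.0):
    """KL(teacher || student) on temperature-softened action distributions.

    `correction` is a stand-in for the domain-separation network's
    adjustment: it is added to the teacher's logits before they are
    used as the supervision signal (an illustrative choice, not the
    exact formulation in the paper).
    """
    p = softmax(np.asarray(teacher_logits, dtype=float) + np.asarray(correction, dtype=float), T)
    q = softmax(student_logits, T)
    # Scale by T^2 so gradients keep a comparable magnitude across temperatures.
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)
```

With identical teacher and student logits and a zero correction, the loss is zero; any mismatch between the corrected teacher distribution and the student distribution yields a positive loss that pulls the new-task policy toward the teacher's.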
