Partial Consistency for Stabilizing Undiscounted Reinforcement Learning.
IEEE Trans Neural Netw Learn Syst; 34(12): 10359-10373, 2023 Dec.
Article in En | MEDLINE | ID: mdl-35468065
Undiscounted return is an important setting in reinforcement learning (RL) and characterizes many real-world problems. However, optimizing an undiscounted return often causes training instability. The causes of this instability have not been analyzed in depth by existing studies. In this article, the problem is analyzed from the perspective of value estimation. The analysis indicates that the instability originates from transient traps caused by inconsistently selected actions. However, always selecting the same action in a given state limits exploration. To balance exploration effectiveness and training stability, a novel sampling method called last-visit sampling (LVS) is proposed to ensure that a subset of actions is selected consistently in the same state. The LVS method decomposes the state-action value into two parts, i.e., the last-visit (LV) value and the revisit value. The decomposition ensures that the LV value is determined only by consistently selected actions. We prove that the LVS method eliminates transient traps while preserving optimality. We also show empirically that the method stabilizes training on five typical tasks, including vision-based navigation and manipulation tasks.
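The additive decomposition described in the abstract can be pictured with a minimal tabular sketch. This is an illustrative assumption of how a state-action value might be split into an LV component, updated only when the action matches the one chosen on the last visit to that state, and a revisit component that absorbs exploratory updates; it is not the paper's actual LVS algorithm, and the names (SplitQAgent, q_lv, q_revisit, last_action) and update rules are hypothetical.

```python
import numpy as np
from collections import defaultdict

class SplitQAgent:
    """Illustrative sketch only: splits Q(s, a) into an LV part and a revisit part."""

    def __init__(self, n_actions, alpha=0.1, epsilon=0.1):
        self.n_actions = n_actions
        self.alpha = alpha            # learning rate
        self.epsilon = epsilon        # exploration rate
        self.q_lv = defaultdict(lambda: np.zeros(n_actions))       # last-visit component
        self.q_revisit = defaultdict(lambda: np.zeros(n_actions))  # revisit component
        self.last_action = {}         # action taken on the last visit to each state

    def q(self, s):
        # Total value is the sum of the two components.
        return self.q_lv[s] + self.q_revisit[s]

    def act(self, s):
        # Epsilon-greedy action selection on the combined value.
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.n_actions)
        return int(np.argmax(self.q(s)))

    def update(self, s, a, r, s_next):
        # Undiscounted one-step target (gamma = 1), as in the undiscounted-return setting.
        target = r + np.max(self.q(s_next))
        td_error = target - self.q(s)[a]
        if self.last_action.get(s) == a:
            # Action is consistent with the last visit: credit the LV component.
            self.q_lv[s][a] += self.alpha * td_error
        else:
            # Exploratory (inconsistent) action: credit the revisit component.
            self.q_revisit[s][a] += self.alpha * td_error
        self.last_action[s] = a
```

The point of the split is that the LV table is only ever updated from actions repeated across visits, so its estimates are not perturbed by one-off exploratory choices, while exploration still happens through the revisit table and the epsilon-greedy policy.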
Full text: 1
Collection: 01-internacional
Database: MEDLINE
Language: En
Journal: IEEE Trans Neural Netw Learn Syst
Year: 2023
Document type: Article
Country of publication: