Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
Stefan Elfwing, Eiji Uchibe, Kenji Doya
2017 · DOI: 10.1016/j.neunet.2017.12.012
Neural Networks · Cited 2,121 times
TLDR
This study proposes two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). It also suggests that the more traditional approach of on-policy learning with eligibility traces and softmax action selection, rather than experience replay, can be competitive with DQN without the need for a separate target network.
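For reference, here is a minimal NumPy sketch of the two activations as defined in the paper: SiLU is x·σ(x), and dSiLU is its derivative, σ(x)(1 + x(1 − σ(x))). The function names silu and dsilu are illustrative, not from the paper's code.

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid sigma(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def silu(x):
    """Sigmoid-weighted linear unit: x * sigmoid(x)."""
    return x * sigmoid(x)

def dsilu(x):
    """Derivative of the SiLU, used as an activation in its own right:
    sigmoid(x) * (1 + x * (1 - sigmoid(x)))."""
    s = sigmoid(x)
    return s * (1.0 + x * (1.0 - s))
```

Unlike the ReLU, the SiLU is non-monotonic and smooth, while the dSiLU has a sigmoid-like, bounded shape.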
