
Why normalize advantage only for pg_loss but not for vf_loss? #441

Open
zzhixin opened this issue Dec 14, 2023 · 2 comments
Comments

@zzhixin

zzhixin commented Dec 14, 2023

Problem Description

This is more of a question than a problem.
In the PPO implementation, why is advantage normalization applied to pg_loss but not to vf_loss? Say we have an RL environment with a dense reward ranging from 0 to 1000 per step. With advantage normalization for pg_loss alone, we get something like a 100x scale difference between pg_loss and vf_loss, which directly affects learning speed (performance): if a loss term is multiplied by a large constant, you usually want to lower the learning rate accordingly. But as far as I know, CleanRL's PPO implementation uses the same learning rate for both the value function and the policy.

My question is: wouldn't it be more reasonable to apply advantage normalization to both pg_loss and vf_loss so that the two losses are on the same scale?
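For reference, here is a minimal sketch of the kind of minibatch update I mean (CleanRL-style; the tensors below are dummy stand-ins for rollout quantities, just to illustrate the scale mismatch):

```python
import torch

# Dummy stand-ins for quantities computed from a rollout with rewards ~0..1000 per step.
ratio = torch.ones(64) + 0.05 * torch.randn(64)  # pi_new(a|s) / pi_old(a|s)
mb_advantages = torch.randn(64) * 500.0          # raw advantages at the reward scale
mb_returns = torch.rand(64) * 10000.0            # discounted returns at the same scale
newvalue = torch.rand(64) * 10000.0              # current critic predictions
entropy_loss = torch.tensor(1.0)
clip_coef, ent_coef, vf_coef = 0.2, 0.01, 0.5

# Advantages are normalized only here, for the policy term.
mb_advantages = (mb_advantages - mb_advantages.mean()) / (mb_advantages.std() + 1e-8)

# Clipped surrogate policy loss -- small (order 1 or less) after normalization.
pg_loss1 = -mb_advantages * ratio
pg_loss2 = -mb_advantages * torch.clamp(ratio, 1 - clip_coef, 1 + clip_coef)
pg_loss = torch.max(pg_loss1, pg_loss2).mean()

# Value loss regresses the raw returns, so its scale follows the reward scale.
v_loss = 0.5 * ((newvalue - mb_returns) ** 2).mean()

loss = pg_loss - ent_coef * entropy_loss + vf_coef * v_loss
print(pg_loss.item(), v_loss.item())  # v_loss comes out orders of magnitude larger
```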

@yu45020

yu45020 commented Feb 27, 2024

I have a similar issue. When the rewards are large, the value-function loss is huge compared to the policy loss and training is unstable. One way to solve it is to rescale the rewards, but that breaks when rewards at later time steps are several orders of magnitude different from those at early time steps.
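For example, a static rescale can be as simple as a reward wrapper (just a sketch, assuming a gymnasium-style API; the environment name and the divisor are placeholders you would pick for your own reward range):

```python
import gymnasium as gym

env = gym.make("Pendulum-v1")  # placeholder; substitute an env with large per-step rewards
# Statically shrink rewards so the value targets stay near unit scale. The constant
# has to be picked by hand, which is why this breaks when the reward scale drifts.
env = gym.wrappers.TransformReward(env, lambda r: r / 1000.0)
```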

Another solution is to normalize the value function; you may check out this repo. It may work or break, as documented in "What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study".
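Roughly, value normalization means keeping a running mean/std of the value targets, training the critic in normalized space, and denormalizing its outputs when bootstrapping or computing advantages. A minimal sketch of that idea (illustrative only, not the linked repo's exact code):

```python
import numpy as np

class ReturnNormalizer:
    """Running mean/std of value targets (illustrative sketch)."""

    def __init__(self, eps: float = 1e-8):
        self.mean, self.var, self.count, self.eps = 0.0, 1.0, 1e-4, eps

    def update(self, returns: np.ndarray) -> None:
        # Merge batch statistics into the running ones (parallel/Chan formula).
        batch_mean, batch_var, n = returns.mean(), returns.var(), returns.size
        delta = batch_mean - self.mean
        total = self.count + n
        self.mean += delta * n / total
        m2 = self.var * self.count + batch_var * n + delta**2 * self.count * n / total
        self.var, self.count = m2 / total, total

    def normalize(self, x):
        # Train the critic against these normalized targets.
        return (x - self.mean) / (np.sqrt(self.var) + self.eps)

    def denormalize(self, x):
        # Undo before bootstrapping / computing advantages.
        return x * np.sqrt(self.var) + self.mean
```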

@andreicozma1

The rewards appear to be already normalized by the environment wrapper: env = gym.wrappers.NormalizeReward(env, gamma=gamma).

The advantages are already on a smaller scale than the returns that the critic learns to estimate, since by definition the advantage function is the difference between the action-value function and the state-value function. Therefore, it makes sense to normalize the advantages: they're essentially just deltas on top of the value function, and the rewards that make up the returns have already been normalized.

Further, the value-function loss is scaled by the vf_coef hyperparameter, which provides a knob for balancing the value-function gradients against the policy gradients.
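Putting those two points together, a rough sketch (gymnasium-style API; the environment name and coefficient values here are placeholders, not prescriptions):

```python
import gymnasium as gym

gamma = 0.99

# Rewards are divided by a running std of the discounted return, which keeps
# the critic's regression targets at a moderate scale.
env = gym.make("Pendulum-v1")  # placeholder environment
env = gym.wrappers.NormalizeReward(env, gamma=gamma)

# In the joint objective, vf_coef then weights the value loss relative to the
# policy loss, e.g. loss = pg_loss - ent_coef * entropy_loss + vf_coef * v_loss,
# so lowering vf_coef is another way to shrink the value-gradient contribution
# without touching the shared learning rate.
vf_coef, ent_coef = 0.5, 0.01
```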
