Settling Decentralized Multi-Agent Coordinated Exploration by Novelty Sharing

Code for "Settling Decentralized Multi-Agent Coordinated Exploration by Novelty Sharing" accepted by AAAI 2024. [PDF]

MACE

Exploration in decentralized cooperative multi-agent reinforcement learning faces two challenges: the novelty of global states is unavailable to individual agents, while the novelty of local observations is biased; and agents need to explore in a coordinated way rather than independently. To address these challenges, we propose MACE, a simple yet effective multi-agent coordinated exploration method.

Novelty-based intrinsic reward

By communicating only their local novelty, agents can take other agents' local novelty $u_t^j$ into account to approximate the global novelty.

$$ r_{\mathrm{nov}}^i\left(o_t^i, a_t^i\right)=\sum_j u_t^j $$
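As a concrete illustration, below is a minimal Python sketch of this reward, assuming a count-based local novelty $u_t^i = 1/\sqrt{N(o_t^i)}$; the novelty estimator, hashing scheme, and all names (`LocalNovelty`, `novelty_reward`) are illustrative assumptions, not this repository's implementation.

```python
# Minimal sketch: novelty sharing with an assumed count-based local novelty
# u = 1 / sqrt(N(o)). Estimator, hashing scheme, and names are illustrative.
from collections import defaultdict
import math

class LocalNovelty:
    """Per-agent count-based novelty over hashed local observations."""
    def __init__(self):
        self.counts = defaultdict(int)

    def update(self, obs) -> float:
        key = hash(obs)  # assumes obs is hashable, e.g. a tuple
        self.counts[key] += 1
        return 1.0 / math.sqrt(self.counts[key])

def novelty_reward(shared_novelties) -> float:
    """r_nov^i = sum_j u_t^j, identical for every agent at a given step."""
    return sum(shared_novelties)

# Example: three agents broadcast their local novelty each step.
estimators = [LocalNovelty() for _ in range(3)]
obs_t = [(0, 1), (2, 3), (0, 1)]  # toy local observations
u_t = [est.update(o) for est, o in zip(estimators, obs_t)]
r_nov = novelty_reward(u_t)
```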

Hindsight-based intrinsic reward

Further, we introduce weighted mutual information to measure the influence of one agent's action $a_t^i$ on other agents' accumulated novelty $z_t^j$.

$$ \omega I\left(A_t^i ; Z_t^j \mid o_t^i\right)=\mathbb{E}_{a_t^i, z_t^j \mid o_t^i}\left[\omega(a_t^i, z_t^j) \log \frac{p(a_t^i, z_t^j \mid o_t^i)}{p(a_t^i \mid o_t^i) p(z_t^j \mid o_t^i)}\right] $$

We convert it into an intrinsic reward in hindsight to encourage agents to exert more influence on other agents' exploration and boost coordinated exploration. Setting the weight $\omega(a_t^i, z_t^j) = z_t^j$ and factoring $p(a_t^i, z_t^j \mid o_t^i) = p(a_t^i \mid o_t^i, z_t^j)\,p(z_t^j \mid o_t^i)$, the term inside the expectation becomes the reward below, where $\pi^i(a_t^i \mid o_t^i) = p(a_t^i \mid o_t^i)$ since actions are drawn from the policy.

$$ r_{\mathrm{hin}}^{i \rightarrow j}\left(o_t^i, a_t^i, z_t^j\right)=z_t^j \log \frac{p(a_t^i \mid o_t^i, z_t^j)}{\pi^i(a_t^i \mid o_t^i)} $$
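The sketch below illustrates this reward for discrete actions, assuming access to the log-probabilities of a learned hindsight posterior $p(a_t^i \mid o_t^i, z_t^j)$ and of the policy $\pi^i$; the hard-coded values and names are illustrative only.

```python
# Minimal sketch: hindsight reward for a discrete action. In practice
# p(a | o, z) would come from a learned hindsight network; the
# log-probabilities here are illustrative values.
import math

def hindsight_reward(z_j, log_p_post, log_pi):
    """r_hin^{i->j} = z_t^j * log( p(a_t^i | o_t^i, z_t^j) / pi^i(a_t^i | o_t^i) )."""
    return z_j * (log_p_post - log_pi)

# Agent i took action a under observation o; agent j then accumulated
# novelty z_j. If knowing z_j makes a more likely in hindsight
# (posterior > policy), the reward is positive: the action is credited
# with influencing agent j's exploration.
log_pi = math.log(0.25)    # pi^i(a | o), e.g. uniform over 4 actions
log_post = math.log(0.40)  # p(a | o, z_j) from the hindsight model
r_hin = hindsight_reward(z_j=0.8, log_p_post=log_post, log_pi=log_pi)
```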

We combine the two intrinsic rewards to get the final shaped reward.

$$ \begin{align} r_{\mathrm{s}}^i\left(o_t^i, a_t^i, \{z_t^j\}_{j \neq i}\right) &= r_{\mathrm{ext}}+r_{\mathrm{nov}}^i\left(o_t^i, a_t^i\right)+\lambda \sum_{j \neq i} r_{\mathrm{hin}}^{i \rightarrow j}\left(o_t^i, a_t^i, z_t^j\right)\\ &= r_{\mathrm{ext}}+\sum_j u_t^j+\lambda \sum_{j \neq i} z_t^j \log \frac{p(a_t^i \mid o_t^i, z_t^j)}{\pi^i(a_t^i \mid o_t^i)} \end{align} $$
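Putting the pieces together, here is a minimal sketch of the shaped reward for agent $i$, with `lambda_hin` standing in for the coefficient $\lambda$; all argument names are illustrative.

```python
# Minimal sketch: combining the extrinsic reward with both intrinsic
# terms for agent i. lambda_hin stands for lambda; names illustrative.
import math

def shaped_reward(i, r_ext, u, z, log_post, log_pi, lambda_hin=0.1):
    """r_s^i = r_ext + sum_j u_j + lambda * sum_{j != i} z_j * log(p/pi)."""
    r_nov = sum(u)
    r_hin = sum(z[j] * (log_post[j] - log_pi)
                for j in range(len(z)) if j != i)
    return r_ext + r_nov + lambda_hin * r_hin

r = shaped_reward(i=0, r_ext=1.0,
                  u=[0.5, 0.3, 0.2],    # shared local novelties u_t^j
                  z=[0.0, 0.8, 0.4],    # accumulated novelties z_t^j
                  log_post=[0.0, math.log(0.4), math.log(0.3)],
                  log_pi=math.log(0.25))
```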

Training

For the GridWorld environment:

./scripts/train_gridworld.sh

For the Overcooked environment:

./scripts/train_overcooked.sh
