How is multi-agent handled?
Hello there, I was wondering how you handle multi-agent learning. Let's take the supported kaggle hungry-geese environment as an example. There is a function in the environment class: `rule_based_action()`. If it exists, does it mean that the trained policy represents one player and the other three are rule-based, using this function?
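For context, a per-player `rule_based_action()` helper on a multi-player environment might look roughly like the sketch below. Apart from the method name, everything here (class name, action set, the random stand-in heuristic) is an assumption for illustration, not HandyRL's actual hungry-geese code.

```python
import random


class GeeseEnvSketch:
    """Illustrative multi-player environment exposing a rule-based helper."""

    NUM_PLAYERS = 4
    ACTIONS = ['NORTH', 'SOUTH', 'WEST', 'EAST']

    def players(self):
        return list(range(self.NUM_PLAYERS))

    def legal_actions(self, player):
        # A real environment would exclude illegal moves (e.g. reversing).
        return list(range(len(self.ACTIONS)))

    def rule_based_action(self, player):
        # Stand-in heuristic: pick a random legal action for this player.
        # A real helper would encode a handcrafted policy instead.
        return random.choice(self.legal_actions(player))
```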
Top GitHub Comments
And if you want to train against different agents in self-play, you can set specific opponents for self-play by rewriting `generation.py` (`worker.py`). (ref. gfootball example) Multi-agent learning and self-play against specific opponents are not yet supported as default functions in HandyRL. Multi-agent learning also has to consider how the GPU is handled… Further consideration is required.
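To make that idea concrete, here is a minimal sketch of episode generation with specific opponents. None of the names below (`generate_episode`, `env.players()`, `env.turns()`, `env.observation()`, `trained_model.act()`, `opponent_pool`) are taken from HandyRL's `generation.py`/`worker.py`; they are assumptions used only to show how one learning seat can be mixed with rule-based or frozen opponents.

```python
import random


def generate_episode(env, trained_model, opponent_pool, learner=0):
    """Roll out one episode: `learner` uses the trained model, every other
    seat is driven by a fixed opponent (rule-based or a frozen model)."""
    env.reset()
    opponents = {p: random.choice(opponent_pool)
                 for p in env.players() if p != learner}
    trajectory = []
    while not env.terminal():
        actions = {}
        for p in env.turns():  # players that act at this step
            if p == learner:
                actions[p] = trained_model.act(env.observation(p))
            elif opponents[p] == 'rule_based':
                actions[p] = env.rule_based_action(p)
            else:
                actions[p] = opponents[p].act(env.observation(p))
        trajectory.append((env.observation(learner), actions.get(learner)))
        env.step(actions)
    return trajectory
```

In this shape, `opponent_pool` could hold the string `'rule_based'` alongside frozen model snapshots, which is one way to train against a mixture of specific opponents rather than pure self-play.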
Hi, thank you for your consideration. You are right about self-play. `rule_based_action()` is used in the `RuleBasedAgent` class in `evaluation.py`. This agent is only used for evaluating trained models (e.g. in the evaluation during `python main.py --train` and in `python main.py --eval`). However, `RuleBasedAgent` is not set as the default agent in the current settings. If you want to evaluate the win rate against the rule-based agent during training, you need to change this line. And change this line if you want to change the opponent in model evaluation (`python main.py --eval`).
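As a rough illustration of the role described above, an evaluation-only agent that delegates to the environment's heuristic could look like the sketch below. The method names and signatures are assumptions mirroring that description, not a verbatim copy of `evaluation.py`.

```python
class RuleBasedAgentSketch:
    """Evaluation-only agent that delegates to the environment's heuristic."""

    def reset(self, env, show=False):
        pass  # a stateless heuristic has nothing to reset

    def action(self, env, player, show=False):
        # Delegate the decision to the environment's handcrafted policy.
        return env.rule_based_action(player)


# Changing the evaluation opponent (the "this line" mentioned above) would
# then amount to constructing this agent instead of the default one, e.g.:
# opponent_agent = RuleBasedAgentSketch()
```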