Increase efficiency by better encoding of observation
Hi,
As mentioned in #11, I tried "LAYER"-based observations. These are the per-agent observations for the prisoners_dilemma_in_the_matrix substrate:
- RGB: shape (88, 88, 3), dtype uint8
- LAYER: shape (11, 11, 17), dtype int32
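For reference, here is a minimal sketch of how one might print these specs. It assumes Melting Pot's dm_env-style convention, where observation_spec() returns one dict of array specs per player; exact module paths and layouts may differ across versions.

```python
def print_observation_specs(env) -> None:
    """Print shape and dtype of every observation component, per player.

    Assumes env.observation_spec() returns a list with one dict of
    dm_env.specs.Array-like objects per player (a Melting Pot convention).
    """
    for player, spec in enumerate(env.observation_spec()):
        for name, array_spec in spec.items():
            print(f"player {player}: {name} -> "
                  f"shape={array_spec.shape}, dtype={array_spec.dtype}")

# Example output for the substrate above:
# player 0: RGB -> shape=(88, 88, 3), dtype=uint8
# player 0: LAYER -> shape=(11, 11, 17), dtype=int32
```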
However, there is no documentation on how to interpret this LAYER observation. Could you point me to where I can find help interpreting it?
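In the meantime, one way to start reverse-engineering the channels is to look at which values each one actually takes. This is a hypothetical exploratory helper that assumes only the (11, 11, 17) int32 shape quoted above; the channel semantics are exactly what remains undocumented.

```python
import numpy as np

def summarize_layers(layer_obs: np.ndarray) -> None:
    """Print the distinct values found in each channel of a LAYER observation."""
    assert layer_obs.ndim == 3  # e.g. (11, 11, 17): height, width, channels
    for channel in range(layer_obs.shape[-1]):
        values = np.unique(layer_obs[..., channel])
        print(f"channel {channel:2d}: values {values}")
```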
I also tried changing the spriteSize parameter from 8 to 1. This reduces the observation space from (88, 88, 3) to (11, 11, 3), which I believe could speed up training and ease feature learning (a quick size calculation is sketched below). Are there any side effects I should be careful about when changing spriteSize?
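For scale, the raw input reduction works out as follows; the 11x11 sprite grid is implied by 88 / 8 = 11:

```python
SPRITE_GRID = (11, 11)  # sprites per observation, implied by the shapes above

for sprite_size in (8, 1):
    h = SPRITE_GRID[0] * sprite_size
    w = SPRITE_GRID[1] * sprite_size
    print(f"spriteSize={sprite_size}: RGB obs ({h}, {w}, 3) = {h * w * 3} values")

# spriteSize=8: RGB obs (88, 88, 3) = 23232 values
# spriteSize=1: RGB obs (11, 11, 3) = 363 values, a 64x reduction
```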
Do the authors have any other suggestions for speeding up environment interaction and the training loop, to enable faster experimentation?
Thank you,
Kinal Mehta
Replies from the maintainers:
Nice! It's possible that the *_in_the_matrix substrates work better than the others at a sprite size of 1x1, so to be really sure this works in general we'd want to look at some of the more visually complicated substrates. But I agree that this result is very promising, and there's a good chance it will continue to work well in the more complicated substrates too.
It will certainly affect learning dynamics in hard-to-predict ways. But I think that’s ok.
We've mostly regarded the layers as an implementation detail, not something people would really use as an observation. We might add or subtract layers in the future, or their order might be permuted. They also sometimes contain privileged information that focal agents are not supposed to be able to access.
Some of the substrates have invisible objects that could be seen via the LAYERS observation. For example, observing the LAYERS would let you see the location where an apple will later spawn before it actually spawns. So an agent trained with the LAYERS observation would have access to privileged information that RGB agents would not have.
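A practical corollary, as a minimal sketch rather than Melting Pot's actual mechanism: if observations arrive as per-agent dicts keyed by name (as the RGB/LAYER shapes above suggest), privileged keys can simply be dropped before the agent ever sees them.

```python
ALLOWED_KEYS = frozenset({"RGB"})  # hypothetical allow-list; excludes "LAYER"

def filter_observation(observation: dict) -> dict:
    """Return only the non-privileged components of one agent's observation."""
    return {k: v for k, v in observation.items() if k in ALLOWED_KEYS}
```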