Potential EMA Model Issues
So, I’ve noticed a potential bug related to the exponential-moving-average (EMA) prior (I haven’t tested the other models). Essentially, the EMA model seemingly refuses to “learn”. I have to believe that the weights aren’t being properly updated from the online_model, for whatever reason.
As you can see, as soon as the EMA kicks in at step 1000, the loss plateaus. I’ve never worked with an EMA model before, so I guess this could be expected behavior?
This issue also presents itself when running evaluation on the EMA model (the (text <-> predicted_image) similarity should be at least 0.1 even after a few hundred steps).
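For context on the symptom described above: a healthy EMA should lag the online weights but keep moving with them. Below is a minimal numeric sketch (plain Python, not the repository's code; `ema_update`, `beta`, and the linear "online" stand-in are illustrative) of the expected behavior. If the shadow value instead freezes while the online value keeps improving, the update path is never being reached, which would produce exactly the loss plateau reported here.

```python
# Minimal numeric sketch (NOT the DALLE2-pytorch code) of how an EMA
# shadow value should trail its online counterpart. Names are illustrative.

def ema_update(ema_value, online_value, beta=0.99):
    """One standard exponential-moving-average step."""
    return beta * ema_value + (1.0 - beta) * online_value

ema = 0.0
online = 0.0
for step in range(1, 2001):
    online = float(step)          # stand-in for an improving online weight
    ema = ema_update(ema, online)

# If updates are applied, the EMA lags the online value but keeps moving;
# a frozen EMA (loss plateau) suggests the update branch is never reached.
assert 0.0 < ema < online
```

For a linearly improving online value and `beta=0.99`, the EMA settles about `beta / (1 - beta) = 99` steps behind, which is the "lag but still learn" behavior one would expect after the EMA kicks in.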
Issue Analytics
- State:
- Created: a year ago
- Comments: 5 (2 by maintainers)
Top GitHub Comments
@nousr oh crap, yes, you are right! i thought that i was incrementing the EMA steps in accordance with the global training step, but i wasn’t
https://github.com/lucidrains/DALLE2-pytorch/commit/9cc475f6e7990b3d978128902ee0ea90614451f6 should fix the `update_every` issue
@lucidrains hey! when you get a chance, do you think you could explain the intuition for this line?
https://github.com/lucidrains/DALLE2-pytorch/blob/1cc288af39f171b3e7f77fdb4252682af05e17e9/dalle2_pytorch/trainer.py#L191
specifically, why the `// self.update_every`? I guess I don’t see the purpose of moving `self.update_after` “up” by a factor of `update_every`?
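The question above is about how the warmup threshold interacts with the update stride. Here is a hedged sketch of the common "warmup, then stride" gating pattern (the `EMAGate` class, its method, and the defaults are made up for illustration and are not the actual trainer.py logic). The key point is units: a division like `update_after // update_every` only makes sense when the counter being compared advances once per EMA update rather than once per global training step, so the threshold has to be rescaled to compare like with like. That is one plausible reading of the line in question.

```python
# Hypothetical sketch of a "warmup, then stride" EMA update gate.
# EMAGate, should_update, and the parameter defaults are illustrative,
# NOT the actual DALLE2-pytorch trainer code.

class EMAGate:
    def __init__(self, update_after=1000, update_every=10):
        self.update_after = update_after  # warmup, in global training steps
        self.update_every = update_every  # then update once per this many steps
        self.step = 0                     # advances once per training step

    def should_update(self):
        self.step += 1
        if self.step % self.update_every != 0:
            return False  # off-stride step: no EMA update
        # Here self.step counts global training steps, so it is compared
        # directly against update_after. If instead the counter advanced
        # only once per EMA update, a threshold given in global steps would
        # need rescaling (e.g. update_after // update_every) so that both
        # sides of the comparison are in the same units.
        return self.step > self.update_after

gate = EMAGate()
update_steps = [s for s in range(1, 2001) if gate.should_update()]
# update_steps[:3] == [1010, 1020, 1030]: EMA updates begin just past the
# warmup boundary and then recur every update_every steps.
```

With either convention the observable behavior should be the same; what matters for the bug in this thread is that the counter actually advances in lockstep with the global training step, otherwise the warmup gate never opens and the EMA weights stay frozen.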