UnboundLocalError: local variable 'epoch_logs' referenced before assignment
See original GitHub issue

Traceback (most recent call last):
File "pretrained_mlp.py", line 102, in <module>
batch_size=1);
File ".../keras/keras/models.py", line 465, in fit
shuffle=shuffle, metrics=metrics)
File ".../keras/keras/models.py", line 228, in _fit
callbacks.on_epoch_end(epoch, epoch_logs)
UnboundLocalError: local variable 'epoch_logs' referenced before assignment
Has anybody ever had this issue? It comes up infrequently for me when training autoencoders. I'm using Python 2.7.8 on Ubuntu, if that means anything.
Issue Analytics
- Created 8 years ago
- Reactions:6
- Comments:7 (1 by maintainers)
Top Results From Across the Web
Python 3: UnboundLocalError: local variable referenced ...
This local variable masks the global variable. In your case, Var1 is considered a local variable, and it's used before being set, ...
Read more >

Local variable referenced before assignment in Python
The Python UnboundLocalError: local variable referenced before assignment occurs when we reference a local variable before assigning a value ...
Read more >

Python local variable referenced before assignment Solution
The UnboundLocalError: local variable referenced before assignment error is raised when you try to use a local variable before a value has been assigned to it ...
Read more >

UnboundLocalError: local variable ... - Net-Informations.Com
The UnboundLocalError: local variable referenced before assignment is raised when you try to use a variable before it has been assigned in the ...
Read more >

UnboundLocalError: local variable referenced before ...
In Python, variables that are only referenced inside a function are implicitly global. If a variable is assigned a value anywhere within the ...
Read more >
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
For me, the solution was a smaller batch size. Here's why: since I mistakenly used a batch_size larger than the number of training samples,
nb_train_samples // batch_size
was rounded down to 0. And as @luis-i-reyes-castro explains, steps_per_epoch must be above 0.

It can happen when the model is trained on empty data (which won't work anyway). I can turn that into a clearer error message.
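The arithmetic behind that comment is easy to check directly. This is a sketch with made-up numbers (the variable names mirror the comment, not actual Keras internals): when floor division yields 0 steps, the per-epoch loop body never executes, so a variable assigned inside it (like epoch_logs in the traceback) is never bound before it is read.

```python
nb_train_samples = 8
batch_size = 32  # mistakenly larger than the number of training samples

steps_per_epoch = nb_train_samples // batch_size  # floor division
print(steps_per_epoch)  # 0 -> zero batches per epoch, loop body never runs

# A simple guard avoids the silent zero:
steps_per_epoch = max(1, nb_train_samples // batch_size)
print(steps_per_epoch)  # 1
```

Either shrinking batch_size below the sample count or clamping the step count to at least 1 avoids the condition that triggers the UnboundLocalError.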