Performance regression between 0.6.0 and 0.7.1
🐛 Bug
When I train exactly the same model with pl 0.7.1, I get worse performance compared to pl 0.6.0. I did a fresh install of Asteroid with both versions and ran exactly the same script on the same hardware. I get significantly worse performance with pl 0.7.1. Are there some known issues I should be aware of? In the meantime, I'll have to downgrade to 0.6.0.
Environment
PL 0.6.0
Collecting environment information…
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Debian GNU/Linux 10 (buster)
GCC version: (Debian 8.3.0-6) 8.3.0
CMake version: version 3.14.0
Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip3] numpy==1.18.1
[pip3] pytorch-lightning==0.6.0
[pip3] torch==1.4.0
[pip3] torchvision==0.4.2
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-include 2020.0 166
[conda] mkl-service 2.3.0 py36he904b0f_0
[conda] mkl_fft 1.0.14 py36ha843d7b_0
[conda] mkl_random 1.1.0 py36hd6b4f25_0
[conda] torch 1.3.1 pypi_0 pypi
[conda] torchvision 0.4.2 pypi_0 pypi
Diff between 0.6.0 and 0.7.1 envs
diff env_0.7 env_0.6
19c19
< [pip3] pytorch-lightning==0.7.1
---
> [pip3] pytorch-lightning==0.6.0
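The diff above can also be produced programmatically, which makes it easy to verify that the two environments differ only in the pytorch-lightning pin. A minimal sketch using the standard library's difflib, with the package lists abridged and hypothetical:

```python
import difflib

# Abridged frozen requirements for the two hypothetical environments
env_06 = ["numpy==1.18.1", "pytorch-lightning==0.6.0", "torch==1.4.0"]
env_07 = ["numpy==1.18.1", "pytorch-lightning==0.7.1", "torch==1.4.0"]

# Keep only the changed lines, dropping the ---/+++ file headers
changes = [
    line
    for line in difflib.unified_diff(env_06, env_07, lineterm="")
    if line.startswith(("-", "+")) and not line.startswith(("---", "+++"))
]
print(changes)
# → ['-pytorch-lightning==0.6.0', '+pytorch-lightning==0.7.1']
```

If anything besides the lightning pin shows up here, the comparison between the two runs is confounded by other dependency changes.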
Issue Analytics
- State:
- Created 4 years ago
- Comments:53 (50 by maintainers)
I've tried 0.7.5 against 0.6.0 and got the same results on several of our architectures. We'll finally upgrade and get all the new features you integrated. Thanks again for looking into it, I'm closing this.
Try again, I had sharing turned off. No, Colab doesn't want to give me a GPU for some reason; that's why I tried CPU.