Add testing with PyTorch 1.11 on GPUs in CI
🚀 Feature
We’ve decided to test with both the PyTorch LTS and stable releases (1.8 and 1.11 as of now) in CI, and we’ve already seen some issues arise while trying to enable it in #12373.
TODO
Known issues with PL on PyTorch 1.11
- #12846
- #12860
- Fix an issue with fitting a model initialised in init_meta_context #12870
- Fix an issue with DDP comm tests with some newer PyTorch versions #12878
- Fix an issue with inference mode with FSDP
Motivation
To run CI tests for new features that are only available in newer PyTorch versions, e.g. meta init and native FSDP.
Pitch
Use the following image:
`pytorchlightning/pytorch_lightning:base-cuda-py3.7-torch1.11`
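Running CI against both 1.8 LTS and 1.11 means tests for 1.11-only features have to be skipped on the older runtime. A minimal sketch of such a version gate (the helper name and logic here are illustrative, not Lightning's actual utilities):

```python
# Illustrative version gate for CI tests that need PyTorch 1.11+.
# `meets_min_version` is a hypothetical helper, not part of the Lightning codebase.

def meets_min_version(installed: str, required: str) -> bool:
    """Return True if `installed` (e.g. "1.11.0+cu113") is >= `required`."""
    def parse(version: str) -> tuple:
        # Drop any local build tag ("+cu113") and compare the numeric parts.
        return tuple(int(part) for part in version.split("+")[0].split(".")[:3])
    return parse(installed) >= parse(required)

# Example: skip a native-FSDP test when running the 1.8 LTS job.
if not meets_min_version("1.8.1+cu111", "1.11.0"):
    print("skipping FSDP test on this PyTorch version")
```

In a real test suite this check would typically wrap `torch.__version__` in a `pytest.mark.skipif` decorator so the same suite runs cleanly under both CI jobs.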
Alternatives
n/a
Additional context
n/a
If you enjoy Lightning, check out our other projects! ⚡
- Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
- Lite: enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
- Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
- Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
- Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
Issue Analytics
- State:
- Created a year ago
- Comments: 5 (5 by maintainers)
1.11 is fine (already released)
We removed nightly testing because it was too flaky, making everybody ignore the job. We only enable it when there’s a release candidate upstream.
do you mean on CPU, or also on GPU? tbh I’m not sure / don’t remember why we dropped it, so I’m very fine with adding it for CPU… cc: @carmocca