Remove deprecated code after the 1.6 release
Proposed refactor
Remove deprecated code after the 1.6 release.
NOTE: Please pick a single item from the list (by commenting here on the issue); if there are no conflicts, we will happily assign you and put your name next to the item in the list.
Please note that unless mentioned otherwise, the classes are importable from `pytorch_lightning`, for example: `from pytorch_lightning import Trainer`.
- `LightningModule.summarize` -> #12559
- `pytorch_lightning.core.memory.LayerSummary` -> #12593
- `pytorch_lightning.core.memory.ModelSummary` -> #12593
- `pytorch_lightning.core.memory.get_gpu_memory_map` -> #12644
- `pytorch_lightning.core.memory.get_memory_profile` -> #12659
- `LightningModule.model_size` -> #12641
- `LightningDataModule.train_transforms` -> #12662
- `LightningDataModule.val_transforms` -> #12763
- `LightningDataModule.test_transforms` -> #12773
- `LightningDataModule.size` -> #12780
- `LightningDataModule.dims` and `LightningDataModule(dims=...)` -> #12780
- `LightningModule.get_progress_bar_dict` -> #12839
- `Trainer.progress_bar_dict` -> #12839
- `Trainer(prepare_data_per_node=...)` -> #12536
- `Trainer(stochastic_weight_avg=...)` -> #12535
- `Trainer(terminate_on_nan=...)` and `Trainer.terminate_on_nan` -> #12553
- `LightningModule.on_{train,val,test,predict}_dataloader` -> #13033
- `pytorch_lightning.loggers.TestTubeLogger` -> #12859
- `pytorch_lightning.Callback.on_keyboard_interrupt` -> #13438
- `Trainer(process_position=...)` -> #13071
- `Trainer(flush_logs_every_n_steps=...)` -> #13074
- `LightningModule.add_to_queue` -> @shenoynikhil
- `LightningModule.get_from_queue` -> @shenoynikhil
- `Trainer(progress_bar_refresh_rate=...)` -> #12514
- `LightningLoggerBase.close` and `pytorch_lightning.loggers.LoggerCollection.close` -> #13149
- `pytorch_lightning.distributed.dist.LightningDistributed` -> #13549
- `Trainer(checkpoint_callback=...)` -> #13027
- Passing `dataloader_idx` to `on_train_batch_start` of `pytorch_lightning.Callback` and `LightningModule` -> #12769
- `LightningModule.on_post_move_to_device` -> #13548
- `pytorch_lightning.core.decorators.parameter_validation` -> #13514
- `Trainer(accelerator="ddp_spawn")` -> #12696
- `Trainer(plugins="ddp_spawn")` -> #12700
- `Trainer(weights_summary="full")`, `Trainer(weights_summary=None)`, `Trainer.weights_summary` -> #13070
- `Trainer(log_gpu_memory=...)` -> #12657
- `Trainer.slurm_job_id` -> #13459
- `pytorch_lightning.callbacks.gpu_stats.GPUStatsMonitor` -> #12554
- `pytorch_lightning.callbacks.gpu_stats.XLAStatsMonitor` -> #12688
- `pytorch_lightning.callbacks.progress.ProgressBar` -> #12658
- `Trainer(max_steps=None)` and `Trainer.fit_loop.max_steps = None` -> #13591
- `pytorch_lightning.callbacks.lr_monitor.LearningRateMonitor.lr_sch_names` -> #13353
- `KubeflowEnvironment.is_using_kubeflow`, `LSFEnvironment.is_using_lsf`, `TorchElasticEnvironment.is_using_torchelastic` -> #13458
- `pytorch_lightning.overrides.distributed.IndexBatchSamplerWrapper.batch_indices` -> #13565
- `pytorch_lightning.strategies.SingleDeviceStrategy.post_dispatch` -> #13461
- `pytorch_lightning.trainer.connectors.logger_connector.logger_connector.LoggerConnector.gpu_metrics`
Feel free to cross-check against the test file to ensure that the relevant test fails now (since the feature is no longer deprecated but removed).
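For context, a deprecation shim of this kind typically wraps the old attribute, emits a warning, and forwards to the replacement; removing it means deleting the shim entirely. Here is a minimal, self-contained sketch using a toy class — this is not PyTorch Lightning's actual implementation (which uses its own deprecation helpers), just an illustration of the pattern being removed:

```python
import warnings


class Trainer:
    """Toy stand-in illustrating a deprecation shim slated for removal."""

    def __init__(self):
        self._detect_anomaly = False  # hypothetical replacement attribute

    @property
    def terminate_on_nan(self):
        # The shim warns and forwards to the replacement; a removal PR
        # deletes this property (and its test) entirely.
        warnings.warn(
            "`Trainer.terminate_on_nan` is deprecated and will be removed.",
            DeprecationWarning,
        )
        return self._detect_anomaly


# The corresponding deprecation test asserts that the warning fires:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    _ = Trainer().terminate_on_nan
assert any(issubclass(w.category, DeprecationWarning) for w in caught)
```

Once the shim is deleted, a test like this fails — the access raises `AttributeError` instead of warning — which is exactly the cross-check described above.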
Pitch
All the deprecated features we have are tested here:
If you are interested in taking care of one item, post a comment here asking to take it. This avoids multiple people working on the same thing.
Additional context
See the pull requests linked in #10312 for examples of how to contribute 😃, or a recent pull request such as #12514.
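The removal itself is usually a small diff: delete the shim and flip the matching test from expecting a deprecation warning to expecting the attribute to be gone. A sketch of the post-removal state, again with a hypothetical toy class rather than the real PyTorch Lightning code:

```python
class Trainer:
    """Toy class after removal: the deprecated property no longer exists."""

    def __init__(self):
        self.detect_anomaly = False  # illustrative replacement flag


trainer = Trainer()
try:
    trainer.terminate_on_nan  # old API: now simply gone
    removed = False
except AttributeError:
    removed = True
assert removed
```

The updated test would assert the `AttributeError` (or simply disappear along with the feature), confirming that the old access path is fully removed rather than merely deprecated.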
If you enjoy Lightning, check out our other projects! ⚡
- Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
- Lite: Enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
- Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
- Bolts: Pretrained SOTA deep learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
- Lightning Transformers: A flexible interface for high-performance research using SOTA Transformers, leveraging PyTorch Lightning, Transformers, and Hydra.
Issue Analytics
- State:
- Created a year ago
- Reactions: 2
- Comments: 65 (62 by maintainers)
CI hiccups are resolved now. Feel free to pick new pieces! 🥳
@vumichien sounds great, I’ll work on something else, sorry for the confusion!