AttributeError: 'DistributedDataParallel' object has no attribute 'version' at the time of model checkpointing
See original GitHub issue. Full stacktrace:
... <snipped> ...
Epoch: [1][97/804] Time 1.485 (1.640) Data 0.006 (0.069) Loss 339.3378 (444.7247)
Epoch: [1][98/804] Time 1.753 (1.641) Data 0.012 (0.069) Loss 372.9253 (443.9921)
Epoch: [1][99/804] Time 1.500 (1.640) Data 0.017 (0.068) Loss 276.2173 (442.2974)
Epoch: [1][100/804] Time 1.461 (1.638) Data 0.006 (0.068) Loss 314.9874 (441.0243)
Saving checkpoint model to /datasets/deepspeech/librispeech/deepspeech_checkpoint_epoch_1_iter_100.pth
Traceback (most recent call last):
File "train.py", line 284, in <module>
wer_results=wer_results, cer_results=cer_results, avg_loss=avg_loss),
File "/workspace/src/deepspeech/deepspeech.pytorch/model.py", line 251, in serialize
'version': model.version,
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 585, in __getattr__
type(self).__name__, name))
AttributeError: 'DistributedDataParallel' object has no attribute 'version'
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/workspace/src/deepspeech/deepspeech.pytorch/multiproc.py", line 46, in <module>
cmd=p.args)
subprocess.CalledProcessError: Command '['/opt/conda/bin/python', 'train.py', '--rnn-type', 'lstm', '--hidden-size', '1024', '--hidden-layers', '5', '--train-manifest', '/datasets/deepspeech/librispeech/libri_train_manifest.csv', '--val-manifest', '/datasets/deepspeech/librispeech/libri_val_manifest.csv', '--epochs', '60', '--num-workers', '16', '--cuda', '--learning-anneal', '1.01', '--batch-size', '64', '--no-sortaGrad', '--visdom', '--opt-level', 'O1', '--loss-scale', '1', '--id', 'libri', '--checkpoint', '--save-folder', '/datasets/deepspeech/librispeech', '--model-path', '/datasets/deepspeech/librispeech/deepspeech_final.pth', '--checkpoint-per-batch', '100', '--opt-level', 'O1', '--loss-scale', '1.0', '--world-size', '4', '--rank', '0', '--gpu-rank', '0']' returned non-zero exit status 1.
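The underlying cause is that DistributedDataParallel wraps the model and only exposes the standard nn.Module attributes; custom attributes such as version live on the wrapped model and must be reached through .module. A minimal sketch that reproduces the error (not from the repository; it assumes a single-process "gloo" group just so the wrapper can be constructed):

import os
import torch.distributed as dist
import torch.nn as nn

# Single-process process group so DistributedDataParallel can be built on CPU.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)
        self.version = "0.0.1"  # custom attribute, like DeepSpeech's model.version

model = nn.parallel.DistributedDataParallel(ToyModel())
print(model.module.version)  # works: the wrapped model still carries the attribute
print(model.version)         # AttributeError: 'DistributedDataParallel' object has no attribute 'version'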
Issue Analytics: created 4 years ago, 6 comments (4 by maintainers).
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
You need to change the package dict built in serialize() in model.py. Change it from:

package = {
    'version': model.version,
    'hidden_size': model.hidden_size,
    'hidden_layers': model.hidden_layers,
    'rnn_type': supported_rnns_inv.get(model.rnn_type, model.rnn_type.name.lower()),
    'audio_conf': model.audio_conf,
    'labels': model.labels,
    'state_dict': model.state_dict(),
    'bidirectional': model.bidirectional
}

to:

package = {
    'version': model.module.version,
    'hidden_size': model.module.hidden_size,
    'hidden_layers': model.module.hidden_layers,
    'rnn_type': supported_rnns_inv.get(model.module.rnn_type, model.module.rnn_type.name.lower()),
    'audio_conf': model.module.audio_conf,
    'labels': model.module.labels,
    'state_dict': model.module.state_dict(),
    'bidirectional': model.module.bidirectional
}
Thanks @farisalasmary, I'm going to have a look at this and will implement one of your proposed solutions in the master branch!
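A more general variant of the same fix (a hedged sketch, not the code that was merged) is to unwrap the model before building the checkpoint dict, so serialize() works whether or not the model is wrapped in DataParallel or DistributedDataParallel:

import torch.nn as nn

def unwrap(model):
    """Return the underlying module if the model is wrapped in (Distributed)DataParallel."""
    if isinstance(model, (nn.DataParallel, nn.parallel.DistributedDataParallel)):
        return model.module
    return model

# Inside serialize(), read attributes from the unwrapped model:
# m = unwrap(model)
# package = {
#     'version': m.version,
#     'hidden_size': m.hidden_size,
#     ...
#     'state_dict': m.state_dict(),
# }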