
Error when running yelp/train.py

See original GitHub issue

I followed the README and ran python train.py --data_path ./data

But then I got the following error:

{'dropout': 0.0, 'lr_ae': 1, 'load_vocab': '', 'nlayers': 1, 'batch_size': 64, 'beta1': 0.5, 'gan_gp_lambda': 0.1, 'nhidden': 128, 'vocab_size': 30000, 'niters_gan_schedule': '', 'niters_gan_d': 5, 'lr_gan_d': 0.0001, 'grad_lambda': 0.01, 'sample': False, 'arch_classify': '128-128', 'clip': 1, 'hidden_init': False, 'cuda': True, 'log_interval': 200, 'device_id': '0', 'temp': 1, 'seed': 1111, 'maxlen': 25, 'lowercase': True, 'data_path': './data', 'lambda_class': 1, 'lr_classify': 0.0001, 'outf': 'yelp_example', 'noise_r': 0.1, 'noise_anneal': 0.9995, 'lr_gan_g': 0.0001, 'niters_gan_g': 1, 'arch_g': '128-128', 'z_size': 32, 'epochs': 25, 'niters_ae': 1, 'arch_d': '128-128', 'emsize': 128, 'niters_gan_ae': 1}
Original vocab 9599; Pruned to 9603
Number of sentences dropped from ./data/valid1.txt: 0 out of 38205 total
Number of sentences dropped from ./data/valid2.txt: 0 out of 25278 total
Number of sentences dropped from ./data/train1.txt: 0 out of 267314 total
Number of sentences dropped from ./data/train2.txt: 0 out of 176787 total
Vocabulary Size: 9603
382 batches
252 batches
4176 batches
2762 batches
Loaded data!
Seq2Seq2Decoder(
  (embedding): Embedding(9603, 128)
  (embedding_decoder1): Embedding(9603, 128)
  (embedding_decoder2): Embedding(9603, 128)
  (encoder): LSTM(128, 128, batch_first=True)
  (decoder1): LSTM(256, 128, batch_first=True)
  (decoder2): LSTM(256, 128, batch_first=True)
  (linear): Linear(in_features=128, out_features=9603, bias=True)
)
MLP_G(
  (layer1): Linear(in_features=32, out_features=128, bias=True)
  (bn1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (activation1): ReLU()
  (layer2): Linear(in_features=128, out_features=128, bias=True)
  (bn2): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (activation2): ReLU()
  (layer7): Linear(in_features=128, out_features=128, bias=True)
)
MLP_D(
  (layer1): Linear(in_features=128, out_features=128, bias=True)
  (activation1): LeakyReLU(negative_slope=0.2)
  (layer2): Linear(in_features=128, out_features=128, bias=True)
  (bn2): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (activation2): LeakyReLU(negative_slope=0.2)
  (layer6): Linear(in_features=128, out_features=1, bias=True)
)
MLP_Classify(
  (layer1): Linear(in_features=128, out_features=128, bias=True)
  (activation1): ReLU()
  (layer2): Linear(in_features=128, out_features=128, bias=True)
  (bn2): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (activation2): ReLU()
  (layer6): Linear(in_features=128, out_features=1, bias=True)
)
Training...
Traceback (most recent call last):
  File "train.py", line 574, in <module>
    train_ae(1, train1_data[niter], total_loss_ae1, start_time, niter)
  File "train.py", line 400, in train_ae
    output = autoencoder(whichdecoder, source, lengths, noise=True)
  File "/localhome/imd/anaconda2/envs/Pytorch/lib/python3.5/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/groups/branson/home/imd/Documents/project/ARAE/yelp/models.py", line 143, in forward
    hidden = self.encode(indices, lengths, noise)
  File "/groups/branson/home/imd/Documents/project/ARAE/yelp/models.py", line 160, in encode
    batch_first=True)
  File "/localhome/imd/anaconda2/envs/Pytorch/lib/python3.5/site-packages/torch/onnx/__init__.py", line 56, in wrapper
    if not might_trace(args):
  File "/localhome/imd/anaconda2/envs/Pytorch/lib/python3.5/site-packages/torch/onnx/__init__.py", line 130, in might_trace
    first_arg = args[0]
IndexError: tuple index out of range
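
The traceback ends inside torch/onnx/__init__.py, a tracing wrapper that did not exist in PyTorch 0.3.1: might_trace reads args[0], which raises IndexError when the wrapped function is called with keyword arguments only. The call that triggers it is the pack_padded_sequence call in yelp/models.py (line 160 in the traceback). Below is a minimal sketch of a commonly reported workaround on PyTorch 0.4.x; the variable names and the keyword-only form of the original call are assumptions about the repo's code, not a quote of it:

    # Assumed shape of the failing call in yelp/models.py encode()
    from torch.nn.utils.rnn import pack_padded_sequence

    # Fails on PyTorch 0.4.0: the ONNX wrapper indexes args[0], but with a
    # keyword-only call the positional args tuple is empty.
    # packed = pack_padded_sequence(input=embeddings, lengths=lengths,
    #                               batch_first=True)

    # Works: pass the input tensor positionally so args[0] exists.
    packed = pack_padded_sequence(embeddings, lengths, batch_first=True)

The cleaner fix, per the maintainer's comment below, is to run the code under PyTorch 0.3.1.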

Issue Analytics

  • State: open
  • Created: 5 years ago
  • Comments: 9 (3 by maintainers)

Top GitHub Comments

jakezhaojb commented on Aug 30, 2018 (1 reaction)

@vineetjohn Good point! I used PyTorch 0.3.1. I'm adding this to the README.
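
Since the code targets PyTorch 0.3.1, a version guard at the top of train.py would fail fast with a clearer message than the ONNX traceback. This is an illustrative sketch, not part of the repo:

    import torch

    # ARAE/yelp was developed against PyTorch 0.3.1 (per the comment above);
    # later releases changed the internals around pack_padded_sequence.
    if not torch.__version__.startswith("0.3"):
        raise RuntimeError(
            "This code expects PyTorch 0.3.1; found %s. Install 0.3.1 or "
            "apply the positional-argument workaround shown earlier."
            % torch.__version__
        )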

V-Enzo commented on Mar 12, 2020 (0 reactions)

@dangvanthin Hi, I ran into the same problem. Have you found a solution yet? Thank you.


