Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Allow embedding sizes other than 1024

See original GitHub issue

We’re using this implementation for a research project, and we’ve run into errors when trying to use embedding sizes other than 1024. Can you give us any hints about what might be causing this?

pybullet build time: Jun 20 2019 15:31:37
Traceback (most recent call last):
  File "/current_project/robo-planet/torch_planet/main.py", line 233, in <module>
    beliefs, prior_states, prior_means, prior_std_devs, posterior_states, posterior_means, posterior_std_devs = transition_model(init_state, actions[:-1], init_belief, bottle(encoder, (observations[1:], )), nonterminals[:-1])
  File "/current_project/robo-planet/torch_planet/models.py", line 10, in bottle
    y = f(*map(lambda x: x[0].view(x[1][0] * x[1][1], *x[1][2:]), zip(x_tuple, x_sizes)))
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
RuntimeError: 
shape '[-1, 4096]' is invalid for input of size 2508800:
operation failed in interpreter:

            return torch.transpose(self, dim0, dim1), backward

        def view(self,
                 size: List[int]):
            self_size = self.size()
            def backward(grad_output):
                return grad_output.reshape(self_size), None

            return torch.view(self, size), backward
                   ~~~~~~~~~~ <--- HERE
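
The numbers in the error are consistent with the flattened convolutional features being 1024-dimensional per observation: 2,508,800 = 2,450 × 1,024, whereas 2,508,800 / 4,096 = 612.5, so a view to [-1, 4096] cannot succeed. The conv stack’s output size is fixed by the observation resolution (256 channels × 2 × 2 = 1024 for 64×64 inputs), so a different embedding size has to come from an extra learned projection rather than from reshaping. The repository’s encoder isn’t shown here, so the following is only a minimal sketch of that idea; the layer shapes and the fc projection are assumptions, not necessarily what torch_planet actually does.

import torch
from torch import nn

class VisualEncoder(nn.Module):
    # Sketch of a World Models-style encoder for 64x64 RGB observations.
    # The four stride-2 convolutions always yield a 1024-dim flattened
    # feature (256 channels x 2 x 2); other embedding sizes come from an
    # extra linear projection, not from reshaping.
    def __init__(self, embedding_size: int):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 4, stride=2)
        self.conv2 = nn.Conv2d(32, 64, 4, stride=2)
        self.conv3 = nn.Conv2d(64, 128, 4, stride=2)
        self.conv4 = nn.Conv2d(128, 256, 4, stride=2)
        # Project the fixed 1024-dim conv feature to the requested size.
        self.fc = nn.Identity() if embedding_size == 1024 else nn.Linear(1024, embedding_size)

    def forward(self, observation: torch.Tensor) -> torch.Tensor:
        hidden = torch.relu(self.conv1(observation))
        hidden = torch.relu(self.conv2(hidden))
        hidden = torch.relu(self.conv3(hidden))
        hidden = torch.relu(self.conv4(hidden))
        hidden = hidden.view(-1, 1024)  # fixed by the conv stack, not by embedding_size
        return self.fc(hidden)          # (batch, embedding_size)

If the failing view is indeed the encoder flattening its conv output with the configured embedding size, replacing it with a fixed view(-1, 1024) followed by a projection like the one above would lift the 1024 restriction without touching bottle or the transition model.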

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 5 (4 by maintainers)

Top GitHub Comments

1 reaction
maximecb commented on Jun 28, 2019

“The encoder and decoder architectures seem to have been taken directly from World Models.”

That is what they claimed in the article. David Ha was also on both papers.

0 reactions
Kaixhin commented on Jun 27, 2019

Nope, I haven’t played around with this (agreed that it could probably be smaller). The encoder and decoder architectures seem to have been taken directly from World Models.

Read more comments on GitHub

Top Results From Across the Web

  • Can not use large dimension in Embedding layer on GPU(s).
    “I am using TF2.0 latest nightly build and I am trying to train LSTM model for text classification on very large dataset of...”
  • OpenAI GPT-3 Text Embeddings - Really a new state-of-the ...
    “Via a REST API endpoint, you can access four types of models from OpenAI: Ada (1024 dimensions); Babbage (2048 dimensions); Curie (4096 dimensions);...”
  • How to determine the embedding size?
    “More recent papers used 512, 768, 1024. One of the factors, influencing the choice of embedding is the way you would like different...”
  • Deep Learning; Personal Notes Part 1 Lesson 4: Structured ...
    “In this case we have a matrix with 1024 rows and 512 columns. ... The rule of thumb for determining the embedding size...”
  • Word embeddings | Text - TensorFlow
    “It is common to see word embeddings that are 8-dimensional (for small datasets), up to 1024-dimensions when working with large datasets.”
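
As the snippets above note, the embedding dimension is usually just a constructor hyperparameter of the layer that produces it. A minimal, generic PyTorch illustration (the sizes below are arbitrary and unrelated to the PlaNet code discussed above):

import torch
from torch import nn

vocab_size, embedding_dim = 10000, 256  # hypothetical sizes; anything from 8 to 1024 is common
embedding = nn.Embedding(vocab_size, embedding_dim)

tokens = torch.randint(0, vocab_size, (32, 20))  # (batch, sequence length)
vectors = embedding(tokens)                      # -> torch.Size([32, 20, 256])
print(vectors.shape)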
