
Add a predictor method to return more than one possible sequence


Would it be possible to add a predictor_n method to this library (or to modify the current predictor method) to return more than one sequence as a result? I think it would be a great tool to have when using beam search (with TopKDecoder).

I coded a first attempt (it seems to work: https://github.com/juan-cb/pytorch-seq2seq/commit/442431001b122fa15c4b6476a9d7411570f53f20), but I’m not sure whether it is the best way to implement this, or whether it is completely correct. The desired behavior is to return the n most probable sequences given a src_seq.

Thanks in advance

Issue Analytics

  • State: closed
  • Created: 6 years ago
  • Comments: 6 (6 by maintainers)

Top GitHub Comments

1 reaction
kylegao91 commented, Jan 17, 2018

Hi @juan-cb, sorry for the late reply. Please find my changes to your predict_n below. It should work; let’s make a PR.

    def predict_n(self, src_seq, n=1):
        """ Return the n most probable decoded sequences for src_seq. """
        # Encode the source tokens as a 1 x seq_len LongTensor; volatile=True
        # disables gradient tracking during inference (pre-0.4 PyTorch API).
        src_id_seq = Variable(torch.LongTensor([self.src_vocab.stoi[tok] for tok in src_seq]),
                              volatile=True).view(1, -1)
        if torch.cuda.is_available():
            src_id_seq = src_id_seq.cuda()

        softmax_list, _, other = self.model(src_id_seq, [len(src_seq)])

        result = []
        for x in range(0, int(n)):
            # Length of the x-th best beam for the first (and only) batch element.
            length = other['topk_length'][0][x]
            # Collect the token id emitted at each decoding step of beam x.
            tgt_id_seq = [other['topk_sequence'][di][0, x, 0].data[0] for di in range(length)]
            tgt_seq = [self.tgt_vocab.itos[tok] for tok in tgt_id_seq]
            result.append(tgt_seq)

        return result
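The indexing in predict_n can be illustrated with a plain-Python mock of the decoder output (a sketch with hypothetical shapes inferred from the code above; in the real library these are PyTorch tensors produced by TopKDecoder, and the token ids and vocabulary below are made up):

    # Mocked decoder output for a batch of one input decoded with k=2 beams:
    # 'topk_length'   -> [batch][beam] decoded lengths
    # 'topk_sequence' -> list over decode steps of [batch][beam][1] token ids
    other = {
        'topk_length': [[3, 2]],   # beam 0 has length 3, beam 1 has length 2
        'topk_sequence': [
            [[[4], [7]]],          # step 0: tokens for beams 0 and 1
            [[[5], [8]]],          # step 1
            [[[6], [9]]],          # step 2 (beam 1 already finished)
        ],
    }

    # Hypothetical id-to-token vocabulary.
    itos = {4: 'the', 5: 'cat', 6: 'sat', 7: 'a', 8: 'dog', 9: '<eos>'}

    def top_n_sequences(other, itos, n):
        """Extract the n best beams for batch element 0, mirroring predict_n."""
        result = []
        for x in range(n):
            length = other['topk_length'][0][x]
            tgt_id_seq = [other['topk_sequence'][di][0][x][0] for di in range(length)]
            result.append([itos[tok] for tok in tgt_id_seq])
        return result

    print(top_n_sequences(other, itos, 2))  # → [['the', 'cat', 'sat'], ['a', 'dog']]

Each beam is read column-wise across the per-step arrays, truncated to that beam’s own length, which is why topk_length is indexed before topk_sequence.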

1 reaction
kylegao91 commented, Jan 10, 2018

I think it’s a nice feature to have, and thanks for the attempted implementation.

I noticed that you use other["sequence"], which contains the single best sequence for each input in the batch rather than the beams of each input. Instead, as noted here, other['topk_sequence'] should be used to get the beam sequences.

Please feel free to make a PR from this; we can discuss from there.
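The distinction can be shown with a small mock (hypothetical values; in the library these are tensor slices at one decode step, and the shapes are assumptions based on the comment above):

    # At a single decode step, for a batch of 2 inputs with k=3 beams:
    # 'sequence' holds only the best token per input ...
    sequence_step = [11, 21]
    # ... while 'topk_sequence' keeps a beam dimension per input.
    topk_step = [[[11], [12], [13]],   # all 3 beams for input 0
                 [[21], [22], [23]]]   # all 3 beams for input 1

    best_for_input0 = sequence_step[0]                # only the top beam
    beams_for_input0 = [b[0] for b in topk_step[0]]   # every beam, in rank order
    print(best_for_input0, beams_for_input0)          # → 11 [11, 12, 13]

Iterating over other["sequence"] can therefore never yield more than one candidate per input, which is why the top-k variant is needed here.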
