
torchtext iterator that tokenizes each line of words between the tokens `<sos>` and `<eos>`

See original GitHub issue

Hello,

I generated a text file called openbookQA_train. The contents of this file are shown below:

<sos> The sun is responsible for <mcoption> (A) puppies learning new tricks <eos>
<sos> The sun is responsible for <mcoption> (B) children growing up and getting old <eos>
<sos> The sun is responsible for <mcoption> (C) flowers wilting in a vase <eos>
<sos> The sun is responsible for <mcoption> (D) plants sprouting, blooming and wilting <eos>

I am trying to use or define a torchtext Iterator to generate input that I can pass into my Transformer.

I want each sample in next(iter(openbookQA_train)).text to be a sequence of integers obtained by tokenizing one line of words between <sos> and <eos> (including those special tokens). For a sample with fewer tokens than the bptt length, I want all of the tokenized words between <sos> and <eos> to be included, with the remaining slots filled by the <pad> token up to the bptt length.

How can I achieve this objective?

Thank you,
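One way to get this behavior is sketched below against the legacy torchtext API (torchtext ≤ 0.8, where Field, Example, Dataset, and Iterator live in torchtext.data). Since each line already contains <sos> and <eos>, plain whitespace tokenization keeps them as ordinary tokens, and fix_length pads every shorter example with <pad> up to the bptt length. BPTT_LEN, the batch size, and the file handling are assumptions, not details from the issue.

```python
from torchtext.data import Field, Example, Dataset, Iterator

BPTT_LEN = 20  # assumed bptt length; adjust to your model

# Each line already contains <sos> and <eos>, so whitespace
# tokenization keeps them as ordinary tokens; fix_length makes the
# Field pad every example with <pad> up to BPTT_LEN.
TEXT = Field(sequential=True,
             tokenize=str.split,
             pad_token='<pad>',
             fix_length=BPTT_LEN)

with open('openbookQA_train') as f:
    lines = [line.strip() for line in f if line.strip()]

fields = [('text', TEXT)]
examples = [Example.fromlist([line], fields) for line in lines]
dataset = Dataset(examples, fields)

# Build the vocabulary (assigns indices to <sos>, <eos>, <mcoption>, <pad>, ...)
TEXT.build_vocab(dataset)

train_iter = Iterator(dataset, batch_size=4, shuffle=True)
batch = next(iter(train_iter))
print(batch.text.shape)  # torch.Size([BPTT_LEN, batch_size])
```

Alternatively, strip <sos> and <eos> from the file and pass init_token='<sos>', eos_token='<eos>' to the Field so it inserts them itself; fix_length then bounds the total length including those two tokens.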

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 20 (9 by maintainers)

Top GitHub Comments

1 reaction
h56cho commented, Nov 27, 2019

`instance_list = [instance for instance in batch.t()]` doesn't work, but `instance_list = [instance for instance in batch.text]` seems to work… thank you!

0 reactions
mttk commented, Nov 27, 2019

If the batch is in the format `[bptt_size, batch_size]`, then `instance_list = [instance for instance in batch.t()]` should work.
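To illustrate the distinction (a toy sketch with made-up shapes): the .t() call has to go on the field tensor batch.text, not on the torchtext Batch container itself, which has no .t() method; that is most likely why the first attempt above failed.

```python
import torch

# Toy stand-in for batch.text in [bptt_size, batch_size] layout
# (assumed shapes, for illustration only).
bptt_size, batch_size = 5, 3
text = torch.arange(bptt_size * batch_size).reshape(bptt_size, batch_size)

# Iterating the raw tensor walks the first (time) dimension;
# transposing first yields one instance per row.
instances = [instance for instance in text.t()]
print(len(instances))      # 3, one per batch element
print(instances[0].shape)  # torch.Size([5]), bptt_size tokens
```

Note that iterating over batch.text without transposing (the variant that "seems to work" above) actually yields one time-step per element, not one instance, unless the Field was created with batch_first=True.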

Read more comments on GitHub >

Top Results From Across the Web

How to use TorchText for neural machine translation, plus ...
TorchText is incredibly convenient as it allows you to rapidly tokenize and batchify (are those even words?) your data.
Read more >
torchtext.data - Read the Docs
Default: False. tokenize – The function used to tokenize strings using this field into sequential examples. If “spacy”, the SpaCy tokenizer is used....
Read more >
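A minimal illustration of that tokenize parameter (assuming the legacy torchtext.data.Field; the "spacy" option additionally requires the spacy package and an English model such as en_core_web_sm):

```python
from torchtext.data import Field

# tokenize accepts a callable or the string "spacy" (per the docs
# snippet above). The lambda is an assumed stand-in tokenizer.
whitespace_field = Field(tokenize=lambda s: s.split())
spacy_field = Field(tokenize="spacy")
```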
1_torch_seq2seq_intro
Numericalized is just a fancy way of saying they have been converted from a sequence of tokens to a sequence of corresponding indices,...
Read more >
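In other words, numericalization swaps each token for its vocabulary index. A toy stand-in with made-up indices (torchtext builds the real mapping in Field.build_vocab and exposes it as vocab.stoi):

```python
# Hypothetical token-to-index table; torchtext's is vocab.stoi.
stoi = {'<pad>': 1, '<sos>': 2, '<eos>': 3, 'The': 4, 'sun': 5}
tokens = ['<sos>', 'The', 'sun', '<eos>']
ids = [stoi[t] for t in tokens]  # the numericalized sequence
print(ids)  # [2, 4, 5, 3]
```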
1 - Sequence to Sequence Learning with Neural Netw - Kaggle
At each time-step, the input to the encoder RNN is both the embedding, e, ... outputs an <eos> token or after a certain...
Read more >
A Tutorial on Torchtext - Allen Nie
All checked boxes are functionalities provided by Torchtext. ... Tokenization: break sentences into list of words
Read more >
