
How to save tokenized data when training from scratch

See original GitHub issue

❓ Questions & Help

I am training ALBERT from scratch following the blog post by Hugging Face. It mentions that:

If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step.

How can this be done? Any suggestions? As of now, I am using the method given in the notebook:

from transformers import TextDataset

# Reads the whole file and tokenizes it into fixed-size blocks up front,
# which is the slow preprocessing step described above.
dataset = TextDataset(
    tokenizer=tokenizer,
    file_path="./oscar.eo.txt",
    block_size=128,  # number of tokens per training example
)

there is no method to save the tokenized data. Can anyone suggest how to save it? Tokenization is already taking long enough before training even starts.
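
For reference, tokenizing "on the fly" means deferring tokenization from a preprocessing step into the dataset itself, so each example is only tokenized when a batch asks for it and training can start immediately. A minimal sketch of such a lazy dataset (assuming PyTorch and a recent transformers tokenizer; the class name and the one-example-per-line file layout are illustrative, not from the blog post):

import torch
from torch.utils.data import Dataset

class LazyTextDataset(Dataset):
    def __init__(self, tokenizer, file_path, block_size=128):
        self.tokenizer = tokenizer
        self.block_size = block_size
        # Keep only the raw lines in memory; nothing is tokenized yet.
        with open(file_path, encoding="utf-8") as f:
            self.lines = [line.strip() for line in f if line.strip()]

    def __len__(self):
        return len(self.lines)

    def __getitem__(self, i):
        # Tokenization happens here, per example, as batches are drawn.
        encoding = self.tokenizer(
            self.lines[i],
            truncation=True,
            max_length=self.block_size,
            return_tensors="pt",
        )
        return encoding["input_ids"].squeeze(0)

With num_workers > 0 in the DataLoader, this per-example work runs in background worker processes and largely overlaps with the training step, so the upfront wait disappears at the cost of repeating the tokenization each epoch.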

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 8 (2 by maintainers)

Top GitHub Comments

1 reaction
008karan commented, May 26, 2020

That worked, @LysandreJik, thanks! I'm still not getting how you can prepare pretraining data on the fly while training. I have a large training set and don't want to wait until it is all prepared before training starts.

1 reaction
LysandreJik commented, May 26, 2020

Once you have your data, you can pickle it or use torch.save to write it to disk and reload it later.
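
Concretely, a minimal sketch of that suggestion (the file name is illustrative):

import torch

# One-time preprocessing run: build the dataset as above, then persist it.
torch.save(dataset, "tokenized_dataset.pt")

# Later runs: skip the slow TextDataset construction and just reload.
dataset = torch.load("tokenized_dataset.pt")

torch.save pickles arbitrary Python objects under the hood, so the plain pickle module (pickle.dump / pickle.load) works just as well here.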

Read more comments on GitHub >

Top Results From Across the Web

  • How-to Build a Transformer Tokenizer - Towards Data Science: How to train a transformer model from scratch. ... Saving our tokenizer creates two files, a merges.txt and vocab.json.
  • Training a new tokenizer from an old one - Hugging Face: Training a tokenizer is a statistical process that tries to identify which subwords are the best to pick for a given corpus, and...
  • Create a Tokenizer and Train a Huggingface RoBERTa Model ...: To train a tokenizer we need to save our dataset in a bunch of text files. We create a plain text file for...
  • Tokenization and Text Data Preparation with TensorFlow ...: Tokenize our training data tokenizer ... providing the maximum number of words to keep in our vocabulary after tokenization, ...
  • Tokenization in NLP: Types, Challenges, Examples, Tools: A tokenizer breaks unstructured data and natural language text ... Check how you can keep track of your TensorFlow / Keras model training...
