How to save tokenized data when training from scratch
See original GitHub issue
❓ Questions & Help
I am training ALBERT from scratch following the blog post by Hugging Face. It mentions that:
If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step.
How can this be done? Any suggestions? As of now, I am using the method given in the notebook:
from transformers import TextDataset

dataset = TextDataset(
    tokenizer=tokenizer,
    file_path="./oscar.eo.txt",
    block_size=128,
)
There is no method to save the tokenized data. Can anyone suggest how to save it? Tokenization already takes long enough before training starts.
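For reference, a minimal sketch of what tokenizing on the fly could look like, assuming a plain-text corpus with one example per line and a standard PyTorch Dataset. The class name LazyTokenizedDataset and the tokenizer path are placeholders, not something from the blog post or the transformers library:

import torch
from torch.utils.data import Dataset
from transformers import AutoTokenizer

class LazyTokenizedDataset(Dataset):
    """Keeps only raw text in memory and tokenizes inside __getitem__."""

    def __init__(self, file_path, tokenizer, block_size=128):
        self.tokenizer = tokenizer
        self.block_size = block_size
        with open(file_path, encoding="utf-8") as f:
            # Reading raw lines is cheap; no tokenization happens here.
            self.lines = [line for line in f if line.strip()]

    def __len__(self):
        return len(self.lines)

    def __getitem__(self, idx):
        # Tokenization is deferred until the example is actually requested.
        encoding = self.tokenizer(
            self.lines[idx],
            truncation=True,
            max_length=self.block_size,
            return_tensors="pt",
        )
        return encoding["input_ids"].squeeze(0)

tokenizer = AutoTokenizer.from_pretrained("./my-tokenizer")  # placeholder path to a saved tokenizer
dataset = LazyTokenizedDataset("./oscar.eo.txt", tokenizer, block_size=128)

This trades a one-time preprocessing pass for a small amount of tokenization work per batch, which is usually hidden by DataLoader workers.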
Issue Analytics
- State:
- Created: 3 years ago
- Comments: 8 (2 by maintainers)
Top Results From Across the Web
How-to Build a Transformer Tokenizer - Towards Data Science
How to train a transformer model from scratch. ... Saving our tokenizer creates two files, a merges.txt and vocab.json.

Training a new tokenizer from an old one - Hugging Face
Training a tokenizer is a statistical process that tries to identify which subwords are the best to pick for a given corpus, and...

Create a Tokenizer and Train a Huggingface RoBERTa Model ...
To train a tokenizer we need to save our dataset in a bunch of text files. We create a plain text file for...

Tokenization and Text Data Preparation with TensorFlow ...
Tokenize our training data tokenizer ... providing the maximum number of words to keep in our vocabulary after tokenization, ...

Tokenization in NLP: Types, Challenges, Examples, Tools
A tokenizer breaks unstructured data and natural language text ... Check how you can keep track of your TensorFlow / Keras model training...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
That worked @LysandreJik, thanks! I'm still not getting how you can prepare pretraining data on the fly while training. I have a lot of training data and don't want to wait until it is all prepared before training starts.
Once you have your data you can pickle it or use torch.save to save it to your disk and reload it later.
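As a rough illustration of that suggestion, assuming the same TextDataset call as in the question and that tokenizer is already defined; the cache file name is a placeholder:

import os
import torch
from transformers import TextDataset

cache_path = "./tokenized_dataset.pt"  # placeholder file name for the cached dataset

if os.path.exists(cache_path):
    # Reload the previously tokenized examples instead of re-tokenizing the corpus.
    dataset = torch.load(cache_path)
else:
    dataset = TextDataset(
        tokenizer=tokenizer,
        file_path="./oscar.eo.txt",
        block_size=128,
    )
    # Persist the tokenized dataset so later runs can skip the long preprocessing step.
    torch.save(dataset, cache_path)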