Show training utterances in the training log
Hi,
Some of my utterances are causing issues during training. The following error is thrown:
RuntimeError: The size of tensor a (17243) must match the size of tensor b (2500) at non-singleton dimension 1
I want to find out exactly which utterance is causing the problem. How can I modify the script so that, during training, it also prints to the terminal the utterances in the currently selected batch? Or is there another efficient way to troubleshoot this error?
Thanks
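One straightforward way to pin down the offending batch is to catch the exception and print the utterance IDs only when a step fails. This is a sketch, assuming a plain PyTorch-style training loop where each batch carries its utterance IDs under an illustrative "ids" key; `training_step` stands in for the real forward/backward step:

```python
# Sketch: print the utterance IDs only for the batch that actually
# raises the error. The "ids" key and training_step are illustrative;
# adapt them to your own data pipeline.
for step, batch in enumerate(train_loader):
    try:
        loss = training_step(batch)  # existing forward/backward step
    except RuntimeError:
        print(f"Step {step} failed; utterance IDs: {batch['ids']}")
        raise
```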
Top GitHub Comments
Thanks @Gastron, the issue was resolved. I used batch.id to print the utterance IDs. There was an utterance of more than 10 minutes, which was causing the issue.
Thanks a lot 😃
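For reference, a minimal sketch of the batch.id approach, assuming a SpeechBrain-style Brain subclass whose batches are PaddedBatch objects carrying an `id` field, as in the standard recipes; the `sig` data key and `self.modules.model` are placeholders for the real pipeline:

```python
import speechbrain as sb

class DebugBrain(sb.Brain):
    def compute_forward(self, batch, stage):
        # Print utterance IDs first: the last IDs printed before the
        # crash identify the batch with the problematic utterance.
        print("Batch utterance IDs:", batch.id)
        batch = batch.to(self.device)
        wavs, wav_lens = batch.sig                 # placeholder data key
        return self.modules.model(wavs, wav_lens)  # placeholder model
```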
I think technically you should be able to run the model with a different max_length, since the positional encoding is not learned. However, sometimes the attention mechanism doesn't respond well to sequences longer than those seen in training.
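To illustrate why a non-learned encoding allows this, here is a sketch of a standard sinusoidal positional encoding (the lengths match the error message above; d_model=512 is an assumption): the table is computed from a fixed formula rather than trained, so it can be rebuilt with a larger max_length without touching any weights.

```python
import math
import torch

def sinusoidal_positional_encoding(max_len: int, d_model: int) -> torch.Tensor:
    # Fixed (non-learned) sinusoidal table:
    #   PE[pos, 2i]   = sin(pos / 10000^(2i/d_model))
    #   PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))
    position = torch.arange(max_len).unsqueeze(1).float()
    div_term = torch.exp(torch.arange(0, d_model, 2).float()
                         * (-math.log(10000.0) / d_model))
    pe = torch.zeros(max_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    return pe

# The 2500-frame table from the error could in principle be rebuilt
# to cover the 17243-frame utterance, with no retraining needed:
pe = sinusoidal_positional_encoding(max_len=17243, d_model=512)
```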