Model trained via Rasa X is different from the model trained via the rasa command line
See original GitHub issue
Rasa version: 1.1.3
Rasa X version (if used & relevant): 0.19.4
Python version: 3.6.5
Operating system (windows, osx, …): osx
Issue: Why is the model trained through the Rasa X front-end interface different from the model trained with the rasa command line?
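For context, the two training paths being compared can be reproduced roughly as follows (a minimal sketch; it assumes a standard project layout as created by rasa init and the Rasa/Rasa X versions listed above):

    # Train a model from the command line; the resulting .tar.gz lands in models/
    $ rasa train

    # Launch Rasa X locally and trigger training from its web UI instead
    $ rasa x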
Issue Analytics
- State:
- Created 4 years ago
- Comments: 7 (6 by maintainers)
Top GitHub Comments
The cmdline model is probably using the latest model, but for Rasa X you need to manually change the production model to the latest one. Go to http://{localhost-or-whatever-your-ip}:5002/models and assign the production tag to the latest model.

Ah ok, yeah, then this is a duplicate of #4037, we'll add this functionality soon.
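For anyone who prefers not to click through the UI, the production tag can, as far as I know, also be assigned over the Rasa X HTTP API; a minimal sketch, assuming Rasa X is reachable on localhost:5002, the default project name "default", and a placeholder model name 20190626-123456 (replace it with your actual model name; depending on your setup an API token may also be required):

    # Assign the "production" tag to a specific model via the Rasa X API
    $ curl -XPUT "http://localhost:5002/api/projects/default/models/20190626-123456/tags/production"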