Bot giving answers other than the highest-confidence intent, often not respecting fallback threshold [python]
Rasa Core version: `pip list | grep rasa_core` returns nothing?
Python version: 3.6.8
Operating system (windows, osx, …): macOS Mojave
Issue: Bot's responses don't match the highest-confidence intent
Hey,
I'm building a conversational chatbot in iPython. I've tried some of the bash commands, but I couldn't figure out a way to run the interactive mode or go through the interpreter from within Jupyter. I've gotten the bot working reasonably well on intents, but most of the time the answers it gives are far off. There must be something simple wrong; it could be that I've copied code from different places that doesn't fit together, but I'm just not catching it. I'm using the tensorflow_embedding pipeline.
Here’s my code:
```python
from rasa_nlu.training_data import load_data
from rasa_nlu.config import RasaNLUModelConfig
from rasa_nlu.model import Trainer
from rasa_nlu import config

# Train the NLU model (tensorflow_embedding pipeline from config.yml)
training_data = load_data("nlupaths.md")
trainer = Trainer(config.load("config.yml"))
interpreter = trainer.train(training_data)
model_directory = trainer.persist("./models/nlu", fixed_model_name="current")
```
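To sanity-check the NLU model on its own, here's a minimal sketch using the parse output's `intent_ranking` field (part of the rasa_nlu parse result) to confirm that NLU by itself picks the right intent:

```python
# Minimal sketch: inspect the NLU prediction for one of the training examples.
result = interpreter.parse("why do we meditate?")
print(result["intent"])  # top intent with its confidence
for ranked in result["intent_ranking"][:3]:
    print(ranked["name"], ranked["confidence"])
```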
```python
from rasa_core.policies import FallbackPolicy, KerasPolicy, MemoizationPolicy
from rasa_core.agent import Agent

# Fall back to utter_unclear when NLU or Core confidence drops below 0.65
fallback = FallbackPolicy(fallback_action_name="utter_unclear",
                          core_threshold=0.65,
                          nlu_threshold=0.65)

agent = Agent('domainpaths.yml',
              policies=[MemoizationPolicy(), KerasPolicy(), fallback])

# Train the dialogue model on the stories
training_data = agent.load_data('storiespaths.md')
agent.train(training_data, validation_split=0.0)
agent.persist('models/dialogue')
```
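Note that with `nlu_threshold=0.65`, the FallbackPolicy should only fire when the parsed confidence is below 0.65, so fallback replies at 0.94–0.95 confidence suggest the dialogue policies rather than NLU. To see what Core is doing per turn, a sketch (assuming rasa_core reports its per-turn action predictions through the standard `logging` module, which I believe it does):

```python
import logging

# Raise the log level to DEBUG before calling agent.handle_text() so that
# rasa_core's per-turn messages (including the predicted action) are printed.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("rasa_core").setLevel(logging.DEBUG)
```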
```python
from rasa_core.agent import Agent

# Load the dialogue model, using the persisted NLU model as the interpreter
agent = Agent.load('models/dialogue', interpreter=model_directory)

print("You can now talk to Buddha-bot!")
while True:
    a = input()
    if a == 'stop':
        break
    responses = agent.handle_text(a)
    for response in responses:
        print(response["text"])
    print(interpreter.parse(a))
```
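To make the mismatch easier to show, here's a small hypothetical helper (not part of my bot, just for illustration) that prints the parsed intent next to the bot's reply for each turn:

```python
# Hypothetical helper: show the NLU intent alongside the bot's reply, so
# mismatches between parsed_intent and the answer are easy to spot.
def debug_turn(agent, interpreter, text):
    intent = interpreter.parse(text)['intent']
    print("parsed_intent: {} (confidence: {:.2f})".format(
        intent['name'], intent['confidence']))
    for response in agent.handle_text(text):
        print("Bot:", response["text"])

debug_turn(agent, interpreter, "why do we meditate?")
```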
When I talk to it, this is how a conversation looks. I’m writing questions that are exactly like the examples in the nlu file.
Bot: You can now talk to Buddha-bot!
Me: hey
Bot: Good day, nice to see you! Can you ask about something specific?
(answer_intent: greet - correct, parsed_intent: greet - correct, confidence: 0.96)
Me: why do we meditate?
Bot: I'm doing great, thank you very much! Do you want to ask about something Buddhism-related?
(answer_intent: feeling - wrong, parsed_intent: why_meditate - correct, confidence: 0.95)
Me: what about the eastern way?
Bot: I'm sorry, I don't have the answer to your question, could you rephrase it?
(answer_intent: fallback - wrong, 0.65 threshold, parsed_intent: buddhist_not_enlightened - correct, confidence: 0.95)
Me: How do we relieve ourselves from suffering?
Bot: I'm really sorry, but maybe your question is too broad, phrased in a way that I don't understand or beyond my knowledge. You can try to rephrase your question?
(answer_intent: fallback - wrong, 0.65 threshold, parsed_intent: suffering_relief - correct, confidence: 0.94)
and so on.
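For context, the dialogue model picks its response action from the trained stories, which map intents to utter actions. An illustrative story snippet (hypothetical intent/action names based on the transcript above; my actual stories file isn't shown here):

```md
## meditation question
* why_meditate
  - utter_why_meditate

## suffering question
* suffering_relief
  - utter_suffering_relief
```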
I'm guessing that there is something very basic that I'm missing. I'm just trying to do this entirely in Python because I prefer working in iPython. 😃 Any help would be greatly appreciated!
P.S. If you'd be so kind as to answer a side question: is there support for REDP in Python yet? I can't find anything about it. Thanks. 😃
Top GitHub Comments
The context memory depends on the intents and utterances, not the actual words. So yes, at that point the elaborate intent could provoke different answers, provided the context exactly matched a story you had written (via the MemoizationPolicy). And yes, the interpreter and the KerasPolicy are unrelated: the interpreter predicts the intent, and the policies pick the actions. I recommend you read a little more of our docs to figure out exactly how your bot is making the decisions it's making.
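A minimal sketch of that distinction, using the same policies from your code (max_history is a MemoizationPolicy parameter; 5 is, I believe, the default at this version):

```python
from rasa_core.policies import MemoizationPolicy, KerasPolicy

# MemoizationPolicy only predicts an action when the last `max_history`
# turns exactly match one of the training stories; otherwise KerasPolicy
# has to generalize from what it learned.
agent_policies = [MemoizationPolicy(max_history=5), KerasPolicy()]
```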
As for this issue, I'm going to close it, since it's now a usage question. The forum is the right place to ask questions like these; I realize responses there are a little slower, but we like to keep GitHub issues focused on bugs that could affect everybody, since those take first priority. Good luck!
Yes, that’s fine. I’m checking now to see if the same bug is happening on 0.13.3