
Bot giving other answers than the highest confidence, often not respecting fallback threshold [python]

See original GitHub issue

Rasa Core version: pip list | grep rasa_core returns nothing?

Python version: 3.6.8

Operating system (windows, osx, …): macOS Mojave

Issue: Bot not returning the same as the highest confidence

Hey,

I’m building a conversational chatbot in IPython. I’ve tried some of the bash commands, but I couldn’t figure out how to run interactive mode or use the interpreter from Jupyter. The bot classifies intents reasonably well, but most of the time the answers it gives are far off. Something simple must be wrong; it could be that I’ve copied code from different places that doesn’t fit together, but I’m just not catching it. I’m using the tensorflow_embedding pipeline.

Here’s my code:


# Train the NLU model
from rasa_nlu.training_data import load_data
from rasa_nlu.model import Trainer
from rasa_nlu import config

training_data = load_data("nlupaths.md")
trainer = Trainer(config.load("config.yml"))
interpreter = trainer.train(training_data)
model_directory = trainer.persist("./models/nlu", fixed_model_name="current")

# Train the dialogue model
from rasa_core.policies import FallbackPolicy, KerasPolicy, MemoizationPolicy
from rasa_core.agent import Agent

fallback = FallbackPolicy(fallback_action_name="utter_unclear",
                          core_threshold=0.65,
                          nlu_threshold=0.65)

agent = Agent('domainpaths.yml',
              policies=[MemoizationPolicy(), KerasPolicy(), fallback])
training_data = agent.load_data('storiespaths.md')

agent.train(training_data, validation_split=0.0)
agent.persist('models/dialogue')

# Load the trained dialogue model together with the persisted NLU model
agent = Agent.load('models/dialogue', interpreter=model_directory)

print("You can now talk to Buddha-bot!")
while True:
    a = input()
    if a == 'stop':
        break
    parsed = interpreter.parse(a)  # NLU result, kept for debugging output
    responses = agent.handle_text(a)
    for response in responses:
        print(response["text"])
    print(parsed)
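The threshold logic behind the `FallbackPolicy` arguments above can be sketched in plain Python. This is a simplified illustration of the thresholding idea only, not Rasa's actual implementation; the function name is hypothetical:

```python
# Simplified sketch of a fallback decision: fall back if EITHER the NLU
# intent confidence or the dialogue policy's action confidence is below
# its threshold. Illustrative only, not Rasa's FallbackPolicy code.

NLU_THRESHOLD = 0.65   # minimum intent-classification confidence
CORE_THRESHOLD = 0.65  # minimum action-prediction confidence

def should_fall_back(nlu_confidence, core_confidence,
                     nlu_threshold=NLU_THRESHOLD,
                     core_threshold=CORE_THRESHOLD):
    """Return True when the bot should execute the fallback action."""
    return nlu_confidence < nlu_threshold or core_confidence < core_threshold

# The transcript below shows parsed intents around 0.95 confidence, so NLU
# alone would not trigger fallback; a low *action* confidence from the
# dialogue policies could still do so.
print(should_fall_back(nlu_confidence=0.95, core_confidence=0.95))  # False
print(should_fall_back(nlu_confidence=0.95, core_confidence=0.40))  # True
```

This is one way to see why a high parsed-intent confidence does not by itself rule out a fallback response: the dialogue side has its own, separate confidence.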

When I talk to it, this is how a conversation looks. I’m writing questions that are exactly like the examples in the nlu file.

Bot: You can now talk to Buddha-bot!
Me: hey
Bot: Good day, nice to see you! Can you ask about something specific?
    (answer_intent: greet - correct, parsed_intent: greet - correct, confidence: 0.96)
Me: why do we meditate?
Bot: I’m doing great, thank you very much! Do you want to ask about something Buddhism-related?
    (answer_intent: feeling - wrong, parsed_intent: why_meditate - correct, confidence: 0.95)
Me: what about the eastern way?
Bot: I’m sorry, I don’t have the answer to your question, could you rephrase it?
    (answer_intent: fallback - wrong, 0.65 threshold, parsed_intent: buddhist_not_enlightened - correct, confidence: 0.95)
Me: How do we relieve ourselves from suffering?
Bot: I’m really sorry, but maybe your question is too broad, phrased in a way that I don’t understand or beyond my knowledge. You can try to rephrase your question?
    (answer_intent: fallback - wrong, 0.65 threshold, parsed_intent: suffering_relief - correct, confidence: 0.94)

and so on.

I’m guessing that there is something very basic that I’m missing. I’m trying to do this entirely in Python because I prefer working in IPython. 😃 Any help would be greatly appreciated!

P.S. If you would be so kind as to answer a side question: is there support for REDP in Python yet? I can’t find anything about it. Thanks. 😃

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 30 (13 by maintainers)

Top GitHub Comments

erohmensing commented, Mar 14, 2019 (1 reaction)

So the context memory depends on the intents and utterances, not the actual words, so yes, at that point the elaborate should provoke different answers if the context was exactly the same as a story you had written (via the MemoizationPolicy). And yes, the interpreter and the KerasPolicy are unrelated – the interpreter predicts the intent and the policies pick the actions. I recommend you read a little bit more into our docs to figure out exactly how your bot is making the decisions it’s making.
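The split described here, where the interpreter predicts the intent and the policies pick the actions, can be sketched with a toy example. All names and lookup tables below are hypothetical, not Rasa internals; the point is that a correctly classified intent can still produce the wrong response when the action side mispredicts:

```python
# Toy illustration of the NLU / dialogue-policy split: two independent
# steps, each of which can be right or wrong on its own.

# Step 1: the interpreter maps user text to an intent (NLU).
def classify_intent(text):
    intents = {
        "hey": "greet",
        "why do we meditate?": "why_meditate",
    }
    return intents.get(text, "unknown")

# Step 2: the policies map the intent (plus conversation state) to an
# action. Imagine this memoized table was learned from miswritten
# stories, so it pairs a correct intent with the wrong response.
def select_action(intent):
    actions = {
        "greet": "utter_greet",
        "why_meditate": "utter_feeling",  # wrong mapping, as in the transcript
    }
    return actions.get(intent, "utter_unclear")

intent = classify_intent("why do we meditate?")
print(intent)                 # why_meditate  (NLU step is correct)
print(select_action(intent))  # utter_feeling (dialogue step still wrong)
```

This matches the symptom in the transcript: high parsed-intent confidence with an unrelated answer points at the stories/policies, not at the NLU model.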

As for this issue, I’m going to close it since it’s just up to usage now. The forum is the correct place to ask questions like these – I realize it’s a little slower for response, but we like to keep GitHub issues focused on bugs/issues that could be affecting everybody as they take first priority. Good luck!

erohmensing commented, Mar 13, 2019 (1 reaction)

Yes, that’s fine. I’m checking now to see if the same bug is happening on 0.13.3


Top Results From Across the Web

Handling chatbot failure gracefully | by Aniruddha Karajgi
It no longer predicts whatever intent has the highest confidence, even if it was really low. The FallbackClassifier now overrides that intent ...

Fallback and Human Handoff - Rasa
When an action confidence is below the threshold, Rasa will run the action ... To give the bot a chance to figure out...

Set bot confidence thresholds with confidence - Genesys
The best way to find a good threshold for a bot is to feed it a set of test data that has been...

Rasa Fall back policy for faq chatbot - Stack Overflow
To your first, if you're using response selector, fallback will kick in as normal if confidence is below the threshold you set. Giving...

Failing Gracefully with Rasa. Rasa Core 0.13 includes a new…
If the classification confidence is below a certain threshold, ... However, when interacting with a chatbot this fallback behavior can do ...
