How to use the nlp.js Express API from multiple node servers
Summary
My goal is to have one nlp.js chat bot server running in backend, and multiple node servers communicating with it. I have tried different approaches:
- Reproducing the HTTP requests made by the client frontend that is served when the configuration in conf.json has the `serveBot` property set to `true`:
```json
{
  "settings": {
    "api-server": {
      "port": 3150,
      "serveBot": true
    }
  },
  "use": [
    "Basic",
    "LangEn",
    "ExpressApiServer",
    "DirectlineConnector",
    "Bot"
  ]
}
```
With this approach I can create conversations using the Express API service at `/directline/conversations`.
When I make a POST request to `/directline/conversations/:conversationId/activities`, it returns a 200 code, so I assume it processes it well.
But the GET request (also to `/directline/conversations/:conversationId/activities`) is processed correctly internally, yet the HTTP response returns `undefined`.
- Directly using the directline controller:
```js
const { dockStart } = require('@nlpjs/basic');

(async () => {
  const dock = await dockStart();
  const { controller } = dock.get('directline');
  // Here I can use the controller, e.g. "await controller.addActivity(conversationId, activity);"
})();
```
This works fine, but when I use more than one Node server, I have to either create one chatbot per server or make them communicate internally.
Possible Solution
I am sure there must be a better solution for this, since the Express API works fine with the default frontend when the configuration in conf.json has the `serveBot` property set to `true`. Am I missing something?
Your Environment
| Software | Version |
|---|---|
| @nlpjs/basic | 4.17.5 |
| @nlpjs/bot | 4.19.7 |
| @nlpjs/core | 4.17.0 |
| @nlpjs/directline-connector | 4.19.1 |
| @nlpjs/express-api-server | 4.19.8 |
| node | 10.15.3 |
| npm | 6.4.1 |
| Operating System | Ubuntu |
Thank you, greetings from Barcelona.
Hello, this protocol is DirectLine from Microsoft. You will first need to create a conversation:
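```bash
# assuming the bot server from the question, listening on localhost:3150
curl -X POST http://localhost:3150/directline/conversations
```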
This returns an object like this:
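The exact values will differ; an illustrative shape:

```json
{
  "conversationId": "a1b2c3d4e5f6",
  "expiresIn": 1800
}
```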
From there, don't take into account the expiresIn, and focus on the conversationId. This will be the id of this conversation.
Now it is time to talk with the bot, like this:
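```bash
# the conversation id and from.id are placeholders; the body follows the
# DirectLine activity format (type, text, from)
curl -X POST http://localhost:3150/directline/conversations/a1b2c3d4e5f6/activities \
  -H "Content-Type: application/json" \
  -d '{ "type": "message", "text": "Hello bot", "from": { "id": "user1" } }'
```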
This will add one activity to the conversation, but you'll receive the 200 even before the message is really processed by the NLP. Why? Imagine that this message triggers a huge database process, or is an alarm of the type "remind me in 12 hours": you cannot leave the REST call hanging there for hours. Even worse, the reaction to one message is not always one message... it can be 0 messages, it can be 10 messages, it can be a message each minute. So you cannot answer in the same REST call.
The way this is handled is that there is a list of activities per conversation: the user can add activities, and the backend can add activities.
You can retrieve all the activities with:
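```bash
# same GET endpoint as in the question
curl http://localhost:3150/directline/conversations/a1b2c3d4e5f6/activities
```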
The answer will be like this:
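An illustrative shape, with activities from both the user and the bot:

```json
{
  "activities": [
    { "type": "message", "text": "Hello bot", "from": { "id": "user1" } },
    { "type": "message", "text": "Hi! How can I help you?", "from": { "id": "bot" } }
  ],
  "watermark": 2
}
```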
As you can see, you receive the activities but also a watermark. This is used for filtering, so you can now call:
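```bash
# same endpoint, now filtered with the watermark query parameter
curl "http://localhost:3150/directline/conversations/a1b2c3d4e5f6/activities?watermark=2"
```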
This filters and tells the backend "hey, I already processed all messages up to watermark 2, so only send me messages from there".
Again, this protocol is from Microsoft, so you can obtain information here: https://docs.microsoft.com/en-us/azure/bot-service/rest-api/bot-framework-rest-direct-line-3-0-start-conversation?view=azure-bot-service-4.0
Another way is possible if you already know that you will have only one answer per message from the user, and you know that this answer can be given in a reasonable time: then you can answer in the same REST call.
Given your example:
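One possible shape for that, as a minimal sketch (the `/chat` routes and the id helper are illustrative and not part of the DirectLine connector; it also assumes the `api-server` plugin exposes its Express app as `.app`):

```js
const { dockStart } = require('@nlpjs/basic');

// Hypothetical id generator, good enough for a sketch.
const newId = () => Date.now().toString(36) + Math.random().toString(36).slice(2);

(async () => {
  // Loads the conf.json from the question (ExpressApiServer on port 3150).
  const dock = await dockStart();
  const nlp = dock.get('nlp');
  // Assumption: the ExpressApiServer plugin exposes its Express app as ".app",
  // and JSON bodies are already parsed for its routes.
  const { app } = dock.get('api-server');

  const conversations = new Set();

  // Create a conversation and return its id.
  app.post('/chat/conversations', (req, res) => {
    const conversationId = newId();
    conversations.add(conversationId);
    res.json({ conversationId });
  });

  // Process the message and answer in the same REST call.
  app.post('/chat/conversations/:conversationId/messages', async (req, res) => {
    const { conversationId } = req.params;
    if (!conversations.has(conversationId)) {
      return res.status(404).json({ error: 'Unknown conversation' });
    }
    const result = await nlp.process('en', req.body.text);
    res.json({ conversationId, answer: result.answer });
  });
})();
```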
Now you can get a conversation id like this:
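```bash
# assuming the sketch above is running on localhost:3150
curl -X POST http://localhost:3150/chat/conversations
```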
You’ll obtain something like:
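```json
{ "conversationId": "k2h3j4l5m6n7" }
```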
Now to talk with the bot:
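```bash
# hypothetical /chat route from the sketch above
curl -X POST http://localhost:3150/chat/conversations/k2h3j4l5m6n7/messages \
  -H "Content-Type: application/json" \
  -d '{ "text": "Hello bot" }'
```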
And you’ll obtain:
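```json
{
  "conversationId": "k2h3j4l5m6n7",
  "answer": "Hi! How can I help you?"
}
```

Here the answer comes back in the same REST call, which is only viable under the one-answer-per-message, reasonable-time assumption above.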
Thank you for sharing the solution.
Is there any consideration of a socket.io-based solution on the nlp.js roadmap, so that the client could get answers from the bot as soon as they are ready, without needing to poll with the watermark parameter?
Are there other methods to get an event when activities have been updated, besides requesting `/directline/conversations/:conversationId/activities?watermark=2`?