
How to use the nlp.js Express API from multiple node servers


Summary

My goal is to have one nlp.js chatbot server running in the backend, with multiple node servers communicating with it. I have tried different approaches:

  • Reproducing the HTTP requests that the default frontend client makes when the serveBot property in conf.json is set to true:
{
    "settings": {
        "api-server": {
            "port": 3150,
            "serveBot": true
        }
    },
    "use": [
        "Basic",
        "LangEn",
        "ExpressApiServer",
        "DirectlineConnector",
        "Bot"
    ]
}

With this approach I can create conversations through the express API service at /directline/conversations. When I make a POST request to /directline/conversations/:conversationId/activities, it returns a 200 code, so I assume it is processed correctly. But the GET (also at /directline/conversations/:conversationId/activities) is processed correctly internally, yet the HTTP request returns undefined.

  • Directly using the directline controller:
        const dock = await dockStart();
        const { controller } = dock.get('directline');
        // Here I can use controller as "await controller.addActivity(conversationId, activity);"

This works fine, but when I use more than one node server I have to either create one chatbot for each, or make them communicate internally.

Possible Solution

I am sure there must be a better way to do this, since the express API works fine with the default frontend when the serveBot property in conf.json is set to true. Am I missing something?

Your Environment

Software Version
@nlpjs/basic 4.17.5
@nlpjs/bot 4.19.7
@nlpjs/core 4.17.0
@nlpjs/directline-connector 4.19.1
@nlpjs/express-api-server 4.19.8
node 10.15.3
npm 6.4.1
Operating System Ubuntu

Thank you, greetings from Barcelona.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (1 by maintainers)

Top GitHub Comments

2 reactions
jesus-seijas-sp commented, Feb 2, 2021

Hello. This protocol is Microsoft's Direct Line. First you need to create a conversation:

curl --location --request POST 'http://localhost:3000/directline/conversations'

This returns an object like this:

{
    "conversationId": "239717d9-5589-78a9-ad5d-53fb4eae495a",
    "expiresIn": 1800
}

Ignore the expiresIn field and focus on the conversationId: this is the id you will use for this conversation from now on.
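If several node servers need to talk to one bot, this call can be wrapped in a small helper. A minimal Node.js sketch, assuming the endpoint behaves as the curl call above shows; the function name and the injected `fetchImpl` (e.g. Node 18+'s global `fetch`) are illustrative, so the helper can also be exercised with a stub:

```javascript
// Create a Direct Line conversation and return its id.
// `fetchImpl` is injected so the helper can be tested without a live server.
async function createConversation(fetchImpl, baseUrl) {
  const res = await fetchImpl(`${baseUrl}/directline/conversations`, {
    method: 'POST',
  });
  const { conversationId } = await res.json();
  return conversationId;
}
```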

Now it is time to talk to the bot, like this:

curl --location --request POST 'http://localhost:3000/directline/conversations/239717d9-5589-78a9-ad5d-53fb4eae495a/activities' \
--header 'Content-Type: application/json' \
--data-raw '{"channelData":{"clientActivityID":"16122752442687lh97zrqfx2","clientTimestamp":"2021-02-02T14:14:04.268Z"},"text":"who are you","textFormat":"plain","type":"message","channelId":"webchat","from":{"id":"User","name":"","role":"user"},"locale":"es-ES","timestamp":"2021-02-02T14:14:04.268Z","entities":[{"requiresBotState":true,"supportsListening":true,"supportsTts":true,"type":"ClientCapabilities"}]}'
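The same POST can be sketched in Node. Note the minimal payload here (type, text, from) is an assumption on my part; the curl example above sends more fields, which the connector may or may not require:

```javascript
// Post a user message (an "activity") to an existing conversation.
// `fetchImpl` is injected so the helper can be tested with a stub.
async function sendActivity(fetchImpl, baseUrl, conversationId, text) {
  const res = await fetchImpl(
    `${baseUrl}/directline/conversations/${conversationId}/activities`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        type: 'message',
        text,
        from: { id: 'User', role: 'user' },
      }),
    }
  );
  // Note: 200 only means the activity was queued, not that the NLP replied yet.
  return res.status;
}
```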

This adds one activity to the conversation, but you'll receive the 200 even before the message is actually processed by the NLP. Why? Imagine that this message triggers a long database process, or sets an alarm of the type "remind me in 12 hours": you cannot leave the REST call hanging there for hours. Even worse, the reaction to one message is not always exactly one message. It can be 0 messages, it can be 10 messages, it can be a message every minute. So you cannot answer in the same REST call.

The way this is handled is that there is a list of activities per conversation; both the user and the backend can add activities to it.

You can retrieve all the activities with:

curl --location --request GET 'http://localhost:3000/directline/conversations/239717d9-5589-78a9-ad5d-53fb4eae495a/activities'  

The answer will look like this:

{
    "activities": [
        {
            "channelData": {
                "clientActivityID": "16122752442687lh97zrqfx2",
                "clientTimestamp": "2021-02-02T14:14:04.268Z"
            },
            "text": "who are you",
            "textFormat": "plain",
            "type": "message",
            "channelId": "emulator",
            "from": {
                "id": "User",
                "name": "",
                "role": "user"
            },
            "locale": "es-ES",
            "timestamp": "2021-02-02T14:14:04.268Z",
            "entities": [
                {
                    "requiresBotState": true,
                    "supportsListening": true,
                    "supportsTts": true,
                    "type": "ClientCapabilities"
                }
            ],
            "serviceUrl": "http://localhost:3000",
            "conversation": {
                "id": "239717d9-5589-78a9-ad5d-53fb4eae495a"
            },
            "address": {
                "conversation": {
                    "id": "239717d9-5589-78a9-ad5d-53fb4eae495a"
                }
            },
            "id": "56dcd81e-8c71-307f-5320-4340c4c0691e"
        },
        {
            "type": "message",
            "serviceUrl": "http://localhost:3000",
            "channelId": "emulator",
            "conversation": {
                "id": "239717d9-5589-78a9-ad5d-53fb4eae495a"
            },
            "recipient": {
                "id": "User",
                "name": "",
                "role": "user"
            },
            "inputHint": "acceptingInput",
            "replyToId": "56dcd81e-8c71-307f-5320-4340c4c0691e",
            "id": "2ca146a0-2b40-7405-bccd-5b80274f16bf",
            "from": {
                "id": "directline",
                "name": "directline"
            },
            "text": "I am a virtual app"
        }
    ],
    "watermark": 2
}
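The bot's answers can be picked out of that array with a small pure helper. In the sample above the bot reply comes from "directline" and carries a replyToId, while the user's own message has from.role set to "user"; the filter below relies on that shape, which is an assumption drawn from this sample rather than a documented contract:

```javascript
// Return the texts of bot messages from a Direct Line `activities` array,
// skipping the activities the user posted themselves.
function botReplies(activities) {
  return activities
    .filter((a) => a.type === 'message' && (!a.from || a.from.role !== 'user'))
    .map((a) => a.text);
}
```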

As you can see, you receive the activities but also a watermark. This is used for filtering, so you can now call:

curl --location --request GET 'http://localhost:3000/directline/conversations/239717d9-5589-78a9-ad5d-53fb4eae495a/activities?watermark=2'  

to tell the backend "I have already processed all messages up to watermark 2, so only send me messages from there on".
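Put together, a consumer on another node server could poll with the watermark like this. A sketch under the same assumptions as above: `fetchImpl` is injected, and the field names are taken from the sample response:

```javascript
// Fetch only the activities newer than `watermark` and return them together
// with the server's new watermark, so each activity is processed exactly once.
async function pollActivities(fetchImpl, baseUrl, conversationId, watermark = 0) {
  const res = await fetchImpl(
    `${baseUrl}/directline/conversations/${conversationId}/activities?watermark=${watermark}`
  );
  const body = await res.json();
  return { activities: body.activities, watermark: body.watermark };
}
```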

Again, this protocol is from Microsoft, so you can obtain information here: https://docs.microsoft.com/en-us/azure/bot-service/rest-api/bot-framework-rest-direct-line-3-0-start-conversation?view=azure-bot-service-4.0

Another way, if you already know that there will be only one answer per user message, and that this answer can be produced in a reasonable time, is to use the REST connector:

npm i @nlpjs/rest-connector

Given your example:

const { RestConnector } = require('@nlpjs/rest-connector');
...
  const dock = await dockStart();
  const container = dock.getContainer();
  container.use(RestConnector);
  await container.get('rest').start();

Now you can get a conversation id like this:

curl --location --request GET http://localhost:3000/rest/token

You’ll obtain something like:

{"id":"84923623-ac54-bf8d-e6a2-bb2d961d928c"}

Now to talk with the bot:

curl --location --request GET 'http://localhost:3000/rest/talk?conversationId=84923623-ac54-bf8d-e6a2-bb2d961d928c&text=who%20are%20you'

And you’ll obtain:

{"type":"message","channelId":"rest","conversation":{"id":"84923623-ac54-bf8d-e6a2-bb2d961d928c"},"inputHint":"acceptingInput","id":"f5a808ac-faef-a2cb-831f-fdafe9e7954e","from":{"id":"rest","name":"rest"},"text":"I am a virtual app"}
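Since the REST connector returns the single answer in the same call, the client side reduces to one GET. A minimal sketch, with the helper name illustrative, `fetchImpl` injected, and the query parameters URL-encoded (which the raw curl line would also need when the shell treats `&` specially):

```javascript
// Ask the bot one question through the REST connector and return its answer text.
async function talk(fetchImpl, baseUrl, conversationId, text) {
  const url =
    `${baseUrl}/rest/talk` +
    `?conversationId=${encodeURIComponent(conversationId)}` +
    `&text=${encodeURIComponent(text)}`;
  const res = await fetchImpl(url);
  const activity = await res.json();
  return activity.text;
}
```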
0 reactions
atubo2012 commented, Apr 21, 2021


Thank you for sharing the solution.

Is there any consideration on the nlp.js roadmap for a solution based on socket.io, so that clients could get answers from the bot as soon as they are ready, without needing to send requests with the watermark parameter?

Is there any other method to get an event when the activities have been updated, besides requesting /directline/conversations/:conversationId/activities?watermark=2?
