
Unable to generate bot response. Details: Error: 400: => Runtime error: Function `ChatSkill.ExtractUserMemories` execution failed. Microsoft.SemanticKernel.AI.AIException: Invalid request: The request is not valid, HTTP status: 400

See original GitHub issue

Describe the bug
I can run webapi and webapp, but the browser shows the error message 'Unable to generate bot response. Details: Error: 400: => Runtime error: Function ChatSkill.ExtractUserMemories execution failed. Microsoft.SemanticKernel.AI.AIException: Invalid request: The request is not valid, HTTP status: 400' (screenshot attached to the original issue).


To Reproduce

  1. Run webapi
  2. Run webapp
  3. Go to http://localhost:3000/
  4. Enter the chat message: How many planets are there in the solar system?
  5. See the error: Unable to generate bot response. Details: Error: 400: => Runtime error: Function ChatSkill.ExtractUserMemories execution failed. Microsoft.SemanticKernel.AI.AIException: Invalid request: The request is not valid, HTTP status: 400

Screenshots
The appsettings.json file and the webapi console output are shown in screenshots attached to the original issue.

Desktop:

  • OS: Windows 10 (64-bit)
  • IDE: run from PowerShell

Issue Analytics

  • State: closed
  • Created: 4 months ago
  • Reactions: 1
  • Comments: 10 (1 by maintainers)

Top GitHub Comments

1 reaction
firelake commented, May 18, 2023

Update: the 404 error I mentioned above occurred because the embedding deployment/model id “text-embedding-ada-002” specified in the config had not been deployed in my Azure OpenAI resource. After deploying the model, I was able to set up and use the chat app successfully without any errors.
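If you are hitting the same thing, one way to check which deployments actually exist in the resource is the Azure OpenAI data-plane deployments listing API. The snippet below is only an illustrative sketch (the resource name and key are placeholders, it uses the older api-version 2022-12-01, and it is not part of the chat-copilot code); you can also just check the Deployments page in Azure OpenAI Studio:

# Illustrative sketch: list the deployments in an Azure OpenAI resource and check
# whether "text-embedding-ada-002" is among them (placeholders must be replaced).
import requests

ENDPOINT = "https://<your-resource>.openai.azure.com"  # same value as "Endpoint" in appsettings.json
API_KEY = "<your-key>"                                  # same value as "Key" in appsettings.json

resp = requests.get(
    f"{ENDPOINT}/openai/deployments",
    params={"api-version": "2022-12-01"},  # older data-plane API version that supports listing
    headers={"api-key": API_KEY},
)
resp.raise_for_status()
deployments = [d.get("id") for d in resp.json().get("data", [])]
print("Deployed ids:", deployments)
print("text-embedding-ada-002 deployed?", "text-embedding-ada-002" in deployments)

If the embedding deployment is missing from that list, the webapi's memory extraction call has nothing to hit, which is consistent with the 404 described above.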

1 reaction
hathind-ms commented, May 16, 2023

Can you please confirm that the AI endpoint and key values are set in the Embedding section of appsettings.json:

 "Embedding": {
    "Label": "Embeddings",
    "AIService": "AzureOpenAI",
    "DeploymentOrModelId": "text-embedding-ada-002",
    "Endpoint": "<value>", // ignored when AIService is "OpenAI"
    "Key": "<value>"
  },
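If those values are set but the 400 persists, a quick way to test them outside the webapi is to call the embeddings endpoint directly with the same Endpoint, Key, and DeploymentOrModelId. The snippet below is only an illustrative sketch with placeholder values, not part of the chat-copilot code:

# Illustrative sketch: send one test request to the configured embedding deployment.
# A 404 usually means the deployment name is wrong or not deployed; a 401 points at the key.
import requests

ENDPOINT = "https://<your-resource>.openai.azure.com"  # "Endpoint" from the Embedding section
API_KEY = "<your-key>"                                  # "Key" from the Embedding section
DEPLOYMENT = "text-embedding-ada-002"                   # "DeploymentOrModelId" from the Embedding section

resp = requests.post(
    f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}/embeddings",
    params={"api-version": "2023-05-15"},
    headers={"api-key": API_KEY},
    json={"input": "How many planets are there in the solar system?"},
)
print(resp.status_code)
print(resp.json())

If this call succeeds, the Embedding section values are fine and the problem is elsewhere in the webapi configuration.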
Read more comments on GitHub >

Top Results From Across the Web

"HTTP 400 Bad Request" error when proxying ...
This error (HTTP 400 Bad Request) means that Internet Explorer was able to connect to the web server, but the webpage could not...
Read more >
Error codes - OpenAI API
This guide includes an overview on error codes you might see from both the API and our official Python library. Each error code...
Read more >
400 BAD request HTTP error code meaning?
A 400 means that the request was malformed. In other words, the data stream sent by the client to the server didn't follow...
Read more >
REST API HTTP response codes
400, Bad Request, The request URI does not match the APIs in the system, or the operation failed for unknown reasons. Invalid headers...
Read more >
HTTP 400 status code (Bad Request) - Amazon CloudFront
To fix this error, update your CloudFront distribution so that it finds the S3 bucket in the bucket's current AWS Region. To update...
Read more >
