Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might look while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

[QUERY] ChatCompletionAsync Timeout / Internal server error

See original GitHub issue

Library name and version

Azure.AI.OpenAI 1.0.0-beta.6

Query/Question

Hi, ChatCompletionAsync sometimes waits for several minutes without returning a response.

The frequency increased with the Azure OpenAI Studio gpt-35-turbo (0613) 16K model.

Is there a way to set a timeout when initializing OpenAIClient?

Sometimes it returns an “internal server error” after a few minutes.

Environment

No response

Issue Analytics

  • State: closed
  • Created 2 months ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
navba-MSFT commented on Jul 25, 2023

@fatihyildizhan As a workaround for now, you can use a custom policy to set a timeout on all requests, as shown below.

You can use the following constructor:

OpenAIClient(Uri, TokenCredential, OpenAIClientOptions) – Initializes a new instance of OpenAIClient for use with an Azure OpenAI resource.

Then use the AddPolicy(HttpPipelinePolicy, HttpPipelinePosition) method of the OpenAIClientOptions class.

Note: The sample below is for the JavaScript SDK; you can adapt it for the .NET SDK.

// Build the client with a retry policy and a custom per-call timeout policy.
this.openai = new OpenAIClient(
  process.env.AZURE_OPENAI_URL,
  new AzureKeyCredential(process.env.AZURE_API_KEY),
  {
    apiVersion: process.env.AZURE_DEPLOYMENT_VERSION,
    retryOptions: {
      maxRetries: 3,
    },
    additionalPolicies: [
      {
        policy: {
          name: "customTimeoutPolicy",
          sendRequest(request, next) {
            // Example that sets a timeout of 10 seconds on every request.
            request.timeout = 10 * 1000;
            return next(request);
          },
        },
        // "perCall" runs the policy once per logical operation, before retries.
        position: "perCall",
      },
    ],
  }
);

Hope this helps.
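Since the issue itself is about the .NET SDK (Azure.AI.OpenAI 1.0.0-beta.6), here is a minimal sketch of the same idea adapted for .NET. It does not reproduce the custom pipeline policy from the JavaScript sample; instead it relies on Azure.Core's built-in Retry.NetworkTimeout setting on OpenAIClientOptions plus a CancellationToken on the call, and it assumes the beta.6 API surface (GetChatCompletionsAsync, ChatMessage/ChatRole). The endpoint, key, and deployment name are placeholders.

using System;
using System.Threading;
using Azure;
using Azure.AI.OpenAI;

// Placeholder endpoint, key, and deployment name: replace with your own values.
var endpoint = new Uri("https://<your-resource>.openai.azure.com/");
var credential = new AzureKeyCredential("<your-api-key>");
var deploymentName = "gpt-35-turbo-16k";

var options = new OpenAIClientOptions();
// Per-attempt network timeout: a hung request is aborted (and retried)
// instead of waiting for minutes.
options.Retry.NetworkTimeout = TimeSpan.FromSeconds(60);
options.Retry.MaxRetries = 3;

var client = new OpenAIClient(endpoint, credential, options);

var chatOptions = new ChatCompletionsOptions();
chatOptions.Messages.Add(new ChatMessage(ChatRole.User, "Hello!"));

// Overall deadline for the call, covering all retry attempts.
using var cts = new CancellationTokenSource(TimeSpan.FromMinutes(2));
Response<ChatCompletions> response =
    await client.GetChatCompletionsAsync(deploymentName, chatOptions, cts.Token);

Console.WriteLine(response.Value.Choices[0].Message.Content);

If you want to mirror the per-call policy from the JavaScript sample instead, OpenAIClientOptions also exposes the AddPolicy(HttpPipelinePolicy, HttpPipelinePosition) method mentioned above; the NetworkTimeout setting is simply the smaller change for the common case of a single stuck request.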

1 reaction
navba-MSFT commented on Jul 25, 2023

@fatihyildizhan Thanks for reaching out to us and reporting this issue. We are looking into this issue and we will provide an update.

Read more comments on GitHub >

Top Results From Across the Web

  • Managing timeout when waiting for the response from chat ...
    I tried to get chat completion (gpt-4) for a prompt that is expected to return a long reply: async function askGPT(messages) { const ...
  • Message "Async callback was not invoked within the 5000 ..."
    The timeout problem occurs when either the network is slow or many network calls are made using await. These scenarios exceed the ...
  • Configuring timeout for ChatCompletion Python - API
    I have tried setting this timeout and while it is accepted by OpenAI without error it does not seem to have an effect ...
  • How to fix regular Timeout errors when using chatGPT3.5 ...
    Hi, I have a Python V2 Function App that interacts with Azure OpenAI's chatGPT3.5 Turbo. To ensure that the max tokens aren't exceeded, ...
  • Create an Azure OpenAI, LangChain, ChromaDB, and ...
    This article shows how to quickly build chat applications using Python and leveraging powerful technologies such as OpenAI ChatGPT models ...

