boto3 client for lambda stops reading from a connection after 300 seconds
I have a Lambda function that needs to execute for 5 minutes or longer. I invoke this Lambda function using boto3 and wait for the response (the response being a JSON object). I noticed that even when using `botocore.config.Config` to increase `read_timeout` on the `boto3.client`, the client keeps the connection open but stops reading from it after 300 seconds (5 minutes), and then throws a `ReadTimeoutError` exception once the specified `read_timeout` value elapses.
Below are the code snippets to reproduce the issue:
Client machine:
- Python: 3.5
- OS: Ubuntu 16.04 (Linux)
- botocore: 1.14.11
```python
import json

import boto3
import botocore

config = botocore.config.Config(read_timeout=310, connect_timeout=310, retries={'max_attempts': 0})
client = boto3.client('lambda', config=config)

payload = {"body": some_string}  # some_string: placeholder for the request body
payload = json.dumps(payload)

response = client.invoke(
    FunctionName='arn:aws:lambda:<region>:xxxxxxxxxxxx:function:function_name',
    InvocationType='RequestResponse',
    Payload=payload,
)
```
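One mitigation worth trying (a sketch, not a confirmed fix for this issue): newer botocore releases expose a `tcp_keepalive` option on `botocore.config.Config`, which asks the OS to send TCP keepalive probes on the underlying socket. If an idle intermediary (NAT gateway, proxy, firewall) is silently dropping the connection at the 300-second mark, keepalives can prevent that. This assumes a botocore version newer than 1.14.11 that supports the option:

```python
# Sketch: same client configuration as above, plus TCP keepalive.
# Assumption: your botocore version supports Config(tcp_keepalive=...);
# it is not available in botocore 1.14.11, so an upgrade would be required.
config_kwargs = {
    'read_timeout': 310,
    'connect_timeout': 310,
    'retries': {'max_attempts': 0},
    'tcp_keepalive': True,  # send keepalive probes while the socket is idle
}

# With botocore/boto3 installed:
# import boto3, botocore.config
# config = botocore.config.Config(**config_kwargs)
# client = boto3.client('lambda', config=config)
```

Whether this helps depends on what is actually cutting the connection; it does not change Lambda-side behavior.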
Lambda function code (Python 3.7 runtime):
```python
import time
import json

def lambda_handler(event, context):
    time.sleep(300)
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
```
The above code produces the following error after 310 seconds:

```
botocore.exceptions.ReadTimeoutError: Read timeout on endpoint URL
```

I have tried experimenting with multiple values of `time.sleep()` and `read_timeout`. After checking the CloudWatch logs I am fairly certain this only happens when the Lambda function executes for 300 seconds or more (this can be verified by changing the sleep time in the Lambda function to 299). I have referred to issues #1104 and #205. What seems to be the issue here? Any help would be much appreciated.
Issue Analytics
- State:
- Created: 4 years ago
- Comments: 6 (3 by maintainers)
We’re experiencing an issue exactly like what is described above. Was there ever any real resolution here?
Closing this issue due to inactivity. Please reopen if you have any questions.
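For anyone hitting the same 300-second cutoff: a commonly suggested workaround (a sketch under assumptions, not an official resolution from this thread) is to avoid holding a single HTTP connection open for the whole run. Invoke the function asynchronously with `InvocationType='Event'`, which returns immediately with HTTP 202, and have the Lambda function write its result somewhere the caller can poll, such as S3 or DynamoDB. The helper name `invoke_async` below is hypothetical:

```python
import json

def invoke_async(client, function_arn, payload):
    """Fire-and-forget invocation: no long-lived connection to time out.

    `client` is a boto3 Lambda client. The function is expected to persist
    its own result (e.g. to S3) for the caller to poll later; that part is
    an assumption about your architecture and is not shown here.
    """
    return client.invoke(
        FunctionName=function_arn,
        InvocationType='Event',  # returns immediately; function runs in background
        Payload=json.dumps(payload),
    )
```

With `InvocationType='Event'` the response contains only a status code and a request ID, so the result must be retrieved out of band.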