Memory leak using python3.8 and boto3 client
Describe the bug
We upgraded from python2.7 to python3.8 and are using boto3-1.18.31 and botocore-1.23.20. uname -a output:
Linux 3.10.0-1160.45.1.el7.x86_64 #1 SMP Wed Oct 13 17:20:51 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Steps to reproduce
Below is the snippet we are using:
import boto3

def poll_queue(client, qurl):
    messages = client.receive_message(QueueUrl=qurl, MaxNumberOfMessages=1, WaitTimeSeconds=0)
    if 'ResponseMetadata' in messages:
        response_meta = messages['ResponseMetadata']
    else:
        return False

def main():
    sqs = boto3.resource('sqs', region_name='us-west-2')
    client = boto3.client('sqs', region_name='us-west-2')
    onboard_tmp = (sqs.get_queue_by_name(QueueName='test1_q.fifo')).url
    orch_tmp = (sqs.get_queue_by_name(QueueName='test2_q.fifo')).url
    inst_tmp = (sqs.get_queue_by_name(QueueName='test3_q.fifo')).url
    while True:
        poll_queue(client, onboard_tmp)
        poll_queue(client, orch_tmp)
        poll_queue(client, inst_tmp)

main()
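To see which allocations grow across iterations of a loop like the one above, tracemalloc can diff heap snapshots taken before and after a batch of calls. This is a generic stdlib sketch, not the issue's original debugging method; the leaky() helper is a stand-in that simulates growth, not a boto3 call:

```python
import tracemalloc

tracemalloc.start()
baseline = tracemalloc.take_snapshot()

def leaky(store):
    # Stand-in for client.receive_message; retains data to simulate a leak.
    store.append(b"x" * 1024)

store = []
for _ in range(1000):
    leaky(store)

current = tracemalloc.take_snapshot()
# Top 5 allocation sites by growth since the baseline snapshot.
for stat in current.compare_to(baseline, "lineno")[:5]:
    print(stat)
```

In the real reproduction, the snapshot diff would be taken around a few thousand receive_message calls instead; lines attributed to botocore internals growing without bound would point at the leaking allocation site.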
On examining the process memory usage, it continuously leaks. We run the following command to check (note that the leak is slow; ultimately the Linux kernel OOM-kills the process):
while true; do ps -o pid,user,%mem,command ax | sort -b -k3 -r | grep <pid>; date; sleep 5; done
>>> 51384 root 5.1 python3 sqs_leak_test.py
The memory usage shown above is 5.1%. It started at 0.1% and reached 5.1% over the course of a few hours; left alone it would eventually approach 100%.
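Instead of the external ps loop, the process can also log its own peak RSS each iteration, which makes the growth curve easier to correlate with the polling calls. A minimal sketch, assuming Linux (where ru_maxrss is reported in kilobytes; on macOS it is bytes):

```python
import resource

def rss_kb():
    # Peak resident set size of the current process, in KB on Linux.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

print(rss_kb())
```

Calling rss_kb() once per polling iteration and printing it with a timestamp gives the same trend the ps loop shows, without an external shell.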
Expected behavior
Confirmed with python2.7 on the same host that the issue does not happen; "receive_message" is the culprit here.
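One stdlib-only way to support the claim that a particular call leaks is to compare live-object counts by type before and after a burst of calls; types whose counts grow without bound are suspects. This is a generic sketch (it is not from the original report, and in the real test the burst would be receive_message calls):

```python
import gc
from collections import Counter

def top_types(n=5):
    # Count live objects by type name to spot unbounded growth.
    counts = Counter(type(o).__name__ for o in gc.get_objects())
    return counts.most_common(n)

before = top_types()
# ... run a burst of the suspect calls here, then compare ...
after = top_types()
print(before)
print(after)
```

If counts for dict, bytes, or library-specific types keep climbing between bursts even after gc.collect(), something is retaining references across calls.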
Debug logs
Let me know if more info is needed.
Issue Analytics
- State:
- Created 2 years ago
- Comments: 10 (5 by maintainers)
Top GitHub Comments
Let me ask our devops team; I will get back. I created a new instance with Ubuntu and it does not seem to leak with the same program. I will try different variations now and report back.
Since we haven’t heard back for a few months I’m going to close this. If you’re still experiencing the issue please let us know. Thanks!