`Subreddit.new()` in a `while` loop without `time.sleep()` issues 2 or 3 GET requests per second
Describe the bug
It might not be a bug, but I'd like to let you know: `Subreddit.new()` in a `while` loop without `time.sleep()` issues 2 or 3 GET requests per second. I think that's too much. Since the README says "With PRAW there's no need to introduce sleep calls in your code.", beginners don't include `time.sleep()` and may annoy the remote server.
To Reproduce
```python
> cat no_sleep.py
import praw
import logging

logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(message)s', datefmt='%I:%M:%S')
reddit = praw.Reddit('nmtake')
while True:
    list(reddit.subreddit('redditdev').new(limit=1))
```
```
> python no_sleep.py 2>&1 | grep Fetching
10:49:27 Fetching: GET https://oauth.reddit.com/r/redditdev/new
10:49:28 Fetching: GET https://oauth.reddit.com/r/redditdev/new
10:49:28 Fetching: GET https://oauth.reddit.com/r/redditdev/new
10:49:29 Fetching: GET https://oauth.reddit.com/r/redditdev/new
10:49:30 Fetching: GET https://oauth.reddit.com/r/redditdev/new
10:49:30 Fetching: GET https://oauth.reddit.com/r/redditdev/new
10:49:31 Fetching: GET https://oauth.reddit.com/r/redditdev/new
10:49:32 Fetching: GET https://oauth.reddit.com/r/redditdev/new
^C
>
```
Expected behavior
Not expected, exactly, but I hope PRAW sleeps automatically (say, 1 second per request) when the remote server doesn't return Ratelimit headers.
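Until something like that exists in PRAW itself, callers can enforce the minimum spacing themselves. A minimal sketch in plain Python (the `MinIntervalLimiter` class is illustrative, not part of PRAW's API):

```python
import time


class MinIntervalLimiter:
    """Ensure successive calls are at least `interval` seconds apart."""

    def __init__(self, interval: float = 1.0):
        self.interval = interval
        self._last = None  # monotonic timestamp of the previous call

    def wait(self):
        """Sleep just long enough to honor the minimum interval, then record the call."""
        now = time.monotonic()
        if self._last is not None:
            remaining = self._last + self.interval - now
            if remaining > 0:
                time.sleep(remaining)
        self._last = time.monotonic()
```

Calling `limiter.wait()` at the top of the `while` loop, before each `reddit.subreddit('redditdev').new(limit=1)` fetch, would cap the loop at roughly one request per second regardless of how fast each response arrives.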
System Info
- OS: Arch Linux x86_64
- Python: 3.8.4
- PRAW Version: 7.1.0
Issue Analytics
- State:
- Created 3 years ago
- Comments: 9 (9 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
There were no sleeps at all. It was sending requests just as fast at the end as it did at the beginning. But I think that’s correct behavior by PRAW. If reddit doesn’t rate limit these types of requests, then there’s no point sleeping for them.
Thanks for the test! It’s okay as long as that behavior isn’t unexpected.
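For polling use cases like the reproduce script, a common compromise is to back off while nothing new appears and reset once items arrive; PRAW's stream generators apply a similar idea internally. A minimal sketch (`poll_with_backoff` is a hypothetical helper, not PRAW's implementation; the injectable `sleep` parameter exists so the behavior can be tested without real delays):

```python
import time


def poll_with_backoff(fetch, sleep=time.sleep, min_pause=1.0, max_pause=16.0):
    """Yield items from repeated `fetch()` calls, pausing longer while idle.

    `fetch` should return a (possibly empty) list of new items.
    """
    pause = min_pause
    while True:
        items = fetch()
        yield from items
        # Reset the pause after activity; double it (capped) while idle.
        pause = min_pause if items else min(pause * 2, max_pause)
        sleep(pause)
```

With PRAW, `fetch` could be a small function that calls `reddit.subreddit('redditdev').new(limit=1)` and filters out submission IDs already seen.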