
`Subreddit.new()` in a `while` loop without `time.sleep()` issues 2 or 3 GET requests per second.

See original GitHub issue

Describe the bug

It might not be a bug, but I’d like to let you know: `Subreddit.new()` in a `while` loop without `time.sleep()` issues 2 or 3 GET requests per second, which seems like too much. Since the README says "With PRAW there’s no need to introduce sleep calls in your code.", beginners won’t include `time.sleep()`, and that may annoy the remote server.

To Reproduce

> cat no_sleep.py
import praw
import logging

# Log each HTTP request PRAW makes so the request rate is visible.
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(message)s', datefmt='%I:%M:%S')

reddit = praw.Reddit('nmtake')
while True:
    # Fetch the newest submission on every iteration, with no delay in between.
    list(reddit.subreddit('redditdev').new(limit=1))

> python no_sleep.py 2>&1 | grep Fetching
10:49:27 Fetching: GET https://oauth.reddit.com/r/redditdev/new
10:49:28 Fetching: GET https://oauth.reddit.com/r/redditdev/new
10:49:28 Fetching: GET https://oauth.reddit.com/r/redditdev/new
10:49:29 Fetching: GET https://oauth.reddit.com/r/redditdev/new
10:49:30 Fetching: GET https://oauth.reddit.com/r/redditdev/new
10:49:30 Fetching: GET https://oauth.reddit.com/r/redditdev/new
10:49:31 Fetching: GET https://oauth.reddit.com/r/redditdev/new
10:49:32 Fetching: GET https://oauth.reddit.com/r/redditdev/new
^C
>
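The log above works out to between one and two requests per second; a quick way to check is to parse the timestamps (an illustrative calculation, not part of the original report):

```python
from datetime import datetime

# Timestamps copied from the "Fetching" log lines above.
stamps = ["10:49:27", "10:49:28", "10:49:28", "10:49:29",
          "10:49:30", "10:49:30", "10:49:31", "10:49:32"]

times = [datetime.strptime(s, "%I:%M:%S") for s in stamps]
span = (times[-1] - times[0]).total_seconds()  # 5 seconds from first to last
rate = (len(times) - 1) / span                 # completed requests per second
print(f"{rate:.1f} requests/second")           # → 1.4 requests/second
```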

Expected behavior

Not expected exactly, but I hope PRAW sleeps automatically (say, 1 second per request) when the remote server doesn’t return Ratelimit headers.
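A client-side fallback of the kind requested could be sketched as follows. This is not PRAW's actual behavior; `throttled` and `min_interval` are hypothetical names for illustration:

```python
import time

def throttled(fetch, min_interval=1.0):
    """Wrap a zero-argument callable so successive calls are spaced at
    least ``min_interval`` seconds apart (hypothetical helper)."""
    last = [0.0]  # monotonic time of the previous call

    def wrapper():
        wait = last[0] + min_interval - time.monotonic()
        if wait > 0:
            time.sleep(wait)  # fall back to a fixed delay
        last[0] = time.monotonic()
        return fetch()

    return wrapper

# Usage sketch: poll at most once per second.
# fetch_new = throttled(lambda: list(reddit.subreddit('redditdev').new(limit=1)))
# while True:
#     fetch_new()
```

This only enforces a floor between requests; a real client would still honor any Ratelimit headers the server does return.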

System Info

  • OS: Arch Linux x86_64
  • Python: 3.8.4
  • PRAW Version: 7.1.0

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 9 (9 by maintainers)

Top GitHub Comments

1 reaction
Watchful1 commented, Aug 1, 2020

There were no sleeps at all. It was sending requests just as fast at the end as it did at the beginning. But I think that’s correct behavior by PRAW. If reddit doesn’t rate limit these types of requests, then there’s no point sleeping for them.

0 reactions
nmtake commented, Aug 1, 2020

Thanks for the test! It’s okay as long as that behavior isn’t unexpected.


Top Results From Across the Web

Does putting time.sleep() in a while True loop stress the CPU ...
This means that every iteration now takes 0.1 seconds instead of 0 seconds, which means you can only do 10 operations in a...

time.sleep inside a while loop seems to not working correctly
I am trying to acquire the data at 1 Hz so I am using time.sleep(1) inside a while loop to read the data...

Python Reddit API Wrapper Documentation - PRAW
Each API request to Reddit must be separated by a 2 second delay, as per the API rules. So to get the highest...

python scrape load more button
Web scraping without getting blocked using Python - or any other tool - is ... more clarity over how to scrape data by...

Speeding Up Python with Concurrency, Parallelism, and asyncio
Details what concurrency and parallel programming are in Python and ... ClientSession() class is what allows us to make HTTP requests and ...
