does this work with threads?
I tried using this library with concurrent.futures.ThreadPoolExecutor to fill up the cache, like so:
import requests
import requests_cache
from concurrent.futures import ThreadPoolExecutor, as_completed
from requests.adapters import HTTPAdapter

adapter = HTTPAdapter(max_retries=5)
session = requests_cache.core.CachedSession()
session.mount('http://', adapter)

urls = [....]

with ThreadPoolExecutor(max_workers=7) as executor:
    future_to_url = {executor.submit(session.get, url): url for url in urls}
    for i, future in enumerate(as_completed(future_to_url)):
        url = future_to_url[future]
        response = future.result()
It seemed to be working pretty slowly, and I wasn’t sure exactly what was going on. Do you know if this is supported?
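For what it’s worth, a minimal sketch of one common workaround is shown below: give each worker thread its own session via threading.local(), so the threads never share a single session object. The cache name 'demo_cache', the get_session/fetch helpers, and the placeholder URLs are assumptions made for illustration, not part of the original report:

import threading
from concurrent.futures import ThreadPoolExecutor

import requests_cache

thread_local = threading.local()

def get_session():
    # Lazily create one CachedSession per thread ('demo_cache' is a
    # hypothetical cache name chosen for this sketch)
    if not hasattr(thread_local, 'session'):
        thread_local.session = requests_cache.CachedSession('demo_cache')
    return thread_local.session

def fetch(url):
    return get_session().get(url)

urls = ['https://httpbin.org/get'] * 10  # placeholder URLs
with ThreadPoolExecutor(max_workers=7) as executor:
    responses = list(executor.map(fetch, urls))

Whether this avoids contention on the underlying cache depends on the backend and library version, so treat it as a starting point rather than a definitive fix.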

I’m having issues using requests_cache with threads and the default sqlite backend.
I’m making requests that take a short time to finish, with 40 threads, and in some cases I get the following error:
database is locked
I’m using Pool from multiprocessing:
from multiprocessing import Pool
where func performs a get request.
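For reference, here is a minimal sketch of the kind of setup described above, assuming a hypothetical worker named func, a placeholder cache name, and placeholder URLs (the actual code was not included in the report):

from multiprocessing import Pool

import requests_cache

def func(url):
    # Hypothetical worker: each process builds its own CachedSession, but with
    # the default sqlite backend they all write to the same database file, which
    # is where concurrent writes can surface a "database is locked" error.
    session = requests_cache.CachedSession('demo_cache')
    return session.get(url).status_code

if __name__ == '__main__':
    urls = ['https://httpbin.org/get'] * 100  # placeholder URLs
    with Pool(40) as pool:
        results = pool.map(func, urls)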
@JWCook Sorry, but I’m not working on that project currently, so I can’t answer.