database locked even with minimal concurrency
I am trying to insert from multiple threads. I have 4 worker threads that pull data off a queue (producer/consumer model) and do an insert_many on that data.
# Worker thread methods (assumes `import time` and `from queue import Empty`)
def run(self):
    while True:
        try:
            r = self.q.get_nowait()
        except Empty:
            r = None
        if r is None:
            # TODO: signalling
            time.sleep(0.05)
            continue
        self.process(r)

def process(self, item):
    table, data = item
    # each batch is written inside its own transaction
    with db.transaction():
        table.insert_many(data).execute()
Here is how I set up my connection:
db = SqliteDatabase(database, autocommit=False, threadlocals=True, pragmas=(('journal_mode', 'WAL'), ('cache_size', 10000)))
No matter how I set it up, I occasionally get peewee.OperationalError: database is locked.
I am guessing that the insert_many is holding the lock too long and a timeout is being hit. Am I doing this incorrectly, or should I be handling the "database is locked" error and retrying?
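One possible shape for the retry approach, sketched under assumptions of mine rather than anything peewee prescribes: the helper name retry_on_lock and the retries/delay/timeout values are made up for illustration. The first part retries a callable when SQLite reports "database is locked"; the comment at the end shows the alternative of letting SQLite wait for the lock itself via the stock busy_timeout pragma (value in milliseconds).

import time
import peewee

def retry_on_lock(fn, retries=5, base_delay=0.1):
    # Retry a callable whenever SQLite reports "database is locked",
    # backing off exponentially between attempts.
    for attempt in range(retries):
        try:
            return fn()
        except peewee.OperationalError as exc:
            if 'database is locked' not in str(exc) or attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# e.g. inside process():
#     retry_on_lock(lambda: table.insert_many(data).execute())

# Or ask SQLite to wait for the lock by adding busy_timeout (milliseconds)
# to the pragmas passed at connection setup:
#     pragmas=(('journal_mode', 'WAL'), ('cache_size', 10000), ('busy_timeout', 5000))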
Issue Analytics
- Created 7 years ago
- Comments: 6 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Posting here because it is the last issue related to the database locked error. I've been dealing with this issue for the last few months in my application, and here is how I solved it. It was actually quite simple.
Reading on the web about the problem, most people seem to solve it by passing timeout=10. My app runs on very old/cheap computers, and that was simply not cutting it; I needed to pass an absurdly large value, timeout=100000, to make it work.
That raised the question: is timeout in seconds or milliseconds? I have a couple of writes that take more than a second, and thought setting it to 10 would solve the problem (because I assumed it was 10 seconds), but it was only solved after making it larger than 1000.
Keep reading if you are using Qt.
This problem was terribly inconsistent for me, because somehow the way Qt handles its threads prevented it from happening most of the time, and it was impossible to debug because I couldn't reproduce it on demand. The solution was also quite simple: create some Python threads (not Qt threads), run a bunch of concurrent writes, and tweak the timeout number until it stops locking (a sketch of such a test follows below).
Hope this helps some other adventurous developers dealing with incredibly bad computers. 😉
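A minimal sketch of the kind of stress test described above, assuming current peewee (3.x) APIs such as connection_context() and atomic(); the model KV, the file test.db, the thread count, and the row count are throwaway values invented for illustration. Plain Python threads issue many small concurrent writes so the lock shows up on demand, and you can raise the timeout until the error count drops to zero.

import threading
import peewee

db = peewee.SqliteDatabase('test.db', pragmas=(('journal_mode', 'WAL'),))

class KV(peewee.Model):
    key = peewee.TextField()
    value = peewee.TextField()
    class Meta:
        database = db

db.connect()
db.create_tables([KV])
db.close()

errors = []

def hammer(worker_id, rows=200):
    # Each thread opens its own connection and writes in small transactions.
    for i in range(rows):
        try:
            with db.connection_context():
                with db.atomic():
                    KV.create(key='%s-%s' % (worker_id, i), value='x' * 100)
        except peewee.OperationalError as exc:
            errors.append(exc)

threads = [threading.Thread(target=hammer, args=(n,)) for n in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print('%d "database is locked" errors' % len(errors))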
Thanks for bringing this to my attention. I’ve updated the interfaces and docs so that milliseconds are used everywhere: 171af17f5cced96c15d5e32e832474aa1618ab8c