
[question] How to prevent race condition?

See original GitHub issue

I am using LiteDB in a web application.

However, I am having this issue: I am storing a list of users, and each user has a data usage number. On each request, the user's data usage number is loaded from LiteDB and increased. The problem: if the client makes many simultaneous requests, the number is loaded again before the previous update is saved, resulting in an incorrect data usage total.

To explain again: In database: [usage: 20]

request 1:

  • thread 1: server loads usage (is 20)
  • thread 1: does some processing (adds 10), does not update user in litedb collection yet

request 2 (happens during request 1's data processing):

  • thread 2: server loads usage (is 20) <-- This is my problem, I want to lock the user so other requests can't happen until the first request's processing is done
  • thread 2: server adds 10
  • thread 1 saves user (now usage is 30)
  • thread 2 saves user (now usage is 30, but should be 40)

The problem is that thread 2 doesn't wait for thread 1 to finish, so an incorrect final value is stored. I am using NancyFx with async/await requests; each request spawns a new thread.

I am using transactions only when saving the value.

thanks
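Editor's note: the race described above is a classic read-modify-write problem — the load, the increment, and the save have to happen as one atomic unit. A minimal sketch of the simplest fix for a single-process app is to serialize the whole sequence behind a lock. The `User` and `AddUsage` names here are hypothetical, not from the original code; the LiteDB calls (`GetCollection`, `FindById`, `Update`) are the library's standard collection API.

```csharp
using System.Threading;
using LiteDB;

public class User
{
    public string Id { get; set; }
    public int Usage { get; set; }
}

public static class UsageUpdater
{
    private static readonly object Gate = new object();

    // Serializes the whole load-increment-save sequence so a second
    // request cannot read the old value while the first is mid-update.
    public static int AddUsage(LiteDatabase db, string userId, int amount)
    {
        lock (Gate)
        {
            var users = db.GetCollection<User>("users");
            var user = users.FindById(userId);   // load (e.g. 20)
            user.Usage += amount;                // add  (e.g. +10)
            users.Update(user);                  // save (30)
            return user.Usage;
        }
    }
}
```

A single global lock serializes updates for *all* users; a per-user lock (see the comments below) avoids that bottleneck, and only works within one process — multiple server processes would need a shared lock instead.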

Issue Analytics

  • State: closed
  • Created 7 years ago
  • Comments: 17 (2 by maintainers)

Top GitHub Comments

2 reactions
DeanVanGreunen commented, Jan 6, 2017

Within the database:

Table: user_locks
Fields:
  • write_lock (bool, default value off)
  • read_lock (bool, default value off)

Before processing user data from the users table, for either a read (select) or a write (update/insert), check the values in the user_locks table.

Example structure:

Table: users
Id | username | data…
1  | Steve    | djdjsjs
2  | John     | djdbdbdb
3  | most     | dhdjsnsn

Table: user_locks
Id | read_lock | write_lock
1  | 0         | 0
2  | 0         | 0
3  | 0         | 0

Program example


On each access to user data in a thread, do the following:

When reading user data:
  • Check read_lock. If it is set to true, wait in a loop for x seconds, then check again, until read_lock is false.
  • Set write_lock to true.
  • Read the user data and do the processing.
  • Set write_lock to false.

When writing (saving/updating):
  • Check write_lock. Wait in a loop until the value is false (same as the read_lock logic).
  • Set read_lock to true.
  • Do the update/insert command, etc.
  • Set read_lock to false.

On Jan 6, 2017 2:51 AM, "nictaylr" notifications@github.com wrote:

could you please explain in more detail @xX-MLG-Xx https://github.com/xX-MLG-Xx

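Editor's note: the lock-table scheme described in this comment could be sketched roughly as below. The `UserLock` class and `LockTable` helper are hypothetical names, not from any real code in the thread; `Upsert` assumes LiteDB v4+. Be aware that the check-then-set on the flag is itself not atomic, so two threads can still race past it — in practice this scheme needs an additional in-process lock (or a single writer) to be safe.

```csharp
using System;
using System.Threading;
using LiteDB;

// Hypothetical record type mirroring the user_locks table above.
public class UserLock
{
    public int Id { get; set; }          // same Id as the users table
    public bool ReadLock { get; set; }
    public bool WriteLock { get; set; }
}

public static class LockTable
{
    // Poll the user_locks collection until the write_lock flag is clear,
    // then claim it. NOTE: FindById + Upsert here is check-then-set, not
    // atomic; this only sketches the commenter's scheme as described.
    public static void AcquireWriteLock(LiteDatabase db, int userId)
    {
        var locks = db.GetCollection<UserLock>("user_locks");
        while (true)
        {
            var entry = locks.FindById(userId) ?? new UserLock { Id = userId };
            if (!entry.WriteLock)
            {
                entry.WriteLock = true;
                locks.Upsert(entry);
                return;
            }
            Thread.Sleep(100); // wait, then check again
        }
    }

    public static void ReleaseWriteLock(LiteDatabase db, int userId)
    {
        var locks = db.GetCollection<UserLock>("user_locks");
        var entry = locks.FindById(userId);
        if (entry != null)
        {
            entry.WriteLock = false;
            locks.Upsert(entry);
        }
    }
}
```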

0 reactions
ghost commented, Jan 6, 2017

@xX-MLG-Xx Here’s my implementation: https://github.com/0xFireball/PenguinUpload/blob/2bede37e349d824d5188d636edfc535ec6c4d58a/PenguinUpload/src/PenguinUpload/Infrastructure/Concurrency/UserLock.cs#L45-L60

Please let me know if I misunderstood you or did anything wrong 😛

UPDATE: I thought this might be nicer (I added a nicer asynchronous API in addition to the existing methods): https://github.com/0xFireball/PenguinUpload/blob/495591975b3d3f9b97aee7eb7db63589e4e636cb/PenguinUpload/src/PenguinUpload/Infrastructure/Concurrency/UserLock.cs#L67-L90 This will auto-call the Obtain and Release locks, and is awaitable
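Editor's note: an awaitable per-user lock of the kind described here is commonly built from one `SemaphoreSlim` per user plus an `IDisposable` releaser, so `using (await …)` obtains and releases the lock automatically. This is a generic sketch under that assumption, not the linked `UserLock.cs` implementation; the `UserLocks`/`ObtainAsync` names are hypothetical.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// One SemaphoreSlim per user: simultaneous requests for the SAME user
// queue up, while requests for different users still run in parallel.
public static class UserLocks
{
    private static readonly ConcurrentDictionary<string, SemaphoreSlim> Locks =
        new ConcurrentDictionary<string, SemaphoreSlim>();

    // Await this in a using-statement; the lock is released on Dispose.
    public static async Task<IDisposable> ObtainAsync(string username)
    {
        var sem = Locks.GetOrAdd(username, _ => new SemaphoreSlim(1, 1));
        await sem.WaitAsync();
        return new Releaser(sem);
    }

    private sealed class Releaser : IDisposable
    {
        private readonly SemaphoreSlim _sem;
        public Releaser(SemaphoreSlim sem) { _sem = sem; }
        public void Dispose() { _sem.Release(); }
    }
}

// Usage inside a request handler — the whole read-modify-write is
// serialized per user, fixing the race from the original question:
//
// using (await UserLocks.ObtainAsync(username))
// {
//     var user = users.FindById(username);  // load usage
//     user.Usage += 10;                     // modify
//     users.Update(user);                   // save
// }
```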
