Support transaction isolation and retry on serializable error
Problem
In PostgreSQL, it is possible to change the isolation level of each transaction, or the default one, from `READ COMMITTED` to `REPEATABLE READ` or `SERIALIZABLE`. Those isolation modes are necessary for certain applications (in finance, for example), but they come at the price of serialization errors being raised. The recommended procedure in that case is to retry the transaction.
This is mostly needed when https://github.com/prisma/prisma2/issues/1844 lands.
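For illustration, here is a minimal sketch of that retry procedure. It assumes the underlying driver surfaces PostgreSQL's serialization-failure SQLSTATE (`40001`) on the error's `code` property; how the code actually surfaces depends on the driver, so treat that check and the `retrySerializable` name as assumptions, not Prisma API:

```ts
// Retry a transactional unit of work when PostgreSQL reports a
// serialization failure (SQLSTATE 40001). `run` is a placeholder for
// whatever executes the transaction body.
async function retrySerializable<T>(
  run: () => Promise<T>,
  maxRetries = 3,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await run();
    } catch (err: any) {
      // Rethrow anything that is not a serialization failure, or when
      // the retry budget is exhausted.
      if (err?.code !== '40001' || attempt >= maxRetries) throw err;
      // Small exponential backoff before retrying.
      await new Promise((resolve) => setTimeout(resolve, 10 * 2 ** attempt));
    }
  }
}
```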
Solution
The proposed solution would be twofold:
- Add a new keyword for isolation. For example:

```ts
prisma.user
  .isolation({ type: Isolation.REPEATABLE_READ, serializationRetry: 3 })
  .transaction(async (tx) => {});
```
- Add a global setting on the Prisma client for the retry count:

```ts
const prisma = new PrismaClient({ serializationRetry: 3 });
```
Alternatives
The current alternative is to use raw queries.
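As a rough sketch of that workaround, assuming the interactive (callback-style) `$transaction` API tracked in prisma2#1844 and a Prisma version that exposes `$executeRawUnsafe`; this is an illustration, not official Prisma guidance:

```ts
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function main() {
  await prisma.$transaction(async (tx) => {
    // SET TRANSACTION must run before any other statement in the transaction.
    await tx.$executeRawUnsafe('SET TRANSACTION ISOLATION LEVEL SERIALIZABLE');
    // ...queries that need serializable semantics go here...
  });
}

main().finally(() => prisma.$disconnect());
```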
Issue Analytics
- Created: 3 years ago
- Reactions: 6
- Comments: 5 (3 by maintainers)
As a follow-up to the query about isolation levels: it really depends on the objective and the design of the database. For example, suppose you are in the business of handling currency sale/purchase contracts, and you have a database that represents the current rates for all currencies (constantly being updated as the relative values change). If your process for creating a contract involves multiple reads from the database, you want everything you read to be 'frozen in time', so that all the details are accurate as of the time of processing. You cannot afford to have the exchange rates vary because one read happens before another.

This is where PostgreSQL's Repeatable Read isolation level comes in. It becomes the database's job to only let your transaction see data as it was at the time the transaction started. The transaction might have to capture all the relevant details as they were at that time as part of creating the contract, and it may take multiple reads to do that. If, during the creation of the contract, another process commits updates to one or more of the currencies, it doesn't matter: your contract-creation process is not supposed to see those changes.

It's a bit like when you arrange finance for a house and the agency offers to 'lock' the interest rate. If you take that option, regardless of what changes happen, your loan is written at whatever the locked rate was at that time.
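To make that concrete, here is a minimal sketch under the same assumptions as above (interactive transactions and the `$executeRawUnsafe`/`$queryRawUnsafe` escape hatches); the `exchange_rates` table and column names are hypothetical:

```ts
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function createContract() {
  await prisma.$transaction(async (tx) => {
    await tx.$executeRawUnsafe(
      'SET TRANSACTION ISOLATION LEVEL REPEATABLE READ',
    );
    // Both reads see the same snapshot (taken at the transaction's first
    // query), even if another session commits new rates in between.
    const usd = await tx.$queryRawUnsafe(
      `SELECT rate FROM exchange_rates WHERE currency = 'USD'`,
    );
    const eur = await tx.$queryRawUnsafe(
      `SELECT rate FROM exchange_rates WHERE currency = 'EUR'`,
    );
    // ...write the contract using the consistent pair of rates...
  });
}
```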
The Postgres documentation on the subject is pretty good: https://www.postgresql.org/docs/13/transaction-iso.html

For some workloads you might want operations to emulate a sequential, single-threaded queue, so you might set the level to `SERIALIZABLE`. But this means the transaction can fail and must be retried. This is really about letting devs use all the tools at their disposal to prevent concurrency issues. This feature is not exclusive and should be seen as part of a greater effort to help solve those bugs, be it with locks, `SELECT ... FOR UPDATE`, etc.
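For completeness, a hedged sketch of that row-locking alternative, reusing the hypothetical `exchange_rates` table from above: `SELECT ... FOR UPDATE` makes concurrent writers wait instead of failing with a serialization error.

```ts
import { PrismaClient } from '@prisma/client';

async function adjustRate(prisma: PrismaClient) {
  await prisma.$transaction(async (tx) => {
    // Lock the row until commit; concurrent writers block instead of
    // triggering a serialization failure (a pessimistic alternative to
    // SERIALIZABLE-plus-retry).
    const [row] = await tx.$queryRawUnsafe<{ rate: number }[]>(
      `SELECT rate FROM exchange_rates WHERE currency = 'USD' FOR UPDATE`,
    );
    // ...update based on the locked row...
  });
}
```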