Supporting Postgres' `SET` across queries of a request
Problem
In Postgres you can set a user for a connection on the database:
```typescript
await prisma.$executeRaw(`SET current_user_id = ${currentUser.id}`)
```
This SET can then be used in combination with row-level security (RLS) to issue queries like this:
select * from messages
that only give you back messages from that user. This technique is used by Postgraphile and Postgrest and really takes advantage of what Postgres offers.
Without access to the connection pool, there’s no way to guarantee you’ll get the same connection each query. Since SET values are bound to the query, subsequent queries may be missing the SET or may even override another request’s SET.
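To make the hazard concrete, here is a toy sketch of the problem (hypothetical names throughout; this is not Prisma's real pool, just a round-robin stand-in for arbitrary scheduling): a `SET` lands on one connection while the follow-up query runs on another.

```typescript
// Toy model of a connection pool. Each connection keeps its own session
// settings, and the pool hands out connections round-robin, as a stand-in
// for arbitrary pool scheduling.
class Conn {
  settings = new Map<string, string>();
}

class Pool {
  private conns = [new Conn(), new Conn()];
  private next = 0;

  // Returns which user id the executed query would "see" on its connection.
  query(sql: string): string | undefined {
    const conn = this.conns[this.next++ % this.conns.length];
    const m = sql.match(/^SET current_user_id = '(.*)'$/);
    if (m) conn.settings.set("current_user_id", m[1]);
    return conn.settings.get("current_user_id");
  }
}

const pool = new Pool();
pool.query("SET current_user_id = 'alice'");       // lands on connection 0
const seen = pool.query("select * from messages"); // runs on connection 1
// `seen` is undefined: the second query never saw alice's SET.
```

Worse, a third request checking out connection 0 would silently inherit alice's setting.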
I’m not sure what the best approach is. Tying a connection to the lifecycle of a request has its own performance implications.
A point of reference potentially worth investigating: Go's standard SQL library uses a connection pool under the hood that's transparent to developers. Do they run into this problem too? If so, how do they deal with it?
Originally from: https://github.com/prisma/prisma/issues/4303#issuecomment-756157408
Issue Analytics
- State:
- Created 3 years ago
- Reactions: 43
- Comments: 72 (10 by maintainers)
Top GitHub Comments
Idk how you guys have implemented this whole ‘SET’ stuff, but I have been implementing full support for RLS over this entire week for an app we have using Remix + Prisma. And even now, I really don’t know if this week was worth the work or if I actually made a good decision to implement it. So I’ve decided to make this comment and leave it here in case someone is thinking about implementing it, so you can be 100% sure of what you are stepping into.
I feel like Prisma kinda has lost its point.
First of all, I need to clarify the reasons why I chose to work with Prisma in the first place, which are probably the main reasons why I feel like this. As a full-stack developer, Prisma has increased my productivity at least 2-3x simply by replacing the hard work of handling SQL, and another 10x just by allowing me to do that in TYPESCRIPT. The ability to catch errors before even thinking that you could be doing something wrong is GOLD (or BTC for some 😅). The model map and the migration tool… shut up and take my money!!! Now, after I have spent this much time on this, I feel like this whole RLS support has added too much complexity to set up and maintain, and in the end I’m spending more time in the migration.sql file than in actual TypeScript.
So, what have I learned?
1. You need two DB users
As mentioned by @AllanOricil, you’ll need to set up a new Postgres/DB user with limited permissions so it doesn’t bypass RLS. Since I need to guarantee that developers can pull the project and things keep working, I needed to add the following script to my migration file.
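The author’s actual script isn’t shown here; a rough sketch of what such a migration step could look like (the role name, password handling, and grants are placeholders you would adapt to your schema) is:

```sql
-- Hypothetical sketch; run as the privileged (owner) user during migration.
-- NOBYPASSRLS ensures this role is subject to row-level security policies.
CREATE ROLE prisma_app LOGIN PASSWORD 'change-me' NOBYPASSRLS;
GRANT USAGE ON SCHEMA public TO prisma_app;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO prisma_app;
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO prisma_app;
```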
2. You need two separate connection strings
One for the migrate script with full permissions, and one with connection pooling using the `prisma` user you just created. If you are using Supabase, you probably have two connection strings anyway, so no big deal. See discussion: #6485
3. You need to manually create policies and enable RLS
Here it is where things started getting overly complicated. So let’s go through this together so you can understand my point.
First, you add the following to your migration file. Depending on the number of tables your DB has, you might have a bit of work to do, and depending on the actual complexity of the policies, you are screwed. There’s no debugging here; you basically figure out that something is wrong only at runtime.
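The original migration fragment isn’t reproduced here; a minimal hedged sketch of what it might look like (the `messages` table, its `user_id` column, and the `app.current_user_id` setting name are assumptions, not the author’s actual code) is:

```sql
-- Hypothetical sketch for a `messages` table owned by `user_id`.
ALTER TABLE messages ENABLE ROW LEVEL SECURITY;

-- Only rows whose user_id matches the per-transaction setting are visible.
-- current_setting(..., true) returns NULL instead of erroring when unset,
-- so queries outside a request context simply see no rows.
CREATE POLICY messages_owner ON messages
  USING (user_id::text = current_setting('app.current_user_id', true));
```

Repeat per table, per operation (`FOR SELECT`, `FOR UPDATE`, …) as your rules require, which is exactly where the workload grows.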
Now, I’m no SQL expert and I’m pretty sure there are hundreds of ways to do it better, but the thing that scratches my head is what comes after a few iterations of this. After a few migrations, how do you tell what’s actually being applied to a model?
There’s no simple answer. You have to go looking through migration files to see what was applied last… in Supabase there’s even a bug where, after you tear down the schema, you have to turn RLS off and on again to be able to see which policies are applied to each table in their UI.
There should be a single source of truth we can maintain: a way to see and edit everything through the schema. (Just as an example; I haven’t actually put much thought into how this API would look.)
Which brings me to my last consideration…
4. Queries need to be run inside a transaction
Since the variable is set only on the specific connection that ran the SET, you need to wrap all of a request’s queries in a transaction so they share one connection.
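A common shape for that workaround is an interactive transaction that pins one connection and calls `set_config(..., true)`, the function form of `SET LOCAL`, so the value dies with the transaction. The sketch below uses minimal mocked client interfaces (the real ones come from `@prisma/client`), and `app.current_user_id` is an assumed custom setting name, not an established convention:

```typescript
// Minimal stand-ins for the Prisma client types; in a real app these come
// from @prisma/client and you would not define them yourself.
interface TxClient {
  $executeRaw(parts: TemplateStringsArray, ...values: unknown[]): Promise<number>;
}
interface Client {
  $transaction<T>(fn: (tx: TxClient) => Promise<T>): Promise<T>;
}

// Runs `fn` inside one transaction (hence one pooled connection), with the
// user id visible to RLS policies via current_setting('app.current_user_id').
async function withUser<T>(
  client: Client,
  userId: string,
  fn: (tx: TxClient) => Promise<T>
): Promise<T> {
  return client.$transaction(async (tx) => {
    // set_config(name, value, is_local = true) acts like SET LOCAL and,
    // unlike SET, accepts the value as a bind parameter (no SQL splicing).
    await tx.$executeRaw`SELECT set_config('app.current_user_id', ${userId}, true)`;
    return fn(tx);
  });
}
```

Every query issued through `tx` then runs on the same connection and sees the same setting; when the transaction ends, the value is gone, so it cannot leak into another request. The catch, as described below, is plumbing the per-request `userId` into every call site.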
Following examples listed here by many contributors, I managed to create a middleware that puts everything in a transaction, and I also recreated the $transaction function so I didn’t need to go through the code replacing anything. BUT, long story short, it doesn’t work! The way Supabase retrieves the session ends up getting mixed up between requests made in parallel, so I needed a way to pass the session down the tree. That’s where I started questioning this whole solution, and where I gave up on this thing.
Here is my implementation
Conclusion
The SET isn’t actually the hardest part to get working. Managing the policies, on the other hand, is exponentially harder, and making sure that each query carries the specific variable the policies need throws the best of Prisma out the window.
Ideally you’d want the query to complain about which policies you’re breaking.
Well, that’s it. I’m feeling a bit frustrated about losing the time I invested. I need to ship something that pays the bills for now; otherwise I’d gladly open a PR. I hope this is useful to someone.
It’d be ideal if we could get first-class support for row-level security instead of hopping around the unknown minefield that is reusing sessions, especially for the purpose of authentication, where one slip could be disastrous.