Can connection pools be used for lots of long-lived connections?
This is more of a question than an issue. In my case I have a websocket server using the Python websockets package. Since it's async I decided to use this library, and my implementation works, at least on the surface. I did some stress tests and immediately found that concurrency is a problem: it raises `asyncpg.exceptions._base.InterfaceError: cannot perform operation: another operation is in progress`. Not the library's fault; it's obvious what's going on. I am creating a connection object at the global scope of the script and using that same object throughout the entire program for everything, including across all websocket connections. These connections are long-lived, and ideally I would like to be able to support hundreds to thousands of websocket connections (users). A naive approach is opening a new asyncpg connection for every websocket connection, but I doubt it's a smart idea to open thousands of database connections when working with thousands of websocket connections.
In the documentation I found connection pooling; however, a concern of mine is that its example uses short-lived connections in the context of an HTTP request, not a long-lived socket. My idea was to have a pool object at the global scope and acquire a connection from it for every database operation that happens during the life of a websocket connection, which under peak load is about one operation per second (per connection). My concern with this is performance. Does acquiring from the pool take time, or is it effectively instant? What happens under high load with lots of concurrent operations, where the pool is being acquired while some other operation still hasn't finished its `with` block? Can multiple pool acquisitions happen concurrently, and roughly how many?
I'm going to attempt this implementation, test it, and respond with my findings. But if someone else can give insight into whether this is a good idea and whether it will scale well, it would be greatly appreciated.
Issue Analytics
- State:
- Created 3 years ago
- Comments: 6 (3 by maintainers)

Well, the above code looks correct to me and I cannot reproduce the `cannot perform operation` error with it. Also, you don't need to `set_type_codec` after each acquire; you can do it in the `init` callback passed to `create_pool` instead, which is only called when an actual new connection is opened (as opposed to reusing an existing connection).
One query per second is not that intensive and leaves plenty of time to acquire/query/release many times over. I did a small benchmark on my laptop for a Stack Overflow answer a while ago, where a thousand acquire/query/release iterations completed in under 0.3s.
If there’s no contention, acquiring a connection is very quick.
If the pool is contended, tasks will wait in the FIFO queue until a connection becomes available. There's no hard limit on the number of waiters.
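That queueing behavior can be illustrated without a database, using a plain `asyncio.Semaphore` as a stand-in for the pool's connection slots (this simulates the waiting, it is not asyncpg code):

```python
import asyncio


async def main():
    slots = asyncio.Semaphore(2)  # stand-in for a pool with max_size=2
    order = []

    async def worker(i):
        async with slots:  # excess tasks simply wait here for a free slot
            order.append(i)
            await asyncio.sleep(0.01)  # simulate a short query

    # Ten tasks contend for two slots; none fail, the extras just wait.
    await asyncio.gather(*(worker(i) for i in range(10)))
    return order


order = asyncio.run(main())
print(sorted(order) == list(range(10)))  # every waiter eventually ran
```

The acquire itself is cheap; what costs time under contention is the wait for a slot, which is bounded by how long each holder keeps its connection.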
I'm not sure I understand the question completely. The maximum number of concurrent connections in the pool is specified when you call `create_pool(max_size=<max-connections>)`; it is 10 by default, but you should adjust it to balance database server resource utilization against connection-acquisition latency.