Event triggers use too many postgres connections
Tested with Hasura 1.3.3 and 1.3.4-beta.2.
We were surprised to see Hasura using upwards of 50 Postgres sockets/connections at around 5000 transactions per second. To reproduce:
- create a simple table and attach a dummy event trigger
- insert a batch of 5000 events (using generate_series for example)
- monitor PostgreSQL sockets with: select count(*) from pg_stat_activity;
- observe 50+ additional sockets being used by Hasura
- observe some warnings in the Hasura logs: "Events processor may not be keeping up with events generated in postgres, or we're working on a backlog of events. Consider increasing HASURA_GRAPHQL_EVENTS_HTTP_POOL_SIZE"
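The reproduction steps above can be sketched in SQL (the table name is hypothetical, and the dummy event trigger itself would be attached through the Hasura console or metadata API):

```sql
-- Hypothetical repro table; attach a dummy Hasura event trigger to it
CREATE TABLE bench_events (id serial PRIMARY KEY, payload text);

-- Insert a batch of 5000 rows; each row fires the event trigger
INSERT INTO bench_events (payload)
SELECT 'event ' || n FROM generate_series(1, 5000) AS n;

-- Monitor how many sockets/connections are open against the database
SELECT count(*) FROM pg_stat_activity;
```

Running the count query repeatedly during and after the insert shows the connection spike and the slow release reported below.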
Is this expected behavior?
Are event triggers not a viable solution in a “high” throughput situation?
I can confirm the sockets do get released once idle (within 8-10 minutes).
Issue Analytics
- State:
- Created 3 years ago
- Reactions: 2
- Comments: 5 (3 by maintainers)

The Event Trigger mechanism is fully configurable to achieve the throughput you need. But nothing comes free, so you may need to provision more resources.
As the logs indicate, there are likely many events pending, so increasing the number of max concurrent HTTP workers via HASURA_GRAPHQL_EVENTS_HTTP_POOL_SIZE will help here. Further, each HTTP worker marks its event as "delivered", which requires a PG connection, so the workers also draw from the Postgres connection pool, whose size you can configure via HASURA_GRAPHQL_PG_CONNECTIONS. Since you are running a test bench, delivery is probably happening almost instantly, and hence there is very high contention for the pool.

@jflambert ~ it depends ~ what backend stack are you using? I'm using Elixir, which is particularly well suited for this type of workload (I stole the code from here: https://github.com/supabase/realtime)
Here’s an example in Go: https://github.com/ihippik/wal-listener
You could probably also do this in Node.
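The projects linked above consume the WAL via logical replication; a lighter-weight alternative in the same spirit is Postgres LISTEN/NOTIFY, which an Elixir, Go, or Node process can subscribe to over a single long-lived connection. A minimal sketch, with hypothetical table, function, and channel names (note that NOTIFY payloads are limited to about 8000 bytes):

```sql
-- Hypothetical alternative to Hasura event triggers: push each inserted
-- row to a channel that a backend process subscribes to with LISTEN.
CREATE OR REPLACE FUNCTION notify_bench_events() RETURNS trigger AS $$
BEGIN
  -- Serialize the new row as JSON and publish it on the channel
  PERFORM pg_notify('bench_events', row_to_json(NEW)::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER bench_events_notify
AFTER INSERT ON bench_events
FOR EACH ROW EXECUTE FUNCTION notify_bench_events();
```

The consumer then runs LISTEN bench_events; on that one connection, so delivery does not compete for a per-event connection pool the way the HTTP workers do.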