Server gone away with both pool_pre_ping and pool_recycle
I have an app that creates processes that each use a different engine and session. In this case, the error occurs in a single running process. I create an engine with the following options:
mdb_engine = create_engine('mysql+pymysql://' + user + ':' + password + '@' + hosts[0] + ':' + str(port) + '/' + db_name, pool_recycle=3600, pool_pre_ping=True)
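As an aside, building the URL by string concatenation like this breaks if the password contains characters such as `@` or `/`. A sketch of the same URL built with SQLAlchemy's `URL.create`, which escapes credentials when rendering; the credential values here are placeholders, not the reporter's real ones:

```python
from sqlalchemy.engine import URL

# placeholder values standing in for the report's user/password/hosts/port/db_name
user, password, hosts, port, db_name = "app", "p@ss/word", ["db1.local"], 3306, "mydb"

url = URL.create(
    "mysql+pymysql",
    username=user,
    password=password,  # special characters are escaped when the URL is rendered
    host=hosts[0],
    port=port,
    database=db_name,
)

# the URL object is then passed to create_engine() exactly like the string
# (requires pymysql to be installed):
# mdb_engine = create_engine(url, pool_recycle=3600, pool_pre_ping=True)
```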
Still, about 15 minutes after the first connection, the app fails with "MySQL server has gone away (ConnectionResetError(104, 'Connection reset by peer'))".
The timeouts used by my instance of MariaDB are these:

| Variable_name | Value |
| --- | --- |
| connect_timeout | 5 |
| deadlock_timeout_long | 50000000 |
| deadlock_timeout_short | 10000 |
| delayed_insert_timeout | 300 |
| idle_readonly_transaction_timeout | 0 |
| idle_transaction_timeout | 0 |
| idle_write_transaction_timeout | 0 |
| innodb_flush_log_at_timeout | 1 |
| innodb_lock_wait_timeout | 50 |
| innodb_rollback_on_timeout | OFF |
| interactive_timeout | 28800 |
| lock_wait_timeout | 86400 |
| net_read_timeout | 30 |
| net_write_timeout | 60 |
| rpl_semi_sync_master_timeout | 10000 |
| rpl_semi_sync_slave_kill_conn_timeout | 5 |
| slave_net_timeout | 60 |
| thread_pool_idle_timeout | 60 |
| wait_timeout | 28800 |
Additionally, I don’t think the size of the object being inserted is the problem, as is the case sometimes, although I haven’t tested this hypothesis.
Could you possibly point out if I should be doing something differently?
Issue Analytics
- Created 4 years ago
- Comments: 7 (4 by maintainers)
@zzzeek Thank you for the explanation!
What I was doing was similar to your second pattern, but I didn't set expire_on_commit to False, since I only commit once, at the end of the process. I could, as you said, extend the timeouts, but I would rather avoid that solution. I think your first pattern is adequate, so I'll close this issue and change my code to implement it. Once again, thank you for your availability and help! 👍
So I didn't state how the ORM Session works with the engine/connection here. Because yes, you're not actually calling engine.connect(); the Session is.
I think the pattern, for the case where you want to query the DB, then work for 15 minutes without any DB access (is that right? or do you sometimes run a query within processing?), then commit the data, is to work with the objects in a detached state for the 15-minute period. So for that, your pattern as you stated above would be:
create engine -> create session via session maker -> do some gets -> session.close() -> processing (about 15 mins) -> create new session -> change values and merge and/or add objects to new session -> commit -> close
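The pipeline above can be sketched end to end. This is a minimal illustration, not code from the issue: the `Item` model and values are hypothetical, and sqlite stands in for the reporter's MariaDB URL so the sketch is self-contained:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Item(Base):  # hypothetical model, for illustration only
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    value = Column(String(50))

engine = create_engine("sqlite://")  # stand-in for the MySQL URL
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

# 1. create session, do some gets, then close it: the loaded objects
#    become detached, and no connection or transaction is held open
session = Session()
session.add(Item(value="before"))
session.commit()
item = session.get(Item, 1)
session.close()

# 2. long processing (~15 minutes in the issue); mutate the detached
#    object freely, no DB access happens here
item.value = "after"

# 3. new session: merge the detached object back in and commit
session = Session()
merged = session.merge(item)
session.commit()
session.close()
```

The key point is that between steps 1 and 3 there is simply no Session holding a connection that the server could time out.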
what is happening there is we’re considering the Session as the proxy for “we’ve connected and opened a transaction”. if there’s no transaction, and we don’t plan to have one until a certain point, we just get rid of the Session altogether.
another pattern that could work would be if you kept the session, called commit(), and also set expire_on_commit to False, but I think the pattern where you make a new session after the 15 minute processing is more robust.
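A sketch of that alternative, again with a hypothetical model and sqlite standing in for the real database: commit() ends the transaction and returns the connection to the pool, and expire_on_commit=False keeps the objects' already-loaded attributes readable during the long processing, so no lazy reload (which would need a live connection) is triggered:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Item(Base):  # hypothetical model, for illustration only
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    value = Column(String(50))

engine = create_engine("sqlite://")  # stand-in for the MySQL URL
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine, expire_on_commit=False)

session = Session()
item = Item(value="loaded")
session.add(item)
session.commit()  # transaction ends; connection goes back to the pool

# ... long processing: no transaction is open, yet this read does not
# hit the database, because the attribute was not expired at commit
current = item.value

item.value = "updated"
session.commit()  # a fresh transaction is begun lazily for this flush
session.close()
```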
I should mention this does depend a bit on what kind of database this is. Like, if it's just your process running on the DB and nothing else, there's no concurrency requirement, so you could keep the 15-minute transaction open and just change the timeouts on the server. I haven't looked at the variables above, but I'm sure one of them would extend this timeout. It's just not a generally scalable practice to do it that way.
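If one did go the timeout route, `wait_timeout` (visible in the variable table above) could be raised per-connection with a pool "connect" event rather than a server-wide config change. A sketch, demonstrated against sqlite with an analogous `PRAGMA` since the MariaDB statement needs a live server; the hook itself works the same way either way:

```python
from sqlalchemy import create_engine, event, text

engine = create_engine("sqlite://")  # stand-in for the MariaDB engine

@event.listens_for(engine, "connect")
def extend_timeout(dbapi_conn, connection_record):
    # runs once per new DBAPI connection, before it enters the pool;
    # against MariaDB this would be: SET SESSION wait_timeout = 86400
    cur = dbapi_conn.cursor()
    cur.execute("PRAGMA busy_timeout = 86400000")  # sqlite analogue
    cur.close()

with engine.connect() as conn:
    timeout = conn.execute(text("PRAGMA busy_timeout")).scalar()
```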