
Server gone away with both pool_pre_ping and pool_recycle

See original GitHub issue

I have an app that creates processes that each use a different engine and session. In this case, the error occurs for a single running process. I create an engine using the following options:

    mdb_engine = create_engine('mysql+pymysql://' + user + ':' + password + '@' + hosts[0] + ':' + str(port) + '/' + db_name, pool_recycle=3600, pool_pre_ping=True)
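
For reference, here is the same configuration with the pool options spelled out; this is just a sketch, with user, password, hosts[0], port and db_name standing for the same values as above:

    # sketch of the same engine configuration; the connection variables are
    # the same placeholders as above
    from sqlalchemy import create_engine

    mdb_engine = create_engine(
        'mysql+pymysql://' + user + ':' + password + '@'
        + hosts[0] + ':' + str(port) + '/' + db_name,
        pool_recycle=3600,    # replace pooled connections older than one hour at checkout
        pool_pre_ping=True,   # test each connection with a lightweight ping at checkout
    )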

Still, about 15 minutes after the first connection, while the app is still executing, I get “MySQL server has gone away (ConnectionResetError(104, ‘Connection reset by peer’))”.

The timeouts used by my instance of MariaDB are these:

    +---------------------------------------+----------+
    | Variable_name                         | Value    |
    +---------------------------------------+----------+
    | connect_timeout                       | 5        |
    | deadlock_timeout_long                 | 50000000 |
    | deadlock_timeout_short                | 10000    |
    | delayed_insert_timeout                | 300      |
    | idle_readonly_transaction_timeout     | 0        |
    | idle_transaction_timeout              | 0        |
    | idle_write_transaction_timeout        | 0        |
    | innodb_flush_log_at_timeout           | 1        |
    | innodb_lock_wait_timeout              | 50       |
    | innodb_rollback_on_timeout            | OFF      |
    | interactive_timeout                   | 28800    |
    | lock_wait_timeout                     | 86400    |
    | net_read_timeout                      | 30       |
    | net_write_timeout                     | 60       |
    | rpl_semi_sync_master_timeout          | 10000    |
    | rpl_semi_sync_slave_kill_conn_timeout | 5        |
    | slave_net_timeout                     | 60       |
    | thread_pool_idle_timeout              | 60       |
    | wait_timeout                          | 28800    |
    +---------------------------------------+----------+

Additionally, I don’t think the size of the object being inserted is the problem, as is the case sometimes, although I haven’t tested this hypothesis.

Could you possibly point out if I should be doing something differently?

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Comments: 7 (4 by maintainers)

Top GitHub Comments

1 reaction
DNCoelho commented, Oct 11, 2019

@zzzeek Thank you for the explanation!

What I was doing was similar to your second pattern; however, I didn’t set expire_on_commit to False because I only have one commit, at the end of the process. I could, as you said, extend the timeouts, but I would rather avoid that solution. I think your first pattern is adequate, so I’ll close this issue and change my code to implement it. Once again, thank you for the availability and help! 👍

0 reactions
zzzeek commented, Oct 11, 2019

So I didn’t state how the ORM Session works with the engine/connection here. Because yes, you’re not actually calling engine.connect(); the Session is.

I think for the pattern where you want to query the DB, then work for 15 minutes without any DB access (is that right? or do you sometimes run a query within the processing?), then commit the data, the approach is to work with the objects in a detached state for the 15-minute period. So for that, your pattern as you stated above would be:

create engine -> create session via session maker -> do some gets -> session.close() -> processing (about 15 mins) -> create new session -> change values and merge and/or add objects to new session -> commit -> close

what is happening there is we’re considering the Session as the proxy for “we’ve connected and opened a transaction”. if there’s no transaction, and we don’t plan to have one until a certain point, we just get rid of the Session altogether.
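
A minimal sketch of that flow, using a throwaway mapped class; the Widget model, the placeholder URL and the sleep standing in for the 15-minute processing are all hypothetical, not taken from the reporter’s code:

    # sketch of the detached-object pattern described above
    import time

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class Widget(Base):
        __tablename__ = 'widget'
        id = Column(Integer, primary_key=True)
        status = Column(String(50))

    engine = create_engine(
        'mysql+pymysql://user:password@localhost:3306/mydb',  # placeholder URL
        pool_recycle=3600,
        pool_pre_ping=True,
    )
    Session = sessionmaker(bind=engine)

    # 1. do some gets, then close the session so no connection or transaction
    #    stays open during the long processing step
    session = Session()
    widgets = session.query(Widget).limit(10).all()
    session.close()

    # 2. work with the objects in a detached state (~15 minutes, no DB access)
    time.sleep(900)

    # 3. new session: change values, merge the detached objects back in, commit
    session = Session()
    for widget in widgets:
        widget.status = 'processed'
        session.merge(widget)
    session.commit()
    session.close()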

another pattern that could work would be if you kept the session, called commit(), and also set expire_on_commit to False, but I think the pattern where you make a new session after the 15 minute processing is more robust.
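
A sketch of that alternative, reusing the hypothetical Widget model and engine from the previous sketch: keep one session, but configure the sessionmaker with expire_on_commit=False so the objects keep their loaded values after commit() ends the transaction:

    # sketch of the keep-the-session alternative; Widget and engine are the same
    # hypothetical names as in the previous sketch
    Session = sessionmaker(bind=engine, expire_on_commit=False)

    session = Session()
    widgets = session.query(Widget).limit(10).all()
    session.commit()   # ends the transaction and releases the connection to the
                       # pool; attributes stay loaded because expire_on_commit=False

    time.sleep(900)    # long processing with no transaction open

    for widget in widgets:
        widget.status = 'processed'   # objects are still attached to this session
    session.commit()   # checks out a fresh connection (pre-ping runs at checkout)
    session.close()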

I should mention this does sort of depend a bit on what kind of database this is. Like, if it’s just your process running on the DB and nothing else, there’s no concurrency requirement, you could have the 15-minute transaction open and just change the timeouts in the server; I haven’t looked at the variables above, but I’m sure one of them would extend this timeout. It’s just not a generally scalable practice to do it that way.
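
If you did go that route, one way to apply a longer timeout from the application side (rather than in the server config) is a connect-time event hook. This is only a sketch, and the choice of wait_timeout and the 7200-second value are assumptions for illustration, not something established in this issue:

    # sketch: raise the MariaDB idle timeout on every new pooled connection;
    # wait_timeout and the value 7200 are illustrative assumptions
    from sqlalchemy import event

    @event.listens_for(engine, 'connect')
    def _raise_idle_timeout(dbapi_connection, connection_record):
        cursor = dbapi_connection.cursor()
        cursor.execute('SET SESSION wait_timeout = 7200')
        cursor.close()

Changing the variable globally on the server would be the other option; either way, the scalability caveat above still applies.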

