Using :memory: SQLite engine causes DB to be wiped repeatedly
See original GitHub issue

It appears that knex uses a connection pool with an idle timeout when connecting to SQLite in :memory: mode. This causes the database contents to be dropped after ~30s of inactivity (when the idle connection is closed), making it impossible to use that engine.
This is a particularly bad gotcha because a test suite, or non-interactive usage with something hammering the DB, will work fine.
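To make the failure mode concrete, here is a minimal knex configuration sketch for the single-connection case. The `pool.min`/`pool.max` options are real knex (tarn.js) settings; pinning the pool to one connection is a commonly suggested mitigation, and the exact idle-reaping defaults are an assumption:

```javascript
// Sketch: pin the pool to a single connection so the in-memory database
// lives exactly as long as that one connection does.
const knex = require('knex')({
  client: 'sqlite3',
  connection: {
    filename: ':memory:',
  },
  useNullAsDefault: true,
  pool: {
    min: 1, // keep at least one connection open at all times
    max: 1, // a second connection would see a *different*, empty :memory: DB
  },
});
```

Note that even with `min: 1`, some pool implementations may still recycle the connection, which is why the timeout-based workaround below the fold also exists.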
Issue Analytics
- State:
- Created 7 years ago
- Reactions: 2
- Comments: 11 (5 by maintainers)
Top Results From Across the Web
File Locking And Concurrency In SQLite Version 3
Filesystem corruption following a power failure might cause the journal to be renamed or deleted.
What an in-memory database is and how it persists data ...
It means that each time you query a database or update data in a database, you only access the main memory. So, there's...
SQLAlchemy memory leak when instrumented objects not ...
Repeated insertions into sqlite database via sqlalchemy causing memory leak? 2 · AttributeError: module 'sqlalchemy.dialects' has no attribute ' ...
SQLite - Quick Guide - Tutorialspoint
If the database is an in-memory or temporary database, the database will be destroyed and the contents will be lost. Syntax. Following is...
Berkeley DB FAQ - Oracle
What are the differences between using SQLite and Berkeley DB? ... The Berkeley DB environment keeps memory for a fixed number of lockers,...

I faced the same issue, but I need a pool size greater than 1, so my solution is to use a shared-memory database.
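The "shared memory database" approach can be sketched as below. The `file::memory:?cache=shared` URI is standard SQLite: every connection that opens it with shared cache sees the same in-memory database, so the data survives as long as any pooled connection stays open. Whether the `flags` option reaches the underlying sqlite3 driver depends on the knex version, so treat that part as an assumption:

```javascript
// Sketch (assumption: knex forwards `filename`/`flags` to node-sqlite3):
// a shared-cache in-memory URI lets a pool of size > 1 share one database.
const sqlite3 = require('sqlite3');

const knex = require('knex')({
  client: 'sqlite3',
  connection: {
    filename: 'file::memory:?cache=shared',
    // OPEN_URI is required for the filename to be parsed as a URI.
    flags: [sqlite3.OPEN_READWRITE | sqlite3.OPEN_CREATE | sqlite3.OPEN_URI],
  },
  useNullAsDefault: true,
  pool: { min: 1, max: 4 }, // pool size > 1 now works
});
```

The caveat remains that the database is destroyed once the last shared-cache connection closes, so keeping `min: 1` is still advisable.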
This could work as a workaround: set the pool's idle timeouts to 100 hours.
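The timeout workaround might look like the following sketch. `idleTimeoutMillis` and `reapIntervalMillis` are real tarn.js pool options exposed through knex's `pool` config; the 100-hour figure mirrors the comment above:

```javascript
// Sketch: stretch the idle/reap timeouts so the pooled in-memory
// connection is effectively never recycled.
const HUNDRED_HOURS_MS = 100 * 60 * 60 * 1000;

const knex = require('knex')({
  client: 'sqlite3',
  connection: { filename: ':memory:' },
  useNullAsDefault: true,
  pool: {
    min: 1,
    max: 1,
    idleTimeoutMillis: HUNDRED_HOURS_MS,  // how long a connection may sit idle
    reapIntervalMillis: HUNDRED_HOURS_MS, // how often the idle reaper runs
  },
});
```

This does not fix the underlying behavior; it only pushes the reap far enough out that an interactive session never hits it.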