
Some transactions are stuck in the PostgreSQL database, which causes the database connection pool to run out of connections.

See original GitHub issue

Current Behavior:

A PostgreSQL transaction gets stuck, which leads to the database pool running out of connections. The connection pool is set to 10, and I am running 45 Java projects, some with 400-plus dependencies.

As of now, only restarting the Docker container/application resolves the issue.

I am planning to add 200-plus projects for analysis.


07:56:47.729 ERROR [MetricsUpdateTask] HikariPool-4 - Connection is not available, request timed out after 30000ms.
--
07:56:47.729 ERROR [MetricsUpdateTask] HikariPool-4 - Connection is not available, request timed out after 30000ms.
08:41:10.906 INFO [InternalComponentIdentificationTask] Starting internal component identification task
08:41:40.896 ERROR [LoggableUncaughtExceptionHandler] An unknown error occurred in an asynchronous event or notification thread
javax.jdo.JDODataStoreException: HikariPool-4 - Connection is not available, request timed out after 30000ms.
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:542)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:456)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:263)
at alpine.persistence.AbstractAlpineQueryManager.getCount(AbstractAlpineQueryManager.java:386)
at org.dependencytrack.tasks.repositories.RepositoryMetaAnalyzerTask.inform(RepositoryMetaAnalyzerTask.java:63)
at alpine.event.framework.BaseEventService.lambda$publish$0(BaseEventService.java:99)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLTransientConnectionException: HikariPool-4 - Connection is not available, request timed out after 30000ms.
at com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:697)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:196)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:161)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:100)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:413)
at org.datanucleus.store.rdbms.SQLController.getStatementForQuery(SQLController.java:315)
at org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getPreparedStatementForQuery(RDBMSQueryUtils.java:224)
at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:629)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1975)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1864)
at org.datanucleus.store.query.Query.execute(Query.java:1846)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:439)
... 7 common frames omitted
08:41:40.907 ERROR [LoggableUncaughtExceptionHandler] An unknown error occurred in an asynchronous event or notification thread
javax.jdo.JDODataStoreException: HikariPool-4 - Connection is not available, request timed out after 30000ms.
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:542)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:456)
at org.datanucleus.api.jdo.JDOQuery.executeResultList(JDOQuery.java:376)
at org.dependencytrack.persistence.QueryManager.getAllComponents(QueryManager.java:609)
at org.dependencytrack.tasks.InternalComponentIdentificationTask.analyze(InternalComponentIdentificationTask.java:57)
at org.dependencytrack.tasks.InternalComponentIdentificationTask.inform(InternalComponentIdentificationTask.java:50)
at alpine.event.framework.BaseEventService.lambda$publish$0(BaseEventService.java:99)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLTransientConnectionException: HikariPool-4 - Connection is not available, request timed out after 30000ms.
at com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:697)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:196)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:161)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:100)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:413)
at org.datanucleus.store.rdbms.SQLController.getStatementForQuery(SQLController.java:315)
at org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getPreparedStatementForQuery(RDBMSQueryUtils.java:224)
at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:629)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1975)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1864)
at org.datanucleus.store.query.Query.execute(Query.java:1846)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:439)
... 8 common frames omitted
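
The HikariCP timeouts above say the pool is exhausted, but not why. A quick way to check whether connections are genuinely stuck in open transactions (rather than the pool simply being too small for the workload) is to look at pg_stat_activity on the PostgreSQL side. The sketch below is a standalone JDBC snippet, not part of Dependency-Track; the JDBC URL, database name and credentials are placeholder assumptions.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Standalone diagnostic: list sessions that hold an open transaction but are idle.
// Requires the PostgreSQL JDBC driver on the classpath.
public class StuckTransactionCheck {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/dtrack"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "dtrack", "changeme");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT pid, now() - xact_start AS xact_age, query " +
                     "FROM pg_stat_activity " +
                     "WHERE state = 'idle in transaction' " +
                     "ORDER BY xact_start")) {
            while (rs.next()) {
                // 'query' holds the last statement the session ran before going idle.
                System.out.printf("pid=%d age=%s last-query=%s%n",
                        rs.getInt("pid"), rs.getString("xact_age"), rs.getString("query"));
            }
        }
    }
}

If sessions show up here with a large xact_age while HikariPool-4 is timing out, the pool is being drained by transactions that are never committed or rolled back, and raising the pool size will only delay the exhaustion.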
 

Steps to Reproduce:

Expected Behavior:

A transaction should complete and its connection should be released back to the pool. A transaction should time out if it takes longer than expected.
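
PostgreSQL (9.6 and later) can enforce the second expectation on the server side: idle_in_transaction_session_timeout terminates any session whose transaction sits idle longer than the configured interval, so the slot is freed even if the application never releases it. Below is a minimal sketch applying it per database via JDBC; the database name dtrack, the credentials and the 10-minute value are assumptions, and a terminated session's transaction is rolled back, not completed.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// One-off maintenance snippet: make the server kill transactions that idle too long.
public class ApplyIdleTransactionTimeout {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/postgres"; // placeholder admin DB
        try (Connection conn = DriverManager.getConnection(url, "postgres", "changeme");
             Statement stmt = conn.createStatement()) {
            // Applies to new sessions on the 'dtrack' database (assumed name).
            stmt.execute("ALTER DATABASE dtrack "
                    + "SET idle_in_transaction_session_timeout = '10min'");
        }
    }
}

This is damage control rather than a fix: it keeps the pool from being drained permanently, but the code path that leaves the transaction open still needs to be found.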

Environment:

  • Dependency-Track Version: 4-Snapshot (latest build)
  • Distribution: Docker
  • BOM Format & Version: CycloneDX
  • Database Server: PostgreSQL
  • Browser: Firefox/Linux

Additional Details:


Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
rvsoni commented, Dec 15, 2020

I am still monitoring the Dependency-Track apps. I am running it with a 30-connection pool and it looks stable so far; I keep observing the resource usage.

I guess the default setting of a 10-connection pool is too small to run a few concurrent project analyses.

The recommended setting looks to be at least a 20-connection pool. (Still need to monitor the resource usage.)
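
For context, these are the HikariCP settings behind that observation. The sketch below configures HikariCP directly and is only illustrative; Dependency-Track wires its pool through its own configuration rather than code like this, and the URL, credentials and values here are assumptions.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Illustrative HikariCP sizing, not Dependency-Track's actual bootstrap code.
public class PoolSizingSketch {
    public static void main(String[] args) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/dtrack"); // placeholder
        config.setUsername("dtrack");
        config.setPassword("changeme");

        // Enough headroom for concurrent analysis tasks (20+ per the comment above).
        config.setMaximumPoolSize(20);
        // How long a borrower waits for a free connection before the
        // "Connection is not available, request timed out" error (milliseconds).
        config.setConnectionTimeout(30_000);
        // Log a warning with a stack trace when a connection is held longer than
        // 2 minutes -- the quickest way to find the code path that never returns it.
        config.setLeakDetectionThreshold(120_000);

        try (HikariDataSource ds = new HikariDataSource(config)) {
            // ds.getConnection() is what DataNucleus borrows from in the stack traces above.
        }
    }
}

Of the three, leakDetectionThreshold is probably the most useful for this issue: a bigger pool only postpones exhaustion if something holds connections without releasing them.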

1 reaction
rvsoni commented, Dec 14, 2020

I started with a 10-connection pool, with the default connection settings in the config.

Now I have increased the connection pool to 50; let me check today. I will add more details about the transactions that are stuck in the database, and will also try to get the SQL of the stuck transactions, to help with a more detailed investigation of this case.

Thanks, Ravi
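
While that investigation is ongoing, stuck sessions can also be cleared from the PostgreSQL side instead of restarting the whole container. A hedged sketch only: pg_terminate_backend() rolls back whatever the session was doing, the 30-minute cutoff is an arbitrary assumption, and the connection details are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Manual cleanup: terminate sessions stuck "idle in transaction" for a long time.
public class TerminateStuckSessions {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/dtrack"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "postgres", "changeme");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT pid, pg_terminate_backend(pid) AS terminated " +
                     "FROM pg_stat_activity " +
                     "WHERE state = 'idle in transaction' " +
                     "AND state_change < now() - interval '30 minutes'")) {
            while (rs.next()) {
                System.out.printf("pid=%d terminated=%b%n",
                        rs.getInt("pid"), rs.getBoolean("terminated"));
            }
        }
    }
}

HikariCP will still notice the broken connections on its side and replace them, but the analysis tasks can keep running without a full application restart.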

Read more comments on GitHub >

Top Results From Across the Web

Troubleshoot RDS for PostgreSQL error "FATAL - Amazon AWS
Manage the number of database connections. Use connection pooling. In most cases, you can use a connection pooler, such as an RDS Proxy...
Read more >
Long running READ queries stuck in "idle in transaction"
1 Answer 1 ... idle in transaction means the connection is not doing anything - it's "idle". The query has finished, if the...
Read more >
How to close idle connections in PostgreSQL automatically?
When the thread runs, it looks for any old inactive connections. A connection is considered inactive if its state is either idle ,...
Read more >
Common Reasons Why Connections Stay Open for a Long ...
If there are no long queries on the database side, the next step would be to check the network for any issues. Performance...
Read more >
31.10. Connection Pools and Data Sources - PostgreSQL
One implementation performs connection pooling, while the other simply provides access to database connections through the DataSource interface without any ...
Read more >
