
SQL Lab queries fails intermittently when running in Async mode.

See original GitHub issue

Make sure these boxes are checked before submitting your issue - thank you!

  • I have checked the superset logs for python stacktraces and included it here as text if any
  • I have reproduced the issue with at least the latest release
  • I have checked the issue tracker for the same issue and I haven’t found one similar

I’m trying to run queries in async mode using Celery and Redis, with a Postgres DB running on the web server. A few queries run fast and return results, but some queries stay in a pending state for a long time, and then I get the error below in the worker. The issue is intermittent:

[2017-11-30 08:36:31,897: ERROR/ForkPoolWorker-3] Query with id `179` could not be retrieved
[2017-11-30 08:36:31,897: ERROR/ForkPoolWorker-3] Sleeping for a sec before retrying...
[2017-11-30 08:36:32,901: ERROR/ForkPoolWorker-3] Query with id `179` could not be retrieved
[2017-11-30 08:36:32,901: ERROR/ForkPoolWorker-3] Sleeping for a sec before retrying...
[2017-11-30 08:36:33,904: ERROR/ForkPoolWorker-3] Query with id `179` could not be retrieved
[2017-11-30 08:36:33,904: ERROR/ForkPoolWorker-3] Sleeping for a sec before retrying...
[2017-11-30 08:36:34,907: ERROR/ForkPoolWorker-3] Query with id `179` could not be retrieved
[2017-11-30 08:36:34,908: ERROR/ForkPoolWorker-3] Sleeping for a sec before retrying...
[2017-11-30 08:36:35,911: ERROR/ForkPoolWorker-3] Query with id `179` could not be retrieved
[2017-11-30 08:36:35,912: ERROR/ForkPoolWorker-3] Sleeping for a sec before retrying...
[2017-11-30 08:36:36,915: ERROR/ForkPoolWorker-3] Task superset.sql_lab.get_sql_results[b51234ae-fed3-4d5e-af80-d12d20123ef6] raised unexpected: SqlLabException('Failed at getting query',)
Traceback (most recent call last):
  File "/home/ec2-user/superset/lib/python2.7/dist-packages/celery/app/trace.py", line 374, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/home/ec2-user/superset/lib/python2.7/dist-packages/celery/app/trace.py", line 629, in __protected_call__
    return self.run(*args, **kwargs)
  File "/home/ec2-user/superset/local/lib/python2.7/site-packages/superset/sql_lab.py", line 96, in get_sql_results
    query = get_query(query_id, sesh)
  File "/home/ec2-user/superset/local/lib/python2.7/site-packages/superset/sql_lab.py", line 68, in get_query
    raise SqlLabException("Failed at getting query")
SqlLabException: Failed at getting query
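The log pattern above (five "Sleeping for a sec before retrying..." lines, then SqlLabException) comes from the worker repeatedly trying to load the query row from the metadata database before giving up. A simplified sketch of that retry loop (not the actual Superset source; `lookup` is a hypothetical stand-in for the SQLAlchemy session query) illustrates the behavior:

```python
import time


class SqlLabException(Exception):
    """Raised when the worker gives up fetching the query row."""


def get_query(query_id, lookup, max_attempts=5, sleep_seconds=1):
    """Fetch the query row from the metadata DB, retrying a few times.

    The retry exists because the Celery task can start before the web
    server's INSERT of the query row is visible to the worker; if the
    row never appears, the task fails with SqlLabException.
    """
    for _ in range(max_attempts):
        query = lookup(query_id)
        if query is not None:
            return query
        print("Query with id `%s` could not be retrieved" % query_id)
        print("Sleeping for a sec before retrying...")
        time.sleep(sleep_seconds)
    raise SqlLabException("Failed at getting query")
```

If `lookup` starts returning the row within five attempts the task proceeds normally; if the worker is reading a different metadata database than the one the web server wrote to, the row never appears and every query hits this exception.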

Config file on the web server:

class CeleryConfig(object):
  BROKER_URL = 'redis://xyz.cache.amazonaws.com:6379/0'
  CELERY_IMPORTS = ('superset.sql_lab', )
  CELERY_RESULT_BACKEND = 'redis://xyz.cache.amazonaws.com:6379/0'
  CELERY_ANNOTATIONS = {'tasks.add': {'rate_limit': '10/s'}}
CELERY_CONFIG = CeleryConfig

from werkzeug.contrib.cache import RedisCache
RESULTS_BACKEND = RedisCache(
    host='xyz.cache.amazonaws.com', port=6379, key_prefix='superset_results')

SQLALCHEMY_DATABASE_URI = 'postgresql://user:password@localhost/myapp'

Config file on the worker server:

class CeleryConfig(object):
  BROKER_URL = 'redis://xyz.cache.amazonaws.com:6379/0'
  CELERY_IMPORTS = ('superset.sql_lab', )
  CELERY_RESULT_BACKEND = 'redis://xyz.cache.amazonaws.com:6379/0'
  CELERY_ANNOTATIONS = {'tasks.add': {'rate_limit': '10/s'}}
CELERY_CONFIG = CeleryConfig

from werkzeug.contrib.cache import RedisCache
RESULTS_BACKEND = RedisCache(
    host='xyz.cache.amazonaws.com', port=6379, key_prefix='superset_results')

SQLALCHEMY_DATABASE_URI = 'postgresql://user:password@<webserver_ip>:5432/myapp'
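For this setup to work, the web server and the worker must point at the same metadata database (SQLALCHEMY_DATABASE_URI) and the same Redis broker/results backend; a mismatch is one plausible cause of the worker failing to find the query row. A hypothetical sanity check (not part of Superset) that compares two connection URIs with the standard library:

```python
from urllib.parse import urlsplit


def same_backend(uri_a, uri_b):
    """Return True if two connection URIs point at the same host,
    port, and database/path (credentials are ignored)."""
    a, b = urlsplit(uri_a), urlsplit(uri_b)
    return (a.hostname, a.port, a.path) == (b.hostname, b.port, b.path)


# The web server connects to Postgres via localhost, the worker over the
# network (IP below is hypothetical); the hostnames differ, so the check
# flags the pair for manual verification even if both reach the same box.
web_db = 'postgresql://user:password@localhost/myapp'
worker_db = 'postgresql://user:password@10.0.0.5:5432/myapp'
print(same_backend(web_db, worker_db))  # False: hosts differ, verify manually

web_broker = 'redis://xyz.cache.amazonaws.com:6379/0'
worker_broker = 'redis://xyz.cache.amazonaws.com:6379/0'
print(same_backend(web_broker, worker_broker))  # True
```

Note that `localhost` on the web server and `<webserver_ip>` on the worker can legitimately refer to the same Postgres instance; the check only flags the difference so you verify it deliberately rather than assume it.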

Superset version

0.20.1

Expected results

Queries should not fail

Actual results

Queries go into “pending” status for a long time and then fail. The issue is intermittent.

Steps to reproduce

Issue Analytics

  • State:closed
  • Created 6 years ago
  • Comments:7 (2 by maintainers)

Top GitHub Comments

1 reaction
syafiqdante commented, Aug 2, 2019

Sorry for disturbing a closed issue, but I have the same problem as well. It occurs when I use the CTAS function in Superset SQL Lab. Does anybody have a solution?

my celery worker throws this error:

[2019-08-02 09:47:00,417: ERROR/ForkPoolWorker-1] Query with id `202` could not be retrieved
[2019-08-02 09:47:00,419: ERROR/ForkPoolWorker-1] Sleeping for a sec before retrying...
[2019-08-02 09:47:01,424: ERROR/ForkPoolWorker-1] Query with id `202` could not be retrieved
[2019-08-02 09:47:01,424: ERROR/ForkPoolWorker-1] Sleeping for a sec before retrying...
[2019-08-02 09:47:02,429: ERROR/ForkPoolWorker-1] Query with id `202` could not be retrieved
[2019-08-02 09:47:02,429: ERROR/ForkPoolWorker-1] Sleeping for a sec before retrying...
[2019-08-02 09:47:03,433: ERROR/ForkPoolWorker-1] Query with id `202` could not be retrieved
[2019-08-02 09:47:03,434: ERROR/ForkPoolWorker-1] Sleeping for a sec before retrying...
[2019-08-02 09:47:04,440: ERROR/ForkPoolWorker-1] Query with id `202` could not be retrieved
[2019-08-02 09:47:04,440: ERROR/ForkPoolWorker-1] Sleeping for a sec before retrying...
[2019-08-02 09:47:05,442: ERROR/ForkPoolWorker-1] Failed at getting query
Traceback (most recent call last):
  File "/root/incubator-superset/superset/sql_lab.py", line 136, in get_sql_results
    session=session, start_time=start_time)
  File "/root/incubator-superset/superset/sql_lab.py", line 224, in execute_sql_statements
    query = get_query(query_id, session)
  File "/root/incubator-superset/superset/sql_lab.py", line 96, in get_query
    raise SqlLabException('Failed at getting query')
superset.sql_lab.SqlLabException: Failed at getting query
[2019-08-02 09:47:05,455: ERROR/ForkPoolWorker-1] Query with id `202` could not be retrieved
[2019-08-02 09:47:05,456: ERROR/ForkPoolWorker-1] Sleeping for a sec before retrying...
[2019-08-02 09:47:06,461: ERROR/ForkPoolWorker-1] Query with id `202` could not be retrieved
[2019-08-02 09:47:06,462: ERROR/ForkPoolWorker-1] Sleeping for a sec before retrying...
[2019-08-02 09:47:07,467: ERROR/ForkPoolWorker-1] Query with id `202` could not be retrieved
[2019-08-02 09:47:07,467: ERROR/ForkPoolWorker-1] Sleeping for a sec before retrying...
[2019-08-02 09:47:08,472: ERROR/ForkPoolWorker-1] Query with id `202` could not be retrieved
[2019-08-02 09:47:08,473: ERROR/ForkPoolWorker-1] Sleeping for a sec before retrying...
[2019-08-02 09:47:09,477: ERROR/ForkPoolWorker-1] Query with id `202` could not be retrieved
[2019-08-02 09:47:09,478: ERROR/ForkPoolWorker-1] Sleeping for a sec before retrying...
[2019-08-02 09:47:10,482: ERROR/ForkPoolWorker-1] Failed at getting query
Traceback (most recent call last):
  File "/root/incubator-superset/superset/sql_lab.py", line 136, in get_sql_results
    session=session, start_time=start_time)
  File "/root/incubator-superset/superset/sql_lab.py", line 224, in execute_sql_statements
    query = get_query(query_id, session)
  File "/root/incubator-superset/superset/sql_lab.py", line 96, in get_query
    raise SqlLabException('Failed at getting query')
superset.sql_lab.SqlLabException: Failed at getting query

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/incubator-superset/superset/sql_lab.py", line 114, in session_scope
    yield session
  File "/root/incubator-superset/superset/sql_lab.py", line 140, in get_sql_results
    query = get_query(query_id, session)
  File "/root/incubator-superset/superset/sql_lab.py", line 96, in get_query
    raise SqlLabException('Failed at getting query')
superset.sql_lab.SqlLabException: Failed at getting query
[2019-08-02 09:47:10,504: ERROR/ForkPoolWorker-1] Task superset.sql_lab.get_sql_results[cb5c14c2-0c78-4ee2-8e00-b8c68f559f0e] raised unexpected: SqlLabException('Failed at getting query',)
Traceback (most recent call last):
  File "/root/incubator-superset/superset/sql_lab.py", line 136, in get_sql_results
    session=session, start_time=start_time)
  File "/root/incubator-superset/superset/sql_lab.py", line 224, in execute_sql_statements
    query = get_query(query_id, session)
  File "/root/incubator-superset/superset/sql_lab.py", line 96, in get_query
    raise SqlLabException('Failed at getting query')
superset.sql_lab.SqlLabException: Failed at getting query

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/celery/app/trace.py", line 385, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/celery/app/trace.py", line 648, in __protected_call__
    return self.run(*args, **kwargs)
  File "/root/incubator-superset/superset/sql_lab.py", line 140, in get_sql_results
    query = get_query(query_id, session)
  File "/root/incubator-superset/superset/sql_lab.py", line 96, in get_query
    raise SqlLabException('Failed at getting query')
superset.sql_lab.SqlLabException: Failed at getting query

I’m quite new to Superset; any help is appreciated.

1 reaction
guillaumewibaux commented, Jan 18, 2018

same problem here…


