Stuck on an issue?

Lightrun Answers was designed to cut down the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Hasura using more CPU over time

See original GitHub issue

Hello,

I recently upgraded to Hasura 1.0.0-beta.3, and I've noticed that CPU usage in the Docker container steadily increases over time, even when no requests are being made to the Hasura server.

Here are the logs produced by Hasura (mostly showing nothing is happening):

{"timestamp":"2019-07-17T21:51:27.827+0000","level":"info","type":"startup","detail":{"kind":"server_configuration","info":{"live_query_options":{"fallback_options":{"refetch_delay":1000},"multiplexed_options":{"batch_size":100,"refetch_delay":1000}},"transaction_isolation":"ISOLATION LEVEL READ COMMITTED","enabled_log_types":["http-log","websocket-log","startup","webhook-log","query-log"],"server_host":"HostAny","enable_allowlist":false,"log_level":"debug","auth_hook_mode":null,"use_prepared_statements":true,"unauth_role":null,"stringify_numeric_types":false,"enabled_apis":["metadata","graphql"],"enable_telemetry":false,"enable_console":true,"auth_hook":null,"jwt_secret":null,"cors_config":{"allowed_origins":"*","disabled":false,"ws_read_cookie":null},"console_assets_dir":null,"admin_secret_set":true,"port":8080}}}
{"timestamp":"2019-07-17T21:51:27.827+0000","level":"info","type":"startup","detail":{"kind":"postgres_connection","info":{"database":"postgres","retries":1,"user":"postgres","host":"postgres","port":5432}}}
{"internal":"could not connect to server: Connection refused\n\tIs the server running on host \"postgres\" (172.129.0.3) and accepting\n\tTCP/IP connections on port 5432?\n","path":"$","error":"connection error","code":"postgres-error"}
{"timestamp":"2019-07-17T21:51:29.951+0000","level":"info","type":"startup","detail":{"kind":"server_configuration","info":{"live_query_options":{"fallback_options":{"refetch_delay":1000},"multiplexed_options":{"batch_size":100,"refetch_delay":1000}},"transaction_isolation":"ISOLATION LEVEL READ COMMITTED","enabled_log_types":["http-log","websocket-log","startup","webhook-log","query-log"],"server_host":"HostAny","enable_allowlist":false,"log_level":"debug","auth_hook_mode":null,"use_prepared_statements":true,"unauth_role":null,"stringify_numeric_types":false,"enabled_apis":["metadata","graphql"],"enable_telemetry":false,"enable_console":true,"auth_hook":null,"jwt_secret":null,"cors_config":{"allowed_origins":"*","disabled":false,"ws_read_cookie":null},"console_assets_dir":null,"admin_secret_set":true,"port":8080}}}
{"timestamp":"2019-07-17T21:51:29.951+0000","level":"info","type":"startup","detail":{"kind":"postgres_connection","info":{"database":"postgres","retries":1,"user":"postgres","host":"postgres","port":5432}}}
{"internal":"could not connect to server: Connection refused\n\tIs the server running on host \"postgres\" (172.129.0.3) and accepting\n\tTCP/IP connections on port 5432?\n","path":"$","error":"connection error","code":"postgres-error"}
{"timestamp":"2019-07-17T21:51:31.891+0000","level":"info","type":"startup","detail":{"kind":"server_configuration","info":{"live_query_options":{"fallback_options":{"refetch_delay":1000},"multiplexed_options":{"batch_size":100,"refetch_delay":1000}},"transaction_isolation":"ISOLATION LEVEL READ COMMITTED","enabled_log_types":["http-log","websocket-log","startup","webhook-log","query-log"],"server_host":"HostAny","enable_allowlist":false,"log_level":"debug","auth_hook_mode":null,"use_prepared_statements":true,"unauth_role":null,"stringify_numeric_types":false,"enabled_apis":["metadata","graphql"],"enable_telemetry":false,"enable_console":true,"auth_hook":null,"jwt_secret":null,"cors_config":{"allowed_origins":"*","disabled":false,"ws_read_cookie":null},"console_assets_dir":null,"admin_secret_set":true,"port":8080}}}
{"timestamp":"2019-07-17T21:51:31.891+0000","level":"info","type":"startup","detail":{"kind":"postgres_connection","info":{"database":"postgres","retries":1,"user":"postgres","host":"postgres","port":5432}}}
{"timestamp":"2019-07-17T21:51:33.992+0000","level":"info","type":"startup","detail":{"kind":"db_init","info":"successfully initialised"}}
{"timestamp":"2019-07-17T21:51:33.992+0000","level":"info","type":"startup","detail":{"kind":"db_migrate","info":"already at the latest version. current version: \"17\""}}
{"timestamp":"2019-07-17T21:51:33.992+0000","level":"info","type":"startup","detail":{"kind":"schema-sync","info":{"thread_id":"ThreadId 86","instance_id":"5e7dd705-9331-474a-bda4-986d78801ffa","message":"listener thread started"}}}
{"timestamp":"2019-07-17T21:51:33.992+0000","level":"info","type":"startup","detail":{"kind":"schema-sync","info":{"thread_id":"ThreadId 87","instance_id":"5e7dd705-9331-474a-bda4-986d78801ffa","message":"processor thread started"}}}
{"timestamp":"2019-07-17T21:51:33.992+0000","level":"info","type":"startup","detail":{"kind":"event_triggers","info":"preparing data"}}
{"timestamp":"2019-07-17T21:51:33.992+0000","level":"info","type":"startup","detail":{"kind":"event_triggers","info":"starting workers"}}
{"timestamp":"2019-07-17T21:51:33.992+0000","level":"info","type":"startup","detail":{"kind":"server","info":{"time_taken":2.216395209,"message":"starting API server"}}}

Here is a screenshot of a Grafana dashboard showing CPU usage over 6 hours:

[Screenshot: hasura_cpu_usage]

Here is a minimal docker-compose file that recreates the test environment and the monitoring tools:

hasura_test.zip
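
The zip itself isn't reproduced here, but for context, a Compose file for this kind of setup usually looks roughly like the sketch below. This is a guess at the shape only; the service names, image tags, and environment values in hasura_test.zip may differ, and the monitoring containers (Prometheus/Grafana or similar) are omitted:

version: "3"
services:
  postgres:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: postgrespassword
  graphql-engine:
    image: hasura/graphql-engine:v1.0.0-beta.3
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    environment:
      # standard Hasura settings; the values here are illustrative,
      # chosen to match what the startup logs above report
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true"
      HASURA_GRAPHQL_ENABLE_TELEMETRY: "false"
      HASURA_GRAPHQL_ADMIN_SECRET: myadminsecret
      HASURA_GRAPHQL_LOG_LEVEL: debug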

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 12 (4 by maintainers)

Top GitHub Comments

5 reactions
lexi-lambda commented, Jul 23, 2019

@bartjuh4 If you’re running the container from the command line via docker run, you can add the -e GHCRTS=-I0 flag. If you’re using docker-compose, you can add

environment:
  - GHCRTS=-I0

to your container configuration. Other mechanisms of running containers have other ways to control the environment.
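
For context, applied to a full service definition it would look roughly like this (a sketch; the service name, image tag, and database URL are assumptions, not from the thread):

graphql-engine:
  image: hasura/graphql-engine:v1.0.0-beta.3
  environment:
    - HASURA_GRAPHQL_DATABASE_URL=postgres://postgres:postgrespassword@postgres:5432/postgres
    # GHCRTS passes flags to the GHC runtime system; -I0 disables the
    # idle garbage collector, which otherwise wakes up periodically even
    # when the server is handling no requests.
    - GHCRTS=-I0

The -I flag controls the GHC runtime's idle GC interval, and -I0 turns idle GC off entirely, which is what makes it effective against this kind of background CPU usage on an otherwise idle server.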

0 reactions
jberryman commented, Jul 24, 2019
Read more comments on GitHub >

Top Results From Across the Web

Tune Hasura Performance | Hasura GraphQL Docs
This page serves as a reference for suggestions on fine-tuning the performance of your Hasura GraphQL Engine instance.

Decreasing latency noise & maximizing performance ... - Hasura
"...understanding how to maximize performance in a running ... significantly reduces latency as the CPU is (waves hands) more ready to ramp ..."

Subscriptions execution and performance - Hasura
Hasura's CPU and memory utilization depends on the number of requests being served. For most use cases, the default values offer a reasonable trade-off...

Effect of Intel's Power Management on Webservers - Hasura
In this article we have seen that processor power management has a significant effect on response times with Hasura. Using the performance ...

How Hasura optimizes GraphQL API Performance
If you're getting started with Hasura and GraphQL performance, the technical principles in this webinar will be a great starting point.
