
No response from gunicorn master within 120 seconds After Changing Worker Class

See original GitHub issue

Apache Airflow version: 1.10.10

Kubernetes version (if you are using kubernetes) (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T23:30:39Z", GoVersion:"go1.14.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.13-eks-2ba888", GitCommit:"2ba888155c7f8093a1bc06e3336333fbdb27b3da", GitTreeState:"clean", BuildDate:"2020-07-17T18:48:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: AWS using EKS with ELB
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

What happened:

The webserver stayed up for roughly 120 seconds and then crashed.

What you expected to happen:

I expected the webserver not to crash after roughly 120 seconds.

I think the issue might be that the webserver's gunicorn monitoring code does not handle worker classes other than the default properly.

How to reproduce it:

Use the stable/airflow Helm chart at version 7.6.0 to install Airflow. In values.yaml, set AIRFLOW__WEBSERVER__WORKER_CLASS: "eventlet".
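
For reference, a minimal values.yaml sketch of that setup. It assumes the chart exposes an airflow.config map for AIRFLOW__* variables (as the community stable/airflow chart does in the 7.x series); check the values reference for your exact chart version.

airflow:
  config:
    # Switch the webserver's gunicorn worker class from the default "sync"
    # to "eventlet" -- the setting that triggers the reported crash.
    AIRFLOW__WEBSERVER__WORKER_CLASS: "eventlet"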

Anything else we need to know:

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 13
  • Comments: 20 (4 by maintainers)

Top GitHub Comments

2 reactions
jonathonbattista commented, Jul 30, 2021

For us, Kubernetes was killing the pods (contrary to what the logs seem to indicate). I am betting that "Received signal: 15. Closing gunicorn." means the process received signal 15 (SIGTERM), i.e. a shutdown/terminate signal. Increasing the initialDelaySeconds of the liveness probe fixed it.
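
For illustration, a liveness probe along the lines of that suggestion for the webserver container. The path, port, and timings below are assumptions to adapt to your deployment (Airflow 1.10 exposes a /health endpoint on the web server port), and with a Helm chart you would set the equivalent probe fields in values.yaml rather than patching the Deployment by hand.

livenessProbe:
  httpGet:
    path: /health            # Airflow webserver health endpoint
    port: 8080               # default web server port
  initialDelaySeconds: 300   # give gunicorn time to start before the first probe
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 5        # restart only after several consecutive failures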

1 reaction
potiuk commented, Oct 24, 2022

@LainerDonet - if you see this on the latest version of Airflow, please report it with all details, logs, etc.
