
uwsgi sample configuration

See original GitHub issue

I was looking for a configuration to run with nginx + uwsgi.

The only thing you need to do to make this work is add the following line to uwsgi.ini: enable-threads = true. This enables the threads that the app spawns under uWSGI.
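To see why this flag matters, here is a minimal, hypothetical WSGI app (not from the issue) whose background thread only ever runs when enable-threads is set; without it, uWSGI does not initialize Python threading and the thread silently never executes:

import threading
import time

def ticker():
    # Background worker; under uWSGI this only runs with enable-threads.
    while True:
        time.sleep(5)
        print('background thread is alive')

# Started at import time, like the threads a metrics client might spawn.
threading.Thread(target=ticker, daemon=True).start()

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello\n']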

But when I go to the expression browser or PromDash, it doesn’t seem to report anything from the app; it looks like the instrumentation is coming from nowhere.

Issue Analytics

  • State: closed
  • Created: 8 years ago
  • Comments: 9 (1 by maintainers)

Top GitHub Comments

2 reactions
analytik commented, Feb 3, 2017

@ge-fa - sure.

First, we run uWSGI with uwsgi --enable-threads --emperor /foo/bar/emperor/$ENV --disable-logging - we keep slightly different configurations for dev vs stage vs prod.

In each emperor/env folder, we keep two ini files - one for the app itself:

[uwsgi]
chdir           = /foo/bar
module          = wsgi
pidfile         = /tmp/uwsgi.pid
master          = true
http-socket     = 0.0.0.0:80
vacuum          = true
enable-threads  = true
processes       = 2
lazy            = false
threads         = 4
post-buffering  = true
harakiri        = 30
max-requests    = 5000
buffer-size     = 65535
stats           = 127.0.0.1:1717
stats-http      = true

and one for metrics service:

[uwsgi]
chdir           = /foo/bar
module          = metrics:api
pidfile         = /tmp/uwsgi-metrics.pid
http-socket     = 0.0.0.0:9090
vacuum          = true
enable-threads  = true
threads         = 1
processes       = 3
post-buffering  = true
harakiri        = 10
max-requests    = 10
buffer-size     = 65535
disable-logging = true

These can be adjusted, of course, but do not turn on lazy mode! The app will start leaking memory horribly. Now you serve on 3 ports: 80 for Django, 1717 for the uWSGI stats server, and 9090 for Prometheus metrics.
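As a quick sanity check (my own hypothetical snippet, not part of the original setup), you can confirm all three ports respond once the emperor is up:

import requests

# Endpoints as configured above; adjust hosts if not running locally.
for name, url in [
    ('django app', 'http://127.0.0.1:80/'),
    ('uwsgi stats', 'http://127.0.0.1:1717/'),  # stats-http = true serves the stats JSON over HTTP
    ('prometheus metrics', 'http://127.0.0.1:9090/metrics'),
]:
    resp = requests.get(url, timeout=5)
    print(name, resp.status_code)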

Now metrics.py should contain a simple app, something like this:

import falcon
from prometheus_client import generate_latest
from prometheus_client.core import REGISTRY

# custom_metrics is just a dictionary with optional business or other app metrics; it can be empty
from your_app.metrics import metrics as custom_metrics
from prometheus_django_redis import metrics as django_metrics
from prometheus_django_utils import process_redis_stuff, startup_prometheus

class MetricsResource(object):
    def on_get(self, req, resp):
        process_redis_stuff(django_metrics)
        process_redis_stuff(custom_metrics)
        resp.content_type = 'text/plain'
        resp.status = falcon.HTTP_200
        resp.body = generate_latest(REGISTRY)

api = startup_prometheus(MetricsResource, HealthzResource) # I omitted HealthzResource here

Now, the functionality in prometheus_django_redis is a bit hacky. I’m not sure if I can share the whole code, but the gist of it is this:

import time
from pickle import dumps

import redis
from prometheus_client import Gauge, Histogram

r = redis.Redis()

metrics = {
    'requests_total': Gauge(
        'django_http_requests_before_middlewares_total',
        'Total count of requests before middlewares run.'),
    # many others
}

def get_time():
    return time.time()


def time_since(t):
    return get_time() - t


def incr_with_labels(metric, labels, amount=1):
    r.hincrby(metric, dumps(labels), amount)

# and then the middleware itself
class PrometheusBeforeMiddleware(object):
    """Monitoring middleware that should run before other middlewares."""

    def process_request(self, request):
        r.incr('requests_total')
        request.prometheus_before_middleware_event = get_time()

    def process_response(self, request, response):
        r.incr('responses_total')
        if hasattr(request, 'prometheus_before_middleware_event'):
            r.rpush('requests_latency_before', time_since(request.prometheus_before_middleware_event))
        else:
            r.incr('requests_unknown_latency_before')
        return response

And then the rules for writing to Redis instead of directly to Prometheus are as follows (a short sketch follows the list):

  • r.incr for Gauge
  • r.hincrby for Gauge with labels
  • r.rpush for Histogram
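Tying those rules together, a hypothetical write path (names are illustrative, not from the original code) looks like:

from pickle import dumps

import redis

r = redis.Redis()

# Gauge without labels: a plain counter key.
r.incr('responses_total')

# Gauge with labels: a hash keyed by the pickled labels dict.
r.hincrby('responses_by_status', dumps({'status': '200'}), 1)

# Histogram: push raw observations onto a list; the metrics app
# later drains the list and calls observe() on each value.
r.rpush('requests_latency_before', 0.042)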

To read them back, have a utility file like this:


import logging
import traceback
from collections import defaultdict
from pickle import loads

import falcon
import redis
import requests_unixsocket
from prometheus_client.core import GaugeMetricFamily, REGISTRY


r = redis.Redis()
session = requests_unixsocket.Session()
PREFIX = "uwsgi"
EXCLUDE_FIELDS = {"pid", "uid", "cwd", "vars"}
LABEL_VALUE_FIELDS = {"id", "name"}


def object_to_prometheus(prefix, stats_dict, labels, label_name=None):
    label_value = next((stats_dict[field] for field in LABEL_VALUE_FIELDS if field in stats_dict), None)
    if label_name is not None and label_value is not None:
        label_name = label_name.rstrip("s")  # crude singularization: "workers" -> "worker"
        labels = labels + [(label_name, str(label_value))]

    for name, value in stats_dict.items():
        name = name.replace(" ", "_")
        if name.isupper() or name in EXCLUDE_FIELDS:
            # Uppercase names are request vars; no need to export them.
            continue
        if isinstance(value, list):
            yield from list_to_prometheus("{}_{}".format(prefix, name), value, labels, name)
        elif name not in LABEL_VALUE_FIELDS and isinstance(value, (int, float)):
            yield "{}_{}".format(prefix, name), sorted(labels), value


def list_to_prometheus(prefix, stats_list, labels, label_name):
    for stats in stats_list:
        yield from object_to_prometheus(prefix, stats, labels, label_name)


def build_prometheus_stats(stats_addr):
    uwsgi_stats = get_stats(stats_addr)
    stats = object_to_prometheus(PREFIX, uwsgi_stats, [])
    grouped_stats = defaultdict(list)
    # Group all values by name; otherwise Prometheus does not accept them
    for metric_name, labels, value in stats:
        grouped_stats[metric_name].append((labels, value))
    for metric_name, stats in grouped_stats.items():
        label_names = [name for name, _ in stats[0][0]]
        g = GaugeMetricFamily(metric_name, "", labels=label_names)
        for labels, value in stats:
            g.add_metric([label_value for _, label_value in labels], value)
        yield g


def get_stats_collector(stats_getter):
    class StatsCollector:
        def collect(self):
            yield from stats_getter()
    return StatsCollector()


def get_stats(stats_addr):
    resp = session.get(stats_addr)
    resp.raise_for_status()
    return resp.json()


def handle_error(e, req, resp, params):
    # Log the traceback, re-raise falcon HTTP errors untouched,
    # and wrap anything else in a generic 500.
    logging.error(traceback.format_exc())
    try:
        raise e
    except falcon.HTTPError:
        raise e
    except Exception:
        raise falcon.HTTPInternalServerError('Internal Server Error', str(e))


class PongResource(object):
    def on_get(self, req, resp):
        resp.status = falcon.HTTP_200
        resp.content_type = 'text/plain'
        resp.body = 'PONG'



def startup_prometheus(MetricsResource, HealthzResource,
                       stats_address="http://127.0.0.1:1717"):
    REGISTRY.register(get_stats_collector(lambda: build_prometheus_stats(stats_address)))
    api = falcon.API()
    api.add_error_handler(Exception, handler=handle_error)
    api.add_route('/metrics', MetricsResource())
    api.add_route('/healthz/ping', PongResource())
    api.add_route('/healthz/', HealthzResource())
    return api



def process_redis_stuff(metrics):
    """ Read metrics saved by several processes/threads in Redis, and turn them into Prometheus metrics

    if type is Gauge, read and set
    if Gauge with labels, hgetall and set
    if Histogram, read and empty the list, observe values one by one
    """
    for (metric_name, metric) in metrics.items():
        metric_type = type(metric).__name__
        # logging.debug('Investigating metric %s typed %s' % (metric_name, metric_type))
        if metric_type == 'Gauge':
            value = r.get(metric_name) or 0
            # logging.debug('Setting %s to %s' % (metric_name, value))
            metric.set(value)
        elif metric_type == '_LabelWrapper':
            # for simplicity, assume all labeled classes are Gauge - to change, check _wrappedClass
            labels_and_values = r.hgetall(metric_name)
            for (labels, value) in labels_and_values.items():
                value = float(value)
                clean_labels = {}
                for (lab, val) in loads(labels).items():
                    lab = lab.decode('utf-8') if isinstance(lab, bytes) else lab
                    val = val.decode('utf-8') if isinstance(val, bytes) else val
                    clean_labels[lab] = val
                # logging.debug('Setting %s to %s with labels %s' % (metric_name, value, clean_labels))
                # Old prometheus_client accepted a dict here; newer versions
                # expect keyword arguments, i.e. metric.labels(**clean_labels).
                metric.labels(clean_labels).set(value)
        elif metric_type == 'Histogram':
            # get all values in the list (Array)
            values = r.lrange(metric_name, 0, -1)
            # cut those values out from Redis
            r.ltrim(metric_name, len(values), -1)
            # logging.debug('Observing %s values for %s' % (len(values), metric_name))
            for val in values:
                metric.observe(float(val))

See? Simple!

Except… not at all. I mean, I’m sure there are better ways to do it, but I did whatever butchered way was easy enough to develop and deliver.

In other news, I am incredibly happy to develop in Node.js, where asynchronous programming is a breeze, I can start an infinite number of HTTP servers in a few lines, and I don’t need nasty multi-threading / multiprocessing that eats gigabytes of memory to achieve all that. (That said, Python of course has its uses, but I no longer feel like HTTP servers should be one of them, at least not unless you do something special like Stackless/httptools/uvloop.)

Hope it helps!

EDIT: I should also note that we run each instance as a Docker container / Kubernetes pod, so there isn’t any problem with allocating the same ports for many different applications. Redis also runs local to the pod, started simply with redis &, which I know is barbaric, but so far it has worked reliably.
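If you copy this layout, a trivial startup check (my suggestion, not in the original post) can confirm the pod-local Redis is reachable before the middleware starts writing to it:

import sys

import redis

try:
    redis.Redis(host='127.0.0.1', port=6379).ping()
except redis.exceptions.ConnectionError:
    sys.exit('local Redis is not reachable; metrics would be dropped')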

2 reactions
analytik commented, Jul 29, 2016

OK, maybe this will help someone, here’s what I did:

  • Converted a simple uWSGI Django application into a uWSGI Emperor with 2 apps: Django and Metrics
  • Added Django middleware that increments local Redis values (since 1 uWSGI Emperor = 1 Docker container) where django-prometheus had a Counter, and pushes onto a Redis list where there was a Histogram (a settings sketch follows this list).
  • The metrics app loads all these Redis keys on load; instead of Counter, a Gauge is used, so we can set it to whatever is in Redis. For Histogram, there’s just a loop of observe() for every item popped from the Redis list.
  • Django app exposes uWSGI stats too, on local port 1717.
  • Metrics (a tiny Falcon app) scrapes those, and converts them to Prometheus metrics, added together with the Django metrics from Redis.
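For completeness, a hypothetical Django settings fragment showing where such middleware would hook in (the module path is illustrative; this predates Django 1.10, hence MIDDLEWARE_CLASSES and the process_request/process_response style):

MIDDLEWARE_CLASSES = (
    'myproject.prometheus_django_redis.PrometheusBeforeMiddleware',
    # ... the rest of the middleware stack ...
)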

I’ll be happy to provide more details if it would help someone; it’s just that the code isn’t tidied up.

