uwsgi sample configuration
See original GitHub issue.

I was looking for a configuration to run with nginx + uwsgi.
The only thing you need to do to make this work is to add the following line to uwsgi.ini:
enable-threads=True
This enables the threads that the app starts inside uWSGI.
But when I go to the expression browser or PromDash, it doesn't report anything from the app; the instrumentation seems to be coming from nowhere.
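For context, a minimal uwsgi.ini with that line in place might look something like the sketch below; the module name, socket, and worker count are placeholders, not anything taken from the issue.

    [uwsgi]
    # hypothetical WSGI entry point; replace with your own app
    module = myapp.wsgi:application
    http-socket = :8000
    master = true
    processes = 4
    # the setting discussed above: allow threads created by the app
    enable-threads = true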
Issue Analytics
- Created 8 years ago
- Comments: 9 (1 by maintainers)
Top Results From Across the Web
- Configuring uWSGI — uWSGI 2.0 documentation
  The configuration system is unified, so each command line option maps 1:1 with entries in the config files. Example: uwsgi --http-socket :9090 --psgi...
- How To Set Up uWSGI and Nginx to Serve Python Apps on ...
  Configure a uWSGI Config File. In the above example, we manually started the uWSGI server and passed it some parameters on the command...
- Configuring uWSGI for Production Deployment | Bloomberg LP
  We feel it is best to enable this field by default, as the risk of mistyping a uWSGI configuration parameter is greater in...
- uWSGI - ArchWiki
  Web applications served by uWSGI are configured in /etc/uwsgi/, where each of them requires its own configuration file (ini-style). Details can...
- How to use Django with uWSGI
  uWSGI supports multiple ways to configure the process. See uWSGI's configuration documentation. Here's an example command to start a uWSGI server...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@ge-fa - sure.
First, we run uWSGI with:
uwsgi --enable-threads --emperor /foo/bar/emperor/$ENV --disable-logging
We keep slightly different configurations for dev vs. stage vs. prod. In each emperor/env folder, we keep two ini files: one for the app itself and one for the metrics service (both are sketched below).
These can be adjusted, of course, but do not turn on lazy mode! The app will start leaking memory horribly. You now serve on three ports: 80 for Django, 1717 for uWSGI metrics, and 9090 for Prometheus.
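The actual ini files aren't reproduced here, but a rough sketch of such a pair, kept per environment under emperor/<env>, could look like the following; the module names, worker counts, and the use of port 1717 for uWSGI's stats server are assumptions, not the author's real configuration.

    # emperor/<env>/app.ini -- the Django application itself, on port 80
    [uwsgi]
    module = mysite.wsgi:application
    http-socket = :80
    master = true
    processes = 4
    enable-threads = true
    # note: no lazy / lazy-apps here, per the memory-leak warning above

    # emperor/<env>/metrics.ini -- the small metrics app, on port 9090
    [uwsgi]
    module = metrics:application
    http-socket = :9090
    # assuming 1717 is uWSGI's own stats server
    stats = :1717
    enable-threads = true
    processes = 1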
Now metrics.py should contain a simple app, along the lines of the sketch below. The functionality in prometheus_django_redis is a bit hacky; I'm not sure I can share the whole code, but the gist of it is that metric observations get written to a local Redis list instead of directly into Prometheus, and the metrics app replays them.
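A hedged sketch of such a metrics.py follows; the metric definitions, the registry wiring, and the drain_forever helper (from a hypothetical redis_metrics_reader module, sketched further below) are illustrative guesses, not the author's code.

    # metrics.py (sketch): expose a Prometheus registry over WSGI and drain
    # observations that the Django workers pushed into a local Redis list.
    import threading

    from prometheus_client import CollectorRegistry, Counter, Histogram, make_wsgi_app

    from redis_metrics_reader import drain_forever  # hypothetical module, sketched below

    registry = CollectorRegistry()

    # Example metrics; a real project would define its own.
    METRICS = {
        "django_request_latency_seconds": Histogram(
            "django_request_latency_seconds", "Request latency", ["view"],
            registry=registry,
        ),
        "django_requests_total": Counter(
            "django_requests_total", "Request count", ["view", "status"],
            registry=registry,
        ),
    }

    # Background thread replaying queued observations into the metrics above.
    threading.Thread(target=drain_forever, args=(METRICS,), daemon=True).start()

    # uWSGI serves this WSGI callable on :9090 (see metrics.ini above).
    application = make_wsgi_app(registry)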
And then the rules for writing to Redis, instead of directly to Prometheus, are along the lines sketched below:
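The author's actual rules aren't shown above; a hypothetical write-side helper in that spirit could look like this, with the Redis key, record format, and function names invented for illustration.

    # prometheus_django_redis-style write helper (sketch): Django code queues a
    # small JSON record in a pod-local Redis list instead of touching
    # prometheus_client in the worker process.
    import json

    import redis

    REDIS_KEY = "prometheus_samples"  # assumed list name
    _redis = redis.Redis(host="localhost", port=6379)

    def observe(metric_name, value, **labels):
        """Queue a Histogram observation for the metrics app to replay."""
        _redis.rpush(REDIS_KEY, json.dumps(
            {"metric": metric_name, "op": "observe", "value": value, "labels": labels}
        ))

    def inc(metric_name, value=1, **labels):
        """Queue a Counter increment."""
        _redis.rpush(REDIS_KEY, json.dumps(
            {"metric": metric_name, "op": "inc", "value": value, "labels": labels}
        ))

A Django view would then call something like observe("django_request_latency_seconds", elapsed, view="index") rather than using prometheus_client directly in the worker.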
To read them, have some utility file, like the reader sketched below:
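Again hypothetical, and matching the record format assumed above: a small reader that pops each record off the Redis list and replays it into the metric objects from metrics.py.

    # redis_metrics_reader.py (sketch): pop JSON records from the Redis list and
    # apply them to the prometheus_client metric objects defined in metrics.py.
    import json

    import redis

    REDIS_KEY = "prometheus_samples"
    _redis = redis.Redis(host="localhost", port=6379)

    def drain_forever(metrics):
        """Blocking loop: one observe()/inc() per item popped from the list."""
        while True:
            item = _redis.blpop(REDIS_KEY, timeout=5)  # returns (key, payload) or None
            if item is None:
                continue
            record = json.loads(item[1])
            metric = metrics[record["metric"]]
            if record["labels"]:
                metric = metric.labels(**record["labels"])
            if record["op"] == "observe":
                metric.observe(record["value"])
            else:
                metric.inc(record["value"])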
See? Simple!
Except… not at all. I mean, I’m sure there are better ways to do it, but I did whatever butchered way was easy enough to develop and deliver.
In other news, I am incredibly happy to develop in Node.js, where asynchronous programming is a breeze, I can start an infinite number of HTTP servers in a few lines, and I don't need nasty multi-threading / multiprocessing that eats gigabytes of memory to achieve all that. (That said, Python of course has its uses, but I no longer feel like HTTP servers should be one of them, at least not unless you do something special like Stackless/httptools/uvloop.)
Hope it helps!
EDIT: I should also note that we run each instance as a Docker container / Kubernetes pod, so there isn't any problem with allocating the same ports for many different applications. Redis also runs locally in the pod, started simply with
redis &
which I know is barbaric, but so far it has worked reliably.

OK, maybe this will help someone, here's what I did: I call observe() for every item popped from the Redis list. I'll be happy to provide more details if it would help someone; it's just that the code isn't tidied up.