Running tile-reduce across containers
The current internal workings of tile-reduce make it very good for running on a single machine, but because `require('os').cpus().length` does not accurately report the number of CPUs available inside a Docker container, tile-reduce can be very resource-hungry when running in a container.
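For illustration only (this is not tile-reduce code, and the `--cpus=2` limit and the core count are just examples):

```js
// Run inside a container started with e.g. `docker run --cpus=2 ...`:
// os.cpus() enumerates the host's logical cores, not the container's CPU quota,
// so the reported count can be far higher than what the container may actually use.
const os = require('os');

console.log(os.cpus().length); // prints the host core count (e.g. 32), not 2
```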
Rather than the current single-machine, process-forking model for distributing work, could we write a version of tile-reduce whose workers run as their own short-lived containers and communicate with a master container over HTTP or TCP? For use on ECS, we could use https://github.com/mapbox/ecs-watchbot to manage the orchestration of running new containers on an AWS ECS cluster. We could possibly create affordances for other orchestration tools (like Mesos or Kubernetes) if other people would like them.
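Purely to illustrate the shape of that proposal — nothing here exists in tile-reduce or ecs-watchbot, and the route, payload, and `processTile` helper are all made up:

```js
// Hypothetical worker container: accepts one tile over HTTP from a master
// container, runs the per-tile work, and returns the result as JSON.
const http = require('http');

// Placeholder for whatever the job's per-tile map script would do.
function processTile(tile) {
  return { tile: tile, features: 0 };
}

http.createServer(function (req, res) {
  let body = '';
  req.on('data', function (chunk) { body += chunk; });
  req.on('end', function () {
    const tile = JSON.parse(body); // e.g. [x, y, z] posted by the master
    res.setHeader('Content-Type', 'application/json');
    res.end(JSON.stringify(processTile(tile)));
  });
}).listen(8080);
```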
Issue Analytics
- Created 7 years ago
- Comments: 6 (3 by maintainers)

As someone who recently lived through the use case this is meant to address, I lean toward not dedicating concerted resources to a custom tile-reduce version. It is easy to stack-parameterize `MaxWorkers` and tie it to `reservation.cpu` as a way of getting a tile-reduce job up and running on ECS (see the sketch below). If that framework is not too offensive to the platform team, I'd suggest calling that out somewhere in the docs as a strategy to ease the transition a bit, while also acknowledging that fully leveraging ECS Watchbot is a more elegant approach, whenever the developer feels comfortable.

An HTTP/TCP-based map-reduce implementation is going to have very different performance characteristics compared to tile-reduce. The communication protocol has a major effect on architecture considerations for any particular job, such as tile zoom or memory usage. I think something like this is worthwhile, but it's a very different problem from the one tile-reduce is trying to solve (closer to Hadoop).
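A rough sketch of that workaround, assuming the stack forwards its `MaxWorkers` parameter to the task as an environment variable of the same name; the variable name, zoom, map script, source, and bbox are all illustrative:

```js
// Hypothetical ECS glue: the stack's MaxWorkers parameter (sized from
// reservation.cpu) reaches the container as an env var and caps the fork count.
const path = require('path');
const tileReduce = require('tile-reduce');

tileReduce({
  zoom: 12,                                                 // illustrative zoom
  map: path.join(__dirname, 'map.js'),                      // illustrative map script
  sources: [{ name: 'osm', mbtiles: '/data/osm.mbtiles' }], // illustrative source
  bbox: [-180, -85, 180, 85],                               // illustrative extent
  maxWorkers: parseInt(process.env.MaxWorkers, 10) || 1     // from the stack parameter
})
  .on('end', function () {
    console.log('job complete');
  });
```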
If the noisy neighbor problem is a common footgun, though, I would be in favor of making `maxWorkers` a required parameter with no default. We can document an example that sets `maxWorkers` to `os.cpus().length`, so the normal “I’m running this on a laptop” use case is easy to achieve. In general, I do think it is always best practice to carefully consider how many workers you want to use, even on a laptop (for other reasons, like RAM constraints).
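For the laptop case described above, that documented example might look like the following; everything other than `maxWorkers` is illustrative:

```js
// Explicit worker count for a local run: one worker per core, easy to dial
// down if RAM rather than CPU is the real constraint.
const os = require('os');
const path = require('path');
const tileReduce = require('tile-reduce');

tileReduce({
  zoom: 12,                                                 // illustrative zoom
  map: path.join(__dirname, 'map.js'),                      // illustrative map script
  sources: [{ name: 'osm', mbtiles: '/data/osm.mbtiles' }], // illustrative source
  bbox: [-122.6, 37.6, -122.3, 37.9],                       // illustrative extent
  maxWorkers: os.cpus().length                              // explicit, not a hidden default
});
```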