Kubernetes batch system doesn't provide Docker for CWL workflows that use Docker
On Gitter, it was reported that if you take our Kubernetes batch system and try to use it to run a CWL workflow that uses a DockerRequirement, it will fail.
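For context, this is roughly what a CWL tool that triggers the problem looks like. This is a minimal illustrative example, not the workflow from the report; the image tag is arbitrary:

```yaml
# minimal-echo.cwl — hypothetical minimal tool using DockerRequirement
cwlVersion: v1.2
class: CommandLineTool
baseCommand: echo
requirements:
  DockerRequirement:
    dockerPull: ubuntu:20.04   # any image; the requirement itself is what matters
inputs:
  message:
    type: string
    inputBinding:
      position: 1
outputs: []
```

Running this through the Kubernetes batch system fails because there is no Docker daemon available inside the worker pod.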
The Kubernetes batch system does not run its pods as privileged or start up Docker inside them, because we haven't usually needed it: at UCSC we use user-mode Singularity to run containers when on Kubernetes. But the CWL runner insists on using Docker, and the Singularity-running code isn't part of Toil itself while the Docker-running code is.
We should make it possible for the Kubernetes batch system to launch all workers as privileged, if the user asks for it, and to start up a Docker daemon before running jobs. Then, we can make the CWL execution code ask for this, either when it sees that the workflow is going to need Docker, all the time unless instructed not to, or maybe just when instructed to do so, depending on what seems most user-friendly.
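As a sketch of what a privileged worker with its own Docker daemon could look like, here is a hypothetical pod spec using the Docker-in-Docker sidecar pattern. The names, image tags, and layout are illustrative assumptions, not Toil's actual implementation:

```yaml
# Hypothetical worker pod: a privileged docker:dind sidecar provides a
# Docker daemon that jobs in the main container reach over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: toil-worker-with-docker      # illustrative name
spec:
  containers:
  - name: worker
    image: quay.io/ucsc_cgl/toil:latest   # illustrative image tag
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2375    # point the Docker client at the sidecar
  - name: dind
    image: docker:dind
    securityContext:
      privileged: true               # dockerd needs a privileged container
    env:
    - name: DOCKER_TLS_CERTDIR
      value: ""                      # disable TLS so port 2375 is plain TCP
```

Containers in a pod share a network namespace, which is why `localhost` works here; the cost is that every such worker pod must be privileged, which is exactly why this should be opt-in.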
The right way to do this would be a needs-docker requirement on the jobs, I think, but we don't yet have the infrastructure to support one-off requirement annotations like that. So I think it has to be all or nothing.
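To illustrate what a per-job requirement annotation might look like if we had that infrastructure, here is a hypothetical sketch. The `JobRequirements` class and `needs_docker` field are invented for illustration and are not part of Toil's API:

```python
# Hypothetical sketch of a one-off per-job requirement annotation.
# Nothing here is real Toil API; it only illustrates the idea that a
# batch system could inspect a flag to decide whether a job's pod
# must be launched privileged with a Docker daemon.
from dataclasses import dataclass


@dataclass
class JobRequirements:
    cores: float = 1.0
    memory: int = 2 * 1024 ** 3   # bytes
    disk: int = 2 * 1024 ** 3     # bytes
    needs_docker: bool = False    # hypothetical one-off requirement


def pod_needs_privilege(reqs: JobRequirements) -> bool:
    # The Kubernetes batch system could call something like this when
    # building the pod spec for a job.
    return reqs.needs_docker
```

The all-or-nothing alternative discussed above would instead be a single batch-system-wide option, applied to every worker pod.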
Issue is synchronized with a Jira task. Issue number: TOIL-662
Issue Analytics
- Created: 3 years ago
- Comments: 8 (6 by maintainers)
Top GitHub Comments
@mr-c That’s great! Maybe we can just upgrade the Singularity version that Toil ships to 3.6 and call it done.
Regarding the Singularity cache, it seems that Singularity 3.6 addresses this; see:
https://github.com/hpcng/singularity/blob/master/CHANGELOG.md#new-features--functionalities-2
See also https://github.com/biowdl/singularity-permanent-cache