
Kubernetes: Allow to specify NodeSelector and/or Affinity for Runners


Yesterday I installed Predator for the first time and launched my first test. Good job on turning Artillery into a distributed load test that runs as Kubernetes Jobs! ❤️

I wanted to test the HPA for some of my applications, to make sure the HPA was set up correctly and to see how much pounding they could take.

In order to make sure the tests were not interfering with the application being tested, I created a new GKE node pool with a specific label purpose:loadtesting and configured my app to never be scheduled on one of those nodes (using affinity).

Here is how I created the node pool in GKE:

    gcloud container node-pools create ${POOL_NAME} \
           --cluster ${CLUSTER} \
           --enable-autoscaling \
           --machine-type n1-standard-4 \
           --max-nodes 12 \
           --min-nodes 3 \
           --node-version 1.15.11-gke.5 \
           --num-nodes 4 \
           --scopes gke-default,logging-write,monitoring \
           --zone ${ZONE} \
           --node-labels=purpose=loadtesting
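
The app-side scheduling constraint mentioned above could look like this (a sketch of a pod spec fragment, assuming the standard Kubernetes nodeAffinity API; the label key and value match the pool created above):

```yaml
# Sketch: keep application pods OFF nodes labeled purpose=loadtesting.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: purpose
              operator: NotIn
              values:
                - loadtesting
```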

Then I wanted the tests (runner) to run ONLY on nodes which had the label purpose:loadtesting.

So that’s my suggestion: being able to force the runner pods onto specific nodes (using nodeSelector, for example), or to prevent them from running on some nodes (using affinity).
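
In Job terms, the generated runner Job spec could end up with something like this (a hypothetical fragment, reusing the pool label above):

```yaml
# Hypothetical fragment of the runner Job spec Predator generates;
# nodeSelector pins runner pods to the loadtesting pool.
spec:
  template:
    spec:
      nodeSelector:
        purpose: loadtesting
      restartPolicy: Never
```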

I was able to install the Predator helm chart and specify that I wanted it to run on my loadtesting nodes by using:

helm install predator zooz/predator --set nodeSelector.purpose=loadtesting

and that worked just fine. But it has no effect on the runner pods.

So what I ended up doing is cloning your repo and making a small change to src/jobs/models/kubernetes/jobTemplate.js, and now the runners run on the nodes I want.

The diff looks like:

--- a/src/jobs/models/kubernetes/jobTemplate.js
+++ b/src/jobs/models/kubernetes/jobTemplate.js
@@ -30,6 +30,9 @@ module.exports.createJobRequest = (jobName, runId, parallelism, environmentVaria
                             'env': Object.keys(environmentVariables).map(environmentVariable => ({ name: environmentVariable, value: environmentVariables[environmentVariable] }))
                         }
                     ],
+                    'nodeSelector': {
+                        'purpose': 'loadtesting'
+                    },
                     'restartPolicy': 'Never'
                 }
             },

But this only works for my case. It would be nice if the helm chart values had a section called runners where I could specify things like nodeSelector, affinity, or tolerations, which would be applied to the runner Jobs.
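
As a sketch, such a values section might look like this (all keys here are hypothetical, not an existing chart API):

```yaml
# Hypothetical values.yaml section for the predator chart;
# none of these keys exist in the chart today.
runner:
  nodeSelector:
    purpose: loadtesting
  tolerations:
    - key: purpose
      operator: Equal
      value: loadtesting
      effect: NoSchedule
  affinity: {}
```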

The alternative would be to change the UI to ask for these settings, but since they only apply to Kubernetes, I thought it made more sense to have this as part of the chart’s configurable values.

Let me know if you need any more details.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 7 (2 by maintainers)

Top GitHub Comments

2 reactions
enudler commented, Jun 19, 2020

Hi @NivLipetz @roychri, I would like to start working on this one.

Proposal: Add a ConfigMap to the Predator helm chart where you can define the runner params. For example:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: predator-runner-configmap
    data:
      template: |-
        {
          "spec": {
            "template": {
              "metadata": {
                "annotations": {
                  "traffic.sidecar.istio.io/excludeOutboundPorts": "8060"
                }
              }
            }
          }
        }

Predator will merge the given template of the runner with its own current ‘hardcoded’ job template. This will give good flexibility for future uses and adjustments.
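
The merge step could be sketched in Node.js like this (deepMerge is an illustrative helper, not Predator’s actual code; keys from the ConfigMap template override or extend the hardcoded template):

```javascript
// Sketch of the proposed merge. "deepMerge" is a hypothetical helper,
// not Predator's real implementation.
function deepMerge(base, override) {
    const result = { ...base };
    for (const key of Object.keys(override)) {
        const bothObjects =
            override[key] && typeof override[key] === 'object' && !Array.isArray(override[key]) &&
            base[key] && typeof base[key] === 'object' && !Array.isArray(base[key]);
        // Recurse when both sides are plain objects; otherwise take the override value.
        result[key] = bothObjects ? deepMerge(base[key], override[key]) : override[key];
    }
    return result;
}

// Predator's current hardcoded job template (simplified)
const hardcodedTemplate = {
    spec: {
        template: {
            spec: { restartPolicy: 'Never' }
        }
    }
};

// Template supplied via the proposed ConfigMap
const userTemplate = {
    spec: {
        template: {
            metadata: {
                annotations: { 'traffic.sidecar.istio.io/excludeOutboundPorts': '8060' }
            }
        }
    }
};

const merged = deepMerge(hardcodedTemplate, userTemplate);
console.log(JSON.stringify(merged, null, 2));
```

The result keeps the hardcoded restartPolicy while gaining the user-supplied annotation, which is the flexibility the proposal is after.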

What do you think?

1 reaction
enudler commented, Jul 14, 2020

@roychri although the issue is closed, I realized that I had put a wrong example, so I’ve updated it 😃
