
Feature Request: Default pod affinity


Dask-kubernetes is currently configured with two default taint tolerations on its pods (see #109, #110, #145). These tolerations allow dask-worker pods to be the sole inhabitants of a node/node-pool, provided the cluster is configured with the corresponding taints. Another scheduling pattern would be to attract dask-worker pods to a specific set of nodes. While this has been done successfully with nodeSelectors in the past, that is not a particularly user-friendly option.
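For reference, the two patterns look roughly like this (the taint key and the selector label here are illustrative, not the actual defaults; see #109/#110 for those):

# toleration pattern: workers are allowed onto nodes tainted for dask
spec:
  tolerations:
  - key: k8s.dask.org/dedicated   # illustrative key; see #109/#110
    operator: Equal
    value: worker
    effect: NoSchedule
---
# nodeSelector pattern: workers may only land on nodes carrying this label
spec:
  nodeSelector:
    dask-worker: "true"           # hypothetical label chosen by the admin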

I’m wondering if we could come up with a universal dask-worker label and then have dask-kubernetes set a preferredDuringSchedulingIgnoredDuringExecution affinity on each pod. Has anyone tried this? Are there specific downsides? My understanding is that the nodeSelector approach will eventually be deprecated in favor of affinity rules.

I think something like this would work (haven’t tested):

spec:
  affinity:
    nodeAffinity:
      # a soft preference: the scheduler favors matching nodes but will
      # still place the pod elsewhere if none are available
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          - key: k8s.dask.org/node-purpose
            operator: In
            values:
            - worker
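
For the preference to have any effect, an admin would also label the target nodes, e.g. (node name hypothetical):

kubectl label nodes <node-name> k8s.dask.org/node-purpose=worker

Because the affinity is only a preference, unlabeled clusters and pods without the stanza behave exactly as before.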

The main benefits here are:

  1. This doesn’t require users/admins to set anything; existing applications without labels continue to function as normal
  2. It would allow admins to steer dask-worker pods to specific nodes without asking users to modify their pod templates

Thoughts from @yuvipanda, @jacobtomlinson, @betatim?

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Comments: 8 (5 by maintainers)

Top GitHub Comments

1 reaction
jhamman commented, May 10, 2019

Okay, I see where you are headed. I think we should start with a default “prefer” affinity, then follow up by making the prefer-vs-require option configurable at the dask-kubernetes config level.
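
For comparison, the “require” variant would pin workers to labeled nodes outright; an untested sketch using the same hypothetical label as above:

spec:
  affinity:
    nodeAffinity:
      # hard requirement: the pod stays Pending until a matching node exists
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: k8s.dask.org/node-purpose
            operator: In
            values:
            - worker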

0 reactions
jhamman commented, May 17, 2019

closed by #147
