Why does scale() return a list of k8s pods?
I’m noticing some unpleasantly verbose output when calling scale() on a KubeCluster:
```python
from dask_kubernetes import KubeCluster

cluster = KubeCluster()
cluster.scale(2)
```
Based on my experience with other dask cluster objects, I would not expect the scale()
method to return anything. Instead, I get a list of kubernetes.client.models.v1_pod.V1Pod
objects:
```
[{'api_version': 'v1',
'kind': 'Pod',
'metadata': {'annotations': None,
'cluster_name': None,
'creation_timestamp': datetime.datetime(2019, 4, 26, 16, 48, 48, tzinfo=tzlocal()),
'deletion_grace_period_seconds': None,
'deletion_timestamp': None,
'finalizers': None,
'generate_name': 'dask-jovyan-37e0345b-2',
'generation': None,
'initializers': None,
'labels': {'app': 'dask',
'component': 'dask-worker',
'dask.org/cluster-name': 'dask-jovyan-37e0345b-2',
'user': 'jovyan'},
'name': 'dask-jovyan-37e0345b-2m92tx',
'namespace': 'nasa-prod',
'owner_references': None,
'resource_version': '3277582',
'self_link': '/api/v1/namespaces/nasa-prod/pods/dask-jovyan-37e0345b-2m92tx',
'uid': '299f8f18-6843-11e9-93cb-1639dbec9e42'},
'spec': {'active_deadline_seconds': None,
'affinity': None,
'automount_service_account_token': None,
'containers': [{'args': ['dask-worker',
'--nthreads',
'2',
'--no-bokeh',
'--memory-limit',
'7GB',
'--death-timeout',
'60'],
'command': None,
'env': [{'name': 'DASK_SCHEDULER_ADDRESS',
'value': 'tcp://192.168.26.96:37821',
'value_from': None}],
'env_from': None,
'image': '783380859522.dkr.ecr.us-east-1.amazonaws.com/pangeo-nasa:54d8260',
'image_pull_policy': 'IfNotPresent',
'lifecycle': None,
'liveness_probe': None,
'name': 'dask-jhamman',
'ports': None,
'readiness_probe': None,
'resources': {'limits': {'cpu': '1750m',
'memory': '7G'},
'requests': {'cpu': '1',
'memory': '7G'}},
'security_context': None,
'stdin': None,
'stdin_once': None,
'termination_message_path': '/dev/termination-log',
'termination_message_policy': 'File',
'tty': None,
'volume_mounts': [{'mount_path': '/var/run/secrets/kubernetes.io/serviceaccount',
'mount_propagation': None,
'name': 'default-token-szzrb',
'read_only': True,
'sub_path': None}],
'working_dir': None}],
'dns_policy': 'ClusterFirst',
'host_aliases': None,
'host_ipc': None,
'host_network': None,
'host_pid': None,
'hostname': None,
'image_pull_secrets': None,
'init_containers': None,
'node_name': None,
'node_selector': {'alpha.eksctl.io/nodegroup-name': 'dask-worker'},
'priority': 0,
'priority_class_name': None,
'restart_policy': 'Never',
'scheduler_name': 'default-scheduler',
'security_context': {'fs_group': None,
'run_as_non_root': None,
'run_as_user': None,
'se_linux_options': None,
'supplemental_groups': None},
'service_account': 'default',
'service_account_name': 'default',
'subdomain': None,
'termination_grace_period_seconds': 30,
'tolerations': [{'effect': 'NoExecute',
'key': 'node.kubernetes.io/not-ready',
'operator': 'Exists',
'toleration_seconds': 300,
'value': None},
{'effect': 'NoExecute',
'key': 'node.kubernetes.io/unreachable',
'operator': 'Exists',
'toleration_seconds': 300,
'value': None}],
'volumes': [{'aws_elastic_block_store': None,
'azure_disk': None,
'azure_file': None,
'cephfs': None,
'cinder': None,
'config_map': None,
'downward_api': None,
'empty_dir': None,
'fc': None,
'flex_volume': None,
'flocker': None,
'gce_persistent_disk': None,
'git_repo': None,
'glusterfs': None,
'host_path': None,
'iscsi': None,
'name': 'default-token-szzrb',
'nfs': None,
'persistent_volume_claim': None,
'photon_persistent_disk': None,
'portworx_volume': None,
'projected': None,
'quobyte': None,
'rbd': None,
'scale_io': None,
'secret': {'default_mode': 420,
'items': None,
'optional': None,
'secret_name': 'default-token-szzrb'},
'storageos': None,
'vsphere_volume': None}]},
'status': {'conditions': None,
'container_statuses': None,
'host_ip': None,
'init_container_statuses': None,
'message': None,
'phase': 'Pending',
'pod_ip': None,
'qos_class': 'Burstable',
'reason': None,
'start_time': None}}]
```
Is this the intended behavior? Is there an existing way to silence this output?
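As a stopgap I can suppress the echo by capturing the return value (a user-side sketch; the trailing-semicolon variant is IPython/Jupyter-specific):
```python
from dask_kubernetes import KubeCluster

cluster = KubeCluster()

# Assigning the result stops the REPL from echoing the V1Pod list:
_ = cluster.scale(2)

# In IPython/Jupyter, a trailing semicolon also suppresses the repr:
# cluster.scale(2);
```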
My version details:
- dask 1.1.1
- distributed 1.25.3
- dask_kubernetes 0.7.0
- kubernetes 4.0.0
Issue Analytics
- Created: 4 years ago
- Comments: 5 (4 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
- Yeah, I think this should be removed.
- @yuvipanda do you have any thoughts on this? Should we remove this?
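For illustration, the change under discussion would amount to dropping the return value, along these lines (a hypothetical sketch, not the actual dask_kubernetes source; the class structure and the _create_worker_pods helper are invented stand-ins):
```python
from typing import Any, List


class KubeCluster:
    """Hypothetical sketch of the proposed change; the structure and
    helper names here are assumptions, not the real dask_kubernetes code."""

    def _create_worker_pods(self, n: int) -> List[Any]:
        # Stand-in for the Kubernetes API calls that create worker pods.
        return [f"V1Pod-{i}" for i in range(n)]

    def scale_up(self, n: int) -> None:
        pods = self._create_worker_pods(n)
        # Presumably the method currently ends with `return pods`, which is
        # what makes interactive sessions echo the full pod specs above;
        # returning nothing instead keeps scale() quiet, matching other
        # Dask cluster objects.
```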