[Feature Request] Ability to specify the AZ when creating NodeGroups
TL;DR Proposal
It would be nice to have an availabilityZone?: pulumi.Input<string> field in ClusterNodeGroupOptions that does the same thing as the workaround below:
Use case
Because EBS volumes can only be mounted by an instance in the same AZ, it makes sense to be able to place all nodes of an EKS cluster in a specific AZ when using EBS volumes, since you expect a pod to be able to mount an EBS volume regardless of the node it's scheduled on.
Current Behavior
When creating a cluster without specifying a VPC, the default VPC is used, together with the default subnets in that VPC, and we end up with nodes scattered throughout the whole region, spread across as many AZs as there are default subnets.
Workaround
An EC2 instance is placed in the same AZ as its subnet, so the way to place a NodeGroup's nodes in a specific AZ is to set nodeSubnetIds in ClusterNodeGroupOptions to a subnet in that AZ. To be able to specify the literal AZ name (e.g. eu-central-1c), I've come up with a function that, given an eks.Cluster and an AZ name, returns the id of a subnet in that AZ:
import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";
import { Output } from "@pulumi/pulumi";

export function getSubnetIdInAZ(cluster: eks.Cluster, az: string): Output<string> {
  const { subnetIds } = cluster.eksCluster.vpcConfig;
  return subnetIds.apply(async ids => {
    // Look up each subnet attached to the cluster to learn its AZ.
    const subnets = await Promise.all(ids.map(id => aws.ec2.getSubnet({ id })));
    const subnet = subnets.find(s => s.availabilityZone === az);
    if (!subnet) {
      throw new Error(`No subnet found in ${az} zone`);
    }
    return subnet.id;
  });
}
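The AZ-matching step inside that function is plain data filtering, independent of Pulumi's Output machinery. A minimal, Pulumi-free sketch of just that logic (the SubnetInfo shape and pickSubnetIdInAz name are illustrative, not part of any library):

```typescript
// Hypothetical minimal shape of the subnet data we care about.
interface SubnetInfo {
  id: string;
  availabilityZone: string;
}

// Return the id of the subnet in the requested AZ, or throw if none exists.
// This mirrors the find/throw logic used inside getSubnetIdInAZ above.
function pickSubnetIdInAz(subnets: SubnetInfo[], az: string): string {
  const subnet = subnets.find(s => s.availabilityZone === az);
  if (!subnet) {
    throw new Error(`No subnet found in ${az} zone`);
  }
  return subnet.id;
}
```

Separating this out also makes the selection logic trivially unit-testable without touching AWS.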
I then used it like this:
const cluster = new eks.Cluster('cluster', {
  skipDefaultNodeGroup: true,
});

cluster.createNodeGroup('worker', {
  /* ... */
  nodeSubnetIds: [getSubnetIdInAZ(cluster, 'eu-central-1c')],
});
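A related approach, if you control the network, is to create the VPC yourself and pin the node subnets explicitly, avoiding the lookup entirely. A hedged sketch using plain aws resources (the CIDR blocks, names, and AZs are illustrative; note that the EKS control plane requires subnets in at least two AZs, while the node group can stay in one):

```typescript
import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";

const vpc = new aws.ec2.Vpc("vpc", { cidrBlock: "10.0.0.0/16" });

// EKS needs control-plane subnets in at least two AZs...
const subnetA = new aws.ec2.Subnet("subnet-a", {
  vpcId: vpc.id,
  cidrBlock: "10.0.0.0/24",
  availabilityZone: "eu-central-1a",
});
const subnetC = new aws.ec2.Subnet("subnet-c", {
  vpcId: vpc.id,
  cidrBlock: "10.0.1.0/24",
  availabilityZone: "eu-central-1c",
});

const cluster = new eks.Cluster("cluster", {
  vpcId: vpc.id,
  subnetIds: [subnetA.id, subnetC.id],
  skipDefaultNodeGroup: true,
});

// ...but the node group itself can be pinned to the single AZ we care about.
cluster.createNodeGroup("worker", {
  /* instance type, scaling options, etc. */
  nodeSubnetIds: [subnetC.id],
});
```

This trades the convenience of the default VPC for deterministic subnet placement, so the AZ never has to be discovered at runtime.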
Issue Analytics
- State:
- Created: 4 years ago
- Reactions: 5
- Comments: 8 (2 by maintainers)

Another reason to provide this ability is that AWS sometimes returns an availability-zone capacity error when creating the cluster. As it's documented in the EKS troubleshooting guide, I feel it must be quite frequent: https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html
eksctl eventually added support for that reason: https://github.com/weaveworks/eksctl/issues/118. It would be unfortunate if this wrapper automatically creates the needed VPC and subnets (when not provided with one) but does not handle such an error.
I encountered the "targeted availability zone does not currently have sufficient capacity to support the cluster" error and do not know how to fix it in Pulumi. Has this been addressed by any chance?