(eks): Why is creating managed node groups allowed for imported clusters, but not for clusters crossing the stack border?
See original GitHub issue

❓ General Issue
Creating managed nodegroups behaves inconsistently:
- Creating a managed nodegroup in the same stack as the cluster works like a charm.
- Creating a managed nodegroup in another stack, passing the `eks.Cluster` directly, doesn't work (https://github.com/markus7811/aws-cdk/blob/48e0d950ef0d52117608c5ca37fcb5a57e9324df/packages/%40aws-cdk/aws-eks/lib/managed-nodegroup.ts#L315)
- Creating a managed nodegroup in another stack using the `ICluster` interface … no problem.
```ts
import * as cdk from '@aws-cdk/core';
import * as eks from '@aws-cdk/aws-eks';

export class EksClusterStack extends cdk.Stack {
  readonly cluster: eks.Cluster;
  readonly nodegroup: eks.Nodegroup;

  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    this.cluster = new eks.Cluster(this, 'Cluster', { /* ... */ });

    /* Works! */
    this.nodegroup = new eks.Nodegroup(this, 'Nodegroup', {
      cluster: this.cluster,
      // ...
    });
  }
}

/* Won't work */
export class NodegroupWontWork extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props: /* ... */ any) {
    super(scope, id, props);
    new eks.Nodegroup(this, 'Nodegroup', {
      /* in this case the cluster from EksClusterStack is passed as a "real" eks.Cluster */
      cluster: props.cluster,
      // ...
    });
  }
}

/* Works */
export class NodegroupWorks extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props: /* ... */ any) {
    super(scope, id, props);
    new eks.Nodegroup(this, 'Nodegroup', {
      /* in this case I reconstruct the cluster from stack 1 */
      cluster: eks.Cluster.fromClusterAttributes(this, 'Cluster', { /* ... */ }),
      // ...
    });
  }
}
```
The Question
Why do you prohibit case 2 but allow case 3? Presumably it has something to do with the `aws-auth` ConfigMap. I have to admit I didn't run deep tests on what would happen if CDK rewrites the ConfigMap after the nodegroup was added … that's the only thing I could imagine going wrong.
So is it a real problem (CDK editing the ConfigMap after the nodegroup was attached)?
Shouldn't it be either forbidden or allowed for both `Cluster` and `ICluster`?
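For context, managed node groups authenticate by having the node instance role mapped into the cluster's `aws-auth` ConfigMap in `kube-system`. A typical `mapRoles` entry looks roughly like this (the role ARN and account ID are placeholders, not values from this issue):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Placeholder node role ARN; EKS/CDK maps the nodegroup's instance role here.
    - rolearn: arn:aws:iam::123456789012:role/my-node-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```

If a second stack rewrote this ConfigMap without knowing its current contents, existing role mappings could be clobbered, which is the risk the question hints at.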
Environment
not relevant…
Issue Analytics
- State:
- Created 3 years ago
- Comments: 10 (9 by maintainers)
@markus7811 Yeah, we cannot expose `aws-auth` on `ICluster` because it would potentially overwrite existing configuration in imported clusters. For now, the way to configure node groups in another stack would have to be by creating the `nodeRole` in the cluster stack as well, as you've already mentioned here.

@markus7811 Just to make sure I don't make any unwanted assumptions, could you please attach small code snippets for the 3 scenarios you described?
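A minimal sketch of the suggested workaround, assuming the cluster stack creates the node role (so it gets mapped into `aws-auth` there) and passes it, together with the cluster attributes, to the nodegroup stack. The prop names and wiring below are illustrative, not taken from the issue:

```typescript
import * as cdk from '@aws-cdk/core';
import * as eks from '@aws-cdk/aws-eks';
import * as iam from '@aws-cdk/aws-iam';

// Hypothetical props interface: the cluster stack would expose these values.
interface NodegroupStackProps extends cdk.StackProps {
  clusterName: string;
  kubectlRoleArn: string;
  nodeRole: iam.IRole; // created (and mapped into aws-auth) in the cluster stack
}

export class NodegroupStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props: NodegroupStackProps) {
    super(scope, id, props);

    // Import the cluster by attributes instead of passing the concrete eks.Cluster,
    // so this stack never needs write access to the cluster's aws-auth ConfigMap.
    const cluster = eks.Cluster.fromClusterAttributes(this, 'Cluster', {
      clusterName: props.clusterName,
      kubectlRoleArn: props.kubectlRoleArn,
    });

    new eks.Nodegroup(this, 'Nodegroup', {
      cluster,
      // Reuse the role the cluster stack already registered, instead of
      // letting this stack create one it could not map into aws-auth.
      nodeRole: props.nodeRole,
    });
  }
}
```

The key design point is that the `aws-auth` mapping stays owned by the cluster stack; the nodegroup stack only consumes an already-mapped role.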