(eks): Support isolated VPCs
Provisioning clusters inside an isolated VPC (i.e., no internet access) is not currently supported. This is because the Lambda functions that operate the cluster need to invoke the EKS service, which does not offer a VPC endpoint.
Use Case
We’ve seen users mention that their environment uses an isolated VPC.
Other
Adding some information here to possibly facilitate alternative approaches.
If you have a proxy set up, you can inject proxy information into the handlers via custom environment variables:

```ts
const proxy = 'https://proxy.mycompany.com:8080/';

new eks.Cluster(this, 'Cluster', {
  ...,
  kubectlEnvironment: {
    HTTPS_PROXY: proxy,
  },
  clusterHandlerEnvironment: {
    HTTPS_PROXY: proxy,
  },
});
```
Also, the following is a list of AWS services that our Lambda handlers interact with in order to operate the cluster. All of these services offer a VPC endpoint except for EKS:
- Lambda
- Step Functions
- CloudFormation
- STS
- S3
- EKS
Related: https://github.com/aws/aws-cdk/issues/10036
Once EKS does offer a VPC endpoint, it would be nice if we automatically provisioned the necessary endpoints when we detect that the VPC has no internet access (no internet gateway or NAT gateway).
- 👋 I may be able to implement this feature request
- ⚠️ This feature might incur a breaking change
This is a 🚀 Feature Request
Issue Analytics
- State:
- Created: 3 years ago
- Reactions: 14
- Comments: 8 (3 by maintainers)
Top GitHub Comments
In my scenario, my “isolated” subnets aren’t really isolated from the internet, as I use a TGW to route traffic via an egress network. If you try for private subnets with natGateways=0, CDK insists you call them isolated. If you call them isolated, you can’t put EKS on them.
Is there a workaround for this, or could some sort of “I know what I’m doing” override be added?
Sure thing. Stack trace (file paths lightly sanitized):
cdk_eks_stack.py, line 48: