Creating an EKS cluster fails when providing credentials via pulumi config secrets instead of environment variables
Hello!
- Vote on this issue by adding a 👍 reaction
- To contribute a fix for this issue, leave a comment (and link to your pull request, if you’ve opened one already)
Issue details
I’m writing a utility to create infrastructure for us via the Pulumi Automation API. I’m also using the AWS STS SDK to perform the assume-role call to acquire AWS credentials. I have a stack which creates a simple EKS cluster for our CI runners. When I provide the AWS credentials via the aws:accessKey, aws:secretKey, and aws:token config values, creation of the EKS cluster fails at quite a late stage because Pulumi is unable to communicate with the EKS cluster API. Note that many of the related AWS objects, including the EKS cluster itself, are created successfully, so for most of the process the provided credentials are being used.
Because I can successfully update the stack when running it manually (via pulumi up) with credentials provided as environment variables, I tried altering my automation code to programmatically set environment variables rather than configuration secrets for the stack, and the update started completing.
My hunch is that something in our configuration causes pulumi/eks to talk to the Kubernetes API as well as the AWS API, and that the part of the process which does so fails to pick up the credentials from Pulumi config.
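For reference, here is a minimal sketch of the two approaches described above using the Pulumi Automation API. This is not the original utility: the stack name, project directory, region, and the shape of the credentials object are placeholders, and the STS assume-role step is assumed to have happened elsewhere.

```typescript
import { LocalWorkspace } from "@pulumi/pulumi/automation";

// Hypothetical shape of the credentials obtained earlier via an STS AssumeRole call.
interface AwsCredentials {
    accessKeyId: string;
    secretAccessKey: string;
    sessionToken: string;
}

// Failing path: pass the credentials to the stack as Pulumi config secrets.
async function upWithConfigSecrets(creds: AwsCredentials) {
    const stack = await LocalWorkspace.createOrSelectStack({
        stackName: "ci-cluster-dev",    // placeholder
        workDir: "./ci-cluster",        // placeholder project directory
    });
    await stack.setAllConfig({
        "aws:region": { value: "eu-west-1" },                           // placeholder
        "aws:accessKey": { value: creds.accessKeyId, secret: true },
        "aws:secretKey": { value: creds.secretAccessKey, secret: true },
        "aws:token": { value: creds.sessionToken, secret: true },
    });
    // Fails late in the update, while communicating with the EKS cluster API.
    return stack.up({ onOutput: console.log });
}

// Working path: pass the same credentials as environment variables on the workspace.
async function upWithEnvVars(creds: AwsCredentials) {
    const stack = await LocalWorkspace.createOrSelectStack(
        {
            stackName: "ci-cluster-dev",    // placeholder
            workDir: "./ci-cluster",        // placeholder project directory
        },
        {
            envVars: {
                AWS_ACCESS_KEY_ID: creds.accessKeyId,
                AWS_SECRET_ACCESS_KEY: creds.secretAccessKey,
                AWS_SESSION_TOKEN: creds.sessionToken,
                AWS_REGION: "eu-west-1",    // placeholder
            },
        },
    );
    // Completes successfully.
    return stack.up({ onOutput: console.log });
}
```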
Steps to reproduce
Running the code below with credentials provided via pulumi config will fail. Providing the same credentials via environment variables will succeed.
/**
 * Creates necessary dependencies and then sets up an EKS cluster for running CI jobs
 */
import * as pulumi from "@pulumi/pulumi";
import * as eks from "@pulumi/eks";
import * as awsx from "@pulumi/awsx";

/**
 * Create a VPC with the given name
 * @param name the name of the VPC
 * @returns the new VPC
 */
const createVPC = (name: string): awsx.ec2.Vpc => {
    const vpc = new awsx.ec2.Vpc(name, {});
    return vpc;
};

/**
 * The name and VPC to use for an EKS cluster
 */
interface ClusterOptions {
    name: string;
    vpc: awsx.ec2.Vpc;
}

/**
 * Create an EKS cluster with the provided options set
 * @param opts options for the EKS cluster
 * @returns the new cluster
 */
const createCluster = (opts: ClusterOptions): eks.Cluster => new eks.Cluster(opts.name, {
    vpcId: opts.vpc.id,
    publicSubnetIds: opts.vpc.publicSubnetIds,
    privateSubnetIds: opts.vpc.privateSubnetIds,
    nodeAssociatePublicIpAddress: false,
    nodeGroupOptions: {
        desiredCapacity: 3,
        minSize: 2,
        maxSize: 5,
        instanceType: "t3.xlarge",
        nodeRootVolumeSize: 100,
    },
    version: "1.21",
    useDefaultVpcCni: true,
    enabledClusterLogTypes: ["api", "audit", "controllerManager", "scheduler"],
    createOidcProvider: true,
});

const stackConfig = new pulumi.Config();
const awsConfig = new pulumi.Config("aws");
const baseName = `${stackConfig.require("subaccount")}-${awsConfig.require("region")}-ci-cluster`;

const vpc = createVPC(`${baseName}-vpc`);
const cluster = createCluster({ name: `${baseName}-eks`, vpc });

export const eksClusterName = cluster.eksCluster.id;
export const eksKubeconfig = cluster.kubeconfig;
export const oidcProviderArn = cluster.core.oidcProvider?.arn;
export const oidcProviderUrl = cluster.core.oidcProvider?.url;
Expected: The update succeeds.
Actual: The update fails.
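If the hunch above is right, the interesting part is the kubeconfig that eks.Cluster generates: it authenticates via an exec plugin (aws eks get-token or aws-iam-authenticator, depending on the version), and that subprocess resolves AWS credentials from the process environment or an AWS profile rather than from Pulumi config. A small, optional addition to the repro program (not part of the original issue) makes that visible:

```typescript
// Optional diagnostic (assumption, not from the original repro): expose the exec
// command the generated kubeconfig uses for authentication. Whatever command appears
// here runs as a subprocess and resolves AWS credentials from the environment /
// AWS profile chain, not from Pulumi config.
export const kubeconfigExecCommand = cluster.kubeconfig.apply(kc => {
    const parsed = typeof kc === "string" ? JSON.parse(kc) : kc;
    return parsed?.users?.[0]?.user?.exec?.command;
});
```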
Comments
I think I’m seeing the same thing.
Note that we cheat and manually configure an AWS profile using the config-based credentials so that the kubeconfig token retrieval these clusters rely on can find them. The token retrieval does appear to work, but the token is unauthorized.
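For anyone trying that profile workaround: pulumi/eks exposes providerCredentialOpts, which tells the generated kubeconfig which AWS profile (or role) the exec-based token retrieval should use. A minimal sketch, assuming a hypothetical profile called ci-deployer has already been written out with the config-based credentials:

```typescript
import * as eks from "@pulumi/eks";

// Sketch only: "ci-deployer" is a hypothetical profile created beforehand with the
// credentials obtained from STS. providerCredentialOpts is threaded into the generated
// kubeconfig so that the exec-based token retrieval uses this profile.
const cluster = new eks.Cluster("ci-cluster", {
    providerCredentialOpts: {
        profileName: "ci-deployer",
        // Alternatively, name a role for the token retrieval to assume:
        // roleArn: "arn:aws:iam::123456789012:role/ci-deployer",  // placeholder ARN
    },
});
```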
Any update? I have the same issue trying to create an eks.Cluster with Fargate.