[elasticsearch] CDK seems not to handle multi-az domains correctly
Context: https://github.com/aws/aws-cdk/issues/10965
Short version: configuring 3 AZs for an Amazon ES multi-AZ domain doesn’t work with the default ec2.Vpc, which seems to deploy into only 2 zones, and I can’t find a way to push it to 3.
Amazon ES domains can deploy into 1, 2, or 3 AZs, controlled by customer configuration. Naturally, the VPC must provide that many AZs/subnets to support the configuration. See the referenced issue for my earlier struggle with subnets and passing the correct data type. As part of that process, I expanded to 3 AZs. cdk diff did not report a problem, but cdk deploy failed because the subnet count was only 2.
zone_awareness=es.ZoneAwarenessConfig(enabled=True,
                                      availability_zone_count=3),
yields this error:
(.env) handler@laptop:~/code/cdk-vpc $ cdk deploy
jsii.errors.JavaScriptError:
  Error: When providing vpc options you need to provide a subnet for each AZ you are using
      at new Domain (/private/var/folders/4f/2b3kckld2mn59m48yyp5h_2hjr9cf9/T/jsii-kernel-lOc9Ud/node_modules/@aws-cdk/aws-elasticsearch/lib/domain.js:465:19)
      at /Users/handler/code/cdk-vpc/.env/lib/python3.8/site-packages/jsii/_embedded/jsii/jsii-runtime.js:7934:49
      at Kernel._wrapSandboxCode (/Users/handler/code/cdk-vpc/.env/lib/python3.8/site-packages/jsii/_embedded/jsii/jsii-runtime.js:8422:19)
      at Kernel._create (/Users/handler/code/cdk-vpc/.env/lib/python3.8/site-packages/jsii/_embedded/jsii/jsii-runtime.js:7934:26)
      at Kernel.create (/Users/handler/code/cdk-vpc/.env/lib/python3.8/site-packages/jsii/_embedded/jsii/jsii-runtime.js:7678:21)
      at KernelHost.processRequest (/Users/handler/code/cdk-vpc/.env/lib/python3.8/site-packages/jsii/_embedded/jsii/jsii-runtime.js:7458:28)
      at KernelHost.run (/Users/handler/code/cdk-vpc/.env/lib/python3.8/site-packages/jsii/_embedded/jsii/jsii-runtime.js:7396:14)
      at Immediate._onImmediate (/Users/handler/code/cdk-vpc/.env/lib/python3.8/site-packages/jsii/_embedded/jsii/jsii-runtime.js:7399:37)
      at processImmediate (internal/timers.js:461:21)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "app.py", line 9, in <module>
    CdkVpcStack(app, "cdk-vpc")
  File "/Users/handler/code/cdk-vpc/.env/lib/python3.8/site-packages/jsii/_runtime.py", line 69, in __call__
    inst = super().__call__(*args, **kwargs)
  File "/Users/handler/code/cdk-vpc/cdk_vpc/cdk_vpc_stack.py", line 26, in __init__
    domain = es.Domain(self, 'cdkd1',
  File "/Users/handler/code/cdk-vpc/.env/lib/python3.8/site-packages/jsii/_runtime.py", line 69, in __call__
    inst = super().__call__(*args, **kwargs)
  File "/Users/handler/code/cdk-vpc/.env/lib/python3.8/site-packages/aws_cdk/aws_elasticsearch/__init__.py", line 4478, in __init__
    jsii.create(Domain, self, [scope, id, props])
  File "/Users/handler/code/cdk-vpc/.env/lib/python3.8/site-packages/jsii/_kernel/__init__.py", line 250, in create
    response = self.provider.create(
  File "/Users/handler/code/cdk-vpc/.env/lib/python3.8/site-packages/jsii/_kernel/providers/process.py", line 336, in create
    return self._process.send(request, CreateResponse)
  File "/Users/handler/code/cdk-vpc/.env/lib/python3.8/site-packages/jsii/_kernel/providers/process.py", line 321, in send
    raise JSIIError(resp.error) from JavaScriptError(resp.stack)
jsii.errors.JSIIError: When providing vpc options you need to provide a subnet for each AZ you are using
So, I also added
vpc = ec2.Vpc(self, 'cdkvpc', max_azs=3)
But I get the same error.
Printing vpc.private_subnets shows I have 2 (why not 3? I’m deploying in us-west-2). So I changed my node count and zone count to 2, and that worked.
As far as I can tell, there’s no way to span 3 zones with a VPC, which conflicts with the Amazon Elasticsearch Service best practice of a 3-zone deployment. Or is there a less obvious way to put the VPC in 3 zones?
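(For anyone landing here: the likely explanation, worth verifying against the CDK docs, is that an environment-agnostic stack — one created without an explicit account/region — cannot look up the region’s AZs at synth time, so CDK falls back to two AZs and max_azs=3 is silently capped. Pinning the stack to a concrete environment should let the VPC actually span three AZs. A minimal sketch, reusing the CdkVpcStack class from the reproduction steps below; the environment values here are placeholders resolved by the CDK CLI, not taken from the original report:)

```python
# Sketch: pin the stack to a concrete account/region so that
# ec2.Vpc(..., max_azs=3) can resolve three real AZs instead of
# falling back to the environment-agnostic default of two.
import os

from aws_cdk import core

app = core.App()
CdkVpcStack(app, "cdk-vpc",
            env=core.Environment(
                account=os.environ["CDK_DEFAULT_ACCOUNT"],  # set by the CDK CLI
                region="us-west-2"))
app.synth()
```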
Reproduction Steps
from aws_cdk import (
    aws_ec2 as ec2,
    aws_elasticsearch as es,
    core
)

class CdkVpcStack(core.Stack):

    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        vpc = ec2.Vpc(self, 'cdkvpc', max_azs=3)

        es_sec_grp = ec2.SecurityGroup(self, 'ESSecGrpCDK',
                                       vpc=vpc,
                                       allow_all_outbound=True,
                                       security_group_name='ESSecGrpCDK')
        es_sec_grp.add_ingress_rule(ec2.Peer.any_ipv4(), ec2.Port.tcp(80))
        es_sec_grp.add_ingress_rule(ec2.Peer.any_ipv4(), ec2.Port.tcp(443))

        domain = es.Domain(self, 'cdkd1',
                           version=es.ElasticsearchVersion.V7_7,
                           domain_name='cdkd1',
                           capacity=es.CapacityConfig(
                               data_node_instance_type='t3.small.elasticsearch',
                               data_nodes=3),
                           ebs=es.EbsOptions(enabled=True,
                                             volume_size=10,
                                             volume_type=ec2.EbsDeviceVolumeType.GP2),
                           vpc_options=es.VpcOptions(
                               security_groups=[es_sec_grp],
                               subnets=vpc.private_subnets,
                           ),
                           zone_awareness=es.ZoneAwarenessConfig(
                               enabled=True,
                               availability_zone_count=3),
                           enforce_https=True,
                           node_to_node_encryption=True,
                           encryption_at_rest={"enabled": True},
                           use_unsigned_basic_auth=True,
                           fine_grained_access_control={"master_user_name": "admin"},
                           )
What did you expect to happen?
The domain should deploy into 3 AZs, per the config. Alternatively, add a min_azs parameter to the ec2.Vpc construct.
What actually happened?
cdk deploy failed because the VPC provided too few subnets.
Environment
- CLI Version: aws-cli/1.18.114 Python/3.8.3 Darwin/18.7.0 botocore/1.13.50
- CDK Version: 1.68.0 (build a6a3f46)
- Node.js Version: v12.19.0
- OS: macOS Mojave 10.14.6
- Python Version: 3.8.3
This is a 🐛 Bug Report
Issue Analytics
- State:
- Created: 3 years ago
- Comments: 9 (3 by maintainers)
Top GitHub Comments
Alright, that makes sense. Though I’d really argue for a default of 3 AZs for HA instead of 2. Maybe add validation that fails the stack up front when the domain is deployed into a VPC with fewer than 3 AZs.
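The validation suggested above could be sketched in plain Python. This is a hypothetical helper, not part of the CDK API: it fails fast when the zone-awareness config asks for more AZs than the VPC’s private subnets can cover, instead of erroring deep inside jsii.

```python
# Hypothetical pre-synthesis check (not part of the CDK API): compare the
# number of private subnets the VPC exposes against the zone-awareness
# availability_zone_count before creating the Domain construct.

def check_zone_awareness(private_subnet_count: int, availability_zone_count: int) -> None:
    """Raise early when the domain would span more AZs than the VPC provides."""
    if availability_zone_count > private_subnet_count:
        raise ValueError(
            f"Zone awareness requires {availability_zone_count} subnets, "
            f"but the VPC only provides {private_subnet_count}."
        )

check_zone_awareness(3, 3)       # passes: one subnet per requested AZ
try:
    check_zone_awareness(2, 3)   # the situation in this issue: 2 subnets, 3 AZs
except ValueError as err:
    print(err)
```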
On Mon, Nov 23, 2020 at 11:56 AM Eli Polonsky notifications@github.com wrote:
⚠️COMMENT VISIBILITY WARNING⚠️
Comments on closed issues are hard for our team to see. If you need more assistance, please either tag a team member or open a new issue that references this one. If you wish to keep having a conversation with other community members under this issue feel free to do so.