S3 Bucket: bucket not deleted although autoDeleteObjects is set to true
Describe the bug
We have a KMS encrypted bucket as a destination for firehose, created with:
`.removalPolicy(RemovalPolicy.DESTROY).autoDeleteObjects(true)`
We create four DirectPut deliveryStreams, each using the bucket.
Expected Behavior
When the Kinesis Stack is deleted, the S3 Bucket should be deleted as well.
Current Behavior
We get the following error in CloudFormation:
The bucket you tried to delete is not empty (Service: Amazon S3; Status Code: 409; Error Code: BucketNotEmpty; Request ID: G414G0X7S6D79N2C; S3 Extended Request ID: /XuG6xxhpZdz+4UxBvWJYnlAMTbsGT0S9uKRGR0nVZDg/spFKcIiBCis3irsCqHcTZM6WIDJtsA=; Proxy: null)
Reproduction Steps
```kotlin
class KinesisStack(
    scope: Construct,
    config: StageConfig,
    props: StackProps
) : Stack(scope, Stacks.ABEO_KINESIS, props) {

    init {
        val bucketName = config.kinesis.bucketName.replace("ENV", config.name)

        // Customer-managed KMS key used for both the bucket and the delivery streams
        val kmsKey = Key.Builder.create(this, "${this.stackName}-key")
            .removalPolicy(RemovalPolicy.DESTROY)
            .pendingWindow(Duration.days(7))
            .enableKeyRotation(false)
            .alias("${this.stackName}-key")
            .build()

        // Destination bucket: RemovalPolicy.DESTROY plus autoDeleteObjects(true)
        val traceBucket = Bucket.Builder.create(this, "${this.stackName}-bucket")
            .removalPolicy(RemovalPolicy.DESTROY)
            .autoDeleteObjects(true)
            .publicReadAccess(false)
            .bucketName(bucketName)
            .blockPublicAccess(BlockPublicAccess.BLOCK_ALL)
            .encryption(BucketEncryption.KMS)
            .encryptionKey(kmsKey)
            .versioned(false)
            .lifecycleRules(
                listOf(
                    LifecycleRule.builder()
                        .enabled(true)
                        .expiration(Duration.days(90))
                        .build()
                )
            )
            .build()

        val firehoseRole = Role.Builder.create(this, "firehoseRole")
            .assumedBy(ServicePrincipal("firehose.amazonaws.com"))
            .description("Firehose role")
            .build()

        traceBucket.grantWrite(firehoseRole)

        // One DirectPut delivery stream per configured trace stream, all writing to the same bucket
        config.kinesis.traceStreams.forEach {
            createDeliveryStream(it, traceBucket, firehoseRole, kmsKey)
                .node
                .addDependency(firehoseRole)
        }
    }

    private fun createDeliveryStream(streamName: String, bucket: IBucket, firehoseRole: Role, kmsKey: IKey) =
        CfnDeliveryStream.Builder.create(this, streamName)
            .deliveryStreamName(streamName)
            .deliveryStreamType("DirectPut")
            .deliveryStreamEncryptionConfigurationInput(
                CfnDeliveryStream.DeliveryStreamEncryptionConfigurationInputProperty.builder()
                    .keyType("CUSTOMER_MANAGED_CMK")
                    .keyArn(kmsKey.keyArn)
                    .build()
            )
            .s3DestinationConfiguration(
                S3DestinationConfigurationProperty.builder()
                    .bucketArn(bucket.bucketArn)
                    .roleArn(firehoseRole.roleArn)
                    .prefix("trace/$streamName/")
                    .compressionFormat("UNCOMPRESSED")
                    .encryptionConfiguration(
                        CfnDeliveryStream.EncryptionConfigurationProperty.builder()
                            .kmsEncryptionConfig(
                                CfnDeliveryStream.KMSEncryptionConfigProperty.builder()
                                    .awskmsKeyArn(kmsKey.keyArn)
                                    .build()
                            )
                            .build()
                    )
                    .bufferingHints(
                        CfnDeliveryStream.BufferingHintsProperty.builder()
                            .sizeInMBs(128)
                            .intervalInSeconds(900)
                            .build()
                    )
                    .build()
            )
            .build()
}
```
Possible Solution
No response
Additional Information/Context
No response
CDK CLI Version
2.27.0 (build 8e89048)
Framework Version
software.amazon.awscdk:aws-cdk-lib:2.24.1
Node.js Version
v16.15.1
OS
Codepipeline/Win/MacOS
Language
Java
Language Version
11 (Kotlin)
Other information
No response
Issue Analytics
- State: Closed
- Created a year ago
- Comments: 6 (2 by maintainers)
Top GitHub Comments
We found the issue: it was the order in which we deleted the stacks. The stack writing to Kinesis and the Kinesis stack itself were deleted in parallel, which caused a race condition. Although the auto-delete Lambda deleted the objects, some were immediately recreated by the still-running writer stack.
Our fault. I am closing the issue. Thanks for your support.
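For reference, a minimal sketch (not from the original issue) of one way to avoid this kind of deletion-order race: declare an explicit dependency between the stacks so the writer stack is always torn down before the stack that owns the bucket. `WriterStack`, `writerStack`, `kinesisStack`, and `synthApp` are hypothetical names introduced for illustration.

```kotlin
import software.amazon.awscdk.App
import software.amazon.awscdk.StackProps

// "WriterStack" is a hypothetical stand-in for whatever stack produces the
// records that end up in the Firehose delivery streams / the trace bucket.
fun synthApp(config: StageConfig) {
    val app = App()

    val kinesisStack = KinesisStack(app, config, StackProps.builder().build())
    val writerStack = WriterStack(app, StackProps.builder().build())

    // With this dependency, writerStack is deployed after kinesisStack and,
    // on `cdk destroy --all`, destroyed before it, so nothing is still writing
    // while the auto-delete custom resource empties the bucket.
    writerStack.addDependency(kinesisStack)

    app.synth()
}
```

Alternatively, destroying the stacks sequentially (the writer stack first, then the Kinesis stack) avoids the race without any code changes.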
⚠️COMMENT VISIBILITY WARNING⚠️
Comments on closed issues are hard for our team to see. If you need more assistance, please either tag a team member or open a new issue that references this one. If you wish to keep having a conversation with other community members under this issue feel free to do so.