(aws-logs): log retention update can race against the target lambda's log group creation resulting in OperationAbortedException
When a Lambda function runs, a log group and log stream are created for it if they do not already exist. When creating a CDK Lambda function with a log retention setting, the log retention update can then race with this background log group creation, resulting in an OperationAbortedException error.
Reproduction Steps
Hard to reproduce reliably, since it depends on a race between the Lambda service creating the log group and the log retention lambda. A minimal setup that exercises the race is sketched below.
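For illustration, a minimal CDK stack that sets up the racing pieces might look like the following. This is a sketch only, assuming CDK v1 in TypeScript; the function name, runtime, and inline handler are placeholders, not taken from the report. Deploying this and invoking the function immediately can recreate the timing shown in the CloudTrail events below.

import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';
import * as logs from '@aws-cdk/aws-logs';

export class ReproStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // `logRetention` deploys a Custom::LogRetention resource whose handler
    // creates /aws/lambda/mylambda and sets its retention policy. If the
    // function is invoked while that handler is still running, the Lambda
    // service's own background CreateLogGroup call can race with it and
    // one side fails with OperationAbortedException.
    new lambda.Function(this, 'MyLambda', {
      functionName: 'mylambda', // placeholder name
      runtime: lambda.Runtime.PYTHON_3_8,
      handler: 'index.handler',
      code: lambda.Code.fromInline('def handler(event, context):\n    return "ok"'),
      logRetention: logs.RetentionDays.ONE_WEEK,
    });
  }
}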
What did you expect to happen?
Log retention policy is properly applied.
What actually happened?
2021-07-22T02:52:24.985Z a53fc34d-6d93-4590-94fd-8115d0b93e3b INFO OperationAbortedException: A conflicting operation is currently in progress against this resource. Please try again.
    at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/protocol/json.js:52:27)
    at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
    at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:688:14)
    at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:690:12)
    at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:116:18) {
  code: 'OperationAbortedException',
  time: 2021-07-22T02:52:24.925Z,
  requestId: '[redacted]',
  statusCode: 400,
  retryable: false,
  retryDelay: 56.75647811158137
}
Environment
- CDK CLI Version: 1.110.1
- Framework Version: 1.110.1
- Node.js Version: 10.24.0
- OS: Buster (container)
- Language (Version): all, Python (3.8)
Other
CloudTrail event for the background log group creation by the Lambda service (note the awslambda-worker user agent):
{
  "eventVersion": "1.08",
  "userIdentity": {
    "type": "AssumedRole",
    "principalId": "[...]",
    "arn": "[...]",
    "accountId": "[...]",
    "accessKeyId": "[...]",
    "sessionContext": {
      "sessionIssuer": {
        ...
      },
      "webIdFederationData": {},
      "attributes": {
        "creationDate": "2021-07-22T02:52:23Z",
        "mfaAuthenticated": "false"
      }
    }
  },
  "eventTime": "2021-07-22T02:52:24Z",
  "eventSource": "logs.amazonaws.com",
  "eventName": "CreateLogGroup",
  "awsRegion": "us-west-2",
  "sourceIPAddress": "[...]",
  "userAgent": "awslambda-worker/1.0 rusoto/0.42.0 rust/1.52.1 linux",
  "requestParameters": {
    "logGroupName": "/aws/lambda/mylambda"
  },
  "responseElements": null,
  "requestID": "[...]",
  "eventID": "[...]",
  "readOnly": false,
  "eventType": "AwsApiCall",
  "apiVersion": "20140328",
  "managementEvent": true,
  "recipientAccountId": "...",
  "eventCategory": "Management"
}
CloudTrail event for the log retention lambda's conflicting CreateLogGroup call (note the aws-sdk-nodejs user agent):
{
  "eventVersion": "1.08",
  "userIdentity": {
    "type": "AssumedRole",
    "principalId": "...",
    "arn": "...",
    "accountId": "...",
    "accessKeyId": "...",
    "sessionContext": {
      "sessionIssuer": {
        ...
      },
      "webIdFederationData": {},
      "attributes": {
        "creationDate": "2021-07-22T02:52:23Z",
        "mfaAuthenticated": "false"
      }
    }
  },
  "eventTime": "2021-07-22T02:52:24Z",
  "eventSource": "logs.amazonaws.com",
  "eventName": "CreateLogGroup",
  "awsRegion": "us-west-2",
  "sourceIPAddress": "[...]",
  "userAgent": "aws-sdk-nodejs/2.880.0 linux/v12.22.1 exec-env/AWS_Lambda_nodejs12.x promise",
  "errorCode": "OperationAbortedException",
  "errorMessage": "A conflicting operation is currently in progress against this resource. Please try again.",
  "requestParameters": {
    "logGroupName": "/aws/lambda/mylambda"
  },
  "responseElements": null,
  "requestID": "[...]",
  "eventID": "[...]",
  "readOnly": false,
  "eventType": "AwsApiCall",
  "apiVersion": "20140328",
  "managementEvent": true,
  "recipientAccountId": "[...]",
  "eventCategory": "Management"
}
https://github.com/aws/aws-cdk/pull/2237 seems to fix the same potential error for the log retention lambda's own log group, but not for the target function's log group.
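For context, that retry approach applied to the target function's log group would look roughly like the sketch below. This is illustrative only, not the actual custom resource handler; the error codes match the CloudTrail events above, but the function name createLogGroupSafe, the retry count, and the backoff are assumptions.

import * as AWS from 'aws-sdk';

const cwLogs = new AWS.CloudWatchLogs();

// Retry CreateLogGroup when it collides with the Lambda service's own
// background CreateLogGroup call. OperationAbortedException is the error
// code seen above; the retry count and delay here are illustrative, not
// the values any CDK release actually uses.
async function createLogGroupSafe(logGroupName: string, maxRetries = 5): Promise<void> {
  for (let attempt = 1; ; attempt++) {
    try {
      await cwLogs.createLogGroup({ logGroupName }).promise();
      return;
    } catch (err: any) {
      if (err.code === 'ResourceAlreadyExistsException') {
        return; // the Lambda service already created it -- that is fine
      }
      if (err.code === 'OperationAbortedException' && attempt < maxRetries) {
        // Conflicting operation in progress: back off briefly and retry.
        await new Promise((resolve) => setTimeout(resolve, 100 * attempt));
        continue;
      }
      throw err;
    }
  }
}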
Top GitHub Comments
I’ve opened a new ticket, since many people have reported that the issue still seems to be happening. I believe I’ve made some findings about why it might still be causing problems: https://github.com/aws/aws-cdk/issues/17546
We’ve been hitting this as well with “@aws-cdk/core”: “1.126.0”. Still seems to be a bug here somewhere: @ddl-denis-parnovskiy
EDIT: Same problem with
Custom::LogRetention | LambdaGETprojectsprojectNameapikeysLogRetention247393C8 Received response status [FAILED] from custom resource. Message returned: A conflicting operation is currently in progress against this resource. Please try again. (RequestId: 62f8c1aa-d164-402b-9e75-d852f9092acb)
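For anyone needing to sidestep the custom resource entirely, one commonly suggested workaround (an assumption on my part, not taken from this thread) is to manage the log group as an ordinary CloudFormation resource instead of using the logRetention prop, so that no Custom::LogRetention handler races with the Lambda service:

import * as cdk from '@aws-cdk/core';
import * as logs from '@aws-cdk/aws-logs';

export class WorkaroundStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Workaround sketch: create the log group as a plain CloudFormation
    // resource rather than via Custom::LogRetention. The name must match
    // the one the Lambda service would create on first invocation.
    new logs.LogGroup(this, 'MyLambdaLogGroup', {
      logGroupName: '/aws/lambda/mylambda', // placeholder function name
      retention: logs.RetentionDays.ONE_WEEK,
      removalPolicy: cdk.RemovalPolicy.DESTROY,
    });
  }
}

Note this narrows the window rather than eliminating it: if the function is invoked before the LogGroup resource is created, CloudFormation's own CreateLogGroup call can still collide with the Lambda service's.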