ResourceConflictException: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:
In our project, Lambda was last deployed successfully by CI with claudia on 2021-09-14 ~16:17 CET. There had been no issues before that.
The next CI attempt, at 2021-09-15 ~16:49 CET, failed. Retry attempts failed. Manual attempts via the CLI failed. A manual upload, publish, and creation of an alias did work via the console (but produced no working version, because we did not invest in getting the package right).
Nothing of relevance was changed (always a strong statement, I know): there was no update of claudia or related packages between the two deploys. A retry of the previously successful deploy failed too.
Retries on 2021-09-16 ~10:10 CET failed again.
The reported error is always the same:
updating configuration lambda.setupRequestListeners
ResourceConflictException: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:eu-west-1:NNNNN:function:XXXXXXXX
at Object.extractError (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/protocol/json.js:52:27)
at Request.extractError (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/protocol/rest_json.js:55:8)
at Request.callListeners (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:688:14)
at Request.transition (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:38:9)
at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:690:12)
at Request.callListeners (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:688:14)
at Request.transition (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:38:9)
at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:690:12)
at Request.callListeners (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
at callNextListener (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:96:12)
at IncomingMessage.onEnd (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/event_listeners.js:313:13)
at IncomingMessage.emit (events.js:412:35)
at IncomingMessage.emit (domain.js:470:12)
at endReadableNT (internal/streams/readable.js:1317:12)
at processTicksAndRejections (internal/process/task_queues.js:82:21) {
code: 'ResourceConflictException',
time: 2021-09-16T08:19:05.924Z,
requestId: 'cf98db8a-0457-4f92-9a68-19b37f326508',
statusCode: 409,
retryable: false,
retryDelay: 45.98667333028396
}
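Note the retryable: false in the error object: the AWS SDK will not retry this 409 on its own, so the deploy fails immediately. Until this is handled upstream, wrapping the failing call in a retry loop is a plausible workaround. Below is a minimal sketch of such a wrapper; retryOnConflict is our own hypothetical helper, not part of claudia or aws-sdk, and the attempt count and delay are arbitrary:

```javascript
// Retry an async operation when Lambda reports a concurrent-update conflict.
// Hypothetical helper, not part of claudia or aws-sdk.
async function retryOnConflict(operation, attempts = 5, delayMs = 5000) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await operation();
    } catch (err) {
      // only retry the specific 409 this issue is about; rethrow on the last attempt
      if (err.code !== 'ResourceConflictException' || i === attempts - 1) {
        throw err;
      }
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}
```

In real use, operation would be the aws-sdk call that fails here, e.g. the updateFunctionConfiguration step.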
This happens quickly, either before the package is built or after. We've felt for a while that claudia does some things twice, first checking and then doing. When the error appears late, we see several mentions of lambda.setupRequestListeners:
loading Lambda config
loading Lambda config sts.getCallerIdentity
loading Lambda config sts.setupRequestListeners
loading Lambda config sts.optInRegionalEndpoint
loading Lambda config lambda.getFunctionConfiguration FunctionName=XXXXXXXX
loading Lambda config lambda.setupRequestListeners
packaging files
packaging files npm pack -q /opt/atlassian/pipelines/agent/build
packaging files npm install -q --no-audit --production
[…]
validating package
validating package removing optional dependencies
validating package npm install -q --no-package-lock --no-audit --production --no-optional
[…]
validating package npm dedupe -q --no-package-lock
updating configuration
updating configuration lambda.updateFunctionConfiguration FunctionName=XXXXXXXX
updating configuration lambda.setupRequestListeners
updating configuration lambda.updateFunctionConfiguration FunctionName=XXXXXXXX
updating configuration lambda.setupRequestListeners
ResourceConflictException: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:eu-west-1:NNNNNNNN:function:XXXXXXXX
[stack trace and error object identical to the one quoted above]
Resources on the internet are barely any help. AWS Lambda – Troubleshoot invocation issues in Lambda mentions ResourceConflictException, but with a different message, and refers to VPCs, which we are not using. The API references for UpdateFunctionConfiguration, PublishVersion, UpdateFunctionCode, and others mention it only more generally:
ResourceConflictException
The resource already exists, or another operation is in progress.
HTTP Status Code: 409
Other resources are no help:
- https://discuss.hashicorp.com/t/problem-updating-aws-lambda-function/20597
- https://stackoverflow.com/questions/58971446/resourceconflictexception-the-function-could-not-be-updated
Terraform Error publishing version when lambda using container updates code #17153 (Jan. 2021) mentions a “lock” / “last update status”, which we can watch during execution using
> watch aws --profile YYYYYYY --region eu-west-1 lambda get-function-configuration --function-name XXXXXXXX
The output looks like:
{
    "FunctionName": "XXXXXXXX",
    "FunctionArn": "arn:aws:lambda:eu-west-1:NNNNNNNNNNN:function:XXXXXXXX",
    "Runtime": "nodejs14.x",
    "Role": "arn:aws:iam::NNNNNNNNNNN:role/execution/lambda-execution-XXXXXXXX",
    "Handler": "lib/service.handler",
    "CodeSize": 76984324,
    "Description": "[…]",
    "Timeout": 30,
    "MemorySize": 2048,
    "LastModified": "2021-09-16T08:19:05.000+0000",
    "CodeSha256": "zQb6Vss0Zlug46HRjA8+bNe0i1TP6NWfrm70hC6zC90=",
    "Version": "$LATEST",
    "Environment": {
        "Variables": {
            "NODE_ENV": "production"
        }
    },
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "RevisionId": "9d5f5431-6f2f-4d39-9794-d86778b34446",
    "Layers": [
        {
            "Arn": "arn:aws:lambda:eu-west-1:NNNNNNNNNNN:layer:chrome-aws-lambda:25",
            "CodeSize": 51779390
        }
    ],
    "State": "Active",
    "LastUpdateStatus": "Successful",
    "PackageType": "Zip"
}
most of the time, but we see LastUpdateStatus change for a moment before the error occurs.
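That observation suggests waiting until LastUpdateStatus reports Successful before issuing the next update call. A minimal sketch of such a poll follows; waitForUpdateToSettle and the injected getConfig are our own names, and in real use getConfig would be something like () => lambda.getFunctionConfiguration({ FunctionName }).promise():

```javascript
// Poll until Lambda reports the previous update as finished.
// getConfig is injected: any async function returning the function
// configuration object (e.g. via aws-sdk's getFunctionConfiguration).
async function waitForUpdateToSettle(getConfig, intervalMs = 2000, maxPolls = 30) {
  for (let i = 0; i < maxPolls; i++) {
    const config = await getConfig();
    if (config.LastUpdateStatus === 'Successful') {
      return config;
    }
    if (config.LastUpdateStatus === 'Failed') {
      throw new Error('previous update failed');
    }
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error('timed out waiting for LastUpdateStatus to become Successful');
}
```

Recent AWS CLI versions appear to ship an equivalent built-in waiter (aws lambda wait function-updated --function-name XXXXXXXX), which does the same polling.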
Terraform aws_lambda_function ResourceConflictException due to a concurrent update operation #5154 says, in 2018,
OK, I’ve figured out what’s happening here based on a comment here: AWS has some sort of limit on how many concurrent modifications you can make to a Lambda function.
serverless ‘Concurrent update operation’ error for multi-function service results in both deployment and rollback failure. #4964 reports the same issue in 2018, and remarks:
I just heard back from AWS Premium Support, and they offered up a solution and the cause of the issue. It’s not so much an issue with too many functions, as it is trying to do too many updates with a single function.
So, this appears to be a timing issue. Claudia should take it slower?
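If the AWS support answer is right, the fix is less about raw speed and more about never letting two update calls on the same function overlap: each updateFunctionConfiguration / updateFunctionCode call should start only after the previous one has fully settled. A trivial sketch of that serialization (the helper name is our own):

```javascript
// Run async deploy steps strictly one after another; each step starts
// only once the previous one has resolved, so Lambda never sees
// overlapping update operations from this process.
async function runSequentially(steps) {
  const results = [];
  for (const step of steps) {
    results.push(await step());
  }
  return results;
}
```

In claudia's case the steps would be the individual aws-sdk calls (update configuration, update code, publish version), each optionally followed by a wait on LastUpdateStatus.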
Started for us as well. If someone wants a quick fix until it's fixed in ClaudiaJS, use the following around Claudia commands: […]
v5.14.0 should fix this.