
Getting ThrottlingException: Rate exceeded with multiple instances of the transport running

See original GitHub issue

It might be some combination of my having a large number of log groups to iterate over and a large number of forked subprocesses, each with its own isolated winston logger instance using this transport. But even when the logger is sitting idle and not logging anything, I'm getting this stack trace from the aws-sdk:

ThrottlingException: Rate exceeded
    at Request.extractError (c:\project\node_modules\aws-sdk\lib\protocol\json.js:43:27)
    at Request.callListeners (c:\project\node_modules\aws-sdk\lib\sequential_executor.js:105:20)
    at Request.emit (c:\project\node_modules\aws-sdk\lib\sequential_executor.js:77:10)
    at Request.emit (c:\project\node_modules\aws-sdk\lib\request.js:596:14)
    at Request.transition (c:\project\node_modules\aws-sdk\lib\request.js:21:10)
    at AcceptorStateMachine.runTo (c:\project\node_modules\aws-sdk\lib\state_machine.js:14:12)
    at c:\project\node_modules\aws-sdk\lib\state_machine.js:26:10
    at Request.<anonymous> (c:\project\node_modules\aws-sdk\lib\request.js:37:9)
    at Request.<anonymous> (c:\project\node_modules\aws-sdk\lib\request.js:598:12)
    at Request.callListeners (c:\project\node_modules\aws-sdk\lib\sequential_executor.js:115:18)
    at Request.emit (c:\project\node_modules\aws-sdk\lib\sequential_executor.js:77:10)
    at Request.emit (c:\project\node_modules\aws-sdk\lib\request.js:596:14)
    at Request.transition (c:\project\node_modules\aws-sdk\lib\request.js:21:10)
    at AcceptorStateMachine.runTo (c:\project\node_modules\aws-sdk\lib\state_machine.js:14:12)
    at c:\project\node_modules\aws-sdk\lib\state_machine.js:26:10
    at Request.<anonymous> (c:\project\node_modules\aws-sdk\lib\request.js:37:9)
    at Request.<anonymous> (c:\project\node_modules\aws-sdk\lib\request.js:598:12)
    at Request.callListeners (c:\project\node_modules\aws-sdk\lib\sequential_executor.js:115:18)
    at callNextListener (c:\project\node_modules\aws-sdk\lib\sequential_executor.js:95:12)
    at IncomingMessage.onEnd (c:\project\node_modules\aws-sdk\lib\event_listeners.js:208:11)
    at emitNone (events.js:73:20)
    at IncomingMessage.emit (events.js:167:7)
    at endReadableNT (_stream_readable.js:906:12)
    at nextTickCallbackWith2Args (node.js:455:9)
    at process._tickDomainCallback (node.js:410:17)

I've read through the code and I can't find anywhere this transport might be doing continuous polling of the stream while idle, so I'm at a bit of a loss as to how the SDK can be throwing throttling errors without any API calls being made. Any thoughts?

Issue Analytics

  • State: closed
  • Created: 8 years ago
  • Comments: 7 (4 by maintainers)

Top GitHub Comments

1 reaction
ghost commented, Dec 30, 2015

My use case is forked invocations of a worker process that each bring in my logging module, so multiple processes end up uploading to the same stream. It seems I just need to follow the same practice AWS uses for highly concurrent processes like Lambda logging to CloudWatch and append some arbitrary process-specific value to the stream name. Making the stream name itself a per-process uuid has gotten rid of the issue.

I did confirm with AWS support that the throttles I was hitting are per-stream, not global to the CloudWatch Logs service or to my AWS account, though there are undocumented rate limits there too.

Thanks for the help.

0 reactions
lazywithclass commented, Feb 2, 2016

@hdn8 I am going to push a 1.0 hopefully soon enough. Open a new issue after that in case you still get the error.
