"Too many open files" when trying to send many parallel requests
Hi,
I’m getting “Too many open files” when trying to execute InventoryCustomBatch with many parallel requests (max 400).
The stack trace:
at Google.Apis.Http.ConfigurableMessageHandler.<SendAsync>d__43.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Net.Http.HttpClient.<FinishSendAsync>d__58.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Google.Apis.Requests.ClientServiceRequest`1.<ExecuteUnparsedAsync>d__26.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Google.Apis.Requests.ClientServiceRequest`1.<ExecuteAsync>d__23.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
I’m running my code in AWS Lambda, which is a UNIX-like OS. On UNIX, a file descriptor is opened for every request, so we need to dispose the HTTP message to close that descriptor.
I’m not sure about this, but I think that this code does not dispose the message if cancellation is requested.
Can you guys help me?
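The “one descriptor per request” behaviour described above can be observed directly on a UNIX-like system. This is a minimal Python sketch (Linux-specific, since it counts entries in /proc/self/fd) that uses plain sockets as stand-ins for in-flight HTTP requests:

```python
import os
import socket

def open_fds() -> int:
    # Count file descriptors currently open by this process (Linux-specific).
    return len(os.listdir("/proc/self/fd"))

before = open_fds()

# Each open socket holds one descriptor, just as each in-flight HTTP
# request does; 400 undisposed responses means 400+ open descriptors.
socks = [socket.socket() for _ in range(10)]
during = open_fds()

# Closing (disposing) the sockets returns the descriptors to the process.
for s in socks:
    s.close()
after = open_fds()

print(before, during, after)
```

If the process keeps opening sockets without closing them, the count only ever grows until the per-process limit is hit and the OS returns EMFILE, which surfaces as “Too many open files”.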
Issue Analytics
- State:
- Created 5 years ago
- Comments: 11
Top Results From Across the Web

Too many open files when using requests package python
"Too many open files" is likely a reference to the fact that each Session and its single POST request hogs a TCP socket... Read more >

How to get rid of the "too many open files" error by tuning ...
When facing a Too many open files error, you must first analyze your application design to see if there's no bad design causing... Read more >

DNS error when sending >1024 parallel requests with ...
Usually it defaults to 1024 max file handles per process. That means that you are trying to open too many sockets, which is... Read more >

How to Fix the 'Too Many Open Files' Error in Linux
It means that a process has opened too many files (file descriptors) and cannot open new ones. On Linux, the “max open file... Read more >

Too Many Open Files in Bitbucket Server
This error indicates that the limit has been reached and Bitbucket Server is unable to open additional files to complete the on-going operations ... Read more >
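The per-process limit these articles refer to can be inspected, and the soft limit raised up to the hard limit without special privileges. A Python sketch of the same knob that `ulimit -n` exposes (RLIMIT_NOFILE is standard POSIX; the behaviour shown is as on Linux):

```python
import resource

# RLIMIT_NOFILE is the per-process cap on open file descriptors --
# sockets included -- that "Too many open files" is reported against.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

# An unprivileged process may raise its soft limit up to the hard limit.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
new_soft, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
```

On managed runtimes such as AWS Lambda the hard limit itself is fixed by the platform, so capping the application's concurrency is usually the more practical fix than tuning limits.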
Top GitHub Comments
Thanks for the extra details.
Each HttpRequestMessage is disposed of at the end of a using statement in ClientServiceRequest. It looks like all your requests are using the same client, so there’s only one HttpClient instance. I suspect that you’re trying to use too high a level of concurrency; if you limit how many concurrent requests are active, this will solve the problem. Are you able to test this?
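The fix suggested here (bound the number of in-flight requests instead of firing all 400 at once) looks like the following in outline. The sketch is in Python with asyncio rather than C#, and `send_batch` is a hypothetical stand-in for the real InventoryCustomBatch call; in .NET, `SemaphoreSlim.WaitAsync` plays the same role as the semaphore below:

```python
import asyncio

MAX_CONCURRENCY = 50  # tune to stay well under the descriptor limit (assumption)

async def send_batch(batch):
    # Placeholder for the real API call (e.g. the batch's ExecuteAsync).
    await asyncio.sleep(0)
    return len(batch)

async def send_all(batches):
    # The semaphore bounds how many requests are in flight at once, so at
    # most MAX_CONCURRENCY sockets (file descriptors) are open at any time.
    sem = asyncio.Semaphore(MAX_CONCURRENCY)

    async def bounded(batch):
        async with sem:
            return await send_batch(batch)

    # gather preserves input order even though completion order may differ.
    return await asyncio.gather(*(bounded(b) for b in batches))

results = asyncio.run(send_all([[1, 2], [3], [4, 5, 6]]))
print(results)
```

With the limit in place, descriptors are recycled as each request completes, so total batch count no longer matters, only the concurrency ceiling does.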
@marcosvcp Thanks for the update. When you say “eventually”, do you mean that it now takes significantly longer for the error to occur than it used to? Has anything changed about the run-time environment you’re using? Is this still running on AWS Lambda? Could you please post another full exception and stack trace, so we can confirm it’s the same?