TooManyFiles error on uploading file
See original GitHub issue. Hello, I'm implementing a service for uploading files, which requires uploading huge files to a server. I do it in chunks, which requires a lot of requests. After some time I always get an OutOfMemoryError:
Throwing OutOfMemoryError "Failed to allocate a 2060 byte allocation with 1328 free bytes and 1328B until OOM"
04-04 13:02:27.608 7536-10064/by.set.pibox E/art: Throwing OutOfMemoryError "Failed to allocate a 3904 byte allocation with 0 free bytes and 0B until OOM" (recursive case)
04-04 13:02:27.751 7536-10064/by.set.pibox E/art: "pool-15-thread-1" prio=5 tid=96 Runnable
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: | group="main" sCount=0 dsCount=0 obj=0x32c0aee0 self=0xe13af000
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: | sysTid=10064 nice=0 cgrp=default sched=0/0 handle=0xdd40e880
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: | state=R schedstat=( 32306483502 2875351514 12061 ) utm=2248 stm=982 core=3 HZ=100
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: | stack=0xdb821000-0xdb823000 stackSize=1036KB
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: | held mutexes= "mutator lock"(shared held)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: native: #00 pc 0000485c /system/lib/libbacktrace_libc++.so (UnwindCurrent::Unwind(unsigned int, ucontext*)+23)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: native: #01 pc 00003005 /system/lib/libbacktrace_libc++.so (Backtrace::Unwind(unsigned int, ucontext*)+8)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: native: #02 pc 00243911 /system/lib/libart.so (art::DumpNativeStack(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, int, char const*, art::mirror::ArtMethod*)+68)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: native: #03 pc 002267e9 /system/lib/libart.so (art::Thread::DumpStack(std::__1::basic_ostream<char, std::__1::char_traits<char> >&) const+140)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: native: #04 pc 00229ac7 /system/lib/libart.so (art::Thread::ThrowOutOfMemoryError(char const*)+258)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: native: #05 pc 0013ce97 /system/lib/libart.so (art::gc::Heap::ThrowOutOfMemoryError(art::Thread*, unsigned int, art::gc::AllocatorType)+818)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: native: #06 pc 0013eda7 /system/lib/libart.so (art::gc::Heap::AllocateInternalWithGc(art::Thread*, art::gc::AllocatorType, unsigned int, unsigned int*, unsigned int*, art::mirror::Class**)+634)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: native: #07 pc 00229f0d /system/lib/libart.so (art::mirror::Array* art::mirror::Array::Alloc<true>(art::Thread*, art::mirror::Class*, int, unsigned int, art::gc::AllocatorType, bool) (.constprop.210)+884)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: native: #08 pc 0022a3fd /system/lib/libart.so (_jobject* art::Thread::CreateInternalStackTrace<false>(art::ScopedObjectAccessAlreadyRunnable const&) const+176)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: native: #09 pc 001fca8b /system/lib/libart.so (art::Throwable_nativeFillInStackTrace(_JNIEnv*, _jclass*)+18)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: native: #10 pc 00000c1d /data/dalvik-cache/arm/system@framework@boot.oat (Java_java_lang_Throwable_nativeFillInStackTrace__+80)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: at java.lang.Throwable.nativeFillInStackTrace!(Native method)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: at java.lang.Throwable.fillInStackTrace(Throwable.java:166)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: at java.lang.Throwable.<init>(Throwable.java:95)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: at java.lang.Error.<init>(Error.java:48)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: at java.lang.VirtualMachineError.<init>(VirtualMachineError.java:46)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:44)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: at okio.Segment.<init>(Segment.java:58)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: at okio.SegmentPool.take(SegmentPool.java:46)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: at okio.Buffer.writableSegment(Buffer.java:1120)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: at okio.Buffer.write(Buffer.java:940)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: at okio.RealBufferedSink.write(RealBufferedSink.java:95)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: at com.squareup.okhttp.RequestBody$2.writeTo(RequestBody.java:96)
04-04 13:02:27.752 7536-10064/by.set.pibox E/art: at com.squareup.okhttp.internal.http.HttpEngine$NetworkInterceptorChain.proceed(HttpEngine.java:887)
04-04 13:02:27.753 7536-10064/by.set.pibox E/art: at com.squareup.okhttp.internal.http.HttpEngine.readResponse(HttpEngine.java:749)
04-04 13:02:27.753 7536-10064/by.set.pibox E/art: at com.squareup.okhttp.Call.getResponse(Call.java:268)
04-04 13:02:27.753 7536-10064/by.set.pibox E/art: at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:224)
04-04 13:02:27.753 7536-10064/by.set.pibox E/art: at by.set.pibox.upload.uploadService.HttpUploadTask$1.intercept(HttpUploadTask.java:122)
04-04 13:02:27.753 7536-10064/by.set.pibox E/art: at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:221)
04-04 13:02:27.753 7536-10064/by.set.pibox E/art: at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:195)
04-04 13:02:27.753 7536-10064/by.set.pibox E/art: at com.squareup.okhttp.Call.execute(Call.java:79)
04-04 13:02:27.753 7536-10064/by.set.pibox E/art: at by.set.pibox.upload.uploadService.HttpUploadTask.loadToAzure(HttpUploadTask.java:227)
04-04 13:02:27.753 7536-10064/by.set.pibox E/art: at by.set.pibox.upload.uploadService.HttpUploadTask.upload(HttpUploadTask.java:159)
I use two instances of OkHttp: one gets the URL for uploading, and the other uploads the chunk to the cloud service. Here's the code for file uploading:
HashMap<String, String> headers = response.getHeaders();
String url = response.getUrl();
int readCount;
byte[] buffer;

FileInputStream stream = new FileInputStream(file);
stream.skip(uploadedBodyBytes);
byte[] readBuffer = new byte[PART_SIZE];
readCount = stream.read(readBuffer, 0, PART_SIZE);
buffer = new byte[readCount];
System.arraycopy(readBuffer, 0, buffer, 0, readCount);
stream.close();

RequestBody content = RequestBody.create(MediaType.parse(contentType), buffer, 0, buffer.length);
Request.Builder builder = new Request.Builder();
for (Map.Entry<String, String> entry : headers.entrySet()) {
    builder.addHeader(entry.getKey(), entry.getValue());
}
Request azureRequest = builder.url(url).put(content).build();
Response azureResponse = client.newCall(azureRequest).execute();

if (azureResponse.code() / 100 == 2 && shouldContinue) {
    Log.e(getClass().getSimpleName(), azureResponse.code() + " " + azureResponse.message());
    uploadedBodyBytes += readCount;
    if (uploadedBodyBytes >= totalBodyBytes) {
        confirmUpload();
    } else {
        broadcastProgress(uploadedBodyBytes, totalBodyBytes);
        upload();
    }
}
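One fragile step in the snippet above is the chunk read: `InputStream.read()` may return fewer bytes than requested, or -1 at end of file, in which case `new byte[readCount]` throws and the stream is never closed. A defensive sketch of just the chunk-read step, using only the standard library (the name `readChunk` is illustrative, not from the original code):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Arrays;

public class ChunkReader {
    // Reads up to partSize bytes starting at offset; returns exactly the
    // bytes that were read (possibly fewer than partSize near end of file).
    static byte[] readChunk(String path, long offset, int partSize) throws IOException {
        // try-with-resources releases the file descriptor even on exception
        try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
            file.seek(offset);
            byte[] buffer = new byte[partSize];
            int total = 0;
            while (total < partSize) {
                int n = file.read(buffer, total, partSize - total);
                if (n == -1) break; // end of file: keep what we have
                total += n;
            }
            return Arrays.copyOf(buffer, total);
        }
    }

    public static void main(String[] args) throws IOException {
        java.io.File tmp = java.io.File.createTempFile("chunk", ".bin");
        tmp.deleteOnExit();
        try (java.io.FileOutputStream out = new java.io.FileOutputStream(tmp)) {
            out.write("0123456789".getBytes("US-ASCII"));
        }
        byte[] chunk = readChunk(tmp.getPath(), 4, 4);
        System.out.println(new String(chunk, "US-ASCII")); // prints 4567
        byte[] tail = readChunk(tmp.getPath(), 8, 4);      // short read at EOF
        System.out.println(tail.length);                   // prints 2
    }
}
```

This keeps the rest of the upload logic unchanged; only the read-and-copy step becomes robust against short reads and leaves no open stream behind on failure.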
The issue was previously closed: Previous link. But this can't be the case, because the response body returned by the cloud service is actually empty. In fact, I removed those parts of the code, which were used only for debugging, but I still get the same error. Please help.
Issue Analytics
- State:
- Created 7 years ago
- Comments: 8 (3 by maintainers)
Top GitHub Comments
Well, I got away from the OutOfMemory error, thank you Jake. But now I get a TooManyFiles exception after uploading around 240 chunks.
https://github.com/square/okhttp/issues/1943#issuecomment-151024020
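The linked comment points at the likely cause of the TooManyFiles error: each executed `Call` returns a `Response` whose body holds a socket, i.e. a file descriptor, until it is consumed or closed, and the upload loop above never closes `azureResponse`. Closing each response body (e.g. `azureResponse.body().close()` in a `finally` block) releases the descriptor before the next chunk. The same per-iteration discipline can be sketched with the standard library alone (the `FileInputStream` here is a stand-in for the response body, not OkHttp code):

```java
import java.io.FileInputStream;
import java.io.IOException;

public class CloseEachChunk {
    public static void main(String[] args) throws IOException {
        java.io.File tmp = java.io.File.createTempFile("fd", ".bin");
        tmp.deleteOnExit();

        // Each iteration stands in for one chunk upload. The resource
        // (here a FileInputStream; in the issue, the response body) is
        // closed before the next iteration, so descriptors never pile up,
        // no matter how many chunks are sent.
        int chunks = 1000;
        for (int i = 0; i < chunks; i++) {
            try (FileInputStream in = new FileInputStream(tmp)) {
                in.read(); // consume, analogous to reading the response
            } // close() runs here even if an exception was thrown
        }
        System.out.println("uploaded " + chunks + " chunks without leaking descriptors");
    }
}
```

Without the deterministic close, a loop like this would hit the process's open-file limit after a few hundred to a few thousand iterations, which matches the roughly 240 chunks reported above.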