[Storage] LOH Allocation Uploading and Downloading
Is your feature request related to a problem? Please describe.
While doing some scalability testing on our application, PerfView reported high allocation rates on the Large Object Heap stemming from the UploadObjectAsync
and DownloadObjectAsync
APIs. Tracing this back to ResumableUpload.cs and MediaDownloader.cs, this seems due, in part, to the default chunk size of 10 MB: a 10 MB byte array is allocated and used to chunk the stream.
I am able to lower the chunk size (and thus the byte array size) for downloads far enough that allocations stay off the Large Object Heap. However, because the minimum upload chunk size of 1024 * 256 bytes is already above the 85,000-byte LOH threshold, I am unable to keep upload byte arrays off the LOH.
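For reference, a minimal sketch of that per-call workaround, assuming the ChunkSize options exposed by Google.Cloud.Storage.V1 (bucket and object names are placeholders; 80 KB is just an illustrative value below the 85,000-byte LOH threshold):

```csharp
using System.IO;
using System.Threading.Tasks;
using Google.Cloud.Storage.V1;

class ChunkSizeWorkaround
{
    // Downloads: a chunk below the 85,000-byte LOH threshold keeps the buffer
    // on the small object heap.
    static Task DownloadAsync(StorageClient client, Stream destination) =>
        client.DownloadObjectAsync(
            "my-bucket", "my-object", destination,
            new DownloadObjectOptions { ChunkSize = 80 * 1024 });

    // Uploads: the smallest legal chunk (1024 * 256 bytes) is already larger
    // than the LOH threshold, so the upload buffer cannot be kept off the LOH.
    static Task UploadAsync(StorageClient client, Stream source) =>
        client.UploadObjectAsync(
            "my-bucket", "my-object", "application/octet-stream", source,
            new UploadObjectOptions { ChunkSize = 1024 * 256 });
}
```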
Describe the solution you’d like
It would be ideal to find a way to buffer the streams without excess memory allocations.
Perhaps it’s possible to reduce the allocations in a .NET Core environment by using the ArrayPool<T> feature. This may require multi-targeting.
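As a rough illustration of the ArrayPool<T> idea (a sketch only, not the library's actual code; CopyInChunksAsync and sendChunk are hypothetical names), renting the chunk buffer from the shared pool means the large array is created once and reused instead of being allocated on every request:

```csharp
using System;
using System.Buffers;
using System.IO;
using System.Threading.Tasks;

static class PooledChunking
{
    // Reads the source stream in chunkSize pieces using a pooled buffer, so a
    // fresh multi-megabyte array is not allocated for every upload/download call.
    public static async Task CopyInChunksAsync(
        Stream source, Func<ReadOnlyMemory<byte>, Task> sendChunk, int chunkSize)
    {
        byte[] buffer = ArrayPool<byte>.Shared.Rent(chunkSize);
        try
        {
            int read;
            while ((read = await source.ReadAsync(buffer, 0, chunkSize)) > 0)
            {
                await sendChunk(buffer.AsMemory(0, read));
            }
        }
        finally
        {
            // Return the buffer so later calls can reuse it instead of allocating.
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```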
Describe alternatives you’ve considered
None
Additional context
Testing done on:
- OS: Windows Server 2016
- .NET version: netcoreapp3.1
- Package name and version: Google.Cloud.Storage.V1
Can provide PerfView screenshots if desired.
Right, this is now fixed in google-api-dotnet-client - but it won’t be available until we’ve done a new release of Google.Apis etc. We’ve got a new feature going in there soon, at which point we can release 1.45 of Google.Apis, and I can target that in a new beta of the storage library.
Sorry it’s taken so long to get to this, but I’m very pleased with the result - not only does it avoid allocating any large objects, but it also massively reduces the amount of allocation in general, when you’re uploading a large seekable stream or a small-but-not-seekable stream.
Just to say, I’m afraid this is currently relatively low priority compared with the other things we’ve got going on at the moment. I haven’t forgotten about this, but I’m unlikely to get to it for a little while.