
buffer size for multipart s3 downloads


I noticed recently that for a large download, the awscli (aws s3 cp s3://...) was faster than using boto3.s3.transfer.MultipartDownloader.

After running a few tests downloading an 8GB file, it looks like the size of the I/O buffer here may have something to do with it. I don’t understand why, but making that buffer larger (e.g., 256KB or 1024KB instead of the current 16KB) consistently improves download speeds for me.

Perhaps that buffer size should be increased, or at least made configurable? I don’t understand the pros and cons beyond the fact that a larger buffer helps for my use case.
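
For intuition, here is a minimal sketch of the kind of buffered copy loop that buffer size controls (illustrative only, not boto3’s actual code): each iteration pays a fixed per-call cost, so a 4KB chunk issues four times as many read/write calls as a 16KB chunk for the same bytes.

def copy_in_chunks(src, dst, chunk_size=16 * 1024):
    # Read from the streaming response body and write to the local file
    # in fixed-size chunks; smaller chunks mean more iterations, and so
    # more per-call overhead, for the same total number of bytes.
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)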

Times for downloading an 8GB file from S3 to a g2.2xlarge instance (I just changed the number in the line of code mentioned above):

  • 100 seconds with 1024KB buffer
  • 106 seconds with 256KB buffer
  • 118 seconds with 16KB buffer (current boto3 code)
  • 256 seconds with 4KB buffer
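
In throughput terms (the file is 8,000,000,000 bytes, so 8GB ≈ 8000MB), those times work out to roughly 80MB/s with the 1024KB buffer, 75MB/s with 256KB, 68MB/s with 16KB, and 31MB/s with 4KB.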

Code for testing:

import time
import logging

import boto3
import boto3.s3.transfer

t0 = time.time()

logging.basicConfig(level='DEBUG')
logging.getLogger('botocore').setLevel('INFO')
client = boto3.client('s3')

config = boto3.s3.transfer.TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    max_concurrency=10,
    num_download_attempts=10,
    multipart_chunksize=16 * 1024 * 1024,
    max_io_queue=10000
)

# Note: this second assignment discards the custom config above, so the
# download below actually runs with the default TransferConfig values.
config = boto3.s3.transfer.TransferConfig()

transfer = boto3.s3.transfer.MultipartDownloader(client, config, boto3.s3.transfer.OSUtils())
transfer.download_file('bucket-name', 'path/to/big/file/foo.npy', 'foo2.npy', 8000000000, {})
print("TIME: {} SECONDS".format(time.time() - t0))

I previously mentioned this here.

Issue Analytics

  • State: closed
  • Created: 7 years ago
  • Comments: 11 (4 by maintainers)

Top GitHub Comments

2 reactions
kyleknap commented, Aug 3, 2016

With the release of boto3 1.4.0, you now have the option to configure both io_chunksize and max_io_queue, so in an environment where the network is much faster than the disk you can tune them until I/O stops being the bottleneck: https://boto3.readthedocs.io/en/latest/reference/customizations/s3.html#boto3.s3.transfer.TransferConfig

It is important to note that the current defaults should be suitable: io_chunksize now defaults to 256KB, which seems to be a good value based on my testing and testing from others, @gisjedi. For me, with the current defaults, boto3 achieves the same speed as the CLI for large downloads on larger instances.

Closing out the issue, as the defaults should now yield better performance, and the I/O-related configuration parameters are exposed for tweaking if the results with the defaults are still not as desired.
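
For reference, a minimal sketch of that configuration on boto3 1.4.0+ (the bucket name, key, and sizes below are placeholders, not recommendations):

import boto3
from boto3.s3.transfer import TransferConfig

# Raise the I/O buffer above the 256KB default and allow a deeper
# queue of chunks waiting to be written to disk.
config = TransferConfig(
    io_chunksize=1024 * 1024,  # 1MB I/O buffer
    max_io_queue=1000,
)

s3 = boto3.client('s3')
s3.download_file('bucket-name', 'path/to/big/file/foo.npy', 'foo2.npy',
                 Config=config)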


