
Bug report: Setting up a different AWS S3 region in production.py

See original GitHub issue

I work out of Nairobi, so I set up a Digital Ocean “droplet” in London and wanted my static and media files hosted in London as well.

The default settings for AWS s3 are:

    STATIC_URL = 'https://s3.amazonaws.com/%s/static/' % AWS_STORAGE_BUCKET_NAME
    MEDIA_URL = 'https://s3.amazonaws.com/%s/media/' % AWS_STORAGE_BUCKET_NAME

These settings produce a static URL for a site favicon of s3.amazonaws.com/staticstore.example/static/images/favicon.ico, where staticstore.example is the AWS_STORAGE_BUCKET_NAME environment variable.
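For reference, the defaults above are plain %-interpolation, so the bucket name drops straight into the global endpoint. A quick sketch using the bucket and key names from this report:

```python
# How the default settings build the favicon URL (bucket name from the report).
AWS_STORAGE_BUCKET_NAME = 'staticstore.example'

STATIC_URL = 'https://s3.amazonaws.com/%s/static/' % AWS_STORAGE_BUCKET_NAME
favicon_url = STATIC_URL + 'images/favicon.ico'

print(favicon_url)
# → https://s3.amazonaws.com/staticstore.example/static/images/favicon.ico
```

Note that nothing in these defaults mentions a region, which is why the global endpoint appears in every generated URL.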

After a number of unsuccessful tries [ like 2 days 😦 ], I went to my S3 console and found that my favicon was actually being served from S3 in Europe (London), at the “s3.eu-west-2.amazonaws.com/staticstore.example/static/images/favicon.ico” URL.

To replicate, I did 3 things:

  1. Created a new vanilla cookiecutter-django project, built it with local.yml, and made sure its “static” and “media” files were being served.

  2. In production.py, I changed the following settings:

    ...
    MEDIA_URL = 'https://s3.eu-west-2.amazonaws.com/%s/media/' % AWS_STORAGE_BUCKET_NAME
    
    # Static Assets
    # ------------------------
    
    STATIC_URL = 'https://s3.eu-west-2.amazonaws.com/%s/static/' % AWS_STORAGE_BUCKET_NAME
    ...
    
  3. Built a new S3 bucket whose only change was the region, set to “EU (London)”. I created a new user and an S3 policy, and attached the policy so the new user could access the bucket.

I destroyed the droplet and built a new one using the changed configuration. Running docker-compose -f production.yml up, I could see all the static files being copied to my new bucket.

Observed vs. expected behavior: On running the website, only the Django HTML appeared; none of the static files were served. On checking, I found that {% static 'images/favicon.ico' %} was converted to https://s3.amazonaws.com/staticstore.example/static/images/favicon.ico, yet I expected https://s3.eu-west-2.amazonaws.com/staticstore.example/static/images/favicon.ico

I cannot see where the bug would come from if settings.MEDIA_URL and settings.STATIC_URL were being processed by Django properly; the only other explanation is that storages.backends.s3boto3 was not reading the production.py configuration but constructing its own default URL based on some rule.

Can anybody else confirm this?
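For readers hitting the same issue: django-storages exposes an AWS_S3_REGION_NAME setting, which pins the region at the backend level rather than relying on hand-built endpoint URLs. A minimal sketch of what the relevant production.py lines might look like (bucket name and region are the ones from this report; treat this as a sketch, not a verified fix for this exact setup):

```python
# Sketch: pin the S3 region via django-storages' documented setting
# instead of hard-coding regional endpoints into STATIC_URL/MEDIA_URL.
AWS_STORAGE_BUCKET_NAME = 'staticstore.example'
AWS_S3_REGION_NAME = 'eu-west-2'  # EU (London)

# With the region fixed, the regional endpoint can be derived once:
AWS_S3_ENDPOINT = 'https://s3.%s.amazonaws.com' % AWS_S3_REGION_NAME
STATIC_URL = '%s/%s/static/' % (AWS_S3_ENDPOINT, AWS_STORAGE_BUCKET_NAME)
MEDIA_URL = '%s/%s/media/' % (AWS_S3_ENDPOINT, AWS_STORAGE_BUCKET_NAME)
```

Setting the region on the backend matters because s3boto3 builds its own URLs from the boto3 client configuration; overriding only STATIC_URL and MEDIA_URL does not change what the storage backend generates.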

Issue Analytics

  • State: closed
  • Created 6 years ago
  • Comments: 8 (7 by maintainers)

Top GitHub Comments

2 reactions
orzarchi commented, Sep 20, 2017

It seems like boto3 will sometimes rewrite your S3 URLs to use what is called virtual-hosted-style addressing.

I’m not sure I understood your problem fully, but to avoid this behaviour you can set the django-storages setting AWS_S3_SIGNATURE_VERSION to ‘s3v4’ and see if that helps.

I used it to fix problems with django-compressor: because these rewrites caused my {% static %} tags to issue URLs that didn’t contain the S3 address set in COMPRESS_URL, django-compressor complained.
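To illustrate the rewrite orzarchi describes, the same S3 object has two valid URL forms, and boto3 may switch between them. A sketch built from the bucket and key in this report (the URLs here are illustrative, not taken from the thread):

```python
# Path-style vs. virtual-hosted-style addressing for the same S3 object.
bucket = 'staticstore.example'
key = 'static/images/favicon.ico'
region = 'eu-west-2'

# Path style: bucket name appears in the path.
path_style = 'https://s3.%s.amazonaws.com/%s/%s' % (region, bucket, key)

# Virtual-hosted style: bucket name becomes part of the hostname.
virtual_hosted = 'https://%s.s3.%s.amazonaws.com/%s' % (bucket, region, key)
```

One wrinkle worth knowing: bucket names containing dots, like this one, break TLS certificate validation under virtual-hosted-style HTTPS, which is one reason clients fall back to path-style addressing for such buckets.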

1 reaction
orzarchi commented, Sep 22, 2017

I had the same journey as you and reached almost exactly the same config. Just a tip: your cache headers are not being set with that config, since the setup in cookiecutter targets boto, not boto3. Here is the correct setting:

    AWS_EXPIRY = 60 * 60 * 24 * 7
    AWS_S3_OBJECT_PARAMETERS = {
        'CacheControl': 'max-age=%d, s-maxage=%d, must-revalidate' % (AWS_EXPIRY, AWS_EXPIRY)
    }
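The AWS_EXPIRY arithmetic above works out to one week in seconds; as a quick check of the Cache-Control value those settings would attach to each uploaded object:

```python
# Quick check of the Cache-Control header produced by the settings above.
AWS_EXPIRY = 60 * 60 * 24 * 7  # one week in seconds

cache_control = 'max-age=%d, s-maxage=%d, must-revalidate' % (AWS_EXPIRY, AWS_EXPIRY)
print(cache_control)
# → max-age=604800, s-maxage=604800, must-revalidate
```

AWS_S3_OBJECT_PARAMETERS is the boto3-era django-storages setting; its entries are passed through as S3 object parameters on upload, which is why the boto-era cache settings in the cookiecutter template have no effect.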

Top Results From Across the Web

Bug report: Setting up a different AWS S3 region in production ...
Try using AWS_S3_REGION_NAME in the production settings file. From the django-storages docs: AWS_S3_REGION_NAME (optional: default is None) Name of the ...
