Django static files served from the wrong URL - django

I am migrating an old Django 2.2 project to Django 3.2.
I use AWS CloudFront and S3.
I have done most of the migration work, but something weird is happening with my static file serving.
When the 'static' template tag is called, the rendered URL is "https//<cloudfront_url>/static.." instead of "https://<cloudfront_url>/static..".
The colon disappears! This obviously results in a 404 Not Found.
It worked without any problem on Django 2.2.
So for the moment I have dirty-patched this by applying '.replace("https//", "https://")' to the URL returned by the static template tag.
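For illustration, the dirty patch was along these lines (a hypothetical sketch of the idea, not the exact code; it wraps Django's built-in static tag and fixes the URL after the fact):

# myapp/templatetags/patched_static.py -- hypothetical sketch of the dirty patch
from django import template
from django.templatetags.static import static as django_static

register = template.Library()

@register.simple_tag
def static(path):
    # Work around the missing ':' in the URL built by the storage backend
    return django_static(path).replace("https//", "https://")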
My settings related to static files are:
# settings.py
STATICFILES_LOCATION = 'static'
AWS_CLOUDFRONT_DOMAIN = '<idcloudfront>.cloudfront.net'
STATICFILES_STORAGE = 'app.custom_storages.StaticStorage'
STATIC_URL = "https://%s/%s/" % (AWS_CLOUDFRONT_DOMAIN, STATICFILES_LOCATION)
AWS_STORAGE_STATIC_BUCKET_NAME = 'assets'
# app/custom_storages.py
from django.conf import settings
from storages.backends.s3boto3 import S3Boto3Storage

class StaticStorage(S3Boto3Storage):
    location = settings.STATICFILES_LOCATION
    bucket_name = settings.AWS_STORAGE_STATIC_BUCKET_NAME

    def __init__(self, *args, **kwargs):
        kwargs['custom_domain'] = settings.AWS_CLOUDFRONT_DOMAIN
        super(StaticStorage, self).__init__(*args, **kwargs)
This seems to come from the storage backend, since staticfiles_storage.url('css/a_css_file.css') also returns the URL without the ":".
OK, after further investigation I can see it comes from storages.backends.s3boto3.S3Boto3Storage in the django-storages package.
https://github.com/jschneier/django-storages/blob/master/storages/backends/s3boto3.py#L577
url = '{}//{}/{}{}'.format(
    self.url_protocol,
    self.custom_domain,
    filepath_to_uri(name),
    '?{}'.format(urlencode(params)) if params else '',
)
":" seems to be missing.
Temporary fixed it by adding url_protocol = "https:" in my StaticStorage custom class and opened an issue on django-storage library.
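For reference, the temporary workaround looks roughly like this (a minimal sketch; the colon has to be part of url_protocol because, as shown above, the backend builds the URL as '{url_protocol}//{custom_domain}/...'):

# app/custom_storages.py -- with the temporary workaround applied
from django.conf import settings
from storages.backends.s3boto3 import S3Boto3Storage

class StaticStorage(S3Boto3Storage):
    location = settings.STATICFILES_LOCATION
    bucket_name = settings.AWS_STORAGE_STATIC_BUCKET_NAME
    # Workaround: include the colon explicitly so the rendered URL
    # becomes "https://<cloudfront_url>/static/..." again.
    url_protocol = "https:"

    def __init__(self, *args, **kwargs):
        kwargs['custom_domain'] = settings.AWS_CLOUDFRONT_DOMAIN
        super(StaticStorage, self).__init__(*args, **kwargs)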

Related

Amazon S3 + Cloudfront with Django - Access error in serving static files (400 - Bad Request authorization mechanism not supported)

I'm struggling with an issue while testing my Django project's production environment, and more specifically with serving my static (and media) content through S3 + CloudFront.
As I'm developing on Django, I am using the latest version of django-storages.
The problem is that, in spite of loading all the environment variables in my settings.py file (more details below), my website still always tries to load the static/media content using direct S3 URLs of the form https://bucket_name.s3.eu-west-3.amazonaws.com/static/filename.
The static content cannot be loaded from these URLs and I get the following error: Failed to load resource: the server responded with a status of 400 (Bad Request).
When I try to access these URLs in my browser, I get the following message: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
This error seems quite weird to me, as I specified the signature version in my settings file (perhaps I'm missing something?).
The other point is that I want to rely on CloudFront as the first layer so that my files get a unique path of the form "https://xxxxxx.cloudfront.net/static/...". So I defined a CloudFront distribution with an OAI and the bucket policy configured.
But I still don't get my files served through this URL, and I get the same problem as before (without CloudFront).
For info, if I manually replace the first part of the static file URLs with the CloudFront URI (without even touching the query arguments, which are AWSAccessKeyId, Signature and Expires), access works.
Here are my settings.py parameters:
AWS_ACCESS_KEY_ID = "myAccessKeyID"
AWS_SECRET_ACCESS_KEY = "myAWSAccessKey"
AWS_STORAGE_BUCKET_NAME = "my_bucket_name"
AWS_DEFAULT_ACL = None
AWS_S3_CUSTOM_DOMAIN = "xxxxxx.cloudfront.net"
AWS_S3_OBJECT_PARAMETERS = {
    'CacheControl': 'max-age=86400'
}
AWS_S3_SIGNATURE_VERSION = 's3v4'
AWS_S3_REGION_NAME = 'eu-west-3'
STATIC_LOCATION = 'static'
STATIC_ROOT = '/%s/' % STATIC_LOCATION
STATIC_URL='https://%s/%s/' % (AWS_S3_CUSTOM_DOMAIN, STATIC_LOCATION)
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3StaticStorage'
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static'),)
PUBLIC_MEDIA_LOCATION = 'media'
MEDIAFILES_LOCATION = 'media'
MEDIA_ROOT = '/%s/' % MEDIAFILES_LOCATION
MEDIA_URL = 'https://%s/%s/' % (AWS_S3_CUSTOM_DOMAIN, PUBLIC_MEDIA_LOCATION)
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
I browsed the internet for posts on similar issues and implemented the different solutions suggested, like explicitly defining the AWS_S3_SIGNATURE_VERSION and AWS_S3_REGION_NAME variables ('eu-west-3' here, as my bucket is in Paris) or making the origin domain name very explicit, with its region, in the CloudFront distribution parameters. Unfortunately, this hasn't worked for me so far.
I also wiped my browser cache and tried again with new buckets/CloudFront distributions, without any more success...
At this stage, I have made my bucket publicly accessible. The collectstatic command works and populates the bucket. The CloudFront distribution seems fine, as I can manually access my bucket through it when I 're-construct' the URL. Finally, the IAM user has S3FullAccess as well as CloudFrontFullAccess rights (for testing purposes). But perhaps I'm missing an important setting on the AWS side...
Many thanks in advance for your help; it would be greatly appreciated.
Edit, 15th June:
Following @Trent's advice, I defined the following custom storage classes for STATICFILES_STORAGE and DEFAULT_FILE_STORAGE. The variables seem to be correctly imported (I print them in the Python console before collectstatic), but I still get the same issue.
In my settings.py file :
STATICFILES_STORAGE = 'myproject.storage_backends.StaticStorage'
DEFAULT_FILE_STORAGE = 'myproject.storage_backends.PublicMediaStorage'
Here is the code of my custom storage classes in storage_backends.py:
from storages.backends.s3boto3 import S3Boto3Storage
class StaticStorage(S3Boto3Storage):
    location = 'static'
    custom_domain = "xxxxxx.cloudfront.net"
    signature_version = "s3v4"
    region = "eu-west-3"
    default_acl = "public-read"

    def __init__(self, *args, **kwargs):
        kwargs['custom_domain'] = self.custom_domain
        kwargs['signature_version'] = self.signature_version
        kwargs['region_name'] = self.region
        super(StaticStorage, self).__init__(*args, **kwargs)

class PublicMediaStorage(S3Boto3Storage):
    location = 'media'
    file_overwrite = False

    def __init__(self, *args, **kwargs):
        kwargs['custom_domain'] = "xxxxxx.cloudfront.net"
        kwargs['signature_version'] = "s3v4"
        super(PublicMediaStorage, self).__init__(*args, **kwargs)
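One quick way to narrow this down (a debugging sketch, not part of the original question) is to check from a Django shell which domain and URL the configured static files storage actually produces. The expected values below are assumptions based on the settings above, and the file name is just a hypothetical example:

# Run inside `python manage.py shell`
from django.contrib.staticfiles.storage import staticfiles_storage

# Should print the CloudFront domain if the custom storage class is in effect
print(staticfiles_storage.custom_domain)
# Should print a URL starting with https://xxxxxx.cloudfront.net/static/
print(staticfiles_storage.url('css/example.css'))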

How to sync the upload progress bar with upload on s3 bucket using Django Rest Framework

I am working on a REST API (using Django Rest Framework). I am trying to upload a video by sending a post request to the endpoint I made.
Issue
The video does upload to the S3 bucket, but the upload progress reaches 100% within a couple of seconds, however large the file I upload is.
Why is this happening and how can I solve it?
PS: Previously I was uploading to local storage, and the upload progress was working fine.
I am using React.
First of all, make sure you've installed these libraries: boto3==1.14.53, botocore==1.17.53, s3transfer==0.3.3, django-storages==1.10
settings.py :
INSTALLED_APPS = [
    'storages',
]
AWS_ACCESS_KEY_ID = 'your-key-id'
AWS_SECRET_ACCESS_KEY = 'your-secret-key'
AWS_STORAGE_BUCKET_NAME = 'your-bucket-name'
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
AWS_S3_OBJECT_PARAMETERS = {
    'CacheControl': 'max-age=86400',
}
DEFAULT_FILE_STORAGE = 'your_project-name.storage_backends.MediaStorage'
MEDIA_URL = "https://%s/" % AWS_S3_CUSTOM_DOMAIN
#File upload setting
BASE_URL = 'http://example.com'
FILE_UPLOAD_PERMISSIONS = 0o640
DATA_UPLOAD_MAX_MEMORY_SIZE = 500024288000
Then create a storage_backends.py file inside your project folder, where the settings.py file is located.
storage_backends.py:
import os
from tempfile import SpooledTemporaryFile
from storages.backends.s3boto3 import S3Boto3Storage

class MediaStorage(S3Boto3Storage):
    bucket_name = 'your-bucket-name'
    file_overwrite = False

    def _save(self, name, content):
        """
        Create a clone of the content file, because when it is passed to
        boto3 it is wrongly closed upon upload, whereas the storage
        backend expects it to still be open.
        """
        # Seek our content back to the start
        content.seek(0, os.SEEK_SET)
        # Create a temporary file that will write to disk after a specified
        # size. This file will be automatically deleted when closed by
        # boto3, or after exiting the `with` statement if boto3 is fixed.
        with SpooledTemporaryFile() as content_autoclose:
            # Write our original content into our copy that will be closed by boto3
            content_autoclose.write(content.read())
            # Upload the object, which will auto-close the
            # content_autoclose instance
            return super(MediaStorage, self)._save(name, content_autoclose)

Django static url with digitalocean spaces

I have a droplet on DigitalOcean which for the most part is working great. However, I have enabled the CDN option in DigitalOcean Spaces (similar to AWS S3 storage), and when trying to load static files using the CDN URL I cannot get the URL working correctly, so I suspect I am missing something obvious.
For example, when I change the STATIC_URL in settings to the CDN, no change is seen in the web page source.
If I change the AWS_S3_ENDPOINT_URL and MEDIA_ENDPOINT_URL then the source does change, but the files are not found and, as one can guess, collectstatic no longer works. So I assume that AWS_S3_ENDPOINT_URL & MEDIA_ENDPOINT_URL need to remain as they are and I merely need to ensure the STATIC_URL is used?
I did read somewhere that it was not good practice to change the templates from {% static ... %} to {% static_url ... %}, so I have not done that. Is this something I should update or not?
Settings:
AWS_S3_ENDPOINT_URL = 'https://nyc3.digitaloceanspaces.com'
MEDIA_ENDPOINT_URL = 'https://nyc3.digitaloceanspaces.com/media/'
AWS_STORAGE_BUCKET_NAME = 'mysitestaging'
#STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# if False it will create unique file names for every uploaded file
AWS_S3_FILE_OVERWRITE = False
STATICFILES_STORAGE = 'mysite.settings.storage_backends.StaticStorage'
DEFAULT_FILE_STORAGE = 'mysite.settings.storage_backends.MediaStorage'
AWS_S3_OBJECT_PARAMETERS = {
    'CacheControl': 'max-age=86400',
}
# the sub-directories of media and static files
STATIC_ROOT = 'static'
MEDIA_ROOT = 'media'
AWS_DEFAULT_ACL = 'public-read'
BUCKET_ROOT = '{}/{}/'.format(AWS_S3_ENDPOINT_URL, STATIC_ROOT)
# the regular Django file settings but with the custom S3 URLs
STATIC_URL = '{}/{}/'.format('https://cdn.mysite.com', STATIC_ROOT)
MEDIA_URL = '{}/{}/'.format('https://cdn.mysite.com', MEDIA_ROOT)
Source view returns:
https://nyc3.digitaloceanspaces.com/mysitestaging/static/img/apple-touch-icon.png?AWSAccessKeyId=37FLLPUJLEUO5IG7R4GQ&Signature=eof5%2BZvHPo%2FRSzvKQsrobXkcOZ0%3D&Expires=1586789962
cdn.mysite.com is an alias of mysitestaging.nyc3.cdn.digitaloceanspaces.com.
My storage_backends.py:
import os
from storages.backends.s3boto3 import S3Boto3Storage
from my_site.settings import core_settings

class StaticStorage(S3Boto3Storage):
    location = core_settings.STATIC_ROOT

class MediaStorage(S3Boto3Storage):
    location = core_settings.MEDIA_ROOT
OK, I figured this out by actually re-reading the docs:
https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html
Adding the following worked immediately:
AWS_S3_CUSTOM_DOMAIN = 'cdn.mysite.com'
Hope it helps someone else.
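For context, the relevant pieces end up looking roughly like this (a minimal sketch assembled from the settings above; the bucket and CDN names are the question's placeholders). The endpoint URL is what boto3 talks to, while the custom domain is what django-storages uses when it builds public file URLs:

# settings.py -- sketch of the pieces relevant to the fix
AWS_S3_ENDPOINT_URL = 'https://nyc3.digitaloceanspaces.com'  # API endpoint used by boto3
AWS_STORAGE_BUCKET_NAME = 'mysitestaging'
AWS_S3_CUSTOM_DOMAIN = 'cdn.mysite.com'  # domain used when building file URLs
STATICFILES_STORAGE = 'mysite.settings.storage_backends.StaticStorage'
DEFAULT_FILE_STORAGE = 'mysite.settings.storage_backends.MediaStorage'
STATIC_URL = 'https://{}/{}/'.format(AWS_S3_CUSTOM_DOMAIN, 'static')
MEDIA_URL = 'https://{}/{}/'.format(AWS_S3_CUSTOM_DOMAIN, 'media')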

How to setup django-compressor on heroku, offline compression to S3

I followed every Q&A suggestion found on SO and in different blogs. Everything works fine on my dev machine and nothing works on Heroku.
Here are my settings:
DEFAULT_FILE_STORAGE = 'arena.utils.MediaRootS3BotoStorage' # media files
# storage
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = os.environ.get('AWS_STORAGE_BUCKET_NAME')
AWS_PRELOAD_METADATA = True # necessary to fix manage.py collectstatic command to only upload changed files instead of all files
S3_URL = 'https://%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
MEDIA_URL = S3_URL + '/media/'
STATIC_URL = S3_URL + '/static/'
ADMIN_MEDIA_PREFIX = STATIC_URL + 'admin/'
COMPRESS_URL = STATIC_URL
COMPRESS_OFFLINE = True
COMPRESS_STORAGE = 'utils.CachedS3BotoStorage'
STATICFILES_STORAGE = COMPRESS_STORAGE
When I run collectstatic/compress everything is OK; I see the files being collected to S3 and put in the proper places. I see the manifest file.
Loading any page with a compressor tag shows an error: OfflineGenerationError: You have offline compression enabled but key "d2a53169c44dec41ce3ee7da19b2b9d4" is missing from offline manifest. Running python manage.py compress again solves nothing. When I check the manifest file, the key it looks for indeed doesn't exist.
What is going wrong here?
Questions I have already checked:
How to configure django-compressor and django-staticfiles with Amazon's S3?
Django Compressor with S3 URL Heroku
Configuring django-compressor with remote storage (django-storage - amazon s3)
On my side I have a very similar config, and I have been successfully using compressor for more than 2 years.
settings.py
COMPRESS_STORAGE = 'MyAwesomeApp.app.CachedS3BotoStorage.CachedS3BotoStorage'
AWS_ACCESS_KEY_ID = '#######'
AWS_SECRET_ACCESS_KEY = '########################+#########+BqoQ'
AWS_STORAGE_BUCKET_NAME = 'myAmazonS3cdn.myawesomewebsite.com'
AWS_S3_SECURE_URLS = False
AWS_QUERYSTRING_AUTH = False
COMPRESS_ROOT = 'MyAwesomeApp/static'
STATIC_ROOT = 'MyAwesomeApp/static/javascript'
COMPRESS_OUTPUT_DIR = 'compressed'
STATICFILES_STORAGE = COMPRESS_STORAGE
STATIC_URL = "http://myAmazonS3cdn.myawesomewebsite.com/"
COMPRESS_URL = STATIC_URL
COMPRESS_ENABLED = True
CachedS3BotoStorage.py
from django.core.files.storage import get_storage_class
from storages.backends.s3boto import S3BotoStorage
from django.core.files.base import File

class CachedS3BotoStorage(S3BotoStorage):
    """
    S3 storage backend that saves the files locally, too.
    """
    def __init__(self, *args, **kwargs):
        super(CachedS3BotoStorage, self).__init__(*args, **kwargs)
        self.local_storage = get_storage_class("compressor.storage.CompressorFileStorage")()

    def save(self, name, content):
        name = super(CachedS3BotoStorage, self).save(name, content)
        self.local_storage._save(name, content)
        return name
I run python manage.py compress locally, so the manifest is generated in my static files directory. Heroku only deals with collectstatic and delivers the most recent manifest version to my CDN.
Regards,
I completed the above solution with a few lines to fix the problem of multiple manifest_%.json files being created in Amazon S3.
in settings.py:
STATICFILES_STORAGE = 'your_package.s3utils.CachedS3BotoStorage'
in s3utils.py:
from storages.backends.s3boto import S3BotoStorage
from django.core.files.storage import get_storage_class

class CachedS3BotoStorage(S3BotoStorage):
    """
    S3 storage backend that saves the files locally, too.
    """
    location = 'static'

    def __init__(self, *args, **kwargs):
        super(CachedS3BotoStorage, self).__init__(*args, **kwargs)
        self.local_storage = get_storage_class(
            "compressor.storage.CompressorFileStorage")()

    def url(self, name):
        """
        Fix a problem with Django admin images on S3
        """
        url = super(CachedS3BotoStorage, self).url(name)
        if name.endswith('/') and not url.endswith('/'):
            url += '/'
        return url

    def save(self, name, content):
        name = super(CachedS3BotoStorage, self).save(name, content)
        self.local_storage._save(name, content)
        return name

    # HERE is the secret to not generating multiple manifest.json files:
    # delete the stale manifest.json from Amazon S3 before saving the new one
    def get_available_name(self, name):
        if self.exists(name):
            self.delete(name)
        return name
I found a git repository that contains post_compile hooks to solve this problem. It runs compress after Heroku builds the Django app (and also installs lessc if you need Less in your compressor settings).
https://github.com/nigma/heroku-django-cookbook

Django Compressor with S3 URL Heroku

I am currently using django-compressor and django-storages to serve my static media from S3. My files are as follows:
My storage class, as per the docs, is:
from django.core.files.storage import get_storage_class
from storages.backends.s3boto import S3BotoStorage

class CachedS3BotoStorage(S3BotoStorage):
    """
    S3 storage backend that saves the files locally, too.
    """
    def __init__(self, *args, **kwargs):
        super(CachedS3BotoStorage, self).__init__(*args, **kwargs)
        self.local_storage = get_storage_class(
            "compressor.storage.CompressorFileStorage")()

    def save(self, name, content):
        name = super(CachedS3BotoStorage, self).save(name, content)
        self.local_storage._save(name, content)
        return name
My settings are:
# S3 Storage Section
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
STATICFILES_STORAGE = DEFAULT_FILE_STORAGE
AWS_ACCESS_KEY_ID = os.environ['AWS_ACCESS_KEY_ID']
AWS_SECRET_ACCESS_KEY = os.environ['AWS_SECRET_ACCESS_KEY']
AWS_STORAGE_BUCKET_NAME = os.environ['AWS_STORAGE_BUCKET_NAME']
# AWS_S3_SECURE_URLS = False #turns off https for static files (necessary)
# Used to make sure that only changed files are uploaded with collectstatic
AWS_PRELOAD_METADATA = True
# Django compressor settings
STATICFILES_FINDERS += (
    'compressor.finders.CompressorFinder',
)
COMPRESS_ENABLED = True
COMPRESS_OFFLINE = True
COMPRESS_URL = STATIC_URL
COMPRESS_ROOT = STATIC_ROOT
COMPRESS_STORAGE = 'erp.storage.CachedS3BotoStorage'
STATICFILES_STORAGE = 'erp.storage.CachedS3BotoStorage'
AWS_LOCATION = 'static'
AWS_QUERYSTRING_EXPIRE = 7200
COMPRESS_JS_FILTERS = [
    'compressor.filters.template.TemplateFilter',
]
There is a lot of media to compress, which is why I have opted for offline compression and running the manage.py compress command rather than running collectstatic on dyno restarts, as that is just too slow.
Django compressor provides me with a querystring, which is great; however, it contains HTML entities which do not load, i.e.
<link rel="stylesheet" href="site-url/static/CACHE/css/da0c0fa8dd51.css?Signature=Signature&amp;Expires=Expires&amp;AWSAccessKeyId=key
The two amp; items should not be there. I would rather keep it secure, but I've also tried AWS_S3_SECURE_URLS = False in the settings, which does not seem to change anything, which makes me think there is something wrong.
I'm using Django 1.4, so maybe there is something incompatible.
You can use AWS_QUERYSTRING_AUTH = False in your settings.py to prevent those querystring items.
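In context, that is just one extra line in settings.py (a minimal sketch, assuming the same django-storages S3 backend as above); with query-string auth disabled, the generated URLs no longer carry the Signature/Expires/AWSAccessKeyId parameters, so there is nothing left in the URL to be HTML-escaped:

# settings.py -- sketch: serve static/compressed files via plain, unsigned URLs
AWS_QUERYSTRING_AUTH = False
# The generated link then looks roughly like:
# <link rel="stylesheet" href="site-url/static/CACHE/css/da0c0fa8dd51.css">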
I can confirm that if you use the compress management command you'll need to redo the manifest file. I did it manually, but I'm sure there is a better way. It's a pretty small problem, but I spent a bit of time on this and perhaps it will save someone else some time.