I can't push my local files to Amazon S3 using Django's collectstatic.
My settings.py, with all the settings that may be relevant:
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
INSTALLED_APPS += ('storages',)
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
AWS_ACCESS_KEY_ID = os.environ['AWSAccessKeyId']
AWS_SECRET_ACCESS_KEY = os.environ['AWSSecretKey']
AWS_STORAGE_BUCKET_NAME = os.environ['AWS_STORAGE_BUCKET_NAME']
S3_URL = 'http://%s.s3.amazonaws.com/' % AWS_STORAGE_BUCKET_NAME
STATIC_URL = S3_URL
STATICFILES_DIRS = (
os.path.join(BASE_DIR, "static"),)
STATIC_ROOT = 'staticfiles'
If I run collectstatic with these settings, all I get is:
0 static files copied, 139 unmodified.
If I change the STATIC_ROOT to "/" (this is how I'd like it to be, given the folder distribution in the Amazon Bucket), I get the error:
OSError: [Errno 13] Permission denied: '/css'
Please help, I've run out of ideas. I've double-checked the credentials and I've also tried running it from Heroku:
heroku run python manage.py collectstatic
Same errors.
According to the docs, you'll need to add
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
to your settings file.
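For context, a minimal sketch of how the two storage settings divide the work (the class path is the boto-based backend from django-storages, as used elsewhere in this thread):

```python
# DEFAULT_FILE_STORAGE controls where FileField/ImageField uploads
# (media files) go; collectstatic never reads it.
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'

# STATICFILES_STORAGE is the backend collectstatic writes through;
# without it, files are collected into the local STATIC_ROOT instead
# of being uploaded to the bucket.
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
```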
Related
I have a Django project I am deploying to Heroku. I am using AWS S3 to store and serve static assets. In my project assets folder, I have these folders and files:
However, when python manage.py collectstatic --noinput is run, only two folders (admin and rest_framework), along with all the other loose files, are copied to S3 (see the screenshot below).
Can someone point out what the issue could be? Here is my settings.py:
STATIC_URL = config('AWS_URL') + '/staticfiles/'
MEDIA_URL = config('AWS_URL') + '/media/'
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3StaticStorage'
AWS_ACCESS_KEY_ID = config('AWS_ACCESS_KEY', 'default')
AWS_SECRET_ACCESS_KEY = config('AWS_SECRET_KEY', 'default')
AWS_STORAGE_BUCKET_NAME = config('AWS_S3_BUCKET')
AWS_S3_BUCKET_CNAME = config('AWS_S3_BUCKET_CNAME')
AWS_URL = config('AWS_URL')
AWS_PRELOAD_METADATA = True
AWS_DEFAULT_ACL = config('AWS_DEFAULT_ACL', 'public-read')
AWS_S3_REGION_NAME = config('AWS_S3_REGION_NAME', 'eu-west-2')
AWS_S3_SIGNATURE_VERSION = config('AWS_S3_SIGNATURE_VERSION', 's3v4')
I have set up an AWS S3 bucket for both my static and media files, but it currently only works on localhost, not under gunicorn or when I deploy to Heroku.
When I look at the network tab on localhost, I can see the page requesting files from "https://mybucketname.s3.amazonaws.com/static/image.png". However, under gunicorn or on Heroku it attempts to serve the files from "https://none.s3.amazonaws.com/static/image.png".
I am unsure why it is using 'none' instead of 'mybucketname'; my settings are below.
settings.py
from pathlib import Path
import os
import django_heroku
BASE_DIR = Path(__file__).resolve().parent.parent
ALLOWED_HOSTS = ['xxxx.herokuapp.com', '127.0.0.1',]
INSTALLED_APPS = [
xxx,
'storages',
]
MIDDLEWARE = [
xxx,
'whitenoise.middleware.WhiteNoiseMiddleware',
]
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = os.environ.get('AWS_STORAGE_BUCKET_NAME')
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
AWS_S3_FILE_OVERWRITE = False
AWS_S3_REGION_NAME = "us-east-1"
AWS_S3_OBJECT_PARAMETERS = {
'CacheControl': 'max-age=86400',
}
AWS_LOCATION = 'static'
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'static'),
]
STATIC_URL = 'https://%s/%s/' % (AWS_S3_CUSTOM_DOMAIN, AWS_LOCATION)
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
DEFAULT_FILE_STORAGE = 'src.storage_backends.MediaStorage'
# Configure Django App for Heroku.
django_heroku.settings(locals(), staticfiles=False)
If it's relevant, my AWS S3 settings are set for '*' allowed hosts, public access is enabled for everything, and my IAM account has full S3 access enabled.
Any help is much appreciated; I have been following this guide: https://simpleisbetterthancomplex.com/tutorial/2017/08/01/how-to-setup-amazon-s3-in-a-django-project.html
@Ikaro0 ended up being correct. The os.environ.get I was using (via dotenv) was causing an issue, reading only part of the environment file. It worked after switching to python-decouple.
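For illustration, a minimal stand-in for the config() helper that python-decouple provides (a simplified sketch; the real library also reads .env/settings.ini files and supports a cast argument):

```python
import os

def config(key, default=None):
    # Simplified stand-in for python-decouple's config():
    # read the value from the environment, fall back to a default,
    # and fail loudly when the setting is missing entirely.
    value = os.environ.get(key, default)
    if value is None:
        raise KeyError(f"missing setting: {key}")
    return value

os.environ["AWS_STORAGE_BUCKET_NAME"] = "mybucketname"
print(config("AWS_STORAGE_BUCKET_NAME"))         # the real value
print(config("AWS_DEFAULT_ACL", "public-read"))  # the fallback
```

The relevant point for the question above is the fallback behavior: if the loader silently fails to read a variable, a setting like AWS_STORAGE_BUCKET_NAME quietly becomes None, which is exactly how "none" ends up in the bucket URL.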
My issue is that every time I deploy my code to Heroku, collectstatic copies all the static files within modules again, even if they haven't changed. This means my free Amazon S3 tier's request limit is reached after just a few deploys, because about 400 files are copied on each deployment.
The issue must be some Heroku setting, because when I run manage.py collectstatic from my IDE, it does not re-copy the files, even with the S3 bucket as the default storage.
I have DISABLE_COLLECTSTATIC=1 in my Heroku config vars, and I set it in the Heroku CLI as well to be doubly sure.
I have no idea why it is doing this. There's nothing useful in the deployment logs either. Any help would be appreciated.
This is the log:
Successfully compiled 1 referred SASS/SCSS files.
Debug mode is off.
386 static files copied, 106 unmodified.
The 106 unmodified files are the Bootstrap SASS sources and a few of my own. The 386 copied files are from django admin and django-countries. Despite not changing, they are always copied, and no change to my settings seems to fix it.
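For background, collectstatic decides between "copied" and "unmodified" by comparing the modified time the target storage reports against the source file. A simplified local-filesystem stand-in for that check (not Django's actual code) looks like:

```python
import os
import shutil
import tempfile

def copy_if_newer(src, dst):
    """Copy src to dst unless dst is already at least as new --
    a simplified stand-in for the check collectstatic performs."""
    if os.path.exists(dst) and os.path.getmtime(dst) >= os.path.getmtime(src):
        return False  # would be counted as "unmodified"
    shutil.copy2(src, dst)  # copy2 preserves the modification time
    return True  # would be counted as "copied"

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "app.css")
dst = os.path.join(tmp, "collected.css")
with open(src, "w") as f:
    f.write("body { color: black; }")

first = copy_if_newer(src, dst)   # dst does not exist yet -> copied
second = copy_if_newer(src, dst)  # dst now as new as src -> skipped
print(first, second)
```

When the target storage cannot report a usable modified time, or the source timestamps are reset (as can happen in a fresh build environment like a Heroku deploy), every file looks newer than its copy and gets re-uploaded on each run.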
Relevant parts of my settings.py:
STATICFILES_FINDERS = [
# Default finders
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
# finders for sass files
'sass_processor.finders.CssFinder',
]
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static/'),)
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
# Settings for AWS bucket
AWS_S3_OBJECT_PARAMETERS = {
'Expires': 'Thu, 31 Dec 2099 20:00:00 GMT',
'CacheControl': 'max-age=94608000',
}
AWS_STORAGE_BUCKET_NAME = os.environ.get('AWS_STORAGE_BUCKET_NAME')
AWS_S3_REGION_NAME = 'eu-west-3'
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
AWS_S3_CUSTOM_DOMAIN = f'{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com'
AWS_DEFAULT_ACL = 'public-read'
AWS_S3_OBJECT_PARAMETERS = {'CacheControl': 'max-age=86400'}
STATICFILES_LOCATION = 'static'
STATICFILES_STORAGE = 'custom_storages.StaticStorage'
STATIC_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/{STATICFILES_LOCATION}/'
So I figured it out after some work.
Despite having used DISABLE_COLLECTSTATIC=1 in other Django projects without trouble, in this case it turned out to be the problem itself. Once I decided a Heroku setting had to be creating the issue, I realised there was only one setting that interferes with static collection.
Removing the setting from Heroku resolved the issue; everything now copies correctly, as it should.
I want to upload my static files to Amazon S3 storage, but I can't stop Django from just putting them in a staticfiles directory in the project root. I have boto3 in my requirements.txt and have set
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
I have successfully set up S3 for media files in the same bucket (under a different directory).
Collectstatic also seems to disregard the STATIC_ROOT variable in settings.py when I comment out the S3 settings. When running python3 manage.py collectstatic, I expect it to gather the static files into STATIC_ROOT, as described in the docs (https://docs.djangoproject.com/en/2.1/ref/settings/, under STATIC_ROOT). But even if I give STATIC_ROOT a different value, collectstatic always collects into a directory called staticfiles in the project root.
STATIC_ROOT = os.path.join(BASE_DIR, 'this_is_not_used_by_collectstatic')
The rest of my s3 settings:
# Amazon S3
AWS_ACCESS_KEY_ID = os.environ['AWS_ACCESS_KEY_ID']
AWS_SECRET_ACCESS_KEY = os.environ['AWS_SECRET_ACCESS_KEY']
AWS_STORAGE_BUCKET_NAME = 'my_bucket_name'
AWS_S3_REGION_NAME = 'eu-north-1'
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
AWS_S3_OBJECT_PARAMETERS = {
'CacheControl': 'max-age=86400',
}
AWS_DEFAULT_ACL = None
AWS_LOCATION = 'static'
STATIC_URL = 'https://%s/%s/' % (AWS_S3_CUSTOM_DOMAIN, AWS_LOCATION)
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# media
DEFAULT_FILE_STORAGE = 'korvir.storage_backends.MediaStorage'
If you're using django-heroku and you have the following in your code, as per the instructions:
django_heroku.settings(locals())
... then it will overwrite your STATIC_ROOT setting. If you want to preserve your own STATIC_ROOT setting, then make sure you set STATIC_ROOT after calling django_heroku.settings.
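A minimal settings.py ordering sketch (assuming django-heroku is installed; the only point is that the STATIC_ROOT assignment comes after the settings() call, so your value wins):

```python
import os
import django_heroku

BASE_DIR = os.path.dirname(os.path.dirname(__file__))

# Let django-heroku apply its defaults first...
django_heroku.settings(locals())

# ...then assign STATIC_ROOT afterwards, so this value overrides
# whatever django_heroku.settings() wrote into it.
STATIC_ROOT = os.path.join(BASE_DIR, 'this_is_not_used_by_collectstatic')
```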
After collecting my static files and storing them in an Amazon S3 bucket, when I run the project locally it still uses the static files stored online. This is a problem because, when I want to change a CSS file for example, I have to run collectstatic or manually upload the file to Amazon. I tried adding a new settings variable, LOCAL_STATICFILES, like this:
settings.py
LOCAL_STATICFILES = False
if not LOCAL_STATICFILES:
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
AWS_ACCESS_KEY_ID = os.environ['AWSAccessKeyId']
AWS_SECRET_ACCESS_KEY = os.environ['AWSSecretKey']
AWS_STORAGE_BUCKET_NAME = os.environ['AWS_STORAGE_BUCKET_NAME']
S3_URL = 'http://%s.s3.amazonaws.com/' % AWS_STORAGE_BUCKET_NAME
STATIC_URL = S3_URL
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
if LOCAL_STATICFILES:
STATIC_URL = '/static/'
STATIC_ROOT = '/'
But when I set LOCAL_STATICFILES to True and run runserver, Django can't find them.
The project's folders look like this:
project
app
app
static
css
js
img
templates
What am I doing wrong?
First of all: make sure you have a way to distinguish whether or not you are in an environment that supports the Amazon bucket configuration. Usually this will be your production environment, where the Amazon bucket settings are already configured.
So you would have something like:
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
# this is the base project path
if 'AWSAccessKeyId' in os.environ:
# assume the presence of this key will determine whether
# we are, or not, in the bucket-supporting environment
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
AWS_ACCESS_KEY_ID = os.environ['AWSAccessKeyId']
AWS_SECRET_ACCESS_KEY = os.environ['AWSSecretKey']
AWS_STORAGE_BUCKET_NAME = os.environ['AWS_STORAGE_BUCKET_NAME']
STATIC_URL = 'http://%s.s3.amazonaws.com/' % AWS_STORAGE_BUCKET_NAME
# the static URL will point to the remote bucket
STATIC_ROOT = None
# STATIC_ROOT doesn't matter since you will not invoke
# `manage.py collectstatic` from this environment. You
# can safely let it to None, or anything like:
# os.path.join(BASE_DIR, 'static')
else:
STATIC_URL = '/static/'
# the static files url will point to your local,
# development, server
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
# here, STATIC_ROOT matters, since you will invoke
# `manage.py collectstatic` from your local environment.