I'm using an Amazon S3 bucket for the media files in my Django project. I've enabled Transfer Acceleration on my S3 bucket, and I've changed these settings from this:
S3_URL = '//%s.s3.amazonaws.com/' % AWS_STORAGE_BUCKET_NAME
MEDIA_URL = '//%s.s3.amazonaws.com/media/' % AWS_STORAGE_BUCKET_NAME
to
S3_URL = '//%s.s3-accelerate.amazonaws.com/' % AWS_STORAGE_BUCKET_NAME
MEDIA_URL = '//%s.s3-accelerate.amazonaws.com/media/' % AWS_STORAGE_BUCKET_NAME
Unfortunately, these changes have made no difference to my upload speed. From where I am, uploads seem to top out at around 200 KB/s, which is fairly slow. I'm on a 330 Mbit download / 15 Mbit upload connection.
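For reference, a quick way to check whether an upload really goes through the accelerate endpoint is to time a standalone upload outside Django (a rough sketch with boto3; the bucket name and file are placeholders):
import time
import boto3
from botocore.config import Config

# Force the accelerate endpoint so the upload goes through
# <bucket>.s3-accelerate.amazonaws.com rather than the regular endpoint.
s3 = boto3.client('s3', config=Config(s3={'use_accelerate_endpoint': True}))

start = time.time()
s3.upload_file('bigfile.bin', 'my-bucket', 'media/bigfile.bin')
print('took %.1f seconds' % (time.time() - start))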
Questions:
Is 200 KB/s a reasonable upload speed to S3?
Are there any other obvious (or not so obvious) things I can check or change to improve my speeds?
Thanks for your help!
I am using a development server to test uploading and retrieving static files from AWS S3, using django-storages and Boto3. The file upload worked, but I cannot retrieve the files.
This is what I get when I open the file's URL in another tab:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
<Code>IllegalLocationConstraintException</Code>
<Message>The me-south-1 location constraint is incompatible for the region specific endpoint this request was sent to.</Message>
<RequestId></RequestId>
<HostId></HostId>
</Error>
Also, I configured settings.py with my own credentials and IAM user:
AWS_ACCESS_KEY_ID = <key>
AWS_SECRET_ACCESS_KEY = <secret-key>
AWS_STORAGE_BUCKET_NAME = <bucket-name>
AWS_DEFAULT_ACL = None
AWS_S3_FILE_OVERWRITE = False
AWS_S3_REGION_NAME = 'me-south-1'
AWS_S3_USE_SSL = True
AWS_S3_VERIFY = False
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
Please check in the AWS Identity & Access Management (IAM) console whether your access keys have the proper S3 permissions assigned to them.
Also, make sure you have installed the AWS CLI and set up your credentials on your machine.
You can run the command below to verify this:
$ aws s3 ls
2018-12-11 17:08:50 my-bucket
2018-12-14 14:55:44 my-bucket2
Reference: https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html
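If the CLI check works but Django still fails, it may also help to confirm from Python that the keys in settings.py can reach the bucket and that the bucket's region really matches AWS_S3_REGION_NAME (a rough sketch with boto3; the keys and bucket name are the placeholders from the question):
import boto3

# Placeholders: use the same values as in settings.py.
s3 = boto3.client(
    's3',
    aws_access_key_id='<key>',
    aws_secret_access_key='<secret-key>',
)

# Fails fast if the credentials cannot access the bucket at all.
s3.head_bucket(Bucket='<bucket-name>')

# get_bucket_location returns None for us-east-1 and the region name otherwise;
# it should match AWS_S3_REGION_NAME ('me-south-1' in the question).
region = s3.get_bucket_location(Bucket='<bucket-name>')['LocationConstraint']
print(region or 'us-east-1')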
I am using django-s3-storage==0.11.2 and boto3==1.4.4. These are the relevant settings in settings.py:
STATIC_URL = '/static/'
STATICFILES_DIRS = [
    os.path.join(BASE_DIR, "static"),
]
STATIC_ROOT = os.path.join(BASE_DIR, 'static_cdn')
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media_cdn')
AWS_S3_BUCKET_NAME = "my-bucket-name"
AWS_ACCESS_KEY_ID = 'test_id_x'
AWS_SECRET_ACCESS_KEY = 'test_id_x+test_id_x'
DEFAULT_FILE_STORAGE = "django_s3_storage.storage.S3Storage"
STATICFILES_STORAGE = "django_s3_storage.storage.StaticS3Storage"
AWS_S3_ADDRESSING_STYLE = "auto"
AWS_S3_BUCKET_AUTH_STATIC = False
AWS_S3_MAX_AGE_SECONDS_STATIC = 60 * 60 * 24 * 365 # 1 year.
AWS_S3_BUCKET_AUTH = False
AWS_S3_MAX_AGE_SECONDS = 60 * 60 * 24 * 365 # 1 year.
I have also run this command:
manage.py s3_sync_meta django.core.files.storage.default_storage
But when I run collectstatic or this command
manage.py s3_sync_meta django.contrib.staticfiles.storage.staticfiles_storage
I get this error:
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid bucket name "": Bucket name must match the regex "^[a-zA-Z0-9.\-_]{1,255}$"
I have already created the bucket, and the bucket name is correct, because this works and does not give any error:
s3.meta.client.head_bucket(Bucket='my-bucket-name')
I don't know what I am missing here. Could you help me out, please?
Alright, it looks confusing to me too.
Below are my observations:
1. Bucket name pattern
Bucket names should not contain '/'.
It would be good if you could update AWS_S3_BUCKET_NAME from "my-bucket-name" to your actual bucket name.
Source: https://github.com/boto/botocore/issues/680
2. In the django-s3-storage documentation, it says:
If you are updating a project that used django-storages
just for S3 file storage, migration is trivial.
Follow the installation instructions, replacing 'storages' in INSTALLED_APPS.
Be sure to scrutinize the rest of your settings file for changes,
most notably AWS_S3_BUCKET_NAME for AWS_STORAGE_BUCKET_NAME.
Can you please try setting AWS_S3_BUCKET_NAME_STATIC = "bass-line-shop" in your settings.py?
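In other words, something like this in settings.py (a rough sketch, assuming the bucket really is named bass-line-shop and that the static backend reads the *_STATIC variant of the setting, which would explain the empty bucket name in the collectstatic error):
AWS_S3_BUCKET_NAME = "bass-line-shop"         # used by S3Storage (media files)
AWS_S3_BUCKET_NAME_STATIC = "bass-line-shop"  # used by StaticS3Storage (collectstatic)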
Let me know if it helps!
I have a very big issue with Amazon S3. I am working on a Django app and I want to store files on S3.
My settings are:
AWS_STORAGE_BUCKET_NAME = 'tfjm2-inscriptions'
AWS_ACCESS_KEY_ID = 'id'
AWS_SECRET_ACCESS_KEY = 'key'
AWS_S3_CUSTOM_DOMAIN = '%s.s3-eu-west-1.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
And I get this error: S3ResponseError: 301 Moved Permanently
Some similar issues on the Internet say it is because it is a non-US bucket. I did try with a US Standard bucket, but I get a 401 Forbidden error.
I do not know what to do.
Please help me.
Thank you
You can do this:
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
In a terminal, run 'nano ~/.boto'.
If there are any configs in there, try commenting them out or renaming the file, then connect again (it helped me).
http://boto.cloudhackers.com/en/latest/boto_config_tut.html
That page lists the boto config file locations. Take a look at them one by one and clean them all out; boto will then fall back to its default configs. Configs may also live in .bash_profile, .bashrc and similar files.
I guess you should rely only on the KEY/SECRET pair from your settings.
Solve it by changing your code to:
AWS_STORAGE_BUCKET_NAME = 'tfjm2-inscriptions'
AWS_ACCESS_KEY_ID = 'id'
AWS_SECRET_ACCESS_KEY = 'key'
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
AWS_S3_REGION_NAME = 'us-east-2' ##### Use the region name where your bucket is created
As an optimization, AWS expects you to create and access a bucket through its own region's endpoint; if you create a bucket in, say, 'us-west-2', you'll get a 301 if you try to access it through a different region's endpoint (Africa, Europe, and even the eastern US).
You should specify the region when requesting the bucket from outside its region.
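Since the original S3ResponseError comes from boto (version 2), another option if you stay on boto is to connect through the bucket's own regional endpoint rather than the default one (a rough sketch; the keys and bucket name are the placeholders from the question):
import boto.s3

# Connect through the eu-west-1 endpoint so S3 does not answer with a
# 301 redirect to the bucket's home region.
conn = boto.s3.connect_to_region(
    'eu-west-1',
    aws_access_key_id='id',
    aws_secret_access_key='key',
)
bucket = conn.get_bucket('tfjm2-inscriptions')
print(bucket.get_location())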
My Django project uses django_compressor to store JavaScript and CSS files in an S3 bucket via boto and the django-storages package.
The django-storages-related config includes
if 'AWS_STORAGE_BUCKET_NAME' in os.environ:
    AWS_STORAGE_BUCKET_NAME = os.environ['AWS_STORAGE_BUCKET_NAME']
    AWS_HEADERS = {
        'Cache-Control': 'max-age=100000',
        'x-amz-acl': 'public-read',
    }
    AWS_QUERYSTRING_AUTH = False
    # This causes images to be stored in Amazon S3
    DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
    # This causes CSS and other static files to be served from S3 as well.
    STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
    STATIC_ROOT = ''
    STATIC_URL = 'https://{0}.s3.amazonaws.com/'.format(AWS_STORAGE_BUCKET_NAME)
    # This causes compressed CSS and JavaScript to also go in S3
    COMPRESS_STORAGE = STATICFILES_STORAGE
    COMPRESS_URL = STATIC_URL
This works, except that when I inspect the objects in the S3 management console I see that the equals sign in the Cache-Control header has been changed to %3D, as in max-age%3D100000, and this stops caching from working.
I wrote a little script to try to fix this along these lines:
from boto.s3.connection import S3Connection
from django.conf import settings

max_age = 30000000
cache_control = 'public, max-age={}'.format(max_age)
con = S3Connection(settings.AWS_ACCESS_KEY_ID, settings.AWS_SECRET_ACCESS_KEY)
bucket = con.get_bucket(settings.AWS_STORAGE_BUCKET_NAME)
for key in bucket.list():
    key.set_metadata('Cache-Control', cache_control)
but this does not change the metadata as displayed in the Amazon S3 management console.
(Update. The documentation for S3 metadata says
After you upload the object, you cannot modify object metadata. The only way to modify object metadata is to make a copy of the object and set the metadata. For more information, go to PUT Object - Copy in the Amazon Simple Storage Service API Reference. You can use the Amazon S3 management console to update the object metadata, but internally it makes a copy of the object, replacing the existing object to set the metadata.
so perhaps it is not so surprising that I can't set the metadata. I assume set_metadata is only used when creating the data in the first place.
end update)
So my questions are: first, can I configure django-storages so that it creates the Cache-Control header correctly in the first place? And second, is the metadata set with set_metadata the same as the metadata viewed in the S3 management console, and if not, what is the latter and how do I set it programmatically?
Using an ASCII string as the value solves this for me:
AWS_HEADERS = {'Cache-Control': str('public, max-age=15552000')}
If you want to add cache control while uploading the file:
from boto.s3.key import Key

headers = {
    'Cache-Control': 'max-age=604800',  # 60 x 60 x 24 x 7 = 1 week
    'Content-Type': content_type,
}
k = Key(self.get_bucket())
k.key = filename
k.set_contents_from_string(contents.getvalue(), headers)
if self.public: k.make_public()
If you want to add cache control to existing files...
for key in bucket.list():
    print key.name.encode('utf-8')
    metadata = key.metadata
    metadata['Cache-Control'] = 'max-age=604800'  # 60 x 60 x 24 x 7 = 1 week
    key.copy(AWS_BUCKET, key, metadata=metadata, preserve_acl=True)
This is tested in boto 2.32 - 2.40.
A Note for future visitors: Use AWS_S3_OBJECT_PARAMETERS instead of AWS_HEADERS with boto3.
Also, CacheControl instead of Cache-Control.
So finally it will be,
AWS_S3_OBJECT_PARAMETERS = {'CacheControl': str('max-age=525960')}  # max-age is given in seconds
Source: https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html
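For objects that already exist in the bucket (which the setting above does not change), the boto3 equivalent of the copy trick shown earlier would be roughly the following (a rough sketch; the bucket name is a placeholder):
import boto3

s3 = boto3.client('s3')
bucket = 'my-bucket-name'  # placeholder

# Existing objects cannot be edited in place, so copy each one onto itself
# with MetadataDirective='REPLACE' to store the new CacheControl value.
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get('Contents', []):
        head = s3.head_object(Bucket=bucket, Key=obj['Key'])
        s3.copy_object(
            Bucket=bucket,
            Key=obj['Key'],
            CopySource={'Bucket': bucket, 'Key': obj['Key']},
            CacheControl='max-age=525960',
            ContentType=head['ContentType'],  # REPLACE would otherwise reset it
            MetadataDirective='REPLACE',
        )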
cache_control is a property of key, not part of metadata.
So to set cache-control for all the objects in a bucket, you can do this:
from boto.s3.connection import S3Connection

s3_conn = S3Connection(AWS_KEY, AWS_SECRET)
bucket = s3_conn.get_bucket(AWS_BUCKET_NAME)
bucket.make_public()
for key in bucket.list():
    key = bucket.get_key(key.name)
    key.cache_control = 'max-age=%d, public' % (3600 * 24 * 360 * 2)
    print key.name + ' ' + key.cache_control
I have followed the very terse guide provided for django-storages, transitioned from local file storage, and have come up against this exception:
Could not load Boto's S3 bindings.
settings.py
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
AWS_ACCESS_KEY_ID = "xxxxxx"
AWS_SECRET_ACCESS_KEY = "xxxxxxxxx"
AWS_STORAGE_BUCKET_NAME = "images"
models.py
class CameraImage(models.Model):
...
image = models.ImageField(upload_to='images')#get_image_path)
What does that exception mean? How do I fix it?
From looking at the source code, it appears you need to have the python-boto library installed. This is also mentioned in the documentation you link to.
There's been an update: it's now "pip install boto".
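A quick way to confirm the package is importable from the environment Django runs in (run this with the same Python interpreter that runs Django):
# An ImportError here means django-storages cannot load the S3 bindings either.
import boto
print(boto.__version__)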