Error "Could not load Boto's S3 bindings." - django

I have followed the very terse guide provided for django-storages, transitioned from local file storage, and have come up against this exception:
Could not load Boto's S3 bindings.
settings.py
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
AWS_ACCESS_KEY_ID = "xxxxxx"
AWS_SECRET_ACCESS_KEY = "xxxxxxxxx"
AWS_STORAGE_BUCKET_NAME = "images"
models.py
class CameraImage(models.Model):
    ...
    image = models.ImageField(upload_to='images')  # get_image_path
What does that exception mean? How do I fix it?

From looking at the source code, it appears you need to have the python-boto library installed. This is also mentioned in the documentation you link to.

There's been an update; it's now `pip install boto`.
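If it's unclear whether boto is actually visible to the interpreter running Django, a quick check (run it in the same virtualenv that serves your project) is:
# Run with the same interpreter/virtualenv that Django uses; if this raises
# ImportError, django-storages cannot load the S3 bindings either, and
# `pip install boto` is the fix.
import boto
print(boto.__version__)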

Related

Problem using ~/.aws/credentials and ~/.aws/config

I am trying to access an S3 bucket from a Python script. I have already created the credentials and config files. The credentials file uses this format:
[default]
aws_access_key_id= my_key_id
aws_secret_access_key= my_secret_access_key
and config file:
[default]
region=ap-southeast-1
I have already set my environment variables as well.
I tried to run this script:
import boto3
# Create an S3 client
s3 = boto3.client('s3')
s3.put_object(Body='testing', Bucket='file-server-datalake', Key= 'test.txt')
and I got NoCredentialsError.
Is there any way to solve this?
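One common workaround, sketched here under the assumption that the credentials file itself is fine but simply isn't being found, is to hand boto3 the profile (or the keys) explicitly:
import boto3

# Sketch: point boto3 explicitly at the default profile so it does not depend
# on locating ~/.aws/credentials on its own; you can also pass the keys
# directly via aws_access_key_id / aws_secret_access_key.
session = boto3.Session(profile_name='default')
s3 = session.client('s3')
s3.put_object(Body='testing', Bucket='file-server-datalake', Key='test.txt')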

Django Storage and Boto3 not retrieving Media from AWS S3

I am using a development server to test uploading and retrieving static files from AWS S3 using Django storages and Boto3. The file upload worked but I cannot retrieve the files.
This is what I get (screenshot omitted).
And when I open the URL in another tab, I get this:
**This XML file does not appear to have any style information associated with it. The document tree is shown below.**
<Error>
<Code>IllegalLocationConstraintException</Code>
<Message>The me-south-1 location constraint is incompatible for the region specific endpoint this request was sent to.</Message>
<RequestId></RequestId>
<HostId></HostId>
</Error>
I also configured settings.py with my own credentials and IAM user:
AWS_ACCESS_KEY_ID = <key>
AWS_SECRET_ACCESS_KEY = <secret-key>
AWS_STORAGE_BUCKET_NAME = <bucket-name>
AWS_DEFAULT_ACL = None
AWS_S3_FILE_OVERWRITE = False
AWS_S3_REGION_NAME = 'me-south-1'
AWS_S3_USE_SSL = True
AWS_S3_VERIFY = False
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
Please check in your AWS Identity & Access Management Console (IAM) whether your access keys have proper S3 permissions assigned to them.
Also, make sure you have installed the AWS CLI and set up your credentials on your machine.
You can try running the command below to verify it:
$ aws s3 ls
2018-12-11 17:08:50 my-bucket
2018-12-14 14:55:44 my-bucket2
Reference: https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html
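The same check can be done from Python with boto3 (a minimal sketch; list_buckets needs the s3:ListAllMyBuckets permission on the IAM user):
import boto3

# Sketch: confirm that boto3 picks up credentials and that they can reach S3.
s3 = boto3.client('s3')
for bucket in s3.list_buckets()['Buckets']:
    print(bucket['Name'], bucket['CreationDate'])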

S3ResponseError: 301 Moved Permanently - Django

I have a big issue with Amazon S3. I am working on a Django app and I want to store files on S3.
My settings are:
AWS_STORAGE_BUCKET_NAME = 'tfjm2-inscriptions'
AWS_ACCESS_KEY_ID = 'id'
AWS_SECRET_ACCESS_KEY = 'key'
AWS_S3_CUSTOM_DOMAIN = '%s.s3-eu-west-1.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
And I get this error: S3ResponseError: 301 Moved Permanently
Some similar issues reported on the Internet say it is because it is a non-US bucket; I did try with a US Standard bucket, but then I get a 401 Forbidden error.
I do not know what to do.
Please help me.
Thank you
You can do this:
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
In a terminal, run `nano ~/.boto`.
If there are any configs in there, try commenting them out or renaming the file and connecting again (it helped me).
http://boto.cloudhackers.com/en/latest/boto_config_tut.html lists the boto config file locations. Go through them one by one and clean them all out, and it will work with the default configs. Configs may also live in .bash_profile and other shell startup files.
I guess only the key and secret should be set.
Solve this by changing your code to:
AWS_STORAGE_BUCKET_NAME = 'tfjm2-inscriptions'
AWS_ACCESS_KEY_ID = 'id'
AWS_SECRET_ACCESS_KEY = 'key'
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
AWS_S3_REGION_NAME = 'us-east-2' ##### Use the region name where your bucket is created
As an optimization, AWS expects you to create and access a bucket through the endpoint for its own region, so if you create a bucket in, say, 'us-west-2', you'll get a 301 if you try accessing it from a different region's endpoint (Africa, Europe, or even the eastern US).
You should specify the region when requesting the bucket from outside its region.
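With boto 2 (which is what raises S3ResponseError), the same idea looks roughly like this, a sketch assuming 'eu-west-1' from the s3-eu-west-1 custom domain above:
# Sketch (boto 2): connect through the bucket's own regional endpoint so the
# request is not redirected with a 301.
import boto.s3

conn = boto.s3.connect_to_region(
    'eu-west-1',
    aws_access_key_id='id',
    aws_secret_access_key='key',
)
bucket = conn.get_bucket('tfjm2-inscriptions')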

How to use S3 for Production on Heroku and local static css for development?

I have just installed Grunt in my Django app. In my blogengine app I have the file assets/css/global.scss. Grunt minifies this .scss file to static/css/global.css.
I am still developing the app locally. I have been running grunt sass and watch to minify the scss file to css as I'm working on it.
However, I've set the static URL, etc., to point at my Amazon S3 bucket. This means that when I run collectstatic, I have to wait ages for everything to upload to S3 before I can see my changes.
I eventually want to deploy this to Heroku, but in the meantime, how do I serve my static content locally during development and set up the production settings to use S3?
This is in settings.py:
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.6/howto/static-files/
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
AWS_ACCESS_KEY_ID = 'XXXXXXXXXXXXXX'
AWS_SECRET_ACCESS_KEY = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxx'
AWS_STORAGE_BUCKET_NAME = 'ingledow'
STATIC_URL = 'http://ingledow.s3.amazonaws.com/'
You could key this off the DEBUG setting. In local development, set DEBUG to True, and Django will handle serving all the static files. Once you push to production, set DEBUG to False and the S3 settings will kick in. You could have different settings files, or you could set an environment variable both locally and on Heroku and read it in your settings (e.g. `DEBUG = os.environ['DEBUG'] == 'True'`).
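A minimal sketch of that idea (the names and defaults here are illustrative, not from the original project):
import os

# Sketch: drive DEBUG from an environment variable and only switch to the S3
# backend when DEBUG is off (i.e. in production).
DEBUG = os.environ.get('DEBUG', 'True') == 'True'

if DEBUG:
    STATIC_URL = '/static/'
else:
    STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
    AWS_ACCESS_KEY_ID = os.environ['AWS_ACCESS_KEY_ID']
    AWS_SECRET_ACCESS_KEY = os.environ['AWS_SECRET_ACCESS_KEY']
    AWS_STORAGE_BUCKET_NAME = 'ingledow'
    STATIC_URL = 'http://ingledow.s3.amazonaws.com/'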
In your .bashrc (or just once in your local shell), set an environment flag:
export DJANGO_ENV=local
Then in settings.py:
import os

if os.environ.get('DJANGO_ENV', '') == 'local':
    # Set up your local settings here, e.g. serve static files from disk:
    STATIC_URL = '/static/'
else:
    STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
    AWS_ACCESS_KEY_ID = 'XXXXXXXXXXXXXX'
    AWS_SECRET_ACCESS_KEY = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxx'
    AWS_STORAGE_BUCKET_NAME = 'ingledow'
    STATIC_URL = 'http://ingledow.s3.amazonaws.com/'
Turn off the local setting when pushing static files (e.g. `unset DJANGO_ENV`). In production (i.e. Heroku), you won't have the DJANGO_ENV environment variable, so it will default to the AWS settings.

Trouble setting cache-control header for Amazon S3 key using boto

My Django project uses django_compressor to store JavaScript and CSS files in an S3 bucket via boto via the django-storages package.
The django-storages-related config includes
if 'AWS_STORAGE_BUCKET_NAME' in os.environ:
    AWS_STORAGE_BUCKET_NAME = os.environ['AWS_STORAGE_BUCKET_NAME']
    AWS_HEADERS = {
        'Cache-Control': 'max-age=100000',
        'x-amz-acl': 'public-read',
    }
    AWS_QUERYSTRING_AUTH = False
    # This causes images to be stored in Amazon S3
    DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
    # This causes CSS and other static files to be served from S3 as well.
    STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
    STATIC_ROOT = ''
    STATIC_URL = 'https://{0}.s3.amazonaws.com/'.format(AWS_STORAGE_BUCKET_NAME)
    # This causes compressed CSS and JavaScript to also go in S3
    COMPRESS_STORAGE = STATICFILES_STORAGE
    COMPRESS_URL = STATIC_URL
This works except that when I visit the objects in the S3 management console I see the equals sign in the Cache-Control header has been changed to %3D, as in max-age%3D100000, and this stops caching from working.
I wrote a little script to try to fix this along these lines:
from boto.s3.connection import S3Connection
from django.conf import settings

max_age = 30000000
cache_control = 'public, max-age={}'.format(max_age)
con = S3Connection(settings.AWS_ACCESS_KEY_ID, settings.AWS_SECRET_ACCESS_KEY)
bucket = con.get_bucket(settings.AWS_STORAGE_BUCKET_NAME)
for key in bucket.list():
    key.set_metadata('Cache-Control', cache_control)
but this does not change the metadata as displayed in Amazon S3 management console.
(Update. The documentation for S3 metadata says
After you upload the object, you cannot modify object metadata. The only way to modify object metadata is to make copy of the object and set the metadata. For more information, go to PUT Object - Copy in the Amazon Simple Storage Service API Reference. You can use the Amazon S3 management console to update the object metadata but internally it makes an object copy replacing the existing object to set the metadata.
so perhaps it is not so surprising that I can't set the metadata. I assume set_metadata is only meant to be used when creating the object in the first place.
end update)
So my questions are: first, can I configure django-storages so that it creates the Cache-Control header correctly in the first place? And second, is the metadata set with set_metadata the same as the metadata viewed in the S3 management console, and if not, what is the latter and how do I set it programmatically?
Using ASCII strings as values solved this for me:
AWS_HEADERS = {'Cache-Control': str('public, max-age=15552000')}
If you want to add cache control while uploading the file....
from boto.s3.key import Key

headers = {
    'Cache-Control': 'max-age=604800',  # 60 x 60 x 24 x 7 = 1 week
    'Content-Type': content_type,
}
k = Key(self.get_bucket())
k.key = filename
k.set_contents_from_string(contents.getvalue(), headers)
if self.public:
    k.make_public()
If you want to add cache control to existing files...
for key in bucket.list():
    print key.name.encode('utf-8')
    metadata = key.metadata
    metadata['Cache-Control'] = 'max-age=604800'  # 60 x 60 x 24 x 7 = 1 week
    key.copy(AWS_BUCKET, key, metadata=metadata, preserve_acl=True)
This is tested in boto 2.32 - 2.40.
A Note for future visitors: Use AWS_S3_OBJECT_PARAMETERS instead of AWS_HEADERS with boto3.
Also, use CacheControl instead of Cache-Control.
So finally it will be:
AWS_S3_OBJECT_PARAMETERS = {'CacheControl': str('max-age=525960')}  # note that max-age is given in seconds
Source: https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html
cache_control is a property of key, not part of metadata.
So to set cache-control for all the objects in a bucket, you can do this:
from boto.s3.connection import S3Connection

s3_conn = S3Connection(AWS_KEY, AWS_SECRET)
bucket = s3_conn.get_bucket(AWS_BUCKET_NAME)
bucket.make_public()
for key in bucket.list():
    key = bucket.get_key(key.name)
    key.cache_control = 'max-age=%d, public' % (3600 * 24 * 360 * 2)
    print key.name + ' ' + key.cache_control