This backend doesn't support absolute paths Django Google Storage - django

I have a Django app on Compute Engine and set up Google Cloud Storage as the storage for my media files. One endpoint lets you request the file information, including the file path. When I POST to this endpoint it returns:
This backend doesn't support absolute paths.
For simplicity, my view for the endpoint looks like this:
class FilesView(APIView):
    permission_classes = (permissions.AllowAny,)

    def post(self, request):
        ...
        path = layer.file.path
        response_message = {'file': path}
        return Response(response_message, status.HTTP_200_OK)
I have done the following:
Created a service account and downloaded its JSON key.
Configured it in my Django settings.
Added the service account to the bucket's permissions, i.e. set it as Storage Admin.
Added allUsers with the permission Storage Legacy Object Reader.
Changed the bucket from Uniform to Fine-grained access control.
Here is what is in my settings:
from google.oauth2 import service_account  # needed for GS_CREDENTIALS below

DEFAULT_FILE_STORAGE = 'storages.backends.gcloud.GoogleCloudStorage'
STATICFILES_STORAGE = 'storages.backends.gcloud.GoogleCloudStorage'
GS_BUCKET_NAME = 'sample-bucket'
GCP_CREDENTIALS = os.path.join(BASE_DIR, 'sample-bucket-credentials.json')
GS_CREDENTIALS = service_account.Credentials.from_service_account_file(GCP_CREDENTIALS)
I can download the file in the admin though.
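For context on the error itself: FieldFile.path only works for storages that keep files on the local filesystem. Remote backends such as GoogleCloudStorage inherit the base Storage.path(), which raises NotImplementedError with exactly this message. A minimal sketch of the view using the storage-agnostic attributes instead (same hypothetical layer object as above):
class FilesView(APIView):
    permission_classes = (permissions.AllowAny,)

    def post(self, request):
        ...
        # Remote storages have no absolute filesystem path; return the
        # object's key and a browser-accessible URL instead.
        response_message = {
            'file': layer.file.name,  # the blob's key within the bucket
            'url': layer.file.url,    # public or signed GCS URL
        }
        return Response(response_message, status.HTTP_200_OK)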

Related

Custom S3Boto3Storage with django-storages

I developed a Django app that uses the VM's disk for saving and serving media and static files, but in one of my models I want to save files in a FileField connected to my MinIO object storage. I set up the settings like this in settings.py:
AWS_ACCESS_KEY_ID = '###'
AWS_SECRET_ACCESS_KEY = '###'
AWS_S3_ENDPOINT_URL = '###'
and in my model I used S3Boto3Storage like this:
from django.db import models
from storages.backends.s3boto3 import S3Boto3Storage

class CustomStorageBucket(S3Boto3Storage):
    bucket_name = "files"

class Document(BaseModel):
    document_file = models.ImageField(storage=CustomStorageBucket(), upload_to='documents')
With this code I can save files into the storage, but the URLs in the admin panel do not work properly, because they point to the media files URL, something like this:
http://localhost:8000/media/documents/file.jpg
but I want it to be like this (a presigned URL):
https://object-storage.app/files/documents/file.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=XXX&X-Amz-Date=XXX&X-Amz-Expires=432000&X-Amz-SignedHeaders=host&X-Amz-Signature=XXX
Try setting the MEDIA_URL variable:
MEDIA_URL = 'https://object-storage.app/files/'
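Note that MEDIA_URL only changes the prefix Django prepends to relative file names; a presigned URL has to come from the storage backend itself. In django-storages, S3Boto3Storage signs URLs when query-string auth is enabled and no custom domain is set, so a sketch of the subclass might look like this (the endpoint value is assumed from the URL above):
class CustomStorageBucket(S3Boto3Storage):
    bucket_name = "files"
    endpoint_url = "https://object-storage.app"  # assumed MinIO endpoint
    querystring_auth = True       # emit the X-Amz-* signature query params
    querystring_expire = 432000   # matches X-Amz-Expires in the target URL
    custom_domain = None          # a custom domain would disable signing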

Amazon S3 + Cloudfront with Django - Access error in serving static files (400 - Bad Request authorization mechanism not supported)

I'm struggling with an issue I encountered while testing my Django project's production environment, specifically with my static (and media) content served through S3 + CloudFront.
As I'm developing with Django, I use the latest version of django-storages.
The problem is that in spite of loading all the environment variables in my settings.py file (more details below), my website still tries to load the static/media content using direct S3 URLs of the form https://bucket_name.s3.eu-west-3.amazonaws.com/static/filename.
The static content cannot be loaded from these URLs and I get the following error: Failed to load resource: the server responded with a status of 400 (Bad Request).
When I try to access these URLs in my browser I get the following message: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256
This error seems quite weird to me, as I specified the signature version in my settings file (perhaps I'm missing something?).
The other point is that I want to rely on CloudFront as a first layer, so that my files get a unique path of the form https://xxxxxx.cloudfront.net/static/..., so I defined a CloudFront distribution with OAI and a bucket policy configured.
But my files are still not served through this URL, and I get the same problem as before (without CloudFront).
For info, if I manually replace the first part of the static file URLs with the CloudFront URI, without even touching the query arguments (which are AWSAccessKeyId, Signature and Expires respectively), the access works.
Here are my settings.py parameters:
AWS_ACCESS_KEY_ID = "myAccessKeyID"
AWS_SECRET_ACCESS_KEY = "myAWSAccessKey"
AWS_STORAGE_BUCKET_NAME = "my_bucket_name"
AWS_DEFAULT_ACL = None
AWS_S3_CUSTOM_DOMAIN = "xxxxxx.cloudfront.net"
AWS_S3_OBJECT_PARAMETERS = {
    'CacheControl': 'max-age=86400',
}
AWS_S3_SIGNATURE_VERSION = 's3v4'
AWS_S3_REGION_NAME = 'eu-west-3'
STATIC_LOCATION = 'static'
STATIC_ROOT = '/%s/' % STATIC_LOCATION
STATIC_URL='https://%s/%s/' % (AWS_S3_CUSTOM_DOMAIN, STATIC_LOCATION)
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3StaticStorage'
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static'),)
PUBLIC_MEDIA_LOCATION = 'media'
MEDIAFILES_LOCATION = 'media'
MEDIA_ROOT = '/%s/' % MEDIAFILES_LOCATION
MEDIA_URL = 'https://%s/%s/' % (AWS_S3_CUSTOM_DOMAIN, PUBLIC_MEDIA_LOCATION)
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
I browsed the internet for posts on similar issues and implemented the different suggested solutions, like explicitly defining the variables AWS_S3_SIGNATURE_VERSION and AWS_S3_REGION_NAME ('eu-west-3' here, as my bucket is in Paris), or making the origin domain name very explicit, with its region, in the CloudFront distribution parameters. Unfortunately, this hasn't worked for me so far.
I also wiped my browser cache and tried again with new buckets/CloudFront distributions, without more success...
At this stage, I made my bucket publicly accessible. The collectstatic command works to populate the bucket. The CloudFront distribution seems OK, as I can manually access my bucket through it when I 're-construct' the URL. Finally, the IAM user has S3FullAccess as well as CloudFrontFullAccess rights (for testing purposes). But perhaps I'm missing an important setting on the AWS side...
Many thanks in advance; any help would be greatly appreciated.
Edit 15th June:
Following @Trent's advice, I defined the following custom storage class as STATICFILES_STORAGE. The variables seem to be correctly imported (I print them in the Python console before collectstatic), but I still get the same issue.
In my settings.py file :
STATICFILES_STORAGE = 'myproject.storage_backends.StaticStorage'
DEFAULT_FILE_STORAGE = 'myproject.storage_backends.PublicMediaStorage'
Here is the code of my custom storage class storage_backends.py:
from storages.backends.s3boto3 import S3Boto3Storage

class StaticStorage(S3Boto3Storage):
    location = 'static'
    custom_domain = "xxxxxx.cloudfront.net"
    signature_version = "s3v4"
    region = "eu-west-3"
    default_acl = "public-read"

    def __init__(self, *args, **kwargs):
        # refer to the class attributes via self; bare names raise NameError
        kwargs['custom_domain'] = self.custom_domain
        kwargs['signature_version'] = self.signature_version
        kwargs['region_name'] = self.region
        super(StaticStorage, self).__init__(*args, **kwargs)

class PublicMediaStorage(S3Boto3Storage):
    location = 'media'
    file_overwrite = False

    def __init__(self, *args, **kwargs):
        kwargs['custom_domain'] = "xxxxxx.cloudfront.net"
        kwargs['signature_version'] = "s3v4"
        super(PublicMediaStorage, self).__init__(*args, **kwargs)
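As a side note, the AWSAccessKeyId/Signature/Expires parameters observed on the working URLs are SigV2-style query arguments, and eu-west-3 (opened in 2017) only accepts SigV4, which matches the error message. If the static files are public anyway, one way to sidestep URL signing entirely is to serve plain, unsigned CloudFront URLs. A sketch, not a confirmed fix:
class StaticStorage(S3Boto3Storage):
    location = 'static'
    custom_domain = "xxxxxx.cloudfront.net"
    default_acl = "public-read"
    querystring_auth = False  # public objects need no presigned URLs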

How to sync the upload progress bar with upload on s3 bucket using Django Rest Framework

I am working on a REST API (using Django Rest Framework). I am trying to upload a video by sending a POST request to the endpoint I made.
Issue
The video does upload to the S3 bucket, but the upload progress shows 100% within a couple of seconds, no matter how large the file I upload is.
Why is this happening, and how can I solve it?
PS: Previously I was uploading to local storage, and the upload progress was working fine.
I am using React.
First of all, make sure you've installed these libraries: boto3==1.14.53, botocore==1.17.53, s3transfer==0.3.3, django-storages==1.10
settings.py :
INSTALLED_APPS = [
    'storages',
]

AWS_ACCESS_KEY_ID = 'your-key-id'
AWS_SECRET_ACCESS_KEY = 'your-secret-key'
AWS_STORAGE_BUCKET_NAME = 'your-bucket-name'
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
AWS_S3_OBJECT_PARAMETERS = {
    'CacheControl': 'max-age=86400',
}
DEFAULT_FILE_STORAGE = 'your_project-name.storage_backends.MediaStorage'
MEDIA_URL = "https://%s/" % AWS_S3_CUSTOM_DOMAIN

# File upload settings
BASE_URL = 'http://example.com'
FILE_UPLOAD_PERMISSIONS = 0o640
DATA_UPLOAD_MAX_MEMORY_SIZE = 500024288000
Then create a storage_backends.py file inside your project folder, where the settings.py file is located.
storage_backends.py:
import os
from tempfile import SpooledTemporaryFile
from storages.backends.s3boto3 import S3Boto3Storage

class MediaStorage(S3Boto3Storage):
    bucket_name = 'your-bucket-name'
    file_overwrite = False

    def _save(self, name, content):
        """
        Create a clone of the content file, because when it is passed to
        boto3 it is wrongly closed upon upload, whereas the storage
        backend expects it to still be open.
        """
        # Seek our content back to the start
        content.seek(0, os.SEEK_SET)
        # Create a temporary file that spills to disk after a specified
        # size. It is deleted automatically when closed by boto3, or on
        # exiting the `with` statement once boto3 is fixed
        with SpooledTemporaryFile() as content_autoclose:
            # Write the original content into the copy that boto3 will close
            content_autoclose.write(content.read())
            # Upload the object, which auto-closes the
            # content_autoclose instance
            return super(MediaStorage, self)._save(name, content_autoclose)
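This alone won't make the browser's progress bar accurate, though: the XHR/axios progress event only measures the client-to-Django leg, and the Django-to-S3 upload happens afterwards on the server, which is the likely reason it jumps to 100% so quickly. If you need to observe that second leg, boto3's upload calls accept a Callback. A rough server-side sketch, assuming the file is on local disk:
import os
import boto3

def upload_with_progress(file_path, bucket, key):
    """Upload a local file to S3, logging server-side progress."""
    total = os.path.getsize(file_path)
    sent = 0

    def report(chunk):  # boto3 calls this with the bytes just transferred
        nonlocal sent
        sent += chunk
        print(f"S3 upload: {sent * 100 // total}%")

    boto3.client('s3').upload_file(file_path, bucket, key, Callback=report)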

How to make Django uploaded images display in a CloudFront frontend + Beanstalk backend

I have created a backend Django app using AWS Elastic Beanstalk, and a frontend ReactJS app deployed using CloudFront (plus S3).
I have a model in the backend that does
class EnhancedUser(AbstractUser):
    # some other attributes
    picture = models.ImageField(blank=True)
my settings.py has
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MEDIA_URL = '<my_elastic_beanstalk_domain>/media/'
Since I'm using CloudFront, if I just set MEDIA_URL to /media/, it would just append /media/ to my CloudFront URL, so I have to hardcode my backend URL instead.
Then, following the Django docs, I added the static part to my urls.py:
urlpatterns = [
    path('admin/', admin.site.urls),
    # some other urls
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
Note that the Django docs do mention that an absolute URL can't be used for MEDIA_URL with static(), but I have no alternative solution at the moment.
When I upload my image, it does get stored in the right place, but I cannot open it with the URL: it returns a 404 saying the image's URL doesn't match any of the URL patterns.
My question is:
How do I set it up so I can display the image?
Since the images will be uploaded by users/admins, they will be stored on the EC2 instance created by Beanstalk, so I think they will be wiped every time I deploy. How do I prevent this?
Take a look at using django-storages to save your uploads. I use S3 for storing uploads of a django/docker/EB deployment, with Django settings that look something like this (I keep them in settings/deployment.py):
if 'AWS_ACCESS_KEY_ID' in os.environ:
    # Use Amazon S3 for storage for uploaded media files
    # Keep them private by default
    DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'

    # Amazon S3 settings.
    AWS_ACCESS_KEY_ID = os.environ["AWS_ACCESS_KEY_ID"]
    AWS_SECRET_ACCESS_KEY = os.environ["AWS_SECRET_ACCESS_KEY"]
    AWS_STORAGE_BUCKET_NAME = os.environ["AWS_STORAGE_BUCKET_NAME"]
    AWS_S3_REGION_NAME = os.environ.get("AWS_S3_REGION_NAME", None)
    AWS_S3_SIGNATURE_VERSION = 's3v4'
    AWS_AUTO_CREATE_BUCKET = False
    AWS_HEADERS = {"Cache-Control": "public, max-age=86400"}
    AWS_S3_FILE_OVERWRITE = False
    AWS_DEFAULT_ACL = 'private'
    AWS_QUERYSTRING_AUTH = True
    AWS_QUERYSTRING_EXPIRE = 600
    AWS_S3_SECURE_URLS = True
    AWS_REDUCED_REDUNDANCY = False
    AWS_IS_GZIPPED = False

    MEDIA_ROOT = '/'
    MEDIA_URL = 'https://s3.{}.amazonaws.com/{}/'.format(
        AWS_S3_REGION_NAME, AWS_STORAGE_BUCKET_NAME)
    USING_AWS = True
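With AWS_DEFAULT_ACL = 'private' and AWS_QUERYSTRING_AUTH = True, the .url attribute of a file field returns a time-limited presigned URL, and the uploads live in S3 rather than on the Beanstalk EC2 instance, so they survive redeploys. Roughly (the username is made up for illustration):
user = EnhancedUser.objects.get(username='alice')  # model from the question
user.picture.url
# -> 'https://s3.<region>.amazonaws.com/<bucket>/picture.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&...'
#    presigned and valid for AWS_QUERYSTRING_EXPIRE (600) seconds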

Pointing to multiple S3 buckets in s3boto

In settings.py I have:
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
AWS_ACCESS_KEY_ID = 'xxxxxxxxxxxxx'
AWS_SECRET_ACCESS_KEY = 'xxxxxxxxxxxxx'
AWS_STORAGE_BUCKET_NAME = 'static.mysite.com'
This points to my S3 bucket static.mysite.com and works fine when I run manage.py collectstatic; it uploads all the static files to my bucket. However, I have another bucket that I use for different purposes and would like to use in certain areas of the website. For example, if I have a model like this:
class Image(models.Model):
    myobject = models.ImageField(upload_to='my/folder')
Now when Image.save() is invoked, it will still upload the file to the S3 bucket named in AWS_STORAGE_BUCKET_NAME, but I want this Image.save() to point to another S3 bucket. Is there a clean way of doing this? I don't want to change settings.py at runtime, nor implement any practices that violate the key principles of Django, i.e. having a pluggable, easy-to-change backend storage.
The cleanest way would be to create a subclass of S3BotoStorage and override the default bucket name in the __init__ method.
from django.conf import settings
from storages.backends.s3boto import S3BotoStorage

class MyS3Storage(S3BotoStorage):
    def __init__(self, *args, **kwargs):
        kwargs['bucket'] = getattr(settings, 'MY_AWS_STORAGE_BUCKET_NAME')
        super(MyS3Storage, self).__init__(*args, **kwargs)
Then specify this class as your DEFAULT_FILE_STORAGE and leave STATICFILES_STORAGE as it is, or vice versa.
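If only some models should use the second bucket, the storage can also be passed per field instead of globally, e.g. for the Image model from the question (MY_AWS_STORAGE_BUCKET_NAME is an assumed settings name, as above):
class Image(models.Model):
    # uploads from this field go to the second bucket; everything else
    # keeps using the default storage
    myobject = models.ImageField(upload_to='my/folder', storage=MyS3Storage())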