How can I upload and download files with graphene-django? - django

I'm currently using graphene-django v2.0 and I have absolutely no clue how to upload and download files like images. Does anyone have an example of a query where you can download an image, and a mutation where you can upload one?

UPLOADS
You don't need to write your own frontend code to add a file upload to a mutation; there are existing packages that already do this, for example apollo-upload-client if you are using Apollo.
To receive an uploaded file on the backend, the files will be available in the dictionary request.FILES. In a graphene resolver that dictionary is exposed as info.context.FILES, so any mutation handling a file upload can iterate over info.context.FILES.items() to get and save the file data. The specifics of this code will depend on the ultimate destination of the saved file.
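As a concrete illustration, here is a minimal sketch of that loop. The helper name and the save-to-local-disk destination are my own assumptions; a real app would more likely hand each file to Django's storage API or a model's FileField:

```python
import os

def handle_uploaded_files(files, dest_dir):
    """Save each file-like object in `files` (a mapping such as
    info.context.FILES) under dest_dir. Hypothetical helper."""
    saved = []
    for field_name, f in files.items():
        # UploadedFile objects carry the original filename; fall back
        # to the form field name for plain file-like objects
        filename = getattr(f, 'name', None) or field_name
        path = os.path.join(dest_dir, os.path.basename(filename))
        with open(path, 'wb') as out:
            # Django's UploadedFile exposes chunks(); plain files only read()
            if hasattr(f, 'chunks'):
                for chunk in f.chunks():
                    out.write(chunk)
            else:
                out.write(f.read())
        saved.append(path)
    return saved
```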
(UPDATE) However, if possible I would recommend not using graphene-django to upload files because it adds a large amount of complexity on both the backend and the frontend. My team ultimately scrapped our working graphene-django file upload code and replaced it with a standard Django file upload.
DOWNLOADS
For downloads, I would recommend not using GraphQL for the actual download. Instead, create a Django function view that returns an HttpResponse or FileResponse and sets the Content-Disposition header. Something like:
from django.http import HttpResponse

def download(request):
    # ... do stuff to get file_contents, file_name and mime_type
    response = HttpResponse(file_contents, content_type=mime_type)
    response['Content-Disposition'] = 'attachment; filename="{}"'.format(file_name)
    return response
Then add this download path to your urls.py and return it in a GraphQL query response. So GraphQL is only used to get the download path; actually downloading the file is handled by a regular Django view.

Related

Return existing csv file in django as download

I have a csv file with a few entries in an assets folder in my django project, and I want my angular frontend to be able to download it. The existing examples show how to create a new csv file and send that, but I don't need to create a new csv file; I already have one. How does the view/controller have to look? How can I return a csv file in django so that my frontend can download it?
Please don't mention this reference, as it is not my intention to create a new csv file:
https://docs.djangoproject.com/en/4.0/howto/outputting-csv/
You can use a FileResponse object to return a file from a view; pass as_attachment=True to offer the file to the user as a download:
from django.http import FileResponse

def my_view(request):
    return FileResponse(open('path/to/filename.csv', 'rb'), as_attachment=True)

Migrating from Client-Django-S3 image/file upload to Client-S3 Presigned URL upload, while maintaining FileField/ImageField?

The current state of our app is as follows: a client makes a POST request to our Django + DRF app server with one or more files, and the Django server processes the files, then uploads and saves them to S3. This is done using the amazing django-storages and boto3 libraries. We end up with the reference URL to our S3 files in our database.
A very simplified example looks like this:
# models.py
class TestImageModel(models.Model):
    image = models.ImageField(upload_to='<path_in_bucket>', storage=S3BotoStorage())

# serializers.py
class TestImageSerializer(serializers.ModelSerializer):
    image = serializers.ImageField(write_only=True)
The storages library handles uploading to S3, and calling something like

TestImageModel.objects.first().image.url

returns the reference to the image URL in S3 (technically, in our case, a CloudFront URL, since we use CloudFront as a CDN and set custom_domain on the S3BotoStorage class).
This was the initial approach we took, but we are noticing heavy server memory usage because images are first uploaded to our server and then uploaded again to S3. To scale efficiently, we would like to move to an approach where the client uploads directly to S3 using presigned URLs. I found documentation on how to do so here: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-presigned-urls.html
The new strategy I will adopt for image upload:
1) Client requests a presigned URL from the Django API
2) Client uploads the file to S3 directly
3) Django updates the reference to the uploaded image in the database with ImageField
My question is about 3). How can I tell the django record that an already-uploaded image in S3 should be the location of the ImageField, without having to reupload the image through the S3BotoStorage class?
An alternative could be to change the ImageField to a URLField and just store the link to the new image, but this would prevent me from using the features of an ImageField (forms in the django admin, deleting directly from S3 via the storage class's .delete(), etc.).
How can I update the ImageField to point to an existing file in the same storage, or is there a better way to go about moving to a direct Client-S3 upload strategy with Django and ImageField?
So apparently you can just do something like this:
a = TestImageModel.objects.first()
a.image.name = 'path_to_image'
a.save()
And everything works perfectly :)
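To tie this back to the three steps: for step 1, boto3's generate_presigned_url('put_object', ...) can hand out the upload URL, and for step 3 the client reports back the uploaded object's URL (or key); the key is what belongs in image.name. A minimal sketch of the key extraction, with a hypothetical helper name:

```python
from urllib.parse import urlparse

def key_from_url(url):
    """Extract the S3 object key from an S3 or CloudFront URL
    (hypothetical helper)."""
    return urlparse(url).path.lstrip('/')

# Step 3 then reduces to pointing the field at the existing object:
#   obj = TestImageModel.objects.get(pk=pk)
#   obj.image.name = key_from_url(reported_url)
#   obj.save()
```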

"NotImplementedError at /myfile/download/8" while downloading zip file in Django

I am trying to download a zip file which contains multiple files in the same or different formats.
The zip file gets downloaded after clicking a "Download" button.
This functionality works perfectly on the local development server, but after deploying the web app on Google Cloud it throws
"NotImplementedError at /myfile/download/8"
This backend doesn't support absolute paths.
...
Cloud Storage has the respective path with the file, so why is it not working?
Everything works fine on my local machine but fails in production. Why?
Please help! Thanks in advance.
I think what's flagging this error is calling the path() method (which only FileSystemStorage implements), or the url() property through which the contents of the file referenced by name can be accessed, and then using absolute paths to download the file. As the documentation says:
“For storage systems that aren’t accessible from the local filesystem,
this will raise NotImplementedError instead.”
You should try to avoid saving to absolute paths; there is a file storage API which abstracts these types of operations for you. Looking at the documentation, the save() function supports passing a file-like object instead of a path.
Also, if you need to serve a zip file download in Django, you just need to serve it as an attachment using Django's HttpResponse:

from django.http.response import HttpResponse

def zip_file_view(request):
    response = HttpResponse(open('/path/to/your/zipfile.zip', 'rb'), content_type='application/zip')
    response['Content-Disposition'] = 'attachment; filename="any_name_you_like.zip"'
    return response
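If the member files live in cloud storage rather than on local disk (the situation that triggers the NotImplementedError), you can read each one through the storage API's open() and assemble the archive in memory instead of touching absolute paths. A stdlib-only sketch, with hypothetical names:

```python
import io
import zipfile

def build_zip(named_blobs):
    """Build a zip archive in memory from a {filename: bytes} mapping
    (hypothetical helper; the bytes would come from storage.open(name).read())."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as zf:
        for name, data in named_blobs.items():
            zf.writestr(name, data)
    return buf.getvalue()

# The resulting bytes can be served exactly as in the view above:
#   response = HttpResponse(build_zip(blobs), content_type='application/zip')
#   response['Content-Disposition'] = 'attachment; filename="files.zip"'
```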

Django Rest_Framework File upload using Ajax

I'm a beginner with django rest_framework.
I want to implement a file upload feature in my project.
I did some searching but could not find a helpful example.
So, can somebody point me to a reference or example of a rest framework file upload?
Uploading a file using Django Rest Framework can be done independently of the method that you are using to send the request (Ajax in this case).
You can follow this to have more information about how it is done

Force django_compressor to recompile css/less files

We let users upload their own custom css/less to our django app.
All our css/less files are compressed by django_compressor "on-the-fly".
When the app is initially deployed all css files are moved to the collect-static directory.
When users upload custom css styles they replace one of the less files in the collect-static directory.
The problem with that is that the changes only appear after Apache is reloaded, since that is when django-compressor generates a new CSS file.
Is there a way to force django-compressor to regenerate its compiled and cached files? I would not feel comfortable triggering a sudo service apache2 reload at the django application level.
I can come up with two possible solutions, I don't like both of them very much.
You can call compress (doc) from within incron or cron:
python manage.py compress
Or you can set a very low COMPRESS_REBUILD_TIMEOUT. (doc)
BTW you have the user scripts as a separate bundle, right?
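For the cron route, the entry is an ordinary crontab line; the schedule and paths here are hypothetical, and --force makes django-compressor overwrite existing compressed files:

```
*/10 * * * * cd /srv/myapp && /srv/myapp/venv/bin/python manage.py compress --force
```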
I used a different approach. I am now using offline compression, which is faster and better for multi-server deployments anyway.
I give the user an interface to change certain css and less values. I save those css/less values in a database table, so that the user can edit the stuff easily.
To make the new css/less values available to the frontend (compiled css files) I write the values the user entered in a less file to the disk and re-run the python manage.py compress command.
This way the compiled compressor files are generated and if the user entered invalid less code, which would lead to compile errors, the compressor stops and keeps the old css files.
Here's my save() method:
# requires: import os
#           from django.conf import settings
#           from django.core import management
#           from django.template.loader import render_to_string
def save(self, *args, **kwargs):
    # write the less file to the file system every time the model is saved
    try:
        file_location = os.path.join(settings.STATIC_ROOT, 'styles', 'less', 'custom_styles.less')
        with open(file_location, 'w') as f:
            f.write(render_to_string('custom_styles/custom_stylesheet_tmpl.txt', {'platform_customizations': self}))
    except IOError:
        # TODO show error message to user in admin backend and via messaging system
        raise
    # re-run offline compress command in prod mode
    management.call_command('compress')
    super(PlatformCustomizations, self).save(*args, **kwargs)