I'm having problems uploading lots of files in Django. The context is the following: I have a spreadsheet in which one or more columns contain image filenames; those images are uploaded through a form with an input of type=file and the multiple option.
With a few lines - say 70 - everything goes fine. But with more lines, and consequently more images, an IOError happens at random positions.
I've checked several questions about file/image uploads in Django but couldn't find any related to my problem.
The model I'm using is the Product model of LFS (www.getlfs.com). We are developing a system based on LFS, and to facilitate the creation of dozens of products in a batch, we wrote some views and templates to receive the main product properties through a spreadsheet. Each line is a product, and the columns are the desired properties.
LFS uses a custom class, ImageWithThumbsField(ImageField), to store the product's image, and when the product instance (built from the spreadsheet) is saved, all thumbnails are generated. This is a time- (CPU-) consuming task, and my initial guess is that for some reason the temporary file is deleted before all processing has occurred.
Is there a way to keep these uploaded files around longer? Is there any other suggested approach for processing hundreds of uploaded files? Any hints on what might be happening?
I hope my question is clear. I can post code if needed.
Links to relevant portions of LFS code:
Where thumbnails are generated:
https://github.com/diefenbach/django-lfs/blob/master/lfs/core/fields/thumbs.py
Product model:
https://github.com/diefenbach/django-lfs/blob/master/lfs/catalog/models.py
Thanks in advance!
It sounds like you are running out of memory. When Django processes uploads, until the form is validated, all of the files are either:
kept in memory inside the Python/WSGI process/worker (the usual mode of operation for runserver).
In this case, you are uploading enough photos to fill up the process's memory and running out of space. As you can imagine, where the IOError happens will be non-deterministic (it is GC-dependent).
temporarily stored in /tmp/ (the usual setup for Apache).
In this case, the web server's ramfs fills up with images that have not yet been written to disk, and the IOError should happen around the same place each time.
In either case, you should not be bulk-uploading images this way; Apache/Django is not designed for it. Try uploading a single product/image per request/response, and your problems will go away.
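If you still need to tune how Django buffers uploads in the meantime, these are the relevant settings; a minimal sketch, where the threshold and temp directory are arbitrary example values, not recommendations:

    # settings.py - upload-buffering knobs; the values below are arbitrary
    # examples, not recommendations.
    FILE_UPLOAD_MAX_MEMORY_SIZE = 10 * 1024 * 1024  # uploads above 10 MB go to disk
    FILE_UPLOAD_TEMP_DIR = "/var/tmp/django_uploads"  # instead of the system default
    FILE_UPLOAD_HANDLERS = [
        "django.core.files.uploadhandler.MemoryFileUploadHandler",
        "django.core.files.uploadhandler.TemporaryFileUploadHandler",
    ]

Dropping MemoryFileUploadHandler from that list forces every upload to a temp file on disk, which at least keeps process memory flat.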
Related
I have a general understanding question. I am building a Flutter app that relies on a content library containing text files, LaTeX equations, images, PDFs, videos, etc.
The content lives on an AWS Amplify backend. Depending on the user's navigation in the app, the corresponding data is fetched and displayed.
I am not sure about the correct way of fetching the data. The current method (which works) is that the data is stored in an S3 bucket; when data is requested, it is downloaded to a temporary directory and then opened and processed in the app. This is actually not slow, but I feel it is not the way it should be done.
When data is downloaded, a file-transfer notification pops up, which bothers me because it is shown all the time. I would also like to read the data directly with something like a GET request, without downloading the file first (especially for text files, which I would like to read directly into a String). But I don't know how that works, because I don't see a way to store files with the other Amplify services like DataStore or the REST API. The S3 bucket is an intuitive way of storing data that is easy for my company's content creators to use, so to me it seems that the S3 bucket is the way to go. However, with S3 I have only figured out the download method for fetching data.
Could someone give me a hint on the correct approach for this use case? Thank you very much!
I am developing an application with Django and AWS S3, hosted on Heroku.
At one point, users have to upload multiple large files, totaling around 150 MB each time.
I have tried various approaches.
1st attempt: directly call the save method of the Django form.
Result: the request takes more than 30 seconds and returns a timeout.
2nd attempt: temporarily save the files to a Heroku directory and read them from a Celery task.
Result: this cannot work, because it throws FileNotFoundError: [Errno 2] No such file or directory in production.
3rd attempt: pass the uploaded files (in-memory files) to a Celery task, but the bytes cannot be serialized and passed to the task with either JSON or pickle.
Could anyone help me, please?
Thanks in advance.
Another approach could be the following:
Expose an API that generates a presigned URL for the frontend (a sketch follows this list).
Upload the files from the frontend using that URL, asynchronously. That offloads the work from your backend.
After a successful upload, you will have the URL of the file's location. Save that S3 URL, along with the other fields' data, to your Django model.
You can upload files larger than 150 MB with this method, and your system will be scalable.
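A minimal sketch of the presigned-URL endpoint with boto3; the bucket name, key scheme, and one-hour expiry are placeholder assumptions, not part of the original answer:

    # views.py - hedged sketch: hand the frontend a presigned PUT URL so the
    # browser uploads straight to S3. "my-bucket" and the key layout are
    # made-up examples.
    import uuid

    import boto3
    from django.http import JsonResponse

    def presign_upload(request):
        s3 = boto3.client("s3")
        key = f"uploads/{uuid.uuid4()}"  # placeholder key scheme
        url = s3.generate_presigned_url(
            ClientMethod="put_object",
            Params={"Bucket": "my-bucket", "Key": key},
            ExpiresIn=3600,  # URL valid for one hour
        )
        return JsonResponse({"url": url, "key": key})

The frontend then PUTs the file body to that URL, and a follow-up request can record the returned key on the Django model.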
Now, I have realized the uploading process works like this:
1. Django builds the HTTP request object and populates request.FILES by using the upload handlers.
2. In views.py, the FieldFile instance that mirrors the FileField calls storage.save() to upload the file.
So, as you can see, Django always passes the data through memory or disk; if your file is too large, this costs too much time.
The design I have in mind to solve this is a custom upload handler that calls storage.save() with the raw input data as it arrives (a rough sketch is below). The only question is: how can I modify the behavior of FileField?
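A hedged skeleton of that handler idea; the hook signatures are Django's real FileUploadHandler API, but the streaming calls themselves are left as backend-specific placeholders:

    # Skeleton of a custom upload handler that consumes chunks as they
    # arrive. The actual calls to the remote storage are placeholders.
    from django.core.files.uploadhandler import FileUploadHandler, StopFutureHandlers

    class StreamToStorageHandler(FileUploadHandler):
        def new_file(self, field_name, file_name, content_type,
                     content_length, charset=None, content_type_extra=None):
            super().new_file(field_name, file_name, content_type,
                             content_length, charset, content_type_extra)
            # open a writable handle on the remote storage here (backend-specific)
            raise StopFutureHandlers()  # this handler owns the file from now on

        def receive_data_chunk(self, raw_data, start):
            # push raw_data to the remote storage here
            return None  # None means "chunk consumed"; later handlers never see it

        def file_complete(self, file_size):
            # finalize the remote object, then return an UploadedFile-like
            # placeholder so request.FILES still has an entry for the field
            ...

It would be installed via the FILE_UPLOAD_HANDLERS setting or by modifying request.upload_handlers in the view.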
Thanks for any help.
You can use this package, which adds direct-to-AWS-S3 uploads, with a progress bar, to file input fields:
https://github.com/bradleyg/django-s3direct
You can use one of the following packages:
https://github.com/cloudinary/pycloudinary
http://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html
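For the django-storages route, the S3 backend is configured entirely in settings.py; a minimal sketch, where every value is a placeholder:

    # settings.py - sketch for the django-storages S3 (boto3) backend;
    # every value below is a placeholder.
    DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
    AWS_STORAGE_BUCKET_NAME = "my-bucket"
    AWS_ACCESS_KEY_ID = "..."
    AWS_SECRET_ACCESS_KEY = "..."

With that in place, every FileField and ImageField upload goes to the bucket instead of the local filesystem.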
Is it possible in Django to load data into memory, process it, and return it to the user?
Example:
User uploads an image -> a Python script processes it (e.g. converts it to black-and-white) -> the user sees the processed image on the same web page.
(Other examples would be all those online-converter sites like pdf2doc.com.)
Is it a bad idea to load it into memory?
Would a queue plus saving to a CDN be a better solution?
I'm trying to understand the possibilities for handling files from the user that don't need to be saved. I appreciate any further ideas/considerations.
I want to allow users to upload an image through the Django admin, crop and scale that image in memory (probably using PIL), and save it to Amazon S3 without saving the image on the local filesystem. I'll save the image path in my database, but that is the only aspect of the image that is saved locally. I'd like to integrate this special image-upload widget into the normal model form on the admin edit page.
This question is similar, except the solution does not use the admin interface.
Is there a way that I can intercept the save action, do the manipulation and saving of the image to S3, and then save the image path and the rest of the model data as normal? I have a pretty good idea of how I would crop, scale, and save the image to S3 if I can just get access to the image data.
See https://docs.djangoproject.com/en/dev/topics/http/file-uploads/#changing-upload-handler-behavior
If images are smaller than a particular size, they will already be stored only in memory, so you can likely tune the FILE_UPLOAD_MAX_MEMORY_SIZE setting to suit your needs. Additionally, you'll have to make sure that you don't access the .path attribute of these uploaded images, because that would require them to exist as files on disk. Instead, use (for example) the .read() method. I haven't tested this, but I believe this will work:
image = PIL.Image.open(request.FILES['my_file'])
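To round that out, a hypothetical end-to-end view doing the whole thing in memory; the field name my_file and the grayscale conversion are just examples:

    # Hypothetical view: process the upload entirely in memory and stream
    # the result back, never touching the local filesystem.
    import io

    from django.http import HttpResponse
    from PIL import Image

    def convert(request):
        img = Image.open(request.FILES["my_file"])  # file-like object, no .path needed
        img = img.convert("L")  # e.g. grayscale
        buf = io.BytesIO()
        img.save(buf, format="PNG")
        return HttpResponse(buf.getvalue(), content_type="image/png")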
Well, if you don't want to touch the admin part of Django, you can define the scaling in the model's save() method (a sketch follows below).
But when using Django's ImageField, Django can actually do the saving for you; it also has height_field and width_field options available for storing the image's dimensions.
https://docs.djangoproject.com/en/dev/ref/models/fields/#imagefield
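A hedged sketch of that save()-override approach; the model name, field, 300x300 bound, and JPEG output are invented for illustration:

    # Hypothetical model: scale the uploaded image inside save() before it
    # reaches the storage backend. All names and sizes here are examples.
    from io import BytesIO

    from django.core.files.base import ContentFile
    from django.db import models
    from PIL import Image

    class ProductImage(models.Model):
        image = models.ImageField(upload_to="images/")

        def save(self, *args, **kwargs):
            img = Image.open(self.image)
            img = img.convert("RGB")  # JPEG cannot store alpha/palette modes
            img.thumbnail((300, 300))  # scale down, keeping aspect ratio
            buf = BytesIO()
            img.save(buf, format="JPEG")
            self.image.save(self.image.name, ContentFile(buf.getvalue()), save=False)
            super().save(*args, **kwargs)

A real implementation would also guard against re-processing the image on every subsequent save.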
For uploading to S3, I really suggest using the django-storages backends:
https://bitbucket.org/david/django-storages/src (preferably the S3-boto version)
That way, you basically will not have to write any code yourself; you can just use available libraries and solutions that people have tested.