I've been reading about and trying various thumbnailing apps for Django. These are the requirements:
All generated thumbnails must be saved in a S3 bucket separate from the original images, i.e. separate storage class
When the image instance is deleted, the original image file along with all generated thumbnails must be deleted as well
Minimize expensive inefficiencies, e.g. fetching the URL of a thumbnail to serialize in DRF shouldn't check the S3 bucket to see if the file exists every time, etc.
VersatileImageField fails the first requirement. ImageKit fails the second requirement. The third requirement is where I'm most confused; I'm trying to inform myself on best practices but the information is fragmented and I'm not confident in making a decision based on what I've learned so far.
From what I've read, my impression is that the most efficient behavior would be as follows:
generate the thumbnail immediately upon save and assume it always exists
to access thumbnail, generate the URL based on the original image's filename and thumbnail dimensions/quality, since we know it definitely exists
a post_delete handler will delete all thumbnails and the original file (a minimal sketch of this flow follows below)
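To make that concrete, here's a minimal sketch of the flow I have in mind; Photo, thumbnail_storage, and the naming helper are placeholders of mine, not taken from any of the libraries:

import posixpath

from django.core.files.storage import FileSystemStorage
from django.db import models
from django.db.models.signals import post_delete
from django.dispatch import receiver

# Placeholder storage: in production this would be a second S3 bucket
# (e.g. a django-storages S3 storage instance).
thumbnail_storage = FileSystemStorage(location='/tmp/thumbs')

class Photo(models.Model):  # placeholder model
    image = models.ImageField(upload_to='images/')

THUMB_SIZES = [(100, 100), (425, 236)]  # every size the app ever renders

def thumb_name(original_name, size):
    # Deterministic naming: a thumbnail URL can be computed from the
    # original filename plus dimensions, with no S3 existence check.
    base, ext = posixpath.splitext(original_name)
    return '%s_%dx%d%s' % (base, size[0], size[1], ext)

@receiver(post_delete, sender=Photo)
def delete_image_files(sender, instance, **kwargs):
    # Remove every generated thumbnail, then the original file itself.
    for size in THUMB_SIZES:
        thumbnail_storage.delete(thumb_name(instance.image.name, size))
    instance.image.delete(save=False)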
I'd be most interested in learning about the differences in the approaches that easy-thumbnails and sorl-thumbnail take (if they align with the process I very briefly outlined above or if they have something even more efficient), and the advantages/disadvantages in each of their methodologies.
I hope this may help you.
In the model there are two fields, image and thumbnail. In the view, validate the image type and size, then generate the thumbnail using Pillow (PIL):
from io import BytesIO

from django.core.files.base import ContentFile
from PIL import Image as Img

def create(self, request):
    mutable = request.POST._mutable
    request.POST._mutable = True
    for field_name, uploaded in request.FILES.items():
        im = Img.open(uploaded)
        # LANCZOS is the current name for the old ANTIALIAS filter
        im.thumbnail((425, 236), Img.LANCZOS)
        buffer = BytesIO()
        # JPEG has no alpha channel, so convert before saving
        im.convert('RGB').save(fp=buffer, format='JPEG')
        # name the thumbnail after the original upload
        thumbnail_name = 'thumb_%s' % uploaded.name
        request.POST['thumbnail'] = ContentFile(buffer.getvalue(), thumbnail_name)
    request.POST._mutable = mutable
To save the original images in one folder and the thumbnails in another, use a different upload_to path on each ImageField.
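For example (field and path names here are placeholders):

from django.db import models

class Photo(models.Model):
    image = models.ImageField(upload_to='images/')
    # A separate upload_to keeps thumbnails in their own folder; passing
    # a storage=... argument here is how you would point them at a
    # different bucket entirely.
    thumbnail = models.ImageField(upload_to='thumbnails/', blank=True)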
I'm not sure if this is helpful, but I've used easy-thumbnails in the past and I'm fairly sure that it does all of the things you're asking for if you configure it a bit. Configuring it around the save function is a bit tricky, as save doesn't lend itself to being configured, but it's not impossible. The main thing that can cause problems is that you have to use 'save and continue editing' to access and use the thumbnail option; it won't be visible until you do so, since the thumbnail is created on save.
from easy_thumbnails.files import get_thumbnailer

def save(self, *args, **kwargs):
    found_id = self.id
    super(Team, self).save(*args, **kwargs)
    if self.image and found_id is None and self.original_image_width and self.original_image_height:
        self.image = get_thumbnailer(self.image).get_thumbnail({
            'size': (self.original_image_width, self.original_image_height)
        }).name
        super(Team, self).save(*args, **kwargs)
Good day everyone. I am a self-learner and quite new to Django, and I am currently facing some questions. I created a model that uploads images to a self-defined directory, like this:
from django.core.files.storage import FileSystemStorage
from django.db import models

class OverwriteStorage(FileSystemStorage):
    def get_available_name(self, name, max_length=None):
        # Delete any existing file so the new upload keeps the same name
        self.delete(name)
        return name

class DealerList(models.Model):
    def user_dir_folder(instance, filename):
        # e.g. dealerImg/<company name>/<original filename>
        return 'dealerImg/{0}/{1}'.format(instance.dealerCompanyName, filename)

    dealerCompanyName = models.CharField(max_length=100)
    Cert = models.ImageField(storage=OverwriteStorage(), upload_to=user_dir_folder)
    Img = models.ImageField(storage=OverwriteStorage(), upload_to=user_dir_folder)
and I installed 'django_cleanup.apps.CleanupConfig' in the settings file to clean up the related files when an instance is deleted (but the empty folder stays behind, which I don't know how to deal with yet; that's my second question).
What I'm facing now is this: if an instance's 'dealerCompanyName' field is updated, the folder named after 'dealerCompanyName' doesn't get renamed along with it (and since I installed cleanup, Django also deletes the uploaded files that no longer match any instance after 'dealerCompanyName' changes, but that's my third question).
What I want to ask is: is there any way to rename that folder too when the 'dealerCompanyName' field is updated?
By the way, answers to the second and third questions mentioned above would be very much appreciated, since I will face them very soon.
Hopefully I can get some solutions, since I have tried for a long time but nothing is working. Many thanks!
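One possible approach, sketched under the assumption that the files live on local FileSystemStorage (not a tested answer): compare the old and new company names in pre_save and move the directory yourself, keeping the field values in step.

import os
import shutil

from django.conf import settings
from django.db.models.signals import pre_save
from django.dispatch import receiver

@receiver(pre_save, sender=DealerList)
def rename_dealer_folder(sender, instance, **kwargs):
    if not instance.pk:
        return  # new instance, nothing to rename
    old = sender.objects.get(pk=instance.pk)
    if old.dealerCompanyName == instance.dealerCompanyName:
        return
    old_dir = os.path.join(settings.MEDIA_ROOT, 'dealerImg', old.dealerCompanyName)
    new_dir = os.path.join(settings.MEDIA_ROOT, 'dealerImg', instance.dealerCompanyName)
    # assumes the new directory doesn't already exist
    if os.path.isdir(old_dir):
        shutil.move(old_dir, new_dir)
    # Keep the FileField names in step with the moved files; otherwise
    # django-cleanup sees the old paths as orphans and deletes them.
    for field in ('Cert', 'Img'):
        f = getattr(instance, field)
        if f:
            f.name = f.name.replace(
                'dealerImg/%s/' % old.dealerCompanyName,
                'dealerImg/%s/' % instance.dealerCompanyName, 1)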
I'm on Django 1.4 for the moment. I'm trying to figure out a way to copy an image field from one object to another without hitting the file system to grab the height and width. In case it's relevant, the source and destination objects are of different models. And the reason this is of interest is that I'm using S3 as a storage backend, which is more cumbersome than a simple disk hit.
Usually, if I say:
obj2.image = obj1.image
It will grab the file from the FS, figure out the width and height, and save those to the specified width and height fields. However, in principle I think I should be able to say:
obj2.image_width = obj1.image_width
obj2.image_height = obj1.image_height
obj2.image = obj1.image
In fact, I imagine that Django itself could do this under the hood without grabbing the file every time. Perhaps it does so in some versions later than 1.4.
I've been playing around with it a lot, and looking at the Django source, and learning a bit about the quirks of when it happens. update_dimension_fields is the function in question, and it's called with force=True in ImageFileDescriptor.__set__.
One way I've figured out how to partially avoid it is:
obj2.image = obj1.image
obj3.image = obj1.image
obj4.image = obj1.image
...
I.e., it only hits the file system the first time. But of course it still hits it unnecessarily that first time, and it doesn't help at all between requests. The other thing I thought of is:
Model.objects.filter(pk=obj2.pk).update(
    image=obj1.image,
    image_width=obj1.image_width,
    image_height=obj1.image_height
)
This only works because it's dumbed down as far as ORM safety goes. I'd use it, except I want to do this within a ModelForm. Which means I'd have to override the save function:
def save(self, *args, **kwargs):
    obj = super(MyModelForm, self).save(*args, **kwargs)
    # 'source' is the object whose image is being copied
    Model.objects.filter(pk=obj.pk).update(  # extra query!
        image=source.image,
        image_width=source.image_width,
        image_height=source.image_height
    )
    return Model.objects.get(pk=obj.pk)  # another extra query!
The problem is the two extra queries, just to get around a framework quirk. The second extra query is so that obj.image is the expected image, for whatever consumes the form. I could skip it if I were 100% sure the form consumer isn't going to need it, which is not a good idea.
So is there a way around this? Is there a good safety concern I'm not thinking of, that explains why I'm not able to do this?
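One workaround I can sketch (leaning on Django internals, so treat it as an assumption rather than a supported API): write the file reference straight into the instance's __dict__, which bypasses ImageFileDescriptor.__set__ and therefore update_dimension_fields. Field names here mirror the question:

from django.db.models.fields.files import ImageFieldFile

def copy_image_without_fs(dst, src, field_name='image'):
    # Bypass the descriptor so Django never opens the file to measure it;
    # just point dst at the same stored file by name.
    field = dst._meta.get_field(field_name)
    src_file = getattr(src, field_name)
    dst.__dict__[field_name] = ImageFieldFile(dst, field, src_file.name)
    dst.image_width = src.image_width
    dst.image_height = src.image_height

Both objects then point at the same stored file, so this only makes sense when sharing the underlying file is acceptable.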
I'm attempting to detect changes to an ImageField in order to programmatically sync the changes with Hg. The model containing the ImageField is being localized using Django-multilingual, so I have to detect changes for each field individually rather than just assume the file changed every time.
I am using pre and post save signals to accomplish this, saving the instance in the pre-save and detecting the changes in the field values in the post-save. This works great for when images are added, removed, or changed with an image of a different filename. However, when I upload an image of the same filename, my code is unable to detect that the image actually changed so no file changes get synced with Hg.
I want to be able to generate a checksum for the old file (easily done as I know where it lives from the presave instance), and compare this to a checksum of the new file (not as easy, as trying to pull it in from the field value takes me to the old file).
If there is a way for me to find the newly uploaded file (presumably in memory as Django doesn't temp save files under 2.5MB), and save it to a temporary directory, it would be easy for me to generate a checksum for it. However, I am not sure where I would get the file from.
Where could I get the file from during a post_save signal? Or is there another method of accomplishing this change detection that I haven't thought of?
Thanks,
Rich
Add this snippet:
def has_changed(instance, field):
    if not instance.pk:
        return False
    old_value = instance.__class__._default_manager.\
        filter(pk=instance.pk).values(field).get()[field]
    return not getattr(instance, field) == old_value
then in your save method:

def save(self, *args, **kwargs):
    if has_changed(self, 'field_here'):
        pass  # react to the change here, e.g. queue the file for the Hg sync
    super(Sneetch, self).save(*args, **kwargs)
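Note that the comparison above only catches filename changes; for the same-filename case described in the question, a content checksum is still needed. A rough sketch, assuming the fresh upload is reachable through the field file before the model is saved (e.g. in pre_save or an overridden save):

import hashlib

def fieldfile_md5(fieldfile, chunk_size=64 * 1024):
    # Iterate the upload in chunks, so both InMemoryUploadedFile and
    # TemporaryUploadedFile work without loading everything at once.
    md5 = hashlib.md5()
    for chunk in fieldfile.chunks(chunk_size):
        md5.update(chunk)
    fieldfile.seek(0)  # leave the file usable for the actual save
    return md5.hexdigest()

Comparing fieldfile_md5(instance.image) against a checksum of the old file on disk then tells you whether the content really changed.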
As of Django 1.2.5, a model containing a FileField will no longer automatically delete the associated files when the model is deleted. Check the release notes here: http://docs.djangoproject.com/en/1.2/releases/1.2.5/
I am quite new to Django, so I wonder what would be a good way to preserve the old behaviour, as I have a need for it. Is it enough to just override the model's save method?
If you take a look at changeset 15321 in Django's code repository, you'll see that this functionality was removed by deleting a signal handler on FileField that intercepted its parent model's delete event and then tried to delete the field's file.
The functionality is quite easy to restore: you just need to "undo" these changes. One warning though: the problem with deleting files within transactions is real, and if you delete files "the old-fashioned way" you could end up deleting them even when a rollback occurs. Now, if that doesn't pose a problem for you, read on!
We can easily subclass FileField and restore that functionality without touching the original class. This code should do that (note that I'm just restoring the old functionality that was deleted from the code):
from django.db.models import signals
from django.db.models.fields.files import FileField

class RetroFileField(FileField):
    # restore the old "delete the file when the model is deleted" behaviour
    def __init__(self, verbose_name=None, name=None, upload_to='', storage=None, **kwargs):
        # init FileField normally
        super(RetroFileField, self).__init__(verbose_name, name, upload_to, storage, **kwargs)

    def contribute_to_class(self, cls, name):
        # reconnect the signal handler that fires when a model is deleted
        super(RetroFileField, self).contribute_to_class(cls, name)
        signals.post_delete.connect(self.delete_file, sender=cls)

    def delete_file(self, instance, sender, **kwargs):
        file = getattr(instance, self.attname)
        # If no other object of this type references the file,
        # and it's not the default value for future objects,
        # delete it from the backend.
        if file and file.name != self.default and \
                not sender._default_manager.filter(**{self.name: file.name}):
            file.delete(save=False)
        elif file:
            # Otherwise, just close the file, so it doesn't tie up resources.
            file.close()
(I haven't tested the code... but it should be more or less ok)
You should put this code in a fields.py module in your project, or wherever it makes sense to you. Just remember that from now on, instead of using django.db.models.FileField, you'll be using yourproject.fields.RetroFileField for your file fields. And if you're using image fields and depend on this functionality too... well... I think you'll need to subclass ImageField in the same way so it restores the same signal handler.
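For instance (model and field names invented for illustration):

# yourproject/models.py
from django.db import models
from yourproject.fields import RetroFileField

class Document(models.Model):
    attachment = RetroFileField(upload_to='documents/')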
Oh, and if you don't like the name of the field, just rename it to something more appropriate, just remember to update the super() calls inside.
Hope this helps!
Another note: You should see if you can just use a cron job to delete orphaned files like the changelog suggests.
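A rough sketch of what such a cron-driven cleanup could look like, assuming a single model and field on local FileSystemStorage (all names are placeholders):

# Hypothetical management command, run from cron: delete files on disk
# that no Document row references any more.
import os

from django.core.management.base import BaseCommand
from yourproject.models import Document  # placeholder model

class Command(BaseCommand):
    def handle(self, *args, **options):
        storage = Document._meta.get_field('attachment').storage
        root = storage.location
        referenced = set(Document.objects.values_list('attachment', flat=True))
        for dirpath, _, filenames in os.walk(root):
            for fn in filenames:
                path = os.path.join(dirpath, fn)
                name = os.path.relpath(path, root).replace(os.sep, '/')
                if name not in referenced:
                    os.remove(path)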
Currently I'm using the below code to allow a user to download a file that's attached to a couchdb document. I'm using couchdbkit with Django.
from django.contrib.auth.models import User
from django.http import HttpResponse

def get_file(request, user_id):
    user = User.objects.get(pk=user_id)
    application = user.get_profile().application()
    attachment_name = request.GET.get('name', None)
    assert attachment_name
    attachment = application.fetch_attachment(attachment_name, stream=False)
    return HttpResponse(attachment,
                        content_type=application._attachments[attachment_name]['content_type'])
This works, but I am concerned about memory usage on the machine. Is this method efficient, or will large files be pulled fully into memory before being passed to the HttpResponse? I have tried stream=True, but I'm not sure how best to test this. The documentation on couchdbkit is sparse, to say the least. I intend to use something similar to this throughout the application and want to get the method right the first time. :)
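For reference, a sketch of the streaming variant, under the assumption that the object returned by fetch_attachment(..., stream=True) is file-like with a read() method (couchdbkit's HTTP layer, restkit, exposes response bodies that way); handing HttpResponse an iterator lets Django send the body in chunks instead of buffering the whole attachment:

from django.contrib.auth.models import User
from django.http import HttpResponse

def get_file(request, user_id):
    user = User.objects.get(pk=user_id)
    application = user.get_profile().application()
    attachment_name = request.GET.get('name', None)
    assert attachment_name
    stream = application.fetch_attachment(attachment_name, stream=True)
    content_type = application._attachments[attachment_name]['content_type']
    # iter(callable, sentinel) yields 64 KB chunks until read() returns ''
    return HttpResponse(iter(lambda: stream.read(64 * 1024), ''),
                        content_type=content_type)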