I am trying to understand how Django ImageKit works with respect to creating thumbnail files (for example). I am using the example code:
from django.db import models
from imagekit.models import ImageSpecField
from imagekit.processors import ResizeToFill

class Profile(models.Model):
    avatar = models.ImageField(upload_to='avatars')
    avatar_thumbnail = ImageSpecField(source='avatar',
                                      processors=[ResizeToFill(100, 50)],
                                      format='JPEG',
                                      options={'quality': 60})
I am uploading the avatar image from an app. This works fine with an entry made in the Profile table and the file created in AWS S3. What I am struggling to understand is when/where/how the avatar_thumbnail is created. Do I have to do something explicit to get it stored in AWS S3 along with the avatar image? Or is the avatar_thumbnail only ever created on the fly? I need it stored somewhere for later use.
I don't get it 100%, but from what I understand, the thumbnail is produced by a generator that only runs when the thumbnail is first requested; the result is then cached.
My personal experience suggests so as well. I created a dummy instance of the model (same code as above) through the admin interface. I then created an HTML page that displays the thumbnail with a template tag (<img src="{{ instance.avatar_thumbnail.url }}">). Checking my folders, no images had been generated so far. Then I launched a server and navigated to that page. It took unusually long to load on the first try (an indication that the thumbnails were being created), but subsequent loads were fast. And the files were there.
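For example, in a shell session against the Profile model from the question, nothing is written to storage until the spec file is first accessed (a sketch of the behaviour I observed):

profile = Profile.objects.first()
# At this point no thumbnail file exists in storage yet.
url = profile.avatar_thumbnail.url  # first access generates and caches the file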
By default, ImageKit generates ImageSpecField images when they are needed, not when the model object is created. To change this behaviour, you can use cache file strategies. The default value of IMAGEKIT_DEFAULT_CACHEFILE_STRATEGY is JustInTime; it can be changed to Optimistic, which creates images on model object creation, or to a custom strategy.
Moreover, you can set a different strategy for an individual ImageSpecField by passing the cachefile_strategy parameter.
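For example (a sketch following the dotted-path convention from the ImageKit docs), globally in settings:

# settings.py -- generate spec files eagerly instead of on first access
IMAGEKIT_DEFAULT_CACHEFILE_STRATEGY = 'imagekit.cachefiles.strategies.Optimistic'

or per field, in the Profile model from the question:

avatar_thumbnail = ImageSpecField(
    source='avatar',
    processors=[ResizeToFill(100, 50)],
    format='JPEG',
    options={'quality': 60},
    cachefile_strategy='imagekit.cachefiles.strategies.Optimistic',
)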
I do not really understand how the database works in production.
My stack:
Django
Heroku
AWS S3
PostgreSQL on Heroku
Users can generate images in my app. The images are saved to AWS S3, and in one feature I want to retrieve the last generated image.
Below is the model the images are saved in.
models.py:
class imgUploadModel(models.Model):
    # Note: an AutoField should not be given default=True; the database assigns the value
    auto_increment_id = models.AutoField(primary_key=True)
    image = models.ImageField(null=True, blank=True, upload_to="images/")
And here is the view where the image is retrieved again and used in some features.
view.py:
imgname = imgUploadModel.objects.all().last().image
As you can see, I use .last() to get the latest image that was generated.
Now to my questions:
In production, could it be that one user sees another user's images? Or how do the dynos (from Heroku) separate the sessions?
Since the AWS S3 bucket is just storage without any division by user, I assume that one user can see another user's images, especially when user A creates an image and user B clicks on 'latest image'.
If that is so, how can I set up dynos, buckets, or anything else to prevent this behaviour?
I just do not really understand it from a logical point of view.
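For reference, this is the kind of per-user separation I imagine I need, though I'm not sure it's the right approach (the owner field is my guess, not code I have):

from django.conf import settings
from django.db import models

class imgUploadModel(models.Model):
    auto_increment_id = models.AutoField(primary_key=True)
    # Hypothetical: tie each image to the user who created it
    owner = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    image = models.ImageField(null=True, blank=True, upload_to="images/")

# In the view, only look at the current user's images:
imgname = imgUploadModel.objects.filter(owner=request.user).last().image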
Hi everyone. I decided to use Cloudinary for storing images.
This is my model's image field:
avatar = models.ImageField(upload_to='avatar/', default='avatar/default.png', blank=True)
Everything works fine for me, but I have one little issue.
When I upload an image from the admin panel, Cloudinary uploads it to my Cloudinary folder 'avatar' under a modified name, for example 'july2022_kda4th' or 'july2022_aidkdk'.
But when I upload the same image from the admin panel of another database (production), Cloudinary stores it under yet another name. So I end up with two copies of the same image in Cloudinary. It's not convenient.
How can I fix it?
By default, if you don't supply a public_id in the upload API call, a random string is assigned to the asset. You can read more here: https://cloudinary.com/documentation/upload_images#public_id.
It sounds like you want to use the asset filename as the public_id, so what you can do is:
- Set use_filename=true and unique_filename=false in the upload call, as described in this link.
Or, if the above is not working, you can:
- Create an upload preset (https://cloudinary.com/documentation/upload_presets)
- Enable the "Use filename or externally defined Public ID" option
- Disable "Unique filename"
- Set this preset as the default for the upload API/UI (https://cloudinary.com/documentation/upload_presets#default_upload_presets) or include this upload_preset in your upload call
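In Python, the first option would look something like this (a sketch; the credentials, file path, and folder are placeholders):

import cloudinary
import cloudinary.uploader

# Placeholder credentials; use your own cloud settings
cloudinary.config(cloud_name='my-cloud', api_key='...', api_secret='...')

result = cloudinary.uploader.upload(
    'avatar/default.png',
    folder='avatar',
    use_filename=True,      # keep the file's name as the base of the public_id
    unique_filename=False,  # don't append random characters to it
)
print(result['public_id'])  # e.g. 'avatar/default'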
I need to give the admin the feature of uploading an image for an ImageField using AJAX, and then crop the portion of his choice (with a predefined dimension ratio or resolution) and then save the cropped image in the database.
I tried django-image-cropping and django-ajaximage for this.
# Using django-image-cropping
from django.db import models
from image_cropping import ImageRatioField

class Alumnus(models.Model):
    photo = models.ImageField(null=True, blank=True)
    cropped_photo = ImageRatioField('photo', '430x360')
# Using django-ajaximage
from django.db import models
from ajaximage.fields import AjaxImageField

class Alumnus(models.Model):
    photo = AjaxImageField(
        upload_to='alumni_photos',
        max_height=400,
        max_width=400,
        crop=True,
    )
django-ajaximage uploads an image using AJAX, but it doesn't allow the admin to choose which part of the image to crop. django-image-cropping crops an image in two steps: first we upload an image and save it to the database, then we reopen the object, select the crop portion, and save it again, which I feel is unnecessarily cumbersome. Any suggestions?
It looks like you'll need a JS library in the browser that does the actual cropping. Then you can use AJAX to send it to the server.
DarkroomJS might be just what you need. It uses the HTML5 canvas to do the image editing in browser. It's actually got a few more features than you need, but it should get the job done.
The django-client-side-image-cropping library crops the image on the client side (using the Croppie JavaScript library) to a specific size. It is compatible with django-admin sites. It does not use AJAX; it uses InMemoryUploadedFile to temporarily store the original file.
django-cropper-image is an app I made for client-side cropping and compression of uploaded images in Django, with the help of cropper.js. GitHub link: django-cropper-image.
from django.db import models
from django_cropper_image.fields import ImageCropperField

class Images(models.Model):
    image = ImageCropperField(upload_to='image', max_length=255)
I'm implementing an image upload feature for my Django app (plain Django 1.4 , NOT the non-rel version) running on Google App Engine. The uploaded image is wrapped in a Django model which allows the user to add attributes like a caption and search tags.
The upload is performed by creating a Blobstore upload url through the function call blobstore.create_upload_url(url). The function argument is the url to which the Blobstore redirects when the upload is complete. I want this to be the url of the default Django form handler that performs the save/update of the model wrapping the image, so I don't have to duplicate default Django behaviour for form validation, error reporting, and database updates.
I tried supplying reverse('admin:module_images_add') to create_upload_url(), but this doesn't work: it throws an [Errno 30] Read-only file system exception. I presume this originates from the default Django form handler trying to upload the file the standard Django way, and then hitting the brick wall of Google App Engine not allowing access to the file system.
At the moment, the only way I can see to get this working without duplicating code is by strictly separating the two processes: one for defining an image model instance and a second for uploading the actual image. Not very intuitive.
See also this question and answer which I posted earlier.
Any suggestions on how to get this working using one form and reusing Django default form handlers?
EDIT:
I've been reading up on decorators (I'm relatively new to Python) and, from what I've read, decorators appear to be able to modify the behaviour of existing Python code. Would it be possible to change the runtime behaviour of the existing form handler to solve the above using a decorator? I would obviously have to (1) develop the decorator and (2) attach it to the default handler. I'm not sure whether (2) is possible, as it has to be done at runtime; I cannot patch the Django code running on GAE...
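For illustration, this is the general shape of the wrapper I have in mind (a generic sketch, nothing GAE-specific):

import functools

def blobstore_aware(handler):
    # Wrap an existing handler function without touching its source
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        # ...adjust the file-upload behaviour here before delegating...
        return handler(*args, **kwargs)
    return wrapper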
Well, I finally managed to get this working. Here's what I did in case anyone runs into this as well:
(1) I removed the ImageFile attribute from my model. It ended up causing Django to try to do a file upload via the file system, which is not allowed on GAE.
(2) I added a Blobstore key to my model which is basically the key to the GAE BlobStore blob and is required to be able to serve the image at a later stage. On a side note: this attribute has limited length using the GAE SDK but is considerably longer in GAE production. I ended up defining a TextField for it.
(3) Use storage.py with Daniel Roseman's adaptation from this question and add the BlobstoreFileUploadHandler to the file upload handlers in your settings.py. It will ensure that the Blobstore key is present in the request for you to save with your model.
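The settings change might look like this (a sketch; the 'myproject.storage' path is an assumption based on where storage.py lives in your project):

# settings.py -- put the Blobstore handler in front of Django's defaults
FILE_UPLOAD_HANDLERS = (
    'myproject.storage.BlobstoreFileUploadHandler',
    'django.core.files.uploadhandler.MemoryFileUploadHandler',
    'django.core.files.uploadhandler.TemporaryFileUploadHandler',
)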
(4) I created a custom admin form which contains an ImageField named "image". This is required as it allows you to pick a file. The ImageField is actually "virtual" as its only purpose on the form is to allow me to pick a file for uploading. This is crucial as per (1).
(5) I overrode the render_change_form() method of my ModelAdmin class to prepare a Blobstore upload url. The upload url has two versions: one for adding new images and one for saving changes to existing ones. The upload urls are passed to the template via the context object.
(6) I modified the change_form.html to include the Blobstore upload url from (5) as the form's action.
(7) I overrode the save_model() method of my ModelAdmin:

def save_model(self, request, obj, form, change):
    # The upload handler exposes the uploaded blob under the "blobkey" name
    if "blobkey" in request.FILES:
        blob_key = request.FILES["blobkey"].blobstore_info._BlobInfo__key
        obj.blobstore_key = blob_key
    super(PhotoFeatureAdmin, self).save_model(request, obj, form, change)
This allows me to retrieve the blob key as set by the upload handler and set it as a property of my model.
For the deletion of image models, I added a function that is triggered by the model's delete signal. This keeps the Blobstore in sync with the image models in the app.
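Something along these lines (a sketch; I'm assuming the model is called PhotoFeature and stores its key in blobstore_key):

from django.db.models.signals import post_delete
from django.dispatch import receiver
from google.appengine.ext import blobstore

@receiver(post_delete, sender=PhotoFeature)  # PhotoFeature: assumed model name
def delete_blob(sender, instance, **kwargs):
    # Remove the underlying blob so the Blobstore stays in sync with the models
    if instance.blobstore_key:
        blobstore.delete(instance.blobstore_key)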
That's it. The above allows you to upload images to the blob store of GAE, where each blob is neatly wrapped in a Django model object that admin users can maintain. The good thing is that there's no need to duplicate standard Django behaviour, and the model object of the image can easily be extended with attributes in the future.
Final word: in my opinion the support for blobs in plain Django on GAE is currently very poor considering the above. It should be much easier to achieve this, without having to rely on Django non-rel code and a rather long list of modifications; alternatively Google should state something about this in their developer documents. Unless I missed something, this is undocumented territory.
I want to upload images to a gallery app. I want the user to be able to either load images normally or upload one zip file containing all the images for that gallery. The zip must then be uncompressed and all its images added to that model. This is for the admin site.
Any ideas?
You could either use the existing Django app django-photologue, which enables you to do that, or have a look at how it is implemented there: https://code.google.com/p/django-photologue/source/browse/trunk/photologue/models.py.
If you see that photologue is lacking some of the functionality you need, you could also subclass and extend photologue's models in your app!
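For a rough idea of the pattern, a zip upload can be handled with a plain form and the standard library (a sketch; GalleryImage and its gallery field are hypothetical names):

import zipfile
from django import forms
from django.core.files.base import ContentFile

class GalleryZipForm(forms.Form):
    zip_file = forms.FileField()

def import_zip(zip_file, gallery):
    # Create one image object per file in the archive
    archive = zipfile.ZipFile(zip_file)
    for name in archive.namelist():
        if name.endswith('/'):
            continue  # skip directory entries
        data = archive.read(name)
        image = GalleryImage(gallery=gallery)  # hypothetical model
        image.image.save(name, ContentFile(data), save=True)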