I'm using Cloudinary and CarrierWave to upload images from my Rails application, and it works fine.
My requirement is that a user can have only one image, so if a user already has an image and uploads a new one, the previous image should be overwritten by the new one.
My problem is that when I upload the new image to Cloudinary, the previous image is not invalidated, so the old image is still shown as the user's image.
Then I found an option called invalidate and tried to use it, but no luck.
This is my Cloudinary uploader class:
class PictureUploader < CarrierWave::Uploader::Base
  include Cloudinary::CarrierWave

  version :show do
    process :invalidate => true
  end
end
and in my view:
recipe.picture_url(:show)
but this shows the old image and not the new one. What am I missing?
When a Cloudinary image is accessed for the first time, it gets cached in the CDN.
You can indeed update the image by re-uploading while keeping the same public ID, but if you access the same URL, you might still be served the CDN-cached version of the image.
You can tell Cloudinary to invalidate the image through the CDN. However, note that the invalidate parameter should be passed as part of the upload call, not inside the version block, because invalidation is applied upon re-uploading and not on delivery. Also note that it might take up to an hour for the invalidation to fully propagate through the CDN.
It is recommended to use the version component instead. Adding the version component to the URL tells Cloudinary to force delivery of the latest version of the image while bypassing CDN-cached copies. The updated version value is returned with every upload call. For more information:
http://cloudinary.com/documentation/rails_image_manipulation#image_versions
While invalidation takes a while to propagate, the version component takes effect immediately.
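For illustration only (the question uses Rails/CarrierWave, but the upload options are the same across Cloudinary's SDKs), here is a minimal sketch of this flow with the Node SDK; the public ID and filename are hypothetical:

import { v2 as cloudinary } from 'cloudinary';

// Re-upload under the same public ID and ask Cloudinary to invalidate
// the CDN-cached copies (propagation can take up to an hour).
const result = await cloudinary.uploader.upload('new_avatar.jpg', {
  public_id: 'user_42_picture', // hypothetical public ID
  invalidate: true,
});

// A URL pinned to the version returned by the upload call bypasses
// stale CDN caches immediately.
const url = cloudinary.url('user_42_picture', { version: result.version });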
In my React Native app with AWS Amplify, I use S3Image to show images from my S3 bucket. I want to know how to display an animation (a GIF) while an S3 image loads, and then show the image once it has loaded.
<S3Image imgKey={info.Imageurl} style={styles.image}/>
The documentation has a handleOnLoad attribute that's called when the image loads.
This should work:
<S3Image imgKey={info.Imageurl} style={styles.image} handleOnLoad={() => { console.log('da-ta!') }}/>
Note: it appears that you will need to render the S3Image tag to start the image loading. I don't know what its appearance is while the image is loading, but if you want to show/hide it, you should do that via the style attribute and not by including or excluding the component from the render with conditional logic like {isImageLoaded && <S3Image.../>}.
EDIT
The above is wrong; per #sama, that attribute is not available for React Native. You could get a link to the image and then include that in an <Image source={{ uri: imageUrl }} /> tag. Probably make your own component that takes a component to display as the loading state and the key of the object in S3.
The default config for Storage.get has download = false, so you'll get a signed URL that points to the image object in the S3 bucket. Your component can show the loading image while it awaits the real image's URL, then plug that into an image tag. You'll still need to wait for the image tag to actually fetch the image, so keep it hidden until you get the image's onLoad event, then set the image to visible and hide your spinner.
// download: false is the default, so this returns a signed URL
const signedURL = await Storage.get(key)
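Putting this together, here is an untested sketch of such a component; the component name, style names, and the ActivityIndicator spinner (standing in for your GIF) are all my own:

import React, { useEffect, useState } from 'react';
import { ActivityIndicator, Image, StyleSheet } from 'react-native';
import { Storage } from 'aws-amplify';

// Shows a spinner until the signed URL has been fetched and the image
// has actually loaded. The Image stays mounted (just zero-sized) so it
// keeps loading while hidden.
export function LoadingS3Image({ imgKey }: { imgKey: string }) {
  const [url, setUrl] = useState<string | null>(null);
  const [loaded, setLoaded] = useState(false);

  useEffect(() => {
    Storage.get(imgKey, { download: false }).then((u) => setUrl(u as string));
  }, [imgKey]);

  return (
    <>
      {!loaded && <ActivityIndicator />}
      {url && (
        <Image
          source={{ uri: url }}
          style={loaded ? styles.image : styles.hidden}
          onLoad={() => setLoaded(true)}
        />
      )}
    </>
  );
}

const styles = StyleSheet.create({
  image: { width: 200, height: 200 },
  hidden: { width: 0, height: 0 },
});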
I'm trying to figure out how to purge a set of URLs without purging them one by one (which is inefficient and error-prone).
I'm also trying to figure out how to do this without purging content that we don't want purged.
Essentially, when I push updated files to the S3 bucket that my CDN points to, I want to purge any files that have changed -- but not purge files that have stayed the same.
I'm also trying to figure out the difference between setting cache headers on the CDN versus setting them on the S3 objects themselves (specifically, I think, the x-amz-meta-surrogate-key header?).
Could I somehow configure the metadata for the changed objects (when I push them to the S3 bucket) such that those files get purged and not the others?
(For what it's worth, I'm using Fastly as the CDN service.)
I'm trying to figure out how to purge a set of URLs without purging them one by one
This is typically done by setting a Surrogate-Key header on your origin's response. You can set the same key on multiple different pages to support purging all of those pieces of content at the same time with one purge request.
For example, you could have www.example.com/abc send Surrogate-Key: red blue while www.example.com/xyz sends Surrogate-Key: green yellow red.
With Fastly you can then issue a 'purge by key' request. That means you can purge the /abc page using the blue key, as it's unique to that page (although in that case you might as well just purge by URL), but you can also purge both /abc and /xyz with a single 'purge by key' request using the key red, as that key is set on the response for both pages.
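As a sketch (the service ID is made up and the token comes from your Fastly account), a purge-by-key request against the Fastly API looks roughly like this:

const SERVICE_ID = 'SU1Z0isxPaozGVKXdv0eY'; // hypothetical service ID
const FASTLY_TOKEN = process.env.FASTLY_TOKEN!;

async function purgeByKey(key: string): Promise<void> {
  const res = await fetch(`https://api.fastly.com/service/${SERVICE_ID}/purge/${key}`, {
    method: 'POST',
    headers: { 'Fastly-Key': FASTLY_TOKEN },
  });
  if (!res.ok) throw new Error(`purge failed: ${res.status}`);
}

// Purging the key "red" evicts both /abc and /xyz at once.
await purgeByKey('red');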
As for coupling this with AWS S3, there is a Fastly documentation page that might help:
You can mark content with a surrogate key and use it to purge groups of specific URLs at once without purging everything, or purging each URL singularly. On the Amazon S3 side, you can use the x-amz-meta-surrogate-key header to mark your content as you see fit, and then on the Fastly side set up a Header configuration to translate the S3 information into the header we look for. -- https://docs.fastly.com/en/guides/setting-surrogate-key-headers-for-amazon-s3-origins
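For example (my own sketch, with made-up bucket and key names), user-defined metadata set at upload time is served back by S3 with the x-amz-meta- prefix:

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({});
await s3.send(new PutObjectCommand({
  Bucket: 'my-origin-bucket', // hypothetical bucket
  Key: 'pages/abc.html',
  Body: '<html>...</html>',
  ContentType: 'text/html',
  // S3 returns this as: x-amz-meta-surrogate-key: red blue
  Metadata: { 'surrogate-key': 'red blue' },
}));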
Some other Fastly material that might help you here:
https://docs.fastly.com/en/guides/getting-started-with-surrogate-keys
https://developer.fastly.com/reference/http-headers/Surrogate-Key/
We are using S3 for our image upload process, and we approve all the images that are uploaded to our website. The process is:
Clients upload images to S3 from JavaScript at a given path (using a token).
Once we get back the URL from S3, we save the S3 path in our database with an isApproved flag set to false in the photos table.
Once an image is approved by our executive, it starts displaying on our website.
The problem is that the user may change the image (to some obscene image) after the approval process, using the generated token. Can we somehow stop users from modifying the images like this?
One temporary fix is to shorten the token lifetime (to, say, 5 minutes) and approve the images only after that interval has passed.
I saw this, but it didn't help, as versioning also replaces the already uploaded image and moves the previously uploaded image to a new versioned path.
Any better solutions?
You should create a workflow around the uploaded images. The process would be:
The client uploads the image
This triggers an Amazon S3 event notification to you/your system
If you approve the image, move it to the public bucket that is serving your content
If you do not approve the image, delete it
This could be an automated process using an AWS Lambda function to update your database and flag photos for approval, or it could be done manually after receiving an email notification via Amazon SNS. The choice is up to you.
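As a rough sketch of the approval step (bucket names are hypothetical), the Lambda function or manual tool could copy the object into the public bucket and delete the original:

import { S3Client, CopyObjectCommand, DeleteObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({});

// Move an approved image out of the private upload bucket so the
// client's upload token can no longer overwrite the public copy.
async function approveImage(key: string): Promise<void> {
  await s3.send(new CopyObjectCommand({
    CopySource: `uploads-pending/${encodeURIComponent(key)}`,
    Bucket: 'public-images',
    Key: key,
  }));
  await s3.send(new DeleteObjectCommand({ Bucket: 'uploads-pending', Key: key }));
}

// e.g. await approveImage('photos/12345.jpg');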
The benefit of this method is that nothing can be substituted once approved.
Hey, I have started using CloudFront. In my application I have images in an S3 bucket.
Users can update these images. When a user updates an image, the new image is created in the S3 bucket and replaces the older one. But afterwards, the older image still gets displayed to the user: since I am using CloudFront for GET operations, the older image is retrieved from the CloudFront cache.
So is there any technique to resolve this?
As is the case with pretty much every CDN, you have to invalidate the cache to get the CDN to start serving the new version. http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html
I would suggest reading all the content at that link under the "Adding, Removing, or Replacing Objects in a Distribution" section. Actually I would suggest reading all the CloudFront documentation so that you can understand how the service you are using works.
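A minimal sketch of creating an invalidation with the AWS SDK (the distribution ID and path are placeholders):

import { CloudFrontClient, CreateInvalidationCommand } from '@aws-sdk/client-cloudfront';

const cloudfront = new CloudFrontClient({});
await cloudfront.send(new CreateInvalidationCommand({
  DistributionId: 'E1ABCDEFGHIJKL', // hypothetical distribution ID
  InvalidationBatch: {
    CallerReference: Date.now().toString(), // must be unique per request
    Paths: { Quantity: 1, Items: ['/images/profile.jpg'] },
  },
}));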
You can resolve your issue by setting your cache TTL to 0.
Go to "AWS Dashboard | S3 | Your bucket | Your file | Edit Properties | Metadata".
There set your "Cache-Control" value to "max-age=0".
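The same header can also be set programmatically when the object is uploaded; a sketch with made-up names:

import { readFile } from 'node:fs/promises';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({});
await s3.send(new PutObjectCommand({
  Bucket: 'my-image-bucket', // hypothetical bucket and key
  Key: 'images/profile.jpg',
  Body: await readFile('profile.jpg'),
  CacheControl: 'max-age=0', // CloudFront will revalidate on every request
}));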
More information here:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html
I want to allow users to upload an image through the Django admin, crop and scale that image in memory (probably using PIL), and save it to Amazon S3 without saving the image on the local filesystem. I'll save the image path in my database, but that is the only aspect of the image that is stored locally. I'd like to integrate this special image upload widget into the normal model form on the admin edit page.
This question is similar, except the solution is not using the admin interface.
Is there a way that I can intercept the save action, do manipulations and saving of the image to S3, and then save the image path and the rest of the model data like normal? I have a pretty good idea of how I would crop and scale and save the image to S3 if I can just get access to the image data.
See https://docs.djangoproject.com/en/dev/topics/http/file-uploads/#changing-upload-handler-behavior
If images are smaller than a particular size, they will already be stored only in memory, so you can likely tune the FILE_UPLOAD_MAX_MEMORY_SIZE setting to suit your needs. Additionally, you'll have to make sure that you don't access the .path attribute of these uploaded images, because that would cause them to be written out to a file. Instead, use (for example) the .read() method. I haven't tested this, but I believe this will work:
from PIL import Image
image = Image.open(request.FILES['my_file'])
Well, if you don't want to touch the admin part of Django, then you can define the scaling in the model's save() method.
But when using Django's ImageField, Django can actually do the saving for you; it has height and width options available.
https://docs.djangoproject.com/en/dev/ref/models/fields/#imagefield
For uploading to S3 I really suggest using django-storages backends from:
https://bitbucket.org/david/django-storages/src (preferably S3-boto version)
That way you basically will not have to write any code yourself. You can just use available libraries and solutions that people have tested.