Any idea where I can remove the image fetch limit? I have images for a Magento site hosted in Amazon S3. If I change the image URL to S3, it fetches the images including all the thumbnails, but eventually it blocks the thumbnails and only fetches the main image.
But if I host the images on my other server (not Amazon S3), there is no limit: it fetches all the images again and again, regardless of how many times I refresh.
Here are examples:
www.shoptv.com.ph/active-posture.html - Image hosted in S3
dev.shoptv.com.ph/active-posture.html - Image hosted in Dreamhost
As you can see, the thumbnails are all present on Dreamhost, but with S3 they don't show up. If you use the direct permalinks of the images, though, they actually load. For example:
Amazon S3:
http://s3.shoptv.com.ph/images/601938/601938-1.jpg
http://s3.shoptv.com.ph/images/601938/601938-2.jpg
http://s3.shoptv.com.ph/images/601938/601938-3.jpg
http://s3.shoptv.com.ph/images/601938/601938-4.jpg
Dreamhost:
http://dostscholars.org/images/601938/601938-1.jpg
http://dostscholars.org/images/601938/601938-2.jpg
http://dostscholars.org/images/601938/601938-3.jpg
http://dostscholars.org/images/601938/601938-4.jpg
All the images are present. But if you host them in S3 and reference them in your media.phtml in Magento, they just won't show.
I suspect that it has something to do with my Amazon S3 settings, maybe a limit somewhere in the S3 dashboard that I can't find.
There is no image limit in Amazon S3.
Your problem is caused by the fact that the www.shoptv.com.ph/active-posture.html page is missing this HTML code (which I got from dev.shoptv.com.ph/active-posture.html):
<div class="more-views">
<h2>More Views</h2>
<ul class="product-image-thumbs">
It isn't displaying the images because there is no HTML telling the web browser to display them!
I have so far allowed users to upload images to my server and then used CF's FileGetMimeType() function to determine whether the MIME type is valid (e.g. JPG).
The problem is that FileGetMimeType() requires a full path to a file on the server. With Amazon S3, all I have is a URL to where the image is stored. To get FileGetMimeType() to work, I have to first upload the image to Amazon S3, then download it again using CFHTTP, and then determine the file type. This seems far less efficient than the old way.
So why not just upload to my own server first, determine the MIME type, and then upload to S3 right? I can't do that because some of these files are going to be huge with thousands of users uploading at the same time. We're talking videos as well as images.
Is there an efficient way to upload files to an external server i.e. Amazon S3 and then get the MIME type somehow without having to download the file all over again? Can it be done on S3's end?
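For what it's worth, the MIME types of common image formats can be recognized from the first few bytes alone, so one idea is to fetch only the head of the object with an HTTP Range request instead of downloading the whole file. A rough Python sketch of the idea (the same Range trick should be possible from CFHTTP via a header parameter; the URL in the usage comment is a placeholder):

```python
import urllib.request

# Magic numbers for a few common image formats.
MAGIC = {
    b"\xff\xd8\xff": "image/jpeg",
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"GIF87a": "image/gif",
    b"GIF89a": "image/gif",
}

def sniff_mime(head):
    """Return a MIME type guessed from the leading bytes, or None if unknown."""
    for magic, mime in MAGIC.items():
        if head.startswith(magic):
            return mime
    return None

def sniff_remote(url):
    """Fetch only the first 16 bytes of a remote object via an HTTP Range request."""
    req = urllib.request.Request(url, headers={"Range": "bytes=0-15"})
    with urllib.request.urlopen(req) as resp:
        return sniff_mime(resp.read(16))

# e.g. sniff_remote("http://s3.example.com/images/photo.jpg")
```

This transfers a handful of bytes per file rather than the whole upload, which matters once videos are involved.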
We are using S3 for our image upload process. We approve all the images that are uploaded on our website. The process is like:
Clients upload images to S3 from JavaScript at a given path (using a token).
Once we get back the URL from S3, we save the S3 path in our database with an isApproved flag of false in the photos table.
Once the image is approved by our executive, it starts displaying on our website.
The problem is that the user may change the image (to some obscene one) after the approval process, using the generated token. Can we somehow stop users from modifying the images like this?
One temporary fix is to shorten the token lifetime (e.g. to 5 minutes) and only approve images after that interval has passed.
I saw this, but it didn't help, as versioning also replaces the already-uploaded image and moves the previously uploaded one to a new versioned path.
Any better solutions?
You should create a workflow around the uploaded images. The process would be:
The client uploads the image
This triggers an Amazon S3 event notification to you/your system
If you approve the image, move it to the public bucket that is serving your content
If you do not approve the image, delete it
This could be an automated process using an AWS Lambda function to update your database and flag photos for approval, or it could be done manually after receiving an email notification via Amazon SNS. The choice is up to you.
The benefit of this method is that nothing can be substituted once approved.
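A minimal sketch of how the approve/reject steps could look in a boto3-based Lambda handler (the bucket names are placeholders, and the boto3 import is deferred so the event-parsing helper stays testable without AWS credentials):

```python
import urllib.parse

UPLOAD_BUCKET = "example-uploads"      # placeholder: private bucket receiving uploads
PUBLIC_BUCKET = "example-public-site"  # placeholder: bucket serving approved content

def keys_from_event(event):
    """Extract (bucket, key) pairs from an S3 event notification payload."""
    return [
        (r["s3"]["bucket"]["name"],
         urllib.parse.unquote_plus(r["s3"]["object"]["key"]))
        for r in event["Records"]
    ]

def approve(key):
    """Copy an approved image into the public bucket, then remove the original."""
    import boto3  # deferred; assumes AWS credentials are configured
    s3 = boto3.client("s3")
    s3.copy_object(
        Bucket=PUBLIC_BUCKET, Key=key,
        CopySource={"Bucket": UPLOAD_BUCKET, "Key": key},
    )
    s3.delete_object(Bucket=UPLOAD_BUCKET, Key=key)

def reject(key):
    """Delete an image that was not approved."""
    import boto3
    boto3.client("s3").delete_object(Bucket=UPLOAD_BUCKET, Key=key)
```

Because the public copy is made by your own code, a user's upload token for the private bucket can never touch the object actually being served.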
I have a web server running on DigitalOcean. We need to upload our clients' images (media files). Now, as far as my understanding goes, in the following case:
from django.core.files.storage import FileSystemStorage
from django.db import models

fs = FileSystemStorage(location='/media/photos')

class Car(models.Model):
    ...
    photo = models.ImageField(storage=fs)
The photo field will store the URL of the file, right?
Now we might move to Amazon (with S3 as the storage backend) in the near future. Will the model field's value change?
How do you then suggest we take care of the change (as we will then have two servers: one EC2 instance for Django, and S3 for media and static files)?
You can upload your static and media files to S3 during deployment, after running collectstatic. With multiple servers, you only need to run collectstatic on one of them; you will then be able to reach the same content from S3 on all servers.
If you take a look at https://www.caktusgroup.com/blog/2014/11/10/Using-Amazon-S3-to-store-your-Django-sites-static-and-media-files/ there is a detailed guide on how to store files on S3.
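For reference, an ImageField stores a relative file name in the database, and the storage backend builds the URL from it, so moving to S3 is mostly a settings change once django-storages is installed. A sketch of the relevant settings (bucket name and domain are placeholders):

```python
# settings.py fragment — assumes `pip install django-storages boto3`
AWS_STORAGE_BUCKET_NAME = "example-bucket"  # placeholder
AWS_S3_CUSTOM_DOMAIN = "%s.s3.amazonaws.com" % AWS_STORAGE_BUCKET_NAME

STATIC_URL = "https://%s/static/" % AWS_S3_CUSTOM_DOMAIN
MEDIA_URL = "https://%s/media/" % AWS_S3_CUSTOM_DOMAIN

# Route both static and media files through the S3 backend.
STATICFILES_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
```

Note that a field declared with an explicit storage=fs argument keeps using that storage; the argument would need to be dropped (or pointed at the S3 storage) for DEFAULT_FILE_STORAGE to apply. The stored value itself stays a relative name; only the computed .url changes.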
I searched all over and found a method to cache images on Amazon S3. Whenever I upload an image, I add Cache-Control metadata and set max-age=86400. However, on any sort of speed test site it says that my images do not have a cache applied to them.
I am not sure if it matters, but I have CloudFront linked to this S3 bucket. Sorry, but completely new to AWS.
Anyone know why my images may not be caching?
on any sort of speed test site it says that my images do not have a cache applied to them.
That isn't what this says. The screenshot says they have a short freshness lifetime, and longer than 1 week is recommended.
Your setting of max-age=86400 is only 24 hours.
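If you want a longer lifetime on objects that are already uploaded, S3 lets you rewrite an object's metadata in place with a self-copy. A boto3 sketch (bucket and key are placeholders; the helper that builds the header value is the only part that runs without credentials):

```python
def max_age(days):
    """Build a Cache-Control value for the given number of days."""
    return "max-age=%d" % (days * 86400)

def set_cache_control(bucket, key, days=30):
    """Rewrite an existing S3 object in place with a longer Cache-Control header."""
    import boto3  # deferred; assumes AWS credentials are configured
    s3 = boto3.client("s3")
    head = s3.head_object(Bucket=bucket, Key=key)
    s3.copy_object(
        Bucket=bucket, Key=key,
        CopySource={"Bucket": bucket, "Key": key},
        MetadataDirective="REPLACE",      # required to replace metadata in place
        ContentType=head["ContentType"],  # keep the original content type
        CacheControl=max_age(days),
    )

# e.g. set_cache_control("example-bucket", "images/photo.jpg", days=30)
```

A 30-day max-age comfortably clears the one-week recommendation the speed test is making.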
Hey, I have started using CloudFront. In my application I have images in an S3 bucket.
Users can update these images. When a user updates an image, the new image is created in the S3 bucket and replaces the older one. But even after the update, the older image is still displayed to the user: since I am using CloudFront for GET operations, the older image is retrieved from the CloudFront cache.
So is there any technique to resolve this?
As is the case with pretty much every CDN, you have to invalidate the cache to get the CDN to start serving the new version. http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html
I would suggest reading all the content at that link under the "Adding, Removing, or Replacing Objects in a Distribution" section. Actually I would suggest reading all the CloudFront documentation so that you can understand how the service you are using works.
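For completeness, a small boto3 sketch of what an invalidation request looks like (the distribution ID and paths in the usage comment are placeholders):

```python
import time

def invalidation_batch(paths):
    """Build the InvalidationBatch payload for the given object paths."""
    return {
        "Paths": {"Quantity": len(paths), "Items": paths},
        "CallerReference": str(int(time.time())),  # must be unique per request
    }

def invalidate(distribution_id, paths):
    """Ask CloudFront to drop its cached copies of the given paths."""
    import boto3  # deferred; assumes AWS credentials are configured
    cf = boto3.client("cloudfront")
    return cf.create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch=invalidation_batch(paths),
    )

# e.g. invalidate("E1ABCDEXAMPLE", ["/images/601938/601938-1.jpg"])
```

Keep in mind that only a limited number of invalidation paths per month are free, so for frequently changing objects, versioned file names (e.g. photo-v2.jpg) are usually the better long-term approach.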
You can resolve your issue by setting your cache TTL to 0.
Go to "AWS Dashboard | S3 | Your bucket | Your file | Edit Properties | Metadata".
There set your "Cache-Control" value to "max-age=0".
More information here:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html