I am using Amazon CloudFront with the files coming from Amazon S3. I wasn't originally setting the Amazon S3 Metadata to send a Cache-Control header, but I changed it a few weeks ago. Most of the images are showing with the new header. However, I have some that still do not.
For example, if I hit this
https://s3.us-east-2.amazonaws.com/channelnet-useast-prod/Themes/Default/Images/phone.png, I see
Cache-Control:max-age=86400
But if I go to the CloudFront URL that points to that S3 image
http://dfb8oqhjho7zs.cloudfront.net/Themes/Default/Images/phone.png, I do not.
As a test, I made a copy of the image, uploaded it to S3, set the Cache-Control header, and verified the header is set when I access it via S3
https://s3.us-east-2.amazonaws.com/channelnet-useast-prod/Themes/Default/Images/phone-matttest.png
or CloudFront
http://dfb8oqhjho7zs.cloudfront.net/Themes/Default/Images/phone-matttest.png
How do I get CloudFront to refresh whatever Amazon-side caching is going on here?
You need to clear/invalidate the CloudFront cache so that it will check your origin for updates.
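If you'd rather script the invalidation than click through the console, a minimal sketch with the AWS SDK for JavaScript (v2) might look like this; the distribution ID below is a placeholder for your own:

    import { CloudFront } from "aws-sdk";

    const cloudfront = new CloudFront();

    // Ask CloudFront to drop its cached copy so the next request re-fetches from S3.
    await cloudfront
      .createInvalidation({
        DistributionId: "E1234567890ABC", // placeholder: your distribution's ID
        InvalidationBatch: {
          CallerReference: `refresh-${Date.now()}`, // must be unique per request
          Paths: { Quantity: 1, Items: ["/Themes/Default/Images/phone.png"] },
        },
      })
      .promise();

The same invalidation can also be created by hand in the CloudFront console under the distribution's Invalidations tab.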
Related
I'm storing a static web site on Amazon S3 and delivering it via CloudFront. I pushed changes to S3, but CloudFront is still showing the old files. I created an invalidation in CloudFront with the path /*, which is supposed to clear the whole cache, but it still shows the old stuff. I'm wondering how to solve this.
Is it possible to protect data embedded on my website via Amazon CloudFront from hotlinking or other downloads? I am mainly interested in protecting webfonts from being downloaded.
Amazon CloudFront is connected to an S3 bucket.
The S3 bucket policy controls which domains are allowed to fetch files via CloudFront.
Do you think that could work?
Since you have CloudFront set up in front of your S3 bucket, you can use CloudFront signed URLs to prevent downloads by the general public.
You can put your fonts in a folder called fonts, for example, set up a separate behavior in CloudFront for any path that contains /fonts/, and there activate Restrict Viewer Access.
In your website, you will need some way to generate the signed URL only when your page is loaded, and you can give that URL a short expiry time.
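A minimal sketch of generating such a short-lived signed URL on the server, using the Signer class from the AWS SDK for JavaScript (v2); the key pair ID, key file, and font path are placeholders for your own values:

    import { CloudFront } from "aws-sdk";
    import { readFileSync } from "node:fs";

    // Signer tied to the CloudFront key pair whose public key is registered
    // with the distribution's trusted key group.
    const signer = new CloudFront.Signer(
      "K1234567890ABC", // placeholder: your CloudFront key pair ID
      readFileSync("private_key.pem", "utf8") // placeholder: matching private key
    );

    // URL for a font under the /fonts/* behavior with Restrict Viewer Access on.
    // Keep the expiry short (here 5 minutes, given as epoch seconds).
    const signedUrl = signer.getSignedUrl({
      url: "https://dfb8oqhjho7zs.cloudfront.net/fonts/MyFont.woff2",
      expires: Math.floor(Date.now() / 1000) + 300,
    });

    console.log(signedUrl);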
Whenever I make a change to my S3 bucket my CloudFront doesn't update to the new content. I have to create an invalidation every time in order to see the new content. Is there another way to make CloudFront load the new content whenever I push content to my S3 bucket?
Let me answer your questions inline.
Whenever I make a change to my S3 bucket my CloudFront doesn't update
to the new content. I have to create an invalidation every time in
order to see the new content.
Yes, this is the default behavior in CloudFront unless you have defined the TTL values to be zero (0).
Is there another way to make CloudFront load the new content whenever
I push content to my S3 bucket?
You can automate the invalidation using AWS Lambda. To do this:
Create an S3 event trigger to invoke a Lambda function when you upload any new content to S3.
Inside the Lambda function, call the AWS CloudFront SDK's createInvalidation method to invalidate the distribution (see the sketch after the note below).
Note: Make sure the Lambda function has an IAM role with permission to create invalidations in CloudFront (the cloudfront:CreateInvalidation action).
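As a sketch of both steps together, a Node.js Lambda handler along these lines should work, assuming the distribution ID is passed in via an environment variable (DISTRIBUTION_ID is a name I've made up for illustration):

    import { CloudFront } from "aws-sdk";
    import type { S3Event } from "aws-lambda";

    const cloudfront = new CloudFront();

    export const handler = async (event: S3Event): Promise<void> => {
      // One invalidation path per uploaded object; keys arrive URL-encoded in the event.
      const paths = event.Records.map(
        (r) => "/" + decodeURIComponent(r.s3.object.key.replace(/\+/g, " "))
      );

      await cloudfront
        .createInvalidation({
          DistributionId: process.env.DISTRIBUTION_ID!, // hypothetical env var
          InvalidationBatch: {
            CallerReference: `s3-upload-${Date.now()}`, // must be unique per request
            Paths: { Quantity: paths.length, Items: paths },
          },
        })
        .promise();
    };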
Presumably you either created the S3 origin with cache settings or added Cache-Control headers to your objects' S3 metadata (cache headers come from object metadata, not from the bucket policy).
If you inspect the request in your browser's developer tools, you can see which cache headers came back on the response and why the object is being cached.
You can find a list of HTTP cache headers and how they are used here.
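For example, a quick way to check those response headers from Node (18+, which has fetch built in); the URL is just the one from the earlier question:

    // Print the caching-related response headers CloudFront returns for an object.
    const url = "https://dfb8oqhjho7zs.cloudfront.net/Themes/Default/Images/phone.png";

    const res = await fetch(url);
    console.log("cache-control:", res.headers.get("cache-control"));
    console.log("x-cache:", res.headers.get("x-cache")); // "Hit from cloudfront" or "Miss from cloudfront"
    console.log("age:", res.headers.get("age")); // seconds the object has sat in the edge cache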
Hope it helps.
By default, CloudFront keeps objects in its edge caches for 24 hours.
What you can do, as the docs suggest, is use versioned file names.
BUT:
New versions of the files for the most popular pages might not be served for up to 24 hours because CloudFront might have retrieved the files for those pages just before you replaced the files with new versions
So I guess your best bet is invalidation.
EDIT: to get the benefit of versioning, you have to actually change the file name for each new version; CloudFront then treats the renamed file as a brand-new object and fetches it fresh.
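One common way to do that is to derive the name from a hash of the file's contents, so every change automatically produces a new object key. The helper below is a hypothetical sketch, not part of any AWS tooling:

    import { createHash } from "node:crypto";
    import { readFileSync } from "node:fs";
    import { basename, extname } from "node:path";

    // Hypothetical helper: "phone.png" -> "phone.3f786850.png".
    // A changed file hashes differently, so CloudFront sees a new key
    // and fetches it from the origin instead of serving the stale copy.
    function versionedKey(filePath: string): string {
      const hash = createHash("sha256")
        .update(readFileSync(filePath))
        .digest("hex")
        .slice(0, 8);
      const ext = extname(filePath);
      return `${basename(filePath, ext)}.${hash}${ext}`;
    }

    console.log(versionedKey("phone.png"));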
I have an S3 bucket containing some files. This serves as the origin for a CloudFront distribution.
Whenever I access these files via Chrome, Chrome automatically adds a header saying it will accept a gzipped response. This leads CloudFront to redirect the request to the S3 bucket, so I lose the benefit CloudFront provides, rendering it useless. These are files in a proprietary format.
If I make a curl request and omit the Accept-Encoding header, CloudFront serves me the file instead of redirecting. This is what I want in Chrome too, but I can't force Chrome to change its request headers without changing browser flags, which is not an option in my use case.
How do I get Cloudfront to serve me the file instead of redirecting me to the bucket?
I've set permissions on the bucket, and the CORS headers are returned for files in the root of the bucket but not for files inside a folder. Is there a way to get CORS working on files inside folders as well?
If you have configured CORS on a bucket, then that configuration is active for all files in the bucket; S3 has no per-folder CORS options. If it appears that files outside the root don't have CORS active, you are almost certainly seeing cached responses coming from somewhere other than S3 (which does not cache anything itself).
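For reference, a bucket-wide CORS configuration can be applied like this with the AWS SDK for JavaScript (v2); the bucket name and allowed origin are placeholders:

    import { S3 } from "aws-sdk";

    const s3 = new S3();

    // CORS rules apply to every object in the bucket, whatever its key prefix
    // ("folder") is; there is no way to scope them to a subset of keys.
    await s3
      .putBucketCors({
        Bucket: "my-bucket", // placeholder bucket name
        CORSConfiguration: {
          CORSRules: [
            {
              AllowedMethods: ["GET", "HEAD"],
              AllowedOrigins: ["https://example.com"], // placeholder: your site's origin
              AllowedHeaders: ["*"],
              MaxAgeSeconds: 3000,
            },
          ],
        },
      })
      .promise();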
I know this is old, but it might help. The missing CORS headers might be coming from CloudFront because of a cached response.
To remove the cached files from a CloudFront distribution, go directly to the distribution in the AWS Console -> Invalidations and create an invalidation with the path /*. This will remove all the cached objects from the root down.