Does CloudFront support question marks in URLs to fetch a new resource? - amazon-web-services

We are porting a website to an AWS CloudFront-backed infrastructure which serves static resources from an S3 bucket.
We access a resource like this:
http://example.com/static/foo.css?hash=1
Our CMS generates a new hash when a file changes (cache busting).
So we upload the changed file foo.css to the S3 bucket and access
http://example.com/static/foo.css?hash=2
But the old contents are still displayed. It seems to be cached in CloudFront?
Can this be avoided?
Or does the CMS need to be modified?

You have to configure your CloudFront distribution to forward query strings and include them in the cache key; by default CloudFront ignores the query string for S3 origins, so ?hash=2 hits the same cached object as ?hash=1. This is covered in the CloudFront documentation on caching based on query string parameters.
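As a sketch of that configuration using the cache-policy API via boto3 (the policy name and TTL values here are placeholder assumptions, not values from the question), a cache policy that includes all query strings in the cache key could look like:

```python
# Sketch: a CloudFront cache policy whose cache key includes all query
# strings, so foo.css?hash=1 and foo.css?hash=2 are cached as distinct
# objects. Name and TTLs are placeholders; adjust for your distribution.

def cache_busting_policy_config(name="include-query-strings"):
    return {
        "Name": name,
        "Comment": "Include all query strings in the cache key (cache busting)",
        "MinTTL": 0,
        "DefaultTTL": 86400,
        "MaxTTL": 31536000,
        "ParametersInCacheKeyAndForwardedToOrigin": {
            "EnableAcceptEncodingGzip": True,
            "EnableAcceptEncodingBrotli": True,
            "HeadersConfig": {"HeaderBehavior": "none"},
            "CookiesConfig": {"CookieBehavior": "none"},
            # The key line: every query string becomes part of the cache key.
            "QueryStringsConfig": {"QueryStringBehavior": "all"},
        },
    }

# To create the policy for real (requires AWS credentials):
#   import boto3
#   boto3.client("cloudfront").create_cache_policy(
#       CachePolicyConfig=cache_busting_policy_config())
```

Once a policy like this is attached to the distribution's cache behaviour, the CMS's existing ?hash= scheme starts working without any CMS changes.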

Related

Protect webfonts served via Amazon CloudFront from download

Is it possible to protect data embedded in my website and served via Amazon CloudFront from hotlinking or other downloads? I am mainly interested in protecting webfonts from being downloaded.
Amazon CloudFront is connected to an S3 bucket.
An S3 bucket policy controls which domains may fetch files via CloudFront.
Do you think that could work?
Since you have CloudFront set up in front of your S3 bucket, you can use CloudFront signed URLs to prevent downloads by the general public.
You can put your fonts in a folder called fonts, for example, and set up a separate behaviour in CloudFront for any path that contains /fonts/; in that behaviour, activate Restrict Viewer Access.
In your website, you will need some way to generate the signed URL only when your webpage is loaded, and you can set a short expiry time for this URL.

CloudFront for website with custom CMS hosted out of AWS

I've got a website with a custom CMS hosted outside of AWS.
How can I set up CloudFront?
My idea is to upload the current content to an S3 bucket and set CloudFront up with that bucket as the origin.
This also requires some changes in the CMS itself:
Replace static content (image) URLs with the ones from the S3 bucket.
Check if a file exists on S3; if so use the S3 copy, if not use the copy on the local server.
Interact with the S3 bucket API (uploading/deleting):
When uploading a new file, upload it to the S3 bucket too.
When deleting a file, delete it from S3 too.
Delete only from the bucket.
Is there any other, easier way? Maybe I'm just overthinking and cannot see the simplest solution.

AWS CloudFront Not Updating

Whenever I make a change to my S3 bucket, my CloudFront distribution doesn't update to the new content. I have to create an invalidation every time in order to see the new content. Is there another way to make CloudFront load the new content whenever I push content to my S3 bucket?
Let me answer your questions inline.
Whenever I make a change to my S3 bucket my CloudFront doesn't update
to the new content. I have to create an invalidation every time in
order to see the new content.
Yes, this is the default behavior in CloudFront unless you have set the TTL values to zero (0).
Is there another way to make CloudFront load the new content whenever
I push content to my S3 bucket?
You can automate the invalidation using AWS Lambda. To do this:
Create an S3 event trigger that invokes a Lambda function when you upload any new content to S3.
Inside the Lambda function, write code that invalidates the CloudFront distribution using the AWS SDK's CloudFront createInvalidation operation.
Note: make sure the Lambda function has an IAM role whose policy permits creating an invalidation in CloudFront.
Either you created the S3 origin with cache settings, or cache headers are set on the S3 objects themselves (as object metadata; they are not part of the bucket policy).
If you inspect the request in your browser's developer tools, you can check the cache headers on the response and see why it is being cached.
You can find a list of HTTP cache headers and how they are used in the HTTP caching documentation.
Hope it helps.
CloudFront keeps cached copies at edge locations until their TTL expires (24 hours by default).
What you can do, as suggested by the docs, is use versioned file names.
BUT :
New versions of the files for the most popular pages might not be served for up to 24 hours because CloudFront might have retrieved the files for those pages just before you replaced the files with new versions
So I guess your best bet is invalidation.
EDIT: you can avoid the stale-cache problem with versioned files if the version is part of the file name itself (e.g. foo.v2.css) rather than only a query string, since each new name is a brand-new object to CloudFront.
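A common way to produce such versioned file names is to embed a short content hash, so the name changes exactly when the content does. A small sketch:

```python
import hashlib
from pathlib import Path

def versioned_name(path):
    """foo.css -> foo.3d2e1f4a.css, where the hash is derived from content.

    A new name appears only when the file's bytes change, so CloudFront
    treats each revision as a brand-new, never-before-cached object.
    """
    p = Path(path)
    digest = hashlib.md5(p.read_bytes()).hexdigest()[:8]
    return f"{p.stem}.{digest}{p.suffix}"
```

The build or deploy step would upload the file under its versioned name and rewrite references in the HTML to match; no invalidations are needed because old URLs are simply never requested again.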

Moving files without breaking links in AWS s3

We have a website which uses an S3 bucket for storing different files (say, a policy repo). We want to reorganize the files into proper folders under the S3 bucket without breaking the links to these objects on our website. Is there a way to achieve this in S3?
You could put a CloudFront distribution in front of your S3 bucket and use Lambda@Edge to rewrite the request URLs, mapping them to the new folder paths without breaking the old URLs.
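A sketch of such a Lambda@Edge origin-request handler in Python. The old-to-new prefix mapping is an assumed example, not taken from the question:

```python
# Assumed example mapping: files moved from /policy-repo/ into a new folder.
PREFIX_MAP = {
    "/policy-repo/": "/policies/archive/",
}

def handler(event, context):
    """Lambda@Edge origin-request handler: rewrite old paths to new ones.

    CloudFront passes the viewer's request in; mutating request["uri"]
    changes which object is fetched from S3 while the public URL stays
    unchanged, so existing links keep working.
    """
    request = event["Records"][0]["cf"]["request"]
    for old_prefix, new_prefix in PREFIX_MAP.items():
        if request["uri"].startswith(old_prefix):
            request["uri"] = new_prefix + request["uri"][len(old_prefix):]
            break
    return request
```

Attaching this as an origin-request trigger (rather than viewer-request) means the rewrite runs only on cache misses, which keeps the per-request cost down.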
From Configuring a Webpage Redirect - Amazon Simple Storage Service:
If your Amazon S3 bucket is configured for website hosting, you can redirect requests for an object to another object in the same bucket or to an external URL.
You set the redirect by adding the x-amz-website-redirect-location property to the object metadata. The website endpoint then interprets the object as a 301 redirect. To redirect a request to another object, you set the redirect location to the key of the target object.
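With boto3 this could be sketched as re-uploading the old key as an empty object carrying the redirect metadata (put_object exposes the header as the WebsiteRedirectLocation parameter; the bucket and key names below are examples, and `client` is a boto3 S3 client):

```python
# Sketch: replace the old object with a zero-byte marker whose
# x-amz-website-redirect-location metadata points at the new key.
# Note: only the S3 *website endpoint* honours this redirect; the
# plain REST endpoint ignores it.

def install_redirect(client, bucket, old_key, new_key):
    return client.put_object(
        Bucket=bucket,
        Key=old_key,
        Body=b"",
        WebsiteRedirectLocation=f"/{new_key}",
    )
```

You would run this once per moved object after copying the content to its new key, leaving a trail of lightweight redirect markers at the old paths.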

Redirecting request for non-existent s3 bucket to different bucket

There is a bucket with some world-readable content, which is referenced from many places. We have migrated the contents of the bucket to a new bucket.
Now we need to remove the old bucket, but we cannot change the endpoints/references to the objects which were generated for the old bucket.
for example:
Old bucket name: xxx-yyy
Sample endpoint : https://s3.amazonaws.com/xxx-yyy/facebook.png
New bucket name: abc-pqr
Sample endpoint : https://s3.amazonaws.com/abc-pqr/facebook.png
Any request coming to the non-existent xxx-yyy bucket should redirect to the abc-pqr bucket. We do not want to change the endpoints; we just want to redirect requests for those objects to the new bucket.
It appears that you are referencing files directly in Amazon S3 via the REST endpoint. This format of URL is not able to redirect requests.
Amazon S3 buckets have a capability called static website hosting, which gives additional capabilities such as default index and error pages, plus the ability to set up a webpage redirect.
However, this requires a different URL to access your objects (eg http://xxx-yyy.s3-website-us-west-2.amazonaws.com/facebook.png). Given that you are unable to change your existing links, this would not be an option.
Your only option would be to create web pages in the original S3 bucket that use an HTML redirect to forward browsers to the new location.
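Such redirect pages could be generated like this (a sketch using the bucket names from the question; note that a meta-refresh only helps for browser navigations, not for images embedded via img tags, which would simply render the HTML as a broken image):

```python
# Sketch: for each old key, upload a small text/html page at that key in the
# old bucket that forwards the browser to the same key in the new bucket.
NEW_BASE = "https://s3.amazonaws.com/abc-pqr"

def redirect_page(key):
    """Build an HTML meta-refresh page pointing at the new bucket."""
    target = f"{NEW_BASE}/{key}"
    return (
        "<!DOCTYPE html><html><head>"
        f'<meta http-equiv="refresh" content="0; url={target}">'
        f'</head><body><a href="{target}">Moved to {target}</a></body></html>'
    )
```

Each page would be uploaded with Content-Type text/html over the old object's key, so navigating to the old URL forwards the visitor to the new bucket.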
With your current setup that's not possible. If you had used AWS CloudFront in front of the bucket, you could have achieved this easily.