Amazon CloudFront to serve dynamic Vue.js build files - amazon-web-services

I am using an Amazon EC2 server, Route 53 DNS, and my website is hosted at Namecheap (with SSL).
I figured that since Vue build files are usually big and they take much time to download, it'd be better if I could serve them using a CDN server.
However, these are not static files in the sense that every time I change my Vue source code, rebuild, and upload to my server, the files' contents change and so do their names.
So is there an option for a CDN that searches for files and then serves them if they are found?
I've seen S3 mentioned everywhere along with CloudFront, but it seems that it only supports uploading specific files, and uploading my Vue build files every time I change my code is inconvenient.

Configure a TTL of 0 in CloudFront for these files. You also have the option to configure a different TTL for different paths on the origin.
With a TTL of 0, CloudFront will not serve stale files when the content changes. For every incoming request, CloudFront checks with the origin whether the file contents have changed and automatically refreshes its cache if the original content has.
This is an ideal TTL value for dynamically generated content. If you expect a file to change infrequently, you can configure a somewhat higher TTL.
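Under the hood, a TTL of 0 makes CloudFront revalidate with the origin using a conditional request; if the origin replies 304 Not Modified, the cached copy is reused. A rough sketch of that exchange from the command line (the URL and ETag below are placeholders, not real values):

```shell
# Hypothetical distribution URL and ETag -- substitute your own.
# With TTL 0, CloudFront sends a conditional GET like this on each
# request; a "304 Not Modified" reply lets it keep serving its cache.
curl -sI -H 'If-None-Match: "abc123"' \
  https://d111111abcdef8.cloudfront.net/js/app.js
```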

Related

Continuous Delivery issues with S3 and AWS CloudFront

I'm building out a series of content websites, and I've built a working CodePipeline that allows me to push edits to HTML files on github that instantly reflect in the S3 bucket, and consequently on the live website.
I created a cloudfront distro to get HTTPS for my website. The certificate and distro work fine, and it populates with my index.html in my S3 bucket, but the changes made via my github pipeline to the S3 bucket are reflected in the S3 bucket but not the CloudFront Distribution.
From what I've read, the edge locations used in cloudfront don't update their caches super often, and when they do, they might not update the edited index.html file because it has the same name as the old version.
I don't want to manually rename my index.html file in S3 every time one of my writers needs to post a top 10 Tractor Brands article or implement an experimental, low-effort clickbait idea, so that's pretty much off the table.
My overall objective is to build something where teams can quickly add an article with a few images to the website that goes live in minutes, and I've been able to do it so far but not with HTTPS.
If any of you know a good way of instantly updating CloudFront distributions without changing file names, that would be great. Otherwise I'll probably have to start over, because I need my sites secured and the ability to update them instantly.
You people are awesome. Thanks a million for any help.
You need to invalidate files from the edge caches. It's a simple and quick process.
You can automate the process yourself in your pipeline, or you could potentially use a third-party tool such as aws-cloudfront-auto-invalidator.
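As a sketch, the invalidation step can be a single AWS CLI call at the end of the pipeline (the distribution ID below is a placeholder):

```shell
# Placeholder distribution ID -- replace with your own.
DISTRIBUTION_ID="E1234567890ABC"

# Invalidate everything after a deploy. For frequent deploys, pass only
# the changed paths instead: the first 1,000 invalidation paths per
# month are free, after which each path is billed.
aws cloudfront create-invalidation \
  --distribution-id "$DISTRIBUTION_ID" \
  --paths "/*"
```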

How can I clear the cache on a static Cloud Storage website after bucket file changes?

I have a static website that I'm serving through Google Cloud. This is done by storing the static files in a publicly-accessible bucket, and using that bucket as the backend for an HTTPS load balancer. (The CDN option for the load balancer is NOT selected.)
The site loads fine, but my problem is that when I update the bucket contents, those changes take an unpredictable amount of time to reflect in the browser. I am explicitly refreshing, and I am also trying while the Chrome console is open, with "disable cache" selected in the Network tab.
I have ensured that the bucket code is actually updated by navigating to the "object details" page in Cloud Storage for the javascript file in question, and visiting the provided "Link URL". I grep it for my changes and I see them. Then I visit my website, view source, open the linked js file in a new tab, grep for my changes, and do not see them. So they are in the bucket, but being cached somewhere.
I'm not sure if the caching I'm experiencing is happening in the browser or at some layer in Google Cloud. But how can I make it so that when I change the bucket contents, I can see those changes immediately in my browser? How can I ensure the cache, wherever it's happening, clears after each bucket update?
Here is an extract from the documentation:
Note also that because objects can be cached at various places on the Internet there is no way to force a cached object to expire globally (unlike the way you can force your browser to refresh its cache). If you want to prevent serving cached versions of publicly readable objects, set "Cache-Control:no-cache, max-age=0" on the object.
So I recommend setting the cache max-age to 0 for your file if you don't want any latency when you update your bucket. However, it's a trade-off between low serving latency/low cost (fewer reads from storage, less egress to pay for) and low update latency.
It all depends on your use case!
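For example, assuming the gsutil CLI and placeholder bucket/object names, that header can be set on an existing object like this:

```shell
# Placeholder bucket and object -- replace with your own.
# Serve the object with no-cache so bucket updates show up immediately,
# at the cost of more storage reads and egress.
gsutil setmeta -h "Cache-Control:no-cache, max-age=0" \
  gs://my-bucket/js/app.js
```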
It seems there is a dedicated command for one-time cache invalidation:
# Find the name of URL mapping.
gcloud compute url-maps list
# Invalidate some path.
gcloud compute url-maps invalidate-cdn-cache prod-lb --path='/test/*'
Invalidation takes a long time (tens of minutes!), so the additional --async flag may be desirable, and you can check the job status with:
gcloud compute url-maps list-cdn-cache-invalidations --global prod-lb
https://cloud.google.com/cdn/docs/invalidating-cached-content

AWS CloudFront Root Object Update Latency

I deploy a single-page application website by pushing assets to AWS S3 and serving the files through CloudFront. As per this answer, it isn't possible for me to serve files directly from S3 using SSL under my own domain, so I don't have a choice about using CloudFront if I want to serve files in this way.
When I redeploy, I generate a new timestamped root HTML file (which itself links to the updated JS and CSS bundles), push it to S3 along with everything else, and then make that new file the new Default Root Object for the CloudFront distribution via the AWS console. This prevents CloudFront from caching everything and hiding the updates.
The problem is that, occasionally, CloudFront takes a long time to update the root object. As I write this, I'm tabbing over hitting refresh every 60 seconds waiting for an important change to hit production. CloudFront shows the correct (newest) root object via the web console but it also shows "Status: In Progress."
At times, this delay is barely noticeable, and other times it's quite long. Today it is approaching an hour delay.
How can I avoid this? I'm open to both changes to this deployment method using S3 and CloudFront OR switching to an alternative platform that is known to handle this use case better.
This is how I solved it.
Set the caching TTL to 0 seconds in CloudFront.
I also noticed that the browser caches the served document.
I had to add HTTP headers on the S3 bucket so that every object is served with no-cache directives:
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0
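One way to attach such headers, assuming the AWS CLI and a placeholder bucket name, is to set Cache-Control at upload time (S3 stores the header with the object and returns it on every GET):

```shell
# Placeholder bucket -- replace with your own. The Cache-Control value
# is stored as object metadata and sent with every response, so
# browsers and CloudFront both revalidate instead of reusing a copy.
aws s3 cp index.html s3://my-bucket/index.html \
  --cache-control "no-cache, no-store, must-revalidate"
```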
Documentation on Object Expiration:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html
If you set your cache to a long duration and would like to remove the cached copy from CloudFront, you can run an invalidation on the root object.
Hope it helps.

AWS CloudFront website not being updated

Our website hosted on CloudFront has not been updated for almost 24 hours now.
The CloudFront invalidation updated a few of the files. I can see on S3 that all of the files have been updated. Performing a GET on these files, I can see the timestamps are all correct except for one of them (a minified JavaScript file called app.min.js), which still has an old timestamp. However, looking at S3, the app.min.js file has the correct updated timestamp. Even forcing no-cache on the request, app.min.js still reflects the old file.
Does anyone have any suggestions on what could be happening here?
Your files are still being cached somewhere. If it's not cached in CloudFront, it may be cached in your browser or somewhere else between CloudFront and you.
Invalidating the CloudFront distribution does not invalidate the cache in your browser. So make sure you're using a fresh browser to test this. Better yet, use curl.
Invalidate CloudFront again
Restart your browser
Use a different browser
Use a different computer
Use curl to avoid local caches
Do anything to eliminate the possibility of hitting a cached version.
Also:
Adding "no cache" to a file on S3 won't have any effect on the cached version in CloudFront. You'll need to invalidate the cache again to force CloudFront to get the new version.
The default TTL for CloudFront is 24 hours. So once it hits 24 hours, it should re-get the file from the origin. You can look at the headers to see how long it will be before the TTL runs out.
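For instance, you can inspect those headers with curl (the distribution URL below is hypothetical):

```shell
# Hypothetical distribution URL -- replace with your own. Look at
# Age (seconds the object has sat in the edge cache), Cache-Control
# (the TTL policy), and X-Cache ("Hit from cloudfront" vs "Miss")
# to see whether the edge answered and how old its copy is.
curl -sI https://d111111abcdef8.cloudfront.net/app.min.js
```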

Setting up Amazon Cloudfront without S3

I want to use CloudFront to serve images and CSS for my static website. I have read countless articles showing how to set it up with Amazon S3, but I would like to just host the files on my host and use CloudFront to speed up delivery of those files; I'm just unsure how to go about it.
So far I have created a distribution on CloudFront with my Origin Domain and CName and deployed it.
Origin Domain: example.me, CNAME: media.example.me
I added the CNAME for my domain:
media.mydomain.com with destination xxxxxx.cloudfront.net
Now this is where I'm stuck. Do I need to update the links in my HTML to that CNAME? So if the stylesheet was http://example.me/stylesheets/screen.css, do I change that to http://media.example.me/stylesheets/screen.css,
and images within the stylesheet that were ../images/image1.jpg to http://media.example.me/images/image1.jpg?
Just finding it a little confusing how to link everything it's the first time I have really dabbled in using a CDN.
Thanks
Yes, you will have to update the paths in your HTML to point to the CDN. Typically, if you have a deployment/build process, this link rewriting can be done at that time (so that development can use the local files).
Another important thing to handle here is versioning of the CSS/JS, etc. You might make frequent changes to your CSS/JS, and when you make a change, CDNs typically take 24 hours to reflect it. (Another option is invalidating files on the CDN, but this is charged explicitly and is discouraged.) The suggested method is to generate a path like media.example.me/XYZ/stylesheets/screen.css and change XYZ to a different number for each deployment (a timestamp/epoch will do). This way, with each deployment you need to invalidate only the HTML; the other files are on a new path anyway and will load fresh. This technique is generally called fingerprinting the URLs.
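A minimal sketch of that fingerprinting step, assuming a shell-based build (the domain and file names are illustrative):

```shell
# Use the deploy timestamp as the version segment ("XYZ" above).
BUILD_ID="$(date +%s)"

# Rewrite asset URLs so each deployment gets a fresh, never-yet-cached
# path; only the HTML that references them needs invalidating.
CSS_URL="http://media.example.me/${BUILD_ID}/stylesheets/screen.css"
echo "$CSS_URL"
```

In a real build, you would substitute `${BUILD_ID}` into the HTML templates at deploy time rather than echoing the URL.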
Yes, you would update the references to your CSS files to load via the CDN domain. If image paths within the CSS do not include a domain, they will also automatically load via CloudFront.