Reduce CloudFront costs when serving static S3 website over HTTPS - amazon-web-services

I maintain a 'hobby' website to experiment with AWS technologies. Because it's a pure hobby, I am trying to keep its costs as low as possible, and only use those services that are absolutely necessary.
Over the months, the website has started to generate some traffic: about 30-50 hits per day, with occasional spikes of up to 1K hits in a day.
I am using CloudFront (CF) for the main purpose of having HTTPS and having a way to connect my domain with my S3 website bucket, but the costs have been going up as a result of the increase in hits.
Obviously, at this stage, the costs are manageable (a few dollars per month), but as I said my goal is to keep costs to an absolute minimum, and CF is starting to be the lion's share of my costs.
Reviewing the CF costs in Bill Details shows me that HTTPS requests and especially Bandwidth make up the lion's share of the costs.
I am looking for a way to continue using CF for HTTPS and for pointing my domain at the S3 bucket so it is served securely, while reducing the costs resulting from requests and bandwidth.
The website is static and entirely hosted on S3. It contains:
an index.html - auto-updated every hour
10 category pages (250 KB each) - auto-updated every hour; they contain links to the detail pages
< 1,000 detail pages (100 KB each) - these are created once and then never changed again
< 1,000 images (50 KB each) - each detail page has 1 image; their behaviour is the same as the detail pages
My CF configuration is as follows:
no Origin Custom Headers
Behaviour:
Path pattern: Default (*)
Viewer protocol policy: Redirect HTTP to HTTPS
Cache Based on Selected Request Headers: Whitelist
Whitelist Headers: Referer
Object Caching: Customize
Min. TTL: 0
Max. TTL: 31536000
Default TTL: 0
Forward Cookies: None
Query String Forwarding and Caching: None
No geo restrictions
Since the majority of my CF cost is Bandwidth, it may be the page and image files that are causing this, i.e. when people load my pages and the image files are served, it adds up to 100 KB + 50 KB per page.
Based on my research on CF, I suspect that the Path Pattern and TTL parameters are what need to be optimised here to achieve a cost reduction. If someone could point me in the right direction, that would be great.

Bandwidth costs are proportional to the amount of data retrieved from your website.
Amazon S3: 9c/GB
Amazon CloudFront: 8.5c/GB to 17c/GB depending upon location
Some ideas to reduce your costs:
Change the CloudFront distribution to use Price Class 100, which only serves traffic from the lower-cost locations. Users in other locations will have slower access, but you'll save money!
Increase your default TTL so that content remains cached longer, resulting in fewer repeat requests (see the sketch after this list for setting cache lifetimes on the S3 objects themselves).
Activate and examine CloudFront Access Logs to analyse incoming traffic. It might be that a lot of requests are coming from spiders and bots. You can limit such access by creating a robots.txt file.
Reduce the filesize of images by lowering quality. The trade-off in quality might be worth the cost savings.
Make a less-popular website. That will lower your costs! :)
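A practical way to act on the TTL advice for a static S3 site is to set Cache-Control metadata on the objects themselves, since CloudFront respects origin Cache-Control headers when deciding how long to keep content cached. Below is a minimal boto3 sketch; the bucket name, keys, local paths, and lifetimes are illustrative assumptions, not taken from the question.

import boto3

s3 = boto3.client("s3")
BUCKET = "my-website-bucket"  # hypothetical bucket name

def upload(path, key, content_type, max_age):
    """Upload a file with a Cache-Control header so CloudFront
    (and browsers) may cache it for up to max_age seconds."""
    with open(path, "rb") as body:
        s3.put_object(
            Bucket=BUCKET,
            Key=key,
            Body=body,
            ContentType=content_type,
            CacheControl=f"max-age={max_age}",
        )

# index and category pages change hourly: cache only briefly
upload("local/index.html", "index.html", "text/html", 300)

# detail pages and images never change: cache for up to a year
upload("local/details-0001.html", "details/0001.html", "text/html", 31536000)
upload("local/img-0001.jpg", "images/0001.jpg", "image/jpeg", 31536000)

With headers like these, the hourly-updated pages stay fresh while the never-changing detail pages and images can be served from the edge (and browser) cache for up to a year, which is where most of the bandwidth savings would come from.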

Related

Total cache space limit per CloudFront distribution per POP?

How much cache space is allowed per distribution in AWS CloudFront, per POP?
For example:
If I run a 4K video sharing website and each video is approximately 2 GB in size.
100K users from the same city come to the website in one day to watch videos (from a 10K video collection).
So, CloudFront needs to serve 10,000 different videos to 100K users, and each video is 2 GB, so that is 20,000 GB (20 TB) of space in total.
So, does CloudFront store all of that 20 TB of content as cache in that specific POP?
I heard once that Amazon Prime Video uses CloudFront and they have videos that get cached in CloudFront, but I have never seen any actual figures published about the size of the cache.
My assumption is that CloudFront caches everything, but older stuff falls off the cache when they run out of space. So, if your videos keep getting watched, they'll stay in the cache.
CloudFront uses a Regional model, so if a cache at the edge does not have content (eg Manchester, England) it will go to the cache in the nearest Region (London). If that cache is missing the content, it will go back to the source. So, this means that several edges can benefit from a nearby regional cache, which is more likely to have content since it would receive more 'hits' (and I assume it would also have a larger cache).
If you want to measure how well CloudFront is caching, you can determine whether something was served from the cache by looking for X-Cache: Hit from cloudfront or X-Cache: Miss from cloudfront in the page headers.
CloudWatch can also provide a Cache Hit Rate, which provides the proportion of requests that were served from CloudFront edge caches instead of going to origin servers for content.
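As a quick way to spot-check this from a script rather than the browser dev tools, you can issue a HEAD request and read the X-Cache header. A small sketch using only the Python standard library; the CloudFront domain and path are placeholders:

import urllib.request

# Placeholder URL: substitute a file served by your own distribution.
url = "https://d1234example.cloudfront.net/images/0001.jpg"

request = urllib.request.Request(url, method="HEAD")
with urllib.request.urlopen(request) as response:
    # "Hit from cloudfront" = served from an edge cache,
    # "Miss from cloudfront" = CloudFront had to go back to the origin.
    print(response.headers.get("X-Cache"))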

How to use CloudFront efficiently for less popular website?

We are building a website which contains a lot of images and data. We have optimized a lot to make the website faster. Then we decided to use AWS CloudFront also to make it faster for all regions around the world. The app works faster after the integration of CloudFront.
But later we found that the data is loaded into the CloudFront cache only when the website asks for it. So we are afraid that the initial load will take the same time as it used to take without the CDN, because it loads from S3 to the CDN first and then to the user.
Also, we used the default TTL value (i.e. 24 hours). In our case, a user may log in once or twice per week to this website, so the advantage of caching won't apply here either because the cache expires after 24 hours. Will raising the Maximum TTL to a larger value solve the issue? Does it cost more money? I have also read that increasing to a longer TTL is not a good idea, as it has disadvantages when updating the data in S3.
CloudFront will cache the response only after the first user requests it. So it will be slow for the first user, but it will be significantly faster for every other user after that. So it does make sense to use CloudFront.
Using the default TTL value is okay, since most users will see the same content and the website has a lot of static components as well. Every user except the first will see a fast response from your website. You could even reduce this to 10-12 hours depending on how often you expect your data to change.
There is no additional cost to increasing your TTL. However, invalidation requests are charged, so if you want to remove something from the cache, there is a cost attached. I would therefore keep the TTL roughly as short as the interval at which your data is expected to change, so you don't have to invalidate existing caches when your data changes. At the same time, the maximum number of users can benefit from your CDN.
No additional charge for the first 1,000 paths requested for invalidation each month. Thereafter, $0.005 per path requested for invalidation.
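If you do need to purge something before its TTL expires, an invalidation can be created programmatically as well as from the console. A hedged boto3 sketch; the distribution ID and path are placeholders, and each listed path counts against the 1,000 free paths per month mentioned above:

import time
import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_invalidation(
    DistributionId="E1234EXAMPLE",  # placeholder distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/index.html"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)
# Status starts as "InProgress" and changes to "Completed" once the
# edge locations have been told to drop the object.
print(response["Invalidation"]["Status"])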
UPDATE: In the event that you only have 1 user using the website over a long period of time (1 week or so), it might not be of much benefit to use CloudFront at all. CloudFront and all caching services are only effective when there are multiple users requesting the same resources.
However, you might still see a marginal benefit from CloudFront, as requests are routed from the edge location to S3 over AWS's backbone network, which is much faster than the public internet. Whether this is cost-effective for you depends on how many users are using the website and how slow it is.
Aside from using CloudFront, you could also try S3 Cross-Region Replication to improve your overall speed. Cross-Region Replication copies objects to a bucket in a different region as they are added in the source region. This can help minimise latency for users in other regions.

GCP CDN not caching data

We have deployed our website on a GCP VM and enabled GCP CDN in front of the VM. When we browse the website, in most cases GCP CDN makes requests to the origin VM.
I am using the Stackdriver query below to check the cache hits.
resource.type="http_load_balancer"
resource.labels.forwarding_rule_name="rule_name"
httpRequest.serverIp="gcpvmip"
httpRequest.requestUrl="request_url"
httpRequest.cacheFillBytes > 0
Based on your latest comment, it sounds like you're expecting all requests to your site to be served from Cloud CDN's caches without contacting your origin server. However, it's normal to see cache misses when using a CDN. Each CDN operates numerous caches, not one big global cache. The fact the content for one URL has been inserted into one cache does not mean it will be present in all caches everywhere. Further, unpopular cache entries are routinely evicted from cache to make room for more popular content.
Here are some relevant excerpts from the Cloud CDN docs:
Cloud CDN uses caches in numerous locations around the world. Caching is reactive in that an object is stored in a particular cache if a request goes through that cache and if the response is cacheable. An object stored in one cache does not automatically replicate into other caches; cache-to-cache fill happens only in response to a client-initiated request.
https://cloud.google.com/cdn/docs/overview
Note that the expiration time is an upper bound on how long a cache entry remains valid. There is no guarantee that a cache entry will remain in the cache until it expires.
https://cloud.google.com/cdn/docs/caching
Note, though, that Cloud CDN operates numerous caches around the world, and old cache entries are routinely evicted to make room for new content. As a result, multiple cache fills per resource are expected as part of normal operation.
https://cloud.google.com/cdn/docs/support#low-hit-rate
If you're seeing low cache hit rates for popular content, that last link has suggestions that should help.
I know exactly what the problem is... GCP CDN does not have an Origin Shield feature. Even worse, with GCP almost every request comes from a different one of its massive number of CDN PoPs around the world. Without Origin Shield, your app server is the origin server and it has to fill the cache of every CDN edge point.
In my experience you should use GCP CDN only for DoS protection, caching, and improving the TTFB performance of HTML requests (especially to offload the SSL handshake). Use another CDN for caching other assets, which will give a better cache-hit ratio.
Some CDN providers have Origin Shield which helps with the cache hit ratio. E.g. create cdn.yourdomain.com with a CDN provider that has Origin Shield Feature and serve all other static content from there.
I know it may sound crazy to put a CDN in front of your CDN, but trust me, it works amazingly well, and you can even save money if you go with a CDN that charges less for bandwidth. Also, GCP CDN only caches content up to 10MB.

GET from CloudFront after PUT returns old data

I am using CloudFront to access my S3 bucket.
I perform both GET and PUT operations to retrieve and update the data. The problem is that after I send a PUT request with new data, a GET request still returns the older data. I do see that the file is updated in the S3 bucket.
I am performing both GET and PUT from an iOS application. However, I tried performing the GET request using regular browsers and I still receive the older data.
Do I need to do anything in addition to make CloudFront refresh its data?
Cloudfront caches your data. How long depends on the headers the origin serves content with and the distribution settings.
Amazon has a document with the full details of how these interact, but if you haven't set your cache control headers nor changed any CloudFront settings, then by default data is cached for up to 24 hours.
You can either:
set headers indicating how long to cache content for (e.g. Cache-Control: max-age=300 to allow caching for up to 5 minutes). How exactly you do this depends on how you are uploading the content; at a pinch you can use the console (see the sketch after this list)
Use the console / API to invalidate content. Beware that only the first 1,000 invalidation paths a month are free - beyond that Amazon charges. In addition, invalidations take 10-15 minutes to process.
Change the naming strategy for your s3 data so that new data is served under a different name (perhaps less relevant in your case)
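If the content is already in S3, one way to add or change the Cache-Control header without re-uploading is to copy the object onto itself with replaced metadata. A minimal boto3 sketch, with placeholder bucket and key names:

import boto3

s3 = boto3.client("s3")

# Copying the object onto itself with MetadataDirective="REPLACE"
# rewrites its headers; the bucket and key below are placeholders.
s3.copy_object(
    Bucket="my-bucket",
    Key="data.json",
    CopySource={"Bucket": "my-bucket", "Key": "data.json"},
    ContentType="application/json",
    CacheControl="max-age=300",  # allow caching for up to 5 minutes
    MetadataDirective="REPLACE",
)

Note that CloudFront will keep serving any copy it has already cached until it expires or is invalidated, so changing the header only affects how the object is cached from then on.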
When you PUT an object into S3 by sending it through Cloudfront, Cloudfront proxies the PUT request back to S3, without interpreting it within Cloudfront... so the PUT request changes the S3 object, but the old version of the object, if cached, would have no reason to be evicted from the Cloudfront cache, and would continue to be served until it is expired, evicted, or invalidated.
"The" Cloudfront cache is not a single thing. Cloudfront has over 50 global edge locations (reqests are routed to what should be the closest one, using geolocating DNS), and objects are only cached in locations through which they have been requested. Sending an invalidation request to purge an object from cache causes a background process at AWS to contact all of the edge locations and request the object be purged, if it exists.
What's the point of uploading this way, then? The point has to do with the impact of packet loss, latency, and overall network performance on the throughput of a TCP connection.
The Cloudfront edge locations are connected to the S3 regions by high bandwidth, low loss, low latency (within the bounds of the laws of physics) connections... so the connection from the "back side" of Cloudfront towards S3 may be a connection of higher quality than the browser would be able to establish.
Since the Cloudfront edge location is also likely to be closer to the browser than S3 is, the browser connection is likely to be of higher quality and more resilient... thereby improving the net quality of the end-to-end logical connection, by splitting it into two connections. This feature is solely about performance:
http://aws.amazon.com/blogs/aws/amazon-cloudfront-content-uploads-post-put-other-methods/
If you don't have any issues sending directly to S3, then uploads "through" Cloudfront serve little purpose.

How to reduce Amazon Cloudfront costs?

I have a site that has exploded in traffic the last few days. I'm using Wordpress with W3 Total Cache plugin and Amazon Cloudfront to deliver the images and files from the site.
The problem is that the cost of Cloudfront is quite huge, near $500 just the past week. Is there a way to reduce the costs? Maybe using another CDN service?
I'm new to CDN, so I might not be implementing this well. I've created a cloudfront distribution and configured it on W3 Total Cache Plugin. However, I'm not using S3 and don't know if I should or how. To be honest, I'm not quite sure what's the difference between Cloudfront and S3.
Can anyone give me some hints here?
I'm not quite sure what's the difference between Cloudfront and S3.
That's easy. S3 is a data store. It stores files, and is super-scalable (easily scaling to serve thousands of people at once). The problem is that it's centralized (i.e. served from one place in the world).
CloudFront is a CDN. It caches your files all over the world so they can be served faster. If you squint, it looks like they are 'storing' your files, but the cache can be lost at any time (or if they boot up a new node), so you still need the files at your origin.
CF may actually hurt you if you have too few hits per file. For example, in Tokyo, CF may have 20 nodes. It may take 100 requests to a file before all 20 CF nodes have cached your file (requests are randomly distributed). Of those 100 requests, 20 of them will hit an empty cache and see an additional 200ms of latency as the node fetches the file. They generally cache your file for a long time.
I'm not using S3 and don't know if I should
Probably not. Consider using S3 if you expect your site to massively grow in media (i.e. lots of user photo uploads).
Is there a way to reduce the costs? Maybe using another CDN service?
That entirely depends on your site. Some ideas:
1) Make sure you are serving the appropriate headers. And make sure your expires time isn't too short (ideally it should be days, weeks, or months).
The "best practice" is to never expire pages, except maybe your index page which should expire every X minutes or hours or days (depending on how fast you want it updated.) Make sure every page/image says how long it can be cached.
2) As stated above, CF is only useful if each page is requested > 100's of times per cache time. If you have millions of pages, each requested a few times, CF may not be useful.
3) Requests from Asia are much more expensive than those from the US. Consider launching your server in Tokyo if you're more popular there.
4) Look at your web server log and see how often CF is requesting each of your assets (a small log-parsing sketch follows after this answer). If it's more often than you expect, your cache headers are set up wrong. If you set up "cache this for months", you should only see a handful of requests per day (as they boot new servers, etc), and a few hundred requests when you publish a new file (i.e. one request per CF edge node).
Depending on your setup, other CDNs may be cheaper. And depending on your server, other setups may be less expensive. (i.e. if you serve lots of small files, you might be better off doing your own caching on EC2.)
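For point 4, a quick way to tally how often each asset is being fetched from your origin is to count request paths in the web server's access log. A rough sketch, assuming an Apache/nginx-style (common/combined format) log; the filename is a placeholder:

import collections
import re

# Count how often each asset is requested in the access log.
# "access.log" is a placeholder path.
request_re = re.compile(r'"(?:GET|HEAD) (\S+)')

counts = collections.Counter()
with open("access.log") as log:
    for line in log:
        match = request_re.search(line)
        if match:
            counts[match.group(1)] += 1

# Assets fetched from the origin far more often than their cache
# lifetime suggests usually indicate missing or too-short
# Cache-Control or Expires headers.
for path, count in counts.most_common(20):
    print(f"{count:8d}  {path}")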
You could give Cloudflare a go. It's not a full CDN, so it might not have all the features of CloudFront, but the basic package is free and it will offload a lot of traffic from your server.
https://www.cloudflare.com
Amazon CloudFront costs are based on 2 factors:
Number of Requests
Data Transferred in GB
Solution
Reduce the number of image requests. To do that, combine small images into one image (an image sprite) and use that image
https://www.w3schools.com/css/tryit.asp?filename=trycss_sprites_img (image sprites)
Don't use the CDN for video files, because videos are large and this drives the CDN cost up significantly
What components make up your bill? One thing to check with the W3 Total Cache plugin is the number of invalidation requests it sends to CloudFront. It's known to send a large number of invalidation paths on each change, which can add up.
Aside from that, if your spend is predictable, one option is to use the CloudFront Security Savings Bundle to save up to 30% by committing to a minimum spend for a one-year period. It's self-service, so you can sign up in the console and purchase additional commitments as your usage grows.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/savings-bundle.html
Don't forget that CloudFront has 3 different price classes, which influence how widely your data is replicated; a lower price class also makes it cheaper.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PriceClass.html
The key here is this:
"If you choose a price class that doesn’t include all edge locations, CloudFront might still occasionally serve requests from an edge location in a region that is not included in your price class. When this happens, you are not charged the rate for the more expensive region. Instead, you’re charged the rate for the least expensive region in your price class."
It means that you could use Price Class 100 (the cheapest one) and still occasionally be served from regions you are not paying for <3
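If you want to switch an existing distribution to Price Class 100 from a script rather than the console, you can fetch the current distribution config and write it back with the new price class using boto3. A sketch with a placeholder distribution ID:

import boto3

cloudfront = boto3.client("cloudfront")
dist_id = "E1234EXAMPLE"  # placeholder distribution ID

# update_distribution needs the complete current config plus its ETag
current = cloudfront.get_distribution_config(Id=dist_id)
config = current["DistributionConfig"]
config["PriceClass"] = "PriceClass_100"  # lowest-cost edge locations only

cloudfront.update_distribution(
    Id=dist_id,
    IfMatch=current["ETag"],
    DistributionConfig=config,
)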