I am wondering if anyone has a simple answer to this. If you hit an Akamai server for an image, but the response is returned with a 304 code instead of a 200, does Akamai charge for the call, since no data is returned with a 304 and the image is served from the browser cache?
If by charge you mean count against your monthly bandwidth allotment, then no. Assuming you're using their Origin Pull service, the only exception is if the file is in your cache but not stored on Akamai's edge servers. In that case, Akamai pulls the file from your server which would incur a small bandwidth hit as both incoming and outgoing traffic is counted on Akamai.
Well, it depends on how you are being billed. By bandwidth or by pageviews?
If by bandwidth, then yes, you would be charged and bits would be delivered on your behalf.
There is no discrepancy if your object is in cache. If the object is in cache, then Akamai won't need to go back to origin to fetch the data.
3xx responses still transfer a small number of bytes per request: the status line, the HTTP headers (including any cookies), and, for redirects, the URL the client is being sent to.
Therefore you will incur a small amount of cost between your user and the edge node, and, if the request is a cache miss or not cacheable, also bandwidth cost between your origin server and Akamai.
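For illustration, here is a minimal sketch (TypeScript, Node 18+ where fetch is global) of the conditional GET that produces a 304; the URL and validator date are hypothetical:

    // Hypothetical URL; the If-Modified-Since value would come from the
    // Last-Modified header of the previously cached response.
    const res = await fetch("https://images.example-akamai-customer.com/logo.png", {
      headers: { "If-Modified-Since": "Wed, 30 Sep 2015 09:24:31 GMT" },
    });
    // On a 304, only the status line and headers cross the wire; there is no
    // body, which is why the per-request cost is small but not zero.
    console.log(res.status === 304 ? "revalidated, served from cache" : "full download");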
AWS CloudFront and Azure CDN can dynamically compress files under certain circumstances. But do they also support dynamic compression for HTTP range requests?
I couldn't find any hints in the documentation, only in the Google Cloud Storage docs.
Azure:
Range requests may be compressed into different sizes. Azure Front Door requires the content-length values to be the same for any GET HTTP request. If clients send byte range requests with the accept-encoding header that leads to the Origin responding with different content lengths, then Azure Front Door will return a 503 error. You can either disable compression on Origin/Azure Front Door or create a Rules Set rule to remove accept-encoding from the request for byte range requests.
See: https://learn.microsoft.com/en-us/azure/frontdoor/standard-premium/how-to-compression
AWS:
HTTP status code of the response
CloudFront compresses objects only when the HTTP status code of the response is 200, 403, or 404.
--> A range request's response has status code 206
See:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html
https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/206
• Yes, Azure CDN also supports dynamic compression for HTTP range requests, where it is known as 'object chunking'. Object chunking divides the file to be retrieved from the origin server into smaller chunks of 8 MB. When a large file is requested, the CDN retrieves smaller pieces of the file from the origin: after the CDN POP server receives a full or byte-range file request, the CDN edge server requests the file from the origin in chunks of 8 MB.
• After the chunk arrives at the CDN edge, it's cached and immediately served to the user. The CDN then prefetches the next chunk in parallel. This prefetch ensures that the content stays one chunk ahead of the user, which reduces latency. This process continues until the entire file is downloaded (if requested), all byte ranges are available (if requested), or the client terminates the connection.
Also, this capability of object chunking relies on the ability of the origin server to support byte-range requests; if the origin server doesn't support byte-range requests, requests to download data greater than 8 MB in size will fail.
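As a rough sketch of what one of those chunk fetches looks like at the HTTP level (TypeScript with Node 18+ global fetch; the origin URL is hypothetical, and 8 MB is 8,388,608 bytes, so the first chunk is bytes 0-8388607):

    // Request the first 8 MB chunk of a large file via a byte-range request.
    const CHUNK = 8 * 1024 * 1024; // the 8 MB chunk size described above
    const res = await fetch("https://origin.example.com/big-file.bin", {
      headers: { Range: `bytes=0-${CHUNK - 1}` },
    });
    // An origin that supports byte ranges answers 206 Partial Content with a
    // Content-Range header; a 200 means it ignored Range and sent the whole
    // file, which is the case where downloads over 8 MB fail through the CDN.
    console.log(res.status, res.headers.get("content-range"));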
Please find the below link for more details regarding the above:
https://learn.microsoft.com/en-us/azure/cdn/cdn-large-file-optimization#object-chunking
Also, find the below link for more clarification on the types of compression supported for Azure CDN profiles:
https://learn.microsoft.com/en-us/azure/cdn/cdn-improve-performance#azure-cdn-standard-from-microsoft-profiles
Some tests have shown that when dynamic compression is enabled in AWS CloudFront, range support is disabled, so the Range and If-Range headers are removed from all requests.
I have a .NET Core MVC API implementation. In my controller I query for 800 records from the DB. As a result, my response body size is around 6 MB and the response time is over 6 s. My service runs in the AWS cloud.
I ran several tests to diagnose the service. In all these scenarios I still read 800 records from the DB. Here is the list of my experiments:
Return only 10 records: response time was always under 800 ms, response body size 20 kB.
Return only 100 records: response time was over 800 ms but with no timeouts, response body size 145 kB.
Use custom JSON serialization in the controller (await JsonSerializer.SerializeAsync(HttpContext.Response.Body, limitedResult);), which gave a slightly better result, but only by about 10%.
Return 850 records: response time was over 6 s, response body size 6 MB.
The service has no problems with memory and does not restart.
It looks like Kestrel has a problem serving large response data.
My suspicion is I/O buffering: for large responses it spills to disk, which can affect the performance of the AWS Docker image.
The question is how to optimize Kestrel to serve large responses.
UPDATE:
I enabled gzip compression on the server side. My responses compress quite well because of the JSON format, but the result is exactly the SAME. Network bandwidth is not the problem, so it looks like the bottleneck is between my controller and the compression step. Any suggestion on how to configure a .NET Core service to handle large responses (>1 MB)?
Since you are already sure that the network is not the issue, this mostly points to time being spent in serialization. Try running the application on a local machine with a profiler such as PerfView to see where most of the time is spent for the big JSON payload.
I'm not sure if CloudFront is the right choice for this purpose, so please correct me if I'm wrong.
I want to broadcast some information to all website users every 2-3 seconds. So instead of introducing websockets, I decided to cache 10 KB of data at CloudFront and perform short polling from the web client every 2-3 seconds.
CloudFront should request the data from an HTTP server. Suppose the HTTP server's response latency is 200 ms and CloudFront receives 500 rps. The cached object expires, and during the 200 ms that CloudFront needs to refresh the data from the server, it will receive 500 * 0.2 = 100 requests. What is the behaviour of CloudFront when it receives 100 requests at the point where the data is stale but the server hasn't responded yet?
I have an application deployed on AWS Elastic Beanstalk. I added some simple licensing to stop abuse of the API: the user has to pass a license key as a field,
e.g.
search.myapi.com/?license=4ca53b04&query=fred
If this is not valid then the request is rejected.
However, until the monthly update, the above query will always return the same data. Therefore I now point search.myapi.com at an AWS CloudFront distribution; only if the query is not cached does it go to the actual server as
direct.myapi.com/?license=4ca53b04&query=fred
However, the problem is that if two users make the same query they won't be deemed the same by CloudFront, because the license parameter is different. So the CloudFront caching only works at a per-user level, which is of no use.
What I want to do is have CloudFront ignore the license parameter for caching but not the other parameters. I don't mind too much if that means a user could access CloudFront with an invalid license, as long as they can't make a successful query to the server (since CloudFront calls are cheap but server calls are expensive, both in terms of CPU and monetary cost).
Perhaps what I need is something in front of CloudFront that does the license check and then strips out the license parameter, but I don't know what that would be?
Two possible solutions come to mind.
The first solution feels like a hack, but would prevent unlicensed users from successfully fetching uncached query responses. If the response is cached, it would leak out, but at no cost in terms of origin server resources.
If the content is not sensitive, and you're only trying to avoid petty theft/annoyance, this might be viable.
For query parameters, CloudFront allows you to forward all, but cache based on a whitelist.
So, whitelist query (and any other necessary fields) but not license.
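To illustrate the effect (this is only a model of the cache-key logic, not CloudFront's actual implementation), a TypeScript sketch:

    // With only "query" whitelisted, two requests that differ only in their
    // license parameter collapse to the same cache key.
    function cacheKey(rawUrl: string, whitelist: string[]): string {
      const url = new URL(rawUrl);
      const kept = [...url.searchParams.entries()]
        .filter(([name]) => whitelist.includes(name))
        .sort(); // make the key insensitive to parameter order
      return url.pathname + "?" + kept.map(([k, v]) => `${k}=${v}`).join("&");
    }

    cacheKey("https://search.myapi.com/?license=4ca53b04&query=fred", ["query"]);
    cacheKey("https://search.myapi.com/?license=ffffffff&query=fred", ["query"]);
    // both return "/?query=fred", so one cached object serves both users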
Results for a given query:
valid license, cache miss: request goes to origin, origin returns response, response stored in cache
valid license, cache hit: response served from cache
invalid license, cache hit: response served from cache
invalid license, cache miss: request goes to origin, origin returns error, error stored in cache.
Oops. The last condition is problematic, because authorized users will receive the cached error if they make the same query.
But we can fix this, as long as the origin returns an HTTP error for an invalid request, such as 403 Forbidden.
As I explained in Amazon CloudFront Latency, CloudFront caches error responses using different timers (not min/default/max-TTL), with a default of 5 minutes. This value can be set to 0 (or other values) for each of several individual HTTP status codes, like 403. So, for the error code your origin returns, set the Error Caching Minimum TTL to 0 seconds.
At this point, the problematic condition of caching error responses and playing them back to authorized clients has been solved.
The second option seems like a better idea, overall, but would require more sophistication and probably cost slightly more.
CloudFront has a feature that connects it with AWS Lambda, called Lambda@Edge. This allows you to analyze and manipulate requests and responses using simple JavaScript scripts that are run at specific trigger points in the CloudFront signal flow.
Viewer Request runs for each request, before the cache is checked. It can allow the request to continue into CloudFront, or it can stop processing and generate a response directly back to the viewer. Responses generated here are not stored in the cache.
Origin Request runs after the cache is checked, only for cache misses, before the request goes to the origin. If this trigger generates a response, the response is stored in the cache and the origin is not contacted.
Origin Response runs after the origin response arrives, only for cache misses, and before the response goes into the cache. If this trigger modifies the response, the modified response is stored in the cache.
Viewer Response runs immediately before the response is returned to the viewer, for both cache misses and cache hits. If this trigger modifies the response, the modified response is not cached.
From this, you can see how this might be useful.
A Viewer Request trigger could check each request for a valid license key, and reject those without. For this, it would need access to a way to validate the license keys.
If your client base is very small or rarely changes, the list of keys could be embedded in the trigger code itself.
Otherwise, it needs to validate the key, which could be done by sending a request to the origin server from within the trigger code (the runtime environment allows your code to make outbound requests and receive responses via the Internet) or by doing a lookup in a hosted database such as DynamoDB.
Lambda@Edge triggers run in Lambda containers, and depending on traffic load, observations suggest that it is very likely that subsequent requests reaching the same edge location will be handled by the same container. Each container only handles one request at a time, but the container becomes available for the next request as soon as control is returned to CloudFront. As a consequence of this, you can cache the results in memory in a global data structure inside each container, significantly reducing the number of times you need to ascertain whether a license key is valid. The function either allows CloudFront to continue processing as normal, or actively rejects the invalid key by generating its own response. A single trigger will cost you a little under $1 per million requests that it handles.
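A minimal sketch of such a Viewer Request trigger (TypeScript targeting the Node.js Lambda@Edge runtime; isValidLicense is a hypothetical stand-in for your origin call or DynamoDB lookup):

    import { parse } from "querystring";

    // In-container cache of keys already checked; as described above, it
    // survives across invocations handled by the same Lambda container.
    const seen = new Map<string, boolean>();

    // Hypothetical validator; replace with a request to the origin server
    // or a DynamoDB lookup.
    async function isValidLicense(key: string): Promise<boolean> {
      return key.length > 0; // placeholder logic only
    }

    export const handler = async (event: any) => {
      const request = event.Records[0].cf.request;
      const license = (parse(request.querystring).license as string) ?? "";

      let valid = seen.get(license);
      if (valid === undefined) {
        valid = await isValidLicense(license);
        seen.set(license, valid);
      }

      if (!valid) {
        // Generating a response here returns it directly to the viewer;
        // per the trigger semantics above, it is never cached.
        return { status: "403", statusDescription: "Forbidden", body: "Invalid license key" };
      }
      return request; // let CloudFront continue: cache lookup, then origin on a miss
    };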
This solution prevents missing or unauthorized license keys from actually checking the cache or making query requests to the origin. As before, you would want to customize the query string whitelist in the CloudFront cache behavior settings to eliminate license from the whitelist, and change the error caching minimum TTL to ensure that errors are not cached, even though these errors should never occur.
Cloudfront is configured to cache the images from our app. I found that the images were evicted from the cache really quickly. Since the images are generated dynamically on the fly, this is pretty intense for our server. In order to solve the issue I set up a test case.
Origin headers
The image is served from our origin server with correct Last-Modified and Expires headers.
Cloudfront cache behaviour
Since the site is HTTPS only I set the Viewer Protocol Policy to HTTPS. Forward Headers is set to None and Object Caching to Use Origin Cache Headers.
The initial image request
I requested an image at 11:25:11. This returned the following status and headers:
Code: 200 (OK)
Cached: No
Expires: Thu, 29 Sep 2016 09:24:31 GMT
Last-Modified: Wed, 30 Sep 2015 09:24:31 GMT
X-Cache: Miss from cloudfront
A subsequent request
A reload a little while later (11:25:43) returned the image with:
Code: 304 (Not Modified)
Cached: Yes
Expires: Thu, 29 Sep 2016 09:24:31 GMT
X-Cache: Hit from cloudfront
A request a few hours later
Nearly three hours later (at 14:16:11) I went to the same page and the image loaded with:
Code: 200 (OK)
Cached: Yes
Expires: Thu, 29 Sep 2016 09:24:31 GMT
Last-Modified: Wed, 30 Sep 2015 09:24:31 GMT
X-Cache: Miss from cloudfront
Since the image was still cached by the browser, it loaded quickly. But I cannot understand why CloudFront did not return the cached image. Therefore the app had to generate the image again.
I read that Cloudfront evicts files from its cache after a few days of being inactive. This is not the case as demonstrated above. How could this be?
I read that Cloudfront evicts files from its cache after a few days of being inactive.
Do you have an official source for that?
Here's the official answer:
If an object in an edge location isn't frequently requested, CloudFront might evict the object—remove the object before its expiration date—to make room for objects that have been requested more recently.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html
There is no guaranteed retention time for cached objects, and objects with low demand are more likely to be evicted... but that isn't the only factor you may not have considered. Eviction may not be the issue, or the only issue.
Objects cached by CloudFront are like Schrödinger's cat. It's a loose analogy, but I'm running with it: whether an object is "in the cloudfront cache" at any given instant is not a yes-or-no question.
CloudFront has somewhere around 53 edge locations (where your browser connects and the content is physically stored) in 37 cities. Some major cities have 2 or 3. Each request that hits cloudfront is routed (via DNS) to the most theoretically optimal location -- for simplicity, we'll call it the "closest" edge to where you are.
The internal workings of Cloudfront are not public information, but the general consensus based on observations and presumably authoritative sources is that these edge locations are all independent. They don't share caches.
If, for example, you are in Texas (US) and your request routed through and was cached in Dallas/Fort Worth, TX, and if the odds are equal that any request from you could hit either of the Dallas edge locations, then until you get two misses of the same object, the odds are about 50/50 that your next request will be a miss. If I request that same object from my location, which I know from experience tends to route through South Bend, IN, then the odds of my first request being a miss are 100%, even though it's cached in Dallas.
So an object is not either in, or not in, the cache because there is no "the" (single, global) cache.
It is also possible that CloudFront's determination of the "closest" edge to your browser will change over time.
CloudFront's mechanism for determining the closest edge appears to be dynamic and adaptive. Changes in the topology of the Internet at large can shift which edge location will tend to receive requests sent from a given IP address, so it is possible that over the course of a few hours, the edge you are connecting to will change. Maintenance, outages, or other issues impacting a particular edge could also cause requests from a given source IP address to be sent to a different edge than the typical one, and this could also give you the impression of objects being evicted, since the new edge's cache would be different from the old one.
Looking at the response headers, it isn't possible to determine which edge location handled each request. However, this information is provided in the CloudFront access logs.
I have a fetch-and-resize image service that handles around 750,000 images per day. It's behind CloudFront, and my hit/miss ratio is about 50/50. That is certainly not all CloudFront's fault, since my pool of images exceeds 8 million, the viewers are all over the world, and my max-age directive is shorter than yours. It has been quite some time since I last analyzed the logs to determine which "misses" seemed unexpected and how many there were (though when I did, there definitely were some, but their number was not unreasonable). That analysis is done easily enough, since the logs tell you whether each response was a hit or a miss, as well as identifying the edge location... so you could analyze that to see if there's really a pattern here.
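If you do want to look, here is a rough TypeScript sketch of that analysis; the tab-separated field positions (x-edge-location third, x-edge-result-type fourteenth) are assumptions based on the classic CloudFront access-log format, so verify them against the #Fields header line in your own logs:

    import { readFileSync } from "fs";

    // Tally hits vs. misses per edge location from a downloaded access log.
    const tally = new Map<string, { hits: number; misses: number }>();

    for (const line of readFileSync("access.log", "utf8").split("\n")) {
      if (!line || line.startsWith("#")) continue; // skip #Version / #Fields lines
      const fields = line.split("\t");
      const edge = fields[2];    // x-edge-location, e.g. "IAD12" (assumed position)
      const result = fields[13]; // x-edge-result-type: Hit, Miss, ... (assumed position)
      const entry = tally.get(edge) ?? { hits: 0, misses: 0 };
      if (result === "Miss") entry.misses++; else entry.hits++;
      tally.set(edge, entry);
    }
    console.log(tally);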
My service stores all of its output content in S3, and when a new request comes in, it first sends a quick request to the S3 bucket to see if there is work that can be avoided. If a result is returned by S3, then that result is returned to CloudFront instead of doing all the fetching and resizing work, again. Mind you, I did not implement that capability because of the number of CloudFront misses... I designed that in from the beginning, before I ever even tested it behind CloudFront, because -- after all -- CloudFront is a cache, and the contents of a cache are pretty much volatile and ephemeral, by definition.
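A sketch of that check-S3-first pattern (TypeScript with the AWS SDK for JavaScript v3; the bucket name and key layout are hypothetical):

    import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

    const s3 = new S3Client({});

    // Return a previously generated image from S3 if it exists; a null result
    // means the caller has to do the fetch-and-resize work itself.
    async function cachedResult(key: string): Promise<Uint8Array | null> {
      try {
        const out = await s3.send(
          new GetObjectCommand({ Bucket: "resized-images-example", Key: key })
        );
        return await out.Body!.transformToByteArray();
      } catch (err: any) {
        if (err.name === "NoSuchKey") return null; // not generated yet
        throw err;
      }
    }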
Update: I stated above that it does not appear possible to identify the edge location forwarding a particular request by examining the request headers from CloudFront... however, it appears that it is possible with some degree of accuracy by examining the source IP address of the incoming request.
For example, a test request sent to one of my origin servers through CloudFront arrives from 54.240.144.13 if I hit my site from home, or 205.251.252.153 when I hit the site from my office -- the locations are only a few miles apart, but on opposite sides of a state boundary and using two different ISPs. A reverse DNS lookup of these addresses shows these hostnames:
server-54-240-144-13.iad12.r.cloudfront.net.
server-205-251-252-153.ind6.r.cloudfront.net.
CloudFront edge locations are named after the nearest major airport, plus an arbitrarily chosen number. For iad12 ... "IAD" is the International Air Transport Association (IATA) code for Washington, DC Dulles airport, so this is likely to be one of the edge locations in Ashburn, VA (which has three, presumably with different numerical codes at the end, but I can't confirm that from just this data). For ind6, "IND" matches the airport at Indianapolis, Indiana, so this strongly suggests that this request comes through the South Bend, IN, edge location. The reliability of this test would depend on the consistency with which CloudFront maintains its reverse DNS entries. It is not documented how many independent caches might be at any given edge location; the assumption is that there's only one, but there might be more than one, having the effect of increasing the miss ratio for very small numbers of requests, but disappearing into the mix for large numbers of requests.
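A quick TypeScript sketch of that reverse lookup, using Node's resolver (the IPs are the examples above; the hostname pattern is an observed convention, not a documented API, so the regex may need adjusting):

    import { promises as dns } from "dns";

    // Reverse-resolve a CloudFront source IP and pull out the airport-code
    // label, e.g. "iad12" or "ind6".
    async function edgeOf(ip: string): Promise<string | undefined> {
      const [host] = await dns.reverse(ip); // e.g. server-54-240-144-13.iad12.r.cloudfront.net
      return host.match(/\.([a-z]+\d+)\.r\.cloudfront\.net$/)?.[1];
    }

    console.log(await edgeOf("54.240.144.13"));   // expected: "iad12"
    console.log(await edgeOf("205.251.252.153")); // expected: "ind6"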