I have read the cache key documentation on cloudfront:
https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-cloudfront-announces-cache-and-origin-request-policies/
I have a Lambda@Edge function that is triggered by Viewer Request. There I set request.uri to the values I need to fetch the proper files from S3.
From the documentation I don't see anything about request.uri being part of what the cache key considers.
Is there a way to make request.uri affect the cache key?
In Lambda@Edge, request.uri (where the request object is event.Records[0].cf.request) is the resource (path) component of the request URL, including the filename and extension.
This is always part of the cache key. This is default behavior and can't be disabled.
By default, [the cache key] consists of the CloudFront distribution hostname and the resource portion of the request URL (path, file name, and extension)
https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-cloudfront-announces-cache-and-origin-request-policies/
...however, this isn't the complete answer.
The value of request.uri can be modified by viewer request and/or origin request triggers... so an important consideration here is when is this value evaluated for determining the cache key?
If the trigger is a viewer request trigger, the cache key uses the value of request.uri as it stands after the trigger runs. This means if a viewer request trigger modifies the value of request.uri (and returns the modified request object to CloudFront) then the cache key is modified to contain the revised value. Cache lookup occurs immediately after the viewer request trigger finishes and returns control to CloudFront.
Things are very different in an origin request trigger. Modifying request.uri in an origin request trigger does not change the cache key (which is completely frozen at that point, as the cache has already been checked and a cacheable response will be stored under the same cache key that generated the cache miss). The value as it stands before the origin request trigger runs is what's in the cache key.
Changing request.uri in either trigger type will also change the request URI that the origin server receives.
My REST API occasionally needs to return a 413 'Payload too large' response.
As context: I use AWS with API Gateway and Lambda. Lambda has a maximum payload of 6 MB. Sometimes - less than 0.1% of requests - the payload is greater than 6 MB and my API returns a 413 status.
The way I deal with this is to provide an alternative way to request the data from the API: as a URL pointing to the data stored as a JSON file in S3. The file is in a bucket with a lifecycle rule that automatically deletes it after a short period.
This works OK, but has the unsatisfying characteristic that a large payload request results in the client making 3 separate calls:
Make a standard request to the API and receive the 413 response
Make a second request to the API for the data stored at an S3 URL. I use an asURL=true parameter in the GET request for this.
Make a third request to retrieve the data from the S3 bucket
An alternative I'm considering is embedding the S3 URL in the 413 response. For example, embedding it in a custom header. This would avoid the need for the second call.
I could also change the approach so that every request is returned as an S3 URL but then 99.9% of the requests would unnecessarily make 2 calls rather than just 1.
Is there a best practice here, or equally, bad practices to avoid?
I would do it the way you said: embed the S3 URL in the 413 response. The responsibility of recovering from a 413 then falls on the client, which checks for the 413 status and calls S3. If the consumer is internal, that should be fine; it could be an inconvenience if the consumer is external.
I am trying to understand Minimum TTL, Maximum TTL and Default TTL with this document.
As I understand it, Maximum TTL is used when an HTTP cache header appears in the response, to limit the maximum cache time, and Default TTL is used when there is no HTTP cache header, as the default cache time.
However, for Minimum TTL, there is no specific mention.
In addition, it mentions a relation with header forwarding. Does it mean that if I set any HTTP header to forward to the origin and Minimum TTL is not 0, it doesn't cache anything?
Minimum TTL
Specify the minimum amount of time, in seconds, that you want objects to stay in CloudFront caches before CloudFront forwards another request to your origin to determine whether the object has been updated. The default value for Minimum TTL is 0 seconds.
Important
If you configure CloudFront to forward all headers to your origin for a cache behavior, CloudFront never caches the associated objects. Instead, CloudFront forwards all requests for those objects to the origin. In that configuration, the value of Minimum TTL must be 0.
When deciding whether and for how long to cache an object, CloudFront uses the following logic:
Check for any Cache-Control response header with these values:
no-cache
no-store
private
If any of these is encountered, stop, and set the object's TTL¹ to the configured value of Minimum TTL. A non-zero value means CloudFront will cache objects that it would not otherwise cache.
Otherwise, find the origin's directive for how long the object may be cached. In order, find one of these response headers:
Cache-Control: s-maxage=x
Cache-Control: max-age=x
Expires
Stop on the first value encountered using this ordering, then continue to the next step.
If no value was found, use Default TTL. Stop.
Otherwise, using the value discovered in the previous step:
If smaller than Minimum TTL, then set the object's TTL to Minimum TTL; otherwise,
If larger than Maximum TTL, then set the object's TTL to Maximum TTL; otherwise,
Use the value found in the previous step as the object's TTL.
See https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html.
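The decision steps above can be sketched as a plain function. This is an illustration of the documented logic, not an actual CloudFront API, and the header parsing is deliberately simplified:

```javascript
// Sketch of CloudFront's TTL selection logic, as described above.
// headers: response headers with lowercase keys; limits: the cache
// behavior's Minimum/Default/Maximum TTL settings, in seconds.
function computeTtl(headers, { minTtl, defaultTtl, maxTtl }) {
    const cc = (headers['cache-control'] || '').toLowerCase();

    // Step 1: no-cache / no-store / private => use Minimum TTL.
    if (/\b(no-cache|no-store|private)\b/.test(cc)) {
        return minTtl;
    }

    // Step 2: find the origin's directive, in order of precedence:
    // s-maxage, then max-age, then Expires.
    let origin = null;
    let m = cc.match(/s-maxage=(\d+)/);
    if (!m) m = cc.match(/max-age=(\d+)/);
    if (m) {
        origin = parseInt(m[1], 10);
    } else if (headers['expires']) {
        origin = Math.max(0, (Date.parse(headers['expires']) - Date.now()) / 1000);
    }

    // Step 3: no directive found => use Default TTL.
    if (origin === null) return defaultTtl;

    // Step 4: clamp the origin's value between Minimum and Maximum TTL.
    return Math.min(Math.max(origin, minTtl), maxTtl);
}
```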
It's important to note that the TTL determines how long CloudFront is allowed to cache the response. It does not dictate how long CloudFront is required to cache the response. CloudFront can evict objects from cache before TTL expires, if the object is rarely accessed.
Whitelisting some (but not all) headers for forwarding to the origin does not change any of the above logic.
What it does change is how objects are evaluated to determine whether a cached response is available.
For example, if you forward the Origin header to the origin, then each unique value for an Origin header creates a different cache entry. Two requests that are identical, except for their Origin header, are then considered different objects... so a cached response for Origin: https://one.example.com would not be used if a later request for the same resource included Origin: https://two.example.com. Both would be sent to the origin, and both would be cached independently, for use in serving future requests with the same matching request header.
CloudFront does this because if you need to forward headers to the origin, then this implies that the origin will potentially react differently to different values for the whitelisted headers... so they are cached separately.
Forwarding headers unnecessarily will thus reduce your cache hit rate unnecessarily.
There is no documented limit to the number of different copies of the same resource that CloudFront can cache, based on varying headers.
But forwarding all headers to the origin reduces to almost zero the chance of any future request being truly identical. This would potentially consume a lot of cache storage, storing objects that would never again be reused, so CloudFront treats this as a special case, and does not allow any caching under this condition. As a result, you are required to set Minimum TTL to 0 for consistency.
¹the object's TTL as used here refers to CloudFront's internal timer for each cached object that tracks how long it is allowed to continue to serve the cached object without checking back with the origin. The object's TTL inside CloudFront is known only to CloudFront, so this value does not impact browser caching.
Does Cloudfront require special settings to trigger a log?
I have the following flow:
Devices -> Cloudfront -> API Gateway -> Lambda Function
which works, but Cloudwatch doesn't seem to create logs for the lambda function (or API Gateway).
However, the following flow creates logs:
Web/Curl -> API Gateway -> Lambda Function
In comments, above, we seem to have arrived at a conclusion that unanticipated client-side caching (or caching somewhere between the client and the AWS infrastructure) may be a more appropriate explanation for the observed behavior, since there is no known mechanism by which an independent CloudFront distribution could access a Lambda function via API Gateway and cause those requests not to be logged by Lambda.
So, I'll answer this with a way to confirm or reject this hypothesis.
CloudFront injects a header into both requests and responses, X-Amz-Cf-Id, containing opaque tokens that uniquely identify the request and the response. Documentation refers to these as "encrypted," but for our purposes, they're simply opaque values with a very high probability of uniqueness.
In spite of having the same name, the request header and the response header are actually two uncorrelated values (they don't match each other on the same request/response).
The origin-side X-Amz-Cf-Id, sent to the origin server in the request, is only really useful to AWS engineers, for troubleshooting.
But the viewer-side X-Amz-Cf-Id returned by CloudFront in the response is useful to us, because not only is it unique to each response (even responses from the CloudFront cache have different values each time you fetch the same object) but it also appears in the CloudFront access logs as x-edge-request-id (although the documentation does not appear to unambiguously state this).
Thus, if the client side sees duplicate X-Amz-Cf-Id values across multiple responses, there is something either internal to the client or between the client and CloudFront (in the client's network or ISP) that is causing cached responses to be seen by the client.
Correlating the X-Amz-Cf-Id from the client across multiple responses may be useful (since they should never be the same) and with the CloudFront logs may also be useful, since this confirms the timestamp of the request where CloudFront actually generated this particular response.
tl;dr: observing the same X-Amz-Cf-Id in more than one response means caching is occurring outside the boundaries of AWS.
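A trivial helper for running this test: collect the X-Amz-Cf-Id value from each response (e.g. with repeated curl -sI requests for the same URL) and check the collected values for repeats. Any repeat indicates caching outside AWS, since CloudFront generates a unique value per response:

```javascript
// Given X-Amz-Cf-Id values collected from successive responses for the
// same URL, return any values that appear more than once. A non-empty
// result means some component outside AWS served a cached response.
function findDuplicateCfIds(ids) {
    const seen = new Set();
    const duplicates = new Set();
    for (const id of ids) {
        if (seen.has(id)) duplicates.add(id);
        seen.add(id);
    }
    return [...duplicates];
}
```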
Note that even though CloudFront allows min/max/default TTLs to impact how long CloudFront will cache the object, these settings don't impact any downstream or client caching behavior. The origin should return correct Cache-Control response headers (e.g. private, no-cache, no-store) to ensure correct caching behavior throughout the chain. If the origin behavior can't be changed, then Lambda@Edge origin response or viewer response triggers can be used to inject appropriate response headers -- see this example on Server Fault.
Note also that CloudFront caches 4xx/5xx error responses for 5 minutes by default. See Amazon CloudFront Latency for an explanation and steps to disable this behavior, if desired. This feature is designed to give the origin server a break, and not bombard it with requests that are presumed to continue to fail, anyway. This behavior may cause various problems in testing as well as production, so there are cases where it should be disabled.
I have an application deployed on AWS Elastic Beanstalk. I added some simple licensing to stop abuse of the API; the user has to pass a license key as a field,
i.e.
search.myapi.com/?license=4ca53b04&query=fred
If this is not valid then the request is rejected.
However, until the monthly update the above query will always return the same data, so I now point search.myapi.com to an AWS CloudFront distribution; only if the query is not cached does it go to the actual server as
direct.myapi.com/?license=4ca53b04&query=fred
However, the problem is that if two users make the same query they won't be deemed the same by CloudFront, because the license parameter is different. So the CloudFront caching only works at a per-user level, which is of no use.
What I want is for CloudFront to ignore the license parameter for caching but not the other parameters. I don't mind too much if that means a user could access CloudFront with an invalid license, as long as they can't make a successful query to the server (since CloudFront calls are cheap but server calls are expensive, both in terms of CPU and monetary cost).
Perhaps what I need is something in front of CloudFront that does the license check and then strips out the license parameter, but I don't know what that would be?
Two possible solutions come to mind.
The first solution feels like a hack, but would prevent unlicensed users from successfully fetching uncached query responses. If the response is cached, it would leak out, but at no cost in terms of origin server resources.
If the content is not sensitive, and you're only trying to avoid petty theft/annoyance, this might be viable.
For query parameters, CloudFront allows you to forward all, cache based on whitelist.
So, whitelist query (and any other necessary fields) but not license.
Results for a given query:
valid license, cache miss: request goes to origin, origin returns response, response stored in cache
valid license, cache hit: response served from cache
invalid license, cache hit: response served from cache
invalid license, cache miss: request goes to origin, origin returns error, error stored in cache.
Oops. The last condition is problematic, because authorized users will receive the cached error if they make the same query.
But we can fix this, as long as the origin returns an HTTP error for an invalid request, such as 403 Forbidden.
As I explained in Amazon CloudFront Latency, CloudFront caches responses with HTTP errors using different timers (not min/default/max TTL), with a default of 5 minutes. This value can be set to 0 (or other values) for each of several individual HTTP status codes, like 403. So, for the error code your origin returns, set the Error Caching Minimum TTL to 0 seconds.
At this point, the problematic condition of caching error responses and playing them back to authorized clients has been solved.
The second option seems like a better idea, overall, but would require more sophistication and probably cost slightly more.
CloudFront has a feature that connects it with AWS Lambda, called Lambda@Edge. This allows you to analyze and manipulate requests and responses using simple JavaScript scripts that are run at specific trigger points in the CloudFront signal flow.
Viewer Request runs for each request, before the cache is checked. It can allow the request to continue into CloudFront, or it can stop processing and generate a response directly back to the viewer. Generated responses here are not stored in the cache.
Origin Request runs after the cache is checked, only for cache misses, before the request goes to the origin. If this trigger generates a response, the response is stored in the cache and the origin is not contacted.
Origin Response runs after the origin response arrives, only for cache misses, and before the response goes into the cache. If this trigger modifies the response, the modified response is stored in the cache.
Viewer Response runs immediately before the response is returned to the viewer, for both cache misses and cache hits. If this trigger modifies the response, the modified response is not cached.
From this, you can see how this might be useful.
A Viewer Request trigger could check each request for a valid license key, and reject those without. For this, it would need access to a way to validate the license keys.
If your client base is very small or rarely changes, the list of keys could be embedded in the trigger code itself.
Otherwise, it needs to validate the key, which could be done by sending a request to the origin server from within the trigger code (the runtime environment allows your code to make outbound requests and receive responses via the Internet) or by doing a lookup in a hosted database such as DynamoDB.
Lambda@Edge triggers run in Lambda containers, and depending on traffic load, observations suggest that it is very likely that subsequent requests reaching the same edge location will be handled by the same container. Each container only handles one request at a time, but the container becomes available for the next request as soon as control is returned to CloudFront. As a consequence of this, you can cache the results in memory in a global data structure inside each container, significantly reducing the number of times you need to ascertain whether a license key is valid. The function either allows CloudFront to continue processing as normal, or actively rejects the invalid key by generating its own response. A single trigger will cost you a little under $1 per million requests that it handles.
This solution prevents missing or unauthorized license keys from actually checking the cache or making query requests to the origin. As before, you would want to customize the query string whitelist in the CloudFront cache behavior settings to eliminate license from the whitelist, and change the error caching minimum TTL to ensure that errors are not cached, even though these errors should never occur.
I'm new to AWS Lambda and Lambda@Edge and I'm trying to understand the purpose. The Lambda@Edge promo page kind of implies Edge is a middleware, in that you can modify the request.
I see in the nodejs examples that to continue processing of a request you can use callback(null, request); but whenever I use that I get a 502 response.
For example can I add/modify a header and then continue a request to a Cloudformation or API Gateway backend or must the lambda return an object of some sort?
Here is an example (logs show header is added and all is well, just that curl returns 502):
exports.handler = (event, context, callback) => {
console.log(context);
console.log(event);
const request = event;
request.headers.bar = 'foo';
console.log(event);
callback(null, request);
};
One of the things that sometimes stumps people when trying to use Lambda@Edge is that testing your script successfully in the Lambda console only tests whether your code can run without throwing exceptions.
What it doesn't test for is whether your test event looks like a test event that CloudFront would generate, or whether your returned values would actually be interpreted as valid by CloudFront.
The event you are passed is a complex object containing one record, and inside there is cf (CloudFront), which contains request. In a response trigger, it also contains response.
See Lambda@Edge Event Structure. There are test event templates in the Lambda console for various types of CloudFront interactions.
So you need to get the original event's request from the right part of the structure:
const request = event; // incorrect
const request = event.Records[0].cf.request; // correct
Headers are confusing at first, but this is actually an example of very sensible engineering design. HTTP headers are not case sensitive in HTTP/1.x and are always lowercase in HTTP/2, but JavaScript object keys are always case sensitive... so a sensible representation of headers takes all of these factors into account, as well as the fact that some headers can appear more than once and the ordering can be relevant.
In headers, the object key is always lowercase, and each value is an array of objects containing a key (which must match the outer key, except for lettercase) and a value (the header value).
request.headers.bar = 'foo'; // incorrect
request.headers['bar'] = [ { key: 'Bar', value: 'foo' } ]; // correct
Additionally, certain headers are blacklisted -- for reasons of security or simple sanity, you can't add or manipulate them.
See Headers in the Lambda@Edge section of the CloudFront Developer Guide.
Also, remember that CloudFront is a cache, and caches have cache keys -- the unique value that identifies a specific request, so that other, identical requests can be determined to actually be identical and served with the same response. The cache key in CloudFront consists of only what CloudFront is configured to send to the origin -- which does not include headers that you haven't whitelisted for forwarding to the origin in the Cache Behavior settings. Trying to set or modify a header inappropriately will result in a 502 error. In the example above, you would need to whitelist the Bar header for forwarding to the origin, in the cache distribution settings.
You might find it initially easier to learn by trying to modify a response, rather than a request, because they are somewhat more forgiving.
Note that in request triggers, you have essentially four possible outcomes:
leave the request unmodified and return control to CloudFront by using return callback(null, request); without changing anything
modify the request by modifying the request object and then calling return callback(null, request);
stop further CloudFront processing, and generate a response directly by building a valid response object and calling return callback(null, response);
throw a hard exception by calling callback() with a first argument other than null
In a viewer request trigger, generating a response returns the response to the viewer without checking the cache and without caching the response.
An origin request trigger only fires after the cache has already been checked, and the object is not there. If you generate a response in an origin request trigger, the response is stored in the cache and returned to the requester. The request is never sent to the origin if you generate a response in this trigger. If you modify the request, it is sent to the origin and the response is cached unless configured not to be cached.
An origin response trigger modifies or replaces the response from the origin, and the modified response is stored in the cache.
A viewer response trigger modifies or replaces the response that was either fetched from the cache or from the origin. The modified response is not cached.
Response triggers are also able to inspect the original request, in cases where this might be desirable.