I'm new to AWS Lambda and Lambda@Edge and I'm trying to understand their purpose. The Lambda@Edge promo page seems to imply that it acts as middleware, in that you can modify the request.
I see in the Node.js examples that to continue processing a request you can use callback(null, request);, but whenever I use that I get a 502 response.
For example, can I add/modify a header and then continue the request to a CloudFront or API Gateway backend, or must the Lambda return an object of some sort?
Here is an example (logs show header is added and all is well, just that curl returns 502):
exports.handler = (event, context, callback) => {
    console.log(context);
    console.log(event);
    const request = event;
    request.headers.bar = 'foo';
    console.log(event);
    callback(null, request);
};
One of the issues that sometimes stumps people when trying to use Lambda@Edge is that testing your script successfully in the Lambda console only tests whether your code can run without throwing exceptions.
What it doesn't test is whether your test event actually looks like an event CloudFront would generate, or whether the values you return would be interpreted as valid by CloudFront.
The event you are passed is a complex object containing one record, and inside there is cf (CloudFront), which contains request. If yours is a response trigger, it also contains response.
See the Lambda@Edge Event Structure documentation. There are test event templates in the Lambda console for the various types of CloudFront interactions.
So you need to get the original event's request from the right part of the structure:
const request = event; // incorrect
const request = event.Records[0].cf.request; // correct
Headers are confusing at first, but this is actually an example of very sensible engineering design. HTTP headers are not case-sensitive in HTTP/1.x and are always lowercase in HTTP/2, but JavaScript object keys are always case-sensitive... so a sensible representation of headers takes all of these factors into account, as well as the fact that some headers can appear more than once and their ordering can be relevant.
In headers, the object key is always lowercase, and each value is an array of objects containing a key (which must match the outer key, except for lettercase) and a value (the header value).
request.headers.bar = 'foo'; // incorrect
request.headers['bar'] = [ { key: 'Bar', value: 'foo' } ]; // correct
Additionally, certain headers are blacklisted -- for reasons of security or simple sanity, you can't add or manipulate them.
See Headers in the Lambda@Edge section of the CloudFront Developer Guide.
Also, remember that CloudFront is a cache, and caches have cache keys -- the unique value that identifies a specific request, so that other, identical requests can be determined to actually be identical and served the same response. The cache key in CloudFront consists of only what CloudFront is configured to send to the origin -- which does not include headers that you haven't whitelisted for forwarding to the origin in the Cache Behavior settings. Trying to set or modify a header inappropriately will result in a 502 error. In the example above, you would need to whitelist the Bar header for forwarding to the origin in the distribution's Cache Behavior settings.
You might find it initially easier to learn by trying to modify a response, rather than a request, because they are somewhat more forgiving.
Note that in request triggers, you have essentially four possible outcomes:
leave the request unmodified and return control to CloudFront by calling return callback(null, request); without changing anything
modify the request by modifying the request object and then calling return callback(null, request);
stop further CloudFront processing and generate a response directly, by building a valid response object and calling return callback(null, response);
throw a hard exception by setting the first callback() argument to something other than null
In a viewer request trigger, generating a response returns the response to the viewer without checking the cache and without caching the response.
An origin request trigger only fires after the cache has already been checked, and the object is not there. If you generate a response in an origin request trigger, the response is stored in the cache and returned to the requester. The request is never sent to the origin if you generate a response in this trigger. If you modify the request, it is sent to the origin and the response is cached unless configured not to be cached.
An origin response trigger modifies or replaces the response from the origin, and the modified response is stored on the cache.
A viewer response trigger modifies or replaces the response that was either fetched from the cache or from the origin. The modified response is not cached.
Response triggers are also able to inspect the original request, in cases where this might be desirable.
Related
I have read the cache key documentation on cloudfront:
https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-cloudfront-announces-cache-and-origin-request-policies/
I have a lambda#edge function that gets triggered by Viewer Request. There I set request.uri to values I need to fetch proper files from S3.
From the documentation I don't see anything about request.uri being part of what the cache key considers.
Is there a way to make request.uri affect the cache key?
In Lambda@Edge, request.uri (where the request object is event.Records[0].cf.request) is the resource (path) component of the request URL, including the filename and extension.
This is always part of the cache key. This is default behavior and can't be disabled.
By default, [the cache key] consists of the CloudFront distribution hostname and the resource portion of the request URL (path, file name, and extension)
https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-cloudfront-announces-cache-and-origin-request-policies/
...however, this isn't the complete answer.
The value of request.uri can be modified by viewer request and/or origin request triggers... so an important consideration here is when is this value evaluated for determining the cache key?
If the trigger is a viewer request trigger, the cache key uses the value of request.uri as it stands after the trigger runs. This means if a viewer request trigger modifies the value of request.uri (and returns the modified request object to CloudFront) then the cache key is modified to contain the revised value. Cache lookup occurs immediately after the viewer request trigger finishes and returns control to CloudFront.
Things are very different in an origin request trigger. Modifying request.uri in an origin request trigger does not change the cache key (which is completely frozen at that point, as the cache has already been checked and a cacheable response will be stored under the same cache key that generated the cache miss). The value as it stands before the origin request trigger runs is what's in the cache key.
Changing request.uri in either trigger type will also change the request URI that the origin server receives.
Using API Gateway, I am trying to define a POST end point that accepts application/json to do the following:
Trigger a Lambda asynchronously
Respond with a JSON payload composed of elements from the request body
I have #1 working. I think it's by the book.
It's #2 I'm getting tripped up on. It looks like I don't have access to the request body in the context of the response mapping template. I have access to the original query params with $input.params but I cannot find any property that will give me the original request body, and I need it to get the data that I want to respond with. It's either that or I need to figure out how to get the asynchronous launch of a Lambda to somehow provide the original request body.
Does anyone know if this is possible?
My goal is to ensure that my API responds as fast as possible without incurring a cold start of a Lambda to respond AND simultaneously triggering an asynchronous workflow by starting a Lambda. I'd also be willing to integrate with SNS instead of Lambda directly and have Lambda subscribe to the topic but I don't know if that will get me access to the data I need in the response mapping template.
From https://stackoverflow.com/a/61482410/3221253:
Save the original request body in the integration mapping template:
#set($context.requestOverride.path.body = $input.body)
Retrieve it in the integration mapping response:
#set($body = $context.requestOverride.path.body)
{
    "statusCode": 200,
    "body": $body
}
You can also access specific attributes:
#set($object = $util.parseJson($body))
{
    "id": "$object.id"
}
To access the original request directly, you should use a Proxy Integration for Lambda rather than mapping things via a normal integration. You'll be able to access the entire request context, such as headers, path params, etc.
I have determined that it is not possible to do what I want to do.
The question is: Why is it not possible to change the HTTP status before returning responses in viewer response events in Lambda@Edge?
I have a Lambda function that must check every single response and change its status code based on a JSON file that I have in my Lambda function. I deployed this function as a viewer response trigger because I want it to execute before every single response is returned. The AWS documentation ( https://aws.amazon.com/blogs/networking-and-content-delivery/lambdaedge-design-best-practices/ ) says that if you want to execute a function for all requests, it should be placed in viewer events.
So, I've created a simple function that basically clones the response and changes its HTTP status code before returning it. I wrote this code for the test:
exports.handler = async (event) => {
    const request = event.Records[0].cf.request;
    console.log('Original request');
    console.log(request);
    const response = event.Records[0].cf.response;
    console.log('Original response');
    console.log(response);
    // clone the response just to change the status code
    let cloneResponseReturn = JSON.parse(JSON.stringify(response));
    cloneResponseReturn.status = 404;
    cloneResponseReturn.statusDescription = 'Not Found';
    console.log('Log Clone Response Return');
    console.log(cloneResponseReturn);
    return cloneResponseReturn;
};
When I check the logs in CloudWatch, they show that the response has an HTTP 404 code, but for some reason CloudFront still returns the response with a 200 status code. (I've cleared browser caches and tested in other tools such as Postman, but in all of them CloudFront returns HTTP 200.)
CloudWatch Log and Response print:
If I change this function to execute as an origin response trigger it works, but I don't want it to execute ONLY on cache misses (as AWS tells us, origin events are executed only in that case). Since origin events run only on cache misses, to execute these redirects I would have to create a cache-busting header to make sure that origin events always execute.
This behaviour of Lambda@Edge is really weird. Does anyone have any idea how I can solve it? I have already tried clearing the cache of the browsers and tools I'm using to test the requests, and also invalidating the cache of my distribution, but it still doesn't work.
I posted the question in the AWS Forum a week ago but it is still unanswered: https://forums.aws.amazon.com/message.jspa?messageID=885516#885516
Thanks in advance.
Does CloudFront require special settings to trigger a log?
I have the following flow:
Devices -> Cloudfront -> API Gateway -> Lambda Function
which works, but CloudWatch doesn't seem to create logs for the Lambda function (or API Gateway).
However, the following flow creates logs:
Web/Curl -> API Gateway -> Lambda Function
In comments, above, we seem to have arrived at a conclusion that unanticipated client-side caching (or caching somewhere between the client and the AWS infrastructure) may be a more appropriate explanation for the observed behavior, since there is no known mechanism by which an independent CloudFront distribution could access a Lambda function via API Gateway and cause those requests not to be logged by Lambda.
So, I'll answer this with a way to confirm or reject this hypothesis.
CloudFront injects a header into both requests and responses, X-Amz-Cf-Id, containing opaque tokens that uniquely identify the request and the response. Documentation refers to these as "encrypted," but for our purposes, they're simply opaque values with a very high probability of uniqueness.
In spite of having the same name, the request header and the response header are actually two uncorrelated values (they don't match each other in the same request/response pair).
The origin-side X-Amz-Cf-Id sent to the origin server in the request is only really useful to AWS engineers, for troubleshooting.
But the viewer-side X-Amz-Cf-Id returned by CloudFront in the response is useful to us, because not only is it unique to each response (even responses from the CloudFront cache have different values each time you fetch the same object) but it also appears in the CloudFront access logs as x-edge-request-id (although the documentation does not appear to unambiguously state this).
Thus, if the client side sees duplicate X-Amz-Cf-Id values across multiple responses, there is something either internal to the client or between the client and CloudFront (in the client's network or ISP) that is causing cached responses to be seen by the client.
Comparing the X-Amz-Cf-Id values seen by the client across multiple responses is useful (since they should never be the same), and correlating them with the CloudFront logs is also useful, since this confirms the timestamp of the request for which CloudFront actually generated that particular response.
tl;dr: observing the same X-Amz-Cf-Id in more than one response means caching is occurring outside the boundaries of AWS.
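To test the hypothesis, collect the viewer-side X-Amz-Cf-Id value from a series of responses and check for repeats; a small sketch (the helper name is arbitrary, and the values would come from your captured response headers):

```javascript
'use strict';

// Returns the X-Amz-Cf-Id values that appeared more than once.
// CloudFront generates a unique value per response, so any duplicate
// points to caching outside AWS (client, proxy, or ISP).
function findDuplicateCfIds(ids) {
    const seen = new Set();
    const duplicates = new Set();
    for (const id of ids) {
        if (seen.has(id)) {
            duplicates.add(id);
        }
        seen.add(id);
    }
    return [...duplicates];
}
```

Feed it the header values captured from repeated fetches of the same URL; an empty result is consistent with every response having actually come from CloudFront.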
Note that even though CloudFront allows min/max/default TTLs to impact how long CloudFront will cache the object, these settings don't impact any downstream or client caching behavior. The origin should return correct Cache-Control response headers (e.g. private, no-cache, no-store) to ensure correct caching behavior throughout the chain. If the origin behavior can't be changed, then Lambda@Edge origin response or viewer response triggers can be used to inject appropriate response headers -- see this example on Server Fault.
Note also that CloudFront caches 4xx/5xx error responses for 5 minutes by default. See Amazon CloudFront Latency for an explanation and steps to disable this behavior, if desired. This feature is designed to give the origin server a break, and not bombard it with requests that are presumed to continue to fail, anyway. This behavior may cause various problems in testing as well as production, so there are cases where it should be disabled.
I have an application deployed on AWS Elastic Beanstalk, I added some simple licensing to stop abuse of the api, the user has to pass a licensekey as a field
i.e
search.myapi.com/?license=4ca53b04&query=fred
If this is not valid then the request is rejected.
However, until the monthly update the above query will always return the same data, so I now point search.myapi.com to an AWS CloudFront distribution; then, only if the query is not cached does it go to the actual server as
direct.myapi.com/?license=4ca53b04&query=fred
However, the problem is that if two users make the same query they won't be deemed the same by CloudFront, because the license parameter is different. So the CloudFront caching only works at a per-user level, which is of no use.
What I want is to have CloudFront ignore the license parameter for caching but not the other parameters. I don't mind too much if that means a user could access CloudFront with an invalid license, as long as they can't make a successful query to the server (since CloudFront calls are cheap but server calls are expensive, both in terms of CPU and monetary cost).
Perhaps what I need is something in front of CloudFront that does the license check and then strips out the license parameter, but I don't know what that would be?
Two possible solutions come to mind.
The first solution feels like a hack, but would prevent unlicensed users from successfully fetching uncached query responses. If the response is cached, it would leak out, but at no cost in terms of origin server resources.
If the content is not sensitive, and you're only trying to avoid petty theft/annoyance, this might be viable.
For query parameters, CloudFront allows you to forward all, cache on whitelist.
So, whitelist query (and any other necessary fields) but not license.
Results for a given query:
valid license, cache miss: request goes to origin, origin returns response, response stored in cache
valid license, cache hit: response served from cache
invalid license, cache hit: response served from cache
invalid license, cache miss: request goes to origin, origin returns error, error stored in cache.
Oops. The last condition is problematic, because authorized users will receive the cached error if they make the same query.
But we can fix this, as long as the origin returns an HTTP error for an invalid request, such as 403 Forbidden.
As I explained in Amazon CloudFront Latency, CloudFront caches responses with HTTP errors using different timers (not min/default/max-ttl), with a default of 5 minutes. This value can be set to 0 (or other values) for each of several individual HTTP status codes, like 403. So, for the error code your origin returns, set the Error Caching Minimum TTL to 0 seconds.
At this point, the problematic condition of caching error responses and playing them back to authorized clients has been solved.
The second option seems like a better idea, overall, but would require more sophistication and probably cost slightly more.
CloudFront has a feature that connects it with AWS Lambda, called Lambda@Edge. This allows you to analyze and manipulate requests and responses using simple JavaScript functions that run at specific trigger points in the CloudFront signal flow.
Viewer Request runs for each request, before the cache is checked. It can allow the request to continue into CloudFront, or it can stop processing and generate a response directly back to the viewer. Responses generated here are not stored in the cache.
Origin Request runs after the cache is checked, only for cache misses, before the request goes to the origin. If this trigger generates a response, the response is stored in the cache and the origin is not contacted.
Origin Response runs after the origin response arrives, only for cache misses, and before the response goes into the cache. If this trigger modifies the response, the modified response is stored in the cache.
Viewer Response runs immediately before the response is returned to the viewer, for both cache misses and cache hits. If this trigger modifies the response, the modified response is not cached.
From this, you can see how this might be useful.
A Viewer Request trigger could check each request for a valid license key, and reject those without. For this, it would need access to a way to validate the license keys.
If your client base is very small or rarely changes, the list of keys could be embedded in the trigger code itself.
Otherwise, it needs to validate the key, which could be done by sending a request to the origin server from within the trigger code (the runtime environment allows your code to make outbound requests and receive responses via the Internet) or by doing a lookup in a hosted database such as DynamoDB.
Lambda@Edge triggers run in Lambda containers, and depending on traffic load, observations suggest it is very likely that subsequent requests reaching the same edge location will be handled by the same container. Each container handles only one request at a time, but the container becomes available for the next request as soon as control is returned to CloudFront. As a consequence, you can cache results in memory in a global data structure inside each container, significantly reducing the number of times you need to ascertain whether a license key is valid. The function either allows CloudFront to continue processing as normal, or actively rejects the invalid key by generating its own response. A single trigger will cost you a little under $1 per million requests that it handles.
This solution prevents missing or unauthorized license keys from actually checking the cache or making query requests to the origin. As before, you would want to customize the query string whitelist in the CloudFront cache behavior settings to eliminate license from the whitelist, and change the error caching minimum TTL to ensure that errors are not cached, even though these errors should never occur.