I am using an ALB as a trigger for a Lambda function. When I send a POST request to the ALB, I can see the triggered request in CloudWatch; however, there is no response to the POST request I sent using Postman.
I added logs in the code to check whether it enters the Lambda, but I can't see any Python logs. I also attached a role with ElasticLoadBalancingFullAccess to the Lambda, but there is still no response. I am not sure how to debug this or move further. I tried multiple things: I added context.done(response) to the Lambda handler, and I also changed the return format to JSON with a status code and a body. Any insights will be appreciated.
EDIT:
Details about the ALB:
Listeners: port 80
Target: Lambda type, and I chose my Lambda function
Security group: a simple security group allowing public access (it works fine, as the Lambda is triggered by the request)
Lambda code:
import json

def lambda_handler(event, context):
    # Initialize your log configuration using the base class
    context.succeed({
        'statusCode': 200,
        'body': json.dumps("wuccedd")
    })
I also noticed that when I intentionally put an error in the Lambda function, for example like this:
def lambda_handler(event, context):
    # intentional error
    x = 10 / 0
    context.succeed({
        'statusCode': 200,
        'body': json.dumps("wuccedd")
    })
the request also stayed stuck, which suggests the handler function is not being entered. Any idea why the function shows as triggered in CloudWatch but the handler function isn't entered?
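For comparison, here is a minimal sketch of the response shape documented for ALB Lambda targets: the handler returns a plain dict with statusCode, headers and body (as far as I know the Python runtime's context object has no succeed/done helpers; those were Node.js 0.10 conventions). The function name and message text below are placeholders.

import json

def lambda_handler(event, context):
    print("Received event:", json.dumps(event))  # visible in CloudWatch Logs
    # The ALB expects the handler's return value itself to describe the HTTP response.
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "ok"})
    }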
Problem
When I add the idempotency configuration of aws-lambda-powertools, my function code is not executed properly.
The AWS Lambda serves as the message handler for an MS Teams chatbot. When the function performs a cold start, the async code within the handler is not executed and no message is returned to the user. I also don't see any logs, so it seems that the code in the async handler is not executed at all.
Could this be due to the way I handle my async handler?
Code
@idempotent(persistence_store=persistence_layer, config=cfg)
def lambda_handler(event: dict, context: dict):
    asyncio.get_event_loop().run_until_complete(lambda_messages(event))
    payload = json.loads(event["body"])
    return {"status": 400, "payload": payload}
The issue was due to the timeout of my AWS SAM function not being configured properly. Because of aws-lambda-powertools it was hard to debug, as the error was not easily visible.
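For reference, a rough sketch of how the timeout can be raised in a SAM template; the resource name, handler and runtime here are placeholders, and the default Timeout for AWS::Serverless::Function is 3 seconds:

Resources:
  ChatbotHandlerFunction:            # placeholder resource name
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler    # placeholder
      Runtime: python3.9
      Timeout: 30                    # seconds; default is 3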
Using API Gateway, I am trying to define a POST endpoint that accepts application/json to do the following:
1. Trigger a Lambda asynchronously
2. Respond with a JSON payload composed of elements from the request body
I have #1 working. I think it's by the book.
It's #2 I'm getting tripped up on. It looks like I don't have access to the request body in the context of the response mapping template. I have access to the original query params with $input.params but I cannot find any property that will give me the original request body, and I need it to get the data that I want to respond with. It's either that or I need to figure out how to get the asynchronous launch of a Lambda to somehow provide the original request body.
Does anyone know if this is possible?
My goal is to ensure that my API responds as fast as possible without incurring a cold start of a Lambda to respond AND simultaneously triggering an asynchronous workflow by starting a Lambda. I'd also be willing to integrate with SNS instead of Lambda directly and have Lambda subscribe to the topic but I don't know if that will get me access to the data I need in the response mapping template.
From https://stackoverflow.com/a/61482410/3221253:
Save the original request body in the integration mapping template:
#set($context.requestOverride.path.body = $input.body)
Retrieve it in the integration response mapping template:
#set($body = $context.requestOverride.path.body)
{
    "statusCode": 200,
    "body": $body
}
You can also access specific attributes:
#set($object = $util.parseJson($body))
{
    "id": "$object.id"
}
To access the original request directly, you should use a Proxy Integration for Lambda rather than mapping things via a normal integration. You'll be able to access the entire request context, such as headers, path params, etc.
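As an illustration of the proxy integration route (not part of the quoted answer), a minimal Python handler sketch showing that the whole request, including the body, arrives on the event, and that the handler itself must build the HTTP response since no mapping template is applied:

import json

def lambda_handler(event, context):
    # Proxy integration passes the raw request through on the event.
    body = json.loads(event["body"]) if event.get("body") else {}
    query = event.get("queryStringParameters") or {}

    # The return value is the HTTP response; no response mapping template is applied.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"echo": body, "query": query})
    }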
I have determined that it is not possible to do what I want to do.
The question is: why is it not possible to change the HTTP status before returning responses in viewer response events in Lambda@Edge?
I have a Lambda function that must check every single response and change its status code based on a JSON file that I keep in my Lambda function. I deployed this function on the viewer response event because I want it to execute before every single response is returned. The AWS documentation (https://aws.amazon.com/blogs/networking-and-content-delivery/lambdaedge-design-best-practices/) says that if you want to execute a function for all requests, it should be placed on viewer events.
So I've created a simple function that basically clones the response and changes its HTTP status code before returning it. I wrote this code as a test:
exports.handler = async (event) => {
    const request = event.Records[0].cf.request;
    console.log('Original request');
    console.log(request);
    const response = event.Records[0].cf.response;
    console.log('Original response');
    console.log(response);
    // clone the response just to change the status code
    let cloneResponseReturn = JSON.parse(JSON.stringify(response));
    cloneResponseReturn.status = 404;
    cloneResponseReturn.statusDescription = 'Not Found';
    console.log('Log Clone Response Return');
    console.log(cloneResponseReturn);
    return cloneResponseReturn;
};
When I check the logs in CloudWatch, they show that the response has an HTTP 404 code, but for some reason CloudFront is still returning the response with a 200 status code. (I've cleared the browser cache and tested it in other tools such as Postman, but in all of them CloudFront returns HTTP 200.)
[Screenshot: CloudWatch log and the returned response]
If I change this function to execute on the origin response event it works, but I don't want it to execute ONLY on cache misses (as AWS tells us, origin events are executed only in that case). Since origin events run only on cache misses, to apply these redirects I would have to add a cache-busting header to make sure that origin events are always executed.
This behaviour of Lambda@Edge is really weird. Does anyone have any idea how I can solve this? I have already cleared the cache of the browsers and tools I am using to test the requests, and also invalidated the cache of my distribution, but it still doesn't work.
I posted the question in the AWS forum a week ago but it is still unanswered: https://forums.aws.amazon.com/message.jspa?messageID=885516#885516
Thanks in advance.
There is a resource /{myvar} defined in API Gateway, with a GET method. The integration request points to a Lambda function, with Lambda proxy integration enabled.
When I invoke a test execution from the API Gateway Resources editor for this resource and method, it works for queries like
/abc
/abc?def=ghi
but it fails to execute a query like
/abc?def
with the following response body visible in the test console:
{
    "cause": "Unable to invoke. Please try again later.",
    "logref": "f6c905bd-cc71-11e8-a731-37e05a411010",
    "message": ""
}
and the Response Headers and Logs boxes below are also empty.
When I publish the resource to a stage, accessing it through HTTPS in a browser results in {"message": "Internal server error"}. See the edit below.
How do I deal with that? How can I capture the whole resource path, with or without a query string, without the gateway failing? It fails the same way for the greedy resource /{myvar+}.
EDIT
After redeployment the problem no longer occurs on the stage. It still occurs within the Method Execution window in the API Gateway Resources editor.
You can capture the query string params through the "event" object sent to your function handler; the field is called queryStringParameters. You can just log this and go through it in CloudWatch to see what exactly is failing (see the sketch below).
P.S. Sorry for posting as an answer, don't have rep for comment ^^
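To illustrate that suggestion, a minimal sketch of a proxy-integration handler that logs the query string parameters (the log text and response fields are placeholders):

import json

def lambda_handler(event, context):
    # With proxy integration the query string arrives here; the field is None
    # when the request has no query string at all.
    params = event.get("queryStringParameters") or {}
    print("queryStringParameters:", json.dumps(params))  # shows up in CloudWatch Logs

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"path": event.get("path"), "params": params})
    }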
I'm attempting to integrate my Lambda function, which must run asynchronously because it takes too long, with API Gateway. I believe I must, instead of choosing the "Lambda" integration type, choose "AWS Service" and specify Lambda (e.g. this and this seem to imply that).
However, I get the message "AWS ARN for integration must contain path or action" when I attempt to set the AWS Subdomain to the ARN of my Lambda function. If I set the subdomain to just the name of my Lambda function, when attempting to deploy I get "AWS ARN for integration contains invalid path".
What is the proper AWS Subdomain for this type of integration?
Note that I could also take the advice of this post and set up a Kinesis stream, but that seems excessive for my simple use case. If that's the proper way to resolve my problem, happy to try that.
Edit: Included screen shot
Edit: Please see comment below for an incomplete resolution.
So it's pretty annoying to set up, but here are two ways:
1. Set up a regular Lambda integration and then add the InvocationType header described at http://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html. The value should be 'Event'. This is annoying because the console won't let you add headers when you have a Lambda function as the integration type; you'll have to use the SDK or the CLI, or use Swagger, where you can add the header easily.
2. Set the whole thing up as an AWS integration in the console (this is what you're doing in the question), just so you can set the InvocationType header there (see the boto3 sketch after this list):
   - Leave the subdomain blank.
   - Use "Use path override" and set it to /2015-03-31/functions/<FunctionARN>/invocations, where <FunctionARN> is the full ARN of your Lambda function.
   - The HTTP method is POST.
   - Add a static header X-Amz-Invocation-Type with the value 'Event'.
   Reference: http://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html
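For what it's worth, a rough boto3 sketch of the same setup (all IDs, the region and the function ARN are placeholders; this simply mirrors the console steps above):

import boto3

apigw = boto3.client("apigateway")

# Placeholders: substitute your own REST API ID, resource ID, region and Lambda ARN.
rest_api_id = "abc123"
resource_id = "def456"
lambda_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-async-function"

apigw.put_integration(
    restApiId=rest_api_id,
    resourceId=resource_id,
    httpMethod="POST",
    type="AWS",  # plain AWS integration, not AWS_PROXY
    integrationHttpMethod="POST",
    uri=("arn:aws:apigateway:us-east-1:lambda:path"
         "/2015-03-31/functions/" + lambda_arn + "/invocations"),
    # Static header values must be wrapped in single quotes.
    requestParameters={
        "integration.request.header.X-Amz-Invocation-Type": "'Event'"
    },
)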
The other option, which is what I did, was to still use the Lambda integration but with two Lambdas. The first (code below) runs in under a second and returns immediately. What it really does is fire off a second Lambda (your primary one), which can be long running (up to the 15-minute limit), as an Event. I found this more straightforward.
const AWS = require('aws-sdk'); // the v2 SDK bundled with older Node.js Lambda runtimes
const Lambda = new AWS.Lambda();

/**
 * Note: Step Functions, which are called out in many answers online, do NOT actually work in this case. The reason
 * being that if you use Sequential or even Parallel steps, they both require everything to complete before a response
 * is sent. That means that this one will execute quickly, but Step Functions will still wait on the other one to
 * complete, thus defeating the purpose.
 *
 * @param {Object} event The Event from Lambda
 */
exports.handler = async (event) => {
    let params = {
        FunctionName: "<YOUR FUNCTION NAME OR ARN>",
        InvocationType: "Event", // <--- This is KEY, as it tells Lambda to start execution but return immediately rather than wait.
        Payload: JSON.stringify(event)
    };

    // We have to wait for the invocation to at least be submitted. Otherwise this Lambda returns too fast,
    // before the other Lambda can be submitted to the backend queue for execution.
    await new Promise((resolve, reject) => {
        Lambda.invoke(params, function (err, data) {
            if (err) {
                reject(err);
            } else {
                resolve('Lambda invoked: ' + data);
            }
        });
    });

    // Always return 200, no matter what.
    return {
        statusCode: 200,
        body: "Event Handled"
    };
};