How does the handler of a Lambda function work? - amazon-web-services

I want to call this Lambda Function with a payload, for example a username that I will choose:
{
"iamuser": "Joe"
}
I don't understand how the handler of a Lambda function works, or how the JSON payload becomes the event object in the handler. On each run I want to pass a different iamuser value to the Lambda.
import boto3
from botocore.exceptions import ClientError
import json

iamuser = {}
client_iam = boto3.client('iam')

def create_user(iamuser):
    client_iam.create_user(UserName=iamuser)

def lambda_handler(event, context):
    try:
        client_iam.get_user(UserName=iamuser)
    except ClientError as error:
        if error.response["Error"]["Code"] == "NoSuchEntity":
            create_user(iamuser)

In Java, you can implement RequestHandler and override the handleRequest(parameters) method. This way, you can pass the input to the Lambda handler.
For Python, you can take a look at the examples below from the AWS docs:
https://docs.aws.amazon.com/lambda/latest/dg/python-handler.html
In short (as per the AWS docs), the Lambda function handler is the method in your function code that processes events. When your function is invoked, Lambda runs the handler method. When the handler exits or returns a response, it becomes available to handle another event.
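For example, here is a minimal sketch of a Python handler for the payload in your question; the JSON you invoke the function with arrives as the event dict:
def lambda_handler(event, context):
    # for the test payload {"iamuser": "Joe"}, event["iamuser"] is "Joe"
    iamuser = event["iamuser"]
    return {"iamuser": iamuser}
You can then supply a different iamuser value on each invocation, for example as the test event in the console or as the Payload of a CLI/boto3 invoke call.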
Hope this helps.

Related

How to retrieve delivery_attempt from an event-triggered Cloud Function?

I am writing a Python Cloud Function and I would like to retrieve the "delivery_attempt" attribute.
Based on the documentation, the Cloud Function gets only 2 parameters: event (of type PubsubMessage) and context.
def hello_pubsub(event, context):
    """Background Cloud Function to be triggered by Pub/Sub.
    Args:
        event (dict): The dictionary with data specific to this type of
            event. The `@type` field maps to
            `type.googleapis.com/google.pubsub.v1.PubsubMessage`.
            The `data` field maps to the PubsubMessage data
            in a base64-encoded string. The `attributes` field maps
            to the PubsubMessage attributes if any is present.
        context (google.cloud.functions.Context): Metadata of triggering event
            including `event_id` which maps to the PubsubMessage
            messageId, `timestamp` which maps to the PubsubMessage
            publishTime, `event_type` which maps to
            `google.pubsub.topic.publish`, and `resource` which is
            a dictionary that describes the service API endpoint
            pubsub.googleapis.com, the triggering topic's name, and
            the triggering event type
            `type.googleapis.com/google.pubsub.v1.PubsubMessage`.
    Returns:
        None. The output is written to Cloud Logging.
    """
How can I retrieve the delivery_attempt on a message?
When triggering a Cloud Function via Pub/Sub, one does not have access to the delivery attempt field. Instead, the way to get access is to set up an HTTP-based Cloud Function and use the trigger URL as the push endpoint in a subscription you create separately.
Then, you can access the delivery attempt as follows:
def hello_world(request):
    """Responds to any HTTP request.
    Args:
        request (flask.Request): HTTP request object.
    Returns:
        The response text or any set of values that can be turned into a
        Response object using
        `make_response <http://flask.pocoo.org/docs/1.0/api/#flask.Flask.make_response>`.
    """
    request_json = request.get_json()
    # the delivery attempt count is part of the push envelope
    print(request_json["deliveryAttempt"])
    return "OK"
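For completeness, the deliveryAttempt field is only populated when the subscription has a dead-letter policy. Here is a sketch of creating such a push subscription with the google-cloud-pubsub client; all project, topic, and endpoint names below are placeholders:
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
topic_path = "projects/my-project/topics/my-topic"
dead_letter_topic_path = "projects/my-project/topics/my-dead-letter-topic"
subscription_path = subscriber.subscription_path("my-project", "my-push-subscription")

subscriber.create_subscription(
    request={
        "name": subscription_path,
        "topic": topic_path,
        # push deliveries go to the HTTP Cloud Function's trigger URL
        "push_config": pubsub_v1.types.PushConfig(
            push_endpoint="https://REGION-my-project.cloudfunctions.net/hello_world"
        ),
        # deliveryAttempt is only set when a dead-letter policy exists
        "dead_letter_policy": pubsub_v1.types.DeadLetterPolicy(
            dead_letter_topic=dead_letter_topic_path,
            max_delivery_attempts=5,
        ),
    }
)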

AWS Lambda API Gateway Handling Different Routes

I have 3 webhooks that call my API Gateway, which calls my Lambda function.
url/webhook/....
I want each webhook to call its own python method
startDelivery --> def start_delivery(event, context):
UpdateStatus--> def update_status(event, context):
EndDelivery--> def end_delivery(event, context):
I understand that most likely one handler will be executed via "url/webhook", which then calls the appropriate Python method,
e.g. a def process_task that calls one of the three.
What is the ideal way to set up this structure?
Creating different URLs for the webhooks, with API Gateway capturing them and somehow calling the right handler?
url/webhook/start
url/webhook/status
url/webhook/end
Or sending a different query string for each webhook, and parsing the query string in the Lambda to call the right Python method?
Keep in mind that a Lambda function has one handler (=> 1 invocation = 1 method called).
You can achieve the 1 route <-> 1 method by doing one of the following:
You have a single Lambda function triggered by your 3 APIGW routes.
You can then add a simple router to your function which parses event['path'] and calls the appropriate method.
def lambda_handler(event, context):
    path = event['path']
    if path == '/webhook/start':
        return start_delivery(event, context)
    elif path == '/webhook/status':
        return update_status(event, context)
    elif path == '/webhook/end':
        return end_delivery(event, context)
    else:
        return {"statusCode": 404, "body": "NotFound"}
Create 1 Lambda function per route:
webhook/start triggers the StartDelivery Lambda function with start_delivery as handler
webhook/status triggers the UpdateDelivery Lambda function with update_delivery as handler
webhook/end triggers the EndDelivery Lambda function with end_delivery as handler
You can use Infrastructure as Code (CloudFormation) to easily manage these functions (SAM: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-getting-started-hello-world.html)
acorbel's answer was helpful for me. Note that for HTTP APIs the latest payload format version is 2.0 and many fields have changed; for example, event['path'] doesn't exist with format 2.0.
Please check the below link for correct key names of event structure.
Working with AWS Lambda proxy integrations for HTTP APIs
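For example, a rough adaptation of the router above to the 2.0 payload format (assuming the same /webhook/* routes) might look like this:
def lambda_handler(event, context):
    # HTTP API payload format 2.0 exposes the path under rawPath /
    # requestContext.http.path instead of event['path']
    path = event['requestContext']['http']['path']
    if path == '/webhook/start':
        return start_delivery(event, context)
    elif path == '/webhook/status':
        return update_status(event, context)
    elif path == '/webhook/end':
        return end_delivery(event, context)
    return {"statusCode": 404, "body": "NotFound"}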

Making AWS API Gateway's response the Lambda function response

I'm trying to create a simple API Gateway in which a Lambda function is executed on a POST to a certain endpoint.
Setting that up was easy enough, but I'm having some trouble with the request/response handling. I'm sending the following request to the API Gateway (I'm using Python 3.7).
import requests

payload = {
    "data": "something",
    "data2": "sometsadas"
}
response = requests.post('https://endpoint.com/test', params=payload)
That endpoint activates a Lambda function when accessed. That function just returns the same event it received.
import json
def lambda_handler(event, context):
# TODO implement
return event
How can I make it so the return value of my lambda function is actually the response from the request? (Or at least a way in which the return value can be found somewhere inside the response)
It seems it was a problem with how the information is sent; JSON format is required. Solved it by doing the following in the code:
payload = {'data': 'someData'}
config_response = requests.post(endpointURL, data=json.dumps(payload))
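If the endpoint uses a Lambda proxy integration, another piece of the puzzle is the handler's return value: API Gateway only turns it into the HTTP response if it follows the statusCode/body shape. A minimal sketch that echoes the posted JSON back to the caller:
import json

def lambda_handler(event, context):
    # with a proxy integration the request body arrives as a JSON string
    # under event['body'], and the return value describes the HTTP response
    body = json.loads(event.get('body') or '{}')
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }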

How to get the return response from an AWS Lambda function

I have a simple Lambda function that returns a dict response, and another Lambda function that invokes it and prints the response.
lambda function A
def handler(event, context):
    params = event['list']
    return {"params": params + ["abc"]}
lambda function B invoking A
import json
import boto3

lambda_client = boto3.client('lambda')

a = [1, 2, 3]
x = {"list": a}
invoke_response = lambda_client.invoke(FunctionName="monitor-workspaces-status",
                                       InvocationType='Event',
                                       Payload=json.dumps(x))
print(invoke_response)
invoke_response
{u'Payload': <botocore.response.StreamingBody object at 0x7f47c58a1e90>, 'ResponseMetadata': {'HTTPStatusCode': 202, 'RequestId': '9a6a6820-0841-11e6-ba22-ad11a929daea'}, u'StatusCode': 202}
Why is the response status 202? Also, how can I get the response data from invoke_response? I could not find clear documentation of how to do it.
A 202 response means Accepted. It is a successful response but is telling you that the action you have requested has been initiated but has not yet completed. The reason you are getting a 202 is because you invoked the Lambda function asynchronously. Your InvocationType parameter is set to Event. If you want to make a synchronous call, change this to RequestResponse.
Once you do that, you can get the returned data like this:
data = invoke_response['Payload'].read()
Try data = invoke_response['Payload'].read(); you need read() because Payload is a StreamingBody object (<botocore.response.StreamingBody object at 0x110b91c50>).
It is in the boto3 docs. You can find more details about this here: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/resources.html#actions
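Putting that together, a sketch of the synchronous invocation and of decoding the returned payload (function name taken from the question) could look like:
import json
import boto3

lambda_client = boto3.client('lambda')

invoke_response = lambda_client.invoke(
    FunctionName="monitor-workspaces-status",
    InvocationType='RequestResponse',  # synchronous: wait for the function's result
    Payload=json.dumps({"list": [1, 2, 3]}),
)

# Payload is a StreamingBody; read and decode it to get the returned dict
data = json.loads(invoke_response['Payload'].read())
print(data)  # expected: {'params': [1, 2, 3, 'abc']}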

Amazon SQS and celery events (not JSON serializable)

I was looking into Amazon SQS today as an alternative to installing my own RabbitMQ on an EC2 instance.
I have followed the documentation as described here
Within a paragraph it says:
SQS does not yet support events, and so cannot be used with celery
events, celerymon or the Django Admin monitor.
I am a bit confused about what events means here, e.g. in the scenario below I have a periodic task every minute where I call sendEmail.delay(event) asynchronously.
e.g.
@celery.task(name='tasks.check_for_events')
@periodic_task(run_every=datetime.timedelta(minutes=1))
def check_for_events():
    now = datetime.datetime.utcnow().replace(tzinfo=utc, second=00, microsecond=00)
    events = Event.objects.filter(reminder_date_time__range=(now - datetime.timedelta(minutes=5), now))
    for event in events:
        sendEmail.delay(event)

@celery.task(name='tasks.sendEmail')
def sendEmail(event):
    event.sendMail()
When running it with Amazon SQS I get this error message:
tasks.check_for_events[7623fb2e-725d-4bb1-b09e-4eee24280dc6] raised exception: TypeError(' is not JSON serializable',)
So is that a limitation of SQS as pointed out in the documentation, or am I doing something fundamentally wrong?
Many thanks for advice,
I might have found the solution: simply refactor the sendMail() call on the event into the main task, so there is no need to serialize the object to JSON:
@celery.task(name='tasks.check_for_events')
@periodic_task(run_every=datetime.timedelta(minutes=1))
def check_for_events():
    now = datetime.datetime.utcnow().replace(tzinfo=utc, second=00, microsecond=00)
    events = list(Event.objects.filter(reminder_date_time__range=(now - datetime.timedelta(minutes=5), now)))
    for event in events:
        subject = 'Event Reminder'
        link = None
        message = ...
        sendEmail.delay(subject, message, event.user.email)

@celery.task(name='tasks.sendEmail')
def sendEmail(subject, message, email):
    send_mail(subject, message, settings.DEFAULT_FROM_EMAIL, [email])
This works with both RabbitMQ and Amazon SQS.
For someone returning to this post: this happens when the serializer defined in your Celery runtime config is not able to process the objects passed to the Celery task.
For example, if the config requires JSON and some Model object is supplied, the above-mentioned exception may be raised; see the sketch below.
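As a rough sketch (reusing the Event model and task names from the question), the usual workaround is to pass only JSON-serializable values, such as the primary key, and re-fetch the object inside the task:
@celery.task(name='tasks.sendEmail')
def sendEmail(event_id):
    # re-fetch the model inside the task so only the integer id crosses the broker
    event = Event.objects.get(pk=event_id)
    event.sendMail()

# caller: pass the primary key, not the model instance
sendEmail.delay(event.pk)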
(Q): Is it explicitly necessary to define these parameters?
# CELERY_ACCEPT_CONTENT=['json', ],
# CELERY_TASK_SERIALIZER='json',
# CELERY_RESULT_SERIALIZER='json',