How to add a built-in test event to a Lambda function?

I have several lambda functions in my project.
For each function I have an event file that is used for local test purposes.
I would like to know how I can attach these event files to my Lambda function in the AWS console as test events, so that I don't need to create a new event each time after a code deployment.
It seems like the answer involves the template.yaml file, but I couldn't find anything about it on the web.

This is not possible.
You cannot add or seed pre-existing 'test events' for a Lambda function - they're meant to just be an easy way of invoking your Lambda from within the console itself.
You will have to recreate them manually unless you are using shareable test events.

You might be looking for Shareable Test Events.
Shareable test events are test events that you can share with other AWS Identity and Access Management (IAM) users in the same AWS account. You can edit other users' shareable test events and invoke your function with them.
To create a shareable test event:
Open the Functions page of the Lambda console.
Choose the name of the function that you want to test.
Choose the Test tab.
Choose a Template.
Enter a Name for the test.
In the text entry box, enter the JSON test event.
Under Event sharing settings, choose Shareable.
Choose Save changes.
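If you want to seed these events from a deployment pipeline rather than through the console, the console appears to store shareable test events in an EventBridge schema registry named lambda-testevent-schemas, so you may be able to create them with the Schemas API. This is a minimal sketch, assuming that registry name, the _<function-name>-schema naming convention, and the OpenAPI "examples" layout match how the console stores them; verify against your account before relying on it.

import json
import boto3

schemas = boto3.client("schemas")

REGISTRY = "lambda-testevent-schemas"      # registry the console appears to use
FUNCTION_NAME = "my-function"              # hypothetical function name
SCHEMA_NAME = f"_{FUNCTION_NAME}-schema"   # assumed naming convention

# Shareable test events appear to be stored as OpenAPI 3 "examples" (assumed layout).
schema_doc = {
    "openapi": "3.0.0",
    "info": {"version": "1.0.0", "title": "Event"},
    "paths": {},
    "components": {
        "examples": {
            "my-deploy-test-event": {
                "value": {"key1": "value1", "key2": "value2"}
            }
        }
    },
}

# Create the registry once; ignore the error if it already exists.
try:
    schemas.create_registry(RegistryName=REGISTRY)
except schemas.exceptions.ConflictException:
    pass

# Use update_schema instead if the schema already exists for this function.
schemas.create_schema(
    RegistryName=REGISTRY,
    SchemaName=SCHEMA_NAME,
    Type="OpenApi3",
    Content=json.dumps(schema_doc),
)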

Related

Architecture for AWS configuration application

I am creating an application which should only store some configuration. I am using AWS AppConfig as the configuration store.
I want to be able to update this configuration data through code. So when an event happens, I want to send a message to SQS that holds the new configuration data to be appended. The queue should trigger a Lambda. The Lambda should get the latest configuration from AppConfig, append the new configuration, then deploy it to AppConfig.
As a result, I want AppConfig to have the old configurations, and the new ones appended.
Is there a simple way to achieve this using only AWS services?
I've not tried any of this or used AppConfig directly but it shouldn't be difficult for you to piece information together from the web.
Create an SQS queue to hold the updates.
Create a Lambda to read from the SQS queue.
Write code for the Lambda which receives the message from the queue, pulls the current AppConfig configuration and updates it with the new values, using one of the many AWS SDKs for your preferred language (a sketch follows below).
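As a rough illustration of that last step, here is a minimal sketch in Python with boto3. The application, environment, profile and deployment-strategy IDs are placeholders you would have to fill in, and the merge logic is just one example of appending keys.

import json
import boto3

appconfig = boto3.client("appconfig")
appconfigdata = boto3.client("appconfigdata")

# Placeholder identifiers - replace with your own AppConfig resources.
APP_ID = "app-id"
ENV_ID = "env-id"
PROFILE_ID = "profile-id"
STRATEGY_ID = "deployment-strategy-id"

def handler(event, context):
    # 1. Read the current configuration from AppConfig.
    session = appconfigdata.start_configuration_session(
        ApplicationIdentifier=APP_ID,
        EnvironmentIdentifier=ENV_ID,
        ConfigurationProfileIdentifier=PROFILE_ID,
    )
    latest = appconfigdata.get_latest_configuration(
        ConfigurationToken=session["InitialConfigurationToken"]
    )
    config = json.loads(latest["Configuration"].read() or "{}")

    # 2. Append the updates carried in the SQS messages.
    for record in event["Records"]:
        config.update(json.loads(record["body"]))

    # 3. Store the merged document as a new hosted configuration version and deploy it.
    version = appconfig.create_hosted_configuration_version(
        ApplicationId=APP_ID,
        ConfigurationProfileId=PROFILE_ID,
        Content=json.dumps(config).encode("utf-8"),
        ContentType="application/json",
    )
    appconfig.start_deployment(
        ApplicationId=APP_ID,
        EnvironmentId=ENV_ID,
        ConfigurationProfileId=PROFILE_ID,
        ConfigurationVersion=str(version["VersionNumber"]),
        DeploymentStrategyId=STRATEGY_ID,
    )

Note that, as the answer points out below, two concurrent invocations could still race between the read and the deploy, so you may want to cap the function's concurrency at 1 or use a FIFO queue.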
One thing that you should be aware of is that multiple instances of a Lambda can run at the same time, so assuming your AppConfig configuration looks like this:
{
"version": 1
}
then two updates get pushed to the SQS Queue at the same time:
{
"update1": "abc"
}
and
{
"update1": "xyz"
}
They could be executed at the same time, and a race condition may occur where both save but one overwrites the other.
I don't see the benefit of the SQS queue here, and without understanding the full use case or the reason for this setup, I suspect there may be a better way to achieve what you're trying to do.

AWS Lex UI does not show Lambda function from other account

My colleague created a Lex chatbot. I have been working on a Lambda function that queries an external database they want to use in their bot. I created an intent to access the function and then exported the intent. I set up the AWS IAM service role for Amazon Lex and created resource permissions using this guide (https://docs.aws.amazon.com/lambda/latest/dg/services-lex.html), though to make it work I had to use --action lambda:*. Still, while the import completes without error, the Lambda function does not seem to have been imported into the intent. When the intent is added to a bot, the Lambda function field is blank and the drop-down menu does not show my Lambda function as an option. Is there a way to make the function accessible in my colleague's account when setting up the intent?
There should be an easier way to do this, but this is the workflow that at least works, even if it is not elegant:
Create the Lambda function and export it as a zip (I use the python-lambda library).
The other user creates an empty Lambda function (same name and Python version).
The other user imports the zip into Lambda.
Create an intent in Lex and export it (do not put the Lambda function in this).
The other user imports the intent.
The other user modifies the fulfillment section of the intent to include the Lambda function they just created.
Completely kludgy, but it does work. I'm still looking for a better way to do it.
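As a side note on the permission step from the question, a grant narrower than --action lambda:* should be enough for Lex to invoke the function. A minimal sketch with boto3, assuming a hypothetical function name and intent ARN:

import boto3

lambda_client = boto3.client("lambda")

# Allow Amazon Lex to invoke the fulfillment function.
lambda_client.add_permission(
    FunctionName="my-lex-fulfillment",                                   # hypothetical name
    StatementId="lex-invoke",
    Action="lambda:InvokeFunction",                                       # narrower than lambda:*
    Principal="lex.amazonaws.com",
    SourceArn="arn:aws:lex:us-east-1:123456789012:intent:MyIntent:*",     # hypothetical intent ARN
)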

Can I create temporary users through Amazon Cognito?

Does Amazon Cognito support temporary users? For my use case, I want to be able to give access to external users, but limited to a time period (e.g., 7 days).
Currently, my solution is something like:
Create User in User Group
Schedule cron job to run in x days
Job will disable/remove User from User Group
This all seems quite manual, and I was hoping Cognito provided something like this automatically.
Unfortunately there is no built-in functionality to automate this workflow, so you would need to devise your own solution.
I would suggest the below approach to handling this:
Create a Lambda function that post-processes a user sign-up. This Lambda function would create a CloudWatch Events rule with a schedule for 7 days in the future. Using the SDK you would create the rule and assign another Lambda function as its target. When you specify the target in the put_targets call, use the Input parameter to pass in your own JSON; this should contain a metadata item identifying the user (a sketch of this step follows below).
You would then create a post-confirmation Lambda trigger which would invoke the Lambda you created in the above step. This would allow you to schedule an event every time a user signs up.
Finally, create the target Lambda for the CloudWatch event. This will read the input passed in from the rule and can use the AWS SDK to perform any Cognito operations you might want, such as deleting the user.
The benefit of using these services rather than a cron job is that you only perform processing when it is required. With a one-off script you would need to loop through every user in this temporary group and check whether each one is ready to be removed (and perhaps sometimes never remove users).
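A minimal sketch of the post-confirmation trigger and scheduling step described above, assuming a hypothetical cleanup function ARN and rule naming:

import json
from datetime import datetime, timedelta, timezone

import boto3

events = boto3.client("events")

# Hypothetical ARN of the Lambda that will remove the expired user.
CLEANUP_LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:remove-expired-user"

def post_confirmation_handler(event, context):
    # Cognito post-confirmation trigger: schedule a cleanup 7 days from now.
    run_at = datetime.now(timezone.utc) + timedelta(days=7)
    rule_name = f"remove-user-{event['userName']}"

    # One-shot cron expression for that specific date/time
    # (minutes hours day-of-month month day-of-week year).
    cron = f"cron({run_at.minute} {run_at.hour} {run_at.day} {run_at.month} ? {run_at.year})"
    events.put_rule(Name=rule_name, ScheduleExpression=cron)

    events.put_targets(
        Rule=rule_name,
        Targets=[{
            "Id": "1",
            "Arn": CLEANUP_LAMBDA_ARN,
            # Metadata the cleanup Lambda needs to find the user later.
            "Input": json.dumps({
                "userPoolId": event["userPoolId"],
                "userName": event["userName"],
                "ruleName": rule_name,
            }),
        }],
    )
    # Note: the cleanup Lambda also needs a resource policy allowing
    # events.amazonaws.com to invoke it (lambda add_permission).
    return event  # Cognito triggers must return the event object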
My solution for this is the following: instead of creating a post-confirmation Lambda trigger, you can also create a pre-authentication Lambda trigger. This trigger will check the user attribute "valid_until", which contains a Unix timestamp. The pre-authentication trigger will only let the user in if the value of the "valid_until" attribute is in the future. The main benefit of this solution is that you don't need any cron jobs.
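A minimal sketch of that pre-authentication trigger, assuming "valid_until" is a custom attribute (so it surfaces as custom:valid_until) holding a Unix timestamp:

import time

def pre_authentication_handler(event, context):
    # Cognito pre-authentication trigger: reject sign-in once the account has expired.
    attributes = event["request"]["userAttributes"]
    valid_until = int(attributes.get("custom:valid_until", "0"))

    if valid_until < time.time():
        # Raising an exception makes Cognito fail the authentication attempt.
        raise Exception("This temporary account has expired.")

    return event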

AWS S3 folder put event notification

I've written a function in Python that uploads a folder and its content to S3. Now I would like S3 to generate an event (so I can send it to a Lambda function). S3 only generates events at the object level; folders on S3 are just a visualization layer, which means that S3 has no internal representation for folders, and keys with the same root are simply grouped together. That said, so far I've come up with three approaches that revolve around the idea of a 'poison pill'.
Send a special file at the end of the folder upload process; its creation sends an event to a Lambda that can open the file to read custom directives to act on. This approach seems quite flexible; however, it poses serious concerns security-wise (I know that ACLs are in place for this reason, but I'm not quite sure it's enough), and it generates some overhead while downloading/uploading/deleting the file from/to local memory.
Map an event to the target Lambdas and fire it directly. The difference from the first approach is simply that in this case I'm not really creating a file on S3; I'm just making it look as if I did. I would use CloudWatch to fire custom S3-object-created events with the name of the folder for the Lambda to pick up. This approach feels a little more hacky than the other two, plus when I did my research on the matter it seemed like it shouldn't be possible to generate "mock" events on AWS (i.e. Trigger S3 create event). To my understanding, however, the function put_events should do the trick.
Using SQS would allow me to put the folder name into an SQS message that can later be consumed by a Lambda. This has some advantages over the other two approaches, since SQS now has a FIFO variant that allows for exactly-once delivery, reprocessing of failures (via dead-letter queues), etc.; however, it adds a non-trivial amount of complexity compared to the other approaches.
At this point I'm trying to opt for the most 'correct' approach, and in order to do so I'm trying to weigh the pros and cons to make an informed decision, which led me to some questions:
Is there another way I'm missing that does not involve client notification? (All the aforementioned approaches rely on the client sending the notification in one way or another, which is not very "cloudy".)
Is there a substantial difference between approaches 2 and 3, considering that both rely on sending the information in and out of a stream (CloudWatch and SQS respectively)?
Have you considered using the prefix option of the S3 bucket event? I tested it and it worked fine. In my S3 bucket I created two folders, test1 and test2. On the S3 event I added the prefix test1; with that in place, every time a put/copy operation happens under that prefix, the Lambda is triggered.
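For reference, this is roughly how such a prefix-filtered notification could be configured with boto3; the bucket name, function ARN and prefix are placeholders:

import boto3

s3 = boto3.client("s3")

# Trigger the Lambda only for objects created under the test1/ prefix.
s3.put_bucket_notification_configuration(
    Bucket="my-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {"Key": {"FilterRules": [{"Name": "prefix", "Value": "test1/"}]}},
        }]
    },
)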
I think your question nets down to "how can I trigger a Lambda function after I have uploaded a folder full of files to S3?"
Unless you have some information a priori server-side that you can use to determine when the folder upload has completed, the client is going to have to tell you.
Options I would consider:
Change your client to publish a message to SNS or SQS upon completion of the upload to S3. That message can then trigger your Lambda function.
After the last file has been uploaded to folder images/dogs/, upload a zero-sized object whose key is the same as the folder (images/dogs/). This is a 'sentinel file'. Use an S3 event trigger with a suffix of / to detect the upload of that 'folder' object and trigger your Lambda.
I prefer the first option. It achieves the end goal without resulting in extraneous S3 objects. With SNS you can also configure multiple downstream processes in response to the 'finished upload' message (a fan-out) if needed.
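To illustrate the first option, here is a minimal sketch of what the client could publish once its upload loop finishes; the topic ARN and message shape are assumptions:

import json
import boto3

sns = boto3.client("sns")

def notify_upload_complete(bucket, prefix, file_count):
    # Published by the uploader after the last object has been written.
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:folder-upload-complete",  # hypothetical topic
        Message=json.dumps({
            "bucket": bucket,
            "prefix": prefix,
            "files": file_count,
        }),
    )

The Lambda subscribed to that topic receives the bucket and prefix, so it can list and process every object under the folder.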

Can I parameterize AWS lambda functions differently for staging and release resources?

I have a Lambda function invoked by S3 put events, which in turn needs to process the objects and write to a database on RDS. I want to test things out in my staging stack, which means I have a separate bucket, different database endpoint on RDS, and separate IAM roles.
I know how to configure the lambda function's event source and IAM stuff manually (in the Console), and I've read about lambda aliases and versions, but I don't see any support for providing operational parameters (like the name of the destination database) on a per-alias basis. So when I make a change to the function, right now it looks like I need a separate copy of the function for staging and production, and I would have to keep them in sync manually. All of the logic in the code would be the same, and while I get the source bucket and key as a parameter to the function when it's invoked, I don't currently have a way to pass in the destination stuff.
For the destination DB information, I could have a switch statement in the function body that checks the originating S3 bucket and makes a decision, but I hate making every function have to keep that mapping internally. That wouldn't work for the DB credentials or IAM policies, though.
I suppose I could automate all or most of this with the SDK. Has anyone set something like this up for a continuous integration-style deployment with Lambda, or is there a simpler way to do it that I've missed?
I found a workaround using Lambda function aliases. Given the context object, I can get the invoked_function_arn property, which has the alias (if any) at the end.
arn_string = context.invoked_function_arn
alias = arn_string.split(':')[-1]
Then I just use the alias as an index into a dict in my config.py module, and I'm good to go.
config[alias].host
config[alias].database
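For illustration, the config.py dict might look something like this; the alias names and connection details are hypothetical:

# config.py - per-alias settings, keyed by the Lambda alias name.
from types import SimpleNamespace

config = {
    "staging": SimpleNamespace(
        host="staging-db.example.com",   # placeholder RDS endpoint
        database="app_staging",
    ),
    "prod": SimpleNamespace(
        host="prod-db.example.com",
        database="app_prod",
    ),
}

SimpleNamespace is just one way to keep the attribute-style access (config[alias].host) shown above; plain nested dicts would work equally well with config[alias]['host'].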
One thing I'm not crazy about is that I have to invoke my function from an alias every time, and now I can't use aliases for any other purpose without affecting this scheme. It would be nice to have explicit support for user parameters in the context object.