AWS Lex UI does not show Lambda function from another account

My colleague created a Lex chatbot. I have been working on a Lambda function that queries an external database, which they want to use in their bot. I created an intent to access the function and then exported the intent. I set up the AWS IAM service role for Amazon Lex and created resource permissions following this guide (https://docs.aws.amazon.com/lambda/latest/dg/services-lex.html), though to make it work I had to use --action lambda:*. Still, while the import completes without error, the Lambda function does not seem to have been imported into the intent. When the intent is added to a bot, the Lambda function field is blank and the drop-down menu does not show my Lambda function as an option. Is there a way to make the function accessible in my colleague's account when setting up the intent?
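For reference, the resource-permission step from that guide boils down to a single add-permission call. A boto3 sketch, where the function name, statement id, and source ARN are placeholders (lambda:InvokeFunction should be sufficient rather than lambda:*):
import boto3

lam = boto3.client("lambda")

# Grant Amazon Lex permission to invoke the function, scoped to one intent.
lam.add_permission(
    FunctionName="my-database-query-function",
    StatementId="lex-invoke",
    Action="lambda:InvokeFunction",
    Principal="lex.amazonaws.com",
    SourceArn="arn:aws:lex:us-east-1:123456789012:intent:MyIntent:*",
)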

There should be an easier way to do this, but this is a workflow that at least works, even if it is not elegant:
Create the Lambda function and export it as a zip (I use the python-lambda library).
The other user creates an empty Lambda function (same name and Python version).
The other user imports the zip into Lambda.
Create an intent in Lex and export it (do not put the Lambda function in this).
The other user imports the intent.
The other user modifies the fulfillment section of the intent to include the Lambda function they just created.
It's completely kludgy, but it does work. I'm still looking for a better way.

Related

How to add a built-in test event to a Lambda function?

I have several lambda functions in my project.
For each function I have an event file that is used for local test purposes.
I would like to know how I can attach these event files to my Lambda functions inside the AWS console as test events, so that I won't need to create a new event each time after a code deployment.
It seems like the answer involves the template.yaml file, but I couldn't find it anywhere on the web.
This is not possible.
You cannot add or seed pre-existing test events for a Lambda function; they're just meant to be an easy way of invoking your Lambda from within the console itself.
You will have to recreate them manually, unless you are using shareable test events.
You might be looking for Shareable Test Events.
Shareable test events are test events that you can share with other AWS Identity and Access Management (IAM) users in the same AWS account. You can edit other users' shareable test events and invoke your function with them.
To create a shareable test event
Open the Functions page of the Lambda console.
Choose the name of the function that you want to test.
Choose the Test tab.
Choose a Template.
Enter a Name for the test.
In the text entry box, enter the JSON test event.
Under Event sharing settings, choose Shareable.
Choose Save changes.
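Behind the scenes, shareable test events are stored as schemas in an Amazon EventBridge schema registry named lambda-testevent-schemas, so you can also inspect (or seed) them with the SDK. A minimal boto3 sketch of listing them; the per-function naming convention in the comment is my understanding of what the console writes:
import boto3

schemas = boto3.client("schemas")

# Shareable test events are saved in this EventBridge schema registry,
# one schema per function (named roughly _<function-name>-schema).
for schema in schemas.list_schemas(RegistryName="lambda-testevent-schemas")["Schemas"]:
    print(schema["SchemaName"])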

Using a custom Node.js function in an AWS Lambda layer

So, I have tried AWS Lambda layers and I have had no issues with them so far. When I wanted to put libraries there that my Lambda functions require, all I needed was to put the package jwt-decode in a folder structure like this:
nodejs/node_modules/jwt-decode, uploaded as a .zip, with no problems.
The problem is... I want to also share my own custom code between my Lambda functions, without publishing it as an npm package. Is that even possible?
Say you have a function that uses jwt-decode to decode the access token from the authorization header (event.headers.authorization) and then fetches the "sub" identifier from the decoded token. I want to share this code because I don't want to repeat this implementation in every Lambda function that needs the sub identifier.
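For comparison, the same share-your-own-code-via-a-layer pattern in Python (the language used by this page's other examples) is just a module placed under python/ in the layer zip. A sketch, where token_utils and sub_from_jwt are illustrative names:
# python/token_utils.py, packaged into the layer zip. For Python runtimes,
# the layer's python/ directory is added to sys.path, so any function with
# the layer attached can do: from token_utils import sub_from_jwt
import base64
import json

def sub_from_jwt(token):
    # Decode the JWT payload (no signature verification) and return "sub".
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["sub"]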

How to get bucket name from Bucket object in AWS CDK for python

I've created an S3 bucket for hosting my website. For that, I've used the code below from the AWS CDK for Python docs:
self.bucket = s3.Bucket(
    self,
    "my-bucket-name",
    bucket_name="my-bucket-name",
    removal_policy=core.RemovalPolicy.DESTROY,
    website_index_document="index.html",
    public_read_access=True
)
For a particular reason, I want to pass this bucket object as an argument to another object and get the bucket name from the argument. So, I've tried:
self.bucket.bucket_name
self.bucket.bucket_arn
but nothing seems to work; instead, each returns ${Token[TOKEN.189]}. Could anyone guide me through this?
If the bucket name is hard-coded like in the example you pasted above, you can always externalize it to the CDK context file. As you've seen, when you access the bucket name from the Bucket construct, it creates a reference to it. That is deliberate: if you need the name in another resource, CloudFormation will depend on the value from the Bucket resource via its Ref/GetAtt capabilities, which guarantees that the bucket actually exists before it is used downstream.
If you don't care about that and just want the actual bucket name in the CDK app code, then put the value in the cdk context JSON file and use node.try_get_context to retrieve it wherever it is needed, as sketched below.
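A minimal sketch of the context approach, assuming a context key named bucket_name:
# cdk.json would contain: { "context": { "bucket_name": "my-bucket-name" } }
bucket_name = self.node.try_get_context("bucket_name")

# A plain string, not a token, so it can be passed anywhere.
self.bucket = s3.Bucket(self, "my-bucket-name", bucket_name=bucket_name)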
There is a handy method called fromBucketName you can use if it wasn't defined in your current app:
const bucket = aws_s3.Bucket.fromBucketName(this, 'bucketLabel', 'nameYouGaveBucket');
Otherwise, I believe you are looking for bucket.bucketName (typescript) or bucket.bucket_name (python).
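In Python, that lookup looks like the following (a sketch; the construct id and bucket name are placeholders):
# Import an existing bucket by name; no new resource is created.
bucket = s3.Bucket.from_bucket_name(self, "bucketLabel", "nameYouGaveBucket")
print(bucket.bucket_name)  # the plain name, since it was supplied directly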
See the TypeScript docs and the Python docs; this is also available in the CDK wrappers for other languages.
Note that there are similar methods for all sorts of CDK constructs, so refer often to the API docs; there is a lot like this that you can find easily there.

Can I create temporary users through Amazon Cognito?

Does Amazon Cognito support temporary users? For my use case, I want to be able to give access to external users, but only for a limited time period (e.g., 7 days).
Currently, my solution is something like:
Create User in User Group
Schedule cron job to run in x days
Job will disable/remove User from User Group
This all seems quite manual, and I was hoping Cognito provided something like this automatically.
Unfortunately, there is no built-in functionality to automate this workflow, so you would need to devise your own solution.
I would suggest the below approach to handling this:
Create a Lambda function that can post-process a user sign-up. This Lambda function would create a CloudWatch Events rule scheduled for 7 days in the future. Using the SDK, you would create the rule and assign another Lambda function as its target. When you specify the target in the put_targets call, use the Input parameter to pass in your own JSON; this should contain a metadata item identifying the user (see the sketch after this answer).
You would then create a post confirmation Lambda trigger which would invoke the Lambda you created in the step above. This lets you schedule an event every time a user signs up.
Finally, create the target Lambda for the CloudWatch event. It will receive the input passed in from the trigger and can use the AWS SDK to perform whatever Cognito actions you want, such as deleting the user.
The benefit of using these services rather than a cron job is that you perform processing only when it is required. With a one-time cron script, if you had many users in this temporary group, you would need to loop through every user and check whether each one is ready to be removed (and perhaps some would never be removed).
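A rough boto3 sketch of that scheduling step (the rule naming, target id, and payload shape are all illustrative, and the target Lambda would additionally need a resource policy allowing events.amazonaws.com to invoke it):
import json
from datetime import datetime, timedelta, timezone

import boto3

events = boto3.client("events")

def schedule_user_expiry(username, cleanup_lambda_arn):
    # One-shot schedule: a cron expression pinned to a single date/time,
    # 7 days from now (EventBridge cron supports an explicit year field).
    run_at = datetime.now(timezone.utc) + timedelta(days=7)
    expression = "cron({m} {h} {d} {mo} ? {y})".format(
        m=run_at.minute, h=run_at.hour, d=run_at.day, mo=run_at.month, y=run_at.year
    )

    rule_name = "expire-user-{}".format(username)  # assumes a rule-name-safe username
    events.put_rule(Name=rule_name, ScheduleExpression=expression)
    events.put_targets(
        Rule=rule_name,
        Targets=[{
            "Id": "expire-user",
            "Arn": cleanup_lambda_arn,
            # Delivered verbatim as the target Lambda's event.
            "Input": json.dumps({"username": username}),
        }],
    )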
My solution for this is the following: instead of creating a post confirmation Lambda trigger, you can also create a pre authentication Lambda trigger. This trigger checks the user attribute "valid_until", which contains a Unix timestamp, and only lets the user in if the value of "valid_until" is in the future. The main benefit of this solution is that you don't need any cron jobs.
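A minimal sketch of that trigger, assuming the timestamp is stored in a custom attribute (custom attributes surface to triggers with a custom: prefix):
import time

def lambda_handler(event, context):
    # Pre authentication trigger: Cognito denies the sign-in if we raise.
    attrs = event["request"]["userAttributes"]
    valid_until = int(attrs.get("custom:valid_until", "0"))
    if valid_until < time.time():
        raise Exception("Account validity period has expired")
    return event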

Can I parameterize AWS lambda functions differently for staging and release resources?

I have a Lambda function invoked by S3 put events, which in turn needs to process the objects and write to a database on RDS. I want to test things out in my staging stack, which means I have a separate bucket, different database endpoint on RDS, and separate IAM roles.
I know how to configure the lambda function's event source and IAM stuff manually (in the Console), and I've read about lambda aliases and versions, but I don't see any support for providing operational parameters (like the name of the destination database) on a per-alias basis. So when I make a change to the function, right now it looks like I need a separate copy of the function for staging and production, and I would have to keep them in sync manually. All of the logic in the code would be the same, and while I get the source bucket and key as a parameter to the function when it's invoked, I don't currently have a way to pass in the destination stuff.
For the destination DB information, I could have a switch statement in the function body that checks the originating S3 bucket and makes a decision, but I hate making every function have to keep that mapping internally. That wouldn't work for the DB credentials or IAM policies, though.
I suppose I could automate all or most of this with the SDK. Has anyone set something like this up for a continuous integration-style deployment with Lambda, or is there a simpler way to do it that I've missed?
I found a workaround using Lambda function aliases. Given the context object, I can get the invoked_function_arn property, which has the alias (if any) at the end.
# When invoked via an alias, the alias is the last component of the ARN.
arn_string = context.invoked_function_arn
alias = arn_string.split(':')[-1]
Then I just use the alias as an index into a dict in my config.py module, and I'm good to go.
config[alias].host
config[alias].database
One thing I'm not crazy about is that I have to invoke my function from an alias every time, and now I can't use aliases for any other purpose without affecting this scheme. It would be nice to have explicit support for user parameters in the context object.
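For completeness, a config.py along those lines might look like this (hosts and database names are placeholders):
# config.py: per-alias settings, keyed by the Lambda alias name.
from collections import namedtuple

Settings = namedtuple("Settings", ["host", "database"])

config = {
    "staging": Settings(host="staging-db.example.com", database="myapp_staging"),
    "prod": Settings(host="prod-db.example.com", database="myapp"),
}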