How do I configure My AWS Lambda to take input parameters - amazon-web-services

I want my AWS Lambda function to take an input parameter. For example, the parameter is a date and I want to update a field in the database using that date. How do I get that date into my Lambda?

Lambdas can get parameters from two sources:
the triggering event
environment variables
Adding environment variables is independent of the triggering event, but their values don't change between invocations.
Adding fields to the triggering event depends heavily on the type of event and should be handled on the side of the event's source.
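Both sources can be combined in the handler. A minimal sketch (the `date` event key and `DEFAULT_DATE` environment variable are illustrative names, not anything AWS-defined):

```python
import os

def lambda_handler(event, context):
    # Prefer a per-invocation value from the triggering event;
    # fall back to a static environment variable.
    date = event.get("date", os.environ.get("DEFAULT_DATE"))
    if date is None:
        raise ValueError("no date supplied in event or environment")
    # ...update the database field using `date` here...
    return {"date_used": date}
```

When you test the function in the console, you can supply `{"date": "2023-01-01"}` directly as the test event payload.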

Related

Reference one task from another in a contact flow

I'm building a contact flow which creates a block of tasks in pairs. Ideally, one task in a pair should include a reference to its partner in its description.
I've almost achieved this: When creating the second task, I add a reference type URL and I'm using the $.Task.ContactId attribute, prefixed with the access URL for my instance, i.e.
https://<myurl>.my.connect.aws/connect/contact-trace-records/details/$.Task.ContactID
I'd like to deploy this in more than one Connect instance without having to keep manually editing the contact flow. Is there any way I can specify the access URL as a parameter?
For dynamic attributes like this, you need to call a lambda. The lambda can return the appropriate value and then you can use that returned value to set your reference URL attribute.
If I were writing the lambda, I'd send through the $.Task.ContactID, then use Details.ContactData.InstanceARN from the event that gets passed to the lambda to look up the instance alias via the Connect API DescribeInstance function. You can then build the URL in the lambda and send it back.
More info about using lambdas in a contact flow can be found here
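A rough sketch of that lambda (Python with boto3; the `ContactId` flow parameter and the returned `ContactURL` key are names I've chosen for illustration, not fixed by Connect):

```python
def build_contact_url(instance_alias: str, contact_id: str) -> str:
    # URL pattern taken from the question; the instance alias forms the subdomain.
    return (f"https://{instance_alias}.my.connect.aws"
            f"/connect/contact-trace-records/details/{contact_id}")

def lambda_handler(event, context):
    import boto3  # imported lazily so the pure helper above is testable offline
    details = event["Details"]
    instance_arn = details["ContactData"]["InstanceARN"]
    instance_id = instance_arn.split("/")[-1]  # ARN ends in .../instance/<id>
    alias = boto3.client("connect").describe_instance(
        InstanceId=instance_id)["Instance"]["InstanceAlias"]
    contact_id = details["Parameters"]["ContactId"]  # passed in from the flow
    # Connect expects a flat string map back from the Lambda
    return {"ContactURL": build_contact_url(alias, contact_id)}
```

The contact flow would then read `$.External.ContactURL` after the Invoke AWS Lambda function block.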

AWS Lambda - One function with multiple parameter sets or multiple functions?

I have a lambda function which is pretty general. It queries an Athena Database table, and writes the output to an S3 Bucket. The function is currently set up to accept environment variables for the Database name, table name, output filename and output bucket.
Currently, I call this function with 4 different sets of environment variables, so I have 4 different lambdas, whose code is the same, but whose environment variables are different. Do I need to have 4 different lambdas, or is there a way of having one lambda with 4 environment variable 'sets'?
Thanks!
Here's one option: to handle 4 sets of configuration with a single lambda, send a variable (e.g. type: Foo) as part of the lambda invocation (1). As @Marcin suggests, the lambda then uses the type value to fetch the config variables from the SSM Parameter Store at runtime with the GetParametersByPath API. Parameters support hierarchies, so you can store your config using nested variable names like /Foo/db, /Foo/table, /Bar/table, etc.
(1) For example, send type in the event detail if event-triggered, or in the SDK Invoke command payload.
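A sketch of that pattern, assuming parameter names like /Foo/db and /Foo/table (the `type` event key is this answer's convention, not an AWS one):

```python
def config_from_parameters(parameters: list) -> dict:
    # Turn [{"Name": "/Foo/db", "Value": "sales"}, ...] into {"db": "sales", ...}
    return {p["Name"].rsplit("/", 1)[-1]: p["Value"] for p in parameters}

def lambda_handler(event, context):
    import boto3  # lazy import keeps the helper above testable offline
    cfg_type = event["type"]  # e.g. "Foo"
    resp = boto3.client("ssm").get_parameters_by_path(
        Path=f"/{cfg_type}/", Recursive=True)
    cfg = config_from_parameters(resp["Parameters"])
    # cfg now holds e.g. {"db": ..., "table": ...} for this invocation,
    # replacing the four per-function environment variable sets.
    ...
```

Each trigger (or SDK Invoke call) then supplies its own `type`, so one function serves all four configurations.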

Using 'newUUID()' aws iot function in AWS SiteWise service that returns a random 16-byte UUID to be stored as a partition key

I am trying to use the 'newUUID()' AWS IoT function in the AWS SiteWise service (as part of an alarm action). It returns a random 16-byte UUID, which I want to store in a DynamoDB table's partition key column.
In the 'PartitionKeyValue' field I am trying to use the value returned by the newUUID() function, which will be passed to DynamoDB as part of the action trigger.
However, this gives an error as follows:
"Invalid Request exception: Failed to parse expression due to: Invalid expression. Unrecognized function: newUUID".
I understand the error, but I am not sure how I can solve this and use a random UUID generator. Note that I do not want to use a timestamp, because multiple events could be triggered at the same time and hence share the same timestamp.
Any ideas on how I can use this function, or any other information that helps me achieve the above, would be appreciated.
The docs you refer to say that function is all lowercase: newuuid().
Perhaps that will work, but I believe that function is only available in IoT Core SQL Statements. I think with event notifications, you only have these expressions to work with, which is not much. Essentially, you need to get what you need from the alarm event itself.
You may need the alarm event to invoke Lambda, rather than directly write to DynamoDB. Your Lambda function can create a UUID and write the alarm record to DynamoDB using the SDKs.
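A minimal sketch of such a Lambda (the table name "AlarmEvents" and the "id"/"payload" attribute names are assumptions for illustration):

```python
import uuid

def new_partition_key() -> str:
    # uuid4 gives a random 128-bit (16-byte) value, so simultaneous
    # alarm events get distinct keys, unlike a timestamp.
    return str(uuid.uuid4())

def lambda_handler(event, context):
    import json
    import boto3  # lazy imports; boto3 only needed when actually invoked
    item = {"id": new_partition_key(),      # hypothetical partition key attribute
            "payload": json.dumps(event)}   # store the raw alarm event
    boto3.resource("dynamodb").Table("AlarmEvents").put_item(Item=item)
    return item["id"]
```

The alarm action would then target this Lambda instead of writing to DynamoDB directly.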

AWS lambda python approach/code required to make variable global on every call

I need an approach in AWS Lambda to resolve an issue. Please help.
What I am doing now:
Inside the Lambda handler function I take data from Athena and perform some logic, and also take data from Kinesis and perform some logic. The Lambda handler is invoked every 20 seconds.
This is pseudo code:
def lambda_handler(event, context):
    query = ...  # query to get data from Athena
    df = pd.DataFrame(query)
    # some processing logic using data taken from Kinesis
My problem is:
The data that I take from Athena changes only once a day, so every time the Lambda handler is invoked it unnecessarily queries Athena, which is inefficient.
What I need:
I need a solution approach/code to query Athena once and keep the DataFrame in global scope, so that each time the Lambda handler is triggered it reuses the global variable.
There are no persistent global variables within Lambda itself. The only limited persistence of data that you can count on comes from the AWS Lambda execution environment:
Objects declared outside of the function's handler method remain initialized, providing additional optimization when the function is invoked again. For example, if your Lambda function establishes a database connection, instead of reestablishing the connection, the original connection is used in subsequent invocations. We recommend adding logic in your code to check if a connection exists before creating a new one.
However, this is not reliable and is short-lived. Thus the only way to avoid querying Athena often is to store the query results outside of the lambda function.
Depending on the nature and amount of the data to be stored, common choices to ensure persistence of the data between lambda invocations are S3, EFS, DynamoDB, the SSM Parameter Store and ElastiCache.
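For completeness, the best-effort execution-environment caching described in the quoted docs can be sketched like this (with the caveat above that a cold start silently resets it; `run_athena_query` is a stand-in for the real Athena call):

```python
import datetime

# Module-level cache: survives between invocations of a *warm* execution
# environment, but is NOT guaranteed to persist (cold starts reset it).
_cache = {"date": None, "df": None}

def run_athena_query():
    # Stand-in for the actual Athena query in the question
    return [{"example": "row"}]

def get_daily_data(today=None):
    today = today or datetime.date.today()
    if _cache["date"] != today:
        # Refresh at most once per day per execution environment
        _cache["df"] = run_athena_query()
        _cache["date"] = today
    return _cache["df"]

def lambda_handler(event, context):
    df = get_daily_data()
    # ...processing logic using Kinesis data goes here...
```

Pairing this with one of the external stores above (e.g. writing the daily result to S3 and reading it back on a cold start) gives you the reliability the in-memory cache alone lacks.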

Can I create temporary users through Amazon Cognito?

Does Amazon Cognito support temporary users? For my use case, I want to be able to give access to external users, but limited to a time period (e.g. 7 days)
Currently, my solution is something like:
Create User in User Group
Schedule cron job to run in x days
Job will disable/remove User from User Group
This all seems to be quite manual and I was hoping Cognito provides something similar automatically.
Unfortunately there is no built-in functionality to automate this workflow, so you would need to devise your own solution.
I would suggest the below approach to handling this:
Create a Lambda function that post-processes a user sign-up. This Lambda function would create a CloudWatch Event with a schedule for 7 days in the future. Using the SDK you would create the event and assign another Lambda function as its target. When you specify the target in the put_targets call, use the Input parameter to pass in your own JSON; this should contain a metadata item related to the user.
You would then create a post confirmation Lambda trigger which would trigger the Lambda you created in the above step. This would allow you to schedule an event every time a user signs up.
Finally create the target Lambda for the CloudWatch event, this will access the input passed in from the trigger and can use the AWS SDK to perform any cognito functions you might want to use such as deleting the user.
The benefit of using these services rather than a cron is that you perform processing only when it is required. With a one-time script you would instead need to loop through every user in this temporary group and check whether each is ready to be removed (and perhaps sometimes never remove users).
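The scheduling step above might look roughly like this (the rule naming scheme, target Lambda ARN placeholder and JSON payload shape are all illustrative choices, not requirements):

```python
import json
import datetime

def expiry_cron(when: datetime.datetime) -> str:
    # One-shot EventBridge cron expression for the given UTC time:
    # cron(Minutes Hours Day-of-month Month Day-of-week Year)
    return (f"cron({when.minute} {when.hour} {when.day} "
            f"{when.month} ? {when.year})")

def schedule_user_expiry(username: str, days: int = 7):
    import boto3  # lazy import so the cron helper above is testable offline
    when = datetime.datetime.utcnow() + datetime.timedelta(days=days)
    events = boto3.client("events")
    rule = f"expire-user-{username}"  # illustrative naming scheme
    events.put_rule(Name=rule, ScheduleExpression=expiry_cron(when))
    events.put_targets(Rule=rule, Targets=[{
        "Id": "expiry-lambda",
        "Arn": "arn:aws:lambda:...:function:ExpireUser",  # your target Lambda ARN
        "Input": json.dumps({"username": username}),  # metadata per the answer
    }])
```

The target Lambda would then read `username` from its event and call the Cognito admin APIs to disable or delete the user (and could clean up the one-shot rule afterwards).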
My solution for this is the following: instead of creating a post confirmation lambda trigger, you can also create a pre authentication lambda trigger. This trigger checks the user attribute "valid_until", which contains a unix timestamp. The pre authentication lambda trigger will only let the user in if the value of the "valid_until" attribute is in the future. The main benefit of this solution is that you don't need any cron jobs.
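That pre authentication trigger can be sketched as follows (assuming "valid_until" is stored as a custom attribute, which Cognito exposes as "custom:valid_until"; raising an exception from the trigger denies the sign-in):

```python
import time

def lambda_handler(event, context):
    # Pre-authentication trigger: an exception here rejects the login attempt.
    # "custom:valid_until" is an assumed custom attribute holding a unix timestamp.
    attrs = event["request"]["userAttributes"]
    valid_until = int(attrs.get("custom:valid_until", "0"))
    if valid_until <= time.time():
        raise Exception("Account expired")
    return event  # Cognito expects the event returned unchanged on success
```

Setting "custom:valid_until" at sign-up time (e.g. now + 7 days) then enforces the expiry on every login with no scheduled jobs at all.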