How can I change default parameter values for lambda?

I'm playing with AWS lambda and I am unable to change the default parameters that are used in the lambda. Is there a workaround for this?
Setup:
Lambda "iAmInvoked" is created by a stack in cloudformation which has default parameter values set (I set these defaults thinking that, these will be used in case invoker doesn't provide values for the parameters required and can be overridden). I'm invoking this iAmInvoked lambda asynchronously using a lambda called "iWillInvoke" and providing the payload which contains new values for parameters to be used by iAmInvoked instead of its defaults.
iWillInvoke code:
import json
import boto3

client = boto3.client('lambda')

def lambda_handler(event, context):
    payloadForLambda = { 'parameter1' : 'abc,def' , 'parameter2' : '123456' , 'parameter3' : '987654' }
    client.invoke(
        FunctionName='arn:aws:lambda:us-west-2:123456789:function:iAmInvoked',
        InvocationType='Event',
        Payload=json.dumps(payloadForLambda)
    )
iAmInvoked Code:
AWSTemplateFormatVersion: 2010-09-09
Description: |
  "Creates required IAM roles to give permission to get and put SSM parameters and creates lambda function that shares the parameter(s)."
Parameters:
  parameter1:
    Type: String
    Default: parameterValueThatShallBeOverridden1
  parameter2:
    Type: String
    Default: parameterValueThatShallBeOverridden2
  parameter3:
    Type: String
    Default: parameterValueThatShallBeOverridden3
Question/Issue:
No matter what I provide in the payload of iWillInvoke, iAmInvoked uses its default values. Is there a way I can override the defaults?

The "iAmInvoked Code" you posted is not your function code, and those are not its parameters. It is the CloudFormation template, and the parameters belong to the template. Calling client.invoke does not affect the CloudFormation template in any way; the payload is simply delivered to the invoked function's handler as the event.
To work with CloudFormation from boto3, there is the cloudformation client in the SDK.
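If the goal is simply to have iWillInvoke's values win over the defaults, the invoked function has to read them from the event it receives. A minimal sketch of what iAmInvoked's handler could look like, assuming the template also exposes the defaults to the function as environment variables named PARAMETER1..PARAMETER3 (that wiring is not shown in the question):

import os

def lambda_handler(event, context):
    # Values sent by iWillInvoke arrive in the event payload; fall back to the
    # environment variables (populated from the template defaults) when the
    # caller does not supply a key.
    parameter1 = event.get('parameter1', os.environ.get('PARAMETER1'))
    parameter2 = event.get('parameter2', os.environ.get('PARAMETER2'))
    parameter3 = event.get('parameter3', os.environ.get('PARAMETER3'))
    return {'parameter1': parameter1, 'parameter2': parameter2, 'parameter3': parameter3}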

Related

How to read SSM Parameter dynamically from Lambda Environment variable

I am keeping the application endpoint in SSM Parameter Store and am able to access it from the Lambda environment.
Resources:
  M4IAcarsScheduler:
    Type: AWS::Serverless::Function
    Properties:
      Handler: not.used.in.provided.runtime
      Runtime: provided
      CodeUri: target/function.zip
      MemorySize: 512
      Timeout: 900
      FunctionName: Sample
      Environment:
        Variables:
          SamplePath: !Ref sample1path
          SampleId: !Ref sample1pathid
Parameters:
  sample1path:
    Type: AWS::SSM::Parameter::Value<String>
    Description: Select existing security group for lambda function from Parameter Store
    Default: /sample/path
  sample1pathid:
    Type: AWS::SSM::Parameter::Value<String>
    Description: Select existing security group for lambda function from Parameter Store
    Default: /sample/id
My issue is that when I update the SSM parameter, the Lambda environment variable is not updated dynamically; every time I need to redeploy the function.
Is there any way to handle it dynamically, so that when the value changes in SSM Parameter Store it is reflected in the Lambda without a redeploy?
When you use SSM parameters in a CloudFormation stack, the parameters get resolved when the CloudFormation stack is deployed. If the value in SSM subsequently changes, nothing updates the lambda, so the lambda will still have the value that was pulled from SSM at the moment the stack was deployed. The lambda does not even know that the parameter came from SSM; it only sees a static environment variable.
Instead, to use SSM parameters in your lambda, change the lambda code so that it fetches the parameters itself at runtime. This AWS blog example (in Python) shows how to fetch the parameters from inside the lambda code when the lambda runs:
import os, traceback, json, configparser, boto3
from aws_xray_sdk.core import patch_all
patch_all()

# Initialize boto3 client at global scope for connection reuse
client = boto3.client('ssm')
env = os.environ['ENV']
app_config_path = os.environ['APP_CONFIG_PATH']
full_config_path = '/' + env + '/' + app_config_path

# Initialize app at global scope for reuse across invocations
app = None

class MyApp:
    def __init__(self, config):
        """
        Construct new MyApp with configuration
        :param config: application configuration
        """
        self.config = config

    def get_config(self):
        return self.config

def load_config(ssm_parameter_path):
    """
    Load configparser from config stored in SSM Parameter Store
    :param ssm_parameter_path: Path to app config in SSM Parameter Store
    :return: ConfigParser holding loaded config
    """
    configuration = configparser.ConfigParser()
    try:
        # Get all parameters for this app
        param_details = client.get_parameters_by_path(
            Path=ssm_parameter_path,
            Recursive=False,
            WithDecryption=True
        )
        # Loop through the returned parameters and populate the ConfigParser
        if 'Parameters' in param_details and len(param_details.get('Parameters')) > 0:
            for param in param_details.get('Parameters'):
                param_path_array = param.get('Name').split("/")
                section_position = len(param_path_array) - 1
                section_name = param_path_array[section_position]
                config_values = json.loads(param.get('Value'))
                config_dict = {section_name: config_values}
                print("Found configuration: " + str(config_dict))
                configuration.read_dict(config_dict)
    except:
        print("Encountered an error loading config from SSM.")
        traceback.print_exc()
    finally:
        return configuration

def lambda_handler(event, context):
    global app
    # Initialize app if it doesn't yet exist
    if app is None:
        print("Loading config and creating new MyApp...")
        config = load_config(full_config_path)
        app = MyApp(config)
    return "MyApp config is " + str(app.get_config()._sections)
Here is a post with an example in Node, and similar examples exist for other languages too.
// aws-sdk v2 SSM client, created outside the handler for connection reuse
const AWS = require('aws-sdk');
const SSM = new AWS.SSM();

exports.handler = async (event) => {
    // parameter expected by SSM.getParameter
    const parameter = {
        "Name": "/systems/" + event.Name + "/config"
    };
    const responseFromSSM = await SSM.getParameter(parameter).promise();
    console.log('SUCCESS');
    console.log(responseFromSSM);
    const value = responseFromSSM.Parameter.Value;
    return value;
};

aws - is it possible to use Cognito user pool custom field in lambda function?

I have created a custom field "permissions" in my user pool.
I wonder if it is possible to use this field in my lambda function, so that I can do some permission control for calling the corresponding lambda function.
For example
if ((custom.permissions).includes("admin")) {
    // execute the lambda function
}
For example, if your lambda function is written in Python with boto3, you can get the user attributes like this:
import boto3

client = boto3.client('cognito-idp')
response = client.get_user(
    AccessToken='string'
)
The response structure contains:
UserAttributes (list) -- An array of name-value pairs representing user attributes. For custom attributes, you must prepend the custom: prefix to the attribute name.
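Putting it together, a rough sketch of the permission check in Python (the attribute name custom:permissions and the way the access token is obtained from the event are assumptions, not something given in the question):

import boto3

client = boto3.client('cognito-idp')

def lambda_handler(event, context):
    # Assumption: the caller's access token is forwarded in the event,
    # e.g. via an Authorization header passed through API Gateway.
    access_token = event['headers']['Authorization']
    response = client.get_user(AccessToken=access_token)
    # UserAttributes is a list of {'Name': ..., 'Value': ...} pairs;
    # custom attributes carry the "custom:" prefix.
    attributes = {attr['Name']: attr['Value'] for attr in response['UserAttributes']}
    if 'admin' in attributes.get('custom:permissions', ''):
        return {'statusCode': 200, 'body': 'allowed'}
    return {'statusCode': 403, 'body': 'forbidden'}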

create request body and template API GATEWAY CDK

Please tell me two things:
1. How to configure the request body via the CDK
2. How to configure a mapping template that pulls a path or query parameter, converts it to JSON, and then passes it to the lambda
This is all in API Gateway, via the CDK.
Assume you have the following setup
const restapi = new apigateway.RestApi(this, "myapi", {
    // details omitted
});

const helloWorld = new lambda.Function(this, "hello", {
    runtime: lambda.Runtime.PYTHON_3_8,
    handler: 'index.handler',
    code: lambda.Code.asset('./index.py')
});

restapi.root.addResource("test").addMethod("POST", new apigateway.LambdaIntegration(helloWorld));
and inside the lambda function (in Python):
def handler(event, context):
    request_body = event['body']
    parameters = event['queryStringParameters']
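That covers the default proxy-style setup, where the raw body and query string arrive on the event. For the second part of the question (a mapping template that pulls a path or query parameter and reshapes it as JSON), a rough sketch with a non-proxy LambdaIntegration could look like the following; it uses the Python flavor of the CDK since the handler above is Python, and the resource and parameter names are made up for illustration:

from aws_cdk import aws_apigateway as apigateway

# Non-proxy integration: API Gateway applies the request template before
# invoking the lambda, so the handler receives the mapped JSON directly.
integration = apigateway.LambdaIntegration(
    hello_world,
    proxy=False,
    request_templates={
        "application/json": '{"name": "$input.params(\'name\')", "body": $input.json(\'$\')}'
    },
    integration_responses=[apigateway.IntegrationResponse(status_code="200")],
)
restapi.root.add_resource("mapped").add_method(
    "POST",
    integration,
    method_responses=[apigateway.MethodResponse(status_code="200")],
)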

Set up S3 Bucket level Events using AWS CloudFormation

I am trying to get AWS CloudFormation to create a template that will allow me to attach an event to an existing S3 Bucket that will trigger a Lambda Function whenever a new file is put into a specific directory within the bucket. I am using the following YAML as a base for the CloudFormation template but cannot get it working.
---
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  SETRULE:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: bucket-name
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: s3:ObjectCreated:Put
            Filter:
              S3Key:
                Rules:
                  - Name: prefix
                    Value: directory/in/bucket
            Function: arn:aws:lambda:us-east-1:XXXXXXXXXX:function:lambda-function-trigger
            Input: '{ CONFIGS_INPUT }'
I have tried rewriting this template a number of different ways, with no success.
Since you have mentioned that those buckets already exist, this is not going to work. You can use CloudFormation this way, but only to create a new bucket, not to modify an existing bucket that was not created by that template in the first place.
If you don't want to recreate your infrastructure, it might be easier to just use a script that subscribes the lambda function to each of the buckets. As long as you have a list of the buckets and the lambda function's ARN, you are ready to go.
Here is a script in Python 3. Assume that we have:
- 2 buckets called test-bucket-jkg2 and test-bucket-x1gf
- a lambda function with ARN arn:aws:lambda:us-east-1:605189564693:function:my_func
There are 2 steps to make this work. First, you need to add a function policy that allows the S3 service to execute that function. Second, you loop through the buckets one by one, subscribing the lambda function to each of them.
import boto3

s3_client = boto3.client("s3")
lambda_client = boto3.client('lambda')

buckets = ["test-bucket-jkg2", "test-bucket-x1gf"]
lambda_function_arn = "arn:aws:lambda:us-east-1:605189564693:function:my_func"

# create a function policy that will permit s3 service to
# execute this lambda function
# note that you should specify SourceAccount and SourceArn to limit who
# (which account/bucket) can execute this function - you would need to loop
# through the buckets to achieve this; at the very least specify SourceAccount
try:
    response = lambda_client.add_permission(
        FunctionName=lambda_function_arn,
        StatementId="allow-s3-to-execute-this-function",
        Action='lambda:InvokeFunction',
        Principal='s3.amazonaws.com'
        # SourceAccount="your account",
        # SourceArn="bucket's arn"
    )
    print(response)
except Exception as e:
    print(e)

# loop through all buckets and subscribe lambda function
# to each one of them
for bucket in buckets:
    print("putting config to bucket: ", bucket)
    try:
        response = s3_client.put_bucket_notification_configuration(
            Bucket=bucket,
            NotificationConfiguration={
                'LambdaFunctionConfigurations': [
                    {
                        'LambdaFunctionArn': lambda_function_arn,
                        'Events': [
                            's3:ObjectCreated:*'
                        ]
                    }
                ]
            }
        )
        print(response)
    except Exception as e:
        print(e)
You could write a custom resource to do this; in fact that's what I ended up doing at work for the same problem. At the simplest level, define a lambda that takes a bucket notification configuration and just calls the put-bucket-notification API with the data that was passed to it.
If you want to be able to control different notifications across different CloudFormation templates, then it's a bit more complex. Your custom resource lambda will need to read the existing notifications from S3 and then update these based on the data passed to it from CloudFormation.
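As a rough sketch of that simplest level (the property names Bucket and NotificationConfiguration are illustrative, and error handling is kept minimal), the custom resource lambda could look like this in Python:

import json
import boto3
import urllib3

s3 = boto3.client('s3')
http = urllib3.PoolManager()

def send_response(event, context, status, reason=''):
    # Custom resources must report back to the pre-signed ResponseURL,
    # otherwise the stack hangs until it times out.
    body = json.dumps({
        'Status': status,
        'Reason': reason,
        'PhysicalResourceId': context.log_stream_name,
        'StackId': event['StackId'],
        'RequestId': event['RequestId'],
        'LogicalResourceId': event['LogicalResourceId'],
    })
    http.request('PUT', event['ResponseURL'], body=body)

def lambda_handler(event, context):
    try:
        props = event['ResourceProperties']
        if event['RequestType'] in ('Create', 'Update'):
            # Apply whatever notification configuration the template passed in.
            s3.put_bucket_notification_configuration(
                Bucket=props['Bucket'],
                NotificationConfiguration=props['NotificationConfiguration'],
            )
        else:  # Delete: clear the configuration when the resource is removed
            s3.put_bucket_notification_configuration(
                Bucket=props['Bucket'],
                NotificationConfiguration={},
            )
        send_response(event, context, 'SUCCESS')
    except Exception as e:
        send_response(event, context, 'FAILED', reason=str(e))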

Latest Lambda Layer ARN

I have a lambda layer which I keep updating, and it has multiple versions. How can I find the layer ARN with the latest version using the AWS CLI?
I am able to do this using the command listed below:
aws lambda list-layer-versions --layer-name <layer name> --region us-east-1 --query 'LayerVersions[0].LayerVersionArn'
Unfortunately, it's currently not possible (I have encountered the same issue).
You can keep the latest ARN in your own place (like DynamoDB) and update it whenever you publish a new version of the layer.
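For example, a small publish step that records the new ARN in a DynamoDB table (the table and attribute names here are made up; adapt them to your own setup) might look like this:

import boto3

lambda_client = boto3.client('lambda')
table = boto3.resource('dynamodb').Table('layer-arns')  # hypothetical table

def publish_and_record(layer_name, zip_bytes):
    # Publish a new layer version, then remember its ARN so other
    # stacks/scripts can look up "the latest" without listing versions.
    response = lambda_client.publish_layer_version(
        LayerName=layer_name,
        Content={'ZipFile': zip_bytes},
    )
    table.put_item(Item={'LayerName': layer_name, 'LatestArn': response['LayerVersionArn']})
    return response['LayerVersionArn']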
You can create a custom macro to get the latest lambda layer version and use that as a reference.
The following function gets the latest version from the Lambda Layer stack:
import json
import boto3

def latest_lambdalayer(event, context):
    fragment = get_latestversion(event['fragment'])
    return {
        'requestId': event['requestId'],
        'status': 'success',
        'fragment': fragment
    }

def get_latestversion(fragment):
    cloudformation = boto3.resource('cloudformation')
    stack = cloudformation.Stack('ticketapp-layer-dependencies')
    for o in stack.outputs:
        if o['OutputKey'] == 'TicketAppLambdaDependency':
            return o['OutputValue']
    #return "arn:aws:lambda:eu-central-1:899885580749:layer:ticketapp-dependencies-layer:16"
And you use this when defining the Lambda layer, here via the same Globals section of the template:
Globals:
  Function:
    Layers:
      - !Transform { "Name" : "LatestLambdaLayer" }
    Runtime: nodejs12.x
    MemorySize: 128
    Timeout: 101