AWS CDK: look up ARNs from a Lambda

I am quite new to AWS and have a question that is maybe easy to answer.
(I am using LocalStack to develop locally, if that makes any difference.)
In a Lambda, I have the following code, which should publish a message to an SNS topic.
import json
import logging

import boto3

def handler(event, context):
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.INFO)
    logger.info("confirmed user!")
    notification = "A test"
    client = boto3.client('sns')
    response = client.publish(
        TargetArn="arn:aws:sns:us-east-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
        Message=json.dumps({'default': notification}),
        MessageStructure='json'
    )
    return {
        'statusCode': 200,
        'body': json.dumps(response)
    }
For now I "hardcode" the ARN of the SNS topic, which is output to the console when deploying (with cdklocal deploy).
I am wondering if there is any convenient way to look up the ARN of an AWS resource. I have seen that there is the
cdk.Fn.getAtt(logicalId, 'Arn').toString();
function, but I don't know the logical ID of the SNS topic before deployment. So, how can I look up ARNs at runtime? What is best practice?
(It's quite an annoying task to keep track of all the ARNs if I just hardcode them as strings, and it definitely seems wrong to me.)

You can use the !GetAtt function in your CloudFormation template to retrieve your SNS topic ARN and pass it to your Lambda, for example as an environment variable:
Resources:
  MyTopic:
    Type: AWS::SNS::Topic
    Properties:
      {...}
  MyLambda:
    Type: AWS::Lambda::Function
    Properties:
      Environment:
        Variables:
          SNS_TOPIC_ARN: !GetAtt MyTopic.Arn
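For completeness, a minimal sketch of the handler side, reading the injected variable instead of a hardcoded ARN (SNS_TOPIC_ARN is the variable name defined in the template above):

import json
import os

import boto3

def handler(event, context):
    # The topic ARN is injected by CloudFormation, so nothing is hardcoded here
    topic_arn = os.environ['SNS_TOPIC_ARN']
    client = boto3.client('sns')
    response = client.publish(
        TargetArn=topic_arn,
        Message=json.dumps({'default': 'A test'}),
        MessageStructure='json'
    )
    return {'statusCode': 200, 'body': json.dumps(response)}

Note that for AWS::SNS::Topic, !Ref MyTopic also resolves to the topic ARN. The CDK equivalent is passing topic.topicArn into the function's environment prop, which synthesizes to the same kind of reference.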

Related

Is there any way to propagate changes of an AWS layer to all the associated AWS Lambdas in Serverless if they are in different stacks?

I am using the Serverless Framework and AWS cloud services for my project. I have created many services with AWS Lambda and created a layer to serve the common purposes of those services. I maintain the layer in a separate stack and include it in all the services using CloudFormation syntax. The problem is that every time I update the layer and deploy it to AWS, I need to deploy all the services once again for those changes to be reflected in the associated services. Is there any way to mitigate this issue so that once I deploy my layer, all the associated services are also updated with the latest layer changes? I don't want to update those services manually every time I deploy a layer. I hope this makes sense. I am adding the serverless.yml file of the layer and of one of my services to make it clearer. Looking forward to hearing from you. Thanks in advance.
serverless.yml file for layer
service: t5-globals
frameworkVersion: '2'

provider:
  name: aws
  runtime: nodejs14.x
  lambdaHashingVersion: 20201221
  environment:
    NODE_PATH: "./:/opt/node_modules"

layers:
  t5Globals:
    path: nodejs
    compatibleRuntimes:
      - nodejs14.x

resources:
  Outputs:
    T5GlobalsLayerExport:
      Value:
        Ref: T5GlobalsLambdaLayer
      Export:
        Name: T5GlobalsLambdaLayer

plugins:
  - serverless-offline
serverless.yml file of one service
service: XXXXX
projectDir: XXXXXX
frameworkVersion: '2'

provider: XXXXXX
plugins: XXXXXX
resources: XXXXXX

functions:
  XXXXXX:
    name: XXXXXXX
    handler: XXXXXXX
    layers:
      - ${cf:t5-globals-dev.T5GlobalsLayerExport}
    events:
      - http: XXXXX
When you implement CI/CD, this must be automated; we normally use a trigger that traps the git change event of, say, CodeCommit and executes a Lambda function.
The Lambda function then scans the files for changes, creates a new layer version, and updates all the Lambda functions that use this layer so they use the latest version.
Sharing the code, written in Python; you can change and use it as per your needs.
import os

import boto3

region = os.environ['region']
lambdaclient = boto3.client('lambda', region_name=region)

def layerUpdateExistingLambdaFunctions(extensionLayer):
    functionslist = []
    nextmarker = None
    while True:
        # Page through every function in the account, 10 at a time
        if nextmarker is not None:
            list_function_response = lambdaclient.list_functions(
                FunctionVersion='ALL',
                MaxItems=10,
                Marker=nextmarker
            )
        else:
            list_function_response = lambdaclient.list_functions(
                FunctionVersion='ALL',
                MaxItems=10
            )
        if 'Functions' in list_function_response.keys():
            for function in list_function_response['Functions']:
                functionName = function['FunctionName']
                layersUsed = []
                usingExtensionLayer = False
                if 'Layers' in function.keys():
                    for layer in function['Layers']:
                        # Strip the trailing ':<version>' to get the bare layer ARN
                        layerArn = layer['Arn'][:layer['Arn'].rfind(':')]
                        layersUsed.append(layerArn)
                        if extensionLayer.find(layerArn) >= 0:
                            print(f'Function {functionName} using extension layer')
                            usingExtensionLayer = True
                            functionslist.append(functionName)
                if usingExtensionLayer is True:
                    extensionLayerArn = extensionLayer[:extensionLayer.rfind(':')]
                    print(f'Existing function {functionName} using {extensionLayerArn}, needs to be updated')
                    # Rebuild the layer list, picking the newest version of each layer
                    # (list_layer_versions returns versions newest-first)
                    newLayers = []
                    for layerUsed in layersUsed:
                        newLayers.append(lambdaclient.list_layer_versions(
                            CompatibleRuntime='python3.7',
                            LayerName=layerUsed
                        )['LayerVersions'][0]['LayerVersionArn'])
                    lambdaclient.update_function_configuration(
                        FunctionName=functionName,
                        Layers=newLayers
                    )
                    print(f'Function {functionName} updated with latest layer versions')
        if 'NextMarker' in list_function_response.keys():
            nextmarker = list_function_response['NextMarker']
        else:
            break
    return functionslist
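If you want to run this from a deploy hook or event trigger, the function drops straight into a Lambda handler; a minimal sketch, assuming the new layer version ARN arrives in the event payload under a key named layer_arn (the key name is illustrative):

def handler(event, context):
    # Full ARN (including version) of the layer that was just published
    extension_layer = event['layer_arn']
    updated = layerUpdateExistingLambdaFunctions(extension_layer)
    return {'updated_functions': updated}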

Configure AWS Lambda function to use latest version of a Layer

I have more than 20 Lambda functions in an application under development, and a Lambda layer that contains a good amount of common code.
A Lambda function is pinned to a particular version of the layer, and every time I update the layer, it generates a new version. Since the application is under development, I have a new version of the layer almost every day. That creates a mess: the Lambda functions have to be touched every day to upgrade the layer version.
I know it is important to freeze code for a Lambda function in production, and it is essential to pin one version of the Lambda function to a version of the layer.
But for the development environment, is it possible to prevent generating a new layer version every time a layer is updated? Or to configure the Lambda function so that its latest version always refers to the latest layer version?
Unfortunately it is currently not possible to reference the latest version, and there is no concept of aliases for layer versions.
The best suggestion would be to automate this, so that whenever you create a new Lambda layer version, all Lambda functions that currently include this layer get updated.
To create this trigger, set up a CloudWatch Events rule that listens for the PublishLayerVersion API event (delivered via CloudTrail).
Then have it trigger a Lambda that calls update-function-configuration on each affected function to replace its layer with the new one; a sketch of the rule follows.
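A hedged sketch of wiring that rule up with boto3, assuming CloudTrail is enabled in the region (API events only reach CloudWatch Events through CloudTrail); the rule name, function name, and account id below are placeholders:

import json

import boto3

events = boto3.client('events')

# Match the CloudTrail record for lambda:PublishLayerVersion
rule_arn = events.put_rule(
    Name='on-publish-layer-version',
    EventPattern=json.dumps({
        'source': ['aws.lambda'],
        'detail-type': ['AWS API Call via CloudTrail'],
        'detail': {'eventName': ['PublishLayerVersion']}
    })
)['RuleArn']

# Point the rule at the updater Lambda
events.put_targets(
    Rule='on-publish-layer-version',
    Targets=[{
        'Id': 'layer-updater',
        'Arn': 'arn:aws:lambda:us-east-1:111111111111:function:layer-updater'
    }]
)

The updater Lambda also needs a resource policy (lambda add-permission with principal events.amazonaws.com) so the rule is allowed to invoke it.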
Building on Chris's answer, you can also use a Lambda-backed custom resource in your stack and have that Lambda update the target function's configuration with the new layer ARN. I'm noting this in case someone has a similar need; I found this thread a couple of days ago.
There are some notes on this solution:
The Lambda behind the custom resource has to send a status response back to the CloudFormation (CFN) endpoint that triggered it, or else the CFN stack will hang until timeout (about an hour or more; it's a painful process if you have a problem in this Lambda, so be careful with that).
An easy way to send the response back is to use cfnresponse (the Pythonic way). This lib is available magically when you use CFN Lambda inline code (CFN sets it up when processing a template with inline code), as long as you have the line 'import cfnresponse' :D
CFN will not touch the custom resource after it is created, so when you update the stack for a new layer change, the Lambda will not trigger. A trick to make it run is to give the custom resource a custom property whose value changes each time you execute the stack, such as the layer version ARN. The custom resource is then updated, which means the Lambda behind it is triggered on each stack update.
Not sure why the logical name of the Lambda layer is changed by AWS::Serverless::LayerVersion, so I can't DependsOn that layer's logical name, but I can still !Ref its ARN.
Here is a sample template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  myshared-libraries layer
Resources:
  LambdaLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: !Sub MyLambdaLayer
      Description: Shared library layer
      ContentUri: my_layer/layerlib.zip
      CompatibleRuntimes:
        - python3.7
  ConsumerUpdaterLambda:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: consumer-updater
      InlineCode: |
        import os, boto3, json
        import cfnresponse
        def handler(event, context):
            print('EVENT:[{}]'.format(event))
            if event['RequestType'].upper() == 'UPDATE':
                shared_layer = os.getenv("DB_LAYER")
                lambda_client = boto3.client('lambda')
                consumer_lambda_list = ["target_lambda"]
                for consumer in consumer_lambda_list:
                    try:
                        lambda_name = consumer.split(':')[-1]
                        lambda_client.update_function_configuration(FunctionName=consumer, Layers=[shared_layer])
                        print("Updated Lambda function: '{0}' with new layer: {1}".format(lambda_name, shared_layer))
                    except Exception as e:
                        print("Lambda function: '{0}' has exception: {1}".format(lambda_name, str(e)))
            responseValue = 120
            responseData = {}
            responseData['Data'] = responseValue
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData)
      Handler: index.handler
      Runtime: python3.7
      Role: !GetAtt ConsumerUpdaterRole.Arn
      Environment:
        Variables:
          DB_LAYER: !Ref LambdaLayer
  ConsumerUpdaterRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - Fn::Sub: arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName:
            Fn::Sub: updater-lambda-configuration-policy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - lambda:GetFunction
                  - lambda:GetFunctionConfiguration
                  - lambda:UpdateFunctionConfiguration
                  - lambda:GetLayerVersion
                  - logs:DescribeLogGroups
                  - logs:CreateLogGroup
                Resource: "*"
  ConsumerUpdaterMacro:
    DependsOn: ConsumerUpdaterLambda
    Type: Custom::ConsumerUpdater
    Properties:
      ServiceToken: !GetAtt ConsumerUpdaterLambda.Arn
      DBLayer: !Ref LambdaLayer
Outputs:
  SharedLayer:
    Value: !Ref LambdaLayer
    Export:
      Name: MySharedLayer
Another option is to use the stack notification ARNs, which send all stack events to a given SNS topic, which you then use to trigger your update Lambda. In your Lambda, filter the SNS message body (which is a readable json-like format string) for the AWS::Lambda::LayerVersion resource, then grab its PhysicalResourceId, which is the layer ARN. To attach the SNS topic to your stack, use the CLI sam/cloudformation deploy --notification-arns option. Unfortunately, CodePipeline doesn't support this configuration option, so you can only use it with the CLI.
Sample code for your lambda to extract/filter the SNS message body with resource data
import boto3

def handler(event, context):
    print('EVENT:[{}]'.format(event))
    # The SNS message body is CloudFormation's newline-separated key='value' event text
    resource_data = extract_subscription_msg(event['Records'][0]['Sns']['Message'])
    layer_arn = ''
    if len(resource_data) > 0:
        if resource_data['ResourceStatus'] == 'CREATE_COMPLETE' and resource_data['ResourceType'] == 'AWS::Lambda::LayerVersion':
            layer_arn = resource_data['PhysicalResourceId']
    if layer_arn != '':
        lambda_client = boto3.client('lambda')
        consumer_lambda_list = ["target_lambda"]
        for consumer in consumer_lambda_list:
            lambda_name = consumer.split(':')[-1]
            try:
                lambda_client.update_function_configuration(FunctionName=consumer, Layers=[layer_arn])
                print("Update Lambda: '{0}' to layer: {1}".format(lambda_name, layer_arn))
            except Exception as e:
                print("Lambda function: '{0}' has exception: {1}".format(lambda_name, str(e)))
    return

def extract_subscription_msg(msg_body):
    result = {}
    if msg_body != '':
        attributes = msg_body.split('\n')
        for attr in attributes:
            if attr != '':
                items = attr.split('=')
                if items[0] in ['PhysicalResourceId', 'ResourceStatus', 'ResourceType']:
                    result[items[0]] = items[1].replace('\'', '')
    return result
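For reference, CloudFormation's SNS stack notifications arrive as newline-separated key='value' pairs, which is what extract_subscription_msg is splitting on. A quick illustration with a trimmed-down message (the ARN and stack name are placeholders):

sample_msg = (
    "StackName='my-layer-stack'\n"
    "ResourceType='AWS::Lambda::LayerVersion'\n"
    "ResourceStatus='CREATE_COMPLETE'\n"
    "PhysicalResourceId='arn:aws:lambda:us-east-1:111111111111:layer:mylayer:7'\n"
)
print(extract_subscription_msg(sample_msg))
# {'ResourceType': 'AWS::Lambda::LayerVersion', 'ResourceStatus': 'CREATE_COMPLETE',
#  'PhysicalResourceId': 'arn:aws:lambda:us-east-1:111111111111:layer:mylayer:7'}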
It is possible to derive the most recent version number of a layer using an additional data source, as per https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/lambda_layer_version
So in your definition module, you will have the original layer resource definition:
resource "aws_lambda_layer_version" "layer_mylib" {
filename = "layer_mylib.zip"
layer_name = "layer_mylib"
compatible_runtimes = ["python3.6", "python3.7", "python3.8"]
}
and then to obtain the ARN with latest version, use
data "aws_lambda_layer_version" "mylatest" {
layer_name = aws_lambda_layer_version.layer_mylib.layer_name
}
then data.aws_lambda_layer_version.mylatest.arn
will give the reference including the latest version number, which can be checked by placing
output "mylatest_arn" {
  value = data.aws_lambda_layer_version.mylatest.arn
}
in your common.tf

AWS SAM CLI cannot access Dynamo DB when function is invoked locally

I am building an AWS lambda with aws-sam-cli. In the function, I want to access a certain DynamoDB table.
My issue is that the function comes back with this error when I invoke it locally with the sam local invoke command: ResourceNotFoundException: Requested resource not found
const axios = require('axios')
const AWS = require('aws-sdk')

AWS.config.update({ region: <MY REGION> })
const dynamo = new AWS.DynamoDB.DocumentClient()

exports.handler = async (event) => {
  const scanParams = {
    TableName: 'example-table'
  }
  const scanResult = await dynamo.scan(scanParams).promise().catch((error) => {
    console.log(`Scan error: ${error}`)
    // => Scan error: ResourceNotFoundException: Requested resource not found
  })
  console.log(scanResult)
}
However, if I actually sam deploy it to AWS and test it in the actual Lambda console, it logs the table info correctly.
{
  Items: <TABLE ITEMS>,
  Count: 1,
  ScannedCount: 1
}
Is this expected behavior? Or is there some additional configuration I need to do for it to work locally? My template.yaml looks like this:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: 'Example SAM stack'
Resources:
  ExampleFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      Policies:
        - DynamoDBCrudPolicy:
            TableName: 'example-table'
I believe that when you invoke your Lambda locally, SAM does not recognise which profile to use for the remote resources, e.g. DynamoDB.
Try passing the credentials profile for your remote DynamoDB, e.g.:
sam local invoke --profile default
You can check the command documentation here: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-invoke.html
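As a quick sanity check (the profile name, region, and table name here are assumptions), you can verify with boto3 which account a profile resolves to, and that the table is visible with those credentials:

import boto3

# Confirm which account the 'default' profile points at
session = boto3.Session(profile_name='default')
print(session.client('sts').get_caller_identity()['Account'])

# Confirm the table exists for those credentials in the expected region
print(session.client('dynamodb', region_name='us-east-1').describe_table(
    TableName='example-table')['Table']['TableStatus'])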

Lambda invoke Lambda via API Gateway

I can't seem to get this to work. I created 2 Lambdas via Cloud9. I'm using boto3 to invoke one Lambda from another. Everything seems to work just fine via Cloud9, but when I publish and try to access it via API Gateway I keep getting "Endpoint request timed out" errors.
I know it can't be a timeout issue, because I've set up my YAML files to allow enough time to execute, and the Lambdas right now are really simple (only returning a string).
Here are my current YAML files. I'm wondering if maybe there are some permissions I need to include for API Gateway in the second YAML.
Lambda1
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.
Resources:
  api:
    Type: 'AWS::Serverless::Function'
    Properties:
      Description: ''
      Handler: api/lambda_function.lambda_handler
      MemorySize: 256
      Role: 'arn:aws:iam::820788395625:role/service-role/api_int-role'
      Runtime: python3.6
      Timeout: 30
      VpcConfig:
        SecurityGroupIds:
          - ...
        SubnetIds:
          - ...
      Policies: AWSLambdaFullAccess
Lambda2
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.
Resources:
  api:
    Type: 'AWS::Serverless::Function'
    Properties:
      Description: ''
      Handler: api/lambda_function.lambda_handler
      MemorySize: 512
      Role: 'arn:aws:iam::820788395625:role/service-role/api_int-role'
      Runtime: python3.6
      Timeout: 15
      VpcConfig:
        SecurityGroupIds:
          - ...
        SubnetIds:
          - ...
I just set up an API Gateway endpoint pointing directly to Lambda 2 and it responded with no problem. So...
API Gateway -> Lambda 2 (works)
API Gateway -> Lambda 1 -> Lambda 2 (does not work)
So for some reason when I want to call Lambda 2 via Lambda 1 over API Gateway it doesn't work.
Here is the code that is calling the 2nd Lambda
import json
import boto3

def lambda_handler(event, context):
    print('call boto3 client')
    lambda_client = boto3.client('lambda', region_name='us-east-1')
    print('boto3 client called')
    print('invoke lambda')
    env_response = lambda_client.invoke(
        FunctionName='cloud9-apiAlpha-api-TBSOYXLVBCLX',
        InvocationType='RequestResponse',
        Payload=json.dumps(event)
    )
    print('lambda invoked')
    print('env_response')
    print(env_response)
    print(env_response['Payload'])
    print(env_response['Payload'].read())
    return {
        'statusCode': 200,
        'headers': {
            'Content-Type': 'application/json',
            'Access-Control-Allow-Methods': 'POST,GET,OPTIONS,PUT,DELETE',
            'Access-Control-Allow-Origin': '*'
        },
        'body': 'HELLO WORLD!',
        'isBase64Encoded': False
    }
Now when I look at the logs, it gets to print('invoke lambda') but then stops and times out.
1. Invoking a Lambda from another Lambda can't be done without some configuration. In your .yml file, permission must be specified in order to invoke another Lambda. This can be accomplished by adding an iamRoleStatements section under the provider property:
provider:
  name: aws
  runtime: <runtime goes here> # e.g. python3.6 or nodejs6.10
  iamRoleStatements:
    - Effect: Allow
      Action:
        - lambda:InvokeFunction
      Resource: "*"
or by adding/attaching the simple policy AWSLambdaRole to the existing role attached to your Lambda function_1.
2. Invoking Lambda function_2 from function_1, code attached:
import json
import logging

import boto3

UTF_8 = 'utf-8'
LOGGER = logging.getLogger(__name__)
LAMBDA_CLIENT = None

def lambda_handler(event, context):
    global LAMBDA_CLIENT
    if not LAMBDA_CLIENT:
        # Reuse the client across warm invocations
        LAMBDA_CLIENT = boto3.client('lambda')
    try:
        encoded_payload = json.dumps({'message': 'this is an invocation call from lambda_1'}).encode(UTF_8)
        invoke_resp = LAMBDA_CLIENT.invoke(
            FunctionName='function_2',
            InvocationType='RequestResponse',
            Payload=encoded_payload)
        status_code = invoke_resp['StatusCode']
        if status_code != 200:
            LOGGER.error('invocation returned status %s', status_code)
        payload = invoke_resp['Payload'].read()
        resp = json.loads(payload)
        print(resp)
    except Exception:
        LOGGER.exception('invoking function_2 failed')
If you are using InvocationType=RequestResponse, then you can return some response from function_2, as sketched below.
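A minimal illustrative sketch of the function_2 side; whatever it returns is what function_1 reads from invoke_resp['Payload'] and json.loads():

def lambda_handler(event, context):
    # This dict is serialized and becomes function_1's Payload stream
    return {
        'statusCode': 200,
        'message': 'response for: {}'.format(event.get('message'))
    }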
Finally found the solution. The answer to my particular problem was that Lambda 1 & Lambda 2 were running inside a VPC and thus had no internet connection, so the invoke call from Lambda 1 could never reach the Lambda service endpoint. Once I removed the VPC config from Lambda 1, the invocation of Lambda 2 worked without any problems.
Just wanted to share in case I can save anyone else a week's worth of debugging LOL

How to define Resource Policy for CloudWatch Logs with CloudFormation?

When I configure DNS query logging with Route53, I can create a resource policy for Route53 to log to my log group. I can confirm this policy with the CLI aws logs describe-resource-policies and see something like:
{
    "resourcePolicies": [
        {
            "policyName": "test-logging-policy",
            "policyDocument": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"route53.amazonaws.com\"},\"Action\":[\"logs:CreateLogStream\",\"logs:PutLogEvents\"],\"Resource\":\"arn:aws:logs:us-east-1:xxxxxx:log-group:test-route53*\"}]}",
            "lastUpdatedTime": 1520865407511
        }
    ]
}
The CLI also has put-resource-policy to create one of these. I also see that Terraform has a resource aws_cloudwatch_log_resource_policy which does the same.
So the question: how do I do this with CloudFormation?
You can't use the CloudWatch console to create or edit a resource policy. You must use the CloudWatch API, one of the AWS SDKs, or the AWS CLI.
There is no CloudFormation support for creating a resource policy right now, but you can create a custom Lambda resource to do this:
https://gist.github.com/sudharsans/cf9c52d7c78a81818a4a47872982bd76
CloudFormation custom resource:
AddResourcePolicy:
  Type: Custom::AddResourcePolicy
  Version: '1.0'
  Properties:
    ServiceToken: arn:aws:lambda:us-east-1:872673965194:function:test-lambda-deploy-Lambda-15R963QKCI80A
    CloudWatchLogsLogGroupArn: !GetAtt LogGroup.Arn
    PolicyName: "testpolicy"
lambda:
import cfnresponse
import boto3

client = boto3.client('logs')

def PutPolicy(arn, policyname):
    response = client.put_resource_policy(
        policyName=policyname,
        policyDocument="....",
    )
    return

def handler(event, context):
    ......
    if event['RequestType'] == "Delete":
        DeletePolicy(PolicyName)
    if event['RequestType'] == "Create":
        PutPolicy(CloudWatchLogsLogGroupArn, PolicyName)
    responseData['Data'] = "SUCCESS"
    status = cfnresponse.SUCCESS
    .....
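For reference, a fleshed-out version of that handler might look like the following sketch; the policy document mirrors the Route53 policy from the question, and the log-group ARN and policy name arrive via the custom resource properties shown above:

import json

import boto3
import cfnresponse  # provided automatically for inline custom-resource code

client = boto3.client('logs')

def PutPolicy(arn, policyname):
    # Allow Route53 to create streams and put query logs into the log group
    client.put_resource_policy(
        policyName=policyname,
        policyDocument=json.dumps({
            'Version': '2012-10-17',
            'Statement': [{
                'Effect': 'Allow',
                'Principal': {'Service': 'route53.amazonaws.com'},
                'Action': ['logs:CreateLogStream', 'logs:PutLogEvents'],
                'Resource': arn
            }]
        })
    )

def handler(event, context):
    props = event['ResourceProperties']
    policy_name = props['PolicyName']
    try:
        if event['RequestType'] == 'Delete':
            client.delete_resource_policy(policyName=policy_name)
        else:  # Create and Update
            PutPolicy(props['CloudWatchLogsLogGroupArn'], policy_name)
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {'Data': 'SUCCESS'})
    except Exception as exc:
        cfnresponse.send(event, context, cfnresponse.FAILED, {'Data': str(exc)})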
4 years later, this still doesn't seem to work through CloudFormation, although there is apparently support for it now (the AWS::Logs::ResourcePolicy resource type).