I am trying to invoke a Lambda locally with sam local invoke. The function invokes fine, but the environment variables for my secrets are not resolving. The secrets resolve as expected when the function is deployed, but I want to avoid any difference between my local code and my deployed code. So, is there a way to resolve those secrets to the actual secret value at the time of invoking locally? Currently I just get the literal dynamic-reference string from the environment variable. Code below.
template.yaml
# This is the SAM template that represents the architecture of your serverless application
# https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-template-basics.html
# The AWSTemplateFormatVersion identifies the capabilities of the template
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/format-version-structure.html
AWSTemplateFormatVersion: 2010-09-09
Description: >-
  onConnect

# Transform section specifies one or more macros that AWS CloudFormation uses to process your template
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/transform-section-structure.html
Transform:
  - AWS::Serverless-2016-10-31

# Resources declares the AWS resources that you want to include in the stack
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resources-section-structure.html
Resources:
  # Each Lambda function is defined by properties:
  # https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction

  # This is a Lambda function config associated with the source code: hello-from-lambda.js
  helloFromLambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/onConnect.onConnect
      Runtime: nodejs14.x
      MemorySize: 128
      Timeout: 100
      Environment:
        Variables:
          WSS_ENDPOINT: '{{resolve:secretsmanager:prod/wss/api:SecretString:endpoint}}'
onConnect.js
/**
 * A Lambda function that returns a static string
 */
exports.onConnect = async () => {
    const endpoint = process.env.WSS_ENDPOINT;
    console.log(endpoint);
    // If you change this message, you will need to change hello-from-lambda.test.js
    const message = 'Hellddfdsfo from Lambda!';
    // All log statements are written to CloudWatch
    console.info(`${message}`);
    return message;
}
I came up with a workaround that allows me to have one code base and "resolve" secrets/parameters locally.
I created a very basic Lambda layer whose only job is fetching secrets when the environment is set to LOCAL.
import boto3

def get_secret(env, type, secret):
    client = boto3.client('ssm')
    if env == 'LOCAL':
        # Running locally: fetch the real value from SSM at invoke time
        if type == 'parameter':
            return client.get_parameter(
                Name=secret,
            )['Parameter']['Value']
    else:
        # Deployed: the value was already resolved by CloudFormation
        return secret
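Since the original question is about Secrets Manager rather than Parameter Store, the same trick can cover that case too. Below is a minimal sketch of how the layer could be extended, assuming the LOCAL mapping holds the secret's name and that the key argument mimics the SecretString:key part of the dynamic reference (these names are my own, not part of the layer above):

import json
import boto3

def get_secret_value(env, secret, key=None):
    # Deployed: CloudFormation already resolved the dynamic reference, so the
    # environment variable holds the real value and is returned as-is.
    if env != 'LOCAL':
        return secret
    # Running locally: 'secret' holds the Secrets Manager name, look it up now
    client = boto3.client('secretsmanager')
    secret_string = client.get_secret_value(SecretId=secret)['SecretString']
    if key is None:
        return secret_string
    # Mirror the {{resolve:secretsmanager:...:SecretString:key}} behaviour
    return json.loads(secret_string)[key]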
I set the environment with a parameter in the lambda that will be calling this layer. BTW this layer will resolve more than one secret eventually so that's why the nested if might look a little strange. This is how I set the environment:
Resources:
  ...
  GetWSSToken:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: get_wss_token
      CodeUri: get_wss_token/
      Handler: app.lambda_handler
      Runtime: python3.7
      Timeout: 30
      Layers:
        - arn:aws:lambda:********:layer:SecretResolver:8
      Environment:
        Variables:
          ENVIRONMENT: !Ref Env
          JWT_SECRET: !FindInMap [ Map, !Ref Env, jwtsecret ]
  ...

Mappings:
  Map:
    LOCAL:
      jwtsecret: jwt_secret
    PROD:
      jwtsecret: '{{resolve:ssm:jwt_secret}}'
    STAGING:
      jwtsecret: '{{resolve:ssm:jwt_secret}}'

Parameters:
  ...
  Env:
    Type: String
    Description: Environment this lambda is being run in.
    Default: LOCAL
    AllowedValues:
      - LOCAL
      - PROD
      - STAGING
Now I can simply call the get_secret method in my Lambda, and depending on what I set Env to, the secret is either fetched at runtime or returned straight from the environment variable.
import json
import jwt
import os
from datetime import datetime, timedelta
from secret_resolver import get_secret

def lambda_handler(event, context):
    secret = get_secret(os.environ['ENVIRONMENT'], 'parameter', os.environ['JWT_SECRET'])
    two_hours_from_now = datetime.now() + timedelta(hours=2)
    encoded_jwt = jwt.encode({"expire": two_hours_from_now.timestamp()}, secret, algorithm="HS256")
    return {
        "statusCode": 200,
        "body": json.dumps({
            "token": encoded_jwt
        }),
    }
I hope this helps someone out there trying to figure this out. The main issue here is keeping the secrets out of the code base while being able to test locally with the same code that goes into production.
Related
I have an AWS SAM template that creates an API Gateway hooked into a Step Function.
This is all working fine, but I need to add an Integration Response Mapping Template to the response coming back from Step Functions.
I can't see that this is possible with SAM templates?
I found the relevant CloudFormation template for it: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration-integrationresponse.html
But it looks like I would have to create the whole AWS::ApiGateway::Method / Integration / IntegrationResponses chain - and then I'm not sure how you reference that from the other parts of the SAM template.
I read that it can be done with an OpenAPI / Swagger definition - is that the only way? Or is there a cleaner way to simply add this template?
This is a watered-down version of what I have, just to demonstrate:
Transform: AWS::Serverless-2016-10-31
Description: My SAM Template

Resources:
  MyAPIGateway:
    Type: AWS::Serverless::Api
    Properties:
      Name: my-api
      StageName: beta
      Auth:
        ApiKeyRequired: true
        UsagePlan:
          CreateUsagePlan: PER_API
          UsagePlanName: my-usage-plan
          Quota:
            Limit: 1000
            Period: DAY
          Throttle:
            BurstLimit: 1000
            RateLimit: 1000

  MyStateMachine:
    Type: AWS::Serverless::StateMachine
    Properties:
      Name: my-state-machine
      DefinitionUri: statemachines/my-state-machine.asl.json
      Events:
        MyEvent:
          Type: Api
          Properties:
            Path: /myApiMethod
            Method: post
            RestApiId: !Ref MyAPIGateway
            # TODO: how do we define this Integration Response Template ?
            # IntegrationResponse:
            #   Template:
            #     application/json: |
            #       ## parse arn:aws:states:REGION:ACCOUNT:execution:STATE_MACHINE:EXECUTION_NAME
            #       ## to get just the name at the end
            #       #set($executionArn = $input.json('$.executionArn'))
            #       #set($arnTokens = $executionArn.split(':'))
            #       #set($lastIndex = $arnTokens.size() - 1)
            #       #set($executionId = $arnTokens[$lastIndex].replace('"',''))
            #       {
            #         "execution_id" : "$executionId",
            #         "request_id" : "$context.requestId",
            #         "request_start_time" : "$context.requestTimeEpoch"
            #       }
Right now you're using AWS SAM events on your state machine to have SAM construct the API for you, which is a very easy way to build the API. However, certain aspects of the API cannot be configured this way.
You can still use AWS SAM to construct the API with all of the advanced features, though, by using the DefinitionBody attribute of the AWS::Serverless::Api (or DefinitionUri). This allows you to specify the API using the OpenAPI specification together with the API Gateway OpenAPI extensions.
You still need to define the event on the StateMachine, since this also ensures that the correct permissions are configured for your API to call your other services. If you don't specify the event, you'll have to fix the permissions yourself.
I am using the Serverless Framework and AWS cloud services for my project. I have created many services with AWS Lambda and created a layer to serve the common needs of those services. I maintain the layer in a separate stack and include the layer in all the services using CloudFormation syntax. The problem is that every time I update the layer and deploy it to AWS, I need to deploy all the services again for the changes to be reflected in the associated services. Is there any way to mitigate this, so that once I deploy my layer, all the associated services are also updated with the latest layer changes? I don't want to update those services manually every time I deploy a layer. Hope this makes sense to you. I am adding the serverless.yml file of the layer and of one of my services to make it clearer. Looking forward to hearing from you. Thanks in advance.
serverless.yml file for layer
service: t5-globals
frameworkVersion: '2'

provider:
  name: aws
  runtime: nodejs14.x
  lambdaHashingVersion: 20201221
  environment:
    NODE_PATH: "./:/opt/node_modules"

layers:
  t5Globals:
    path: nodejs
    compatibleRuntimes:
      - nodejs14.x

resources:
  Outputs:
    T5GlobalsLayerExport:
      Value:
        Ref: T5GlobalsLambdaLayer
      Export:
        Name: T5GlobalsLambdaLayer

plugins:
  - serverless-offline
serverless.yml file of one service
service: XXXXX
projectDir: XXXXXX
frameworkVersion: '2'

provider: XXXXXX
plugins: XXXXXX
resources: XXXXXX

functions:
  XXXXXX:
    name: XXXXXXX
    handler: XXXXXXX
    layers:
      - ${cf:t5-globals-dev.T5GlobalsLayerExport}
    events:
      - http: XXXXX
When you implement CI/CD, this should be automated. We normally use a trigger that traps the git change event of, say, CodeCommit and executes a Lambda function.
The Lambda function then scans the files for changes, creates a new layer version, and updates all the Lambda functions that use this layer so they point at the latest version.
Sharing the code, written in Python; change and use it as per your needs.
import base64
import datetime
import os
from urllib.parse import unquote_plus

import boto3
from botocore.exceptions import ClientError

s3 = boto3.resource('s3')
region = os.environ['region']
lambdaclient = boto3.client('lambda', region_name=region)

def layerUpdateExistingLambdaFunctions(extensionLayer):
    # Scan all functions (paginated) and bump any that use the given layer to the latest layer versions
    functionslist = []
    nextmarker = None
    while True:
        if nextmarker is not None:
            list_function_response = lambdaclient.list_functions(
                FunctionVersion='ALL',
                MaxItems=10,
                Marker=nextmarker
            )
        else:
            list_function_response = lambdaclient.list_functions(
                FunctionVersion='ALL',
                MaxItems=10
            )

        if 'Functions' in list_function_response.keys():
            for function in list_function_response['Functions']:
                functionName = function['FunctionName']
                layersUsed = []
                usingExtensionLayer = False

                if 'Layers' in function.keys():
                    layers = function['Layers']
                    for layer in layers:
                        # Strip the version suffix from the layer ARN
                        layerArn = layer['Arn'][:layer['Arn'].rfind(':')]
                        layersUsed.append(layerArn)
                        if extensionLayer.find(layerArn) >= 0:
                            print(f'Function {functionName} using extension layer')
                            usingExtensionLayer = True
                            functionslist.append(functionName)

                if usingExtensionLayer is True:
                    extensionLayerArn = extensionLayer[:extensionLayer.rfind(':')]
                    print(f'Existing function {functionName} using {extensionLayerArn}, needs to be updated')
                    newLayers = []
                    for layerUsed in layersUsed:
                        # Pick the most recent version of every layer the function uses
                        newLayers.append(lambdaclient.list_layer_versions(
                            CompatibleRuntime='python3.7',
                            LayerName=layerUsed
                        )['LayerVersions'][0]['LayerVersionArn'])
                    lambdaclient.update_function_configuration(
                        FunctionName=functionName,
                        Layers=newLayers
                    )
                    print(f'Function {functionName} updated with latest layer versions')

        if 'NextMarker' in list_function_response.keys():
            nextmarker = list_function_response['NextMarker']
        else:
            break

    return functionslist
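The function above assumes the new layer version has already been published. If the same pipeline Lambda also needs to publish it, a hedged sketch of that step could look like this (the S3 bucket/key and runtime are placeholders for whatever your CI job uploads, not part of the original code):

import boto3

lambda_client = boto3.client('lambda')

def publish_new_layer_version(layer_name, bucket, key):
    # Publish a new layer version from a zip the CI job uploaded to S3, then
    # hand the returned ARN to layerUpdateExistingLambdaFunctions above.
    response = lambda_client.publish_layer_version(
        LayerName=layer_name,
        Content={'S3Bucket': bucket, 'S3Key': key},
        CompatibleRuntimes=['python3.7']
    )
    return response['LayerVersionArn']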
I am making a serverless website using AWS Lambda and the SAM CLI tool from AWS (mostly just to test making real requests to the API). I want to serve assets with the express.static function, but have a problem. When I use it I get an error about it not returning JSON, and the error says that it needs to do that to work. I have 2 functions for now: views (to serve the ejs files) and assets (to serve static files like CSS and frontend JS). Here is my template.yml:
# This is the SAM template that represents the architecture of your serverless application
# https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-template-basics.html
# The AWSTemplateFormatVersion identifies the capabilities of the template
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/format-version-structure.html
AWSTemplateFormatVersion: 2010-09-09
Description: >-
  [Description goes here]

# Transform section specifies one or more macros that AWS CloudFormation uses to process your template
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/transform-section-structure.html
Transform:
  - AWS::Serverless-2016-10-31

# Resources declares the AWS resources that you want to include in the stack
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resources-section-structure.html
Resources:
  assets:
    Type: AWS::Serverless::Function
    Properties:
      Handler: amplify/backend/function/assets/src/index.handler
      Runtime: nodejs14.x
      MemorySize: 512
      Timeout: 100
      Description: serves the assets
      Events:
        Api:
          Type: Api
          Properties:
            Path: /assets/{folder}/{file}
            Method: GET
  views:
    Type: AWS::Serverless::Function
    Properties:
      Handler: amplify/backend/function/views/src/index.handler
      Runtime: nodejs14.x
      MemorySize: 512
      Timeout: 100
      Description: serves the views
      Events:
        Api:
          Type: Api
          Properties:
            Path: /
            Method: GET

Outputs:
  WebEndpoint:
    Description: "API Gateway endpoint URL for Prod stage"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/"
And my code for the assets function:
index.js:
const awsServerlessExpress = require('aws-serverless-express');
const app = require('./app');
const server = awsServerlessExpress.createServer(app);

exports.handler = (event, context) => {
    console.log(`EVENT: ${JSON.stringify(event)}`);
    return awsServerlessExpress.proxy(server, event, context, 'PROMISE').promise;
};
app.js:
const express = require('express'),
app = express()
app.use(express.json())
app.use('/assets', express.static('assets'))
app.listen(3000);
module.exports = app
Is there some config option for the template.yml that I should know about, or do I have to change my code?
I made my own solution with fs in Node.js. I put a simple piece of code like this in the views function:
// (requires: const fs = require('fs') at the top of the file)
app.get('/assets/*', (req, res) => {
    if (!fs.existsSync(__dirname + `/${req.url}`)) {
        res.status(404).send(`CANNOT GET ${req.url}`);
        return;
    }
    res.send(fs.readFileSync(__dirname + `/${req.url}`, 'utf-8'));
})
I also edited the template.yml so that the API path /assets/{folder}/{file} points to the views function, deleted the assets function, and moved the assets folder with all the assets into the views function's directory.
EDIT:
For almost everything, for some reason, the Content-Type HTTP header was always being set to text/html, but changing the code to this fixes it:
// (requires: const fs = require('fs') and const path = require('path') at the top of the file)
app.get('/assets/*', (req, res) => {
    if (!fs.existsSync(`${__dirname}${req.url}`)) {
        res.status(404).send(`CANNOT GET ${req.url}`);
        return;
    }
    res.contentType(path.basename(req.url))
    res.send(fs.readFileSync(__dirname + `${req.url}`, 'utf-8'));
})
All this does is use the contentType function on the res object. You just pass in the name of the file and it will automatically find the right content type.
I have more than 20 lambda functions in a developing application. And a lambda layer that contains a good amount of common code.
A Lambda function is hooked to a particular version of the layer, and every time I update the layer, it generates a new version. Since it is a developing application, I have a new version of the layer almost every day. That creates a mess, because the Lambda functions have to be touched every day just to upgrade the layer version.
I know it is important to freeze code for a lambda function in production, and it is essential to hook one version of the lambda function to a version of the layer.
But, for the development environment, is it possible to prevent generating a new layer version every time a layer is updated? Or configure the lambda function so that the latest lambda version always refers to the latest layer version?
Unfortunately it is currently not possible to reference the latest, and there is no concept of aliases for the layer versions.
The best suggestion would be to automate this, so that whenever you create a new Lambda Layer version it would update all Lambda functions that currently include this Lambda Layer.
To create this trigger, create a CloudWatch Events rule that listens for the PublishLayerVersion event.
Then have it trigger a Lambda that calls update-function-configuration on each Lambda function that uses the layer, replacing the layer with the new version, e.g. along the lines of the sketch below.
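For reference, a minimal sketch of such an updater Lambda, assuming a CloudTrail-based EventBridge/CloudWatch Events rule on PublishLayerVersion; the event field names and the function list are assumptions you would need to verify and adapt:

import boto3

lambda_client = boto3.client('lambda')

# Functions that should always track the newest layer version (placeholder names)
CONSUMER_FUNCTIONS = ['my-function-a', 'my-function-b']

def handler(event, context):
    # CloudTrail-based events carry the API response under detail.responseElements;
    # verify the exact shape against a real PublishLayerVersion event before relying on it.
    new_layer_arn = event['detail']['responseElements']['layerVersionArn']
    layer_prefix = new_layer_arn.rsplit(':', 1)[0]
    for function_name in CONSUMER_FUNCTIONS:
        config = lambda_client.get_function_configuration(FunctionName=function_name)
        # Keep any other layers the function uses, dropping only older versions of this layer
        other_layers = [l['Arn'] for l in config.get('Layers', [])
                        if not l['Arn'].startswith(layer_prefix)]
        lambda_client.update_function_configuration(
            FunctionName=function_name,
            Layers=other_layers + [new_layer_arn]
        )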
Building on Chris's answer, you can also use a Lambda-backed custom resource in your stack and use that Lambda to update the target functions' configuration with the new layer ARN. I'm noting this in case someone has a similar need; I found this thread a couple of days ago.
Some notes on this solution:
The Lambda behind the custom resource has to send a status response back to the CloudFormation (CFN) callback endpoint, or else the CFN stack will hang until it times out (about an hour or more; it's a painful process if you have a problem in this Lambda, so be careful with that).
An easy way to send the response back is to use cfnresponse (the Pythonic way). This lib is available magically when you use inline Lambda code in CFN (CFN sets it up when processing a template with inline code), and you must have the line 'import cfnresponse'. :D
CFN will not touch the custom resource after it is created, so when you update the stack for a new layer change, the Lambda will not be triggered. A trick to work around this is to give the custom resource a custom property whose value changes each time you execute the stack, e.g. the layer version ARN. That way the custom resource is updated, which means the Lambda behind it is triggered on every stack update.
Not sure why the logical name of the Lambda layer is changed with AWS::Serverless::LayerVersion, so I can't DependsOn that layer's logical name, but I can still !Ref its ARN.
Here is some sample code:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  myshared-libraries layer

Resources:
  LambdaLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: !Sub MyLambdaLayer
      Description: Shared library layer
      ContentUri: my_layer/layerlib.zip
      CompatibleRuntimes:
        - python3.7

  ConsumerUpdaterLambda:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: consumer-updater
      InlineCode: |
        import os, boto3, json
        import cfnresponse

        def handler(event, context):
            print('EVENT:[{}]'.format(event))
            if event['RequestType'].upper() == 'UPDATE':
                shared_layer = os.getenv("DB_LAYER")
                lambda_client = boto3.client('lambda')
                consumer_lambda_list = ["target_lamda"]
                for consumer in consumer_lambda_list:
                    try:
                        lambda_name = consumer.split(':')[-1]
                        lambda_client.update_function_configuration(FunctionName=consumer, Layers=[shared_layer])
                        print("Updated Lambda function: '{0}' with new layer: {1}".format(lambda_name, shared_layer))
                    except Exception as e:
                        print("Lambda function: '{0}' has exception: {1}".format(lambda_name, str(e)))
            responseValue = 120
            responseData = {}
            responseData['Data'] = responseValue
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData)
      Handler: index.handler
      Runtime: python3.7
      Role: !GetAtt ConsumerUpdaterRole.Arn
      Environment:
        Variables:
          DB_LAYER: !Ref LambdaLayer

  ConsumerUpdaterRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - Fn::Sub: arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName:
            Fn::Sub: updater-lambda-configuration-policy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - lambda:GetFunction
                  - lambda:GetFunctionConfiguration
                  - lambda:UpdateFunctionConfiguration
                  - lambda:GetLayerVersion
                  - logs:DescribeLogGroups
                  - logs:CreateLogGroup
                Resource: "*"

  ConsumerUpdaterMacro:
    DependsOn: ConsumerUpdaterLambda
    Type: Custom::ConsumerUpdater
    Properties:
      ServiceToken: !GetAtt ConsumerUpdaterLambda.Arn
      DBLayer: !Ref LambdaLayer

Outputs:
  SharedLayer:
    Value: !Ref LambdaLayer
    Export:
      Name: MySharedLayer
Another option is to use the stack notification ARN, which sends all stack events to a defined SNS topic that you can use to trigger your updater Lambda. In your Lambda, you filter the SNS message body (which is a readable, JSON-like formatted string) for the AWS::Lambda::LayerVersion resource and then grab its PhysicalResourceId, which is the layer ARN. To attach the SNS topic to your stack, use the --notification-arns option of sam deploy / aws cloudformation deploy. Unfortunately, CodePipeline doesn't support this configuration option, so you can only use it with the CLI.
Sample code for your Lambda to extract/filter the SNS message body for the resource data:
import os, boto3, json

def handler(event, context):
    print('EVENT:[{}]'.format(event))
    resource_data = extract_subscription_msg(event['Records'][0]['Sns']['Message'])
    layer_arn = ''
    if len(resource_data) > 0:
        if resource_data['ResourceStatus'] == 'CREATE_COMPLETE' and resource_data['ResourceType'] == 'AWS::Lambda::LayerVersion':
            layer_arn = resource_data['PhysicalResourceId']

    if layer_arn != '':
        lambda_client = boto3.client('lambda')
        consumer_lambda_list = ["target_lambda"]
        for consumer in consumer_lambda_list:
            lambda_name = consumer.split(':')[-1]
            try:
                lambda_client.update_function_configuration(FunctionName=consumer, Layers=[layer_arn])
                print("Update Lambda: '{0}' to layer: {1}".format(lambda_name, layer_arn))
            except Exception as e:
                print("Lambda function: '{0}' has exception: {1}".format(lambda_name, str(e)))
    return

def extract_subscription_msg(msg_body):
    result = {}
    if msg_body != '':
        attributes = msg_body.split('\n')
        for attr in attributes:
            if attr != '':
                items = attr.split('=')
                if items[0] in ['PhysicalResourceId', 'ResourceStatus', 'ResourceType']:
                    result[items[0]] = items[1].replace('\'', '')
    return result
It is possible to derive the most recent version number of a layer, using an additional data statement, as per https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/lambda_layer_version
So in your definition module, you will have the original layer resource definition
resource "aws_lambda_layer_version" "layer_mylib" {
filename = "layer_mylib.zip"
layer_name = "layer_mylib"
compatible_runtimes = ["python3.6", "python3.7", "python3.8"]
}
and then to obtain the ARN with latest version, use
data "aws_lambda_layer_version" "mylatest" {
layer_name = aws_lambda_layer_version.layer_mylib.layer_name
}
then data.aws_lambda_layer_version.mylatest.arn will give the reference, which includes the latest version number. This can be checked by placing an output (named here, for example, latest_layer_arn)

output "latest_layer_arn" {
  value = data.aws_lambda_layer_version.mylatest.arn
}
in your common.tf
I can't seem to get this to work. I created 2 Lambdas via Cloud9 (C9). I'm using boto3 to invoke one Lambda from another. Everything seems to work just fine via C9, but when I publish and try to access via API Gateway I keep getting "Endpoint request timed out" errors.
I know it can't be a timeout issue, because I've set up my yaml files to allow enough time to execute and the Lambdas right now are really simple (they only return a string).
Here are my current yaml files. I'm wondering if maybe there is some sort of permission I need to include for API Gateway in the second yaml.
Lambda1
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.
Resources:
  api:
    Type: 'AWS::Serverless::Function'
    Properties:
      Description: ''
      Handler: api/lambda_function.lambda_handler
      MemorySize: 256
      Role: 'arn:aws:iam::820788395625:role/service-role/api_int-role'
      Runtime: python3.6
      Timeout: 30
      VpcConfig:
        SecurityGroupIds:
          - ...
        SubnetIds:
          - ...
      Policies: AWSLambdaFullAccess
Lambda2
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.
Resources:
  api:
    Type: 'AWS::Serverless::Function'
    Properties:
      Description: ''
      Handler: api/lambda_function.lambda_handler
      MemorySize: 512
      Role: 'arn:aws:iam::820788395625:role/service-role/api_int-role'
      Runtime: python3.6
      Timeout: 15
      VpcConfig:
        SecurityGroupIds:
          - ...
        SubnetIds:
          - ...
I just set up an API Gateway endpoint directly to Lambda2 and it returned no problem. So...
API Gateway -> Lambda 2 (works)
API Gateway -> Lambda 1 -> Lambda 2 (does not work)
So for some reason when I want to call Lambda 2 via Lambda 1 over API Gateway it doesn't work.
Here is the code that is calling the 2nd Lambda
import json
import boto3

def lambda_handler(event, context):
    print('call boto3 client')
    lambda_client = boto3.client('lambda', region_name='us-east-1')
    print('boto3 client called')
    print('invoke lambda')
    env_response = lambda_client.invoke(
        FunctionName='cloud9-apiAlpha-api-TBSOYXLVBCLX',
        InvocationType='RequestResponse',
        Payload=json.dumps(event)
    )
    print('lambda invoked')
    print('env_response')
    print(env_response)
    print(env_response['Payload'])
    print(env_response['Payload'].read())
    return {
        'statusCode': 200,
        'headers': {
            'Content-Type': 'application/json',
            'Access-Control-Allow-Methods': 'POST,GET,OPTIONS,PUT,DELETE',
            'Access-Control-Allow-Origin': '*'
        },
        'body': 'HELLO WORLD!',
        'isBase64Encoded': False
    }
Now when I look at the logs, it gets to print('invoke lambda') but then stops and times out.
1. Invoking a Lambda from another Lambda can't be done without some configuration. In your .yml file, permission must be specified in order to invoke another Lambda. This can be accomplished by adding an iamRoleStatements section under the provider property, or by adding the simple AWSLambdaRole policy to the existing role attached to Lambda function_1:
provider:
  name: aws
  runtime: <runtime goes here> # e.g. python3.6 or nodejs6.10
  iamRoleStatements:
    - Effect: Allow
      Action:
        - lambda:InvokeFunction
      Resource: "*"
Or add/attach the AWSLambdaRole policy to the existing role attached to your Lambda function_1.
2. Code for invoking function_2 from Lambda function_1 is attached:
import json
import logging
import boto3

UTF_8 = 'utf-8'
LOGGER = logging.getLogger()
LAMBDA_CLIENT = None

def lambda_handler(event, context):
    # Re-use the client across warm invocations
    global LAMBDA_CLIENT
    if not LAMBDA_CLIENT:
        LAMBDA_CLIENT = boto3.client('lambda')
    try:
        encoded_payload = json.dumps({'message': 'this is an invocation call from lambda_1'}).encode(UTF_8)
        invoke_resp = LAMBDA_CLIENT.invoke(
            FunctionName='function_2',
            InvocationType='RequestResponse',
            Payload=encoded_payload)
        status_code = invoke_resp['StatusCode']
        if status_code != 200:
            LOGGER.error('error ')
        payload = invoke_resp['Payload'].read()
        resp = json.loads(payload)
        print(resp)
    except Exception:
        LOGGER.exception('invoking function_2 failed')
If you are using InvocationType='RequestResponse', then you can return a response from function_2.
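For illustration, a minimal sketch of what function_2 could look like on the other end (the handler body is an assumption, not the asker's actual code):

def lambda_handler(event, context):
    # Echo something back so the caller (function_1) can inspect the payload it reads
    return {
        'ok': True,
        'received': event
    }

Whatever this handler returns is what function_1 sees after json.loads(invoke_resp['Payload'].read()).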
Finally found the solution. The answer to my particular problem was that Lambda 1 & Lambda 2 were operating inside a VPC and thus had no internet connection. Once I removed the VPC config from Lambda 1, the invocation of Lambda 2 worked without any problems.
Just wanted to share in case I can save anyone else a week's worth of debugging. LOL