Configure AWS Lambda function to use latest version of a Layer

I have more than 20 Lambda functions in an application under development, and a Lambda layer that contains a good amount of common code.
A Lambda function is pinned to a particular version of the layer, and every time I update the layer, a new version is generated. Since the application is still in development, I have a new version of the layer almost every day. That creates a mess: every Lambda function has to be touched every day just to upgrade its layer version.
I know it is important to freeze code for a Lambda function in production, and that pinning one version of the function to one version of the layer is essential there.
But for the development environment, is it possible to prevent generating a new layer version every time the layer is updated? Or to configure the Lambda function so that its latest version always refers to the latest layer version?

Unfortunately it is currently not possible to reference the latest layer version, and there is no concept of aliases for layer versions.
The best suggestion would be to automate this, so that whenever you publish a new layer version, every Lambda function that currently includes that layer gets updated.
To create this event trigger, create a CloudWatch Events (EventBridge) rule that listens for the PublishLayerVersion API call (delivered via CloudTrail).
Then have the rule trigger a Lambda that calls update-function-configuration on each consuming function, replacing the old layer version with the new one.
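A minimal sketch of such an updater Lambda, assuming an EventBridge rule on the CloudTrail PublishLayerVersion event; the function names and the exact event shape used here are assumptions, not verified against a live account:

import boto3

lambda_client = boto3.client('lambda')

# Hypothetical list of consumers; you could instead discover them with
# list_functions() and inspect each function's Layers.
CONSUMERS = ['my-function-1', 'my-function-2']

def handler(event, context):
    # For "AWS API Call via CloudTrail" events, the API response is under
    # detail.responseElements; PublishLayerVersion returns the new version ARN there.
    new_arn = event['detail']['responseElements']['layerVersionArn']
    unversioned = new_arn.rsplit(':', 1)[0]
    for name in CONSUMERS:
        config = lambda_client.get_function_configuration(FunctionName=name)
        # Keep the function's other layers; drop older versions of this one.
        layers = [l['Arn'] for l in config.get('Layers', [])
                  if l['Arn'].rsplit(':', 1)[0] != unversioned]
        lambda_client.update_function_configuration(
            FunctionName=name, Layers=layers + [new_arn])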

Building on @Chris's answer, you can also use a Lambda-backed custom resource in your stack and have that Lambda update the target functions' configuration with the new layer ARN. I am noting this here in case someone has a similar need; I found this thread a couple of days ago.
There are some notes on this solution:
The Lambda behind the custom resource has to send a status response back to the CloudFormation (CFN) response endpoint, or else the stack will hang until it times out (about an hour or more; it is a painful process if you have a problem in this Lambda, so be careful with that).
The easy way to send the response is the cfnresponse module (the pythonic way). This library is available magically when you use inline code in the template (CFN sets it up when processing inline code), and your handler must contain the line 'import cfnresponse'. :D
CFN will not touch the custom resource after it is created, so when you update the stack with a new layer change, the Lambda will not be triggered. The trick to make it move is to give the custom resource a custom property whose value changes on every stack execution; the layer version ARN works well. Because the property changes, the custom resource is updated, which means its Lambda is triggered on every stack update.
I am not sure why the logical name of the layer is changed by AWS::Serverless::LayerVersion (so I cannot use DependsOn with the layer's logical name), but I can still !Ref it to get its ARN.
Here is a sample template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  myshared-libraries layer
Resources:
  LambdaLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: !Sub MyLambdaLayer
      Description: Shared library layer
      ContentUri: my_layer/layerlib.zip
      CompatibleRuntimes:
        - python3.7
  ConsumerUpdaterLambda:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: consumer-updater
      InlineCode: |
        import os, boto3, json
        import cfnresponse
        def handler(event, context):
            print('EVENT:[{}]'.format(event))
            if event['RequestType'].upper() == 'UPDATE':
                shared_layer = os.getenv("DB_LAYER")
                lambda_client = boto3.client('lambda')
                consumer_lambda_list = ["target_lambda"]
                for consumer in consumer_lambda_list:
                    lambda_name = consumer.split(':')[-1]
                    try:
                        lambda_client.update_function_configuration(FunctionName=consumer, Layers=[shared_layer])
                        print("Updated Lambda function: '{0}' with new layer: {1}".format(lambda_name, shared_layer))
                    except Exception as e:
                        print("Lambda function: '{0}' has exception: {1}".format(lambda_name, str(e)))
            # Always send a response back to CloudFormation, or the stack will hang
            responseValue = 120
            responseData = {}
            responseData['Data'] = responseValue
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData)
      Handler: index.handler
      Runtime: python3.7
      Role: !GetAtt ConsumerUpdaterRole.Arn
      Environment:
        Variables:
          DB_LAYER: !Ref LambdaLayer
  ConsumerUpdaterRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - Fn::Sub: arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName:
            Fn::Sub: updater-lambda-configuration-policy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - lambda:GetFunction
                  - lambda:GetFunctionConfiguration
                  - lambda:UpdateFunctionConfiguration
                  - lambda:GetLayerVersion
                  - logs:DescribeLogGroups
                  - logs:CreateLogGroup
                Resource: "*"
  ConsumerUpdaterMacro:
    DependsOn: ConsumerUpdaterLambda
    Type: Custom::ConsumerUpdater
    Properties:
      ServiceToken: !GetAtt ConsumerUpdaterLambda.Arn
      DBLayer: !Ref LambdaLayer
Outputs:
  SharedLayer:
    Value: !Ref LambdaLayer
    Export:
      Name: MySharedLayer
Another option is to use the stack notification ARNs, which send all stack events to a given SNS topic; you can use that topic to trigger your updater Lambda. In the Lambda, filter the SNS message body (a readable, JSON-like format string) for the AWS::Lambda::LayerVersion resource, then grab its PhysicalResourceId, which is the layer version ARN. To attach the SNS topic to your stack, use the --notification-arns option of the sam deploy / aws cloudformation deploy CLI commands. Unfortunately, CodePipeline does not support this configuration option, so it can only be used from the CLI.
Sample code for your Lambda to extract/filter the SNS message body for the resource data:
import os, boto3, json

def handler(event, context):
    print('EVENT:[{}]'.format(event))
    # The SNS record carries the CloudFormation stack event as a plain-text body
    resource_data = extract_subscription_msg(event['Records'][0]['Sns']['Message'])
    layer_arn = ''
    if len(resource_data) > 0:
        if resource_data['ResourceStatus'] == 'CREATE_COMPLETE' and resource_data['ResourceType'] == 'AWS::Lambda::LayerVersion':
            layer_arn = resource_data['PhysicalResourceId']
    if layer_arn != '':
        lambda_client = boto3.client('lambda')
        consumer_lambda_list = ["target_lambda"]
        for consumer in consumer_lambda_list:
            lambda_name = consumer.split(':')[-1]
            try:
                lambda_client.update_function_configuration(FunctionName=consumer, Layers=[layer_arn])
                print("Update Lambda: '{0}' to layer: {1}".format(lambda_name, layer_arn))
            except Exception as e:
                print("Lambda function: '{0}' has exception: {1}".format(lambda_name, str(e)))
    return

def extract_subscription_msg(msg_body):
    # The body is newline-separated key='value' pairs; keep only the keys we need
    result = {}
    if msg_body != '':
        attributes = msg_body.split('\n')
        for attr in attributes:
            if attr != '':
                items = attr.split('=')
                if items[0] in ['PhysicalResourceId', 'ResourceStatus', 'ResourceType']:
                    result[items[0]] = items[1].replace('\'', '')
    return result
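To see what extract_subscription_msg expects: the body of a CloudFormation stack-event notification is newline-separated key='value' pairs. A hypothetical example (the stack name and ARN below are made up):

sample_msg = (
    "StackName='my-layer-stack'\n"
    "ResourceType='AWS::Lambda::LayerVersion'\n"
    "ResourceStatus='CREATE_COMPLETE'\n"
    "PhysicalResourceId='arn:aws:lambda:us-east-1:123456789012:layer:MyLambdaLayer:7'\n"
)
print(extract_subscription_msg(sample_msg))
# -> {'ResourceType': 'AWS::Lambda::LayerVersion',
#     'ResourceStatus': 'CREATE_COMPLETE',
#     'PhysicalResourceId': 'arn:aws:lambda:us-east-1:123456789012:layer:MyLambdaLayer:7'}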

It is possible to derive the most recent version of a layer in Terraform using an additional data source, as described at https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/lambda_layer_version
So in your definition module, you will have the original layer resource definition
resource "aws_lambda_layer_version" "layer_mylib" {
filename = "layer_mylib.zip"
layer_name = "layer_mylib"
compatible_runtimes = ["python3.6", "python3.7", "python3.8"]
}
and then to obtain the ARN with latest version, use
data "aws_lambda_layer_version" "mylatest" {
layer_name = aws_lambda_layer_version.layer_mylib.layer_name
}
Then data.aws_lambda_layer_version.mylatest.arn will give the reference including the latest version number, which can be checked by placing
output "mylatest_arn" {  # the output name is arbitrary
  value = data.aws_lambda_layer_version.mylatest.arn
}
in your common.tf

AWS CDK look up ARNs from lambda

I am quite new to AWS and have a question that is maybe easy to answer.
(I am using localstack to develop locally, if this makes any difference.)
In a Lambda, I have the following code, which should publish a message to an AWS SNS topic.
import json
import logging
import boto3

def handler(event, context):
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.INFO)
    logger.info("confirmed user!")
    notification = "A test"
    client = boto3.client('sns')
    response = client.publish(
        TargetArn="arn:aws:sns:us-east-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
        Message=json.dumps({'default': notification}),
        MessageStructure='json'
    )
    return {
        'statusCode': 200,
        'body': json.dumps(response)
    }
For now I "hardcode" the ARN of the sns topic which is output to console when deploying (with cdklocal deploy).
I am wondering, if there is any convenient way, to lookup the ARN of a AWS ressource?
I have seen, there is the
cdk.Fn.getAtt(logicalId, 'Arn').toString();
function, but I don't know the logicalID of the sns before deployment. So, how can I lookup ARNs during runtime? What is best practice?
(It's a quite annoying task keeping track of all the ARNs if I just hardcode them as strings, and definitly seems wrong to me)
You can retrieve your SNS topic ARN in the CloudFormation template and pass it to your Lambda; for an AWS::SNS::Topic, !Ref returns the topic ARN.
Resources:
  MyTopic:
    Type: AWS::SNS::Topic
    Properties:
      {...}
  MyLambda:
    Type: AWS::Lambda::Function
    Properties:
      Environment:
        Variables:
          SNS_TOPIC_ARN: !Ref MyTopic
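Since the question is about CDK: in CDK you normally avoid looking up ARNs at all and instead pass them between constructs. A minimal CDK v2 Python sketch (the construct IDs, runtime, and asset path here are illustrative assumptions):

from aws_cdk import Stack, aws_sns as sns, aws_lambda as _lambda
from constructs import Construct

class MyStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        topic = sns.Topic(self, "MyTopic")
        fn = _lambda.Function(
            self, "MyLambda",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),
            # topic_arn is a deploy-time token; no hardcoding needed
            environment={"SNS_TOPIC_ARN": topic.topic_arn},
        )
        topic.grant_publish(fn)  # also grants sns:Publish to the function role

The Lambda then reads the ARN from os.environ["SNS_TOPIC_ARN"], so the same code works in every environment.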

Is there any way to handle the changes of aws layer to all the associated aws lambdas in serverless if they both are in different stack?

I am using the Serverless Framework and AWS cloud services for my project. I have created many services with AWS Lambda and created a layer to serve the common purposes of those services. I maintain the layer in a separate stack and include it in all the services using CloudFormation syntax. The problem is that every time I update the layer and deploy it to AWS, I need to deploy all the services once again to reflect those changes in the associated services. Is there any way to mitigate this, so that once I deploy my layer, all the associated services are also updated with the latest layer changes, without my updating each service manually every time? Hope this makes sense. I am adding the serverless.yml file of the layer and of one of my services to make it clearer. Looking forward to hearing from you. Thanks in advance.
serverless.yml file for layer
service: t5-globals
frameworkVersion: '2'
provider:
  name: aws
  runtime: nodejs14.x
  lambdaHashingVersion: 20201221
  environment:
    NODE_PATH: "./:/opt/node_modules"
layers:
  t5Globals:
    path: nodejs
    compatibleRuntimes:
      - nodejs14.x
resources:
  Outputs:
    T5GlobalsLayerExport:
      Value:
        Ref: T5GlobalsLambdaLayer
      Export:
        Name: T5GlobalsLambdaLayer
plugins:
  - serverless-offline
serverless.yml file of one service
service: XXXXX
projectDir: XXXXXX
frameworkVersion: '2'
provider: XXXXXX
plugins: XXXXXX
resources: XXXXXX
functions:
  XXXXXX:
    name: XXXXXXX
    handler: XXXXXXX
    layers:
      - ${cf:t5-globals-dev.T5GlobalsLayerExport}
    events:
      - http: XXXXX
When you implement CI/CD, this must be automated: we normally use a trigger that traps the git change event (from CodeCommit, say) and executes a Lambda function.
That Lambda function scans the files for changes, creates a new layer version, and updates all the Lambda functions that use this layer to the latest version.
Sharing the code, written in Python; change and use it as per your needs.
import base64
import datetime
import os
import boto3
from urllib.parse import unquote_plus
from botocore.exceptions import ClientError

s3 = boto3.resource('s3')
region = os.environ['region']
lambdaclient = boto3.client('lambda', region_name=region)

def layerUpdateExistingLambdaFunctions(extensionLayer):
    functionslist = []
    nextmarker = None
    while True:
        # Page through all functions in the account
        if nextmarker is not None:
            list_function_response = lambdaclient.list_functions(
                FunctionVersion='ALL',
                MaxItems=10,
                Marker=nextmarker
            )
        else:
            list_function_response = lambdaclient.list_functions(
                FunctionVersion='ALL',
                MaxItems=10
            )
        if 'Functions' in list_function_response.keys():
            for function in list_function_response['Functions']:
                functionName = function['FunctionName']
                layersUsed = []
                usingExtensionLayer = False
                if 'Layers' in function.keys():
                    layers = function['Layers']
                    for layer in layers:
                        # Strip the ":<version>" suffix to get the unversioned layer ARN
                        layerArn = layer['Arn'][:layer['Arn'].rfind(':')]
                        layersUsed.append(layerArn)
                        if extensionLayer.find(layerArn) >= 0:
                            print(f'Function {functionName} using extension layer')
                            usingExtensionLayer = True
                            functionslist.append(functionName)
                if usingExtensionLayer is True:
                    extensionLayerArn = extensionLayer[:extensionLayer.rfind(':')]
                    print(f'Existing function {functionName} using {extensionLayerArn}, needs to be updated')
                    newLayers = []
                    for layerUsed in layersUsed:
                        # Pick the newest version of every layer the function already uses
                        newLayers.append(lambdaclient.list_layer_versions(
                            CompatibleRuntime='python3.7',
                            LayerName=layerUsed
                        )['LayerVersions'][0]['LayerVersionArn'])
                    lambdaclient.update_function_configuration(
                        FunctionName=functionName,
                        Layers=newLayers
                    )
                    print(f'Function {functionName} updated with latest layer versions')
        if 'NextMarker' in list_function_response.keys():
            nextmarker = list_function_response['NextMarker']
        else:
            break
    return functionslist
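A hypothetical call, where the argument is the ARN of the just-published layer version (the account ID and layer name below are placeholders):

updated = layerUpdateExistingLambdaFunctions(
    'arn:aws:lambda:us-east-1:123456789012:layer:t5Globals:42')
print(updated)  # names of the functions that were using the layer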

Resolve secretsmanager when invoking sam template locally

I am trying to invoke a Lambda locally with sam local invoke. The function invokes fine, but the environment variables for my secrets are not resolving. The secrets resolve as expected when the function is deployed, but I want to avoid my local code and my deployed code being any different. So is there a way to resolve those secrets to the actual secret values when invoking locally? Currently I just get the literal resolve string from the environment variable. Code below.
template.yaml
# This is the SAM template that represents the architecture of your serverless application
# https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-template-basics.html
# The AWSTemplateFormatVersion identifies the capabilities of the template
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/format-version-structure.html
AWSTemplateFormatVersion: 2010-09-09
Description: >-
onConnect
# Transform section specifies one or more macros that AWS CloudFormation uses to process your template
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/transform-section-structure.html
Transform:
- AWS::Serverless-2016-10-31
# Resources declares the AWS resources that you want to include in the stack
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resources-section-structure.html
Resources:
# Each Lambda function is defined by properties:
# https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
# This is a Lambda function config associated with the source code: hello-from-lambda.js
helloFromLambdaFunction:
Type: AWS::Serverless::Function
Properties:
Handler: src/handlers/onConnect.onConnect
Runtime: nodejs14.x
MemorySize: 128
Timeout: 100
Environment:
Variables:
WSS_ENDPOINT: '{{resolve:secretsmanager:prod/wss/api:SecretString:endpoint}}'
onConnect.js
/**
 * A Lambda function that returns a static string
 */
exports.onConnect = async () => {
    const endpoint = process.env.WSS_ENDPOINT;
    console.log(endpoint);
    // If you change this message, you will need to change hello-from-lambda.test.js
    const message = 'Hellddfdsfo from Lambda!';
    // All log statements are written to CloudWatch
    console.info(`${message}`);
    return message;
}
I came up with a workaround that allows me to have one code base and "resolve" secrets/parameters locally.
I created a very basic Lambda layer whose only job is fetching secrets if the environment is set to LOCAL.
import boto3

def get_secret(env, type, secret):
    client = boto3.client('ssm')
    if env == 'LOCAL':
        if type == 'parameter':
            return client.get_parameter(
                Name=secret,
            )['Parameter']['Value']
    else:
        return secret
I set the environment with a parameter in the Lambda that calls this layer. (BTW, this layer will eventually resolve more than one kind of secret, which is why the nested if might look a little strange.) This is how I set the environment:
Resources:
  ...
  GetWSSToken:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: get_wss_token
      CodeUri: get_wss_token/
      Handler: app.lambda_handler
      Runtime: python3.7
      Timeout: 30
      Layers:
        - arn:aws:lambda:********:layer:SecretResolver:8
      Environment:
        Variables:
          ENVIRONMENT: !Ref Env
          JWT_SECRET: !FindInMap [ Map, !Ref Env, jwtsecret ]
  ...
Mappings:
  Map:
    LOCAL:
      jwtsecret: jwt_secret
    PROD:
      jwtsecret: '{{resolve:ssm:jwt_secret}}'
    STAGING:
      jwtsecret: '{{resolve:ssm:jwt_secret}}'
Parameters:
  ...
  Env:
    Type: String
    Description: Environment this lambda is being run in.
    Default: LOCAL
    AllowedValues:
      - LOCAL
      - PROD
      - STAGING
Now I can simply call the get_secret method in my Lambda, and depending on what I set Env to, the secret is either fetched at runtime or returned from the environment variables.
import json
import jwt
import os
from datetime import datetime, timedelta
from secret_resolver import get_secret

def lambda_handler(event, context):
    secret = get_secret(os.environ['ENVIRONMENT'], 'parameter', os.environ['JWT_SECRET'])
    two_hours_from_now = datetime.now() + timedelta(hours=2)
    encoded_jwt = jwt.encode({"expire": two_hours_from_now.timestamp()}, secret, algorithm="HS256")
    return {
        "statusCode": 200,
        "body": json.dumps({
            "token": encoded_jwt
        }),
    }
I hope this helps someone out there trying to figure this out. The main point is keeping the secrets out of the code base while being able to test locally with the same code that goes to production.

Cognito "PreSignUp invocation failed due to configuration" despite having invoke permissions well configured

I currently have a Cognito user pool configured to trigger a pre sign up lambda. Right now I am setting up the staging environment, and I have the exact same setup on dev (which works). I know it is the same because I am creating both envs out of the same terraform files.
I have already associated the invoke permissions with the lambda function, which is very often the cause for this error message. Everything looks the same in both environments, except that I get "PreSignUp invocation failed due to configuration" when I try to sign up a new user from my new staging environment.
I have tried to remove and re-associate the trigger manually from the console; still, it doesn't work.
I have compared every possible setting I can think of, including the "App client" configs. They are really the same.
I tried editing the lambda code in order to "force" it to update.
Can it be AWS taking too long to invalidate the permissions cache? So far I can only believe this is a bug from AWS...
Any ideas!?
There appears to be a race condition with permissions not being attached on the first deployment.
I was able to reproduce this with CloudFormation.
Deploying a stack with the same config twice appears to "fix" the permissions issue.
I actually added a 10-second delay on the permissions attachment and it solved my first deployment issue...
I hope this helps others who run into this issue. 😃
# Hack to fix Cloudformation bug
# AWS::Lambda::Permission will not attach correctly on first deployment unless "delay" is used
# DependsOn & every other thing did not work... ¯\_(ツ)_/¯
CustomResourceDelay:
  Type: Custom::Delay
  DependsOn:
    - PostConfirmationLambdaFunction
    - CustomMessageLambdaFunction
    - CognitoUserPool
  Properties:
    ServiceToken: !GetAtt CustomResourceDelayFunction.Arn
    SecondsToWait: 10

CustomResourceDelayFunctionRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement: [{ "Effect":"Allow","Principal":{"Service":["lambda.amazonaws.com"]},"Action":["sts:AssumeRole"] }]
    Policies:
      - PolicyName: !Sub "${AWS::StackName}-delay-lambda-logs"
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action: [ logs:CreateLogGroup, logs:CreateLogStream, logs:PutLogEvents ]
              Resource: !Sub arn:${AWS::Partition}:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/lambda/${AWS::StackName}*:*

CustomResourceDelayFunction:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.handler
    Description: Wait for N seconds custom resource for stack debounce
    Timeout: 120
    Role: !GetAtt CustomResourceDelayFunctionRole.Arn
    Runtime: nodejs12.x
    Code:
      ZipFile: |
        const { send, SUCCESS } = require('cfn-response')
        exports.handler = (event, context, callback) => {
          if (event.RequestType !== 'Create') {
            return send(event, context, SUCCESS)
          }
          const timeout = (event.ResourceProperties.SecondsToWait || 10) * 1000
          setTimeout(() => send(event, context, SUCCESS), timeout)
        }

# ------------------------- Roles & Permissions for cognito resources ---------------------------
CognitoTriggerPostConfirmationInvokePermission:
  Type: AWS::Lambda::Permission
  ## CustomResourceDelay needed to properly attach permission
  DependsOn: [ CustomResourceDelay ]
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !GetAtt PostConfirmationLambdaFunction.Arn
    Principal: cognito-idp.amazonaws.com
    SourceArn: !GetAtt CognitoUserPool.Arn
In my situation the problem was caused by the execution permissions of the Lambda function: while there was a role configured, that role was empty due to some unrelated changes.
Making sure the role actually had permission to do the logging and all the other things the function was trying to do made things work again for me.

Lambda invoke Lambda via API Gateway

I can't seem to get this to work. I created 2 Lambdas via Cloud9 (C9). I'm using boto3 to invoke one Lambda from another. Everything seems to work just fine via C9, but when I publish and try to access via API Gateway I keep getting "Endpoint request timed out" errors.
I know it can't be a timeout issue, because I've set up my YAML files to allow enough time to execute, and the Lambdas right now are really simple (only returning a string).
Here are my current YAML files. I'm wondering if maybe there are some sort of permissions I need to include for API Gateway in the second one.
Lambda1
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.
Resources:
  api:
    Type: 'AWS::Serverless::Function'
    Properties:
      Description: ''
      Handler: api/lambda_function.lambda_handler
      MemorySize: 256
      Role: 'arn:aws:iam::820788395625:role/service-role/api_int-role'
      Runtime: python3.6
      Timeout: 30
      VpcConfig:
        SecurityGroupIds:
          - ...
        SubnetIds:
          - ...
      Policies: AWSLambdaFullAccess
Lambda2
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.
Resources:
  api:
    Type: 'AWS::Serverless::Function'
    Properties:
      Description: ''
      Handler: api/lambda_function.lambda_handler
      MemorySize: 512
      Role: 'arn:aws:iam::820788395625:role/service-role/api_int-role'
      Runtime: python3.6
      Timeout: 15
      VpcConfig:
        SecurityGroupIds:
          - ...
        SubnetIds:
          - ...
I just set up an API Gateway endpoint directly to Lambda2 and it returned no problem. So...
API Gateway -> Lambda 2 (works)
API Gateway -> Lambda 1 -> Lambda 2 (does not work)
So for some reason when I want to call Lambda 2 via Lambda 1 over API Gateway it doesn't work.
Here is the code that is calling the 2nd Lambda
import json
import boto3

def lambda_handler(event, context):
    print('call boto3 client')
    lambda_client = boto3.client('lambda', region_name='us-east-1')
    print('boto3 client called')
    print('invoke lambda')
    env_response = lambda_client.invoke(
        FunctionName='cloud9-apiAlpha-api-TBSOYXLVBCLX',
        InvocationType='RequestResponse',
        Payload=json.dumps(event)
    )
    print('lambda invoked')
    print('env_response')
    print(env_response)
    print(env_response['Payload'])
    print(env_response['Payload'].read())
    return {
        'statusCode': 200,
        'headers': {
            'Content-Type': 'application/json',
            'Access-Control-Allow-Methods': 'POST,GET,OPTIONS,PUT,DELETE',
            'Access-Control-Allow-Origin': '*'
        },
        'body': 'HELLO WORLD!',
        'isBase64Encoded': False
    }
Now when I look at the logs, it gets to print('invoke lambda') but then stops and times out.
1. Invoking a Lambda from another Lambda can't be done without some configuration: permission to invoke the second function must be granted to the first. This can be accomplished by adding an iamRoleStatements section under the provider property in your .yml file:
provider:
  name: aws
  runtime: <runtime goes here> # e.g. python3.6 or nodejs6.10
  iamRoleStatements:
    - Effect: Allow
      Action:
        - lambda:InvokeFunction
      Resource: "*"
or by attaching the simple managed policy AWSLambdaRole to the existing role of Lambda function_1.
2. Code for Lambda function_1 invoking function_2, attached:
import json
import logging
import boto3

UTF_8 = 'utf-8'
LOGGER = logging.getLogger(__name__)
LAMBDA_CLIENT = None

def lambda_handler(event, context):
    global LAMBDA_CLIENT
    if not LAMBDA_CLIENT:
        LAMBDA_CLIENT = boto3.client('lambda')  # reused across warm invocations
    try:
        encoded_payload = json.dumps({'message': 'this is an invocation call from lambda_1'}).encode(UTF_8)
        invoke_resp = LAMBDA_CLIENT.invoke(
            FunctionName='function_2',
            InvocationType='RequestResponse',
            Payload=encoded_payload)
        status_code = invoke_resp['StatusCode']
        if status_code != 200:
            LOGGER.error('unexpected status code from function_2')
        payload = invoke_resp['Payload'].read()
        resp = json.loads(payload)
        print(resp)
    except Exception:
        LOGGER.exception('failed to invoke function_2')
If you are using InvocationType='RequestResponse', then you can return a response from function_2.
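For example, a minimal sketch of what function_2 might return; the caller reads this dict back via invoke_resp['Payload']:

import json

def lambda_handler(event, context):
    # The returned value becomes the payload that function_1 reads and json.loads()
    return {'statusCode': 200, 'body': json.dumps({'echo': event})}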
Finally found the solution. The answer to my particular problem was that Lambda 1 and Lambda 2 were running inside a VPC and thus had no internet connection, so Lambda 1 could not reach the Lambda API endpoint to invoke Lambda 2. Once I removed the VPC config from Lambda 1, the invocation of Lambda 2 worked without any problems.
Just wanted to share in case I can save anyone else a week's worth of debugging. LOL