So AWS announced Lambda SnapStart very recently, and I gave it a go since my application has a cold start time of ~4s.
I was able to do this by adding the following under resources:
extensions:
  NodeLambdaFunction:
    Properties:
      SnapStart:
        ApplyOn: PublishedVersions
Now, when I actually go to the Lambda in question, this is what I see:
So far so good!
But the issue is that when I check my CloudWatch logs, there's no trace of a Restore Time; instead it's the good old Init Duration for cold starts, which means SnapStart isn't working properly.
I dug deeper and found that SnapStart only works for versioned ARNs. But Serverless already claims that:
By default, the framework creates function versions for every deploy.
And on checking the logs, I see that the log streams have the prefix 2022/11/30/[$LATEST].
When I check the Versions tab in the console, I see version number 240. So I would expect that 240 is the latest version of this Lambda function and that this is the version being invoked every time.
However, clicking on the version number opens a Lambda function with 240 appended to its ARN, and testing that function shows SnapStart working perfectly fine.
So I am confused: are $LATEST and version number 240 (in my case) different?
If not, why isn't SnapStart automatically activated for $LATEST?
If they are, how do I make sure they stay the same?
SnapStart is only available for published versions of a Lambda function. It cannot be used with $LATEST.
Using versions is pretty hard with the Serverless Framework, SAM, CDK, and basically any other IaC tool today, because by default they all use $LATEST to integrate with API Gateway, SNS, SQS, DynamoDB, EventBridge, etc.
You need to update the integration with API Gateway (or whatever service you're using) to point to the Lambda version you publish, after that Lambda deployment has completed. This isn't easy to do with the Serverless Framework (and other tools). You may be able to achieve it using this traffic-shifting plugin.
If you use stepFunctions (the serverless-step-functions plugin) to call your Lambda function, you can set useExactVersion: true in your state machine:
stepFunctions:
  stateMachines:
    yourStateMachine:
      useExactVersion: true
      ...
      definition:
        ...
This will reference the latest published version of the function you just deployed.
This has got to be one of the worst feature launches that I have seen in a long time. How the AWS team could put in all the time and effort required to bring this feature to market, while simultaneously rendering it useless because we can't script the thing, is beyond me.
We were ready to jump on this and start migrating apps to Lambda, but now we are back in limbo. Even knowing there was a fix coming down the line would be something. Hopefully somebody from the AWS Lambda team can provide some insights...
Here is a working POC of a Serverless plugin that updates the Lambda references to use the most recent version. This fixed the resulting CloudFormation template and was tested with both SQS and API Gateway.
'use strict'

class SetCycle {
  constructor (serverless, options) {
    this.hooks = {
      // this is where we declare the hook we want our code to run
      'before:package:finalize': function () { snapShotIt(serverless) }
    }
  }
}

function traverse (jsonObj, functionVersionMap) {
  if (jsonObj !== null && typeof jsonObj === 'object') {
    Object.entries(jsonObj).forEach(([key, value]) => {
      if (key === 'Fn::GetAtt' && Array.isArray(value) && value.length === 2 && value[1] === 'Arn' && functionVersionMap.get(value[0])) {
        console.log(jsonObj);
        // Ref on an AWS::Lambda::Version resource resolves to the versioned ARN
        const newVersionedMethod = functionVersionMap.get(value[0]);
        delete jsonObj[key];
        jsonObj.Ref = newVersionedMethod;
        console.log('--becomes');
        console.log(jsonObj);
      } else {
        // key is either an array index or an object key
        traverse(value, functionVersionMap);
      }
    });
  }
  // otherwise jsonObj is a primitive and there is nothing to do
}

function snapShotIt (serverless) {
  resetLambdaReferencesToVersionedVariant(serverless);
}

function resetLambdaReferencesToVersionedVariant (serverless) {
  const functionVersionMap = new Map();
  const rsrc = serverless.service.provider.compiledCloudFormationTemplate.Resources;
  // build a map of all the Lambda functions and their associated versioned resources
  for (const key in rsrc) {
    if (rsrc[key].Type === 'AWS::Lambda::Version') {
      functionVersionMap.set(rsrc[key].Properties.FunctionName.Ref, key);
    }
  }
  // loop through all the resources and replace the non-versioned with the versioned Lambda ARN reference
  for (const key in rsrc) {
    if (!(rsrc[key].Type === 'AWS::Lambda::Version' || rsrc[key].Type === 'AWS::Lambda::Function')) {
      console.log('--' + key);
      traverse(rsrc[key], functionVersionMap);
    }
  }
  // add the SnapStart property to every function
  for (const key in rsrc) {
    if (rsrc[key].Type === 'AWS::Lambda::Function') {
      console.log(rsrc[key].Properties);
      rsrc[key].Properties.SnapStart = { ApplyOn: 'PublishedVersions' };
      console.log('--becomes');
      console.log(rsrc[key].Properties);
    }
  }
  // uncomment to print the function-to-version map
  // for (const [key, value] of functionVersionMap) {
  //   console.log(key + ' : ' + value);
  // }
}

// now we need to make our plugin object available to the framework to execute
module.exports = SetCycle
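To try the POC, save it as a local file in your project and register it in serverless.yml; the path below is just an example:

plugins:
  - ./snapstart-version-plugin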
I was able to achieve this by updating my Serverless Framework version to 3.26.0 and adding the property snapStart: true to the functions I created. Serverless creates function versions on every deploy, and as soon as the new version is published, SnapStart is enabled for it:
ApiName:
  handler: org.springframework.cloud.function.adapter.aws.SpringBootApiGatewayRequestHandler
  events:
    - httpApi:
        path: /end/point
        method: post
  environment:
    FUNCTION_NAME: ApiName
  runtime: java11
  memorySize: 4096
  snapStart: true
I'm using the CDK to create KMS keys (and other resources, for that matter) for my project and want to ensure I'm handling the resources properly.
During development I might deploy, do some development work, then issue a cdk destroy to clean up the project, as I know I won't be back to it for some days.
If I don't wrap the code in an import, I find duplicate keys being created, or for some resources like DynamoDB the deploy fails because the resource already exists:
try {
  const keyRef = kms.Alias.fromAliasName(this, 'SomeKey', 'SomeKey');
} catch {
  const keyRef = new kms.Key(this, 'SomeKey', {
    description: 'Some descriptive text',
    enableKeyRotation: true,
    trustAccountIdentities: true
  });
  keyRef.grantEncryptDecrypt(lambdaFunc);
}
Can anyone suggest a better way of handling this, or is this expected?
While developing my projects, I don't like to leave resources in play until the solution is at least at the alpha stage.
When creating a KMS key, you can define a RemovalPolicy:
The default value is RETAIN, meaning that the KMS key will stay in your account even after you delete your stack. This is useful for a production environment, where you would normally want to keep keys that might be used by resources outside your stack.
In your dev environment you can set it to DESTROY and it will be deleted with your stack.
You should capture this logic in your code. Something like
const keyRef = new kms.Key(this, 'SomeKey', {
description: 'Some descriptive text',
enableKeyRotation: true,
trustAccountIdentities: true,
// define a method to check if it's a dev environment
// and set removalPolicy accordingly
removalPolicy: isDevEnv() ? cdk.RemovalPolicy.DESTROY : cdk.RemovalPolicy.RETAIN,
});
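One possible implementation of the isDevEnv() helper referenced in the comment; the environment variable name is an assumption, and any signal your deploy process already has (stage name, account ID) works just as well:

// assumption: the deploy pipeline sets DEPLOY_ENV=dev for development stacks
function isDevEnv(): boolean {
  return process.env.DEPLOY_ENV === 'dev';
}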
In my project, I need to show how a pipeline is progressing on a custom web portal built in PHP. Is there any way, in a language such as C# or Java, to list pipelines and monitor their progress, or even log to Application Insights?
Are you labelling your queries with the OPTION (LABEL='MY LABEL') syntax?
This makes it easy to monitor the progress of your pipeline by querying sys.dm_pdw_exec_requests to pick out individual queries, and if you use a naming convention like 'pipeline_query' you can probably achieve what you want.
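For example (the table name and label text here are made up), you tag a statement run by your pipeline and then look it up by that label:

-- tag a statement run by the pipeline
SELECT COUNT(*) FROM dbo.MyTable OPTION (LABEL = 'pipeline_query: step 1');

-- then find it from your portal's backend by that label
SELECT request_id, [status], submit_time, total_elapsed_time
FROM sys.dm_pdw_exec_requests
WHERE [label] = 'pipeline_query: step 1';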
try
{
    PipelineRunClient pipelineRunClient = new(new Uri(_Settings.SynapseExtractEndpoint), new DefaultAzureCredential());
    run = await pipelineRunClient.GetPipelineRunAsync(runId);
    while (run.Status == "InProgress" || run.Status == "Queued")
    {
        _Logger.LogInformation($"!!Pipeline {run.PipelineName} {runId} Status: {run.Status}");
        // prefer await over .Wait() inside an async method so the thread isn't blocked
        await Task.Delay(30000);
        run = await pipelineRunClient.GetPipelineRunAsync(runId);
    }
    _Logger.LogInformation($"!!Pipeline {run.PipelineName} {runId} Status: {run.Status} Runtime: {run.DurationInMs} Message: {run.Message}");
}
catch (Exception ex)
{
    _Logger.LogError(ex, $"Monitoring pipeline run {runId} failed");
}
I have successfully tested dynamodb.transactWriteItems from VS Code (Node.js), but when I moved my code to Lambda it always throws TypeError: dynamodb.transactWriteItems is not a function. Note that I am NOT using the DocumentClient, so declaring dynamodb = new AWS.DynamoDB() is not the solution.
How can I check the AWS SDK version used by Lambda (my npm aws-sdk is v2.372.0), and how do I make Lambda use the proper AWS SDK version if this is the root cause of the issue?
data = await dynamodb.transactWriteItems({
  ReturnConsumedCapacity: "INDEXES",
  ReturnItemCollectionMetrics: "SIZE",
  TransactItems: [
    {
      Put: {
        TableName: envVarPOTableName,
        Item: {
          "poNumber": {S: poNumber},
          "supplierName": {S: event.supplierName},
          "poStatus": {S: "Created"},
          "rmItemsArr": {L: [
            {M: {
              "type": {S: event.rmItemObj.type},
              "description": {S: event.rmItemObj.description}
            }}
          ]}
        }
      }
    },
    {
      Update: {
        TableName: envVarRMTableName,
        Key: {
          "type": {S: event.rmItemObj.type},
          "description": {S: event.rmItemObj.description}
        },
        UpdateExpression: "set #pnA = list_append(#pnA, :vals)",
        ExpressionAttributeNames: {
          "#pnA": "poNumbersArr"
        },
        ExpressionAttributeValues: {
          ":vals": {L: [{S: poNumber}]}
        },
        ReturnValuesOnConditionCheckFailure: "ALL_OLD"
      }
    }
  ]
}).promise();
The issue is that AWS Lambda currently bundles AWS SDK for JavaScript 2.290.0, while DynamoDB transactions were only added in SDK version 2.365.0. To solve this, you can include the latest version of the JavaScript SDK in your Lambda deployment package.
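To answer the first half of the question, you can log the SDK version your function actually resolves at runtime; a minimal sketch:

// logs the version of whichever aws-sdk copy require() resolves to:
// the one bundled in your deployment package if you ship it,
// otherwise the runtime's built-in copy
exports.handler = async () => {
  console.log('aws-sdk version:', require('aws-sdk/package.json').version);
};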
I managed to initially double confuse myself on this, so thought I'd share in case anyone else did the same thing...
My problem was I was using:
const dynamoDB = new AWS.DynamoDB.DocumentClient()
and trying to call .transactWriteItems, which isn't valid. If you're using the DocumentClient, you need to use .transactWrite instead.
To use .transactWriteItems, your client has to be created as const dynamoDB = new AWS.DynamoDB().
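To illustrate the two APIs side by side (a minimal sketch; the table name and item are made up):

const AWS = require('aws-sdk');

(async () => {
  // low-level client: use transactWriteItems with typed attribute values
  const dynamoDB = new AWS.DynamoDB();
  await dynamoDB.transactWriteItems({
    TransactItems: [{ Put: { TableName: 'MyTable', Item: { pk: { S: 'id-1' } } } }]
  }).promise();

  // DocumentClient: use transactWrite with plain JavaScript values
  const docClient = new AWS.DynamoDB.DocumentClient();
  await docClient.transactWrite({
    TransactItems: [{ Put: { TableName: 'MyTable', Item: { pk: 'id-1' } } }]
  }).promise();
})();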
As @schof said above, the latest Lambda AWS SDKs support this function: https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html
(up to 2.488.0 as of this writing, for both Node.js 10.x and 8.10)
Good news: the new Lambda execution environments apparently have the latest SDK. My understanding from reading that blog post is that Node.js 10.x Lambdas are automatically using the new environments already. I tested today with a Lambda function on the Node.js 10.x runtime and no longer needed to bundle my own copy of the AWS SDK.
Also, apparently as of tomorrow, new Lambda functions (regardless of Node.js runtime) will get the new execution environment, so presumably those will work as well.
I have an event scheduler in AWS CloudWatch which runs my Lambda function every 2 minutes. I want to store some variable from the last Lambda invocation that is needed for processing in the next invocation.
Is there any small storage option for this, or do I have to go for DynamoDB-type storage? Thanks.
You will have to use external storage like S3 or DynamoDB.
You can use the Targets[].Input field of the CloudWatch Events rule to add data to the next event. You can do this easily from the Lambda using the AWS SDK for your language; for example, in Node.js:
-- Updated with an example
const AWS = require('aws-sdk');
const events = new AWS.CloudWatchEvents();

// assumption: the CloudWatch Events rule name is supplied via an environment variable
const eventName = process.env.RULE_NAME;

/**
 * Lambda entry point
 */
exports.dostuff = async (request) => {
  let stateData;
  // do your stuff
  await updateIteration(stateData);
}

function getRuleTargets() {
  return events.listTargetsByRule({ Rule: eventName }).promise();
}

function updateTargetInput(target, newData) {
  const Input = JSON.stringify(newData);
  const params = {
    Rule: eventName,
    Targets: [
      {
        Arn: target.Arn,
        Id: target.Id,
        Input
      }
    ]
  };
  return events.putTargets(params).promise();
}

function updateIteration(data) {
  return getRuleTargets().then(({ Targets }) => {
    // Usually there is only one target, but just in case
    const target = Targets.find(...)
    return updateTargetInput(target, data)
  });
}
The pros of this approach are that you don't need to set up extra infrastructure or fetch the data from an external store. This would not be a good approach if the event is being used in other places.
You can use AWS Systems Manager Parameter Store.
This allows a string of up to 4 KB to be stored and retrieved, so you can store a comma-delimited list or whatever you wish, e.g. JSON.
The Lambda potentially remains loaded when an execution terminates, so in principle you can cache some data in a global variable (initialised to None/NULL etc.) and check that value first, but you must still update the Parameter Store in case the cache is empty next time!
Look up "lambda warm start".
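A minimal sketch of that pattern (the parameter name and state shape are made up):

const AWS = require('aws-sdk');
const ssm = new AWS.SSM();

let cached = null; // survives warm starts of the same execution environment

exports.handler = async () => {
  // read the previous run's state, preferring the in-memory cache
  if (cached === null) {
    const res = await ssm.getParameter({ Name: '/my-scheduler/state' }).promise();
    cached = res.Parameter.Value;
  }
  const lastState = JSON.parse(cached);

  // ... do the work for this run, using lastState ...

  // write the state back for the next invocation
  const newState = JSON.stringify({ lastRunAt: Date.now() });
  await ssm.putParameter({
    Name: '/my-scheduler/state',
    Type: 'String',
    Value: newState,
    Overwrite: true
  }).promise();
  cached = newState;
};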
I'm trying to access the Parameter Store in an AWS Lambda function. This is my code, pursuant to the documentation here: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/SSM.html
var AWS = require('aws-sdk');

var ssm = new AWS.SSM({apiVersion: '2014-11-06'});
var ssm_params1 = {
  Name: 'XXXX', /* required */
  WithDecryption: true
};
ssm.getParameter(ssm_params1, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else clientId = data.Parameter.Value; // the value is nested under data.Parameter
});
Upon execution, I get the error:
"TypeError: ssm.getParameter is not a function"
Did Amazon change this without updating the docs? Did this function move to another type of object?
Please check and try the latest version of the SDK. It is not the case that Amazon has ditched the getParameter method in favor of only getParameters. In fact, getParameter, together with getParametersByPath, is a newly added method; old versions of the SDK do not resolve these methods.
The answer here is that Amazon must have ditched the getParameter() method in favor of only maintaining one method getParameter(s)(). But they didn't update the documentation. That method seems to work just fine.
I have tried both the getParameter and getParameters functions, and both of them work fine.
It is possible that you are getting an error because you are passing apiVersion: '2014-11-06' to the SSM constructor.
Do not pass any apiVersion parameter to the constructor. It should work fine.
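In other words (a minimal sketch):

const AWS = require('aws-sdk');
// no apiVersion option: the SDK uses the latest API version it ships with
const ssm = new AWS.SSM();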
There seems to be a bug in AWS where the correct SDK version is not included in certain environments. This can be confirmed by logging the SDK version used:
console.log("AWS-SDK Version: " + require('aws-sdk/package.json').version);
Including the required aws-sdk package in our deployment solved the problem for us.
Try adding the following to package.json:
"aws-sdk": "^2.339.0"