Is it possible to rename an AWS Lambda function? - amazon-web-services

I have created some AWS Lambda functions for testing purposes (named test_function something). After testing, I found that those functions could be used in the prod environment.
Is it possible to rename an AWS Lambda function, and if so, how?
Or should I create a new one and copy-paste the source code?

The closest you can get to renaming an AWS Lambda function is using an alias, which is a way to name a specific version of an AWS Lambda function. The actual name of the function, though, is set once you create it. If you want to rename it, just create a new function and copy the exact same code into it. It won't cost you anything extra to do this (since you are only charged for execution time), so you lose nothing.
For a reference on how to name versions of an AWS Lambda function, check out the documentation here: Lambda function versions.
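For illustration, here is a minimal boto3 sketch of publishing a version and pointing an alias at it (the function name 'test_function' and the alias name 'prod' are placeholders, not anything from the question):
import boto3

lambda_client = boto3.client('lambda')

# Publish the current $LATEST code as an immutable, numbered version
version = lambda_client.publish_version(FunctionName='test_function')['Version']

# Point a 'prod' alias at that version; callers can then invoke
# 'test_function:prod' without caring about version numbers
lambda_client.create_alias(FunctionName='test_function',
                           Name='prod',
                           FunctionVersion=version)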

You cannot rename the function; your only option is to follow the suggestions already provided here, or create a new one and copy-paste the code.
It's actually a good thing that you cannot rename it: if you could, the function would stop working, because the policies attached to it would still point to the old name, unless you edited every single one of them manually or made them generic (which is ill-advised).
However, as a best practice in software development, I suggest you always keep production and testing (staging) separate, effectively duplicating your environment.
This allows you to test things in a safe environment, where a mistake doesn't cost you anything important, and once you confirm that your new features work, replicate them in production.
So in your case you would have two Lambdas, one called 'my-lambda-staging' and the other 'my-lambda-prod'. Use Lambda environment variables to adapt to the current environment, so you don't need to refactor!
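As a minimal sketch of that idea, assuming an environment variable named ENV that you set differently on the staging and prod functions (the variable and table names here are made up for illustration):
import os

# Set ENV to 'staging' on my-lambda-staging and 'prod' on my-lambda-prod
ENV = os.environ.get('ENV', 'staging')
TABLE_NAME = 'my-table-prod' if ENV == 'prod' else 'my-table-staging'

def handler(event, context):
    # The same code runs in both environments; only configuration differs
    return {'environment': ENV, 'table': TABLE_NAME}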

My solution is to export the function, create a new Lambda, then upload the .zip file to the new Lambda.

My solution for renaming a Lambda: use boto3 to read the previous Lambda's configuration and download its function code, then create a new Lambda from them. The triggers won't be copied, so you need to add them back manually.
from boto3.session import Session
import boto3
import pprint
import urllib3

pp = pprint.PrettyPrinter(indent=4)

session = Session(aws_access_key_id={YOUR_ACCESS_KEY},
                  aws_secret_access_key={YOUR_SECRET_KEY},
                  region_name='your_region')

PREV_FUNC_NAME = 'your_prev_function_name'
NEW_FUNC_NAME = 'your_new_function_name'

def prev_lambda_code(code_temp_path):
    '''
    Download the previous function's deployment package from the
    pre-signed URL returned by get_function.
    '''
    http = urllib3.PoolManager()
    response = http.request("GET", code_temp_path)
    if not 200 <= response.status < 300:
        raise Exception(f'Failed to download function code: {response}')
    return response.data

def rename_lambda_function(prev_func_name, new_func_name):
    '''
    Copy the previous Lambda function's code and configuration
    into a new function with the new name.
    '''
    lambda_client = session.client('lambda')
    prev_func_info = lambda_client.get_function(FunctionName=prev_func_name)

    if 'VpcConfig' in prev_func_info['Configuration']:
        VpcConfig = {
            'SubnetIds': prev_func_info['Configuration']['VpcConfig']['SubnetIds'],
            'SecurityGroupIds': prev_func_info['Configuration']['VpcConfig']['SecurityGroupIds']
        }
    else:
        VpcConfig = {}

    if 'Environment' in prev_func_info['Configuration']:
        Environment = prev_func_info['Configuration']['Environment']
    else:
        Environment = {}

    response = lambda_client.create_function(
        FunctionName=new_func_name,
        Runtime=prev_func_info['Configuration']['Runtime'],
        Role=prev_func_info['Configuration']['Role'],
        Handler=prev_func_info['Configuration']['Handler'],
        Code={
            'ZipFile': prev_lambda_code(prev_func_info['Code']['Location'])
        },
        Description=prev_func_info['Configuration']['Description'],
        Timeout=prev_func_info['Configuration']['Timeout'],
        MemorySize=prev_func_info['Configuration']['MemorySize'],
        VpcConfig=VpcConfig,
        Environment=Environment,
        PackageType=prev_func_info['Configuration']['PackageType'],
        TracingConfig=prev_func_info['Configuration']['TracingConfig'],
        Layers=[Layer['Arn'] for Layer in prev_func_info['Configuration'].get('Layers', [])],
    )
    pp.pprint(response)

rename_lambda_function(PREV_FUNC_NAME, NEW_FUNC_NAME)

Related

Using the CfnOutput created inside a LambdaRestApi in AWS CDK

I'm creating a LambdaRestApi as follows
this.gateway = new apigw.LambdaRestApi(this, "Endpoint", {
    handler: hello,
    endpointExportName: "MainURL"
})
and I'd like to get at the CfnOutput it generates. Is that possible? I want to pass it to other functions, and I want to avoid creating a new one.
Specifically, the situation I'm tackling is this: I have a post stage that verifies things are working, and it uses the CfnOutput:
deployStage.addPost(
    new CodeBuildStep("VerifyAPIGatewayEndpoint", {
        envFromCfnOutputs: {
            ENDPOINT_URL: deploy.hcEndpoint
        },
        commands: [
            "curl -Ssf $ENDPOINT_URL",
            "curl -Ssf $ENDPOINT_URL/hello",
            "curl -Ssf $ENDPOINT_URL/test"
        ]
    })
)
That deploy.hcEndpoint is a CfnOutput that I'm manually creating after the LambdaRestApi is created:
const gateway = new LambdaRestApi(this, "Endpoint", {handler: hello})
this.hcEndpoint = new CfnOutput(this, "GatewayUrl", {value: gateway.url})
and then making sure that every construct makes it available to its parent.
Using CfnOutputs in the post-deployment step makes sense. I am trying to learn the proper way of doing things, and also to keep my stacks clean. With only one Lambda function it's no big deal, but with tens or hundreds it might matter. And since LambdaRestApi already creates the output, it does feel like I'm repeating myself by creating an identical one.
Assuming you are using the following code for your LambdaRestApi:
this.gateway = new apigw.LambdaRestApi(this, "Endpoint", {
    handler: hello,
    endpointExportName: "MainURL"
});
Referencing in same stack as LambdaRestApi
const outputValue = this.gateway.urlForPath("/");
Looking at the source code, the output value is just a call to urlForPath. The method is public, so you can use it directly.
Referencing from another stack
You can use cross stack references to get a reference to the output value of the stack.
import { Fn } from 'aws-cdk-lib';
const outputValue = Fn.importValue("MainURL");
If you try to use the first method in another stack, CDK will just generate a cross stack reference dynamically by adding extra outputs, so it is better to import the value directly.
I'd like to get to the CfnOutput it generates, is it possible?
Yes. Use the escape hatch syntax to get a reference to the CfnOutput that RestApi creates for the endpointExportName:
const urlCfnOutput = this.gateway.node.findChild('Endpoint') as cdk.CfnOutput;
console.log(urlCfnOutput.exportName);
// MainURL
console.log(urlCfnOutput.value);
// https://${Token[TOKEN.258]}.execute-api.us-east-1.${Token[AWS.URLSuffix.3]}/${Token[TOKEN.277]}/
Prefer standard CDK
As their name suggests, "escape hatches" are for "emergencies" when the CDK's standard solutions fail. Your use case may be one such instance, I don't know. But as @Kaustubh Khavnekar points out, you don't need the CfnOutput to get the url token value.
console.log(this.gateway.url)
// https://${Token[TOKEN.258]}.execute-api.us-east-1.${Token[AWS.URLSuffix.3]}/${Token[TOKEN.277]}/

How to run Lambda created in CDK on a regular basis?

As the title says - I've created a Lambda in the Python CDK and I'd like to know how to trigger it on a regular basis (e.g. once per day).
I'm sure it's possible, but I'm new to the CDK and I'm struggling to find my way around the documentation. From what I can tell it will use some sort of event trigger - but I'm not sure how to use it.
Can anyone help?
Sure - it's fairly simple once you get the hang of it.
First, make sure you're importing the right libraries:
from aws_cdk import core, aws_events, aws_events_targets
Then you'll need to make an instance of the schedule class and use the core.Duration (docs for that here) to set the length. Let's say 1 day for example:
lambda_schedule = aws_events.Schedule.rate(core.Duration.days(1))
Then you want to create the event target - this is the actual reference to the Lambda you created in your CDK earlier:
event_lambda_target = aws_events_targets.LambdaFunction(handler=lambda_defined_in_cdk_here)
Lastly you bind it all together in an aws_events.Rule like so:
lambda_cw_event = aws_events.Rule(
    self,
    "Rule_ID_Here",
    description="The once per day CloudWatch event trigger for the Lambda",
    enabled=True,
    schedule=lambda_schedule,
    targets=[event_lambda_target])
Hope that helps!
The question is for Python, but I thought it might be useful to post a JavaScript equivalent:
const { Duration } = require("aws-cdk-lib");
const aws_events = require("aws-cdk-lib/aws-events");
const aws_events_targets = require("aws-cdk-lib/aws-events-targets");

const MyLambdaFunction = <...SDK code for Lambda function here...>

new aws_events.Rule(this, "my-rule-identifier", {
    schedule: aws_events.Schedule.rate(Duration.days(1)),
    targets: [new aws_events_targets.LambdaFunction(MyLambdaFunction)],
});
Note: The above is for version 2 of the CDK (aws-cdk-lib) - it might need a few tweaks for other versions.

deploying lambda code inside a folder with an autogenerated name

I am trying to set up a Lambda in pulumi-aws, but when deployed, my function code is wrapped in a folder with the same name as the generated Lambda function name.
I would prefer not to have this as it's unnecessary, but more than that, it means I can't work out what my handler should be, since the folder name is generated.
(I realise I can probably use a reference to get this generated name, but I don't like the added complexity for no reason. I don't see a good reason for having this folder inside the Lambda.)
E.g. my function code is one simple index.js file with one named export, handler. I would expect my Lambda handler to be index.handler.
(Note I am using TypeScript for my pulumi code but the Lambda is in JavaScript.)
I have tried a couple of options for the code property:
const addTimesheetEntryLambda = new aws.lambda.Function("add-timesheet-entry", {
    code: new pulumi.asset.AssetArchive({
        "index.js": new pulumi.asset.FileAsset('./lambdas/add-timesheet-entry/index.js'),
    }),
In this example the zip file was simply an index.js with no folder information in the zip.
const addTimesheetEntryLambda = new aws.lambda.Function("add-timesheet-entry", {
    code: new pulumi.asset.FileArchive("lambdatest.zip"),
AWS Lambda code is always in a "folder" named after the function name. Here is a Lambda that I created in the web console:
It doesn't affect the naming of the handler though. index.handler is just fine.

AWS Lambda NodeJS locale/variable isolation

There is a concern about a potential problem with reusable variables in AWS Lambda.
A user's locale is passed as
Browser cookies => AWS API Gateway => Lambda (NodeJS 6.10)
On the server side, localization is implemented with a static variable in a class. TypeScript code is shown for clarity, but the same can be done in pure ECMAScript.
Module Language.ts
export default class Language
{
    public static Current: LanguageCode = LanguageCode.es;
}
The static Language.Current variable is used across different parts of the application for manual localization, and it works perfectly on the client side (react + redux).
Lambda function
import {APIGatewayEvent, Context, Callback} from 'aws-lambda';
import Language from './pathToModule/Language.ts';
export const api = function(event: APIGatewayEvent, context: Context, callback: Callback)
{
    Language.Current = event.headers.cookie.locale;
    // do the logic here
}
Potential problem
According to the AWS documentation, Node.js instances can be reused for different requests. This means that the well-known concurrency problems have to be considered, e.g.:
User 1 calls the lambda function. The locale is set to English.
In parallel, user 2 calls the same lambda instance. The locale is changed to Spanish.
User 1's code continues and reads the modified (wrong) locale variable from the shared Language module.
How do you resolve this problem?
For convenience it is good to have only one place where the locale is changed. As I understand it, the same concern exists for all the popular i18n npm packages (i18next, i18n, Yahoo i18n, etc.).
One of the best practices for Lambda functions is to try and not write code which maintains state.
Here you are initializing the locale based on an initial request and applying it to all future requests, which is inherently flawed even in server-based code, let alone serverless.
To fix this, you will need to initialize the localization library for each request, or at least maintain an in-memory lazy map keyed by locale, and use the current request's locale to look up the desired localization.
There are several solutions:
1. A Node.js container is reused only after a function invocation has finished (the callback has been called or an error has occurred) (thanks to @idbehold). Thus there is always a unique execution context per function call, so two requests never run concurrently inside the same container.
2. Refactor the code and pass a locale variable back and forth (@Yeshodhan Kulkarni's suggestion).
For example, return a function as an intermediate result and use it before calling the result back:
var localizableResult = ...;
var result = localizableResult.Localize(requestedLocale);
3. If there is a need for a local stack (a kind of thread context) for other projects, there is an npm package, node-continuation-local-storage.
Case 1 makes it really simple to use global variables for the current locale.

Restoring files on a version enabled amazon s3 bucket

I am trying to enable versioning and lifecycle policies on my Amazon S3 buckets. I understand that it is possible to enable versioning first and then apply a lifecycle policy to that bucket. The image below confirms this idea.
I have uploaded a file several times, which created several versions of the same file. I then deleted the file and am still able to see several versions. However, if I try to restore a file, I see that the Initiate Restore option is greyed out.
I would like to ask anyone who has had a similar issue to let me know what I am doing wrong.
Thanks,
Bucket Versioning on Amazon S3 keeps all versions of objects, even when they are deleted or when a new object is uploaded under the same key (filename).
As per your screenshot, all previous versions of the object are still available. They can be downloaded/opened in the S3 Management Console by selecting the desired version and choosing Open from the Actions menu.
If Versions: Hide is selected, then each object only appears once. Its contents are equal to the latest uploaded version of the object.
Deleting an object in a versioned bucket merely creates a Delete Marker as the most recent version. This makes the object appear as though it has been deleted, but the prior versions are still visible if you click the Versions: Show button at the top of the console. Deleting the Delete Marker will make the object reappear and the contents will be the latest version uploaded (before the deletion).
If you want a specific version of the object to be the "current" version, either:
Delete all versions since that version (making the desired version the latest version), or
Copy the desired version back to the same object (using the same key, which is the filename). This will add a new version, but the contents will be equal to the version you copied. The copy can be performed in the S3 Management Console -- just choose Copy and then Paste from the Actions Menu.
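If you would rather perform that copy programmatically, here is a minimal boto3 sketch of the second option (the bucket name, key, and version id are placeholders):
import boto3

s3 = boto3.client('s3')

# Copy an older version of the object on top of itself; this adds a new,
# latest version whose contents equal the chosen older version
s3.copy_object(
    Bucket='examplebucket',
    Key='path/to/file.txt',
    CopySource={
        'Bucket': 'examplebucket',
        'Key': 'path/to/file.txt',
        'VersionId': 'PLACEHOLDER_VERSION_ID'
    }
)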
Initiate Restore is used with Amazon Glacier, which is an archival storage system. This option is not relevant unless you have created a Lifecycle Policy to move objects to Glacier.
With the new console, you can do it as follows:
Click on the Deleted Objects button
You will see your deleted object below, Select it
Click on More -> Undo delete
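The console's Undo delete is equivalent to removing the delete marker. As a minimal boto3 sketch of doing the same thing for a single object (the bucket name and key are placeholders):
import boto3

s3 = boto3.client('s3')
bucket = 'examplebucket'
key = 'path/to/file.txt'

# Find the delete marker that is currently the latest "version" of the key
versions = s3.list_object_versions(Bucket=bucket, Prefix=key)
for marker in versions.get('DeleteMarkers', []):
    if marker['Key'] == key and marker['IsLatest']:
        # Removing the delete marker makes the previous version current again
        s3.delete_object(Bucket=bucket, Key=key, VersionId=marker['VersionId'])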
If you have a lot of deleted files to restore, you might want to use a script to do the job for you.
The script should:
Get the versions of objects in your bucket using the Get object versions API
Inspect the versions data to find the delete markers (i.e. deleted objects) and their names and version ids
Delete the markers found, using the marker names and version ids, with the Delete object API
Python example with boto:
This example script deletes the delete markers it finds, one by one.
#!/usr/bin/env python
import boto

BUCKET_NAME = "examplebucket"
DELETE_DATE = "2015-06-08"

bucket = boto.connect_s3().get_bucket(BUCKET_NAME)
for v in bucket.list_versions():
    if (isinstance(v, boto.s3.deletemarker.DeleteMarker) and
            v.is_latest and
            DELETE_DATE in v.last_modified):
        bucket.delete_key(v.name, version_id=v.version_id)
Python example with boto3:
However, if you have thousands of objects, this could be a slow process. AWS does provide a way to batch delete objects with a maximum batch size of 1000.
The following example script searches your objects by prefix, tests whether they are deleted (i.e. the current version is a delete marker), and then batch deletes the delete markers. It is set to scan 500 object versions per request and to delete markers in batches of no more than 1000 objects.
import boto3

client = boto3.client('s3')


def get_object_versions(bucket, prefix, max_key, key_marker):
    kwargs = dict(
        Bucket=bucket,
        EncodingType='url',
        MaxKeys=max_key,
        Prefix=prefix
    )
    if key_marker:
        kwargs['KeyMarker'] = key_marker
    response = client.list_object_versions(**kwargs)
    return response


def get_delete_markers_info(bucket, prefix, key_marker):
    markers = []
    max_markers = 500
    version_batch_size = 500
    while True:
        response = get_object_versions(bucket, prefix, version_batch_size, key_marker)
        key_marker = response.get('NextKeyMarker')
        delete_markers = response.get('DeleteMarkers', [])
        markers = markers + [dict(Key=x.get('Key'), VersionId=x.get('VersionId'))
                             for x in delete_markers if x.get('IsLatest')]
        print('{0} -- {1} delete markers ...'.format(key_marker, len(markers)))
        if len(markers) >= max_markers or key_marker is None:
            break
    return {"delete_markers": markers, "key_marker": key_marker}


def delete_delete_markers(bucket, prefix):
    key_marker = None
    while True:
        info = get_delete_markers_info(bucket, prefix, key_marker)
        key_marker = info.get('key_marker')
        delete_markers = info.get('delete_markers', [])
        if len(delete_markers) > 0:
            response = client.delete_objects(
                Bucket=bucket,
                Delete={
                    'Objects': delete_markers,
                    'Quiet': True
                }
            )
            print('Deleting {0} delete markers ... '.format(len(delete_markers)))
            print('Done with status {0}'.format(response.get('ResponseMetadata', {}).get('HTTPStatusCode')))
        else:
            print('No more delete markers found\n')
            break


delete_delete_markers(bucket='data-global', prefix='2017/02/18')
I have realised that I can perform an Initiate Restore operation once the object is stored in Glacier, as shown by the Storage Class of the object. To restore a previous copy on S3, the delete marker on the current object has to be removed.
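For reference, a minimal boto3 sketch of initiating such a restore, assuming the object has already transitioned to a Glacier storage class (the bucket name, key, and number of days are placeholders):
import boto3

s3 = boto3.client('s3')

# Ask S3 to make a temporary copy of the archived object available
# for 7 days; the restore itself can take several hours to complete
s3.restore_object(
    Bucket='examplebucket',
    Key='path/to/file.txt',
    RestoreRequest={'Days': 7}
)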