Deploying Lambda code inside a folder with an autogenerated name

I am trying to set up a Lambda in pulumi-aws, but when my function code is deployed it is wrapped in a folder with the same name as the generated Lambda function name.
I would prefer not to have this as it's unnecessary, but more than that it means I can't work out what my handler should be, since the folder name is generated.
(I realise I can probably use a reference to get this generated name, but I don't like the added complexity for no reason. I don't see a good reason for having this folder inside the Lambda.)
E.g. my function code is one simple index.js file with one named export, handler. I would expect my Lambda handler to be index.handler.
(Note I am using TypeScript for my pulumi code but the Lambda is in JavaScript.)
I have tried a couple of options for the code property:
const addTimesheetEntryLambda = new aws.lambda.Function("add-timesheet-entry", {
    code: new pulumi.asset.AssetArchive({
        "index.js": new pulumi.asset.FileAsset('./lambdas/add-timesheet-entry/index.js'),
    }),
In this example the zip file was simply an index.js with no folder information in the zip.
const addTimesheetEntryLambda = new aws.lambda.Function("add-timesheet-entry", {
    code: new pulumi.asset.FileArchive("lambdatest.zip"),

AWS Lambda code is always displayed inside a "folder" named after the function name; the same thing happens for a Lambda created directly in the web console, so this is not something Pulumi adds.
It doesn't affect the naming of the handler though. index.handler is just fine.
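For illustration, here is a minimal sketch of the same configuration using Pulumi's Python SDK (pulumi_aws); the property names are the same in the TypeScript SDK, and the runtime and IAM role shown here are assumptions rather than values from the question:

import pulumi
import pulumi_aws as aws

# lambda_role is assumed to be an aws.iam.Role defined elsewhere in the program.
add_timesheet_entry = aws.lambda_.Function(
    "add-timesheet-entry",
    code=pulumi.AssetArchive({
        "index.js": pulumi.FileAsset("./lambdas/add-timesheet-entry/index.js"),
    }),
    handler="index.handler",  # the generated folder name does not appear here
    runtime="nodejs18.x",     # assumed runtime
    role=lambda_role.arn,
)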

Related

AWS Lambda : how to handle a code asset defined by an environment variable that does not exist yet?

I (try to) deploy my current application using CDK pipelines.
In doing so, I stumbled across an unexpected behavior (here if interested) which I am now trying to resolve. I have a Lambda function for which the asset is a directory that is dynamically generated during a CodeBuild step. The line is currently defined like this in my CDK stack:
code: lambda.Code.fromAsset(process.env.CODEBUILD_SRC_DIR_BuildLambda || "")
The issue is that locally this triggers the unexpected and undesired behaviour, because the environment variable does not exist and therefore falls back to the default "".
What is the proper way to avoid this issue?
Thanks!
Option 1: Set the env var locally, pointing to the correct source directory:
CODEBUILD_SRC_DIR_BuildLambda=path/to/lambda cdk deploy
Option 2: Define a dummy asset if CODEBUILD_SRC_DIR_BuildLambda is undefined
code: process.env.CODEBUILD_SRC_DIR_BuildLambda
    ? lambda.Code.fromAsset(process.env.CODEBUILD_SRC_DIR_BuildLambda)
    : new lambda.InlineCode('exports.handler = async () => console.log("NEVER")'),

Terraform resource AWS Lambda Go error: "fork/exec /var/task/main: no such file or directory"

I have a Go program and I am creating a Terraform resource aws_lambda_function with runtime configuration as follows:
handler = "main"
memory_size = 512
timeout = 360
runtime = "go1.x"
In my Go code, I have imported the modules:
"github.com/aws/aws-lambda-go/lambda"
"github.com/aws/aws-lambda-go/events"
and here is a snippet of ecr-sync.go:
func main() {
    lambda.Start(HandleRequest)
}

func HandleRequest(ctx context.Context, event event.HandleRequest) (string, error) {
    return string(body), err
}
The Lambda function is deployed, but while testing the function it throws the following error:
{
    "errorMessage": "fork/exec /var/task/main: no such file or directory",
    "errorType": "PathError"
}
Anyone know how to fix this issue? I saw this post https://github.com/serverless/serverless/issues/4710 but I am not sure how I can set up the build configuration through a pipeline, as the runtime configs are set up through Terraform.
"fork/exec /var/task/main: no such file or directory"
The error means that the executable in your lambda's zip file is not named main.
In the Go API for Lambda, the handler must be in the main package and it must be called from the main() function, just like yours. Neither the package nor the function name needs to be set anywhere; the handler setting in the resource refers to the filename of the executable inside the uploaded zip file.
From the error, it is clear that your zipfile does not contain an executable named main. (/var/task is simply where Lambda unpacks your zip file on its side.)
The lambda function is deployed but while testing the function, it throws me following error:
Yes, deploying a function does not verify that its handler configuration matches its zipfile; that error only surfaces at runtime. The file name (extension included) does not matter in itself, but it must match the handler you specify in the Lambda config.
To fix the error, check your zipfile and update the handler to point at the executable. Keep in mind that Go Lambdas must be compiled and the executable must be provided in the zipfile: unlike interpreted languages such as JavaScript or Python, source code does not go in the zipfile. For the go1.x runtime that means building a Linux binary, for example with GOOS=linux go build -o main ecr-sync.go, and zipping the resulting main executable.

The create_profile_job() method is not accepting the new parameters

I am trying to use a Lambda function (written in Python) to create a series of profile jobs in DataBrew. AWS recently added a new parameter to this method ("Configuration"), which I have added in my code. However, when I call the method, I get the following error message: "Unknown parameter in input: "Configuration", must be one of: DatasetName, EncryptionKeyArn, EncryptionMode, Name, LogSubscription, MaxCapacity, MaxRetries, OutputLocation, RoleArn, Tags, Timeout, JobSample." This does not match the parameter list in the boto3 documentation, which was recently updated to align with the new features added to DataBrew on 07/23/21. Has anyone else had this issue? If so, is there a timeline for this bug to be fixed?
It turns out that the version of boto3 that is available in Lambda by default is not the latest one. Hence, in order to use all the parameters for this method, you have to add the latest version of boto3 (and its dependencies) as a Lambda layer.
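As a quick sanity check, you can log the versions your function actually resolves at runtime; if the layer is wired up correctly you should see the newer release instead of the bundled one. A minimal sketch (the handler name is just illustrative):

import boto3
import botocore

def handler(event, context):
    # If these still print the Lambda-bundled versions, the layer isn't being used.
    print("boto3:", boto3.__version__, "botocore:", botocore.__version__)
    client = boto3.client("databrew")
    # client.create_profile_job(..., Configuration=...) should then be accepted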

File payload in a bucket-triggered Google Cloud Function

I have a question about a Google Cloud Function triggered by an event on a storage bucket (I'm developing it in Python).
I have to read the data of the file that was just finalized (a PDF file) in the bucket that triggers the event. I was looking for the file payload on the event object passed to my function (data, context), but it seems there is no payload on that object.
Do I have to use the Cloud Storage library to get the file from the bucket? Is there a way to get the payload directly from the context of the triggered function?
Enrico
From checking the more complete example in the Firebase documentation, it indeed seems that the payload of the file is not included in the parameters. That makes sense, since there's no telling how big the file that was just finalized is, and whether it would even fit in the memory of your Functions runtime.
So you'll indeed have to grab the file from the bucket with a separate call, based on the information in the metadata. The full Firebase example grabs the filename and other info from its context/data with:
exports.generateThumbnail = functions.storage.object().onFinalize(async (object) => {
    const fileBucket = object.bucket; // The Storage bucket that contains the file.
    const filePath = object.name; // File path in the bucket.
    const contentType = object.contentType; // File content type.
    const metageneration = object.metageneration; // Number of times metadata has been generated. New objects have a value of 1.
    ...
I'll see if I can find a more complete example. But I'd expect it to work similarly on raw Google Cloud Functions, which Firebase wraps, even when using Python.
Update: from looking at this Storage/Function/PubSub documentation that the Python binding is apparently based on, it looks like the path should be available as data['resource'] or as data['name'].
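For example, in Python a minimal sketch using the google-cloud-storage client could look like this (the entry-point name is just illustrative; the bucket and object names come from the event metadata):

from google.cloud import storage

def handle_finalize(data, context):
    # The event only carries object metadata, not the file contents.
    bucket_name = data["bucket"]
    object_name = data["name"]

    client = storage.Client()
    blob = client.bucket(bucket_name).blob(object_name)
    pdf_bytes = blob.download_as_bytes()  # fetch the payload with a separate call
    print(f"Downloaded {object_name} ({len(pdf_bytes)} bytes) from {bucket_name}")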

Is it possible to rename an AWS Lambda function?

I have created some AWS Lambda functions for testing purposes (named as test_function something), and after testing I found those functions can be used in the prod environment.
Is it possible to rename an AWS Lambda function, and if so, how?
Or should I create a new one and copy-paste the source code?
The closest you can get to renaming the AWS Lambda function is using an alias, which is a way to name a specific version of an AWS Lambda function. The actual name of the function though, is set once you create it. If you want to rename it, just create a new function and copy the exact same code into it. It won't cost you any extra to do this (since you are only charged for execution time) so you lose nothing.
For a reference on how to name versions of the AWS Lambda function, check out the documentation here: Lambda function versions.
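If an alias would cover your use case, a minimal boto3 sketch (the function name and version number below are placeholders) looks like this:

import boto3

lambda_client = boto3.client("lambda")

# Point a friendly name at a specific published version of the function.
lambda_client.create_alias(
    FunctionName="test_function_something",  # placeholder
    Name="prod",
    FunctionVersion="1",                     # a published version number
    Description="Production alias",
)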
You cannot rename the function; your only option is to follow the suggestions already provided here or create a new one and copy-paste the code.
It's actually a good thing that you cannot rename it: if you could, the function would cease to work, because the policies attached to it still point to the old name, unless you edited every single one of them manually or made them generic (which is ill-advised).
However, as a best practice in terms of software development, I suggest you to always keep production and testing (staging) separate, effectively duplicating your environment.
This allows you to test stuff on a safe environment, where if you make a mistake you don't lose anything important, and when you confirm that your new features work, replicate them in production.
So in your case, you would have two Lambdas, one called 'my-lambda-staging' and the other 'my-lambda-prod'. Use the Lambdas' environment variables to adapt to the current environment, so you don't need to refactor!
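A minimal sketch of that idea (the variable and table names are made up for illustration):

import os

# ENVIRONMENT is assumed to be set on each function, e.g. "staging" on
# my-lambda-staging and "prod" on my-lambda-prod.
ENVIRONMENT = os.environ.get("ENVIRONMENT", "staging")
TABLE_NAME = f"my-table-{ENVIRONMENT}"  # hypothetical per-environment resource

def handler(event, context):
    print(f"Running in {ENVIRONMENT}, using table {TABLE_NAME}")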
My solution is to export the function, create a new Lambda, then upload the .zip file to the new Lambda.
My solution for renaming a Lambda: basically, use boto3 to read the previous Lambda's configuration and to download the previous Lambda's function code, then create a new Lambda from them. Note that triggers won't be copied over, so you need to add them back manually.
from boto3.session import Session
import pprint
import urllib3

pp = pprint.PrettyPrinter(indent=4)

session = Session(aws_access_key_id={YOUR_ACCESS_KEY},
                  aws_secret_access_key={YOUR_SECRET_KEY},
                  region_name='your_region')

PREV_FUNC_NAME = 'your_prev_function_name'
NEW_FUNC_NAME = 'your_new_function_name'


def prev_lambda_code(code_temp_path):
    '''
    Download the previous function's code from the presigned URL
    returned by get_function.
    '''
    code_url = code_temp_path
    http = urllib3.PoolManager()
    response = http.request("GET", code_url)
    if not 200 <= response.status < 300:
        raise Exception(f'Failed to download function code: {response}')
    return response.data


def rename_lambda_function(PREV_FUNC_NAME, NEW_FUNC_NAME):
    '''
    Copy the previous lambda function's code and configuration
    and recreate it under the new name.
    '''
    lambda_client = session.client('lambda')
    prev_func_info = lambda_client.get_function(FunctionName=PREV_FUNC_NAME)

    if 'VpcConfig' in prev_func_info['Configuration']:
        VpcConfig = {
            'SubnetIds': prev_func_info['Configuration']['VpcConfig']['SubnetIds'],
            'SecurityGroupIds': prev_func_info['Configuration']['VpcConfig']['SecurityGroupIds']
        }
    else:
        VpcConfig = {}

    if 'Environment' in prev_func_info['Configuration']:
        Environment = prev_func_info['Configuration']['Environment']
    else:
        Environment = {}

    response = lambda_client.create_function(
        FunctionName=NEW_FUNC_NAME,
        Runtime=prev_func_info['Configuration']['Runtime'],
        Role=prev_func_info['Configuration']['Role'],
        Handler=prev_func_info['Configuration']['Handler'],
        Code={
            'ZipFile': prev_lambda_code(prev_func_info['Code']['Location'])
        },
        Description=prev_func_info['Configuration']['Description'],
        Timeout=prev_func_info['Configuration']['Timeout'],
        MemorySize=prev_func_info['Configuration']['MemorySize'],
        VpcConfig=VpcConfig,
        Environment=Environment,
        PackageType=prev_func_info['Configuration']['PackageType'],
        TracingConfig=prev_func_info['Configuration']['TracingConfig'],
        Layers=[Layer['Arn'] for Layer in prev_func_info['Configuration'].get('Layers', [])],
    )
    pp.pprint(response)


rename_lambda_function(PREV_FUNC_NAME, NEW_FUNC_NAME)