How to use an executable file within an AWS Lambda handler?

I want to invoke an executable file from within an AWS Lambda handler function.
Here is my handler function, where I want to use the executable:
func handler() {
    cmd := exec.Command("./balance", "GetBalance", id)
    cmd.Dir = "/Users/xxxx/go/src/getBalance"
    output, err := cmd.Output()
}
I want to use the output of the above command in this handler. Is that possible? If so, do I need to zip both executables together, or is there another way to use an executable within the handler?

Sadly, you will not be able to write to /Users/xxxx/go/src/getBalance. In Lambda, the only writable path is /tmp.
However, if you bundle the balance executable with your deployment package, it will be stored in /var/task alongside your function code.
EDIT:
Based on the follow-up comments, the complete solution also required removing cmd.Dir and recompiling balance for Linux, as sketched below.
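For illustration, here is a minimal sketch of the corrected handler, assuming balance sits at the root of the deployment package; the string-event handler signature is an assumption, not taken from the question:

package main

import (
    "context"
    "os/exec"

    "github.com/aws/aws-lambda-go/lambda"
)

// The deployment package is extracted to /var/task, which is also the
// working directory, so the bundled binary can be invoked with a
// relative path and cmd.Dir is not needed.
func handler(ctx context.Context, id string) (string, error) {
    cmd := exec.Command("./balance", "GetBalance", id)
    output, err := cmd.Output()
    if err != nil {
        return "", err
    }
    return string(output), nil
}

func main() {
    lambda.Start(handler)
}

Both the handler executable and balance must be compiled for Linux (e.g. GOOS=linux GOARCH=amd64 go build) and zipped together into the deployment package.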

Related

Invoking a binary in AWS Lambda with Rust

So I have the following Rust AWS Lambda function:
use std::io::Read;
use std::process::{Command, Stdio};

use lambda_http::{run, service_fn, Body, Error, Request, RequestExt, Response};
use lambda_http::aws_lambda_events::serde_json::json;

/// This is the main body for the function.
/// Write your code inside it.
/// There are some code examples in the following URLs:
/// - https://github.com/awslabs/aws-lambda-rust-runtime/tree/main/examples
async fn function_handler(_event: Request) -> Result<Response<Body>, Error> {
    // Run the bundled binary and capture its stdout.
    let program = Command::new("./myProgram")
        .stdout(Stdio::piped())
        .output()
        .expect("failed to execute process");
    let data = String::from_utf8(program.stdout).unwrap();
    let parsed = data.split("\n").filter(|x| !x.is_empty()).collect::<Vec<&str>>();

    // Return something that implements IntoResponse.
    // It will be serialized to the right response event automatically by the runtime.
    let resp = Response::builder()
        .status(200)
        .header("content-type", "application/json")
        .body(json!(parsed).to_string().into())
        .map_err(Box::new)?;
    Ok(resp)
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        // disable printing the name of the module in every log line.
        .with_target(false)
        // disabling time is handy because CloudWatch will add the ingestion time.
        .without_time()
        .init();

    run(service_fn(function_handler)).await
}
#[tokio::main]
async fn main() -> Result<(), Error> {
tracing_subscriber::fmt()
.with_max_level(tracing::Level::INFO)
// disable printing the name of the module in every log line.
.with_target(false)
// disabling time is handy because CloudWatch will add the ingestion time.
.without_time()
.init();
run(service_fn(function_handler)).await
}
The idea here is that I want to return the response from the binary in JSON format.
I'm compiling the function with cargo lambda, which produces a bootstrap file; I then zip it manually, including both the bootstrap binary and the myProgram binary.
When I test my function in the Lambda panel by sending an event to it, I get a response with the right headers etc., but the response body is empty.
I'm deploying my function through the AWS console, on the custom runtime on Amazon Linux 2, by uploading the zip file.
When I test locally with cargo lambda watch and cargo lambda invoke, the response body is filled with the myProgram stdout parsed to JSON.
Any ideas or thoughts on what goes wrong in the actual cloud are much appreciated!
My problem was with the dynamically linked libraries in the binary. It is actually a Python binary, and it was missing a specific version of GLIBC.
The easiest solution in my case was to compile myProgram on Amazon Linux 2.
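As a side note for compiled Go binaries, the same class of problem can be avoided by building a statically linked executable (CGO_ENABLED=0 GOOS=linux go build), which removes the runtime GLIBC dependency altogether.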

Terraform resource AWS Lambda Go error: "fork/exec /var/task/main: no such file or directory"

I have a Go script, and I am creating the Terraform resource aws_lambda_function with runtime configuration as follows:
handler     = "main"
memory_size = 512
timeout     = 360
runtime     = "go1.x"
In my Go code, I have imported the modules:
"github.com/aws/aws-lambda-go/lambda"
"github.com/aws/aws-lambda-go/events"
and here is a snippet of ecr-sync.go:
func main() {
    lambda.Start(HandleRequest)
}

func HandleRequest(ctx context.Context, event event.HandleRequest) (string, error) {
    return string(body), err
}
The Lambda function is deployed, but when testing the function, it throws the following error:
{
    "errorMessage": "fork/exec /var/task/main: no such file or directory",
    "errorType": "PathError"
}
Anyone know how to fix this issue? I saw this post https://github.com/serverless/serverless/issues/4710 but I am not sure how I can set up the build configuration through a pipeline, as the runtime configs are set up through Terraform.
"fork/exec /var/task/main: no such file or directory"
The error means that the executable in your lambda's zip file is not named main.
In the Go API for Lambda, the handler must be in the main package and it must be called in the main() function, just like yours. Neither package nor function name need to be set anywhere. The handler setting in the resource refers to the filename of the executable in the zip file uploaded.
From the error, it is clear that your zipfile does not have a main. (/var/task comes from the internal setup on the lambda side).
The lambda function is deployed but while testing the function, it throws me following error:
Yes, deploying a function does not verify that its handler configuration matches its zipfile. That error happens at runtime. Filename including extension is irrelevant, but must match the handler you specify in the lambda config.
To fix the error, check your zipfile, and update the handler to point to the executable. Keep in mind that Go lambdas must be compiled and the executable must be provided in the zipfile - unlike interpreted languages like Javascript of Python, source code does not go in the zipfile.
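For illustration, a minimal sketch that satisfies handler = "main"; everything beyond the main-package and main()-function requirements is an assumption:

// ecr-sync.go (sketch)
package main

import (
    "context"

    "github.com/aws/aws-lambda-go/lambda"
)

func HandleRequest(ctx context.Context) (string, error) {
    return "ok", nil
}

func main() {
    lambda.Start(HandleRequest)
}

// Build and package; the -o output name is what handler = "main" must match:
//   GOOS=linux GOARCH=amd64 go build -o main ecr-sync.go
//   zip function.zip main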

Deploying Lambda code inside a folder with an autogenerated name

I am trying to set up a Lambda in pulumi-aws, but when deployed, my function code is wrapped in a folder with the same name as the generated Lambda function name.
I would prefer not to have this, as it's unnecessary; but more than that, it means I can't work out what my handler should be, since the folder name is generated.
(I realise I can probably use a reference to get this generated name, but I don't like the added complexity for no reason. I don't see a good reason for having this folder inside the Lambda.)
E.g. my function code is one simple index.js file with one named export, handler. I would expect my Lambda handler to be index.handler.
(Note: I am using TypeScript for my Pulumi code, but the Lambda itself is in JavaScript.)
I have tried a couple of options for the code property:
const addTimesheetEntryLambda = new aws.lambda.Function("add-timesheet-entry", {
    code: new pulumi.asset.AssetArchive({
        "index.js": new pulumi.asset.FileAsset('./lambdas/add-timesheet-entry/index.js'),
    }),
    // ...
});
In this example, the resulting zip file contained simply an index.js, with no folder information in the zip.
const addTimesheetEntryLambda = new aws.lambda.Function("add-timesheet-entry", {
    code: new pulumi.asset.FileArchive("lambdatest.zip"),
    // ...
});
AWS Lambda code is always displayed in a "folder" named after the function name. Here is a Lambda that I created in the web console:
[screenshot: the function's code shown inside a folder named after the function]
It doesn't affect the naming of the handler, though. index.handler is just fine.
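If it helps, here is the same resource sketched with Pulumi's Go SDK rather than TypeScript; the runtime and role ARN are placeholders, and the point is that index.handler works regardless of the display folder:

package main

import (
    "github.com/pulumi/pulumi-aws/sdk/v5/go/aws/lambda"
    "github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
    pulumi.Run(func(ctx *pulumi.Context) error {
        // lambdatest.zip contains index.js at its root; the console's
        // display folder does not change the handler string.
        _, err := lambda.NewFunction(ctx, "add-timesheet-entry", &lambda.FunctionArgs{
            Code:    pulumi.NewFileArchive("lambdatest.zip"),
            Handler: pulumi.String("index.handler"),
            Runtime: pulumi.String("nodejs18.x"),                                  // placeholder
            Role:    pulumi.String("arn:aws:iam::123456789012:role/lambda-role"), // placeholder
        })
        return err
    })
}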

AWS Node.js handler module with dots yields "Bad handler" error

I'm trying to put my event handler function in a file called app.lambda.js, because app.js should contain only generic code not related to AWS Lambda.
But specifying "app.lambda.handler" as the handler yields Bad handler app.lambda.handler.
Is it simply impossible to use dots in the file name of the handler module?
Yes. You should use app.handler; only one dot is allowed in the handler name.
The format is filename.handler, so for a file app.js exporting a function named handler, the handler string is app.handler.

Set or modify an AWS Lambda environment variable with Python boto3

I want to set or modify an environment variable in my Lambda script.
I need to save a value for the next call of my script.
For example, I create an environment variable with the AWS Lambda console and don't set a value. After that, I try this:
import boto3
import os

if os.environ['ENV_VAR']:
    print(os.environ['ENV_VAR'])
os.environ['ENV_VAR'] = "new value"
In this case, my value never prints.
I also tried:
os.putenv()
but got the same result.
Do you know why this environment variable is not set?
Thank you!
Consider using the boto3 Lambda client's update_function_configuration call to update the environment variable:
import boto3

client = boto3.client('lambda')

response = client.update_function_configuration(
    FunctionName='test-env-var',
    Environment={
        'Variables': {
            'env_var': 'hello'
        }
    }
)
I need to save a value for the next call of my script.
That's not how environment variables work, nor is it how Lambda works. A process can only set environment variables in its own environment and in those of its child processes; a child cannot set them for its parent.
This may be confusing if you are used to setting environment variables at the shell, but in that case the shell is the long-running process setting and getting your environment variables, not the programs it calls.
Consider this example:
# t.py
from os import environ

print(environ['A'])
environ['A'] = "Set from python"
print(environ['A'])
This only sets A for the Python process itself. If you run it several times, the initial value of A is always the shell's value, never the value Python set on the previous run.
$ export A="set from bash"
$ python t.py
set from bash
Set from python
$ python t.py
set from bash
Set from python
Further, even if that weren't the case, it wouldn't work reliably with AWS Lambda. Lambda runs your code on whatever compute resources are available at the time; it typically caches runtimes for frequently executed functions, so in those cases data written to the filesystem could survive. But if the next invocation isn't run in that cached runtime, your data would be lost.
For your needs, you want to persist your data outside the Lambda. Some obvious options are: write to S3, write to DynamoDB, or write to SQS. The next invocation would then read from that location, achieving the desired result (see the sketch below).
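For example, here is a sketch of that pattern in Go (the language of the opening question), persisting a value in DynamoDB between invocations; the table name lambda-state, the key names, and the required IAM permissions are all assumptions:

package main

import (
    "context"

    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/dynamodb"
)

var svc = dynamodb.New(session.Must(session.NewSession()))

func handler(ctx context.Context) (string, error) {
    // Read the value saved by a previous invocation (empty on the first run).
    out, err := svc.GetItem(&dynamodb.GetItemInput{
        TableName: aws.String("lambda-state"), // hypothetical table
        Key: map[string]*dynamodb.AttributeValue{
            "pk": {S: aws.String("env_var")},
        },
    })
    if err != nil {
        return "", err
    }
    previous := ""
    if v, ok := out.Item["value"]; ok && v.S != nil {
        previous = *v.S
    }

    // Save a value for the next invocation to read.
    _, err = svc.PutItem(&dynamodb.PutItemInput{
        TableName: aws.String("lambda-state"),
        Item: map[string]*dynamodb.AttributeValue{
            "pk":    {S: aws.String("env_var")},
            "value": {S: aws.String("new value")},
        },
    })
    return previous, err
}

func main() {
    lambda.Start(handler)
}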
AWS Lambda just executes a piece of code with a given set of inputs. Once executed, it returns the output, and that's all. If you want to preserve the output for your next call, then you probably need to store it in a DB or queue, as Dan said. I personally use SQS in conjunction with SNS, which sends me notifications about the current state. You can even store the end result, like success or failure, in SQS and use it for the next trigger. Just throwing out options here; the rest depends on your requirements.