So I have the following Rust AWS Lambda function:
use std::io::Read;
use std::process::{Command, Stdio};
use lambda_http::{run, service_fn, Body, Error, Request, RequestExt, Response};
use lambda_http::aws_lambda_events::serde_json::json;
/// This is the main body for the function.
/// Write your code inside it.
/// There are some code example in the following URLs:
/// - https://github.com/awslabs/aws-lambda-rust-runtime/tree/main/examples
async fn function_handler(_event: Request) -> Result<Response<Body>, Error> {
    // Run the bundled binary and capture its stdout.
    let program = Command::new("./myProgram")
        .stdout(Stdio::piped())
        .output()
        .expect("failed to execute process");
    let data = String::from_utf8(program.stdout).unwrap();
    let parsed = data.split('\n').filter(|x| !x.is_empty()).collect::<Vec<&str>>();

    // Return something that implements IntoResponse.
    // It will be serialized to the right response event automatically by the runtime.
    let resp = Response::builder()
        .status(200)
        .header("content-type", "application/json")
        .body(json!(parsed).to_string().into())
        .map_err(Box::new)?;
    Ok(resp)
}
#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        // disable printing the name of the module in every log line.
        .with_target(false)
        // disabling time is handy because CloudWatch will add the ingestion time.
        .without_time()
        .init();

    run(service_fn(function_handler)).await
}
The idea here is that I want to return the response from the binary in JSON format.
I'm compiling the function with cargo lambda, which produces a bootstrap file; I'm then zipping it manually, including both the bootstrap binary and the myProgram binary.
When I test my function in the Lambda console by sending an event to it, I get the response with the right headers etc., but the response body is empty.
I'm deploying my function through the AWS console, on the custom runtime on Amazon Linux 2, by uploading the zip file.
When I test locally with cargo lambda watch and cargo lambda invoke, the response body is filled with the myProgram stdout parsed to JSON.
Any ideas or thoughts on what goes wrong in the actual cloud are much appreciated!
My problem was with the dynamically linked libraries in the binary. It is actually a Python binary, and it was missing a specific version of GLIBC.
The easiest solution in my case was to compile myProgram on Amazon Linux 2. (Running ldd myProgram in an Amazon Linux 2 environment shows which shared libraries fail to resolve.)
I have a Go script and I am creating a Terraform aws_lambda_function resource with the following runtime configuration:
handler = "main"
memory_size = 512
timeout = 360
runtime = "go1.x"
In my Go code, I have imported the modules:
"github.com/aws/aws-lambda-go/lambda"
"github.com/aws/aws-lambda-go/events"
and here is a snippet of code from ecr-sync.go:
func main() {
    lambda.Start(HandleRequest)
}

func HandleRequest(ctx context.Context, event event.HandleRequest) (string, error) {
    return string(body), err
}
The Lambda function is deployed, but while testing the function it throws the following error:
{
  "errorMessage": "fork/exec /var/task/main: no such file or directory",
  "errorType": "PathError"
}
Does anyone know how to fix this issue? I saw this post https://github.com/serverless/serverless/issues/4710 but I am not sure how I can set up the build configuration through a pipeline, as the runtime configs are set up through Terraform.
"fork/exec /var/task/main: no such file or directory"
The error means that the executable in your lambda's zip file is not named main.
In the Go API for Lambda, the handler must be in the main package and must be registered inside the main() function, just like yours. Neither the package nor the function name needs to be set anywhere. The handler setting in the resource refers to the filename of the executable in the uploaded zip file.
From the error, it is clear that your zip file does not contain an executable named main (/var/task is simply where Lambda unpacks your deployment package).
The Lambda function is deployed, but while testing the function it throws the following error:
Correct: deploying a function does not verify that its handler configuration matches its zip file; that error only surfaces at runtime. The executable's filename (extension included) is arbitrary, but it must match the handler you specify in the Lambda config.
To fix the error, inspect your zip file and update the handler to point to the executable it actually contains. Keep in mind that Go lambdas must be compiled and the executable must be provided in the zip file; unlike interpreted languages such as JavaScript or Python, source code does not go in the zip file.
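For reference, here is a minimal sketch of a complete Go lambda; the map[string]interface{} event type is only a placeholder for whatever event your function actually receives:

package main

import (
    "context"

    "github.com/aws/aws-lambda-go/lambda"
)

// The Go name of this function is irrelevant to the "handler" setting in
// Terraform; that setting must match the compiled executable's filename.
func HandleRequest(ctx context.Context, event map[string]interface{}) (string, error) {
    return "ok", nil
}

func main() {
    // Execution starts in main, which registers the handler with the runtime.
    lambda.Start(HandleRequest)
}

Building it with GOOS=linux GOARCH=amd64 go build -o main ecr-sync.go and zipping the resulting main binary gives you a package consistent with handler = "main".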
I'm writing my first Scala lambda, and I have run into a bit of an issue which I think should be straightforward, but I'm having trouble finding the answer.
So I have the following code that allows me to accept JSON and use it in the lambda.
--eventTest.scala
val stream : InputStream = getClass.getResourceAsStream("/test_data/body.json")
--request handler
def handleRequest(input: InputStream): Unit = {
  val name = scalaMapper.readValue(input, classOf[NameInfo])
  val result = s"Hello there, ${name.firstName} ${name.lastName}."
  println(result)
}
This works just fine, but I'm having trouble figuring out how to get URL parameters. Does it automatically use the same InputStream? There seems to be very little documentation on this in Scala.
Thanks
A Lambda function's event is a JSON object. The Lambda runtime will introspect the handler function and attempt to extract or convert that object based on the function signature. I believe the easiest representation is a java.util.Map[String, String] (IIRC, Lambda doesn't have a Scala runtime, so you'll have to use the Java classes and convert them).
An example event from the API Gateway proxy integration: https://github.com/awsdocs/aws-lambda-developer-guide/blob/master/sample-apps/nodejs-apig/event.json
For more information about the Java runtime: https://docs.aws.amazon.com/lambda/latest/dg/java-handler.html
I want to use an executable file within an AWS Lambda handler function.
Here is my handler function, where I want to use the executable:
func handler() {
    cmd := exec.Command("./balance", "GetBalance", id)
    cmd.Dir = "/Users/xxxx/go/src/getBalance"
    output, err := cmd.Output()
}
I want to use the output of the above command in this handler. Is this possible? If so, do I need to zip both executables, or is there another way to use an executable within the handler?
Sadly, you will not be able to use /Users/xxxx/go/src/getBalance; that path exists only on your development machine. In Lambda, the only writable location is /tmp.
Also, if you bundle the balance file with your deployment package, it will be stored in /var/task alongside your function code.
EDIT:
Based on the new comments, the complete solution also required removing cmd.Dir and recompiling balance for Linux.
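Putting those points together, a corrected handler could look roughly like this sketch; the id value is a stand-in for however the real code obtains it, and balance is assumed to be compiled for Linux and zipped alongside the function binary:

package main

import (
    "context"
    "os/exec"

    "github.com/aws/aws-lambda-go/lambda"
)

func handler(ctx context.Context) (string, error) {
    // No cmd.Dir needed: the deployment package is unpacked into /var/task,
    // which is also the working directory, so the relative path finds balance.
    id := "42" // stand-in: derive the real id from the event or environment
    cmd := exec.Command("./balance", "GetBalance", id)
    output, err := cmd.Output()
    if err != nil {
        return "", err
    }
    return string(output), nil
}

func main() {
    lambda.Start(handler)
}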
I have a question about a Google Cloud Function triggered by an event on a storage bucket (I'm developing it in Python).
I have to read the data of the file just finalized (a PDF file) on the bucket that is triggering the event. I was looking for the file payload on the event object passed to my function (data, context), but it seems there is no payload on that object.
Do I have to use the Cloud Storage library to get the file from the bucket? Is there a way to get the payload directly from the context of the triggered function?
Enrico
From checking the more complete example in the Firebase documentation, it indeed seems that the payload of the file is not included in the parameters. That makes sense, since there's no telling how big the file that was just finalized is, and whether it will even fit in the memory of your Functions runtime.
So you'll indeed have to grab the file from the bucket with a separate call, based on the information in the metadata. The full Firebase example grabs the filename and other info from its context/data with:
exports.generateThumbnail = functions.storage.object().onFinalize(async (object) => {
  const fileBucket = object.bucket; // The Storage bucket that contains the file.
  const filePath = object.name; // File path in the bucket.
  const contentType = object.contentType; // File content type.
  const metageneration = object.metageneration; // Number of times metadata has been generated. New objects have a value of 1.
  ...
I'll see if I can find a more complete example. But I'd expect it to work similarly on raw Google Cloud Functions, which Firebase wraps, even when using Python.
Update: from looking at this Storage/Functions/PubSub documentation that the Python binding is apparently based on, it looks like the path should be available as data['resource'] or as data['name'].
I have been stuck on this for a while.
I want to be able to trigger a Lambda function that runs my .cpp files.
For simplicity, I want to run a hello.cpp file that just prints hello world. I want to know how this is possible: what file structure do I need, and what goes inside my handler function?
I know this is very simple to do in Node.js, but how would I replicate the Node.js hello world example to run the C++ file?
The AWS website does say I'm allowed to use a custom runtime, so it should be possible.
Any insight will help.
AWS Lambda now supports C++ natively.
Announcement post here.
If you still want to use Node.js, compile your program using an Amazon Linux image and run the index.js below, adding the compiled file to the node_modules/.bin/ folder and the required dynamic libraries to the node_modules/.bin/lib folder in the uploaded .zip file:
exports.handler = (event, context, callback) => {
  // Run the compiled binary through the dynamic loader so it picks up
  // the bundled libraries, then capture its stdout.
  var result = require('child_process').execSync(
    "/lib64/ld-linux-x86-64.so.2 --library-path " + process.cwd() + "/node_modules/.bin/lib " +
    process.cwd() + "/node_modules/.bin/helloworld"
  ).toString();
  result = JSON.parse(result); // assumes the binary prints JSON
  const response = {
    statusCode: 200,
    body: result
  };
  callback(null, response);
};