I have been stuck on this for a while.
I want to be able to trigger a Lambda function that runs my compiled C++ code.
For simplicity, I want to run a hello.cpp file that just prints "hello world". I want to know how this is possible: what file architecture do I need, and what goes inside my handler function?
I know this is very simple to do in Node.js, but how would I replicate the Node.js hello world example to run the C++ file?
The AWS website does say I'm allowed to use a custom runtime, so it should be possible.
Any insight will help.
AWS Lambda now supports C++ through an official runtime library. See the announcement post, Introducing the C++ Lambda Runtime.
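With that runtime there is no special file architecture to replicate: you ship a single self-contained binary, and the handler is an ordinary C++ function. A minimal sketch, assuming the aws-lambda-cpp library:

// hello.cpp -- minimal handler for the aws-lambda-cpp runtime
#include <aws/lambda-runtime/runtime.h>

using namespace aws::lambda_runtime;

// Called once per invocation; the raw request payload is available in req.payload.
static invocation_response my_handler(invocation_request const& req)
{
    return invocation_response::success("{\"message\": \"hello world\"}", "application/json");
}

int main()
{
    run_handler(my_handler);
    return 0;
}

Build it on (or for) Amazon Linux; the library's CMake integration offers an aws_lambda_package_target helper that produces the deployment .zip, which you then upload as a function with a custom runtime.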
If you still want to use Node.js, compile your program on an Amazon Linux image and run the index.js below, adding the compiled binary to the folder node_modules/.bin/ and the dynamic libraries it requires to the folder node_modules/.bin/lib in the uploaded .zip file:
exports.handler = (event, context, callback) => {
  // Run the compiled binary through the dynamic linker so it picks up the bundled libraries
  var result = require('child_process').execSync(
    "/lib64/ld-linux-x86-64.so.2 --library-path " + process.cwd() + "/node_modules/.bin/lib " +
    process.cwd() + "/node_modules/.bin/helloworld"
  ).toString();
  result = JSON.parse(result);
  const response = {
    statusCode: 200,
    body: result
  };
  callback(null, response);
};
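Note that the wrapper above calls JSON.parse on the binary's stdout, so whatever the compiled program prints must be valid JSON. A hypothetical hello.cpp matching that contract:

// hello.cpp -- hypothetical program invoked by the wrapper above.
// Its stdout must be valid JSON, because the wrapper JSON.parse()s it.
#include <iostream>

int main()
{
    std::cout << "{\"message\": \"hello world\"}" << std::endl;
    return 0;
}

Compile it on an Amazon Linux image (so the binary matches the libraries you bundle) and place it at node_modules/.bin/helloworld in the zip.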
So I have the following Rust AWS Lambda function:
use std::io::Read;
use std::process::{Command, Stdio};
use lambda_http::{run, service_fn, Body, Error, Request, RequestExt, Response};
use lambda_http::aws_lambda_events::serde_json::json;
/// This is the main body for the function.
/// Write your code inside it.
/// There are some code examples in the following URLs:
/// - https://github.com/awslabs/aws-lambda-rust-runtime/tree/main/examples
async fn function_handler(_event: Request) -> Result<Response<Body>, Error> {
    // Run the bundled binary and capture its stdout
    let program = Command::new("./myProgram")
        .stdout(Stdio::piped())
        .output()
        .expect("failed to execute process");
    let data = String::from_utf8(program.stdout).unwrap();
    let parsed = data.split("\n").filter(|x| !x.is_empty()).collect::<Vec<&str>>();

    // Return something that implements IntoResponse.
    // It will be serialized to the right response event automatically by the runtime
    let resp = Response::builder()
        .status(200)
        .header("content-type", "application/json")
        .body(json!(parsed).to_string().into())
        .map_err(Box::new)?;
    Ok(resp)
}
#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        // disable printing the name of the module in every log line.
        .with_target(false)
        // disabling time is handy because CloudWatch will add the ingestion time.
        .without_time()
        .init();

    run(service_fn(function_handler)).await
}
The idea here is that I want to return the response from the binary in JSON format.
I'm compiling the function with cargo lambda, which produces a bootstrap file; then I'm zipping it manually by including the bootstrap binary and the myProgram binary.
When I test my function in the Lambda panel by sending an event to it, I get the response with the right headers etc., but the response body is empty.
I'm deploying my function through the AWS panel, on the custom runtime on Amazon Linux 2, by uploading the zip file.
When I test locally with cargo lambda watch and cargo lambda invoke, the response body is filled with the myProgram stdout parsed to JSON.
Any ideas or thoughts on what goes wrong in the actual cloud are much appreciated!
My problem was with the dynamically linked libraries in the binary. It is actually a Python binary, and it was missing a specific version of GLIBC (running ldd on the binary shows which shared libraries it needs).
The easiest solution in my case was to compile myProgram on Amazon Linux 2, so it links against the libraries the Lambda runtime actually provides.
Been trying to test out the aws-iot-device-sdk-v2 library for a bit. I am currently trying out the sample app provided by the AWS dev team, testing the system incrementally. This is the code I have tested so far:
import { mqtt, auth, http, io, iot } from 'aws-iot-device-sdk-v2';
const client_bootstrap = new io.ClientBootstrap();
let config_builder = iot.AwsIotMqttConnectionConfigBuilder.new_with_websockets({
    region: 'us-west-2',
    credentials_provider: auth.AwsCredentialsProvider.newDefault(client_bootstrap)
});
config_builder.with_clean_session(false);
config_builder.with_endpoint('example.com');
config_builder.with_client_id(1);
const config = config_builder.build();
const client = new mqtt.MqttClient(client_bootstrap);
const connection = client.new_connection(config);
await connection.connect();
When running this on the AWS console, I am getting the following error:
TypeError: Cannot read property 'AwsCredentialsProvider' of undefined
Any idea what I'm doing wrong here?
Wasn't able to identify why I couldn't use AwsCredentialsProvider as expected, but found a work-around: I was able to initialize the builder with const config_builder = iot.AwsIotMqttConnectionConfigBuilder.new_with_websockets(); instead. Might be something to look into if the dev team has time.
I also encountered the same issue when running this example in the browser and found the reason:
'aws-iot-device-sdk-v2' just imports these 5 classes directly from aws-crt:
https://github.com/aws/aws-iot-device-sdk-js-v2/blob/main/lib/index.ts
while in aws-crt, the implementation for the browser is in a subfolder,
https://github.com/awslabs/aws-crt-nodejs/tree/main/lib/browser, and it doesn't include 'auth'.
So if you run these examples in the browser, you need to import from aws-crt's subfolder and skip 'auth':
import { mqtt, http, io, iot } from 'aws-crt/dist.browser/browser';
AWS recently introduced S3 Object Lambda; however, looking at the online documentation:
Writing and debugging Lambda functions for S3 Object Lambda Access Points
Introducing Amazon S3 Object Lambda – Use Your Code to Process Data as It Is Being Retrieved from S3
How to use Amazon S3 Object Lambda to generate thumbnails
I can only find examples for Java, Python and NodeJS.
Is there an example out there for C++ that I missed? In particular, I fail to understand what the equivalent in C++ is for getObjectContext (Python/NodeJS) / S3ObjectLambdaEvent (Java). How should I retrieve outputRoute, outputToken and inputS3Url? The integration test does not make it particularly clear either:
https://github.com/aws/aws-sdk-cpp/blob/main/aws-cpp-sdk-s3-integration-tests/BucketAndObjectOperationTest.cpp#L580-L581
S3 Object Lambda is using the AWS Lambda service, and AWS Lambda only supports the following runtimes "natively":
Go
.NET Core
Ruby
Java
Python
NodeJS
C++ is supported through "custom" runtimes or through Docker containers. Usually, the AWS documentation only covers the "natively" supported runtimes, and even then not all of them (as you noticed). Mostly they have examples for the most popular ones.
So what you need to look for are C++ Lambda examples, examples using the C++ AWS SDK and reference documentation.
Using the reference documentation and the Java/Python/NodeJS examples, it should be easy to write a C++ version.
For example:
Introducing the C++ Lambda Runtime (Blog Post)
C++ AWS SDK Page
API Reference for WriteGetObjectResponseRequest
Answering my own post after a couple of weeks of struggle.
The basic point is summarized in Introduction to Amazon S3 Object Lambda.
The JSON payload of interest is simply:
{ "xAmzRequestId": "1a5ed718-5f53-471d-b6fe-5cf62d88d02a",
"getObjectContext": {
"inputS3Url": "https://transform-424432388155.s3-accesspoint.us-east-1.amazonaws.com/title.txt?X-Amz-Security-Token=...",
"outputRoute": "io-iad-cell001",
"outputToken": "..." },
You can simply parse it in the handler using:
static invocation_response my_handler(invocation_request const& req, Aws::S3::S3Client const& client)
{
    using namespace Aws::Utils::Json;
    JsonValue json(req.payload);
    if (!json.WasParseSuccessful()) {
        return invocation_response::failure("Failed to parse input JSON", "InvalidJSON");
    }
    auto view = json.View();
    auto s3url = view.GetObject("getObjectContext").GetString("inputS3Url");
    auto route = view.GetObject("getObjectContext").GetString("outputRoute");
    auto token = view.GetObject("getObjectContext").GetString("outputToken");
These values are later used in a WriteGetObjectResponseRequest as:
S3::Model::WriteGetObjectResponseRequest request;
request.WithRequestRoute(route);
request.WithRequestToken(token);
request.SetBody(objectStream);
auto outcome = client.WriteGetObjectResponse(request);
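Before returning from the handler, the outcome of that call should be checked; a minimal sketch (the error type string is a placeholder):

if (!outcome.IsSuccess()) {
    // Surface the SDK error back to the Lambda runtime
    return invocation_response::failure(outcome.GetError().GetMessage().c_str(), "WriteGetObjectResponseFailed");
}
return invocation_response::success("{}", "application/json");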
It is then up to the application to decide how to construct the objectStream, but a simple pass-through example would be:
std::shared_ptr<Aws::Http::HttpRequest> getRequest =
CreateHttpRequest(s3url, Http::HttpMethod::HTTP_GET,
Aws::Utils::Stream::DefaultResponseStreamFactoryMethod);
std::shared_ptr<Aws::Http::HttpClient> httpClient =
Aws::Http::CreateHttpClient(Aws::Client::ClientConfiguration());
std::shared_ptr<Aws::Http::HttpResponse> getResponse =
httpClient->MakeRequest(getRequest);
std::shared_ptr<Aws::IOStream> objectStream =
Aws::MakeShared<Aws::StringStream>("SO-WriteGetObjectResponse");
const Aws::IOStream& responseBody = getResponse->GetResponseBody();
*objectStream << responseBody.rdbuf();
objectStream->flush();
Examples which helped me are:
https://github.com/awslabs/aws-lambda-cpp/blob/master/examples/s3/main.cpp
Working with WriteGetObjectResponse (Example #2/Python)
I have a question about a Google Cloud Function triggered by an event on a storage bucket (I'm developing it in Python).
I have to read the data of the file (a PDF) that was just finalized on the bucket that is triggering the event. I was looking for the file payload on the event object passed to my function (data, context), but it seems there is no payload on that object.
Do I have to use the Cloud Storage library to get the file from the bucket? Is there a way to get the payload directly from the context of the triggered function?
From checking the more complete example in the Firebase documentation, it indeed seems that the payload of the file is not included in the parameters. That makes sense, since there's no telling how big the file that was just finalized is, and whether it will even fit in the memory of your Functions runtime.
So you'll indeed have to grab the file from the bucket with a separate call, based on the information in the metadata. The full Firebase example grabs the filename and other info from its context/data with:
exports.generateThumbnail = functions.storage.object().onFinalize(async (object) => {
  const fileBucket = object.bucket; // The Storage bucket that contains the file.
  const filePath = object.name; // File path in the bucket.
  const contentType = object.contentType; // File content type.
  const metageneration = object.metageneration; // Number of times metadata has been generated. New objects have a value of 1.
  ...
I'll see if I can find a more complete example. But I'd expect it to work similarly on raw Google Cloud Functions, which Firebase wraps, even when using Python.
Update: from looking at this Storage/Function/PubSub documentation that the Python binding is apparently based on, it looks like the path should be available as data['resource'] or as data['name'].
I am trying to set up a Lambda in pulumi-aws, but when deployed my function code is wrapped in a folder with the same name as the generated Lambda function name.
I would prefer not to have this, as it's unnecessary; but more than that, it means I can't work out what my handler should be, as the folder name is generated.
(I realise I can probably use a reference to get this generated name, but I don't like the added complexity for no reason. I don't see a good reason for having this folder inside the Lambda.)
E.g. my function code is one simple index.js file with one named export, handler. I would expect my Lambda handler to be index.handler.
(Note I am using TypeScript for my Pulumi code but the Lambda is in JavaScript.)
I have tried a couple of options for the code property:
const addTimesheetEntryLambda = new aws.lambda.Function("add-timesheet-entry", {
    code: new pulumi.asset.AssetArchive({
        "index.js": new pulumi.asset.FileAsset('./lambdas/add-timesheet-entry/index.js'),
    }),
In this example the resulting zip file was simply an index.js with no folder information.
const addTimesheetEntryLambda = new aws.lambda.Function("add-timesheet-entry", {
    code: new pulumi.asset.FileArchive("lambdatest.zip"),
AWS Lambda code is always shown in a "folder" named after the function name; you can see this in the web console's code editor for any function, including ones created there directly.
It doesn't affect the naming of the handler though. index.handler is just fine.