How do I run C++ in Firebase Functions?

I want to run C++ code on the backend, that is, in Cloud Functions. Does anyone know how?
The question is generic on purpose, because I do not care how the code is built or called.
My initial thought was calling a C++ library from Node.js using FFI. I am still working on it, but I want to explore other options.
Nevertheless, my searches always lead to C++ projects running on the client side, and that is not what I am looking for.
If something is unclear, please ask.
Your help will be much appreciated.

As mentioned by Dharmaraj in the comments,
As far as I know, you can only use JavaScript and TypeScript in
Firebase Cloud Functions, though you get more options like Python, Go
and a few more in Google Cloud Functions.
The only available languages (as standard) in Firebase Functions/Hosting are JavaScript (Node.js) and TypeScript. There are a few projects out there that attempt to make languages such as PHP work, but I will not link any as I have never tried them.
Google Cloud does offer a server environment with languages such as Go, Python and PHP (potentially C++), as opposed to the serverless environment that Firebase Functions run in. In a serverless environment, code does not run continuously in the background; it must be triggered, for example via an HTTP request, Pub/Sub or a cron job.
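If you do pursue the Node.js route from the question, one workable pattern inside a JavaScript/TypeScript Cloud Function is to bundle a pre-compiled C++ binary with the deployment package and spawn it per request. This is only a rough sketch under assumptions: the binary name (mycppworker), its location, and its stdout contract are all made up, and the binary must be compiled for the Linux environment Cloud Functions runs on.

import * as functions from "firebase-functions";
import { execFile } from "child_process";
import * as path from "path";

// Hypothetical: a pre-compiled C++ binary deployed alongside the function code.
const binary = path.join(__dirname, "bin", "mycppworker");

export const runCpp = functions.https.onRequest((req, res) => {
  // Pass the request input to the binary and return whatever it prints.
  execFile(binary, [String(req.query.input ?? "")], (err, stdout, stderr) => {
    if (err) {
      console.error(stderr);
      res.status(500).send("C++ worker failed");
      return;
    }
    res.send(stdout);
  });
});

An FFI-based approach (for example with a library such as ffi-napi) would work the same way in principle; the main constraint is that anything native has to be built for the runtime's OS and architecture, not your local machine.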

Related

Where can I find the source code for AWS Lambda's dotnetcore3.1 runtime?

AWS publishes a lot of their source code on GitHub, but so far I haven't been able to find the source for the dotnetcore3.1 Lambda Runtime.
I expect this source to be a console application responsible for startup and IoC (using the Lambda configuration to determine the handler class and function, using reflection to instantiate the handler, as well as the serializer specified in the Lambda's assembly attribute). So far I have found a number of repositories containing things like NuGet libraries and dotnet CLI tooling, but I haven't located the runtime itself.
Is the AWS Lambda dotnetcore3.1 runtime source publicly available? Can you point me to the repo and .csproj? Thanks!
I am not 100% sure, but there is probably nothing like a dedicated ".NET Core Runtime" that is publicly available.
Let's explore how Lambda works to explain why I think that:
First there is an EC2 instance with a host operating system. That's the bare metal your resources are coming from (vCPU, RAM, disk, etc.).
Second, there is a thin management layer for microVMs. For AWS Lambda this is a project called Firecracker.
The microVMs that are started use a micro kernel. In the case of AWS Lambda it is the OSv micro kernel. The micro kernel already has some support for runtimes:
OSv supports many managed language runtimes including unmodified JVM, Python 2 and 3, Node.JS, Ruby, Erlang as well as languages compiling directly to native machine code like Golang and Rust.
.NET Core is compiled, so I think it falls into the same category: it is a "native", self-contained binary that is simply started.
You mentioned reflection etc. to run the binary. I think it is not that complicated. This is where the last puzzle piece comes into play: environment variables.
There are a bunch of environment variables that are set by the "runtime". You can find the full list in the AWS documentation: Defined runtime environment variables.
Every Lambda has an environment variable called _HANDLER which points to the handler. In your case this would be the binary. This allows the runtime to simply "run" your handler.
When the handler runs, the underlying AWS SDK starts some kind of web server/socket (I think it depends on the framework/language) which exposes everything the "standard" runtime needs to communicate with your code.
So as far as I can tell there is no dedicated .NET Core runtime. It is pretty generic all the way down, which simplifies development for the folks at AWS, I guess.
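To make the "generic all the way down" point concrete, here is a very simplified sketch of what a runtime's main loop looks like against the Lambda Runtime API. The _HANDLER and AWS_LAMBDA_RUNTIME_API environment variables and the two HTTP endpoints are documented by AWS; the handler-loading helper is an assumption for illustration.

// Very simplified custom-runtime loop (Node 18+, global fetch).
// AWS_LAMBDA_RUNTIME_API and _HANDLER are set by the Lambda environment.
const api = process.env.AWS_LAMBDA_RUNTIME_API; // e.g. "127.0.0.1:9001"
const handlerName = process.env._HANDLER;       // e.g. "index.handler"

// Assumed helper: turn "file.export" into a callable function.
async function loadHandler(name: string): Promise<(event: unknown) => Promise<unknown>> {
  const [file, fn] = name.split(".");
  const mod = await import(`./${file}.js`);
  return mod[fn];
}

async function main() {
  const handler = await loadHandler(handlerName!);
  while (true) {
    // Long-poll for the next invocation.
    const next = await fetch(`http://${api}/2018-06-01/runtime/invocation/next`);
    const requestId = next.headers.get("Lambda-Runtime-Aws-Request-Id")!;
    const event = await next.json();

    // Invoke the handler and post the result back.
    const result = await handler(event);
    await fetch(`http://${api}/2018-06-01/runtime/invocation/${requestId}/response`, {
      method: "POST",
      body: JSON.stringify(result),
    });
  }
}

main().catch(console.error);

A compiled .NET binary pointed to by _HANDLER only needs to implement this same loop, which appears to be what Amazon.Lambda.RuntimeSupport provides (see the answer below).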
The source code for Lambda's dotnet runtime(s) is within the aws/aws-lambda-dotnet repository. It doesn't appear to have a dedicated netcoreapp3.1 library, or a preserved branch, but presumably the 3.1 runtime is contained within this repository's history.
The Amazon.Lambda.RuntimeSupport console application appears to be the entrypoint, which in turn instantiates and invokes the published Handler.

What is the proper way to build many Lambda functions and update them later?

I want to make a bot that makes other bots on the Telegram platform. I want to use AWS infrastructure; their Lambda functions look like a perfect fit, since you pay for them only when they are active. In my concept, each bot equals one Lambda function, and they all share the same codebase.
Initially I thought I would create each new Lambda function programmatically, but I think this will bring problems later, such as needing to attach many services programmatically via the AWS SDK: API Gateway, DynamoDB. But the main problem is: how will I update the codebase for these 1000+ functions later? I think a bash script is a bad idea here.
So I moved on and found SAM (AWS Serverless Application Model) and CloudFormation, which should help me, I guess. But I can't understand the concept. I can make a stack with all the required resources, but how will I make new bots from this one stack? Or should I build a template and make new stacks for each new bot programmatically via the AWS SDK from this template?
Next, how do I update them later? For example, I want to update all bots that have version 1.1 to version 1.2. How will I replace them? Should I make a new stack, or can I update the older ones? I don't see any options for that in the CloudFormation UI or any related methods in the AWS SDK.
Thanks
But the main problem is: how will I update the codebase for these 1000+ functions later?
You don't. You use a Lambda alias. This allows you to fully decouple your Lambda versions from your clients. It works because your clients (or API Gateway) invoke an alias of your function. The alias is fixed and does not change.
However, an alias is like a pointer: it can point to different versions of your Lambda function. Therefore, when you publish a new Lambda version you just point the alias to it. This is fully transparent to your clients, and the alias they use does not require any change.
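As a rough illustration of that publish-then-repoint flow with the AWS SDK for JavaScript v3 (the function name "my-bot" and alias name "live" are placeholders):

import { LambdaClient, PublishVersionCommand, UpdateAliasCommand } from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({});

async function releaseNewVersion(functionName: string) {
  // Snapshot the current $LATEST code and configuration as an immutable version.
  const published = await lambda.send(
    new PublishVersionCommand({ FunctionName: functionName })
  );

  // Repoint the alias; clients keep invoking "my-bot:live" and never change.
  await lambda.send(
    new UpdateAliasCommand({
      FunctionName: functionName,
      Name: "live",
      FunctionVersion: published.Version,
    })
  );
}

releaseNewVersion("my-bot");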
I agree with @Marcin. It would also be worth checking out the Serverless Framework. It seems like you are still experimenting, so most likely you are deploying using bash scripts with AWS SDK/SAM commands. This is fine, but once you start getting the gist of what your architecture looks like, I think you will appreciate what Serverless can offer. You can deploy and tear down CloudFormation stacks in a matter of seconds. You can also use serverless-offline to run a local build of your AWS Lambda architecture on your own machine.
All this has saved me hours of grunt work.

Can we use AWS-CDK within another application?

The documentation on AWS-CDK has examples of setting it up as a standalone application with support in multiple languages.
I have the following questions regarding the same:
Is it possible to use it within a separate app (written in .NET Core or Angular) as a library?
By the above I mean being able to instantiate the construct classes within my app's services and create stacks in my AWS account.
If yes, how does it affect the deployment process? Will invoking the synth() function generate the CloudFormation templates as expected?
Apologies if my question is vague. I am just getting started with this and am willing to provide necessary details if needed.
I appreciate any help in this regard. Thank you.
I've tried using CDK as a library, but I had a few issues and ended up calling it from another app via a CLI call.
I was using TypeScript, and basically what I did was call the synth method on the app construct:
import * as cdk from '@aws-cdk/core';
const app = new cdk.App();
... // do something, e.g. instantiate your stacks against `app`
const cf = app.synth(); // here you get the cloud assembly
cf.something(); // placeholder: you can inspect/manipulate the results here
One issue I found was that errors during synth were not properly bubbled up.
I couldn't find a way to deal with the assets either...
In summary, I didn't explore it much further than that, but I think CDK might need more development before all its features are usable when imported as a library.
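For completeness, the "CLI call from another app" approach mentioned at the start is essentially just shelling out to the cdk binary; a minimal sketch (the app directory and stack name are placeholders) could be:

import { execFileSync } from "child_process";

// Run "npx cdk synth <stack>" in the CDK app's directory and return the
// generated CloudFormation template as text. Paths/names are placeholders.
function synthStack(appDir: string, stackName: string): string {
  return execFileSync("npx", ["cdk", "synth", stackName], {
    cwd: appDir,
    encoding: "utf8",
  });
}

const template = synthStack("./my-cdk-app", "MyStack");
console.log(template);

The same pattern works for cdk deploy; the downside is that you lose typed access to the cloud assembly and have to parse CLI output instead.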

Using Cloud Functions vs Cloud Run as a webhook for Dialogflow

I don't know much about web development and cloud computing. From what I've read, when using Cloud Functions as the webhook service for Dialogflow, you are limited to writing code in just one source file. I would like to create a really complex Dialogflow agent, so it would be handy to have an organized code structure to make development easier.
I've recently discovered Cloud run which seems like it can also handle webhook requests and makes it possible to develop a complex code structure.
I don't want to use Cloud Run just because it is inconvenient to write everything in one file, but on the other hand it would be strange to have a cloud function consisting of a single file with thousands of lines of code.
Is it possible to have multiple files in a single cloud function?
Is Cloud Run suitable for my problem (creating a complex Dialogflow agent)?
Is it possible to have multiple files in a single cloud function?
Yes. When you deploy to Google Cloud Functions you create a bundle with all your source files or have it pull from a source repository.
But Dialogflow only allows index.js and package.json in the Built-In Editor
For simplicity, the built-in code editor only allows you to edit those two files. But the built-in editor is mostly just meant for basic testing. If you're doing serious coding, you probably already have an environment you prefer to use to code and deploy that code.
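As a small illustration of the multi-file layout (the module and intent names here are made up), the deployed index file can simply import the rest of your code:

// index.ts - the deployed entry point; the fulfillment logic lives elsewhere.
import * as functions from "firebase-functions";
import { handleIntent } from "./intents/router"; // hypothetical module

export const dialogflowWebhook = functions.https.onRequest(async (req, res) => {
  // Dialogflow sends the matched intent name in the request body.
  const intentName = req.body?.queryResult?.intent?.displayName;
  const fulfillmentText = await handleIntent(intentName, req.body);
  res.json({ fulfillmentText });
});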
Is Cloud Run suitable?
Certainly. The biggest thing Cloud Run will get you is complete control over your runtime environment, since you're specifying the details of that environment in addition to the code.
The biggest downside, however, is that you also have to determine the details of that environment. Cloud Functions provides an HTTPS server without you having to worry about those details, as long as the rest of the environment is suitable.
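For comparison, on Cloud Run you provide that HTTP server yourself; a minimal Express sketch of the same webhook (routing and response text are illustrative only) might be:

import express from "express";

const app = express();
app.use(express.json());

// Dialogflow will POST the webhook request here.
app.post("/webhook", (req, res) => {
  const intentName = req.body?.queryResult?.intent?.displayName;
  res.json({ fulfillmentText: `You triggered: ${intentName}` });
});

// Cloud Run provides the port via the PORT environment variable.
const port = Number(process.env.PORT) || 8080;
app.listen(port, () => console.log(`Listening on ${port}`));

You would also need a Dockerfile (or a buildpack) describing the container, which is exactly the extra environment detail mentioned above.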
What other options do I have?
Anywhere you want! Dialogflow only requires that your webhook:
Be at a public address (i.e., one that Google can resolve and reach)
Run an HTTPS server at that address with a non-self-signed certificate
During testing, it is common to run it on your own machine via a tunnel such as ngrok, but this isn't a good idea in production. If you're already familiar with running an HTTPS server in another environment, and you wish to continue using that environment, you should be fine.

How to build a Perl web-service infrastructure

I have many scripts that I use to manage a multi-server infrastructure. Some of these scripts require root access, some require access to databases, and most of them are Perl based. I would like to convert all these scripts into very simple web services that can be executed from different applications. These web services would take regular request inputs and would output JSON as a result of being executed. I'm thinking that I should set up a simple Perl dispatcher, call it action, that would do logging, check credentials, and execute these simple scripts. Something like:
http://host/action/update-dns?server=www.google.com&ip=192.168.1.1
This would invoke the action Perl driver, which in turn would call the update-dns script with the appropriate parameters (perhaps cleaned in some way) and return an appropriate JSON response. I would like this infrastructure to have the following attributes:
All scripts reside in a single place. If a new script is dropped there, then it automatically becomes callable.
All scripts need to have some form of manifest that describes who can call them (belonging to some LDAP group), what parameters they take, what the response is, etc., so that they are self-explanatory.
All scripts are logged in terms of who did what and what the response was.
It would be great if there was a command-line way to do something like # action update-dns --server=www.google.com --ip=192.168.1.1
Do I have to build this from scratch, or is there something existing I can piggyback on?
You might want to check out my framework Sub::Spec. The documentation is still sparse, but I'm already using it for several projects, including for my other modules in CPAN.
The idea is you write your code in functions, decorate/add enough metadata to these functions (including a summary, specification of arguments, etc.), and there will be toolchains to take care of what you need, e.g. running your functions on the command line (using Sub::Spec::CmdLine) and over HTTP (using Sub::Spec::HTTP::Server and Sub::Spec::HTTP::Client).
There is a sample project in its infancy. Also take a look at http://gudangapi.com/. For example, the function GudangAPI::API::finance::currency::id::bca::get_bca_exchange_rate() will be accessible as an API function via HTTP API.
Contact me if you are interested in deploying something like this.