How to run C++ files on Google Cloud Functions?

As far as I am aware, Google Cloud Functions only allows you to deploy Node.js or Python scripts.
Question: How would I be able to deploy a simple Hello_World.cpp on Google Cloud Functions? For example, writing a hello world HTTP function.
What are alternative methods of doing this? I want to use a serverless approach, since it's the cheapest option, which is why I'm going with Google Cloud Functions. Would I have to use Docker in order to run C++ files? I've been stuck on this for a while and any guidance or help would be appreciated.

You can compile your C++ function into a WebAssembly module using emscripten. Then you can call it from a small piece of Node.js glue code.
I built an example for you here:
https://github.com/ArthurSonzogni/gcloud-cpp-starter
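For illustration, the Node.js glue can look roughly like this (a minimal sketch: the add function, file names and emcc flags are assumptions, and flag names vary between emscripten versions):

// add.cpp (hypothetical): extern "C" int add(int a, int b) { return a + b; }
// Compiled with something like:
//   emcc add.cpp -o add.js -s EXPORTED_FUNCTIONS='["_add"]' -s EXPORTED_RUNTIME_METHODS='["cwrap"]'
// (with -s MODULARIZE=1, require() returns a factory function instead)
const Module = require('./add.js');

// Wrap the exported C function once the WebAssembly runtime has initialized.
const ready = new Promise((resolve) => {
  Module.onRuntimeInitialized = () => {
    resolve(Module.cwrap('add', 'number', ['number', 'number']));
  };
});

exports.helloWorld = async (req, res) => {
  const add = await ready;
  res.send(`Hello from C++: 2 + 3 = ${add(2, 3)}`);
};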

You can run C++ code via Node.js on Google Cloud Functions (tested with Node.js 10).
How to use C++ with N-API (node-addon-api): https://medium.com/@atulanand94/beginners-guide-to-writing-nodejs-addons-using-c-and-n-api-node-addon-api-9b3b718a9a7f
Use https://console.cloud.google.com/functions and click CREATE FUNCTION to upload a .zip, or run gcloud functions deploy --runtime nodejs10 --trigger-http.
The trick is that when you zip the files you need to remove the /build and /node_modules folders, then cd to the folder containing index.js and run 'zip your_name.zip -r *'.
P.S. When I use firebase deploy --only functions it errors because it doesn't know the addon.node file format (in fact it shouldn't read this file at all, because it needs to be recompiled), but I think that using the gcloud functions command line with a .gcloudignore listing /build and /node_modules will succeed: https://cloud.google.com/functions/docs/deploying/filesystem
HOW DOES IT WORK
I think that when you deploy Node.js source code to Cloud Functions it runs npm install, and your C++ code gets compiled too (as if npm run build were run automatically after npm install).
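As a rough sketch of what the JavaScript side can look like (the addon name and its hello() export are hypothetical, and the build path depends on your binding.gyp):

// index.js - loads the N-API addon that node-gyp compiles during npm install
// (npm triggers node-gyp rebuild automatically when a binding.gyp is present)
const addon = require('./build/Release/addon.node');

exports.helloWorld = (req, res) => {
  // hello() is a hypothetical function exported by the C++ addon
  res.send(addon.hello());
};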

You can't use C++ on Cloud Functions, period. You can only use Node.js 6.14, Node.js 8.11.1 (beta) and Python 3.7 (also beta).
If you wish to use C++ in GCP with a serverless approach, my best suggestion would be running your own custom runtime in App Engine. You would still need to configure some instance options, but you don't have to manage servers and so on.

You can only use the App Engine flexible environment (or, of course, a standard VM architecture such as Compute Engine). Extract from the docs (https://cloud.google.com/appengine/docs/flexible/):
Runtimes - The flexible environment includes native support for Java 8 (with no web-serving framework), Eclipse Jetty 9, Python 2.7 and Python 3.6, Node.js, Ruby, PHP, .NET core, and Go. Developers can customize these runtimes or provide their own runtime by supplying a custom Docker image or Dockerfile from the open source community.
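For a C++ service, such a custom runtime could be sketched like this (image tags and the server binary are placeholders; app.yaml would declare runtime: custom and env: flex, and the app must listen on port 8080):

# Dockerfile - build the C++ server, then copy it into a slim runtime image
FROM gcc:8 AS build
COPY . /src
# main.cpp is assumed to implement an HTTP server listening on port 8080
RUN g++ -O2 -o /server /src/main.cpp

FROM debian:buster-slim
COPY --from=build /server /server
EXPOSE 8080
CMD ["/server"]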
As an interesting side note, Google Serverless Containers will give you the chance to deploy your dockerized application but in a serverless flavour (in fact it's built on top of Google Cloud Functions technology). It's currently in Alpha stage.

Related

Run .sh script using Google Cloud SDK shell

I'm trying to automate deploying code to my 3 GCE Linux VMs. I read the article Scripting with gcloud: a beginner’s guide to automating GCP tasks, which shows how to make a script. I assume that means saving the code as a .sh file (it even has a shebang on top), but how do I run it? Do I type the script's file name into the Google Cloud SDK Shell? I tried that and it does not seem to work. Can someone help me? I would really appreciate it.
Here is an image of my Google Cloud shell where I am trying to use the script files.
You're able to install the Google Cloud SDK on a variety of operating systems, such as Linux, macOS and Windows. After that, you'll be able to use the same commands like gcloud, gsutil and bq. Meanwhile, scripting relies on the command-line interpreter: you can use bash on Linux and macOS, but on Windows you should use cmd or PowerShell. You can run the examples provided in the article you mentioned, and in the documentation Scripting gcloud CLI commands with bash, on Linux and macOS, so the error messages you got were expected. You can't run .sh scripts natively on Windows, as @Pievis mentioned in the comment section.
As a possible workaround you can install Windows Subsystem for Linux (WSL) for Windows 10 (usually you can choose between WSL2 and WSL1, but it depends on build version of your Windows 10) to get some interoperability between Windows and Linux.
If you need to transfer files to your VM instances, please follow the documentation Transferring files to VMs.
If you are interested in automation with GCP, please have a look at the documentation Infrastructure as code to "automate repeatable tasks like provisioning, configuration, and deployments".
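For reference, a script like the one in the article might look like the following (a minimal sketch: the VM names, zone and restart command are placeholders; run it with bash on Linux/macOS, or under WSL on Windows):

#!/bin/bash
# deploy.sh - copy a build to three GCE VMs and restart the service on each
for vm in vm-1 vm-2 vm-3; do
  gcloud compute scp --recurse ./app "$vm":~/app --zone=us-central1-a
  gcloud compute ssh "$vm" --zone=us-central1-a --command="sudo systemctl restart myapp"
done
# Run with: chmod +x deploy.sh && ./deploy.sh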

How to set up development environment for AWS Lambda?

I wish to implement the answer that is outlined here: https://stackoverflow.com/a/50397276/1980516
However, I find that I keep running into Unable to import module 'index' at exactly this line:
const _archiver = require('archiver');
So, I'm guessing that I cannot do this via the online console. Instead, I probably have to create a deployment package.
How do I go about this? I apparently need the AWS CLI, Node.js and npm, and I'm new to all of them. In the Amazon docs I can't find a practical list of how to set up my local development environment.
What tools do I install, which versions and in what order exactly?
Edit: Windows :)
My guess is that you need to npm install archiver and package the node_modules dependencies along with your index.js (the handler file for your Lambda entry point). You can then zip it up and deploy/upload it to your Lambda.
Also have a look at the https://github.com/serverless/serverless framework, which makes these kinds of things easier.
Have a look at AWS SAM, the Serverless Application Model. It provides a local development setup for things like Lambda functions and API Gateway endpoints, and a way to easily package and deploy things. The exact steps you need are:
1. Create an AWS account and an IAM user with admin privileges
2. Install Node.js
3. Install the AWS CLI (and configure it with aws configure)
4. Install the SAM CLI and Docker (the local instances run in Docker containers)
5. Initialize a new SAM project with sam init --runtime nodejs (or another runtime if needed), as sketched below
6. Run through the quickstart to get an idea of how to define a SAM template, build a SAM app, and deploy.
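The command sequence looks roughly like this (a sketch; runtime identifiers and flags change over time, so check sam init --help):

sam init --runtime nodejs    # scaffold a new project (creates ./sam-app by default)
cd sam-app
sam build                    # resolve dependencies into .aws-sam/build
sam local invoke             # run the function once in a local Docker container
sam local start-api          # serve the API Gateway endpoints locally
sam deploy --guided          # package and deploy, prompting for configuration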
If you don't want to use the framework or a local development environment and just want to create the source bundle, there are docs. The gist is (a concrete sketch follows the list):
1. Install Node.js (e.g. using Homebrew or an installer)
2. npm install the modules you need
3. Zip up your code including the node_modules folder
4. Upload the zip via the AWS Console
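Concretely, for the archiver example above, the bundling might look like this (a sketch; the function name is a placeholder, and on Windows you can run these commands under WSL or use a zip GUI):

npm init -y
npm install archiver
zip -r function.zip index.js node_modules
# upload function.zip via the console, or from the CLI:
aws lambda update-function-code --function-name my-function --zip-file fileb://function.zip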

Google Cloud Build

Hi, I am new to Google Cloud Platform. I want to build a Java application using Google Cloud Build, without Docker containers. I also want the built application to be tested and the artifact to be saved in a bucket. Can anyone help me with this?
Cloud Build is conceptually a pipeline mechanism that takes some set of files as input (commonly in some source repo) and applies a number of processing steps to the files including steps that produce output: file(s) | step-1 | step-2 | ... | step-n.
Most of the examples show Cloud Build producing Docker images but this underplays all the many things it can do.
Importantly, each of the processors (steps) must be a Docker container, but the input and output need not be Docker images.
You can use the javac, mvn or gradle steps to compile your code and then use the gsutil step to copy the war or jar to Google Cloud Storage:
https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/javac
https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/mvn
https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/gradle
https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/gsutil
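Put together, a cloudbuild.yaml along these lines would build and test the jar and then archive it (a sketch; the artifact path and bucket name are placeholders):

steps:
# mvn package compiles the code and runs the unit tests
- name: 'gcr.io/cloud-builders/mvn'
  args: ['package']
# copy the resulting artifact to a Cloud Storage bucket
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', 'target/my-app.jar', 'gs://my-artifact-bucket/']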
Since you mentioned that you want to go without Docker containers, I assume you want to deploy your application as something other than a Docker image. You can deploy your app to the App Engine standard environment. For how to deploy to App Engine, you can refer to this documentation: https://cloud.google.com/build/docs/deploying-builds/deploy-appengine
To run the application on App Engine, you create an app.yaml in your project, then put these lines inside it:
runtime: java11
entrypoint: java -Xmx64m -jar {your application artifact in jar file}

Import setup module error while deploying to app engine via google cloud sdk

I am writing after a lot of searching and trial and error with no luck.
I am trying to deploy a service on App Engine.
You might be aware that deploying on App Engine is usually practiced as a two-step process:
1. Deploy on the local dev app server
2. If step 1 succeeds, deploy to the cloud
My problems are with step 1, when I include third-party Python libraries such as numpy, sklearn, gcloud, etc.
I am trying to deploy a service on the local dev app server. When I import numpy or any other third-party library in my main.py script, it throws an error saying it is unable to find the module.
I am using the Cloud SDK and have two Python distributions: the default Python 2.7, and Anaconda with Python 2.7. When I change the path to look for the modules in the Anaconda distribution, it fails to find the module ‘setup’, which is required by the Cloud SDK.
Is there a way to install the Cloud SDK for the Anaconda distribution?
Any help/pointers will be much appreciated!
When using the App Engine Python standard environment, you can install pure Python third-party libs using pip by vendoring them, as explained here.
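In short, the vendoring setup looks like this (a sketch of the documented approach; lib is the conventional directory name, populated with pip install -t lib/ <package>):

# appengine_config.py, at the root of the app
from google.appengine.ext import vendor

# Make the bundled third-party packages in ./lib importable
vendor.add('lib')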
There are also a number of libraries included in the python27 runtime which can be requested using the libraries directive in your app.yaml, as explained here.
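For example, numpy is on that list, so it can be requested directly in app.yaml rather than vendored:

libraries:
- name: numpy
  version: latest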
If there's a lib which is not pure Python (i.e. it uses C extensions) that you want to use in your project, and it's not part of this list, then your only option is to use a flexible environment VM. If you want to use Anaconda, you should consider customizing the runtime for your flexible VM.

AWS Lambda: How to use tools that must be installed first in Linux?

I understand that AWS Lambda runs on the application layer of an isolated environment.
In many situations, functions need to use third-party tools that must first be installed on the Linux machine. For example, a media processing function uses exiftool to extract metadata from images, so I install exiftool first.
Now I want to migrate the media processing code into AWS Lambda. My question is, how can I use those tools that I would normally have to install on Linux first? My code is written in Java, and exiftool is necessary.
To expand on Daniel's answer, if you wanted to bundle exiftool, you would follow steps 1 and 2 for Unix/Linux platforms from the official install instructions. You would then include exiftool and its lib directory in your function's zip file. To run exiftool you would do something like:
const exec = require('child_process').exec;

exports.handler = (event, context, callback) => {
  // './exiftool' gave me permission denied errors
  exec('perl exiftool -ver', (error, stdout, stderr) => {
    if (error) {
      callback(`error: ${error}`);
      return;
    }
    callback(null, `stderr: ${stderr} \n stdout: ${stdout}`);
  });
};
Everything your Lambda function executes must be included in the deployment package you upload.
That means if you want to run Java code, you can reference other Java libraries. (Likewise, if you want to run Node.js code, you can reference other Node libraries.)
Regardless of the tools you use, the resulting .zip file must have the following structure (source):
- All compiled class files and resource files at the root level.
- All required jars to run the code in the /lib directory.
Or you can upload a .jar file.
exiftool, on the other hand, is a Perl command-line program. I suspect that on your local machine, you shell out from your Java code and run it.
You cannot do that in AWS Lambda. You need to find a Java package that extracts EXIF information (I am sure there are plenty to choose from) and include that in your deployment package. You cannot install software packages on Lambda.
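For example, with the open-source metadata-extractor library (com.drewnoakes:metadata-extractor) bundled in /lib, the extraction could be sketched like this (the class and method here are hypothetical wrappers, not part of any AWS API):

import com.drew.imaging.ImageMetadataReader;
import com.drew.metadata.Directory;
import com.drew.metadata.Metadata;
import com.drew.metadata.Tag;
import java.io.File;

public class ExifReader {
    // Reads every metadata directory/tag from an image, e.g. one saved to /tmp
    public String extract(File image) throws Exception {
        StringBuilder out = new StringBuilder();
        Metadata metadata = ImageMetadataReader.readMetadata(image);
        for (Directory directory : metadata.getDirectories()) {
            for (Tag tag : directory.getTags()) {
                out.append(tag.toString()).append('\n');
            }
        }
        return out.toString();
    }
}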
https://aws.amazon.com/lambda/faqs/
Q: What languages does AWS Lambda support?
AWS Lambda supports code written in Node.js (JavaScript), Python, and Java (Java 8 compatible). Your code can include existing libraries, even native ones. Please read our documentation on using Node.js, Python and Java.
So basically you can call out to native processes if they are pre-installed but only from JavaScript and Java as the parent process.
To get a rough idea of what is available, have a look at this list of installed packages:
https://gist.github.com/royingantaginting/4499668
This list won't be 100% accurate; for that you would need to look directly at the AMI image (ami-e7527ed7).
exiftool doesn't appear to be installed by default. I doubt the account running the Lambda function would have enough rights to install anything globally, but you could always bundle exiftool with your Node or Java function.
You may also want to have a look at lambdash (https://github.com/alestic/lambdash), which allows you to run commands from your local command line on a remote Lambda instance.
This can now be done using AWS Lambda Layers.
An example of how to prepare a layer for exiftool specifically can be found here:
https://gist.github.com/hughevans/6b8c57839b8194ba910428de4375794a
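The general shape of the layer build is (a sketch: the exiftool version and layer name are placeholders; files placed under bin/ in a layer end up in /opt/bin, which is on the function's PATH):

mkdir -p layer/bin
# copy the exiftool script plus the Perl lib directory it expects next to it
cp Image-ExifTool-12.x/exiftool layer/bin/
cp -r Image-ExifTool-12.x/lib layer/bin/
(cd layer && zip -r ../exiftool-layer.zip .)
aws lambda publish-layer-version --layer-name exiftool --zip-file fileb://exiftool-layer.zip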