NodeJS - Compile shared cpp libraries in GCP Cloud Functions - c++

In general
When you install dependencies with npm install (or yarn) locally, any native (C++) libraries inside are automatically compiled.
However, after deploying to Cloud Functions you may get an error like:
Error: *.so: cannot open shared object file
So how can they be used in a Cloud function?
Specific example
I think this question applies to any C++ libraries bundled in Node dependencies, but I can show you my specific use case.
I'm trying to run TensorFlow.js in a Cloud Function, but the tfjs-node package includes a shared library, libtensorflow.so.
Installing locally with yarn automatically runs the node-gyp scripts and compiles everything needed.
However, deploying the GCP Cloud Function and calling it results in the error:
Error: libtensorflow.so: cannot open shared object file: No such file or directory
Full logs are hosted in this pastebin.
And the question here is again: how can the library be compiled so the error is resolved?
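One approach worth trying (a sketch, not a verified fix for every runtime): the Cloud Functions Node.js build runs an optional gcp-build script from package.json on Google's build image, so the native addon can be rebuilt there instead of relying on binaries fetched or compiled locally. The --build-addon-from-source flag comes from the tfjs-node project; treat both names as assumptions to check against your runtime and package versions:

```json
{
  "scripts": {
    "gcp-build": "npm rebuild @tensorflow/tfjs-node --build-addon-from-source"
  }
}
```

With this in place, the rebuild happens during deployment on GCP's build infrastructure rather than on your machine.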

Related

Amazon aws missing dlls (MSVCP140, VCRUNTIME140, CONCRT140)

I made a web server app that should reside on an Amazon AWS virtual machine running Windows 10.
When I try to execute the app, it says that it is missing DLLs (MSVCP140, VCRUNTIME140, CONCRT140).
In the app I use cpprestsdk, cryptocpp, and opencv.
I tried installing various Visual C++ redistributables, but that did not solve it.
I downloaded these DLLs separately and put them into the system32 folder, but again no result.
When I put the DLLs in the app folder, it gave me the error that the app was unable to start correctly (0xc000007b).
I also tried executing the app built with the /MT flag (which actually does not remove all the DLL dependencies), and the problem is still there.
Amazon instance type is t2.micro

AWS Lambda - NodeJS based function that requires Linux dependency packages

Trying to create a NodeJS-based function that runs compiled C++ code. The C++ code requires certain Linux packages, installed with 'yum', in order to run.
The NodeJS and C++ parts work great when running on Docker, because in the Dockerfile I can install all the Linux dependencies the C++ code requires using the 'yum' command before the app executes.
When running under Lambda, I do not know how to tell the running container to install these Linux packages so the C++ code can be loaded and run successfully by NodeJS.
I am trying to create a NodeJS-runtime-based function with all my code, and add a custom runtime layer that installs all the dependencies into the OS (not NodeJS dependencies — Linux OS dependencies) before the function loads.
I tried creating a custom runtime function but didn't understand how to connect everything, and how it should (if possible) connect to the function itself even after configuring the layer version on the function.
Does anyone know how such a thing can be achieved?
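Two options I'm aware of, both hedged: (a) ship the needed .so files in a layer, since Lambda adds /opt/lib to LD_LIBRARY_PATH at runtime; or (b) package the function as a container image, where the Dockerfile can run yum exactly as in the working Docker setup. A sketch of (b), where the base image tag and the example package (libpng) are placeholders for your actual dependencies:

```dockerfile
# Sketch: Lambda container image for a Node.js function with OS packages.
# The base image tag and example package name are placeholders.
FROM public.ecr.aws/lambda/nodejs:14
RUN yum install -y libpng && yum clean all
COPY . ${LAMBDA_TASK_ROOT}
CMD ["index.handler"]
```

The container-image route sidesteps the custom-runtime-layer wiring entirely, at the cost of managing an image in ECR.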

SkiaSharp on AWS Lambda

I have a .NET core 3.0 API deployed on AWS Lambda. I am using SkiaSharp dll. I get this error:
System.DllNotFoundException: Unable to load shared library 'libSkiaSharp' or one of its dependencies. liblibSkiaSharp: cannot open shared object file: No such file or directory.
I have tried the NuGet packages SkiaSharp, SkiaSharp.NativeAssets.Linux, and Avalonia.SkiaSharp.NativeAssets.Linux, and also tried SkiaSharp.NativeAssets.Linux.NoDependencies, but I'm still getting that error.
You need to add the SkiaSharp.NativeAssets.Linux package to your NuGet dependencies. I found that info in this AWS Forum post.
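For completeness, a minimal sketch of what that looks like in the project file; the version numbers below are placeholders, not recommendations:

```xml
<!-- In the .csproj: pull in the Linux native library alongside SkiaSharp.
     Version numbers are placeholders. -->
<ItemGroup>
  <PackageReference Include="SkiaSharp" Version="1.68.0" />
  <PackageReference Include="SkiaSharp.NativeAssets.Linux" Version="1.68.0" />
</ItemGroup>
```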

Update a single external repo in Bazel

I want to update a single external dependency before I do bazel build.
Is there a way to do this?
bazel sync refreshes all external dependencies; I am looking for something that refreshes just a single one.
bazel fetch does not seem to work for me, at least when I tried fetching a remote git repository.
TL;DR
delete repo root or marker file in $(bazel info output_base)/external
bazel shutdown
bazel build //target/related/with/the:repo
For later readers: this need generally arises when an external repo is not properly configured, or when you are the one configuring it and it is currently broken.
E.g., for CUDA in TensorFlow, a custom repository_rule is defined so that Bazel can collect the CUDA headers installed on the system and form a local external repository @local_config_cuda. However, if you update part of the library — say, upgrade cuDNN from 7.3.1 to 7.6.5 — the repository_rule will not simply, automatically, intelligently re-execute for you. In this case there is no way to use a checksum, or to specify a list of files (before you have collected that list of files), as input for the repository_rule.
Simply put, there is no way to force Bazel to re-execute a single repository_rule at the moment. See Working with External Dependencies - Layout.
To rebuild the local external repository, see Design Document: Invalidation of remote repositories for how to invalidate the local repo.
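The TL;DR above can be sketched as a small helper. It assumes the repo's files live under $(bazel info output_base)/external/&lt;repo&gt; with a sibling @&lt;repo&gt;.marker file; the function name and repo name are placeholders:

```shell
# Sketch: invalidate a single external repo by deleting its root and
# marker file under the output base, then rebuild after `bazel shutdown`.
invalidate_repo() {
  local output_base="$1" repo="$2"
  rm -rf "${output_base}/external/${repo}"          # repo root
  rm -f  "${output_base}/external/@${repo}.marker"  # marker file
}

# Usage (then run `bazel shutdown` and rebuild the dependent target):
#   invalidate_repo "$(bazel info output_base)" local_config_cuda
```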

How to deploy C++ project with all dependencies?

I am working on a C++ project that requires third-party libraries (boost, poco, etc.). I use cmake/make to install the package to an install location and deploy it to the production machine. However, when pushing the app to another machine, the shared libraries are not present on the target machine, causing ld errors. Is there a standard way to detect dependencies (i.e. shared libs) and deploy them to the install location alongside the application?
You can try linuxdeployqt. I use it for deploying my Qt application written in C++. Apart from Qt, it also puts other non-Qt dependencies (libraries) into a single directory. Just run it with the executable as the argument.
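For a tool-free sketch of the detection step: on Linux, ldd lists the shared libraries an executable resolves, and those paths can be copied next to the app. The function name is a placeholder, and this assumes a host where ldd is available; note the copied libraries will only be found at runtime if you also set an rpath (e.g. $ORIGIN-relative) or LD_LIBRARY_PATH:

```shell
# Sketch: copy the shared libraries an executable links against into a
# deploy directory. $1 = executable, $2 = destination directory.
collect_deps() {
  local exe="$1" dest="$2"
  mkdir -p "$dest"
  # ldd prints lines like "libfoo.so => /path/libfoo.so (0x...)";
  # keep only entries that resolved to an absolute path.
  ldd "$exe" | awk '$2 == "=>" && $3 ~ /^\// {print $3}' | while read -r lib; do
    cp "$lib" "$dest/"
  done
}

# Usage: collect_deps ./myapp ./deploy/lib
```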