How can I add software prerequisites to Cloud Foundry nodejs?

I am using Cloud Foundry's nodejs profile, and my package.json requires chartjs-node-canvas. That package uses node-canvas, and node-canvas is based on Cairo. The node-canvas site says I have to add the cairo-devel package on Linux (via apt-get) in order for canvas to install.
Is it possible to add software to the OS image running on cloud foundry? If so, how?

You can do that by vendoring the dependencies. When you vendor them, you build locally in an Ubuntu Bionic Linux container or VM. Node will build everything that's required, and you will no longer need the cairo-devel package (it's only needed at build time).
The process to vendor dependencies is documented here.
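For illustration, a minimal sketch of that vendoring flow, assuming Docker is available locally; the build dependencies follow the node-canvas docs, and the Node installation step will vary with your target Node version:
docker run --rm -it -v "$PWD":/app -w /app ubuntu:18.04 bash
# inside the container: the toolchain and Cairo headers are needed at build time only
apt-get update && apt-get install -y curl build-essential python libcairo2-dev libpango1.0-dev libjpeg-dev libgif-dev librsvg2-dev
curl -fsSL https://deb.nodesource.com/setup_16.x | bash - && apt-get install -y nodejs
npm install    # compiles the canvas binding into node_modules (the vendored dependencies)
exit
# back on the host: node_modules is uploaded with the app, so nothing needs building at staging time
cf push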
The other option is to use the Apt Buildpack, which is described in this SO post. It can be used to install arbitrary apt packages.
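As a hedged sketch of that approach: the apt-buildpack reads an apt.yml file at the app root, and with a v7+ cf CLI you can chain it in front of the Node.js buildpack. The package name here is the Ubuntu equivalent of cairo-devel:
cat > apt.yml <<'EOF'
---
packages:
- libcairo2-dev
EOF
cf push my-app -b https://github.com/cloudfoundry/apt-buildpack -b nodejs_buildpack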

Related

dbt and google cloud composer PyPI dependency issues

I am currently running Google Cloud Composer with Composer version 2.0.9 and Airflow version 2.1.4. I am trying to install the most recent version of dbt (1.0.4 for core and 1.0.0 for the BigQuery plugin). Because Cloud Composer images have specific packages pre-installed, I am getting conflicting PyPI dependency issues. When I try to fix one dependency, another issue occurs. Does anyone know the specific set of packages that would resolve this issue? I have read the following posts by the community, but I wanted to know if anyone has a solution using just Composer:
How to run DBT in airflow without copying our repo
How to set up dbt with Google Cloud Composer?
I was able to reproduce the behaviour you are seeing. Below are the dependency conflicts I saw in the Cloud Build logs. These conflicts are occurring between the dbt-core requirements and the pre-installed package requirements in Composer.
Pre-installed package requirements:
hologram 0.0.14 has requirement jsonschema<3.2,>=3.0, but you have jsonschema 3.2.0. ##=> can be installed manually
flask 1.1.4 has requirement click<8.0,>=5.1, but you have click 8.1.2.
apache-airflow 2.1.4+composer has requirement markupsafe<2.0,>=1.1.1, but you have markupsafe 2.0.1.
looker-sdk 22.4.0 has requirement typing-extensions>=4.1.1, but you have typing-extensions 3.10.0.2.
dbt-core requirements:
hologram 0.0.14 has requirement jsonschema<3.2,>=3.0, but you have jsonschema 3.2.0. ##=> can be installed manually
dbt-core 1.0.4 has requirement click<9,>=8, but you have click 7.1.2.
dbt-core 1.0.4 has requirement MarkupSafe==2.0.1, but you have markupsafe 1.1.1.
dbt-core 1.0.4 has requirement typing-extensions<3.11,>=3.7.4, but you have typing-extensions 4.1.1.
I tried downgrading the pre-installed packages, but subsequent package installations fail, and downgrading is not recommended anyway.
Therefore, I would suggest using an external solution as stated in the thread you have linked. Quoting the workarounds given in @Ryan Yuan's answer here:
Using external services to run dbt jobs, e.g. Cloud Run.
Using Composer's KubernetesPodOperator (updated Composer 2 link). My colleague has put up a nice article on dbt Discourse here going through the setup process.
Ignoring Composer's dependency conflicts by setting Composer's environment variable IGNORE_PYPI_DEPENDENCY_CONFLICTS to True (see the command sketch after this list).
However, I don't recommend this as it may cause potential issues.
Creating a Python virtual environment in Composer and installing the dbt packages.
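A hedged sketch of the environment-variable option via gcloud; the environment name and location are illustrative, and this assumes the variable is set like any other Composer environment variable:
gcloud composer environments update my-composer-env \
    --location us-central1 \
    --update-env-variables=IGNORE_PYPI_DEPENDENCY_CONFLICTS=True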
As mentioned by @Kabilan Mohanraj, the current version of dbt (1.0.4) and a more recent version of Composer have dependency issues (Composer version 2.0.9 and Airflow version 2.1.4). Therefore an alternative solution is needed. In my case, I experimented and searched for a solution from other people in the community, and found one person using a certain combination of Composer and dbt versions that had only minimal dependency issues. However, as mentioned by @Kabilan Mohanraj, Google does not recommend downgrading pre-installed packages, so this would not be a viable solution for anything in production.
Create the Composer environment through gcloud to use an older version that is not available via the Composer UI:
gcloud composer environments create my_airflow_dbt_example \
    --location us-central1 \
    --image-version composer-1.17.9-airflow-2.1.4
requirements.txt:
dbt-bigquery==0.21.0
jsonschema==3.1.1
packaging==20.9
For this specific Composer version, you are downgrading jsonschema from 3.2.0 to 3.1.1 and packaging from 21.3 to 20.9.
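You can then install the pins into the environment; a sketch, assuming the lines above are saved in a local requirements.txt:
gcloud composer environments update my_airflow_dbt_example \
    --location us-central1 \
    --update-pypi-packages-from-file requirements.txt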

Node.JS native addons on LINUX [duplicate]

I'm using AWS Lambda, which involves creating an archive of my node.js script, including the node_modules folder and uploading that to their infrastructure to run.
This works fine, except when it comes to node modules with native bindings (using node-gyp). Because the binding was compiled and the project archived on my local computer (OS X), it is not compatible with AWS's (Amazon Linux) servers.
How can I cross-compile/install a node module (specifically, node-sqlite3) so when I upload it to another server arch it runs?
While not really a solution to your problem, a very easy workaround could be to simply compile the native addons on a Linux machine.
For your particular situation, I would use Vagrant. Vagrant can create virtual machines and configure them within seconds.
Find an OS image that resembles Amazon's Linux distro (Fedora, CentOS, others that use yum as package manager - see Wiki)
Use a simple configuration script that, when run by Vagrant on machine startup, will run npm install (optionally it might also remove the node_modules folder beforehand to ensure a clean installation); see the sketch after these steps
For extra comfort, the script can also create the zip file for deployment
Once the installation finishes, the script will shutdown the VM to avoid unnecessary consumption of system resources
Deploy!
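A minimal sketch of such a provisioning script; all file names are hypothetical, and Vagrant would run this inside the VM:
rm -rf node_modules                  # ensure a clean installation
npm install --production             # compiles native bindings against the Linux libraries
zip -r deploy.zip . -x ".vagrant/*"  # optional: package the archive for deployment
sudo shutdown -h now                 # stop the VM to free system resources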
It might require some tuning if the linked libraries are not at the same place on the target machine but generally this seems to me like the best and quickest solution.
While installing the app using Vagrant might be sufficient in some cases, I have found it necessary to build the app on a Linux environment that is as close to Lambda's Amazon Linux AMI as possible.
You can read the original answer here: https://stackoverflow.com/a/34019739/303184
Steps to make it work:
Spawn new EC2 instance. Make sure it is based on exactly the same image as your AWS Lambda runtime. You can review Lambda env details here: http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html. In our case, it was Amazon Linux AMI called amzn-ami-hvm-2015.03.0.x86_64-gp2.
Install nvm and use it to install the same version of Node.js as on the AWS Lambda. At the time of writing this, it was v0.10.36. You can refer to http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html again to find out.
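A sketch of that step; the installer URL and version come from the nvm README, so check for the current one:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.nvm/nvm.sh
nvm install 0.10.36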
You will probably need to install git and the g++ compiler on the EC2 instance. You can do this by running:
sudo yum install git gcc-c++
Finally, clone your app to your new EC2 instance and install your app's dependencies:
nvm use 0.10.36
npm install --production
You can then easily download the node_modules using scp or such.
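For example (the key file, host name, and paths are illustrative):
scp -r -i your-key.pem ec2-user@your-ec2-host:/home/ec2-user/your-app/node_modules ./node_modules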
Along the same lines as Robert's answer: when I had to work in a different OS on my Mac, I used VM software like Oracle's free virtualizer VirtualBox to get Linux running on my Mac at no cost. Or sign up for a new AWS account; you get a micro instance free for a year. Use that to get your Linux box and do whatever you need there.
AWS has a page describing how to deal with native NPM modules: https://aws.amazon.com/blogs/compute/nodejs-packages-in-lambda/

Blender on IBM Cloud (Cloud Foundry)

I'm currently developing a web application (Django 2.0).
My app will be deployed on IBM Cloud (Cloud Foundry) using the Python buildpack.
One of my requirements is to install Blender.
Everything else works fine except the Blender installation.
What I've tried so far was:
I tried accessing my app over an SSH connection, but of course I don't have root access to run apt-get install blender!
I also tried to include blender in the packages.json file and push that file using cf push my-app.
But nothing worked for me.
A shorter way to put the question: what is the standard approach for installing packages in Cloud Foundry apps, the way we would use apt-get install on Ubuntu/Debian?
Please correct me if I did anything wrong, or give me some pointers for solving this problem!
I see a couple of options for you to install packages if they cannot be installed using the regular requirements file (which is the preferred way):
Download the relevant libraries and put them in subfolders of the app before pushing it. The libraries will be uploaded. That is how I would do it (see the .profile sketch after these options).
Once you have an SSH connection, use secure copy (scp) to upload the files and place them in the subfolders where they are expected.
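If you vendor binaries this way, one hedged way to make them visible at runtime is a .profile script at the app root, which Cloud Foundry sources before starting the app; the vendor paths here are illustrative:
cat > .profile <<'EOF'
# sourced by Cloud Foundry before the app starts; point it at the vendored files
export PATH="$HOME/vendor/blender:$PATH"
export LD_LIBRARY_PATH="$HOME/vendor/lib:$LD_LIBRARY_PATH"
EOF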
Regarding Blender, the question is what you need in addition to having the code copied over. Does it need a running daemon? Are there more dependencies? You would need to share more information about your specific app to answer that. Maybe packaging everything as one or more containers and running them on Kubernetes, or a combination of Cloud Foundry and Kubernetes, is a better way.

How to choose the right version of Google Cloud library components in python?

I have a specific version of google-cloud-core on my server, and I don't know how to choose versions of the other libraries that are compatible with my google-cloud-core version.
I can't just move to the latest versions, because old programs cannot run on the new versions, for example the BigQuery library.
My specific need is to know how to tell which version of Google Cloud Storage I should choose to match the 0.26.0 core version.
Are there repositories where we can find packages grouped by google-cloud-core version?
In my case version 1.6 of google-cloud-storage works, but I found it only by downgrading and trying again!
Best regards
What you can do as a workaround is create a virtual environment, install a specific library - like google-cloud-storage - and check the dependencies installed with that library version. I ran a quick test and installed a few versions of google-cloud-storage. For version 1.3.0, the google-cloud-core 0.26.0 dependency was installed.
You can do so by following these steps:
virtualenv env-name
source env-name/bin/activate
pip freeze   # check there is nothing there yet
pip install google-cloud-storage==1.3.0
pip freeze   # again, to see what was pulled in
Once finished you’ll see google-cloud-core 0.26.0 was installed.
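A hedged alternative that avoids installing anything: download just the wheel and read its declared dependencies. The wheel file name pattern below is an assumption about how the release is packaged:
pip download google-cloud-storage==1.3.0 --no-deps -d /tmp/pkg
# list the Requires-Dist lines from the wheel's metadata
unzip -p /tmp/pkg/google_cloud_storage-1.3.0-*.whl '*.dist-info/METADATA' | grep Requires-Dist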

Use package manager in a Cloud Foundry instance

Can I use apt-get or other package managers in Cloud Foundry buildpacks or in the .profile scripts that come with apps, and if I can, how do I do it? I expected it to work the same way as in a Dockerfile, but in my case it doesn't work, with or without sudo.
Can I use apt-get or other package managers in Cloud Foundry buildpacks or .profile scripts that come with apps; and if I can, how to do it?
No. Running apt-get or a package manager would typically require root access, and you do not get root access when the buildpack runs or when your application runs (this is a difference from Docker).
That said, you can do anything that doesn't require root access, so if you found a package manager that installed in the vcap user's home directory and didn't need root then you could use that.
It depends on what you're trying to install, but in some cases you can work around this by downloading the .deb or .rpm file and manually extracting the binaries. This typically works OK for things like shared libraries. Just download the precompiled binary that matches your stack (cflinuxfs2 == Ubuntu Trusty). For other things, you can build your own binaries from source. This is what the buildpacks do; see binary-builder.
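A hedged sketch of that extraction, with an illustrative package URL and version; match the .deb to your stack's Ubuntu release:
curl -LO http://archive.ubuntu.com/ubuntu/pool/main/c/cairo/libcairo2_1.14.6-1_amd64.deb
dpkg -x libcairo2_1.14.6-1_amd64.deb ./vendor   # dpkg -x unpacks a .deb without root
export LD_LIBRARY_PATH="$PWD/vendor/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH"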
Hope that helps!