Use a package manager in a Cloud Foundry instance

Can I use apt-get or other package managers in Cloud Foundry buildpacks or .profile scripts that come with apps; and if I can, how to do it? I expected it to work the same way as in a Dockerfile, but in my case it doesn't, with or without sudo.

Can I use apt-get or other package managers in Cloud Foundry buildpacks or .profile scripts that come with apps; and if I can, how to do it?
No. Running apt-get or another package manager would typically require root access, and you do not get root access when the buildpack runs or when your application runs (this is a difference from Docker).
That said, you can do anything that doesn't require root access, so if you found a package manager that installed in the vcap user's home directory and didn't need root then you could use that.
It depends on what you're trying to install, but in some cases you can work around this by downloading the .deb or .rpm file and manually extracting the binaries. This typically works OK for things like shared libraries. Just download the precompiled binary that matches your stack (cflinuxfs2 == Ubuntu Trusty). For other things, you can build your own binaries from source. This is what the buildpacks do; see binary-builder.
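For example, here is a minimal sketch of pulling a shared library out of a .deb from a .profile script, without root (the package URL and paths are placeholders, not a specific recommendation):

# .profile sketch - extract a .deb into the app's home directory; no root needed
# (URL and paths are placeholders; pick the build that matches your stack's Ubuntu release)
curl -L -o /tmp/libexample.deb http://archive.ubuntu.com/ubuntu/pool/main/libe/libexample/libexample_1.0_amd64.deb
dpkg -x /tmp/libexample.deb "$HOME/deps"
export LD_LIBRARY_PATH="$HOME/deps/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH"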
Hope that helps!

Related

Node.JS native addons on LINUX [duplicate]

I'm using AWS Lambda, which involves creating an archive of my node.js script, including the node_modules folder and uploading that to their infrastructure to run.
This works fine, except when it comes to node modules with native bindings (using node-gyp). Because the binding was compiled and the project archived on my local computer (OS X), it is not compatible with AWS's (Amazon Linux) servers.
How can I cross-compile/install a node module (specifically, node-sqlite3) so that it runs when I upload it to a server with a different architecture?
While not really a solution to your problem, a very easy workaround could be to simply compile the native addons on a Linux machine.
For your particular situation, I would use Vagrant. Vagrant can create virtual machines and configure them within seconds.
Find an OS image that resembles Amazon's Linux distro (Fedora, CentOS, or others that use yum as their package manager - see the Wiki)
Use a simple configuration script that, when run by Vagrant on machine startup, will run npm install (optionally, it can also remove the node_modules folder first to ensure a clean installation) - see the provisioning sketch after this list
For extra comfort, the script can also create the zip file for deployment
Once the installation finishes, the script will shut down the VM to avoid unnecessary consumption of system resources
Deploy!
It might require some tuning if the linked libraries are not in the same place on the target machine, but generally this seems to me like the best and quickest solution.
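A rough sketch of such a provisioning script, assuming /vagrant is Vagrant's default synced folder with your project in it (file names are examples):

#!/bin/bash
# provision.sh - run by Vagrant inside the VM on startup (a sketch, not a drop-in script)
cd /vagrant                  # Vagrant's default synced folder
rm -rf node_modules          # ensure a clean installation
npm install --production     # native addons now compile against the VM's Linux toolchain
zip -r lambda-deploy.zip index.js node_modules   # optional: build the deployment archive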
While installing the app using Vagrant might be sufficient in some cases, I have found it necessary to build the app on Linux which is as close to Lambda's Amazon Linux AMI as possible.
You can read the original answer here: https://stackoverflow.com/a/34019739/303184
Steps to make it work:
Spawn a new EC2 instance. Make sure it is based on exactly the same image as your AWS Lambda runtime. You can review the Lambda environment details here: http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html. In our case, it was the Amazon Linux AMI called amzn-ami-hvm-2015.03.0.x86_64-gp2.
Install nvm and use it to install the same version of Node.js as on AWS Lambda. At the time of writing, it was v0.10.36. You can refer to http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html again to find out.
You will probably need to install git and the g++ compiler on the EC2 instance. You can do this by running:
sudo yum install git gcc-c++
Finally, clone your app onto your new EC2 instance and install your app's dependencies:
nvm use 0.10.36
npm install --production
You can then easily download the node_modules folder using scp or similar.
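For example (the key file, host, and paths are hypothetical):

# on the EC2 instance: bundle the freshly compiled modules
tar -czf node_modules.tar.gz node_modules
# on your local machine: pull the archive down
scp -i ~/.ssh/mykey.pem ec2-user@your-ec2-host:~/yourapp/node_modules.tar.gz .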
Along the same lines as Robert's answer: when I have to work in a different OS on my Mac, I use virtualization software such as Oracle's free VirtualBox to run Linux on my Mac at no cost. Or sign up for a new AWS account; you get a micro instance free for a year. Use that to get your Linux box and do whatever you need there.
AWS has a page describing how to deal with native NPM modules: https://aws.amazon.com/blogs/compute/nodejs-packages-in-lambda/

Installing Python 2.7.16 and packages offline. Concerns with dependencies

Problem
I am attempting to install Python 2.7.16, openpyxl, and pyinstaller onto a Windows 10 machine that is offline for security reasons. To clarify, I have a mapped network drive on there from which I can transfer the files I need to use.
Question
What is the best way to go about this? I currently have a .msi Python installation file directly from their website. The packages I need are packaged as .tar.gz files. I already have those on my Windows machine, but do not want to proceed until I know for sure what I need to do. Also, do I need to do anything for dependencies? If so, how do I find the dependencies for the packages I need?
Side Notes
The version of Python (2.7.16) comes with pip; not sure if that makes a difference. Downloading and transferring things requires me to ask my admin to download the files and then transfer them to my drive so I can have them on my computer. I would like to do this in as few round trips as possible.
Useful links
Python: https://www.python.org/downloads/release/python-2716/
openpyxl: https://pypi.org/project/openpyxl/#files
pyinstaller: https://pypi.org/project/PyInstaller/#files
My solution would be to seek out offline installers for Python and pip and follow this guide.
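Assuming your pip is version 8 or later, a minimal sketch of the round trip looks like this (the package names are just the ones from the question):

# on the online machine: download the packages plus all their dependencies
pip download -d ./offline_pkgs openpyxl pyinstaller
# on the offline machine, after transferring the folder over the mapped drive:
pip install --no-index --find-links ./offline_pkgs openpyxl pyinstaller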
Also a great tip: try the complete procedure (installing the required software) on a separate PC that you have disconnected from the network. Note everything you have to do to get it working and use those instructions on your originally intended machine. This will save you from going back and forth and scratching your head while installing on the target machine.
Please note that I have NO idea how python works and this is just a hunch from me as a programmer.
Installing Python and packages on an Offline Machine: A Comprehensive Guide
The Environment
Let us begin by defining the environment in which this guide may be of great use. If your situation can be described by one or more of the following, you might have great results following this guide...
The machine you are developing on is offline. (No connection to the internet)
You need to develop and run Python on the machine that is completely offline.
If this sounds like you, read the following cases in which a few minor details may make a big difference in getting you started.
Case 1:
You are not allowed to plug any external media devices into the offline machine. This includes, but is not limited to, USB drives, CDs, floppy disks, or any other removable media that might help you transfer Python files to the offline machine.
You are allowed to map a network drive (somewhere else on the local network). This would fix the problem mentioned in number one with removable media.
Answer: In this case, just proceed with the guide, as this was my case and I will explain in detail how I solved my problem.
Case 2:
There is no physical way to transfer files onto the development machine that is offline.
Answer: If this is your case, you need to get in touch with the admin team who handles the software on your development machine. Direct them to this guide to proceed.
Let's Get Started
Warning A:
The following must be performed on a computer with an internet connection. It is impossible to download things from any website without an internet connection.
Warning B:
There is a longer way and a shorter way to do the following. To avoid the longer way, you must be able to install Python on a different machine that is online. This can be the same machine you are using to download the packages and the Python version, or it can even be a home machine - any machine in the world that is on the internet. Its sole purpose will be to help you identify the dependencies of each package.
Installing Python
Visit the Python website and identify the version you want. Version 2.7.9 and up is recommended for this guide. Download the file for your specific system.
Python 2.7.9 : https://www.python.org/downloads/release/python-279/
Python 3.7.3 : https://www.python.org/downloads/release/python-373/
The reason I provided Python 2.7.9 is that it is the earliest 2.7.x version that comes with pip (a package manager).
Visit the Python Package Index to locate the packages you will be using in your Python project: https://pypi.org/
Search for the package you need, go to the downloads, and get the .tar.gz file - not the .whl files, unless you know what you are doing with those.
Tip: If you want to keep track of the packages you are installing, I suggest you put them all in one folder somewhere you can find them, or just write them down on paper.
Unpack the .tar.gz package files. You can get rid of the .tar.gz archives once you unpack them, as they will not be needed any longer.
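For example (the file name is just an illustration):

tar -xzf openpyxl-2.6.4.tar.gz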
Install the version of Python that you downloaded for your system in step 1 above.
(This may just be running the .msi file for Windows or unpacking some files for Linux.) If you are not sure how, just look at this brilliant guide:
https://realpython.com/installing-python/
Now you should be able to go to your terminal and type "python" and get the Python interpreter to open up. If you get a "cannot find python command" error, you need to set up your path variable.
Windows guide: https://geek-university.com/python/add-python-to-the-windows-path/
Linux guide: https://www.tutorialspoint.com/python/python_environment.htm
Your python installation is done! And your packages should also be ready to install!
Installing Python Packages
What you need to know here is that almost all Python packages have dependencies - other packages that must be installed before the package itself can be installed. If you need more explanation of dependencies, read here: https://www.fullstackpython.com/application-dependencies.html
Before proceeding, be sure to add the Python/Scripts folder to your path variable too, or pip will not work. Follow this link for instructions: https://appuals.com/fix-pip-is-not-recognized-as-an-internal-or-external-command/
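For example, on Windows you can append it for the current session like this (the install path is an assumption; use the System Properties dialog as in the linked guide to make it permanent):

set PATH=%PATH%;C:\Python27\Scripts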
On your machine that is on the internet, run pip install [package_name] for every package you need, and then run pip freeze to see all the packages installed.
Once you can see all the packages installed, which will include the dependencies of the ones you ran pip install on, you need to manually download these dependencies from the Python Package Index https://pypi.org/ just like you did with the regular packages.
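For example, with openpyxl (the output below is illustrative; your versions will differ):

pip install openpyxl
pip freeze
# illustrative output - the extra lines are the dependencies you still need to download:
# et-xmlfile==1.0.1
# jdcal==1.4.1
# openpyxl==2.6.4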
Moving Offline
Once you have identified all the packages you will need, and all of their dependencies, you will need to download them, unpack all of them, and move them into one folder, which I will call "OFFLINE_SETUP_FOLDER".
To be clear:
The packages we installed before were installed only to find out the dependencies we were going to need. You do not have to re-download the packages you had already downloaded before running pip install. You should only need to download the dependencies you found via the pip freeze command.
Finally, you need to copy your Python installation file into the "OFFLINE_SETUP_FOLDER", be it the .msi file for Windows or the .tar file for Linux.
Your "OFFLINE_SETUP_FOLDER" should contain the following...
In the following, package can be the name of any package that you downloaded, and the a and b in package1a and package1b just represent dependencies of that package. These file names are just examples for packages.
python.msi (installation file for python)
/package1 (normal package folder)
/package1a (package dependency folder)
/package1b (package dependency folder)
/package2 (normal package folder)
/package3 (normal package folder)
/package3a (package dependency folder)
Once this is complete, you need to move that folder onto the machine that is completely offline from the network.
Then run the installation for Python as you did before and install it on the machine. Do not forget to set up the path variable. Refer back to the Installing Python section if needed.
Open your terminal or CMD and CD into the "OFFLINE_SETUP_FOLDER".
Now you need to CD into each individual package folder and run python setup.py install.
If a package install fails, it will be because one of its dependencies has not been installed. If this is the case, CD into the dependency it says is missing and run python setup.py install there first.
Keep repeating these steps until all packages and dependencies have been installed.
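A rough sketch of that loop, using the example layout above (the && chaining works in both cmd and a Unix shell):

cd package1a && python setup.py install && cd ..
cd package1b && python setup.py install && cd ..
cd package1 && python setup.py install && cd ..
cd package2 && python setup.py install && cd ..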
This is the end of this guide for installing Python and packages on an offline machine. I hope this helped :)

Can I load additional libraries in Gitpod without creating my own Docker image?

I have recently tried out Gitpod, which seems to be quite a cool tool.
For testing purposes, I have opened some C++ GitHub repository of mine that uses Boost (among other libraries). Unfortunately, Boost does not seem to be installed in the Docker image, so my code does not compile.
I know about the possibility of creating own Docker images, but I was wondering if there are alternative, easier options as well. Does Gitpod provide any Environment Modules-like option to dynamically load/unload certain "commonly used" libraries, or do I always have to provide my own Docker image in this case?
I work on Gitpod. Thank you for trying it and the compliment :)
We didn't want to invent yet another module system for Gitpod.
Instead, we decided to support Dockerfiles and build them on-demand, because Dockerfiles allow using all those amazing module systems that are already out there: Debian's packages, Alpine's packages, Node Version Manager (NVM), Ruby Version Manager (RVM), SDKman, etc. Basically any Linux-compatible package manager down to simple wget.
You can also use your own Docker images, but I find Dockerfiles more convenient because you can check them into git and thereby version them together with your source code. It's dev-environment-as-code and should be shared across the team. Also, you don't need to bother with building them and pushing them to a registry (e.g. hub.docker.com).
What Gitpod does offer, however, is a selection of Docker images (src). The most prominent one is gitpod/workspace-full, which is Gitpod's default image.
To get back to your question about the easiest way to get the right modules into your Gitpod development environment:
inheriting from gitpod/workspace-full is very convenient.
If you don't want to inherit from it, copy'n'pasting sections from gitpod/workspace-full is convenient.
Often, putting RUN apt-get update && apt-get install -y libboost-all-dev into your Dockerfile is enough. This uses APT to install the package libboost-all-dev (see the sketch after this list).
Most software projects have documentation on how to build them under Linux. These instructions usually work in Dockerfiles, too.
Search on hub.docker.com for useful Docker images. You can inherit from those images or find their Dockerfiles and copy'n'paste sections from there.
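Putting the first and third points together, a minimal sketch for the Boost case might look like this (the file name follows Gitpod's convention; the package list is up to you):

# .gitpod.Dockerfile
FROM gitpod/workspace-full
# depending on the base image's default user, you may need sudo or a USER root line here
RUN apt-get update && apt-get install -y libboost-all-dev

You can then point your .gitpod.yml at it (image: file: .gitpod.Dockerfile) so Gitpod builds it on demand.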

Deploying Django with virtualenv inside a distribution package?

I have to deploy a Django application onto a SuSE Linux Enterprise 11 system. Corporate rules say I need to deploy using RPMs only. While I can use ./setup.py bdist_rpm for each dependency, it's not really sane, since RPM doesn't record all of the dependencies yet. Therefore I'd have no real advantage in using RPMs; managing dependencies manually is somewhat cumbersome, and I would like to avoid it.
Now I had the following idea: While building a package, I could create a virtualenv, install all my dependencies via pip there and then package it up with the rest of the code into one solid RPM.
How sensible is this approach?
I've been using this approach for about a year now and it has worked out pretty well.
One gotcha is that you'll want to check the shebang lines in any Python scripts written to the virtualenv's bin directory. These will end up containing the full paths used in your build environment, which probably won't be the same directory where you end up installing the virtualenv. So you may need to add some sed calls in your RPM's post-install script to adjust the paths.
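For example, a hypothetical %post snippet (both paths are placeholders for your build and install locations):

# %post - rewrite build-time shebang paths to the final install location
sed -i 's|#!/home/builder/rpmbuild/venv/bin/python|#!/opt/myapp/venv/bin/python|' /opt/myapp/venv/bin/*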

Run Pinax without VirtualEnv

Is there a way to run Pinax without virtualenv?
I want to run it without virtualenv because I want to run it in a Django container on Media Temple's grid-hosting service. Their containers can scale up to 1 GB of dedicated memory, so I wouldn't have to worry about my own VPS or scaling issues. But their response was:
" because of the way the DjangoContainer works, you won't be able to configure your server to use your virtualenv. Essentially the DjangoContainer is a virtualized server (to which you don't have access other than the AccountCenter tools, or the 'mtd' command line tool) with the specific purpose of serving your Django applications. It mounts your django container folder so that it has your application code, but you cannot modify the version or location of python it uses. This probably means you'll have to use Pinax without virtualenv support, as the general idea of using virtualenv in this way would be to create a custom environment for your Pinax application, which as I mentioned here is impossible to instruct the server to use. "
As of 0.9a1, Pinax can be used without pinax-boot.py which was the virtualenv dependency (we bundled it). Requirements are project-level and must be installed with pip. However, setup_project does enforce a virtual environment when installing requirements (it calls pip for you as a convenience; I would be open to not enforcing a virtual environment here). You can pass --no-reqs to setup_project forcing it to skip dependency installation. You can then run pip yourself and install it however you like.
Technically yes, but you would have to change quite a bit of the configuration that is handed out and hand-install a lot of libraries. Pinax has virtualenv as a very low-level, built-in assumption.
You can; all you need to do is find out what is in the virtualenv. Set it up, install yolk in the virtualenv, and type yolk -l to see what you need to install to get it to work (see the sketch below).
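A minimal sketch of that inventory step, run wherever the virtualenv does work:

# inside the working virtualenv on another machine
pip install yolk
yolk -l    # lists every installed package and its version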