Can you add CLI tools to a Cloud Foundry app? - cloud-foundry

Terribly sorry if this is an obvious question, but I have a webapp that relies on a CLI tool to work. I was wondering if there is a way I could specify this without using a custom buildpack, and if so, how to go about doing it.
Any help on this would be great, thanks.

Can you add CLI tools to a Cloud Foundry app?
It's not possible to install things directly with apt or apt-get. Your app runs as an unprivileged user, so it can't use those tools to install packages.
This leaves you with a couple of options:
Get the binary and bundle it with your app. Some people (no judgement from me though) would say that your app is responsible for bringing everything it needs to run anyway, so you should be doing this already.
Twelve-factor apps also do not rely on the implicit existence of any system tools. Examples include shelling out to ImageMagick or curl. [1]
This path works well for dependencies that are small or self-contained, like statically built Go apps. If your app needs shared libraries or other resources to function, you need to bundle those with your app too. It's also not great if what you bundle is large: everything you bundle gets pushed up with the app, so it can slow down your pushes. You are also responsible for tracking updates and making sure that you have the latest, bug-free & security-patched binaries & libraries.
The general steps for doing this are:
Create a folder like binaries/ under the root of your app, with subfolders of bin/ and lib/.
Place all your binaries under binaries/bin and any shared libraries they require under binaries/lib.
Add a .profile file at the root of your app. This file is sourced before your app starts, so it can put your bundled binaries on the PATH and add your libraries to the library search path.
In .profile put the following:
export PATH=$HOME/binaries/bin:$PATH
export LD_LIBRARY_PATH=$HOME/binaries/lib:$LD_LIBRARY_PATH
That should be it. Just push your app with all the new files.
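Putting it together, the bundling step might look roughly like this from the root of your app (mytool, libmytool.so, and my-app are hypothetical placeholders for your actual tool, library, and app name):
mkdir -p binaries/bin binaries/lib
cp /path/to/mytool binaries/bin/           # the CLI tool your app shells out to
cp /path/to/libmytool.so binaries/lib/     # any shared libraries that tool needs
# .profile at the app root contains the two export lines above
cf push my-app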
Another, easier option is to use the Apt buildpack [2]. This buildpack can install required dependencies using apt. You just need to add an apt.yml file to the root of your app & run your app with multi-buildpacks (apt buildpack first, then your normal buildpack).
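For illustration (the app name, package, and second buildpack here are assumptions, not from the question), apt.yml at the app root might look like:
---
packages:
- imagemagick
and the manifest would list the buildpacks in the order they should run:
---
applications:
- name: my-app
  buildpacks:
  - https://github.com/cloudfoundry/apt-buildpack
  - nodejs_buildpack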
The main benefit of doing this is that you don't have to manage the dependencies yourself. The apt buildpack will automatically install them from the repo you tell it to use, so it'll pick up new versions from there as well. This is good if what you need to install has a lot of dependencies, particularly security-sensitive ones (like openssl) or ones that get updated/patched often, like other language runtimes (Python, Perl, Ruby, etc.).
There are other benefits, too: it's easier because the buildpack takes care of adjusting PATH & LD_LIBRARY_PATH, and it keeps your app smaller, so pushes are faster.
The downsides of this option are that the apt-buildpack is not an official buildpack (it's community maintained), and that it works best when it has Internet access to download packages, although you can work around that by pointing it at an internal repo.
There are a couple of other options as well, but I wouldn't recommend them unless both options above are definitely not going to work for you.
Use Docker. You can set up your own Docker container with all the dependencies you need, plus your app code, and cf push the Docker image to CF (see the example push command after this list). The downside of this is that you lose the advantages of using buildpacks, so you're back to building and managing Docker images and all the required dependencies of your app on your own.
You could create your own custom buildpack and supply the dependencies that way. I don't see any reason you'd want to do this though. It's a decent bit of work and in the end, you'd have something that's just more brittle and less flexible than Apt Buildpack.
It's technically possible to ship your own rootfs, but you really, really shouldn't (I'm just including this to be thorough). The rootfs is the base file system that's used by all apps on CF. Doing this has a lot of drawbacks, chief among them being that it's difficult. It also applies to all apps on the foundation, can bloat the size of the rootfs, and makes a larger attack surface for anything using the rootfs (i.e. all apps).
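For reference, the Docker option above boils down to a push like this (the image name is made up for illustration):
cf push my-app --docker-image myregistry/my-app:1.0.0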
Hope that helps!
[1] https://12factor.net/dependencies
[2] https://github.com/cloudfoundry/apt-buildpack

Related

Is docker best just for prod environments?

I just decided to jump into using Docker to test out building a microservice application using AWS Fargate.
My question really relates to hearing about many development teams using Docker to avoid people saying the phrase "works on my machine" when committing code. Although I can see how that problem gets solved, I still do not see how Docker images can actually be used in a development environment.
The workflow for anything other than production baffles me. An example of my thinking is...
A team of 10 devs all use Docker, and each pulls the image (with the source code) from the repo to their own machine. If they each have an individual version of the image, then any edits they make to that image are their own, and when they push back to the repo none of those edits can be merged (on top of that, editing the source code inside an image is not easily done either).
I am thinking of it in the same way as git/GitHub, where code is pushed to a branch and then merged to master to create a finished product.
I guess pulling the code from the GitHub master branch and creating the Docker image from it is the way it's meant to be used, but again that points back to my original assumption that Docker is for production environments rather than development.
Is Docker being used in development mostly so a dev can test a feature in the same container every other dev on the team is using, so that all the environments match across the team?
I just really do not understand the workflow of development environments with docker.
I'd highlight three cases where I've found Docker particularly useful, prior to a production deploy:
Docker is really useful for installing local dependencies. If your application needs a database, docker run a PostgreSQL container with appropriate options. Need a clean start? Delete the container. Running two microservices that need separate databases? Start two containers. The second microservice is maintained by another team? Run it in a container too.
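For example, a throwaway PostgreSQL for local development might look like this (the container name, password, and version tag are arbitrary choices):
# start a disposable database in the background
docker run -d --name dev-postgres \
  -e POSTGRES_PASSWORD=devpassword \
  -p 5432:5432 \
  postgres:15
# need that clean start? throw it away
docker rm -f dev-postgres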
Docker is useful for capturing the build environment in the CI system. Jenkins, for example, can run build steps inside a container, bind-mounting the current work tree in, so it's useful to build an image that just contains build-time dependencies (which can be updated independently of the CI system itself).
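Outside of any Jenkins-specific configuration, the same pattern is just a docker run that bind-mounts the work tree into a build image (the image name and build command here are assumptions):
# run the build inside a container that only holds build-time dependencies
docker run --rm \
  -v "$PWD":/workspace -w /workspace \
  my-build-image:latest \
  make test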
If you're running Docker in production, you can test the exact thing you're about to run. You're guaranteed the install environment will be the same in the QA and prod environments, because it's encapsulated inside the same Docker image. A developer can debug problems against the production-installed code without actually being in production.
In the basic scenario you describe, an important detail to note is that you never "edit an image"; you always docker build a new image from its Dockerfile and other source code. In compiled languages (C++, Go, Java, Rust, Haskell) the source code won't be in the image. Even if you're "using Docker in development" the actual source code will be in some other system (frequently Git), and typically you will have a CI system that builds "official" images from that source code.
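A sketch of that "official" CI build step, tagging the image with the commit it was built from (registry and image names are made up for illustration), might be:
# build from the Dockerfile in the current checkout, tagged by commit
docker build -t registry.example.com/my-app:$(git rev-parse --short HEAD) .
docker push registry.example.com/my-app:$(git rev-parse --short HEAD)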
Where I see Docker proposed for day-to-day development, it's either because the language ecosystem in use makes it hard to have multiple versions concurrently installed, or to avoid installing software on the host system. You need specific tooling support to "develop inside a container", and if developers choose their own IDE, this support is not universal. Conversely, in between OS package managers (APT, Homebrew) and interpreter version managers (rbenv, nvm) it's usually straightforward to install a couple of things on the host. If your application isn't that sensitive to, say, the specific version of Node, it's probably easier to use whichever version is already installed on your host than to try to insert Docker into the process.

Can I load additional libraries in Gitpod without creating my own Docker image?

I have recently tried out Gitpod, which seems to be a quite cool tool.
For testing purposes, I have opened some C++ GitHub repository of mine that uses Boost (among other libraries). Unfortunately, Boost does not seem to be installed in the Docker image, so my code does not compile.
I know about the possibility of creating own Docker images, but I was wondering if there are alternative, easier options as well. Does Gitpod provide any Environment Modules-like option to dynamically load/unload certain "commonly used" libraries or do I always have to provide my own Docker instance in this case?
I work on Gitpod. Thank you for trying it and the compliment :)
We didn't want to invent yet another module system for Gitpod.
Instead, we decided to support Dockerfiles and build them on-demand, because Dockerfiles allow using all those amazing module systems that are already out there: Debian's packages, Alpine's packages, Node Version Manager (NVM), Ruby Version Manager (RVM), SDKman, etc. Basically any Linux-compatible package manager down to simple wget.
You can also use your own Docker images, but I find Dockerfiles more convenient because you can check them into git and thereby version them together with your source code. It's dev-environment-as-code and should be shared across the team. Also, you don't need to bother with building images and pushing them to a registry (e.g. hub.docker.com).
What Gitpod does offer, however, is a selection of Docker images (src). The most prominent one is gitpod/workspace-full, which is Gitpod's default image.
To get back to your question about the easiest way to get the right modules into your Gitpod development environment:
Inheriting from gitpod/workspace-full is very convenient.
If you don't want to inherit from it, copy'n'pasting sections from gitpod/workspace-full is convenient.
Often, putting RUN apt-get update && apt-get install -y libboost-all-dev into your Dockerfile is enough. This uses APT to install the package libboost-all-dev.
Most software projects have documentation on how to build them under Linux. These instructions usually work in Dockerfiles, too.
Search on hub.docker.com for useful Docker images. You can inherit from those images or find their Dockerfiles and copy'n'paste sections from there.
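Putting that together for the Boost case, a minimal sketch might be a Dockerfile like this (assuming the conventional .gitpod.Dockerfile file name):
FROM gitpod/workspace-full
# switch to root to install system packages, then drop back to the gitpod user
USER root
RUN apt-get update \
    && apt-get install -y libboost-all-dev \
    && rm -rf /var/lib/apt/lists/*
USER gitpod
with a .gitpod.yml that points at it:
image:
  file: .gitpod.Dockerfile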

Use package manager in a Cloud Foundry instance

Can I use apt-get or other package managers in Cloud Foundry buildpacks or .profile scripts that come with apps, and if I can, how do I do it? I expected to do it the same way as in a Dockerfile, but in my case it doesn't work with or without sudo.
Can I use apt-get or other package managers in Cloud Foundry buildpacks or .profile scripts that come with apps; and if I can, how to do it?
No. Running apt-get or a package manager would typically require root access, and you do not get root access when the buildpack runs or when your application runs (this is a difference from Docker).
That said, you can do anything that doesn't require root access, so if you found a package manager that installed in the vcap user's home directory and didn't need root then you could use that.
It depends on what you're trying to install, but in some cases you can work around this by downloading the .deb or .rpm file and manually extracting the binaries. This typically works OK for things like shared libraries. Just download the precompiled binary that matches your stack (cflinuxfs2 == Ubuntu Trusty). For other things, you can build your own binaries from source. This is what the buildpacks do; see binary-builder.
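As a rough sketch (the package, file, and directory names below are hypothetical; the real layout depends on the package):
# unpack the .deb without needing root
dpkg-deb -x some-package.deb extracted/
# copy the pieces you need under your app, then point PATH and
# LD_LIBRARY_PATH at them from a .profile script
cp extracted/usr/lib/x86_64-linux-gnu/libsomething.so* vendor/lib/
cp extracted/usr/bin/sometool vendor/bin/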
Hope that helps!

Django apps: installing system-wide vs project-wide

What are the pros/cons (regarding maintainability) of installing Django apps system-wide vs installing them project-wide? Is there a recommended approach?
By django extensions, do you mean django-extensions?
In all honesty, I'd steer clear of system-wide installations: they instantly tie you to the system's installed versions, and if incompatibilities arise system-wide, that is a bigger issue than with a project-wide approach. In addition, they add complexity when deploying to remote services, and don't stick to the 12 Factor App principles. Keeping everything self-contained (project code and its dependencies) will make life easier in the long run.
I'd recommend using virtualenv and pip to install your dependencies, which keeps them isolated to the project in question, and dramatically simplifies deployment.
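A minimal sketch of that workflow (the package names are just examples):
# create and activate an environment that is private to this project
virtualenv venv
source venv/bin/activate
# install project-wide dependencies and pin them for deployment
pip install django django-extensions
pip freeze > requirements.txt
# later, on a server or a teammate's machine:
pip install -r requirements.txt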
The recommended approach is not to copy any reusable app inside your project. Reusable apps provide extension points and settings for customization. It is also recommended to use virtualenv for your projects and install any project-specific Python modules there. This will protect you from version conflicts between projects.

Deploying Django with virtualenv inside a distribution package?

I have to deploy a Django application onto a SuSE Linux Enterprise 11 system. Corporate rules say I need to deploy using RPMs only. While I can use ./setup.py bdist_rpm for each dependency, it's not really sane, since RPM doesn't record all of the dependencies yet. Therefore I'd have no real advantage in using RPMs, and managing dependencies manually is somewhat cumbersome and something I would like to avoid.
Now I had the following idea: While building a package, I could create a virtualenv, install all my dependencies via pip there and then package it up with the rest of the code into one solid RPM.
How sensible is this approach?
I've been using this approach for about a year now and it has worked out pretty well.
One gotcha is that you'll want to check the shebang ("bang") lines in any Python scripts written to the virtualenv's bin directory. These end up containing the full paths used in your build environment, which probably won't be the same directory where you end up installing the virtualenv. So you may need to add some sed calls in your RPM's postinstall to adjust the paths.
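For example, a post-install snippet along these lines can do the rewrite (both paths here are hypothetical placeholders for your build and install locations):
# rewrite the build-time virtualenv path baked into the scripts' shebang lines
find /opt/myapp/venv/bin -type f -exec \
    sed -i 's|#!/build/path/to/venv/bin/python|#!/opt/myapp/venv/bin/python|' {} +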