How to mount a file via CloudFoundry manifest similar to Kubernetes? - cloud-foundry

With Kubernetes, I used to mount a file containing feature-flags as key/value pairs. Our UI would then simply get the file and read the values.
Like this: What's the best way to share/mount one file into a pod?
Now I want to do the same with the manifest file for CloudFoundry. How can I mount a file so that it will be available in /dist folder at deployment time?
To add more information: once the file is mounted, the UI can later download it and read its content. We are using React, and any call to the server has to go through an Apigee layer.

The typical approach to mounting files into a CloudFoundry application is called Volume Services. This takes a remote file system like NFS or SMB and mounts it into your application container.
I don't think that's what you want here; it would probably be overkill just to mount a single file. You totally could go this route, though.
That said, CloudFoundry does not have a built-in concept similar to a Kubernetes ConfigMap, where you can take your configuration and mount it as a file. With CloudFoundry, you do have a few similar options. They are not exactly the same, though, so you'll have to determine whether one of them works for your needs.
You can pass config through environment variables (or through user-provided service bindings, but those also come through an environment variable, VCAP_SERVICES). This won't be a file, but perhaps you can have your UI read that instead. You didn't mention how the UI gets the file, so I can't comment further; if you elaborate on that point, for example whether it fetches it over HTTP or reads it from disk, I could expand on this option.
If it absolutely needs to be a file, your application could read the environment variable contents and write them to disk when it starts. If your application isn't able to do that, for example if you're using Nginx, you could include a .profile script at the root of your application that reads the variable and generates the file: echo "$CFG_VAR" > /dist/file, or whatever you need to do to generate that file.
A couple more notes on environment variables: there are limits to how much information can go in them (sorry, I don't know the exact value off the top of my head, but I think it's around 128K), and they are not great for binary configuration, in which case you'd need to base64-encode your data first.
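For example, a minimal .profile sketch of that approach, assuming the flags live in a hypothetical FEATURE_FLAGS variable and that /dist sits inside the application directory (adjust names and paths to your app):

    #!/bin/bash
    # .profile at the root of the pushed app; Cloud Foundry runs it before starting the app.
    # FEATURE_FLAGS is assumed to have been set beforehand, e.g.:
    #   cf set-env my-app FEATURE_FLAGS '{"newCheckout":"on","darkMode":"off"}' && cf restage my-app
    mkdir -p "$HOME/dist"
    echo "$FEATURE_FLAGS" > "$HOME/dist/feature-flags.json"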
You can pull the config file from a config server and cache it locally. This can be pretty simple: the first thing your app does when it starts is reach out, download the file, and place it on disk, where it will persist for the duration of your application's lifetime.
If you don't have a server-side application, for example if you're running Nginx, you can include a .profile script (it can be any executable script) at the root of your application which uses curl or another tool to download and set up that configuration.
You can replace "config server" with an HTTP server, Git repository, Vault server, CredHub, database, or really any place you can durably store your data.
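As a sketch of that startup download, here is what the .profile variant could look like, assuming a made-up CONFIG_URL environment variable that points at wherever the file is hosted:

    #!/bin/bash
    # .profile: fetch the config at startup and place it where the UI expects it.
    # CONFIG_URL is an illustrative name; set it via cf set-env or the manifest's env: block.
    set -e
    mkdir -p "$HOME/dist"
    curl -fsSL "$CONFIG_URL" -o "$HOME/dist/feature-flags.json"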
Not recommended, but you can also push your configuration file with the application. This would be as simple as including it in the directory or archive that you push. It has the obvious downside of coupling your configuration to the application bits that you push. Depending on where you work, the policies you have to follow, and the tools you use, this may or may not matter.
There might be other variations you could use as well. Loading the file in your application when it starts or through a .profile script is very flexible.

Related

How to obtain output files from cloudfoundry that a deployed application has created?

I have deployed a Windows exe on cloudfoundry, and now I want to get back the files from cloudfoundry that the exe has generated.
Is there any way of retrieving files using a batch script?
It's important to note that you shouldn't write anything critical to the local file system. The local file system is ephemeral, so if your app crashes or restarts for any reason before you obtain the files, they are gone.
Having said that, if you have an application that does some processing and generates output files, the recommended way to handle this is to have the process write those files somewhere external and durable, like a database, an SFTP server, or an S3-compatible service. In short, push the files out somewhere instead of trying to get into the container to obtain them.
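As an illustration of pushing the files out, the last step of the processing job could be something like this (the bucket and paths are made up, and it assumes the AWS CLI and credentials are available to the app):

    # Copy the generated output to durable storage; all names here are placeholders.
    aws s3 cp ./output/result.csv s3://my-output-bucket/results/result.csv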
If you must pull them out of the container, you have a couple options:
Run a small HTTP server in the container and expose the files through it. Make sure you secure the server appropriately to prevent unauthorized access to your files.
Use scp or sftp to download the files. Instructions for doing that can be found here, but roughly speaking: run cf app app-name --guid to get the app GUID, cf ssh-code to get a one-time passcode, then scp -P 2222 -oUser=cf:<insert app-guid>/0 ssh.system_domain:my-remote-file.json ./ to fetch the file named my-remote-file.json, as laid out below. Obviously, change the path/file to whatever you want to download.
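Put together, the sequence looks roughly like this; the app name and file name are placeholders, and ssh.system_domain stands for your platform's SSH endpoint:

    cf app app-name --guid    # prints the application GUID
    cf ssh-code               # prints a one-time passcode; enter it when scp prompts for a password
    scp -P 2222 -oUser=cf:<insert app-guid>/0 ssh.system_domain:my-remote-file.json ./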

How to handle private configuration file when deploying?

I am deploying a Django application using the following steps:
Push updates to Git
Log into AWS
Pull updates from Git
The issue I am having is with my production.py settings file. I have it in my .gitignore so it does not get uploaded to GitHub, for security reasons. This, of course, means it is not available when I pull updates onto my server.
What is a good approach for making this file available to my app when it is on the server, without having to upload it to GitHub where it is exposed?
It is definitely a good idea not to check secrets into your repository. However, there's nothing wrong with checking in configuration that is not secret if it's an intrinsic part of your application.
In large-scale deployments, configuration is typically set using a tool built for that purpose, like Puppet, so that all the pieces that need to be aware of a particular application's configuration can be generated from one source. Similarly, secrets are usually handled by a secret store like Vault and injected into the environment when the process starts.
If you're just running a single server, it's probably fine to adjust your configuration or application to read secrets from the environment (or possibly a separate file) and set those values on the server. You can then include the other, non-secret configuration settings as a file in the repository. If you need more flexibility later, you can pick up other tools then.
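As a sketch only of what "read secrets from the environment" can look like on a single server (the path, variable names, and gunicorn command are assumptions, not necessarily what your project uses):

    # /etc/myapp/env  -- kept out of Git, readable only by the deploy user, e.g.:
    #   SECRET_KEY=change-me
    #   DATABASE_URL=postgres://user:pass@localhost/mydb
    #
    # Start script: export everything from that file, then launch the app.
    set -a                        # auto-export variables assigned while sourcing
    . /etc/myapp/env
    set +a
    exec gunicorn mysite.wsgi     # settings.py then reads os.environ["SECRET_KEY"], etc.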

Recommended way to provide configuration files to Docker containers

I am moving our web application to docker-compose deployment (Django, DRF, AngularJS).
Docker looks solid now and things are going well.
I want to:
confirm with you that I am following best practices regarding application configuration files
know if "volume files" are actually bind mounts, which are not recommended
I've managed to use environment variables and docker-compose secrets read from the Django settings.py file, and it works. The downside is that environment variables are limited to simple strings and can pose escaping challenges when passing Python lists, dictionaries, etc. We also have to define and maintain a lot of environment variables, since our web app is installed in many places and is highly configurable.
On frontend side (AngularJS) we have two constants.js files and the nginx conf.
I've used a CMD ["/start.sh"] in the Dockerfile and have some sed commands.
But this looks really hackish, and it also means we have to define and maintain quite a few environment variables.
Are Docker volumes a good idea to use for these configuration files?
Does such a thing as a "volume file" actually exist (mentioned here), or is it actually a bind mount? Bind mounts are less recommended since they depend on the file system and file path on the host.
The volumes documentation briefly mentions files ("path where the file or directory are mounted in the container"), but does not go into greater detail.
Our web app has simple configuration files now:
settings.py
site\constants.js
admin\constants.js
and:
I want to avoid moving those files to dedicated directories just so they can be mounted.
Can you show me a sample docker-compose.yml with single-file volumes (not bind mounts)?
Thank you
If you can't use environment variables, then you should use a bind mount. If you use a named volume, you can't access single files and you can't directly edit the config files.
A named volume is always an entire directory, and can't be directly accessed from the host. There is no such thing as a "volume file" (your linked question is entirely about bind mounts, some using named-volume syntax) and there is no way to mount a single file out of a named volume.
Newer versions of Docker have a couple of different syntaxes for bind mounts (in Compose, the short and long volumes: service configurations, or creating a named volume of type: bind). These are all basically equivalent, and many of the answers in the question you link to involve making a named volume simulate a bind mount.
Docker Compose supports relative paths, so there is much less concern about host paths for bind mounts being non-portable across systems. A basic fragment of a docker-compose.yml file could include:
services:
  app:
    build: django
    volumes:
      - ./config/django-settings.py:/app/settings.py
In this example I'd suggest a (deploy-time) config directory that contains the configuration files, but that's an arbitrary choice; if you want to bind-mount ./django/settings.py from the application source tree over what's in the image so you can edit it directly, that's a valid choice too. You can check this tree into source control, and it will still work regardless of where it's checked out.
If you're using a base image with the full GNU tool set (Ubuntu, not Alpine), then your container entrypoint script can also use envsubst as a very lightweight templating tool (it replaces $VARIABLE references with the corresponding environment variable's value), which will help you with the "many options" case but not the "dict-type options" case.
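A sketch of what that can look like for the frontend container; the template path, variable names, and output location are assumptions for illustration:

    #!/bin/sh
    # entrypoint.sh: render the constants template from the environment, then start nginx.
    envsubst '$API_BASE_URL $FEATURE_FLAGS' \
      < /templates/constants.js.template \
      > /usr/share/nginx/html/constants.js
    exec nginx -g 'daemon off;'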
In general I'd recommend bind mounts for two cases, and maybe a third: config files (where the operator needs to edit them directly), log files (where the operator needs to read them directly), and maybe persistent data storage (where your existing backup solution will work unmodified; though not on macOS, where it's very slow). Named volumes can be a good match for the persistent-data case and better match what you would use in a clustered environment (Swarm, Kubernetes), but they can't be directly accessed.

Where Is It Appropriate to Put AWS Keys

I'm learning about Strongloop, it's pretty good so far.
Question: What is the appropriate place to put AWS keys? config.json? And how would I access them from my application?
Thanks
Ideally you would not put those credentials in any file that is committed. I usually find environment variables to be the best balance of convenience and security.
If you are using strong-pm, then you would do this with slc ctl env-set. If you are using some other supervisor, then you'll need to consult its docs.
A lot of the time it is enough to use Upstart or systemd directly; both make it fairly easy to set environment variables for the service process.
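For the systemd case, a hedged sketch of what that might look like; the unit name is made up and the key values are obviously placeholders:

    # Create a drop-in override for the service (opens an editor):
    sudo systemctl edit my-node-app
    # In the editor, add something like:
    #   [Service]
    #   Environment=AWS_ACCESS_KEY_ID=AKIA...placeholder
    #   Environment=AWS_SECRET_ACCESS_KEY=placeholder
    #   # or: EnvironmentFile=/etc/my-node-app/env  (a root-only file of KEY=value lines)
    sudo systemctl restart my-node-app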
Other than the above answer, what you can do is put these in your release procedure.
What we have done in our product is keep all of these entries in a config file which is deployed from a shared folder.
Let me elaborate.
We have local config files in Git, and separate config files on the production servers in a folder named "shared". Whenever a tagged release is deployed from Git, the files in the shared folder overwrite these config files.
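A simplified sketch of that kind of release step; the paths, tag, and repository URL are all made up:

    # Check out the tagged release, then let the shared config overwrite the tracked one.
    RELEASE_DIR=/srv/myapp/releases/20240115
    SHARED_DIR=/srv/myapp/shared
    git clone --branch v1.2.3 --depth 1 https://example.com/org/myapp.git "$RELEASE_DIR"
    cp -f "$SHARED_DIR/config.json" "$RELEASE_DIR/config.json"
    ln -sfn "$RELEASE_DIR" /srv/myapp/current    # point "current" at the new release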

Dropwizard configuration.yml security issues (where to save and should it contain passwords)

Where should the configuration.yml file of Dropwizard be saved?
I'm using Dropwizard which is a Java web framework.
Dropwizard uses configuration.yml files to load in environment specific configuration files.
In the example I found online, the configuration.yml files contain database usernames and passwords.
Now the question is where to save these configuration files, which contain passwords in plain text.
OPTION 1 GIT REPOSITORY
In the example the configuration.yml files are part of the project, so one could keep them in the git repository with the rest of the code. This, though, is a well-known bad security practice.
If someone cracks the git repository, they have access to the code and to the database. It also means that every single developer has access to all the passwords for all the environments.
OPTION 2 FILE ON THE COMPUTER
Save the configuration.yml on the machine but do not store it in the git repository.
OPTION 3 ENVIRONMENT VARIABLES
Use a configuration.yml file which points to environment variables on the specific machine.
This is not so practical, since all these environment variables need to be set manually on all the machines. Also, what is the syntax for using environment variables in Dropwizard's configuration.yml files?
I'd go with environment variables if you cannot control read access to the config file or are concerned that your machine is owned by an untrusted third party.
Environment variables are trivial to script.
You should use a file on the computer: this is how many frameworks out there work.
If you use a unix/linux server you can chmod 0600 [filename] and be sure that nobody (well, almost nobody, since root can do anything) can read that file.
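For instance, something along these lines; the path and service user are placeholders for whatever your deployment uses:

    # Make the config readable only by the user the Dropwizard service runs as.
    sudo chown appuser:appuser /etc/myservice/configuration.yml
    sudo chmod 0600 /etc/myservice/configuration.yml
    ls -l /etc/myservice/configuration.yml    # should show -rw------- appuser appuser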
On the Dropwizard mailing list it was also suggested to use software like Puppet/Chef to deploy your application and to have those frameworks handle all the variables (e.g. different configurations for test/staging/production).
Bye
Piero