Run a command on Cloud Foundry in its own container

As you can see in the official Cloud Foundry documentation:
https://docs.cloudfoundry.org/devguide/using-tasks.html
A task is an application or script whose code is included as part of a deployed application, but runs independently in its own container.
I'd like to know if there's a way to run commands and manipulate files directly on the main container of an application without using an SSH connection or the manifest file.
Thanks

No. Tasks run in their own container, so they cannot affect other running tasks or running application instances. That's the designed behavior.
If you need to make changes to your application, you should look at one of the following:
Use a .profile script. This lets you execute operations before your application starts up. It runs for every application instance that is started (I do not believe it runs for tasks), so the operation is applied consistently. A minimal sketch appears after this list.
While not recommended, you can technically background a process from the .profile script that continues running for the duration of your app. Nothing monitors that process, though, and if it crashes it won't cause your app container to restart.
Integrate with your buildpack. Some buildpacks, like the PHP buildpack, provide hooks for you to integrate and add behavior during staging. For other buildpacks, you can fork the buildpack and make it do whatever you want, including changing the command returned by the buildpack, which tells the platform how to execute your droplet and ultimately what runs in the container.
You can technically modify a running app instance with cf ssh, but I wouldn't recommend it. It's something you should really only do for troubleshooting or maybe testing, and definitely not for production apps. If you feel like you need to modify a running app instance for some reason, I'd suggest looking at the reasons why and looking for alternative ways of accomplishing your goals.
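To illustrate the first option, here is a minimal sketch of a .profile script; the operation it performs and the variable and path names are hypothetical:

```bash
# .profile -- placed at the root of the pushed application; the platform
# runs it before each application instance starts.

# Hypothetical operation: substitute a placeholder in a config file using
# an environment variable (DB_HOST and the config path are made up here).
sed -i "s/PLACEHOLDER_HOST/${DB_HOST:-localhost}/" "$HOME/config/app.conf"

# The (not recommended) backgrounded process mentioned above would look
# something like this:
# nohup ./my-sidecar.sh &
```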

Related

Can we change/modify the environment in PCF (Pivotal Cloud Foundry) at runtime?

We use Pivotal Cloud Foundry's YML file to set up the environment. Everything is fine. According to DevOps, if we have to modify or create an environment variable, we have to modify the YML and push the app again. I am wondering if it is possible to modify or create an environment variable while the PCF app is running. It would be really nice if this could be done without having to redeploy the app. If it can't be done, is it because of Java's way of handling the environment?
Thanks
Can we change/modify the environment in PCF (Pivotal Cloud Foundry) at runtime?
Yes and no.
You can modify environment variables associated with an application while the application is running using cf set-env (to set or update) and cf unset-env (to delete).
This will update the environment variable in Cloud Controller at the time you run the command. However, this will not update the environment variable inside of a running application container. In order for your application to see the change that you made, you must cf restart, cf restage or cf push.
This has nothing to do with language specifics (i.e. it doesn't matter what language you're using). It is a requirement because the container where your application runs is created with a fixed set of environment variables; when those change, the container must be recreated. That said, even if the container could be changed at runtime, on Linux a process's environment variables cannot be updated externally at runtime either (there are technically some ways to do it, but it's very unlikely you'd do so in practice). The process itself must be restarted for its environment variables to change.
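As a concrete sketch (the app and variable names here are made up):

```bash
# Update the value stored in Cloud Controller; running containers do not
# see this change yet.
cf set-env my-app LOG_LEVEL debug

# Recreate the app's containers so the new value is picked up.
cf restage my-app        # or: cf restart my-app

# Remove a variable (again followed by a restart/restage to take effect).
cf unset-env my-app LOG_LEVEL
```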
If you want your configuration to update at runtime, you can look at something like Spring Cloud Config Server and its refresh capabilities. That said, most applications and frameworks assume configuration is read once while the app is starting up, so your application itself would also need to support changing that configuration at runtime.

Cloud Foundry triggers if application was created

Is there a possibility that Cloud Foundry triggers a function when a new application is pushed to the platform?
I would like to trigger some internal functions, like registration with the API gateway. I know that I can pull the information from the events API, https://apidocs.cloudfoundry.org/224/events/list_all_events.html, but is it also possible via push?
The closest thing I can think of to what you're asking is the profile script.
https://docs.cloudfoundry.org/devguide/deploy-apps/deploy-app.html#profile
The note about the Java buildpack not supporting .profile scripts is incorrect. It's a platform feature, so all buildpacks support them. The difference with Java apps is that you're probably pushing a JAR or WAR file, so it's harder to make sure the file is placed in the correct location. The location of the file is everything.
When your application starts, the platform will first run the .profile script, if it exists, that is packaged with your application. It's a standard shell script and you can do whatever you like in this file.
The only caveat is that your application will not start until this script completes successfully (i.e. exit 0). Thus you have a limited amount of time for that script to run and your application to start. How much time, you ask? That is configured by cf push -t and is in seconds. You can also set it in your manifest.yml with the timeout attribute.
Time (in seconds) allowed to elapse between starting up an app and the first healthy response from the app
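For example, assuming an app named my-app and an illustrative 180-second limit:

```bash
# Allow up to 180 seconds for the .profile script plus application startup.
cf push my-app -t 180

# The equivalent manifest.yml attribute, written as a heredoc to keep the
# sketch in shell:
cat > manifest.yml <<'EOF'
applications:
- name: my-app
  timeout: 180
EOF
```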
The .profile script is also something that each application needs to include. I suppose you could also use a custom buildpack to add that file, if you wanted it added across multiple applications. There's no easy way to add it for all apps, though.
Hope that helps!

Cloud Foundry Change Buildpack from Command Line

I have a Jenkins app running on Cloud Foundry for a POC. Since it's Jenkins it uses a bound service for file persistence.
I had to make a change to the Java Buildpack and would like Jenkins to use the updated buildpack.
I could pull the source for Jenkins from GitHub and push it again with updated references to the new buildpack in the manifest.yml file or via a command-line option. In theory, the bound file system service's state would remain intact. However, I haven't validated this assumption and have concerns that I might lose the state.
I've looked through the client CLI to see if there's a way to explicitly swap buildpacks without another push. However, I didn't see anything.
Is anyone aware of a way to change the buildpack of an existing application without re-pushing it to Cloud Foundry?
After some research, I couldn't find any way to swap the buildpack without a push. I did discover that my bound file system service remained intact and I didn't lose any work.
Answer: re-push to change the buildpack.
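For reference, the re-push that swaps the buildpack can name the updated buildpack explicitly with -b (the app name and buildpack URL below are placeholders):

```bash
# Re-push the existing app with the updated buildpack; bound services stay
# bound, and in my case the file system service kept its state.
cf push jenkins -b https://github.com/your-org/java-buildpack.git#your-branch
```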

How Docker and Ansible fit together to implement Continuous Delivery/Continuous Deployment

I'm new to the configuration management and deployment tools. I have to implement a Continuous Delivery/Continuous Deployment tool for one of the most interesting projects I've ever put my hands on.
First of all, individually, I'm comfortable with AWS, and I know what Ansible is, the logic behind it, and its purpose. I do not have the same level of understanding of Docker, but I get the idea. I went through a lot of Internet resources, but I can't get the big picture.
What I've been struggling with is how they fit together. Using Ansible, I can manage my infrastructure as code: building EC2 instances, installing packages... I can even deploy a full application by pulling its code, modifying config files, and starting the web server. Docker is, itself, a tool that packages an application and ensures that it can be run wherever you deploy it.
My problems are:
How does Docker (or Ansible and Docker) extend the Continuous Integration process?
Suppose we have a source code repository; the team members finish working on a feature and push their work. Jenkins detects this, runs all the acceptance/unit/integration test suites, and if they all pass, declares it a stable build. How does Docker fit in here? I mean, when the team pushes their work, does Jenkins have to pull the Dockerfile kept in the app's source, build the image of the application, start a container, and run all the tests against it? Or does it run the tests the classic way and, if all is good, then build the Docker image from the Dockerfile and save it somewhere private?
Should Jenkins tag the final image, using x.y.z for example?
Docker container configuration:
Suppose we have an image built by Jenkins and stored somewhere. How do we handle deploying the same image into different environments, and even with different configuration parameters (vhost config, DB hosts, queue URLs, S3 endpoints, etc.)? What is the most flexible way to deal with this issue without breaking Docker principles? Are these configurations baked into the image when it is built, or applied when the container based on it is started? If the latter, how are they injected?
Ansible and Docker:
Ansible provides a Docker module to manage Docker containers. Assuming I solve the problems mentioned above, when I want to deploy a new version x.y.z of my app, I tell Ansible to pull that image from where it is stored and start the app container. So how do I inject the configuration settings? Does Ansible have to log into the Docker image before it's running (this sounds insane to me) and use its Jinja2 templates the same way as with a classic host? If not, how is this handled?
Excuse me if this is a long question or if I misspelled something, but this is me thinking out loud. I've been blocked for the past two weeks and I can't figure out the correct workflow. I want this to be a reference for future readers.
Please share your experiences and solutions; it would be very helpful, because this looks like a common workflow.
I would like to answer in parts.
How does Docker (or Ansible and Docker) extend the Continuous Integration process?
Since Docker images are the same everywhere, you can use your Docker images as if they were production images. Therefore, when somebody commits code, you build your Docker image, run tests against it, and when all tests pass, you tag that image accordingly. Since Docker is fast, this is a feasible workflow.
Docker image layers are also incremental, so your images will have minimal impact on storage. When your tests fail, you may also choose to save that image; a developer can then pull it and easily investigate why the tests failed. Developers may choose to run the tests on their own machines too, since the Docker images in Jenkins and on their machines are no different.
What this brings is that all developers have the same environment and the same version of all software, since you decide what goes into the Docker images. I have come across bugs that were due to differences between developer machines; for example, on the same operating system, Unicode settings can affect your code. But with Docker images, all developers test against the same settings and the same software versions.
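A rough sketch of that Jenkins flow in shell; the registry, image names, tags, and test command are all assumptions:

```bash
# Build the image from the Dockerfile committed alongside the app.
docker build -t registry.example.com/myapp:build-42 .

# Run the test suite inside a container based on that exact image.
docker run --rm registry.example.com/myapp:build-42 ./run-tests.sh

# Only if the tests pass: tag the build as a release and push it to the
# private registry.
docker tag registry.example.com/myapp:build-42 registry.example.com/myapp:1.2.3
docker push registry.example.com/myapp:1.2.3
```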
Docker container configuration:
If you are using a private registry (and you should use one), then configuration changes will not affect disk space much. Therefore, except for sensitive configuration such as DB passwords, you can apply configuration changes to the Docker images themselves (baking the configuration into the container). You can then use Ansible to apply the non-baked configuration to deployed containers before/after startup using environment variables or Docker volumes.
https://dantehranian.wordpress.com/2015/03/25/how-should-i-get-application-configuration-into-my-docker-containers/
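A minimal sketch of injecting the non-baked settings when the container starts; the variable names and paths are made up:

```bash
# Pass secrets and host-specific settings as environment variables...
docker run -d \
  -e DB_PASSWORD="$DB_PASSWORD" \
  -e QUEUE_URL="https://queue.example.com" \
  registry.example.com/myapp:1.2.3

# ...or mount an environment-specific config file as a read-only volume.
docker run -d \
  -v /etc/myapp/production.conf:/app/config/app.conf:ro \
  registry.example.com/myapp:1.2.3
```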
Does Ansible have to log into the Docker image before it's running (this sounds insane to me) and use its Jinja2 templates the same way as with a classic host? If not, how is this handled?
No, Ansible does not log into the Docker image. Instead, Ansible with Jinja2 templates can be used to generate the Dockerfile: you can template the Dockerfile and inject your configuration into different files. Tag the resulting images accordingly and you have pre-configured images ready to spin up.
Regarding your question about handling multiple environment configurations using the same Docker image, I have been planning on using a service discovery tool like Consul as a centralized config/property management tool. When you start your container, you set an ENV var that tells it what application it is (appID) and what environment config it should use (e.g. MyApplication:Dev), and it pulls its config from Consul at startup. I still have to investigate the security around Consul (e.g. if we are storing DB connection credentials in there, how do we restrict who can query/update those values?). I don't want to use this just for containers, but for all apps in general. Another nice capability is changing a config value in Consul and having a hook back into your app to apply the change immediately (maybe a REST endpoint on your app to push changes to and apply them dynamically). Of course, your app has to be written to support this!
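As a sketch of that startup lookup (the Consul address, key layout, and variable names are assumptions on my part):

```bash
# The container is started with variables identifying the app and its
# environment.
docker run -d -e APP_ID=MyApplication -e APP_ENV=Dev registry.example.com/myapp:1.2.3

# Inside the image, an entrypoint script could then pull its settings from
# Consul's KV store before starting the app:
DB_HOST=$(curl -s "http://consul.example.com:8500/v1/kv/${APP_ID}/${APP_ENV}/db_host?raw")
export DB_HOST
exec ./start-app.sh
```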
You might be interested in checking out Martin Fowler's blog articles on immutable infrastructure and on Phoenix servers.
This isn't a complete solution, but I have suggestions for two of your issues. They might not be perfect, but these are the practices we are using in our workflow, and they have proven themselves so far.
Defining different environments: supposing you've written a different Ansible role for each environment you launch, we define an environment variable that sets the environment we wish the container to belong to. We then download the suitable configuration file from an S3 bucket into the container using that environment variable (which should be possible if you supply AWS credentials or give your server an IAM role) and inject these parameters into the code when building it.
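A sketch of that download step; the bucket name, paths, and the APP_ENV variable are placeholders:

```bash
# APP_ENV is set when the container (or server) is launched, e.g.
# -e APP_ENV=staging. With AWS credentials or an IAM role available, pull
# the matching configuration before building/starting the app.
aws s3 cp "s3://my-config-bucket/${APP_ENV}/app.conf" /app/config/app.conf
```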
Ansible doesn't need to log into the Docker app, but the solution is a bit tricky. I've tried two ways of tackling this problem, and neither is ideal. The first is to download the configuration file as part of the Docker image's command line and build the app on container startup. While this solution works, it breaches the Docker philosophy and makes the image highly prone to build errors.
Another solution is pushing several images to your Docker Hub repo and then pulling the appropriate image according to the environment at hand.
In broader strokes, I've tried launching our app completely with Ansible and it was hell; many configuration steps are tricky and get trickier when you try to implement them as a playbook. When I switched to maintaining the servers alone with Ansible and deploying the app itself with Docker, things got a lot easier.

How does Elastic Beanstalk deploy your application version to instances?

I am currently using AWS Elastic Beanstalk and I was curious how (internally) it knows, when you fire up an instance (or it does so automatically with scaling), to unpack the zip I uploaded as a version. Is there some environment setting that looks up my zip in my S3 bucket and then unpacks it automatically for every instance running in that environment?
If so, could this be used to automate a task such as running an SQL query on boot-up (instance deployment) too? Are these automated tasks changeable or viewable at all?
Thanks
I don't know how beanstalk knows which version to download and unpack, but running a task on start-up is trivial. Check out cloud-init, a tool written by Ubuntu that's now packaged in Amazon Linux. It allows you to pass arbitrary shell scripts into the UserData section of the instance configuration, and those shell scripts will run on startup.
It's a great way to bootstrap instances on startup, which avoids the soul-sucking misery of managing AMIs.
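For example, a user-data script along these lines runs when the instance starts; the package and the SQL bootstrap command are placeholders:

```bash
#!/bin/bash
# Passed in the UserData section of the instance configuration; cloud-init
# executes it on startup.
yum install -y postgresql
psql "postgresql://user:password@db.example.com/mydb" -f /tmp/bootstrap.sql
```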
A quick (possibly non-applicable) warning: If you're running a SQL query on a database that lives on the beanstalk AMI, you're pretty much guaranteed to lose your database at some point. Those machines are designed to be entirely transient. Do not put databases on them. See this answer for more details.
Since your goal seems to be to run custom configuration tasks, the answer is yes, there is a way to do that. You can define custom actions in .ebextensions configuration files packaged with your app. For example, you can configure a command to run every time a new machine is deployed:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#linux-commands
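As a sketch, an .ebextensions config along these lines would run a command on each deployment; the container_commands and leader_only keys follow the linked docs, while the file name and the actual SQL command are placeholders (written here as a shell heredoc, though the file itself is YAML):

```bash
mkdir -p .ebextensions
cat > .ebextensions/01-bootstrap.config <<'EOF'
container_commands:
  01_run_sql:
    command: "psql -h my-db.example.com -U app -f scripts/bootstrap.sql"
    leader_only: true
EOF
```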