Cloud Foundry - is it possible to check files on flapping app? - cloud-foundry

Is there a way to review content/files on a flapping app instance?
I ran into this problem today with a Go application: since the container didn't start, I couldn't check which files were there. So the only way to debug the problem (which, by the way, was caused by a wrong filename) was the log stream.
Thank you,
Leszek
PS.
I am using HPE Stackato, but I assume the approach will be similar to the approach in CF and PCF...

With Pivotal Cloud Foundry, you can use cf ssh to SSH into the container, or to set up port forwarding so that you can use plain ssh, or even scp and sftp, to access the container and view its file system. You can read some more about it in:
The diego-ssh repository's README
The documentation on Accessing Apps with SSH
I highly doubt this functionality exists with HPE Stackato because it uses an older Cloud Foundry architecture. Both PCF and HPE Stackato are based on open source Cloud Foundry, but PCF uses the newer Diego architecture, while Stackato is still on the DEA architecture, according to the nodes listed in the Stackato Cluster Setup docs.
With the DEA architecture, you should be able to use the cf files command, which has the following usage:
NAME:
   files - Print out a list of files in a directory or the contents of a specific file

USAGE:
   cf files APP_NAME [PATH] [-i INSTANCE]

ALIAS:
   f

OPTIONS:
   -i   Instance
To deal with a container that is failing to start, there is currently no out-of-the-box solution with Diego, but it has been discussed. This blog post discusses some options, including:
For the app in question, explicitly specify a start command by appending "; sleep 1d". The push command would look like this: cf push <app_name> -c "<original_command>; sleep 1d". This will keep the container around for a day after the process within the container has exited.
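Put together, the workflow might look like this (a sketch; the app name and start command are placeholders):

```shell
# Keep the crashing container alive for a day so it can be inspected
cf push my-app -c "./my-app; sleep 1d"

# Then open a shell inside the running container (requires Diego / cf ssh support)
cf ssh my-app

# Inside the container, look around the droplet's files
ls -la app/
```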

Related

Cloud Build -> Google Cloud Storage: Question about downtimes at deployment time

- name: 'google/cloud-sdk:alpine'
entrypoint: 'gsutil'
args: ['-m', 'rsync', '-r', '-d', '-p', 'dist/', 'gs://my-site-frontend']
Good morning, the snippet above is the command that, via Google Cloud Build, copies the build of my VueJS frontend to a Google Cloud Storage bucket, where the website will be hosted.
My question is simple and short: if a user is browsing the site at the time of this deployment (the execution of the command above), will they notice any inconsistencies, downtime, or anything like that while Cloud Build is copying/syncing the new files via rsync? Is this task seamless enough? Could a user see inconsistent behavior when accessing a file that is being copied? Should I use Cloud Run instead?
Yes, you can have inconsistency for a while (files outdated or not found). The best solution is to use a product that packages the sources in a consistent manner. You can use Cloud Run, but you can also use App Engine standard for that.
The main advantage of these two solutions is that each version is atomic, packaged in a single container. That way, you can easily perform rollbacks, traffic splitting, canary releases, A/B testing, and so on. All of these are impossible with Cloud Storage.
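For instance, with App Engine standard, each deployment is a new version, and traffic can be shifted or rolled back with a single command (a sketch; the version labels v1/v2 are made up):

```shell
# Deploy a new version without routing traffic to it yet
gcloud app deploy --version v2 --no-promote

# Canary: send 10% of traffic to the new version
gcloud app services set-traffic default --splits v1=0.9,v2=0.1

# Roll back: route all traffic to the old version again
gcloud app services set-traffic default --splits v1=1.0
```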

Do I need to deploy function in gcloud in order to have OCR?

This GCloud tutorial has a "Deploying the function" section, with commands such as
gcloud functions deploy ocr-extract --trigger-bucket YOUR_IMAGE_BUCKET_NAME --entry-point
But the Quickstart: Using Client Libraries does not mention it at all; all it needs is
npm install --save @google-cloud/storage
then a few lines of code will work.
So I'm confused: do I need the "deploy" in order to have OCR? In other words, what do/don't I get from "deploy"?
The command
npm install --save @google-cloud/storage
is an example of installing the Google Cloud Client Library for Node.js in your development environment, in this case for the Cloud Storage API. This example is part of the Setting Up a Node.js Development Environment tutorial.
Once you have coded, tested, and set all the configuration for the app as described in the tutorial, the next step is deployment, in this example as a Cloud Function:
gcloud functions deploy ocr-extract --trigger-bucket YOUR_IMAGE_BUCKET_NAME --entry-point
So note that these commands are two different steps for running OCR with Cloud Functions, Cloud Storage, and the other Cloud Platform components in the tutorial example, using the Node.js environment.
While Cloud Functions (CF) itself is easy to understand, this answers my own question specifically: what does the "deploy" actually do?
To have the code work for you, the functions must be deployed/uploaded to Google Cloud. For people like me who have never used GCF, this is new. My understanding was that all I needed to supply was credentials, and to satisfy whatever server/backend (sorry, cloud) settings applied when my local app called the remote web API. That's where I got stuck. The key point I missed is that the sample app itself is a set of server/backend event-handler trigger functions, and Google therefore requires them to be "deployed", just like deploying something during a staging or production release in a traditional corporate environment. So it's a real deployment. If you still don't get it, go to your GC admin page, open the menu, choose Cloud Functions, and open the "Overview" tab: you will see them there. Hence the next point.
The three gcloud deploy commands used in "Deploying the Functions" contain ocr-extract, ocr-save and ocr-translate. These are not switches; they are function names, and you can name them anything. Now, still in the admin page, click on any of the three, then "Source". Bang, they are there, deployed (uploaded).
Google: since this is a tutorial and readers have not yet dug into the command reference, I recommend adding a note telling them that those three ocr-* names can be anything you want.
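Once deployed, the functions can also be inspected from the command line rather than the admin page (the function name here is the one from the tutorial):

```shell
# List every Cloud Function deployed in the current project
gcloud functions list

# Show details (trigger, entry point, source location) for one of them
gcloud functions describe ocr-extract
```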

Cloud Foundry Change Buildpack from Command Line

I have a Jenkins app running on Cloud Foundry for a POC. Since it's Jenkins it uses a bound service for file persistence.
I had to make a change to the Java Buildpack and would like Jenkins to use the updated buildpack.
I could pull the source for Jenkins from GitHub and push it again with updated references to the new buildpack in the manifest.yml file or via a command-line option. In theory, the bound file system service's state would remain intact. However, I haven't validated this assumption and am concerned I might lose the state.
I've looked through the client CLI to see if there's a way to explicitly swap buildpacks without another push. However, I didn't see anything.
Is anyone aware of a way to change the buildpack of an existing application without re-pushing it to Cloud Foundry?
After some research, I couldn't find any way to swap the buildpack without a push. I did discover that my bound file system service remained intact and I didn't lose any work.
Answer: re-push to change the buildpack.
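In other words, the way to switch buildpacks is another push, either by editing the manifest or by using the -b flag (a sketch; the app name and buildpack URL are placeholders):

```shell
# Re-push the app, pointing it at the updated buildpack
cf push my-jenkins -b https://github.com/example/java-buildpack-fork.git

# Verify the bound services (e.g. the file system service) are still attached
cf services
```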

How Docker and Ansible fit together to implement Continuous Delivery/Continuous Deployment

I'm new to the configuration management and deployment tools. I have to implement a Continuous Delivery/Continuous Deployment tool for one of the most interesting projects I've ever put my hands on.
First of all, individually, I'm comfortable with AWS, and I know what Ansible is, the logic behind it, and its purpose. I don't have the same level of understanding of Docker, but I get the idea. I went through a lot of Internet resources, but I can't get the big picture.
What I've been struggling with is how they fit together. Using Ansible, I can manage my infrastructure as code: building EC2 instances, installing packages... I can even deploy a full application by pulling its code, modifying config files, and starting the web server. Docker, in turn, is a tool that packages an application and ensures that it can be run wherever you deploy it.
My problems are:
How does Docker (or Ansible and Docker) extend the Continuous Integration process!?
Suppose we have a source code repository; the team members finish working on a feature and push their work. Jenkins detects this, runs all the acceptance/unit/integration test suites, and if they all pass, it declares the build stable. How does Docker fit here? I mean, when the team pushes their work, does Jenkins have to pull the Dockerfile stored with the app's source, build the image of the application, start the container, and run all the tests against it? Or does it run the tests the classic way and, if all is good, build the Docker image from the Dockerfile and save it in a private registry?
Should Jenkins tag the final image using x.y.z for example!?
Docker container configuration:
Suppose we have an image built by Jenkins and stored somewhere; how do we handle deploying the same image into different environments, with different configuration parameters (vhost config, DB hosts, queue URLs, S3 endpoints, etc.)? What is the most flexible way to deal with this without breaking Docker principles? Are these configurations baked into the image when it is built, or when the container based on it is started? If the latter, how are they injected?
Ansible and Docker:
Ansible provides a Docker module to manage Docker containers. Assuming I solved the problems mentioned above, when I want to deploy a new version x.y.z of my app, I tell Ansible to pull that image from wherever it is stored and start the app container. So how do I inject the configuration settings? Does Ansible have to log in to the Docker image before it's running (this sounds insane to me) and use its Jinja2 templates the same way as with a classic host? If not, how is this handled?
Excuse me if this is a long question or if I misspelled something, but this is my thinking out loud. I've been blocked for the past two weeks and can't figure out the correct workflow. I want this to be a reference for future readers.
Please share your experiences and solutions; it would be very helpful, because this looks like a common workflow.
I would like to answer in parts
How does Docker (or Ansible and Docker) extend the Continuous Integration process!?
Since Docker images are the same everywhere, you use your Docker images as if they were production images. Therefore, when somebody commits code, you build your Docker image and run the tests against it. When all tests pass, you tag that image accordingly. Since Docker is fast, this is a feasible workflow.
Also, Docker changes are incremental, so your images will have minimal impact on storage. When your tests fail, you may also choose to save that image; that way, a developer can pull it and easily investigate why the tests failed. Developers may choose to run the tests on their own machines too, since the Docker images in Jenkins and on their machines are no different.
What this brings is that all developers have the same environment and the same version of all software, since you decide what goes into the Docker images. I have come across bugs that were due to differences between developer machines. For example, on the same operating system, Unicode settings may affect your code; but in Docker images, all developers test against the same settings and the same software versions.
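The commit/build/test/tag cycle described above might look like this in a Jenkins shell step (a sketch; the registry name, image name, and test script are placeholders):

```shell
# Build an image for the commit under test
docker build -t registry.example.com/myapp:${GIT_COMMIT} .

# Run the test suite inside the freshly built image
docker run --rm registry.example.com/myapp:${GIT_COMMIT} ./run-tests.sh

# Only if the tests pass: mark the image stable and publish it
docker tag registry.example.com/myapp:${GIT_COMMIT} registry.example.com/myapp:stable
docker push registry.example.com/myapp:stable
```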
Docker container configuration:
If you are using a private registry, and you should use one, then configuration changes will not consume much disk space. Therefore, except for security-sensitive configuration such as DB passwords, you can apply configuration changes to the Docker images themselves (baking the configuration into the container). You can then use Ansible to apply the non-baked configuration to deployed images before/after startup, using environment variables or Docker volumes.
https://dantehranian.wordpress.com/2015/03/25/how-should-i-get-application-configuration-into-my-docker-containers/
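Injecting the non-baked configuration at container startup can then be done with environment variables or volumes (a sketch; all names and paths are placeholders):

```shell
# Pass per-environment settings as environment variables...
docker run -d -e DB_HOST=db.prod.internal -e DB_PASSWORD="$DB_PASSWORD" myapp:1.2.3

# ...or mount a host directory holding environment-specific config files
docker run -d -v /etc/myapp/prod:/app/config:ro myapp:1.2.3
```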
Does Ansible have to log in the Docker image, before it's running (this sounds insane to me) and use its Jinja2 templates the same way with a classic host!? If not, how is this handled?!
No, Ansible will not log in to the Docker image, but Ansible with Jinja2 templates can be used to generate the Dockerfile. You can template the Dockerfile and inject your configuration into different files. Tag your images accordingly and you have configured images ready to spin up.
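A minimal sketch of that approach in an Ansible playbook, assuming a Jinja2-templated Dockerfile (the file names and variables are made up, and module parameters vary between Ansible versions):

```yaml
# Render a Dockerfile from a Jinja2 template, then build a tagged image from it
- name: Render Dockerfile for this environment
  template:
    src: Dockerfile.j2
    dest: /tmp/build/Dockerfile

- name: Build the configured image
  docker_image:
    path: /tmp/build
    name: myapp
    tag: "{{ app_version }}-{{ env_name }}"
```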
Regarding your question about handling multiple environment configurations with the same Docker image: I have been planning to use a service discovery tool like Consul as a centralized config/property management store. When you start your container, you set an env var that tells it which application it is (appID) and which environment config it should use (e.g. MyApplication:Dev), and it pulls its config from Consul at startup. I still have to investigate the security around Consul (if we store DB connection credentials in there, for example, how do we restrict who can query/update those values?). I don't want to use this just for containers, but for all apps in general. Another cool capability is to change a config value in Consul and have a hook back into your app to apply the change immediately (maybe a REST endpoint on your app that changes are pushed down to and applied dynamically). Of course, your app has to be written to support this!
You might be interested in checking out Martin Fowler's blog articles on immutable infrastructure and on Phoenix servers.
Although not a complete solution, I have suggestions for two of your issues. They might not be perfect, but these are the practices we use in our workflow, and they have proven themselves so far.
Defining different environments: supposing you've written a different Ansible role for each environment you launch, we define an environment variable holding the environment we wish the container to belong to. We then download the suitable configuration file from an S3 bucket into the container using that env variable (which should be possible if you supply AWS creds or give your server an IAM role) and inject those parameters into the code when building it.
Ansible doesn't need to log in to the Docker app, but the solution is a bit tricky. I've tried two ways of tackling this problem, and neither is ideal. The first is to download the configuration file as part of the Docker image's command line and build the app on container startup. While this solution works, it breaches the Docker philosophy and makes the image highly prone to build errors.
Another solution is to push several images to your Docker Hub repo and then pull the appropriate image according to the environment at hand.
In a broader stroke, I tried launching our app entirely with Ansible and it was hell; many configuration steps are tricky and get trickier when you try to implement them as a playbook. When I switched to maintaining the servers alone with Ansible and deploying the app itself with Docker, things got a lot easier.

How do I know what .ebextensions config file to create?

I think I'm on the right path. I can use .ebextensions to change some of the conf files for the instance I'm running. Since I'm using Elastic Beanstalk, and that a lot of the software is shrinkwrapped (which I'm fine with), I should be using .ebextensions as a means of modifying the environment.
I want to employ some form of mod_rewrite config, but I know nothing about this Amazon Linux. I don't even know what the web server is. I've been through the console for the past few hours and see no trace of the things I want to override.
Apparently I can open a shell to take a look around, but anything I modify that way will be overridden, since Beanstalk is handling the config. I'm not entirely sure about that last point.
Should I just ssh and play in userland like a typical unix host?
You can definitely SSH to the instance and look around. But remember that your changes are not persistent. You should look at .ebextensions config files as the way to re-run your commands on the host, and more.
It might take some time to see where ElasticBeanstalk stores configuration files and all other interesting things.
To get you started: your app files are located at /opt/python/current/app, and if you are using Python, it runs in a virtual environment at /opt/python/run/venv/bin/python27.
The Customizing the Software on EC2 Instances Running Linux guide contains detailed information on what you can do:
Packages - install packages
Sources - retrieve archives
Files - operations with files
Users - anything with users
Groups - anything with groups
Commands - execute instance commands
Container_commands - execute commands after the container is extracted
Services - launch services
Option_settings - configure container settings
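A minimal .ebextensions config file exercising a few of these keys might look like this (a sketch; the file and command names are made up, and the Apache paths assume the default Python platform):

```yaml
# .ebextensions/01-rewrite.config
files:
  "/etc/httpd/conf.d/rewrite.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      RewriteEngine On

container_commands:
  01_restart_httpd:
    command: "service httpd restart"
```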
See if that satisfies your requirements, if not, come back to StackOverflow and ask more questions.