We push a Java app to Cloud Foundry using cf push with the manifest file below:
applications:
- name: xyz-api
  instances: 1
  memory: 1G
  buildpack: java_buildpack_offline
  path: target/xyz-api-0.1-SNAPSHOT.jar
I understand that PaaS (e.g. Cloud Foundry) is a layer on top of IaaS (e.g. vCenter hosting Linux and Windows VMs).
In the manifest file, the buildpack only describes the userspace runtime libraries required to run the app.
Coming from a non-cloud background and reading this manifest file, I would like to understand...
1) How do I determine the operating system (OS) environment that an app is running in? On which operating system...
2) How is an app running on a BOSH instance different from a Docker container?
1) How do I determine the operating system (OS) environment that an app is running in? On which operating system...
The stack determines the operating system on which your app will run. There is a stack attribute in the manifest or you can use cf push -s to indicate the stack.
You can run cf stacks to see all available stacks.
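For example (a sketch assuming cflinuxfs3 is available in your environment), you can pin the stack in the manifest:

applications:
- name: xyz-api
  stack: cflinuxfs3

or on the command line:

cf push xyz-api -s cflinuxfs3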
In most environments at the time of writing, you will have cflinuxfs2. This is Ubuntu Trusty 14.04. It will be replaced by cflinuxfs3, which is Ubuntu Bionic 18.04, because Trusty is supported only through April of 2019. You will always have some cflinuxfs* stack, though; the number will just vary depending on when you read this.
In some environments you might also have a Windows-based stack. The original Windows-based stack is windows2012r2. This is quite old as I write this, so you probably won't see it any more. What you're likely to see is windows2016, or possibly something even newer, depending on when you read this.
If you need more control than that, you can always push a Docker image. That lets you pick the full OS image for your app.
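For example (image name is hypothetical):

cf push xyz-api --docker-image myregistry/xyz-api:latest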
2) How is an app running on a BOSH instance different from a Docker container?
Apps running on Cloud Foundry aren't deployed by BOSH directly. The app runs in a container. The container is scheduled and run by Diego, and Diego itself runs on BOSH-deployed VMs. So there's an extra layer in there.
At the core, the difference between running your app on Cloud Foundry and running an app in a docker container is minimal. They both run in a Linux "container" which has limitations put on it by kernel namespaces & cgroups.
The difference comes in a.) how you build the container and b.) how the container is deployed.
With Cloud Foundry, you don't build the container. You provide your app to CF & CF builds the container image based on the selected stack and the additional software added by buildpacks. The output in CF terminology is called a "droplet", but it is basically an OCI image (this will be even more so with buildpacks v3). When you need to upgrade or add new code, you just repeat the process and push again. The stack and buildpacks, which are automatically updated by the platform, will in turn provide you with a patched & up-to-date app image.
With Docker, you manually create your image, building it up from scratch or from some trusted base image. You add your own runtimes & application code. When you need to upgrade, it's on you to pull in updates from the base image & runtimes, or worse, to update your from-scratch image.
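For contrast, a minimal hand-rolled image for the Java app above might look something like this (a sketch; the base image tag is just an example, and keeping it patched is your job):

# You pick the base image and are responsible for updating it
FROM openjdk:8-jre
COPY target/xyz-api-0.1-SNAPSHOT.jar /app/app.jar
CMD ["java", "-jar", "/app/app.jar"]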
When it comes to deployment, CF handles this all for you automatically. It can run any number of instances of your app that you'd like, and it will automatically place those so that your app is resilient to failures in the infrastructure & in CF.
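For example, scaling the app from the manifest above to three instances is a single command:

cf scale xyz-api -i 3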
With Docker, that's on you or increasingly often on some other tool like Kubernetes.
Hope that helps!
Related
I'm considering the purchase of one of the new MacBook M1s. Docker Desktop is apparently unworkable with their Rosetta 2 engine, and all of my development efforts rely on Docker Desktop and a local development environment that auto-reloads when files are changed.
I haven't done much with Docker remote hosts, but I see that this could be a stop-gap solution until Docker rewrites its engine. Google is failing me... can you keep files on your local machine synced up with your Docker remote host?
No, Docker doesn't do this. Instead, Docker packages your application code into an image; that image can be transferred to a repository (with Docker Hub being the most prominent option), and then run on the remote system, without necessarily needing to have the application code or the interpreter directly installed there. Beyond the image system, Docker has no direct ability to transfer or mount files from one system to another (you could do something like create an NFS-backed named volume, but you would need to run the NFS server yourself).
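In shell terms the workflow is roughly this (repository name is hypothetical):

# On your workstation: package the code into an image and publish it
docker build -t myuser/myapp:latest .
docker push myuser/myapp:latest

# On the remote host: fetch and run the image; the source tree never leaves your machine
docker pull myuser/myapp:latest
docker run -d myuser/myapp:latest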
For day-to-day development, using your language's native isolation system often works better than trying to simulate a local development environment using Docker. For Python, consider using a tool like Pipenv to create a virtual environment from a Pipfile. Python is reasonably platform-independent, so you shouldn't notice any trouble using Apple silicon vs. Intel.
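For example, with Pipenv (one option among several):

pip install pipenv    # install the tool itself
pipenv install        # create a virtualenv from the project's Pipfile
pipenv shell          # activate it for day-to-day work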
Don't even consider using the Docker remote API. If you don't configure it perfectly, it's trivial to use it to root the host (and there are many instances of this in the wild). Even if it is configured correctly, you can't use it to mount files from your local system (a docker run -v bind-mount option is always interpreted relative to the Docker host it runs on). If you need to work directly on the remote host for whatever reason, use an ordinary ssh connection.
I just decided to jump into using Docker to test out building a microservice application using AWS Fargate.
My question really relates to hearing about many development teams using Docker to avoid people saying the phrase "works on my machine" when committing code. Although I see how that problem gets solved, I still do not see how Docker images are actually used in a development environment.
The workflow for anything beyond production baffles me. An example of my thinking:
A team of 10 devs all use Docker; each one pulls the image, with the source code in it, from the repo to their machine. If they all have an individual version of the image, then any edits they make to that image are their own, and when they push back to the repo none of the edits can be merged (besides which, editing an image's source code is not easily done either).
I am thinking of it in the same way as Git/GitHub, where code is pushed to a branch and then merged to master to create a finished product.
I guess pulling the code from the GitHub master branch and creating the Docker image from it is the way it's meant to be used, but again that points back to my original assumption of Docker being used for production environments rather than development.
Is Docker being used in development mostly so a dev can just test the feature on the same container that every other dev on the team is using, so all the environments match across the team?
I just really do not understand the workflow of development environments with docker.
I'd highlight three cases where I've found Docker particularly useful, prior to a production deploy:
Docker is really useful for installing local dependencies. If your application needs a database, docker run postgres with appropriate options. Need a clean start? Delete the container. Running two microservices that need separate databases? Start two containers. Is the second microservice maintained by another team? Run it in a container too.
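A sketch of the database case (the tag, container name, and password are just examples):

# Disposable PostgreSQL for local development
docker run -d --name dev-db -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:11
# Need a clean start? Throw it away and recreate it
docker rm -f dev-db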
Docker is useful for capturing the build environment in the CI system. Jenkins, for example, can run build steps inside a container, bind-mounting the current work tree in, so it's useful to build an image that just contains build-time dependencies (which can be updated independently of the CI system itself).
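Outside of Jenkins specifics, the same idea can be sketched with a plain docker command (image tag is just an example):

# Run the build inside a container, bind-mounting the current work tree in;
# only the image carries the build-time dependencies
docker run --rm -v "$PWD":/workspace -w /workspace maven:3.6-jdk-8 mvn -B package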
If you're running Docker in production, you can test the exact thing you're about to run. You're guaranteed the install environment will be the same in the QA and prod environments, because it's encapsulated inside the same Docker image. A developer can debug problems against the production-installed code without actually being in production.
In the basic scenario you describe, an important detail to note is that you never "edit an image"; you always docker build a new image from its Dockerfile and other source code. In compiled languages (C++, Go, Java, Rust, Haskell) the source code won't be in the image. Even if you're "using Docker in development" the actual source code will be in some other system (frequently Git), and typically you will have a CI system that builds "official" images from that source code.
Where I see Docker proposed for day-to-day development, it's either because the language ecosystem in use makes it hard to have multiple versions concurrently installed, or to avoid installing software on the host system. You need specific tooling support to "develop inside a container", and if developers choose their own IDEs, this support is not universal. Conversely, between OS package managers (APT, Homebrew) and interpreter version managers (rbenv, nvm), it's usually straightforward to install a couple of things on the host. If your application isn't that sensitive to, say, the specific version of Node, it's probably easier to use whichever version is already installed on your host than to try to insert Docker into the process.
I have a Django project. I am considering adding Docker to it before deploying to Elastic Beanstalk. I am very new to Django and Docker and want to know what the benefits are of using Docker when deploying a Django app to Elastic Beanstalk. Thanks!
The general benefits of using Docker in EB, as compared to the regular Python EB environment, are portability and reproducibility.
If you bundle your Django app as a Docker container, you know that your development environment will be exactly the same as your production one. All the dependencies, package versions, and tools will be the same in the container, regardless of whether it runs on your local workstation, home laptop, or the EB platform.
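A minimal Dockerfile for a Django app might look like this (a sketch; the module name mysite.wsgi and the use of gunicorn are assumptions):

FROM python:3.8
WORKDIR /app
COPY requirements.txt .
# requirements.txt is assumed to include gunicorn
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "mysite.wsgi"]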
However, when you use the regular Python platform, portability and reproducibility can be difficult to guarantee. The current Python platform is based on Amazon Linux 2. So the question is: is your development environment at home or work exactly the same? Usually it is not, which often leads to issues in the vein of "It works on my local Ubuntu workstation, but not on EB".
Also, one day you may decide to migrate your app out of EB or even AWS. It will be much easier to do that when using Docker, because EB is a custom AWS product, with its own settings and requirements, that is not available from other cloud providers.
EB supports two types of Docker-based environments:
single-docker
multi-docker
Depending on your requirements, you would use one of them. Each has its own use-cases, which I think are out of scope for this question.
All apps on our team use a buildpack named ruby_latest_buildpack. It's currently a renamed version of ruby_1_7_27_buildpack. We're about to make it become ruby_1_7_28_buildpack.
What will happen to deployed and running applications when we update ruby_latest_buildpack? If we restart an application, will it continue to run under the environment that was created by the buildpack at deploy time, or will it start to pickup features provided by the updated buildpack?
Once the droplet is created (during the staging process), all the frameworks and runtimes (which are essentially provided by buildpacks) are already baked into the image. So if you just restart your application, the old buildpack's output will be used. If you want to use the updated buildpack, you will have to restage your application.
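In cf CLI terms:

cf restart APP_NAME   # reuses the existing droplet: still the old buildpack's output
cf restage APP_NAME   # re-runs staging against the current buildpack and builds a new droplet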
As far as my understanding goes, the Kubernetes engine is meant for deploying applications that can be load balanced; for example, an application which unhashes a string: if pod-a is under high load, work would be offloaded to pod-b. Correct me if I am wrong here, since if this is false my following question will not make sense.
After exploring it for a few hours, I can't seem to figure out how to deploy a C++ application to the Kubernetes cluster. How would I do so?
What I tried:
I tried to follow the guide Interactive Tutorial - Deploying an App; however, I couldn't understand how I would get my C++ app as an image that could be deployed.
What the C++ application is:
At the moment it proxies TCP traffic to another HOST designated by the client's HOSTNAME. It is pretty much a reverse proxy; however, this is NOT an HTTP application.
Is Kubernetes the right choice?
Kubernetes is really useful for load-balancing workloads, providing high availability in case of failures, speeding up test processes, increasing safety during production rollouts through different deployment strategies, and increasing security through segregation.
However, not all kinds of workloads can take advantage of all the features Kubernetes introduces.
For example, if your application is built in such a way that it needs a stable amount of RAM and CPU, the code is really stable as well, and you need merely one replica, then maybe Kubernetes and containers are not the best choice (even if you can perfectly well use them), and you should rather implement everything on a big monolithic server/virtual machine.
But if you need to deploy it on different cloud providers, or it should run for merely some hours every day, then it may well make use of those features. If you are willing to add a layer, make sure that you need the features it introduces; otherwise it is merely overhead.
Note that Kubernetes is not capable of splitting your workload on its own. As for what you mean by "If pod-a is on high load, it would be offloaded to pod-b": yes, it is possible, but you have to instruct it to do so.
Kubernetes takes care of running your pods, making sure they are scheduled on nodes where enough memory and CPU are available according to your specification. You can also set up autoscaling procedures to support high-workload periods, or even scale the cluster itself. Your application should be designed to support a divide-and-conquer pattern; otherwise you will likely end up with three nodes, one pod running on one node, two nodes idle, and overhead that you could have avoided.
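For example, to have Kubernetes scale an existing deployment between 1 and 5 replicas based on CPU usage (the deployment name "hello" matches the kubectl run example further down):

kubectl autoscale deployment hello --cpu-percent=80 --min=1 --max=5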
If your C++ application pod unhashes a string and a single request could consume all the resources of a node, Kubernetes will not "split" the initial workload and will not create more pods for you, scheduling them across the cluster! Of course you can achieve something similar, but it will not come for free, and you will likely need to modify your C++ code.
For sure you can take advantage of Kubernetes: running your application on it is pretty easy, but you may have to modify something in the architecture to take full advantage of those features.
Deploy the C++ application
The process to deploy your application in Kubernetes is pretty standard. Develop it locally, create a Docker image with all the libraries and components you need, test it locally, push it to the registry, and create the deployment in Kubernetes.
Let's say that you have all the resources needed to run your application, plus your executable file, in a local folder. Create the Dockerfile.
This is an example to show the syntax; modify it to implement your application:
# Download base image, Ubuntu 16.04 (Xenial Xerus)
FROM ubuntu:16.04
# Update software repository
RUN apt-get update
# Install nginx, php-fpm and supervisord from the Ubuntu repository
RUN apt-get install -y nginx php7.0-fpm supervisor
# Define the environment variable
ENV nginx_vhost /etc/nginx/sites-available/default
[...]
# Enable php-fpm on the nginx virtualhost configuration
COPY default ${nginx_vhost}
[...]
RUN chown -R www-data:www-data /var/www/html
# Volume configuration
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx", "/var/www/html"]
# Configure services and port
COPY start.sh /start.sh
CMD ["./start.sh"]
EXPOSE 80 443
Build it, push it to the registry, and deploy it by running:
export PROJECT_ID="$(gcloud config get-value project -q)"
docker build -t gcr.io/${PROJECT_ID}/hello-app:v1 .
gcloud docker -- push gcr.io/${PROJECT_ID}/hello-app:v1
kubectl run hello --image=gcr.io/${PROJECT_ID}/hello-app:v1 --port [port number if needed]
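Since your app is C++ rather than PHP, a closer sketch is a multi-stage Dockerfile (file name, port, and static linking are assumptions for illustration):

# Build stage: compile the proxy from source
FROM gcc:8 AS build
WORKDIR /src
COPY proxy.cpp .
# Static linking so the binary carries no runtime library dependencies
RUN g++ -O2 -static -o proxy proxy.cpp

# Runtime stage: ship only the compiled binary
FROM ubuntu:16.04
COPY --from=build /src/proxy /usr/local/bin/proxy
EXPOSE 9000
CMD ["/usr/local/bin/proxy"]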
More information is here.