App with Docker Image and Memory allocation - cloud-foundry

I came across a questionnaire on containers and buildpacks. I could not find the answers to the following questions:
What buildpack is used to stage Docker apps?
I would assume a non-Docker app requires a buildpack, which produces the container image during the staging process. For a Docker-based application a buildpack is not relevant, since it is already a container image. Is this a correct understanding?
When you allocate 1G to an app, how much memory does the application receive? What component determines this?
I would assume the actual application (for example: my_sample_spring_boot_app) would receive less than 1G, since some part of the memory is used for infrastructure, the stack, the runtime, etc. Is this a correct understanding?
Could you please help me with some guidance?

What buildpack is used to stage Docker apps?
I would assume a non-Docker app requires a buildpack, which produces the container image during the staging process. For a Docker-based application a buildpack is not relevant, since it is already a container image. Is this a correct understanding?
That sounds like a trick question. When you run Docker images on Cloud Foundry, no buildpacks run.
There are different app lifecycles to run each type of app. So when you have source code to build and run, the buildpacks app lifecycle takes that, runs the buildpacks, and generates a droplet from the output. The droplet is then run.
With a Docker image, the docker app lifecycle is used. No build/stage happens since that's assumed to happen externally, and the image is just ready to be run.
When you allocate 1G to an app, how much memory does the application receive? What component determines this?
I would assume the actual application (for example: my_sample_spring_boot_app) would receive less than 1G, since some part of the memory is used for infrastructure, the stack, the runtime, etc. Is this a correct understanding?
Correct. If you allocate a 1G memory limit, then everything running in the container must fit within that 1G limit.
Your application will get most of that memory, but there are a couple of small processes that run in every container: an init process (i.e. PID 1), an SSH daemon (for cf ssh), and in most CF versions an Envoy proxy (for TLS termination in the container). I haven't looked recently, but a while back these took up around 24M. If anyone has measured recently, please comment.
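If you want to see this for yourself, you can list what's running inside one of your containers. A minimal sketch, assuming a cflinuxfs-based container and using my_sample_spring_boot_app as a stand-in app name:
cf ssh my_sample_spring_boot_app -c "ps aux --sort=-rss"
The RSS column shows resident memory per process, so you can see what the init process, sshd and Envoy actually cost next to your app.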
Everything has to fit under the memory limit. If your application forks (perhaps worker processes, or sub-processes you run as part of the application), all of that has to stay under the 1G limit as well.
One last note: for Java apps, 1G is the total memory size of the container, NOT your JVM heap. That's a common misconception. The entire JVM process and all its different memory regions (heap, metaspace, threads, direct memory, code cache, etc.) have to fit within the 1G limit.
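As a rough illustration of how a 1G limit gets carved up for a Java app (the numbers below are assumptions in line with typical Java buildpack memory calculator defaults, not exact buildpack output):
container limit:                            1024M
minus container overhead (init/sshd/envoy):  ~24M
minus thread stacks (250 threads x 1M):      250M
minus reserved code cache:                   240M
minus direct memory:                          10M
minus metaspace (sized from your classes):  ~100M
leaves for the heap (-Xmx):                 ~400M
The Java buildpack's memory calculator works through a budget like this at start-up and sets the JVM flags accordingly, which is why your heap ends up well below the 1G you allocated.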

Related

Questions regarding Cloudfoundry Application with multiple Processes

I am reading about the concepts of sidecars and multi-process applications in Cloud Foundry:
https://docs.cloudfoundry.org/devguide/multiple-processes.html
https://docs.cloudfoundry.org/devguide/sidecars.html
I have a few questions which I could not figure out myself.
Q1: When to use a CF application with a sidecar vs. when to use a CF application with multiple processes
I understood that the primary difference between a sidecar and a multi-process application is related to containers. A sidecar process runs in the same container as the main process, whereas in a multi-process application each process runs in a separate container.
I could not figure out in which scenarios we should use a sidecar and in which scenarios we should use a multi-process application.
Q2: Different processes in different technologies
In an application with multiple processes, if I want to run 2 processes in 2 different technologies (one process in Java, another in Go or anything else), how do I do so?
This question comes to my mind because I see the buildpack configuration goes with the application, not with the process. So I am getting the impression that all processes must be in the same technology (or can we provide multiple buildpacks here?).
Here is a sample manifest.yml that I am using:
applications:
- name: multi-process1
  disk_quota: 1G
  path: target/SampleApp-1.jar
  instances: 1
  memory: 2G
  buildpacks:
  - java_buildpack
  env:
    CONFIG_SERVER_PORT: 8080
  processes:
  - type: 'task'
    command: '$PWD/.java-buildpack/open_jdk_jre/bin/java -jar $PWD/BOOT-INF/lib/mycustomlogger-1.jar'
    memory: 512MB
    instances: 1
    health_check:
      type: http
  - type: 'sampleProcess2'
    command: '$PWD/.java-buildpack/open_jdk_jre/bin/java -jar $PWD/BOOT-INF/lib/mycustomlogger-1.jar'
    memory: 512MB
    instances: 1
    health_check:
      type: http
  - type: 'web'
    #command: '$PWD/.java-buildpack/open_jdk_jre/bin/java $PWD/BOOT-INF/classes/com/example/SampleApp'
    memory: 1G
    health_check:
      type: http
Q3: Interacting processes
In this context, how can one process call/talk/interact with the other processes within the application? What are the options available here? I could not find any sample which demonstrates multiple interacting processes within an app; any sample would be really helpful.
Q4: Difference between Multi Target Application and Multi Process Application
I came across a concept called Multi Target Application, reference: https://www.cloudfoundry.org/blog/accelerating-deployment-distributed-cloud-applications/
I did not find such a possibility in standard Cloud Foundry, but I felt it might be "similar" to a multi-process app (since the parts run in independent containers and do not impact each other).
My questions are:
What are the differences between Multi Target Application and Multi Process Application?
What are the fundamental Cloud Foundry concepts on which Multi Target Application is built?
Any guidance would be really appreciated.
Q1: When to use a CF application with a sidecar vs. when to use a CF application with multiple processes
Different process types are helpful when you have well-separated applications. They might talk to each other, they might interact, but it's done through some sort of published interface like a REST API or a message queue.
A typical example is a work queue. You might have one process that's running your web application & handling traffic, but if a big job comes in then it'll instruct a worker process, running separately, to handle that job. This is often done through a message queue. The advantage is you can scale each individually.
With a sidecar, the processes run in the same container. This works well for the scenario where you need tight coupling between two or more processes, for example if they need to share the same file system or if you have a proxy process that intercepts traffic.
The two processes are linked in a way that they scale in tandem, i.e. there is a one-to-one relationship between them. This relationship is necessary because if you scale up the application instance count, you're scaling up both. You cannot scale them independently.
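To make the sidecar case concrete, here is a minimal manifest sketch (the sidecar name and command are hypothetical; see the sidecars doc linked above for the full schema):
applications:
- name: my-app
  memory: 1G
  sidecars:
  - name: config-proxy
    process_types:
    - web
    command: ./config-proxy
    memory: 256M
Because the sidecar is declared on the application, every instance of the web process gets its own copy of the proxy in the same container, which is exactly the one-to-one scaling described above.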
In an application with multiple processes, if I want to run 2 processes in 2 different technologies (one process in Java, another in Go or anything else), how do I do so?
You're correct. You'd need to rely on multi-buildpack support.
Your application is only staged once and a single droplet is produced. Each process is spun up from that same droplet, so everything you need must be built into that one droplet. Thus you need to push everything together and run multiple buildpacks.
In your manifest.yml, the buildpacks entry is a list. You can specify the buildpacks you want to run and they will run in that order.
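For example, a sketch of the relevant manifest fragment (the buildpack names are the commonly used system buildpack names; check cf buildpacks on your foundation):
applications:
- name: multi-process1
  buildpacks:
  - go_buildpack
  - java_buildpack
With the classic buildpacks, the earlier entries act as supply buildpacks that contribute dependencies, and the final one in the list is the one that sets the default start command.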
Alternatively, you could pre-compile. If you're using Go, cross-compilation is often trivial, so you could compile ahead of time and just push the generated binary with your application.
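A sketch of that pre-compile approach for Go (the package path and binary name are hypothetical):
GOOS=linux GOARCH=amd64 go build -o worker ./cmd/worker
You would then include the worker binary in the directory you push and point the process's command at it, so no Go buildpack is needed at all.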
In this context, how can one process call/talk/interact with the other processes within the application? What are the options available here? I could not find any sample which demonstrates multiple interacting processes within an app; any sample would be really helpful.
I'll split this in two parts:
If you're talking about apps running as a sidecar, it depends on what the sidecar does, but you have options. Basically, anything you can do in a Linux environment (keeping in mind you're running as a non-root user), you can do here: coordinate through a shared file/folder, intercept the network port and proxy to another port, or use other IPC mechanisms.
If you're talking about multiple processes (the CF terminology), such that each runs in a separate container, then you are more limited. You would need to use some external method to communicate. This could be a service broker, a database (not recommended), or an API (REST, gRPC or RSocket are all options).
Note that this "external" method of communication doesn't necessarily need to be public, just external to the container. You could use a private service like a broker/DB, or you could use internal routes and container networking.
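A sketch of the internal-route approach (the app names are hypothetical, apps.internal is the common default internal domain, and older CLI versions take the destination via a --destination-app flag instead of a positional argument):
cf map-route backend-app apps.internal --hostname backend
cf add-network-policy frontend-app backend-app --port 8080 --protocol tcp
After that, frontend-app can reach the other app at http://backend.apps.internal:8080 without the traffic ever leaving the platform's network.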
Q4: Difference between Multi Target Application and Multi Process Application
Sorry, I'm not sure about this one. Multi Target apps are not a core concept in CF. I'll leave it for someone else to answer.

How can I scale CloudFoundry applications "down" without the risk of restarting all of them?

This is a question regarding the Swisscom Application Cloud.
I have implemented a strategy to restart already deployed CloudFoundry applications without using cf restart APP_NAME. I wanted to be able to:
restart running applications without needing access to the app manifest, and
avoid them suffering any downtime.
The general concept looks like this:
1. cf scale APP_NAME -i 2: increase the instance count of the app from 1 to 2, then wait for all app instances to be running
2. cf restart-app-instance APP_NAME 0: restart the "old" app instance, then wait for all app instances to be running again
3. cf scale APP_NAME -i 1: decrease the instance count of the app back from 2 to 1
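Scripted end to end, the flow might look like this sketch (counting running instances by grepping cf app output is illustrative and depends on your CLI version's output format):
APP=APP_NAME
cf scale "$APP" -i 2
until [ "$(cf app "$APP" | grep -c '#.*running')" -ge 2 ]; do sleep 5; done
cf restart-app-instance "$APP" 0
sleep 10   # give the restart time to begin before polling again
until [ "$(cf app "$APP" | grep -c '#.*running')" -ge 2 ]; do sleep 5; done
cf scale "$APP" -i 1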
This generally works and usually does what I expect it to do. The problem I am having occurs at stage (3), where sometimes, instead of just scaling the instance count back down, CloudFoundry will also restart all (remaining) instances.
I do not understand:
Why does this happen only sometimes (all instances restarting when scaling down)?
Shouldn't CloudFoundry keep the remaining instances up and running?
If cf scale is not able to keep perfectly fine running app instances alive, when is it useful?
Please Note:
I am well aware of the Bluegreen / Autopilot plugins for zero-downtime deployment of applications in CloudFoundry, and I am actually using them for our deployments from our build server, but they require me to provide a manifest (and additional credentials), which in this case I don't have access to (unless I can somehow extract it from a running app via cf create-app-manifest?).
Update 1:
Looking at the plugins again, I found bg-restage, which apparently does approximately what I want, but I have no idea how reliable it is.
Update 2:
I have concluded that it's probably an obscure issue (or bug) in CloudFoundry and that cf scale gives no guarantee that existing instances remain running. As pointed out above, I have since realised that it is indeed possible to generate the app manifest on the fly (cf create-app-manifest). Even though I couldn't use the bg-restage plugin without errors, I reverted to the blue-green-deploy plugin, which I can now hand a freshly generated manifest to avoid this whole cf scale exercise.
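For reference, the combination I ended up with looks roughly like this (the -f manifest flag is how the blue-green-deploy plugin accepted a manifest in the version I used; check your plugin's help):
cf create-app-manifest APP_NAME -p generated-manifest.yml
cf blue-green-deploy APP_NAME -f generated-manifest.yml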
Comment Questions:
Why do you have the need to restart instances of your application?
We are caching some values from persistent storage on start-up. This restart happens when changes to that data are detected.
Information about the health check?
We are using all types of health checks, depending on which app is to be restarted (http, process and port). I have observed this issue only for apps with health check type http. I also have an http endpoint defined for the health check.
Are you trying to alter the memory with cf scale as well?
No, I am trying to keep all app configuration the same during this process.
When you have two running instances, the command
cf scale <APP> -i 1
will kill instance #1 and instance #0 will not be affected.

Running multiple app instances on a single container in PCF

We have an internal installation of PCF.
A developer wants to push a stateless (obeys 12-factor rules) Node.js app which will spawn other app instances, i.e. leverage Node.js clustering as per https://nodejs.org/api/cluster.html. Hence there would be multiple processes running in each container. Any known issues with this from a PCF perspective? I appreciate it violates the rule/suggestion of one app instance per container, but that is just a suggestion :) All info welcome.
Regards
John
When running an application on Cloud Foundry that spawns child processes, the number one thing you need to watch out for is memory consumption. You set a memory limit when you push your application, which is for the entire container. That includes the parent process, whatever child processes are spawned, and a little overhead (currently an init process, an sshd process and the health check).
Why is this a problem? Most buildpacks make the assumption that only one process will be running and that it will consume as much memory as possible while staying under the defined memory limit. They attempt to configure the software which is running your application to do this. When you spawn child processes, this breaks the buildpack's assumptions and can create scenarios where your application will exceed the defined memory limit. When this happens, even by one byte, the process will be killed and restarted.
If you're concerned with scaling your application, you should not try to spin off child processes in one extra-large container. Instead, let the platform help you and scale up the number of application instances. The platform can easily do this, and by using multiple smaller containers you can scale just as well. In fact, if you already have a 12-factor app, it should be well positioned to work in this manner.
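For example, instead of letting the Node.js app fork four workers inside one extra-large container, you would push with a smaller per-instance limit and scale out (the app name and numbers are purely illustrative):
cf push my-node-app -m 1G -i 4
Four 1G instances give you roughly the same total capacity as one 4G container full of forked workers, but the platform load-balances across them and restarts any instance that dies.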
Good luck!

Can the same dyno run multiple processes?

I am creating a small app running multiple microservices. I would like this app to be available 24/7, so free dyno hours are not enough for me. If I upgrade to a hobby plan I would get 10 process types.
Can I run another microservice on each of the processes (web), or does Heroku give me the ability to install only one web process per dyno, with the other 10 process types being for scaling my app? In other words, if I need 6 microservices running 24/7, should I buy 6 hobby dynos?
You can only have 1 web process type. You can horizontally scale your web process to run on multiple dynos ("horizontal scalability"), however you will need to upgrade to at least standard-1x dyno types to do that (i.e. you can only run 1 web dyno instance if you are using free or hobby dyno types).
However, in addition to your web process, you can instantiate multiple additional process types (e.g. "worker" processes). These will NOT be able to listen for HTTP/S requests from your clients, but they can be used for offloading long-running jobs from your web process.
So, if you map each of your 4-6 microservices to a different Process Type in your Procfile, and if your microservices are not themselves web servers, you might be able to make do with hobby dynos.
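For example, a Procfile along these lines (the service names and commands are hypothetical):
web: node gateway/server.js
billing: node services/billing.js
email: node services/email.js
Each non-web entry becomes a process type you can run and scale on its own dyno, e.g. heroku ps:scale billing=1.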
Heroku's default model is to map a process type to its own dyno: https://devcenter.heroku.com/articles/procfile states
"Every dyno in your application will belong to one of the process types, and will begin executing by running the command associated with that process type."
e.g. heroku ps:scale worker=1 for a type of worker.
Other people have written about how to use foreman or Honcho to run multiple Python processes in a single dyno, which utilizes a secondary Procfile and possibly other slug setup in a post_compile step. Presumably you could do something similar depending on your chosen language and its buildpack; the official buildpack API doesn't list this step though :/. That said, given Heroku's base Ubuntu stacks, you may be able to get away with a web: script.sh entry that does any setup and execs your processes, as sketched below.
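A minimal sketch of that script.sh idea (the commands are hypothetical; note the dyno manager only watches the process it started, so a crashed background worker will not be restarted on its own):
#!/usr/bin/env bash
# launch a side process in the background...
node services/worker.js &
# ...then exec the web process so it binds $PORT and receives signals directly
exec node gateway/server.js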
For grouping multiple processes in a Docker container, you could check out https://runnable.com/docker/rails/run-multiple-processes-in-a-container, which again relies on a custom CMD [ "/start.sh" ]. Note that it's contrary to Docker's single-process-per-container philosophy, and can give you more headaches, e.g. around forwarding signals to child processes, an ugly Dockerfile to set up all the microservices, etc. (If you do decide to use multiple containers, Heroku has a writeup on using Docker Compose for local dev.)
Also, don't forget you're bounded by the performance of your dyno and the process/thread limits.
Of course, multiple processes in a given dyno is generally not recommended for non-toy production or long term dev maintenance. ;)
There's a runit buildpack which makes it easy to combine multiple processes within a single dyno, as long as they all fit within your dyno's memory limit (512M for a hobby dyno).
You still have only one HTTP endpoint per Heroku app, which means your microservices will need to communicate via queues, pub/sub or some RPC hub (like deepstream.io).

Django mod_wsgi Hosting Server Requirements

I have a normal content management website developed in Django. My client has a server with 256 MB RAM. He wants to deploy this site in WSGI mode. Is 256 MB RAM sufficient or not?
I don't have any knowledge about server RAM requirements. Any help will be appreciated.
I have gone through the WSGI docs, but they don't have any info about system specifications.
What is the minimum RAM needed for running a Django application in WSGI mode?
How much memory you need depends on how many instances of the web application you intend to run at the same time. How many you need will be dictated by factors such as whether your code base is thread-safe, and therefore whether you can run it in a multithreaded configuration, or whether you will have to run a multi-process configuration with single-threaded processes.
So, do you even know how much memory one instance (process) uses when it is running your application?
The underlying web server has very little to do with memory used, because your application is going to totally dwarf how much memory the web server uses.
Some quick tips.
Don't use embedded mode of mod_wsgi; use daemon mode (see the configuration sketch after these tips).
Go watch my PyCon US 2012 talk. http://lanyrd.com/2012/pycon/spcdg/
Do some monitoring of your web application to determine how much memory it uses.
Get an idea of what traffic volumes you need to handle.
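For the daemon mode tip, a minimal Apache configuration sketch (the paths are placeholders, and the process/thread counts are starting points to tune from your own measurements):
WSGIDaemonProcess mysite processes=2 threads=15 display-name=%{GROUP}
WSGIProcessGroup mysite
WSGIScriptAlias / /path/to/mysite/wsgi.py
With daemon mode, your Django code runs in that small dedicated pool of processes rather than inside every Apache child, which makes memory usage far more predictable on a small server.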
Only once you have some real data about your application's memory requirements, the load you need to handle, and your application's response times will you be able to work out the configuration and how much memory you need.
What is the operating system?
How many connections are needed?
What is the traffic that it needs to handle?
256 MB does not seem realistic at first for a CMS type of workload, unless there is very little traffic and the operating system is stripped down to the minimum needed.
Here is some data:
http://nichol.as/benchmark-of-python-web-servers