What is the most cost-effective way of running a single Docker container on GCP? I have various simple scripts which I've packaged as images and which I'd like to move to GCP and run as containers. From the docs, Google Container Engine is:
A Container Engine cluster is a group of Compute Engine instances running Kubernetes. It consists of one or more node instances, and a managed Kubernetes master endpoint. A container cluster is the foundation of a Container Engine application—pods, services, and replication controllers all run on top of a cluster.
This sounds like overkill, as I only need one Compute Engine instance with the Docker toolchain installed and easy access to other cloud tools (e.g. SQL). I proceeded to provision a Compute Engine VM, but then had to set up Docker myself, which felt like reinventing Google Container Engine.
EDIT: I found this which is in alpha stage as of now (2017-09-06): https://cloud.google.com/compute/docs/instance-groups/deploying-docker-containers
The most cost-effective way is to run a single VM that runs your container. You can use Google's Container-Optimized OS and add a startup script to start the container when the machine boots (this OS already has Docker installed and is the OS used by default in Google Container Engine).
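As an illustration, a minimal sketch of that setup (the VM name, zone, machine type, and image path are placeholders, not from the question; a private registry image would additionally need registry credentials configured on the VM):

```bash
# Boot a Container-Optimized OS VM and start the container from a startup script.
gcloud compute instances create my-script-vm \
  --zone us-central1-a \
  --machine-type f1-micro \
  --image-family cos-stable \
  --image-project cos-cloud \
  --metadata startup-script='#! /bin/bash
docker run --rm gcr.io/my-project/my-script'
```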
However, you get other benefits from running on top of Google Container Engine: health checking of your container (and optionally of your VM), the ability to later trivially scale up your application to multiple replicas, the ability to easily deploy new versions of your application, support for logging / monitoring, etc. You may find that the features provided by Google Container Engine are worth the extra overhead it adds to your single node.
I would probably just set up a single-node Container Engine cluster.
This is pretty much the same thing as a Compute Engine instance anyway, and it's pretty cost-effective. Especially if you find that you aren't fully using the hardware, you can just run a second Docker image on the same Container Engine instance without paying anything extra for it.
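If you go that route, a minimal sketch (cluster name, zone, machine type, and image paths are all hypothetical):

```bash
# One-node cluster; additional workloads share the same node at no extra cost.
gcloud container clusters create single-node \
  --zone us-central1-a \
  --num-nodes 1 \
  --machine-type g1-small
gcloud container clusters get-credentials single-node --zone us-central1-a
kubectl run my-script --image=gcr.io/my-project/my-script
kubectl run my-other-script --image=gcr.io/my-project/my-other-script
```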
Related
Firstly, I apologize for the rather basic question. I am just beginning to learn about Microservices Architecture and would like to get my basics right.
I was wondering whether topics such as AWS cloud services/web services imply a microservices architecture. For instance, if someone is working on an AWS project, does that mean they are using a microservices architecture? I do understand that AWS, Docker, etc. are more platforms. Are they exclusively for microservices?
I would really appreciate a short clarification.
Microservices, cloud infrastructure like Amazon Web Services, and container infrastructure like Docker are three separate things; you can use any of these independently of the others.
"Microservices" refers to a style of building a large application out of independently-deployable parts that communicate over the network. A well-designed microservice architecture shouldn't depend on sharing files between components, and could reasonably run distributed across several systems. Individual services could run on bare-metal hosts and outside containers. This is often in contrast to a "monolithic" application, a single large deployable where all parts have to be deployed together, but where components can communicate with ordinary function calls.
Docker provides a way of packaging and running applications that are isolated from their host system. If you have an application that depends on a specific version of Python with specific C library dependencies, those can be bundled into a Docker image, and you can just run it without needing to separately install them on the host.
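For instance, a minimal sketch of that packaging step (the base image tag, the requests dependency, and the file names are illustrative, not taken from the question):

```bash
# Bundle the interpreter and its libraries into the image, then run it anywhere Docker runs.
cat > Dockerfile <<'EOF'
FROM python:3.11-slim
RUN pip install requests
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
EOF
docker build -t my-app .
docker run --rm my-app   # no Python or requests installed on the host
```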
Public-cloud services like AWS fundamentally let you rent someone else's computer by the hour. An AWS Elastic Compute Cloud (EC2) instance literally is just a computer that you can ssh into and run things on. AWS, like most other public-cloud providers, offers a couple of tiers of services on top of this: a cloud-specific networking and security layer, various pre-packaged open-source tools as services (you can rent a MySQL or PostgreSQL database by the hour using AWS RDS, for example), and then various proprietary cloud-specific offerings (Amazon's DynamoDB database, analytics and machine-learning services). This usually gives you "somewhere to run it" more than any particular design features, unless you're opting to use a cloud's proprietary offerings.
Now, these things can go together neatly:
You design your application to run as microservices; you build and unit-test them locally, without any cloud or container infrastructure.
You package each microservice to run in a Docker container, and do local integration testing using Docker Compose, without any cloud infrastructure (a minimal Compose sketch follows this list).
You further set up your combined application to deploy in Kubernetes, using Docker Desktop or Minikube to test it locally, again without any cloud infrastructure.
You get a public-cloud Kubernetes cluster (AWS EKS, Google GKE, Azure AKS, ...) and deploy the same application there, using the cloud's DNS and load balancing capabilities.
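To make the Docker Compose step concrete, a minimal sketch (service and image names are hypothetical):

```bash
# Each microservice is its own container; they discover each other via Compose DNS names.
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  orders:
    image: example/orders:latest
    ports:
      - "8080:8080"
  payments:
    image: example/payments:latest
    environment:
      ORDERS_URL: http://orders:8080   # talks to the orders service over the Compose network
EOF
docker compose up -d
```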
Again, all of these steps are basically independent of each other. You could deploy a monolithic application in containers; you could deploy microservices directly on cloud compute instances; you could run containers in an on-premises environment or directly on cloud instances, instead of using a container orchestrator.
No, using a cloud provider does not imply using a microservice architecture.
AWS can be (and is often) used to spin up a monolithic service, e.g. just a single EC2 server which uses a single RDS database.
Utilizing Docker and a container orchestrator like ECS or EKS also does not, on its own, mean that one has a microservices architecture. If you split your backend and frontend into two Docker containers that get run on ECS, that's really not a microservices architecture. Even if you horizontally scaled them, so you'd have multiple identical containers running for both the backend and frontend services, they still wouldn't be thought of as microservices.
Can we run an application that is configured to run on a multi-node AWS EC2 K8s cluster using kops (project link) on a local Kubernetes cluster (set up using kubeadm)?
My thinking is that if the application runs in a k8s cluster based on AWS EC2 instances, it should also run in a local k8s cluster. I am trying it locally for testing purposes.
Here's what I have tried so far, but it is not working.
First, I set up my local 2-node cluster using kubeadm.
Then I modified the project's installation script (link given above) by removing all references to EC2 (as I am using local machines) and to kops state (particularly in their create_cluster.py script).
I have modified their application YAML files (app requirements) to match my local setup (2-node).
Unfortunately, although most of the application pods are created and in the Running state, some other application pods fail to be created, and therefore I am not able to run the whole application on my local cluster.
I appreciate your help.
That is the beauty of Docker and Kubernetes: they help keep your development environment matching production. For simple applications, written without custom resources, you can deploy the same workload to any cluster running on any cloud provider.
However, the ability to deploy the same workload to different clusters depends on several factors, such as:
How do you manage authorization and authentication in your cluster? For example, IAM, IRSA.
Are you using any cloud-native custom resources? For example, AWS ALBs used as LoadBalancer Services.
Are you using any cloud-native storage? For example, pods relying on EFS/EBS volumes.
Is your application cloud-agnostic? For example, are you using native technologies like Neptune?
Can you mock cloud technologies locally? For example, using LocalStack to mock Kinesis and DynamoDB.
How do you resolve DNS routes? For example, say you are using RDS on AWS: you can access it via a Route 53 entry, while locally you might be running a MySQL instance and need a DNS mechanism to discover it (see the sketch after this list).
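One way to handle the DNS point is a Kubernetes ExternalName Service, so the application always talks to the same in-cluster name. A minimal sketch, assuming hypothetical names (orders-db and the RDS hostname are placeholders):

```bash
# Keep one stable in-cluster DNS name and point it at RDS in AWS,
# or at a local MySQL hostname when running on the kubeadm cluster.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: orders-db            # the application always connects to orders-db
spec:
  type: ExternalName
  externalName: mydb.abc123.us-east-1.rds.amazonaws.com   # swap for the local MySQL host
EOF
```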
I did a Google search and looked at the kOps documentation. I could not find any info about how to deploy locally; it only supports public cloud providers.
IMO, you need to figure out a way to set up a local equivalent of your EKS cluster, and if there is any usage of cloud-native technologies, you need to figure out an alternative way of doing the same locally.
The true answer, as Rajan Panneer Selvam said in his response, is that it depends, but I'd like to expand somewhat on his answer by saying that your application should run on any K8S cluster given that it provides the services that the application consumes. What you're doing is considered good practice to ensure that your application is portable, which is always a factor in non-trivial applications where simply upgrading a downstream service could be considered a change of environment/platform requiring portability (platform-independence).
To help you achieve this, you should be developing a 12-Factor Application (12-FA) or one of its more up-to-date derivatives (12-FA is getting a little dated now and many variations have been suggested, but mostly they're all good).
For example, if your application uses a database then it should use database-independent SQL or NoSQL so that you can switch it out. In production you may run on Oracle, but in your local environment you may use MySQL: your application should not care. The credentials and connection string should be passed to the application via the usual K8S techniques of secrets and config-maps to help you achieve this. And all logging should be sent to stdout (and stderr) so that you can use a log-shipping agent to send the logs somewhere more useful than a local filesystem.
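A minimal sketch of the secrets approach (the secret name, key, and connection string are placeholders):

```bash
# Inject the connection details at deploy time so the same image runs against
# Oracle in production or MySQL locally.
kubectl create secret generic db-credentials \
  --from-literal=DATABASE_URL='mysql://user:pass@orders-db:3306/app'
# In the Deployment spec, reference it with:
#   envFrom:
#   - secretRef:
#       name: db-credentials
```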
If you run your app locally then you have to provide a surrogate for every 'platform' service that is provided in production, and this may mean switching out major components of what you consider to be your application but this is ok, it is meant to happen. You provide a platform that provides services to your application-layer. Switching from EC2 to local may mean reconfiguring the ingress controller to work without the ELB, or it may mean configuring kubernetes secrets to use local-storage for dev creds rather than AWS KMS. It may mean reconfiguring your persistent volume classes to use local storage rather than EBS. All of this is expected and right.
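For the storage example, a minimal sketch (names and sizes are hypothetical); the claim itself stays the same, and only the storage class changes between environments:

```bash
# The PersistentVolumeClaim is identical on both platforms; only storageClassName differs.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp2   # on a local cluster, swap for whatever local class you provision
EOF
```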
What you should not have to do is start editing microservices to work in the new environment. If you find yourself doing that then the application has made a factoring and layering error. Platform services should be provided to a set of microservices that use them, the microservices should not be aware of the implementation details of these services.
Of course, it is possible that you have some non-portable code in your system; for example, you may be using some Oracle-specific PL/SQL that can't be run elsewhere. This code should be extracted to config files and equivalents provided for each database you wish to run on. This isn't always possible, in which case you should abstract as much as possible into isolated services and you'll have to reimplement only those services on each new platform, which could still be time-consuming, but ultimately worth the effort for most non-trivial systems.
I don't really understand how to install something from the GCP Marketplace onto a Compute Engine instance that has already been created (Windows Server). For instance, I need to deploy Jenkins to practice with CI, but when I choose that solution from the Marketplace it just deploys right below my VM in the list and looks like a separate deployment, whereas I need it exactly on the VM I access via RDP.
It is unlikely there is a good Marketplace based solution for your use case.
Depending on the type of solution you pick off the Marketplace, you'll get different behavior. Many of the solutions in the marketplace are self-contained -- they'll install the infrastructure they need to run, such as additional VMs. This is done via Deployment Manager. They won't install on VMs you already have provisioned. (This also lets the software and infrastructure be easily removed).
Others will just provide a container which you can place on an already running VM (for example, this Jenkins package). These will require more work on your part to manage and keep updated, of course (and you'll obviously have to find a container that works on your Windows machine if this is the route you want to go). I don't currently see an obvious candidate in the Marketplace for Jenkins.
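If you do go the container route on a Linux VM, a minimal sketch using the public jenkins/jenkins:lts image (note this is a Linux container, so it would not run as-is on the Windows Server instance from the question):

```bash
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
```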
A third type of Marketplace package is "click to deploy". These will bring up a GKE cluster to run the containers on, but this likely isn't what you're looking for if you don't want additional VMs.
I have followed the Docker Docs about setting up Swarm on VirtualBox.
I suppose it is the same procedure to set it up on AWS, Azure or DigitalOcean.
It is a lot to do manually every time.
Is there a tool to automate this?
I would like to use something to set up and scale Swarm like Compose does for Docker.
Maybe I would start with one AWS instance and 2-3 containers and then scale them up to 100 containers and the instances to scale accordingly. Then I would want to scale down to 2 instances and the rest would shut down.
Does something like this exist?
If you want to avoid manual configuration but still get the required high availability and cost efficiency, try running the Docker Swarm template pre-packaged by Jelastic:
it has built-in automatic clustering and scaling
the installation is performed automatically and you'll get full access to the cluster via an intuitive UI
containers run directly on bare metal, so there is no need to reserve full VMs for each service (and you can choose the data center you want to run your project in)
payment is based on actual consumption of RAM and CPU
containers are automatically distributed across different hardware servers, which increases availability
The details about the package and installation steps are in this article.
You can use Ansible for configuring the Swarm master, Swarm nodes, and all the required cluster discovery. Ansible is a general IT automation tool, but it comes with a very powerful Docker module that allows you to set up Docker Swarm easily.
This GitHub repository shows a good example of how to set up Swarm with Ansible.
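A minimal sketch of the idea, assuming hypothetical inventory and group names; the community.docker.docker_swarm module initialises the manager, and workers would then join using the token it reports:

```bash
cat > swarm.yml <<'EOF'
- hosts: managers
  tasks:
    - name: Initialise the Swarm on the first manager
      community.docker.docker_swarm:
        state: present
        advertise_addr: "{{ ansible_default_ipv4.address }}"
EOF
ansible-playbook -i inventory.ini swarm.yml
```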
You can use Docker Machine for provisioning hosts and configuring swarm easily (example).
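For the Docker Machine route, a minimal sketch (machine names are hypothetical, and the amazonec2 driver expects AWS credentials in the environment):

```bash
docker-machine create --driver amazonec2 manager1
docker-machine create --driver amazonec2 worker1
MANAGER_IP=$(docker-machine ip manager1)
docker-machine ssh manager1 "docker swarm init --advertise-addr $MANAGER_IP"
WORKER_TOKEN=$(docker-machine ssh manager1 "docker swarm join-token worker -q")
docker-machine ssh worker1 "docker swarm join --token $WORKER_TOKEN $MANAGER_IP:2377"
```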
The Docker ecosystem also includes managed solutions like Tutum or Docker Cloud to easily achieve what you want.
Check out the devopsbyte.com blog, which covers how to set up a Docker Swarm cluster using Ansible.
I am working on a project using a microservices architecture.
Each service lives in its own docker container and has a separate git repository in order to ensure loose coupling.
It is my understanding that AWS recently announced support for multi-container Docker environments in Elastic Beanstalk. This is great for development because I can launch all services with a single command and test everything locally on my laptop, just like Docker Compose.
However, it seems I only have the option to deploy all services at once, which I am afraid defeats the initial purpose of having a microservices architecture.
I would like to be able to deploy/version each service independently to AWS. What would be the best way to achieve that while keeping infrastructure management to a minimum?
We are currently using Amazon ECS to accomplish exactly what you are trying to achieve. You can define your Docker container as a task definition and then create an ECS service, which will handle the number of instances, scaling, etc.
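A minimal sketch of that flow with the AWS CLI (cluster, service, and task-definition names are placeholders); each microservice gets its own task definition and service, so it can be versioned and redeployed independently:

```bash
aws ecs register-task-definition --cli-input-json file://orders-service-task.json
aws ecs create-service \
  --cluster my-cluster \
  --service-name orders-service \
  --task-definition orders-service \
  --desired-count 2
# Later, roll out a new version of just this one service:
aws ecs update-service --cluster my-cluster --service orders-service --task-definition orders-service:2
```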
One thing to note is that Amazon mentions the word "container" a lot in the documentation. Sometimes they may be talking about the EC2 instances that make up the cluster running your Docker containers (ECS calls these "container instances"), rather than your containers themselves.