Deploy Docker container using Kubernetes - amazon-web-services

I'm learning about Kubernetes because it's a very useful tool for managing and deploying containers.
So my question is:
For example, I have 2 Amazon EC2 instances called Kube1 and Kube2. On Kube1 I created some containers with Docker and deployed WordPress successfully. Now I want to form a cluster between Kube1 and Kube2 and then use Kubernetes to deploy all of the containers on Kube1 to Kube2. Is there any step-by-step tutorial to get me through it? I'm kind of stuck with a lot of new Kubernetes concepts.

Kubernetes is an orchestration tool.
It lets you deploy containers on a cluster to ensure availability.
What that means is that you define container specs (or sets of containers), called Pods, and you send them to the cluster manager to be deployed.
You do not choose where the Pods get deployed: Kubernetes decides where your Pod is deployed depending on the resources it needs and the resources available in the cluster.
There is also the concept of a Service (which I find confusing, as 'service' often means your 'application' in today's jargon), but a Kubernetes Service is a load-balanced proxy to the Pods you target.
The Service ensures that you can talk to a Pod using a 'selector', which defines which Pods are targeted.
If you have a WordPress site, it serves content. If you have 2 identical containers running this site, then the Service would load balance requests across the 2 Pods.
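To make this concrete, here is a minimal sketch of the two objects involved (names, labels and the image tag are placeholders, not taken from the question): a Deployment that keeps 2 identical WordPress Pods running, and a Service whose selector targets them.

# Deployment: asks Kubernetes to keep 2 identical WordPress Pods running somewhere in the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:6        # example image tag
        ports:
        - containerPort: 80
---
# Service: a stable, load-balanced entry point to every Pod labelled app=wordpress.
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  selector:
    app: wordpress
  ports:
  - port: 80
    targetPort: 80

Once applied, the scheduler decides which node each replica lands on, and the Service spreads requests across whichever Pods currently match the selector.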
Now, for this to work, the 2 Pods need to be the same, which means that if the data is updated (as it would be on a blog), the data needs to reach the WordPress servers from a single source.
You could have a shared database Pod that both servers connect to. Ideally you use a distributed version of the database that takes care of replication; otherwise you'd need to mirror the DB.
The use case you mention, though, is a bit different: you're talking about porting an infrastructure to another server.
If you have your containers running on one node, replicating them somewhere else should be as easy as pushing your containers to a registry and pulling them onto the other node. For the data, you may need to back up the volume and move it manually, or create a Docker volume to push to your registry.

Related

Best approach to deploy a multi-container web app?

I have been working on a web app for a few months and now it's ready for deployment. My frontend and backend are in different Docker containers (and different repos as well). I use docker-compose for communication between the two containers and for nginx. Now I want to deploy my app to AWS, and I'm considering 2 approaches, but I don't know which one is better:
Deploy the 2 containers separately (as 2 different apps) so that it's easier for me to make changes to and maintain each of them; I also read somewhere that this approach is more secure.
Deploy them as a single app for a simpler deployment process, but other than that, I can't really think of anything good about this approach.
I'm obviously leaning toward the first approach, but if anyone could give me more insight into the pros and cons of both approaches, I would highly appreciate it! I am trying to make this process as professional as possible so I can learn more about DevOps.
So what docker-compose does under the hood:
Creates a Docker network
Puts all containers in this network
Sets up DNS names, so containers can find each other by name
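As an illustration, a minimal docker-compose.yml relying on exactly those behaviours could look like this (image names and the API_URL variable are hypothetical); the frontend reaches the backend simply as http://backend:8080 thanks to the DNS names compose sets up.

services:
  frontend:
    image: myapp/frontend:latest        # placeholder image names
    ports:
      - "80:80"
    environment:
      - API_URL=http://backend:8080     # 'backend' resolves through the network compose created
  backend:
    image: myapp/backend:latest
    expose:
      - "8080"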
This can also be achieved with ECS (which seems suitable for your use case).
So create an ECS cluster with Fargate as the capacity provider (allowing you to work serverless and not have to care about EC2 instances).
ECS works with task definitions, so you can create a task definition containing your backend and frontend and create a service based on the definition.
All containers defined in one task behave much like with docker-compose: ECS puts them on the same network, so they can reach each other directly.
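As a rough CloudFormation-style sketch of such a task definition (family, role, image names and ports below are placeholders, not taken from the question):

WebAppTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: webapp                                    # placeholder family name
    RequiresCompatibilities: [FARGATE]
    NetworkMode: awsvpc                               # required for Fargate
    Cpu: "512"
    Memory: "1024"
    ExecutionRoleArn: !GetAtt TaskExecutionRole.Arn   # role assumed to be defined elsewhere
    ContainerDefinitions:
      - Name: backend
        Image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/backend:latest
        PortMappings:
          - ContainerPort: 8080
      - Name: frontend
        Image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/frontend:latest
        PortMappings:
          - ContainerPort: 80

With the awsvpc network mode, both containers in the task share one network interface, so they can reach each other on localhost, which is close to what the compose network gives you.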
Also see:
AWS Docs for ECS task definitions
AWS Docs for launch types
If you just want to use nginx in front of your services for load balancing, an Application Load Balancer may be a better choice.

Can we run an application that is configured to run on multi-node AWS EC2 K8s cluster using kops into local kubernetes cluster (using kubeadm)?

Can we run an application that is configured to run on a multi-node AWS EC2 K8s cluster using kops (project link) on a local Kubernetes cluster (set up using kubeadm)?
My thinking is that if the application runs in a k8s cluster based on AWS EC2 instances, it should also run in a local k8s cluster. I am trying it locally for testing purposes.
Here's what I have tried so far, but it is not working.
First I set up my local 2-node cluster using kubeadm.
Then I modified the installation script of the project (link given above) by removing all references to EC2 (as I am using local machines) and to the kops state (particularly in their create_cluster.py script).
I modified their application YAML files (app requirements) to match my local 2-node setup.
Unfortunately, although most of the application pods are created and in a running state, some other application pods cannot be created, and therefore I am not able to run the whole application on my local cluster.
I appreciate your help.
That is the beauty of Docker and Kubernetes: they help keep your development environment matching production. For simple applications, written without custom resources, you can deploy the same workload to any cluster running on any cloud provider.
However, the ability to deploy the same workload to different clusters depends on several factors, such as:
How do you manage authorization and authentication in your cluster? For example, IAM, IRSA.
Are you using any cloud-native custom resources? For example, AWS ALBs used as LoadBalancer Services.
Are you using any cloud-native storage? For example, pods relying on EFS/EBS volumes.
Is your application cloud agnostic, or does it use native technologies like Neptune?
Can you mock cloud technologies locally? For example, using LocalStack to mock Kinesis and DynamoDB.
How do you resolve DNS routes? Say you are using RDS on AWS: you can access it using a Route 53 entry. Locally you might be running a MySQL instance, and you need a DNS mechanism to discover that instance.
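For that last point, one pattern that keeps the application itself unchanged is a Kubernetes ExternalName Service: the app always talks to the same in-cluster DNS name, and only the Service definition differs per environment (the names below are placeholders).

apiVersion: v1
kind: Service
metadata:
  name: app-db                    # the application always connects to 'app-db'
spec:
  type: ExternalName
  externalName: mysql.dev.local   # locally: your MySQL host; on AWS: the RDS/Route 53 endpoint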
I did a Google search and looked at the documentation of kOps. I could not find any info about deploying locally; it only supports public cloud providers.
IMO, you need to figure out a way to set up a local equivalent of your EKS cluster, and for any usage of cloud-native technologies, you need to find an alternative way of doing the same locally.
The true answer, as Rajan Panneer Selvam said in his response, is that it depends, but I'd like to expand somewhat on his answer by saying that your application should run on any K8S cluster, provided the cluster offers the services that the application consumes. What you're doing is considered good practice to ensure that your application is portable, which is always a factor in non-trivial applications, where simply upgrading a downstream service could be considered a change of environment/platform requiring portability (platform independence).
To help you achieve this, you should be developing a 12-Factor Application (12-FA) or one of its more up-to-date derivatives (12-FA is getting a little dated now and many variations have been suggested, but mostly they're all good).
For example, if your application uses a database then it should use DB-independent SQL or NoSQL so that you can switch it out. In production you may run on Oracle, but in your local environment you may use MySQL: your application should not care. The credentials and connection string should be passed to the application via the usual K8S techniques of Secrets and ConfigMaps to help you achieve this. And all logging should be sent to stdout (and stderr) so that you can use a log-shipping agent to send the logs somewhere more useful than a local filesystem.
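As a minimal sketch of that pattern (resource names, keys and values are made up for illustration): the environment-specific values live outside the image, and only these objects change between production and local.

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: mysql.dev.local        # in production this could be an Oracle/RDS endpoint
  DB_NAME: blog
---
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials
type: Opaque
stringData:
  DB_USER: app
  DB_PASSWORD: change-me

The Pod spec then consumes both via envFrom (configMapRef and secretRef), so the application only ever sees plain environment variables.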
If you run your app locally then you have to provide a surrogate for every 'platform' service that is provided in production, and this may mean switching out major components of what you consider to be your application, but that is fine: it is meant to happen. You provide a platform that provides services to your application layer. Switching from EC2 to local may mean reconfiguring the ingress controller to work without the ELB, or it may mean configuring Kubernetes Secrets to use local storage for dev creds rather than AWS KMS. It may mean reconfiguring your persistent volume classes to use local storage rather than EBS. All of this is expected and right.
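For the storage example, often only the storage class name differs between environments; a sketch of a claim (class names are illustrative, 'gp2' being the usual EBS-backed default on AWS):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
  storageClassName: gp2            # EBS-backed class on the AWS cluster
  # storageClassName: local-path   # e.g. a local provisioner on a kubeadm cluster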
What you should not have to do is start editing microservices to work in the new environment. If you find yourself doing that, then the application has made a factoring and layering error. Platform services should be provided to the set of microservices that use them; the microservices should not be aware of the implementation details of those services.
Of course, it is possible that you have some non-portable code in your system; for example, you may be using some Oracle-specific PL/SQL that can't be run elsewhere. This code should be extracted to config files and equivalents provided for each database you wish to run on. This isn't always possible, in which case you should abstract as much as possible into isolated services; you'll then have to reimplement only those services on each new platform, which could still be time-consuming but is ultimately worth the effort for most non-trivial systems.

How to Host a microservice webapp in AWS

I have a microservice-architecture web application that I need to host in AWS in a cheap and optimized manner.
I have 3 Spring Boot applications and two Node applications. My application uses a MySQL database.
Following is my plan:
Get 1 EC2 instance.
Get RDS for the MySQL DB.
Install Docker on the EC2 instance.
Create 2 Docker containers:
a. One Tomcat container to run all Spring Boot applications.
b. One container to run the Node applications.
Q1. Is it possible to deploy my application in this manner, or is my understanding of AWS architecture inherently flawed?
Q2. Do I need a 3rd nginx Docker container?
Q3. Is there anything else required?
Any help is welcome. Thanks in advance.
In my opinion, the current design is a good starting point, keeping in mind that you want to be economical. You have isolated your datastore by moving it to RDS.
Q1. Yes, I think your approach is fine, but it means you will have to take care of provisioning the EC2 instance and the RDS instance on your own. You can also explore Elastic Beanstalk if you want to offload all of this to AWS. The tech stack you are currently using is supported by Elastic Beanstalk; you may find it a little difficult to begin with, but it will prove beneficial later.
Q2. I would say yes, you should have a separate NGINX container.
Q3. You should also containerize each Spring Boot application individually instead of having just one Docker container hosting all of them, and the same goes for your 2 Node applications. Once you have dockerized all the applications, each one is completely isolated and you can handle resiliency and scaling much better than by keeping them together.
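Roughly, that per-application layout could look like the compose sketch below (service and image names are placeholders; MySQL stays in RDS and is passed to each container as configuration).

services:
  nginx:
    image: nginx:stable
    ports:
      - "80:80"                    # single public entry point, proxying to the apps below
  spring-app-1:                    # one container per Spring Boot application
    image: myorg/spring-app-1:latest
  spring-app-2:
    image: myorg/spring-app-2:latest
  spring-app-3:
    image: myorg/spring-app-3:latest
  node-app-1:                      # and one per Node application
    image: myorg/node-app-1:latest
  node-app-2:
    image: myorg/node-app-2:latest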
I hope this answers your query.

Deploying MEAN app on AWS ECS

I have successfully deployed a MEAN app on AWS ECS, but there are a couple things I don't have set-up properly.
1) If I spin up a new task, the Mongo data does not persist between the containers
2) Should my Mongo container and my frontend container be in the same task definition? This seems wrong because I feel like they should be able to scale independently of each other. But if they should be in separate task definitions, do I link them the same way?
Current Architecture:
1 Task Definition
contains the frontend container and the mongo container, which are linked
I did not define any mounts or volumes (which I assume is why data isn't persisting, but I am struggling to figure out how to set this up properly)
1 Cluster
1 service
contains load balancer and auto-scaling group (when this auto-scaling group creates a new task, I run into the issue of not having data persistence)
I guess what you assume is correct: since you are not defining any mounts, the data is not persistent. I recommend using Amazon EFS to persist data from Amazon ECS containers. You can find a step-by-step guide below to achieve this.
Using Amazon EFS to Persist Data from Amazon ECS Containers
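The gist of that guide, as a rough CloudFormation-style sketch (the file system ID, names and images are placeholders): declare an EFS-backed volume in the task definition and mount it into the Mongo container, so the data survives when a new task is spun up.

MeanTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: mean-app
    Volumes:
      - Name: mongo-data
        EFSVolumeConfiguration:
          FilesystemId: fs-12345678        # your EFS file system ID
    ContainerDefinitions:
      - Name: mongo
        Image: mongo:6
        MountPoints:
          - SourceVolume: mongo-data
            ContainerPath: /data/db        # Mongo's data directory now lives on EFS
      - Name: frontend
        Image: myorg/mean-frontend:latest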

How to understand Amazon ECS cluster

I recently tried to deploy Docker containers on AWS using task definitions. Along the way, I came across the following questions.
How do I add an instance to a cluster? When creating a new cluster using the Amazon ECS console, how do I add a new EC2 instance to it? In other words, when launching a new EC2 instance, what configuration is needed to allocate it to a user-created cluster under Amazon ECS?
How many ECS instances are needed in a cluster, and what factors does that depend on?
If I have two instances (ins1, ins2) in a cluster, and my webapp and db containers are running on ins1: after I update the running service (via http://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-service.html), I can see the newly created service running on ins2 before the old one on ins1 is drained. My question is that after my webapp container is moved to another instance, the access IP address becomes the other instance's IP. How do I prevent this, or how can I keep the same address for accessing the webapp? And beyond the IP, what happens to the data after moving to a new instance?
These are really three fairly different questions, so it might be best to split them into separate questions here accordingly; I'll try to provide an answer regardless:
Amazon ECS Container Instances are added indirectly; it's the job of the Amazon ECS Container Agent on each instance to register itself with the cluster created and named by you, see concepts and lifecycle for details. For this to work, you need to follow the steps outlined in Launching an Amazon ECS Container Instance, be it manually or via automation. Be aware of step 10:
By default, your container instance launches into your default cluster. If you want to launch into your own cluster instead of the default, choose the Advanced Details list and paste the following script into the User data field, replacing your_cluster_name with the name of your cluster.
#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
You only need a single instance for ECS to work as such, because the cluster itself is managed by AWS on your behalf. This wouldn't be sufficient for high availability scenarios though:
Because the container hosts are just regular Amazon EC2 instances, you would need to follow AWS best practices and spread them over two or three Availability Zones (AZ) so that a (rare) outage of an AZ doesn't impact your cluster, because ECS can migrate your containers to a different host instance (provided your cluster has sufficient spare capacity).
Many advanced clustering technologies that facilitate containers have their own service orchestration layers and usually require an uneven number >= 3 (service) instances for a high availability setup. You can read more about this in section Optimal Cluster Size within Administration for example (see also Running CoreOS with AWS EC2 Container Service).
This refers back to the high-availability and service orchestration topics mentioned in 2. already; more precisely, you are facing the problem of service discovery, which becomes more prevalent when using container technologies in general and microservices in particular:
To get familiar with this, I recommend Jeff Lindsay's Understanding Modern Service Discovery with Docker for an excellent overview specifically focused on your use case.
Jeff also maintains a containerized version of the increasingly popular Consul, which makes it simple for services to register themselves and to discover other services via a DNS or HTTP interface (see Running Consul in Docker and gliderlabs/docker-consul).