Multiple Docker containers vs multiple EC2 instances - amazon-web-services

I have multiple microservices that I have to run independently. I am thinking of deploying them in Docker containers on one EC2 instance. But then there is the question of scaling. My understanding is that ECS gives me the ability to scale, but I haven't used ECS. So my question is: can I scale all my containers by creating one ECS cluster? Or is there anything I haven't thought of or don't know about? Also, what are the performance issues with this deployment?
Thanks
Amit

For a microservices deployment, one EC2 instance will never suffice for a production workload, considering HA, scaling, performance, etc.
You should think in terms of a cluster. A compute cluster is a multi-tenant computing environment consisting of servers (called “nodes”) whose resources have been pooled together and are used to execute processes. To enable this behavior, the nodes in a cluster must be managed by some sort of cluster management framework.
So you have to choose among multiple options: Kubernetes, Mesos & Marathon, and AWS ECS.
Amazon EC2 Container Service (ECS) is a cluster management framework that uses optimistic, shared-state scheduling to execute processes on EC2 instances using Docker containers.
All of these options provide the functionality you are looking for, so you should analyze them further and select the most suitable one for your needs.
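For a rough feel of the ECS route, here is a minimal AWS CLI sketch (the cluster and service names are placeholders, not from your setup): instead of scaling containers by hand on one instance, you ask the service scheduler for more copies of a task.
# Create a cluster for the microservices (name is a placeholder).
aws ecs create-cluster --cluster-name my-microservices
# Scale one service to four tasks; ECS places them across the
# cluster's instances for you.
aws ecs update-service --cluster my-microservices \
    --service payment-service --desired-count 4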

Related

Deploying to bare EC2 instances in an ASG?

I have a service that needs to run on our own EC2 instances, since it requires some support from the kernel. My previous experience is all with containers in AWS. The application itself is distributed as a single JAR file and I'm looking for advice for how I should automate deployments. The architecture is:
An ALB in front of the ASG.
EC2 instance running a single Java application.
Any open socket stays open for an hour at most, and to avoid causing trouble we have to drain the connections to the EC2 instances before performing an update, so a hard requirement is that the ALB stop opening new connections for an hour before updating the software. The application is mission critical and ECS had some issues last year, so I want to minimize the AWS services I depend on. While I could do what I want on my own ECS cluster with custom AMIs, I don't want to, since I will run a single instance of the app per host and don't need the extra layer.
My question: What is the simplest method to achieve this using CodePipeline? My understanding is that I need to use a CodeDeploy deployment step to push something to bare EC2 instances. How does draining with an ALB work in this case? We're using CloudFormation for the deployment.
You need to use CodeDeploy; you can find a tutorial in the AWS CodeDeploy documentation. See the CodeDeploy deployment lifecycle hooks for EC2:
https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#appspec-hooks-server
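As a hedged sketch (the file paths and script names below are placeholders, not from your project), an appspec.yml for an EC2/on-premises deployment looks roughly like this. When the deployment group is attached to your ALB target group, CodeDeploy runs its own BlockTraffic step before ApplicationStop; the drain time comes from the target group's deregistration delay, which can be raised to a maximum of 3600 seconds, matching your one-hour window.
version: 0.0
os: linux
files:
  - source: app.jar
    destination: /opt/myapp
hooks:
  ApplicationStop:
    - location: scripts/stop_app.sh      # stop the old JVM (placeholder script)
      timeout: 300
  ApplicationStart:
    - location: scripts/start_app.sh     # start the new JAR (placeholder script)
      timeout: 300
  ValidateService:
    - location: scripts/health_check.sh  # fail the deployment if unhealthy
      timeout: 300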

Run multiple container instances on Azure with the same docker image just like AWS Fargate tasks

I have used AWS Fargate previously for deploying a docker image as 10+ tasks in a cluster.
Now I want to do the same on Azure. I have successfully run the image in a container group, but I want to create replicas of the container group, just like on AWS where it's possible to run multiple tasks.
Can anyone suggest how to achieve the same on Azure?
Also, if I want to scale the container groups, how can I do that? (Just like the scaling policies and auto-scaling groups on AWS.)
A container group (a pod in AKS) is a collection of containers that get scheduled on the same host machine. The containers in a container group share a lifecycle, resources, a local network, and storage volumes. It's similar in concept to a pod in Kubernetes.
Question: How to create replicas of the container group in Azure?
Answer: Here are two common ways to deploy a multi-container group:
1. Resource Manager template. A Resource Manager template is recommended when you need to deploy additional Azure service resources (for example, an Azure Files share).
2. YAML file. Due to the YAML format's more concise nature, a YAML file is recommended when your deployment includes only container instances.
Reference: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-container-groups
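A minimal sketch of such a YAML file, assuming a placeholder image and resource names (deploy it with: az container create --resource-group my-rg --file group.yaml):
apiVersion: '2019-12-01'
location: eastus
name: my-container-group
properties:
  osType: Linux
  restartPolicy: Always
  containers:
  - name: my-app
    properties:
      image: myregistry.azurecr.io/my-app:latest   # placeholder image
      resources:
        requests:
          cpu: 1
          memoryInGB: 1.5
type: Microsoft.ContainerInstance/containerGroups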
There are two tools you can use to scale container groups: the most popular, AKS, and Docker Swarm.
But I would suggest you use AKS, because Docker Swarm is limited to scaling Docker containers only, while AKS can scale all types of container runtimes, for example containerd, rkt, and Docker. AKS uses pods to hold containers, so you can think of a container group as a pod. By default, AKS does not auto-scale or auto-heal, but using higher-level Kubernetes objects such as a ReplicaSet you can scale automatically; Kubernetes supports horizontal pod autoscaling.
Follow this link for auto scaling pods using Kubernetes: https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-scale?tabs=azure-cli#autoscale-pods
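For example, assuming a placeholder deployment named my-app, horizontal pod autoscaling can be switched on with a single command:
# Keep between 3 and 10 replicas, targeting 50% average CPU utilization.
kubectl autoscale deployment my-app --cpu-percent=50 --min=3 --max=10
kubectl get hpa   # inspect the resulting HorizontalPodAutoscaler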
It sounds like Azure Web App for Containers is what you're looking for. Like Fargate, you just point it to your image repository and which image version it should use.
Controlling the number of instances is done either manually, using a slider in the portal (this can also be done using the Azure CLI), or automatically, based on conditions on certain metrics.
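For the CLI route, a hedged sketch (the plan and resource-group names are placeholders): the instance count lives on the App Service plan that backs the web app.
# Scale the backing App Service plan to three instances.
az appservice plan update --name my-plan --resource-group my-rg --number-of-workers 3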
And like Fargate, Azure Web App for Containers also provides various ways of doing side-by-side deployments (called deployment slots) that you can route part of your traffic to.
Although all of this could also be achieved using K8s (as mentioned by others), I think Web App for Containers is probably the closer one-to-one equivalent of Amazon's Fargate.
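A hedged sketch of the slot workflow with the Azure CLI (the app, group, and slot names are placeholders):
# Create a staging slot next to production.
az webapp deployment slot create --name my-app --resource-group my-rg --slot staging
# Send 10% of traffic to the staging slot.
az webapp traffic-routing set --name my-app --resource-group my-rg --distribution staging=10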

What are usually the basis on grouping Fargate applications under an ECS cluster?

Based on amazon docs:
An Amazon ECS cluster is a logical grouping of tasks or services. If you are running tasks or services that use the EC2 launch type, a cluster is also a grouping of container instances. If you are using capacity providers, a cluster is also a logical grouping of capacity providers. When you first use Amazon ECS, a default cluster is created for you, but you can create multiple clusters in an account to keep your resources separate.
In our use case, we are not using EC2 launch types. We are mainly using Fargate.
What is the usual basis/strategy for grouping services? Is it a purely subjective thing?
Let's say I have a Payment Service, Invoice/Receipt Service, User Service, and Authentication Service. Do I put some of them in one ECS cluster, or is it best practice to have them in separate ECS clusters?
A service is a functioning application, so for example you might have an authentication service or payment service etc.
Whilst services can speak between one another, a service by itself should contain all parts to make it work, these parts are the containers.
Your service may be as simple as one container, or contain many containers to provide its functionality such as caching or background jobs.
The services concept generally comes from the ideas of both service driven design and micro service architecture.
Ultimately the decision comes down to you; you could put everything under one service, but this could lead to problems further down the line.
One key point to note is that scaling of containers is done at the service level, so you would scale all containers that are part of your task definition together. You generally want to scale to meet the demands of a piece of functionality.
An ECS cluster may contain one service, or a number of services that together produce a deliverable. For example, within AWS, S3 is made up of more than 200 microservices; these would form a cluster. However, you would not expect every AWS service to be part of the same cluster.
In your scenario you define several services; personally I would separate these into different clusters, as they deliver completely different business functions.
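A hedged sketch of that split with the AWS CLI (the cluster, service, task-definition, and subnet names are placeholders):
# One cluster per business function.
aws ecs create-cluster --cluster-name payments     # Payment + Invoice/Receipt services
aws ecs create-cluster --cluster-name identity     # User + Authentication services
# A Fargate service inside one of them.
aws ecs create-service --cluster identity --service-name auth-service \
    --task-definition auth-service:1 --desired-count 2 --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],assignPublicIp=ENABLED}"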

How to set up a ReactJs, NodeJs, Redis application on AWS

I am a newbie in AWS and totally confused about the deployment. Here I have:
React for the front end, Node.js for the API, MongoDB for the database, and Redis for the session store.
Can I use one EC2 instance for every service? Or
divide every service onto a different EC2 instance?
Can I use an Elastic Beanstalk environment?
Which is the better option for scaling and updating without downtime in the future?
Can I use 1 EC2 for every service?
It depends on your case, but the best approach to utilizing the underlying EC2 instance is to run multiple services on a single EC2 instance for the Node.js and front-end apps, as a container-based Node.js application takes maximum advantage of this setup. In this case, an ECS blue-green deployment with dynamic container ports can help you scale with zero downtime.
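A hedged fragment of an ECS task definition showing the dynamic-port idea (all names are placeholders; AWS CLI v2 can register it via --cli-input-yaml): hostPort: 0 tells ECS to pick a free host port for each task, so several copies of the same container can share one instance behind an ALB target group.
family: webapp
networkMode: bridge
containerDefinitions:
  - name: webapp
    image: my-repo/webapp:latest   # placeholder image
    memory: 512
    portMappings:
      - containerPort: 3000
        hostPort: 0                # 0 = let ECS assign a dynamic host port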
Divide every service onto a different EC2 instance?
For a Node.js-based application this approach does not help you much, whereas for Redis and MongoDB it makes sense if you are planning for clustering and replicas; these applications also need persistent storage, so each instance will keep its own storage. My suggestion is to run Redis and MongoDB with the daemon strategy and the application with the replica strategy, as it is the application that goes through blue-green deployments, not Redis or the DB.
Amazon ECS provides two scheduling strategies to deal with such cases:
REPLICA— The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. For more information, see Replica.
DAEMON— The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. When using this strategy, there is no need to specify a desired number of tasks, a task placement strategy, or to use Service Auto Scaling policies. For more information, see ecs_services in the ECS documentation.
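A hedged sketch of both strategies with the AWS CLI (the cluster and task-definition names are placeholders):
# Redis/MongoDB: exactly one task per container instance.
aws ecs create-service --cluster my-cluster --service-name redis \
    --task-definition redis:1 --scheduling-strategy DAEMON
# The application: a desired number of tasks spread across the cluster.
aws ecs create-service --cluster my-cluster --service-name webapp \
    --task-definition webapp:1 --desired-count 3 --scheduling-strategy REPLICA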

How to understand Amazon ECS cluster

I recently tried to deploy Docker containers using an AWS task definition. Along the way, I came across the following questions.
How do I add an instance to a cluster? When creating a new cluster using the Amazon ECS console, how do I add a new EC2 instance to it? In other words, when launching a new EC2 instance, what configuration is needed to allocate it to a user-created cluster under Amazon ECS?
How many ECS instances are needed in a cluster, and what are the deciding factors?
If I have two instances (ins1, ins2) in a cluster, and my webapp and db containers are running on ins1: after I update the running service (through http://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-service.html), I can see the newly created service running on ins2 before the old service on ins1 is drained. My question is that after my webapp container is allocated to another instance, the access IP address becomes the other instance's IP. How do I prevent this, or what is the solution for keeping the same access IP for the webapp? And not only the IP: what about the data after moving to a new instance?
These are really three fairly different questions, so it might be best to split them into separate questions here accordingly - I'll try to provide an answer regardless:
Amazon ECS Container Instances are added indirectly; it's the job of the Amazon ECS Container Agent on each instance to register itself with the cluster created and named by you, see concepts and lifecycle for details. For this to work, you need to follow the steps outlined in Launching an Amazon ECS Container Instance, be it manually or via automation. Be aware of step 10:
By default, your container instance launches into your default cluster. If you want to launch into your own cluster instead of the default, choose the Advanced Details list and paste the following script into the User data field, replacing your_cluster_name with the name of your cluster.
#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
You only need a single instance for ECS to work as such, because the cluster itself is managed by AWS on your behalf. This wouldn't be sufficient for high availability scenarios though:
Because the container hosts are just regular Amazon EC2 instances, you would need to follow AWS best practices and spread them over two or three Availability Zones (AZ) so that a (rare) outage of an AZ doesn't impact your cluster, because ECS can migrate your containers to a different host instance (provided your cluster has sufficient spare capacity).
Many advanced clustering technologies that facilitate containers have their own service orchestration layers and usually require an odd number (>= 3) of (service) instances for a high-availability setup. You can read more about this in the section Optimal Cluster Size within Administration, for example (see also Running CoreOS with AWS EC2 Container Service).
This refers back to the high availability and service orchestration topics mentioned in 2. already; more precisely, you are facing the problem of service discovery, which becomes even more prevalent when using container technologies in general and microservices in particular:
To get familiar with this, I recommend Jeff Lindsay's Understanding Modern Service Discovery with Docker for an excellent overview specifically focused on your use case.
Jeff also maintains a containerized version of the increasingly popular Consul, which makes it simple for services to register themselves and to discover other services via a DNS or HTTP interface (see Running Consul in Docker and gliderlabs/docker-consul).
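As a hedged illustration of Consul's discovery interface (the service name is a placeholder): once a web service registers itself with a local Consul agent, other services can resolve its current location through DNS on Consul's port 8600, regardless of which instance the container landed on.
# Ask the local Consul agent where instances of "web" are running.
dig @127.0.0.1 -p 8600 web.service.consul SRV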