I have been working on a web app for a few months and now it's ready for deployment. My frontend and backend are in different docker containers (and different repos as well). I use docker-compose to communicate between the two containers and for nginx. Now, I want to deploy my app to AWS and I'm thinking of 2 approaches, but I don't know which one is better:
Deploy the 2 containers separately (as 2 different apps) so that it's easier for me to make changes to and maintain each of them; I also read somewhere that this approach is more secure.
Deploy them as a single app for a simpler deployment process, but other than that, I can't really think of anything good about this approach.
I'm obviously leaning toward the first approach, but if anyone could give me more insight into the pros and cons of both approaches, I would really appreciate it! I'm trying to make this process as professional as possible so I can learn more about DevOps.
So what docker-compose does under the hood:
Creates a Docker network
Puts all containers in this network
Sets up DNS names, so containers can find each other by their service names (see the compose sketch below)
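For reference, here is a minimal compose file that relies on exactly that behaviour; the service and image names are made up:
# docker-compose.yml (service and image names are illustrative)
version: "3.8"
services:
  backend:
    image: my-backend:latest    # hypothetical image
    expose:
      - "8000"
  frontend:
    image: my-frontend:latest   # hypothetical image
    depends_on:
      - backend
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    depends_on:
      - frontend
# All three services join the same default network, so the frontend can
# reach the backend at http://backend:8000 purely by service name.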
This can also be achieved with ECS (which seems suitable for your use case).
So create an ECS cluster with Fargate as the capacity provider, which lets you run serverless and not worry about EC2 instances.
ECS works with task definitions, so you can create a task definition containing your backend and frontend and create a service based on that definition.
Containers defined in one task behave much like a docker-compose project: ECS puts them on a shared network, so they can reach each other directly.
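As a rough illustration only, a Fargate task definition holding both containers could look something like the following CloudFormation snippet; the family name, images, ports, sizes and the execution role are all placeholders:
AppTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: my-web-app                       # placeholder name
    RequiresCompatibilities: [FARGATE]
    NetworkMode: awsvpc                      # required for Fargate
    Cpu: "512"
    Memory: "1024"
    ExecutionRoleArn: !GetAtt TaskExecutionRole.Arn   # role assumed to exist elsewhere
    ContainerDefinitions:
      - Name: backend
        Image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/backend:latest   # placeholder
        PortMappings:
          - ContainerPort: 8000
      - Name: frontend
        Image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/frontend:latest  # placeholder
        PortMappings:
          - ContainerPort: 80
# With awsvpc networking, containers in the same task share a network
# namespace, so the frontend can reach the backend on localhost:8000.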
Also see:
AWS Docs for ECS task definitions
AWS Docs for launch types
If you just want nginx in front of your service for load balancing, an Application Load Balancer may be a better choice.
Related
I'm new to both of these technologies but have trouble understanding what exactly they do differently; a use-case example would be very helpful.
AWS ECS is the container orchestration service that handles deployment and scaling of containers. Let's say you have 10 apps to be deployed on EC2 machines. ECS gives you an easy way to deploy and manage them, scale them when needed, etc.
Now, these 10 apps might want to talk to each other. One way is to use the IP address and make an RPC call to the other application. However, this approach doesn't scale: what if the machine is restarted, or the app is moved to another EC2 machine?
So you need a middleware that manages the mapping of apps to EC2 machines, so that an application doesn't need to worry about how to reach the other application.
AWS App Mesh provides exactly that middleware. It provides application-level networking so that your services can communicate with each other.
ECS - a platform to run containers as tasks/services in a clustered manner.
When multiple containers are running in an ECS cluster, they may want to talk to each other or to other AWS services. These containers need to know where the other containers/services are, by means of an IP address, endpoint, etc. That's where service discovery comes into the picture.
App Mesh - App Mesh is a service discovery tool plus a lot more; one of its features is ensuring reliable communication between containers.
App Mesh uses Envoy as a sidecar in ECS to implement service discovery (and much more).
Most of the time App Mesh is used in conjunction with AWS Cloud Map.
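For a sense of how the Cloud Map side looks in practice, here is a rough CloudFormation sketch of an ECS service registering itself in a private DNS namespace; the namespace, service names, VPC, subnet, cluster and task definition are all made up or assumed to exist:
Namespace:
  Type: AWS::ServiceDiscovery::PrivateDnsNamespace
  Properties:
    Name: internal.local              # containers resolve peers under this domain
    Vpc: !Ref MyVpc                   # assumed to exist
OrdersDiscovery:
  Type: AWS::ServiceDiscovery::Service
  Properties:
    Name: orders
    DnsConfig:
      NamespaceId: !GetAtt Namespace.Id
      DnsRecords:
        - Type: A
          TTL: 10
OrdersEcsService:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref MyCluster                        # assumed to exist
    LaunchType: FARGATE
    TaskDefinition: !Ref OrdersTaskDefinition      # assumed to exist (awsvpc mode)
    DesiredCount: 2
    NetworkConfiguration:
      AwsvpcConfiguration:
        Subnets: [!Ref PrivateSubnet]              # assumed to exist
    ServiceRegistries:
      - RegistryArn: !GetAtt OrdersDiscovery.Arn
# Other tasks in the VPC can now reach this service at orders.internal.local.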
I have a web application with a microservices architecture that I need to host on AWS in a cheap and optimized manner.
I have 3 Spring Boot applications and 2 Node applications. My application uses a MySQL database.
Following is my plan:
Get 1 EC2 instance.
Get RDS for mySql DB.
Install docker on EC2.
Create 2 docker containers:
a. one Tomcat container to run all Spring Boot applications.
b. one container to run the Node applications.
Q1. Is it possible to deploy my application in this manner, or is my understanding of AWS architecture inherently flawed?
Q2. Do I need a third Nginx docker container?
Q3. Is there anything else required?
Any Help is welcome. Thanks in advance.
In my opinion, the current design is good to begin with, keeping in mind that you want to keep costs down. You have already isolated your datastore by moving it to RDS.
Q1. Yes, I think your approach is fine, but it means you will have to take care of provisioning the EC2 instance and the RDS instance on your own. You can also explore Elastic Beanstalk if you want to offload all of this to AWS. The tech stack you are currently using is supported by Elastic Beanstalk; you may find it a little difficult to begin with, but it will prove beneficial later.
Q2. I would say yes. You should have a separate NGINX container.
Q3. You should also containerize each Spring Boot application instead of having just one docker container hosting all of them, and the same goes for your 2 Node applications. Once you have dockerized all the applications, you have complete isolation between them and can handle resiliency and scaling much better than if you keep them together; a compose sketch of that layout follows below.
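Purely as an illustration of that layout on a single EC2 instance, the compose file could end up looking something like this; every service name, build path, port and the RDS endpoint are placeholders:
# docker-compose.yml on the EC2 instance (names, paths and ports are placeholders)
version: "3.8"
services:
  orders-service:            # Spring Boot app 1
    build: ./orders-service
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://<rds-endpoint>:3306/orders
  billing-service:           # Spring Boot app 2
    build: ./billing-service
  users-service:             # Spring Boot app 3
    build: ./users-service
  web-frontend:              # Node app 1
    build: ./web-frontend
  admin-frontend:            # Node app 2
    build: ./admin-frontend
  nginx:                     # the separate Nginx container from Q2, acting as reverse proxy
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - web-frontend
      - orders-service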
I hope this answers your query.
The purpose is a production-level deployment of an 8-container application, using Swarm.
It seems (ECS aside) we are faced with 2 options:
Use the so-called docker-for-aws, which does (Swarm) provisioning via a CloudFormation template.
Set up our VPC as usual, install the Docker engines, bootstrap the Swarm (via init/join, etc.) and deploy our application on normal EC2 instances.
Is the only difference between these two approaches the swarm bootstrap performed by docker-for-aws?
Any other benefits of docker-for-aws compared to a normal AWS VPC provisioning?
Thx
If you need portability across different cloud providers, go with the AWS CloudFormation template provided by the Docker team. If you only need to run on AWS, ECS should be fine, but you will need to spend a bit of time figuring out how service discovery works there. The benefit of Swarm is that they made it fairly simple: you access your services via their service names, as if they were DNS names, with built-in load balancing.
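To make that discovery point concrete, a minimal stack file deployed with docker stack deploy might look like the following; the image and service names are made up:
# stack.yml, deployed with: docker stack deploy -c stack.yml myapp
version: "3.8"
services:
  api:
    image: myorg/api:1.0            # hypothetical image
    deploy:
      replicas: 3                   # Swarm load-balances across the replicas
  web:
    image: myorg/web:1.0            # hypothetical image
    ports:
      - "80:80"
    deploy:
      replicas: 2
# Inside the stack's overlay network, "web" reaches the API at http://api/
# and Swarm spreads those requests over the 3 api replicas.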
It's fairly easy to automate the creation of a new environment with that template, and if you later need to move to, say, Azure or Google Cloud, you simply use their template to get your Docker cluster ready.
The Docker team has put quite a few things into that template, and you really don't want to re-create them yourself unless you have to. For instance, if you don't use static IPs for your infrastructure (a fairly typical scenario) and one of the managers dies, you can't just restart it; you would need to manually re-join it to the cluster. Docker for AWS handles that by syncing IPs via DynamoDB and uses other provider-specific techniques to make failover/recovery work smoothly. Another example is logging: it pushes your logs automatically into CloudWatch, which is very handy.
A few tips on automating your environment provisioning if you go with the Swarm template:
Use an infrastructure automation tool to create a VPC per environment, and use a template provided by that tool so you don't have to write too much yourself. A separate VPC keeps each environment well isolated and easier to work with, with less chance of breaking something. You're also likely to add more elements to those environments later, such as RDS; if you control the VPC creation, it's easier to do that and keep all related resources under the same VPC (say, the DEV1 environment's DB lives in the DEV1 VPC).
Then run the AWS CloudFormation template provided by Docker to provision a Swarm cluster within this VPC (they have a separate template for that).
My preference for automation is Terraform. It lets me describe the desired state of the infrastructure rather than how to achieve it.
I would say no, there are basically no other benefits.
However, if you want to achieve all or several of the things that the docker-for-aws template provides, I believe your second bullet point should contain a bit more, e.g.:
Logging to CloudWatch
Setting up EFS for persistence/sharing
Creating subnets and route tables
Creating and configuring elastic load balancers
Basic auto scaling for your nodes
and probably more that I do not recall right now.
The template also ingests a bunch of information about resources related to your EC2 instances and makes it readily available to all Docker services.
I have been using the docker-for-aws template at work and have grown to appreciate a lot of what it automates. And what I do not appreciate I change, with the official template as a base.
I would go with ECS over a roll-your-own solution. Unless your organization can spare the effort to re-engineer the services and integrations AWS already offers, you would be artificially painting yourself into a corner for future changes. "Do not reinvent the wheel" comes to mind here.
Basically what @Jonatan states: building the solutions to integrate what is already available is... a trial of pain when you could be working on other parts of your business/application.
I'm learning about Kubernetes because it's a very useful tool for managing and deploying containers.
So My question is:
For example, I have 2 Amazon EC2 instances called Kube1 and Kube2. On Kube1 I created some containers using Docker and deployed WordPress successfully. Now I want to make a cluster between Kube1 and Kube2 and, after that, use Kubernetes to deploy all of the containers from Kube1 to Kube2. Is there any step-by-step tutorial to get me through it? I'm kind of stuck on a lot of new Kubernetes concepts.
Kubernetes is an orchestration tool.
It lets you deploy containers on a cluster to ensure availability.
What that means is that you define container specs (or sets of containers) called Pods, and you send them to the cluster manager to be deployed.
You do not choose where the Pods get deployed: Kubernetes decides where your Pod is deployed depending on the resources it needs and the resources available in the cluster.
There is also the concept of a Service (which I find confusing, as 'service' often means your 'application' in today's jargon), but a Kubernetes Service is a load-balanced proxy to the Pods you target.
The Service ensures that you can talk to a Pod, using a 'selector' that defines which Pods are targeted.
If you have a WordPress site, it serves content. If you have 2 identical containers running this site, the Service will load-balance the requests to either one of the 2 Pods.
Now, for this to work, you need the 2 Pods to be the same, which means that if the data is updated (as it would be on a blog), the data needs to reach the WordPress servers from a single source.
You could have a shared database Pod that both servers connect to. Ideally you use a distributed version of the database that takes care of replication; otherwise you'd need to mirror the DB.
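To make the Pod/Service relationship concrete, here is a minimal sketch of a Deployment running 2 identical WordPress Pods behind a Service; the names, labels and image tag are illustrative, and the database is left out:
# wordpress.yaml (names, labels and image tag are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 2                        # two identical Pods
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:6
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  selector:
    app: wordpress                   # targets every Pod carrying this label
  ports:
    - port: 80
      targetPort: 80
# Requests to the "wordpress" Service are load-balanced across both Pods;
# both Pods would still need to talk to the same database.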
The use case you mention, though, is a bit different: you're talking about porting an infrastructure to another server.
If you have your containers running on one node, replicating them somewhere else should be as easy as pushing your containers to a registry and pulling them onto the other node. For the data, you may need to back up the volume and move it manually, or create a Docker volume to push to your registry.
I am working on a project using a microservices architecture.
Each service lives in its own docker container and has a separate git repository in order to ensure loose coupling.
It is my understanding that AWS recently announced support for Multi-Container Docker environments in Elastic Beanstalk. This is great for development because I can launch all services with a single command and test everything locally on my laptop, just like Docker Compose.
However, it seems I only have the option of deploying all services at once, which I am afraid defeats the initial purpose of having a microservices architecture.
I would like to be able to deploy/version each service independently to AWS. What would be the best way to achieve that while keeping infrastructure management to a minimum?
We are currently using Amazon ECS to accomplish exactly what you are trying to achieve. You can define each Docker container as a task definition and then create an ECS service for it, which will handle the number of instances, scaling, etc.
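As an illustration only, each microservice's repo could own a small stack like the one below and deploy independently by just changing its own image tag; the names, registry, ports and cluster are placeholders, and it assumes an existing EC2-backed cluster:
# Per-microservice CloudFormation sketch; a deploy only touches this stack's ImageTag.
Parameters:
  ImageTag:
    Type: String                     # e.g. the git commit SHA being deployed
Resources:
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: orders-service         # hypothetical service name
      ContainerDefinitions:
        - Name: orders
          Image: !Sub "123456789012.dkr.ecr.eu-west-1.amazonaws.com/orders:${ImageTag}"
          Memory: 512
          PortMappings:
            - ContainerPort: 8080
  Service:
    Type: AWS::ECS::Service
    Properties:
      Cluster: shared-cluster        # all microservices can share one cluster
      TaskDefinition: !Ref TaskDefinition
      DesiredCount: 2
# Updating the stack with a new ImageTag rolls out a new task definition
# revision for this service only; the other microservices are untouched.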
One thing to note is that Amazon mentions the word "container" a lot in the documentation; they may be talking about the EC2 instances used in the cluster for your Docker containers.