Deployment methods for a Docker-based microservices architecture on AWS

I am working on a project using a microservices architecture.
Each service lives in its own Docker container and has a separate Git repository, in order to keep the services loosely coupled.
It is my understanding that AWS recently announced support for Multi-Container Docker environments in Elastic Beanstalk. This is great for development because I can launch all services with a single command and test everything locally on my laptop, much like Docker Compose.
However, it seems my only option is to deploy all services at once, which I am afraid defeats the initial purpose of having a microservices architecture.
I would like to be able to deploy/version each service independently to AWS. What would be the best way to achieve that while keeping infrastructure management to a minimum?

We are currently using Amazon ECS to accomplish exactly what you are trying to achieve. You can define your Docker container in a task definition and then create an ECS service, which will handle the number of running instances, scaling, and so on.
One thing to note: Amazon uses the word "container" in several senses in the documentation. Sometimes it refers to the EC2 instances that form the cluster your Docker containers run on (ECS calls these "container instances").
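To make that concrete, here is a minimal sketch of the task-definition-plus-service flow using boto3, the AWS SDK for Python. The cluster, service, and image names are hypothetical placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# Describe one service's container as a task definition...
task_def = ecs.register_task_definition(
    family="orders-service",
    containerDefinitions=[{
        "name": "orders",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders:1.4.2",  # placeholder
        "memory": 512,
        "portMappings": [{"containerPort": 8080}],
    }],
)

# ...then create a service that keeps the desired number of copies running.
ecs.create_service(
    cluster="my-cluster",
    serviceName="orders",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,
)
```

Deploying a new version of one service independently then amounts to registering a new task definition revision for just that service and pointing its ECS service at it with update_service; the other services are untouched.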

Related

Are microservices the same as cloud services or web services?

Firstly, I apologize for the rather basic question. I am just beginning to learn about microservices architecture and would like to get my basics right.
I was wondering whether topics such as AWS cloud services/web services imply a microservices architecture. For instance, if someone is working on an AWS project, does that mean they are using a microservices architecture? I do understand that AWS, Docker, etc. are more like platforms. Are they exclusively for microservices?
I would really appreciate a short clarification.
Microservices, cloud infrastructure like Amazon Web Services, and container infrastructure like Docker are three separate things; you can use any of these independently of the others.
"Microservices" refers to a style of building a large application out of independently-deployable parts that communicate over the network. A well-designed microservice architecture shouldn't depend on sharing files between components, and could reasonably run distributed across several systems. Individual services could run on bare-metal hosts and outside containers. This is often in contrast to a "monolithic" application, a single large deployable where all parts have to be deployed together, but where components can communicate with ordinary function calls.
Docker provides a way of packaging and running applications that are isolated from their host system. If you have an application that depends on a specific version of Python with specific C library dependencies, those can be bundled into a Docker image, and you can just run it without needing to separately install them on the host.
Public-cloud services like AWS fundamentally let you rent someone else's computer by the hour. An AWS Elastic Compute Cloud (EC2) instance literally is just a computer that you can ssh into and run things on. AWS, like most other public-cloud providers, offers several tiers of services on top of this: a cloud-specific networking and security layer; various pre-packaged open-source tools as services (you can rent a MySQL or PostgreSQL database by the hour using AWS RDS, for example); and various proprietary cloud-specific offerings (Amazon's DynamoDB database, analytics and machine-learning services). This usually gives you "somewhere to run it" more than any particular design features, unless you opt into a cloud's proprietary offerings.
Now, these things can go together neatly:
You design your application to run as microservices; you build and unit-test them locally, without any cloud or container infrastructure.
You package each microservice to run in a Docker container, and do local integration testing using Docker Compose, without any cloud infrastructure.
You further set up your combined application to deploy in Kubernetes, using Docker Desktop or Minikube to test it locally, again without any cloud infrastructure.
You get a public-cloud Kubernetes cluster (AWS EKS, Google GKE, Azure AKS, ...) and deploy the same application there, using the cloud's DNS and load balancing capabilities.
Again, all of these steps are basically independent of each other. You could deploy a monolithic application in containers; you could deploy microservices directly on cloud compute instances; you could run containers in an on-premises environment or directly on cloud instances, instead of using a container orchestrator.
No, using a cloud provider does not imply using a microservice architecture.
AWS can be (and often is) used to spin up a monolithic service, e.g. just a single EC2 server that uses a single RDS database.
Utilizing Docker and a container orchestrator like ECS or EKS also does not, on its own, mean that one has a microservices architecture. If you split your backend and frontend into two Docker containers that run on ECS, that's really not a microservices architecture. Even if you scaled them horizontally, so that multiple identical containers were running for both the backend and the frontend service, they still wouldn't be thought of as microservices.

AWS EC2 instance vs Docker?

What is the difference between an AWS EC2 instance and a Docker container? When should I use one over the other?
When you launch an EC2 instance, you get a base installation of that specific operating system, with some additional AWS packages installed such as the SSM Agent.
There are also AMIs prepared for specific use cases, such as SQL Server, or, in this case, images pre-configured for AWS orchestration services (either ECS or EKS) with the relevant software already installed.
If you're not familiar with Docker, I would suggest running it in your local environment first so that you can become familiar with it. Yes, people have been moving towards containers and serverless, but you need to ensure you are able to support this in production.
With containers being deployed, you will need to understand the orchestration layer you're using. It's very easy to see containers as an alternative to a virtualisation layer, but there are many differences in how they operate.
Take a look at the What is Docker? page for further explanations.

Jenkins setup on EC2 vs ECS

We currently have Jenkins running on-premises (VMware) and are planning to move into the cloud (AWS). What would be the best approach for installing Jenkins: on EC2 or on ECS?
The best way would be running on EC2. Make sure you have granular control over your instance's security group and network ACLs. I would recommend using Terraform to build your environment, as you can write it as code and also version-control it: https://www.terraform.io/downloads.html
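To illustrate the "granular control" point, here is roughly what restricting Jenkins access to a single office CIDR could look like, sketched with boto3 instead of Terraform for brevity; the VPC ID and CIDR are placeholders, and in practice you would express the same rules as a version-controlled Terraform resource:

```python
import boto3

ec2 = boto3.client("ec2")

# A dedicated security group for the Jenkins instance.
sg = ec2.create_security_group(
    GroupName="jenkins-sg",
    Description="Jenkins: SSH and HTTPS from the office network only",
    VpcId="vpc-0123456789abcdef0",  # placeholder
)

# Allow only the office CIDR in, on exactly the two ports needed.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},   # SSH
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},   # Jenkins UI
    ],
)
```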
Have you previously containerized your Jenkins, on VMware itself? If not, and if you don't have experience with containers, go for EC2. It will be as easy as running on any other VM. For reproducing the infrastructure, use Terraform or CloudFormation.
I would recommend Dockerizing your on-premises Jenkins first. See how much effort is required to implement and administer/scale it, then go for ECS.
Otherwise, shift to EC2 and see how much admin overhead and cost you incur. Then, if required, go for ECS.
Another point you have to consider is how your Jenkins is architected. Are you using master/slave agents? Are you running builds continuously, so that the VMs are never idle? Do you want easy scaling, such that the build environment is created and destroyed per build execution?
If you have no experience with running containers, then create it on EC2. Before running on ECS, make sure you really understand containers and container orchestration.
I just want to complement the other answers by providing a link to the official AWS whitepaper:
Jenkins on AWS
It might be of special interest, as it discusses both options, EC2 and ECS, in detail:
In this section we discuss two approaches to deploying Jenkins on AWS. First, you could use the traditional deployment on top of Amazon Elastic Compute Cloud (Amazon EC2). Second, you could use the containerized deployment that leverages Amazon EC2 Container Service (Amazon ECS). Both approaches are production-ready for an enterprise environment.
There is also an AWS sample solution for running Jenkins on ECS:
https://github.com/aws-samples/jenkins-on-aws:
This project will build and deploy an immutable, fault tolerant, and cost effective Jenkins environment in AWS using ECS. All Jenkins images are managed within the repository (pulled from upstream) and fully configurable as code. Plugin installation is automated, including versioning, as well as configured through the Configuration as Code plugin.

Spring Boot/Cloud microservices on AWS

I have created a Spring Cloud microservices-based application with the Netflix OSS components (Eureka, Config, Zuul, etc.). Can someone explain to me how to deploy it on AWS? I am very new to AWS. I have to deploy a development instance of my application.
Do I need to adopt Docker before that, or can I go ahead without Docker as well?
As long as your application is self-contained and you have externalised your configurations, you should not have any issue.
Go through this link, which discusses what it takes to deploy an app to the cloud: Beyond 15 factor
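"Externalised configuration" here just means that nothing environment-specific is baked into the build artifact. A minimal illustration in Python (a Spring Boot app achieves the same thing by letting environment variables override application.properties; the variable names below are hypothetical):

```python
import os

# Everything environment-specific comes in from the outside, so the same
# artifact runs unchanged on a laptop, on EC2, or on Beanstalk.
DB_URL = os.environ["DATABASE_URL"]                 # injected by the platform
EUREKA_URL = os.environ.get(
    "EUREKA_URL", "http://localhost:8761/eureka"    # sane default for local runs
)
```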
Use AWS Elastic Beanstalk to deploy and manage your application. Dockerizing your app is not a prerequisite for deploying it to AWS.
If you use an EC2 instance, then its configuration is no different from what you do on your local machine/server. It's just a virtual machine. No need to Dockerize or anything like that. And if you're new to AWS, I'd rather suggest doing just that. Once you get your head around it, you can explore other options.
For example, AWS Beanstalk seems like a popular option. It provides a very secure and reliable configuration out of the box with no effort on your part. And yes, it does use Docker under the hood, but you won't need to deal with it directly unless you choose to, at least in most common cases. It supports a few different ways of deployment, which Amazon calls "Application Environments". See here for details. Just choose the one you like and follow the instructions. I'd like to warn you, though, that while Beanstalk is usually easier than EC2 to set up and use for a typical web application, your mileage may vary depending on your application's actual needs.
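For reference, a scripted Beanstalk deployment boils down to "register a new application version, then point the environment at it". A minimal boto3 sketch, assuming the application bundle has already been uploaded to S3 (all names are placeholders; the eb CLI performs essentially these steps for you):

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Register the uploaded bundle as a new, immutable application version.
eb.create_application_version(
    ApplicationName="spring-cloud-demo",
    VersionLabel="v42",
    SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "app-v42.jar"},
)

# Roll the development environment forward to that version.
eb.update_environment(
    EnvironmentName="spring-cloud-demo-dev",
    VersionLabel="v42",
)
```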
Amazon Elastic Container Service (ECS) / Elastic Kubernetes Service (EKS) are also good options to look into.
These services run the Docker images of your application; auto scaling, availability, and cross-region replication are taken care of by the cloud provider.
Hope this helps.

Choosing the right AWS Services and software tools

I'm developing a prototype IoT application which does the following:
Receive/store data from sensors.
Web application with a web-based IDE for users to deploy simple JavaScript/Python scripts which get executed in Docker containers.
Data from the sensors gets streamed to these containers.
User programs can use this data to do analytics, monitoring, etc.
The logs of these programs are output to the user on the web app.
Current Architecture and Services
Using one AWS EC2 instance. I chose EC2 because I was trying to figure out the architecture.
The stack is Node.js, RabbitMQ, Express, MySQL, MongoDB, and Docker.
I'm not interested in using AWS's managed IoT services like AWS IoT and Greengrass.
I've ruled out Heroku since I'm using other AWS services.
Questions and Concerns
My goal is prototype development for a Beta release to a set of 50 users
(hopefully someone else will help/work on a production release)
As far as possible, I don't want to spend a lot of time migrating between services since developing the product is key. Should I stick with EC2 or move to Beanstalk?
If I stick with EC2, what is the best way to handle small-medium traffic? Use one large EC2 machine or many small micro instances?
What is a good way to manage containers? Is it worth using Swarm for container management? What if I have to use multiple instances?
I also have small scripts that hold status information about the sensors, which the web app and other services need. If I move to multiple instances, how can I make these scripts available to multiple machines?
The same question also applies to servers, message buses, databases, etc.
My goal is certainly not a production release. I want to complete the product, show that I have users who are interested, and, of course, show that the product works!
Any help in this regard will be really appreciated!
If you want to manage Docker containers with the least hassle on AWS, you can use Amazon ECS to deploy your containers, or else go with Beanstalk. You also don't need to use Swarm on AWS; ECS will do that job for you.
It's always better to scale out rather than scale up, using small to medium-sized EC2 instances. However, the challenge you will face here is managing and scaling the underlying EC2 instances as well as your Docker containers. This pushes you towards large EC2 instances, to set EC2 scaling aside and focus on Docker scaling (which will add additional costs for you).
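On the container side of that scaling problem, ECS's Service Auto Scaling can manage the task count for you (the underlying EC2 fleet still needs its own Auto Scaling group). A rough boto3 sketch of registering a service's task count as a scalable target; the cluster/service names and bounds are placeholders:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the ECS service's desired task count as a scalable target,
# so Application Auto Scaling can grow/shrink it between the bounds.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/web-app",   # placeholder cluster/service
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)
```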
Another alternative you can use for the web-application part is an AWS Lambda and API Gateway stack with the Serverless Framework, which needs the least operational overhead and comes with DevOps tooling.
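As a taste of that stack, an API Gateway-backed Lambda function is just a small handler. A minimal Python sketch using the API Gateway proxy event shape; the sensor-status lookup is a hypothetical placeholder:

```python
import json

def handler(event, context):
    """Return the latest status for one sensor, e.g. GET /sensors/{sensorId}."""
    sensor_id = (event.get("pathParameters") or {}).get("sensorId")
    # Placeholder: a real app would read the latest status from a data
    # store such as DynamoDB or your MySQL/MongoDB instances.
    body = {"sensorId": sensor_id, "status": "ok"}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```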
You may keep your web app on Heroku and run your IoT server on AWS EC2 or AWS Lambda. Heroku runs on AWS itself, so this split setup will not hurt performance. You can ease the inconvenience of straddling two platforms by writing a Terraform script that provisions both the EC2 instance and the Heroku app and ties them together.
Alternatively, you can use the Dockhero add-on to run your IoT server in a Docker container alongside your Heroku app.
PS: I'm a Dockhero maintainer.