There are plenty of tutorials on how to deploy a single containerized service to the cloud: AWS, Google Cloud Platform, Heroku, and many others cover this nicely.
However, most real-world apps are made of two or more services (for example a database + a web server), rather than just one service.
Is it bad practice to deploy the various services of a multi-service app to different clusters (e.g. deploy the database to one GKE cluster, and the web server to another GKE cluster)? I'm asking because I'm finding it surprisingly difficult to deploy even a simple web app to a single cluster. I expected that once I had set up my Dockerfiles and docker-compose.yml, everything would work out of the box (as the Docker Compose and Kubernetes documentation suggests) and I would end up with a small cluster running one container for my database and one container for my web server.
So my questions are:
Is it bad practice to deploy the various services of a multi-service app to different clusters?
What is, in general, the de facto standard way to deploy a web app with a database and a web server to the cloud? What are the easiest tools for achieving this?
Practically, what is the simplest way I can deploy a React + Express + MongoDB app to any cloud provider with a free-tier account?
Deploying multiple services (a.k.a. applications) that share some logic on the same cluster/namespace is actually the best practice. I am not sure why you are finding it difficult, but you can take a container orchestration platform such as Kubernetes and deploy as many applications as you want in the same project, on the same cluster.
I would recommend picking a cloud platform that offers a managed container orchestrator, such as Google Kubernetes Engine (GKE) on Google Cloud Platform (or any other cloud platform you like), and start exploring. You can also read up on containers in general, or on Kubernetes specifically.
So, practically speaking, I would create MongoDB and the Express app inside the same namespace (and put every other service or application related to the project in its own container within that same namespace).
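For example, a minimal Kubernetes sketch of that layout might look like the following; the namespace, image names, and ports are placeholders for your own app, and the Express container reaches MongoDB simply through the Service name:

```yaml
# Hypothetical manifests: one namespace holding both the database and the web server.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
  namespace: my-app
spec:
  replicas: 1
  selector:
    matchLabels: { app: mongo }
  template:
    metadata:
      labels: { app: mongo }
    spec:
      containers:
        - name: mongo
          image: mongo:6
          ports:
            - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: my-app
spec:
  selector: { app: mongo }
  ports:
    - port: 27017
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: my-app
spec:
  replicas: 1
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: my-registry/express-app:latest   # placeholder image for your Express build
          env:
            - name: MONGO_URL                     # assumed app setting
              value: mongodb://mongo:27017/app    # the Service name resolves inside the namespace
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: my-app
spec:
  type: LoadBalancer
  selector: { app: web }
  ports:
    - port: 80
      targetPort: 3000
```

Apply it with kubectl apply -f against your cluster; the LoadBalancer Service is what exposes the web server publicly, while MongoDB stays internal to the namespace.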
Related
Firstly, I apologize for the rather basic question. I am just beginning to learn about Microservices Architecture and would like to get my basics right.
I was wondering if topics such as AWS cloud services/web services imply a microservices architecture. For instance, if someone is working on an AWS project, does that mean they are using a microservice architecture? I do understand that AWS, Docker, etc. are more like platforms. Are they exclusively for microservices?
I would really appreciate a short clarification.
Microservices, cloud infrastructure like Amazon Web Services, and container infrastructure like Docker are three separate things; you can use any of these independently of the others.
"Microservices" refers to a style of building a large application out of independently-deployable parts that communicate over the network. A well-designed microservice architecture shouldn't depend on sharing files between components, and could reasonably run distributed across several systems. Individual services could run on bare-metal hosts and outside containers. This is often in contrast to a "monolithic" application, a single large deployable where all parts have to be deployed together, but where components can communicate with ordinary function calls.
Docker provides a way of packaging and running applications that are isolated from their host system. If you have an application that depends on a specific version of Python with specific C library dependencies, those can be bundled into a Docker image, and you can just run it without needing to separately install them on the host.
Public-cloud services like AWS fundamentally let you rent someone else's computer by the hour. An AWS Elastic Compute Cloud (EC2) instance literally is just a computer that you can ssh into and run things on. AWS, like most other public-cloud providers, offers a couple of tiers of services on top of this: a cloud-specific networking and security layer; various pre-packaged open-source tools as services (you can rent a MySQL or PostgreSQL database by the hour using AWS RDS, for example); and various proprietary cloud-specific offerings (Amazon's DynamoDB database, analytics and machine-learning services). This usually gives you "somewhere to run it" more than any particular design features, unless you opt to use a cloud's proprietary offerings.
Now, these things can go together neatly:
You design your application to run as microservices; you build and unit-test them locally, without any cloud or container infrastructure.
You package each microservice to run in a Docker container, and do local integration testing using Docker Compose, without any cloud infrastructure (a minimal Compose sketch follows this list).
You further set up your combined application to deploy in Kubernetes, using Docker Desktop or Minikube to test it locally, again without any cloud infrastructure.
You get a public-cloud Kubernetes cluster (AWS EKS, Google GKE, Azure AKS, ...) and deploy the same application there, using the cloud's DNS and load balancing capabilities.
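For the local integration-testing step, a docker-compose.yml along these lines is enough; the service names, images, ports, and the MONGO_URL variable are assumptions for illustration, not part of any particular project:

```yaml
# Hypothetical compose file for testing two services together locally.
services:
  db:
    image: mongo:6            # stand-in for whatever datastore you use
    ports:
      - "27017:27017"
  api:
    build: ./api              # assumes a Dockerfile in ./api
    environment:
      MONGO_URL: mongodb://db:27017/app   # containers reach each other by service name
    ports:
      - "3000:3000"
    depends_on:
      - db
```

Running docker compose up starts both containers on one shared network, which is all the "integration environment" you need at this stage.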
Again, all of these steps are basically independent of each other. You could deploy a monolithic application in containers; you could deploy microservices directly on cloud compute instances; you could run containers in an on-premises environment or directly on cloud instances, instead of using a container orchestrator.
No, using a cloud provider does not imply using a microservice architecture.
AWS can be (and is often) used to spin up a monolithic service, e.g. just a single EC2 server which uses a single RDS database.
Using Docker and a container orchestrator like ECS or EKS also does not, on its own, mean that you have a microservice architecture. If you split your backend and frontend into two Docker containers that run on ECS, that's really not a microservice architecture. Even if you horizontally scaled them, so that you had multiple identical containers running for both the backend and the frontend service, they still wouldn't be considered microservices.
Similarities that I see are:
They are PaaS offerings.
They make AWS more similar to Heroku.
They abstract away load balancing and auto scaling stuff.
The only difference that I see is that App Runner uses Docker, while Elastic Beanstalk may not. Correct me if I am wrong, but it seems it is not a requirement to containerize your app first in order to use App Runner, since you can just supply the GitHub URL and App Runner will containerize it for you.
So what is the difference between the two and how do I make a decision to choose one over the other?
It depends. AWS App Runner (AR) is container-based only. Not every application or developer wants to use containers, and not every application is suited to container deployments. AR also gives you very little control over your resources and operating system, and many applications may require such control (e.g. GPU access). AWS Elastic Beanstalk, by contrast, gives you much more control over your resources, including the operating system.
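To give a feel for how thin App Runner's configuration surface is compared to Beanstalk's environment settings, here is a rough apprunner.yaml sketch for the build-from-source path mentioned in the question. The runtime, commands, and port shown are assumptions for a generic Node.js service; check the App Runner documentation for the exact schema before relying on it:

```yaml
# Hypothetical apprunner.yaml for a service built straight from a repository.
version: 1.0
runtime: nodejs16          # assumed managed runtime
build:
  commands:
    build:
      - npm ci             # assumed build step
run:
  command: node server.js  # assumed start command
  network:
    port: 3000             # assumed listening port
```

There is essentially nowhere in this file to tune the underlying instances or operating system, which is the trade-off described above.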
As Cloud Guru put it: "Behind the scenes, the core of App Runner is that it builds an Amazon ECS cluster and uses Fargate to execute your containers."
So, as you can see:
Elastic Beanstalk belongs to PaaS.
App Runner is serverless, closer to FaaS.
Furthermore, App Runner works with containers only, so it really depends on what kind of app you have.
I have created a Spring Cloud microservices-based application with the Netflix components (Eureka, Config, Zuul, etc.). Can someone explain how to deploy it on AWS? I am very new to AWS and need to deploy a development instance of my application.
Do I need to integrate Docker before that, or can I go ahead without Docker as well?
As long as your application is self-contained and you have externalised your configurations, you should not have any issue.
Go through this link, which discusses what it takes to deploy an app to the cloud beyond the 15 factors.
Use AWS Elastic Beanstalk to deploy and manage your application. Dockerizing your app is not a prerequisite for deploying it to AWS.
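For example, the externalised configuration mentioned above can be handed to Beanstalk through an .ebextensions config file that sets environment properties; a minimal sketch, where the property names and values are placeholders for your own Spring settings and Eureka endpoint:

```yaml
# .ebextensions/env.config -- hypothetical environment properties for the app
option_settings:
  aws:elasticbeanstalk:application:environment:
    SPRING_PROFILES_ACTIVE: dev
    EUREKA_CLIENT_SERVICEURL_DEFAULTZONE: http://eureka.example.internal:8761/eureka/
```

Spring Boot's relaxed binding picks these up as eureka.client.serviceUrl.defaultZone and spring.profiles.active, so the same artifact can run unchanged in other environments.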
If you use an EC2 instance, then its configuration is no different from what you do on your local machine/server. It's just a virtual machine; there's no need to dockerize or anything like that. And if you're new to AWS, I'd suggest doing just that. Once you get your head around it, you can explore other options.
For example, AWS Beanstalk seems like a popular option. It provides a very secure and reliable configuration out of the box with no effort on your part. And yes, it does use Docker under the hood, but you won't need to deal with it directly unless you choose to, at least in the most common cases. It supports a few different ways of deployment, which Amazon calls "Application Environments"; see here for details. Just choose the one you like and follow the instructions. I'd like to warn you, though, that whilst Beanstalk is usually easier than EC2 to set up and use for a typical web application, your mileage may vary depending on your application's actual needs.
Amazon Elastic Container Service (ECS) and Elastic Kubernetes Service (EKS) are also good options to look into.
These services run Docker images of your application; auto scaling, availability, and cross-region replication are taken care of by the cloud provider.
Hope this helps.
I am a beginner at creating microservices using Spring Boot with AWS. What is the best way to start?
To start with microservices in Spring Boot, you can go through this tutorial.
This should help you get started with REST services.
Later you can cover topics related to data modules.
Once you get hold of how apps are developed and how they execute, you can go to AWS, pick an EC2 instance, set up a servlet container on it (like Tomcat), and deploy your app there. There are multiple ways you can deploy your app to it.
You can then explore Elastic Beanstalk or multi-container setups (similar to Docker).
You can read this as well.
There are just too many things to cover here. Microservices are a collection of many services, and you will need many helper modules to deploy and manage those services.
I'm developing a prototype IoT application which does the following
Receive/Store data from sensors.
Web application with a web-based IDE for users to deploy simple JavaScript/Python scripts, which get executed in Docker containers.
Data from the sensors gets streamed to these containers.
User programs can use this data to do analytics, monitoring etc.
The logs of these programs are output to the user in the web app.
Current Architecture and Services
Using one AWS EC2 instance. I chose EC2 because I was trying to figure out the architecture.
The stack is Node.js, RabbitMQ, Express, MySQL, MongoDB and Docker.
I'm not interested in using AWS IoT services like AWS IoT and Greengrass
I've ruled out Heroku since I'm using other AWS services.
Questions and Concerns
My goal is prototype development for a beta release to a set of 50 users (hopefully someone else will help with or work on a production release).
As far as possible, I don't want to spend a lot of time migrating between services since developing the product is key. Should I stick with EC2 or move to Beanstalk?
If I stick with EC2, what is the best way to handle small-medium traffic? Use one large EC2 machine or many small micro instances?
What is a good way to manage containers? Is it worth using Swarm for container management? What if I have to use multiple instances?
I also have small scripts that hold status information about the sensors, which is needed by the web app and other services. If I move to multiple instances, how can I make these scripts available to multiple machines?
The same question also holds for servers, message buses, databases, etc.
My goal is certainly not production release. I want to complete the product, show I have users who are interested and of course, show that the product works!
Any help in this regard will be really appreciated!
If you want to manage Docker containers with the least hassle on AWS, you can use the Amazon ECS service to deploy your containers, or else go with Beanstalk. You also don't need to use Swarm on AWS; ECS will do that job for you.
It's always better to scale out rather than scale up, using small to medium-sized EC2 instances. However, the challenge you will face is managing and scaling the underlying EC2 instances as well as your Docker containers. That pushes you towards large EC2 instances, so you can set EC2 scaling aside and focus on Docker scaling (which will add additional cost for you).
Another alternative for the web application part is the AWS Lambda and API Gateway stack with the Serverless Framework, which needs the least operational overhead and comes with DevOps tooling.
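A minimal serverless.yml for that Lambda + API Gateway option could look like the sketch below; the service name, runtime, and handler path are assumptions for a generic Node.js app, not taken from your project:

```yaml
# Hypothetical Serverless Framework config routing all HTTP traffic to one Lambda.
service: iot-webapp
provider:
  name: aws
  runtime: nodejs18.x
functions:
  api:
    handler: src/handler.main   # assumed handler module exporting `main`
    events:
      - http:                   # catch-all proxy route through API Gateway
          path: /{proxy+}
          method: any
```

Running serverless deploy provisions the function, the API Gateway endpoint, and the IAM plumbing in one step, which is where the low operational overhead comes from.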
You may keep your web app on Heroku and run your IoT server on AWS EC2 or AWS Lambda. Heroku runs on AWS itself, so this split setup will not affect performance. You can ease the inconvenience of "sitting on two chairs" by writing a Terraform script which provisions both the EC2 instance and the Heroku app and ties them together.
Alternatively, you can use Dockhero add-on to run your IoT server in a Docker container alongside your Heroku app.
PS: I'm a Dockhero maintainer.