I need to develop a Spring Boot microservice and deploy it in Docker. I have already developed a sample microservice. While learning about Docker and container deployment I found plenty of documentation on installing Docker, building images, and running applications as containers. I still have some doubts about the deployment procedure:
If I need to deploy 4 Spring Boot microservices in Docker, do I need to create a separate image for each of them? Or can I use the same Dockerfile for all my Spring Boot microservices?
I am using a PostgreSQL database. Can I include that connection in the Docker image, or do I need to manage it separately?
If you have four different Spring Boot applications, I suggest creating four different Dockerfiles, and building four different images from those files. Basically put one Dockerfile in each Spring application folder.
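For illustration, a rough sketch of what the build step might look like, assuming four hypothetical service folders that each contain their own Dockerfile (the service names are made up):

# Run from the repository root; each folder holds its own Dockerfile
docker build -t user-service:1.0 ./user-service
docker build -t order-service:1.0 ./order-service
docker build -t billing-service:1.0 ./billing-service
docker build -t notification-service:1.0 ./notification-service

The Dockerfiles themselves can be almost identical (copy the built jar, set the entry point); the point is that each service gets packaged and versioned as its own image.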
You can build the PostgreSQL credentials (hostname, username & password) into the application by writing them in the code or configuration. This is the easiest option.
If you use AWS and run your Docker containers on ECS (Elastic Container Service) or EC2, you could instead store the credentials in the EC2 Systems Manager (SSM) Parameter Store and have your application fetch them at startup. This takes a bit more AWS knowledge, however, and you have to use the AWS SDK to fetch the credentials from the application. Here is a StackOverflow question about exactly this: Accessing AWS parameter store values with custom KMS key
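To illustrate the Parameter Store idea (the parameter name and region below are made up, and a real application would typically use the AWS SDK rather than the CLI):

# Store the secret once as an encrypted SecureString
aws ssm put-parameter --name /myapp/db/password --type SecureString --value 'changeme' --region eu-west-1
# Read it back at startup, e.g. from an entrypoint script
aws ssm get-parameter --name /myapp/db/password --with-decryption --region eu-west-1 \
  --query Parameter.Value --output text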
There can be one single image for all your micro-services, but that is not a good design and is not recommended. Always try to decouple things from one another. In your case, create separate images (separate Dockerfiles) for each micro-service.
The same goes for your second question: create a separate image (one Dockerfile) for your database as well. For the credentials, you can follow Jonatan's suggestion.
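As a rough sketch of keeping the database out of the application image, one common option (the names, network, and credentials below are placeholders) is to run the official postgres image as its own container and hand the connection settings to the service as environment variables:

docker network create myapp-net
docker run -d --name myapp-db --network myapp-net \
  -e POSTGRES_USER=myapp -e POSTGRES_PASSWORD=secret postgres:15
docker run -d --name user-service --network myapp-net \
  -e SPRING_DATASOURCE_URL=jdbc:postgresql://myapp-db:5432/myapp \
  -e SPRING_DATASOURCE_USERNAME=myapp \
  -e SPRING_DATASOURCE_PASSWORD=secret \
  user-service:1.0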
There are several tutorials on how to deploy a containerized service to the cloud: AWS, Google Cloud Platform, Heroku, and many others all have nice tutorials on how to do this.
However, most real-world apps are made of two or more services (for example a database + a web server), rather than just one service.
Is it bad practice to deploy the various services of a multi-service app to different clusters (e.g. deploy the database to one GKE cluster and the web server to another GKE cluster)? I'm asking because I'm finding it very difficult to deploy even a simple web app to a single cluster, whereas I expected that once I had set up my Dockerfiles and docker-compose.yml everything would work out of the box (as advertised by the Docker Compose and Kubernetes documentation) and I would end up with a small cluster running one container for my database and one container for my web server.
So my questions are:
Is it bad practice to deploy the various services of a multi-service app to different clusters?
What is, in general, the de-facto standard way to deploy a web app with a database and a web server to the cloud? What are the easiest tools to achieve this?
Practically, what is the simplest way I can deploy a React + Express + MongoDB app to any cloud provider with a free-tier account?
Deploying multiple services (a.k.a. applications) that share some logic between them on the same cluster/namespace is actually the best practice. I am not sure why you find it difficult, but with a container orchestration platform such as Kubernetes you can deploy as many applications as you want in the same project on the same cluster.
I would recommend picking a cloud platform that offers a managed container orchestrator, such as Google Container Engine (GKE) on Google Cloud Platform (or any other cloud platform you want), and start exploring from there. You can also read about containers in general or about Kubernetes.
So, practically speaking, I would create MongoDB and the Express app inside the same namespace, and run every other service or application related to the project as another container within that same namespace.
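As a minimal sketch of that layout with kubectl (the namespace and image names are placeholders):

kubectl create namespace myproject
kubectl create deployment mongo --image=mongo:6 -n myproject
kubectl expose deployment mongo --port=27017 -n myproject
kubectl create deployment web --image=myregistry/express-app:1.0 -n myproject
kubectl expose deployment web --type=LoadBalancer --port=80 --target-port=3000 -n myproject

Inside the namespace, the Express app can reach the database through the internal Service name (mongo), so no second cluster is needed.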
I've been trying to find an efficient way to handle continuous deployment with a Docker compose setup and AWS hosting.
So far I've looked into CodeDeploy, S3 buckets, and ECS. My application is relatively small, with only 3 Docker services: a Django app, NGINX, and PostgreSQL. I was unable to find any reliable information on using CodeDeploy with Docker Compose, and because of the small scale ECS seems impractical. I've considered an S3 bucket, but that seems no better than just deploying my application with something like git or scp.
What is a standard way of handling the deployment of a Docker Compose setup on AWS? If possible I would like to use Bitbucket Pipelines or CircleCI to perform the deployment in a manually triggered step after running tests. But I've been unable to find a solution that would easily let me copy over the code (which lives in a Git repo on a production branch and is how I currently get the code onto the production server).
I would like to add some possibilities to @gasc's answer.
It would be better if you create a CloudFormation template for deploying your EC2 resources with all the required groups, auto scaling, and other pieces.
Then create an AMI with Docker Compose installed, plus anything else you need for your EC2 environment.
Then you can use a CodeDeploy pipeline; AWS also provides a private container registry (ECR), which you may want to use.
The rest of the steps are the same: just SCP the compose file onto the EC2 instance and launch the
docker-compose up
command, and you are done.
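A minimal sketch of that copy-and-run step as it might appear in a pipeline (the key, host, and paths are placeholders):

scp -i deploy_key docker-compose.yml ec2-user@my-ec2-host:/home/ec2-user/app/
ssh -i deploy_key ec2-user@my-ec2-host 'cd /home/ec2-user/app && docker-compose pull && docker-compose up -d'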
Let me know if you want more help; I'm open to discussion.
What I would do in your case is:
1 - If needed, update your docker-compose.yml file (or whatever you called it) to version 3 or higher, so you can use swarm mode.
2 - During your pipeline, build all the images needed and push them to a registry.
3 - In your pipeline, scp your compose file to a swarm manager node.
4 - Deploy your application using swarm (docker stack deploy -c <your-docker-compose-file> your_app_name). This way you can handle rolling updates and scale easily.
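A rough sketch of steps 2-4, assuming a made-up registry, image, and manager host:

docker build -t registry.example.com/myapp/django:1.2.3 ./django
docker push registry.example.com/myapp/django:1.2.3
scp docker-compose.yml deploy@swarm-manager:/srv/myapp/
ssh deploy@swarm-manager 'docker stack deploy -c /srv/myapp/docker-compose.yml myapp'

Re-running the same docker stack deploy with a new image tag triggers swarm's rolling update for that service.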
Note that if you want to use multiple nodes, you need to open a few ports on them.
I see you mentioned that ECS might seem impractical for such a small scale - in my opinion, not necessarily. It would require you to rewrite your docker-compose.yml into task and service definitions, but since there aren't many services, that shouldn't take you much time.
I'm planning to transfer an application from Heroku to AWS Elastic Beanstalk. On Heroku, I have two different applications, one for staging and the other for production, and both have their own web and worker dynos.
I'd like to set up something like that on AWS EB. I've read about the difference between the Web Tier and the Worker Tier, but here are some questions:
Do I set up two different applications for production and staging, or the same application with two different environments? If the latter, would I have to create 4 environments: two for production web/worker and two for staging web/worker? What's the correct structure? I'll use the same Rails application for web and worker. In that case, will I have to deploy them separately, or is there a command to deploy both environments together?
I'll use the same Rails application for web and worker.
This tells me that you should have a single application. Applications manage application versions, which is basically just deployment history.
You will want to create 4 environments. This allows you to "promote to prod" by CNAME swapping, or by deploying a previously deployed version.
You will have to deploy your web/worker separately, but you could very easily create a script that deploys to both at the same time.
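For example, a trivial "deploy both" helper could look like this (the environment names are placeholders):

#!/bin/bash
# Deploy the current application version to both EB environments
set -e
eb deploy myapp-web-prod
eb deploy myapp-worker-prod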
For future reference, AWS Elastic Beanstalk has since added a solution for this called Environment Links:
https://aws.amazon.com/about-aws/whats-new/2015/11/aws-elastic-beanstalk-adds-support-for-environment-links/
With that feature, we're now able to link both environments to the same code (so we deploy it only once instead of twice). To make the worker environment run the worker process and the web environment run the web server, you can set different environment variables and customize the EB initialization scripts to check for those variables and start the appropriate process.
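As a rough sketch of that idea (the PROCESS_TYPE variable name is made up, and the Puma/Sidekiq commands are assumptions about the Rails app; adapt them to whatever processes you actually run):

#!/bin/bash
# Start script that branches on an environment variable set per EB environment
if [ "$PROCESS_TYPE" = "worker" ]; then
  bundle exec sidekiq                 # assuming Sidekiq for the worker
else
  bundle exec puma -C config/puma.rb  # assuming Puma for the web server
fi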
I am working on a project using a microservices architecture.
Each service lives in its own docker container and has a separate git repository in order to ensure loose coupling.
It is my understanding that AWS recently announced support for multi-container Docker environments in Elastic Beanstalk. This is great for development because I can launch all services with a single command and test everything locally on my laptop, just like with Docker Compose.
However, it seems I only have the option to deploy all services at once, which I am afraid defeats the initial purpose of having a microservices architecture.
I would like to be able to deploy/version each service independently to AWS. What would be the best way to achieve that while keeping infrastructure management to a minimum?
We are currently using Amazon ECS to accomplish exactly what you are trying to achieve. You can define your Docker container in a task definition and then create an ECS service, which will handle the number of instances, scaling, etc.
One thing to note: Amazon uses the word "container" a lot in the documentation. They may be talking about the EC2 container instances that make up the cluster your Docker containers run on.
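As a sketch of how that looks with the AWS CLI (the cluster, service, and file names below are placeholders):

aws ecs register-task-definition --cli-input-json file://user-service-taskdef.json
aws ecs create-service --cluster my-cluster --service-name user-service \
  --task-definition user-service --desired-count 2

Each service can then be updated on its own, for example by registering a new task definition revision and pointing the service at it with aws ecs update-service, which keeps the deployments independent.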
I am trying to set up a new Spring Boot + Docker (microservices) based project. The deployment is targeted at AWS. Every service has a Dockerfile associated with it. I am thinking of using the Amazon container service (ECS) for deployment, but as far as I can see it only pulls images from Docker Hub. I don't want ECS to pull from Docker Hub; rather, I want it to build the images from the Dockerfiles and then take over deploying those containers. Is this possible? If yes, how?
This is not possible yet with the Amazon EC2 Container Service (ECS) alone - while ECS now supports private registries (see also the introductory blog post), it doesn't yet offer an image build service (as usual, AWS is expected to add such notable features over time; see e.g. the feature request "ECS container dream service" for more on this).
However, it can already be achieved with AWS Elastic Beanstalk's built in initial support for Single Container Docker Configurations:
Docker uses a Dockerfile to create a Docker image that contains your source bundle. [...] Dockerfile is a plain text file that contains instructions that Elastic Beanstalk uses to build a customized Docker image on each Amazon EC2 instance in your Elastic Beanstalk environment. Create a Dockerfile when you do not already have an existing image hosted in a repository. [emphasis mine]
In an ironic twist, Elastic Beanstalk has now added Multicontainer Docker Environments based on ECS, but this highly desired, more versatile Docker deployment option does not offer the ability to build images either:
Building custom images during deployment with a Dockerfile is not supported by the multicontainer Docker platform on Elastic Beanstalk. Build your images and deploy them to an online repository before creating an Elastic Beanstalk environment. [emphasis mine]
As mentioned above, I would expect this to be added to ECS in the not too distant future given AWS' well-known agility (see e.g. the most recent ECS updates), but they usually don't commit to roadmap details, so it is hard to estimate how long we will need to wait for this one.
Meanwhile, Amazon has introduced the EC2 Container Registry: https://aws.amazon.com/ecr/
It is a private Docker registry, in case you do not want to use Docker Hub. It is nicely integrated with the ECS service.
However, it does not build your Docker images, so it does not solve the entire problem.
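A minimal sketch of tagging and pushing a locally built image to ECR with a recent AWS CLI (the account ID, region, and repository name are placeholders):

aws ecr get-login-password --region eu-west-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com
docker tag my-service:1.0 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-service:1.0
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-service:1.0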
I use a Bamboo server for building images (the source is in Git repositories on Bitbucket). Bamboo pushes the images to Amazon's container registry.
I am hoping that Bitbucket Pipelines will make the process smoother, with less configuration of build servers. From the videos I have seen, all of your build configuration sits right in your repository. It is still in a closed beta, so I guess we will have to wait a bit more to see what it ends up being.