Different Docker containers for wso2-am product profiles

I'm trying to deploy wso2-am 1.10.0 in a Docker container.
To have a clear separation between components, we would like to spin up separate clusters (Docker containers) for gateway workers, store/publisher, gateway managers, and key managers. Could you please point me to an example of how to achieve this? I was able to follow the instructions in https://github.com/wso2/dockerfiles
and build a Docker image of the product. But I would like to have different containers for the different wso2-am product profiles. Please help.

If your requirement is to spawn different cluster patterns with WSO2 APIM, you can look at the docker-compose approach used here.
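For illustration, a compose file for that kind of split could look roughly like the sketch below, with one container per profile. The image name, installation path, and -Dprofile values are my own assumptions and may not match what the linked repository produces.

# Rough sketch; image name, install path, and profile names are assumptions, not taken from wso2/dockerfiles.
version: "2"
services:
  gateway-manager:
    image: wso2am:1.10.0                                        # image built by following the wso2/dockerfiles instructions
    command: /opt/wso2am/bin/wso2server.sh -Dprofile=gateway-manager
  gateway-worker:
    image: wso2am:1.10.0
    command: /opt/wso2am/bin/wso2server.sh -Dprofile=gateway-worker
  key-manager:
    image: wso2am:1.10.0
    command: /opt/wso2am/bin/wso2server.sh -Dprofile=api-key-manager
  store:
    image: wso2am:1.10.0
    command: /opt/wso2am/bin/wso2server.sh -Dprofile=api-store
  publisher:
    image: wso2am:1.10.0
    command: /opt/wso2am/bin/wso2server.sh -Dprofile=api-publisher

Each service would still need the usual clustering and data source configuration mounted or baked into the image; the point here is only that one compose file can start a separate container per profile.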

Related

How to create better Kubernetes architecture for multiple servers?

I'm new to Kubernetes.
I have multiple independent servers, based on Spring Boot (Java).
Each server has a separate, independent database, and the database connection details are written in application.yml.
I was wondering: if I deploy to Kubernetes,
should I have, let's say, 15 different deployments, basically one for each application.yml?
Could you please suggest the general flow or picture?
Flexibility comes when there is little or no dependency, so yes, each service should be deployed and managed with its own Deployment. A Deployment just manages Pods, and the Pod is the smallest unit of a Kubernetes application. For example, say we have two services, login and user; each has a different container image, so we need two different Pods, which means two different Deployments.
This lets you scale, roll out, clean up, and update each service independently. Plus, if you add monitoring later, it helps you identify which Deployment an object with issues belongs to.
Tools like ArgoCD, which follow a GitOps approach, sync applications from the Git repository, so in that case it is also easier to sync applications independently.
In addition to that, it is better to use Helm; each service will be represented by its own Helm chart.
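For illustration, a minimal Deployment for one such service could look roughly like this; the name, image, and port are placeholders rather than anything from the question:

# Hypothetical Deployment for one Spring Boot service; name, image, and port are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: login-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: login-service
  template:
    metadata:
      labels:
        app: login-service
    spec:
      containers:
        - name: login-service
          image: registry.example.com/login-service:1.0.0   # placeholder image
          ports:
            - containerPort: 8080                            # typical Spring Boot port

You would repeat this pattern (ideally via a Helm chart) once per service, so each one can be scaled and rolled out on its own.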

Continuous Deployment of Docker Compose App to AWS/EC2

I've been trying to find an efficient way to handle continuous deployment with a Docker compose setup and AWS hosting.
So far I've looked into CodeDeploy, S3 buckets, and ECS. My application is relatively small, with only 3 Docker services: a Django app, NGINX, and PostgreSQL. I was unable to find any reliable information on using CodeDeploy with Docker Compose, and because of the small scale ECS seems impractical. I've considered an S3 bucket, but that seems no better than just deploying my application with something like git or scp.
What is a standard way of handling deploying a docker compose setup on AWS? If possible I would like to use Bitbucket Pipelines or CircleCI to perform the deployment in a manually triggered step after running tests. But I've been unable to find a solution that would easily let me copy over the code (which is in a git repo on a production branch and is how I get the code onto the production server at the moment).
I would like to add some possibilities to #gasc's answer.
It would be better if you make a CloudFormation template for deploying your EC2 resources with all the required groups, auto scaling, and other pieces.
Then create the AMI with Docker Compose installed, plus anything else you would require for your EC2 environment.
Then you can use a CodeDeploy pipeline. AWS also provides a private container registry (ECR); maybe you want to use that.
The rest of the steps are the same: just SCP the compose file onto the EC2 instance, run the
docker-compose up
command, and you are done.
Let me know if you want more help; I'm open for discussion.
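Since the question mentions Bitbucket Pipelines with a manually triggered step, a rough sketch of such a step could look like this; the host, user, and paths are placeholders I made up, and the SSH key setup is omitted:

# Hypothetical bitbucket-pipelines.yml fragment; host, user, and paths are placeholders, SSH key setup omitted.
pipelines:
  custom:                         # "custom" pipelines are triggered manually from the Bitbucket UI
    deploy-to-ec2:
      - step:
          name: Deploy compose stack to EC2
          script:
            - scp docker-compose.yml ec2-user@ec2-host.example.com:/opt/app/docker-compose.yml
            - ssh ec2-user@ec2-host.example.com "cd /opt/app && docker-compose pull && docker-compose up -d"

CircleCI supports the same idea with an approval/hold step before the deploy job.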
What I would do in your case is:
1 - If needed, update your docker-compose.yml file (or whatever you called it) to version 3 or higher, to use swarm.
2 - During your pipeline build all images needed, and push them to a registry.
3 - In your pipeline scp your compose file to a manager node.
4 - Deploy your application using swarm (docker stack deploy -c <your-docker-compose-file> your_app_name). This way you can handle rolling updates and scale easily.
Note that if you want to use multiple nodes you need to open a few ports between them.
I see you mentioned that ECS might seem impractical at such a small scale; in my opinion, not necessarily. It would require you to rewrite your docker-compose.yml into task and service definitions, but since there are not a lot of services, that shouldn't take you much time.
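As a sketch of steps 1 and 4, a version 3 compose file with a deploy section could look roughly like this; the images and replica counts are placeholders:

# Hypothetical version 3 compose file for swarm mode; images and replica counts are placeholders.
version: "3.7"
services:
  web:
    image: registry.example.com/django-app:latest   # image built and pushed in step 2
    deploy:
      replicas: 2
      update_config:
        parallelism: 1                               # rolling updates, one task at a time
  nginx:
    image: registry.example.com/nginx:latest
    ports:
      - "80:80"
  db:
    image: postgres:13
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:

You would then run docker stack deploy -c docker-compose.yml your_app_name on the manager node, as in step 4.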

Spring Boot microservice deployment in Docker

I need to develop a Spring Boot microservice and deploy it in Docker. I have now developed a sample microservice. While learning about Docker and container deployment I found a lot of documentation on installing Docker, building images, and running the application as a packaged container. I still have some doubts about the deployment procedure:
If I need to deploy 4 Spring Boot microservices in Docker, do I need to create a separate image for each? Or can I use the same Dockerfile for all my Spring Boot microservices?
I am using a PostgreSQL database. Can I include that connection in the Docker image, or do I need to manage it separately?
If you have four different Spring Boot applications, I suggest creating four different Dockerfiles, and building four different images from those files. Basically put one Dockerfile in each Spring application folder.
You can build the PostgreSQL credentials (hostname, username & password) into the application by writing them into the code or configuration. This is easiest.
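For illustration, that configuration is usually just the datasource section of the Spring Boot application.yml; the host, database name, and credentials below are placeholders:

# Hypothetical application.yml datasource section; host, database, and credentials are placeholders.
spring:
  datasource:
    url: jdbc:postgresql://db-host.example.com:5432/mydb
    username: myuser
    password: mypassword
    driver-class-name: org.postgresql.Driver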
If you use AWS and ECS (Elastic Container Service) or EC2 to run your Docker containers you could store the credentials in the EC2 Parameter Store, and have your application fetch them at startup, however this takes a bit more AWS knowledge and you have to use the AWS SDK to fetch the credentials from the application. Here is a StackOverflow question about exactly this: Accessing AWS parameter store values with custom KMS key
There can be one single image for all your microservices, but it's not a good design and not suggested. Always try to decouple things from one another. In your case, create separate images (separate Dockerfiles) for each microservice.
The same goes for your second question: create a separate image (one Dockerfile) for your database as well. For the credentials, you can follow Jonatan's suggestion.

Tool to automate Docker Swarm

I have followed the Docker Docs about setting up Swarm on Virtualbox.
I suppose it is the same procedure to set it up on AWS, Azure or DigitalOcean.
It is a lot to do manually every time.
Is there a tool to automate this?
I would like to use something to set up and scale Swarm the way Compose does for Docker.
Maybe I would start with one AWS instance and 2-3 containers, and then scale up to 100 containers with the instances scaling accordingly. Then I would want to scale down to 2 instances and have the rest shut down.
Does something like this exist?
If you want to avoid manual configurations but still get the required high availability and cost efficiency, try to run Docker Swarm template pre-packaged by Jelastic:
it has built-in automatic clustering and scaling
the installation is performed automatically and you get full access to the cluster via an intuitive UI
containers run directly on bare metal, so there is no need to reserve full VMs for each service (and you can choose the datacenter you want to run your project in)
payment is based on the actual consumption of RAM and CPU
containers are automatically distributed across different hardware servers, which increases high availability
The details about the package and installation steps are in this article.
You can use Ansible for configuring the Swarm master, Swarm nodes, and all the required cluster discovery. Ansible is a general IT automation tool, but it comes with a very powerful Docker module that lets you set up Docker Swarm easily.
This GitHub repository shows a good example of how to set up Swarm with Ansible.
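As a rough sketch (the linked repository may structure things differently), initializing a manager and joining workers with Ansible's docker_swarm module could look like this; the host group names and addresses are placeholders:

# Hypothetical playbook fragment; host group names and addresses are placeholders.
- hosts: swarm_managers
  become: true
  tasks:
    - name: Initialize the swarm on the first manager
      community.docker.docker_swarm:
        state: present
        advertise_addr: "{{ ansible_default_ipv4.address }}"
      register: swarm_info

- hosts: swarm_workers
  become: true
  tasks:
    - name: Join workers to the swarm
      community.docker.docker_swarm:
        state: join
        join_token: "{{ hostvars[groups['swarm_managers'][0]].swarm_info.swarm_facts.JoinTokens.Worker }}"
        remote_addrs:
          - "{{ hostvars[groups['swarm_managers'][0]].ansible_default_ipv4.address }}:2377"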
You can use Docker Machine for provisioning hosts and configuring swarm easily (example).
The Docker ecosystem also includes managed solutions like Tutum or Docker Cloud to easily achieve what you want.
Check out the devopsbyte.com blog, which covers how to set up a Docker Swarm cluster using Ansible.

Deployment methods for docker based micro services architecture on AWS

I am working on a project using a microservices architecture.
Each service lives in its own docker container and has a separate git repository in order to ensure loose coupling.
It is my understanding that AWS recently announced support for Multi-Container Docker environments in ElasticBeanstalk. This is great for development because I can launch all services with a single command and test everything locally on my laptop. Just like Docker Compose.
However, it seems my only option is to also deploy all the services at once, which I am afraid defeats the initial purpose of having a microservices architecture.
I would like to be able to deploy/version each service independently to AWS. What would be the best way to achieve that while keeping infrastructure management to a minimum?
We are currently using Amazon ECS to accomplish exactly what you are trying to achieve. You can define your Docker container as a task definition and then create an ECS service, which will handle the number of instances, scaling, etc.
One thing to note is that Amazon mentions the word "container" a lot in the documentation. They may be talking about the EC2 instance used in the cluster to host your Docker instances/containers.
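To make that concrete, a task definition and service for one microservice could be declared roughly like this in CloudFormation YAML; the names, image, and cluster are placeholders:

# Hypothetical CloudFormation fragment for one microservice; names, image, and cluster are placeholders.
Resources:
  MyServiceTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: my-service
      ContainerDefinitions:
        - Name: my-service
          Image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-service:1.0.0
          Memory: 512
          PortMappings:
            - ContainerPort: 8080
  MyServiceService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: my-ecs-cluster                  # existing cluster (placeholder)
      TaskDefinition: !Ref MyServiceTaskDefinition
      DesiredCount: 2

Because each microservice gets its own task definition and ECS service, you can version and redeploy them independently.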