Directing the Root Context to the Developer Portal in WSO2 on Docker?

I am trying to direct the root context to the Developer Portal on the dockerized version of WSO2, but how can I keep the changes without editing files inside the container?
I followed this docs:
https://apim.docs.wso2.com/en/latest/develop/customizations/directing-the-root-context-to-the-developer-portal/
Thanks

If you are running this without a container orchestration system, you can mount these configuration files into the Docker container as volumes.
https://docs.docker.com/storage/volumes/
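A minimal sketch, assuming the customized file is the ROOT webapp's web.xml (use whatever file the WSO2 doc actually has you edit; the paths and product version below are illustrative):

    docker run -d -p 9443:9443 \
      -v $(pwd)/web.xml:/home/wso2carbon/wso2am-4.2.0/repository/deployment/server/webapps/ROOT/WEB-INF/web.xml \
      wso2/wso2am:4.2.0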
If you are using Kubernetes (or a similar container orchestration system), you can use ConfigMaps to add these files.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
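A minimal sketch of the ConfigMap approach, again with illustrative file names and paths: create a ConfigMap from the customized file and mount it over the original location using subPath.

    kubectl create configmap devportal-root-config --from-file=web.xml

    # fragment of the Deployment/Pod spec (paths are illustrative)
    spec:
      containers:
        - name: api-manager
          image: wso2/wso2am:4.2.0
          volumeMounts:
            - name: devportal-root-config
              mountPath: /home/wso2carbon/wso2am-4.2.0/repository/deployment/server/webapps/ROOT/WEB-INF/web.xml
              subPath: web.xml
      volumes:
        - name: devportal-root-config
          configMap:
            name: devportal-root-config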
In both cases you are not editing files inside the container; you simply replace them.

Related

Continuous Deployment of Docker Compose App to AWS/EC2

I've been trying to find an efficient way to handle continuous deployment with a Docker compose setup and AWS hosting.
So far I've looked into CodeDeploy, S3 buckets, and ECS. My application is relatively small, with only 3 Docker services: a Django app, NGINX, and PostgreSQL. I was unable to find any reliable information on using CodeDeploy with Docker Compose, and because of the small scale, ECS seems impractical. I've considered an S3 bucket, but that seems no better than just deploying my application with something like git or scp.
What is a standard way of deploying a Docker Compose setup on AWS? If possible, I would like to use Bitbucket Pipelines or CircleCI to perform the deployment in a manually triggered step after running tests. But I've been unable to find a solution that would easily let me copy over the code (which lives in a Git repo on a production branch, and is how I currently get the code onto the production server).
I would like to add some possibilities to #gasc's answer:
It would be better if you create a CloudFormation template for deploying your EC2 resources, with all the required security groups, auto scaling, and other resources.
Then create an AMI with Docker Compose installed, plus anything else you need for your EC2 environment.
Then you can use a CodeDeploy pipeline. AWS also provides a private container registry, which you may want to use here as well.
The rest of the steps are the same: SCP the compose file onto the EC2 instance, run docker-compose up, and you are done.
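A minimal sketch of that last step as it might look in a pipeline script, assuming SSH access to the instance (host name and paths are illustrative):

    scp docker-compose.yml ec2-user@your-ec2-host:/home/ec2-user/app/
    ssh ec2-user@your-ec2-host 'cd /home/ec2-user/app && docker-compose pull && docker-compose up -d'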
Let me know if you want more help; I'm open to discussion.
What I would do in your case is:
1 - If needed, update your docker-compose.yml file (or whatever you called it) to version 3 or higher, to use swarm.
2 - During your pipeline build all images needed, and push them to a registry.
3 - In your pipeline scp your compose file to a manager node.
4 - Deploy your application using swarm (docker stack deploy -c <your-docker-compose-file> your_app_name). This way you can handle rolling updates and scale easily.
Note that if you want to use multiple nodes you need to open a few ports on them (2377/tcp for cluster management, 7946/tcp and udp for node communication, and 4789/udp for overlay network traffic).
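A minimal sketch of what steps 1 and 4 look like together, with illustrative service and image names:

    # docker-compose.yml (version 3+, with a deploy section for swarm)
    version: "3.7"
    services:
      web:
        image: registry.example.com/myapp/django:latest
        ports:
          - "8000:8000"
        deploy:
          replicas: 2
          update_config:
            parallelism: 1
            delay: 10s

    # then, on a manager node
    docker stack deploy -c docker-compose.yml your_app_name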
I see you mentioned that ECS might seem impractical for such a small scale; in my opinion, not necessarily. It would require you to rewrite your docker-compose.yml into task and service definitions, but since there aren't many services, that shouldn't take much time.

Spring Boot microservice deployment in Docker

I need to develop a Spring Boot microservice and deploy it in Docker. I have developed a sample microservice. While learning Docker and container deployment, I found plenty of documentation on installing Docker, building images, and running the application as a container. I still have some doubts about the deployment procedure:
If I need to deploy 4 Spring Boot microservices in Docker, do I need to create a separate image for each? Or can I use the same Dockerfile for all my Spring Boot microservices?
I am using a PostgreSQL database. Can I include that connection in the Docker image? Or do I need to manage it separately?
If you have four different Spring Boot applications, I suggest creating four different Dockerfiles, and building four different images from those files. Basically put one Dockerfile in each Spring application folder.
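A minimal sketch of such a Dockerfile, assuming a Maven build that produces a single executable jar (the base image and jar name are illustrative):

    FROM eclipse-temurin:17-jre
    # copy the fat jar produced by the build
    COPY target/my-service-0.0.1-SNAPSHOT.jar app.jar
    EXPOSE 8080
    ENTRYPOINT ["java", "-jar", "/app.jar"]

Each of the four services would get its own copy of this file, differing only in the jar it copies.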
You can build the PostgreSQL credentials (hostname, username & password) into the application by writing them in the code. This is the easiest approach.
If you use AWS and ECS (Elastic Container Service) or EC2 to run your Docker containers, you could store the credentials in the EC2 Parameter Store and have your application fetch them at startup. However, this takes a bit more AWS knowledge, and you have to use the AWS SDK to fetch the credentials from within the application. Here is a Stack Overflow question about exactly this: Accessing AWS parameter store values with custom KMS key
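A middle ground, not mentioned in the answer above but commonly used, is to reference environment variables from the Spring configuration and inject the values when the container starts (names and values below are illustrative):

    # application.properties
    spring.datasource.url=${DB_URL}
    spring.datasource.username=${DB_USER}
    spring.datasource.password=${DB_PASSWORD}

    # at container start
    docker run -d \
      -e DB_URL=jdbc:postgresql://db-host:5432/mydb \
      -e DB_USER=myuser \
      -e DB_PASSWORD=secret \
      my-service:latest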
There can be one single image for all your microservices, but that is not good design and is not recommended. Always try to decouple things from one another. In your case, create separate images (separate Dockerfiles) for each microservice.
The same applies to your second question: create a separate image (one Dockerfile) for your database as well. For the credentials, you can follow Jonatan's suggestion.

Deploying Docker Data Volumes

How do I deploy a named data volume, with contents, to nodes in a swarm? Here is what I want to do, as described in the Docker documentation:
“Consider a situation where your image starts a lightweight web server. You could use that image as a base image, copy in your website’s HTML files, and package that into another image. Each time your website changed, you’d need to update the new image and redeploy all of the containers serving your website. A better solution is to store the website in a named volume which is attached to each of your web server containers when they start. To update the website, you just update the named volume.”
(source:
https://docs.docker.com/engine/reference/commandline/service_create/#add-bind-mounts-or-volumes)
I'd like to use the better solution. But the description doesn't say how the named volume is deployed to host machines running the web servers, and I can't get a clear read on this from the documentation. I'm using Docker-for-AWS to set up a swarm where each node is running on a different EC2 instance. If the containers are supposed to mount the volume locally, then how is it deployed to each node of the swarm? If it is mounted from a manager node as a network filesystem visible to the nodes, how is this specified in the docker-compose yaml file? And how does the revised volume get deployed from the development machine to the swarm manager? Can this be done through a deploy directive in a docker-compose yaml file? Can it be done in Docker Cloud?
Thanks

Is it possible to deploy Docker containers using Netflix's Spinnaker?

I wonder if Spinnaker (http://spinnaker.io) can be used for docker container deployment?
What we do is:
Poll the repo
If there is new code, we build 3 containers (nginx, Django app container, fluentd logger container)
we spin up the fluentd container to collect the logs from the other two containers and send them to Splunk/AWS CloudWatch Logs
we spin up the Django app container and, on the same host, the nginx container (as a proxy to the Django container) [and forward the logs into fluentd]
we map a certain JSON file with the app configuration into the Django container
Unfortunately, Spinnaker has too few examples; the example they have here only shows how to bake an image with a certain DEB package inside.
We do have Jenkins jobs that can poll the repo, test the code, create and upload the Docker containers to a private registry, and deploy the containers using Ansible. The question is whether we can use Spinnaker to do that natively.
There is currently no container support in Spinnaker. Google is actively working on adding Kubernetes support, but there are currently no plans to integrate Spinnaker directly with either Docker or ECS.
One thing we tried that worked was to use Jenkins to build and publish a Debian package that wraps the Docker image that was created. All this Debian package does is pull and start the Docker container for a Spinnaker service. We then created a Spinnaker pipeline that bakes this Debian package and deploys it.
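Roughly, the maintainer script in such a Debian wrapper does little more than pull and (re)start the container; a sketch with an illustrative image name (this is not the original pipeline's actual script):

    #!/bin/sh
    # postinst: pull the image and (re)start the service container
    set -e
    docker pull registry.example.com/myteam/myservice:latest
    docker rm -f myservice 2>/dev/null || true
    docker run -d --name myservice --restart always \
      registry.example.com/myteam/myservice:latest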

How to configure Amazon container service without docker hub integration

I am trying to set up a new Spring Boot + Docker (microservices) based project. The deployment is targeted at AWS. Every service has a Dockerfile associated with it. I am thinking of using the Amazon container service for deployment, but as far as I can see it only pulls images from Docker Hub. I don't want ECS to pull from Docker Hub; rather, I want it to build the images from the Dockerfiles and then take over deploying those containers. Is this possible? If yes, how?
This is not possible yet with the Amazon EC2 Container Service (ECS) alone - while ECS meanwhile supports private registries (see also the introductory blog post), it doesn't yet offer an image build service (as usual, AWS is expected to add such notable additional features over time, see e.g. the Feature Request: ECS container dream service for more on this).
However, it can already be achieved with AWS Elastic Beanstalk's built in initial support for Single Container Docker Configurations:
Docker uses a Dockerfile to create a Docker image that contains your source bundle. [...] Dockerfile is a plain text file that contains instructions that Elastic Beanstalk uses to build a customized Docker image on each Amazon EC2 instance in your Elastic Beanstalk environment. Create a Dockerfile when you do not already have an existing image hosted in a repository. [emphasis mine]
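In practice that means the source bundle you hand to Elastic Beanstalk is just your application code with a Dockerfile at its root; with the EB CLI, the flow looks roughly like this (application and environment names are illustrative):

    eb init my-docker-app -p docker
    eb create my-docker-env
    eb deploy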
In an ironic twist, Elastic Beanstalk has now added Multicontainer Docker Environments based on ECS, but this highly desired, more versatile Docker deployment option doesn't offer the ability to build images either:
Building custom images during deployment with a Dockerfile is not supported by the multicontainer Docker platform on Elastic Beanstalk. Build your images and deploy them to an online repository before creating an Elastic Beanstalk environment. [emphasis mine]
As mentioned above, I would expect this to be added to ECS in a not too distant future due to AWS' well known agility (see e.g. the most recent ECS updates), but they usually don't commit to roadmap details, so it is hard to estimate how long we need to wait on this one.
Meanwhile, Amazon has introduced the EC2 Container Registry: https://aws.amazon.com/ecr/
It is a private Docker repository for those who do not want to use Docker Hub, and it is nicely integrated with the ECS service.
However, it does not build your Docker images, so it does not solve the entire problem.
I use a Bamboo server for building images (the source is in Git repositories in Bitbucket). Bamboo pushes the images to Amazon's container registry.
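For reference, the push from a build server to ECR is the standard login/tag/push sequence (account ID, region, and repository name are placeholders):

    aws ecr get-login-password --region us-east-1 \
      | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
    docker build -t my-service .
    docker tag my-service:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-service:latest
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-service:latest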
I am hoping that Bitbucket Pipelines will make the process smoother, with less build server configuration. From the videos I have seen, all your build configuration sits right in your repository. It is still in a closed beta, so I guess we will have to wait a bit longer to see what it ends up being.