How to configure Amazon container service without Docker Hub integration

I am trying to set up a new Spring Boot + Docker (microservices) based project. The deployment is targeted at AWS, and every service has a Dockerfile associated with it. I am thinking of using the Amazon container service for deployment, but as far as I can see it only pulls images from Docker Hub. I don't want ECS to pull from Docker Hub; rather, it should build the images from the Dockerfiles and then take over deploying those containers. Is this possible? If yes, how?

This is not possible yet with the Amazon EC2 Container Service (ECS) alone - while ECS meanwhile supports private registries (see also the introductory blog post), it doesn't yet offer an image build service (as usual, AWS can be expected to add such notable features over time; see e.g. the Feature Request: ECS container dream service for more on this).
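For completeness, private registry authentication on plain ECS is configured per container instance via the ECS agent; a minimal sketch, with illustrative credentials (the agent also accepts dockercfg-style auth data):

```bash
# /etc/ecs/ecs.config on each container instance
ECS_CLUSTER=my-cluster
ECS_ENGINE_AUTH_TYPE=docker
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"username":"my-user","password":"my-pass","email":"me@example.com"}}
```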
However, it can already be achieved with AWS Elastic Beanstalk's built-in initial support for Single Container Docker Configurations:
Docker uses a Dockerfile to create a Docker image that contains your source bundle. [...] Dockerfile is a plain text file that contains instructions that Elastic Beanstalk uses to build a customized Docker image on each Amazon EC2 instance in your Elastic Beanstalk environment. Create a Dockerfile when you do not already have an existing image hosted in a repository. [emphasis mine]
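Applied to the Spring Boot services from the question, the idea is simply to keep a Dockerfile at the root of each service's source bundle and let Elastic Beanstalk build the image on each deploy; a minimal sketch (base image and JAR path are illustrative):

```bash
# generate a minimal Dockerfile at the root of the source bundle
cat > Dockerfile <<'EOF'
FROM openjdk:8-jre
COPY target/my-service.jar /app/my-service.jar
EXPOSE 8080
CMD ["java", "-jar", "/app/my-service.jar"]
EOF
```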
In an ironic twist, Elastic Beanstalk has since added Multicontainer Docker Environments based on ECS, but this highly desired, more versatile Docker deployment option in turn doesn't offer the ability to build images:
Building custom images during deployment with a Dockerfile is not supported by the multicontainer Docker platform on Elastic Beanstalk. Build your images and deploy them to an online repository before creating an Elastic Beanstalk environment. [emphasis mine]
As mentioned above, I would expect this to be added to ECS in the not too distant future given AWS' well known agility (see e.g. the most recent ECS updates), but they usually don't commit to roadmap details, so it is hard to estimate how long we will need to wait for this one.

Meanwhile, Amazon has introduced the EC2 Container Registry (https://aws.amazon.com/ecr/).
It is a private Docker repository, in case you do not like Docker Hub, and it is nicely integrated with the ECS service.
However, it does not build your Docker images, so it does not solve the entire problem.
I use a Bamboo server for building images (the source is in Git repositories in Bitbucket). Bamboo pushes the images to Amazon's container registry.
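That push step essentially boils down to the standard ECR workflow; a sketch with the newer AWS CLI login syntax (account ID, region and repository name are placeholders):

```bash
# authenticate the Docker CLI against ECR
aws ecr get-login-password --region eu-west-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com

# build, tag and push the image
docker build -t my-service .
docker tag my-service:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-service:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-service:latest
```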
I am hoping Bitbucket Pipelines will make the process smoother, with less configuration of build servers. From the videos I have seen, all your build configuration sits right in your repository. It is still in a closed beta, so I guess we will have to wait a bit longer to see what it ends up being.

Related

Deploy Django Code with Docker to AWS EC2 instance without ECS

Here is my setup:
1 - My Django code is hosted on GitHub.
2 - I have Docker set up on my EC2 instance.
3 - My deployment process is manual: I have to run git pull, docker build and docker run on every single code change. I am using a dockerservice account to perform this step.
How do I automate step 3 with AWS CodeDeploy or something similar?
Every example I am seeing on the internet involves ECS or Fargate, which I am not ready to use yet.
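For reference, the manual process above boils down to a script like the following, which is exactly the step I want to automate (paths and names are illustrative):

```bash
#!/bin/bash
# redeploy.sh - rebuild and restart the app container after a code change
set -e
cd /home/dockerservice/myapp            # illustrative checkout path
git pull origin production
docker build -t myapp:latest .
docker rm -f myapp 2>/dev/null || true  # stop/remove the old container if present
docker run -d --name myapp -p 80:8000 myapp:latest
```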
Check this out on how to use Docker images from a private registry (e.g. Docker Hub) for your build environment: How to Use Docker Images from a Private Registry for Your Build Environment

Continuous Deployment of Docker Compose App to AWS/EC2

I've been trying to find an efficient way to handle continuous deployment with a Docker compose setup and AWS hosting.
So far I've looked into CodeDeploy, S3 buckets, and ECS. My application is relatively small, with only 3 Docker services: a Django app, NGINX, and PostgreSQL. I was unable to find any reliable information for using CodeDeploy with Docker Compose, and because of the small scale ECS seems impractical. I've considered an S3 bucket, but that seems no better than just deploying my application with something like git or scp.
What is a standard way of handling the deployment of a Docker Compose setup on AWS? If possible, I would like to use Bitbucket Pipelines or CircleCI to perform the deployment in a manually triggered step after running tests. But I've been unable to find a solution that would easily let me copy over the code (which lives in a Git repo on a production branch, and is how I get the code onto the production server at the moment).
I would like to add some possibilities to @gasc's answer.
It would be better if you made a CloudFormation template for deploying your EC2 resources, with all the required groups, auto scaling and other stuff.
Then create an AMI with Docker Compose installed, or anything else you require for your EC2 environment.
Then you can use a CodeDeploy pipeline; AWS also provides a private container registry (ECR) here, which you may want to use.
The rest of the steps are the same: just scp the compose file onto the EC2 instance, launch the docker-compose up command, and you are done.
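Concretely, that last step might look like this (host, user and paths are illustrative):

```bash
# copy the compose file to the instance and bring the stack up in the background
scp docker-compose.yml ec2-user@my-instance.example.com:/home/ec2-user/app/
ssh ec2-user@my-instance.example.com 'cd /home/ec2-user/app && docker-compose up -d'
```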
Let me know if you want more help; I'm open for discussion.
What I would do in your case is:
1 - If needed, update your docker-compose.yml file (or whatever you called it) to version 3 or higher, to use swarm mode.
2 - During your pipeline, build all the images needed and push them to a registry.
3 - In your pipeline, scp your compose file to a manager node.
4 - Deploy your application using swarm (docker stack deploy -c <your-docker-compose-file> your_app_name). This way you can handle rolling updates and scale easily.
Note that if you want to use multiple nodes, you need to open a few ports between them (TCP 2377 for cluster management, TCP/UDP 7946 for node communication, and UDP 4789 for overlay network traffic).
I see you mentioned that ECS might seem impractical for such a small scale - in my opinion, not necessarily. It would require you to rewrite your docker-compose.yml into task and service definitions, but since there aren't a lot of services, that shouldn't take you much time.
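A sketch of what the pipeline's deploy stage could then look like (registry URL, hosts, stack name and the $COMMIT_SHA variable are all placeholders):

```bash
# build and push the images the stack references
docker build -t registry.example.com/myapp/django:"$COMMIT_SHA" ./django
docker push registry.example.com/myapp/django:"$COMMIT_SHA"

# ship the compose file to a swarm manager and deploy/update the stack
scp docker-compose.yml deploy@swarm-manager.example.com:~/
ssh deploy@swarm-manager.example.com \
  'docker stack deploy -c ~/docker-compose.yml myapp'
```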

Difference between Docker and AMI

In the context of AWS:
AMI is used to package software and can be deployed on EC2.
Docker can also be used to package software and can also be deployed to EC2.
What's the difference between both and how do I choose between them?
An AMI is an image of a whole machine that you can start new instances from. A Docker container is more lightweight and portable: a container should be transportable between providers, while an AMI is not (easily).
AMIs are basically VM images.
Docker containers are packaged mini-images that run on some VM in an isolated environment.
Even though this doesn't answer the question directly, it gives some background on how they are used.
One approach is to launch EC2 instances from Amazon AMIs (or any AMI) and then run Docker containers (with all dependencies) on top. With this approach, the Docker image gets bloated over time, and there is container drift over time. Also, the time taken for the application to be up and running is longer, as the EC2 instance has to boot and Docker has to bring up your app server.
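For this first approach, a minimal sketch of the EC2 user data that brings the app up on boot (assuming Amazon Linux; the image name is illustrative):

```bash
#!/bin/bash
# EC2 user data: install and start Docker, then run the app container
yum install -y docker
service docker start
docker run -d --restart=always -p 80:8080 registry.example.com/appserver:latest
```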
Another approach is "immutable EC2 instances". With this approach, you use an Amazon AMI as the base, install all the dependencies (using shell scripts or Ansible), and bake them into the AMI. We use HashiCorp Packer, which is an amazing tool. Here the time taken for the application to be up and running is greatly reduced, as all the dependencies (Java 8, Tomcat, the WAR file, etc.) are already installed in the AMI.
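A hedged sketch of a Packer build for such a baked AMI (region, source AMI and the provisioning script are placeholders):

```bash
# write a minimal Packer template, then build the AMI from it
cat > baked-ami.json <<'EOF'
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-0123456789abcdef0",
    "instance_type": "t2.micro",
    "ssh_username": "ec2-user",
    "ami_name": "appserver-baked-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "script": "install_dependencies.sh"
  }]
}
EOF
packer build baked-ami.json
```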
For a production use case, use Packer to create the AMI and Terraform to launch the cloud resources that use this AMI. Tie all of this together in a Jenkins pipeline.
This link has details about the differences between Docker and AMIs:
https://forums.docker.com/t/how-would-you-differentiate-between-docker-vs-ec2-image/1235/2

Is it possible to deploy Docker containers using Netflix's Spinnaker?

I wonder if Spinnaker (http://spinnaker.io) can be used for docker container deployment?
What we do is:
Poll the repo
If the code is new there, we build 3 containers (nginx, a Django app container, and a fluentd logger container)
we spin up the fluentd container in order to collect the logs from the other 2 containers and send them to Splunk/AWS CloudWatch Logs
we spin up the Django app container and, on the same host, the nginx container (as a proxy to the Django container) [and forward the logs into fluentd]
we forward (map) a certain JSON file with the app configuration into the Django container
Unfortunately, Spinnaker has too few examples; the example they have here shows only how to bake an image with a certain DEB package inside.
We do have Jenkins jobs which can poll the repo, test the code, create and upload the Docker container to the private registry, and deploy the containers using Ansible. The question is whether we can use Spinnaker to do that natively.
There is currently no container support in Spinnaker. Google is actively working on adding Kubernetes support, but there are currently no plans to integrate Spinnaker directly with either Docker or ECS.
One thing we tried, and which worked, was to use Jenkins to build and publish a Debian wrapper for the Docker image that was created. All this Debian package does is pull and start the Docker container for a Spinnaker service. We then created a Spinnaker pipeline that bakes this Debian package and then deploys it.
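A hedged sketch of the start script such a wrapper .deb might ship (service and image names are illustrative):

```bash
#!/bin/bash
# installed by the wrapper Debian package: pull the pinned image and (re)start it
IMAGE=registry.example.com/myservice:1.2.3
docker pull "$IMAGE"
docker rm -f myservice 2>/dev/null || true   # remove any previous container
docker run -d --name myservice --restart=always "$IMAGE"
```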

Docker consumer on AWS while using RabbitMQ

I have most of my app designed and deployed. We have gone for RabbitMQ over SQS for several reasons.
I currently have my consumer running in a Docker container and would like to deploy this on AWS. I am fairly familiar with Elastic Beanstalk, since our web tier is running there, but it seems all workers deployed this way have to use SQS?
The other option I am aware of is to use ECS for the Docker component, but I do not want to make the image publicly available, and I don't have access to a private repository.
Is there some basic functionality I am missing, a document describing how to deploy to ECS using a Dockerfile and source code locally, or a way of deploying to EB using a Dockerfile without being locked into SQS?
edit
So I have found the note in the EB docs which says that Dockerfiles are not supported when building a multi-container environment, and that repositories currently have to be used for that purpose, so EB is out for me.