How to script an AWS VPC with CloudFormation seeded by Docker

I want to confirm that my approach to setting up a VPC using CloudFormation/sceptre and seeding instances with Docker containers is correct:
1. Create an AWS EC2 instance.
2. Create a Docker image on that instance.
3. Create a CloudFormation VPC template (.yaml) and reference the Docker image in the template?
4. Create a sceptre project using the template above and run the script from the EC2 instance.
So, as I understand it, the majority of the work will be in the CloudFormation template. Currently I'm stuck on sceptre errors, but I wanted to make sure I was approaching the problem correctly. Does this look like the right approach?

There are a lot of ways of doing what you want:
Run sceptre locally on your development machine
This is easier, but not best practice for important environments, as having a build server gives a better trail of what was done and when (especially in shared environments)
Use CodeBuild to save you having to do steps 1 & 2 yourself (AWS maintains a Docker image with Python installed)
It also avoids the chicken and egg problem of how you deploy the EC2 instance in the first place.
Configure jobs on a build server such as Jenkins
CodeBuild is good for simple setups, but a well-configured build server can have dashboards to track what is deployed where
As sceptre is just a way of generating and managing the deployment of templates across environments, there are lots of other ways of doing this, including what you outlined.
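For reference, a minimal sceptre project for just the VPC piece looks roughly like the sketch below. The directory layout and config keys follow the conventions from sceptre's docs of that era and differ slightly between versions, and the project name, region, and CIDR are placeholder values:

# Rough sketch of a minimal sceptre project for a single VPC stack.
mkdir -p my-project/config/dev my-project/templates
cd my-project

# Environment-level settings (example values).
cat > config/dev/config.yaml <<'EOF'
project_code: my-project
region: eu-west-1
EOF

# Stack config: points a stack called "vpc" at the template below
# (path handling varies slightly between sceptre versions).
cat > config/dev/vpc.yaml <<'EOF'
template_path: vpc.yaml
EOF

# Minimal CloudFormation VPC template.
cat > templates/vpc.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal VPC
Resources:
  Vpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
Outputs:
  VpcId:
    Value: !Ref Vpc
EOF

# Deploy the dev environment (sceptre 1.x syntax; 2.x uses `sceptre launch dev`).
sceptre launch-env dev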
P.S. Apologies that the getting-started documentation isn't great at the moment; it is something we are focusing on for release 2.0.

Related

What is the best way to set up a CI/CD pipeline on ECS?

There are so many options:
Docker Compose with the ECS CLI looks like the easiest solution
Terraform
CloudFormation (looks complex!)
Ansible
I am only interested in setting up a basic ECS Docker setup with an ELB and easily updating the Docker image version.
We all love technology here, but we're not all super geniuses when it comes to tech, so I'm looking to keep my setup as simple as possible. We run Jenkins, 2 Node.js applications, and 2 Java applications in ECS, and I know it involves IAM, security groups, EBS, ELB, an ECS service/task, and an ECS task definition, but that already gets complex quickly in CloudFormation.
What are good technologies that will allow us to use Docker, keep things simple, and don't require us to be very intelligent just to understand our own code?
I would suggest you start by trying to set up your pipeline using Terraform. Learning it will give you experience with non-vendor-specific infrastructure as code.
Another possibility is to avoid using CloudFormation directly and instead use the AWS CDK (https://docs.aws.amazon.com/cdk/latest/guide/home.html) as your IaC layer.
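If you go the CDK route, the day-to-day workflow looks roughly like the sketch below; the project name and language choice are just examples, and the ECS cluster, service, and load balancer definitions would live in the generated app:

# Rough sketch of the CDK workflow (project name and language are examples).
npm install -g aws-cdk               # install the CDK CLI
mkdir ecs-pipeline && cd ecs-pipeline
cdk init app --language typescript   # scaffold an empty CDK app
# ...define the ECS cluster, task definitions, service, and load balancer in the app...
cdk bootstrap                        # one-time setup of CDK's deployment resources per account/region
cdk diff                             # preview the CloudFormation changes
cdk deploy                           # synthesize the template and deploy it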
Best regards

Continuous Deployment of Docker Compose App to AWS/EC2

I've been trying to find an efficient way to handle continuous deployment with a Docker Compose setup and AWS hosting.
So far I've looked into CodeDeploy, S3 buckets, and ECS. My application is relatively small, with only 3 Docker services: a Django app, NGINX, and PostgreSQL. I was unable to find any reliable information on using CodeDeploy with Docker Compose, and because of the small scale, ECS seems impractical. I've considered an S3 bucket, but that seems no better than just deploying my application with something like git or scp.
What is a standard way of handling deployment of a Docker Compose setup on AWS? If possible I would like to use Bitbucket Pipelines or CircleCI to perform the deployment in a manually triggered step after running tests. But I've been unable to find a solution that would easily let me copy over the code (which is in a git repo on a production branch and is how I get the code onto the production server at the moment).
I would like to add some possibilities to @gasc's answer.
It would be better if you made a CloudFormation template for deploying your EC2 resources with all the required security groups, auto scaling, and other pieces.
Then create an AMI with Docker Compose installed, plus anything else required for your EC2 environment.
Then you can use a CodeDeploy pipeline. AWS also provides a private container registry (ECR); you may want to use that.
The rest of the steps are the same: just scp the compose file onto the EC2 instance, run the
docker-compose up
command, and you are done.
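That last copy-and-run step might look roughly like this; the key path, host name, and directory are placeholders:

# Rough sketch of the final step (key, host, and paths are placeholders).
scp -i ~/.ssh/prod-key.pem docker-compose.yml ec2-user@my-ec2-host:/srv/app/docker-compose.yml

ssh -i ~/.ssh/prod-key.pem ec2-user@my-ec2-host '
  cd /srv/app &&
  docker-compose pull &&    # fetch the freshly pushed images
  docker-compose up -d      # recreate only the containers that changed
'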
Let me know if you want more help; I'm open to discussion.
What I would do in your case is:
1 - If needed, update your docker-compose.yml file (or whatever you called it) to version 3 or higher, so you can use swarm mode.
2 - During your pipeline, build all the images needed and push them to a registry.
3 - In your pipeline, scp your compose file to a manager node.
4 - Deploy your application using swarm (docker stack deploy -c <your-docker-compose-file> your_app_name), as sketched below. This way you can handle rolling updates and scale easily.
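A rough sketch of steps 2-4 as a pipeline script; the registry, host, stack name, and the tag variable are placeholders, and the compose file is assumed to reference the tags pushed here:

# Rough sketch of steps 2-4 (registry, host, stack name, and $BUILD_TAG are placeholders).
docker build -t registry.example.com/myapp/web:$BUILD_TAG .   # step 2: build...
docker push registry.example.com/myapp/web:$BUILD_TAG         # ...and push

scp docker-compose.yml deploy@swarm-manager:/srv/myapp/       # step 3: copy the compose file

ssh deploy@swarm-manager \
  "cd /srv/myapp && docker stack deploy -c docker-compose.yml myapp"   # step 4: rolling update via swarm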
Note that if you want to use multiple nodes, you need to open a few ports between them (TCP 2377 for cluster management, TCP/UDP 7946 for node-to-node communication, and UDP 4789 for overlay network traffic).
I see you mentioned that ECS might seem impractical for such a small scale - in my opinion, not necessarily. It would require you to rewrite your docker-compose.yml into task and service definitions, but since there aren't a lot of services, that shouldn't take you much time.

Docker for AWS vs pure Docker deployment on EC2

The purpose is production-level deployment of an 8-container application, using swarm.
It seems (ECS aside) we are faced with two options:
Use the so-called docker-for-aws, which does (swarm) provisioning via a CloudFormation template.
Set up our VPC as usual, install the Docker engines, bootstrap the swarm (via init/join etc.), and deploy our application on normal EC2 instances.
Is the only difference between these two approaches the swarm bootstrap performed by docker-for-aws?
Any other benefits of docker-for-aws compared to normal AWS VPC provisioning?
Thx
If you need portability across different cloud providers, go with the CloudFormation template provided by the Docker team. If you only need to run on AWS, ECS should be fine, but you will need to spend a bit of time figuring out how service discovery works there. The benefit of Swarm is that they made it fairly simple: you just access your services via their service name, as if they were DNS names, with built-in load balancing.
It's fairly easy to automate new environment creation with it, and if you need to go to, say, Azure or Google Cloud later, you simply use the template for that provider to get your Docker cluster ready.
The Docker team has put quite a few things into that template, and you really don't want to re-create them yourself unless you have to. For instance, if you don't use static IPs for your infra (a fairly typical scenario) and one of the managers dies, you can't just restart it; you will need to manually re-join it to the cluster. Docker for AWS handles that through an IP sync via DynamoDB and uses other provider-specific techniques to make failover/recovery work smoothly. Another example is logging - they push your logs automatically into CloudWatch, which is very handy.
A few tips on automating your environment provisioning if you go with the Swarm template:
Use some infra automation tool to create a VPC per environment. Use a template provided by that tool so you don't write too much yourself. Using a separate VPC makes each environment very isolated and easier to work with, with less chance of screwing something up. Also, you're likely to add more elements into those environments later, such as RDS; if you control your VPC creation it's easier to do that and keep all related resources under the same one. Let's say the DEV1 environment's DB lives in the DEV1 VPC.
Hook up the AWS CloudFormation template provided by Docker to provision a Swarm cluster within this VPC (they have a separate template for deploying into an existing VPC).
My preference for automation is Terraform. It lets me describe the desired state of the infrastructure rather than how to achieve it.
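Whichever tool drives it, launching Docker's template ends up as a CloudFormation stack creation along these lines; the template URL and parameter names below are placeholders, so take the real values from the current Docker for AWS docs:

# Rough sketch of launching the Docker for AWS template with the AWS CLI.
# The template URL and parameter names are placeholders - check the Docker for AWS
# docs for the exact values for your region and edition.
aws cloudformation create-stack \
  --stack-name dev1-swarm \
  --template-url "https://<docker-for-aws-template-url>/Docker.tmpl" \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=KeyName,ParameterValue=my-keypair \
    ParameterKey=ManagerSize,ParameterValue=3 \
    ParameterKey=ClusterSize,ParameterValue=5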
I would say no, there are basically no other benefits.
However, if you want to achieve all or several of the things that the docker-for-aws template provides, I believe your second bullet point should contain a bit more.
E.g.
Logging to CloudWatch
Setting up EFS for persistence/sharing
Creating subnets and route tables
Creating and configuring elastic load balancers
Basic auto scaling for your nodes
and probably more that I do not recall right now.
The template also ingests a bunch of information about resources related to your EC2 instances to make it readily available to all Docker services.
I have been using the docker-for-aws template at work and have grown to appreciate a lot of what it automates. And what I do not appreciate I change, with the official template as a base.
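As one example of what "rolling your own" means in practice, here is a sketch of just the CloudWatch logging piece using Docker's awslogs log driver; the log group, region, stream name, and image are placeholders:

# Rough sketch of one piece you'd otherwise get for free: shipping container logs
# to CloudWatch with the awslogs logging driver (group, region, image are placeholders).
# The instance profile needs CloudWatch Logs permissions for this to work.
aws logs create-log-group --log-group-name /myapp/web

docker run -d \
  --log-driver awslogs \
  --log-opt awslogs-region=eu-west-1 \
  --log-opt awslogs-group=/myapp/web \
  --log-opt awslogs-stream=web-1 \
  nginx:alpine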
I would go with ECS over a roll-your-own solution. Unless your organization has the capacity to re-engineer the services and integrations AWS already offers, you would be artificially painting yourself into a corner for future changes. "Do not reinvent the wheel" comes to mind here.
Basically what @Jonatan states. Building solutions to integrate what is already available is... a trial of pain when you could be working on other parts of your business/application.

Deploying on AWS ECS via task definitions without Docker Hub

Currently I'm using task definitions that refer to custom images on Docker Hub to deploy my web app on ECS (Amazon EC2 Container Service). Is there a way to do this without going through Docker Hub, i.e. build/deploy the Dockerfile locally across cluster nodes?
At the moment, I can only think of sending shell commands over SSH or using a tool like Ansible.
Perhaps I'm missing something totally obvious here...
This is a little late for an answered question, but I just figured this out myself. The EC2 Container Registry (ECR, Amazon's repository equivalent) is working well for me; maybe it didn't exist at the time?
I build the containers locally, tag them, and push them to Amazon's ECR using the AWS CLI (later versions of which include support for ECR), and then refer to them at that location in the ECS task definitions. Works like a charm.
http://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html
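The flow looks roughly like this; the account ID, region, and repository name are placeholders, and the login command shown is the one from recent AWS CLI versions (older versions used aws ecr get-login instead):

# Rough sketch of the build/tag/push flow (account, region, and repo are placeholders).
ACCOUNT=123456789012
REGION=us-east-1
REPO=mywebapp

aws ecr create-repository --repository-name "$REPO"      # one-time setup

# Log in to ECR (recent AWS CLI; older versions used `aws ecr get-login`).
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com"

docker build -t "$REPO:latest" .
docker tag "$REPO:latest" "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"
docker push "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"
# The task definition's "image" field then points at that same ECR URI.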
ECS is a service to run containers, not to build them. It has no native support for it, so you're not missing something obvious.
As you suggest, you could distribute a Dockerfile to the container instances and build locally, but that will actually be more difficult since the container instances must have everything needed to build the image, plus you'd have to distribute the image to the other container instances.
You could run a repository yourself and specify a different repository-url for the image parameter in your ECS task definition. You'd still be responsible for building the images and now the added burden of running a repository as well.
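If you do run your own registry, the task definition simply points its image at that registry instead; a minimal sketch with placeholder names:

# Rough sketch of a task definition pulling from a self-hosted registry
# (registry host, repository, and names are placeholders).
cat > taskdef.json <<'EOF'
{
  "family": "mywebapp",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "registry.internal.example.com:5000/mywebapp:latest",
      "memory": 512,
      "essential": true,
      "portMappings": [{ "containerPort": 80, "hostPort": 80 }]
    }
  ]
}
EOF

aws ecs register-task-definition --cli-input-json file://taskdef.json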
Sorry to be the bearer of bad news but there's not a simpler workflow for this at the moment.

Code deployments on EC2

There are quite a few resources on deploying AMIs on EC2, but are there any solutions for incremental code updates to a PHP/Java-based website?
Suppose I have 10 EC2 instances, all running PHP/Java-based websites with docroots local to the instance. I may want to do numerous code deployments throughout the day.
I don't want to create a new AMI copy and scale that up to new instances each time I have a code update.
Any leads on how best to do this would be greatly appreciated. We use Subversion as our main code repository, and in the past we've simply done an svn update/checkout when we were on one or two servers.
Thanks.
You should check out Elastic Beanstalk. Essentially you just package up your WAR or other code file and upload it to a bucket via AWS's command line/Eclipse integration, and the deployment is performed automatically.
http://aws.amazon.com/elasticbeanstalk/
Elastic Beanstalk is designed to do exactly this for you. We use the Elastic Beanstalk Java/Tomcat flavor, but it also has support for PHP, Ruby, and Python environments. It has a web console that allows you to deploy code (it even keeps a history of deployments), and it also has a Git tool to deploy code from the command line.
It also has monitoring, load balancing, and auto scaling all built in, with only a few web-form entries to control all of these.
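For reference, the day-to-day flow with the current Elastic Beanstalk CLI looks roughly like this; the application name, environment name, and platform are examples:

# Rough sketch of deploying with the Elastic Beanstalk CLI
# (application/environment names and platform are examples).
pip install awsebcli                                   # install the EB CLI

cd my-php-app
eb init my-php-app --platform php --region us-east-1   # one-time project setup
eb create my-php-app-prod                              # creates the environment (ELB, auto scaling, ...)

# Each subsequent code change is then just:
eb deploy                                              # package the current commit and roll it out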
Have you considered using a tool designed to manage this sort of thing for you? Puppet is well regarded in this area.
Have a look here:
https://puppetlabs.com/puppet/what-is-puppet/
(No I am not a Puppet Labs employee :))
Capistrano is a great tool for deploying code to multiple servers at once. Chef and Puppet are great tools for setting up those servers with databases, webservers, etc.
Go for Capistrano. It's a good way to deploy your code to multiple servers.
As already mentioned, Elastic Beanstalk is a good option if you just want a web server and don't want to worry about the details.
Also, take a look at AWS CodeDeploy. You can have much more control over the lifecycle of your instances, and you'd be looking at something very similar to what you have now (a set of EC2 instances that you set up). You can even get automatic deployments on instance launch with Auto Scaling.
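With CodeDeploy, the per-deployment artifact is just your code plus an appspec.yml; a rough sketch for a PHP docroot, where the application name, deployment group, bucket, and hook script are placeholders:

# Rough sketch of a CodeDeploy revision for a PHP docroot
# (application name, deployment group, bucket, and hook script are placeholders).
cat > appspec.yml <<'EOF'
version: 0.0
os: linux
files:
  - source: /                    # everything in the bundle...
    destination: /var/www/html   # ...lands in the docroot
hooks:
  AfterInstall:
    - location: scripts/restart_php_fpm.sh
      timeout: 60
EOF

# Bundle the working directory, upload it to S3, and deploy it across the fleet.
aws deploy push --application-name my-site \
  --s3-location s3://my-deploy-bucket/my-site.zip --source .

aws deploy create-deployment --application-name my-site \
  --deployment-group-name production \
  --s3-location bucket=my-deploy-bucket,key=my-site.zip,bundleType=zip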
You can either use Capistrano or Travis CI.