Create multiple Docker machines at once with docker-machine

In my application, I need to create many Docker machines on a cloud computing service (AWS-EC2 for now but could be changed), then deploy many containers on those machines. I am using docker-machine to provision them on AWS, using a command like
docker-machine create --driver amazonec2 --amazonec2-ssh-keypath <path-to-pem> <machine-name>
The problem is that it takes a lot of time, about 6 minutes, to create one such machine. So overall it takes hours just to create the machines for my deployment. Is there any way to create multiple machines at once with docker-machine? Or any way to speed up the provisioning of a machine? All machines have the same configuration, just different EC2 instances with different names.
I think running multiple docker-machine create commands in the background might work, but I fear it might corrupt the configuration (machine list, internal settings, etc.) of docker-machine. Even if I can run multiple such commands safely, I don't know how to check when and if they have completed successfully.
P.S.: I understand that AWS supports Docker containers directly and it might be faster to create instances that way. But their "service model" of computation does not fit the needs of my application.

Running simultaneous docker-machine create commands should be fine - it depends on the driver, but I do this with Azure. The Azure driver has some logic to create the subnet, availability sets etc. and will re-use them if they're already there. Assuming AWS does something similar, you should be fine if you create the first machine and wait for it to complete, then start the rest in parallel.
As to knowing when they're done, it would be cool to run Docker Machine in a Docker container. That way you can create each machine with a docker run command, and use docker ps to see how they're doing. Assuming you want to manage them from the host though, you'd need your containers to share the host's Docker Machine state - mounting a volume and setting MACHINE_STORAGE_PATH to the volume on the host should do it.
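A minimal sketch of that idea, assuming you have built an image containing the docker-machine binary (the image name dm-image and the key path are placeholders, not an official image):
# Each create runs in its own container, sharing the host's machine state
# through a mounted volume and MACHINE_STORAGE_PATH.
docker run -d --name create-worker1 \
  -e MACHINE_STORAGE_PATH=/machine \
  -v "$HOME/.docker/machine:/machine" \
  dm-image \
  docker-machine create --driver amazonec2 --amazonec2-ssh-keypath /machine/key.pem worker1
# "docker ps -a" then shows progress: containers still running are still
# provisioning; containers that exited with status 0 finished successfully.
docker ps -a --filter "name=create-"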

TL;DR: this runs the docker-machine create command 9 times in the background, creating the machines in parallel. Adjust as required:
for i in `seq 1 9` ; do docker-machine create --driver virtualbox worker$i & done
This creates 9 machines:
worker1
worker2
worker3
worker4
worker5
worker6
worker7
worker8
worker9
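To also address the asker's concern about knowing when the jobs are done, here is a sketch extending the same loop with plain POSIX shell job control, waiting on each create and reporting failures:
# Remember each background PID, then wait on them individually so a
# failed create is reported instead of silently lost.
pids=""
for i in $(seq 1 9); do
  docker-machine create --driver virtualbox "worker$i" &
  pids="$pids $!"
done
for pid in $pids; do
  wait "$pid" || echo "create job with PID $pid failed"
done
echo "all create jobs finished"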

Related

docker-compose on AWS

I would like to run a web application on AWS. Locally, I run it on an Ubuntu VirtualBox VM with docker-compose; it requires 2-4 cores, 8 GB RAM, and 30-40 GB of disk. Do you think it will run on AWS? Should I install docker-compose and the app on an EC2 instance? Elastic Beanstalk, ECS (https://aws.amazon.com/about-aws/whats-new/2018/06/amazon-ecs-cli-supports-docker-compose-version-3/), or something else?
I am wary because my attempts to run it on an IT-department-managed KVM failed.
What resources are best to request for either solution?
At the moment it is more of a proof of concept/demo, but eventually I hope to deploy production on a Kubernetes cluster.
I'm looking for, in order of decreasing importance:
Simplicity and the chance to succeed with the deployment ASAP
Costs
Stability and QoS
You may want to consider using AWS Fargate. This lets you run container-based applications without having to manage the underlying EC2 instances. You can use Fargate with either ECS or EKS.
The ECS CLI that you link to in your question also helps you create your application and should make it easy to get started.
You can look at ecsworkshop.com for an introduction to using ECS and Fargate.
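As a rough sketch of that workflow (the cluster name, region, and compose file name are placeholders; check the flags against your ecs-cli version):
# Point the ECS CLI at a cluster and region, defaulting to Fargate.
ecs-cli configure --cluster demo --region us-east-1 --default-launch-type FARGATE
# Create the cluster resources, then bring the compose project up on Fargate.
ecs-cli up
ecs-cli compose --file docker-compose.yml up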

Is there a time sync process on the Docker for AWS nodes?

I haven't been able to determine if there is a time sync process (such as ntpd or chronyd) running on the docker swarm I've deployed to AWS using Docker Community Edition (CE) for AWS.
I've ssh'd to a swarm manager, but ps doesn't show much, and I don't see anything in /etc or /etc/conf.d that looks relevant.
I don't really have a good understanding of CloudFormation, but I can see that the created instances running the Docker nodes used the AMI image Moby Linux 18.09.2-ce-aws1 stable (ami-0f4fb04ea796afb9a). I created a new instance with that AMI so I could ssh there. Still no time sync process indications with ps or in /etc.
I suppose one of the swarm control containers that is running may deal with sync'ing time (maybe docker4x/l4controller-aws:18.09.2-ce-aws1)? Or maybe the cloudformation template installed one on the instances? But I don't know how to verify that.
So, can anyone tell me whether a time sync process is running (and where)?
If not, I feel there should be one, so how might I start it up?
You can check which resources are created by the CloudFormation template (Docker-no-vpc.tmpl) from the link you provided.
Second thing: ntpd has nothing to do with Docker Swarm itself; it should be installed on the underlying EC2 instance.
SSH to your EC2 instance and verify the status of the service; most AWS AMIs ship with ntpd installed.
Or you can just check whether it is running with
pgrep ntpd
If you do not find it, you can install it yourself, or run Docker Swarm on a custom AMI.
UCP requires that the system clocks on all the machines in a UCP cluster be in sync, or else it can start having issues checking the status of the different nodes in the cluster. To ensure that the clocks in a cluster are synced, you can use NTP to set each machine's clock.
First, install NTP on each machine in the cluster. For example, to install NTP on an Ubuntu distribution, run:
sudo apt-get update && sudo apt-get install -y ntp
On CentOS and RHEL, run:
sudo yum install -y ntp
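To confirm the daemon is actually syncing after installation, a quick check (standard NTP tooling, nothing specific to the Docker AMI):
# Query the daemon's peers; a '*' in the first column marks the
# currently selected time source.
ntpq -p
# On systemd hosts you can also check the service itself (the unit is
# named ntp on Ubuntu and ntpd on CentOS/RHEL).
systemctl status ntp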
(See also: "What does 'clock skew detected' mean?")
Last thing: do you really need the whole stack that CloudFormation creates?
EC2 instances + Auto Scaling groups
IAM profiles
DynamoDB Tables
SQS Queue
VPC + subnets and security groups
ELB
CloudWatch Log Group
I know CloudFormation makes our lives easier, but if you do not know what resources a template will create, do not run it, or you may be left with a hefty bill at the end of the month.
I would also suggest exploring AWS ECS and EKS, services that are specifically designed for running Docker containers.

Motivation for putting Docker containers inside an AWS EC2 instance

There seems to be a growing trend of developers setting up multiple Docker containers inside of their AWS EC2 instances. This seems counterintuitive to me. Why put a virtual machine inside another virtual machine? Why not just use smaller EC2 instances (i.e. make the instance the same size as your container and then not use Docker at all)? One argument I've heard is that Docker containers make it easy to bring your development environment exactly as-is to prod. Can't the same be done with Amazon Machine Images (AMIs)? Is there some sort of marginal cost savings? Is it so that developers can be cloud-agnostic and avoid vendor lock-in? I understand the benefits of Docker on physical hardware, just not on EC2.
The main advantage of Docker relevant to your question is the concept of images. These are lightweight and easier to configure than AMIs. Note also that running Docker on a VM (in this case EC2) keeps things simple.
More info here - How is Docker different from a normal virtual machine?
Docker containers inside of their AWS EC2 instances. This seems counterintuitive to me. Why put a virtual machine inside another virtual machine?
Err... a Docker container is not a VM. It is a way to isolate part of the resources (filesystem, CPU, memory) of your host (here, an EC2 VM).
An AMI (Amazon Machine Image) is just one type of EC2 resource.
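To make the isolation point concrete, here is a small illustration (image name and limits are arbitrary examples): one EC2 host can carve out slices of its own CPU and memory for several containers, which an AMI alone cannot do within a single instance.
# Two containers share one EC2 host, each limited to half a CPU core
# and 256 MB of RAM: resource slices of a single VM, not new VMs.
docker run -d --name web1 --cpus 0.5 --memory 256m nginx
docker run -d --name web2 --cpus 0.5 --memory 256m nginx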

Tool to automate Docker Swarm

I have followed the Docker Docs about setting up Swarm on Virtualbox.
I suppose it is the same procedure to set it up on AWS, Azure or DigitalOcean.
It is a lot to do manually every time.
Is there a tool to automate this?
I would like to use something that sets up and scales Swarm the way Compose does for Docker.
Maybe I would start with one AWS instance and 2-3 containers, then scale up to 100 containers with the instances scaling accordingly. Then I would want to scale down to 2 instances and have the rest shut down.
Does something like this exist?
If you want to avoid manual configuration but still get the required high availability and cost efficiency, try the Docker Swarm template pre-packaged by Jelastic:
it has built-in automatic clustering and scaling
the installation is performed automatically, and you get full access to the cluster via an intuitive UI
containers run directly on bare metal, so there is no need to reserve full VMs for each service (and you can choose the datacenter you want to run your project on)
payment is based on actual consumption of RAM and CPU
containers are automatically distributed across different hardware servers, which increases availability
The details about the package and installation steps are in this article.
You can use Ansible for configuring the Swarm master, Swarm nodes, and all the required cluster discovery. Ansible is a general IT automation tool, but it comes with a very powerful Docker module that allows you to set up Docker Swarm easily.
This GitHub repository shows a good example how to set up Swarm with Ansible.
You can use Docker Machine to provision hosts and configure a swarm easily (example).
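A sketch of that approach (the machine names and the virtualbox driver are arbitrary; substitute the amazonec2 driver and its flags for AWS):
# Provision a manager and two workers, then wire them into a swarm.
docker-machine create --driver virtualbox manager
docker-machine create --driver virtualbox worker1
docker-machine create --driver virtualbox worker2
# Initialize the swarm on the manager and capture the worker join token.
eval "$(docker-machine env manager)"
docker swarm init --advertise-addr "$(docker-machine ip manager)"
TOKEN="$(docker swarm join-token -q worker)"
# Join each worker to the swarm on the manager's address.
for w in worker1 worker2; do
  docker-machine ssh "$w" docker swarm join --token "$TOKEN" "$(docker-machine ip manager):2377"
done
# Services can then be scaled up or down across the swarm
# ("web" is a hypothetical service name).
docker service scale web=100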
The Docker ecosystem also includes managed solutions like Tutum or Docker Cloud to achieve what you want easily.
Check out the devopsbyte.com blog, which covers how to set up a Docker Swarm cluster using Ansible.

Amazon Elastic Compute Cloud

I have created an Ubuntu 32-bit instance and installed Python packages on it.
My code runs fine on it.
Now I need to create another instance exactly the same as this running one, but the main concern is that the two instances should not share the database or MySQL.
Can I install a separate MySQL on each, or is there another way out?
To launch additional EC2 instances based on one you've already created, just go to the EC2 dashboard in your account, view your instances, select the one you want to clone and from the ACTIONS menu select "Launch more like this." Your new server request will kick off and you can change any parameters you want to in that process (such as throwing your new instance into a different AZ, etc.)
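If you prefer to script this, a rough AWS CLI equivalent (all IDs and names below are placeholders) is to image the running instance and launch a copy from that AMI:
# Create an AMI from the running instance.
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "clone-base"
# Launch a new instance from the resulting AMI.
aws ec2 run-instances --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro --key-name my-key --count 1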
For running MySQL separately, you have a couple of good/easy options here:
If the MySQL engine is already installed on the first server (the one you're cloning), you could run a second engine on your new server, so both run in parallel and are entirely independent of one another.
Alternatively, you could spin up a MySQL flavored RDS instance and simply run two DBs on that, one for each of your two EC2 servers. That would take the MySQL overhead off of your EC2s, give you one place to manage your DBs, and would probably be less of a management hassle in the long run.
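As a sketch of the RDS option (the identifier, instance class, and credentials are placeholders), one MySQL-flavored instance can then host a separate database for each EC2 server:
# Create a single managed MySQL instance; each EC2 server gets its
# own database/schema inside it.
aws rds create-db-instance \
  --db-instance-identifier shared-mysql \
  --engine mysql \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password 'change-me'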