docker-machine connect to existing machine - amazon-web-services

I have a docker swarm all hosted on AWS, created basically along the lines of this tutorial.
To deploy our code, I need to be able to access this swarm separately from the computer where I created these instances. I don't see anywhere in the docs for the docker-machine amazonec2 driver where I can use my AWS credentials to connect to these existing instances.
Some tutorials I came across use a --url argument to point docker-machine at an existing instance, but I don't see that argument in my most recent docker-machine version.
Other tutorials mention TLS configuration and using that in conjunction with docker-machine to connect to existing instances, but given unique/secret AWS credentials, this seems redundant and adds a layer of complexity I hope I can avoid.
What is the recommended approach to this?
Unable to connect:
puttygen my-key.pem -L > id_rsa
docker-machine create --driver generic --generic-ip-address=ec2-....compute.amazonaws.com --generic-ssh-key id_rsa Swarm-Dev01
Running pre-create checks...
Creating machine...
(Swarm-Dev01) Importing SSH key...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...

To get access to an existing instance, you can use the docker-machine create --driver generic command. The command will SSH onto the machine, make sure Docker is installed, and then download certificates that it stores for future access, e.g. when using docker-compose.
Command:
docker-machine create \
--driver generic \
--generic-ip-address=<your_ip> \
--generic-ssh-key ~/.ssh/id_rsa \
vm
Documentation:
https://docs.docker.com/machine/drivers/generic/
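Once the machine is registered, you can point your local Docker client (or docker-compose) at it from any shell that has access to the stored certificates; a short sketch, assuming the machine name vm from the command above:
# Print the connection settings (DOCKER_HOST, DOCKER_CERT_PATH, TLS flags) for "vm"
docker-machine env vm
# Load them into the current shell so the docker CLI targets the remote host
eval $(docker-machine env vm)
# Any docker or docker-compose command now runs against the remote machine
docker ps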

Related

How do I specify EC2 instance type from cli?

Is there a way to specify the EC2 instance type and storage from the CLI?
I've got this command with which I'm creating an instance:
docker-machine create -d amazonec2 --amazonec2-access-key abc --amazonec2-secret-key xyz --amazonec2-region eu-west-2 app-prod
This creates an instance with the default micro type and 16 GB of SSD, both of which I need to change.
I can change the instance type from the GUI, but when I change the storage the instance no longer has the operating system and app installed.
Hence I'm asking how both can be specified from the CLI along with the other attributes.
Use --amazonec2-instance-type
See: Using Docker Machine with AWS (Scott's Weblog).
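For example, reusing the command from the question (a hedged sketch: --amazonec2-instance-type sets the instance type and --amazonec2-root-size should set the root volume size in GB; confirm both flags with docker-machine create -d amazonec2 --help for your version):
docker-machine create -d amazonec2 \
--amazonec2-access-key abc \
--amazonec2-secret-key xyz \
--amazonec2-region eu-west-2 \
--amazonec2-instance-type m4.large \
--amazonec2-root-size 64 \
app-prod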

Creating a custom AMI from an AWS OpsWorks, Chef 12, Linux instance

Does anyone have a set of instructions for creating an AWS AMI from a Chef 12, Linux-based OpsWorks instance?
AWS publishes instructions on Creating a Custom Linux AMI from an AWS OpsWorks Instance, but it looks like they are out of date for Chef 12, Linux-based OpsWorks stacks.
For example, they do not say you should remove the /opt/chef or /var/chef folders.
Here are the current instructions from AWS:
Create a Custom Linux AMI from an AWS OpsWorks Instance
If you want to use a customized AWS OpsWorks Linux instance to create an AMI, you should be aware that every Amazon EC2 instance created by OpsWorks includes a unique identity. If you create a custom AMI from such an instance, it will include that identity and all instances based on the AMI will have the same identity. To ensure that the instances based on your custom AMI have a unique identity, you must remove the identity from the customized instance before creating the AMI.
To create a custom AMI from an AWS OpsWorks instance
Create a Linux stack and add one or more layers to define the configuration of the customized instance. You can use built-in layers, customized as appropriate, as well as fully custom layers. For more information, see Customizing AWS OpsWorks.
Edit the layers and disable AutoHealing.
Add an instance with your preferred Linux distribution to the layer or layers and start it. We recommend using an Amazon EBS-backed instance. Open the instance's details page and record its Amazon EC2 ID for later.
When the instance is online, log in with SSH and run the following commands, in order:
sudo /etc/init.d/monit stop
sudo /etc/init.d/opsworks-agent stop
sudo rm -rf /etc/aws/opsworks/ /opt/aws/opsworks/ /var/log/aws/opsworks/ /var/lib/aws/opsworks/ /etc/monit.d/opsworks-agent.monitrc /etc/monit/conf.d/opsworks-agent.monitrc /var/lib/cloud/
If you are creating an AMI based on Amazon Linux 2014.09, run rpm -e opsworks-agent-ruby to ensure that the agent package is not installed.
If you are creating an AMI based on Ubuntu, run dpkg -r opsworks-agent-ruby to ensure that the agent package is not installed.
This step depends on the instance type:
For an Amazon EBS-backed instance, use the AWS OpsWorks console to stop the instance and create the AMI as described in Creating an Amazon EBS-Backed Linux AMI.
For an instance store-backed instance, create the AMI as described in Creating an Instance Store-Backed Linux AMI and then use the AWS OpsWorks console to stop the instance.
When you create the AMI, be sure to include the certificate files. For example, you can call the ec2-bundle-vol command with the -i argument set to -i $(find /etc /usr /opt -name '*.pem' -o -name '*.crt' -o -name '*.gpg' | tr '\n' ','). Do not remove the apt public keys when bundling. The default ec2-bundle-vol command handles this task.
Clean up your stack by returning to the AWS OpsWorks console and deleting the instance from the stack.
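For the Chef 12, Linux-based stacks the question asks about, the cleanup step may also need to remove the Chef directories the asker mentions; this is an assumption drawn from the question, not part of the official instructions, so verify on a disposable instance before baking the AMI:
# Assumption from the question: Chef 12 stacks also leave Chef state on disk;
# removing it keeps instance-specific Chef data out of the AMI.
sudo rm -rf /opt/chef /var/chef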

Changing ssh keypair when creating an ec2 instance with chef

I am bringing up an EC2 instance based on an in-house AMI that used a different SSH key for authentication than the one I'd like to use on the instance I create using knife (in the example I call it original-pem-for-ami.pem):
knife ec2 server create -I ami-0123456 -f m2.xlarge \
--ssh-user username --groups sg-1234 \
--identity-file ~/.ssh/original-pem-for-ami.pem \
--node-name solr1 --hint ec2 -a public_ip_address \
--ssh-key name-of-key-i-want-to-use-to-login-to-new-instance
When I run this command the server comes up correctly and the correct security group is assigned, etc., but I can only connect to it using:
ssh -i ~/.ssh/original-pem-for-ami.pem username@assigned-ec2-public-dns-name
Is there a way to make the new instance use the key associated with the named keypair name-of-key-i-want-to-use-to-login-to-new-instance? I thought using --ssh-key name-of-key-i-want-to-use-to-login-to-new-instance would do this.
Check which version of knife-ec2 you have. --ssh-key is correct in 0.12, but before that (0.11 and earlier) I think it was something different. Also make sure this works through the normal AWS tools; it is possible the AMI wasn't prepared correctly and uses a hardwired key.
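To check which knife-ec2 version you have, a quick sketch (the second form applies if knife is provided by ChefDK):
# knife-ec2 is a Ruby gem; list the installed version
gem list knife-ec2
# With ChefDK, query its embedded gems instead
chef gem list knife-ec2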

How to setup Kubernetes Master HA on AWS

What I am trying to do:
I have set up a Kubernetes cluster using the documentation available on the Kubernetes website (http://kubernetes.io/v1.1/docs/getting-started-guides/aws.html). Using kube-up.sh, I was able to bring the cluster up with 1 master and 3 minions (as highlighted in the blue rectangle in the diagram below). From the documentation, as far as I know, minions can be added as and when required, so from my point of view the k8s master instance is a single point of failure when it comes to high availability.
Kubernetes Master HA on AWS
So I am trying to set up an HA k8s master layer with the three master nodes shown above in the diagram. To accomplish this I am following the Kubernetes high-availability cluster guide: http://kubernetes.io/v1.1/docs/admin/high-availability.html#establishing-a-redundant-reliable-data-storage-layer
What I have done:
Set up a k8s cluster using kube-up.sh with the aws provider (master1 and minion1, minion2, minion3)
Set up two fresh master instances (master2 and master3)
I then started configuring an etcd cluster on master1, master2 and master3 by following the link below:
http://kubernetes.io/v1.1/docs/admin/high-availability.html#establishing-a-redundant-reliable-data-storage-layer
In short, I copied etcd.yaml from the Kubernetes website (http://kubernetes.io/v1.1/docs/admin/high-availability/etcd.yaml) and updated NODE_IP, NODE_NAME and the DISCOVERY_TOKEN on all three nodes as shown below.
NODE_NAME  NODE_IP       DISCOVERY_TOKEN
Master1    172.20.3.150  https://discovery.etcd.io/5d84f4e97f6e47b07bf81be243805bed
Master2    172.20.3.200  https://discovery.etcd.io/5d84f4e97f6e47b07bf81be243805bed
Master3    172.20.3.250  https://discovery.etcd.io/5d84f4e97f6e47b07bf81be243805bed
And on running etcdctl member list on all three nodes, I am getting:
$ docker exec <container-id> etcdctl member list
ce2a822cea30bfca: name=default peerURLs=http://localhost:2380,http://localhost:7001 clientURLs=http://127.0.0.1:4001
As per the documentation we need to keep etcd.yaml in /etc/kubernetes/manifests; this directory already contains etcd.manifest and etcd-event.manifest files. For testing I modified the etcd.manifest file with the etcd parameters.
After making the above changes I forcefully terminated the Docker container; the container kept exiting after a few seconds and I was getting the error below when running kubectl get nodes:
error: couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused
So please suggest how I can set up a highly available k8s master on AWS.
To configure an HA master, you should follow the High Availability Kubernetes Cluster document, in particular making sure you have replicated storage across failure domains and a load balancer in front of your replicated apiservers.
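As an illustration of the load-balancer part of that advice, a hedged sketch using a classic ELB via the AWS CLI (the name, subnet, security-group and instance IDs are placeholders, and a plain TCP health check on 443 is assumed):
# Classic ELB forwarding TCP 443 to the replicated apiservers
aws elb create-load-balancer \
--load-balancer-name k8s-apiserver \
--listeners "Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=443" \
--subnets subnet-aaaa1111 subnet-bbbb2222 \
--security-groups sg-12345678
# Simple TCP health check against the apiserver port
aws elb configure-health-check \
--load-balancer-name k8s-apiserver \
--health-check Target=TCP:443,Interval=10,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2
# Attach the three master instances
aws elb register-instances-with-load-balancer \
--load-balancer-name k8s-apiserver \
--instances i-0aaa1111 i-0bbb2222 i-0ccc3333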
Setting up HA controllers for kubernetes is not trivial and I can't provide all the details here but I'll outline what was successful for me.
Use kube-aws to set up a single-controller cluster: https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html. This will create CloudFormation stack templates and cloud-config templates that you can use as a starting point.
Go to the AWS CloudFormation Management Console, click the "Template" tab and copy out the complete stack configuration. Alternatively, use $ kube-aws up --export to generate the CloudFormation stack file.
Use the userdata cloud-config templates generated by kube-aws and replace the variables with actual values. This guide will help you determine what those values should be: https://coreos.com/kubernetes/docs/latest/getting-started.html. In my case I ended up with four cloud-configs:
cloud-config-controller-0
cloud-config-controller-1
cloud-config-controller-2
cloud-config-worker
Validate your new cloud-configs here: https://coreos.com/validate/
Insert your cloud-configs into the CloudFormation stack config. First compress and encode your cloud config:
$ gzip -k cloud-config-controller-0
$ cat cloud-config-controller-0.gz | base64 > cloud-config-controller-0.enc
Now copy the content of your encoded cloud-config into the CloudFormation config. Look for the UserData key for the appropriate InstanceController. (I added additional InstanceController objects for the additional controllers.)
Update the stack at the AWS CloudFormation Management Console using your newly created CloudFormation config.
You will also need to generate TLS assets: https://coreos.com/kubernetes/docs/latest/openssl.html. These assets will have to be compressed and encoded (same gzip and base64 as above), then inserted into your userdata cloud-configs.
When debugging on the server, journalctl is your friend:
$ journalctl -u oem-cloudinit # to debug problems with your cloud-config
$ journalctl -u etcd2
$ journalctl -u kubelet
Hope that helps.
There is also the kops project.
From the project README:
Operate HA Kubernetes the Kubernetes Way
also:
We like to think of it as kubectl for clusters
Download the latest release, e.g.:
cd ~/opt
wget https://github.com/kubernetes/kops/releases/download/v1.4.1/kops-linux-amd64
mv kops-linux-amd64 kops
chmod +x kops
ln -s ~/opt/kops ~/bin/kops
See kops usage, especially:
kops create cluster
kops update cluster
Assuming you already have an s3://my-kops bucket and a kops.example.com hosted zone.
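If the state bucket doesn't exist yet, a quick sketch (the bucket name matches the example; exporting KOPS_STATE_STORE lets you drop --state from later kops commands):
# Create the S3 bucket kops uses as its state store
aws s3 mb s3://my-kops
# Optional: export the state store so --state can be omitted
export KOPS_STATE_STORE=s3://my-kops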
Create configuration:
kops create cluster --state=s3://my-kops --cloud=aws \
--name=kops.example.com \
--dns-zone=kops.example.com \
--ssh-public-key=~/.ssh/my_rsa.pub \
--master-size=t2.medium \
--master-zones=eu-west-1a,eu-west-1b,eu-west-1c \
--network-cidr=10.0.0.0/22 \
--node-count=3 \
--node-size=t2.micro \
--zones=eu-west-1a,eu-west-1b,eu-west-1c
Edit configuration:
kops edit cluster --state=s3://my-kops
Export terraform scripts:
kops update cluster --state=s3://my-kops --name=kops.example.com --target=terraform
Apply changes directly:
kops update cluster --state=s3://my-kops --name=kops.example.com --yes
List cluster:
kops get cluster --state s3://my-kops
Delete cluster:
kops delete cluster --state s3://my-kops --name=kops.example.com --yes

How to auto create new hosts in Logentries for AWS EC2 autoscaling group

What's the best way to send logs from Auto Scaling groups (of EC2) to Logentries?
I previously used the EC2 platform to create EC2 log monitoring for all of my EC2 instances created by an Auto Scaling group. However, according to the Auto Scaling rules, a new instance will spin up if a current one is destroyed.
Now how do I automate Logentries creating a new host and starting to collect logs? I've read https://logentries.com/doc/linux-agent-with-chef/#updating-le-agent but I'm stuck at override['le']['pull-server-side-config'] = false since I don't know anything about Chef (I just took the training from their site).
For an Auto Scaling group, you need to get this baked into an AMI or scripted to run on startup. You can get an EC2 instance to run commands on startup via its user data, once you've figured out which script to run.
The Logentries Linux Agent installation docs has setup instructions for an Amazon AMI (under Installation > Select your distro below > Amazon AMI).
Run the following commands one by one in your terminal:
You will need to provide your Logentries credentials to link the agent to your account.
sudo -s
tee /etc/yum.repos.d/logentries.repo <<EOF
[logentries]
name=Logentries repo
enabled=1
metadata_expire=1d
baseurl=http://rep.logentries.com/amazon\$releasever/\$basearch
gpgkey=http://rep.logentries.com/RPM-GPG-KEY-logentries
EOF
yum update
yum install logentries
le register
yum install logentries-daemon
I recommend trying that script once and seeing if it works properly for you; then you can include it in the user data for your Auto Scaling launch configuration.
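A hedged user-data sketch along those lines, assuming an Amazon Linux AMI; the --account-key and --name flags on le register are assumptions for non-interactive registration, so check le register --help for your agent version:
#!/bin/bash
# Runs as root at first boot when placed in the launch configuration's user data.
tee /etc/yum.repos.d/logentries.repo <<EOF
[logentries]
name=Logentries repo
enabled=1
metadata_expire=1d
baseurl=http://rep.logentries.com/amazon\$releasever/\$basearch
gpgkey=http://rep.logentries.com/RPM-GPG-KEY-logentries
EOF
yum update -y
yum install -y logentries
# Assumed flags: register this host non-interactively under your account,
# named after its instance ID so each scaled-out host shows up separately
le register --account-key=YOUR_ACCOUNT_KEY --name="$(curl -s http://169.254.169.254/latest/meta-data/instance-id)"
yum install -y logentries-daemon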