I am trying to attach the ephemeral partitions to my EC2 instances using boto.
I tried it with the AWS CLI tools and it worked:
./ec2-run-instances ami-018c9568
--instance-type m2.4xlarge
--availability-zone us-east-1a
-n 1 -k my_keypair -g sg-11111111
-b "/dev/xvdb=ephemeral0"
-b "/dev/xvdc=ephemeral1"
-b "/dev/xvdd=ephemeral2"
-b "/dev/xvde=ephemeral3"
When I try it with boto it doesn't work:
mapping = BlockDeviceMapping()
eph0 = BlockDeviceType()
eph1 = BlockDeviceType()
eph2 = BlockDeviceType()
eph3 = BlockDeviceType()
eph0.ephemeral_name = 'ephemeral0'
eph1.ephemeral_name = 'ephemeral1'
eph2.ephemeral_name = 'ephemeral2'
eph3.ephemeral_name = 'ephemeral3'
mapping['/dev/xvdb'] = eph0
mapping['/dev/xvdc'] = eph1
mapping['/dev/xvdd'] = eph2
mapping['/dev/xvde'] = eph3
The code above is actually correct. I was experimenting with an instance type that only has a single ephemeral volume (I thought it had two). Hope this helps other people.
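In case it's useful to others, here is a rough sketch of passing that mapping to run_instances with boto 2.x. The AMI, key pair and security group values are just the ones from the CLI example above, and the call itself is an illustration rather than something I have re-tested:

import boto.ec2
from boto.ec2.blockdevicemapping import BlockDeviceMapping, BlockDeviceType

# one entry per instance-store (ephemeral) volume exposed by the instance type
mapping = BlockDeviceMapping()
for device, name in [('/dev/xvdb', 'ephemeral0'), ('/dev/xvdc', 'ephemeral1'),
                     ('/dev/xvdd', 'ephemeral2'), ('/dev/xvde', 'ephemeral3')]:
    eph = BlockDeviceType()
    eph.ephemeral_name = name
    mapping[device] = eph

conn = boto.ec2.connect_to_region('us-east-1')
reservation = conn.run_instances(
    'ami-018c9568',
    instance_type='m2.4xlarge',
    placement='us-east-1a',
    key_name='my_keypair',
    security_group_ids=['sg-11111111'],
    block_device_map=mapping,  # the mapping only takes effect at launch time
)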
I am creating an instance from a source machine image, using this Terraform template:
resource "tls_private_key" "sandbox_ssh" {
algorithm = "RSA"
rsa_bits = 4096
}
output "tls_private_key_sandbox" { value = "${tls_private_key.sandbox_ssh.private_key_pem}" }
locals {
custom_data1 = <<CUSTOM_DATA
#!/bin/bash
CUSTOM_DATA
}
resource "google_compute_instance_from_machine_image" "sandboxvm_test_fromimg" {
project = "<proj>"
provider = google-beta
name = "sandboxvm-test-fromimg"
zone = "us-central1-a"
tags = ["test"]
source_machine_image = "projects/<proj>/global/machineImages/sandboxvm-test-img-1"
can_ip_forward = false
labels = {
owner = "test"
purpose = "test"
ami = "sandboxvm-test-img-1"
}
metadata = {
ssh-keys = "${var.sshuser}:${tls_private_key.sandbox_ssh.public_key_openssh}"
}
network_interface {
network = "default"
access_config {
// Include this section to give the VM an external ip address
}
}
metadata_startup_script = local.custom_data1
}
output "instance_ip_sandbox" {
value = google_compute_instance_from_machine_image.sandboxvm_test_fromimg.network_interface.0.access_config.0.nat_ip
}
output "user_name" {
value = var.sshuser
}
I can't even ping or netcat either the private or the public IP of the created VM. Even enabling "serial port" SSH via the custom startup script doesn't help.
I suspect that since this is a google-beta capability, it may not even be working / reliable yet.
Maybe we just can't create VMs (GCE instances) from machine images in GCP yet, unless this turns out to be a simple goof-up that isn't very evident in my Terraform.
I actually managed to solve it, and the root cause was in my base image rather than in GCE itself.
The problem was that, while creating the base image, the instance I had chosen had run the following:
#sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.6 2
#sudo update-alternatives --install /usr/bin/python3 python /usr/bin/python3.7 1
Maybe I should have tried with "python3" instead of "python", but when instantiating GCE instances based on this machine image, it looked for the deprecated "python2.7" rather than "python3" and complained about missing / unreadable packages like netplan etc.
Commenting out the "update-alternatives" lines and installing python3.6 and python3.7 explicitly did the trick!
Why doesn't my UserData run when I create an Amazon Lightsail instance with the Lightsail client?
var shuju = new CreateInstancesRequest()
{
BlueprintId = "centos_7_1901_01",
BundleId = "micro_2_0",
AvailabilityZone = "ap-northeast-1d",
InstanceNames = new System.Collections.Generic.List<string>() { "test" },
UserData = "echo root:test123456- |sudo chpasswd root\r\nsudo sed -i 's/^#\\?PermitRootLogin.*/PermitRootLogin yes/g' /etc/ssh/sshd_config;\r\nsudo sed -i 's/^#\\?PasswordAuthentication.*/PasswordAuthentication yes/g' /etc/ssh/sshd_config;\r\nsudo reboot\r\n"
};
If you wish to run a User Data script on a Linux instance, then the first line must begin with #!.
It uses the same technique as an Amazon EC2 instance, so see: Running Commands on Your Linux Instance at Launch - Amazon Elastic Compute Cloud
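For reference, here is roughly the same request with a corrected UserData script, sketched with boto3 (Python) rather than the .NET client; the values mirror the question, and restarting sshd instead of rebooting is my own assumption:

import boto3

lightsail = boto3.client('lightsail', region_name='ap-northeast-1')

# user data runs as root at first boot, so no sudo is needed,
# but the script must start with "#!" to be executed at all
user_data = """#!/bin/bash
echo 'root:test123456-' | chpasswd
sed -i 's/^#\\?PermitRootLogin.*/PermitRootLogin yes/g' /etc/ssh/sshd_config
sed -i 's/^#\\?PasswordAuthentication.*/PasswordAuthentication yes/g' /etc/ssh/sshd_config
systemctl restart sshd
"""

lightsail.create_instances(
    instanceNames=['test'],
    availabilityZone='ap-northeast-1d',
    blueprintId='centos_7_1901_01',
    bundleId='micro_2_0',
    userData=user_data,
)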
My problem
I have successfully deployed a Nomad job with a few dozen Redis Docker containers on AWS, using the default Redis image from Docker Hub.
I've slightly altered the default config file created by nomad init to change the number of running containers, and everything works as expected.
The problem is that the actual image I would like to run is in ECR, which requires AWS permissions (access and secret key), and I don't know how to send these.
Code
job "example" {
datacenters = ["dc1"]
type = "service"
update {
max_parallel = 1
min_healthy_time = "10s"
healthy_deadline = "3m"
auto_revert = false
canary = 0
}
group "cache" {
count = 30
restart {
attempts = 10
interval = "5m"
delay = "25s"
mode = "delay"
}
ephemeral_disk {
size = 300
}
task "redis" {
driver = "docker"
config {
# My problem here
image = "https://-whatever-.dkr.ecr.us-east-1.amazonaws.com/-whatever-"
port_map {
db = 6379
}
}
resources {
network {
mbits = 10
port "db" {}
}
}
service {
name = "global-redis-check"
tags = ["global", "cache"]
port = "db"
check {
name = "alive"
type = "tcp"
interval = "10s"
timeout = "2s"
}
}
}
}
}
What have I tried
Extensive Google Search
Reading the manual
Placing the AWS credentials on the machine that runs the Nomad job (using aws configure)
My question
How can nomad be configured to pull Docker containers from AWS ECR using the AWS credentials?
Pretty late for you, but AWS ECR does not handle authentication in the way that Docker expects. You need to run sudo $(aws ecr get-login --no-include-email --region ${your_region}); running the returned command actually authenticates in a Docker-compliant way.
Note that the region is optional if the AWS CLI is configured. Personally, I attach an IAM role to the box (allowing ECR pull/list/etc.) so that I don't have to deal with credentials manually.
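If you would rather fetch those credentials programmatically (for example to feed them into the auth block shown in the next answer), something along these lines should work with boto3; the region and the token parsing are assumptions on my part:

import base64
import boto3

ecr = boto3.client('ecr', region_name='us-east-1')

# the token is base64("AWS:<password>") and is valid for about 12 hours
auth = ecr.get_authorization_token()['authorizationData'][0]
username, password = base64.b64decode(auth['authorizationToken']).decode().split(':', 1)
registry = auth['proxyEndpoint']  # e.g. https://<account>.dkr.ecr.us-east-1.amazonaws.com

print(username, password, registry)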
I don't use ECR, but if it acts like a normal Docker registry, this is what I do for my own registry, and it works. Assuming that holds, it should work fine for you as well:
config {
image = "registry.service.consul:5000/MYDOCKERIMAGENAME:latest"
auth {
username = "MYMAGICUSER"
password = "MYMAGICPASSWORD"
}
}
According to the documentation, using Terraform I'm able to create a volume on DigitalOcean:
resource "digitalocean_volume" "foobar" {
region = "nyc1"
name = "baz"
size = 100
description = "an example volume"
}
So, I'm also able to attach it to a droplet:
resource "digitalocean_droplet" "foobar" {
name = "baz"
size = "1gb"
image = "coreos-stable"
region = "nyc1"
volume_ids = ["${digitalocean_volume.foobar.id}"]
}
I'd like to know how to mount this at a desired location.
I need to mount it automatically: as soon as the droplet is up, the volume should be mounted. I was thinking about using Chef...
Any ideas?
To mount the volume automatically, you can use user_data via cloud-init to run a script, as follows.
This is how your digitalocean_droplet resource should look:
resource "digitalocean_droplet" "foobar" {
name = "baz"
size = "1gb"
image = "coreos-stable"
region = "nyc1"
volume_ids = ["${digitalocean_volume.foobar.id}"]
# user data
user_data = "${data.template_cloudinit_config.cloudinit-example.rendered}"
}
Then your cloud-init configuration that contains the cloudinit_config should be as below. It references the shell script in ${TERRAFORM_HOME}/scripts/disk.sh that mounts your volume automatically:
provider "cloudinit" {}
data "template_file" "shell-script" {
template = "${file("scripts/disk.sh")}"
}
data "template_cloudinit_config" "cloudinit-example" {
gzip = false
base64_encode = false
part {
content_type = "text/x-shellscript"
content = "${data.template_file.shell-script.rendered}"
}
}
The shell script that mounts the volume automatically on startup lives in ${TERRAFORM_HOME}/scripts/disk.sh.
It first checks whether a filesystem already exists; if one does, it won't format the disk, otherwise it will:
#!/bin/bash
# DEVICE below is filled in by Terraform from the template_file vars above
DEVICE_FS=`blkid -o value -s TYPE ${DEVICE}`
if [ "`echo -n $DEVICE_FS`" == "" ] ; then
mkfs.ext4 ${DEVICE}
fi
mkdir -p /data
echo '${DEVICE} /data ext4 defaults 0 0' >> /etc/fstab
mount /data
I hope this helps
Mounting the volume needs to be done from the guest OS itself using mount, fstab, etc.
The DigitalOcean docs cover this here.
Using Chef you could use resource_mount to mount it in an automated fashion.
The device name will be /dev/disk/by-id/scsi-0DO_Volume_YOUR_VOLUME_NAME. So, using the example from the Terraform docs, it would be /dev/disk/by-id/scsi-0DO_Volume_baz.
Is there a utility or script available to retrieve a list of all instances from AWS EC2 auto scale group?
I need a dynamically generated list of production instances to hook into our deploy process. Is there an existing tool or is this something I am going to have to script?
Here is a bash command that will give you the list of IP addresses of your instances in an AutoScaling group.
for ID in $(aws autoscaling describe-auto-scaling-instances --region us-east-1 --query AutoScalingInstances[].InstanceId --output text);
do
aws ec2 describe-instances --instance-ids $ID --region us-east-1 --query Reservations[].Instances[].PublicIpAddress --output text
done
(you might want to adjust the region and to filter per AutoScaling group if you have several of them)
On a higher level point of view - I would question the need to connect to individual instances in an AutoScaling Group. The dynamic nature of AutoScaling would encourage you to fully automate your deployment and admin processes. To quote an AWS customer : "If you need to ssh to your instance, change your deployment process"
--Seb
The describe-auto-scaling-groups command from the AWS Command Line Interface looks like what you're looking for.
Edit: Once you have the instance IDs, you can use the describe-instances command to fetch additional details, including the public DNS names and IP addresses.
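For what it's worth, a rough boto3 equivalent of chaining those two commands (the group name and region below are placeholders):

import boto3

asg = boto3.client('autoscaling', region_name='us-east-1')
ec2 = boto3.client('ec2', region_name='us-east-1')

# describe-auto-scaling-groups -> instance IDs of the group
group = asg.describe_auto_scaling_groups(AutoScalingGroupNames=['YOUR_ASG'])['AutoScalingGroups'][0]
instance_ids = [i['InstanceId'] for i in group['Instances']]

# describe-instances -> public DNS names and IP addresses
for reservation in ec2.describe_instances(InstanceIds=instance_ids)['Reservations']:
    for instance in reservation['Instances']:
        print(instance['InstanceId'], instance.get('PublicDnsName'), instance.get('PublicIpAddress'))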
You can use the describe-auto-scaling-instances cli command, and query for your autoscale group name.
Example:
aws autoscaling describe-auto-scaling-instances --region us-east-1
--query 'AutoScalingInstances[?AutoScalingGroupName==`YOUR_ASG`]' --output text
Hope that helps
You can also use the command below to fetch the private IP addresses without any jq/awk/sed/cut:
$ aws autoscaling describe-auto-scaling-instances --region us-east-1 --output text \
--query "AutoScalingInstances[?AutoScalingGroupName=='ASG-GROUP-NAME'].InstanceId" \
| xargs -n1 aws ec2 describe-instances --region us-east-1 \
--query "Reservations[].Instances[].PrivateIpAddress" --output text --instance-ids
courtesy this
I actually ended up writing a script in Python because I feel more comfortable in Python than Bash.
#!/usr/bin/env python
"""
ec2-autoscale-instance.py
Read Autoscale DNS from AWS
Sample config file,
{
"access_key": "key",
"secret_key": "key",
"group_name": "groupName"
}
"""
from __future__ import print_function
import argparse
import boto.ec2.autoscale
try:
import simplejson as json
except ImportError:
import json
CONFIG_ACCESS_KEY = 'access_key'
CONFIG_SECRET_KEY = 'secret_key'
CONFIG_GROUP_NAME = 'group_name'
def main():
arg_parser = argparse.ArgumentParser(description=
'Read Autoscale DNS names from AWS')
arg_parser.add_argument('-c', dest='config_file',
help='JSON configuration file containing ' +
'access_key, secret_key, and group_name')
args = arg_parser.parse_args()
config = json.loads(open(args.config_file).read())
access_key = config[CONFIG_ACCESS_KEY]
secret_key = config[CONFIG_SECRET_KEY]
group_name = config[CONFIG_GROUP_NAME]
ec2_conn = boto.connect_ec2(access_key, secret_key)
as_conn = boto.connect_autoscale(access_key, secret_key)
try:
group = as_conn.get_all_groups([group_name])[0]
instances_ids = [i.instance_id for i in group.instances]
reservations = ec2_conn.get_all_reservations(instances_ids)
instances = [i for r in reservations for i in r.instances]
dns_names = [i.public_dns_name for i in instances]
print('\n'.join(dns_names))
finally:
ec2_conn.close()
as_conn.close()
if __name__ == '__main__':
main()
Gist
The answer at https://stackoverflow.com/a/12592543/20774 was helpful in developing this script.
Use the snippet below to sort out ASGs with specific tags and list their instance details.
#!/usr/bin/python
import boto3
ec2 = boto3.resource('ec2', region_name='us-west-2')
def get_instances():
    client = boto3.client('autoscaling', region_name='us-west-2')
    paginator = client.get_paginator('describe_auto_scaling_groups')
    groups = paginator.paginate(PaginationConfig={'PageSize': 100})
    # keep only the ASGs carrying the tag Application=CCP
    filtered_asgs = groups.search('AutoScalingGroups[] | [?contains(Tags[?Key==`{}`].Value, `{}`)]'.format('Application', 'CCP'))
    for asg in filtered_asgs:
        print(asg['AutoScalingGroupName'])
        instance_ids = [i['InstanceId'] for i in asg['Instances']]
        running_instances = ec2.instances.filter(
            Filters=[{'Name': 'instance-id', 'Values': instance_ids}])
        for instance in running_instances:
            print(instance.private_ip_address)
if __name__ == '__main__':
get_instances()
For Ruby, using the aws-sdk gem v2:
First create an EC2 resource object like this:
ec2 = Aws::EC2::Resource.new(region: 'region',
credentials: Aws::Credentials.new('IAM_KEY', 'IAM_SECRET')
)
instances = []
ec2.instances.each do |i|
p "instance id---", i.id
instances << i.id
end
This will fetch all instance IDs in a particular region; you can add more filters, such as ip_address.