Ansible ec2 module ignores "volumes" parameter

I'm trying to get Ansible to bring up new EC2 boxes for me with a volume size larger than the default ~8 GB. I've added the volumes option with volume_size specified, but when I run the playbook, the volumes option seems to be ignored and I still get a new box with ~8 GB. The relevant part of my playbook is as follows:
- name: provision new boxes
  hosts: localhost
  gather_facts: False
  tasks:
    - name: Provision a set of instances
      ec2:
        group: "{{ aws_security_group }}"
        instance_type: "{{ aws_instance_type }}"
        image: "{{ aws_ami_id }}"
        region: "{{ aws_region }}"
        vpc_subnet_id: "{{ aws_vpc_subnet_id }}"
        key_name: "{{ aws_key_name }}"
        wait: true
        count: "{{ num_machines }}"
        instance_tags: "{{ tags }}"
        volumes:
          - device_name: /dev/sda1
            volume_size: 15
      register: ec2
What am I doing wrong?

So, I just updated Ansible to 1.9.4 and all of a sudden it works. So there's my answer: the playbook above is fine; the ec2 module in my older Ansible version was simply broken.
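If you want to guard against this class of problem in the future, one option (a minimal sketch, not part of the original answer; the 1.9.4 floor comes from the fix above) is to assert the controller's Ansible version before provisioning:

- name: Guard against the broken volumes behaviour
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Require Ansible >= 1.9.4
      assert:
        # the 'version' test needs Ansible >= 2.5; older releases call it 'version_compare'
        that:
          - ansible_version.full is version('1.9.4', '>=')
        msg: "the ec2 volumes parameter is ignored before 1.9.4"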

Related

ansible: gcp_compute_disk - problem creating disk from a snapshot

I am new to Ansible and I am trying to figure out how to create a GCP disk from a GCP snapshot, using the gcp_compute_disk module. I am using the following documentation: https://docs.ansible.com/ansible/2.10/collections/google/cloud/gcp_compute_disk_module.html
I created the playbook below, but it only creates an empty new disk, not a disk from the snapshot. My ansible version is 2.9.20.
- name: Create GCP snapshots
  hosts: localhost
  gather_facts: yes
  vars:
    gcp_project: test-project
    gcp_cred_kind: serviceaccount
    gcp_cred_file: /etc/ansible/keys/ansible-test-project-service-account-key.json
    zone: "us-central1-a"
    region: "us-central1"
    instancename: "test-instance"
    snapshot:
      selfLink: https://www.googleapis.com/compute/v1/projects/test-project/global/snapshots/test-snapshot1
  tasks:
    - name: create data disk from a snapshot
      gcp_compute_disk:
        name: "{{ instancename }}-data-1"
        description: "{{ instancename }}-data-1"
        zone: "{{ zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        source_snapshot: "{{ snapshot }}"
        labels:
          usage: test-label
        type: "https://www.googleapis.com/compute/v1/projects/test-project/zones/us-central1-b/diskTypes/pd-standard"
        state: present
      register: disk_data
I have also tried to create the snapshot first with gcp_compute_snapshot, then registered that snapshot (register: disksnapshot), and then used that dictionary to reference the snapshot (source_snapshot: "{{ disksnapshot }}"). The result is the same.
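For reference, that second attempt looked roughly like this (a reconstruction from the description above, not the original code; the source disk reference is a hypothetical placeholder):

- name: create the snapshot first
  gcp_compute_snapshot:
    name: test-snapshot1
    source_disk: "{{ datadisk }}"  # hypothetical: the registered result of an earlier gcp_compute_disk task
    zone: "{{ zone }}"
    project: "{{ gcp_project }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
    state: present
  register: disksnapshot

- name: create data disk from the registered snapshot
  gcp_compute_disk:
    name: "{{ instancename }}-data-1"
    zone: "{{ zone }}"
    project: "{{ gcp_project }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
    source_snapshot: "{{ disksnapshot }}"
    state: present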
Thanks in advance for your help.

Ansible AWS Route53 URL doesn't resolve but page loads OK via IP

The following Ansible playbook runs fine, with no errors at all, but afterwards the URL just doesn't resolve/load. If I use the public IP created for the instance, the page loads.
---
- name: Provision an EC2 Instance
  hosts: local
  remote_user: ubuntu
  become: yes
  connection: local
  gather_facts: false
  vars:
    instance_type: t2.micro
    security_group: "Web Subnet Security Group"
    image: ami-0c5199d385b432989
    region: us-east-1
    keypair: demo-key
    count: 1
  vars_files:
    - keys.yml
  tasks:
    - name: Create key pair using our own pubkey
      ec2_key:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        name: demo-key
        key_material: "{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
        region: us-east-1
        state: present
    - name: Launch the new EC2 Instance
      ec2:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        assign_public_ip: yes
        vpc_subnet_id: subnet-0c799bda2a466f8d4
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        keypair: "{{ keypair }}"
        count: "{{ count }}"
        state: present
      register: ec2
    - name: Add tag to Instance(s)
      ec2_tag:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        resource: "{{ item.id }}"
        region: "{{ region }}"
        state: present
        tags:
          Name: demo-webserver
      with_items: "{{ ec2.instances }}"
    - name: Add the newly created EC2 instance(s) to the local host group (located inside the directory)
      lineinfile:
        path: ./hosts
        line: "{{ item.public_ip }}"
        insertafter: '\[demo-webserver\]'
        state: present
      with_items: "{{ ec2.instances }}"
    - name: Pause for 2 minutes
      pause:
        minutes: 2
    - name: Write the new ec2 instance host key to known hosts
      connection: local
      shell: "ssh-keyscan -H {{ item.public_ip }} >> ~/.ssh/known_hosts"
      with_items: "{{ ec2.instances }}"
    - name: Wait for the instance to come up
      local_action:
        module: wait_for
        host: "{{ item.public_ip }}"
        delay: 10
        connect_timeout: 300
        state: started
        port: 22
      with_items: "{{ ec2.instances }}"
    - name: Install packages
      delegate_to: "{{ item.public_ip }}"
      raw: bash -c "test -e /usr/bin/python || (apt -qqy update && apt install -qqy python-minimal && apt install -qqy apache2 && systemctl start apache2 && systemctl enable apache2)"
      with_items: "{{ ec2.instances }}"
    - name: Register new domain
      route53_zone:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        zone: ansible-demo-domain.com
    - name: Create new DNS record
      route53:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        zone: ansible-demo-domain.com
        record: ansible-demo-domain.com
        type: A
        ttl: 300
        value: "{{ item.public_ip }}"
        state: present
        overwrite: yes
        private_zone: no
        wait: yes
      with_items: "{{ ec2.instances }}"
    - name: Create new DNS record
      route53:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        zone: ansible-demo-domain.com
        record: www.ansible-demo-domain.com
        type: CNAME
        ttl: 300
        value: ansible-demo-domain.com
        state: present
        overwrite: yes
        private_zone: no
        wait: yes
Appreciate your help pointing out what/where I'm missing. I usually wait at least 5 minutes before testing the URL, but it really doesn't resolve/load.
Thank you!
2019-03-01 update: screenshots of the hosted zone after provisioning and its associated TTLs were attached to the original post.
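(An aside, not from the original post: since route53_zone creates a brand-new hosted zone, it can help to check whether the records resolve when queried against one of that zone's own name servers, which separates record problems from registrar NS delegation problems. The name server below is a placeholder; the real ones are listed in the zone's NS record.)

- name: Query the record against the hosted zone's own name server
  command: dig +short ansible-demo-domain.com A @ns-1234.awsdns-56.org  # placeholder NS host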

Ansible EC2 with security groups across VPC peering connections

I have 3 separate VPCs on AWS and am using Ansible to handle deploys. My problem is that a few of my environments use security groups from another VPC.
Here is my EC2 module -
- name: Create instance
  ec2:
    image: "{{ image }}"
    instance_type: "{{ instance_type }}"
    aws_access_key: "{{ aws_access_key_id }}"
    aws_secret_key: "{{ aws_secret_access_key }}"
    key_name: "{{ key_name }}"
    instance_tags:
      Name: "{{ name }}"
      Environment: "{{ env }}"
      Product: "{{ product }}"
      Service: "{{ service }}"
    region: "{{ region }}"
    volumes:
      - device_name: "{{ disk_name }}"
        volume_type: "{{ disk_type }}"
        volume_size: "{{ disk_size }}"
        delete_on_termination: "{{ delete_on_termination }}"
    # group: "{{ security_group_name }}"
    group_id: "{{ security_group_id }}"
    wait: true
    vpc_subnet_id: "{{ vpc_subnet_id }}"
    count: "{{ instance_count }}"
    monitoring: "{{ detailed_monitoring }}"
    instance_profile_name: "{{ iam_role }}"
    assign_public_ip: "{{ assign_public_ip }}"
    termination_protection: "{{ termination_protection }}"
  register: ec2
When I pass in a security group id from another VPC, I get this -
"msg": "Instance creation failed => InvalidParameter: Security group sg-e7284493 and subnet subnet-19d97e50 belong to different networks."
Is there a workaround in Ansible for this?
You can't assign a foreign security group to an EC2 instance in a different VPC: every security group attached to an instance must belong to that instance's own VPC.
The workaround is to create a security group in the VPC where your EC2 instance lives that allows access from the foreign security group, then attach that newly created group to your instance, as sketched below.
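A sketch of that workaround with the ec2_group module (the port numbers and variable names here are illustrative assumptions, not from the original question):

- name: Create a local security group that trusts the foreign group
  ec2_group:
    name: "{{ name }}-peer-access"
    description: Allow traffic from the foreign security group over the peering connection
    vpc_id: "{{ vpc_id }}"  # the VPC the instance lives in
    region: "{{ region }}"
    aws_access_key: "{{ aws_access_key_id }}"
    aws_secret_key: "{{ aws_secret_access_key }}"
    rules:
      - proto: tcp
        from_port: 443  # illustrative port
        to_port: 443
        group_id: sg-e7284493  # the foreign SG, referenced as a traffic source
  register: local_sg

The instance task can then use group_id: "{{ local_sg.group_id }}" instead of the foreign id.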

Launch multiple volumes with ec2 instance using ansible

I am provisioning an EC2 instance with a number of volumes attached to it. Here is my playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    instance_type: 't2.micro'
    region: 'my-region'
    aws_zone: 'myzone'
    security_group: my-sg
    image: ami-sample
    keypair: my-keypair
    vpc_subnet_id: my-subnet
  tasks:
    - name: Launch instance
      ec2:
        image: "{{ image }}"
        instance_type: "{{ instance_type }}"
        keypair: "{{ keypair }}"
        instance_tags: '{"Environment":"test","Name":"test-provisioning"}'
        region: "{{ region }}"
        aws_zone: "{{ region }}{{ aws_zone }}"
        group: "{{ security_group }}"
        vpc_subnet_id: "{{ vpc_subnet_id }}"
        wait: true
        volumes:
          - device_name: "{{ item }}"
            with_items:
              - /dev/sdb
              - /dev/sdc
            volume_type: gp2
            volume_size: 100
            delete_on_termination: true
            encrypted: true
      register: ec2_info
But getting following error
fatal: [localhost]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'item' is undefined
If I replace {{ item }} with /dev/sdb, the instance launches with that specific volume just fine. But I want to create more than one volume from the specified list of devices – /dev/sdb, /dev/sdc, etc.
Is there any way to achieve this?
You can't use with_items inside module parameters or vars – it's a task-level loop directive.
You need to construct your volumes list in advance:
- name: Populate volumes list
  set_fact:
    vol:
      device_name: "{{ item }}"
      volume_type: gp2
      volume_size: 100
      delete_on_termination: true
      encrypted: true
  with_items:
    - /dev/sdb
    - /dev/sdc
  register: volumes
Then call the ec2 module with:
volumes: "{{ volumes.results | map(attribute='ansible_facts.vol') | list }}"
Update: another approach without set_fact.
Define a variable – a template dictionary for a volume (everything except device_name):
vol_default:
  volume_type: gp2
  volume_size: 100
  delete_on_termination: true
  encrypted: true
Then in your ec2 module you can use:
volumes: "{{ [{'device_name': '/dev/sdb'},{'device_name': '/dev/sdc'}] | map('combine',vol_default) | list }}"

Restore Instance from aws snapshot using ansible

I have taken a snapshot of an instance and I just want to restore it using Ansible.
Please suggest a solution; my Ansible version is 1.9.4.
You can make use of ec2_vol module:
http://docs.ansible.com/ansible/ec2_vol_module.html
Note: Keep an eye on the options it supports and the version they were added in.
- name: Detach the old volume
  ec2_vol:
    region: "{{ aws_region }}"
    id: "{{ get_id.volume_id }}"
    instance: None
  register: detach_vol

- name: Create a volume from a snapshot
  ec2_vol:
    snapshot: "{{ snap_id }}"
    region: "{{ aws_region }}"
    volume_size: 40
    instance: "{{ instance_id }}"
  register: ec2_vol
  tags: attach

- name: Attach the created volume to the instance
  ec2_vol:
    instance: "{{ instance_id }}"
    id: "{{ ec2_vol.volume_id }}"
    device_name: /dev/sda1
    delete_on_termination: yes
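The first task assumes an earlier lookup registered as get_id. A minimal sketch of such a lookup (the list mode is documented for ec2_vol; the exact attribute path into the result varies by module version, so treat it as an assumption):

- name: List the volumes attached to the instance
  ec2_vol:
    region: "{{ aws_region }}"
    instance: "{{ instance_id }}"
    state: list
  register: get_id
  # the detach task above refers to get_id.volume_id; depending on the
  # module version the id may instead live at get_id.volumes[0].id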