Ansible ec2_ami module not working: times out - amazon-web-services

I'm trying to implement a playbook dealing with Amazon EC2.
I came to the part where I want to create a new image (I used a role for that); here is what I did:
---
- debug: msg="{{ id_image }}"
- ec2_ami:
    instance_id: "{{ id_image }}"
    name: azzouz
    wait: no
    region: "{{ aws_region }}"
    tags:
      project: "{{ project }}"
  register: image
I tried to specify the timeout as 900 and it's still not working. I changed it to wait: no and I'm still hitting the same problem; here is the last part of the error:
... return self._sslobj.read(len)\nssl.SSLError: The read operation timed out\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
Could you please help me?

You don't have to set wait: no. Use this syntax and it will work.
- ec2_ami:
    aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
    aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    instance_id: i-xxxxxx
    wait: yes
    name: newtest
    tags:
      Name: newtest
      Service: TestService
  register: instance
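If the image takes longer to become available than the wait window (the question already tried 900 seconds), the ec2_ami module's wait_timeout parameter can also be raised. A minimal sketch, with the timeout value chosen arbitrarily for illustration:

- ec2_ami:
    instance_id: "{{ id_image }}"
    name: azzouz
    region: "{{ aws_region }}"
    wait: yes
    wait_timeout: 1800  # seconds to wait for the AMI to become available (example value)
  register: image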

Related

Iterating in a with_items loop in Ansible

Hi there, I am currently trying to get the hang of Ansible with AWS, and I am really liking its flexibility so far.
Sadly, I am now starting to hit a brick wall with my experiments. I am trying to tag the volumes of specific instances that carry the tag environment: test with the tags environment: test and backup: true. The playbook works as intended if I specify every single index of the array in the with_items loop. Here's my playbook so far:
---
- name: Tag the EBS Volumes
  hosts: tag_environment_test
  gather_facts: False
  tags: tag
  vars_files:
    - /etc/ansible/vars/aws.yml
  tasks:
    - name: Gather instance instance_ids
      local_action:
        module: ec2_remote_facts
        region: '{{ aws_region }}'
        filters:
          instance-state-name: running
          "tag:environment": test
      register: test_id
    - name: Gather volume information for instance
      local_action:
        module: ec2_vol
        region: '{{ aws_region }}'
        instance: "{{ item.id }}"
        state: list
      with_items:
        - "{{ test_id.instances }}"
      register: ec2_volumes
    - debug:
        var: ec2_volumes
    - name: Do some actual tagging
      local_action:
        module: ec2_tag
        region: '{{ aws_region }}'
        resource: "{{ item.id }}"
        args:
          tags:
            environment: test
            backup: true
      with_items:
        - "{{ ec2_volumes.results[0].volumes }}"
        # - "{{ ec2_volumes.results[1].volumes }}"
My question now: is it possible to iterate over the full array in ec2_volumes.results without having to specify every single index? For example, something like ec2_volumes.results[X].volumes with X = X + 1, so each pass through the loop advances by one until the end of the array.
Any input on the rest of the playbook would also be very appreciated (like I said, still trying to get the hang of Ansible :)
Greetings,
Drees
You can iterate over your list of results:
- name: Do some actual tagging
  delegate_to: localhost
  ec2_tag:
    region: '{{ aws_region }}'
    resource: "{{ item.volume.id }}"
    tags:
      environment: test
      backup: true
  with_items: "{{ ec2_volumes.results }}"
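If each entry in ec2_volumes.results actually holds a volumes list (as the ec2_vol list results in the question suggest), a hedged alternative is with_subelements, which walks every inner list without indexing; item.0 is the outer result and item.1 one inner volume. A sketch under that assumption:

- name: Tag every volume of every instance
  delegate_to: localhost
  ec2_tag:
    region: '{{ aws_region }}'
    resource: "{{ item.1.id }}"   # item.1 is one volume from the inner volumes list
    tags:
      environment: test
      backup: true
  with_subelements:
    - "{{ ec2_volumes.results }}"
    - volumes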
"Any input on the rest of the playbook would also be very appreciated"
Consider using delegate_to: localhost instead of local_action. Take this task:
- name: an example
  command: do something
Using delegate_to, I only need to add a single line:
- name: an example
  delegate_to: localhost
  command: do something
Whereas using local_action I need to rewrite the task:
- name: an example
  local_action:
    module: command do something
And of course, delegate_to is more flexible: you can use it to delegate to hosts other than localhost.
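For instance, the same pattern can point at any host in your inventory; a minimal sketch, where bastion is a hypothetical inventory hostname:

- name: an example, run on a jump host instead of the target
  delegate_to: bastion   # hypothetical inventory hostname
  command: do something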
Update
Without seeing your actual playbook it's hard to identify the source of the error. Here's a complete playbook that runs successfully (using synthesized data and wrapping your ec2_tag task in a debug task):
---
- hosts: localhost
  gather_facts: false
  vars:
    aws_region: example
    ec2_volumes:
      results:
        - volume:
            id: 1
        - volume:
            id: 2
  tasks:
    - name: Do some actual tagging
      debug:
        msg:
          ec2_tag:
            region: '{{ aws_region }}'
            resource: '{{ item.volume.id }}'
            tags:
              environment: test
              backup: true
      with_items: "{{ ec2_volumes.results }}"

Ansible: provision a newly allocated EC2 instance

This playbook appears to be SSHing into my local machine rather than the remote one; I'm inferring this from the output I've included at the bottom.
I've adapted the example from here: http://docs.ansible.com/ansible/guide_aws.html#provisioning
The playbook is split into two plays:
creation of the EC2 instance and
configuration of the EC2 instance
Note: To run this you'll need to create a key-pair with the same name as the project (you can get more information here: https://us-west-2.console.aws.amazon.com/ec2/v2/home?region=us-west-2#KeyPairs:sort=keyName)
The playbook is listed below:
# Create instance
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  vars:
    project_name: my-test
  tasks:
    - name: Get the current username
      local_action: command whoami
      register: username_on_the_host
    - name: Capture current instances
      ec2_remote_facts:
        region: "us-west-2"
      register: ec2_instances
    - name: Create instance
      ec2:
        region: "us-west-2"
        zone: "us-west-2c"
        keypair: "{{ project_name }}"
        group:
          - "SSH only"
        instance_type: "t2.nano"
        image: "ami-59799439" # debian:jessie amd64 hvm on us-west-2
        count_tag: "{{ project_name }}-{{ username_on_the_host.stdout }}-test"
        exact_count: 1
        wait: yes
        instance_tags:
          Name: "{{ project_name }}-{{ username_on_the_host.stdout }}-test"
          "{{ project_name }}-{{ username_on_the_host.stdout }}-test": simple_ec2
          Creator: "{{ username_on_the_host.stdout }}"
      register: ec2_info
    - name: Wait for instances to listen on port 22
      wait_for:
        state: started
        host: "{{ item.public_dns_name }}"
        port: 22
      with_items: "{{ ec2_info.instances }}"
      when: ec2_info|changed
    - name: Add new instance to launched group
      add_host:
        hostname: "{{ item.public_dns_name }}"
        groupname: launched
      with_items: "{{ ec2_info.instances }}"
      when: ec2_info|changed
    - name: Get ec2_info information
      debug:
        msg: "{{ ec2_info }}"

# Configure and install all we need
- hosts: launched
  remote_user: admin
  gather_facts: true
  tasks:
    - name: Display all variables/facts known for a host
      debug:
        var: hostvars[inventory_hostname]
    - name: List hosts
      debug: msg="groups={{groups}}"
    - name: Get current user
      command: whoami
    - name: Prepare system
      become: yes
      become_method: sudo
      apt: "name={{item}} state=latest"
      with_items:
        - software-properties-common
        - python-software-properties
        - devscripts
        - build-essential
        - libffi-dev
        - libssl-dev
        - vim
The output I have is:
TASK [Get current user] ********************************************************
changed: [ec2-35-167-142-43.us-west-2.compute.amazonaws.com] => {"changed": true, "cmd": ["whoami"], "delta": "0:00:00.006532", "end": "2017-01-09 14:53:55.806000", "rc": 0, "start": "2017-01-09 14:53:55.799468", "stderr": "", "stdout": "brianbruggeman", "stdout_lines": ["brianbruggeman"], "warnings": []}
TASK [Prepare system] **********************************************************
failed: [ec2-35-167-142-43.us-west-2.compute.amazonaws.com] (item=['software-properties-common', 'python-software-properties', 'devscripts', 'build-essential', 'libffi-dev', 'libssl-dev', 'vim']) => {"failed": true, "item": ["software-properties-common", "python-software-properties", "devscripts", "build-essential", "libffi-dev", "libssl-dev", "vim"], "module_stderr": "sudo: a password is required\n", "module_stdout": "", "msg": "MODULE FAILURE"}
This should work.
- name: Create Ec2 Instances
  hosts: localhost
  connection: local
  gather_facts: False
  vars:
    project_name: device-graph
    ami_id: ami-59799439 # debian jessie 64-bit hvm
    region: us-west-2
    zone: "us-west-2c"
    instance_size: "t2.nano"
  tasks:
    - name: Provision a set of instances
      ec2:
        key_name: my_key
        group: ["SSH only"]
        instance_type: "{{ instance_size }}"
        image: "{{ ami_id }}"
        wait: true
        exact_count: 1
        count_tag:
          Name: "{{ project_name }}-{{ username.stdout }}-test"
          Creator: "{{ username.stdout }}"
          Project: "{{ project_name }}"
        instance_tags:
          Name: "{{ project_name }}-{{ username.stdout }}-test"
          Creator: "{{ username.stdout }}"
          Project: "{{ project_name }}"
      register: ec2
    - name: Add all instance public IPs to host group
      add_host:
        hostname: "{{ item.public_ip }}"
        groups: launched_ec2_hosts
      with_items: "{{ ec2.tagged_instances }}"

- name: configuration play
  hosts: launched_ec2_hosts
  user: admin
  gather_facts: true
  vars:
    ansible_ssh_private_key_file: "~/.ssh/project-name.pem"
  tasks:
    - name: get the username running the deploy
      shell: whoami
      register: username
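One step from the original playbook that is worth keeping is waiting for SSH before the configuration play runs. A sketch of the same wait_for pattern, which would slot into the end of the first play's task list, applied to the answer's registered ec2 result:

    - name: Wait for SSH to come up on the new instances
      wait_for:
        host: "{{ item.public_ip }}"
        port: 22
        state: started
      with_items: "{{ ec2.tagged_instances }}"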

Restore Instance from aws snapshot using ansible

I have taken a snapshot of an instance and I just want to restore it using Ansible.
Please provide a solution; my Ansible version is 1.9.4.
You can make use of the ec2_vol module:
http://docs.ansible.com/ansible/ec2_vol_module.html
Note: Keep an eye on the options it supports and the version they were added in.
- name: Detach the old volume
  ec2_vol:
    region: "{{ aws_region }}"
    id: "{{ get_id.volume_id }}"
    instance: None
  register: detach_vol

- name: Creating a Volume from a snapshot
  ec2_vol:
    snapshot: "{{ snap_id }}"
    region: "{{ aws_region }}"
    volume_size: 40
    instance: "{{ instance_id }}"
  register: ec2_vol
  tags: attach

- name: Attach the Created volume to an instance
  ec2_vol:
    instance: "{{ instance_id }}"
    id: "{{ ec2_vol.volume_id }}"
    device_name: /dev/sda1
    delete_on_termination: yes
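The detach task above references get_id.volume_id, which assumes some earlier task registered the old volume. One hedged way to find it is the same module in list mode (the attached_vols name is illustrative):

- name: List the volumes currently attached to the instance
  ec2_vol:
    region: "{{ aws_region }}"
    instance: "{{ instance_id }}"
    state: list
  register: attached_vols
# attached_vols.volumes is a list of dicts; each exposes the volume id under
# id and its device under attachment_set.device, so you can pick out the
# entry you want to replace, e.g. /dev/sda1.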

Ansible ec2 module ignores "volumes" parameter

I'm trying to get Ansible to bring up new EC2 boxes for me with a volume size larger than the default ~8 GB. I've added the volumes option with volume_size specified, but when I run the playbook, the volumes option seems to be ignored and I still get a new box with ~8 GB. The relevant part of my playbook is as follows:
- name: provision new boxes
  hosts: localhost
  gather_facts: False
  tasks:
    - name: Provision a set of instances
      ec2:
        group: "{{ aws_security_group }}"
        instance_type: "{{ aws_instance_type }}"
        image: "{{ aws_ami_id }}"
        region: "{{ aws_region }}"
        vpc_subnet_id: "{{ aws_vpc_subnet_id }}"
        key_name: "{{ aws_key_name }}"
        wait: true
        count: "{{ num_machines }}"
        instance_tags: "{{ tags }}"
        volumes:
          - device_name: /dev/sda1
            volume_size: 15
      register: ec2
What am I doing wrong?
I just updated Ansible to 1.9.4 and now, all of a sudden, it works. So there's my answer: the code above is fine; Ansible was just broken.
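For completeness, the volumes list of the ec2 module also accepts per-device options beyond volume_size; a minimal sketch (values are examples only):

volumes:
  - device_name: /dev/sda1
    volume_type: gp2            # general-purpose SSD instead of the default
    volume_size: 15             # size in GB
    delete_on_termination: true # clean the volume up with the instance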

Create many AWS instances with Ansible, 'count' does not work

I have this playbook:
---
# Run it like this:
# ansible-playbook --extra-vars '{"VAR":"var-value", "VAR":"var-value"}' playbook-name.yml
- hosts: localhost
  vars:
    instance_tag: "{{ TAG }}"
    instances_num: 2
  tasks:
    - name: Create new AWS instances
      local_action:
        module: ec2
        region: us-east-1
        key_name: integration
        instance_type: m3.medium
        image: ami-61dcvfa
        group: mysecgroup
        instance_tags:
          Name: "{{ instance_tag }}"
      with_sequence: count = {{ instances_num | int }}
When I run it, it throws this:
TASK: [Create new AWS instances] **********************************************
fatal: [localhost] => unknown error parsing with_sequence arguments: u'count = 1'
FATAL: all hosts have already failed -- aborting
What am I doing wrong?
I have tried with a literal 2 as well, but it throws the same error.
I have also tried "{{ instances_num }}", but no luck.
The ec2 module has a count parameter that you can use directly rather than trying to loop the task over a sequence.
You can use it like this:
---
# Run it like this:
# ansible-playbook --extra-vars '{"VAR":"var-value", "VAR":"var-value"}' playbook-name.yml
- hosts: localhost
  vars:
    instance_tag: "{{ TAG }}"
    instances_num: 2
  tasks:
    - name: Create new AWS instances
      local_action:
        module: ec2
        region: us-east-1
        key_name: integration
        instance_type: m3.medium
        image: ami-61dcvfa
        group: mysecgroup
        instance_tags:
          Name: "{{ instance_tag }}"
        count: "{{ instances_num }}"
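As an aside, the original error comes from the spaces around =: with_sequence parses its arguments as key=value pairs and cannot parse count = 1. If you did want a sequence loop (which would run the ec2 task once per iteration rather than once with a count), the fix is simply:

with_sequence: count={{ instances_num | int }}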