Ansible EC2 Module is not retrieving instance information [duplicate] - amazon-web-services

This question already has answers here:
How to use Ansible's with_item with a variable?
(2 answers)
Closed 5 years ago.
I wrote an Ansible task to create an EC2 instance and add the host as a dynamic host. The task works perfectly and creates the instance, but I am not able to retrieve the instance information.
My Ansible version: 2.2.0.0 / Ubuntu 14.04
Here is my code:
- name: launch ec2 instance for QA
  local_action:
    module: ec2
    key_name: "{{ ec2_keypair }}"
    group: "{{ ec2_security_group }}"
    instance_type: "{{ ec2_instance_type }}"
    image: "{{ ec2_image }}"
    vpc_subnet_id: "{{ ec2_subnet_ids }}"
    region: "{{ ec2_region }}"
    instance_tags: '{"Name":"{{ec2_tag_Name}}","Type":"{{ec2_tag_Type}}","Environment":"{{ec2_tag_Environment}}"}'
    assign_public_ip: yes
    wait: true
    count: 1
  register: ec2

- debug: var=item
  with_items: ec2.instances

- add_host: name={{ item.public_ip }} >
    groups=dynamically_created_hosts
  with_items: ec2.instances

- name: Wait for the instances to boot by checking the ssh port
  wait_for: host={{item.public_ip}} port=22 delay=60 timeout=320 state=started
  with_items: ec2.instances
The output I am getting is:
TASK [launch ec2 instance for QA] **********************************************
changed: [localhost -> localhost]
TASK [debug] *******************************************************************
ok: [localhost] => (item=ec2.instances) => {
    "item": "ec2.instances"
}
TASK [add_host] ****************************************************************
fatal: [localhost]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'unicode object' has no attribute 'public_ip'\n\nThe error appears to have been in '/var/lib/jenkins/jobs/QA/workspace/dynamic-ec2.yml': line 37, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - add_host: name={{ item.public_ip }} >\n ^ here\nWe could be wrong, but this one looks like it might be an issue with\nmissing quotes. Always quote template expression brackets when they\nstart a value. For instance:\n\n with_items:\n - {{ foo }}\n\nShould be written as:\n\n with_items:\n - \"{{ foo }}\"\n"}
Is there any other way to do this?

You cannot use bare variables in 2.2. The syntax has been deprecated, and users have been warned since version 2.0.
You should read the error message you have pasted, and although it suggests a different reason, you should follow the example given:
Should be written as:
with_items:
  - "{{ foo }}"
In your case it's enough to replace every occurrence of with_items: ec2.instances with:

with_items: "{{ ec2.instances }}"
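Applied to the playbook from the question, the failing tasks then become (a sketch with the same variable names, and the stray > removed from the add_host line):

```yaml
- add_host: name={{ item.public_ip }} groups=dynamically_created_hosts
  with_items: "{{ ec2.instances }}"

- name: Wait for the instances to boot by checking the ssh port
  wait_for: host={{ item.public_ip }} port=22 delay=60 timeout=320 state=started
  with_items: "{{ ec2.instances }}"
```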

Related

Check if EC2 instances exists and running then skip deployment

I am trying to run a playbook that calls a role to deploy some EC2 instances. Everything works fine, except that I want to add a condition to skip the deployment when the EC2 instance already exists and is in a running state. I used the following to retrieve the ec2_infos:
## Check if an instance with same name exist on AWS
- name: Get {{ ec2_name }} infos
  ec2_instance_info:
    region: "us-east-1"
    filters:
      "tag:Name": "{{ ec2_name }}"
      instance-state-name: [ "running" ]
  register: ec2_infos

- name: DEBUG
  debug: msg="{{ aws_ec2_infos }}"
and in the deployment stage my condition is as follows:
- name: "{{ ec2_description }} - {{ ec2_name }}"
  cloudformation:
    stack_name: "some name"
    state: "present"
    region: "{{ aws_region }}"
    template: "PATH/ec2.json"
    template_parameters:
      Name: "{{ ec2_name }}"
      Description: "{{ ec2_description }}"
      KeyName: "{{ key_name }}"
      KmsKeyId: "{{ key_id }}"
      GroupSet: "{{ id }}"
      IamInstanceProfile: "{{ name }}"
      Type: "OS"
  when: ec2_infos.state[0].name != 'running'
but I get an error that says :
"msg": "The conditional check 'aws_ec2_infos.state[0].name != 'running'' failed. The error was: error while evaluating conditional (aws_ec2_infos.state[0].name != 'running'): 'dict object' has no attribute
I think I am missing something in my condition, but I can't find what exactly. Any tip or advice is more than welcome.
As @benoit and @mdaniel said, the error was in my understanding; the condition should be:
aws_ec2_infos.instances[0].state.name != 'running'
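With that fix, it is also worth guarding against the case where no matching instance exists at all, since the instances list would then be empty and indexing it would fail (a sketch of the condition only, using the register name from the first task):

```yaml
when: aws_ec2_infos.instances | length == 0 or aws_ec2_infos.instances[0].state.name != 'running'
```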

How to delete old launch configuration using ansible code

I am working on an autoscaling Ansible project. Can somebody tell me how I can delete old launch configurations using Ansible playbooks?
Thanks
Some time ago, I created a pull request for a new Ansible module that you can use to find old launch configurations and delete them.
For example, if you'd like to keep the top 10 most recent launch configurations, and remove old ones, you could have a task like:
---
- name: "Find old Launch Configs"
  ec2_lc_find:
    profile: "{{ boto_profile }}"
    region: "{{ aws_region }}"
    name_regex: "*nameToFind*"
    sort: true
    sort_end: -10
  register: old_launch_config

- name: "Remove old Launch Configs"
  ec2_lc:
    profile: "{{ boto_profile }}"
    region: "{{ aws_region }}"
    name: "{{ item.name }}"
    state: absent
  with_items: "{{ old_launch_config.results }}"
  ignore_errors: yes

Creating n new instances in AWS EC2 VPC and then configuring them

I'm having a really hard time doing what seems like a fairly standard task, so I'm hoping somebody can help me. I've googled this like crazy, and most of the examples are not in a VPC or use deprecated structure that makes them wrong or unusable in my use case.
Here are my goals:
1. I want to launch a whole mess of new instances in my VPC (the same code below has 3 but it could be a hundred)
2. I want to wait for those instances to come alive
3. I then want to configure those instances (ssh into them, change hostname, enable some services, etc. etc.)
Now I could probably do this in 2 tasks. I could create the instances in 1 playbook, wait for them to settle down, then run a 2nd playbook to configure them. That's probably what I'm going to do now because I want to get moving, but there has to be a one-shot answer to this.
Here's what I have so far for a playbook
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Provision Lunch
      with_items:
        - hostname: eggroll1
        - hostname: eggroll2
        - hostname: eggroll3
      ec2:
        region: us-east-1
        key_name: eggfooyong
        vpc_subnet_id: subnet-8675309
        instance_type: t2.micro
        image: ami-8675309
        wait: true
        group_id: sg-8675309
        exact_count: 1
        count_tag:
          Name: "{{ item.hostname }}"
        instance_tags:
          Name: "{{ item.hostname }}"
          role: "supper"
          ansibleowned: "True"
      register: ec2

    - name: Wait for SSH to come up
      wait_for: host={{ item.private_ip }} port=22 delay=60 timeout=900 state=started
      with_items: '{{ec2.instances}}'

    - name: Update hostname on instances
      hostname: name={{ item.private_ip }}
      with_items: '{{ec2.instances}}'
And that doesn't work. What I get is:
TASK [Wait for SSH to come up] *************************************************
[DEPRECATION WARNING]: Skipping task due to undefined Error, in the future this will be a fatal error.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
TASK [Update hostname on instances] ********************************************
[DEPRECATION WARNING]: Skipping task due to undefined Error, in the future this will be a fatal error.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
Which makes me sad. Now this is my latest incarnation of that playbook, but I've tried to rewrite it using every example I can find on the internet. Most of them have with_items written in a different way, but Ansible tells me that way is deprecated, and then fails.
So far ansible has been fun and easy, but this is making me want to toss my laptop across the street.
Any suggestions? Should I be using register and with_items at all? Would I be better off using something like this:
add_host: hostname={{item.public_ip}} groupname=deploy
instead? I'm wide open to a rewrite here. I'm going to go write this up in 2 playbooks and would love to get suggestions.
Thanks!
**EDIT**
Now it's just starting to feel broken or seriously changed. I've googled dozens of examples and they all are written the same way and they all fail with the same error. This is my simple playbook now:
---
- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    builderstart: 93
    builderend: 94
  tasks:
    - name: Provision Lunch
      ec2:
        region: us-east-1
        key_name: dakey
        vpc_subnet_id: subnet-8675309
        instance_type: t2.micro
        image: ami-8675309
        wait: True
        group_id: sg-OU812
        exact_count: 1
        count_tag:
          Name: "{{ item }}"
        instance_tags:
          Name: "{{ item }}"
          role: "dostuff"
          extracheese: "True"
      register: ec2
      with_sequence: start="{{builderstart}}" end="{{builderend}}" format=builder%03d

    - name: the newies
      debug: msg="{{ item }}"
      with_items: "{{ ec2.instances }}"
It really couldn't be more straightforward. No matter how I write it, no matter how I vary it, I get the same basic error:
[DEPRECATION WARNING]: Skipping task due to undefined Error, in the
future this will be a fatal error.: 'dict object' has no attribute
'instances'.
So it looks like it's the with_items: "{{ ec2.instances }}" line that's causing the error.
I've used debug to print out ec2 and that error looks accurate. It looks like the structure changed to me. It looks like ec2 now contains a dictionary with results as a key to another dictionary object and that instances is a key in that dictionary. But I can't find a sane way to access the data.
For what it's worth, I've tried accessing this in 2.0.1, 2.0.2, and 2.2 and I get the same problem in every case.
Are the rest of you using 1.9 or something? I can't find an example anywhere that works. It's very frustrating.
Thanks again for any help.
Don't do it like this:
- name: Provision Lunch
  with_items:
    - hostname: eggroll1
    - hostname: eggroll2
    - hostname: eggroll3
  ec2:
    region: us-east-1
Because by using it you are flushing all the info from ec2 into your item.
You are receiving the following output:
TASK [Launch instance] *********************************************************
changed: [localhost] => (item={u'hostname': u'eggroll1'})
changed: [localhost] => (item={u'hostname': u'eggroll2'})
but item should look like this:
changed: [localhost] => (item={u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-172-31-29-85.ec2.internal', u'public_ip': u'54.208.138.217', u'private_ip': u'172.31.29.85', u'id': u'i-003b63636e7ffc27c', u'ebs_optimized': False, u'state': u'running', u'virtualization_type': u'hvm', u'architecture': u'x86_64', u'ramdisk': None, u'block_device_mapping': {u'/dev/sda1': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-37581295'}}, u'key_name': u'eggfooyong', u'image_id': u'ami-fce3c696', u'tenancy': u'default', u'groups': {u'sg-aabbcc34': u'ssh'}, u'public_dns_name': u'ec2-54-208-138-217.compute-1.amazonaws.com', u'state_code': 16, u'tags': {u'ansibleowned': u'True', u'role': u'supper'}, u'placement': u'us-east-1d', u'ami_launch_index': u'1', u'dns_name': u'ec2-54-208-138-217.compute-1.amazonaws.com', u'region': u'us-east-1', u'launch_time': u'2016-04-19T08:19:16.000Z', u'instance_type': u't2.micro', u'root_device_name': u'/dev/sda1', u'hypervisor': u'xen'})
Try to use the following code:
- name: Create a sandbox instance
  hosts: localhost
  gather_facts: False
  vars:
    keypair: eggfooyong
    instance_type: t2.micro
    security_group: ssh
    image: ami-8675309
    region: us-east-1
    subnet: subnet-8675309
    instance_names:
      - eggroll1
      - eggroll2
  tasks:
    - name: Launch instance
      ec2:
        key_name: "{{ keypair }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet }}"
        assign_public_ip: no
        count: "{{ instance_names | length }}"
      register: ec2

    - name: tag instances
      ec2_tag:
        resource: '{{ item.0.id }}'
        region: '{{ region }}'
        tags:
          Name: '{{ item.1 }}'
          role: "supper"
          ansibleowned: "True"
      with_together:
        - '{{ ec2.instances }}'
        - '{{ instance_names }}'

    - name: Wait for SSH to come up
      wait_for: host={{ item.private_ip }} port=22 delay=60 timeout=320 state=started
      with_items: '{{ ec2.instances }}'
This assumes that your Ansible host is located inside the VPC.
To achieve this goal, I have written a really small filter plugin get_ec2_info.
Create a directory named filter_plugins.
Create a plugin file get_ec2_info.py with the following content:
from jinja2.utils import soft_unicode


class FilterModule(object):
    def filters(self):
        return {
            'get_ec2_info': get_ec2_info,
        }


def get_ec2_info(list, ec2_key):
    ec2_info = []
    for item in list:
        for ec2 in item['instances']:
            ec2_info.append(ec2[ec2_key])
    return ec2_info
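To sanity-check the filter's logic outside Ansible, you can run the same function against a mocked register structure; the sample data below is made up, but it mirrors the shape of ec2.results when the ec2 task is registered inside a loop:

```python
# Standalone copy of the get_ec2_info filter for quick testing.
def get_ec2_info(results, ec2_key):
    ec2_info = []
    for item in results:
        for ec2 in item['instances']:
            ec2_info.append(ec2[ec2_key])
    return ec2_info

# Mocked register: each loop iteration contributes one result dict
# carrying its own 'instances' list (IPs here are documentation addresses).
mock_results = [
    {'instances': [{'id': 'i-aaa', 'public_ip': '203.0.113.10'}]},
    {'instances': [{'id': 'i-bbb', 'public_ip': '203.0.113.11'}]},
]

print(get_ec2_info(mock_results, 'public_ip'))
# ['203.0.113.10', '203.0.113.11']
```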
Then you can use this in your playbook:
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Provision Lunch
      ec2:
        region: us-east-1
        key_name: eggfooyong
        vpc_subnet_id: subnet-8675309
        instance_type: t2.micro
        image: ami-8675309
        wait: true
        group_id: sg-8675309
        exact_count: 1
        count_tag:
          Name: "{{ item.hostname }}"
        instance_tags:
          Name: "{{ item.hostname }}"
          role: "supper"
          ansibleowned: "True"
      register: ec2
      with_items:
        - hostname: eggroll1
        - hostname: eggroll2
        - hostname: eggroll3

    - name: Create SSH Group to login dynamically to EC2 Instance(s)
      add_host:
        hostname: "{{ item }}"
        groupname: my_ec2_servers
      with_items: "{{ ec2.results | get_ec2_info('public_ip') }}"

    - name: Wait for SSH to come up on EC2 Instance(s)
      wait_for:
        host: "{{ item }}"
        port: 22
        state: started
      with_items: "{{ ec2.results | get_ec2_info('public_ip') }}"

# CALL THE DYNAMIC GROUP IN THE SAME PLAYBOOK
- hosts: my_ec2_servers
  become: yes
  remote_user: ubuntu
  gather_facts: yes
  tasks:
    - name: DO YOUR TASKS HERE
EXTRA INFORMATION:
- using ansible 2.0.1.0
- assuming you are spinning up Ubuntu instances; if not, change the value in remote_user: ubuntu
- assuming the ssh key is properly configured
Please consult these github repos for more help:
ansible-aws-role-1
ansible-aws-role-2
I think this would be helpful for debugging.
https://www.middlewareinventory.com/blog/ansible-dict-object-has-no-attribute-stdout-or-stderr-how-to-resolve/
The registered ec2 variable is a dict, and it has a key results.
The results key contains many elements, including dicts and lists, like below:
{
    "msg": {
        "results": [
            {
                "invocation": {
                },
                "instances": [],
                "changed": false,
                "tagged_instances": [
                    {
                    }
                ],
                "instance_ids": null,
                "failed": false,
                "item": [
                ],
                "ansible_loop_var": "item"
            }
        ],
        "msg": "All items completed",
        "changed": false
    },
    "_ansible_verbose_always": true,
    "_ansible_no_log": false,
    "changed": false
}
So you can get the desired data using dot notation, for instance item.changed, which has the boolean value false:
- debug:
    msg: "{{ item.changed }}"
  loop: "{{ ec2.results }}"
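If you'd rather not install a custom filter plugin, the nested instances lists from a looped register can also be collected with built-in filters (a sketch; the flatten filter requires Ansible 2.5 or later):

```yaml
- debug:
    msg: "{{ ec2.results | map(attribute='instances') | list | flatten }}"
```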

Ansible: Create new RDS DB from last snapshot of another DB

The promote command does not seem to work on the version of Ansible that I am using.
So I am trying to create a new database as a replica of an existing one and after making it master, delete the source database.
I was trying to do it like this:
1. Make replica
2. Promote replica
3. Delete source database
But now I am thinking of this:
1. Create new database from the source database's last snapshot [as master from the beginning]
2. Delete the source database
How would that playbook go?
My playbook:
- hosts: localhost
  vars:
    source_db_name: "{{ SOURCE_DB }}" # stagingdb
    new_db_name: "{{ NEW_DB }}" # stagingdb2
  tasks:
    - name: Make RDS replica
      local_action:
        module: rds
        region: us-east-1
        command: replicate
        instance_name: "{{ new_db_name }}"
        source_instance: "{{ source_db_name }}"
        wait: yes
        wait_timeout: 900 # wait 15 minutes

    # Notice - not working [Ansible bug]
    - name: Promote RDS replica
      local_action:
        module: rds
        region: us-east-1
        command: promote
        instance_name: "{{ new_db_name }}" # stagingdb2
        backup_retention: 0
        wait: yes
        wait_timeout: 300

    - name: Delete source db
      local_action:
        command: delete
        instance_name: "{{ source_db_name }}"
        region: us-east-1
region: us-east-1
You just need to use the restore command in the RDS module.
Your playbook would then look something like:
- hosts: localhost
  connection: local
  gather_facts: yes
  vars:
    date: "{{ ansible_date_time.year }}-{{ ansible_date_time.month }}-{{ ansible_date_time.day }}-{{ ansible_date_time.hour }}-{{ ansible_date_time.minute }}"
    source_db_name: "{{ SOURCE_DB }}" # stagingdb
    new_db_name: "{{ NEW_DB }}" # stagingdb2
    snapshot_name: "snapshot-{{ source_db_name }}--{{ date }}"
  tasks:
    - name: Take RDS snapshot
      rds:
        command: snapshot
        instance_name: "{{ source_db_name }}"
        snapshot: "{{ snapshot_name }}"
        wait: yes
      register: snapshot_out

    - name: get facts
      rds:
        command: facts
        instance_name: "{{ source_db_name }}"
      register: db_facts

    - name: Restore RDS from snapshot
      rds:
        command: restore
        instance_name: "{{ new_db_name }}"
        snapshot: "{{ snapshot_name }}"
        instance_type: "{{ db_facts.instance.instance_type }}"
        subnet: primary # Unfortunately this isn't returned by db_facts
        wait: yes
        wait_timeout: 1200

    - name: Delete source db
      rds:
        command: delete
        instance_name: "{{ source_db_name }}"
There's a couple of extra tricks in there:
I set connection to local at the start of the play so, when combined with hosts: localhost, all of the tasks will be local tasks.
I build a date-time stamp that looks like YYYY-mm-dd-hh-mm from the Ansible host's own facts (from gather_facts, with the play only targeting localhost). This is then used for the snapshot name to make sure that we create it (if one exists with the same name then Ansible won't create another snapshot - something that could be bad in this case, as it would use an older snapshot before deleting your source database).
I fetch the facts about the RDS instance in a task and use that to set the instance type to be the same as the source database. If you don't want that, then you can define the instance_type directly and remove the whole get facts task.

How to add and mount volumes for EC2 instance with Ansible

I am trying to learn Ansible with all my AWS stuff. So the first task I want to do is creating a basic EC2 instance with mounted volumes.
I wrote the Playbook according to Ansible docs, but it doesn't really work. My Playbook:
# The play operates on the local (Ansible control) machine.
- name: Create a basic EC2 instance v.1.1.0 2015-10-14
  hosts: localhost
  connection: local
  gather_facts: false

  # Vars.
  vars:
    hostname: Test_By_Ansible
    keypair: MyKey
    instance_type: t2.micro
    security_group: my security group
    image: ami-d05e75b8 # Ubuntu Server 14.04 LTS (HVM)
    region: us-east-1 # US East (N. Virginia)
    vpc_subnet_id: subnet-b387e763
    sudo: True
    locale: ru_RU.UTF-8

  # Launch instance. Register the output.
  tasks:
    - name: Launch instance
      ec2:
        key_name: "{{ keypair }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        region: "{{ region }}"
        vpc_subnet_id: "{{ vpc_subnet_id }}"
        assign_public_ip: yes
        wait: true
        wait_timeout: 500
        count: 1 # number of instances to launch
        instance_tags:
          Name: "{{ hostname }}"
          os: Ubuntu
          type: WebService
      register: ec2

    # Create and attach a volumes.
    - name: Create and attach a volumes
      ec2_vol:
        instance: "{{ item.id }}"
        name: my_existing_volume_Name_tag
        volume_size: 1 # in GB
        volume_type: gp2
        device_name: /dev/sdf
        with_items: ec2.instances
      register: ec2_vol

    # Configure mount points.
    - name: Configure mount points - mount device by name
      mount: name=/system src=/dev/sda1 fstype=ext4 opts='defaults nofail 0 2' state=present
      mount: name=/data src=/dev/xvdf fstype=ext4 opts='defaults nofail 0 2' state=present
But this Playbook crashes on the volume mount with the error:
fatal: [localhost] => One or more undefined variables: 'item' is undefined
How can I resolve this?
You seem to have copy/pasted a lot of stuff all at once, and rather than needing a specific bit of information that SO can help you with, you need to go off and learn the basics of Ansible so you can think through all the individual bits that don't match up in this playbook.
Let's look at the specific error that you're hitting - item is undefined. It's triggered here:
# Create and attach a volumes.
- name: Create and attach a volumes
  ec2_vol:
    instance: "{{ item.id }}"
    name: my_existing_volume_Name_tag
    volume_size: 1 # in GB
    volume_type: gp2
    device_name: /dev/sdf
    with_items: ec2.instances
  register: ec2_vol
This task is meant to be looping through every item in a list, and in this case the list is ec2.instances. It isn't, because with_items should be de-indented so it sits level with register.
If you had a list of instances (which you don't, as far as I can see), it'd use the id for each one in that {{ item.id }} line... but then probably throw an error, because I don't think they'd all be allowed to have the same name.
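For reference, the corrected task would look something like this, with with_items de-indented to task level, the variable quoted, and the shared name tag dropped so each volume doesn't claim the same Name (a sketch, untested):

```yaml
# Create and attach one volume per launched instance.
- name: Create and attach a volume
  ec2_vol:
    instance: "{{ item.id }}"
    volume_size: 1 # in GB
    volume_type: gp2
    device_name: /dev/sdf
  with_items: "{{ ec2.instances }}"
  register: ec2_vol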
Go forth and study, and you can figure out this kind of detail.