Getting correct subnet_id from Ansible [duplicate] - amazon-web-services

I've got a list with different names, like:
vars:
  images:
    - foo
    - bar
Now, I want to check out repositories and afterwards build Docker images only when the source has changed.
Since getting the source and building the image are the same for all items except the name, I created the tasks with with_items: images
and tried to register the result with:
register: "{{ item }}"
and also tried
register: "src_{{ item }}"
Then I tried the following condition
when: "{{ item }}|changed"
and
when: "{{ src_item }}|changed"
This always results in fatal: [piggy] => |changed expects a dictionary
So how can I properly save the results of the operations in variable names based on the list I iterate over?
Update: I would like to have something like this:
- hosts: all
  vars:
    images:
      - foo
      - bar
  tasks:
    - name: get src
      git:
        repo: git#foobar.com/repo.git
        dest: /tmp/repo
      register: "{{ item }}_src"
      with_items: images
    - name: build image
      shell: "docker build -t repo ."
      args:
        chdir: /tmp/repo
      when: "{{ item }}_src"|changed
      register: "{{ item }}_image"
      with_items: images
    - name: push image
      shell: "docker push repo"
      when: "{{ item }}_image"|changed
      with_items: images

So how can I properly save the results of the operations in variable names based on the list I iterate over?
You don't need to. Variables registered for a task that uses with_items have a different format: they contain the results for all items.
- hosts: localhost
  gather_facts: no
  vars:
    images:
      - foo
      - bar
  tasks:
    - shell: "echo result-{{ item }}"
      register: r
      with_items: "{{ images }}"
    - debug: var=r
    - debug: msg="item.item={{ item.item }}, item.stdout={{ item.stdout }}, item.changed={{ item.changed }}"
      with_items: "{{ r.results }}"
    - debug: msg="Gets printed only if this item changed - {{ item }}"
      when: item.changed == true
      with_items: "{{ r.results }}"


How do I use ansible docker modules to list all containers in a specific docker network

How can I use the Docker Ansible modules to list all containers in a specific network?
I would like to accomplish this without using Ansible shell commands.
Is this possible?
I found this post, which would work if I used shell commands. But again, I don't want to do that. How can I do this with the Docker Ansible modules?
You can use the community.docker.docker_network_info module to inspect the network; the returned information includes a list of containers attached to the network.
For example, this playbook will display a list of containers attached to the network named in the docker_network variable:
- hosts: localhost
  gather_facts: false
  vars:
    docker_network: bridge
  collections:
    - community.docker
  tasks:
    - name: "get network info"
      docker_network_info:
        name: "{{ docker_network }}"
      register: net_info
    - name: "get container info"
      docker_container_info:
        name: "{{ item }}"
      register: container_info
      loop: "{{ net_info.network.Containers.keys() }}"
    - debug:
        msg: "{{ item }}"
      loop: "{{ container_info.results|json_query('[].container.Name') }}"

ansible list files in a directory

Can someone explain to me why this doesn't work? I want to get a list of files within a directory and use it as an input for the loop.
---
tasks:
  - set_fact:
      capabilities: []
  - name: find CE_Base capabilities
    find:
      paths: /opt/netsec/ansible/orchestration/capabilities/CE_BASE
      patterns: '*.yml'
    register: CE_BASE_capabilities
  - name: debug_files
    debug:
      msg: "{{ item.path }}"
    with_items: "{{ CE_BASE_capabilities.files }}"
  - set_fact:
      thispath: "{{ item.path }}"
      capabilities: "{{ capabilities + [ thispath ] }}"
    with_items: "{{ CE_BASE_capabilities.files }}"
  - name: Include CE_BASE
    include_tasks: /opt/netsec/ansible/orchestration/process_capabilities_CE_BASE.yml
    loop: "{{ capabilities }}"
Edit:
This code is attempting to create a list called capabilities, which contains a list of files in a particular directory.
When I ran this code without trying to get the files automatically, it looked like this:
- hosts: localhost
  vars:
    CE_BASE_capabilities:
      - '/opt/netsec/ansible/orchestration/capabilities/CE_BASE/CE_BASE_1.yml'
      - '/opt/netsec/ansible/orchestration/capabilities/CE_BASE/CE_BASE_2.yml'
  tasks:
    - name: Include CE_BASE
      include_tasks: /opt/netsec/ansible/orchestration/process_capabilities_CE_BASE.yml
      loop: "{{ CE_BASE_capabilities }}"
Don't define thispath as a fact but as a local var on the set_fact task. Besides that, you don't need to initialize capabilities if you use the default filter.
- vars:
    thispath: "{{ item.path }}"
  set_fact:
    capabilities: "{{ capabilities | default([]) + [ thispath ] }}"
  with_items: "{{ CE_BASE_capabilities.files }}"
Moreover, you don't even need to loop. You can extract the info directly from the existing result:
- set_fact:
    capabilities: "{{ CE_BASE_capabilities.files | map(attribute='path') | list }}"

ansible ec2_instance_facts filter by "tag:Name" does not filter by instance Name

I want to run ec2_instance_facts to find an instance by name. However, I must be doing something wrong, because I cannot get the filter to actually work. The following returns everything in the region set in AWS_REGION:
- ec2_instance_facts:
    filters:
      "tag:Name": "{{ myname }}"
  register: ec2_metadata
- debug: msg="{{ ec2_metadata.instances }}"
The answer is to use the ec2_remote_facts module, not the ec2_instance_facts module.
- ec2_remote_facts:
    filters:
      "tag:Name": "{{ myname }}"
  register: ec2_metadata
- debug: msg="{{ ec2_metadata.instances }}"
Based on the documentation, ec2_remote_facts is marked as DEPRECATED from Ansible 2.8 in favor of ec2_instance_facts.
This is working well for me:
- name: Get instances list
  ec2_instance_facts:
    region: "{{ region }}"
    filters:
      "tag:Name": "{{ myname }}"
  register: ec2_list
- debug: msg="{{ ec2_list.instances }}"
Maybe the filter is not being applied? Can you go through the results in the object?
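On newer Ansible releases the facts modules were renamed; a minimal sketch of the same lookup using ec2_instance_info from the amazon.aws collection (region and tag value are placeholders, as above):
- name: Find instances by Name tag
  amazon.aws.ec2_instance_info:
    region: "{{ region }}"
    filters:
      "tag:Name": "{{ myname }}"
  register: ec2_list
- debug:
    msg: "{{ ec2_list.instances | map(attribute='instance_id') | list }}"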

How to create the Count tags name with sequential numbers using ansible script

I create 2 Windows AWS machines using exact_count set to 2.
It creates both AWS machines with the same name.
For example:
1) itg-Web-windows
2) itg-web-windows
I want the machine Name tags to be:
1) itg-windows-web-1
2) itg-windows-web-2
Below is my code:
- name: ensure instances are running
  ec2:
    region: "{{ region }}"
    image: "{{ image_id }}"
    group_id: sg-1234
    vpc_subnet_id: subnet-5678
    instance_tags:
      Name: "itg-windows-web"
    exact_count: 2
    count_tag:
      Name: "itg-windows-web"
  register: ec2_result
This will create servers with name tags web_server_1, web_server_3 and web_server_5:
- name: create instances
  ec2:
    image: <your_ami>
    instance_type: t2.micro
    key_name: <your_ssh_key>
    region: us-east-1
    vpc_subnet_id: <your_subnet_id>
    count_tag:
      Name: "web_server_{{ item }}"
    exact_count: 1
    instance_tags:
      Name: "web_server_{{ item }}"
  with_items: ['1', '3', '5']
Use the Ansible playbook below:
---
- name: A sample template
  hosts: local
  connection: local
  gather_facts: False
  tasks:
    - name: create instance
      ec2:
        keypair: test-ssh-key
        instance_type: t2.micro
        image: ami-abcd1234
        wait: yes
        instance_tags:
          ec2type: web
        exact_count: "{{ count }}"
        count_tag:
          ec2type: web
        region: us-east-1
        vpc_subnet_id: subnet-1234abcd
      register: ec2
    - name: generate sequence id for tagging
      debug: msg="{{ item }}"
      no_log: true
      with_sequence: start="{{ startindex }}" end="{{ count }}" format=%02d
      register: sequence
    - name: tag instances
      no_log: true
      ec2_tag:
        region: us-east-1
        resource: "{{ item.0.id }}"
        tags:
          Name: "itg-windows-web-{{ item.1.msg }}"
      with_together:
        - "{{ ec2.instances }}"
        - "{{ sequence.results }}"
Command:
ansible-playbook -i ./hosts ec2-basic.yml --extra-vars "startindex=1 count=2"
Invocation-1:
ansible-playbook -i ./hosts ec2-basic.yml --extra-vars "startindex=1 count=2"
This will create 2 instances and attach the Name tags itg-windows-web-01 and itg-windows-web-02 to them.
Invocation-2:
ansible-playbook -i ./hosts ec2-basic.yml --extra-vars "startindex=3 count=4"
This will add 2 more instances and attach the Name tags itg-windows-web-03 and itg-windows-web-04 to them.
All these instances are grouped by ec2type tag.
How it works:
Use a custom tag other than the Name tag for the count_tag attribute. If you use the Name tag, the same tag value is assigned to every instance that is created (which defeats your purpose). In the script above, I have used ec2type: web as both instance_tags and count_tag, so Ansible uses this tag to determine how many nodes should run based on that specific tag criterion.
The count value which you pass is assigned to exact_count in the template. You also get further control by passing startindex, which sets the start of the sequence.
with_sequence generates a sequence based on your input.
with_together loops over parallel sets of data.
Using these loops, we append 01, 02 and so on to the itg-windows-web text and assign it to the instance Name tag.
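For reference, a minimal sketch of how the zero-padded suffixes in the Name tags are produced by with_sequence, using the same start, end, and format as the playbook above:
- name: show the generated suffixes
  debug:
    msg: "itg-windows-web-{{ item }}"
  with_sequence: start=1 end=2 format=%02d
# prints itg-windows-web-01 and itg-windows-web-02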

Ansible running one command out of many locally in a loop

Background:
I'm trying to create a loop that iterates over a hash read from a qa.yml file. For every user in the list it should find a file on the local server (the public key); once the file is found, it creates the user on the remote machine and copies the public key into authorized_keys on the remote machine.
I'm trying to implement it as an iteration, so that to update keys or add more users' keys I only need to change the .yml list and place the public key file in the proper place. However, I can't get local_action + find working.
---
- hosts: tag_Ansible_CLOUD_QA
  vars_files:
    - ../users/qa.yml
    - ../users/groups.yml
  remote_user: ec2-user
  sudo: yes
  tasks:
    - name: Create groups
      group: name="{{ item.key }}" state=present
      with_dict: "{{ user_groups }}"
    - name: Create remote users QA
      user: name="{{ item.key }}" comment="user" group=users groups="qa"
      with_dict: "{{ qa_users }}"
    - name: Erase previous authorized keys QA
      shell: rm -rf /home/"{{ item.key }}"/.ssh/authorized_keys
      with_dict: "{{ qa_users }}"
    - name: Add public keys to remote users QA
      local_action:
        find: paths="{{'/opt/pubkeys/2016/q2/'}}" patterns="{{ item.key }}"
      register: result
      authorized_key: user="{{ item.key }}" key="{{ lookup('file', result) }}"
      with_dict: "{{ qa_users }}"
Hash:
qa_users:
  user1:
    name: User 1
  user2:
    name: User 2
You're cramming two tasks into a single task item in that final task, so Ansible isn't going to like that.
Splitting it into two separate tasks should work:
- name: Find keys
  local_action: find paths="{{'/opt/pubkeys/2016/q2/'}}" patterns="{{ item.key }}"
  register: result
  with_dict: "{{ qa_users }}"
- name: Add public keys to remote users QA
  authorized_key: user="{{ item.0.key }}" key="{{ lookup('file', item.1.stdout) }}"
  with_together:
    - "{{ qa_users }}"
    - result
The second task then loops over the dictionary and the result from the previous task using a with_together loop which advances through the two data structures in step.
However, this looks like a less than ideal way to solve your problem.
If you look at what your tasks here are trying to do you could replace it more simply with something like this:
- name: Add public keys to remote users QA
  authorized_key: user="{{ item.key }}" key="{{ lookup('file', '/opt/pubkeys/2016/q2/' + item.key) }}"
  with_dict: "{{ qa_users }}"
You can also remove the third task, where you cleared down the user's previous keys, by simply using the exclusive parameter of the authorized_key module:
- name: Add public keys to remote users QA
  authorized_key: user="{{ item.key }}" key="{{ lookup('file', '/opt/pubkeys/2016/q2/' + item.key) }}" exclusive=yes
  with_dict: "{{ qa_users }}"
Also, it might be that you simplified things in an odd way for the question, but the data structures you are using are less than ideal right now, so I'd take a look at that if that's really what they look like.
Thank you @ydaetskcoR for sharing the right approach. The following solution did the dynamic public key distribution for me, with key files residing on the local machine and provisioned onto remote EC2 machines:
---
- hosts: tag_Ansible_CLOUD_QA
  vars_files:
    - ../users/groups.yml
    - ../users/qa.yml
  remote_user: ec2-user
  become: yes
  become_method: sudo
  tasks:
    - name: Find user matching key files
      become_user: jenkins
      local_action: find paths="{{'/opt/pubkeys/2016/q1/'}}" patterns="{{ '*' + item.key + '*' }}"
      register: pub_key_files
      with_dict: "{{ qa_users }}"
    - name: Create groups
      group: name="{{ item.key }}" state=present
      with_dict: "{{ user_groups }}"
    - name: Allow test users to have passwordless sudo
      lineinfile: "dest=/etc/sudoers state=present regexp='^%{{ item.key }} ALL=.*ALL.* NOPASSWD: ALL' line='%{{ item.key }} ALL=(ALL) NOPASSWD: ALL'"
      with_dict: "{{ user_groups }}"
    - name: Create remote users qa
      user: name="{{ item.key }}" comment="user" group=users groups="qa"
      with_dict: "{{ qa_users }}"
    - name: Add public keys to remote users qa
      #debug: "msg={{ 'User:' + item.item.key + ' KeyFile:' + item.files.0.path }}"
      authorized_key: user="{{ item.item.key }}" key="{{ lookup('file', item.files.0.path) }}" exclusive=yes
      with_items: "{{ pub_key_files.results }}"
This is the command line to run it with a dynamic inventory based on EC2 tags:
ansible-playbook -i inventory/ec2.py --private-key <path to your key file> --extra-vars '{"QUATER":"q1"}' credentials/distribute-public-key.yml
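For comparison, a minimal sketch of the recommended authorized_key task written with the newer loop syntax, assuming the same /opt/pubkeys/2016/q2/ layout where each key file is named after the user:
- name: Add public keys to remote users QA
  authorized_key:
    user: "{{ item.key }}"
    key: "{{ lookup('file', '/opt/pubkeys/2016/q2/' + item.key) }}"
    exclusive: yes
  loop: "{{ qa_users | dict2items }}"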