How do I use Ansible Docker modules to list all containers in a specific Docker network

How can I use the Docker Ansible modules to list all containers in a specific network?
I would like to accomplish this without using Ansible shell commands.
Is this possible?
I found this post which would work if I used shell commands. But again, I don't want to do that. How can I do this with the Docker Ansible modules?

You can use the community.docker.docker_network_info module to inspect the network; the returned information includes a list of containers attached to the network.
For example, this playbook displays a list of containers attached to the network named in the docker_network variable:
- hosts: localhost
  gather_facts: false
  vars:
    docker_network: bridge
  collections:
    - community.docker
  tasks:
    - name: "get network info"
      docker_network_info:
        name: "{{ docker_network }}"
      register: net_info

    - name: "get container info"
      docker_container_info:
        name: "{{ item }}"
      register: container_info
      loop: "{{ net_info.network.Containers.keys() | list }}"

    - debug:
        msg: "{{ item }}"
      loop: "{{ container_info.results | json_query('[].container.Name') }}"

Related

Getting correct subnet_id from Ansible

I've got a list with different names, like:
vars:
  images:
    - foo
    - bar
Now I want to check out repositories and afterwards build Docker images only when the source has changed.
Since getting the source and building the image is the same for all items except the name, I created the tasks with with_items: images
and tried to register the result with:
register: "{{ item }}"
and also tried
register: "src_{{ item }}"
Then I tried the following condition:
when: "{{ item }}|changed"
and
when: "{{ src_item }}|changed"
This always results in fatal: [piggy] => |changed expects a dictionary
So how can I properly save the results of the operations in variable names based on the list I iterate over?
Update: I would like to have something like this:
- hosts: all
  vars:
    images:
      - foo
      - bar
  tasks:
    - name: get src
      git:
        repo: git#foobar.com/repo.git
        dest: /tmp/repo
      register: "{{ item }}_src"
      with_items: images

    - name: build image
      shell: "docker build -t repo ."
      args:
        chdir: /tmp/repo
      when: "{{ item }}_src"|changed
      register: "{{ item }}_image"
      with_items: images

    - name: push image
      shell: "docker push repo"
      when: "{{ item }}_image"|changed
      with_items: images
So how can I properly save the results of the operations in variable names based on the list I iterate over?
You don't need to. Variables registered for a task that has with_items have a different format: they contain the results for all items.
- hosts: localhost
  gather_facts: no
  vars:
    images:
      - foo
      - bar
  tasks:
    - shell: "echo result-{{ item }}"
      register: r
      with_items: "{{ images }}"

    - debug: var=r

    - debug: msg="item.item={{ item.item }}, item.stdout={{ item.stdout }}, item.changed={{ item.changed }}"
      with_items: "{{ r.results }}"

    - debug: msg="Gets printed only if this item changed - {{ item }}"
      when: item.changed == true
      with_items: "{{ r.results }}"

Ansible Register Instances and Create ELBs

I'm trying to create an Ansible playbook that dynamically finds any instances matching AWS tags, creates an ELB, and then adds the instances to it. So far I have been successful in creating these for one set of tags and one ELB at a time.
I'm trying to figure out the best way to have this run for any number of tags without specifying my variables function and release up front.
For example, the function and release variables would be defined in a vars file, something like this:
function:
  - api
  - webapp
  - mysql
release:
  - prod
  - stage
  - dev
My playbook looks like this. I'm struggling to find a way to loop the entire playbook through a variable list. If I add a with_items to the first task, it loops through that entire task before moving on to the next one, which does not accomplish what I want.
- ec2_remote_facts:
    filters:
      instance-state-name: running
      "tag:Function": "{{ function }}"
      "tag:Release": "{{ release }}"
    region: us-east-1
  register: ec2instance

- local_action:
    module: ec2_elb_lb
    name: "{{ function }}-{{ release }}"
    state: present
    instance_ids: "{{ item.id }}"
    purge_instance_ids: true
    region: us-east-1
    subnets:
      - subnet-1
      - subnet-2
    listeners:
      - protocol: https
        load_balancer_port: 443
        instance_port: 80
        ssl_certificate_id: "{{ ssl_certificate_id }}"
    health_check:
      ping_protocol: http
      ping_port: 80
      ping_path: "/status"
      response_timeout: 3
      interval: 5
      unhealthy_threshold: 2
      healthy_threshold: 2
    access_logs:
      interval: 5
      s3_location: "{{ function }}-{{ release }}-elb"
      s3_prefix: "logs"
  with_items: ec2instance.instances
The easiest thing I can think of is a parameterized include.
Make a list of tasks for a single shot, e.g. elb_from_tagged_instances.yml.
Then make main.yml with the include in a loop:
- include: elb_from_tagged_instances.yml function={{ item[0] }} release={{ item[1] }}
  with_together:
    - "{{ function }}"
    - "{{ release }}"
And if you don't need to cross-intersect functions and releases somehow, I'd replace the two lists function/release with one list of dicts and iterate over it, as sketched below.
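A minimal sketch of that single-list-of-dicts variant (the services variable name and its keys are assumptions):
vars:
  services:
    - { function: api, release: prod }
    - { function: webapp, release: stage }
tasks:
  - include: elb_from_tagged_instances.yml function={{ item.function }} release={{ item.release }}
    with_items: "{{ services }}"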
UPDATE: Example with a nested loop to get all 9 pairs:
---
- hosts: localhost
  connection: local
  vars:
    functions:
      - api
      - webapp
      - mysql
    releases:
      - prod
      - stage
      - dev
  tasks:
    - include: include_z1.yml function="{{ item[0] }}" release="{{ item[1] }}"
      with_nested:
        - "{{ functions }}"
        - "{{ releases }}"
Also note that you should use different names for the list and the include parameter (functions and function in my example) to avoid recursive templating; if both were named function, the parameter function={{ item[0] }} would define function in terms of itself.

Installing Apache through Ansible

I am attempting to install Apache on an EC2 instance through Ansible. My playbook looks like this:
# Configure and deploy Apache
- hosts: localhost
  connection: local
  remote_user: ec2-user
  gather_facts: false
  roles:
    - ec2_apache
    - apache
The 'ec2_apache' role provisions an EC2 instance, and the first task within apache/main.yml looks like this:
- name: confirm using the latest Apache server
  become: yes
  become_method: sudo
  yum:
    name: httpd
    state: latest
However, I am getting the following error:
"module_stderr": "sudo: a password is required\n"
I did take a look at How to switch a user per task or set of tasks?, but it did not seem to resolve my problem.
Because the configuration of the EC2 instance is in one role and the installation of Apache is in another, did I hork up the security in some way?
The issue you've got is that the playbook that runs both roles targets localhost, so your apache role is trying to run sudo yum install httpd locally rather than on the target EC2 instance.
As the ec2 module docs example shows, you need to use the add_host module to add your new EC2 instance(s) to a group that you can then target with a further play.
So your playbook might look something like this:
# Configure and deploy Apache
- name: provision instance for Apache
  hosts: localhost
  connection: local
  remote_user: ec2-user
  gather_facts: false
  roles:
    - ec2_apache

- name: install Apache
  hosts: launched
  remote_user: ec2-user
  roles:
    - apache
And then, as per the example in the ec2 module docs, just do something like this in your ec2_apache role:
- name: Launch instance
  ec2:
    key_name: "{{ keypair }}"
    group: "{{ security_group }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image }}"
    wait: true
    region: "{{ region }}"
    vpc_subnet_id: subnet-29e63245
    assign_public_ip: yes
  register: ec2

- name: Add new instance to host group
  add_host: hostname={{ item.public_ip }} groupname=launched
  with_items: ec2.instances

- name: Wait for SSH to come up
  wait_for: host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
  with_items: ec2.instances
As an aside, you can quickly see that your ec2_apache role is actually pretty generic: you could turn it into a generic ec2_provision role that all sorts of other playbooks could use, helping you re-use your code. A sketch of that follows.
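A minimal sketch of what such a generic role could look like, reusing the tasks from the docs example above (the role name ec2_provision and the target_group variable are assumptions, added so callers can pick the group name):
# roles/ec2_provision/tasks/main.yml (hypothetical)
- name: Launch instance
  ec2:
    key_name: "{{ keypair }}"
    group: "{{ security_group }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image }}"
    wait: true
    region: "{{ region }}"
  register: ec2

- name: Add new instance to host group
  add_host: hostname={{ item.public_ip }} groupname={{ target_group | default('launched') }}
  with_items: ec2.instances

- name: Wait for SSH to come up
  wait_for: host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
  with_items: ec2.instances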
This is what I did to install Apache.
Based on ydaetskcoR's suggestion, all I added was connection: local to fix the following problem:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password,keyboard-interactive).", "unreachable": true}
See the code below:
---
- name: Install Apache and other packages
  hosts: localhost
  become: yes
  connection: local
  gather_facts: false
  tasks:
    - name: Install a list of packages with a list variable
      yum:
        name: "{{ packages }}"
        state: latest
      vars:
        packages:
          - httpd
          - httpd-tools
          - nginx
      register: result
You also have to run your playbook as follows (-K stands for --ask-become-pass):
ansible-playbook -i hosts.ini startapache.yml -K -vvv
Are you sure you are using Ansible correctly and are you providing a password for sudo on the remote host?
Just use --ask-become-pass when you execute the playbook. You should be prompted for the password.

Using Ansible to update dynamic variables (IPs) in Cassandra files while launching multiple EC2 instances in AWS at a time

I am spinning up multiple EC2 instances in AWS and installing Cassandra on those instances.
I got stuck at updating the IP addresses of those instances dynamically in the Cassandra files.
I tried using the set_fact module to pass variables between different plays, but it updates all the files with the IP address of only the last of the three EC2 instances built.
My use case is to update each file with the IP address of its own EC2 instance.
Here is my playbook, which consists of two plays:
#### Play1 - to spin 3 ec2 instances in AWS ####
- name: Play1
  hosts: local
  connection: local
  gather_facts: True
  vars:
    key_location: "path to pem file location"
    server_name: dbservers
    private_ip: item.private_ip
  tasks:
    - name: create ec2 instance
      ec2:
        key_name: "{{ my_key_name }}"
        region: us-east-1
        instance_type: t2.micro
        image: ami-8fcee4e5
        wait: yes
        group: "{{ my_security_group_name }}"
        count: 3
        vpc_subnet_id: "{{ my_subnet_id }}"
        instance_tags:
          Name: devops-i-cassandra1-d-1c-common
          Stack: Ansible
          Owner: devops
      register: ec2

    # to capture the IP addresses of the ec2 instances, but only the last IP is being captured
    - name: set facts
      set_fact:
        one_fact: "{{ item.private_ip }}"
      with_items: ec2.instances

    - name: debugging private ip value
      debug: var=one_fact

    - name: Add the newly created EC2 instance(s) to the dbservers group in the inventory file
      local_action: lineinfile
                    dest="/home/admin/hosts"
                    regexp={{ item.private_ip }}
                    insertafter="[dbservers]"
                    line={{ item.private_ip }}
      with_items: ec2.instances

    - name: Create Host Group to login dynamically to EC2 Instance
      add_host:
        hostname={{ item.private_ip }}
        groupname={{ server_name }}
        ansible_ssh_private_key_file={{ key_location }}
        ansible_ssh_user=ec2-user
        ec2_id={{ item.id }}
      with_items: ec2.instances

    - name: Wait for SSH to come up
      local_action: wait_for
                    host={{ item.private_ip }}
                    port=22
                    delay=60
                    timeout=360
                    state=started
      with_items: ec2.instances

#### Play2 - Installing and Configuring Cassandra on the EC2 Instances ####
- name: Play2
  hosts: dbservers
  remote_user: ec2-user
  sudo: yes
  vars:
    private_ip: "{{ hostvars.localhost.one_fact }}"
  vars_files:
    - ["/home/admin/vars/var.yml"]
  tasks:
    - name: invoke a shell script to install cassandra
      script: /home/admin/cassandra.sh creates=/home/ec2-user/cassandra.sh
    - name: configure cassandra.yaml file
      template: src=/home/admin/cassandra.yaml dest=/etc/dse/cassandra/cassandra.yaml owner=ec2-user group=ec2-user mode=755
Thanks in advance
With Ansible 2.0+, you can refresh the dynamic inventory in the middle of the playbook with a task like this:
- meta: refresh_inventory
To extend this a bit: if you are having problems with the cache in your playbook, you can use it like this:
- name: Refresh the ec2.py cache
  shell: "./inventory/ec2.py --refresh-cache"
  changed_when: no

- name: Refresh inventory
  meta: refresh_inventory
where ./inventory is the path to your dynamic inventory script; please adjust it accordingly.
During the creation of your EC2 instances you added tags to them, which you can now use with the dynamic inventory to configure these instances. Your second play will be like this:
- name: Play2
  hosts: tag_Name_devops-i-cassandra1-d-1c-common
  remote_user: ec2-user
  sudo: yes
  tasks:
    - name: ---------
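As for getting each instance's own IP into its Cassandra files: since Play1's add_host registers every host under its private IP, each host in Play2 can reference its own address through inventory_hostname instead of the shared one_fact. A sketch of that variant (not from the original answer):
- name: Play2
  hosts: dbservers
  remote_user: ec2-user
  sudo: yes
  vars:
    # each host's inventory name is its own private IP,
    # because add_host used item.private_ip as the hostname
    private_ip: "{{ inventory_hostname }}"
  tasks:
    - name: configure cassandra.yaml file
      template: src=/home/admin/cassandra.yaml dest=/etc/dse/cassandra/cassandra.yaml owner=ec2-user group=ec2-user mode=755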
Hope this will help you.

Ansible EC2 - Perform operation on set of instances

This is probably obvious, but how do you execute an operation against a set of servers in Ansible (this is with the EC2 plugin)?
I can create my instances:
---
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Launch instances
      local_action:
        module: ec2
        region: us-west-1
        group: cassandra
        keypair: cassandra
        instance_type: t2.micro
        image: ami-4b6f650e
        count: 1
        wait: yes
      register: cass_ec2
And I can tag the instances:
- name: Add tag to instances
  local_action: ec2_tag resource={{ item.id }} region=us-west-1 state=present
  with_items: cass_ec2.instances
  args:
    tags:
      Name: cassandra
Now, let's say I want to run an operation on each server:
# This does not work - it runs the command on localhost
- name: TEST - touch file
  file: path=/test.txt state=touch
  with_items: cass_ec2.instances
How to run the command against the remote instances just created?
For running against just the newly created servers, I use a temporary group name and add a second play in the same playbook, something like the following:
- hosts: localhost
  tasks:
    - name: run your ec2 create a server code here
      ...
      register: cass_ec2

    - name: add host to inventory
      add_host: name={{ item.private_ip }} groups=newinstances
      with_items: cass_ec2.instances

- hosts: newinstances
  tasks:
    - name: do some fun stuff on the new instances here
Alternatively, if you have consistently tagged all your servers (with multiple tags if you also have to differentiate between production and development), you are using ec2.py as the dynamic inventory script, and you are running this against all the servers in a second playbook run, then you can easily do something like the following:
- hosts: tag_Name_cassandra
  tasks:
    - name: run your cassandra specific tasks here
Personally I use a mode tag (tag_mode_production vs tag_mode_development) as well, and force Ansible to only run on servers of a specific type (in your case Name=cassandra) in a specific mode (development). That looks like the following:
- hosts: tag_Name_cassandra:&tag_mode_development
Just make sure you specify the tag name and value correctly - it is case sensitive...
Please use the following playbook pattern to perform both operations (launch EC2 instance(s) and run certain tasks on them) in a single playbook.
Here is a working playbook; it assumes that you have the hosts file in the same directory from which you are running the playbook:
---
- name: Provision an EC2 Instance
  hosts: local
  connection: local
  gather_facts: False
  tags: provisioning
  # Necessary variables for creating/provisioning the EC2 instance
  vars:
    instance_type: t1.micro
    security_group: cassandra
    image: ami-4b6f650e
    region: us-west-1
    keypair: cassandra
    count: 1
  # Tasks that will be used to launch/create the EC2 instance
  tasks:
    - name: Launch the new EC2 Instance
      local_action: ec2
                    group={{ security_group }}
                    instance_type={{ instance_type }}
                    image={{ image }}
                    wait=true
                    region={{ region }}
                    keypair={{ keypair }}
                    count={{ count }}
      register: ec2

    - name: Add the newly created EC2 instance(s) to the local host group (located inside the directory)
      local_action: lineinfile
                    dest="./hosts"
                    regexp={{ item.public_ip }}
                    insertafter="[cassandra]"
                    line={{ item.public_ip }}
      with_items: ec2.instances

    - name: Wait for SSH to come up
      local_action: wait_for
                    host={{ item.public_ip }}
                    port=22
                    state=started
      with_items: ec2.instances

    - name: Add tag to Instance(s)
      local_action: ec2_tag resource={{ item.id }} region={{ region }} state=present
      with_items: ec2.instances
      args:
        tags:
          Name: cassandra

    - name: SSH to the EC2 Instance(s)
      add_host: hostname={{ item.public_ip }} groupname=cassandra
      with_items: ec2.instances

- name: Install these things on Newly created EC2 Instance(s)
  hosts: cassandra
  sudo: True
  remote_user: ubuntu  # Please change the username here (e.g. root or ec2-user), as I am supposing that you are launching an Ubuntu instance
  gather_facts: True
  # Run these tasks
  tasks:
    - name: TEST - touch file
      file: path=/test.txt state=touch
Your hosts file should look like this:
[local]
localhost

[cassandra]
Now you can run this playbook like this:
ansible-playbook -i hosts ec2_launch.yml