I'm currently trying to further automate our VM deployments by not having to hard-code the IP address in the variables file. I found the dig lookup plugin, but I feel I'm going about this the wrong way. For example, here is the variables file that is read on creation for the details:
# VMware Launch Variables
# If this is a test deployment you must ensure the vm is terminated after use.
vmname: agent5
esxi_datacenter: Datacenter
esxi_cluster: Cluster
esxi_datastore: ds1 # Do not change value.
esxi_template: template-v2
esxi_folder: agents # Folder must be pre-created
# Static IP Addresses
esxi_static_ip: "{{ lookup('dig', '{{ vmname }}.example.com.') }}"
esxi_netmask: 255.255.252.0
esxi_gateway: 10.0.0.1
What I was hoping for was to keep just the esxi_static_ip variable, with its value pulled on the fly from a dig lookup. In its current state, however, this does not work.
What happens is that either the VM launches without an IPv4 address, or, more often, creation fails with the following error:
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "Failed to create a virtual machine : A specified parameter was not correct: spec.nicSettingMap.adapter.ip.ipAddress"}
I get the IP and pass it along, and this works when I hard-code esxi_static_ip: in my vmware-lanch-vars.yml file. However, if I use the lookup (including the examples above), it fails.
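Concretely, the two variants behave like this (the hard-coded address below is a made-up placeholder):
esxi_static_ip: 10.0.1.23                                            # works
esxi_static_ip: "{{ lookup('dig', '{{ vmname }}.example.com.') }}"   # fails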
The newvm is registered when I run my vmware_guest playbook.
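For reference, that task is roughly the following; this is only a sketch, with the module parameters assumed from the vars file above rather than copied from my real playbook:
- name: Launch VM from template
  vmware_guest:
    name: "{{ vmname }}"
    datacenter: "{{ esxi_datacenter }}"
    cluster: "{{ esxi_cluster }}"
    datastore: "{{ esxi_datastore }}"
    template: "{{ esxi_template }}"
    folder: "{{ esxi_folder }}"
    networks:
      - name: Network
        type: static
        ip: "{{ esxi_static_ip }}"
        netmask: "{{ esxi_netmask }}"
        gateway: "{{ esxi_gateway }}"
    wait_for_ip_address: yes
  register: newvm
The follow-on tasks that consume newvm are: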
- name: Make virtual machine IP persistent
  set_fact:
    newvm_ip_address: '{{ newvm.instance.ipv4 }}'

- name: Add host to in-memory inventory
  add_host:
    hostname: "{{ newvm_ip_address }}"
    groups: just_created
    newvm_ip_address: "{{ newvm.instance.ipv4 }}"
When I run with -vvvv I can see no IP is being attached:
"networks": [
{
"device_type": "vmxnet3",
"gateway": "0.0.0.01",
"ip": "",
"name": "Network",
"netmask": "255.255.252.0",
"type": "static"
}
],
UPDATE 3
When I create a simple playbook, the lookup works; it only fails when I put it into my regular flow. The following works:
---
- hosts: localhost
  vars:
    vmname: "apim-sb-ng1-agent2"
    vm_dig_fqdn: "{{ vmname }}.example.com."
    esxi_static_ip: "{{ lookup('dig', vm_dig_fqdn) }}"
  tasks:
    - debug: msg="{{ esxi_static_ip }}"
I am not sure this is the first problem you are facing (see my comment above), but your Jinja2 template expression is wrong.
You cannot use Jinja2 expression expansion while already inside a Jinja2 expression expansion.
In this case, you have to concatenate your variable and string with the + operator:
esxi_static_ip: "{{ lookup('dig', vmname + '.example.com.') }}"
If you prefer to use Jinja2 expansion everywhere, you can split this into separate variables, e.g.:
vm_dig_fqdn: "{{ vmname }}.example.com."
esxi_static_ip: "{{ lookup('dig', vm_dig_fqdn) }}"
How can I use docker ansible modules to list all containers in a specific network?
I would like to accomplish this without using ansible shell commands.
Is this possible?
I found this post, which would work if I used shell commands. But again, I don't want to do that. How can I do this with the Docker Ansible modules?
You can use the community.docker.docker_network_info module to inspect the network; the returned information includes a list of containers attached to the network.
For example, this playbook will display a list of containers attached to the network named in the docker_network variable:
- hosts: localhost
  gather_facts: false
  vars:
    docker_network: bridge
  collections:
    - community.docker
  tasks:
    - name: "get network info"
      docker_network_info:
        name: "{{ docker_network }}"
      register: net_info

    - name: "get container info"
      docker_container_info:
        name: "{{ item }}"
      register: container_info
      loop: "{{ net_info.network.Containers.keys() }}"

    - debug:
        msg: "{{ item }}"
      loop: "{{ container_info.results|json_query('[].container.Name') }}"
On my local machine I set the following environment vars:
export AWS_ACCESS_KEY='xxxx'
export AWS_SECRET_KEY='xxxx'
export AWS_REGION='us-east-1'
then in a playbook I put this:
...
tasks:
- name: Get some secrets
vars:
db_password: "{{ lookup('amazon.aws.aws_secret', 'DB_PASSWORD') }}"
debug:
msg: "{{ db_password }}"
...
When running the playbook the connection to AWS secrets works just fine, the necessary AWS variables are taken from the environment and I get the proper value in db_password.
When I'm trying to do the same in AWX, I set the above three variables in the section Settings > Job Settings > Extra Environment Variables:
{
  "AWS_ACCESS_KEY": "xxx",
  "AWS_SECRET_KEY": "xxx",
  "AWS_REGION": "us-east-1"
}
Now, when I run a playbook from AWX containing the above code, "{{ lookup('amazon.aws.aws_secret', 'DB_PASSWORD') }}" fails with an error saying that I need to specify a region. If I set the region manually, as in "{{ lookup('amazon.aws.aws_secret', 'DB_PASSWORD', region='us-east-1') }}", I instead get an error that AWX can't find the credentials.
So, for some reason these three variables are not read from the extra environment variables.
To make it work I had to write the following code in the playbook:
region: "{{ lookup('env', 'AWS_REGION') }}"
aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY') }}"
aws_secret_key: "{{ lookup('env', 'AWS_SECRET_KEY') }}"
db_password: "{{ lookup('amazon.aws.aws_secret', 'DB_PASSWORD', aws_access_key=aws_access_key, aws_secret_key=aws_secret_key, region=region) }}"
But I don't like this solution; I would prefer not to set those three vars explicitly in the lookup and instead have AWX take the values from the extra environment variables. Is there any way to achieve this?
I have used Ansible to create 1 AWS EC2 instance using the examples in the Ansible ec2 documentation. I can successfully create the instance with a tag. Then I temporarily add it to my local inventory group using add_host.
After doing this, I am having trouble when I try to configure the newly created instance. In my Ansible play, I would like to specify the instance by its tag name. eg. hosts: <tag_name_here>, but I am getting an error.
Here is what I have done so far:
My directory layout is
inventory/
staging/
hosts
group_vars/
all/
all.yml
site.yml
My inventory/staging/hosts file is
[local]
localhost ansible_connection=local ansible_python_interpreter=/home/localuser/ansible_ec2/.venv/bin/python
My inventory/staging/group_vars/all/all.yml file is
---
ami_image: xxxxx
subnet_id: xxxx
region: xxxxx
launched_tag: tag_Name_NginxDemo
Here is my Ansible playbook site.yml
- name: Launch instance
  hosts: localhost
  gather_facts: no
  tasks:
    - ec2:
        key_name: key-nginx
        group: web_sg
        instance_type: t2.micro
        image: "{{ ami_image }}"
        wait: true
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet_id }}"
        assign_public_ip: yes
        instance_tags:
          Name: NginxDemo
        exact_count: 1
        count_tag:
          Name: NginxDemo
      register: ec2

    - name: Add EC2 instance to inventory group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: tag_Name_NginxDemo
        ansible_user: centos_user
        ansible_become: yes
      with_items: "{{ ec2.instances }}"

- name: Configure EC2 instance in launched group
  hosts: tag_Name_NginxDemo
  become: True
  gather_facts: no
  tasks:
    - ping:
I run this playbook with
$ cd /home/localuser/ansible_ec2
$ source .venv/bin/activate
$ ansible-playbook -i inventory/staging site.yml -vvv
and this creates the EC2 instance; the first play works correctly. However, the second play gives the following error:
TASK [.....] ******************************************************************
The authenticity of host 'xx.xxx.xxx.xx (xx.xxx.xxx.xx)' can't be established.
ECDSA key fingerprint is XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.
Are you sure you want to continue connecting (yes/no)? yes
fatal: [xx.xxx.xxx.xx]: FAILED! => {"changed": false, "module_stderr":
"Shared connection to xx.xxx.xxx.xx closed.\r\n", "module_stdout": "/bin/sh:
1: /usr/bin/python: not found\r\n", "msg": "MODULE FAILURE", "rc": 127}
I followed the instructions from this SO question to create the task with add_host, and from here to set gather_facts: False, but this still does not allow the play to run correctly.
How can I target the EC2 host using the tag name?
EDIT:
Additional info
This is the only playbook I have run to this point. I see the error message indicates Python is required, but I cannot install Python on the instance because I cannot connect to it in my Configure EC2 instance in launched group play. If I could make that connection, I could install Python (if that is indeed the problem), though I'm not sure how to connect to the instance.
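Worth noting: the error output above shows that SSH did connect (/bin/sh ran and reported /usr/bin/python as missing), so only the Python interpreter is absent on the target. Since the raw module does not require Python on the remote host, a bootstrap play along these lines could install one first; a sketch only, assuming a yum-based CentOS image:
- name: Bootstrap Python on the new instance
  hosts: tag_Name_NginxDemo
  gather_facts: no
  become: yes
  tasks:
    - name: Install Python if it is missing
      raw: test -e /usr/bin/python || yum -y install python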
EDIT 2:
Here is my Python info on the localhost where I am running Ansible
I am running Ansible inside a Python venv.
Here is my python inside the venv
$ python --version
Python 2.7.15rc1
$ which python
~/ansible_ec2/.venv/bin/python
Here are the details of the Ansible installation inside the Python venv
ansible 2.6.2
config file = /home/localuser/ansible_ec2/ansible.cfg
configured module search path = [u'/home/localuser/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/localuser/ansible_ec2/.venv/local/lib/python2.7/site-packages/ansible
executable location = /home/localuser/ansible_ec2/.venv/bin/ansible
python version = 2.7.15rc1 (default, xxxx, xxxx) [GCC 7.3.0]
Ok, so after a lot of searching, I found one possible workaround here. Basically, this workaround uses the lineinfile module to add the new EC2 instance details to the hosts file permanently, not just for the in-memory plays following the add_host task. I followed this suggestion closely and the approach worked for me. I did not need to use the add_host module.
EDIT:
The task I added with the lineinfile module was
- name: Add EC2 instance to inventory group
  lineinfile: line="{{ item.public_ip }} ansible_python_interpreter=/usr/bin/python3" insertafter=EOF dest=./inventory/staging/hosts
  with_items: "{{ ec2.instances }}"
Why does this task (from Best way to launch aws ec2 instances with ansible):
- name: Add the newly created EC2 instance(s) to the local host group (located inside the directory)
  local_action: lineinfile
                dest="./hosts"
                regexp={{ item.public_ip }}
                insertafter="[webserver]"
                line={{ item.public_ip }}
  with_items: ec2.instances
create this error?
TASK [Add the newly created EC2 instance(s) to the local host group (located inside the directory)] ********************************************************************
fatal: [localhost]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'ansible.vars.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'public_ip'\n\nThe error appears to have been in '/Users/snowcrash/ansible-ec2/ec2_launch.yml': line 55, column 9, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Add the newly created EC2 instance(s) to the local host group (located inside the directory)\n ^ here\n"}
The issue is here:
with_items: ec2.instances
It should be:
with_items: '{{ ec2.instances }}'
ec2 is a variable referencing a dictionary, so you need to reference it with the proper Jinja2 syntax.
Put:
- debug: msg="{{ ec2.instances }}"
before that code and inspect the contents of that variable. It should be a list of dictionaries that each have a public_ip member; otherwise you'd get the message that you're getting.
I'm trying to create an ansible playbook to dynamically find any instances matching AWS tags, create an ELB and then add the instances to it. So far I have been successful in creating these for one set of tags and one ELB at a time.
I'm trying to figure out the best way to have this run for any number of tags without specifying my variables function and release up front.
For example, the function and release variables would be defined in a vars file something like this.
function:
  - api
  - webapp
  - mysql
release:
  - prod
  - stage
  - dev
My playbook looks like this. I'm struggling to find a way to loop the entire playbook over a variable list: if I add with_items to the first task, it loops that entire task before moving on to the next one, which does not accomplish what I want.
- ec2_remote_facts:
    filters:
      instance-state-name: running
      "tag:Function": "{{ function }}"
      "tag:Release": "{{ release }}"
    region: us-east-1
  register: ec2instance

- local_action:
    module: ec2_elb_lb
    name: "{{ function }}-{{ release }}"
    state: present
    instance_ids: "{{ item.id }}"
    purge_instance_ids: true
    region: us-east-1
    subnets:
      - subnet-1
      - subnet-2
    listeners:
      - protocol: https
        load_balancer_port: 443
        instance_port: 80
        ssl_certificate_id: "{{ ssl_certificate_id }}"
    health_check:
      ping_protocol: http
      ping_port: 80
      ping_path: "/status"
      response_timeout: 3
      interval: 5
      unhealthy_threshold: 2
      healthy_threshold: 2
    access_logs:
      interval: 5
      s3_location: "{{ function }}-{{ release }}-elb"
      s3_prefix: "logs"
  with_items: ec2instance.instances
The easiest thing I can think of is parameterized include.
Put the tasks for a single function/release pair into their own file, e.g. elb_from_tagged_instances.yml.
Then make main.yml with include in a loop:
- include: elb_from_tagged_instances.yml function={{item[0]}} release={{item[1]}}
  with_together:
    - "{{function}}"
    - "{{release}}"
And if you don't need to cross-intersect every function with every release, I'd replace the two function/release lists with a single list of dicts and iterate over that, as sketched below.
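For example (a sketch; the deployments variable name is made up):
vars:
  deployments:
    - { function: api, release: prod }
    - { function: webapp, release: stage }
tasks:
  - include: elb_from_tagged_instances.yml function={{ item.function }} release={{ item.release }}
    with_items: "{{ deployments }}"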
UPDATE: Example using a nested loop to get all 9 pairs:
---
- hosts: localhost
  connection: local
  vars:
    functions:
      - api
      - webapp
      - mysql
    releases:
      - prod
      - stage
      - dev
  tasks:
    - include: include_z1.yml function="{{item[0]}}" release="{{item[1]}}"
      with_nested:
        - "{{functions}}"
        - "{{releases}}"
Also note that you should use different names for the list and the include parameter (functions vs. function in my example) to avoid recursive templating.
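To illustrate the pitfall (hypothetical):
# Here the outer list and the include parameter share the name 'function',
# so expanding "{{ function }}" for the loop can pick up the per-item
# parameter value instead of the original list, templating recursively:
- include: include_z1.yml function="{{ item[0] }}"
  with_nested:
    - "{{ function }}"
    - "{{ release }}"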