How to add a disk to a vCenter guest using Ansible (VMware)

I'm attempting to add a second disk to a VM in VMware vCenter.
Here is what I have:
- name: "Modifying ..."
local_action:
module: vsphere_guest
vcenter_hostname: "{{ vcenter.hostname }}"
username: "{{ vcenter_user[datacenter]['username'] }}"
password: "{{ vcenter_user[datacenter]['password'] }}"
guest: "{{ inventory_hostname }}"
# Looky looky heeya ...#
state: reconfigured
########################
vm_extra_config:
vcpu.hotadd: yes
mem.hotadd: yes
notes: "{{ datacenter }} {{ purpose |replace('_',' ') }}"
vm_disk:
disk1:
size_gb: 50
type: thin
datastore: "{{ vcenter.datastore }}"
disk2:
size_gb: 200
type: thin
datastore: "{{ vcenter.datastore }}"
vm_hardware:
memory_mb: "{{ vm.memory|int }}"
num_cpus: "{{ vm.cpus|int }}"
osid: "{{ os.id }}"
esxi:
datacenter: "{{ esxi.datacenter }}"
hostname: "{{ esxi.hostname }}"
vCenter registers the reconfigure and no errors are displayed.
There are also no errors on the console when I run the playbook.
It simply does not add the second disk.
So is there a way to add the disk, or will I have to write a Python script to do it?
Thanks.

The reconfigure_vm function in the vsphere_guest module only includes code for changing the RAM and the CPU; I don't see any code for changing other hardware. At the moment this is only possible while creating a new VM.
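If upgrading is an option, the newer community.vmware collection has a vmware_guest_disk module built for attaching disks to an existing VM. A minimal sketch, reusing the variables from the question (the scsi_controller and unit_number values are assumptions and would need to match your VM's actual layout):

- name: Add a second disk to an existing VM
  community.vmware.vmware_guest_disk:
    hostname: "{{ vcenter.hostname }}"
    username: "{{ vcenter_user[datacenter]['username'] }}"
    password: "{{ vcenter_user[datacenter]['password'] }}"
    datacenter: "{{ esxi.datacenter }}"
    name: "{{ inventory_hostname }}"
    disk:
      - size_gb: 200
        type: thin
        datastore: "{{ vcenter.datastore }}"
        state: present
        scsi_controller: 0   # assumed: first SCSI controller
        unit_number: 1       # assumed: next free slot on that controller
  delegate_to: localhost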


Ansible "ec2_asg" module keeps changing the desired/min/max capacity settings

For cost reasons, our ASGs in the QA environment run with desired/min/max capacity set to "1". That's not the case for Production, but since we use the same code for QA and Prod deployments (minus a few variables, of course), this is causing problems with the QA automation jobs.
- name: create autoscale groups original_lc
  ec2_asg:
    name: "{{ app_name }}"
    target_group_arns: "{{ alb_target_group_facts.target_groups[0].target_group_arn }}"
    launch_config_name: "{{ launch_config.name }}"
    min_size: 1
    max_size: 1
    desired_capacity: 1
    region: "{{ region }}"
    vpc_zone_identifier: "{{ subnets | join(',') }}"
    health_check_type: "{{ health_check }}"
    replace_all_instances: yes
    wait_for_instances: false
    replace_batch_size: '{{ rollover_size }}'
    lc_check: false
    default_cooldown: "{{ default_cooldown }}"
    health_check_period: "{{ health_check_period }}"
    notification_topic: "{{ redeem_notification_group }}"
    tags:
      - Name: "{{ app_name }}"
      - Application: "{{ tag_Application }}"
      - Product_Owner: "{{ tag_Product_Owner }}"
      - Resource_Owner: "{{ tag_Resource_Owner }}"
      - Role: "{{ tag_Role }}"
      - Service_Category: "{{ tag_Service_Category }}"
  register: asg_original_lc
On the first run, the "ec2_asg" module creates the group properly, with the correct desired/min/max settings.
But when we run the job a second time to update the same ASG, it changes desired/min/max to "2" in AWS. We don't want that; we just want it to rotate out the one instance in the group. Is there a way to achieve that?
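No answer is recorded here, but one plausible explanation (worth verifying against your module version) is that replace_all_instances temporarily raises the group's capacity by replace_batch_size to bring up replacement instances, and with wait_for_instances: false the module returns before scaling back down. A sketch of a workaround under that assumption, letting the module wait so it can finish the rollover and restore the original capacity:

- name: create autoscale groups original_lc
  ec2_asg:
    name: "{{ app_name }}"
    launch_config_name: "{{ launch_config.name }}"
    min_size: 1
    max_size: 1
    desired_capacity: 1
    region: "{{ region }}"
    replace_all_instances: yes
    replace_batch_size: 1
    wait_for_instances: true   # wait for the rollover to complete before returning
    wait_timeout: 600
  register: asg_original_lc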

Is it possible to loop into two different lists in the same playbook (Ansible)?

I'm writing a Playbook Ansible and I want to loop into two different lists.
I know that I can use with_items to loop over a list, but can I use with_items twice in the same playbook?
Here is what I want to do:
- name: Deploy the network in fabric 1 and fabric 2
  tags: [merged]
  role_network:
    config:
      - net_name: "{{ networkName }}"
        vrf_name: "{{ vrf }}"
        net_id: 30010
        net_template: "{{ networkTemplate }}"
        net_extension_template: "{{ networkExtensionTemplate }}"
        vlan_id: "{{ vlan }}"
        gw_ip_subnet: "{{ gw }}"
        attach: "{{ item }}"
        deploy: false
    fabric: "{{ item }}"
    state: merged
  with_items:
    - "{{ attachs }}"
    - "{{ fabric }}"
  register: networks
So for the first call, I want to use the playbook with fabric[0] and attachs[0].
For the second call, I want to use the playbook with fabric[1] and attachs[1].
And so on...
Can someone help me please?
What you are looking to achieve is what with_together used to do, and it is now recommended to achieve it with the zip filter.
So: loop: "{{ attachs | zip(fabric) | list }}".
The element of the first list (attachs) is then item.0 and the element of the second list (fabric) is item.1.
- name: Deploy the network in fabric 1 and fabric 2
  tags: [merged]
  role_network:
    config:
      - net_name: "{{ networkName }}"
        vrf_name: "{{ vrf }}"
        net_id: 30010
        net_template: "{{ networkTemplate }}"
        net_extension_template: "{{ networkExtensionTemplate }}"
        vlan_id: "{{ vlan }}"
        gw_ip_subnet: "{{ gw }}"
        attach: "{{ item.0 }}"
        deploy: false
    fabric: "{{ item.1 }}"
    state: merged
  loop: "{{ attachs | zip(fabric) | list }}"
  register: networks
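For illustration, with hypothetical sample values for the two lists, the zipped loop pairs them element by element:

vars:
  attachs: [a1, a2]   # hypothetical sample values
  fabric: [f1, f2]
# "{{ attachs | zip(fabric) | list }}" evaluates to [[a1, f1], [a2, f2]]
# iteration 1: item.0 == a1, item.1 == f1
# iteration 2: item.0 == a2, item.1 == f2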

Ansible in AWS, list processing question using ec2_instance_info for several nodes

I am running several Ansible playbooks in AWS and am having difficulty with a test YAML file. The purpose of the YAML file is to query AWS for a list of servers using a filter and set_fact the instance name, instance ID, instance size, and private IP.
The code I have returns data only for the first node in the list and repeats the debug line every 12 lines; all the other lines show no data returned. I am using ec2_instance_info to get the various data about the instances.
Here is the Ansible yaml file
---
# This script gathers the instance IDs et al.
- name: Get EC2 Info
  ec2_instance_info:
    region: '{{ aws_region }}'
    aws_access_key: "{{ lookup('ini', 'aws_access_key_id section=saml file=~/.aws/credentials') }}"
    aws_secret_key: "{{ lookup('ini', 'aws_secret_access_key section=saml file=~/.aws/credentials') }}"
    security_token: "{{ lookup('ini', 'aws_session_token section=saml file=~/.aws/credentials') }}"
    filters:
      "tag:Name": "test-envMan*"
  register: Instance_ID

- name: Get Instance ID
  debug:
    msg: "{{ item.0 }} | {{ item.1 }} | {{ item.2 }} | {{ item.3 }}"
  with_together:
    - "{{ Instance_ID.instances | map(attribute='tags.Name') | list }}"
    - "{{ Instance_ID.instances[0].instance_id }}"
    - "{{ Instance_ID.instances[1].instance_type }}"
    - "{{ Instance_ID.instances[2].private_ip_address }}"

- name: Gather and Save info
  set_fact:
    Tag_Name: "{{ Instance_ID.instances | map(attribute='tags.Name') | list }}"
    Instance_ID: "{{ Instance_ID.instances[0].instance_id }}"
    Instance_Size: "{{ Instance_ID.instances[1].instance_type }}"
    Instance_PrivIP: "{{ Instance_ID.instances[2].private_ip_address }}"
The output shows 12 lines of Ansible "ok" output for each server. The first of these includes the debug output of the expected fields for the first node.
So: 1 line of "ok" log output, then the debug line, then 11 more lines of "ok" output for the same node; then 1 line of "ok" output for the second node, then the debug line for the first node again, and so on.
I need to discover what I am doing incorrectly and how to make it behave.
Any comments, suggestions, or pointers are appreciated.
Thanks.
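A likely culprit, offered as a sketch since no answer is recorded here: only the first with_together input is a list; the other three are single scalar values picked from fixed indices ([0], [1], [2]), so the columns never advance together. Mapping each attribute across all instances should line them up:

- name: Show per-instance info (sketch of the probable fix)
  debug:
    msg: "{{ item.0 }} | {{ item.1 }} | {{ item.2 }} | {{ item.3 }}"
  with_together:
    - "{{ Instance_ID.instances | map(attribute='tags.Name') | list }}"
    - "{{ Instance_ID.instances | map(attribute='instance_id') | list }}"
    - "{{ Instance_ID.instances | map(attribute='instance_type') | list }}"
    - "{{ Instance_ID.instances | map(attribute='private_ip_address') | list }}"

Note also that the set_fact task assigns a fact named Instance_ID, which shadows the registered variable of the same name; renaming one of them avoids surprises.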

Is it possible to add a conditional statement in a .yml file (Ansible)?

I have created a role and a playbook. In my playbook I am defining EC2 tags such as:
instance_tags: "Name={{ name }},bld_env={{ bld_env }},server={{ app_name }}-{{ bld_env }}"
I also need a condition here for bld_env: if bld_env is "test", more tags should be added.
Can you help me out with tagging in the case of a conditional statement?
E.g., my tagging.yml file:
- name: Launch EC2 host
  hosts: localhost
  connection: local
  gather_facts: False
  vars:
    bld_env: "{{ bld_env }}"
    count: 1
    instance_type: "{{ size }}"
    image: "{{ image }}"
    ansible_ssh_user: "ec2-user"
    region: "{{ region }}"
    pem_path: "~/.ssh"
    group_name: "ec2hosts"
    vpc_subnet_id: "{{ lookup('ini', 'vpc_subnet section=vpc file=./{{ bld_env }}.ini') }}"
    assign_public_ip: "{{ ip }}"
    keypair: "{{ key }}"
    access_key: "{{ access }}"
    secret_key: "{{ secret }}"
    instance_tags: "Name={{ name }},bld_env={{ bld_env }},server={{ app_name }}-{{ bld_env }}"
    security_group: "[{{ group }}]"
  roles:
    - ec2
Now, as you can see the instance_tags above: I want to add a condition to instance_tags so that when bld_env is "test", another tag "Label=ec2start-stop" is added.
This Label must be added only when bld_env is "test", since I am dealing with multiple environments here.
You can use any Jinja2 condition inside expressions, for example (see the end of the line):
instance_tags: "Name={{ name }},bld_env={{ bld_env }},server={{ app_name }}-{{ bld_env }}{{ '' if bld_env != 'test' else ',other_tag='+other_tag_value }}"
This appends an empty string if bld_env != 'test', or ,other_tag=<other_tag_value> otherwise.
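Applied to the Label tag from the question, a minimal sketch would be:

instance_tags: "Name={{ name }},bld_env={{ bld_env }},server={{ app_name }}-{{ bld_env }}{{ ',Label=ec2start-stop' if bld_env == 'test' else '' }}"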

Ansible running one command out of many locally in a loop

Background:
I'm trying to create a loop that iterates over a hash read from a qa.yml file; for every user in the list, it tries to find a file (a public key) on the local server, and once the file is found, it creates the user on the remote machine and copies the public key to authorized_keys on the remote machine.
I'm trying to implement it as an iteration, so that to update keys or add more users' keys I only need to change the .yml list and place the public key file in the proper place. However, I can't get local_action + find working.
---
- hosts: tag_Ansible_CLOUD_QA
  vars_files:
    - ../users/qa.yml
    - ../users/groups.yml
  remote_user: ec2-user
  sudo: yes
  tasks:
    - name: Create groups
      group: name="{{ item.key }}" state=present
      with_dict: "{{ user_groups }}"
    - name: Create remote users QA
      user: name="{{ item.key }}" comment="user" group=users groups="qa"
      with_dict: "{{ qa_users }}"
    - name: Erase previous authorized keys QA
      shell: rm -rf /home/"{{ item.key }}"/.ssh/authorized_keys
      with_dict: "{{ qa_users }}"
    - name: Add public keys to remote users QA
      local_action:
        find: paths="{{ '/opt/pubkeys/2016/q2/' }}" patterns="{{ item.key }}"
      register: result
      authorized_key: user="{{ item.key }}" key="{{ lookup('file', result) }}"
      with_dict: "{{ qa_users }}"
Hash:
qa_users:
  user1:
    name: User 1
  user2:
    name: User 2
You're cramming two modules into a single task item in that final task, so Ansible isn't going to like that.
Splitting it into two tasks should work:
- name: Find keys
  local_action: find paths="{{ '/opt/pubkeys/2016/q2/' }}" patterns="{{ item.key }}"
  register: result
  with_dict: "{{ qa_users }}"

- name: Add public keys to remote users QA
  authorized_key: user="{{ item.0.key }}" key="{{ lookup('file', item.1.files.0.path) }}"
  with_together:
    - "{{ qa_users | dict2items }}"
    - "{{ result.results }}"
The second task then loops over the dictionary and the results of the previous task using a with_together loop, which advances through the two data structures in step.
However, this looks like a less-than-ideal way to solve your problem.
If you look at what your tasks are trying to do, you could replace them more simply with something like this:
- name: Add public keys to remote users QA
  authorized_key: user="{{ item.key }}" key="{{ lookup('file', '/opt/pubkeys/2016/q2/' + item.key) }}"
  with_dict: "{{ qa_users }}"
You can also remove the third task, which cleared down the users' previous keys, by simply using the exclusive parameter of the authorized_key module:

- name: Add public keys to remote users QA
  authorized_key: user="{{ item.key }}" key="{{ lookup('file', '/opt/pubkeys/2016/q2/' + item.key) }}" exclusive=yes
  with_dict: "{{ qa_users }}"
Also, it might be that you simplified things in an odd way for the question, but the data structures you are using are less than ideal right now, so I'd take a look at that if that's really what they look like.
Thank you @ydaetskcoR for sharing the right approach. The following solution did the dynamic public-key distribution for me, with the files residing on the local machine and provisioned onto remote EC2 machines:
---
- hosts: tag_Ansible_CLOUD_QA
  vars_files:
    - ../users/groups.yml
    - ../users/qa.yml
  remote_user: ec2-user
  become: yes
  become_method: sudo
  tasks:
    - name: Find user matching key files
      become_user: jenkins
      local_action: find paths="{{ '/opt/pubkeys/2016/q1/' }}" patterns="{{ '*' + item.key + '*' }}"
      register: pub_key_files
      with_dict: "{{ qa_users }}"
    - name: Create groups
      group: name="{{ item.key }}" state=present
      with_dict: "{{ user_groups }}"
    - name: Allow test users to have passwordless sudo
      lineinfile: "dest=/etc/sudoers state=present regexp='^%{{ item.key }} ALL=.*ALL.* NOPASSWD: ALL' line='%{{ item.key }} ALL=(ALL) NOPASSWD: ALL'"
      with_dict: "{{ user_groups }}"
    - name: Create remote users qa
      user: name="{{ item.key }}" comment="user" group=users groups="qa"
      with_dict: "{{ qa_users }}"
    - name: Add public keys to remote users qa
      # debug: "msg={{ 'User:' + item.item.key + ' KeyFile:' + item.files.0.path }}"
      authorized_key: user="{{ item.item.key }}" key="{{ lookup('file', item.files.0.path) }}" exclusive=yes
      with_items: "{{ pub_key_files.results }}"
This is the command line used to run it with a dynamic inventory based on EC2 tags:
ansible-playbook -i inventory/ec2.py --private-key <path to your key file> --extra-vars '{"QUATER":"q1"}' credentials/distribute-public-key.yml