Ansible gcp_compute inventory plugin - groups based on machine names - google-cloud-platform

Consider the following config for ansible's gcp_compute inventory plugin:
plugin: gcp_compute
projects:
  - myproj
scopes:
  - https://www.googleapis.com/auth/compute
filters:
  - ''
groups:
  connect: '"connect" in list'
  gcp: 'True'
auth_kind: serviceaccount
service_account_file: ~/.gsutil/key.json
This works for me, and will put all hosts in the gcp group as expected. So far so good.
However, I'd like to group my machines based on certain substrings appearing in their names. How can I do this?
Or, more broadly, how can I find a description of the various variables available to the jinja expressions in the groups dictionary?

The variables available are the keys available inside each of the items in the response, as listed here: https://cloud.google.com/compute/docs/reference/rest/v1/instances/list
So, for my example:
plugin: gcp_compute
projects:
  - myproj
scopes:
  - https://www.googleapis.com/auth/compute
filters:
  - ''
groups:
  connect: "'connect' in name"
  gcp: 'True'
auth_kind: serviceaccount
service_account_file: ~/.gsutil/key.json
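Since any key of the instance resource is available, you can build groups from other fields too. A sketch (the `status` and `machineType` keys do exist on the instance resource; the group names here are just examples):

```yaml
groups:
  # group by a substring of the machine name, as above
  connect: "'connect' in name"
  # all currently running instances ('status' is an instance resource key)
  running: "status == 'RUNNING'"
  # instances whose machine type URL mentions n1-standard
  n1: "'n1-standard' in machineType"
```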

To complete the accepted answer: to select machines based on substrings appearing in their names, you can also add an expression to the filters parameter, for example:
filters:
  - 'name = gke*'
This lists only the instances whose names start with gke.

Related

How to list group ids in GCP using cli or console

I would like to know: is there any way to get group IDs using either the CLI or the Cloud Console? Using this ID, I need to collect the members list via the Cloud API. I went through Google's documentation but couldn't find it.
For example if I use:
gcloud identity groups memberships list --group-email=abc@xyz.com
It gives the group id. Then I am using this doc to get the list of members.
If using Google's SDK tool gcloud is OK with you, then you can do it as follows:
The group's ID is its actual email address - you can see it below:
wb@cloudshell:~ $ gcloud identity groups describe esa111@google.com
createTime: '2021-10-12T09:13:16.737141Z'
description: test group
displayName: esa111
groupKey:
  id: esa111@google.com
labels:
  cloudidentity.googleapis.com/groups.discussion_forum: ''
name: groups/00rj4333f0glbwez
parent: customers/Cx2hsdde9nw
updateTime: '2021-10-12T09:13:16.737141Z'
To get a members list:
wb@cloudshell:~ $ gcloud identity groups memberships list --group-email=esa111@google.com
---
name: groups/00rj4333f0glbwez/memberships/129543432329845052
preferredMemberKey:
  id: esa222@google.com
roles:
- name: MEMBER
---
name: groups/00rj4333f0glbwez/memberships/11674834e3327905886
preferredMemberKey:
  id: esa111@google.com
roles:
- name: OWNER
- name: MEMBER
And to list just the group members' IDs, pipe through grep:
wb@cloudshell:~ $ gcloud identity groups memberships list --group-email=esa111@google.com | grep id:
id: esa222@google.com
id: esa111@google.com
Here's some docs on the gcloud identity groups describe and list commands.
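As an alternative to grep, gcloud's built-in --format flag can project just the member IDs (this uses the standard value() projection; the field path follows the YAML output shown above):

```shell
# print only the member email IDs, one per line
gcloud identity groups memberships list \
  --group-email=esa111@google.com \
  --format="value(preferredMemberKey.id)"
```

This avoids matching unrelated lines that happen to contain "id:".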

DM create bigquery view then authorize it on dataset

Using Google Deployment Manager, has anybody found a way to first create a view in BigQuery, then authorize one or more datasets used by the view, sometimes in different projects, and were not created/managed by deployment manager? Creating a dataset with a view wasn't too challenging. Here is the jinja template named inventoryServices_bigquery_territory_views.jinja:
resources:
- name: territory-{{properties["OU"]}}
  type: gcp-types/bigquery-v2:datasets
  properties:
    datasetReference:
      datasetId: territory_{{properties["OU"]}}
- name: files
  type: gcp-types/bigquery-v2:tables
  properties:
    datasetId: $(ref.territory-{{properties["OU"]}}.datasetReference.datasetId)
    tableReference:
      tableId: files
    view:
      query: >
        SELECT DATE(DAY) DAY, ou, email, name, mimeType
        FROM `{{properties["files_table_id"]}}`
        WHERE LOWER(SPLIT(ou, "/")[SAFE_OFFSET(1)]) = "{{properties["OU"]}}"
      useLegacySql: false
The deployment configuration references the above template like this:
imports:
- path: inventoryServices_bigquery_territory_views.jinja
resources:
- name: inventoryServices_bigquery_territory_views
  type: inventoryServices_bigquery_territory_views.jinja
In the example above files_table_id is the project.dataset.table that needs the newly created view authorized.
I have seen some examples of managing IAM at project/folder/org level, but my need is on the dataset, not project. Looking at the resource representation of a dataset it seems like I can update access.view with the newly created view, but am a bit lost on how I would do that without removing existing access levels, and for datasets in projects different than the one the new view is created in. Any help appreciated.
Edit:
I tried adding the dataset which needs the view authorized like so, then deploy in preview mode just to see how it interprets the config:
-name: files-source
  type: gcp-types/bigquery-v2:datasets
  properties:
    datasetReference:
      datasetId: {{properties["files_table_id"]}}
    access:
      view:
        projectId: {{env['project']}}
        datasetId: $(ref.territory-{{properties["OU"]}}.datasetReference.datasetId)
        tableId: $(ref.territory_files.tableReference.tableId)
But when I deploy in preview mode it throws this error:
errors:
- code: MANIFEST_EXPANSION_USER_ERROR
location: /deployments/inventoryservices-bigquery-territory-views-us/manifests/manifest-1582283242420
message: |-
Manifest expansion encountered the following errors: mapping values are not allowed here
in "<unicode string>", line 26, column 7:
type: gcp-types/bigquery-v2:datasets
^ Resource: config
This is strange to me; it's hard to make sense of that error, since the line/column it points to is formatted exactly like the other dataset in the config. Maybe it doesn't like that the files-source dataset already exists and was created outside of Deployment Manager.
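For what it's worth, one plausible cause (a guess from the snippet as posted, not confirmed in the thread): `-name: files-source` is missing the space after the dash, so YAML reads `-name` as a mapping key instead of starting a list item, and the following keys then fail to align. A corrected sketch of that resource might look like this (note also that the table resource earlier is named `files`, so the ref may need to be `$(ref.files...)` rather than `$(ref.territory_files...)`):

```yaml
- name: files-source
  type: gcp-types/bigquery-v2:datasets
  properties:
    datasetReference:
      datasetId: {{properties["files_table_id"]}}
    access:
    - view:
        projectId: {{env['project']}}
        datasetId: $(ref.territory-{{properties["OU"]}}.datasetReference.datasetId)
        tableId: $(ref.files.tableReference.tableId)
```

In the BigQuery dataset resource, `access` is a list of entries, so the view entry is written as a list item here; that detail is also worth checking against the API reference.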

Parsing variables in ansible inventory in python

I'm trying to parse ansible variables using python specified in an inventory file like below:
[webservers]
foo.example.com type=news
bar.example.com type=sports
[dbservers]
mongodb.local type=mongo region=us
mysql.local type=mysql region=eu
I want to be able to parse type=news for host foo.example.com in webservers and type=mongo region=us for host mongodb.local under dbservers. Any help with this is greatly appreciated
The play below
- name: List type=news hosts in the group webservers
  debug:
    msg: "{{ hostvars[item].inventory_hostname }}"
  loop: "{{ groups['webservers'] }}"
  when: hostvars[item].type == "news"

- name: List type=mongo and region=us hosts in the group dbservers
  debug:
    msg: "{{ hostvars[item].inventory_hostname }}"
  loop: "{{ groups['dbservers'] }}"
  when:
    - hostvars[item].type == "mongo"
    - hostvars[item].region == "us"
gives:
"msg": "foo.example.com"
"msg": "mongodb.local"
If the playbook is run on the host foo.example.com, you can get type=news simply by referencing "{{ type }}". If you want to use it in "when" conditions, just reference "type" directly.
If the playbook is run on the host mongodb.local, the value of "type" will automatically be "mongo", and "region" will automatically be "us".
Variables defined per host in the inventory file, as in your example, are resolved automatically on the corresponding hosts. Thus the playbook can be executed on all hosts; to get the value of "type", for example:
- debug:
     msg: "{{type}}"
On each of the hosts you will get your unique values that are defined in the hosts file
I'm not sure I understood the question correctly, but if the goal was to get, on the foo.example.com host, a list of servers from the "webservers" group that have type=news, then the answer is already given above.
Rather than re-inventing the wheel, I suggest you have a look at how Ansible itself parses ini files to turn them into an inventory object.
You could also easily get this info in JSON format with a very simple playbook (as suggested by @vladimirbotka), or rewrite your inventory in YAML, which would be much easier to parse with any external tool:
inventory.yaml
---
all:
  children:
    webservers:
      hosts:
        foo.example.com:
          type: news
        bar.example.com:
          type: sports
    dbservers:
      hosts:
        mongodb.local:
          type: mongo
          region: us
        mysql.local:
          type: mysql
          region: eu
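If you do want to parse the INI-style file directly in Python rather than going through Ansible, a minimal hand-rolled sketch is below. It assumes only the simple `[group]` / `host key=value` layout shown in the question (no `:children` or `:vars` sections, which Ansible's own parser handles):

```python
def parse_inventory(text):
    """Parse a simple INI-style Ansible inventory into {group: {host: {var: value}}}."""
    inventory = {}
    group = None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(("#", ";")):
            continue  # skip blank lines and comments
        if line.startswith("[") and line.endswith("]"):
            group = line[1:-1]
            inventory[group] = {}
        elif group is not None:
            # first token is the host, the rest are key=value variable pairs
            host, *pairs = line.split()
            inventory[group][host] = dict(p.split("=", 1) for p in pairs)
    return inventory

inv = parse_inventory("""\
[webservers]
foo.example.com type=news
bar.example.com type=sports

[dbservers]
mongodb.local type=mongo region=us
mysql.local type=mysql region=eu
""")
print(inv["webservers"]["foo.example.com"])  # {'type': 'news'}
print(inv["dbservers"]["mongodb.local"])     # {'type': 'mongo', 'region': 'us'}
```

For anything beyond this trivial layout, prefer Ansible's own `InventoryManager`, which already understands the full format.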

Limit hosts using Workflow Template

I'm using Ansible AWX (Tower) and have a template workflow that executes several templates one after the other, based on if the previous execution was successful.
I noticed I can limit to a specific host when running a single template, and I'd like to apply this to a workflow. My guess is I would have to use the survey option to achieve this; however, I'm not sure how.
I have tried to see if I can override the "hosts" value and that failed like I expected it to.
How can I go about having it ask me at the beginning of the workflow for the hostname/ip and not for every single template inside the workflow?
You have the set_stats option.
Let's suppose you have the following inventory:
10.100.10.1
10.100.10.3
10.100.10.6
Your inventory is called MyOfficeInventory. The first rule is that you need to use this same inventory across all the templates in the workflow, so the host selection from the first one can be carried over.
I want to ping only my 10.100.10.6 machine, so in the template I choose MyOfficeInventory and limit to 10.100.10.6.
If we do:
---
- name: Ping
  hosts: all
  gather_facts: False
  connection: local
  tasks:
    - name: Ping
      ping:
We get:
TASK [Ping] ********************************************************************
ok: [10.100.10.6]
Cool! So from MyOfficeInventory only my selected host was pinged. Now, in my workflow, the next template also has MyOfficeInventory selected (this is the rule mentioned above). If I ping there, I will ping all of the hosts unless I limit again, so let's do the magic.
In your first Template do:
- name: add devices with connectivity to the "working_hosts" group
  group_by:
    key: working_hosts

- name: "Artifact URL of test results to Tower Workflows"
  set_stats:
    data:
      myinventory: "{{ groups['working_hosts'] }}"
  run_once: True
Be careful, because for your playbook,
groups['all']
means:
"groups['all']": [
"10.100.10.1",
"10.100.10.3",
"10.100.10.6"
]
And with your new working_hosts group, you get only your current host:
"groups['working_hosts']": [
"10.100.10.6"
]
So now you have your brand new myinventory inventory.
Use it like this in the rest of your Playbooks assigned to your Templates:
- name: Ping
  hosts: "{{ myinventory }}"
  gather_facts: False
  tasks:
    - name: Ping
      ping:
Your inventory variable will be transferred and you will get:
ok: [10.100.10.6]
One step further. Do you want to select your host from a Survey?
Create one with your hostname as input and keep your first playbook as:
- name: Ping
  hosts: "{{ mysurveyhost }}"
  gather_facts: False

Ansible returns wrong hosts in dynamic inventory (private ip collision?)

I have two instances on different VPCs which have the same private address.
ci-vpc:
  172.18.50.180:
    tags:
      Environment: ci
      Role: aRole
test-vpc:
  172.18.50.180:
    tags:
      Environment: test
      Role: web
I am running the following playbook:
- name: "print account specific variables"
hosts: "tag_Environment_ci:&tag_Role_web"
tasks:
- name: "print account specific variables for account {{ account }}"
debug:
msg:
- 'ec2_tag_Name': "{{ ec2_tag_Name }}"
'ec2_tag_Role': "{{ ec2_tag_Role }}"
'ec2_private_ip_address': "{{ ec2_private_ip_address }}"
'ec2_tag_Environment': "{{ ec2_tag_Environment }}"
Since I am asking for both role web and environment ci, none of these instances should be picked, but nevertheless the result that I am getting is:
ok: [172.18.50.180] => {
    "changed": false,
    "msg": [
        {
            "ec2_private_ip_address": "172.18.50.180",
            "ec2_tag_Environment": "test",
            "ec2_tag_Name": "test-web-1",
            "ec2_tag_Role": "web"
        }
    ]
}
Obviously this instance does not meet the requirements under hosts...
It seems like ec2.py searched for the Environment tag, found ci for 172.18.50.180, then searched separately for the role tag, found another one under 172.18.50.180, and just marked that instance as ok, even though these are two different instances on different vpcs.
I've tried changing vpc_destination_variable in ec2.ini to id, but then I'm getting an error when Ansible tries to connect to these instances, because it cannot connect to the id:
fatal: [i-XXX]: UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname i-XXX: Name or service not known\r\n",
    "unreachable": true
}
Is there another option that will work under vpc_destination_variable? Any known solution for such a collision?
tl;dr: This is exactly what hostname_variable in ec2.ini is for, as documented:
# This allows you to override the inventory_name with an ec2 variable, instead
# of using the destination_variable above. Addressing (aka ansible_ssh_host)
# will still use destination_variable. Tags should be written as 'tag_TAGNAME'.
Unfortunately I missed it, and only found it after looking around in ec2.py.
Longer answer, with additional options for hostnames
After finding out about hostname_variable, I had another problem: it accepts only one variable. In my case I had some instances sharing the same private IP on one hand, and some sharing the same tags on the other (AWS autoscaling groups put the same tags on all their hosts), so I needed a way to differentiate between them.
I've created a gist with this option; my change is in line 848. It allows you to use multiple comma-separated variables in hostname_variable, e.g.:
hostname_variable = tag_Name,private_ip_address
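The idea of the change, roughly, is a fallback chain: split hostname_variable on commas and use the first variable that resolves to a non-empty value for the instance. This is a hypothetical sketch of that logic, not the exact code from the gist:

```python
def resolve_hostname(instance_vars, hostname_variable):
    """Pick the first non-empty value among comma-separated variable names.

    instance_vars: dict of ec2 variables for one instance,
                   e.g. {'tag_Name': 'web-1', 'private_ip_address': '172.18.50.180'}
    hostname_variable: the ec2.ini setting, e.g. 'tag_Name,private_ip_address'
    """
    for name in hostname_variable.split(","):
        value = instance_vars.get(name.strip())
        if value:
            return value
    return None  # nothing matched; caller falls back to the default naming

# an autoscaling-group instance with an empty tag_Name falls back to its IP
print(resolve_hostname({"tag_Name": "", "private_ip_address": "172.18.50.180"},
                       "tag_Name,private_ip_address"))  # 172.18.50.180
```

With `tag_Name,private_ip_address`, instances that share a tag_Name still get distinct inventory names via their private IP, and vice versa.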