AWS WAF Update IP set Automation

I am trying to automate updating IPs in an AWS WAF IP set so that engineers can whitelist addresses. aws waf-regional update-ip-set returns a ChangeToken which has to be used in the next run of the update-ip-set command.
I am trying to build this automation as a Rundeck job (community edition). Ideally engineers will not have access to the output of the previous job to retrieve the ChangeToken. What's the best way to accomplish this?

You can hide the step output using the "Mask Log Output by Regex" log filter.
Take a look at the following job definition example. The first step is just a simulation of getting the token, and its output is hidden by the filter.
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: fcf8cf5d-697c-42a1-affb-9cda02183fdd
  loglevel: INFO
  name: TokenWorkflow
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
      - exec: echo "abc123"
        plugins:
          LogFilter:
            - config:
                invalidKeyPattern: \s|\$|\{|\}|\\
                logData: 'false'
                name: mytoken
                regex: \s*([^\s]+?)\s*
              type: key-value-data
            - config:
                maskOnlyValue: 'false'
                regex: .*
                replacement: '[SECURE]'
              type: mask-log-output-regex
      - exec: echo ${data.mytoken}
    keepgoing: false
    strategy: node-first
  uuid: fcf8cf5d-697c-42a1-affb-9cda02183fdd
The second step uses that token. (To demonstrate data passing between steps it simply prints the data value generated in the first step; in your case the token would be consumed by another command.)
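In your case, the first step would run the AWS CLI instead of echo. A minimal sketch of that step, assuming the classic waf-regional API (--query ChangeToken --output text just strips the JSON wrapper so only the raw token is printed):

commands:
  - exec: aws waf-regional get-change-token --query ChangeToken --output text
    # attach the same key-value-data and mask-log-output-regex LogFilter
    # configuration shown above, so the token lands in ${data.mytoken}
    # and is masked in the execution log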
Update (passing the data value to another job)
Just use a job reference step and pass the data variable to the remote job option as an argument.
Check the following example:
The first job generates the token (or gets it from your service, hiding the result as in the first example). Then it calls another job that "receives" that data in an option (Job Reference Step > Arguments) using this format:
-token ${data.mytoken}
Here -token is the target job option name, and ${data.mytoken} is the current data variable name.
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: fcf8cf5d-697c-42a1-affb-9cda02183fdd
  loglevel: INFO
  name: TokenWorkflow
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
      - exec: echo "abc123"
        plugins:
          LogFilter:
            - config:
                invalidKeyPattern: \s|\$|\{|\}|\\
                logData: 'false'
                name: mytoken
                regex: \s*([^\s]+?)\s*
              type: key-value-data
            - config:
                maskOnlyValue: 'false'
                regex: .*
                replacement: '[SECURE]'
              type: mask-log-output-regex
      - jobref:
          args: -token ${data.mytoken}
          group: ''
          name: ChangeRules
          nodeStep: 'true'
          uuid: b6975bbf-d6d0-411e-98a6-8ecb4c3f7431
    keepgoing: false
    strategy: node-first
  uuid: fcf8cf5d-697c-42a1-affb-9cda02183fdd
This is the job that receives the token and does something with it; the example just echoes the token, but the idea is to use it internally for some action (as in the first example).
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: b6975bbf-d6d0-411e-98a6-8ecb4c3f7431
  loglevel: INFO
  name: ChangeRules
  nodeFilterEditable: false
  options:
    - name: token
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
      - exec: echo ${option.token}
    keepgoing: false
    strategy: node-first
  uuid: b6975bbf-d6d0-411e-98a6-8ecb4c3f7431
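Applied back to the original question, the ChangeRules job would consume the token in the update call instead of echoing it. A hedged sketch of that step (the IP set ID and the CIDR are placeholders you would replace):

commands:
  - exec: >-
      aws waf-regional update-ip-set
      --ip-set-id a123fae4-b567-8e90-1234-5ab67ac8ca90
      --change-token ${option.token}
      --updates 'Action=INSERT,IPSetDescriptor={Type=IPV4,Value="192.0.2.44/32"}'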

Related

Using terraform output in kitchen terraform tests

I am using Kitchen-Terraform to deploy/test an environment on GCP.
I am struggling to get the kitchen/inspec part to use the terraform output values, so I can use them in my tests.
This is what I have:
My inspec.yml
name: default
depends:
  - name: inspec-gcp
    url: https://github.com/inspec/inspec-gcp/archive/master.tar.gz
supports:
  - platform: gcp
attributes:
  - name: gcloud_project
    required: true
    description: gcp project
    type: string
My Kitchen Yaml
driver:
  name: terraform
  root_module_directory: test/fixtures/tf_module

provisioner:
  name: terraform

verifier:
  name: terraform
  format: documentation
  systems:
    - name: default
      backend: gcp
      controls:
        - instance

platforms:
  - name: terraform

suites:
  - name: kt_suite
My Unit test
gcloud_project = attribute('gcloud_project',
                           { description: "The name of the project where resources are deployed." })

control "instance" do
  describe google_compute_instance(project: "#{gcloud_project}", zone: 'us-central1-c', name: 'test') do
    its('status') { should eq 'RUNNING' }
    its('machine_type') { should match 'n1-standard-1' }
  end
end
My output.tf
output "gcloud_project" {
description = "The name of the GCP project to deploy against. We need this output to pass the value to tests."
value = "${var.project}"
}
The error I am getting is:
× instance: /mnt/c/Users/Github/terra-test-project/test/integration/kt_suite/controls/default.rb:4
× Control Source Code Error /mnt/c/Users/Github/terra-test-project/test/integration/kt_suite/controls/default.rb:4
bad URI(is not URI?): "https://compute.googleapis.com/compute/v1/projects/Input 'gcloud_project' does not have a value. Skipping test./zones/us-central1-c/instances/test"
Everything works if I directly declare the project name in the control block, but I obviously don't want to have to do that.
How can I get kitchen/inspec to use the terraform outputs?
Looks like this may just be due to a typo. You've listed gcp_project under attributes in your inspec.yml but gcloud_project everywhere else.
Not sure if this has been fixed, but I am using something like the below and it works pretty well. I assume the issue could be the way you are using the gcloud_project attribute.
Unit Test
dataset_name = input('dataset_name')
account_name = input('account_name')
project_id   = input('project_id')

control "gcp" do
  title "Google Cloud configuration"

  describe google_service_account(
    name: account_name,
    project: project_id
  ) do
    it { should exist }
  end

  describe google_bigquery_dataset(
    name: dataset_name,
    project: project_id
  ) do
    it { should exist }
  end
end
inspec.yml
name: big_query
depends:
  - name: inspec-gcp
    git: https://github.com/inspec/inspec-gcp.git
    tag: v1.8.0
supports:
  - platform: gcp
inputs:
  - name: dataset_name
    required: true
    type: string
  - name: account_name
    required: true
    type: string
  - name: project_id
    required: true
    type: string
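Back to the original problem: kitchen-terraform resolves InSpec attributes from Terraform outputs by name, and recent versions also let you map them explicitly in the verifier's systems block via attrs_outputs. A version-dependent sketch, assuming the output is named gcloud_project as above:

verifier:
  name: terraform
  systems:
    - name: default
      backend: gcp
      # map the InSpec attribute (key) to the Terraform output (value)
      attrs_outputs:
        gcloud_project: gcloud_project
      controls:
        - instance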

How do you set key/value secret in AWS secrets manager using Ansible?

The following code does not set key/value pairs for the secret; it only creates a plain string. But I want to create key/value pairs, and the documentation does not even mention them.
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Add string to AWS Secrets Manager
      aws_secret:
        name: 'testvar'
        state: present
        secret_type: 'string'
        secret: "i love devops"
      register: secret_facts
    - debug:
        var: secret_facts
If this behaves anything like the Secrets Manager CLI, then to set key/value pairs you should pass a JSON string like the below:
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Add string to AWS Secrets Manager
      aws_secret:
        name: 'testvar'
        state: present
        secret_type: 'string'
        secret: "{\"username\":\"bob\",\"password\":\"abc123xyz456\"}"
      register: secret_facts
    - debug:
        var: secret_facts
While the answer here is not "wrong", it will not work if you need to use variables to build your secrets. The reason is that when the string gets handed off to Jinja2 to resolve the variables, some variable juggling goes on which ends with the double quotes being replaced by single quotes, no matter what you do!
So the example above done with variables:
secret: "{\"username\":\"{{ myusername }}\",\"password\":\"{{ mypassword }}\"}"
Ends up as:
{'username': 'bob', 'password': 'abc123xyz456'}
And of course AWS fails to parse it. The solution is ridiculously simple and I found it here: https://stackoverflow.com/a/32014283/896690
If you put a space or a new line at the start of the string then it's fine!
secret: " {\"username\":\"{{ myusername }}\",\"password\":\"{{ mypassword }}\"}"

Parsing variables in ansible inventory in python

I'm trying to use Python to parse Ansible variables specified in an inventory file like the one below:
[webservers]
foo.example.com type=news
bar.example.com type=sports
[dbservers]
mongodb.local type=mongo region=us
mysql.local type=mysql region=eu
I want to be able to parse type=news for host foo.example.com in webservers, and type=mongo region=us for host mongodb.local under dbservers. Any help with this is greatly appreciated.
The play below
- name: List type=news hosts in the group webservers
  debug:
    msg: "{{ hostvars[item].inventory_hostname }}"
  loop: "{{ groups['webservers'] }}"
  when: hostvars[item].type == "news"

- name: List type=mongo and region=us hosts in the group dbservers
  debug:
    msg: "{{ hostvars[item].inventory_hostname }}"
  loop: "{{ groups['dbservers'] }}"
  when:
    - hostvars[item].type == "mongo"
    - hostvars[item].region == "us"
gives:
"msg": "foo.example.com"
"msg": "mongodb.local"
If the playbook is run on the host foo.example.com, you can get type=news simply by referencing "{{ type }}"; if you want to use it in when conditions, simply reference type.
If the playbook is run on the host mongodb.local, then the value of type will automatically be "mongo", and region will automatically be "us".
The values of these variables, if they are defined in the hosts file as you specified, are resolved automatically on the corresponding hosts.
Thus the playbook can be executed on all hosts, and you can read the value of type, for example:
- debug:
    msg: "{{ type }}"
On each of the hosts you will get the unique values that are defined in the hosts file.
I'm not sure that I understood the question correctly, but if it meant that on the foo.example.com host it was necessary to get a list of servers from the "webservers" group that have type=news, then the answer has already been given.
Rather than re-inventing the wheel, I suggest you have a look at how ansible itself parses ini files to turn them into an inventory object.
You could also easily get this info in JSON format with a very simple playbook (as suggested by @vladimirbotka) or with ansible-inventory -i hosts --list, or rewrite your inventory in YAML, which is much easier to parse with any external tool:
inventory.yaml
---
all:
  children:
    webservers:
      hosts:
        foo.example.com:
          type: news
        bar.example.com:
          type: sports
    dbservers:
      hosts:
        mongodb.local:
          type: mongo
          region: us
        mysql.local:
          type: mysql
          region: eu
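To instantiate the "very simple playbook" suggestion above, a quick illustrative sketch that prints each host's vars (run via ansible-playbook -i hosts show_vars.yml; the filename is arbitrary):

- hosts: all
  gather_facts: false
  tasks:
    # prints e.g. "mongodb.local: type=mongo region=us";
    # region falls back to n/a for the webservers group
    - debug:
        msg: "{{ inventory_hostname }}: type={{ type }} region={{ region | default('n/a') }}"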

Ansible gcp_compute inventory plugin - groups based on machine names

Consider the following config for ansible's gcp_compute inventory plugin:
plugin: gcp_compute
projects:
  - myproj
scopes:
  - https://www.googleapis.com/auth/compute
filters:
  - ''
groups:
  connect: '"connect" in list"'
  gcp: 'True'
auth_kind: serviceaccount
service_account_file: ~/.gsutil/key.json
This works for me, and will put all hosts in the gcp group as expected. So far so good.
However, I'd like to group my machines based on certain substrings appearing in their names. How can I do this?
Or, more broadly, how can I find a description of the various variables available to the jinja expressions in the groups dictionary?
The variables available are the keys available inside each of the items in the response, as listed here: https://cloud.google.com/compute/docs/reference/rest/v1/instances/list
So, for my example:
plugin: gcp_compute
projects:
  - myproj
scopes:
  - https://www.googleapis.com/auth/compute
filters:
  - ''
groups:
  connect: "'connect' in name"
  gcp: 'True'
auth_kind: serviceaccount
service_account_file: ~/.gsutil/key.json
To complement the accurate answer above: to choose machines based on certain substrings appearing in their names, you can add an expression to the filters parameter, for example:
filters:
  - 'name = gke*'
This lists only the instances whose names start with gke.
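Since gcp_compute builds on the constructed inventory plugin, you can also generate groups automatically with keyed_groups instead of writing one Jinja expression per group. A minimal sketch grouping by GCP labels (the gcp prefix is arbitrary):

keyed_groups:
  # creates one group per label key/value pair, e.g. gcp_env_prod
  - prefix: gcp
    key: labels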

Limit hosts using Workflow Template

I'm using Ansible AWX (Tower) and have a workflow template that executes several job templates one after the other, based on whether the previous execution was successful.
I noticed I can limit to a specific host when running a single template, and I'd like to apply this to a workflow; my guess is I would have to use the survey option to achieve this, but I'm not sure how.
I have tried to see if I can override the "hosts" value, and that failed like I expected it to.
How can I go about having it ask me at the beginning of the workflow for the hostname/IP, and not for every single template inside the workflow?
You have the set_stats option.
Let's suppose you have the following inventory:
10.100.10.1
10.100.10.3
10.100.10.6
Your inventory is called MyOfficeInventory. The first rule is that you need to use this same inventory across all your Templates in order to play with the hosts picked in the first one.
I want to ping only my 10.100.10.6 machine, so in the Template I choose MyOfficeInventory and limit to 10.100.10.6.
If we do:
---
- name: Ping
  hosts: all
  gather_facts: False
  connection: local
  tasks:
    - name: Ping
      ping:
We get:
TASK [Ping] ********************************************************************
ok: [10.100.10.6]
Cool! So from MyOfficeInventory only my selected host was pinged. Now, in my workflow, the next Template has MyOfficeInventory selected as well (this is the rule, as said). If I ping there, I will ping all of the hosts unless I limit again, so let's do the magic:
In your first Template do:
- name: add devices with connectivity to the "working_hosts" group
  group_by:
    key: working_hosts

- name: "Artifact URL of test results to Tower Workflows"
  set_stats:
    data:
      myinventory: "{{ groups['working_hosts'] }}"
  run_once: True
Be careful, because for your playbook,
groups['all']
means:
"groups['all']": [
"10.100.10.1",
"10.100.10.3",
"10.100.10.6"
]
And with your new working_hosts group, you get only your current host:
"groups['working_hosts']": [
"10.100.10.6"
]
So now you have your brand new myinventory variable.
Use it like this in the rest of the playbooks assigned to your Templates:
- name: Ping
  hosts: "{{ myinventory }}"
  gather_facts: False
  tasks:
    - name: Ping
      ping:
Your inventory variable will be transferred and you will get:
ok: [10.100.10.6]
One step further: do you want to select your host from a Survey? Create one with your hostname as the input, and keep your first playbook as:
- name: Ping
  hosts: "{{ mysurveyhost }}"
  gather_facts: False
  tasks:
    - name: Ping
      ping: