Multiple exact matches within Envoy proxy - Istio

I was wondering if there's a way to perform multiple exact matches within Envoy.
For example, I'm interested in directing traffic to two different clusters based on a header attribute:
- match:
    prefix: "/service/2"
    headers:
    - name: X-SOME-TAG
      exact_match: "SomeString"
This works as expected, but is it possible to specify a list of strings to match against in exact_match, e.g. exact_match: ["some_string", "another"]?
I can also write it as:
- match:
    prefix: "/service/2"
    headers:
    - name: X-SOME-TAG
      exact_match: "some_string"
  route:
    cluster: service1
- match:
    prefix: "/service/2"
    headers:
    - name: X-SOME-TAG
      exact_match: "another"
  route:
    cluster: service1
But I'm not sure whether this is unnecessarily verbose or whether it's the right way.
Or do I have to use something like regex_match or patterns for this?
Sorry, I just haven't been able to get this to work while testing with the front-proxy example in the Envoy documentation, hence I figured I'd put this out there. Thanks!

I'm not sure from your question whether you want to AND the matches or OR them. If both need to match (AND), both matchers must be under the same - match: section; otherwise, put them in separate - match: sections. The second example you provided above is the equivalent of an OR, i.e. if X-SOME-TAG == "some_string" OR X-SOME-TAG == "another", route to service1.
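For the AND case, a sketch might look like the following (this uses a hypothetical second header X-OTHER-TAG for illustration; it is not from the original question). All entries in a single headers list must match:

```yaml
# Sketch: both header conditions must hold (AND) within one match block.
# X-OTHER-TAG is a hypothetical header name, used only for illustration.
- match:
    prefix: "/service/2"
    headers:
    - name: X-SOME-TAG
      exact_match: "some_string"
    - name: X-OTHER-TAG
      exact_match: "another"
  route:
    cluster: service1
```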

You can try:
- match:
    prefix: "/service/2"
    headers:
    - name: X-SOME-TAG
      safe_regex_match:
        google_re2: {}
        regex: "some_string|another"
  route:
    cluster: service1
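One detail worth noting (my addition, not part of the original answer): Envoy's safe_regex match must match the entire header value, so the alternation above matches only the exact values "some_string" and "another", not substrings containing them. A quick Python sketch of that behaviour, using re.fullmatch as a stand-in for RE2's whole-value matching:

```python
import re

# Sketch (not Envoy itself): safe_regex_match requires the pattern to match
# the ENTIRE header value, similar to Python's re.fullmatch.
pattern = re.compile(r"some_string|another")

def matches(header_value: str) -> bool:
    return pattern.fullmatch(header_value) is not None

print(matches("some_string"))    # True
print(matches("another"))        # True
print(matches("some_string_x"))  # False: only whole-value matches count
```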

Related

AWS WAF Update IP set Automation

I am trying to automate the process of updating IPs to help engineers whitelist IPs in an AWS WAF IP set. aws waf-regional update-ip-set returns a ChangeToken, which has to be used in the next run of the update-ip-set command.
I am trying to achieve this automation through a Rundeck job (community edition). Ideally engineers will not have access to the output of the previous job to retrieve the ChangeToken. What's the best way to accomplish this task?
You can hide the step output using the "Mask Log Output by Regex" output filter.
Take a look at the following job definition example; the first step is just a simulation of getting the token, but its output is hidden by the filter.
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: fcf8cf5d-697c-42a1-affb-9cda02183fdd
  loglevel: INFO
  name: TokenWorkflow
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo "abc123"
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'false'
            name: mytoken
            regex: s*([^\s]+?)\s*
          type: key-value-data
        - config:
            maskOnlyValue: 'false'
            regex: .*
            replacement: '[SECURE]'
          type: mask-log-output-regex
    - exec: echo ${data.mytoken}
    keepgoing: false
    strategy: node-first
  uuid: fcf8cf5d-697c-42a1-affb-9cda02183fdd
The second step uses that token (to demonstrate data passing between steps, it prints the data value generated in the first step; in your case, of course, the token would be consumed by another command).
Update (passing the data value to another job)
Just use the job reference step and pass the data variable name to the remote job's option as an argument.
Check the following example:
The first job generates the token (or gets it from your service, hiding the result as in the first example). Then it calls another job that "receives" that data in an option (Job Reference Step > Arguments) using this format:
-token ${data.mytoken}
Where -token is the target job option name, and ${data.mytoken} is the current data variable name.
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: fcf8cf5d-697c-42a1-affb-9cda02183fdd
  loglevel: INFO
  name: TokenWorkflow
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo "abc123"
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'false'
            name: mytoken
            regex: s*([^\s]+?)\s*
          type: key-value-data
        - config:
            maskOnlyValue: 'false'
            regex: .*
            replacement: '[SECURE]'
          type: mask-log-output-regex
    - jobref:
        args: -token ${data.mytoken}
        group: ''
        name: ChangeRules
        nodeStep: 'true'
        uuid: b6975bbf-d6d0-411e-98a6-8ecb4c3f7431
    keepgoing: false
    strategy: node-first
  uuid: fcf8cf5d-697c-42a1-affb-9cda02183fdd
This is the job that receives the token and does something with it; the example just prints the token, but the idea is to use it internally to perform some action (as in the first example).
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: b6975bbf-d6d0-411e-98a6-8ecb4c3f7431
  loglevel: INFO
  name: ChangeRules
  nodeFilterEditable: false
  options:
  - name: token
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo ${option.token}
    keepgoing: false
    strategy: node-first
  uuid: b6975bbf-d6d0-411e-98a6-8ecb4c3f7431

GCP-loadbalancer routing rule with regular expression

I am struggling with the routing-rule definitions for the GCP load balancer. For my use case I need a regular expression, so I took a snippet from the examples and tried to adapt it to my needs:
defaultService: projects/***/global/backendServices/***
name: path-matcher-1
routeRules:
- matchRules:
  - prefixMatch: /mobile/
    headerMatches:
    - headerName: User-Agent
      regexMatch: .*Android.*
  priority: 2
  routeAction:
    weightedBackendServices:
    - backendService: projects/***/global/backendServices/***
      weight: 100
    urlRewrite:
      pathPrefixRewrite: android
- matchRules:
  - prefixMatch: /
  priority: 1
  routeAction:
    weightedBackendServices:
    - backendService: projects/***/global/backendServices/***
      weight: 100
But I can't get it to do what I want; I always get the following error:
Is there anyone who can tell me what I'm doing wrong?
Thanks!
I was able to find the answer in the documentation:
https://cloud.google.com/compute/docs/reference/rest/v1/urlMaps
pathMatchers[].routeRules[].matchRules[].headerMatches[].regexMatch
➡ regexMatch only applies to load balancers that have loadBalancingScheme set to INTERNAL_SELF_MANAGED
and that is not my case

Istio authorization - Pattern matching in Istio 'paths' field

I want to create a rule in an Istio authorization policy:
- to:
  - operation:
      methods: [ "POST" ]
      paths: [ "/data/api/v1/departments/*/users/*/position" ]
  when:
  - key: request.auth.claims[resource_access][roles]
    values: [ "edit" ]
I want to use path variables here (in the places with '*'). What should I put instead of '*' to make it work?
It doesn't work in the current setup: I get 'RBAC denied'. I have the role 'edit', and the path to that role is okay; it works fine for endpoints without '*' signs.
Posting this answer as a community wiki, as a similar question has already been answered here:
Stackoverflow.com: Answer: Istio authorization - Pattern matching in Istio 'paths' field
Part of the question:
- operation:
    methods: ["PUT"]
    paths: ["/my-service/docs/*/activate/*"]
Answer:
According to istio documentation:
Rule
Rule matches requests from a list of sources that perform a list of
operations subject to a list of conditions. A match occurs when at
least one source, operation and condition matches the request. An
empty rule is always matched.
Any string field in the rule supports Exact, Prefix, Suffix and
Presence match:
Exact match: “abc” will match on value “abc”.
Prefix match: “abc*” will match on value “abc” and “abcd”.
Suffix match: “*abc” will match on value “abc” and “xabc”.
Presence match: “*” will match when value is not empty.
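To make those four rules concrete, here is a minimal Python sketch (my own illustration, not Istio code) of the documented match semantics; it also shows why a path with a wildcard in the middle fails:

```python
# Sketch of Istio's Exact / Prefix / Suffix / Presence string matching,
# as described in the documentation quoted above.
def istio_match(pattern: str, value: str) -> bool:
    if pattern == "*":                       # presence match: any non-empty value
        return value != ""
    if pattern.startswith("*"):              # suffix match: "*abc"
        return value.endswith(pattern[1:])
    if pattern.endswith("*"):                # prefix match: "abc*"
        return value.startswith(pattern[:-1])
    return value == pattern                  # exact match

print(istio_match("abc*", "abcd"))  # True  (prefix)
print(istio_match("*abc", "xabc"))  # True  (suffix)
# A mid-path wildcard is treated as a literal '*' inside a prefix match,
# so this does NOT match:
print(istio_match("/my-service/docs/*/activate/*", "/my-service/docs/1/activate/2"))  # False
```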
So AuthorizationPolicy does support wildcards, but I think the issue is with the */activate/* path: paths can use wildcards only at the start, at the end, or as the whole string; a double wildcard just doesn't work.
There are related open github issues about that:
https://github.com/istio/istio/issues/16585
https://github.com/istio/istio/issues/25021

How do you set key/value secret in AWS secrets manager using Ansible?

The following code does not set a key/value pair for the secret; it only creates a plain string. But I want to create key/value pairs, and the documentation does not even mention them.
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
  - name: Add string to AWS Secrets Manager
    aws_secret:
      name: 'testvar'
      state: present
      secret_type: 'string'
      secret: "i love devops"
    register: secret_facts
  - debug:
      var: secret_facts
If this works anything like the Secrets Manager CLI, then to set key/value pairs you should create the secret as a JSON string, like the below:
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
  - name: Add string to AWS Secrets Manager
    aws_secret:
      name: 'testvar'
      state: present
      secret_type: 'string'
      secret: "{\"username\":\"bob\",\"password\":\"abc123xyz456\"}"
    register: secret_facts
  - debug:
      var: secret_facts
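To see why this works (my addition): the escaped secret string is plain JSON, which Secrets Manager decodes into key/value pairs. A quick Python check of what it parses to:

```python
import json

# The escaped string from the answer above, exactly as YAML would pass it on:
secret = "{\"username\":\"bob\",\"password\":\"abc123xyz456\"}"
print(json.loads(secret))  # {'username': 'bob', 'password': 'abc123xyz456'}
```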
While the answer here is not "wrong", it will not work if you need to use variables to build your secrets. The reason is that when the string gets handed off to Jinja2 to resolve the variables, some variable juggling goes on which ends with the double quotes being replaced by single quotes, no matter what you do!
So the example above done with variables:
secret: "{\"username\":\"{{ myusername }}\",\"password\":\"{{ mypassword }}\"}"
Ends up as:
{'username:'bob','password':'abc123xyz456'}
And of course AWS fails to parse it. The solution is ridiculously simple, and I found it here: https://stackoverflow.com/a/32014283/896690
If you put a space or a newline at the start of the string, then it's fine:
secret: " {\"username\":\"{{ myusername }}\",\"password\":\"{{ mypassword }}\"}"
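An alternative that sidesteps the quoting problem entirely (my suggestion, not part of the original answer) is to build the dictionary in Jinja2 and serialize it with the to_json filter, so you never hand-escape quotes at all:

```yaml
# Hypothetical alternative, assuming the same myusername/mypassword variables:
secret: "{{ {'username': myusername, 'password': mypassword} | to_json }}"
```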

Ansible gcp_compute inventory plugin - groups based on machine names

Consider the following config for ansible's gcp_compute inventory plugin:
plugin: gcp_compute
projects:
- myproj
scopes:
- https://www.googleapis.com/auth/compute
filters:
- ''
groups:
  connect: '"connect" in list"'
  gcp: 'True'
auth_kind: serviceaccount
service_account_file: ~/.gsutil/key.json
This works for me and will put all hosts in the gcp group as expected. So far so good.
However, I'd like to group my machines based on certain substrings appearing in their names. How can I do this?
Or, more broadly, where can I find a description of the variables available to the Jinja expressions in the groups dictionary?
The variables available are the keys available inside each of the items in the response, as listed here: https://cloud.google.com/compute/docs/reference/rest/v1/instances/list
So, for my example:
plugin: gcp_compute
projects:
- myproj
scopes:
- https://www.googleapis.com/auth/compute
filters:
- ''
groups:
  connect: "'connect' in name"
  gcp: 'True'
auth_kind: serviceaccount
service_account_file: ~/.gsutil/key.json
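To illustrate what those groups expressions do (my own sketch, not the plugin's actual code): each expression is evaluated per instance against the fields of the instances.list response, such as name, and the instance joins the group when the expression is truthy. Roughly:

```python
# Sketch of the grouping semantics; instance names are made up for the example.
instances = [
    {"name": "gke-connect-a"},
    {"name": "gke-web-b"},
    {"name": "db-connect-c"},
]

# Equivalent of   connect: "'connect' in name"
connect_group = [i["name"] for i in instances if "connect" in i["name"]]
print(connect_group)  # ['gke-connect-a', 'db-connect-c']
```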
To complement your accurate answer: to select machines based on certain substrings appearing in their names, you can add an expression like this to the 'filters' parameter:
filters:
- 'name = gke*'
This lists only the instances whose names start with gke.