Actually I am struggling with the routing rules definition for the GCP load balancer. For my use case I need a regular expression, so I took a snippet from the examples and tried to adapt it to my needs:
defaultService: projects/***/global/backendServices/***
name: path-matcher-1
routeRules:
- matchRules:
  - prefixMatch: /mobile/
    headerMatches:
    - headerName: User-Agent
      regexMatch: .*Android.*
  priority: 2
  routeAction:
    weightedBackendServices:
    - backendService: projects/***/global/backendServices/***
      weight: 100
    urlRewrite:
      pathPrefixRewrite: android
- matchRules:
  - prefixMatch: /
  priority: 1
  routeAction:
    weightedBackendServices:
    - backendService: projects/***/global/backendServices/***
      weight: 100
But no matter what I do, I always get the following error:
Is there anyone who can tell me what I'm doing wrong?
Thanks!
I was able to find the answer in the documentation:
https://cloud.google.com/compute/docs/reference/rest/v1/urlMaps
pathMatchers[].routeRules[].matchRules[].headerMatches[].regexMatch
➡ regexMatch only applies to load balancers that have loadBalancingScheme set to INTERNAL_SELF_MANAGED
and that is not my case.
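For anyone else hitting this: the other headerMatches predicates (exactMatch, prefixMatch, suffixMatch, presentMatch) do not carry that restriction in the urlMaps reference. None of them can express a contains-style .*Android.* test, but if you control the client you can match on a dedicated header instead. A rough sketch, where X-Client-Platform is a hypothetical header the mobile app would have to send:

routeRules:
- matchRules:
  - prefixMatch: /mobile/
    headerMatches:
    - headerName: X-Client-Platform   # hypothetical header set by the client app
      exactMatch: android
  priority: 2
  routeAction:
    weightedBackendServices:
    - backendService: projects/***/global/backendServices/***
      weight: 100
    urlRewrite:
      pathPrefixRewrite: android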
I am trying to automate the process of updating IPs to help engineers whitelist IPs on an AWS WAF IP set. aws waf-regional update-ip-set returns a ChangeToken which has to be used in the next run of the update-ip-set command.
I am trying to achieve this automation through a Rundeck job (community edition). Ideally engineers will not have access to the output of the previous job to retrieve the ChangeToken. What's the best way to accomplish this task?
You can hide the step output using the "Mask Log Output by Regex" output filter.
Take a look at the following job definition example: the first step is just a simulation of getting the token, and its output is hidden by the filter.
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: fcf8cf5d-697c-42a1-affb-9cda02183fdd
  loglevel: INFO
  name: TokenWorkflow
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo "abc123"
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'false'
            name: mytoken
            regex: \s*([^\s]+?)\s*
          type: key-value-data
        - config:
            maskOnlyValue: 'false'
            regex: .*
            replacement: '[SECURE]'
          type: mask-log-output-regex
    - exec: echo ${data.mytoken}
    keepgoing: false
    strategy: node-first
  uuid: fcf8cf5d-697c-42a1-affb-9cda02183fdd
The second step uses that token (to demonstrate data passing between steps it just prints the value generated in the first step; in your case, of course, the token would be consumed by another command).
Update (passing the data value to another job)
Just use the job reference step and put the data variable name on the remote job option as an argument.
Check the following example:
The first job generates the token (or gets it from your service, hiding the result like in the first example). Then, it calls another job that "receives" that data in an option (Job Reference Step > Arguments) using this format:
-token ${data.mytoken}
Where -token is the target job option name, and ${data.mytoken} is the current data variable name.
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: fcf8cf5d-697c-42a1-affb-9cda02183fdd
  loglevel: INFO
  name: TokenWorkflow
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo "abc123"
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'false'
            name: mytoken
            regex: \s*([^\s]+?)\s*
          type: key-value-data
        - config:
            maskOnlyValue: 'false'
            regex: .*
            replacement: '[SECURE]'
          type: mask-log-output-regex
    - jobref:
        args: -token ${data.mytoken}
        group: ''
        name: ChangeRules
        nodeStep: 'true'
        uuid: b6975bbf-d6d0-411e-98a6-8ecb4c3f7431
    keepgoing: false
    strategy: node-first
  uuid: fcf8cf5d-697c-42a1-affb-9cda02183fdd
This is the job that receives the token and does something with it; the example just echoes the token, but the idea is to use it internally to perform some action (as in the first example).
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: b6975bbf-d6d0-411e-98a6-8ecb4c3f7431
  loglevel: INFO
  name: ChangeRules
  nodeFilterEditable: false
  options:
  - name: token
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo ${option.token}
    keepgoing: false
    strategy: node-first
  uuid: b6975bbf-d6d0-411e-98a6-8ecb4c3f7431
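In a real setup, the first step of TokenWorkflow would fetch the token from AWS rather than echoing a constant. A minimal sketch of that exec line, assuming the Rundeck node has AWS CLI credentials configured (aws waf-regional get-change-token returns the ChangeToken that update-ip-set expects):

    - exec: aws waf-regional get-change-token --query ChangeToken --output text

The key-value-data filter then captures the printed token into ${data.mytoken} exactly as above, while the mask filter keeps it out of the visible log.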
So I have my CloudFormation template defined to include a Parameters section with several parameters, including:
Parameters:
  DefaultLimit:
    Type: Number
I also have a GraphQL API defined in which I am using AppSync PIPELINE resolvers to run multiple operations in sequence.
QueryResolver:
  Type: AWS::AppSync::Resolver
  DependsOn: AppSyncSchema
  Properties:
    ApiId: !GetAtt [AppSyncAPI, ApiId]
    TypeName: Query
    FieldName: getData
    Kind: PIPELINE
    PipelineConfig:
      Functions:
      - !GetAtt AuthFunction.FunctionId
      - !GetAtt ScanDataFunction.FunctionId
    RequestMappingTemplate: |
      {
        # Inject value of DefaultLimit in $context object
      }
    ResponseMappingTemplate: "$util.toJson($context.result)"
That all works as expected, except for injecting CFN parameter values in mapping templates.
The issue I am having is this: I would like to pass the value of DefaultLimit to the before RequestMappingTemplate so that the value is available to the ScanDataFunction. The goal is for that value to be used as the default limit when the second function does, say, a DynamoDB scan operation and returns paginated results.
My current approach is to hardcode a default value of 20 for limit in the request mapping template of the ScanDataFunction. I am using a DynamoDB resolver for this operation. Instead, I would like to inject the parameter value because it would give me the flexibility to set different default values for different deployment environments.
Any help with this would be appreciated.
The | character in YAML starts a block scalar, and everything entered indented after it is treated as literal text.
CloudFormation isn't going to process any of that. The solution I have generally seen is the Join intrinsic function. It ends up looking pretty bad and can be difficult to maintain, so I recommend using it sparingly. Below is a rough possible example:
Parameters:
  DefaultLimit:
    Type: Number
Resources:
  QueryResolver:
    Type: AWS::AppSync::Resolver
    DependsOn: AppSyncSchema
    Properties:
      ApiId: !GetAtt [AppSyncAPI, ApiId]
      TypeName: Query
      FieldName: getData
      Kind: PIPELINE
      PipelineConfig:
        Functions:
        - !GetAtt AuthFunction.FunctionId
        - !GetAtt ScanDataFunction.FunctionId
      RequestMappingTemplate:
        Fn::Join:
        - ""
        - - "Line 1 of the template\n"
          - "Line 2 of the template, DefaultLimit="
          - Ref: DefaultLimit
          - "\nLine 3 of the template"
      ResponseMappingTemplate: "$util.toJson($context.result)"
Untested code warning
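A possibly cleaner alternative (equally untested) is Fn::Sub, which substitutes ${DefaultLimit} inside a block scalar without the Join plumbing. The sketch below stashes the value for later pipeline functions; $util.qr and $ctx.stash are standard AppSync VTL, but the stash key name is my own choice. Note that any literal ${...} the VTL itself needed would have to be escaped as ${!...} so Fn::Sub leaves it alone:

RequestMappingTemplate:
  Fn::Sub: |
    ## CloudFormation replaces ${DefaultLimit} before AppSync ever sees this template
    $util.qr($ctx.stash.put("defaultLimit", ${DefaultLimit}))
    {}

The ScanDataFunction's own request mapping template could then read $ctx.stash.defaultLimit instead of a hardcoded 20.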
I was wondering if there's a way to perform multiple exact matches within Envoy.
For example, I am interested in directing traffic to two different clusters based on a header attribute:
- match:
    prefix: "/service/2"
    headers:
    - name: X-SOME-TAG
      exact_match: "SomeString"
This works as expected, but is it possible to specify a list of strings to match against in exact_match, e.g. exact_match: ["some_string", "another"]?
I can also write it as,
- match:
    prefix: "/service/2"
    headers:
    - name: X-SOME-TAG
      exact_match: "some_string"
  route:
    cluster: service1
- match:
    prefix: "/service/2"
    headers:
    - name: X-SOME-TAG
      exact_match: "another"
  route:
    cluster: service1
But I'm not sure if this is unnecessarily verbose or whether it is the right way.
Or do I have to use something like regex_match or patterns for this?
Sorry, I just haven't been able to get this to work while testing with the front-proxy example from the Envoy documentation, hence I figured I would put this out there. Thanks!
I'm not sure from your question whether you want to AND the matches or OR them. If both have to match (AND), both matches need to be under the same - match: section; otherwise, put them in separate - match: sections. The second example you provided above is the equivalent of an OR, i.e. if X-SOME-TAG == "some_string" OR X-SOME-TAG == "another", route to service1.
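For the AND case, a minimal sketch (X-ANOTHER-TAG is just an illustrative second header; all entries under headers must match for the route to be taken):

- match:
    prefix: "/service/2"
    headers:
    - name: X-SOME-TAG
      exact_match: "some_string"
    - name: X-ANOTHER-TAG          # hypothetical second header; both conditions must hold
      exact_match: "another"
  route:
    cluster: service1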
For the OR case on a single header, you can try a regex match instead:
- match:
    prefix: "/service/2"
    headers:
    - name: X-SOME-TAG
      safe_regex_match:
        google_re2: {}
        regex: "some_string|another"
  route:
    cluster: service1
Consider the following config for Ansible's gcp_compute inventory plugin:
plugin: gcp_compute
projects:
- myproj
scopes:
- https://www.googleapis.com/auth/compute
filters:
- ''
groups:
  connect: '"connect" in list'
  gcp: 'True'
auth_kind: serviceaccount
service_account_file: ~/.gsutil/key.json
This works for me, and will put all hosts in the gcp group as expected. So far so good.
However, I'd like to group my machines based on certain substrings appearing in their names. How can I do this?
Or, more broadly, how can I find a description of the various variables available to the jinja expressions in the groups dictionary?
The variables available are the keys available inside each of the items in the response, as listed here: https://cloud.google.com/compute/docs/reference/rest/v1/instances/list
So, for my example:
plugin: gcp_compute
projects:
- myproj
scopes:
- https://www.googleapis.com/auth/compute
filters:
- ''
groups:
  connect: "'connect' in name"
  gcp: 'True'
auth_kind: serviceaccount
service_account_file: ~/.gsutil/key.json
To complement the accurate answer above: to select machines based on certain substrings appearing in their names, you can add an expression like this to the 'filters' parameter:
filters:
- 'name = gke*'
This filter lists only the instances whose names start with gke.
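To double-check what the plugin builds, you can dump the constructed inventory graph and the hostvars of a single machine with ansible-inventory (the config file name and host name here are assumptions):

ansible-inventory -i inventory.gcp.yml --graph
ansible-inventory -i inventory.gcp.yml --host my-gke-node-1

The --host output shows exactly which keys from the instances.list response are available to the Jinja expressions in the groups dictionary.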
I'm using Ansible AWX (Tower) and have a template workflow that executes several templates one after the other, based on if the previous execution was successful.
I noticed I can limit to a specific host when running a single template, and I'd like to apply this to a workflow. My guess is I would have to use the survey option to achieve this, however I'm not sure how.
I have tried to see if I can override the "hosts" value and that failed like I expected it to.
How can I go about having it ask me at the beginning of the workflow for the hostname/ip and not for every single template inside the workflow?
You have the set_stats option.
Let's suppose you have the following inventory:
10.100.10.1
10.100.10.3
10.100.10.6
Your inventory is called MyOfficeInventory. The first rule is that you need this same inventory across all your Templates to play with the hosts selected in the first one.
I want to ping only my 10.100.10.6 machine, so in the Template I choose MyOfficeInventory and limit to 10.100.10.6.
If we do:
---
- name: Ping
  hosts: all
  gather_facts: False
  connection: local
  tasks:
  - name: Ping
    ping:
We get:
TASK [Ping] ********************************************************************
ok: [10.100.10.6]
Cool! So from MyOfficeInventory only my selected host was pinged. Now, in my workflow, the next Template also has MyOfficeInventory selected (this is the rule mentioned above). If I ping, I will ping all of the hosts unless I limit again, so let's do the magic.
In your first Template do:
- name: add devices with connectivity to the "working_hosts" group
  group_by:
    key: working_hosts

- name: "Artifact URL of test results to Tower Workflows"
  set_stats:
    data:
      myinventory: "{{ groups['working_hosts'] }}"
  run_once: True
Be careful, because for your playbook,
groups['all']
means:
"groups['all']": [
"10.100.10.1",
"10.100.10.3",
"10.100.10.6"
]
And with your new working_hosts group, you get only your current host:
"groups['working_hosts']": [
"10.100.10.6"
]
So now you have your brand new myinventory variable.
Use it like this in the rest of your Playbooks assigned to your Templates:
- name: Ping
  hosts: "{{ myinventory }}"
  gather_facts: False
  tasks:
  - name: Ping
    ping:
Your inventory variable will be transferred and you will get:
ok: [10.100.10.6]
One step further: do you want to select your host from a Survey?
Create one with your hostname input and keep your first Playbook as:
- name: Ping
  hosts: "{{ mysurveyhost }}"
  gather_facts: False