json parse extract substring - regex

I am trying to extract the credentials client secret from the Cloud Foundry env JSON output.
cf env myapp
gives exactly the following (it is not proper JSON, which is why I can't use jq directly):
Getting env variables for app icm in org myorg / space myspace as
xxyy...
OK
{
"myenv_env_json": {
"http_proxy": "http://mycompany-svr-proxy-qa.mycompany.com:7070",
"https_proxy": "http://mycompany-svr-proxy-qa.mycompany.com:7070",
"no_proxy": "*.mycompany.com"
},
"running_env_json": {},
"system_env_json": {
"VCAP_SERVICES": {
"user-provided": [
{
"name": "myapp-parameters",
"instance_name": "myapp-parameters",
"binding_name": null,
"credentials": {
"auth-domain": "https://sso.login.run-np.mycompany.com",
"backend-url-other": "https://myservice-other.apps-np.mycompany.com",
"client-secret": "121322332-32322-23232-232-32-23232",
"stage": "mystg",
"backend-url": "https://myservice-other.apps-np.mycompany.com",
"client-secret-other": "121322332-32322-23232-232-32-23232"
},
"syslog_drain_url": "",
"volume_mounts": [],
"label": "user-provided",
"tags": []
},
{
"name": "appdynamics",
"instance_name": "appdynamics",
"binding_name": null,
"credentials": {
"account-access-key": "1213232-232-322-2322323-2323232-311",
"account-name": "customer1",
"application-name": "myenv-dev",
"host-name": "appdx-qa.mycompany.com",
"node-name": "$(ruby -e \"require 'json'; a = JSON.parse(ENV['VCAP_APPLICATION']); puts \\\"#{a['application_name']}-#{a['cf_api'].split(/\\.|-/)[2]}:#{a['instance_index']}\\\"\")",
"port": "9401",
"ssl-enabled": "true",
"tier-name": "$(ruby -e \"require 'json'; a = JSON.parse(ENV['VCAP_APPLICATION']); puts \\\"#{a['application_name']}-#{a['cf_api'].split(/\\.|-/)[2]}\\\"\")",
"version": "4.2.7_1"
},
"syslog_drain_url": "",
"volume_mounts": [],
"label": "user-provided",
"tags": []
}
],
"p-identity": [
{
"name": "sso",
"instance_name": "sso",
"binding_name": null,
"credentials": {
"auth_domain": "https://sso.login.run-np.mycompany.com",
"client_secret": "123232-23232-2323243-242323r3r",
"client_id": "afdvdf-dvdfdd-fgdgdf-d23232"
},
"syslog_drain_url": null,
"volume_mounts": [],
"label": "p-identity",
"provider": null,
"plan": "sso",
"tags": []
}
]
}
},
"application_env_json": {
"VCAP_APPLICATION": {
"cf_api": "https://api.run-np.mycompany.com",
"limits": {
"fds": 16384
},
"application_name": "myapp",
"application_uris": [
"myapp-dev.apps-np.mycompany.com"
],
"name": "myapp",
"space_name": "myapp-dev",
"space_id": "392929-23223-2323-2322-2322",
"uris": [
"myapp-dev.apps-np.mycompany.com"
],
"users": null,
"application_id": "fwew78cc-wewc5c-dfd8a7-89d5-fdfefwewb"
}
}
}
User-Provided:
APP_ENV: development
GRANT_TYPE: authorization_code
SSO_AUTO_APPROVED_SCOPES: openid
SSO_IDENTITY_PROVIDERS: mycompany-single-signon
SSO_LAUNCH_URL: https://myapp-dev.apps-np.mycompany.com/
SSO_REDIRECT_URIS: https://myapp-dev.apps-np.mycompany.com/callback,http://myapp-dev.apps-np.mycompany.com/callback
SSO_SCOPES: openid,user_attributes
callback_url: https://myapp-dev.apps-np.mycompany.com/callback
client_secret: secret
client_secret_other: secretother
No running env variables have been set
Staging Environment Variable Groups:
http_proxy: http://myapp-svr-proxy-qa.mycompany.com:7070
https_proxy: http://myapp-svr-proxy-qa.mycompany.com:7070
no_proxy: *.mycompany.com
Here is what I am trying to use; so far, no luck extracting the p-identity sub-JSON. What is wrong with my sed?
cf env myapp|sed 's/.*\(p-identity[^}].*}\).*/\1/p'
My expected output should be as follows:
"p-identity": [
{
"name": "sso",
"instance_name": "sso",
"binding_name": null,
"credentials": {
"auth_domain": "https://sso.login.run-np.mycompany.com",
"client_secret": "123232-23232-2323243-242323r3r",
"client_id": "afdvdf-dvdfdd-fgdgdf-d23232"
}

I found a dirty workaround; it may not be efficient, but it works for now:
cf env myapp|sed 1,4d|sed -n '/User-Provided:/q;p'|jq -c -r '.VCAP_SERVICES."p-identity"[0].credentials.client_secret'| head -n1

In your case it may be easier to pipe the output to grep to extract the JSON, then use jq to extract the field that you want, for example:
cf env myapp | grep -oz '{.*}' | jq 'your filter here'
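As a concrete sketch of that approach for the client secret in question (assuming GNU grep; the tr step strips the trailing NUL byte that grep -z appends, which some jq builds complain about, and the jq path follows the JSON structure shown above):
cf env myapp | grep -oz '{.*}' | tr -d '\0' | jq -r '.system_env_json.VCAP_SERVICES."p-identity"[0].credentials.client_secret'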

Related

Removing ids from Postman Collection with bash script - sed and regex

I'm trying to solve an issue with Postman Collections.
Test scripts added to a collection generate an additional field, "id".
The id field changes after each export of the collection to a file.
Because of this, PRs with changes to Postman Collections are very hard to read.
I want to solve the issue with a git pre-commit hook and a bash script that removes all ids from the script objects of the collection.
There are three possible locations of the id in a script object:
First element of object
"script":{
"id": "83d9076e-64c7-47fa-9b50-b7635718c925",
"exec": [
"console.log(\"foo\");"
],
"type": "text/javascript"
}
Middle of object
"script":{
"exec": [
"console.log(\"foo\");"
],
"id": "83d9076e-64c7-47fa-9b50-b7635718c925",
"type": "text/javascript"
}
End of object
"script":{
"exec": [
"console.log(\"foo\");"
],
"type": "text/javascript",
"id": "83d9076e-64c7-47fa-9b50-b7635718c925"
}
From a regex point of view, cases 1 and 2 are the same:
.*"id": "[a-f0-9-]*",
Case 3 is different, and the regex which handles this case is:
,\n.*"id": "[a-f0-9-]*",
As I mentioned before, I want to use these regexps in a bash script:
postmanClean.sh
#!/bin/bash
COLLECTION_FILES=$(find . -type f -name "*postman_collection.json")
for POSTMAN_COLLECTION in ${COLLECTION_FILES}
do
echo "Harmonizing Postman $POSTMAN_COLLECTION"
sed -i -e 's/.*"id": "[a-f0-9-]*"\,//' ${POSTMAN_COLLECTION} # Remove test/script ID
sed -i -e 's/\,\n.*"id": "[a-f0-9-]*"//' ${POSTMAN_COLLECTION} # Remove test/script ID
done
The above solution is incorrect. I tried different options, but these regexps are not working.
How do I properly build them so that they work with the sed command?
Collection file:
demo.postman_collection.json
{
"info": {
"_postman_id": "258b2fe2-5768-47f8-9e82-70971bab6bbd",
"name": "demo",
"schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
},
"item": [
{
"name": "One",
"item": [
{
"name": "Demo 1",
"event": [
{
"listen": "test",
"script": {
"id": "83d9076e-64c7-47fa-9b50-b7635718c925",
"exec": [
"console.log(\"foo\");"
],
"type": "text/javascript"
}
}
],
"protocolProfileBehavior": {
"disableBodyPruning": true
},
"request": {
"method": "GET",
"header": [],
"body": {
"mode": "raw",
"raw": "foo"
},
"url": {
"raw": "https://postman-echo.com/delay/1",
"protocol": "https",
"host": [
"postman-echo",
"com"
],
"path": [
"delay",
"1"
]
}
},
"response": []
}
],
"protocolProfileBehavior": {}
},
{
"name": "Two",
"item": [
{
"name": "Demo 2",
"event": [
{
"listen": "test",
"script": {
"exec": [
"console.log(\"bar\");"
],
"type": "text/javascript",
"id": "facb28f7-c54d-46e2-adb2-4c929fd1edd3"
}
}
],
"protocolProfileBehavior": {
"disableBodyPruning": true
},
"request": {
"method": "GET",
"header": [],
"body": {
"mode": "raw",
"raw": "bar"
},
"url": {
"raw": "https://postman-echo.com/delay/2",
"protocol": "https",
"host": [
"postman-echo",
"com"
],
"path": [
"delay",
"2"
]
}
},
"response": []
},
{
"name": "Demo 3",
"event": [
{
"listen": "test",
"script": {
"exec": [
"console.log(\"foobar\");"
],
"id": "facb28f7-c54d-46e2-adb2-4c929fd1edd3",
"type": "text/javascript"
}
}
],
"protocolProfileBehavior": {
"disableBodyPruning": true
},
"request": {
"method": "GET",
"header": [],
"body": {
"mode": "raw",
"raw": "bar"
},
"url": {
"raw": "https://postman-echo.com/delay/3",
"protocol": "https",
"host": [
"postman-echo",
"com"
],
"path": [
"delay",
"3"
]
}
},
"response": []
}
],
"protocolProfileBehavior": {}
}
],
"protocolProfileBehavior": {}
}
I think jq is the right tool for this job, and the solution is as simple as walk(del(.id?)). Here is a rewrite of your script using jq:
#!/bin/bash
COLLECTION_FILES=$(find . -type f -name "*postman_collection.json")
for f in ${COLLECTION_FILES}
do
echo "Harmonizing Postman $f"
jq --indent 4 'walk(del(.id?))' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
And a demo (please note how jq takes care of removing the extra , after "type": "text/javascript", which would otherwise invalidate the JSON):
$ cp demo.postman_collection.json demo.postman_collection.json.bak
$ ./postmanClean.sh
Harmonizing Postman ./demo.postman_collection.json
$ diff demo.postman_collection.json.bak demo.postman_collection.json
17d16
< "id": "83d9076e-64c7-47fa-9b50-b7635718c925",
65,66c64
< "type": "text/javascript",
< "id": "facb28f7-c54d-46e2-adb2-4c929fd1edd3"
---
> "type": "text/javascript"
104d101
< "id": "facb28f7-c54d-46e2-adb2-4c929fd1edd3",
$
You don't need to distinguish the two patterns: you can use sed to match any line that contains the "id": "..." pattern and delete the entire matching line with the d command. That way you do not need to care about the newlines, the whitespace, or whether the trailing comma is there or not.
Executed on your example
sed -i '/"id": "[a-f0-9-]*"/d' demo.postman_collection.json
removes all the id lines (except the "_postman_id" of course).
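To apply that across every collection file the way your original script does, a minimal sketch reusing the find pattern from the question:
find . -type f -name "*postman_collection.json" -exec sed -i '/"id": "[a-f0-9-]*"/d' {} +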

admin-create-user command doesn't work properly

I'm trying to run the admin-create-user CLI command as shown in the official doc, but it doesn't seem to run properly.
I don't get all the attributes created, even though they were in the command. I always get only the last attribute typed in the command.
Am I doing something wrong? Is there any solution?
aws cognito-idp admin-create-user --user-pool-id us-west-2_aaaaaaaaa --username diego#example.com --user-attributes=Name=email,Value=kermit2#somewhere.com,Name=phone_number,Value="+15555551212" --message-action SUPPRESS
and I'm getting
{
"User": {
"Username": "diego#example.com",
"Enabled": true,
"UserStatus": "FORCE_CHANGE_PASSWORD",
"UserCreateDate": 1566470568.864,
"UserLastModifiedDate": 1566470568.864,
"Attributes": [
{
"Name": "sub",
"Value": "5dac8ce5-2997-4185-b862-86cf15aede77"
},
{
"Name": "phone_number",
"Value": "+15555551212"
}
]
}
}
instead of
{
"User": {
"Username": "7325c1de-b05b-4f84-b321-9adc6e61f4a2",
"Enabled": true,
"UserStatus": "FORCE_CHANGE_PASSWORD",
"UserCreateDate": 1548099495.428,
"UserLastModifiedDate": 1548099495.428,
"Attributes": [
{
"Name": "sub",
"Value": "7325c1de-b05b-4f84-b321-9adc6e61f4a2"
},
{
"Name": "phone_number",
"Value": "+15555551212"
},
{
"Name": "email",
"Value": "diego#example.com"
}
]
}
}
The shorthand notation that you're using, as referenced in the docs here, does indeed seem to be producing the results you are receiving.
A quick way around this issue is to switch to JSON format for the user-attributes option. If you modify the user-attributes option to use JSON, your command will look like this:
aws cognito-idp admin-create-user --user-pool-id us-west-2_aaaaaaaaa --username a567 --user-attributes '[{"Name": "email","Value": "kermit2#somewhere.com"},{"Name": "phone_number","Value": "+15555551212"}]' --message-action SUPPRESS
Which, when executed, produces this output:
{
"User": {
"Username": "a567",
"Enabled": true,
"UserStatus": "FORCE_CHANGE_PASSWORD",
"UserCreateDate": 1566489693.408,
"UserLastModifiedDate": 1566489693.408,
"Attributes": [
{
"Name": "sub",
"Value": "f6ff3e05-5f15-4a53-a45f-52e939b941fd"
},
{
"Name": "phone_number",
"Value": "+15555551212"
},
{
"Name": "email",
"Value": "kermit2#somewhere.com"
}
]
}
}
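If you would rather keep the shorthand notation, my understanding (an assumption on my part, not something stated in the answer above) is that each attribute needs to be its own space-separated Name=...,Value=... entry rather than one comma-joined string, so a command along these lines should also create both attributes:
aws cognito-idp admin-create-user --user-pool-id us-west-2_aaaaaaaaa --username diego#example.com --user-attributes Name=email,Value=kermit2#somewhere.com Name=phone_number,Value="+15555551212" --message-action SUPPRESS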

Cloudformation Init config files not writing the files

I am using AWS CloudFormation scripts to bring up an Auto Scaling EC2 instance; sample code is provided below.
"GatewayLabAutoScalingGroup": {
"Metadata": {
"AWS::CloudFormation::Init": {
"config": {
"commands": {
"a_install_pip": {
"command": "pip install requests boto3"
},
"c_restart_cron": {
"command": "service crond restart"
},
"d_restart_cfn_hup": {
"command": "service cfn-hup restart"
}
},
"files": {
"/etc/cfn/cfn-hup.conf": {
"content": {
"Fn::Join": [
"",
[
"[main]\nstack=",
{
"Ref": "AWS::StackName"
},
"\nregion=",
{
"Ref": "AWS::Region"
},
"\nverbose=true\ninterval=1\n"
]
]
},
"group": "root",
"mode": "000644",
"owner": "root"
},
"/usr/local/sbin/join_ad_script.sh": {
"content": {
"Fn::Join": [
"",
[
"sudo yum -y update\nsudo yum -y install sssd realmd krb5-workstation\nsudo realm leave\n\nDOMAIN=\"",
{
"Ref": "SimpleADDomain"
},
"\"\n\ncat <<EOF > /etc/resolv.conf\nnameserver ",
{
"Fn::Select": [
0,
{
"Fn::GetAtt": [
"WorkspacesSimplead",
"DnsIpAddresses"
]
}
]
},
"\nnameserver ",
{
"Fn::Select": [
1,
{
"Fn::GetAtt": [
"WorkspacesSimplead",
"DnsIpAddresses"
]
}
]
},
"\nEOF\n\n# empty all current sssd cache\nsss_cache -E\n\necho ",
{
"Ref": "SimpleADPassword"
},
" | sudo realm join -U Administrator#${DOMAIN^^} ${DOMAIN^^} --verbose\nsudo sed -re 's/^(PasswordAuthentication)([[:space:]]+)no/\\1\\2yes/' -i.`date -I` /etc/ssh/sshd_config\necho \"enumerate=true\" >> /etc/sssd/sssd.conf\nsudo service sssd restart\nsudo service sshd restart\n\n# empty all current sssd cache\nsss_cache -E\n"
]
]
},
"group": "root",
"mode": "000755",
"owner": "root"
}
}
}
}
},
"Properties": {
"AvailabilityZones": [
{
"Fn::Select": [
0,
{
"Fn::GetAZs": ""
}
]
}
],
"HealthCheckGracePeriod": 300,
"HealthCheckType": "EC2",
"LaunchConfigurationName": {
"Ref": "GatewayLabLaunchConfiguration"
},
"LoadBalancerNames": [
],
"MaxSize": 2,
"MinSize": 1,
"Tags": [
{
"Key": "Name",
"PropagateAtLaunch": true,
"Value": "hub-autoscaling"
}
],
"VPCZoneIdentifier": [
{
"Ref": "EC2SubnetSubnet1"
}
]
},
"Type": "AWS::AutoScaling::AutoScalingGroup",
"UpdatePolicy": {
"AutoScalingRollingUpdate": {
"MaxBatchSize": 1,
"MinInstancesInService": 1,
"PauseTime": "PT60S"
}
}
}
The files are not written on the instance.
The instance is coming up in a private VPC.
We have a proxy configured on port 8080.
This works fine when the instance is connected to a NAT Gateway without a proxy.
I do have ports 80, 22 and 443 opened up.
The UserData statements run first; they then call the cfn-init scripts.
There were some errors in the scripts and they never completed; one of the problems, as mentioned above, was my instance being behind a proxy.
Getting the proxy configuration in as part of the UserData helped.
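For reference, a minimal sketch of what "getting the proxy configuration in as part of the UserData" might look like; the proxy host, stack name and region below are placeholders (the question only gives the port), and the assumption is that exporting the standard proxy variables, plus passing the proxy to cfn-init via its --http-proxy/--https-proxy options, lets the helper scripts reach the CloudFormation endpoint from the private subnet:
#!/bin/bash
# Hypothetical UserData snippet (Amazon Linux helper-script path assumed)
export http_proxy=http://proxy.example.com:8080    # placeholder proxy host; port taken from the question
export https_proxy=http://proxy.example.com:8080
export no_proxy=169.254.169.254                    # keep instance metadata access direct, not via the proxy
/opt/aws/bin/cfn-init -v --stack <stack-name> --resource GatewayLabAutoScalingGroup --region <region> \
    --http-proxy "$http_proxy" --https-proxy "$https_proxy"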

AWS cli query to get to cloudfront "Domain Name" with specific origin name

This is my JSON output from the AWS CLI. I want to get xxxxxxxx.cloudfront.net using the origin DomainName example1.com with an AWS CLI --query only. (I know how to do this filtering with jq, awk, cut, and grep.)
"DistributionList": {
"Items": [
{
"WebACLId": "",
"Origins": {
"Items": [
{
"OriginPath": "",
"CustomOriginConfig": {
"OriginProtocolPolicy": "http-only",
"HTTPPort": 80,
"HTTPSPort": 443
},
"Id": "DNS for Media Delivery",
"DomainName": "example1.com"
}
],
"Quantity": 1
},
"DomainName": "xxxxxxxx.cloudfront.net",
},
{
"WebACLId": "",
"Origins": {
"Items": [
{
"OriginPath": "",
"CustomOriginConfig": {
"OriginProtocolPolicy": "http-only",
"HTTPPort": 80,
"HTTPSPort": 443
},
"Id": "DNS for Media Delivery",
"DomainName": "example2.com"
}
],
"Quantity": 1
},
"DomainName": "yyyyyyyyyy.cloudfront.net",
},
]
}
As the AWS CLI --query parameter works on top of JMESPath, you can build awesome filters.
The answer to your question will be:
--query "DistributionList.Items[].{DomainName: DomainName, OriginDomainName: Origins.Items[0].DomainName}[?contains(OriginDomainName, 'example1.com')] | [0]"
and it will return:
{
"DomainName": "xxxxxxxx.cloudfront.net",
"OriginDomainName": "example1.com"
}
P.S. Hope this helps someone.
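In case it is useful, the same lookup can also be written by filtering Items directly and returning just the CloudFront domain name; the list-distributions command name is an assumption here (the question only shows the JSON output):
aws cloudfront list-distributions --query "DistributionList.Items[?Origins.Items[0].DomainName=='example1.com'].DomainName | [0]" --output text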

Use regex in Powershell v2 to get values from a json file

How would I access the following values using regex in PowerShell, and assign each one to an individual variable?
id (i.e. get the value: TOKEN_ID) - under token
id (i.e. get the value: TENANT_ID) - under token, tenant
adminURL (i.e. get the value: http://10.100.0.222:35357/v2.0) - the first value under serviceCatalog,endpoints
As I am using PowerShell v2, I can't use the ConvertFrom-Json cmdlet. So far I've tried converting the document to an XML file using a third-party PS script, but it doesn't always get it right. I'd like to use regex, but I am not very comfortable with it.
$json = @"
{
"access": {
"metadata": {
"is_admin": 0,
"roles": [
"9fe2ff9ee4384b1894a90878d3e92bab"
]
},
"serviceCatalog": [
{
"endpoints": [
{
"adminURL": "http://10.100.0.222:8774/v2/TENANT_ID",
"id": "0eb78b6d3f644438aea327d9c57b7b5a",
"internalURL": "http://10.100.0.222:8774/v2/TENANT_ID",
"publicURL": "http://8.21.28.222:8774/v2/TENANT_ID",
"region": "RegionOne"
}
],
"endpoints_links": [],
"name": "nova",
"type": "compute"
},
{
"endpoints": [
{
"adminURL": "http://10.100.0.222:9696/",
"id": "3f4b6015a2f9481481ca03dace8acf32",
"internalURL": "http://10.100.0.222:9696/",
"publicURL": "http://8.21.28.222:9696/",
"region": "RegionOne"
}
],
"endpoints_links": [],
"name": "neutron",
"type": "network"
},
{
"endpoints": [
{
"adminURL": "http://10.100.0.222:8776/v2/TENANT_ID",
"id": "16f6416588f64946bdcdf4a431a8f252",
"internalURL": "http://10.100.0.222:8776/v2/TENANT_ID",
"publicURL": "http://8.21.28.222:8776/v2/TENANT_ID",
"region": "RegionOne"
}
],
"endpoints_links": [],
"name": "cinder_v2",
"type": "volumev2"
},
{
"endpoints": [
{
"adminURL": "http://10.100.0.222:8779/v1.0/TENANT_ID",
"id": "be48765ae31e425cb06036b1ebab694a",
"internalURL": "http://10.100.0.222:8779/v1.0/TENANT_ID",
"publicURL": "http://8.21.28.222:8779/v1.0/TENANT_ID",
"region": "RegionOne"
}
],
"endpoints_links": [],
"name": "trove",
"type": "database"
},
{
"endpoints": [
{
"adminURL": "http://10.100.0.222:9292",
"id": "1adfcb5414304f3596fb81edb2dfb514",
"internalURL": "http://10.100.0.222:9292",
"publicURL": "http://8.21.28.222:9292",
"region": "RegionOne"
}
],
"endpoints_links": [],
"name": "glance",
"type": "image"
},
{
"endpoints": [
{
"adminURL": "http://10.100.0.222:8777",
"id": "350f3b91d73f4b3ab8a061c94ac31fbb",
"internalURL": "http://10.100.0.222:8777",
"publicURL": "http://8.21.28.222:8777",
"region": "RegionOne"
}
],
"endpoints_links": [],
"name": "ceilometer",
"type": "metering"
},
{
"endpoints": [
{
"adminURL": "http://10.100.0.222:8000/v1/",
"id": "2198b0d32a604e75a5cc1e13276a813d",
"internalURL": "http://10.100.0.222:8000/v1/",
"publicURL": "http://8.21.28.222:8000/v1/",
"region": "RegionOne"
}
],
"endpoints_links": [],
"name": "heat-cfn",
"type": "cloudformation"
},
{
"endpoints": [
{
"adminURL": "http://10.100.0.222:8776/v1/TENANT_ID",
"id": "7c193c4683d849ca8e8db493722a4d8c",
"internalURL": "http://10.100.0.222:8776/v1/TENANT_ID",
"publicURL": "http://8.21.28.222:8776/v1/TENANT_ID",
"region": "RegionOne"
}
],
"endpoints_links": [],
"name": "cinder",
"type": "volume"
},
{
"endpoints": [
{
"adminURL": "http://10.100.0.222:8773/services/Admin",
"id": "11fac8254be74d7d906110f0069e5748",
"internalURL": "http://10.100.0.222:8773/services/Cloud",
"publicURL": "http://8.21.28.222:8773/services/Cloud",
"region": "RegionOne"
}
],
"endpoints_links": [],
"name": "nova_ec2",
"type": "ec2"
},
{
"endpoints": [
{
"adminURL": "http://10.100.0.222:8004/v1/TENANT_ID",
"id": "38fa4f9afce34d4ca0f5e0f90fd758dd",
"internalURL": "http://10.100.0.222:8004/v1/TENANT_ID",
"publicURL": "http://8.21.28.222:8004/v1/TENANT_ID",
"region": "RegionOne"
}
],
"endpoints_links": [],
"name": "heat",
"type": "orchestration"
},
{
"endpoints": [
{
"adminURL": "http://10.100.0.222:35357/v2.0",
"id": "256cdf78ecb04051bf0f57ec11070222",
"internalURL": "http://10.100.0.222:5000/v2.0",
"publicURL": "http://8.21.28.222:5000/v2.0",
"region": "RegionOne"
}
],
"endpoints_links": [],
"name": "keystone",
"type": "identity"
}
],
"token": {
"audit_ids": [
"gsjrNoqFSQeuLUo0QeJprQ"
],
"expires": "2014-12-15T15:09:29Z",
"id": "TOKEN_ID",
"issued_at": "2014-12-15T14:09:29.794527",
"tenant": {
"description": "Auto created account",
"enabled": true,
"id": "TENANT_ID",
"name": "USERNAME"
}
},
"user": {
"id": "USER_ID",
"name": "USERNAME",
"roles": [
{
"name": "_member_"
}
],
"roles_links": [],
"username": "USERNAME"
}
}
}"
If you are using .NET 3.5 or higher on your machines with PowerShell 2.0, you can use a JSON serializer (from the linked answer):
[System.Reflection.Assembly]::LoadWithPartialName("System.Web.Extensions")
$json = "{a:1,b:2,c:{nested:true}}"
$ser = New-Object System.Web.Script.Serialization.JavaScriptSerializer
$obj = $ser.DeserializeObject($json)
This would be preferable to using regex.
For admin URL for example, you'd refer to:
$obj.access.serviceCatalog[0].endpoints[0].adminURL
Using RegEx Anyway
if ($json -match '(?s)"serviceCatalog".+?"endpoints".+?"adminURL"[^"]+"(?<adminUrl>[^"]+)".+?"token".+?"id"[^"]+"(?<tokenID>[^"]+)".+?"tenant".+?"id"[^"]+"(?<tenantID>[^"]+)') {
$Matches['adminURL']
$Matches['tokenID']
$Matches['tenantID']
}
RegEx Breakdown:
(?s) tells the regex engine that . matches anything, including newlines (by default it wouldn't).
Of course, all of the literal parts (like "serviceCatalog" and "adminURL") just match literally.
.+? matches 1 or more of any character (including newlines since we're using s), and the ? makes it non-greedy.
[^"]+ this matches 1 or more characters that are not a double quote.
() is a capturing group. By using (?<name>) we can refer back to the group later by name rather than number, just a nicety.
So the basic idea is to look for the literals, then get to a point where we can capture the values needed. After a -regex operator match in PowerShell, the $Matches variable is populated with the matches, groups, etc.
Note that this relies on the values being in the order they are in the posted JSON. If they were in a different order it would fail.
To work around that you could split this into 3 different regex matches.