Amazon CLI, Route 53, TXT error

I'm trying to create a TXT record in Route53 via the Amazon CLI for DNS-01 validation. Seems like I'm very close but possibly running into a CLI issue (or a formatting issue I don't see). As you can see, it's complaining about a value that should be in quotes, but is indeed in quotes already...
Command Line:
aws route53 change-resource-record-sets --hosted-zone-id ID_HERE --change-batch file://c:\dev\test1.json
JSON File:
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "DOMAIN_NAME_HERE",
        "Type": "TXT",
        "TTL": 60,
        "ResourceRecords": [
          {
            "Value": "test"
          }
        ]
      }
    }
  ]
}
Error:
An error occurred (InvalidChangeBatch) when calling the ChangeResourceRecordSets operation: Invalid Resource Record: FATAL problem: InvalidCharacterString (Value should be enclosed in quotation marks) encountered with 'test'

Those quotes are the JSON quotes, and those are not the quotes they're looking for.
The JSON string "test" encodes the literal value test.
The JSON string "\"test\"" encodes the literal value "test".
(This is because in JSON, a literal " in a string is escaped with a leading \).
It sounds like they want actual, literal quotes included inside the value, so if you're building this JSON manually you probably want the latter: "Value": "\"test\"".
A JSON library should do this for you if you passed it the value with the leading and trailing " included.
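Applied to the change batch above, only the Value line changes; the corrected test1.json would be:
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "DOMAIN_NAME_HERE",
        "Type": "TXT",
        "TTL": 60,
        "ResourceRecords": [
          {
            "Value": "\"test\""
          }
        ]
      }
    }
  ]
}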


How to create AWS SSM Parameter from Terraform

I am trying to copy an AWS SSM parameter (for CloudWatch) from one region to another. I have the JSON, which was created as a String in one region.
I am trying to write a Terraform script to create this SSM parameter in another region.
According to the Terraform documentation, I need to do this:
resource "aws_ssm_parameter" "foo" {
name = "foo"
type = "String"
value = "bar"
}
In my case the value is JSON. Is there a way to store the JSON in a file and pass that file as the value to the above resource? I tried using jsonencode:
resource "aws_ssm_parameter" "my-cloudwatch" {
name = "my-cloudwatch"
type = "String"
value = jsonencode({my-json})
That did not work either. I am getting this error:
Extra characters after interpolation expression
I believe this is because the JSON has characters like quotes and colons. Any ideas?
I tested the following & this worked for me:
resource "aws_ssm_parameter" "my-cloudwatch" {
name = "my-cloudwatch"
type = "String"
#value = file("${path.module}/ssm-param.json")
value = jsonencode(file("${path.module}/files/ssm-param.json"))
}
./files/ssm-param.json content:
{
  "Value": "Something"
}
and the parameter store value looks like this:
"{\n \"Value\": \"Something\"\n}"
I just faced this issue: the $ in the CloudWatch config is causing the problem. Use $$.
"Note: If you specify the template as a literal string instead of loading a file, the inline template must use double dollar signs (like $${hello}) to prevent Terraform from interpolating values from the configuration into the string. "
https://www.terraform.io/docs/configuration-0-11/interpolation.html
"metrics": {
"append_dimensions": {
"AutoScalingGroupName": "$${aws:AutoScalingGroupName}",
"ImageId": "$${aws:ImageId}",
"InstanceId": "$${aws:InstanceId}",
"InstanceType": "$${aws:InstanceType}"
},
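To make the $$ escaping concrete, here is a minimal sketch of an inline template in Terraform (the resource name and trimmed-down config are illustrative, not from the question):
resource "aws_ssm_parameter" "cw_agent_config" {
  name = "cw-agent-config"
  type = "String"
  # Heredoc strings are interpolated by Terraform, so a literal ${...}
  # in the CloudWatch agent config must be written as $${...}.
  value = <<EOF
{
  "metrics": {
    "append_dimensions": {
      "InstanceId": "$${aws:InstanceId}"
    }
  }
}
EOF
}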
I prefer Paul's approach though.
You need to insert your JSON with escaped quotes (a little trick in AWS), and you need to parse it when you retrieve it:
const value = JSON.parse(Value)
Example of insert:
"Value": "\"{\"flag\":\"market_store\",\"app\":\"ios\",\"enabled\":\"false\"}\"",

Multiple RedactedFields in AWS WAFv2 put-logging-configuration command

I'm trying to set up logging on our Web ACL with WAFv2.
I can successfully run the put-logging-configuration command with one 'RedactedField', but I am having trouble adding more headers after the first one.
Here is the documentation in question -- I can't quite get my head around it:
The part of a web request that you want AWS WAF to inspect. Include the single FieldToMatch type that you want to inspect, with additional specifications as needed, according to the type. You specify a single request component in FieldToMatch for each rule statement that requires it. To inspect more than one component of a web request, create a separate rule statement for each component.
Here is my command which works:
aws --region="us-west-2" wafv2 put-logging-configuration \
--logging-configuration ResourceArn=${MY_WEB_ACL_ARN},LogDestinationConfigs=${MY_FIREHOSE_DELIVERY_STREAM_ARN},RedactedFields={SingleHeader={Name="cookie"}}
This gives the following result:
{
  "LoggingConfiguration": {
    "ResourceArn": "{My arn}",
    "LogDestinationConfigs": [
      "{My firehose log stream arn}"
    ],
    "RedactedFields": [
      {
        "SingleHeader": {
          "Name": "cookie"
        }
      }
    ]
  }
}
I also wish to redact the "authorization" header.
I have tried the following as part of "RedactedFields" portion of --logging-configuration:
1) Two SingleHeader statements within brackets
RedactedFields={SingleHeader={Name="cookie"},SingleHeader={Name="cookie"}}
(Results in 'Unknown options' error.)
2) Two sets of brackets with comma
RedactedFields={SingleHeader={Name="cookie"}},{SingleHeader={Name="authorization"}}
Error parsing parameter '--logging-configuration': Expected: '=', received: '{' for input:
3) Two sets of brackets, no comma
RedactedFields={SingleHeader={Name="cookie"}}{SingleHeader={Name="authorization"}}
Error parsing parameter '--logging-configuration': Expected: ',', received: '{' for input:
4) Two SingleHeader statements within brackets, no comma
RedactedFields={SingleHeader={Name="cookie"}{SingleHeader={Name="authorization"}}
Error parsing parameter '--logging-configuration': Expected: ',', received: '{' for input:
5) One SingleHeader statement, two headers (Isn't really a SingleHeader anymore, is it?)
RedactedFields={SingleHeader={Name="cookie", "authorization"}}
Unknown options: authorization}}
What am I getting wrong here? I've tried many other ways including [] square brackets, multiple instances of 'Name', multiple instances of 'RedactedFields' entirely -- none work.
To add multiple SingleHeaders to RedactedFields via shorthand syntax, I had to:
Give each SingleHeader its own set of brackets
Add a comma between each bracket set
Wrap all of the sets with square brackets
Wrap everything in single quotes.
For example, if I wanted two SingleHeaders, one for 'cookie' and one for 'authorization', I would need to use the following for the RedactedFields portion of --logging-configuration:
RedactedFields='[{SingleHeader={Name="cookie"}},{SingleHeader={Name="authorization"}}]'
In conclusion, if we add this to put-logging-configuration, the whole command would be:
aws --region=${MY_REGION} wafv2 put-logging-configuration \
--logging-configuration ResourceArn=${MY_WEB_ACL_ARN},LogDestinationConfigs=${MY_FIREHOSE_DELIVERY_STREAM_ARN},RedactedFields='[{SingleHeader={Name="cookie"}},{SingleHeader={Name="authorization"}}]'
Giving the following result:
{
  "LoggingConfiguration": {
    "ResourceArn": "{my acl arn}",
    "LogDestinationConfigs": [
      "{my firehose log stream arn}"
    ],
    "RedactedFields": [
      {
        "SingleHeader": {
          "Name": "cookie"
        }
      },
      {
        "SingleHeader": {
          "Name": "authorization"
        }
      }
    ]
  }
}
This formatting can be used for any other FieldToMatch, such as SingleQueryArgument, AllQueryArguments, QueryString, UriPath, Body, etc.
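As an aside, shorthand syntax gets unwieldy at this size. The same call can read the whole structure from a JSON file, which may be easier to maintain; a sketch using the same placeholder ARNs:
aws --region=${MY_REGION} wafv2 put-logging-configuration \
    --logging-configuration file://logging-config.json
where logging-config.json contains:
{
  "ResourceArn": "{my acl arn}",
  "LogDestinationConfigs": ["{my firehose log stream arn}"],
  "RedactedFields": [
    { "SingleHeader": { "Name": "cookie" } },
    { "SingleHeader": { "Name": "authorization" } }
  ]
}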

Regex and the config.json file

I am building an Angular application and trying to figure out how to write the ngsw-config.json file in order to define rules for the service worker.
I assumed that regex would be recognized as regex in the configuration file and not interpreted as normal characters/text automatically, but that was not the case. I have, for example, the following piece of code:
"name": "authentication",
"urls": [
"/login",
"/.*authentication.*"
],
The .* part is, as far as I understand, not recognized as regex (regex meaning in this case that any path containing the text "authentication" would fall into this category, right?). This piece of configuration tries to prevent the service worker from taking the lead in these two cases; it works with /login, but not with the authentication part.
Question:
Can I somehow modify my file to make it recognize regex definitions?
According to the documentation at https://angular.io/guide/service-worker-config
you can use a limited glob format.
I don't know what kind of URL you want to match.
Option: If you want to match a URL like /foo/bar/authentication/foo2/bar2, you could use:
"name": "authentication",
"urls": [
"/login",
"/**/authentication/**/*"
],
Option: If you want to match a URL like /foo/bar/something-authentication-otherthing/foo2/bar2, you could use:
"name": "authentication",
"urls": [
"/login",
"/**/*authentication*/**/*"
],
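For context, these url lists live inside a group entry in ngsw-config.json; to keep the service worker from serving these routes from cache, a dataGroups entry with a network-first policy is one option. A sketch, with cacheConfig values that are illustrative assumptions rather than anything from the question:
{
  "dataGroups": [
    {
      "name": "authentication",
      "urls": [
        "/login",
        "/**/*authentication*/**/*"
      ],
      "cacheConfig": {
        "strategy": "freshness",
        "maxSize": 0,
        "maxAge": "0u"
      }
    }
  ]
}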

How do I define an AWS MetricFilter FilterPattern to match a JSON-formatted log event in CloudWatch?

I am trying to define a metric filter, in an AWS CloudFormation template, to match JSON-formatted log events from CloudWatch.
Here is an example of the log event:
{
  "httpMethod": "GET",
  "resourcePath": "/deployment",
  "status": "403",
  "protocol": "HTTP/1.1",
  "responseLength": "42"
}
Here is my current attempt to create a MetricFilter to match the status field using the examples given from the documentation here: FilterAndPatternSyntax
"DeploymentApiGatewayMetricFilter": {
"Type": "AWS::Logs::MetricFilter",
"Properties": {
"LogGroupName": "/aws/apigateway/DeploymentApiGatewayLogGroup",
"FilterPattern": "{ $.status = \"403\" }",
"MetricTransformations": [
{
"MetricValue": "1",
"MetricNamespace": "ApiGateway",
"DefaultValue": 0,
"MetricName": "DeploymentApiGatewayUnauthorized"
}
]
}
}
I get a "Invalid metric filter pattern" message in CloudFormation.
Other variations I've tried that didn't work:
"{ $.status = 403 }" <- no escaped characters
{ $.status = 403 } <- using a json object instead of string
I've been able to successfully filter for space-delimited log events using the bracket notation defined in a similar manner, but the JSON-formatted log events don't follow the same convention.
Ran into the same problem and was able to figure it out by writing a few lines with the aws-cdk to generate the filter pattern template, then comparing that with what I had.
It seems like it needs each piece of criteria wrapped in parentheses:
- FilterPattern: '{ $.priority = "ERROR" && $.message != "*SomeMessagePattern*" }'
+ FilterPattern: '{ ($.priority = "ERROR") && ($.message != "*SomeMessagePattern*") }'
It is unfortunate that the AWS docs for MetricFilter in CloudFormation have no examples of JSON patterns.
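One way to iterate on a pattern without redeploying the stack is the CloudWatch Logs test-metric-filter command; a sketch using the sample event from the question:
aws logs test-metric-filter \
    --filter-pattern '{ ($.status = "403") }' \
    --log-event-messages '{"httpMethod": "GET", "resourcePath": "/deployment", "status": "403", "protocol": "HTTP/1.1", "responseLength": "42"}'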
I kept running into this error too, because I had the metric filter formatted with double quotes on the outside, like this:
FilterPattern: "{ ($.errorCode = '*UnauthorizedOperation') || ($.errorCode = 'AccessDenied*') }"
The docs say:
Strings that consist entirely of alphanumeric characters do not need to be quoted. Strings that have unicode and other characters such as '#', '$', ',' etc. must be enclosed in double quotes to be valid.
It didn't explicitly list the splat/wildcard * character, so I thought it would be OK inside single quotes, but it kept saying the metric filter pattern was bad because of the * in single quotes.
I could have used single quotes around the outside of the pattern and double quotes around the strings inside, but I opted for escaping the double quotes like this instead.
FilterPattern: "{ ($.errorCode = \"*UnauthorizedOperation\") || ($.errorCode = \"AccessDenied*\") }"

Regex in body of API test

I'm testing an API with https://cloud.google.com/datastore/docs/reference/data/rest/v1/projects/lookup
The following brings back a found result with data. I would like to use a regular expression to bring back all records whose name contains the number 1000867. All my attempts result in a missing result set.
i.e. change to "name": "/1000867.*/"
{
  "keys": [
    {
      "path": [
        {
          "kind": "Job",
          "name": "1000867:100071805:1"
        }
      ]
    }
  ]
}
The Google documentation for lookup key states that the name is a "string" and that
The name of the entity. A name matching regex __.*__ is reserved/read-only. A name must not be more than 1500 bytes when UTF-8 encoded. Cannot be "".
The regex part threw me off and the solution was to use runQuery!
Consider this closed.
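For anyone landing here later: lookup only fetches exact keys, and runQuery does not support regex either, but a key-range filter can emulate a prefix match on key names. A hedged sketch of the request body, assuming key names compare lexicographically (';' is the character after ':', so the range below covers every name starting with '1000867:'):
POST https://datastore.googleapis.com/v1/projects/PROJECT_ID:runQuery
{
  "gqlQuery": {
    "allowLiterals": true,
    "queryString": "SELECT * FROM Job WHERE __key__ >= KEY(Job, '1000867:') AND __key__ < KEY(Job, '1000867;')"
  }
}
Matching 1000867 anywhere in the name, rather than as a prefix, has no server-side equivalent; that filtering would have to happen client-side.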