Multiple RedactedFields in AWS WAFv2 put-logging-configuration command

I'm trying to set up logging on our Web ACL with WAFv2.
I can successfully run the put-logging-configuration command with one 'RedactedField', but I'm having trouble adding more headers after the first one.
Here is the documentation in question -- I can't quite get my head around it:
The part of a web request that you want AWS WAF to inspect. Include the single FieldToMatch type that you want to inspect, with additional specifications as needed, according to the type. You specify a single request component in FieldToMatch for each rule statement that requires it. To inspect more than one component of a web request, create a separate rule statement for each component.
Here is my command which works:
aws --region="us-west-2" wafv2 put-logging-configuration \
--logging-configuration ResourceArn=${MY_WEB_ACL_ARN},LogDestinationConfigs=${MY_FIREHOSE_DELIVERY_STREAM_ARN},RedactedFields={SingleHeader={Name="cookie"}}
This gives the following result:
{
"LoggingConfiguration": {
"ResourceArn": "{My arn}",
"LogDestinationConfigs": [
"{My firehose log stream arn}"
],
"RedactedFields": [
{
"SingleHeader": {
"Name": "cookie"
}
}
]
}
}
I also wish to redact the "authorization" header.
I have tried the following as part of the "RedactedFields" portion of --logging-configuration:
1) Two SingleHeader statements within brackets
RedactedFields={SingleHeader={Name="cookie"},SingleHeader={Name="authorization"}}
(Results in an 'Unknown options' error.)
2) Two sets of brackets with comma
RedactedFields={SingleHeader={Name="cookie"}},{SingleHeader={Name="authorization"}}
Error parsing parameter '--logging-configuration': Expected: '=', received: '{' for input:
3) Two sets of brackets, no comma
RedactedFields={SingleHeader={Name="cookie"}}{SingleHeader={Name="authorization"}}
Error parsing parameter '--logging-configuration': Expected: ',', received: '{' for input:
4) Two SingleHeader statements within brackets, no comma
RedactedFields={SingleHeader={Name="cookie"}{SingleHeader={Name="authorization"}}
Error parsing parameter '--logging-configuration': Expected: ',', received: '{' for input:
5) One SingleHeader statement, two headers (Isn't really a SingleHeader anymore, is it?)
RedactedFields={SingleHeader={Name="cookie", "authorization"}}
Unknown options: authorization}}
What am I getting wrong here? I've tried many other ways including [] square brackets, multiple instances of 'Name', multiple instances of 'RedactedFields' entirely -- none work.

To add multiple SingleHeaders to RedactedFields via shorthand syntax, I had to:
1) Give each SingleHeader its own set of brackets
2) Add a comma between each bracket set
3) Wrap all of the sets in square brackets
4) Wrap everything in single quotes
For example, if I wanted two SingleHeaders, one for 'cookie' and one for 'authorization', I would need to use the following for the RedactedFields portion of --logging-configuration:
RedactedFields='[{SingleHeader={Name="cookie"}},{SingleHeader={Name="authorization"}}]'
In conclusion, if we add this to put-logging-configuration, the whole command would be:
aws --region=${MY_REGION} wafv2 put-logging-configuration \
--logging-configuration ResourceArn=${MY_WEB_ACL_ARN},LogDestinationConfigs=${MY_FIREHOSE_DELIVERY_STREAM_ARN},RedactedFields='[{SingleHeader={Name="cookie"}},{SingleHeader={Name="authorization"}}]'
Giving the following result:
{
"LoggingConfiguration": {
"ResourceArn": "{my acl arn}",
"LogDestinationConfigs": [
"{my firehose log stream arn}"
],
"RedactedFields": [
{
"SingleHeader": {
"Name": "cookie"
}
},
{
"SingleHeader": {
"Name": "authorization"
}
}
]
}
}
This formatting can be used for any other FieldToMatch, such as SingleQueryArgument, AllQueryArguments, QueryString, UriPath, Body, etc.
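The same bracket-comma-square-bracket pattern applies; for example (a sketch I haven't run verbatim, noting that QueryString and UriPath take empty braces since those field types have no parameters):
RedactedFields='[{SingleHeader={Name="cookie"}},{QueryString={}},{UriPath={}}]'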

Related

Create a cloud scheduler job in CLI with nested json request body

I am running into trouble trying to schedule a Workflow using Google Cloud Scheduler through the Google CLI.
In particular my workflow requires the following request body:
{
"arg1": "some_string",
"arg2": "some_other_string",
"arg3": [
{
"foo": "foo1",
"bar": "bar1"
},
{
"foo": "foo2",
"bar": "bar2"
}
]
}
In a different workflow, with a request body consisting only of arg1 and arg2, I was able to schedule a cloud function using the double-escaped JSON string format:
gcloud scheduler jobs create http <NAME> --schedule=<CRON> --uri=<URI> --message-body="{\"argument\": \"{\\\"arg1\\\":\\\"some_string\\\",\\\"arg2\\\":\\\"some_other_string\\\"}\"}" --time-zone="UTC"
With the above request body I am unclear how to do this. I tried setting the message body as:
"{\"argument\": \"{\\\"arg1\\\":\\\"some_string\\\",\\\"arg2\\\":\\\"some_other_string\\\",\\\"arg3\\\":\\\"[{\\\\\"foo\\\\\":\\\\\"foo1\\\\\",\\\\\"bar\\\\\":\\\\\"bar1\\\\\"}]\\\"}\"}"
But it didn't seem to like this and threw an "INVALID ARGUMENT" status. I've also tried a few other variations, such as omitting the quotes around the list brackets, but haven't had any success.
Apologies for how ugly these strings are. Is anyone aware of how to format them correctly or, better yet, of a simplified way of entering the request body in the command?
Thanks in advance.
Edit: I have tried using the --message-body-from-file argument as mentioned in the comments by @john-hanley. I found it still required escaped quotes to work in my simple case.
body.json
{"argument": "{\"arg1\":\"some_string\",\"arg2\":\"some_other_string\"}"}
However, when I tried the nested case with no quotes around the list, it did work!
body.json
{"argument": "{\"arg1\":\"some_string\",\"arg2\":\"some_other_string\", \"arg3\": [{\"foo\": \"foo1\", \"bar\": \"bar1\"},{\"foo\":\"foo2\", \"bar\": \"bar2\"}]"}
Solved by incorporating comments by @JohnHanley and unquoting the repeated field.
Command:
gcloud scheduler jobs create http <NAME> --schedule=<CRON> --uri=<URI> --message-body-from-file="body.json" --time-zone="UTC"
body.json
{"argument": "{\"arg1\":\"some_string\",\"arg2\":\"some_other_string\", \"arg3\": [{\"foo\": \"foo1\", \"bar\": \"bar1\"},{\"foo\":\"foo2\", \"bar\": \"bar2\"}]"}

How do I define an AWS MetricFilter FilterPattern to match a JSON-formatted log event in CloudWatch?

I am trying to define a metric filter, in an AWS CloudFormation template, to match JSON-formatted log events from CloudWatch.
Here is an example of the log event:
{
"httpMethod": "GET",
"resourcePath": "/deployment",
"status": "403",
"protocol": "HTTP/1.1",
"responseLength": "42"
}
Here is my current attempt to create a MetricFilter to match the status field, using the examples given in the documentation here: FilterAndPatternSyntax
"DeploymentApiGatewayMetricFilter": {
"Type": "AWS::Logs::MetricFilter",
"Properties": {
"LogGroupName": "/aws/apigateway/DeploymentApiGatewayLogGroup",
"FilterPattern": "{ $.status = \"403\" }",
"MetricTransformations": [
{
"MetricValue": "1",
"MetricNamespace": "ApiGateway",
"DefaultValue": 0,
"MetricName": "DeploymentApiGatewayUnauthorized"
}
]
}
}
I get an "Invalid metric filter pattern" message in CloudFormation.
Other variations I've tried that didn't work:
"{ $.status = 403 }" <- no escaped characters
{ $.status = 403 } <- using a JSON object instead of a string
I've been able to successfully filter for space-delimited log events using the bracket notation defined in a similar manner, but the JSON-formatted log events don't follow the same convention.
I ran into the same problem and was able to figure it out by writing a few lines with the aws-cdk to generate the filter pattern template, then comparing that with what I had.
It seems each piece of criteria needs to be wrapped in parentheses.
- FilterPattern: '{ $.priority = "ERROR" && $.message != "*SomeMessagePattern*" }'
+ FilterPattern: '{ ($.priority = "ERROR") && ($.message != "*SomeMessagePattern*") }'
It is unfortunate that the AWS docs for MetricFilter in CloudFormation have no examples of JSON patterns.
I kept running into this error too, because I had the metric filter formatted with double quotes on the outside like this.
FilterPattern: "{ ($.errorCode = '*UnauthorizedOperation') || ($.errorCode = 'AccessDenied*') }"
The docs say:
Strings that consist entirely of alphanumeric characters do not need to be quoted. Strings that have unicode and other characters such as '#,' '$,' etc. must be enclosed in double quotes to be valid.
It didn't explicitly list the splat/wildcard * character, so I thought it would be OK inside single quotes, but it kept saying the metric filter pattern was bad because of the * in single quotes.
I could have used single quotes around the outside of the pattern and double quotes around the strings inside, but I opted for escaping the double quotes like this instead.
FilterPattern: "{ ($.errorCode = \"*UnauthorizedOperation\") || ($.errorCode = \"AccessDenied*\") }"

Defined template from Logstash not being used by Elasticsearch for mapping

I have the following Logstash output config for sending data from a Postgres database into Elasticsearch:
https://pastebin.com/BFCH3tuZ
I have defined the template location and the template itself as follows:
https://pastebin.com/mK5qshKM
When I run logstash I see the output as follows:
[2017-05-24T20:54:10,828][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-05-24T20:54:10,982][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0xff97ab URL:http://localhost:9200/>}
[2017-05-24T20:54:10,985][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>"/etc/logstash/universe_template.json"}
[2017-05-24T20:54:11,045][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"universe_elastic", "settings"=>{"analysis"=>{"filter"=>{"gr$
[2017-05-24T20:54:11,052][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/universe_elastic
[2017-05-24T20:54:11,145][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0xe60519 URL://localhost:9200$
[2017-05-24T20:54:11,154][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inf$
[2017-05-24T20:54:11,988][INFO ][logstash.pipeline ] Pipeline main started
[2017-05-24T20:54:12,079][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-05-24T20:54:12,108][INFO ][logstash.inputs.jdbc ] (0.101000s) select planet.id, planet.x || ':' || planet.y || ':' || planet.z coords, planet.x, planet.y, planet.z ,planetname,ru$
[2017-05-24T20:54:15,006][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
When I query Elasticsearch templates at http://xxxx:9200/_template/ I can see my template listed:
{
"universe_elastic": {
"order": 0,
"template": "universe_elastic",
"settings": {
"index": {
"analysis": {
"filter": {
"gramFilter": {
"token_chars": [
"letter",
"digit",
"punctuation",
"symbol"
], ETC ETC ETC......
However, when I check my "universe" index, the mappings haven't come through:
https://pastebin.com/hw9hYfLn
I would expect to see the _all field and the include_in_all references set to true/false, but there's nothing. The queries also don't use the analyzers I have specified.
Any ideas what might be going wrong here? I have deleted out all the other possible templates created as well as re-created indexes etc.
You've done almost everything correctly; you just need to change a single thing:
In your template, this line
"template": "universe_elastic",
should read
"template": "universe",
ES is only going to apply the template if your index name matches the template pattern.
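Note that index templates are only applied when an index is created, so after fixing the template you'll need to recreate the index before the new mapping shows up. A quick way to verify (a sketch, assuming a local ES on port 9200 as in the logs above):
curl -XDELETE 'http://localhost:9200/universe'
# re-run logstash so the index is recreated with the template applied, then:
curl 'http://localhost:9200/universe/_mapping?pretty'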

Amazon CLI, Route 53, TXT error

I'm trying to create a TXT record in Route53 via the Amazon CLI for DNS-01 validation. Seems like I'm very close but possibly running into a CLI issue (or a formatting issue I don't see). As you can see, it's complaining about a value that should be in quotes, but is indeed in quotes already...
Command Line:
aws route53 change-resource-record-sets --hosted-zone-id ID_HERE --change-batch file://c:\dev\test1.json
JSON File:
{
"Changes": [
{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": "DOMAIN_NAME_HERE",
"Type": "TXT",
"TTL": 60,
"ResourceRecords": [
{
"Value": "test"
}
]
}
}
]
}
Error:
An error occurred (InvalidChangeBatch) when calling the ChangeResourceRecordSets operation: Invalid Resource Record: FATAL problem: InvalidCharacterString (Value should be enclosed in quotation marks) encountered with 'test'
Those quotes are the JSON quotes, and those are not the quotes they're looking for.
The JSON string "test" encodes the literal value test.
The JSON string "\"test\"" encodes the literal value "test".
(This is because in JSON, a literal " in a string is escaped with a leading \).
It sounds like they want actual, literal quotes included inside the value, so if you're building this JSON manually you probably want the latter: "Value": "\"test\"".
A JSON library should do this for you if you pass it the value with the leading and trailing " included.
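For illustration, the change batch from the question would then become (only the Value line changes):
{
"Changes": [
{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": "DOMAIN_NAME_HERE",
"Type": "TXT",
"TTL": 60,
"ResourceRecords": [
{
"Value": "\"test\""
}
]
}
}
]
}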

requestParameters returning "Invalid mapping expression specified: true"

I'm configuring a lambda function's API gateway integration with the Serverless Framework version 0.4.2.
My problem is with defining an endpoint's request parameters. The AWS docs entry for API Gateway says:
requestParameters
Represents request parameters that can be accepted by Amazon API Gateway. Request parameters are represented as a key/value map, with a source as the key and a Boolean flag as the value. The Boolean flag is used to specify whether the parameter is required. A source must match the pattern method.request.{location}.{name}, where location is either querystring, path, or header. name is a valid, unique parameter name. Sources specified here are available to the integration for mapping to integration request parameters or templates.
As I understand it, the config in the s-function.json is given directly to the AWS CLI, so I've specified the request parameters in the format:
"method.request.querystring.startYear": true. However, I'm receiving an Invalid mapping expression specified: true error. I've also tried specifying the config as "method.request.querystring.startYear": "true" with the same result.
s-function.json:
{
"name": "myname",
// etc...
"endpoints": [
{
"path": "mypath",
"method": "GET",
"type": "AWS",
"authorizationType": "none",
"apiKeyRequired": false,
"requestParameters": {
"method.request.querystring.startYear": true,
"method.request.querystring.startMonth": true,
"method.request.querystring.startDay": true,
"method.request.querystring.currentYear": true,
"method.request.querystring.currentMonth": true,
"method.request.querystring.currentDay": true,
"method.request.querystring.totalDays": true,
"method.request.querystring.volume": true,
"method.request.querystring.userId": true
},
// etc...
}
],
"events": []
}
Any ideas? Thanks in advance!
It looks like the requestParameters in the s-function.json file is meant for configuring the integration request section, so I ended up using:
"requestParameters": {
"integration.request.querystring.startYear" : "method.request.querystring.startYear",
"integration.request.querystring.startMonth" : "method.request.querystring.startMonth",
"integration.request.querystring.startDay" : "method.request.querystring.startDay",
"integration.request.querystring.currentYear" : "method.request.querystring.currentYear",
"integration.request.querystring.currentMonth" : "method.request.querystring.currentMonth",
"integration.request.querystring.currentDay" : "method.request.querystring.currentDay",
"integration.request.querystring.totalDays" : "method.request.querystring.totalDays",
"integration.request.querystring.volume" : "method.request.querystring.volume",
"integration.request.querystring.userId" : "method.request.querystring.userId"
},
This ended up adding them automatically to the method request section on the dashboard as well.
I could then use them in the mapping template to turn them into a method post that would be sent as the event into my Lambda function. Right now I have a specific mapping template that I'm using, but I may in the future use Alua K's suggested method for mapping all of the inputs in a generic way so that I don't have to configure a separate mapping template for each function.
You can pass query params to your lambda like this:
"requestTemplates": {
"application/json": {
"querystring": "$input.params().querystring"
}
}
In the lambda function, access the querystring like this: event.querystring
First, you need to execute a put-method command to create the Method Request with query parameters:
aws apigateway put-method --rest-api-id "yourAPI-ID" --resource-id "yourResource-ID" --http-method GET --authorization-type "NONE" --no-api-key-required --request-parameters "method.request.querystring.paramname1=true","method.request.querystring.paramname2=true"
Only after this can you execute the put-integration command; otherwise it will give an invalid mapping error:
"requestParameters": {
"integration.request.querystring.paramname1" : "method.request.querystring.paramname1",
"integration.request.querystring.paramname2" : "method.request.querystring.paramname2",
Make sure you're using the right endpoints as well; AWS has more than one type, and a friend of mine got caught out by that in the past.