AWS CloudWatch selecting first existing field

I have two kinds of messages in AWS CloudWatch and would like to select the first field that has some text in it. For example:
Message 1:
"message": {
    "message": "I am the first priority"
}
Message 2:
"message": {
    "err": {
        "message": "I am second priority"
    }
}
I would like to have these in a single column of the CloudWatch table, depending on which one is present. Is there any way to do this? Something like this (which obviously doesn't work):
fields @timestamp, ispresent(message.message) ? message.message : message.err.message

Apparently the coalesce function is what I needed. It selects the first value that is not null:
fields @timestamp, component, coalesce(message.message, message.err.message) as TheMessage
More info at CloudWatch Logs Insights query syntax
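Note that coalesce accepts more than two arguments, so further fallbacks can be chained. A sketch, where message.warn.message is a hypothetical third field added purely for illustration:
fields @timestamp, component,
  coalesce(message.message, message.err.message, message.warn.message) as TheMessage
| sort @timestamp desc
| limit 20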

Related

Lambda event filtering for DynamoDB trigger

Here is a modified version of an Event I am receiving in my handler for a Lambda function with a DynamoDB someTableName table trigger, which I logged using cargo lambda.
Event {
    records: [
        EventRecord {
            change: StreamRecord {
                approximate_creation_date_time: ___
                keys: {"id": String("___")},
                new_image: {
                    ....
                    "valid": Boolean(true),
                },
                ...
            },
            ...
            event_name: "INSERT",
            event_source: Some("aws:dynamodb"),
            table_name: None
        }
    ]
}
Goal: Correctly filter with event_name=INSERT && valid=false
I have tried a number of options, for example:
{"eventName": ["INSERT"]}
While the filter is added correctly, it does not trigger the Lambda when an item is inserted.
Q1) What am I doing incorrectly here?
Q2) Why is table_name returning None? The Lambda function is created with a specific table name as the trigger. The returned fields are Options (Some(_)), so I'm assuming it returns None when the table name is specified at Lambda creation, but that seems odd to me?
Q3) From AWS Management Console > Lambda > ... > Trigger Detail, I see the following (which is slightly different from my code mentioned above), where does "key" come from and what does it represent in the original Event?
Filters must follow the documented syntax for filtering in the Event Source Mapping between Lambda and DynamoDB Streams.
If you are entering the filter in the Lambda console:
{ "eventName": ["INSERT"], "dynamodb": { "NewImage": {"valid": { "BOOL" : [false]}} } }
The attribute name is actually eventName, so your filter should look like this:
{"eventName": ["INSERT"]}

Filtering AWS SNS messages by attribute value but only if attribute is present

I would like to know how I can create a filter policy for an AWS SNS subscription that checks a message attribute value, but only if that attribute is present. By default, if I check an attribute value and the attribute is not present, the message is ignored, e.g.:
"customer_interests": ["paintball"]
I also found this for attribute presence checking:
"customer_interests": [{"exists": true}]
But I'm not sure how to combine these two checks into a single policy.
I've tried the obvious thing:
{
"customer_interests": [{"exists": false}, "paintball"]
}
but it doesn't work.
I have tried the following (your obvious thing) and it worked for me. I didn't even have to wait 15 minutes (clearly eventual consistency is not the issue).
Used the following policy:
{
    "customer_interests": [
        {
            "exists": false
        },
        "paintball"
    ]
}
Created an SNS topic with an email-based subscription
Added the filter policy to the subscriber
Tried three use cases:
Case 1: message "hello", String attribute customer_interests = "paintball"; status: received
Case 2: message "hello", String attribute customer_interests = "xyz"; status: not received
Case 3: message "hello", String attribute jatin = "test"; status: received
You apply OR logic in SNS subscription filter policies by assigning multiple values to an attribute name.
Your example:
{
"customer_interests": [{"exists": false}, "paintball"]
}
should result in filtering messages that:
do not have the customer_interests message attribute
OR have the customer_interests message attribute set to paintball
Please note that additions or changes to a subscription filter policy require up to 15 minutes to fully take effect, as Amazon SNS is eventually consistent.
Changes will not always take effect immediately.
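For reference, a minimal sketch of applying such a policy with boto3; the subscription ARN is a placeholder:
import json
import boto3

sns = boto3.client("sns")

# OR logic within one attribute key: matches when the attribute is absent,
# or when it is present with the value "paintball".
filter_policy = {"customer_interests": [{"exists": False}, "paintball"]}

sns.set_subscription_attributes(
    SubscriptionArn="arn:aws:sns:us-east-1:123456789012:my-topic:00000000-0000-0000-0000-000000000000",  # placeholder
    AttributeName="FilterPolicy",
    AttributeValue=json.dumps(filter_policy),
)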

AWS CloudWatch Logs Insight Nested Json

My JSON log object is like this:
{
    "FileName": "file1.xlsx",
    "IsSuccess": false,
    "LogList": [
        {"ErrorDetail": "Text1"},
        {"ErrorDetail": "Text2"},
        {"ErrorDetail": "Text3"}
    ]
}
When I write the query in CloudWatch Logs Insights like below, it lists the nested JSON on a single line.
fields @timestamp, FileName, LogList.ErrorDetail
(screenshot: Logs Insights query result showing the nested JSON rendered on one line)
This makes it very difficult to read, as the user needs to scroll horizontally. I want the list to be displayed vertically. How can this be achieved?
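For what it's worth, Logs Insights flattens JSON arrays into index-addressed fields, so individual elements can at least be split into separate columns. A sketch, assuming the indexes match the array above; this still produces one row per log event rather than a vertical list:
fields @timestamp, FileName, LogList.0.ErrorDetail, LogList.1.ErrorDetail, LogList.2.ErrorDetail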

AWS AppSync & DynamoDB Single Table Design: PutItem based on another item's field value

I'm new to AWS AppSync and I have adopted the single-table pattern in DynamoDB. Now I am trying to create an item based on a particular field value of an existing item in the same table. For example, I have a table called transaction which holds two types of records:
Request
Response
As you can see in the above table, I can insert (PutItem) multiple responses for a particular request. Before I insert a new response, I need to validate that the request (RequestID) already exists. Is there any way to do this via a condition expression in the resolver? Below is my current request resolver code, which is not working as expected.
#set( $Id = $util.autoId() )
{
    "version" : "2017-02-28",
    "operation" : "PutItem",
    "key" : {
        "PK": $util.dynamodb.toDynamoDBJson("USER#$ctx.args.input.UserId"),
        "SK": $util.dynamodb.toDynamoDBJson("RESPONSE#$Id")
    },
    "attributeValues" : $util.dynamodb.toMapValuesJson($ctx.args.input),
    "condition": {
        "expression": "SK = :SK",
        "expressionValues" : {
            ":SK" : {
                "S" : "REQUEST#${ctx.args.input.RequestId}"
            }
        }
    }
}
You could do this in one request mapping template by using DynamoDB transactions; see https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-transact.html.
In one of the transactWriteItems, you update an arbitrary value (or perhaps the number of responses) in the request item, with a condition that checks whether the request item exists with that RequestId in the SK. If the condition succeeds, the request item is updated.
Also make sure the response item is in your transactWriteItems, so that the response item is only written when the request condition passes.
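A rough sketch of what that request mapping template could look like, assuming the single-table layout from the question; the responseCount attribute is illustrative, and the authoritative template shape is in the linked tutorial:
#set( $Id = $util.autoId() )
{
    "version": "2018-05-29",
    "operation": "TransactWriteItems",
    "transactItems": [
        {
            ## Condition-bearing update on the request item: the whole
            ## transaction fails if the request does not exist.
            "table": "transaction",
            "operation": "UpdateItem",
            "key": {
                "PK": $util.dynamodb.toDynamoDBJson("USER#${ctx.args.input.UserId}"),
                "SK": $util.dynamodb.toDynamoDBJson("REQUEST#${ctx.args.input.RequestId}")
            },
            "update": {
                "expression": "ADD responseCount :one",
                "expressionValues": { ":one": { "N": "1" } }
            },
            "condition": { "expression": "attribute_exists(PK) AND attribute_exists(SK)" }
        },
        {
            ## The response item is only written if the condition above passes.
            "table": "transaction",
            "operation": "PutItem",
            "key": {
                "PK": $util.dynamodb.toDynamoDBJson("USER#${ctx.args.input.UserId}"),
                "SK": $util.dynamodb.toDynamoDBJson("RESPONSE#$Id")
            },
            "attributeValues": $util.dynamodb.toMapValuesJson($ctx.args.input)
        }
    ]
}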
You won't be able to do a conditional PutItem based on another entry, no. In this case you'd want to use pipeline resolvers: in your first function you'd fetch the request item, and in the second you can then do your PutItem, since the previous GetItem result will be available as $ctx.prev.result.
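A minimal sketch of the second pipeline function's request template, assuming the first function ran a GetItem on the request item:
#if( $util.isNull($ctx.prev.result) )
    ## The GetItem in the previous function found no request item.
    $util.error("Request ${ctx.args.input.RequestId} does not exist")
#end
#set( $Id = $util.autoId() )
{
    "version": "2017-02-28",
    "operation": "PutItem",
    "key": {
        "PK": $util.dynamodb.toDynamoDBJson("USER#${ctx.args.input.UserId}"),
        "SK": $util.dynamodb.toDynamoDBJson("RESPONSE#$Id")
    },
    "attributeValues": $util.dynamodb.toMapValuesJson($ctx.args.input)
}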

How to parse mixed text and JSON log entries in AWS CloudWatch for Log Metric Filter

I am trying to parse log entries which are a mix of text and JSON. The first line is a text representation and the next lines are the JSON payload of the event. One possible example is:
2016-07-24T21:08:07.888Z [INFO] Command completed lessonrecords-create
{
    "key": "lessonrecords-create",
    "correlationId": "c1c07081-3f67-4ab3-a5e2-1b3a16c87961",
    "result": {
        "id": "9457ce88-4e6f-4084-bbea-14fff78ce5b6",
        "status": "NA",
        "private": false,
        "note": "Test note",
        "time": "2016-02-01T01:24:00.000Z",
        "updatedAt": "2016-07-24T21:08:07.879Z",
        "createdAt": "2016-07-24T21:08:07.879Z",
        "authorId": null,
        "lessonId": null,
        "groupId": null
    }
}
For these records I am trying to define a Log Metric Filter to a) match records and b) select data or dimensions if possible.
According to the AWS docs, a JSON pattern should look like this:
{ $.key = "lessonrecords-create" }
However, it does not match anything. My guess is that this is because of the mix of text and JSON in a single log entry.
So, the questions are:
1. Is it possible to define a pattern that will match this log format?
2. Is it possible to extract dimensions, values from such a log format?
3. Help me with a pattern to do this.
If you set up the metric filter in the way that you have defined, the test will not register any matches (I have also had this issue); however, when you deploy the metric filter it will still register matches (at least mine did). Just keep in mind that there is no way (as far as I am aware) to run this metric filter backwards, i.e. it will only capture data from the moment it is created. If you're trying to get stats on past data, you're better off using Logs Insights queries.
I am currently experimenting with different parse statements to try and extract data (it's also a mix of JSON and text); this thread MAY help you (it didn't for me): Amazon CloudWatch Logs Insights with JSON fields.
UPDATE!
I have found a way to parse the text, but it's a little bit clunky. If you export your CloudWatch logs to SumoLogic using a Lambda function, their search tool allows for MUCH better log manipulation and lets you parse JSON fields (if you treat the entire entry as text). SumoLogic is also really helpful because you can just extract your search results as a CSV. For my purposes, I parse the entire log message in SumoLogic, extract all the logs as a CSV, and then use regex in Python to filter through and extract the values I need.
Let's say you have the following log:
2021-09-29 15:51:18,624 [main] DEBUG com.company.app.SparkResources - AUDIT : {"user":"Raspoutine","method":"GET","pathInfo":"/analysis/123"}
you can parse it like this, to be able to handle the part after "AUDIT : " as JSON:
fields @message
| parse @message "* [*] * * - AUDIT : *" as timestamp, thread, logLevel, clazz, msg
| filter ispresent(msg)
| filter method = "GET" # You can use fields contained in the JSON string of the 'msg' field. Do not use 'msg.method' but directly 'method'
The fields contained in your isolated/parsed JSON field are automatically added as fields usable in the query.
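For example, once msg is parsed out, you can aggregate directly on the JSON's own fields. A sketch building on the query above (method and pathInfo come from the sample AUDIT payload):
fields @message
| parse @message "* [*] * * - AUDIT : *" as timestamp, thread, logLevel, clazz, msg
| filter ispresent(msg)
| stats count(*) by method, pathInfo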
You can use CloudWatch Events for such a purpose (aka subscription filters): what you will need to do is define a CloudWatch rule which uses an expression statement to match your logs.
Here, I will let you do all the reading:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-Scheduled-Rule.html
:)
Split the message into 3 fields, and the 3rd field will be valid JSON. I think in your case it would be:
fields @timestamp, @message
| parse @message '[*] * {"*"}' as field1, field2, field3
| limit 50
field3 is the valid JSON.
[INFO] will be the first field.
You can search the JSON string representation, which is not as powerful.
For your example,
instead of { $.key = "lessonrecords-create" }
try "\"key\":\"lessonrecords-create\"".
This filter is not semantically identical to your requirement, though. It will also match events where key is not at the root of the JSON.
You can use the Fluentd agent to send logs to CloudWatch. Create a custom grok pattern based on your metric filter.
Steps:
Install the Fluentd agent on your server
Install the fluent-plugin-cloudwatch-logs and fluent-plugin-grok-parser plugins
Write your custom grok pattern based on your log format
Please refer to this blog for more information.
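A minimal sketch of such a Fluentd configuration, assuming the text-then-JSON log shape from the question; the file paths, tag, and log group/stream names are placeholders, and the JSON payload lines would additionally need multiline handling:
<source>
  @type tail
  path /var/log/app/app.log            # placeholder path
  pos_file /var/log/td-agent/app.pos
  tag app.logs
  <parse>
    @type grok                         # provided by fluent-plugin-grok-parser
    # Matches the leading text line: timestamp, level, free text
    grok_pattern %{TIMESTAMP_ISO8601:timestamp} \[%{LOGLEVEL:level}\] %{GREEDYDATA:message}
  </parse>
</source>

<match app.logs>
  @type cloudwatch_logs                # provided by fluent-plugin-cloudwatch-logs
  log_group_name my-log-group          # placeholder
  log_stream_name my-stream            # placeholder
  auto_create_stream true
</match>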