Seeking help maneuvering JSON files in CloudWatch Logs Insights - amazon-web-services

I have a question about using CloudWatch Logs Insights with JSON log entries.
I am trying to include two log streams in one query for CloudWatch Logs Insights, filtering on "level" to find errors.
Here is my code:
filter @logStream = 'ingest-23j23d3-daf4343ff3, ingest-2fdfd434d-dsa32434d'
| fields @message, @timestamp
| parse @message '"level": "*"' as level
| filter level == "error"
Here is an example of the JSON:
{
"message": "Could not delete old file cache entries: rimraf: callback function required",
"level": "error"
}
How can I incorporate more than one @logStream in my query? Also, can anyone point me toward resources for working with JSON logs like this in the future? I would greatly appreciate it.

I was able to fix the issue I had. Since I had no knowledge of regex, I had to go through its documentation and AWS's to find ways of displaying my data:
filter level = "error" | filter strcontains(#logStream, 'ingest-')
| fields #timestamp, #message, level
I was able to filter my levels (which were debug, info, and error) to show only error. From there, I filtered all my log streams beginning with ingest- to find the error logs. I hope this helps anybody in need of answers.
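In case it helps anyone who needs the two specific streams rather than a prefix match: as far as I can tell, standard Logs Insights syntax for that is either the in operator or two comparisons joined with or, so a rough sketch using the stream names from the question would be:
fields @timestamp, @message
| parse @message '"level": "*"' as level
| filter level = "error"
# match either of the two streams explicitly (prefix matching with strcontains also works)
| filter @logStream = 'ingest-23j23d3-daf4343ff3' or @logStream = 'ingest-2fdfd434d-dsa32434d'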

Related

Don't show discovered fields in CloudWatch log insights?

Is it possible to configure CloudWatch Logs Insights so it just shows the JSON log message, not the discovered fields? For example, currently my logs show up like this:
@timestamp xxx
@message { "field1": "value1", "field2": "value2" }
field1 value1
field2 value2
Because CloudWatch discovered the two fields in the JSON message. But is it possible to just see the JSON, like so:
{
"field1": "value1",
"field2": "value2"
}
This would make it much easier for me to copy logs to different sources, as the @message field can get very long and unreadable if it's not formatted as JSON, with each field on a separate line and with indentation. Also, when I copy a log entry in the first format from Logs Insights, the formatting puts the field name and value on different lines, which is difficult to read.
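Not a full answer, but one thing that might get you part of the way there: the display command controls which columns appear in the results table, so a sketch like the one below (using the example fields above) returns only the raw message instead of a column per discovered field. As far as I know it does not change how the console renders an expanded row, though.
fields @timestamp, @message
# show only the raw JSON message as a result column
| display @message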

AWS CloudWatch selecting first existing field

I have two kinds of messages in AWS CloudWatch and would like to select the first field that has some text in it. For example:
Message 1:
"message": {
"message": "I am the first priority"
}
Message 2:
"message": {
"err": {
"message": "I am second priority"
}
}
I would like to have these in a single column of the CloudWatch table, depending on which one is present. Is there any way to do this? Something like this (which obviously doesn't work):
fields @timestamp, ispresent(message.message) ? message.message : message.err.message
Apparently the coalesce function is what I needed. It selects the first value that is not null:
fields @timestamp, component, coalesce(message.message, message.err.message) as TheMessage
More info at CloudWatch Logs Insights query syntax
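As a small follow-up usage sketch (field names taken from the question), coalesce composes with the rest of the query like any other expression, so you can also drop rows where neither nested field exists:
fields @timestamp, component, coalesce(message.message, message.err.message) as TheMessage
# keep only entries where at least one of the two nested fields is present
| filter ispresent(message.message) or ispresent(message.err.message)
| sort @timestamp desc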

AWS CloudWatch Logs Insights Nested JSON

My JSON log object is like this:
{
"FileName":"file1.xlsx",
"IsSuccess":false,
"LogList":[
{"ErrorDetail":"Text1"},
{"ErrorDetail":"Text2"},
{"ErrorDetail":"Text3"},
]
}
When I write the query in CloudWatch Logs Insights like below, it lists the nested JSON on a single line.
fields @timestamp, FileName, LogList.ErrorDetail
(screenshot: Logs Insights query result showing the nested JSON flattened onto one line)
This makes it very difficult to read, as the user needs to scroll horizontally. I want the list to be displayed vertically. How can this be achieved?
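One workaround I have seen (assuming the list stays short and you know the maximum number of entries, three in this example) is to address the array elements by index so each ErrorDetail lands in its own column:
fields @timestamp, FileName
# project each array element into its own column; indexes 0..2 are just an assumption for this example
| fields LogList.0.ErrorDetail as error0, LogList.1.ErrorDetail as error1, LogList.2.ErrorDetail as error2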

How to parse mixed text and JSON log entries in AWS CloudWatch for Log Metric Filter

I am trying to parse log entries which are a mix of text and JSON. The first line is a text representation and the next lines are the JSON payload of the event. One possible example is:
2016-07-24T21:08:07.888Z [INFO] Command completed lessonrecords-create
{
"key": "lessonrecords-create",
"correlationId": "c1c07081-3f67-4ab3-a5e2-1b3a16c87961",
"result": {
"id": "9457ce88-4e6f-4084-bbea-14fff78ce5b6",
"status": "NA",
"private": false,
"note": "Test note",
"time": "2016-02-01T01:24:00.000Z",
"updatedAt": "2016-07-24T21:08:07.879Z",
"createdAt": "2016-07-24T21:08:07.879Z",
"authorId": null,
"lessonId": null,
"groupId": null
}
}
For these records I am trying to define a log metric filter to a) match records and b) select data or dimensions if possible.
According to the AWS docs, the JSON pattern should look like this:
{ $.key = "lessonrecords-create" }
However, it does not match anything. My guess is that this is because of the mix of text and JSON in a single log entry.
So, the questions are:
1. Is it possible to define a pattern that will match this log format?
2. Is it possible to extract dimensions, values from such a log format?
3. Help me with a pattern to do this.
If you set up the metric filter in the way that you have defined, the test will not register any matches (I have also had this issue); however, when you deploy the metric filter it will still register matches (at least mine did). Just keep in mind that there is no way (as far as I am aware) to run this metric filter backwards, i.e. it will only capture data from when it is created. If you're trying to get stats on past data, you're better off using Logs Insights queries.
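For that past-data case, a rough Logs Insights sketch (just a query over the existing log group, not a metric filter) could be a plain string match on the key from the question, for example:
fields @timestamp, @message
# match the completed-command records retroactively
| filter @message like /lessonrecords-create/
| stats count() by bin(1h)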
I am currently experimenting with different parse statements to try and extract data (it's also a mix of JSON and text); this thread MAY help you (it didn't for me): Amazon Cloudwatch Logs Insights with JSON fields.
UPDATE!
I have found a way to parse the text, but it's a little bit clunky. If you export your CloudWatch logs using a Lambda function to SumoLogic, their search tool allows for MUCH better log manipulation and lets you parse JSON fields (if you treat the entire entry as text). SumoLogic is also really helpful because you can just extract your search results as a CSV. For my purposes, I parse the entire log message in SumoLogic, extract all the logs as a CSV, and then use regex in Python to filter through and extract the values I need.
Let's say you have the following log:
2021-09-29 15:51:18,624 [main] DEBUG com.company.app.SparkResources - AUDIT : {"user":"Raspoutine","method":"GET","pathInfo":"/analysis/123"}
You can parse it like this to be able to handle the part after "AUDIT : " as JSON:
fields @message
| parse @message "* [*] * * - AUDIT : *" as timestamp, thread, logLevel, clazz, msg
| filter ispresent(msg)
| filter method = "GET" # Use the fields discovered inside 'msg' directly: 'method', not 'msg.method'
The fields contained in your isolated / parsed JSON field are automatically added as fields usable in the query
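As a follow-up usage example (still assuming the AUDIT log format shown above), those parsed-out JSON fields can feed straight into an aggregation, e.g. counting audit entries per HTTP method:
fields @message
| parse @message "* [*] * * - AUDIT : *" as timestamp, thread, logLevel, clazz, msg
| filter ispresent(msg)
# count audit lines per HTTP method, using the field discovered inside msg
| stats count(*) by method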
You can use CloudWatch Events for such a purpose (aka subscription filters); what you will need to do is define a CloudWatch rule which uses an expression statement to match your logs.
Here, I will let you do all the reading:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-Scheduled-Rule.html
:)
Split the message into 3 fields and the 3rd field will be valid JSON. I think in your case it would be:
fields @timestamp, @message
| parse @message '[*] *{*}' as field1, field2, field3
| limit 50
field3 is the JSON payload (the content between the braces).
[INFO] gives you the first field.
You can search the JSON string representation, which is not as powerful.
For your example,
instead of { $.key = "lessonrecords-create" }
try "\"key\":\"lessonrecords-create\"".
This filter is not semantically identical to your requirement, though. It will also match events where key is not at the root of the JSON.
You can use the Fluentd agent to send logs to CloudWatch. Create a custom grok pattern based on your metric filter.
Steps:
Install the Fluentd agent on your server
Install the fluent-plugin-cloudwatch-logs and fluent-plugin-grok-parser plugins
Write your custom grok pattern based on your log format
Please refer to this blog for more information

WSO2 CEP siddhi Filter issue

I am trying to use the Siddhi query language, but it seems I am misusing it.
I have some events with the following stream definition:
{
  'name': 'eu.ima.stat.events',
  'version': '1.1.0',
  'nickName': 'Flux event Information',
  'description': 'Details of Analytics Statistics',
  'metaData': [
    {'name': 'HostIP', 'type': 'STRING'}
  ],
  'correlationData': [
    {'name': 'ProcessType', 'type': 'STRING'},
    {'name': 'Flux', 'type': 'STRING'},
    {'name': 'ReferenceId', 'type': 'STRING'}
  ],
  'payloadData': [
    {'name': 'Timestamp', 'type': 'STRING'},
    {'name': 'EventCode', 'type': 'STRING'},
    {'name': 'Type', 'type': 'STRING'},
    {'name': 'EventInfo', 'type': 'STRING'}
  ]
}
I am just trying to filter events with the same processus value and the same flux value, using a query like this one:
from myEventStream[processus == 'SomeName' and flux == 'someOtherName' ]
insert into someStream
processus, flux, timestamp
Whenever I try this, no output is generated. When I get rid of the filter
from myEventStream
insert into someStream
processus, flux, timestamp
all my events are there in the output.
What's wrong with my query?
I can see some spelling mistakes in your query. In the filter you have used a variable named "processus", which is not in the event stream; that is why this query does not give any output. When you are creating a bucket in WSO2 CEP, make sure that the bucket is deployed correctly in the CEP server and check in the management console (CEP Buckets --> List).
In your situation the bucket will not be deployed because of the wrong configuration, and there will also be error messages printed in the terminal where the CEP server runs. After correcting this mistake your query will run perfectly without any issue.
Regards,
Mohan
Considering Mohan's answer, rename the attribute in your stream definition or change your query to use the attribute names as defined (note that Flux and Timestamp are also capitalized in the stream definition):
from myEventStream[ProcessType == 'SomeName' and Flux == 'someOtherName']
insert into someStream
ProcessType, Flux, Timestamp