I typically run a query like
fields #timestamp, #message
| filter #message like /ERROR/
| sort #timestamp desc
| limit 20
Is there any way to get additional lines of context around the messages containing "ERROR"? Similar to the A, B, and C flags with grep?
Example
For example, given a log with the following lines:
DEBUG Line 1
DEBUG Line 2
ERROR message
DEBUG Line 3
DEBUG Line 4
Currently I get the following result:
ERROR message
But I would like to get more context lines, like:
DEBUG Line 2
ERROR message
DEBUG Line 3
with the option to get more lines of context if I want.
You can query the #logStream field as well; in the results it will be a link to the exact spot in the respective log stream of the match:
fields #timestamp, #message, #logStream
| filter #message like /ERROR/
| sort #timestamp desc
| limit 20
That will give you a column similar to the right-most one in this screenshot:
Clicking the link to the right will take you to and highlight the matching log line. I like to open this in a new tab and look around the highlighted line for context.
I found that the most useful solution is to run your query searching for errors, get the request id from the "requestId" field, and open a second browser tab. In the second tab, perform a search on that request id.
Example:
fields #timestamp, #message
| filter #requestId like /fcd09029-0e22-4f57-826e-a64ccb385330/
| sort #timestamp asc
| limit 500
With the above query you get all the log messages, in the correct order, for the request where the error occurred. This example works out of the box with Lambda. But if you push logs to CloudWatch in a different way and there is no requestId, I would suggest creating a requestId per request (or another identifier that is more useful for your use case) and pushing it with your log event.
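If you control the log format, one option (the field values here are purely illustrative) is to emit structured JSON events that carry their own correlation id, since Logs Insights automatically discovers fields in JSON log events and makes them queryable:

```
{"requestId": "abc-123", "level": "ERROR", "message": "illustrative error text"}
```

With events like this, `filter requestId = "abc-123"` works without any `parse` step.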
Related
Hope you're well. I've been trying to put together a CloudWatch Query that returns the first event in each contactId.
I thought I'd add a count stat and then exclude all events with a count of 2 or greater. I'm clearly not doing something right, though. Although I am given the count, the count seems for some reason to exclude the other information from the query: the query returns almost no information about the event it is counting. I'd like the count to be added and also to INCLUDE the information from the query.
Here is the query I am using:
fields #timestamp, #message
| sort number asc
| stats count(ContactId) as number by ContactId
| filter ContactFlowModuleType = 'SetLoggingBehavior' and Parameters.LoggingBehavior = 'Enable'
| fields #message
| display Results, ContactId, #timestamp, ContactFlowModuleType, number
With this query, it says that 'time stamp' is invalid. I believe the stats clause has something to do with it.
I'm looking to determine the sequence of events on a contactId basis, so that I can exclude all logged events after the initial event. For now, I'd just like to see a count on the basis of ContactId, so I can perform the exclusion myself.
Steve
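A note on why the other columns disappear: `stats` keeps only the aggregated values and the fields named in `by`, so everything else is dropped; `filter` also has to run before `stats`, while the raw fields are still available. A sketch (untested, reusing the field names from the question) that keeps a count while surfacing the first event per ContactId via the `earliest` aggregate:

```
filter ContactFlowModuleType = 'SetLoggingBehavior' and Parameters.LoggingBehavior = 'Enable'
| stats earliest(#timestamp) as firstSeen, count(*) as number by ContactId
| sort number asc
```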
I am trying to generate a graph that will display the success/failure rate of an operation. In my application I am pushing log events in the following format:
[loggingType] loggingMessage.
I want to create a pie chart that shows the success/failure ratio, but it's not working. I am running the following:
filter #logStream="RunLogs"
| parse #message "[*] *" as loggingType, loggingMessage
| filter loggingType in ["pass","fail"]
| stats count(loggingType="pass")/count(loggingType="fail") as ratio by bin(12w)
It seems like the condition inside count does not work and grabs everything. It returns 1 every time :(
I came across a similar scenario and, weirdly enough, if you change the query to use sum instead of count it works. I'm not sure why AWS query execution interprets it this way.
filter #logStream="RunLogs"
| parse #message "[*] *" as loggingType, loggingMessage
| filter loggingType in ["pass","fail"]
| stats sum(loggingType="pass")/sum(loggingType="fail") as ratio by bin(12w)
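A likely explanation: `count(expr)` counts the log events where `expr` is non-null, and the boolean `loggingType="pass"` evaluates to 0 or 1 on every matching row (both non-null), so the two counts are always equal and the ratio is always 1. `sum(expr)` adds up those 0/1 values, which is the actual number of matches. Building on that, a sketch (untested) that also reports a pass percentage:

```
filter #logStream = "RunLogs"
| parse #message "[*] *" as loggingType, loggingMessage
| filter loggingType in ["pass", "fail"]
| stats sum(loggingType = "pass") as passes,
        sum(loggingType = "fail") as fails,
        100 * sum(loggingType = "pass") / count(*) as passPercent
        by bin(12w)
```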
Is there a way to show in a widget only the #logStream values that do not contain a specific log message?
I tried to use a Log table with some filters as presented in:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax.html
and:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax.html#CWL_QuerySyntax-regex
As I understand it, the filters apply to messages, but I need a way to filter and select at the log stream level.
Tried something like this:
fields #logStream, strcontains(#logStream, "[INFO] - My message") as found
| filter found=0
| display #logStream
| limit 20
But the filter is not working; it displays all the messages.
Thank you!
Your strcontains(#logStream, "[INFO] - My message") as found part of your query looks incorrect to me. I think you meant for this to be: strcontains(#message, "[INFO] - My message") as found?
You can filter by log stream:
filter #logStream = 'xxx/yyy/5d4d93708e4e42a1924fb4a1f7003ccd'
| fields #timestamp, #message
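If the goal really is to exclude whole log streams that contain the message anywhere (rather than filtering individual messages), one sketch is to aggregate the per-message flag up to the stream level. This is untested, but it uses only documented commands: `max(found)` per stream is 1 if any message in the stream matched, so filtering on 0 keeps only streams with no match:

```
fields strcontains(#message, "[INFO] - My message") as found
| stats max(found) as hasMessage by #logStream
| filter hasMessage = 0
| limit 20
```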
I have a log string:
F, [2021-02-24T09:06:30.428708 #9] FATAL -- : [3c25b3e6-fa19-48c8-93c7-5661dc2ec338]
ActionController::RoutingError (No route matches [GET] "/api/jsonws/invoke"):
I want to extract the path /api/jsonws/invoke as a parsed field with this request:
fields #timestamp, #message
| limit 300
| parse #message /.*No route matches [[A-Z]{3,7}] "(?<path>.*)".*/
I expect to see /api/jsonws/invoke in the output in column path, but instead my path column in the output is always empty.
I've tested the regexp with an online tool and it seems to work as I expect. I'm also sure that there are matching logs in the output.
Is there any mistake in my Log Insights query?
Regexp didn't work out, so I ended up doing this:
fields #timestamp, #message
| parse #message "(No route matches [*] \"*\"):" as method, path
| filter ispresent(path)
| stats count(*) as count by path, method
| sort count desc
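For what it's worth, the regexp version likely failed because the literal square brackets around the HTTP method were not escaped: in /.*No route matches [[A-Z]{3,7}] "..."/ the [[A-Z] opens a character class and the trailing ] is matched literally, so the pattern never matches the actual text. A sketch with the brackets escaped (untested against the exact log format):

```
fields #timestamp, #message
| parse #message /No route matches \[[A-Z]{3,7}\] "(?<path>[^"]+)"/
| filter ispresent(path)
| stats count(*) as count by path
| sort count desc
```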
I have messages like the one below (the logs contain many other JSON formats that are not related to this):
request body to the server {'sender': '65ddd20eac244AAe619383e4d8cb558834', 'message': 'hello'}
I would like to group these messages based on the sender (an alphanumeric value) enclosed in the JSON.
CloudWatch Logs Insights query:
fields #message |
filter #message like 'request body to the server' |
parse #message "'sender': '*', 'message'" as sender |
stats count(*) by sender
Query results:
-------------------------------------------------
| sender | count(*) |
|------------------------------------|----------|
| 65ddd20eac244AAe619383e4d8cb558834 | 4 |
| 55ddd20eac244AAe619383e4d8cb558834 | 3 |
-------------------------------------------------
Screenshot:
You can use filter.
fields #timestamp, #message
| filter #message like "65ddd20eac244AAe619383e4d8cb558834"
| sort #timestamp desc
| limit 20
It will filter the messages (limited to 20) that were sent by 65ddd20eac244AAe619383e4d8cb558834.
Update:
Suppose the JSON log format is this:
{
"sender": "65ddd20eac244AAe619383e4d8cb558835",
"message": "Hi"
}
Now I want to count the number of messages from 65ddd20eac244AAe619383e4d8cb558835:
how many messages are coming from each user?
That's simple; you can run this query:
# keep only events that contain a sender, to avoid Lambda default logs
filter #message like "sender"
| stats count(sender) by sender
If you want to see the messages as well, modify the query a bit:
filter #message like "sender"
| stats count(*) by sender, message
Here #message refers to the whole raw log event, while message refers to the message field inside the JSON object.
count_distinct
Returns the number of unique values for the field. If the field has
very high cardinality (contains many unique values), the value
returned by count_distinct is just an approximation.
How many distinct users are there in the selected interval?
This counts the distinct senders per 3-hour interval:
stats count_distinct(sender) as distinct_sender by bin(3h) as interval