I want to add a CloudWatch custom dashboard for my Lambda's error logs, i.e. a metric covering only the log lines that reflect ERRORs in the Lambda function. I tried the following query in Logs Insights, but it is not working:
fields @timestamp, @message
| sort @timestamp desc
| filter @message like ERROR
| limit 20
I also tried to create a metric filter, but it shows me: There are no metrics in this namespace for the region "Europe (London)".
I managed to solve this issue with:
fields @message
| parse @message "[*] *" as loggingType, loggingMessage
| filter @message like /Error/
| display loggingMessage
| limit 500
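To graph this on a dashboard widget, a count per time bin is more useful than the raw messages; a minimal sketch of the same filter aggregated with stats (the 5-minute bin is my own choice, adjust as needed):
fields @message
| filter @message like /Error/
| stats count(*) as error_count by bin(5m)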
I want to parse this message:
[2021-08-30T14:01:01.443908+00:00] technical.INFO: Webhook
"239dfb55-c8f3-4ae2-8974-22dadb7417ba" (wallet.create) has been
handle.
To get:
the UUID (here: 239dfb55-c8f3-4ae2-8974-22dadb7417ba)
the words in brackets (here: wallet.create)
I can get the UUID but not the terms in brackets.
I think my regex is correct, but it doesn't work in Logs Insights :(
My query:
fields @message
| filter @message like /technical.INFO: Webhook "/
| parse @message /(?<webhookId>\b[0-9a-f]{8}\b-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-\b[0-9a-f]{12}\b)/
| parse @message /(?<@endpt_get>\(([^)]+)\)/
| sort @timestamp desc
| limit 5
My regex for the words in brackets:
https://regex101.com/r/ewSm6O/1
If I comment out this line of my query:
parse @message /(?<@endpt_get>\(([^)]+)\)/
I get the correct result.
With that line in place, the query returns nothing.
Could you please help me?
If your log messages are all going to have this same format, you can use glob instead of regex (and for something complex like this, that may be easier):
fields @message, @timestamp
| parse @message "technical.INFO: Webhook \"*\" (*) has been handle" as uuid, term_to_catch
| sort @timestamp desc
| display @timestamp, uuid, term_to_catch
If some sections of the message (like technical.INFO) can change, you can always replace them with * as well and catch them in a dummy variable that you then simply ignore:
| parse @message "*: Webhook \"*\" (*) has been handle" as type, uuid, term_to_catch
| display @timestamp, uuid, term_to_catch
Alternatively, if you want to stick with your regex, the most likely reason is that you are not storing the parsed results as their own variables, so they overwrite each other:
| parse @message /your*regex/ as uuid
| parse @message /your*second.regex/ as term_to_catch
That may get what you need as well.
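Putting that together for the example message, a sketch of the pure-regex variant; Logs Insights stores each named capture group as a field, so no as clause is needed (the group names here are my own):
fields @message
| filter @message like /technical.INFO: Webhook "/
| parse @message /"(?<uuid>[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})"/
| parse @message /\((?<term_to_catch>[^)]+)\)/
| display @timestamp, uuid, term_to_catch
| sort @timestamp desc
| limit 5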
I'm running an AWS Lambda function and need to find some information in its CloudWatch logs. What I'm doing seems too inefficient, but I don't know how else to approach it. I'd like to know a more efficient way.
What I'm doing is...
I have some ids:
1111-1111-111
2222-2222-222
3333-3333-333
...
Search for specific messages with an id in the Logs Insights console:
fields @timestamp, @logStream, @message
| filter @message like /myId/
| sort @timestamp desc
| parse @message '"myId" : "*"' as my_id
| filter my_id like /1111-1111-111/
Download the result as a CSV file.
Parse @logStream with Python:
import csv

with open('1111-1111-111.csv') as csvfile:
    reader = csv.DictReader(csvfile, delimiter=',')
    for i, row in enumerate(reader):
        print(str(i) + ": " + row['@logStream'])
Get the log streams and search again in the Logs Insights console:
2021/06/05/[$LATEST]1111111111111
2021/06/05/[$LATEST]1111111111111
2021/06/05/[$LATEST]1111111111111
2021/06/05/[$LATEST]2222222222222
2021/06/05/[$LATEST]3333333333333
2021/06/05/[$LATEST]3333333333333
2021/06/05/[$LATEST]3333333333333
2021/06/05/[$LATEST]3333333333333
2021/06/05/[$LATEST]4444444444444
...
Search again with the log streams and get what I really want:
fields @timestamp, @logStream, @message
| filter @logStream = '2021/06/05/[$LATEST]1111111111111'
| filter @message like /file_name/
| parse @message "'file_name': '*'" as file_name
After getting file_name, I have to search inside the file again with myId, because multiple invocations can share the same log stream and I can't be sure the match belongs to my id.
Doing this manually is too hard, and doing it with boto3 is also hard for me because I'm not familiar with the boto3 logs client's query/wait/result flow. I also think there should be a better way.
Could you suggest a better workflow?
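For reference, the boto3 flow for Logs Insights is: start_query, then poll get_query_results until the status is Complete. A minimal sketch, assuming a log group named /aws/lambda/my-function (a placeholder) and the query from step 2:

import time
import boto3

logs = boto3.client('logs')

# start a Logs Insights query over the last 24 hours
resp = logs.start_query(
    logGroupName='/aws/lambda/my-function',  # placeholder: use your log group
    startTime=int(time.time()) - 24 * 3600,
    endTime=int(time.time()),
    queryString=(
        'fields @timestamp, @logStream, @message '
        '| parse @message \'"myId" : "*"\' as my_id '
        '| filter my_id like /1111-1111-111/'
    ),
)

# poll until the query finishes
while True:
    result = logs.get_query_results(queryId=resp['queryId'])
    if result['status'] in ('Complete', 'Failed', 'Cancelled'):
        break
    time.sleep(1)

# each result row is a list of {'field': ..., 'value': ...} dicts
for row in result.get('results', []):
    values = {f['field']: f['value'] for f in row}
    print(values.get('@logStream'))

This at least removes the CSV download step, since the @logStream values come back directly in the query results.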
Is there a way to show in a widget only the @logStream values which do not contain a specific log message?
I tried to use a Log table with some filters as presented in:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax.html
and:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax.html#CWL_QuerySyntax-regex
As I understand it, the filters apply to messages, but I need a way to filter and select at the log stream level.
I tried something like this:
fields @logStream, strcontains(@logStream, "[INFO] - My message") as found
| filter found = 0
| display @logStream
| limit 20
But the filter is not working; it displays all the messages.
Thank you!
Your strcontains(@logStream, "[INFO] - My message") as found part of the query looks incorrect to me. I think you meant it to be strcontains(@message, "[INFO] - My message") as found?
You can filter by log stream:
filter @logStream = 'xxx/yyy/5d4d93708e4e42a1924fb4a1f7003ccd'
| fields @timestamp, @message
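To address the original ask (streams that never contain the message) at the stream level, one approach is to aggregate a strcontains flag per stream; streams where the aggregate is 0 never logged the message. A sketch (sorting rather than filtering on the aggregate, since filtering on aggregated values may not be supported):
fields strcontains(@message, "[INFO] - My message") as found
| stats max(found) as has_message by @logStream
| sort has_message asc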
I have messages like the one below (among many JSON formats that are not related to this at all):
request body to the server {'sender': '65ddd20eac244AAe619383e4d8cb558834', 'message': 'hello'}
I would like to group these messages based on the sender (an alphanumeric value) enclosed in the JSON.
CloudWatch Logs Insights query:
fields @message
| filter @message like 'request body to the server'
| parse @message "'sender': '*', 'message'" as sender
| stats count(*) by sender
Query results:
-------------------------------------------------
| sender | count(*) |
|------------------------------------|----------|
| 65ddd20eac244AAe619383e4d8cb558834 | 4 |
| 55ddd20eac244AAe619383e4d8cb558834 | 3 |
-------------------------------------------------
You can use filter:
fields @timestamp, @message
| filter @message like "65ddd20eac244AAe619383e4d8cb558834"
| sort @timestamp desc
| limit 20
It will return all the messages (limited to 20) sent by 65ddd20eac244AAe619383e4d8cb558834.
Update:
Suppose the JSON log format is this:
{
    "sender": "65ddd20eac244AAe619383e4d8cb558835",
    "message": "Hi"
}
Now I want to count the number of messages from 65ddd20eac244AAe619383e4d8cb558835.
How many messages are coming from each user?
It's simple, you can run this query:
stats count(sender) by sender
# filter only messages that contain sender, to avoid Lambda default logs
| filter @message like "sender"
If you want to see the messages as well, modify the query a bit:
stats count(*) by sender, message
| filter @message like "sender"
Here @message refers to the whole indexed log event, while message refers to the message field inside the JSON object. (Logs Insights automatically discovers fields in JSON log events, which is why sender and message can be referenced directly without a parse step.)
count_distinct
Returns the number of unique values for the field. If the field has very high cardinality (contains many unique values), the value returned by count_distinct is just an approximation.
How many distinct users are there in the selected interval?
This will count the distinct users in 3-hour intervals:
stats count_distinct(sender) as distinct_sender by bin(3hr) as interval
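The two ideas combine as well; for example, message counts per user per hour (the 1-hour bin is my own choice):
filter @message like "sender"
| stats count(*) as msg_count by sender, bin(1h) as interval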
I'm trying to create an AWS dashboard visualization that displays the counts of cache hits vs. misses over a period of time. To do this, I'm setting up a log-type dashboard widget with an Insights query on the log. To keep things as simple as possible, my log is either:
{"cache.hit", true} or {"cache.hit", false}.
I would like for my dashboard to track both possibilities on the same graph, but it seems like I can't without breaking my log up into distinct rows for these values. For example, if my logs were simply:
{"cache.hit.true", true} or {"cache.hit.false", true}, then I could create 2 separate graphs to track these values independently in the dashboard, but that's not as clean.
To get them on one dashboard, I've tried the query below, but all it does is display the two fields, and the values for both display fields are the same, when they definitely shouldn't be:
fields @timestamp, @message, cache.hit as cache_hits
| filter cache_hits IN [0, 1]
| display cache_hits = 0 as in_cache_false
| display cache_hits = 1 as in_cache_true
| stats count(in_cache_true), count(in_cache_false) by bin(30s)
| sort @timestamp desc
| limit 20
The query below extracts the cache hits and cache misses and then works out the cache hit percentage.
fields @timestamp, @message
| filter @message like /cache.hit/
| fields strcontains(@message, "true") as CacheHit,
  strcontains(@message, "false") as CacheMiss
| stats sum(CacheHit) as CacheHits, sum(CacheMiss) as CacheMisses, sum(CacheHit) / (sum(CacheMiss) + sum(CacheHit)) * 100 as HitPercentage by bin(30s)
| sort @timestamp desc
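Since strcontains returns 1 when the substring is found and 0 otherwise, summing the flags counts the hits and misses in each 30-second bin, so CacheHits, CacheMisses, and HitPercentage can all be plotted as series on a single widget.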