I want to query logs at a specific timestamp, for example:
I need to query December 31st, 2021 at 8:55 AM.
fields @timestamp, @message
| sort @timestamp desc
| limit 25
You can also change the time range in the top right of the console: first click on Custom,
then switch to Absolute and specify the exact start and end dates/times you want.
The existing question "Filter by timestamp query on AWS CloudWatch Logs Insights" may also help if you want to do it in the query itself.
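If you prefer to pin the exact window programmatically rather than through the console picker, here is a minimal sketch using boto3's StartQuery API, which takes explicit startTime/endTime values in epoch seconds. The log group name is a placeholder and the timestamp is assumed to be UTC:

import time
from datetime import datetime, timezone

import boto3

logs = boto3.client("logs")

# 2021-12-31 08:55 is assumed to be UTC; adjust tzinfo if your logs use another zone.
start = int(datetime(2021, 12, 31, 8, 55, tzinfo=timezone.utc).timestamp())
end = start + 60  # a one-minute window around the timestamp of interest

query_id = logs.start_query(
    logGroupName="/aws/lambda/my-function",  # hypothetical log group name
    startTime=start,
    endTime=end,
    queryString="fields @timestamp, @message | sort @timestamp desc | limit 25",
)["queryId"]

response = logs.get_query_results(queryId=query_id)
while response["status"] in ("Running", "Scheduled"):
    time.sleep(1)
    response = logs.get_query_results(queryId=query_id)

for row in response["results"]:
    print({f["field"]: f["value"] for f in row})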
I have a query to retrieve how much time a specific 'event' takes to finish:
fields @timestamp, @message
| parse @message "[Id: *] *" as eventID, loggingMessage
| stats sortsFirst(@timestamp) as date1, sortsLast(@timestamp) as date2 by eventID
This returns a table with one row per eventID, with date1 and date2 columns.
I can do things like | display (date2 - date1) to make some calculations, but what I would really like is to group all of them and calculate avg(date2 - date1), so that only one result appears.
I've tried what other posts recommend:
| stats sortsFirst(@timestamp) as date1, sortsLast(@timestamp) as date2 by eventID, avg(date2-date1)
but this results in a syntax error because of the 'by eventID'. If I remove it, though, my query is no longer grouped by eventID.
How could I get around this?
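One possible workaround, not from this thread: run the grouped query as written and average date2 - date1 client-side over the rows it returns (fetched, for example, with get_query_results as in the earlier sketch). The field names match the query above; the "%Y-%m-%d %H:%M:%S.%f" timestamp format is an assumption:

from datetime import datetime

def average_event_duration_seconds(rows):
    # rows is assumed to be the "results" list from get_query_results,
    # i.e. a list of [{"field": ..., "value": ...}, ...] entries, one per eventID.
    fmt = "%Y-%m-%d %H:%M:%S.%f"
    durations = []
    for row in rows:
        fields = {f["field"]: f["value"] for f in row}
        delta = datetime.strptime(fields["date2"], fmt) - datetime.strptime(fields["date1"], fmt)
        durations.append(delta.total_seconds())
    return sum(durations) / len(durations) if durations else None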
I want to display a bar diagram where each bar represents a week starting on a Monday.
The query below does that, except that each week starts on a Sunday.
I realize that the query syntax documentation only mentions time periods with m for minutes and h for hours, but it also seems to work fine using d for day and w for week (except that I cannot set the starting day).
Any idea how to make weeks starting on Mondays instead of Sundays?
fields @timestamp, @message
| fields strcontains(@message, 'the start') as start
| fields strcontains(@message, 'the result') as result
| stats sum(start) as startCalls, sum(result) as resultCalls by bin(1w) as t
| sort t asc
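One possible workaround, not from this thread: bin by day in the query instead (for example | stats ... by bin(1d) as t) and regroup the daily rows into Monday-start weeks client-side. The field names below assume that modified query, with rows fetched as in the earlier sketch:

from collections import defaultdict
from datetime import datetime, timedelta

def weekly_counts_monday_start(rows):
    # rows is assumed to be the "results" list from get_query_results.
    weekly = defaultdict(lambda: {"startCalls": 0, "resultCalls": 0})
    for row in rows:
        fields = {f["field"]: f["value"] for f in row}
        day = datetime.strptime(fields["t"][:10], "%Y-%m-%d").date()
        monday = day - timedelta(days=day.weekday())  # weekday() is 0 on Mondays
        weekly[monday]["startCalls"] += int(float(fields["startCalls"]))
        weekly[monday]["resultCalls"] += int(float(fields["resultCalls"]))
    return dict(sorted(weekly.items()))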
We want to find the missed utterance rate per day from Lex logs.
For example:
Day 1 - 10 total utterances, 1 missed utterance
Day 2 - 20 total utterances, 4 missed utterances
...
We want to be able to plot (missed utterances / total utterances x 100) per day (essentially a percentage) for one week; however, we also need to include Lambda exceptions as part of our "missed utterances" count.
How do we calculate the total & missed utterance count and then obtain a %?
Is this possible in CloudWatch Logs Insights?
Expected output is a graph for 7 days that shows the percentage of missed utterances + exceptions relative to total utterances for each day.
<date 1> 1%
<date 2> 4%
...
One query we tried is:
fields @message
| filter @message like /Exception/ or missedUtterance = 1
| stats count(*) as exceptionCount, count(@message) as messageCount by bin(1d)
| display exceptionCount, (exceptionCount/messageCount) * 100
| sort @timestamp desc
This is unfortunately not possible within CloudWatch Logs Insights, as you would need two filter and two stats commands.
One filter would be used to get the total count and another to get the exception + missed utterance count.
While you can chain one filter after another, you can't get the counts of each filter's result, because two stats commands are not supported within Logs Insights (yet).
The most you can do within CloudWatch is to create a dashboard (or two Logs Insights queries) with the below queries and calculate the percentage yourself:
fields @message
| stats count(*) as totalUtteranceCount by bin(1d)
fields @message
| filter @message like /Exception/ or missedUtterance = 1
| stats count(*) as exceptionAndMissedUtteranceCount by bin(1d)
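A rough sketch, not part of the answer, of the "calculate the percentage yourself" step, assuming the two queries above are run over the same period, each bin column is aliased (e.g. ... by bin(1d) as day), and the rows come from get_query_results as in the earlier sketch:

def missed_percentage_per_day(total_rows, missed_rows):
    def to_counts(rows, count_field):
        counts = {}
        for row in rows:
            fields = {f["field"]: f["value"] for f in row}
            counts[fields["day"]] = float(fields[count_field])
        return counts

    totals = to_counts(total_rows, "totalUtteranceCount")
    missed = to_counts(missed_rows, "exceptionAndMissedUtteranceCount")
    # Days with no exceptions or missed utterances simply come out as 0%.
    return {day: 100.0 * missed.get(day, 0.0) / total
            for day, total in totals.items() if total}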
In an enterprise chatbot project that I was an engineer on, I configured logs to be exported to Elasticsearch (OpenSearch in the AWS Console), which opened up a whole new world of data analysis and gave me the ability to run statistics like the above.
If this is a must, I would look at implementing a similar solution until AWS improves CloudWatch Logs Insights or provides this statistic within Amazon Lex itself.
In the long run I would go with that option anyway; Logs Insights is not meant to be a full-blown data analysis tool, and you'll need to carry out much more analysis on your data (missed utterances, intents, etc.) in order to improve your bot.
Hopefully, something like this query works in the future!
fields @message
| stats count(*) as totalUtteranceCount by bin(1d)
| filter @message like /Exception/ or missedUtterance = 1
| stats count(*) as exceptionAndMissedUtteranceCount by bin(1d)
| display (exceptionAndMissedUtteranceCount/totalUtteranceCount) * 100
| sort @timestamp desc
We could get it working using the below query:
fields strcontains(@message, 'Error') as ErrorMessage
| fields strcontains(@message, '"missedUtterance":true') as missedUtterance
| stats sum(ErrorMessage) as ErrorMessageCount, sum(missedUtterance) as missedCount,
  count(@message) as messageCount, ((ErrorMessageCount + missedCount) / messageCount * 100) by bin(1d)
Here, we are using strcontains instead of parse because if there were no missed utterances on a particular day, the calculation (ErrorMessageCount + missedCount) / messageCount * 100 came back empty.
The result is one row per day containing the error count, the missed-utterance count, the total message count, and the computed percentage.
I have a DynamoDB table which contains Date, City, and other attributes as columns. I have configured a GSI with Date as the hash key. The table contains 27 attributes from 350 cities, recorded daily.
+------------+---------+------------+-----+-------------+
| Date       | City    | Attribute1 | ... | Attribute27 |
+------------+---------+------------+-----+-------------+
| 25-06-2020 | Boston  | someValue  | ... | someValue   |
| 25-06-2020 | NY      | someValue  | ... | someValue   |
| 25-06-2020 | Chicago | someValue  | ... | someValue   |
+------------+---------+------------+-----+-------------+
I have a Lambda proxy integration set up in API Gateway. The Lambda function receives a 7-day date range in the request. Each date in this range is used to query DynamoDB (using a QueryInput) to get all the items for that day. The results for each day are consolidated for the week and then sent back as a JSON response.
The latency seen in Postman is around 1.5 s, even after increasing the Lambda memory to 1024 MB (although only 76 MB is being consumed).
Is there any way to improve the performance? The DynamoDB table is already using on-demand capacity.
You don't say whether you are using parallel queries or not.
If not, do so.
You also don't say what CloudWatch is showing for query latency; as mentioned by Marcin, DAX can help reduce that.
You also don't mention what CloudWatch is showing for Lambda execution. There are various articles about optimizing Lambda.
Whatever's left is networking, and there isn't much you can do about that. One piece to consider is reusing DB connections in your Lambda.
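A minimal sketch of the "parallel queries" and "reuse DB connections" points, assuming a Python Lambda; the table name, the GSI name (DateIndex), and the date format are placeholders:

from concurrent.futures import ThreadPoolExecutor

import boto3
from boto3.dynamodb.conditions import Key

# Created once, outside the handler, so warm invocations reuse the connection.
table = boto3.resource("dynamodb").Table("CityMetrics")

def query_day(date):
    # Query the GSI for one day, following pagination if the day spans pages.
    items, kwargs = [], {"IndexName": "DateIndex", "KeyConditionExpression": Key("Date").eq(date)}
    while True:
        page = table.query(**kwargs)
        items.extend(page["Items"])
        if "LastEvaluatedKey" not in page:
            return items
        kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

def handler(event, context):
    dates = event["dates"]  # e.g. ["19-06-2020", ..., "25-06-2020"]
    # Run the 7 daily queries in parallel instead of sequentially.
    with ThreadPoolExecutor(max_workers=7) as pool:
        per_day = list(pool.map(query_day, dates))
    return dict(zip(dates, per_day))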
I have a lot of AWS Lambda logs which I need to query to find the relevant log stream name.
I am logging a particular string in the logs,
which I need to match with a like or exact query.
The log format is something like this -
Request ID => 572bf6d2-e3ff-45dc-bf7d-c9c858dd5ccd
I am able to query the logs without the UUID string -
But if I mention the UUID in the query, it does not show results -
Queries used -
fields @timestamp, @message
| filter @message like /Request ID =>/
| sort @timestamp desc
| limit 20
fields @timestamp, @message
| filter @message like /Request ID => 572bf6d2-e3ff-45dc-bf7d-c9c858dd5ccd/
| sort @timestamp desc
| limit 20
Have you tried adding an additional filter on the message field to your first query to further narrow your results?
fields @timestamp, @message
| filter @message like /Request ID =>/
| filter @message like /572bf6d2-e3ff-45dc-bf7d-c9c858dd5ccd/
| sort #timestamp desc
| limit 20
Alternatively, if all of your logs follow the same format, you could use the parse keyword to split out your UUID field and search on it with something like:
fields @timestamp, @message
| parse @message "* * Request ID => *" as datetime, someid, requestuuid
| filter requestuuid like /572bf6d2-e3ff-45dc-bf7d-c9c858dd5ccd/
| sort #timestamp desc
| limit 20
Also try widening your relative time range at the top right of the query, just in case the request you're looking for has dropped outside of the 1hr range since attempting the first query.
Instead of using two like filters as in the accepted answer, I would suggest using the in operator as follows. This way your query is shorter and cleaner.
fields @timestamp, @message
| filter @message in ["Request ID =>", "572bf6d2-e3ff-45dc-bf7d-c9c858dd5ccd"]
| sort @timestamp desc
| limit 20