I have a simple Google Cloud Monitoring Query Language (MQL) query that shows the count of all requests to all containers in Kubernetes, based on a log-based metric. The query is below:
k8s_container::logging.googleapis.com/user/service-api-gateway-prod-request-in-count | sum
I would like to rename the long label for the line chart to something shorter like "request count". How do I do it?
So the best I can do is to add a new column to the table and map that column.
In my example, I append | add [p: 'error count'] | map [p] to the query, which becomes:
k8s_container::logging.googleapis.com/user/service-api-gateway-prod-request-in-count | sum | add [p: 'error count'] | map [p]
This works in my case.
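For the "request count" label asked about in the question, the same pattern should presumably work with only the string literal swapped:
k8s_container::logging.googleapis.com/user/service-api-gateway-prod-request-in-count | sum | add [p: 'request count'] | map [p]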
References
https://cloud.google.com/monitoring/mql/reference#map
Instead of using MQL (Monitoring Query Language), try the Advanced tab. As an example I will use a metric named mysite-container-exited; you can name it whatever you want.
Select your resource type and the metric that you created as a log-based metric.
Select No preprocessing step.
Select SUM as the alignment function.
Now the widget will just show the name that you entered in the log-based metric details tab.
The sum is actually a shortcut for the group_by table operation with the sum aggregator. Using the complete form of group_by allows you to control the output value column name:
k8s_container::logging.googleapis.com/user/service-api-gateway-prod-request-in-count
| group_by [], [request_count: sum(val())]
You can try renaming the value column with | value [request_count: val()].
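Putting that together with the original query, something like this should produce the shorter column name (a sketch based on the reference below; I have not verified it against this exact metric):
k8s_container::logging.googleapis.com/user/service-api-gateway-prod-request-in-count
| sum
| value [request_count: val()]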
Reference entry for the value operator
I am creating a line chart in a CloudWatch dashboard. I can create a line representing the frequency with which one string appears in the logs, using a filter. But I don't know how to create two or more such lines.
After selecting a log group, I run this query:
filter name = "first log string"
| stats count(*) as firstString by bin(1hour)
This generates counts of results that, in the Visualization tab, are displayed as a line chart with a single line.
Now I want to add another line representing "second log string" on the chart. I assume I will have to modify the filter in some way, or add a second filter.
Here are some things that don't work:
adding a second name to the filter
trying filter @message or @name as this documentation suggests
simply pasting in a modified duplicate of the query
Further, I seem to lack documentation that explains how filter is supposed to work. Search engines keep sending me back to the Filter And Pattern Syntax AWS document, which doesn't appear to give any actual examples using filter.
To answer my own question, I had to use an array in my filter. I also used the sum() function instead of count().
filter name in ["first log string", "second log string"]
| fields name = "first log string" as @first_string, name = "second log string" as @second_string
| stats sum(@first_string) as first_string, sum(@second_string) as second_string by bin(1hour)
Now I have a line chart with two lines representing the frequency of the two logged items. This works because each comparison like name = "first log string" evaluates to 1 when it matches and 0 otherwise, so sum() totals the matches in each bin, which count() would not.
The same question once again but with (I hope) better explanation:
I created the simplest case:
An Interactive Grid (IG) with data source EMP (a table with 14 records containing Ename, Job, HireDate, Salary, etc.)
A text field P7_ENAME
What I would like to do is copy Ename from the selected record of the IG to the P7_ENAME field.
I found several tutorials (text and video) on how to do it. Most of them suggest creating a Selection Change dynamic action on the IG and, when TRUE, adding JavaScript code something like the following:
var v_ename;
var model = this.data.model; // the IG's data model
v_ename = model.getValue(this.data.selectedRecords[0], "Ename"); // Ename of the first selected record
apex.item("P7_ENAME").setValue(v_ename); // copy it into the page item
and the second step is to create another action: Refresh.
So finally I have a dynamic action with two steps: the first one is the JavaScript code and the second a Refresh of my P7_ENAME field.
Sounds simple, and it is simple to repeat/implement. I followed a video published on YouTube (https://www.youtube.com/watch?v=XuFz885Yndw), and for its author it works. In my case it simply does not work: field P7_ENAME is always empty and no errors appear. Any idea why? Any hints or suggestions?
thanks for any help
K.
The best way to debug and achieve what you are trying to do is as follows:
Create the dynamic action with the following setup:
- When -> Selection Change [Interactive Grid]
- Selection Type -> Region; Region -> your IG region
- Client-side Condition -> JavaScript expression: this.data.selectedRecords[0] != undefined
Make the first TRUE action of the DA of type Execute JavaScript Code, with Fire on Initialization turned on and the code: console.log(this.data.selectedRecords);
Run your page and check the browser console. You should see an array of column values when you select a record in that IG.
Find which index of that array contains the data that you want to use for the page item. Let's say I want the 3rd element (index 2), which is "2694"; then I should change my dynamic action's Execute JavaScript Code to:
var value = this.data.selectedRecords[0][2]; // 3rd column of the first selected record
apex.item("P7_ENAME").setValue(value); // copy it into the page item
The last thing to do is add another TRUE action to the same dynamic action (with the Refresh action at the end): type Set Value, with PL/SQL Expression as the Set Type, :P7_ENAME as the expression, P7_ENAME in Items to Submit, and item P7_ENAME as the affected element.
I'm running a CloudWatch log insights query on a single log stream that corresponds to a single Python AWS Lambda function. This function logs a unique line corresponding to the key in s3 that it is processing. It logs this line once at the beginning of the invocation. The only condition where it won't log this line is if it fails before it even reads the event.
The query is:
parse @message /(?<@unique_key>Processing key: \w+\/[\w=_-]+\/\w+\.\d{4}-\d{2}-\d{2}-\d{2}\.[\w-]+\.\w+\.\w+)/
| filter @message like /Processing key: \w+\/[\w=_-]+\/\w+\.\d{4}-\d{2}-\d{2}-\d{2}\.[\w-]+\.\w+\.\w+/
| stats count(@unique_key) - count_distinct(@unique_key) as @distinct_unique_keys_delta
  by datefloor(@timestamp, 1d) as @_datefloor
| sort @_datefloor asc
The two regular expressions in this query will parse the full key of the S3 file being processed. In this particular problem, and in general, my understanding is that the count(...) of any quantity minus the count_distinct(...) of the same quantity should always be greater than or equal to zero.
For several of the days in the results, it is a negative number.
I thought I might be misunderstanding the correct usage of datefloor(), so I tried running the following query:
parse @message /(?<@unique_key>Processing key: \w+\/[\w=_-]+\/\w+\.\d{4}-\d{2}-\d{2}-\d{2}\.[\w-]+\.\w+\.\w+)/
| filter @message like /Processing key: \w+\/[\w=_-]+\/\w+\.\d{4}-\d{2}-\d{2}-\d{2}\.[\w-]+\.\w+\.\w+/
| stats count(@unique_key) - count_distinct(@unique_key) as @distinct_unique_keys_delta
The result was -20,347.
At this point, the only scenarios I can see are:
Something wrong with the code executing the query.
I'm misunderstanding this tool.
I have discovered that the count_distinct function in AWS Logs Insights queries doesn't really return a distinct count! As per the documentation:
Returns the number of unique values for the field. If the field has very high cardinality (contains many unique values), the value returned by count_distinct is just an approximation.
Apparently I can't just assume that a function returns an accurate result.
The documentation page.
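If you need an exact figure to compare against, one workaround might be to group by the key first and then count the groups. This is a sketch only; it assumes your environment supports chaining two stats commands (a later Logs Insights feature), and I have not run it against this data:
parse @message /(?<@unique_key>Processing key: \w+\/[\w=_-]+\/\w+\.\d{4}-\d{2}-\d{2}-\d{2}\.[\w-]+\.\w+\.\w+)/
| filter @message like /Processing key: \w+\/[\w=_-]+\/\w+\.\d{4}-\d{2}-\d{2}-\d{2}\.[\w-]+\.\w+\.\w+/
| stats count(*) as events by @unique_key
| stats count(*) as exact_distinct_keys, sum(events) as total_events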
Hope you're well. I've been trying to put together a CloudWatch Query that returns the first event in each contactId.
I thought I'd add a count stat and then exclude all events with a count of 2 or greater. I'm clearly not doing something right, though. Although I am given the count, the count seems to exclude other information from the query: the query returns almost no information about the events it is counting. I'd like the count to be added and the information from the query to be INCLUDED as well.
Here is the query I am using:
fields @timestamp, @message
| sort number asc
| stats count(ContactId) as number by ContactId
| filter ContactFlowModuleType = 'SetLoggingBehavior' and Parameters.LoggingBehavior = 'Enable'
| fields @message
| display Results, ContactId, @timestamp, ContactFlowModuleType, number
With this query, it says that 'time stamp' is invalid. I believe the stats clause has something to do with it.
I'm looking to determine the sequence of events on a contactId basis, so that I can exclude all logged events after the initial event. For now, I'd just like to see a count on the basis of ContactId, so I can perform the exclusion myself.
Steve
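One possible approach (a sketch, not verified against this data): Logs Insights runs commands in order, so after a stats command only the aggregated and grouping fields still exist, which is why later references to @timestamp fail. Assuming ContactId is present on the filtered events, the earliest() aggregate could return the first timestamp per contact alongside the count:
filter ContactFlowModuleType = 'SetLoggingBehavior' and Parameters.LoggingBehavior = 'Enable'
| stats count(ContactId) as number, earliest(@timestamp) as first_event by ContactId
| sort number asc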
I am trying to generate a graph that will display the success/failure rate of an operation. In my application I am pushing log events in the following format:
[loggingType] loggingMessage.
I want to create a pie chart that shows the ratio of success/failure, but it's not working. I am running the following:
filter @logStream="RunLogs"
| parse @message "[*] *" as loggingType, loggingMessage
| filter loggingType in ["pass","fail"]
| stats count(loggingType="pass")/count(loggingType="fail") as ratio by bin(12w)
It seems like the condition inside count does not work and grabs everything. It returns 1 every time :(
I came across a similar scenario; super weirdly, I believe, if you change the query to use sum instead of count, it works. A plausible explanation: the comparison loggingType="pass" evaluates to 1 or 0 per row, and count() counts every row where that expression is non-blank (0 and 1 alike), while sum() adds up only the 1s.
filter @logStream="RunLogs"
| parse @message "[*] *" as loggingType, loggingMessage
| filter loggingType in ["pass","fail"]
| stats sum(loggingType="pass")/sum(loggingType="fail") as ratio by bin(12w)
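If what you actually want is the success rate over all attempts rather than the pass-to-fail ratio, the same trick should extend to this variant (an untested sketch):
filter @logStream="RunLogs"
| parse @message "[*] *" as loggingType, loggingMessage
| filter loggingType in ["pass","fail"]
| stats sum(loggingType="pass")/count(*) as success_rate by bin(12w)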