Google Stackdriver dashboard legend (GCP)

First, I am using Google Stackdriver (Google Cloud Platform).
I have a chart that shows the number of log entries for a specific rule, using "Logs-based metrics".
The chart contains 2 labels: "Value" and "Name".
"Value" is the number of log entries.
"Name" is "information about the log and the names of resources".
My problem is that I do not understand why the chart is printing 2 different lines (2 colors/rows).
Why are there 2 differently colored lines/rows if it is the same rule? I see a different date value, but I do not understand it.
When will the chart generate a new series (color/row)?
Follow the example below:
Edit:
Another Example

I found the answer for this case.
Each line corresponds to a version of the app.
In this case, the chart is about request information for a single GAE module.
I saw that the value "gae_app(2017...)" is the version that was deployed.
So, now I get it.
Thank you all for the attention.

Related

AWS cloudwatch dynamic labels not working properly

I wrote a query to show some metrics in a graph in AWS CloudWatch. This query groups by 2 different dimensions, and the default label is hard to understand:
I was trying to use dynamic queries to make the label more expressive:
[action: ${PROP('Dim.Action')}, exception: ${PROP('Dim.exception')}]
But the values of the dimensions never get printed (the names of the dimensions are correct):
I tried with other properties, such as namespace or metric name, but none of them get printed.
Any idea what might be preventing the dynamic queries from working correctly?
According to this answer on AWS re:Post, this type of dynamic labelling is only supported for single metrics, not for queries, so this will just not work at the moment.
https://repost.aws/questions/QUOqinLRJFQIC_sI-NclckOw/aws-cloud-watch-graphed-metrics-are-dynamic-labels-with-dimensions-broken
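For reference, the difference shows up in the dashboard widget source: a dynamic label attached directly to a single metric entry resolves, while the same label on a metric-math/query result stays literal. A minimal widget sketch of the supported single-metric case (the namespace, metric, and dimension names here are made up for illustration):

```
{
  "metrics": [
    ["MyApp", "RequestCount", "Action", "Login",
     { "label": "action: ${PROP('Dim.Action')}" }]
  ]
}
```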

Kibana: can I store "Time" as a variable and run a consecutive search?

I want to automate a few searches in one; here are the steps:
Search in Kibana for this ID: "b2c729b5-6440-4829-8562-abd81991e2a0", which will return a bunch of logs. Of these logs I need to take the first and the last timestamp:
I would now like to store these two values, FROM: September 3rd 2019, 21:28:22.155 and TO: September 3rd 2019, 21:28:23.524, in 2 variables.
Run a second search in Kibana for the word "fail" between these two time variables.
How can I automate the whole process without the need of copy/paste and running a second query?
EDIT:
SHORT STORY LONG: I work at a company that produces software for autonomous vehicles.
SCENARIO: A booking is rejected and we need to understand why.
WHERE THE PROBLEM IS: I need to monitor just a few seconds of logs on 3 different machines. Each log is completely separate; there is no relation between the logs, so I cannot write a single query in Discover. I need to run 3 separate queries.
EXAMPLE:
A booking was rejected, so I open Chrome and I search on "elk-prod.myhost.com" for the BookingID: "b2c729b5-6440-4829-8562-abd81991e2a0", and I have a dozen logs returned within a range of 2 seconds (FROM: September 3rd 2019, 21:28:22.155, TO: September 3rd 2019, 21:28:23.524).
Now I need to know what was happening on the car, so I open a new Chrome tab and I search on "elk-prod.myhost.com" for the CarID: "Tesla-45-OU" on the time range FROM: September 3rd 2019, 21:28:22.155, TO: September 3rd 2019, 21:28:23.524.
Now I need to know why the server which calculates the matching rejected the booking, so I open a new Chrome tab and I search for the word CalculationMatrix, again on the time range FROM: September 3rd 2019, 21:28:22.155, TO: September 3rd 2019, 21:28:23.524.
CONCLUSION: I want to stop opening Chrome tabs by hand and automate the whole thing. I have no idea around what time the booking was made, so I first need to search for the BookingID "b2c729b5-6440-4829-8562-abd81991e2a0", then store the timestamps of the first and last log and run a second and third query based on those timestamps.
There is no relation between the 3 logs I search, so there is no way to filter from Discover; I need to automate 3 different queries.
Here is how I would do it. First of all, from what I understand, you have three different indexes:
one for "bookings"
one for "cars"
one for "matchings"
First, in Discover, I would create three Saved Searches, one per index pattern. Then in Visualize, I would create a Vertical bar chart on the bookings saved search (Bucket X-Axis by date_histogram on the timestamp field, leave the rest as is). You'll get a nice histogram of all your booking events bucketed by time.
Finally, I would create a dashboard and add the vertical bar chart + those three saved searches inside it.
When done, the way I would search according to the process you've described above is as follows:
Search for the booking ID b2c729b5-6440-4829-8562-abd81991e2a0 in the top filter bar. In the bar chart histogram (bookings), you will see all documents related to the selected booking. On that chart, you can select the exact period from when the very first booking document happened to the very last. This will adapt the main time picker at the top, and the start/end time will be "remembered" by Kibana.
Remove the booking ID from the top filter (since we now know the time range and Kibana stores it). Search for Tesla-45-OU in the top filter bar. The bar histogram + the booking saved search + the matchings saved search will be empty, but you'll have data inside the second list, the one for cars. Find whatever you need to find in there and go to the next step.
Remove the car ID from the top filter and search for ComputationMatrix. Now the third saved search is going to show you whatever documents you need to see within that time range.
I'm lacking realistic data to try this out, but I definitely think this is possible as I've laid out above, probably with some adaptations.
Kibana does work like this (any order is ok):
Select time filter: https://www.elastic.co/guide/en/kibana/current/set-time-filter.html
Add additional criteria for the search, for example: field s is b2c729b5-6440-4829-8562-abd81991e2a0.
Add additional criteria for the search, for example: field x is Fail.
Additionally, you can view surrounding documents: https://www.elastic.co/guide/en/kibana/current/document-context.html#document-context
This is how Kibana works.
You can prepare some filters beforehand, save them, and then use them if you want to somehow automate the discovery process.
You can do that in Discover tab in Kibana using New/Save/Open options.
Edit:
I do not think you can achieve what you need in Kibana. As I mentioned earlier, one option is to change the data that is coming into Elasticsearch so you can search for it via Discover in Kibana. Another option could be building, for example, a Java application that uses Elasticsearch - then you can write an algorithm that returns the data that you want. But I think it's a big overhead, and I recommend checking the data first.
Edit: To clarify - you can create an external Java application, let's say a SpringBoot application, that uses Elasticsearch - all the data that you need is inside it.
But with this option you will not use Kibana at all.
You can export the results to CSV or whatever you want in the code.
A SpringBoot application can ask Elasticsearch for whatever it needs; then it would be easy to store these time variables inside the Java code.
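For what it's worth, the same workflow can also be sketched with the official Python client instead of a full SpringBoot application. The index names ("bookings", "cars", "matchings"), the "@timestamp"/"BookingID"/"CarID"/"message" field names, and the helper names below are assumptions for illustration, not details from the question:

```python
def time_bounds(hits):
    """Return the first and last "@timestamp" among a list of search hits."""
    stamps = sorted(hit["_source"]["@timestamp"] for hit in hits)
    return stamps[0], stamps[-1]

def range_query(field, value, start, end):
    """Build a match query on field=value, restricted to [start, end]."""
    return {
        "bool": {
            "must": [{"match": {field: value}}],
            "filter": [{"range": {"@timestamp": {"gte": start, "lte": end}}}],
        }
    }

def automate(es, booking_id):
    """Run the three searches in sequence against an Elasticsearch client."""
    # 1) Find all logs for the booking and remember the time window.
    booking = es.search(index="bookings",
                        query={"match": {"BookingID": booking_id}})
    start, end = time_bounds(booking["hits"]["hits"])
    # 2) + 3) Reuse the stored time window for the other two searches.
    car = es.search(index="cars",
                    query=range_query("CarID", "Tesla-45-OU", start, end))
    matching = es.search(index="matchings",
                         query=range_query("message", "CalculationMatrix",
                                           start, end))
    return car, matching
```

With the elasticsearch package installed, calling `automate(Elasticsearch("http://elk-prod.myhost.com:9200"), "b2c729b5-6440-4829-8562-abd81991e2a0")` would replace the three Chrome tabs. The `query=` keyword matches the 8.x client; older clients take a `body=` dict instead.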
EDIT: After OP edited question to change it dramatically:
@FrancescoMantovani Well, the edited version is very different from what you first posted here: How to automate the whole process without need of copy/paste and running a second query? and search for the word fail in a single shot. In the accepted answer you are still using three filters, one at a time, so it is not one search but three.
What's more, if you used one index and sent data from multiple hosts via Filebeat, you wouldn't even have to create this dashboard to do that. Then you could select the exact period from when the very first document happened to the very last for a filter, and then remove it and add another filter that you need - it's as simple as that. Before, you were writing about one query,
How to automate the whole process without need of copy/paste and
running a second query?
not three. And you don't need to open a new tab in Chrome each time you want to change a filter; just organize the data, for example by using Filebeat as mentioned before.
There is no relation between the 3 logs
From what you wrote, the relation exists, and it is time.
If the data is in, for example, three different indices (because the documents don't have much similar data), you can do it like this:
You can change them easily in Discover; see:
You can go to Discover, select index 1, search, and select the time range that you need; when you change the index, the time range is still the one you selected, and you only need to change the filter - you will get what you need.

Add click event to elements of a chart in superset

First, I followed the official site and used Docker and Ubuntu 18 to install Superset; I do not know whether it is the newest version of Superset.
Then I uploaded a CSV as a table called "sales"; the CSV is:
shop,sales
ibm,200
microsoft,100
sony,50
Column "shop" is string type, "sales" is bigint type.
Then I added a bar chart and a big number chart.
The metric of the big number chart is sum(sales).
I added these two charts to a dashboard; it looks like:
What I need is: when I click the bar "ibm", the big number chart (sum) shows 200; when I click the bar "microsoft", it shows 100.
I found a web page that says to call addFilter() on the slice, but I cannot find where the slice is and do not know how to call addFilter.
Actually, I have many charts on a dashboard, and it would be best if, when I click one element on any chart, the other charts changed correspondingly.

Google Stackdriver Log Based Metrics: how to extract values using a regular expression from the log line

I have log lines of the following form in my Google Cloud Console:
Updated blacklist info about 123 minions. max_blacklist_per_minion=20, median_blacklist_per_minion=8, blacklist_free_minions=31
And I'm trying to set up some log-based metrics to get a longer-term overview of the values (i.e. how are they changing? Is it lower or higher than yesterday? etc.).
However, I didn't find any examples for this scenario in the documentation, and what I could think of doesn't seem to work. Specifically, I'm trying to understand what I need to select in "Field name" to have access to the log line (so that I can write a regular expression against it).
I tried textPayload, but that seems to be empty for this log entry. Looking at the actual log entry, there should also be a protoPayload.line[0], but that doesn't seem to work either.
In the "Metric Editor" built into the logs viewer UI you can use "protoPayload.line.logMessage" as the field name. For some reason the UI doesn't want to suggest 'line' (this seems like a bug; the filter box shows the same behavior).
The log-based metric won't distinguish based on the index of the app log line, so something like 'line[0]' won't work. For a distribution, all values are extracted. A count metric would count the log entry (i.e. 1, regardless of the number of 'line' matches).
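If it helps, the extraction pattern for a distribution metric (one capture group per metric) can be sanity-checked locally before pasting it into the metric editor; this sketch just runs Python's re module against the sample line from the question:

```python
import re

# Sample app log line from the question:
line = ("Updated blacklist info about 123 minions. "
        "max_blacklist_per_minion=20, median_blacklist_per_minion=8, "
        "blacklist_free_minions=31")

# One pattern per metric, each with a single capture group, which is what
# the distribution-metric field extractor expects.
patterns = {
    "max_blacklist_per_minion": r"max_blacklist_per_minion=(\d+)",
    "median_blacklist_per_minion": r"median_blacklist_per_minion=(\d+)",
    "blacklist_free_minions": r"blacklist_free_minions=(\d+)",
}

values = {name: int(re.search(pat, line).group(1))
          for name, pat in patterns.items()}
print(values)
# {'max_blacklist_per_minion': 20, 'median_blacklist_per_minion': 8,
#  'blacklist_free_minions': 31}
```

Stackdriver's extractors use RE2 syntax, so plain character classes like `\d+` as above carry over unchanged.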

How do I save the web service response to the same excel sheet I extracted the data from?

For example:
Given the sample HP Flights SampleAppData.xls and using CreateFlightOrder, we can link the data to the test functions and get an OrderNumber and Price response from the web service. And in the SampleAppData.xls Input tab, we can see that there is an empty OrderNumber column.
So here is my question: is there any way I can take the OrderNumber response and fill the empty column in SampleAppData.xls?
My point in doing this is that, let's say, I have many test cases to run which will take days, and I need the result of today's test for the next day's test.
Although I know that the responses are saved in the results, it defeats the point of automation if I am required to check the response for each and every test case.
Yes, of course you can. There are a number of ways to do this. The simplest is as follows:
' DataTable.Value("columnName","sheetName") = "Value"
DataTable.Value("Result","Action1") = "Pass"
Once you have recorded the results in the datasheet, you can export them using
DataTable.ExportSheet("C:\SavePath\Results.xls")
You can write back the response programmatically if you have already imported the sheet manually.
You can use the GetDataSource class of the UFT API. It works like this: let's say you imported an Excel file FlightSampleData.xls and named it FlightSampleData; to access its input sheet, you would do:
GetDataSource("FlightSampleData!input").Set(ROW, ColumnName, yourValue);
GetDataSource("FlightSampleData!input").Get(ROW, ColumnName);
For exporting, you can use the ExportToExcelFile method of the GetDataSource class after your test run. Please let me know if you have any further questions about this.