Google Data Studio Table Heatmap for Negative values - google-cloud-platform

I'm looking to have a heatmap in Google Data Studio work with both negative and positive values. Can there be two different color schemes for negative and positive values?
Currently, my column looks like this:
Style settings:
I want to highlight the negative values too (for example, with a red color scheme).

It can be achieved using Conditional Formatting; for example (where Score represents the respective field):
1) Red (Negative Values)
Colour Type: Single Colour
Format Rules: Score Less Than 0
Colour and Style: Score field with a Red Background Colour
2) Green (Positive Values)
Colour Type: Single Colour
Format Rules: Score Greater Than or Equal to 0
Colour and Style: Score field with a Green Background Colour
Google Data Studio Report and a GIF to elaborate:

Not as a colour scale, no. Nimantha's solution will work if you only need binary colours for positive and negative, but if you want a positive gradient and negative gradient, you'd need to enter the min and max values for each column as a conditional format.
Data Studio doesn't currently have a way to sub in min and max variables for a range, so that means manual entry for every column (and even then, it won't have full functionality when date ranges are adjusted). Bit of a pain that they haven't implemented something so straightforward.

Related

How to color multi-row card by 'legend'?

I want to color the padding (the vertical bar on the left) based on a column variable. Is this possible?
For example, for 'High' I want to color it red, 'Low' in green, etc.
This will ensure that the coloring is consistent across all graphics whenever I refer to the different levels of this column (urgency).
It seems there's no option to set the color of the vertical bar based on the value, but you can change the background color of the card based on the value of the card or a custom metric.
Here Patric shows how to use a field value to set the format color programmatically: https://www.youtube.com/watch?v=FgnPIaxpdJ0

Power BI: How to change the background color for specific rows in a matrix?

I want to change the background color for specific rows in a matrix based on the name of the row.
Here is my matrix
What I did so far was to create a conditional column X in the data table that says, for example, when asset_name is A82 give me 1, and in all other cases give me 0. Then for each field in Values, I created a conditional rule based on that X column:
when column X is 1, use a blue color; when it is 0, white. Basically, I apply a conditional background color to the columns. However, I want to be able to conditionally color the rows. There is no option to choose a background color for the fields in Rows, so I'm only able to custom-color the column part of the matrix.
Is there any workaround for this?
Could you use something like this? You can conditionally format a row based on the value of a measure: https://www.cloudfronts.com/conditional-formatting-by-row-in-a-matrix/

Matrix conditional formatting: why is a column all red when it contains only 0s? Power BI

I set up a color scale from no color at the lowest value to red at the highest value.
But if I filter the data and the "Losses" column only has $0 values, then for some reason the whole column becomes red.
Is there any way to make it "No color" when all the values are 0s?
In order to fix the problem I also had to indicate a max value, so I used $1,000,000 as the maximum, which fits my dataset.

How to adjust brightness and contrast using min and max values using OpenCV

In ImageJ you can adjust the brightness and contrast of an image using minimum and maximum values. You can also use the setMinAndMax() macro function. The dialog looks like this:
It maps each pixel to fit between the minimum and maximum values.
I'm trying to do the same thing in OpenCV (C++). I can change the contrast and brightness using the alpha and beta parameters of Mat::convertTo(), but I don't know how to do it with minimum and maximum values.
In my case I'm using a 12-bit image so the pixel values range from 0 to 4095. I'm not sure if that matters.
You do it like this.
First, find the current maximum and minimum.
Let's say the darkest and brightest are 80 and 220 respectively. Now you need to stretch this range 80..220 onto the full range 0..4095.
So you subtract 80 from every pixel in your image to shift down to zero at the left end of the histogram, so your range is now 0..140.
Now you need to multiply every pixel by 4095/140 to stretch the right end out to 4095.
Effectively, the formula you need is this:
newvalue = int((current value - darkest)*4095/(brightest-darkest))
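For example, with darkest = 80 and brightest = 220, a pixel with value 150 maps to int((150 - 80) * 4095 / 140) = 2047, i.e. roughly the middle of the 12-bit range, as you would expect.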
You do it in three steps:
get the current min/max using minMaxLoc
adjust the 'min' by adding an offset of -min to the image, shifting the darkest value down to zero (no special function, just do 'image = image + offset' in C++ or Python)
adjust the 'max' by scaling the image (no special function, 'image = image * scale', with scale = 4095 / (max - min))
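A minimal OpenCV C++ sketch of those three steps (assuming the 12-bit data sits in a 16-bit, single-channel Mat; the file names are just placeholders):

#include <opencv2/opencv.hpp>

int main() {
    // Assumed: a 12-bit image stored in a 16-bit container (CV_16U), single channel.
    cv::Mat img = cv::imread("input.tif", cv::IMREAD_UNCHANGED); // placeholder path
    if (img.empty()) return 1;

    // Step 1: find the current darkest and brightest pixel values.
    double darkest, brightest;
    cv::minMaxLoc(img, &darkest, &brightest);

    // Steps 2 and 3: shift and scale so darkest -> 0 and brightest -> 4095.
    // convertTo computes dst = src * alpha + beta, which with these coefficients
    // is the same as (src - darkest) * 4095 / (brightest - darkest).
    double alpha = (brightest > darkest) ? 4095.0 / (brightest - darkest) : 1.0;
    double beta = -darkest * alpha;

    cv::Mat stretched;
    img.convertTo(stretched, CV_16U, alpha, beta);
    cv::imwrite("stretched.tif", stretched);
    return 0;
}

convertTo applies the same linear mapping as the formula above and saturates anything that falls outside the target type's range, so there is no need to shift and scale in two separate passes.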

How to create data from an image like the "Letter Image Recognition Dataset" from UCI

I am using the letter_recog example from OpenCV; it uses a dataset from UCI which has a structure like this:
Attribute Information:
1. lettr capital letter (26 values from A to Z)
2. x-box horizontal position of box (integer)
3. y-box vertical position of box (integer)
4. width width of box (integer)
5. high height of box (integer)
6. onpix total # on pixels (integer)
7. x-bar mean x of on pixels in box (integer)
8. y-bar mean y of on pixels in box (integer)
9. x2bar mean x variance (integer)
10. y2bar mean y variance (integer)
11. xybar mean x y correlation (integer)
12. x2ybr mean of x * x * y (integer)
13. xy2br mean of x * y * y (integer)
14. x-ege mean edge count left to right (integer)
15. xegvy correlation of x-ege with y (integer)
16. y-ege mean edge count bottom to top (integer)
17. yegvx correlation of y-ege with x (integer)
example:
T,2,8,3,5,1,8,13,0,6,6,10,8,0,8,0,8
I,5,12,3,7,2,10,5,5,4,13,3,9,2,8,4,10
Now I have a segmented image of a letter and want to transform it into data like this so I can recognize it, but I don't understand the meaning of all the values, such as "6. onpix total # on pixels". What does it mean? Can you please explain the meaning of these values? Thanks.
I am not familiar with OpenCV's letter_recog example, but this appears to be a feature vector, or set of statistics about the image of a letter that is used to classify the future occurrences of the letter. The results of your segmentation should leave you with a binary mask with 1's on the letter and 0's everywhere else. onpix is simply the total count of pixels that fall on the letter, or in other words, the sum of your binary mask.
Most of the remaining values in the list need to be calculated based on the set of pixels with a value of 1 in your binary mask. x and y are just the position of the pixel. For instance, x-bar is just the sample mean of all of the x positions of all pixels that have a 1 in the mask. You should be able to easily find references on the web for mathematical definitions of mean, variance, covariance and correlation.
14-17 are a little different since they are based on edge pixels, but the calculations should be similar, just over a different set of pixels.
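To make that concrete, here is a rough OpenCV C++ sketch that computes a few of those features (assuming the segmentation result is an 8-bit binary mask with non-zero pixels on the letter; the file name is a placeholder, and the final scaling of each feature to the 0-15 range used by the UCI dataset is not reproduced here):

#include <cstdio>
#include <vector>
#include <opencv2/opencv.hpp>

int main() {
    // Assumed: 8-bit binary mask, non-zero on the letter, zero everywhere else.
    cv::Mat mask = cv::imread("letter_mask.png", cv::IMREAD_GRAYSCALE); // placeholder path
    if (mask.empty()) return 1;

    // Coordinates of every pixel belonging to the letter.
    std::vector<cv::Point> onPixels;
    cv::findNonZero(mask, onPixels);
    if (onPixels.empty()) return 1;

    // onpix: total count of pixels on the letter (the sum of the binary mask).
    int onpix = static_cast<int>(onPixels.size());

    // x-box, y-box, width, high: bounding box of the letter.
    cv::Rect box = cv::boundingRect(onPixels);

    // x-bar, y-bar: sample mean of the x and y positions of the on pixels.
    double xbar = 0.0, ybar = 0.0;
    for (const cv::Point& p : onPixels) {
        xbar += p.x;
        ybar += p.y;
    }
    xbar /= onpix;
    ybar /= onpix;

    std::printf("x-box=%d y-box=%d width=%d high=%d onpix=%d x-bar=%.1f y-bar=%.1f\n",
                box.x, box.y, box.width, box.height, onpix, xbar, ybar);
    return 0;
}

The variance, covariance and correlation features (x2bar, y2bar, xybar and so on) follow the same pattern, accumulating second-moment statistics over onPixels, and the edge-based features 14-17 are computed the same way but over the edge pixels instead.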
My name is Antonio Bernal.
On page 3 of this article you will find a good description of each value:
Letter Recognition Using Holland-Style Adaptive Classifiers.
If you have any doubts, let me know.
I am trying to make this algorithm work, but my problem is that I do not know how to scale the values to fit them to the range 0-15.
Do you have any idea how to do this?
Another link from Google Scholar: Letter Recognition Using Holland-Style Adaptive Classifiers