Cell Element Background Colour - Conditional Formatting - powerbi

I'm trying to set a cell element to show as red if the percentage is below 95%, which should be simple enough.
But when I set the rule >= 0% and < 95% then Red, as shown in the screenshot, the cells show red for everything except 100%, even though the rules appear to be set correctly.
What am I doing wrong?

Change the format from Percent to Number and enter 0 and 0.95 instead; the rule compares against the underlying decimal value, and 95% is stored as 0.95.

How to execute a function x% of the times it is to be executed?

Use case: Live images are provided in real time and are to be viewed on the screen. To lower the CPU load, the user should be able to discard images and only show, say 10%, or 50%, of the images.
If the user chooses 50%, then every other image should be shown (not 50 images in a row and then 50 discarded, even though that would also average 50%).
The current code:
void paintImage(int everyWhatImage)
{
    showImage();
}
shows the image 100% of the times.
If the user supplies an integer like 1, 2, 3, ..., meaning every image, every second image, every third image and so on, something familiar like this could be used:
void paintImage(int everyWhatImage)
{
    if (counter % everyWhatImage == 0)
    {
        showImage();
    }
    counter++;
}
However, the above algorithm only supports fractions of the form 1/n (every image, every second, every third, ...), so an arbitrary percentage such as 70% cannot be expressed. The question is: how to do that?
As this is a live streaming application, it needs to be fast.
In addition, the above code is executed in a callback function, so there is no knowledge of when it is to be executed.
Any given picture is either shown or not shown. If it is not shown, the chance of a future picture being shown should go up; if it is shown, the chance should go down.
So, add the chance to a counter each frame. When the counter reaches 100 or more, show the picture and subtract 100:
static int counter = 0;
counter += chance;
if (counter >= 100) {
    showImage();
    counter -= 100;
}
where chance is for example 70, which would mean 'show 70% of the pictures'. Going through it:
first picture: Not shown; counter is at 70.
second picture: Shown; counter is at 40.
third picture: Shown; counter is at 10.
fourth picture: Not shown; counter is at 80.
fifth picture: Shown; counter is at 50.
sixth picture: Shown; counter is at 20.
seventh picture: Not shown; counter is at 90.
eighth picture: Shown; counter is at 60.
ninth picture: Shown; counter is at 30.
tenth picture: Shown; counter is at 0.
.. and it loops from there.
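The walkthrough above can be reproduced with a short simulation; this is a Python sketch of the same accumulator (the function and variable names are illustrative):

```python
def frames_to_show(chance, n_frames):
    """Return the 1-based indices of the frames that get shown,
    using the add-then-check accumulator with a threshold of 100."""
    counter = 0
    shown = []
    for frame in range(1, n_frames + 1):
        counter += chance
        if counter >= 100:
            shown.append(frame)
            counter -= 100
    return shown

# chance = 70 shows 7 of every 10 frames, matching the walkthrough.
print(frames_to_show(70, 10))  # [2, 3, 5, 6, 8, 9, 10]
```

Note how the shown frames are spread evenly over the run rather than bunched together, which is exactly the property the question asks for.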
Save up the given proportion in a "running balance". Every time it reaches at least 1.00, "cash in" the savings for an image.
Choosing 27% for an example ...
show_count = 0.00
ratio = 0.27
while (we have more images to show) {
    show_count += ratio
    if (show_count >= 1.00) {
        show_image()
        show_count -= 1.00
    }
}
The other answers to this question all appear to implement a counter, so I thought I would suggest a probabilistic approach. You could generate a random number between zero and one each time your function is called:
random = ((double) rand() / (RAND_MAX))
Then, if the random number is below the prescribed percentage of images to be shown, show the image. Otherwise, discard it.
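A minimal Python sketch of this probabilistic approach (the function name is illustrative):

```python
import random

def maybe_show(ratio):
    """Show the image with probability `ratio` (0.0 to 1.0)."""
    return random.random() < ratio

# Over a long run, roughly `ratio` of the frames are shown.
shown = sum(maybe_show(0.3) for _ in range(100_000))
print(shown / 100_000)  # close to 0.3
```

Unlike the counter-based answers, this only guarantees the proportion on average; over a short run the number of shown frames can deviate noticeably.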

Power BI - Show only values > 0 in chart

Is it possible to show only values > 0 in a chart? On the X-axis I have several fields with a 0, and that's why I want to hide them. Can I set a minimum value anywhere?
Try using the "Y" axis instead of the "X" axis; it has the effect you may be looking for.
If you want the axis to start above zero, enter 1 (for 100%) in the Y-axis start box and leave the end at Auto; the graph will then automatically scale from your lowest value greater than zero.

Detect change in color of pixel

I'm new to making programs and I have no idea where to really begin.
However I have this simple idea that I want to turn into reality.
I need to find a red pixel in a blue area on screen. The area is a rectangle from
(x = 86, y = 457) to (x = 770, y = 641) -- that's just an example.
Then get a list of all the pixels within that region and check if they are a certain colour, like (Red=186, Blue=10, Green=10).
Then, 0.2 seconds later, check if the pixels that were red are still red.
Then check again 3 times, every 0.2 seconds.
After that, tell the program to wait until those pixels turn blue.
When they do, open C:/Users/User1/documents/pull.mcs -- a random file.
I would like to create this, but I have no idea how to get all the pixels within a certain region (there are thousands, so doing it manually won't work), then check their colour, and finally tell the program to open another program.
The attached picture is what I am working on: the red thing moves, and I need to first find where it is and then make sure the pixel stays red. Eventually the thing will sink, and at that point I need a program to start.
Thanks for reading and please give me some suggestions.
https://i.stack.imgur.com/EvC6t.png
In case someone else is looking for a solution to a problem like this: it can be accomplished very easily with AutoHotkey.
"find a red pixel in a blue area on screen. The area is a rectangle from
(x = 86)(y = 457) to (x = 770)(y = 641)"
In this example the red area is irrelevant; we're just looking for the blue pixel:
PixelSearch, x, y, 86, 457, 770, 641, 0x0000FF, Fast RGB
if (ErrorLevel = 0){
MsgBox, Found the pixel at %x%,%y%
}
else MsgBox, We did not find the pixel.
Return
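For readers who would rather stay in Python, the same region scan can be sketched over an in-memory grid of (R, G, B) tuples; in a real program the grid would come from a screenshot library such as Pillow's ImageGrab (the names here are illustrative):

```python
def find_pixel(pixels, target, x1, y1, x2, y2):
    """Scan the rectangle (x1, y1)-(x2, y2) row by row and
    return the (x, y) of the first matching pixel, or None."""
    for y in range(y1, y2 + 1):
        for x in range(x1, x2 + 1):
            if pixels[y][x] == target:
                return (x, y)
    return None

# Tiny 4x4 "screen": all blue except one red pixel at (2, 1).
BLUE, RED = (10, 10, 255), (186, 10, 10)
screen = [[BLUE] * 4 for _ in range(4)]
screen[1][2] = RED
print(find_pixel(screen, RED, 0, 0, 3, 3))  # (2, 1)
```

The same function can then be polled on a timer (e.g. every 0.2 seconds) to check whether the pixel is still red, and a program can be launched with subprocess once it turns blue.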

Marker and figure size in matplotlib : not sure how it works

I want to make a figure in which the marker size depends on the size of the figure, so that with square markers, no matter what resolution or figure size I choose, all the markers touch each other, masking the background without overlapping. Here is where I am:
The marker size is specified in pt^2, with 1 pt = 1/72 inch, the resolution in pixels per inch, and the figure size in pixels (the main subplot also occupies a fixed proportion of the figure: 0.8). So, if my graph's limits are lim_min and lim_max, I should be able to get the corresponding marker size using:
marker_size = ((fig_size*0.8*72/Resolution)/(lim_max-lim_min))**2
because (fig_size*0.8*72/Resolution) is the size of the subplot in points, and (lim_max-lim_min) is the number of markers I want to fill a line.
And that should do the trick! ... Well, it doesn't. At all. The markers are so small they are invisible without zooming in, and I don't get why.
I understand this may not be the best way, or the way you would do it, but I see no reason why it wouldn't work, so I want to understand where I am wrong.
PS: both my main figure and my subplot are squares.
Edit:
Okay, so I found the reason for the problem, though not the solution. The problem is the confusion between ppi and dpi. Matplotlib sets the resolution in dpi, which is defined as a unit specific to scanners or printers depending on the model (?!?).
Needless to say, I am extremely confused about the actual meaning of resolution in matplotlib; it simply makes no sense to me. How do I convert this to a meaningful unit? The matplotlib website seems completely silent on the matter.
If you specify the figure size in inches, and matplotlib uses a resolution of 72 points per inch (ppi), then for a given number of markers the width of each marker should be size_in_inches * points_per_inch / number_of_markers points (assuming for now that the subplot uses the entire figure). As I see it, dpi is only used to display or save the figure at a size of size_in_inches * dpi pixels.
If I understand your goal correctly, the code below should reproduce the required behavior:
import numpy as np
import matplotlib.pyplot as pl

# Figure settings
fig_size_inch = 3
fig_ppi = 72
margin = 0.12
subplot_fraction = 1 - 2*margin

# Plot settings
lim_max = 10
lim_min = 2
n_markers = lim_max - lim_min

# Centers of each marker
xy = np.arange(lim_min + 0.5, lim_max, 1)

# Size of the marker, in points^2
marker_size = (subplot_fraction * fig_size_inch * fig_ppi / n_markers)**2

fig = pl.figure(figsize=(fig_size_inch, fig_size_inch))
fig.subplots_adjust(margin, margin, 1-margin, 1-margin, 0, 0)

# Create n_markers^2 colors
cc = pl.cm.Paired(np.linspace(0, 1, n_markers*n_markers))

# Plot each marker (I could/should have left out the loops...)
for i in range(n_markers):
    for j in range(n_markers):
        ij = i + j*n_markers
        pl.scatter(xy[i], xy[j], s=marker_size, marker='s', color=cc[ij])

pl.xlim(lim_min, lim_max)
pl.ylim(lim_min, lim_max)
This is more or less the same as you wrote (in the calculation of marker_size), except the division by Resolution has been left out.
Result:
Or when setting fig_ppi incorrectly to 60:

Pyplot rotated labels offset by one

Just getting into matplotlib and running into an odd problem: I'm trying to plot 10 items and use their names on the x-axis. I followed this suggestion and it worked great, except that my label names are long and they were all scrunched up. So I found that you can rotate labels, and got the following:
plt.plot([x for x in range(len(df.columns))], df[df.columns[0]], 'ro',)
plt.xticks(range(10), df.columns, rotation=45)
The labels all seem to be off by a tick ("Arthrobacter" should be aligned with 0). So I thought my indexing was wrong and tried a bunch of other things to fix it, but it turns out it's just odd (at least to me) behavior of the rotation. If I do rotation='vertical', I get what I want:
I see now that the centers of the labels are aligned with the ticks, but I expected them to terminate on the ticks, like this (done in Photoshop):
Is there a way to get this done automatically?
The labels are not "off"; they are placed via their center. In your second image, the corresponding tick is above the center of the label, not above its endpoint. You can change that by adding ha='right', which modifies the horizontal alignment of the label.
plt.plot([x for x in range(len(df.columns))], df[df.columns[0]], 'ro',)
plt.xticks(range(10), df.columns, rotation=45, ha='right')
See the comparison below:
1)
plt.plot(np.arange(4), np.arange(4))
plt.xticks(np.arange(4), ['veryverylongname']*4, rotation=45)
plt.tight_layout()
2)
plt.plot(np.arange(4), np.arange(4))
plt.xticks(np.arange(4), ['veryverylongname']*4, rotation=45, ha='right')
plt.tight_layout()