Google line graph, define shaded area - google-visualization

I have a normal line graph with 2 lines. In addition to the lines, I want a full-height shaded area that takes up around 25% of the chart along the x-axis.
Is this possible?

Related

How to plot 'outside' of a matplotlib plot?

I have values 1 through 5000 along the x-axis and percentage along the y-axis; however, I only want the values from 1-150 to be visible along the x-axis (in order to make the scale more usable), but I'm having trouble figuring out how to do it.
Originally, I was just excluding the data with values over 150, but that obviously doesn't work with percentages.
You can achieve this by limiting the range of the x-axis (or y-axis) of the plot: plt.xlim(1, 150) or ax.set_xlim(1, 150).
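A minimal sketch of that idea, using made-up data (x from 1 to 5000, y a percentage):

import matplotlib.pyplot as plt

# Hypothetical data: x runs 1..5000, y is a percentage of the maximum.
x = list(range(1, 5001))
y = [100 * i / 5000 for i in x]

fig, ax = plt.subplots()
ax.plot(x, y)

# Keep all of the data (so the percentages stay correct),
# but only show x values from 1 to 150.
ax.set_xlim(1, 150)
plt.show()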

How to find the diameter of the circle

I have an image of a circle, but my circle is not perfect.
First, I found the transition (edge) coordinates:
Detecting Circles without using Hough Circles
and then I used this formula https://math.stackexchange.com/questions/675203/calculating-centre-of-rotation-given-point-coordinates-at-different-positions/1414344#1414344
Finally, I found the longest and shortest radii.
Now I have this image:
But those are radii; I need to find the diameter. How can I find the diameter from the image?
Or how can I find the diametrically opposite (symmetric) point on a circle?
For this image, the approaches mentioned are overkill. Just find the bounding box of the non-black pixels. Because of sampling artifacts, the horizontal and vertical side lengths may differ by one or two pixels.
If I am right, the outer circle is 277 x 273 pixels. If you consider the difference to be significant, then this is an ellipse, not a circle.
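A rough sketch of that bounding-box idea in Python/OpenCV (the filename and the > 0 threshold are assumptions for a circle drawn on a black background):

import cv2
import numpy as np

img = cv2.imread("circle.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename

# Coordinates of every non-black pixel.
ys, xs = np.nonzero(img > 0)

# Bounding-box side lengths; for a clean circle both are roughly the diameter.
width = xs.max() - xs.min() + 1
height = ys.max() - ys.min() + 1
print("diameter estimate:", width, height)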
Three ways to do this:
I think you need to measure it from the image:
Use edge detection (the blue line width from left to right = the width of the bounding box of the blue pixels) and then count pixels.
If you need to, convert to any unit you want, such as inches (using pixels per inch).
If your circle is not a perfect circle (it is stretched), measure in many directions so you can find its deviation too.
There is another way, the Monte Carlo method (a sketch follows after this answer):
First generate random x and y (inside the bounding square), then evaluate whether the point (x, y) is inside the circle or not and count the number of inside hits; from the ratio inside count / total you can calculate the area of the circle (and from the area, the diameter).
Without using random numbers:
Fill (color) the inside of the circle, then simply count the remaining black pixels; that count equals the area outside the circle, so total (square) area - black pixel area = circle area, from which you can calculate the diameter.
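A small sketch of the Monte Carlo estimate mentioned above; the inside test here is purely geometric and stands in for checking the pixel colour in a real image, and all numbers are illustrative:

import math
import random

def inside_circle(x, y, cx, cy, r):
    # Placeholder test; on an image you would check the pixel at (x, y) instead.
    return (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2

def estimate_diameter(square_size, cx, cy, r, samples=100_000):
    hits = 0
    for _ in range(samples):
        x = random.uniform(0, square_size)
        y = random.uniform(0, square_size)
        if inside_circle(x, y, cx, cy, r):
            hits += 1
    area = (hits / samples) * square_size ** 2  # circle area ~ hit ratio * square area
    return 2.0 * math.sqrt(area / math.pi)      # diameter recovered from the area

print(estimate_diameter(300, 150, 150, 138))    # roughly 276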

How to interpolate a column of n pixels to m pixels in OpenCV C++?

I am trying to extract a region from an image by drawing a circle and placing the complete circular region linearly into another image.
First I draw a circle with a known center and the required radius of 60. With the help of LineIterator I extracted the points along the line from the center to one endpoint on the circle and placed all these points linearly into another image; I did this for 360 lines, one per degree of the circle.
There is no problem at 0, 90, 180 and 270 degrees, but at angles in between the number of pixels is reduced and the blue line in my second image looks like mountains.
I want to interpolate each column of my second image so that the blue curve looks like a straight line.
I want to know how to interpolate a column of 44 pixels to 60 pixels. I hope that by doing this I will get the required image.
Any other suggestions are also welcome.
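A sketch of the column interpolation in Python (the question is about C++, but cv::resize behaves the same way); the 44 and 60 values come from the question, and the column data is made up:

import cv2
import numpy as np

# A single column of 44 pixel values (made-up data).
col = np.arange(44, dtype=np.uint8).reshape(44, 1)

# Stretch the 44x1 column to 60x1 with linear interpolation.
# Note that cv2.resize takes the target size as (width, height).
stretched = cv2.resize(col, (1, 60), interpolation=cv2.INTER_LINEAR)
print(stretched.shape)  # (60, 1)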

Google Charts - How to stretch x axis over chart's full width

I can't figure out why Google Charts draws this simple chart aligned to the center instead of filling the entire white area.
Note: X axis is discrete because it represents weeks.
Do you have any idea what I can do about it?
That is how the charts display when you use a discrete (string) axis. If you want edge-to-edge lines, you need to use a continuous (number, date, datetime, timeofday) axis. See an example of the differences here: http://jsfiddle.net/asgallant/Xfx3h/.

Grouping different scale bounding boxes

I've created an OpenCV application for human detection in images.
I run my algorithm on the same image over different scales, and when detections are made, at the end I have the bounding box position and the scale at which it was detected. Then I want to transform that rectangle back to the original scale, given that its position and size will vary.
I've tried to wrap my head around this and gotten nowhere. This should be rather simple, but at the moment I am clueless.
Help anyone?
OK, I got the answer elsewhere:
"What you should do is store the scale where you are at for each detection. Then transforming should be rather easy right. Imagine you have the following.
X and Y coordinates (center of bounding box) at scale 1/2 of the original. This means that you should multiply with the inverse of the scale to get the location in the original, which would be 2X, 2Y (again for the bounxing box center).
So first transform the center of the bounding box, than calculate the width and height of your bounding box in the original, again by multiplying with the inverse. Then from the center, your box will be +-width_double/2 and +-height_double/2."
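A small Python sketch of that math (the detection values in the last line are just illustrative):

def to_original_scale(cx, cy, w, h, scale):
    # scale is the factor the image was resized by for this detection,
    # e.g. 0.5 when the detection was made at half the original size.
    inv = 1.0 / scale
    cx0, cy0 = cx * inv, cy * inv   # bounding-box center in the original image
    w0, h0 = w * inv, h * inv       # width and height in the original image
    # Top-left corner: center minus half the size.
    x0, y0 = cx0 - w0 / 2, cy0 - h0 / 2
    return x0, y0, w0, h0

# Detection at half scale: center (100, 80), box 40 x 90.
print(to_original_scale(100, 80, 40, 90, 0.5))  # (160.0, 70.0, 80.0, 180.0)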