I have a question about the following demo - http://raphaeljs.com/hand.html.
Here is code from the sample...
var r = Raphael("holder", 640, 480), angle = 0;
while (angle < 360) {
var color = Raphael.getColor();
(function(t, c) {
r.circle(320, 450, 20).attr({
stroke : c,
fill : c,
transform : t,
"fill-opacity" : .4
}).click(function() {
s.animate({
transform : t,
stroke : c
}, 2000, "bounce");
}).mouseover(function() {
this.animate({
"fill-opacity" : .75
}, 500);
}).mouseout(function() {
this.animate({
"fill-opacity" : .4
}, 500);
});
})("r" + angle + " 320 240", color);
angle += 30;
}
Raphael.getColor.reset();
var s = r.set();
s.push(r.path("M320,240c-50,100,50,110,0,190").attr({
fill : "none",
"stroke-width" : 2
}));
s.push(r.circle(320, 450, 20).attr({
fill : "none",
"stroke-width" : 2
}));
s.push(r.circle(320, 240, 5).attr({
fill : "none",
"stroke-width" : 10
}));
s.attr({
stroke : Raphael.getColor()
});
The question I have is about the following line of code...
("r" + angle + " 320 240", color);
In the anonymous function the circle is initially drawn at (320, 450) with a radius of 20. Then a transform is applied, for example "r30 320 240" when the angle is 30.
How does this transform work? The way I read it is: rotate the circle 30 degrees around (320, 450), then move 320 horizontally (to the right) and 240 vertically (down).
But I'm obviously reading this transform wrong, because that is not what happens.
What am I missing?
Thanks
The transform "r30 320 240" sets the rotation of the object about the point (320,240) by 30 degrees. It does not add to the rotation. It overrides any previous transformations.
If you look at this example:
http://jsfiddle.net/jZyyy/1/
You can see that I am setting the rotation of the circle about the point (0,0). If you consider the point (0,0) to be the centre of a clock, then the circle begins at 3 o'clock. If I use the transform "r90 0 0" the circle will be rotated from 3 o'clock to 6 o'clock. If I then later set the transform to be "r30 0 0" the circle will be at 4 o'clock, rotated 30 degrees from the original 3 o'clock position about the point (0,0).
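For example, here is a minimal sketch (the paper and variable names are mine) of the difference between setting and appending a transform in Raphael 2.x:
var paper = Raphael("holder", 300, 300);
var dot = paper.circle(150, 150, 10).attr({ fill: "#f00" });

// An absolute transform: the dot is rotated 90 degrees about (0, 0),
// regardless of any rotation it had before.
dot.transform("r90 0 0");

// Setting another absolute transform does NOT add 30 degrees on top of the 90;
// it replaces the whole transform, so the dot ends up only 30 degrees from its
// original, untransformed position.
dot.transform("r30 0 0");

// To add to the existing transform instead, prepend "..." :
// dot.transform("...r30 0 0"); // 90 + 30 = 120 degrees in total
The click handlers in the hand demo behave the same way: each click animates to an absolute transform, not to the current rotation plus something.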
First of all, I realize there are existing questions about converting an RGB image to an HSV image out there; I used one of those questions to help me write my code. However, I am getting values for HSV that don't make sense to me.
Everything I know about HSV comes from this website. From this color picker, I have inferred that H is a number ranging from 0-360 degrees, S is a number ranging from 0-100%, and V is a number ranging from 0-100%. Therefore, I had assumed that my code (as follows) would return an H value between 0 and 360, and S/V values between 0 and 100. However, this is not the case.
I plugged my program's output into the above color picker, rounding all S/V values down to 100 when they exceeded 100. As you can see, the output is close to what it should be, but it is not accurate. I feel like this is because I am interpreting the HSV values incorrectly.
For context, I am going to establish a range for each color on the cube and, from there, look at the other faces and fill out the current setup of the cube in another program I have.
My code:
void get_color(Mat img, int x_offset, int y_offset)
{
    // Draw a 5x5 blue outline around the pixel being sampled
    Rect outline(x_offset - 2, y_offset - 2, 5, 5);
    rectangle(img, outline, Scalar(255, 0, 0), 2);

    // Extract a single pixel and convert it from BGR to HSV
    Rect sample(x_offset, y_offset, 1, 1);
    Mat rgb_image = img(sample);
    Mat hsv_image;
    cvtColor(rgb_image, hsv_image, CV_BGR2HSV);

    // Read out the three channels of that pixel
    Vec3b hsv = hsv_image.at<Vec3b>(0, 0);
    int hue = hsv.val[0];
    int saturation = hsv.val[1];
    int value = hsv.val[2];
    printf("H: %d, S: %d, V: %d \n", hue, saturation, value);
}
Output of the program:
H: 21, S: 120, V: 191 // top left cubie
H: 1, S: 180, V: 159 // top center cubie
H: 150, S: 2, V: 142 // top right cubie
H: 86, S: 11, V: 159 // middle left cubie
H: 75, S: 12, V: 133 // middle center cubie
H: 5, S: 182, V: 233 // middle right cubie
H: 68, S: 7, V: 156 // bottom left cubie
H: 25, S: 102, V: 137 // bottom center cubie
H: 107, S: 155, V: 69 // bottom right cubie
Starting image (pixel being extracted at the center of each blue square):
Resulting colors (as the above color picker gave):
As you can see, the red and white are fairly accurate, but the orange and yellow are not correct, and the blue is blatantly wrong; it is impossible for the pixel I looked at to actually be that color. What am I doing wrong? Any help would be greatly appreciated.
OpenCV has a funny way of representing its colors.
Hue - Represented as a number from 0-179 instead of 0-360. Therefore, multiply the H value by two before plugging it into a traditional color picker.
Saturation/Value - Represented as a number from 0-255. To get a percentage, divide the given value by 255 and multiply by 100.
Everything works much more sensibly now. See this website for more details on OpenCV and HSV.
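For example, a minimal sketch of that scaling (shown in JavaScript purely to illustrate the arithmetic; the function name is mine):
// Convert OpenCV's 8-bit HSV values to the ranges a typical color picker expects.
// Assumes OpenCV's representation: H in 0-179, S and V in 0-255.
function openCvHsvToPickerHsv(h, s, v) {
    return {
        hue: h * 2,                   // 0-179 -> 0-358 degrees
        saturation: (s / 255) * 100,  // 0-255 -> 0-100 %
        value: (v / 255) * 100        // 0-255 -> 0-100 %
    };
}

// e.g. the "top left cubie" from the output above:
// openCvHsvToPickerHsv(21, 120, 191) -> { hue: 42, saturation: ~47, value: ~75 }
The converted values should then line up much better with what the color picker shows.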
Given the height and the width of a rectangle of any size and an aspect ratio, how can I calculate the height and width of a minimum enclosing rectangle of the given aspect ratio?
In code, the function signature would look like this
public Size getMinimumEnclosingRectangle(Size originalRectangle, float aspectNumerator, float aspectDenominator);
Calls to this function would look like
originalRectangle AspectRatio Result
-------------------------------------------
100x100 1:2 100x200
64x32 1:1 64x64
125x100 3:2 150x100
100x345 1:3 115x345
This may not be the best way, but the way I do this calculation is to calculate the change in aspect ratio and then base the resulting width/height calculation off of that. Here is some code to illustrate this algorithm in practice:
var sourceImages = [
    { width: 100, height: 100, toAspectRatio: 1/2 },
    { width: 64,  height: 32,  toAspectRatio: 1/1 },
    { width: 125, height: 100, toAspectRatio: 3/2 },
    { width: 100, height: 345, toAspectRatio: 1/3 },
    { width: 345, height: 100, toAspectRatio: 1/3 }
];

function calculateNewSize(sourceWidth, sourceHeight, toAspectRatio)
{
    // Ratio between the source aspect ratio and the target aspect ratio:
    // < 1 means the source is narrower than the target ratio, so the width grows;
    // >= 1 means it is wider (or already matches), so the height grows (or stays the same).
    var aspectRatioChange = (sourceWidth / sourceHeight) / toAspectRatio;
    var fitWidth = aspectRatioChange < 1 ? sourceHeight * toAspectRatio : sourceWidth;
    var fitHeight = aspectRatioChange >= 1 ? sourceWidth / toAspectRatio : sourceHeight;
    console.log('(' + aspectRatioChange + ') ' + sourceWidth + " x " + sourceHeight + " -> "
        + toAspectRatio + ' -> ' + fitWidth + ' x ' + fitHeight);
}

sourceImages.forEach(function(source) {
    calculateNewSize(source.width, source.height, source.toAspectRatio);
});
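If you want the function shaped like the signature in the question (returning a size instead of logging it), a minimal sketch along the same lines might look like this; the plain {width, height} object is my stand-in for the Size type:
function getMinimumEnclosingRectangle(width, height, aspectNumerator, aspectDenominator) {
    var targetRatio = aspectNumerator / aspectDenominator;
    var aspectRatioChange = (width / height) / targetRatio;

    // If the source is narrower than the target ratio, grow the width;
    // otherwise grow the height. One side always stays unchanged.
    return {
        width:  aspectRatioChange < 1 ? Math.round(height * targetRatio) : width,
        height: aspectRatioChange >= 1 ? Math.round(width / targetRatio) : height
    };
}

// getMinimumEnclosingRectangle(125, 100, 3, 2) -> { width: 150, height: 100 }
// getMinimumEnclosingRectangle(100, 345, 1, 3) -> { width: 115, height: 345 }
Math.round is my own addition to keep pixel sizes integral; drop it if fractional sizes are acceptable.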
I am trying to draw a normal XY plot using a TChart (TeeChart) component in Embarcadero RAD Studio. When I add new points that have evenly spaced x values, e.g.
x: 1 2 3 4 5
y: 10 20 5 8 100
everything is drawn OK.
But when I add points that are unevenly spaced on the x axis, e.g.
x: 1 2 100 120 150
y: 10 20 5 8 100
the chart is drawn in such a way that the points still have the same distance between each other on the x axis. That is, the distance between points 1 and 2 is the same as between 2 and 100. Is it possible to draw a proportional XY plot?
This is my sample code:
Series1->Add(10, 1);
Series1->Add(20, 2);
Series1->Add(5, 100);
Series1->Add(8, 120);
Series1->Add(100, 150);
The style of Series1 is Line.
Instead of calling Add, you need to call AddXY to add XY points. AddXY takes the X value first and then the Y value, so (if I am reading your data correctly) the calls would look like Series1->AddXY(1, 10); and Series1->AddXY(2, 20); and so on.
I was able to find an example of a Polar clock at http://raphaeljs.com/polar-clock.html
I modified it to draw concentric circles, but I need the arc to start at 6 o'clock. I am trying to dissect how it works, but haven't been able to figure it out.
JS Fiddle:
http://jsfiddle.net/5frQ8/
var r = Raphael("holder", 600, 600);
// Custom Attribute
r.customAttributes.arc = function (value, total, R, color)
{
var alpha = 360 / total * value,
a = (90 - alpha) * Math.PI / 180,
x = 300 + R * Math.cos(a),
y = 300 - R * Math.sin(a),
path;
if (total == value)
{
path = [["M", 300, 300 - R], ["A", R, R, 0, 1, 1, 299.99, 300 - R]];
}
else
{
path = [["M", 300, 300 - R], ["A", R, R, 0, +(alpha > 180), 1, x, y]];
}
return {path: path, stroke: color,"stroke-width": 30};
};
//West
r.path().attr({arc: [575, 2000, 200, '#19A69C']});
//Total#
r.path().attr({arc: [1000, 2000, 160, '#FEDC38']});
//East
r.path().attr({arc: [425, 2000, 120, '#7BBD26']});
I have modified the main function to make the arcs start from the 6 o'clock position. Please note that the formulae to find a point in polar coordinates are always:
x = centerX + radius * cos(angle)
y = centerY + radius * sin(angle)
Find the starting and ending points accordingly.
To change the starting angle by "delta", add "delta" to all angles. Thus,
newAngle = angle + delta
The values of delta are -90 and +90 for the arcs to start from 12 o'clock and 6 o'clock respectively.
The arc drawing function is changed accordingly.
// Custom Attribute
r.customAttributes.arc = function (value, total, R, color)
{
    var angleShift = 90,
        alpha = 360 / total * value,
        a = (alpha + angleShift) * Math.PI / 180,
        x = 300 + R * Math.cos(a),
        y = 300 + R * Math.sin(a),
        path;
    if (total == value)
    {
        path = [["M", 300, 300 + R], ["A", R, R, 0, 1, 1, 300.01, 300 + R]];
    }
    else
    {
        path = [["M", 300, 300 + R], ["A", R, R, 0, +(alpha > 180), 1, x, y]];
    }
    return {path: path, stroke: color, "stroke-width": 30};
};
At Wikipedia's Mandelbrot set page there are really beautiful generated images of the Mandelbrot set.
I also just implemented my own Mandelbrot algorithm. Given that n is the number of iterations used to calculate each pixel, I color the pixels quite simply from black to green to white, like this (with C++ and Qt 5.0):
QColor mapping(Qt::white);
if (n <= MAX_ITERATIONS) {
    double quotient = (double) n / (double) MAX_ITERATIONS;
    double color = _clamp(0.f, 1.f, quotient);
    if (quotient > 0.5) {
        // Close to the mandelbrot set the color changes from green to white
        mapping.setRgbF(color, 1.f, color);
    }
    else {
        // Far away it changes from black to green
        mapping.setRgbF(0.f, color, 0.f);
    }
}
return mapping;
My result looks like this:
I already like it quite a bit, but which color gradient is used for the images on Wikipedia? How do I calculate that gradient for a given number of iterations n?
(This question is not about smoothing.)
The gradient is probably from Ultra Fractal. It is defined by 5 control points:
Position = 0.0 Color = ( 0, 7, 100)
Position = 0.16 Color = ( 32, 107, 203)
Position = 0.42 Color = (237, 255, 255)
Position = 0.6425 Color = (255, 170, 0)
Position = 0.8575 Color = ( 0, 2, 0)
where Position is in range [0, 1) and Color is RGB in range [0, 255].
The catch is that the colors are not linearly interpolated. The interpolation of colors is likely cubic (or something similar). The following image shows the difference between linear and monotone cubic interpolation:
As you can see the cubic interpolation results in smoother and "prettier" gradient. I used monotone cubic interpolation to avoid "overshooting" of the [0, 255] color range that can be caused by cubic interpolation. Monotone cubic ensures that interpolated values are always in the range of input points.
I use the following code to compute the color based on the iteration i:
double smoothed = Math.Log2(Math.Log2(re * re + im * im) / 2); // log_2(log_2(|p|))
int colorI = (int)(Math.Sqrt(i + 10 - smoothed) * gradient.Scale) % colors.Length;
Color color = colors[colorI];
where i is the diverged iteration number, re and im are the diverged coordinates, gradient.Scale is 256, and colors is an array with the pre-computed gradient colors shown above. Its length is 2048 in this case.
Well, I did some reverse engineering on the colours used on Wikipedia using the Photoshop eyedropper. There are 16 colours in this gradient:
R G B
66 30 15 # brown 3
25 7 26 # dark violet
9 1 47 # darkest blue
4 4 73 # blue 5
0 7 100 # blue 4
12 44 138 # blue 3
24 82 177 # blue 2
57 125 209 # blue 1
134 181 229 # blue 0
211 236 248 # lightest blue
241 233 191 # lightest yellow
248 201 95 # light yellow
255 170 0 # dirty yellow
204 128 0 # brown 0
153 87 0 # brown 1
106 52 3 # brown 2
Simply using a modulo and a QColor array allows me to iterate through all colours in the gradient:
if (n < MAX_ITERATIONS && n > 0) {
    int i = n % 16;
    QColor mapping[16];
    mapping[0].setRgb(66, 30, 15);
    mapping[1].setRgb(25, 7, 26);
    mapping[2].setRgb(9, 1, 47);
    mapping[3].setRgb(4, 4, 73);
    mapping[4].setRgb(0, 7, 100);
    mapping[5].setRgb(12, 44, 138);
    mapping[6].setRgb(24, 82, 177);
    mapping[7].setRgb(57, 125, 209);
    mapping[8].setRgb(134, 181, 229);
    mapping[9].setRgb(211, 236, 248);
    mapping[10].setRgb(241, 233, 191);
    mapping[11].setRgb(248, 201, 95);
    mapping[12].setRgb(255, 170, 0);
    mapping[13].setRgb(204, 128, 0);
    mapping[14].setRgb(153, 87, 0);
    mapping[15].setRgb(106, 52, 3);
    return mapping[i];
}
else return Qt::black;
The result looks pretty much like what I was looking for:
:)
I believe they're the default colours in Ultra Fractal. The evaluation version comes with source for a lot of the parameters, and I think that includes that colour map (if you can't infer it from the screenshot on the front page) and possibly also the logic behind dynamically scaling that colour map appropriately for each scene.
This is an extension of NightElfik's great answer.
The Python library SciPy has monotone cubic interpolation methods in version 1.5.2 with pchip_interpolate. I have included the code I used to create my gradient below. I decided to include helper values less than 0 and larger than 1 so that the interpolation wraps from the end back to the beginning (no sharp corners).
import numpy as np
from scipy.interpolate import pchip_interpolate
#import matplotlib.pyplot as plt  # only needed for the optional plot at the end

#set up the control points for your gradient
yR_observed = [0, 0, 32, 237, 255, 0, 0, 32]
yG_observed = [2, 7, 107, 255, 170, 2, 7, 107]
yB_observed = [0, 100, 203, 255, 0, 0, 100, 203]
x_observed = [-.1425, 0, .16, .42, .6425, .8575, 1, 1.16]

#Create the arrays with the interpolated values
x = np.linspace(min(x_observed), max(x_observed), num=1000)
yR = pchip_interpolate(x_observed, yR_observed, x)
yG = pchip_interpolate(x_observed, yG_observed, x)
yB = pchip_interpolate(x_observed, yB_observed, x)

#Convert them back to python lists
x = list(x)
yR = list(yR)
yG = list(yG)
yB = list(yB)

#Find the indices where x crosses 0 and crosses 1 for slicing
start = 0
end = 0
for i in x:
    if i > 0:
        start = x.index(i)
        break
for i in x:
    if i > 1:
        end = x.index(i)
        break

#Slice away the helper data at the beginning and end, leaving just 0 to 1
x = x[start:end]
yR = yR[start:end]
yG = yG[start:end]
yB = yB[start:end]

#Plot the values if you want
#plt.plot(x, yR, color = "red")
#plt.plot(x, yG, color = "green")
#plt.plot(x, yB, color = "blue")
#plt.show()