confusion about gluProject and OpenGL clipping - c++

Consider the fixed transformation pipeline of OpenGL, with the following parameters:
GL_MODELVIEW_MATRIX
0,175,303.109,0,688.503,-2741.84,1583,0,29.3148,5.52094,-3.18752,0,-87.4871,731.309,-1576.92,1
GL_PROJECTION_MATRIX
2.09928,0,0,0,0,3.73205,0,0,0,0,-1.00658,-1,0,0,-43.9314,0
GL_VIEWPORT
0,0,1920,1080
When I draw the faces of the unit cube I get the following:
By looking at the picture, I would expect half of the vertices to have pixel y-coordinate above 1080, and the other half to have a negative y-coordinate.
Instead, with gluProject, all vertices have y > 1080:
model coordinate 0 0 0 -> screen coordinate 848.191 1474.61 0.989359
model coordinate 1 0 0 -> screen coordinate 821.586 1973.88 0.986045
model coordinate 0 1 0 -> screen coordinate -198317 667165 4.61719
model coordinate 1 1 0 -> screen coordinate -2957.48 12504.1 1.07433
model coordinate 0 0 1 -> screen coordinate 885.806 1479.77 0.989388
model coordinate 1 0 1 -> screen coordinate 868.195 1979.01 0.986088
model coordinate 0 1 1 -> screen coordinate -438501 1.39841e+06 8.60228
model coordinate 1 1 1 -> screen coordinate -3191.35 12592.4 1.07507
I could successfully reproduce the gluProject() results with my "custom" calculations.
Why is the y-coordinate of all vertices above 1080?
P.S. To draw the cube I rely on:
glBegin(GL_QUADS);
for(int d = 0; d < 3; ++d)
    for(int s = 0; s < 2; ++s)
        for(int v = 0; v < 4; ++v)
        {
            const int a = (v & 1) ^ (v >> 1 & 1);
            const int b = v >> 1 & 1;
            const int d0 = d;
            const int d1 = (d + 1) % 3;
            const int d2 = (d + 2) % 3;
            double p[3];
            p[d] = s;
            p[d1] = a;
            p[d2] = b;
            glColor3dv(p);
            glVertex3dv(p);
        }
glEnd();

I found the answer, in part thanks to this post.
The explanation is that the 4 vertices I expected to have y < 0 in screen space are also behind the camera, and therefore have w_clip < 0.
Perspective division (y_clip / w_clip) of a negative y_clip by a negative w_clip then produces a positive value in normalized device coordinates, and hence in screen space.
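For reference, here is a minimal sketch of such a manual projection (not the exact "custom" code mentioned above; the helper name projectChecked is mine) that also reports when w_clip is not positive. It assumes the matrices are stored column-major, exactly as returned by glGetDoublev, and the default depth range:

#include <cstdio>

// y = M * x for a column-major 4x4 matrix and a homogeneous 4-vector
static void mul(const double m[16], const double x[4], double y[4])
{
    for (int r = 0; r < 4; ++r)
        y[r] = m[r] * x[0] + m[4 + r] * x[1] + m[8 + r] * x[2] + m[12 + r] * x[3];
}

// Returns false when the vertex is behind the camera (w_clip <= 0); in that
// case the window coordinates produced by perspective division are meaningless.
bool projectChecked(const double obj[3],
                    const double modelview[16], const double projection[16],
                    const int viewport[4], double win[3])
{
    double eye[4], clip[4], in[4] = { obj[0], obj[1], obj[2], 1.0 };
    mul(modelview, in, eye);      // object -> eye coordinates
    mul(projection, eye, clip);   // eye -> clip coordinates

    if (clip[3] <= 0.0)           // behind the camera: the division flips the sign
        return false;

    const double ndcX = clip[0] / clip[3];   // normalized device coordinates
    const double ndcY = clip[1] / clip[3];
    const double ndcZ = clip[2] / clip[3];

    win[0] = viewport[0] + (ndcX * 0.5 + 0.5) * viewport[2];
    win[1] = viewport[1] + (ndcY * 0.5 + 0.5) * viewport[3];
    win[2] = ndcZ * 0.5 + 0.5;
    return true;
}

With the modelview, projection and viewport listed above, this reproduces the values reported by gluProject for the vertices in front of the camera (e.g. 848.191, 1474.61 for the origin) and flags the other four as behind the camera.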

Related

I try to understand the View Matrix in OpenGL

I am trying to construct my own View Matrix in OpenGL.
I'm following this link:
https://www.opengl.org/sdk/docs/man2/xhtml/gluLookAt.xml
From the OpenGL doc, I have the following.
eye position = eye(xe, ye, ze)
center position = cen(0, 0, 0)
up = up(xu, yu, zu). (e.g. up = (0, 1, 0))
forward vector
f' = cen - eye = (0, 0, 0) - (xe, ye, ze) = (-xe, -ye, -ze)
side vector
s' = f' x up
I don't understand why it is f' x up, and not up x f'.
u' = s' x f'
I don't understand why u' = s' x f', and not u' = f' x s'.
we normalize s', u', f'
s = norm(s'), u = norm(u'), f=norm(f')
We construct the rotation matrix in the usual row-major notation (as we learn in algebra class):
R =
s_x u_x f_x 0
s_y u_y f_y 0
s_z u_z f_z 0
0 0 0 1
translation matrix:
T =
1 0 0 x
0 1 0 y
0 0 1 z
0 0 0 1
we know
M = T*R
View Matrix V = invert(M)
V = invert(T*R) = invert(R) * invert(T)
Since R is orthonormal, invert(R) = transpose(R), so
V = transpose(R) * invert(T)
transpose(R) =
s_x s_y s_z 0
u_x u_y u_z 0
f_x f_y f_z 0
0 0 0 1
invert(T) =
1 0 0 -x
0 1 0 -y
0 0 1 -z
0 0 0 1
so
View Matrix V = transpose(R) * invert(T)
But in the OpenGL doc, f changes to -f.
The rotation matrix then changes to the following:
R =
s_x u_x -f_x 0
s_y u_y -f_y 0
s_z u_z -f_z 0
0 0 0 1
I don't understand why we need to negate the forward vector.
The cross-product order simply follows from its definition; that is just how the cross product is defined. You are setting up a right-handed coordinate system: if you align the thumb of your right hand with the first factor and the index finger with the second factor, then the middle finger points in the direction of the cross product (perpendicular to both). There is really not much more to say about this.
And since you are setting up a right-handed coordinate system, the forward direction of the camera must be mapped to the negative z-direction. That's why the third column of the rotation matrix is inverted. If you don't do this, you end up with a left-handed coordinate system, where the camera looks in the direction of the positive z-axis.
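To make this concrete, here is a sketch of a lookAt-style construction following the recipe above. The function name myLookAt is mine, and glm is assumed here only for the vector math; it is not part of the question:

#include <glm/glm.hpp>

glm::mat4 myLookAt(const glm::vec3& eye, const glm::vec3& center, const glm::vec3& up)
{
    const glm::vec3 f = glm::normalize(center - eye);       // forward
    const glm::vec3 s = glm::normalize(glm::cross(f, up));  // side = f x up
    const glm::vec3 u = glm::cross(s, f);                   // true up = s x f

    // Rotation part: the rows are s, u and -f, so the camera's forward
    // direction is mapped onto the negative z-axis (right-handed system).
    // glm matrices are indexed as V[column][row].
    glm::mat4 V(1.0f);
    V[0][0] =  s.x; V[1][0] =  s.y; V[2][0] =  s.z;
    V[0][1] =  u.x; V[1][1] =  u.y; V[2][1] =  u.z;
    V[0][2] = -f.x; V[1][2] = -f.y; V[2][2] = -f.z;

    // Translation part: invert(T) folded in, i.e. -transpose(R) * eye.
    V[3][0] = -glm::dot(s, eye);
    V[3][1] = -glm::dot(u, eye);
    V[3][2] =  glm::dot(f, eye);
    return V;
}

If you remove the minus signs on the f row you get the left-handed variant described above, with the camera looking down the positive z-axis.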

calculate pixel coordinates for 8 equidistant points on a circle

I have a circle centred at 0 with radius 80. How, using Python, do I calculate the coordinates of 8 equidistant points around the circumference of the circle?
import math

r = 80
numPoints = 8
points = []
for index in range(numPoints):
    points.append([r*math.cos((index*2*math.pi)/numPoints), r*math.sin((index*2*math.pi)/numPoints)])
print(points)
You can simplify this somewhat if you know you are always going to have only 8 points, with something like:
import math

r = 80
x = (r * math.sqrt(2)) / 2
points = [[0, r], [x, x], [r, 0], [-x, x], [-r, 0], [-x, -x], [0, -r], [x, -x]]
print(points)
where x is the x/y value of the point 45 degrees around the circle and 80 units from the origin.
See the picture for more clarity. In the picture:
coordinates 1, 2, 3, 4, 5, 6, 7 and 8 are equidistant points on the circumference of a circle of radius R whose centre is at X (0, 0).
Take the triangle XLZ; it is right-angled at L.
Let LZ = h and LY = A.
XL + LY = R  =>  XL + A = R  =>  XL = R - A
Since XLZ is right-angled, XZ^2 = XL^2 + LZ^2, so
R^2 = (R - A)^2 + h^2    ... (1)
Since these 8 points make an octagon, theta = 360 deg / 8 = 45 deg.
tan 45 deg = h / XL = h / (R - A)  =>  1 = h / (R - A)  =>  h = R - A    ... (2)
So the coordinates of Z are (R - A, h), i.e. (h, h).
From equations (1) and (2):
R^2 = h^2 + h^2  =>  2h^2 = R^2  =>  h = R / sqrt(2)
So the coordinates at point 2 (Z) are (R/sqrt(2), R/sqrt(2)).
The remaining points can be derived easily, as they are just mirror images.
So all coordinates are
1 (0,R)
2 (R/sqrt2,R/sqrt2)
3 (R,0)
4 (-R/sqrt2, R/sqrt2)
5 (-R,0)
6 (-R/sqrt2,-R/sqrt2)
7 (0,-R)
8 (R/sqrt2, -R/sqrt2)

OpenGL gaps within sphere

I have recently been trying to render a 3D sphere in OpenGL using triangles. I have been testing and modifying code from various websites and have finally found a winning combination. The only problem is that there are visible gaps in the sphere. Any thoughts on what would be causing this?
Code to render sphere
float Slices = 30;
float Stacks = 60;
float Radius = 20.0;

for (int i = 0; i <= Stacks; ++i){
    float V = i / (float) Stacks;
    float phi = V * glm::pi<float>();

    for (int j = 0; j <= Slices; ++j){
        float U = j / (float) Slices;
        float theta = U * (glm::pi<float>() * 4);

        float x = cosf(theta) * sinf(phi);
        float y = cosf(phi);
        float z = sinf(theta) * sinf(phi);
        x *= Radius;
        y *= Radius;
        z *= Radius;

        Vertex *v = new Vertex {{x, y, z},    //Position
                                {255, 0, 0}}; //Color
        screenToBuffer(v, 1);
        delete v;   // scalar new, so scalar delete
    }
}
Problem
Try setting it to GL_TRIANGLE_STRIP.
The problem might be that the current mode considers each group of three vertices to be only one triangle.
Like so
Indices: 0 1 2 3 4 5 ...
Triangles: {0 1 2} {3 4 5}
GL_TRIANGLE_STRIP will do this:
Indices: 0 1 2 3 4 5 ...
Triangles: {0 1 2}
{1 2 3} drawing order is (2 1 3) to maintain proper winding
{2 3 4}
{3 4 5}
See this answer for a proper way to do it.
https://stackoverflow.com/a/7958376/1943599
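As a sketch of the indexing idea (the helper name is mine, and it assumes the vertices are generated in the same stack/slice order as the code above, with Slices + 1 vertices per stack), one strip per pair of adjacent stacks could be built like this:

#include <vector>

std::vector<unsigned> buildStripIndices(int stacks, int slices)
{
    std::vector<unsigned> indices;
    const int verticesPerStack = slices + 1;

    for (int i = 0; i < stacks; ++i)             // one strip per band of the sphere
    {
        for (int j = 0; j <= slices; ++j)
        {
            indices.push_back( i      * verticesPerStack + j);  // upper ring
            indices.push_back((i + 1) * verticesPerStack + j);  // lower ring
        }
    }
    return indices;
}

Each band then occupies 2 * (slices + 1) consecutive indices, so it can be drawn with one glDrawElements(GL_TRIANGLE_STRIP, 2 * (slices + 1), ...) call per band, or the bands can be joined with degenerate triangles or a primitive restart index.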

Swap two colors using color matrix

How can I swap two colors using a color matrix? For instance swapping red and blue is easy. The matrix would look like:
0 0 1 0 0
0 1 0 0 0
1 0 0 0 0
0 0 0 1 0
0 0 0 0 1
So how can I swap any two colors in general? For example, there is Color1 with R1, G1, B1 and Color2 with R2, G2, B2.
EDIT: By swap I mean Color1 will translate into Color2 and Color2 will translate into Color1. It looks like I need a reflection transformation. How do I calculate it?
GIMP reference removed. Sorry for confusion.
This appears to be the section of the color-exchange.c file in the GIMP source that cycles through all the pixels and, if a pixel meets the chosen criteria (which can be a range of colors), swaps it with the chosen color:
for (y = y1; y < y2; y++)
  {
    gimp_pixel_rgn_get_row (&srcPR, src_row, x1, y, width);

    for (x = 0; x < width; x++)
      {
        guchar pixel_red, pixel_green, pixel_blue;
        guchar new_red, new_green, new_blue;
        guint  idx;

        /* get current pixel-values */
        pixel_red   = src_row[x * bpp];
        pixel_green = src_row[x * bpp + 1];
        pixel_blue  = src_row[x * bpp + 2];

        idx = x * bpp;

        /* want this pixel? */
        if (pixel_red >= min_red &&
            pixel_red <= max_red &&
            pixel_green >= min_green &&
            pixel_green <= max_green &&
            pixel_blue >= min_blue &&
            pixel_blue <= max_blue)
          {
            guchar red_delta, green_delta, blue_delta;

            red_delta   = pixel_red > from_red ?
              pixel_red - from_red : from_red - pixel_red;
            green_delta = pixel_green > from_green ?
              pixel_green - from_green : from_green - pixel_green;
            blue_delta  = pixel_blue > from_blue ?
              pixel_blue - from_blue : from_blue - pixel_blue;

            new_red   = CLAMP (to_red   + red_delta,   0, 255);
            new_green = CLAMP (to_green + green_delta, 0, 255);
            new_blue  = CLAMP (to_blue  + blue_delta,  0, 255);
          }
        else
          {
            new_red   = pixel_red;
            new_green = pixel_green;
            new_blue  = pixel_blue;
          }

        /* fill buffer */
        dest_row[idx + 0] = new_red;
        dest_row[idx + 1] = new_green;
        dest_row[idx + 2] = new_blue;

        /* copy alpha-channel */
        if (has_alpha)
          dest_row[idx + 3] = src_row[x * bpp + 3];
      }

    /* store the dest */
    gimp_pixel_rgn_set_row (&destPR, dest_row, x1, y, width);

    /* and tell the user what we're doing */
    if (!preview && (y % 10) == 0)
      gimp_progress_update ((gdouble) y / (gdouble) height);
  }
EDIT/ADDITION
Another way you could have transformed red to blue would be with this matrix:
1 0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 1 0
-1 0 1 0 1
The only values that really matter are the ones in the bottom row of this matrix.
This would be the same as saying: subtract 255 from red, keep green the same, and then add 255 to blue. You could cut the alpha in half as well, like so:
-1 0 1 -0.5 1
So (just like the GIMP source) you just need to find the difference between your current color and your target color for each channel, and then apply that difference. Instead of channel values from 0 to 255, you would use values from 0 to 1.
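A small sketch of that idea, not tied to any particular imaging library (the Color struct and the function names are mine): a 5x5 matrix applied with the row-vector convention [R G B A 1] * M, where the bottom row holds the per-channel offsets computed as the difference between the two colors, with channel values in 0..1:

#include <algorithm>

struct Color { float r, g, b, a; };

// Apply a 5x5 color matrix to an RGBA color: out = [R G B A 1] * m, clamped to 0..1.
Color apply(const float m[5][5], const Color& c)
{
    const float v[5] = { c.r, c.g, c.b, c.a, 1.0f };
    float out[4];
    for (int col = 0; col < 4; ++col)
    {
        out[col] = 0.0f;
        for (int row = 0; row < 5; ++row)
            out[col] += v[row] * m[row][col];
        out[col] = std::min(1.0f, std::max(0.0f, out[col]));   // clamp, like CLAMP() above
    }
    return { out[0], out[1], out[2], out[3] };
}

// Identity in the upper 4x4, per-channel differences (to - from) in the bottom row.
void translationMatrix(const Color& from, const Color& to, float m[5][5])
{
    for (int row = 0; row < 5; ++row)
        for (int col = 0; col < 5; ++col)
            m[row][col] = (row == col) ? 1.0f : 0.0f;
    m[4][0] = to.r - from.r;
    m[4][1] = to.g - from.g;
    m[4][2] = to.b - from.b;
}

With from = red (1, 0, 0) and to = blue (0, 0, 1) this produces exactly the matrix shown above. Note that such a matrix only shifts every color by a fixed amount; it does not swap the two colors in both directions, which is what the reflection-matrix approach below achieves.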
You could have changed it from red to green like so:
-1 1 0 0 1
See here for some good info:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms533875%28v=vs.85%29.aspx
Good luck.
I solved it by creating a reflection matrix via D3DXMatrixReflect, using a plane that is perpendicular to the vector AB and passes through the midpoint of AB:
D3DXVECTOR3 AB( colorA.r - colorB.r, colorA.g - colorB.g, colorA.b - colorB.b );
D3DXVECTOR3 midpoint( (colorA.r + colorB.r) / 2, (colorA.g + colorB.g) / 2, (colorA.b + colorB.b) / 2 );  // midpoint of the two colors
D3DXPLANE plane( AB.x, AB.y, AB.z, -AB.x*midpoint.x - AB.y*midpoint.y - AB.z*midpoint.z );
D3DXMATRIX reflection;
D3DXMatrixReflect( &reflection, &plane );

Linear programming add weight

I have been tasked with writing a linear program that will tell the user where to add weight onto a cylindrical drum in order to balance the center of gravity. The weights are 2 lbs and 5 lbs, and a maximum of 10 lbs can be added at one location. The 2 lb weights are 2" tall and the 5 lb weights are 6" tall. I think the best way to go about this is to use polar coordinates and assume a perfect cylinder for now, as it will be within 1% of perfect. I also think I should start by only changing the X and Y axes and keeping the Z axis at 0 for now. Any tips to head me in the right direction would be appreciated.
!Drum weight problem;
!sets;
Sets:
Weight: Pounds, Height;
Location: X, Y, Angle;
Set(Weight, Location): PX, PY, PAngle;
Endsets
!data;
Data:
Weight = W1 W2 W3 W4;
Location = L1 L2 L3 L4;
!attribute values;
Pounds = 2 4 5 10;
Height = 2 4 6 12;
X = 0 1 2 3;
Y = 0 1 2 3;
Angle = 0 90 180 270;
Enddata
!objective;
Min = #MIN(Set(I, J): Weight (I, J), Location (K, L, M);
!constraints;
#FOR( Weight(I): [Weight_row]
Pounds >= 2;
Height >= 2;
#FOR( Location(J): [Location_row]
X >=0;
Y >=0;
Angle >=0;
End