Get normal of a gluDisk? - opengl

I'm writing a Solar System simulator in OpenGL. One of the components in the Solar System is the Earth's orbit (a simple circle around the Sun, created by the gluDisk function).
I was wondering how I can retrieve the normal vector of this disk, because I need to use it as a rotation axis for the camera (which needs to follow the Earth's revolution around the Sun).
This is the code (Java) that creates the orbit; it works well.
Using this code, how could I retrieve the normal of this disk? (The disk lies in some plane, which is defined by some normal.)
gl.glPushMatrix();
// if tilt is 0, align the orbit with the xz plane
gl.glRotated(90d - orbitTilt, 1, 0, 0);
gl.glTranslated(0, 0, 0); // orbit is around the origin
gl.glColor3d(0.5, 0.5, 0.5); // gray
// draw orbit
glu.gluQuadricDrawStyle(quad, GLU.GLU_SILHOUETTE);
glu.gluDisk(quad, 0, position.length(), slices, 1);
gl.glPopMatrix();

Before transformation, the disk lies in the xy-plane. This means that its normals point in the z-direction, (0, 0, 1). In fact, gluDisk() will emit these normals during rendering, and the transformations you specify will be applied to them, so for rendering you don't have to do anything.
To calculate the transformed normal yourself, you only need to apply the rotation. The translation is not used for transforming normals.
So we need to apply a rotation of 90 - t degrees (where t is orbitTilt) around the x-axis to the vector (0, 0, 1). Applying the corresponding rotation matrix gives:
[ 1      0           0      ]   [ 0 ]   [     0      ]
[ 0  cos(90-t)  -sin(90-t)  ] * [ 0 ] = [ -sin(90-t) ]
[ 0  sin(90-t)   cos(90-t)  ]   [ 1 ]   [  cos(90-t) ]
Applying a couple of trigonometric identities:
cos(90 - t) = sin(t)
sin(90 - t) = cos(t)
gives:
[     0      ]   [    0    ]
[ -sin(90-t) ] = [ -cos(t) ]
[  cos(90-t) ]   [  sin(t) ]
When you apply the sin() and cos() functions in your code, keep in mind that the angles are specified in radians. So if your orbitTilt angle is currently in degrees, the normal would be:
xNormal = 0.0;
yNormal = -cos(orbitTilt * (M_PI / 180.0));
zNormal = sin(orbitTilt * (M_PI / 180.0));
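For example, as a small self-contained helper (a sketch in C++, following the pseudocode above; in the Java code from the question the equivalents would be Math.toRadians, Math.cos and Math.sin):
#include <cmath>

// Sketch: normal of the orbit plane for a given orbitTilt in degrees,
// assuming the same glRotated(90 - orbitTilt, 1, 0, 0) as in the question.
void orbitNormal(double orbitTiltDeg, double n[3]) {
    double t = orbitTiltDeg * M_PI / 180.0; // degrees to radians
    n[0] = 0.0;
    n[1] = -std::cos(t);
    n[2] = std::sin(t);
}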

Related

Waving flag shader normals

I'm trying to create a vertex shader that produces a waving flag. I got the vertex positions going fine, but I'm having trouble adjusting the NORMALS to be what they should be in a waving flag. I did a little kludge, but it only works from SOME angles.
Here is the pertinent vertex shader code:
vec4 aPos=input_XYZ;
float aTime=(uniform_timer/75.)*7.;
aPos.y+=sin(aTime)*.15f*input_XYZ.x; // The wave
output_XYZ=aPos*uniform_ComboMatrix; // Into scene space
vec4 aWorkNormal=input_Normal;
aWorkNormal.y+=sin(aTime)*.25f; // <-- Here's the kludge to inexpensively tip the normal into the "wave" pattern
output_Normal=aWorkNormal*uniform_NormalizedWorldMatrix; // Put into same space as the light's vector
So I want to lose the kludge and actually give it the correct normal for flag waving... what's the right way to transform/rotate a normal on a waving flag so it points correctly after the y position is modified by sin?
The transformation you apply to aPos is a linear one, which can be described by the matrix:
    [ 1  0  0 ]
M = [ C  1  0 ]
    [ 0  0  1 ]
C = sin(aTime)*0.15
The normals then need to be transformed by the matrix W = transpose(inverse(M)):
    [ 1  -C  0 ]
W = [ 0   1  0 ]
    [ 0   0  1 ]
Turning it back into code:
vec4 aWorkNormal = input_Normal;
aWorkNormal.x -= sin(aTime)*.15f*aWorkNormal.y;
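If you want to convince yourself that this is right, here is a quick standalone sanity check (plain C++, purely illustrative and not part of the shader): a tangent transformed by M and a normal transformed by W stay perpendicular.
#include <cstdio>

int main() {
    double C = 0.15; // sin(aTime)*0.15 at some fixed time
    // A surface tangent and a normal that are perpendicular before the wave.
    double t[3] = { 1.0, 0.0, 0.0 };
    double n[3] = { 0.0, 1.0, 0.0 };
    // M adds C*x to y; W = transpose(inverse(M)) subtracts C*y from x.
    double Mt[3] = { t[0], t[1] + C * t[0], t[2] };
    double Wn[3] = { n[0] - C * n[1], n[1], n[2] };
    double dot = Mt[0]*Wn[0] + Mt[1]*Wn[1] + Mt[2]*Wn[2];
    std::printf("dot = %f\n", dot); // prints 0: still perpendicular
    return 0;
}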

Algorithm for 'pixelated circle' image recognition

Here are three sample images. In these images I want to find:
Coordinates of those small pixelated partial circles.
Rotation of these circles. These circles have a 'pointy' side. I want to find its direction.
For example, the coordinates and the angle with the positive x-axis of the small partial circle in the
first image is (51 px, 63 px), 240 degrees, respectively.
second image is (50 px, 52 px), 300 degrees, respectively.
third image is (80 px, 29 px), 225 degrees, respectively.
I don't care about scale invariance.
Methods I have tried:
ORB feature detection
SIFT feature detection
Feature detection doesn't seem to work here.
Above is an example of the ORB feature detector finding similar features in the 1st and 2nd images.
It finds one correct match; the rest are wrong.
Probably because these images are too low resolution to find any meaningful corners or blobs. The corners and blobs it does find are not much different from the other pixelated objects present.
I have seen people use erosion and dilation to remove noise, but my objects are too small for that to work.
Perhaps some other feature detector can help?
I am also thinking about the Generalized Hough transform, but I can't find a complete tutorial on implementing it with OpenCV (C++). I also want something that is fast, hopefully real-time.
Any help is appreciated.
If the small circles have constant size, then you might try a convolution.
This is a quick and dirty test I ran with ImageMagick for speed, and coefficients basically pulled out of thin air:
convert test1.png -define convolve:scale='!' -morphology Convolve \
"12x12: \
-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9 \
-9,-7,-2,-1,0,0,0,0,-1,-2,-7,-9 \
-9,-2,-1,0,9,9,9,9,0,-1,-2,-9 \
-9,-1,0,9,7,7,7,7,9,0,-1,-9 \
-9,0,9,7,-9,-9,-9,-9,7,9,0,-9 \
-9,0,9,7,-9,-9,-9,-9,7,9,0,-9 \
-9,0,9,7,-9,-9,-9,-9,7,9,0,-9 \
-9,0,9,7,-9,-9,-9,-9,7,9,0,-9 \
-9,-1,0,9,7,7,7,7,9,0,-1,-9 \
-9,-2,0,0,9,9,9,9,0,0,-2,-9 \
-9,-7,-2,-1,0,0,0,0,-1,-2,-7,-9 \
-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9" \
test2.png
I then ran a simple level stretch plus contrast to bring out what already were visibly more luminous pixels, and a sharpen/reduction to shrink pixel groups to their barycenters (these last operations could be done by multiplying the matrix by the proper kernel), and got this.
The source image on the left is converted to the output on the right, the pixels above a certain threshold mean "circle detected".
Once this is done, I imagine the "pointy" end can be refined with a modified quincunx: use a 3x3 square grid centered on the center pixel, count the total luminosity in each of the eight peripheral squares, and that ought to give you a good idea of where the "point" is. You might want to apply thresholding to offset a possible blurring of the border (the centermost circle in the example below, the one inside the large circle, could give you a false reading).
For example, if we know the coordinates of the center in the grayscale matrix M, and we imagine the circle having a diameter of 7 pixels (this is more or less what the convolution above says), we would do
uint quic[3][3] = { { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 } };
for (y = -3; y <= 3; y++) {
    for (x = -3; x <= 3; x++) {
        if (matrix[cy+y][cx+x] > threshold) {
            // Map the 7x7 window into 3x3 bands: {-3,-2} -> 0, {-1,0,1} -> 1, {2,3} -> 2
            int iy = (y < -1) ? 0 : (y > 1) ? 2 : 1;
            int ix = (x < -1) ? 0 : (x > 1) ? 2 : 1;
            quic[iy][ix] += matrix[cy+y][cx+x];
        }
    }
}
// Now, we find which quadrant in quic holds the maximum:
// if it is, say, quic[2][0], the point is southeast.
// 0 1 2 x
// 0 NE N NW
// 1 E X W
// 2 SE S SW
// y
// Value X (1,1) is totally unlikely - the convolution would
// not have found the circle in the first place if it was so
For an accurate result you would have to use "sub-pixel" addressing, which is slightly more complicated. With the method above, one of the circles results in these quincunx values, which give a point to the southeast:
Needless to say, with this kind of resolution the use of a finer grid is pointless; you'd get an error of the same order of magnitude.
I've tried with some random doodles and the convolution matrix has a good rejection of non-signal shapes, but of course this is due to the information about the target's size and shape: if that assumption fails, this approach will be a dead end.
It would help to know the image source: there are several tricks used in astronomy and medicine to detect specific shapes or features.
Python OpenCV (cv2)
The above can be implemented with Python:
#!/usr/bin/python3
import cv2
import numpy as np
# Scaling factor
d = 240
kernel1 = np.array([
[ -9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9 ],
[ -9,-7,-2,-1,0,0,0,0,-1,-2,-7,-9 ],
[ -9,-2,-1,0,9,9,9,9,0,-1,-2,-9 ],
[ -9,-1,0,9,7,7,7,7,9,0,-1,-9 ],
[ -9,0,9,7,-9,-9,-9,-9,7,9,0,-9 ],
[ -9,0,9,7,-9,-9,-9,-9,7,9,0,-9 ],
[ -9,0,9,7,-9,-9,-9,-9,7,9,0,-9 ],
[ -9,0,9,7,-9,-9,-9,-9,7,9,0,-9 ],
[ -9,-1,0,9,7,7,7,7,9,0,-1,-9 ],
[ -9,-2,0,0,9,9,9,9,0,0,-2,-9 ],
[ -9,-7,-2,-1,0,0,0,0,-1,-2,-7,-9 ],
[ -9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9 ]
], dtype = np.single)
sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
image = cv2.imread('EuDpD.png')
# Scale kernel
kernel1 = kernel1 / d
identify = cv2.filter2D(src=image, ddepth=-1, kernel=kernel1)
# Sharpen image
identify = cv2.filter2D(src=identify, ddepth=-1, kernel=sharpen)
# Cut at ~90% of maximum
ret,thresh = cv2.threshold(identify, 220, 255, cv2.THRESH_BINARY)
cv2.imwrite('identify.png', thresh)
The above, run on the grayscale image (left), gives the following result (right). Better sharpening or adaptive thresholding could narrow this down to a single pixel.
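Since the question mentions wanting OpenCV in C++, the same filtering could look roughly like this (an untested sketch; the file names, the 1/240 scale and the 220 threshold are simply carried over from the Python version above):
#include <opencv2/opencv.hpp>

int main() {
    // Same 12x12 matched-filter kernel as above
    float k[12][12] = {
        { -9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9 },
        { -9,-7,-2,-1, 0, 0, 0, 0,-1,-2,-7,-9 },
        { -9,-2,-1, 0, 9, 9, 9, 9, 0,-1,-2,-9 },
        { -9,-1, 0, 9, 7, 7, 7, 7, 9, 0,-1,-9 },
        { -9, 0, 9, 7,-9,-9,-9,-9, 7, 9, 0,-9 },
        { -9, 0, 9, 7,-9,-9,-9,-9, 7, 9, 0,-9 },
        { -9, 0, 9, 7,-9,-9,-9,-9, 7, 9, 0,-9 },
        { -9, 0, 9, 7,-9,-9,-9,-9, 7, 9, 0,-9 },
        { -9,-1, 0, 9, 7, 7, 7, 7, 9, 0,-1,-9 },
        { -9,-2, 0, 0, 9, 9, 9, 9, 0, 0,-2,-9 },
        { -9,-7,-2,-1, 0, 0, 0, 0,-1,-2,-7,-9 },
        { -9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9 }
    };
    cv::Mat kernel1(12, 12, CV_32F, k);
    kernel1 = kernel1 / 240.0f;  // scale kernel
    cv::Mat sharpen = (cv::Mat_<float>(3, 3) << 0, -1, 0, -1, 5, -1, 0, -1, 0);

    cv::Mat image = cv::imread("EuDpD.png");
    cv::Mat conv, sharp, thresh;
    cv::filter2D(image, conv, -1, kernel1);      // circle-shaped convolution
    cv::filter2D(conv, sharp, -1, sharpen);      // sharpen to shrink blobs
    cv::threshold(sharp, thresh, 220, 255, cv::THRESH_BINARY); // cut at ~90% of maximum
    cv::imwrite("identify.png", thresh);
    return 0;
}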

PTB-OpenGL stereo rendering and eye separation value

I tried to draw a 3D dot cloud using an OpenGL asymmetric frustum parallel-axis projection. The general principle can be found on this website (http://paulbourke.net/stereographics/stereorender/#). But the problem now is that when I use the real eye separation (0.06 m), my eyes do not fuse well. When I use eye separation = 1/30 * focal length, there is no problem. I don't know if there is a problem with the calculation, or a problem with the parameters. Part of the code is posted below. Thank you all.
for view = 0:stereoViews
% Select 'view' to render (left- or right-eye):
Screen('SelectStereoDrawbuffer', win, view);
% Manually reenable 3D mode in preparation of eye draw cycle:
Screen('BeginOpenGL', win);
% Set the eye separation:
eye = 0.06; % in meter
% Calculate the frustum shift at the near plane:
fshift = 0.5 * eye * depthrangen/(vdist/100); % vdist is the focal length, 56cm, 0.56m
right_near = depthrangen * tand(FOV/2); % depthrangen is the depth of the near plane, 0.4. %FOV is the field of view, 18°
left_near = -right_near;
top_near = right_near* aspectr;
bottom_near = -top_near;
% Setup frustum projection for this eyes 'view':
glMatrixMode(GL.PROJECTION)
glLoadIdentity;
eyeside = 1+(-2*view); % 1 for left eye, -1 for right eye
glFrustum(left_near + eyeside * fshift, right_near + eyeside * fshift, bottom_near, top_near, depthrangen, depthrangefObj);
% Setup camera for this eyes 'view':
glMatrixMode(GL.MODELVIEW);
glLoadIdentity;
gluLookAt(0 - eyeside * 0.5 * eye, 0, 0, 0 - eyeside * 0.5 * eye, 0, -1, 0, 1, 0);
% Clear color and depths buffers:
glClear;
moglDrawDots3D(win, xyz(:,:,iframe), 10, [], [], 1);
moglDrawDots3D(win, xyzObj(:,:,iframe), 10, [], [], 1);
% Manually disable 3D mode before calling Screen('Flip')!
Screen('EndOpenGL', win);
% Repeat for other eyes view if in stereo presentation mode...
end

Recalculating normal with curve (sine wave)

I'm trying to make a water geometry shader that waves using a sine wave.
For each vertex I calculate a sine for x and y, and then offset the vertex by the result * normal.
Because I offset my vertices I have to recalculate my normals, but if I do this per triangle I get hard edges, while it should be smooth waves.
I understand that somehow I should use that sine function and get a 3D normal from it, but I'm puzzled.
Could someone explain how you get the normal of a sine calculation in 3D space?
You don't give the exact function you use, but it sounds like it's something like:
z = a * sin(b * x) * sin(b * y)
I'll walk through the process, so you should be able to apply the recipe even if your function looks slightly different. Also, if your wave is not relative to the xy-plane, you can still use the same calculation, and then apply the necessary transformation matrix to the resulting normal.
What we have here is a parametric surface, where the 3 coordinates of a point on the surface are calculated from two parameters. In this case, the parameters are x and y, and the vector describing each point is:
          [ x                           ]
v(x, y) = [ y                           ]
          [ a * sin(b * x) * sin(b * y) ]
The process described here works for any parametric surface, including common geometric shapes. For example for a torus, the two parameters would be two angles. The mathematical tools needed for calculating are basic analysis (derivatives) and some vector geometry (cross product).
As the first step, we calculate the gradient vector for each of the two parameters. These gradient vectors consist of the partial derivatives of each vector component with respect to the corresponding parameter. In the example, the results are:
              [ 1                               ]
dv(x, y)/dx = [ 0                               ]
              [ a * b * cos(b * x) * sin(b * y) ]

              [ 0                               ]
dv(x, y)/dy = [ 1                               ]
              [ a * b * sin(b * x) * cos(b * y) ]
The normal vector is calculated as the cross product of these two gradient vectors:
     [ - a * b * cos(b * x) * sin(b * y) ]
vn = [ - a * b * sin(b * x) * cos(b * y) ]
     [ 1                                 ]
Then you normalize this vector, and vn / |vn| is your normal vector.
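As a concrete sketch (plain C++; a and b are the same constants as in the formula above, and nothing here comes from the asker's actual shader code):
#include <cmath>

struct Vec3 { double x, y, z; };

// Normal of the surface z = a * sin(b * x) * sin(b * y) at the point (x, y),
// built as the normalized cross product of the two tangent vectors
// (1, 0, dz/dx) and (0, 1, dz/dy).
Vec3 waveNormal(double a, double b, double x, double y) {
    double dzdx = a * b * std::cos(b * x) * std::sin(b * y);
    double dzdy = a * b * std::sin(b * x) * std::cos(b * y);
    Vec3 n = { -dzdx, -dzdy, 1.0 };
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}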
You need to get the derivatives in x and y so you can construct two vectors (1, 0, x') and (0, 1, y'), then take the cross product and normalize. That will be the normal.

Using camera rotation to make quad always appear in front of the camera (c++/opengl)

I'm trying to make a quad always appear in front of the camera. To start, I'm trying to align it with the camera on the x-z plane and make sure it always faces the camera. I used this code...
float ry = cameraRY+PI_2;
float dis = 12;
float sz = 4;
float x = cameraX-dis*cosf(ry);
float y = cameraY;
float z = cameraZ-dis*sinf(ry)+cosf(ry)*sz;
float x2 = x + sinf(ry)*sz;
float y2 = y + sz;
float z2 = z - cosf(ry)*sz;
glVertex3f(x,y,z);
glVertex3f(x2,y,z2);
glVertex3f(x2,y2,z2);
glVertex3f(x,y2,z);
But it didn't quite look right: it seemed that the quad was rotating around an invisible point that was itself rotating correctly around the camera. I don't really know how to change it or how to go about doing this, any help is appreciated!
Edit: Forgot to mention,
cameraX,cameraY,cameraZ are the camera's x,y,z positions
cameraRX and cameraRY are the camera's x and y rotations (Z rotation is always zero)
Check out this old tutorial on Lighthouse3D. It describes several "billboarding" techniques, which I believe are what you want.
Let P be your model view projection matrix, and c be the center of the quad you are trying to draw. You want to find a pair of vectors u, v that determine the edges of your quad,
Q = [ c-u-v, c-u+v, c+u+v, c+u-v ]
Such that u is pointing directly down in clip coordinates, while v is pointing to the right:
P(u) = (0, s, 0, 0)
P(v) = (s, 0, 0, 0)
Where s is the desired scale of your quad. Suppose that P is written in the block form
    [    M    | t ]
P = [---------+---]
    [ 0  0  1 | 0 ]
Then let m0, m1 be the first two rows of M. Now consider the equation we got for P(u); substituting and simplifying, we get:
               [ 0 ]
P(u) ~>  M u = [ s ]
               [ 0 ]
Which leads to the following solution for u, v:
u = s * m1 / |m1|^2
v = s * m0 / |m0|^2
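In code, that might look something like this (a sketch in C++; it assumes P is stored as 16 floats in OpenGL's column-major order, and the function and parameter names are just placeholders):
// Derive the quad edge vectors u and v from the combined model-view-projection
// matrix P, following u = s * m1 / |m1|^2 and v = s * m0 / |m0|^2 from above.
// Element (row, col) of a column-major 4x4 matrix is P[col*4 + row].
void billboardAxes(const float P[16], float s, float u[3], float v[3]) {
    const float m0[3] = { P[0], P[4], P[8] };  // row 0 of the upper-left 3x3 block M
    const float m1[3] = { P[1], P[5], P[9] };  // row 1 of M
    const float m0sq = m0[0]*m0[0] + m0[1]*m0[1] + m0[2]*m0[2];
    const float m1sq = m1[0]*m1[0] + m1[1]*m1[1] + m1[2]*m1[2];
    for (int i = 0; i < 3; ++i) {
        u[i] = s * m1[i] / m1sq;  // P maps u to (0, s, 0, ...) in clip space
        v[i] = s * m0[i] / m0sq;  // P maps v to (s, 0, 0, ...) in clip space
    }
}
The four corners of the quad are then c-u-v, c-u+v, c+u+v and c+u-v, as in Q above.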