gluProject and 2D display - opengl

I would like to display a 2D image at the 2D point calculated from a 3D point using gluProject().
So I have my 3D point, I use gluProject() to get its 2D coordinates, then I display my image at that point.
It works well, but I have a problem with the Z coordinate, which makes my image appear twice on the screen: where it should really appear, and at "the opposite".
Let's take an example: the camera is at (0,0,0) and looks at (0,0,-1), i.e. in the direction of negative Z.
I use the 3D point (0,0,-1) for my object; gluProject() gives me the center of my window as the 2D point, which is correct.
So when I look in the direction of (0,0,-1) my 2D image appears, and when I rotate it moves correctly until the point (0,0,-1) is no longer visible, at which point the 2D image goes off screen and is not displayed.
But when I look at (0,0,1), it also appears. Consequently, I get the same result (for the display of my 2D image) whether I use the 3D point (0,0,-1) or (0,0,1). I assume it has something to do with the Z coordinate that gluProject() returns, but I don't know what.
Here is my code (zNear=0.1 and zFar=1000):
GLint viewport[4];
GLdouble modelview[16];
GLdouble viewVector[3];
GLdouble projection[16];
GLdouble winX, winY, winZ;//2D point
GLdouble posX, posY, posZ;//3D point
posX=0.0;
posY=0.0;
posZ=-1.0;//the display is the same if posZ=1 which should not be the case
//get the matrices
glGetDoublev( GL_MODELVIEW_MATRIX, modelview );
viewVector[0]=modelview[8];
viewVector[1]=modelview[9];
viewVector[2]=modelview[10];
glGetDoublev( GL_PROJECTION_MATRIX, projection );
glGetIntegerv( GL_VIEWPORT, viewport );
int res=gluProject(posX,posY,posZ,modelview,projection,viewport,&winX,&winY,&winZ);
if(viewVector[0]*posX + viewVector[1]*posY + viewVector[2]*posZ < 0){
    displayMyImageAt(winX, windowHeight - winY);
}
So, what do I need to do to display my 2D image correctly, that is, to take Z into account?

gluProject() works correctly: the projection matrix projects points onto the screen plane whether they are in front of the camera or behind it. You need to check yourself whether the point is behind the camera. You can do this by taking the dot product of your view vector and the vector from the camera to the point: if it is less than 0, the point is behind you.
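Stripped of the GL calls, that test is just a sign check on the eye-space z of the point. A minimal sketch with hypothetical helper names, relying only on the facts that OpenGL's camera looks down negative z and that the modelview matrix is stored column-major:

```cpp
#include <cassert>

// Eye-space z of world point (x,y,z): the dot product of the modelview
// matrix's third row (column-major indices 2, 6, 10, 14) with (x,y,z,1).
double eyeSpaceZ(const double m[16], double x, double y, double z) {
    return m[2] * x + m[6] * y + m[10] * z + m[14];
}

// In OpenGL the camera looks down -z, so a point is in front of the
// camera exactly when its eye-space z is negative.
bool inFrontOfCamera(const double m[16], double x, double y, double z) {
    return eyeSpaceZ(m, x, y, z) < 0.0;
}
```

With an identity modelview (camera at the origin looking towards (0,0,-1)), this accepts (0,0,-1) and rejects (0,0,1), which is exactly the distinction the question is after; only draw the image when the test passes.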

What is your displayMyImageAt function?
Is it similar to the display function given in my code?
I am also trying to get the 2D coordinates of a point selected in 3D coordinates.
Here is pretty much all of my display code:
void display()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glMultMatrixd(_matrix);
glColor3f(0.5,0.5,0.5);
glPushMatrix(); //draw terrain
glColor3f(0.7,0.7,0.7);
glBegin(GL_QUADS);
glVertex3f(-3,-0.85,3);
glVertex3f(3,-0.85,3);
glVertex3f(3,-0.85,-3);
glVertex3f(-3,-0.85,-3);
glEnd();
glPopMatrix();
glPushMatrix();
myDefMesh.glDraw(meshModel);
glPopMatrix();
glutSwapBuffers();
}

Related

How to map window coordinates into object coordinates using vtk

I am writing a vtk program, in which I need to map window coordinates into object coordinates using vtk.
I have the OpenGL code as:
GLdouble modelview[16], projection[16];
GLint viewport[4];
GLfloat winZ;
GLdouble winX = 0.2;//some float values
GLdouble winY = 0.43;//some float values
double posX, posY, posZ;
glGetDoublev( GL_MODELVIEW_MATRIX, modelview );
glGetDoublev( GL_PROJECTION_MATRIX, projection );
glGetIntegerv( GL_VIEWPORT, viewport );
glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
gluUnProject(winX, winY, winZ, modelview, projection, viewport, &posX, &posY, &posZ);
I don't know how to do this using vtk. Any help will be highly appreciated. I also googled and found a way to get the modelview matrix like this:
renderWindow->GetRenderers()->GetFirstRenderer()->GetActiveCamera()->GetViewTransformMatrix();
but I have no idea how to map window coordinates into object coordinates in vtk
Yes, VTK can map screen coordinates into world coordinates.
You can adapt the following code to your needs (below 2D case only):
// 1. Get the position in the widget.
int* clickPosition = this->GetInteractor()->GetEventPosition();
const int x = clickPosition[0];
const int y = clickPosition[1];
// 2. Transform screen coordinates to "world coordinates" i.e. into real coordinates.
vtkSmartPointer<vtkCoordinate> coordinate = vtkSmartPointer<vtkCoordinate>::New();
coordinate->SetCoordinateSystemToDisplay();
coordinate->SetValue(x, y, 0);
double* worldCoordinates = coordinate->GetComputedWorldValue(widget->GetRenderWindow()->GetRenderers()->GetFirstRenderer());
double worldX(worldCoordinates[0]), worldY(worldCoordinates[1]);
This is an ill-posed problem, because you can't recover depth information from a single 2D position: in the general case there is no unique solution.
But there are some options:
You have done a projection from object coordinates to screen coordinates, and you can keep the depth information somewhere in order to do the back projection.
You want to get the 3D position of an object on your screen, so you use a video game technique such as ray casting: send a ray from the camera and take the intersection between the ray and the object as the object position. It's implemented in vtk: https://blog.kitware.com/ray-casting-ray-tracing-with-vtk/
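The first option (keeping the depth) can be sketched without any VTK or GL context. Assuming an identity modelview-projection for simplicity (so NDC equals world coordinates), unprojecting is just the inverse of the viewport and depth-range mapping, and winZ is exactly the piece of information that makes the 3D point unique:

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

// Inverse of the viewport/depth-range mapping that gluUnProject applies
// before multiplying by the inverse MVP. With an identity MVP, the
// resulting NDC coordinates are already world coordinates.
Vec3 unprojectIdentity(double winX, double winY, double winZ,
                       const int viewport[4]) {
    Vec3 p;
    p.x = (winX - viewport[0]) / viewport[2] * 2.0 - 1.0;
    p.y = (winY - viewport[1]) / viewport[3] * 2.0 - 1.0;
    p.z = winZ * 2.0 - 1.0;  // window depth [0,1] -> NDC z [-1,1]
    return p;
}
```

For a real scene you would additionally multiply by the inverse of projection*modelview, which is the part gluUnProject (or VTK's vtkCoordinate with a z value) does for you.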

Proper gluLookAt for gluCylinder

I'm trying to draw a cylinder in a specific direction with gluCylinder. To specify the direction I use gluLookAt; however, like so many before me, I am not sure about the "up" vector and thus can't get the cylinder to point in the correct direction.
I've read from another SO answer that
The intuition behind the "up" vector in gluLookAt is simple: Look at anything. Now tilt your head 90 degrees. Where you are hasn't changed, the direction you're looking at hasn't changed, but the image in your retina clearly has. What's the difference? Where the top of your head is pointing to. That's the up vector.
It is a simple explanation but in the case of my cylinder I feel like the up vector is totally unimportant. Since a cylinder can be rotated around its axis and still look the same, a different up vector wouldn't change anything. So there should be infinitely many valid up vectors for my problem: all orthogonals to the vector from start point to end point.
So this is what I do:
I have the world coordinates of where the start-point and end-point of the cylinder should be, A_world and B_world.
I project them to viewport coordinates A_vp and B_vp with gluProject:
GLdouble A_vp[3], B_vp[3], up[3], model[16], projection[16];
GLint gl_viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, &model[0]);
glGetDoublev(GL_PROJECTION_MATRIX, &projection[0]);
glGetIntegerv(GL_VIEWPORT, gl_viewport);
gluProject(A_world[0], A_world[1], A_world[2], &model[0], &projection[0], &gl_viewport[0], &A_vp[0], &A_vp[1], &A_vp[2]);
gluProject(B_world[0], B_world[1], B_world[2], &model[0], &projection[0], &gl_viewport[0], &B_vp[0], &B_vp[1], &B_vp[2]);
I call glOrtho to reset the camera to its default position: Negative z into picture, x to the right, y up:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, vp_edgelen, vp_edgelen, 0, 25, -25);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
I translate to coordinate A_vp, calculate the up vector as a normal to the vector from A_vp to B_vp, and specify the view with gluLookAt:
glTranslatef(A_vp[0], gl_viewport[2] - A_vp[1], A_vp[2]);
glMatrixMode(GL_MODELVIEW);
GLdouble up[] = {A_vp[1] * B_vp[2] - A_vp[2] * B_vp[1],
                 A_vp[2] * B_vp[0] - A_vp[0] * B_vp[2],
                 A_vp[0] * B_vp[1] - A_vp[1] * B_vp[0]};
gluLookAt(0, 0, 0,
B_vp[0], gl_viewport[2] - B_vp[1], B_vp[2],
up[0], up[1], up[2]);
I draw the cylinder with gluCylinder:
GLUquadricObj *gluCylObj = gluNewQuadric();
gluQuadricNormals(gluCylObj, GLU_SMOOTH);
gluQuadricOrientation(gluCylObj, GLU_OUTSIDE);
gluCylinder(gluCylObj, 10, 10, 50, 10, 10);
Here is the unexpected result:
Since the cylinder starts at the correct position and since I was able to draw a circle at position B_vp, the only thing that must be wrong is the "up" vector in gluLookAt, right?
gluLookAt() is not necessary here: it positions the camera, not the object. It is enough to rotate the current z-axis so that it points in the direction the cylinder should point.
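That rotation can be computed directly. A sketch with hypothetical names: the axis is the cross product of +z with the desired direction, and the angle is the angle between them, expressed in the angle/axis form that glRotated() expects (the degenerate case of a direction anti-parallel to +z is ignored here):

```cpp
#include <cassert>
#include <cmath>

struct AxisAngle { double angle, x, y, z; };

// Rotation taking the +z axis (gluCylinder's extrusion axis) onto the
// direction (dx,dy,dz): axis = z x d, angle = acos(z . d / |d|).
AxisAngle rotateZTo(double dx, double dy, double dz) {
    const double kPi = 3.14159265358979323846;
    double len = std::sqrt(dx * dx + dy * dy + dz * dz);
    AxisAngle r;
    r.x = -dy;  // (0,0,1) x (dx,dy,dz) = (-dy, dx, 0)
    r.y = dx;
    r.z = 0.0;
    r.angle = std::acos(dz / len) * 180.0 / kPi;  // degrees for glRotated
    return r;
}
```

Then, instead of gluLookAt: translate to the start point A, call glRotated(r.angle, r.x, r.y, r.z) with r = rotateZTo(B - A), and draw the cylinder; no up vector is needed, which matches the observation that any up vector around the axis would look the same.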

How to get the position of an orbiting sphere

I'm trying to get the position of a sphere that is rotating around an idle object in my opengl application. This is how I perform the orbiting:
glTranslatef(positions[i].getPosX(), //center of rotation (yellow ball)
positions[i].getPosY(),
positions[i].getPosZ());
glRotatef(rotation_angle,0.0,1.0,0.0); //angle of rotation
glTranslatef(distance[i].getPosX(), //distance from the center of rotation
distance[i].getPosY(),
distance[i].getPosZ());
Variable rotation_angle loops from 0 to 360 endlessly. In the distance vector I'm only changing the z-distance of the object, for example let's say the idle object is in (0,0,0), the distance vector could be (0,0,200).
OpenGL just draws stuff; it doesn't maintain a "scene". So you'll have to do the math yourself. This is as simple as multiplying the vector (0,0,0,1) by the current modelview-projection matrix and performing the viewport remapping. This has been conveniently packed up in the GLU (not OpenGL) function gluProject.
Since you're using the (old-and-busted) fixed-function pipeline, the procedure goes roughly like this:
GLdouble x, y, z;
GLdouble win_x, win_y, win_z;
GLdouble mv[16], prj[16];
GLint vp[4];
glGetDoublev(GL_MODELVIEW_MATRIX, mv);
glGetDoublev(GL_PROJECTION_MATRIX, prj);
glGetIntegerv(GL_VIEWPORT, vp);
gluProject(x, y, z, mv, prj, vp, &win_x, &win_y, &win_z);
Note that due to OpenGL's stateful nature the value of the modelview and projection matrix and the viewport at the moment of drawing the sphere matters. Retrieving those values at any other moment may produce very different data and result in an outcome inconsistent with the drawing.
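For this particular transform chain the modelview math is also small enough to do by hand. A sketch with a hypothetical helper, using only the fact that glRotatef about the y axis maps x to x·cos(a) + z·sin(a) and z to -x·sin(a) + z·cos(a):

```cpp
#include <cassert>
#include <cmath>

struct V3 { double x, y, z; };

// World position of the orbiting sphere: apply translate(center),
// rotateY(angle), translate(offset) to the origin, i.e.
// position = center + R_y(angle) * offset.
V3 orbitPosition(V3 center, double angleDeg, V3 offset) {
    const double a = angleDeg * 3.14159265358979323846 / 180.0;
    V3 p;
    p.x = center.x + std::cos(a) * offset.x + std::sin(a) * offset.z;
    p.y = center.y + offset.y;
    p.z = center.z - std::sin(a) * offset.x + std::cos(a) * offset.z;
    return p;
}
```

With center (0,0,0), offset (0,0,200) and rotation_angle = 90 this yields (200,0,0), matching where the glTranslatef/glRotatef sequence in the question places the sphere; feeding the same point to gluProject then gives its window position.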

3D Orthographic Projection

I want to construct an orthographic projection to make my sun's shadow map look right. Unfortunately, the code does not achieve the results I get with the regular perspective projection. Here's my code for setting up the projection matrix:
glViewport (0, 0, (GLsizei)shadowMap.x, (GLsizei)shadowMap.y);
glMatrixMode (GL_PROJECTION);
glLoadIdentity();
//suns use this
glOrtho(0, shadowMap.x, 0, shadowMap.y, 0.1,1000.0);
glMatrixMode (GL_MODELVIEW);
glLoadIdentity();
From what I understand that should be correct. However, after a quick debug render, I noticed that the scene was rendered in a tiny portion of the screen. After some experimentation I found that changing the shadowMap values in glOrtho made it cover the whole texture, but then it was really zoomed in. In my perspective projection I use 0.1 and 1000.0 for near and far; I've experimented with those here as well, and they do change the results, but not in the way I want. The only time I get correct results is when the values are kept at shadowMap.x and shadowMap.y, but as I said, the scene then renders really small.
What am I doing wrong here? Everything I've read said that the initial code is correct.
EDIT:
Apparently it wasn't clear that this is for the shadow map pass, the regular pass is rendered with perspective and is fine.
Shadow mapping is a multi-pass algorithm. You are referring to the first pass (point 1):
1. Render the scene from the light source's view into a depth texture.
2. Render the scene from the camera's view with depth-texture projection mapping enabled: the current fragment's xy+depth is transformed into light projection coordinates and compared to the depth stored in the depth texture; if both depths are equal (or nearly equal) the fragment is considered lit, otherwise shadowed.
So everything is fine with your code: store the depth values from this pass in the depth texture and proceed to point 2.
One thing you should think about is how wide an area (in world space) your light should cover. The extents you pass to glOrtho are world units, not pixels: with shadowMap.x and shadowMap.y you cover a shadowMap.x by shadowMap.y world-unit area, regardless of the texture resolution, which is why your scene occupies only a small part of it.
Consider we have a sphere at (0,0,0) with radius 5.0.
We have a depth texture of 256x256 dims.
We want to project the light along Z onto the sphere.
glViewport(0, 0, 256, 256);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-5.0, 5.0, -5.0, 5.0, -1000, 1000);//extents in world units, wide enough for the sphere
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
//flip z, we cast light from above
glRotatef(180, 1, 0, 0);
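To make the coverage concrete, the glOrtho extents can be derived from a bounding sphere of the scene. A hypothetical sketch (not from the answer above): fit the ortho box to a sphere of radius r whose center lies lightDistance units in front of the light:

```cpp
#include <cassert>

struct OrthoExtents { double left, right, bottom, top, zNear, zFar; };

// Ortho frustum just large enough to contain a bounding sphere of
// radius r centered lightDistance units along the light's view axis.
OrthoExtents fitOrthoToSphere(double r, double lightDistance) {
    OrthoExtents o;
    o.left = -r;   o.right = r;
    o.bottom = -r; o.top = r;
    o.zNear = lightDistance - r;
    o.zFar  = lightDistance + r;
    return o;
}
```

For example, fitOrthoToSphere(5.0, 100.0) gives a 10x10 box with zNear=95 and zFar=105; passing those values to glOrtho makes the light cover exactly the sphere, independent of the shadow-map resolution.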
I don't see where you set the light modelview matrix. You can render a shadow map using the code below:
double* getOrthoMVPmatrix(vector3 position, vector3 lookat,
                          GLdouble left, GLdouble right,
                          GLdouble bottom, GLdouble top,
                          GLdouble nearVal, GLdouble farVal)
{
    double projection[16];
    double modelView[16];
    double *matrix = new double[16];
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();//save the caller's projection matrix
    glLoadIdentity();
    glOrtho(left, right, bottom, top, nearVal, farVal);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();//save the caller's modelview matrix
    glEnable(GL_DEPTH_TEST);
    glLoadIdentity();
    gluLookAt(position.x, position.y, position.z, lookat.x, lookat.y, lookat.z, 0, 1, 0);
    glGetDoublev(GL_MODELVIEW_MATRIX, modelView);
    glPopMatrix();
    matrix = projection*modelView;//needs a multiplication operator for double[16]
    return matrix;
}
void renderShadowMap(void)
{
//"Bind your depth framebuffer"
glViewport(0,0,"Your SM SIZE","Your SM SIZE");
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
double *MVP = getOrthoMVPmatrix( "your light position","your light position" + "your light direction",
-"left","right",
"bottom","top",
"near","far"
) ;
//"call glUseProgram to bind your shader"
// set the uniform MVP we made "
//"Draw your scene "
glViewport(0,0,"screen width","screen height");
}
You will need to write a multiplication operator for a double[16] array. In my case I made a matrix class, but do it your way.
Don't forget to call glCullFace(GL_BACK) before drawing your real scene, and to free MVP afterwards.
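A minimal sketch of that multiplication, assuming the same column-major layout glGetDoublev uses (so out = a*b applies b first, matching MVP = projection * modelView):

```cpp
#include <cassert>

// Column-major 4x4 matrix product: out = a * b. Element (row, col) of a
// column-major matrix m lives at m[col * 4 + row].
void mul4x4(const double a[16], const double b[16], double out[16]) {
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            double s = 0.0;
            for (int k = 0; k < 4; ++k)
                s += a[k * 4 + row] * b[col * 4 + k];
            out[col * 4 + row] = s;
        }
}
```

In the function above you would call mul4x4(projection, modelView, matrix) instead of the pseudo-expression projection*modelView.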

OpenGL gluProject() - strange results

I'm trying to use the gluProject function to get a point's coordinates in the 2D window after "rendering". The problem is that I get strange results. For example: I've got a point with x=16.5; when I use gluProject on it I get x=-6200.0.
If I understand gluProject correctly, I should get the pixel position of that point on my screen after "rendering" - am I right? How can I convert that strange result into on-screen pixel coordinates?
Thank you for any help!
Code I use (by "sum1stolemyname"):
GLdouble modelview[16], projection[16];
GLint viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
glGetIntegerv(GL_VIEWPORT, viewport);
double tx, ty, tz;
for(int i = 0; i < VertexCount; i++)
{
    gluProject(vertices[i].x, vertices[i].y, vertices[i].z,
               modelview, projection, viewport,
               &tx, &ty, &tz);
}
Yeah it does, but unfortunately it does it all the way out to the far plane, so what you really get is a 'ray' into the world; it does not give you the actual position of the pixel you are drawing in 3D space. What you can do is make a line from the screen through the point you get from gluProject and intersect it with your geometry to get the point in 3D space. Another option is to modify your input matrices and viewport so that the far plane is at a more reasonable distance.
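A sanity check also helps when the numbers look wild: for an identity modelview and projection, gluProject reduces to the perspective divide plus the viewport mapping, so the result must land inside the viewport. A sketch of that reduced math (hypothetical helper, not GLU itself):

```cpp
#include <cassert>

// What gluProject computes once the point is in clip coordinates; here
// clip == world because the MVP is the identity, so w stays 1.
bool projectIdentity(double x, double y, double z, const int vp[4],
                     double* winX, double* winY, double* winZ) {
    const double w = 1.0;
    if (w == 0.0) return false;          // gluProject returns GL_FALSE here
    *winX = vp[0] + (x / w * 0.5 + 0.5) * vp[2];
    *winY = vp[1] + (y / w * 0.5 + 0.5) * vp[3];
    *winZ = z / w * 0.5 + 0.5;           // window depth in [0, 1]
    return true;
}
```

For a point in front of the camera the result lies inside the viewport, so a value like x=-6200 in an 800-pixel-wide window means the inputs were wrong, typically matrices fetched at a different moment than the draw, or arguments passed incorrectly, rather than a unit problem in the output.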