I'm trying to use the gluProject function to get a point's coordinates in the 2D window after "rendering". The problem is that I get strange results. For example: I've got a point with x=16.5. When I use gluProject on it I get x=-6200.0.
If I understand gluProject correctly, I should get the pixel position of that point on my screen after "rendering" - am I right? How can I convert that strange result into on-screen pixel coordinates?
Thank you for any help!
Code I use (by "sum1stolemyname"):
GLdouble modelview[16], projection[16];
GLint viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
glGetIntegerv(GL_VIEWPORT, viewport);
double tx, ty, tz;
for (int i = 0; i < VertexCount; i++)
{
    gluProject(vertices[i].x, vertices[i].y, vertices[i].z,
               modelview, projection, viewport,
               &tx, &ty, &tz);
}
Yes it does; unfortunately, it does it as far as the far plane, so you can construct a 'ray' into the world. It does not give you the actual position of the pixel you are drawing in 3D space. What you can do is make a line from the screen through the point you get from gluProject, then use that to find the intersection with your geometry and get the point in 3D space. Another option is to modify your input matrices and viewport so the far plane is at a more reasonable distance.
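A minimal sketch of that ray construction, assuming the modelview, projection and viewport arrays from the question's code are still valid and (tx, ty) is the window position produced by gluProject:

GLdouble nearX, nearY, nearZ, farX, farY, farZ;
// Un-project the same window (x, y) at the near plane (winZ = 0) and the far plane (winZ = 1).
gluUnProject(tx, ty, 0.0, modelview, projection, viewport, &nearX, &nearY, &nearZ);
gluUnProject(tx, ty, 1.0, modelview, projection, viewport, &farX, &farY, &farZ);
// The ray starts at (nearX, nearY, nearZ) and runs toward (farX, farY, farZ);
// intersecting it with your geometry recovers the 3D position behind that pixel.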
I am working on a VTK program, in which I need to map window coordinates into object coordinates using VTK.
I have the OpenGL code as:
GLdouble modelview[16], projection[16];
GLint viewport[4];
GLfloat winZ;
double winX = 0.2;  // some float values
double winY = 0.43; // some float values
double posX, posY, posZ;
glGetDoublev( GL_MODELVIEW_MATRIX, modelview );
glGetDoublev( GL_PROJECTION_MATRIX, projection );
glGetIntegerv( GL_VIEWPORT, viewport );
glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
gluUnProject(winX, winY, winZ, modelview, projection, viewport, &posX, &posY, &posZ);
I don't know how to do this using VTK. Any help will be highly appreciated. I also googled and found a solution for getting the model view matrix like this:
renderWindow->GetRenderers()->GetFirstRenderer()->GetActiveCamera()->GetViewTransformMatrix();
but I have no idea how to map window coordinates into object coordinates in VTK.
Yes, VTK can map screen coordinates into world coordinates.
You can adapt the following code to your needs (the code below handles the 2D case only):
// 1. Get the position in the widget.
int* clickPosition = this->GetInteractor()->GetEventPosition();
const int x = clickPosition[0];
const int y = clickPosition[1];
// 2. Transform screen coordinates to "world coordinates" i.e. into real coordinates.
vtkSmartPointer<vtkCoordinate> coordinate = vtkSmartPointer<vtkCoordinate>::New();
coordinate->SetCoordinateSystemToDisplay();
coordinate->SetValue(x, y, 0);
double* worldCoordinates = coordinate->GetComputedWorldValue(widget->GetRenderWindow()->GetRenderers()->GetFirstRenderer());
double worldX(worldCoordinates[0]), worldY(worldCoordinates[1]);
This is an ill-posed problem, because you can't recover the depth information from a single 2D position. In the general case there isn't a unique solution.
But there exist some options :
You have already done a projection from object coordinates to screen coordinates, so you can keep the depth information somewhere and use it for the back projection.
You want to get the 3D position of an object on your screen. In that case you can use video game techniques such as ray casting: the idea is to send a ray from the camera and take the intersection between the ray and the object as the object position. This is implemented in VTK: https://blog.kitware.com/ray-casting-ray-tracing-with-vtk/.
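A related, minimal sketch using VTK's built-in picking, assuming a vtkRenderer* named renderer and the display coordinates x, y from the interactor as in the first answer (the tolerance value is illustrative):

#include <vtkSmartPointer.h>
#include <vtkCellPicker.h>

// Cast a ray through the display coordinate (x, y) and return the world-space hit point.
vtkSmartPointer<vtkCellPicker> picker = vtkSmartPointer<vtkCellPicker>::New();
picker->SetTolerance(0.0005);
picker->Pick(x, y, 0, renderer);
double picked[3];
picker->GetPickPosition(picked); // world coordinates of the intersection with the picked geometry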
I'm trying to get the position of a sphere that is rotating around an idle object in my opengl application. This is how I perform the orbiting:
glTranslatef(positions[i].getPosX(), //center of rotation (yellow ball)
positions[i].getPosY(),
positions[i].getPosZ());
glRotatef(rotation_angle,0.0,1.0,0.0); //angle of rotation
glTranslatef(distance[i].getPosX(), //distance from the center of rotation
distance[i].getPosY(),
distance[i].getPosZ());
Variable rotation_angle loops from 0 to 360 endlessly. In the distance vector I'm only changing the z-distance of the object, for example let's say the idle object is in (0,0,0), the distance vector could be (0,0,200).
OpenGL just draws stuff. It doesn't maintain a "scene". So you'll have to do all the math yourself. This is as simple as multiplying the vector (0,0,0,1) with the current modelview-projection matrix and performing the viewport remapping. This has been conveniently packed up in the GLU (not OpenGL) function gluProject.
Since you're using the (old-and-busted) fixed-function pipeline, the procedure goes roughly like this:
GLdouble x,y,z;
GLdouble win_x, win_y, win_z;
GLdouble mv[16], prj[16];
GLint vp[4];
glGetDoublev(GL_MODELVIEW_MATRIX, mv);
glGetDoublev(GL_PROJECTION_MATRIX, prj);
glGetIntegerv(GL_VIEWPORT, vp);
gluProject(x, y, z, mv, prj, vp, &win_x, &win_y, &win_z);
Note that due to OpenGL's stateful nature, the values of the modelview matrix, the projection matrix and the viewport at the moment of drawing the sphere are what matter. Retrieving those values at any other moment may produce very different data and result in an outcome inconsistent with the drawing.
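For example, a minimal sketch that captures the matrices right where the sphere is drawn, using the transforms from the question (drawSphere is an illustrative placeholder for whatever actually renders the orbiting sphere):

glPushMatrix();
glTranslatef(positions[i].getPosX(), positions[i].getPosY(), positions[i].getPosZ());
glRotatef(rotation_angle, 0.0f, 1.0f, 0.0f);
glTranslatef(distance[i].getPosX(), distance[i].getPosY(), distance[i].getPosZ());

// Query the state in effect for this exact draw call.
GLdouble mv[16], prj[16];
GLint vp[4];
glGetDoublev(GL_MODELVIEW_MATRIX, mv);
glGetDoublev(GL_PROJECTION_MATRIX, prj);
glGetIntegerv(GL_VIEWPORT, vp);

// Project the sphere's local origin, i.e. its center, into window coordinates.
GLdouble win_x, win_y, win_z;
gluProject(0.0, 0.0, 0.0, mv, prj, vp, &win_x, &win_y, &win_z);

drawSphere(); // illustrative placeholder
glPopMatrix();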
I would like to display a 2D image at a 2D point calculated from a 3D point using gluProject().
So I have my 3D point, I use gluProject to get its 2D coordinates, then I display my image at this point.
It works well, but I have a problem with the Z coordinate, which makes my image appear twice on the screen: where it should really appear, and at "the opposite".
Let's take an example: the camera is at (0,0,0) and I look at (0,0,-1), so in the direction of negative Z.
I use the 3D point (0,0,-1) for my object; gluProject gives me the center of my window as the 2D point, which is the right point.
So when I look in the direction of (0,0,-1) my 2D image appears; when I rotate, it moves correctly until the point (0,0,-1) is no longer visible, which makes the 2D image go off screen and not be displayed.
But when I look at (0,0,1), it also appears. Consequently, I get the same result (for the display of my 2D image) whether I use the 3D point (0,0,-1) or (0,0,1), for example. I assume it has something to do with the Z coordinate that gluProject returns, but I don't know what.
Here is my code (my zNear=0.1 and zFar=1000):
GLint viewport[4];
GLdouble modelview[16];
GLdouble viewVector[3];
GLdouble projection[16];
GLdouble winX, winY, winZ;//2D point
GLdouble posX, posY, posZ;//3D point
posX=0.0;
posY=0.0;
posZ=-1.0;//the display is the same if posZ=1 which should not be the case
//get the matrices
glGetDoublev( GL_MODELVIEW_MATRIX, modelview );
viewVector[0]=modelview[8];
viewVector[1]=modelview[9];
viewVector[2]=modelview[10];
glGetDoublev( GL_PROJECTION_MATRIX, projection );
glGetIntegerv( GL_VIEWPORT, viewport );
int res=gluProject(posX,posY,posZ,modelview,projection,viewport,&winX,&winY,&winZ);
if(viewVector[0]*posX+viewVector[1]*posY+viewVector[2]*posZ<0){
displayMyImageAt(winX,windowHeight-winY);
}
So, what do I need to do to display my 2D image correctly, that is to say, to take Z into account?
gluProject works correctly; your projection matrix projects points onto the screen plane. You should check whether the point is behind the camera: calculate the dot product of your view vector and the vector to the point, and if it is less than 0 the point is behind.
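One concrete way to implement that check, sketched under the assumption that the modelview matrix has already been retrieved as in the question's code, is to look at the sign of the point's eye-space Z:

// modelview is column-major, so its third row sits at indices 2, 6, 10, 14.
GLdouble eyeZ = modelview[2]  * posX
              + modelview[6]  * posY
              + modelview[10] * posZ
              + modelview[14];
// In OpenGL eye space the camera looks down -Z, so the point is in front of the camera only when eyeZ < 0.
if (eyeZ < 0.0) {
    displayMyImageAt(winX, windowHeight - winY);
}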
What is your displayMyImageAt function? Is it similar to the display function given in my code? I am also trying to get the 2D coordinates of a selected 3D point. Here is pretty much all of my display function:
void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glMultMatrixd(_matrix);
    glColor3f(0.5, 0.5, 0.5);

    glPushMatrix(); // draw terrain
    glColor3f(0.7, 0.7, 0.7);
    glBegin(GL_QUADS);
    glVertex3f(-3, -0.85, 3);
    glVertex3f(3, -0.85, 3);
    glVertex3f(3, -0.85, -3);
    glVertex3f(-3, -0.85, -3);
    glEnd();
    glPopMatrix();

    glPushMatrix();
    myDefMesh.glDraw(meshModel);
    glPopMatrix();

    glutSwapBuffers();
}
I'm writing an MFC C++ application that uses OpenGL. The program allows drawing and manipulating objects in 3D. Right now I want to find the coordinates, in the same coordinate space that my objects are drawn in, of any point I click with my mouse on the screen.
So far I've been using a combination of glReadPixels and gluUnProject and it has been working but only when I click my mouse somewhere where an object has already been drawn. If I click anywhere outside my object the coordinates obtained are completely off.
So I was wondering how to change my code so that I can find the coordinates in the coordinate space my objects are in anywhere on the screen. Here's the code I've been using:
GLint viewport[4];
GLdouble modelviewMatrix[16], projectionMatrix[16];
GLdouble ox, oy, oz; // the coordinates I need
GLfloat winZ = 0.0;
::glGetIntegerv(GL_VIEWPORT, viewport);
::glGetDoublev(GL_PROJECTION_MATRIX, projectionMatrix);
::glGetDoublev(GL_MODELVIEW_MATRIX, modelviewMatrix);
GLfloat winX = (float)point.x;//point.x and point.y are the mouse coordinates
GLfloat winY = (float)viewport[3] - (float)point.y;
::glReadPixels( winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
gluUnProject((GLdouble)winX, (GLdouble)winY, (GLdouble)winZ, modelviewMatrix, projectionMatrix, viewport, &ox, &oy, &oz);
gluUnProject takes window-space coordinates and un-projects them through the inverse of the modelview, projection and viewport transformations. It has no clue whether or not the coordinates correspond to an existing object.
When you clear the depth buffer, it is initialized everywhere with a value that is read as 1.0 with glReadPixels.
When visible fragments of an object are drawn they will pass the depth test and will override the depth value with a smaller value for every pixel intersecting these fragments.
This means that every time you read a pixel from the depth buffer with a value of 1.0, nothing visible has been drawn at that pixel, and those are the clicks for which the result you obtain is completely off.
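One common workaround, sketched under the assumption that clicks on empty pixels should land on a fixed plane (here z = 0 in object space, chosen only for illustration), is to detect the cleared depth value and intersect the picking ray with that plane instead:

if (winZ >= 1.0f) {
    // Nothing was drawn under the cursor: un-project the pixel at the near and far planes
    // to build a ray, then intersect it with the z = 0 plane (assumes the ray is not parallel to it).
    GLdouble nx, ny, nz, fx, fy, fz;
    gluUnProject(winX, winY, 0.0, modelviewMatrix, projectionMatrix, viewport, &nx, &ny, &nz);
    gluUnProject(winX, winY, 1.0, modelviewMatrix, projectionMatrix, viewport, &fx, &fy, &fz);
    double t = -nz / (fz - nz); // parameter where the ray crosses z = 0
    ox = nx + t * (fx - nx);
    oy = ny + t * (fy - ny);
    oz = 0.0;
}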
I have a scene consisting of a lot of points which I drew using
glBegin(GL_POINTS);
glVertex3f(x[i],y[i],z[i]); // the points are displayed properly ..
glEnd();
What I wish to do is to be able to click on one of the points on the scene using the mouse and get its 3-D coordinate.
I have seen other threads suggesting the use of:
glReadPixels((GLdouble)mouse_x,
(GLdouble) (rect.Height()-mouse_y-1),1, 1,GL_DEPTH_COMPONENT, GL_FLOAT, &Z);
and use the value of Z in
gluUnProject(mouse_x, mouse_y, Z, modelview, projection, viewport, &posX, &posY, &posZ);
but I always get z=0 as the output. Is this because these are points and not a polygon? Is there any way to get the z coordinate?
Unfortunately, it can't be done from the 2D position alone: any (x, y) point on the screen can refer to any point along a given ray into the scene.
Given that you're drawing points, you probably want to use selection mode to pick a specific point, and then determine the coordinates of that point.
I think you are calling glReadPixels the wrong way. x, y, width and height must be GLint, not double; this has nothing to do with the format of the result glReadPixels returns. So you should pass the integer window coordinates of the mouse position and a 1x1 region to glReadPixels, e.g. glReadPixels(mouse_x, rect.Height() - mouse_y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &Z);. If mouse_x and mouse_y are in the range [0.0 .. 1.0], you need to scale them properly, e.g. rect.Width() * mouse_x and rect.Height() * (1.0 - mouse_y). If you get this right, IMO your code should work as expected.
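Putting it together, a minimal sketch of reading the depth at the clicked pixel and un-projecting it, reusing mouse_x, mouse_y and rect from the question and assuming they are already in window pixels:

GLint vp[4];
GLdouble mv[16], prj[16];
GLdouble posX, posY, posZ;
GLfloat z;
glGetIntegerv(GL_VIEWPORT, vp);
glGetDoublev(GL_MODELVIEW_MATRIX, mv);
glGetDoublev(GL_PROJECTION_MATRIX, prj);
// Integer window coordinates, with y flipped because OpenGL's origin is bottom-left.
GLint wx = (GLint)mouse_x;
GLint wy = (GLint)(rect.Height() - mouse_y - 1);
glReadPixels(wx, wy, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &z);
// z == 1.0 means the click hit no geometry (cleared depth); otherwise un-project the hit point.
gluUnProject(wx, wy, z, mv, prj, vp, &posX, &posY, &posZ);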