LWJGL 3D picking - opengl

So I have been trying to understand the concept of 3D picking but as I can't find any video guides nor any concrete guides that actually speak English, it is proving to be very difficult. If anyone is well experienced with 3D picking in LWJGL, could you give me an example with a line-by-line explanation of what everything means? I should mention that all I am trying to do is shoot a ray out of the center of the screen (not where the mouse is) and have it detect just a normal cube (rendered as 6 QUADS).

Though I am not an expert with 3D picking, I have done it before, so I will try to explain.
You mentioned that you want to shoot a ray rather than go by mouse position; as long as this ray goes straight out from the camera through a fixed point on the screen (here, the center), this method still works, just the same as it would for any screen coordinate. If you actually wish to shoot a ray angled off in some arbitrary direction, things get a little more complicated, but I will not go into that (yet).
Now how about some code?
Object* picking3D(int screenX, int screenY){
    // Disable any lighting or textures so every object is drawn in one flat colour
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);

    // Render the scene, colouring each object by its index
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    orientateCamera();
    for(int i = 0; i < objectListSize; i++){
        GLubyte blue  = i % 256;
        GLubyte green = (i / 256) % 256;
        GLubyte red   = (i / 256 / 256) % 256;
        glColor3ub(red, green, blue);
        orientateObject(i);
        renderObject(i);
    }

    // Read back the pixel under the requested screen coordinate
    GLubyte pixelColors[3];
    glReadPixels(screenX, screenY, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixelColors);

    // Calculate the index by undoing the encoding used above
    int index = pixelColors[0]*256*256 + pixelColors[1]*256 + pixelColors[2];

    // Return the object
    return getObject(index);
}
Code Notes:
screenX is the x location of the pixel, and screenY is the y location of the pixel (in screen coordinates)
orientateCamera() simply calls any glTranslate, glRotate, glMultMatrix, etc. needed to position (and rotate) the camera in your scene
orientateObject(i) does the same as orientateCamera, except for object 'i' in your scene
when I 'calculate the index', I am really just undoing the math I performed during the rendering to get the index back
The idea behind this method is that each object is rendered exactly as the user sees it, except that each model is drawn in a single solid colour. Then you check the colour of the pixel at the requested screen coordinate, and whichever model that colour is indexed to is your object!
I do recommend, however, adding a check for the background color (or your glClearColor), just in case you don't actually hit any objects.
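Since the original question is about LWJGL, here is a rough sketch of the read-back, decode, and background check in LWJGL 2 style Java. It is not the exact code above: it assumes object i was just rendered in the solid colour (i + 1) so that pure black, the assumed glClearColor, can never collide with an object colour, and GameObject/getObject() are placeholders for your own scene types (imports: java.nio.ByteBuffer, org.lwjgl.BufferUtils, org.lwjgl.opengl.GL11, org.lwjgl.opengl.Display).
public static GameObject pick(int screenX, int screenY) {
    // Read the single pixel under (screenX, screenY).
    // glReadPixels uses a lower-left origin, so flip y if screenY is measured from the top.
    ByteBuffer pixel = BufferUtils.createByteBuffer(3);
    GL11.glReadPixels(screenX, Display.getHeight() - screenY, 1, 1,
            GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, pixel);
    int r = pixel.get(0) & 0xFF;
    int g = pixel.get(1) & 0xFF;
    int b = pixel.get(2) & 0xFF;
    int encoded = r * 256 * 256 + g * 256 + b;
    if (encoded == 0) {
        return null;                // background: nothing was hit
    }
    return getObject(encoded - 1);  // undo the +1 used while rendering; getObject() is your own lookup
}
For the centre-of-screen ray the question asks about, this would be called as pick(Display.getWidth() / 2, Display.getHeight() / 2).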
Please ask for further explanation if necessary.

Related

Drawing an OpenGL overlay using SwapBuffers interception

I'm trying to make a library that would allow me to draw my overlay on top of the content of a game window that uses OpenGL, by intercepting the call to the SwapBuffers function. For interception I use Microsoft Detours.
BOOL WINAPI __SwapBuffers(HDC hDC)
{
    HGLRC oldContext = wglGetCurrentContext();
    if (!context) // Global variable
    {
        context = wglCreateContext(hDC);
    }
    wglMakeCurrent(hDC, context);
    // Drawing
    glRectf(0.1F, 0.5F, 0.2F, 0.6F);
    wglMakeCurrent(hDC, oldContext);
    return _SwapBuffers(hDC); // Call the original SwapBuffers
}
This code works, but occasionally, when I move my mouse, my overlay blinks. Why? Some forums have said that such an implementation can significantly reduce FPS. Is there any better alternative? Also, how do I correctly translate a window position to an OpenGL position? For example, with width = 1366, x = 1366 maps to 1 and x = 0 maps to -1. How do I get the value for, say, 738? What about the height?
To translate a screen coordinate to a normalized coordinate you need to know the screen width and screen height; it is a linear mapping from [0, screenwidth] to [-1, 1] and from [0, screenheight] to [-1, 1]. It is as simple as follows:
int screenwidth, screenheight;
//...
screenwidth = 1366;
screenheight = 738;
//...
float screenx, screeny;
float x = (screenx/(float)screenwidth)*2-1;
float y = (screeny/(float)screenheight)*2-1;
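To answer the concrete example from the question: with screenwidth = 1366, a screen x of 738 maps to (738 / 1366) * 2 - 1 ≈ 0.08, x = 0 maps to -1 and x = 1366 maps to 1. The height is handled the same way with screenheight; just note that window y usually grows downward while OpenGL's y grows upward, so you may also need to negate the y result.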
The problem with z = 0:
glRectf renders at z = 0, which is a problem because that plane is infinitely near: OpenGL still treats what you draw as 3D world-space geometry, and the untransformed screen plane lies at (x, y, 1). OpenGL almost always works with 3D coordinates.
There are two ways to tackle this problem:
you should prefer the functions that take a z component, because OpenGL does not render correctly at z = 0 and z = 1 corresponds to the normalized screen space,
or you add a glTranslatef(0, 0, 1); to get to the normalized screen space.
Remember to disable depth testing when rendering 2D in screen space, and to reset the modelview matrix.

Rotating 2D camera to space ship's heading in OpenGL (OpenTK)

The game is a top-down 2D space ship game -- think of "Asteroids."
Box2Dx is the physics engine and I extended the included DebugDraw, based on OpenTK, to draw additional game objects. Moving the camera so it's always centered on the player's ship and zooming in and out work perfectly. However, I really need the camera to rotate along with the ship so it's always facing in the same direction. That is, the ship will appear to be frozen in the center of the screen and the rest of the game world rotates around it as it turns.
I've tried adapting code samples, but nothing works. The best I've been able to achieve is a skewed and cut-off rendering.
Render loop:
// Clear.
Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_DEPTH_BUFFER_BIT);
// other rendering omitted (planets, ships, etc.)
this.OpenGlControl.Draw();
Update view -- centers on ship and should rotate to match its angle. For now, I'm just trying to rotate it by an arbitrary angle for a proof of concept, but no dice:
public void RefreshView()
{
    int width = this.OpenGlControl.Width;
    int height = this.OpenGlControl.Height;
    Gl.glViewport(0, 0, width, height);

    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glLoadIdentity();

    float ratio = (float)width / (float)height;
    Vec2 extents = new Vec2(ratio * 25.0f, 25.0f);
    extents *= viewZoom;

    // rotate the view
    var shipAngle = 180.0f; // just a test angle for proof of concept
    Gl.glRotatef(shipAngle, 0, 0, 0);

    Vec2 lower = this.viewCenter - extents;
    Vec2 upper = this.viewCenter + extents;

    // L/R/B/T
    Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);

    Gl.glMatrixMode(Gl.GL_MODELVIEW);
}
Now, I'm obviously doing this wrong. Degrees of 0 and 180 will keep it right-side-up or flip it, but any other degree will actually zoom it in/out or result in only blackness, nothing rendered. Below are examples:
If the ship angle is 0.0f, the game world renders as expected.
A degree of 180.0f flips it vertically... seems promising.
A degree of 45 zooms out and doesn't rotate at all... that's odd.
A degree of 90 returns all black, with nothing rendered at all.
Please help!
Firstly, the 2nd to 4th arguments of glRotatef are the rotation axis, so state them correctly, as @pingul already pointed out; (0, 0, 0) is not a valid axis.
More importantly, the rotation is currently being applied to the projection matrix.
// L/R/B/T
Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
In this line your orthographic 2D projection matrix is multiplied with the rotation you set up just before it, so the rotation ends up baked into your projection matrix, which I believe is not what you want.
The solution is to move your rotation call to after the model-view matrix mode is selected, as below:
// L/R/B/T
Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
Gl.glMatrixMode(Gl.GL_MODELVIEW);
// rotate the view
var shipAngle = 180.0f; // just a test angle for proof of concept
Gl.glRotatef(shipAngle, 0.0f, 0.0f, 1.0f);
And now your rotation is applied to the model-view matrix stack, which I believe is the effect you want. Keep in mind that glRotatef() creates a rotation matrix and multiplies it with the matrix at the top of the currently selected stack.
I would also strongly suggest you move away from the fixed-function pipeline if possible, as suggested by @BDL.

LWJGL 2.9 gluProject 3D to 2D

So I've been playing with converting LWJGL 3D object coordinates to 2D screen-space coordinates using GLU.gluProject, however I'm finding there to be quite a problem when the xyz of the 3D object is behind the camera. The screen-space coordinates seem to appear on screen twice: once for the actual position, which works fine, but again for when the object is behind the camera, where the positions are somewhat inverted relative to the object's true position (when the camera moves left, so do the screen coordinates, twice as fast as the camera).
Here's the code I'm using for 3D to 2D:
public static float[] get2DFrom3D(float x, float y, float z) {
    FloatBuffer screen = BufferUtils.createFloatBuffer(3);
    IntBuffer view = BufferUtils.createIntBuffer(16);
    FloatBuffer model = BufferUtils.createFloatBuffer(16);
    FloatBuffer proj = BufferUtils.createFloatBuffer(16);
    GL11.glGetFloat(GL11.GL_MODELVIEW_MATRIX, model);
    GL11.glGetFloat(GL11.GL_PROJECTION_MATRIX, proj);
    GL11.glGetInteger(GL11.GL_VIEWPORT, view);
    boolean res = GLU.gluProject(x, y, z, model, proj, view, screen);
    if (res) {
        return new float[] {screen.get(0), Display.getHeight() - screen.get(1), screen.get(2)};
    }
    return null;
}
Another query is what the screen.get(2) value is used for, as it mostly varies between 0.8 and 1.1, but occasionally reaches -18 or 30 when the position is just below the camera and the camera pitch sits just above or below the horizon.
Any help is appreciated.
Points behind the camera (or on the camera plane) can never be correctly projected. This case can only be handled for primitives like lines or triangles: during rendering, the primitives are clipped against the viewing frustum, so that new vertices (and new primitives) can be generated. But this is impossible to do for a single point; you always need lines or polygon edges to calculate any meaningful intersection point.
Individual points, and that is all gluProject handles, can either be inside or outside of the frustum. But gluProject does not care about that; it just applies the transformations, mirroring points behind the camera to in front of the camera. It is the responsibility of the caller to ensure that the points to project are actually inside of the viewing frustum.
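One practical way to enforce that (a sketch I'm adding here, not part of the original answer): transform the point into eye space with the current modelview matrix and only call gluProject when the resulting z is negative, since an OpenGL camera looks down the negative z axis. This assumes the same LWJGL 2 imports as the question's get2DFrom3D.
// Returns true if (x, y, z) lies in front of the camera under the current modelview matrix.
public static boolean isInFrontOfCamera(float x, float y, float z) {
    FloatBuffer model = BufferUtils.createFloatBuffer(16);
    GL11.glGetFloat(GL11.GL_MODELVIEW_MATRIX, model); // column-major, as OpenGL returns it
    // Eye-space z is the third row of the modelview matrix times (x, y, z, 1).
    float eyeZ = model.get(2) * x + model.get(6) * y + model.get(10) * z + model.get(14);
    return eyeZ < 0.0f;
}
get2DFrom3D can then return null early for points behind the camera instead of handing back mirrored coordinates.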

OpenGL "camera" Yaw on wrong axis

I am using gluLookAt() to set the "camera" position and orientation
GLU.gluLookAt(xPosition, yPosition, zPosition,
        xPosition + lx, yPosition, zPosition + lz,
        0, 1, 0);
my lz and lx variables represent my forward vector
lz = Math.cos(angle);
lx = -Math.sin(angle);
When I turn around in the 3D world, it appears that I am rotating around an axis that is always in front of me.
I know this because my xPosition and yPosition variables stay the same, but I appear to spin around an object when I'm close to it and I turn.
I know there is not a problem with the maths I have used here, because I have tried using code from past projects that worked properly, but the problem still remains.
This is what I am doing in the rendering loop
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//draw scene from user perspective
glLoadIdentity();
GLU.gluLookAt(camera.getxPos(), camera.getyPos(), camera.getzPos(),
camera.getxPos()+camera.getLx(), camera.getyPos(), camera.getzPos()+p1.getLz(),
0, 1, 0);
glBegin(GL_QUADS);
glVertex3f(-dim, dim, 0);
glVertex3f(dim, dim, 0);
glVertex3f(dim, 0, 0);
glVertex3f(-dim, 0, 0);
glEnd();
pollInput();
camera.update();
I have tried rendering a box at the player coordinates and I got this result: the camera appears to be looking from behind the player coordinates. To use an analogy, right now it's like a third-person game, when it should look like a first-person game.
The small box here is rendered at the camera coordinates; to give some perspective, the bigger box is in front of it.
Solved!
The problem was that I was initially calling gluLookAt() while the matrix mode was set to GL_PROJECTION.
I removed that line and moved the call to just after setting the matrix mode to GL_MODELVIEW, and that solved the problem.
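For reference, the corrected setup order looks roughly like this (a sketch, not the poster's exact code; the gluPerspective values and width/height are placeholders):
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GLU.gluPerspective(70.0f, (float) width / (float) height, 0.1f, 100.0f); // projection setup only
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
// The camera transform belongs on the modelview stack:
GLU.gluLookAt(camera.getxPos(), camera.getyPos(), camera.getzPos(),
        camera.getxPos() + camera.getLx(), camera.getyPos(), camera.getzPos() + camera.getLz(),
        0, 1, 0);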

Giving 2D structures 3D depth [duplicate]

Possible duplicate of: How to give a 2D structure 3D depth (closed 12 years ago).
Hello everyone,
I posted this same question yesterday. I would like to have uploaded images showing my program output but due to spamming protection I am informed I need 10 reputation "points". I could send images of my output under different projection matrices to anyone willing.
I am beginning to learn OpenGL as part of a molecular modeling project, and currently I am trying to render 7 helices that will be arranged spatially close to each other and will move, tilt, rotate and interact with each other in certain ways.
My question is how to give the 2D scene 3-Dimensional depth so that the geometric structures look like true helices in three dimensions?
I have tried playing around with projection matrices (gluPerspective, glFrustum) without much luck, as well as using the glDepthRange function. As I understand from textbook/website references, when rendering a 3D scene it is appropriate to use a (perspective) projection matrix that has a vanishing point (either gluPerspective or glFrustum) to create the illusion of 3 dimensions on a 2D surface (the screen).
I include my code for rendering the helices, but for simplicity I insert the code for rendering one helix (the other 6 helices are exactly the same except for their translation matrix and the color function parameters) as well as the reshape handler.
This is the output I get when I run my program with an orthographic projection (glOrtho): it looks like a 2D projection of helices (curved lines drawn in three dimensions). And this is my output when I use a perspective projection (glFrustum in my case): it does not appear as if I am looking at my helices in 3D! (Screenshots omitted.)
Perhaps the glFrustum parameters are wrong?
//GLOBALS
GLfloat x, y, z;
GLfloat c = 1.5f; //helical pitch
GLfloat theta; //constant angle between tangent and x-axis
thetarad = theta/(Pi/180.0); //angle converted from degrees to radians
GLfloat r = 7.0f; //radius
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST); /* enable depth testing */
glDepthFunc(GL_LESS); /* make sure the right depth function is used */
/*CALLED TO DRAW HELICES*/
void RenderHelix() {
    /**** WHITE HELIX ****/
    glColor3f(1.0, 1.0, 1.0);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glTranslatef(-30.f, 100.f, 0.f); // Move position
    glRotatef(90.0, 0.0, 0.0, 0.0);
    glBegin(GL_LINE_STRIP);
    for(theta = 0; theta <= 360; ++theta) { /* Also can use: for(theta = 0; theta <= 2*Pi; ++rad) */
        x = r*(cosf(theta));
        y = r*(sinf(theta));
        z = c*theta;
        glVertex3f(x, y, z);
    }
    glEnd();
    glScalef(1.0, 1.0, 12.0); // Stretch or contract the helix
    glPopMatrix();
    /* Code for Other 6 Helices */
    .............
    glutSwapBuffers();
}
void Reshape(GLint w, GLint h) {
    if(h == 0)
        h = 1;
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    GLfloat aspectratio = (GLfloat)w/(GLfloat)h;
    if(w <= h)
        //glOrtho(-100,100,-100/aspectratio,100/aspectratio, -50.0,310.0);
        //glOrtho(-100,100,-100/aspectratio,100/aspectratio, 0.0001,1000000.0); //CLIPPING FAILSAFE TEST
        //gluPerspective(122.0,(GLfloat)w/(GLfloat)h,10.0,50.0);
        glFrustum(-10.f, 10.f, -100.f/aspectratio, 100.f/aspectratio, 1.0f, 15.0f);
    else
        //glOrtho(-100*aspectratio,100*aspectratio,-100,100,-50.0,310.0);
        //glOrtho(-100*aspectratio,100*aspectratio,-100,100,0.0001,1000000.0); //CLIPPING FAILSAFE TEST
        //gluPerspective(122.0,(GLfloat)w/(GLfloat)h,10.0,50.0);
        glFrustum(-10.f*aspectratio, 10.f*aspectratio, -10.f, 10.f, 1.0f, 15.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
The usual reason that simple 3D applications don't "look 3D" is that no lighting system has been set up. Lighting is a major source of depth cues for your brain.
Here's a good tutorial on adding lighting to an OpenGL program:
http://www.cse.msu.edu/~cse872/tutorial3.html
EDIT: For more context, here's the relevant chapter from the classic OpenGL Red Book:
http://fly.cc.fer.hr/~unreal/theredbook/chapter06.html
Notice the screenshot near the top, showing the same sphere render both with and without lighting.
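As a starting point, here is a minimal fixed-function lighting setup, sketched in LWJGL-style Java for consistency with the rest of this page (the question's code is C/GLUT, but the calls map one-to-one). The light position is an arbitrary example, and lit geometry also needs sensible per-vertex normals (glNormal3f) to shade convincingly.
// One directional light plus colour-material tracking; call once after the GL context exists.
FloatBuffer lightPos = BufferUtils.createFloatBuffer(4);
lightPos.put(new float[] { 1.0f, 1.0f, 1.0f, 0.0f }).flip(); // w = 0 -> directional light

GL11.glEnable(GL11.GL_LIGHTING);
GL11.glEnable(GL11.GL_LIGHT0);
GL11.glLight(GL11.GL_LIGHT0, GL11.GL_POSITION, lightPos);
GL11.glShadeModel(GL11.GL_SMOOTH);
// Keep glColor* driving the material colour, as the helix code already relies on glColor3f.
GL11.glEnable(GL11.GL_COLOR_MATERIAL);
GL11.glColorMaterial(GL11.GL_FRONT_AND_BACK, GL11.GL_AMBIENT_AND_DIFFUSE);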
I agree it's difficult if you can't post your images (and it may not be trivial then).
If your code is open source you are likely to get help for molecular modelling from the Blue Obelisk Community (http://blueobelisk.shapado.com/ is a SE-type site for answering these questions).
There used to be a lot of GL used in our community, but I'm not sure I know a good codebase which you could hack to get some idea of the best things to do. The leading graphics tools are Jmol (where the gfx were largely handwritten and very good) and Avogadro, which uses Qt.
But if you ask for examples of open-source GL molecular graphics you'll probably get help.
And of course you'll probably get complementary help here.