OpenGL: the ultimate coordinate system confusion solution? - c++

I'm tired of trying to figure out how the hell my coordinates work in each case.
I heard that I could flip the Y-axis by this code:
glScalef(1, -1, 1);
But should I? Doesn't this break other things, such as external functions, lighting and so on?

I'd say that getting used to the OpenGL defaults is the fastest way out of the confusion.

There's no correct yes/no answer to this question. Calling glScalef(1, -1, 1) means using a left-handed instead of a right-handed coordinate system, which has historically been the choice of, for instance, Direct3D and the RenderMan Interface. Some people feel it's more intuitive to have the positive z-axis pointing into the screen instead of out of it (which is one version of a left-handed coordinate system; the y-axis pointing down is another).
If you choose to switch coordinate systems, you will have to change some of the standard settings in OpenGL to make it work. For instance, you will probably want to call glCullFace(GL_FRONT) (or reverse the winding order of all triangles sent to OpenGL).
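A minimal fixed-function sketch of what such a switch could look like, assuming an existing legacy GL context (the commented-out glFrontFace call is the equivalent of reversing the triangle order mentioned above):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScalef(1.0f, -1.0f, 1.0f);   // mirror the Y axis: the coordinate system is now left-handed

// Mirroring flips the apparent winding of every triangle, so compensate by
// culling the other side ...
glCullFace(GL_FRONT);
// ... or, equivalently, by declaring clockwise triangles to be front-facing:
// glFrontFace(GL_CW);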

I could be wrong, but it might turn your models inside-out. You will probably also need to swap your vertex winding order. If you use two-sided polygons, though, then you don't need to worry.

Related

Return to the original OpenGL origin coordinates

I'm currently trying to solve a problem regarding the display of an arm avatar.
I'm using a 3D tracker that's sending me coordinates and angles through my serial port. It works quite fine as long as I only want to show a "hand" or a block of wood in its place in 3D space.
The problem is: when I want to draw an entire arm (let's say the wrist is "stiff", so the only degree of freedom is the elbow), I'm using the given coordinates (to which I've glTranslatef'd and glMultMatrixf'd), but I want to draw another quad primitive with 2 vertices that are relative to the tracker coordinates (part of the "elbow") and 2 vertices that are always fixed next to the camera (part of the "shoulder"). However, I can't get out of my translated coordinate system.
Is my question clear?
My code is something like
cubeStretch = 0.15;
computeRotationMatrix();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glPushMatrix();
glTranslatef(handX, handY, handZ);
glMultMatrixf(*rotationMatrix);
glBegin(GL_QUADS);
/*some vertices for the "block of wood"*/
/*then a vertex which is relative to handX-handZ*/
glVertex3f(-cubeStretch, -cubeStretch+0.1, 5+cubeStretch);
/*and here I want to go back to the origin*/
gltranslatef(-handX, -handY, -handZ);
/*so the next vertex should preferably be next to the camera; the shoulder, so to say*/
glVertex3f(+0.5,-0.5,+0.5);
I already know the last three lines don't work; it's just one of the ways I've tried.
I realize it might be hard to understand what I'm trying to do. Does anyone have an idea how to get back to the "un-glTranslatef'd" coordinate origin?
(I'd rather avoid having to implement a whole bone/joint system for this.)
Edit: https://imagizer.imageshack.us/v2/699x439q90/202/uefw.png
In the picture you can see what I have so far. As you can see, the emphasis so far has not been on beauty, but rather on using the tracker coordinates to correctly display something on the screen.
The white cubes are target points which turn red when the arm avatar "touches" them ("arm avatar" used here as a word for the hideous brown contraption to the right, but I think you know what I mean). I now want to have a connection from the back end of the "lower arm" (the broad end of the avatar is supposed to be the hand) to just the right of the screen. Maybe it's clearer now?
a) The fixed-function stack is deprecated and you shouldn't use it. Use a proper matrix math library (like GLM), and make copies of the branching nodes in your transformation hierarchy so that you can use them as starting points for different branches.
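A rough GLM sketch of option (a), assuming rotationMatrix is stored as a glm::mat4 (the other names are taken from the question's code):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Keep the camera/world node of the transformation hierarchy around and
// branch a copy off it for the tracker-driven hand.
glm::mat4 world(1.0f);                                            // identity: the camera/world frame
glm::mat4 hand = glm::translate(world, glm::vec3(handX, handY, handZ)) * rotationMatrix;

// Transform each vertex by the node it belongs to before submitting it.
glm::vec4 elbowCorner    = hand  * glm::vec4(-cubeStretch, -cubeStretch + 0.1f, 5.0f + cubeStretch, 1.0f);
glm::vec4 shoulderCorner = world * glm::vec4(0.5f, -0.5f, 0.5f, 1.0f);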
b) You can reset the matrix state to identity at any time using glLoadIdentity. Using glPushMatrix and glPopMatrix you can create a stack. You know how stacks work, don't you? Pushing makes a copy and adds it to the top, and all following operations happen on that copy. Popping removes the element at the top and gives you back the state it was in before the previous push.
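For the arm question above, a minimal sketch of option (b) might look like this (vertex values are placeholders; note that matrix calls such as glTranslatef are not legal between glBegin and glEnd, which is one reason the attempt in the question cannot work):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glPushMatrix();                        // remember the camera/world frame
glTranslatef(handX, handY, handZ);     // enter the tracker's frame
glMultMatrixf(*rotationMatrix);
glBegin(GL_QUADS);
    /* vertices relative to the tracker (the "block of wood" and the elbow end) */
glEnd();
glPopMatrix();                         // back to the camera/world frame

glBegin(GL_QUADS);
    /* vertices fixed relative to the camera (the "shoulder" end) */
    glVertex3f(0.5f, -0.5f, 0.5f);
glEnd();
If the elbow and shoulder corners really have to be part of one and the same quad, the tracker-relative corners need to be transformed on the CPU first (for example with the GLM sketch above), since the matrix cannot change in the middle of a primitive.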
Update
Regarding transformation trees you may be interested in the following:
https://stackoverflow.com/a/8953078/524368
https://stackoverflow.com/a/15566740/524368
(I'd rather avoid having to implement a whole bone/joint system for this.)
It's actually the easiest way to do this. In terms of fixed-function OpenGL, a bone joint is just a combination of glTranslate(…); glRotate(…).
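A rough sketch of what such a two-joint chain looks like in fixed-function terms (drawUpperArm and drawForearm are hypothetical drawing helpers, and the joint positions, lengths and angles are placeholders):
glPushMatrix();
    glTranslatef(shoulderX, shoulderY, shoulderZ);   // place the shoulder joint
    glRotatef(shoulderAngle, 0.0f, 0.0f, 1.0f);      // rotate about the shoulder
    drawUpperArm();                                  // upper-arm geometry, drawn in the shoulder's frame

    glTranslatef(upperArmLength, 0.0f, 0.0f);        // move along the upper arm to the elbow joint
    glRotatef(elbowAngle, 0.0f, 0.0f, 1.0f);         // rotate about the elbow
    drawForearm();                                   // forearm geometry, drawn in the elbow's frame
glPopMatrix();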

OpenGL - Clipping in 3D space

I've been trying for so long now to find how to do this: Clipping outside a defined area in OpenGL.
I don't want to clip using the viewport or scissors btw. I've been searching like crazy and all I ever find is how to use the viewport/scissors.
I want to define something like, "if this pixel has x below 10 units don't draw it". Something like, "it's ok to draw if x is between 10-20 and y is between 10-20 and z is between 10-20, otherwise don't render the pixel".
Footnote: Sucks that stackoverflow requires you to register to ask a question. It was better before when the site was open and account creation was optional.
Figured it out after searching a little while longer; typical that it happened right after I posted the question here.
Java code (I'm using LWJGL):
// The plane equation {A, B, C, D} keeps every point where A*x + B*y + C*z + D >= 0,
// so {-1, 0, 0, 100} keeps everything with x <= 100.
DoubleBuffer eqn1 = BufferUtils.createDoubleBuffer(8).put(new double[] {-1, 0, 0, 100});
eqn1.flip(); // the buffer must be flipped before it is handed to OpenGL
GL11.glClipPlane(GL11.GL_CLIP_PLANE0, eqn1);
GL11.glEnable(GL11.GL_CLIP_PLANE0);
As usual, OpenGL is poorly documented, and I had to search around until I finally found a forum post that explained this (and revealed that the buffer must be flipped): the first three doubles are the normal of the clipping plane, and the last one (100) is how far from the world's origin (0,0,0) the plane is.
So, if the camera is looking straight at (0,0,0), the example code above creates a plane on the right side that makes OpenGL not render anything to the right of it. The plane is located at x=100 and faces to the left (its normal points along -x).
I hope this will be the search result for people looking for the answer to this in the future.
The fixed-function pipeline has user-defined clipping planes for that; see glClipPlane() for the details. You could define six of these planes to achieve the effect you talked about.
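For the 10-20 box from the question, a fixed-function sketch might look like this (glClipPlane transforms the equation by the current modelview matrix at the time of the call, so this assumes the bounds are meant in the coordinates that matrix is currently set up for):
// Keep only geometry with 10 <= x, y, z <= 20 by enabling six clip planes.
// Each plane {A, B, C, D} keeps points where A*x + B*y + C*z + D >= 0.
const GLdouble planes[6][4] = {
    {  1,  0,  0, -10 },   // x >= 10
    { -1,  0,  0,  20 },   // x <= 20
    {  0,  1,  0, -10 },   // y >= 10
    {  0, -1,  0,  20 },   // y <= 20
    {  0,  0,  1, -10 },   // z >= 10
    {  0,  0, -1,  20 },   // z <= 20
};
for (int i = 0; i < 6; ++i) {
    glClipPlane(GL_CLIP_PLANE0 + i, planes[i]);
    glEnable(GL_CLIP_PLANE0 + i);
}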
Modern shader-based GL provides the gl_ClipDistance[] output variable array to implement such things.
The test you describe seems to be per-fragment, so you could also do this in the fragment shader by discarding all fragments outside of your area. But this will be quite inefficient (and prevents the early Z test from being used), so you should be careful. It could probably be combined with the stencil test to limit x and y, and only do the test on z in the shader.
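A sketch of that fragment-shader variant, embedded here as a GLSL string in C++; worldPos is an assumed varying carrying the interpolated world-space position, and boxMin/boxMax would be set to (10,10,10) and (20,20,20) for the example in the question:
// Hypothetical fragment shader that discards everything outside an axis-aligned box.
const char* kBoxClipFragmentShader = R"GLSL(
    #version 330 core
    in vec3 worldPos;          // assumed varying: interpolated world-space position
    out vec4 fragColor;
    uniform vec3 boxMin;       // e.g. vec3(10.0)
    uniform vec3 boxMax;       // e.g. vec3(20.0)
    void main() {
        if (any(lessThan(worldPos, boxMin)) || any(greaterThan(worldPos, boxMax)))
            discard;           // outside the box: don't shade this fragment
        fragColor = vec4(1.0); // placeholder shading
    }
)GLSL";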

OpenGL perspective x-y coordinates to orthogonal

Let's say that I have a perspective view in OpenGL and choose a point at a given depth, say z = -10. How do I know which actual x and y coordinates this point has? It is easy with an orthographic view because then it is the same at every depth, but for the perspective one: how do I find these x/y values?
The coordinates you supply are basically "world" coordinates -- i.e., where things exist in the virtual world you build. In other words, the coordinates you work with are always orthogonal.
Just as an example, if I were going to do a 3D model of a house, I could set it up so I was working in actual feet, so I could draw a line from 0,0 to 0,10 to represent something exactly 10 feet long. That would remain 10 feet whether I viewed it up close, so it filled my view, or from a long way away, so it was only a couple of pixels long.
Perspective transformation is only done when objects are being displayed; it isn't applied to the coordinates you feed into the system at all.
If you're asking about computing the screen coordinates for an object, yes, you can. The usual way to do this is with gluProject. At least in my experience it's relatively unusual that you end up needing to do this, though.
The one time you do sort of care is when you're selecting something on-screen using the mouse. Though it's possible to do this with gluUnProject, OpenGL has a selection mode that's intended specifically for this kind of purpose, and it works pretty well.
Look at gluUnProject as a way of projecting your cursor position into a "world" position (and gluProject as a way of finding out where your object is on screen).
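One way to answer the original question (what x/y range is visible at a given depth) is to derive it from the values passed to gluPerspective; this is a sketch, assuming the depth is measured in eye space in front of the camera:
#include <cmath>

// Half-extents of the view frustum at distance 'depth' in front of the camera,
// for a projection set up with gluPerspective(fovyDegrees, aspect, zNear, zFar).
void visibleExtentAtDepth(double fovyDegrees, double aspect, double depth,
                          double& halfWidth, double& halfHeight)
{
    const double pi = 3.14159265358979323846;
    halfHeight = std::tan(fovyDegrees * pi / 360.0) * depth; // visible y range: [-halfHeight, +halfHeight]
    halfWidth  = halfHeight * aspect;                        // visible x range: [-halfWidth,  +halfWidth]
}
For the z = -10 example in the question, depth would be 10, since eye space looks down the negative z axis.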

Look from a moving camera at a fixed point - opengl

My camera is placed on a moving object, but it should always point at a fixed point in the scene. How can I do that? How can I calculate the perpendiculars? How, if the position of the observer always moves, can the viewing direction stay focused on that point?
My camera is placed on a moving object, but it should always point at a fixed point in the scene. How can I do that? How, if the position of the observer always moves, can the viewing direction stay focused on that point?
gluLookAt().
How can I calculate the perpendiculars?
Cross-product.
Which version of OpenGL are you using? This depends a lot on the environment you are working in. If you are using OpenGL ES you need to do it yourself; otherwise you can achieve the desired result with gluLookAt. Let us know.
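A minimal sketch of the gluLookAt approach (the eye and target variables are placeholders; the eye position would come from the moving object each frame, while the target stays fixed):
// Rebuild the view matrix every frame from the moving eye position,
// keeping the camera aimed at the same fixed target point.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(eyeX,    eyeY,    eyeZ,      // camera position (changes every frame)
          targetX, targetY, targetZ,   // fixed point the camera keeps looking at
          0.0,     1.0,     0.0);      // up vector; gluLookAt builds the camera basis via cross products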

OpenGL - Simple 2D clipping/occlusion method?

I'm working on a relatively small 2D (top-view) game demo, using OpenGL for my graphics. It's going for a basic stealth-based angle, and as such with all my enemies I'm drawing a sight arc so the player knows where they are looking.
One of my problems so far is that when I draw this sight arc (as a filled polygon) it naturally shows through any walls on the screen since there's nothing stopping it:
http://tinyurl.com/43y4o5z
I'm curious how I might best be able to prevent something like this. I do already have code in place that will let me detect line-intersections with walls and so on (for the enemy sight detection), and I could theoretically use this to detect such a case and draw the polygon accordingly, but this would likely be quite fiddly and/or inefficient, so I figure if there's any built-in OpenGL systems that can do this for me it would probably do it much better.
I've tried looking for questions on topics like clipping/occlusion but I'm not even sure if these are exactly what I should be looking for; my OpenGL skills are limited. It seems that anything using, say, glClipPlane or glScissor wouldn't be suited to this due to the large number of individual walls and so on.
Lastly, this is just a demo I'm making in my spare time, so graphics aren't exactly my main worry. If there's a (reasonably) painless way to do this then I'd hope someone can point me in the right direction; if there's no simple way then I can just leave the problem for now or find other workarounds.
This is essentially a shadowing problem. Here's how I'd go about it:
For each point around the edge of your arc, trace a (2D) ray from the enemy towards the point, looking for intersections with the green boxes. If the green boxes are always going to be axis-aligned, the math will be a lot easier (look for Ray-AABB intersection). Rendering the intersection points as a triangle fan will give you your arc.
Since you mention that you already have the line-wall intersection code working, as long as it tells you the distance from the enemy to the wall you'll be able to use it for the sight arc. Don't automatically assume it'll be too slow - we're not running on 486s any more. You can always reduce the number of points around the edge of your arc to speed things up.
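A sketch of the 2D ray vs. axis-aligned box (AABB) test the answer above refers to, using the slab method; the struct layout and names are assumptions:
#include <algorithm>

struct AABB { float minX, minY, maxX, maxY; };   // an axis-aligned wall box

// Returns true if the ray origin + t*dir (t >= 0) hits 'box', writing the distance of the
// nearest hit to 'tHit'. 'dir' does not need to be normalized. The degenerate case of an
// axis-parallel ray starting exactly on a slab boundary is not handled in this sketch.
bool rayIntersectsAABB(float ox, float oy, float dx, float dy,
                       const AABB& box, float& tHit)
{
    const float invDx = 1.0f / dx;   // +/- infinity for axis-parallel rays is fine here
    const float invDy = 1.0f / dy;

    float tx1 = (box.minX - ox) * invDx, tx2 = (box.maxX - ox) * invDx;
    float ty1 = (box.minY - oy) * invDy, ty2 = (box.maxY - oy) * invDy;

    float tMin = std::max(std::min(tx1, tx2), std::min(ty1, ty2));
    float tMax = std::min(std::max(tx1, tx2), std::max(ty1, ty2));

    if (tMax < 0.0f || tMin > tMax)      // box behind the ray, or the slabs don't overlap
        return false;

    tHit = (tMin >= 0.0f) ? tMin : tMax; // tMin < 0 means the ray starts inside the box
    return true;
}
For each edge point of the arc, pass a normalized direction so that tHit is a distance, and place the fan vertex at the smaller of the arc radius and the nearest tHit over all wall boxes.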
OpenGL's built-in occlusion handling is designed for 3D tasks and I can't think of a simple way to rig it to achieve the effect you are after. If it were me, the way I would solve this is to use a fragment shader program, but be forewarned that this definitely does not fall under "a (reasonably) painless way to do this". Briefly, you first render a binary "occlusion map" which is black where there are walls and white otherwise. Then you render the "viewing arc" like you are currently doing with a fragment program that is designed to search from the viewer towards the target location, searching for an occluder (black pixel). If it finds an occluder, then it renders that pixel of the "viewing arc" as 100% transparent. Overall though, while this is a "correct" solution I would definitely say that this is a complex feature and you seem okay without implementing it.
I figure if there's any built-in OpenGL systems that can do this for me it would probably do it much better.
OpenGL is a drawing API, not a geometry processing library.
Actually your intersection test method is the right way to do it. However, to speed it up you should use a spatial subdivision structure. In your case you have something that cries out for a binary space partitioning (BSP) tree. BSP trees have the nice property that the average complexity for finding intersections of a line with walls is about O(log n), with a worst case of O(n log n); in other words, BSP trees are very efficient. See the BSP FAQ for details: http://www.opengl.org//resources/code/samples/bspfaq/index.html