OpenGL - Clipping in 3D space

I've been trying for so long now to find how to do this: Clipping outside a defined area in OpenGL.
I don't want to clip using the viewport or scissors btw. I've been searching like crazy and all I ever find is how to use the viewport/scissors.
I want to define something like, "if this pixel's x is below 10 units, don't draw it". Something like, "it's OK to draw if x is between 10 and 20, y is between 10 and 20, and z is between 10 and 20; otherwise don't render the pixel".

Figured it out after searching a little while longer; typical that it happened right after I posted the question on this site.
Java code (I'm using LWJGL):
// The four doubles are the plane equation {A, B, C, D} for A*x + B*y + C*z + D = 0.
DoubleBuffer eqn1 = BufferUtils.createDoubleBuffer(4).put(new double[] {-1, 0, 0, 100});
eqn1.flip(); // rewind the buffer so OpenGL reads it from the start
GL11.glClipPlane(GL11.GL_CLIP_PLANE0, eqn1);
GL11.glEnable(GL11.GL_CLIP_PLANE0);
As usual, OpenGL is poorly documented, and I had to search around until I finally found a forum post that explained this (and revealed that the buffer must be flipped). The four doubles are the coefficients {A, B, C, D} of the plane equation A*x + B*y + C*z + D = 0: the first three are the clipping plane's normal, and the last (100 here) is how far from the world's origin (0,0,0) the plane sits. Everything on the side the normal points toward is kept; everything on the other side is clipped.
So, if the camera is looking straight at (0,0,0), the example code above creates a plane at x = 100 facing left (the -1 normal), which makes OpenGL not render anything to the right of it.
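For the box test from the question (only draw where x, y and z are each between 10 and 20), you can enable six such planes, one per box face. A minimal sketch in plain C-style GL (the same calls exist in LWJGL under GL11); the exact plane coefficients are my own, not from the original post, and note that glClipPlane transforms the plane by the current modelview matrix:
/* A plane {A,B,C,D} keeps every point where A*x + B*y + C*z + D >= 0. */
static const GLdouble box_planes[6][4] = {
    { 1,  0,  0, -10},  /* keep x >= 10 */
    {-1,  0,  0,  20},  /* keep x <= 20 */
    { 0,  1,  0, -10},  /* keep y >= 10 */
    { 0, -1,  0,  20},  /* keep y <= 20 */
    { 0,  0,  1, -10},  /* keep z >= 10 */
    { 0,  0, -1,  20},  /* keep z <= 20 */
};

for (int i = 0; i < 6; i++) {
    glClipPlane(GL_CLIP_PLANE0 + i, box_planes[i]);
    glEnable(GL_CLIP_PLANE0 + i);
}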
I hope this will be the search result for people looking for the answer to this in the future.

The fixed-function pipeline has user-defined clipping planes for that; see glClipPlane() for the details. You can define six of these planes to achieve the effect you described.
Modern shader-based GL provides the gl_ClipDistance[] output variable array to implement such things.
The test you describe seems to be per-fragment, so you could also do this in the fragment shader by discarding all fragments outside of your area. But this is quite inefficient (and prevents the early-Z test from being used), so be careful. It could probably be combined with the stencil test to limit x and y, leaving only the z test for the shader.
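A minimal sketch of the fragment-shader variant (GLSL source embedded as a C string); uBoxMin, uBoxMax and vWorldPos are hypothetical names, and you would still need to compile and link this into a program yourself:
/* Discards every fragment whose world-space position falls outside an
   axis-aligned box; everything inside is drawn as plain white. */
const char *boxClipFragSrc =
    "#version 330 core\n"
    "uniform vec3 uBoxMin;\n"
    "uniform vec3 uBoxMax;\n"
    "in vec3 vWorldPos;\n"        /* passed from the vertex shader */
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    if (any(lessThan(vWorldPos, uBoxMin)) ||\n"
    "        any(greaterThan(vWorldPos, uBoxMax)))\n"
    "        discard;\n"
    "    fragColor = vec4(1.0);\n"
    "}\n";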

Related

Water rendering in OpenGL

I have absolutely no idea how to render water sources (ocean, lake, etc.). It's like every tutorial I come across assumes basic knowledge of the subject that I don't have, and therefore speaks abstractly about the issue.
My goal is to have a height based water level in my terrain.
I can't find any good article that will help me get started.
The question is quite broad. I'd split it up into separate components and get each working in turn. Hopefully this will help narrow down what those components might be; unfortunately, I can only offer the higher-level discussion you said you weren't directly after.
The wave simulation (geometry and animation):
A procedural method will give a fixed height for a position and time based on some noise function.
A very basic idea is y = sin(x) + cos(z). Some more elaborate examples are in GPUGems.
You can render the geometry by creating a grid, sampling the height (y) at each grid (x, z) position, and connecting those points with triangles.
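A tiny immediate-mode sketch of that grid idea, using the y = sin(x) + cos(z) function from above; the grid size, scale and animation parameter are arbitrary choices of mine:
#include <math.h>

/* height of the water surface at (x, z), animated by time t */
float water_height(float x, float z, float t)
{
    return sinf(x + t) + cosf(z + t);
}

/* one triangle strip per grid row, sampling the height function */
void draw_water(float t)
{
    const int   n     = 32;
    const float scale = 0.5f;
    for (int i = 0; i < n - 1; i++) {
        glBegin(GL_TRIANGLE_STRIP);
        for (int j = 0; j < n; j++) {
            float x0 = i * scale, x1 = (i + 1) * scale, z = j * scale;
            glVertex3f(x0, water_height(x0, z, t), z);
            glVertex3f(x1, water_height(x1, z, t), z);
        }
        glEnd();
    }
}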
If you explicitly store all the heights in a 2D array, you can create some pretty decent-looking waves and ripples. The idea here is to update each height based on its neighbouring heights, using a few simple rules: for example, each height moves towards the average of its neighbours, but also tends towards the equilibrium height of zero. For this to work well, each height will also need a velocity value to give the water momentum.
I found some examples of this kind of dynamic water; the core update looks like this:
// accelerate each column towards the average height of its four neighbours
height_v[i][j] += (height[i-1][j] + height[i+1][j] + height[i][j-1] + height[i][j+1]) / 4 - height[i][j];
height_v[i][j] *= damping; // slightly below 1, so ripples lose energy and die out
height[i][j] += height_v[i][j];
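Put together, a minimal self-contained sketch of that update in C, assuming a fixed N x N grid; N, DAMPING and the two-pass structure are my own choices:
#define N 64
#define DAMPING 0.99f

static float height[N][N];    /* water column heights */
static float height_v[N][N];  /* and their velocities */

void step_water(void)
{
    /* pass 1: pull each velocity towards the neighbourhood average */
    for (int i = 1; i < N - 1; i++) {
        for (int j = 1; j < N - 1; j++) {
            float avg = (height[i-1][j] + height[i+1][j] +
                         height[i][j-1] + height[i][j+1]) * 0.25f;
            height_v[i][j] += avg - height[i][j];
            height_v[i][j] *= DAMPING;
        }
    }
    /* pass 2: integrate, so updates don't feed into neighbours mid-step */
    for (int i = 1; i < N - 1; i++)
        for (int j = 1; j < N - 1; j++)
            height[i][j] += height_v[i][j];
}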
Rendering:
Using alpha transparency is a great first step for water; I'd start there until your simulation is running OK. The primary effect you'll want is reflection, so I'll cover just that. Further on, you'll want to scale the reflection value by the Fresnel ratio. You may also want an absorption effect (like fog) underwater based on distance (see Beer's law, essentially exp(-distance * density)). Getting really fancy, you might render the parts of the scene underneath the water with refraction.
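Those two scalar terms are simple enough to sketch as C helpers; Schlick's approximation stands in for the exact Fresnel formula here, and r0 of about 0.02 is the usual value for an air/water boundary:
#include <math.h>

/* Schlick's approximation of the Fresnel reflectance;
   cos_theta is between the view ray and the surface normal */
float fresnel_schlick(float cos_theta, float r0)
{
    return r0 + (1.0f - r0) * powf(1.0f - cos_theta, 5.0f);
}

/* Beer's-law absorption: 1 at the surface, falling off with distance */
float beer_absorption(float distance, float density)
{
    return expf(-distance * density);
}
But back to reflections...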
Probably the simplest way to render a planar reflection is stencil reflections, where you'd draw the scene from underneath the water and use the stencil buffer to only affect pixels where you've previously drawn water.
An example is here.
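A rough outline of the classic stencil-reflection passes, assuming the water lies in the y = 0 plane; draw_water_plane() and draw_scene() are hypothetical helpers, and this is a sketch of the general technique rather than the linked example's code:
glEnable(GL_STENCIL_TEST);

/* pass 1: mark the water's pixels in the stencil buffer */
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
draw_water_plane();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

/* pass 2: draw the scene mirrored about y = 0, only where stencil == 1 */
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glPushMatrix();
glScalef(1.0f, -1.0f, 1.0f);  /* mirror about the water plane */
glCullFace(GL_FRONT);         /* mirroring flips the winding order */
draw_scene();
glCullFace(GL_BACK);
glPopMatrix();

glDisable(GL_STENCIL_TEST);

/* pass 3: blend the water surface itself over the reflection */
draw_water_plane();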
However, this method doesn't work when you have a bumpy surface and the reflection rays are perturbed.
Rather than render the underwater reflection view directly to the screen, you can render it to a texture. Then you have the colour information for the reflection when you render the water. The tricky part is working out where in the texture to sample after calculating the reflection vector.
An example is here.
This uses textures but just for a perfectly planar reflection.
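For the render-to-texture route, a minimal framebuffer-object setup (GL 3.0-style); the 512x512 size and the variable names are illustrative:
GLuint reflTex, reflDepth, reflFbo;

/* colour texture the reflection will be rendered into */
glGenTextures(1, &reflTex);
glBindTexture(GL_TEXTURE_2D, reflTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* depth renderbuffer, so the mirrored scene depth-tests correctly */
glGenRenderbuffers(1, &reflDepth);
glBindRenderbuffer(GL_RENDERBUFFER, reflDepth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 512, 512);

glGenFramebuffers(1, &reflFbo);
glBindFramebuffer(GL_FRAMEBUFFER, reflFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, reflTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, reflDepth);

/* render the mirrored scene here, then switch back to the default
   framebuffer and sample reflTex when drawing the water surface */
glBindFramebuffer(GL_FRAMEBUFFER, 0);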
See also: How do I draw a mirror mirroring something in OpenGL?

How flexible are OpenGL's quadric functionality and transformation matrices?

To give you an idea of where I'm coming from, this started as a teaching exercise to get a 12-year-old video game addict into coding. The 2D games, I did in SDL with him and that was fine because I wasn't planning on going into 3D. Yeah, right! So now I'm in at the deep end in OpenGL and mainly trying to figure out exactly what it can and cannot do. I understand the theory (still working on beziers and nurbs if the truth be told) and could code the whole thing by hand in calculated triangular vertices but I'd hate to spend days on that only to be told that there's a built in function/library that does the whole thing faster and easier.
Quadrics seem to be extremely powerful but not terribly flexible. Consider the human head: roughly speaking a 3x4x3 sphere, or a torso as a truncated cone that's taller than it is wide, and wider than it is thick. Again, a quadric shape with independent x, y and z radii. Since only one radius is provided, am I right in thinking that I would have to generate it around the origin and then apply a scaling matrix to adjust the radii? Furthermore, if this is so, am I also correct in thinking that saving the results into a vertex array rather than a display list results in the system neither knowing nor caring how they got there?
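To illustrate what I mean, something like this (a rough sketch with the 3x4x3 head proportions; whether gluSphere plus glScalef is the right combination is exactly what I'm asking):
GLUquadric *q = gluNewQuadric();

glEnable(GL_NORMALIZE);      /* non-uniform scaling distorts normals otherwise */
glPushMatrix();
glScalef(3.0f, 4.0f, 3.0f);  /* independent x, y, z "radii" */
gluSphere(q, 1.0, 32, 32);   /* unit sphere, generated around the origin */
glPopMatrix();

gluDeleteQuadric(q);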
Transformations: I'm familiar with the basic transformations but, again, consider the torso. It can achieve maybe a 45-degree twist from the hips to the shoulders, distributed linearly across the entire length, or even a sideways lean. These are applied around the y or z axis respectively, but I've obviously missed something about applying transformations that are based on an independent value (e.g. rot = dist * (max_rot / max_dist)). Again, I could do this by hand (and will probably have to, in order to apply the correct physics) but does OpenGL have this functionality built in somewhere?
Notes on any other areas of research I need to put in would be appreciated.

Return to the original OpenGL origin coordinates

I'm currently trying to solve a problem regarding the display of an arm avatar.
I'm using a 3D tracker that's sending me coordinates and angles through my serial port. It works quite fine as long as I only want to show a "hand" or a block of wood in its place in 3D space.
The problem is: when I want to draw an entire arm (let's say the wrist is "stiff", so the only degree of freedom is the elbow), I'm using the given coordinates (to which I've glTranslatef'd and glMultMatrix'd), but I want to draw another quad primitive with 2 vertices that are relative to the tracker coordinates (part of the "elbow") and 2 vertices that are always fixed next to the camera (part of the "shoulder"). However, I can't get out of my translated coordinate system.
Is my question clear?
My code is something like
cubeStretch = 0.15;
computeRotationMatrix();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glPushMatrix();
glTranslatef(handX, handY, handZ);
glMultMatrixf(*rotationMatrix);
glBegin(GL_QUADS);
/*some vertices for the "block of wood"*/
/*then a vertex which is relative to handX-handZ*/
glVertex3f(-cubeStretch, -cubeStretch+0.1, 5+cubeStretch);
/*and here I want to go back to the origin*/
gltranslatef(-handX, -handY, -handZ);
/*so the next vertex should preferably be next to the camera; the shoulder, so to say*/
glVertex3f(+0.5,-0.5,+0.5);
I already know the last three lines don't work; it's just one of the ways I've tried.
I realize it might be hard to understand what I'm trying to do. Anyone got any idea on how to get back to the "un-gltranslatef'd" coordinate origin?
(I'd rather avoid having to implement a whole bone/joint system for this.)
Edit: https://imagizer.imageshack.us/v2/699x439q90/202/uefw.png
In the picture you can see what I have so far. As you can see, the emphasis so far has not been on beauty, but rather on using the tracker coordinates to correctly display something on the screen.
The white cubes are target points which turn red when the arm avatar "touches" them ("arm avatar" used here as a word for the hideous brown contraption to the right, but I think you know what I mean). I now want to have a connection from the back end of the "lower arm" (the broad end of the avatar is supposed to be the hand) to just the right of the screen. Maybe it's clearer now?
a) The fixed-function matrix stack is deprecated and you shouldn't use it. Use a proper matrix math library (like GLM), and make copies of the branching nodes in your transformation hierarchy so that you can use those as starting points for different branches.
b) You can reset the matrix state to identity at any time using glLoadIdentity. Using glPushMatrix and glPopMatrix you can create a stack. You know how stacks work, don't you? Pushing makes a copy of the current matrix and adds it to the top; all following operations apply to that copy. Popping removes the element at the top and restores the state from before the previous push.
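Applied to the code in the question, you never need to "translate back" at all: pop the tracker transform, then draw the shoulder quad in the untouched coordinate system. A sketch along those lines, reusing the question's variables (note that matrix calls aren't legal between glBegin and glEnd, which is why the original attempt couldn't work):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glPushMatrix();                    /* save the identity state */
glTranslatef(handX, handY, handZ);
glMultMatrixf(*rotationMatrix);
glBegin(GL_QUADS);
/* ... the vertices that are relative to the tracker ... */
glEnd();
glPopMatrix();                     /* back at the un-translated origin */

glBegin(GL_QUADS);                 /* now in the original coordinate system */
glVertex3f(0.5f, -0.5f, 0.5f);     /* the fixed "shoulder" vertices */
/* ... */
glEnd();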
Update
Regarding transformation trees you may be interested in the following:
https://stackoverflow.com/a/8953078/524368
https://stackoverflow.com/a/15566740/524368
(I'd rather avoid having to implement a whole bone/joint system for this.)
It's actually the easiest way to do this. In terms of fixed-function OpenGL, a bone/joint is just a combination of glTranslate(…); glRotate(…).
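In that spirit, a minimal two-segment arm sketch; the lengths, angles and draw helpers are hypothetical:
glPushMatrix();
glTranslatef(shoulderX, shoulderY, shoulderZ);
glRotatef(shoulderAngle, 0.0f, 0.0f, 1.0f);
drawUpperArm();                            /* hypothetical helper */
glTranslatef(upperArmLength, 0.0f, 0.0f);  /* move out to the elbow */
glRotatef(elbowAngle, 0.0f, 0.0f, 1.0f);   /* the one degree of freedom */
drawLowerArm();                            /* hypothetical helper */
glPopMatrix();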

OpenGL perspective x-y-coordinates to orthogonal

Let's say that I have a perspective view in OpenGL and choose a point at a given depth, say z = -10. How do I know which actual x and y coordinates this point has? It's easy with an orthographic view, because they are the same at every depth. But for the perspective one: how do I find these x/y values?
The coordinates you supply are basically "world" coordinates -- i.e., where things exist in the virtual world you build. In other words, the coordinates you work with are always orthogonal.
Just for example, if I were going to do a 3D model of a house, I could set it up so I was working in actual feet, so I could draw a line from 0,0 to 0,10 to represent something exactly 10 feet long. It would remain 10 feet whether I viewed it up close, so it filled my view, or from a long way away so it was only a couple of pixels long.
The perspective transformation is only done when objects are displayed; it isn't applied to the coordinates you feed into the system at all.
If you're asking about computing the screen coordinates for an object, yes, you can. The usual way to do this is with gluProject. At least in my experience, it's relatively unusual that you end up needing to do this, though.
The one time you do sort of care is when you're selecting something on-screen using the mouse. Though it's possible to do this with gluUnProject, OpenGL has a selection mode that's intended specifically for this kind of purpose, and it works pretty well.
Look at gluUnProject as a way of mapping your cursor position back to a "world" position (and gluProject as a way of finding out where your object lands on screen).
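A sketch of the gluUnProject route for the original question, recovering the world-space point under a window position; here winZ is read back from the depth buffer. (For the frustum extents at a given depth there's also the closed form: half_height = |z| * tan(fovy / 2), half_width = half_height * aspect.)
/* map a window position (e.g. mouse coordinates) back into world space */
void window_to_world(int winX, int winY,
                     GLdouble *wx, GLdouble *wy, GLdouble *wz)
{
    GLdouble model[16], proj[16];
    GLint view[4];
    GLfloat winZ;

    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    /* window y runs bottom-up, mouse y usually top-down, hence the flip */
    glReadPixels(winX, view[3] - winY, 1, 1,
                 GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);

    gluUnProject(winX, view[3] - winY, winZ,
                 model, proj, view, wx, wy, wz);
}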

OpenGL: the ultimate coordinate system confusion solution?

I'm tired of thinking about how the hell my coordinates work in each case.
I heard that I could flip the Y-axis by this code:
glScalef(1, -1, 1);
But should I? Doesn't this break some other external functions, lighting, etc.?
If I may say so, getting used to the OpenGL defaults is the fastest way out of the confusion.
There's no correct yes/no answer to this question. Calling glScalef(1, -1, 1) means using a left-handed instead of a right-handed coordinate system, which has historically been the choice of, for instance, Direct3D and the RenderMan Interface. Some people feel it's more intuitive to have the positive z-axis pointing into the screen instead of out of it (which is one version of a left-handed coordinate system; the y-axis pointing down is another).
If you choose to switch coordinate systems, you will have to change some of the standard settings in OpenGL to make it work. For instance, you will probably want to call glCullFace(GL_FRONT) (or reverse the order of all triangles sent to OpenGL).
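A sketch of the pieces that need to change together if you do flip (the pairing is my summary of the above, not from the original answer):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScalef(1.0f, -1.0f, 1.0f);  /* flip the y-axis: left-handed coordinates */
glCullFace(GL_FRONT);         /* the flip reverses the triangle winding */
/* alternatively, keep GL_BACK culling and call glFrontFace(GL_CW) */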
I could be wrong, but it might turn your models inside-out. You will probably also need to swap your vertex winding order. If you use two-sided polygons, though, then you don't need to worry.