Modern shader-based OpenGL: Hierarchical modeling of 3D objects [closed]

I'm trying to model a human body using cubes, spheres and such.
But I don't know how to actually model hierarchical geometry in OpenGL 3.3+.
For example, if the shoulder is rotated, it should also move the arm (and not just leave the arm where it was). In some sense, how do I "connect" or "link" objects like that? That is, the lower arm should connect to the upper arm at the elbow, the torso should connect with the legs at the hips, etc.
Are there good resources that explain this with code?

It's quite simple, actually:
You create an object matrix to apply transformations to, and a stack to store copies of it:
mat4 objectMatrix;            // mat4 from your math library of choice (e.g. GLM)
std::stack<mat4> matrixStack; // #include <stack>
Then you can emulate the old fixed-function matrix stack:
matrixStack.push(objectMatrix);
objectMatrix.translate(shoulderOffset);
objectMatrix.rotate(shoulderRotation);
glUniformMatrix4fv(OBJECTMAT, 1, GL_FALSE, objectMatrix.data());
glDrawArrays(GL_TRIANGLES, shoulderStartIndex, shoulderNumVertices);
{
    matrixStack.push(objectMatrix);
    objectMatrix.translate(armOffset);
    objectMatrix.rotate(armRotation);
    glUniformMatrix4fv(OBJECTMAT, 1, GL_FALSE, objectMatrix.data());
    glDrawArrays(GL_TRIANGLES, armStartIndex, armNumVertices);
    // and so on for the hand, fingers, ...
    objectMatrix = matrixStack.top(); // restore the shoulder matrix
    matrixStack.pop();
}
objectMatrix = matrixStack.top();     // restore the original matrix
matrixStack.pop();
You can also upload all the matrices constructed as above (one per "bone") to the shader, and give each vertex a list of weights, so the vertex shader can blend them:
mat4 objMat = mat4(0.0); // the zero matrix, not the identity
for (int i = 0; i < 10; ++i) {
    objMat += matrices[i] * weights[i];
}
This keeps the number of uniform changes and draw calls down, and also allows blending between matrices (i.e. skinning).
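For instance, the upload on the C++ side might look like the following sketch, assuming GLM and a shader declaring "uniform mat4 matrices[10];" (the names program and boneMatrices are illustrative, not from the original answer; GL headers/loader assumed):
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <vector>

void uploadBoneMatrices(GLuint program, const std::vector<glm::mat4>& boneMatrices) {
    // Upload the whole array to "uniform mat4 matrices[10];" in one call.
    GLint loc = glGetUniformLocation(program, "matrices");
    glUniformMatrix4fv(loc, static_cast<GLsizei>(boneMatrices.size()), GL_FALSE,
                       glm::value_ptr(boneMatrices[0]));
}
// The per-vertex weights live in a VBO bound to a vertex attribute, so the
// whole body can then be drawn with a single glDrawArrays call.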

Firstly, this issue isn't really graphics-related as such; it's more logic-related.
So to start off, we model our object structure as a tree, with the torso, for example, as the root. Each node of the tree contains a model, a matrix for that model, and some information on how the node relates to its parent node: for example, its origin is 5 units in positive X, 1 unit in negative Y, and 0 units in Z away from the origin of the parent.
Now with this information we can do a lot. When we draw our model, we keep an overall translation and rotation matrix. We draw the root first and apply its matrices to its vertices. Then we draw each of its children, propagating the changes made in the root node to all of them. This means that every time our root node moves, all of its children move with it, and their children's children, and so on. We can also apply rotations at each node; in fact, you can store any other information you like in these nodes, it all depends on your design.
This is just a very basic idea of how you can achieve what you are looking for. There are also other techniques you can use to model this behaviour; some are bone-based and allow a lot more freedom of animation, but they are also a lot more complex. I recommend getting this down first and then moving on to the more complex stuff. As for drawing your vertices, you can have a mesh object in each node which has a VBO and a simple draw method; this part is really trivial, it's literally just loading in some vertex information. Your translation and rotation matrices will do the rest.
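A minimal sketch of such a node might look like this (assuming GLM; the Mesh type and all names here are illustrative):
#include <glm/glm.hpp>
#include <vector>

struct Mesh { void draw(const glm::mat4& world) const; }; // defined elsewhere: owns a VBO, uploads 'world', issues the draw call

struct Node {
    glm::mat4 localTransform;   // offset and rotation relative to the parent
    Mesh mesh;
    std::vector<Node> children;
};

void drawNode(const Node& node, const glm::mat4& parentTransform) {
    glm::mat4 world = parentTransform * node.localTransform; // propagate the parent's motion
    node.mesh.draw(world);
    for (const Node& child : node.children)
        drawNode(child, world); // children (and their children) inherit it
}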
Hope this helps!

Looks like you are trying to implement bone-based (skeletal) animation. I think it is a rather complex subject for someone new to 3D graphics.
The implementation differs depending on whether you use the fixed-function or shader-based pipeline, and whether you use VBOs or immediate mode.
A quick Google search turned up these:
http://content.gpwiki.org/index.php/OpenGL:Tutorials:Basic_Bones_System
http://en.wikipedia.org/wiki/Skeletal_animation

Related

OpenGL Game Engine Renderer [closed]

I'm developing a rendering engine using OpenGL and I want to know:
Should duplicated vertices (for flat shading we need to duplicate vertices, since there are two or more normals for a single position) be created in the model, or should the engine implement an algorithm to work out when vertices need to be duplicated? An example would be a model of a rock which has both sharp edges and smooth surfaces.
It makes sense to me that the artist would duplicate vertices for sharp edges in the modelling software, as the engine has no idea what the artist's intentions are (in regard to model features). The engine could identify which vertices should be duplicated by checking the angle between face normals, but to me, doing this could destroy intended features of the model.
This is specifically for .obj models, as different exporters may (I haven't looked into it) provide options to cater for this need.
You should probably be defining the duplicate vertices yourself, insofar as they're not really duplicate vertices.
In graphics programming terms, a "vertex" is supposed to define all the necessary information for a single point. This includes, but is not necessarily limited to: position, normal, texture coordinates, and untextured colors.
So in general, a vertex is only a "duplicate" if all of this defined data is identical (within some epsilon) when comparing two points. If you write an algorithm to detect and remove such duplicates, I'd say there's no problem.
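A sketch of such a duplicate check might look like this (the Vertex layout is an assumption for illustration):
#include <cmath>

struct Vertex {
    float position[3];
    float normal[3];
    float uv[2];
};

static bool close(const float* a, const float* b, int n, float eps) {
    for (int i = 0; i < n; ++i)
        if (std::fabs(a[i] - b[i]) > eps) return false;
    return true;
}

// Two vertices are duplicates only if *all* attributes match within epsilon.
bool isDuplicate(const Vertex& a, const Vertex& b, float eps = 1e-6f) {
    return close(a.position, b.position, 3, eps)
        && close(a.normal,   b.normal,   3, eps)
        && close(a.uv,       b.uv,       2, eps);
}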
Where you'll run into problems is expecting an algorithm to accurately decide whether a vertex should be "smooth" or "flat", because no single algorithm will always get it right. Especially in your case: if you expected the rock to always be smooth-shaded (which is reasonable for any particularly worn rock) you'd probably be okay, but given that you need it to handle both smooth and sharp edges, an algorithm will always screw it up somewhere. You'll have situations where a < 10° angle should be shaded smoothly as well as ones where a > 170° angle should be shaded flatly, and you won't get them right unless the model itself provides those rules.
So, to sum up: just create the duplicate vertices in the model; don't try to algorithm your way out of it. Most decent 3D modelling programs provide features that make this process relatively painless.

Water rendering in OpenGL [duplicate]

This question already has answers here: How to render ocean wave using opengl in 3D? [closed] (2 answers)
I have absolutely no idea how to render water (ocean, lake, etc.). Every tutorial I come across assumes I have basic knowledge of the subject and therefore speaks abstractly about the issue, but I don't.
My goal is to have a height based water level in my terrain.
I can't find any good article that will help me get started.
The question is quite broad, so I'd split it up into separate components and get each working in turn; hopefully this will help narrow down what those components might be. Unfortunately I can only offer the higher-level discussion you aren't directly after.
The wave simulation (geometry and animation):
A procedural method will give a fixed height for a position and time based on some noise function.
A very basic idea is y = sin(x) + cos(z). Some more elaborate examples are in GPUGems.
You can render the geometry by creating a grid, sampling heights (y) at the grid's (x, z) positions, and connecting those points with triangles.
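A sketch of that sampling step, assuming GLM and with all names illustrative:
#include <glm/glm.hpp>
#include <cmath>
#include <vector>

// Sample y = sin(x) + cos(z) over an N x N grid of points spaced cellSize apart.
std::vector<glm::vec3> sampleWaveGrid(int N, float cellSize, float time) {
    std::vector<glm::vec3> points;
    points.reserve(N * N);
    for (int j = 0; j < N; ++j)
        for (int i = 0; i < N; ++i) {
            float x = i * cellSize, z = j * cellSize;
            points.push_back({ x, std::sin(x + time) + std::cos(z + time), z });
        }
    return points; // connect adjacent grid points as pairs of triangles to render
}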
If you explicitly store all the heights in a 2D array instead, you can create some pretty decent-looking waves and ripples. The idea here is to update each height based on the neighbouring heights, using a few simple rules: for example, each height moves towards the average of its neighbours, but also tends towards the equilibrium height of zero. For this to work well, each height needs a velocity value to give the water momentum.
I found some examples of this kind of dynamic water; the core update looks like this:
// For each interior grid cell (i, j); height_v is the height's velocity:
height_v[i][j] += (height[i-1][j] + height[i+1][j] + height[i][j-1] + height[i][j+1]) / 4 - height[i][j];
height_v[i][j] *= damping;       // damp the velocity so ripples die out over time
height[i][j] += height_v[i][j];
Rendering:
Using alpha transparency is a great first step for water; I'd start there until your simulation is running OK. The primary effect you'll want is reflection, so I'll cover just that. Further on, you'll want to scale the reflection value by the Fresnel ratio. You may also want an absorption effect (like fog) underwater, based on distance (see Beer's law, essentially exp(-distance * density)). Getting really fancy, you might render the underneath parts of the water with refraction. But back to reflections...
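As a reference for those two terms, hedged helpers might look like this (function names are my own; Schlick's approximation stands in for the full Fresnel equations):
#include <cmath>

// Schlick's approximation of the Fresnel reflectance.
// cosTheta: angle between view direction and surface normal; f0: reflectance at 0 degrees.
float fresnelSchlick(float cosTheta, float f0) {
    return f0 + (1.0f - f0) * std::pow(1.0f - cosTheta, 5.0f);
}

// Beer's law: fraction of light surviving a trip through the water.
float beerTransmittance(float distance, float density) {
    return std::exp(-distance * density);
}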
Probably the simplest way to render a planar reflection is stencil reflections, where you'd draw the scene from underneath the water and use the stencil buffer to only affect pixels where you've previously drawn water.
An example is here.
However, this method doesn't work when you have a bumpy surface and the reflection rays are perturbed.
Rather than rendering the underwater reflection view directly to the screen, you can render it to a texture. Then you have the colour information for the reflection when you render the water. The tricky part is working out where in the texture to sample after calculating the reflection vector.
An example is here.
This example uses textures, but only for a perfectly planar reflection.
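For reference, a minimal render-to-texture setup in modern OpenGL looks roughly like this (a sketch; width/height, a depth attachment, and completeness checks are up to you):
GLuint reflectionTex, reflectionFbo;
glGenTextures(1, &reflectionTex);
glBindTexture(GL_TEXTURE_2D, reflectionTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &reflectionFbo);
glBindFramebuffer(GL_FRAMEBUFFER, reflectionFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, reflectionTex, 0);
// 1. render the mirrored scene into reflectionFbo
// 2. rebind the default framebuffer and sample reflectionTex when drawing the water
glBindFramebuffer(GL_FRAMEBUFFER, 0);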
See also: How do I draw a mirror mirroring something in OpenGL?

Return to the original OpenGL origin coordinates

I'm currently trying to solve a problem regarding the display of an arm avatar.
I'm using a 3D tracker that sends me coordinates and angles through my serial port. It works quite well as long as I only want to show a "hand" or a block of wood in its place in 3D space.
The problem: when I want to draw an entire arm (let's say the wrist is "stiff", so the only degree of freedom is the elbow), I'm using the given coordinates (to which I've glTranslatef'd and glMultMatrixf'd), but I want to draw another quad primitive with 2 vertices that are relative to the tracker coordinates (part of the "elbow") and 2 vertices that are always fixed next to the camera (part of the "shoulder"). However, I can't get out of my translated coordinate system.
Is my question clear?
My code is something like
cubeStretch = 0.15;
computeRotationMatrix();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glPushMatrix();
glTranslatef(handX, handY, handZ);
glMultMatrixf(*rotationMatrix);
glBegin(GL_QUADS);
/*some vertices for the "block of wood"*/
/*then a vertex which is relative to handX-handZ*/
glVertex3f(-cubeStretch, -cubeStretch+0.1, 5+cubeStretch);
/*and here I want to go back to the origin*/
gltranslatef(-handX, -handY, -handZ);
/*so the next vertex should preferably be next to the camera; the shoulder, so to say*/
glVertex3f(+0.5,-0.5,+0.5);
I already know the last three lines don't work; it's just one of the ways I've tried.
I realize it might be hard to understand what I'm trying to do. Anyone got any idea on how to get back to the "un-gltranslatef'd" coordinate origin?
(I'd rather avoid having to implement a whole bone/joint system for this.)
Edit: https://imagizer.imageshack.us/v2/699x439q90/202/uefw.png
In the picture you can see what I have so far. As you can see, the emphasis so far has not been on beauty, but rather on using the tracker coordinates to correctly display something on the screen.
The white cubes are target points which turn red when the arm avatar "touches" them ("arm avatar" used here as a word for the hideous brown contraption to the right, but I think you know what I mean). I now want to have a connection from the back end of the "lower arm" (the broad end of the avatar is supposed to be the hand) to just the right of the screen. Maybe it's clearer now?
a) The fixed-function matrix stack is deprecated and you shouldn't use it. Use a proper matrix math library (like GLM), and make copies of the branching nodes in your transformation hierarchy so that you can use those as starting points for the different branches.
b) You can reset the matrix state to identity at any time using glLoadIdentity. Using glPushMatrix and glPopMatrix you can maintain a stack. You know how stacks work, don't you? Pushing makes a copy of the top element and adds it to the stack; all following operations apply to that copy. Popping removes the element at the top and returns you to the state before the previous push.
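Concretely, for the arm example that could look like this sketch (drawHandBlock and drawShoulderQuad are hypothetical helpers standing in for your glBegin/glEnd blocks):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glPushMatrix();                    // save the identity state
glTranslatef(handX, handY, handZ); // enter the tracker's coordinate system
glMultMatrixf(*rotationMatrix);
drawHandBlock();                   // vertices relative to the tracker
glPopMatrix();                     // back at identity: camera-relative coordinates again
drawShoulderQuad();                // fixed vertices next to the camera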
Update
Regarding transformation trees you may be interested in the following:
https://stackoverflow.com/a/8953078/524368
https://stackoverflow.com/a/15566740/524368
(I'd rather avoid having to implement a whole bone/joint system for this.)
It's actually the easiest way to do this. In terms of fixed-function OpenGL, a bone/joint is just a combination of glTranslate(…) and glRotate(…).

Blender: Impossible Cube [closed]

I'm working on a graphics project trying to create an impossible cube in 3D. The trick behind an impossible cube is that two of the edges are 'cut' and the picture is taken from a specific angle to give the illusion of impossibility.
Well I'm trying to make this but instead of a static image, I want to be able to animate it (rotate around) maintaining the impossible properties.
I have managed to make a cube in Blender (screenshot omitted).
I would like to hear your suggestions on how I can achieve the desired effect. One idea would be to make transparent the portion of an edge that has another edge (or more) behind it, so that every time the camera angle changes, the transparent patch moves along with it.
It doesn't have to be done in Blender exclusively so any solutions in OpenGL etc are welcome.
To give you an idea of what the end result should be, this is a link to such an illustration:
3D Impossible Cube Illusion Animation
It's impossible (heh). Try to imagine rotating the cube so that the impossibly-in-front bit moves to the left: as soon as it "crossed" the current leftmost edge, the two properties "it's in front" and "it's in the back" could no longer be fulfilled simultaneously.
If you have face culling enabled but depth testing disabled, and you draw the primitives in the right order, you should get the Escher cube without any need for cuts. This should be relatively easy to animate.
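In OpenGL terms, that setup is just the following (a sketch; the draw order is left to you):
glEnable(GL_CULL_FACE);   // skip faces pointing away from the camera
glDisable(GL_DEPTH_TEST); // visibility is decided by draw order, not depth
// Draw the cube's bars back to front, with the "impossible" bar drawn last.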

Modern OpenGL Question

In my OpenGL research (the OpenGL Red Book, I think) I came across an example of a model of an articulating robot arm consisting of an "upper arm", a "lower arm", a "hand", and five or more "fingers". Each of the sections should be able to move independently, but constrained by the "joints" (the upper and lower "arms" are always connected at the "elbow").
In immediate mode (glBegin/glEnd), they use one cube mesh, called "member", and scaled copies of this single mesh for each of the parts of the arm, hand, etc. "Movements" are accomplished by pushing rotations onto the transformation matrix stack for each joint: shoulder, elbow, wrist, knuckle - you get the picture.
Now, this solves the problem, but since it uses the old, deprecated immediate mode, I don't yet understand the solution in a modern OpenGL context. My question is: how should I approach this problem using modern OpenGL? In particular, should each individual "member" keep track of its own current transformation matrix, now that matrix stacks are no longer kosher?
Pretty much. If you really need it, implementing your own stack-like interface is pretty simple: you would literally just store a stack, implement whatever matrix operations you need using your preferred math library, and have some way to initialize your matrix uniform from the top element of the stack.
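A minimal sketch of such a stack, assuming GLM (all names here are illustrative):
#include <glm/glm.hpp>
#include <stack>

class MatrixStack {
public:
    MatrixStack() { matrices.push(glm::mat4(1.0f)); } // start with the identity
    void push() { matrices.push(matrices.top()); }    // duplicate the top entry
    void pop()  { matrices.pop(); }
    glm::mat4& top() { return matrices.top(); }       // compose transforms into this
private:
    std::stack<glm::mat4> matrices;
};
// Before each draw call, upload top() into the shader's matrix uniform, e.g.
// glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(stack.top())).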
In your robot arm example, suppose that the linkage is represented as a tree (or even a graph, if you prefer), with relative transformations specified between each pair of bodies. To draw the robot arm, you just traverse this data structure, setting each child body's transformation to its parent's transformation composed with its own. For example:
def draw_linkage(body, view):
    body.draw(view)  # draw this body using the accumulated matrix
    for child, relative_xform in body.edges:
        if visited[child]:  # guard against cycles if the linkage is a graph
            continue
        visited[child] = True
        draw_linkage(child, view * relative_xform)
In the case of rigid parts connected by joints, one usually treats each part as an individual submesh, loading the appropriate matrix before drawing it.
In the case of "connected"/"continuous" meshes, like a face, animation usually happens through bones and deformation targets. Each of those defines a deformation, and every vertex in the mesh is assigned weights describing how strongly it is affected by each deformer. Technically this can be applied to a rigid limb model too, by giving each vertex a single nonzero deformer weight for its limb.
Any decent animation system keeps track of transformations (matrices) itself anyway; the OpenGL matrix stack functions have seldom been used in serious applications (practically since OpenGL was invented). But usually the transformations are stored in a hierarchy.
You generally do this at a level above OpenGL, using a scene graph.
The matrix transform at each node of the scene graph tree maps directly onto an OpenGL matrix, so it's pretty efficient.