OpenGL Game Engine Renderer [closed] - c++

I'm developing a rendering engine using OpenGL, and I want to know:
Should duplicated vertices (for flat shading we need to duplicate vertices, since a single position can have two or more normals) be created in the model, or should the engine implement an algorithm to work out when vertices need to be duplicated? An example would be a model of a rock which has both sharp edges and smooth surfaces.
It makes sense to me that the artist would duplicate vertices for sharp edges in the modelling software, as the engine has no idea what the artist's intentions are (with regard to model features). The engine could identify which vertices should be duplicated by checking the angle between face normals, but doing this could override intended features of the model.
This is specifically for .obj models, as different exporters may (I haven't looked into it) provide options to cater for this need.

You should probably be defining the duplicate vertices yourself, insofar as they're not really duplicate vertices.
In graphics programming terms, a "vertex" is supposed to carry all the information needed to define a single point. This includes, but is not necessarily limited to: position, normal, texture coordinates, and untextured colors.
So in general, a vertex is only a "duplicate" if all of this data is identical (plus or minus some epsilon) when comparing two points. If you write an algorithm to detect and remove such duplicates, I'd say there's no problem.
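For illustration, here is a minimal sketch of such a weld pass in C++ (naive O(n²); the Vertex layout, epsilon value, and function names are illustrative assumptions, not anything prescribed by the question):

#include <cmath>
#include <vector>

struct Vertex {
    float px, py, pz;   // position
    float nx, ny, nz;   // normal
    float u, v;         // texture coordinates
};

static bool nearlyEq(float a, float b, float eps) {
    return std::fabs(a - b) <= eps;
}

// Two vertices are "duplicates" only if every attribute matches within epsilon.
static bool sameVertex(const Vertex& a, const Vertex& b, float eps = 1e-6f) {
    return nearlyEq(a.px, b.px, eps) && nearlyEq(a.py, b.py, eps) &&
           nearlyEq(a.pz, b.pz, eps) && nearlyEq(a.nx, b.nx, eps) &&
           nearlyEq(a.ny, b.ny, eps) && nearlyEq(a.nz, b.nz, eps) &&
           nearlyEq(a.u,  b.u,  eps) && nearlyEq(a.v,  b.v,  eps);
}

// Builds a deduplicated vertex buffer plus an index buffer referencing it.
void weldVertices(const std::vector<Vertex>& in,
                  std::vector<Vertex>& outVerts,
                  std::vector<unsigned>& outIndices) {
    for (const Vertex& v : in) {
        unsigned index = static_cast<unsigned>(outVerts.size());
        for (unsigned i = 0; i < outVerts.size(); ++i) {
            if (sameVertex(outVerts[i], v)) { index = i; break; }
        }
        if (index == outVerts.size()) outVerts.push_back(v);
        outIndices.push_back(index);
    }
}

A production version would hash quantized attributes instead of scanning linearly, but the comparison rule (all attributes equal within epsilon) is the important part.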
Where you'll get a problem is when you expect an algorithm to accurately decide whether a vertex should be "smooth" or "flat", because no single algorithm will ever get it right. Especially in your case: if you expected the rock to always be smooth shaded (which is reasonable for a particularly worn rock) you'd probably be okay, but given that you need it to handle both smooth and sharp edges, an algorithm will always screw it up somewhere. You'll have situations where a < 10° angle should be shaded smoothly, and others where a > 170° angle should be shaded flat. You won't get it right unless the model itself provides those rules.
So, to sum up: Just create the duplicate vertices in the model. Don't try to algorithm your way out of it. Most decent 3d modelling programs should provide features which will make this process relatively painless.

Related

Algorithm to Fill In a Closed 2D Curve [closed]

I need to find a way of drawing the inside of a closed 2D curve. The curve is actually built from cubic Bezier segments, but I believe that's not important.
For the moment there should be no "holes" within the drawn shape, so it will just be totally filled in. It seems like constrained Delaunay triangulation would be the way to go? But there seem to be different ways of doing this. I am looking for a quick and simple solution (but will implement whatever is needed to make it work).
Programs such as Illustrator have that sort of feature (as does SVG, with the fill option).
I am looking for:
techniques to do this
pointers to a paper/document where the algorithm is explained
is the source code of an SVG renderer available somewhere?
EDIT:
The application uses OpenGL. I draw the curves myself. Just need to find a way of filling them in.
The shape can be either concave or convex.
Polygons can be filled using the scanline method. The principle is simple: sweep a horizontal line across the polygon and keep a list of the edges it meets, called the active list. Then join the intersections from left to right, in pairs. When the edges are sorted by increasing ordinate, the active list can be updated efficiently from one scanline to the next.
This works with concave and convex polygons, polygons with holes, and even self-intersecting ones.
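For illustration, a minimal C++ sketch of this idea (even-odd fill; for simplicity it re-tests every edge at each scanline instead of maintaining the incremental active list described above, and drawSpan is a hypothetical output callback):

#include <algorithm>
#include <cmath>
#include <vector>

struct Point { float x, y; };

// Fill a polygon with the scanline method (even-odd rule).
void scanlineFill(const std::vector<Point>& poly,
                  void (*drawSpan)(int y, int xLeft, int xRight)) {
    float yMin = poly[0].y, yMax = poly[0].y;
    for (const Point& p : poly) {
        yMin = std::min(yMin, p.y);
        yMax = std::max(yMax, p.y);
    }
    for (int y = (int)std::ceil(yMin); y <= (int)std::floor(yMax); ++y) {
        std::vector<float> xs;
        for (size_t i = 0; i < poly.size(); ++i) {
            const Point& a = poly[i];
            const Point& b = poly[(i + 1) % poly.size()];
            // Half-open rule avoids double-counting a vertex shared by two edges.
            if ((a.y <= y && b.y > y) || (b.y <= y && a.y > y))
                xs.push_back(a.x + (y - a.y) * (b.x - a.x) / (b.y - a.y));
        }
        std::sort(xs.begin(), xs.end());
        // Join the intersections in pairs, left to right.
        for (size_t i = 0; i + 1 < xs.size(); i += 2)
            drawSpan(y, (int)std::ceil(xs[i]), (int)std::floor(xs[i + 1]));
    }
}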
To fill a Bezier path, you can flatten it, i.e. turn it into a polygon with many small sides.
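A minimal flattening sketch, using uniform sampling for simplicity (adaptive subdivision based on a flatness test would produce fewer points; Point is the same struct as in the scanline sketch above):

#include <vector>

// Appends 'steps' points approximating the cubic Bezier arc p0..p3.
// p0 itself is assumed to already be in 'out' as the previous endpoint.
void flattenCubic(Point p0, Point p1, Point p2, Point p3,
                  int steps, std::vector<Point>& out) {
    for (int i = 1; i <= steps; ++i) {
        float t = (float)i / steps, u = 1.0f - t;
        Point q;
        q.x = u*u*u*p0.x + 3*u*u*t*p1.x + 3*u*t*t*p2.x + t*t*t*p3.x;
        q.y = u*u*u*p0.y + 3*u*u*t*p1.y + 3*u*t*t*p2.y + t*t*t*p3.y;
        out.push_back(q);
    }
}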
A direct approach is also possible, based on the scanline idea: first decompose the Bezier curves into monotone sections, i.e. portions that meet any horizontal line only once. This can be done analytically for cubic Beziers by finding the curve's maxima and minima (the derivative equation is quadratic).
Now you can treat the curvilinear polygon exactly as a polygon, knowing that you have one intersection per side. There is one slightly delicate point: computing the intersection. But this is eased by the fact that you have a good approximation of the Bezier arc (the line segment between the same endpoints), and you can update the intersection incrementally from one scanline to the next.
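A sketch of the monotone split mentioned above: the y-derivative of a cubic Bezier is a quadratic in t, so its roots in (0, 1) are the parameters where the curve must be split (by de Casteljau subdivision) to get y-monotone sections. The function name is illustrative:

#include <cmath>
#include <vector>

// Parameters in (0,1) where a cubic Bezier's y-coordinate has a local extremum.
// p0..p3 are the y-coordinates of the four control points.
std::vector<float> yExtrema(float p0, float p1, float p2, float p3) {
    float d0 = p1 - p0, d1 = p2 - p1, d2 = p3 - p2;
    // B'_y(t) / 3 = a*t^2 + b*t + c
    float a = d0 - 2*d1 + d2, b = 2*(d1 - d0), c = d0;
    std::vector<float> ts;
    if (std::fabs(a) < 1e-12f) {              // degenerate: linear equation
        if (std::fabs(b) > 1e-12f) ts.push_back(-c / b);
    } else {
        float disc = b*b - 4*a*c;
        if (disc >= 0) {
            float s = std::sqrt(disc);
            ts.push_back((-b + s) / (2*a));
            ts.push_back((-b - s) / (2*a));
        }
    }
    std::vector<float> out;
    for (float t : ts)
        if (t > 0.0f && t < 1.0f) out.push_back(t);
    return out;
}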
In the picture, the original endpoints appear in blue. Splitting endpoints have been added to obtain monotone sections (the other control points are omitted). The dotted lines show the polygon that approximates the shape and has the same topology (same active list, same number of intersections with the scanlines).
If you must use polygon filling, there is no option other than flattening the curve to get straight sides.
Then use a polygon filling primitive.
If all you have is a triangle filling primitive, you can
triangulate the polygon by ear clipping (a sketch follows this list), or by decomposition into monotone polygons, or
use a simple sweepline method: if you draw a horizontal line through every vertex, you will slice the polygon into triangles and trapezoids. A trapezoid can be cut into two triangles. For efficiency, use the active list method.
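A minimal O(n²) ear-clipping sketch, assuming a simple counter-clockwise polygon without holes (Point as in the earlier sketches):

#include <vector>

static float cross(const Point& o, const Point& a, const Point& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

static bool pointInTri(const Point& p, const Point& a, const Point& b, const Point& c) {
    float d1 = cross(a, b, p), d2 = cross(b, c, p), d3 = cross(c, a, p);
    bool hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
    bool hasPos = d1 > 0 || d2 > 0 || d3 > 0;
    return !(hasNeg && hasPos);   // all on one side => inside or on boundary
}

// Emits triangles as index triples into the original vertex array.
std::vector<unsigned> earClip(const std::vector<Point>& poly) {
    std::vector<unsigned> idx(poly.size()), tris;
    for (unsigned i = 0; i < poly.size(); ++i) idx[i] = i;

    while (idx.size() > 3) {
        bool clipped = false;
        for (size_t i = 0; i < idx.size(); ++i) {
            unsigned ia = idx[(i + idx.size() - 1) % idx.size()];
            unsigned ib = idx[i];
            unsigned ic = idx[(i + 1) % idx.size()];
            if (cross(poly[ia], poly[ib], poly[ic]) <= 0) continue; // reflex corner
            bool ear = true;   // an ear must contain no other remaining vertex
            for (unsigned j : idx)
                if (j != ia && j != ib && j != ic &&
                    pointInTri(poly[j], poly[ia], poly[ib], poly[ic])) {
                    ear = false; break;
                }
            if (!ear) continue;
            tris.insert(tris.end(), {ia, ib, ic});
            idx.erase(idx.begin() + i);   // clip the ear
            clipped = true;
            break;
        }
        if (!clipped) break;              // degenerate input; bail out
    }
    if (idx.size() == 3) tris.insert(tris.end(), {idx[0], idx[1], idx[2]});
    return tris;
}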

Modern shader-based OpenGL: Hierarchical modeling of 3D objects [closed]

I'm trying to model a human body using cubes, spheres and such.
But I don't know how to actually model hierarchical geometry in OpenGL 3.3+.
For example, if the shoulder is rotated, it should also move the arm (and not just leave the arm where it was). In some sense, how do I "connect" or "link" objects like that? That is, the arm should be connected to the shoulder at the elbow, the torso should connect with the legs at the hips, etc.
Are there good resources that explain this with code?
It's quite simple, actually:
You create an object matrix to apply transformations to, and a stack to store copies of it (a GLM-style math library is assumed in the snippets below):

#include <stack>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

glm::mat4 objectMatrix(1.0f);       // start at the identity
std::stack<glm::mat4> matrixStack;
Then you can emulate the old fixed-function matrix stack:
matrixStack.push(objectMatrix);
objectMatrix = glm::translate(objectMatrix, shoulderOffset);
objectMatrix = glm::rotate(objectMatrix, shoulderAngle, shoulderAxis); // radians, axis
glUniformMatrix4fv(OBJECTMAT, 1, GL_FALSE, glm::value_ptr(objectMatrix));
glDrawArrays(GL_TRIANGLES, shoulderStartIndex, shoulderNumVertices);
{
    matrixStack.push(objectMatrix);
    objectMatrix = glm::translate(objectMatrix, armOffset);
    objectMatrix = glm::rotate(objectMatrix, armAngle, armAxis);
    glUniformMatrix4fv(OBJECTMAT, 1, GL_FALSE, glm::value_ptr(objectMatrix));
    glDrawArrays(GL_TRIANGLES, armStartIndex, armNumVertices);
    // ...and so on for anything attached to the arm
    objectMatrix = matrixStack.top(); // std::stack::pop() returns void,
    matrixStack.pop();                // so restore via top() first
}
objectMatrix = matrixStack.top();
matrixStack.pop();
You can also upload all the matrices used (one per "bone"), constructed as above, to the shader, and give each vertex a list of weights, so you can do the following in the vertex shader:
mat4 objMat = mat4(0.0);   // zero matrix; the weighted bone matrices accumulate into it
for (int i = 0; i < 10; ++i) {
    objMat += matrices[i] * weights[i];
}
This reduces the number of uniform changes and draw calls, and allows blending of matrices (smooth skinning).
Firstly, this issue isn't really graphics-related as such; it's more logic-related.
To start off with, we model our object structure as a tree, with the torso, for example, being the root. Each node of the tree contains a model, a matrix for that model, and some information on how the node relates to its parent node: for example, its origin is 5 units in positive X, 1 unit in negative Y, and 0 units in Z away from the origin of the parent.
With this information we can do a lot. When we draw the model, we keep an overall translation and rotation matrix. We draw the root first and apply its matrices to its vertices. Then we draw each of its children, propagating the changes made in the root node to all of them. This means that every time the root node moves, all of its children move with it, and their children too, and so on. We can also apply rotations at each node; in fact, you can store any other information you like in these nodes, it's all dependent on your design.
This is just a very basic idea of how you can achieve what you are looking for. There are also other techniques you can use to model this behaviour; some are bone-based and allow a lot more freedom of animation, but are a lot more complex. I recommend first getting this down and then moving on to the more complex stuff. As for drawing your vertices, you can have a mesh object in each node which has a VBO and a simple draw method (see the sketch below). That part is really trivial; it's literally just loading in some vertex data. Your translation and rotation matrices will do the rest.
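A minimal sketch of such a node (GLM assumed for the math; Mesh is a hypothetical class that owns a VBO and uploads the world matrix before drawing):

#include <vector>
#include <glm/glm.hpp>

// Hypothetical: owns a VBO and draws itself with 'world' as its model matrix.
class Mesh {
public:
    void draw(const glm::mat4& world) const;
};

struct Node {
    glm::mat4 localTransform = glm::mat4(1.0f); // transform relative to the parent
    Mesh* mesh = nullptr;                       // optional geometry at this node
    std::vector<Node*> children;

    // Propagates the parent's world transform down the tree: moving a node
    // automatically moves all of its descendants.
    void draw(const glm::mat4& parentWorld) const {
        glm::mat4 world = parentWorld * localTransform;
        if (mesh) mesh->draw(world);
        for (const Node* child : children)
            child->draw(world);
    }
};

Calling root.draw(glm::mat4(1.0f)) then draws the torso and everything attached to it, with every limb following its parent.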
Hope this helps!
Looks like you are trying to implement bone (skeletal) animation. I think it is a rather complex subject for someone new to 3D graphics.
The implementation differs depending on whether you use the fixed-function or shader-based pipeline, and VBOs or immediate mode.
A quick Google search returned these:
http://content.gpwiki.org/index.php/OpenGL:Tutorials:Basic_Bones_System
http://en.wikipedia.org/wiki/Skeletal_animation

How to interpolate 3D points computed from a Kinect to get a ball trajectory? [closed]

I'm getting 3D points from the Kinect via OpenNI. Let's say I have:
X = [93.7819,76.8463,208.386,322.069,437.946,669.999]
Y = [-260.147,-250.011,-230.717,-211.104,-195.538,-189.851]
Z = [958,942,950,945,940,955]
Those are the points I was able to capture from my moving ball. Now I would like to compute something like an interpolation or a least-squares fit through those points, to know the trajectory of the ball. I can then know where the ball is going and where it will hit the wall.
I'm not sure which mathematical tool to use and how to translate it into C++. I've seen lots of resources for 2D interpolation (cubic, ...) or least squares, but it seems harder for 3D, or maybe I missed something.
Best regards
EDIT: the question was marked as too broad by moderators, so I will reduce the scope with the responses I got: if I use 2D polynomial regression in the 3 planes separately (thx yephick), what can I use in C++ to implement it?
For what you are interested in there's hardly any difference between 3D and 2D.
All you do is work with planes independently (XY plane, XZ plane, and YZ plane). This will reduce the complexity significantly and allow you to "draw" much simpler diagrams on a piece of paper when you work on this problem.
Once you've figured out the coordinates in each of the planes, it is quite trivial not only to reconcile them into 3D space but also to get an added benefit of error checking. For example, an X coordinate found in the XY plane should match (or be "close enough" to) the same X coordinate found in the XZ plane.
If the accuracy is not too critical, you don't even need to go higher than the first power of polynomial approximation: just use a plain old arithmetic average of two consecutive points.
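For the polynomial regression mentioned in the edit, here is a minimal dependency-free C++ sketch (a real project might use a library like Eigen instead): it fits y = c0 + c1·x + c2·x² in one plane via the normal equations.

#include <array>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Least-squares fit of y = c0 + c1*x + c2*x^2 over the sample points.
// Builds the 3x3 normal equations and solves them by Gaussian elimination.
std::array<double, 3> fitQuadratic(const std::vector<double>& xs,
                                   const std::vector<double>& ys) {
    double S[5] = {0, 0, 0, 0, 0};   // S[k] = sum of x^k
    double T[3] = {0, 0, 0};         // T[k] = sum of y * x^k
    for (std::size_t i = 0; i < xs.size(); ++i) {
        double p = 1.0;
        for (int k = 0; k < 5; ++k) {
            S[k] += p;
            if (k < 3) T[k] += ys[i] * p;
            p *= xs[i];
        }
    }
    double A[3][4];                  // augmented matrix [normal matrix | rhs]
    for (int r = 0; r < 3; ++r) {
        for (int c = 0; c < 3; ++c) A[r][c] = S[r + c];
        A[r][3] = T[r];
    }
    for (int col = 0; col < 3; ++col) {        // elimination, partial pivoting
        int piv = col;
        for (int r = col + 1; r < 3; ++r)
            if (std::fabs(A[r][col]) > std::fabs(A[piv][col])) piv = r;
        std::swap(A[col], A[piv]);
        for (int r = col + 1; r < 3; ++r) {
            double f = A[r][col] / A[col][col];
            for (int c = col; c < 4; ++c) A[r][c] -= f * A[col][c];
        }
    }
    std::array<double, 3> coef{};
    for (int r = 2; r >= 0; --r) {             // back substitution
        double s = A[r][3];
        for (int c = r + 1; c < 3; ++c) s -= A[r][c] * coef[c];
        coef[r] = s / A[r][r];
    }
    return coef;
}

Applied to the samples above, you could fit Y and Z as functions of X (or each coordinate as a function of time) and extrapolate the fits to the wall plane.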
You can use spline interpolation to create a smooth trajectory.
If you're not in the "mood" to implement it yourself, a quick Google search will give you open-source libraries like SINTEF's SISL that have such functionality.

Blender: Impossible Cube [closed]

I'm working on a graphics project trying to create an impossible cube in 3D. An impossible cube looks like this:
The trick behind it is that two of the edges are 'cut', and the picture is taken from a specific angle to give the illusion of impossibility.
I'm trying to make this, but instead of a static image I want to be able to animate it (rotate it around) while maintaining the impossible properties.
I have managed to make a cube in Blender, as you can see in the screenshot below:
I would like to hear your suggestions as to how I can achieve the desired effect. One idea would be to make transparent the portion of an edge that has an edge (or more) behind it, so that every time the camera angle changes, the transparent patch moves along with it.
It doesn't have to be done in Blender exclusively, so any solutions in OpenGL etc. are welcome.
To give you an idea of what the end result should be, this is a link to such an illustration:
3D Impossible Cube Illusion Animation
It's impossible (heh). Try to imagine rotating the cube so that the impossibly-in-front bit moves to the left. As soon as it "crossed" the current leftmost edge, the two properties "it's in front" and "it's in the back" would no longer be possible to fulfill simultaneously.
If you have back-face culling enabled but depth testing disabled, and you draw the primitives in the right order, you should get the Escher cube without any need for cuts. This should be relatively easy to animate.
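In OpenGL terms, a minimal sketch of that state setup (the back-to-front draw order itself depends on how you model the beams):

glEnable(GL_CULL_FACE);    // drop the faces pointing away from the camera
glCullFace(GL_BACK);
glDisable(GL_DEPTH_TEST);  // without depth testing, later draws paint over earlier ones
// ...now draw the cube's beams in painter's-algorithm order, drawing the
// "impossibly in front" beam last so it overdraws everything behind it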

3D Separating Axis Theorem, what axis to test? [closed]

I know I need to project the vertices of my polyhedra onto a whole bunch of axes. I've read that these axes are the normals to each of the faces of one polyhedron (or is it both?). I've also read that I use the cross product of each edge of one collidable with each edge of the other collidable. So let's say I have 2 polyhedra, each with 8 faces and 12 edges. Would there then be 8 + (12*12) = 152 axes to project onto and subsequently test? Is that correct?
Also, since I don't know whether my faces are CW or CCW, my normals could be pointing inward or outward; does this matter? For example, let's say I project onto an axis that is an inward-facing normal from one of the shapes. As long as both polyhedra are projected onto this same normal, will this affect the algorithm?
Thanks for any input!
The theorem says that you project the polyhedra onto candidate axes, and if you find an axis on which their projections don't overlap, they don't collide. The problem is finding such an axis in the fewest attempts. So you use the face normals of both polyhedra as candidate separating axes, as well as the cross products of their edge directions to catch the edge-to-edge cases.
In your example, with 2 polyhedra each having 8 faces and 12 edges, you first test the 8 face normals of each polyhedron as separating axes. If any one of them is a separating axis, the polyhedra don't collide. Otherwise, you check the cross products of the edge directions to eliminate the edge-on-edge non-colliding cases. (That gives 8 + 8 face normals plus 12 × 12 cross products in the worst case, not 8 + 144.)
I hope this helped.
In short, the axes you need to check are the ones defined by the faces of your objects, i.e. the face normals (plus, in 3D, the pairwise cross products of edge directions to cover edge-to-edge contact). The direction of a normal doesn't matter, since you're just projecting the vertices onto it anyway.
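For illustration, a minimal sketch of a single axis test (GLM assumed for the vector math): project both vertex sets onto the axis and compare the resulting intervals.

#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

// Returns true if 'axis' separates the two vertex sets (no collision along it).
// The axis need not be normalized, since both projections share its scale.
bool isSeparatingAxis(const glm::vec3& axis,
                      const std::vector<glm::vec3>& vertsA,
                      const std::vector<glm::vec3>& vertsB) {
    auto project = [&](const std::vector<glm::vec3>& verts,
                       float& lo, float& hi) {
        lo = hi = glm::dot(verts[0], axis);
        for (const glm::vec3& v : verts) {
            float d = glm::dot(v, axis);
            lo = std::min(lo, d);
            hi = std::max(hi, d);
        }
    };
    float loA, hiA, loB, hiB;
    project(vertsA, loA, hiA);
    project(vertsB, loB, hiB);
    return hiA < loB || hiB < loA;   // disjoint intervals => separated
}

Run this over all candidate axes: the first one that returns true proves the objects are disjoint, and if none does, the convex objects intersect.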
Also note that this only works for convex meshes, and isn't necessarily the quickest way to do these kinds of checks. You might want to look into XenoCollide or GJK instead; those are becoming standard.