Store Vertices DirectX C++ - c++

I'm currently implementing an Octree for my bachelor thesis project.
My Octree takes a std::vector as argument:
octree::Octree::Octree(std::vector<const D3DXVECTOR3*> vec) :
    m_vertices(),
    m_size(vec.size())  // size taken from the input vector, not the still-empty m_vertices
{
    for (std::size_t i = 0; i < m_size; ++i) {
        m_vertices.push_back(new D3DXVECTOR3(*vec.at(i)));
    }
}
I'm asking what is typically used to store the vertices before rendering them and performing culling tests and so on against them.
I kept this very simple for now; all I have is a function that renders a grid. Some snippets:
#define GRIDFVF (D3DFVF_XYZ | D3DFVF_DIFFUSE)
struct GridVertex {
    D3DXVECTOR3 position;
    DWORD color;
};
g_dev->SetTransform(D3DTS_WORLD, &matIdentity);
g_dev->SetStreamSource(0, g_buffer, 0, sizeof(GridVertex));
g_dev->SetTexture(0, NULL);
g_dev->DrawPrimitive(D3DPT_LINELIST, 0, GridSize * 4 + 2);
When rendering this I use my custom struct GridVertex, which stores a D3DXVECTOR3 for the position and a DWORD for the color value, and I tell the GPU about the layout by setting the flexible vertex format to GRIDFVF.
But in my Octree I only want to store the positions, to test whether certain vertices are inside nodes of my Octree and so on. Therefore I thought of creating another class called SceneManager, storing all values in an std::vector, and finally passing it to my Octree class, which performs the tests and afterwards passes the checked vertices to the GPU.
Would this be a solid solution, or what is commonly done to implement something like this?
Thanks in advance

Generally, one does not put the actual render geometry vertices themselves in the octree or whatever spatial partitioning structure one uses. That level of granularity is not useful, because if a set of vertices that make up a model spans partition nodes such that some subset of those vertices would be culled, you couldn't properly draw the model.
What you'd typically want to do is have an object representing an entity and its bounds within the world (axis-oriented bounding boxes, or bounding spheres, are simple and efficient bounding volumes, for example). Each entity is also associated with (or can be associated with by some other subsystem) rendering geometry. The entities themselves are sorted within the octree.
Then, you use your octree to determine which entities are visible, and submit all of their associated render geometry to the card.
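To illustrate the idea, here is a minimal sketch. The names (Entity, AABB, OctreeNode, meshId) are hypothetical, not from the question's code: an entity carries a bounding volume plus a handle to its render geometry, and the octree partitions entities rather than raw vertices.

```cpp
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };

// Axis-aligned bounding box: a simple, efficient bounding volume.
struct AABB {
    Vec3 min, max;
    // True if 'other' lies entirely inside this box.
    bool contains(const AABB& other) const {
        return other.min.x >= min.x && other.max.x <= max.x &&
               other.min.y >= min.y && other.max.y <= max.y &&
               other.min.z >= min.z && other.max.z <= max.z;
    }
};

struct Entity {
    AABB bounds;  // world-space bounding volume, used for culling
    int meshId;   // handle to the render geometry, resolved by another subsystem
};

// A node keeps the entities whose bounds fall inside its region; culling a
// node culls whole entities, so a model is drawn completely or not at all.
struct OctreeNode {
    AABB region;
    std::vector<const Entity*> entities;
    void tryInsert(const Entity& e) {
        if (region.contains(e.bounds)) entities.push_back(&e);
    }
};
```

When the octree reports a node as visible, you submit the render geometry of every entity it holds; the vertices themselves never live in the tree.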

Related

Can glDrawElements be used independently from polygon type?

Is the only solution grouping vertices into separate glDrawElements(GL_TRIANGLES, ...) and glDrawElements(GL_QUADS, ...) draw calls, or is there a way of sending data describing the number of polygon sides into the geometry shader and sorting out the polygon type inside the geometry shader?
https://i.stack.imgur.com/4Ee4e.jpg
What you see as my output console is the output of my mesh structure. I have:
vector <float> vert_data;
unsigned int faces_no;
vector <unsigned int> indices_on_face;
vector <unsigned int> indices;
The first is basically exactly what is sent to the OpenGL buffer: coordinates, normals, colors, etc. The second says how many faces are described in this data. The third gives the number of vertices in each polygon, in order. The fourth holds the indices (glBufferData(GL_ELEMENT_ARRAY_BUFFER, ...)). So basically I am looking for a way to send the third one into the geometry shader.
I know it is possible to order face types using a flag while importing from Assimp, but that would lose the face order. And it still wouldn't let me draw everything with a single draw call, so I would have to create a bunch of draw functions for every type of polygon :(
Maybe something like this would be possible: first change every indices_on_face[i] by adding all previous ones to it. Set the first vertex number that is drawn during the draw call. Inside the geometry shader, compare the current vertex number with indices_on_face[i], which would tell it when to generate a polygon out of the vertices. Does gl_VertexID hold a number dependent on the count of passed vertices, independent of indices?
How can I formulate a draw call that will fit it?
No, there is no way to do what you want. A primitive includes the count of vertices for each base primitive, and that doesn't get to change within a rendering command.
Also, a geometry shader would almost certainly slow down your rendering rather than speed it up, particularly compared to the standard solution: just send triangles. Break up any non-triangular polygons into triangles, so you can just send them all in a single draw call. I mean, the hardware is going to do that for you regardless, so you may as well do it yourself.
Most mesh exporters will have an option to do this for you. And in those cases where they don't, Open Asset Importer can do it for you at load time by passing the aiProcess_Triangulate flag to your loading function.
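For convex polygons the split is a simple triangle fan. The helper below is an illustrative sketch, not Assimp's implementation; it works directly on the question's indices / faceSizes layout so everything afterwards can go into one glDrawElements(GL_TRIANGLES, ...) call.

```cpp
#include <cassert>
#include <vector>

// Split each convex polygon into a triangle fan. 'indices' holds the
// polygons' vertex indices back to back, 'faceSizes' the vertex count of
// each polygon, matching the question's mesh structure.
std::vector<unsigned> triangulate(const std::vector<unsigned>& indices,
                                  const std::vector<unsigned>& faceSizes) {
    std::vector<unsigned> tris;
    std::size_t offset = 0;
    for (unsigned n : faceSizes) {
        // Fan around the first vertex: (v0, v1, v2), (v0, v2, v3), ...
        for (unsigned k = 1; k + 1 < n; ++k) {
            tris.push_back(indices[offset]);
            tris.push_back(indices[offset + k]);
            tris.push_back(indices[offset + k + 1]);
        }
        offset += n;
    }
    return tris;
}
```

A quad {0,1,2,3} becomes the two triangles (0,1,2) and (0,2,3); triangles pass through unchanged. Concave polygons need a proper triangulation (e.g. ear clipping), which is exactly what the importer's triangulation step handles for you.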

Select object in OpenGL when doing transformations in the vertex shader

I'm pretty new to OpenGL and am trying to implement a simple program where I can draw cubes, move them around with the mouse, and delete them.
Previously I had done my drag operations by translating on the CPU. In this way I was able to use ray-tracing to pick out the element I wanted because the vertices themselves were being updated.
However, I'm trying to move all of the transformations to the GPU and in doing so realized that I would then be giving up updated access to the vertices on the CPU (as the CPU still thinks the vertices are the un-transformed ones). How does one do this communication so that I wouldn't have to manually do transformations on the CPU as well as in the Vertex Shader?
No matter where you're doing your transformations, you will typically have a model matrix that describes where each object is in the scene. Instead of transforming each object into world space just so you can check for intersection with a world-space ray, you can also transform the ray into the object space of each object by transforming the ray with the inverse model matrix.
One general issue with ray-tracing is that, as your scene gets larger, brute force testing of each object will get increasingly slow. You can use acceleration structures like an Octree or a Bounding Volume Hierarchy to speed things up. A completely different approach when it comes to picking would be just render an ID buffer, i.e. a buffer that has the same resolution as your currently rendered frame and for each pixel saves the ID of the object that is visible at that pixel. Then you can simply read back the value of the pixel underneath the cursor to find out what object you hit without the need to do any raytracing. Rendering the ID buffer could be done as a separate pass or can likely just be added as an additional render target to a pass you're already doing, e.g., prefilling the depth buffer or just when rendering the scene in case you only do one pass.
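A sketch of the inverse-transform trick, assuming the model matrix is a rigid transform (rotation plus translation), in which case the inverse is cheap: transpose the rotation and negate the translation. All names here are illustrative, not from the question's code.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Model matrix restricted to rotation + translation (no scale/shear).
struct RigidTransform {
    float r[3][3];  // rotation part
    Vec3 t;         // translation part
};

Vec3 mulRot(const float r[3][3], Vec3 v) {
    return { r[0][0]*v.x + r[0][1]*v.y + r[0][2]*v.z,
             r[1][0]*v.x + r[1][1]*v.y + r[1][2]*v.z,
             r[2][0]*v.x + r[2][1]*v.y + r[2][2]*v.z };
}

// Transform a world-space ray into the object space of a model, so it can be
// intersected against the untransformed vertices the CPU still holds.
void rayToObjectSpace(const RigidTransform& m, Vec3& origin, Vec3& dir) {
    float rt[3][3];
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            rt[i][j] = m.r[j][i];  // inverse rotation = transpose
    Vec3 o = { origin.x - m.t.x, origin.y - m.t.y, origin.z - m.t.z };
    origin = mulRot(rt, o);
    dir = mulRot(rt, dir);         // directions ignore translation
}
```

For a general model matrix (with scale) you would invert the full 4x4 instead, e.g. glm::inverse(model), and remember that a transformed direction is no longer unit length.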

Direct2D Depth Buffer

I need to draw a list of shapes and I am using Direct2D. I get the list of shapes from a file. The list is sorted, and the order of the elements inside the file represents the order in which these shapes will be drawn. So if, for example, the file specifies two rectangles in the same position and with the same sizes, only the second one will be visible (since the first will be overwritten).
Given my list of shapes, I proceed to draw them in the following way:
list<Shape> shapes;
for (const auto& shape : shapes)
    shape.draw();
It is straightforward to see that with two shapes I cannot invert the order of the drawing operations; I must be sure that shape2 is always drawn after shape1, and so on. It follows that I cannot use multiple threads to draw my shapes, which is a huge disadvantage in terms of performance.
I read that Direct3D supports the depth buffer (or z-buffer), which stores for each pixel its z-coordinate, so that only the "visible" pixels (the ones closest to the viewer) are drawn, regardless of the order in which the shapes are drawn. And I have the depth information for each shape when I read the file.
Is there a way to use the depth buffer in Direct2D, or a similar technique which allows me the use of multiple threads to draw my shapes?
Is there a way to use the depth buffer in Direct2D, or a similar technique which allows me the use of multiple threads to draw my shapes?
The answer here is no. Although the Direct2D library is built on top of Direct3D, it doesn't expose such a feature to the user through the API, since the primitives you can draw are only described by two-dimensional coordinates. The last primitive you draw to the render target is guaranteed to be visible, so no depth testing takes place. Also, the depth buffer in Direct3D doesn't have much to do with multi-threading on the CPU side.
Also note that even if you issue drawing commands from multiple threads, they will be serialized by the Direct3D driver and performed sequentially. Some newer graphics APIs like Direct3D 12 and Vulkan do provide multithreaded drivers, which allow you to effectively draw different content from different threads, but they come with higher complexity.
So eventually if you stick to Direct2D you are left with the option of drawing each shape sequentially using a single thread.
What can be done, however, is eliminating the effectively occluded shapes by testing each shape for occlusion against all the others, so occluded shapes can be discarded from the list and never rendered at all. The catch is that some shapes do not fill their bounds rect entirely, due to transparent regions (like text) or because the shape is a complex polygon. Such shapes cannot be easily tested, or will need more complex algorithms.
So you have to iterate through all shapes and, only if the current shape is a rectangle, perform occlusion testing against all previous shapes' bounds rects.
The following code should be considered pseudo-code; it is intended just to demonstrate the idea.
#define RECTANGLE 0
#define TEXT 1
#define TRIANGLE 2
//etc
typedef struct {
    int type;          // we have a type field
    Rect bounds_rect;  // bounds rect
    Rect coordinates;  // coordinates, whose count varies according to shape type
    // probably you have many other fields here
} Shape;
//We have all shapes in a vector
std::vector<Shape> shapes;
Iterate all shapes.
for (int i = 1; i < shapes.size(); i++) {
    if (shapes[i].type != RECTANGLE) {
        // We will not perform testing if the current shape is not a rectangle.
        continue;
    }
    for (int j = 0; j < i; j++) {
        if (isOccluded(&shapes[j], &shapes[i])) {
            // shapes[j] is totally invisible, so remove it from the 'shapes' list
        }
    }
}
Occlusion testing looks something like this:
bool isOccluded(Shape *a, Shape *b) {
    return (a->bounds_rect.left > b->coordinates.left && a->bounds_rect.right < b->coordinates.right &&
            a->bounds_rect.top > b->coordinates.top && a->bounds_rect.bottom < b->coordinates.bottom);
}
And you don't have to iterate over all shapes with a single thread; you can create multiple threads to perform the tests for different parts of the shape list. Of course you will need some locking mechanism, like a mutex, when deleting shapes from the list, but that is another topic.
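A compilable version of the same idea might look like this. Shape, Rect, and the containment test are simplified stand-ins for the pseudo-code above; deferring the removal to a mark-and-compact pass avoids invalidating indices mid-loop.

```cpp
#include <cassert>
#include <vector>

// Simplified stand-ins for the pseudo-code's types.
struct Rect { float left, top, right, bottom; };
enum ShapeType { RECTANGLE, TEXT, TRIANGLE };

struct Shape {
    ShapeType type;
    Rect bounds;  // bounds rect of the shape
};

// True if 'inner' lies strictly inside 'outer'.
bool contains(const Rect& outer, const Rect& inner) {
    return inner.left > outer.left && inner.right < outer.right &&
           inner.top > outer.top && inner.bottom < outer.bottom;
}

// Remove every shape fully covered by an opaque rectangle drawn later.
void removeOccluded(std::vector<Shape>& shapes) {
    std::vector<bool> dead(shapes.size(), false);
    for (std::size_t i = 1; i < shapes.size(); ++i) {
        if (shapes[i].type != RECTANGLE) continue;  // only rectangles fill their bounds
        for (std::size_t j = 0; j < i; ++j)
            if (!dead[j] && contains(shapes[i].bounds, shapes[j].bounds))
                dead[j] = true;
    }
    // Compact the list in place, keeping the surviving shapes in order.
    std::size_t k = 0;
    for (std::size_t i = 0; i < shapes.size(); ++i)
        if (!dead[i]) shapes[k++] = shapes[i];
    shapes.resize(k);
}
```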
The depth buffer is used to discard fragments that would be occluded by something in front of them in 3D space, saving drawing time by not bothering with pixels that won't be seen anyway. Think of a scene with a tall, thin candle in front of a ball facing the camera: the entire ball is not drawn first with the candle drawn over it; only the visible parts of the ball are drawn. This is why the order of drawing does not matter.
I have not heard of the use of a depth buffer in D2D, as it is somewhat meaningless; everything is drawn onto one plane in D2D, so how could something be in front of or behind something else? The API may support it, but I doubt it, as it makes no conceptual sense. The depth information on each shape is essentially just the order to draw it in, which you already have.
Instead, what you could do is divide and allocate your shapes to your threads while maintaining order, i.e.
t1 { shape1, shape2, shape3 } = shape123
t2 { shape4, shape5, shape6 } = shape456
...
And draw the shapes onto a new object (but not the backbuffer); depending on your shape class, you may be able to represent the result as a shape. This will leave you with t-many shapes which are still in order but have been computed in parallel. You can then gradually compose your final result by drawing the results in order, i.e.
t1 { shape123, shape456, shape789 }
t2 { shape101112, shape131415 }
t1 { shape123456789, shape101112131415 } = final shape
Now that you have the final shape, you can just draw it as normal.
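The divide-and-compose idea can be sketched with stand-in types: each thread "draws" a contiguous run of shapes into its own intermediate layer, and the layers are composed in order on the main thread, preserving z-order. Here a "layer" is just a string so the sketch stays self-contained; in the real thing it would be an intermediate render target.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <thread>
#include <vector>

// "Draw" a contiguous run of shapes [begin, end) into one intermediate layer.
std::string drawGroup(const std::vector<std::string>& shapes,
                      std::size_t begin, std::size_t end) {
    std::string layer;
    for (std::size_t i = begin; i < end; ++i)
        layer += shapes[i];  // stand-in for drawing onto the layer
    return layer;
}

std::string renderParallel(const std::vector<std::string>& shapes,
                           std::size_t groups) {
    std::vector<std::string> layers(groups);
    std::vector<std::thread> workers;
    std::size_t per = (shapes.size() + groups - 1) / groups;
    for (std::size_t g = 0; g < groups; ++g) {
        std::size_t b = std::min(g * per, shapes.size());
        std::size_t e = std::min(b + per, shapes.size());
        workers.emplace_back([&, g, b, e] { layers[g] = drawGroup(shapes, b, e); });
    }
    for (auto& w : workers) w.join();
    // Compose the intermediate layers back-to-front, in their original order,
    // which keeps the final result identical to sequential drawing.
    std::string result;
    for (const auto& l : layers) result += l;
    return result;
}
```

Whether this wins anything in practice depends on how expensive the per-shape drawing is versus the cost of the extra composition pass and intermediate targets.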

store matrix from gl_triangles

I am just getting into game programming and have adapted a marching cubes example to fit my needs. I am rendering Goursat's Surface using marching cubes. Currently I am using a slightly adapted version of marching cubes. The code that I am looking at calls:
glPushMatrix();
glBegin(GL_TRIANGLES);
vMarchingCubes(); // generate the mesh
glEnd();
glPopMatrix();
Every time that it renders! This gives me a framerate of ~20 fps. Since I am only using marching cubes to construct the mesh, I don't want to reconstruct the mesh every time that I render. I want to save the matrix (which has an unknown size though I could compute the size if necessary).
I have found a way to store and load a matrix in another Stack Overflow question, but it mentions that this is not the preferred way of doing it in OpenGL 4.0. I am wondering what the modern way of doing this is. If it is relevant, I would prefer something that is also available in OpenGL ES (if possible).
I want to save the matrix (which has an unknown size though I could compute the size if necessary).
You don't want to store the "matrix". You want to store the mesh. Also, what's causing the poor performance is the use of immediate mode (glBegin, glVertex, glEnd).
What you need is a so-called Vertex Buffer Object holding all the triangle data. Isosurfaces call for vertex sharing, so you need to preprocess the data a little. First you put all your vertices into a key→value structure (a C++ std::map, for example) with the vertex as the key, or into a set (boost::set), which is effectively the same as a map but with a better memory layout for tasks like this. Every time you encounter a new unique vertex, you assign it an index and append the vertex to a vertex array. Also, for every vertex you append to a faces array the index it was assigned (maybe already earlier):
Along the lines of this (Pseudocode):
vertex_index = 0
vertex_array = new array<vertex>
vertex_set = new set<vertex, unsigned int>
faces_array = new array<unsigned int>

foreach t in triangles:
    foreach v in t.vertices:
        if not vertex_set.has_key(v):
            vertex_set.add( v, vertex_index )
            vertex_array.append( v )
            vertex_index += 1
        faces_array.append( vertex_set(v) )
You can now upload the vertex_array into a GL_ARRAY_BUFFER buffer object and faces_array into a GL_ELEMENT_ARRAY_BUFFER buffer object. With that in place you can then do the usual glVertex…Pointer … glDrawElements stanza to draw the whole thing. See http://www.opengl.org/wiki/VBO_-_just_examples and other tutorials on VBOs for details.
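The indexing step above can be written in compilable C++ like this (names are illustrative); every unique vertex gets one slot in the vertex array, and the faces array references vertices by index, ready for the GL_ARRAY_BUFFER / GL_ELEMENT_ARRAY_BUFFER upload.

```cpp
#include <cassert>
#include <map>
#include <vector>

struct Vertex {
    float x, y, z;
    // Strict weak ordering so Vertex can be a std::map key.
    bool operator<(const Vertex& o) const {
        if (x != o.x) return x < o.x;
        if (y != o.y) return y < o.y;
        return z < o.z;
    }
};

// Turn a triangle soup (3 vertices per triangle, duplicates included) into a
// deduplicated vertex array plus an index (faces) array.
void buildIndexedMesh(const std::vector<Vertex>& triangleSoup,
                      std::vector<Vertex>& vertices,
                      std::vector<unsigned>& faces) {
    std::map<Vertex, unsigned> seen;
    for (const Vertex& v : triangleSoup) {
        auto it = seen.find(v);
        if (it == seen.end()) {
            // New unique vertex: assign it the next index and store it once.
            it = seen.emplace(v, static_cast<unsigned>(vertices.size())).first;
            vertices.push_back(v);
        }
        faces.push_back(it->second);
    }
}
```

Note that exact float comparison only merges vertices that are bit-identical, which is fine for marching cubes output where shared corners are computed from the same grid cell edges; otherwise you would quantize or use an epsilon.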

How to implement joints and bones in openGL?

I am in the process of rolling my own openGL framework, and know how to draw 3d objects ... etc...
But how do you define relationships between 3d objects that may have a joint?
Or how do you define the 3d object as being a "bone"?
Are there any good resources?
As OpenGL is only a graphics library and not a 3D modeling framework, the task of defining and using "bones" falls onto you.
There are different ways of actually implementing it, but the general idea is:
You treat each part of your model as a bone (e.g. head, torso, lower legs, upper legs, etc).
Each bone has a parent which it is connected to (e.g. the parent of the lower left leg is the upper left leg).
Thus each bone has a number of children.
Now you define each bone's position relative to its parent bone. When displaying a bone, you multiply its relative transform with the parent bone's accumulated transform to get its absolute position.
To visualize:
Think of it as a doll: when you grab the doll's arm and move it around, the relative position (and rotation) of the hand won't change, but its absolute position WILL change, because you've moved one of its parents.
When I tried skeletal animations, I learned most of it from this link:
http://content.gpwiki.org/index.php/OpenGL:Tutorials:Basic_Bones_System
But how do you define relationships between 3d objects that may have a joint?
OpenGL does not care about these things. It's a pure drawing API, so it's up to you to unleash your creativity and define such structures yourself. The usual approach to skeletal animation is a bone/rig system, where each bone has an orientation (represented by a quaternion or a 3×3 matrix), a length, and a list of further bones attached to it, i.e. some kind of tree.
I'd define this structure as
typedef float quaternion[4];

struct Bone {
    quaternion orientation;
    float length;
    int n_subbones;
    Bone *subbones;
};
In addition to that you need a pivot from where the rig starts. I'd do it like this
typedef float vec3[3];

struct GeomObjectBase {
    vec3 position;
    quaternion orientation;
};

struct BoneRig {
    struct GeomObjectBase gob;
    struct Bone pivot_bone;
};
Next you need some functions that iterate through this structure and generate the matrix palette from it, so that it can be applied to the model mesh.
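The traversal itself can be sketched as a depth-first walk that accumulates each bone's transform from its parent. To stay self-contained, this sketch reduces the "transform" to a translation distance along the parent chain (length only, orientation omitted); a real implementation would compose the full orientation matrices in exactly the same recursive pattern.

```cpp
#include <cassert>
#include <vector>

// Minimal bone: the offset it adds along the chain, plus its children.
struct Bone {
    float length;
    std::vector<Bone> children;
};

// Depth-first walk: each bone's absolute transform is its parent's absolute
// transform composed with its own relative one. Here "compose" is addition;
// with orientations it would be a matrix (or quaternion) multiply.
void buildPalette(const Bone& bone, float parentAbs, std::vector<float>& palette) {
    float abs = parentAbs + bone.length;
    palette.push_back(abs);                 // this bone's entry in the palette
    for (const Bone& child : bone.children)
        buildPalette(child, abs, palette);  // children inherit the accumulated transform
}
```

The resulting palette, one transform per bone, is what you would then hand to the vertex shader (or apply on the CPU) to skin the mesh.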
note: I'm using freeglut
Totally irrelevant