I am working with mesh materials and I would like to be able to get them by ID rather than by array index. I haven't found anything about using other data structures for this, so if anyone could point me in the right direction, that would be great, specifically anything resembling C++-style maps.
The reason I need the material to be read in the shader by material ID is that I am doing deferred rendering and trying to avoid sending separate textures for the specular, ambient and diffuse colours. Instead, I store the material ID in the alpha channel of my normal texture and use that ID to find which material each fragment should use in the Phong lighting calculation. The problem is that I currently have to loop through the array of materials, checking whether each entry's ID matches the ID from the normal texture, which is not ideal. If I could make the material ID the key used to access the data, it would be more efficient: no loop, just a lookup. I could pass one material per mesh, in mesh order, to my lighting program, but that would mean passing a bigger array, because my 29 unique meshes reuse materials from an array of only 7 materials. I would be sending at least 29 materials instead of just 7.
You can pass arbitrary data to OpenGL, so you could send a structure with your data stored as a map and implement the retrieval logic in your shader. But GLSL has no templated classes and no recursion, so it would have to be much simpler than the STL's map<>. You can't have pointers either, so behind your "map" there would really just be a const array indexed by the key.
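To make that concrete, here is a minimal sketch of the "const array behind the map" idea, using 7 materials as in your scene. The struct layout, the uniform names (uMaterials, gNormal) and the ID packing scheme are illustrative assumptions, not taken from your code; the point is that the ID recovered from the alpha channel indexes the array directly, with no loop:

// C++ side: the GLSL lives in a raw string for illustration.
const char* lightingFragmentSrc = R"GLSL(
#version 330 core

struct Material {
    vec3  ambient;
    vec3  diffuse;
    vec3  specular;
    float shininess;
};

const int MAX_MATERIALS = 7;
uniform Material  uMaterials[MAX_MATERIALS]; // the 7 unique materials, uploaded once
uniform sampler2D gNormal;                   // xyz = normal, w = material ID

in vec2 vUV;
out vec4 fragColor;

void main() {
    vec4 n = texture(gNormal, vUV);
    // Assumes the ID was packed as id / 255.0 when the G-buffer was written;
    // the clamp guards against rounding past the last slot.
    int id = clamp(int(n.w * 255.0 + 0.5), 0, MAX_MATERIALS - 1);
    Material m = uMaterials[id];
    // ... Phong lighting using m.ambient / m.diffuse / m.specular ...
    fragColor = vec4(m.diffuse, 1.0);
}
)GLSL";

With only 7 unique materials this stays far below the uniform component limits, and each fragment does a single constant-time fetch instead of a search.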
Your question is too broad to answer precisely, but usually, if you want to send material information, it should be part of the shader data. I'm not sure why you would need to retrieve a whole material structure once you're in the shader.
I basically have the same question as the one asked here:
Dynamically sized arrays in OpenGL ES vertex shader; used for blend shapes/morph targets.
In particular, that asker's last, unanswered follow-up question bothers me too.
So I also want to use an arbitrary number of blend shapes for each mesh I'm processing. At the moment I'm using a fixed number and treating the shapes as vertex attributes. The advantage there is that I always have the relevant data for the current vertex available. Now, if I want to use an arbitrary number of shapes, I figured I'd use SSBOs, since their whole point is exactly what I want: dynamically sized data.
However, as far as I understand it, an SSBO is one fixed block of memory, and each vertex processed in the shader sees the blend shape data for the whole mesh. That means I would have to introduce some kind of counter and pick the correct piece of data out of my SSBO for each vertex.
Is this understanding correct?
I'm really not sure whether this is the optimal solution, maybe you can give me some hints.
Yes, your understanding is correct.
You can use gl_VertexID, or just pass some 'vertex number' as an attribute, to know which data to load from your SSBO for the particular vertex you're processing.
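For illustration, a minimal sketch of that approach (GL 4.3+ for SSBOs; the flat "uNumShapes deltas per vertex" layout and all names here are assumptions for the example, not your actual format):

const char* blendShapeVertexSrc = R"GLSL(
#version 430 core

layout(location = 0) in vec3 aPosition;

layout(std430, binding = 0) buffer BlendShapes {
    vec4 deltas[];   // flattened: deltas[vertex * uNumShapes + shape]
};

uniform int   uNumShapes;
uniform float uWeights[16];  // per-shape weights; 16 is an arbitrary cap
uniform mat4  uMVP;

void main() {
    vec3 p = aPosition;
    for (int s = 0; s < uNumShapes; ++s) {
        // gl_VertexID picks out this vertex's slice of the shared buffer,
        // so no extra counter attribute is needed for this layout.
        p += uWeights[s] * deltas[gl_VertexID * uNumShapes + s].xyz;
    }
    gl_Position = uMVP * vec4(p, 1.0);
}
)GLSL";

Note that with indexed drawing, gl_VertexID is the value read from the index buffer, which is exactly what you want here.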
I built a 2D graphics engine and created a batching system for it, so if I have 1000 sprites with the same texture, I can draw them with one single call to OpenGL.
This is achieved by putting the vertices of all the sprites that share a texture into a single VBO.
Instead of "draw these vertices, draw these vertices, draw these vertices", I do "put all the vertices together, then draw once", just to be very clear.
Easy enough, but now I'm trying to achieve the same thing in 3D, and I'm having a big problem.
The problem is that I'm using a Model View Projection matrix to place and render my models, which is the common approach to render a model in 3D space.
For each model on screen, I need to pass the MVP matrix to the shader, so that I can use it to transform each vertex to the correct position.
If I did the transformation outside the shader, it would be executed by the CPU, which is not a good idea, for obvious reasons.
And that is exactly where the problem lies: I need to pass the matrix to the shader, but for each model the matrix is different.
So I cannot do the same thing I did with the 2D sprites, because changing a shader uniform forces a separate draw call for every model.
I hope I've been clear; maybe you have a good idea I didn't have, or you have already run into the same problem. I know for a fact that there is a solution somewhere, because in engines like Unity you can use the same shader for multiple models and still get away with one draw call.
There exists a feature exactly like what you're looking for, and it's called instancing. With instancing, you store n matrices (or whatever else you need) in a Uniform Buffer and call glDrawElementsInstanced to draw n copies. In the shader, you get an extra input gl_InstanceID, with which you index into the Uniform Buffer to fetch the matrix you need for that particular instance.
You can read more about instancing here: https://www.opengl.org/wiki/Vertex_Rendering#Instancing
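As a minimal sketch of that setup (the binding point, block name and the 256-matrix cap are illustrative; 256 mat4s is exactly the 16 KB minimum that GL_MAX_UNIFORM_BLOCK_SIZE guarantees):

#include <GL/glew.h>
#include <vector>

void drawInstanced(GLuint vao, GLuint program, GLsizei indexCount,
                   const std::vector<float>& mvpMatrices) // 16 floats per instance
{
    GLsizei instanceCount = (GLsizei)(mvpMatrices.size() / 16);

    // Upload all MVP matrices into one uniform buffer at binding point 0.
    GLuint ubo;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, mvpMatrices.size() * sizeof(float),
                 mvpMatrices.data(), GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);

    // Vertex shader side (GLSL), for reference; on pre-4.2 GL set the
    // binding with glUniformBlockBinding instead of "binding = 0":
    //   layout(std140, binding = 0) uniform Matrices { mat4 mvp[256]; };
    //   gl_Position = mvp[gl_InstanceID] * vec4(aPosition, 1.0);

    glUseProgram(program);
    glBindVertexArray(vao);
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                            nullptr, instanceCount);

    glDeleteBuffers(1, &ubo);
}

In real code you would keep the UBO around and re-upload it each frame rather than creating and deleting it per draw.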
The answer depends on whether the vertex data for each item is identical or not. If it is, you can use instancing as in @orost's answer, using glDrawElementsInstanced and gl_InstanceID within the vertex shader, and that method should be preferred.
However, if each 3D model requires different vertex data (which is frequently the case), you can still render them using a single draw call. To do this, you would add another stream into your vertex data with glVertexAttribPointer (and glEnableVertexAttribArray). This extra stream would contain the index of the matrix within the uniform buffer that vertex should use when rendering - so each mesh within the VBO would have an identical index in the extra stream. The uniform buffer contains the same data as in the instancing setup.
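A minimal sketch of that extra stream (attribute location 3 is an arbitrary choice; glVertexAttribIPointer, GL 3.0+, keeps the index an integer in the shader instead of converting it to float):

#include <GL/glew.h>
#include <vector>

// perVertexMatrixIndex holds, for every vertex in the batched VBO, the slot
// of its object's matrix in the uniform buffer; all vertices of one mesh
// share the same value.
void setupMatrixIndexStream(const std::vector<GLuint>& perVertexMatrixIndex)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER,
                 perVertexMatrixIndex.size() * sizeof(GLuint),
                 perVertexMatrixIndex.data(), GL_STATIC_DRAW);

    glEnableVertexAttribArray(3);
    glVertexAttribIPointer(3, 1, GL_UNSIGNED_INT, 0, nullptr);

    // Vertex shader side (GLSL), for reference:
    //   layout(location = 3) in uint aMatrixIndex;
    //   layout(std140) uniform Matrices { mat4 mvp[256]; };
    //   gl_Position = mvp[aMatrixIndex] * vec4(aPosition, 1.0);
}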
Note this method may require some extra CPU processing if you need to redo the batching, for example when an object within a batch should no longer be rendered. If that happens frequently, it is worth measuring whether batching the items is actually beneficial.
Besides instancing and adding another vertex attribute as some object ID, I'd like to also mention another strategy (which requires modern OpenGL, though):
The extension ARB_multi_draw_indirect (in core since GL 4.3) adds indirect drawing commands. These commands source their parameters (number of vertices, starting index and so on) directly from another buffer object. With these functions, many different objects can be drawn with a single draw call.
However, as you still want some per-object state such as transformation matrices, that feature alone is not enough. But in combination with ARB_shader_draw_parameters (in core since GL 4.6), you get the gl_DrawID parameter, which is incremented by one for each object in a single multi draw indirect call. That way, you can index into some UBO, TBO, or SSBO (or whatever) where you store your per-object data.
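A minimal sketch of the combination (the command struct layout is fixed by the spec; the SSBO binding point is an illustrative choice, and the shared index/vertex buffers are assumed to be set up in the bound VAO already):

#include <GL/glew.h>
#include <vector>

// Layout mandated by OpenGL for glMultiDrawElementsIndirect.
struct DrawElementsIndirectCommand {
    GLuint count;         // number of indices for this object
    GLuint instanceCount; // usually 1
    GLuint firstIndex;    // offset into the shared index buffer
    GLint  baseVertex;    // offset into the shared vertex buffer
    GLuint baseInstance;
};

void multiDrawIndirect(const std::vector<DrawElementsIndirectCommand>& cmds)
{
    GLuint dib;
    glGenBuffers(1, &dib);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, dib);
    glBufferData(GL_DRAW_INDIRECT_BUFFER,
                 cmds.size() * sizeof(DrawElementsIndirectCommand),
                 cmds.data(), GL_STATIC_DRAW);

    // Vertex shader side (GLSL), for reference:
    //   layout(std430, binding = 1) buffer PerObject { mat4 mvp[]; };
    //   gl_Position = mvp[gl_DrawID] * vec4(aPosition, 1.0);

    // stride 0 = the commands are tightly packed.
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                nullptr, (GLsizei)cmds.size(), 0);
    glDeleteBuffers(1, &dib);
}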
I want to make a couple of vec4 vertex attributes in my shaders. I've done quite a bit of googling, but I can't seem to find consistent information for specifically what I want to do.
My goal here is to move skinning to the GPU, so I need a list of bone indices and weights per vertex, which is why I want to use vertex attributes. I have two arrays that represent this data. Basically this:
weightsBuffer = new float[vSize*4];
indexesBuffer = new int[vSize*4];
The part that I can't consistently find is how to upload these and use them in the shader. To be clear, I don't want to upload all the position, normal and texture coordinate data, I'm already using display lists and have decided to keep using them for a few reasons that aren't relevant. How can I create the buffers and bind them properly so I can use them?
Thanks.
Binding your bone weights and indices is no different a process than binding your position data. Assuming the data is generated properly in your buffers, you use glBindAttribLocation (before linking) to bind an attribute index in your vertex stream to your shader variable, and glVertexAttribPointer to define the vertex array (and don't forget glEnableVertexAttribArray).
The exact code may vary, depending on whether you're using VAOs and VBOs (or just client buffers). If you want a more specific answer, you should provide your code and shader.
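In the absence of your code, here is a minimal sketch under some assumptions: a context where generic vertex attributes are available alongside your display lists, attribute locations 4 and 5 chosen arbitrarily, and the two arrays laid out exactly as you declared them (four weights and four indices per vertex):

#include <GL/glew.h>

void uploadSkinningAttributes(GLuint program, int vSize,
                              const float* weightsBuffer,
                              const int*   indexesBuffer)
{
    // glBindAttribLocation must happen before linking
    // (or use layout(location = ...) in the shader instead).
    glBindAttribLocation(program, 4, "aBoneWeights");
    glBindAttribLocation(program, 5, "aBoneIndices");
    glLinkProgram(program);

    GLuint vbos[2];
    glGenBuffers(2, vbos);

    glBindBuffer(GL_ARRAY_BUFFER, vbos[0]);
    glBufferData(GL_ARRAY_BUFFER, vSize * 4 * sizeof(float),
                 weightsBuffer, GL_STATIC_DRAW);
    glEnableVertexAttribArray(4);
    glVertexAttribPointer(4, 4, GL_FLOAT, GL_FALSE, 0, nullptr);

    glBindBuffer(GL_ARRAY_BUFFER, vbos[1]);
    glBufferData(GL_ARRAY_BUFFER, vSize * 4 * sizeof(int),
                 indexesBuffer, GL_STATIC_DRAW);
    glEnableVertexAttribArray(5);
    // The integer variant keeps the bone indices as ints in the shader.
    glVertexAttribIPointer(5, 4, GL_INT, 0, nullptr);

    // Shader side (GLSL): in vec4 aBoneWeights; in ivec4 aBoneIndices;
}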
In an STL file there are facet normals, followed by a list of vertices. In some STL files I work with there are many duplicates of the same vertex; a file with 5 million vertices usually contains around 30 duplicates of each one. For example, in a cylinder cut out of a cube, a single vertex can be shared by 20 triangles.
For this reason, I like to store the vertices in a hash table, which lets me upload an index set of vertices per triangle and reduces a mesh from 5 million vertices to around 900k.
This, however, creates a problem with the normals: deduplication keeps only the first instance of each vertex, so that vertex ends up with the facet normal of the first facet it appeared in.
What is the fastest way to store a vertex normal that will work for all of the facets it belongs to in the file, or, is this just not possible?
A vertex is not just the position, a vertex is the whole tuple of its associated attributes. The normal is a vertex attribute. If vertices differ in any of their attributes, they're different vertices.
While it's perfectly possible to decompose the vertex attributes into multiple sets and use an intermediate indexing structure, this kind of data format is hard or even impossible for GPUs to process, and also very cumbersome to work with. OpenGL, for example, cannot use it directly.
Deduplication of certain vertex attributes (like the normal or other properties shared across vertices) makes sense only for storing the data. When you want to work with it, you normally expand it.
The data structure you have right now is what you want. Don't try to "optimize" it. Also, even at 5 million vertices with two attributes (position and normal, three 4-byte floats each), that's about 120 MB of data. Modern computers have gigabytes of RAM, so that's not really a problem.
The only straightforward approach in OpenGL is to create a vertex for each unique combination of position and normal. Depending on your data, this can still give you a very substantial reduction in the number of vertices. But if your data does not contain repeated vertices that share both position and normal, it will not help.
To validate if this will work for your data, you can extend the approach you already tried. Instead of using the 3 vertex coordinates as the key into your hash table, you use 6 values: the 3 vertex coordinates, and the 3 normal components.
If the number of entries in your hash table is significantly smaller than the original number of vertices, indexed rendering will be beneficial. You can then assign an index to each unique position/normal combination stored in the hash table, and use these indices to build the index buffer as well as the vertex buffer.
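A minimal sketch of that extended hash table (all names here are illustrative; a production version would likely quantize the floats slightly so near-identical values hash together):

#include <array>
#include <cstdint>
#include <cstring>
#include <unordered_map>
#include <vector>

struct Vertex { float px, py, pz, nx, ny, nz; };

// FNV-1a over the raw bits of the six floats.
struct VertexKeyHash {
    size_t operator()(const std::array<float, 6>& k) const {
        uint64_t h = 1469598103934665603ull;
        for (float f : k) {
            uint32_t bits;
            std::memcpy(&bits, &f, sizeof bits);
            h = (h ^ bits) * 1099511628211ull;
        }
        return (size_t)h;
    }
};

void buildIndexedMesh(const std::vector<Vertex>& raw,       // 3 per facet, in file order
                      std::vector<Vertex>& outVertices,     // deduplicated
                      std::vector<uint32_t>& outIndices)    // 3 per triangle
{
    std::unordered_map<std::array<float, 6>, uint32_t, VertexKeyHash> seen;
    for (const Vertex& v : raw) {
        std::array<float, 6> key{v.px, v.py, v.pz, v.nx, v.ny, v.nz};
        auto it = seen.find(key);
        if (it == seen.end()) {
            it = seen.emplace(key, (uint32_t)outVertices.size()).first;
            outVertices.push_back(v);
        }
        outIndices.push_back(it->second);
    }
}

If outVertices ends up much smaller than raw, indexed rendering pays off; if it stays close to 5 million, your data simply has few shared position/normal pairs.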
Beyond that, AMD defined an extension to support separate indices for different attributes, but this will not be useful if you want to keep your code portable: GL_AMD_interleaved_elements.
I've got a question about how to organise my graphics resources/objects. I'm importing scenes from 3D Studio using AssImp. At the moment I'm processing the AssImp structures into my own so I can stream them to and from disk more quickly than AssImp can. Anyway, I'm also processing the scene materials, which gives me a set of choices I'm not too sure how to best handle.
I have the following classes for materials:
Material
Contains things like the ambient and diffuse colours, specular power and so on. It also contains an array of maps, with each slot in the array being a "channel" (i.e. slot 0 is always the ambient map, slot 1 diffuse, slot 2 gloss and so on).
Map
An instance of a map, with parameters such as wrap mode and a pointer to a texture object.
Texture
An OpenGL texture map.
Sampler
An OpenGL sampler object.
Now I'm wondering how to handle configuring samplers for any given map in a material. Technically each map can have different sampler parameters: a sampler has UV wrap modes, min and mag filters, and anisotropy. Some of these parameters are defined in the scene (wrap modes) and others are defined as engine parameters (filtering, anisotropy).
What I'm thinking about doing is creating one sampler object per map (not per texture; remember, a map points to a texture). Then when I render an object whose material has map X, it will automatically bind the sampler for map X to whichever channel it occupies. As it's possible there will be thousands of maps, I'm wondering whether the corresponding thousands of samplers are a good idea.
Another way of doing it would be to add samplers to my resource dictionary so they can be shared by any map that happens to have the same parameters.
Does anyone have an opinion on how best to manage things like this?
This is really not a hard problem.
How many different kinds of samplers could you possibly have? If your filtering is a global concept, then the main difference between samplers will be in the wrap modes. There are three dimensions of wrapping (S, T, R) and 4 possible wrapping types (CLAMP_TO_EDGE, CLAMP_TO_BORDER, REPEAT, and MIRROR_REPEAT). Even if every axis varied independently, that's at most 4^3 = 64 possible samplers, and in practice nearly all textures use the same mode on every axis, leaving only a handful you would actually create.
CLAMP_TO_BORDER requires a border color, so that would require a unique sampler object (unless many textures share the same border color, which is not unreasonable. Black is a popular choice). And if you're doing depth comparison sampling, then you'd need a few more sampler objects.
So just have a sampler object manager that you can ask for a sampler from. You give it some parameters and it regurgitates a sampler, either one that already exists or one it created just for you. If you need a unique one (for border colors), you can ask for a unique one to be given to you. Otherwise, if two callers ask for the same parameters, they get the same sampler.
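A minimal sketch of such a manager (GL 3.3+ sampler objects; the key covers only wrap modes and filters, and a fuller version would fold in border color, anisotropy and comparison settings, handing out unique samplers where needed):

#include <GL/glew.h>
#include <map>
#include <tuple>

class SamplerCache {
public:
    using Key = std::tuple<GLenum, GLenum, GLenum,  // wrap S, T, R
                           GLenum, GLenum>;         // min filter, mag filter

    GLuint get(GLenum wrapS, GLenum wrapT, GLenum wrapR,
               GLenum minFilter, GLenum magFilter)
    {
        Key key{wrapS, wrapT, wrapR, minFilter, magFilter};
        auto it = cache_.find(key);
        if (it != cache_.end())
            return it->second;  // same parameters, same shared sampler

        GLuint sampler;
        glGenSamplers(1, &sampler);
        glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, wrapS);
        glSamplerParameteri(sampler, GL_TEXTURE_WRAP_T, wrapT);
        glSamplerParameteri(sampler, GL_TEXTURE_WRAP_R, wrapR);
        glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, minFilter);
        glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, magFilter);
        cache_[key] = sampler;
        return sampler;
    }

private:
    std::map<Key, GLuint> cache_;
};

Each map then stores the GLuint it got back, and binding it at draw time is just glBindSampler(unit, sampler).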