How to use vertex, normal and texture indices together? - c++

I'm writing a small rendering engine using C++ and DirectX 11. I've managed to render models using indexed vertices. Now I want to support textures and vertex normals as well. I'm using Wavefront OBJ files, which use indices to reference the correct texture and normal coordinates (much like indexed vertices). This is the first time I'm doing something like this with vertices, textures and normals combined, so this is all a bit new to me.
The problem I'm facing is that the numbers of vertex, normal and texture-coordinate indices are not the same, and I'm trying to find the right way to use all of these indices in a vertex shader. To illustrate the problem more clearly, I've made some images.
The left image is a wireframe of a simple pyramid object and the right image is its UV coordinate layout (with the bottom of the pyramid in the center). In the left image you can see that the pyramid has 4 vertices, so there are 4 vertex indices. The right UV layout has 6 UV coordinates, so there are 6 texture indices. Each face of the pyramid has a normal of its own, so there are only 4 normal indices.
I've found this question while searching on SO and it seems to work in theory (I haven't tried it yet). Is this a common way to solve a problem like this or are there better ways? What would you recommend I do?
Thanks :)
P.S.: my vertex, normal and texture data is stored in separate arrays on the CPU side.

I would recommend analyzing the problem by narrowing it down to the simplest case; what helped me was using only two squares (a flat surface, 4 triangles) and trying to map the texture correctly onto both squares, as shown in the image.
Using this simple case you have this data:
6 vertices
6 UVs
6 normals (one per vertex, even though some vertices may share the same normal)
12 indices
In theory this suffices to describe the whole mesh, but if you test it you will find that the texture won't be fully mapped onto one of the squares. That's because vertices 2 and 3 each need two different UV coordinates; that is impossible to describe with so few vertices, so you need more vertices!
To fix this, whatever exporter you choose will duplicate vertices when clashes like this occur. For the example above, two more vertices are needed; this duplicates some normal data and adds 2 more elements to the index array, but now the UV coordinates are mapped correctly.
Finally, in your pyramid example those 4 vertices can't describe all 6 UV coordinates either, so some vertices will need to be duplicated to complete the UV layout, and the index array will grow accordingly.
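To make that duplication concrete, here is a minimal sketch of the expansion, assuming the OBJ file has already been parsed into separate position/UV/normal arrays plus per-corner index triples (all type and function names are illustrative, not from your code). It emits one combined vertex per unique (position, uv, normal) triple and builds a single index buffer, which is the layout D3D11 expects.

```cpp
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

struct Float2 { float u, v; };
struct Float3 { float x, y, z; };

// One combined vertex, laid out the way the GPU will consume it.
struct Vertex {
    Float3 position;
    Float3 normal;
    Float2 uv;
};

// One face corner as the OBJ file describes it: three indices into three separate arrays.
struct Corner { uint32_t posIdx, uvIdx, normIdx; };

// Expand OBJ-style corners into a single vertex array plus a single index array.
// Identical (position, uv, normal) triples are reused; triples that share a position
// but differ in uv or normal produce duplicated vertices, which is exactly the
// growth described above.
void buildBuffers(const std::vector<Float3>& positions,
                  const std::vector<Float2>& uvs,
                  const std::vector<Float3>& normals,
                  const std::vector<Corner>& corners,       // 3 per triangle
                  std::vector<Vertex>&       outVertices,
                  std::vector<uint32_t>&     outIndices)
{
    std::map<std::tuple<uint32_t, uint32_t, uint32_t>, uint32_t> seen;
    for (const Corner& c : corners) {
        const auto key = std::make_tuple(c.posIdx, c.uvIdx, c.normIdx);
        auto it = seen.find(key);
        if (it == seen.end()) {
            it = seen.emplace(key, static_cast<uint32_t>(outVertices.size())).first;
            outVertices.push_back({ positions[c.posIdx], normals[c.normIdx], uvs[c.uvIdx] });
        }
        outIndices.push_back(it->second);
    }
}
```

outVertices then goes into the vertex buffer and outIndices into the index buffer, and the vertex shader never needs to know that positions, UVs and normals were indexed separately in the file.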
The part I'm not sure about is whether the OBJ exporter actually outputs the data ready to read in this form. You can either figure that out with your exporter, or (what I'd rather recommend) try another format that exports data already in the right layout, ready to be pushed into the DirectX 11 graphics pipeline.
I hope the example that helped me understand this problem will help you as well.
Take a look at some basic DirectX 11 tutorials, or, if you can, this book.

Ah, it seems to me that you need to visit: http://www.rastertek.com/tutdx11.html
See what they have there and give it a whirl.

Related

How would I store vertex, color, and index data separately? (OpenGL)

I'm starting to learn openGL (working with version 3.3) with intent to get a small 3d falling sand simulation up, akin to this:
https://www.youtube.com/watch?v=R3Ji8J2Kprw&t=41s
I have a little experience with setting up a voxel environment like Minecraft from some Udemy tutorials for Unity, but I want to build something simple from the ground up and not deal with all the systems already laid on top of things with Unity.
The first issue I've run into comes early. I want to build a system for rendering quads, because instancing a ton of cubes is ridiculously inefficient. I also want to be efficient with storage of vertices, colors, etc. Thus far, in the OpenGL tutorials I've worked with, the way to do this is to store each vertex in a float array with both position and color data, and set up the buffer object to read every set of six entries as three floats for position and three for color, using glVertexAttribPointer. The problem is that for each neighboring quad, the same vertices will be repeated, because if they belong to different "blocks" they will be different colors, and I want to avoid this.
What I want to do instead, to make things more efficient, is store the vertices of a cube in one int array (positions will all be ints), then add each quad of the terrain to an indices array (which will probably turn into each chunk's mesh later on). The indices array will store each quad's position, and a separate array will store each quad's color. I'm a little confused about how to set this up since I am rather new to OpenGL, but I know this should be doable based on what other people have done with Minecraft clones, if not even easier since I don't need textures.
I just really want to get the framework for the chunks, blocks, world, etc. up and running so that I can get to the fun stuff like adding new elements. If anyone is able to verify that this is a sensible way to do this (lol) and offer guidance on how to set it up in the rendering, I would very much appreciate it.
Thus far, in the OpenGL tutorials I've worked with, the way to do this is to store each vertex in a float array with both position and color data, and set up the buffer object to read every set of six entries as three floats for position and three for color, using glVertexAttribPointer. The problem is that for each neighboring quad, the same vertices will be repeated, because if they belong to different "blocks" they will be different colors, and I want to avoid this.
Yes, and perhaps there's a reason for that. You seem to be trying to save... what, a few bytes of RAM? Your graphics card has 8 GB of RAM on it; what it doesn't have is a general processing unit or an unlimited bus to do random lookups in other buffers for every single rendered pixel.
The indices array will store each quad's position, and a separate array will store each quad's color.
If you insist on doing it this way, nothing's stopping you. You don't even need the quad vertices; you can synthesize them in a geometry shader.
Just fill a buffer with X|Y|Width|Height|Color(RGB) entries, described with glVertexAttribPointer like you already know. Then run a geometry shader to synthesize two triangles (a quad) for each entry in your input buffer, have your vertex shader project it to world units (you mentioned integers, so you're not in world units initially), and let your fragment shader color each rasterized pixel according to its color entry.
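For illustration only, a rough sketch of what such a geometry shader could look like (GLSL kept in a C++ raw string for glShaderSource(); the #version, all identifiers and the uViewProj uniform are assumptions, and the projection is folded into the geometry shader for brevity):

```cpp
// Each input point carries a rect (x, y, w, h) and a color; the shader expands
// it into a 4-vertex triangle strip, i.e. one quad made of two triangles.
static const char* kQuadGeometryShader = R"GLSL(
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

in vec4 vRect[];    // x, y, width, height forwarded by the vertex shader
in vec3 vColor[];
out vec3 gColor;

uniform mat4 uViewProj;   // maps your integer grid coordinates to clip space

void main() {
    vec2 p = vRect[0].xy;
    vec2 s = vRect[0].zw;
    gColor = vColor[0];
    gl_Position = uViewProj * vec4(p,                  0.0, 1.0); EmitVertex();
    gl_Position = uViewProj * vec4(p + vec2(s.x, 0.0), 0.0, 1.0); EmitVertex();
    gl_Position = uViewProj * vec4(p + vec2(0.0, s.y), 0.0, 1.0); EmitVertex();
    gl_Position = uViewProj * vec4(p + s,              0.0, 1.0); EmitVertex();
    EndPrimitive();
}
)GLSL";
```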
ridiculously inefficient
Indeed, if that sounds ridiculously inefficient to you, it's because it is. You're essentially packing your data on the CPU, transferring it to the GPU, unpacking it and then processing it as normal. You can skip at least two of the steps, and even more if you consider that vertex shader outputs get cached within rasterized primitives.
There are many more variations of this insanity, like:
store vertex positions unpacked as normal, and store an index for the colors. Then store the colors in a linear buffer of some kind (texture, SSBO, generic buffer, etc) and look up each color index. That's even more inefficient, but it's closer to the algorithm you were suggesting.
store vertex positions for one quad and set up instanced rendering with a multi-draw command and a buffer to feed individual instance data (positions and colors). If you also have textures, you can use bindless textures for each quad instance. It's still rendering multiple objects, but it's slightly more optimized by your graphics driver.
or just store per-vertex data in a buffer and render it. Done. No pre-computations, no unlimited expansions, no crazy code, you have your vertex data and you render it.
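For comparison, that last option is also by far the simplest to set up. A minimal sketch of the plain interleaved position+color layout the tutorials describe (names are illustrative; assumes GLEW, a GL 3.3 context and an already-bound VAO):

```cpp
#include <cstddef>
#include <vector>
#include <GL/glew.h>

struct Vertex { float x, y, z; float r, g, b; };   // 6 floats per vertex, as in the tutorials

// Upload the quad vertices once and describe the two attributes.
GLuint uploadVertices(const std::vector<Vertex>& verts)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex),
                 verts.data(), GL_STATIC_DRAW);

    // attribute 0: position, attribute 1: color, both interleaved in the same buffer
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)0);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (void*)offsetof(Vertex, r));
    glEnableVertexAttribArray(1);
    return vbo;
}
```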

OBJ File: Why are there so many normal indices in comparison to normal values?

So I am reading OBJ files and building lists to store the vertices, texture coordinates, normals, and the vertex, texture and normal indices. Testing this with a cube results in 8 vertices and 6 normals, but there are 24 items in the normal index list.
Can anyone explain why this is as I am a little confused right now?
The OBJ format is designed for Gouraud shading, with one normal per vertex, rather than flat shading with one normal per polygon; each face therefore lists a normal index for every one of its corners. Pick a cube face, go through the file data and write down the normal indices and normals. You'll see why.
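For illustration, a hypothetical fragment of such a cube OBJ (the indices are made up; faces are written as v/vt/vn): there are only 6 vn lines, but each of the 6 face lines references a normal index once per corner, giving 6 × 4 = 24 normal index entries.

```
vn 0.0 0.0 1.0             # one of only 6 normals in the file
f 1/1/1 2/2/1 3/3/1 4/4/1  # one face: 4 corners, normal index 1 repeated 4 times
f 5/5/2 8/6/2 7/7/2 6/8/2  # next face uses normal index 2, and so on for all 6 faces
```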

How to multiply vertices with model matrix outside the vertex shader

I am using OpenGL ES2 to render a fairly large number of mostly 2D items, and so far I have gotten away with sending a premultiplied model/view/projection matrix to the vertex shader as a uniform and then multiplying my vertices with the resulting MVP in there.
All items are batched using texture atlases and I use one MVP per batch. So all my vertices are relative to the translation of that MVP.
Now I want to have rotation and scaling for each of the separate items, which means I need a different model matrix for each of them. So I modified my vertex format to include the model matrix (16 floats!) and added a mat4 attribute in my shader, and it all works well. But I'm kind of disappointed with this solution since it dramatically increased the vertex size.
So as I was staring at my screen trying to think of a different solution, I thought about transforming my vertices to world space before I send them over to the shader. Or even to screen space, if that's possible. The vertices I use are unnormalized coordinates in pixels.
So the question is, is such a thing possible? And if yes, how do you do it? I can't think why it shouldn't be, since it's just maths, but after a fairly long search on Google it doesn't look like a lot of people are actually doing this...
Strange, because if it is indeed possible, it would be quite a major optimization in cases like this one.
If the number of matrices per batch is limited, then you can pass all those matrices as uniforms (preferably in a UBO) and extend the vertex data with an index which specifies which matrix to use.
This is similar to GPU skinning used for skeletal animation.
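As a sketch of that idea in ES2 terms (GLSL ES 2.0 has no UBOs, so this uses a plain uniform array; all names and the array size of 32 are assumptions bounded by your uniform budget): each vertex carries a single float index instead of a whole mat4.

```cpp
// Vertex shader source kept in a C++ raw string for glShaderSource().
static const char* kBatchedVertexShader = R"(
attribute vec4 aPosition;
attribute vec2 aTexCoord;
attribute float aModelIndex;      // which item of the batch this vertex belongs to

uniform mat4 uViewProjection;     // shared by the whole batch
uniform mat4 uModel[32];          // per-item model matrices

varying vec2 vTexCoord;

void main() {
    vTexCoord   = aTexCoord;
    gl_Position = uViewProjection * uModel[int(aModelIndex)] * aPosition;
}
)";
```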

OpenGL - Access next 3 vertices in buffer from the vertex shader

I'm placing a bunch of square tiles around a world using 2 buffers fed from vector arrays, one for color and the other for position. The triangles' vertex colors aren't smooth, as they don't interpolate between the two tris in the square. To combat this I wanted to set each fragment's color individually, blending the colors of the vertices manually. I cannot substitute this process with premade textures either.
The issue I've come across is passing the next 3 vertices' positions and locations in the buffer into the vertex shader. Is there any easy way to do this?
Thanks and have a great day!
Add another set of attributes and set up glVertexAttribPointer to point into the vertex position buffer as well, but with an offset. Keep in mind to add a bit of dummy padding at the end, so that when you reach the end of the array you don't read out of bounds. The …_ADJACENCY drawing modes are also useful in this situation (if available).
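A minimal sketch of that trick (the attribute locations, the bound VAO and the padding are assumptions): the same position buffer is bound for a second attribute, offset by one vertex, so every vertex also receives its neighbour's position; offsets of two and three vertices follow the same pattern.

```cpp
// positionVbo holds tightly packed vec3 positions, padded with a few dummy
// vertices at the end so the offset attributes never read past the buffer.
const GLsizei stride = 3 * sizeof(float);

glBindBuffer(GL_ARRAY_BUFFER, positionVbo);

// location 0: this vertex's own position
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
glEnableVertexAttribArray(0);

// location 1: the next vertex's position (same buffer, shifted by one vertex)
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)(size_t)stride);
glEnableVertexAttribArray(1);

// locations 2 and 3 would use offsets of 2 * stride and 3 * stride
// to reach the remaining neighbours.
```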

Pairwise vertex attributes in OpenGL

I'm trying to visualise a graph with OpenGL. I have a vertex buffer with points in 3D space and an index buffer that specifies lines between vertices. I use glDrawElements to draw the graph, and everything works. The problem is that I need to visualise the edge weights. Edge weights are pairwise attributes, and I have no idea how to put this information into my shader. The only solution I can think of is drawing each edge individually with DrawRangeElements and setting the edge weight between every call. Is there a better way of doing this?
There's no need to employ a geometry shader. Just render them as GL_LINES, duplicating the positions as needed as well as providing the same "weight" attribute for each pair of verts on a line. This is ultimately no different from rendering a cube, where each face needs its own normals.
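A minimal sketch of that duplication on the CPU side (types and names are illustrative): every edge becomes two GL_LINES vertices carrying the same weight, and the expanded buffer is then drawn with glDrawArrays instead of glDrawElements.

```cpp
#include <vector>

struct Vec3       { float x, y, z; };
struct Edge       { unsigned a, b; float weight; };   // the pairwise attribute lives here
struct LineVertex { Vec3 pos; float weight; };        // what the GL_LINES buffer stores

// Expand every weighted edge into two vertices that both carry the edge weight,
// so the interpolated "weight" varying is constant along each line.
std::vector<LineVertex> buildLineVertices(const std::vector<Vec3>& nodes,
                                          const std::vector<Edge>& edges)
{
    std::vector<LineVertex> out;
    out.reserve(edges.size() * 2);
    for (const Edge& e : edges) {
        out.push_back({ nodes[e.a], e.weight });
        out.push_back({ nodes[e.b], e.weight });
    }
    return out;
}
```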
If (and only if) you absolutely must have that memory back, and you can't simply compress your vertex data (using normalized shorts, unnormalized shorts, or whatever), here are some techniques you can use. Be warned: this is a memory-vs-performance tradeoff. So unless you have real memory pressures, just duplicate your vertex data and get it over with.