Multiple base vertices per instanced draw - OpenGL

I am searching for a way to do a glDrawElementsInstancedBaseVertex call, but with a different base vertex for each instance.
Basically I have to render a lot of cubes (bounding boxes) which have different model space coordinates.
Each cube has its own modelToCamera matrix, which is passed to the vertex shader via instanced array attributes.
The problem is that I have a list of 16 indices to render, which are the same for every cube except for their base vertex, and I want to render all the cubes in a single draw call without storing 16 * numberOfCubes indices.
So, is there a way to change the base vertex for each instance?

No, you can't do that. Furthermore, there's no reason to want to in your case either. If each cube has its own "model space coordinates" and its own "modelToCamera matrix"... then you have redundant information.
All cubes are similar to one another. The only difference between one cube and another is the transform of it: scale, rotation, translation. You can take a unit cube and apply a transform to it to turn it into any other cube.
Since you're applying a different transform to each cube, then all your cubes could be just the same initial vertices repeated over and over.
However, don't expect this to give you much performance increase; instancing for tiny objects like a cube generally doesn't help much.
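As a concrete illustration: with a single unit cube, one glDrawElementsInstanced call is enough. Below is a minimal sketch, assuming GL 3.3+, an existing VAO with the unit-cube vertex and index buffers bound, a matrixVBO holding one column-major mat4 per cube, and a hypothetical attribute location MATRIX_LOC; the primitive type and index type are only placeholders for whatever your 16-index cube actually uses.

#include <GL/glew.h>
#include <stddef.h>

#define MATRIX_LOC 3          /* hypothetical location of the mat4 attribute */
#define CUBE_INDEX_COUNT 16   /* the 16 indices mentioned in the question    */

void setupInstancedMatrices(GLuint vao, GLuint matrixVBO)
{
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, matrixVBO);

    /* a mat4 attribute occupies four consecutive attribute slots, one per column */
    for (int col = 0; col < 4; ++col) {
        GLuint loc = MATRIX_LOC + col;
        glEnableVertexAttribArray(loc);
        glVertexAttribPointer(loc, 4, GL_FLOAT, GL_FALSE,
                              16 * sizeof(GLfloat),
                              (const void *)(sizeof(GLfloat) * 4 * col));
        glVertexAttribDivisor(loc, 1);  /* advance once per instance, not per vertex */
    }
}

void drawCubes(GLuint vao, GLsizei numberOfCubes)
{
    glBindVertexArray(vao);
    /* one draw call: the same 16 indices reused for every instance; GL_LINE_STRIP
     * is only a guess at the bounding-box primitive - use whatever your list encodes */
    glDrawElementsInstanced(GL_LINE_STRIP, CUBE_INDEX_COUNT,
                            GL_UNSIGNED_SHORT, (const void *)0, numberOfCubes);
}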

Related

Quad texture stretching on OpenGL

So when drawing a rectangle on OpenGL, if you give the corners of the rectangle texture coordinates of (0,0), (1,0), (1,1) and (0, 1), you'll get the standard rectangle.
However, if you turn it into something that's not rectangular, you'll get a weird stretching effect. Just like the following:
I saw from the page below that this can be fixed, but the solution given works only for trapezoids. Also, I have to do this over many rectangles.
And so, the question is: what is the proper and most efficient way to get the right "4D" texture coordinates for drawing stretched quads?
Implementations are allowed to decompose quads into two triangles, and if you visualize this as two triangles, you can immediately see why it interpolates texture coordinates the way it does. That texture mapping is correct ... for two independent triangles.
That diagonal seam coincides with the edge of two independently interpolated triangles.
Projective texturing can help as you already know, but ultimately the real problem here is simply interpolation across two triangles instead of a single quad. You will find that while modifying the Q coordinate may help with mapping a texture onto your quadrilateral, interpolating other attributes such as colors will still have serious issues.
If you have access to fragment shaders and instanced vertex arrays (probably rules out OpenGL ES), there is a full implementation of quadrilateral vertex attribute interpolation here. (You can modify the shader to work without "instanced arrays", but it will require either 4x as much data in your vertex array or a geometry shader).
Incidentally, texture coordinates in OpenGL are always "4D". It just happens that if you use something like glTexCoord2f (s, t) that r is assigned 0.0 and q is assigned 1.0. That behavior applies to all vertex attributes; vertex attributes are all 4D whether you explicitly define all 4 of the coordinates or not.
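For the quad case specifically, the usual Q-coordinate trick is to scale each corner's (s, t) by a per-corner weight derived from the quad's diagonals and submit the result as a 4D coordinate, so the rasterizer interpolates projectively across both triangles. A minimal sketch follows, assuming a planar convex quad v0..v3 given in winding order with the usual corner UVs; the helper names are made up for illustration.

#include <math.h>

typedef struct { float x, y; } vec2;

/* intersection of the quad's diagonals p0-p2 and p1-p3 */
static vec2 diagonalIntersection(vec2 p0, vec2 p1, vec2 p2, vec2 p3)
{
    float dx1 = p2.x - p0.x, dy1 = p2.y - p0.y;
    float dx2 = p3.x - p1.x, dy2 = p3.y - p1.y;
    float denom = dx1 * dy2 - dy1 * dx2;
    float t = ((p1.x - p0.x) * dy2 - (p1.y - p0.y) * dx2) / denom;
    vec2 c = { p0.x + t * dx1, p0.y + t * dy1 };
    return c;
}

/* outTexCoord[i] receives the (s, t, r, q) coordinate for corner i */
void computeQuadTexCoords(const vec2 v[4], const vec2 uv[4], float outTexCoord[4][4])
{
    vec2 c = diagonalIntersection(v[0], v[1], v[2], v[3]);
    float d[4];
    for (int i = 0; i < 4; ++i)
        d[i] = hypotf(v[i].x - c.x, v[i].y - c.y);

    for (int i = 0; i < 4; ++i) {
        int opposite = (i + 2) % 4;
        float q = (d[i] + d[opposite]) / d[opposite];  /* q = 1 for a parallelogram */
        outTexCoord[i][0] = uv[i].x * q;   /* s */
        outTexCoord[i][1] = uv[i].y * q;   /* t */
        outTexCoord[i][2] = 0.0f;          /* r */
        outTexCoord[i][3] = q;             /* q */
    }
}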

How should I use glNormal() for a vertex shared between a triangle and a quad?

Let there be a vertex that is part of both a triangle and a quad.
To my best understanding, the normal of that vertex is the average of the normal of the quad and the normal of the triangle.
The triangle is drawn before the quad. When should I call glNormal and with what vector?
Should I call glNormal 2 times, each time with the same vector (the average normal vector)?
Should I call glNormal the last time the vertex is drawn, with the average normal vector?
To my best understanding, the normal of that vertex is the average of the normal of the quad and the normal of the triangle.
Ideally, the normal vector should be orthogonal to the surface you are rendering at any point. However, the GL supports rendering surfaces only as polygonal models (at least directly). So there are two principal possibilities:
The polygonal representation does exactly represent the object you want to visualize. A simple example would be a cube.
The polygonal representation is just a (piecewise linear) approximation of the surface you want to visualize. Think of smooth surfaces.
In case 1, you need one normal per triangle (as the normal is unchanging across a flat surface defined by a triangle). However, this means that for neighboring triangles that share an edge or corner, the normals will have to be different. From GL's point of view, each of the triangles uses different vertices, even if those vertices share the same position in space. A vertex is the set of all attributes, not just the position. For the cube, that means you will need not just 8 different vertices, but 24, so that you have 3 at each corner.
In case 2, you want to cover up the polygonal structure of the model as well as possible. One aspect of this is using smooth shading techniques. Averaging the normals of the adjacent triangles at each vertex is one heuristic for doing so. In this case, neighboring primitives actually can share vertices, as both the normal and the position of a corner point are the same for every triangle connected to it.
This heuristic has some drawbacks, especially if your surface contains both smooth parts and "sharp edges" you want to preserve. There are improved heuristics which try to detect sharp edges and split the vertices there, allowing different normals for the connected triangles so that such edges are not smoothed out. But all such heuristics can fail in some cases - ideally, the normals are provided when the model is created in the first place.
The triangle is drawn before the quad. When should I call glNormal and with what vector?
OpenGL is a state machine, meaning that things you set keep that value until you change them again - and setting normals is no exception. The second thing to note is that normals are a vertex attribute. So for every vertex, every attribute always has some value (but depending on the rest of your GL state, not all of these attributes are used when rendering).
Since you use the fixed-function GL, normals are built-in vertex attributes - so every vertex you issue in some way has some value for its normal attribute. In immediate-mode rendering with glBegin()/glEnd(), it will be the one you set with the most recent glNormal() call (or the initial default value if you never called glNormal()).
So to answer your question:
You have to set that normal before you issue the glVertex() call for that particular vertex the first time, and you have to re-issue the glNormal() call the second time you draw with "this" vertex (which technically is a different vertex anyway) if you changed the normal in between while specifying other vertices.
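As a concrete illustration of the above, here is a minimal immediate-mode sketch, assuming a shared corner position sharedPos and a pre-averaged normal avgNormal (all names are placeholders for your own data):

#include <GL/gl.h>

void drawSharedCorner(const GLfloat sharedPos[3], const GLfloat avgNormal[3],
                      const GLfloat tri[2][3], const GLfloat quad[3][3])
{
    glBegin(GL_TRIANGLES);
        glNormal3fv(avgNormal);       /* set before the shared vertex is issued */
        glVertex3fv(sharedPos);
        glVertex3fv(tri[0]);          /* these two reuse avgNormal; in a real model */
        glVertex3fv(tri[1]);          /* they would get their own glNormal3fv calls */
    glEnd();

    glBegin(GL_QUADS);
        glNormal3fv(avgNormal);       /* re-issue it: the current normal may have   */
        glVertex3fv(sharedPos);       /* changed while other vertices were drawn    */
        glVertex3fv(quad[0]);
        glVertex3fv(quad[1]);
        glVertex3fv(quad[2]);
    glEnd();
}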
To my best understanding, the normal of that vertex is the average of the normal of the quad and the normal of the triangle.
No. The normal of a plane is a vector pointing 'out of' the plane at a 90 degree angle. In OpenGL, this is used in shading calculations, and to support various effects, OpenGL lets you specify whatever normal you want instead of calculating it from the primitive. For flat lighting, the normal should be set to the mathematical definition of the normal for each primitive, while for smooth lighting, the normal should be set to the average normal of all primitives that share the vertex.
glNormal sets a value in OpenGL that is read whenever you call glVertex, and is persistent until you call glNormal again. So this code
glBegin(GL_QUADS);        /* immediate-mode drawing */
glNormal3d(0, 0, 1);      /* becomes the current normal for every following vertex */
glVertex3d(1, 0, 0);
glVertex3d(1, 1, 0);
glVertex3d(0, 1, 0);
glVertex3d(0, 0, 0);
glEnd();
specifies 4 vertices, each with a normal of (0,0,1).
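For the smooth-lighting case described above, the averaging itself might look like the following sketch (helper names invented for illustration): each face normal comes from a cross product of two edge vectors, and the shared vertex gets their normalized sum.

#include <math.h>

typedef struct { double x, y, z; } vec3;

/* normal of a flat face from three of its corners (counter-clockwise order assumed) */
static vec3 faceNormal(vec3 a, vec3 b, vec3 c)
{
    vec3 ab = { b.x - a.x, b.y - a.y, b.z - a.z };
    vec3 ac = { c.x - a.x, c.y - a.y, c.z - a.z };
    vec3 n  = { ab.y * ac.z - ab.z * ac.y,     /* cross product ab x ac */
                ab.z * ac.x - ab.x * ac.z,
                ab.x * ac.y - ab.y * ac.x };
    double len = sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    n.x /= len; n.y /= len; n.z /= len;
    return n;
}

/* normal to pass to glNormal3d() at the vertex shared by the triangle and the quad */
vec3 sharedVertexNormal(vec3 t0, vec3 t1, vec3 t2,    /* triangle corners            */
                        vec3 q0, vec3 q1, vec3 q2)    /* three of the quad's corners */
{
    vec3 nt  = faceNormal(t0, t1, t2);
    vec3 nq  = faceNormal(q0, q1, q2);
    vec3 s   = { nt.x + nq.x, nt.y + nq.y, nt.z + nq.z };
    double len = sqrt(s.x * s.x + s.y * s.y + s.z * s.z);
    s.x /= len; s.y /= len; s.z /= len;
    return s;
}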

OpenGL: Multi-texturing an array of "linked" quads

I recently completed my system for loading an array of quads into VBOs. This system allows quads to share vertices in order to save a substantial amount of memory. For example, an array of 100x100 quads would use 100x100x4=40000 vertices normally (4 vertices per quad), but with this system, it would only use 101x101=10201 vertices. That is a huge amount of space saving when you get into even larger scales.
My problem is that in order to texture each quad individually, each vertex needs a "UV" coordinate pair (or "ST" coordinate) to map one part of the texture to. This leads to the problem: how do I texture each quad independently of the others? Even if two identically textured quads are next to each other, I cannot use the same texture coordinate for both of the quads. This is illustrated below:
*Each quad being 16x16 pixels in dimension and the texture coordinates having a range of 0 to 1.
To make things even more complicated, some quads in the array might not even be there (because that part of the terrain is just an empty block). So as you might have guessed, this is for a rendering engine for those 2D tile games everyone is trying to make.
Is there a way to texture the quads using the vertex-saving technique, or will I have to trash this method and use the far less efficient one?
You can't.
Vertices in OpenGL are a collection of data. They may contain positions, but they also contain texture coordinates or other things. Every vertex, every collection of position/coordinate/etc, must be unique. So if you need to pair the same position with different texture coordinates, then you have different vertices.
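To make that "collection of data" point concrete, here is a tiny sketch (names invented for illustration): the whole struct is the unit of uniqueness, so a grid corner reused with two different UVs becomes two separate entries in the vertex buffer.

typedef struct {
    float position[2];   /* grid corner position, possibly equal for two entries */
    float uv[2];         /* per-quad texture coordinate, NOT shared              */
} Vertex;

/* e.g. the corner between two adjacent 16x16 tiles appears twice:            */
/* Vertex a = { {16.0f, 0.0f}, {1.0f, 0.0f} };  right edge of the left tile   */
/* Vertex b = { {16.0f, 0.0f}, {0.0f, 0.0f} };  left edge of the right tile   */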

OpenGL: Using shaders to create vertex lighting by using pre-calculated colormap?

First of all, I have very little knowledge of what shaders can do, and I am very interested in doing vertex lighting. I am attempting to use a 3D colormap which would be used to calculate the vertex color at that position in the world, and also to interpolate the color using the nearby colors from the colormap.
I can't use typical OpenGL lighting because it's probably too slow and there are a lot of lights I need to render. I am going to "render" the lights into the colormap first, and then I could manually map every vertex drawn to the corresponding color from the colormap.
...Or I could somehow automate this process, so I wouldn't have to change the color values of the vertices myself - perhaps a shader could do this for me?
The question is... is this possible, and if it is, what do I need to know to make it possible?
Edit: Note that I also need to update the lightmap efficiently, without caring about the size of the lightmap, so the update should touch only the specific part of the lightmap I want to update.
It almost sounds like what you want to do is render the lights to your color map, then use your color map as a texture, but instead of decal mode set it to modulate mode, so it's multiplied with the existing color instead of just replacing it.
That is different in one way though: instead of just affecting the vertexes, it'll map to the individual fragments (pixels, in essence).
Edit: What I had in mind wasn't a 3D texture -- it was a cube map. Basically, create a virtual cube surrounding everything in your "world". Create a 2D texture for each face of that cube. Render your coloring to the cube map. Then, to color a vertex you (virtually) extend a ray outward from the center, through the vertex, to the cube. The pixel you hit on the cube map gives you the color of lighting for that vertex.
Updating should be relatively efficient -- you have normal 2D textures for the top, bottom, front, etc., and you update them as needed.
If you can't use the fixed-function pipeline functionality, the best way to do per-vertex lighting is to do all the lighting calculations per vertex in the vertex shader; when you then pass the result on to the fragment shader, it will be correctly interpolated across the face.
Another way to deal with performance issues when using a lot of light sources is to use deferred rendering, as it will only do lighting calculations on the geometry that is actually visible.
That is possible, but will not be effective on current hardware.
You want to render light volumes into a 3D texture. The rasterizer works on a 2D surface, so your volume has to be split along one of the axes. The split can be done in one of the following ways:
Different draw calls for each split
Instanced draw, with layer selection based on gl_InstanceID (will require a geometry shader)
Branching in the geometry shader directly, from a single draw call
In order to implement it, I would suggest reading the GL 3 specification and examples. It's not going to be easy, nor will the result be fast enough for complex scenes.
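As a sketch of the first option listed above (one draw call per split), assuming GL 3.0+, an existing FBO, an existing 3D texture lightVolumeTex, and a hypothetical renderLightsForSlice() that draws one slice's light contribution:

#include <GL/glew.h>

void renderLightsForSlice(int layer);   /* hypothetical: draws one slice's lighting */

void renderLightVolume(GLuint fbo, GLuint lightVolumeTex, int depth)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    for (int layer = 0; layer < depth; ++layer) {
        /* attach one Z slice of the 3D texture as the color target */
        glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  lightVolumeTex, 0, layer);
        renderLightsForSlice(layer);    /* one draw call (or more) per split */
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}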

Avoiding glBindTexture() calls?

My game renders lots of cubes which randomly have one of 12 textures. I already Z-order the geometry, so I can't just render all the cubes with texture 1, then 2, then 3, etc., because that would defeat the Z ordering. I already keep track of the previously bound texture and, if it is the same, I do not call glBindTexture, but there are still way too many calls to it. What else can I do?
Thanks
The ultimate and fastest way would be to have an array of textures (normal ones or cube maps), then dynamically fetch the texture slice according to an id stored in each cube's instance data, or in the cube-face data (if you want a different texture on a per-face basis), using the GLSL built-ins gl_InstanceID or gl_PrimitiveID.
With this implementation you would bind your texture array just once.
This would of course require use of the gpu_shader4 and texture_array extensions:
http://developer.download.nvidia.com/opengl/specs/GL_EXT_gpu_shader4.txt
http://developer.download.nvidia.com/opengl/specs/GL_EXT_texture_array.txt
I have used this mechanism (with D3D10, but the same principle applies) and it worked very well.
I had to map different textures onto sprites (3D points of a constant screen size of 9x9 or 15x15 pixels, IIRC), each indicating a different meaning to the user.
Edit:
If you don't feel comfortable with all the shader stuff, I would simply sort the cubes by texture and not Z-order the geometry, then measure the performance gain.
Also, I would try to add a pre-Z pass where you render all your cubes to the Z buffer only, then render the normal scene, and see if it speeds things up (if you are fragment bound, it could help).
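A minimal sketch of the texture-array setup described above, assuming GL 3.0+ (or EXT_texture_array), 12 same-sized RGBA8 textures, and a hypothetical pixelDataForTexture() loader:

#include <GL/glew.h>

const void *pixelDataForTexture(int layer);   /* hypothetical loader: RGBA8 pixels */

GLuint createCubeTextureArray(int width, int height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);

    /* allocate all 12 layers at once, then fill them one by one */
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, width, height, 12,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    for (int layer = 0; layer < 12; ++layer)
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer, width, height, 1,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixelDataForTexture(layer));

    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* bind this once per frame; the shader selects the layer per cube or per face */
    return tex;
}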
You can pack your textures into one texture and offset the texture coordinates accordingly.
glMatrixMode(GL_TEXTURE) will also allow you to perform transformations on the texture space (to avoid changing all the texture coordinates).
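For the texture-matrix idea, here is a small sketch, assuming the 12 textures are packed into a hypothetical 4x3 atlas; the same 0..1 quad coordinates then land on tile (tileX, tileY):

#include <GL/gl.h>

void selectAtlasTile(int tileX, int tileY)
{
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glTranslatef(tileX / 4.0f, tileY / 3.0f, 0.0f);  /* move to the tile's corner   */
    glScalef(1.0f / 4.0f, 1.0f / 3.0f, 1.0f);        /* shrink 0..1 onto one tile   */
    glMatrixMode(GL_MODELVIEW);                      /* restore the usual matrix mode */
}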
Also from NVIDIA:
Bindless Graphics