How do I use GLTriangleBatch - c++

I am developing a real-world application that renders solid white polygons on the screen and changes their alpha values as time passes. I am trying to learn how to use GLTriangleBatch for this (I know C++ well, but not much about OpenGL), and I have not been able to find any good examples online. So far, I have found a few resources that led me to put together the following bit of code, but I am still confused about some portions:
GLTriangleBatch myTriangles;
myTriangles.BeginMesh(1000000); // Include how many vertices you have: 1???
myTriangles.AddTriangle(...); // Add a triangle: 2???
// Repeat for all triangles for this batch
myTriangles.End(); // Done adding triangles
myTriangles.Draw(); // Draw the triangles
I have marked the portions I am confused on with ???, and the questions appear below:
Question 1: Does this number of vertices have to be exact, or is it okay to over-estimate or under-estimate it? Furthermore, does this number represent the "batched" vertices or the individual vertices of each triangle (e.g. would 2 connected triangles with 2 shared vertices require 6 vertices or 4)?
Question 2: The parameters are:
M3DVector3f verts[3]: I assume the 3 vertices of the triangle
M3DVector3f vNorms: What is this, and how do I set it up?
M3DVector2f vTexCoords[3]: I assume this has something to do with textures. All I want is a white triangle whose alpha value I can vary. How would I set this up?

The GLTriangleBatch class is apparently part of a framework provided with the OpenGL SuperBible book. It is not part of the OpenGL standard (OpenGL itself is a plain C API).
As for the first question, I assume the nMaxVerts parameter of BeginMesh() is the upper limit on the number of vertices you want to draw (each triangle = 3 vertices). So you can set it to either an exact value or an over-estimate, but you'd have to find some documentation or look at the source to know for sure.
For the second question, yes verts[3] are the three positions that make the triangle.
vNorms is the face normal of the triangle; it is used for lighting calculations. The last parameter, vTexCoords[3], holds the texture coordinates of the triangle, which are used for texture mapping. You can set them to zero if you are not applying a texture to your mesh.
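Putting this together, here is a minimal sketch of the whole flow. I am assuming the GLTools headers from the SuperBible framework and the per-vertex form M3DVector3f vNorms[3] for AddTriangle; check the GLTriangleBatch header in your copy, since the exact prototype may differ between editions.

#include <GLTools.h>   // GLTriangleBatch, M3DVector3f, m3dLoadVector3

GLTriangleBatch myTriangles;

void BuildBatch()
{
    myTriangles.BeginMesh(3);            // upper bound: 3 corners for one triangle

    M3DVector3f verts[3], norms[3];
    M3DVector2f tex[3];

    // One triangle in the z = 0 plane
    m3dLoadVector3(verts[0], -1.0f, 0.0f, 0.0f);
    m3dLoadVector3(verts[1],  1.0f, 0.0f, 0.0f);
    m3dLoadVector3(verts[2],  0.0f, 1.0f, 0.0f);

    for (int i = 0; i < 3; ++i) {
        m3dLoadVector3(norms[i], 0.0f, 0.0f, 1.0f); // flat triangle facing +z
        tex[i][0] = tex[i][1] = 0.0f;               // unused: no texture applied
    }

    myTriangles.AddTriangle(verts, norms, tex);
    myTriangles.End();                   // done adding; builds the internal buffers
}

// Each frame: bind a shader that takes a color uniform (e.g. one of the book's
// stock shaders) with white plus your time-varying alpha, then:
//     myTriangles.Draw();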

Related

Is there a way to implement something like marching squares on a grid drawn with triangle strips?

I am currently drawing a grid using a series of triangle strips. I am using this to render a height field, and generating the vertex data completely in the vertex shader without any input buffers just using the vertex and instance indexes. This is all working fine and is very efficient.
However, I now find myself also needing to implement border lines on this grid. The obvious solution here would be something like marching squares. Basically, what I want to achieve is something like this:
The black dots represent the vertices in the grid that are part of some set, and I want to shade the area inside the red line differently than that outside it.
Naïvely, this seems like it would be easy: Add a value to the vertices that is 1 for vertices in the set and 0 for those outside it, and render differently depending on if the interpolated value is above or below 0.5, for instance.
However, because this is rendered as a triangle strip, this does not quite work; in practice, it ends up looking like this:
So, half the edges work and half end up with ugly square staircases.
I've been racking my brain for days over whether there is some trick that could be used to generate the vertex values differently, or some test more sophisticated than >0.5, that would get closer to the intended shape without giving up the nice, simple triangle strips and actually generating geometry ahead of time, but I cannot think of one.
Has anyone ever dealt with a similar problem? Is there some clever solution I am missing?
I am doing this in Metal, but I don't expect this to depend much on the specific API used.
It sounds like you're trying to calculate the colors in the fragment shader independently of the mesh underneath. If so, you should decouple the color calculation from the mesh.
Assuming your occupancy is stored in a texture, use textureGather to get the four nearby occupancy values; determine the equation of the boundary; then use the fractional part of the texture coordinates to determine its position relative to the boundary. (The devil here is in the details, in particular the ambiguous checkerboard-pattern case.)
Once you implement the above approach, it's very likely you won't even need the triangle-strip mesh anymore: simply fill your entire drawing area with a single large quad and let the fragment shader do the rest.
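To illustrate, here is a minimal sketch of that fragment shader in Metal Shading Language (which is C++-based). The texture and names are made up for this example, the gather component order should be double-checked against the MSL spec, and the ambiguous checkerboard case is not handled.

#include <metal_stdlib>
using namespace metal;

struct FragIn {
    float2 uv;   // normalized grid-space coordinate from the vertex stage
};

fragment float4 borderFrag(FragIn in [[stage_in]],
                           texture2d<float> occupancy [[texture(0)]])
{
    constexpr sampler s(coord::normalized, filter::nearest);

    // The four nearby occupancy values (0 or 1) in one gather
    float4 g = occupancy.gather(s, in.uv);

    // Fractional position inside the current cell
    float2 texel = in.uv * float2(occupancy.get_width(), occupancy.get_height());
    float2 f = fract(texel - 0.5);

    // Bilinear interpolation of the four corners; the 0.5 iso-contour of this
    // function is a smooth boundary independent of the mesh triangulation.
    // (Component order assumed: w = min/min, z = max/min, x = min/max, y = max/max.)
    float v = mix(mix(g.w, g.z, f.x), mix(g.x, g.y, f.x), f.y);

    return v > 0.5 ? float4(1.0, 0.2, 0.2, 1.0)   // inside the border
                   : float4(0.1, 0.1, 0.1, 1.0);  // outside
}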

Dynamically create complementary triangles of a regular grid in OpenGL

I have created a regular grid which originates from a 2D image, i.e. each pixel has a vertex. There are two triangles per group of four pixels, one in the top right and one in the bottom left. I use vertex and index buffers for that.
Now I dynamically remove triangles/faces at the border between two different kinds of vertices (according to my application), because otherwise there would be distortions. I wrote a geometry shader which takes a triangle and outputs either the triangle or nothing (see first picture). The shader recognizes when a triangle is "faulty" (has orange edges) and omits it.
Now this works fine, but I may lose some details because of my vertex geometry. I can add complementary triangles to the mesh (see second picture, new triangles with dashed orange line).
How do I accomplish this in OpenGL?
My first idea is to create one quad instead of two triangles, check for the four possible triangle cases, and create those triangles dynamically in the geometry shader. But this might be slow; GL_QUADS is deprecated and the alternatives might be slow too. What do you have in mind?
Here's my idea:
Put the whole grid in a buffer/texture.
Build four triangles for each group of four pixels. They cross each other, yes.
In the geometry shader you can tell if a triangle is "faulty" because it connects two wrong regions. Or, by sampling from the texture, you can tell that the crossing triangle is valid, so this new one can be discarded.
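As a sketch of that test, a geometry shader along these lines (GLSL wrapped in a C++ raw string, ready for glShaderSource; vRegion is a hypothetical per-vertex region id forwarded by the vertex shader) passes through only the triangles whose three corners agree:

static const char *kFilterGS = R"(
#version 150
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in float vRegion[];   // per-vertex region id, forwarded by the vertex shader

void main() {
    // Emit the triangle only if all three corners lie in the same region;
    // "faulty" triangles spanning two regions are silently dropped.
    if (vRegion[0] == vRegion[1] && vRegion[1] == vRegion[2]) {
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}
)";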
EDIT: Another approach
Use the texture. Draw instanced with GL_POINTS. With some order and the help of the instanceID the shader knows where the point is.
For this point, test the four possible triangles. If you instance top to bottom and left to right, only the point to the right and the two below are used for the four triangles, and you avoid repeating tests.
Emit only those you choose.
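Here is a sketch of that variant, again as GLSL in a C++ string, with one point drawn per cell via instanced GL_POINTS. regionTex and the grid-to-clip mapping are placeholders, the vertex shader is assumed to derive vCell from gl_InstanceID, and the choice of which candidate triangles to keep is application-specific.

static const char *kPointGS = R"(
#version 150
layout(points) in;
layout(triangle_strip, max_vertices = 12) out;

uniform usampler2D regionTex;  // one region id per grid vertex
uniform vec2 gridSize;         // grid dimensions, in vertices

flat in ivec2 vCell[];         // cell coordinate derived from gl_InstanceID

vec4 toClip(ivec2 p) {         // placeholder grid-to-clip transform
    return vec4(2.0 * vec2(p) / (gridSize - 1.0) - 1.0, 0.0, 1.0);
}

bool sameRegion(ivec2 a, ivec2 b, ivec2 c) {
    uint r = texelFetch(regionTex, a, 0).r;
    return r == texelFetch(regionTex, b, 0).r &&
           r == texelFetch(regionTex, c, 0).r;
}

void emitTri(ivec2 a, ivec2 b, ivec2 c) {
    if (!sameRegion(a, b, c)) return;
    gl_Position = toClip(a); EmitVertex();
    gl_Position = toClip(b); EmitVertex();
    gl_Position = toClip(c); EmitVertex();
    EndPrimitive();
}

void main() {
    ivec2 p = vCell[0];        // this point owns the cell below and to its right
    ivec2 r = p + ivec2(1, 0), b = p + ivec2(0, 1), rb = p + ivec2(1, 1);
    // The four candidate triangles of the cell; keep only the ones you choose.
    emitTri(p, r, b);
    emitTri(r, rb, b);
    emitTri(p, r, rb);
    emitTri(p, rb, b);
}
)";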

Opengl: coloring a world map?

Here is a task that every GIS application can do: given some polygons, fill each polygon with a chosen color, like this: [image]
What is the best way of doing this repeatedly in OpenGL? That is, the polygons do not change, and I want to vary the data used for coloring to produce different renderings.
Redrawing polygons for each rendering is the most straightforward solution, but it seems to be a waste, since the geometries do not change at all.
Or is it better to create a stencil for each polygon, and stencil print the entire map? If there are too many polygons, will doing hundreds or thousands of rendering passes create a problem?
Map a color to each vertex of a polygon. That means that when you send the data to the shaders, the vertex array object supplies two attributes with each call: a vector which is needed in the vertex shader, and a vector which will be used as the fragment color. That is the simplest way.
For example, think of a triangle drawn in OpenGL. If you send its vertices to the vertex shader and set a color in the fragment shader, every time a vertex enters the shader pipeline it will be positioned accordingly and shown on screen with the given color from the fragment shader.
The technique, which I explained poorly (sorry, I am not the best at explanations), is used in the classic colored-triangle example in which the colors interpolate: red mapped to one corner, green mapped to another, and blue to the last. If you instead map the same color to every corner, you get a uniformly colored triangle. That is the basic principle. And you draw the minimum number of triangles, and you need only one pair of shaders.
Note: a polygon is made out of N triangles, and you need to map the same color to every vertex of each triangle drawn in that polygon.
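Here is a sketch of that setup in C++, keeping the static geometry and the per-rendering colors in separate buffers so that a new coloring only re-uploads the colors. Attribute locations and component counts are illustrative.

#include <GL/glew.h>
#include <cstddef>

// Static, triangulated polygon geometry in one VBO; per-vertex colors in
// another, so that changing the coloring never touches the geometry.
void SetupBuffers(const float *positions, size_t posBytes,
                  const float *colors, size_t colBytes,
                  GLuint &vboPos, GLuint &vboCol)
{
    glGenBuffers(1, &vboPos);
    glBindBuffer(GL_ARRAY_BUFFER, vboPos);
    glBufferData(GL_ARRAY_BUFFER, posBytes, positions, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);  // location 0: position
    glEnableVertexAttribArray(0);

    glGenBuffers(1, &vboCol);
    glBindBuffer(GL_ARRAY_BUFFER, vboCol);
    glBufferData(GL_ARRAY_BUFFER, colBytes, colors, GL_DYNAMIC_DRAW);
    glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, 0);  // location 1: RGBA color
    glEnableVertexAttribArray(1);
}

// For each new rendering: overwrite only the color buffer, then redraw
// (with the VAO and shader program already bound).
void Recolor(GLuint vboCol, const float *newColors, size_t colBytes,
             GLsizei vertexCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vboCol);
    glBufferSubData(GL_ARRAY_BUFFER, 0, colBytes, newColors);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
}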
I think a bigger issue will be that OpenGL doesn't support polygons or vector drawing in general, but there are libraries for this. You'll have to use an existing solution for vector drawing, or failing that, you'll have to convert from your GIS data (usually a list of points for a polygon) to triangles. This is likely the biggest obstacle.
The fact that the geometry doesn't change isn't really an issue: you would generally store the geometry in one or more buffers, then add logic to draw only what is visible inside your viewport, perhaps even going as far as generating geometry only for the visible area.
See also this question and its answers:
Rendering Vector Graphics in OpenGL?

How to use vertex, normal and texture indices together?

I'm writing a small rendering engine using C++ and DirectX 11. I've managed to render models using indexed vertices. Now I want to support textures and vertex normals as well. I'm using Wavefront OBJ files, which use indices to reference the correct texture and normal coordinates (much like indexed vertices). This is the first time I'm doing something like this with vertices, textures and normals combined, so this is all a bit new to me.
The problem I'm facing is that the number of indices for vertices, normals and texture coordinates are not the same and I'm trying to find a right way to use all these indices in a vertex shader. To illustrate the problem more clearly, I've made some images.
The left image is a wireframe of a simple pyramid object and the right image is its UV coordinate layout (with the bottom of the pyramid in the center). In the left image you can see that the pyramid has 4 vertices, so there are 4 vertex indices. The right UV layout has 6 UV coordinates, so there are 6 texture indices. Each face of the pyramid has a normal of its own, so there are 4 normal indices.
I've found this question while searching on SO and it seems to work in theory (I haven't tried it yet). Is this a common way to solve a problem like this or are there better ways? What would you recommend I do?
Thanks :)
P.S.: my vertex, normal and texture data is stored in separate arrays on the cpu side.
I would recommend analyzing the problem by narrowing it down to the simplest case; it helped me to use only two squares (a flat surface, 4 triangles) and try to correctly map the texture onto both squares as shown in the image.
Using this simple case you have this data:
6 vertices
6 UVs
6 normals (one per vertex, even though some vertices might share the same normal)
12 indices
In theory this will suffice to describe your whole data set, but if you test it you will find that the texture won't be fully mapped onto one of the squares. That's because vertices 2 and 3 each need two different UV coordinate sets; this is impossible to describe with so few vertices, so you need more vertices!
To fix this problem, whatever exporter you choose will duplicate vertices when clashes like this occur. For the example above, two more vertices are needed. This produces duplicated normal data and two more entries in the index array, but now the UV coordinates are mapped correctly.
Finally, for your example, those 4 vertices can't fully describe all the UV coordinates, so some vertices will need to be duplicated to complete the UV layout, and the index array will grow.
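Here is a sketch of that duplication step in C++: collapse the three OBJ index streams into one vertex array plus one index array, creating a new vertex for each distinct (position, UV, normal) index triple. The array names stand in for the separate CPU-side arrays you mentioned.

#include <array>
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

struct Vertex {
    std::array<float, 3> pos;
    std::array<float, 2> uv;
    std::array<float, 3> normal;
};

// corners: one (vertexIdx, uvIdx, normalIdx) triple per face corner, in draw order
void BuildUnifiedMesh(const std::vector<std::array<float, 3>> &positions,
                      const std::vector<std::array<float, 2>> &uvs,
                      const std::vector<std::array<float, 3>> &normals,
                      const std::vector<std::tuple<int, int, int>> &corners,
                      std::vector<Vertex> &outVertices,
                      std::vector<uint32_t> &outIndices)
{
    std::map<std::tuple<int, int, int>, uint32_t> cache;
    for (const auto &c : corners) {
        auto it = cache.find(c);
        if (it == cache.end()) {
            // First time this combination is seen: emit a (possibly duplicated)
            // vertex that carries all three attributes together.
            Vertex v = { positions[std::get<0>(c)],
                         uvs[std::get<1>(c)],
                         normals[std::get<2>(c)] };
            it = cache.emplace(c, (uint32_t)outVertices.size()).first;
            outVertices.push_back(v);
        }
        outIndices.push_back(it->second);  // single shared index stream
    }
}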
The part I am not sure about is whether the OBJ exporter actually outputs the data ready to read. You can either try to figure that out with your exporter, or (what I would rather recommend) try another format that exports data already in the right layout, ready to be pushed into the DirectX 11 graphics pipeline.
I hope the example that helped me understand this problem will help you as well.
Take a look at some basic DirectX 11 tutorials, or, if you can, this book.
Ah, it seems to me that you need to visit: http://www.rastertek.com/tutdx11.html
See what they have there and give it a whirl.

Displaying multiple cubes in OpenGL with shaders

I'm new to OpenGL and shaders. I have a project that involves using shaders to display cubes.
So basically I'm supposed to display eight cubes using a perspective projection at (+-10,+-10,+-10) from the origin each in a different color. In other words, there would be a cube centered at (10, 10, 10), another centered at (10, 10, -10) and so on. There are 8 combinations in (+-10, +-10, +-10). And then I'm supposed to provide a key command 'c' that changes the color of all the cubes each time the key is pressed.
So far I was able to make one cube at the origin. I know I should use this cube and translate it to create the eight cubes but I'm not sure how I would do that. Does anyone know how I would go about with this?
That question is, as mentioned, too broad. But since you said you managed to draw one cube, I'll assume you can set up the camera and your window. That leaves us with how to render 8 cubes. There are many ways to do this, but I'll mention 2 very different ones.
Classic:
You make a function that takes 2 parameters: the center of the cube, and its size. With these 2 you can build up the cube the same way you're doing it now, but using those variables instead of fixed values. For example, the front face would be:
glBegin(GL_TRIANGLE_STRIP);
glVertex3f(center.x-size/2, center.y-size/2, center.z+size/2);
glVertex3f(center.x+size/2, center.y-size/2, center.z+size/2);
glVertex3f(center.x-size/2, center.y+size/2, center.z+size/2);
glVertex3f(center.x+size/2, center.y+size/2, center.z+size/2);
glEnd();
This is just to showcase how to build it from variables; you can do it the same way you're doing it now.
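For your eight cubes specifically, the classic route then boils down to a loop over all the sign combinations. In this sketch, drawCube is the hypothetical function described above that builds a cube from a center and a size:

#include <GL/gl.h>

void drawCube(float cx, float cy, float cz, float size);  // the function described above

void drawAllCubes(float r, float g, float b)  // current color, cycled on 'c' presses
{
    glColor3f(r, g, b);                       // one color for all 8 cubes
    for (int x = -1; x <= 1; x += 2)          // all combinations of (+-10, +-10, +-10)
        for (int y = -1; y <= 1; y += 2)
            for (int z = -1; z <= 1; z += 2)
                drawCube(10.0f * x, 10.0f * y, 10.0f * z, 2.0f);
}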
Now, you mentioned you want to use shaders. The shader topic is very broad, just like OpenGL itself, but I can give you the idea. In OpenGL 3.2, special shaders called geometry shaders were added. Their purpose is to work with geometry as a whole: whereas vertex shaders work with just one vertex at a time and fragment shaders work with just one fragment at a time, geometry shaders work with one piece of geometry at a time. If you're rendering triangles, you get all the info about the single triangle that is currently passing through the shaders.

That alone wouldn't be anything serious, but these shaders don't just modify these geometries, they can create new ones! That's what I'm doing in one of my shader programs, where I render points, but when they pass through the geometry shader those points are converted to circles. Similarly, you can render just points, but inside the geometry shader you can emit whole cubes. The point position works as the center for each cube, and you should pass the size of the cubes in a uniform. If the size may vary per cube, you also need a vertex shader that passes the size from an attribute into a variable the geometry shader can read.
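Here is a sketch of that idea, as GLSL in a C++ string. The winding order is not tuned for face culling, and I'm assuming the MVP transform is applied here in the geometry shader:

static const char *kCubeGS = R"(
#version 150
layout(points) in;
layout(triangle_strip, max_vertices = 24) out;

uniform mat4 mvp;
uniform float cubeSize;

vec3 center;
float h;

// Emit one face as a 4-vertex strip: n is the outward axis, u/v span the face
void face(vec3 n, vec3 u, vec3 v) {
    vec3 c = center + n * h;
    gl_Position = mvp * vec4(c + (-u - v) * h, 1.0); EmitVertex();
    gl_Position = mvp * vec4(c + ( u - v) * h, 1.0); EmitVertex();
    gl_Position = mvp * vec4(c + (-u + v) * h, 1.0); EmitVertex();
    gl_Position = mvp * vec4(c + ( u + v) * h, 1.0); EmitVertex();
    EndPrimitive();
}

void main() {
    center = gl_in[0].gl_Position.xyz;  // the point is the cube's center
    h = cubeSize * 0.5;
    face(vec3( 1, 0, 0), vec3(0, 1, 0), vec3(0, 0, 1));
    face(vec3(-1, 0, 0), vec3(0, 1, 0), vec3(0, 0, 1));
    face(vec3(0,  1, 0), vec3(1, 0, 0), vec3(0, 0, 1));
    face(vec3(0, -1, 0), vec3(1, 0, 0), vec3(0, 0, 1));
    face(vec3(0, 0,  1), vec3(1, 0, 0), vec3(0, 1, 0));
    face(vec3(0, 0, -1), vec3(1, 0, 0), vec3(0, 1, 0));
}
)";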
As for the color problem: if you don't implement fragment shaders, the only thing you need to do is call glColor3f before rendering the cubes. It takes 3 parameters: the red, green and blue values. Note that these values don't range from 0 to 255, but from 0 to 1. You can get confused that your cubes aren't rendered if you use a white background: you set the color to (200, 10, 10), expect to see red cubes, and see nothing, because values above 1 are clamped and you are in fact rendering white cubes. To avoid such errors, I recommend setting the background to something like grey with glClearColor.