OpenGL - Access next 3 vertices in buffer from the vertex shader - C++

I'm placing a bunch of square tiles around a world using 2 buffers fed from vector arrays, one for color and the other for position. The triangles' vertex colors aren't smooth, as they don't interpolate between the two tris that make up each square. To combat this I wanted to set each fragment's color individually, blending the colors of the vertices manually. I cannot substitute this process with premade textures either.
The issue I've come across is how to pass the next 3 vertices' positions and their locations in the buffer into the vertex shader. Is there any easy way to do this?
Thanks and have a great day!

Add another set of attributes, and set up glVertexAttribPointer to point into the vertex position buffer as well, but with an offset. Keep in mind to add a bit of dummy padding to the end, so that when reaching the end of the array you don't access out of bounds. The …_ADJACENCY drawing modes are also useful in this situation (if available).
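A minimal sketch of the aliased-attribute idea, assuming tightly packed vec3 positions in a buffer called positionVbo and generic attribute locations 0 through 3 (names are illustrative); note the dummy padding requirement at the end:

```cpp
// Attributes 0..3 all read from the same position buffer; attribute i
// sees the position of the vertex i entries ahead of the current one.
glBindBuffer(GL_ARRAY_BUFFER, positionVbo);

const GLsizei stride = 3 * sizeof(GLfloat);  // tightly packed vec3
for (GLuint i = 0; i < 4; ++i) {
    glEnableVertexAttribArray(i);
    glVertexAttribPointer(i, 3, GL_FLOAT, GL_FALSE, stride,
                          reinterpret_cast<void*>(static_cast<uintptr_t>(i) * stride));
}
// The buffer itself must hold 3 extra dummy vertices past the last real
// one, so the offset reads never land out of bounds.
```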

Related

How would I store vertex, color, and index data separately? (OpenGL)

I'm starting to learn OpenGL (working with version 3.3) with the intent to get a small 3D falling-sand simulation up, akin to this:
https://www.youtube.com/watch?v=R3Ji8J2Kprw&t=41s
I have a little experience with setting up a voxel environment like Minecraft from some Udemy tutorials for Unity, but I want to build something simple from the ground up and not deal with all the systems already laid on top of things with Unity.
The first issue I've run into comes early. I want to build a system for rendering quads, because instancing a ton of cubes is ridiculously inefficient. I also want to be efficient with the storage of vertices, colors, etc. Thus far in the OpenGL tutorials I've worked through, the way to do this is to store each vertex in a float array with both position and color data, and set up the buffer object to read every set of six entries as three floats for position and three for color, using glVertexAttribPointer. The problem is that for each neighboring quad the same vertices will be repeated, because if they belong to different "blocks" they will have different colors, and I want to avoid this.
What I want to do instead to make things more efficient is store the vertices of a cube in one int array (positions will all be ints), then add each quad of the terrain to an indices array (which will probably turn into each chunk's mesh later on). The indices array will store each quad's position, and a separate array will store each quad's color. I'm a little confused about how to set this up since I am rather new to OpenGL, but I know this should be doable based on what other people have done with Minecraft clones, if not even easier since I don't need textures.
I just really want to get the framework for the chunks, blocks, world, etc., up and running so that I can get to the fun stuff like adding new elements. If anyone is able to verify that this is a sensible way to do this (lol) and offer guidance on how to set it up in the rendering, I would very much appreciate it.
Thus far in the OpenGL tutorials I've worked through, the way to do this is to store each vertex in a float array with both position and color data, and set up the buffer object to read every set of six entries as three floats for position and three for color, using glVertexAttribPointer. The problem is that for each neighboring quad the same vertices will be repeated, because if they belong to different "blocks" they will have different colors, and I want to avoid this.
Yes, and perhaps there's a reason for that. You seem to be trying to save... what, a few bytes of RAM? Your graphics card has 8 GB of RAM on it; what it doesn't have is a general-purpose processing unit or an unlimited bus to do random lookups in other buffers for every single rendered pixel.
The indices array will store each quad's position, and a separate array will store each quad's color.
If you insist on doing it this way, nothing's stopping you. You don't even need the quad vertices; you can synthesize them in a geometry shader.
Just fill a buffer with X|Y|Width|Height|Color(RGB) using glVertexAttribPointer like you already know. Your vertex shader converts each entry to world units (you mentioned integers, so you're not in world units initially), then a geometry shader synthesizes two triangles (a quad) for each entry in your input buffer, and then your fragment shader can color each rasterized pixel according to its color entry.
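A minimal sketch of such a geometry shader, assuming the size and color arrive as pass-through outputs (vSize, vColor) of the vertex shader; the names and the handling of projection are illustrative:

```glsl
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

in vec2 vSize[];    // quad width/height, passed through from the vertex shader
in vec3 vColor[];   // quad color, passed through from the vertex shader
out vec3 gColor;

void main() {
    vec4 base = gl_in[0].gl_Position;   // lower-left corner of the quad
    vec2 s    = vSize[0];

    // Emit the four corners as a triangle strip (two triangles).
    gColor = vColor[0]; gl_Position = base;                                EmitVertex();
    gColor = vColor[0]; gl_Position = base + vec4(s.x, 0.0, 0.0, 0.0);     EmitVertex();
    gColor = vColor[0]; gl_Position = base + vec4(0.0, s.y, 0.0, 0.0);     EmitVertex();
    gColor = vColor[0]; gl_Position = base + vec4(s.x, s.y, 0.0, 0.0);     EmitVertex();
    EndPrimitive();
}
```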
ridiculously inefficient
Indeed, if that sounds ridiculously inefficient to you, it's because it is. You're essentially packing your data on the CPU, transferring it to the GPU, unpacking it and then processing it as normal. You can skip at least two of the steps, and even more if you consider that vertex shader outputs get cached within rasterized primitives.
There are many more variations of this insanity, like:
store vertex positions unpacked as normal, and store an index for the colors. Then store the colors in a linear buffer of some kind (texture, SSBO, generic buffer, etc.) and look up each color by its index. That's even more inefficient, but it's closer to the algorithm you were suggesting.
store vertex positions for one quad and set up instanced rendering with a multi-draw command and a buffer to feed individual instance data (positions and colors). If you also have textures, you can use bindless textures for each quad instance. It's still rendering multiple objects, but it's slightly more optimized by your graphics driver.
or just store per-vertex data in a buffer and render it. Done. No pre-computations, no unlimited expansions, no crazy code: you have your vertex data and you render it.
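For reference, a minimal sketch of that last, plain approach, assuming a std::vector<Vertex> named vertices has been filled elsewhere (requires <vector> and <cstddef>):

```cpp
// Interleaved per-vertex data: position (3 floats) + color (3 floats),
// one buffer, one draw call.
struct Vertex { GLfloat pos[3]; GLfloat color[3]; };

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex),
             vertices.data(), GL_STATIC_DRAW);

glEnableVertexAttribArray(0);   // position
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      reinterpret_cast<void*>(offsetof(Vertex, pos)));
glEnableVertexAttribArray(1);   // color
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      reinterpret_cast<void*>(offsetof(Vertex, color)));

glDrawArrays(GL_TRIANGLES, 0, static_cast<GLsizei>(vertices.size()));
```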

OpenGL: coloring a world map?

Here is a task that every GIS application can do: given some polygons, fill each polygon with a chosen color. Like this: [image]
What is the best way of doing this repeatedly in OpenGL? That is, the polygons do not change, and I want to vary the data for coloring to produce different renderings.
Redrawing the polygons for each rendering is the most straightforward solution, but it seems wasteful, since the geometry does not change at all.
Or is it better to create a stencil for each polygon, and stencil print the entire map? If there are too many polygons, will doing hundreds or thousands of rendering passes create a problem?
Map a certain color to each vertex of a polygon. That means that when you send the data to the shaders, with each call the vertex array object sends two parameters: a vector which is needed in the vertex shader, and a vector which will be used as the fragment color. That is the simplest way.
For example, think of a triangle drawn in OpenGL. If you send its vertices to the vertex shader and set a color in the fragment shader, every time a vertex enters the shader pipeline it will be positioned accordingly and drawn on screen with the given color from the fragment shader.
The technique I (poorly) explained (sorry, I am not the best at explanations) is the one used in the classic colored-triangle example in which the colors interpolate: red mapped to one corner, green to another, and blue to the last. If you instead map the same color to every corner, you get a uniformly colored triangle. That is the basic principle. You also draw the minimum number of triangles, and you only need one pair of shaders; a sketch of such a pair follows after the note below.
Note: a polygon is made of N triangles, and you need to map the same color to every vertex of each triangle drawn in that polygon.
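A minimal sketch of such a shader pair; the attribute names and the uMvp uniform are illustrative assumptions:

```glsl
// vertex shader
#version 330 core
layout(location = 0) in vec3 aPos;
layout(location = 1) in vec3 aColor;
out vec3 vColor;
uniform mat4 uMvp;   // assumed model-view-projection matrix
void main() {
    vColor = aColor;
    gl_Position = uMvp * vec4(aPos, 1.0);
}
```

```glsl
// fragment shader
#version 330 core
in vec3 vColor;
out vec4 fragColor;
void main() {
    // vColor arrives already interpolated across the triangle by the
    // rasterizer; identical per-vertex colors yield a flat fill.
    fragColor = vec4(vColor, 1.0);
}
```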
I think a bigger issue will be that OpenGL doesn't support polygons or vector drawing in general, but there are libraries for this. You'll have to use an existing solution for vector drawing, or failing that, you'll have to convert from your GIS data (usually a list of points for a polygon) to triangles. This is likely the biggest obstacle.
The fact that the geometry doesn't change isn't really an issue: you would generally store the geometry in one or more buffers, then create logic to draw only what is visible inside your view area, perhaps even going as far as to generate geometry only for the visible area.
See also this question and its answers.
Rendering Vector Graphics in OpenGL?

OpenGL/C++: Back to Front rendering with vertex buffers

I'm currently wrapping my head around how to efficiently render the polygons within a vertex buffer in back-to-front order to get transparency working.
I've got my vertex buffers and index buffers set up, rendering with glDrawElements, and everything works nicely except transparency, because I currently render in arbitrary order (the order the objects were created).
I will later implement octree rendering, but this will only help with the overall vertex-buffer rendering order (which vertex buffer to render first), not the order WITHIN the vertex buffer.
The only thing I can think of is reordering my index buffers every time the camera position changes, which feels terribly inefficient, since I store around 65,000 vertices per VBO (using GLushort for the indices to achieve an optimal VBO size of around 1-4 MB).
Is there a better way to order the vertices in the vertex buffer object (or, better phrased, the corresponding indices in the index buffer object)?
There are two methods for that (I have not used either of them myself, though):
Depth peeling (dual depth peeling): http://developer.download.nvidia.com/SDK/10/opengl/src/dual_depth_peeling/doc/DualDepthPeeling.pdf
Stochastic transparency: http://www.cse.chalmers.se/~d00sint/StochasticTransparency_I3D2010.pdf
Also, if your objects are convex, peeling can be implemented easily by first drawing back faces and then front faces (using GL_CULL_FACE, and inverting the normals in the shader for correct lighting).
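A minimal sketch of that two-pass setup; drawObject() stands in for whatever draw call is already in place:

```cpp
// Two passes with face culling so blending happens back-to-front
// within each convex object.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_CULL_FACE);

glCullFace(GL_FRONT);  // pass 1: cull front faces, drawing only back faces
drawObject();

glCullFace(GL_BACK);   // pass 2: cull back faces, drawing only front faces
drawObject();
```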

Pairwise vertex attributes in OpenGL

I'm trying to visualise a graph with OpenGL. I have a vertex buffer with points in 3D space, and an index buffer that specifies lines between vertices. I use glDrawElements to draw the graph, and everything works. The problem is that I need to visualise the edge weights. Edge weights are pairwise attributes, and I have no idea how to get this information into my shader. The only solution I can think of is drawing each edge individually with glDrawRangeElements and setting the edge weight between every call. Is there a better way of doing this?
There's no need to employ a geometry shader. Just render the edges as GL_LINES, duplicating the positions as needed and providing the same "weight" attribute for each pair of verts on a line. This is ultimately no different from rendering a cube, where each face needs its own normals.
If (and only if) you absolutely must have that memory back, and you can't simply compress your vertex data (using normalized shorts, unnormalized shorts, or whatever), here are some techniques you can use. Be warned: this is a memory-vs-performance tradeoff. So unless you have real memory pressures, just duplicate your vertex data and get it over with.
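A minimal sketch of the duplication approach from the first paragraph, with illustrative names (positions, edgeIndices, weights) standing in for the existing graph data:

```cpp
// Expand the edge list into a GL_LINES vertex stream, duplicating each
// endpoint's position and giving both vertices of a line the same weight.
struct LineVertex { GLfloat pos[3]; GLfloat weight; };

std::vector<LineVertex> lineVerts;
lineVerts.reserve(weights.size() * 2);
for (size_t e = 0; e < weights.size(); ++e) {
    // Edge e connects vertex edgeIndices[2*e] and vertex edgeIndices[2*e + 1].
    for (int end = 0; end < 2; ++end) {
        const GLfloat* p = &positions[3 * edgeIndices[2 * e + end]];
        lineVerts.push_back({ { p[0], p[1], p[2] }, weights[e] });
    }
}
// Upload lineVerts to a VBO and draw with:
//   glDrawArrays(GL_LINES, 0, static_cast<GLsizei>(lineVerts.size()));
```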

What color does a fragment get if there are two vertices at the very same position with two different colors?

I have a question concerning the OpenGL rendering pipeline.
I have recently been reading theory about GLSL geometry shaders. I think I understand the basics of how to emit new geometry and assign colors to the new vertices. I am, however, not sure what color a fragment would get if one of those new vertices had the very same position as one coming in from the vertex shader.
Consider this example:
As far as I understand it, I am able to handle a single vertex with the vertex shader. I make some transformation and store the position in gl_Position. It is furthermore possible to assign a color to that vertex, e.g. by storing it in gl_FrontColor. As an example, I give it the color red. If all channels have 32 bits, that would be 0xFFFFFFFF'00000000'00000000'00000000, right?
Next, the geometry shader is involved. I want my geometry shader to emit some additional vertices. At least one of them is at the very same position as the original vertex coming in from the vertex shader. However, it is assigned another color, e.g. green. That would be 0x00000000'FFFFFFFF'00000000'00000000, right?
Sooner or later, after every vertex has been dealt with, rasterization takes place. As I understand it, both vertices are rasterized and will therefore become the very same fragment. So, there we go: what color will that particular fragment get? Is there some kind of automatic blending so that the fragment becomes yellow? Or is it red, or rather green?
This question might be silly. But I am simply not clear on that and would appreciate if somebody could clarify that for me.
If there is no blending (which I assume), how could I possibly create a blending effect?
Assuming you're rendering points (which seems to be what you're describing), the two vertices with the different colors will result in two fragments (one for each vertex) at the same location. What final color is written to the output depends on the Z values of each, the blending function set, and the order in which they are processed, which is effectively random: you can't count on either order unless you do some extra synchronization, so you need to set your blend func/Z-culling such that the order doesn't matter.
I think they will z-fight if they have the exact same values for x, y, and z.
About blending:
This is separate from the programmable pipeline, so you don't have to do most of the work in the shaders for it.
First enable blending with glEnable(GL_BLEND),
then specify your desired blending function with glBlendFunc, most commonly glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
Now the fragments only need an alpha value set in gl_FragColor.a and their colors will blend.
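Put together, the fixed-function setup is just:

```cpp
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// ...and in the fragment shader, output an alpha below 1.0, e.g.:
//   gl_FragColor = vec4(color, 0.5);  // 50% opaque
```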