OpenGL displaying multiple objects - C++

I'm using OpenGL 3.3 to draw 4 polygons in a window. My program calculates the vertices of two polygons (one with m sides, one with n sides) and applies some transformations to draw and animate 2 polygons of m sides and 2 of n sides.
My problem is that I need to store the vertices of the 2 polygons in different memory locations, since I want to apply different transformations, but I'm not able to retrieve and display both objects from my vertex shader.
#version 330
in vec3 a_vertex;
uniform mat4 transform;
void main(void)
{
gl_Position = transform * vec4(a_vertex, 1.0);
}
This shader shows only the first 2 objects (whose vertices are stored in location 0). The same happens if I write layout (location = 0) in vec3 a_vertex;.
If I instead write layout (location = 1) in vec3 a_vertex;, I can only see the other two polygons (whose vertices are stored in location 1).
This seems reasonable. Now my question is: how can I show all 4 polygons?
P.S.: I think the earlier question Multiple objects drawing (OpenGL) contains the answer, but the explanation is so high-level that I cannot translate it into code.

You have misunderstood what shader locations are and what they're used for. Locations are designators for separate attributes within a single vertex passed to the shader. Color is an attribute and has a location. Surface normal is an attribute and has a location. Texture coordinate is an attribute and… well, you get the idea.
What you have there, however, are different vertices. All you have to do is make separate draw calls for your different vertices: either call glDraw… several times, or put all your vertices into a single buffer and draw them all at once.
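For example, a minimal sketch of the multiple-draw-call approach (the VAO names, transform arrays, vertex counts and the choice of GL_TRIANGLE_FAN are illustrative assumptions, not taken from the question):
// Render loop body: one VAO per polygon set, each drawn with its own transform
// through the same shader program. Both VAOs feed attribute location 0,
// matching the single `in vec3 a_vertex` declaration.
glUseProgram(program);
GLint transformLoc = glGetUniformLocation(program, "transform");

glBindVertexArray(vaoM);                                    // VAO whose buffer holds the m-gon vertices
glUniformMatrix4fv(transformLoc, 1, GL_FALSE, transformM);  // transformM: 16 floats, column-major
glDrawArrays(GL_TRIANGLE_FAN, 0, mVertexCount);

glBindVertexArray(vaoN);                                    // VAO whose buffer holds the n-gon vertices
glUniformMatrix4fv(transformLoc, 1, GL_FALSE, transformN);
glDrawArrays(GL_TRIANGLE_FAN, 0, nVertexCount);
The key point is that both buffers feed attribute location 0; what changes between the calls is which buffer is bound and which transform is set, not the location in the shader.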

Related

Calculating surface normals of dynamic mesh (without geometry shader)

I have a mesh whose vertex positions are generated dynamically by the vertex shader. I've been using https://www.khronos.org/opengl/wiki/Calculating_a_Surface_Normal to calculate the surface normal for each primitive in the geometry shader, which seems to work fine.
Unfortunately, I'm planning on switching to an environment where using a geometry shader is not possible. I'm looking for alternative ways to calculate surface normals. I've considered:
Using compute shaders in two passes. One to generate the vertex positions, another (using the generated vertex positions) to calculate the surface normals, and then passing that data into the shader pipeline.
Using ARB_shader_image_load_store (or related) to write the vertex positions to a texture (in the vertex shader), which can then be read from the fragment shader. The fragment shader should be able to safely access the vertex positions (since it will only ever access the vertices used to invoke the fragment), and can then calculate the surface normal per fragment.
I believe both of these methods should work, but I'm wondering if there is a less complicated way of doing this, especially considering that this seems like a fairly common task. I'm also wondering if there are any problems with either of the ideas I've proposed, as I've had little experience with both compute shaders and image_load_store.
See Diffuse light with OpenGL GLSL. If you just want the face normals, you can use the partial derivative functions dFdx and dFdy. A basic fragment shader that calculates the normal vector (N) in the same space as the position:
in vec3 position;
void main()
{
vec3 dx = dFdx(position);
vec3 dy = dFdy(position);
vec3 N = normalize(cross(dx, dy));
// [...]
}
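For context, the position input above has to be forwarded by the vertex shader so the derivatives are taken in a consistent space. A minimal sketch of that side, embedded as a GLSL string in C++ (the attribute/uniform names and the choice of view space are assumptions, not from the answer):
// Hypothetical vertex shader providing the `position` varying used by the
// dFdx/dFdy fragment shader above; the derived normal then lives in view space.
const char* vertexShaderSrc = R"glsl(
#version 330
in vec3 a_position;
uniform mat4 modelView;
uniform mat4 projection;
out vec3 position;
void main()
{
    vec4 viewPos = modelView * vec4(a_position, 1.0);
    position    = viewPos.xyz;
    gl_Position = projection * viewPos;
}
)glsl";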

how to pass GL_PATCHES

I am trying to create an example of an interpolated surface.
First I created an example of an interpolated trefoil.
Here is the source of my example.
Then I noticed that the animation is pretty slow, around 20-30 FPS.
After reading some papers, I know that I have to "move" the evaluation of the trefoil onto the GPU, so I studied some papers about tessellation shaders.
At the moment I bind the following simple vertex shader:
#version 130
in vec4 Position;
in vec3 Normal;
uniform mat4 Projection;
uniform mat4 Modelview;
uniform mat3 NormalMatrix;
uniform vec3 DiffuseMaterial;
out vec3 EyespaceNormal;
out vec3 Diffuse;
void main()
{
EyespaceNormal = NormalMatrix * Normal;
gl_Position = Projection * Modelview * Position;
Diffuse = DiffuseMaterial;
}
Now I have multiple questions:
Do I use an array of vertices to pass GL_PATCHES like I already did with triangle strips? Which way is faster: glDrawElements, as in
glDrawElements(GL_TRIANGLE_STRIP, Indices.Length, OpenGL.GL_UNSIGNED_SHORT, IntPtr.Zero);
or should I use
glPatchParameteri(GL_PATCH_VERTICES,16);
glBegin(GL_PATCHES);
glVertex3f(x0, y0, z0);
...
glEnd();
What about the array of indices? How can I determine the order in which the patches will be passed?
Do I calculate the normals in the shader as well?
I found some examples of tessellation shaders, but they use #version 400. Can I use that version on mobile devices (OpenGL ES) as well?
Can I pass multiple patches to the GPU by multithreading?
Many many thanks in advance.
In essence I don't believe you have to send anything to the GPU in terms of indices (or vertices) as everything can be synthesized. I don't know if the evaluation of the trefoil knot directly maps onto the connectivity of the resulting tessellated mesh of a bilinear patch, but this could work.
You could make do with a simple vertex buffer where each vertex is the position of a single trefoil knot. Set glPatchParameteri(GL_PATCH_VERTICES, 1). Then you could draw multiple knots with a single call to glDrawArrays:
glDrawArrays(GL_PATCHES, 0, numKnots);
The tessellation control stage can be a simple pass-through stage. In the tessellation evaluation shader you can then use the abstract patch type quads and move the evaluation of the trefoil knot, or any other biparametric shape, into that shader, using the supplied [u, v] coordinates. You could then translate every trefoil by the input vertex. The normals can be calculated in the shader as well.
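A rough sketch of what such a tessellation evaluation shader could look like, embedded as a GLSL string in C++ (the evaluateTrefoil placeholder and the assumption that the control stage already set the tessellation levels are mine, not from the answer):
// Hypothetical tessellation evaluation shader: one patch vertex per knot,
// gl_TessCoord supplies the [u, v] parameters for the biparametric evaluation.
const char* tessEvalSrc = R"glsl(
#version 400
layout(quads, equal_spacing, ccw) in;
uniform mat4 Projection;
uniform mat4 Modelview;

vec3 evaluateTrefoil(float u, float v)  // placeholder for your trefoil formula
{
    return vec3(u, v, 0.0);
}

void main()
{
    float u = gl_TessCoord.x;
    float v = gl_TessCoord.y;
    vec3 p = evaluateTrefoil(u, v);
    p += gl_in[0].gl_Position.xyz;      // translate by the single input vertex of the patch
    gl_Position = Projection * Modelview * vec4(p, 1.0);
}
)glsl";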
Alternatively, you could use the geometry shader to synthesize the trefoil just from one input vertex position using points as input primitive and triangle strip as output primitive. Then you could just call again
glDrawArrays(GL_POINTS, 0, numKnots);
and create the trefoil in the geometry shader, using your index-generation function to describe the order of evaluation and translating the generated vertices by the input vertex.
In both cases there would be no need to multithread draw calls, which is ineffective with OpenGL anyway. You are limited by the maximum number of vertices that can be generated per patch, which should be 64 times 64 for tessellation and GL_MAX_GEOMETRY_OUTPUT_VERTICES for geometry shaders.

In OpenGL can I access the buffer I'm drawing in a vertex shader?

I know that each time a vertex shader runs, it basically accesses part of the buffer (VBO) being drawn; when drawing vertex number 7, for example, it is basically indexing 7 vertices into that VBO, based on the vertex attributes and so on.
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec3 texCoords; // This may be running on the 7th vertex for example.
What I want is access to an earlier part of the VBO: when it's drawing the 7th vertex, for example, I would like to have access to vertex number 1, so that I can interpolate with it.
Seeing that the shader is already indexing into the VBO when it runs, I would think this is possible, but I don't know how to do it.
Thank you.
As you can see in the documentation, vertex attributes are expected to change on every shader run. So no, you can't access attributes defined for other vertices in a vertex shader.
You can probably do this:
Define a uniform array and pass in the values you need. But keep in mind that you are using more memory this way, you need to pass more data, etc.
As #Reaper said, you can use a uniform buffer, which can be accessed freely. But the GPU doesn't like random access; it's usually more efficient to stream the data.
You can also solve this by simply adding the data for the later/earlier vertices into the array, because in C++ all vertices are at your disposal.
For example if this is the "normal" array:
{
vertex1_x, vertex1_y, vertex1_z, normal1_x, normal1_y, normal1_z, texCoord1_x, texCoord1_y,
...
}
Then you could extend it with data for the other vertex to interpolate with:
{
vertex1_x, vertex1_y, vertex1_z, normal1_x, normal1_y, normal1_z, texCoord1_x, texCoord1_y, vertex2_x, vertex2_y, vertex2_z, normal2_x, normal2_y, normal2_z, texCoord2_x, texCoord2_y,
...
}
Actually you can pass any data per vertex. Just make sure that the stride and the offsets are adjusted accordingly in the glVertexAttribPointer calls.
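A sketch of how the attribute setup could look for the extended layout above, assuming the neighbour's data is appended after each vertex's own data (the buffer name, attribute locations and offsets are illustrative):
// Each vertex now carries 16 floats: its own position/normal/texCoords (8 floats)
// followed by the neighbour's position/normal/texCoords (8 floats).
const GLsizei stride = 16 * sizeof(float);
glBindBuffer(GL_ARRAY_BUFFER, vbo);                                                    // hypothetical VBO with the interleaved data

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)(0  * sizeof(float)));  // position
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)(3  * sizeof(float)));  // normal
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride, (void*)(6  * sizeof(float)));  // texCoords

glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, stride, (void*)(8  * sizeof(float)));  // neighbour position
glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, stride, (void*)(11 * sizeof(float)));  // neighbour normal
glVertexAttribPointer(5, 2, GL_FLOAT, GL_FALSE, stride, (void*)(14 * sizeof(float)));  // neighbour texCoords

for (GLuint location = 0; location <= 5; ++location)
    glEnableVertexAttribArray(location);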

How exactly does indexing work?

From my understanding, indexing or IBOs in OpenGL are mainly used to reduce the number of vertices needed to draw a given geometry. I understand that with an index buffer, OpenGL only draws the vertices with the given indexes and skips any other vertices. But doesn't that eliminate the possibility of using texturing? As far as I am aware, if you skip vertices with index buffers, it also skips their vertex attributes? If I have my vertex attributes set like this:
attribute vec4 v_Position;
attribute vec2 v_TexCoord;
and then use an index buffer and glDrawElements(...), won't that eliminate the use of texturing, or does v_Position get "reused"? If they don't, how can I texture when using an index buffer?
I think you are misunderstanding several key terms.
"Vertex attributes" are the data that defines each individual vertex. While these include texture coordinates, they also include position. In fact, at least if you are not using fixed-function, the meaning of vertex attributes is entirely arbitrary; their meaning is defined by how the vertex shader uses and/or forwards them to following shader stages.
As such, there is no difference between how position, texture coordinates, and any other vertex attribute are forwarded to the vertex shader. They are all parsed exactly the same no matter how indexes are used (or not used).
An example vertex shader:
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 uvAttr;
out vec2 uv;
void main( )
{
uv = uvAttr;
gl_Position = position;
}
And the beginning of the fragment shader to which the above is paired:
in vec2 uv;
The output of vertex shaders is, as you can see, based on the vertex attributes. That output is then interpolated across the faces generated by primitive assembly, before sending it to fragment shaders. Primitive assembly is the main place where indexes come into play: indexes determine how the vertex shader output is used to create actual geometry. That geometry is then broken up into fragments, which are what actually affect the rendering output. Outputs from the vertex shader become inputs to the fragment shader.
After the vertex shader, the vertex attributes cease being defined. Only if you forward them, as above, can they be accessed for use in something like texturing. So, you are not even using the vertex attribute itself as a texture coordinate in the first place: you're using a variable output by the vertex shader and interpolated in primitive assembly/rasterization.
"if you skip vertices with index buffers, it also skips their vertex attributes"
Yes - it totally ignores the vertex: texture coordinates, position, and whatever else you have defined for that vertex. But only the skipped vertex. The rest continue to be processed normally as if the skipped vertex never existed.
For example. Let us say for the sake of argument I have 5 vertexes. I have these ordered into a bow-tie shape as you can see below. Each vertex has position (a 2 component vector of just x and y) and a single component "brightness" to be used as a color. The center vertex of the bow tie is only defined once, but referenced via indexes twice.
The vertex attributes are:
[(1, 1), 0.5], aka [(x, y), brightness]
[(1, 5), 0.5]
[(3, 3), 0.0]
[(5, 5), 0.5]
[(5, 1), 0.5]
The indexes are: 1, 2, 3, 4, 5, 3.
Note that in this example, the "brightness" might as well stand in for your UV(W) coordinates. It would be interpolated similarly, just as a vector. As I said before, the meaning of vertex attributes is arbitrary.
Now, since you're asking about skipping vertexes, here is what the output would be if I changed the indexes to 1, 2, 4:
And this would be 1, 2, 3:
See the pattern here? OpenGL is concerned with the vertexes that make up the faces it generates, nothing else. Indexes merely change how those faces are assembled (and can let it skip calculating unneeded vertexes entirely). They have no impact on the meaning of the vertexes that are used and do go into the faces. If the black vertex #3 is skipped, it does not contribute to any face, because it is not part of any face.
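To connect the example to actual calls, a hypothetical index buffer for the bow tie could look like this (OpenGL indices are 0-based, so the 1, 2, 3, 4, 5, 3 above becomes 0, 1, 2, 3, 4, 2):
// Centre vertex 2 is stored once but referenced by both triangles (0,1,2) and (3,4,2).
GLushort indices[] = { 0, 1, 2, 3, 4, 2 };

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);                   // ibo: hypothetical index buffer object
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, nullptr);  // draws both faces of the bow tie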
As an aside, the standard allows implementations to re-use vertex shader output within single draw calls. So, you should expect that using the same index repeatedly will probably not result in additional vertex shader calls. I say "probably not" because what your driver actually does is always going to be voodoo.
Note that in this I have intentionally ignored tesselation and geometry shaders. Those are a topic beyond the scope of this question, but can have some interesting implications for how vertex attributes are handled. I also ignored the fact that the ordering of vertexes can be accessed to a degree in shaders, and thus might impact output.
An index buffer is used for speed.
With an index buffer, a vertex cache is used to store recently transformed vertices. During transformation, if the vertex pointed to by an index has already been transformed and is available in the vertex cache, it is reused; otherwise the vertex is transformed. Without an index buffer, the vertex cache cannot be utilized, so vertices always get transformed. That is why it is important to order your indices to maximize vertex cache hits.
An index buffer is also used to reduce memory footprint.
Data for a single vertex is usually quite large. For example, storing a single-precision floating-point position (x, y, z) requires 12 bytes (assuming each float takes 4 bytes). This requirement grows if you include vertex color, texture coordinates or vertex normals.
Suppose you have a quad composed of two triangles, with each vertex consisting of position data only (x, y, z). Without an index buffer, you require 6 vertices (72 bytes) to store the quad. With a 16-bit index buffer, you only need 4 vertices (48 bytes) + 6 indices (6 * 2 bytes = 12 bytes) = 60 bytes. The more vertices are shared, the bigger the savings.
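The quad arithmetic above translates into something like this (vertex values and buffer bindings are illustrative):
// Indexed quad: 4 unique position-only vertices (4 * 12 = 48 bytes)
// plus 6 16-bit indices (6 * 2 = 12 bytes) = 60 bytes,
// versus 6 full vertices (6 * 12 = 72 bytes) without an index buffer.
GLfloat quadVertices[] = {
    0.0f, 0.0f, 0.0f,   // 0: bottom-left
    1.0f, 0.0f, 0.0f,   // 1: bottom-right
    1.0f, 1.0f, 0.0f,   // 2: top-right
    0.0f, 1.0f, 0.0f    // 3: top-left
};
GLushort quadIndices[] = { 0, 1, 2,   2, 3, 0 };              // two triangles sharing vertices 0 and 2

glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, nullptr);  // index buffer assumed already bound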

OpenGL: Passing random positions to the Vertex Shader

I am starting to learn OpenGL (3.3+), and now I am trying to write an algorithm that draws 10000 points at random positions on the screen.
The problem is that I don't know exactly where to implement the algorithm. Since the positions are random, I can't declare them in a VBO (or can I?), so I was thinking of passing a uniform value with the varying position to the vertex shader (I would loop, changing the uniform value each time, and do the operation 10000 times). I would also pass a random color value to the shader.
Here is roughly my thought:
#version 330 core
uniform vec3 random_position;
uniform vec3 random_color;
out vec3 Color;
void main() {
gl_Position = vec4(random_position, 1.0);
Color = random_color;
}
In this way I would do the calculations outside the shaders and just pass them in through the uniforms, but I think a better way would be to do these calculations inside the vertex shader. Would that be right?
The vertex shader will be called for every vertex you pass to the vertex shader stage. The uniforms are the same for each of these calls. Hence you shouldn't pass the vertices, be they random or not, as uniforms. If you have global transformations (e.g. a camera rotation, a model matrix), those would go into the uniforms.
Your vertices should be passed as a vertex buffer object. Just generate them randomly in your host application and draw them. They will automatically become the in variables of your shader.
You can change the array in every iteration, but it might be a good idea to keep its size constant. For this it's sometimes useful to pass the 3D vector as a 4-component vector, with the extra component being 1 if the vertex is used and 0 otherwise. This way you can simply check whether a vertex should be drawn or not.
Then just clear the GL_COLOR_BUFFER_BIT and draw the arrays before updating the screen.
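For example, a sketch of the host side under those suggestions (the point count, buffer object and attribute locations are assumptions): generate the random positions and colors once, upload them to a VBO, and draw them with a single call.
#include <random>
#include <vector>

// Interleaved data: x, y, z, r, g, b per point, positions directly in clip space.
std::vector<GLfloat> points;
std::mt19937 rng{std::random_device{}()};
std::uniform_real_distribution<GLfloat> pos(-1.0f, 1.0f);
std::uniform_real_distribution<GLfloat> col(0.0f, 1.0f);
for (int i = 0; i < 10000; ++i) {
    points.push_back(pos(rng)); points.push_back(pos(rng)); points.push_back(pos(rng));
    points.push_back(col(rng)); points.push_back(col(rng)); points.push_back(col(rng));
}

// Upload once; use glBufferSubData if you want fresh random points every frame.
glBindBuffer(GL_ARRAY_BUFFER, vbo);                          // vbo: hypothetical buffer object
glBufferData(GL_ARRAY_BUFFER, points.size() * sizeof(GLfloat), points.data(), GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (void*)0);                     // position
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (void*)(3 * sizeof(GLfloat))); // color
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

// Per frame:
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_POINTS, 0, 10000);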
In your shader just set gl_Position with your in variables (i.e. the vertices) and pass the color on to the fragment shader - it will not be applied in the vertex shader yet.
In the fragment shader, the color is whatever you write to the color output. So just take the variable you passed on from the vertex shader and write it to e.g. gl_FragColor.
By the way, if you draw something as GL_POINTS it will result in little squares. There are lots of tricks to make them actually round; the easiest is probably this simple if in the fragment shader. However, you should then configure them as point sprites (glEnable(GL_POINT_SPRITE)).
if(dot(gl_PointCoord - vec2(0.5,0.5), gl_PointCoord - vec2(0.5,0.5)) > 0.25)
discard;
I suggest you read up a little on what the fragment and vertex shaders do, what vertices and fragments are, and what their respective in/out/uniform variables represent.
Since programs with full vertex buffer objects, shader programs etc. get quite huge, you can also start out with glBegin() and glEnd() to draw vertices directly. However this should only be a very early starting point to understand what you are drawing where and how the different shaders affect it.
The lighthouse3d tutorials (http://www.lighthouse3d.com/tutorials/) usually are a good start, though they might be a bit outdated. Also a good reference is the glsl wiki (http://www.opengl.org/wiki/Vertex_Shader) which is up to date in most cases - but it might be a bit technical.
Whether or not you are working with C++, Java, or other languages - the concepts for OpenGL are usually the same, so almost all tutorials will do well.