I guess I'm expecting too much, but maybe it's possible.
In Python I have a huge NumPy array of 2D lines, each defined by two points, something like this:
[ [x1, y1, x2, y2, r, g, b, width],
[x1, y1, x2, y2, r, g, b, width],
.......
]
Is there any way I can configure a VBO and shaders to process those lines?
I can't see how to do that, since each "vector" actually holds two points :)
My idea was to use a pass-through vertex shader and then a geometry shader that accepts points (a single vector each) and outputs two triangles (a strip) to form the 2D line.
By the way: width can be a big number, which is why I'm planning to draw the line as two triangles.
Since I've never used a geometry shader before, I'm wondering whether that's possible?
A geometry shader should be able to handle this. Since it's a little simpler, I'll illustrate it with drawing lines. If you read up more on geometry shaders, I'm sure you can figure out how to pass in the width, and generate triangles instead.
The basic idea is that you pass the [x1, y1, x2, y2] values as a single vec4 attribute into the vertex shader. Assuming that you already stored the data in a VBO, and have it bound as GL_ARRAY_BUFFER, you use this call:
glVertexAttribPointer(loc, 4, GL_FLOAT, GL_FALSE, stride, 0);
For drawing, you use GL_POINTS as the primitive type, which you will later turn into lines in the geometry shader:
glDrawArrays(GL_POINTS, 0, lineCount);
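To make stride and lineCount concrete, here is a minimal sketch of the buffer setup, assuming the NumPy array has been flattened into a contiguous float array lineData with lineCount rows of 8 floats each (lineVBO, lineData, lineCount and loc are placeholder names; the r, g, b and width components are simply skipped by the stride for now):
// one record per line: x1, y1, x2, y2, r, g, b, width
const GLsizei stride = 8 * sizeof(GLfloat);
GLuint lineVBO;
glGenBuffers(1, &lineVBO);
glBindBuffer(GL_ARRAY_BUFFER, lineVBO);
glBufferData(GL_ARRAY_BUFFER, lineCount * stride, lineData, GL_STATIC_DRAW);
// feed x1, y1, x2, y2 as a single vec4 attribute, one "point" per line
glVertexAttribPointer(loc, 4, GL_FLOAT, GL_FALSE, stride, (void*)0);
glEnableVertexAttribArray(loc);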
Then in the vertex shader, you simply pass the position along:
in vec4 InPos;
...
gl_Position = InPos;
In the geometry shader, you then use the first 2 components of the input vector, [x1, y1], as the first point of the line, and the last 2 components, [x2, y2], as the second point of the generated line:
layout (points) in;
layout (line_strip, max_vertices = 2) out;

void main() {
    gl_Position = vec4(gl_in[0].gl_Position.xy, 0.0, 1.0);
    EmitVertex();
    gl_Position = vec4(gl_in[0].gl_Position.zw, 0.0, 1.0);
    EmitVertex();
    EndPrimitive();
}
I'm intentionally posting this as a second answer, since it's almost entirely different. This one does not use geometry shaders, but uses instanced rendering instead. It might be somewhat unconventional, but I can't think of a reason why it couldn't work.
The very first part is the same. You create a single 4-component vertex attribute that contains the coordinates of both 2D points, [x1, y1, x2, y2]. But this time, you also enable instanced rendering for the attribute:
glVertexAttribPointer(loc, 4, GL_FLOAT, GL_FALSE, stride, 0);
glVertexAttribDivisor(loc, 1);
The second call specifies that the same attribute value is used for all vertices of each instance, and is only advanced per instance.
With this, we can now draw each line as a separate instance, with 2 vertices per instance:
glDrawArraysInstanced(GL_LINES, 0, 2, lineCount);
Now, the consequence of only advancing once per instance is that you will get the same attribute values for both vertices of each instance. This may look like a problem, but you can use gl_VertexID to pick the first two components for the first vertex, and the other two components for the second vertex:
in vec4 InPos;
...
gl_Position = vec4(InPos[gl_VertexID * 2], InPos[gl_VertexID * 2 + 1], 0.0, 1.0);
Since gl_VertexID will be 0 for the first vertex and 1 for the second, this picks components 0 and 1, corresponding to [x1, y1], for the first vertex, and components 2 and 3, corresponding to [x2, y2], for the second vertex of each line.
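If you also want the r, g, b and width values from the original array in the shader, you can expose them the same way, as additional per-instance attributes. A hedged sketch, assuming the 8-float-per-line layout from the question and placeholder locations colorLoc and widthLoc:
const GLsizei stride = 8 * sizeof(GLfloat);
// r, g, b start at float offset 4 within each record, width at float offset 7
glVertexAttribPointer(colorLoc, 3, GL_FLOAT, GL_FALSE, stride, (void*)(4 * sizeof(GLfloat)));
glVertexAttribDivisor(colorLoc, 1);
glEnableVertexAttribArray(colorLoc);
glVertexAttribPointer(widthLoc, 1, GL_FLOAT, GL_FALSE, stride, (void*)(7 * sizeof(GLfloat)));
glVertexAttribDivisor(widthLoc, 1);
glEnableVertexAttribArray(widthLoc);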
I'm pretty new to 3D programming. I'm trying to learn OpenGL from this site. While reading, I couldn't really understand how the layout (location = 0) line actually operates. I've tried to find other explanations online, both in the OpenGL wiki and on other sites, and I've found this site, from which I understood a little more.
So, if I am correct, the vertex shader takes some inputs and generates some outputs. The inputs of the shader are called vertex attributes, and each one of them has an index location called the attribute index. Now I expect that if the shader takes as input a single vertex and its attributes, it has to run multiple times, once for each vertex of the object I'm trying to render.
Is what I wrote up to this point correct?
Now, what I didn't manage to understand is how layout (location = 0) really works. My assumption is that this instruction tells the shader from which location in memory to pick the first attribute. Thus, each time the shader re-runs (if it actually re-runs), the location should move by one unit, like in a normal for loop. Is this interpretation correct? And, please, can anyone actually explain to me, in an organic way, how the vertex shader operates?
P.S. Thank you in advance for your time, and excuse my poor English: I'm still practising it!
Edit
This is the code. Following the first guide I linked I created an array of vertices:
float vertices[] = {
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f
};
then I created a vertex buffer object:
unsigned int VBO;
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
I added the data to the VBO:
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
while the vertex shader reads:
#version 330 core
layout (location = 0) in vec3 aPos;

void main() {
    gl_Position = vec4(aPos.x, aPos.y, aPos.z, 1.0f);
}
You need to look at both sides of this. You bind a buffer containing all of your data, say position and color.
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 color;
Now, in the shader program, I can use these vectors without specifying the index of the vertex I am processing, because we had to tell GL how the data is laid out in the buffer.
We do that when we bind buffers to the program.
Let's say we want to create a triangle. It has 3 vertices, and each vertex has two attributes: color and position. We create a vertex shader that processes each vertex; in that program it is implied that each vertex has a color and a position. You don't care about the index in the array it is at (for now).
The program will take vertex i, v_i, and process it. How it populates position and color depends on how you bind the data. I could have two arrays,
positionData = [x0, y0, z0, x1, ... z3];
colorData = [r0, g0, b0, r1, ... b3];
So I would buffer this data, then I would bind that buffer to the program at the attribute location and specify how it is read, e.g. bind the positionBuffer to attribute location 0 and read it in strides of three with no offset.
The same with the color data, but with location 1.
Alternatively, I could do
posColData = [ x0, y0, z0, r0, g0, b0, x1, y1, ... b3];
Then I would create posColBuffer and bind it to the 0th attribute, with a stride of 6. I would also bind the posColBuffer to the 1st attribute with a stride of 6 and an offset of 3.
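As a hedged sketch of that interleaved case (posColVBO is a placeholder name, and note that glVertexAttribPointer expects the stride and offset in bytes, not in floats):
glBindBuffer(GL_ARRAY_BUFFER, posColVBO);
// position: 3 floats at the start of each 6-float record
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
// color: 3 floats, starting 3 floats into each record
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);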
The code you are using does this here.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
They rely on the layout qualifier and just pass 0 directly, since they already know the location.
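For contrast, a hedged sketch of what you would do without knowing the location up front: query it at runtime instead (shaderProgram is a placeholder handle, "position" is the attribute name from the shader above, and a tightly packed position-only buffer like the triangle in the question is assumed):
GLint posLoc = glGetAttribLocation(shaderProgram, "position");
glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(posLoc);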
Currently, I'm trying to implement a fragment shader, which mixes colors of different fluid particles by combining the percentage of the fluids' phases inside the particle. So for example, if fluid 1 possesses 15% of the particle and fluid 2 possesses 85%, the resulting color should reflect that proportion. Therefore, I have a buffer texture containing the percentage reflected as a float value in [0,1] per particle and per phase and a texture containing the fluid colors.
The buffer texture currently contains the percentages for each particle in a sequential list, for example:
| Particle 1 percentage 1 | Particle 1 percentage 2 | Particle 2 percentage 1 | Particle 2 percentage 2 | ...
I already tested the correctness of the textures by assigning them to the particles directly or by assigning the volFrac to the red part of the final color. I also tried different GLSL debuggers to analyze the problem, but none of the popular options worked on my machine.
#version 330

uniform float radius;
uniform mat4 projection_matrix;
uniform uint nFluids;
uniform sampler1D colorSampler;
uniform samplerBuffer volumeFractionSampler;

in block
{
    flat vec3 mv_pos;
    flat float pIndex;
} In;

out vec4 out_color;

void main(void)
{
    vec3 fluidColor = vec3(0.0, 0.0, 0.0);
    for (int fluidModelIndex = 0; fluidModelIndex < int(nFluids); fluidModelIndex++)
    {
        float volFrac = texelFetch(volumeFractionSampler, int(nFluids * In.pIndex) + fluidModelIndex).x;
        vec3 phaseColor = texture(colorSampler, float(fluidModelIndex) / (int(nFluids) - 1)).xyz;
        fluidColor = volFrac * phaseColor;
    }
    out_color = vec4(fluidColor, 1.0);
}
And here is a short snippet of the texture initialization:
//Texture Initialisation and Manipulation here
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_1D, m_textureMap);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, nFluids, 0, GL_RGB, GL_FLOAT, color_map);
//Creation and Initialisation for Buffer Texture containing the volume Fractions
glBindBuffer(GL_TEXTURE_BUFFER, m_texBuffer);
glBufferData(GL_TEXTURE_BUFFER, nFluids * nParticles * sizeof(float), m_volumeFractions.data(), GL_STATIC_DRAW);
glBindBuffer(GL_TEXTURE_BUFFER, 0);
glBindTexture(GL_TEXTURE_BUFFER, m_bufferTexture);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R32F, m_texBuffer);
The problem now is that if I multiply the information from the buffer texture with the information from the texture, the particles that should be rendered disappear completely, without any warnings or other error messages. So the particles disappear if I use the statement:
fluidColor = volFrac * phaseColor;
Does anybody know why this is the case, or how I can further debug this problem?
Does anybody know, why this is the case
Yes. You seem to use the same texture unit for both colorSampler and volumeFractionSampler, which is simply not allowed as per the spec. Quoting from section 7.11 of the OpenGL 4.6 core profile spec:
It is not allowed to have variables of different sampler types pointing to the same texture image unit within a program object. This situation can only be detected at the next rendering command issued which triggers shader invocations, and an INVALID_OPERATION error will then be generated.
So while you can bind different textures to the different targets of texture unit 0 at the same time, each draw call can only use one particular target per texture unit. If you only use one sampler or the other (and the shader compiler will aggressively optimize the unused one out if it doesn't influence the outputs of your shader), you are in a legal use case, but as soon as you use both, it will not work.
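A hedged sketch of one way to fix it, assuming shaderProgram is your linked program and m_textureMap / m_bufferTexture are the handles from the code above: give each sampler its own texture unit and point the sampler uniforms at those units.
glUseProgram(shaderProgram);
glUniform1i(glGetUniformLocation(shaderProgram, "colorSampler"), 0);
glUniform1i(glGetUniformLocation(shaderProgram, "volumeFractionSampler"), 1);
// unit 0: the 1D color map
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_1D, m_textureMap);
// unit 1: the buffer texture with the volume fractions
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_BUFFER, m_bufferTexture);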
The Golang gomobile basic example [1] uses VertexAttribPointer to specify 3 floats per vertex.
However the vertex shader attribute type is vec4. Shouldn't it be vec3?
Why?
Within render loop:
glctx.VertexAttribPointer(position, coordsPerVertex, gl.FLOAT, false, 0, 0)
Triangle data:
var triangleData = f32.Bytes(binary.LittleEndian,
    0.0, 0.4, 0.0, // top left
    0.0, 0.0, 0.0, // bottom left
    0.4, 0.0, 0.0, // bottom right
)
Constant declaration:
const (
    coordsPerVertex = 3
    vertexCount     = 3
)
In vertex shader:
attribute vec4 position;
[1] gomobile basic example: https://github.com/golang/mobile/blob/master/example/basic/main.go
Vertex attributes are conceptually always 4-component vectors. There is no requirement that the number of components you use in the shader and the number you set up for the attribute pointer have to match. If your array has more components than your shader consumes, the additional components are just ignored. If your array supplies fewer components, the missing ones are filled in from the vector (0, 0, 0, 1) (which makes sense for homogeneous position vectors as well as for RGBA colors).
In the usual case you want w=1 for every input position anyway, so there is no need to store it in the array. But you usually need the full 4D form when applying the transformation matrices (or even when directly forwarding the value as gl_Position). So your shader could conceptually do
in vec3 pos;
gl_Position=vec4(pos,1);
but that would be equivalent to just writing
in vec4 pos;
gl_Position=pos;
I'm trying to color single vertices of quads that are drawn through glDrawElements. I'm working with the cocos2d library, so I've been able to dig through the source code to understand exactly what is happening; the code is the following:
glBindVertexArray( VAOname_ );
glDrawElements(GL_TRIANGLES, (GLsizei) n*6, GL_UNSIGNED_SHORT, (GLvoid*) (start*6*sizeof(indices_[0])) );
glBindVertexArray(0);
So vertex array objects are used. I'm trying to modify the color of single vertices of the objects that are passed, and it seems to work, but with a glitch shown in the following image:
Here I tried to change the color of the lower-left and of the lower-right vertex. The results are different; I guess this is because the quad is rendered as a pair of triangles whose shared hypotenuse lies on the diagonal from the lower-left vertex to the upper-right vertex. That could explain the different results.
Now I would like to get the second result for the first case as well. Is there a way to obtain it?
Your guess is right. The OpenGL driver tessellates your quad into two triangles, in which the vertex colours are interpolated barycentrically, which results in what you see.
The usual approach to solving this is to perform the interpolation "manually" in a fragment shader that takes the target topology into account, in your case a quad. In short, you have to perform barycentric interpolation based not on a triangle but on a quad. You might also want to apply perspective correction.
I don't have ready to read resources at hand right now, but I'll update this answer as soon as I have (might actually mean, I'll have to write it myself).
Update
First we must understand the problem: most OpenGL implementations break down higher primitives into triangles and rasterize them in isolation, i.e. without any further knowledge about the rest of the primitive, e.g. the quad it came from. So we have to do this ourselves.
This is how I'd do it.
#version 330 // vertex shader
Of course we also need the usual uniforms
uniform mat4x4 MV;
uniform mat4x4 P;
First we need the position of the vertex processed by this shader execution instance
layout (location=0) in vec3 pos;
Next we need some vertex attributes which we use to describe the quad itself. This means its corner positions
layout (location=1) in vec3 qp0;
layout (location=2) in vec3 qp1;
layout (location=3) in vec3 qp2;
layout (location=4) in vec3 qp3;
and colors
layout (location=5) in vec3 qc0;
layout (location=6) in vec3 qc1;
layout (location=7) in vec3 qc2;
layout (location=8) in vec3 qc3;
We put those into varyings for the fragment shader to process.
out vec3 position;
out vec3 qpos[4];
out vec3 qcolor[4];
void main()
{
    qpos[0] = qp0;
    qpos[1] = qp1;
    qpos[2] = qp2;
    qpos[3] = qp3;
    qcolor[0] = qc0;
    qcolor[1] = qc1;
    qcolor[2] = qc2;
    qcolor[3] = qc3;
    position = pos;
    gl_Position = P * MV * vec4(pos, 1.0);
}
In the fragment shader we use this to implement a distance weighting for the color components:
#version 330 // fragment shader

in vec3 position;
in vec3 qpos[4];
in vec3 qcolor[4];

out vec4 fragColor;

void main()
{
    vec3 color = vec3(0);
The following could be simplified combinatorially, but for the sake of clarity I'll write it out:
For each corner point, mix its color with the colors of all the other corner points, using the projection of the fragment position onto the edge between the two corners as the mix factor.
    for (int i = 0; i < 4; i++) {
        vec3 p = position - qpos[i];
        for (int j = 0; j < 4; j++) {
            if (i == j) {
                continue; // skip the degenerate zero-length edge
            }
            vec3 edge = qpos[j] - qpos[i];
            float edge_length = length(edge);
            edge = normalize(edge);
            float tau = dot(edge, p) / edge_length;
            color += mix(qcolor[i], qcolor[j], tau);
        }
    }
Each fragment has now accumulated 12 color contributions (one per ordered pair of distinct corners), so scale the sum back down:
    color /= 12.0;
    fragColor = vec4(color, 1.0); // and maybe other things
}
We're almost done. On the client side we need to pass in the additional information. Of course we don't want to duplicate data. For this we use glVertexBindingDivisor so that a vertex attribute advances only once every 4 vertices (i.e. once per quad) on the qp… and qc… locations, i.e. locations 1 to 8:
typedef float vec3[3];
extern vec3 *quad_position;
extern vec3 *quad_color;
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, &quad_position[0]);

glVertexBindingDivisor(1, 4);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, &quad_position[0]);

glVertexBindingDivisor(2, 4);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 0, &quad_position[1]);

glVertexBindingDivisor(3, 4);
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 0, &quad_position[2]);

glVertexBindingDivisor(4, 4);
glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, 0, &quad_position[3]);

glVertexBindingDivisor(5, 4);
glVertexAttribPointer(5, 3, GL_FLOAT, GL_FALSE, 0, &quad_color[0]);

glVertexBindingDivisor(6, 4);
glVertexAttribPointer(6, 3, GL_FLOAT, GL_FALSE, 0, &quad_color[1]);

glVertexBindingDivisor(7, 4);
glVertexAttribPointer(7, 3, GL_FLOAT, GL_FALSE, 0, &quad_color[2]);

glVertexBindingDivisor(8, 4);
glVertexAttribPointer(8, 3, GL_FLOAT, GL_FALSE, 0, &quad_color[3]);
It makes sense to put the above into a Vertex Array Object. Using a VBO would also make sense, but then you have to calculate the offsets into the buffer manually; due to the typedef float vec3[3], the compiler does that math for us at the moment.
With all this set up, you can finally draw your quad independently of how it gets tessellated.
I'm fairly new to OpenGL, and I seem to be experiencing some difficulties. I've written a simple shader in GLSL, that is supposed to transform vertices by given joint matrices, allowing simple skeletal animation. Each vertex has a maximum of two bone influences (stored as the x and y components of a Vec2), indices and corresponding weights that are associated with an array of transformation matrices, and are specified as "Attribute variables" in my shader, then set using the "glVertexAttribPointer" function.
Here's where the problem arises... I've managed to set the "Uniform Variable" array of matrices properly; when I check those values in the shader, all of them are imported correctly and contain the correct data. However, when I attempt to set the joint indices attribute, the vertices are multiplied by arbitrary transformation matrices! They jump to seemingly random positions in space (which are different every time), so I am assuming that the indices are set incorrectly and my shader is reading past the end of the joint matrix array into the following memory. I'm not exactly sure why, because upon reading all of the information I could find on the subject, I was surprised to see the same (if not very similar) code in the examples, and it seemed to work for them.
I have attempted to solve this problem for quite some time now, and it's really beginning to get on my nerves... I know that the matrices are correct, and when I manually change the index value in the shader to an arbitrary integer, it reads the correct matrix values and works the way it should, transforming all the vertices by that matrix. But when I try to use the code I wrote to set the attribute variables, it does not seem to work.
The code I am using to set the variables is as follows...
// this works properly...
GLuint boneMatLoc = glGetUniformLocation([[[obj material] shader] programID], "boneMatrices");
glUniformMatrix4fv( boneMatLoc, matCount, GL_TRUE, currentBoneMatrices );
GLfloat testBoneIndices[8] = {1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0};
// this however, does not...
GLuint boneIndexLoc = glGetAttribLocation([[[obj material] shader] programID], "boneIndices");
glEnableVertexAttribArray( boneIndexLoc );
glVertexAttribPointer( boneIndexLoc, 2, GL_FLOAT, GL_FALSE, 0, testBoneIndices );
And my vertex shader looks like this...
// this shader is supposed to transform the bones by a skeleton, a maximum of two
// bones per vertex with varying weights...
uniform mat4 boneMatrices[32]; // matrices for the bones
attribute vec2 boneIndices; // x for the first bone, y for the second
//attribute vec2 boneWeight; // the blend weights between the two bones
void main(void)
{
    gl_TexCoord[0] = gl_MultiTexCoord0; // just set up the texture coordinates...

    vec4 vertexPos1 = 1.0 * boneMatrices[ int(boneIndices.x) ] * gl_Vertex;
    //vec4 vertexPos2 = 0.5 * boneMatrices[ int(boneIndices.y) ] * gl_Vertex;

    gl_Position = gl_ModelViewProjectionMatrix * (vertexPos1);
}
This is really beginning to frustrate me, and any and all help will be appreciated,
-Andrew Gotow
Ok, I've figured it out. OpenGL draws triangles with the drawArrays function by reading every 9 values as a triangle (3 vertices with 3 components each). Because of this, vertices are repeated between triangles, so if two adjacent triangles share a vertex, it comes up twice in the array. So my cube, which I originally thought had 8 vertices, actually has 36!
Six sides, two triangles per side, three vertices per triangle: it all multiplies out to a total of 36 independent vertices instead of 8 shared ones.
The entire problem was an issue with specifying too few values. As soon as I extended my test array to include 36 values, it worked perfectly.
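As an aside, a sketch (not from the original post) of the indexed alternative: you can keep just the 8 unique corner positions and describe the 36 triangle corners with an index buffer, so glDrawElements reuses the shared vertices. The buffer names are placeholders, the position attribute is assumed to be set up as before, and the face labels depend on your coordinate convention.
// 8 unique corners of a unit cube
static const GLfloat cubeVertices[] = {
    -0.5f, -0.5f, -0.5f,
     0.5f, -0.5f, -0.5f,
     0.5f,  0.5f, -0.5f,
    -0.5f,  0.5f, -0.5f,
    -0.5f, -0.5f,  0.5f,
     0.5f, -0.5f,  0.5f,
     0.5f,  0.5f,  0.5f,
    -0.5f,  0.5f,  0.5f,
};
// 36 indices: 6 faces * 2 triangles * 3 vertices
static const GLushort cubeIndices[] = {
    0, 1, 2,  2, 3, 0,   // z = -0.5 face
    4, 5, 6,  6, 7, 4,   // z = +0.5 face
    0, 4, 7,  7, 3, 0,   // x = -0.5 face
    1, 5, 6,  6, 2, 1,   // x = +0.5 face
    3, 2, 6,  6, 7, 3,   // y = +0.5 face
    0, 1, 5,  5, 4, 0,   // y = -0.5 face
};
GLuint ebo;
glGenBuffers(1, &ebo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(cubeIndices), cubeIndices, GL_STATIC_DRAW);
// 36 indices, but only 8 distinct vertices in the vertex buffer
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, (void*)0);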