GLSL layout works strangely - opengl

There are two layout-qualified inputs in my vertex shader:
layout (location = 1) in vec3 aColor;
layout (location = 0) in vec3 aPos;
I set them with glVertexAttrib3f():
glVertexAttrib3f(1, 0, 1, 0); // color
glBegin(GL_TRIANGLES); // vertexes
glVertexAttrib3f(0, 0, 0, 0);
glVertexAttrib3f(0, 1, 0, 0);
glVertexAttrib3f(0, 1, 1, 0);
glEnd();
In this situation everything is OK: I get my green triangle in the window. But if I swap the locations of aPos and aColor (assign 1 to aPos and 0 to aColor)
layout (location = 0) in vec3 aColor;
layout (location = 1) in vec3 aPos;
and also swap them accordingly in my cpp file
glVertexAttrib3f(0, 0, 1, 0); // color
glBegin(GL_TRIANGLES); // vertexes
glVertexAttrib3f(1, 0, 0, 0);
glVertexAttrib3f(1, 1, 0, 0);
glVertexAttrib3f(1, 1, 1, 0);
glEnd();
then nothing works and my window is empty (it should draw the green triangle).
Why does it behave this way?

Calling glVertexAttrib with an index of 0 is a special kind of command: It will result in a vertex being emitted using all the vertex attributes previously set.
See the OpenGL 2.0 specification (where glVertexAttrib was introduced), section 2.7 "Vertex Specification":
Setting generic vertex attribute zero specifies a vertex; the four vertex coordinates are taken from the values of attribute zero. A Vertex2, Vertex3, or Vertex4
command is completely equivalent to the corresponding VertexAttrib* command
with an index of zero. Setting any other generic vertex attribute updates the current
values of the attribute. There are no current values for vertex attribute zero.
When you do:
glVertexAttrib3f(1, 0, 1, 0); // color
glVertexAttrib3f(0, 0, 0, 0);
glVertexAttrib3f(0, 1, 0, 0);
glVertexAttrib3f(0, 1, 1, 0);
then essentially you emit three vertices with their index=1 attribute having the value (0, 1, 0).
On the other hand, when you do:
glVertexAttrib3f(0, 0, 1, 0); // color
glVertexAttrib3f(1, 0, 0, 0);
glVertexAttrib3f(1, 1, 0, 0);
glVertexAttrib3f(1, 1, 1, 0);
then what happens is that you first emit one vertex with all of its attributes at their default/zero values, and then merely update the current value of the vertex attribute with index = 1, leaving it at (1, 1, 0) (the last call). That value would only be used if you emitted another vertex by calling glVertexAttrib3f(0, ...).
So, the bottom line is: the vertex attribute with index 0 has the special meaning that writing it also emits a vertex. A call to glVertexAttrib3f(0, ...) should therefore always be the last call per vertex, after all other attributes for that vertex have been set.
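For the swapped layout, this means setting the position (now attribute 1) first and writing attribute 0 (the color) last for each vertex. A minimal sketch of the corrected ordering, keeping the question's swapped locations:
glBegin(GL_TRIANGLES);
glVertexAttrib3f(1, 0, 0, 0); // position of vertex 1
glVertexAttrib3f(0, 0, 1, 0); // color; writing index 0 emits the vertex
glVertexAttrib3f(1, 1, 0, 0); // position of vertex 2
glVertexAttrib3f(0, 0, 1, 0);
glVertexAttrib3f(1, 1, 1, 0); // position of vertex 3
glVertexAttrib3f(0, 0, 1, 0);
glEnd();
Each vertex's position is set immediately before the attribute-0 write that emits it, so all three vertices carry the intended values.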

Related

Getting exact pixel from texture

I have a question about textures in OpenGL. I am trying to use them for GPGPU operations, but I am stuck at the beginning. I have created a texture like this (a 4x4 matrix):
OGLTexImageFloat dataTexImage = new OGLTexImageFloat(4, 4, 4);
dataTexImage.setPixel(0, 0, 0, 0);
dataTexImage.setPixel(0, 1, 0, 10);
dataTexImage.setPixel(0, 2, 0, 5);
dataTexImage.setPixel(0, 3, 0, 15);
dataTexImage.setPixel(1, 0, 0, 10);
dataTexImage.setPixel(1, 1, 0, 0);
dataTexImage.setPixel(1, 2, 0, 2);
dataTexImage.setPixel(1, 3, 0, 1000);
dataTexImage.setPixel(2, 0, 0, 5);
dataTexImage.setPixel(2, 1, 0, 2);
dataTexImage.setPixel(2, 2, 0, 0);
dataTexImage.setPixel(2, 3, 0, 2);
dataTexImage.setPixel(3, 0, 0, 15);
dataTexImage.setPixel(3, 1, 0, 1000);
dataTexImage.setPixel(3, 2, 0, 2);
dataTexImage.setPixel(3, 3, 0, 0);
texture = new OGLTexture2D(gl, dataTexImage);
Now I would like to add the value at matrix position [1,1] to the value of every pixel (matrix entry). Since this concerns every pixel, I should probably do it in the fragment shader. But I don't know how to fetch an exact pixel from the texture (the [1,1] entry of the matrix). Can someone explain how to do this?
If you are trying to add a single constant value (i.e. a value from [1,1]) to the entire image (every pixel of the rendered image), then you should pass that constant value as a separate uniform value into your shader program.
Then in the fragment shader, add this constant value to the current pixel color. The current pixel color comes as an input vec4 from your vertex shader.
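A minimal fragment-shader sketch of that approach (the names dataTexture, addend and vTexCoord are illustrative; addend would be filled from the CPU side with the [1,1] value):
uniform sampler2D dataTexture; // the 4x4 data texture
uniform float addend;          // value of the [1,1] matrix entry, set from the CPU
varying vec2 vTexCoord;        // interpolated from the vertex shader

void main()
{
    vec4 value = texture2D(dataTexture, vTexCoord);
    gl_FragColor = value + vec4(addend); // add the constant to every channel
}
Alternatively, if the value must be read inside the shader itself, texelFetch(dataTexture, ivec2(1, 1), 0) (GLSL 1.30+) fetches that exact texel by integer coordinates, without any filtering.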

Can I use a three-dimensional compute shader to write to a three-dimensional image?

I define a three-dimensional texture and dispatch the compute shader:
glTexStorage3D(GL_TEXTURE_3D, 1, GL_RGBA32F, SCREEN_WIDTH, SCREEN_HEIGHT, TEXTURE_DEPTH);
glBindImageTexture(0, m_Texture, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
glDispatchCompute(16, 16, 2);
compute shader
#version 450

layout(local_size_x = 32, local_size_y = 32, local_size_z = 2) in;
layout(binding = 0, rgba32f) uniform image3D Image;

void main()
{
    ivec3 position = ivec3(gl_GlobalInvocationID.xyz);
    vec4 color = vec4(gl_WorkGroupID / vec3(gl_NumWorkGroups), 1.0);
    imageStore(Image, position, color);
}
but the code doesn't work. I want to know: is the value of gl_GlobalInvocationID.z the depth of the space?
I have solved this problem. The parameters of glDispatchCompute() are x, y, z; if x * y * z * local_size_x * local_size_y * local_size_z > screensize.x * screensize.y, it cannot work, so I downsampled the texture resolution.
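An alternative to downsampling is to size the dispatch to the image, so the invocation grid exactly covers the texels. A sketch for the 32x32x2 local size above:
// Round up so every texel is covered by at least one invocation
GLuint groupsX = (SCREEN_WIDTH  + 32 - 1) / 32; // local_size_x = 32
GLuint groupsY = (SCREEN_HEIGHT + 32 - 1) / 32; // local_size_y = 32
GLuint groupsZ = (TEXTURE_DEPTH + 2 - 1) / 2;   // local_size_z = 2
glDispatchCompute(groupsX, groupsY, groupsZ);
When the image size is not a multiple of the local size, invocations that land outside the image can be skipped with a bounds check on gl_GlobalInvocationID before the imageStore.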

I can't get a simple indexed array rendered properly

I am porting this sample (site) to JOGL, but I noticed something wasn't perfect in the image: some artifacts on the floor, and shapes not exactly square, as you can see (don't mind the color, it is varying); the floor doesn't look good either:
Therefore I tried to render only the floor first (if you want to try it, it's pretty easy: switch SQRT_BUILDING_COUNT from 100 to 0), and there I already hit the first problem: it is supposed to be a square made of two triangles, but I see only one of them.
My vertex structure:
public float[] position = new float[3];
public byte[] color = new byte[4];
public float[] attrib0 = new float[4];
public float[] attrib1 = new float[4];
public float[] attrib2 = new float[4];
public float[] attrib3 = new float[4];
public float[] attrib4 = new float[4];
public float[] attrib5 = new float[4];
public float[] attrib6 = new float[4];
attrib0-6 are unused at the moment
My VS inputs:
// Input attributes
layout(location=0) in vec4 iPos;
layout(location=1) in vec4 iColor;
layout(location=2) in PerMeshUniforms* bindlessPerMeshUniformsPtr;
layout(location=3) in vec4 iAttrib3;
layout(location=4) in vec4 iAttrib4;
layout(location=5) in vec4 iAttrib5;
layout(location=6) in vec4 iAttrib6;
layout(location=7) in vec4 iAttrib7;
I am declaring iPos as a vec3, so I guess it will be padded to vec4(iPos, 1) in the VS.
I transfer data to gpu:
gl4.glNamedBufferData(vertexBuffer[0], Vertex.size() * vertices.size(),
GLBuffers.newDirectFloatBuffer(verticesArray), GL4.GL_STATIC_DRAW);
gl4.glNamedBufferData(indexBuffer[0], GLBuffers.SIZEOF_SHORT * indices.size(),
GLBuffers.newDirectShortBuffer(indicesArray), GL4.GL_STATIC_DRAW);
Then before I render I call:
gl4.glEnableVertexArrayAttrib(0, 0);
gl4.glEnableVertexArrayAttrib(0, 1);
Then render, original code is:
// Set up attribute 0 for the position (3 floats)
glVertexArrayVertexAttribOffsetEXT(0, m_vertexBuffer, 0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), Vertex::PositionOffset);
// Set up attribute 1 for the color (4 unsigned bytes)
glVertexArrayVertexAttribOffsetEXT(0, m_vertexBuffer, 1, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), Vertex::ColorOffset);
I substituted it with:
// Set up attribute 0 for the position (3 floats)
gl4.glVertexArrayVertexBuffer(0, 0, vertexBuffer[0], Vertex.positionOffset,
Vertex.size());
gl4.glVertexArrayAttribFormat(0, 0, 3, GL4.GL_FLOAT, false, Vertex.size());
// Set up attribute 1 for the color (4 unsigned bytes)
gl4.glVertexArrayVertexBuffer(0, 1, vertexBuffer[0], Vertex.colorOffset,
Vertex.size());
gl4.glVertexArrayAttribFormat(0, 1, 4, GL4.GL_UNSIGNED_BYTE, true, Vertex.size());
And then I finish the render:
// Reset state
gl4.glDisableVertexArrayAttrib(0, 0);
gl4.glDisableVertexArrayAttrib(0, 1);
I admit I have never used DSA before; I always used GL3 with the normal VBO, VAO and IBO, binding and unbinding.
Culling is off.
What's wrong then?
Solved. The problem was that I hadn't implemented DSA properly:
glEnableVertexAttribArray(vao, 0);
glEnableVertexAttribArray(vao, 1);
// Setup the formats
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribFormat(vao, 1, 2, GL_FLOAT, GL_FALSE, 0);
// Setup the buffer sources
glVertexArrayVertexBuffer(vao, 0, buffers[0], 0, 0); // Note the 2nd argument here is a 'binding index', not the attribute index
glVertexArrayVertexBuffer(vao, 1, buffers[1], 0, 0);
// Link them up
glVertexArrayAttribBinding(vao, 0, 0); // Associate attrib 0 (first 0) with binding 0 (second 0).
glVertexArrayAttribBinding(vao, 1, 1);
plus glVertexArrayElementBuffer if you have indexed rendering
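Applied to the interleaved single-buffer layout from the question, the same setup might look like this sketch (the offsetof expressions are assumed to match the Vertex structure's members):
glEnableVertexArrayAttrib(vao, 0);
glEnableVertexArrayAttrib(vao, 1);
// One interleaved buffer at binding index 0
glVertexArrayVertexBuffer(vao, 0, vertexBuffer, 0, sizeof(Vertex));
// The last argument is the offset within a Vertex, not a buffer offset
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, offsetof(Vertex, position));
glVertexArrayAttribFormat(vao, 1, 4, GL_UNSIGNED_BYTE, GL_TRUE, offsetof(Vertex, color));
// Both attributes pull from binding 0
glVertexArrayAttribBinding(vao, 0, 0);
glVertexArrayAttribBinding(vao, 1, 0);
glVertexArrayElementBuffer(vao, indexBuffer);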

Arrange color in fragment shader without texture coordinates depending on fragment position

I need to draw a rectangle in OpenGL ES 2.0, but the rectangle's color has to be arranged in the fragment shader. I will draw two triangles to represent the rectangle. This is similar to texture mapping, but without texture coordinates. Ideally, each pixel's position would be calculated and compared against an array of 16 elements containing 0s and 1s (taking the remainder of a division by 16 maps any position to an index 0-15). If the element at the corresponding index is 1, the pixel is colored using a specific color supplied to the fragment shader; otherwise it is left uncolored.
The original post included a diagram illustrating the problem (image omitted).
The array is passed as a uniform to the fragment shader, and it seems it is not passed correctly. The following is the code where the uniform is passed to the fragment shader:
void GeometryEngine::drawLineGeometry(QGLShaderProgram *program)
{
    glBindBuffer(GL_ARRAY_BUFFER, vboId);

    // Bind attribute position in vertex shader to the program
    int vertexLocation = program->attributeLocation("position");
    program->enableAttributeArray(vertexLocation);
    glVertexAttribPointer(vertexLocation, 3, GL_FLOAT, GL_FALSE, 0, 0);

    const int array[16] = {1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0};
    int arrayLocation = program->attributeLocation("array");
    program->setUniformValueArray(arrayLocation, array, 16);

    // Draw triangles (6 points)
    glDrawArrays(GL_TRIANGLES, 0, 6);
}
Fragment shader:
uniform int array[16];

void main()
{
    gl_FragColor = vec4(array[0], 0.0, 0.0, 1.0);
    /* int index = int(mod(gl_FragCoord.x, 16.0));
    if (array[index] == 1)
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    else
        gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
    */
}
The commented lines should produce the specific colors for the fragments, but I cannot test that part because the array is not passed correctly: the rectangle is black instead of red (array[0] is 1, not 0).
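One detail worth checking, sketched below under the assumption that the rest of the pipeline is fine: in Qt, a uniform's location must be queried with uniformLocation(); attributeLocation() only resolves vertex attributes, so it returns -1 for "array", which makes setUniformValueArray() a silent no-op.
const GLint array[16] = {1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0};
int arrayLocation = program->uniformLocation("array"); // not attributeLocation()
program->setUniformValueArray(arrayLocation, array, 16);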

OpenGL(with GLSL) using glTexBuffer and texelFetch

I am trying to render a rectangle in desktop OpenGL, but the internal format used in glTexBuffer(...) and the corresponding texelFetch(...) code are not working for me. The primitive renders correctly; I only need to fix the texture-buffer part. Below is the relevant code snippet:
unsigned char texData[16] =
{
    255, 0, 0, 0,     // Red
    0, 255, 0, 255,   // Green
    0, 0, 255, 255,   // Blue
    255, 0, 255, 255, // Pink
};
glGenBuffers(2, texBuffObj);
glBindBuffer(GL_TEXTURE_BUFFER, texBuffObj[0]);
glBufferData(GL_TEXTURE_BUFFER, sizeof(texData), texData, GL_DYNAMIC_DRAW);
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_BUFFER, textureID);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA8UI, texBuffObj[0]);
Fragment shader snippet:
#version 330 core

uniform usamplerBuffer samplerTexBuffer;

out vec4 color;

in vec2 vs_texCoord;
in vec3 vert_Color;

void main()
{
    int offset = 8; // 0:RED 4:GREEN 8:BLUE 12:PINK
    vec4 colBuff;
    colBuff = texelFetch(samplerTexBuffer, offset);
    color = colBuff;
}
The required rendering is such that:
offset value 0 gives primitive color 255, 0, 0, 0 (red)
offset value 4 gives primitive color 0, 255, 0, 255 (green)
offset value 8 gives primitive color 0, 0, 255, 255 (blue)
offset value 12 gives primitive color 255, 0, 255, 255 (pink)
What are the necessary corrections?
texelFetch takes texel coordinates, not a byte offset into the buffer. Since your texels are 4 bytes wide, you want to retrieve them using indices 0, 1, 2, 3 rather than 0, 4, 8, 12.
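A minimal sketch of the corrected fetch (the conversion from uvec4 is an added detail: with GL_RGBA8UI, a usamplerBuffer returns unsigned integers rather than normalized floats, so the result must be divided by 255.0 manually):
int texelIndex = 2;                                      // 0: red, 1: green, 2: blue, 3: pink
uvec4 texel = texelFetch(samplerTexBuffer, texelIndex);  // one whole RGBA8UI texel
color = vec4(texel) / 255.0;                             // map 0..255 integers to 0..1 floats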