I have a vertex attribute that's being mangled very strangely by my shaders. It's uploaded to the VBO as a (uint8)1, but when the fragment shader sees it, it's interpreted as 1065353216, or 0x3F800000, which some of you might recognize as the bit pattern for 1.0f in floating point.
I have no idea as to why. I can confirm that it is uploaded to the VBO as a 1 (0x00000001), though.
The vertex attribute is defined as:
struct Vertex{
...
glm::u8vec4 c2; // attribute with problems
};
// not-normalized
glVertexAttribPointer(aColor2, 4, GL_UNSIGNED_BYTE, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, c2));
While the shader has that attribute bound with
glBindAttribLocation(programID, aColor2, "c2");
The vertex shader passes along the attribute pretty uneventfully:
#version 330
in lowp uvec4 c2; // <-- this value is uploaded to the VBO as 0x00, 0x00, 0x00, 0x01;
flat out lowp uvec4 indices;
void main(){
indices = c2;
}
And finally the fragment shader gets ahold of it:
flat in lowp uvec4 indices; // <-- this value is now 0, 0, 0, 0x3F800000
out lowp vec4 fragColor;
void main(){
fragColor = vec4(indices) / 256.0;
}
The indices varying leaves the vertex shader as a 0x3F800000 for indices.w according to my shader inspector, so something odd is happening there? What could be causing this?
If the type of a vertex attribute is integral, then you have to use glVertexAttribIPointer rather than glVertexAttribPointer (note the I). See glVertexAttribPointer.
The type specified in glVertexAttribPointer is the type of the data in the source buffer; it doesn't specify the target attribute type in the shader. If you use glVertexAttribPointer, then the type of the attribute in the shader program is assumed to be floating point, and the integral data are converted (which is exactly why the bit pattern of 1.0f, 0x3F800000, shows up).
If you use glVertexAttribIPointer, then the values are left as integers.
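Applied to the code from the question, a minimal fix might look like this (note that glVertexAttribIPointer has no normalized parameter, so the GL_FALSE argument is dropped):
// the shader declares `in uvec4 c2`, so the array must be specified
// with the I variant to keep the data integral
glVertexAttribIPointer(aColor2, 4, GL_UNSIGNED_BYTE, sizeof(Vertex), (void*)offsetof(Vertex, c2));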
Related
I am trying to pass some integer values to the Vertex Shader along with the vertex data.
I generate a buffer while the vertex array is bound and then try to attach it to a location, but it seems like the value in the vertex shader is always 0.
Here is the part of the code that generates the buffer, and its usage in the shader.
glm::vec3 materialStuff = glm::vec3(31, 32, 33);
glGenBuffers(1, &materialBufferIndex);
glBindBuffer(GL_ARRAY_BUFFER, materialBufferIndex);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec3), &materialStuff, GL_STATIC_DRAW);
glEnableVertexAttribArray(9);
glVertexAttribIPointer(9, 3, GL_INT, sizeof(glm::vec3), (void*)0);
And here is the part of the shader that is supposed to receive the integer values:
// Some other locations
layout (location = 0) in vec3 vertex_position;
layout (location = 1) in vec2 vertex_texcoord;
layout (location = 2) in vec3 vertex_normal;
layout (location = 3) in vec3 vertex_tangent;
layout (location = 4) in vec3 vertex_bitangent;
layout (location = 5) in mat4 vertex_modelMatrix;
// layout (location = 6) in_use...
// layout (location = 7) in_use...
// layout (location = 8) in_use...
// The location I am attaching my integer buffer to
layout (location = 9) in ivec3 vertex_material;
// I also tried with these variations
//layout (location = 9) in int vertex_material[3];
//layout (location = 9) in int[3] vertex_material;
// and then in the vertex shader I try to retrieve the int value by doing something like this
diffuseTextureInd = vertex_material[0];
That diffuseTextureInd should go to the fragment shader through
out flat int diffuseTextureInd;
And I am planning to use this to index into an array of bindless textures that I already have set up and working. The issue is that it seems like vertex_material just contains 0s since my fragment shader always displays the 0th texture in the array.
Note: I know that my fragment shader is fine since if I do
diffuseTextureInd = 31;
in the vertex shader, the fragment shader receives the correct index and displays the correct texture. But when I try to use the value from layout location 9, it seems like I always get a 0. Any idea what I am doing wrong here?
The following definitions:
glm::vec3 materialStuff = glm::vec3(31, 32, 33);
glVertexAttribIPointer(9, 3, GL_INT, sizeof(glm::vec3), (void*)0);
...
layout (location = 9) in ivec3 vertex_material;
practically mean that:
glm::vec3 declares a vector of 3 floats, not integers. glm::ivec3 should be used for a vector of integers.
An ivec3 vertex attribute means that a vector of 3 integer values is expected for each vertex. At the same time, materialStuff defines values for only a single vertex (which makes no sense for a triangle; that would require at least 3 glm::ivec3 values).
What should be declared for passing a single integer vertex attribute is:
layout (location = 9) in int vertex_material;
(without any array qualifier)
GLint materialStuff[3] = { 31, 32, 33 };
glVertexAttribIPointer(9, 1, GL_INT, sizeof(GLint), (void*)0); // stride of one GLint per vertex
It should be noted, though, that passing a different per-vertex integer to the fragment shader makes no sense on its own, which I suppose you solved with the flat keyword. The existing pipeline defines only per-vertex inputs, not per-triangle ones or anything like that. There is glVertexAttribDivisor(), which defines the vertex attribute rate, but it is applicable only to rendering instances via glDrawArraysInstanced()/glDrawElementsInstanced() (a specific vertex attribute may be incremented per instance), not triangles; see the sketch below.
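A hypothetical sketch of that instanced path, assuming attribute location 9 and a bound buffer holding one GLint per instance:
glVertexAttribIPointer(9, 1, GL_INT, sizeof(GLint), (void*)0);
glVertexAttribDivisor(9, 1); // advance attribute 9 once per instance instead of once per vertex
glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount, instanceCount);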
There are ways to handle per-triangle inputs: this could be done by defining a Uniform Buffer Object or a Texture Buffer Object (similar to a 1D texture, but accessed by index and without interpolation) instead of a generic vertex attribute. But tricks will still be necessary to determine the triangle index into this array: again, from a vertex attribute, or from built-in variables like gl_VertexID in the Vertex Shader, gl_PrimitiveIDIn in the Geometry Shader, or gl_PrimitiveID in the Fragment Shader (I cannot say, though, how these counters are affected by culling).
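For instance, a minimal fragment shader sketch of the texture buffer route; the materials buffer texture and its one-GL_R32I-texel-per-triangle layout are assumptions, not part of the question:
#version 330
uniform isamplerBuffer materials; // assumed: one GL_R32I texel per triangle
out vec4 fragColor;
void main()
{
    int materialIndex = texelFetch(materials, gl_PrimitiveID).r;
    fragColor = vec4(vec3(float(materialIndex) / 255.0), 1.0); // just to visualize the index
}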
Problem: OpenGL is converting the integer array I passed into a vertex array object into a float array for some reason.
When I try to use the vertices as an ivec2 in my vertex shader I get some weird numbers as output; however, if I use vec2 instead, I get the expected output.
Code:
VertexShader:
#version 430 core
// \/ This is what I'm referring to when talking about ivec2 and vec2
layout (location = 0) in ivec2 aPos;
uniform uvec2 window;
void main()
{
float xPos = float(aPos.x)/float(window.x);
float yPos = float(aPos.y)/float(window.y);
gl_Position = vec4(xPos, yPos, 1.0f, 1.0f);
}
Passing the Vertex Array:
GLint vertices[] =
{
-50, 50,
50, 50,
50, -50,
-50, -50
};
GLuint VBO, VAO;
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO); // \/ I'm passing it as an int
glVertexAttribPointer(0, 2, GL_INT, GL_FALSE, 2 * sizeof(GLint), (void*)0);
glEnableVertexAttribArray(0);
glUseProgram(sprite2DProgram);
glUniform2ui(glGetUniformLocation(sprite2DProgram, "window"), 640, 480);
Output:
The first picture is what happens when I use ivec2 (the bad output).
The second picture is what happens when I use vec2 (the expected output).
If you need to know anything else please ask it in the comments.
For vertex attributes with integral data you have to use glVertexAttribIPointer (note the I) to define the array of generic vertex attribute data.
This means, that if you use the ivec2 data type for the attribute in the vertex shader:
in ivec2 aPos;
then you have to use glVertexAttribIPointer when you define the array of generic vertex attribute data.
Change your code like this, to solve the issue:
// glVertexAttribIPointer instead of glVertexAttribPointer;
// note that the I variant has no 'normalized' parameter, so GL_FALSE is dropped
glVertexAttribIPointer(0, 2, GL_INT, 2 * sizeof(GLint), (void*)0);
See OpenGL 4.6 API Compatibility Profile Specification; 10.2. CURRENT VERTEX ATTRIBUTE VALUES; page 389:
When values for a vertex shader attribute variable are sourced from an enabled generic vertex attribute array,
the array must be specified by a command compatible with the data type of the variable.
The values loaded into a shader attribute variable bound to generic attribute index are undefined if the array for index was not specified by:
VertexAttribFormat, for floating-point base type attributes;
VertexAttribIFormat with type BYTE, SHORT, or INT for signed integer base type attributes; or
VertexAttribIFormat with type UNSIGNED_BYTE, UNSIGNED_SHORT, or UNSIGNED_INT for unsigned integer base type attributes.
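For reference, a minimal sketch of the separate attribute format API that this quote refers to (OpenGL 4.3+); the use of binding index 0 is an assumption:
glBindVertexArray(VAO);
glVertexAttribIFormat(0, 2, GL_INT, 0);           // integer format, relative offset 0
glVertexAttribBinding(0, 0);                      // attribute 0 reads from binding point 0
glBindVertexBuffer(0, VBO, 0, 2 * sizeof(GLint)); // buffer, offset, stride
glEnableVertexAttribArray(0);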
I want to render an unsigned integer texture with a fragment shader using the following code:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, width, height, 0, GL_RED_INTEGER, GL_UNSIGNED_BYTE, data);
and part of the fragment shader code:
#version 330
uniform usampler2D tex;
void main(void){
vec3 vec_tex;
vec_tex = (texture(tex), TexCoordOut).r
}
It is written in the OpenGL Programming Guide that if I want to receive integers in the shader, then I should use an integer sampler type, an integer internal format, and an integer external format and type. Here I use GL_R8UI as the internal format, GL_RED_INTEGER as the external format, and GL_UNSIGNED_BYTE as the data type. I also use usampler2D in the shader. But when the program starts to render, I always get the error: implicit cast from "int" to "uint". It seems that the texture data is stored as int, and the unsigned sampler cannot convert that. But I did use GL_R8UI as the internal format, so the texture data should be stored as unsigned. Why does the unsigned sampler only get signed int? How can I solve this problem?
The texture function call is not correct. Also, with a usampler2D the texture function returns unsigned integer values, which need to be handled in the shader, e.g. by dividing the components by 255.0 (as you use GL_R8UI) before writing the fragment color output:
uniform usampler2D tex;
in vec2 TexCoordOut;
out vec4 OutColor;
void main(void){
    uvec4 vec_tex = texture(tex, TexCoordOut); // integer texel values, no normalization
    OutColor = vec4(vec3(vec_tex.rgb) / 255.0, 1.0);
}
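One more thing worth checking (an addition beyond the original answer): integer textures do not support linear filtering, and the default minification filter is GL_NEAREST_MIPMAP_LINEAR, so such a texture may be incomplete unless NEAREST filtering is set explicitly:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);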
I have a 2d texture that I loaded with
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, gs.width(), gs.height(), 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, gs.buffer());
where gs is an object with methods that return the proper types.
In the fragment shader I sample from the texture and attempt to use that as the alpha channel for the resultant color. If I use the sampled value for other channels in the output texture it produces what I would expect. Any value that I use for the alpha channel appears to be ignored, because it always draws Color.
I am clearing the screen using:
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
Can anyone suggest what I might be doing wrong? I am getting an OpenGL 4.0 context with 8 red, 8 green, 8 blue, and 8 alpha bits.
Vertex Shader:
#version 150
in vec2 position;
in vec3 color;
in vec2 texcoord;
out vec3 Color;
out vec2 Texcoord;
void main()
{
Texcoord = texcoord;
Color = color;
gl_Position = vec4(position, 0.0, 1.0);
}
Fragment Shader:
#version 150
in vec3 Color;
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D tex;
void main()
{
float t = texture(tex, Texcoord);
outColor = vec4(Color, t);
}
Frankly, I am surprised this actually works. texture (...) returns a vec4 (unless you are using a shadow/integer sampler, which you are not). You really ought to be swizzling that texture down to just a single component if you intend to store it in a float.
I am guessing you want the alpha component of your texture, but who honestly knows -- try this instead:
float t = texture (tex, Texcoord).a; // Get the alpha channel of your texture
A half-way decent GLSL compiler would warn or error on what you are trying to do right now. I suspect yours does as well, but you are not checking the shader info log when you compile your shader.
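A minimal sketch of that check, assuming shader holds the compiled shader object:
GLint status = GL_FALSE;
glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
if (status != GL_TRUE) {
    GLchar log[1024];
    glGetShaderInfoLog(shader, sizeof(log), NULL, log);
    fprintf(stderr, "Shader compile failed:\n%s\n", log); // requires <cstdio>
}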
Update:
The original answer did not even begin to address the madness you are doing with your GL_DEPTH_COMPONENT internal format texture. I completely missed that because the code did not fit on screen.
Why are you using gs.rgba() to pass data to a texture whose internal and pixel transfer format is exactly 1 component? Also, if you intend to use a depth texture in your shader then the reason it is always returning a=1.0 is actually very simple:
Beginning with GLSL 1.30, when sampled using texture (...), depth textures are automatically setup to return the following vec4:
vec4 (r, r, r, 1.0).
The RGB components are replaced with the value of R (the floating-point depth), and A is replaced with a constant value of 1.0.
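Putting this together, a minimal sketch of a corrected fragment shader for the setup above, reading the replicated depth from the red channel rather than relying on alpha:
#version 150
in vec3 Color;
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D tex;
void main()
{
    float t = texture(tex, Texcoord).r; // the depth value; .a is always 1.0 here
    outColor = vec4(Color, t);
}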
Your issue is that you're only passing in a vec3 when you need a vec4. RGBA is 4 components, not just three.
When I pass non-max values into a texture buffer, rendering draws the geometry with colors at max values. I found this issue while using the glTexBuffer() API.
E.g., assume my texture data is GLubyte: when I pass any value less than 255, the color is the same as when drawn with 255, instead of a mixture of black and that color.
I tried on AMD and NVIDIA cards, but the results are the same.
Can you tell me where I could be going wrong?
I am copying my code here:
Vert shader:
in vec2 a_position;
uniform float offset_x;
void main()
{
gl_Position = vec4(a_position.x + offset_x, a_position.y, 1.0, 1.0);
}
Frag shader:
out vec4 Color;
uniform isamplerBuffer sampler;
uniform int index;
void main()
{
Color=texelFetch(sampler,index);
}
Code:
GLubyte arr[]={128,5,250};
glGenBuffers(1,&bufferid);
glBindBuffer(GL_TEXTURE_BUFFER,bufferid);
glBufferData(GL_TEXTURE_BUFFER,sizeof(arr),arr,GL_STATIC_DRAW);
glBindBuffer(GL_TEXTURE_BUFFER,0);
glGenTextures(1, &buffer_texture);
glBindTexture(GL_TEXTURE_BUFFER, buffer_texture);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8, bufferid);
glUniform1f(glGetUniformLocation(shader_data.psId,"offset_x"),0.0f);
glUniform1i(glGetUniformLocation(shader_data.psId,"sampler"),0);
glUniform1i(glGetUniformLocation(shader_data.psId,"index"),0);
glGenBuffers(1,&bufferid1);
glBindBuffer(GL_ARRAY_BUFFER,bufferid1);
glBufferData(GL_ARRAY_BUFFER,sizeof(vertices4),vertices4,GL_STATIC_DRAW);
attr_vertex = glGetAttribLocation(shader_data.psId, "a_position");
glVertexAttribPointer(attr_vertex, 2 , GL_FLOAT, GL_FALSE ,0, 0);
glEnableVertexAttribArray(attr_vertex);
glDrawArrays(GL_TRIANGLE_FAN,0,4);
glUniform1i(glGetUniformLocation(shader_data.psId,"index"),1);
glVertexAttribPointer(attr_vertex, 2 , GL_FLOAT, GL_FALSE ,0,(void *)(32) );
glDrawArrays(GL_TRIANGLE_FAN,0,4);
glUniform1i(glGetUniformLocation(shader_data.psId,"index"),2);
glVertexAttribPointer(attr_vertex, 2 , GL_FLOAT, GL_FALSE ,0,(void *)(64) );
glDrawArrays(GL_TRIANGLE_FAN,0,4);
In this case it draws all 3 squares with a dark red color.
uniform isamplerBuffer sampler;
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8, bufferid);
There's your problem: they don't match.
You created the texture's storage as unsigned 8-bit integers, which are normalized to floats upon reading. But you told the shader that you were giving it signed 8-bit integers, which will be read as integers, not floats.
You confused OpenGL by being inconsistent. Mismatching sampler types with texture formats yields undefined behavior.
That should be a samplerBuffer, not an isamplerBuffer.
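With the GL_R8 format kept as-is, a minimal consistent fragment shader sketch would be (the alternative, if integer values are really wanted, is to keep isamplerBuffer and create the buffer texture with an integer format such as GL_R8I):
#version 330
out vec4 Color;
uniform samplerBuffer sampler; // float sampler, matching the normalized GL_R8 format
uniform int index;
void main()
{
    // texelFetch now returns normalized floats: the red channel holds
    // 128/255, 5/255 and 250/255 for indices 0, 1 and 2.
    Color = texelFetch(sampler, index);
}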