I am filling a 2D texture with GLubyte values obtained from floating-point real numbers mapped to [0, 1] and multiplied by 255, giving values in [0, 255]. I store it as GL_R8 since I only need one value per texel. This can, for example, represent a mathematical function.
I also upload a 1D texture to act as a colormap/colorbar, and I then sample from the 1D texture based on the values read from my 2D texture.
This is how my fragment shader works:
#version 400
in vec2 f_textureCoord;
layout(location = 0) out vec4 FragColor;
uniform sampler2D textureData;
uniform sampler1D colorBar;
void main() {
    /* Values in the sampler are on (0, 1) * 255 => (0, 255) */
    vec3 texColor = texture2D(textureData, f_textureCoord).rgb;
    float s = texColor.r;
    /* Use the texture value as a coordinate in the 1D colorbar texture */
    vec3 color = texture1D(colorBar, s).rgb;
    float val = color.r;
    FragColor = vec4(val, val, val, 1);
}
Using this I get the following error:
glValidateProgram: Validation Error: Samplers of different types point
to the same texture unit
However, my code works as expected, or at least the rendering result does!
My questions are:
1) Why do I get this error/warning? --- Answered in comments...
2) Is this the correct approach to what I am trying to do? Should I use another form of buffer instead of saving my function values in a 2D texture?
3) I assume that I will run into problems when my math function (filling the 2D texture) exceeds some texture size limit. Any recommendations on how I should proceed to work around this?
1.) Call glValidateProgram right before your glDraw* command, after you have set your uniforms and bound your textures, to check that everything is set up correctly. The spurious warning is issued because, right after linking, both sampler uniforms still have their default value of 0 and therefore point to the same texture unit.
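A minimal sketch of the corresponding setup, assuming your program object is prog and your texture handles are tex2D and tex1D (names are mine, not from your code):
glUseProgram(prog);
glUniform1i(glGetUniformLocation(prog, "textureData"), 0); // sampler2D reads from unit 0
glUniform1i(glGetUniformLocation(prog, "colorBar"), 1);    // sampler1D reads from unit 1
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex2D);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_1D, tex1D);
// Only validate (or draw) once this state is in place; the samplers then
// no longer share texture unit 0 and the warning goes away.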
2.) If this is about displaying the results of your function using a color lookup, it's ok.
If I understand it correctly, textureData contains only grey values. If you only need one color component from the texture, you can write it more directly as
float s = texture2D(textureData, f_textureCoord).r;
3.) If you need to display more data than you can put into a single texture, you will have to use tiling (i.e. split the data into several textures and issue several draw calls).
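You can query the limit up front to decide whether tiling is needed; a rough sketch (variable names are mine):
GLint maxSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize); // e.g. 16384 on many desktop GPUs
// If the sample grid of your function is larger than maxSize in either direction,
// split it into tiles no bigger than maxSize x maxSize, upload each tile as its
// own texture and render each tile with its own draw call.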
glValidateProgram is for detecting whether the program can be used to render with the current OpenGL state. This includes things like the current bindings for textures, uniform buffers, and any other resources the program directly accesses (draw buffers and vertex attributes are not part of this list).
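In practice that means validating right before the draw call, with all textures already bound, and then reading back the result; a sketch assuming the program object is prog:
glValidateProgram(prog);
GLint valid = GL_FALSE;
glGetProgramiv(prog, GL_VALIDATE_STATUS, &valid);
if (valid != GL_TRUE) {
    char log[1024];
    glGetProgramInfoLog(prog, sizeof(log), NULL, log);
    // log contains messages such as the sampler/texture unit warning quoted above
}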
So I have a compute shader that is supposed to take a texture and copy it over to another texture with slight modifications. I have confirmed with RenderDoc (a graphics debugging tool) that the textures are bound and that data can be written. The issue is that inside the shader the built-in variable gl_GlobalInvocationID does not seem to behave as I expect.
Here is how I dispatch the compute shader (the texture height is 480):
glDispatchCompute(1, this->m_texture_height, 1); //Call upon shader
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
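For reference, the two images are bound roughly like this before the dispatch (a simplified sketch, not my exact code; source_texture and target_texture are placeholder names):
glBindImageTexture(0, source_texture, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA8);  // binding = 0
glBindImageTexture(1, target_texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8); // binding = 1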
And here is the compute shader itself:
#version 440
#extension GL_ARB_compute_shader : enable
#extension GL_ARB_shader_image_load_store : enable
layout (rgba8, binding=0) uniform image2D texture_source0;
layout (rgba8, binding=1) uniform image2D texture_target0;
layout (local_size_x=640 , local_size_y=1 , local_size_z=1) in; //Local work-group size
void main() {
    ivec2 txlPos; //A variable keeping track of where on the texture the current texel is from
    vec4 result; //A variable to store color
    txlPos = ivec2(gl_GlobalInvocationID.xy);
    //txlPos = ivec2( (gl_WorkGroupID * gl_WorkGroupSize + gl_LocalInvocationID).xy );
    result = imageLoad(texture_source0, txlPos); //Get color value
    barrier();
    result = vec4(txlPos, 0.0, 1.0);
    imageStore(texture_target0, txlPos, result); //Save color in target texture
}
When I run this, the target texture becomes entirely yellow, save for a 1 px thick green line along the left border and a 1 px thick red line along the bottom border. My expectation was to see some sort of gradient, given that I save txlPos as a colour value.
Am I somehow defining my work groups wrong? I've tried splitting gl_GlobalInvocationID up into its components but haven't managed to get any wiser by fiddling with them.
An 8-bit normalized texture format such as rgba8 can only store values between 0 and 1. Since gl_GlobalInvocationID is in most cases larger than 1, it gets clamped to the maximum value of 1, which makes the texture yellow (the green line is the column where x is still 0, and the red line is the row where y is still 0).
If you want to create a gradient in both directions, then you have to make sure that the stored values start at 0 and end at 1. One possibility is to divide by the maximum:
result = vec4(vec2(gl_GlobalInvocationID.xy) / vec2(640, 480), 0, 1);
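The storage format is fixed on the host side when the target texture is allocated; a sketch of what an rgba8 allocation typically looks like (target_texture is a placeholder, sizes assumed from your description):
glBindTexture(GL_TEXTURE_2D, target_texture);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 640, 480); // GL_RGBA8 is unsigned-normalized: stored values are clamped to [0, 1]
// A float format such as GL_RGBA32F (with layout(rgba32f) in the shader) would keep
// the raw invocation IDs, if that is what you actually want to store.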
I'm quite new to GLSL and I am trying to apply a colormap in a fragment shader, i.e. I have some float value normalized to [0, 1] and I want to convert it into an RGBA value given by a predefined map/lookup table/colormap.
I am doing this by passing a GL_LUMINANCE32F_ARB texture (a single float per pixel) of size width*height containing my values. I am also passing a GL_RGBA32F_ARB texture (4 floats per pixel) with the RGB values mapped from 0 to 1.
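For illustration, the upload of such a 1D RGBA float colormap would look roughly like this (a simplified sketch, not my exact code; cmapWidth and cmapData are placeholders):
GLuint cmapTex;
glGenTextures(1, &cmapTex);
glBindTexture(GL_TEXTURE_1D, cmapTex);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA32F_ARB, cmapWidth, 0, GL_RGBA, GL_FLOAT, cmapData);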
This is how I am trying to access my colormap in the fragment shader:
uniform sampler1D cmap;
...
vec4 cmapColor = texture1D(cmap, lum);
gl_FragColor = cmapColor;
where 'lum' is a float in the range (0, 1). However, this is failing. Am I fetching the data correctly? Is this how I get a vec4 of floats?
Is there a way to store color values in VRAM other than as a float per color component?
Since a color can be represented as one byte per component, how can I make my fragment shader work with color components in the range [0, 255] instead of the default range [0.0, 1.0]?
If I use GL_UNSIGNED_BYTE as the type, do I have to set the normalized flag to GL_TRUE so the values are converted to 0.0-1.0 and can be interpreted by the fragment shader?
The output of the fragment shader is independent of the input of the vertex shader. For example, if you are storing colors in RGBA 8-bit format, it would look something like this.
//...
glVertexAttribPointer(0, 4, GL_UNSIGNED_BYTE, GL_FALSE, 4, (void*)0);
//...
in the vertex shader
//the unsigned bytes are automatically converted to floats in the range [0,255]
//if normalized would have been set to true the range would be [0,1]
layout(location = 0) in vec4 color;
out vec4 c;
//...
c = color; //pass color to fragment shader
fragment shader
in vec4 c;
out vec4 out_color; //output (a texture from a framebuffer)
//....
//the output of the fragment shader must be converted to range [0,1] unless
//you're writing to integer textures (I'm assuming not here)
out_color = c / 255.0f;
A VBO is just a bunch of bytes in the first place. You need to tell OpenGL some information about the data in the VBO. One does that by invoking glVertexAttribPointer.
glVertexAttribPointer(index, size, GL_FLOAT, ...)
With GL_FLOAT, OpenGL knows that your data comes as float32 (4 bytes). In your case, you could use GL_UNSIGNED_BYTE, which is an 8-bit type, so you can encode values from 0 to 255.
Since this information is only stored in the VAO, one could use the same VBO with different views on the data. Here one can find all available types.
According to the documentation of glVertexAttribPointer, you have to set the normalized parameter to have the bytes scaled to the range 0.0 to 1.0.
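For example, with normalization enabled the bytes arrive in the shader already scaled to [0, 1] (a sketch; attribute 0 is assumed to be the color):
// 4 unsigned bytes per vertex; GL_TRUE means 0..255 is remapped to 0.0..1.0
glVertexAttribPointer(0, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4, (void*)0);
glEnableVertexAttribArray(0);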
But as I can see in your comment on another answer, your real problem is with the output. The shader type vec4 always consists of floats, so the values must be in the range 0.0 to 1.0.
So, I need to make a shader that replaces the gray colors in a texture with a given color. The fragment shader works properly if I set the output to a specific color, like
gl_FragColor = vec4(1, 1, 0, 1);
However, I'm running into a problem when I try to retrieve the original color of the texture: it always returns black, for some reason.
uniform sampler2D texture; //texture to change
void main() {
    vec2 coords = gl_TexCoord[0].xy;
    vec3 normalColor = texture2D(texture, coords).rgb; //original color
    gl_FragColor = vec4(normalColor.r, normalColor.g, normalColor.b, 1);
}
Theoretically it should do nothing: the texture should be unchanged. But it comes out entirely black instead. I think the problem is that I don't know how to pass the texture as a parameter (to the uniform variable). I'm currently passing the texture ID (an integer), but that always seems to result in black. So I basically don't know how to set the value of the texture uniform (or how to get at the texture in any other way, without using the parameters). The code (in Java):
program.setUniform("texture", t.getTextureID());
I'm using the Program class that I got from here, and also SlickUtil's Texture class, but I believe that is irrelevant.
program.setUniform("texture", t.getTextureID());
^^^^^^^^^^^^^^^^
Nope nope nope.
Texture object IDs never go in uniforms.
Pass in the index of the texture unit you want to sample from.
So if you want to sample from the nth texture unit (GL_TEXTURE0 + n) pass in n:
program.setUniform("texture", 0);
^ or whatever texture unit you've bound `t` to
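In raw OpenGL terms (which the Java wrapper calls map onto), the pairing looks roughly like this; programID and textureID are placeholders:
glActiveTexture(GL_TEXTURE0);                               // select texture unit 0
glBindTexture(GL_TEXTURE_2D, textureID);                    // bind the texture object to that unit
glUniform1i(glGetUniformLocation(programID, "texture"), 0); // the sampler reads from unit 0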
In addition to what genpfault said, when you say "replace the gray colors in the texture with a given color", that had better be a shorthand for "write the color from one texture to another, except replacing gray with a different color". Because you are not allowed to simultaneously read from and write to the same image in the same texture.
In my fragment shader I can load a texture, then do this:
uniform sampler2D tex;
void main(void) {
    vec4 color = texture2D(tex, gl_TexCoord[0].st);
    gl_FragColor = color;
}
That sets the current pixel to the color value of the texture. I can modify these, etc., and it works well.
But a few questions. How do I tell "which" pixel I am? For example, say I want to set pixel (100, 100) (x, y) to red and everything else to black. How do I do a
"if currentSelf.Position() == (100,100); then color=red; else color=black"?
I know how to set colors, but how do I get "my" location?
Secondly, how do I get values from a neighbor pixel?
I tried this:
vec4 nextColor = texture2D(tex, gl_TexCoord[1].st);
But it's not clear what that returns. If I'm pixel (100, 100), how do I get the values from (101, 100) or (100, 101)?
How do I tell "which" pixel I am?
You're not a pixel. You're a fragment. There's a reason that OpenGL calls them "Fragment shaders"; it's because they aren't pixels yet. Indeed, not only may they never become pixels (via discard or depth tests or whatever), thanks to multisampling, multiple fragments can combine to form a single pixel.
If you want to tell where your fragment shader is in window-space, use gl_FragCoord. Fragment positions are floating-point values, not integers, so you have to test with a range instead of a single "100, 100" value.
Secondly, how do I get values from a neighbor pixel?
If you're talking about the neighboring framebuffer pixel, you don't. Fragment shaders cannot arbitrarily read from the framebuffer, either in their own position or in a neighboring one.
If you're talking about accessing a neighboring texel from the one you accessed, then that's just a matter of biasing the texture coordinate you pass to texture2D. You have to get the size of the texture (since you're not using GLSL 1.30 or above, you have to manually pass this in), invert the size and either add or subtract these sizes from the S and T component of the texture coordinate.
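On the host side that just means setting one more uniform with the texture dimensions before drawing, e.g. (a sketch; the uniform name matches the u_textureSize used in the snippet below):
// Let the shader compute the size of one texel as 1.0 / u_textureSize
glUniform2f(glGetUniformLocation(program, "u_textureSize"), (float)texWidth, (float)texHeight);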
Easy peasy.
Just compute the size of a pixel based on the resolution, then look up the texels at +1 and -1.
vec2 onePixel = vec2(1.0, 1.0) / u_textureSize;
gl_FragColor = (
    texture2D(u_image, v_texCoord) +
    texture2D(u_image, v_texCoord + vec2(onePixel.x, 0.0)) +
    texture2D(u_image, v_texCoord + vec2(-onePixel.x, 0.0))) / 3.0;
There's a good example here