I'm quite new to GLSL and I am trying to apply a colormap in a fragment shader, i.e., I have some float value normalized to [0, 1] and I want to convert it into an RGBA value given by a predefined map/lookup table/colormap.
I am doing this by passing a GL_LUMINANCE32F_ARB texture (a single float per pixel) of size width*height containing my values. I am also passing a GL_RGBA32F_ARB texture (4 floats per pixel) holding the colormap's RGBA values, each mapped to [0, 1].
This is how I am trying to access my colormap in the fragment shader:
uniform sampler1D cmap;
...
vec4 cmapColor = texture1D(cmap, lum);
gl_FragColor = cmapColor;
where 'lum' is a float in the range (0, 1). However, this is failing. Am I fetching the data correctly? Is this how I get a vec4 of floats?
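For intuition, the lookup that `texture1D(cmap, lum)` performs can be sketched on the CPU (Python; the colormap here is a hypothetical list of RGBA tuples, and GL_LINEAR filtering is approximated with a plain linear interpolation, ignoring OpenGL's texel-center offset):

```python
def sample_colormap(cmap, lum):
    """Approximate a GL_LINEAR 1D texture lookup of a normalized value.

    cmap is a list of (r, g, b, a) tuples; lum is in [0, 1].
    """
    # Map [0, 1] onto the continuous index space of the colormap
    pos = lum * (len(cmap) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(cmap) - 1)
    t = pos - lo
    # Linearly interpolate between the two neighboring entries
    return tuple((1.0 - t) * a + t * b for a, b in zip(cmap[lo], cmap[hi]))

# Two-entry colormap: black -> red
cmap = [(0.0, 0.0, 0.0, 1.0), (1.0, 0.0, 0.0, 1.0)]
print(sample_colormap(cmap, 0.5))  # (0.5, 0.0, 0.0, 1.0)
```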
Consider a texture with the same dimensions as gl.canvas. What would be the proper method to sample a pixel from the texture at the same screen location as a clip-space coordinate (-1,-1 to 1,1)? Currently I'm using:
//u_screen = vec2 of canvas dimensions
//u_data = sampler2D of texture to sample
vec2 clip_coord = vec2(-0.25, -0.25);
vec2 tex_pos = (clip_coord / u_screen) + 0.5;
vec4 sample = texture2D(u_data, tex_pos);
This works but doesn't seem to properly take into account the canvas size, and tex_pos seems offset the closer clip_coord gets to -1 or 1.
In the following it is assumed that the texture has the same size as the canvas and is rendered 1:1 over the entire canvas, as stated in the question.
If clip_coord is a 2-dimensional fragment shader input where each component is in the range [-1, 1], then you have to map the coordinate to the range [0, 1]:
vec4 sample = texture2D(u_data, clip_coord*0.5+0.5);
Note, the texture coordinates range from 0.0 to 1.0.
Another possibility is to use gl_FragCoord. gl_FragCoord is a fragment shader built-in variable and contains the window relative coordinates of the fragment.
If you use WebGL 2.0 (GLSL ES 3.00), then gl_FragCoord.xy can be used for a lookup into a 2-dimensional texture via texelFetch, to get the texel that corresponds to the fragment:
vec4 sample = texelFetch(u_data, ivec2(gl_FragCoord.xy), 0);
Note, texelFetch performs a lookup of a single texel. You can think of its coordinate as a 2-dimensional texel index.
If you use WebGL 1.0 (GLSL ES 1.00), then gl_FragCoord.xy can be divided by the (2-dimensional) size of the texture. The result can be used for a texture lookup with texture2D to get the texel that corresponds to the fragment:
vec4 sample = texture2D(u_data, gl_FragCoord.xy / u_size);
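Both variants land on the same texel when the texture is rendered 1:1 on the canvas. The two mappings can be checked numerically (Python; the 800×600 canvas size is an assumption for illustration):

```python
def clip_to_tex(clip):
    """Map a clip-space coordinate in [-1, 1] to a texture coordinate in [0, 1]."""
    return tuple(c * 0.5 + 0.5 for c in clip)

def frag_to_tex(frag, size):
    """Map a window-space fragment coordinate to a texture coordinate."""
    return tuple(f / s for f, s in zip(frag, size))

size = (800, 600)
clip = (-0.25, -0.25)
# The fragment covering this clip coordinate sits at (clip*0.5+0.5) * canvas size
frag = tuple((c * 0.5 + 0.5) * s for c, s in zip(clip, size))
print(clip_to_tex(clip))        # (0.375, 0.375)
print(frag_to_tex(frag, size))  # (0.375, 0.375)
```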
For an application I need to create a depth image of a mesh. Basically I want an image where each pixel tells the distance between the camera center and the intersection point. I chose to do this task with OpenGL, so that most of the computation can be done on the GPU instead of the CPU.
Here is my code for the vertex-shader. It computes the real-world coordinate of the intersection point and stores the coordinates in a varying vec4, so that I have access to it in the fragment-shader.
varying vec4 verpos;

void main(void)
{
    gl_Position = ftransform();
    verpos = gl_ModelViewMatrix * gl_Vertex;
}
And here is the code for the fragment shader. In the fragment shader I use these coordinates to compute the Euclidean distance to the origin (0,0,0). To get access to the computed value on the CPU, my plan is to store the distance as a color using gl_FragColor, and to extract the color values using glReadPixels(0, 0, width, height, GL_RED, GL_FLOAT, &distances[0]);
varying vec4 verpos;

void main()
{
    float x = verpos.x;
    float y = verpos.y;
    float z = verpos.z;
    float depth = sqrt(x*x + y*y + z*z) / 5000.0;
    gl_FragColor = vec4(depth, depth, depth, 1.0);
}
The problem is that I receive the results only with a precision of 0.003921569 (= 1/255).
The resulting values contain for instance 0.364705890, 0.364705890, 0.364705890, 0.364705890, 0.368627459, 0.368627459, 0.368627459. Most of the values in a row are exactly the same, and then there is a jump of 1/255.
So somewhere between gl_FragColor and glReadPixels, OpenGL converts the floats to 8 bits and afterwards converts them back to float.
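This round trip can be reproduced numerically; a small Python sketch of the 8-bit quantization that a default RGBA8 framebuffer applies:

```python
def roundtrip_8bit(value):
    """Simulate storing a [0, 1] float in an 8-bit channel and reading it back."""
    byte = round(value * 255.0)  # conversion to an unsigned byte on write
    return byte / 255.0          # conversion back to float on read

depths = [0.3660, 0.3665, 0.3670, 0.3680]
# Neighboring values collapse onto the same 1/255 step
print([roundtrip_8bit(d) for d in depths])
```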
How can I extract distance values as float without losing precision?
Attach an R32F or R16F texture to an FBO and render to it. Then extract the pixels with glGetTexImage() just as you would with glReadPixels().
Is there a way to store color values in VRAM other than as a float per color component?
Since a color can be represented as one byte per component, how can I force my fragment shader to use color components in the range [0, 255] instead of the default range [0.0, 1.0]?
If I use GL_UNSIGNED_BYTE as the type, do I have to set the normalized parameter to GL_TRUE to convert the values to the 0.0-1.0 range that the fragment shader can interpret?
The output of the fragment shader is independent of the input of the vertex shader. For example, if you are storing colors in RGBA 8-bit format, it would look something like this:
//...
glVertexAttribPointer(0, 4, GL_UNSIGNED_BYTE, GL_FALSE, 4, 0);
//...
In the vertex shader:
//the unsigned bytes are automatically converted to floats in the range [0,255]
//if normalized would have been set to true the range would be [0,1]
layout(location = 0) in vec4 color;
out vec4 c;
//...
c = color; //pass color to fragment shader
In the fragment shader:
in vec4 c;
out vec4 out_color; //output (a texture from a framebuffer)
//....
//the output of the fragment shader must be converted to the range [0,1] unless
//you're writing to integer textures (I'm assuming not here)
out_color = c / 255.0;
A VBO is just a bunch of bytes in the first place. You need to tell OpenGL some information about the data in the VBO. One does that by invoking glVertexAttribPointer.
glVertexAttribPointer(index, size, GL_FLOAT, ...)
With GL_FLOAT, OpenGL knows that your data comes as float32 (4 bytes). In your case, you could use GL_UNSIGNED_BYTE, an 8-bit type, which encodes values from 0 to 255.
Since this information is only stored in the VAO, one could use the same VBO with different views on the data. The glVertexAttribPointer reference documentation lists all available types.
According to the documentation of glVertexAttribPointer you have to set the normalize parameter to let the bytes be scaled to the range 0.0 to 1.0.
But as I can see in your comment to another answer, your real problem is with the output. The shader type vec4 always contains floats, so the values must be in the range 0.0 to 1.0.
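The conversion glVertexAttribPointer applies to each unsigned byte can be sketched numerically (Python; `convert_ubyte_attribute` is a hypothetical helper whose `normalized` flag mirrors the parameter of the same name):

```python
def convert_ubyte_attribute(byte, normalized):
    """Convert one unsigned-byte attribute component to the float the shader sees."""
    if normalized:
        return byte / 255.0  # GL_TRUE: scaled into [0.0, 1.0]
    return float(byte)       # GL_FALSE: plain cast, range [0.0, 255.0]

print(convert_ubyte_attribute(255, normalized=False))  # 255.0
print(convert_ubyte_attribute(255, normalized=True))   # 1.0
```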
I am filling a 2D texture with GLubyte values derived from floating point values of R (real numbers) mapped to (0, 1) and multiplied by 255, giving values in (0, 255). I save it as GL_R8 since I only need one value per texel. This can, for example, represent a mathematical function.
I also upload a 1d texture to work as a colormap/colorbar. I then sample from the 1D texture based on the values from my 2D texture.
This is how my fragment shaders works:
#version 400
in vec2 f_textureCoord;
layout(location = 0) out vec4 FragColor;
uniform sampler2D textureData;
uniform sampler1D colorBar;
void main() {
/* Values in the sampler are on (0, 1) * 255 => (0, 255) */
vec3 texColor = texture2D(textureData, f_textureCoord).rgb;
float s = texColor.r;
/* Use the texture value as a coordinate in the 1D colorbar texture */
vec3 color = texture1D(colorBar, s).rgb;
float val = color.r;
FragColor = vec4(val, val, val, 1);
}
Using this I get the following error:
glValidateProgram: Validation Error: Samplers of different types point
to the same texture unit
However, my code works as expected, at least the rendering result!
My questions are:
1) Why do I get this error/warning? --- Answered in comments...
2) Is this the correct approach to what I am trying to do? Should I use another form of buffer instead of saving my function values in a 2D texture?
3) I assume that I will run into problems when my math function (filling the 2D texture) exceeds some texture size limit. Any recommendations on how I should proceed to work around this?
1.) Call glValidateProgram just before your glDraw* command to check whether uniforms and attribute locations are set up correctly. The spurious warning is issued because both sampler uniforms still point to texture unit 0 right after program linking.
2.) If this is about displaying the results of your function using a color index, it's fine.
If I understand it right, textureData contains only grey values. If you need only one color component from the texture, you should write:
float s = texture2D(textureData, f_textureCoord).r;
3.) If you need to display more data than you can put into a single texture, you will have to use tiling (i.e split the data in several textures and do several draws).
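The tiling step can be sketched like this (Python; `max_size` stands in for whatever GL_MAX_TEXTURE_SIZE would report, and this grid split is one possible layout, not the only one):

```python
def tile_ranges(width, height, max_size):
    """Split a width x height grid into tiles no larger than max_size per side.

    Yields (x0, y0, tile_w, tile_h) rectangles covering the grid, one per
    texture upload / draw call.
    """
    for y0 in range(0, height, max_size):
        for x0 in range(0, width, max_size):
            yield (x0, y0, min(max_size, width - x0), min(max_size, height - y0))

# A 5000x3000 function domain split with an illustrative 2048 limit
tiles = list(tile_ranges(5000, 3000, 2048))
print(len(tiles))  # 3 columns x 2 rows = 6 tiles
```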
glValidateProgram is for detecting if the program can be used to render with the current OpenGL state. This includes things like the current bindings for textures, uniform buffers, and any other things that textures directly access (draw buffers and vertex attributes are not part of this list).
In my fragment shader I can load a texture, then do this:
uniform sampler2D tex;
void main(void) {
vec4 color = texture2D(tex, gl_TexCoord[0].st);
gl_FragColor = color;
}
That sets the current pixel to the color value of the texture. I can modify these values, etc., and it works well.
But I have a few questions. How do I tell "which" pixel I am? For example, say I want to set pixel 100,100 (x,y) to red and everything else to black. How do I do a:
"if currentSelf.Position() == (100,100); then color=red; else color=black?"
?
I know how to set colors, but how do I get "my" location?
Secondly, how do I get values from a neighbor pixel?
I tried this:
vec4 nextColor = texture2D(tex, gl_TexCoord[1].st);
But it's not clear what that returns. If I'm pixel 100,100, how do I get the values from 101,100 or 100,101?
How do I tell "which" pixel I am?
You're not a pixel. You're a fragment. There's a reason that OpenGL calls them "Fragment shaders"; it's because they aren't pixels yet. Indeed, not only may they never become pixels (via discard or depth tests or whatever), thanks to multisampling, multiple fragments can combine to form a single pixel.
If you want to tell where your fragment shader is in window-space, use gl_FragCoord. Fragment positions are floating-point values, not integers, so you have to test with a range instead of a single "100, 100" value.
Secondly, how do I get values from a neighbor pixel?
If you're talking about the neighboring framebuffer pixel, you don't. Fragment shaders cannot arbitrarily read from the framebuffer, either in their own position or in a neighboring one.
If you're talking about accessing a neighboring texel from the one you accessed, then that's just a matter of biasing the texture coordinate you pass to texture2D. You have to get the size of the texture (since you're not using GLSL 1.30 or above, you have to manually pass this in), invert the size and either add or subtract these sizes from the S and T component of the texture coordinate.
Easy peasy.
Just compute the size of one texel from the texture resolution, then look up the samples at +1 and -1 texel.
vec2 onePixel = vec2(1.0, 1.0) / u_textureSize;
gl_FragColor = (
texture2D(u_image, v_texCoord) +
texture2D(u_image, v_texCoord + vec2(onePixel.x, 0.0)) +
texture2D(u_image, v_texCoord + vec2(-onePixel.x, 0.0))) / 3.0;
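The neighbor lookup can be sanity-checked outside the shader. Here is a plain-Python equivalent of that 3-tap average on one row of samples, with edges clamped the way GL_CLAMP_TO_EDGE would behave:

```python
def blur_row(row):
    """Average each sample with its left/right neighbors (edge-clamped)."""
    n = len(row)
    out = []
    for i in range(n):
        left = row[max(i - 1, 0)]      # clamp at the left edge
        right = row[min(i + 1, n - 1)]  # clamp at the right edge
        out.append((left + row[i] + right) / 3.0)
    return out

print(blur_row([0.0, 3.0, 6.0, 9.0]))  # [1.0, 3.0, 6.0, 8.0]
```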