Store float data in a texture - GLSL

I'm looking for an efficient way to store data in a texture instead of using uniforms. The goal is to store bone matrices in a texture.
I'm currently doing it like this:
- one RGBA pixel = one float
- (128, 255, 32, 14) = 0.128255032014 (each channel contributes three decimal digits)
It works and it's much faster than passing the values through uniforms. Still, I think it's a lot of calculation for such a small thing.
So do you know a better way to construct a float from RGBA pixel?
Is there a built-in way to construct a 4×4 matrix from several RGBA pixels?

Here's the answer, thanks to LJ.
I haven't tried it yet, but it's the exact same problem.
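For reference, a minimal sketch of another common approach (not from the linked answer): if float textures are available (OpenGL 3.0+ / GLSL 1.30+ for texelFetch), store the matrices in a GL_RGBA32F texture so each texel already holds four floats, and rebuild a mat4 from four texels. The texture layout and uniform name here are hypothetical.

// Hypothetical layout: one bone matrix per 4 texels (one texel per column),
// all matrices packed into row 0 of a GL_RGBA32F texture.
uniform sampler2D uBoneTex;

mat4 getBoneMatrix(int boneIndex) {
    int base = boneIndex * 4;                         // 4 texels per matrix
    vec4 c0 = texelFetch(uBoneTex, ivec2(base + 0, 0), 0);
    vec4 c1 = texelFetch(uBoneTex, ivec2(base + 1, 0), 0);
    vec4 c2 = texelFetch(uBoneTex, ivec2(base + 2, 0), 0);
    vec4 c3 = texelFetch(uBoneTex, ivec2(base + 3, 0), 0);
    return mat4(c0, c1, c2, c3);                      // columns of the matrix
}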

Related

Convert SRGB texture to linear in OpenGL

I am currently trying to properly implement gamma correction in my renderer. I have set my framebuffer to be SRGB with glEnable(GL_FRAMEBUFFER_SRGB) and now I am left with importing the SRGB textures properly. I know three approaches to do this:
Convert the value in the shader: vec3 realColor = pow(sampledColor, vec3(2.2));
Make OpenGL do it for me: glTexImage2D(..., ..., GL_SRGB, ..., ..., ..., GL_RGB, ..., ...);
Convert the values directly:
for (GLubyte* pixel = image; pixel < image + size; ++pixel)
    *pixel = GLubyte(pow(*pixel, 2.2f) + 0.5f);
Now I'm trying to use the third approach, but it doesn't work.
It is super slow (I know it has to loop through all the pixels but still).
It makes everything look completely wrong (see image below).
Here are some images.
No gamma correction:
Method 2 (correction when sampling in the fragment shader)
Something weird when trying method 3
So now my question is: what's wrong with method 3? It looks completely different from the correct result (assuming that method 2 is correct, which I think it is).
I have set my framebuffer to be SRGB with glEnable(GL_FRAMEBUFFER_SRGB);
That doesn't set your framebuffer to an sRGB format - it only enables sRGB conversion if the framebuffer is already using an sRGB format. In fact, the only use of the GL_FRAMEBUFFER_SRGB enable state is that it lets you disable sRGB conversion on framebuffers which do have an sRGB format. You still have to specifically request an sRGB-capable default framebuffer for your window (or you might be lucky and get one without asking for it, but that will differ greatly across implementations and platforms), or you have to create an sRGB texture or render target if you render to an FBO.
Convert the values directly:
for (GLubyte* pixel = image; pixel < image + size; ++pixel)
    *pixel = GLubyte(pow(*pixel, 2.2f) + 0.5f);
First of all, pow(x, 2.2) is not the correct formula for sRGB - the real one uses a small linear segment near 0 and a power of 2.4 for the rest; using a power of 2.2 is just a further approximation.
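For reference, a sketch of the exact sRGB-to-linear conversion written as GLSL, operating on normalized values in [0, 1]:

// Exact sRGB decoding (per component); input and output are in [0, 1].
float srgbToLinear(float c) {
    return (c <= 0.04045) ? c / 12.92
                          : pow((c + 0.055) / 1.055, 2.4);
}

vec3 srgbToLinear(vec3 c) {
    return vec3(srgbToLinear(c.r), srgbToLinear(c.g), srgbToLinear(c.b));
}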
However, the bigger problem with this approach is that GLubyte is an 8-bit unsigned integer type with the range [0, 255], and doing a pow(..., 2.2) on that yields a value in [0, 196964.7], which when converted back to GLubyte will ignore the higher bits and basically compute modulo 256, so you will get really useless results. Conceptually, you need 255.0 * pow(x/255.0, 2.2), which could of course be further simplified.
The big problem here is that by doing this conversion, you basically lose a lot of precision due to the non-linear distortion of your value range.
If you do such a conversion beforehand, you would have to use higher-precision textures to store the linearized color values (like 16-bit half float per channel); just keeping the data as 8-bit UNORM is a complete disaster - and that is also why GPUs do the conversion directly when accessing the texture, so that you don't have to blow up the memory footprint of your textures by a factor of 2.
So I really doubt that your approach 3 would be "importing the SRGB textures properly". It will just destroy fidelity even if done right. Approaches 1 and 2 do not have that problem, but approach 1 is just silly considering that the hardware will do it for you for free, so I really wonder why you even consider 1 and 3 at all.

Changing the size of a pixel depending on its color with GLSL

I have an application that encodes data for bearing and intensity using 32 bits. My fragment shader already decodes the values and then sets the color depending on bearing and intensity.
I'm wondering if it's also possible, via shader, to change the size (and possibly shape) of the drawn pixel.
As an example, let's say we have 4 possible values for intensity, then 0 would cause a single pixel to be drawn, 1 would draw a 2x2 square, 2 a 4x4 square and 3 a circle with a radius of 6 pixels.
In the past, we had to do all this on the CPU side and I was hoping to offload this job to the GPU.
No, fragment shaders cannot affect the "size" of the data they write. Once something has been rasterized into fragments, it doesn't really have a "size" anymore.
If you're rendering GL_POINTS primitives, you can change their size from the vertex shader by writing gl_PointSize. Even so, it's rather difficult to ensure that a particular point covers an exact number of fragments.
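A minimal vertex-shader sketch of that idea (assuming GLSL 3.30 and a per-vertex intensity attribute; the attribute and uniform names are hypothetical, and the host side would also need glEnable(GL_PROGRAM_POINT_SIZE)):

#version 330 core
// Derive gl_PointSize from a per-vertex intensity value.
layout(location = 0) in vec3 aPosition;
layout(location = 1) in float aIntensity;   // assumed to be 0, 1, 2 or 3

uniform mat4 uMvp;

void main() {
    gl_Position = uMvp * vec4(aPosition, 1.0);
    // Map intensity 0..3 to a point size of 1, 2, 4 or 12 pixels.
    float sizes[4] = float[4](1.0, 2.0, 4.0, 12.0);
    gl_PointSize = sizes[int(clamp(aIntensity, 0.0, 3.0))];
}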
The first thing that came to my mind is doing something similar to a blur technique, but instead of blurring the texture, we use it to look at neighbouring texels within a range and check whether any of them has an intensity above 1.0. If yes, set the current texel color to, for example, red.
If you're using an FBO that is 1:1 with the window size, use 1/width and 1/height in texture coordinates to step by approximately one pixel (not exactly, since it is a texel rather than a pixel, but nearly).
Although this works just fine, the downside is that it is very expensive, as it has n^2 complexity per pixel and probably some branching.
Edit: after thinking for a while, this might not work for even sizes.
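A fragment-shader sketch of that neighbour search (the uniform and varying names are hypothetical, and the radius and threshold are arbitrary examples):

// Sample texels within a small radius and mark the current texel red if any
// neighbour exceeds the threshold.
uniform sampler2D uIntensityTex;
uniform vec2 uTexelSize;              // vec2(1.0/width, 1.0/height)
varying vec2 vUv;

void main() {
    vec4 color = texture2D(uIntensityTex, vUv);
    const int R = 2;                  // search radius in texels
    for (int x = -R; x <= R; ++x) {
        for (int y = -R; y <= R; ++y) {
            float n = texture2D(uIntensityTex, vUv + vec2(x, y) * uTexelSize).r;
            if (n > 0.75)             // arbitrary example threshold
                color = vec4(1.0, 0.0, 0.0, 1.0);
        }
    }
    gl_FragColor = color;
}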

Comparing two textures in OpenGL

I'm new to OpenGL and I'm looking for a way to compare two textures to understand how similar they are to each other. I know how to do this with two bitmap images, but I really need a method to compare two textures.
Question is: Is there any way to compare two textures as we compare two images? Like comparing two images pixel by pixel?
Actually, what you seem to be asking for is not possible, or at least not as easy as it would seem, to accomplish on the GPU. The problem is that the GPU is designed to accomplish as many small tasks as possible in the shortest amount of time. Iterating through an array of data such as pixels is not among them, so reducing the image to something like a single integer or floating-point value can be a bit hard.
There is one very interesting procedure you may try, but I cannot say whether the result will be appropriate for you:
You may first create a new texture that is the difference between the two input textures, then keep downsampling the result until it is a 1x1 pixel texture, and read the value of that pixel to see how different the images are.
To achieve this it would be best to use a fixed-size target buffer with power-of-two (POT) dimensions, for instance 256x256. If you didn't use a fixed size, the result could vary a lot depending on the image sizes.
So in the first pass you would draw the two textures into a third one (using an FBO - frame buffer object). The shader you would use is simply:
vec4 a = texture2D(iChannel0, uv);
vec4 b = texture2D(iChannel1, uv);
fragColor = abs(a - b);
So now you have a texture which represents the difference between the two images per pixel, per color component. If the two images are the same, the result will be a completely black picture.
Now you will need to create a new FBO which is scaled by half in every dimension, which comes to 128x128 in this example. To draw to this buffer you would need to use GL_NEAREST as a texture parameter so that no interpolation is done on the texel fetches. Then for each new pixel, sum the 4 nearest pixels of the source image:
vec2 originalTextCoord = varyingTextCoord;
vec2 textCoordRight = vec2(varyingTextCoord.x + 1.0/256.0, varyingTextCoord.y);
vec2 textCoordBottom = vec2(varyingTextCoord.x, varyingTextCoord.y + 1.0/256.0);
vec2 textCoordBottomRight = vec2(varyingTextCoord.x + 1.0/256.0, varyingTextCoord.y + 1.0/256.0);
fragColor = texture2D(iChannel0, originalTextCoord) +
            texture2D(iChannel0, textCoordRight) +
            texture2D(iChannel0, textCoordBottom) +
            texture2D(iChannel0, textCoordBottomRight);
The 256 value is the size of the source texture, so it should be passed in as a uniform so you can reuse the same shader.
After this is drawn you need to drop down to 64, 32, 16... Then read the pixel back to the CPU and see the result.
Now, unfortunately, this procedure may produce very unwanted results. Since the colors are simply summed together, this will overflow for all images which are not similar enough (resulting in a white pixel, or rather (1,1,1,0) for non-transparent ones). This may be overcome by applying a scale in the first shader pass, dividing the output by a large enough value. Still, this might not be enough, and an average might need to be done in the second shader as well (multiply all the texture2D calls by .25).
In the end the result might still be a bit strange. You get 4 color components on the CPU which represent the sum or the average of an image differential. I guess you could sum them up and choose a threshold for what you consider the images to be alike or not. But if you want to make more sense of the result you are getting, you might want to treat the whole pixel as a single 32-bit floating-point value (this is a bit tricky, but you may find answers around SO). This way you may compute the values without the overflows and get quite exact results from the algorithm. This means you would write the floating-point value as if it were a color, starting with the first shader output and continuing for every other draw call (get the texel, convert it to float, sum it, convert it back to vec4 and assign it as output); GL_NEAREST is essential here.
If not, then you may optimize the procedure: use GL_LINEAR instead of GL_NEAREST and simply keep redrawing the differential texture until it gets down to a single pixel (no need for 4 coordinates). This should produce a pixel which represents an average of all the pixels in the differential texture, so this is the average difference between pixels in the two images. Also, this procedure should be quite fast.
Then, if you want a somewhat smarter algorithm, you may do some wonders when creating the differential texture. Simply subtracting the colors may not be the best approach. It would make more sense to blur one of the images and then compare it to the other image. This will lose precision for very similar images, but for everything else it will give you a much better result. For instance, you could say you are only interested if a pixel is more than 30% different from the corresponding pixel of the other image (the blurred one), so you would discard and rescale that 30% for every component, such as result.r = clamp(abs(a.r-b.r) - 30.0/100.0, 0.0, 1.0) / ((100.0-30.0)/100.0);
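Vectorized over all components, that thresholding might look like this (a sketch, keeping the 30% example threshold and the uv/iChannel names from the earlier snippets):

// Per-component difference with a 30% dead zone, rescaled back to [0, 1].
const float threshold = 0.3;
vec4 a = texture2D(iChannel0, uv);   // original image
vec4 b = texture2D(iChannel1, uv);   // blurred image
fragColor = clamp(abs(a - b) - vec4(threshold), 0.0, 1.0) / (1.0 - threshold);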
You can bind both textures to a shader and visit each pixel by drawing a quad or something like this.
// Equal pixels are marked green. Different pixels are shown in red color.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    vec4 a = texture2D(iChannel0, uv);
    vec4 b = texture2D(iChannel1, uv);
    if (a != b)
        fragColor = vec4(1, 0, 0, 1);
    else
        fragColor = vec4(0, 1, 0, 1);
}
You can test the shader on Shadertoy.
Or you can also bind both textures to a compute shader and visit every pixel by iteration.
Note that == and != on vectors return a single bool for the whole vector; for an explicitly component-wise comparison use the vector relational functions, e.g.
if (any(notEqual(a, b)))
Check the GLSL language spec.
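For reference, the same Shadertoy example written with the explicit component-wise form (a minimal sketch):

// Equal pixels are marked green, differing pixels red.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    vec4 a = texture2D(iChannel0, uv);
    vec4 b = texture2D(iChannel1, uv);
    fragColor = any(notEqual(a, b)) ? vec4(1, 0, 0, 1) : vec4(0, 1, 0, 1);
}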

How to get a floating-point color from GLSL

I am currently faced with a problem closely related to the OpenGL pipeline, and the use of shaders.
Indeed, I work on a project in which one of the steps consists of reading pixels from an image that we generate using OpenGL, with as much accuracy as possible: I mean that instead of reading integers, I would like to read float numbers. (So, instead of reading the value (134, 208, 108) for a pixel, I would like to obtain something like (134.180, 207.686, 108.413), for example.)
For this project, I used both vertex and fragment shaders to render my scene. I assume that the color computed and returned by the fragment shader is a vector of 4 floats (one per RGBA component) belonging to the "continuous" [0, 1] interval. But how can I get it in my C++ file? Is there a way of doing it?
I thought of calling the glReadPixels() function just after having rendered my scene to a buffer, setting the format argument to GL_RGBA and the data type of the pixel data to GL_FLOAT. But I have the feeling that the values associated with the pixels I read have already been cast to integers in the meantime, because the float numbers that I finally get correspond to the interval [0, 255] scaled down to [0, 1], without any gain in precision. A closer look at the OpenGL specifications strengthens this idea: I think there is indeed a cast somewhere between rendering my scene and calling glReadPixels().
Do you have any idea about the way I can reach my objective ?
The default GL_RGBA8 internal format of your render target stores pixel components as 8-bit integers, so the precision is lost before you read anything back. You should render to a floating-point internal format such as GL_RGBA16F or GL_RGBA32F, where 16 and 32 are the bit depths of each component; glReadPixels with GL_FLOAT will then return the unquantized values.

Substituting color with vertex attributes

I have a large set of vertices and currently use glColorPointer to specify their color. The problem is that glColorPointer only accepts a size of 3 or 4 as its first parameter, but the values of R, G and B for each vertex are identical.
Of course, I could use glVertexAttribPointer to specify each color value as an attribute of size one and duplicate it in a shader, but I'm looking for a way to do this in the fixed function pipeline.
Calling glColor1* is unfortunately out of the question given the number of vertices (yes, I tried it).
Any creative solution that squeezes the value into something else is also OK.
I think without shaders this won't be possible, since glColorPointer only accepts a size of 3 or 4, as you already found out (and there is no glColor1, only glColor3 and glColor4).
You might trick glColorPointer into using your tightly packed array by specifying a size of 3 but a stride of 1 (ubyte) or 4 (float). But this will give you a color of (Ri, Ri+1, Ri+2) for vertex i, and there is no way to adjust it to (Ri, Ri, Ri), since the color matrix is not applied to per-vertex colors but only to pixel images.
So without shaders you don't have much creativity left. What you could do is use a 1D texture of size 256, which contains all the grey colors from (0, 0, 0) to (255, 255, 255) in order. Then you can just use your per-vertex color as 1D texture coordinate into that texture. But I'm not sure if that would really buy you anything in either space or time.
The easy and straightforward way is to use vertex attributes and a shader to implement the unpacking of the attribute into the fragment color. This takes just a few lines of shader code and should be the preferred solution. The real difficulty here is the artificial restriction, not the problem space itself. Such cases should be handled by lifting the restriction.
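A sketch of those few lines of shader code (assuming a legacy GLSL 1.20 pipeline; the attribute and varying names are hypothetical):

#version 120
// Replicate a single per-vertex grey value into an RGB color.
attribute float aGrey;
varying vec4 vColor;

void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    vColor = vec4(vec3(aGrey), 1.0);
}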
You can store the color information in a 1D texture (with only one channel), then use a vertex shader which reads the proper color based on gl_VertexID.
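A sketch of that variant (assuming GLSL 3.30, which provides gl_VertexID and texelFetch; the attribute, uniform and sampler names are hypothetical):

#version 330 core
// Fetch a per-vertex grey value from a single-channel 1D texture,
// indexed by gl_VertexID, and expand it to an RGB color.
layout(location = 0) in vec3 aPosition;

uniform mat4 uMvp;
uniform sampler1D uGreyTex;   // one texel (R channel) per vertex

out vec4 vColor;

void main() {
    gl_Position = uMvp * vec4(aPosition, 1.0);
    float grey = texelFetch(uGreyTex, gl_VertexID, 0).r;
    vColor = vec4(vec3(grey), 1.0);
}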