How to make a 1D LUT in C++ for GLSL

I'm beginning to understand how to implement a fragment shader to do a 1D LUT, but I am struggling to find any good resources that explain how to make the 1D LUT in C++ and then turn it into a texture.
So, for a simple example, given the following 1D LUT below:
Would I make an array with the following data?
int colorLUT[256] = {255,
254,
253,
...,
...,
...,
3,
2,
1,
0};
Or unsigned char, I guess, since I'm going to be texturing it.
If this is how to create the LUT, then how would I convert it to a texture? Should I use glTexImage1D? Or is there a better method to do this? I'm really at a loss here; any advice would be helpful.
I'm sorry to be so brief, but I haven't seen any tutorials about how to actually make and link the LUT; every tutorial on GLSL only tells you about the shaders and neglects the linking part.
My end goal is I would like to know how to take different 1D LUTs as seen below and apply them all to images.

Yes, you can use 1D textures as lookup tables.
You can load the data into a 1D texture with glTexImage1D(). Using GL_R8 as the internal texture format, and specifying the data as GL_UNSIGNED_BYTE when passing it to glTexImage1D(), is your best choice if 8 bits of precision are enough for the value. Your call will look like this, with lutData being a pointer/array to GLubyte data, and lutSize the size of your LUT:
glTexImage1D(GL_TEXTURE_1D, 0, GL_R8, lutSize, 0, GL_RED, GL_UNSIGNED_BYTE, lutData);
If you need higher precision than 8 bits, you can use formats like GL_R16 or GL_R32F.
Make sure that you also set the texture parameters correctly, e.g. for linear sampling between values in the lookup table:
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
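Putting these calls together, a minimal end-to-end sketch might look like this (the inversion ramp and the GL_CLAMP_TO_EDGE wrap mode are assumptions for illustration, not part of the original answer):
GLubyte lutData[256];
for (int i = 0; i < 256; ++i)
    lutData[i] = (GLubyte)(255 - i);    // example: a simple inversion LUT

GLuint lutTex;
glGenTextures(1, &lutTex);
glBindTexture(GL_TEXTURE_1D, lutTex);
glTexImage1D(GL_TEXTURE_1D, 0, GL_R8, 256, 0, GL_RED, GL_UNSIGNED_BYTE, lutData);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);  // avoid wrap artifacts at the table ends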
You then bind the texture to a sampler1D uniform in your shader, and use the regular texture sampling functions to retrieve the new value. Remember that texture coordinates are in the range 0.0 to 1.0, so you need to map the range of your original values to [0.0, 1.0] before you pass it into the texture sampling function. The new value you receive from the texture sampling function will also be in the range [0.0, 1.0].
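For example, a legacy-GLSL fragment shader sketch (the uniform names are placeholders) that applies the LUT to each color channel:
uniform sampler2D image;  // the image being processed (assumed name)
uniform sampler1D lut;    // the lookup table (assumed name)

void main() {
    vec4 color = texture2D(image, gl_TexCoord[0].xy);
    // the channel values are already in [0.0, 1.0], so they can be
    // used directly as 1D texture coordinates
    gl_FragColor = vec4(texture1D(lut, color.r).r,
                        texture1D(lut, color.g).r,
                        texture1D(lut, color.b).r,
                        color.a);
}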
Note that as long as your lookup is a relatively simple function, it might be more efficient to calculate the function in the shader. But if the LUT can contain completely arbitrary mappings, using a 1D texture is a good way to go.
In OpenGL variants that do not have 1D textures, like OpenGL ES, you can use a 2D texture with a height of 1 instead.
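For example (on ES 2.0 you would use GL_LUMINANCE instead of GL_R8/GL_RED):
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, lutSize, 1, 0, GL_RED, GL_UNSIGNED_BYTE, lutData);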
If you need lookup tables that are larger than the maximum supported texture size, you can also look into buffer textures, as suggested by Andon in his comment.

Related

Accessing RGBA components C-Style

Is there any way to access the texture data in GLSL in a C-like fashion?
By that I mean, if I have fragment.rgba, can I, within the shader, cast fragment.rg to a short and use it that way? I want to encode two pieces of scene information (two shorts) into a texture for use by another shader.
Edit: I'm declaring my texture this way:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mBits);
...and what I want to do in the shader would be done this way in C++:
short* aShortPtr = &fragment.rg;
*aShortPtr = 10000;
aShortPtr = &fragment.ba;
*aShortPtr = 2500;
Now I realize that I can't actually do it this way in the shader, but the RGBA data is stored as an integer anyway, so how can I write and read that integer instead of accessing everything as a vec4?
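One common workaround (a sketch only, not from the original thread; the helper names are hypothetical) is to split each 16-bit value across two normalized 8-bit channels in the shader and reassemble it when reading:
// v in [0.0, 65535.0]; each byte survives the 8-bit quantization exactly
vec2 encodeShort(float v) {
    float hi = floor(v / 256.0);
    float lo = v - hi * 256.0;
    return vec2(hi, lo) / 255.0;
}

float decodeShort(vec2 enc) {
    vec2 b = floor(enc * 255.0 + 0.5);  // undo the normalization
    return b.x * 256.0 + b.y;
}

// usage matching the question's intent:
// gl_FragColor.rg = encodeShort(10000.0);
// gl_FragColor.ba = encodeShort(2500.0);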

Can't create FBO with more than 8 render buffers

So, here's the problem. I have got an FBO with 8 render buffers which I use in my deferred rendering pipeline. Then I added another render buffer and now I get a GLError.
GLError(
    err = 1282,
    description = b'invalid operation',
    baseOperation = glFramebufferTexture2D,
    cArguments = (GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT8, GL_TEXTURE_2D, 12, 0)
)
The code should be fine, since I have just copied it from the previously used render buffer.
glMyRenderBuffer = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, glMyRenderBuffer)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, self.width, self.height, 0, GL_RGB, GL_FLOAT, None)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT8, GL_TEXTURE_2D, glMyRenderBuffer, 0)
glGenerateMipmap(GL_TEXTURE_2D)
And I get the error at this line
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT8, GL_TEXTURE_2D, glMyRenderBuffer, 0)
It looks more like some kind of OpenGL limitation that I don't know about.
I also have a somewhat unusual stack (Linux + GLFW + PyOpenGL), which may also be part of the problem.
I would be glad of any advice at this point.
It looks more like some kind of OpenGL limitation that I don't know about.
The relevant limit is GL_MAX_COLOR_ATTACHMENTS and the spec guarantees that this value is at least 8.
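You can query the actual limit of your implementation at runtime; in C++ this looks like the following (the PyOpenGL equivalent returns the value directly):
GLint maxAttachments = 0;
glGetIntegerv(GL_MAX_COLOR_ATTACHMENTS, &maxAttachments);
// only GL_COLOR_ATTACHMENT0 .. GL_COLOR_ATTACHMENT0 + maxAttachments - 1 are valid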
Now needing more than 8 render targets in a single pass seems insane anyway.
Consider the following things:
try to reduce the number of render targets as much as possible, do not store redundant information (such as vertex position) which can easily be calculated on the fly (you only need depth alone, and you usually have a depth attachment anyway)
use clever encodings appropriate for the data, e.g. three floats for a normal vector is a huge waste. See for example the Survey of Efficient Representations for Independent Unit Vectors
coalesce different render targets, i.e. if you need one vec3 and two vec2 outputs, better use two vec4 targets and assign the seven values to the eight channels (see the sketch after this list)
maybe even use higher bit-depth formats like RGBA32UI and manually encode several different values into a single channel
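A GLSL sketch of that coalescing idea (the input and output names are placeholders):
#version 330 core
in vec3 vNormal;   // placeholder inputs: one vec3 and two vec2
in vec2 vUv0;
in vec2 vUv1;
layout(location = 0) out vec4 target0;
layout(location = 1) out vec4 target1;

void main() {
    // 3 + 2 + 2 = 7 values packed into 8 available channels
    target0 = vec4(vNormal, vUv0.x);
    target1 = vec4(vUv0.y, vUv1, 0.0);  // one channel left spare
}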
If you still need more data, you can either do several render passes (basically with n/8 targets for each pass), or use image load/store or SSBOs in your fragment shader to write the additional data. In your scenario, using image load/store seems to make the most sense, since you probably need the resulting data as a texture. You also get a relatively good access pattern, since you can basically use gl_FragCoord.xy for addressing the image. However, care must be taken if you have overlapping geometry in one draw call, because you may then write to each pixel more than once (that issue is also addressed by the GL_ARB_fragment_shader_interlock extension, but that one is not yet a core feature of OpenGL). You might be able to eliminate that scenario completely by using a pre-depth-pass.
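A minimal fragment-shader sketch of the image load/store route (assuming GL 4.2+; the image uniform name and format are placeholders):
#version 420
layout(rgba16f, binding = 0) uniform writeonly image2D extraData;
out vec4 fragColor;

void main() {
    fragColor = vec4(0.0);  // regular render target output
    // gl_FragCoord.xy gives a convenient per-pixel address into the image
    imageStore(extraData, ivec2(gl_FragCoord.xy), vec4(1.0, 0.0, 0.0, 1.0));
}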

Need to create a custom data 2D texture with reasonable precision

The idea
I need to create a 2D texture to be fed with reasonably precise float values (I mean at least as precise as a GLSL mediump float). I want to store in it each pixel's distance from the camera. I don't want the GL Z-buffer distance to the near plane, only my own lovely custom data :>
The problem/What I've tried
By using a standard texture as a color attachment, I don't get enough precision. Or maybe I missed something?
By using a depth attachment texture as GL_DEPTH_COMPONENT32 I am getting the clamped near plane distance - rubbish.
So it seems I am stuck with not using a depth attachment, even though they seem to hold more precision. So is there a way to have mediump float precision for standard textures?
I find it strange OpenGL doesn't have a generic container for arbitrary data. I mean with custom bit-depth. Or maybe I missed something again!
You can use floating point textures instead of an RGBA texture with 8 bits per channel. However, support for these depends on the devices you want to support; especially older mobile devices lack support for these formats.
Example for GL_RGBA16F (not tested):
glTexImage2D(GL_TEXTURE_2D, mipmap, GL_RGBA16F, mipmapWidth, mipmapHeight, 0, GL_RGBA, GL_HALF_FLOAT, NULL);
Now you can store the data in your fragment shader manually. However, clamping still occurs depending on your MVP. Also, you need to pass the data to the fragment shader.
There are also 32-bit formats.
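For instance, a fragment shader sketch that writes the eye-space distance into a single-channel float attachment (vViewPos is an assumed input carrying the interpolated eye-space position):
#version 330 core
in vec3 vViewPos;        // eye-space position passed from the vertex shader (assumed)
out float distanceOut;   // rendered into e.g. a GL_R32F color attachment

void main() {
    distanceOut = length(vViewPos);  // true distance from the camera, unclamped
}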
There are a number of options for texture formats that give you more than 8-bit component precision.
If your values are in a pre-defined range, meaning that you can easily map your values into the [0.0, 1.0] interval, normalized 16-bit formats are your best option. For a single component texture, the format would be GL_R16. You can allocate a texture of this format using:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, 512, 512, 0, GL_RED, GL_UNSIGNED_SHORT, NULL);
There are matching formats for 2, 3, and 4 components (GL_RG16, GL_RGB16, GL_RGBA16).
If you need a larger range of values that is not easily constrained, float textures become more attractive. The corresponding calls for 1 component textures are:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16F, 512, 512, 0, GL_RED, GL_HALF_FLOAT, NULL);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 512, 512, 0, GL_RED, GL_FLOAT, NULL);
Again, there are matching formats for 2, 3, and 4 components.
The advantage of float textures is that, just based on the nature of float values encoded with a mantissa and exponent, they can cover a wide range. This comes at the price of less precision. For example, GL_R16F gives you 11 bits of precision, while GL_R16 gives you a full 16 bits. Of course GL_R32F gives you plenty of precision (24 bits) as well as a wide range, but it uses twice the amount of storage.
You would probably have an easier time accomplishing this in GLSL as opposed to the C API. However, any custom depth buffer will be consistently, considerably slower than the one provided by OpenGL. Don't expect it to operate in real-time.
If your aim is to have access to the raw distance of any fragment from the camera, remember that depth buffer values are z/w, where z is the distance from the near plane and w is the distance from the camera. So it is possible to extract this quickly with an acceptable amount of precision. However, you are still faced with your original problem: fragments between the camera and the near plane will not be in the depth buffer.

Volume Rendering: How can I turn a .raw file into an opengl-friendly isosurface?

I want to create a 3D visualizer of .raw volume medical datasets using marching tetrahedra.
I found this implementation under the GPL license that looks nice and is based on the info by Paul Bourke, but I don't know how to make it work with a .raw file, which I've found people load as a 3D texture.
// assuming that the data at hand is 256x256x256 unsigned byte data
int XDIM = 256, YDIM = 256, ZDIM = 256;
const int size = XDIM*YDIM*ZDIM;

bool LoadVolumeFromFile(const char* fileName) {
    FILE *pFile = fopen(fileName, "rb");
    if (NULL == pFile) {
        return false;
    }
    GLubyte* pVolume = new GLubyte[size]; // <- here pVolume is a 1D byte array
    fread(pVolume, sizeof(GLubyte), size, pFile);
    fclose(pFile);

    // now pVolume is passed to a 3D texture
    // load data into a 3D texture
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_3D, textureID);

    // set the texture parameters
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glTexImage3D(GL_TEXTURE_3D, 0, GL_INTENSITY, XDIM, YDIM, ZDIM, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, pVolume);

    delete[] pVolume;
    return true;
}
I don't know if I should either:
a) access the values directly from pVolume
b) access them after loading them to a 3D texture
I have no idea how I'm supposed to index pVolume to get the isosurface intensity at each x,y,z if pVolume is a 1D array. I know x,y coordinates are mapped from a 1D array using a divide and a modulo, but how would I map x,y,z?
On the other hand, if I load the .raw file as a 3D texture, would
glTexCoord3f
give me the isovalue at a given x,y,z?
Clarification: The question isn't "can OpenGL draw an isosurface directly from this?"; the question is how to index the isolevels on the .raw geometry, which is needed to evaluate each tetrahedron properly. On the image one can see that each tetrahedron case is marked based on whether the isosurface is above or below it. The isosurface is indexed on its x,y,z coordinates in the .raw file; how can I access it for each x,y,z in the dataset? Will glTexCoord3f give me its intensity at a given x,y,z? Is it better to do an x,y,z conversion from the pVolume array directly?
The isosurface is indexed on its x,y,z coordinates in the .raw file, how can I access it for each x,y,z on the dataset?
By loading the image as a 3D array and indexing it. Not by loading it as an OpenGL texture.
Will glTexCoord3f give me its intensity on a given x,y,z?
No. As the documentation states, glTexCoord3f simply provides a set of texture coordinates for the next vertex to be issued. It returns nothing and doesn't cause anything to happen yet; all it does is set state within OpenGL.
Is it better to do a x,y,z conversion from the pVolume array directly?
Yes. Information in OpenGL should generally travel in one direction: from the user to the screen. OpenGL functions exist for you to feed OpenGL information that it will use to render something. That's how OpenGL works best.
Any kind of back-tracking (reading images back to the CPU, etc) is a slow path.
I think you're confusing a few things here. OpenGL is not an image processing library; it's a rasterizer API. 3D textures are not meant as something you load data into and then access arbitrarily. You load 3D textures to map them onto surfaces, or to use them in direct volume rendering.
Extracting isosurfaces is outside the scope of OpenGL.
A RAW file just means that there is no header, and you must know the data layout before being able to do anything with it. You need to know:
width
height
depth
bytes per value
The value(s) at x,y,z can then be found at the data offset
(width*height*z + width*y + x) * (bytes per value)
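In C++ that lookup could be a small helper (a sketch assuming 1 byte per value; the function name is hypothetical):
inline GLubyte voxelAt(const GLubyte* pVolume, int width, int height,
                       int x, int y, int z)
{
    // flat offset into the 1D array, as in the formula above
    return pVolume[(size_t)width * height * z + (size_t)width * y + x];
}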
An isovalue just means all pixels of the data set equal to a given value. Normally you're interested in pixels which are neighboured by both a pixel below and a pixel above the isovalue. For an efficient determination, first compute the gradient of the scalar field for each pixel, then test whether the neighbouring pixels along the gradient are above or below the isovalue. No OpenGL involved here!
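A sketch of the gradient step using central differences (interior voxels only, no bounds checking; reuses the hypothetical voxelAt helper from above):
void gradientAt(const GLubyte* pVolume, int w, int h,
                int x, int y, int z, float g[3])
{
    g[0] = (voxelAt(pVolume, w, h, x + 1, y, z) - voxelAt(pVolume, w, h, x - 1, y, z)) * 0.5f;
    g[1] = (voxelAt(pVolume, w, h, x, y + 1, z) - voxelAt(pVolume, w, h, x, y - 1, z)) * 0.5f;
    g[2] = (voxelAt(pVolume, w, h, x, y, z + 1) - voxelAt(pVolume, w, h, x, y, z - 1)) * 0.5f;
}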

What exactly is a floating point texture?

I tried reading the OpenGL ARB_texture_float spec, but I still cannot get it into my head.
And how is floating point data related to just normal 8-bit per channel RGBA or RGB data from an image that I am loading into a texture?
You can read a little bit about it here.
Basically, a floating point texture is a texture in which the data is of floating point type :)
That is, it is not clamped. So if you have 3.14f in your texture, you will read the same value in the shader.
You may create them with different numbers of channels. You can also create 16- or 32-bit textures depending on the format, e.g.:
// create 32bit 4 component texture, each component has type float
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 16, 16, 0, GL_RGBA, GL_FLOAT, data);
where data could be like this:
float data[16*16*4]; // 4 float components per texel
for(int i=0;i<16*16*4;++i) data[i] = sin(i*M_PI/180.0f); // whatever
Then in the shader you can get back exactly the same value (if you use a FLOAT32 texture), e.g.:
uniform sampler2D myFloatTex;
float value = texture2D(myFloatTex, texcoord.xy).r;
If you were using a 16-bit format, say GL_RGBA16F, then whenever you read it in the shader there will be a conversion. To avoid this, you may use the half4 type (in Cg/HLSL; standard GLSL has no half types):
half4 value = texture2D(my16BitTex, texcoord.xy);
So, basically, the difference between a normalized 8-bit texture and a floating point texture is that in the first case your values are brought into the [0..1] range and clamped, whereas in the latter you receive your values as is (except for the 16<->32 conversion, see my example above).
Note that you'd probably want to use them with an FBO as a render target; in that case you need to know that not all formats may be attached as a render target, e.g. you cannot attach luminance and intensity formats.
Also, not all hardware supports filtering of floating point textures, so you need to check this first for your case if you need it.
Hope this helps.
FP textures have a specially designated range of internal formats (RGBA16F, RGBA32F, etc.).
Regular textures store fixed-point data, so reading from them gives you values in the [0,1] range. In contrast, FP textures give you values in the [-inf,+inf] range as a result (not necessarily with higher precision).
In many cases (like HDR rendering) you can easily get by without FP textures, just by transforming the values to fit in the [0,1] range. But there are cases like deferred rendering when you may want to store, for example, world-space coordinates without caring about their range.
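For example, a G-buffer output sketch for that deferred rendering case (vWorldPos is an assumed input; the target would be a float format such as GL_RGBA16F or GL_RGBA32F):
#version 330 core
in vec3 vWorldPos;                        // assumed vertex-shader output
layout(location = 0) out vec4 gPosition;  // bound to a floating point attachment

void main() {
    gPosition = vec4(vWorldPos, 1.0);     // values outside [0,1] are stored as is
}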