I'm working on a graphics project that uses a 3D texture for volume rendering of data stored as a rectilinear grid, and I'm a little confused about the width, height, and depth arguments for glTexImage3D. For a 1D texture, I know you can use something like this:
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
where the width is the 256 possible values for each color stream. Here, I'm still using colors in the form of unsigned bytes (and the main purpose of the texture is still to interpolate the colors of the points in the grid, along with the transparencies), so it makes sense to me that one of the arguments would still be 256. It's a 64 x 64 x 64 grid, so it makes sense that one of the arguments (maybe the depth?) would be 64, but I'm not sure about the third, or even whether I'm on the right track there. Could anyone enlighten me on the proper use of those three parameters? I looked at both of these discussions, but still came away confused:
regarding glTexImage3D
OPENGL how to use glTexImage3D function
It looks like you misunderstood the 1D case. In your example, 256 is the size of the 1D texture, i.e. the number of texels.
The number of possible values for each color component is given by the internal format, which is the 3rd argument. GL_RGB actually leaves it up to the implementation what the color depth should be. It is supported for backwards compatibility with old code. It gets clearer if you specify a sized format like GL_RGB8 for the internal format, which requests 8 bits per color component, which corresponds to 256 possible values each for R, G, and B.
For the 3D case, if you want a 64 x 64 x 64 grid, you simply specify 64 for the size in each of the 3 dimensions (width, height, and depth). For this size, using RGBA with a color depth of 8 bits, the call is:
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, 64, 64, 64, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data);
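If it helps, here is a rough sketch of the full setup for that case; the volumeData pointer and the filter/wrap settings are assumptions for illustration, not something taken from your code:

// Sketch: upload a 64 x 64 x 64 RGBA8 volume. volumeData is assumed to
// point to 64*64*64*4 bytes of voxel data built from the rectilinear grid.
GLuint volumeTex;
glGenTextures(1, &volumeTex);
glBindTexture(GL_TEXTURE_3D, volumeTex);

// Linear filtering interpolates color and alpha between grid points;
// clamp-to-edge avoids wrap-around artifacts at the volume borders.
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, 64, 64, 64, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, volumeData);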
I have created a sample application using GLEW and GLUT which reads a dds file and displays it. I manually read the dds file (an NPOT (886 x 317) file in R8G8B8) and create the data pointer (unsigned char*).
Then I prepare the texture using:
void prepareTexture(int w, int h, unsigned char* data) {
    /* Create and load texture to OpenGL */
    glGenTextures(1, &textureID); /* Texture name generation */
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
                 w, h,
                 0, GL_RGB, GL_UNSIGNED_BYTE,
                 data);
    glGenerateMipmap(GL_TEXTURE_2D);
}
In the figure above, the first image shows the original dds file and the second is the rendering result of my application, which is obviously wrong. If I resize the image to 1024 x 512, both images look the same.
From the OpenGL Specification
I.3 Non-Power-Of-Two Textures
The restriction of textures to power-of-two dimensions has been relaxed for all texture targets, so that non-power-of-two textures may be specified without generating errors. Non-power-of-two textures was promoted from the ARB_texture_non_power_of_two extension.
From this, my understanding is that from OpenGL 2.0 onwards we can use NPOT textures and OpenGL will handle them.
I tried using the DevIL image library to load the dds file but ended up with the same result. If I convert the image to RGBA and change both the internal format and the format of glTexImage2D to GL_RGBA, I get the correct result even if the dds file is NPOT.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             w, h,
             0, GL_RGBA, GL_UNSIGNED_BYTE,
             data);
I tried the application on PCs with an NVIDIA card and with a Radeon card, and both give the same result.
My sample source code can be downloaded from the link
Can anybody tell me what is wrong with my application? Or does OpenGL not allow NPOT textures if the image is in R8G8B8?
This looks like an alignment issue. Add this before the glTexImage2D() call:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
This value specifies the row alignment of your data in bytes. The default value is 4.
With your texture width of 886 and 3 bytes per pixel for GL_RGB, each row is 886 * 3 = 2658 bytes, which is not a multiple of 4.
With UNPACK_ALIGNMENT at its default value, the row size is rounded up to the next multiple of 4, which is 2660. So 2660 bytes are read for each row, which explains the increasing shift from row to row: the first row is read correctly, the second row is 2 bytes off, the third row 4 bytes off, the fourth row 6 bytes off, and so on.
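As a sketch, the upload from the question with the alignment fix applied would look like this (the alternative of padding the rows yourself is shown only as a comment; the rowSize computation is an illustration, not code from the question):

// The rows in the dds data are tightly packed (886 * 3 bytes each), so
// tell OpenGL not to assume 4-byte row alignment before uploading.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
             w, h,
             0, GL_RGB, GL_UNSIGNED_BYTE,
             data);

// Alternative: keep the default alignment of 4 and pad each row of the
// CPU-side buffer to a multiple of 4 bytes yourself:
// int rowSize = ((w * 3) + 3) & ~3;  // 2658 -> 2660 for w == 886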
The idea
I need to create a 2D texture to be fed with reasonably precise float values (I mean at least as precise as a GLSL mediump float). I want to store each pixel's distance from the camera in it. I don't want the GL Z-buffer distance to the near plane, only my own lovely custom data :>
The problem/What I've tried
By using a standard texture as a color attachment, I don't get enough precision. Or maybe I missed something?
By using a depth attachment texture with GL_DEPTH_COMPONENT32, I am getting the clamped distance to the near plane - rubbish.
So it seems I am stuck with not using a depth attachment, even though depth attachments seem to hold more precision. Is there a way to get mediump float precision with standard textures?
I find it strange that OpenGL doesn't have a generic container for arbitrary data, I mean with a custom bit depth. Or maybe I missed something again!
You can use floating-point textures instead of an RGBA texture with 8 bits per component. However, support for these depends on the devices you want to target; older mobile devices in particular often lack support for these formats.
Example for GL_RGBA16F (not tested):
glTexImage2D(GL_TEXTURE_2D, mipmap, GL_RGBA16F, mipmapWidth, mipmapHeight, 0, GL_RGBA, GL_HALF_FLOAT, NULL);
Now you can write the data from your fragment shader manually. However, clamping still occurs depending on your MVP. You also need to pass the data to the fragment shader.
There are also 32-bit formats, such as GL_RGBA32F.
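For example, an untested sketch of the 32-bit variant (width and height are placeholder names):

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);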
There are a number of options for texture formats that give you more than 8-bit component precision.
If your values are in a pre-defined range, meaning that you can easily map your values into the [0.0, 1.0] interval, normalized 16-bit formats are your best option. For a single component texture, the format would be GL_R16. You can allocate a texture of this format using:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, 512, 512, 0, GL_RED, GL_UNSIGNED_SHORT, NULL);
There are matching formats for 2, 3, and 4 components (GL_RG16, GL_RGB16, GL_RGBA16).
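If the distances are computed on the CPU side, the mapping into [0.0, 1.0] could be done roughly like this (a sketch only; distances, width, height, nearDist, and farDist are placeholder names):

// Normalize distances from a known [nearDist, farDist] range into
// [0, 65535] so they fit a GL_R16 texture.
GLushort* texels = new GLushort[width * height];
for (int i = 0; i < width * height; ++i) {
    float t = (distances[i] - nearDist) / (farDist - nearDist); // -> [0, 1]
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    texels[i] = (GLushort)(t * 65535.0f + 0.5f);
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, width, height, 0,
             GL_RED, GL_UNSIGNED_SHORT, texels);
delete[] texels;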
If you need a larger range of values that is not easily constrained, float textures become more attractive. The corresponding calls for 1 component textures are:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16F, 512, 512, 0, GL_RED, GL_HALF_FLOAT, NULL);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 512, 512, 0, GL_RED, GL_FLOAT, NULL);
Again, there are matching formats for 2, 3, and 4 components.
The advantage of float textures is that, just based on the nature of float values encoded with a mantissa and exponent, they can cover a wide range. This comes at the price of less precision. For example, GL_R16F gives you 11 bits of precision, while GL_R16 gives you a full 16 bits. Of course GL_R32F gives you plenty of precision (24 bits) as well as a wide range, but it uses twice the amount of storage.
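Since the goal here is to write a per-fragment camera distance from the shader, a rough sketch of using such a texture as a render target (the sizes, names, and error handling are assumptions):

// Single-channel 32-bit float texture to render the distances into.
GLuint distTex, fbo;
glGenTextures(1, &distTex);
glBindTexture(GL_TEXTURE_2D, distTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 512, 512, 0, GL_RED, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Attach it as the color target of a framebuffer object; the fragment
// shader can then output the camera distance into the red channel.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, distTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle an incomplete framebuffer here
}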
You would probably have an easier time accomplishing this in GLSL as opposed to the C API. However, any custom depth buffer will be consistently, considerably slower than the one provided by OpenGL. Don't expect it to operate in real-time.
If your aim is to have access to the raw distance of any fragment from the camera, remember that depth buffer values are z/w, where z is the distance from the near plane and w is the distance from the camera. So the distance can be extracted quickly with an acceptable amount of precision. However, you are still faced with your original problem: fragments between the camera and the near plane will not be in the depth buffer.
I'm beginning to understand how to implement a fragment shader to do a 1D LUT but I am struggling to find any good resources that tell you how to make the 1D LUT in C++ and then texture it.
So, for a simple example, given the 1D LUT below:
Would I make an array with the following data?
int colorLUT[256] = {255,
                     254,
                     253,
                     ...,
                     ...,
                     ...,
                     3,
                     2,
                     1,
                     0};
Or unsigned char, I guess, since I'm going to be texturing it.
If this is how to create the LUT, how would I then convert it to a texture? Should I use glTexImage1D? Or is there a better method? I'm really at a loss here; any advice would be helpful.
I'm sorry to be so brief, but I haven't seen any tutorials about how to actually make and link the LUT; every tutorial on GLSL only covers the shaders and neglects the linking part.
My end goal is to be able to take different 1D LUTs, as seen below, and apply them all to images.
Yes, you can use 1D textures as lookup tables.
You can load the data into a 1D texture with glTexImage1D(). Using GL_R8 as the internal texture format, and specifying the data as GL_UNSIGNED_BYTE when passing it to glTexImage1D(), is your best choice if 8 bits of precision are enough for the value. Your call will look like this, with lutData being a pointer/array to GLubyte data, and lutSize the size of your LUT:
glTexImage1D(GL_TEXTURE_1D, 0, GL_R8, lutSize, 0, GL_RED, GL_UNSIGNED_BYTE, lutData);
If you need higher precision than 8 bits, you can use formats like GL_R16 or GL_R32F.
Make sure that you also set the texture parameters correctly, e.g. for linear sampling between values in the lookup table:
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
You then bind the texture to a sampler1D uniform in your shader, and use the regular texture sampling functions to retrieve the new value. Remember that texture coordinates are in the range 0.0 to 1.0, so you need to map the range of your original values to [0.0, 1.0] before you pass it into the texture sampling function. The new value you receive from the texture sampling function will also be in the range [0.0, 1.0].
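For the linking part, here is a hedged sketch of binding the LUT texture to a texture unit and wiring it to the sampler1D uniform; the names lutTextureID, shaderProgram, and lutSampler are assumptions for illustration:

// Bind the LUT to texture unit 1 and point the shader's sampler1D
// uniform (assumed to be named "lutSampler") at that unit.
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_1D, lutTextureID);

glUseProgram(shaderProgram);
GLint lutLocation = glGetUniformLocation(shaderProgram, "lutSampler");
glUniform1i(lutLocation, 1);  // 1 = texture unit GL_TEXTURE1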
Note that as long as your lookup is a relatively simple function, it might be more efficient to calculate the function in the shader. But if the LUT can contain completely arbitrary mappings, using a 1D texture is a good way to go.
In OpenGL variations that do not have 1D textures, like OpenGL ES, you can use a 2D texture with height set to 1 instead.
If you need lookup tables that are larger than the maximum supported texture size, you can also look into buffer textures, as suggested by Andon in his comment.
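If you do go the buffer texture route, a rough sketch could look like this (OpenGL 3.1+; lutSize and lutData are the same names as above):

// Store the LUT in a buffer object and expose it as a buffer texture.
GLuint lutBuffer, lutBufferTex;
glGenBuffers(1, &lutBuffer);
glBindBuffer(GL_TEXTURE_BUFFER, lutBuffer);
glBufferData(GL_TEXTURE_BUFFER, lutSize, lutData, GL_STATIC_DRAW);

glGenTextures(1, &lutBufferTex);
glBindTexture(GL_TEXTURE_BUFFER, lutBufferTex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8, lutBuffer);

// In the shader, sample it with a samplerBuffer and texelFetch().
// Note that buffer textures are not filtered, so there is no linear
// interpolation between entries.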
I have a texture which has only 1 channel, as it's a grayscale image. When I pass the pixels into glTexImage2D, it comes out red (obviously because channel 1 is red; RGB).
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA,
    dicomImage->GetColumns(), dicomImage->GetRows(),
    0, GL_RGBA, GL_UNSIGNED_BYTE, pixelArrayPtr);
Do I change GL_RGBA? If so, what to?
Change it to GL_LUMINANCE. See https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml
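In that case the call would look roughly like this (untested, legacy OpenGL only, and assuming pixelArrayPtr really is tightly packed 8-bit grayscale data):

glTexImage2D(
    GL_TEXTURE_2D, 0, GL_LUMINANCE,
    dicomImage->GetColumns(), dicomImage->GetRows(),
    0, GL_LUMINANCE, GL_UNSIGNED_BYTE, pixelArrayPtr);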
In the fragment shader, you can write:
uniform sampler2D A;
vec3 result = vec3(texture(A, TexCoord).r);
In the .cpp file, you can write:
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RED,
    dicomImage->GetColumns(), dicomImage->GetRows(),
    0, GL_RED, GL_UNSIGNED_BYTE, pixelArrayPtr);
It appears that I should use GL_LUMINANCE instead of GL_RGBA for the 3rd argument.
Edit (in reply to comments):
When I set the 7th argument to GL_LUMINANCE (as well as the 3rd), the picture becomes completely distorted. With the DICOM pixel format, it appears that the 7th argument must be GL_RGBA for some reason.
The strange behavior is because I'm using the DICOM standard. The particular DICOM reader I am using outputs integer pixel values (as pixel values may exceed the normal maximum of 255). For some strange reason, the combination of telling OpenGL that I am using an RGBA format while passing in integer values rendered a perfect image.
Because I was truncating the DICOM pixel values above 255 anyway, it seemed logical to copy the values into a GLbyte array. However, after doing so, a SIGSEGV (segmentation fault) occurred when calling glTexImage2D. Changing the 7th parameter to GL_LUMINANCE (as is normally required) returned the functionality to normal.
Weird eh?
So, a note to all developers using the DICOM image format: you need to convert the integer array to a char array before passing it to glTexImage2D, or just set the 7th argument to GL_RGBA (the latter is probably not recommended).
You would use the GL_LUMINANCE format in old versions of OpenGL, but in modern OpenGL (3.0+) GL_LUMINANCE is deprecated, so the new way is to use the GL_RED format. That alone gives you a red texture, so to get around it you should write a custom shader, as the answers above have shown. In that shader you take the red component of the texture (the only channel that holds data) and set the green and blue channels to the red channel's value. That converts it to grayscale, since a grayscale texture has all three RGB channels equal and the alpha channel set to 1.