I am trying to understand how to mipmap texture arrays. From what I understand, a texture array is a 3-dimensional structure where each 2D texture sits at a position in the array given by the depth parameter of glTexStorage3D. But how do I specify the number of mipmaps per texture? Can I specify a different number of mipmaps per texture?
Is this the right way to do it?
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 10, GL_RGBA8, width, height, numTextures);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
for (int i = 0; i < numTextures; ++i) {
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, /*What goes here???*/, 0, 0, 0, width, height, /*What goes here?*/, GL_RGBA, GL_UNSIGNED_BYTE, data);
}
glGenerateMipmap(GL_TEXTURE_2D_ARRAY);
I don't know what to pass as the second and eighth parameters of glTexSubImage3D. Should the second parameter be the number of mipmaps, and the eighth the depth of the current texture?
The second param is the mipmap level of the texture you want to load. In your case, since you want to rely on GL to generate the mipmaps, it's 0.
The eighth parameter is the depth. In the case of arrays, that means the number of layers you're passing. For you, it's 1, since you're passing a single layer per iteration of the loop.
The fifth parameter, however, is the depth offset at which you want to store the data you're passing in. In your case, that's the layer you're loading: i.
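Putting it together, a minimal sketch of the filled-in loop (assuming, purely for illustration, that data[i] points to the RGBA8 pixels of layer i):

for (int i = 0; i < numTextures; ++i) {
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY,
                    0,                 // level: base mipmap only; glGenerateMipmap fills the rest
                    0, 0, i,           // xoffset, yoffset, zoffset: write into layer i
                    width, height, 1,  // width, height, depth: one layer per call
                    GL_RGBA, GL_UNSIGNED_BYTE, data[i]);
}
glGenerateMipmap(GL_TEXTURE_2D_ARRAY);

Also note that glTexStorage3D allocates the same full mipmap chain for every layer, so all layers in the array share one mipmap count; you cannot give individual layers different numbers of mipmaps.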
I have a 3D texture. Each texel contains a transparency value on the alpha channel.
I need to generate my mipmaps in such a way that each mipmap texel always takes the value of the source texel with the maximum alpha value.
In other words, if there are 4 texels, 3 with a transparency value of 0 and one with a transparency value of 1, the resulting mipmap texel should be 1.
How can I achieve this?
If I need to write my own shaders, what is the optimal way to do it?
EDIT:
My question, to put it more clearly, is:
Do I need to manually create a shader that does this, or is there a way to use built-in functions of OpenGL to save me the trouble?
In order to do that, you'll need to render to each layer of each mipmap with a custom shader that computes the max of 8 samples from the upper level.
This can be done by attaching each layer of the rendered mipmap to a framebuffer (using glFramebufferTexture3D) and, in the shader, sampling from the same texture using texelFetch (its lod parameter specifies the mipmap level to sample from).
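A rough sketch of what such a downsampling shader could look like (the uniform names and surrounding setup are my own assumptions, not part of the answer above; note also that GL_TEXTURE_BASE_LEVEL/GL_TEXTURE_MAX_LEVEL should be restricted to the source level while the destination level is attached, to avoid a feedback loop):

#version 330 core
uniform sampler3D uVolume;  // the 3D texture being mipmapped
uniform int uSrcLevel;      // level to read from (one above the level being rendered)
uniform int uDstLayer;      // z-slice currently attached to the framebuffer
out vec4 fragColor;

void main() {
    ivec3 dst = ivec3(ivec2(gl_FragCoord.xy), uDstLayer);
    vec4 best = vec4(0.0);
    for (int z = 0; z < 2; ++z)
        for (int y = 0; y < 2; ++y)
            for (int x = 0; x < 2; ++x) {
                vec4 s = texelFetch(uVolume, dst * 2 + ivec3(x, y, z), uSrcLevel);
                if (s.a >= best.a) best = s;  // keep the whole texel with the largest alpha
            }
    fragColor = best;
}

Each destination slice would be attached with glFramebufferTexture3D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_3D, tex, dstLevel, dstLayer) and filled by drawing a full-screen quad.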
Is there a way to render monochromatically to a frame buffer in OpenGL?
My end goal is to render to a cube map texture to create shadow maps for shading in my application.
From what I understand, a way to do this would be, for each light source, to render the scene 6 times (using the 6 possible orthogonal orientations for the camera), each time to an FBO, and then add all of them to the cube map.
I already have the shaders that render the depth map for one such camera position. However, these shaders render in full RGB, which, for a depth map, is 3 times bigger than it needs to be. Is there a way to render monochromatically so as to reduce the size of the textures?
How do you create the texture(s) for your shadow map (or cube map)? If you use a GL_DEPTH_COMPONENT[16|24|32] format when creating the texture, then the texture will be single-channel, as you want.
Check official documentation: https://www.khronos.org/registry/OpenGL-Refpages/gl2.1/xhtml/glTexImage2D.xml
GL_DEPTH_COMPONENT
Each element is a single depth value. The GL converts it to floating point, multiplies by the signed scale factor GL_DEPTH_SCALE, adds the signed bias GL_DEPTH_BIAS, and clamps to the range [0,1] (see glPixelTransfer).
As you can see, it says each element is a SINGLE depth value.
So if you use something like this:
for (i = 0; i < 6; i++)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT24,
                 size, size, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
each element is a single 24-bit depth value (possibly padded to 32 bits in memory). Otherwise it would make no sense to specify a depth size if the data were stored as RGB[A].
This post also validates that depth texture is single channel texture: https://www.opengl.org/discussion_boards/showthread.php/123939-How-is-data-stored-in-GL_DEPTH_COMPONENT24-texture
"I alrady have the shaders that render the depth map for one such camera position. However, these shaders render in full RGB, which, for a depth map, is 3 times bigger than it needs to be."
In general you render the scene to a shadow map to get depth values (or distances), right? Then why render as RGB at all? If you only need depth values, you don't need any color attachments, because you only write to the depth buffer (OpenGL actually does this by itself unless you override the depth value in the fragment shader).
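A minimal sketch of such a depth-only setup (variable names are mine; one face is attached at a time, and you would switch the attached face between the 6 passes):

GLuint fbo, depthCube;
glGenTextures(1, &depthCube);
glBindTexture(GL_TEXTURE_CUBE_MAP, depthCube);
for (int i = 0; i < 6; ++i)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT24,
                 size, size, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_CUBE_MAP_POSITIVE_X, depthCube, 0);
glDrawBuffer(GL_NONE);  // no color buffer: nothing but depth is written
glReadBuffer(GL_NONE);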
I'm working on a graphics project that involves using a 3D texture to do some volume rendering on data stored in the form of a rectilinear grid, and I was a little confused about the width, height, and depth arguments for glTexImage3D. For a 1D texture, I know that you can use something like this:
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
where the width is the 256 possible values for each color stream. Here, I'm still using colors in the form of unsigned bytes (and the main purpose of the texture here is still to interpolate the colors of the points in the grid, along with the transparencies), so it makes sense to me that one of the arguments would still be 256. It's a 64 X 64 X 64 grid, so it makes sense that one of the arguments (maybe the depth?) would be 64, but I'm not sure about the third, or even if I'm on the right track there. Could anyone enlighten me on the proper use of those three parameters? I looked at both of these discussions, but still came away confused:
regarding glTexImage3D
OPENGL how to use glTexImage3D function
It looks like you misunderstood the 1D case. In your example, 256 is the size of the 1D texture, i.e. the number of texels.
The number of possible values for each color component is given by the internal format, which is the 3rd argument. GL_RGB actually leaves it up to the implementation what the color depth should be. It is supported for backwards compatibility with old code. It gets clearer if you specify a sized format like GL_RGB8 for the internal format, which requests 8 bits per color component, which corresponds to 256 possible values each for R, G, and B.
For the 3D case, if you want a 64 x 64 x 64 grid, you simply specify 64 for the size in each of the 3 dimensions (width, height, and depth). For this size, using RGBA with a color depth of 8 bits, the call is:
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, 64, 64, 64, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data);
If I create two textures,
1. srcID
2. destID
both of type GL_TEXTURE_CUBE_MAP, with
glTexStorage2D(GL_TEXTURE_CUBE_MAP, 6, GL_RGBA8, 32, 32);
Now "srcID" is filled with all the texture data required.
What parameters should be used to copy the entire "srcID" to "destID"?
I tried many combinations, but it always gave an error.
This is untested, and purely from studying the man page and spec document:
glCopyImageSubData(srcID, GL_TEXTURE_CUBE_MAP, 0, 0, 0, 0,
destID, GL_TEXTURE_CUBE_MAP, 0, 0, 0, 0,
32, 32, 6);
The least obvious part is how to specify that you want to copy all cubemap faces. The spec says about the target arguments:
All non-proxy texture targets are accepted, with the exception of TEXTURE_BUFFER and the cubemap face selectors described in table 8.19.
This tells me that GL_TEXTURE_CUBE_MAP must be used for those arguments, not specific faces.
Then on how to specify that you want to copy all 6 cubemap faces, this part is relevant (highlight added by me):
Slices of a one-dimensional array, two-dimensional array, cube map array, or three dimensional texture, or faces of a cube map texture are all compatible provided they share a compatible internal format, and multiple slices or faces may be copied between these objects with a single call by specifying the starting slice with srcZ and dstZ, and the number of slices to be copied with srcDepth. Cubemap textures always have six faces which are selected by a zero-based face index, according to the order specified in table 8.19.
So copying all 6 faces should work with using 0 for srcZ and dstZ, and 6 for srcDepth.
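One caveat: the call above copies only mipmap level 0, while glTexStorage2D allocated 6 levels. To copy the whole mipmap chain as well, you could loop over the levels, e.g. (equally untested, same assumptions as above):

for (int level = 0; level < 6; ++level) {
    int s = 32 >> level;  // level sizes: 32, 16, 8, 4, 2, 1
    glCopyImageSubData(srcID, GL_TEXTURE_CUBE_MAP, level, 0, 0, 0,
                       destID, GL_TEXTURE_CUBE_MAP, level, 0, 0, 0,
                       s, s, 6);
}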
I'm beginning to understand how to implement a fragment shader that applies a 1D LUT, but I am struggling to find any good resources that explain how to build the 1D LUT in C++ and then turn it into a texture.
So, for a simple example, given the 1D LUT below:
Would I make an array with the following data?
int colorLUT[256] = {255, 254, 253,
                     ...,
                     3, 2, 1, 0};
Or unsigned char, I guess, since I'm going to be texturing it.
If this is how to create the LUT, then how would I convert it to a texture? Should I use glTexImage1D? Or is there a better method? I'm really at a loss here; any advice would be helpful.
I'm sorry to be so brief, but I haven't seen any tutorials about how to actually make and link the LUT; every tutorial on GLSL only covers the shaders and neglects the linking part.
My end goal is to be able to take different 1D LUTs, as seen below, and apply them all to images.
Yes, you can use 1D textures as lookup tables.
You can load the data into a 1D texture with glTexImage1D(). Using GL_R8 as the internal texture format, and specifying the data as GL_UNSIGNED_BYTE when passing it to glTexImage1D(), is your best choice if 8 bits of precision are enough for the value. Your call will look like this, with lutData being a pointer/array to GLubyte data, and lutSize the size of your LUT:
glTexImage1D(GL_TEXTURE_1D, 0, GL_R8, lutSize, 0, GL_RED, GL_UNSIGNED_BYTE, lutData);
If you need higher precision than 8 bits, you can use formats like GL_R16 or GL_R32F.
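For instance, a float-precision upload could look like this (assuming, hypothetically, float data in lutDataFloat):

glTexImage1D(GL_TEXTURE_1D, 0, GL_R32F, lutSize, 0, GL_RED, GL_FLOAT, lutDataFloat);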
Make sure that you also set the texture parameters correctly, e.g. for linear sampling between values in the lookup table:
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
You then bind the texture to a sampler1D uniform in your shader, and use the regular texture sampling functions to retrieve the new value. Remember that texture coordinates are in the range 0.0 to 1.0, so you need to map the range of your original values to [0.0, 1.0] before you pass it into the texture sampling function. The new value you receive from the texture sampling function will also be in the range [0.0, 1.0].
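To make the linking part concrete, here is a rough sketch; all names (program, lutTexture, uLUT, uImage) are illustrative, not a fixed API. On the C++ side, bind the LUT to a texture unit and point the uniform at it:

glUseProgram(program);
glActiveTexture(GL_TEXTURE1);             // put the LUT on texture unit 1
glBindTexture(GL_TEXTURE_1D, lutTexture);
glUniform1i(glGetUniformLocation(program, "uLUT"), 1);

And in the fragment shader, run each color channel through the LUT:

#version 330 core
uniform sampler2D uImage;  // the image being processed, on unit 0
uniform sampler1D uLUT;    // the lookup table, on unit 1
in vec2 vUV;
out vec4 fragColor;

void main() {
    vec4 c = texture(uImage, vUV);
    // the components of c are already in [0.0, 1.0], so they can be
    // used directly as 1D texture coordinates
    fragColor = vec4(texture(uLUT, c.r).r,
                     texture(uLUT, c.g).r,
                     texture(uLUT, c.b).r,
                     c.a);
}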
Note that as long as your lookup is a relatively simple function, it might be more efficient to calculate the function in the shader. But if the LUT can contain completely arbitrary mappings, using a 1D texture is a good way to go.
In OpenGL variations that do not have 1D textures, like OpenGL ES, you can use a 2D texture with height set to 1 instead.
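For example, the equivalent 2D upload would be (same hypothetical names as above, assuming a version where GL_R8/GL_RED are available, such as OpenGL ES 3.0):

glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, lutSize, 1, 0, GL_RED, GL_UNSIGNED_BYTE, lutData);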
If you need lookup tables that are larger than the maximum supported texture size, you can also look into buffer textures, as suggested by Andon in his comment.