If I create two textures as
1. srcID
2. destID
Both of type GL_TEXTURE_CUBE_MAP
glTexStorage2D(GL_TEXTURE_CUBE_MAP, 6, GL_RGBA8, 32, 32);
Now "srcID" is filled with all the required texture data.
What parameters should be used to copy the entire "srcID" into "destID"?
I tried many combinations, but it always gave an error.
This is untested, and purely from studying the man page and spec document:
glCopyImageSubData(srcID, GL_TEXTURE_CUBE_MAP, 0, 0, 0, 0,
destID, GL_TEXTURE_CUBE_MAP, 0, 0, 0, 0,
32, 32, 6);
The least obvious part is how to specify that you want to copy all cubemap faces. The spec says about the target arguments:
All non-proxy texture targets are accepted, with the exception of TEXTURE_BUFFER and the cubemap face selectors described in table 8.19.
This tells me that GL_TEXTURE_CUBE_MAP must be used for those arguments, not specific faces.
Then on how to specify that you want to copy all 6 cubemap faces, this part is relevant (highlight added by me):
Slices of a one-dimensional array, two-dimensional array, cube map array, or three dimensional texture, or faces of a cube map texture are all compatible provided they share a compatible internal format, and multiple slices or faces may be copied between these objects with a single call by specifying the starting slice with srcZ and dstZ, and the number of slices to be copied with srcDepth. Cubemap textures always have six faces which are selected by a zero-based face index, according to the order specified in table 8.19.
So copying all 6 faces should work with using 0 for srcZ and dstZ, and 6 for srcDepth.
My problem is that I can't correctly read back the values stored in a texture which has only a red component. My first implementation caused a buffer overflow. So I read the OpenGL reference, and it says:
If the selected texture image does not contain four components, the following mappings are applied. Single-component textures are treated as RGBA buffers with red set to the single-component value, green set to 0, blue set to 0, and alpha set to 1. Two-component textures are treated as RGBA buffers with red set to the value of component zero, alpha set to the value of component one, and green and blue set to 0. Finally, three-component textures are treated as RGBA buffers with red set to component zero, green set to component one, blue set to component two, and alpha set to 1.
The first confusing thing is that the NVIDIA implementation packs the values tightly together. If I have four one-byte values, I only need four bytes of space, not 16.
So I read the OpenGL specification, and it told me the same on page 236 in table 8.18, except that a two-component texture stores its second value not in the alpha channel but in the green channel, which also makes more sense to me. But which definition is correct?
It also says:
If format is a color format then the components are assigned
among R, G, B, and A according to table 8.18[...]
So I ask you: "What is a color format?" and "Is my texture data tightly packed if the format is not a color format?"
My texture is defined like this:
type: GL_UNSIGNED_BYTE
format: GL_RED
internalformat: GL_R8
Another thing is that when my texture has a size of two by two pixels, the first two values are stored in the first two bytes, but the other two values in the fifth and sixth bytes of my buffer. The two bytes in between are padding. So I queried the GL_PACK_ALIGNMENT state, and it is four bytes. How can that be?
The glGetTexImage call:
std::vector<GLubyte> values(TEXTURERESOLUTION * TEXTURERESOLUTION);
GLvoid *data = &values[0];//values are being passed through a function which does that
glBindTexture(GL_TEXTURE_2D, TEXTUREINDEX);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RED, GL_UNSIGNED_BYTE, data);
glBindTexture(GL_TEXTURE_2D, 0);
The first confusing thing is that the NVIDIA implementation packs the values tightly together.
That is exactly what should happen. The extension to 2, 3, or 4 components is only relevant when you actually read back with the GL_RG, GL_RGB, or GL_RGBA formats (and the source texture has fewer components). If you just ask for GL_RED, you will also only get GL_RED.
[...] except that a two-component texture stores its second value not in the alpha channel, but in the green channel, which also makes more sense to me. But which definition is correct?
The correct definition is the one in the spec. Unfortunately, the reference pages often contain small inaccuracies or omissions. In this case, I think the reference is simply outdated: the description matches the old, now-deprecated GL_LUMINANCE and GL_LUMINANCE_ALPHA formats for one and two channels, respectively, not the modern GL_RED and GL_RG ones.
So I ask you: "What is a color format?"
A color format is one for color textures, in contrast to non-color formats like GL_DEPTH_COMPONENT or GL_STENCIL_INDEX.
Concerning your problem with GL_PACK_ALIGNMENT: the GL behaves exactly as it is intended to behave. You have a 2x2 texture and a GL_PACK_ALIGNMENT of 4, which means the data of each row is padded so that the distance from one row to the next is a multiple of 4 bytes. So you get the first row tightly packed, 2 padding bytes, and finally the second row.
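If you want tight packing instead, call glPixelStorei(GL_PACK_ALIGNMENT, 1) before the read-back. The row stride the GL uses can be sketched as a plain C++ helper (paddedRowLength is a hypothetical name, not a GL function):

```cpp
// Byte distance between consecutive pixel rows during a pixel read-back:
// the unpadded row size, rounded up to the next multiple of GL_PACK_ALIGNMENT.
int paddedRowLength(int rowBytes, int alignment) {
    return ((rowBytes + alignment - 1) / alignment) * alignment;
}
```

For your 2x2 GL_RED / GL_UNSIGNED_BYTE case, rowBytes is 2, so with the default alignment of 4 each row starts 4 bytes apart, which is exactly the layout you observed.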
I'm working on a graphics project involving using a 3D texture to do some volume rendering on data stored in the form of a rectilinear grid, and I was a little confused on the width, height, and depth arguments for glTexImage3D. For a 1D texture, I know that you can use something like this:
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
where the width is the 256 possible values for each color stream. Here, I'm still using colors in the form of unsigned bytes (and the main purpose of the texture here is still to interpolate the colors of the points in the grid, along with the transparencies), so it makes sense to me that one of the arguments would still be 256. It's a 64 X 64 X 64 grid, so it makes sense that one of the arguments (maybe the depth?) would be 64, but I'm not sure about the third, or even if I'm on the right track there. Could anyone enlighten me on the proper use of those three parameters? I looked at both of these discussions, but still came away confused:
regarding glTexImage3D
OPENGL how to use glTexImage3D function
It looks like you misunderstood the 1D case. In your example, 256 is the size of the 1D texture, i.e. the number of texels.
The number of possible values for each color component is given by the internal format, which is the 3rd argument. GL_RGB actually leaves it up to the implementation what the color depth should be. It is supported for backwards compatibility with old code. It gets clearer if you specify a sized format like GL_RGB8 for the internal format, which requests 8 bits per color component, which corresponds to 256 possible values each for R, G, and B.
For the 3D case, if you want a 64 x 64 x 64 grid, you simply specify 64 for the size in each of the 3 dimensions (width, height, and depth). Say for this size, using RGBA and a color depth of 8 bits, the call is:
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, 64, 64, 64, 0,
GL_RGBA, GL_UNSIGNED_BYTE, data);
I am trying to understand how to mipmap texture arrays. From what I understand, a texture array is a 3-dimensional structure where each 2D texture has a depth parameter in glTexStorage3D which sets a given texture to some position in the array. But how do I specify the number of mipmaps per texture? Can I specify a different number of mipmaps per texture?
Is this the right way to do it?
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 10, GL_RGBA8, width, height, numTextures);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
for (int i = 0; i < numTextures; ++i) {
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, /*What goes here???*/, 0, 0, 0, width, height, /*What goes here?*/, GL_RGBA, GL_UNSIGNED_BYTE, data);
}
glGenerateMipmap(GL_TEXTURE_2D_ARRAY);
I don't know what to pass into the second and 8th parameters of glTexSubImage3D. Should the second param be the number of mipmaps, and the 8th the depth of the current texture?
The second param is the mipmap level of the texture you want to load. In your case, since you want to rely on GL to generate the mipmaps, it's 0. Note that you cannot vary the mipmap count per layer: the levels argument of glTexStorage3D (10 in your code) applies to all layers of the array.
The eighth parameter is the depth. In the case of arrays, that means the number of layers you're passing. For you, it's 1, since you're passing a single layer per iteration of the loop.
The 5th parameter, however, is the depth offset of where you want to store the data you're passing in. In your case, it's the layer you're loading: i.
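Putting that together, the corrected loop might look like this (an untested sketch; it assumes a current GL context and that data points at the pixels of the layer being uploaded):

```cpp
for (int i = 0; i < numTextures; ++i) {
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY,
                    0,                 // level 0; glGenerateMipmap fills the rest
                    0, 0, i,           // xoffset, yoffset, zoffset = layer index
                    width, height, 1,  // one layer per call
                    GL_RGBA, GL_UNSIGNED_BYTE, data);
}
glGenerateMipmap(GL_TEXTURE_2D_ARRAY);
```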
I'm beginning to understand how to implement a fragment shader to do a 1D LUT but I am struggling to find any good resources that tell you how to make the 1D LUT in C++ and then texture it.
So for a simple example given the following 1D lut below:
Would I make an array with the following data?
int colorLUT[256] = {255,
254,
253,
...,
3,
2,
1,
0};
or unsigned char I guess since I'm going to be texturing it.
If this is how to create the LUT, how would I then convert it to a texture? Should I use glTexImage1D? Or is there a better method? I'm really at a loss here; any advice would be helpful.
I'm sorry to be so brief, but I haven't seen any tutorials about how to actually make and link the LUT. Every tutorial on GLSL only covers the shaders; they always neglect the linking part.
My end goal is I would like to know how to take different 1D LUTs as seen below and apply them all to images.
Yes, you can use 1D textures as lookup tables.
You can load the data into a 1D texture with glTexImage1D(). Using GL_R8 as the internal texture format, and specifying the data as GL_UNSIGNED_BYTE when passing it to glTexImage1D(), is your best choice if 8 bits of precision are enough for the value. Your call will look like this, with lutData being a pointer/array to GLubyte data, and lutSize the size of your LUT:
glTexImage1D(GL_TEXTURE_1D, 0, GL_R8, lutSize, 0, GL_RED, GL_UNSIGNED_BYTE, lutData);
If you need higher precision than 8 bits, you can use formats like GL_R16 or GL_R32F.
Make sure that you also set the texture parameters correctly, e.g. for linear sampling between values in the lookup table:
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
You then bind the texture to a sampler1D uniform in your shader, and use the regular texture sampling functions to retrieve the new value. Remember that texture coordinates are in the range 0.0 to 1.0, so you need to map the range of your original values to [0.0, 1.0] before you pass it into the texture sampling function. The new value you receive from the texture sampling function will also be in the range [0.0, 1.0].
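To illustrate what the sampler does, here is a CPU-side sketch in plain C++ (sampleLut is a hypothetical helper, not a GL function) of a GL_LINEAR, GL_CLAMP_TO_EDGE lookup into a GL_R8 LUT:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Mimics sampler1D with GL_LINEAR filtering and GL_CLAMP_TO_EDGE wrapping:
// map x in [0,1] onto the texel grid, then blend the two nearest entries.
float sampleLut(const std::vector<unsigned char>& lut, float x) {
    // Texel centers sit at (i + 0.5) / size, matching GL's convention.
    float pos = x * (float)lut.size() - 0.5f;
    int i0 = (int)std::floor(pos);
    float frac = pos - (float)i0;
    // Clamp both neighbours to the edges of the table.
    int last = (int)lut.size() - 1;
    int lo = std::min(std::max(i0, 0), last);
    int hi = std::min(std::max(i0 + 1, 0), last);
    // Blend and normalize the 8-bit value to [0.0, 1.0].
    return ((1.0f - frac) * (float)lut[lo] + frac * (float)lut[hi]) / 255.0f;
}
```

Halfway between a table entry of 0 and one of 255, this returns 0.5, just like the shader lookup would.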
Note that as long as your lookup is a relatively simple function, it might be more efficient to calculate the function in the shader. But if the LUT can contain completely arbitrary mappings, using a 1D texture is a good way to go.
In OpenGL variations that do not have 1D textures, like OpenGL ES, you can use a 2D texture with height set to 1 instead.
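A sketch of that fallback (untested; it assumes the same lutData and lutSize as above, and a context where GL_R8/GL_RED are available, i.e. OpenGL ES 3.0 or desktop GL 3.0+):

```cpp
// The same LUT as a 2D texture of height 1; sample it with a sampler2D
// at texture coordinate vec2(x, 0.5) in the shader.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, lutSize, 1, 0,
             GL_RED, GL_UNSIGNED_BYTE, lutData);
```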
If you need lookup tables that are larger than the maximum supported texture size, you can also look into buffer textures, as suggested by Andon in his comment.
Thus far I have only used glDrawArrays and would like to move over to using an index buffer and indexed triangles. I am drawing a somewhat complicated object with texture coords, normals, and vertex coords. All this data is gathered into a single interleaved vertex buffer and drawn using calls similar to (assuming all the setup is done correctly):
glVertexPointer( 3, GL_FLOAT, 22, (char*)m_vertexData );
glNormalPointer( GL_SHORT, 22, (char*)m_vertexData+(12) );
glTexCoordPointer( 2, GL_SHORT, 22, (char*)m_vertexData+(18) );
glDrawElements(GL_TRIANGLES, m_numTriangles * 3, GL_UNSIGNED_SHORT, m_indexData ); // count is the number of indices, not triangles
Does this allow m_indexData to also be interleaved with the indices of my normals and texture coords, as well as the standard position index array? Or does it assume a single linear list of indices that applies to the entire vertex format (POS, NOR, TEX)? If the latter is true, how is it possible to render the same vertex with different texture coords or normals?
I guess this question could also be rephrased as: if I had 3 separate indexed lists (POS, NOR, TEX), where the latter 2 cannot be rearranged to share the same index list as the first, what is the best way to render that?
You cannot have different indices for the different lists. When you specify glArrayElement(3), OpenGL is going to take the 3rd element of every list.
What you can do is play with the pointer you specify, since the place in each list that is eventually accessed is essentially the pointer offset from the start of the list plus the index you specify. This is useful if you have a constant offset between the lists. If the lists are just a random permutation, then this kind of juggling for every vertex is probably going to be as costly as just using plain old glVertex3fv(), glNormal3fv(), and glTexCoord3fv().
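The usual fix is to duplicate vertices: build one combined vertex per unique (POS, NOR, TEX) index triple and render with a single index list over those combined vertices. A sketch with hypothetical names:

```cpp
#include <array>
#include <map>
#include <vector>

using Triple = std::array<unsigned short, 3>;  // (POS, NOR, TEX) indices of one corner

// Collapse per-attribute index lists into one unified index list.
// Each unique triple becomes one combined vertex; outVertices records
// which source indices to copy into that vertex, in order.
std::vector<unsigned short> unifyIndices(const std::vector<Triple>& corners,
                                         std::vector<Triple>& outVertices) {
    std::map<Triple, unsigned short> seen;
    std::vector<unsigned short> indices;
    for (const Triple& t : corners) {
        auto it = seen.find(t);
        if (it == seen.end()) {
            // First time we see this combination: emit a new combined vertex.
            it = seen.emplace(t, (unsigned short)outVertices.size()).first;
            outVertices.push_back(t);
        }
        indices.push_back(it->second);
    }
    return indices;
}
```

Afterwards, copy the attributes named by each triple in outVertices into the interleaved vertex buffer and draw the returned indices with glDrawElements.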
I am having similar trouble attempting to do the same in Direct3D 9.0
For my OpenGL 3 implementation it was rather easy, and my source code is available online if it might help you any...
https://github.com/RobertBColton/enigma-dev/blob/master/ENIGMAsystem/SHELL/Graphics_Systems/OpenGL3/GL3model.cpp