OpenGL texture array initialization - opengl

I want to use texture arrays to reduce the high texture binding cost, but I can't upload the data to the texture array. I'm using the Tao framework. Here's my code:
Gl.glEnable(Gl.GL_TEXTURE_2D_ARRAY_EXT);
Gl.glGenTextures(1, out textureArray);
Gl.glBindTexture(Gl.GL_TEXTURE_2D_ARRAY_EXT, textureArray);
var data = new uint[textureWidth, textureHeight, textureCount];
for (var x = 0; x < textureWidth; x++)
{
for (var y = 0; y < textureHeight; y++)
{
for (var z = 0; z < textureCount; z++)
data[x, y, z] = GetRGBAColor(1, 1, 1, 1);
}
}
Gl.glTexImage3D(Gl.GL_TEXTURE_2D_ARRAY_EXT, 0, Gl.GL_RGBA, textureWidth,
textureHeight, textureCount, 0, Gl.GL_RGBA, Gl.GL_UNSIGNED_BYTE, data);
Console.WriteLine(Glu.gluErrorString(Gl.glGetError()));
After the glTexImage3D call, glGetError reports an invalid enumerant (GL_INVALID_ENUM).

The most likely cause for a GL_INVALID_ENUM in the above code is the
Gl.glEnable(Gl.GL_TEXTURE_2D_ARRAY_EXT);
call.
This is simply not allowed. Array textures cannot be used with the fixed-function pipeline, but only with shaders (which do not need those texture enables at all). The GL_EXT_texture_array spec makes this quite clear:
This extension does not provide for the use of array textures with fixed-function fragment processing. Such support could be added by providing an additional extension allowing applications to pass the new target enumerants (TEXTURE_1D_ARRAY_EXT and TEXTURE_2D_ARRAY_EXT) to Enable and Disable.
As far as I know, no such further extension was ever introduced, so array textures remain usable only through shaders.
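For reference, a minimal setup of an array texture that can be sampled from a shader might look like the following. This is only a sketch in C-style GL (the Tao calls mirror it one-to-one) and assumes an OpenGL 3.0+ context or EXT_texture_array; textureWidth, textureHeight and textureCount are the variables from the question.
GLuint textureArray = 0;
glGenTextures(1, &textureArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, textureArray);
// Note: no glEnable for the array target; the sampler type in the shader selects it.
std::vector<GLubyte> pixels(textureWidth * textureHeight * textureCount * 4, 255); // opaque white
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
             textureWidth, textureHeight, textureCount, // depth = number of layers
             0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// In GLSL, declare `uniform sampler2DArray tex;` and sample with texture(tex, vec3(u, v, layer)).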

Change the 2nd parameter of glTexImage3D to 1.
I don't know why, but NVIDIA's OpenGL driver seems to require at least 1 level for a 2D texture array object.

OpenGL: Shader storage buffer mapping/binding

I'm currently working on a program which supports depth-independent (also known as order-independent) alpha blending. To do that, I implemented a per-pixel linked list, using a texture for the head pointers (which stores, for every pixel, the index of the first entry in the linked list) and a texture buffer object for the linked list itself. While this works fine, I would like to replace the texture buffer object with a shader storage buffer as an exercise.
I think I almost got it, but it took me about a week to get to a point where I could actually use the shader storage buffer. My questions are:
Why can't I map the shader storage buffer?
Why is it a problem to bind the shader storage buffer again?
For debugging, I just display the contents of the shader storage buffer (which doesn't contain a linked list yet). I created the shader storage buffer in the following way:
glm::vec4* bufferData = new glm::vec4[windowOptions.width * windowOptions.height];
glm::vec4* readBufferData = new glm::vec4[windowOptions.width * windowOptions.height];
for(unsigned int y = 0; y < windowOptions.height; ++y)
{
for(unsigned int x = 0; x < windowOptions.width; ++x)
{
// Set the whole buffer to red
bufferData[x + y * windowOptions.width] = glm::vec4(1,0,0,1);
}
}
GLuint ssb;
// Get a handle
glGenBuffers(1, &ssb);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssb);
// Create buffer
glBufferData(GL_SHADER_STORAGE_BUFFER, windowOptions.width * windowOptions.height * sizeof(glm::vec4), bufferData, GL_DYNAMIC_COPY);
// Now bind the buffer to the shader
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssb);
In the shader, the shader storage buffer is defined as:
layout (std430, binding = 0) buffer BufferObject
{
vec4 points[];
};
In the rendering loop, I do the following:
glUseProgram(defaultProgram);
for(unsigned int y = 0; y < windowOptions.height; ++y)
{
for(unsigned int x = 0; x < windowOptions.width; ++x)
{
// Create a green/red color gradient
bufferData[x + y * windowOptions.width] =
glm::vec4((float)x / (float)windowOptions.width,
(float)y / (float)windowOptions.height, 0.0f, 1.0f);
}
}
glMemoryBarrier(GL_ALL_BARRIER_BITS); // Don't know if this is necessary, just a precaution
glBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, windowOptions.width * windowOptions.height * sizeof(glm::vec4), bufferData);
// Retrieving the buffer also works fine
// glMemoryBarrier(GL_ALL_BARRIER_BITS);
// glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, windowOptions.width * windowOptions.height * sizeof(glm::vec4), readBufferData);
glMemoryBarrier(GL_ALL_BARRIER_BITS); // Don't know if this is necessary, just a precaution
// Draw a quad which fills the screen
// ...
This code works, but when I replace glBufferSubData with the following code,
glm::vec4* p = (glm::vec4*)glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, windowOptions.width * windowOptions.height, GL_WRITE_ONLY);
for(unsigned int x = 0; x < windowOptions.width; ++x)
{
for(unsigned int y = 0; y < windowOptions.height; ++y)
{
p[x + y * windowOptions.width] = glm::vec4(0,1,0,1);
}
}
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
the mapping fails, returning GL_INVALID_OPERATION. It seems like the shader storage buffer is still bound to something, so it can't be mapped. I read something about glGetProgramResourceIndex (http://www.opengl.org/wiki/GlGetProgramResourceIndex) and glShaderStorageBlockBinding (http://www.opengl.org/wiki/GlShaderStorageBlockBinding), but I don't really get it.
My second question is: why can I neither call
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssb);
, nor
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssb);
in the render loop after glBufferSubData and glMemoryBarrier? These calls should not change anything, since they are identical to the ones used when the shader storage buffer was created. If I couldn't bind different shader storage buffers, I could only ever use one; but I know that more than one shader storage buffer is supported, so I think I'm missing something else (like "releasing" the buffer).
First of all, the glMapBufferRange call fails simply because GL_WRITE_ONLY is not a valid argument to it. That flag was used for the old glMapBuffer, but glMapBufferRange uses a collection of bit flags for more fine-grained control. In your case you need GL_MAP_WRITE_BIT instead. And since you seem to overwrite the whole buffer without caring about the previous values, GL_MAP_INVALIDATE_BUFFER_BIT would probably be a useful additional optimization. Note also that the length argument is in bytes, so it should include the sizeof(glm::vec4) factor, just like your glBufferData call. So replace that call with:
glm::vec4* p = (glm::vec4*)glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0,
    windowOptions.width * windowOptions.height * sizeof(glm::vec4),
    GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
The second problem is not described in enough detail in the question. But fix this one first; it may well resolve the follow-up error, too.
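Put together, the mapped write path might look like this (just a sketch under the same assumptions as the code in the question; the buffer must still be bound to GL_SHADER_STORAGE_BUFFER at this point):
// Map the whole buffer for writing and discard its previous contents.
const size_t bufferSize = windowOptions.width * windowOptions.height * sizeof(glm::vec4);
glm::vec4* p = (glm::vec4*)glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, bufferSize,
                                            GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
if (p != nullptr)
{
    for (unsigned int y = 0; y < windowOptions.height; ++y)
        for (unsigned int x = 0; x < windowOptions.width; ++x)
            p[x + y * windowOptions.width] = glm::vec4(0, 1, 0, 1); // solid green
    glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
}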

Luminance values clipped to [0, 1] during texture transfer?

I am uploading a host-side texture to OpenGL using something like:
GLfloat * values = new GLfloat[nRows * nCols];
// initialize values
for (int i = 0; i < nRows * nCols; ++i)
{
values[i] = (i % 201 - 100) / 10.0f; // values from -10.0f .. + 10.0f
}
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, nRows, nCols, GL_LUMINANCE, GL_FLOAT, values);
However, when I read back the texture using glGetTexImage(), it turns out that all values are clipped to the range [0..1].
First, I cannot find where this behavior is documented (I am using the Red Book for OpenGL 2.1).
Second, is it possible to change this behavior and let the values pass unchanged? I want to access the unscaled, unclipped data in an GLSL shader.
I cannot find where this behavior is documented
In the actual specification, it's in the section on Pixel Rectangles, titled Transfer of Pixel Rectangles.
Second, is it possible to change this behavior and let the values pass unchanged?
Yes. If you want to use "unscaled, unclamped" data, you have to use a floating point image format. The format of your texture is defined when you created the storage for it, probably by a call to glTexImage2D. The third parameter of that function defines the format. So use a proper floating-point format instead of an integer one.
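For example (a sketch only; GL_LUMINANCE32F_ARB comes from ARB_texture_float, which a GL 2.1 driver has to expose as an extension; on GL 3.0+ you would use GL_R32F with GL_RED instead):
// Allocate storage with a 32-bit float internal format so values are kept unclamped.
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, nRows, nCols, 0,
             GL_LUMINANCE, GL_FLOAT, NULL);
// The upload itself can stay exactly as before:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, nRows, nCols, GL_LUMINANCE, GL_FLOAT, values);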

Semantics of glBindMultiTexture and glEnableIndexed?

What are the semantics of glBindMultiTexture and glEnableIndexed?
I have seen glBindMultiTexture used with glEnableIndexed, where it seems to do something similar to e.g. glEnable(GL_TEXTURE_2D), though I am unsure whether it is required and whether it replaces glEnable(GL_TEXTURE_2D), or whether both should be used. The DSA spec doesn't seem to mention glEnableIndexed in the context of glBindMultiTextureEXT.
What is the correct usage?
// Init 1
glEnable(GL_TEXTURE_2D);
for(int n = 0; n < 4; ++n)
glEnableIndexed(GL_TEXTURE_2D, n);
// Init 2
for(int n = 0; n < 4; ++n)
glEnableIndexed(GL_TEXTURE_2D, n);
// Init 3
glEnable(GL_TEXTURE_2D);
// For each frame 1
for(int n = 0; n < 4; ++n)
glBindMultiTexture(GL_TEXTURE0 + n, GL_TEXTURE_2D, textureIds[n]);
// For each frame 2
for(int n = 0; n < 4; ++n)
{
glEnableIndexed(GL_TEXTURE_2D, n);
glBindMultiTexture(GL_TEXTURE0 + n, GL_TEXTURE_2D, textureIds[n]);
}
glEnableIndexed does not exist. glEnableIndexedEXT does however, as does glEnablei (the core OpenGL 3.0 equivalent). I'll assume you're talking about them. Same goes for glBindMultiTextureEXT.
Now that that bit of nomenclature is out of the way, it's not entirely clear what you mean by "correct usage".
If the intent of the "Init" code is to enable GL_TEXTURE_2D for fixed-function use across the first four fixed-function texture units, then 1 and 2 will do that. 3 will only enable it for the current texture unit. Do note that this is only for fixed-function texture use.
Which is where the other point comes in: generally, you do not simply enable a bunch of texture targets globally like that in an initialization routine. This would only make sense if everything you are rendering in the entire scene uses 4 2D textures in the first four texture units. Generally speaking, you enable and disable texture targets as needed for each object.
So I would say that having no enables in your initialization and enabling (and disabling) targets around your rendering calls is the "correct usage".
Also, be advised that this is no different from directly using glActiveTexture in this regard. So the fact that you're using the DSA switch-less commands is irrelevant.
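As a sketch of that per-object pattern (assuming the EXT_direct_state_access entry points, four fixed-function texture units, and a hypothetical textureIds array and drawObject() call):
// Enable and bind the targets only while drawing the object that needs them.
for (int n = 0; n < 4; ++n)
{
    glEnableIndexedEXT(GL_TEXTURE_2D, n);
    glBindMultiTextureEXT(GL_TEXTURE0 + n, GL_TEXTURE_2D, textureIds[n]);
}
drawObject(); // hypothetical draw call for this object
// Disable again so later objects start from a clean fixed-function texture state.
for (int n = 0; n < 4; ++n)
    glDisableIndexedEXT(GL_TEXTURE_2D, n);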

How do I set certain colors in an Opengl texture to transparent?

I am trying to create a library I can use to handle 2D rendering in OpenGL (C++). I have it all figured out except that I can't work out how to make certain colors transparent (e.g. being able to treat 255, 0, 255 as transparent). I realize from reading on the topic that I need to preprocess the texture and set that color's alpha value to 0, but I have no idea how to do this.
PS: I am using SOIL for loading textures if that helps.
I realize from reading on the topic that I need to preprocess the texture and set that color's alpha value to 0 but I have no idea how to do this.
// Assuming SOIL gave you 8-bit RGBA pixels, e.g.
// unsigned char *data = SOIL_load_image(file, &width, &height, &channels, SOIL_LOAD_RGBA);
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        unsigned char *px = &data[(x + y * width) * 4]; // one RGBA pixel
        if (px[0] == 255 && px[1] == 0 && px[2] == 255) { // matches the color key
            px[3] = 0;    // fully transparent
        } else {
            px[3] = 255;  // fully opaque
        }
    }
}
/* ... */
upload_image_to_texture(data, width, height); // placeholder for your upload routine
Firstly, I would recommend using textures with a real alpha channel rather than color-keyed/chroma-keyed ones, unless there is some specific reason not to (i.e. really low memory, or you're trying to use the Minecraft ones).
With that said, use shaders. In your fragment shader use the 'discard' keyword when the fragment color matches your color key. There's an official tutorial.
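A minimal fragment shader along those lines might look like this (a sketch only; the sampler and key color are placeholders you would wire up yourself), shown here as a GLSL string embedded in C++:
// Fragment shader source: drop any fragment whose texel matches the color key.
const char* fragmentSrc = R"GLSL(
    #version 120
    uniform sampler2D tex;
    const vec3 keyColor = vec3(1.0, 0.0, 1.0); // 255, 0, 255 normalized to [0, 1]
    void main()
    {
        vec4 c = texture2D(tex, gl_TexCoord[0].st);
        if (distance(c.rgb, keyColor) < 0.001)
            discard;          // skip color-keyed texels entirely
        gl_FragColor = c;
    }
)GLSL";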

Using vertex buffers in jogl, crash when too many triangles

I have written a simple application in Java using JOGL which draws 3D geometry. The camera can be rotated by dragging the mouse. The application works fine, but drawing the geometry with glBegin(GL_TRIANGLES) ... glEnd() calls is too slow.
So I started to use vertex buffers. This also works fine until the number of triangles gets larger than 1000000. If that happens, the display driver suddenly crashes and my monitor goes dark. Is there a limit on how many triangles fit in the buffer? I had hoped to get 1000000 triangles rendered at a reasonable frame rate.
I have no idea how to debug this problem. The nasty thing is that I have to reboot Windows after each launch, since I have no other way to get my display working again. Could anyone give me some advice?
The vertices, triangles and normals are stored in arrays float[][] m_vertices, int[][] m_triangles, float[][] m_triangleNormals.
I initialized the buffer with:
// generate a VBO pointer / handle
if (m_vboHandle <= 0) {
int[] vboHandle = new int[1];
m_gl.glGenBuffers(1, vboHandle, 0);
m_vboHandle = vboHandle[0];
}
// interleave vertex / normal data
FloatBuffer data = Buffers.newDirectFloatBuffer(m_triangles.length * 3*3*2);
for (int t=0; t<m_triangles.length; t++)
for (int j=0; j<3; j++) {
int v = m_triangles[t][j];
data.put(m_vertices[v]);
data.put(m_triangleNormals[t]);
}
data.rewind();
// transfer data to VBO
int numBytes = data.capacity() * 4;
m_gl.glBindBuffer(GL.GL_ARRAY_BUFFER, m_vboHandle);
m_gl.glBufferData(GL.GL_ARRAY_BUFFER, numBytes, data, GL.GL_STATIC_DRAW);
m_gl.glBindBuffer(GL.GL_ARRAY_BUFFER, 0);
Then, the scene gets rendered with:
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, m_vboHandle);
gl.glEnableClientState(GL2.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL2.GL_NORMAL_ARRAY);
gl.glVertexPointer(3, GL.GL_FLOAT, 6*4, 0);
gl.glNormalPointer(GL.GL_FLOAT, 6*4, 3*4);
gl.glDrawArrays(GL.GL_TRIANGLES, 0, 3*m_triangles.length);
gl.glDisableClientState(GL2.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL2.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, 0);
glBufferData does not return a value, but it will raise a GL_OUT_OF_MEMORY error if the implementation cannot allocate numBytes of storage. Check glGetError() right after the call to see whether the allocation succeeded.
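In C-style GL that check looks roughly like the sketch below (the JOGL equivalent simply calls the same functions on the gl object; the fallback strategy is only a suggestion):
// Upload the vertex data and verify that the allocation succeeded.
glBindBuffer(GL_ARRAY_BUFFER, vboHandle);
glBufferData(GL_ARRAY_BUFFER, numBytes, data, GL_STATIC_DRAW);
GLenum err = glGetError();
if (err == GL_OUT_OF_MEMORY)
{
    // The driver could not allocate the buffer; try splitting the geometry
    // across several smaller VBOs or reducing the triangle count.
}
else if (err != GL_NO_ERROR)
{
    // Some other problem with the upload; inspect err.
}
glBindBuffer(GL_ARRAY_BUFFER, 0);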