Can't create FBO with more than 8 render buffers - opengl

So, here's the problem. I have got an FBO with 8 render buffers which I use in my deferred rendering pipeline. Then I added another render buffer and now I get a GLError.
GLError(
err = 1282,
description = b'invalid operation',
baseOperation = glFramebufferTexture2D,
cArguments = (GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT8, GL_TEXTURE_2D, 12, 0,)
)
The code should be fine, since I have just copied it from the previously used render buffer.
glMyRenderBuffer = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, glMyRenderBuffer)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, self.width, self.height, 0, GL_RGB, GL_FLOAT, None)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT8, GL_TEXTURE_2D, glMyRenderBuffer, 0)
glGenerateMipmap(GL_TEXTURE_2D)
And I get the error at this line
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT8, GL_TEXTURE_2D, glMyRenderBuffer, 0)
It looks more like some kind of OpenGL limitation that I don't know about.
My stack is also somewhat unusual - Linux + GLFW + PyOpenGL - which might contribute to the problem.
I would be glad of any advice at this point.

It looks more like some kind of OpenGL limitation that I don't know about.
The relevant limit is GL_MAX_COLOR_ATTACHMENTS and the spec guarantees that this value is at least 8.
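Since the question uses PyOpenGL, here is a minimal sketch of how to query the actual limits on your implementation (run it with a current context bound; both limits matter, because the number of buffers a shader can write at once is capped separately):
from OpenGL.GL import glGetIntegerv, GL_MAX_COLOR_ATTACHMENTS, GL_MAX_DRAW_BUFFERS

# How many color attachments an FBO may have on this implementation
print(glGetIntegerv(GL_MAX_COLOR_ATTACHMENTS))
# How many draw buffers a fragment shader can write in one pass
print(glGetIntegerv(GL_MAX_DRAW_BUFFERS))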
That said, needing more than 8 render targets in a single pass seems insane anyway.
Consider the following things:
try to reduce the number of render targets as much as possible; do not store redundant information (such as the vertex position) that can easily be calculated on the fly (position can be reconstructed from depth alone, and you usually have a depth attachment anyway)
use clever encodings appropriate for the data; e.g. three full floats for a normal vector is a huge waste. See for example Survey of Efficient Representations for Independent Unit Vectors
coalesce different render targets; e.g. if you need one vec3 and two vec2 outputs, it is better to use two vec4 targets and assign the 8 values to the 8 channels (see the sketch after this list)
maybe even use a higher bit-depth format like GL_RGBA32UI and manually encode several values into a single channel
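To illustrate the coalescing point, a minimal GLSL sketch (the varying names and the packing layout are made up for the example):
#version 330 core

in vec3 vNormal;
in vec2 vUv;
in vec2 vMotion;

// 3 + 2 + 2 = 7 values packed into two vec4 targets (8 channels)
layout(location = 0) out vec4 target0; // xyz = normal, w = uv.x
layout(location = 1) out vec4 target1; // xy = motion vector, z = uv.y, w = unused

void main() {
    target0 = vec4(vNormal, vUv.x);
    target1 = vec4(vMotion, vUv.y, 0.0);
}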
If you still need more data, you can either do several render passes (basically ceil(n/8) passes with up to 8 targets each), or use image load/store or SSBOs in your fragment shader to write the additional data. In your scenario, image load/store seems to make the most sense, since you probably need the resulting data as a texture anyway. You also get a relatively good access pattern, since you can basically use gl_FragCoord.xy for addressing the image. However, care must be taken if you have overlapping geometry in one draw call, since you might then write to the same pixel more than once (that issue is also addressed by the GL_ARB_fragment_shader_interlock extension, but that one is not yet a core feature of OpenGL). You might be able to eliminate that scenario completely by using a pre-depth-pass.

Related

glDrawPixels vs textures to draw a 2d buffer in OpenGL

I have a 2D graphics library that I want to use with OpenGL, to be able to mix 2D and 3D graphics. The simplest way seems to be glDrawPixels, but many recent tutorials and forums suggest using a texture with glTexSubImage2D and then drawing a quad with that texture.
My question is: why? Where is the advantage? It just adds one more step (memory buffer -> texture -> video buffer, instead of memory buffer -> video buffer).
There are two main reasons:
glDrawPixels() is deprecated, and not available in the OpenGL core profile, or in OpenGL ES.
When drawing the image multiple times, a lot of repeated work can be saved by storing the image data in a texture.
It's quite rare that you would have to draw an image only once. Much more commonly, you'll draw it repeatedly, on each redraw. With glDrawPixels() you have to pass the image data into OpenGL each time. If you store it in a texture, you can draw it repeatedly, and OpenGL can reuse the same data each time.
To draw the content of a texture, you don't necessarily have to set up a shader, draw a quad, etc. You can use glBlitFramebuffer() to copy the texture content to the display.
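For example, a minimal sketch of blitting to the default framebuffer (fbo is assumed to be a framebuffer with the texture attached to GL_COLOR_ATTACHMENT0, and width/height the image size):
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);   // source: FBO holding the texture
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);     // destination: default framebuffer
glBlitFramebuffer(0, 0, width, height,         // source rectangle
                  0, 0, width, height,         // destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);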
Since OpenGL uses video memory, a simple "draw pixels" call is bound to be really slow, because it forces a lot of GPU/CPU synchronisation on each draw.
When you use glTexSubImage2D, you ensure that your image resides in video memory the whole time, which is fast.
One way to load a texture into video memory could be:
// Assumes an SDL_Surface* "surface" as the pixel source, a texture object whose
// GL name lives in texture->mId, and matching "internalFormat"/"format" enums
// (e.g. GL_RGBA8 / GL_RGBA).
glCreateTextures(GL_TEXTURE_2D, 1, &texture->mId);
glTextureParameteri(texture->mId, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTextureParameteri(texture->mId, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
GLsizei numMipmaps = (GLsizei)log2(std::max(surface->w, surface->h)) + 1;
glTextureStorage2D(texture->mId, numMipmaps, internalFormat, surface->w, surface->h);
glTextureSubImage2D(texture->mId, 0, 0, 0, surface->w, surface->h,
                    format, GL_UNSIGNED_BYTE, surface->pixels);
glGenerateTextureMipmap(texture->mId);
The snippet uses direct state access; if you are not using DSA, don't forget the corresponding bind calls.
However, if you still want to generate pixels on the fly (for example for procedural rendering), you should write your own fragment shader to keep it as fast as possible.

What does GL_COLOR_ATTACHMENT do?

I'm learning about framebuffers right now and I just don't understand what the Color attachment does. I understand framebuffers.
What is the point of the second parameter in:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureColorBuffer, 0);
Why doesn't anything draw to my frame buffer when I change it to COLOR_ATTACHMENT1?
How could I draw to the frame buffer by setting the texture to Color attachment 1?
Why would using multiple color attachments be useful?
Is it similar to the glActiveTexture(GL_TEXTUREi) concept of drawing multiple textures at once?
I'm just trying to understand more OpenGL. Thanks.
Yes, a framebuffer can have multiple color attachments, and the second parameter to glFramebufferTexture2D() controls which of them you're setting. The maximum number of supported color attachments can be queried with:
GLint maxAtt = 0;
glGetIntegerv(GL_MAX_COLOR_ATTACHMENTS, &maxAtt);
The minimum number supported by all implementations is 8.
To select which buffer(s) you want to render to, you use glDrawBuffer() or glDrawBuffers(). The only difference between these two calls is that the first one allows you to specify only one buffer, while the second one supports multiple buffers.
In the example you tried, you can render to GL_COLOR_ATTACHMENT1 by using either:
glDrawBuffer(GL_COLOR_ATTACHMENT1);
or:
GLenum bufs[1] = {GL_COLOR_ATTACHMENT1};
glDrawBuffers(1, bufs);
The default is GL_COLOR_ATTACHMENT0, which explains why you had success rendering to attachment 0 without ever using this call. The list of draw buffers is part of the FBO state.
To produce output for multiple color buffers, you define multiple outputs in the fragment shader.
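For example (a minimal sketch; the output names are arbitrary, and the mapping to attachments follows the glDrawBuffers list shown above):
#version 330 core

layout(location = 0) out vec4 outColor0; // written to the buffer at index 0 of glDrawBuffers
layout(location = 1) out vec4 outColor1; // written to the buffer at index 1

void main() {
    outColor0 = vec4(1.0, 0.0, 0.0, 1.0);
    outColor1 = vec4(0.0, 1.0, 0.0, 1.0);
}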
One popular use for multiple color buffers (often referred to by the acronym MRT = Multiple Render Targets) is deferred shading, where the interpolated vertex attributes used for the shading calculation are stored in a number of buffers in the initial rendering pass. The actual shading calculation is then performed only for the visible pixels in a second pass, using the attribute values from the buffers produced in the first pass. See for example http://ogldev.atspace.co.uk/www/tutorial35/tutorial35.html for a more complete explanation of how this works.

Write to multiple 3D textures in fragment shader OpenGL

I have a 3D texture into which I write data as voxels from the fragment shader, like this:
#extension GL_ARB_shader_image_size : enable
...
layout (binding = 0, rgba8) coherent uniform image3D volumeTexture;
...
void main(){
vec4 fragmentColor = ...
vec3 coords = ...
imageStore(volumeTexture, ivec3(coords), fragmentColor);
}
and the texture is defined in this way
glGenTextures(1, &volumeTexture);
glBindTexture(GL_TEXTURE_3D, volumeTexture);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, volumeDimensions, volumeDimensions, volumeDimensions, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
and then I bind it like this when I have to use it:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_3D, volumeTexture);
now my issue is that I would like to have a mipmapped version of this without using the built-in OpenGL mipmap generation, because I noticed that it is extremely slow. So I was thinking of writing to the 3D texture at all levels at the same time: for instance, if the max resolution is 512^3, then as I write 1 voxel VALUE into that 3D texture I also write 0.125*VALUE into the 256^3 level, 0.015625*VALUE into the 128^3 level, etc. Since I am using imageStore, which uses atomicity, all values will be written, and with these weights I would automatically get the average value (not exactly an interpolation, but I might get a pleasing result anyway).
So my question is, what is the best way to have multiple 3D textures and write to all of them at the same time?
I believe hardware mipmapping is about as fast as you'll get; I've always assumed attempting custom mipmapping would be slower, given that you have to bind and rasterize to each layer manually in turn. Atomics will give huge contention and it'll be amazingly slow. Even without atomics you'd be negating the nice O(log n) construction of mipmaps.
You have to be really careful with imageStore with regard to access order and caching. I'd start by trying some different indexing (e.g. row/column vs column/row).
You could try drawing to the texture the older way, by binding it to an FBO and drawing a full screen triangle (big triangle that covers the viewport) with glDrawElementsInstanced. In the geometry shader, set gl_Layer to the instance ID. The rasterizer creates fragments for x/y and the layer gives z.
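A rough sketch of the C side of that approach (names taken from the question; the geometry shader that sets gl_Layer is not shown, and glDrawArraysInstanced is used here since the big triangle needs no index buffer):
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// With no layer index, glFramebufferTexture attaches ALL slices of the 3D texture;
// the geometry shader then routes each primitive to a slice via gl_Layer.
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, volumeTexture, 0);
glViewport(0, 0, volumeDimensions, volumeDimensions);
// One big triangle per slice; the vertex shader passes gl_InstanceID through
// so the geometry shader can write it to gl_Layer.
glDrawArraysInstanced(GL_TRIANGLES, 0, 3, volumeDimensions);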
Lastly, 512^3 is simply a huge texture, even by today's standards. Maybe find out your theoretical max GPU bandwidth to get an idea of how far away you are. E.g. let's say your GPU can do 200 GB/s; you'll probably only get 100 in a good case anyway. A 512^3 RGBA8 texture is 512 MB, so you might be able to write to it in ~5 ms (IMO this seems awfully fast; maybe I made a mistake). Expect some overhead and latency from the rest of the pipeline, spawning and executing threads, etc. If you're writing complex stuff, then memory bandwidth isn't the bottleneck and my estimate goes out the window. So try just writing zeroes first. Then try changing the coords' xyz order.
Update: Instead of using the fragment shader to create your threads, you can use the vertex shader, which in theory avoids rasterizer overhead, though I've seen cases where it doesn't perform as well. You glEnable(GL_RASTERIZER_DISCARD), call glDrawArrays(GL_POINTS, 0, numThreads), and use gl_VertexID as your thread index.
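A minimal sketch of that setup (writeProgram and numThreads are placeholders):
glEnable(GL_RASTERIZER_DISCARD);          // no fragments are generated at all
glUseProgram(writeProgram);               // vertex shader writes via imageStore, indexed by gl_VertexID
glDrawArrays(GL_POINTS, 0, numThreads);   // one vertex shader invocation per "thread"
glDisable(GL_RASTERIZER_DISCARD);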

How fast is it to change the texture filters in OpenGL at runtime?

I'm at a point where I would like to render a texture twice, but with different filters.
It seems like a very bad idea to store the texture twice with different filters; that would take up way too much VRAM. So I came up with the idea to just change the filters on the go, but how fast is it?
I'm thinking of doing it like this:
// First render call
BindTexture(...);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
RenderObject( ... );
BindTexture(...);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
RenderObject( ... );
So the final question is: How fast is it to update the texture parameters at runtime?
So I came up with the idea to just change the filters on the go, but how fast is it?
To the GPU it's merely a single register whose value changes, so it's quite cheap. But the way you wrote it doesn't make much sense.
Since filtering parameters are part of the texture object, you set them after binding the texture object in question with glBindTexture.
If you want to use the same texture with different filtering parameters, you don't have to re-bind it in between.
Also, since OpenGL 3.3 there is a dedicated kind of object called a sampler (see the sketch below). Samplers collect texture sampling parameters (like filtering), while textures provide the data. So if you want to switch filtering parameters often, or you have a common set of sampling parameters for a large set of textures, you can use a single sampler to serve multiple textures.
See http://www.opengl.org/wiki/Sampler_Object
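A minimal sketch of the sampler approach, reusing the pseudocode calls from the question (the sampler names are for illustration):
GLuint linearSampler, nearestSampler;
glGenSamplers(1, &linearSampler);
glSamplerParameteri(linearSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glSamplerParameteri(linearSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenSamplers(1, &nearestSampler);
glSamplerParameteri(nearestSampler, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glSamplerParameteri(nearestSampler, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Same texture, two filtering modes; a bound sampler overrides
// the texture object's own parameters for that texture unit.
BindTexture(...);
glBindSampler(0, linearSampler);   // first draw: linear filtering
RenderObject( ... );
glBindSampler(0, nearestSampler);  // second draw: nearest filtering
RenderObject( ... );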
This depends highly on the implementation of GL you are using. Like anything performance-related, just test and see if it's fast enough for your specific application on your target hardware.
Relatively recent versions of GL include a feature called samplers, which are objects you can create with various texture sampling parameters. You can create a number of different samplers and then swap these out as needed, rather than reconfiguring an existing texture. This also allows you to use two different sampling states for the same texture if necessary. This should be faster in general, but again, just test and see what works best in your specific circumstance.

How to make OpenGL pick the nearest larger mipmap?

I want to draw text with OpenGL using FreeType and, to make it sharper, I generate the font texture from FreeType for each mipmap level. Everything works fine except for one thing. When I do this:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
OpenGL chooses the nearest mipmap according to the size of the text, but if the available sizes are 16 and 32 and I want 22, it picks 16, which looks terrible. Is there a way to make it always pick the nearest larger mipmap instead?
I know I can set the mipmap level manually while rendering the text:
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_LOD, (int) log2(1/scale));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_LOD, (int) log2(1/scale));
But is that effective? Doesn't that more or less defeat the purpose of using mipmaps completely? I could just make different textures and choose one according to size. So is there a better way to accomplish this?
You can use GL_TEXTURE_LOD_BIAS to add a constant to the computed mipmap level, which could help you accomplish this (for example, by biasing selection toward the next larger mipmap). However, it could simply be that mipmap-nearest isn't ideal for this situation. If you're always showing your texture in screen space and know exactly how the font size corresponds to pixels on the screen, then mipmapping may be too much machinery for your case.
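For example (a sketch; the bias value is something you would tune):
// A negative bias lowers the computed LOD, nudging selection toward
// larger (higher-resolution) mipmap levels; -0.5 is just an example.
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, -0.5f);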
Have you tried GL_LINEAR_MIPMAP_LINEAR as the GL_TEXTURE_MIN_FILTER parameter? It should blend between the two nearest mipmaps. That might improve the appearance.
Also, you could try using the texture_lod_bias extension which is documented here:
http://www.opengl.org/registry/specs/EXT/texture_lod_bias.txt