Error when creating a FrameBuffer: GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS - opengl

I'm using libgdx to create a program. I need to perform an operation in a framebuffer, so I create a new FrameBuffer, and once the operation is done I call dispose() on it. After creating the framebuffer about 10 times, the program crashes with the error: frame buffer couldn't be constructed: incomplete dimensions. Looking at the libgdx source, I see this corresponds to the GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS framebuffer status. Why does this happen, and what must I do to fix it?
Code:
if (maskBufferer != null) {
    maskBufferer.dispose();
}
maskBufferer = new FrameBuffer(Pixmap.Format.RGBA8888, width, height, true);
mask = createMaskImageMask(aspectRatioCrop, maskBufferer);
...
private Texture createMaskImageMask(boolean aspectRatioCrop, FrameBuffer maskBufferer) {
    maskBufferer.begin();
    Gdx.gl.glClearColor(COLOR_FOR_MASK, COLOR_FOR_MASK, COLOR_FOR_MASK, ALPHA_FOR_MASK);
    Gdx.graphics.getGL20().glClear(GL20.GL_COLOR_BUFFER_BIT);
    float[] coord = null;
    ...
    // Note: both this Texture and the PolygonSpriteBatch below are created on
    // every call and never disposed, so each call leaks native resources.
    PolygonRegion polyReg = new PolygonRegion(
            new TextureRegion(new Texture(Gdx.files.internal(texturePolygon))),
            coord);
    PolygonSprite poly = new PolygonSprite(polyReg);
    PolygonSpriteBatch polyBatch = new PolygonSpriteBatch();
    polyBatch.begin();
    poly.draw(polyBatch);
    polyBatch.end();
    maskBufferer.end();
    return maskBufferer.getColorBufferTexture();
}

EDIT
To summarize, GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS can occur in libgdx when too many FrameBuffer objects are created (without calling .dispose()), possibly because OpenGL runs out of FBO or texture/renderbuffer handles.
If no handle is returned by glGenFramebuffers, then no FBO will be bound when attaching targets or checking the status. Likewise, attempting to attach an invalid target (from a failed call to glGenTextures) will leave the FBO incomplete, though it seems incorrect to report GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS in either case.
One possibility is that the call to allocate memory for the target, such as glTexImage2D or glRenderbufferStorage, has failed (out of memory). This would leave the dimensions of that target unequal to those of targets already successfully attached to the FBO, which could then produce the error.
It's pretty standard to create a framebuffer once, attach your render targets, and reuse it each frame. By dispose, do you mean glDeleteFramebuffers?
It also looks like there should be a delete maskBufferer; after (as well as) the maskBufferer.dispose();. EDIT: if this were C++, that is.
Given that this error happens after a number of frames, it could be many things. Double-check that you aren't creating framebuffers or attachments each frame, failing to delete them, and running out of objects/handles.
It also looks like GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS is no longer used (see the specs); newer versions allow attachments of mixed dimensions. It still seems worth checking that your attachments are all the same size, though.
A quick way to narrow down which attachment is causing issues is to comment out half of them and see when the error occurs (or check the status after each attach).
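For illustration, here is a minimal raw C++/OpenGL sketch of that check-after-each-attach approach (outside libgdx; colorTex and depthRb are placeholder handles, not names from the question, and a GL loader plus current context are assumed):

#include <cstdio>
// Assumes a GL loader header (e.g. glad/GLEW) and a current GL context.
void attachAndCheck(GLuint colorTex, GLuint depthRb) {
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE)
        std::fprintf(stderr, "after color attach: 0x%x\n", status);

    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);
    status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE)
        std::fprintf(stderr, "after depth attach: 0x%x\n", status);
}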

I resolved the problem. dispose() was the fix: I had been re-creating the class every time, so dispose() was never actually being called.

Related

Dealing with the catch-22 of object lifetimes in Vulkan's device, surface, and swapchain in C++?

Background:
In order to even display to the screen you need to enable a "KHR" (Khronos Group extension) extension for presentation surfaces.
A surface, as far as I understand, is an abstraction of the window/place where images are displayed, returned by your windowing software.
In Vulkan you have a VkSurface (returned by your windowing software, e.g. GLFW), which has certain properties.
These properties are needed in order to know whether a Device is compatible with it. In other words, before a VkDevice is created (the actual logical view of the GPU, which you use to submit commands), it needs to know about the surface if you are going to use it, specifically in order to create a device whose presentation queues support that surface and its properties.
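(For reference, a minimal sketch of that compatibility query at the raw Vulkan level; the physicalDevice, queueFamilyIndex, and surface handles are assumed to already exist:)

// Sketch: ask whether a queue family of this physical device can
// present to the given surface.
VkBool32 presentSupported = VK_FALSE;
vkGetPhysicalDeviceSurfaceSupportKHR(physicalDevice, queueFamilyIndex,
                                     surface, &presentSupported);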
Once the device is created, you can create the swapchain, which is basically a series of buffers/attachments you actually render to.
Swapchains, however, have a 1:1 relationship with surfaces: there can only ever be a single swapchain per surface at most.
Problem:
This is where I start running into issues. In my code base, I codify this relationship in a member variable: a surface has a swapchain, which guarantees that you as the programmer can't accidentally create multiple swapchains per surface if you use my wrapper.
But if we use this abstraction, the following happens:
my::Surface surface = window.create_surface(...); // VkSurface wrapper
auto queue_family = physical_device.find_queue_family_that_matches(surface, ...);
auto queue_create_list = {{queue_family, priority}, ...};
my::Device device = physical_device.create_device(..., queue_create_list, ...);
my::SwapchainBuilder swapchain_builder(device);
swapchain_builder.builder_pattern_x(...).builder_pattern_x(...)...;
surface.create_swapchain(swapchain_builder);
...
// render loop
...
// end of program
return 0;
// ERROR! device no longer exists to destroy the swapchain!
Because the surface is created before the device, and because the swapchain is a member of the surface, on destruction the device is destroyed before the swapchain.
The "solution" I came up with in the mean time was this:
my::Device device; // default-constructible, but a VK_NULL_HANDLE underneath
my::Surface surface = ...;
...
device = physical_device.create_device(...,queue_create_list,...);
...
surface.create_swapchain(swapchain_builder);
And this certainly works. The surface is destroyed before the device is, and thus so is the swapchain. But it leaves a bad taste in my mouth.
The whole reason I made the swapchain a member was to eliminate bugs caused by multiple swapchains being created, by eliminating the option for the bug to exist in the first place, and to remove the need for the user to think about the Vulkan spec by encoding that requirement into my wrapper itself.
But now the user has to remember to default-initialize the device first... or they will get an esoteric error (not as good as the one I show here) unless they use validation layers.
Question:
Is there some way to encode this object relationship at compile time without runtime declaration-order issues? Is there maybe a better way to codify a 1:1 relationship in this scenario, such that the surface object could exist on its own and RAII order would handle this?
Swapchains, however, have a 1:1 relationship with surfaces: there can only ever be a single swapchain per surface at most.
That is not true. From the standard:
A native window cannot be associated with more than one non-retired swapchain at a time.
You can create multiple swapchains for a surface. However, when you create a new one, you have to provide the old one, and the old one becomes "retired". Images you have previously acquired from the retired swapchain can still be presented, but you cannot acquire any more images from it.
This moves nicely into the next point: the user needs to be able to recreate the swapchain for a surface.
Swapchains can become invalid, perhaps due to the user rescaling a window or other things. When this happens, the user needs to recreate them. Whether you retire the old one or not, you're going to have to call the creation function again.
So if you want your surface class to store a swapchain, your API needs a way for the user to create a swapchain.
In short, your goal is wrong; users need the function you're trying to get rid of.
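At the raw Vulkan level, recreation looks roughly like the sketch below (most create-info fields are elided, and the device, surface, and oldSwapchain handles are assumed to exist; this is an illustration, not the poster's wrapper code):

// Sketch: create a replacement swapchain, retiring the old one.
VkSwapchainCreateInfoKHR info{};
info.sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
info.surface = surface;
// ... image count, format, extent, present mode, etc. ...
info.oldSwapchain = oldSwapchain; // becomes "retired" on success

VkSwapchainKHR newSwapchain = VK_NULL_HANDLE;
vkCreateSwapchainKHR(device, &info, nullptr, &newSwapchain);

// The retired swapchain must still be destroyed once its images
// are no longer in use:
vkDestroySwapchainKHR(device, oldSwapchain, nullptr);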

Multiple calls to cv::ogl::Texture2D.copyFrom() result in cv::Exception (-219)

I am rendering two views per frame on an HMD. It's kind of complicated right now because I use OpenCV to load images and process intermediary results, and the rest is OpenGL, but I still want it to work. I am using OpenCV 3.1, and any help would be greatly appreciated, even if it is just some advice.
Application details:
Per view (left and right eye) I take four images as cv::Mat and copy them into four cv::ogl::Texture2D objects. Then I bind these textures to active OpenGL texture units to configure my shader and draw to a framebuffer. I read the pixels of the framebuffer back (glReadPixels()) as a cv::Mat and do some postprocessing. This cv::Mat ("synthView") is copied into another cv::ogl::Texture2D, which is rendered on a 2D screen-space quad for the view.
Here's some console output I logged for each call to the cv::ogl::Texture2D objects. No actual code!
// First iteration for my left eye view
colorImageTexture[LEFT].copyFrom(imageLeft, true); //view1
colorImageTexture[RIGHT].copyFrom(imageRight, true); //view1
depthImageTexture[LEFT].copyFrom(depthLeft, true); //view1
depthImageTexture[RIGHT].copyFrom(depthRight, true); //view1
colorImageTexture[i].bind(); //left
depthImageTexture[i].bind(); //left
colorImageTexture[i].bind(); //right
depthImageTexture[i].bind(); //right
synthesizedImageTexture.copyFrom(synthView, true); //frame0, left_eye done
// Second iteration for my right eye view, reusing colorImageTexture[LEFT] the first time
colorImageTexture[LEFT].copyFrom(imageLeft, true); //view2 // cv::Exception!
The code was working when I caught the exceptions and used the Oculus DK2 instead of the CV1. As you can see, I can run through one rendered view, but trying to render the second view throws an exception in the copyFrom method, at gl::Buffer::unbind(ogl::Buffer::PIXEL_UNPACK_BUFFER).
The exception occurs after all ogl::Texture2D objects have been used once and the first one gets "reused", which means it will not call ogl::Texture2D::create(...) inside copyFrom()!
Details of the cv::Exception:
code: -219
err: The specified operation is not allowed in the current state
func: cv::ogl::Buffer::unbind
file: C:\SDKs\opencv3.1\sources\modules\core\src\opengl.cpp
Call stack details:
cv::ogl::Texture2D::copyFrom(const cv::_InputArray &arr, bool autoRelease);
gets called from my calls, which invokes
ogl::Buffer::unbind(ogl::Buffer::PIXEL_UNPACK_BUFFER);
In that, there is an OpenGL call to
gl::BindBuffer(target, 0); // target is "ogl::Buffer::PIXEL_UNPACK_BUFFER"
with a direct call to CV_CheckGlError(); afterwards, which throws the cv::Exception. HAVE_OPENGL is apparently not defined in my code. The GL error is GL_INVALID_OPERATION.
According to the specification of glBindBuffer:
void glBindBuffer(GLenum target, GLuint buffer);

While a non-zero buffer object name is bound, GL operations on the target to which it is bound affect the bound buffer object, and queries of the target to which it is bound return state from the bound buffer object. While buffer object name zero is bound, as in the initial state, attempts to modify or query state on the target to which it is bound generate a GL_INVALID_OPERATION error.
If I understand it correctly, gl::BindBuffer(target, 0) is causing this error because the buffer argument is 0 and I am somehow modifying the target. I am not sure what the target actually is, but maybe my glReadPixels() interferes with it?
Can somebody point me in the right direction to get rid of this exception? I just used the sample OpenCV code to construct my code.
Update: My shader code can trigger the exception. If I simply output the unprojected coordinates or vec4(0, 0, 0, 1.0f), the program breaks because of the exception. Otherwise it continues, but I cannot see my color texture on my mesh.
Given the information in your question, I believe the issue is asynchronous writes to a pixel buffer object (PBO). Your code is trying to bind buffer 0 (unbinding the current buffer), but that buffer is still being written to by an asynchronous call issued earlier.
One way to overcome this is to use sync objects: create a sync object with glFenceSync() and wait on it with glClientWaitSync() or glWaitSync(). Waiting for buffers to finish their work will have a negative impact on performance, though. Here is some information about sync objects.
Check this question for information on where one would use fence sync objects.
Another way could be use multiple buffers and switch between them for consecutive frames, this will make it less likely that a buffer is still in use while you unbind it.
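As a rough C++ sketch of the fence approach (the placement around the unbind is an assumption about where the asker's transfer happens, not taken from their code):

// Sketch: insert a fence after the asynchronous transfer is issued,
// then wait for the GPU to reach it before unbinding the PBO.
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

// ... other CPU-side work can happen here ...

// Wait (up to 1 second) for the transfer to complete.
GLuint64 timeoutNs = 1000000000;
glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, timeoutNs);
glDeleteSync(fence);

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); // now safe to unbind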
The actual answer is that the OpenCV code checks for errors with glGetError(). If you don't check (and thereby clear) errors in your own code, the cv::ogl::Texture2D::copyFrom() code will pick up your stale error and throw an exception.
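In other words, drain the GL error queue before calling into OpenCV, so a stale error from your own rendering is not attributed to copyFrom(). A minimal sketch, reusing the texture names from the question's log:

// Sketch: clear any stale GL errors left by earlier calls before
// handing control to OpenCV's error-checked wrappers.
while (glGetError() != GL_NO_ERROR) {
    // optionally log each stale error code here
}
colorImageTexture[LEFT].copyFrom(imageLeft, true);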

Qt 5.5 QOpenGLTexture copying data issue

I'm working with the Qt 5.5 OpenGL wrapper classes, specifically trying to get QOpenGLTexture working. Here I am creating a 1x1 white 2D texture for masking purposes. This works:
void Renderer::initTextures()
{
    QImage white(1, 1, QImage::Format_RGBA8888);
    white.fill(Qt::white);
    m_whiteTexture.reset(new QOpenGLTexture(QOpenGLTexture::Target2D));
    m_whiteTexture->setSize(1, 1);
    m_whiteTexture->setData(white);
    //m_whiteTexture->allocateStorage(QOpenGLTexture::RGBA, QOpenGLTexture::UInt32);
    //m_whiteTexture->setData(QOpenGLTexture::RGBA, QOpenGLTexture::UInt8, white.bits());

    // Print any errors
    QList<QOpenGLDebugMessage> messages = m_logger->loggedMessages();
    if (messages.size())
    {
        qDebug() << "Start of texture errors";
        foreach (const QOpenGLDebugMessage &message, messages)
            qDebug() << message;
        qDebug() << "End of texture errors";
    }
}
However, I am now trying to do two things:
(1) Use the allocate + setData sequence as separate commands (the commented-out lines), e.g.
m_whiteTexture->allocateStorage(QOpenGLTexture::RGBA, QOpenGLTexture::UInt32);
m_whiteTexture->setData(QOpenGLTexture::RGBA, QOpenGLTexture::UInt8, white.bits());
for the purpose of more complicated rendering later, where I just update part of the data rather than reallocate. Related to this is (2), where I want to move to Target2DArray and push/pop textures in that array.
(2) Create a Target2DArray texture and populate its layers using QImages. Eventually I will be pushing/popping textures up to whatever maximum size is available on the hardware.
Regarding (1), I get these errors from the QOpenGLDebugMessage logger:
Start of texture errors
QOpenGLDebugMessage("APISource", 1280, "Error has been generated. GL error GL_INVALID_ENUM in TextureImage2DEXT: (ID: 2663136273) non-integer <format> 0 has been provided.", "HighSeverity", "ErrorType")
QOpenGLDebugMessage("APISource", 1280, "Error has been generated. GL error GL_INVALID_ENUM in TextureImage2DEXT: (ID: 1978056088) Generic error", "HighSeverity", "ErrorType")
QOpenGLDebugMessage("APISource", 1281, "Error has been generated. GL error GL_INVALID_VALUE in TextureImage2DEXT: (ID: 1978056088) Generic error", "HighSeverity", "ErrorType")
QOpenGLDebugMessage("APISource", 1281, "Error has been generated. GL error GL_INVALID_VALUE in TextureSubImage2DEXT: (ID: 1163869712) Generic error", "HighSeverity", "ErrorType")
End of texture errors
My mask works with the original code, but I can't get it to work in either scenario (1) or (2). For (2), I change the target to Target2DArray, change the size to include a depth of 1, adjust my shaders to use vec3 texture coordinates and sampler3D for sampling, etc. I can post a more complete example for (2) if that helps. I also don't understand these error codes, and it is obviously difficult to debug on the GPU if that is where things go wrong. I've tried all sorts of PixelType and PixelFormat combinations.
Thanks!
This question is very old, but I just came across a similar problem myself. For me, the solution was to call setFormat before allocating:
m_whiteTexture->setFormat(QOpenGLTexture::RGBA8_UNorm);
As I found out here: https://www.khronos.org/opengl/wiki/Common_Mistakes#Creating_a_complete_texture
The issue with the original code is that the texture is not complete.
As mentioned by @flaiver, using QOpenGLTexture::RGBA8_UNorm works, but only because Qt uses a different kind of storage for that texture (effectively it uses glTexStorage2D, which is even better, per the OpenGL documentation); this is not the case for QOpenGLTexture::RGBA.
To make the texture work even if you specifically require QOpenGLTexture::RGBA (or certain other formats, e.g. QOpenGLTexture::AlphaFormat), you need to either set texture data for each mipmap level (which you don't really need in your case) or disable mipmaps:
// the default is `QOpenGLTexture::NearestMipMapLinear`/`GL_NEAREST_MIPMAP_LINEAR`,
// but it doesn't work if you set data only for level 0;
// alternatively, use QOpenGLTexture::Nearest if that suits your needs better
m_whiteTexture->setMagnificationFilter(QOpenGLTexture::Linear);
m_whiteTexture->setMinificationFilter(QOpenGLTexture::Linear);
// // optionally a good practice is to explicitly set the Wrap Mode:
// m_whiteTexture->setWrapMode(QOpenGLTexture::ClampToEdge);
right after you allocate the storage for texture data.
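Putting both fixes together, a minimal sketch of the two-step allocate + upload path (this particular format/pixel-type pairing is one workable combination under the assumptions above, not the only valid one):

// Sketch: separate allocation and upload for a 1x1 RGBA texture,
// with filters set so the texture is complete without mipmap data.
QImage white(1, 1, QImage::Format_RGBA8888);
white.fill(Qt::white);
m_whiteTexture.reset(new QOpenGLTexture(QOpenGLTexture::Target2D));
m_whiteTexture->setSize(1, 1);
m_whiteTexture->setFormat(QOpenGLTexture::RGBA8_UNorm);
m_whiteTexture->setMinificationFilter(QOpenGLTexture::Linear);
m_whiteTexture->setMagnificationFilter(QOpenGLTexture::Linear);
m_whiteTexture->setWrapMode(QOpenGLTexture::ClampToEdge);
m_whiteTexture->allocateStorage();
m_whiteTexture->setData(QOpenGLTexture::RGBA, QOpenGLTexture::UInt8, white.bits());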

Sharing between QOpenGLContext and QGLWidget

The short form of the question: How can I draw in my QGLWidget using my FBO as a texture, without just getting a blank white image?
And, some background and details... I am using Qt 5.1 for an app that does some image processing on the GPU. I have a “compositor” class which uses a QOffscreenSurface, with a QOpenGLContext, and a QOpenGLFramebufferObject. It renders some things to the FBO. If the app is running in a render-only mode, the result gets written to a file. Run interactively, it gets shown on my subclass of QGLWidget, the “viewer.”
If I make a QImage from the FBO and draw that to the viewer, it works. However, this requires round-tripping from the GPU to a QImage and back to the GPU as a texture. In theory, I should be able to use the FBO as a texture directly, which is what I want.
I am trying to share between my QOpenGLContext and the QGLWidget’s QGLContext like so:
viewer = new tl::ui::glViewer(this);
compositor = new tl::playback::glCompositor(1280, 720, this);
viewer->context()->contextHandle()->setShareContext(compositor->context);
Is it possible to share between the two types of contexts? Is this the way to do it? Do I need to do something else to draw in the viewer using the FBO in the compositor? I’m just getting solid white when I draw the FBO directly instead of the QImage, so I’m clearly doing something wrong.
So I have figured out my problem. I was misinterpreting the documentation for setShareContext(), which notes that it "won't take effect until create() is called". I mistakenly thought this meant you had to share the context after it was created. Instead, sharing has to be established right before:
viewer = new tl::ui::glViewer(this);
compositor = new tl::playback::glCompositor(512, 512, viewer->context()->contextHandle(), this);
and the new constructor for my glCompositor:
offscreenSurface = new QOffscreenSurface();
QSurfaceFormat format;
format.setMajorVersion(4);
format.setMinorVersion(0);
format.setProfile(QSurfaceFormat::CompatibilityProfile);
format.setSamples(0);
offscreenSurface->setFormat(format);
offscreenSurface->create();
context = new QOpenGLContext();
context->setShareContext(srcCtx);
context->setFormat(format);
context->create();
context->makeCurrent(offscreenSurface);
QOpenGLFramebufferObjectFormat f;
f.setSamples(0);
f.setInternalTextureFormat(GL_RGBA32F);
frameBuffer = new QOpenGLFramebufferObject(w, h, f);
This creates a new FBO in a new context that shares with the viewer's context. When it comes time to draw, I just bind the FBO's texture: glBindTexture(GL_TEXTURE_2D, frameBuffer->texture());
I am marking the answer I got as correct, even though it didn't directly solve my problem. It was very informative.
I believe your problem has to do with the fact that FBOs themselves cannot be shared across contexts. What you can, however, do is share the attached data stores (renderbuffers, textures, etc.).
It really boils down to the fact that FBOs are little more than a pretty front-end for managing collections of read/draw buffers; the FBOs themselves are not actually a sharable resource. In fact, despite the name, they are not even Buffer Objects as far as the rest of the API is concerned; have you ever wondered why you do not use glBufferData (...), etc. on them? :)
Critically Important Point:
FrameBuffer Objects are not Buffer Objects; they contain no data stores; they only manage state for attachment points and provide an interface for validation and binding semantics.
This is why they cannot be shared, the same way that Vertex Array Objects cannot be shared, but their constituent Vertex Buffer Objects can.
The takeaway message here is:
If it does not store data, it is generally not a sharable resource.
The solution you will have to pursue involves using the renderbuffers and textures that are attached to your FBO. Provided you do not do anything crazy like trying to render in both contexts at once, sharing these resources should not present too much trouble. It could get really ugly if you tried to read from the renderbuffer or texture while the other context is drawing into it, so do not do that :P
Due to the following language in the OpenGL specification, you will probably have to detach the texture before using it in your other context:
OpenGL 3.2 (Core Profile) - 4.4.3 Feedback Loops Between Textures and the Framebuffer - pp. 230:
A feedback loop may exist when a texture object is used as both the source and destination of a GL operation. When a feedback loop exists, undefined behavior results. This section describes rendering feedback loops (see section 3.8.9) and texture copying feedback loops (see section 3.8.2) in more detail.
To put this bluntly, your white textures could be the result of feedback loops (OpenGL did not give this situation a name in versions of the spec prior to 3.1, so proper discussion of "feedback loops" is found only in 3.1+ literature). Because this invokes undefined behavior, it will behave differently between vendors. Detaching will eliminate the source of undefined behavior, and you should be good to go.
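A minimal sketch of that detach step at the raw GL level (assuming the shared texture sits on color attachment 0; fboId and sharedTextureId are placeholder names, since Qt's FBO wrapper does not expose detaching directly):

// Sketch: detach the texture from the FBO before sampling it in the
// other context, so no rendering feedback loop can exist.
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, 0, 0); // texture 0 = detach
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Now the shared texture can be sampled safely from the viewer's context.
glBindTexture(GL_TEXTURE_2D, sharedTextureId);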

Is it OK to use a random texture ID?

Instead of using glGenTextures() to get an unused texture ID, can I randomly choose a number, say 99999, and use it?
I will, of course, query:
glIsTexture( m_texture )
and make sure it's false before proceeding.
Here's some background:
I am developing an image slideshow app for the Mac. Previewing the slideshow is flawless. To save the slideshow, I render to an FBO: I create an AGL context, instantiate a texture with glGenTextures(), and render to a framebuffer. All's well except for a minor issue.
After I've saved the slideshow and return to the main window, all my image thumbnails are grey, i.e. the textures have been cleared.
I have investigated it and found that the image thumbnails and my FBO texture somehow have the same texture ID. When I delete my FBO texture at the end of the slideshow-saving operation, the thumbnail textures are also lost. This is weird because I have one AGL context during saving, and the main UI has another AGL context, presumably created in the background by Core Image, over which I have no control.
So my options, as I see them now, are to:
Not delete the FBO texture.
Randomly choose a high texture ID in the hopes that the main UI will not be using it.
Actually, I read that you do not necessarily need to delete your texture if you are deleting the AGL OpenGL context, because deleting your OpenGL context automatically deletes all associated textures. Is this true? If yes, option 1 makes more sense.
I do realize that there are funny things happening here that I can't really explain. For example, after I've saved my image slideshow to a .mov file, I delete the context, which was created in the same class. By rights, it should not affect textures created in another context. But it does, and I seriously can't explain that.
Allow me to answer your basic question first: yes, it is legal in OpenGL 2.1 to simply pick a texture name arbitrarily and use it as a texture. Note that it is not legal in 3.2 core. The fact that the OpenGL ARB removed this ability should help illustrate whether it is a good idea or not.
Now, you seem to have run into an odd bug. glGenTextures should never return a texture name that is already in use; after all, that is what it is for. You appear to somehow be getting textures that are already in use. I don't know how that happens.
An OpenGL context, when destroyed, will delete all resources specific to that context. If you have created a second context that shares resources with the first, deleting the second context will not delete the shared resources (textures, buffers, etc). It will delete un-shared resources (VAOs, etc), but everything else will remain.
I would suggest not creating and deleting the FBO texture constantly. Just create it on application startup. Worst-case, you'll have to reallocate storage for the object as needed (via glTexImage2D), if you need a different size or something.
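A quick sketch of that pattern (the function and variable names here are placeholders, not from the question): generate the texture name once, and only reallocate its storage when the needed size changes:

// Sketch: one long-lived FBO texture; reallocate storage instead of
// generating and deleting names for every save operation.
GLuint fboTexture = 0;

void initOnce() {
    glGenTextures(1, &fboTexture);
}

void resizeIfNeeded(int w, int h) {
    glBindTexture(GL_TEXTURE_2D, fboTexture);
    // Reallocates the data store; the texture name stays the same.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
}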
To anybody who has this problem: make sure your context is fully initialized; until it is, glGenTextures will always return 0. I didn't realize what was happening at first because 0 still seems to be accepted as a texture ID.