I am rendering two views per frame on an HMD. The setup is somewhat convoluted right now because I use OpenCV to load images and process intermediate results while the rest is OpenGL, but I still want it to work. I am using OpenCV 3.1, and any help would be greatly appreciated, even if it's just some advice.
Application details:
Per view (left and right eye) I take four images as cv::Mat and copy them into four cv::ogl::Texture2D objects. Then I bind these textures to active OpenGL texture units to configure my shader and draw to a framebuffer. I read the pixels of the framebuffer back (glReadPixels()) into a cv::Mat and do some postprocessing. This cv::Mat ("synthView") is then copied to another cv::ogl::Texture2D, which is rendered on a 2D screen-space quad for the view.
Here's some console output I logged for each call to the cv::ogl::Texture2D objects. No actual code!
// First iteration for my left eye view
colorImageTexture[LEFT].copyFrom(imageLeft, true); //view1
colorImageTexture[RIGHT].copyFrom(imageRight, true); //view1
depthImageTexture[LEFT].copyFrom(depthLeft, true); //view1
depthImageTexture[RIGHT].copyFrom(depthRight, true); //view1
colorImageTexture[i].bind(); //left
depthImageTexture[i].bind(); //left
colorImageTexture[i].bind(); //right
depthImageTexture[i].bind(); //right
synthesizedImageTexture.copyFrom(synthView, true); //frame0, left_eye done
// Second iteration for my right eye view, reusing colorImageTexture[LEFT] the first time
colorImageTexture[LEFT].copyFrom(imageLeft, true); //view2 // cv::Exception!
The code was working when I caught the exceptions and used the Oculus DK2 instead of the CV1. As you can see, I can run through one rendered view, but trying to render the second view throws an exception in the copyFrom() method at ogl::Buffer::unbind(ogl::Buffer::PIXEL_UNPACK_BUFFER).
The exception occurs after all ogl::Texture2D objects have been used once and the first one gets "reused", which means that it will not call ogl::Texture2D::create(...) in the copyFrom() function!
Details of the cv::Exception:
code: -219
err: The specified operation is not allowed in the current state
func: cv::ogl::Buffer::unbind
file: C:\\SDKs\\opencv3.1\\sources\\modules\\core\\src\\opengl.cpp
Call stack details:
cv::ogl::Texture2D::copyFrom(const cv::_InputArray &arr, bool autoRelease);
is called from my code, which in turn invokes
ogl::Buffer::unbind(ogl::Buffer::PIXEL_UNPACK_BUFFER);
In that, there is an OpenGL call to
gl::BindBuffer(target, 0); // target is "ogl::Buffer::PIXEL_UNPACK_BUFFER"
with a direct call to CV_CheckGlError() afterwards, which throws the cv::Exception. HAVE_OPENGL is apparently not defined in my code. The GL error is GL_INVALID_OPERATION.
According to the specification of glBindBuffer:
void glBindBuffer(GLenum target, GLuint buffer);
While a non-zero buffer object name is bound, GL operations on the
target to which it is bound affect the bound buffer object, and
queries of the target to which it is bound return state from the bound
buffer object. While buffer object name zero is bound, as in the
initial state, attempts to modify or query state on the target to
which it is bound generates an GL_INVALID_OPERATION error.
If I understand it correctly, gl::BindBuffer(target, 0) is causing this error because the buffer argument is 0 and I somehow alter the target. I am not sure what the target actually is, but maybe my glReadPixels() interferes with it?
Can somebody point me in the right direction to get rid of this exception? I just used the sample OpenCV code to construct my code.
Update: my shader code can trigger the exception. If I simply output the unprojected coordinates or vec4(0, 0, 0, 1.0f), the program breaks with this exception. Otherwise it continues, but I cannot see my color texture on my mesh.
Given the information in your question, I believe the issue is with asynchronous writes to a pixel buffer object (PBO). I believe your code is trying to bind a buffer to 0 (unbinding the buffer), but that buffer is still being written to by an asynchronous call prior to it.
One way to overcome this is to use sync objects: create a fence with glFenceSync() after the asynchronous operation and wait on it with glClientWaitSync() or glWaitSync() before touching the buffer again. Waiting for buffers to finish their work will have a negative impact on performance, though. Here is some information about sync objects.
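A minimal sketch of that idea (names and the timeout value are illustrative, not taken from your code): issue the GPU work that touches the PBO, drop a fence behind it, and wait on that fence before the buffer is reused or unbound.
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0); // fence placed right after the async transfer
// ... do other CPU work here ...
GLenum state = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 5000000000ull); // 5 s timeout, in nanoseconds
if (state == GL_TIMEOUT_EXPIRED || state == GL_WAIT_FAILED) {
    // the GPU is not done yet (or waiting failed) -- handle accordingly
}
glDeleteSync(fence);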
Check this question for information on where one would use fence sync objects.
Another way could be to use multiple buffers and switch between them on consecutive frames; this makes it less likely that a buffer is still in use when you unbind it.
The actual answer is that the OpenCV code checks for errors with glGetError(). If you don't check for (and thereby clear) errors in your own code, the stale error is still pending when cv::ogl::Texture2D::copyFrom() checks, and it throws the exception.
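To verify this, a hedged sketch (the loop is generic, not OpenCV API): drain any pending GL errors after your own GL calls (e.g. after glReadPixels()) and before calling copyFrom(), so OpenCV's CV_CheckGlError() no longer sees a stale GL_INVALID_OPERATION.
GLenum err;
while ((err = glGetError()) != GL_NO_ERROR) {
    // this error was raised by earlier calls in *your* code, not by OpenCV;
    // log it or ignore it -- the point is that the error flag is now cleared
}
colorImageTexture[LEFT].copyFrom(imageLeft, true); // now only reports its own errors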
Related
I'm currently trying to get into Vulkan, and I've mostly followed this well-known Vulkan tutorial, all the while trying to integrate it into a framework I built around OpenGL. I'm at the point where I can successfully render an object on the screen and have the object move around by passing a transformation matrix to a uniform buffer linked to my shader code.
In this tutorial the author focuses on drawing one object to the screen, which is a good starting point, but I would like my end code to look like this:
drawRect(position1, size1, color1);
drawRect(position2, size2, color2);
...
My first try at implementing something like this ended up with me submitting the command buffer, which is created and recorded only once at the beginning, once for each object I wanted to render, making sure to update the uniform data between each command buffer submission. This didn't work, however, and after some debugging with RenderDoc I realized it was because starting a render pass clears the screen.
If I understand my situation correctly, the only way to achieve what I want would involve re-creating the command buffers every frame:
Record n, the number of times we want to draw something on the screen;
At the end of a frame, allocate n uniform buffers, and fill them with the corresponding data;
Create n descriptor sets to be able to link these uniform buffers with my shader;
Record the command buffer by repeating n times the process of binding a descriptor set using vkCmdBindDescriptorSets and drawing the requested data using vkCmdDrawIndexed.
This seems like a lot of work to do every frame. Is this how I should handle a dynamic number of draw calls? Or is there some concept I don't know about or got wrong?
Generally, command buffers actually are re-recorded every frame, and Vulkan allows recording to be multithreaded via command pools.
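A rough per-frame recording sketch, under the assumption of one command buffer per frame in flight and a command pool created with VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT (all names here are placeholders, not from the tutorial):
vkWaitForFences(device, 1, &frameFence, VK_TRUE, UINT64_MAX); // GPU is done with this command buffer
vkResetFences(device, 1, &frameFence);
vkResetCommandBuffer(cmd, 0);

VkCommandBufferBeginInfo beginInfo{};
beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
beginInfo.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;
vkBeginCommandBuffer(cmd, &beginInfo);
vkCmdBeginRenderPass(cmd, &renderPassInfo, VK_SUBPASS_CONTENTS_INLINE);
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
for (uint32_t i = 0; i < objectCount; ++i) {
    // one descriptor set (or dynamic offset) per object, prepared up front
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout,
                            0, 1, &objectSets[i], 0, nullptr);
    vkCmdDrawIndexed(cmd, indexCount, 1, 0, 0, 0);
}
vkCmdEndRenderPass(cmd);
vkEndCommandBuffer(cmd);
// ...vkQueueSubmit(...) signalling frameFence, then present...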
Indirect draws also exist: you store data about the draw commands (index count, instance count, etc.) in a separate buffer, and the driver reads that data from the buffer when you submit the commands. vkCmdDraw*Indirect requires you to specify the number of draw commands at recording time; vkCmdDraw*IndirectCount allows you to store the number of draw commands in a buffer as well.
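A hedged sketch of the indirect path (assuming indirectBuffer was created with VK_BUFFER_USAGE_INDIRECT_BUFFER_BIT and is filled by the host or a compute shader):
// one record per draw; fields: indexCount, instanceCount, firstIndex, vertexOffset, firstInstance
VkDrawIndexedIndirectCommand draws[2] = {
    { 36, 1, 0,  0, 0 },
    { 36, 1, 0, 24, 0 },
};
// ...upload 'draws' into indirectBuffer...

// drawCount (2) is fixed at recording time here;
// vkCmdDrawIndexedIndirectCount would read the count from a buffer instead
vkCmdDrawIndexedIndirect(cmd, indirectBuffer, 0, 2, sizeof(VkDrawIndexedIndirectCommand));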
Also, I don't see a reason why you would have to re-create uniform buffers and descriptor sets each frame. In fact, as far as I know, Vulkan encourages you to pre-bake the things you can, and descriptor sets are a tool for that.
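One common pre-baked arrangement (an assumption on my part, not something from your code): a single descriptor set of type VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC pointing at one large uniform buffer, with a per-object dynamic offset supplied at draw time.
VkDeviceSize stride = 256; // per-object slice, padded to minUniformBufferOffsetAlignment
for (uint32_t i = 0; i < objectCount; ++i) {
    uint32_t dynamicOffset = static_cast<uint32_t>(i * stride);
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout,
                            0, 1, &sharedSet, 1, &dynamicOffset);
    vkCmdDrawIndexed(cmd, indexCount, 1, 0, 0, 0);
}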
With round-robin buffering you usually have a few buffers and cycle between them. How do you manage GLFW callbacks in this situation?
Suppose you have 3 buffers. You send draw commands with a specified viewport into the first one, but while the CPU is processing the second one it receives a callback for a window resize, for example. The server may still be rendering whatever you sent with the previous viewport size, causing some "artifacts". That's just one example, but it would happen for literally everything, right? An easy fix would be to process the callbacks (the last ones received) just after rendering the last buffer, and block the client until the server has processed all the commands. Is that correct (which would imply a frame of delay per buffer)? Is there something else that could be done?
OpenGL's internal state machine takes care of all of that. All OpenGL commands are queued up in a command queue and executed in order. A call to glViewport – and any other OpenGL command for that matter – affects only the outcome of the commands that follow it, and nothing that comes before.
There's no need to implement custom round robin buffering.
This even covers things like textures and buffer objects (with the notable exception of persistently mapped buffer objects). I.e. if you do the following sequence of operations
glDrawElements(…); // (1)
glTexSubImage2D(GL_TEXTURE_2D, …);
glDrawElements(…); // (2)
The OpenGL rendering model mandates that glDrawElements (1) uses the texture data of the bound texture object as it was before the call to glTexSubImage2D and that glDrawElements (2) must use the data that has been uploaded between (1) and (2).
Yes, this involves tracking the contents, implicit data copies and a lot of other unpleasant things. Yes, this also likely implies that you're hitting a slow path.
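Coming back to the resize example from the question: because of that ordering guarantee, a minimal GLFW setup (a sketch with made-up names) can simply record the new size in the callback and apply it at the top of the next frame.
static int fbWidth = 1280, fbHeight = 720;

void framebufferSizeCallback(GLFWwindow*, int width, int height) {
    fbWidth = width;   // just remember the size; no GL calls needed here
    fbHeight = height;
}

// setup:        glfwSetFramebufferSizeCallback(window, framebufferSizeCallback);
// render loop:  glViewport(0, 0, fbWidth, fbHeight); // affects only the draws issued after it
//               drawScene();
//               glfwSwapBuffers(window);
//               glfwPollEvents();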
This is an advanced OpenGL question and, to be honest, it seems more like a driver bug. I know that the standard explicitly states that deletion of an object only deletes its name, therefore a generator function can return the same name. However, it's not clear how to deal with this...
The situation is the following: I have a so called "transient" (C++) object (TO from now on), which generates GL objects, enqueues commands using them, then deletes them.
Now consider that I use more than one of these before I call SwapBuffers(). The following happens:
TO 1. generates a vertex buffer named VBO1, along with a VAO1 and other things
TO 1. calls some mapping/drawing commands with VBO1
TO 1. deletes the VAO1 and VBO1 (therefore the name VBO1 is freed)
TO 2. generates a vertex buffer object, which of course gets the same name (VBO1) since name 1 was deleted and is available, along with another VAO (probably 1)
TO 2. calls some other mapping/drawing commands with this new VBO1 (different vertex positions, etc.)
TO 2. deletes the new VBO1
SwapBuffers()
And the result is: only the modifications performed by TO 1. are in effect. In a nutshell: I wanted to render a triangle, then a square, but I only got the triangle.
Workaround: not deleting the VBO, so I get a new name in TO 2. (VBO2)
I would like to ask for your help in this matter; I'm aware that I shouldn't delete/generate objects mid-frame, but aside from that, this "buggy" mechanism really disturbs me (I mean, how can I trust GL then? ...short answer: I can't...)
(side note: I've been programming 3D graphics for 12 years, but this thing really gave me the creeps...)
I have similar problems with my multithreaded rendering code. I use a double buffering system for the render commands, so when I delete an object, it might be used in the next frame.
The short of it is that TO shouldn't directly delete the GL objects. It needs to submit the handle to a manager to queue for deletion between frames. With my double buffering, I add a small timer to count down 2 frames before releasing.
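A minimal sketch of such a deferred-deletion manager (all names invented, assuming 2 frames in flight; GLuint comes from your GL loader header):
#include <vector>

struct PendingDelete { GLuint name; int framesLeft; };
static std::vector<PendingDelete> pendingDeletes;

void queueBufferDelete(GLuint name) {
    pendingDeletes.push_back({ name, 2 }); // keep the object alive for 2 more frames
}

void flushPendingDeletes() { // call once per frame, e.g. right after SwapBuffers()
    for (auto it = pendingDeletes.begin(); it != pendingDeletes.end();) {
        if (--it->framesLeft <= 0) {
            glDeleteBuffers(1, &it->name); // no in-flight frame can still use it
            it = pendingDeletes.erase(it);
        } else {
            ++it;
        }
    }
}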
For my transient verts, I have a large chunk of memory that I write to for storage, and skip the VBO submission. I don't know what your setup is or how many vertices you are pushing, but you may not benefit from VBOs if you 1) regenerate every frame or 2) push small sets of verts. Definitely perf test with and without VBOs.
I found the cause of the problem, and I think it's worth mentioning (so that other developers won't fall into the same hole). The actual problem is the VAO, or more precisely the caching of the VAO.
In Metal and Vulkan the input layout is completely independent of the actual buffers used: you only specify the binding point (location) where the buffer is going to be.
But not in OpenGL... the VAO actually holds a strong reference to the vertex buffer that was bound during its creation (see the sketch after the list below). Therefore the following happened:
VBO1 was created, VAO1 was created
VAO1 was cached in the pipeline cache
VBO1 was deleted, but only the name was freed, not the object
glGenBuffers() returns 1 again as the name is available
but VAO1 in the cache still references the old VBO1
the driver gets confused and doesn't let me modify the new VBO1
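To illustrate the capture behaviour described above (a sketch, not the actual pipeline-cache code):
glBindVertexArray(vao1);
glBindBuffer(GL_ARRAY_BUFFER, vbo1);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr); // vao1 now references vbo1
glEnableVertexAttribArray(0);

glDeleteBuffers(1, &vbo1); // only the *name* becomes free right away
glGenBuffers(1, &vbo1);    // may hand back the very same name...
// ...but vao1 keeps referencing the old buffer object, not this new one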
And the solution... well... For now, when a vertex buffer gets deleted, I delete any cached pipelines that reference that buffer.
In the long term, though, I'm going to maintain a separate cache for input layouts (even if the layout is part of the pipeline state) and move the transient object further up, so that it becomes less transient.
Welcome to the world of OpenGL...
I'm using libgdx to create a program. I need to perform some operations in a framebuffer. For these operations I create a new FrameBuffer, and after the operation I call dispose() on it. After creating the framebuffer about 10 times, the program crashes with the error: frame buffer couldn't be constructed: incomplete dimensions. Looking at the libgdx code, this corresponds to the GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS framebuffer status. Why does this happen, and what must I do to fix it?
Code:
if(maskBufferer != null){
maskBufferer.dispose();
}
maskBufferer = new FrameBuffer(Pixmap.Format.RGBA8888, width, height, true);
mask = createMaskImageMask(aspectRatioCrop, maskBufferer);
...
private Texture createMaskImageMask(boolean aspectRatioCrop, FrameBuffer maskBufferer) {
maskBufferer.begin();
Gdx.gl.glClearColor(COLOR_FOR_MASK, COLOR_FOR_MASK, COLOR_FOR_MASK, ALPHA_FOR_MASK);
Gdx.graphics.getGL20().glClear(GL20.GL_COLOR_BUFFER_BIT);
float[] coord = null;
...
PolygonRegion polyReg = new PolygonRegion( new TextureRegion(new Texture(Gdx.files.internal(texturePolygon)) ),
coord);
PolygonSprite poly = new PolygonSprite(polyReg);
PolygonSpriteBatch polyBatch = new PolygonSpriteBatch();
polyBatch.begin();
poly.draw(polyBatch);
polyBatch.end();
maskBufferer.end();
return maskBufferer.getColorBufferTexture();
}
EDIT
To summarize, GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS can occur in libgdx when too many FrameBuffer objects are created (without calling .dispose()), possibly to do with OpenGL running out of FBO or texture/renderbuffer handles.
If no handle is returned by glGenFramebuffers then no FBO will be bound when attaching targets or checking the status. Likewise, an attempt to attach an invalid target (from a failed call to glGenTextures) will cause the FBO to be incomplete. Though it seems incorrect to report GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS in either case.
One possibility may be that the call to allocate memory for the target, such as glTexImage2D or glRenderbufferStorage, has failed (out of memory). This leaves the dimensions of the target unequal to those of targets already successfully attached to the FBO, which could then produce the error.
It's pretty standard to create a framebuffer once, attach your render targets and reuse it each frame. By dispose, do you mean glDeleteFramebuffers?
It looks like there should also be a delete maskBufferer; after the maskBufferer.dispose(); call. EDIT: that would apply if it were C++.
Given this error happens after a number of frames, it could be many things. Double check you aren't creating framebuffers or attachments each frame without deleting them, and running out of objects/handles.
It also looks like GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS is no longer used in recent desktop GL (see the specs), something along the lines of mixed attachment dimensions now being allowed. It still seems worth checking that your attachments are all the same size, though.
A quick way to narrow down which attachment is causing issues is to comment out half of them and see when the error occurs (or check the status after each attach).
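At the raw GL level (libgdx does the equivalent internally), checking after each attach looks roughly like this; width, height and the format are placeholders:
GLuint fbo, color;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenTextures(1, &color);
glBindTexture(GL_TEXTURE_2D, color);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);

GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    // on GLES2 a size mismatch between attachments is reported here as
    // GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS
}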
I resolved the problem. Calling dispose() helped. I was re-creating the class every time, so dispose() was never called.
I'm trying to use the transform feedback functionality of OpenGL. I've written a minimalistic vertex shader and created a program with it (there's no fragment shader). I've also made a call to glTransformFeedbackVaryings with a single output varying name, and I've set the buffer mode to GL_INTERLEAVED_ATTRIBS. The shader program compiles and links fine (I also make sure I link after the glTransformFeedbackVaryings call).
I've enabled a single vertex attrib array using glEnableVertexAttribArray, allocated a VBO for the generic vertex attributes and made a call to glVertexAttribPointer for the attribute.
I've bound the TRANSFORM_FEEDBACK_BUFFER to another buffer which I've generated and created a data store which should be plenty big enough to be written to during transform feedback.
I then enable transform feedback and call glDrawArrays(GL_POINTS, 0, 1000). I don't get any crashes throughout the running of the program.
The problem is that I'm getting no indication that the transform feedback is writing anything to the TRANSFORM_FEEDBACK_BUFFER during the glDrawArrays call. I set up a query which monitors GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN and this always returns 0. No matter what I try I can't seem to get the transform feedback to write ANYTHING (never mind anything meaningful!)
If anyone has any suggestions as to how I could get the transform feedback to write anything, or things that I should check for please let me know!
Note: I can't use transform feedback objects and I'm not using vertex array objects.
I think the problem ended up being how I was calling glBindBufferBase. Given that I can't see this function call in the original question it may have been that I omitted it altogether.
Certainly I didn't realise that the GL_TRANSFORM_FEEDBACK_BUFFER also has to be bound with a call to glBindBuffer to the correct buffer object before calling glBindBufferBase.
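For reference, a hedged sketch of a minimal transform-feedback setup along those lines (placeholder names, no transform feedback objects or VAOs):
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tfbo); // the glBindBuffer mentioned above
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, bufferSize, nullptr, GL_DYNAMIC_COPY);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfbo); // indexed binding point 0

glEnable(GL_RASTERIZER_DISCARD); // optional, since there is no fragment shader anyway
glGenQueries(1, &query);
glBeginQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN, query);
glBeginTransformFeedback(GL_POINTS); // must match the primitive mode of the draw
glDrawArrays(GL_POINTS, 0, 1000);
glEndTransformFeedback();
glEndQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN);
glDisable(GL_RASTERIZER_DISCARD);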