I'm working with Qt 5.5 OpenGL wrapper classes. Specifically trying to get QOpenGLTexture working. Here I am creating a 1x1 2D white texture for masking purposes. This works:
void Renderer::initTextures()
{
    QImage white(1, 1, QImage::Format_RGBA8888);
    white.fill(Qt::white);

    m_whiteTexture.reset(new QOpenGLTexture(QOpenGLTexture::Target2D));
    m_whiteTexture->setSize(1, 1);
    m_whiteTexture->setData(white);
    //m_whiteTexture->allocateStorage(QOpenGLTexture::RGBA, QOpenGLTexture::UInt32);
    //m_whiteTexture->setData(QOpenGLTexture::RGBA, QOpenGLTexture::UInt8, white.bits());

    // Print any errors
    QList<QOpenGLDebugMessage> messages = m_logger->loggedMessages();
    if (messages.size())
    {
        qDebug() << "Start of texture errors";
        foreach (const QOpenGLDebugMessage &message, messages)
            qDebug() << message;
        qDebug() << "End of texture errors";
    }
}
However I am now trying to do two things:
Use an allocateStorage() + setData() sequence as separate commands (the commented-out lines), e.g.
m_whiteTexture->allocateStorage(QOpenGLTexture::RGBA, QOpenGLTexture::UInt32);
m_whiteTexture->setData(QOpenGLTexture::RGBA, QOpenGLTexture::UInt8, white.bits());
for the purpose of more complicated rendering later, where I only update part of the data rather than reallocating. Related to this is (2), where I want to move to Target2DArray and push/pop textures in this array.
Create a Target2DArray texture and populate layers using QImages. Eventually I will be pushing/popping textures up to some max size available on the hardware.
Regarding (1), I get these errors from QOpenGLDebugMessage logger:
Start of texture errors
QOpenGLDebugMessage("APISource", 1280, "Error has been generated. GL error GL_INVALID_ENUM in TextureImage2DEXT: (ID: 2663136273) non-integer <format> 0 has been provided.", "HighSeverity", "ErrorType")
QOpenGLDebugMessage("APISource", 1280, "Error has been generated. GL error GL_INVALID_ENUM in TextureImage2DEXT: (ID: 1978056088) Generic error", "HighSeverity", "ErrorType")
QOpenGLDebugMessage("APISource", 1281, "Error has been generated. GL error GL_INVALID_VALUE in TextureImage2DEXT: (ID: 1978056088) Generic error", "HighSeverity", "ErrorType")
QOpenGLDebugMessage("APISource", 1281, "Error has been generated. GL error GL_INVALID_VALUE in TextureSubImage2DEXT: (ID: 1163869712) Generic error", "HighSeverity", "ErrorType")
End of texture errors
My mask works with the original code, but I can't get it to work in either the (1) or (2) scenario. For (2) I change the target to Target2DArray, change the size to include a depth of 1, adjust my shaders to use vec3 texture coordinates and sampler3D for sampling, etc. I can post a more complete (2) example if that helps. I also don't understand these error codes, and it is obviously difficult to debug on the GPU if that is where things are going wrong. I've tried all sorts of PixelType and PixelFormat combinations.
Thanks!
This question is very old, but I just came across a similar problem myself. For me the solution was to call setFormat before allocating storage:
m_whiteTexture->setFormat(QOpenGLTexture::RGBA8_UNorm);
As I found out here: https://www.khronos.org/opengl/wiki/Common_Mistakes#Creating_a_complete_texture
The issue with the original code is that the texture is not complete.
As mentioned by @flaiver, using QOpenGLTexture::RGBA8_UNorm works, but only because Qt uses a different kind of storage for this texture (it effectively uses glTexStorage2D, which is even better, per the OpenGL documentation), which is not the case for QOpenGLTexture::RGBA.
To make the texture work even if you specifically require QOpenGLTexture::RGBA (or some other format, e.g. QOpenGLTexture::AlphaFormat), you need to either set texture data for each mipmap level (which you don't really need in your case) or disable the use of mipmaps:
// the default is `QOpenGLTexture::NearestMipMapLinear`/`GL_NEAREST_MIPMAP_LINEAR`,
// but it doesn't work, if you set data only for level 0
// alternatively use QOpenGLTexture::Nearest if that suits your needs better
m_whiteTexture->setMagnificationFilter(QOpenGLTexture::Linear);
m_whiteTexture->setMinificationFilter(QOpenGLTexture::Linear);
// // optionally a good practice is to explicitly set the Wrap Mode:
// m_whiteTexture->setWrapMode(QOpenGLTexture::ClampToEdge);
right after you allocate the storage for texture data.
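Putting the two answers together, a minimal sketch of the separate allocate-then-upload path for the 1x1 white mask could look like the following (Qt 5.5 QOpenGLTexture API; the Target2DArray variant at the end is an untested illustration that uses placeholder names such as m_maskArray, width, height, layerCount, layer and layerImage):

QImage white(1, 1, QImage::Format_RGBA8888);
white.fill(Qt::white);

m_whiteTexture.reset(new QOpenGLTexture(QOpenGLTexture::Target2D));
m_whiteTexture->setFormat(QOpenGLTexture::RGBA8_UNorm);   // sized format, set before allocation
m_whiteTexture->setSize(1, 1);
m_whiteTexture->setMipLevels(1);                          // only level 0 is ever uploaded
m_whiteTexture->allocateStorage();                        // allocate once...
m_whiteTexture->setMinificationFilter(QOpenGLTexture::Linear);
m_whiteTexture->setMagnificationFilter(QOpenGLTexture::Linear);
m_whiteTexture->setWrapMode(QOpenGLTexture::ClampToEdge);
m_whiteTexture->setData(QOpenGLTexture::RGBA, QOpenGLTexture::UInt8,
                        white.constBits());               // ...update later via setData()

// For (2), a Target2DArray texture follows the same pattern (placeholder names, untested):
// m_maskArray.reset(new QOpenGLTexture(QOpenGLTexture::Target2DArray));
// m_maskArray->setFormat(QOpenGLTexture::RGBA8_UNorm);
// m_maskArray->setSize(width, height);
// m_maskArray->setLayers(layerCount);
// m_maskArray->setMipLevels(1);
// m_maskArray->allocateStorage();
// m_maskArray->setData(0 /*mipLevel*/, layer, QOpenGLTexture::RGBA,
//                      QOpenGLTexture::UInt8, layerImage.constBits());
// (the shader then samples it with a sampler2DArray rather than a sampler3D)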
Related
I am uploading image data into GL texture asynchronously.
In debug output I am getting these warnings during the rendering:
Source: OpenGL, type: Other, id: 131185, severity: Notification
Message: Buffer detailed info: Buffer object 1 (bound to GL_PIXEL_UNPACK_BUFFER_ARB, usage hint is GL_DYNAMIC_DRAW) has been mapped WRITE_ONLY in SYSTEM HEAP memory (fast).

Source: OpenGL, type: Performance, id: 131154, severity: Medium
Message: Pixel-path performance warning: Pixel transfer is synchronized with 3D rendering.
I can't see any wrong usage of PBOs in my case, or any errors. So the question is: are these warnings safe to ignore, or am I actually doing something wrong?
My code for that part:
// start copying pixels into the PBO from RAM:
mPBOs[mCurrentPBO].Bind(GL_PIXEL_UNPACK_BUFFER);
const uint32_t buffSize = pipe->GetBufferSize();
GLubyte* ptr = (GLubyte*)mPBOs[mCurrentPBO].MapRange(0, buffSize, GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
if (ptr)
{
    memcpy(ptr, pipe->GetBuffer(), buffSize);
    mPBOs[mCurrentPBO].Unmap();
}

// copy pixels from the other, already filled PBO into the texture (skipped on the first frame)
mPBOs[1 - mCurrentPBO].Bind(GL_PIXEL_UNPACK_BUFFER);
// mCopyTex is bound to mCopyFBO as an attachment
glTextureSubImage2D(mCopyTex->GetHandle(), 0, 0, 0, mClientSize.x, mClientSize.y,
                    GL_RGBA, GL_UNSIGNED_BYTE, 0);

mCurrentPBO = 1 - mCurrentPBO;
Then I just blit the result to default frame buffer. No rendering of geometry or anything like that.
glBlitNamedFramebuffer(mCopyFBO,
                       0,   // default FBO id
                       0, 0, mViewportSize.x, mViewportSize.y,
                       0, 0, mViewportSize.x, mViewportSize.y,
                       GL_COLOR_BUFFER_BIT,
                       GL_LINEAR);
Running on NVIDIA GTX 960 card.
This performance warning is NVIDIA-specific and is intended as a hint that you're not going to use a separate hardware transfer queue, which is no wonder, since you use a single-thread, single-GL-context model in which both the rendering (at least your blit) and the transfer are carried out.
See this NVIDIA presentation for some details about how NVIDIA handles this; page 22 also explains this specific warning. Note that this warning does not mean that your transfer is not asynchronous. It is still fully asynchronous with respect to the CPU thread. It will just be processed synchronously on the GPU with respect to the render commands in the same command queue, and you're not using the asynchronous copy engine, which could perform these copies in a separate command queue, independent of the rendering commands.
I can't see any wrong usage of PBOs in my case, or any errors. So the question is: are these warnings safe to ignore, or am I actually doing something wrong?
There is nothing wrong with your PBO usage.
It is not clear if your specific application could even benefit from using a more elaborate separate transfer queue scheme.
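For illustration only, here is a minimal sketch of what that "separate transfer queue" approach could look like: a second GL context that shares objects with the render context performs the PBO upload on its own thread, and a fence hands the finished texture over to the render thread. It assumes GLFW for context creation and an already-initialized function loader (glad is used as a placeholder); g_uploadFence, uploadCtx, pbo, tex, w, h and size are illustrative names, not part of the code above.

#include <glad/glad.h>     // or whatever GL loader the project already uses
#include <GLFW/glfw3.h>
#include <atomic>
#include <cstring>

std::atomic<GLsync> g_uploadFence{nullptr};

// Runs on a dedicated upload thread; uploadCtx was created with
// glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE) and shares objects with the render context.
void uploadOnce(GLFWwindow* uploadCtx, GLuint pbo, GLuint tex,
                int w, int h, const void* pixels, size_t size)
{
    glfwMakeContextCurrent(uploadCtx);              // current only on this thread
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    void* ptr = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, size,
                                 GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
    if (!ptr)
        return;
    std::memcpy(ptr, pixels, size);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    glFlush();                                      // make sure the fence reaches the GPU
    g_uploadFence.store(fence);
}

// Render thread, each frame, before blitting/sampling from tex:
void waitForUpload()
{
    if (GLsync fence = g_uploadFence.exchange(nullptr)) {
        glWaitSync(fence, 0, GL_TIMEOUT_IGNORED);   // server-side wait, no CPU stall
        glDeleteSync(fence);
    }
}

Whether this buys anything in practice depends on the driver and the workload, which is exactly the point of the paragraph above.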
I am rendering two views per frame on an HMD, and it's kind of complicated right now because I use OpenCV to load images and process intermediate results, while the rest is OpenGL, but I still want it to work. I am using OpenCV 3.1; any help would be greatly appreciated, even if it's just some advice.
Application details:
Per view (left and right eye) I take four images as cv::Mat and copy them into four cv::ogl::Texture2D objects. Then I bind these textures to active OpenGL texture units to configure my shader and draw to a framebuffer. I read the pixels of the framebuffer back (glReadPixels()) into a cv::Mat and do some postprocessing. This cv::Mat ("synthView") is copied to another cv::ogl::Texture2D, which is rendered on a 2D screen-space quad for the view.
Here's some console output I logged for each call to the cv::ogl::Texture2D objects. No actual code!
// First iteration for my left eye view
colorImageTexture[LEFT].copyFrom(imageLeft, true); //view1
colorImageTexture[RIGHT].copyFrom(imageRight, true); //view1
depthImageTexture[LEFT].copyFrom(depthLeft, true); //view1
depthImageTexture[RIGHT].copyFrom(depthRight, true); //view1
colorImageTexture[i].bind(); //left
depthImageTexture[i].bind(); //left
colorImageTexture[i].bind(); //right
depthImageTexture[i].bind(); //right
synthesizedImageTexture.copyFrom(synthView, true); //frame0, left_eye done
// Second iteration for my right eye view, reusing colorImageTexture[LEFT] the first time
colorImageTexture[LEFT].copyFrom(imageLeft, true); //view2 // cv::Exception!
The code was working when I caught the exceptions and used the Oculus DK2 instead of the CV1. As you can see, I can run through one rendered view, but trying to render the second view throws an exception in the copyFrom method at gl::Buffer::unbind(ogl::Buffer::PIXEL_UNPACK_BUFFER).
The exception occurs after all ogl::Texture2D objects have been used once and the first one gets "reused", which means that it will not call ogl::Texture2D::create(...) in the copyFrom() function!
Details of the cv::Exception:
code: -219
err: The specified operation is not allowed in the current state
func: cv::ogl::Buffer::unbind
file: C:\\SDKs\\opencv3.1\\sources\\modules\\core\\src\\opengl.cpp
Call stack details:
cv::ogl::Texture2D::copyFrom(const cv::_InputArray &arr, bool autoRelease);
gets called from my calls, which invokes
ogl::Buffer::unbind(ogl::Buffer::PIXEL_UNPACK_BUFFER);
In that, there is an OpenGL call to
gl::BindBuffer(target, 0); // target is "ogl::Buffer::PIXEL_UNPACK_BUFFER"
with a direct call to CV_CheckGlError() afterwards, which throws the cv::Exception. HAVE_OPENGL is apparently not defined in my code. The GL error is GL_INVALID_OPERATION.
According to the specification of glBindBuffer:
void glBindBuffer(GLenum target,
GLuint buffer);
While a non-zero buffer object name is bound, GL operations on the
target to which it is bound affect the bound buffer object, and
queries of the target to which it is bound return state from the bound
buffer object. While buffer object name zero is bound, as in the
initial state, attempts to modify or query state on the target to
which it is bound generates an GL_INVALID_OPERATION error.
If I understand it correctly, gl::BindBuffer(target, 0) is causing this error because the buffer argument is 0 and I somehow alter the target. I am not sure what the target actually is, but maybe my glReadPixels() interferes with it?
Can somebody point me in the right direction to get rid of this exception? I just used the sample OpenCV code to construct my code.
Update: My shader code can trigger the exception. If I simply output the unprojected coordinates or vec4(0,0,0,1.0f), the program breaks because of the exception. Else, it continues but I cannot see my color texture on my mesh.
Given the information in your question, I believe the issue is with asynchronous writes to a pixel buffer object (PBO). I believe your code is trying to bind buffer 0 (i.e. unbind the buffer) while that buffer is still being written to by an earlier asynchronous call.
One way to overcome this is to use sync objects: create a sync object with glFenceSync() and wait on it with glClientWaitSync() or glWaitSync(). If you wait for buffers to finish their work, this will have some negative impact on performance (a short sketch follows below). Here is some information about sync objects.
Check this question for information on where one would use fence sync objects.
Another way could be to use multiple buffers and switch between them on consecutive frames; this makes it less likely that a buffer is still in use when you unbind it.
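A minimal single-context sketch of the fence idea (hedged: the exact placement depends on your code; the fence goes right after the GL calls that still use the PBO, the wait goes right before the buffer is unbound or overwritten):

GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);  // right after the commands using the PBO

// ... later, before unbinding/reusing the PBO:
GLenum r = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                            16u * 1000u * 1000u);              // wait up to ~16 ms
if (r == GL_TIMEOUT_EXPIRED || r == GL_WAIT_FAILED) {
    // the buffer may still be in use; wait longer or skip this frame
}
glDeleteSync(fence);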
The actual answer is that the OpenCV code checks for errors with glGetError(). If you don't check for (and clear) errors in your own code, cv::ogl::Texture2D::copyFrom() will pick up the stale error and throw an exception.
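A small sketch of that idea (drainGlErrors is a hypothetical helper, not an OpenCV function): drain any errors your own GL calls may have left in the queue before calling into OpenCV, so copyFrom() doesn't report them as its own.

#include <cstdio>

static void drainGlErrors(const char* where)
{
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
        std::fprintf(stderr, "stale GL error 0x%x before %s\n", err, where);
}

// usage, right before the call that throws:
drainGlErrors("copyFrom");
colorImageTexture[LEFT].copyFrom(imageLeft, true);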
So I'm working on a function that works similarly to OpenGL Profiler on OSX, which allows me to extract information about OpenGL's back buffers and what they currently contain. Due to the nature of the function, I do not have access to the application's variables containing the depth buffer IDs and need to rely on the GL functions themselves to provide this information.
I've already got another function to copy the actual FBO content into a normal GL texture, and I have already successfully extracted the normal draw buffers and saved them as image files using a series of glGetIntegerv() calls, as in the (sample) function below.
But I couldn't seem to find a constant/function that could be used to pull the buffer information (e.g. type, texture id) out of the depth buffer (and I've already looked through them a few times), which I'm pretty sure has to be possible, considering it's been done before in other GL profiling tools.
So, this is the first time I've felt the need to ask a question here, and I'm wondering if anyone knows whether this is possible, or whether I really need to catch that value while the application is setting it rather than trying to pull the current value out of the GL context.
// ......
GLint savedReadFBO = GL_ZERO;
GLenum savedReadBuffer = GL_BACK;
glGetIntegerv(GL_READ_FRAMEBUFFER_BINDING, &savedReadFBO);
glGetIntegerv(GL_READ_BUFFER, (GLint *) &savedReadBuffer);

// Try to obtain the current draw buffer
GLint currentDrawFBO;
GLenum currentDrawBuffer = GL_NONE;
glGetIntegerv(GL_DRAW_FRAMEBUFFER_BINDING, &currentDrawFBO);
glGetIntegerv(GL_DRAW_BUFFER0, (GLint *) &currentDrawBuffer);

// We'll temporarily bind the drawbuffer for reading to pull out the current data.
// Bind the draw FBO to the read binding
if (savedReadFBO != currentDrawFBO)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, currentDrawFBO);
}

// Bind the read buffer and copy the image
glReadBuffer(currentDrawBuffer);

// ....commands to fetch actual buffer content here....

// Restore the old read buffer
glBindFramebuffer(GL_READ_FRAMEBUFFER, savedReadFBO);
glReadBuffer(savedReadBuffer);
// .......
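EDIT: for completeness, here is a sketch of the kind of query that looks like it might cover the depth attachment; I haven't been able to verify it in this tool yet, so treat it as an assumption rather than a confirmed approach:

// Query what is attached to the depth attachment of the currently bound draw FBO
GLint depthType = GL_NONE, depthName = 0;
glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                      GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE, &depthType);
if (depthType == GL_TEXTURE || depthType == GL_RENDERBUFFER)
    glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                          GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME, &depthName);
// depthType distinguishes texture vs. renderbuffer attachments; depthName is the object id.
// For the default framebuffer (FBO 0), the attachment parameter is GL_DEPTH rather than
// GL_DEPTH_ATTACHMENT.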
I'm using libgdx to create a program. I need to perform some operations in a framebuffer. For these operations I create a new FrameBuffer, and after the operation I call dispose() on it. After creating the framebuffer about 10 times, the program crashes with the error: frame buffer couldn't be constructed: incomplete dimensions. I looked at the libgdx code and saw that this corresponds to the GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS framebuffer status. Why does this happen? What must I do to fix this problem?
Code:
if (maskBufferer != null) {
    maskBufferer.dispose();
}
maskBufferer = new FrameBuffer(Pixmap.Format.RGBA8888, width, height, true);
mask = createMaskImageMask(aspectRatioCrop, maskBufferer);
...
private Texture createMaskImageMask(boolean aspectRatioCrop, FrameBuffer maskBufferer) {
    maskBufferer.begin();
    Gdx.gl.glClearColor(COLOR_FOR_MASK, COLOR_FOR_MASK, COLOR_FOR_MASK, ALPHA_FOR_MASK);
    Gdx.graphics.getGL20().glClear(GL20.GL_COLOR_BUFFER_BIT);

    float[] coord = null;
    ...

    PolygonRegion polyReg = new PolygonRegion(
            new TextureRegion(new Texture(Gdx.files.internal(texturePolygon))), coord);
    PolygonSprite poly = new PolygonSprite(polyReg);
    PolygonSpriteBatch polyBatch = new PolygonSpriteBatch();
    polyBatch.begin();
    poly.draw(polyBatch);
    polyBatch.end();

    maskBufferer.end();
    return maskBufferer.getColorBufferTexture();
}
EDIT
To summarize, GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS can occur in libgdx when too many FrameBuffer objects are created (without calling .dispose()), possibly because OpenGL runs out of FBO or texture/renderbuffer handles.
If no handle is returned by glGenFramebuffers, then no FBO will be bound when attaching targets or checking the status. Likewise, an attempt to attach an invalid target (from a failed call to glGenTextures) will cause the FBO to be incomplete. Though it seems incorrect to report GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS in either case.
One possibility is that a call to allocate memory for a target, such as glTexImage2D or glRenderbufferStorage, has failed (out of memory). That would leave the dimensions of the target unequal to those of other targets already successfully attached to the FBO, which could then produce the error.
It's pretty standard to create a framebuffer once, attach your render targets and reuse it each frame. By dispose, do you mean glDeleteFramebuffers?
It also looks like there should be a delete maskBufferer; in addition to the maskBufferer.dispose(); call. EDIT: if it were C++, that is.
Given this error happens after a number of frames, it could be many things. Double-check that you aren't creating framebuffers or attachments every frame without deleting them, and running out of objects/handles.
It also looks like GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS is no longer used (see the specs), along the lines of mixed attachment dimensions now being allowed. It still seems worth checking that your attachments are all the same size, though.
A quick way to narrow down which attachment is causing the issue is to comment out half of them and see when the error occurs (or check the status after each attach).
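For reference, a raw-GL sketch of that last idea (outside libgdx; width and height are placeholders, and libgdx's FrameBuffer performs a similar status check internally): check the status after each attachment to see which one introduces the incompleteness.

#include <cstdio>

GLuint fbo = 0, color = 0, depth = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenTextures(1, &color);
glBindTexture(GL_TEXTURE_2D, color);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);
printf("after color attach: 0x%x\n", glCheckFramebufferStatus(GL_FRAMEBUFFER));

glGenRenderbuffers(1, &depth);
glBindRenderbuffer(GL_RENDERBUFFER, depth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);
printf("after depth attach: 0x%x\n", glCheckFramebufferStatus(GL_FRAMEBUFFER));
// GL_FRAMEBUFFER_COMPLETE is 0x8CD5; any other value identifies the failing condition.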
I resolved the problem; dispose() was the fix. I was re-creating the class every time, which is why dispose() was never being called.
I'm experiencing a difficult problem on certain ATI cards (Radeon X1650, X1550, and others).
The message is: "Access violation at address 6959DD46 in module 'atioglxx.dll'. Read of address 00000000"
It happens on this line:
glGetTexImage(GL_TEXTURE_2D,0,GL_RGBA,GL_FLOAT,P);
Note:
Latest graphics drivers are installed.
It works perfectly on other cards.
Here is what I've tried so far (with assertions in the code):
That the pointer P is valid and allocated enough memory to hold the image
Texturing is enabled: glIsEnabled(GL_TEXTURE_2D)
Test that the currently bound texture is the one I expect: glGetIntegerv(GL_TEXTURE_BINDING_2D)
Test that the currently bound texture has the dimensions I expect: glGetTexLevelParameteriv( GL_TEXTURE_WIDTH / HEIGHT )
Test that no errors have been reported: glGetError
It passes all those tests and then still fails with the message.
I feel I've tried everything and have no more ideas. I really hope some GL-guru here can help!
EDIT:
After concluding it is probably a driver bug, I posted about it here too: http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=295137#Post295137
I also tried GL_PACK_ALIGNMENT and it didn't help.
With some more investigation I found that it only happens on textures that I have previously filled with pixels using a call to glCopyTexSubImage2D. So I could produce a workaround by replacing the glCopyTexSubImage2D call with calls to glReadPixels and then glTexImage2D instead.
Here is my updated code:
{
glCopyTexSubImage2D cannot be used here because the combination of calling
glCopyTexSubImage2D and then later glGetTexImage on the same texture causes
a crash in atioglxx.dll on ATI Radeon X1650 and X1550.
Instead we copy to the main memory first and then update.
}
// glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, PixelWidth, PixelHeight); //**
GetMem(P, PixelWidth * PixelHeight * 4);
glReadPixels(0, 0, PixelWidth, PixelHeight, GL_RGBA, GL_UNSIGNED_BYTE, P);
SetMemory(P,GL_RGBA,GL_UNSIGNED_BYTE);
You might take care of GL_PACK_ALIGNMENT. This parameter tells GL the byte alignment used when packing each row of the texture for read-back. I.e., if you have an image 645 pixels wide:
With GL_PACK_ALIGNMENT at 4 (the default value), rows are padded out to 648 pixels.
With GL_PACK_ALIGNMENT at 1, rows stay at 645 pixels.
So ensure that the pack value is ok by doing:
glPixelStorei(GL_PACK_ALIGNMENT, 1);
before your glGetTexImage() call, or align your texture memory to the GL_PACK_ALIGNMENT.
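A minimal sketch of that, using the question's GL_RGBA/GL_FLOAT read-back (with a 16-byte pixel the alignment rarely matters in practice, but saving and restoring the state is cheap insurance):

GLint prevPack = 4;
glGetIntegerv(GL_PACK_ALIGNMENT, &prevPack);     // remember the current packing
glPixelStorei(GL_PACK_ALIGNMENT, 1);             // tightly packed rows into P
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, P);
glPixelStorei(GL_PACK_ALIGNMENT, prevPack);      // restore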
This is most likely a driver bug. Having written 3D APIs myself, it is easy to see how: you are doing something that is weird and rare enough not to be covered by tests, namely converting between float and 8-bit data during the transfer. Nobody is going to optimize that path. The generic CPU conversion function probably kicks in there, and somebody messed up a table that drives the allocation of temporary buffers for it.

You should reconsider what you are doing in the first place: using an external float format with an internal 8-bit format. Conversions like that in the GL API usually point to programming errors. If your data is float and you want to keep it as such, you should use a float texture and not an 8-bit RGBA one. If you want 8 bit, why is your input float?