First of all, sorry for the title; I didn't know what else to call this.
I have found that when I call glCopyImageSubData on a secondary thread, using a secondary context created with GLFW and an explicit version hint, subsequent render calls stop working as they should. In my program I create two textures, texture 2 bigger than texture 1. I upload pixels to texture 1, but for texture 2 I only allocate storage. Then I call glCopyImageSubData, and after that I delete both textures. All of this happens on a secondary context that shares resources with the window's context.
In theory I do not modify anything on the rendering side of things: I just create two textures, copy data between them, and delete them. My rendering loop looks like this:
glViewport(0, 0, width, height);
glClearColor(1F, 0F, 0F, 0.5F);
glClear(GL_COLOR_BUFFER_BIT);
glfwSwapBuffers(window);
Subsequent render calls on the window don't work as they should, and I don't know what is happening or why.
If I do not specify the context version, it works just fine, which is a mystery to me because I do not think it should make a difference... If I specify the version of GL running on my computer, the same context is created as if I did not specify the version, right?
I specify the version like this:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 6);
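For reference, here is a minimal sketch (in C++; the window names, sizes, and the loader are my assumptions, not the project's actual code) of the overall setup described above: a hidden helper window whose context shares with the main window, made current on a worker thread that performs the copy.

#include <GLFW/glfw3.h>
#include <thread>
// assumes a function loader (e.g. GLAD) that provides glCopyImageSubData (GL 4.3+)

int main() {
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 6);
    GLFWwindow *mainWindow = glfwCreateWindow(800, 600, "main", nullptr, nullptr);

    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);  // helper window stays invisible
    GLFWwindow *helper = glfwCreateWindow(1, 1, "", nullptr, mainWindow); // shares with mainWindow

    std::thread worker([helper] {
        glfwMakeContextCurrent(helper);        // secondary context, current on this thread
        GLuint tex[2];
        glGenTextures(2, tex);
        // ... upload pixels to tex[0], allocate storage only for tex[1] ...
        glCopyImageSubData(tex[0], GL_TEXTURE_2D, 0, 0, 0, 0,
                           tex[1], GL_TEXTURE_2D, 0, 0, 0, 0,
                           256, 256, 1);       // placeholder size
        glDeleteTextures(2, tex);
        glFinish();                            // make sure the copy has completed
    });
    worker.join();
    // ... main-thread render loop as shown above ...
}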
Here is a video of the problem:
https://youtu.be/5fc6m1BEKyc
Here is the github with the code: https://github.com/DasBabyPixel/lwjgl-testing
(In the code, mode=1 with disableExplicitVersion=false is what I want to get to work)
My system info, just to confirm things:
EDIT
The copy process works just fine, confirmed by writing the textures to file.
Maybe I was unclear: I am trying to figure out why this is happening and how to fix it. Leaving the version hints out is not an option, because I want to be able to use GLES 3.2.
Goal
I'd like to implement an actual widget for Qt3D since QWidget::createWindowContainer just doesn't cut it for me.
Problem Description
My first approach of letting a new class subclass QWidget and QSurface was not successful since the Qt3D code either expects a QWindow or a QOffscreenSurface in multiple places and I don't want to recompile the whole Qt3D base.
My second idea was to render the Qt3D content to an offscreen surface and then draw the texture on a quad in a QOpenGLWidget. When I use a QRenderCapture framegraph node to save the image rendered to the offscreen texture, load that image into a QOpenGLTexture, and draw it in the QOpenGLWidget's paintGL function, I can see the rendered image - i.e. rendering works properly both in Qt3D and in the OpenGL widget. But this round trip is extremely slow compared to letting Qt3D render directly.
Now, when I use the GLuint returned by the QTexture2D to bind the texture during rendering of the QOpenGLWidget, everything stays black.
Of course this would make sense if the contexts of the QOpenGLWidget and Qt3D were completely separate. But by retrieving the AbstractRenderer from the QRenderAspectPrivate I was able to obtain the context that Qt3D uses. In my main.cpp I set
QApplication::setAttribute(Qt::AA_ShareOpenGLContexts);
The QOpenGLWidget's context and Qt3D's context both reference the same shared context; I verified this by printing both with qDebug, and they are the same object.
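One thing worth checking (a sketch; qt3dContext and widgetContext are stand-ins for the two contexts obtained as described above) is whether the two contexts are actually in the same share group, rather than just comparing pointers:

// QOpenGLContext::areSharing() reports whether two contexts belong to the
// same share group.
if (QOpenGLContext::areSharing(qt3dContext, widgetContext))
    qDebug() << "contexts share resources";
else
    qDebug() << "contexts do NOT share resources";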
Shouldn't this allow me to use the texture from Qt3D?
Or does anyone have other suggestions on how to implement such a widget? I just thought this would be the easiest way.
Implementation Details / What I've tried so far
This is what the paintGL function in my QOpenGLWidget looks like:
glClearColor(1.0, 1.0, 1.0, 1.0);
glDisable(GL_BLEND);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
d->m_shaderProgram->bind();
{
QMatrix4x4 m;
m.ortho(0, 1, 1, 0, 1.0f, 3.0f);
m.translate(0.0f, 0.0f, -2.0f);
QOpenGLVertexArrayObject::Binder vaoBinder(&d->m_vao);
d->m_shaderProgram->setUniformValue("matrix", m);
glBindTexture(GL_TEXTURE_2D, d->m_colorTexture->handle().toUInt());
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
}
d->m_shaderProgram->release();
m_colorTexture is the QTexture2D that is attached to the QRenderTargetOutput/QRenderTarget that Qt3D renders the scene offscreen to.
I have a QFrameAction in place to trigger draw updates on the QOpenGLWidget:
connect(d->m_frameAction, &Qt3DLogic::QFrameAction::triggered, this, &Qt3DWidget::paintGL);
I have verified that this indeed calls the paintGL function. So every time I draw the QOpenGLWidget, the frame should be ready and present in the texture.
I've also tried to replace the m_colorTexture with a QSharedGLTexture. Then I created this texture with the context of the QOpenGLWidget like this
m_texture = new QOpenGLTexture(QOpenGLTexture::Target2D);
m_texture->setFormat(QOpenGLTexture::RGBA8_UNorm);
// w and h are width and height of the widget
m_texture->setSize(w, h);
// m_colorTexture is the QSharedGLTexture
m_colorTexture->setTextureId(m_texture->textureId());
In the resizeEvent function of the QOpenGLWidget I set the appropriate size on this texture and also on all offscreen resources of Qt3D. This also shows just a black screen. Placing qDebug() << glGetError(); directly after binding the texture simply prints 0 every time, so I assume there aren't any errors.
The code can be found here in my GitHub project.
Update (10th May 2021, since I stumbled upon my answer again):
My Qt3DWidget implementation works perfectly now, the issue was that I had to call update() when the frame action was triggered instead of paintGL (duh, silly me, I actually know that).
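For reference, the fixed connection might look like this (a sketch based on the description above; update() schedules a proper repaint, which makes the widget's context current before paintGL runs):

// Schedule a repaint through Qt instead of invoking paintGL() directly.
connect(d->m_frameAction, &Qt3DLogic::QFrameAction::triggered,
        this, [this](float /*dt*/) { update(); });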
Although I didn't find an exact solution to my question, I'll post an answer here, since I succeeded in creating a Qt3D widget.
The code can be found here. It's not the cleanest solution because I think it should be possible to use the shared texture somehow. Instead, now I'm setting the QOpenGLWidget's context on Qt3D for which I have to use Qt3D's private classes. This means that Qt3D draws directly onto the frame buffer bound by the OpenGL widget. Unfortunately, now the widget has to be the render driver and performs manual updates on the QAspectEngine by calling processFrame. Ideally, I would have liked to leave all processing loops to Qt3D but at least the widget works now as it is.
Edit:
I found an example for QSharedGLTexture in the manual tests here. It works the other way round, i.e. OpenGL renders to the texture and Qt3D uses it, so I assume it should be possible to reverse the direction. Unfortunately, QSharedGLTexture seems to be a bit unstable, as resizing the OpenGL window sometimes crashes the app. That's why I'll stick with my solution for now. But if anyone has news regarding this issue, feel free to post an answer!
I've run into a bit of a confusing problem with OpenGL, it's rather simple but I've failed to find any directly related information.
What I'm trying to do
I'm creating several new textures every frame, and right after creation I bind them, use them for drawing, and then delete them right after.
The Problem
If I delete every texture right after it was used, the last one to be drawn replaces the previous ones (but their different geometry works as it should). If I batch my deletions after all drawing has been done, it works as expected; but if I do any draw calls at all after deleting the textures, the texture used in the last draw call replaces the old ones (which could be some common permanent sprite texture).
Results from debugging
I've tried using glFlush(), which didn't seem to do anything at all. Not deleting the textures at all gives the correct behaviour, and not drawing anything between deleting the textures and calling SwapBuffers() also works.
Code
This is not what my code looks like, but this is what the relevant parts boil down to:
GLuint Tex1, Tex2, Tex3;
glGenTextures(1, &Tex1);
glBindTexture(GL_TEXTURE_2D, Tex1);
// ... Fill Texture with data, set correct filtering etc.
glDrawElements(GL_TRIANGLES, ...); // Using Tex1
glGenTextures(1, &Tex2);
glBindTexture(GL_TEXTURE_2D, Tex2);
// ... Fill Texture with data, set correct filtering etc.
glDrawElements(GL_TRIANGLES, ...); // Using Tex2
// I delete some textures here.
glDeleteTextures(1, &Tex1);
glDeleteTextures(1, &Tex2);
// If I comment out this section, everything works correctly
// If I leave it in, this texture replaces Tex1 and Tex2, but
// the geometry is correct for each geometry batch.
glGenTextures(1, &Tex3);
glBindTexture(GL_TEXTURE_2D, Tex3);
// ... Fill Texture with data, set correct filtering etc.
glDrawElements(GL_TRIANGLES, ...); // Using Tex3
glDeleteTextures(1, &Tex3);
// ...
SwapBuffers();
I suspect this might have something to do with OpenGL buffering my draw calls,
so that by the time they are actually processed the textures are already deleted? It doesn't really make sense to me though: why would drawing something else after deleting the previous textures cause this behaviour?
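If that were the case, forcing completion before the deletes should change the behaviour. As a sketch (I have only tried glFlush() so far, not this):

glDrawElements(GL_TRIANGLES, ...); // last draw call that uses the texture
glFinish();                        // block until the GPU has fully executed everything so far
glDeleteTextures(1, &Tex1);        // the texture is provably no longer in use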
More context
The generated textures are text strings that may or may not change each frame. Right now I create a new texture for each string every frame, render with it, and discard it right after. The bitmap data is generated with Windows GDI.
I'm not really looking for advice on efficiency, ideally I want an answer that can quote the documentation on the expected/correct behaviour for rendering using temporary textures like this, as well as possible common gotchas with this approach.
The expected behavior is clear. You can delete the objects as soon as you are done using them. In your case, after you made the draw calls that use the textures, you can call glDeleteTextures() on those textures. No additional precautions are required from your side.
Under the hood, OpenGL will typically execute the draw calls asynchronously. So the texture will still be used after the draw call returns. But that's not your problem. The driver is responsible for tracking and managing the lifetime of objects to keep them around until they are not used anymore.
The clearest statement of this that I found is on page 28 of the OpenGL 4.5 spec:
If an object is deleted while it is currently in use by a GL context, its name is immediately marked as unused, and some types of objects are automatically unbound from binding points in the current context, as described in section 5.1.2. However, the actual underlying object is not deleted until it is no longer in use.
In your code, this means that the driver can't delete the textures until the GPU has completed the draw calls that use them.
Why that doesn't work in your case is hard to tell. One possibility is always that something in your code unintentionally deletes the texture earlier than it should be. With complex software architectures, that happens much more easily than you might think. For example, a really popular cause is that people wrap OpenGL objects in C++ classes, and let those C++ objects go out of scope while the underlying OpenGL object is still in use.
So you should definitely double check (for example by using debug breakpoints or logging) that no code that deletes textures is invoked at unexpected times.
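To illustrate the wrapper pitfall (a hypothetical example, not taken from your code):

// Hypothetical RAII wrapper: the destructor deletes the GL texture at the
// end of the enclosing scope, often earlier than intended.
class Texture {
public:
    Texture()  { glGenTextures(1, &m_id); }
    ~Texture() { glDeleteTextures(1, &m_id); }  // runs when the object goes out of scope!
    GLuint id() const { return m_id; }
private:
    GLuint m_id = 0;
};

GLuint makeLabelTexture() {
    Texture tmp;                                // temporary wrapper
    glBindTexture(GL_TEXTURE_2D, tmp.id());
    // ... upload data ...
    return tmp.id();                            // BUG: the returned name is already stale
}                                               // <- tmp destroyed here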
The other option is a driver bug. While object lifetime management is not entirely trivial, it is so critical that it's hard to imagine it being broken for a very basic case. But it's certainly possible, and more or less likely depending on vendor and platform.
As a workaround, you could try not deleting the texture objects, and only specifying new data (using glTexImage2D()) for the same objects instead. If the texture size does not change, it would probably be more efficient to only replace the data with glTexSubImage2D() anyway.
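A sketch of that workaround (w, h, pixels, and the lastW/lastH bookkeeping are placeholders for your string-bitmap data):

// Reuse one long-lived texture object instead of deleting and recreating it.
glBindTexture(GL_TEXTURE_2D, tex);
if (w != lastW || h != lastH) {
    // size changed: reallocate storage and upload
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    lastW = w; lastH = h;
} else {
    // same size: just replace the contents
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}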
I am trying to create a FrameBuffer with two textures attached to it (multiple render targets). Then at every time step, both textures are cleared and painted, as in the following code. (Some parts are replaced with pseudocode to keep it shorter.)
Version 1
//beginning of the 1st time step
initialize(framebufferID12)
//^ I'm quite sure this is done correctly
//^ Note: there is no glDrawBuffers() call
loop, do once every time step {
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebufferID12);
    //(#1#) a line will be added here in version 2 (see below) <------------
    glClearColor(0.5f, 0.0f, 0.5f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // paint a lot of objects here, using GLSL shaders (.frag, .vert)
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}
All objects are painted correctly to both textures, but only the first texture (ATTACHMENT0) is cleared every frame, which is wrong.
Version 2
I try to insert a line of code ...
glDrawBuffers({ATTACHMENT0,ATTACHMENT1}) ;
at (#1#), and it works as expected, i.e. both textures are cleared.
(image http://s13.postimg.org/66k9lr5av/gl_Draw_Buffer.jpg)
Version 3
Starting from version 2, I move the glDrawBuffers() statement into the framebuffer initialization, like this:
initialize(int framebufferID12){
int nameFBO = glGenFramebuffersEXT();
int nameTexture0=glGenTextures();
int nameTexture1=glGenTextures();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT,nameFBO);
glBindTexture(nameTexture0);
glTexImage2D( .... ); glTexParameteri(...);
glFramebufferTexture2DEXT( ATTACHMENT0, nameTexture0);
glBindTexture(nameTexture1);
glTexImage2D( .... ); glTexParameteri(...);
glFramebufferTexture2DEXT( ATTACHMENT1, nameTexture1);
glDrawBuffers({ATTACHMENT0,ATTACHMENT1}) ; //<--- moved here ---
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT,0);
return nameFBO ;
}
It no longer works (same symptom as version 1). Why?
The OpenGL manual says that "changes to context state will be stored in this object", so the state modification from glDrawBuffers() should be stored in "framebufferID12", right? Then why do I have to call it at every time step (or every time I change the FBO)?
I may be misunderstanding some OpenGL concept; someone please enlighten me.
Edit 1: Thanks j-p. I agree that it makes sense, but shouldn't the state be recorded in the FBO already?
Edit 2 (accept answer): Reto Koradi's answer is correct! I am using a not-so-standard library called LWJGL.
Yes, the draw buffers setting is part of the framebuffer state. If you look at for example the OpenGL 3.3 spec document, it is listed in table 6.23 on page 299, titled "Framebuffer (state per framebuffer object)".
The default value for FBOs is a single draw buffer, which is GL_COLOR_ATTACHMENT0. From the same spec, page 214:
For framebuffer objects, in the initial state the draw buffer for fragment color zero is COLOR_ATTACHMENT0. For both the default framebuffer and framebuffer objects, the initial state of draw buffers for fragment colors other than zero is NONE.
So it's expected that if you have more than one draw buffer, you need the explicit glDrawBuffers() call.
Now, why it doesn't seem to work for you if you make the glDrawBuffers() call as part of the FBO setup, that's somewhat mysterious. One thing I notice in your code is that you're using the EXT form of the FBO calls. I suspect that this might have something to do with your problem.
FBOs have been part of standard OpenGL since version 3.0. If there's any way for you to use OpenGL 3.0 or later, I would strongly recommend that you use the standard entry points. While the extensions normally still work even after the functionality has become standard, I would always be skeptical how they interact with other features. Particularly, there were multiple extensions for FBO functionality before 3.0, with different behavior. I wouldn't be surprised if some of them interact differently with other OpenGL calls compared to the standard FBO functionality.
So, try using the standard entry points (the ones without the EXT in their name). That will hopefully solve your problem.
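As a sketch (texture creation and glTexImage2D calls omitted; tex0 and tex1 stand for your two color textures), the same initialization with core entry points looks like this:

// Core (GL 3.0+) FBO setup with two color attachments; the draw-buffers
// setting is stored as state of the bound FBO.
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex0, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, tex1, 0);

const GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);

glBindFramebuffer(GL_FRAMEBUFFER, 0);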
I am trying to set up a simple function that will make it a lot easier for me to texture-map geometry in OpenGL, but for some reason, when I try to make a skybox, I get a white box instead of the texture-mapped geometry. I think the problem lies within the following code:
void MapTexture (char *File, int TextNum) {
if (!TextureImage[TextNum]){
TextureImage[TextNum]=auxDIBImageLoad(File);
glGenTextures(1, &texture[TextNum]);
glBindTexture(GL_TEXTURE_2D, texture[TextNum]);
glTexImage2D(GL_TEXTURE_2D, 0, 3, TextureImage[TextNum]->sizeX, TextureImage[TextNum]->sizeY, 0, GL_RGB, GL_UNSIGNED_BYTE, TextureImage[TextNum]->data);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
}
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture[TextNum]);
//glTexImage2D(GL_TEXTURE_2D, 0, 3, TextureImage[TextNum]->sizeX, TextureImage[TextNum]->sizeY, 0, GL_RGB, GL_UNSIGNED_BYTE, TextureImage[TextNum]->data);
//glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
}
The big thing I don't understand is why glBindTexture() must come between glGenTextures() and glTexImage2D(). If I place it anywhere else, it screws everything up. What could be causing this? Sorry if it's something simple; I'm brand new to OpenGL.
Below is a screenshot of the white box I am talking about:
+++++++++++++++++++++++++++++++
EDIT
+++++++++++++++++++++++++++++++
After playing around with the code a bit more, I realized that if I add glTexImage2D() and glTexParameteri() after the last glBindTexture(), then all the textures load. Why is it that without these two lines most textures load, yet a few do not? And why do I have to call glTexImage2D() every frame, but only for a few textures?
Yes, order is definitely important.
glGenTextures creates a texture name.
glBindTexture takes a texture name generated by glGenTextures, so it can't be run before glGenTextures.
glTexImage2D uploads data to the currently bound texture, so it can't be run before glBindTexture.
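Put together, the minimal correct order is (a sketch; width, height, and pixels are placeholders):

GLuint tex;
glGenTextures(1, &tex);                       // 1. reserve a texture name
glBindTexture(GL_TEXTURE_2D, tex);            // 2. bind it (this creates the object)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,        // 3. upload to the *bound* texture
             width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);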
The client-side interface to OpenGL is a Big Giant Squiggly State Machine. There are an enormous number of parameters and flags that you can change, and you have to be scrupulous to always leave OpenGL in the right state. This usually means popping matrices you push and restoring flags that you modify (at least in OpenGL 1.x).
OpenGL is a state machine, which means that you can pull its levers and turn its knobs, and it will keep those settings until you actively change them.
However, it also manages its persistent data in objects. Such objects are abstract and must not be confused with the objects seen on the screen!
To the outside, OpenGL identifies objects by their so-called name, a numerical ID. You create a (list of) name(s) – but not the object(s)! – with glGenTextures for texture objects, which are one such kind of OpenGL object.
To manipulate such an object, OpenGL must first be put into a state in which all following calls that operate on objects of that type affect one particular object. This is done with glBindTexture: after calling it, all texture-manipulating calls apply to the texture object you've just bound. If the object didn't exist previously, it is created the first time a newly assigned name is bound.
Now OpenGL uses that particular object.
glTexImage2D is just one of several functions that manipulate the data of the currently bound texture.
Otherwise, your function points in the right direction. OpenGL has no real initialization phase; you just do things as you go along, and it makes sense to defer loading data until you need it. But it also makes sense to iterate over your lists of objects (yours this time, not OpenGL's) before you actually draw a frame, to check whether their data is already loaded. If a significant amount of data is still missing, draw a loading screen instead, so the user doesn't get the impression that your program hangs. You might even carry out lengthy loading operations in a separate thread, but with OpenGL this requires some precautions.
I am using code from this site:
http://www.spacesimulator.net/tut4_3dsloader.html
It works in their example project but when I placed the code into a class for easier and more modular use, the texture fails to appear on the object.
I've double checked to make sure the texture ID is correct by debugging them side by side.
On my project I get a blank white object while the example works fine.
Are there ANY ways to tell what is going on under the hood? Any error functions I can call that might give me a hint at what's happening? Right now I am just guessing. (Yes, I have enabled 2D textures.)
Thanks SO!
call glGetError() and check what it returns
make sure you've called glEnable(GL_TEXTURE_2D);
and make sure your texture is bound using glBindTexture
make sure texture coords are being passed in and that they are right (if they are all the same, or all share the same uninitialized value, you will get one colour across the whole thing)
make sure your texture matrix isn't screwed... if you're not deliberately using it, reset it:
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
make sure the data getting loaded in when you load the texture is right.
if you have mipmapping on, make sure you are loading in the mipmaps; otherwise, with the object at a different zoom, you might not get any texture...
That's all I can think of off the top of my head.
EDIT:
Ooh, I just remembered one that caught me out once:
By changing the structure, you may have changed the initialization order of the app.
MAKE SURE you aren't trying to load textures BEFORE you initialize OpenGL (with the device contexts or whatever; I was on Windows).
Make sure you're uploading a complete texture.
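A mipmap-incomplete texture is a classic cause of plain white geometry: the default GL_TEXTURE_MIN_FILTER involves mipmapping, so if only level 0 was uploaded, the texture is incomplete and sampling it effectively does nothing. Two common fixes, as a sketch:

// Option A: turn off mipmapped filtering so level 0 alone is complete.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// Option B: keep mipmapping but provide the full chain (GL 3.0+).
glGenerateMipmap(GL_TEXTURE_2D);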