I am currently creating a D2DFactory with
D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &Direct2DFactory);
and using Direct2DFactory to create a render target on my main window using:
Direct2DFactory->CreateHwndRenderTarget(
    D2D1::RenderTargetProperties(),
    D2D1::HwndRenderTargetProperties(WindowHandle, size),
    &RenderTarget
);
I am attempting to draw from multiple classes to multiple parts of this window. At the minute, I am holding a list of all of these classes, calling OnRender on every one of them, and passing RenderTarget as a parameter.
Is there a better way to do this? Can I create more than one render target, and then render those render targets with my main RenderTarget?
What's your current problem? Are you tired of passing so many render target parameters? Passing the target around is cheaper than creating multiple render targets. It's just like the D3DDevice object in Direct3D apps: nearly every renderable class needs a device object to manage resources and do rendering work, and the most common way is to pass a device parameter to each class instead of creating multiple devices. So I think you are on the right track.
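A minimal sketch of that pattern, assuming a hypothetical IDrawable interface and a Drawables list (neither name comes from your code):

// Hypothetical interface: every class that draws into the window
// implements OnRender and receives the shared render target.
struct IDrawable
{
    virtual ~IDrawable() = default;
    virtual void OnRender(ID2D1HwndRenderTarget *target) = 0;
};

// In the window's paint handler:
RenderTarget->BeginDraw();
for (IDrawable *drawable : Drawables)   // e.g. a std::vector<IDrawable*>
    drawable->OnRender(RenderTarget);
HRESULT hr = RenderTarget->EndDraw();   // check hr for D2DERR_RECREATE_TARGET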
My application contains 3rd-party code which paints something using OpenGL. All I can do is provide a full-screen surface for it and mouse/touchscreen events. Say I create a full-screen-sized Item and leave it to that 3rd-party renderer. The renderer emits some signals for me (maybe from an arbitrary thread, by means of QMetaObject::invokeMethod) to update the view. How do I block QML from painting into the context temporarily?
How can what is described above be implemented? Is it technically possible to create such a workflow?
Is there a wiser way to achieve this? Maybe using an FBO would be better?
Can I do this asynchronously? I.e., so that the renderer has its own message queue in a separate thread.
You should have a look at QQuickFramebufferObject.
The way it works:
You create a subclass and instantiate it in QML. You should override its
Renderer *QQuickFramebufferObject::createRenderer() const
You create your own subclass of Renderer (http://doc.qt.io/qt-5/qquickframebufferobject-renderer.html).
All the code you put inside its void Renderer::render() will affect the FBO item and be rendered to the screen. Qt is in charge of binding your FBO.
Now, if you need to trigger a repaint from the UI, you call void Renderer::update().
Your third-party renderer should call this method when it needs to draw, but all your OpenGL calls (glDrawArrays, ...) should be written inside the render() function of the Renderer; Qt binds your FBO before calling this method. Maybe your QQuickFramebufferObject should listen to your third-party signals and modify (via some states, or std::functions) the render method of its Renderer component, then call for an update.
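A minimal sketch of the two subclasses, where MyFboItem and MyRenderer are made-up names:

#include <QQuickFramebufferObject>
#include <QOpenGLFunctions>

class MyRenderer : public QQuickFramebufferObject::Renderer,
                   protected QOpenGLFunctions
{
public:
    MyRenderer() { initializeOpenGLFunctions(); }

    void render() override
    {
        // Qt has already bound the FBO; issue GL calls here,
        // e.g. hand control to the third-party renderer.
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
    }

    void synchronize(QQuickFramebufferObject *item) override
    {
        // Copy state from the QML item (GUI thread) to this renderer
        // (render thread) while the render thread is blocked.
        Q_UNUSED(item);
    }
};

class MyFboItem : public QQuickFramebufferObject
{
    Q_OBJECT
public:
    Renderer *createRenderer() const override { return new MyRenderer; }
};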
Here is a good example:
http://blog.qt.io/blog/2015/05/11/integrating-custom-opengl-rendering-with-qt-quick-via-qquickframebufferobject/
Background:
I have a Shader class in my C++/OpenGL 3.1/GLSL/Qt program. My program uses several shaders based on different GLSL source files.
My application can run many different 3D renderers based on the QGLWidget implementation, and each one creates its own shaders.
When I create my first 3D renderer and initialize my shaders, shader IDs are generated with the help of glCreateShader & glCreateProgram, without any problem.
Problem:
But when I create a second 3D renderer, the OpenGL functions retrieving the IDs return exactly the same values, whereas I expect new ones. This means my two renderers will send their data to the same GPU program...
Obviously, the uniform variables in the GPU program get mixed up, and when the second renderer runs, the first one displays a weird rendering.
Indeed, when I close one of the two renderers, all shaders are killed... and the remaining renderer cannot display anything.
Idea?
I'm completely lost, and my logical deduction is that glCreateShader & glCreateProgram return IDs according to their own thread ID. Since QGLWidget probably runs its own thread to call the rendering functions, this may affect the persistence...
Any idea how to solve this problem?
If both your QOpenGLWidgets share the same parent window, then by default they share the same context. If you don't want that, the easiest option is probably to create a new top-level widget (any QWidget with no parent) to hold your second QOpenGLWidget.
Please note that this is different from the older QGLWidget class. From Qt documentation:
When multiple QOpenGLWidgets are added as children to the same top-level widget, their contexts will share with each other. This does not apply for QOpenGLWidget instances that belong to different windows.
This means that all QOpenGLWidgets in the same window can access each other's sharable resources, like textures, and there is no need for an extra "global share" context, as was the case with QGLWidget.
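A minimal sketch of that setup, where MyOpenGLWidget stands for your own QOpenGLWidget subclass:

#include <QWidget>
#include <QVBoxLayout>

// A QWidget with no parent becomes a top-level window, so the
// QOpenGLWidget inside it will not share a context with the first one.
QWidget *secondWindow = new QWidget;    // no parent -> top-level
QVBoxLayout *layout = new QVBoxLayout(secondWindow);
layout->addWidget(new MyOpenGLWidget); // reparented into secondWindow
secondWindow->show();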
There is no problem to solve.
Each QGLWidget has its own OpenGL context. And, unless you are explicitly sharing objects between them, each context has its own separate list of objects.
You can only use an OpenGL object with the OpenGL context that created it. As long as you keep the objects separate, and only use each one with the context that created it, you should be fine.
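For illustration, assuming two unshared QGLWidgets named widgetA and widgetB (hypothetical names), identical numeric IDs refer to two distinct objects:

widgetA->makeCurrent();
GLuint progA = glCreateProgram();   // may return 1

widgetB->makeCurrent();
GLuint progB = glCreateProgram();   // may also return 1 -- a different object

// Always make the owning context current before touching the object:
widgetA->makeCurrent();
glUseProgram(progA);                // valid only while widgetA's context is current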
I have the following problem:
I want to build an application composed of many views which render a common OpenGL scene from different points of view, with different illumination and other options.
Basically, my question is: what is the best way to do that with Qt?
My first attempt was to create multiple QOpenGLWidgets and a common QOpenGLContext in which I stored the textures, but also the meshes and shaders.
But it didn't work for meshes, because Vertex Array Objects do not seem to be shareable.
After a lot of tries, a possible solution is to store one VAO for each widget that needs the mesh, but this looks really awful.
So I wonder if there is a good alternative for this kind of problem, or maybe good documentation to understand how these QOpenGLContexts work.
The simplest idea I've imagined is to create only one QOpenGLContext and use it in the different widgets. But I don't know how to create a QOpenGLContext on its own, nor what kind of QWidget is able to display these renderings.
It's my first post, so I don't know if it's clear enough or if I need to describe my whole architecture.
You already tried shared contexts, so I'll pass over that option.
An OpenGL context is bound to a window: if you want only one context, the straight answer is to have only one window.
Using the widgets module, you can have multiple views of the same scene using multiple viewports in the same QOpenGLWidget. Something like:
void myWidget::paintGL() {
    //...
    glViewport(
        0, 0,
        this->width()/2, this->height()/2
    );
    // draw scene from one point of view
    glViewport(
        this->width()/2, this->height()/2,
        this->width()/2, this->height()/2
    );
    // draw scene from another point of view
    //...
}
You should probably design a viewport class to store and manage the rendering parameters for each viewport.
The drawback is that you will have to detect which viewport the user is clicking in to handle interactions: some kind of "if event.pos.x is between 0 and this->width()/2...", as in the sketch below.
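A rough hit-test sketch in the widget's mouse handler (two viewports assumed, as in paintGL() above):

void myWidget::mousePressEvent(QMouseEvent *event)
{
    // Decide which region of the widget was clicked (extend for more viewports).
    const bool left = event->pos().x() < this->width() / 2;
    const bool top  = event->pos().y() < this->height() / 2;
    // Route the interaction to the viewport object covering that region...
}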
Another way could be to drop the widgets module and use Qt Quick and QML: a Quick window declares a unique OpenGL context, and each Quick item is like a viewport, but encapsulated in its own object, so you don't have to work out where the user is interacting.
Inherit QQuickItem instead of QOpenGLWidget and export your class to QML using the qmlRegisterType() macro. You can then create a QQuickView in your program to load the QML code where you declare your items. There is an example of this in Qt's documentation.
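A minimal sketch of the registration, where MyItem stands for your QQuickItem subclass and "MyViews" is a made-up module name:

#include <QGuiApplication>
#include <QQuickView>
#include <QtQml>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    // Make the item usable from QML as "MyItem" in module "MyViews".
    qmlRegisterType<MyItem>("MyViews", 1, 0, "MyItem");

    QQuickView view;
    view.setSource(QUrl("qrc:/main.qml"));  // the QML file declares the MyItem viewports
    view.show();
    return app.exec();
}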
I think that since multiple views/surfaces can update independently, it is unfortunately not possible to have one single QOpenGLContext do the job. And sharing contexts has the limitations you already pointed out in your question.
QOpenGLContext can be moved to a different thread with moveToThread().
Do not call makeCurrent() from a different thread than the one to which the QOpenGLContext object belongs. A context can only be current in one thread and against one surface at a time, and a thread only has one context current at a time.
Link: http://doc.qt.io/qt-5/qopenglcontext.html
So one way you can get it working is to have the views update independently but in sequential order: make the context current on each view one by one, and render before moving on to the next view. This guarantees that the context is current on only one view at any given time. Perhaps use a QMutex to serialize the updates.
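A rough sketch of that sequential scheme; context, views, renderView() and renderMutex are assumed names, not a definitive implementation:

QMutexLocker lock(&renderMutex);    // serialize all view updates
for (QWindow *view : views) {
    context->makeCurrent(view);     // current against one surface at a time
    renderView(view);               // issue the GL calls for this view
    context->swapBuffers(view);
    context->doneCurrent();
}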
Alternatively you can also pass the context around among threads and serialize their updates, but this is a bad approach.
Goal:
In my application I'm trying to implement multiple viewports to allow the user to view a scene from multiple perspectives. Each of my viewports needs to be able to switch between wireframe, shaded, lighting, etc. I can currently render from different perspectives in each viewport, but I have issues.
Problem:
When I try to set various settings such as glPolygonMode() or qglClearColor() within any viewport, these settings only seem to apply to a single viewport, generally the very last one that was created. This isn't a signals/slots issue, since these connections are handled internally within each widget and cannot be mixed up between widgets.
Attempts at solving the problem:
Since I'm using Qt as the library for managing all UI-related things, I'm sure Qt has taken care of a lot of the work of creating and setting up each OpenGL instance for me, so there may be things I'm overlooking that I don't know about.
I've checked the constructors available for QGLWidget and seen that a QGLWidget can take another QGLWidget as a "sharedwidget", and also a QGLContext object.
I currently use the "sharedwidget" route, because without it, for some reason, I can't get textures to bind in more than one viewport. However, this doesn't solve the problem of not being able to switch between wireframe and shaded in each QGLWidget instance.
I've also tried the QGLContext route. By default each QGLWidget creates a new context anyway, but when trying to assign new ones or share a single context between all of them, I would just get issues with my shaders not linking (I believe the initializeGL slot is not getting called in that case), leading to a crash every time a context is shared to another QGLWidget:
ASSERT: "QOpenGLFunctions::isInitialized(d_ptr)" in file
c:\work\build\qt5_workdir\w\s\qtbase\include\qtgui../../src/gui/opengl/qopenglfunctions.h,
line 2018
Details:
Currently, my application has the following hierarchy:
Application
  Window
    ViewportWidget [dynamic array]
      QGLWidget (custom variation)
The only thing each QGLWidget needs to share is the pointer to the current "map", so that each can render the map based on whatever settings are set within that particular widget's instance.
I perform the following steps when setting up a viewport:
I create a new ViewportWidget, parent it, and add it to the appropriate frame and layout. If the viewport isn't the first one, I also pass in the very first QGLWidget to be used as a "sharedwidget".
The viewport then creates a QGLFormat with a swap interval of 1 and passes that format into the constructor of a new QGLWidget.
I am then forced to call makeCurrent() for the viewport, otherwise I crash with the reason:
ASSERT: "false" in file qgl.cpp, line 122
Is it even possible to have separate QGLWidgets with different polygon modes or clear colors? I'm just worried that I'm doing something wrong that will bite me in the butt later on, which I want to avoid.
I want to render a scene and display it on the monitor, while rendering another one to a texture.
Do I need to create two swapchains? How do I create the second swapchain in this case? I tried to call CreateSwapChainForCoreWindow but got memory access exceptions.
Swapchains are really just for displaying stuff.
To render to something else, you have to bind a render target view to the pipeline via the OMSetRenderTargets() call on the device context. You can create render target views via CreateRenderTargetView(), which takes a resource as input. Textures are resources too... you just have to create them with the D3D11_BIND_RENDER_TARGET flag.
Those are just a few cues that should point you in the right direction.
Btw, swapchains have buffers, which are resources that are used to create render target views as well. That's how you render to a swapchain; the rendering itself really doesn't have anything to do with "swapchains" at all.
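A minimal sketch of the texture route, assuming device (ID3D11Device*) and context (ID3D11DeviceContext*) already exist:

// Create a texture usable both as a render target and as a shader resource.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = 1024;
desc.Height           = 1024;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D *texture = nullptr;
device->CreateTexture2D(&desc, nullptr, &texture);

ID3D11RenderTargetView *rtv = nullptr;
device->CreateRenderTargetView(texture, nullptr, &rtv);

// Render the off-screen scene into the texture.
context->OMSetRenderTargets(1, &rtv, nullptr);
const float clearColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
context->ClearRenderTargetView(rtv, clearColor);
// ... draw calls ...

// Afterwards, bind the swapchain's back-buffer RTV again to draw to the monitor.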