How do I access the associated Renderer object from within a QQuickFramebufferObject? (C++)

I tried to store it as a member pointer on creation, so it can be accessed later:
QQuickFramebufferObject::Renderer* MyItem::createRenderer() const {
    m_renderer = new MyItemRenderer(this);  // won't compile: createRenderer() is const
    return m_renderer;
}
... but this doesn't work: Qt requires createRenderer() to be a const method, so I can't assign to m_renderer inside it. I could declare m_renderer mutable, but that's a hack, and it risks breaking assumptions in Qt's internals.
Is there a proper way?

I thought of a way:
In MyItemRenderer::synchronize, set the item's renderer pointer to this, as sketched below. I don't like this very much, because it abuses synchronize, but it's certainly much better than mutable.
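A minimal sketch of that approach, assuming m_renderer is a plain (non-mutable) pointer member of MyItem and that MyItem makes it accessible to the renderer (e.g. via a friend declaration or a setter):

void MyItemRenderer::synchronize(QQuickFramebufferObject *item)
{
    // synchronize() runs on the render thread while the GUI thread is
    // blocked, so writing back into the item here is safe.
    MyItem *myItem = static_cast<MyItem *>(item);
    myItem->m_renderer = this;   // hypothetical back-pointer member

    // ... copy whatever state the renderer needs out of the item ...
}

Since Qt may recreate the renderer behind your back (see the documentation quote below about device pixel ratio changes), the pointer is refreshed on every synchronize(); just make sure nothing on the GUI side dereferences it after the old renderer has been destroyed.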

What about this?
QQuickFramebufferObject::Renderer* MyItem::createRenderer() const {
    return new MyItemRenderer();
}
More information here. At the end of that page, the following is stated:
However, there is a special case when the FBO has to get recreated regardless: when moving the window to a screen with a different device pixel ratio. For example moving a window between a retina and non-retina screen on OS X systems will inherently need a new, double or half sized, framebuffer, even when the window dimensions are the same in device independent units. Just like with ordinary resizes, Qt is prepared to handle this by requesting a new framebuffer object with a different size when necessary. One possible pitfall here is the applications’ caching of the results of the factory functions: avoid this. createFramebufferObject() and createRenderer() must never cache their return value. Just create a new instance and return it. Keep it simple.

Related

Recreating swapchain vs using dynamic state on resize

I have a Vulkan application that uses GLFW as its window manager.
Upon a window resize event, Vulkan needs to update its drawable area. I have seen two ways this can be done: one is to recreate the swapchain alongside all of the other objects that are tied to it, and the other is to use dynamic state for the viewport so that recreation is not needed.
What is the difference between these two and when should I prefer one over the other?
If the window is resized to a smaller size, the display engine may not force you to change your swapchain image sizes. It may inform you of this through the VK_SUBOPTIMAL_KHR error code (though it may not give you even that if performance of presenting is not affected). However, if the window is resized to be larger, the display engine may throw VK_ERROR_OUT_OF_DATE_KHR. That is not something you can ignore. Nor is it something a display engine can promise to never give you.
This means your code must be able to do swapchain rebuilding. Since you have to make allowances for this regardless, the only question is whether you do it whenever the window is resized or just when the display engine forces you to.
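The rebuild path is the standard acquire/present pattern; as a sketch, where recreateSwapchain() is a placeholder for your own helper that tears down and recreates the swapchain plus everything derived from it:

VkResult res = vkQueuePresentKHR(presentQueue, &presentInfo);
if (res == VK_ERROR_OUT_OF_DATE_KHR || res == VK_SUBOPTIMAL_KHR) {
    // The same codes can come out of vkAcquireNextImageKHR; either way,
    // wait for the device to go idle, then rebuild.
    vkDeviceWaitIdle(device);
    recreateSwapchain();
} else if (res != VK_SUCCESS) {
    // a real error, not a resize
}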
I would say that, if the display engine doesn't make you rebuild the swapchain, then it's probably faster not to. Using dynamic state isn't particularly slower than pipeline state, and it's not like you're going to be changing it mid-frame. Indeed, you shouldn't rebuild all of your pipelines just because the swapchain was resized, so you should be using dynamic state for the viewport anyway.
In short: you ought to do both.
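Concretely, the dynamic-state half looks something like this sketch (swapchainExtent and commandBuffer are assumed to exist elsewhere): mark the viewport and scissor dynamic when the pipeline is built, then set them at record time:

const VkDynamicState dynamicStates[] = {
    VK_DYNAMIC_STATE_VIEWPORT,
    VK_DYNAMIC_STATE_SCISSOR,
};
VkPipelineDynamicStateCreateInfo dynamicInfo = {};
dynamicInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO;
dynamicInfo.dynamicStateCount = 2;
dynamicInfo.pDynamicStates = dynamicStates;
// point VkGraphicsPipelineCreateInfo::pDynamicState at &dynamicInfo

// Each frame, while recording commands:
VkViewport viewport = {0.0f, 0.0f,
                       (float)swapchainExtent.width,
                       (float)swapchainExtent.height,
                       0.0f, 1.0f};
VkRect2D scissor = {{0, 0}, swapchainExtent};
vkCmdSetViewport(commandBuffer, 0, 1, &viewport);
vkCmdSetScissor(commandBuffer, 0, 1, &scissor);

This way a resize only ever forces a swapchain rebuild, never a pipeline rebuild.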

Drawing to Multiple Windows Using Vulkan

I am trying to create an application that can dynamically create additional windows. Each window will be drawn to using Vulkan, and I know this means that each window will have to contain its own swap chain resources (image views, framebuffers, etc.) and graphics pipeline (as it references the swap chain's extent). I am wondering whether each window will also have to remember its own present queue family, or whether I can assume that the same queue family can be used for each window. Specifically, to find a present queue family you need to find out if a particular queue family supports surface presentation using:
VkResult vkGetPhysicalDeviceSurfaceSupportKHR(
    VkPhysicalDevice physicalDevice,
    uint32_t queueFamilyIndex,
    VkSurfaceKHR surface,
    VkBool32* pSupported);
This requires a VkSurfaceKHR and thus the HWND and HINSTANCE of a particular window, but I'm not sure if the present queue family is likely to change between different windows created by the same operating system or if I can safely use the same one for each window.
Similarly, while reviewing swap chain recreation within the vulkan-tutorial, I read that VkSurfaceFormatKHR::format rarely changes during window resize and that this is the only reason the render pass needs to be reconstructed during the window resizing operation. How safe would it be to skip the render pass recreation in this step during window resizing, and how well could the same render pass be used for different windows?
If each window uses a similar graphics pipeline, more specifically uses the same synchronization objects, would it be typical to have each window append to the same command buffer and use a single vkQueueSubmit? I only ask because you need to create a command buffer for each frame in flight, so the number of command buffers required would be numWindows * numFramesInFlight, which feels excessive; but I'm not sure it would be any different from a single large command buffer (appended to by each window) per frame in flight.
As an aside, resources for drawing to multiple windows using Vulkan seem to be fairly scarce, so if anyone knows of any good ones I would greatly appreciate it.
On Windows you can largely assume everything can render to everything, but you should check that it is so anyway. vkGetPhysicalDeviceWin32PresentationSupportKHR does not need a surface, and it gives a strong hint that the device/queue is presentation-capable, and not e.g. a compute accelerator or something.
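So a cheap per-window check at surface-creation time is still worthwhile; a sketch, assuming physicalDevice, queueFamilyIndex and surface are already in hand:

VkBool32 supported = VK_FALSE;
vkGetPhysicalDeviceSurfaceSupportKHR(physicalDevice, queueFamilyIndex,
                                     surface, &supported);
if (supported != VK_TRUE) {
    // Rare on desktop, but legal: search the other queue families
    // for one that can present to this particular surface.
}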
Similarly while reviewing swap chain recreation within the vulkan-tutorial I read that VkSurfaceFormatKHR::format rarely changes during window resize and that is the only reason the render pass needs to be reconstructed
It is not supposed to ever change for the lifetime of the physical device and surface. If it could change, that would be a TOCTOU problem.
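In other words, query the supported formats once when the surface is created and keep the result; a sketch:

uint32_t count = 0;
vkGetPhysicalDeviceSurfaceFormatsKHR(physicalDevice, surface, &count, nullptr);
std::vector<VkSurfaceFormatKHR> formats(count);
vkGetPhysicalDeviceSurfaceFormatsKHR(physicalDevice, surface, &count, formats.data());
// pick one, build the render pass from it, and reuse both for the
// lifetime of the surface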
If each window uses a similar graphics pipeline, more specifically uses the same synchronization objects, would it be typical to have each window append to the same command buffer and use a single vkQueueSubmit?
Why not? Mind you, there is nothing "typical" about this. But if it can be done, then it should probably be done. Otherwise, if the windows are unrelated, then they should probably each have their own private logical device (or even instance).
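Note that vkQueuePresentKHR already accepts an array of swapchains, so "one submit, one present" for all windows is directly expressible. A sketch, with the semaphore, fence and command buffer assumed to be created elsewhere:

VkSubmitInfo submit = {};
submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
submit.commandBufferCount = 1;
submit.pCommandBuffers = &frameCmdBuf;   // records every window's passes
submit.signalSemaphoreCount = 1;
submit.pSignalSemaphores = &renderDone;
vkQueueSubmit(graphicsQueue, 1, &submit, frameFence);

VkSwapchainKHR swapchains[] = { swapchainA, swapchainB };
uint32_t imageIndices[]     = { imageIndexA, imageIndexB };
VkPresentInfoKHR present = {};
present.sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
present.waitSemaphoreCount = 1;
present.pWaitSemaphores = &renderDone;
present.swapchainCount = 2;
present.pSwapchains = swapchains;
present.pImageIndices = imageIndices;
vkQueuePresentKHR(presentQueue, &present);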
As an aside, resources for drawing to multiple windows using Vulkan seem to be fairly scarce
Lots of resources for Vulkan are "scarce". That is because Vulkan is like Lego: once you know what the individual pieces do, you can build whatever you want without needing outside help. Drawing to multiple windows is no different from drawing to a single window, except you do it multiple times.

Should I use a singleton?

I am planning on making a 3D scene editor app using C++ and OpenGL. I have to keep track of the currently loaded project and the different scenes it contains, as well as user preferences and other things. The best solution I can think of is to wrap them in a context class which will be a singleton. Is there a better way of doing this?
Probably not a good idea. One reason is technical: the actual OpenGL context's lifetime is limited and far shorter than the app's, and it is usually tied to the surface you're outputting to. You would need to initialize the context after your visualizing window is ready and de-initialize it before the window is gone; trying to do so after the window is gone may lead to undefined behaviour, depending on the platform. In some cases you might need several contexts.
Another reason is that it doesn't look like a proper separation of responsibilities. User settings aren't part of the context, and some may affect only a single render pass (out of several). You would likely have Preferences, a Renderer (which would be an interface to the context manager), Geometries and Textures (or Materials) kept separate, and a Scene Manager as well (think of the scene tree in Blender or DAZ Studio: each item in the scene can have its own user settings regarding how to visualize it), as sketched below.
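A sketch of that separation, with plain objects owned by the application and passed explicitly instead of a global context singleton (all names here are illustrative):

#include <memory>

struct Preferences { /* user settings, persisted independently */ };
class SceneManager { /* current project and its scene tree */ };

class Renderer {
public:
    // Construct only once the output window/surface exists,
    // and destroy before the window goes away.
    explicit Renderer(/* window handle */);
    ~Renderer();   // releases the GL context while the window is alive
    void drawScene(const SceneManager &scenes, const Preferences &prefs);
};

class EditorApp {
    Preferences  prefs_;
    SceneManager scenes_;
    std::unique_ptr<Renderer> renderer_;   // lifetime tied to the window
};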

How to share OpenGL context or data?

I need to share data (textures, vertex buffers, ...) across all OpenGL widgets in an application.
The code below isn't working.
I've found some solutions that use one main QGLWidget from which the others are constructed. Unfortunately, I can't use this approach, because all my QGLWidgets are equal and it is almost certain that the first (main) QGLWidget will be destroyed before the others.
Possible approach:
single shared OpenGL context between all QGLWidgets
not working: only one QGLWidget gets rendered correctly; the others behave as if they weren't rendered at all, showing corrupted/random data
error printed for each QGLWidget construction except the first:
QGLWidget::setContext: Context must refer to this widget
Another approach:
a main OpenGL context, with a sub-context created for each QGLWidget
not working: context->isSharing() returns false
the code I use for context creation; context1 and context2 are later passed to the constructors of the QGLWidgets:
QGLContext *mainContext = new QGLContext(format), *context1, *context2;
mainContext->create();
context1 = new QGLContext(format);
context1->create(mainContext);
context2 = new QGLContext(format);
context2->create(mainContext);
cout << mainContext->isSharing() << " " << context1->isSharing() << endl;
With regards to the first approach, you are not setting up sharing but trying to force the same context to be used with different QGLWidgets. As pointed out above, this is wrong and will not work.
Instead, create the QGLWidgets normally and pass the first QGLWidget in the shareWidget parameter when creating the others. This way you will get a separate context for each QGLWidget but they will all share with the context of the first one (and thus with each other). See http://qt-project.org/doc/qt-4.8/qglwidget.html#QGLWidget
Destroying the first widget before the others should not be an issue, since the shared objects stay around as long as any of the sharing contexts is alive.
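In code, the sharing setup is just the shareWidget parameter of the Qt 4 QGLWidget constructor; a sketch, assuming format and the parent widgets already exist:

QGLWidget *first  = new QGLWidget(format, parentA);
// Each later widget gets its own context, shared with first's context.
QGLWidget *second = new QGLWidget(format, parentB, first);
QGLWidget *third  = new QGLWidget(format, parentC, first);
Q_ASSERT(second->isSharing() && third->isSharing());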
I realize that it has been almost a year since this question was asked, but I believe the comment above may be inaccurate.
To be more precise, while it may be indeed invalid to use a single QGLContext with multiple QGLWidgets, this would be a limitation of Qt's OpenGL implementation rather than a limitation of OpenGL or the windowing system. It certainly seems valid to use the same context to render to multiple windows. For example, the functions wglMakeCurrent and SwapBuffers accept as parameters device handles alongside OpenGL context handles. To quote the wglMakeCurrent documentation:
The hdc parameter must refer to a drawing surface supported by OpenGL.
It need not be the same hdc that was passed to wglCreateContext when
hglrc was created, but it must be on the same device and have the same
pixel format.
I do not even want to go into problems with SwapBuffers, since there are several bug reports all over the web regarding Qt5, which seems to force making the OpenGL context current unnecessarily before SwapBuffers is called.
This has changed as of Qt 5.4: you should now use QOpenGLWidget instead of QGLWidget. Global sharing of contexts is built into QOpenGLWidget, so you don't have to code it yourself. You just need to set the Qt::AA_ShareOpenGLContexts attribute before you create the QGuiApplication.
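A minimal sketch of that setup:

#include <QApplication>
#include <QOpenGLWidget>

int main(int argc, char *argv[])
{
    // Must be set before the application object is constructed.
    QCoreApplication::setAttribute(Qt::AA_ShareOpenGLContexts);
    QApplication app(argc, argv);

    QOpenGLWidget w1, w2;   // their contexts now share resources
    w1.show();
    w2.show();
    return app.exec();
}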

Best Practice in Cocos2d

I am at the start of my cocos2d adventure and have some ideological questions to ask. I am making a small space-shooter game; am I right to use the following class structure?
Scene
Background Layer
Infinite parallax background
Game Layer
Space ships
Bullets
Control Layer
Joystick
Buttons
and a follow-up question: what is the best practice for accessing objects in other layers? For example, when the joystick is updated, it must rotate the space ship and move the background, both of which are in other layers. Is there some recommended way to go about this, or should I simply get the desired objects by tag and operate on them?
Cocos is a big singleton-based system, which may not appeal to some developers, but it is often used in Cocos apps and is the fundamental architecture of the framework. If you have one main scene and many subsequent layers added to that scene, and you want controls from one layer to affect sprites or logic on other layers, there really is nothing wrong with making your main scene a singleton and sending the information from the joystick layer back to the scene, which then manipulates the other layers or sprites. I do this all the time, and the technique is used in countless Cocos tutorials in books and online, so you can feel that you aren't breaking too many rules if you do it this way (and it's also quite easy to do).
If you instead choose to use pointers in one layer to send data to other layers, this can get you into a lot of trouble since one node should never own another node that it doesn't have a specific parent-child relationship with. Doing so can cause crashes and problems with the native Cocos cleanup methods when you remove scenes later, and potentially leak memory. You could use a weak reference in such a case instead, but that is still dependent on one layer expecting another layer to always be around, which may not be the case.
Sending data back to the main game scene to then dispatch and use accordingly is really efficient.
This seems like a perfectly reasonable way to arrange your objects; it is the method I use.
For accessing objects, I would keep an explicit reference to the object as a member variable and use it directly. (Using tags isn't a bad option, I just find it can get a little messy).
@interface Class1 : NSObject
{
    CCLayer *backgroundLayer;
    CCLayer *contentLayer;
    CCLayer *hudLayer;
    CCSprite *objectIMayNeedToUseOnBackgroundLayer;
    CCNode *objectIMayNeedToUseOnContentLayer;
}
@end
Regarding tags, one method I use to make sure the tag numbers I assign are unique is to define an enum as follows:
typedef enum
{
    kTag_BackgroundLayer = 100,
    kTag_BackgroundImage,
    kTag_GameLayer = 200,
    kTag_BadGuy,
    kTag_GoodGuy,
    kTag_Obstacle,
    kTag_ControlLayer = 300,
    kTag_Joystick,
    kTag_Buttons
} NodeTag;
Most of the time I'll also set the zOrder and tag properties of CCNodes (i.e. CCSprites, CCLabelTTFs, etc.) to the same value, so you can actually use the enum to define your zOrder, too.