Qt3D: Request rendering - OpenGL

My GPU was using too many resources, which is why I set the QRenderSettings render policy to OnDemand. That works well when I render my 3D scene in a Qt3DWindow.
self.renderSettings.setRenderPolicy(self.renderSettings.OnDemand)
But this is not the case when I render my scene into my custom OffscreenSurface. In another class:
self.renderCapture = Qt3DRender.QRenderCapture(
    self.offscreenFrameGraph.getRenderTargetSelector())
self.renderCapture.requestCapture()
Sadly, requestCapture() does not generate an image while OnDemand is set.
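One possible workaround, shown here as a sketch only (written against the C++ API, which the Python bindings mirror, and not verified against an offscreen setup): switch the render policy to Always just for the capture, then restore OnDemand once the capture reply completes.
Qt3DRender::QRenderCaptureReply *reply = renderCapture->requestCapture();
renderSettings->setRenderPolicy(Qt3DRender::QRenderSettings::Always); // temporarily force frames
QObject::connect(reply, &Qt3DRender::QRenderCaptureReply::completed, [=]() {
    reply->image().save("capture.png"); // the captured frame as a QImage
    renderSettings->setRenderPolicy(Qt3DRender::QRenderSettings::OnDemand); // back to on-demand
    reply->deleteLater();
});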

Related

Amortize rendering of Direct2D UI overlay for DirectX12

My experience with D3D11on12 and Direct2D hasn't been that good. Infrequently I get
D3D12 ERROR: ID3D12Device::RemoveDevice: Device removal has been triggered for the following reason (DXGI_ERROR_ACCESS_DENIED: The application attempted to use a resource it does not have access to. This could be, for example, rendering to a texture while only having read access.). [ EXECUTION ERROR #232: DEVICE_REMOVAL_PROCESS_AT_FAULT]
when I render to the swap chain backbuffer. There are lag spikes as well. And on top of all this, I think amortizing the "UI" will be needed when I try to push the frame rate.
Synchronization between the UI and the actual scene doesn't really matter, so I can happily just use whatever UI Direct2D has most recently finished rendering.
So I would like to use Direct2D to render the UI on a transparent D3D11on12 bitmap (i.e. one created by using CreateBitmapFromDxgiSurface with the ID3D11Resource from ID3D11On12Device::CreateWrappedResource), and then render this overlay to the swapchain backbuffer.
The problem is I don't really know anything about the 3D pipeline, as I do everything with compute shaders/DirectML + CopyTextureRegion or Direct2D. I suppose this is a pretty simple question about how to do alpha blending.
I suppose that to do alpha blending you have to use the 3D pipeline. Luckily, DirectXTK12 has a tutorial on this topic that is reasonably trivial: https://github.com/Microsoft/DirectXTK12/wiki/Sprites-and-textures
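For reference, a minimal sketch along the lines of that tutorial, assuming you already have an SRV descriptor (overlaySrvGpuHandle here, a placeholder name) for the wrapped overlay texture; SpriteBatch defaults to premultiplied alpha blending, which matches what Direct2D produces:
#include "SpriteBatch.h"
#include "ResourceUploadBatch.h"
#include "RenderTargetState.h"

// One-time setup: build a SpriteBatch whose PSO defaults to premultiplied alpha.
DirectX::ResourceUploadBatch upload(device);
upload.Begin();
DirectX::RenderTargetState rtState(backBufferFormat, DXGI_FORMAT_UNKNOWN);
DirectX::SpriteBatchPipelineStateDescription pd(rtState); // default blend = AlphaBlend
auto spriteBatch = std::make_unique<DirectX::SpriteBatch>(device, upload, pd);
upload.End(commandQueue).wait();

// Per frame, after the scene has been rendered to the backbuffer:
spriteBatch->SetViewport(viewport);
spriteBatch->Begin(commandList);
spriteBatch->Draw(overlaySrvGpuHandle,                        // SRV of the D2D overlay
                  DirectX::XMUINT2(overlayWidth, overlayHeight),
                  DirectX::XMFLOAT2(0.f, 0.f));               // top-left, unscaled
spriteBatch->End();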

Oculus Rift / Vulkan : Write to swapchain with a compute shader

I would like to write to the swapchain generated by OVR with a compute shader.
The problem is that the images don't have the usage VK_IMAGE_USAGE_STORAGE_BIT.
The creation of the swapchain is done with ovr_CreateTextureSwapChainVk, which asks for a BindFlags field. I added the flag ovrTextureBind_DX_UnorderedAccess, but the images still don't have the correct usage.
The problem is that the images don't have the usage VK_IMAGE_USAGE_STORAGE_BIT.
Then you cannot write to a swapchain image directly with a compute shader.
The display engine that provides the swapchain images has the right to decide how you may and may not use them. The only method of interaction which is required is the ability to use them as a color render target; everything else is optional.
So you will have to do this another way, perhaps by writing to an intermediate image and copying/rendering it to the swapchain image.
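For illustration, a sketch of the intermediate-image route, assuming the swapchain images can at least serve as transfer destinations; cmd, storageImage, swapchainImage, width/height and the pipeline/descriptor objects are placeholders:
// 1. Dispatch the compute shader into an image created with VK_IMAGE_USAGE_STORAGE_BIT.
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, computePipeline);
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, pipelineLayout,
                        0, 1, &descSet, 0, nullptr);
vkCmdDispatch(cmd, (width + 7) / 8, (height + 7) / 8, 1);

// 2. Make the compute writes visible to the transfer stage and switch layouts.
VkImageMemoryBarrier toSrc{VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER};
toSrc.srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT;
toSrc.dstAccessMask = VK_ACCESS_TRANSFER_READ_BIT;
toSrc.oldLayout = VK_IMAGE_LAYOUT_GENERAL;
toSrc.newLayout = VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL;
toSrc.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
toSrc.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
toSrc.image = storageImage;
toSrc.subresourceRange = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1};
vkCmdPipelineBarrier(cmd, VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
                     VK_PIPELINE_STAGE_TRANSFER_BIT, 0,
                     0, nullptr, 0, nullptr, 1, &toSrc);

// 3. Copy the result into the swapchain image (transitioned to TRANSFER_DST_OPTIMAL elsewhere).
VkImageCopy region{};
region.srcSubresource = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1};
region.dstSubresource = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1};
region.extent = {width, height, 1};
vkCmdCopyImage(cmd, storageImage, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
               swapchainImage, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &region);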

How to send QImage to Qt3D Entity from C++ to QML for using it as texture?

I need to change the texture of a plane in a 3D scene. In the BackEnd class in C++ I create a new QImage to set on a texture. I want to send it as a signal to my QML and assign it there to the plane's material property.
But it looks like TextureMaterial etc. can only use a URL path to a texture file. I can't save my images to my hard drive to use them as a URL path; that would take too long. I need to change my texture 20+ times a second, and there are other things this program should do at the same time.
Is there any way to do this?
You can have a look at my implementation of a background image in Qt3D.
The way I achieved what you are looking for is by using the classes QTextureMaterial, QTexture2D and QPaintedTextureImage. I had to flip the image before drawing it, which is why I subclassed QPaintedTextureImage, but this might also be just what you need. Instead of loading the texture from disk, like I did, you could set the QImage on your subclass as an attribute. You only need to make the C++ class available to QML and set it on your plane (maybe you could even use PaintedTextureImage directly and write the image painting in its JavaScript-style language).
You can add your QPaintedTextureImage to the QTexture2D and then set the result as the texture on the QTextureMaterial. I'm not sure if this yields the best performance, though.
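A minimal sketch of such a subclass (class and member names are illustrative, not from my actual implementation):
#include <Qt3DRender/QPaintedTextureImage>
#include <QPainter>
#include <QImage>

class ImageTexture : public Qt3DRender::QPaintedTextureImage
{
    Q_OBJECT
public:
    void setImage(const QImage &image)
    {
        m_image = image;
        setSize(image.size());
        update();                      // schedules a repaint of the texture
    }

protected:
    void paint(QPainter *painter) override
    {
        // Flip vertically if the source image appears upside down in Qt3D.
        painter->drawImage(0, 0, m_image.mirrored());
    }

private:
    QImage m_image;
};
You would register this class with qmlRegisterType, add an instance to a QTexture2D via addTextureImage(), and set that texture on the QTextureMaterial; calling setImage() then repaints the texture without ever touching the disk.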
You could also try to follow these steps, but they seem to be a bit more involved. This would mean you have to implement your own version of QAbstractTextureImage and return the appropriate QTextureImageDataGeneratorPtr. You can check out the sources to gain a better understanding.

Write texture mapped obj file to disk with VTK

I am using VTK to read an obj file, texture map the 3D model, transform it to another view (by applying rotateY/X/Z transforms to vtkActors) and write it to file using vtkWindowToImageFilter. With this pipeline, the rendered image is displayed on the screen before being written to file. Is there a way to run the same pipeline without the image being displayed on screen?
If you are using VTK 5.10 or earlier, you can render the geometry off screen.
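The switch is vtkRenderWindow's SetOffScreenRendering; a minimal sketch, assuming your reader/texture/actor pipeline already feeds an existing vtkRenderer:
#include <vtkRenderWindow.h>
#include <vtkWindowToImageFilter.h>
#include <vtkPNGWriter.h>
#include <vtkSmartPointer.h>

auto renderWindow = vtkSmartPointer<vtkRenderWindow>::New();
renderWindow->SetOffScreenRendering(1);   // never maps a window on screen
renderWindow->AddRenderer(renderer);      // your existing vtkRenderer
renderWindow->Render();

auto windowToImage = vtkSmartPointer<vtkWindowToImageFilter>::New();
windowToImage->SetInput(renderWindow);
windowToImage->Update();

auto writer = vtkSmartPointer<vtkPNGWriter>::New();
writer->SetFileName("output.png");
writer->SetInputConnection(windowToImage->GetOutputPort());
writer->Write();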
I am not quite sure if this is what you are looking for. I am new to VTK and I found the above link while looking for a way to convert a triangular surface to volume data, i.e., to voxelize the surface. Everything I found on the internet is about using vtkWindowToImageFilter to obtain a 2D section of the screen. Have you worked out a way to access the 3D data of the rendered window? Please tell me about it.

Libgdx postprocessing with modelbatch in chain: opengl setting breaks model faces rendering order

I use the libgdx library with OpenGL ES 2.0 (gl20) graphics.
I've added a library into the rendering chain, and it seems this breaks some OpenGL setting.
Problems:
model faces forget about their depth in the rendered scene;
inner faces are rendered (where outer faces are expected).
So my rendered scene looks like this (image source: cs617131.vk.me).
I have no sources for this library and I couldn't debug what is going on inside.
Update after solving:
I found the official sources of that strange compiled jar. The library I was trying to use is libgdx-contribs postprocessing.
The solution came at once when I started reading the API: the constructor of the main processor should get the parameter depth=true:
new PostProcessor(true/*enable depth!*/, false, true);
Conclusion:
The situation in the image shows rendering with a disabled depth buffer.
If you use libgdx's ModelBatch together with the libgdx-contribs postprocessing library,
you need to create the main PostProcessor with the parameter depth=true:
new PostProcessor(true/*enable depth!*/, false, true).