I would like to write to the swapchain generated by OVR with a compute shader.
The problem is that the images don't have the usage VK_IMAGE_USAGE_STORAGE_BIT.
The swapchain is created with ovr_CreateTextureSwapChainVk, which asks for a BindFlags field. I added the flag ovrTextureBind_DX_UnorderedAccess, but the images still don't have the correct usage.
If the images don't have the VK_IMAGE_USAGE_STORAGE_BIT usage, then you cannot write to a swapchain image directly with a compute shader.
The display engine that provides the swapchain images has the right to decide how you may and may not use them. The only method of interaction which is required is the ability to use them as a color render target; everything else is optional.
So you will have to do this another way, perhaps by writing to an intermediate image and copying/rendering it to the swapchain image.
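A rough sketch of that approach (assuming the swapchain image at least supports being a transfer destination; if it doesn't, sample the intermediate image from a fragment shader and draw a full-screen triangle instead; cmd, computePipeline, storageImage, descriptorSet and the dimensions are placeholders):

// Assumes storageImage was created with VK_IMAGE_USAGE_STORAGE_BIT |
// VK_IMAGE_USAGE_TRANSFER_SRC_BIT and is already bound in descriptorSet.
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, computePipeline);
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, pipelineLayout,
                        0, 1, &descriptorSet, 0, nullptr);
vkCmdDispatch(cmd, (width + 7) / 8, (height + 7) / 8, 1);

// Barriers omitted: wait for the compute writes, move storageImage to
// TRANSFER_SRC_OPTIMAL and the swapchain image to TRANSFER_DST_OPTIMAL.

VkImageCopy region{};
region.srcSubresource = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 };
region.dstSubresource = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 };
region.extent = { width, height, 1 };
vkCmdCopyImage(cmd,
               storageImage, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
               swapchainImage, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
               1, &region);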
My experience with D3D11on12 and Direct2D hasn't been that good. Infrequently I get
D3D12 ERROR: ID3D12Device::RemoveDevice: Device removal has been triggered for the following reason (DXGI_ERROR_ACCESS_DENIED: The application attempted to use a resource it does not have access to. This could be, for example, rendering to a texture while only having read access.). [ EXECUTION ERROR #232: DEVICE_REMOVAL_PROCESS_AT_FAULT]
when I render to the swapchain backbuffer. There are lag spikes as well. And on top of all this, I think amortizing the UI rendering (updating it less often than the scene) will be needed when I try to push the frame rate.
Synchronization between the UI and the actual scene doesn't really matter, so I can happily just use whatever UI Direct2D has most recently finished rendering.
So I would like to use Direct2D to render the UI to a transparent D3D11on12 bitmap (i.e. one created by calling CreateBitmapFromDxgiSurface with the ID3D11Resource from ID3D11On12Device::CreateWrappedResource), and then render this overlay onto the swapchain backbuffer.
The problem is I don't really know anything about the 3D pipeline, as I do everything with compute shaders/DirectML + CopyTextureRegion or Direct2D. I suppose this is a pretty simple question about how to do alpha blending.
I suppose that to do alpha blending you have to use the 3D pipeline. Luckily, DirectXTK12 has a tutorial on this topic that is reasonably straightforward: https://github.com/Microsoft/DirectXTK12/wiki/Sprites-and-textures
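For reference, a minimal sketch of what that looks like with DirectX Tool Kit 12's SpriteBatch. It assumes the overlay already has a shader-visible SRV (overlaySrv, in a descriptor heap bound on the command list) and that the back buffer RTV is the current render target; device, commandQueue, commandList, viewport and the sizes are placeholders. CommonStates::AlphaBlend is used because Direct2D produces premultiplied alpha.

#include <memory>
#include "SpriteBatch.h"
#include "ResourceUploadBatch.h"
#include "RenderTargetState.h"
#include "CommonStates.h"

using namespace DirectX;

// One-time setup: build a SpriteBatch whose PSO targets the back buffer format
// and blends with premultiplied alpha.
ResourceUploadBatch upload(device);
upload.Begin();
RenderTargetState rtState(backBufferFormat, DXGI_FORMAT_UNKNOWN);
SpriteBatchPipelineStateDescription pd(rtState, &CommonStates::AlphaBlend);
auto spriteBatch = std::make_unique<SpriteBatch>(device, upload, pd);
upload.End(commandQueue).wait();

// Per frame, after the scene has been rendered and the back buffer is bound:
spriteBatch->SetViewport(viewport);
spriteBatch->Begin(commandList);
spriteBatch->Draw(overlaySrv, XMUINT2(overlayWidth, overlayHeight), XMFLOAT2(0.f, 0.f));
spriteBatch->End();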
I'm trying to use a compute shader to render directly to the swap chain.
For this, I need to create the swapchain with the usage VK_IMAGE_USAGE_STORAGE_BIT.
The problem is that the swapchain needs to be created with the format VK_FORMAT_B8G8R8A8_UNORM or VK_FORMAT_B8G8R8A8_SRGB, and neither of the two supports the format feature VK_FORMAT_FEATURE_STORAGE_IMAGE_BIT on the physical device I use.
Did I get something wrong, or is it impossible to write to the swapchain with a compute shader using my configuration?
Vulkan imposes no requirement on the implementation that it permit direct usage of a swapchain image in a compute shader operation (FYI: "rendering" typically refers to a very specific operation; it's not something that happens in a compute shader). Therefore, it is entirely possible that the implementation will forbid you from using a swapchain image in a CS through various means.
If you cannot create a swapchain image in your preferred format, then your next best bet is to see whether there is a compatible format, usable as a storage image, that you can create an image view in. This, however, requires that the implementation support the VK_KHR_swapchain_mutable_format extension, that the swapchain's creation flags include VK_SWAPCHAIN_CREATE_MUTABLE_FORMAT_BIT_KHR, and that you chain a VkImageFormatListCreateInfoKHR listing the formats you intend to create views in.
Also, even given that support, your CS will have to swap the component ordering of the data (BGRA vs. RGBA). And don't forget that, when you create the swapchain, you still have to ask for storage usage on its images (imageUsage); the implementation may directly forbid this.
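A rough sketch of what that swapchain creation might look like (assuming VK_KHR_swapchain_mutable_format and VK_KHR_image_format_list are enabled and the surface actually reports VK_IMAGE_USAGE_STORAGE_BIT in supportedUsageFlags; surface, imageCount and extent are placeholders):

// viewFormats[0] is the format the swapchain presents in; viewFormats[1] is the
// format the compute shader will write through an image view of each image.
VkFormat viewFormats[2] = { VK_FORMAT_B8G8R8A8_UNORM, VK_FORMAT_R8G8B8A8_UNORM };

VkImageFormatListCreateInfoKHR formatList{};
formatList.sType = VK_STRUCTURE_TYPE_IMAGE_FORMAT_LIST_CREATE_INFO_KHR;
formatList.viewFormatCount = 2;
formatList.pViewFormats = viewFormats;

VkSwapchainCreateInfoKHR sci{};
sci.sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
sci.pNext = &formatList;
sci.flags = VK_SWAPCHAIN_CREATE_MUTABLE_FORMAT_BIT_KHR;
sci.surface = surface;
sci.minImageCount = imageCount;
sci.imageFormat = VK_FORMAT_B8G8R8A8_UNORM;
sci.imageColorSpace = VK_COLOR_SPACE_SRGB_NONLINEAR_KHR;
sci.imageExtent = extent;
sci.imageArrayLayers = 1;
sci.imageUsage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_STORAGE_BIT;
// ...remaining fields (sharing mode, preTransform, compositeAlpha, presentMode) as usual.

Note that an R8G8B8A8 view simply reinterprets the bytes of a B8G8R8A8 image, which is why the shader has to write the channels in swapped order.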
I am looking at this demo for rendering a scene in Vulkan using depth peeling (order-independent transparency):
Blog: https://matthewwellings.com/blog/depth-peeling-order-independent-transparency-in-vulkan/
Code: https://github.com/openforeveryone/VulkanDepthPeel
I have modified the code so that I am able to save the final render to an output image (PNG) before presenting it to the surface.
Once the primary command buffer (whose secondary command buffers perform the drawing operations) has been submitted to the queue for execution and rendering has finished, I use vkCmdCopyImageToBuffer to copy the data from the current swapchain image to a VkBuffer (the copy is done after an image barrier to make sure rendering has completed first), then map the buffer memory to an unsigned char pointer and write the data to the PNG file. But the output I see in the PNG is different from what is rendered in the window: the boxes are almost entirely transparent, with some RGB information, as you can see in the images below.
My guess is that this might be because this particular demo involves multiple subpasses and I am not copying the data properly. The only thing that bothers me is that, since I am copying directly from the swapchain image just before finally presenting to the surface, I should have the final color data in the image, and hence the PNG and the render should match.
Rendered Frame:
Saved Frame:
Let me know if I have missed explaining any details, any help is appreciated. Thanks!
You have an alpha value of 41 in the saved image. If I just rewrite it to 255, the images are identical.
You are probably using VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR with the swapchain, which does that automatically. But a typical image viewer will treat the alpha as premultiplied, hence the perceived (brighter) image difference.
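A quick way to confirm this (a small sketch, assuming mapped points at the staging buffer holding width * height tightly packed 4-byte BGRA pixels) is to force the alpha bytes to 255 before handing the data to the PNG writer:

// Force every pixel's alpha byte to 255 so the written PNG is opaque.
unsigned char* pixels = static_cast<unsigned char*>(mapped);
for (size_t i = 0; i < size_t(width) * height; ++i)
    pixels[i * 4 + 3] = 255;  // byte 3 of each 4-byte pixel is alpha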
I'm fairly new to CUDA, but I've managed to display something generated by a kernel on the screen using OpenGL. I've tried several approaches:
Using a PBO and an OpenGL texture (old style);
Using an OpenGL texture as a CUDA surface and rendering on a quad (new style);
Using a renderbuffer as a CUDA surface and rendering using glBlitFramebuffer.
All of them worked, but, while implementing #2, I erroneously set the hint to cudaGraphicsRegisterFlagsWriteDiscard. Since all of the data would be generated by CUDA, I thought this was the correct option. However, I later realized that I needed a CUDA surface to write to an OpenGL texture, and when you use a surface, you are required to register it with the SurfaceLoadStore flag.
So basically my question is this: since I absolutely need a CUDA surface to write to an OpenGL texture from CUDA, what is the use case of cudaGraphicsRegisterFlagsWriteDiscard in cudaGraphicsGLRegisterImage?
The documentation description seems pretty straightforward. It is for one-way delivery of data from CUDA to OpenGL.
This online book excerpt provides a similar explanation:
Applications where CUDA is the producer and OpenGL is the consumer should register the objects with a write-discard flag...
If you want to see an example, take a look at the postProcessGL CUDA sample. In that case, OpenGL is rendering an image, and it's being post-processed (blur added) by CUDA before display. In this case, there are two separate pathways for data flow. In the OpenGL->CUDA case, the data is handled by the createTextureSrc function, and the flag specified is read-only. For the CUDA->OpenGL case (delivery of the post-processed frame), this is handled in createTextureDst, where a call is made to cudaGraphicsGLRegisterImage with the cudaGraphicsMapFlagsWriteDiscard flag specified, since on this path, CUDA is producing and OpenGL is consuming.
To understand how the textures are handled (populated with data from the cuda operations via a cudaArray) you probably want to study the sequence of operations in processImage().
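For a concrete picture of the CUDA-producer/OpenGL-consumer path with the write-discard flag, here is a rough sketch where the data is delivered by copying into the texture's underlying cudaArray instead of through a surface (tex, devPtr, width and height are placeholders; error checking omitted):

#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

cudaGraphicsResource_t cudaTex = NULL;

// Register once: CUDA only produces, OpenGL only consumes, and the previous
// contents may be discarded every time the resource is mapped.
cudaGraphicsGLRegisterImage(&cudaTex, tex, GL_TEXTURE_2D,
                            cudaGraphicsRegisterFlagsWriteDiscard);

// Per frame: map the texture, copy the kernel output into its array, unmap.
cudaGraphicsMapResources(1, &cudaTex, 0);
cudaArray_t array = NULL;
cudaGraphicsSubResourceGetMappedArray(&array, cudaTex, 0, 0);
cudaMemcpy2DToArray(array, 0, 0, devPtr,
                    width * sizeof(uchar4),   // source pitch
                    width * sizeof(uchar4), height,
                    cudaMemcpyDeviceToDevice);
cudaGraphicsUnmapResources(1, &cudaTex, 0);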
I'm relatively new to DirectX and have to work on an existing C++ DX9 application. The app does tracking on camera images and displays some DirectDraw (i.e. 2D) content. The camera always has an aspect ratio of 4:3, while the screen's aspect ratio is undefined.
I want to load a texture and use this texture as a mask, so tracking and displaying of the content only are done within the masked area of the texture. Therefore I'd like to load a texture that has exactly the same size as the camera images.
I've done all the steps to load the texture, but when I call GetDesc(), the Width and Height fields of the D3DSURFACE_DESC struct are rounded up to the next power-of-2 size. I do not care that the actual memory used for the texture is optimized for the graphics card, but I did not find any way to get the dimensions of the original image file on the hard disk.
I have searched (so far without success) for a way to load the image into system RAM only (no graphics card required) without adding a new dependency to the code. Otherwise I'd have to use OpenCV (which might be a good idea anyway when it comes to tracking), but for the moment I am still trying to avoid including OpenCV.
Thanks for your hints,
Norbert
Use D3DXCreateTextureFromFileEx with parameters 3 and 4 (Width and Height) set to D3DX_DEFAULT_NONPOW2.
After that, you can use
D3DSURFACE_DESC Desc;
m_Sprite->GetLevelDesc(0, &Desc);
to fetch the height & width.
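For example (a sketch; pDevice and the file name are placeholders, error handling is omitted, and D3DX_DEFAULT_NONPOW2 keeps the file's dimensions only if the device does not require power-of-two textures):

// Keep the file's original dimensions by passing D3DX_DEFAULT_NONPOW2
// for Width and Height (assumes pDevice is a valid IDirect3DDevice9*).
LPDIRECT3DTEXTURE9 pTexture = NULL;
HRESULT hr = D3DXCreateTextureFromFileEx(
    pDevice, "mask.png",
    D3DX_DEFAULT_NONPOW2, D3DX_DEFAULT_NONPOW2,  // Width, Height: keep file size
    1, 0, D3DFMT_UNKNOWN, D3DPOOL_MANAGED,
    D3DX_DEFAULT, D3DX_DEFAULT, 0, NULL, NULL, &pTexture);

D3DSURFACE_DESC Desc;
pTexture->GetLevelDesc(0, &Desc);  // Desc.Width / Desc.Height now match the file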
D3DXGetImageInfoFromFile may be what you are looking for.
I'm assuming you are using D3DX because I don't think Direct3D automatically resizes any textures.
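For example, a small sketch that reads only the file header and never touches the graphics card (the file name is a placeholder):

// Query the on-disk dimensions without creating a texture.
D3DXIMAGE_INFO info;
if (SUCCEEDED(D3DXGetImageInfoFromFile("mask.png", &info)))
{
    UINT fileWidth  = info.Width;   // dimensions of the image file itself,
    UINT fileHeight = info.Height;  // independent of any power-of-2 rounding
}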