Using GL_TEXTURE_2D_ARRAY in an OpenXR swapchain - OpenGL

What is the purpose of array textures in OpenXR swapchains?
I wanted to make a single-texture swapchain with two array slices, one for each eye.
The runtime has created the texture as intended.
I have set the corresponding subImage.imageArrayIndex for each eye.
However, it does not work (Windows, OpenXR SDK 1.0.25, SteamVR 1.24.6, Valve Index HMD); it just shows a blank screen, as if I were not submitting any layers.
Using a separate swapchain for each eye works as intended.
Bonus question: What is the purpose of swapchain textures with mipmaps?
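For reference, here is a minimal sketch of the setup described above (OpenXR 1.0 with an OpenGL graphics binding assumed; session, colorFormat, viewWidth, viewHeight and projViews are placeholder names, not taken from the original post):

    // Single swapchain whose images are GL_TEXTURE_2D_ARRAY with two slices.
    XrSwapchainCreateInfo info{XR_TYPE_SWAPCHAIN_CREATE_INFO};
    info.usageFlags  = XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT | XR_SWAPCHAIN_USAGE_SAMPLED_BIT;
    info.format      = colorFormat;   // e.g. GL_SRGB8_ALPHA8
    info.sampleCount = 1;
    info.width       = viewWidth;
    info.height      = viewHeight;
    info.faceCount   = 1;
    info.arraySize   = 2;             // one slice per eye
    info.mipCount    = 1;
    XrSwapchain swapchain;
    xrCreateSwapchain(session, &info, &swapchain);

    // Both projection views share the swapchain and differ only in imageArrayIndex.
    for (uint32_t eye = 0; eye < 2; ++eye) {
        projViews[eye].subImage.swapchain       = swapchain;
        projViews[eye].subImage.imageRect       = {{0, 0}, {(int32_t)viewWidth, (int32_t)viewHeight}};
        projViews[eye].subImage.imageArrayIndex = eye;
    }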

Related

Creating a 3d card using DirectXTK with Dynamic texture on one side

I am trying to create a card game. I want to have a deck of cards where the back of each card is a fixed texture but the front is dynamic, i.e. it has some text fields on it as well as a picture. I have created a box sized 3x2x0.16 to represent my card. I can get the fixed texture to load, but I cannot find any code examples on the web that show me how to load a fixed texture on one side of the box and a dynamic one on the other. Can anyone point me to some examples, please? I'm using DirectXTK mainly, but can probably fathom it out from any DirectX code too.
DirectX 11 is the version of DirectX I am using.
Any recommendations on how to do this would also be welcome.
Thanks
Easiest method for generating your cards, depending on how many there are and how large you want them, is to generate the faces at startup by using render to texture. Effectively, draw your dynamic card faces exactly like you would draw them in the world, but use an orthographic projection matrix and a blank 2D texture object as the render target. Once you have that, cache these "dynamic" textures in an std::map and bind them when drawing a specific card.
If your faces are relatively small, or you want to save on texture memory, you can stitch multiple card faces together into a large sheet of textures, then use some shader scaling logic to reference a subsection of the sheet for rendering a specific texture. With this, you can assemble "decks" of cards that only contain the faces in use in that particular game, allowing you to evict the others from GPU RAM.
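A rough sketch of that render-to-texture step, assuming DirectXTK's SpriteBatch/SpriteFont on Direct3D 11 (device, context, cardArtSRV, the title string and the 256x384 face size are placeholders; the same idea applies if you instead draw the face geometry with an orthographic camera):

    #include <wrl/client.h>
    #include <DirectXColors.h>
    #include <SpriteBatch.h>
    #include <SpriteFont.h>
    using Microsoft::WRL::ComPtr;

    // Render one card face into an off-screen texture and return it for caching.
    ComPtr<ID3D11ShaderResourceView> RenderCardFace(
        ID3D11Device* device, ID3D11DeviceContext* context,
        DirectX::SpriteBatch& sprites, DirectX::SpriteFont& font,
        ID3D11ShaderResourceView* cardArtSRV, const wchar_t* title)
    {
        // Texture that is both a render target and a shader resource.
        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = 256;  desc.Height = 384;
        desc.MipLevels = 1;  desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

        ComPtr<ID3D11Texture2D> tex;
        ComPtr<ID3D11RenderTargetView> rtv;
        ComPtr<ID3D11ShaderResourceView> srv;
        device->CreateTexture2D(&desc, nullptr, &tex);
        device->CreateRenderTargetView(tex.Get(), nullptr, &rtv);
        device->CreateShaderResourceView(tex.Get(), nullptr, &srv);

        // Draw the face exactly as you would on screen, but into the texture.
        const float white[4] = {1.f, 1.f, 1.f, 1.f};
        context->OMSetRenderTargets(1, rtv.GetAddressOf(), nullptr);
        context->ClearRenderTargetView(rtv.Get(), white);
        D3D11_VIEWPORT vp = {0.f, 0.f, 256.f, 384.f, 0.f, 1.f};
        context->RSSetViewports(1, &vp);

        sprites.Begin();
        sprites.Draw(cardArtSRV, DirectX::XMFLOAT2(16.f, 48.f));  // the picture
        font.DrawString(&sprites, title, DirectX::XMFLOAT2(16.f, 8.f),
                        DirectX::Colors::Black);                   // a text field
        sprites.End();

        // Cache the returned SRV (e.g. in a std::map keyed by card id) and
        // bind it as the front-face texture when drawing that card's box.
        return srv;
    }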

Applying a shader to a framebuffer object to get a fisheye effect

Let's say I have an application (the details of the application should be irrelevant for solving the problem). Instead of rendering to the screen, I am somehow able to force the application to render to a framebuffer object (by messing with GLEW or intercepting a call in a DLL).
Once the application has rendered its content to the FBO, is it possible to apply a shader to the contents of the FBO? My knowledge is limited here; from what I understand, at this stage all information about vertices is no longer available and all the necessary tests have been applied, so what's left in the buffer is just pixel data. Is this correct?
If it is possible to apply a shader to the FBO, is it possible to get a fisheye effect? (Like this, for example: http://idea.hosting.lv/a/gfx/quakeshots.html)
The technique used in the link above is to create six different viewports, render each viewport to a cube-map face, and then apply the texture to a mesh.
Thanks
A framebuffer object encapsulates several other buffers, specifically those that are implicitly indexed by fragment location. So a single framebuffer object may bundle together a colour buffer, a depth buffer, a stencil buffer and a bunch of others. The individual buffers are known as renderbuffers.
You're right: there's no geometry in there. For the purposes of reading back the scene you get only final fragment values, which, if you're hijacking an existing app, will probably be a 2D pixel image of the frame and some other things that you don't care about.
If your GPU has render-to-texture support (originally an extension circa OpenGL 1.3 but you'd be hard pressed to find a GPU without it nowadays, even in mobile phones) then you can link a texture as a renderbuffer within a framebuffer. So the rendering code is exactly as it would be normally but ends up writing the results to a texture that you can then use as a source for drawing.
Fragment shaders can programmatically decide which location of a texture map to sample in order to create their output. So you can write a fragment shader that applies a fisheye lens, though you're restricted to the field of view rendered in the original texture, obviously. Which would probably be what you'd get in your Quake example if you had just one of the sides of the cube available rather than six.
In summary: the answer is 'yes' to all of your questions. There's a brief introduction to framebuffer objects here.
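To make the render-to-texture part concrete, here is a minimal sketch (desktop OpenGL 3.x assumed; shader compilation and the fullscreen quad are omitted, and width, height and the in/out names are placeholders):

    GLuint colorTex, fbo;
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    // 1) Let the application render its frame while `fbo` is bound.
    // 2) Bind the default framebuffer (0), bind colorTex, and draw a
    //    fullscreen quad with a fragment shader that remaps the texture
    //    coordinates, e.g. a crude fisheye-style distortion:
    const char* fisheyeFrag = R"(
        #version 330 core
        uniform sampler2D scene;
        in  vec2 uv;                             // 0..1 across the quad
        out vec4 color;
        void main() {
            vec2  p = uv * 2.0 - 1.0;            // centre the coordinates
            float r = length(p);
            vec2  warped = p * mix(0.5, 1.0, r); // magnify the centre
            color = texture(scene, warped * 0.5 + 0.5);
        }
    )";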
Look here for some relevant info:
http://www.opengl.org/wiki/Framebuffer_Object
The short, simple explanation is that an FBO is the 3D equivalent of a software frame buffer. You have direct access to individual pixels, instead of having to modify a texture and upload it. You can point shaders at an FBO. The link above gives an overview of the procedure.

How to create textures within GPU

Can anyone please tell me how to use hardware memory to create textures in OpenGL? Currently I'm running my game in windowed mode; do I need to switch to fullscreen to make use of the hardware?
If I can create textures in hardware, is there a limit on the number of textures (other than the hardware memory)? And how can I cache my textures in hardware? Thanks.
This should be covered by almost all texture tutorials for OpenGL. For example here, here and here.
For every texture you first need a texture name. A texture name is like a unique index for a single texture. Every name points to a texture object that can have its own parameters, data, etc. glGenTextures is used to get new names. I don't know if there is any limit besides the uint range (2^32). If there is, then you will probably get 0 for all new texture names (and a GL error).
The next step is to bind your texture (see glBindTexture). After that, all operations that use or affect textures will use the texture specified by the texture name you passed to glBindTexture. You can now set parameters for the texture (glTexParameter) and upload the texture data with glTexImage2D (for 2D textures). After calling glTexImage you can also free the system memory holding your texture data.
For static textures all this has to be done only once. If you want to use the texture you just need to bind it again and enable texturing (glEnable(GL_TEXTURE_2D)).
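Put together, the sequence above looks roughly like this (legacy fixed-function OpenGL; pixels, width and height stand for image data the application has already loaded):

    GLuint texName;
    glGenTextures(1, &texName);                       // get a texture name
    glBindTexture(GL_TEXTURE_2D, texName);            // texture calls now affect it
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);  // upload; pixels may be freed afterwards

    // Later, when drawing:
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texName);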
The size (width/height) of a single texture is limited by GL_MAX_TEXTURE_SIZE. This is normally 4096, 8192 or 16384. It is also limited by the available graphics memory, because the texture has to fit into it together with some other resources like the framebuffer or vertex buffers. All textures together can be bigger than the available memory, but then they will be swapped.
In most cases the graphics driver should decide which textures are stored in system memory and which in graphics memory. You can however give certain textures a higher priority with either glPrioritizeTextures or with glTexParameter.
Edit:
I wouldn't worry too much about where textures are stored, because the driver normally does a very good job with that. Textures that are used often are also more likely to be stored in graphics memory. If you set a priority, that's just a "hint" for the driver on how important it is for the texture to stay on the graphics card. It's also possible that the priority is completely ignored. You can also check where textures currently are with glAreTexturesResident.
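As an illustration of those legacy hints (both calls are deprecated in modern OpenGL; texName is the name obtained from glGenTextures, as in the sketch above):

    GLclampf priority = 1.0f;                       // 1.0 = try hard to keep it resident
    glPrioritizeTextures(1, &texName, &priority);

    GLboolean resident = GL_FALSE;
    glAreTexturesResident(1, &texName, &resident);  // GL_TRUE if it lives in video memory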
Usually when you talk about generating a texture on the GPU, you're not actually creating texture images and applying them like normal textures. The simpler and more common approach is to use fragment shaders to procedurally calculate the color of each pixel in real time, from scratch, for every single frame.
The canonical example for this is to generate a Mandelbrot pattern on the surface of an object, say a teapot. The teapot is rendered with its polygons and texture coordinates by the application. At some stage of the rendering pipeline every pixel of the teapot passes through the fragment shader which is a small program sent to the GPU by the application. The fragment shader reads the 2D texture coordinates and calculates the Mandelbrot set color of the 2D coordinates and applies it to the pixel.
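For illustration, a tiny fragment shader in that spirit (old-style GLSL, shown here as a C++ string literal; a sketch of the idea, not the tutorial's actual shader):

    const char* mandelbrotFrag = R"(
        varying vec2 texCoord;                         // passed in by the vertex shader
        void main() {
            vec2 c = texCoord * 3.0 - vec2(2.0, 1.5);  // map 0..1 onto the complex plane
            vec2 z = vec2(0.0);
            int  i;
            for (i = 0; i < 64; ++i) {
                z = vec2(z.x * z.x - z.y * z.y, 2.0 * z.x * z.y) + c;
                if (dot(z, z) > 4.0) break;
            }
            float t = float(i) / 64.0;
            gl_FragColor = vec4(t, t * t, sqrt(t), 1.0);
        }
    )";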
Fullscreen mode has nothing to do with it. You can use shaders and generate textures even if you're in window mode. As I mentioned, the textures you create never actually occupy space in the texture memory, they are created on the fly. One could probably think of a way to capture and cache the generated texture but this can be somewhat complex and require multiple rendering passes.
You can learn more about it if you look up "GLSL" (the OpenGL Shading Language) on Google.
This somewhat dated tutorial shows how to create a simple fragment shader which draws the Mandelbrot set (page 4).
If you can get your hands on the book "OpenGL Shading Language, 2nd Edition", you'll find it contains a number of simple examples on generating sky, fire and wood textures with the help of an external 3D Perlin noise texture from the application.
To create a texture on the GPU, look into "render to texture" tutorials. There are two common methods: binding a PBuffer context as a texture, or using Frame Buffer Objects. PBuffer render-to-texture is the older method and has wider support. Frame Buffer Objects are easier to use.
Also, you don't have to switch to "fullscreen" mode for OpenGL to be hardware accelerated. In fact, OpenGL doesn't know about windows at all. A fullscreen OpenGL window is just that: a top-level window on top of all other windows, with no decorations and the input focus grabbed. Some drivers bypass window masking and clipping code, and employ a simpler, faster buffer-swap method if the window with the active OpenGL context covers the whole screen, thus gaining a little performance, but with current hardware and software the effect is very small compared to other influences.

DirectX post-processing shader

I have a simple application in which I need to let the user select a shader (.fx HLSL or assembly file, possibly with multiple passes, but all and only pixel shader) and preview it.
The application runs, the list of shaders comes up, and a button launches the "preview window."
From this preview window (which has a DirectX viewport in it), the user selects an image, and the shader is run on that image and displayed. Only one frame needs to be rendered (not real-time).
I have a vertex/pixel shader combination set up that takes a quad and renders it to the screen, textured with the chosen image. This works perfectly.
I need to then run another effect, purely pixel shader, on the output from the first effect, and display the final image (post-processed) to the screen. This doesn't work at all.
I've tried for the past few days to get it working, but for no apparent reason, the identical code blocks used to render each effect only render the first. I can add the second shader file as a second pass in the first shader file and it runs perfectly (although that completely defeats my goal of previewing user-created shaders). When I try to use a second effect (which loads and compiles just fine), it does nothing.
I've taken the results of the first shader (with GetRenderTargetData) and placed them in a texture & surface (destTex and destSur), then set that texture as the input for the second pass (using dev->SetTexture and later effect->SetTexture("thisframe", destTex)).
All calls succeed, effects compile, textures load, quads are drawn, but the effect is not visible.
I suspected at first the device (created with software vertex processing) was causing the issue, but that doesn't seem to be the case (I tried with hardware and mixed).
Additionally, with both a HAL and a REF device (not a problem, since the app isn't real-time anyway), the second shader isn't visible.
Everything is written in C++ for Direct3D 9.
Try clearing the depth-stencil buffer after each time you render the quad.
First create a texture, then render the first shader directly into that texture. Finally, render the second shader, with that texture as input, to the backbuffer.
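Roughly, in Direct3D 9 terms (a sketch only; dev, firstEffect, secondEffect, sourceImageTex, width, height and the DrawQuadWithEffect helper are hypothetical stand-ins for the app's own objects, and error checking, BeginScene/EndScene and the effect Begin/BeginPass calls are assumed to live inside the helper):

    IDirect3DTexture9* rtTex = nullptr;
    IDirect3DSurface9* rtSurf = nullptr;
    IDirect3DSurface9* backBuffer = nullptr;

    dev->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                       D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &rtTex, nullptr);
    rtTex->GetSurfaceLevel(0, &rtSurf);
    dev->GetRenderTarget(0, &backBuffer);

    // Pass 1: first effect renders the textured quad into the off-screen target.
    dev->SetRenderTarget(0, rtSurf);
    dev->Clear(0, nullptr, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
    DrawQuadWithEffect(firstEffect, sourceImageTex);

    // Pass 2: second (user) effect samples the result and draws to the backbuffer.
    dev->SetRenderTarget(0, backBuffer);
    dev->Clear(0, nullptr, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
    secondEffect->SetTexture("thisframe", rtTex);
    DrawQuadWithEffect(secondEffect, rtTex);

    rtSurf->Release();
    backBuffer->Release();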
There must be some kind of vertex input and vertex processing (either fixed-function or shader) in order for the pixel shader to be run. Are you supplying the vertex shader, and if so are you sure it does what the pixel shader expects? What does your draw call look like?
It's probably worth looking at a PIX trace of your app to see what the device state is when trying to use the user effect.

OpenGL primitives too dark when multitexturing?

I'm having a problem getting accurate primitive colours when I'm using multi-texturing elsewhere in the scene. Basically, I have some lines and polygons that I am trying to render over a video texture (I'm using 3-stage multitexturing to create the video texture)... Anyhow, I know the problem is not alpha-related... In fact, I know that in my texture update function, if I just comment out the calls to glBindTexture() for texture units 1 and 2 (leaving only texture unit 0), the primitive colour is fine... Is it trying to multitexture the primitives too (even though I'm obviously not setting texture coordinates for the primitives)?
Make sure to disable multitexturing when not using it. OpenGL uses a state machine, so if you turn on a texture it will stay on until you explicitly turn it off.
Just because you're not setting coordinates, doesn't mean OpenGL will assume you're not using the texture.
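For example, with fixed-function multitexturing the extra units have to be turned off explicitly before drawing the untextured primitives (a sketch; units 1 and 2 are assumed to be the extra video-texture stages mentioned above):

    glActiveTexture(GL_TEXTURE2);
    glDisable(GL_TEXTURE_2D);
    glActiveTexture(GL_TEXTURE1);
    glDisable(GL_TEXTURE_2D);
    glActiveTexture(GL_TEXTURE0);   // leave unit 0 as the active unit
    glDisable(GL_TEXTURE_2D);       // if the primitives should be untextured

    // ...draw the lines/polygons with their plain colours...
    // Re-enable the units (and rebind their textures) before drawing the video quad again.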