Instance-based VR rendering in OpenGL

I'm trying to use instancing to do VR rendering in OpenGL with 1 draw call and 2 instances (one for the left eye, one for the right eye). The vertex shader then translates the vertices left for instance ID 0 and right for instance ID 1. The only thing I still need is a per-instance viewport for automatic hardware culling/clipping. This is doable in DirectX, but is it possible in OpenGL?

Recently I was implementing instanced stereo rendering for VR myself and ran into the same problem. I had the option of using a geometry shader for instanced viewports, but I didn't want the overhead it would introduce. So in the end I shifted the perspective for each view and used a clip plane.
That's probably what you're looking for: a clip plane. It's really simple to implement in a vertex shader too; you just pass the clip-space x coordinate into gl_ClipDistance.
https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/gl_ClipDistance.xhtml
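Below is a minimal sketch of that approach as a GLSL vertex shader, assuming hypothetical uniform names and a side-by-side render target; the host side also has to call glEnable(GL_CLIP_DISTANCE0).

    #version 330 core

    layout(location = 0) in vec3 aPos;

    // Hypothetical per-eye view-projection matrices
    uniform mat4 uViewProj[2];

    void main()
    {
        int eye = gl_InstanceID;                        // 0 = left eye, 1 = right eye
        vec4 clipPos = uViewProj[eye] * vec4(aPos, 1.0);

        // Shift each eye into its half of the side-by-side render target
        clipPos.x = 0.5 * clipPos.x + (eye == 0 ? -0.5 : 0.5) * clipPos.w;

        // Clip at the centre line so one eye's geometry never bleeds into the other
        gl_ClipDistance[0] = (eye == 0) ? -clipPos.x : clipPos.x;

        gl_Position = clipPos;
    }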
Good luck

Related

How to draw a sphere in D3D11, given position and radius?

To draw a sphere, one does not need to know anything but its position and radius. Thus, rendering a sphere by passing a triangle mesh sounds very inefficient unless you need per-vertex colors or other such features. Despite googling, searching the D3D11 documentation and reading Introduction to 3D Programming with DirectX 11, I failed to understand:
Is it possible to draw a sphere by passing only its position and radius to the GPU?
If not, what is the main principle I have misunderstood?
If yes, how to do it?
My ultimate goal is to pass more parameters later on which will be used by a shader effect.
You will need to implement a geometry shader. This shader should take the sphere center and radius as input and emit a bunch of vertices for rasterization. In general this technique is called point sprites.
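As a rough illustration of the point-sprite idea (sketched here as a GLSL geometry shader since the rest of this page is OpenGL-centric; the HLSL version is structurally the same, and all names are assumptions), each point is expanded into a camera-facing quad, which the fragment shader can then ray-cast against the sphere or discard outside the circle:

    #version 330 core

    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    in float vRadius[];        // per-sphere radius, forwarded by the vertex shader
    out vec2 gQuadCoord;       // -1..1 across the sprite, for the fragment shader

    uniform mat4 uProj;        // projection matrix; positions arrive in view space

    void main()
    {
        vec4 center = gl_in[0].gl_Position;   // view-space sphere centre
        float r = vRadius[0];

        const vec2 corners[4] = vec2[](vec2(-1.0, -1.0), vec2( 1.0, -1.0),
                                       vec2(-1.0,  1.0), vec2( 1.0,  1.0));

        for (int i = 0; i < 4; ++i)
        {
            gQuadCoord = corners[i];
            gl_Position = uProj * (center + vec4(corners[i] * r, 0.0, 0.0));
            EmitVertex();
        }
        EndPrimitive();
    }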
One option would be to use tessellation.
https://en.wikipedia.org/wiki/Tessellation_(computer_graphics)
Most of the mesh will be generated on the GPU side.
Note: in the end more data still reaches the shaders, because the sphere is split into triangles that are each rasterized individually on the screen. But the split is done on the GPU side.
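A hypothetical sketch of that tessellation stage in GLSL (a pass-through control shader would set gl_TessLevelInner/gl_TessLevelOuter; the uniform names are assumptions): the evaluation shader maps the [0,1]² patch domain onto the sphere.

    #version 400 core

    layout(quads, equal_spacing, ccw) in;

    uniform vec3  uCenter;     // sphere centre
    uniform float uRadius;
    uniform mat4  uViewProj;

    const float PI = 3.14159265358979;

    void main()
    {
        // Map the abstract patch domain [0,1]^2 to spherical coordinates
        float theta = gl_TessCoord.x * 2.0 * PI;   // longitude
        float phi   = gl_TessCoord.y * PI;         // latitude

        vec3 onUnitSphere = vec3(sin(phi) * cos(theta),
                                 cos(phi),
                                 sin(phi) * sin(theta));

        gl_Position = uViewProj * vec4(uCenter + uRadius * onUnitSphere, 1.0);
    }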
While you can create a sphere from a single point on the GPU, it's generally not very efficient. With higher-end GPUs you could use hardware tessellation, but even that would be better done a different way.
The better solution is to use instancing: render many instances of the same sphere VB/IB, each one translated and scaled to a different position and size.
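A minimal sketch of that instancing approach as a GLSL vertex shader (attribute locations and names are assumptions; the per-instance attribute would be fed with glVertexAttribDivisor(2, 1) on the host side), after which one instanced draw call renders every sphere:

    #version 330 core

    layout(location = 0) in vec3 aPos;            // vertex of a unit-sphere mesh
    layout(location = 1) in vec3 aNormal;
    layout(location = 2) in vec4 aCenterRadius;   // per-instance: xyz = centre, w = radius

    uniform mat4 uViewProj;

    out vec3 vNormal;

    void main()
    {
        vec3 worldPos = aCenterRadius.xyz + aPos * aCenterRadius.w;
        vNormal = aNormal;                        // uniform scale, so the normal is unchanged
        gl_Position = uViewProj * vec4(worldPos, 1.0);
    }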

Texture Mapping in GLSL

I am currently working on a project where I have to texture a cube using the reflection vector between the fragment normal and the camera direction.
I have the texture as a sampler2D, and I somehow have to map it onto a cube using reflection.
The question is: can someone explain how this process works? That would help me finish my project and further understand the process behind texturing.
The thing is that I can't use textureCube(), only texture2D(), so that the fragment shader is applicable not only to cubes but to every surface.
Thank you in advance for the answer!
The thing is that I can't use textureCube(), only texture2D(), so that the fragment shader is applicable not only to cubes but to every surface.
Why do you have to do this? Implementing this yourself with texture2D (...) is going to involve multiple texture lookups. textureCube (...) keeps the implementation details hidden and can even filter seamlessly across face edges on supported hardware.
In all cases, the fact that it is called a cubemap means nothing about the surface you are mapping it onto; it is actually the texture itself that is a cube (six 2D textures define all of the cube faces).
When you sample a cubemap, you are shooting a ray through this virtual cube and the color or depth returned is where that ray intersects it. The sampled value will come from at least one of the six cube faces, possibly multiple cube faces depending on the texture filter setup.
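For reference, a minimal sketch of the recommended cubemap path as a GLSL fragment shader (names are assumptions; texture() with a samplerCube is the modern equivalent of textureCube()):

    #version 330 core

    in vec3 vWorldPos;          // interpolated from the vertex shader
    in vec3 vWorldNormal;

    out vec4 fragColor;

    uniform samplerCube uEnvMap;
    uniform vec3 uCameraPos;

    void main()
    {
        vec3 viewDir    = normalize(vWorldPos - uCameraPos);
        vec3 reflectDir = reflect(viewDir, normalize(vWorldNormal));

        // The reflection vector is the ray shot through the virtual cube
        fragColor = texture(uEnvMap, reflectDir);
    }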

GLSL - testing fragment world space coordinate intersection with geometry texture, and texture modification

I am exploring some GLSL and have something I want to try to implement. Here is the situation:
I have a previously rendered texture which stores only the world-space coordinates of fragments (rgb = xyz). This texture is passed to another render pass; is it possible to sample the world-position texture and test the current fragment's world-space coordinate against it to see if they match?
An example could be two cameras: testing whether any of the points in 3D space rendered to a texture by camera A can also be seen by camera B.
Also, is it possible to have a texture that can be modified by several different shaders? I.e. have a camera render to a texture, then pass that texture to another shader and change it?
Any help is greatly appreciated, thanks :)
I have a previously rendered texture which stores only the world-space coordinates of fragments (rgb = xyz). This texture is passed to another render pass; is it possible to sample the world-position texture and test the current fragment's world-space coordinate against it to see if they match?
An example could be two cameras: testing whether any of the points in 3D space rendered to a texture by camera A can also be seen by camera B.
Yes, it is possible. This is essentially a shadow map, except that you have to compare the distances manually during sampling. It's unclear why you insist on storing the world-space XYZ coordinates and what the use case for this is. It would be much simpler and more efficient to store the depths in a depth texture and use the built-in depth-texture lookup.
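A hedged sketch of that manual comparison in a fragment shader for camera B's pass (all names and the tolerance are assumptions):

    #version 330 core

    in vec3 vWorldPos;                 // world-space position of the current fragment (camera B)
    out vec4 fragColor;

    uniform sampler2D uWorldPosTexA;   // rgb = world-space xyz, rendered from camera A
    uniform mat4  uViewProjA;          // camera A's view-projection matrix
    uniform float uEpsilon;            // match tolerance in world units

    void main()
    {
        // Project the fragment into camera A's screen space
        vec4 clipA = uViewProjA * vec4(vWorldPos, 1.0);
        vec2 uvA   = (clipA.xy / clipA.w) * 0.5 + 0.5;

        // Behind camera A or outside its view: no match
        if (clipA.w <= 0.0 || any(lessThan(uvA, vec2(0.0))) || any(greaterThan(uvA, vec2(1.0))))
        {
            fragColor = vec4(0.0);
            return;
        }

        vec3 storedPos = texture(uWorldPosTexA, uvA).rgb;
        float visible  = step(distance(storedPos, vWorldPos), uEpsilon);

        fragColor = vec4(vec3(visible), 1.0);
    }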
Also, is it possible to have a texture that can be modified by several different shaders? I.e. have a camera render to a texture, then pass that texture to another shader and change it?
Yes. You can render to a texture and then use imageLoad and imageStore (and related APIs) in another shader to modify it. You must be careful, however, with feedback loops. Because of the parallel nature of GPUs, and their cache-incoherent architecture, it might be complicated, and a detailed answer would depend on the exact thing you're trying to achieve.
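For the second part, a minimal sketch of modifying a previously rendered texture with image load/store in a compute shader (the format, binding and the edit itself are assumptions; the texture is bound with glBindImageTexture, and a glMemoryBarrier is needed before the next pass reads it):

    #version 430 core

    layout(local_size_x = 8, local_size_y = 8) in;

    // Texture rendered in an earlier pass, bound as image unit 0
    layout(rgba8, binding = 0) uniform image2D uImg;

    void main()
    {
        ivec2 texel = ivec2(gl_GlobalInvocationID.xy);
        if (any(greaterThanEqual(texel, imageSize(uImg))))
            return;

        vec4 c = imageLoad(uImg, texel);
        imageStore(uImg, texel, vec4(1.0 - c.rgb, c.a));   // e.g. invert the colour in place
    }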

OpenGL instanced rendering - frustum culling

I'm drawing a large number of cubes (100 000+) using glDrawElementsInstanced(). For performance reasons I'd like to implement frustum culling, but I'm not quite sure how to do that when I'm using instancing.
From what I know, the only way to access an individual instance is in the shaders, so I assume I have to do the culling there, but I'm not quite sure how.
Can anyone point me to any tutorials?
Trying to do culling in the vertex shader is way too late in the process. You have to feed the cubes' transforms to the shaders somehow anyway, so take that data and build a Bounding Volume Hierarchy from it, then only draw the instances that pass the frustum culling.

OpenGL 360 degree perspective

I'm looking to capture a 360-degree (spherical panorama) photo of my scene. What is the best way to do this? If I understand correctly, I can't do it the ordinary way by simply setting the perspective to 360.
If I need a vertex shader for this, is there one available?
This is actually a nontrivial thing to do.
In a naive approach, a vertex shader that transforms the vertex positions not by matrix multiplication but by feeding them through trigonometric functions may seem to do the trick. The problem is that this will not make straight lines "curvy". You could use a tessellation shader to add sufficient geometry to compensate for this.
The most straightforward approach is two-fold. First you render your scene into a cubemap, i.e. render with a 90°×90° FOV into the 6 directions making up a cube. This allows you to use regular affine projections to render the scene.
In the second step you use the generated cubemap to texture a screen-filling grid, where the texture coordinates of each vertex are azimuth and elevation.
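A minimal sketch of that second step as a fragment shader over a screen-filling quad (uniform and varying names are assumptions): each pixel's texture coordinate is turned into azimuth/elevation, converted to a direction, and used to sample the cubemap.

    #version 330 core

    in vec2 vUV;                    // full-screen quad UV in [0,1]^2
    out vec4 fragColor;

    uniform samplerCube uEnvMap;    // cubemap rendered in the first step

    const float PI = 3.14159265358979;

    void main()
    {
        float azimuth   = (vUV.x - 0.5) * 2.0 * PI;   // -pi .. pi
        float elevation = (vUV.y - 0.5) * PI;         // -pi/2 .. pi/2

        vec3 dir = vec3(cos(elevation) * sin(azimuth),
                        sin(elevation),
                        cos(elevation) * cos(azimuth));

        fragColor = texture(uEnvMap, dir);
    }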
Another approach is to use tiled rendering with a very small FOV while rotating the "camera", kind of like taking a panoramic picture without a wide-angle lens. As a matter of fact, the cubemap-based approach is tiled rendering, but it's easier to get right than trying to do this directly with a changed camera direction and viewport placement.