GLSL - testing fragment world space coordinate intersection with geometry texture, and texture modification - opengl

I am exploring some GLSL and have something I want to try to implement. Here is the situation:
I have a previously rendered texture which stores only the world-space coordinates of fragments (rgb = xyz). This texture is being passed to another render pass; is it possible to take the world-position texture and sample it, to test whether the current fragment's world-space coordinate is a match?
An example would be two cameras: testing whether any of the points in 3D space rendered to a texture by camera A can also be seen by camera B.
Also, is it possible to have a texture that can be modified between several different shaders? I.e., have a camera render to a texture, then pass that texture to another shader and change it?
Any help is greatly appreciated, thanks :)

I have a previously rendered texture which stores only the world-space coordinates of fragments (rgb = xyz). This texture is being passed to another render pass; is it possible to take the world-position texture and sample it, to test whether the current fragment's world-space coordinate is a match?
An example would be two cameras: testing whether any of the points in 3D space rendered to a texture by camera A can also be seen by camera B.
Yes, it is possible. This is essentially a shadow map, but now you'll have to calculate the distances manually during sampling. It's unclear why you insist on storing the world-space XYZ coordinates and what the use case for this is. It should be much simpler and more efficient to store the depths in a depth texture and use the built-in depth-texture lookup.
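As a minimal sketch of that comparison (all names invented for illustration), camera B's fragment shader can project its own world-space position into camera A's screen space, fetch what camera A stored there, and compare the two points with a small tolerance:

#version 330 core
// Sketch: test whether this fragment's world position matches what camera A recorded.
in vec3 vWorldPos;               // world-space position interpolated from the vertex shader
uniform sampler2D worldPosTex;   // rgb = world xyz, rendered by camera A
uniform mat4 cameraAViewProj;    // the view-projection matrix used when rendering worldPosTex
uniform float matchEpsilon;      // tolerance, since the stored positions are quantized
out vec4 fragColor;

void main() {
    // Project this fragment into camera A's clip space, then into texture coordinates.
    vec4 clipA = cameraAViewProj * vec4(vWorldPos, 1.0);
    vec2 uvA = (clipA.xy / clipA.w) * 0.5 + 0.5;

    float visibleToA = 0.0;
    if (clipA.w > 0.0 &&
        all(greaterThanEqual(uvA, vec2(0.0))) &&
        all(lessThanEqual(uvA, vec2(1.0)))) {
        vec3 storedPos = texture(worldPosTex, uvA).rgb;
        // If camera A saw (roughly) the same surface point, the distance is small.
        visibleToA = distance(storedPos, vWorldPos) < matchEpsilon ? 1.0 : 0.0;
    }
    fragColor = vec4(vec3(visibleToA), 1.0);
}

This mirrors a shadow-map lookup; the epsilon plays the role of the usual depth bias.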
Also, is it possible to have a texture that can be modified between several different shaders? I.e., have a camera render to a texture, then pass that texture to another shader and change it?
Yes. You can render to a texture and then use imageLoad and imageStore (and related APIs) in another shader to modify it. You must be careful, however, with feedback loops. Because of the parallel nature of GPUs and their cache-incoherent architecture, this can get complicated, and a detailed answer would depend on the exact thing you're trying to achieve.
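For illustration, a compute shader can bind the previously rendered texture as an image and rewrite texels in place; the binding point and rgba8 format here are just assumptions:

#version 430 core
// Sketch: modify a previously rendered RGBA8 texture in place with image load/store.
layout(local_size_x = 16, local_size_y = 16) in;
layout(binding = 0, rgba8) uniform image2D img;   // bound via glBindImageTexture

void main() {
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);
    if (any(greaterThanEqual(p, imageSize(img)))) return;   // guard the texture edges
    vec4 c = imageLoad(img, p);
    imageStore(img, p, vec4(1.0) - c);                      // e.g. invert the stored colors
}

After the dispatch you would issue the appropriate glMemoryBarrier before sampling the texture again, which is part of the feedback-loop care mentioned above.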

Related

Stencil-like shader masking, but with shader control over how the mask is created?

What I'm trying to accomplish:
Draw geometry with backface culling disabled, and with a clip plane defined, such that the geometry is clipped and the drawn backfaces fill in the exposed interior to create a mask.
Then in another pass, render a quad which exactly matches the position of the clip plane, using a mask from the first pass to cover only the exposed holes with a textured surface, giving the illusion that the geometry is properly modified.
The clipped geometry may create multiple separated holes, so I can't just draw the plane over top without masking.
I can get part of the way there in the shader, testing gl_FrontFacing to see which pixels are over an exposed interior. However, I can't find a way to record this for a later pass.
My first thought was to use the stencil buffer, which would work great for constraining the second pass, but as far as I can tell there's no way for a shader to selectively write to it.
The only other thing that comes to mind is to use a framebuffer and have the shader write to an extra output buffer, and then feed that back into the second pass to do the filtering manually. But:
How would I know what part of the texture to sample? I assume I'd need to calculate the screen positions of the vertices, pass them along to be interpolated by the fragment shader, and test/discard from there?
Is there a better or perhaps more automatic (similar to stencil buffer) way to accomplish this?
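For reference, a rough sketch of that manual route (attachment indices and names are assumptions, not a confirmed solution): the first pass writes a mask into a second color attachment based on gl_FrontFacing, and the second pass reads it back at its own pixel with gl_FragCoord, so no extra vertex math is needed:

#version 330 core
// --- First pass fragment shader (sketch): record where back faces are visible ---
layout(location = 0) out vec4 sceneColor;   // normal color attachment
layout(location = 1) out vec4 maskOut;      // extra attachment, enabled with glDrawBuffers
void main() {
    sceneColor = vec4(0.5);                           // whatever the normal shading is
    maskOut = gl_FrontFacing ? vec4(0.0) : vec4(1.0); // back faces = exposed interior
}

#version 330 core
// --- Second pass fragment shader (sketch): keep only the masked holes ---
uniform sampler2D maskTex;   // the extra attachment from the first pass
out vec4 fragColor;
void main() {
    float mask = texelFetch(maskTex, ivec2(gl_FragCoord.xy), 0).r;
    if (mask < 0.5) discard;                 // not over an exposed interior
    fragColor = vec4(1.0, 0.0, 0.0, 1.0);    // the cap surface drawn on the clip plane
}

Because both passes cover the same screen pixels, gl_FragCoord already answers the "what part of the texture to sample" question.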

How to draw a sphere in D3D11, given position and radius?

To draw a sphere, one does not need to know anything but its position and radius. Thus, rendering a sphere by passing a triangle mesh sounds very inefficient unless you need per-vertex colors or other such features. Despite googling, searching the D3D11 documentation and reading Introduction to 3D Game Programming with DirectX 11, I failed to understand:
Is it possible to draw a sphere by passing only its position and radius to the GPU?
If not, what is the main principle I have misunderstood?
If yes, how to do it?
My ultimate goal is to pass more parameters later on which will be used by a shader effect.
You will need to implement a geometry shader. This shader should take the sphere center and radius as input and emit a bunch of vertices for rasterization. In general, this technique is called point sprites.
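For illustration only (shown in GLSL rather than HLSL, with invented names; an HLSL geometry shader with [maxvertexcount(4)] has the same structure), a geometry shader along these lines expands each point into a camera-facing quad, which the fragment shader can then shade as a sphere impostor:

#version 330 core
// Sketch: expand one point (sphere center, assumed already in view space) into a quad.
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
in float vRadius[];          // per-point radius passed through by the vertex shader
out vec2 gQuadCoord;         // -1..1 across the quad, useful for round-silhouette shading
uniform mat4 projection;

void main() {
    vec4 center = gl_in[0].gl_Position;   // view-space sphere center
    const vec2 corners[4] = vec2[](vec2(-1.0, -1.0), vec2(1.0, -1.0), vec2(-1.0, 1.0), vec2(1.0, 1.0));
    for (int i = 0; i < 4; ++i) {
        gQuadCoord = corners[i];
        gl_Position = projection * (center + vec4(corners[i] * vRadius[0], 0.0, 0.0));
        EmitVertex();
    }
    EndPrimitive();
}

The fragment shader can discard pixels where gQuadCoord falls outside the unit circle to get a round silhouette, and ray-cast the sphere for correct shading and depth.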
One option would be to use tessellation.
https://en.wikipedia.org/wiki/Tessellation_(computer_graphics)
Most of the mesh will be generated on the GPU side.
Note:
In the end more data still reaches the shaders, because the sphere will be split into triangles that are each rasterized individually.
But the split is done on the GPU side.
While you can create a sphere from a single point and radius on the GPU, it's generally not very efficient. With higher-end GPUs you could use hardware tessellation, but even that would be better done a different way.
The better solution is to use instancing: render many instances of the same VB/IB of sphere geometry, scaled and translated to different positions and sizes.
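A rough sketch of the instancing route (shown here as a GLSL vertex shader with invented names; in D3D11 the equivalent is a second vertex buffer with per-instance data and a DrawIndexedInstanced call): upload one unit-sphere mesh once, then feed center and radius per instance:

#version 330 core
// Sketch: render many spheres as instances of a single unit-sphere mesh.
layout(location = 0) in vec3 aUnitSpherePos;    // vertex of a unit sphere centered at the origin
layout(location = 1) in vec4 aCenterAndRadius;  // per-instance: xyz = center, w = radius
                                                // (set up with glVertexAttribDivisor(1, 1))
uniform mat4 viewProj;

void main() {
    vec3 worldPos = aCenterAndRadius.xyz + aUnitSpherePos * aCenterAndRadius.w;
    gl_Position = viewProj * vec4(worldPos, 1.0);
}

The whole batch then goes out in one glDrawElementsInstanced call with the instance count equal to the number of spheres.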

Multi-pass shading using render-to-texture

I'm trying to implement a multi-pass rendering method using OpenSceneGraph. However, I'm not entirely certain whether my problem is theoretical or due to a lack of applied knowledge of OSG. Thus far, I've successfully implemented multi-pass shading by rendering to a texture using an orthogonal projection, but I cannot seem to make a perspective projection work.
It may be that I don't quite understand how to implement multi-pass shading. Of course, I have to pre-render the entire scene with the multi-pass shaders to a texture, then use the texture in the final render. However, I'm not talking about creating a separate texture for each object in the scene, but effectively capturing a screenshot of the entire prerendered scene. Then, from that texture alone, applying the rendered effects to the individual geometries.
I assume this means I would have to do an extra conversion of the vertex coordinates for each geometry in the vertex shader. That is, after computing:
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
I would need to go a step further and calculate the vertex's screen coordinates in order to map the vertices correctly (again, given that the texture consists of an entire screen shot of the scene).
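Concretely, that extra step is just the perspective divide followed by a remap from NDC [-1, 1] into texture space [0, 1]. A minimal sketch in legacy GLSL (assuming the pre-rendered screen-sized texture is bound as sceneTex and was rendered from the same camera):

// Vertex shader (sketch): pass the clip-space position along for a projective lookup.
varying vec4 clipPos;
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    clipPos = gl_Position;
}

// Fragment shader (sketch): perspective divide per fragment, then remap to [0, 1].
varying vec4 clipPos;
uniform sampler2D sceneTex;   // the pre-rendered screenshot of the scene
void main() {
    vec2 screenUV = (clipPos.xy / clipPos.w) * 0.5 + 0.5;
    gl_FragColor = texture2D(sceneTex, screenUV);
}

Note that the divide by w has to happen per fragment (or via texture2DProj) for a perspective projection; with an orthogonal projection w is constant, which may be why that case worked while the perspective one did not.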
If I am correct, then I must be able to pre-render the scene in a perspective view identical to the view used in the final render, rather than an orthogonal view. This is where I have trouble. I can make an orthogonal view do what I want, but not the perspective view.
Am I correct in my approach? The only other approach I can imagine is to render everything to a screen-filling quad (in effect, the same thing as converting to screen coordinates), but that doesn't alleviate the need to use a perspective projection in the pre-render stage.
Thoughts? Links??
edit: I should also point out that in my successful attempts, I used a fragment shader only. The perspective projection worked, but, of course, the screen aligned quad I was using was offset rather than centered. I added a pass-through vertex shader and everything went blank.
As it turns out, my approach was correct. It's especially nice as it avoids having to add another camera to my scene graph to render the final output - I can simply use the main camera. Unfortunately, it means that all of my output textures are rendered at the screen resolution, rather than a resolution appropriate to the size of the object. That is, if my screen is 1024 x 1024, then so is the output texture, one for each pre-render camera in the graph. Not exactly efficient, but it'll do for now.

OpenGL: Using shaders to create vertex lighting by using pre-calculated colormap?

First of all, I have very little knowledge of what shaders can do, and I am very interested in implementing vertex lighting. I am attempting to use a 3D colormap to calculate the vertex color at each position in the world, and also to interpolate the color from the nearby entries of the colormap.
I can't use typical OpenGL lighting because it's probably too slow and there are a lot of lights I need to render. I am going to "render" the lights into the colormap first, and then I could either manually map every vertex drawn to the corresponding color from the colormap...
...Or I could somehow automate this process, so I wouldn't have to change the color values of the vertices myself; perhaps a shader could do this for me?
The question is: is this possible, and if it is, what do I need to know to make it happen?
Edit: Note that I also need to update the lightmap efficiently, regardless of the size of the lightmap, so the update should touch only the specific part of the lightmap I want to change.
It almost sounds like what you want to do is render the lights to your color map, then use your color map as a texture, but instead of decal mode set it to modulate mode, so it's multiplied with the existing color instead of just replacing it.
That is different in one way though: instead of just affecting the vertices, it'll map to the individual fragments (pixels, in essence).
Edit: What I had in mind wasn't a 3D texture -- it was a cube map. Basically, create a virtual cube surrounding everything in your "world". Create a 2D texture for each face of that cube. Render your coloring to the cube map. Then, to color a vertex you (virtually) extend a ray outward from the center, through the vertex, to the cube. The pixel you hit on the cube map gives you the color of lighting for that vertex.
Updating should be relatively efficient -- you have normal 2D textures for the top, bottom, front, etc., and you update them as needed.
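A sketch of that lookup in a vertex shader, assuming the lighting cube map and the world-space center of the virtual cube are supplied as uniforms (all names invented here):

#version 330 core
// Sketch: color each vertex from a lighting cube map surrounding the scene.
layout(location = 0) in vec3 aWorldPos;   // vertex position, assumed already in world space
uniform samplerCube lightCube;            // the six pre-rendered lighting faces
uniform vec3 cubeCenter;                  // world-space center of the virtual cube
uniform mat4 viewProj;
out vec3 vLightColor;

void main() {
    vec3 dir = normalize(aWorldPos - cubeCenter);        // ray from the center through the vertex
    vLightColor = textureLod(lightCube, dir, 0.0).rgb;   // explicit LOD: no derivatives in a vertex shader
    gl_Position = viewProj * vec4(aWorldPos, 1.0);
}

The fragment shader would then just multiply the interpolated vLightColor with the surface color.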
If you can't use the fixed-function pipeline functionality, the best way to do per-vertex lighting is to do all the lighting calculations per vertex in the vertex shader; when you then pass the result on to the fragment shader, it will be correctly interpolated across the face.
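A minimal sketch of that approach, with a single point light whose parameters are assumed uniforms:

#version 330 core
// Sketch: simple per-vertex Lambert lighting; the color interpolates across each face.
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;
uniform mat4 modelViewProj;
uniform mat4 model;
uniform vec3 lightPosWorld;
uniform vec3 lightColor;
out vec3 vColor;

void main() {
    vec3 worldPos = (model * vec4(aPosition, 1.0)).xyz;
    vec3 n = normalize(mat3(model) * aNormal);   // fine as long as the model matrix has uniform scale
    vec3 l = normalize(lightPosWorld - worldPos);
    vColor = lightColor * max(dot(n, l), 0.0);   // diffuse term only
    gl_Position = modelViewProj * vec4(aPosition, 1.0);
}

The matching fragment shader only has to output the interpolated vColor (times the material color).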
Another way to deal with performance issues when using a lot of light sources is to use deferred rendering, as it will only do lighting calculations on the geometry that is actually visible.
That is possible, but will not be effective on current hardware.
You want to render light volumes into a 3D texture. The rasterizer works on a 2D surface, so your volumes have to be split along one of the axes. The split can be done in one of the following ways (the layer-selection idea is sketched below):
Different draw calls for each split
Instanced draw, with layer selection based on gl_InstanceID (will require a geometry shader)
Branch in geometry shader directly from a single draw call
In order to implement it, I would suggest reading the GL 3 specification and examples. It's not going to be easy, nor will the result be fast enough for complex scenes.
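For illustration, a minimal sketch of the layer selection behind the last two options: a geometry shader writes gl_Layer to route each triangle into one Z-slice of a layered 3D-texture attachment (the uniform name is invented):

#version 330 core
// Sketch: route each incoming triangle to one slice of a layered framebuffer attachment
// (e.g. a 3D texture attached with glFramebufferTexture).
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
uniform int sliceIndex;   // which Z-slice this draw call (or instance) targets

void main() {
    for (int i = 0; i < 3; ++i) {
        gl_Layer = sliceIndex;
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}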

Polygonal gradients with OpenGL

I'm wondering how I could create a gradient with multiple stops and a direction, given that I'm working with polygons. Right now I'm creating gradients by changing the colors of the vertices, but this is limiting. Is there another way to do this?
Thanks
One option you may have is to render a simple polygon with a gradient to a texture, which you then use to texture your actual polygon.
Then you can rotate the source polygon and anything textured with its image will have its gradient rotate as well, without the actual geometry changing.
The most flexible way is probably to create a texture with the gradient you want, and then apply that to your geometry.
If you're using a shader, you can pass your vertex world positions from your vertex shader and they'll be interpolated for your fragment shader, so for every fragment you'll know where it is in world space (of course you can use any space). Then it's just a matter of choosing a transfer function to map that value to a color. You can implement any kind of elaborate algorithm, using B-splines or whatever, in your fragment shader.
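A small sketch of that idea, with a two-stop gradient along an arbitrary direction (all uniform names are made up):

#version 330 core
// Sketch: gradient as a transfer function over the interpolated world position.
in vec3 vWorldPos;             // passed through from the vertex shader
uniform vec3 gradientOrigin;   // point where the gradient starts
uniform vec3 gradientDir;      // normalized direction of the gradient
uniform float gradientLength;  // distance over which it runs from 0 to 1
uniform vec3 colorA;
uniform vec3 colorB;
out vec4 fragColor;

void main() {
    float t = dot(vWorldPos - gradientOrigin, gradientDir) / gradientLength;
    t = clamp(t, 0.0, 1.0);
    fragColor = vec4(mix(colorA, colorB, t), 1.0);   // swap mix() for any fancier transfer function
}

More stops just mean choosing which pair of colors t falls between, or sampling a small 1D gradient texture with t.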