HLSL multiple passes to blur (RTT?) - hlsl

I'm trying to build up a complex water shader (I got the water shader from an example on the internet).
Now I want to add a feature: blend a blurred pattern grid into the water.
At the moment the water and the blur work, but when I try to build the blur effect with some randomization, I end up using too many instructions for the shader... :(
I've searched for topics like "hlsl multiple passes", "hlsl render to texture" and "hlsl multiple passes without texture", because I don't have an existing "ground" texture.
I build up the water from a normal map and an environment map. Is it possible to get this whole map / texture / shader data from the first pass into the second?
When I simply execute both passes, the color from the first pass gets completely overwritten. :(
I hope you can understand my problem and that you have all the information you need.
It would be nice if you could help me.
Thanks...

You can try compiling for shader model 3.0, which allows you to use more instructions.
If you want to perform multiple passes, you don't have to render to a texture. You can enable alpha blending to prevent the previous colors from being overwritten.
However, this will not work if you need to sample the output of the previous pass. In that case you need two textures (A and B, both created as render targets). Follow these steps:
1. Set the render target to texture A.
2. Render the first pass.
3. Set the render target to texture B (or to the final render target, if you don't need to sample texture B later in the rendering process).
4. Set up the HLSL sampler with texture A (using ID3DXEffect::SetTexture, for example).
5. Render the second pass (sampling from texture A).
If you have more than two passes, swap A and B and repeat steps 3-5 for each pass.
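A minimal C++/Direct3D 9 sketch of that ping-pong loop, under some assumptions: texA/texB are textures you have already created with D3DUSAGE_RENDERTARGET, surfA/surfB are their level-0 surfaces from GetSurfaceLevel(), "previousPass" is a hypothetical sampler parameter name in your .fx file, DrawFullScreenQuad() stands in for however you already draw the water geometry, and BeginScene/EndScene plus error checking are omitted:

    #include <d3d9.h>
    #include <d3dx9.h>
    #include <utility> // std::swap

    void DrawFullScreenQuad(IDirect3DDevice9* device); // assumed helper: draws the water/screen quad

    void RenderPingPong(IDirect3DDevice9* device, ID3DXEffect* effect,
                        IDirect3DTexture9* texA, IDirect3DSurface9* surfA,
                        IDirect3DTexture9* texB, IDirect3DSurface9* surfB)
    {
        // Remember the real back buffer so the last pass can render to the screen.
        IDirect3DSurface9* backBuffer = NULL;
        device->GetRenderTarget(0, &backBuffer);

        IDirect3DTexture9* srcTex  = texA;  IDirect3DSurface9* srcSurf = surfA;
        IDirect3DTexture9* dstTex  = texB;  IDirect3DSurface9* dstSurf = surfB;

        UINT numPasses = 0;
        effect->Begin(&numPasses, 0);
        for (UINT pass = 0; pass < numPasses; ++pass)
        {
            bool firstPass = (pass == 0);
            bool lastPass  = (pass == numPasses - 1);

            // Pass 0 renders into texture A; the final pass renders to the back
            // buffer; everything in between ping-pongs between A and B.
            device->SetRenderTarget(0, firstPass ? srcSurf
                                                 : (lastPass ? backBuffer : dstSurf));
            device->Clear(0, NULL, D3DCLEAR_TARGET, 0, 1.0f, 0);

            if (!firstPass)
                effect->SetTexture("previousPass", srcTex); // feed the previous result in

            effect->BeginPass(pass);
            DrawFullScreenQuad(device);
            effect->EndPass();

            if (!firstPass)
            {
                std::swap(srcTex, dstTex);   // what we just wrote becomes the next input
                std::swap(srcSurf, dstSurf);
            }
        }
        effect->End();
        backBuffer->Release();
    }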

Related

unity3d, multiple render targets - different behavior in Direct3D/OpenGl

I'm writing a shader for unity3d. The shader uses multiple render targets to render a post-processing effect.
However, I've run into an interesting issue.
When Unity3d runs in Direct3D mode, by default all standard shaders write data only into the first color buffer (i.e. the one with index 0). That is, if I attach 3 color buffers to a camera and call Camera.Render, the color buffer with index 0 will contain the rendered scene, and all the other buffers will remain untouched unless some shader specifically writes to them. My shader relies on that behavior (I use the buffers with indexes 1 and 2 to accumulate data needed for the post-process effect).
However, in OpenGL mode the standard unity3d shaders write into ALL color buffers at once. That is, if I attach multiple render buffers to a camera and call Camera.Render, all 3 buffers will contain a copy of the rendered scene.
That breaks my shader in OpenGL mode.
How can I fix that? I need to render the whole scene in one go, and only objects that have a specific shader should modify the additional color buffers.
I need to render the scene in one go because using layer masks causes Unity to recalculate projector shadows for ALL lights, and I need the shadows to be correct.
Advice?
Sadly, it turned out that "not writing into one of the render targets" is undocumented behavior in OpenGL. The standard Unity shader, when compiled for the forward rendering path, produces a gl_FragData[0] = ...; assignment and writes into only one buffer, which triggers the undocumented behavior and causes the mess.
In order to fix the problem, I would need to make Unity write data explicitly into the additional render targets in its standard shaders. Unfortunately, this cannot be done, because there is no "entry point" to hook the standard shader and write additional data into the other color buffers. The closest thing is the "finalcolor" modifier, but it does not actually allow writing into additional buffers from a CG shader (that would require the additional data to be output from the fragment shader, which is inaccessible from a surface shader); it only lets you modify one color.
I decided to rewrite a portion of the shader (so it won't trigger the undocumented behavior in OpenGL) and gave up on having Unity shadowmap support in the effect. As far as I know, there are no other options short of modifying the Unity engine (which requires "special arrangements" and source code access) or replacing the entire lighting system with my own.

How to apply a vertex shader to all vertices in a scene in OpenGL?

I'm working on a small engine in OpenTK right now, and I've got shaders working so far. I wonder, though, how it is possible to apply a shader to an entire scene. I've seen this done in Minecraft, for example, where someone created a shader that warped the entire scene. But since every object is rendered with its own shader active, how would I achieve this?
You seem to be referring to a technique called post processing. The way it works is that you first render the entire scene to a texture using the shaders you already have. You can then render this texture to the screen using a fragment shader to apply various effects like motion blur, warping or depth of field.
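In rough C++/OpenGL terms, one frame then looks like the sketch below (the OpenTK calls map one-to-one). This is only an outline: sceneFBO, sceneTexture, postProcessProgram, DrawScene() and DrawFullScreenQuad() are placeholder names, and the FBO setup itself is not shown here:

    // 1. Render the whole scene, with its normal shaders, into an off-screen
    //    framebuffer whose colour attachment is sceneTexture.
    glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
    glViewport(0, 0, screenWidth, screenHeight);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    DrawScene();

    // 2. Switch back to the default framebuffer and draw one full-screen quad,
    //    sampling sceneTexture in the post-processing fragment shader.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(postProcessProgram);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, sceneTexture);
    glUniform1i(glGetUniformLocation(postProcessProgram, "sceneTex"), 0);
    DrawFullScreenQuad();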
"But since every object is rendered with its own shader active"
That's not how OpenGL works. In fact there's no such thing as "models" (what you probably mean by "object") in OpenGL. OpenGL draws primitives (points, lines and triangles) one at a time. Furthermore there's no hard association between a set of primitives and the shaders being used.
It's trivial to bind a single shader program at the beginning of a batch, and every primitive of that batch is then subjected to this shader. If the batch consists of the whole scene, then the whole scene uses that shader.
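For example (sceneProgram and the Draw*() calls are placeholder names):

    glUseProgram(sceneProgram);  // one program object for everything that follows
    DrawTerrain();
    DrawWater();
    DrawCharacters();            // all of these primitives go through sceneProgram
    glUseProgram(0);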
AFAIK, you can only bind one vertex shader at a time.
What you may want to try is to render to a texture first, then re-render that texture onto the screen while applying some changes to it (warping it, for example). You can also extract the depth buffer and use it if you have a more complex change that you want to apply.
If you bind the shader you want before the render loop, it will affect all items drawn until you unbind it (i.e. bind program #0) or disable GL_TEXTURE_2D via glEnable()/glDisable().

OpenGL3 two sets of shaders, texture showing black

I've recently succeeded at making a small test app with a GL_TEXTURE_RECTANGLE. Now I'm trying to integrate it into my larger project, but when I call glBindTexture(GL_TEXTURE_RECTANGLE, _tex_id[0]) inside the render function, it causes a GL_INVALID_OPERATION error. The texture image sometimes shows for a fraction of a second, then turns black and stays black.
I am trying to do this by using two sets of vertex and fragment shaders, one set for the 3D scene, and one set for the 2D overlay, but I've never tried this before so I don't know if that's what's causing the error, or if I should be going about this a different way. The shaders are all compiling and linking fine.
Any insight would be much appreciated, and if it would help to see some code, let me know and I'll post some of it (although I think it may be too much for anyone to reasonably look through).
Edit: gDEBugger breaks at the call to glBindTexture(), and when I click on the breakpoint, the properties window shows a picture of one of my other textures (one that's being loaded by the 3D scene's shaders). It shows that it's trying to load texture number 1, but I know this number is already being used to draw the 3D scene's texture shown in the properties window. Why would glGenTextures() give me overlapping texture id numbers? Is this normal, or could it be part of the problem?
The black texture was due to me not forwarding some vertex shader inputs (normals) through to the fragment shader, even though I'm not using normals for anything in the 2D overlay shaders. As soon as I added outputs for all the inputs and forwarded them along to the fragment shader, the texture was no longer black, but it was still disappearing after a fraction of a second.

That was because I was calling glBindTexture(GL_TEXTURE_RECTANGLE, 0) at the end of the render function in the hope that it would clean up some state. This was clearly the wrong thing to do, because removing that call caused the 2D texture to stay on-screen.

Furthermore, calling glBindTexture() with the GL_TEXTURE_RECTANGLE target seems to work during the texture setup stage, but during rendering the GL_TEXTURE_RECTANGLE target was causing the GL_INVALID_OPERATION error. Changing the target to GL_TEXTURE_2D only in the render function made the error go away, and everything seems to work nicely now.

Applying a shader to a framebuffer object to get a fisheye effect

Let's say I have an application (the details of the application should be irrelevant to solving the problem). Instead of rendering to the screen, I am somehow able to force the application to render to a framebuffer object instead (by messing with GLEW or intercepting a call in a DLL).
Once the application has rendered its content to the FBO, is it possible to apply a shader to the contents of the FBO? My knowledge is limited here, so from what I understand, at this stage all information about vertices is no longer available and all the necessary tests have been applied, so what's left in the buffer is just pixel data. Is this correct?
If it is possible to apply a shader to the FBO, is it possible to get a fisheye effect? (Like this, for example: http://idea.hosting.lv/a/gfx/quakeshots.html)
The technique used in the link above is to create 6 different viewports, render each viewport to a cubemap face and then apply the texture to a mesh.
Thanks
A framebuffer object encapsulates several other buffers, specifically those that are implicitly indexed by fragment location. So a single framebuffer object may bundle together a colour buffer, a depth buffer, a stencil buffer and a bunch of others. The individual buffers are known as renderbuffers.
You're right: there's no geometry in there. For the purposes of reading back the scene you get only final fragment values, which, if you're hijacking an existing app, will probably be a 2D pixel image of the frame and some other things that you don't care about.
If your GPU has render-to-texture support (originally an extension circa OpenGL 1.3, but you'd be hard pressed to find a GPU without it nowadays, even in mobile phones), then you can attach a texture as one of the buffers within a framebuffer. So the rendering code is exactly as it would be normally, but it ends up writing the results to a texture that you can then use as a source for drawing.
Fragment shaders can programmatically decide which location of a texture map to sample in order to create their output. So you can write a fragment shader that applies a fisheye lens, though you're restricted to the field of view rendered in the original texture, obviously. Which would probably be what you'd get in your Quake example if you had just one of the sides of the cube available rather than six.
In summary: the answer is 'yes' to all of your questions. There's a brief introduction to framebuffer objects here.
Look here for some relevant info:
http://www.opengl.org/wiki/Framebuffer_Object
The short, simple explanation is that an FBO is the 3D equivalent of a software frame buffer. You have direct access to individual pixels, instead of having to modify a texture and upload it. You can get shaders to point to an FBO. The link above gives an overview of the procedure.
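As a concrete illustration (a minimal C++ sketch only; width/height are placeholders and error handling is omitted), creating an FBO that renders into a texture you can later sample in the fisheye shader might look roughly like this:

    GLuint fbo = 0, colorTex = 0, depthRbo = 0;

    // Colour attachment: a texture we can later sample from in a shader.
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Depth attachment: a renderbuffer, since we don't need to sample depth here.
    glGenRenderbuffers(1, &depthRbo);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

    // The framebuffer object ties the two together.
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRbo);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        ; // handle the error

    // While fbo is bound, all rendering ends up in colorTex; afterwards bind
    // framebuffer 0 again and use colorTex as the input texture for the fisheye pass.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);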

DirectX post-processing shader

I have a simple application in which I need to let the user select a shader (an .fx HLSL or assembly file, possibly with multiple passes, but containing only pixel shaders) and preview it.
The application runs, the list of shaders comes up, and a button launches the "preview window."
From this preview window (which has a DirectX viewport in it), the user selects an image, and the shader is run on that image and the result is displayed. Only one frame needs to be rendered (not real-time).
I have a vertex/pixel shader combination set up that takes a quad and renders it to the screen, textured with the chosen image. This works perfectly.
I need to then run another effect, purely pixel shader, on the output from the first effect, and display the final image (post-processed) to the screen. This doesn't work at all.
I've tried for the past few days to get it working, but for no apparent reason, the identical code blocks used to render each effect only render the first one. I can add the second shader file as a second pass in the first shader file and it runs perfectly (although that completely defeats my goal of previewing user-created shaders). When I try to use a second effect (which loads and compiles just fine), it does nothing.
I've taken the results of the first shader (with GetRenderTargetData) and placed them in a texture & surface (destTex and destSur), then set that texture as the input for the second pass (using dev->SetTexture and later effect->SetTexture("thisframe", destTex)).
All calls succeed, effects compile, textures load, quads are drawn, but the effect is not visible.
I suspected at first the device (created with software vertex processing) was causing the issue, but that doesn't seem to be the case (I tried with hardware and mixed).
Additionally, the second shader isn't visible with either a HAL or a REF device (not a problem to test, since the app isn't real-time anyway).
Everything is written in C++ for Direct3D 9.
Try clearing the depth-stencil buffer after each time you render the quad.
First create a texture as a render target, then render the first shader directly into that texture. Finally, render the second shader to the back buffer with that texture as its input.
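A rough C++/Direct3D 9 sketch of that flow, under some assumptions: intermediateTex, firstEffect and secondEffect are hypothetical names, RenderQuadWithEffect() stands in for your existing quad-drawing code, "thisframe" is the sampler name from the question, and BeginScene/EndScene plus error checking are omitted:

    // Create the intermediate texture as a render target (must live in D3DPOOL_DEFAULT).
    IDirect3DTexture9* intermediateTex  = NULL;
    IDirect3DSurface9* intermediateSurf = NULL;
    device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                          D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &intermediateTex, NULL);
    intermediateTex->GetSurfaceLevel(0, &intermediateSurf);

    IDirect3DSurface9* backBuffer = NULL;
    device->GetRenderTarget(0, &backBuffer);

    // Pass 1: the first (working) effect renders the textured quad into the texture.
    device->SetRenderTarget(0, intermediateSurf);
    device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0, 1.0f, 0);
    RenderQuadWithEffect(firstEffect);

    // Pass 2: the user-selected effect samples that texture and writes to the
    // back buffer. Clearing the depth-stencil buffer here is the point of the
    // answer above.
    device->SetRenderTarget(0, backBuffer);
    device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0, 1.0f, 0);
    secondEffect->SetTexture("thisframe", intermediateTex);
    RenderQuadWithEffect(secondEffect);

    backBuffer->Release();
    intermediateSurf->Release();
    intermediateTex->Release();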
There must be some kind of vertex input and vertex processing (either fixed-function or a shader) in order for the pixel shader to run. Are you supplying the vertex shader, and if so, are you sure it does what the pixel shader expects? What does your draw call look like?
It's probably worth looking at a PIX trace of your app to see what the device state is when trying to use the user effect.