How to do multipass rendering with OpenGL and Cg

I'd like to implement an algorithm for pencil rendering. I've got 32 textures with different stroke intensities and some models. The process goes like this: first, I should render the model using Phong shading to determine the intensity; then I should map the stroke textures onto the rendered result.
The textures should be stored in 3D texture space. The problem is that I don't know how to do multipass rendering with OpenGL and shaders, and I don't know how to access the textures with the right coordinates. What if the faces of the mesh are smaller than the texture? Can anybody show me some examples of doing this?

You need to render to some sort of offscreen buffer in order to do multi-pass rendering. Framebuffers are the current method of doing offscreen rendering in OpenGL. Apple has a good page describing how to use them. (It's not Mac-specific.)
I'm not sure I understand your question about texturing, so can't answer that part.
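A minimal sketch of the FBO setup that answer is pointing at, assuming a reasonably modern GL context; the sizes and the drawModelWithPhongShader() / drawFullScreenQuadWithPencilShader() helpers are placeholders, not code from the question:

    // Pass 1 target: a color texture plus a depth renderbuffer, attached to an FBO.
    GLuint colorTex, depthRb, fbo;

    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);

    // Pass 1: render the Phong-shaded model into the FBO.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, width, height);
    drawModelWithPhongShader();

    // Pass 2: back to the default framebuffer; the pass-1 result is now an
    // ordinary texture that the second shader can sample.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, windowWidth, windowHeight);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    drawFullScreenQuadWithPencilShader();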

Related

Can I carry out MSAA for deferred rendering by just rendering the geometry twice?

I have a question about 3D rendering.
Deferred rendering is very powerful, but it is notorious for not playing nicely with MSAA.
I can see why, but I came up with an idea that might solve that.
It's simple: do the deferred rendering completely and get the screen image into a texture. This texture (attached to a framebuffer or whatever) is of course not anti-aliased.
Then comes the further processing: draw the full scene again, but this time the fragment shader looks up the exact same position in the pre-rendered texture using texelFetch() and outputs that. Done.
It sounds silly, but I think it might work: if we draw the geometry again with the deferred-rendered result as the output color, we are re-rendering the scene with real geometry.
So we can now provide super-sampled depth information, and the GPU should be able to perform MSAA with aliased color but super-sampled depth. (It's similar to picking up only the 'center' of a fragment and evaluating that, as in the ordinary MSAA process.)
I'm not sure whether this description makes sense. I tested it with OpenGL, but it makes no difference compared with plain deferred rendering.
Does my idea work?
No, your idea does not work.
If you did not render the initial image with multisampling, reading from it later while doing multisampling will not magically create information that doesn't exist in that image.
In your method, every sample which corresponds to a particular pixel in the multisampled rendering will have the same color value. So if two primitives overlap in a pixel, writing to different samples, it won't matter, since both primitives will be generating the same color. All you would be doing is generating multiple different depth values within a pixel, and that doesn't actually contribute to an antialiased output (directly).
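For contrast, the buffer that receives the initial rendering has to be multisampled itself for MSAA to add any information. A rough sketch follows; the sample count of 4 and all names are arbitrary, and this alone does not address the cost of multisampled G-buffers in deferred shading:

    // Multisampled attachments: each pixel stores 4 samples, so two primitives
    // overlapping inside one pixel can actually produce different colors.
    GLuint msColorRb, msDepthRb, msFbo;

    glGenRenderbuffers(1, &msColorRb);
    glBindRenderbuffer(GL_RENDERBUFFER, msColorRb);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);

    glGenRenderbuffers(1, &msDepthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, msDepthRb);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24, width, height);

    glGenFramebuffers(1, &msFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, msFbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, msColorRb);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, msDepthRb);

    // ...render the scene into msFbo...

    // Resolve (average) the samples into a normal single-sample framebuffer.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);  // a regular FBO set up elsewhere
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);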

OpenGL mipmapping inconsistent?

I have a 512x512 texture which holds a number of images that I want to use in my application. After adding the image data to the texture I save the texture coords for the individual images. Later I apply these to some quads that I am drawing. The texture has mipmapping enabled.
When I take a screenshot of the rendered scene at exactly the same instant in two different runs of the application, I notice that there are differences in the image only among those quads textured using this mipmapped texture. Can mipmapping cause such an issue?
My best guess is that it has to do with precision in your shader. Check out this problem that I had (and fought with for a while) and my solution:
opengl texture mapping off by 5-8 pixels
It is probably a combination of mipmapping's automatic scaling of your texture atlas and the precision hints in your shader code.
Also see the other linked question:
Why is a texture coordinate of 1.0 getting beyond the edge of the texture?
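As a concrete illustration of what those links describe, a common mitigation when sub-images share one mipmapped atlas is to inset the stored texture coordinates by half a texel, so filtering never samples across a neighbouring image. The helper below is only a sketch of that idea, not code from the question:

    // UVs for a sub-image at pixel (x, y) with size (w, h) inside a 512x512 atlas,
    // pulled in by half a texel on each side to avoid bleeding under GL_LINEAR
    // filtering and mipmap level selection.
    struct UvRect { float u0, v0, u1, v1; };

    UvRect atlasUvs(int x, int y, int w, int h, int atlasSize = 512)
    {
        const float texel = 1.0f / atlasSize;
        UvRect r;
        r.u0 = x * texel + 0.5f * texel;
        r.v0 = y * texel + 0.5f * texel;
        r.u1 = (x + w) * texel - 0.5f * texel;
        r.v1 = (y + h) * texel - 0.5f * texel;
        return r;
    }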

Fur shading using GLSL

I develop a 3D Engine using GLSL and I want to add the fur shading effect.
I did some research to find a tutorial that correctly explains the 'fur shading' technique, and the best site I've found is the following one:
http://www.xbdev.net/directx3dx/specialX/Fur/
However, it uses the DirectX API and HLSL. Still, in short, it's very close to OpenGL and GLSL.
I understand that I have to render the scene in several passes with the same pair of shaders. To give the impression of spikes I just have to decrease the alpha value for each layer, so the last one is almost transparent.
For example, if I want to render fur with 5 layers on a plane, I must render my scene 5 times. With DirectX it seems to be easier than with OpenGL because HLSL has a built-in 'pass' keyword for rendering the scene in multiple passes. But with the OpenGL API I think I have to use a framebuffer object (FBO) with renderbuffers or textures: render the scene several times, and each time store the color buffer in a different texture (all attached to one FBO, or one FBO per texture). At the end of the process I will have a single color buffer which is the result of the superposition of the previous textures. Is my idea correct?
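For reference, the shell/layer loop described above is often done without any FBO at all: the mesh is simply drawn N times straight to the screen with blending, pushing the vertices out along the normal and fading the alpha each pass. A rough sketch, where furProgram, shellHeight and drawMesh() are hypothetical and the uniform names layerOffset / layerAlpha are made up for illustration:

    const int numLayers = 5;

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glUseProgram(furProgram);                      // the same pair of shaders every pass

    for (int i = 0; i < numLayers; ++i)
    {
        float t = (float)(i + 1) / numLayers;      // 0..1 across the shells
        glUniform1f(glGetUniformLocation(furProgram, "layerOffset"), t * shellHeight);
        glUniform1f(glGetUniformLocation(furProgram, "layerAlpha"), 1.0f - t);  // outer shells fade
        drawMesh();                                // vertex shader offsets vertices along the normal
    }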

'Render to Texture' and multipass rendering

I'm implementing an algorithm about pencil rendering. First, I should render the model using Phong shading to determine the intensity. Then I should map the texture to the rendered result.
I'm going to do multipass rendering with OpenGL and Cg shaders. Someone told me that I should try 'render to texture', but I don't know how to use this method to get the effect that I want. In my opinion, we should first use this method to render the mesh, which gives us a 2D texture of the whole scene. Now that we have drawn content to the framebuffer, we should render to the screen next, right? But how do I use the rendered texture and do some post-processing on it? Can anybody show me some code or links about it?
I made this tutorial; it might help you: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/
However, using RTT is overkill for what you're trying to do, I think. If you need the fragment's intensity in the texture, well, you already have it in your shader, so there is no need to render it twice...
Maybe this could be useful? http://www.ozone3d.net/demos_projects/toon-snow.php
Render to a texture with Phong shading.
Draw that texture to the screen again in a full-screen textured quad, applying a shader that does your desired operation.
I'll assume you need clarification on RTT and using it.
Essentially, your screen is a framebuffer (very similar to a texture); it's a 2D image at the end of the day. The idea of RTT is to capture that 2D image. The best way to do this is with a framebuffer object (FBO) (Google "framebuffer object" and click on the first link). From here, you have a 2D picture of your scene (you should check, by saving it to an image file, that it actually is what you want).
Once you have the image, you'll set up a 2D view and draw that image back onto the screen with an 800x600 quadrilateral or what-have-you. When drawing, you use a fragment program (shader), which transforms the brightness of the image into a greyscale value. You can output this, or you can use it as an offset to another, "pencil" texture.
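A sketch of what the second pass's fragment program could look like for the pencil case, written in GLSL rather than Cg for brevity and embedded as a C string; sceneTex, strokeTex, strokeScale and the choice of packing the 32 stroke images into a 3D texture indexed by intensity are assumptions based on the original question:

    // Fragment shader for the full-screen quad (pass 2).
    const char* pencilFragSrc = R"(
        #version 120
        uniform sampler2D sceneTex;     // pass-1 Phong render, bound to the FBO's color texture
        uniform sampler3D strokeTex;    // the 32 stroke images stacked along the 3rd axis
        uniform float     strokeScale;  // how often the strokes tile across the screen
        varying vec2 uv;                // passed through by a trivial vertex shader

        void main()
        {
            vec3  shaded    = texture2D(sceneTex, uv).rgb;
            float intensity = dot(shaded, vec3(0.299, 0.587, 0.114));  // luminance
            // Darker shading selects a denser stroke image via the 3rd coordinate.
            gl_FragColor = texture3D(strokeTex, vec3(uv * strokeScale, 1.0 - intensity));
        }
    )";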

Can you render two quads with transparency at the same point?

I'm learning how to use JOGL and OpenGL to render texture-mapped quads. I have a test program and a test quad, and I figured out how to enable GL_BLEND so that I can specify the alpha value of a vertex to make a quad with a sort of gradient... but now I want this to show through to another textured quad at the same position.
Drawing two quads with the same vertex locations didn't work; it only renders the first quad. Is this possible, then, or will I need to basically construct a custom texture on the fly based on what I want and then draw one quad with this texture? I was really hoping to take advantage of blending in this case...
Have a look at which glDepthFunc you're using; perhaps you're using GL_LESS/GL_GREATER, and it could work if you use GL_LEQUAL/GL_GEQUAL.
It's difficult to make out from the question exactly what you're trying to achieve, but here's a try.
For transparency to work correctly in OpenGL you need to draw the polygons from the furthest to the nearest to the camera. If your scene is static this is definitely something you can do, but if it's rotating and moving then this is usually not feasible, since you'd have to sort the polygons for each and every frame.
More on this can be found in this FAQ page:
http://www.opengl.org/resources/faq/technical/transparency.htm
For alpha blending, the renderer blends all colors behind the current transparent object (from the camera's point of view) at the time the transparent object is rendered. If the transparent object is rendered first, there is nothing behind it to blend with. If it's rendered second, it will have something to blend with.
Try rendering your opaque quad first, then rendering your transparent quad second. Plus, make sure your opaque quad is slightly behind your transparent quad (relative to the camera) so you don't get z-buffer striping.
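A minimal sketch of that ordering, written against the C OpenGL API (the JOGL calls have the same names on the GL object); drawOpaqueQuad() and drawTransparentQuad() are placeholders:

    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);

    // 1. The opaque quad first, writing depth as usual.
    glDisable(GL_BLEND);
    drawOpaqueQuad();                  // positioned slightly farther from the camera

    // 2. The transparent quad second, so the opaque colors are already in the framebuffer.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);             // commonly done so transparent geometry doesn't occlude
    drawTransparentQuad();             // positioned slightly nearer the camera
    glDepthMask(GL_TRUE);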