Fur shading using GLSL - opengl

I develop a 3D Engine using GLSL and I want to add the fur shading effect.
I did some research to find a tutorial that correctly explains the fur shading technique, and the best site I've found is the following one:
http://www.xbdev.net/directx3dx/specialX/Fur/
However, it uses the DirectX API and HLSL. Still, the ideas carry over closely to OpenGL and GLSL.
I understood that I have to render the scene in several passes with the same pair of shaders. To give the impression of spikes I just have to decrease the alpha value for each layer, so the last one is almost transparent.
For example, if I want to render fur with 5 layers on a plane, I must render my scene 5 times. With DirectX this seems easier than with OpenGL because HLSL effect files have a built-in 'pass' keyword for multi-pass rendering. With the OpenGL API I think I have to use a framebuffer object (FBO) with renderbuffer or texture attachments: render the scene several times, each time storing the color buffer in a different texture (all attached to one FBO, or one FBO per texture). At the end of the process I would have a single color buffer which is the superposition of the previous textures. Is my idea correct?
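To make the layered approach concrete, here is a minimal sketch of the draw loop in C++/OpenGL, assuming a `furProgram` whose vertex shader pushes each vertex outward along its normal by a `layerOffset` uniform, a `layerAlpha` uniform used for the fading transparency, and a `drawMesh()` helper; all of these names are illustrative and not taken from the linked tutorial.

```cpp
// Sketch of shell-based fur: the same mesh is drawn several times, each shell
// pushed slightly further out along the normals and rendered more transparent.
// Assumes a current OpenGL context; shader and mesh setup are omitted.
#include <GL/glew.h>

void drawMesh(); // draws the mesh geometry; defined elsewhere in the engine (assumed)

void drawFurShells(GLuint furProgram, int layerCount, float shellSpacing)
{
    glUseProgram(furProgram);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    for (int i = 0; i < layerCount; ++i)
    {
        float t = (float)(i + 1) / (float)layerCount;  // 0..1 from inner to outer shell
        glUniform1f(glGetUniformLocation(furProgram, "layerOffset"), t * shellSpacing);
        glUniform1f(glGetUniformLocation(furProgram, "layerAlpha"),  1.0f - t); // outer shells fade out
        drawMesh(); // same geometry, re-drawn once per layer
    }

    glDisable(GL_BLEND);
}
```

Note that in this variant the layers are accumulated with alpha blending directly into the current framebuffer, so an FBO is not strictly required; rendering each layer into its own texture and compositing them afterwards, as described above, is an alternative way to obtain the same superposition.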

Related

Can I carry out MSAA for deferred rendering by just rendering the geometry twice?

I have a question about 3D rendering.
Deferred rendering is very powerful, but it is known for not playing nicely with MSAA.
I clearly see why, but I came up with an idea that might solve that.
It's simple: do the deferred rendering completely and get the screen image into a texture. This texture (attached to a framebuffer or whatever) is of course not anti-aliased.
Here comes the further processing: next, draw the full scene again, but this time the fragment shader looks up the exact same position in the pre-rendered texture using texelFetch() and outputs that. Done.
It sounds silly, but I think it might work. If we draw the geometry again with the deferred-rendered result as the output color, it means we re-render the scene with actual geometry.
So we can now provide super-sampled depth information, and the GPU will be able to perform MSAA with aliased color but super-sampled depth geometry. (It's similar to picking up only the 'center' of a fragment and evaluating that in the ordinary MSAA process.)
I'm not sure whether this description makes sense. I tested it with OpenGL, but it made no difference compared to plain deferred rendering.
Does my idea work?
No, your idea does not work.
If you did not render the initial image with multisampling, reading from it later while doing multisampling will not magically create information that doesn't exist in that image.
In your method, every sample which corresponds to a particular pixel in the multisampled rendering will have the same color value. So if two primitives overlap in a pixel, writing to different samples, it won't matter, since both primitives will be generating the same color. All you would be doing is generating multiple different depth values within a pixel, and that doesn't actually contribute to an antialiased output (directly).
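For illustration, the second pass the question proposes would boil down to a fragment shader along these lines (the `sceneColor` sampler name is an assumption); as the answer points out, every covered sample within a pixel fetches the same texel, so no new edge information appears.

```glsl
#version 330 core
// Fragment shader of the proposed re-render pass: look up the already shaded
// (non-multisampled) deferred result at this pixel and output it unchanged.
uniform sampler2D sceneColor;  // assumed name for the deferred-rendering result
out vec4 fragColor;

void main()
{
    // texelFetch takes integer pixel coordinates and does no filtering.
    fragColor = texelFetch(sceneColor, ivec2(gl_FragCoord.xy), 0);
}
```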

how to retrieve z depth and color of a rendered pixel

I would like to retrieve the Z depth of each pixel of a rendered object in a scene.
I also need to retrieve the rendered color.
What OpenGL techniques can I use to implement this?
glReadPixels and CPU side code
Use glReadPixels to obtain both the RGB and depth buffers. Here are examples of both:
depth buffer got by glReadPixels is always 1
OpenGL Scale Single Pixel Line
That reads the buffers into CPU-accessible memory. This way is slow (due to the GPU/CPU sync) but should work on any platform.
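A minimal sketch of that route, assuming a current OpenGL context and a frame that has already been rendered; note that the depth value returned is the normalized depth-buffer value in [0,1], not an eye-space Z.

```cpp
// Read back the color and depth of a single pixel; (x, y) is measured from the
// bottom-left corner of the framebuffer.
#include <GL/glew.h>

void readPixelColorAndDepth(int x, int y)
{
    unsigned char rgba[4];
    float depth;

    glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, rgba);       // color
    glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);  // depth in [0,1]

    // Converting 'depth' to an eye-space or world-space Z requires the
    // projection parameters (near/far planes or the projection matrix).
}
```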
FBO render to texture and GPU shader
A faster method is to use an FBO to render to texture and use that output in the next rendering pass as an input texture for computing your stuff inside shaders. This, however, will not run properly on Intel and might need additional tweaking of the code between nVidia and AMD.
If you need per-pixel output, use a single QUAD covering your screen as the second rendering pass.
If you need only a single output for the whole screen, instead render a single POINT and compute everything in its fragment shader (scanning the whole texture inside it), something like this:
How to implement 2D raycasting light effect in GLSL
The difference is that by using shaders and an FBO you are not transferring data between GPU and CPU, so it's much faster.
The content of the target textures can still be read back by the CPU using texture-related GL functions.
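A sketch of this FBO route (GL 3.x style; error checking and the quad/shader setup are omitted, and the function and variable names are illustrative): render color and depth into textures, then bind both as inputs for the fullscreen-quad pass.

```cpp
// Create color and depth textures, attach them to a framebuffer object,
// render the scene into it, then sample both textures in a second pass.
// Assumes a current OpenGL 3.x context.
#include <GL/glew.h>

GLuint fbo, colorTex, depthTex;

void createRenderTarget(int width, int height)
{
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,  GL_TEXTURE_2D, depthTex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void renderAndProcess()
{
    // First pass: render the scene into the FBO (color + depth textures).
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... draw the scene here ...
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    // Second pass: bind both textures and draw a fullscreen quad whose
    // fragment shader reads color and depth at each pixel.
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    // ... draw the quad with the processing shader here ...
}
```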
GPU compute shaders
There are also compute shaders out there, but I have not used them yet, so I am just guessing; with them it might be possible to do your stuff in a single pass, and the form of the result and the computation should not be as limiting.
My bet is that you are doing some post-processing similar to deferred shading, so googling that topic and its tutorials might help.
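For completeness, here is a tiny compute-shader sketch of that idea; it is not from the original answer and is only a rough guess at how a single-pass version could look. It simply copies the rendered color image into an output image, with the real processing left as a placeholder.

```glsl
#version 430 core
// Minimal compute shader: read the rendered color image and write a processed
// value into an output image. Both image bindings are assumed to be set up by
// the application with glBindImageTexture.
layout(local_size_x = 16, local_size_y = 16) in;
layout(rgba8, binding = 0) readonly  uniform image2D srcColor;
layout(rgba8, binding = 1) writeonly uniform image2D dstResult;

void main()
{
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);
    if (p.x >= imageSize(srcColor).x || p.y >= imageSize(srcColor).y)
        return;                              // outside the image
    vec4 c = imageLoad(srcColor, p);
    imageStore(dstResult, p, c);             // placeholder: real processing goes here
}
```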

OpenGL: Objects are smooth if normally drawn, but edged when rendering to FBO

I have a problem with different visual results when using an FBO compared to the default framebuffer:
I render my OpenGL scene into a framebuffer object, because I use this for color picking. The thing is that if I render the scene directly to the default framebuffer, the output on the screen is quite smooth, meaning the edges of my objects look a bit as if they were anti-aliased. When I render the scene into the FBO and afterwards use the output to texture a quad that spans the whole viewport, the objects have very hard edges where you can easily see every single colored pixel that belongs to them.
Good:
Bad:
At the moment I have no idea what the reason for this could be. I am not using any kind of anti-aliasing.
System:
Fedora 18 x64
Intel HD Graphics 4000 and Nvidia GT 740M (same result)
Edit1:
As stated by Damon and Steven Lu, there is probably some kind of anti-aliasing enabled by the system by default. So far I couldn't figure out how to disable this feature.
The thing is that I was just curious why this setting only had an effect on the default framebuffer and not the one handled by the FBO. To get anti-aliased edges for the FBO too, I will probably have to implement my own AA method.
Once you draw your scene into a custom FBO, the externally defined MSAA level doesn't apply anymore. You must configure your FBO to have multisample texture or renderbuffer attachments, setting the number of sample levels along the way. Here is a reference.
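A sketch of such a setup, assuming a GL 3.x context (4 samples is just an example value): create multisampled renderbuffer attachments, render into the FBO, then resolve with a blit, since a multisampled attachment cannot be sampled like an ordinary texture.

```cpp
// Multisampled FBO with renderbuffer attachments, plus a resolve blit into the
// default framebuffer. Assumes a current OpenGL 3.x context.
#include <GL/glew.h>

GLuint msaaFbo, msaaColorRb, msaaDepthRb;

void createMsaaFbo(int width, int height, int samples /* e.g. 4 */)
{
    glGenRenderbuffers(1, &msaaColorRb);
    glBindRenderbuffer(GL_RENDERBUFFER, msaaColorRb);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_RGBA8, width, height);

    glGenRenderbuffers(1, &msaaDepthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, msaaDepthRb);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_DEPTH_COMPONENT24, width, height);

    glGenFramebuffers(1, &msaaFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, msaaColorRb);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,  GL_RENDERBUFFER, msaaDepthRb);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void resolveToDefaultFramebuffer(int width, int height)
{
    // Blit (resolve) the multisampled result into the single-sample default
    // framebuffer; the same call works with another FBO as the draw target.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```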

How to do multipass rendering with opengl and cg

I'd like to implement a pencil-rendering algorithm. I've got 32 textures with different stroke intensities and some models. The process goes like this: first, I render the model using Phong shading to determine the intensity; then I map the stroke texture onto the rendered result.
The textures should be stored in 3D texture space. The problem is that I don't know how to do multipass rendering with OpenGL and shaders, and I don't know how to access the textures with the right coordinates. What if the faces of the mesh are smaller than the texture? Can anybody show me some examples of doing this?
You need to render to some sort of offscreen buffer in order to do multi-pass rendering. Framebuffers are the current method of doing offscreen rendering in OpenGL. Apple has a good page describing how to use them. (It's not Mac-specific.)
I'm not sure I understand your question about texturing, so can't answer that part.
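For the texture-lookup part, one possible sketch (assuming the 32 stroke textures are packed as slices of a 3D texture, as the question suggests, and that the Phong intensity from the first pass is available as a screen-sized texture; all names here are illustrative):

```glsl
#version 330 core
// Second pass: the Phong intensity computed in the first pass ('intensityTex',
// rendered to an FBO) selects which slice of the stroke stack ('strokeMap') to
// use, i.e. how dense a stroke pattern this pixel gets.
uniform sampler3D strokeMap;
uniform sampler2D intensityTex;
in vec2 uv;                    // the mesh's own UVs address the stroke pattern
out vec4 fragColor;

void main()
{
    float intensity = texture(intensityTex,
                              gl_FragCoord.xy / vec2(textureSize(intensityTex, 0))).r;
    // Darker areas pick slices with denser strokes; the mapping direction
    // depends on how the 32 textures are ordered in the stack.
    fragColor = texture(strokeMap, vec3(uv, 1.0 - intensity));
}
```

Because the mesh's UVs address the stroke pattern, the on-screen stroke size depends on the UV mapping rather than on the size of the individual faces.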

How do I assign multiple textures to a single mesh in OpenGL?

First example:
You can take a huge rock-shaped mesh and put a tiled rock texture all over it.
Now, some places need to be covered with a grass texture (or other vegetation).
Another example:
Usually, terrain is built from tiled textures. In order to achieve a less "tiled" look, you can apply a 4-times-bigger (or 16, and so on) tiled texture on top of it, and by that you'll gain a nice "random" tiled look (I've seen that in the UDK docs).
Blender (the 3d graphics app) is OpenGL based, and it allows you to assign multiple materials to a single mesh.
How can I do it in my own OpenGL application?
Thanks,
Amir
P.S:
I'm looking for a better solution than rendering 50 tris with texture A and then 3 more tris with texture B.
What you're looking for is called multitexturing. Modern graphics cards have several texture units that each can sample a different texture. When you render your rock you specify vertices that have UV coordinates for each texture you want to render.
In OpenGL you can use glActiveTexture to select your active texture unit so that you can bind a texture to it and use it in subsequent rendering. Your vertices will need additional texture coordinate pairs; one pair per texture you intend to render.
The modern way to do multitexturing is with shaders (typically GLSL in OpenGL). Load and bind each texture to a different texture unit, set your sampler uniforms to the indices of the texture units you're using (0 for texture unit 0, etc.), sample each texture, and blend with the desired blending function to get your output color.
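A small sketch of both sides of that setup; the texture handles, uniform names, and the per-vertex blend factor are illustrative rather than a fixed convention.

```cpp
// C++ side: bind two textures to texture units 0 and 1 and tell the shader
// which unit each sampler uniform reads from. Assumes a current GL context
// and an already linked 'program'.
#include <GL/glew.h>

void bindRockAndGrass(GLuint program, GLuint rockTexture, GLuint grassTexture)
{
    glUseProgram(program);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, rockTexture);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, grassTexture);

    glUniform1i(glGetUniformLocation(program, "rockTex"),  0);  // texture unit 0
    glUniform1i(glGetUniformLocation(program, "grassTex"), 1);  // texture unit 1
}
```

```glsl
#version 330 core
// Fragment shader: sample both textures and blend them; here a per-vertex
// blend factor (or a mask texture) controls where the grass shows through.
uniform sampler2D rockTex;
uniform sampler2D grassTex;
in vec2 uv;
in float blendFactor;   // 0 = pure rock, 1 = pure grass
out vec4 fragColor;

void main()
{
    vec4 rock  = texture(rockTex,  uv);
    vec4 grass = texture(grassTex, uv);
    fragColor  = mix(rock, grass, blendFactor);
}
```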