Rendering an object more than once - C++

Right now I'm facing the issue of rendering the same object more than once in DirectX 11, as the object has:
A diffuse shader
A directional lighting shader
A texture shader
Now the final color should be all of them somehow put together, maybe something like this:
Render Diffuse
Render Texture
Render Directional
Final Color = (Diffuse + Texture) * Lighting // Not sure about this though
But how can this be achieved? Without the Effects framework!

It can be achieved in DirectX 11 in a couple of ways. The first is by making an "uber shader", which means doing the diffuse, texture and lighting work in a single shader. The second is to use dynamic shader linking and dynamically link unique diffuse/texture/lighting shaders together at runtime; the June 2010 DirectX SDK has a good example of dynamic shader linking. Also, the usual combination of colors is:
Final Color = Diffuse * Texture * Lighting
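A minimal uber-shader pixel shader sketch of that combination (the resource names, constant buffer layout and input structure are assumptions, not from the question):

Texture2D g_DiffuseMap : register(t0);
SamplerState g_Sampler : register(s0);

cbuffer LightBuffer : register(b0)
{
    float4 ambientColor;
    float4 diffuseColor;
    float3 lightDirection; // direction the light travels
    float  padding;
};

struct PS_INPUT
{
    float4 position : SV_POSITION;
    float2 tex      : TEXCOORD0;
    float3 normal   : NORMAL;
};

float4 PS(PS_INPUT input) : SV_Target
{
    // Texture term
    float4 texColor = g_DiffuseMap.Sample(g_Sampler, input.tex);
    // Directional lighting term: ambient plus N.L-scaled diffuse
    float  nDotL    = saturate(dot(normalize(input.normal), -lightDirection));
    float4 lighting = saturate(ambientColor + diffuseColor * nDotL);
    // Diffuse, texture and lighting combined in one pass
    return texColor * lighting;
}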

Related

How to know when we have a texture in a fragment shader

I have a C++/OpenGL render engine that uses a library to do the character animation / rendering. Each character uses a mix of textures, no textures, vertex shaders, no vertex shaders etc. I want to be able to add a tint to some of the characters, but not all of them. I think the easiest way to do this is using a fragment shader to apply the tint color to the color of the fragment. I am using Cg, as this is a requirement of the project.
The main body of my rendering engine would be something like:
Enable my tint fragment shader
Call library code to do character rendering
Disable my tint fragment shader
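In Cg runtime terms that wrapper might look roughly like this (a sketch; tintProgram, RenderCharacters and the arbfp1 profile are assumed names, not from the question):

#include <Cg/cg.h>
#include <Cg/cgGL.h>

cgGLEnableProfile(CG_PROFILE_ARBFP1);   // enable my tint fragment shader
cgGLBindProgram(tintProgram);
RenderCharacters();                     // library code does the character rendering
cgGLDisableProfile(CG_PROFILE_ARBFP1);  // disable my tint fragment shader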
Within the shader the tint is applied by multiplying the fragment color, the fragment's texture sample and the tint color together. This all works fine except when no texture is enabled/bound to GL_TEXTURE_2D; then I just get black. I've been able to work around this by using textureSize and checking for a texture width greater than 1, but this feels fairly cheesy. Is there a better way to do this?
Also, as I have implemented it, textures are applied as though the GL_MODULATE texture environment setting were on. It would be nice to know what the current OpenGL setting is and apply that instead.
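One common alternative to the textureSize check (a sketch, not from the question): keep a 1x1 white texture bound whenever nothing else is, so the multiply by the texture sample becomes a no-op:

#include <GL/gl.h>

// Create a 1x1 white fallback texture once at startup.
GLuint CreateWhiteFallbackTexture()
{
    const unsigned char white[4] = { 255, 255, 255, 255 };
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, white);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    return tex;
}

Bind it before rendering untextured characters: every sample then returns white (1,1,1,1), so color * texture * tint degenerates to color * tint with no special case in the shader.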

Using sampler2DShadow with multisampled deferred rendering breaks

As the title states, using a sampler2DShadow causes an error in the lighting shader of my multisampling FBO, but I haven't been able to pin down the problem, because a very similar configuration in my standard deferred rendering setup without multisampling works fine.
Is there a compatibility issue between sampler2DShadow and multisampling in OpenGL, or some alternative I should be using?
The shaders compile fine.
The code works fine until I run this line:
texture(gShadowMap2D, vec3(pCoord.xy, (pCoord.z) / pCoord.w));
and retrieve the result. I then get GL_INVALID_OPERATION.
The shadow map is from a directional light (depth map is valid and visible) and uses GL_COMPARE_R_TO_TEXTURE, set to a standard texture (GL_TEXTURE_2D).
The multisampled deferred FBO textures use GL_TEXTURE_2D_MULTISAMPLE.
I'm using GLSL 330 (OpenGL 3.3 core profile).
UPDATE
I think the problem is related to getting the world position from the position map in the multisampled fragment shader.
The standard way:
vec3 worldPos = texture(gPositionMap, texCoord).xyz;
The multisampled way:
vec3 worldPos = vec3(0.0); // the accumulator must be declared and zeroed first
vec2 texCoordMS = floor(vertTextureSize * texCoord.xy);
for (int i = 0; i < samples; i++)
{
    worldPos += texelFetch(gPositionMapMS, ivec2(texCoordMS), i).xyz;
}
worldPos = worldPos / samples;
(I omitted the other samplers.)
I'm guessing I am out of bounds which throws the error when trying to access the sampler2DShadow (pCoord is calculated using worldPos).
Now to figure out how to get this multisampled worldPos to give the same result as the standard way.
Standard way (mDepthVP is the light's depth view-projection matrix, a mat4):
vec4 coord = gLight.mDepthVP * vec4(worldPos, 1.0);
Well, after almost pulling my hair out desperately searching for a single hint as to why this problem was happening I finally figured it out, but I'm not entirely sure why it was causing the problem.
During the geometry pass (before the lighting pass) the models are rendered to the position, colour (diffuse), normal and depth-stencil targets as you would expect. During this pass a texture is bound (the diffuse texture of a mesh), but only as a standard texture (GL_TEXTURE_2D) at unit zero (GL_TEXTURE0) (I'm only using diffuse for now).
I left it like that, as the system worked, because the lighting pass overrides that unit when it binds the four FBO textures for reading. However, in the multisampling FBO they were being bound as multisampling textures (GL_TEXTURE_2D_MULTISAMPLE), and it just happens that the 'position' map was using unit zero (GL_TEXTURE0).
For some reason this didn't overwrite the previously bound unit from the geometry pass and caused the GL_INVALID_OPERATION error. After calling:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, 0);
straight after the geometry pass the problem went away.
So the question comes down to: why didn't the new binding overwrite the old one? (Likely because each texture unit has a separate binding point per target: binding a GL_TEXTURE_2D_MULTISAMPLE texture does not replace a GL_TEXTURE_2D binding on the same unit, so both stayed bound.)

Incorrect normal texture using FBO

I've a strange problem with multiple render targets. I attached 3 textures to my FBO: color, normal and position. I can correctly render color and position, but the normal texture renders incorrectly (in the original screenshot, the green and red areas are part of a spinning cube).
In the lower-left corner of that screenshot is the result of rendering the normal texture to a quad.
In my vertex shader, I'm computing normal as: normal = gl_NormalMatrix * gl_Normal, and in my fragment shader, I'm emitting it as: gl_FragData[1] = vec4(normal, 1);.
What's the issue here?
Turns out I forgot to supply normals for rendered quads. Adding glNormal3f() calls fixed the problem.
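For example, a fixed-function quad with an explicit normal (a sketch; the vertex positions are placeholders):

glBegin(GL_QUADS);
glNormal3f(0.0f, 0.0f, 1.0f);   // without this, gl_Normal is undefined/stale
glVertex3f(-1.0f, -1.0f, 0.0f);
glVertex3f( 1.0f, -1.0f, 0.0f);
glVertex3f( 1.0f,  1.0f, 0.0f);
glVertex3f(-1.0f,  1.0f, 0.0f);
glEnd();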

Basic OpenGL lighting question

I think this is an extremely stupid and newbie question, but then I am a newbie in graphics and OpenGL. Having drawn a sphere and put a light source nearby, and also having specified ambient light, I started experimenting with light and material values and came to a surprising conclusion: the colors we specify with glColor* do not matter at all when lighting is enabled; instead, the equivalent is the material's ambient component. Is this conclusion correct? Thanks
If lighting is enabled, then instead of the vertex color, the material color is used (well, colors - there are several of them for different types of response to light). Material colors are specified by the glMaterial* functions.
If you want to reuse your code, you can use glEnable(GL_COLOR_MATERIAL) and glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE) to have your old glColor* calls mapped to the material color automatically, as sketched below.
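A minimal sketch of that setup (the sphere-drawing call is an assumed placeholder):

glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_COLOR_MATERIAL);
glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);
glColor3f(1.0f, 0.0f, 0.0f);  // now sets the ambient+diffuse material color
DrawSphere();                 // existing glColor*-based code keeps working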
(And please switch to shaders as fast as possible - the shader approach is both easier and more powerful)
I suppose you don't use a fragment shader yet. From glprogramming.com:
vertex color =
the material emission at that vertex +
the global ambient light scaled by the material's ambient
property at that vertex +
the ambient, diffuse, and specular contributions from all the
light sources, properly attenuated
So yes, vertex color is not used.
Edit: You can also look for the GL lighting equation in the GL specification (you have one nearby, don't you? ^^)

DirectX 9 Specular Mapping

How would I implement loading a texture to be used as a specular map for a piece of geometry, and rendering it in DirectX 9 using C++?
Are there any tutorials or basic examples I can refer to?
Use D3DXCreateTextureFromFile to load the file from disk. You then need to set up a shader that multiplies the specular value by the value stored in the texture. This gives you the specular colour.
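For the loading step, a minimal sketch (the device pointer and file name are assumptions; requires d3dx9.h and d3dx9.lib):

IDirect3DTexture9* specularMap = NULL;
HRESULT hr = D3DXCreateTextureFromFile(device, "specular_map.dds", &specularMap);
if (FAILED(hr))
{
    // handle the error (missing file, unsupported format, ...)
}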
So your final pixel colour comes from:
Final = ambient + (N.L * texture colour) + (N.H * texture specular)
(with the N.H term usually raised to a specular power before the multiply).
You can do this easily in a shader.
It's also worth noting that it can be very useful to store the per-texel specular value in the alpha channel of the diffuse texture. That way you only need one texture around, though it does break per-pixel transparency.
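A minimal sketch of such a pixel shader (the sampler registers, half-vector input and specular power are assumptions, not from the question):

sampler2D g_DiffuseMap : register(s0);
sampler2D g_SpecularMap : register(s1);

float4 g_Ambient;
float3 g_LightDir;   // normalized, pointing from the surface toward the light
float  g_SpecPower;  // specular exponent

float4 PS(float2 uv      : TEXCOORD0,
          float3 normal  : TEXCOORD1,
          float3 halfVec : TEXCOORD2) : COLOR
{
    float3 N = normalize(normal);
    float3 H = normalize(halfVec);
    float4 texColour   = tex2D(g_DiffuseMap, uv);
    float4 texSpecular = tex2D(g_SpecularMap, uv); // or texColour.a if packed
    float  nDotL = saturate(dot(N, g_LightDir));
    float  spec  = pow(saturate(dot(N, H)), g_SpecPower);
    // Final = ambient + (N.L * texture colour) + (N.H^power * texture specular)
    return g_Ambient + nDotL * texColour + spec * texSpecular;
}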