Shader and OpenGL transformations

When I add shaders (in Cg) to my OpenGL program, all the local transformations (glRotatef, glTranslatef and glScalef between glPushMatrix and glPopMatrix) stop working. Transforms outside push/pop still work, though. What might be the problem here?
Update:
I have a rotating cube in the center of the scene:
glPushMatrix();
glRotatef(angle, 1, 0, 0);
drawBox();
glPopMatrix();
and after that I send the modelview and modelview-projection matrices to the shader:
cgGLSetStateMatrixParameter(
    myCgVertexParam_modelViewProj,
    CG_GL_MODELVIEW_PROJECTION_MATRIX,
    CG_GL_MATRIX_IDENTITY);
cgGLSetStateMatrixParameter(
    myCgVertexParam_modelView,
    CG_GL_MODELVIEW_MATRIX,
    CG_GL_MATRIX_IDENTITY);
Vertex shader code:
void C9E2v_fog(float4 position : POSITION,
               float4 color : COLOR,
               out float4 oPosition : POSITION,
               out float4 oColor : COLOR,
               out float fogExponent : TEXCOORD1,
               uniform float fogDensity, // Based on log2
               uniform float4x4 modelViewProj : MODELVIEW_PROJECTION_MATRIX,
               uniform float4x4 modelView : MODELVIEW_MATRIX)
{
    // Assume nonprojective modelview matrix
    float3 eyePosition = mul(modelView, position).xyz;
    float fogDistance = length(eyePosition);
    fogExponent = fogDistance * fogDensity;
    oPosition = mul(modelViewProj, position);
    //oDecalCoords = decalCoords;
    oColor = color;
}
So in the end the cube doesn't rotate, but if I write (no push/pop):
glRotatef(angle, 1, 0, 0);
drawBox();
everything works fine. How do I fix that?

You can use either the fixed-function pipeline or the programmable one. Since you switched to shaders, the fixed-function pipeline "stopped working"; to switch back you need glUseProgram(0). Otherwise you need to send those local transformations to the shader, and in your code you don't: cgGLSetStateMatrixParameter captures whatever the current OpenGL matrix is at the moment of the call, and you call it after glPopMatrix has already discarded the rotation.
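A minimal sketch of that fix, reusing the names from the question: set the state-matrix parameters while the rotated matrix is still on the stack, i.e. between the rotation and glPopMatrix:
glPushMatrix();
glRotatef(angle, 1, 0, 0);
// Capture the CURRENT modelview stack, rotation included,
// before glPopMatrix discards it.
cgGLSetStateMatrixParameter(myCgVertexParam_modelViewProj,
                            CG_GL_MODELVIEW_PROJECTION_MATRIX,
                            CG_GL_MATRIX_IDENTITY);
cgGLSetStateMatrixParameter(myCgVertexParam_modelView,
                            CG_GL_MODELVIEW_MATRIX,
                            CG_GL_MATRIX_IDENTITY);
drawBox();
glPopMatrix();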

Related

Switching Ortho to perspective for OpenGL HUD

I'm trying to implement a HUD in OpenGL which will display text in 2D on the front of the viewing window, with a 3D perspective view behind it.
I'm creating my 3D fragments using a projectionView matrix, then switch to an ortho matrix to render my quads. I've managed to get this to work without the ortho matrix (see below), but for some reason, when using the matrix, my 2D text disappears as soon as I draw 3D objects; if I render the text alone it is present and displays correctly.
Basic rendering loop:
glm::mat4 projection, projectionView, windowMatrix;
projection = glm::perspective(glm::radians(camera.Zoom), 800.0f / 600.0f, 0.1f, 100.0f);
windowMatrix = glm::ortho(0.0f, 800.0f, 0.0f, 600.0f);
while (!glfwWindowShouldClose(window)) //Render loop
{
    glm::mat4 view = camera.GetViewMatrix();
    projectionView = projection * view;
    //Update the Uniform
    threeDShader->setMat4("projection", projectionView);
    //Render() calls glDrawElements()
    threeDObject->Render();
    //Update the Uniform
    textShader->setMat4("projection", windowMatrix);
    //Render params = text, xPos, yPos, scale, color
    //RenderText() calls glDrawArrays()
    textHUD->RenderText("Hello World!", 25.0f, 25.0f, 1.0f, glm::vec3(0.9f, 0.2f, 0.8f));
}
The textShader vertex shader is:
#version 420 core
layout (location = 0) in vec4 vertex; // <vec2 pos, vec2 tex>
out vec2 TexCoords;
uniform mat4 projection;
void main()
{
    gl_Position = vec4(vertex.xy * vec2(2.0/800, 2.0/600) - vec2(1, 1), 0.0f, 1.0f); // <---- (1)
    //gl_Position = projection * vec4(vertex.xy, 0.0f, 1.0f);                        // <---- (2)
    TexCoords = vertex.zw;
}
Line (1) in the vertex shader, which hard-codes the screen mapping, works: the text displays with the other 3D objects in the background.
I would prefer to use line (2), which uses the orthographic matrix (neater to change for the resolution), but for some reason it works when no other 3D objects are rendered and the text disappears as soon as a 3D object is rendered in the scene. They both use separate matrices and then draw their vertices/fragments, so in my opinion they should not be interfering with each other.
The fragment shader is the same for each and should not be an issue. I thought it might be something to do with the near clipping plane of the perspective matrix, but with the independent draw calls it shouldn't be an issue.
I've also tried giving the ortho matrix near and far clipping planes similar to the perspective matrix, but with no success.
"but for some reason works when no other 3D objects are rendered"
I guess that threeDObject->Render() and textHUD->RenderText() each install their own shader program with glUseProgram.
glUniform* changes a uniform variable in the default uniform block of the currently installed program.
So you have to install the shader program before you can change its uniforms.
(In the following I assume that the shader program class has a method use() which installs the program.)
while (!glfwWindowShouldClose(window)) //Render loop
{
    glm::mat4 view = camera.GetViewMatrix();
    projectionView = projection * view;
    // Install 3D shader program and update the Uniform
    threeDShader->use();
    threeDShader->setMat4("projection", projectionView);
    //Render() calls glDrawElements()
    threeDObject->Render();
    // Install text shader program and update the Uniform
    textShader->use();
    textShader->setMat4("projection", windowMatrix);
    //Render params = text, xPos, yPos, scale, color
    //RenderText() calls glDrawArrays()
    textHUD->RenderText("Hello World!", 25.0f, 25.0f, 1.0f, glm::vec3(0.9f, 0.2f, 0.8f));
}

OpenGL shader light position changed in shader

First of all, I'm sorry if the title is misleading, but I'm not quite sure how to describe the issue, if it is an issue at all.
I'm very new to OpenGL, and I have just started to scratch the surface of GLSL following this tutorial.
The main part of the rendering function looks like this:
GLfloat ambientLight[] = {0.5f, 0.5f, 0.5f, 1.0f};
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientLight);
//Add directed light
GLfloat lightColor1[] = {0.5f, 0.5f, 0.5f, 1.0f};
//Animated light position; w = 0 makes it a directional light
GLfloat lightPos1[] = { 40.0 * cos((float) elapsed_time / 500.0),
                        40.0 * sin((float) elapsed_time / 500.0), -20.0f, 0.0f};
glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColor1);
glLightfv(GL_LIGHT0, GL_POSITION, lightPos1);
glPushMatrix();
glTranslatef(0, 0, -50);
glColor3f(1.0, 1.0, 1.0);
glRotatef((float) elapsed_time / 100.0, 0.0, 1.0, 0.0);
glUseProgram(shaderProg);
glutSolidTeapot(10);
glPopMatrix();
Where "shaderProg" is a shader program consisting of a vertex shader
varying vec3 normal;
void main(void)
{
    normal = gl_Normal;
    gl_Position = ftransform();
}
And a fragment shader
uniform vec3 lightDir;
varying vec3 normal;
void main()
{
    float intensity;
    vec4 color;
    intensity = dot(vec3(gl_LightSource[0].position), normalize(normal));
    if (intensity > 0.95)
        color = vec4(1.0, 0.5, 0.5, 1.0);
    else if (intensity > 0.5)
        color = vec4(0.6, 0.3, 0.3, 1.0);
    else if (intensity > 0.25)
        color = vec4(0.4, 0.2, 0.2, 1.0);
    else
        color = vec4(0.2, 0.1, 0.1, 1.0);
    gl_FragColor = color;
}
I have two issues.
The first is that, according to the tutorial, the uniform lightDir should be usable, yet I only get results with vec3(gl_LightSource[0].position). Is there any difference between the two?
The other problem is that the setup rotates the light around the teapot differently when the shader program is used: without the shader, the light orbits the teapot in the camera's XY plane, yet with the shader the light moves in the camera's XZ plane. Have I made a mistake, or have I forgotten some transformation in the shaders?
Thanks in advance : )
"First is that according to the tutorial the uniform lightDir should be usable, yet I only get results with vec3(gl_LightSource[0].position). Is there any difference between the two?"
That tutorial uses lightDir as a uniform variable, so you have to set it yourself via some glUniform call. Whether the two are the same will depend on what exactly you set as the light position here. lightDir as used there is the vector from the surface point you want to shade to the light source. The tutorial uses a directional light, so the light direction is the same everywhere in the scene and does not really depend on the position of the vertex/fragment. You can do the same with fixed-function lighting by setting the w component of the light position to 0. If you don't do that, the results will be very different.
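To make that w-component distinction concrete, a minimal fixed-function illustration (values are placeholders):
// w == 0.0: directional light; (x, y, z) is a direction, identical everywhere
GLfloat lightDirWS[] = { -1.0f, 0.5f, 0.5f, 0.0f };
glLightfv(GL_LIGHT0, GL_POSITION, lightDirWS);
// w == 1.0: positional light; (x, y, z) is a point, so the direction
// to the light varies per vertex/fragment
GLfloat lightPosWS[] = { 40.0f, 0.0f, -20.0f, 1.0f };
glLightfv(GL_LIGHT0, GL_POSITION, lightPosWS);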
A side note: the GLSL code in that tutorial unfortunately relies on lots of deprecated features. If you are learning GLSL, I would really recommend learning the modern GL core profile instead.
lightDir is not a pre-defined uniform. The typical definition for a light direction vector is just a normalized vector to the light position in your shader, which you can easily calculate yourself by normalizing the position vector:
vec3 lightDir = normalize(gl_LightSource[0].position.xyz);
You could also pass it into the shader as a uniform you define yourself. For this approach, you would define the uniform in your fragment shader:
uniform vec3 lightDir;
and then get the uniform location with the glGetUniformLocation() call, and set a value with the glUniform3f() call. So once after linking the shader, you have this:
GLint lightDirLoc = glGetUniformLocation(shaderProg, "lightDir");
and then every time you want to change the light direction to (vx, vy, vz):
glUniform3f(lightDirLoc, vx, vy, vz);
For the second part of your question: the reason you get different behavior for the light position with the fixed pipeline compared to your own shader is the modelview transform in the lighting math. The fixed pipeline applies the current modelview matrix to the light position you specify (storing it in eye space) and also transforms the normals into eye space before lighting. Your shader dots that eye-space light position with the untransformed, object-space gl_Normal, so the two vectors live in different spaces and the light appears to orbit in a different plane.
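A minimal sketch of making the spaces consistent while staying with the legacy built-ins: transform the normal into eye space with gl_NormalMatrix in the question's vertex shader, since gl_LightSource[0].position is already stored in eye space:
varying vec3 normal;
void main(void)
{
    // gl_NormalMatrix moves the normal into eye space, the same space
    // in which the fixed pipeline stores the light position.
    normal = gl_NormalMatrix * gl_Normal;
    gl_Position = ftransform();
}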
As a number of others already suggested: If you learn OpenGL now, I strongly recommend that you skip the legacy features, which includes the fixed function light source parameters. In this case, you can simply use uniform variables you define yourself, as I already illustrated as an option for the lightDir variable above.

DirectX & HLSL - Normal Recalculation

I'm using HLSL and DirectX 9. I'm trying to recalculate the normals of a mesh so that HLSL receives updated normals as a result of transforming the mesh. What method is best to do this? Also, D3DXComputeNormals will not work for me because I do not use FVF_NORMAL as a vertex declaration; I declare my vertex format like so:
const D3DVERTEXELEMENT9 dec[4] =
{
    {0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0},
    {0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0},
    {0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0},
    D3DDECL_END()
};
I know how to access the adjacency data and vertex buffers, but I'm not sure what method to use to properly associate a vertex and its normal with a face. Any help would be greatly appreciated. Thanks!
It's not a good idea to update the normals on the CPU and send them to the GPU each frame; that would ruin performance. What you really should do is calculate the transformed normals in the vertex shader, just like you do with the positions. The HLSL code would look like this:
float4x4 mWorldView;     // World * View
float4x4 mWorldViewProj; // World * View * Proj

struct VS_OUTPUT
{
    float4 position : POSITION;
    float2 tex      : TEXCOORD0;
    float3 normalVS : TEXCOORD1; // view-space normal
};

// Vertex Shader
VS_OUTPUT VS(float3 position : POSITION,
             float3 normal   : NORMAL,
             float2 tex      : TEXCOORD0)
{
    VS_OUTPUT output; // note: 'out' is a reserved word in HLSL, so use another name
    // transform the position
    output.position = mul(float4(position, 1), mWorldViewProj);
    // pass the texture coordinates to the pixel shader
    output.tex = tex;
    // calculate the transformed (view-space) normal
    float3 n = mul(normal, (float3x3)mWorldView);
    // and use it for vertex lighting
    // ... some shading calculations ...
    // or pass it to the pixel shader and perform per-pixel lighting
    output.normalVS = n;
    // output
    return output;
}
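One caveat worth adding: the plain (float3x3)mWorldView cast is only safe while World * View contains rotation, translation and uniform scale. With non-uniform scaling, the usual fix is the inverse-transpose matrix, computed on the CPU (e.g. with D3DXMatrixInverse followed by D3DXMatrixTranspose) and passed in as an extra uniform; mWorldViewIT below is a name introduced here purely for illustration:
float4x4 mWorldViewIT; // transpose(inverse(World * View)), set by the application

// Inside the vertex shader, replacing the plain cast; with row vectors,
// mul(n, (M^-1)^T) transforms normals correctly under non-uniform scale.
float3 n = normalize(mul(normal, (float3x3)mWorldViewIT));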

HLSL and DX Skybox Issues (creates seams)

I'm (re)learning DirectX and have moved into HLSL coding. Prior to using my custom .fx file I created a skybox for a game with a vertex buffer of quads. Everything worked fine; the texture mapped and wrapped beautifully. However, now that I have HLSL managing the vertices, there are distinctive seams where the quads meet. The textures all line up properly, I just can't get rid of this damn seam!
I tend to think the problem is with the texCUBE, or rather with all the texturing information here. I'm texturing the quads in DX; it may just be that I still don't quite get the link between the two, not sure. Anyway, thanks for the help in advance!
Here's the .fx file:
float4x4 World;
float4x4 View;
float4x4 Projection;
float3 CameraPosition;
Texture SkyBoxTexture;
samplerCUBE SkyBoxSampler = sampler_state
{
    texture = <SkyBoxTexture>;
    minfilter = ANISOTROPIC;
    mipfilter = LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
    AddressW = Wrap;
};

struct VertexShaderInput
{
    float4 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float3 TextureCoordinate : TEXCOORD0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition  = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    float4 VertexPosition = mul(input.Position, World);
    output.TextureCoordinate = VertexPosition.xyz - CameraPosition;
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return texCUBE(SkyBoxSampler, normalize(input.TextureCoordinate));
}

technique Skybox
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader  = compile ps_2_0 PixelShaderFunction();
    }
}
To avoid seams you need to draw your skybox in a single DrawIndexedPrimitive call, preferably using a triangle strip. DON'T draw each face as a separate primitive transformed with an individual matrix or something like that; you WILL get seams. If you for some unexplainable reason don't want to use a single DrawIndexedPrimitive call for the skybox parts, then you must ensure that all faces are drawn using the same matrices (the same world + view + projection matrices in every call) and the same coordinate values for the corner vertices, i.e. the "top" face should use exactly the same position vectors for its corners as the "side" faces.
Another thing: you should store the skybox either as
a cubemap (looks like that's what you're doing): make just 8 vertices for the skybox and draw them as an indexed primitive (see the sketch below);
or as an unwrapped "atlas" texture that has the unused areas filled with the border color;
or, if you're fine with shaders, you could "raytrace" the skybox using a shader.
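A minimal sketch of that 8-vertex indexed cube in D3D9 terms (hypothetical data layout, buffer creation omitted): since every face reuses the shared corner vertices and everything goes out in one call, there is nothing to seam:
// 8 shared corner vertices of a unit cube centered on the viewer.
float verts[8][3] = {
    {-1,-1,-1}, { 1,-1,-1}, { 1, 1,-1}, {-1, 1,-1},
    {-1,-1, 1}, { 1,-1, 1}, { 1, 1, 1}, {-1, 1, 1},
};
// 12 triangles as one indexed triangle list; adjacent faces use exactly
// the same corner positions, so cubemap lookups match along shared edges.
short indices[36] = {
    0,1,2, 0,2,3,   4,6,5, 4,7,6,   // -z and +z faces
    0,3,7, 0,7,4,   1,5,6, 1,6,2,   // -x and +x faces
    3,2,6, 3,6,7,   0,4,5, 0,5,1,   // +y and -y faces
};
// After copying into the vertex/index buffers:
// device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, 8, 0, 12);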
You need to clamp the texture coordinates with SetSamplerState to get rid of the seam. This toymaker page explains it. Toymaker is a great site for learning Direct3D; you should check out the tutorials if you have any more trouble.
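In the .fx file from the question, the equivalent change is switching the sampler's address modes from Wrap to Clamp; a sketch of the same sampler_state with only that change:
samplerCUBE SkyBoxSampler = sampler_state
{
    texture = <SkyBoxTexture>;
    minfilter = ANISOTROPIC;
    mipfilter = LINEAR;
    AddressU = Clamp; // Wrap samples from the opposite texture edge at the
    AddressV = Clamp; // borders, which is what produces the visible seams
    AddressW = Clamp;
};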
You may like to draw a skybox using only one quad. Everything you need is an inverse of World*View*Proj matrix, that is (World*View*Proj)^(-1).
The vertices of the quad should be: (1, 1, 1, 1), (1, -1, 1, 1), (-1, 1, 1, 1), (-1, -1, 1, 1).
Then you compute texture coordinates in VS:
float4 pos = mul(vPos, WorldViewProjMatrixInv);
float3 tex_coord = pos.xyz / pos.w;
And finally you sample the texture in PS:
float4 color = texCUBE(sampler, tex_coord);
No worry about any seams! :)
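Putting those fragments together, a sketch of what the single-quad vertex shader could look like (names are illustrative; the quad corners are the four clip-space positions listed above):
float4x4 WorldViewProjMatrixInv; // (World*View*Proj)^(-1), computed by the app

struct VS_OUT
{
    float4 pos       : POSITION;
    float3 tex_coord : TEXCOORD0;
};

VS_OUT SkyboxVS(float4 vPos : POSITION)
{
    VS_OUT o;
    o.pos = vPos; // the quad is already in clip space, on the far plane (z = 1)
    float4 p = mul(vPos, WorldViewProjMatrixInv);
    o.tex_coord = p.xyz / p.w; // world-space direction to sample the cubemap
    return o;
}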

Volume rendering (using GLSL) with ray casting algorithm

I am learning volume rendering using the ray casting algorithm. I have found a good demo and tutorial here, but the problem is that I have an ATI graphics card instead of Nvidia, so I can't use the Cg shaders in the demo, and I want to change the Cg shaders to GLSL shaders. I have gone through the Red Book (7th edition) of OpenGL, but I am not familiar with GLSL and Cg.
Can anyone help me change the Cg shader in the demo to GLSL? Or is there any material on the simplest demo of volume rendering using ray casting (in GLSL, of course)?
Here is the Cg shader of the demo; it works on my friend's Nvidia graphics card. What confuses me most is that I don't know how to translate the entry part of the Cg to GLSL, for example:
struct vertex_fragment
{
    float4 Position : POSITION; // For the rasterizer
    float4 TexCoord : TEXCOORD0;
    float4 Color    : TEXCOORD1;
    float4 Pos      : TEXCOORD2;
};
What's more, I can write a program that binds two texture objects to two texture units for the shader, provided that I assign two texcoords when drawing the screen, for example:
glMultiTexCoord2f(GL_TEXTURE0, 1.0, 0.0);
glMultiTexCoord2f(GL_TEXTURE1, 1.0, 0.0);
In the demo the program binds two textures (one 2D for the backface_buffer, one 3D for the volume texture), but with only one texture coordinate call, glMultiTexCoord3f(GL_TEXTURE1, x, y, z). I think the GL_TEXTURE1 unit is for the volume texture, but which texture unit is for the backface_buffer? As far as I know, in order to bind a texture object in a shader I must bind it to a texture unit, for example:
glLinkProgram(p);
texloc = glGetUniformLocation(p, "tex");
volume_texloc = glGetUniformLocation(p, "volume_tex");
stepsizeloc = glGetUniformLocation(p, "stepsize");
glUseProgram(p);
glUniform1i(texloc, 0);
glUniform1i(volume_texloc, 1);
glUniform1f(stepsizeloc, stepsize);
//When rendering an object with this program.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, backface_buffer);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_3D, volume_texture);
The program compiles and links fine, but I get -1 for all three locations (texloc, volume_texloc and stepsizeloc). I know they may have been optimized out.
anyone can help me translate the cg shader to glsl shader?
Edit: if you are interested in a modern OpenGL API implementation (C++ source code) with GLSL: Volume_Rendering_Using_GLSL
Problem solved. The GLSL version of the demo:
Vertex shader:
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    //gl_FrontColor = gl_Color;
    gl_TexCoord[2] = gl_Position;
    gl_TexCoord[0] = gl_MultiTexCoord1;
    gl_TexCoord[1] = gl_Color;
}
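Spelled out, the correspondence between the Cg vertex_fragment struct from the question and the legacy GLSL built-ins used above is:
// Cg 'vertex_fragment' member     legacy GLSL built-in
// float4 Position : POSITION   -> gl_Position  (vertex shader output)
// float4 TexCoord : TEXCOORD0  -> gl_TexCoord[0]
// float4 Color    : TEXCOORD1  -> gl_TexCoord[1]
// float4 Pos      : TEXCOORD2  -> gl_TexCoord[2]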
Fragment shader:
uniform sampler2D tex;
uniform sampler3D volume_tex;
uniform float stepsize;

void main()
{
    // note: float literals (1.0, 2.0, 3.0) instead of ints; strict GLSL
    // compilers (such as ATI's) reject implicit int-to-float conversions
    vec2 texc = ((gl_TexCoord[2].xy / gl_TexCoord[2].w) + 1.0) / 2.0;
    vec4 start = gl_TexCoord[0];
    vec4 back_position = texture2D(tex, texc);
    vec3 dir = vec3(0.0);
    dir.x = back_position.x - start.x;
    dir.y = back_position.y - start.y;
    dir.z = back_position.z - start.z;
    // the length from front to back is calculated and used to terminate the ray
    float len = length(dir.xyz);
    vec3 norm_dir = normalize(dir);
    float delta = stepsize;
    vec3 delta_dir = norm_dir * delta;
    float delta_dir_len = length(delta_dir);
    vec3 vect = start.xyz;
    vec4 col_acc = vec4(0.0); // The dest color
    float alpha_acc = 0.0;    // The dest alpha for blending
    float length_acc = 0.0;
    vec4 color_sample;  // The src color
    float alpha_sample; // The src alpha
    for (int i = 0; i < 450; i++)
    {
        color_sample = texture3D(volume_tex, vect);
        // why multiply by the stepsize?
        alpha_sample = color_sample.a * stepsize;
        // why multiply by 3?
        col_acc += (1.0 - alpha_acc) * color_sample * alpha_sample * 3.0;
        alpha_acc += alpha_sample;
        vect += delta_dir;
        length_acc += delta_dir_len;
        // terminate if opacity > 1 or the ray is outside the volume
        if (length_acc >= len || alpha_acc > 1.0)
            break;
    }
    gl_FragColor = col_acc;
}
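As for the two "why" questions left in the comments: multiplying by stepsize is the usual discretization of front-to-back compositing, where the opacity contributed by one sample scales with the length of the ray step; the factor of 3 looks like an ad-hoc brightness boost rather than part of the standard formula (an assumption on my part). For comparison, a sketch of the textbook front-to-back accumulation, which also applies the (1.0 - alpha_acc) factor to the alpha update:
// Textbook front-to-back compositing for one step of length stepsize:
float a = color_sample.a * stepsize;               // opacity over one step
col_acc   += (1.0 - alpha_acc) * color_sample * a; // occlusion-weighted color
alpha_acc += (1.0 - alpha_acc) * a;                // saturating accumulated opacity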
If you look at the original Cg shader, there is only a little difference between the Cg and the GLSL. The most difficult part of translating the demo to a GLSL version is the Cg functions on the OpenGL side, such as:
param = cgGetNamedParameter(program, par);
cgGLSetTextureParameter(param, tex);
cgGLEnableTextureParameter(param);
These encapsulate the texture unit and multitexture activation (using glActiveTexture) and deactivation, which is very important in this demo as it uses the fixed pipeline as well as the programmable pipeline. Here is the key segment changed in the function void raycasting_pass() of main.cpp of the demo in Peter Trier's GPU raycasting tutorial:
void raycasting_pass()
{
    // specify which texture to bind
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, final_image, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(p);
    glUniform1f(stepsizeIndex, stepsize);
    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_3D);
    glBindTexture(GL_TEXTURE_3D, volume_texture);
    glUniform1i(volume_tex, 1);
    glActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, backface_buffer);
    glUniform1i(tex, 0);
    glUseProgram(p);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);
    drawQuads(1.0, 1.0, 1.0); // Draw a cube
    glDisable(GL_CULL_FACE);
    glUseProgram(0);
    // revert to a single active texture unit for the fixed pipeline
    glActiveTexture(GL_TEXTURE1);
    glDisable(GL_TEXTURE_3D);
    glActiveTexture(GL_TEXTURE0);
}
That's it.