HLSL fixed lighting location - hlsl

Hi, I'm trying to create a shader for my 3D models with materials and fog. Everything works fine except the light direction. I wasn't sure what to set it to, so I used a fixed value, but when I rotate my 3D model (which is a simple textured sphere) the light rotates with it. I want to change my code so that the light stays in one place relative to the camera and not the object itself. I have tried multiplying the input normals by the view matrix, but I get the same result.
Also, should I be setting the light direction according to the camera instead?
EDIT: removed pastebin link since that is against the rules...

Use camera-dependent values only for transforming the vertex position into view and projected space (needed by the clipping and rasterizer stages); the video card needs to know where to draw your pixel.
For lighting, you normally pass, in addition to the camera-transformed position, the world-space position of the vertex and the world-space normal to the shader stages that need them (e.g. the pixel shader stage for Phong lighting).
So set your light position, or better your light direction, in world-space coordinates as a global (uniform) variable in your shaders. That way the lighting is independent of the camera position.
If you want an effect like a flashlight, set the light position to the camera position and the light direction to your look direction. Then the bright parts are always in the center of your view frustum.
good luck

Related

(GLSL) Orienting raycast object to camera space

I'm trying to orient a ray-cast shape to a direction in world space. Here's a picture of it working correctly:
It works fine as long as the objects are pointing explicitly at the camera's focus point. But, if I pan the screen left or right, I get this:
So, the arrows are maintaining their orientation as if they were still looking at the center of the screen.
I know my problem here is that I'm not taking the projection matrix into account to tweak the direction the object is pointing. However, anything I try only makes things worse.
Here's how I'm drawing it: I set up these vertices:
float aUV = .5f;
Vertex aV[4];
aV[0].mPos = Vector(-theSize, -theSize, 0);
aV[0].mUV  = Point(-aUV, -aUV);
aV[1].mPos = Vector( theSize, -theSize, 0);
aV[1].mUV  = Point( aUV, -aUV);
aV[2].mPos = Vector(-theSize,  theSize, 0);
aV[2].mUV  = Point(-aUV,  aUV);
aV[3].mPos = Vector( theSize,  theSize, 0);
aV[3].mUV  = Point( aUV,  aUV);
// Position (in 3D space) and a point-at direction are sent in as uniforms.
// The UV of the object spans from -.5 to .5 to put 0,0 at the center.
So the vertex shader just draws a billboard centered at a position I send (and using the vertex's position to determine the billboard size).
For the fragment shader, I do this:
vec3 aRayOrigin = vec3(0., 0., 5.);
vec3 aRayDirection = normalize(vec3(vertex.mUV, -2.5));
//
// Hit object located at 0,0,0 oriented to uniform_PointAt's direction
//
vec4 aHH = RayHitObject(aRayOrigin, aRayDirection, uniform_PointAt.xyz);
if (aHH.x > 0.0) // Draw the fragment
I compute the final PointAt (in C++, outside of the shader), oriented to texture space, like so:
Matrix aMat=getWorldMatrix().Normalized();
aMat*=getViewMatrix().Normalized();
Vector aFinalPointAt=pointAt*aMat; // <- Pass this in as uniform
I can tell what's happening-- I'm shooting rays straight in no matter what, but when the object is near the edge of the screen, they should be getting tweaked to approach the object from the side. I've been trying to factor that into the pointAt, but whenever I add the projection matrix to that calculation, I get worse results.
How can I change this to make my arrows maintain their orientation even when perspective becomes an issue? Should I be changing the pointAt value, or should I be trying to tweak the ray's direction? And whichever is smarter-- what's the right way to do it?

Opengl Lighting Artifact (calculating light before translation)

I am trying to create a scene in OpenGL and I am having trouble with my lighting. I believe it has something to do with translating my models from the world origin into their respective places.
I only have one light in my scene, placed on the right at the centre of the world, yet you can see the light on the wall at the front of the scene.
I wrote my own shaders. I suspect that I'm calculating the lighting too early, as it seems to be calculated before the models are translated around the world; either that, or I am using local coordinates rather than world coordinates (I think that's right, anyway...).
(please ignore the glass, they are using a global light and a different shader)
Does anyone know if this is indeed the case, or where the best place to find a solution would be?
Below is how I call rendering my models.
glUseProgram(modelShader);
//center floor mat
if (floorMat)
{
    glUniformMatrix4fv(modelShader_modelMatrixLocation, 1, GL_FALSE, (GLfloat*)&(modelTransform.M));
    floorMat->setTextureForModel(carpetTexture);
    floorMat->renderTexturedModel();
}
https://www.youtube.com/watch?annotation_id=annotation_1430411783&feature=iv&index=86&list=PLRwVmtr-pp06qT6ckboaOhnm9FxmzHpbY&src_vid=NH68sIdF-48&v=P3DQXzyjswQ
Turns out I was not calculating the lighting in world space.
Rather than using the vertex position transformed into world space by the model matrix, I was just using the plain, untransformed vertex position.

OpenGL beam spotlight

After reading up on OpenGL and GLSL, I was wondering if there are examples out there of making something like this: http://i.stack.imgur.com/FtoBj.png
I am particularly interested in the beam and the intensity of the light (god rays?).
Does anybody have a good starting point?
OpenGL just draws points, lines and triangles to the screen. It doesn't maintain a scene, and the "lights" in OpenGL are really just a position, direction and color used in the shading calculations for those primitives.
That being said, it is possible to implement an effect like yours using a fragment shader that implements a variant of the shadow-mapping method. The difference is that instead of testing whether a surface element of a primitive (point, line or triangle) lies in shadow, you cast rays into a volume and, at every sample position along the ray, test whether that volume element (voxel) is in shadow; if it is illuminated, you add its contribution to the ray's accumulator.

OpenGL lighting coordinate system

So according to many sources I've read, most lighting should be done in eye-space (which is camera space).
The book that I'm reading also claims to be using lighting in eye-space, and I took its examples into my application as well.
In my application, the only thing that I pass to the fragment shader which is related to the camera is the camera position, and it's a model-space coordinate.
The fragment shader also uses a normal matrix which I pass in as a uniform, and its only use is to transform local-space normal vectors to model-space normal vectors.
So why is my lighting implementation considered to be used in eye-space although I never passed a model transformation matrix multiplied by a camera matrix? Can anyone shed some light on this subject? I might be missing something here.

Display voxels using shaders in openGL

I am working on voxelisation using the rendering pipeline and now I successfully voxelise the scene using vertex+geometry+fragment shaders. Now my voxels are stored in a 3D texture which has size, for example, 128x128x128.
My original model of the scene is centered at (0,0,0) and extends along both the positive and negative axes. The texture, however, is centered at (63,63,63) in texture coordinates.
I implemented a simple ray marching for visualisation, but it doesn't take the camera movement into account (I can render only from very fixed positions, because my rays have to be generated taking into account the different coordinate system of the 3D texture).
My question is: how can I map my rays so that they are generated at point Po with direction D in the coordinates of my 3D model, but intersect the voxels at the corresponding positions in texture coordinates, with every movement of the camera in the 3D world remapped into voxel coordinates?
Right now I generate the rays in this way:
create a quad in front of the camera at position (63,63,-20)
cast rays in direction towards (63,63,3)
I think you should pass your entire view transform matrix to your shader as a uniform. Then, for each shader execution, you can use the fragment's screen coordinates and the view transform to compute the view-ray direction for that particular pixel.
Having the ray direction and camera position, you use them just as you do now.
There's also another way to do this that you can try:
Let's say you have a cube (0,0,0)->(1,1,1) and for each corner you assign a color based on its position, like (1,0,0) is red, etc.
Now, for every frame, you draw your cube's front faces to one texture, and its back faces to a second texture.
In the final rendering you can use both textures to get enter and exit 3D vectors, already in the texture space, which makes your final shader much simpler.
You can read better descriptions here:
http://graphicsrunner.blogspot.com/2009/01/volume-rendering-101.html
http://web.cse.ohio-state.edu/~tong/vr/