I am trying to create a scene in OpenGL and I am having trouble with my lighting. I believe it has something to do with translating my models from the origin of the world into their respective places.
I only have one light in my scene, placed on the right in the centre of the world, yet you can see the light on the wall at the front of the scene.
I have written my own shaders. I suspect that I'm calculating the lighting too early, as it seems to be calculated before the models are translated around the world; either that, or I am using local coordinates rather than world coordinates (I think that's right, anyway...).
(Please ignore the glass; it uses a global light and a different shader.)
Does anyone know if this is indeed the case, or where the best place to find a solution would be?
Below is how I render my models.
glUseProgram(modelShader);
//center floor mat
if (floorMat)
{
    glUniformMatrix4fv(modelShader_modelMatrixLocation, 1, GL_FALSE, (GLfloat*)&(modelTransform.M));
    floorMat->setTextureForModel(carpetTexture);
    floorMat->renderTexturedModel();
}
https://www.youtube.com/watch?annotation_id=annotation_1430411783&feature=iv&index=86&list=PLRwVmtr-pp06qT6ckboaOhnm9FxmzHpbY&src_vid=NH68sIdF-48&v=P3DQXzyjswQ
Turns out I was not calculating the lighting in world space.
Rather than using the vertex position transformed into world space by the model matrix, I was just using the plain (local) vertex position.
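A minimal vertex-shader sketch of that fix, assuming separate model and view-projection matrix uniforms (the names here are illustrative, not necessarily the ones in the actual shader):

#version 330 core

uniform mat4 modelMatrix;           // object -> world
uniform mat4 viewProjectionMatrix;  // world -> clip

in vec3 vertexPosition;
in vec3 vertexNormal;

out vec3 worldPosition;   // consumed by the fragment shader's lighting
out vec3 worldNormal;

void main()
{
    vec4 worldPos = modelMatrix * vec4(vertexPosition, 1.0);
    worldPosition = worldPos.xyz;
    // With non-uniform scaling this should use the normal matrix
    // (inverse transpose of the model matrix) instead.
    worldNormal = mat3(modelMatrix) * vertexNormal;
    gl_Position = viewProjectionMatrix * worldPos;
}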
Hi, I'm trying to create a shader for my 3D models with materials and fog. Everything works fine except the light direction. I'm not sure what to set it to, so I used a fixed value, but when I rotate my 3D model (which is a simple textured sphere) the light rotates with it. I want to change my code so that the light stays in one place relative to the camera and not the object itself. I have tried multiplying the input normals by the view matrix, but I get the same result.
Also, should I be setting the light direction relative to the camera instead?
EDIT: removed pastebin link since that is against the rules...
Use camera-dependent values only for transforming the vertex position to view space and projected (clip) space, which the shaders need for clipping and the rasterizer stage; the video card needs to know where to draw your pixel.
For lighting, you normally pass, in addition to the camera-transformed position, the world-space position of the vertex and the world-space normal to the shader stages that need them (i.e. the pixel shader stage for Phong lighting).
So set your light position, or better the light direction, in the world-space coordinate system as a global (uniform) variable of your shaders. With that, the lighting is independent of the camera's view position.
If you want an effect like a flashlight, you can set the light position to the camera position and the light direction to your look direction, so the bright parts are always in the centre of your viewing frustum.
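A minimal fragment-shader sketch under those assumptions (the uniform and varying names are made up for illustration):

#version 330 core

uniform vec3 lightDirectionWorld;  // set once per frame from the application, in world space

in vec3 worldNormal;               // normal transformed into world space by the model matrix
out vec4 fragColor;

void main()
{
    // Simple diffuse term; because both vectors are in world space,
    // rotating the model or moving the camera does not move the light.
    float diffuse = max(dot(normalize(worldNormal), normalize(-lightDirectionWorld)), 0.0);
    fragColor = vec4(vec3(diffuse), 1.0);
}

For the flashlight effect you would instead update the light position and direction from the camera's position and look direction each frame.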
good luck
I'm implementing a deferred lighting mechanism in my OpenGL graphics engine, following this tutorial. It works fine; I don't have any trouble with that.
When it comes to point lights, it says to render spheres around the lights so that only those pixels that might be affected by the light are passed through the lighting shader. There are some issues with that method concerning face culling and the camera position, explained precisely here. To solve those, the tutorial uses the stencil test.
I doubt the efficiency of that method, which leads me to my first question:
Wouldn't it be much better to draw a circle representing the light sphere?
A sphere always looks like a circle on the screen, no matter from which perspective you're looking at it. The task would be to determine the screen position and scale of the circle. This method would have three advantages:
No cull-face issue
No camera-position-inside-light-sphere issue
Much more efficient (the number of vertices is severely reduced, and there is no stencil test)
Are there any disadvantages to using this technique?
My second question deals with implementing the mentioned method. The circle's center position can easily be calculated as usual:
vec4 screenpos = modelViewProjectionMatrix * vec4(pos, 1.0);
vec2 centerpoint = vec2(screenpos / screenpos.w);
But now, how do I calculate the scale of the resulting circle?
It should depend on the distance from the camera to the light and, somehow, on the perspective projection.
I don't think that would work. The point of using spheres is that they act as light volumes, not just circles: we want to apply lighting only to the geometry in the scene that lies inside the light volume. As the scene is rendered, the depth buffer is written, and that data is used by the light-volume pass to apply the lighting correctly. If it were just a circle, you would have no way of knowing whether the geometry in front of or behind the volume (A and C in the referenced diagram) should be illuminated, even if the circle were projected to a correct depth.
I didn't read the whole thing, but I think I understand the general idea of the method.
Won't help much. You will still have issues if you move the camera so that the circle ends up behind the near plane; in that case none of the fragments will be generated, and the light will "disappear".
Lights described in the article will have a sharp falloff, understandably so, since a sphere or circle has a sharp border. I wouldn't call it point lighting...
To me this looks like premature optimization... I would simply render a whole screen quad and do the shading almost as usual, with no special cases to worry about. Don't forget that all the manipulation of OpenGL state and the additional draw calls also introduce overhead, and it is not clear which cost will outweigh the other here.
You forgot to do perspective division here
The simplest way to calculate the scale is to transform a point on the surface of the sphere to screen coordinates and take the length of the resulting screen-space vector. It must be a point on the silhouette border in screen space, obviously.
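A rough GLSL sketch of that idea, assuming a world-space light position, a world-space light radius, and the camera's right vector are available as uniforms (these names are illustrative, not from the original post):

uniform mat4 viewProjectionMatrix;
uniform vec3 lightPositionWorld;
uniform float lightRadius;      // world-space radius of the light volume
uniform vec3 cameraRightWorld;  // camera's right vector, so the offset point lies near the silhouette

vec2 projectToScreen(vec3 p)
{
    vec4 clip = viewProjectionMatrix * vec4(p, 1.0);
    return clip.xy / clip.w;    // perspective division -> normalized device coordinates
}

float circleScale()
{
    vec2 center = projectToScreen(lightPositionWorld);
    vec2 border = projectToScreen(lightPositionWorld + cameraRightWorld * lightRadius);
    return length(border - center);  // approximate circle radius in NDC units
}

As noted above, this still breaks down once the projected points cross the near plane.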
I have a very simple OpenGL (3.2) setup, no lighting, perspective projection and a simple shader program (applies projection transformation and uses texture2D to read the color from the texture).
The camera is looking down the negative z-axis and I draw a few walls and pillars on the x-y-plane with a texture (http://i43.tinypic.com/2ryszlz.png).
Now I'm moving the camera in the x-y-plane and this is what it looks like:
http://i.imgur.com/VCrNcly.gif.
My question is now: How do I handle the flickering of the wall texture?
As the camera centers on the walls, the viewing angle onto the texture compresses it on screen, so one pixel on the screen actually covers several texels, but only one of them is chosen for display. From the information I have access to in the shader, I don't see how to perform an operation that interpolates the required color.
As this looks like a problem nearly every 3D application should have, the solution is probably pretty simple (I hope?).
I can't make out the images, but from what you are describing you seem to be looking for mipmapping. Please look it up; it's a very simple and very widely used concept, and you will be able to enable it by adding one or two lines to your program. Good luck. I'd be more detailed but I am out of time for today.
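For reference, a minimal sketch of what those one or two lines usually look like after uploading the texture data (the texture handle name here is assumed):

// Build the mip chain for the bound texture and use trilinear filtering when
// the texture is minified, which is the flickering case described above.
glBindTexture(GL_TEXTURE_2D, wallTexture);
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);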
I'm looking to capture a 360-degree, spherical-panorama photo of my scene. How can I best do this? If I understand correctly, I can't do it the ordinary way by simply setting the perspective FOV to 360.
If I need a vertex shader for this, is there one available?
This is actually a nontrivial thing to do.
In a naive approach, a vertex shader that transforms the vertex positions not by matrix multiplication but by feeding them through trigonometric functions may seem to do the trick. The problem is that this will not make straight lines "curvy". You could use a tessellation shader to add sufficient geometry to compensate for this.
The most straightforward approach is two-fold. First you render your scene into a cubemap, i.e. render with a 90°×90° FOV into the six directions making up a cube. This allows you to render the scene with regular perspective projections.
In a second step you use the generated cubemap to texture a screen-filling grid, where the texture coordinates of each vertex are azimuth and elevation.
Another approach is to use tiled rendering with a very small FOV and rotate the "camera", rather like taking a panoramic picture without using a wide-angle lens. As a matter of fact, the cubemap-based approach is tiled rendering; it's just easier to get right than trying to do this directly with changing camera directions and viewport placement.
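A minimal fragment-shader sketch for the second step, assuming the screen-filling grid supplies texture coordinates in [0,1]² (the names here are illustrative):

#version 330 core

uniform samplerCube sceneCube;  // the cubemap rendered in the first step

in vec2 texCoord;   // x maps to azimuth, y to elevation, both in [0,1]
out vec4 fragColor;

void main()
{
    const float PI = 3.14159265358979;
    float azimuth   = (texCoord.x - 0.5) * 2.0 * PI;  // -pi .. pi
    float elevation = (texCoord.y - 0.5) * PI;        // -pi/2 .. pi/2

    // Direction on the unit sphere for this azimuth/elevation pair.
    vec3 dir = vec3(cos(elevation) * sin(azimuth),
                    sin(elevation),
                    cos(elevation) * cos(azimuth));

    fragColor = texture(sceneCube, dir);
}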
I am trying to develop a basic ray tracer. So far I have implemented intersection with a plane and Blinn-Phong shading. I am working on a 500×500 window and my primary-ray generation code is as follows:
vec3 rayDirection = vec3( gl_FragCoord.x-250.0,gl_FragCoord.y-250.0 , 10.0);
Now I am not sure whether the above method is right or wrong. Please give me some insights.
I also have a doubt about whether we need to construct geometry in OpenGL code while ray tracing in GLSL. For example, if I am trying to raytrace a plane, do I need to construct the plane in OpenGL code using glVertex2f?
vec3 rayDirection = vec3( gl_FragCoord.x-250.0,gl_FragCoord.y-250.0 , 10.0);
Now I am not sure whether the above method is right or wrong. Please give me some insights.
There's no right or wrong with projections. You could just as well map viewport pixels to azimuth and elevation angles. Actually your way of doing this is not bad at all. I'd just pass the viewport dimensions in an additional uniform instead of hardcoding them, and normalize the vector. The Z component effectively works like a focal length.
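A small sketch of that suggestion (the uniform names are made up for illustration):

uniform vec2 viewportSize;   // e.g. set to vec2(500.0, 500.0) from the application
uniform float focalLength;   // plays the role of the hardcoded 10.0

vec3 primaryRayDirection()
{
    vec2 centered = gl_FragCoord.xy - 0.5 * viewportSize;
    return normalize(vec3(centered, focalLength));
}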
I also have a doubt about whether we need to construct geometry in OpenGL code while ray tracing in GLSL. For example, if I am trying to raytrace a plane, do I need to construct the plane in OpenGL code using glVertex2f?
Raytracing works on a global description containing the full scene. OpenGL primitives, however, are purely local, i.e. just individual triangles, lines or points, and OpenGL doesn't maintain a scene database. So geometry passed in using the usual OpenGL drawing functions cannot be raytraced (at least not that way).
This is about the biggest obstacle to doing raytracing with GLSL: you somehow need to implement a way to deliver the whole scene as some freely accessible buffer.
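One simple way to do that, sketched here under the assumption of a small scene made entirely of spheres (a plain uniform array works for a handful of objects; a UBO or buffer texture scales better):

struct Sphere { vec3 center; float radius; };

uniform Sphere spheres[16];   // the whole scene, uploaded from the application
uniform int sphereCount;

// Distance to the nearest intersection of the ray ro + t*rd (rd normalized)
// with sphere s, or -1.0 on a miss.
float intersectSphere(vec3 ro, vec3 rd, Sphere s)
{
    vec3 oc = ro - s.center;
    float b = dot(oc, rd);
    float c = dot(oc, oc) - s.radius * s.radius;
    float h = b * b - c;
    if (h < 0.0) return -1.0;
    return -b - sqrt(h);
}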
It is possible to use ray marching to render certain types of complex scenes in a single fragment shader. Here are some examples (use Chrome or Firefox; requires WebGL):
Gift boxes: http://glsl.heroku.com/e#820.2
Torus Journey: http://glsl.heroku.com/e#794.0
Christmas tree: http://glsl.heroku.com/e#729.0
Modutropolis: http://glsl.heroku.com/e#327.0
The key to making this stuff work is writing "distance functions" that tell the ray marcher how far it is from the surface of an object. For more info on distance functions, see:
http://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm
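To illustrate the idea, here is a minimal sketch (not taken from those demos) of a signed distance function and a ray-marching loop:

// Signed distance from point p to a sphere of the given radius at the origin.
float sdSphere(vec3 p, float radius)
{
    return length(p) - radius;
}

// March along the ray ro + t*rd; each step is "safe" because nothing in the
// scene is closer than the reported distance. Returns t on a hit, -1.0 on a miss.
float rayMarch(vec3 ro, vec3 rd)
{
    float t = 0.0;
    for (int i = 0; i < 64; ++i)
    {
        float d = sdSphere(ro + t * rd, 1.0);
        if (d < 0.001) return t;
        t += d;
        if (t > 100.0) break;
    }
    return -1.0;
}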