OpenGL Fog effect changes when I move or rotate the camera

I want to add fog to a small 3D world. I tried fiddling with the arguments, but the fog is not homogeneous.
I have two problems that may be linked:
Fog homogeneity:
When I move or rotate my viewpoint with gluLookAt, the fog is too heavy and the whole world turns grey. However, there are two angles where the fog renders nicely.
The fog looks normal when the camera's orientation around the Y axis is 45° or -135° (the opposite direction).
Fog centered on the origin of the scene:
When the fog is displayed correctly, it is centered on the origin (0, 0, 0) of the scene.
Here is the code I use to initialise the fog, and the call to gluLookAt:
private static final float density = 1f;

private void initFog() {
    // Fog colour matches the clear colour so distant geometry fades out cleanly
    float[] fogColorValues = {0.8f, 0.8f, 0.8f, 1f};
    ByteBuffer temp = ByteBuffer.allocateDirect(16);
    temp.order(ByteOrder.nativeOrder());
    FloatBuffer fogColor = temp.asFloatBuffer();
    fogColor.put(fogColorValues);
    fogColor.flip(); // rewind so glFog reads from the start of the buffer

    GL11.glClearColor(0.8f, 0.8f, 0.8f, 1.0f);
    GL11.glFogi(GL11.GL_FOG_MODE, GL11.GL_LINEAR);
    GL11.glFog(GL11.GL_FOG_COLOR, fogColor);
    GL11.glFogf(GL11.GL_FOG_DENSITY, density); // ignored in GL_LINEAR mode
    GL11.glHint(GL11.GL_FOG_HINT, GL11.GL_FASTEST);
    GL11.glFogf(GL11.GL_FOG_START, 1f);
    GL11.glFogf(GL11.GL_FOG_END, 10000f);
}
private void initWindow() {
    try {
        Display.setDisplayMode(new DisplayMode(1600, 900));
        Display.create();
        GL11.glMatrixMode(GL11.GL_PROJECTION);
        GL11.glLoadIdentity();
        GLU.gluPerspective(60f, 1600f / 900f, 3, 100000);
        GL11.glMatrixMode(GL11.GL_MODELVIEW);
        GL11.glLoadIdentity();
        GL11.glEnable(GL11.GL_FOG);
        GL11.glEnable(GL11.GL_DEPTH_TEST);
        initFog();
        initParticles();
    } catch (LWJGLException e) {
        Display.destroy();
        System.exit(1);
    }
}
This is called from the updatePosition function inside the main loop.
The angle parameter is the viewing direction around the Y axis, and yCpos is a value between -1 and 1 that I use to look up or down.
GL11.glLoadIdentity();
GLU.gluLookAt(xpos, ypos, zpos, xpos + (float)Math.cos(angle), ypos+ yCpos, zpos+ (float)Math.sin(angle), 0, 1, 0);

I was drawing the ground with one giant quad; now I draw the ground with tiles, and the problem no longer occurs. I never pinned down the exact cause, but fixed-function fog is evaluated per vertex and interpolated across each polygon, which would explain why a single huge ground quad fogged incorrectly. In any case, the problem is solved.
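For reference, a minimal sketch of the tiled ground might look like this (the method name and the size/tileSize parameters are just illustrative, not taken from the original code):

private void drawTiledGround(float size, float tileSize) {
    // Tessellating the plane gives the fixed-function pipeline many vertices
    // at which to evaluate fog, instead of only the four corners of one quad.
    GL11.glBegin(GL11.GL_QUADS);
    for (float x = -size; x < size; x += tileSize) {
        for (float z = -size; z < size; z += tileSize) {
            GL11.glVertex3f(x,            0f, z);
            GL11.glVertex3f(x + tileSize, 0f, z);
            GL11.glVertex3f(x + tileSize, 0f, z + tileSize);
            GL11.glVertex3f(x,            0f, z + tileSize);
        }
    }
    GL11.glEnd();
}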

Related

QOpenGLWidget resize results in incorrect viewport size

I have some issues with my QOpenGLWidget resizing functionality. I am aiming for a new viewport with the correct number of pixels and a scene centered in the actual center of my window, but both of these are off somehow.
Screenshots (omitted here) show the initial view, the view after scaling in Y, and the view after scaling in X.
The result is pixelated and translated. It looks as if the GL viewport has the correct number of pixels but is shifted toward the top and the right (taking (0,0) as the bottom-left corner).
Here is my code:
void GLWidget::initializeGL() {
QOpenGLFunctions::initializeOpenGLFunctions();
glClearColor(0.7f, 0.75f, 0.8f, 1.0f);
glEnable(GL_MULTISAMPLE);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
void GLWidget::resizeGL(int w, int h) {
qreal aspect = qreal(w) / qreal(h ? h : 1);
const qreal zNear = 3, zFar = 7, fov = 3.14/6;
// I will leave this as it is; it cannot cause the viewport translation
mGraphics->setProjectionPers(fov, aspect, zNear, zFar);
}
void GLWidget::paintGL() {
glClear(GL_COLOR_BUFFER_BIT);
glClear(GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
//actual Drawing
//...
}
resizeGL is called with the correct values. What am I doing wrong that produces a pixelated and translated image when I run this code?
For whatever reason, this was in the header file of my QOpenGLWidget descendant:
void resizeEvent(QResizeEvent* ev) {
resizeGL(width(), height());
}
This pretty much skips all the resize logic of the QOpenGLWidget class. Removing the override restores the default resize handling and fixes the problem.

OpenGL "camera" Yaw on wrong axis

I am using gluLookAt() to set the "camera" position and orientation
GLU.gluLookAt(xPosition, yPosition, zPosition,
              xPosition + lx, yPosition, zPosition + lz,
              0, 1, 0);
my lz and lx variables represent my forward vector
lz = Math.cos(angle);
lx = -Math.sin(angle);
When I turn around in the 3D world, it appears that I am rotating around an axis that is always in front of me.
I know this because my xPosition and yPosition variables stay the same, but I appear to spin around an object when I'm close to it and turn.
I know the maths I have used here is not the problem, because I have tried code from past projects that worked properly, but the problem remains.
This is what I am doing in the rendering loop:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//draw scene from user perspective
glLoadIdentity();
GLU.gluLookAt(camera.getxPos(), camera.getyPos(), camera.getzPos(),
camera.getxPos()+camera.getLx(), camera.getyPos(), camera.getzPos()+p1.getLz(),
0, 1, 0);
glBegin(GL_QUADS);
glVertex3f(-dim, dim, 0);
glVertex3f(dim, dim, 0);
glVertex3f(dim, 0, 0);
glVertex3f(-dim, 0, 0);
glEnd();
pollInput();
camera.update();
I have tried rendering a box at the player coordinates and got this result: the camera appears to be looking from behind the player coordinates. To use an analogy, right now it is like a third-person game when it should look like a first-person game.
The small box here is rendered at the camera coordinates; to give some perspective, the bigger box is in front of it.
Solved!
The problem was that I was initially calling gluLookAt() while the matrix mode was set to GL_PROJECTION.
I removed that call and moved it to just after setting the matrix mode to GL_MODELVIEW, and that solved the problem.
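For illustration, a minimal sketch of the corrected setup (the perspective parameters and width/height are placeholders; the camera accessors are taken from the question):

GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GLU.gluPerspective(60f, (float) width / (float) height, 0.1f, 1000f); // projection only

GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
GLU.gluLookAt(camera.getxPos(), camera.getyPos(), camera.getzPos(),   // eye
        camera.getxPos() + camera.getLx(), camera.getyPos(), camera.getzPos() + camera.getLz(), // target
        0, 1, 0);                                                      // up
// ... draw the scene with the modelview matrix still current ...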

OpenGL Directional Lighting + Positioning

I'm writing an engine and using Light 0 as the "sun" for the scene. The sun is a directional light.
I set up the scene's ortho viewpoint, then set up the light to be on the "east" side of the screen (and of the character). X/Y are the coordinates of the terrain plane, with positive Z facing the camera and indicating "height" on the terrain; the scene is also rotated around the X axis for an isometric view.
The light shines fine "east" of (0,0,0), but it does not shift as the character moves (CenterCamera does a glTranslatef with the negatives of the values provided, so that positions can be specified in world coordinates). In other words, the further I move to the west, it is always dark, with no light.
Graphics.BeginRenderingLayer();
{
    Video.MapRenderingMode();
    Graphics.BeginLightingLayer( Graphics.AmbientR, Graphics.AmbientG, Graphics.AmbientB, Graphics.DiffuseR, Graphics.DiffuseG, Graphics.DiffuseB, pCenter.X, pCenter.Y, pCenter.Z );
    {
        Graphics.BeginRenderingLayer();
        {
            Graphics.CenterCamera( pCenter.X, pCenter.Y, pCenter.Z );
            RenderMap( pWorld, pCenter, pCoordinate );
        }
        Graphics.EndRenderingLayer();
        Graphics.BeginRenderingLayer();
        {
            Graphics.DrawMan( pCenter );
        }
        Graphics.EndRenderingLayer();
    }
    Graphics.EndLightingLayer();
}
Graphics.EndRenderingLayer();
Graphics.BeginRenderingLayer = PushMatrix and EndRenderingLayer = PopMatrix. Video.MapRenderingMode = ortho projection plus the scene rotation/zoom. CenterCamera translates by the opposite of the X/Y/Z values provided, so that the character ends up centered at X/Y/Z in the middle of the screen.
Any thoughts? Maybe I've confused some of my code here a little?
The lighting code is as follows:
public static void BeginLightingLayer( float pAmbientRed, float pAmbientGreen, float pAmbientBlue, float pDiffuseRed, float pDiffuseGreen, float pDiffuseBlue, float pX, float pY, float pZ )
{
Gl.glEnable( Gl.GL_LIGHTING );
Gl.glEnable( Gl.GL_NORMALIZE );
Gl.glEnable( Gl.GL_RESCALE_NORMAL );
Gl.glEnable( Gl.GL_LIGHT0 );
Gl.glShadeModel( Gl.GL_SMOOTH );
float[] AmbientLight = new float[4] { pAmbientRed, pAmbientGreen, pAmbientBlue, 1.0f };
float[] DiffuseLight = new float[4] { pDiffuseRed, pDiffuseGreen, pDiffuseBlue, 1.0f };
float[] PositionLight = new float[4] { pX + 10.0f, pY, 0, 0.0f };
//Light position/direction is 10 units to the east of the player.
Gl.glLightfv( Gl.GL_LIGHT0, Gl.GL_AMBIENT, AmbientLight );
Gl.glLightfv( Gl.GL_LIGHT0, Gl.GL_DIFFUSE, DiffuseLight );
Gl.glLightfv( Gl.GL_LIGHT0, Gl.GL_POSITION, PositionLight );
Gl.glEnable( Gl.GL_COLOR_MATERIAL );
Gl.glColorMaterial( Gl.GL_FRONT_AND_BACK, Gl.GL_AMBIENT_AND_DIFFUSE );
}
You will need to provide normals for each surface. What is happening (without normals) is that the directional light is essentially shining on everything east of zero, positionally, while everything there has a normal of (0, 0, 1) (it faces west).
You do not need to send normals with every vertex as far as I can tell; rather, because GL is a state machine, you need to make sure that whenever the normal changes you set it. So if you're rendering a face of a cube, the 'west' face should have a single call:
glNormal3i(0,0,1);
glTexCoord..
glVertex3f...
glTexCoord..
etc.
In the case of x-y-z aligned rectangular prisms, integer normals are sufficient. Faces that do not face one of the six cardinal directions need normalized normals. In my experience you only need the first three points unless the quad is not flat: find the normal of the triangle formed by the first three vertices of the quad.
There are a few simple tutorials on 'Calculating Normals' that I found enlightening; a rough sketch of the idea follows.
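As an illustration of that recipe (sketched in Java for consistency with the earlier snippets; the helper name and the float[3] convention are assumptions), the normal of a flat quad can be computed from the triangle formed by its first three vertices:

static float[] faceNormal(float[] a, float[] b, float[] c) {
    float ux = b[0] - a[0], uy = b[1] - a[1], uz = b[2] - a[2]; // edge u = b - a
    float vx = c[0] - a[0], vy = c[1] - a[1], vz = c[2] - a[2]; // edge v = c - a
    float nx = uy * vz - uz * vy;                               // cross product u x v
    float ny = uz * vx - ux * vz;
    float nz = ux * vy - uy * vx;
    float len = (float) Math.sqrt(nx * nx + ny * ny + nz * nz); // normalize
    return new float[] { nx / len, ny / len, nz / len };
}
// usage: pass the result to glNormal3f(n[0], n[1], n[2]) before emitting the quad's vertices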
The second part of this is that since it is a directional light (w = 0), repositioning it with the player position doesn't make sense. Unless the light itself is being emitted from behind the camera and you are rotating an object in front of you (like a model) that you wish to always be front-lit, its position should probably be something like
float[] PositionLight = new float[4] { 0.0f, 0.0f, 1.0f, 0.0f };
Or, if the GL x direction is being interpreted as the east-west direction (i.e. you are initially facing north or south):
float[] PositionLight = new float[4] { 1.0f, 0.0f, 0.0f, 0.0f };
The concept is that you are calculating the light per-face, and if the light doesn't move and the scene itself is not moving (just the camera moving around the scene) the directional calculation will always remain correct. Provided the normals are accurate, GL can figure out the intensity of light showing on a particular face.
The final thing here is that GL will not automatically handle shadows for you. Basic GL lighting is sufficient for controlled lighting of a series of convex shapes, so you will have to figure out whether or not a light (such as the sun) should reach a face at all. In some cases this means taking the solid the face belongs to and checking whether the vector of the sun's light intersects another solid before reaching the 'sky'.
Look for stuff on lightmaps as well as shadowmapping for this.
One thing that can trip up many people is that the position sent to glLightfv is transformed by the current modelview matrix. Thus, if you want your light at a specific position (or direction) in world coordinates, the camera/view transform must already be applied to the modelview matrix at the time of the glLightfv call.
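For example, a per-frame setup along these lines keeps a directional "sun" fixed in world space (sketched with LWJGL/Java, i.e. org.lwjgl.opengl.GL11, org.lwjgl.util.glu.GLU and org.lwjgl.BufferUtils, for consistency with the earlier examples; the camera variables are placeholders):

GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
GLU.gluLookAt(camX, camY, camZ, lookX, lookY, lookZ, 0f, 1f, 0f); // camera transform first

FloatBuffer sunDir = BufferUtils.createFloatBuffer(4);
sunDir.put(new float[] { 1f, 0f, 0f, 0f }).flip();        // w = 0: directional light "from the east"
GL11.glLight(GL11.GL_LIGHT0, GL11.GL_POSITION, sunDir);   // transformed by the current modelview

// ... draw the scene; the sun direction now stays fixed in world space ...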

static circle in openGL

I hope you can help me with a little problem.
I know how to draw a circle; that's not a problem. Here is the code in C#:
void DrawEllipse()
{
    GL.Color3(0.5, 0.6, 0.2);
    float x, y, z;
    double t;
    GL.Begin(BeginMode.Points);
    for (t = 0; t <= 360; t += 0.25)
    {
        x = (float)(3 * Math.Sin(t));
        y = (float)(3 * Math.Cos(t));
        z = (float)0;
        GL.Vertex3(x, y, z);
    }
    GL.End();
}
But there is a problem: when I call GL.Rotate(angle, axis) and then redraw the circle, it is still a circle in 3D, but I want a circle on the screen; I mean a static circle that does not rotate with the 3D objects around it. Is that possible? How do I fix the code?
Are you trying to draw a 2D circle on top of a 3D scene to create a HUD or similar? If so, you should research 2D OpenGL, glOrtho, and using multiple viewports in a scene. There is a discussion around this here:
http://www.gamedev.net/topic/388298-opengl-hud/
Just draw it at a position in front of the camera!
Use glPushMatrix() and glPopMatrix().
Or draw the other things between glPushMatrix() and glPopMatrix(), then draw the circle.
HUD (heads-up display): http://en.wikipedia.org/wiki/HUD_(video_gaming)
void setupScene ()
{
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// set the perspective
glFrustum(...) // or glu's perspective
}
void loop ()
{
// main scene
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glViewport (...)
// push the camera position into GL_MODELVIEW
// (i.e. the inverse matrix of its object position)
// draw your normal 3D objects
// switch to 2D projection (for the HUD)
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(....)
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// draw the objects onto the HUD
// switch back to 3d projection (i.e. restore GL_PROJECTION)
// glEnable (GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
glPopMatrix();
// glMatrixMode(GL_MODELVIEW);
// swap buffers
}
The commented-out code is optional, depending on what you're going to do in the end; take it as hints.

OpenGL particles, help controlling direction

I am trying to modify this Digiben sample in order to get the effect of particles that spawn at a spot (an impact point) and float upwards, somewhat like the sparks of a fire. The sample has the particles rotating in a circle. I have tried removing the cosine/sine functions and replacing them with a plain glTranslate with an increasing Y value, but I just can't get any real results. Could anyone please point out roughly where I should add or modify the translation in this code to obtain that result?
void ParticleMgr::init(){
tex.Load("part.bmp");
GLfloat angle = 0; // A particle's angle
GLfloat speed = 0; // A particle's speed
// Create all the particles
for(int i = 0; i < P_MAX; i++)
{
speed = float(rand()%50 + 450); // Make a random speed
// Init the particle with a random speed
InitParticle(particle[i],speed,angle);
angle += 360 / (float)P_MAX; // Increment the angle so when all the particles are
// initialized they will be equally positioned in a
// circular fashion
}
}
void ParticleMgr::InitParticle(PARTICLE &particle, GLfloat sss, GLfloat aaa)
{
particle.speed = sss; // Set the particle's speed
particle.angle = aaa; // Set the particle's current angle of rotation
// Randomly set the particles color
particle.red = rand()%255;
particle.green = rand()%255;
particle.blue = rand()%255;
}
void ParticleMgr::DrawParticle(const PARTICLE &particle)
{
tex.Use();
// Calculate the current x any y positions of the particle based on the particle's
// current angle -- This will make the particles move in a "circular pattern"
GLfloat xPos = sinf(particle.angle);
GLfloat yPos = cosf(particle.angle);
// Translate to the x and y position and the #defined PDEPTH (particle depth)
glTranslatef(xPos,yPos,PDEPTH);
// Draw the first quad
glBegin(GL_QUADS);
glTexCoord2f(0,0);
glVertex3f(-5, 5, 0);
glTexCoord2f(1,0);
glVertex3f(5, 5, 0);
glTexCoord2f(1,1);
glVertex3f(5, -5, 0);
glTexCoord2f(0,1);
glVertex3f(-5, -5, 0);
glEnd(); // Done drawing quad
// Draw the SECOND part of our particle
tex.Use();
glRotatef(particle.angle,0,0,1); // Rotate around the z-axis (depth axis)
//glTranslatef(0, particle.angle, 0);
// Draw the second quad
glBegin(GL_QUADS);
glTexCoord2f(0,0);
glVertex3f(-4, 4, 0);
glTexCoord2f(1,0);
glVertex3f(4, 4, 0);
glTexCoord2f(1,1);
glVertex3f(4, -4, 0);
glTexCoord2f(0,1);
glVertex3f(-4, -4, 0);
glEnd(); // Done drawing quad
// Translate back to where we began
glTranslatef(-xPos,-yPos,-PDEPTH);
}
void ParticleMgr::run(){
for(int i = 0; i < P_MAX; i++)
{
DrawParticle(particle[i]);
// Increment the particle's angle
particle[i].angle += ANGLE_INC;
}
}
For now I am adding glPushMatrix() and glTranslatef(x, y, z) in the run() function above, right before the loop, with x, y, z set to the position of the enemy so the particles appear on top of the enemy. Is that the best place for that?
Thanks for any input!
Using glTranslate and glRotate that way will in fact decrease your program's performance. OpenGL is not a scene graph, so the matrix manipulation functions directly influence the drawing process, i.e. they don't set "object state". The issue you're running into is that a 4×4 matrix-matrix multiplication involves 64 multiplications and 48 additions, so you're spending far more computing power to move a particle than you would by simply updating the vertex position directly.
Now to your problem: as already mentioned, glTranslate operates on the (global) matrix state of one of the four selectable matrices, and the effects accumulate, i.e. each glTranslate starts from the matrix the previous glTranslate left behind. OpenGL provides a matrix stack, where one can push a copy of the current matrix to work with and then pop it to revert to the previous state.
However, matrix manipulation has been removed entirely from the OpenGL 3 core profile and later. OpenGL's matrix manipulation was never hardware accelerated (except on one particular graphics workstation made by SGI around 1996). Today it is an anachronism: every serious program working with 3D geometry uses more sophisticated matrix handling, either its own implementation or a third-party library, so OpenGL's matrix stack was simply redundant. I strongly suggest you forget about OpenGL's matrix manipulation functionality and update the particle positions yourself; a sketch follows.
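To tie this back to the original question, here is a minimal sketch of that "update the positions directly" idea, written in Java/LWJGL for consistency with the earlier examples; the class, field, and constant names are all illustrative. Each particle keeps its own position and upward velocity, and the quad is built from that position instead of calling glTranslate.

import org.lwjgl.opengl.GL11;

class SparkParticle {
    static final float SPAWN_X = 0f, SPAWN_Y = 0f, SPAWN_Z = -20f; // assumed impact point
    static final float MAX_HEIGHT = 15f;                           // recycle height

    float x = SPAWN_X, y = SPAWN_Y, z = SPAWN_Z;
    float vy = 2f + (float) Math.random() * 3f;    // random upward speed

    void update(float dt) {
        y += vy * dt;                              // drift upwards like a spark
        if (y > SPAWN_Y + MAX_HEIGHT) {            // respawn at the impact point
            x = SPAWN_X; y = SPAWN_Y; z = SPAWN_Z;
            vy = 2f + (float) Math.random() * 3f;
        }
    }

    void draw(float size) {                        // quad built from the position itself
        GL11.glBegin(GL11.GL_QUADS);
        GL11.glVertex3f(x - size, y + size, z);
        GL11.glVertex3f(x + size, y + size, z);
        GL11.glVertex3f(x + size, y - size, z);
        GL11.glVertex3f(x - size, y - size, z);
        GL11.glEnd();
    }
}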