OpenGL fog versus OpenGL ES fog

I have a problem where the fog works as intended in a desktop program (PC) using OpenGL, but the same fog doesn't work as it should on an Android device (using OpenGL ES).
The code is an exact duplicate; it looks like this:
// OpenGL ES Init
gl.glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
float fogColor[] = {0.5f, 0.5f, 0.5f, 1.0f};
// Fog color to mFogBuffer...
gl.glEnable(GL10.GL_FOG);
gl.glFogfv(GL10.GL_FOG_COLOR, mFogBuffer);
gl.glFogf(GL10.GL_FOG_DENSITY, 0.04f);
// OpenGL Init
glClearColor(0.5, 0.5, 0.5, 1.0);
float fogColor[] = {0.5, 0.5, 0.5, 1.0};
glEnable(GL_FOG);
glFogfv(GL_FOG_COLOR, fogColor);
glFogf(GL_FOG_DENSITY, 0.04f);
But I can't get the OpenGL ES fog to work exactly like the desktop version. I have experimented with glShadeModel() settings and so on.
The area that should be fogged is completely white; it is a basic quad (built from triangles).
I have done some gluLookAt() transformations, but it shouldn't affect this fog.
Any ideas?

Try glHint(GL_FOG_HINT, GL_NICEST).
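The fog hint matters because implementations may evaluate fog per vertex (GL_FASTEST) or per fragment (GL_NICEST), and per-vertex fog can look very different on large triangles such as a full-screen quad. A minimal sketch of a fuller fog setup, written with desktop-style names (the same calls exist on Android via GL10, e.g. gl.glFogf(...) and gl.glHint(...)):
glEnable(GL_FOG);
glFogf(GL_FOG_MODE, GL_EXP);     // GL_EXP is the default mode, but be explicit
glFogfv(GL_FOG_COLOR, fogColor);
glFogf(GL_FOG_DENSITY, 0.04f);
glHint(GL_FOG_HINT, GL_NICEST);  // request per-fragment fog where supported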

Related

Translating a 3d model to 2d using assimp

I'm using C++ to translate a 3D model, given via command line arguments, into a 2D picture with Assimp. However, I'm not sure of the best way to go about it. I have a basic hard-coded version that creates a set object, but I need to redo it using vectors and loops. What's the best way to go about it?
void createSimpleQuad(Mesh &m) {
    // Clear out vertices and elements
    m.vertices.clear();
    m.indices.clear();
    // Create four corners and an extra vertex
    Vertex upperLeft, upperRight;
    Vertex lowerLeft, lowerRight;
    Vertex upperMiddle;
    // Set positions of vertices
    // Note: glm::vec3(x, y, z)
    upperLeft.position = glm::vec3(-0.5, 0.5, 0.0);
    upperRight.position = glm::vec3(0.5, 0.5, 0.0);
    lowerLeft.position = glm::vec3(-0.5, -0.5, 0.0);
    lowerRight.position = glm::vec3(0.5, -0.5, 0.0);
    upperMiddle.position = glm::vec3(-0.9, 0.5, 0.0);
    // Set vertex colors
    // Note: glm::vec4(red, green, blue, alpha)
    upperLeft.color = glm::vec4(1.0, 0.0, 0.0, 1.0);
    upperRight.color = glm::vec4(0.0, 1.0, 0.0, 1.0);
    lowerLeft.color = glm::vec4(0.0, 0.0, 1.0, 1.0);
    lowerRight.color = glm::vec4(1.0, 1.0, 1.0, 1.0);
    upperMiddle.color = glm::vec4(0.5, 0.15, 0.979797979, 1.0);
    // Add to mesh's list of vertices
    m.vertices.push_back(upperLeft);
    m.vertices.push_back(upperRight);
    m.vertices.push_back(lowerLeft);
    m.vertices.push_back(lowerRight);
    m.vertices.push_back(upperMiddle);
    // Add indices for three triangles
    m.indices.push_back(0);
    m.indices.push_back(3);
    m.indices.push_back(1);
    m.indices.push_back(0);
    m.indices.push_back(2);
    m.indices.push_back(3);
    m.indices.push_back(0);
    m.indices.push_back(2);
    m.indices.push_back(4);
}
If you want to generate a 2D-picture out of a 3D-Model you need to:
Import the model
Render it via a common render library into a texture, or manually by using our viewer and taking a snapshot
At the moment there is no post-process in Assimp to generate a 2D view automatically.
But if you want to do this with your own rendering code, it is not so hard. After importing your model you have to:
Get the bounding box for your imported asset; just check the opengl-samples in the assimp-repo for some tips
Calculate the diameter of this bounding box.
Create a camera; for OpenGL you can use glm to calculate the view matrix
Place the asset at (0|0|0) in the world coordinate system
Move your camera back by the diameter and let it look at (0|0|0)
Render the view into a 2D-Texture or just take a screenshot
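As a rough sketch of the camera step, assuming glm and a bounding box you computed yourself (frameAsset and the bbMin/bbMax parameters are illustrative names, not Assimp API):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Place the camera one bounding-box diameter away from the origin,
// looking at (0|0|0), as described in the steps above.
glm::mat4 frameAsset(const glm::vec3& bbMin, const glm::vec3& bbMax)
{
    float diameter = glm::length(bbMax - bbMin); // diagonal of the bounding box
    glm::vec3 eye(0.0f, 0.0f, diameter);         // back off along +Z by the diameter
    glm::vec3 center(0.0f);                      // asset is centered at the origin
    glm::vec3 up(0.0f, 1.0f, 0.0f);
    return glm::lookAt(eye, center, up);         // the view matrix
}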

shapes skewed when rotated, using openGL, glm math, orthographic projection

For practice I am setting up a 2d/orthographic rendering pipeline in openGL to be used for a simple game, but I am having issues related to the coordinate system.
In short, rotations distort 2d shapes, and I cannot seem to figure why. I am also not entirely sure that my coordinate system is sound.
First I looked for previous answers; the most relevant one (2D opengl rotation causes sprite distortion) indicates that the problem was an incorrect ordering of transformations, but for now I am using just a view matrix and a projection matrix, multiplied in the correct order in the vertex shader:
gl_Position = projection * view * model * vec4(a_position, 1.0); // (The model is just the identity matrix.)
To summarize my setup so far:
- I am successfully uploading a quad that should stretch across the whole screen:
GLfloat vertices[] = {
    -wf,  hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // top left
    -wf, -hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // bottom left
     wf, -hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // bottom right
     wf,  hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // top right
};
GLuint indices[] = {
    0, 1, 2, // first Triangle
    2, 3, 0, // second Triangle
};
wf and hf are 1, and I am trying to use a -1 to 1 coordinate system so I don't need to scale by the resolution in shaders (though I am not sure that this is correct to do.)
My viewport and orthographic matrix:
glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
...
glm::mat4 mat_ident(1.0f);
glm::mat4 mat_projection = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
... though this clearly does not factor in the screen width and height. I have seen others use width and height instead of 1s, but this seems to break the system or display nothing.
I rotate with a static method that modifies a struct containing a glm::quat, dividing time by 1000 to get seconds:
main_cam.rotate((GLfloat)curr_time / TIME_UNIT_TO_SECONDS, 0.0f, 0.0f, 1.0f);
// which does: orientation = glm::angleAxis(angle, glm::vec3(x, y, z)) * orientation
Lastly, I pass the matrix as a uniform:
glUniformMatrix4fv(MAT_LOC, 1, GL_FALSE, glm::value_ptr(mat_projection * FreeCamera_calc_view_matrix(&main_cam) * mat_ident));
...and multiply in the vertex shader
gl_Position = u_matrix * vec4(a_position, 1.0);
v_position = a_position.xyz;
The full-screen quad rotates on its center (0, 0 as I wanted), but its length and width distort, which means that I didn't set something correctly.
My best guess is that I haven't created the right ortho matrix, but admittedly I have had trouble finding anything else on stack overflow or elsewhere that might help debug. Most answers suggest that the matrix multiplication order is wrong, but that is not the case here.
A secondary question is--should I not set my coordinates to 1/-1 in the context of a 2d game? I did so in order to make writing shaders easier. I am also concerned about character/object movement once I add model matrices.
What might be causing the issue? If I need to multiply the arguments to gl::ortho by width and height, then how do I transform coordinates so v_position (my "in"/"varying" interpolated version of the position attribute) works in -1 to 1 as it should in a shader? What are the implications of choosing a particular coordinates system when it comes to ease of placing entities? The game will use sprites and textures, so I was considering a pixel coordinate system, but that quickly became very challenging to reason about on the shader side. I would much rather have THIS working.
Thank you for your help.
EDIT: Is it possible that my varying/interpolated v_position should be set to the calculated gl_Position value instead of the attribute position?
Try accounting for the aspect ratio of the window you are displaying on in the first two parameters of glm::ortho:
GLfloat aspectRatio = (GLfloat)SCREEN_WIDTH / (GLfloat)SCREEN_HEIGHT; // cast to avoid integer division
glm::mat4 mat_projection = glm::ortho(-aspectRatio, aspectRatio, -1.0f, 1.0f, -1.0f, 1.0f);
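With this projection, the visible x range becomes [-aspectRatio, aspectRatio] while y stays in [-1, 1], so a quad with wf = hf = 1 keeps its proportions under rotation instead of being stretched by the window shape. Note that a varying such as v_position still carries the raw attribute values (the projection only affects gl_Position), so it remains in -1 to 1 regardless.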

OpenGL: Quads seemingly not culled properly

I have built a simple scene like the following:
The problem is, the blue shape is lower than the red one but somehow bleeds through. It looks proper when I rotate it like the following:
From what I searched this could be related to the order of vertices being sent, and here is my definition for those:
Shape* Obj1 = new Quad(Vec3(-5.0, 5.0, 0.0), Vec3(5.0, 5.0, 0.0), Vec3(5.0, 5.0, -10.0), Vec3(-5.0, 5.0, -10.0));
Shape* Obj2 = new Quad(Vec3(-5.0, 3.0, 0.0), Vec3(5.0, 3.0, 0.0), Vec3(5.0, 3.0, -10.0), Vec3(-5.0, 3.0, -10.0));
The Vec3 class just holds 3 doubles for x,y,z coordinates. I add these Vec3 classes to a vector, and iterate through them when I want to draw, as such:
glBegin(GL_QUADS);
for (auto it = vertex_list.begin(); it != vertex_list.end(); ++it)
    glVertex3d(it->get_x(), it->get_y(), it->get_z());
glEnd();
Finally, my settings:
glEnable(GL_ALPHA_TEST | GL_DEPTH_TEST | GL_CULL_FACE);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glAlphaFunc(GL_GREATER, 0.0f);
glViewport(0, 0, WINDOW_X, WINDOW_Y);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-1.0, 1.0, -1.0, 1.0, 1.0f, 300.0);
// camera origin xyz, point to look at xyz, camera rot xyz
gluLookAt(10, 10, -20, 2.5, 2.5, -10, 0, 1, 0);
You should enable depth test, face culling and alpha testing separately.
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
They are not flags. You cannot use them in that way.
See glEnable:
glEnable — enable or disable server-side GL capabilities
void glEnable(GLenum cap);
cap Specifies a symbolic constant indicating a GL capability.
This means the parameter of glEnable is a single symbolic constant, not a set of bits; GL_ALPHA_TEST, GL_DEPTH_TEST and GL_CULL_FACE are symbolic constants, not bits of a bit set.
Change your code like this:
glEnable(GL_ALPHA_TEST);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
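(ORing the three constants together produces a different, unintended enum value, so the original single call at best enables one unrelated capability and otherwise just records a GL_INVALID_ENUM error; none of the three tests you wanted get enabled.)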
See OpenGL Specification - 17.3.4 Depth Buffer Test, p. 500:
17.3.4 Depth Buffer Test
The depth buffer test discards the incoming fragment if a depth comparison fails. The comparison is enabled or disabled with the generic Enable and Disable commands using target DEPTH_TEST.
See OpenGL Specification - 14.6.1 Basic Polygon Rasterization, p. 473:
Culling is enabled or disabled by calling Enable or Disable with target CULL_FACE.

Inconsistency of shading in polygons on the same axis in OpenGL

I am building a house in OpenGL. On the outside and on the inside, where there are doors or windows, I use a quad to go below and above the windows all the way around the house, and then a quad to fill in the gaps between windows. These have the same plane value, but for some reason the GL light shades some of them differently. Any clue why?
Quad between windows
glBegin(GL_QUADS);
glTexCoord2d(0, 0);glVertex3d(0, 1.1, 0);
glTexCoord2d(2, 0);glVertex3d(0, 1.1, 2);
glTexCoord2d(2, 1.6);glVertex3d(0, 2.7, 2);
glTexCoord2d(0, 1.6);glVertex3d(0, 2.7, 0);
glEnd();
Below windows
glBegin(GL_QUADS);
glTexCoord2d(0, 0);glVertex3d(0, 0.1, 0);
glTexCoord2d(15, 0);glVertex3d(0, 0.1, 15);
glTexCoord2d(15, 1);glVertex3d(0, 1.1, 15);
glTexCoord2d(0.0, 1);glVertex3d(0, 1.1, 0);
glEnd();
Above windows
glBegin(GL_QUADS);
glTexCoord2d(0, 2.6);glVertex3d(0, 2.7, 0);
glTexCoord2d(15, 2.6);glVertex3d(0, 2.7, 15);
glTexCoord2d(15, 3.0);glVertex3d(0, 3.1, 15);
glTexCoord2d(0.0,3.0);glVertex3d(0, 3.1, 0);
glEnd();
here is the code for the light
GLfloat light_position[] = { 50.0, 50.0, -1.0, 1.0 }; // GL_POSITION expects four components
GLfloat diffuse[] = { 1.0, 1.0, 1.0, 1.0 };
GLfloat specular[] = { 1.0, 1.0, 1.0, 1.0 };
GLfloat ambient[] = { 1.0, 1.0, 1.0, 1.0 };
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_LIGHT1);
glEnable(GL_NORMALIZE);
glLightfv(GL_LIGHT0, GL_POSITION , light_position );
glLightfv(GL_LIGHT0, GL_SPECULAR , specular);
glLightfv(GL_LIGHT0, GL_DIFFUSE , diffuse );
glLightfv(GL_LIGHT0, GL_AMBIENT , ambient );
Here is a screenshot of the result
http://imgur.com/WsgZWBF
Why is it doing this, and is there any way to fix it?
You need to supply (meaningful) vertex/face normals for OpenGL's lighting to work properly:
18.020 Why are my objects all one flat color and not shaded and illuminated?
This effect occurs when you fail to supply a normal at each vertex.
OpenGL needs normals to calculate lighting equations, and it won't calculate normals for you (with the exception of evaluators). If your application doesn't call glNormal*(), then it uses the default normal of (0.0, 0.0, 1.0) at every vertex. OpenGL will then compute the same, or nearly the same, lighting result at each vertex. This will cause your model to look flat and lack shading.
The solution is to simply calculate the normals that need to be specified at any given vertex. Then send them to OpenGL with a call to glNormal3f() just prior to specifying the vertex with which the normal is associated.
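As a hedged illustration using the first quad from the question: all of its vertices lie in the x = 0 plane, so one normal per face is enough for flat shading (the sign depends on which side should face the light):
glBegin(GL_QUADS);
glNormal3d(1.0, 0.0, 0.0); // one outward-facing normal for the whole planar quad
glTexCoord2d(0, 0); glVertex3d(0, 1.1, 0);
glTexCoord2d(2, 0); glVertex3d(0, 1.1, 2);
glTexCoord2d(2, 1.6); glVertex3d(0, 2.7, 2);
glTexCoord2d(0, 1.6); glVertex3d(0, 2.7, 0);
glEnd();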

Light and shadow not working in opengl and c++

I am creating the solar system and I keep running into problems with the lighting. The first problem is that the moon casts no shadows on the earth and the earth casts no shadows on the moon.
The other problem is that the light that is shining on the earth and the moon is not coming from my sun, but from the center point of the orbit. I added the red lines in the picture below to show what I mean.
The picture below should illustrate what my two problems are.
Here is the code that is dealing with the lights and the planets.
glDisable(GL_LIGHTING);
drawCircle(800, 720, 1, 50);
//SUN
//Picture location, major radius, minor radius, major orbit, minor orbit, angle
Planet Sun ("/home/rodrtu/Desktop/SolarSystem/images/Sun.png",
100, 99, 200.0, 0.0, 0.0);
double sunOrbS = 0;
double sunRotS = rotatSpeed/10;
cout << sunRotS << " Sun Rotation" << endl;
//orbit speed, rotation speed, moon reference coordinates (Parent planet's major and minor Axis)
Sun.displayPlanet(sunOrbS, sunRotS, 0.0, 0.0);
//Orbit path
//EARTH
GLfloat light_diffuse[] = { 1.5, 1.5, 1.5, 1.5 };
GLfloat pos[] = { 0.0, 0.0, 0.0, 200.0 };
glEnable(GL_LIGHTING);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
glLightfv(GL_LIGHT0, GL_POSITION, pos);
Planet Earth ("/home/rodrtu/Desktop/SolarSystem/images/EarthTopography.png",
50, 49, 500.0, 450.0, 23.5);
double eaOrbS = orbitSpeed;
double eaRotS = rotatSpeed*3;
Earth.displayPlanet(eaOrbS, eaRotS, 0.0, 0.0);
//EARTH'S MOON
Planet Moon ("/home/rodrtu/Desktop/SolarSystem/images/moonTest.png",
25, 23, 100.0, 100.0, 15);
double moOrbS = rotatSpeed*4;
double moRotS = eaOrbS;
Moon.displayPlanet(moOrbS, moRotS, Earth.getMajorAxis(), Earth.getMinorAxis());
orbitSpeed += 0.9;
if (orbitSpeed > 359.0)
    orbitSpeed = 0.0;
rotatSpeed += 2.0;
if (rotatSpeed > 7190.0)
    rotatSpeed = 0.0;
These next functions are used to determine the orbit coordinates and location of each planet:
void Planet::setOrbit(double orbitSpeed, double rotationSpeed,
                      double moonOrbitX, double moonOrbitY)
{
    majorAxis = orbitSemiMajor * cos(orbitSpeed / 180.0 * Math::Constants<double>::pi);
    minorAxis = orbitSemiMinor * sin(orbitSpeed / 180.0 * Math::Constants<double>::pi);
    glTranslate(majorAxis + moonOrbitX, minorAxis + moonOrbitY, 0.0);
    glRotatef(orbitAngle, 0.0, 1.0, 1.0);
    glRotatef(rotationSpeed, 0.0, 0.0, 1.0);
}
void Planet::displayPlanet(double orbitSpeed, double rotationSpeed,
                           double moonOrbitX, double moonOrbitY)
{
    GLuint surf;
    Images::RGBImage surfaceImage;
    surfaceImage = Images::readImageFile(texture);
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &surf); // generate one texture name (a count of 0 generates none)
    glBindTexture(GL_TEXTURE_2D, surf);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    surfaceImage.glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB);
    glPushMatrix();
    setOrbit(orbitSpeed, rotationSpeed, moonOrbitX, moonOrbitY);
    drawSolidPlanet(equatRadius, polarRadius, 1, 40, 40);
    glPopMatrix();
}
What am I doing wrong? I read up on the w component of GL_POSITION and I changed my position to be 200 (where the sun is centered), but the light source is still coming from the center of the orbit.
To make a proper reply to the light position issue:
[X, Y, Z, W] are called homogeneous coordinates.
A coordinate [X, Y, Z, W] in homogeneous space will be [X/W, Y/W, Z/W] in 3D space.
Now, consider the following W values:
W=1.0: [1.0, 1.0, 1.0, 1.0] is [1.0, 1.0, 1.0] in 3D space.
W=0.1: [1.0, 1.0, 1.0, 0.1] is [10.0, 10.0, 10.0] in 3D space.
W=0.001: [1.0, 1.0, 1.0, 0.001] is [1000.0, 1000.0, 1000.0] in 3D space.
As W moves towards 0, [X/W, Y/W, Z/W] approaches a point at infinity. It's actually no longer a point, but a direction from [0,0,0] towards [X,Y,Z].
So when defining the light position we need to make sure to get this right.
W=0 defines a directional light, so x, y, z is a direction vector
W=1 defines a positional light, so x, y, z is a position in 3D space
You'll get to play around with this a lot once you dig deeper into matrix math. If you try to transform a direction (W=0) with a translation matrix for example, it will not have any effect. This is very relevant here as well since the light position will be affected by the modelview matrix.
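Note that the pos[] = { 0.0, 0.0, 0.0, 200.0 } from the question divides out to [0/200, 0/200, 0/200], i.e. a positional light sitting at the origin, which matches the symptom of light coming from the orbit's center. A minimal sketch of the two options (the values are illustrative):
// w = 1: positional light at (200, 0, 0) in the current modelview space
GLfloat sun_pos[] = { 200.0f, 0.0f, 0.0f, 1.0f };
glLightfv(GL_LIGHT0, GL_POSITION, sun_pos);

// w = 0: directional light; all rays arrive parallel, from the +X direction
GLfloat sun_dir[] = { 1.0f, 0.0f, 0.0f, 0.0f };
glLightfv(GL_LIGHT0, GL_POSITION, sun_dir);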
Some easy to understand information here for further reading :
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
If OpenGL doesn't have a "cast shadow" function, how could I accomplish this then?
What you must understand is that OpenGL has no concept of a "scene". All OpenGL does is draw points, lines or triangles to the screen, one at a time. Once something is drawn, it has no influence on the following drawing operations.
So to do something fancy like shadows, you must get, well, artistic. By that I mean: like an artist who paints a picture that appears to have depth using "just" a brush and a palette of colours, you must use OpenGL in an artistic way to recreate the effects you desire. Drawing a shadow can be done in various ways, but the most popular one is known by the term Shadow Mapping.
Shadow Mapping is a two-step process. In the first step the scene is rendered into a "grayscale" picture "seen" from the point of view of the light, where the distance from the light is drawn as the "gray" value. This is called a Shadow Depth Map.
In the second step the scene is drawn as usual, and the light's shadow depth map(s) are projected into the scene, as if the light were a slide projector (where everything receives that image, since OpenGL itself doesn't shadow). In a shader, the depth value in the shadow depth map is compared with the actual distance to the light source for each processed fragment. If the distance to the light is farther than the corresponding pixel in the shadow map, something got in front of the currently processed geometry fragment while the shadow map was rendered; the fragment hence lies in shadow, so it's drawn in a shadow color (usually the ambient illumination color). You might want to combine this with an ambient occlusion effect to simulate soft, self-shadowing ambient illumination.
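A very rough outline of the two passes, as a sketch only: setCameraToLight(), setCameraToEye(), bindShadowShader() and drawScene() are placeholders for your own code, and the setup of the depth texture and framebuffer object is elided.
GLuint shadowFbo, shadowDepthTex; // assumed created: FBO with a GL_DEPTH_COMPONENT texture attached

// Pass 1: render the scene's depth from the light's point of view
glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
glClear(GL_DEPTH_BUFFER_BIT);
setCameraToLight();   // view/projection matrices from the light's position
drawScene();          // depth values land in shadowDepthTex

// Pass 2: render normally; a shader projects shadowDepthTex into the
// scene and compares each fragment's light distance against it
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
setCameraToEye();
bindShadowShader(shadowDepthTex);
drawScene();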