How to properly position a skybox camera using OpenGL - C++

I created a skybox for my project and it looks the way I wanted it to; however, there are a few issues I cannot figure out how to fix. I have read some tutorials on this subject, but I was not able to find anything that helps.
The first problem is that I don't know how to make the box always move with my camera. In the image below you can see that I am able to zoom out and see the whole box, instead of only zooming in/out of the solar system and always having the stars in the background.
The other issue is that when I zoom in too close, my background disappears. The picture below illustrates what I mean.
I know that if I get the camera working properly I can fix this, but that goes back to my first problem: I don't know how to access the camera info.
I believe I would have to modify glTranslatef() and glScalef() in my code from a fixed number to a number that changes with the camera position.
Here is my code:
void Skybox::displaySkybox()
{
    Images::RGBImage test[6]; // 6 pictures for 6 sides
    test[0]=Images::readImageFile(fileName); // Top
    //test[1]=Images::readImageFile(fileName); // Back
    //test[2]=Images::readImageFile(fileName); // Bottom
    //test[3]=Images::readImageFile(fileName); // Right
    //test[4]=Images::readImageFile(fileName); // Left
    //test[5]=Images::readImageFile(fileName); // Front

    glEnable(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    test[0].glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB);

    // Save the current matrix
    glPushMatrix();
    // Second, move the render space to the correct position (translate)
    glTranslatef(0, 0, 0);
    // First, apply the scale matrix
    glScalef(10000, 10000, 10000);

    static const GLint faces[6][4] =
    {
        {5, 1, 2, 6}, // back
        {5, 4, 0, 1}, // bottom
        {0, 4, 7, 3}, // front
        {4, 5, 6, 7}, // right ('left' in crinity's labeling)
        {1, 0, 3, 2}, // left ('right' in crinity's labeling)
        {2, 3, 7, 6}  // top
    };

    GLfloat v[8][3];
    GLint i;
    v[0][0] = v[1][0] = v[2][0] = v[3][0] = -1; // min x
    v[4][0] = v[5][0] = v[6][0] = v[7][0] =  1; // max x
    v[0][1] = v[1][1] = v[4][1] = v[5][1] = -1; // min y
    v[2][1] = v[3][1] = v[6][1] = v[7][1] =  1; // max y
    v[0][2] = v[3][2] = v[4][2] = v[7][2] = -1; // min z
    v[1][2] = v[2][2] = v[5][2] = v[6][2] =  1; // max z

    // there are only 6 faces (the original bound of i < 7 read past the end of faces[])
    for (i = 0; i < 6; i++)
    {
        glBegin(GL_QUADS);
        glTexCoord2f(0, 1); glVertex3fv(&v[faces[i][0]][0]);
        glTexCoord2f(1, 1); glVertex3fv(&v[faces[i][1]][0]);
        glTexCoord2f(1, 0); glVertex3fv(&v[faces[i][2]][0]);
        glTexCoord2f(0, 0); glVertex3fv(&v[faces[i][3]][0]);
        glEnd();
    }

    // Restore the saved matrix
    glPopMatrix();
}
How can I get access to these variables? Does OpenGL already have a function that takes care of that?

I believe I would have to modify glTranslatef() and glScalef() in my code from a fixed number to a number that changes with the camera position.
You're close, but there's a simpler solution (sketched in code after this list):

1. Draw the skybox first, before translating the camera, so that you don't have to translate the box. Don't forget to clear your depth buffer for each new frame (you'll see why in a second).
2. Disable writes to the depth buffer (call glDepthMask(GL_FALSE)). This will cause every other object you render to draw over it, making it always appear "behind" everything else.
3. Assuming your transform matrices were reset at the start of the frame, apply only the rotation of the camera. This way the camera will always be "centered" inside the box.
4. Draw the skybox. Since writes to the depth buffer are off, it doesn't matter how small it is, as long as it's larger than your camera's near clip plane.
5. Re-enable writes to the depth buffer (call glDepthMask(GL_TRUE)).
6. Render your scene normally.
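A minimal sketch of that ordering, assuming a fixed-function setup like the one in the question (applyCameraRotation, applyCameraTransform, and drawScene are hypothetical placeholders, not functions from the question's code):

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
applyCameraRotation();   // hypothetical: only the rotation part of the view

glDepthMask(GL_FALSE);   // the skybox leaves no depth footprint...
skybox.displaySkybox();
glDepthMask(GL_TRUE);    // ...so everything rendered next draws over it

glLoadIdentity();
applyCameraTransform();  // hypothetical: full view, rotation + translation
drawScene();             // hypothetical: render the solar system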

I haven't worked with skyboxes before, but it would make sense for the camera to always be at the center of the box. So start by translating the box to center it on the camera coordinates, something like glTranslatef(camera.x, camera.y, camera.z);
Then I'd think the box should stay effectively infinitely distant, so maybe set the vertices to INT_MAX or something ridiculously big:
v[0][0] = v[1][0] = v[2][0] = v[3][0] = -INT_MAX; // min x
v[4][0] = v[5][0] = v[6][0] = v[7][0] =  INT_MAX; // max x ...etc
Then probably get rid of the call to glScalef(). Try that out.
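Put together, a rough sketch of what I mean (camera.x/y/z are an assumption about how your camera class exposes its position):

glPushMatrix();
// keep the box centered on the eye so you can never zoom out of it
// or clip through its walls
glTranslatef(camera.x, camera.y, camera.z);
skybox.displaySkybox(); // with the glScalef() call removed inside
glPopMatrix();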

Related

Rendering surfaces in OpenGL with depth test correctly

I am wondering how to render surfaces with the depth test working correctly. In my case it is not working even though it has been enabled. I have tried many combinations but cannot figure out what is wrong; it might be some ordering of OpenGL commands, or it might be something I am missing completely.
I have this code that uses OpenGL to render a 2D game I am working on. I want to enable z-buffering and the depth test to simplify things in the code. I read a number of tutorials online and made changes as instructed, but cannot figure out why it is not working.
The code of the main function is shown below. I am changing the values of z for the two squares to be -10 and -25 and swapping them later on, but I always get the first square rendered over the second one no matter what values I use:
void MainGame::RenderTestUI()
{
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    GLSLProgram *ActiveShader = nullptr;
    ActiveShader = &ColorShader;
    ActiveShader->Use();

    GLint Location1 = ActiveShader->GetUniformLocation("cam");
    glm::mat4 tmp = Camera.GetCameraMatrix();
    glUniformMatrix4fv(Location1, 1, GL_FALSE, &tmp[0][0]);

    glActiveTexture(GL_TEXTURE0);
    GLint Location2 = ActiveShader->GetUniformLocation("basic");
    glUniform1i(Location2, 0); // sampler uniforms take glUniform1i, not glUniform1f
    glBindTexture(GL_TEXTURE_2D, GameTextures.ID);
    CurrentBoundTexture = GameTextures.ID;

    RenderingBatch.StartAddingVerticies();

    this->GameMap.TileList[1].FillSixVerticies(RenderingBatch.VertexListPtr, 0, 0);
    RenderingBatch.VertexCount += 6;
    for (int i = 0; i < 6; i++)
        RenderingBatch.VertexListPtr[i].z = -10; // first face

    this->GameMap.TileList[2].FillSixVerticies(&RenderingBatch.VertexListPtr[RenderingBatch.VertexCount], 8, 8);
    RenderingBatch.VertexCount += 6;
    for (int i = 0; i < 6; i++)
        RenderingBatch.VertexListPtr[i+6].z = -25; // second face

    RenderingBatch.EndAddingVerticies();
    RenderingBatch.CreateVBO();
    RenderingBatch.Render();
    ActiveShader->Unuse();

    // swap buffers
    SDL_GL_SwapWindow(GameWindow);
}
The end result is always the same regardless of the z values I assign to the two faces; the result can be seen here:
Any advice is highly appreciated.
When setting up the SDL surface to draw on, did you ask for a depth buffer prior to calling SDL_CreateWindow?
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
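These attributes must be set before the window and context exist; a minimal sketch of the ordering (window title and size are placeholders):

SDL_Init(SDL_INIT_VIDEO);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);   // request a 24-bit depth buffer
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_Window *GameWindow = SDL_CreateWindow("Game",
    SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
    1024, 768, SDL_WINDOW_OPENGL);             // ...only now create the window
SDL_GLContext Context = SDL_GL_CreateContext(GameWindow);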

Object rotation using keys

I want to rotate a cube using keys. This is part of the code. When I press the LEFT key, the cube rotates left, etc. My goal is to rotate the cube all around, so I have to rotate it around both the x and y axes, which causes a problem.
I have defined mat4 rotation; and use it to assign a rotation when I press and hold a key. While I hold the key, it rotates, for example to the left. Then I release the key and the object gets back to its initial position (the camera gets back to its initial position, since the object is not moving). I think the problem is caused by the auto rotateMat = rotation; line, which is defined below the key functions.
What am I doing wrong?
mat4 rotation; // global

if (keysPressed[GLFW_KEY_LEFT]) {
    timer -= delta;
    rotation = rotate(mat4{}, timer * 0.5f, {0, 1, 0});
}
if (keysPressed[GLFW_KEY_RIGHT]) {
    timer += delta;
    rotation = rotate(mat4{}, timer * 0.5f, {0, 1, 0});
}
if (keysPressed[GLFW_KEY_UP]) {
    timer += delta;
    rotation = rotate(mat4{}, timer * 0.5f, {1, 0, 0});
}
if (keysPressed[GLFW_KEY_DOWN]) {
    timer -= delta;
    rotation = rotate(mat4{}, timer * 0.5f, {1, 0, 0});
}
...
program.setUniform("ModelMatrix", rotation * cubeMat);
cube.render();
UPDATE: The problem was solved when I made the matrix a global variable instead of a local one.
There are multiple ways such an interaction can be implemented. One of the easier ones is to compute a relative rotation in every frame instead of a global one and accumulate it into the current rotation.
For this, one has to store the accumulated rotation in a global variable:
//Global variable
mat4 total_rotate;
And compute the relative rotation in every frame:
//In the function
mat4 rotation;
if (keysPressed[GLFW_KEY_LEFT]) {
    rotation = rotate(mat4{}, delta, {0, 1, 0});
}
if (keysPressed[GLFW_KEY_RIGHT]) {
    rotation = rotate(mat4{}, -delta, {0, 1, 0});
}
if (keysPressed[GLFW_KEY_UP]) {
    rotation = rotate(mat4{}, delta, {1, 0, 0});
}
if (keysPressed[GLFW_KEY_DOWN]) {
    rotation = rotate(mat4{}, -delta, {1, 0, 0});
}
total_rotate = total_rotate * rotation;
...
program.setUniform("ModelMatrix", total_rotate * cubeMat);
cube.render();
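Note that mat4 and rotate here look like GLM types; if so, the snippets need something along these lines (an assumption based on the identifiers, not stated in the question):

#include <glm/glm.hpp>                   // assuming GLM, based on mat4/rotate usage
#include <glm/gtc/matrix_transform.hpp>  // glm::rotate lives here
using glm::mat4;
using glm::rotate;                       // angle is in radians in recent GLM versions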
As an alternative, you could store the two rotation angles instead and calculate the matrix in every frame:
//Global variables
float rot_x = 0.0f, rot_y = 0.0f;

//In every frame
if (keysPressed[GLFW_KEY_LEFT]) {
    rot_x += delta;
}
if (keysPressed[GLFW_KEY_RIGHT]) {
    rot_x -= delta;
}
//Same for y

auto rotation = rotate(rotate(mat4{}, rot_y, {0, 1, 0}), rot_x, {1, 0, 0});
...
program.setUniform("ModelMatrix", rotation * cubeMat);
cube.render();

3d drawing in OpenGL

I'm trying to draw a chess board in OpenGL. I can draw the squares of the game board exactly as I want. But I also want to add a small border around the perimeter of the game board. Somehow, my border is way bigger than I want. In fact, each edge of the border is the exact width of the entire game board itself.
My approach is to draw a neutral gray rectangle to represent the entire "slab" of wood that would be cut to make the board. Then, inside this slab, I place the 64 game squares, which should be exactly centered and take up just slightly less 2D space than the slab does. I'm open to better ways, but keep in mind that I'm not very bright.
EDIT: in the image below, all that gray area should be about 1/2 the size of a single square. But as you can see, each edge is the size of the entire game board. Clearly I'm not understanding something.
Here is the display function that I wrote. Why is my "slab" so much too large?
void display()
{
    // Clear the image
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Reset any previous transformations
    glLoadIdentity();

    // define the slab
    float square_edge = 8;
    float border = 4;
    float slab_thickness = 2;
    float slab_corner = 4*square_edge + border;

    // Set the view angle
    glRotated(ph, 1, 0, 0);
    glRotated(th, 0, 1, 0);
    glRotated(zh, 0, 0, 1);

    float darkSquare[3] = {0, 0, 1};
    float lightSquare[3] = {1, 1, 1};

    // Set the viewing matrix
    glOrtho(-slab_corner, slab_corner, slab_corner, -slab_corner, -slab_corner, slab_corner);

    GLfloat board_vertices[8][3] = {
        {-slab_corner,  slab_corner, 0},
        {-slab_corner, -slab_corner, 0},
        { slab_corner, -slab_corner, 0},
        { slab_corner,  slab_corner, 0},
        {-slab_corner,  slab_corner, slab_thickness},
        {-slab_corner, -slab_corner, slab_thickness},
        { slab_corner, -slab_corner, slab_thickness},
        { slab_corner,  slab_corner, slab_thickness}
    };

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_INT, 0, board_vertices);

    // this defines each of the six faces in counter-clockwise vertex order
    GLubyte slabIndices[] = {0,3,2,1, 2,3,7,6, 0,4,7,3, 1,2,6,5, 4,5,6,7, 0,1,5,4};

    glColor3f(0.3, 0.3, 0.3); // upper left square is always light
    glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, slabIndices);

    // draw the individual squares on top and centered inside of the slab
    for (int x = -4; x < 4; x++) {
        for (int y = -4; y < 4; y++) {
            // set the color of the square
            if ((x+y) % 2) glColor3fv(darkSquare);
            else glColor3fv(lightSquare);
            glBegin(GL_QUADS);
            glVertex2i(x*square_edge, y*square_edge);
            glVertex2i(x*square_edge + square_edge, y*square_edge);
            glVertex2i(x*square_edge + square_edge, y*square_edge + square_edge);
            glVertex2i(x*square_edge, y*square_edge + square_edge);
            glEnd();
        }
    }

    glFlush();
    glutSwapBuffers();
}
glVertexPointer(3, GL_INT, 0, board_vertices);
specifies that board_vertices contains integers, but it's actually of type GLfloat. Could this be the problem?
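If so, the one-line fix would be to make the declared type match the data:

glVertexPointer(3, GL_FLOAT, 0, board_vertices); // GL_FLOAT matches GLfloat[8][3]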

Read different parts of an OpenGL framebuffer keeping overlap

I need to do some CPU operations on framebuffer data previously drawn by OpenGL. Sometimes the resolution I need to draw at is higher than the texture resolution, so I thought about picking a SIZE for the viewport and the target FBO, drawing, reading back into a CPU buffer, then moving the viewport somewhere else in the space and repeating. In CPU memory I will then have all the needed color data. Unfortunately, for my purposes, I need to keep an overlap of 1 pixel between the vertical and horizontal borders of my tiles. So, imagining a situation with four tiles of size SIZE x SIZE:
0 1
2 3
I need the last column of tile 0 to hold the same data as the first column of tile 1, and the last row of tile 0 to hold the same data as the first row of tile 2, for example. Hence, the total resolution I will draw at is
(SIZEW * ntilesHor - (ntilesHor - 1)) x (SIZEH * ntilesVer - (ntilesVer - 1))
For simplicity, SIZEW and SIZEH will be the same, as will ntilesVer and ntilesHor. My code now looks like:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glViewport(0, 0, tilesize, tilesize);
glPolygonMode(GL_FRONT, GL_FILL);

for (int i = 0; i < ntiles; ++i)
{
    for (int j = 0; j < ntiles; ++j)
    {
        tileid = i * ntiles + j;
        int left   = max(0, (j*tilesize) - j);
        int right  = left + tilesize;
        int bottom = max(0, (i*tilesize) - i);
        int top    = bottom + tilesize;

        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LESS);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(left, right, bottom, top, -1, 0);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // Draw display list
        glCallList(DList);

        // Texture target of the fbo
        glReadBuffer(tex_render_target);

        // Read back to a preallocated CPU buffer
        glReadPixels(0, 0, tilesize, tilesize, GL_BGRA, GL_UNSIGNED_BYTE, colorbuffers[tileid]);
    }
}
The code runs, and the various "colorbuffers" seem to hold what looks like color data, similar to what I should get given my draw; only, the overlap I need is not there: the last column of tile 0 and the first column of tile 1 hold different values.
Any idea?
int left = max(0, (j*tilesize)- j);
int right = left + tilesize;
int bottom = max(0, (i*tilesize)- i);
int top = bottom + tilesize;
I'm not sure about those margins. If your intention is a pixel-based mapping, as suggested by your viewport, with some constant overlap, then the -j and -i terms make no sense, as they're nonuniform. I think you want some constant value there. Also, you don't need that max there. You want a 1 pixel overlap, so your constant will be 0, because then you have
right_j == left_(j+1)
and the same for bottom and top, which is exactly what you intend.
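In other words, something like this (a sketch of the indexing I mean, untested against your data):

// constant margin of 0 between adjacent ortho windows
int left   = j * tilesize;
int right  = left + tilesize;   // right of tile j == left of tile j+1
int bottom = i * tilesize;
int top    = bottom + tilesize;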

Gradient "miter" in OpenGL shows seams at the join

I am doing some really basic experiments with some 2D work in GL. I'm trying to draw a "picture frame" around a rectangular area. I'd like the frame to have a consistent gradient all the way around, so I'm constructing it from geometry that looks like four quads, one on each side of the frame, tapered to make trapezoids that effectively have miter joins.
The vertex coords are the same on the "inner" and "outer" rectangles, and the colors are the same for all inner and all outer vertices as well, so I'd expect to see perfect blending at the edges.
But notice in the image below how there appears to be a "seam" in the corner of the join that's lighter than it should be.
I feel like I'm missing something conceptual in the math that explains this. Is this artifact somehow a result of the gradient slope? If I change all the colors to opaque blue (say), I get a perfect solid blue frame as expected.
Update: code added below. Sorry, it's kinda verbose. Using 2-triangle fans for the trapezoids instead of quads.
Thanks!
glClearColor(1.0, 1.0, 1.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);

// Prep the color array. This is the same for all trapezoids:
// two blue "outer" verts followed by two white "inner" verts.
// 4 verts * 4 components/color = 16 values.
GLfloat colors[16] = {
    0.0, 0.0, 1.0, 1.0, // outer vert 1: blue
    0.0, 0.0, 1.0, 1.0, // outer vert 2: blue
    1.0, 1.0, 1.0, 1.0, // inner vert 1: white
    1.0, 1.0, 1.0, 1.0  // inner vert 2: white
};

// Draw the trapezoidal frame areas. Each one is a fan of two triangles.
// Fan of 2 triangles = 4 verts = 8 values.
GLfloat vertices[8];
float insetOffset = 100;
float frameMaxDimension = 1000;

// Bottom
vertices[0] = 0;
vertices[1] = 0;
vertices[2] = frameMaxDimension;
vertices[3] = 0;
vertices[4] = frameMaxDimension - insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = 0 + insetOffset;
glVertexPointer(2, GL_FLOAT, 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);

// Left
vertices[0] = 0;
vertices[1] = frameMaxDimension;
vertices[2] = 0;
vertices[3] = 0;
vertices[4] = 0 + insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = frameMaxDimension - insetOffset;
glVertexPointer(2, GL_FLOAT, 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);

/* top & right would be as expected... */

glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
As #Newbie posted in the comments:
"@quixoto: open your image in a paint program, click with the fill tool somewhere in the seam, and you'll see it makes a 90-degree angle line there... meaning there's only one color, nothing brighter anywhere in the 'seam'. It's just an illusion."
True. While I'm not familiar with this part of the math under OpenGL, I believe this is the implicit result of how the interpolation of colors between the triangle vertices is performed... I'm positive that it's called "bilinear interpolation".
So what can you do to solve it? One possibility is to use a texture and just draw a textured quad (or several textured quads).
However, it should be easy to generate such a border in a fragment shader. A nice solution using a GLSL shader:
Assume you're drawing a rectangle with the bottom-left corner having texture coords equal to (0,0), and the top-right corner (1,1). Then generating the "miter" procedurally in a fragment shader would look like this, if I'm correct:
varying vec2 coord;
uniform vec2 insetWidth; // width of the border in %, max would be 0.5

void main() {
    vec3 borderColor = vec3(0, 0, 1);
    vec3 backgroundColor = vec3(1, 1, 1);

    // x and y inset, 0..1; 1 means border, 0 means centre
    vec2 insets = max(-coord + insetWidth, vec2(0, 0)) / insetWidth;
If I'm correct so far, then every pixel's insets.x now has a value in the range [0..1] determining how deep a given point is into the border horizontally, and insets.y has the analogous value for vertical depth.
The left vertical bar has insets.y == 0, the bottom horizontal bar has insets.x == 0, and the lower-left corner has the pair (insets.x, insets.y) covering the whole 2D range from (0,0) to (1,1). See the pic for clarity:
Now we want a transformation which, for a given (x,y) pair, gives us ONE value in [0..1] determining how to mix the background and foreground colors. 1 means 100% border, 0 means 0% border. And this can be done in several ways!
The function should obey the requirements:
0 if x==0 and y==0
1 if either x==1 or y==1
smooth values in between.
Assume such a function:
float bias = max(insets.x, insets.y);
It satisfies those requirements. Actually, I'm pretty sure this function would give you the same "sharp" edge you have above. Try calculating it on paper for a selection of coordinates inside that bottom-left rectangle.
If we want a smooth, round miter there, we just need another function. I think something like this would be sufficient:
float bias = min(length(insets), 1.0);
The length() function here is just sqrt(insets.x*insets.x + insets.y*insets.y). What's important: this translates to "the farther away (in terms of Euclidean distance) we are from the border, the more visible the border should be", and the min() just keeps the result from exceeding 1 (= 100%).
Note that our original function adheres to exactly the same definition, but with the distance calculated in the chessboard (Chebyshev) metric instead of the Euclidean metric.
This implies that using, for example, the Manhattan metric instead would give you a third possible miter shape! It would be defined like this:
float bias = min(insets.x + insets.y, 1.0);
I predict this one would also have a visible "diagonal line", but with the diagonal in the other direction ("\").
OK, so for the rest of the code: once we have the bias in [0..1], we just need to mix the background and foreground colors:

    vec3 finalColor = mix(borderColor, backgroundColor, bias);
    gl_FragColor = vec4(finalColor, 1.0); // return the calculated RGB, and set alpha to 1
}
And that's it! Using GLSL with OpenGL makes life simpler. Hope that helps!
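For reference, the whole shader assembled into one piece (an untested sketch using the Euclidean-distance variant; note that as written, bias == 1 selects backgroundColor, so swap the first two mix() arguments if you want 1 to mean 100% border):

varying vec2 coord;
uniform vec2 insetWidth; // width of the border in %, max would be 0.5

void main() {
    vec3 borderColor = vec3(0.0, 0.0, 1.0);
    vec3 backgroundColor = vec3(1.0, 1.0, 1.0);

    // x and y inset, 0..1; 1 means border, 0 means centre
    vec2 insets = max(-coord + insetWidth, vec2(0.0)) / insetWidth;

    // round miter: Euclidean distance from the inner corner, clamped to 1
    float bias = min(length(insets), 1.0);

    vec3 finalColor = mix(borderColor, backgroundColor, bias);
    gl_FragColor = vec4(finalColor, 1.0);
}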
I think that what you're seeing is a Mach band. Your visual system is very sensitive to changes in the 1st derivative of brightness. To get rid of this effect, you need to blur your intensities. If you plot intensity along a scanline which passes through this region, you'll see that there are two lines which meet at a sharp corner. To keep your visual system from highlighting this area, you'll need to round this join over. You can do this with either a post processing blur or by adding some more small triangles in the corner which ease the transition.
I've run into this in the past, and it's very sensitive to geometry. For example, if you draw the triangles separately, in separate operations, instead of as a triangle fan, the problem is less severe (or at least it was in my case, which was similar but slightly different).
One thing I also tried was drawing the triangles separately, slightly overlapping one another, with the right composition mode (or OpenGL blending) so you don't get the effect. It worked, but I didn't end up using it because it was only a tiny part of the final product and not worth it.
I'm sorry that I have no idea what the root cause of this effect is, however :(