I am building a simple city with OpenGL and GLUT. I created a textured skydome, and now I would like to connect it to a flat ground plane to give the appearance of a horizon. For relative scale: the skydome has a radius of 3.0 with the depth mask turned off, only the camera rotation is applied to it, and it sits centered over the camera. A building is about 30.0 units in size, and I am looking down at it from y = 500.0.
I have a ground plane that is 1000x1000, textured with a 1024x1024 texture that looks good up close when I am right against the ground. The texture is loaded with GL_REPEAT and a texture coordinate of 1000, so it repeats 1000 times.
Connecting the skydome with the flat ground plane is where I am having some issues. I will list a number of things I have tried.
Issues:
1) When I rotate my heading, because of the square shape of the plane, I see an edge (like the attached picture) instead of a flat horizon.
2) I have tried a circular ground plane instead, but then I get a curved horizon that becomes more pronounced as I fly up.
3) To avoid the black gap between the infinite skydome and my limited-size flat plane, I set a limit on how high I can fly and shift the skydome slightly down as I go up, so I don't see the gap when I am up high. Are there other methods to fade the plane into the skydome and hide the gap when its size varies by location (i.e. a circle circumscribing a square)? I tried applying a fog color matching the horizon, but I get a purple haze over the white ground.
4) If I attach the ground as the bottom lid of the skydome hemisphere, it looks weird when I zoom in and out: the textured ground appears to slide, disconnected from my buildings.
5) I have tried to draw an infinitely large plane using the vanishing-point concept, by setting w=0 ("Rendering infinitely large plane"). The horizon does look flat, but texturing it properly seems difficult, so I am stuck with a single color.
6) I disable lighting for the skydome. If I enable lighting for my ground plane, then at certain pitch angles the plane looks black while the sky is still fully lit, which looks unnatural.
7) If I make my plane larger, like 10000x10000, the horizon looks seemingly flat, but if I press an arrow key to adjust my heading, the horizon shakes for a couple of seconds before stabilizing. What causes this, and how can I prevent it? A related question: tiling and texturing a 1000x1000 ground plane versus a 10000x10000 one does not seem to affect my frame rate. Why is that? Wouldn't more tiling mean more work?
8) I read about a math-based approach that computes a clipping rectangle to draw the horizon, but I wonder if there are simpler approaches: http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/a-super-simple-method-for-creating-infinite-sce-r2769
Most threads I have read regarding the horizon say: use a skybox, use a skydome. But I haven't come across a specific tutorial that covers merging a skydome with a large ground plane nicely. A pointer to such a tutorial would be great. Feel free to answer any part of the question by its number; I didn't want to split them up because they are all related. Thanks.
Here is some relevant code on my setup:
void Display()
{
// Clear frame buffer and depth buffer
glClearColor (0.0,0.0,0.0,1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
camera.Update();
GLfloat accumulated_camera_rotation_matrix[16];
GetAccumulatedRotationMatrix(accumulated_camera_rotation_matrix);
SkyDome_Draw(accumulated_camera_rotation_matrix);
FlatGroundPlane_Draw();
// draw buildings
// swap buffers when GLUT_DOUBLE double buffering is enabled
glutSwapBuffers();
}
void SkyDome_Draw(GLfloat (&accumulated_camera_rotation_matrix)[16])
{
glPushMatrix();
glLoadIdentity();
glDepthMask(GL_FALSE);
glDisable(GL_LIGHTING);
glMultMatrixf(accumulated_camera_rotation_matrix);
// 3.0f is the radius of the skydome
// If we offset by 0.5f in camera.ground_plane_y_offset, we can offset by another 1.5f
// at skydome_sky_celing_y_offset of 500. 500 is our max allowable altitude
glTranslatef( 0, -camera.ground_plane_y_offset - camera.GetCameraPosition().y / camera.skydome_sky_celing_y_offset / 1.5f, 0);
skyDome->Draw();
glEnable(GL_LIGHTING);
glDepthMask(GL_TRUE);
glEnable(GL_CULL_FACE);
glPopMatrix();
}
void GetAccumulatedRotationMatrix(GLfloat (&accumulated_rotation_matrix)[16])
{
glGetFloatv(GL_MODELVIEW_MATRIX, accumulated_rotation_matrix);
// zero out the translation (elements 12, 13, 14)
accumulated_rotation_matrix[12] = 0;
accumulated_rotation_matrix[13] = 0;
accumulated_rotation_matrix[14] = 0;
}
GLfloat GROUND_PLANE_WIDTH = 1000.0f;
void FlatGroundPlane_Draw(void)
{
glEnable(GL_TEXTURE_2D);
glBindTexture( GL_TEXTURE_2D, concreteTextureId);
glBegin(GL_QUADS);
glNormal3f(0, 1, 0);
glTexCoord2d(0, 0);
// repeat the texture 1000 times across the width of the plane
GLfloat textCoord = GROUND_PLANE_WIDTH;
glVertex3f( -GROUND_PLANE_WIDTH, 0, -GROUND_PLANE_WIDTH);
// go beyond 1 for texture coordinate so it repeats
glTexCoord2d(0, textCoord);
glVertex3f( -GROUND_PLANE_WIDTH, 0, GROUND_PLANE_WIDTH);
glTexCoord2d(textCoord, textCoord);
glVertex3f( GROUND_PLANE_WIDTH, 0, GROUND_PLANE_WIDTH);
glTexCoord2d(textCoord, 0);
glVertex3f( GROUND_PLANE_WIDTH, 0, -GROUND_PLANE_WIDTH);
glEnd();
glDisable(GL_TEXTURE_2D);
}
void Init()
{
concreteTextureId = modelParser->LoadTiledTextureFromFile(concreteTexturePath);
}
GLuint ModelParser::LoadTiledTextureFromFile(string texturePath)
{
RGBImage image; // wrapping 2-d array of data
image.LoadData(texturePath);
GLuint texture_id;
UploadTiledTexture(texture_id, image);
image.ReleaseData();
return texture_id;
}
void ModelParser::UploadTiledTexture(unsigned int &iTexture, const RGBImage &img)
{
glGenTextures(1, &iTexture); // create the texture
glBindTexture(GL_TEXTURE_2D, iTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// the texture would wrap over at the edges (repeat)
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, img.Width(), img.Height(), GL_RGB, GL_UNSIGNED_BYTE, img.Data());
}
Try using a randomized heightmap rather than using a flat plane. Not only will this look more realistic, it will make the edge of the ground plane invisible due to the changes in elevation. You can also try adding in some vertex fog, to blur the area where the skybox and ground plane meet. That's roughly what I did here.
A lot of 3D rendering relies on tricks to make things look realistic. If you look at most games, they have either a whole bunch of foreground objects that obscure the horizon, or they have "mountains" in the distance (a la heightmaps) that also obscure the horizon.
Another idea is to map your ground plane onto a sphere, so that it curves down like the earth does. That might make the horizon look more earthlike. This is similar to what you did with the circular ground plane.
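To get a feel for how far away a curved-earth horizon would sit, the distance to the horizon from height h above a sphere of radius R follows from the tangent-line right triangle. A minimal sketch (the Earth-like radius in the test is an assumption for illustration, not a value from the question):

```cpp
#include <cassert>
#include <cmath>

// Distance from the eye to the horizon when the eye is at height h above a
// sphere of radius R. From the right triangle (sphere center, eye, tangent
// point): d = sqrt((R + h)^2 - R^2) = sqrt(h * (2R + h)).
double HorizonDistance(double R, double h)
{
    return std::sqrt(h * (2.0 * R + h));
}
```

At the question's maximum altitude of 500 units over an Earth-sized sphere, the horizon would be roughly 80 km away, far beyond a 1000x1000 plane, which is one way to see why the plane's edge shows instead of a true horizon.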
Related
I would like to make a game that is internally 320x240, but renders to the screen at whole-number multiples of this (640x480, 960x720, etc.). I am going for retro 2D pixel graphics.
I have achieved this by setting the internal resolution via glOrtho():
glOrtho(0, 320, 240, 0, 0, 1);
And then I scale up the output resolution by a factor of 3, like this:
glViewport(0,0,960,720);
window = SDL_CreateWindow("Title", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 960, 720, SDL_WINDOW_OPENGL);
I draw rectangles like this:
glBegin(GL_LINE_LOOP);
glVertex2f(rect_x, rect_y);
glVertex2f(rect_x + rect_w, rect_y);
glVertex2f(rect_x + rect_w, rect_y + rect_h);
glVertex2f(rect_x, rect_y + rect_h);
glEnd();
It works perfectly at 320x240 (not scaled):
When I scale up to 960x720, the pixel rendering all works just fine! However, it seems the GL_LINE_LOOP is not drawn on a 320x240 canvas and scaled up, but drawn on the final 960x720 canvas. The result is 1px lines in a 3px world :(
How do I draw lines to the 320x240 glOrtho canvas, instead of the 960x720 output canvas?
There is no "320x240 glOrtho canvas". There is only the window's actual resolution: 960x720.
All you are doing is scaling up the coordinates of the primitives you render. So, your code says to render a line from, for example, (20, 20) to (40, 40). And OpenGL (eventually) scales those coordinates by 3 in each dimension: (60, 60) and (120, 120).
But that's only dealing with the end points. What happens in the middle is still based on the fact that you're rendering at the window's actual resolution.
Even if you employed glLineWidth to change the width of your lines, that would only fix the line widths. It would not fix the fact that the rasterization of lines is based on the actual resolution you're rendering at. So diagonal lines won't have the pixelated appearance you likely want.
The only way to do this properly is to, well, do it properly. Render to an image that is actual 320x240, then draw it to the window's actual resolution.
You'll have to create a texture of that size, then attach it to a framebuffer object. Bind the FBO for rendering and render to it (with the viewport set to the image's size). Then unbind the FBO, and draw that texture to the window (with the viewport set to the window's resolution).
As I mentioned in my comment, Intel OpenGL drivers have problems with direct render-to-texture, and I do not know of any workaround that works. In that case, the only way around it is to use glReadPixels to copy the screen content into CPU memory and then copy it back to the GPU as a texture. Of course, that is much slower than direct render-to-texture. So here is the deal:
set low res view
do not change the resolution of your window, just the glViewport values, then render your scene at the low resolution (in just a fraction of the screen space)
copy rendered screen into texture
set target resolution view
render the texture
do not forget to use the GL_NEAREST filter. The most important thing is that you swap buffers only after this, not before, otherwise you will get flickering.
Here is the C++ source for this:
void gl_draw()
{
// render resolution and multiplier
const int xs=320,ys=200,m=2;
// [low res render pass]
glViewport(0,0,xs,ys);
glClearColor(0.0,0.0,0.0,1.0);
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glDisable(GL_DEPTH_TEST);
glDisable(GL_TEXTURE_2D);
// 50 random lines
RandSeed=0x12345678;
glColor3f(1.0,1.0,1.0);
glBegin(GL_LINES);
for (int i=0;i<100;i++)
glVertex2f(2.0*Random()-1.0,2.0*Random()-1.0);
glEnd();
// [multiplied resolution render pass]
static bool _init=true;
static GLuint txrid=0; // texture id (static so it persists across frames)
BYTE map[xs*ys*3]; // RGB
// init texture
if (_init) // you should also delete the texture on app exit ...
{
// create texture
_init=false;
glGenTextures(1,&txrid);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,txrid);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,GL_NEAREST); // must be nearest !!!
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,GL_COPY);
glDisable(GL_TEXTURE_2D);
}
// copy low res screen to CPU memory
glReadPixels(0,0,xs,ys,GL_RGB,GL_UNSIGNED_BYTE,map);
// and then to GPU texture
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,txrid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, xs, ys, 0, GL_RGB, GL_UNSIGNED_BYTE, map);
// set multiplied resolution view
glViewport(0,0,m*xs,m*ys);
glClear(GL_COLOR_BUFFER_BIT);
// render low res screen as texture
glBegin(GL_QUADS);
glTexCoord2f(0.0,0.0); glVertex2f(-1.0,-1.0);
glTexCoord2f(0.0,1.0); glVertex2f(-1.0,+1.0);
glTexCoord2f(1.0,1.0); glVertex2f(+1.0,+1.0);
glTexCoord2f(1.0,0.0); glVertex2f(+1.0,-1.0);
glEnd();
glDisable(GL_TEXTURE_2D);
glFlush();
SwapBuffers(hdc); // swap buffers only here !!!
}
And preview:
I tested this on some Intel HD graphics (god knows which version) I had at my disposal, and it works (while the standard render-to-texture approaches do not).
I am generating a lot of points to create an island with GL_POLYGON, and I want to bind one texture to the entire island. Right now I set the texture coordinates every time I create a new square.
Right now I have this
for (int g =0;g<400;g++){
glBegin(GL_POLYGON);
glTexCoord2i(1,1);
glVertex3f(islandVert[g][0],islandVert[g][1],islandVert[g][2]);
g++;
glTexCoord2i(1,0);
glVertex3f(islandVert[g][0],islandVert[g][1],islandVert[g][2]);
g++;
glTexCoord2i(0,0);
glVertex3f(islandVert[g][0],islandVert[g][1],islandVert[g][2]);
g++;
glTexCoord2i(0,1);
glVertex3f(islandVert[g][0],islandVert[g][1],islandVert[g][2]);
//if(g==399){printf("at 0 =%f,%f,%f\n",islandVert[399][0],islandVert[399][1],islandVert[399][2]);}
glEnd();
}
But I don't want to repeat the pattern as a whole on every square. I want the pattern to span all of my squares. Also note that all of the squares have different y values.
If you want to stretch your texture across the whole island, you need to use texture coordinates that are not (0,0) to (1,1) per polygon, but for the whole model. The easiest way to do that is to use the (x,z) coordinates of the vertex and scale them appropriately, e.g.
glBegin(GL_POLYGON);
glTexCoord2f(islandVert[g][0] / xscale, islandVert[g][2] / zscale);
glVertex3f(islandVert[g][0], islandVert[g][1], islandVert[g][2]);
g++;
...
(Note: you need to use glTexCoord2f or it's not going to work.)
where xscale and zscale are the maximum possible x and z values (assuming (0,0) is the minimum possible value, but you get the idea). In modern programs this would be done in the shader, but it looks like you're not using those.
The disadvantage is that the one texture will be spread out over the whole model. Unless you have a very high resolution texture it is going to be very blurry. The usual response is to repeat the texture. Probably not over every polygon like you did, but a configurable number of times, like this:
glBegin(GL_POLYGON);
glTexCoord2f(islandVert[g][0] / xscale * xrepeat, islandVert[g][2] / zscale * zrepeat);
glVertex3f(islandVert[g][0], islandVert[g][1], islandVert[g][2]);
g++;
...
Texture coordinates can be larger than 1, assuming your texture parameters are set to repeat:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
Check out http://www.glprogramming.com/red/chapter09.html for an introduction to textures, texture coordinates etc.
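The scaling described above can be factored into a small helper; xscale, zscale and the repeat counts are the configurable values from the answer, and the function name is just for illustration:

```cpp
#include <cassert>

// Map a vertex's (x, z) position into a texture coordinate that spans the
// whole island, repeated a configurable number of times, as described above.
void IslandTexCoord(float x, float z,
                    float xscale, float zscale,
                    float xrepeat, float zrepeat,
                    float &s, float &t)
{
    s = x / xscale * xrepeat;
    t = z / zscale * zrepeat;
}
```

With xrepeat = zrepeat = 1 this stretches the texture once over the island; larger values tile it, relying on the GL_REPEAT wrap mode shown above.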
Sideline 1: if you render quads, use the GL_QUADS primitive for a significant speed boost:
glBegin(GL_QUADS);
for (int g =0;g<400;g++)
{
glTexCoord2i(1,1);
glVertex3f(islandVert[g][0],islandVert[g][1],islandVert[g][2]);
g++;
glTexCoord2i(1,0);
glVertex3f(islandVert[g][0],islandVert[g][1],islandVert[g][2]);
g++;
glTexCoord2i(0,0);
glVertex3f(islandVert[g][0],islandVert[g][1],islandVert[g][2]);
g++;
glTexCoord2i(0,1);
glVertex3f(islandVert[g][0],islandVert[g][1],islandVert[g][2]);
}
glEnd();
Sideline 2: you really shouldn't be using Immediate Mode at all in this day and age. Check out Vertex Buffer Objects for the current way to specify geometry.
My problem concerns rendering text with OpenGL -- the text is rendered into a texture, and then drawn onto a quad. The trouble is that the pixels on the edge of the texture are drawn partially transparent. The interior of the texture is fine.
I'm calculating the texture coordinates to hit the center of my texels, using NEAREST (non-)interpolation, setting the texture wrapping to CLAMP_TO_EDGE, and setting the projection matrix to place my vertices at the center of the viewport pixels. Still seeing the issue.
I'm working on VTK with their texture utilities. These are the GL calls that are used to load the texture, as determined by stepping through with a debugger:
glGenTextures(1, &id);
glBindTexture(GL_TEXTURE_2D, id);
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Create and bind pixel buffer object here (not shown, lots of indirection in VTK)...
glTexImage2D( GL_TEXTURE_2D, 0 , GL_RGBA, xsize, ysize, 0, format, GL_UNSIGNED_BYTE, 0);
// Unbind PBO -- also omitted
glBindTexture(GL_TEXTURE_2D, id);
glAlphaFunc (GL_GREATER, static_cast<GLclampf>(0));
glEnable (GL_ALPHA_TEST);
// I've also tried doing this here for premultiplied alpha, but it made no difference:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
The rendering code:
float p[2] = ...; // point to render text at
int imgDims[2] = ...; // Actual dimensions of image
float width = ...; // Width of texture in image
float height = ...; // Height of texture in image
// Prepare the quad
float xmin = p[0];
float xmax = xmin + width - 1;
float ymin = p[1];
float ymax = ymin + height - 1;
float quad[] = { xmin, ymin,
xmax, ymin,
xmax, ymax,
xmin, ymax };
// Calculate the texture coordinates.
float smin = 1.0f / (2.0f * (imgDims[0]));
float smax = (2.0 * width - 1.0f) / (2.0f * imgDims[0]);
float tmin = 1.0f / (2.0f * imgDims[1]);
float tmax = (2.0f * height - 1.0f) / (2.0f * imgDims[1]);
float texCoord[] = { smin, tmin,
smax, tmin,
smax, tmax,
smin, tmax };
// Set projection matrix to map object coords to pixel centers
// (modelview is identity)
GLint vp[4];
glGetIntegerv(GL_VIEWPORT, vp);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
float offset = 0.5;
glOrtho(offset, vp[2] + offset,
offset, vp[3] + offset,
-1, 1);
// Disable polygon smoothing. Why not, I've tried everything else?
glDisable(GL_POLYGON_SMOOTH);
// Draw the quad
glColor4ub(255, 255, 255, 255);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, quad);
glTexCoordPointer(2, GL_FLOAT, 0, texCoord);
glDrawArrays(GL_QUADS, 0, 4);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
// Restore projection matrix
glMatrixMode(GL_PROJECTION);
glPopMatrix();
For debugging purposes, I've overwritten the outermost texels with red, and the next inner layer of texels with green (otherwise it's hard to see what's going on in the mostly-white text image).
I've inspected the texture in-memory using gDEBugger, and it looks as expected -- bright red and green borders around the texture area (the extra empty space is padding to make its size a power of two). For reference:
Here's what the final rendered image looks like (magnified 20x -- the black pixels are remnants of the text that was rendered under the debugging borders). Pale red border, but still a bold green inner border:
So it is just the outer edge of pixels that is affected. I'm not sure whether it's color blending or alpha blending that's screwing things up; I'm at a loss. I've noticed that the corner pixels are twice as pale as the edge pixels; perhaps that's significant. Maybe someone here can spot the error?
This could be a "pixel perfect" problem. OpenGL defines the center of a line as the spot that gets rasterized into a pixel, and that center is exactly halfway between one integer and the next. To get pixel (x, y) to display "pixel perfect", fix up your coordinates like this:
x=(int)x+0.5f; // x is a float.. makes 0.0 into 0.5, 16.343 into 16.5, etc.
y=(int)y+0.5f;
This is probably what is messing up the blending. I had the same issue with texture modulation: a single somewhat dimmer line or series of pixels at the bottom and right edges.
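The fix-up above can be wrapped in a tiny helper (the function name is mine, but the math is exactly the answer's two lines):

```cpp
#include <cassert>

// Snap a coordinate to the nearest pixel center, as described above:
// the center of pixel (x, y) lies at (x + 0.5, y + 0.5).
float SnapToPixelCenter(float v)
{
    return (float)(int)v + 0.5f;
}
```

Applying this to both endpoints of a line keeps the rasterizer from landing between two pixel rows or columns.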
Okay, I've worked on it for the last few days. There were a few ideas that didn't work at all. The only one that worked is to accept that this "perfect pixel" behavior exists and try to work around it. Too bad I can't upvote your answer, Cosmic Bacon. But your answer, even though it looks good, will slightly break things in certain programs like games. My answer is an improved version of yours.
Here's the solution:
Step 1: Make a method that draws the texture you need, and use only it for drawing. Add 0.5f to every coordinate. Look:
public void render(Texture tex,float x1,float y1,float x2,float y2)
{
tex.bind();
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0,0);
GL11.glVertex2f(x1+0.5f,y1+0.5f);
GL11.glTexCoord2f(1,0);
GL11.glVertex2f(x2+0.5f,y1+0.5f);
GL11.glTexCoord2f(1,1);
GL11.glVertex2f(x2+0.5f,y2+0.5f);
GL11.glTexCoord2f(0,1);
GL11.glVertex2f(x1+0.5f,y2+0.5f);
GL11.glEnd();
}
Step 2: If you're going to use "glTranslatef(something1, something2, 0)", it helps to wrap Translatef in a method that never lets the camera move by a fractional distance. If there is any chance the camera moves by, say, 0.3, sooner or later you'll see this issue again (multiple times, I suppose). The following code makes the camera follow an object that has X and Y, and the camera never loses sight of the object:
public void LookFollow(Block AF)
{
float some=5;//changing me will cause camera to move faster/slower
float mx=0,my=0;
//Right-Left
if(LookCorX!=AF.getX())
{
if(AF.getX()>LookCorX)
{
if(AF.getX()<LookCorX+2)
mx=AF.getX()-LookCorX;
if(AF.getX()>LookCorX+2)
mx=(AF.getX()-LookCorX)/some;
}
if(AF.getX()<LookCorX)
{
if(2+AF.getX()>LookCorX)
mx=AF.getX()-LookCorX;
if(2+AF.getX()<LookCorX)
mx=(AF.getX()-LookCorX)/some;
}
}
//Up-Down
if(LookCorY!=AF.getY())
{
if(AF.getY()>LookCorY)
{
if(AF.getY()<LookCorY+2)
my=AF.getY()-LookCorY;
if(AF.getY()>LookCorY+2)
my=(AF.getY()-LookCorY)/some;
}
if(AF.getY()<LookCorY)
{
if(2+AF.getY()>LookCorY)
my=AF.getY()-LookCorY;
if(2+AF.getY()<LookCorY)
my=(AF.getY()-LookCorY)/some;
}
}
//Evading "Perfect Pixel"
mx=(int)mx;
my=(int)my;
//Moving Camera
GL11.glTranslatef(-mx,-my,0);
//Saving up Position of camera.
LookCorX+=mx;
LookCorY+=my;
}
float LookCorX=300,LookCorY=200; //camera's starting position
As a result, we get a camera that moves a little more sharply, because steps can't be smaller than 1 pixel even when a smaller step would be appropriate, but the textures look fine, and that's great progress!
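The integer snap buried in LookFollow can be seen in isolation; truncating each per-frame delta (the mx=(int)mx and my=(int)my lines) guarantees the accumulated camera position stays on whole pixels. A sketch, using the same cast the answer uses:

```cpp
#include <cassert>

// Truncate a camera delta to a whole-pixel step, as LookFollow does with
// mx=(int)mx. C++ casts truncate toward zero, so the step is never rounded
// past the target, and the accumulated position never lands between pixels.
float WholePixelStep(float delta)
{
    return (float)(int)delta;
}
```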
Sorry for the really big answer. I'm still working on a good solution; once I find something better and shorter, I will replace this.
I need to render a sphere to a texture (done using a Framebuffer Object (FBO)), and then alpha blend that texture with the back buffer. So far I'm not doing any processing with the texture except clearing it at the beginning of every frame.
I should say that my scene consists of nothing but a planet in empty space, the sphere should appear next to or around the planet (kind of like a moon for now). When I render the sphere directly to the back buffer, it displays correctly; but when I do the intermediary step of rendering it to a texture and then blending that texture with the back buffer, the sphere only shows up when it is in front of the planet, the part that isn't in front is just "cut off":
I render the sphere using glutSolidSphere to a RGBA8 fullscreen texture that's bound to an FBO, making sure that every sphere pixel receives an alpha value of 1.0. I then pass the texture to a fragment shader program, and use this code to render a fullscreen quad - with the texture mapped onto it - to the backbuffer while alpha blending:
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glDisable(GL_DEPTH_TEST);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glBegin(GL_QUADS);
glTexCoord2i(0, 1);
glVertex3i(-1, 1, -1); // TOP LEFT
glTexCoord2i(0, 0);
glVertex3i(-1, -1, -1); // BOTTOM LEFT
glTexCoord2i(1, 0);
glVertex3i( 1, -1, -1); // BOTTOM RIGHT
glTexCoord2i(1, 1);
glVertex3i( 1, 1, -1); // TOP RIGHT
glEnd();
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glEnable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
This is the shader code (taken from an FX file written in Cg):
sampler2D BlitSamp = sampler_state
{
MinFilter = LINEAR;
MagFilter = LINEAR;
MipFilter = LINEAR;
AddressU = Clamp;
AddressV = Clamp;
};
float4 blendPS(float2 texcoords : TEXCOORD0) : COLOR
{
float4 outColor = tex2D(BlitSamp, texcoords);
return outColor;
}
I don't even know whether this is a problem with the depth buffer or with alpha blending; I've tried many combinations of enabling and disabling depth testing (with a depth buffer attached to the FBO) and alpha blending.
EDIT: I tried just rendering a blank fullscreen quad straight to the back buffer and even that was cropped around the planet's edges. For some reason, enabling depth testing for rendering the quad (that is, removing the lines glDisable(GL_DEPTH_TEST) and glEnable(GL_DEPTH_TEST) in the code above) got rid of the problem, but now everything but the planet and the sphere appears white:
I made sure (and could confirm) that the alpha channel of the texture is 0 at every pixel but the sphere's, so I don't understand where the whiteness could be introduced. (Would also still be interested in an explanation why enabling depth testing has this effect.)
I see two possible sources of error here:
1. Rendering to the FBO
If the missing pixels are not even present in the FBO after rendering, there must be some mechanism which discarded the corresponding fragments. The OpenGL pipeline includes four different types of fragment tests which can lead to fragments being discarded:
Scissor Test: Unlikely to be the cause, as the scissor test only affects a rectangular portion of the screen.
Alpha Test: Equally unlikely, as your fragments should all have the same alpha value.
Stencil Test: Also unlikely, unless you use stencil operations when drawing the background planet and copy over the stencil buffer from the back buffer to the FBO.
Depth Test: Same as for stencil test.
So there's a good chance that rendering into FBO is not the issue here. But just to be absolutely sure, you should read back your color attachment texture and dump it into a file for inspection. You can use the following function for that:
void TextureToFile(GLuint texture, const char* filename) {
glBindTexture(GL_TEXTURE_2D, texture);
GLint width, height;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);
std::vector<GLubyte> pixels(3 * width * height);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, &pixels[0]);
std::ofstream out(filename, std::ios::out | std::ios::binary);
out << "P6\n"
<< width << '\n'
<< height << '\n'
<< 255 << '\n';
out.write(reinterpret_cast<const char*>(&pixels[0]), pixels.size());
}
The resulting file is a portable pixmap (.ppm). Be sure to unbind the FBO before reading back the texture.
2. Texture mapping
Assuming rendering into the FBO works as expected, the only other source of error is blending the texture over the previously rendered scene. There are two scenarios:
a) Fragments get discarded
The possible reasons for fragments to get discarded are the same as in 1.:
Scissor Test: Nope, affects rectangular areas only.
Alpha Test: Probably not, the texels covered sphere should all have the same alpha value.
Stencil Test: Might be the cause if you use stencil operations/stencil testing when drawing the background planet and the old stencil state is still active.
Depth Test: Might be the cause, but as you already disable it, it really shouldn't have any effect.
So you should make sure that all of these tests are disabled, especially the stencil test.
b) Wrong results from blending
Assuming all fragments reach the back buffer, blending is the only thing which could still cause the wrong result. With your blending function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) the values in the back buffer are irrelevant for blending, and we assume that the alpha values in the texture are correct. So I see no reason for why blending should be the root cause here.
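For reference, with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) the fixed-function blend reduces to a per-channel lerp, which makes it easy to check on paper why blending alone can't discard the sphere: where the texture's alpha is 1 the source wins outright, and where it is 0 the back buffer shows through. A sketch of the equation (not code from the question):

```cpp
#include <cassert>
#include <cmath>

// Fixed-function blending with (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA):
// out = src * srcAlpha + dst * (1 - srcAlpha), applied per channel.
float BlendChannel(float src, float dst, float srcAlpha)
{
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}
```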
Conclusion
In conclusion, the only sensible cause for the observed result seems to be stencil testing. If it's not, I'm out of options :)
I solved it or at least came up with a work around.
First off, the whiteness stems from the fact that glClearColor had been set to glClearColor(1.0f, 1.0f, 1.0f, 1000.0f), so everything but the planet wasn't even written to in the end. I now copy the contents of the back buffer (which is the planet, the atmosphere, and the space around it) to the texture before rendering the sphere, and I render the atmosphere and space before that copy/blit operation, so they are included in it. Previously, everything but the planet itself was rendered after my quad, which - when using depth testing - apparently placed everything behind the quad, making it invisible.
The reference implementation of the effect I'm trying to achieve has always used this kind of blit operation in its code but I didn't think it was necessary for the effect. Now I feel like there might be no other way...