I recently implemented a zoom-in/zoom-out function in my simple 2D engine and experienced some terrible seams between adjacent textures, as shown here:
http://oi59.tinypic.com/anmxyf.jpg
It doesn't look that bad in a screenshot, but it was definitely annoying having it constantly blink at you while moving around.
I decided to change the approach so that a very large portion of the game (as much as the player is allowed to zoom out) is drawn to a framebuffer first; I then draw the framebuffer texture to the screen, and zooming in or out simply scales that texture up or down, so as to avoid the seams.
At first I decided to draw five times as much as is visible to the player at the default zoom, so I made a framebuffer object with a texture five times as big and drew to it.
Here is the initialization of the framebuffer object:
glGenFramebuffers(1, &main_framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, main_framebuffer);
glClearColor(0.f, 0.f, 0.f, 1.f);
glClear(GL_COLOR_BUFFER_BIT);
glGenTextures(1, &main_ColorBuffer);
glBindTexture(GL_TEXTURE_2D, main_ColorBuffer);
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RGBA, SCREEN_BUFFER_WIDTH, SCREEN_BUFFER_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL
);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(
GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, main_ColorBuffer, 0
);
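(Not shown above: I should probably also verify that the framebuffer is complete right after attaching the texture; a minimal check would be something like this.)
// Sanity check: make sure the texture attachment produced a usable render target.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    printf("main_framebuffer is not complete\n");
glBindFramebuffer(GL_FRAMEBUFFER, 0);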
SCREEN_BUFFER_WIDTH and SCREEN_BUFFER_HEIGHT in the code above are set to five times the default screen size. I then draw my world as I normally would (where I could previously zoom out as much as I wanted and everything was fine, apart from the seams).
The issue is that only a 1024 x 768 region (the default screen size) actually gets drawn to the framebuffer. This is how I bind the framebuffer, draw to it, and then draw the framebuffer texture:
glBindFramebuffer(GL_FRAMEBUFFER, main_framebuffer);
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
draw_blocks(); //draws all the blocks in the correct screen y and x position
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, main_ColorBuffer);
draw_quad(shift_y*ratio, shift_x*ratio, SCREEN_BUFFER_WIDTH*ratio, SCREEN_BUFFER_HEIGHT*ratio);
glBindTexture(GL_TEXTURE_2D, 0);
Here ratio is a float I use for zooming in and out, and shift_y and shift_x let me shift the framebuffer texture around to get a better feel for what is happening.
By zooming out and shifting the framebuffer a bit I get this:
http://oi61.tinypic.com/34zlpip.jpg
It only draws a small portion, which exactly fits the screen if I don't zoom out.
In contrast, this is what it looks like if I zoom all the way out (and then some) before using a framebuffer and instead drawing straight to the screen:
http://oi61.tinypic.com/21akbit.jpg (the empty parts are just chunks that haven't been loaded as the player isn't supposed to be able to zoom out this much).
I'm truly stumped here; I've tried changing the viewport before drawing, but that does just about nothing.
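For reference, what I mean by "changing the viewport" is roughly the following, done before drawing into the framebuffer (gluOrtho2D here just stands in for whatever projection setup the engine actually uses, so treat this as a sketch):
glBindFramebuffer(GL_FRAMEBUFFER, main_framebuffer);
// Rasterize into the full off-screen texture, not just a window-sized corner of it.
glViewport(0, 0, SCREEN_BUFFER_WIDTH, SCREEN_BUFFER_HEIGHT);
// Presumably the projection also has to cover the larger area, otherwise
// only a screen-sized region of the world ends up with geometry in it.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, SCREEN_BUFFER_WIDTH, SCREEN_BUFFER_HEIGHT, 0);
glMatrixMode(GL_MODELVIEW);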
I'd also like to note that I'm pretty confident the framebuffer texture really is five times as large as what's being drawn to it, because if I don't stretch it in my draw_quad function, and instead give it the same width and height as the screen, I get this:
http://oi62.tinypic.com/4triiu.jpg
The framebuffer quad now has the same width and height as the screen, yet there are graphics in only a fraction of it: the small portion that's actually being drawn to.
Does anyone have any clue? If more portions of the code are needed I'm happy to oblige, but posting the entire thing would be far too much.
I'm working on a viewer in Qt to show images with lines or text on top.
I have organized the images, lines and text on several layers, each layer being a quad drawn with GL_QUADS.
If I stack images in z and then draw a layer on top with lines, it all works as expected.
But I want to draw more lines on several other layers at the same z as the first lines layer, and this is the result:
lines layers conflict.
I don't understand why each lines layer erases the previously drawn lines layer where they overlap (yet doesn't corrupt the underlying images).
Moreover, if I draw another layer at the same z as the lines layer, but with some text, this is the result:
text layer issue.
The text layer creates a hole in all underlying layers and you can see the background through it.
Lines and text are painted with QPainter on a QImage this way:
m_img = new QImage(&m_buffer[0], width, height, QImage::Format_RGBA8888);
m_img->fill(Qt::transparent);
QPen pen(color);
pen.setWidth(2);
m_painter.begin(m_img);
m_painter.setRenderHints(QPainter::Antialiasing, true);
m_painter.setPen(pen);
m_painter.drawLines(lines);
m_painter.end();
QFont font;
int font_size = font.pointSize() * scale;
if (font_size > 0) { font.setPointSize(font_size); }
QPen pen(color);
m_painter.begin(m_img);
m_painter.setRenderHints(QPainter::Antialiasing, true);
m_painter.setFont(font);
m_painter.setPen(pen);
for(int index = 0; index < messages.size(); index++)
m_painter.drawText(positions.at(index), messages.at(index));
m_painter.end();
and then uploaded as textures with:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, d->width(), d->height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, d->data());
This is my texture setup():
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST/*GL_LINEAR*/);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST/*GL_LINEAR*/);
opengl_error_check(__FILE__, __LINE__);
This is my initializeGL():
glClearColor(0.0, 0.25, 0.5, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_MULTISAMPLE);
glEnable(GL_DEPTH_TEST);
glEnable (GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glShadeModel(GL_FLAT);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
Finally I have set QGLFormat(QGL::AlphaChannel) in my QGLWidget.
I know there is a z-fighting problem when using overlapping planes at the same z, but as far as I know that should only matter if the overlapping textures are not transparent. In my case some artifacts might be expected where lines cross, but I don't understand why whole lines disappear.
And since I draw lines and text the same way, I don't understand why the text layer affects the image layers while the lines layers do not.
Last note: I have printed the pixel values of all the textures right before glTexImage2D(), and the values are as expected.
I'm pretty sure there is some obvious mistake; can someone point out where I'm wrong?
OpenGL works on the "painter's" principle: unless you use Z-ordering (i.e. the depth test), whatever is drawn last is drawn on top. It works like splattering paint on a wall, hence the term. Nota bene: the depth test is off by default and, in general, it does slow rendering down.
If you use Z-ordering, OpenGL will "hide" fragments that fall into an area of window space (the color buffer) where "paint" that is closer to the viewer already exists. Thus, there is no depth-based "automatic" transparency in OpenGL: to emulate transparency you must paint things in the proper order, with a proper blending mode. That may prove to be a problem if objects intersect or self-intersect. Creating complex scenes with transparency and shadows requires techniques such as deferred rendering.
If you paint with the same Z, the result again depends on blending, and if the color is solid you'll simply overpaint what is already there, just as if the depth test were off.
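To make that concrete: one common way to handle several translucent layers that share the same Z is to draw the opaque images first, then draw the line and text layers back to front with blending enabled and depth writes disabled, so that equal-Z layers cannot reject or overwrite each other through the depth buffer. A rough sketch (the helper names are made up, not taken from your code):
// Opaque image layers first, with normal depth writes.
glEnable(GL_DEPTH_TEST);
drawImageLayers();                      // hypothetical helper
// Translucent overlays (lines, text) in back-to-front order. Depth writes are
// turned off so layers at the same Z simply blend over one another.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);
drawLineLayers();                       // hypothetical helper
drawTextLayers();                       // hypothetical helper
glDepthMask(GL_TRUE);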
PS. There is not enough data about the text issue; I don't see any text there, but it looks like you are painting on top of OpenGL's output. Which widget is it, QGLWidget or QOpenGLWidget? Those two sources write in separate passes, and fonts are drawn by Qt using platform-dependent means, so the text might be getting overwritten. Mixing Qt's painter output with OpenGL is not recommended; you may want to look into libraries that render text directly in OpenGL.
I'm writing my own 3D engine, without OpenGL or DirectX: all the 3D calculations are my own code. When a whole frame has been calculated (a 2D color array), I have to draw it to a window, and to do this I use GLUT and OpenGL. But I don't use OpenGL's 3D features.
So I have a 2D color array (actually a 1D unsigned char array, which is used as a 2D array), and I have to draw it with OpenGL/GLUT, in an efficient way.
There are two important things:
performance (FPS value)
easy to resize, auto-scale the window's content
I wrote 3 possible solutions:
Two for loops, where every pixel is a GL_QUADS quad with a color. Very slow, but it works correctly, and window resizing also works correctly.
glDrawPixels: not the most efficient, but I'm satisfied with the FPS value. Unfortunately, when I resize the window, the content doesn't scale automatically. I tried to write my own resize callback function, but it never worked. Green is not missing here; it's fine.
The best solution is probably to make a texture from that unsigned char array and draw only a single quad. Very fast, window resizing works correctly, and the content scales automatically. But, and this is my question, the green color component is missing and I don't know why.
A few hours ago I was using a float array with 3 components and 0...1 values. Green was missing, so I decided to use an unsigned char array instead, because it's smaller. Now I use 4 components, though the 4th is always unused, with values 0...255. Green is still missing. My texturing code:
GLuint TexID;
glGenTextures(1, &TexID);
glPixelStorei(GL_UNPACK_ALIGNMENT, TexID); // I tried PACK/UNPACK and ALIGNMENT/ROW_LENGTH, green is still missing
glTexImage2D(GL_TEXTURE_2D, 0, RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, a->output);
glTexParametri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParametri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParametri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParametri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glBindTexture(GL_TEXTURE_2D, TexID);
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
// glTexCoord2f ... glVertex2f ... four times
glEnd();
glDisable(GL_TEXTURE_2D);
glDeleteTextures(1, &TexID);
If I use glDrawPixels, it works fine: the R, G and B components all appear in the window. But when I use the code above with glTexImage2D, the green component is missing. I've tried a lot of things, but nothing has solved this green issue.
When I draw coloured cubes with my 3D engine, any colour that contains a green component (for example orange, white or green) comes out wrong, without the green component: orange looks reddish, green comes out black, and so on. Objects whose colours don't contain green are fine; for example, red cubes are red and blue cubes are blue.
I think glTexImage2D's parameters are wrong and I have to change them.
glPixelStorei(GL_UNPACK_ALIGNMENT, TexID); is very wrong.
Accepted values for that function are:
1, 2, 4 and 8
They represent the row alignment (in bytes) of the image data you upload. For an 8-bit-per-component RGBA image, you generally want either 1 or 4 (the default).
You are extremely lucky if this actually works: TexID must happen to be 1 (probably because you only have one texture loaded). As soon as you have three textures loaded and wind up with a TexID of 3, you're going to generate GL_INVALID_VALUE and your program is not going to work at all.
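A corrected version of the upload, keeping the rest of your code as it is, would look roughly like this:
// A row alignment of 1 byte is always safe for tightly packed RGBA data.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, TexID);   // bind the texture before specifying its image
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, a->output);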
I'm copying a texture from a framebuffer to an empty texture using
float *temp = new float[width*height*4];
glBindTexture(GL_TEXTURE_2D, fbTex2);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, temp);
glBindTexture(GL_TEXTURE_2D, colour_map_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, temp);
(yes I tried glCopyImageSubData() and it didn't work)
and colour_map_tex is initialized as
glGenTextures(1,&colour_map_tex);
glBindTexture(GL_TEXTURE_2D, colour_map_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,width,height,0,GL_RGBA,GL_UNSIGNED_BYTE,NULL);
This is then used to hold a colour map (everything is drawn, but using a single colour for each object/mesh), which is rendered to a framebuffer and then used to make a mask.
The issue is that when I use the mask, everything is aligned at first, i.e. the mask and the texture it is masking line up, but when I move the camera around the translations differ slightly, which results in the masked image being badly skewed relative to the actual scene.
So my questions are: is there anything that is likely to be the cause of the skewing, or anything that can be done to fix it? Would a third framebuffer be a better idea than copying the data to an empty texture, and if so, why?
Overview of what is happening:
1. The whole scene is rendered with textures to a framebuffer.
2. The whole scene is rendered a second time without textures, but with each mesh given a single colour; this is rendered to a second framebuffer and is used for the mask.
3. The mask texture is copied to an empty texture.
4. The texture from the first framebuffer is drawn onto a plane the size of the viewport (drawn to the second framebuffer).
5. The mask is overlaid onto the plane to mask out parts of the texture (drawn to the second framebuffer).
6. The texture from the first framebuffer is drawn onto a plane the size of the viewport, this time drawn to the screen.
7. Optionally, a post-processed image is generated from the texture in the second framebuffer; it is semi-transparent and drawn over the rendering of the scene.
I haven't posted the whole display function because it's pretty big, but I'm happy to post more if there is a specific bit you want to see.
I've been trying to make Worms style destructible terrain, and so far it's been going pretty well...
Snapshot1
I have rigged it so that the following image is masked onto the "chocolate" texture.
CircleMask.png
However, as can be seen in Snapshot 1, the "edges" of the CircleMask are still visible (overlapping each other). I'm fairly certain it has something to do with aliasing, since the mask image is being stretched before being applied (that, and SquareMask.png does not have this issue). This is my problem.
My masking code is as follows:
void MaskedSprite::draw(Point pos)
{
glEnable(GL_BLEND);
// Our masks should NOT affect the buffer's color, only its alpha.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
glBlendFunc(GL_ONE_MINUS_DST_ALPHA,GL_DST_ALPHA);
// Draw all holes in the texture first.
for (unsigned i = 0; i < masks.size(); i++)
if (masks.at(i).mask) masks.at(i).mask->draw(masks.at(i).pos, masks.at(i).size);
// But our image SHOULD affect the buffer's color.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
// Now draw the actual sprite.
Sprite::draw(pos);
glDisable(GL_BLEND);
}
The draw() function draws a quad with the texture on it to the screen. It has no blend functions.
If you invert the alpha channel on your mask image so that the inside of the circle has alpha 0.0, you can use the following blending mode:
glClearColor(0,0,0,1);
// ...
glBlendFunc(GL_DST_ALPHA, GL_ZERO);
This means that when the screen is cleared, each pixel's alpha will be set to 1.0. Each time the mask is rendered with this blending enabled, it multiplies the mask's alpha value by the current alpha at that pixel, so the alpha value can never increase.
Note that using this technique, any alpha channel in the sprite texture will be ignored. Also, if you are rendering a background before the terrain, you will need to change the blend function before rendering the final sprite image. Something like glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA) would work.
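Put together with your existing function, and assuming a background has already been drawn as described above, the whole routine might look roughly like this (same names as in your code, just with the blend functions swapped in):
void MaskedSprite::draw(Point pos)
{
    glEnable(GL_BLEND);
    // Punch the holes: write only alpha, multiplying it down towards 0.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
    glBlendFunc(GL_DST_ALPHA, GL_ZERO);
    for (unsigned i = 0; i < masks.size(); i++)
        if (masks.at(i).mask) masks.at(i).mask->draw(masks.at(i).pos, masks.at(i).size);
    // Draw the sprite where destination alpha survived; elsewhere the
    // background that was rendered earlier shows through.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
    Sprite::draw(pos);
    glDisable(GL_BLEND);
}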
Another solution would be to use your blending mode but set the mask texture's interpolation mode to nearest-neighbor to ensure that each value sampled from the mask is either 0.0 or 1.0:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
My last bit of advice is this: the hard part about destructible 2D terrain is not getting it to render correctly, it's doing collision detection with it. If you haven't given thought to how you plan to tackle it, you might want to.
I have created a PNG image in Photoshop with transparency and loaded it into an OpenGL program. I have bound it to a texture, but in the program the picture looks blurry and I'm not sure why.
http://img685.imageshack.us/img685/9130/upload2.png
http://img695.imageshack.us/img695/2424/upload1e.png
Loading Code
// Texture loading object
nv::Image title;
// Return true on success
if(title.loadImageFromFile("test.png"))
{
glGenTextures(1, &titleTex);
glBindTexture(GL_TEXTURE_2D, titleTex);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, title.getInternalFormat(), title.getWidth(), title.getHeight(), 0, title.getFormat(), title.getType(), title.getLevel(0));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 16.0f);
}
else
MessageBox(NULL, "Failed to load texture", "End of the world", MB_OK | MB_ICONINFORMATION);
Display Code
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, titleTex);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTranslatef(-800, 0, 0.0);
glColor3f(1,1,1);
glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex2f(0,0);
glTexCoord2f(0.0, 1.0); glVertex2f(0,600);
glTexCoord2f(1.0, 1.0); glVertex2f(1600,600);
glTexCoord2f(1.0, 0.0); glVertex2f(1600,0);
glEnd();
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);
EDIT: I don't think I'm stretching it; each pixel should map to two units in the coordinate system:
int width=800, height=600;
int left = -400-(width-400);
int right = 400+(width-400);
int top = 400+(height-400);
int bottom = -400-(height-400);
gluOrtho2D(left,right,bottom,top);
OpenGL will (normally) require that the texture itself have a size that's a power of 2, so what's (probably) happening is that your texture is being scaled to a size where the dimensions are a power of 2, then it's being scaled back to the original size -- in the process of being scaled twice, you're losing some quality.
You apparently just want to display your bitmap without any scaling, without wrapping it onto the surface of another object, or anything like that (i.e., any of the things textures are intended for). That being the case, I'd just display it as a bitmap, not a texture (e.g. see glRasterPos2i and glDrawPixels; glBitmap is the analogous call for 1-bit bitmaps).
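A minimal sketch of that approach, assuming the image is available as a tightly packed RGBA byte buffer (pixels, imgWidth and imgHeight are made-up names, not from your code):
// Place the raster position at the lower-left corner of the viewport, then
// blit the pixels directly; no texture object and no power-of-two sizes involved.
glRasterPos2i(0, 0);
glDrawPixels(imgWidth, imgHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels);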
Why would you want to use mipmapping and anisotropic filtering for a static image on your start screen in the first place? It seems unlikely the image will be viewed at an oblique angle (what anisotropic filtering is for) or have to be resized many times really fast (what mipmapping is for).
If the texture is being stretched: try using GL_NEAREST for GL_TEXTURE_MAG_FILTER; in this case it could give better results (GL_LINEAR is more accurate, but tends to blur).
If the texture is minified: same thing, try GL_NEAREST_MIPMAP_NEAREST or, even better, no mipmaps at all with GL_NEAREST (or GL_LINEAR, whichever gives you the best result).
I'd suggest making the PNG's dimensions powers of two, i.e. 1024x1024, and placing the part you want to draw in the upper-left corner of it, still at the screen resolution. Then rescale the texcoords to 800.0/1024.0 and 600.0/1024.0 to pick the right part out of the texture. I believe what is going on is that glTexImage2D can sometimes handle widths and heights that are not powers of two, but it may then scale the input image and thus filter it.
An example of handling this can be viewed here (an iPhone OpenGL ES project that grabs a non-power-of-two screen region, draws it into a 512x512 texture, and rescales the GL_TEXTURE matrix at line 123 instead of manually rescaling the texcoords).
Here is another post mentioning this method.
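For the sub-image variant, a sketch might look like the following (this assumes the nv::Image data really is 8-bit RGBA; adjust the format/type arguments if the loader reports something else):
// Reserve power-of-two storage once, with no pixel data.
glBindTexture(GL_TEXTURE_2D, titleTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// Upload the 800x600 image into the top-left corner, completely unscaled.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, title.getWidth(), title.getHeight(),
                title.getFormat(), title.getType(), title.getLevel(0));
// Only sample the part of the texture that actually holds the image.
const float s = 800.0f / 1024.0f;
const float t = 600.0f / 1024.0f;
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(0, 0);
glTexCoord2f(0.0f, t);    glVertex2f(0, 600);
glTexCoord2f(s,    t);    glVertex2f(1600, 600);
glTexCoord2f(s,    0);    glVertex2f(1600, 0);
glEnd();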
Here is a hypothesis:
Your window is 800x600 (and maybe your framebuffer too), but your client area is not, because of the window decoration on the sides.
So your framebuffer gets resized when it is blitted to the client area of your window. Can you check your window creation code?
Beware of the exact size of the client area of your window; double-check that it is what you expect it to be.
Beware of pixel alignment rules. You might need to add 0.5 to your x/y coordinates to hit pixel centers. A description for DirectX can be found here; OpenGL's rules may be different, though.
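As a concrete check on the OpenGL side, it is worth making the viewport and projection agree with the real client-area size rather than the nominal window size; winWidth and winHeight below are placeholders for whatever size the windowing system actually reports:
// Use the real client-area size, not the size the window was created with
// (window decorations can make the two differ).
glViewport(0, 0, winWidth, winHeight);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, winWidth, winHeight, 0);   // one unit per pixel, origin at the top-left
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();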