Most efficient way to remove the darker border around a Gaussian blur - OpenGL

So I am drawing a blurred image in OpenGL, using the standard Gaussian blur formula (gaussian blur wiki).
This works perfectly fine. The problem, as many others have found, is the border: since the framebuffer contains only black outside of the rendered image, the blur pulls that black in and causes a dark edge (visual).
I've seen people mention that you can draw the image with a mirrored border (mirror).
From my understanding, this would require doubling the size of the framebuffer and would make OpenGL draw a lot more than otherwise needed.
Would there be a better way to go about this?
I was also considering just flipping a sample back into the image if it's out of bounds (flip).
This would require another 2-4 calculations for each pixel.
Are there any better ways to do this, or have I missed some really useful documentation?
Just to recap, I'm trying to find out what existing / optimized solutions there are for removing the darkened border on Gaussian-blurred images.

OpenGL has built-in wrap modes that decide what a texture sample returns when the coordinates fall outside the image, so there is no need to double the framebuffer. GL_MIRRORED_REPEAT does exactly the mirroring you describe, and GL_CLAMP_TO_EDGE (repeat the edge pixel) is the other common choice for blurs; plain GL_REPEAT would wrap around and sample the opposite side of the image. Set the mode on the texture the blur pass samples from:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);
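A minimal, hypothetical sketch of where that state lives, namely on the texture object the blur pass samples from (blurSrcTex, width, height and pixels are placeholder names, not from the question):
GLuint blurSrcTex;
glGenTextures(1, &blurSrcTex);
glBindTexture(GL_TEXTURE_2D, blurSrcTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
/* Out-of-range texture coordinates produced by the blur taps are mirrored
   back into the image instead of returning black. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);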

Related

OpenGL Super Resolution issues

I have an OpenGL-based GUI. I use super resolution to be able to handle various scales; instead of scaling images up, they are downscaled (unless someone happens to be running at a 4000x4000+ resolution).
The problem is, OpenGL doesn't seem to downscale smoothly. I get artifacts as if the scaling were nearest neighbor. (e.g. the text edges are blocky, even though they are not in the original)
These are the settings I use:
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
Here is a sample of the artifacts; the scaling is 2:1 I believe, though it may not be exact due to window edges and such.
You can see the left edge looks perfect (it isn't, though), but the right edge has weird breaks in it. The original graphic is perfectly symmetrical and has no artifacts.
I've tried GL_NEAREST and GL_LINEAR. No mipmapping, so...
Surely OpenGL is not that poor at scaling? I'd like something like bi-cubic scaling, or something that will produce good results.
I am using OpenGL 1.1. I could potentially pre-scale images, but I'd have to do that every time the window size changes and it might be slow on the CPU.
I have jagged edges on some images too. The whole point of super resolution was to avoid all this ;/
Is there some settings I'm missing?
First you have to understand some signal theory, namely the Nyquist theorem (that Wikipedia page is overly specific in talking about signals in the "time" domain; the principles are universal for all kinds of discretely sampled signals, including images). When downsampling you must always apply a lowpass anti-aliasing filter that cuts off all frequency components above half the sampling frequency, to avoid creating aliasing artifacts. Without filtering, even a linearly integrating downsampler will create artifacts. The realtime-graphics way of implementing a lowpass filter for textures is mipmaps: every mipmap level cuts off at exactly half the frequency of the next higher level.
You have two options now:
Implement mipmapping
Implement a downsampling fragment shader
Of course the sane thing to do would be not to render at an excess resolution in the first place, but to render your GUI at exactly the target resolution.
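As a rough sketch of option 1 under OpenGL 1.1 with GLU (requires the GL/GLU headers; tex, width, height and pixels are placeholder names, not taken from the question):
glBindTexture(GL_TEXTURE_2D, tex);
/* Build the full mipmap chain; each level is a lowpass-filtered,
   half-resolution copy of the previous one. */
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, width, height,
                  GL_RGBA, GL_UNSIGNED_BYTE, pixels);
/* Trilinear minification blends between the two nearest mip levels when the
   texture is drawn smaller than its native size. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);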
With the code you provided, I will make a guess at what might be the problem.
Try to load your image, or at least allocate its memory, before you set those texture parameters with glTexParameteri. Also, set GL_TEXTURE_MIN_FILTER to GL_LINEAR.
Perhaps you meant super sampling (SSAA), which uses 2 or more times the original resolution and downsamples it to get a smooth image?
From your image, it does look like it is using linear (bilinear) filtering.
Try using Anisotropic filtering:
/* Query the maximum supported anisotropy and apply it to the bound texture. */
GLfloat aniso = 0.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &aniso);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, aniso);
Anisotropic filtering can be applied at different levels; this code applies it at the maximum level, but you can use a number less than aniso if you like. These are extension macros; if you don't have the extension definitions, they are:
#define GL_TEXTURE_MAX_ANISOTROPY_EXT 0x84FE
#define GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT 0x84FF

Easy way to increase GL_LINEAR radius?

I want to do a bilinear-style filter with a larger radius. Does anybody know if there is some secret OpenGL command, like the following, that controls the parameters of the texture filter? In particular, I want better texture scaling when viewing the texture from far away; I get a good result from Python's imshow with a large radius.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

Tiling a background in OpenGL

I'm sure this is a relatively simple question, it's just one thing I've always had trouble wrapping my mind around.
I have a 512x512 background I'd like to tile "infinitely." I've searched around and can't seem to find a whole lot, so I figured I'd come here. Anyway, here it is:
background http://dl.dropbox.com/u/5003139/hud/stars_far.png
So, there you have it. I have a ship sprite that can move anywhere on a 2D plane, and this is a top-down game. How would I render this background so that it covers every pixel of an arbitrarily sized window?
With the GL_REPEAT texture wrap mode, texture coordinates outside the range [0,1] wrap around, repeating the texture. So you can draw a single screen-filling quad but use larger texture coordinates; for example, texture coordinates from (0,0) to (10,10) will repeat the texture 10 times in each direction. Repeat mode is enabled for the currently bound 2D texture with
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
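A minimal immediate-mode sketch of that idea (starsTex and the 10x repeat count are placeholder choices, not from the question). To scroll the background with the ship, offset the texture coordinates by the ship's position divided by the tile size:
glBindTexture(GL_TEXTURE_2D, starsTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
/* One quad covering the whole window (assuming identity modelview/projection
   matrices), with texture coordinates 0..10 so the 512x512 image tiles
   10 times in each direction. */
glBegin(GL_QUADS);
    glTexCoord2f( 0.0f,  0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(10.0f,  0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(10.0f, 10.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f( 0.0f, 10.0f); glVertex2f(-1.0f,  1.0f);
glEnd();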

OpenGL Mipmapping: how does OpenGL decide on map level?

I am having trouble implementing mipmapping in OpenGL. I am using OpenFrameworks and have modified the ofTexture class to support the creation and rendering of mipmaps.
The following code is the original texture creation code from the class (slightly modified for clarity):
glEnable(texData.textureTarget);
glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
glTexSubImage2D(texData.textureTarget, 0, 0, 0, w, h, texData.glType, texData.pixelType, data);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glDisable(texData.textureTarget);
This is my version with mipmap support:
glEnable(texData.textureTarget);
glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
gluBuild2DMipmaps(texData.textureTarget, texData.glTypeInternal, w, h, texData.glType, texData.pixelType, data);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glDisable(texData.textureTarget);
The code does not generate errors (gluBuild2DMipmaps returns '0') and the textures are rendered without problems. However, I do not see any difference.
The scene I render consists of "flat, square tiles" at z=0. It's basically a 2D scene. I zoom in and out by using "glScale()" before drawing the tiles. When I zoom out, the pixels of the tile textures start to "dance", indicating (as far as I can tell) unfiltered texture look-up. See: http://www.youtube.com/watch?v=b_As2Np3m8A at 25s.
My question is: since I do not move the camera position, but only use scaling of the whole scene, does this mean OpenGL can not decide on the appropriate mipmap level and uses the full texture size (level 0)?
Paul
Mipmapping will compensate for scene scale in addition to perspective distance. The vertex shader outputs (which the driver will still create even if you aren't using your own shader) specify the screenspace coordinates of each vertex and the texture coordinates of those vertices. The GPU will decide which mip level to use based on the texel-to-pixel ratio of the fragments that will be generated.
Are you setting GL_LINEAR_MIPMAP_LINEAR when you render your tiles as well? It only matters when you render things, not when you create/load the texture. Your bracketing glEnable/glDisable calls may need to be moved too, depending on what state you are actually passing in there.
You should probably switch to automatic mipmap generation if you're targeting OpenGL >= 1.4.
You could try changing GL_TEXTURE_MIN_LOD / GL_TEXTURE_MAX_LOD to force a particular mip level.
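For the automatic-generation route, here is a rough sketch using the OpenGL 1.4 GL_GENERATE_MIPMAP texture parameter (reusing the texData names from the code above; note the parameter has to be set before the pixel data is uploaded):
glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
/* Ask the driver to rebuild the mipmap chain whenever level 0 changes. */
glTexParameteri(texData.textureTarget, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(texData.textureTarget, 0, texData.glTypeInternal, w, h, 0,
             texData.glType, texData.pixelType, data);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR);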

OpenGL: small black pixel on top right corner of texture

I wrote an uncompressed TGA texture loader and it works nearly perfect, except for the fact that there's just one TINY little black patch on the upper right and it's driving me mad. I can get rid of it by using a texture border, but somehow I think that's not the practical solution.
Has anyone encountered this kind of problem before and knows -generally- what's going wrong when something like this happens, or should I post the image-loading function code?
Here's a picture, the little black dot is REALLY small.
Ok, I'm assuming that your image loading routine is correct. Do you use texture clamping (where the last pixels at the edges get repeated)? This may be necessary in this case for OpenGL to calculate the smoothed version of the texture. I remember that it worked for me without that trick on Linux, but not on Windows.
Texture clamping (GL_CLAMP_TO_EDGE is the mode that repeats the edge pixels; the older GL_CLAMP can still blend in the black border color):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
You may also need to play around with GL_TEXTURE_MAG_FILTER.