OpenGL still tries to blur even with GL_NEAREST (GL_TEXTURE_2D) - opengl

An image says a thousand words, so what about two? I have this map art:
In order to actually use this as a map I scale this texture 6 times. This however didn't go as expected:
All the OpenGL code lives in my homebrew 2D OpenGL rendering library, and since OpenGL is a state machine it is hard to document the whole rendering process. But here is roughly what I do (the code is Python):
width, height = window.get_size()
glViewport(0, 0, width, height)
glMatrixMode(GL_PROJECTION)
glPushMatrix()
glLoadIdentity()
glOrtho(0.0, width, height, 0.0, 0.0, 1.0)
glMatrixMode(GL_MODELVIEW)
glPushMatrix()
glLoadIdentity()
# displacement trick for exact pixelization
glTranslatef(0.375, 0.375, 0.0)
glDisable(GL_DEPTH_TEST)
glEnable(GL_BLEND)
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
glEnable(self.texture.target) # GL_TEXTURE_2D
glBindTexture(self.texture.target, self.texture.id)
glTexParameteri(self.texture.target, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
glTexParameteri(self.texture.target, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
glPushMatrix()
glTranslatef(self.x, self.y, 0.0) # self.x and self.y are negative int offsets
glScalef(self.scale_x, self.scale_y, 1.0) # scale_x and scale_y are both 6
glBegin(GL_QUADS)
glTexCoord2i(0, 0)
glVertex2i(0, 0)
glTexCoord2i(0, 1)
glVertex2i(0, self.texture.height)
glTexCoord2i(1, 1)
glVertex2i(self.texture.width, self.texture.height)
glTexCoord2i(1, 0)
glVertex2i(self.texture.width, 0)
glEnd()
glPopMatrix()
glDisable(self.texture.target)
However, this "blurring" bug doesn't occur when I use GL_TEXTURE_RECTANGLE_ARB. I'd like to also be able to use GL_TEXTURE_2D, so can someone please point out how to stop this from happening?

When in doubt - replace your texture with a same-size black-and-white checkerboard (1px black/white squares). It will give you a good sense of what is going on - will it be uniformly gray (displacement is wrong) or will it have waves (scaling is wrong)?
Make sure you don't have mip-maps automatically generated and used.
Generally you don't need any special displacement, texels match pixels with properly setup glOrtho.
Another important issue - use power-of-two textures, as older GPUs could use various schemes to support NPOT textures (scaling or padding transparently to the user), which could result in just this sort of blurring.
To manually work with NPOT textures you will need to pad them with clear pixels up to the next POT size and scale your UV values in glTexCoord2f(u, v) by the factor npotSize / potSize. Note that this is not compatible with tiling, but judging from your art you don't need it.
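As a rough sketch of that manual workaround (the helper names here are mine, not from the question):

```python
def next_pot(n):
    """Smallest power of two >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

def npot_uv(u, npot_size, pot_size):
    """Scale a 0..1 UV so it only addresses the original NPOT pixels
    inside the POT-padded texture."""
    return u * (npot_size / pot_size)

# A 300-texel-wide image gets padded to 512 texels; the right edge of
# the original art then sits at u = 300/512, not u = 1.0.
pot = next_pot(300)                  # 512
right_edge = npot_uv(1.0, 300, pot)  # 300/512
```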

# displacement trick for exact pixelization
glTranslatef(0.375, 0.375, 0.0)
Unfortunately this is not enough, since you also need to scale the texture. For an unscaled, untranslated texture matrix, to address a certain pixel i of a texture with dimension N you need to apply the formula
(2i + 1)/(2N)
You can derive your scaling and translation from that – or determine the texture coordinates directly.
EDIT due to comment.
Okay, let's say your texture is 300 pixels wide, and you want to address exactly pixels 20 to 250; then the texture coordinates to choose for an identity texture matrix would be
(2*20 + 1)/(2*300) = 41/600 = 0.0650
and
(2*250 + 1)/(2*300) = 501/600 = 0.8350
But you could apply a transformation through the texture matrix as well. You want to map the pixels 0…299 (for 300 pixels width the index goes from 0 to 300-1 = 299) to 0…1. So let's put in those figures:
(2*0 + 1)/(2*300) = 1/600 =~= 0.0017 = a
(2*299 + 1)/(2*300) = 599/600 =~= 0.9983 = b
b-a =~= 0.9967
So you have to scale down the range 0…1 by 0.9967 and offset it by 0.0017 in the width. The same calculation applies to the height←→t coordinates. Remember that the order of transformations matters. You must first scale, then translate, so in the matrix multiplication the translation is multiplied first:
// for a texture 300 pixels wide
glTranslatef(0.0017, …, …);
glScalef(0.9967, …, …);
If you want use pixels instead of the range 0…1, further divide the scale by the texture width.
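The formula above is easy to wrap in a tiny helper (a sketch; the numbers reproduce the 300-pixel example):

```python
def texel_center(i, n):
    """Texture coordinate of the center of texel i in an n-texel-wide
    texture: (2i + 1) / (2n)."""
    return (2 * i + 1) / (2 * n)

n = 300
a = texel_center(0, n)       # first texel center, ~0.0017
b = texel_center(n - 1, n)   # last texel center, ~0.9983
scale = b - a                # ~0.9967 -> argument for glScalef
offset = a                   # ~0.0017 -> argument for glTranslatef
```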
A BIG HOWEVER:
OpenGL-3 removed the whole set of matrix manipulation functions and expects you to supply ready-to-use matrices to your shaders. And in OpenGL fragment shaders there is a nice function, texelFetch, which you can use to fetch texture pixels directly with absolute integer coordinates. Using that would make things a lot easier!

Related

Passing SRTM *.hgt file as texture to OpenGL shader results in skewed image

I am writing a terrain renderer that interprets NASA SRTM *.hgt files to get height data. I implemented the LOD technique described here: http://www.pheelicks.com/2014/03/rendering-large-terrains/.
My terrain mesh is 1024 x 1024. The height map is 1201 x 1201. I load the height map into memory and then send it to the shader as a texture. I sample it in the vertex shader using the current vertex position, whose x and z components are in the range [-512, 512]. Next I divide that by 1024, which gives me a vector with x and z in the range [-0.5, 0.5]. The upper left corner of the mesh is at (-0.5, -0.5), so I add vec2(0.5, 0.5) to "normalize" it.
When sampling a jpeg file with the calculated co-ordinates it all looks as expected, but when I'm sampling the heightmap it looks like this:
with this strange diagonal line.
After thinking for a while about what's wrong with this image, I discovered that the heightmap is actually just skewed (or my sampling is somehow skewed...) and on the left you can see the part that should be on the right, like here:
I have no idea what's going on here. Considering that the jpeg image lay perfectly on the mesh with the same sampling code, I thought it couldn't be something wrong with the sampling method.
All the code is available on the github repository: https://github.com/0ctothorp/terrain_rendering/tree/lod2 in the "lod2" branch.
The most interesting part is probably the sampling code:
sample_ = texture(heightmap, (position.xz / meshSize) + vec2(0.5f, 0.5f)).r;
position.y = sample_ / 50.0f;
and how I pass heightmap to shader:
GL_CHECK(glTexImage2D(GL_TEXTURE_2D, 0, GL_R16I, 1201, 1201, 0, GL_RED_INTEGER, GL_SHORT,
hmData->data()));
Apparently your data is tightly packed, whereas OpenGL expects it to be 4-byte aligned by default. You can override the expected alignment with the following prior to the glTexImage2D call:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
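To see why the default alignment skews this particular heightmap: each row of 1201 GL_SHORT values is 2402 bytes, which is not a multiple of 4, so OpenGL steps 2 bytes too far at every row and the image shears into a diagonal. A quick model of the rule (my own helper, not a GL call):

```python
def gl_row_stride(width, bytes_per_pixel, alignment):
    """Row stride in bytes that OpenGL assumes when unpacking pixel data,
    given GL_UNPACK_ALIGNMENT = alignment (rows are rounded up to a
    multiple of the alignment)."""
    row = width * bytes_per_pixel
    return (row + alignment - 1) // alignment * alignment

# 1201 two-byte texels per row:
tight = gl_row_stride(1201, 2, 1)   # 2402 - how the data is actually packed
padded = gl_row_stride(1201, 2, 4)  # 2404 - what GL assumes by default
```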

How to get accurate 3D depth from 2D screen mouse click for large scale object in OpenGL?

I am computing the 3D coordinate from the 2D screen mouse click, then drawing a point at the computed 3D coordinate. Nothing is wrong in the code, nothing is wrong in the method, everything is working fine. But there is one issue related to depth.
If the object size is around (1000, 1000, 1000), I get the full depth - the exact 3D coordinate of the object's surfel. But when I load an object with size (20000, 20000, 20000), I do not get the exact 3D coordinates; the point is drawn with some offset from the surface. So my first question is: why is this happening? And the second: how can I get the full depth and an accurate 3D coordinate for very large scale objects?
I draw a 3D point with
glDepthFunc(GL_LEQUAL);
glDepthRange(0.0, 0.999999);
and using
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &wz);
if(wz > 0.0000001f && wz < 0.999999f)
{
gluUnProject()....saving 3D coordinate
}
The reason why this happens is the limited precision of the depth buffer. An 8-bit depth buffer can, for example, only store 2^8 = 256 different depth values.
The second parameter that affects the depth precision is the setting of the near and far planes in the projection, since this is the range that has to be mapped to the available values in the depth buffer. If one sets this range to [0, 100] using an 8-bit depth buffer, then the actual precision is 100/256 ≈ 0.39, which means that approximately 0.39 units in eye space will get the same depth value assigned.
Now to your problem: most probably you have too few bits assigned to the depth buffer. As described above, this introduces an error since the exact depth value cannot be stored. This is why the points are close to the surface, but not exactly on it.
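As a back-of-the-envelope model (real perspective depth is distributed non-linearly toward the near plane, so this is only the linear worst case):

```python
def depth_resolution(depth_range, depth_bits):
    """Rough linear estimate of the smallest distinguishable depth step
    when depth_range eye-space units are mapped onto a depth_bits buffer."""
    return depth_range / (2 ** depth_bits)

coarse = depth_resolution(100.0, 8)   # ~0.39 units share one depth value
fine = depth_resolution(100.0, 24)    # ~6e-6 units with a 24-bit buffer
```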
I have solved this issue; it was happening because of the depth range. Since OpenGL is a state machine, I think I changed the depth range somewhere, when it should run from 0.0 to 1.0. I think it's always better to set the depth range just after clearing the depth buffer; I now use the following settings just after clearing the depth and color buffers.
Solution:
{
glClearColor(0.0,0.0,0.0,0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glDepthRange(0.0, 1.0);
glDepthMask(GL_TRUE);
}

How to turn off OpenGL filtering when scaling up textures?

I already have these lines in here when I'm loading the texture:
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
But when I scale the image here during rendering:
glTexCoord2d(0.0,0.0);
glTexCoord2d(0.0454,0.0);
glTexCoord2d(0.0454,1.0);
glTexCoord2d(1.0,1.0);
I get really bad filtering on the texture (the texture is a spritesheet with multiple frames and I'd rather not make an individual file for each frame).
You're doing it right in terms of texture filtering. However, it's not enough to avoid all sampling artifacts, especially if you move your sprites. You must also make sure that:
Your sprites' size in pixels (on the screen) is an integral multiple of their size in texels (in the texture)
You draw your sprites at integral coordinates in pixels (or integral-and-a-half, depending on how OpenGL positions texels in textures, I can't remember)
To illustrate the problem, say you have a 32x32 sprite (in the texture) and you map it to a 43x43-pixel quad on screen. Then the GPU only has 32 texels to fill a width of 43 pixels, so it needs to duplicate some of the texels. Exactly which texels are duplicated will depend on the coordinates of your quad on-screen (if you use non-integral coordinates). So moving sprites will appear to have weirdly flickering colors as the GPU decides to duplicate different texels to fill your quad.
To avoid this problem and achieve the best-looking sprites, you really want each texel to map to a single on-screen pixel. If you do this, then using GL_NEAREST or GL_LINEAR won't matter anymore since every pixel will only use the color from a single texel.
Your texture coordinates are pretty confusing to me because they don't form a square or rectangle (you have 3 different X values), nor are you showing the corresponding vertex coordinates (if you aren't interleaving those tex coords with vertex coords, you're just overwriting one texture coordinate over and over).
If you're trying to access a given (square) sub-region of the texture, then I would expect your texCoords to look something like this:
float x, float y; // position of the sprite in the texture
float width, float height; // size of the sprite in the texture
glBegin(GL_TRIANGLE_STRIP);
// lower left (note: each glTexCoord must come before its glVertex)
glTexCoord2d(x, y);
glVertex3d(...);
// lower right
glTexCoord2d(x + width, y);
glVertex3d(...);
// upper left
glTexCoord2d(x, y + height);
glVertex3d(...);
// upper right
glTexCoord2d(x + width, y + height);
glVertex3d(...);
glEnd();
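In practice the sprite rectangle is usually known in pixels, so something like this hypothetical helper computes the normalized x/y/width/height used above:

```python
def atlas_uv(px, py, pw, ph, tex_w, tex_h):
    """Normalized (x, y, width, height) of a sprite, given its pixel
    rectangle inside a tex_w x tex_h sprite sheet."""
    return (px / tex_w, py / tex_h, pw / tex_w, ph / tex_h)

# e.g. frame 3 of a horizontal strip of 64x64 frames in a 1408x64 sheet
# (22 frames, so each frame is 1/22 ~= 0.0454 wide, matching the question)
x, y, w, h = atlas_uv(3 * 64, 0, 64, 64, 1408, 64)
```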

OpenGL Frame Buffer Object for rendering to textures, renders weirdly

I'm using python but OpenGL is pretty much done exactly the same way as in any other language.
The problem is that when I try to render a texture or a line to a texture by means of a frame buffer object, it is rendered upside down, too small in the bottom left corner. Very weird. I have these pictures to demonstrate:
This is how it looks,
www.godofgod.co.uk/my_files/Incorrect_operation.png
This is how it did look when I was using pygame instead. Pygame is too slow, I've learnt. My game would be unplayable without OpenGL's speed. Ignore the curved corners. I haven't implemented those in OpenGL yet. I need to solve this issue first.
www.godofgod.co.uk/my_files/Correct_operation.png
I'm not using depth.
What could cause this erratic behaviour? Here's the code you may find useful (the functions are properly indented in the actual code):
def texture_to_texture(target, surface, offset):
    # target: an object of a class which contains texture data; its texture is the render target.
    # surface: the same kind of object; its texture is drawn onto the target.
    # offset: where the surface texture will be drawn on the target texture.
    # This creates the textures if they don't exist already, from image data or a block colour.
    # It seems to work fine, as direct rendering of textures to the screen works brilliantly.
    if target.texture is None:
        create_texture(target)
    if surface.texture is None:
        create_texture(surface)
    frame_buffer = glGenFramebuffersEXT(1)
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, frame_buffer)
    # target.texture is the texture id from the object
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, target.texture, 0)
    glPushAttrib(GL_VIEWPORT_BIT)
    glViewport(0, 0, target.surface_size[0], target.surface_size[1])
    # The last argument converts the 0-255 colours to 0-1. The textures appear with the
    # correct colour when drawn, so don't worry about that.
    draw_texture(surface.texture, offset, surface.surface_size, [float(c) / 255.0 for c in surface.colour])
    glPopAttrib()
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0)
    # Requires a sequence of the integer conversion of the ctypes variable, hence the odd
    # [int(frame_buffer)] needed to pass the frame buffer id to the function.
    glDeleteFramebuffersEXT(1, [int(frame_buffer)])
This function may also be useful,
def draw_texture(texture, offset, size, c):
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()  # load the modelview matrix
    glColor4fv(c)
    glBegin(GL_QUADS)
    glVertex2i(*offset)  # top left
    glVertex2i(offset[0], offset[1] + size[1])  # bottom left
    glVertex2i(offset[0] + size[0], offset[1] + size[1])  # bottom right
    glVertex2i(offset[0] + size[0], offset[1])  # top right
    glEnd()
    glColor4fv((1, 1, 1, 1))
    glBindTexture(GL_TEXTURE_2D, texture)
    glBegin(GL_QUADS)
    glTexCoord2f(0.0, 0.0)
    glVertex2i(*offset)  # top left
    glTexCoord2f(0.0, 1.0)
    glVertex2i(offset[0], offset[1] + size[1])  # bottom left
    glTexCoord2f(1.0, 1.0)
    glVertex2i(offset[0] + size[0], offset[1] + size[1])  # bottom right
    glTexCoord2f(1.0, 0.0)
    glVertex2i(offset[0] + size[0], offset[1])  # top right
    glEnd()
You don't show your projection matrix, so I'll assume it's identity too.
OpenGL framebuffer origin is bottom left, not top left.
The size issue is more difficult to explain. What is your projection matrix, after all?
Also, you don't show how you use the texture, and I'm not sure what we're looking at in your "incorrect" image.
Some unrelated comments:
Creating a framebuffer each frame is not the right way to go about it.
Come to think of it, why use a framebuffer at all? It seems that the only thing you're after is blending into the frame buffer, and glEnable(GL_BLEND) does that just fine.
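Since the FBO's origin is bottom-left while the pygame-style code assumed top-left, the practical fix is to flip the vertical axis when rendering into the FBO - either by flipping the ortho projection or by flipping the v texture coordinates. Modeled in plain Python (an assumed helper, not a GL call):

```python
def flip_v(v):
    """Convert a top-left-origin v coordinate (0..1) to OpenGL's
    bottom-left origin."""
    return 1.0 - v

# the glTexCoord2f v values from draw_texture, flipped for FBO rendering:
flipped = [flip_v(v) for v in (0.0, 1.0, 1.0, 0.0)]
```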

How to use GL_REPEAT to repeat only a selection of a texture atlas? (OpenGL)

How can I repeat a selection of a texture atlas?
For example, my sprite (selection) is within the texture coordinates:
GLfloat textureCoords[]=
{
.1f, .1f,
.3f, .1f,
.1f, .3f,
.3f, .3f
};
Then I want to repeat that sprite N times to a triangle strip (or quad) defined by:
GLfloat vertices[]=
{
-100.f, -100.f,
100.f, -100.f,
-100.f, 100.f,
100.f, 100.f
};
I know it has something to do with GL_REPEAT and textureCoords going past the range [0,1]. This, however, doesn't work (trying to repeat N = 10):
GLfloat textureCoords[]=
{
10.1f, 10.1f,
10.3f, 10.1f,
10.1f, 10.3f,
10.3f, 10.3f
};
We're seeing our full texture atlas repeated...
How would I do this the right way?
It can't be done the way it's described in the question. OpenGL's texture coordinate modes only apply for the entire texture.
Normally, to repeat a texture, you'd draw a polygon that is "larger" than your texture implies. For instance, if you had a square texture that you wanted to repeat a number of times (say six) over a bigger area, you'd draw a rectangle that's six times as wide as it is tall. Then you'd set the texture coordinates to (0,0)-(6,1), and the texture mode to "repeat". When interpolating across the polygon, the texture coordinate that goes beyond 1 will, due to repeat being enabled, "wrap around" in the texture, causing the texture to be mapped six times across the rectangle.
None of the texture wrap modes supports the kind of operation described in the question, i.e. they all map to the full [0,1] range, not some arbitrary subset. When you're texturing using just a part of the texture, there's no way to specify a larger texture coordinate in a way that makes OpenGL repeat it inside only the sub-rectangle.
You basically have two choices: either create a new texture that only has the sprite you need from the existing texture, or write a GLSL fragment program to map the texture coordinates appropriately.
I'm not sure you can do that. I think OpenGL's texture coordinate modes only apply for the entire texture. When using an atlas, you're using "sub-textures", so that your texture coordinates never come close to 0 and 1, the normal limits where wrapping and clamping occurs.
There might be extensions to deal with this, I haven't checked.
EDIT: This is a bit crude to explain without images, but as described above, repeating normally works by drawing a polygon "larger" than the texture implies and letting the texture coordinates beyond 1 wrap around. When you're texturing using just a part of the texture, though, there's no way to specify that larger texture coordinate in a way that makes OpenGL repeat it inside only the sub-rectangle. You basically have two choices: either create a new texture that only has the sprite you need from the existing texture, or write a GLSL fragment program to map the texture coordinates appropriately.
While this may be an old topic, here's how I ended up doing it:
A workaround is to create multiple meshes, glued together, each containing the subset of the texture UVs.
E.g.:
I have a laser texture contained within a larger texture atlas, at U[0.05 - 0.1] & V[0.05-0.1].
I would then construct N meshes, each having U[0.05-0.1] & V[0.05-0.1] coordinates.
(N = length / texture.height, where height is the dimension of the texture I would like to repeat. Or simpler: the number of times I want to repeat the texture.)
This solution is more cost-effective than having to reload texture after texture.
Especially if you batch all render calls (as you should).
(OpenGL ES 1.0,1.1,2.0 - Mobile Hardware 2011)
It can be done with a modulo of your tex-coords in the shader. The mod will repeat your sub-range coords.
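The same trick, modeled on the CPU in Python so the arithmetic is visible (in GLSL you would apply fract()/mod() to the interpolated coordinate in the fragment shader):

```python
import math

def fract(x):
    """Fractional part, like GLSL's fract()."""
    return x - math.floor(x)

def wrap_into_subrange(u, sub_min, sub_max, repeats):
    """Tile `repeats` copies of the atlas sub-range [sub_min, sub_max]
    across an incoming 0..1 coordinate."""
    return sub_min + fract(u * repeats) * (sub_max - sub_min)

# ten repeats of the 0.1..0.3 sub-range from the question: halfway
# through the first tile lands halfway through the sub-range.
u_mid = wrap_into_subrange(0.05, 0.1, 0.3, 10)  # 0.2
```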
I ran into your question while working on the same issue - although in HLSL and DirectX. I also needed mip mapping and had to solve the related texture bleeding too.
I solved it this way:
min16float4 sample_atlas(Texture2D<min16float4> atlasTexture, SamplerState samplerState, float2 uv, AtlasComponent atlasComponent)
{
    // Get LOD.
    // Never wrap these, as that would cause the LOD value to jump on wrap.
    // xy is left-top, zw is width-height of the atlas texture component.
    float2 lodCoords = atlasComponent.Extent.xy + uv * atlasComponent.Extent.zw;
    uint lod = ceil(atlasTexture.CalculateLevelOfDetail(samplerState, lodCoords));

    // Get texture size.
    float2 textureSize;
    uint levels;
    atlasTexture.GetDimensions(lod, textureSize.x, textureSize.y, levels);

    // Calculate component size and edge thickness - this is to avoid bleeding.
    // Note my atlas components are well behaved: all power of 2, mostly similar
    // size, tightly packed, no gaps.
    float2 componentSize = textureSize * atlasComponent.Extent.zw;
    float2 edgeThickness = 0.5 / componentSize;

    // Calculate texture coordinates.
    // We only support wrap for now.
    float2 wrapCoords = clamp(wrap(uv), edgeThickness, 1 - edgeThickness);
    float2 texCoords = atlasComponent.Extent.xy + wrapCoords * atlasComponent.Extent.zw;

    return atlasTexture.SampleLevel(samplerState, texCoords, lod);
}
Note the limitation is that the mip levels are blended this way, but in our use-case that is completely fine.
Can't be done...