I'm loading custom data into a 2D GL_RGBA16F texture:
glActiveTexture(GL_TEXTURE0);
int Gx = 128;
int Gy = 128;
GLuint grammar;
glGenTextures(1, &grammar);
glBindTexture(GL_TEXTURE_2D, grammar);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA16F, Gx, Gy);
float* grammardata = new float[Gx * Gy * 4](); // set default to zero
*(grammardata) = 1;
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, Gx, Gy, GL_RGBA, GL_FLOAT, grammardata);
int grammarloc = glGetUniformLocation(p_myGLSL->getProgramID(), "grammar");
if (grammarloc < 0) {
    printf("grammar missing!\n");
    exit(0);
}
glUniform1i(grammarloc, 0);
When I read the value of uniform sampler2D grammar in GLSL, it returns 0.25 instead of 1. How do I fix the scaling problem?
if (texture(grammar, vec2(0,0)).r == 0.25) {
    FragColor = vec4(0,1,0,1);
} else {
    FragColor = vec4(1,0,0,1);
}
By default, texture filtering and wrapping are set to the following values:
GL_TEXTURE_MIN_FILTER = GL_NEAREST_MIPMAP_LINEAR,
GL_TEXTURE_MAG_FILTER = GL_LINEAR
GL_TEXTURE_WRAP_[S|T|R] = GL_REPEAT
This means that whenever the mapping between texels of the texture and pixels on the screen is not exact, the hardware will interpolate for you. There can be two cases:
The texture is displayed smaller than it actually is: in this case, interpolation is performed between two mipmap levels. If no mipmaps have been generated, the missing levels are treated as being 0, which could lead to 0.25.
The texture is displayed larger than it actually is (and I think this is the case here): here, the hardware does not interpolate between mipmap levels, but between adjacent texels of the texture. The problem comes from the fact that (0,0) in texture coordinates is NOT the center of texel [0,0], but its lower-left corner.
Have a look at the following drawing, which illustrates how texture coordinates are defined (here with 4 texels)
tex-coord:   0           0.25        0.5         0.75        1
texels       |-----0-----|-----1-----|-----2-----|-----3-----|
As you can see, 0 lies on the boundary of a texel, while the first texel's center is at 1/(2 * |texels|).
This means for you that, with the wrap mode set to GL_REPEAT, texture coordinate (0,0) will interpolate uniformly between the texels [0,0], [-1,0], [-1,-1] and [0,-1]. Since -1 wraps to 127 (due to GL_REPEAT) and everything except [0,0] is 0, this results in
([0,0] + [-1,0] + [-1,-1] + [0,-1]) / 4 =
(  1   +   0    +    0    +   0   ) / 4 = 0.25
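If the goal is simply to read back the exact value stored in texel [0,0], the filtering can be bypassed in the shader. A minimal GLSL sketch (untested; it only assumes the sampler is called grammar, as in the question):
// Fetch texel [0,0] directly, bypassing filtering and wrapping:
vec4 v = texelFetch(grammar, ivec2(0, 0), 0);
// Or sample at the texel's center, so GL_LINEAR has nothing to blend:
vec2 center00 = (vec2(0, 0) + 0.5) / vec2(textureSize(grammar, 0));
vec4 w = texture(grammar, center00);
Alternatively, setting GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER to GL_NEAREST and the wrap modes to GL_CLAMP_TO_EDGE with glTexParameteri would also avoid the averaging for this particular lookup.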
I have created a texture and filled it with ones:
size_t size = width * height * 4;
float *pixels = new float[size];
for (size_t i = 0; i < size; ++i) {
    pixels[i] = 1.0f;
}
glTextureStorage2D(texture_id, 1, GL_RGBA16F, width, height);
glTextureSubImage2D(texture_id, 0, 0, 0, width, height, GL_RGBA, GL_FLOAT, pixels);
I use linear filtering (GL_LINEAR) and clamp to border.
But when I draw the image:
color = texture(atlas, uv);
the last row looks like it has alpha values less than 1. If, in the shader, I set the alpha to 1:
color.a = 1.0f;
it draws it correctly. What could be the reason for this?
The problem comes from the combination of GL_LINEAR and GL_CLAMP_TO_BORDER:
Clamp to border means that every texture coordinate outside of [0, 1] will return the border color. This color can be set with glTexParameterfv(..., GL_TEXTURE_BORDER_COLOR, ...) and is black by default.
Linear filtering takes into account texels adjacent to the sampling location (unless sampling happens exactly at texel centers [1]), and will thus also read the border color (which is black here).
If you don't want this behavior, the simplest solution is to use GL_CLAMP_TO_EDGE instead, which repeats the last row/column of texels to infinity. The different wrapping modes are explained very well at open.gl.
[1] Sampling most probably does not happen exactly at texel centers, as explained in this answer.
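Since the question uses the DSA-style glTextureStorage2D/glTextureSubImage2D calls, a minimal sketch of the two possible fixes might look like this (untested; texture_id is the handle from the question):
// Option 1: clamp to edge, so the last row/column is repeated instead of being blended with the border.
glTextureParameteri(texture_id, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTextureParameteri(texture_id, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Option 2: keep GL_CLAMP_TO_BORDER, but make the border opaque white instead of transparent black.
const float border[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
glTextureParameterfv(texture_id, GL_TEXTURE_BORDER_COLOR, border);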
I'm porting some old OpenGL 1.2 bitmap font rendering code to modern OpenGL (at least OpenGL 3.2+), and I'm wondering if I can use a GLSL shader to achieve what I've been doing manually.
When I want to draw the string "123", scaled to a particular size, I do the following steps with the sprites below.
I draw the sprite to the screen, scaled 2x with GL_NEAREST. However, to get a black outline, I actually draw the sprite several times.
x + 1, y + 0, BLACK
x + 0, y + 1, BLACK
x - 1, y + 0, BLACK
x + 0, y - 1, BLACK
x + 0, y + 0, COLOR (RED)
After the sprites have been drawn to the screen, I copy the screen to a texture, via glCopyTexSubImage2D.
I draw that texture back to the screen, but with GL_LINEAR.
The end result is a more visually appealing way of scaling pixel sprites. When upscaling small pixel sprites to arbitrary dimensions, using just GL_NEAREST (bottom-right) or just GL_LINEAR (bottom-left) gives an effect I don't like. Doubling the pixels with GL_NEAREST and then doing the remaining scaling with GL_LINEAR gives a result I prefer (top).
I'm pretty sure GLSL can do the black outline (thus saving me from having to do lots of draws), but could it also do the combination of GL_NEAREST and GL_LINEAR scaling?
You could achieve the effect of "2x nearest-neighbour upscaling followed by linear sampling" by pretending to sample a 4-texel neighbourhood from the upscaled texture while in reality sampling them from the original one. Then you'll have to implement bilinear interpolation manually. If you were targeting OpenGL 4+, textureGather() would be useful, though do keep this issue in mind. In my proposed solution below, I'll be using 4 texelFetch() calls, rather than textureGather(), as textureGather() would complicate things quite a bit.
Suppose you have an unscaled texture with black borders around the glyphs already present. Let's assume you have a normalized texture coordinate of vec2 pn = ... into that texture, where pn.x and pn.y are between 0 and 1. The following code should achieve the desired effect, though I haven't tested it:
ivec2 origTexSize = textureSize(sampler, 0);
int upscaleFactor = 2;
// Floating point texel coordinate into the upscaled texture.
vec2 ptu = pn * vec2(origTexSize * upscaleFactor);
// Decompose "ptu - 0.5" into the integer and fractional parts.
vec2 ptui;
vec2 ptuf = modf(ptu - 0.5, ptui);
// Integer texel coordinates into the upscaled texture.
ivec2 ptu00 = ivec2(ptui);
ivec2 ptu01 = ptu00 + ivec2(0, 1);
ivec2 ptu10 = ptu00 + ivec2(1, 0);
ivec2 ptu11 = ptu00 + ivec2(1, 1);
// Integer texel coordinates into the original texture.
ivec2 pt00 = clamp(ptu00 / upscaleFactor, ivec2(0), origTexSize - 1);
ivec2 pt01 = clamp(ptu01 / upscaleFactor, ivec2(0), origTexSize - 1);
ivec2 pt10 = clamp(ptu10 / upscaleFactor, ivec2(0), origTexSize - 1);
ivec2 pt11 = clamp(ptu11 / upscaleFactor, ivec2(0), origTexSize - 1);
// Sampled colours.
vec4 clr00 = texelFetch(sampler, pt00, 0);
vec4 clr01 = texelFetch(sampler, pt01, 0);
vec4 clr10 = texelFetch(sampler, pt10, 0);
vec4 clr11 = texelFetch(sampler, pt11, 0);
// Bilinear interpolation.
vec4 clr0x = mix(clr00, clr01, ptuf.y);
vec4 clr1x = mix(clr10, clr11, ptuf.y);
vec4 clrFinal = mix(clr0x, clr1x, ptuf.x);
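To show how this could be wired up, here is a self-contained fragment shader built around the snippet above (untested, written against #version 330; the names spriteTex, vUV and sampleNearestThenLinear are placeholders, not from the original answer):
#version 330 core
uniform sampler2D spriteTex; // the unscaled glyph texture (placeholder name)
in vec2 vUV;                 // normalized texture coordinate ("pn" above)
out vec4 fragColor;
vec4 sampleNearestThenLinear(sampler2D tex, vec2 pn, int upscaleFactor)
{
    ivec2 origTexSize = textureSize(tex, 0);
    // Floating point texel coordinate into the virtually upscaled texture.
    vec2 ptu = pn * vec2(origTexSize * upscaleFactor);
    // Decompose "ptu - 0.5" into its integer and fractional parts.
    vec2 ptui;
    vec2 ptuf = modf(ptu - 0.5, ptui);
    // Integer texel coordinates into the upscaled texture.
    ivec2 ptu00 = ivec2(ptui);
    ivec2 ptu01 = ptu00 + ivec2(0, 1);
    ivec2 ptu10 = ptu00 + ivec2(1, 0);
    ivec2 ptu11 = ptu00 + ivec2(1, 1);
    // Map back to (clamped) texel coordinates in the original texture.
    ivec2 pt00 = clamp(ptu00 / upscaleFactor, ivec2(0), origTexSize - 1);
    ivec2 pt01 = clamp(ptu01 / upscaleFactor, ivec2(0), origTexSize - 1);
    ivec2 pt10 = clamp(ptu10 / upscaleFactor, ivec2(0), origTexSize - 1);
    ivec2 pt11 = clamp(ptu11 / upscaleFactor, ivec2(0), origTexSize - 1);
    // Manual bilinear interpolation of the four fetched texels.
    vec4 clr0x = mix(texelFetch(tex, pt00, 0), texelFetch(tex, pt01, 0), ptuf.y);
    vec4 clr1x = mix(texelFetch(tex, pt10, 0), texelFetch(tex, pt11, 0), ptuf.y);
    return mix(clr0x, clr1x, ptuf.x);
}
void main()
{
    fragColor = sampleNearestThenLinear(spriteTex, vUV, 2);
}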
This question is related to Repeating OpenGL-es texture bound to hills in cocos2d 2.0
After reading the answers posted in the above post, I've used the following code for computing the vertices and texture coordinates:
CGPoint pt0,pt1;
float ymid = (p0.y + p1.y) / 2;
float ampl = (p0.y - p1.y) / 2;
pt0 = p0;
float U_Off = floor(pt0.x / 512);
for (int j = 1; j < _segments + 1; j++)
{
    pt1.x = p0.x + j * _dx;
    pt1.y = ymid + ampl * cosf(_da * j);
    float xTex0 = pt0.x / 512 - U_Off;
    _vertices[vertices++] = CGPointMake(pt0.x, 0);
    _vertices[vertices++] = CGPointMake(pt0.x, pt0.y);
    _texCoords[texCoords++] = CGPointMake(xTex0, 1.0f);
    _texCoords[texCoords++] = CGPointMake(xTex0, 0);
    pt0 = pt1;
}
p0 = p1;
But unfortunately, I still get a tear / misalignment in my texture (circled in yellow):
I've attached dumps of the arrays of vertices and texcoords
I'm new to OpenGL and can't figure out where the miscalculation is. How do I prevent the line (circled in yellow in the image) from appearing?
EDIT: My texture is either 1024x512 or 512x512 depending on the device. I use the following texture parameters:
ccTexParams tp2 = {GL_LINEAR, GL_LINEAR, GL_REPEAT, GL_CLAMP_TO_EDGE};
Most likely the reason is non-continuous texture coordinates.
In texcoords dump you have the following coordinates:
(CGPoint) 0x34b0b28 = (x=1.00390625, y=0)
(CGPoint) 0x34b0b30 = (x=0.005859375, y=1)
It means that between these two points the texture is mapped from 1 back to 0 (in the reverse direction). You should instead continue the texcoords past 1: 1.00390625 => 1.005859375 => ... Also, your texture must have a power-of-two size and must be set up with REPEAT mode.
If your texture is in an atlas and you cannot set REPEAT mode, you may try to clamp the texcoords to the [0, 1] range and place two edge points with x=1 and x=0 at the same position.
And, at last, if your texture doesn't change along the x-axis, you may set x = 0.5 for all points.
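For the first suggestion, a sketch of the loop with continuous texcoords (untested; it reuses the variable names from the question, and hillOriginX is a hypothetical variable holding the x position of the hill's first key point):
// Measure the S coordinate from ONE fixed origin for the whole hill and let
// GL_REPEAT handle the wrap, instead of re-offsetting every strip: per-strip
// offsets are what make the coordinate jump back towards 0 and cause the seam.
float U_Origin = floorf(hillOriginX / 512.0f); // computed once for the entire hill
for (int j = 1; j < _segments + 1; j++)
{
    pt1.x = p0.x + j * _dx;
    pt1.y = ymid + ampl * cosf(_da * j);
    float xTex0 = pt0.x / 512.0f - U_Origin; // keeps increasing, may go past 1.0
    _vertices[vertices++] = CGPointMake(pt0.x, 0);
    _vertices[vertices++] = CGPointMake(pt0.x, pt0.y);
    _texCoords[texCoords++] = CGPointMake(xTex0, 1.0f);
    _texCoords[texCoords++] = CGPointMake(xTex0, 0);
    pt0 = pt1;
}
Note that this relies on GL_REPEAT for the S wrap mode (as already set in the question's ccTexParams) and a power-of-two texture width.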
I am after a smooth, texture-based outline effect in OpenGL. So far I have tried mostly all kinds of edge detection algorithms, which result mostly in crude and jagged outlines. Then I read about distance fields. I found an example which produces a pretty nice distance field. Here is the GLSL code:
#version 420
layout(binding=0) uniform sampler2D colorMap;
flat in vec4 diffuseOut;
in vec2 uvsOut;
out vec4 outputColor;
const float ALPHA_THRESHOLD = 0.9;
const float NUM_SPOKES = 36.0; // Number of radiating lines to check in.
const float ANGULAR_STEP = 360.0 / NUM_SPOKES;
const int ZERO_VALUE = 128; // Color channel containing 0 => -128, 128 => 0, 255 => +127
int in_StepSize = 15;       // Distance to check each time (larger steps will be faster, but less accurate).
int in_MaxDistance = 30;    // Maximum distance to search out to. Cannot be more than 127!
vec4 distField(){
    vec2 pixel_size = 1.0 / vec2(textureSize(colorMap, 0));
    vec2 screenTexCoords = gl_FragCoord.xy * pixel_size;
    int distance;
    if (texture(colorMap, screenTexCoords).a == 0.0)
    {
        // Texel is transparent, search for nearest opaque.
        distance = ZERO_VALUE + 1;
        for (int i = in_StepSize; i < in_MaxDistance; i += in_StepSize)
        {
            if (find_alpha_at_distance(screenTexCoords, float(i) * pixel_size, 1.0))
            {
                i = in_MaxDistance + 1; // BREAK!
            }
            else
            {
                distance = ZERO_VALUE + 1 + i;
            }
        }
    }
    else
    {
        // Texel is opaque, search for nearest transparent.
        distance = ZERO_VALUE;
        for (int i = in_StepSize; i <= in_MaxDistance; i += in_StepSize)
        {
            if (find_alpha_at_distance(screenTexCoords, float(i) * pixel_size, 0.0))
            {
                i = in_MaxDistance + 1; // BREAK!
            }
            else
            {
                distance = ZERO_VALUE - i;
            }
        }
    }
    return vec4(vec3(float(distance) / 255.0) * diffuseOut.rgb, 1.0 - texture(colorMap, screenTexCoords).a);
}
void main()
{
    outputColor = distField();
}
The result of this shader covers the whole screen, using the diffuse color to fill the screen area outside the distance field outline. Here is how it looks:
What I need is for the area outside the distance field outline, which currently gets the solid red fill, to be left transparent.
I came to the solution by using a distance field grayscale 8-bit alpha map. Stefan Gustavson describes in detail how to do it. Basically, one needs to generate the distance field version of the original texture. In the first pass, this texture is rendered with the primitive normally into an FBO. In the second pass, alpha blending should be enabled; the texture from the first pass is used on a screen quad, and at this stage the fragment shader samples the alpha from that texture. This results in both smooth edges and alpha transparency around the edges.
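For reference, a minimal sketch of what such a second-pass fragment shader could look like (untested; the sampler name distanceFieldMap and the smoothstep thresholds are placeholders, not from the post):
#version 420
layout(binding=0) uniform sampler2D distanceFieldMap; // first-pass distance field texture
flat in vec4 diffuseOut;
in vec2 uvsOut;
out vec4 outputColor;
void main()
{
    // The distance field is stored in the alpha channel; 0.5 marks the glyph edge.
    float dist = texture(distanceFieldMap, uvsOut).a;
    // Smooth the edge over a small range around 0.5 to get an anti-aliased outline.
    float alpha = smoothstep(0.47, 0.53, dist);
    // Pixels well outside the outline end up with alpha near 0 and are blended
    // away instead of showing up as a solid fill color.
    outputColor = vec4(diffuseOut.rgb, alpha);
}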
Here is the result:
Based on the screenshot, I'm assuming you're rendering a fullscreen quad? If that's the case, Tim just provided the answer; try:
glEnable( GL_BLEND );
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Before you render the quad. Obviously if you're going to render non-transparent stuff too, I advise you to render those first so you won't get depth buffer problems. When you're done drawing the transparent stuff, call:
glDisable( GL_BLEND );
to turn alpha blending off again.
I'm working with omnidirectional point lights. I already implemented shadow mapping using a cubemap texture as the color attachment of 6 framebuffers, encoding the light-to-fragment distance in each pixel of it.
Now I would like, if this is possible, to change my implementation this way:
1) attach a depth cubemap texture to the depth buffer of my framebuffers, instead of colors.
2) render depth only, do not write color in this pass
3) in the main pass, read the depth from the cubemap texture, convert it to a distance, and check whether the current fragment is occluded by the light or not.
My problem comes when converting back a depth value from the cubemap into a distance. I use the light-to-fragment vector (in world space) to fetch my depth value in the cubemap. At this point, I don't know which of the six faces is being used, nor what 2D texture coordinates match the depth value I'm reading. Then how can I convert that depth value to a distance?
Here are snippets of my code to illustrate:
Depth texture:
glGenTextures(1, &TextureHandle);
glBindTexture(GL_TEXTURE_CUBE_MAP, TextureHandle);
for (int i = 0; i < 6; ++i)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT,
                 Width, Height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Framebuffers construction:
for (int i = 0; i < 6; ++i)
{
    glGenFramebuffers(1, &FBO->FrameBufferID);
    glBindFramebuffer(GL_FRAMEBUFFER, FBO->FrameBufferID);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, TextureHandle, 0);
    glDrawBuffer(GL_NONE);
}
The piece of fragment shader I'm trying to write to achieve my code:
float ComputeShadowFactor(samplerCubeShadow ShadowCubeMap, vec3 VertToLightWS)
{
    float ShadowVec = texture(ShadowCubeMap, vec4(VertToLightWS, 1.0));
    ShadowVec = DepthValueToDistance(ShadowVec);
    if (ShadowVec * ShadowVec > dot(VertToLightWS, VertToLightWS))
        return 1.0;
    return 0.0;
}
The DepthValueToDistance function is my actual problem.
So, the solution was to convert the light-to-fragment vector to a depth value, instead of converting the depth read from the cubemap into a distance.
Here is the modified shader code:
float VectorToDepthValue(vec3 Vec)
{
    vec3 AbsVec = abs(Vec);
    float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));
    const float f = 2048.0;
    const float n = 1.0;
    float NormZComp = (f + n) / (f - n) - (2 * f * n) / (f - n) / LocalZcomp;
    return (NormZComp + 1.0) * 0.5;
}
float ComputeShadowFactor(samplerCubeShadow ShadowCubeMap, vec3 VertToLightWS)
{
    float ShadowVec = texture(ShadowCubeMap, vec4(VertToLightWS, 1.0));
    if (ShadowVec + 0.0001 > VectorToDepthValue(VertToLightWS))
        return 1.0;
    return 0.0;
}
Explanation of VectorToDepthValue(vec3 Vec):
LocalZcomp corresponds to what would be the Z component of the given Vec in the matching frustum of the cubemap. It is simply the largest (absolute) component of Vec (for instance, if Vec.y is the biggest component, we will look either at the Y+ or the Y- face of the cubemap).
If you look at this Wikipedia article, you will understand the math that follows (I kept it in a formal form for clarity): it simply converts LocalZcomp into a normalized Z value in [-1..1] and then maps it into [0..1], which is the actual range for depth buffer values (assuming you didn't change it). n and f are the near and far values of the frusta used to generate the cubemap.
ComputeShadowFactor then simply compares the depth value from the cubemap with the depth value computed from the fragment-to-light vector (named VertToLightWS here), adds a small depth bias (which was missing in the question), and returns 1.0 if the fragment is not occluded by the light.
I would like to add more details regarding the derivation.
Let V be the light-to-fragment direction vector.
As Benlitz already said, the Z value in the respective cube side frustum/"eye space" can be calculated by taking the max of the absolute values of V's components.
Z = max(abs(V.x),abs(V.y),abs(V.z))
Then, to be precise, we should negate Z because in OpenGL, the negative Z-axis points into the screen/view frustum.
Now we want to get the depth buffer "compatible" value of that -Z.
Looking at the OpenGL perspective matrix...
http://www.songho.ca/opengl/files/gl_projectionmatrix_eq16.png
http://i.stack.imgur.com/mN7ke.png (backup link)
...we see that, for any homogeneous vector multiplied with that matrix, the resulting z value is completely independent of the vector's x and y components.
So we can simply multiply this matrix with the homogeneous vector (0,0,-Z,1) and we get the vector (components):
x = 0
y = 0
z = (-Z * -(f+n) / (f-n)) + (-2*f*n / (f-n))
w = Z
Then we need to do the perspective divide, so we divide z by w (Z) which gives us:
z' = (f+n) / (f-n) - 2*f*n / (Z* (f-n))
This z' is in OpenGL's normalized device coordinate (NDC) range [-1,1] and needs to be transformed into a depth buffer compatible range of [0,1]:
z_depth_buffer_compatible = (z' + 1.0) * 0.5
Further notes:
It might make sense to upload the results of (f+n), (f-n) and (f*n) as shader uniforms to save computation (see the sketch after these notes).
V needs to be in world space, since the shadow cube map is normally axis-aligned in world space; thus the "max(abs(V.x), abs(V.y), abs(V.z))" part only works if V is a world-space direction vector.
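As an illustration of the first note, a minimal host-side sketch of precomputing those terms (untested; shadowProgram and the uniform names fSumN, fDiffN and fMulN are placeholders):
// Precompute the constant terms of the depth conversion on the CPU and upload
// them once, so the shader does not redo the divisions for every fragment.
const float n = 1.0f;    // near plane of the shadow frusta (as in the shader above)
const float f = 2048.0f; // far plane of the shadow frusta
glUseProgram(shadowProgram);
glUniform1f(glGetUniformLocation(shadowProgram, "fSumN"), f + n);
glUniform1f(glGetUniformLocation(shadowProgram, "fDiffN"), f - n);
glUniform1f(glGetUniformLocation(shadowProgram, "fMulN"), f * n);
Inside VectorToDepthValue, the conversion would then read, for example, NormZComp = fSumN / fDiffN - (2.0 * fMulN) / fDiffN / LocalZcomp.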