I'm trying to get familiar with shaders in OpenGL. Here is some sample code that I found (working with openFrameworks). The code simply blurs an image in two passes, first horizontally, then vertically. Here is the code for the horizontal shader. My only confusion is the texture coordinates: they exceed 1.
uniform sampler2DRect src_tex_unit0; // declarations implied by the snippet
uniform float blurAmnt;

void main( void )
{
    vec2 st = gl_TexCoord[0].st;

    // Horizontal blur with a 9-tap kernel (weights 1,2,3,4,5,4,3,2,1)
    // from http://www.gamerendering.com/2008/10/11/gaussian-blur-filter-shader/
    vec4 color = vec4(0.0); // the accumulator must start at zero
    color += 1.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * -4.0, 0));
    color += 2.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * -3.0, 0));
    color += 3.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * -2.0, 0));
    color += 4.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * -1.0, 0));
    color += 5.0 * texture2DRect(src_tex_unit0, st); // centre tap, offset 0
    color += 4.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * 1.0, 0));
    color += 3.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * 2.0, 0));
    color += 2.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * 3.0, 0));
    color += 1.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * 4.0, 0));
    color /= 25.0; // normalize: the kernel weights sum to 25
    gl_FragColor = color;
}
I can't make heads or tails of this code. Texture coordinates are supposed to be between 0 and 1, and I've read a bit about what happens when they're greater than 1, but that's not the behavior I'm seeing (or I don't see the connection). blurAmnt varies between 0.0 and 6.4, so s can go from 0 to 25.6. The image just gets blurred more or less depending on the value; I don't see any repeating patterns.
My question boils down to this: what exactly is happening when the texture coordinate argument in the call to texture2DRect exceeds 1? And why does the blurring behavior still function perfectly despite this?
The [0, 1] texture coordinate range only applies to the GL_TEXTURE_2D texture target. Since that code uses texture2DRect (and a samplerRect), it's using the GL_TEXTURE_RECTANGLE_ARB texture target, and this target uses unnormalized texture coordinates, in the range [0, width] x [0, height].
That's why you have "weird" texture coords. Don't worry, they work fine with this texture target.
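To make the difference concrete, here is a minimal fragment shader sketch contrasting the two targets (the sampler names and the 512 x 512 size are made up for illustration):

#extension GL_ARB_texture_rectangle : enable

uniform sampler2D     texNorm; // GL_TEXTURE_2D: coords in [0, 1]
uniform sampler2DRect texRect; // GL_TEXTURE_RECTANGLE_ARB: coords in [0, width] x [0, height]

void main( void )
{
    // Assuming both textures are 512 x 512, these two lookups hit the same spot:
    vec4 a = texture2D(texNorm, vec2(0.5, 0.5));          // normalized centre
    vec4 b = texture2DRect(texRect, vec2(256.0, 256.0));  // centre in texels
    gl_FragColor = mix(a, b, 0.5);
}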
That depends on the host code. If you saw something like

glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP);

then out-of-bounds s coordinates will sample zeros (the default border color), IIRC. Similarly for t.
A while ago I asked a question similar to this one, but in that case I was trying to correct the perspective texture mapping of a trapezoid whose horizontal lines are always parallel, using glTexCoord4f(), and that is relatively simple. However, now I'm trying to fix the texture mapping of the floor and ceiling in my engine. The problem is that since both depend on the shape of the map, I need to use triangles to fill in the polygonal shapes that the map may contain.

I tried a few variations of the same method I used for correct texture mapping on trapezoids. The attempt with the most "acceptable" results was when I calculated the size of the triangle's edges (in screen coordinates) and used each result as the 'q' in the corresponding glTexCoord4f(); that is how the code currently stands.

With that in mind, how can I fix this while using glTexCoord4f()?
Here is the code I used to correct the texture mapping of the walls (functional):
float u, v;

glEnable(GL_TEXTURE_2D);
glEnable(GL_DEPTH_TEST);

float sza = wyaa - wyab; // Size of the first vertical edge on the wall
float szb = wyba - wybb; // Size of the second vertical edge on the wall

// Does the wall have stretched textures?
if (!(*wall).streechTexture) {
    u = -texLength;
    v = -texHeight;
} else {
    u = -1;
    v = -1;
}

glBindTexture(GL_TEXTURE_2D, texture.at((*wall).texture));

glBegin(GL_TRIANGLE_STRIP);
    glTexCoord4f(0, 0, 0, sza);
    glVertex3f(wxa, wyaa + shearing, -tza * 0.001953);
    glTexCoord4f(u * szb, 0, 0, szb);
    glVertex3f(wxb, wyba + shearing, -tzb * 0.001953);
    glTexCoord4f(0, v * sza, 0, sza);
    glVertex3f(wxa, wyab + shearing, -tza * 0.001953);
    glTexCoord4f(u * szb, v * szb, 0, szb);
    glVertex3f(wxb, wybb + shearing, -tzb * 0.001953);
glEnd();

glDisable(GL_TEXTURE_2D);
And here is the current code that renders both the floor and the ceiling (which needs to be fixed):
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture.at((*floor).texture));

float difA, difB, difC;
difA = vectorMag(Vertex(fxa, fyaa), Vertex(fxb, fyba)); // Size of the first edge on the triangle
difB = vectorMag(Vertex(fxb, fyba), Vertex(fxc, fyca)); // Size of the second edge on the triangle
difC = vectorMag(Vertex(fxc, fyca), Vertex(fxa, fyaa)); // Size of the third edge on the triangle

glBegin(GL_TRIANGLE_STRIP); // Rendering the floor
    glTexCoord4f(ua * difA, va * difA, 0, difA);
    glVertex3f(fxa, fyaa + shearing, -tza * 0.001953);
    glTexCoord4f(ub * difB, vb * difB, 0, difB);
    glVertex3f(fxb, fyba + shearing, -tzb * 0.001953);
    glTexCoord4f(uc * difC, vc * difC, 0, difC);
    glVertex3f(fxc, fyca + shearing, -tzc * 0.001953);
glEnd();

glBegin(GL_TRIANGLE_STRIP); // Rendering the ceiling
    glTexCoord4f(uc, vc, 0, 1);
    glVertex3f(fxc, fycb + shearing, -tzc * 0.001953);
    glTexCoord4f(ub, vb, 0, 1);
    glVertex3f(fxb, fybb + shearing, -tzb * 0.001953);
    glTexCoord4f(ua, va, 0, 1);
    glVertex3f(fxa, fyab + shearing, -tza * 0.001953);
glEnd();

glDisable(GL_TEXTURE_2D);
Here is a picture of how it looks (for comparison purposes, the floor shows the failed attempt at correct texture mapping, while the ceiling has affine texture mapping):
I understand that it would be easier if I just set a normal perspective view, but that would simply defeat the whole purpose of the engine.
This is an issue only for the floor and ceiling (unless your camera can tilt), so you can keep rendering your walls as you are doing. But for floors and ceilings you have these basic options (as I mentioned in your old duplicate post):
Rasterize scan line on your own
So instead of rendering triangles (which old ray casters did not do), you render vertical lines pixel by pixel using points instead of triangles. That will be much slower of course, as GL is more suited to polygonal primitives. See the draw_scanline functions here:
Efficient floor/ceiling rendering in Raycaster
Use perspective view and pass z coordinate
It looks like you have added the z coordinate already, so now you just need to set a perspective view that matches your wall rendering; OpenGL will do the rest on its own. You should add something like gluPerspective for your GL_PROJECTION matrix, but just for your floors/ceilings ...
Pass z coordinate and override fragment shader
So you just write a fragment shader that computes the perspective-correct texture mapping itself and outputs the wanted texel color +/- some lighting (a sketch of the q-divide idea follows below). Here is an example of shader usage:
complete GL+GLSL+VAO/VBO C++ example
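As a rough sketch of that idea (assuming the s, t, q values set with glTexCoord4f arrive in gl_TexCoord[0]; floorTex is a made-up sampler name), the per-fragment division by q is what produces the perspective-correct result:

uniform sampler2D floorTex; // hypothetical floor/ceiling texture

void main( void )
{
    // Linearly interpolated s and t are distorted across the triangle;
    // dividing by the interpolated q restores perspective-correct
    // coordinates (this is exactly what texture2DProj does in one call).
    vec2 uv = gl_TexCoord[0].st / gl_TexCoord[0].q;
    gl_FragColor = texture2D(floorTex, uv);
}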
For more info see:
Ray Casting with different height size
I'm porting some old OpenGL 1.2 bitmap font rendering code to modern OpenGL (at least OpenGL 3.2+), and I'm wondering if I can use a GLSL shader to achieve what I've been doing manually.
When I want to draw the string "123", scaled to a particular size, I do the following steps with the sprites below.
I draw the sprite to the screen, scaled 2x with GL_NEAREST. However, to get a black outline, I actually draw the sprite several times:
x + 1, y + 0, BLACK
x + 0, y + 1, BLACK
x - 1, y + 0, BLACK
x + 0, y - 1, BLACK
x + 0, y + 0, COLOR (RED)
After the sprites have been drawn to the screen, I copy the screen to a texture, via glCopyTexSubImage2D.
I draw that texture back to the screen, but with GL_LINEAR.
The end result is a more visually appealing form of scaling pixel sprites. When upscaling small pixel sprites to arbitrary dimensions, using just GL_NEAREST (bottom-right) or just GL_LINEAR (bottom-left) gives an effect I don't like. Pixel doubling with GL_NEAREST, followed by the remaining scaling with GL_LINEAR, gives a result that I prefer (top).
I'm pretty sure GLSL can do the black outline (thus saving me from having to do lots of draws), but could it also do the combination of GL_NEAREST and GL_LINEAR scaling?
You could achieve the effect of "2x nearest-neighbour upscaling followed by linear sampling" by pretending to sample a 4-texel neighbourhood from the upscaled texture while in reality sampling them from the original one. Then you'll have to implement bilinear interpolation manually. If you were targeting OpenGL 4+, textureGather() would be useful, though do keep this issue in mind. In my proposed solution below, I'll be using 4 texelFetch() calls, rather than textureGather(), as textureGather() would complicate things quite a bit.
Suppose you have an unscaled texture with black borders around the glyphs already present. Let's assume you have a normalized texture coordinate of vec2 pn = ... into that texture, where pn.x and pn.y are between 0 and 1. The following code should achieve the desired effect, though I haven't tested it:
ivec2 origTexSize = textureSize(sampler, 0);
int upscaleFactor = 2;
// Floating point texel coordinate into the upscaled texture.
vec2 ptu = pn * vec2(origTexSize * upscaleFactor);
// Decompose "ptu - 0.5" into its integer and fractional parts.
// Note: modf() returns the fractional part and writes the integer
// part to its second (out) argument.
vec2 ptui;
vec2 ptuf = modf(ptu - 0.5, ptui);
// Integer texel coordinates into the upscaled texture.
ivec2 ptu00 = ivec2(ptui);
ivec2 ptu01 = ptu00 + ivec2(0, 1);
ivec2 ptu10 = ptu00 + ivec2(1, 0);
ivec2 ptu11 = ptu00 + ivec2(1, 1);
// Integer texel coordinates into the original texture.
ivec2 pt00 = clamp(ptu00 / upscaleFactor, ivec2(0), origTexSize - 1);
ivec2 pt01 = clamp(ptu01 / upscaleFactor, ivec2(0), origTexSize - 1);
ivec2 pt10 = clamp(ptu10 / upscaleFactor, ivec2(0), origTexSize - 1);
ivec2 pt11 = clamp(ptu11 / upscaleFactor, ivec2(0), origTexSize - 1);
// Sampled colours.
vec4 clr00 = texelFetch(sampler, pt00, 0);
vec4 clr01 = texelFetch(sampler, pt01, 0);
vec4 clr10 = texelFetch(sampler, pt10, 0);
vec4 clr11 = texelFetch(sampler, pt11, 0);
// Bilinear interpolation.
vec4 clr0x = mix(clr00, clr01, ptuf.y);
vec4 clr1x = mix(clr10, clr11, ptuf.y);
vec4 clrFinal = mix(clr0x, clr1x, ptuf.x);
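The final interpolated value would then be the fragment's output, e.g. (a sketch assuming a user-declared fragment output, as core-profile GLSL uses; fragColor is a made-up name):

out vec4 fragColor; // declared at global scope, outside main()

// ... then, at the end of main():
fragColor = clrFinal;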
I want to implement the following blend function in my program which isn't using OpenGL.
glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA);
In my OpenGL real-time test application I was able to blend colors with this function on a white background. The blended result should look like http://postimg.org/image/lwr9ossen/.
I have a white background and want to blend red points over it. A high density of red points should become opaque / black.
glClearColor(1.0f, 1.0f, 1.0f, 0.0f);

for (many many many times) {
    glColor4f(0.75, 0.0, 0.1, 0.85f);
    DrawPoint(..);
}
I tried something, but I had no success.
Does anyone have the equation for this blend function?
The blend function should translate directly to the operations you need if you want to implement the whole thing in your own code.
The first argument specifies a scaling factor for your source color, i.e. the color you're drawing the pixel with.
The second argument specifies a scaling factor for your destination color, i.e. the current color value at the pixel position in the output image.
These two terms are then added, and the result written to the output image.
GL_DST_COLOR corresponds to the color in the destination, which is the output image.
GL_ONE_MINUS_SRC_ALPHA is 1.0 minus the alpha component of the pixel you are rendering.
Putting this all together, with (colR, colG, colB, colA) the color of the pixel you are rendering, and (imgR, imgG, imgB) the current color in the output image at the pixel position:
GL_DST_COLOR = (imgR, imgG, imgB)
GL_ONE_MINUS_SRC_ALPHA = (1.0 - colA, 1.0 - colA, 1.0 - colA)
GL_DST_COLOR * (colR, colG, colB) + GL_ONE_MINUS_SRC_ALPHA * (imgR, imgG, imgB)
= (imgR, imgG, imgB) * (colR, colG, colB) +
(1.0 - colA, 1.0 - colA, 1.0 - colA) * (imgR, imgG, imgB)
= (imgR * colR + (1.0 - colA) * imgR,
imgG * colG + (1.0 - colA) * imgG,
imgB * colB + (1.0 - colA) * imgB)
This is the color you write to your image as the result of rendering the pixel.
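As a compact sketch of the same math (written in GLSL vector notation purely for illustration; blendPixel is a made-up name, src is the colour being drawn and dst is the current image colour):

// Software equivalent of glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA):
// out = src * dst + (1 - src.a) * dst, applied per colour channel.
vec3 blendPixel(vec4 src, vec3 dst)
{
    return src.rgb * dst + (1.0 - src.a) * dst;
}

Note that this factors to dst * (src.rgb + 1.0 - src.a), so each blended point darkens the background whenever a channel of src.rgb + 1.0 - src.a is less than 1, which is why a high density of points converges towards black.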
This question is related to Repeating OpenGL-es texture bound to hills in cocos2d 2.0
After reading the answers posted in the above post, I've used the following code for computing the vertices and texture coordinates:
CGPoint pt0, pt1;
float ymid = (p0.y + p1.y) / 2;
float ampl = (p0.y - p1.y) / 2;
pt0 = p0;
float U_Off = floor(pt0.x / 512);
for (int j = 1; j < _segments + 1; j++)
{
    pt1.x = p0.x + j * _dx;
    pt1.y = ymid + ampl * cosf(_da * j);
    float xTex0 = pt0.x / 512 - U_Off;
    _vertices[vertices++] = CGPointMake(pt0.x, 0);
    _vertices[vertices++] = CGPointMake(pt0.x, pt0.y);
    _texCoords[texCoords++] = CGPointMake(xTex0, 1.0f);
    _texCoords[texCoords++] = CGPointMake(xTex0, 0);
    pt0 = pt1;
}
p0 = p1;
But unfortunately, I still get a tear / misalignment in my texture (circled in yellow):
I've attached dumps of the arrays of vertices and texcoords
I'm new to OpenGL and can't figure out where the miscalculation is. How do I prevent the line (circled in yellow in the image) from appearing?
EDIT: My texture is either 1024x512 or 512x512 depending on the device. I use the following texture parameters:
ccTexParams tp2 = {GL_LINEAR, GL_LINEAR, GL_REPEAT, GL_CLAMP_TO_EDGE};
Most likely the reason is non-continuous texture coordinates. In the texcoords dump you have the following consecutive coordinates:
(CGPoint) 0x34b0b28 = (x=1.00390625, y=0)
(CGPoint) 0x34b0b30 = (x=0.005859375, y=1)
It means that between these two points the texture is mapped from 1 back to 0 (in the reverse direction). You should continue the texcoords past 1 instead: 1.00390625 => 1.005859375 => ... In your loop that means dropping the U_Off subtraction so that xTex0 = pt0.x / 512 grows monotonically. Also, your texture must have a power-of-two size and must be set up with REPEAT wrap mode.
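With REPEAT wrapping the sampler effectively applies fract() to the coordinate, which is why coordinates beyond 1 are safe. For instance (tex is a placeholder sampler name):

// With GL_REPEAT set on tex, these two lookups return the same texel:
vec4 a = texture2D(tex, vec2(1.00390625, 0.5));
vec4 b = texture2D(tex, vec2(0.00390625, 0.5));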
If your texture is in an atlas and you cannot set REPEAT mode, you may try to clamp the texcoords to the [0; 1] range and place two edge points, one with x=1 and one with x=0, at the same position.

And lastly, if your texture doesn't change along the x-axis, you may simply set x = 0.5 for all points.
I need to write a shader where the color of a pixel is black when the following equation is true:
(x-coordinate of pixel) mod 2 == 1
If it is false, the pixel should be white. I searched the web for a solution, but what I found did not work.
More information:
I have an OpenGL scene at 800 x 600 resolution with the teapot in it. The teapot is red. Now I need to create that zebra look.
Here is some code I wrote, but it didn't work:
FragmentShader:
void main(){
    if (mod(gl_FragCoord.x * 800.0, 2.0) == 0) {
        gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
    } else {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    }
}
VertexShader:
void main(void)
{
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
As far as I know, gl_FragCoord.x is in the range (0, 1), therefore I need to multiply it by the width.
Interesting that you mention the need to multiply by the width; have you tried without the * 800.0 in there? The range of gl_FragCoord is such that the distance between adjacent pixels is 1.0, for example [0.0, 800.0] or possibly [0.5, 800.5].
Remove the width multiplication and see if it works.
Instead of comparing directly to 0, try doing a test against 1.0, e.g.
void main(){
    if (mod(gl_FragCoord.x, 2.0) >= 1.0) {
        gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
    } else {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    }
}
That'll avoid precision errors and the cost of rounding.
As emackey points out, gl_FragCoord is specified in window coordinates, which:
... result from scaling and translating Normalized Device Coordinates by the viewport. The parameters to glViewport() and glDepthRange() control this transformation. With the viewport, you can map the Normalized Device Coordinate cube to any location in your window and depth buffer.
So you also don't actually want to multiply by 800 — the incoming coordinates are already in pixels.