I have been writing my own SSAA fragment shader; so far I am sampling 4 points: top left, top right, bottom left and bottom right.
I am rendering to a small window. I have noticed that when I fullscreen this window, I get bars on all sides of the screen that repeat what should be on the opposite side of the screen.
My shader currently checks whether we are looking at an edge pixel like this:
vec4 TL; // TL = TopLeft
if (texCoord.s <= 1.0 || texCoord.s >= textureWidth - 1.0)
{
    // don't offset by -x; instead use the current x and only offset y
    TL = texture2D(texture, vec2(texCoord.s, texCoord.t + y));
}
else
{
    TL = texture2D(texture, vec2(texCoord.s - x, texCoord.t + y));
}
Texture coordinates range from 0 to 1 on both axes over the whole texture. Therefore, texture coordinate (0, 0) refers to the bottom-left corner (i.e. not the center) of the bottom-left texel.
Illustrated:
d_x = 1 / textureWidth
d_y = 1 / textureHeight
+-----+-----+-----+
| | | |
| | | |
| | | |
(-d_x,d_y) +-----+-----+-----+ (2d_x, d_y)
| | | |
| | A | |
| | | |
+-----+-----+-----+
(-d_x,0) (0,0) (d_x,0) (2d_x, 0)
A: (d_x/2, d_y/2) <= center of the bottom-left texel
If you sample at point A, linear texture filtering will produce the value of the bottom-left texel exactly. If you sample at (0, 0), however, linear filtering will give the average of the four corner texels of the texture under the default GL_REPEAT wrap mode, which is what produces the wrapped-around bars you are seeing.
This behavior can be controlled by configuring GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T with glTexParameteri. In your case, GL_CLAMP_TO_EDGE is probably the right mode to use.
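Once clamping is set, the edge special-casing can be dropped entirely. Here is a minimal sketch of the shader under that assumption (uniform names follow the question; textureHeight is assumed analogous to textureWidth, and GL_CLAMP_TO_EDGE is assumed to be set on the texture object):
uniform sampler2D texture;
uniform float textureWidth;  // texture size in pixels
uniform float textureHeight;
varying vec2 texCoord;

void main()
{
    // offset of one texel in normalized texture coordinates
    vec2 texel = vec2(1.0 / textureWidth, 1.0 / textureHeight);
    // sample the four diagonal neighbours; at the borders the samples
    // clamp to the edge instead of wrapping to the other side
    vec4 TL = texture2D(texture, texCoord + vec2(-texel.x,  texel.y));
    vec4 TR = texture2D(texture, texCoord + vec2( texel.x,  texel.y));
    vec4 BL = texture2D(texture, texCoord + vec2(-texel.x, -texel.y));
    vec4 BR = texture2D(texture, texCoord + vec2( texel.x, -texel.y));
    gl_FragColor = 0.25 * (TL + TR + BL + BR);
}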
I am struggling with OpenGL lighting. I have the following enabled:
Specular[0] = 1f;
Specular[1] = 1f;
Specular[2] = 1f;
Specular[3] = 1f;
Gl.glMaterialfv(Gl.GL_FRONT_AND_BACK, Gl.GL_SHININESS, new float[] { 70 });
Gl.glMaterialfv(Gl.GL_FRONT_AND_BACK, Gl.GL_SPECULAR, Specular);
Gl.glLightfv(Gl.GL_LIGHT0, Gl.GL_POSITION, LightDef.LightPosToArray);
Gl.glLightfv(Gl.GL_LIGHT0, Gl.GL_AMBIENT, LightDef.AmbientToArray);
Gl.glLightfv(Gl.GL_LIGHT0, Gl.GL_DIFFUSE, LightDef.DiffuseToArray);
Gl.glEnable(Gl.GL_LIGHT0);
Gl.glEnable(Gl.GL_LIGHTING);
Gl.glShadeModel(Gl.GL_SMOOTH);
When I draw rectangles adjacent to each other and they have the same size, everything looks fine. When I draw them at different sizes, the lighting changes on each one. How can I make them look seamless?
Gl.glNormal3f(0, 0, 1);
Gl.glRectf(0, 0, 1000, 500);
Gl.glRectf(0, 500, 500, 1000);
Gl.glRectf(500, 500, 700, 700);
The legacy OpenGL light model uses Gouraud shading: the light is computed at the vertices and interpolated across the fragments covered by the polygons.
The specular light depends on the location or direction of the light source, the normal vector of the surface, and the viewing direction. Since the direction of view is different for each vertex, the lighting differs across the surfaces.
In contrast to the specular light, the diffuse light does not depend on the viewing direction; it depends only on the location or direction of the light source and the normal vector of the surface. If the normal vectors of the surfaces are equal and the light source is a directional light, then the light is the same across all the surfaces.
Set the specular light to 0:
Specular[0] = 0f;
Specular[1] = 0f;
Specular[2] = 0f;
Specular[3] = 0f;
and ensure that the light source is a directional light (the 4th component of LightDef.LightPosToArray has to be 0).
Another option would be to tessellate the larger surfaces in such a way that they share the vertices with the smaller ones, e.g.:
+---+---+
| | |
+---+---+---+
| | | |
+---+---+---+---+
| | | | |
+---+---+---+---+
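A third option is to compute the lighting per fragment in a shader instead of relying on the fixed-function pipeline, which removes the dependence on tessellation entirely. A minimal per-fragment (Phong) sketch, assuming a normalized directional light direction lightDir and an eye-space normal and position passed in from the vertex shader:
// Because the light is evaluated per fragment instead of per vertex,
// the result no longer depends on how the rectangles are tessellated.
varying vec3 normal;   // eye-space normal from the vertex shader
varying vec3 eyePos;   // eye-space position from the vertex shader
uniform vec3 lightDir; // normalized direction towards the light

void main()
{
    vec3 N = normalize(normal);
    vec3 V = normalize(-eyePos);                 // view direction in eye space
    vec3 R = reflect(-lightDir, N);              // reflected light direction
    float diff = max(dot(N, lightDir), 0.0);
    float spec = pow(max(dot(R, V), 0.0), 70.0); // shininess of 70, as in the question
    vec3 color = vec3(0.1) + diff * vec3(0.8) + spec * vec3(1.0);
    gl_FragColor = vec4(color, 1.0);
}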
See also GLSL fixed function fragment program replacement
In a vertex shader I give gl_PointSize a value bigger than 1, say 15.
In the fragment shader I would like to pick a pixel inside that 15x15 square:
vec2 sprite = gl_PointCoord;
if (sprite.s == (9. )/15.0 ) discard ;
gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
But that does not work when the size is not a power of 2.
(If the size is 16, then (sprite.s == a/16.0) with a in 1..16 works perfectly!)
Is there a way to achieve my purpose when the size is not a power of 2?
Edit: I know the solution with a texture of size PointSize * PointSize:
gl_FragColor = texture2D(tex, gl_PointCoord);
but that does not fit dynamic size changes.
Edit, 26 July:
First, I do not understand why it is easier to read from a float texture using WebGL2 rather than WebGL1. For my part I use ext = gl.getExtension("OES_texture_float"); and gl.readPixels uses the same syntax.
Then, I certainly did not understand everything, but I tried the solution s = 0.25 and s = 0.75 for a correctly centered 2x2 point, and that does not seem to work.
On the other hand, the values 0.5 and 1.0 give me a correct display (see fiddle 1).
(fiddle 1) https://jsfiddle.net/3u26rpf0/274/
In fact, to accurately display a point of any size (say SIZE) I use the following formula:
float size = 13.0;
float nby = floor(size / 2.0);
float nbx = floor((size - 1.0) / 2.0);
//
// <nby> pixels CENTER <nbx> pixels
//
// if size is odd,  nbx == nby
// if size is even, nby == nbx + 1
vec2 off = 2.0 * vec2(nbx, nby) / canvasSize;
vec2 p = -1.0 + (2.0 * (a_position.xy * size) + 1.0) / canvasSize + off;
gl_Position = vec4(p, 0.0, 1.0);
gl_PointSize = size;
https://jsfiddle.net/3u26rpf0/275/
Checking for exact values with floating point numbers is not generally a good idea. Check for a range instead:
sprite.s > ??? && sprite.s < ???
Or better yet, consider using a mask texture or something more flexible than a hard-coded if statement.
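For example, a sketch of the range test for the 15x15 case from the question (discarding every fragment whose center falls in column 9, counting from 0):
// the fragment centers in column 9 have gl_PointCoord.s == 9.5 / 15.0,
// which lies strictly between 9/15 and 10/15, so a range test catches
// them reliably regardless of floating point rounding
float size = 15.0; // must match the gl_PointSize set in the vertex shader
float s = gl_PointCoord.s;
if (s > 9.0 / size && s < 10.0 / size) discard;
gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);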
Otherwise: in WebGL, pixels are referred to by their centers. So, if you draw a 2x2 point on a pixel boundary, these should be the .s values for gl_PointCoord:
+-----+-----+
| .25 | .75 |
| | |
+-----+-----+
| .25 | .75 |
| | |
+-----+-----+
If you draw it off a pixel boundary, then it depends:
++=====++=====++======++
|| || || ||
|| +------+------+ ||
|| | | | ||
++==| | |===++
|| | | | ||
|| +------+------+ ||
|| | | | ||
++==| | |===++
|| | | | ||
|| +------+------+ ||
|| || || ||
++=====++=====++======++
It will still only draw 4 pixels (the 4 that are closest to where the point lies), but it will compute different gl_PointCoord values, as though it could draw on fractional pixels. If we offset gl_Position so our point is over by .25 pixels, it still draws the exact same 4 pixels as when pixel-aligned, since an offset of .25 is not enough to move it onto a different set of 4 pixels. We can therefore guess it's going to offset gl_PointCoord by the equivalent of -.25 pixels; for a 2x2 point that's an offset of .125 in gl_PointCoord units, so (.25 - .125) = .125 and (.75 - .125) = .625.
We can test what WebGL is using by writing gl_PointCoord into a floating point texture, using WebGL2 (since it's easier to read the float pixels back in WebGL2):
function main() {
  const gl = document.createElement("canvas").getContext("webgl2");
  if (!gl) {
    return alert("need WebGL2");
  }
  const ext = gl.getExtension("EXT_color_buffer_float");
  if (!ext) {
    return alert("need EXT_color_buffer_float");
  }
  const vs = `
  uniform vec4 position;
  void main() {
    gl_PointSize = 2.0;
    gl_Position = position;
  }
  `;
  const fs = `
  precision mediump float;
  void main() {
    gl_FragColor = vec4(gl_PointCoord.xy, 0, 1);
  }
  `;
  const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
  const width = 2;
  const height = 2;
  // creates a 2x2 float texture and attaches it to a framebuffer
  const fbi = twgl.createFramebufferInfo(gl, [
    { internalFormat: gl.RGBA32F, minMag: gl.NEAREST, },
  ], width, height);
  // binds the framebuffer and sets the viewport
  twgl.bindFramebufferInfo(gl, fbi);
  gl.useProgram(programInfo.program);
  test([0, 0, 0, 1]);
  test([.25, .25, 0, 1]);

  function test(position) {
    twgl.setUniforms(programInfo, {position});
    gl.drawArrays(gl.POINTS, 0, 1);
    const pixels = new Float32Array(width * height * 4);
    gl.readPixels(0, 0, 2, 2, gl.RGBA, gl.FLOAT, pixels);
    console.log('gl_PointCoord.s at position:', position.join(', '));
    for (let y = 0; y < height; ++y) {
      const s = [];
      for (let x = 0; x < width; ++x) {
        s.push(pixels[(y * width + x) * 4]);
      }
      console.log(`y${y}:`, s.join(', '));
    }
  }
}
main();
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
The formula for what gl_PointCoord will be is in the spec, section 3.3.
So, following that, here is a 2-pixel-wide point drawn .25 pixels off of a pixel boundary, i.e. a 2x2 point drawn at .25,.25 (slightly off center):
// first pixel
// this value is constant for all pixels. It is the unmodified
// **WINDOW** coordinate of the **vertex** (not the pixel)
xw = 1.25
// this is the integer pixel coordinate
xf = 0
// gl_PointSize
size = 2
s = 1 / 2 + (xf + 1 / 2 - xw) / size
s = .5 + (0 + .5 - 1.25) / 2
s = .5 + (-.75) / 2
s = .5 + (-.375)
s = .125
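Following the same formula for the second pixel (xf = 1):
// second pixel
xf = 1
s = 1 / 2 + (xf + 1 / 2 - xw) / size
s = .5 + (1 + .5 - 1.25) / 2
s = .5 + .125
s = .625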
which are the values I get from running the sample above.
xw is the window x coordinate for the vertex. In other words xw is based on what we set gl_Position to so
xw = (gl_Position.x / gl_Position.w * .5 + .5) * canvas.width
Or more specifically
xw = (gl_Position.x / gl_Position.w * .5 + .5) * viewportWidth + viewportX
Where viewportX and viewportWidth are set with gl.viewport(x, y, width, height) and default to the same size as the canvas.
Say I have an image of size 320x240. Now, to sample from a sampler2D with integer image coordinates ux, uy, I must normalize them from the range [0, size] (where size is the width or height) to texture coordinates in [0, 1].
Now, I wonder if I should normalize like this
texture(image, vec2(ux/320.0, uy/240.0))
or like this
texture(image, vec2(ux/319.0, uy/239.0))
Because ux = 0 ... 319 and uy = 0 ... 239, the latter will actually cover the whole range [0, 1], correct? That means 0 corresponds to, e.g., the left-most pixels and 1 corresponds to the right-most pixels, right?
Also I want to maintain filtering, so I would like to not use texelFetch.
Can anyone tell something about this? Thanks.
Texture coordinates (and pixel coordinates) go from 0 to 1 across the edges of the pixels, no matter how many pixels there are.
A 4 pixel wide texture
0 0.5 1 <- texture coordinate
| | |
V V v
+-----+-----+-----+-----+
| | | | |
| | | | | <- texels
+-----+-----+-----+-----+
A 5 pixel wide texture
0 0.5 1 <- texture coordinate
| | |
V V v
+-----+-----+-----+-----+-----+
| | | | | |
| | | | | | <- texels
+-----+-----+-----+-----+-----+
A 6 pixel wide texture
0 0.5 1 <- texture coordinate
| | |
V V V
+-----+-----+-----+-----+-----+-----+
| | | | | | |
| | | | | | | <- texels
+-----+-----+-----+-----+-----+-----+
A 1 pixel wide texture
0 0.5 1 <- texture coordinate
| | |
V V V
+-----+
| |
| | <- texels
+-----+
If you use u = integerTextureCoordinate / width for each texture coordinate, you'd get these coordinates:
0 0.25 0.5 0.75 <- u = intU / width;
| | | |
V V V V
+-----+-----+-----+-----+
| | | | |
| | | | | <- texels
+-----+-----+-----+-----+
Those coordinates point directly between texels.
But the texture coordinates you want, if you want to address specific texels, are like this:
0.125 0.375 0.625 0.875
| | | |
V V V V
+-----+-----+-----+-----+
| | | | |
| | | | | <- texels
+-----+-----+-----+-----+
Which you get from
u = (integerTextureCoord + .5) / width
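In GLSL that might look like the following sketch, for the 320x240 image from the question (image, ux, uy as defined there):
vec2 texSize = vec2(320.0, 240.0);
// sample texel (ux, uy) at its center; linear filtering then returns that
// texel's exact color while still blending smoothly in between
vec4 color = texture(image, (vec2(ux, uy) + 0.5) / texSize);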
No, the first one is actually correct:
texture(image, vec2(ux/320.0, uy/240.0))
Your premise that "ux = 0 ... 319 and uy = 0 ... 239" is incorrect. If you render a 320x240 quad, say, then it is actually ux = 0 ... 320 and uy = 0 ... 240.
This is because pixels and texels are squares sampled at half-integer coordinates. For example, let's assume that you render your 320x240 texture on a 320x240 quad. Then the bottom-left pixel (0,0) will actually be sampled at screen coordinates (.5,.5). You normalize them by dividing by (320,240), but then OpenGL multiplies the normalized coordinates back by (320,240) to get the actual texel coordinates, so it samples (.5,.5) from the texture, which corresponds to the center of the (0,0) texel and returns its exact color.
It is important to think of pixels in OpenGL as squares, so that coordinates (0,0) correspond to the bottom-left corner of the bottom-left pixel and the non-normalized (w,h) corresponds to the top-right corner of the top-right pixel (for a texture of size (w,h)).
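A sketch of that round trip for the bottom-left pixel, assuming the texture is rendered 1:1 onto a 320x240 quad:
// gl_FragCoord.xy for the bottom-left pixel (0,0) is (0.5, 0.5)
vec2 uv = gl_FragCoord.xy / vec2(320.0, 240.0); // -> (0.5/320, 0.5/240)
// sampling scales back by the texture size, landing exactly on the center
// of texel (0,0), so linear filtering returns its exact color
vec4 c = texture(image, uv);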
I'll try to explain my problem with images. So this is a test texture I'm using for my OpenGL application:
As you can see, there is a 2-pixel-wide border around the image with different colors, so that I can see whether the coordinates are properly set in my application.
I'm using a 9-cell pattern, so I'm drawing 9 quads with specific texture coordinates. At first sight everything works fine, but there is a small problem with how the texture is displayed:
In the picture I marked where the first quad is and where the second one is. As you can see, the first one is displayed correctly, but the second one smoothly fades from the first quad's colors into its own, whereas it should start with pure green and pink. So I'm guessing it's a problem with the texture coordinates.
This is how they are set:
// Bottom left quad [1st quad]
glBegin(GL_QUADS);
// Bottom left
glTexCoord2f(0.0f, 1.0);
glVertex2i(pos.x, pos.y + height);
// Top left
glTexCoord2f(0.0f, (GLfloat)1.0 - maxTexCoordBorderY);
glVertex2i(pos.x, pos.y + height - m_borderWidth);
// Top right
glTexCoord2f(maxTexCoordBorderX, (GLfloat)1.0 - maxTexCoordBorderY);
glVertex2i(pos.x + m_borderWidth, pos.y + height - m_borderWidth);
// Bottom right
glTexCoord2f(maxTexCoordBorderX, 1.0);
glVertex2i(pos.x + m_borderWidth, pos.y + height);
glEnd();
// Bottom middle quad [2nd quad]
glBegin(GL_QUADS);
// Bottom left
glTexCoord2f(maxTexCoordBorderX, 1.0);
glVertex2i(pos.x + m_borderWidth, pos.y + height);
// Top left
glTexCoord2f(maxTexCoordBorderX, (GLfloat)1.0 - maxTexCoordBorderY);
glVertex2i(pos.x + m_borderWidth, pos.y + height - m_borderWidth);
// Top right
glTexCoord2f((GLfloat)1.0 - maxTexCoordBorderX, (GLfloat)1.0 - maxTexCoordBorderY);
glVertex2i(pos.x + width - m_borderWidth, pos.y + height - m_borderWidth);
// Bottom right
glTexCoord2f((GLfloat)1.0 - maxTexCoordBorderX, 1.0);
glVertex2i(pos.x + width - m_borderWidth, pos.y + height);
glEnd();
You can see that I'm using the maxTexCoordBorderX variable, which is calculated from the border and image sizes. The image width is 32 and the border width is 2.
maxTexCoordBorderX = 2 / 32 = 0.0625
Could anyone please help me find out where the problem is?
The most likely culprit is that you are not sampling on the texel centers. For example, if you have a 32x32 pixel texture, the texel centers are offset by 1/64.
Here's a rough diagram of a 4x4 texture. The squares are the texels (or pixels) of the image.
_________________1,1
| | | | |
| | | | |
|___|___|___|___|_0.75
| | | | |
| | | | |
|___|___|___|___|_0.5
| | | | |
| | | | |
|___|___|___|___|_0.25
| | | | |
| X | | | |
|___|___|___|___|
0,0 | 0.5 | 1
0.25 0.75
x = (0.125, 0.125)
If you sample on one of the lines, you will get a value exactly between two texels, which (if texture sampling is set to linear) will give you an average value. If you want to sample an exact texel value, you need to specify a u,v coordinate at the center of that texel.
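A minimal GLSL sketch of that rule (the helper name texelCenter is just for illustration):
// the center of texel (i, j) in a texture of size texSize is
// ((i + 0.5) / w, (j + 0.5) / h); for the 32x32 texture mentioned above,
// texel (0, 0) is centered at (1/64, 1/64)
vec2 texelCenter(vec2 ij, vec2 texSize) {
    return (ij + 0.5) / texSize;
}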
You're running into a fencepost problem. I described the solution to this very problem here:
https://stackoverflow.com/a/5879551/524368
I am reading the depth buffer of a scene. However, as I rotate the camera I notice that towards the edges of the screen the depth is returned as closer to the camera. I think the angle of incidence has an effect on the depth buffer; however, as I am drawing a quad to the framebuffer, I do not want this to happen (this is not actually the case, of course, but it sums up what I need).
I linearize the depth with the following:
float linearize(float depth) {
  float zNear = 0.1;
  float zFar = 40.0;
  return (2.0 * zNear) / (zFar + zNear - depth * (zFar - zNear));
}
I came up with the following to correct for this, but it's not quite right yet. 45.0 is half the camera's vertical angle; side is the distance from the center of the screen.
const float angleVert = 45.0 / 180.0 * 3.14159; // 45 degrees in radians
float sideAdjust(vec2 coord, float depth) {
  float angA = cos(angleVert);
  float side = (coord.y - 0.5);
  if (side < 0.0) side = -side;
  side *= 2.0;
  float depthAdj = angA * side;
  return depth / depthAdj;
}
To illustrate my problem, here is a drawing of the depth results for a flat surface in front of the camera:
c
/ | \
/ | \
/ | \
closer further closer
This is what I have; what I need is:
c
| | |
| | |
| | |
even even even
An idea of how to do it would be to find the position P in eye space. Consider P a vector from the origin to the point. Project P onto the eye direction vector (which in eye space is always (0,0,-1)). The length of the projected vector is what you need.
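A minimal sketch of that idea, assuming invProjection is the inverse of the projection matrix used to render the scene, and coord/depth are the screen coordinate and depth-buffer value in [0, 1]:
uniform mat4 invProjection;

// reconstruct the eye-space position from the depth buffer, then take the
// length of its projection onto the view direction; in eye space the view
// direction is (0, 0, -1), so that length is simply -P.z
float eyeDepth(vec2 coord, float depth) {
    vec4 ndc = vec4(vec3(coord, depth) * 2.0 - 1.0, 1.0); // [0,1] -> NDC
    vec4 P = invProjection * ndc;                          // NDC -> eye space
    P /= P.w;                                              // perspective divide
    return -P.z;                                           // distance along the view axis
}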