I am rendering a point-based terrain from loaded heightmap data, but the points change their texturing depending on the camera position. To demonstrate the bug (and the fact that this isn't caused by a z-buffering problem) I have taken screenshots with the points rendered at a fixed 5-pixel size from very slightly different camera positions (same angle), shown below:
PS: The images are large enough if you drag them into a new tab; I didn't realise Stack would scale them down this much.
State 1:
State 2:
The code to generate the points is relatively simple, so I'm posting it merely to rule it out - mapArray is a single-dimensional float array that gets copied to a VBO:
for(j = 0; j < mHeight; j++)
{
    for(i = 0; i < mWidth; i++)
    {
        height = bitmapImage[k];
        mapArray[k++] = 5 * i;
        mapArray[k++] = height;
        mapArray[k++] = 5 * j;
    }
}
I find it more likely that I need to adjust my fragment shader, because I'm not great with shaders - although I'm unsure where I could have gone wrong with such simple code, and I guess it's probably just not fit for purpose (with point-based rendering). Below is my fragment shader:
varying vec2 TexCoordA;
uniform sampler2D myTextureSampler;

void main(){
    gl_FragColor = texture2D(myTextureSampler, TexCoordA.st) * gl_Color;
}
Edit (requested info):
OpenGL version 4.4; no texture flags used.
TexCoordA is passed into the fragment shader directly from my vertex shader with no alterations at all. The UVs are self-calculated using this:
float* UVs = new float[mNumberPoints * 2];
k = 0;
for(j = 0; j < mHeight; j++)
{
    for(i = 0; i < mWidth; i++)
    {
        UVs[k++] = (1.0f/(float)mWidth) * i;
        UVs[k++] = (1.0f/(float)mHeight) * j;
    }
}
This looks like a side-effect of subpixel-accurate texture mapping. A texture mapping implementation has to interpolate the texture coordinates at the actual rasterized pixels (fragments). When your camera moves, the round-off error from the real position to the integer pixel position affects the texture mapping, and this behaviour is normally required for jitter-free animation (otherwise all the textures would jump by seemingly random subpixel amounts as the camera moves). There was a great tutorial on this topic by Paul Nettle.
You can try to fix this by not sampling texel corners but sampling texel centers instead (add half the size of a texel to your per-point texture coordinates).
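If the UVs are generated as in the loop above, the half-texel offset could be applied on the CPU or in the vertex shader. A minimal sketch of the vertex-shader variant (the texSize uniform and the attribute name are assumptions, not from your code):
// Hedged sketch: shift the per-point UV from the texel corner to the texel
// center before handing it to the fragment shader.
attribute vec2 uvCorner;      // the UV as produced by the C++ loop above
uniform vec2 texSize;         // texture dimensions in texels, e.g. vec2(1024.0)
varying vec2 TexCoordA;

void main()
{
    TexCoordA     = uvCorner + 0.5 / texSize;   // sample the texel center
    gl_FrontColor = gl_Color;
    gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
}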
Another thing you can try is to compensate for the subpixel-accurate rendering by calculating the difference between the rasterized integer coordinate (which you need to compute yourself in the shader) and the real position. That could be enough to make the sampled texels more stable.
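A very rough illustration of that idea (the varying names and the way the correction is applied to the UV are my assumptions; treat it as a sketch of the approach, not a drop-in fix):
// Fragment-shader sketch: gl_FragCoord.xy is always at a pixel center (*.5),
// so comparing it with the point's real projected window position gives the
// sub-pixel round-off error, which can then be used to nudge the UV.
// "pointWindowPos" would need to be computed in the vertex shader and passed
// down; "texSize" is the texture size in texels.
varying vec2 pointWindowPos;   // real (sub-pixel) window-space point center
varying vec2 TexCoordA;
uniform vec2 texSize;
uniform sampler2D myTextureSampler;

void main()
{
    vec2 subpixelError = pointWindowPos - gl_FragCoord.xy;
    vec2 correctedUV   = TexCoordA + subpixelError / texSize;
    gl_FragColor = texture2D(myTextureSampler, correctedUV) * gl_Color;
}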
Finally, size matters. If your texture is large, the errors in the interpolation of the finite-precision texture coordinates can introduce these kinds of artifacts. Why not use GL_TEXTURE_2D_ARRAY with a separate layer for each color tile? You could also clamp the S and T texcoords to the edge of the texture to avoid this more elegantly.
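For reference, sampling a texture array in the fragment shader looks roughly like this (the layer input and names are assumptions):
#version 330 core
// Hedged sketch of the GL_TEXTURE_2D_ARRAY suggestion: each tile lives in its
// own layer, so the S/T coordinates never have to cross a tile boundary.
uniform sampler2DArray tileTextures;
in vec2 TexCoordA;
flat in int tileLayer;     // assumed per-point layer selector
out vec4 fragColor;

void main()
{
    fragColor = texture(tileTextures, vec3(TexCoordA, float(tileLayer)));
}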
Just a guess: how are your point rendering parameters set? Perhaps the distance attenuation (GL_POINT_DISTANCE_ATTENUATION), along with GL_POINT_SIZE_MIN and GL_POINT_SIZE_MAX, is causing different fragment sizes depending on camera position. On the other hand, I seem to remember that when using a vertex shader this functionality is disabled and the vertex shader must decide the size itself. I did it once using:
//point size calculation based on z-value as done by distance attenuation
float psFactor = sqrt( 1.0 / (pointParam[0] + pointParam[1] * camDist + pointParam[2] * camDist * camDist) );
gl_PointSize = pointParam[3] * psFactor;
where pointParam holds the three coefficients and the min point size:
uniform vec4 pointParam; // parameters for point size calculation [a b c min]
You may also play around by setting your point size in the vertex shader directly with gl_PointSize = [value].
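For example, a minimal vertex shader with a hard-coded size could be (a sketch; it ignores any other per-vertex outputs you may need):
// Minimal sketch: skip the attenuation formula above and give every point a
// fixed 5-pixel size (matching the screenshots). In a core profile this also
// needs glEnable(GL_PROGRAM_POINT_SIZE) on the host side.
void main()
{
    gl_FrontColor = gl_Color;
    gl_PointSize  = 5.0;
    gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
}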
Related
I've written a GLSL shader to emulate a vintage arcade game's indexed-color, tile-based graphics. I made a couple of shaders: one that does this with point sprites, and another using polygons. The point sprite shader converts gl_PointCoord to a pixel coordinate within each tile like so:
vec2 pixelFloat = gl_PointCoord * tileSizeInPixels;
ivec2 pixel = ivec2(int(pixelFloat.x), int(pixelFloat.y));
// pixel is now used in conjunction with a tile 'ID' uniform
// to locate indexed colors with a texture lookup from a
// large texture representing the game's ROM, with GL_NEAREST filtering.
// very clever 😋
The polygon shader instead uses an attribute buffer to pass pixel coordinates (which range {0.0 … 32.0} for a 32-pixel square tile, for example). After conversion to int, each fragment within the tile sees pixel coordinate values ranging x {0 … 31}, y {0 … 31} - except:
This worked fine, apart from artefacts sometimes showing at the edge of the tile with the higher-numbered pixel coordinate at certain resolutions. I guessed that this was due to a fragment landing at just the right location to be right on the maximum value of either gl_PointCoord or the vertex attribute value of 32.0, causing that fragment to sample the wrong tile.
These artefacts went away when I clamped the pixel ivec like this:
vec2 pixelFloat = gl_PointCoord * tileSizeInPixels;
ivec2 pixel = ivec2(
    min(int(pixelFloat.x), tileSizeInPixels - 1),
    min(int(pixelFloat.y), tileSizeInPixels - 1));
which solved the problem and didn't introduce any new artefacts.
My question is: Is there some way of controlling the interpolation of gl_PointCoord or my pixel coordinate attribute such that we can guarantee the interpolated value will range
minimum value <= interpolated value < maximum value
as opposed to
minimum value <= interpolated value <= maximum value
Is there some way I can avoid using min() here?
NB: GL_CLAMP_* is not an option here, as the pixel coordinate is used to look up the pixel's index color from a much larger texture, which is essentially the game's sprite ROM loaded into a single large texture buffer.
I have a quad of unit size rendered with a seamless texture and with texture repetition enabled.
I derive the texture coordinates from the position.
So I can artificially raise the texture resolution by multiplying the texture coordinates by a factor of N, which will make the texture repeat over the quad N*N times.
In my code the texture is a normal map, and the fragment shader performs a specular lighting calculation with it. I noticed that the more I scale up the texture coordinates, the more my performance drops; however, the drop seems to plateau after a certain value (no difference between N = 1024 and N = 10240, for example).
What I do not understand is this: the quad size stays the same and the texture size is the same, so why does multiplying the texture coordinates cost me performance over the same number of fragments?
No mipmapping; I use GL_LINEAR for both the min and mag filters.
When scale increases, adjacent pixels in your fragment shader correspond to texels in your texture that are far apart. With GL_LINEAR, this means that the texels are not only far apart in the texture, but they are also far apart in memory.
With scale closer to 1:1, adjacent pixels in your fragment shader will be taken from texels that are also close together. This means they will be close together in memory, which means better memory locality. This requires fewer fetches from memory.
Mipmapped textures do not have this problem, and they often look better too because they don't have the aliasing problems you see with GL_LINEAR minification.
Simulating it on the CPU
The CPU has the same problem with memory fetches.
float sum(float *arr, int size, int stride, int count) {
    int pos = 0;
    float sum = 0.0f;
    for (int i = 0; i < count; i++) {
        pos = (pos + stride) % size;
        sum += arr[pos];
    }
    return sum;
}
As stride increases, the performance gets worse. This happens for the same reason that your fragment shader performance gets worse, even though it's happening on the CPU.
I want to build a signed distance representation. For that I am creating a voxel grid of 100x100x100, for example (the size will increase once it is working).
My plan is to load a point cloud into a 1D texture:
glEnable(GL_TEXTURE_1D);
glGenTextures(1, &_texture);
glBindTexture(GL_TEXTURE_1D, _texture);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA, pc->pc.size(), 0, GL_RGBA, GL_FLOAT, &pc->pc.front());
glBindTexture(GL_TEXTURE_1D, 0);
'pc' is just a class which holds a vector of a struct Point, which has only floats x, y, z, w.
Then I want to render the whole 100x100x100 grid, i.e. each voxel, and iterate through all the points in that texture, calculate the distance from each point to the current voxel, and store that distance in a new texture (1000x1000). For the moment the texture I am creating only holds color values which store the distance in the red and green components, with blue set to 1.0.
So I can see the result on screen.
My problem now is that when I have about 500,000 points in my point cloud, it seems to stop rendering after a few voxels (fewer than 50,000). My guess is that if it takes too long, it stops and just throws out the buffer it has.
I don't know if that can be the case, but if it is, is there something I can do against it, or maybe something I can do to make this procedure better/faster?
My second guess is that there is something about the 1D texture I am not considering. Is there a better way to pass in a large amount of data? I will surely need data for a few hundred thousand points.
I don't know if it helps to show the full fragment shader, so I will only show the parts which I think are important for this problem:
Distance calculation and iteration through all points:
for(int i = 0; i < points; ++i){
    vec4 texInfo = texture(textureImg, getTextCoord(i));
    vec4 pos = position;
    pos.z /= rows*rows;
    vec4 distVector = texInfo - pos;
    float dist = sqrt(distVector.x*distVector.x + distVector.y*distVector.y + distVector.z*distVector.z);
    if(dist < minDist){
        minDist = dist;
    }
}
Function getTextCoord:
float getTextCoord(float a)
{
    return (a * 2.0f + 1.0f) / (2.0f * points);
}
Edit:
vec4 newPos = vec4(makeCoord(position.x + Col()) - 1,
                   makeCoord(position.y + Row()) - 1,
                   0,
                   1.0);

float makeCoord(float a){
    return (a/rows) * 2;
}

int Col(){
    float a = mod(position.z, rows);
    return int(a);
}

int Row()
{
    float a = position.z / rows;
    return int(a);
}
You absolutely shouldn't be looping through all of your points in a fragment shader, as the loop gets executed N times per frame (where N equals the number of pixels), which effectively gives you O(N²) computational complexity.
All textures have limits on how much data they can hold per dimension. The two most important values here are GL_MAX_TEXTURE_SIZE and GL_MAX_3D_TEXTURE_SIZE. As stated in the official docs:
Texture sizes have a limit based on the GL implementation. For 1D and 2D textures (and any texture types that use similar dimensionality, like cubemaps) the max size of either dimension is GL_MAX_TEXTURE_SIZE. For array textures, the maximum array length is GL_MAX_ARRAY_TEXTURE_LAYERS. For 3D textures, no dimension can be greater than GL_MAX_3D_TEXTURE_SIZE in size.
Within these limits, the size of a texture can be any value. It is advised however, that you stick to powers-of-two for texture sizes, unless you have a significant need to use arbitrary sizes.
The most typical values are listed here and here.
If you really have to use large amounts of data inside your fragment shader, consider a 2D or 3D texture with known power-of-two dimensions and GL_NEAREST / GL_REPEAT sampling. This will enable you to compute 2D texture coords just by multiplying the source offset by a precomputed 1/width value (the Y coord; the remainder is by definition smaller than 1 texel and can be safely ignored in the presence of GL_NEAREST) and using it as-is for the X coord (GL_REPEAT guarantees that only the remainder gets used). Personally, I implemented this approach when I needed to pass 128 MB of data to a GLSL 1.20 shader.
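A rough sketch of that indexing scheme in GLSL (all uniform and function names here are my own, and the texel-center offsets are an assumption about how you'd lay the data out):
// Hedged sketch: fetch element "index" from a data texture that is texWidth
// texels wide (a power of two), packed row by row, sampled with GL_NEAREST
// filtering and GL_REPEAT wrapping. "index" must stay below 2^24 to be
// representable exactly as a float.
uniform sampler2D dataTex;
uniform float invTexWidth;     // precomputed 1.0 / texture width
uniform float invTexHeight;    // precomputed 1.0 / texture height

vec4 fetchData(float index)
{
    // S: GL_REPEAT discards the integer part, leaving the column.
    float s = (index + 0.5) * invTexWidth;
    // T: the row, at its texel center; sub-texel error is absorbed by GL_NEAREST.
    float t = (floor(index * invTexWidth) + 0.5) * invTexHeight;
    return texture2D(dataTex, vec2(s, t));
}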
If you are targeting a recent enough OpenGL (≥ 3.0), you can also use buffer textures.
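On the shader side, a buffer texture lookup could look like this (a sketch; the uniform name and GLSL version are assumptions):
#version 140
// Hedged sketch: the point cloud uploaded as a GL_TEXTURE_BUFFER with an
// RGBA32F format and fetched by integer index. There is no per-dimension
// size limit to worry about and no normalized texture coordinates involved.
uniform samplerBuffer pointCloud;

vec4 fetchPoint(int i)
{
    return texelFetch(pointCloud, i);   // the i-th point as (x, y, z, w)
}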
And last, but not least: you cannot pass integer-precision values greater than 2^24 through standard IEEE single-precision floats.
Here's my situation: I need to draw a rectangle on the screen for my game's GUI. I don't really care how big this rectangle is or might be; I want to be able to handle any situation. How I'm doing it right now is that I store a single VAO that contains only a very basic quad, and then I re-draw this quad using uniforms to modify its size and position on the screen each time.
The VAO contains 4 vec4 vertices:
0, 0, 0, 0;
1, 0, 1, 0;
0, 1, 0, 1;
1, 1, 1, 1;
And then I draw it as a GL_TRIANGLE_STRIP. The XY of each vertex is its position, and the ZW is its texture co-ordinates*. I pass in the rect for the GUI element I'm currently drawing as a uniform vec4, which offsets the vertex positions in the vertex shader like so:
vertex.xy *= guiRect.zw;
vertex.xy += guiRect.xy;
And then I convert the vertex from screen pixel co-ordinates into OpenGL NDC co-ordinates:
gl_Position = vec4(((vertex.xy / screenSize) * 2) -1, 0, 1);
This changes the range from [0, screenWidth | screenHeight] to [-1, 1].
My problem comes in when I want to do texture wrapping. Simply passing vTexCoord = vertex.zw; is fine when I want to stretch a texture, but not for wrapping. Ideally, I want to modify the texture co-ordinates such that 1 pixel on the screen is equal to 1 texel in the GUI texture. Texture co-ordinates going beyond [0, 1] are fine at this stage, and are in fact exactly what I'm looking for.
I plan to implement texture atlases for my GUI textures, but managing the offsets and bounds of the appropriate sub-texture will be handled in the fragment shader - as far as the vertex shader is concerned, our quad is using one solid texture with [0, 1] co-ordinates, and wrapping accordingly.
*Note: I'm aware that this particular vertex format isn't necessarily useful for this particular case; I could be using vec2 vertices instead. For the sake of convenience I'm using the same vertex format for all of my 2D rendering, and other objects (e.g. text) actually do need those ZW components. I might change this in the future.
TL;DR: Given the size of the screen, the size of a texture, and the location/size of a quad, how do you calculate texture co-ordinates in a vertex shader such that pixels and texels have a 1:1 correspondence, with wrapping?
That is really very easy math: you just need to relate the two spaces in some way. And you have already formulated a rule which allows you to do so: a window-space pixel should map to a texel.
Let's assume we have both vec2 screenSize and vec2 texSize which are the unnormalized dimensions in pixels/texels.
I'm not 100% sure what exactly you want to achieve. There is something missing: you actually did not specify where the origin of the texture should lie. Should it always be at the bottom-left corner of the quad? Or should it just be globally at the bottom-left corner of the viewport? I'll assume the latter here, but it should be easy to adjust this for the first case.
What we now need is a mapping from the [-1,1]² NDC in x and y to s and t. Let's first map it to [0,1]². If we have that, we can simply multiply the coords by screenSize/texSize to get the desired effect. So in the end, you get:
vec2 texcoords = ((gl_Position.xy * 0.5) + 0.5) * screenSize/texSize;
Of course, you have already implicitly calculated ((gl_Position.xy * 0.5) + 0.5) * screenSize, so this can be simplified to:
vec2 texcoords = vertex.xy / texSize;
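Putting the pieces together, a vertex shader along these lines should do (a sketch built from the snippets above; the texture origin is assumed to sit at the bottom-left of the viewport, and names other than guiRect, screenSize and texSize are my own):
#version 330 core
// Sketch: place the unit quad in pixel space, convert to NDC, and derive
// texture coordinates so that one screen pixel maps to one texel.
layout(location = 0) in vec4 vertex;   // xy = quad corner in [0,1], zw unused here
uniform vec4 guiRect;                  // xy = position in pixels, zw = size in pixels
uniform vec2 screenSize;               // viewport size in pixels
uniform vec2 texSize;                  // texture size in texels
out vec2 vTexCoord;

void main()
{
    vec2 pixelPos = vertex.xy * guiRect.zw + guiRect.xy;          // screen pixels
    gl_Position   = vec4((pixelPos / screenSize) * 2.0 - 1.0, 0.0, 1.0);
    vTexCoord     = pixelPos / texSize;                           // 1 pixel == 1 texel
}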
Is there any way to vary the point size when drawing lots of points? I know there's the glPointSize(float), but is there a way to do it in a 'batch' or array?
I would like the points to have different sizes based on an attribute of the data, such as each point having x, y, z, and a size attribute. I'm using frame buffers right now, in Java.
Could I possibly use vertex shaders for this?
You can use program-controlled point sizes: enable them using glEnable(GL_VERTEX_PROGRAM_POINT_SIZE); and then you can write to the gl_PointSize built-in in your vertex program.
Vertex shader example taken from an OpenGL discussion thread:
void main() {
gl_FrontColor=gl_Color;
gl_PointSize = gl_Normal.x;
gl_Position = ftransform();
}
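In modern GLSL the same idea can use an explicit per-point size attribute instead of abusing gl_Normal (a sketch; the attribute layout and the mvp uniform are assumptions):
#version 330 core
// Sketch: one size value per point, streamed as a regular vertex attribute.
// Remember to glEnable(GL_PROGRAM_POINT_SIZE) on the host side.
layout(location = 0) in vec3 position;
layout(location = 1) in float pointSize;
uniform mat4 mvp;                        // assumed model-view-projection matrix

void main()
{
    gl_PointSize = pointSize;            // size in pixels for this point
    gl_Position  = mvp * vec4(position, 1.0);
}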
This is what I did:
//reset
glLoadIdentity();

//set size to 1 for a group of points
glPointSize(1);

//group #1 starts here
glBegin(GL_POINTS);
//color of group #1 is white
glColor3f(1, 1, 1);
for(int a = 0; a < x; a++)
    for(int b = 0; b < y; b++)
        glVertex3f(a/953., -b/413., 0.); //location of points
glEnd();

//set size to 5 for another group of points
glPointSize(5);

//group #2 starts here
glBegin(GL_POINTS);
//color of group #2 is red
glColor3f(1, 0, 0);
for(unsigned long int a = 0; a < jd; a++)
{
    glVertex3f(data[a].i, data[a].j, 0);
}
glEnd();
I believe it's done with glPointSize(GLfloat size).
Source:
http://www.talisman.org/opengl-1.1/Reference/glPointSize.html
With GL_POINT_DISTANCE_ATTENUATION (OpenGL 1.4?) you can set the coefficients of a function (I think it's quadratic) for computing the point size from the Z distance.
Increasing and decreasing the point size affects more than one pixel, but a fragment shader is only run once per pixel. It will fail, because once the shader program has been run for a particular pixel, the change can only affect the following pixels, not the previous ones.
Shader programs are run on many shader units simultaneously, for many pixels in parallel, which makes it impossible to do what you are trying to do there. The limitation is that you can set the point size from a shader program, but it then keeps that size for all the pixels the shader program is run on.
You could try sorting your point data by size, grouping points of the same size into one array, and drawing each array with a different point size.
Or you could try doing it with an indirection, where you first render the different point sizes into a texture and then, in a second pass, render the pixels by accessing the texture data and using it to decide whether a pixel should be rendered or not.