I am not sure exactly how the texture2D function works in GLSL with linear filtering. How does it choose which pixels to interpolate between, and how does it interpolate them?
Imagine I had a texture with 4 grayscale pixels of value {0, 1, 2, 3}, and I wanted OpenGL to draw it across a line 8 pixels wide. How would texture2D fill those 8 pixels?
Would it be something like: {0, .375(3/8), .75(6/8), 1.125(9/8), 1.5(12/8), 1.875(15/8), 2.25(18/8), 2.625(21/8), 3}?
In fact there are two aspects to the question:
- Yes, GL_LINEAR means linear interpolation, in each direction (so, bilinear).
- Now, texture pixels and screen pixels might not line up exactly face to face, depending on whether the floating-point values of the vertices project them onto pixel centers or corners. This changes the first, the last, and indeed all the intermediate values.
Note that for questions like this, it is almost quicker to experiment than to discuss it: running a quick, small experiment in GLSL(ES) is straightforward on websites such as Shadertoy, for instance.
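To make the second point concrete: assuming GL_CLAMP_TO_EDGE and that the 8 output pixels are sampled at their centres, the texel centres of a 4-texel row sit at u = 0.125, 0.375, 0.625 and 0.875, so the 8 fetched values come out as {0, 0.25, 0.75, 1.25, 1.75, 2.25, 2.75, 3} rather than spanning the endpoints evenly. A minimal GLSL(ES) fragment-shader sketch (names are made up) to try this:
uniform sampler2D u_tex;   // the 4x1 texture holding {0, 1, 2, 3}

void main()
{
    // With an 8-pixel-wide viewport, gl_FragCoord.x runs 0.5 .. 7.5,
    // so u runs 0.0625 .. 0.9375: no sample lands exactly on a texel centre.
    float u = gl_FragCoord.x / 8.0;
    float value = texture2D(u_tex, vec2(u, 0.5)).r;
    // value comes out as 0, 0.25, 0.75, 1.25, 1.75, 2.25, 2.75, 3
    // (times whatever normalisation the texture format applies).
    gl_FragColor = vec4(vec3(value / 3.0), 1.0);
}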
I have an application that encodes bearing and intensity data using 32 bits. My fragment shader already decodes the values and then sets the color depending on bearing and intensity.
I'm wondering if it's also possible, via shader, to change the size (and possibly shape) of the drawn pixel.
As an example, let's say we have 4 possible values for intensity, then 0 would cause a single pixel to be drawn, 1 would draw a 2x2 square, 2 a 4x4 square and 3 a circle with a radius of 6 pixels.
In the past, we had to do all this on the CPU side and I was hoping to offload this job to the GPU.
No, fragment shaders cannot affect the "size" of the data they write. Once something has been rasterized into fragments, it doesn't really have a "size" anymore.
If you're rendering GL_POINTS primitives, you can change their size from the vertex shader. However, it's rather difficult to ensure that a particular point covers an exact number of fragments.
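A minimal vertex-shader sketch of that approach (attribute and uniform names are invented; desktop GL also needs glEnable(GL_PROGRAM_POINT_SIZE) for gl_PointSize to take effect):
uniform mat4 u_mvp;
attribute vec4 a_position;
attribute float a_intensity;   // 0, 1, 2 or 3, decoded from the 32-bit value

void main()
{
    gl_Position = u_mvp * a_position;
    // 0 -> 1px, 1 -> 2px, 2 -> 4px, 3 -> a 12px point that the fragment
    // shader could round into a circle using gl_PointCoord and discard.
    if      (a_intensity < 0.5) gl_PointSize = 1.0;
    else if (a_intensity < 1.5) gl_PointSize = 2.0;
    else if (a_intensity < 2.5) gl_PointSize = 4.0;
    else                        gl_PointSize = 12.0;
}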
The first thing that came to my mind is doing something similar to a blur technique, but instead of blurring the texture, we use it to look at neighbouring texels within a range and check whether any of them has an intensity above 1.0f. If so, set the current texel color to, for example, red.
If you're using an FBO that is 1:1 with the window size, use 1/width and 1/height as texture-coordinate steps to move by approximately one pixel (well, not exactly a pixel but a texel, just nearly).
Although this works just fine, the downside is that it is very expensive, since it has n^2 complexity and probably some branching.
Edit: after thinking a while, this might not work for even-numbered sizes.
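A rough fragment-shader sketch of that neighbourhood test (uniform names are invented and the 5x5 window is arbitrary):
uniform sampler2D u_data;        // FBO colour attachment, 1:1 with the window
uniform vec2      u_texel_size;  // vec2(1.0/width, 1.0/height)
varying vec2      v_uv;

void main()
{
    vec4 color = texture2D(u_data, v_uv);
    for (int y = -2; y <= 2; ++y)
    {
        for (int x = -2; x <= 2; ++x)
        {
            vec2 offset = vec2(float(x), float(y)) * u_texel_size;
            if (texture2D(u_data, v_uv + offset).r > 1.0)
                color = vec4(1.0, 0.0, 0.0, 1.0);   // a neighbour is "hot"
        }
    }
    gl_FragColor = color;   // 25 lookups per fragment: the n^2 cost mentioned above
}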
I am working in VR field where good calibration of a projected screen is very important, and because of difficult-to-adjust ceiling mounts and other hardware specificities, I am looking for a fullscreen shader method to “correct” the shape of the screen.
Most 2D or 3D engines allow you to apply a full-screen effect or deformation by redrawing the rendering result on a quad that you can deform or render in a custom way.
The first idea was to use a vertex shader to offset the corners of this screen quad, so the image is deformed as a quadrilateral (like the hardware keystone on a projector), but it won’t be enough for the requirements
(this approach is described on math.stackexchange with a live fiddle demo).
In my target case:
The image deformation must be non-linear most of the time, so 9 or 16 control points are needed to get a finer adjustment.
The borders of the image are not straight (barrel or pillow effect), so even with few control points, the image must be distorted in a curved way in between. Otherwise the deformation would produce visible linear seams at each control point's boundary.
Ideally, knowing the corrected position of each control point of a 3x3 or 4x4 grid, the goal would be to define a continuous transform for the texture coordinates of the image being drawn on the fullscreen quad:
u,v => corrected_u, corrected_v
You can find an illustration here.
I've seen some FFD algorithms that work in 2D or 3D and would allow an image or mesh to be deformed "softly", as if it were made of rubber, but the implementation seems heavy for a real-time shader.
I also thought of a weight-based deformation like the one used in skeletal/soft-body animation, but it seems difficult to weight the control points properly.
Do you know a method, algorithm or general approach that could help me solve the problem?
I saw some mesh-based deformations like the ones the new Oculus Rift DK2 requires for its own distortion, but most 2D/3D engines use a single quad made of only 4 vertices by default.
If you need non-linear deformation, Bézier surfaces are pretty handy and easy to implement.
You can either pre-build them on the CPU, or use hardware tessellation (example provided here).
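For illustration, a bicubic Bézier patch can also be evaluated directly in a fragment shader to warp the fullscreen quad's texture coordinates; u_ctrl below is an assumed 4x4 grid of corrected UV control points (row-major), not part of any engine API:
uniform sampler2D u_scene;     // the rendered frame
uniform vec2      u_ctrl[16];  // corrected control points in [0,1]^2
varying vec2      v_uv;

vec4 bernstein3(float t)       // cubic Bernstein basis
{
    float s = 1.0 - t;
    return vec4(s*s*s, 3.0*s*s*t, 3.0*s*t*t, t*t*t);
}

void main()
{
    vec4 bu = bernstein3(v_uv.x);
    vec4 bv = bernstein3(v_uv.y);
    vec2 warped = vec2(0.0);
    for (int j = 0; j < 4; ++j)
    {
        for (int i = 0; i < 4; ++i)
        {
            warped += bu[i] * bv[j] * u_ctrl[j * 4 + i];
        }
    }
    gl_FragColor = texture2D(u_scene, warped);
}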
Continuing my research, I found a way.
I created a 1D RGB texture corresponding to a "ramp" of cosine values. It holds the 3 influence coefficients of the offset parameters along a 0..1 axis, with the 3 coefficients centered at 0, 0.5 and 1:
Red starts at 1 at x=0 and goes down to 0 at x=0.5
Green starts at 0 at x=0, goes to 1 at x=0.5 and goes back to 0 at x=1
Blue starts at 0 at x=0.5 and goes up to 1 at x=1
With these, from 9 float2 uniforms I can interpolate my parameters very smoothly over the image (with 3 lookups horizontally, and a final one vertically).
Then, once interpolated, I offset the texture coordinates with these and it works :-D
This is more or less a weighted interpolation of the coordinates using texture lookups for speedup.
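A sketch of how this can look in the fragment shader (names are invented and the lookup scheme is a reconstruction of the description above, not the exact code):
uniform sampler2D u_scene;      // image drawn on the fullscreen quad
uniform sampler2D u_ramp;       // 1D ramp: R, G, B influence weights over 0..1
uniform vec2      u_offset[9];  // 3x3 control-point UV offsets, row-major
varying vec2      v_uv;

void main()
{
    vec3 wx = texture2D(u_ramp, vec2(v_uv.x, 0.5)).rgb;  // horizontal weights
    vec3 wy = texture2D(u_ramp, vec2(v_uv.y, 0.5)).rgb;  // vertical weights

    // Blend each row of 3 offsets horizontally, then blend the rows vertically.
    vec2 row0 = wx.r * u_offset[0] + wx.g * u_offset[1] + wx.b * u_offset[2];
    vec2 row1 = wx.r * u_offset[3] + wx.g * u_offset[4] + wx.b * u_offset[5];
    vec2 row2 = wx.r * u_offset[6] + wx.g * u_offset[7] + wx.b * u_offset[8];
    vec2 offset = wy.r * row0 + wy.g * row1 + wy.b * row2;

    gl_FragColor = texture2D(u_scene, v_uv + offset);
}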
Problem Explanation
I am currently implementing point lights for a deferred renderer and am having trouble determining where the heavy pixelization/triangulation, which is only noticeable near the borders of lights, is coming from.
The problem appears to be caused by loss of precision somewhere, but I have been unable to track down the precise source. Normals are an obvious possibility, but I have a classmate who is using DirectX and is handling his normals in a similar manner with no issues.
From about 2 meters away in our game's units (64 units/meter):
A few centimeters away. Note that the "pixelization" does not change size in the world as I approach it. However, it will appear to swim if I change the camera's orientation:
A comparison with a closeup from my forward renderer which demonstrates the spherical banding that one would expect with a RGBA8 render target (only 0-255 possible values for each color). Note that in my deferred picture the back walls exhibit normal spherical banding:
The light volume is shown here as the green wireframe:
As can be seen the effect isn't visible unless you get close to the surface (around one meter in our game's units).
Position reconstruction
First, I should mention that I am using a spherical mesh to render only the portion of the screen that the light overlaps. I render only the back-faces when their depth is greater than or equal to the depth buffer value, as suggested here.
To reconstruct the camera space position of a fragment I am taking the vector from the camera space fragment on the light volume, normalizing it, and scaling it by the linear depth from my gbuffer. This is sort of a hybrid of the methods discussed here (using linear depth) and here (spherical light volumes).
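A sketch of that reconstruction in the light-volume fragment shader (variable names are assumed, and the R32F target is taken to hold camera-space distance):
uniform sampler2D u_gbuffer_dist;   // the e_dist_32f target
uniform vec2      u_screen_size;
varying vec3      v_view_pos;       // camera-space position on the light volume

void main()
{
    vec2 uv = gl_FragCoord.xy / u_screen_size;
    float dist = texture2D(u_gbuffer_dist, uv).r;     // stored linear distance
    vec3 position = normalize(v_view_pos) * dist;     // reconstructed camera-space position
    // ... lighting continues from here ...
    gl_FragColor = vec4(position, 1.0);               // placeholder output
}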
Geometry Buffer
My gBuffer setup is:
enum render_targets { e_dist_32f = 0, e_diffuse_rgb8, e_norm_xyz8_specpow_a8, e_light_rgb8_specintes_a8, num_rt };
//...
GLint internal_formats[num_rt] = { GL_R32F, GL_RGBA8, GL_RGBA8, GL_RGBA8 };
GLint formats[num_rt] = { GL_RED, GL_RGBA, GL_RGBA, GL_RGBA };
GLint types[num_rt] = { GL_FLOAT, GL_FLOAT, GL_FLOAT, GL_FLOAT };
for(uint i = 0; i < num_rt; ++i)
{
    glBindTexture(GL_TEXTURE_2D, _render_targets[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, internal_formats[i], _width, _height, 0, formats[i], types[i], nullptr);
}
// Separate non-linear depth buffer used for depth testing
glBindTexture(GL_TEXTURE_2D, _depth_tex_id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, _width, _height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
Normal Precision
The problem was that my normals just didn't have enough precision. At 8 bits per component, that means only 256 discrete possible values. Examining the normals in my gbuffer overlaid on top of the lighting showed a 1:1 correspondence between normal values and lit "pixel" values.
I am unsure why my classmate does not get the same issue (he is going to investigate further).
After some more research I found that a term for this is quantization. Another example of it can be seen here with a specular highlight on page 19.
Solution
After changing my normal render target to RG16F the problem is resolved.
Using the method suggested here to store and retrieve normals, I get the following results:
I now need to store my normals more compactly (I only have room for 2 components). This is a good survey of techniques if anyone finds themselves in the same situation.
[EDIT 1]
As both Andon and GuyRT have pointed out in the comments, 16 bits is a bit overkill for what I need. I've switched to RGB10_A2 as they suggested and it gives very satisfactory results, even on rounded surfaces. The extra 2 bits help a lot (256 vs 1024 discrete values).
Here's what it looks like now.
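For reference, the straightforward scale-and-bias mapping that pairs with an RGB10_A2 (or RGB8) normal target looks like this; it is only an illustration, not necessarily the exact encoding used in this renderer:
vec3 encode_normal(vec3 n)       // geometry pass: write to the normal target
{
    return n * 0.5 + 0.5;        // [-1,1] -> [0,1]; 10 bits give 1024 steps per component
}

vec3 decode_normal(vec3 stored)  // light pass: read back
{
    return normalize(stored * 2.0 - 1.0);
}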
It should also be noted (for anyone that references this post in the future) that the image I posted for RG16F has some undesirable banding from the method I was using to compress/decompress the normal (there was some error involved).
[EDIT 2]
After discussing the issue some more with a classmate (who is using RGB8 with no ill effects), I think it is worth mentioning that I might just have the perfect combination of elements to make this appear. The game I'm building this renderer for is a horror game that places you in pitch black environments with a sonar-like ability. Normally in a scene you would have a number of lights at different angles (my classmate's environments are all very well lit - they're making an outdoor racing game). That combined with the fact that it only appears on very round objects relatively close up might be why I provoked this. This is all just a (slightly educated) guess on my part.
Since GL_QUADS has been removed from OpenGL 3.1 and above, what is the fastest way to draw lots of quads without using it? I've tried several different methods (below) and have ranked them on speed on my machine, but I was wondering if there is some better way, since the fastest way still seems wasteful and inelegant. I should mention that in each of these methods I'm using VBOs with interleaved vertex and texture coordinates, since I believe that to be best practice (though I may be wrong). Also, I should say that I can't reuse any vertices between separate quads because they will have different texture coordinates.
glDrawElements with GL_TRIANGLE_STRIP using a primitive restart index, so that the index array looks like {0, 1, 2, 3, PRI, 4, 5, 6, 7, PRI, ...}. This takes in the first 4 vertices in my VBO, treats them as a triangle strip to make a rectangle, and then treats the next 4 vertices as a separate strip. The problem here is just that the index array seems like a waste of space. The nice thing about GL_QUADS in earlier versions of OpenGL is that it automatically restarts primitives every 4 vertices. Still, this is the fastest method I can find.
Geometry shader. I pass in 1 vertex for each rectangle and then construct the appropriate triangle strip of 4 vertices in the shader. This seems like it would be the fastest and most elegant, but I've read, and now seen, that geometry shaders are not that efficient compared to passing in redundant data.
glDrawArrays with GL_TRIANGLES. I just draw every triangle independently, reusing no vertices.
glMultiDrawArrays with GL_TRIANGLE_STRIP, an array of all multiples of 4 for the "first" array, and an array of a bunch of 4's for the "count" array. This tells the video card to draw the first 4 starting at 0, then the first 4 starting at 4, and so on. The reason this is so slow, I think, is that you can't put these index arrays in a VBO.
You've covered all the typical good ways, but I'd like to suggest a few less typical ones that I suspect may have higher performance. Based on the wording of the question, I shall assume that you're trying to draw an m*n array of tiles, and they all need different texture coordinates.
A geometry shader is not the right tool to add and remove vertices. It's capable of doing that, but it's really intended for cases when you actually change the number of primitives you're rendering dynamically (e.g. shadow volume generation). If you just want to draw a whole bunch of adjacent different primitives with different texture coordinates, I suspect the absolute fastest way would be to use tessellation shaders. Just pass in a single quad and have the tessellator compute texture coordinates procedurally.
A similar and more portable method would be to look up each quad's texture coordinate. This is trivial: say you're drawing 50x20 quads, you would have a 50x20 texture that stores all your texture coordinates. Tap this texture in your vertex program (or perhaps more efficiently in your geometry program) and send the result in a varying to the fragment program for actual rendering.
Note that in both of the above cases, you can reuse vertices. In the first method, the intermediate vertices are generated on the fly. In the second, the vertices' texture coordinates are replaced in the shader with cached values from the texture.
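A rough vertex-shader sketch of the texture-lookup variant (all names and the attribute layout are invented):
#version 140
uniform sampler2D u_quad_uvs;   // one texel per quad: base UV stored in .rg
uniform vec2      u_grid;       // e.g. vec2(50.0, 20.0)
uniform vec2      u_tile_uv;    // UV extent of one tile in the atlas
uniform mat4      u_mvp;
in  vec3 a_position;
in  vec2 a_quad_index;          // which tile this vertex belongs to
in  vec2 a_corner;              // (0,0), (1,0), (0,1) or (1,1)
out vec2 v_uv;

void main()
{
    vec2 lookup_uv = (a_quad_index + 0.5) / u_grid;
    vec2 base_uv   = textureLod(u_quad_uvs, lookup_uv, 0.0).rg;
    v_uv = base_uv + a_corner * u_tile_uv;
    gl_Position = u_mvp * vec4(a_position, 1.0);
}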
I am trying to create a mipmapped textured image that represents elevation. The image must be 940 x 618. I realize that my texture must have a width and height of a power of 2. As of now I have tried to incrementally go through doing all my texturing in squares (eg 64 x 64, or 128 x 128, even 512 x 512), but the image still comes out blurry. Any idea of how to better texture an image of this size?
Use a 1024x1024 texture and put your image in just a part of it, the 940x618 region. Then use the values 940.0/1024.0 and 618.0/1024.0 for the maximum texture coordinates, or scale the TEXTURE_MATRIX. This will give a 1:1 mapping for your pixels. You might also need to shift the model half a pixel to get a perfect fit; this depends on your model setup and view.
This is the technique I used in this screensaver I made for the Mac. http://memention.com/void/ It grabs the screen contents and uses them as a texture on some 3D effects, and I really wanted a pixel-perfect fit.
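For illustration, a minimal fragment-shader version of that mapping, assuming the image sits in the lower-left 940x618 corner of the 1024x1024 texture:
uniform sampler2D u_elevation;   // 1024x1024 texture, image in a 940x618 corner
varying vec2      v_uv;          // 0..1 across the on-screen quad

void main()
{
    vec2 scale = vec2(940.0 / 1024.0, 618.0 / 1024.0);
    gl_FragColor = texture2D(u_elevation, v_uv * scale);
}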
As far as I know, modern technologies do not require you to use a power of 2 for your dimensions. Just know, however, that if this code is run on an older machine, you'll have some problems. How old is your machine?
The texture is probably not mapped 1:1, and you have GL_LINEAR or GL_NEAREST filtering. Try higher resolution texture, mipmapping, and 1:1 screen mapping.
Use a 940x618 sized texture (if this is truly the size of the surface it's applied to) and set the texture's minification/magnification to use GL_LINEAR. That should give you the results you're after.