OpenGL: Drawing very thin triangles with TriangleList turns them into points

I'm using a TriangleList to output my primitives. Most of the time I need to draw rectangles, triangles and circles. From time to time I also need to draw very thin triangles (width = 2 px, for example). I thought they would look like a line (almost a line), but they look like separate points :)
The following picture shows what I'm talking about:
The first picture on the left shows how I draw a rectangle (counter-clockwise, from the top-right corner), and then you can see the "width" of the rectangle, which I call "dx".
How can I avoid this behavior? I would like it to look like a straight (almost straight) line, not like points :)

As @BrettHale mentions, this is an aliasing problem. For example,
Without super/multisampling, the triangle only covers the centre of the bottom right pixel, so only that pixel receives colour. Real pixels have area, and in a perfect situation each would receive a portion of the colour equal to the area covered. "Antialiasing" techniques reduce the aliasing effects caused by not integrating colour across pixels.
Getting it to look right without being incredibly slow is hard. OpenGL provides GL_POLYGON_SMOOTH, which conservatively rasterizes triangles and draws the correct percentages of colour to each pixel using blending. This works well until you have overlapping triangles and you hit the problem of transparency sorting where order-independent transparency is needed. A simple and more brute force solution is to render to a much bigger texture and then downsample. This is essentially what supersampling does, except the samples can be "anisotropic" (irregular) which gives a nicer result. Multisampling techniques are adaptive and a bit more efficient, e.g. supersample pixels only at triangle edges. It is fairly straightforward to set this up with OpenGL.
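For illustration, the GL_POLYGON_SMOOTH path looks roughly like this (a sketch only, assuming a compatibility-profile context; GL_SRC_ALPHA_SATURATE / GL_ONE is the blend setup traditionally recommended for it):
// Per-pixel coverage is written to alpha, so blending must be enabled.
glEnable(GL_POLYGON_SMOOTH);
glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE);
// ... draw the triangles, ideally sorted front to back; overlapping geometry
// runs into the transparency-sorting problem mentioned above.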
However, as the triangle's width approaches zero its area does too, so it will still disappear entirely even with antialiasing (although it will fade out rather than become pixelated). Although not physically correct, you may instead be after a minimum one-pixel-wide triangle, so you get the lines you want even for a really thin triangle. This is where doing your own conservative rasterization may be of interest.

This is the problem of skinny triangles in general. For example, in adaptive subdivision, when you have skinny T-junctions, it happens all the time. One solution is to draw the edges (you can use GL_LINE_STRIP) with antialiasing enabled. You can set:
Gl.glShadeModel(Gl.GL_SMOOTH);
Gl.glEnable(Gl.GL_LINE_SMOOTH);
Gl.glEnable(Gl.GL_BLEND);
Gl.glBlendFunc(Gl.GL_SRC_ALPHA, Gl.GL_ONE_MINUS_SRC_ALPHA);
Gl.glHint(Gl.GL_LINE_SMOOTH_HINT, Gl.GL_DONT_CARE);
before drawing the lines so you get lines when your triangle is very small...

This is called a subpixel feature, when geometry gets smaller than a single pixel. If you animated the very thin triangle, you would see the pixels pop in and out.
Try turning multisampling on. Most GL windowing libraries support a multisampled back buffer. You can also force it on in your graphics driver settings.
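For example, with GLFW (just one such windowing library), a minimal sketch:
glfwInit();
glfwWindowHint(GLFW_SAMPLES, 4);  // request a 4x multisampled back buffer
GLFWwindow *win = glfwCreateWindow(1280, 720, "MSAA", NULL, NULL);
glfwMakeContextCurrent(win);
glEnable(GL_MULTISAMPLE);         // usually on by default, but make it explicit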

If the triangle is generated by a geometry shader, then you can adjust its size dynamically.
For example, you can make the triangle's width always at least one pixel:
// NDC coordinates range from -1.0 to 1.0; assume a screen width of 1920 pixels.
float pixel_unit = 2.0 / 1920.0;
// Midpoint of the edge between the first two vertices.
vec2 center = 0.5 * (triangle[0].xy + triangle[1].xy);
// Remember to divide by the w component to get an NDC-space size.
float triangle_width = length(triangle[0].xy - center) / triangle[0].w;
float scale_ratio = pixel_unit / triangle_width;
// If the triangle is narrower than one pixel, scale it up around its center.
if (scale_ratio > 1.0) {
    triangle[0].xy = (triangle[0].xy - center) * scale_ratio + center;
    triangle[1].xy = (triangle[1].xy - center) * scale_ratio + center;
}

This issue can also be addressed via conservative rasterisation. The following summary is reproduced from the documentation for the NV_conservative_raster OpenGL extension:
This extension adds a "conservative" rasterization mode where any pixel
that is partially covered, even if no sample location is covered, is
treated as fully covered and a corresponding fragment will be shaded.
Similar extensions exist for the other major graphics APIs.
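For illustration, enabling it in OpenGL looks roughly like this (a sketch only; it requires a driver that exposes GL_NV_conservative_raster, the enum comes from glext.h or your loader, and glfwExtensionSupported is just one way to check):
if (glfwExtensionSupported("GL_NV_conservative_raster"))
    glEnable(GL_CONSERVATIVE_RASTERIZATION_NV);  // every touched pixel now gets a fragment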

Related

Aliased rasterization: why sample the pixel at its center?

Both OpenGL and Direct3D use the pixel's center as the sample point during rasterization (without antialiasing).
For example here is the quote from D3D11 rasterization rules:
Any pixel center which falls inside a triangle is drawn
I tried to find out the reason for using (0.5, 0.5) instead of, say, (0.0, 0.0) or anything else in the range 0.0 - 1.0 for both x and y.
The result might be translated a little, but does it really matter? Does it produce visible artifacts? Maybe it makes some algorithms harder to implement? Or is it just a convention?
Again, I don't talk about multisampling here.
So what is the reason?
Maybe this is not the answer to your problem, but I'll try to answer your question from a ray tracing perspective.
In ray tracing, you can get the color of every single point in the scene. But since we have a limited number of pixels, you need to downsample the image to your screen pixels.
In ray tracing, if you use one ray per pixel, we generally choose the center point to create the ray, which gives the most correct render results. In the image below, I try to show the difference between choosing a corner of the pixel and choosing its center. The distance gets bigger the farther your object is from the rendering screen.
If you use more than one ray per pixel, let's say 5 rays (4 corners + 1 center), and average the result, you will of course get a more realistic image (it will handle aliasing problems much better), but it will be slower, as you'd guess.
So it is probably the same idea: OpenGL and DirectX take one sample per pixel instead of multisampling and averaging (performance issues), and the center point probably gives the best result.
EDIT:
For area rasterization, the center of the pixel is used because, if the center of the pixel lies inside the area, it is (except near shape corners) guaranteed that at least 50% of the pixel is inside the shape. That's why, since the proportion is greater than half, the pixel is colored.
For other corner selections there is no general rule. Let's look at the example image below. The black point (bottom left) is outside the area and should not be drawn (and when you look at it, more than half of the pixel is outside). However, if you look at the blue point, 80% of the pixel is inside the area, but since the bottom-left corner is outside the area, it shouldn't be drawn.
This answer mainly focuses on the OP's comment on Cagkan Toptas' answer:
"Thanx for the answer, but my question is: why does it give better results? Does it at all? If yes, what is the explanation?"
It depends on how you define "better" results. From an image quality perspective, it does not change much, as long as the primitives are not specifically aligned (after the projection).
Using a single sample at (0,0) instead of (0.5, 0.5) will just shift the scene by half a pixel (on both axes, of course). In the general case of arbitrarily placed primitives, the average error should be the same.
However, if you want "pixel-exact" drawing (e.g. for text, UI, and full-screen post-processing effects), you just have to take the convention of the underlying implementation into account; either convention would work.
One advantage of the "center at half integers" rule is that you can get the integer pixel coordinates (with respect to the sample locations) of the nearest pixel by a simple floor(floating_point_coords) operation, which is simpler than rounding to the nearest integer.
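To make that concrete, a tiny sketch in C (x is a coordinate in the window's sample space):
#include <math.h>

// Index of the pixel whose sample point is nearest to coordinate x.
int nearest_pixel_half_integer_centers(float x) { return (int)floorf(x); }         // centers at 0.5, 1.5, ...
int nearest_pixel_integer_centers(float x)      { return (int)floorf(x + 0.5f); }  // centers at 0, 1, 2, ...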

OpenGL/OpenTK Fill Interior Space

I am looking for a way to "fill" three-dimensional geometry with color, and quite possibly a texture at some time later on.
Suppose for a moment that you could physically phase your head into a concrete wall; logically, you would see only darkness. In OpenGL, however, when you do this the world is naturally hollow and transparent due to culling and because of how the geometry is drawn. I want to simulate the darkness/color/texture within it instead.
I know some games do this by overlaying a texture/color directly over the HUD, thereby blinding the player.
Is there another way to do this, though? Suppose the player is standing half in water; they can partially see below the waves. How would you fill it to prevent them from being able to see clearly below what is now half of their screen?
What is this concept even called?
A problem with the texture-in-front-of-the-camera method is that a texture is 2D but you want to visualize a slice of a 3D volume. For the first thing you talk about, the head-inside-a-wall idea, I'll point you to "3D/volume texturing". For standing-half-in-water, you're after "volume rendering" with "absorption" (discussed by @user3670102).
3D texturing
The general idea here is you have some function that defines a colour everywhere in a 3D space, not just on a surface (as with regular texture mapping). This is nice because you can put geometry anywhere and colour it in the fragment shader based on the 3D position. Think of taking a slice through the volume and looking at the intersection colour.
For the head-in-a-wall effect you could draw a full screen polygon in front of the player (right on the near clipping plane, although you might want to push this forwards a bit so it's not too small) and colour it based on a 3D function. Now it'll look properly solid, move as the player does, and not look like you've cheaply stuck a texture over the screen.
The actual function could be defined with a 3D texture, but that's very memory intensive. Instead, you could look into procedural 3D colour (a procedural wood or brick shader is a pretty common example). Even a 2D texture "extruded" through the volume will work, or better yet, weight 3 textures (one for each axis) based on the angle of the intersection/surface you're drawing on.
Detecting an intersection with the geometry and the near clipping plane is probably the hardest bit here. If I were you I'd look at tricks with the z-buffer and make sure to draw everything as solid non-self-intersecting geometry. A simple idea might be to draw back faces only after drawing everything with front faces. If you can see back faces that part of the near plane must be inside something. For these pixels you could calculate the near clipping plane position in world space and apply a 3D texture. Though I suspect there are faster ways than drawing everything twice.
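One possible shape of that two-pass trick, as a sketch (it assumes closed, non-self-intersecting geometry as mentioned; drawScene() and drawSceneInteriorShaded() are placeholder draw calls):
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);

// Pass 1: the scene as usual, back faces culled.
glCullFace(GL_BACK);
drawScene();

// Pass 2: back faces only. With closed geometry, a back face can only win the
// depth test where the camera (near plane) sits inside an object, so shade
// those pixels with the 3D "interior" colour function.
glCullFace(GL_FRONT);
drawSceneInteriorShaded();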
In reality there would probably be no light getting to what you see and it should be black, but I guess just ignore this and render the colour directly, unlit.
Absorption
This sounds way harder than it actually is. If you have some transparent solid that's all the one colour ("homogeneous") then it removes light the further light has to travel through it. Think of many alpha-transparent surfaces, take the limit and you have an exponential. The light remaining is close to 1/exp(dist) or exp(-dist). Google "Beer's Law". From here,
vec3 Absorbance = WaterColor * WaterDensity * -WaterDepth;
vec3 Transmittance = exp(Absorbance);
A great way to find distances through something is to render the back faces (or seabed/water floor) with additive blending using a shader that draws distance to a floating point texture. Then switch to subtractive blending and render all the front faces (or water surface). You're left with a texture containing distances/depth for the above equation.
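As a rough sketch of those two passes (assuming a floating-point render target is already bound and a shader that outputs eye-space distance; drawWaterVolume() is a placeholder for your own draw call):
glEnable(GL_BLEND);
glEnable(GL_CULL_FACE);

// Pass 1: back faces (sea bed / far side of the water), add their distance.
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
glCullFace(GL_FRONT);      // keep only back faces
drawWaterVolume();

// Pass 2: front faces (water surface), subtract their distance.
// The texture is left holding the water thickness along each ray.
glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);
glCullFace(GL_BACK);
drawWaterVolume();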
Volume Rendering
Combining the two ideas, the material is both a transparent solid and the colour (and maybe density) varies throughout the volume. This starts to get pretty complicated if you have large amounts of data and want it to be fast. A straightforward way to render this is to numerically integrate a ray through the 3D texture (or procedural function, whatever you're using), applying the absorption function at the same time. A basic brute-force Euler integration might start a ray for each pixel on the near plane, then march forwards at even distances. Over each step of the march you assume the colour remains constant and apply absorption, keeping track of how much light you have left. A quick google brings up this.
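A CPU-flavoured sketch of that brute-force march (density_at() stands in for the 3D texture fetch or procedural function):
#include <math.h>

// Placeholder density field: constant light fog for the sake of the example.
static float density_at(float x, float y, float z) {
    (void)x; (void)y; (void)z;
    return 0.1f;
}

// Euler integration of Beer's law along one ray: march in even steps, assume
// the medium is constant over each step, and attenuate the remaining light.
float transmittance_along_ray(const float origin[3], const float dir[3],
                              float max_dist, float step)
{
    float t = 1.0f;  // fraction of light remaining
    for (float s = 0.0f; s < max_dist; s += step) {
        float d = density_at(origin[0] + dir[0] * s,
                             origin[1] + dir[1] * s,
                             origin[2] + dir[2] * s);
        t *= expf(-d * step);   // absorption over this segment
        if (t < 0.001f) break;  // early out once essentially opaque
    }
    return t;
}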
This seems related to looking through what's called "participating media". On the less extreme end, you'd have light fog, or smoky haze. In the middle could be, say, dirty water. And the extreme case would be your head-in-the-wall example.
Doing this in a physically accurate way isn't trivial, because the darkening effect is more pronounced when the thickness of the media is greater.
But you can fake this by making some assumptions and giving the interior geometry (under the water or inside the wall) darker by reduced lighting or using darker colors. If you care about the depth effect, look at OpenGL and fog.
For underwater, you can make the back side of the water a semi-transparent color that causes stuff above it to have a suitable change in color.
If you really want to go nuts with accuracy, look at Kajiya's Rendering Equation. That covers everything (including stuff that glows), but generally needs simplification and approximations to be more useful.

Method to fix the video-projector deformation with GLSL/HLSL full-screen shader

I am working in the VR field, where good calibration of a projected screen is very important, and because of difficult-to-adjust ceiling mounts and other hardware specifics, I am looking for a full-screen shader method to “correct” the shape of the screen.
Most 2D or 3D engines allow applying a full-screen effect or deformation by redrawing the rendering result on a quad that you can deform or render in a custom way.
The first idea was to use a vertex shader to offset the corners of this screen quad, so the image is deformed as a quadrilateral (like the hardware keystone on a projector), but that won’t be enough for the requirements
(this approach is described on math.stackexchange with a live fiddle demo).
In my target case:
The image deformation must be non-linear most of the time, so 9 or 16 control points are needed for a finer adjustment.
The borders of the image are not straight (barrel or pincushion effect), so even with few control points, the image must be distorted in a curved way in between. Otherwise the deformation would produce visible linear seams at each control point's boundary.
Ideally, knowing the corrected position of each control point of a 3x3 or 4x4 grid, the way to go would be to define a continuous transform for the texture coordinates of the image being drawn on the full-screen quad:
u,v => corrected_u, corrected_v
You can find an illustration here.
I’ve seen some FFD algorithms that work in 2D or 3D and would allow deforming an image or mesh “softly”, as if it were made of rubber, but the implementation seems heavy for a real-time shader.
I also thought of a weight-based deformation as we have in skeletal/soft-body animation, but it seems tricky to weight the control points properly.
Do you know a method, algorithm or general approach that could help me solve the problem?
I saw some mesh-based deformation like the new Oculus Rift DK2 requires for its own distortion, but most 2D/3D engines use a single quad made of only 4 vertices by default.
If you need non-linear deformation, Bézier surfaces are pretty handy and easy to implement.
You can either pre-build them on the CPU, or use hardware tessellation (example provided here).
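As an illustration, here is a CPU-side sketch of evaluating a biquadratic Bézier patch from a 3x3 grid of corrected 2D positions; the same maths ports directly to a vertex or tessellation shader (all names here are made up):
// Quadratic Bernstein basis for t in [0,1].
static void bernstein3(float t, float b[3]) {
    float u = 1.0f - t;
    b[0] = u * u;
    b[1] = 2.0f * u * t;
    b[2] = t * t;
}

// Evaluate a biquadratic Bezier patch from a 3x3 grid of 2D control points.
static void bezier_patch_3x3(const float ctrl[3][3][2], float u, float v,
                             float out[2]) {
    float bu[3], bv[3];
    bernstein3(u, bu);
    bernstein3(v, bv);
    out[0] = out[1] = 0.0f;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            float w = bu[j] * bv[i];
            out[0] += w * ctrl[i][j][0];
            out[1] += w * ctrl[i][j][1];
        }
}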
Continuing my research, I found a way.
I created a 1D RGB texture corresponding to a "ramp" of cosine values. This holds the 3 influence coefficients of the offset parameters along a 0..1 axis, with the 3 coefficients centered at 0, 0.5 and 1:
Red starts at 1 at x=0 and goes down to 0 at x=0.5
Green starts at 0 at x=0, goes to 1 at x=0.5 and goes back to 0 at x=1
Blue starts at 0 at x=0.5 and goes up to 1 at x=1
With these, from 9 float2 uniforms I can interpolate my parameters very smoothly over the image (with 3 lookups horizontally, and a final one vertically).
Then, once interpolated, I offset the texture coordinates with these values and it works :-D
This is more or less a weighted interpolation of the coordinates, using texture lookups as a speedup.
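In other words, something along these lines (a plain-C sketch of what the shader lookups compute; the cosine weights mirror the red/green/blue ramps above and offsets holds the 9 float2 uniforms):
#include <math.h>

// Cosine ramp weights for the three control rows/columns at t = 0, 0.5 and 1.
// They always sum to 1, so the interpolation shows no seams.
static void ramp_weights(float t, float w[3]) {
    const float PI = 3.14159265f;
    w[0] = (t < 0.5f) ? 0.5f + 0.5f * cosf(2.0f * PI * t) : 0.0f;
    w[1] = 0.5f - 0.5f * cosf(2.0f * PI * t);
    w[2] = (t > 0.5f) ? 0.5f - 0.5f * cosf(2.0f * PI * (t - 0.5f)) : 0.0f;
}

// Offset the texture coordinate (u,v) by a weighted blend of the 3x3 offsets.
static void correct_uv(const float offsets[3][3][2], float u, float v,
                       float out[2]) {
    float wu[3], wv[3];
    ramp_weights(u, wu);
    ramp_weights(v, wv);
    out[0] = u;
    out[1] = v;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            out[0] += wu[j] * wv[i] * offsets[i][j][0];
            out[1] += wu[j] * wv[i] * offsets[i][j][1];
        }
}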

How to draw smooth lines in 2D scene with OpenGL without using GL_LINE_SMOOTH?

Since GL_LINE_SMOOTH is not hardware accelerated, nor supported on all GFX cards, how do you draw smooth lines in 2D mode that look as good as with GL_LINE_SMOOTH?
Edit 2: My current solution is to draw the line as 2 quads which fade to zero transparency at the edges, while the colour in between those 2 quads is the line colour. It works well enough for basic smooth line rendering, doesn't use texturing, and is thus very fast to render.
So, you want smooth lines without:
line smoothing.
full-screen antialiasing.
shaders.
Alright.
Your best bet is to use Valve's Alpha-Tested Magnification technique. The basic idea, for your needs, is to create a texture that represents the distance from the line, with the center of the texture being a distance of 1.0. This could probably be a 1D texture.
Then using the techniques described in the paper (many of which work with fixed-function, including the antialiased version), draw a quad that represents your lines. Obviously you'll need alpha blending (and thus it isn't order-independent). You use your line width to control the distance at which it becomes the appropriate color, thus allowing you to make narrow or wide lines.
Doing this with shaders is virtually identical to the above, except without the texture. Instead of accessing a distance texture, the distance is passed and interpolated from the vertex shader. For the left-edge of the quad, the vertex shader passes 0. For the right edge, it passes 1. You multiply this by 2, subtract 1, and take the absolute value.
That's your distance from the line (the line being the center of the quad). Then just use that distance exactly as Valve's algorithm does.
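A tiny sketch of that distance-to-alpha step (written as plain C for clarity, though it would live in the fragment shader; the parameter names are invented):
#include <math.h>

// t is the interpolated quad coordinate (0 at one edge, 1 at the other),
// quad_half_width_px is half the quad's width, half_width_px is half the
// desired line width, and aa_px is the width of the soft edge, all in pixels.
float line_alpha(float t, float quad_half_width_px,
                 float half_width_px, float aa_px)
{
    float dist = fabsf(t * 2.0f - 1.0f) * quad_half_width_px;  // pixels from the line's centre
    float a = (half_width_px + aa_px - dist) / aa_px;          // 1 inside, fading to 0
    return a < 0.0f ? 0.0f : (a > 1.0f ? 1.0f : a);
}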
Turning on full-screen anti-aliasing and using a quad would be my first choice.
Currently I am using 2 or 3 quads to do this; it is the simplest way to do it.
If the line thickness is <= 1 px, you only need 2 quads.
If the line thickness is > 1 px, you need to add a third quad in the middle.
The thickness of the fading edge quads must not change when the line thickness is >= 1 px.
In the image below you can see the quads with blue borders. White color means full opacity and black color means zero opacity (=fully transparent).

OpenGL rendering and rasterization of slivers

I am testing some rendering stuff with OpenGL and I noticed that I have some issues with long thin polygons that form a plane. When two of these long polygons are directly next to each other, adjoining along the long side, some of the pixels at the edge are invisible. These invisible pixels move around when I move the camera.
What I found is that the pixels at the edge of these "sliver" polygons end up invisible because the rasterizer decides they are not within the polygon at this specific view angle.
What I didn't figure out is how to tell OpenGL to also put pixels on screen that are directly at the edge of that polygon.
If you found my description of the problem a bit weird see http://www.ugrad.cs.ubc.ca/~cs314/Vjan2008/slides/week5.day3-4x4.pdf page 27 and following. That's what I mean.
EDIT: OK, I think I have to make clear what my problem is, because I have a feeling that I can't address it with anti-aliasing techniques:
aaa|b|cc
aaa|b|cc
aaa|b|cc
   ^ ^
   1 2
- the polygons a, b and c form a plane
- some pixels at edges 1 and 2 are invisible at certain camera angles
What I didn't figure out is how to tell OpenGL to also put pixels on screen that are directly at the edge of that polygon.
In general, you don't. If OpenGL thinks that a part of a triangle is too thin to be rendered at a given resolution, then it's too thin to be rendered. The general form of this issue is called "aliasing".
The solution is to use an antialiasing technique. For example, multisampling. When you create the context, select a number of samples to use.