Rasterizer not picking up GL_LINES as I would want it to - opengl

So I'm rendering this diagram each frame:
https://dl.dropbox.com/u/44766482/diagramm.png
Basically, each second it moves everything one pixel to the left and every frame it updates the rightmost pixel column with current data. So a lot of changes are made.
It is completely constructed from GL_LINES, always from bottom to top.
However those black missing columns are not intentional at all, it's just the rasterizer not picking them up.
I'm using integers for positions and bytes for colors, the projection matrix is exactly 1:1; translating by 1 means moving 1 pixel. Orthogonal.
So my problem is, how do I get rid of the black lines? I suppose I could write the data to a texture, but that seems expensive. Currently I use a VBO.

Render your columns as quads with a width of 1 pixel instead; the rasterization rules of OpenGL will make sure you have no holes this way.
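As a rough sketch in immediate mode (you'd put the same four vertices per column in your VBO; x, y0, y1 are assumed to be integer pixel coordinates):

    // One data column as a 1-pixel-wide quad; with a 1:1 orthographic
    // projection this covers exactly the column of pixels at x.
    void drawColumn(int x, int y0, int y1)
    {
        glBegin(GL_QUADS);
        glVertex2i(x,     y0);  // bottom-left
        glVertex2i(x + 1, y0);  // bottom-right
        glVertex2i(x + 1, y1);  // top-right
        glVertex2i(x,     y1);  // top-left
        glEnd();
    }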

I realize the question is already closed, but you can also get the effect you want by drawing your lines centered at 0.5. A pixel's CENTER is at 0.5, and a line drawn there will always be picked up by the rasterizer in the right place.
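For example (a minimal immediate-mode sketch; the same +0.5 offset applies to positions stored in a VBO):

    // A vertical GL_LINES segment through the pixel centers of column x.
    // The 0.5 offset places the line exactly on the centers, so the
    // rasterizer's diamond-exit rule can't skip the column.
    void drawColumnLine(int x, int y0, int y1)
    {
        glBegin(GL_LINES);
        glVertex2f(x + 0.5f, y0 + 0.5f);
        glVertex2f(x + 0.5f, y1 + 0.5f);
        glEnd();
    }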

Related

When exactly is glClear(GL_DEPTH_BUFFER_BIT) and glClear(GL_COLOR_BUFFER_BIT) necessary?

In a nutshell, when should the color buffer be cleared and when should the depth buffer be cleared? Is it always at the start of the drawing of the current scene/frame? Are there times when you would draw the next frame without clearing these? Are there other times when you would clear these?
Ouff... even though it's a simple question it's a bit hard to explain ^^
It should be cleared before drawing the scene again. Yes, every time, if you want to avoid strange and nearly uncontrollable effects.
You may not want to clear the two buffers right after swapping if you bind the scene to a texture (render-to-texture), but beyond that point there's no more use for the old contents.
The color buffer is, as the name says, a buffer storing the computed color data. For better understanding, imagine you draw on a piece of paper. Each point on the paper knows which color was drawn on top of it - and that's basically all of it.
But without the depth buffer, your color buffer is (except in some special cases like multiple render passes for FX effects) nearly useless. It's like a second piece of paper, but in grayscale. How dark the gray is decides how far the last drawn pixel is from the screen (relative to the zNear and zFar clipping planes).
If you instruct OpenGL to draw another primitive, it goes pixel by pixel and checks which depth value the pixel would have. If it's closer than the value stored in the depth buffer for that pixel, it draws over the color buffer at that position and updates the depth buffer; if it's farther away, it does nothing.
To recap, the color buffer stores the picture to be drawn on your screen, and the depth buffer decides whether a part of your primitive gets drawn at each position.
So clearing the buffers for a new scene is basically swapping in a fresh sheet of paper to draw on. If you want to mess with the old picture, keep it, but it's in better hands on the monitor (or on the wall^^).
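In code, a typical frame starts like this (drawScene and swapBuffers stand in for your own scene code and the platform's swap call):

    // Clear both buffers before drawing the new frame.
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);  // the fresh "paper" is black
    glClearDepth(1.0);                     // farthest possible depth everywhere
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene();                           // hypothetical scene-drawing call
    swapBuffers();                         // platform-specific (SwapBuffers, glXSwapBuffers, ...)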

Holes on Heightmap based terrain using Directx11

I'm currently working on a cylinder-shaped terrain produced from a height map.
What happens in the program is simple: there is one texture for the terrain colors, whose alpha value marks the regions I want to be invisible, and another ARGB texture where A is the grayscale for the heights and RGB is the normal for the lighting.
The height texture is such that the A value goes from 1 to 255, and I'm reserving 0 for the regions with holes, meaning I don't want them to exist.
So in theory there's no problem: I'm making those regions invisible based on the first texture. But in practice, the program treats the 0 as the minimum height and, even with the texture on top, creates lines toward those 0 regions, as if trying to build their triangles but never getting there, because I cut off the next vertex by making it invisible.
Notice the lines going to the center of the cylinder
This is how it looks when I stop making those vertices invisible
So, just to say, I used the clip() function in the pixel shader to make it invisible.
Basically what i need of help:
Is it possible, the same way I use clip() in the pixel shader, to do something like that in the vertex shader and get rid of the unwanted vertices?
Basically, is it possible to just say to ignore the value 0?
Any ideas to fix this? I'm thinking of making every vertex that is 0 take the value of its neighbor; that way those lines wouldn't go to the center but would stay in the plane of the cylinder itself.
Another thing we can see is that the program interpolates the values from one vertex to the next, which is why it cuts off halfway to the invisible vertex.
I'm working with Directx11 API with C++ and the program uses Tessellation.
Thank you for your time; I'll be very glad of any input on this!
Well, I did resolve this issue a bit.
I passed the height texture through a modifier that created another texture in which the zero values were substituted by a neighboring pixel with a value different from zero, or changed to 128.0f.
With that, the direction of the weird lines became more accurate: no longer going to the center of the cylinder, but running along the surface.
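A minimal CPU-side sketch of that substitution, assuming the heights live in an 8-bit row-major array (the horizontal neighbor scan and the 128 fallback follow the description above):

    // Replace zero heights with a nonzero horizontal neighbor, or 128.
    void fillHoles(unsigned char* h, int w, int rows)
    {
        for (int y = 0; y < rows; ++y)
            for (int x = 0; x < w; ++x)
            {
                if (h[y * w + x] != 0)
                    continue;
                unsigned char v = 128;  // fallback mid-height
                if (x > 0 && h[y * w + x - 1] != 0)
                    v = h[y * w + x - 1];
                else if (x + 1 < w && h[y * w + x + 1] != 0)
                    v = h[y * w + x + 1];
                h[y * w + x] = v;
            }
    }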

How to draw smooth lines in 2D scene with OpenGL without using GL_LINE_SMOOTH?

Since GL_LINE_SMOOTH is not hardware accelerated, nor supported on all GFX cards, how do you draw smooth lines in 2D mode that look as good as GL_LINE_SMOOTH?
Edit 2: My current solution is to draw each line as 2 quads that fade to full transparency at the edges, with the line color in between. It works well enough for basic smooth line rendering, doesn't use texturing, and is thus very fast to render.
So, you want smooth lines without:
line smoothing.
full-screen antialiasing.
shaders.
Alright.
Your best bet is to use Valve's Alpha-Tested Magnification technique. The basic idea, for your needs, is to create a texture that represents the distance from the line, with the center of the texture being a distance of 1.0. This could probably be a 1D texture.
Then using the techniques described in the paper (many of which work with fixed-function, including the antialiased version), draw a quad that represents your lines. Obviously you'll need alpha blending (and thus it isn't order-independent). You use your line width to control the distance at which it becomes the appropriate color, thus allowing you to make narrow or wide lines.
Doing this with shaders is virtually identical to the above, except without the texture. Instead of accessing a distance texture, the distance is passed and interpolated from the vertex shader. For the left-edge of the quad, the vertex shader passes 0. For the right edge, it passes 1. You multiply this by 2, subtract 1, and take the absolute value.
That's your distance from the line (the line being the center of the quad). Then just use that distance exactly as Valve's algorithm does.
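The distance math from the last two paragraphs, written out as a plain C++ function (the falloff width aa is my own assumption for the antialiased edge; in the shader version this would run per fragment):

    #include <cmath>

    // t is the coordinate interpolated across the quad: 0 at the left
    // edge, 1 at the right edge. Returns the fragment's alpha.
    float lineAlpha(float t, float halfWidth, float aa)
    {
        float d = std::fabs(t * 2.0f - 1.0f);  // 0 at the quad's center line, 1 at its edges
        float a = (halfWidth + aa - d) / aa;   // opaque inside halfWidth, fading over aa
        return a < 0.0f ? 0.0f : (a > 1.0f ? 1.0f : a);
    }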
Turning on full-screen anti-aliasing and using a quad would be my first choice.
Currently I am using 2 or 3 quads to do this; it is the simplest way to do it.
If line thickness <= 1px, then you need only 2 quads.
If line thickness > 1px, then you need to add third quad in the middle.
The fading edge quads' thickness must not change if the line thickness is >= 1px.
In the image below you can see the quads with blue borders. White color means full opacity and black color means zero opacity (=fully transparent).
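A sketch of that layout for a horizontal line, letting per-vertex alpha do the fading (fade is the edge-quad thickness, assumed to be about 1px; alpha blending must be enabled):

    // Smooth horizontal line from (x0, y) to (x1, y), thickness >= 1px.
    void drawSmoothLine(float x0, float x1, float y, float thick, float fade,
                        float r, float g, float b)
    {
        float h = thick * 0.5f;
        glBegin(GL_QUADS);
        // top fading quad: transparent outer edge, opaque inner edge
        glColor4f(r, g, b, 0.0f); glVertex2f(x0, y + h + fade); glVertex2f(x1, y + h + fade);
        glColor4f(r, g, b, 1.0f); glVertex2f(x1, y + h);        glVertex2f(x0, y + h);
        // opaque middle quad (drop this one when thickness <= 1px)
        glVertex2f(x0, y - h); glVertex2f(x1, y - h);
        glVertex2f(x1, y + h); glVertex2f(x0, y + h);
        // bottom fading quad
        glVertex2f(x1, y - h); glVertex2f(x0, y - h);
        glColor4f(r, g, b, 0.0f); glVertex2f(x0, y - h - fade); glVertex2f(x1, y - h - fade);
        glEnd();
    }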

labels in an opengl map application

Short Version
How can I draw short text labels in an OpenGL mapping application without having to manually recompute coordinates as the user zooms in and out?
Long Version
I have an OpenGL-based mapping application where I need to be able to draw data sets with up to about 250k points. Each point can have a short text label, usually about 4 or 5 characters long.
Currently, I do this using a single texture containing all the characters. For each point, I define a quad for each character in its label. So a point with the label "Fred" would have four quads associated with it, and each quad uses texture coordinates into that single texture to draw its corresponding character.
When I draw the map, I draw the map points themselves in map coordinates (e.g., longitude/latitude). Then I compute the position of each point in screen coordinates and update the four corner points for each of that point's label quads, again in screen coordinates. (For instance, if I determine the point is drawn at screen point 100, 150, I could set the quad for the first character in the point's label to be the rectangle starting with left-top point of 105, 155 and having a width of 6 pixels and a height of 12 pixels, as appropriate for the particular character. Then the second character might start at 120, 155, and so on.) Then once all these label character quads are positioned correctly, I draw them using an orthogonal screen projection.
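For concreteness, a hypothetical sketch of the layout step for one label (glyphUV() stands in for the atlas lookup; the 6x12 glyph size and 5-pixel offset come from the example above):

    #include <vector>

    struct Quad { float x0, y0, x1, y1, u0, v0, u1, v1; };

    // Lay out one label whose point falls at screen position (sx, sy).
    void layoutLabel(const char* text, float sx, float sy, std::vector<Quad>& out)
    {
        const float w = 6.0f, h = 12.0f;     // per-character size in pixels
        float x = sx + 5.0f, y = sy + 5.0f;  // label offset from the point
        for (; *text; ++text, x += w)
        {
            Quad q;
            q.x0 = x; q.y0 = y; q.x1 = x + w; q.y1 = y + h;
            glyphUV(*text, q.u0, q.v0, q.u1, q.v1);  // atlas texture coords (hypothetical)
            out.push_back(q);
        }
    }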
The problem is that the process of updating all of those character quad coordinates is slow, taking about half a second for a particular test data set with 150k points (meaning that, since each label is about four characters long, there are about 150k * [4 characters per point] * [4 coordinate pairs per character] coordinate pairs that need to be set on each update).
If the map application didn't involve zooming, I would not need to recompute all these coordinates on each refresh. I could just compute the label coordinates once and then simply shift my viewing rectangle to show the right area. But with zooming, I can't see how to make it work without doing coordinate computation, because otherwise the characters will grow huge as you zoom in and tiny as you zoom out.
What I want (and what I understand OpenGL doesn't provide) is a way to tell OpenGL that a quad should be drawn in a fixed screen-coordinate rectangle, but that the top-left position of that rectangle should be a fixed distance from a given point in map coordinate space. So I want both a primitive hierarchy (a given map point is the parent of its label character quads) and the ability to mix two different coordinate systems within this hierarchy.
I'm trying to understand whether there is some magic transformation matrix I can set that will do all this for me, but I can't see how to do it.
The other alternative I've considered is using a shader on each point to handle computing the label character quad coordinates for that point. I haven't worked with shaders before, and I'm just trying to understand (a) if it's possible to use shaders to do this, and (b) whether computing all those points in shader code actually buys me anything over computing them myself. (By the way, I have confirmed that the big bottleneck is computing the quad coordinates, not in uploading the updated coordinates to the GPU. The latter takes a bit of time, but it's the computation, the sheer number of coordinates being updated, that takes up the bulk of that half second.)
(Of course, the other other alternative is to be smarter about which labels need to be drawn in a given view in the first place. But for now I'd like to concentrate on the solution assuming all labels need to be drawn.)
So the basic problem ("because otherwise the characters will grow huge as you zoom in and tiny as you zoom out") is that you are doing calculations in map coordinates rather than screen coordinates? And doing it in screen coords would require more computation? Obviously, any rendering needs to translate from map coordinates to screen coordinates. The problem seems to be that you are translating from map to screen too late: rather than doing a single map-to-screen conversion for each point and then working in screen coords, you are working mostly in map coords and translating per character to screen coords at the very end. And the slow part is that you work out positions in screen coords, then have to translate them back to map coords just to hand them to OpenGL, which converts them back to screen coords again! Is that a fair assessment of your problem?
The solution therefore is to push that transformation earlier in your pipeline. However, I can see why it is tricky, because at first glance OpenGL seems to want to do everything in "world coordinates" (for you, map coords), but not in screen coords.
Firstly, I am wondering why you are doing separate coordinate calculations for each character. What font rendering system are you using? Something like FreeType will automatically generate a bitmap image of an entire string, and doesn't require you to work per-character [edit: this isn't quite true; see comments]. You definitely shouldn't need to calculate the map coordinate (or even screen coordinate) for every character. Calculate the screen coordinate for the top-left corner of the label, and have your font rendering system produce the bitmap of the entire label in one go. That should speed things up about fourfold (since you assume 4 characters per label).
Now as for working in screen coords, it may be helpful to learn a bit about shaders. The more you learn about OpenGL, the more you learn that really it isn't a 3D rendering engine at all. It's just a 2D graphics library with some very fast matrix primitives built in. OpenGL actually works, at the lowest level, in screen coordinates (not pixel coordinates -- it works in normalized screen space, I think from memory from -1 to 1 in both the X and Y axes). The only reason it "feels" like you're working in world coordinates is because of these matrices you have set up.
So I think the reason why you are working in map coords all the way until the end is because it's easiest: OpenGL naturally does the map-to-screen transform for you (using the matrices). You have to change that, because you want to work in screen coords yourself, and therefore you need to make the transformation a long time before OpenGL gets its hands on your data. So when you go to draw a label, you should manually apply the map-to-screen transformation matrix on each point, as follows:
You have a particular point (which needs a label drawn) in map coords.
Apply the map-to-screen matrix to convert the point to screen coords. This probably means multiplying the point by the MODELVIEW and PROJECTION matrices, using the same algorithm that OpenGL does when it's rendering a vertex. So you could either glGet the GL_MODELVIEW_MATRIX and GL_PROJECTION_MATRIX to extract OpenGL's current matrices, or you could manually keep around a copy of the matrix yourself.
Now that you have the map label in screen coords, compute the position of the label's text. This is simply adding 5 pixels in the X and Y axes, as you said above. However, remember that you aren't in pixel space but normalized screen space, so you are working in fractions of the screen (adding 0.05 units moves you by 2.5% of the screen, for example, since the space spans 2 units). It's probably better not to think in pixels, because then your application will scale to match the resolution. But if you really want to think in pixels, you will have to calculate the pixels-to-units ratio based on the resolution.
Use glPushMatrix to save the current matrix, then glLoadIdentity to set the current matrix to the identity -- tell OpenGL not to transform your vertices. (I think you will have to do this for both the PROJECTION and MODELVIEW matrices.)
Draw your label, in screen coordinates (see the sketch below).
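A fixed-function sketch of those steps. gluProject() is GLU's real helper for step 2; mapX/mapY are the point's map coordinates, drawLabelAt() is a hypothetical stand-in for the label drawing, and I've used a temporary pixel-space ortho instead of normalized units to make the 5-pixel offset literal:

    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    // Step 2: map coords -> window (pixel) coords.
    GLdouble winX, winY, winZ;
    gluProject(mapX, mapY, 0.0, model, proj, viewport, &winX, &winY, &winZ);

    // Step 4: stop transforming vertices; 1 unit == 1 pixel from here on.
    glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
    glOrtho(0, viewport[2], 0, viewport[3], -1, 1);
    glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();

    // Steps 3 and 5: offset by 5 pixels and draw in screen coordinates.
    drawLabelAt(winX + 5.0, winY + 5.0);

    glMatrixMode(GL_MODELVIEW);  glPopMatrix();
    glMatrixMode(GL_PROJECTION); glPopMatrix();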
So you don't really need to write a shader. You could certainly do this in a shader, and it would certainly make step 2 faster (no need to write your own software matrix multiply code; multiplying matrices on the GPU is extremely fast). But that would be a later optimisation, and a lot of work. I think the above steps will help you work in screen coordinates and avoid having to waste a lot of time just to give OpenGL map coordinates.
Side comment on:
"""
generate a bitmap image of an entire string, and doesn't require you to work per-character
...
Calculate the screen coordinate for the top-left corner of the label, and have your font rendering system produce the bitmap of the entire label in one go. That should speed things up about fourfold (since you assume 4 characters per label).
"""
Freetype or no, you could certainly compute a bitmap image for each label, rather than each character, but that would require one of:
storing thousands of different textures, one for each label
It seems like a bad idea to store that many textures, but maybe it's not.
or
rendering each label, for each point, at each screen update.
this would certainly be too slow.
Just to follow up on the resolution:
I didn't really solve this problem, but I ended up being smarter about when I draw labels in the first place. I was able to quickly determine whether I was about to draw too many characters (i.e., so many characters that on a typical screen with a typical density of points the labels would be too close together to read in a useful way), and in that case I simply don't label at all. When drawing up to about 5000 characters at a time, there isn't a noticeable slowdown recomputing the character coordinates as described above.

Opengl Depth buffer and Culling

What's the difference between using back-face culling and a depth buffer in OpenGL?
Backface culling is when OpenGL determines which faces are facing away from the viewer and are therefore unseen. Think of a cube: no matter how you rotate it, 3 faces will always be invisible. Figure out which faces these are, remove them from the list of polygons to be drawn, and you've just halved your drawing list.
Depth buffering is fairly simple. For every pixel of every polygon drawn, compare its z value to the z-buffer; if it's less than the value in the z-buffer, draw the pixel and store its z as the new z-buffer value, otherwise discard the pixel. Depth buffering gives very good results but can be fairly slow, as each and every pixel requires a value lookup.
In reality these two methods have very little in common, and they are often both used. Given a cube, you can first cut out half the polygons using culling, then draw the rest using z-buffering.
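In OpenGL, enabling both looks like this (GL_BACK and GL_CCW are the defaults, shown here for clarity):

    // Cull back faces, then let the depth test sort out what remains.
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);    // discard faces wound away from the viewer
    glFrontFace(GL_CCW);    // counter-clockwise winding counts as front-facing

    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);   // keep a fragment only if it's nearer than what's stored
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);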
Culling can cut down on the polygons rendered, but it's not a visibility-sorting mechanism; that's what z-buffering is.
A given triangle has two sides, the front face and the back face. Which side you are looking at is determined by the order the points appear in the vertex list (also called the winding). Triangle strips typically have alternating winding so that each new triangle can reuse the preceding two points, but the effective facing of a given triangle in the strip doesn't alternate. Back-face culling is the optimization step in which triangles in the scene that are oriented away from the view are removed from the list of triangles to draw.
A depth buffer (z-buffer) is used to hang onto the closest thing (depth is relative to the view) that has already been rendered. If the thing that comes up next in the draw list is behind something I've drawn already (i.e., it has a depth that places it farther away), I can skip drawing it, as it is obstructed. If the new thing to draw is closer, I draw it and update the depth buffer with the new, closer value.