In the image above, the trees are drawn in a batch, and I'm trying to draw the small tree in front of the bigger tree using its z position, regardless of the order they are added for drawing. I'm also using an orthographic projection.
Unfortunately, I'm using an obscure game engine whose devs are either inactive or just don't care, which is why I'm hoping someone here can help. The gist is this:
start batch drawing
draw small tree at location: x, y, 1 // 1 to make it appear in front
draw big tree at location: x, y, 0
end batch drawing
In an OpenGL / glsl application, what are the things to do in general to make something like this work?
I've already tried the equivalent of
glEnable( GL_BLEND );
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
The problem you seem to be having is the difference between "drawn with non-opaque alpha values" and "actually being transparent".
OpenGL (and most other simple alpha-based rendering techniques) cannot do the kind of transparency where drawing behind an already drawn element makes part of the newly drawn element (partially) visible.
The color of any newly drawn, non-opaque pixel is a mixture of its own color and the color already in that place, i.e. only two input values exist.
The mixture is controlled by the alpha value of the newly drawn pixel.
The color already in that place has lost all information about the individual colors and alpha values that produced it.
The problem visible in your picture is caused by the fact that, in addition to the alpha-controlled mixture, there is also the z-controlled influence of other elements closer to the observer. Alpha values do not influence that mixture; the foremost element simply wins. And this includes the partially, or even fully, "transparent" parts of those closer elements, which have already been drawn (with or without alpha influence).
So the gist of this is, as mentioned in comments already,
with the simple alpha-rendering mechanisms, you have to sort rendering chronologically by distance.
I guess my second comment is not clear. I've already found the problem and its solution.
Problem: transparent fragments are not discarded in the fragment shader, so they still write to the depth buffer.
Solution:
if(gl_FragColor.a < 0.5)
discard;
I don't know if it's the best solution but it's enough for pixel art sprites.
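For reference, here is what the whole fragment shader can look like, sketched as it would sit in a C++ source (the sampler and varying names are placeholders, not anything from the engine). Sampling into a local variable first, testing, and only then writing gl_FragColor keeps the alpha test independent of the output variable:

// Alpha-tested sprite shader: discarded fragments write neither color nor
// depth, so fully transparent texels can no longer occlude sprites behind them.
const char* spriteFragmentShader = R"(
    uniform sampler2D spriteTexture;
    varying vec2 texCoord;
    void main() {
        vec4 color = texture2D(spriteTexture, texCoord);
        if (color.a < 0.5)
            discard;
        gl_FragColor = color;
    }
)";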
Thank you everyone for your time.
Where can I get an algorithm to render filled triangles? Edit3: I can't use OpenGL for rendering it. I need the per-pixel algorithm for this.
My goal is to render a regular polygon from triangles, so if I use a triangle filling algorithm, the edges from each triangle must not overlap (or leave gaps between them), because that would result in rendering errors if I use, for example, XOR to render the pixels.
Therefore, the render quality should match OpenGL's rendering: I should be able to define, for example, a circle with N vertices, and it would render correctly as a circle at any size, so it can't use only integer coordinates the way some triangle filling algorithms do.
I would need the ability to control the triangle filling myself: I could add my own logic for how each individual pixel is rendered. So I need the bare code behind the rendering, to have full control over it. It should be efficient enough to draw tens of thousands of triangles without waiting more than a second, perhaps. (I'm not sure how fast it can be at best, but I hope it won't take more than 10 seconds.)
Preferred language would be C++, but I can convert other languages to my needs.
If there are no free algorithms for this, where can I learn to build one myself, and how hard would that actually be? (me=math noob).
I added OpenGL tag since this is somehow related to it.
Edit2: I tried the algorithm here: http://joshbeam.com/articles/triangle_rasterization/ but it seems to be slightly broken. Here is a circle with 64 triangles rendered with it:
But if you zoom in, you can see the errors:
Explanation: there are 2 pixels overlapping the other triangle's color, which should not happen! (Otherwise transparency or XOR effects will produce bad rendering.)
It seems the errors are more visible on smaller circles. This is not acceptable if I want to have an XOR effect for the pixels.
What can I do to fix these, so it will fill it perfectly without overlapped pixels or gaps?
Edit4: I noticed that rendering very small circles doesn't work well. I realised this was because the coordinates were indeed being converted to integers. How can I treat the coordinates as floats and make it render the circle precisely, just like OpenGL does? Here is an example of how bad the small circles look:
Notice how perfect the OpenGL render is! THAT is what I want to achieve, without using OpenGL. NOTE: I don't just want to render a perfect circle, but any polygon shape.
There's always the half-space method.
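For anyone landing here, this is a minimal half-space (edge function) rasterizer sketch in C++; all names are mine. It assumes clockwise winding in a y-down framebuffer (swap two vertices, or flip the sign tests, for the opposite winding), samples pixel centres at x + 0.5, y + 0.5, and applies a top-left fill rule so that two triangles sharing an edge neither overlap nor leave gaps:

#include <algorithm>
#include <cmath>
#include <cstdint>

// Signed area of the parallelogram spanned by (b - a) and (p - a);
// positive when p is to the left of the directed edge a -> b in y-down coords.
static float edgeFn(float ax, float ay, float bx, float by, float px, float py) {
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// True for "top" edges (horizontal, pointing right) and "left" edges
// (pointing up in y-down coords); these own the pixels landing exactly on
// them, while the other edges give those pixels up to the neighbour.
static bool ownsEdge(float ax, float ay, float bx, float by) {
    return (ay == by && bx > ax) || (by < ay);
}

void fillTriangle(uint32_t* fb, int width, int height,
                  float x0, float y0, float x1, float y1,
                  float x2, float y2, uint32_t color) {
    // Bounding box clamped to the framebuffer.
    int minX = std::max(0, (int)std::floor(std::min({x0, x1, x2})));
    int maxX = std::min(width - 1, (int)std::ceil(std::max({x0, x1, x2})));
    int minY = std::max(0, (int)std::floor(std::min({y0, y1, y2})));
    int maxY = std::min(height - 1, (int)std::ceil(std::max({y0, y1, y2})));

    bool own0 = ownsEdge(x0, y0, x1, y1);
    bool own1 = ownsEdge(x1, y1, x2, y2);
    bool own2 = ownsEdge(x2, y2, x0, y0);

    for (int y = minY; y <= maxY; ++y) {
        for (int x = minX; x <= maxX; ++x) {
            float px = x + 0.5f, py = y + 0.5f;   // test the pixel centre
            float w0 = edgeFn(x0, y0, x1, y1, px, py);
            float w1 = edgeFn(x1, y1, x2, y2, px, py);
            float w2 = edgeFn(x2, y2, x0, y0, px, py);
            // Strictly inside, or exactly on an edge that owns its boundary.
            bool in0 = w0 > 0.0f || (w0 == 0.0f && own0);
            bool in1 = w1 > 0.0f || (w1 == 0.0f && own1);
            bool in2 = w2 > 0.0f || (w2 == 0.0f && own2);
            if (in0 && in1 && in2)
                fb[y * width + x] = color;
        }
    }
}

One caveat: with float inputs, the w == 0 boundary case is only hit reliably when neighbouring triangles share bit-identical vertices; production rasterizers convert coordinates to fixed point first for robustness.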
OpenGL uses the GPU to perform this job. This is accelerated in hardware and is called rasterization.
As far as I know, the hardware implementation is based on the scan-line algorithm.
This used to be done by creating the outline and then filling in the horizontal lines. See this link for more details: http://joshbeam.com/articles/triangle_rasterization/
Edit: I don't think this will produce the lone pixels you are after, there should be a pixel on every line.
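For what it's worth, here is a compact C++ sketch of that per-row idea (the function name and the half-open fill convention are my own). Keeping the coordinates as floats and sampling row and pixel centres also addresses the integer-rounding problem from Edit4:

#include <algorithm>
#include <cmath>
#include <cstdint>

// For each pixel row, intersect the row's centre line with the triangle's
// edges and fill the span between the hits, half-open on the right so a
// shared edge is painted by exactly one of the two triangles touching it.
void fillTriangleScanline(uint32_t* fb, int width, int height,
                          float x0, float y0, float x1, float y1,
                          float x2, float y2, uint32_t color) {
    float xs[3] = {x0, x1, x2}, ys[3] = {y0, y1, y2};
    int yStart = std::max(0, (int)std::ceil(std::min({y0, y1, y2}) - 0.5f));
    int yEnd   = std::min(height - 1, (int)std::floor(std::max({y0, y1, y2}) - 0.5f));
    for (int y = yStart; y <= yEnd; ++y) {
        float cy = y + 0.5f;                       // row centre
        float left = 1e30f, right = -1e30f;
        for (int i = 0; i < 3; ++i) {
            int j = (i + 1) % 3;
            if ((ys[i] <= cy) != (ys[j] <= cy)) {  // edge crosses this row
                float t = (cy - ys[i]) / (ys[j] - ys[i]);
                float x = xs[i] + t * (xs[j] - xs[i]);
                left  = std::min(left, x);
                right = std::max(right, x);
            }
        }
        if (right <= left) continue;               // row missed the triangle
        // Fill pixel centres in [left, right): x + 0.5 >= left, x + 0.5 < right.
        int xStart = std::max(0, (int)std::ceil(left - 0.5f));
        int xEnd   = std::min(width - 1, (int)std::ceil(right - 0.5f) - 1);
        for (int x = xStart; x <= xEnd; ++x)
            fb[y * width + x] = color;
    }
}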
Your problem looks a lot like the classic problem of triangles sharing an edge. Where two triangles share an edge, one triangle is allowed to conquer that space while the other has to leave it blank.
When working with a graphics card, one usually gets this behavior by applying a drawing order from left to right while also enabling a z-buffer test, or by testing whether the pixel has already been drawn. So if a pixel with the very same z-value is already set, changing the pixel is not allowed.
In your example with the circles, the shared edges of neighboring circle segments are not exact. You should check whether the two edges are calculated differently, and why.
Whenever you draw two different shapes and you see something like that, you can either fix your model (so they share all the edge vertices), go for a z-buffer test, or use a color test.
You can also minimize the effect by drawing edges into a sub-buffer that has a higher resolution and down-sampling it. Since this does not affect the whole area, it is more cost-effective in space and time than down-sampling the whole scene.
Edit: just so you know, I have not solved this problem perfectly yet. Currently I am using a 0.5px offset, and it seems to work, but as others have said, it is not the "proper" solution. So I am looking for the real deal; the diamond exit rule solution didn't work at all.
Perhaps it is a bug in the graphics card, but if so, then any professional programmer should have a bullet-proof workaround for it, right?
Edit: I have now bought a new NVIDIA card (I had an ATI card before), and I still experience this problem. I also see the same bug in many, many games. So I guess it is impossible to fix in a clean way?
Here is image of the bug:
How do you overcome this problem? Preferably with a non-shader solution, if possible. I tried to set an offset for the first line when I drew the 4 lines individually instead of using wireframe mode, but that didn't work out very well: when the rectangle size changed, it sometimes looked like a perfect rectangle, but sometimes even worse than before my fix.
This is how i render the quad:
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glBegin(GL_QUADS);
glVertex2f(...);
glVertex2f(...);
glVertex2f(...);
glVertex2f(...);
glEnd();
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
Yes, I know I can use vertex arrays or VBO's, but that isn't the point here.
I also tried GL_LINE_LOOP, but it didn't fix the bug.
Edit: One solution, which works so far, is from the answer to "Opengl pixel perfect 2D drawing" by Lie Ryan:
Note that OpenGL coordinate space has no notion of integers; everything is a float, and the "centre" of an OpenGL pixel is really at 0.5,0.5 instead of its top-left corner. Therefore, if you want a 1px wide line from 0,0 to 10,10 inclusive, you really have to draw a line from 0.5,0.5 to 10.5,10.5.
This will be especially apparent if you turn on anti-aliasing: if you try to draw from 50,0 to 50,100, you may see a blurry 2px wide line, because the line fell in between two pixels.
Although you've discovered that shifting your points by 0.5 makes the problem go away it's not for the reason that you think.
The answer does indeed lie in the diamond exit rule which is also at the heart of the correctly accepted answer to Opengl pixel perfect 2D drawing.
The diagram below shows four fragments/pixels with a diamond inscribed within each. The four coloured spots represent possible starting points for your quad/line loop i.e. the window co-ordinates of the first vertex.
You didn't say which way you were drawing the quad but it doesn't matter. I'll assume, for argument's sake, that you are drawing it clockwise. The issue is whether the top left of the four fragments shown will be produced by rasterising either your first or last line (it cannot be both).
If you start on the yellow vertex then the first line passes through the diamond and exits it as it passes horizontally to the right. The fragment will therefore be produced as a result of the first line's rasterisation.
If you start on the green vertex then the first line exits the fragment without passing through the diamond and hence never exits the diamond. However the last line will pass through it vertically and exit it as it ascends back to the green vertex. The fragment will therefore be produced as a result of the last line's rasterisation.
If you start on the blue vertex then the first line passes through the diamond and exits it as it passes horizontally to the right. The fragment will therefore be produced as a result of the first line's rasterisation.
If you start on the red vertex then the first line exits the fragment without passing through the diamond and hence never exits the diamond. The last line will also not pass through the diamond and therefore not exit it as it ascends back to the red vertex. The fragment will therefore not be produced as a result of either line's rasterisation.
Note that any vertex that is inside the diamond will automatically cause the fragment to be produced as the first line must exit the diamond (provided your quad is actually big enough to leave the diamond of course).
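If you want to play with this, here is a toy C++ check of the rule (entirely my own construction, using brute-force sampling rather than exact intersection): a fragment's diamond is the region |dx| + |dy| < 0.5 around its centre, and a line produces the fragment when it exits that diamond.

#include <cmath>

struct Vec2 { float x, y; };

// L1 ("diamond") distance from a fragment centre.
static float diamondDist(Vec2 p, Vec2 centre) {
    return std::fabs(p.x - centre.x) + std::fabs(p.y - centre.y);
}

// Walk from a to b in small steps and report whether the walk ever leaves
// the diamond after having been inside it; that is the condition under
// which the rasterizer emits this fragment for the segment.
bool exitsDiamond(Vec2 a, Vec2 b, Vec2 fragCentre) {
    const int steps = 1024;
    bool wasInside = diamondDist(a, fragCentre) < 0.5f;
    for (int i = 1; i <= steps; ++i) {
        float t = (float)i / steps;
        Vec2 p = { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y) };
        bool inside = diamondDist(p, fragCentre) < 0.5f;
        if (wasInside && !inside) return true; // crossed the boundary outward
        wasInside = inside;
    }
    return false; // never entered, or ended while still inside
}

Running this for the four coloured starting points in the diagram reproduces exactly the case analysis above.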
This is not a bug; it is exactly following the specification. The last pixel of a line is not drawn, to prevent overdraw with following line segments, which would cause problems with blending. Solution: send the last vertex twice.
Code Update
// don't use glPolygonMode, it doesn't
// do what you think it does
glBegin(GL_LINE_STRIP);
glVertex2f(a);
glVertex2f(b);
glVertex2f(c);
glVertex2f(d);
glVertex2f(a);
glVertex2f(a); // resend last vertex another time, to close the loop
glEnd();
BTW: You should learn how to use vertex arrays. Immediate mode (glBegin, glEnd, glVertex calls) has been removed from OpenGL 3.x core and onward.
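For reference, the same closed outline with a client-side vertex array; the a..d coordinates are placeholder values to make the snippet self-contained:

GLfloat ax = 10, ay = 10, bx = 90, by = 10, cx = 90, cy = 90, dx = 10, dy = 90;
GLfloat verts[] = { ax, ay, bx, by, cx, cy, dx, dy, ax, ay, ax, ay }; // last vertex doubled
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glDrawArrays(GL_LINE_STRIP, 0, 6); // 6 vertices: a, b, c, d, a, a
glDisableClientState(GL_VERTEX_ARRAY);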
@Troubadour described the problem perfectly. It's not a driver bug. GL is acting exactly as specified. It's designed for sub-pixel accurate representation of the world-space object in device space. That's what it's doing. Solutions are: 1) anti-alias, so the device space affords more fidelity, and 2) arrange for a world coordinate system where all transformed vertices fall in the middle of a device pixel. The latter is the "general" solution you are looking for.
In all cases you can achieve 2) by moving the input points around. Just shift each point enough to take its transform to the middle of a device pixel.
For some (unaltered) point sets, you can do it by slightly modifying the view transformation. The 1/2 pixel shift is an example. It works, e.g., if the world space is an integer-scaled transform of device space followed by a translation by integers, where world coordinates are also integers. Under many other conditions, though, +1/2 won't work.
Edit:
NB: as I said, a uniform shift (1/2 or any other) can be built into the view transform. There is no reason to fiddle with the vertex coordinates. Just prepend a translation, e.g. glTranslatef(0.5f, 0.5f, 0.0f);
Try changing 0.5 to 0.375, the odd magic number that's used everywhere.
It used to be used by OpenGL, X11, etc.
It exists because of the diamond rule mentioned above and how graphics cards draw, to avoid unnecessary overdraw of pixels.
I could provide a link, but there are lots of them; just search for the keywords "opengl 0.375 diamond rule" if you need more information. It's about how outlines and fills are treated algorithmically in OpenGL. It's needed for pixel-perfect rendering of textures in, for example, 2D sprites as well.
Take a look at this.
I want to add something: doing what you want, i.e. implementing the diamond rule in code, would be a simple one-liner. Change 0.5 to 0.375, like this:
glTranslatef(0.375, 0.375, 0.0);
and it should render properly.
I know there are several techniques to achieve this, but none of them seems sufficient.
Using a Sobel / Laplace filter doesn't find all the correct edges (and finds unwanted ones), is slow, and doesn't give me control over the outline width.
What I have settled on for now is rendering the backside of my objects first, in a solid color and scaled a little bigger than the actual objects. The result does look good, but I really want my outlines to have a constant width.
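A minimal sketch of that two-pass setup (drawMesh and the scale factor are placeholders, not my exact code):

void drawMesh(); // placeholder: supplied elsewhere, emits the model's triangles

void drawWithOutline(float outlineScale) { // e.g. 1.03f
    // Pass 1: enlarged back faces in a solid colour; only the faces pointing
    // away from the camera survive culling, forming a silhouette shell.
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);
    glColor3f(0.0f, 0.0f, 0.0f); // outline colour
    glPushMatrix();
    glScalef(outlineScale, outlineScale, outlineScale);
    drawMesh();
    glPopMatrix();

    // Pass 2: the object itself, normally shaded.
    glCullFace(GL_BACK);
    drawMesh();
}

Note that the scaling happens about the model origin, which is presumably exactly why the resulting width varies across the silhouette.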
I have already tried rendering the backside of my objects with thick wireframe lines. That gives me a constant outline width, but line width is deprecated, it produces rendering artifacts, and it leaves gaps if the outline abruptly changes direction (like on a cube, for example). I have not yet tried a third rendering pass that draws a point the size of the wireframe lines at each vertex, because of the other problems with this technique.
Any ideas?
Edit: I even looked at finding the edges myself using a geometry shader, as described in http://prideout.net/blog/?p=54, but it suffers from the same gaps as the wireframe technique.
Edit: I was able to get rid of the rendering artifacts of the wireframe technique by disabling GL_DEPTH_TEST while drawing the outlines. Unfortunately, I also lost the outlines on overlapping objects...
My goal is to get the same effect they use on characters in the Dragons Lair 3 game. Does anyone know how they did it?
In case you're after real edge detection, I've found that you can get pretty good results with a 5x5 LoG (Laplacian of Gaussian) convolution kernel, applied to the depth buffer and blended over the rendered object (possibly with decent FSAA). You need some tuning in the fragment shader in order to clamp the blended outline, but the results are good. (And it's a matter of what you really want, by the way.)
Note that:
1) Laplace filtering and LoG filtering are different things and produce different results.
2) If you apply the convolution to the depth buffer instead of the rendered image, you end up with totally different results. Furthermore, if control over the outline width is desired, a dilate filter followed by a selective-erode pass can be applied. This way you end up with a render that closely matches a hand-drawn sketch made with a marker, and you have fine control over the tip size, at the cost of two extra passes.
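A sketch of such a pass, with a common 5x5 LoG approximation baked in (the sampler and uniform names and the threshold scale are my own; it assumes a full-screen quad whose texture coordinates arrive in gl_TexCoord[0]):

const char* logFragmentShader = R"(#version 120
    uniform sampler2D depthTex; // the scene's depth buffer bound as a texture
    uniform vec2 texelSize;     // 1.0 / depth texture resolution
    // Common 5x5 Laplacian-of-Gaussian approximation.
    const float kernel[25] = float[25](
         0.0,  0.0, -1.0,  0.0,  0.0,
         0.0, -1.0, -2.0, -1.0,  0.0,
        -1.0, -2.0, 16.0, -2.0, -1.0,
         0.0, -1.0, -2.0, -1.0,  0.0,
         0.0,  0.0, -1.0,  0.0,  0.0);
    void main() {
        float response = 0.0;
        for (int i = 0; i < 5; ++i)
            for (int j = 0; j < 5; ++j)
                response += kernel[i * 5 + j] *
                    texture2D(depthTex, gl_TexCoord[0].st + vec2(i - 2, j - 2) * texelSize).r;
        // Turn the response into outline opacity; the scale needs tuning to
        // your near/far planes.
        float outline = clamp(abs(response) * 50.0, 0.0, 1.0);
        gl_FragColor = vec4(0.0, 0.0, 0.0, outline);
    }
)";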
http://img136.imageshack.us/img136/3508/texturefailz.png
This is my current program. I know it's terribly ugly; I found two random textures online ('lava' and 'paper') which don't even seem to tile, but that's not the problem at the moment.
I'm trying to figure out the first steps of an RPG. This is a top-down screenshot of a 10x10 heightmap (currently set to all 0s, so it's just a plane). I texture it by making one pass per texture per quad, and each vertex has an alpha value for each texture so that OpenGL blends them.
The problem is that the textures trend along diagonals. Even though I'm drawing with GL_QUADS, this is presumably because the quads are turned into sets of two triangles, and the alpha values at the corners then have more weight along the hypotenuses. But I wasn't expecting that to matter at all. By drawing quads, I was hoping that even though they were split into triangles at some low level, the vertex alphas would make each texture radiate in a circular outward gradient from its vertices.
How can I fix this to make it look better? Do I need to scrap this and try a whole different approach? Is there a different approach for something like this? I'd love to hear alternatives as well.
Feel free to ask questions and I'll be here refreshing until I get a valid answer, so I'll comment as fast as I can.
Thanks!!
EDIT:
Here is the kind of thing I'd like to achieve. No, I'm obviously not one of the billions of noobs out there "trying to make an MMORPG"; I'm using it as an example because it's very much like what I want:
http://img300.imageshack.us/img300/5725/runescapehowdotheytile.png
How do you think this is done? Part of it must be vertex alphas like I'm doing, because of the smooth gradients... But maybe they have a list of different triangle configurations within a tile, and each tile stores which configuration it uses? For example, configuration 1 is a triangle in the top-left and one in the bottom-right, 2 is the top-right and bottom-left, 3 is a quad on the top and a quad on the bottom, etc.? Can you think of any other way I'm missing? Or, if you've got it all figured out, please share how they do it!
The diagonal artefacts are caused by having all of your quads split into triangles along the same diagonal. You define points [0,1,2,3] for your quad. Each quad is split into triangles [0,1,2] and [1,2,3]. Try drawing with GL_TRIANGLES and alternating your choice of diagonal. There are probably more efficient ways of doing this using GL_TRIANGLE_STRIP or GL_QUAD_STRIP.
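A sketch of that alternation in immediate mode, matching the style used elsewhere in the question (emitVertex is a placeholder for whatever sets your per-vertex alphas and texture coordinates):

void emitVertex(int x, int y) {
    // placeholder: set per-vertex colors/alphas and texcoords here, then:
    glVertex2i(x, y);
}

// Checkerboard the split diagonal so the blending bias doesn't line up
// along one direction across the whole map.
void drawHeightmapGrid(int tilesX, int tilesY) {
    glBegin(GL_TRIANGLES);
    for (int y = 0; y < tilesY; ++y) {
        for (int x = 0; x < tilesX; ++x) {
            if ((x + y) % 2 == 0) {
                // Split along the (x,y)-(x+1,y+1) diagonal.
                emitVertex(x, y);     emitVertex(x + 1, y);     emitVertex(x + 1, y + 1);
                emitVertex(x, y);     emitVertex(x + 1, y + 1); emitVertex(x, y + 1);
            } else {
                // Split along the (x+1,y)-(x,y+1) diagonal.
                emitVertex(x, y);     emitVertex(x + 1, y);     emitVertex(x, y + 1);
                emitVertex(x + 1, y); emitVertex(x + 1, y + 1); emitVertex(x, y + 1);
            }
        }
    }
    glEnd();
}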
I think you are doing it right, but you should increase the resolution of your heightmap a lot to get finer tessellation!
For example, look at this heightmap renderer:
mdterrain
It shows the same artifacts at low resolution but gets better as you increase the iterations.
I've never done this myself, but I've read several guides (which I can't find right now), and it seems pretty straightforward; it can even be optimized using shaders.
Create a master texture to control the mixing of 4 sub-textures. Use the r, g, b, a components of the master texture as the percentage mix of each sub-texture (lava, paper, etc.). You can easily paint a master texture using Paint.NET, Photoshop, or GIMP by painting into each color channel. You can compute the resulting texture beforehand using all 5 textures, OR you can calculate the result on the fly with a fragment shader. I don't have a good example of either, but I think you can figure it out given how far you've come.
The end result will be "pixel-perfect" blending (depending on the master texture's resolution and filtering) and will avoid the vertex blending issues.
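A sketch of the fragment-shader variant (all sampler and varying names are my own; it assumes the master map spans the whole terrain while the detail textures tile):

const char* splatFragmentShader = R"(
    uniform sampler2D masterMap;              // r,g,b,a = per-pixel mix weights
    uniform sampler2D tex0, tex1, tex2, tex3; // e.g. lava, paper, ...
    varying vec2 uv;        // terrain-wide coordinates for the master map
    varying vec2 detailUv;  // tiled coordinates for the detail textures
    void main() {
        vec4 w = texture2D(masterMap, uv);
        gl_FragColor = texture2D(tex0, detailUv) * w.r
                     + texture2D(tex1, detailUv) * w.g
                     + texture2D(tex2, detailUv) * w.b
                     + texture2D(tex3, detailUv) * w.a;
    }
)";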
By default it seems that objects are drawn front to back. I am drawing a 2-D UI object and would like to create it back to front. For example, I could create a white square first, then create a slightly smaller black square on top of it, thus creating a black pane with a white border. This post had some discussion of it and described this order as the "painter's algorithm", but ultimately the example they gave simply rendered the objects in reverse order to get the desired effect. I figure back-to-front rendering (first objects go in back, subsequent objects get drawn on top) can be achieved via some transformation (glOrtho?).
I will also mention that I am not interested in a solution using a wrapper library such as GLUT.
I have also found that the default behavior on the Mac using the Cocoa NSOpenGLView appears to be to draw back to front, whereas on Windows I cannot get this behavior. The setup code I am using on Windows is this:
glViewport (0, 0, wd, ht);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho (0.0f, wd, ht, 0.0f, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
The following call will turn off depth testing, causing objects to be drawn in the order created. This will in effect cause objects to draw back to front.
glDisable(GL_DEPTH_TEST); // ignore depth (Z) values so later draws paint over earlier ones
Be sure you do not call this:
glEnable (GL_DEPTH_TEST); // Enables Depth Testing
For your specific question, no there is no standardized way to specify depth ordering in OpenGL. Some implementations may do front to back depth ordering by default because it's usually faster, but that is not guaranteed (as you discovered).
But I don't really see how it will help you in your scenario. If you draw a black square in front of a white square the black square should be drawn in front of the white square regardless of what order they're drawn in, as long as you have depth buffering enabled. If they're actually coplanar, then neither one is really in front of the other and any depth sorting algorithm would be unpredictable.
The tutorial that you posted a link to only talked about it because depth sorting IS relevant when you're using transparency. But it doesn't sound to me like that's what you're after.
But if you really have to do it that way, then you have to do it yourself. First send your white square to the rendering pipeline, force the render, and then send your black square. If you do it that way, and disable depth buffering, then the squares can be coplanar and you will still be guaranteed that the black square is drawn over the white square.
Drawing order is hard. There is no easy solution. The painter's algorithm (sort objects by their distance in relation to your camera's view) is the most straightforward, but as you have discovered, it doesn't solve all cases.
I would suggest a combination of the painter's algorithm and layers. You build layers for specific elements of your program: a background layer, object layers, special-effect layers, and a GUI layer.
Use the painter's algorithm on each layer's items. In some special layers (like your GUI layer), don't sort with the painter's algorithm but by your call order: you call that white square first, so it gets drawn first.
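A minimal sketch of that arrangement (the types and names are invented for illustration): world layers get sorted by camera distance, while the GUI layer keeps call order:

#include <algorithm>
#include <vector>

struct Item {
    float depth; // distance from the camera, plus whatever is needed to draw it
};

struct Layer {
    std::vector<Item> items; // GUI items stay in the order they were added
    bool painterSort;        // true for world layers, false for the GUI layer
};

void drawLayer(Layer& layer) {
    if (layer.painterSort) // farthest first, so nearer items paint over
        std::sort(layer.items.begin(), layer.items.end(),
                  [](const Item& a, const Item& b) { return a.depth > b.depth; });
    for (const Item& item : layer.items) {
        // issue the draw calls for 'item' here
    }
}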
Draw items that you want to be in back slightly behind the items that you want in front. That is, actually change the z value (assuming z is perpendicular to the screen plane). You don't have to change it much to get the items to draw in front of each other, and if you only change the z value slightly, you shouldn't notice much of an offset from the desired position. You could even get really fancy and calculate the correct x,y position based on the changed z position, so that the item appears where it is supposed to be.
Your stuff will be drawn in the exact order you make the glBegin/glEnd calls. You can get depth buffering using the z-buffer, and if your 2D objects have different z values, you can get the effect you want that way. The only way you'd see the behavior you describe on the Mac is if the program draws things in back-to-front order manually or uses the z-buffer to accomplish it; OpenGL otherwise does not do this automatically.
As AlanKley pointed out, the way to do this is to disable the depth buffer. The painter's algorithm is really a 2D scan-conversion technique used to render polygons in the correct order when you don't have something like a z-buffer. But you wouldn't apply it to 3D polygons. You'd typically transform and project them (handling intersections with other polygons) and then sort the resulting list of 2D projected polygons by their projected z-coordinate, then draw them in reverse z-order.
I've always thought of the painter's algorithm as an alternate technique for hidden surface removal when you can't (or don't want to) use a z-buffer.