In OpenGL, how can you get items to draw back to front?

By default it seems that objects are drawn front to back. I am drawing a 2-D UI object and would like to build it back to front. For example, I could draw a white square first and then a slightly smaller black square on top of it, creating a black pane with a white border. This post had some discussion on it and described this ordering as the "painter's algorithm", but ultimately the example it gave simply rendered the objects in reverse order to get the desired effect. I figure back-to-front rendering (first objects go in back, subsequent objects get drawn on top) can be achieved via some transformation (glOrtho?).
I will also mention that I am not interested in a solution using a wrapper library such as GLUT.
I have also found that the default behavior on the Mac using the Cocoa NSOpenGLView appears to be back-to-front drawing, whereas on Windows I cannot get this behavior. The setup code I am using on Windows is this:
glViewport (0, 0, wd, ht);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho (0.0f, wd, ht, 0.0f, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

The following call turns off depth testing, causing objects to be drawn in the order the draw calls are made. This will in effect draw objects back to front.
glDisable(GL_DEPTH_TEST); // Ignore depth (Z) values so later draws land on top
(If you need the depth test for other things, glDepthFunc(GL_ALWAYS) lets every fragment through and has the same effect.) Be sure you do not leave this enabled with the default depth function:
glEnable (GL_DEPTH_TEST); // Enables Depth Testing

For your specific question: no, there is no standardized way to specify depth ordering in OpenGL. Some implementations may do front-to-back depth ordering by default because it's usually faster, but that is not guaranteed (as you discovered).
But I don't really see how it will help you in your scenario. If you draw a black square in front of a white square, the black square should appear in front of the white square regardless of the order they're drawn in, as long as you have depth buffering enabled. If they're actually coplanar, then neither one is really in front of the other, and any depth-sorting algorithm would be unpredictable.
The tutorial that you posted a link to only talked about it because depth sorting IS relevant when you're using transparency. But it doesn't sound to me like that's what you're after.
But if you really have to do it that way, then you have to do it yourself. First send your white square to the rendering pipeline, force the render, and then send your black square. If you do it that way, and disable depth buffering, then the squares can be coplanar and you will still be guaranteed that the black square is drawn over the white square.
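Here is a minimal sketch of that (the coordinates are just made-up examples, and it assumes the glOrtho setup from the question): with the depth test disabled, whatever is drawn last simply lands on top.
glDisable(GL_DEPTH_TEST);          // call order now decides what is on top

glColor3f(1.0f, 1.0f, 1.0f);       // white outer square, drawn first, ends up in back
glBegin(GL_QUADS);
glVertex2f(10.0f, 10.0f);
glVertex2f(110.0f, 10.0f);
glVertex2f(110.0f, 110.0f);
glVertex2f(10.0f, 110.0f);
glEnd();

glColor3f(0.0f, 0.0f, 0.0f);       // slightly smaller black square, drawn second, ends up in front
glBegin(GL_QUADS);
glVertex2f(15.0f, 15.0f);
glVertex2f(105.0f, 15.0f);
glVertex2f(105.0f, 105.0f);
glVertex2f(15.0f, 105.0f);
glEnd();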

Drawing order is hard. There is no easy solution. The painter's algorithm (sort objects by their distance from the camera) is the most straightforward, but as you have discovered, it doesn't solve all cases.
I would suggest a combination of the painter's algorithm and layers. You build layers for specific elements of your program: a background layer, object layers, special-effect layers, and a GUI layer.
Use the painter's algorithm on each layer's items. In some special layers (like your GUI layer), don't sort with the painter's algorithm but by your call order: you call that white square first, so it gets drawn first.
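As a rough sketch of that layering idea (the Item and Layer types are invented for illustration, not taken from any particular engine), sort only the layers that want the painter's algorithm and leave the GUI layer in call order:
#include <algorithm>
#include <vector>

struct Item {
    float depth;                                        // distance from the camera
    void draw() const { /* issue the GL calls for this item */ }
};
struct Layer {
    bool sortByDepth;                                   // false for the GUI layer
    std::vector<Item> items;
};

void drawScene(std::vector<Layer>& layers)              // background, objects, effects, GUI, ...
{
    for (Layer& layer : layers) {
        if (layer.sortByDepth)                          // painter's algorithm: farthest first
            std::sort(layer.items.begin(), layer.items.end(),
                      [](const Item& a, const Item& b) { return a.depth > b.depth; });
        for (const Item& item : layer.items)            // unsorted layers keep their call order
            item.draw();
    }
}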

Draw the items that you want in back slightly behind the items that you want in front. That is, actually change the z value (assuming z is perpendicular to the screen plane). You don't have to change it much to get the items to draw in front of each other, and if you only change the z value slightly, you shouldn't notice much of an offset from the desired position. You could even go really fancy and calculate the correct x,y position based on the changed z position, so that the item appears exactly where it is supposed to be.
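For example (made-up coordinates, assuming the glOrtho(..., -1.0f, 1.0f) projection from the question, under which larger z ends up nearer the viewer, and a context that has a depth buffer), the draw order stops mattering once the z values differ even slightly:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);

glColor3f(0.0f, 0.0f, 0.0f);              // black square at z = 0.01, slightly nearer the viewer
glBegin(GL_QUADS);
glVertex3f(15.0f, 15.0f, 0.01f);
glVertex3f(105.0f, 15.0f, 0.01f);
glVertex3f(105.0f, 105.0f, 0.01f);
glVertex3f(15.0f, 105.0f, 0.01f);
glEnd();

glColor3f(1.0f, 1.0f, 1.0f);              // white square at z = 0.0; drawn second, yet it stays behind
glBegin(GL_QUADS);
glVertex3f(10.0f, 10.0f, 0.0f);
glVertex3f(110.0f, 10.0f, 0.0f);
glVertex3f(110.0f, 110.0f, 0.0f);
glVertex3f(10.0f, 110.0f, 0.0f);
glEnd();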

Your stuff will be drawn in the exact order in which you issue the glBegin/glEnd calls. You can get depth buffering using the z-buffer, and if your 2D objects have different z values, you can get the effect you want that way. The only way you would be seeing the behavior you describe on the Mac is if the program draws things in back-to-front order manually or uses the z-buffer to accomplish it. OpenGL otherwise has no automatic functionality like what you describe.

As AlanKley pointed out, the way to do this is to disable the depth test. The painter's algorithm is really a 2D scan-conversion technique used to render polygons in the correct order when you don't have something like a z-buffer. But you wouldn't apply it to 3D polygons. You'd typically transform and project them (handling intersections with other polygons), sort the resulting list of 2D projected polygons by their projected z-coordinate, and then draw them in reverse z-order.
I've always thought of the painter's algorithm as an alternate technique for hidden surface removal when you can't (or don't want to) use a z-buffer.


far parts invisible, though closer parts are alpha-transparent

In the image above, the trees are drawn in a batch, and I'm trying to draw the small tree in front of the bigger tree using its z position, regardless of the order they are added for drawing. I'm also using an orthographic projection.
Unfortunately, I'm using a little-known game engine whose devs are either inactive or just don't care, which is why I'm hoping someone here can help. The gist is this:
start batch drawing
draw small tree at location: x, y, 1 // 1 to make it appear in front
draw big tree at location: x, y, 0
end batch drawing
In an OpenGL / GLSL application, what are the things to do in general to make something like this work?
I've already tried the equivalent of
glEnable( GL_BLEND );
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
The problem you seem to be having is the difference between "drawn with non-opaque alpha values" and "actually being transparent".
OpenGL (and most other simple alpha-based rendering techniques) cannot do the kind of transparency where drawing behind an already drawn element makes part of the newly drawn element (partially) visible.
The color of any newly-drawn, non-opaque pixel is a mixture of its own color and the color already in that place; i.e. only two input values exist.
The mixture is controlled by the alpha value of the newly drawn pixel.
The color "already in that place" has lost the information about the individual colors and alpha values that contributed to it.
The problem visible in your picture is caused by the fact that, in addition to the alpha-controlled mixture, there is also the z-controlled influence of other elements closer to the observer. Alpha values do not influence that part; the foremost element simply wins. And this includes the partially, or even fully, "transparent" parts of those closer elements, which have already been drawn (with or without alpha influence).
So the gist of this is, as mentioned in the comments already: with the simple alpha-rendering mechanisms, you have to sort your rendering back to front by distance.
I guess my second comment is not clear. I've already found the problem and its solution.
Problem: the alpha is not discarded in the fragment shader
Solution:
if (gl_FragColor.a < 0.5)
    discard;
I don't know if it's the best solution but it's enough for pixel art sprites.
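If your engine exposes raw GL state and runs on the fixed-function or compatibility pipeline, the legacy alpha test gives the same hard cut-off without touching the shader; this is only a sketch of the idea, not something this particular engine is known to support:
glEnable(GL_ALPHA_TEST);          // legacy / compatibility profile only
glAlphaFunc(GL_GREATER, 0.5f);    // reject fragments with alpha <= 0.5, same cut-off as the discard above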
Thank you everyone for your time.

transparency in opengl (using FLTK)

I'm drawing some 3D structures in a Fl_Gl_Window in FLTK's implementation of OpenGL. These images are drawn and rotated, so the code looks something like
glTranslatef(-xshift, -yshift, -zshift);
glRotatef(angle, ax, ay, az);   // glRotatef takes an angle plus a 3-component rotation axis
glTranslatef(xshift, yshift, zshift);
glColor4f((120.0/256.0), (120.0/256.0), (120.0/256.0), 0.2);
for (int side = 0; side < num_sides; side++) {
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);
    glBegin(GL_TRIANGLES);
    // draw shape
    glEnd();
    glDisable(GL_BLEND);
}
and it almost works, except that at certain angles the transparency doesn't work properly. For example, if I draw a cube, from one side it will look transparent all the way through, without being able to discern the two sides, but from the other side one face will appear darker, as it is supposed to. It's as if it calculates the transparency too 'early', as in before the rotation. Am I doing something wrong? Should I move the rotation below the transparency effects (i.e. before them in execution), or does the order of the triangles matter?
The order of the triangles matters. To get the desired effect for transparency you need to render the triangles in back-to-front order, because hardware blending works by reading the color already in the framebuffer and blending it with the fragment currently being shaded. That's why you are getting different results when you rotate your cube: you are not changing the order of the triangles in the cube. You may also want to look into order-independent transparency techniques.
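A rough sketch of that per-frame sort (the Vec3 and Triangle structs are invented for illustration): take each triangle's centroid, measure its distance to the camera, and submit the farthest triangles first.
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 v[3]; };

// eye = camera position in the same space as the triangle vertices
void sortBackToFront(std::vector<Triangle>& tris, const Vec3& eye)
{
    auto dist2 = [&](const Triangle& t) {
        Vec3 c = { (t.v[0].x + t.v[1].x + t.v[2].x) / 3.0f,     // centroid
                   (t.v[0].y + t.v[1].y + t.v[2].y) / 3.0f,
                   (t.v[0].z + t.v[1].z + t.v[2].z) / 3.0f };
        float dx = c.x - eye.x, dy = c.y - eye.y, dz = c.z - eye.z;
        return dx * dx + dy * dy + dz * dz;                     // squared distance is enough for ordering
    };
    std::sort(tris.begin(), tris.end(),
              [&](const Triangle& a, const Triangle& b) { return dist2(a) > dist2(b); }); // farthest first
}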
Depending on how many triangles you have, sorting them every frame can get really expensive. One approximation technique is to presort the triangles along the x, y, and z axes and then choose the sorted order that most closely matches your viewing direction. This only works to a certain extent. One popular order-independent transparency technique is depth peeling. Here's a tutorial with some code for implementing it: http://mmmovania.blogspot.com/2010/11/order-independent-transparency.html?m=1. You might also want to read the original paper to get a better understanding of the technique: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.18.9286&rep=rep1&type=pdf.

Perfect filled triangle rendering algorithm?

Where can I get an algorithm to render filled triangles? Edit3: I can't use OpenGL for rendering it; I need a per-pixel algorithm.
My goal is to render a regular polygon from triangles, so if I use this triangle-filling algorithm, the edges of adjacent triangles shouldn't overlap (or leave gaps between them), because that would result in rendering errors if I use, for example, XOR to render the pixels.
Therefore, the render quality should match to OpenGL rendering, so I should be able to define - for example - a circle with N-vertices, and it would render like a circle with any size correctly; so it doesn't use only integer coordinates to render it like some triangle filling algorithms do.
I would need the ability to control the triangle filling myself: I want to add my own logic for how each individual pixel is rendered. So I need the bare code behind the rendering, to have full control over it. It should be efficient enough to draw tens of thousands of triangles without waiting more than a second, perhaps. (I'm not sure how fast it can be at best, but I hope it won't take more than 10 seconds.)
Preferred language would be C++, but I can convert other languages to my needs.
If there are no free algorithms for this, where can I learn to build one myself, and how hard would that actually be? (me=math noob).
I added OpenGL tag since this is somehow related to it.
Edit2: I tried the algorithm here: http://joshbeam.com/articles/triangle_rasterization/ But it seems to be slightly broken; here is a circle with 64 triangles rendered with it:
But if you zoom in, you can see the errors:
Explanation: there are 2 pixels overlapping the other triangle's color, which should not happen! (Otherwise transparency or XOR etc. effects will produce bad rendering.)
It seems like the errors are more visible on smaller circles. This is not acceptable if I want to have an XOR effect for the pixels.
What can I do to fix these, so it will fill perfectly without overlapping pixels or gaps?
Edit4: I noticed that rendering very small circles isn't very good. I realised this was because the coordinates were indeed converted to integers. How can I treat the coordinates as floats and make it render the circle precisely and perfectly, just like in OpenGL? Here is an example of how bad the small circles look:
Notice how perfect the OpenGL render is! THAT is what I want to achieve, without using OpenGL. NOTE: I don't just want to render a perfect circle, but any polygon shape.
There's always the half-space method.
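Here's a minimal sketch of the half-space (edge-function) approach with a consistent edge-ownership rule, which is what keeps a shared edge from being filled twice or left with gaps. The putPixel callback and float-vertex input are assumptions; production rasterizers also snap coordinates to fixed point so the edge tests are exact, which raw floats don't fully guarantee.
#include <algorithm>
#include <cmath>
#include <functional>
#include <utility>

struct Pt { float x, y; };

// Twice the signed area of triangle (a, b, c); positive when c lies to the left of edge a->b.
static float edgeFn(const Pt& a, const Pt& b, const Pt& c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Edge-ownership rule: a pixel lying exactly on an edge is claimed for only one of the two
// possible directions of that edge, so two triangles sharing an edge (which traverse it in
// opposite directions) never both fill, or both skip, the same pixel.
static bool ownsEdge(const Pt& a, const Pt& b)
{
    return (a.y == b.y && b.x > a.x) || (b.y < a.y);
}

void fillTriangle(Pt v0, Pt v1, Pt v2, const std::function<void(int, int)>& putPixel)
{
    if (edgeFn(v0, v1, v2) < 0.0f) std::swap(v1, v2);   // normalize winding

    int minX = (int)std::floor(std::min({ v0.x, v1.x, v2.x }));
    int maxX = (int)std::ceil (std::max({ v0.x, v1.x, v2.x }));
    int minY = (int)std::floor(std::min({ v0.y, v1.y, v2.y }));
    int maxY = (int)std::ceil (std::max({ v0.y, v1.y, v2.y }));

    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x) {
            Pt p = { x + 0.5f, y + 0.5f };               // sample at the pixel centre
            float w0 = edgeFn(v1, v2, p);
            float w1 = edgeFn(v2, v0, p);
            float w2 = edgeFn(v0, v1, p);
            bool in0 = w0 > 0.0f || (w0 == 0.0f && ownsEdge(v1, v2));
            bool in1 = w1 > 0.0f || (w1 == 0.0f && ownsEdge(v2, v0));
            bool in2 = w2 > 0.0f || (w2 == 0.0f && ownsEdge(v0, v1));
            if (in0 && in1 && in2)
                putPixel(x, y);                          // pixel centre is inside the triangle
        }
}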
OpenGL uses the GPU to perform this job. This is accelerated in hardware and is called rasterization.
As far as I know, the hardware implementation is based on the scan-line algorithm.
This used to be done by creating the outline and then filling in the horizontal lines. See this link for more details - http://joshbeam.com/articles/triangle_rasterization/
Edit: I don't think this will produce the lone pixels you are after; there should be a pixel on every line.
Your problem looks a lot like the classic problem of triangles sharing the very same edge. The usual rule for triangles sharing an edge is that one triangle is allowed to claim the pixels on that edge while the other has to leave them blank.
When working with a graphics card, one usually gets this behavior by applying a left-to-right drawing order while also enabling a z-buffer test, or by testing whether the pixel has already been drawn: if a pixel with the very same z-value is already set, changing the pixel is not allowed.
In your example with the circles, the shared edge between neighboring circle segments is not computed identically for both triangles. You have to check whether the two triangles calculate that edge differently, and why.
Whenever you draw two different shapes and you see something like that, you can either fix your model (so they share all the edge vertices), go for a z-buffer test, or use a color test.
You can also minimize the effect by drawing edges into a sub-buffer with a higher resolution and down-sampling it. Since this does not affect the whole area, it is more cost-effective in terms of space and time than down-sampling the whole scene.

OpenGL texture mapping

I have two objects drawn on screen in OpenGL: one is a sphere using the GLU object and one is a texture-mapped star. Regardless of the z coordinates, the texture-mapped star always seems to draw in front. Is this normal OpenGL behavior? Is there a way to prevent this?
Note: I am working within the worldwind framework, so maybe something else is going on that causes this. But I'm just wondering: is it normal for texture-mapped objects to appear in front? I don't think so, but I'm not sure...
This isn't a bug in worldwind; this is actually the expected behavior. Using glVertex2f() is the same as using glVertex3f() with z = 0, so it simply draws the star on a plane very close to the viewer (depending on your projection).
To solve your issue, you can disable depth writes using glDepthMask(GL_FALSE), then draw the star, call glDepthMask(GL_TRUE), and then draw the sphere, which will now appear in front of the star.
You can also use glDepthFunc(GL_GREATER) when drawing the star, or glDisable(GL_DEPTH_TEST) when drawing the sphere, to quickly achieve the same effect.
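A sketch of those two options (drawStar() and drawSphere() are hypothetical stand-ins for whatever the worldwind code actually calls):
// Variant A: draw the star without writing depth, so the sphere drawn afterwards can cover it.
glDepthMask(GL_FALSE);   // stop depth writes
drawStar();              // hypothetical helper for the textured star
glDepthMask(GL_TRUE);    // restore depth writes
drawSphere();            // hypothetical helper for the GLU sphere

// Variant B: draw the star normally, then ignore the depth test while drawing the sphere.
drawStar();
glDisable(GL_DEPTH_TEST);
drawSphere();
glEnable(GL_DEPTH_TEST);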
To make anything more complicated (such as star intersecting the sphere), you need to use matrices to put the star at the desired position.

Ways to implement manipulation handles in 3d view

I'm building a simple solid modeling application. Users need to be able to manipulate object in both orthogonal and perspective views. For example, when there's a box in the screen and the user clicks on it to select it, it needs to get 'handles' at the corners and in the center so that the user can move the mouse over such a handle and drag it to enlarge or move the box.
What strategies are there to do this, and which one is the best one? I can think of two obvious ones:
1) Treat the handles as 3d objects. I.e. for a box, add small boxes to the scene at the corners of the 'main' box. Problems: this won't work in perspective view, I'd need to determine the size of the boxes relative to the current zoom level (the handles need to have the same size no matter how far the user is zoomed in/out)
2) Add the handles after the scene has been rendered. Render to an offscreen buffer, determine the 2d locations of the corners somehow and use regular 2d drawing techniques to draw the handles. Problems: how will I do hittesting? I'd need to do a two-stage hittesting approach, as well; how do I draw in 2d on a 3d rendered image? Fall back to GDI?
There are probably more problems with both approaches. Is there an industry-standard way of tackling this problem?
I'm using OpenGL, if that makes a difference.
I would treat the handles as 3D objects. This provides many advantages - it's more consistent, they behave well, hit testing is easy, etc.
If you want the handles to be a constant size, you can still treat them as 3D objects, but you will have to scale their size appropriately based on the distance to the camera. This is a bit of a hassle, but since there are typically only a few handles, and they are usually small objects, it should be fine performance-wise.
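A rough sketch of that scaling (the field of view, viewport height, and desired pixel size are assumptions): compute the world-space size that covers a fixed number of pixels at the handle's distance, and feed it to glScalef before drawing the handle.
#include <cmath>

// Uniform scale factor that keeps a handle roughly `pixelSize` pixels tall on screen,
// for a perspective camera with vertical field of view fovYDegrees and a viewport
// that is viewportHeight pixels tall. distanceToCamera is the handle's distance from the eye.
float handleScale(float distanceToCamera, float fovYDegrees,
                  float viewportHeight, float pixelSize)
{
    const float pi = 3.14159265f;
    float worldHeight = 2.0f * distanceToCamera * std::tan(fovYDegrees * 0.5f * pi / 180.0f);
    return pixelSize * worldHeight / viewportHeight;    // world units that cover pixelSize pixels
}
For an orthographic view the factor is simply pixelSize * orthoHeight / viewportHeight, independent of distance.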
However, I'd actually say let the handles scale with the scene. As long as you pick a rendering style for the handles that makes them stand out (e.g. bright orange boxes), the perspective effect (smaller handles in the background) actually makes working with them easier for the end user in many ways. It is difficult to get a sense of depth from a 3D scene - the perspective effect on the handles provides extra visual cues as to how "deep" a handle sits in the screen.
First off, project the handle/corner co-ordinates onto the camera's plane (effectively converting them to 2D coordinates on the screen; normalize this against the screen dimensions.)
Here's some simple code to enable orthogonal/2D-overlay drawing:
void enable2D()
{
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    int wind[4];
    glGetIntegerv(GL_VIEWPORT, wind);
    glOrtho(0, wind[2], 0, wind[3], -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
}

void disable2D()
{
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
}
enable2D() caches the current modelview/projection matrices, replaces the projection matrix with one normalized to the screen (i.e. the width/height of the viewport), and loads the identity matrix for modelview.
After making this call, you can make glVertex2f() calls using screen/pixel coordinates, allowing you to draw in 2D! (This will also allow you to hit-test since you can easily get the mouse's current pixel coords.)
When you're done, call disable2D to restore your old modelview/projection matrices :)
The hardest part is computing where the hitboxes fall on the 2D plane and dealing with overlap (if two project to the same place, which one do you select on click?).
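A sketch of that projection step using gluProject (assuming GLU is available and the matrices can be read back from the fixed-function state); the resulting window coordinates can be fed straight to glVertex2f between enable2D() and disable2D(), and compared against the mouse position for hit testing (remembering that window y runs bottom-up here, while most mouse APIs report it top-down).
#include <GL/glu.h>

// Projects a world-space handle position to window (pixel) coordinates.
// Returns false if gluProject fails or the point falls outside the depth range.
bool handleToScreen(double wx, double wy, double wz, double& sx, double& sy)
{
    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    GLdouble x, y, z;
    if (gluProject(wx, wy, wz, model, proj, viewport, &x, &y, &z) != GL_TRUE)
        return false;
    sx = x;   // window coordinates, origin at the lower left, matching enable2D()'s glOrtho
    sy = y;
    return z > 0.0 && z < 1.0;
}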
Hope this helped :)
I've coded up a manipulator with handles for a 3d editing package, and ran into a lot of these same issues.
First, there's an open source manipulator. I couldn't find it in my most recent search, probably because there's a plethora of names for these things - 3d widgets, gizmos, manipulators, gimbals, etc.
Anyhow, the way I did it was to add a manipulator object to the scene that, when drawn, draws all of the handles. It does the same thing for bounding box computation, and selection.
Reed's idea about keeping them the same size is interesting for handles that exist on objects, and might work there. For a manipulator, I found that it was more of a 3d UI element, and it was much more usable if it did not change size. I had a bug where the size was only determined based on the active viewport, which resulted in horrible huge/tiny manipulators in other viewports, very useless. If you're going to add them to the scene, you might want to add them per-viewport, or make them actually have a fixed size.
I know the question is really old. But just in case someone needs it:
Interactive Techniques in Three-dimensional Scenes (Part 1): Moving 3D Objects with the Mouse using OpenGL 2.1
The article is good and has an interesting links section at the bottom.