What would be the best approach to render selected triangles on a mesh in a different colour? I'm using OpenGL, but the specific rendering system probably doesn't matter much.
One approach would be to render the selected triangles over the top of the existing mesh, but I feel there must be a better way to do this using shaders.
I think the easiest way to accomplish this is to create a separate colour buffer for your triangles. You can then use glBufferSubData() to revert the colour of deselected triangles and update the colour of those that were newly selected that frame.
This assumes you know at which indices in your buffer the vertices of the triangles you want to recolour are located.
It is also possible to have the additional buffer contain only boolean flags, and overwrite the colour of the selected triangle(s) with a value specified in a uniform variable.
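A minimal sketch of the first approach, assuming non-indexed drawing (so triangle t occupies vertices 3t..3t+2) and a hypothetical colorVBO holding four colour bytes per vertex:

// Overwrite the colour of one selected triangle; names are illustrative.
GLubyte highlight[3][4] = { {255, 0, 0, 255}, {255, 0, 0, 255}, {255, 0, 0, 255} };
glBindBuffer(GL_ARRAY_BUFFER, colorVBO);
glBufferSubData(GL_ARRAY_BUFFER,
                triangleIndex * 3 * 4,   // byte offset of the triangle's first vertex colour
                sizeof(highlight), highlight);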
As part of my 2D game engine, I would like to be able to set the order (from back to front) in which my sprites are rendered on screen by manipulating their z-index. This z-index sprite property is a floating-point value that can range between the far and near planes of my orthographic projection (in the range (-1.0, 1.0]).
Currently, in order to minimize unnecessary texture switching, I store my sprites in an unordered dictionary, where the keys are textures and the values are the corresponding ordered lists of sprite quads using that particular texture. This dictionary is then traversed every frame to populate a giant VBO with all of the appropriate per-vertex attributes (position, texcoords, and a mat4 modelview matrix). This is great since I then only need one texture bind per texture, and I have been pretty happy with its performance.
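In C++ terms, the described scheme might look roughly like this (SpriteQuad, giantVBO and the map name are hypothetical; SpriteQuad is assumed to pack the attributes of the six vertices of one quad):

std::unordered_map<GLuint, std::vector<SpriteQuad>> spritesByTexture;

glBindBuffer(GL_ARRAY_BUFFER, giantVBO);
GLintptr byteOffset = 0;
GLint firstVertex = 0;
for (auto& [texture, quads] : spritesByTexture) {
    GLsizeiptr bytes = quads.size() * sizeof(SpriteQuad);
    glBufferSubData(GL_ARRAY_BUFFER, byteOffset, bytes, quads.data());
    glBindTexture(GL_TEXTURE_2D, texture);                 // one bind per texture
    glDrawArrays(GL_TRIANGLES, firstVertex, (GLsizei)(quads.size() * 6));
    byteOffset += bytes;
    firstVertex += (GLint)(quads.size() * 6);
}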
While this z-ordering issue has always been there in my code, it became really obvious when I was working with translucent textures and enabled alpha blending, as rendering sprites in the wrong order resulted in some sprites appearing opaque instead! After some testing, I have come to the following conclusions:
1. The order of my texture keys matters, as this determines which corresponding sprite lists are written to my giant VBO first. This basically means that all of the sprites of the first texture appear below all of the sprites of the second texture.
2. If two sprites have the same texture, then the relative order of the two sprites in the sprite list associated with that texture matters (again, the first sprite appears on the bottom).
3. If I call glEnable(GL_DEPTH_TEST), then I need to set the z-index in increasing order to match the order of the sprite list; otherwise I get incorrectly opaque sprites.
4. If I call glDisable(GL_DEPTH_TEST), then the z-index values I set for the sprites are (obviously) ignored, so only rules 1 and 2 apply.
My question is: given a set of translucent and opaque sprites with various textures, how can I best order the sprites in my giant VBO every frame so as to minimize texture changes? Are the rules different for opaque and translucent sprites (and should they be handled in separate passes)? I have also read that alpha blending in OpenGL is order-dependent and that there are order-independent transparency techniques to get around this, but that went over my head, so any light that could be shed on those techniques would be appreciated as well.
I have a program that displays a color surface. Then, through some method (which is the focus of my thesis but unimportant here), I closely recreate the color surface. This gives me two copies of the color surface, and I want to find the 'difference' between the two outputs to see how closely they resemble each other. So, loosely speaking, I want to render something like
abs(render_1 - render_2)
Because of the complicated structure of both color surfaces, I cannot directly calculate the difference before rendering. Is there some way I can use GLSL shaders to do this? I was hoping it would be possible to first render one surface, then in a second render pass use a shader that queries the color already present at the render location, but I do not think this is possible. Any thoughts on how to do this?
It is possible. You can render the first surface to a framebuffer texture and then, in a second render pass, fetch the corresponding pixel value from that texture. Since a color is a 4D vector, you can calculate the distance between the pixel fetched from the texture and the pixel calculated in the shader. Once you have the difference, you can also calculate and visualize an SNR.
Render each version into its own texture using an FBO, then in a third pass you can evaluate the difference between the values in the rendered pictures (using a shader).
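For the third pass, a fragment shader along these lines would do, assuming texA and texB hold the two renders and the pass draws a full-screen quad with matching texture coordinates:

#version 120
uniform sampler2D texA;
uniform sampler2D texB;
void main() {
    // Per-channel absolute difference of the two renders.
    vec4 a = texture2D(texA, gl_TexCoord[0].st);
    vec4 b = texture2D(texB, gl_TexCoord[0].st);
    gl_FragColor = abs(a - b);
}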
First, my problem: I'm trying to render to multiple buffers in an FBO. I set multiple buffers using glDrawBuffers and render to them using the appropriate gl_FragData entries. All well and good, but in my situation one of the buffers should be downsampled to (w/2, h/2), i.e. a quarter of the pixels.
Of course, I can do this by blitting those specific buffers afterwards, or I can simply do the downsampling on the CPU (my current solution). But then I read about viewport arrays and found this quote in the ARB specification, which seems to be exactly what I want, without any extra conversions:
Additionally, when combined with multiple framebuffer attachments, it allows a different viewport rectangle to be selected for each.
Of course, the specification never explains afterwards how to do this or what is actually meant; "multiple framebuffer attachments" is quite generic. I only noticed that I can select a specific viewport as an output of the geometry shader (by writing gl_ViewportIndex). So I could emit the geometry twice, once for each viewport in the array. But as far as I understand, this will simply run the fragment shader with another viewport transformation applied, not one per target buffer. That makes little sense for my use case, and I can't see how it could ever help to select a viewport per framebuffer attachment.
For my situation it does not make much sense to add a geometry shader. And since, to my understanding, the viewport transform is only applied after the fragment shader, it would make sense to have a viewport per render target, which the quote above seems to confirm. Is this actually possible, and if so, how would I accomplish this?
Oh, and I've tried the obvious already: resizing the renderbuffer of that target (let's say I use GL_COLOR_ATTACHMENT1) to the downsampled size and setting index 1 of the viewport array to the corresponding size. I ended up with a picture of the lower-left quadrant of the image, essentially telling me the viewport was unchanged.
Viewport arrays can only be used with geometry shaders; without them, array index 0 will be used for all rendering.
Remember: the viewport transform happens before rasterization. Thus, if you want to transform a triangle by multiple viewports, you're effectively asking the system to render that triangle multiple times. And the only way to do that is with a geometry shader that outputs the primitive multiple times.
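For illustration, a minimal geometry shader that replicates each triangle into two viewports might look like this (assuming GL_ARB_viewport_array is available; note it duplicates the primitive, it does not select a viewport per attachment):

#version 150
#extension GL_ARB_viewport_array : require
layout(triangles) in;
layout(triangle_strip, max_vertices = 6) out;
void main() {
    // Emit the incoming triangle once per viewport index.
    for (int vp = 0; vp < 2; ++vp) {
        gl_ViewportIndex = vp;
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}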
I have geometry stored in a display list, but I'd like to be able to draw the same display list with different "tints" on them. For example, if I had a black and white skull in a display list, I'd like to set a red tint and draw a skull, then set a blue tint and draw the skull.
If I can get the RGBA values I know exactly how to transform them, but I'm not sure where I can intercept them. Currently the display lists do not contain textures, but they probably will in the future so it would be good if the answer works with or without textures.
Conceptually, a display list is just a bunch of commands that are executed when you call glCallList. So whatever it contains, it will be just as if you had used those commands directly (but possibly more performant). So if your display list contains a bunch of geometry commands, how can you color them? Yes, you guessed it: using the usual glColor command right before calling the list:
glColor3f(1.0f, 0.0f, 0.0f);  // red tint
glCallList(skullList);
glColor3f(0.0f, 0.0f, 1.0f);  // blue tint
glCallList(skullList);
When you want your objects to have a texture and still be colorable, you can just set the texture environment to GL_MODULATE (I guess you're not using shaders, otherwise the whole question would be quite obsolete, anyway). If you want your objects lit, change glColor to glMaterial, of course.
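That is a single state call in fixed-function GL:

glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);  // output = texel * current color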
But if you set the color inside the display list, you don't have any chance to get at it and change it. I wouldn't advise you to use display lists anyway: if you use them to store geometry and to reduce CPU-GPU copies and draw-call overhead, why not use VBOs, which are made exactly for this (and don't suffer from such implementation-dependent behavior)?
In each frame I render, I also make a smaller version of it with just the objects that the user can select (and any selection-obstructing objects). In that buffer I render each object in a different color.
When the user has mouseX and mouseY, I then look into that buffer to see which color corresponds to that position, and find the corresponding object.
I can't work with FBOs, so I just render this buffer to a texture, rescale the texture orthogonally to the screen, and use glReadPixels to read a "hot area" around the mouse cursor. I know, not the most efficient, but performance is OK for now.
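The readback itself might look roughly like this (a hypothetical 5x5 hot area; windowHeight is a placeholder, and note that glReadPixels uses a bottom-left origin, so the mouse y coordinate is flipped):

GLubyte ids[5 * 5];
glPixelStorei(GL_PACK_ALIGNMENT, 1);   // rows are tightly packed
glReadPixels(mouseX - 2, windowHeight - mouseY - 2, 5, 5,
             GL_RED, GL_UNSIGNED_BYTE, ids);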
Now I have the problem that this buffer of "colored objects" has some accuracy problems. Of course I disable all lighting and fragment shaders, but somehow I still get artifacts. Obviously I really need clean, flat sheets of color without any variance.
Note that I put all the color information in an unsigned byte in GL_RED (assuming for now that I have at most 255 selectable objects).
Are these artifacts caused by rescaling the texture? (I could replace this by looking up scaled coordinates in the small texture.) Or do I need to disable some other flag to really get the colors that I want?
Can this technique even be used reliably?
It looks like you're using GL_LINEAR for your GL_TEXTURE_MAG_FILTER. Use GL_NEAREST instead if you don't want interpolated colors.
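For example, while the picking texture is bound:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);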
"I could replace this by looking up scaled coordinates in the small texture."
You should. Rescaling is more expensive than converting the coordinates for sure.
That said, scaling a uniform texture should not introduce artifacts if you keep an integer ratio (like a 2x upscale) with no fancy filtering. It looks blurry at the polygon edges, so I'm assuming that's not what you use.
Also, the rescaling should introduce variations only at the polygon boundaries. Did you check that there are no variations in the un-scaled texture? That would confirm whether it's the scaling that introduces your "artifacts".
What exactly do you mean by "variance"? Please explain in more detail.
Now a suggestion: in case your rendering doesn't depend on stencil buffer operations, you could write the object ID into the stencil buffer during the render pass to the window itself, avoiding the detour through a separate texture. On current hardware you usually get 8 bits of stencil. Of course, the best solution, if you want an index-buffer approach, is to use multiple render targets and render the object ID into an index buffer together with the color and the other stuff in one pass. See http://www.opengl.org/registry/specs/ARB/draw_buffers.txt
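A minimal sketch of the stencil variant (drawObject and windowHeight are placeholder names; IDs start at 1 so that 0 can mean "no object"):

glEnable(GL_STENCIL_TEST);
glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);   // always write the reference value
for (int i = 0; i < objectCount; ++i) {
    glStencilFunc(GL_ALWAYS, i + 1, 0xFF);         // tag this object's pixels with its ID
    drawObject(i);
}
// Later, on a click: read the ID back from under the cursor.
GLuint id = 0;
glReadPixels(mouseX, windowHeight - mouseY, 1, 1,
             GL_STENCIL_INDEX, GL_UNSIGNED_INT, &id);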