I am looking to create a GLSL shader program that will give me a variable-width outline around an arbitrary 2D texture, as shown in the picture. Is this a reasonable job for the GPU? I've looked at edge-detection approaches, but those would only give a border of a few pixels. I want arbitrary width. Is this doable?
One approach would be to render the object to a texture at a slightly larger scale, colored entirely in your outline color.
Then you render it to another texture at its normal size, as it should be displayed.
In a third render pass you can then combine the two textures: choose the outline texture where the color texture is empty, and the color texture everywhere else.
This could be an expensive process, but whether that impacts performance too much obviously depends on the scope of your project.
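For illustration, the third pass's fragment shader could look roughly like this sketch; the uniform and varying names are illustrative, and it assumes both renders cover the same screen rectangle and that "empty" means zero alpha:

    /* GLSL source for the combine pass, stored as a C string */
    static const char *combine_fs =
        "#version 330 core\n"
        "uniform sampler2D uColorTex;    /* normal-size render */\n"
        "uniform sampler2D uOutlineTex;  /* enlarged render, filled with the outline color */\n"
        "in vec2 vUV;\n"
        "out vec4 fragColor;\n"
        "void main() {\n"
        "    vec4 c = texture(uColorTex, vUV);\n"
        "    vec4 o = texture(uOutlineTex, vUV);\n"
        "    /* where the color texture is empty, fall back to the outline */\n"
        "    fragColor = (c.a > 0.0) ? c : o;\n"
        "}\n";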
I have a question about 3D rendering.
Deferred rendering is very powerful, but it is notorious for not playing nicely with MSAA.
I can clearly see why, but I suddenly came up with an idea to solve it.
It's simple: just do the deferred rendering completely and get the screen image into a texture. This texture (attached to a framebuffer or whatever) is of course not antialiased.
Here comes the further processing: next, draw the full scene again, but this time the fragment shader looks up the exact same position in the pre-rendered texture using texelFetch() and outputs that. Done.
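Concretely, the fragment shader in that second pass would be something like this sketch (the uniform name is illustrative):

    /* GLSL source for the lookup pass, stored as a C string */
    static const char *lookup_fs =
        "#version 330 core\n"
        "uniform sampler2D uDeferredResult;  /* the non-antialiased deferred image */\n"
        "out vec4 fragColor;\n"
        "void main() {\n"
        "    /* fetch the exact same position from the pre-rendered texture */\n"
        "    fragColor = texelFetch(uDeferredResult, ivec2(gl_FragCoord.xy), 0);\n"
        "}\n";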
It sounds silly, but I think it might work. If we draw the geometry again with the deferred-rendered result as the output color, we are re-rendering the scene with real geometry.
So we can now provide super-sampled depth information, and the GPU should be able to perform MSAA with aliased color but super-sampled depth geometry. (It's similar to picking up only the 'center' of each fragment and evaluating it, as in the ordinary MSAA process.)
I'm not sure whether this description makes sense. I tested it in OpenGL, but doing this made no difference compared to plain deferred rendering.
Does my idea work?
No, your idea does not work.
If you did not render the initial image with multisampling, reading from it later while doing multisampling will not magically create information that doesn't exist in that image.
In your method, every sample which corresponds to a particular pixel in the multisampled rendering will have the same color value. So if two primitives overlap in a pixel, writing to different samples, it won't matter, since both primitives will be generating the same color. All you would be doing is generating multiple different depth values within a pixel, and that doesn't actually contribute to an antialiased output (directly).
I have a program that displays a color surface. Then, through some method (which is the focus of my thesis, but unimportant here), I closely recreate that color surface. This gives me two copies of the color surface, and I want to find the 'difference' between the two outputs, to see how closely they resemble each other. So, loosely speaking, I want to render something like
abs(render_1 - render_2)
Because of the complicated structure of both color surfaces, I cannot directly calculate the difference before rendering. Is there some way I can use GLSL shaders to do this? I was hoping it would be possible to first render one surface and then, in a second render pass, use a shader that queries the color already present at the render location, but I don't think that is possible. Any thoughts on how to do this?
It is possible. You can render the first surface to a framebuffer and then, in a second render pass, fetch the pixel value from the resulting texture. Since a color is a 4D vector, you can calculate the distance between the pixel fetched from the texture and the pixel calculated in the shader. Once you have the difference, you can calculate and visualize, for example, the SNR.
Render each version into its own texture using an FBO, then in a third pass evaluate the difference between the values in the two rendered pictures (using a shader).
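For example, the third pass's fragment shader could be as simple as this sketch (texture and varying names are illustrative):

    /* GLSL source for the difference pass, stored as a C string */
    static const char *diff_fs =
        "#version 330 core\n"
        "uniform sampler2D uRender1;  /* first surface, rendered to an FBO */\n"
        "uniform sampler2D uRender2;  /* second surface, rendered to an FBO */\n"
        "in vec2 vUV;\n"
        "out vec4 fragColor;\n"
        "void main() {\n"
        "    vec3 a = texture(uRender1, vUV).rgb;\n"
        "    vec3 b = texture(uRender2, vUV).rgb;\n"
        "    fragColor = vec4(abs(a - b), 1.0);\n"
        "}\n";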
I have an assignment to render a terrain from a greyscale 8-bit BMP and color it from a 24-bit BMP texture. I managed to get a proper landscape with heights and so on, and I also get the colors from the texture bitmap. The problem is that the fully colored rendered terrain is very "blocky": it shows the right colors and heights, but it is so blocky that I can almost see the pixels from the bitmap. I use glShadeModel(GL_SMOOTH), but it still looks blocky. Any hints are appreciated.
Do you use the bitmap as a texture, or do you set vertex colours from the bitmap? I suggest you use a texture, with the planar vertex position as the texture coordinate.
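In immediate-mode terms (to match the glShadeModel-era code), that would look something like this sketch, where terrain_w, terrain_d, and height_at() are illustrative names:

    /* one vertex of the terrain grid: the planar (x, z) position doubles as the texture coordinate */
    glTexCoord2f(x / terrain_w, z / terrain_d);
    glVertex3f(x, height_at(x, z), z);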
A few things to consider. First, whether you render with GL_TRIANGLES or GL_TRIANGLE_STRIP makes a difference for performance. Second, if you are using lighting, you have to define your normals, either per triangle or per vertex of each triangle; this gets tricky because almost every triangle lies on a different plane, and not having proper normals will make the terrain look blocky. Third, the size of the triangles matters: the smaller the triangles, i.e. the more subdivisions in your [x,z] plane, the higher your resolution and visual quality, but also the lower your frame rate. You have to find a good balance between the two.
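To sketch the normals point: for a heightmap, a cheap per-vertex normal can be computed with central differences. Here height() and the grid spacing cell are assumed names:

    #include <math.h>

    extern float height(int x, int z);  /* heightmap lookup, assumed to exist */

    /* approximate surface normal at grid cell (x, z); cell is the grid spacing */
    void heightmap_normal(int x, int z, float cell, float n[3]) {
        float dx = height(x + 1, z) - height(x - 1, z);  /* slope along x */
        float dz = height(x, z + 1) - height(x, z - 1);  /* slope along z */
        n[0] = -dx;
        n[1] = 2.0f * cell;
        n[2] = -dz;
        float len = sqrtf(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
        n[0] /= len; n[1] /= len; n[2] /= len;
    }

Pass the result to glNormal3fv() before each vertex if you light per vertex.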
I've been working on a new game and have finally reached the point of coding the motion of my main character, but I have a doubt about how to do it.
Previously I made two games in Allegro, where spritesheets are kind of easy to implement: I establish each frame's position in the image and save every frame as a separate bitmap. But I know that doing it that way in OpenGL isn't necessary and costs a bit more.
So I've been thinking about how to store my spritesheet and use it in my program, and I have only one idea.
I load the image and turn it into a texture; then, in the function that handles my animation, I simply grab a portion of the texture to draw instead of storing every single frame as a separate texture.
Is this the best way to do it?
Thanks in advance for the help.
You're on the right track.
Things to consider:
leave enough dead space around each sprite so that the video card does not blend in texels from adjacent sprites at small scales.
set texture min/mag filtering appropriately. GL_NEAREST is OK if you're going for the blocky look.
if you want to be fancy and save some texture memory, there's no reason that the sprites have to be laid out in a regular grid. Smaller sprites can be packed closer in the texture.
if your sprites are being rendered from 3D models, you could output normal & displacement maps from the model into another texture, then combine them in a fragment shader for some awesome lighting and self-shadowing.
You've got the right idea: if you have a bunch of sprites, it is much better to stick them all in one big texture. Just draw your sprites as textured quads whose texture coordinates index into the sprite's frame. You can do a few optimizations, but most of them revolve around getting the most out of your texture memory and packing the sprites closely together without blending issues.
I know that doing it that way in OpenGL isn't necessary and costs a bit more.
Why not? There are no real downsides to putting a lot of sprites into a single texture. All you need to do is change the texture coordinates to pick the region in question out of the texture.
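As a sketch, if the atlas is a regular grid of cols x rows equal frames (the helper and names are mine), the coordinates of one frame are just:

    /* texture coordinates {u0, v0, u1, v1} of frame (col, row) in a cols x rows atlas */
    void frame_uv(int col, int row, int cols, int rows, float uv[4]) {
        uv[0] = (float)col / cols;
        uv[1] = (float)row / rows;
        uv[2] = (float)(col + 1) / cols;
        uv[3] = (float)(row + 1) / rows;
    }

Feed those four values to the corners of your quad; animating is then just a matter of stepping col (and row) per frame.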
In each frame (as in frames per second) I render, I make a smaller version of it with just the objects that the user can select (and any selection-obstructing objects). In that buffer I render each object in a different color.
When the user supplies mouseX and mouseY, I look up in that buffer which color corresponds to that position, and find the corresponding object.
I can't work with FBOs, so I just render this buffer to a texture, rescale the texture orthogonally to the screen, and use glReadPixels to read a "hot area" around the mouse cursor. I know it's not the most efficient, but performance is OK for now.
Now I have the problem that this buffer of "colored objects" has some accuracy problems. Of course I disable all lighting and fragment shaders, but somehow I still get artifacts. Obviously I really need clean sheets of color without any variance.
Note that I put all the color information in an unsigned byte in GL_RED (assuming for now that I have at most 255 selectable objects).
Are these artifacts caused by rescaling the texture? (I could replace that by looking up scaled coordinates in the small texture.) Or do I need to disable some other flag to really get the colors that I want?
Can this technique even be used reliably?
It looks like you're using GL_LINEAR for your GL_TEXTURE_MAG_FILTER. Use GL_NEAREST instead if you don't want interpolated colors.
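That is, with the picking texture bound, something like:

    /* no interpolation: return the nearest texel verbatim */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);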
I could replace that by looking up scaled coordinates in the small texture.
You should. Converting the coordinates is certainly cheaper than rescaling the whole texture.
That said, scaling a uniform texture should not introduce artifacts if you keep an integer ratio (like a 2x upscale) and no fancy filtering. Your result looks blurry at the polygon edges, so I'm assuming that's not what you're using.
Also, the rescaling should introduce variations only at the polygon boundaries. Did you check that there are no variations in the unscaled texture? That would confirm whether it's the scaling that introduces your "artifacts".
What exactly do you mean by "variance"? Please explain in more detail.
Now a suggestion: in case your rendering doesn't depend on stencil buffer operations, you could write the object ID into the stencil buffer in the render pass to the window itself, avoiding the detour through a separate texture. On current hardware you usually get 8 bits of stencil. Of course the best solution, if you want to use an index-buffer approach, is to use multiple render targets and render the object ID into an index buffer together with the color and everything else in one pass. See http://www.opengl.org/registry/specs/ARB/draw_buffers.txt
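A sketch of the stencil variant, assuming the framebuffer was created with stencil bits and that ids[], draw_object(), and window_h are illustrative names:

    glEnable(GL_STENCIL_TEST);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);   /* write the ref value where the depth test passes */
    for (int i = 0; i < object_count; ++i) {
        glStencilFunc(GL_ALWAYS, ids[i], 0xFF);  /* ref value = this object's ID */
        draw_object(i);
    }
    /* later, read the ID back under the cursor (GL's origin is bottom-left) */
    GLubyte id;
    glReadPixels(mouseX, window_h - mouseY - 1, 1, 1,
                 GL_STENCIL_INDEX, GL_UNSIGNED_BYTE, &id);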