Antialiasing algorithm? - C++

I have textures that I'm creating and would like to antialias them. I have access to each pixel's color; given this, how could I antialias the entire texture?
Thanks

I'm sorry, but true anti-aliasing does not consist of taking the average color of the neighbours, as suggested above. That will undoubtedly soften the edges, but it isn't anti-aliasing, it's blurring. True anti-aliasing simply cannot be done properly on a finished bitmap, since it has to be computed at drawing time, which is the only point where you can still tell which pixels and/or edges must be "softened" and which must not. For instance: imagine you draw a horizontal line which must be exactly 1 pixel thick (say "high") and must be placed exactly on an integer screen row coordinate. Obviously you'll want it unsoftened, and a proper anti-aliasing algorithm will draw it that way: a perfect row of solid pixels surrounded by perfect background-coloured pixels, with no tone blending at all. But if you take that same line once it's been drawn (i.e. as a bitmap) and apply the averaging method, you'll get blurring above and below the line, resulting in a 3-pixel-thick horizontal line, which is not the goal. Of course, the right result could be achieved in code, but through a very different and much more complex approach.
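To make the drawing-time point concrete, here is a minimal sketch, with all names invented for illustration, of coverage-based rendering of exactly that 1-pixel horizontal line: at an integer row it produces one solid row of pixels with no blending at all, while at a fractional row the coverage is split between the two adjacent rows.

```cpp
#include <cmath>
#include <vector>

// Hypothetical grayscale canvas: intensity 0..1, row-major storage.
struct Canvas {
    int w, h;
    std::vector<float> pix;
    Canvas(int w, int h) : w(w), h(h), pix(w * h, 0.0f) {}
    void blend(int x, int y, float coverage) {
        if (x < 0 || x >= w || y < 0 || y >= h) return;
        float& p = pix[y * w + x];
        p = p + (1.0f - p) * coverage; // composite line color over what's there
    }
};

// Draw a 1-pixel-thick horizontal line from x0..x1 at a possibly
// fractional row y. If y is an integer, frac == 0 and the line lands
// on exactly one row of solid pixels; otherwise the coverage is
// distributed between rows floor(y) and floor(y) + 1.
void hline_aa(Canvas& c, int x0, int x1, float y) {
    int row = (int)std::floor(y);
    float frac = y - (float)row; // how far the line reaches into the next row
    for (int x = x0; x <= x1; ++x) {
        c.blend(x, row,     1.0f - frac);
        c.blend(x, row + 1, frac);
    }
}
```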

The basic method for anti-aliasing is: for a pixel P, sample the pixels around it. P's new color is the average of its original color with those of the samples.
You might sample more or fewer pixels, change the size of the area around the pixel that you sample, or randomly choose which pixels to sample.
As others have said, though: anti-aliasing isn't really something that's done to an image that's already a bitmap of pixels. It's a technique that's implemented in the 2D/3D rendering engine.
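For completeness, here is a minimal sketch of the averaging method described above, on an RGBA8 bitmap (the image layout and the 3x3 neighbourhood are assumptions, and, as the answer above points out, this is effectively a box blur rather than true anti-aliasing):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical RGBA8 image; pixel (x, y) lives at data[y * w + x].
struct Image {
    int w, h;
    std::vector<uint32_t> data;
};

// Average every pixel with its 3x3 neighbourhood (clamped at the borders).
Image boxFilter(const Image& src) {
    Image dst{src.w, src.h, std::vector<uint32_t>(src.data.size())};
    for (int y = 0; y < src.h; ++y) {
        for (int x = 0; x < src.w; ++x) {
            unsigned r = 0, g = 0, b = 0, a = 0, n = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int sx = x + dx, sy = y + dy;
                    if (sx < 0 || sx >= src.w || sy < 0 || sy >= src.h)
                        continue; // skip samples outside the image
                    uint32_t p = src.data[sy * src.w + sx];
                    r += (p >> 24) & 0xFF; g += (p >> 16) & 0xFF;
                    b += (p >> 8) & 0xFF;  a += p & 0xFF;
                    ++n;
                }
            dst.data[y * src.w + x] = (r / n) << 24 | (g / n) << 16 |
                                      (b / n) << 8 | (a / n);
        }
    }
    return dst;
}
```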

"Anti-aliasing" can refer to a broad range of different techniques. None of those would typically be something that you would do in advance to a texture.
Maybe you mean mip-mapping? http://en.wikipedia.org/wiki/Mipmap

It's not very meaningful to ask about antialiasing in terms this vague. It all depends on the nature of the textures and how they will be used.
Generally, though, if you don't have control over the display environment and your textures might be scaled, rotated, or composited with other elements you don't control, you should use transparency (an alpha channel) to indicate where your image ends and where it only partially covers a pixel.
It's also generally good to avoid sharp edges and features that are small relative to the amount of scaling that might be done.

Related

Subpixel rasterization on opaque backgrounds

I'm working on a subpixel rasterizer. The output is to be rendered on an opaque bitmap. I've gotten as far as correctly rendering text white-on-black (because I can basically disregard the contents of the bitmap).
The problem is the blending. Each rendered pixel affects its neighbours' intensity levels as well, because of the lowpass filtering technique (I'm using the 5-tap FIR filter: 1/9, 2/9, 3/9, 2/9, 1/9), and additionally because of the alpha level of the pixel being rendered. This result then has to be alpha-blended onto the destination image, which is where the problem occurs...
The results of the pixels' interactions have to be added together to achieve correct luminance, and then alpha-blended onto the destination; but if I rasterize one pixel at a time, I 'lose' the information from the previous pixels, so further additions may lead to overflow.
How is this supposed to be done? The only solution I can imagine working is to render to a separate image with an alpha channel for each colour, then apply some complex blending algorithm, and lastly alpha-blend that onto the destination... somehow.
However, I couldn't find any resources on how to actually do it, besides the basic concepts of LCD subpixel rendering and nice close-up images of monitor pixels. If anyone can help me along the way, I would be very grateful.
Tonight I awoke and could not fall asleep again.
I could not let all that brain energy go to waste, and stumbled over exactly the same problem.
I came up with two different solutions, both unvalidated:
1. Use a 3-channel alpha mask, one channel per subpixel, and blend each color channel with its own alpha.
2. If you only render gray/BW text, you can use the color channels themselves as the alpha mask (1 - color_value if you draw dark text on a light background color), again applying each channel individually. The color value itself should be considered 1 in this case.
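A minimal sketch of the first solution, with every name chosen purely for illustration: one coverage (alpha) value per subpixel channel, each channel blended with its own alpha using a standard source-over blend.

```cpp
#include <cstdint>

// Blend one destination pixel with one subpixel-rendered source pixel.
// cov[0..2] are the per-channel (R, G, B) coverage values in 0..255,
// i.e. the 3-channel alpha mask from the answer above.
void blendSubpixel(uint8_t dst[3], const uint8_t src[3], const uint8_t cov[3]) {
    for (int c = 0; c < 3; ++c) {
        // Source-over, applied independently per colour channel.
        dst[c] = (uint8_t)((src[c] * cov[c] + dst[c] * (255 - cov[c])) / 255);
    }
}
```

One way to avoid the overflow described in the question is to accumulate the filter taps of all contributing glyph pixels into the coverage mask first, and only then perform this blend once per destination pixel.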
Hope this helps a little, I filled ~2h of insomnia with it.
~ Jan

How to draw an array of pixels directly to the screen with OpenGL?

I want to write pixels directly to the screen (not using vertices and polygons). I have investigated a variety of answers to similar questions, the most notable ones here and here.
I see a couple of ways drawing pixels to the screen might be possible, but both seem indirect and use unnecessary floating-point operations:
Draw a GL_POINT for each pixel on the screen. I've tried this and it works, but it seems like an inefficient way to draw pixels onto the screen. Why write my data in floating point when it's going to be transformed back into an array of pixel data?
Create a 2D quad that spans the entire screen and write a texture to it. Like the first option, this seems a roundabout way of putting pixels on the screen. The texture would still have to go through rasterization before being put on the screen. Also, textures must be square, and most screens are not square, so I'd have to handle that problem.
How do I get a matrix of colors, where pixels[0][0] corresponds to the upper left corner and pixels[1920][1080] corresponds to the bottom right, onto the screen in the most direct and efficient way possible, using OpenGL?
Writing directly to the framebuffer seems like the most promising choice, but I have only seen people using the framebuffer for shading.
First off: OpenGL is a drawing API designed to make use of a rasterizer system that ingests homogeneous coordinates to define geometric primitives, which get transformed and, well, rasterized. Merely drawing pixels is not what the OpenGL API is concerned with. Also, most GPUs are floating-point processors by nature and can in fact process floating-point data more efficiently than integers.
"Why write my data in floating-points when it's going to be transformed into an array of pixel data."
Because OpenGL is a rasterizer API, i.e. it takes primitive geometric data and turns it into pixels. It doesn't deal with pixels as input data, except in the form of image objects (textures).
"Also textures must be square, and most screens are not square, so I'd have to handle that problem."
Whoever told you that, or wherever you got that from: they are wrong. OpenGL 1.x had the constraint that texture sizes had to be powers of two in each direction, but width and height could differ. Ever since OpenGL 2, texture sizes are completely arbitrary.
However, a texture might not be the most efficient way to update single pixels on the screen either. It is, on the other hand, a great idea to first draw the pixels into a pixel buffer, which for display is loaded into a texture that then gets drawn onto a full-viewport quad.
If your goal is direct manipulation of on-screen pixels, without a rasterizer in between, then OpenGL is not the right API for the job; there are other 2D graphics APIs that allow you to push pixels directly to the screen.
Pushing individual pixels is very inefficient in any case. I strongly recommend operating on a pixel buffer, which is then blitted or drawn as a whole for display. Doing this with OpenGL, by drawing a full-viewport textured quad, is as good and as efficient as any other graphics API.
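A minimal sketch of that pixel-buffer approach, using old-style fixed-function OpenGL (a window and GL context are assumed to already exist; names like `drawFrame` are illustrative):

```cpp
#include <GL/gl.h>
#include <cstdint>
#include <vector>

const int W = 1920, H = 1080;
std::vector<uint32_t> pixels(W * H); // CPU-side RGBA pixel buffer

GLuint tex = 0;

void initTexture() {
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, W, H, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
}

void drawFrame() {
    // Re-upload the CPU buffer into the texture, then draw one quad
    // covering the whole viewport (the default matrices are identity,
    // so vertex coordinates are already in clip space).
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1,  1); // buffer row 0 -> top of screen
    glTexCoord2f(1, 0); glVertex2f( 1,  1);
    glTexCoord2f(1, 1); glVertex2f( 1, -1);
    glTexCoord2f(0, 1); glVertex2f(-1, -1);
    glEnd();
}
```

If the buffer changes every frame, a pixel buffer object bound to GL_PIXEL_UNPACK_BUFFER can make the upload asynchronous, but the structure stays the same.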

Preventing Overdraw in Isometric Art

Background:
I am creating a game that presents the world in an isometric perspective, achieved by drawing isometric tiles. My current implementation is naive, using the painter's algorithm: drawing from back to front, from bottom to top, using surface blits from tile images.
The Problem:
I'm concerned (maybe unduly so, please let me know if this is the case) about overdraw. Here's a small snapshot of a single layer of tiles:
The areas highlighted in pink are the areas where the back-to-front, bottom-to-top method blits pixels to the canvas more than once. This is a small and contrived example, but in practice I hope to accomplish something more along the lines of this:
(image credit eBoy)
With an image as complex as this, and a tile-based implementation, each screen pixel is drawn to several times before the final image is composited, which feels really inefficient. Since these are just 2D images with, in the end, one-bit alpha masks, there aren't as many concerns as there would be in 3D (e.g. no wasted lighting or transform math), but it still seems there should be a more elegant way of determining whether a pixel should be drawn at all, based on whether or not it would be occluded in the final composition.
Solutions?
The best solution I've come up with so far (see the sketch after this list) is to:
1. Reverse the drawing order and draw front-to-back, top-to-bottom.
2. Keep a single-bit-per-pixel fake z-buffer that records whether or not a pixel has been drawn yet.
3. Only draw a tile if some of the pixels it covers haven't been drawn yet.
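Here is a minimal sketch of those three steps (the mask and tile types are illustrative assumptions, not from any particular library):

```cpp
#include <cstdint>
#include <vector>

// One bit (stored as a byte for simplicity) per screen pixel:
// has this pixel been drawn yet?
struct CoverageMask {
    int w, h;
    std::vector<uint8_t> drawn;
    CoverageMask(int w, int h) : w(w), h(h), drawn(w * h, 0) {}
};

// True if the rectangle (x, y, tw, th) still contains any undrawn
// pixel, i.e. the tile would contribute something visible.
bool anyUncovered(const CoverageMask& m, int x, int y, int tw, int th) {
    for (int j = y; j < y + th; ++j)
        for (int i = x; i < x + tw; ++i)
            if (i >= 0 && i < m.w && j >= 0 && j < m.h &&
                !m.drawn[j * m.w + i])
                return true;
    return false;
}

// Front-to-back pass: skip fully occluded tiles, otherwise blit and
// mark the tile's opaque pixels as drawn. 'opaque' is the tile's
// 1-bit alpha mask, row-major, tw * th entries.
void drawTileFrontToBack(CoverageMask& m, const std::vector<uint8_t>& opaque,
                         int x, int y, int tw, int th) {
    if (!anyUncovered(m, x, y, tw, th)) return; // fully occluded: skip blit
    // ... blit the tile image here, skipping pixels already drawn ...
    for (int j = 0; j < th; ++j)
        for (int i = 0; i < tw; ++i) {
            int sx = x + i, sy = y + j;
            if (sx < 0 || sx >= m.w || sy < 0 || sy >= m.h) continue;
            if (opaque[j * tw + i]) m.drawn[sy * m.w + sx] = 1;
        }
}
```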
Is there a better way to do this? Are blit operations super-efficient, and am I tilting at windmills here?
Windmills. Especially if you're using OpenGL-accelerated SDL2 blits.

Perfect filled triangle rendering algorithm?

Where can I get an algorithm to render filled triangles? Edit3: I can't use OpenGL to render it. I need a per-pixel algorithm for this.
My goal is to render a regular polygon from triangles, so if I use this triangle filling algorithm, the edges of adjacent triangles mustn't overlap (or leave gaps between them), because that would result in rendering errors if I use, for example, XOR to render the pixels.
Therefore, the render quality should match OpenGL rendering: I should be able to define, for example, a circle with N vertices, and it should render correctly as a circle at any size; so it can't use only integer coordinates to render it, like some triangle filling algorithms do.
I also need the ability to control the triangle filling myself, so I can add my own logic for how each individual pixel is rendered. So I need the bare code behind the rendering, to have full control over it. It should be efficient enough to draw tens of thousands of triangles without waiting more than a second, perhaps. (I'm not sure how fast it can be at best, but I hope it won't take more than 10 seconds.)
Preferred language would be C++, but I can convert other languages to my needs.
If there are no free algorithms for this, where can I learn to build one myself, and how hard would that actually be? (me=math noob).
I added OpenGL tag since this is somehow related to it.
Edit2: I tried the algorithm here: http://joshbeam.com/articles/triangle_rasterization/ but it seems to be slightly broken; here is a circle made of 64 triangles rendered with it:
But if you zoom in, you can see the errors:
Explanation: there are 2 pixels overlapping the other triangle's color, which should not happen! (Otherwise transparency or XOR effects will produce bad rendering.)
The errors seem to be more visible on smaller circles. This is not acceptable if I want to apply a XOR effect to the pixels.
What can I do to fix these, so it will fill it perfectly without overlapped pixels or gaps?
Edit4: I noticed that rendering very small circles doesn't work very well. I realised this was because the coordinates were indeed being converted to integers. How can I treat the coordinates as floats and make it render the circle precisely and perfectly, just like OpenGL does? Here is an example of how bad the small circles look:
Notice how perfect the OpenGL render is! THAT is what I want to achieve, without using OpenGL. Note: I don't just want to render a perfect circle, but any polygon shape.
There's always the half-space method.
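To spell that out, here is a minimal sketch of a half-space rasterizer with floating-point coordinates and a top-left fill rule; the fill rule is the part that makes triangles sharing an edge cover every pixel exactly once, with no overlaps and no gaps. Conventions assumed here: screen y grows downward, samples taken at pixel centres.

```cpp
#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

// Twice the signed area of (a, b, c); also the edge function of a->b at c.
static float orient2d(Vec2 a, Vec2 b, Vec2 c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Top-left fill rule (for the winding normalized below): pixels exactly
// on a top edge (horizontal, going right) or a left edge (going up)
// belong to the triangle; pixels on right/bottom edges do not.
static bool isTopLeft(Vec2 a, Vec2 b) {
    return (a.y == b.y && b.x > a.x) || (b.y < a.y);
}

template <typename PutPixel>
void fillTriangle(Vec2 v0, Vec2 v1, Vec2 v2, PutPixel put) {
    if (orient2d(v0, v1, v2) < 0) std::swap(v1, v2); // normalize winding

    int minX = (int)std::floor(std::min({v0.x, v1.x, v2.x}));
    int maxX = (int)std::ceil (std::max({v0.x, v1.x, v2.x}));
    int minY = (int)std::floor(std::min({v0.y, v1.y, v2.y}));
    int maxY = (int)std::ceil (std::max({v0.y, v1.y, v2.y}));

    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x) {
            Vec2 p{x + 0.5f, y + 0.5f}; // sample at the pixel centre
            float w0 = orient2d(v1, v2, p);
            float w1 = orient2d(v2, v0, p);
            float w2 = orient2d(v0, v1, p);
            bool in0 = w0 > 0 || (w0 == 0 && isTopLeft(v1, v2));
            bool in1 = w1 > 0 || (w1 == 0 && isTopLeft(v2, v0));
            bool in2 = w2 > 0 || (w2 == 0 && isTopLeft(v0, v1));
            if (in0 && in1 && in2) put(x, y);
        }
}
```

For the shared-edge guarantee to hold exactly, both triangles must evaluate the edge functions on bit-identical vertex coordinates; production rasterizers typically snap vertices to a fixed-point subpixel grid so the w == 0 comparisons are reliable.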
OpenGL uses the GPU to perform this job. It is accelerated in hardware, and the process is called rasterization.
As far as I know, the hardware implementation is based on the scan-line algorithm.
This used to be done by creating the outline and then filling in the horizontal lines. See this link for more details - http://joshbeam.com/articles/triangle_rasterization/
Edit: I don't think this will produce the lone pixels you are after; there should be a pixel on every line.
Your problem looks a lot like the classic problem of triangles sharing the very same edge. What is done for triangles sharing an edge is that one triangle is allowed to claim the shared pixels while the other has to leave them blank.
When working with a graphics card, one usually gets this behavior by applying a drawing order from left to right while also enabling a z-buffer test, or by testing whether the pixel has ever been drawn: if a pixel with the very same z-value is already set, changing the pixel is not allowed.
In your example with the circles, the shared edge of neighbouring circle segments is not computed identically for both triangles. You have to check whether the edges are calculated differently, and why.
Whenever you draw two different shapes and you see something like that, you can either fix your model (so they share all the edge vertices), go for a z-buffer test, or use a color test.
You can also minimize the effect by drawing the edges into a sub-buffer with a higher resolution and down-sampling it. Since this doesn't affect the whole area, it is more cost-effective in space and time than down-sampling the whole scene.

Outline / Silhouette rendering with OpenGL

I know there are several techniques to achieve this, but none of them seems sufficient.
Using a Sobel / Laplace filter doesn't find all the correct edges (and finds unwanted edges), is slow, and doesn't give me control over the outline width.
What I have settled on for now is rendering the back side of my objects first, with a solid color and slightly bigger than the actual objects. The result does look good, but I really want my outlines to have a constant width.
I have already tried rendering the back side of my objects with thick wireframe lines. That gives me a constant outline width, but line width is deprecated, produces rendering artifacts, and leaves gaps where the outline abruptly changes direction (like on a cube, for example). I have not yet tried a third rendering pass that draws a point the size of the wireframe lines at each vertex, because of the other problems with this technique.
Any ideas?
edit: I even looked at finding the edges myself using a geometry shader, as described in http://prideout.net/blog/?p=54, but it suffers from the same gaps as the wireframe technique.
edit: I was able to get rid of the rendering artifacts with the wireframe technique by disabling GL_DEPTH_TEST while drawing the outlines. Unfortunately, I also lost the outlines on overlapping objects...
My goal is to get the same effect they use on characters in the Dragons Lair 3 game. Does anyone know how they did it?
In case you're after real edge detection, I've found that you can get pretty good results with a 5x5 convolution kernel of the LoG (Laplacian of Gaussian), applied to the depth buffer and blended over the rendered object (possibly with decent FSAA). You need some tuning in the fragment shader to clamp the blended outline, but the results are good. (And it's a matter of what you really want, by the way.)
Note that:
1) Laplace filtering and LoG filtering are different things and produce different results.
2) If you apply the convolution to the depth buffer instead of the rendered image, you end up with totally different results. Furthermore, if control over the outline width is desired, a dilate filter followed by a selective-erode pass can be applied; this way you end up with a render that closely matches a hand-drawn sketch made with a marker, and you get fine control over the tip size, at the cost of two extra passes.
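A minimal CPU-side sketch of that depth-buffer convolution (the kernel coefficients and threshold are illustrative choices; in a real renderer this would live in a fragment shader sampling a depth texture):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// A common 5x5 Laplacian-of-Gaussian approximation. The coefficients
// sum to 0, so flat depth regions produce no response.
static const float LOG5[5][5] = {
    { 0,  0, -1,  0,  0},
    { 0, -1, -2, -1,  0},
    {-1, -2, 16, -2, -1},
    { 0, -1, -2, -1,  0},
    { 0,  0, -1,  0,  0},
};

// depth: linearized depth values, row-major, w * h entries.
// Returns a mask with 1 wherever the LoG response exceeds the
// threshold, i.e. wherever a depth discontinuity (silhouette) sits.
std::vector<uint8_t> depthOutline(const std::vector<float>& depth,
                                  int w, int h, float threshold) {
    std::vector<uint8_t> mask(w * h, 0);
    for (int y = 2; y < h - 2; ++y)
        for (int x = 2; x < w - 2; ++x) {
            float sum = 0.0f;
            for (int ky = -2; ky <= 2; ++ky)
                for (int kx = -2; kx <= 2; ++kx)
                    sum += LOG5[ky + 2][kx + 2] *
                           depth[(y + ky) * w + (x + kx)];
            if (std::fabs(sum) > threshold) mask[y * w + x] = 1;
        }
    return mask;
}
```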