Displaying images without the black portion - OpenGL

I am trying to display a bitmap using OpenGL, but I don't want the black portion of the image to be displayed. I can do this in DirectX but not in OpenGL. In other words, I have images of plants with a black background, and I want the plants to be drawn so they look realistic (without a black border).

You can do this using alpha testing:
Add an alpha channel to your image before uploading it to a texture: 0.0 on black pixels and 1.0 everywhere else.
Enable alpha testing with glEnable( GL_ALPHA_TEST ).
Set glAlphaFunc( GL_GREATER, 0.1f ) so that only fragments with alpha above 0.1 pass.
Render the textured quad as usual. OpenGL will skip the zero-alpha texels thanks to the alpha test.
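A minimal CPU-side sketch of the first step, under the assumption of a tightly packed RGB bitmap (the helper name is invented for illustration): expand RGB to RGBA, giving black pixels zero alpha before the texture upload.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Expand tightly packed RGB pixels to RGBA, making pure-black texels
// fully transparent so the alpha test can discard them.
std::vector<uint8_t> addAlphaForBlack(const std::vector<uint8_t>& rgb) {
    std::vector<uint8_t> rgba;
    rgba.reserve(rgb.size() / 3 * 4);
    for (std::size_t i = 0; i + 2 < rgb.size(); i += 3) {
        uint8_t r = rgb[i], g = rgb[i + 1], b = rgb[i + 2];
        rgba.push_back(r);
        rgba.push_back(g);
        rgba.push_back(b);
        // 0 on black pixels, 255 everywhere else
        rgba.push_back((r == 0 && g == 0 && b == 0) ? 0 : 255);
    }
    return rgba;
}
```

The resulting buffer would then be uploaded as GL_RGBA, after which the glEnable/glAlphaFunc calls above discard the transparent texels.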

There are a couple of ways you can do this.
One is to use an image-editing program like Photoshop or GIMP to add an alpha channel to your image and then set the black portions to zero alpha (fully transparent). The upside is that it lets you decide exactly which portions of the image should be transparent, since a fully programmatic approach can sometimes hide things you want to be seen.
Another method is to loop through every pixel in your bitmap and set the alpha based on some defined threshold (i.e. if you only want true black to be transparent, check whether every color channel is 0). The downside is that this will occasionally cause some of your dark lines to disappear.
Also, you will need to make sure that you have actually enabled the alpha channel and test, as stated in the answer above. Double-check the order of your calls as well, since ordering mistakes cause a lot of issues when you're working with transparency.
That's about as much as I can suggest since you haven't posted the code itself, but hopefully it's enough to at least get you on the way to a solution.
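A sketch of the thresholded per-pixel loop described above (the function name and threshold parameter are made up for illustration): near-black pixels get zero alpha, which is exactly where the "disappearing dark lines" caveat comes from.

```cpp
#include <cstddef>
#include <cstdint>

// In-place pass over an interleaved RGBA buffer: any pixel whose color
// channels are all at or below `threshold` is treated as background and
// made transparent. A generous threshold also wipes out dark outlines.
void keyOutDark(uint8_t* rgba, std::size_t pixelCount, uint8_t threshold) {
    for (std::size_t i = 0; i < pixelCount; ++i) {
        uint8_t* p = rgba + i * 4;
        if (p[0] <= threshold && p[1] <= threshold && p[2] <= threshold)
            p[3] = 0;   // transparent
        else
            p[3] = 255; // opaque
    }
}
```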

Related

OpenGL : Blending & feedback effect

I'm struggling with a simple project. As an example/sandbox, I'm rendering a small oscillating rectangle on my output. I'm not using glClearColor(); instead, on every frame I draw a black rectangle before anything else, blended with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
My goal is to see, as I play with the alpha of this black rectangle, feedback from previous frames slowly fading, a kind of trail, and it's more or less working.
My main problem, though, is that the trail never completely disappears, and the longer I try to make it, the worse it gets. Also, I have to crank the alpha up quite a bit before seeing any trail at all, and I don't really understand why.
The default OpenGL framebuffer uses only 8 bits per color component. You can increase this by using a custom framebuffer backed by float, 16-bit, or 32-bit components.
I'm not sure that skipping glClearColor is the proper way to implement a motion trail. It's possible that the last bit of alpha blending runs into a precision/rounding problem, where 0.9 * 0x01 gives you back 0x01 (for each RGBA octet), so dim pixels never fade to zero. (Although I would be surprised if you could see the difference, but who knows.) If that's not the case, I would switch to a proper glClearColor and then create a trail of boxes, drawn the same way as the leading box, with deterministic decay and resource freeing.
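The rounding problem is easy to reproduce with plain arithmetic (this is a simulation of what an 8-bit framebuffer does, not actual GL): fading a channel by drawing black with GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA leaves dim values stuck, because round(0.9 * 1) is still 1.

```cpp
#include <cmath>
#include <cstdint>

// Simulate blending a black quad (src = 0) with alpha `a` over one
// 8-bit destination channel: result = dst * (1 - a), rounded to nearest.
uint8_t fadeOnce(uint8_t dst, float a) {
    return static_cast<uint8_t>(std::lround((1.0f - a) * dst));
}
```

With a low alpha the darkest non-zero values never reach 0, which would explain both the persistent trail and the need to crank the alpha; a float-backed framebuffer keeps shrinking the value instead of snapping it back up.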

How to XOR the colors under a shape? (SDL2)

I have an artsy side-project that is running slower than I want it to. Basically, I want to draw a bunch of shapes and colors such that they XOR the shapes and colors that I've already drawn. The program makes things like this:
The example image shows seven black circles XORed onto the screen.
My method is quite slow: for each pixel, I loop through every circle to determine whether that pixel should be XORed.
I can draw circles with SDL_gfx, but I can't seem to find a drawing mode that XORs. My current thought is to use a blend mode that at least tells me whether a given pixel has been covered an odd or even number of times. However, an SDL_Texture created to be rendered to ( SDL_TEXTUREACCESS_TARGET ) cannot also be directly manipulated (which requires SDL_TEXTUREACCESS_STREAMING ).
The simple question is, how do I apply a black circle such that it XORs the pixels below it?
I don't think there is a way to do this with SDL_Renderer and still have reasonable performance. You would have to do the work in an SDL_Surface and upload it again.
I wrote SDL_gpu to enable modern graphical effects with a similar style to SDL's built-in render API. This particular effect is trivial in GLSL if you've used it much. If you want to avoid custom shaders, this effect is probably possible with the expanded blend mode options that SDL_gpu has.
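If you do stay on the CPU, as the first answer suggests, the XOR itself is cheap; the slow part in the question is testing every circle for every pixel. A sketch (the helper name is invented) that instead XORs each filled circle directly into a 32-bit pixel buffer, touching each covered pixel once per circle:

```cpp
#include <cstdint>
#include <vector>

// XOR a filled circle into an ARGB pixel buffer. Pixels covered by an
// even number of circles return to their original color, which is the
// self-cancelling property the artwork relies on.
void xorCircle(std::vector<uint32_t>& pixels, int w, int h,
               int cx, int cy, int radius, uint32_t mask = 0x00FFFFFF) {
    for (int y = cy - radius; y <= cy + radius; ++y) {
        if (y < 0 || y >= h) continue;
        for (int x = cx - radius; x <= cx + radius; ++x) {
            if (x < 0 || x >= w) continue;
            int dx = x - cx, dy = y - cy;
            if (dx * dx + dy * dy <= radius * radius)
                pixels[y * w + x] ^= mask;
        }
    }
}
```

The resulting surface would then be uploaded once per frame, rather than queried per pixel per circle.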

OpenGL, blend only when destination pixel's alpha value is positive

I'm searching for a function/way to make blending happen only when the destination pixel's (i.e. the back buffer's) alpha value is greater than 0.
What I'm looking for is something like glAlphaFunc, which tests the incoming fragments, but in my case I want to test the fragments already in the back buffer.
Any ideas?
Thank you in advance
P.S. I cannot do a pixel-by-pixel test in the drawing function because it is set as a callback function for the user.
Wait, your answer is somewhat confusing, but I think what you're looking for is something like this: opengl - blending with previous contents of framebuffer
Sorry for this, but I think it's better to answer than to comment.
So, let me explain better by giving an example.
Let's say we have to draw something (whatever the user wants, like a table), and after that (before swapping the buffers, of course) we must draw over it the "saved" textures using blending.
Let's say we have to draw two transparent boxes. If those boxes are to be saved in a different texture, this can be done as follows:
Clear the screen with (0, 0, 0, 0).
Set the blend function to (GL_ONE, GL_ZERO).
Draw the box.
Save it to a texture.
Now, whenever the user wants to redraw them all, he simply draws the main theme (the table) and over it draws the textures using the blend function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
This works fine. But if the user wants to save both boxes in one texture and the boxes overlap, how can we save the blending of those two boxes without blending them with the "cleared" background?
Summarizing, the final image of the whole painting should be a table with two boxes (let's say a yellow and a green box) over it, blended with the function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
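One common way around the cleared-background problem (not from the question, but a standard technique) is premultiplied alpha: clear the texture to (0, 0, 0, 0), draw into it and composite it with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA), with colors pre-scaled by their alpha. A CPU sketch of the premultiplied "over" operator shows why the empty background never contaminates the result:

```cpp
// Premultiplied-alpha pixel: r, g, b are already scaled by a.
struct Px { float r, g, b, a; };

// Premultiplied "over": src composited on top of dst. With a cleared
// dst = (0, 0, 0, 0) this returns src unchanged, so saving overlapping
// transparent boxes into one texture is lossless.
Px over(Px src, Px dst) {
    float k = 1.0f - src.a;
    return { src.r + dst.r * k, src.g + dst.g * k,
             src.b + dst.b * k, src.a + dst.a * k };
}
```

Because this operator is associative, compositing box A over box B into a texture and later drawing that texture over the table gives the same result as drawing the boxes over the table directly.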

Subpixel rasterization on opaque backgrounds

I'm working on a subpixel rasterizer. The output is to be rendered on an opaque bitmap. I've gotten as far as correctly rendering text white-on-black (because there I can basically disregard the contents of the bitmap).
The problem is the blending. Each rendered pixel affects its neighbours' intensity levels as well, because of the lowpass filtering technique (I'm using the 5-tap FIR with weights 1/9, 2/9, 3/9, 2/9, 1/9), in addition to the alpha level of the pixel being rendered. The result then has to be alpha-blended onto the destination image, which is where the problem occurs...
The results of the pixels' interactions have to be added together to achieve correct luminance, and then alpha-blended onto the destination, but if I rasterize one pixel at a time, I lose the information from the previous pixels, so further additions may lead to overflow.
How is this supposed to be done? The only solution I can imagine working is to render to a separate image with an alpha channel for each colour, then apply some complex blending algorithm, and lastly alpha-blend the result onto the destination... somehow.
However, I couldn't find any resources on how to actually do this, besides the basic concepts of LCD subpixel rendering and nice close-up images of monitor pixels. If anyone can help me along the way, I would be very grateful.
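One way to sidestep the overflow is to accumulate the filtered coverage in a float buffer first and only clamp and convert to 8 bits at the very end. A sketch (function name invented) of the 5-tap FIR pass described above, with weights 1/9, 2/9, 3/9, 2/9, 1/9, applied to a row of subpixel coverage values:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Low-pass a row of subpixel coverage values (0..1) with the 5-tap FIR
// 1/9, 2/9, 3/9, 2/9, 1/9. Accumulating in float avoids the 8-bit
// overflow problem; clamping happens once, after all contributions.
std::vector<float> firFilter(const std::vector<float>& coverage) {
    static const float taps[5] = {1 / 9.f, 2 / 9.f, 3 / 9.f, 2 / 9.f, 1 / 9.f};
    std::vector<float> out(coverage.size(), 0.0f);
    for (std::size_t i = 0; i < coverage.size(); ++i)
        for (int t = -2; t <= 2; ++t) {
            long j = static_cast<long>(i) + t;
            if (j >= 0 && j < static_cast<long>(coverage.size()))
                out[i] += taps[t + 2] * coverage[j];
        }
    for (float& v : out) v = std::min(1.0f, std::max(0.0f, v));
    return out;
}
```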
Tonight I awoke and could not fall asleep again.
I could not let all that brain energy go to waste, and I stumbled over exactly the same problem.
I came up with two different solutions, both unvalidated:
You can use a 3-channel alpha mask, one channel per subpixel, and blend each color channel with its own alpha.
You can use the color channels themselves as alpha masks if you only render gray/BW text (1 - color_value if you draw dark text on a light background color), again applying each channel individually. The color value itself should be considered 1 in this case.
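The first suggestion, sketched on the CPU (names are invented for illustration): each subpixel carries its own alpha, and each destination channel is blended against the matching mask channel.

```cpp
struct Rgb { float r, g, b; };

// Blend text color `src` onto `dst` using a per-subpixel alpha mask,
// one mask channel per color channel (the 3-channel mask option above).
Rgb subpixelBlend(Rgb src, Rgb dst, Rgb mask) {
    return { src.r * mask.r + dst.r * (1.0f - mask.r),
             src.g * mask.g + dst.g * (1.0f - mask.g),
             src.b * mask.b + dst.b * (1.0f - mask.b) };
}
```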
Hope this helps a little; I filled ~2h of insomnia with it.
~ Jan

OpenGL Rendering Transparent .png with Random White Pixels

I am working on a game with a friend, and we are using OpenGL, GLUT, DevIL, and C++ to render everything. Simply put, most of the .pngs we are using render properly, but random pixels show up as white.
These pixels fall into two categories. The first are pixels on the edge of the image, resulting from the anti-aliasing of Photoshop's stroke feature (which I am trying to fix). The second is more mysterious: when the enemy is standing still, the texture looks fine, but as soon as it jumps, a random white line appears along its top.
The line on top is of varying solidity (this shot is not the most solid).
It seems like a blending issue, but I am not as familiar with the way OpenGL handles transparency (our code for transparency was learned from other Stack Overflow questions, though I couldn't find anything on this particular issue). I am hoping something will fix both issues, but I am more worried about the second.
Our current setup code:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_TEXTURE_2D);
glDisable(GL_DEPTH_TEST);
Transparent areas of a bitmap also have a color. If an area is 100% transparent, you usually can't see that color, but Photoshop typically fills those areas with white.
If you are using minification or magnification filters other than GL_NEAREST, then you will get interpolation. If you interpolate between two pixels, where one is blue and opaque and the other is white and transparent, you get something that is 50% transparent and light blue. You may also get the same problem with mipmaps, since interpolation is used to build them. If you use mipmaps, one solution is to generate them yourself; that way, you can ignore the transparent areas when doing the interpolation. See some good explanations here: http://answers.unity3d.com/questions/10302/messy-alpha-problem-white-around-edges.html
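The bleed described above can be shown with a single linear-filter step (plain arithmetic, not actual GL): averaging an opaque blue texel with a transparent white one yields a half-transparent, washed-out light blue, which is exactly the white fringe on the sprite edges.

```cpp
struct Texel { float r, g, b, a; };

// Naive average of two adjacent texels, as a GL_LINEAR filter does on
// straight (non-premultiplied) alpha: the hidden white color of the
// transparent texel bleeds into the visible result.
Texel lerpStraight(Texel p, Texel q) {
    return { (p.r + q.r) * 0.5f, (p.g + q.g) * 0.5f,
             (p.b + q.b) * 0.5f, (p.a + q.a) * 0.5f };
}
```

Premultiplying the colors by alpha before filtering avoids this, because a fully transparent texel then contributes (0, 0, 0, 0) regardless of its hidden color.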
Why are you using PNG files? You save some disk space but need to include complex libraries like DevIL. You don't save any space in the delivery of an application, as most tools that create delivery packages compress very efficiently. And you don't save any memory on the GPU, which may be the most critical resource.
This looks like an artifact in your source PNG. Are you sure there are no such light opaque pixels in it?
The white line appearing on top could be a UV interpolation error picking up the neighboring texture in your texture atlas (or the padding, if you pad your NPOT textures to POT with white opaque pixels). That's why you usually need to pad textures with at least one duplicated edge pixel in every direction. That won't help with mipmaps, though; as Lars said, you might need to use custom mipmap generation or drop mipmaps altogether.
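A sketch of the one-pixel edge padding mentioned above (the helper name is invented): replicate the border texels of each atlas tile outward into a one-pixel gutter, so a linear filter sampling at the tile edge never reads a neighbor's pixels.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Copy a w*h RGBA tile into a (w+2)*(h+2) buffer, replicating the
// border pixels into the 1-pixel gutter around it.
std::vector<uint32_t> padTile(const std::vector<uint32_t>& src, int w, int h) {
    int W = w + 2, H = h + 2;
    std::vector<uint32_t> dst(static_cast<std::size_t>(W) * H);
    for (int y = 0; y < H; ++y) {
        int sy = std::min(h - 1, std::max(0, y - 1));
        for (int x = 0; x < W; ++x) {
            int sx = std::min(w - 1, std::max(0, x - 1));
            dst[y * W + x] = src[sy * w + sx];
        }
    }
    return dst;
}
```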