I have some GDI code that's drawing semi-transparent triangles using System.Drawing.SolidBrush. I'm trying to reimplement the code using a proper 3D rendering API (OpenGL/Direct3D11), but I'm not sure what blend equation to use for these triangles to get the same output as the original GDI code.
I assume it's something relatively simple like additive blending (eq=GL_FUNC_ADD, func=GL_ONE,GL_ONE) or interpolation (eq=GL_FUNC_ADD, func=GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA), but neither seems to look quite right. This is almost certainly due to a bug in my new code, but I want to make sure I'm working towards the correct target before I continue. Does anybody know the appropriate blend equation?
EDIT: Here's the relevant C# code, stripped of context:
using System.Drawing;
SolidBrush b = new SolidBrush(Color.FromArgb(alpha, red, green, blue));
Point[] points = ...;
Graphics g;
g.FillPolygon(b, points);
My question is, what color will actually be written, in terms of the brush's RGBA and the destination pixel's RGBA? All the docs I can find just say "use alpha to control the Brush's transparency" or something equally vague; I'm curious what the actual blend equation is.
According to MSDN, System.Drawing.Graphics has a property named CompositingMode which can be either SourceOver or SourceCopy.
The default one is SourceOver, which
Specifies that when a color is rendered, it is blended with the background color. The blend is determined by the alpha component of the color being rendered.
As I have tested, the blend strategy used here is alpha compositing ("A over B"), which determines the pixel value where image A is composited over image B by the following equations:
αo = αa + αb·(1 − αa)
Co = (Ca·αa + Cb·αb·(1 − αa)) / αo
Here αa and αb are the normalized alpha values of image A and image B; Ca and Cb are the color (RGB) values of image A and image B; Co and αo are the color and alpha values of the output image.
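For reference, here is a minimal OpenGL sketch of my own (not from the GDI+ docs) that matches SourceOver when the source color is non-premultiplied and the destination is opaque:
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
/* With an opaque destination (alpha_b = 1), the equations above reduce to
   Co = Ca*alpha_a + Cb*(1 - alpha_a), which is exactly what this state produces. */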
Related
I've created two cubes as a single object in Blender, applied a different image texture to each cube, and exported them as an .OBJ file. I converted the .OBJ to a .USDZ file using Xcode's usdz_converter, supplying the .png image texture file as the material for my object via the color_map flag. The material is applied to both cubes; however, one cube appears opaque and the other appears transparent.
I'm not sure why I am having this issue. Can anyone help me out?
OBJ File = PBR_Cube.obj
Image Texture file = Combined.png
Code:
xcrun usdz_converter PBR_Cube.obj PBR_Cube.usdz -v -a -l \
-color_map Combined.png
I expect both cubes to be opaque.
I know this is an old post, but I have the solution to this issue and have not found the answer anywhere online.

The USDZ format (at least on mobile) interprets the base color RGBA map as pre-multiplied alpha, i.e. rgb(a). If the model does not use any transparency, it is safe and easy to simply use a 3-channel RGB output (a JPEG works as a simple solution, or a PNG with a pre-multiplied alpha of 1).

The USDZ documentation recommends using separate material sets for transparent areas, but it is possible to use an opacity mask; this should be formatted as a PNG-24 with pre-multiplied alpha containing the opacity mask. In this rgb(a) format the alpha is baked into the RGB values, so the hex codes differ from those of a straight RGBA image with the alpha in the 'a' channel, even though the colors appear the same. This explains why colors look correct but transparency values are wrong.

I used Substance Designer to combine my base color and opacity mask into a pre-multiplied PNG-24, and this worked perfectly. The process seems to be finicky/difficult in Photoshop or other image-manipulation software, but it is straightforward in Substance (I can give more details if anyone is interested). It might be possible to use an RGB JPEG colormap for the base color and a single-channel greyscale mask for opacity, but I'm not entirely sure whether the USDZ converter would properly combine these; it is safer to build your own 4-channel PNG-24.
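If it helps, here is a small sketch of what "pre-multiplied" means in practice; the tightly packed 8-bit RGBA layout and the function name are my own assumptions, not part of the USDZ tooling:
#include <stdint.h>
#include <stddef.h>

/* Multiply each color channel by its alpha, i.e. store rgb * a in the
   color channels, rounding to the nearest 8-bit value. */
void premultiply_rgba8(uint8_t *px, size_t pixel_count)
{
    for (size_t i = 0; i < pixel_count; ++i, px += 4) {
        unsigned a = px[3];
        px[0] = (uint8_t)((px[0] * a + 127) / 255);
        px[1] = (uint8_t)((px[1] * a + 127) / 255);
        px[2] = (uint8_t)((px[2] * a + 127) / 255);
    }
}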
I am just learning the intricacies of OpenGL. What I would like to do is render a sprite onto a pre-existing texture. The texture will consist of terrain with some points alpha=1 and some points alpha=0. I would like the sprite to appear on a pixel of the texture if and only if the corresponding texture's pixel's alpha = 0. That is, for each pixel of the sprite, the output colour is:
Color of the sprite, if terrain alpha = 0.
Color of the terrain, if terrain alpha = 1.
Is this possible to do with a blending function? If not, how should I do it?
This is the exact opposite of the traditional blending function. The usual blend function is a linear interpolation between the source and destination colors, based on the source alpha.
What you want is a linear interpolation between the source and destination colors, based on the destination alpha. But you also want to invert the usual meaning; a destination alpha of 1 means that the destination color should be taken, not the source color.
That's pretty easy.
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA);
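For completeness, the full state might look like this (assuming the default GL_FUNC_ADD blend equation and nothing else touching blend state):
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA);
/* Resulting color: out = src*(1 - dstA) + dst*dstA, so terrain pixels
   (dstA = 1) keep the terrain and empty pixels (dstA = 0) take the sprite. */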
However, the above assumes that your sprites do not themselves have some form of inherent transparency. And most sprites do. That is, if the sprite alpha is 0 at some pixel, you don't want to overwrite the terrain color, no matter what the terrain's alpha is.
That makes this whole process excessively difficult. Pre-multiplying the alpha will not save you either, since black will just as easily overwrite the color in the terrain if there is no terrain color there.
In effect, you would need to do a linear interpolation based on neither the source nor the destination, but on a combination of them. I think multiplication of the two (src-alpha * (1 - dst-alpha)) would do a good job.
This is not possible with OpenGL's standard blending system. You would need to employ some form of programmatic blending technique. This typically involves read/modify/write operations using NV/ARB_texture_barrier or otherwise ping-ponging between bound textures.
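As a CPU-side illustration only (this is precisely what fixed-function blending cannot express), the per-pixel rule described above could be written as follows; the struct and function names are my own:
typedef struct { float r, g, b; } RGBf;

/* Interpolate using the combined factor f = srcA * (1 - dstA). */
RGBf composite_rgb(RGBf src, float src_a, RGBf dst, float dst_a)
{
    float f = src_a * (1.0f - dst_a);
    RGBf out = {
        src.r * f + dst.r * (1.0f - f),
        src.g * f + dst.g * (1.0f - f),
        src.b * f + dst.b * (1.0f - f)
    };
    return out;
}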
I am looking to reproduce the glow effect from this tutorial. If I understand it correctly, we convert the first image to an "alpha texture" (black and white), and we blur the (rgb * a) texture.
How is it possible to create this alpha texture, so that some colors map to white and the others to black? I found this: How to render a texture with alpha? but I don't really know how to use those answers.
Thanks
It appears you are misunderstanding what that diagram is showing you. It is actually all one texture, but (a) shows the RGB color and (b) shows the alpha channel. (c) shows what happens when you multiply RGB by A.
Alpha is not actually "black and white"; it is an abstract concept that amounts to a range of values between 0.0 and 1.0. For the human brain to make sense of it, it is visualized as black (0.0) and white (1.0). In reality, alpha is whatever you want it to be and is unrelated to color (though it can be used to do something to color).
Typically the alpha channel would be generated by a post-process image filter that looks for areas of the texture with significantly above-average luminance. Modern graphics engines use HDR, and any part of the scene with a color too bright to be displayed on a monitor is a candidate for glowing. The intensity of this glow is derived from just how much brighter the lighting at that point is than the monitor can display.
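For a rough idea of what such a bright-pass might do, here is a small CPU-side sketch; the packed 8-bit RGBA layout, cutoff parameter and function name are my own assumptions, not part of the tutorial:
#include <stdint.h>
#include <stddef.h>

/* Write a glow mask into the alpha channel: pixels whose luma exceeds
   the cutoff keep their luma as alpha, everything else gets alpha 0. */
void bright_pass_alpha(uint8_t *px, size_t pixel_count, uint8_t cutoff)
{
    for (size_t i = 0; i < pixel_count; ++i, px += 4) {
        /* Integer approximation of Rec. 709 luma. */
        unsigned luma = (54u * px[0] + 183u * px[1] + 19u * px[2]) >> 8;
        px[3] = (luma > cutoff) ? (uint8_t)luma : 0;
    }
}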
In this case, however, it appears to be human-created. Think of the alpha channel like a mask: some artist looked at the UFO and decided that the areas that appear non-black in figure (b) were supposed to glow, so a non-zero alpha value was assigned (with alpha = 1.0 glowing the brightest).
Incidentally, you should not be blurring the alpha mask. You want to blur the result of RGB * A. If you just blurred the alpha mask, then this would not resemble glowing at all. The idea is to blur the lit parts of the UFO that are supposed to glow and then add that on top of the base UFO color.
I'm working on a subpixel rasterizer. The output is to be rendered onto an opaque bitmap. I've come as far as correctly rendering text white-on-black (because I can basically disregard the contents of the bitmap).
The problem is the blending. Each rendered pixel also affects its neighbours' intensity levels because of the lowpass filtering technique (I'm using a 5-tap FIR: 1/9, 2/9, 3/9, etc.), and on top of that there is the alpha level of the pixel to be rendered. This result then has to be alpha-blended onto the destination image, which is where the problem occurs...
The results of the pixels' interactions have to be added together to achieve correct luminance - and then alpha-blended onto the destination - but if I rasterize one pixel at a time, I 'lose' the information of the previous pixels, so further addition may lead to overflowing.
How is this supposed to be done? The only solution I can imagine would be to render to a separate image with an alpha channel for each colour, then apply some complex blending algorithm, and lastly alpha-blend it onto the destination... somehow.
However, I couldn't find any resources on how to actually do it - besides the basic concepts of LCD subpixel rendering and nice close-up images of monitor pixels. If anyone can help me along the way, I would be very grateful.
Tonight I awoke and could not fall asleep again.
I could not let all that brain energy go to waste and stumbled over exactly the same problem.
I came up with two different solutions, both unvalidated.
You have to use a 3-channel alpha mask, one channel for each subpixel, and blend each color channel with its own alpha (see the sketch below).
Alternatively, you can use the color channels themselves as the alpha mask if you only render gray/BW text (1 − color_value if you draw dark text on a light background), again applying each channel individually. The color value itself should be considered 1 in this case.
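A minimal sketch of the first idea, per-channel blending with one coverage/alpha value per subpixel; the 8-bit layout and all names are my own assumptions:
#include <stdint.h>

typedef struct { uint8_t r, g, b; } RGB8;

/* Rounded 8-bit linear interpolation: src*a + dst*(1 - a). */
static inline uint8_t mix8(uint8_t src, uint8_t dst, uint8_t a)
{
    return (uint8_t)((src * a + dst * (255 - a) + 127) / 255);
}

/* cov holds the filtered coverage for the R, G and B subpixels. */
RGB8 blend_subpixel(RGB8 text, RGB8 dst, RGB8 cov)
{
    RGB8 out;
    out.r = mix8(text.r, dst.r, cov.r);
    out.g = mix8(text.g, dst.g, cov.g);
    out.b = mix8(text.b, dst.b, cov.b);
    return out;
}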
Hope this helps a little, I filled ~2h of insomnia with it.
~ Jan
I am writing a Lights/Shadows system for my game using Java alongside the LWJGL. For each one of the Light-Emitting Entities I generate such a texture:
I should warn you that these textures are full of (0, 0, 1) or (1, 0, 0) pixels, and the gradient effect is achieved with the alpha channel. I interpret the alpha channel as a gradient factor.
Afterwards, I wish to blend every light/shadow texture together onto a single texture, each at its correct position. For that, I use a framebuffer. I tried to achieve the desired effect using the following blend equation/function combination:
glBlendEquationSeparateEXT(GL_FUNC_ADD, GL_MAX);
glBlendFuncSeparateEXT(GL_SRC_ALPHA, GL_DST_ALPHA, GL_ONE, GL_ONE);
I chose GL_ONE/GL_ONE for the alpha channel blend function arbitrarily, since GL_MAX will only do max(Sa, Da), as stated here, which means the scaling factors are not used. The result of this combination is the following:
This image was obtained with Apple's OpenGL Driver Profiler, so I did not render it using my application (which could mess with the final result). The next step would be to render this texture over the actual game using multiply-blending, in order to darken the image, but the lights/shadows texture is obviously wrong, because we can see the edges of individual light/shadow textures over each other.
How should I proceed to achieve the desired result?
Edit:
I forgot to explain my choices for the scaling factors:
I think that it would be right to simply add the colors of each light (weighting each of them by their respective alpha values) and choose the alpha of the final fragment to be the largest alpha among the overlapping lights.
Imagine that one of your texture rectangles was extended outside its current border with some arbitrary pattern, like pure green. Imagine further that we were somehow allowed to use two different blending functions, one inside the border, and one outside. You would get the same image you have here (none of the green showing) if outside the border you used the blend function
glBlendFuncSeparateEXT(GL_ZERO, GL_ONE, GL_ONE, GL_ONE)
We would then want whatever blending function we use inside to give us a continuous blending result. The blending function
glBlendFuncSeparateEXT(GL_SRC_ALPHA, GL_DST_ALPHA, GL_ONE, GL_ONE)
would not give us such a result. The problem is not so much the first parameter, which would mean ignoring the source near the border (small alpha on the border, if I read your image description correctly), so it must be the second parameter. We want the destination only when the source alpha is small. Change GL_DST_ALPHA to GL_ONE_MINUS_SRC_ALPHA. This would be more standard, but maybe I'm not understanding your objectives?
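A minimal sketch of that change, keeping the question's EXT entry points and the GL_MAX alpha equation, with all other state assumed unchanged:
glBlendEquationSeparateEXT(GL_FUNC_ADD, GL_MAX);
glBlendFuncSeparateEXT(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);
/* Color: Co = Cs*As + Cd*(1 - As)   (standard source-over)
   Alpha: Ao = max(As, Ad)           (GL_MAX ignores the GL_ONE factors) */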