An image negative effect is typically done like this:
pixel = rgb(1, 1, 1) - pixel
but if the pixel color is close to gray, then:
pixel = rgb(1, 1, 1) - rgb(0.5, 0.5, 0.5) = rgb(0.5, 0.5, 0.5)
That's not a problem in general, and it's how the effect should behave, but it is a problem for me. I am making a crosshair texture in my 3D game, which will be drawn in the center of the screen, and I want it to have a negative effect. The reason is clarity: if I were to make the crosshair white, it would not be visible when looking at white objects. (I know I could give it a black outline so it is visible, but that's ugly.) The negative effect still has problems with grayish colors, as I described. What can be done to fix that?
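For reference, a minimal sketch of how this inversion is often set up in OpenGL (an assumption on my part; the question doesn't name an API), drawing the crosshair as a white textured quad:

glEnable(GL_BLEND);
// outColor = (1 - dstColor) * srcColor: a white source texel produces
// the exact negative of whatever is behind it.
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);

With a mid-gray background this yields 1 - 0.5 = 0.5, which is exactly the invisibility problem described above.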
The "normal" source-over blend equation is
outColor = srcAlpha * srcColor + (1 - srcAlpha) * dstColor
This equation does not consider the destination alpha, and as such produces poor results when the destination alpha is not 1.0.
For example, consider the case of a 50%-opaque yellow source color over a destination that is fully transparent but has a red color. [Edit: e.g. the RGBA buffer holds values of [255, 0, 0, 0].] The above equation results in 50% yellow blended with 50% red, tainting the yellow even though the background is fully transparent.
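Working the numbers (normalized to [0, 1]): src = (1, 1, 0) with srcAlpha = 0.5 over dst = (1, 0, 0) gives outColor = 0.5 * (1, 1, 0) + 0.5 * (1, 0, 0) = (1, 0.5, 0), an orange, even though the fully transparent red destination should contribute nothing.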
What is a blend equation that works with destination alpha, such that a source image with semi-transparent pixels blended over a fully-transparent target remains unchanged?
You should draw translucent objects from furthest to closest.
If you are going to draw N objects sorted by distance, then when you render object i you only need to take into account the alpha of i. The alphas of all objects drawn before i have already been taken into account when they were drawn.
(image from: https://www.sterlingpartyrentals.com/product/color-gels-for-par-light/)
The dstAlpha is almost never used.
In your example, having transparent red in the destination... how was the red drawn in the first place if it's fully transparent?
You should always draw fully opaque objects first, and make sure that the whole screen gets something opaque drawn into it. In 3D you can, for example, use cubemaps to make sure that this is the case.
(image from: https://learnopengl.com/Advanced-OpenGL/Cubemaps)
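Putting both pieces of advice together, a minimal sketch of the two-pass order (hypothetical Object type and draw() stub, not tied to any particular engine):

#include <algorithm>
#include <vector>

struct Object {
    float distanceToCamera;
    bool translucent;
    // mesh, material, ...
};

void draw(const Object&) { /* submit to the renderer */ }

void drawScene(std::vector<Object>& objects) {
    // Pass 1: opaque objects, so every pixel gets an opaque base color.
    for (const Object& o : objects)
        if (!o.translucent)
            draw(o);

    // Pass 2: translucent objects, furthest first; each blend then only
    // needs the alpha of the object currently being drawn.
    std::vector<const Object*> translucent;
    for (const Object& o : objects)
        if (o.translucent)
            translucent.push_back(&o);
    std::sort(translucent.begin(), translucent.end(),
              [](const Object* a, const Object* b) {
                  return a->distanceToCamera > b->distanceToCamera;
              });
    for (const Object* o : translucent)
        draw(*o);
}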
Let's take the simplest case of rendering two overlapping transparent rectangles, one red and one green, both with alpha=0.5. Assume that the drawing order is from back to front, meaning that the rectangle farther from the camera is drawn first.
In realistic scenarios, irrespective of which rectangle happens to be in front, the overlapping color should be the same, i.e. RGBA = [0.5, 0.5, 0.0, 0.5].
In practice, however, assuming that we are blending with weights SRC_ALPHA and ONE_MINUS_SRC_ALPHA, the overlapping color is dominated by the color of the front rectangle, as in this image:
I believe this happens because the first rectangle is blended with the background color, and the second rectangle is then blended with the resultant color. With this logic, and assuming a white background, the overlapping color in the two cases works out to be:
Red on top: 0.5*(0.5*[1,1,1,0] + 0.5*[0,1,0,0.5]) + 0.5*[1,0,0,0.5] = [0.75, 0.50, 0.25, 0.375]
Green on top: 0.5*(0.5*[1,1,1,0] + 0.5*[1,0,0,0.5]) + 0.5*[0,1,0,0.5] = [0.50, 0.75, 0.25, 0.375]
which explains the dominance of the color on top. In principle, this could easily be corrected if all the objects were blended first, and the resultant color then blended with the background color.
Is there a way to achieve this in OpenGL?
Ideally, irrespective of which rectangle happens to be in front, the overlapping color should be the same
No, because when you use "SourceAlpha, InvSourceAlpha" blending, the formula for calculating the final color is:
destRGB = destRGB * (1-sourceAlpha) + sourceRGB * sourceAlpha
This means that the color of the rectangle which is drawn first is multiplied by its alpha channel and added to the framebuffer. When the second rectangle is drawn, the content of the framebuffer (which includes the color of the first rectangle) is multiplied again, this time by the inverse alpha channel of the second rectangle.
The color of the second rectangle is multiplied by the alpha channel of the 2nd rectangle only:
destRGB = (destRGB * (1-Alpha_1) + RGB_1 * Alpha_1) * (1-Alpha_2) + RGB_2 * Alpha_2
or
destRGB = destRGB * (1-Alpha_1)*(1-Alpha_2) + RGB_1 * Alpha_1*(1-Alpha_2) + RGB_2 * Alpha_2
While RGB_2 is multiplied by Alpha_2, RGB_1 is multiplied by Alpha_1 * (1-Alpha_2).
So the result depends on the drawing order whenever the color already in the framebuffer is modified by the alpha channel of the new (source) color.
If you want to achieve an order-independent effect, then the color in the framebuffer must not be modified by the alpha channel of the source fragment, e.g.:
destRGB = destRGB * 1 + sourceRGB * sourceAlpha
This can be achieved by passing GL_ONE as the destination factor of glBlendFunc:
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
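Expanding this additive blend for the two rectangles shows why the order stops mattering:

destRGB = destRGB + RGB_1 * Alpha_1 + RGB_2 * Alpha_2

This is a plain sum, so swapping the draw order leaves the result unchanged. (Note that additive blending brightens the image instead of mixing toward the front color, so it is a different visual effect, not a drop-in replacement for classic alpha compositing.)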
Drawing transparent surfaces depends a lot on order. Most issues happen because you're using depth tests and writing to the depth buffer (in which case the result depends not only on which triangle is in front, but also on which triangle is drawn first). But if you ignore depth and just want to draw triangles one after another, your results still depend on the order in which you draw them, unless you use certain commutative blend functions.
Since you've been talking about stained glass, here's one option that works roughly like stained glass:
glBlendFunc(GL_ZERO, GL_SRC_COLOR)
This essentially multiplies each color channel of the destination by the corresponding color channel of the source. So if you draw a triangle with color (0.5, 1.0, 1.0), then it will basically divide the red channel of whatever it's been drawn onto by two. Drawing on a black destination will keep the pixel black, just like stained glass does.
To reduce the "opacity" of your stained glass, you'll have to mix your colors with (1.0, 1.0, 1.0). The alpha value is ignored.
As a bonus, this blend function is independent of the order in which you draw your shapes (assuming you've made the depth buffer read-only or disabled depth testing).
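A minimal sketch of that setup in OpenGL (assuming depth writes are simply disabled for the translucent pass):

glEnable(GL_BLEND);
glBlendFunc(GL_ZERO, GL_SRC_COLOR);  // dst = dst * src, commutative
glDepthMask(GL_FALSE);               // depth buffer read-only for this pass
// ... draw the tinted "glass" geometry in any order ...
glDepthMask(GL_TRUE);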
I'm trying to use QPainter::drawEllipse to draw circles. I want to be able to:
set the width of the stroke of the circle (QPen::width)
choose the shape of the pixels that are at the center of the circle (1x1, 1x2, 2x1 or 2x2)
optionally make the circle filled instead of stroked
ensure that the circle has the correct radius (even when the stroke width is greater than 1)
These goals are surprisingly difficult to achieve. This is an example of what I want to render (drawn by hand):
The image is 32x32 (scaled up to 512x512). The red center point is at (15, 15). The center is 1x2 so there's an extra red pixel below the center pixel. The stroke has a width of 2 pixels. If the stroke was made wider, pixels would be added to the inside of the circle. The bounding box of the circle is the same regardless of stroke width. The radius is 8 pixels. Each of the blue lines is 8 pixels long. Just to be clear, the red and blue pixels are just there for describing the circle. They are not part of my desired output.
What my problem really boils down to is rendering an ellipse that fits perfectly inside a rectangle. I can calculate the rectangle using the center point, the radius, and the center shape. That part is easy. Simply calling drawEllipse with this rectangle doesn't work. I think I have to adjust this rectangle somehow before calling drawEllipse but I'm not too sure how to adjust it. I've tried fiddling around with it and I found some solutions that work for some pen widths but not others.
Does the pen cap matter? I've been using RoundCap. Should I be using a different cap?
I'm almost at the point where I'm considering doing the pixel manipulation myself. I'm rendering onto a QImage and using the Source composite operation so my code might be slightly faster than drawEllipse. memset is about 10x faster than QImage::fill so writing faster code probably won't be too hard! I'd rather not have to do that though.
I stumbled upon a section in the docs that talks about how QRects are rendered. It describes the relationship between the rendered pixels and the logical rectangle. The rendered rectangle is bigger than the logical rectangle. All I have to do is make the logical rectangle smaller to compensate.
// Shrink the logical rectangle so that a pen of the given thickness,
// which Qt centers on the outline, renders within the intended bounds.
QRect adjustStrokedRect(const QRect rect, const int thickness) {
    return QRect{
        rect.left() + thickness / 2,
        rect.top() + thickness / 2,
        rect.width() - thickness,
        rect.height() - thickness
    };
}
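For instance (hypothetical rect and thickness values):

QPen pen{Qt::green};
pen.setWidth(thickness);
painter.setPen(pen);
painter.drawRect(adjustStrokedRect(rect, thickness));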
Ok, so now I can get stroked rectangles to render in the right place. An ellipse is described by a QRect so what if I just apply this transformation to that rectangle?
Nope.
It sort of works if the thickness is 1, 2, 4, 6 but not 3, 5, 7. The circle is one pixel too small when the thickness is 3, 5, 7. So I tried adding 1 to the rectangle size if thickness % 2 == 1 && thickness != 1 but then an asymmetric circle is rendered from a square. For some combinations of position and size, a wonky asymmetric circle is rendered even when the size is square.
Here's a weird image that you can easily reproduce:
Produce it with this code:
QImage image{32, 32, QImage::Format_ARGB32_Premultiplied};
image.fill(0);  // the buffer is uninitialized otherwise
QPainter painter{&image};
QPen pen{Qt::NoBrush, 3.0, Qt::SolidLine, Qt::RoundCap, Qt::RoundJoin};
pen.setColor(QColor{0, 255, 0, 255});
painter.setPen(pen);
painter.drawEllipse(8, 8, 17, 17);
painter.end();  // finish painting before saving
image.save("weird.png");
I simply don't understand how that's even possible. To me, it seems like drawEllipse is rendering an ellipse that just roughly fits within the rectangle. I haven't been able to find the relationship between the rectangle and the ellipse anywhere in the docs. Perhaps this is because it's a very loose relationship.
I have no trouble getting QPainter::drawEllipse to draw circles with a stroke width of 1 so for now I just won't allow thick circles in my application. If I can’t render it perfectly, I won’t render it at all. I'm not marking this answer as accepted though as I would still like this to work.
I probably am too late for this, but still, for future reference:
Unfortunately, Qt plots ellipses using Bezier curves (as of now; this might change in the future), which is a pretty good approximation of an ellipse but isn't perfect. Plotting a pixel-perfect ellipse would require a manual implementation at the pixel level.
Try setting this QPainter flag to true:
painter->setRenderHint(QPainter::Antialiasing, true);
Did the trick for me!
I am looking to reproduce the glow effect from this tutorial. If I understand correctly, we convert the first image to an "alpha texture" (black and white), and we blur the (rgb * a) texture.
How is it possible to create this alpha texture, so that some colors map to white and the others to black? I found this: How to render a texture with alpha? but I don't really know how to use those answers.
Thanks
It appears you are misunderstanding what that diagram is showing you. It is actually all one texture, but (a) shows the RGB color and (b) shows the alpha channel. (c) shows what happens when you multiply RGB by A.
Alpha is not actually "black and white"; it is an abstract concept, a range of values between 0.0 and 1.0. To visualize it, we display 0.0 as black and 1.0 as white. In reality, alpha is whatever you want it to be and is unrelated to color (though it can be used to do something to color).
Typically the alpha channel would be generated by a post-process image filter, that looks for areas of the texture with significantly above average luminance. In modern graphics engines HDR is used and any part of the scene with a color too bright to be displayed on a monitor is a candidate for glowing. The intensity of this glow is derived from just how much brighter the lighting at that point is than the monitor can display.
In this case, however, it appears to be human created. Think of the alpha channel like a mask: some artist looked at the UFO and decided that the areas that appear non-black in figure (b) were supposed to glow, so a non-zero alpha value was assigned (with alpha = 1.0 glowing the brightest).
Incidentally, you should not be blurring the alpha mask. You want to blur the result of RGB * A. If you just blurred the alpha mask, then this would not resemble glowing at all. The idea is to blur the lit parts of the UFO that are supposed to glow and then add that on top of the base UFO color.
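A minimal CPU-side sketch of that pipeline (hypothetical Pixel type and helpers; a real implementation would use shaders and a separable Gaussian blur):

#include <algorithm>
#include <cstddef>
#include <vector>

struct Pixel { float r, g, b; };

float luminance(const Pixel& p) {
    return 0.2126f * p.r + 0.7152f * p.g + 0.0722f * p.b;
}

// Step 1: build the mask and apply it in one go. Alpha is 1 for texels
// above the luminance threshold and 0 otherwise; the output is RGB * A.
std::vector<Pixel> brightPass(const std::vector<Pixel>& img, float threshold) {
    std::vector<Pixel> out(img.size());
    for (std::size_t i = 0; i < img.size(); ++i) {
        const float a = luminance(img[i]) > threshold ? 1.0f : 0.0f;
        out[i] = {img[i].r * a, img[i].g * a, img[i].b * a};
    }
    return out;
}

// Step 2: a crude horizontal box blur standing in for a separable
// Gaussian (run it over columns as well for a real glow).
std::vector<Pixel> blurRows(const std::vector<Pixel>& img, int w, int h, int radius) {
    std::vector<Pixel> out(img.size());
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            Pixel sum{0.0f, 0.0f, 0.0f};
            int n = 0;
            for (int dx = -radius; dx <= radius; ++dx) {
                const int sx = std::clamp(x + dx, 0, w - 1);
                const Pixel& p = img[static_cast<std::size_t>(y) * w + sx];
                sum.r += p.r; sum.g += p.g; sum.b += p.b;
                ++n;
            }
            out[static_cast<std::size_t>(y) * w + x] = {sum.r / n, sum.g / n, sum.b / n};
        }
    }
    return out;
}

// Step 3: add the blurred bright parts back on top of the base image.
void addGlow(std::vector<Pixel>& base, const std::vector<Pixel>& glow) {
    for (std::size_t i = 0; i < base.size(); ++i) {
        base[i].r = std::min(base[i].r + glow[i].r, 1.0f);
        base[i].g = std::min(base[i].g + glow[i].g, 1.0f);
        base[i].b = std::min(base[i].b + glow[i].b, 1.0f);
    }
}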
So I have no idea how I should be doing what I want to do, so I'll explain as best as I can.
http://i.stack.imgur.com/j65H8.jpg
Imagine that the entire image is a 128x128 2D square, and for each color I want to apply a texture to that part of the square. I also want it to stretch, but with constraints: Red, Aqua, Green and Purple never stretch in any direction; Pink stretches in all directions; and Grey, Yellow, Black and Orange stretch only in their longest direction (grey/orange: width expands, yellow/black: height expands). When stretched it should look like this:
http://i.stack.imgur.com/wJiKv.jpg
Also I am using C++.
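What's described here is commonly called nine-patch (or 9-slice) scaling. A minimal sketch of the region layout, with a hypothetical Rect type and an assumed uniform corner margin:

struct Rect { int x, y, w, h; };

// Split a destination rectangle into the nine regions described above:
// four fixed-size corners, four edges that stretch along one axis, and
// a center that stretches along both.
void ninePatch(const Rect& dst, int margin, Rect out[9]) {
    const int x0 = dst.x;
    const int x1 = dst.x + margin;
    const int x2 = dst.x + dst.w - margin;
    const int y0 = dst.y;
    const int y1 = dst.y + margin;
    const int y2 = dst.y + dst.h - margin;
    const int wMid = dst.w - 2 * margin;  // stretched width (grey/orange, pink)
    const int hMid = dst.h - 2 * margin;  // stretched height (yellow/black, pink)

    const Rect rects[9] = {
        {x0, y0, margin, margin}, {x1, y0, wMid, margin}, {x2, y0, margin, margin},
        {x0, y1, margin, hMid},   {x1, y1, wMid, hMid},   {x2, y1, margin, hMid},
        {x0, y2, margin, margin}, {x1, y2, wMid, margin}, {x2, y2, margin, margin},
    };
    for (int i = 0; i < 9; ++i) out[i] = rects[i];
}

Each destination region is then drawn with its own texture stretched to fill its rect.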