OpenCV: How to draw a line with colors that are inversed relatively to the surface it should be drawn on? - c++

So we have an image. We want to draw a line that must definitely be seen. So how do we draw a line whose colour is inverted relative to the surface it is drawn on at each point?

The XOR trick gives a colour that is trivially different, but it's not visually the most distinct, if only because it entirely ignores how human eyes work. On light greys, for instance, a saturated red is visually far more distinct.
You might want to convert the colour to HSV and check the saturation S. If it is low (greyscale), draw a red pixel. If the saturation is high, the hue is quite obvious, and a white or black pixel will stand out: use black (V=0) if the original pixel had a high V; use white if the original pixel had a low V (a dark saturated colour).
You can use the LineIterator method as suggested earlier.
(BTW, the XOR trick has quite bad cases too. 0x7F ^ 0xFF = 0x80. That's bloody hard to see)
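A minimal C++ sketch of the HSV heuristic above (the 0.25 saturation and 0.5 value thresholds are assumptions of mine, not from the answer); in OpenCV you would apply this to each pixel visited by a cv::LineIterator:

```cpp
#include <algorithm>
#include <cstdint>

struct RGB { uint8_t r, g, b; };

// Choose a pen colour that stands out against one background pixel:
// red on greys, black on bright saturated colours, white on dark ones.
RGB pickContrastColor(RGB bg) {
    uint8_t mx = std::max({bg.r, bg.g, bg.b});
    uint8_t mn = std::min({bg.r, bg.g, bg.b});
    double v = mx / 255.0;                                // HSV value
    double s = (mx == 0) ? 0.0 : (mx - mn) / double(mx);  // HSV saturation
    if (s < 0.25)                    // low saturation: greyscale-ish pixel
        return {255, 0, 0};          // saturated red stands out
    return (v > 0.5) ? RGB{0, 0, 0}        // bright saturated colour: black
                     : RGB{255, 255, 255}; // dark saturated colour: white
}
```

For a mid-grey background this picks red; for a bright pure red it picks black.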

Use a LineIterator and XOR the colour values of each pixel manually.
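A sketch of that idea on a raw packed BGR byte buffer (a horizontal segment stands in for the pixels a cv::LineIterator would visit; `xorLine` is a made-up name):

```cpp
#include <cstdint>
#include <vector>

// XOR-invert every channel of the pixels from (x0, y) to (x1, y) in a
// packed 3-byte-per-pixel buffer; 0x00 becomes 0xFF, 0x7F becomes 0x80.
void xorLine(std::vector<uint8_t>& bgr, int width, int y, int x0, int x1) {
    for (int x = x0; x <= x1; ++x)
        for (int c = 0; c < 3; ++c)
            bgr[(y * width + x) * 3 + c] ^= 0xFF;
}
```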

This is off the top of my head and I'm not a C++ dev, but it should be possible
to draw the line into a separate image and then mimic an invert blend mode... basically you need to get the 'negative'/inverted colour behind each pixel, which you get by subtracting the colour below your line from the maximum colour value.
Something like:
uint invert(uint topPixel, uint bottomPixel) {
    return 255 - bottomPixel; // only the colour under the line matters; topPixel is unused
}
Not sure if colours range from 0 to 255 or from 0.0 to 1.0, but hopefully this illustrates the idea.

Related

Shader that replaces colors

I want to make a shader that replace a color to be applied to a plain color character, but I can't just replace the color because the image contains pixels that are an average of two border colors.
For example the image looks like this:
Assuming that I want to change the color of the shirt, I want to replace the red color for a green one, but at the edges there are pixels that are not red:
Any ideas how to calculate the resultant color of one of those pixels?
Do you know which are the major colours in advance?
If not then a simple solution for finding them is to generate a histogram — scan the entire image and for each pixel that is the same as all four of its neighbours, add one to a count for the colour it contains. At the end, keep only those colours that fill at least a non-negligible portion of the display, e.g. at least 5% of those pixels that are not transparent.
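A sketch of that histogram pass, assuming packed 32-bit colours in a flat buffer (the function name and the exact form of the 5% test are illustrative, and transparency is ignored here for brevity):

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Count pixels that match all four neighbours, then keep colours that
// account for at least 5% of those counted pixels.
std::vector<uint32_t> majorColours(const std::vector<uint32_t>& img, int w, int h) {
    std::map<uint32_t, int> hist;
    int total = 0;
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            uint32_t c = img[y * w + x];
            if (c == img[y * w + x - 1] && c == img[y * w + x + 1] &&
                c == img[(y - 1) * w + x] && c == img[(y + 1) * w + x]) {
                ++hist[c];
                ++total;
            }
        }
    std::vector<uint32_t> out;
    for (const auto& [colour, count] : hist)
        if (count * 20 >= total)  // at least 5% of the counted pixels
            out.push_back(colour);
    return out;
}
```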
Dealing with black borders is easy: use a luminance/chrominance colour space, and always leave luminance alone, remapping only chrominance. Factoring out brightness has a bonus: it collapses colour substitution from a 3d problem to a 2d problem.
If this weren't GLSL, then a solid solution for each pixel that is not one of the selected major colours might be to (i) find the nearest pixel that is a major colour; (ii) then find the nearest pixel that is a major colour but not the one found in (i). Use normal linear algebra to figure out that pixel's distance along the 2d line from the one colour to the other. Substitute the colours, reinterpolate and output.
Since it is GLSL, "find the nearest" isn't especially realistic; but assuming the number of major colours is small, just do it as distance from those lines. E.g. suppose you have five colours. Then that's 10 potential colour transitions in total: from each of the five colours there are four other options, suggesting twenty transitions, but half of them are exactly the same as the other half, because they're just e.g. red to blue instead of blue to red. So ten.
Load those up as uniforms and just figure out which transition gradient the colour is closest to. Substitute the basis colours. Output.
So, in net:
transform (R, G, B) to (Y, x, y) — whether YUV or YIQ or Y doesn't matter, just pick one;
perform distance from a line for (x, y) and the colour transition gradients identified for this image;
having found the transition this pixel is closest to and its distance along that transition, substitute the end points, remap;
recombine with the original Y, convert back to RGB and output.
That's two dot products per colour transition gradient to establish the closest, then a single mix to generate the output (x, y).
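The per-pixel core of steps 2 and 3 is an ordinary point-to-segment projection in the (x, y) chroma plane; a C++ sketch (the function name and the clamping behaviour are my choices):

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Return the interpolation parameter t of chroma point p along the
// transition segment a -> b, clamped to [0, 1]. Remapping is then
// mix(substituteA, substituteB, t), keeping the original luminance Y.
double projectOnTransition(Vec2 p, Vec2 a, Vec2 b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len2 = dx * dx + dy * dy;                      // |b - a|^2
    double t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2;
    return std::fmax(0.0, std::fmin(1.0, t));
}
```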
Let Rx, Gx, Bx = Pixel values of color X (Red in your case) to be removed/replaced.
Let Ry, Gy, By = Pixel values of color Y (Green in your case) to be used as new color.
Then you will iterate over all pixels and, using a clever condition (below), identify the pixels that need to be processed.
If Rc, Gc, Bc are the current values of the selected pixel (it does not matter what combination of red and yellow it is), then the final values of the pixel are:
Rf = Rc - Rx + Ry
Gf = Gc - Gx + Gy
Bf = Bc - Bx + By
Of course, this processing should NOT happen for all pixels. A clever condition to identify only the relevant pixels could be: the pixel color is Red, or at least one adjacent pixel is Red/Yellow.
UPDATE: Another clever condition using current pixel only:
This involves removing the border color (YELLOW or BLACK) from the current color and checking whether the result is RED:
Rc - R(yellow) == R(RED) AND
Gc - G(yellow) == G(RED) AND
Bc - B(yellow) == B(RED)
OR
Rc - R(black) == R(RED) AND
Gc - G(black) == G(RED) AND
Bc - B(black) == B(RED)
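A sketch of the channel arithmetic above; the clamp to [0, 255] is my addition, since the raw subtraction can leave the valid range:

```cpp
#include <algorithm>
#include <cstdint>

// Final = Current - Old + New, per channel, clamped to the valid range.
uint8_t shiftChannel(int current, int oldChannel, int newChannel) {
    return static_cast<uint8_t>(std::clamp(current - oldChannel + newChannel, 0, 255));
}
```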

Wrong blending in OpenGL on small alpha value

I draw a lot of white triangles from a texture. But when they are drawn over a yellow circle, the points that contain a small alpha value (but not equal to 0) are blended wrongly, and I get some darker pixels on screen (see the screenshot, which was zoomed in). What can the problem be?
On a blue background everything is OK.
As @tklausi pointed out in the comments, this problem is related to texture interpolation in combination with traditional alpha blending. At the transition from values with high alpha to "background" with alpha = 0, you will get some interpolation results where alpha is > 0 and RGB is mixed with your "background" color.
@tklausi's solution was to change the RGB values of the background to white. But this will result in the same issue as before: if your actual image has dark colors, you will then see bright artifacts around them.
The correct solution would be to repeat the RGB color of the actual border pixels, so that the interpolation will always result in the same color, just with a lower alpha value.
However, there is a much better solution: premultiplied alpha.
Instead of storing (R,G,B,a) in the texture per pixel, you store (aR,aG,aB,a). When blending, you don't use a*source + (1-a) * background, but just source + (1-a)*background. The difference is that you now have a "neutral element" (0,0,0,0) and interpolation towards that will not pose any issue. It works nicely with filtering, and is also good for mipmapping and other techniques.
In general, I would recommend always using premultiplied alpha in favor of the "traditional" one. The premultiplication can be applied directly in the image file, or you can do it at texture upload; either way, it incurs no runtime cost at all.
More information about premultiplied alpha can be found in this MSDN blog article or over here at NVIDIA.
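Both halves can be written out in a few lines; 0.0-1.0 channels are used for clarity, and in OpenGL the blend below corresponds to glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) rather than the traditional (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA):

```cpp
struct RGBA { double r, g, b, a; }; // channels in 0.0 - 1.0

// Premultiply at load/upload time: (R, G, B, a) -> (aR, aG, aB, a).
RGBA premultiply(RGBA p) { return {p.r * p.a, p.g * p.a, p.b * p.a, p.a}; }

// Premultiplied "over": source + (1 - a) * background, no a * source term.
// The neutral element (0, 0, 0, 0) makes interpolation towards fully
// transparent texels harmless.
RGBA blendOver(RGBA src, RGBA dst) {
    return {src.r + (1.0 - src.a) * dst.r,
            src.g + (1.0 - src.a) * dst.g,
            src.b + (1.0 - src.a) * dst.b,
            src.a + (1.0 - src.a) * dst.a};
}
```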

How to create a texture alpha, with white and black colors only, in GLSL?

I am looking to reproduce the glow effect from this tutorial. If I understand correctly, we convert the first image to an "alpha texture" (black and white), and we blur the (rgb * a) texture.
How is it possible to create this alpha texture, so that some colors go to white and the others go to black? I found this: How to render a texture with alpha? but I don't really know how to use those answers.
Thanks
It appears you are misunderstanding what that diagram is showing you. It is actually all one texture, but (a) shows the RGB color and (b) shows the alpha channel. (c) shows what happens when you multiply RGB by A.
Alpha is not actually "black and white", it is an abstract concept and amounts to a range of values between 0.0 and 1.0. For the human brain to make sense out of it, it interprets that as black (0.0) and white (1.0). In reality, alpha is whatever you want it to be and unrelated to color (though it can be used to do something to color).
Typically the alpha channel would be generated by a post-process image filter, that looks for areas of the texture with significantly above average luminance. In modern graphics engines HDR is used and any part of the scene with a color too bright to be displayed on a monitor is a candidate for glowing. The intensity of this glow is derived from just how much brighter the lighting at that point is than the monitor can display.
In this case, however, it appears to be human created. Think of the alpha channel like a mask, some artist looked at the UFO and decided that the areas that appear non-black in figure (b) were supposed to glow so a non-zero alpha value was assigned (with alpha = 1.0 glowing the brightest).
Incidentally, you should not be blurring the alpha mask. You want to blur the result of RGB * A. If you just blurred the alpha mask, then this would not resemble glowing at all. The idea is to blur the lit parts of the UFO that are supposed to glow and then add that on top of the base UFO color.
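Per pixel, the pipeline described above boils down to two tiny steps (the blur between them, e.g. a separable Gaussian over the masked image, is omitted; names are illustrative):

```cpp
struct Color3 { double r, g, b; }; // channels in 0.0 - 1.0

// Step 1: mask the colour with the alpha channel, giving figure (c)'s RGB * A.
Color3 maskForGlow(Color3 c, double a) { return {c.r * a, c.g * a, c.b * a}; }

// Step 2 (after blurring the masked image): add the glow on top of the base.
Color3 addGlow(Color3 base, Color3 blurredGlow) {
    return {base.r + blurredGlow.r, base.g + blurredGlow.g, base.b + blurredGlow.b};
}
```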

Subpixel rasterization on opaque backgrounds

I'm working on a subpixel rasterizer. The output is to be rendered on an opaque bitmap. I've come so far as to correctly render text white-on-black (because I can basically disregard the contents of the bitmap).
The problem is the blending. Each rendered pixel affects its neighbours' intensity levels as well, because of the lowpass filtering technique (I'm using the 5-tap FIR - 1/9, 2/9, 3/9 etc.), and additionally the alpha level of the pixel to be rendered. This result then has to be alpha-blended onto the destination image, which is where the problem occurs...
The results of the pixels' interactions have to be added together to achieve correct luminance, and then alpha-blended onto the destination. But if I rasterize one pixel at a time, I 'lose' the information of the previous pixels, so further additions may lead to overflowing.
How is this supposed to be done? The only solution I can imagine would work is, to render to a separate image with alpha channels for each colour, then some complex blending algorithm, and lastly alphablend it to the destination.. Somehow.
However, I couldn't find any resources on how to actually do it - besides the basic concepts of lcd subpixel rendering and nice closeup images of monitor pixels. If anyone can help me along the way, I would be very grateful.
Tonight I awoke and could not fall asleep again.
I could not let all that brain energy go to waste, and I stumbled over exactly the same problem.
I came up with two different solutions, both unvalidated.
You have to use a 3 channel alpha mask, one for each subpixel, blend each color with its own alpha.
You can use the color channels themselves as alpha masks if you only render gray/BW fonts (1 - color_value if you draw dark text on a light background color), again applying each color individually. The color value itself should be considered 1 in this case.
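A sketch of the first suggestion, assuming the filtered per-subpixel coverage is available as one value per colour channel (all names are mine):

```cpp
struct Pixel3 { double r, g, b; }; // channels / coverages in 0.0 - 1.0

// Blend text over background with a separate alpha (coverage) per channel,
// instead of a single alpha for the whole pixel.
Pixel3 blendSubpixel(Pixel3 text, Pixel3 cover, Pixel3 bg) {
    return {text.r * cover.r + bg.r * (1.0 - cover.r),
            text.g * cover.g + bg.g * (1.0 - cover.g),
            text.b * cover.b + bg.b * (1.0 - cover.b)};
}
```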
Hope this helps a little, I filled ~2h of insomnia with it.
~ Jan

Double buffered sprite issue in OpenGL

Unfortunately, taking a screenshot does not replicate the problem, so I'll have to explain.
My character is a QUAD with a texture bound to it. When I move this character in any direction, the trailing edge of its pixels has a green and red 'after-glow', or strip of pixels. Very hard to explain, but I am assuming it is a problem with the double buffering. Is there a known issue associated with moving sprites and trailing pixels?
My only guess at this point is that you are only using a subset of the texture (i.e. your UVs are not just 0 and 1), and you have some colored pixels outside the rect you're drawing, and due to bilinear filtering, you catch a glimpse of them.
When creating textures with alpha, be sure to create an outline around the visible part of the texture with the same color (i.e. if your texture is a brown wooden fence, make sure that all transparent pixels near the fence are brown too).
NOTE that some texture compression algorithms will remove the color value from a pixel if it is entirely transparent, so if necessary, write a test pixel shader that ignores alpha to make sure that your texture made it through the pipeline intact.
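The outline advice above can be approximated automatically with a colour-bleed pass over the RGBA pixels; this single-pass 4-neighbour version is a sketch (run it repeatedly for a wider margin; names are mine):

```cpp
#include <cstdint>
#include <vector>

struct Px { uint8_t r, g, b, a; };

// Copy the colour of an opaque neighbour into each fully transparent pixel,
// keeping alpha at 0, so bilinear filtering never samples a stray colour.
void bleedEdges(std::vector<Px>& img, int w, int h) {
    const std::vector<Px> src = img; // read from a copy, write into img
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            if (src[y * w + x].a != 0) continue; // only fill transparent pixels
            for (int i = 0; i < 4; ++i) {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                const Px& n = src[ny * w + nx];
                if (n.a != 0) { img[y * w + x] = {n.r, n.g, n.b, 0}; break; }
            }
        }
}
```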