Shader that replaces colors - opengl

I want to make a shader that replaces one color with another, to be applied to a flat-colored character. But I can't just replace the color outright, because the image contains pixels whose values are an average of two bordering colors.
For example the image looks like this:
Assuming that I want to change the color of the shirt, I want to replace the red color with a green one, but at the edges there are pixels that are not pure red:
Any ideas how to calculate the resultant color of one of those pixels?

Do you know which are the major colours in advance?
If not then a simple solution for finding them is to generate a histogram — scan the entire image and for each pixel that is the same as all four of its neighbours, add one to a count for the colour it contains. At the end, keep only those colours that fill at least a non-negligible portion of the display, e.g. at least 5% of those pixels that are not transparent.
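That histogram pass can be sketched like this (numpy; the RGBA layout, the 5% threshold and all names are my own choices):

```python
import numpy as np

def major_colors(img, min_fraction=0.05):
    """Count a pixel only when it matches all four direct neighbours,
    then keep the colours that account for at least min_fraction of
    the counted (non-transparent) pixels.

    img: H x W x 4 uint8 RGBA array (alpha 0 = fully transparent).
    """
    core = img[1:-1, 1:-1]
    flat = (
        (core == img[:-2, 1:-1]).all(-1) &   # same as the pixel above
        (core == img[2:, 1:-1]).all(-1) &    # ... below
        (core == img[1:-1, :-2]).all(-1) &   # ... to the left
        (core == img[1:-1, 2:]).all(-1) &    # ... to the right
        (core[..., 3] > 0)                   # and not transparent
    )
    colors, counts = np.unique(core[flat], axis=0, return_counts=True)
    keep = counts >= min_fraction * counts.sum()
    return [tuple(int(v) for v in c) for c in colors[keep]]
```

Border pixels are skipped entirely here (they have no four neighbours), which is usually fine since edge blends are exactly what the histogram should ignore.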
Dealing with black borders is easy: use a luminance/chrominance colour space, and always leave luminance alone, remapping only chrominance. Factoring out brightness has a bonus: it collapses colour substitution from a 3d problem to a 2d problem.
If this weren't GLSL, a solid solution might be: for each pixel that is not one of the selected major colours, (i) find the nearest pixel that is a major colour; then (ii) find the nearest pixel that is a major colour other than the one found in (i). Use ordinary linear algebra to figure out where that pixel's colour sits on the 2d line from the one colour to the other. Substitute the colours, reinterpolate and output.
Since it is GLSL, "find the nearest" isn't especially realistic; assuming the number of major colours is small, just do it as distance from the lines between those colours. E.g. suppose you have five colours. That's 10 potential colour transitions in total: from each of the five colours there are four other options, suggesting twenty transitions, but half of them are exactly the same as the other half, because red-to-blue is the same line as blue-to-red. So ten.
Load those up as uniforms and just figure out which transition gradient the colour is closest to. Substitute the basis colours. Output.
So, in net:
transform (R, G, B) to (Y, x, y); whether it's YUV, YIQ or some other luma/chroma space doesn't matter, just pick one;
perform distance from a line for (x, y) and the colour transition gradients identified for this image;
having found the transition this pixel is closest to and its distance along that transition, substitute the end points, remap;
recombine with the original Y, convert back to RGB and output.
That's two dot products per colour transition gradient to establish the closest, then a single mix to generate the output (x, y).
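The four steps above can be sketched on the CPU like this (numpy; the GLSL version is structurally identical, with the transition endpoints passed as uniforms; the BT.601 luma weights and all names are my own choices):

```python
import numpy as np

def to_y_chroma(rgb):
    # Any luma/chroma split works; these are the BT.601 luma weights.
    y = 0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2]
    return y, np.array([rgb[2] - y, rgb[0] - y])   # chroma = (B - Y, R - Y)

def remap(rgb, transitions):
    """transitions: list of ((chromaA, chromaB), (newChromaA, newChromaB)),
    i.e. the old endpoints of a colour transition and their substitutes."""
    y, c = to_y_chroma(rgb)
    best = None
    for (a, b), (na, nb) in transitions:
        d = b - a
        # two dot products: project c onto the segment a -> b
        t = np.clip(np.dot(c - a, d) / np.dot(d, d), 0.0, 1.0)
        dist = np.linalg.norm(c - (a + t * d))
        if best is None or dist < best[0]:
            best = (dist, t, na, nb)
    _, t, na, nb = best
    cb, cr = (1 - t) * na + t * nb          # the single mix
    # recombine with the untouched luminance and convert back to RGB
    return np.array([y + cr,
                     y - (0.299 * cr + 0.114 * cb) / 0.587,
                     y + cb])
```

Substituting identical endpoints reproduces the input exactly, which is a handy sanity check for the round trip through the chroma plane.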

Let Rx, Gx, Bx = Pixel values of color X (Red in your case) to be removed/replaced.
Let Ry, Gy, By = Pixel values of color Y (Green in your case) to be used as new color.
Then you will iterate over all pixels and, using a clever condition (below), identify the pixels that need to be processed.
If Rc, Gc, Bc are the current channel values of the selected pixel (it does not matter what combination of red and yellow it is), then the final values of the pixel are:
Rf = Rc - Rx + Ry
Gf = Gc - Gx + Gy
Bf = Bc - Bx + By
Of course, this processing should NOT happen for all pixels. A clever condition to identify only the relevant pixels could be: the pixel color is Red, or at least one adjacent pixel is Red/Yellow.
UPDATE: Another clever condition using current pixel only:
This involves removing a border color (YELLOW or BLACK) from the current color and checking whether the result is RED:
Rc - R(yellow) == R(RED) AND
Gc - G(yellow) == G(RED) AND
Bc - B(yellow) == B(RED)
OR
Rc - R(black) == R(RED) AND
Gc - G(black) == G(RED) AND
Bc - B(black) == B(RED)
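Putting the offset replacement and the update's condition together, a numpy sketch (the concrete colour values are illustrative; the arithmetic is done in a signed type so the subtraction cannot wrap):

```python
import numpy as np

def replace_color(img, old, new, borders):
    """Shift qualifying pixels by a constant offset (Rf = Rc - Rx + Ry
    per channel). A pixel qualifies if it equals `old` exactly, or if
    subtracting one of the `borders` colours from it leaves exactly
    `old` (the update's single-pixel condition)."""
    img = img.astype(np.int32)
    old = np.asarray(old, np.int32)
    new = np.asarray(new, np.int32)
    mask = (img == old).all(axis=-1)
    for b in borders:
        mask |= (img - np.asarray(b, np.int32) == old).all(axis=-1)
    out = img.copy()
    out[mask] += new - old
    return np.clip(out, 0, 255).astype(np.uint8)
```

Note the condition only catches pixels that are exactly `old` plus a border colour; averaged edge pixels in between would need a tolerance or the neighbour-based condition instead.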

Related

Why border texels get the same color when magnified/scaled up using Bilinear filtering?

Since in bilinear filtering the sampled color is calculated as the weighted average of the 4 closest texels, why do corner texels get the same color when magnified?
Eg:
In this case (image below), when a 3x3 image is magnified/scaled to a 5x5 pixel image (using bilinear filtering), why do the corner 'Red' pixels get exactly the same color, and the border 'Green' ones as well?
In some documents it is explained that corner texels are extended with the same color to give 4 adjacent texels, which explains why the corner 'Red' texels keep their color in the 5x5 image. But how come the border 'Green' texels keep the same color, if they are calculated as the weighted average of the 4 closest texels?
When you are using bilinear texture sampling, the texels in the texture are not treated as colored squares but as samples of a continuous color field. Here is this field for a red-green checkerboard, where the texture border is outlined:
The circles represent the texels, i.e., the sample locations of the texture. The colors between the samples are calculated by bilinear interpolation. As a special case, the interpolation between two adjacent texels is a simple linear interpolation. When x is between 0 and 1, then: color = (1 - x) * leftColor + x * rightColor.
The interpolation scheme only defines what happens in the area between the samples, i.e. not even up to the edge of the texture. What OpenGL uses to determine the missing area is the texture's or sampler's wrap mode. If you use GL_CLAMP_TO_EDGE, the texel values from the edge will just be repeated like in the example above. With this, we have defined the color field for arbitrary texture coordinates.
Now, when we render a 5x5 image, the fragments' colors are evaluated at the pixel centers. This looks like the following picture, where the fragment evaluation positions are marked with black dots:
Assuming that you draw a full-screen quad with texture coordinates ranging from 0 to 1, the texture coordinates at the fragment evaluation positions are interpolations of the vertices' texture coordinates. We can now just overlay the color field from before with the fragments and we will find the color that the bilinear sampler produces:
We can see a couple of things:
The central fragment coincides exactly with the red texel and therefore gets a perfect red color.
The central fragments on the edges fall exactly between two green samples (where one sample is a virtual sample outside of the texture). Therefore, they get a perfect green color. This is due to the wrap mode. Other wrap modes produce different colors. The interpolation is then: color = (1 - t) * outsideColor + t * insideColor, where t = 3 * (0.5 / 5 + 0.5 / 3) = 0.8 is the interpolation parameter.
The corner fragments are also interpolations from four texel colors (1 real inside the texture and three virtual outside). Again, due to the wrap mode, these will get a perfect red color.
All other colors are some interpolation of red and green.
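The analysis above can be reproduced numerically; here is a small sketch (numpy; function names are mine) that bilinearly samples a 3x3 red/green checkerboard at the 5x5 fragment centers with clamp-to-edge behaviour:

```python
import numpy as np

def sample_bilinear_clamp(tex, u, v):
    """Bilinear sample of tex (H x W x 3, floats) at texture coordinates
    (u, v) in [0, 1], clamping out-of-range texels like GL_CLAMP_TO_EDGE."""
    h, w = tex.shape[:2]
    x, y = u * w - 0.5, v * h - 0.5    # texel centres sit at (i + 0.5) / w
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    def texel(i, j):
        return tex[min(max(j, 0), h - 1), min(max(i, 0), w - 1)]
    top = (1 - fx) * texel(x0, y0)     + fx * texel(x0 + 1, y0)
    bot = (1 - fx) * texel(x0, y0 + 1) + fx * texel(x0 + 1, y0 + 1)
    return (1 - fy) * top + fy * bot

red, green = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
tex = np.array([[red, green, red],
                [green, red, green],
                [red, green, red]])

# colours at the 5x5 fragment centres
img = np.array([[sample_bilinear_clamp(tex, (i + 0.5) / 5, (j + 0.5) / 5)
                 for i in range(5)] for j in range(5)])
```

The centre fragment lands exactly on the red texel, the edge-centre fragments interpolate between a real green sample and a clamped virtual copy of it (with t = 0.8 as computed above), and the corner fragments see four copies of the corner red texel.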
You're looking at bilinear interpolation incorrectly. Look at it as a mapping from the destination pixel position to the source pixel position. So for each destination pixel, there is a source coordinate that corresponds to it. This source coordinate is what determines the 4 neighboring pixels, as well as the bilinear weights assigned to them.
Let us number your pixels with (0, 0) at the top left.
Pixel (0, 0) in the destination image maps to the coordinate (0, 0) in the source image. The four neighboring pixels in the source image are (0, 0), (1, 0), (0, 1) and (1, 1). We compute the bilinear weights with simple math: the weight in the X direction for a particular pixel is 1 - |pixel.x - source.x|, where source is the source coordinate; the same goes for Y. So the (X, Y) weight pairs for the four neighboring pixels are (in the above order): (1, 1), (0, 1), (1, 0) and (0, 0), giving combined weights of 1, 0, 0 and 0.
In short, because the destination pixel mapped exactly to a source pixel, it gets exactly that source pixel's value. This is as it should be.

How to render an image by using gradient-domain screened and Poisson reconstruction

I am working on a project for my thesis and I am building my own path tracer. Afterwards, I have to modify it in such a way to be able to implement the following paper:
https://mediatech.aalto.fi/publications/graphics/GPT/kettunen2015siggraph_paper.pdf
Of course I DO NOT expect you to read the paper, but I link it anyway for those who are curious. In brief, instead of rendering an image by just using the normal path-tracing procedure, I have to calculate gradients for each pixel. Where before we shot rays only through each pixel, we now also shoot rays through the neighbouring pixels, 4 in total: left, right, top and bottom. In other words, I shoot one ray through a pixel and calculate its final colour as in normal path tracing, but I additionally shoot rays through its neighbouring pixels, calculate their final colours the same way and, to obtain the gradients, subtract the main pixel's colour from each of them. It means that for each pixel I will have 5 values in total:
colour of the pixel
gradient with right pixel = colour of the right pixel - colour of the pixel
gradient with left pixel = colour of the left pixel - colour of the pixel
gradient with top pixel = colour of the top pixel - colour of the pixel
gradient with bottom pixel = colour of the bottom pixel - colour of the pixel
The problem is that I don't know how to build the final image by both using the main colour and the gradients. What the paper says is that I have to use the screened Poisson reconstruction.
"Screened Poisson reconstruction combines the image and its
gradients using a parameter α that specifies the relative weights of
the sampled image and the gradients".
Everywhere I search for this Poisson reconstruction I find, of course, a lot of math, but it is hard to apply it to my project. Any ideas? Thanks in advance!
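For intuition (this is not the paper's solver): screened Poisson reconstruction is just a linear least-squares problem, with one data equation α·I = α·base per pixel plus one equation per forward-difference gradient. A dense numpy sketch, only practical for tiny images; real implementations use an FFT or conjugate-gradient solver:

```python
import numpy as np

def screened_poisson(base, gx, gy, alpha=0.2):
    """Minimise ||alpha*(I - base)||^2 + ||dI/dx - gx||^2 + ||dI/dy - gy||^2
    by dense least squares. base, gx, gy: H x W arrays, where
    gx[y, x] ~ I[y, x+1] - I[y, x] (forward differences)."""
    h, w = base.shape
    n = h * w
    idx = lambda y, x: y * w + x
    rows, rhs = [], []
    for y in range(h):
        for x in range(w):
            r = np.zeros(n); r[idx(y, x)] = alpha          # screened data term
            rows.append(r); rhs.append(alpha * base[y, x])
            if x + 1 < w:                                  # horizontal gradient term
                r = np.zeros(n); r[idx(y, x)] = -1; r[idx(y, x + 1)] = 1
                rows.append(r); rhs.append(gx[y, x])
            if y + 1 < h:                                  # vertical gradient term
                r = np.zeros(n); r[idx(y, x)] = -1; r[idx(y + 1, x)] = 1
                rows.append(r); rhs.append(gy[y, x])
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol.reshape(h, w)
```

When the gradients are exactly consistent with the base image, the reconstruction reproduces it; the interesting case is noisy Monte Carlo estimates, where α trades off trust in the sampled colours against trust in the sampled gradients.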

grouping pixels by color using opencv

I have grey scale images with objects darker than the background with each object and the background having the same shade throughout itself. There are mainly 3-4 "groups of shades" in each picture. I want to group these pixels to find the approximate background shade (brightness) to later extract it.
And a side question: how can I calculate the angles on a contour produced by findContours, or maybe the minimum angle on a contour?
I think that you can set ranges to group the pixels. For example, all pixels with an intensity value in the range 50-100 get the intensity value 100; similarly, all pixels in the range 100-150 get 150, and so on.
After doing the above, you will have only 3-4 fixed values across all pixels (as you mentioned that there are 3-4 groups in each image).
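A numpy sketch of that binning (the bin width of 50 and the helper names are illustrative); the most frequent quantized value is then a reasonable estimate of the background shade:

```python
import numpy as np

def quantize(gray, step=50):
    """Snap every intensity to the top of its step-wide bin, e.g.
    50..100 -> 100, 100..150 -> 150 (as described above)."""
    return (np.ceil(gray.astype(np.float64) / step) * step).clip(0, 255).astype(np.uint8)

def background_shade(gray, step=50):
    """The most frequent quantized value; with a mostly uniform
    background this is the background's bin."""
    values, counts = np.unique(quantize(gray, step), return_counts=True)
    return int(values[np.argmax(counts)])
```
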

Raphael getPixelColour

I have a colour with rgb values 17, 30, 62.
I did linear gradient fill of a rect with this colour.
On mouse-over of the rect I want to show the colour value at that point in a text box.
Is there any function or ratio to increase and decrease the RGB values programmatically?
Then I can get the colour back with Raphael.rgb.
Raphael cannot help you get the color at a specified pixel, as it only deals with vector graphics (SVG/VML). The rendering is done by the browser.
In the color picker example, the color is obtained from the coordinates in the circle: the picker knows beforehand which color it will find at a given point. It does not check the color of the pixel under the cursor.
If you have a linear gradient with known end colors, it is a matter of linear interpolation to figure out the color of the gradient at any point (unless there is some transparency involved). Find the distances to the two anchor points of the gradient, compute the relative distance to each and combine the colors using these coefficients.
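In plain Python that interpolation might look like this (the anchor points and colours are hypothetical):

```python
def gradient_color_at(p, a, b, color_a, color_b):
    """Colour of a linear gradient at point p, given the gradient's two
    anchor points a, b and their RGB end colours: project p onto the
    a -> b axis, then mix the end colours by the relative distance t."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return tuple(round((1 - t) * ca + t * cb)
                 for ca, cb in zip(color_a, color_b))
```

For example, the midpoint of a horizontal gradient from (17, 30, 62) to (217, 230, 62) comes out as (117, 130, 62).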

OpenCV: How to draw a line with colors that are inversed relatively to the surface it should be drawn on?

So we have an image. We want to draw a line that must definitely be visible. So how do we draw a line whose colors are inverted relative to the surface it is drawn on at each point?
The XOR trick only guarantees a trivially different colour. It's not visually the most distinct choice, if only because it entirely ignores how human eyes work. For instance, on light greys, a saturated red is visually quite distinct.
You might want to convert the color to HSV and check the saturation S. If it is low (greyscale-ish), draw a red pixel. If the saturation is high, the hue is quite obvious, and a white or black pixel will stand out: use black (V=0) if the original pixel had a high V; use white if the original pixel had a low V (a dark saturated color).
You can use the LineIterator method as suggested earlier.
(BTW, the XOR trick has quite bad cases too. 0x7F ^ 0xFF = 0x80. That's bloody hard to see)
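A sketch of that saturation test using Python's colorsys (the thresholds are my guesses; channels are 0..1 floats):

```python
import colorsys

def contrast_color(r, g, b):
    """Pick a colour that stands out against (r, g, b): red on greys,
    otherwise black on bright pixels and white on dark ones."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if s < 0.25:                 # low saturation: greyscale-ish, use red
        return (1.0, 0.0, 0.0)
    return (0.0, 0.0, 0.0) if v > 0.5 else (1.0, 1.0, 1.0)
```
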
Use a LineIterator and XOR the colour values of each pixel manually.
This is off the top of my head and I'm not a C++ dev, but it should be possible to draw the line into a separate image and then mimic an invert blend mode: basically you need the 'negative'/inverted colour behind each pixel of the line, which you get by subtracting the colour below your line from the maximum colour value.
Something like:
uint invert(uint bottomPixel) {
    // the 'negative' of the colour under the line
    return 255 - bottomPixel;
}
Not sure if the colours go from 0 to 255 or from 0.0 to 1.0, but hopefully this illustrates the idea.
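The same idea in numpy terms, assuming the line has already been rendered into a separate boolean mask:

```python
import numpy as np

def draw_inverted_line(img, line_mask):
    """Invert the uint8 image wherever the separately drawn line mask
    is set: the 'invert blend mode' described above (255 - colour)."""
    out = img.copy()
    out[line_mask] = 255 - out[line_mask]
    return out
```
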