Linear Gradient with Angle & Strength [closed] - c++

I want to implement a function in C++/RealBasic to create a color gradient controlled by these parameters:
Width and height of the image
2 colors of the gradient
Angle (direction) of the gradient
Strength of the gradient
The following links show some examples of the desired output image:
http://www.artima.com/articles/linear_gradients_in_flex_4.html, http://i.stack.imgur.com/4ssfj.png
I have found multiple examples, but they only produce vertical or horizontal gradients; I want to specify the angle and strength too.
Can someone help me please?
P.S.: I know only a little about geometry!! :(

Your question is very broad; as written, this is a pretty complex exercise with a lot of code, including image rendering, image format handling, writing the file to disk, etc. These are not the business of a single function. Because of this, I will focus on producing an arbitrary linear color gradient between 2 colors.
Linear color gradient
You can create a linear color "gradient" by linearly interpolating between 2 colors. However, plain linear interpolation produces rather harsh-looking transitions. For visually more appealing results I recommend using some kind of S-shaped interpolation curve, like the Hermite-interpolation-based smoothstep.
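For example, a minimal sketch of such an S-curve (the standard Hermite smoothstep; the clamping is my addition):

#include <algorithm>

// Hermite smoothstep: maps t in [0, 1] onto an S-shaped curve that
// eases in and out at the endpoints instead of changing abruptly.
double smoothstep(double t)
{
    t = std::min(1.0, std::max(0.0, t)); // guard against out-of-range input
    return t * t * (3.0 - 2.0 * t);
}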
Regarding the angle, you can define a line segment by the start (p0) and end (p1) points of the color gradient. Let's call the distance between them d01, so d01 = distance(p0, p1). Then for each pixel point p of the image, you have to compute the closest point p2 on this segment. Here is an example of how to do that. Then compute t = distance(p0, p2) / d01. This will be the lerp parameter t in the range [0, 1].
Interpolate between the 2 gradient colors by this t and you get the color for the given point p.
This can be implemented in multiple ways. You can use OpenGL to render the image, then read the pixel buffer back into RAM. If you are not familiar with OpenGL or the rendering process, you can write a function which takes a point (the 2D coordinates of a pixel) and returns an RGB color; with it you can compute all the pixels of the image. Finally, you can write the image to disk using an image format, but that's another story.
The following are example C++14 implementations of some functions mentioned above.
Simple linear interpolation:
// Linear interpolation: blends a and b by the parameter t in [0, 1].
template <typename T, typename U>
T lerp(const T &a, const T &b, const U &t)
{
    return (U(1) - t)*a + t*b;
}
Here a and b are the two values (colors in this case) you want to interpolate between, and t is the interpolation parameter in the range [0, 1] representing the transition between a and b.
Of course the above function requires a type T which supports multiplication by a scalar. You can simply use any 3D vector type for this purpose, since colors are actually coordinates in color space.
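For example, a minimal color type with just enough operators for lerp above (a sketch of my own; any full-featured vector type works equally well):

// Minimal RGB "vector": supports the scalar multiplication and
// component-wise addition that lerp() requires.
struct Color
{
    double r, g, b;
};

Color operator*(double s, const Color &c) { return { s * c.r, s * c.g, s * c.b }; }
Color operator+(const Color &a, const Color &b) { return { a.r + b.r, a.g + b.g, a.b + b.b }; }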
Distance between two 2D points:
#include <cmath>

// Minimal 2D point type used by the helpers below.
struct Point2 { double x, y; };

Point2 operator-(const Point2 &a, const Point2 &b) { return { a.x - b.x, a.y - b.y }; }

auto length(const Point2 &p)
{
    return std::sqrt(p.x*p.x + p.y*p.y);
}

auto distance(const Point2 &a, const Point2 &b)
{
    Point2 delta = b - a;
    return length(delta);
}
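Putting the pieces together, one possible per-pixel gradient function along the lines described above (a sketch assuming the lerp, smoothstep, Point2 and Color helpers from this answer, plus <algorithm> for std::min/std::max; clamping t outside the segment is my choice):

// Color of pixel p for a linear gradient running from p0 (color c0)
// to p1 (color c1). Assumes p0 != p1.
Color gradientColor(const Point2 &p, const Point2 &p0, const Point2 &p1,
                    const Color &c0, const Color &c1)
{
    const Point2 v = p1 - p0;                 // gradient axis
    const double d01sq = v.x*v.x + v.y*v.y;   // squared length of the segment
    // Projecting p onto the line yields t = distance(p0, p2) / d01 directly,
    // without computing the closest point p2 explicitly.
    double t = ((p.x - p0.x)*v.x + (p.y - p0.y)*v.y) / d01sq;
    t = std::min(1.0, std::max(0.0, t));      // clamp outside the segment
    return lerp(c0, c1, smoothstep(t));       // smoothstep softens the ends
}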
Image from https://developer.mozilla.org/en-US/docs/Web/CSS/linear-gradient

Related

Finding an RGB colour given an angle on a circle? [duplicate]

So if I have a colour circle ranging from red to violet (0-360), can I get a colour given an angle? I have been searching, but I have only found code to convert between different formats and nothing really to do with angles. I would really like to know the math behind this.
I'm just writing a C++ program for my Arduino with a joystick and an RGB LED. I've got the easy stuff done, but I don't even know where to begin with the colour.
The RGB color space is based on Cartesian coordinates. If you want an angle, that means you want something akin to polar coordinates; the color space you are looking for is either HSL or HSV.
https://en.wikipedia.org/wiki/HSL_and_HSV#From_HSV
In HSV you can, for example, use maximum Saturation and maximum Value; then you only have to pick the Hue (which is an angle).
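For illustration, a minimal C++ sketch of that idea, converting a hue angle in degrees to RGB with S = V = 1 (the Rgb struct and function name are mine):

#include <cmath>
#include <cstdint>

struct Rgb { std::uint8_t r, g, b; };

// Standard HSV-to-RGB conversion restricted to the hue wheel (S = V = 1).
Rgb hueToRgb(double hue)
{
    hue = std::fmod(hue, 360.0);
    if (hue < 0.0) hue += 360.0;

    // Secondary component: ramps 0 -> 1 -> 0 across each 120-degree sector.
    const double x = 1.0 - std::fabs(std::fmod(hue / 60.0, 2.0) - 1.0);
    double r = 0.0, g = 0.0, b = 0.0;

    if      (hue <  60.0) { r = 1.0; g = x;   }
    else if (hue < 120.0) { r = x;   g = 1.0; }
    else if (hue < 180.0) { g = 1.0; b = x;   }
    else if (hue < 240.0) { g = x;   b = 1.0; }
    else if (hue < 300.0) { r = x;   b = 1.0; }
    else                  { r = 1.0; b = x;   }

    return { std::uint8_t(r * 255.0 + 0.5),
             std::uint8_t(g * 255.0 + 0.5),
             std::uint8_t(b * 255.0 + 0.5) };
}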
That being said, you can also make one up and use, for example:
(R, G, B) = (255*cos(x), 255*cos(x + 120), 255*cos(x - 120))
where cos works in degrees and negative results are clamped to 0.

How to convert image to greyscale opengl c++ [duplicate]

Does anyone know how to convert an image to grayscale? Below is some skeleton code I need to use to do so. Specifically: convert "before" to grayscale, apply the Sobel edge-detection convolution filter, and store the result in "after". before must be non-empty.
template <typename color_depth>
void edge_detect(gfx::image<color_depth>& after,
                 const gfx::image<color_depth>& before)
{
    // Check arguments.
    assert(!before.empty());

    // TODO: replace this function body with working code. Make sure
    // to delete this comment.
    // Hint: Use the grayscale(...) and extend_edges(...) filters to
    // prepare for the Sobel convolution. Then compute the Sobel
    // operator one pixel at a time. Finally use crop_extended_edges
    // to un-do the earlier extend_edges.
}
This looks to be a homework question, so I won't give a full implementation. I also can't tell whether you want to convert to greyscale on the CPU or in a shader; regardless of where you perform the conversion, the formulas are the same.
There is no definitive method for converting to greyscale, since you're discarding information, and whether the end result looks correct is entirely subjective. Below are some common methods for converting from RGB to greyscale:
A naive approach is to find the colour channel with the highest value and just use that.
grey = max(colour.r, max(colour.g, colour.b));
The naive approach suffers in that certain areas of your image will lose detail completely if they contain none of the colour with the highest value. To prevent this we can use a simple average of all the colour components.
grey = (colour.r + colour.g + colour.b) / 3.0;
A 'better' method is to use the luma value. The human eye perceives some colour wavelengths better than others. So if we give more weight to those colours we produce a more plausible greyscale.
grey = dot_product(colour, vec3(0.299, 0.587, 0.114));
Yet another method is to 'desaturate' the image: first convert from the RGB colour space to HSL, then reduce the saturation to zero; the remaining lightness is the grey level.
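For reference, minimal C++ sketches of the four methods above, each acting on a single pixel (the Rgb type and function names are assumptions of mine, not from the skeleton code):

#include <algorithm>
#include <cstdint>

struct Rgb { std::uint8_t r, g, b; };

// Naive: take the brightest channel.
std::uint8_t greyMax(Rgb c)
{
    return std::max({ c.r, c.g, c.b });
}

// Simple average of the three channels.
std::uint8_t greyAverage(Rgb c)
{
    return std::uint8_t((c.r + c.g + c.b) / 3);
}

// Luma: perceptual weights (the Rec. 601 coefficients quoted above).
std::uint8_t greyLuma(Rgb c)
{
    return std::uint8_t(0.299 * c.r + 0.587 * c.g + 0.114 * c.b + 0.5);
}

// Desaturation: HSL lightness with the saturation forced to zero.
std::uint8_t greyDesaturate(Rgb c)
{
    return std::uint8_t((std::max({ c.r, c.g, c.b }) +
                         std::min({ c.r, c.g, c.b })) / 2);
}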

Deciphering a code [duplicate]

Please explain what happens to an image when we use the histeq function in MATLAB. A mathematical explanation would be really helpful.
Histogram equalization seeks to flatten your image histogram. Basically, it models the image as a probability density function (or, in simpler terms, a histogram where you normalize each entry by the total number of pixels in the image) and tries to ensure that each intensity is equiprobable, i.e. occurs with equal probability.
Histogram equalization is intended for images that have poor contrast. Images that look too dark, too washed out, or too bright are good candidates; if you plot their histogram, the spread of the pixels is limited to a very narrow range. Histogram equalization flattens the histogram and gives you a better-contrast image; its effect is to stretch the dynamic range of the histogram.
In terms of the mathematical definition, I won't bore you with the details. I would love to have some LaTeX to show it here, but it isn't supported; as such, I defer you to this link that explains it in more detail: http://www.math.uci.edu/icamp/courses/math77c/demos/hist_eq.pdf
However, the final equation that you get for performing histogram equalization is essentially a 1-to-1 mapping. For each pixel in your image, you extract its intensity, then run it through this function. It then gives you an output intensity to be placed in your output image.
Suppose p_i is the probability of encountering a pixel with intensity i in your image (take the histogram bin count for intensity i and divide by the total number of pixels in the image). Given that you have L intensities in your image, the output intensity for an input intensity of i is:
g_i = floor( (L-1) * sum_{n=0}^{i} p_n )
You add up all of the probabilities from pixel intensity 0, then 1, then 2, all the way up to intensity i. This is familiarly known as the Cumulative Distribution Function.
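As a quick worked example (with made-up numbers): for L = 4 levels and probabilities p = (0.5, 0.25, 0.125, 0.125), the CDF is (0.5, 0.75, 0.875, 1.0), so the mapping is g = floor(3 * CDF) = (1, 2, 2, 3); the crowded low intensities get spread apart.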
MATLAB essentially performs histogram equalization using this approach. However, if you want to implement this yourself, it's actually pretty simple. Assume that you have an input image im that is of an unsigned 8-bit integer type.
function [out] = hist_eq(im, L)
    if (~exist('L', 'var'))
        L = 256; % default to 256 levels for an 8-bit image
    end
    h = imhist(im) / numel(im);    % normalized histogram (the PDF)
    cdf = cumsum(h);               % cumulative distribution function
    out = (L-1)*cdf(double(im)+1); % map every pixel through the CDF
    out = uint8(out);              % cast back; floors implicitly
This function takes in an image that is assumed to be of an unsigned 8-bit integer type. You can optionally specify the number of levels for the output. Usually L = 256 for an 8-bit image, so if you omit the second parameter, L is assumed to be 256. The first line computes the probabilities, and the next line computes the Cumulative Distribution Function (CDF). The next two lines perform the histogram equalization and convert back to unsigned 8-bit integer. Note that the uint8 casting implicitly performs the floor operation for us. Also note that we add an offset of 1 when accessing the CDF: MATLAB starts indexing at 1, while the intensities in your image start at 0.
The MATLAB command histeq pretty much does the same thing, except that if you call histeq(im), it assumes 64 output intensities by default rather than 256. Therefore, you can override that default by specifying an additional parameter giving how many intensity values appear in the image, just like we did above: histeq(im, 256);. Calling this in MATLAB and using the function I wrote above should give identical results.
As a bit of an exercise, let's use an image that is part of the MATLAB distribution called pout.tif. Let's also show its histogram.
im = imread('pout.tif');
figure;
subplot(2,1,1);
imshow(im);
subplot(2,1,2);
imhist(im);
As you can see, the image has poor contrast because most of the intensity values fall in a narrow range. Histogram equalization will flatten the histogram and thus increase the contrast of the image. Try this:
out = histeq(im, 256); % or you can use my function: out = hist_eq(im);
figure;
subplot(2,1,1);
imshow(out);
subplot(2,1,2);
imhist(out);
This is what we get:
As you can see, the contrast is better: darker pixels moved towards the darker end while lighter pixels got pushed towards the lighter end. A successful result, I think! Bear in mind that not all images will give you a good result with histogram equalization. Image processing is mostly trial and error: you put together a mishmash of different techniques until you get a good result.
This should hopefully get you started. Good luck!

Smooth color transition algorithm

I am looking for a general algorithm to smoothly transition between two colors.
For example, this image is taken from Wikipedia and shows a transition from orange to blue.
When I try to do the same in my code (C++), the first idea that comes to mind is using the HSV color space, but the annoying in-between colors show up.
What is a good way to achieve this? It seems to be related to reducing contrast, or maybe to using a different color space?
I have done tons of these in the past. The smoothing can be performed many different ways, but the way they are probably doing it here is a simple linear approach: for each R, G, and B component, they figure out the "y = m*x + b" line that connects the two endpoints and use it to compute the components in between.
m[RED] = (ColorRight[RED] - ColorLeft[RED]) / PixelsWidthAttemptingToFillIn
m[GREEN] = (ColorRight[GREEN] - ColorLeft[GREEN]) / PixelsWidthAttemptingToFillIn
m[BLUE] = (ColorRight[BLUE] - ColorLeft[BLUE]) / PixelsWidthAttemptingToFillIn
b[RED] = ColorLeft[RED]
b[GREEN] = ColorLeft[GREEN]
b[BLUE] = ColorLeft[BLUE]
Any new color in between is now:
NewCol[pixelXFromLeft][RED] = m[RED] * pixelXFromLeft + ColorLeft[RED]
NewCol[pixelXFromLeft][GREEN] = m[GREEN] * pixelXFromLeft + ColorLeft[GREEN]
NewCol[pixelXFromLeft][BLUE] = m[BLUE] * pixelXFromLeft + ColorLeft[BLUE]
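A minimal C++ sketch of this per-channel linear scheme (the types and names are mine, not from the pseudocode above):

#include <array>
#include <cstdint>
#include <vector>

using Color = std::array<std::uint8_t, 3>; // {R, G, B}

// Fill `width` pixels with a per-channel linear ramp from left to right,
// i.e. the y = m*x + b scheme described above.
std::vector<Color> linearRamp(Color left, Color right, int width)
{
    std::vector<Color> row(width);
    for (int x = 0; x < width; ++x) {
        const double t = (width > 1) ? double(x) / (width - 1) : 0.0;
        for (int c = 0; c < 3; ++c) {
            const double delta = double(right[c]) - double(left[c]); // total change across the ramp
            row[x][c] = std::uint8_t(left[c] + delta * t + 0.5);
        }
    }
    return row;
}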
There are many mathematical ways to create a transition; what we really want to understand is what transition you actually want to see. If you want the exact transition from the above image, it is worth looking at its color values. I wrote a program way back to examine such images and output their values graphically. Here is the output of my program for the above pseudocolor scale.
Based on the graph, it IS more complex than the linear scheme I described above. The blue component looks mostly linear, the red could be approximated as linear, but the green has a more rounded shape. We could perform mathematical analysis of the green channel to better understand its function and use that instead. You may find that a linear interpolation with an increasing slope from 0 to ~70 pixels and a decreasing slope after pixel 70 is good enough.
If you look at the bottom of the screen, this program gives some statistical measures of each color component, such as min, max, and average, as well as how many pixels wide the image read was.
A simple linear interpolation of the R, G, B values will do it.
trumpetlicks has shown that the image you used is not a pure linear interpolation, but I think an interpolation gives you the effect you're looking for. Below I show an image with a linear interpolation on top and your original image on the bottom.
And here's the (Python) code that produced it:
for y in range(height // 2):
    for x in range(width):
        p = x / float(width - 1)
        r = int((1.0 - p) * r1 + p * r2 + 0.5)
        g = int((1.0 - p) * g1 + p * g2 + 0.5)
        b = int((1.0 - p) * b1 + p * b2 + 0.5)
        pix[x, y] = (r, g, b)
The HSV color space is not a very good choice for smooth transitions, because the hue value h is just an arbitrary ordering of colors around the 'color wheel'. That means if you go between two colors far apart on the wheel, you'll have to dip through a bunch of other colors in between. Not smooth at all.
It would make a lot more sense to use RGB (or CMYK). These 'component' color spaces are better defined to make smooth transitions because they represent how much of each 'component' a color needs.
A linear transition (see trumpetlicks' answer) for each component value R, G and B should look 'pretty good'. Anything more than 'pretty good' will require an actual human to tweak the values, because there are differences and asymmetries in how our eyes perceive color values in different color groups that aren't represented in either RGB or CMYK (or any standard).
The Wikipedia image is using the algorithm that Photoshop uses. Unfortunately, that algorithm is not publicly available.
I've been researching this to build an algorithm that takes a grayscale image as input and colorises it artificially according to a color palette:
(images: grayscale input and colorised output)
Just like many of the other solutions, the algorithm uses linear interpolation to make the transition between colours. With your example, smooth_color_transition() should be invoked with the following arguments:
QImage input("gradient.jpg");
QVector<QColor> colors;
colors.push_back(QColor(242, 177, 103)); // orange
colors.push_back(QColor(124, 162, 248)); // blue-ish
QImage output = smooth_color_transition(input, colors);
output.save("output.jpg");
A comparison of the original image VS output from the algorithm can be seen below:
(output)
(original)
The visual artefacts that can be observed in the output are already present in the input (grayscale). The input image got these artefacts when it was resized to 189x51.
Here's another example that was created with a more complex color palette:
(images: grayscale input and colorised output)
Seems to me like it would be easier to create the gradient using RGB values. You should first calculate the change in each color component per pixel, based on the width of the gradient. The following pseudocode would need to be done for the R, G, and B values:
redDifference = (redValue2 - redValue1) / widthOfGradient
You can then render each pixel with these values like so:
for (int i = 0; i < widthOfGradient; i++) {
    int r = round(redValue1 + i * redDifference);
    // ...repeat for green and blue
    drawLine(i, r, g, b);
}
I know you specified that you're using C++, but I created a JSFiddle demonstrating this working with your first gradient as an example: http://jsfiddle.net/eumf7/

Cement Effect - Artistic Effect

I wish to give an effect to images where the resulting image appears as if it were painted on a rough cement background, and the cement background adapts near the edges to highlight them. Please help me write an algorithm to generate such an effect.
The first image is the original image and the second image is the output I'm looking for.
Please note that the edges are detected and the mask changes near the edges to indicate them clearly.
You need to read up on Bump Mapping. There are plenty of bump mapping algorithms.
The basic algorithm is, for each pixel:
1. Look up the position on the bump map texture that corresponds to the position on the bumped image.
2. Calculate the surface normal of the bump map.
3. Add the surface normal from step 2 to the geometric surface normal (in the case of an image, a vector pointing straight up) so that the normal points in a new direction.
4. Calculate the interaction of the new 'bumpy' surface with lights in the scene using, for example, Phong shading. Light placement is up to you and decides where the shadows lie.
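As a sketch of steps 2 and 3 under my own assumptions (a heightAt accessor for the bump map and a strength factor), the new normal can be taken from central differences of the bump map:

#include <cmath>

struct Vec3 { double x, y, z; };

// Surface normal of a height-map texture at (x, y) via central differences.
// heightAt is a hypothetical accessor returning the bump value in [0, 1].
Vec3 bumpNormal(double (*heightAt)(int, int), int x, int y, double strength)
{
    // Gradient of the height field; larger strength exaggerates the bumps.
    double dx = (heightAt(x + 1, y) - heightAt(x - 1, y)) * strength;
    double dy = (heightAt(x, y + 1) - heightAt(x, y - 1)) * strength;
    // Combine with the flat geometric normal (0, 0, 1) and renormalize.
    double len = std::sqrt(dx * dx + dy * dy + 1.0);
    return { -dx / len, -dy / len, 1.0 / len };
}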
Finally, here's a plain recipe for implementing this on 2D images.
Start with:
1) the input image as R, G, B channels, and
2) a grayscale texture image.
The images are likely stored as bytes, 0 to 255. Divide them by 255.0 so the values run from 0.0 to 1.0; this makes the math easier. For performance you wouldn't actually do this but would instead use clever fixed-point math, an implementation matter I leave to you.
First, to get the edge effects between different colored areas, add or subtract some fraction of the R, G, and B channels to the texture image:
texture_mod = texture - 0.2*R - 0.3*B
You could get fancier with nonlinear formulas, e.g. thresholding the R, G and B channels, or computing some mathematical expression involving them. This is always fun to experiment with; I'm not sure what would work best to recreate your example.
Next, compute an embossed version of texture_mod to create the lighting effect. This is the difference between the texture shifted up and right by one pixel (or however much you like) and the same texture shifted down and left by the same amount; the difference gives the 3D lighting effect.
emboss = shift(texture_mod, 1,1) - shift(texture_mod, -1, -1)
(Should you use texture_mod or the original texture data in this formula? Experiment and see.)
Here's the power step. Convert the input image to HSV space. (LAB or other colorspaces may work better, or not; experiment and see.) Note that in your desired final image the cracks between the "mesas" are darker, so we will use texture_mod and the emboss difference to alter the V channel, with coefficients to control the strength of each effect:
Vmod = V * ( 1.0 + C_depth * texture_mod + C_light * emboss)
Both C_depth and C_light should be between 0 and 1, probably smaller fractions like 0.2 to 0.5 or so. You will need a fudge factor to keep Vmod from overflowing or clamping at its maximum: divide by (1 + C_depth + C_light). Some clamping at the bright end may help the highlights look brighter. As always, experiment and see...
As a fine point, you could also modify the Saturation channel in some way, perhaps decreasing it where texture_mod is lower.
Finally, convert (H, S, Vmod) back to RGB color space.
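Putting the texture and emboss steps together, here is a rough single-channel C++ sketch of the Vmod computation (the Image type and clamped-border access are scaffolding of my own; texture_mod is assumed to be precomputed as described above):

#include <algorithm>
#include <vector>

// Minimal grayscale image: values in [0, 1], row-major storage.
struct Image {
    int w, h;
    std::vector<double> v;
    double at(int x, int y) const {   // clamped border access
        x = std::max(0, std::min(x, w - 1));
        y = std::max(0, std::min(y, h - 1));
        return v[y * w + x];
    }
};

// Apply the depth and emboss terms to a brightness channel V, following
// the Vmod formula above. tex is the precomputed texture_mod image;
// cDepth and cLight play the roles of C_depth and C_light.
Image applyCement(const Image &V, const Image &tex, double cDepth, double cLight)
{
    Image out{ V.w, V.h, std::vector<double>(V.v.size()) };
    const double norm = 1.0 + cDepth + cLight;   // fudge factor against overflow
    for (int y = 0; y < V.h; ++y) {
        for (int x = 0; x < V.w; ++x) {
            // emboss = texture shifted up-right minus shifted down-left
            double emboss = tex.at(x + 1, y + 1) - tex.at(x - 1, y - 1);
            double vmod = V.at(x, y)
                        * (1.0 + cDepth * tex.at(x, y) + cLight * emboss) / norm;
            out.v[y * V.w + x] = std::max(0.0, std::min(1.0, vmod));
        }
    }
    return out;
}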
If memory is tight or performance is critical, you can skip the HSV conversion and apply the Vmod formula directly to the individual R, G, B channels, but this will cause shifts in hue and saturation. It's a tradeoff between speed and good looks.
This is called bump mapping. It is used to give a non-flat appearance to a surface.