So if I have a colour circle ranging from red to violet (0-360 degrees), can I get a colour from an angle? I have been searching, but I have only found code to convert between different formats and nothing really to do with angles. I would really like to know the math behind this.
I'm just writing a C++ program for my Arduino with a joystick and an RGB LED. I've got the easy stuff done, but I don't even know where to begin with the colour.
The RGB color space is based on Cartesian coordinates. If you want an angle, that means you want something akin to polar coordinates; the color space you are looking for is either HSL or HSV.
https://en.wikipedia.org/wiki/HSL_and_HSV#From_HSV
In HSV you can, for example, use maximum Saturation and maximum Value; then you only have to pick the Hue (which is an angle).
That being said, you can also make up your own mapping and use, for example:
(R, G, B) = (128 + 127*cos(x), 128 + 127*cos(x + 120), 128 + 127*cos(x - 120))
where cos works in degrees; the offset and scale keep each channel within the valid 0-255 range.
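If you go the HSV route, here is a minimal C++ sketch (the hueToRgb helper and the 8-bit output are my own assumptions) of the standard hue-to-RGB conversion at full saturation and value:
#include <cmath>
#include <cstdint>

struct Rgb { uint8_t r, g, b; };

// Standard HSV->RGB for S = V = 1: only the hue (an angle in
// degrees, assumed non-negative) is free. Output is 8-bit per
// channel, as on a typical Arduino RGB LED.
Rgb hueToRgb(float hueDegrees)
{
    float h = std::fmod(hueDegrees, 360.0f) / 60.0f; // sector 0..6
    float x = 1.0f - std::fabs(std::fmod(h, 2.0f) - 1.0f);
    float r = 0, g = 0, b = 0;
    if      (h < 1) { r = 1; g = x; }
    else if (h < 2) { r = x; g = 1; }
    else if (h < 3) { g = 1; b = x; }
    else if (h < 4) { g = x; b = 1; }
    else if (h < 5) { r = x; b = 1; }
    else            { r = 1; b = x; }
    return { uint8_t(r * 255), uint8_t(g * 255), uint8_t(b * 255) };
}
hueToRgb(0), hueToRgb(120) and hueToRgb(240) give pure red, green and blue respectively, so sweeping the joystick angle through 0-360 degrees walks the whole color wheel.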
Does anyone know how to convert to grayscale? Below is some skeleton code I need to use to do so. Specifically: convert "before" to grayscale, apply the Sobel edge-detection convolution filter, and store the result in "after". before must be non-empty.
template <typename color_depth>
void edge_detect(gfx::image<color_depth>& after,
                 const gfx::image<color_depth>& before)
{
    // Check arguments.
    assert(!before.empty());

    // TODO: replace this function body with working code. Make sure
    // to delete this comment.
    // Hint: Use the grayscale(...) and extend_edges(...) filters to
    // prepare for the Sobel convolution. Then compute the Sobel
    // operator one pixel at a time. Finally use crop_extended_edges
    // to un-do the earlier extend_edges.
}
This looks to be a homework question, so I won't give a full implementation. I also can't tell whether you want to convert to greyscale on the CPU or in a shader. Regardless of where you perform the conversion, the formulas are the same.
There is no definitive method for converting to greyscale, since you're discarding information, and whether the end result looks correct is entirely subjective. Below are some common methods for converting from RGB to greyscale:
A naive approach is to find the colour channel with the highest value and just use that.
grey = max(colour.r, max(colour.g, colour.b));
The naive approach suffers in that certain areas of your image will lose detail completely if they contain none of the colour with the highest value. To prevent this we can use a simple average of all the colour components.
grey = (colour.r + colour.g + colour.b) / 3.0;
A 'better' method is to use the luma value. The human eye perceives some colour wavelengths better than others. So if we give more weight to those colours we produce a more plausible greyscale.
grey = dot_product(colour, vec3(0.299, 0.587, 0.114));
Yet another method is to 'desaturate' the image: first convert from the RGB colour space to HSL, then reduce the saturation to zero.
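For reference, a small C++ sketch of the first three formulas (the Pixel type and 8-bit channels are my own assumptions; desaturation needs a full RGB-to-HSL round-trip, so it is omitted):
#include <algorithm>
#include <cstdint>

struct Pixel { uint8_t r, g, b; };  // hypothetical 8-bit RGB pixel

uint8_t greyMax(const Pixel& p)      // naive: brightest channel wins
{
    return std::max({p.r, p.g, p.b});
}

uint8_t greyAverage(const Pixel& p)  // simple average of the channels
{
    return uint8_t((p.r + p.g + p.b) / 3);
}

uint8_t greyLuma(const Pixel& p)     // Rec. 601 perceptual weights
{
    return uint8_t(0.299f * p.r + 0.587f * p.g + 0.114f * p.b);
}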
I have calculated an anomaly image from a grayscale image. The result is in the image below:
[anomaly image with 5 anomalies]
Once I have the result above, I want to colorize the anomalies with:
1. red, if the anomaly area is greater than 10 px, or
2. green, if the anomaly area is less than or equal to 10 px.
For calculating the properties of the anomalies I used the OpenCV function connectedComponentsWithStats(). I can see that the function calculated the centroids of the anomalies for me, and also the areas.
How can I now color all detected connected components? In MATLAB I was using something like PixelIdxList to address each connected component and repmat(1,1,3) to extend the binary image to RGB, then setting one of the channels to true, but how can I address all connected components in C++?
You can do the same thing in OpenCV: change the number of channels of your output image to 3 (RGB/BGR), then loop through each pixel and set it to the colour you want based on your criteria (area, centroid) in a nested if/else.
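An untested sketch of that idea (the function name and the BGR colours are mine; label 0 is the background by convention):
#include <opencv2/opencv.hpp>

// Colour each connected component of an 8-bit binary mask by its
// area: red if the area exceeds 10 px, green otherwise.
cv::Mat colorizeAnomalies(const cv::Mat& binary)
{
    cv::Mat labels, stats, centroids;
    cv::connectedComponentsWithStats(binary, labels, stats, centroids);

    cv::Mat color(binary.size(), CV_8UC3, cv::Scalar::all(0));
    for (int y = 0; y < labels.rows; ++y) {
        for (int x = 0; x < labels.cols; ++x) {
            int label = labels.at<int>(y, x);
            if (label == 0) continue;  // skip the background
            int area = stats.at<int>(label, cv::CC_STAT_AREA);
            color.at<cv::Vec3b>(y, x) = (area > 10)
                ? cv::Vec3b(0, 0, 255)   // red (BGR order)
                : cv::Vec3b(0, 255, 0);  // green
        }
    }
    return color;
}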
I want to implement a function in C++/RealBasic to create a color gradient from the following parameters:
Width and height of the image
2 colors of the gradient
Angle (direction) of the gradient
Strength of the gradient
The following links show some examples of the desired output image:
http://www.artima.com/articles/linear_gradients_in_flex_4.html, http://i.stack.imgur.com/4ssfj.png
I have found multiple examples, but they only give me vertical and horizontal gradients, while I want to specify the angle and strength too.
Can someone help me please?
P.S.: I know only a little about geometry!! :(
Your question is very broad, and as-is this is a pretty complex exercise with a lot of code: image rendering, image format handling, writing the file to disk, etc. These are not the matter of a single function. Because of this, I will focus on making an arbitrary linear color gradient between 2 colors.
Linear color gradient
You can create a linear color "gradient" by linearly interpolating between 2 colors. However, simple linear interpolation produces really harsh-looking transitions. For visually more appealing results I recommend using some kind of S-shaped interpolation curve, like the Hermite-interpolation-based smoothstep.
Regarding the angle, you can define a line segment by the start (p0) and end (p1) points of the color gradient. Let's call the distance between them d01, so d01 = distance(p0, p1). Then for each pixel point p of the image, you have to compute the closest point p2 on this segment. Here is an example of how to do that. Then compute t = distance(p0, p2) / d01. This will be the lerp parameter t in the range [0, 1].
Interpolate between the 2 gradient colors by this t and you get the color for the given point p.
This can be implemented in multiple ways. You could use OpenGL to render the image, then read the pixel buffer back into RAM. If you are not familiar with OpenGL or the rendering process, you can instead write a function which takes a point (the 2D coordinates of a pixel) and returns an RGB color, and compute all the pixels of the image with it. Finally, you can write the image to disk in some image format, but that is another story.
The following are example C++14 implementations of some functions mentioned above.
Simple linear interpolation:
template <typename T, typename U>
T lerp(const T &a, const T &b, const U &t)
{
    return (U(1) - t)*a + t*b;
}
, where a and b are the two values (colors in this case) you want to interpolate between, and t is the interpolation parameter in the range [0, 1] representing the transition between a and b.
Of course the above function requires a type T which supports multiplication by a scalar. You can simply use any 3D vector type for this purpose, since colors are actually coordinates in color space.
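For instance, a minimal colour type (the names are my own) that satisfies those requirements:
struct Color { double r, g, b; };  // components assumed in [0, 1]

// Scalar multiplication and addition are all that lerp needs.
Color operator*(double s, const Color& c)
{
    return { s * c.r, s * c.g, s * c.b };
}

Color operator+(const Color& a, const Color& b)
{
    return { a.r + b.r, a.g + b.g, a.b + b.b };
}
With this, lerp(Color{1, 0, 0}, Color{0, 0, 1}, 0.5) gives the colour halfway between red and blue.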
Distance between two 2D points:
#include <cmath>

struct Point2 { double x, y; };  // minimal 2D point type

Point2 operator-(const Point2& a, const Point2& b)
{
    return { a.x - b.x, a.y - b.y };
}

auto length(const Point2 &p)
{
    return std::sqrt(p.x*p.x + p.y*p.y);
}

auto distance(const Point2 &a, const Point2 &b)
{
    Point2 delta = b - a;
    return length(delta);
}
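Putting it all together, here is a sketch (reusing lerp, Color and Point2 from above) of a per-pixel gradient function with the smoothstep easing mentioned earlier:
#include <algorithm>

// Hermite smoothstep: S-shaped remap of t in [0, 1] that softens
// the transition compared to plain linear interpolation.
double smoothstep(double t)
{
    return t * t * (3.0 - 2.0 * t);
}

// Colour of pixel p for a linear gradient running from c0 at p0
// to c1 at p1.
Color gradientColorAt(const Point2 &p, const Point2 &p0, const Point2 &p1,
                      const Color &c0, const Color &c1)
{
    Point2 d = p1 - p0;
    Point2 v = p - p0;
    // Projecting p onto the p0-p1 segment and dividing by d01
    // yields the lerp parameter t; clamp it to [0, 1].
    double t = (v.x*d.x + v.y*d.y) / (d.x*d.x + d.y*d.y);
    t = std::max(0.0, std::min(1.0, t));
    return lerp(c0, c1, smoothstep(t));
}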
[Example gradient image from https://developer.mozilla.org/en-US/docs/Web/CSS/linear-gradient]
I want to convert an image of a yellow chess board on a wall to a black-and-white image in which all yellow (or any shade of yellow) portions get converted to black, so that I get a clean chessboard image to use with the findChessboardCorners() function in OpenCV. It works fine with a grayscale image, but I want to make findChessboardCorners() work faster, as it is faster with true black-and-white images. Can anyone suggest a method to do this in OpenCV?
1. Convert to HSV.
2. inRange() for the yellow region.
3. Using the resulting mask, set the Mat to zero at the yellow locations (see the sketch below).
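A minimal sketch of those three steps (the hue bounds for yellow are an assumption; tune them for your lighting):
#include <opencv2/opencv.hpp>

cv::Mat suppressYellow(const cv::Mat& bgr)
{
    cv::Mat hsv, mask;
    cv::Mat result = bgr.clone();
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    // OpenCV hue runs 0-179; yellow sits roughly around 20-35.
    cv::inRange(hsv, cv::Scalar(20, 100, 100),
                     cv::Scalar(35, 255, 255), mask);
    result.setTo(cv::Scalar(0, 0, 0), mask);  // zero out yellow pixels
    return result;
}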
Try thresholding after converting to greyscale, then feed the result to findChessboardCorners(). That is not very processing-intensive.
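That could look like this (a sketch; the Otsu flag picks the threshold automatically, though a fixed value works too):
#include <opencv2/opencv.hpp>

cv::Mat toBlackAndWhite(const cv::Mat& bgr)
{
    cv::Mat grey, bw;
    cv::cvtColor(bgr, grey, cv::COLOR_BGR2GRAY);
    cv::threshold(grey, bw, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    return bw;
}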
I'm developing software that detects boxers' punching motion. At the moment I use colour-based segmentation with the inRange function, set to detect between a blue minimum value and a blue maximum value. The problem is that the range is quite wide, and my cam at times picks up noise and segments objects of no interest. To improve the software, I thought of scanning an image of a boxing glove and establishing the exact blue colour value before further processing.
It would make sense to me to store that value in a vector and use it in the inRange function.
// My current function which takes the Minimum and Maximum values of Blue Color
Mat range_out;
inRange(blur_out, Scalar(100, 100, 100), Scalar(120, 255, 255), range_out);
So I would imagine the vector goes somewhere here:
1. Scan the image above and compute the blue value.
2. Store this value in an array.
3. Recall the array in the inRange function.
Could someone suggest a solution to this problem, or direct me to a source of information where I can look for answers?
Since you are detecting the boxing gloves in motion, first use motion to separate them from the other elements in the scene: use frame differencing or optical flow to separate the glove and other moving areas from non-moving areas. Then, in those moving areas, try some colour detection.
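A minimal frame-differencing sketch of that first step (the threshold of 25 is an assumption to tune):
#include <opencv2/opencv.hpp>

// Pixels that changed between two consecutive greyscale frames are
// candidates for the moving glove; colour detection can then be
// restricted to this mask.
cv::Mat movingAreas(const cv::Mat& prevGrey, const cv::Mat& currGrey)
{
    cv::Mat diff, motionMask;
    cv::absdiff(prevGrey, currGrey, diff);
    cv::threshold(diff, motionMask, 25, 255, cv::THRESH_BINARY);
    return motionMask;
}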
1. Separate luminosity and chromaticity: a fixed range will not work very well in different light conditions. Your range is wide probably because you are trying to see "blue" both in the dark and in the light at the same time. Convert your image to HSV (or La*b*) and discard V (or L), keeping H and S (or a* and b*).
2. Learn a color distribution instead of a simple range: take some samples and compute a 2D color histogram on H and S (or a* and b*) for pixels on the glove. This histogram will be a model for the color distribution of your object. Then use cv::calcBackProject to detect the pixels of interest in your scene.
3. Clean the result using a morphological close operation.
Important: in step 2, play a little with different quantization values (i.e., different numbers of bins); see the sketch below.
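An untested sketch of steps 1-2 plus the close from step 3 (the bin counts are the quantization values to experiment with):
#include <opencv2/opencv.hpp>

// Learn an H-S histogram from a glove sample, back-project it onto
// a scene frame, then clean the result with a morphological close.
cv::Mat detectGlovePixels(const cv::Mat& gloveSampleBgr,
                          const cv::Mat& sceneBgr)
{
    cv::Mat gloveHsv, sceneHsv, hist, backProj;
    cv::cvtColor(gloveSampleBgr, gloveHsv, cv::COLOR_BGR2HSV);
    cv::cvtColor(sceneBgr, sceneHsv, cv::COLOR_BGR2HSV);

    int channels[] = {0, 1};    // H and S only; V is discarded
    int histSize[] = {30, 32};  // quantization: tune these bin counts
    float hRange[] = {0, 180}, sRange[] = {0, 256};
    const float* ranges[] = {hRange, sRange};

    cv::calcHist(&gloveHsv, 1, channels, cv::Mat(), hist, 2,
                 histSize, ranges);
    cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);
    cv::calcBackProject(&sceneHsv, 1, channels, hist, backProj, ranges);

    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE,
                                               cv::Size(5, 5));
    cv::morphologyEx(backProj, backProj, cv::MORPH_CLOSE, kernel);
    return backProj;
}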