How can I change RGB values according to a percentage - C++

I want to change RGB values according to a percentage: 0% should show red, 50% green, and 100% blue. I am working with FastLED. I tried the following but didn't get good results. Can anyone suggest a better approach?
int R, G, B;
int p = percentage;
if (p >= 0 and p <= 50) {
    R = abs(p - 100);
    G = p * 2;
}
if (p > 50 and p <= 100) {
    G = abs(p - 100);
    B = p * 2;
}
I also tried:
R = abs(p - 100);
G = p / 2;
B = p;
leds[0] = CRGB(R, G, B);
FastLED.show();

You need to convert percentage values to 8-bit binary values, i.e., convert values in the range [0,100] into values in the range [0,255] (which is [0x00,0xFF] in hex).
A simple scaling operation does this:
int r = pR * 255 / 100; // percentage red to hex
or equivalently:
int r = pR * 0xFF / 100; // percentage red to hex
The opposite conversion, from hex value to percentage, is just the reverse operation.
Note that since there are only 101 percentage values, you won't get all of the 256 possible 8-bit hex values when you do this conversion, but it should be close enough.
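A minimal round-trip sketch (the helper names percentToByte and byteToPercent are mine, purely illustrative):
#include <cstdint>

// Scale a percentage in [0,100] to an 8-bit value in [0,255].
// Adding 50 before dividing rounds to nearest instead of truncating.
uint8_t percentToByte(int p) {
    return static_cast<uint8_t>((p * 255 + 50) / 100);
}

// The reverse operation: an 8-bit value back to a percentage.
int byteToPercent(uint8_t v) {
    return (v * 100 + 127) / 255;
}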

From your problem statement, you probably want something like the following, which generates RGB colors counter-clockwise around the sRGB color gamut from red to blue.
#include <array>
#include <cmath>
#include <cstdint>
#include <iostream>
#include <string>

std::array<uint8_t, 3> getcolorpercent(double percent)
{
    std::array<uint8_t, 3> rgb{};
    int segment{ static_cast<int>(percent / 25) };
    double percent_f = .01 * (percent - 25 * segment);
    double col0{ 1 }, col1{ 1 };
    if (segment % 2 == 0)
        col1 = std::sqrt(4 * percent_f);
    else
        col0 = std::sqrt(1 - 4 * percent_f);
    rgb[(segment / 2) % 3] = static_cast<uint8_t>(std::round(255 * col0));
    rgb[(1 + segment / 2) % 3] = static_cast<uint8_t>(std::round(255 * col1));
    return rgb;
}
int main()
{
    auto print = [](const std::array<uint8_t, 3> rgb, std::string descr) {
        // note: 0+... converts uint8_t to int to prevent it printing as a char
        std::cout << descr << " red:" << 0 + rgb[0] << " green:" << 0 + rgb[1]
                  << " blue:" << 0 + rgb[2] << '\n';
    };
    std::array<uint8_t, 3> rgb_red = getcolorpercent(0);
    std::array<uint8_t, 3> rgb_orange = getcolorpercent(15);
    std::array<uint8_t, 3> rgb_yellow = getcolorpercent(25);
    std::array<uint8_t, 3> rgb_cyan = getcolorpercent(75);
    std::array<uint8_t, 3> rgb_violet = getcolorpercent(130);
    print(rgb_red, "red=");
    print(rgb_orange, "orange=");
    print(rgb_yellow, "yellow=");
    print(rgb_cyan, "cyan=");
    print(rgb_violet, "violet=");
}
Output:
red= red:255 green:0 blue:0
orange= red:255 green:198 blue:0
yellow= red:255 green:255 blue:0
cyan= red:0 green:255 blue:255
violet= red:255 green:0 blue:228
This creates a (reversed) rainbow from red to blue for 0% to 100%. Additionally, the function accepts percentages above 100, which produces colors going from blue to violet to purple and back toward red.
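Tying this back to the FastLED code in the question, a hedged usage sketch (leds and percentage are the question's variables; getcolorpercent is defined above) might be:
// Untested glue code: map the percentage to a color and push it to the strip.
std::array<uint8_t, 3> rgb = getcolorpercent(percentage);
leds[0] = CRGB(rgb[0], rgb[1], rgb[2]);
FastLED.show();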

Related

How to calculate the RGB values of a pixel from the luminance?

I want to compute the RGB values from the luminance.
The data that I know are:
the new luminance (the value that I want to apply)
the old luminance
the old RGB values.
We can compute the luminance from the RGB values like this:
uint8_t luminance = R * 0.21 + G * 0.71 + B * 0.07;
My code is:
// We create a function to set the luminance of a pixel
void jpegImage::setLuminance(uint8_t newLuminance, unsigned int x, unsigned int y) {
    // If the X or Y value is out of range, we throw an error
    if(x >= width) {
        throw std::runtime_error("Error : in jpegImage::setLuminance : The X value is out of range");
    }
    else if(y >= height) {
        throw std::runtime_error("Error : in jpegImage::setLuminance : The Y value is out of range");
    }
    // If the image is monochrome
    if(pixelSize == 1) {
        // We set the pixel value to the luminance
        pixels[y][x] = newLuminance;
    }
    // Else the image is colored
    else if(pixelSize == 3) {
        // I don't know how to proceed
        // My image is stored in a std::vector<std::vector<uint8_t>> pixels;
        // This is a list that contains the lines of the image
        // Each line contains the RGB values of its pixels
        // For example, an image with 2 columns and 3 lines:
        // [[R, G, B, R, G, B], [R, G, B, R, G, B], [R, G, B, R, G, B]]
        // For example, the R value at x = 23, y = 12 is:
        // pixels[12][23 * pixelSize];
        // and the B value at x = 23, y = 12 is:
        // pixels[12][23 * pixelSize + 2];
        // If the image is colored, pixelSize will be 3 (R, G and B);
        // if the image is monochrome, pixelSize will be 1 (just the luminance value)
    }
}
How can I proceed?
Thanks!
You don't need the old luminance if you have the original RGB.
Referencing https://www.fourcc.org/fccyvrgb.php for YUV to RGB conversion.
Compute U and V from original RGB:
```
V = (0.439 * R) - (0.368 * G) - (0.071 * B) + 128
U = -(0.148 * R) - (0.291 * G) + (0.439 * B) + 128
```
Y is the new luminance normalized to a value between 0 and 255
Then just convert back to RGB:
B = 1.164 * (Y - 16) + 2.018 * (U - 128)
G = 1.164 * (Y - 16) - 0.813 * (V - 128) - 0.391 * (U - 128)
R = 1.164 * (Y - 16) + 1.596 * (V - 128)
Make sure you clamp the computed value of each equation to the range 0..255; some of these formulas can produce values less than 0 or greater than 255.
There are also multiple formulas for converting between YUV and RGB (different constants). I noticed the page linked above computes Y differently than the formula you cited. They are all relatively close, with different precisions and adjustments. For just changing the brightness of a pixel, almost any formula will do.
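As a minimal sketch of how this could look inside the questioner's setLuminance (the members pixels, pixelSize, newLuminance come from the question; the constants are the ones above, so treat this as an untested outline):
// Inside setLuminance, for the pixelSize == 3 branch (untested sketch):
if(pixelSize == 3) {
    uint8_t R = pixels[y][x * pixelSize];
    uint8_t G = pixels[y][x * pixelSize + 1];
    uint8_t B = pixels[y][x * pixelSize + 2];
    // Keep the original chroma (U, V); replace only the luminance (Y).
    double V = (0.439 * R) - (0.368 * G) - (0.071 * B) + 128;
    double U = -(0.148 * R) - (0.291 * G) + (0.439 * B) + 128;
    double Y = newLuminance;
    auto clamp255 = [](double v) -> uint8_t {
        return static_cast<uint8_t>(v < 0 ? 0 : (v > 255 ? 255 : v));
    };
    pixels[y][x * pixelSize]     = clamp255(1.164 * (Y - 16) + 1.596 * (V - 128));                     // R
    pixels[y][x * pixelSize + 1] = clamp255(1.164 * (Y - 16) - 0.813 * (V - 128) - 0.391 * (U - 128)); // G
    pixels[y][x * pixelSize + 2] = clamp255(1.164 * (Y - 16) + 2.018 * (U - 128));                     // B
}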
Updated
I originally deleted this answer after the OP suggested it wasn't working for him. I was too busy for the last few days to investigate, but I have now written some sample code to confirm my hypothesis. At the bottom of this answer is a snippet of GDI+ based code that increases the luminance of an image by a variable amount, along with an image that I tested it on and two conversions: one at 130% brightness, another at 170% brightness.
Here's a sample conversion
Original Image
Updated Image (at 130% Y)
Updated Image (at 170% Y)
Source:
#define CLAMP(val) { val = (val > 255) ? 255 : ((val < 0) ? 0 : val); }

void Brighten(Gdiplus::BitmapData& dataIn, Gdiplus::BitmapData& dataOut, const double YMultiplier = 1.3)
{
    if (((dataIn.PixelFormat != PixelFormat24bppRGB) && (dataIn.PixelFormat != PixelFormat32bppARGB)) ||
        ((dataOut.PixelFormat != PixelFormat24bppRGB) && (dataOut.PixelFormat != PixelFormat32bppARGB)))
    {
        return;
    }
    if ((dataIn.Width != dataOut.Width) || (dataIn.Height != dataOut.Height))
    {
        // image sizes aren't the same
        return;
    }
    const size_t incrementIn = dataIn.PixelFormat == PixelFormat24bppRGB ? 3 : 4;
    const size_t incrementOut = dataOut.PixelFormat == PixelFormat24bppRGB ? 3 : 4;
    const size_t width = dataIn.Width;
    const size_t height = dataIn.Height;
    for (size_t y = 0; y < height; y++)
    {
        auto ptrRowIn = (BYTE*)(dataIn.Scan0) + (y * dataIn.Stride);
        auto ptrRowOut = (BYTE*)(dataOut.Scan0) + (y * dataOut.Stride);
        for (size_t x = 0; x < width; x++)
        {
            uint8_t B = ptrRowIn[0];
            uint8_t G = ptrRowIn[1];
            uint8_t R = ptrRowIn[2];
            uint8_t A = (incrementIn == 3) ? 0xFF : ptrRowIn[3];
            // Convert to YUV, scale the luminance, then convert back.
            auto Y = (0.257 * R) + (0.504 * G) + (0.098 * B) + 16;
            auto V = (0.439 * R) - (0.368 * G) - (0.071 * B) + 128;
            auto U = -(0.148 * R) - (0.291 * G) + (0.439 * B) + 128;
            Y *= YMultiplier;
            auto newB = 1.164 * (Y - 16) + 2.018 * (U - 128);
            auto newG = 1.164 * (Y - 16) - 0.813 * (V - 128) - 0.391 * (U - 128);
            auto newR = 1.164 * (Y - 16) + 1.596 * (V - 128);
            CLAMP(newR);
            CLAMP(newG);
            CLAMP(newB);
            ptrRowOut[0] = (BYTE)newB;
            ptrRowOut[1] = (BYTE)newG;
            ptrRowOut[2] = (BYTE)newR;
            if (incrementOut == 4)
            {
                ptrRowOut[3] = A; // keep original alpha
            }
            ptrRowIn += incrementIn;
            ptrRowOut += incrementOut;
        }
    }
}
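For completeness, a hedged usage sketch of Brighten (assumes GDI+ has already been initialized with GdiplusStartup; the file name is illustrative, and error handling plus the encoder CLSID needed for saving are omitted):
Gdiplus::Bitmap bmpIn(L"input.png");
Gdiplus::Bitmap bmpOut(bmpIn.GetWidth(), bmpIn.GetHeight(), PixelFormat24bppRGB);
Gdiplus::Rect rect(0, 0, bmpIn.GetWidth(), bmpIn.GetHeight());
Gdiplus::BitmapData dataIn{}, dataOut{};
bmpIn.LockBits(&rect, Gdiplus::ImageLockModeRead, PixelFormat24bppRGB, &dataIn);
bmpOut.LockBits(&rect, Gdiplus::ImageLockModeWrite, PixelFormat24bppRGB, &dataOut);
Brighten(dataIn, dataOut, 1.3); // 130% brightness
bmpIn.UnlockBits(&dataIn);
bmpOut.UnlockBits(&dataOut);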

BGR -> YCbCr conversion not working correctly

I am trying to manually convert an image from RGB (BGR in OpenCV) to the YCbCr color space.
My image is a color PNG, 800 pixels wide and 600 pixels high, with 3 channels and 16-bit depth.
Here's how I tried solving this.
cv::Mat convertToYCbCr(cv::Mat image) {
    // converts an RGB image to YCbCr
    // cv::Mat: B-G-R
    std::cout << "Converting image to YCbCr color space." << std::endl;
    int i, j;
    for (i = 0; i <= image.cols; i++) {
        for (j = 0; j <= image.rows; j++) {
            // R, G, B values
            auto R = image.at<cv::Vec3d>(j, i)[2];
            auto G = image.at<cv::Vec3d>(j, i)[1];
            auto B = image.at<cv::Vec3d>(j, i)[0];
            // Y'
            auto Y = image.at<cv::Vec3d>(j, i)[0] = 0.299 * R + 0.587 * G + 0.114 * B + 16;
            // Cb
            auto Cb = image.at<cv::Vec3d>(j, i)[1] = 128 + (-0.169 * R - 0.331 * G + 0.5 * B);
            // Cr
            auto Cr = image.at<cv::Vec3d>(j, i)[2] = 128 + (0.5 * R - 0.419 * G - 0.081 * B);
            std::cout << "At conversion: Y = " << Y << ", Cb = " << Cb << ", "
                      << Cr << std::endl;
        }
    }
    std::cout << "Converting finished." << std::endl;
    return image;
}
The image I receive looks like this:
What I am expecting is this (using OpenCV method):
Maybe the vertical lines hint at something? Is my loop wrong? Can I even just "replace" the RGB values with YCbCr values and expect the image to look like the example? typeid() returns the same value for both images, N2cv3MatE.
The primary reason for the incorrect results is the incorrect data type used to access the image. The correct type for accessing 16-bit unsigned pixels is cv::Vec3w (not cv::Vec3d).
The next issue is that the coefficients being used for the conversion are designed for analog signals (YPbPr). For digital images, we have to use coefficients designed for digital images (YCbCr). You can find more details in the Wikipedia article on YCbCr, in the section on ITU-R BT.601 conversion.
The piece of information missing from the article is how the coefficients change if the image has 16-bit unsigned or 32-bit floating-point depth. The answer is that we have to scale the coefficients according to the bit depth of the image.
For images with 16-bit unsigned depth, the scaling should be performed as follows:
auto Y = (R * 65.481f * scale) + (G * 128.553f * scale) + (B * 24.966f * scale) + (16.0f * offset);
auto Cb = (R * -37.797f * scale) + (G * -74.203f * scale) + (B * 112.0f * scale) + (128.0f * offset);
auto Cr = (R * 112.0f * scale) + (G * -93.786f * scale) + (B * -18.214f * scale) + (128.0f * offset);
where scale is equal to 257.0/65535.0 and offset is equal to 257.0.
This conversion technique has been adopted from the MATLAB source code for the rgb2ycbcr function, which references the following book describing the scaling:
C.A. Poynton, "A Technical Introduction to Digital Video", John Wiley & Sons, Inc., 1996, Chapter 9, Page 175
Now that the conversion is done, the third issue is visualizing the image the same way OpenCV does. When we perform color conversion with OpenCV, the output image is stored in the order YCrCb instead of the usual YCbCr. So to get the same image with our custom conversion logic, we have to store the values in that order.
A sample conversion code may look like this:
if(image.type() == CV_16UC3)
{
    const float scale = 257.0f / 65535.0f;
    const float offset = 257.0f;
    for (int i = 0; i < image.cols; i++)
    {
        for (int j = 0; j < image.rows; j++)
        {
            auto R = image.at<cv::Vec3w>(j, i)[2];
            auto G = image.at<cv::Vec3w>(j, i)[1];
            auto B = image.at<cv::Vec3w>(j, i)[0];
            auto Y = (R * 65.481f * scale) + (G * 128.553f * scale) + (B * 24.966f * scale) + (16.0f * offset);
            auto Cb = (R * -37.797f * scale) + (G * -74.203f * scale) + (B * 112.0f * scale) + (128.0f * offset);
            auto Cr = (R * 112.0f * scale) + (G * -93.786f * scale) + (B * -18.214f * scale) + (128.0f * offset);
            image.at<cv::Vec3w>(j, i)[0] = (unsigned short)Y;
            image.at<cv::Vec3w>(j, i)[1] = (unsigned short)Cr;
            image.at<cv::Vec3w>(j, i)[2] = (unsigned short)Cb;
        }
    }
}
You should use cv::cvtColor
cvtColor(src, target_image, cv::COLOR_RGB2YCrCb);
Then just flip the second and third channels.
Though you could be getting that error because you're not casting the resulting values to ints.
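A minimal sketch of that approach (assuming a BGR source, as cv::imread produces by default; the function name toYCbCr is mine):
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat toYCbCr(const cv::Mat& src) {
    cv::Mat ycrcb;
    cv::cvtColor(src, ycrcb, cv::COLOR_BGR2YCrCb); // OpenCV outputs Y, Cr, Cb
    // Swap channels 1 and 2 to get Y, Cb, Cr ordering.
    std::vector<cv::Mat> planes;
    cv::split(ycrcb, planes);
    std::swap(planes[1], planes[2]);
    cv::Mat ycbcr;
    cv::merge(planes, ycbcr);
    return ycbcr;
}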

How to get random value from assigned enum in c++?

I want a random color from this enum:
enum Color {
    red = 10,
    black = 3,
    pink = 6,
    rainbow = 99
};
Color my_random_color = ...;
How can I do that?
There is no way to enumerate the values of an enum.
You can use a table:
std::vector<int> colors = {red, black, pink, rainbow};
and then pick a random element from it.
Picking a random element is left as an exercise; a sketch follows below.
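For completeness, one way to do the pick with <random> rather than rand() (a sketch, not part of the original answer):
#include <random>
#include <vector>

enum Color { red = 10, black = 3, pink = 6, rainbow = 99 };

Color randomColor() {
    // Table of the enum's values; there is no way to enumerate them automatically.
    static const std::vector<Color> colors = { red, black, pink, rainbow };
    static std::mt19937 gen{ std::random_device{}() };
    std::uniform_int_distribution<std::size_t> dist(0, colors.size() - 1);
    return colors[dist(gen)];
}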
Note: This is a different approach based on my understanding of what the OP desires.
What you can do is generate a random number in the range 0-3, since you have four colors. Then store your colors in an array and use a function returning a random number as the index into that array. This way you will get a random color from among the ones you have.
e.g.
random() {
    // function definition
    // returns a random number in the range of the array indices
}
array = ["red", "black", "pink", "rainbow"];
array[random()];
You can create an array containing the possible enum values and then generate a random number for the array index, from 0 up to the total number of possible enum values minus one.
#include <cstdlib>
#include <ctime>
#include <iostream>

int main()
{
    enum Color { red = 10, black = 3, pink = 6, rainbow = 99 };
    int max = 3;
    int min = 0;
    srand(static_cast<unsigned int>(time(0)));
    int randNum = rand() % (max - min + 1) + min;
    Color forRandomPurpose[] = { red, black, pink, rainbow };
    Color my_random_color = forRandomPurpose[randNum];
    std::cout << my_random_color << std::endl;
}
Just for fun (-:
Color random_color() {
    int r = rand() % 4;
    return static_cast<Color>((r + 1) * 3 + (r >= 2) + (r >= 3) * 86);
}
Live demo: https://wandbox.org/permlink/j4YNMqWs41QJFeOB

How to calculate Gaussian-weighted Circular Window?

I have a matrix with values filled in every field; the size is e.g. 15x15 (225 fields). Now I want to calculate the weight of every field based on its distance from the center field of the matrix: the greater the distance, the less the pixel's value is weighted in the calculation. This should look like a circle around the center field. Here is an example image:
The small rectangle is the center field. The weighting should be a Gaussian-weighted circular window with a sigma of 1.5. How could I get this done? My thought was something like the following, where every weight is filled into a matrix of the same size for the calculation afterwards.
expf = 1.f/(2.f * 1.5 * 1.5);
[...]
W[k] = (i*i + j*j) * expf;
Where i and j are the distances from the center pixel (e.g. for the first iteration i = -7, j = -7).
This solution seemed fine to me, but the values I get are always very small, e.g.:
W[0]: 3.48362e-10
W[1]: 6.26123e-09
W[2]: 7.21553e-08
W[3]: 5.3316e-07
W[4]: 2.52596e-06
W[5]: 7.67319e-06
W[6]: 1.49453e-05
[...]
W[40]: 0.000523195
W[41]: 0.000110432
W[42]: 1.49453e-05
W[43]: 1.29687e-06
W[44]: 7.21553e-08
W[45]: 5.3316e-07
W[46]: 9.58266e-06
W[47]: 0.000110432
W[48]: 0.000815988
[...]
W[85]: 0.055638
W[86]: 0.0117436
W[87]: 0.00158933
W[88]: 0.000137913
[...]
W[149]: 7.67319e-06
W[150]: 2.52596e-06
W[151]: 4.53999e-05
W[152]: 0.000523195
W[153]: 0.00386592
Could it be that the calculation of the weights is wrong?
The PDF of a multivariate normal distribution is
f(\mathbf{x}) = (2\pi)^{-k/2}\,\lvert\Sigma\rvert^{-1/2}\exp\!\left(-\tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^{\top}\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\right)
For your case, this translates to
double weight(int i, int j, double var) {
    return 1 / (2 * M_PI) * std::exp(-0.5 * (i * i + j * j) / var / var);
}
where i and j are centered at 0 and 0, and var is the standard deviation (the exponent divides by var twice, i.e. by the variance).
Note:
This is the PDF. If you want the value to be 1 at the center, use weight(i, j, var) / weight(0, 0, var). Otherwise, you will indeed get small numbers.
The decay is specified by var: lower values decay faster.
For example, the following program prints
$ g++ --std=c++11 gs.cpp && ./a.out
1
0.884706
1
4.78512e-06
#include <cmath>
#include <iostream>

double weight(int i, int j, double var) {
    return 1 / (2 * M_PI) * std::exp(-0.5 * (i * i + j * j) / var / var);
}

int main() {
    {
        const double f = weight(0, 0, 20);
        std::cout << weight(0, 0, 20) / f << std::endl;
        std::cout << weight(-7, -7, 20) / f << std::endl;
    }
    {
        const double f = weight(0, 0, 2);
        std::cout << weight(0, 0, 2) / f << std::endl;
        std::cout << weight(-7, -7, 2) / f << std::endl;
    }
}
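Applied to the question's 15x15 window with a sigma of 1.5, a minimal sketch (same weight function as above, normalized so the center weight is 1; the flat 2D array layout is my assumption) could look like:
#include <cmath>
#include <iostream>

double weight(int i, int j, double sigma) {
    return 1 / (2 * M_PI) * std::exp(-0.5 * (i * i + j * j) / sigma / sigma);
}

int main() {
    const int radius = 7;        // 15x15 window -> offsets -7..7
    const double sigma = 1.5;    // as in the question
    const double center = weight(0, 0, sigma);
    double W[15][15];
    for (int i = -radius; i <= radius; ++i)
        for (int j = -radius; j <= radius; ++j)
            W[i + radius][j + radius] = weight(i, j, sigma) / center;
    // Center is 1; the corners are vanishingly small, as in the question's output.
    std::cout << W[7][7] << ' ' << W[0][0] << '\n';
}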

Algorithm for adjustment of image levels

I need to implement in C++ an algorithm for adjusting image levels that works similarly to the Levels function in Photoshop or GIMP. That is, the inputs are: a color RGB image to be adjusted, the white point, black point, midtone point, and the output from/to values. But I haven't yet found any info on how to perform this adjustment. Could someone recommend an algorithm description or materials to study?
So far I have come up with the following code myself, but it doesn't give the expected result: compared to what I can see, for example, in GIMP, the image becomes too light. Below is the current fragment of my code:
const int normalBlackPoint = 0;
const int normalMidtonePoint = 127;
const int normalWhitePoint = 255;
const double normalLowRange = normalMidtonePoint - normalBlackPoint + 1;
const double normalHighRange = normalWhitePoint - normalMidtonePoint;

int blackPoint = 53;
int midtonePoint = 110;
int whitePoint = 168;
int outputFrom = 0;
int outputTo = 255;

double outputRange = outputTo - outputFrom + 1;
double lowRange = midtonePoint - blackPoint + 1;
double highRange = whitePoint - midtonePoint;
double fullRange = whitePoint - blackPoint + 1;
double lowPart = lowRange / fullRange;
double highPart = highRange / fullRange;

int dim(256);
cv::Mat lut(1, &dim, CV_8U);
for(int i = 0; i < 256; ++i)
{
    double p = i > normalMidtonePoint
        ? (static_cast<double>(i - normalMidtonePoint) / normalHighRange) * highRange * highPart + lowPart
        : (static_cast<double>(i + 1) / normalLowRange) * lowRange * lowPart;
    int v = static_cast<int>(outputRange * p) + outputFrom - 1;
    if(v < 0) v = 0;
    else if(v > 255) v = 255;
    lut.at<uchar>(i) = v;
}
....
cv::Mat sourceImage = cv::imread(inputFileName, CV_LOAD_IMAGE_COLOR);
if(!sourceImage.data)
{
    std::cerr << "Error: couldn't load image " << inputFileName << "." << std::endl;
    continue;
}
#if 0
const int forwardConversion = CV_BGR2YUV;
const int reverseConversion = CV_YUV2BGR;
#else
const int forwardConversion = CV_BGR2Lab;
const int reverseConversion = CV_Lab2BGR;
#endif
cv::Mat convertedImage;
cv::cvtColor(sourceImage, convertedImage, forwardConversion);
// Extract the L channel
std::vector<cv::Mat> convertedPlanes(3);
cv::split(convertedImage, convertedPlanes);
cv::LUT(convertedPlanes[0], lut, convertedPlanes[0]);
//dst.copyTo(convertedPlanes[0]);
cv::merge(convertedPlanes, convertedImage);
cv::Mat resImage;
cv::cvtColor(convertedImage, resImage, reverseConversion);
cv::imwrite(outputFileName, resImage);
Pseudocode for Photoshop's Levels Adjustment
First, calculate the gamma correction value to use for the midtone adjustment (if desired). The following roughly simulates Photoshop's technique, which applies a gamma of 9.99-1.00 for midtone values 0-128, and 1.00-0.01 for midtone values 128-255.
Apply gamma correction:
Gamma = 1
MidtoneNormal = Midtones / 255
If Midtones < 128 Then
    MidtoneNormal = MidtoneNormal * 2
    Gamma = 1 + (9 * (1 - MidtoneNormal))
    Gamma = Min(Gamma, 9.99)
Else If Midtones > 128 Then
    MidtoneNormal = (MidtoneNormal * 2) - 1
    Gamma = 1 - MidtoneNormal
    Gamma = Max(Gamma, 0.01)
End If
GammaCorrection = 1 / Gamma
Then, for each channel value R, G, B (0-255) for each pixel, do the following in order.
Apply the input levels:
ChannelValue = 255 * ((ChannelValue - ShadowValue) /
                      (HighlightValue - ShadowValue))
Apply the midtones:
If Midtones <> 128 Then
    ChannelValue = 255 * (Pow((ChannelValue / 255), GammaCorrection))
End If
Apply the output levels:
ChannelValue = (ChannelValue / 255) *
               (OutHighlightValue - OutShadowValue) + OutShadowValue
Where:
All channel and adjustment parameter values are integers, 0-255 inclusive
Shadow/Midtone/HighlightValue are the input adjustment values (defaults 0, 128, 255)
OutShadow/HighlightValue are the output adjustment values (defaults 0, 255)
You should optimize things and make sure values are kept in bounds (like 0-255 for each channel)
For a more accurate simulation of Photoshop, you can use a non-linear interpolation curve if Midtones < 128. Photoshop also chops off the darkest and lightest 0.1% of the values by default.
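A minimal C++ sketch of the pseudocode above, building a 256-entry lookup table (the function name levelsLut is mine; assumes C++17 for std::clamp, and omits the non-linear midtone curve and 0.1% clipping just mentioned):
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdint>

std::array<uint8_t, 256> levelsLut(int shadow, int midtones, int highlight,
                                   int outShadow, int outHighlight) {
    // Midtone gamma, roughly as in the pseudocode above.
    double gamma = 1.0;
    double mid = midtones / 255.0;
    if (midtones < 128) {
        gamma = std::min(1 + 9 * (1 - 2 * mid), 9.99);
    } else if (midtones > 128) {
        gamma = std::max(1 - (2 * mid - 1), 0.01);
    }
    const double gammaCorrection = 1.0 / gamma;

    std::array<uint8_t, 256> lut{};
    for (int i = 0; i < 256; ++i) {
        // Input levels, then midtone gamma, then output levels.
        double v = 255.0 * (i - shadow) / double(highlight - shadow);
        v = std::clamp(v, 0.0, 255.0);
        if (midtones != 128) v = 255.0 * std::pow(v / 255.0, gammaCorrection);
        v = (v / 255.0) * (outHighlight - outShadow) + outShadow;
        lut[i] = static_cast<uint8_t>(std::clamp(v, 0.0, 255.0));
    }
    return lut;
}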
Ignoring the midtone/Gamma, the Levels function is a simple linear scaling.
All input values are first linearly scaled so that all values less than or equal to the "black point" are set to 0, and all values greater than or equal to the white point are set to 255.
Then all values are linearly scaled from 0/255 to the output range.
For the mid-point, it depends what you actually mean by that.
In GIMP, there is a Gamma value. The Gamma value is a simple exponent of the input values (after restricting to the black/white points).
For Gamma == 1, the values are not changed.
For gamma < 1, the values are darkened.
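As a sketch of that description (values are normalized to [0,1] in the middle; applying gamma as pow(v, 1/gamma) matches "gamma < 1 darkens", though the exact GIMP convention is my assumption):
#include <algorithm>
#include <cmath>
#include <cstdint>

uint8_t applyLevels(uint8_t in, int black, int white,
                    int outLo, int outHi, double gamma) {
    double v = (in - black) / double(white - black); // scale black..white to 0..1
    v = std::clamp(v, 0.0, 1.0);                     // clip values outside the points
    v = std::pow(v, 1.0 / gamma);                    // midtone gamma (1 = unchanged)
    return static_cast<uint8_t>(std::lround(v * (outHi - outLo) + outLo));
}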