WICConvertBitmapSource BGR to Gray unexpected pixel format conversion - c++

I am using the WICConvertBitmapSource function to convert the pixel format from BGR to Gray, and I'm getting unexpected pixel values.
...
pIDecoder->GetFrame( 0, &pIDecoderFrame );
pIDecoderFrame->GetPixelFormat( &pixelFormat ); // GUID_WICPixelFormat24bppBGR
IWICBitmapSource * dst;
WICConvertBitmapSource( GUID_WICPixelFormat8bppGray, pIDecoderFrame, &dst );
Example on a 4x3 image with the following BGR pixel values:
[ 0, 0, 255, 0, 255, 0, 255, 0, 0;
0, 255, 255, 255, 255, 0, 255, 0, 255;
0, 0, 0, 119, 119, 119, 255, 255, 255;
233, 178, 73, 233, 178, 73, 233, 178, 73]
Gray pixel values I am getting:
[127, 220, 76;
247, 230, 145;
0, 119, 255;
168, 168, 168]
Gray pixel values I expected to get (ITU-R BT.601 conversion):
[ 76, 149, 29;
225, 178, 105;
0, 119, 255;
152, 152, 152]
What kind of conversion is happening in the background, and is there a way to force the conversion to behave the way I want?
Also worth mentioning: the conversions work as expected for Gray -> BGR and BGRA -> BGR.

As for the question "What kind of conversion is happening in the background": it seems that a different conversion algorithm is used. Using the WINE project's implementation to calculate the grayscale values gives the same results, so it is a pretty good approximation of what is happening. The formula used is R * 0.2126 + G * 0.7152 + B * 0.0722, i.e. the ITU-R BT.709 luma coefficients rather than the BT.601 coefficients you expected.
copypixels_to_8bppGray (source):
float gray = (bgr[2] * 0.2126f + bgr[1] * 0.7152f + bgr[0] * 0.0722f) / 255.0f;
In addition to that, it is corrected for the sRGB color space.
copypixels_to_8bppGray (source):
gray = to_sRGB_component(gray) * 255.0f;
to_sRGB_component (source):
static inline float to_sRGB_component(float f)
{
if (f <= 0.0031308f) return 12.92f * f;
return 1.055f * powf(f, 1.0f/2.4f) - 0.055f;
}
Plugging in some values:
B    G    R    WINE           You're getting
0    0    255  127.1021805    127
0    255  0    219.932749     220
255  0    0    75.96269736    76
0    255  255  246.7295889    247
255  255  0    229.4984163    230
255  0    255  145.3857605    145
0    0    0    0              0
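To double-check the table, here is a small self-contained C++ sketch of WINE's computation as quoted above. The final round-to-nearest is my assumption, since the observed values suggest rounding rather than truncation:

#include <cmath>
#include <cstdio>

// sRGB gamma encoding, as in WINE's to_sRGB_component
static float to_sRGB_component(float f)
{
    if (f <= 0.0031308f) return 12.92f * f;
    return 1.055f * std::pow(f, 1.0f / 2.4f) - 0.055f;
}

// BGR -> 8bppGray: BT.709 luma on the raw bytes, then sRGB encoding
static unsigned char bgr_to_gray(unsigned char b, unsigned char g, unsigned char r)
{
    float gray = (r * 0.2126f + g * 0.7152f + b * 0.0722f) / 255.0f;
    return static_cast<unsigned char>(to_sRGB_component(gray) * 255.0f + 0.5f); // assumed rounding
}

int main()
{
    std::printf("%d\n", bgr_to_gray(0, 0, 255)); // prints 127, matching the table
    return 0;
}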
As for the other question, I'm too unfamiliar with the framework to answer that, so I'll leave that open for others to answer.

Related

How to get data in an array in a pattern?

I am trying to get the x and y values out of a PointCloud2 message. This data is stored in an array of bytes representing float32's. My question is, how can I get only the x and y data?
I will add a snippet of what the PointCloud2 data looks like, but for now I'd like to just be able to get the x data. So for every 16 bytes, I only want the first 4 (the first 4 because that is how the fields are ordered, and because each float32 is decomposed into 4 bytes).
My original thought was just to use a loop, but I am worried this may be too slow, since my array has more than 2000 values and I am getting 15 of these arrays per second. Is there any other way of doing this?
seq: 296
stamp:
  secs: 1553456947
  nsecs: 421859979
frame_id: "cloud"
height: 1
width: 811
fields:
  - name: "x"
    offset: 0
    datatype: 7
    count: 1
  - name: "y"
    offset: 4
    datatype: 7
    count: 1
  - name: "z"
    offset: 8
    datatype: 7
    count: 1
  - name: "intensity"
    offset: 12
    datatype: 7
    count: 1
is_bigendian: False
point_step: 16
row_step: 12976
data: [108, 171, 8, 191, 107, 171, 8, 191, 0, 0, 0, 0, 0, 0, 113, 70, 167,
197, 8, 191, 103, 95, 10, 191, 0, 0, 0, 0, 0, 240, 115, 70, 99, 101, 9, 191,
127, 161, 12, 191, 0, 0, 0, 0, 0, 200, 106, 70, 99, 50, 5, 191, 202, 237, 9,
191, 0, 0, 0, 0, 0, 76, 82, 70, 200, 22, 235, 190, 200, 74, 246, 190, 0, 0,
0, 0, 0, 132, 52, 70, 186, 111, 255, 190, 99, 95, 7, 191, 0, 0, 0, 0, 0, 24,
60, 70, 227, 1, 8, 191, 89, 217, 17, 191, 0, 0, 0, 0, 0, 64, 93, 70, 216,
183, 8, 191, 245, 84, 20, 191, 0, 0, 0, 0, 0, 236, 112, 70, 195, 94, 8, 191,
64, 177, 21, 191, 0, 0, 0, 0 ...
I would also like to add that I am a beginner at this sort of thing. I spoke with a friend and he suggested using threads with mutexes for this, but that seems way over my head.
Thanks!
I suspect that you'll be surprised how fast modern hardware can loop through 30000 points.
You should start off with your loop and some basic considerations. For example, if you store your X/Y coordinates in a vector, reserve the required space in the vector before you enter your loop to avoid extra memory allocations. Once you have your loop working, evaluate the performance, and maybe post another question if you think that your code is too slow.
A helpful quote by Donald E. Knuth that may apply:
Premature optimization is the root of all evil.
Edit: to provide some context: We had an application running on a low-end i5 processor collecting 76,800 3D points at 15 fps via the Kinect SDK or via OpenNI (different sensor). We'd loop through the points and apply a basic transformation before storing each point in a pcl::PointCloud. We'd then loop through the point cloud points again for some analysis. The analysis portion far outweighed the cost of any basic loop. You will probably end up worrying about optimizing whatever evaluation logic you apply to your data rather than basic things such as looping over the points.
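For illustration, here is a minimal C++ sketch of the kind of loop this suggests, assuming the message layout shown in the question (point_step of 16, x at offset 0, y at offset 4, little-endian floats on a little-endian host); the names are hypothetical:

#include <cstdint>
#include <cstring>
#include <vector>

struct PointXY { float x; float y; };

// Extract x/y from a packed PointCloud2-style byte buffer.
// Assumes point_step == 16, x at offset 0, y at offset 4, and a
// little-endian host matching is_bigendian: False in the message.
std::vector<PointXY> extractXY(const std::vector<std::uint8_t>& data,
                               std::size_t pointStep = 16)
{
    std::vector<PointXY> points;
    points.reserve(data.size() / pointStep); // avoid reallocations in the loop
    for (std::size_t i = 0; i + pointStep <= data.size(); i += pointStep) {
        PointXY p;
        std::memcpy(&p.x, &data[i], sizeof(float));     // bytes 0-3: x
        std::memcpy(&p.y, &data[i + 4], sizeof(float)); // bytes 4-7: y
        points.push_back(p);
    }
    return points;
}

Even at 811 points times 15 messages per second, this is on the order of tens of thousands of small copies per second, which is trivial for modern hardware.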

OpenCV HSV Parsing Issue (C++)

First of all, I realize there are existing questions about converting an RGB image to an HSV image out there; I used one of those questions to help me write my code. However, I am getting values for HSV that don't make sense to me.
What I know about HSV I have gotten from this website. From this colorpicker, I have inferred that H is a number ranging from 0-360 degrees, S is a number ranging from 0-100%, and V is a number ranging from 0-100%. Therefore, I had assumed that my code (as follows) would return an H value between 0 and 360, and S/V values between 0 and 100. However, this is not the case.
I plugged my program's output into the above color picker, which clamped all S/V values down to 100 when they exceeded 100. As you can see, the output is close to what it should be, but it is not accurate. I feel like this is because I am interpreting the HSV values incorrectly.
For context, I am going to establish a range for each color on the cube, and from there look at the other faces and fill out the current setup of the cube in another program I have.
My code:
void get_color(Mat img, int x_offset, int y_offset)
{
    Rect outline(x_offset - 2, y_offset - 2, 5, 5);
    rectangle(img, outline, Scalar(255, 0, 0), 2);
    Rect sample(x_offset, y_offset, 1, 1);
    Mat rgb_image = img(sample);
    Mat hsv_image;
    cvtColor(rgb_image, hsv_image, CV_BGR2HSV);
    Vec3b hsv = hsv_image.at<Vec3b>(0, 0);
    int hue = hsv.val[0];
    int saturation = hsv.val[1];
    int value = hsv.val[2];
    printf("H: %d, S: %d, V: %d \n", hue, saturation, value);
}
Output of the program:
H: 21, S: 120, V: 191 // top left cubie
H: 1, S: 180, V: 159 // top center cubie
H: 150, S: 2, V: 142 // top right cubie
H: 86, S: 11, V: 159 // middle left cubie
H: 75, S: 12, V: 133 // middle center cubie
H: 5, S: 182, V: 233 // middle right cubie
H: 68, S: 7, V: 156 // bottom left cubie
H: 25, S: 102, V: 137 // bottom center cubie
H: 107, S: 155, V: 69 // bottom right cubie
Starting image (pixel being extracted # center of each blue square):
Resulting colors (as the above color picker gave):
As you can see, the red and white are fairly accurate, but the orange and yellow are not correct, and the blue is blatantly wrong; it is impossible for the pixel I looked at to actually be that color. What am I doing wrong? Any help would be greatly appreciated.
OpenCV has a funny way of representing its colors.
Hue - represented as a number from 0-179 instead of 0-360. Multiply the H value by two before plugging it into a traditional color picker.
Saturation/Value - represented as a number from 0-255. Divide the given value by 255 and multiply by 100 to get a percentage.
Everything works much more sensibly now. See this website for more details on OpenCV and HSV.
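For reference, a minimal sketch of that rescaling (a hypothetical helper, assuming the 8-bit HSV values that cvtColor produces above):

#include <opencv2/opencv.hpp>
#include <cstdio>

// Rescale OpenCV's 8-bit HSV (H: 0-179, S/V: 0-255) to the
// conventional ranges (H: 0-360 degrees, S/V: 0-100 percent).
void print_conventional_hsv(const cv::Vec3b& hsv)
{
    int h = hsv[0] * 2;          // 0-179 -> 0-358 degrees
    int s = hsv[1] * 100 / 255;  // 0-255 -> 0-100 %
    int v = hsv[2] * 100 / 255;  // 0-255 -> 0-100 %
    printf("H: %d deg, S: %d%%, V: %d%%\n", h, s, v);
}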

Advanced rgb2hsv conversion Matlab to opencv/C++ access to pixel value

I am building a program in Objective-C/C++ and OpenCV. I am pretty skilled in Objective-C but new to C++.
I am building a custom RGB2HSV algorithm. My algorithm is slightly different from the OpenCV library's cvtColor(in, out, CV_RGB2HSV).
The one I am trying to translate from Matlab to OpenCV/C++ produces such a clean HSV image that no additional filtering is needed before further processing. Code below; the Matlab code is self-explanatory.
I tried to translate it to a C++/OpenCV function but hit a wall trying to access pixel values of the image. I am new to C++.
I read a lot about the ways to access a Mat structure, but usually I obtain either a bunch of characters in place of numbers, or something like "\202 k g" (an 8-bit value printed as a character). When I try to do any multiplication on, say, \202, the result has nothing to do with the math.
Please help me to properly access the pixel values. Also, in the current version, using uchar won't work because some intermediate values fall outside the 0-255 range.
The algorithm is not mine. I cannot even point to the source, but it gives clearly better results than the stock RGB2HSV.
Also, the algorithm below is for one pixel. It needs to be applied to each pixel in the image, so in the final version it needs to be wrapped in for { for { } } loops.
I also wish to share this method with the community so everyone can benefit from it, saving on pre-filtering.
Please help me translate it to C++/OpenCV, if possible with the best speed-wise practices, or at least show how to cleanly access the pixel value so it works with a range of mathematical operations. Thanks in advance.
function [H, S, V] = rgb2hsvPixel(R,G,B)
% Algorithm:
% In case of 8-bit and 16-bit images, `R`, `G`, and `B` are converted to the
% floating-point format and scaled to fit the 0 to 1 range.
%
% V = max(R,G,B)
% S = / (V - min(R,G,B)) / V   if V != 0
%     \ 0                      otherwise
%     /       60*(G-B) / (V - min(R,G,B))   if V=R
% H = | 120 + 60*(B-R) / (V - min(R,G,B))   if V=G
%     \ 240 + 60*(R-G) / (V - min(R,G,B))   if V=B
%
% If `H<0` then `H=H+360`. On output `0<=V<=1`, `0<=S<=1`, `0<=H<=360`.

red   = (double(R)-16)*255/224;            % \
green = (double(G)-16)*255/224;            %  }- R,G,B (0 <-> 255) -> (-18.2143 <-> 272.0759)
blue  = (min(double(B)*2,240)-16)*255/224; % /

minV  = min(red,min(green,blue));
value = max(red,max(green,blue));
delta = value - minV;

if(value~=0)
    sat = (delta*255) / value; % s
    if (delta ~= 0)
        if( red == value )
            hue = 60*( green - blue ) / delta;      % between yellow & magenta
        elseif( green == value )
            hue = 120 + 60*( blue - red ) / delta;  % between cyan & yellow
        else
            hue = 240 + 60*( red - green ) / delta; % between magenta & cyan
        end
        if( hue < 0 )
            hue = hue + 360;
        end
    else
        hue = 0;
        sat = 0;
    end
else
    % r = g = b = 0
    sat = 0;
    hue = 0;
end

H = max(min(floor(((hue*255)/360)),255),0);
S = max(min(floor(sat),255),0);
V = max(min(floor(value),255),0);
end
To access the value of a pixel in a 3-channel, 8-bit precision image (type CV_8UC3) you have to do it like this:
cv::Mat image;
cv::Vec3b BGR = image.at<cv::Vec3b>(i,j);
If, as you say, 8-bit precision and range are not enough, you can declare a cv::Mat of type CV_32F to store floating point 32-bit numbers.
cv::Mat image(height, width, CV_32FC3);
// fill your image with data
for(int i = 0; i < image.rows; i++) {
    for(int j = 0; j < image.cols; j++) {
        cv::Vec3f BGR = image.at<cv::Vec3f>(i,j);
        // process your pixel
        cv::Vec3f HSV; // your calculated HSV values
        image.at<cv::Vec3f>(i,j) = HSV;
    }
}
Be aware that OpenCV stores pixel values in BGR order, not RGB. Take a look at the OpenCV docs to learn more about it.
If you are concerned about performance and fairly comfortable with pixel indexing, you can use the Mat ptr access directly.
For example:
cv::Mat img = cv::Mat::zeros(4, 8, CV_8UC3);
uchar *ptr_row_img;
int cpt = 0;
for(int i = 0; i < img.rows; i++) {
    ptr_row_img = img.ptr<uchar>(i);
    for(int j = 0; j < img.cols; j++) {
        for(int c = 0; c < img.channels(); c++, cpt++, ++ptr_row_img) {
            *ptr_row_img = cpt;
        }
    }
}
std::cout << "img=\n" << img << std::endl;
std::cout << "img=\n" << img << std::endl;
The previous code should print:
img=
[  0,   1,   2,   3,   4,   5,   6,   7,   8,   9,  10,  11,  12,  13,  14,  15,  16,  17,  18,  19,  20,  21,  22,  23;
  24,  25,  26,  27,  28,  29,  30,  31,  32,  33,  34,  35,  36,  37,  38,  39,  40,  41,  42,  43,  44,  45,  46,  47;
  48,  49,  50,  51,  52,  53,  54,  55,  56,  57,  58,  59,  60,  61,  62,  63,  64,  65,  66,  67,  68,  69,  70,  71;
  72,  73,  74,  75,  76,  77,  78,  79,  80,  81,  82,  83,  84,  85,  86,  87,  88,  89,  90,  91,  92,  93,  94,  95]
The at access should be enough for most cases and is much more readable and less error-prone than the ptr access.
References:
How to scan images, lookup tables and time measurement with OpenCV
C++: OpenCV: fast pixel iteration
Thanks everybody for the help.
Thanks to your hints I constructed the custom rgb2hsv function in C++/OpenCV.
From the top left, respectively: edges after bgr->gray->edges, bgr->HSV->edges, bgr->customHSV->edges.
Below each of them are the corresponding filter settings needed to achieve approximately the same clean results. The bigger the radius of a filter, the more complex and time-consuming the computation.
It produces clearer edges in the next steps of image processing.
It can be tweaked further by experimenting with the parameters in the r, g, b channels:
red = (red-16)*1.1384; //255/224=1.1384
Here 16: the bigger the number, the cleaner V becomes.
255/224: this also affects the outcome, extending it beyond the 0-255 range, later to be clipped.
These numbers seem to be a sweet spot, but anyone can adjust them for specific needs.
With this function, translating BGR to RGB can be avoided by directly connecting the colors to the proper channels of the raw image.
It is probably a little clumsy performance-wise. In my case it serves as the first step of color balance and histogram adjustment, so speed is not that critical.
To use it on a constant video stream it needs speed optimization, I think by using pointers and reducing loop complexity. Optimization is not exactly my cup of tea, so if someone helped optimize it for the community, that would be great.
Here it is ready to use:
Mat bgr2hsvCustom ( Mat& image )
{
    //smallParam = 16;
    for(int x = 0; x < image.rows; x++)
    {
        for(int y = 0; y < image.cols; y++)
        {
            //assigning vector to individual float BGR values
            float blue  = image.at<cv::Vec3b>(x,y)[0];
            float green = image.at<cv::Vec3b>(x,y)[1];
            float red   = image.at<cv::Vec3b>(x,y)[2];

            float sat, hue, minValue, maxValue, delta;
            float const ang0   = 0; // min and max don't accept a mix of variable and literal
            float const ang240 = 240;
            float const ang255 = 255;

            red   = (red-16)*1.1384; //255/224
            green = (green-16)*1.1384;
            blue  = (min(blue*2,ang240)-16)*1.1384;

            minValue = min(red,min(green,blue));
            maxValue = max(red,max(green,blue));
            delta = maxValue - minValue;

            if (maxValue != 0)
            {
                sat = (delta*255) / maxValue;
                if ( delta != 0)
                {
                    if (red == maxValue){
                        hue = 60*(green - blue)/delta;
                    }
                    else if( green == maxValue ) {
                        hue = 120 + 60*( blue - red )/delta;
                    }
                    else{
                        hue = 240 + 60*( red - green )/delta;
                    }
                    if( hue < 0 ){
                        hue = hue + 360;
                    }
                }
                else{
                    sat = 0;
                    hue = 0;
                }
            }
            else{
                hue = 0;
                sat = 0;
            }

            image.at<cv::Vec3b>(x,y)[0] = max(min(floor(maxValue),ang255),ang0);      //V
            image.at<cv::Vec3b>(x,y)[1] = max(min(floor(sat),ang255),ang0);           //S
            image.at<cv::Vec3b>(x,y)[2] = max(min(floor((hue*255)/360),ang255),ang0); //H
        }
    }
    return image;
}
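Since the author asks for optimization help: as a hedged sketch (untested, my reading of the intended pattern rather than a tested drop-in replacement), the same per-pixel math can be written with one row-pointer lookup per row instead of six at<> calls per pixel, following the ptr-based answer above:

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>

// Sketch: same per-pixel math as bgr2hsvCustom, but fetching each
// row pointer once per row. Writes V, S, H back in place, matching
// the channel order the original function uses.
void bgr2hsvCustomPtr(cv::Mat& image)
{
    CV_Assert(image.type() == CV_8UC3);
    for (int x = 0; x < image.rows; x++)
    {
        cv::Vec3b* row = image.ptr<cv::Vec3b>(x); // one lookup per row
        for (int y = 0; y < image.cols; y++)
        {
            float blue  = (std::min(row[y][0] * 2.0f, 240.0f) - 16) * 1.1384f;
            float green = (row[y][1] - 16) * 1.1384f;
            float red   = (row[y][2] - 16) * 1.1384f;

            float maxValue = std::max(red, std::max(green, blue));
            float delta    = maxValue - std::min(red, std::min(green, blue));
            float sat = 0, hue = 0;
            if (maxValue != 0 && delta != 0)
            {
                sat = delta * 255 / maxValue;
                if      (red   == maxValue) hue = 60 * (green - blue) / delta;
                else if (green == maxValue) hue = 120 + 60 * (blue - red) / delta;
                else                        hue = 240 + 60 * (red - green) / delta;
                if (hue < 0) hue += 360;
            }
            row[y][0] = (uchar)std::max(std::min(std::floor(maxValue), 255.0f), 0.0f);        // V
            row[y][1] = (uchar)std::max(std::min(std::floor(sat), 255.0f), 0.0f);             // S
            row[y][2] = (uchar)std::max(std::min(std::floor(hue * 255 / 360), 255.0f), 0.0f); // H
        }
    }
}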

What does cast do exactly?

I was playing with integer array hashing and the different ways to go from a representation to the other. I ended up with the following:
void main(string[] args) {
    import std.algorithm, std.array, std.conv, std.stdio, std.digest.md;

    union hashU {
        ubyte[] hashA;
        int[] hashN;
    };

    hashU a;
    auto md5 = new MD5Digest();

    a.hashN = [1, 2, 3, 4, 5];

    /* Using a union, no actual data conversion */
    md5.put( a.hashA );
    auto hash = md5.finish();
    writeln(hash);
    // [253, 255, 63, 4, 193, 99, 182, 232, 28, 231, 57, 107, 18, 254, 75, 175]

    /* Using a cast... Doesn't match any of the other representations */
    md5.put( cast(ubyte[])(a.hashN) );
    hash = md5.finish();
    writeln(hash);
    // [254, 5, 74, 210, 231, 185, 139, 238, 103, 63, 159, 242, 45, 80, 240, 12]

    /* Using .to! to convert from array to array */
    md5.put( a.hashN.to!(ubyte[]) );
    hash = md5.finish();
    writeln(hash);
    // [124, 253, 208, 120, 137, 179, 41, 93, 106, 85, 9, 20, 171, 53, 224, 104]

    /* This matches the previous transformation */
    md5.put( a.hashN.map!(x => x.to!ubyte).array );
    hash = md5.finish();
    writeln(hash);
    // [124, 253, 208, 120, 137, 179, 41, 93, 106, 85, 9, 20, 171, 53, 224, 104]
}
My question is the following: what does the cast do? I'd have expected it to do either the same as .to! or the union trick, but it doesn't seem so.
I think Colin Grogan has it right, but his wording is a little confusing.
Using the union, the array is simply reinterpreted, no calculation/computation happens at all. The pointer and length of the int[] are reinterpreted to refer to ubyte elements. Before: 5 ints, after: 5 ubytes.
The cast is a little smarter than that: It adjusts the length of the array so that it refers to the same memory as before. Before: 5 ints, after: 20 ubytes (5*int.sizeof/ubyte.sizeof = 5*4/1 = 20).
Both the union and the cast reinterpret the bytes of the ints as ubytes. That is, an int value 1 will result in 4 ubytes: 0,0,0,1, or 1,0,0,0 depending on endianess.
The to variants convert every single element to the new element type. Before: 5 ints, after: 5 ubytes with the same values as the ints. If one of the ints couldn't be converted to a ubyte, to would throw an exception.
Printing the elements after the various conversions might help clarify what happens where:
void main()
{
    import std.algorithm, std.array, std.conv, std.stdio;

    union hashU
    {
        ubyte[] hashA;
        int[] hashN;
    }

    hashU a;
    a.hashN = [1, 2, 3, 4, 5];

    writeln( a.hashA ); /* union -> [1, 0, 0, 0, 2] (depends on endianess) */
    writeln( cast(ubyte[])(a.hashN) );
        /* cast -> [1, 0, 0, 0, 2, 0, 0, 0, 3, 0, 0, 0, 4, 0, 0, 0, 5, 0, 0, 0]
           (depends on endianess) */
    writeln( a.hashN.to!(ubyte[]) ); /* `to` -> [1, 2, 3, 4, 5] */
}
As far as I know, cast simply tells the compiler to treat the chunk of memory you've cast as the type you've cast it to.
The last two options in your example actually convert the numbers to the new type, and so do some actual work, which is why they end up producing the same values.
The issue in your first two examples is that the int[] occupies more memory than the ubyte[] (4 bytes per element vs 1 byte per element).
I've edited your first two methods:
/* Using a union, no actual data conversion */
md5.put( a.hashA );
auto hash = md5.finish();
writefln("Hash of: %s -> %s", a.hashA, hash);
// Hash of: [1, 0, 0, 0, 2] -> [253, 255, 63, 4, 193, 99, 182, 232, 28, 231, 57, 107, 18, 254, 75, 175]
// notice 5 bytes in first array

/* Using a cast... Doesn't match any of the other representations */
md5.put( cast(ubyte[])(a.hashN) );
hash = md5.finish();
writefln("Hash of: %s -> %s", cast(ubyte[])(a.hashN), hash);
// Hash of: [1, 0, 0, 0, 2, 0, 0, 0, 3, 0, 0, 0, 4, 0, 0, 0, 5, 0, 0, 0] -> [254, 5, 74, 210, 231, 185, 139, 238, 103, 63, 159, 242, 45, 80, 240, 12]
// notice 20 bytes (4 x 5 bytes) in first array
So in the first, you're reading only the 5 bytes that the ubyte[] slice's length covers. In the second, the cast recomputes the length, so you read all 20 bytes of the int[]'s data as ubytes.
Edit: probably wasn't clear. A union is pretty dumb: it simply stores all the values in the same memory. When you read any of its members, it will only read as many bytes of that memory as that member's type and length dictate.
So because you're reading the int[] THEN casting it, it reads all 20 of the bytes and casts them to a ubyte[], which is of course different from just reading the 5 bytes of the ubyte[] member.
I think I made sense there :)
cast is a D operator used when a developer needs to perform an explicit type conversion.

Which color gradient is used to color mandelbrot in wikipedia?

At Wikipedia's Mandelbrot set page there are really beautiful generated images of the Mandelbrot set.
I also just implemented my own Mandelbrot algorithm. Given that n is the number of iterations used to calculate each pixel, I color them quite simply from black to green to white, like this (with C++ and Qt 5.0):
QColor mapping(Qt::white);
if (n <= MAX_ITERATIONS) {
    double quotient = (double) n / (double) MAX_ITERATIONS;
    double color = _clamp(0.f, 1.f, quotient);
    if (quotient > 0.5) {
        // Close to the mandelbrot set the color changes from green to white
        mapping.setRgbF(color, 1.f, color);
    }
    else {
        // Far away it changes from black to green
        mapping.setRgbF(0.f, color, 0.f);
    }
}
return mapping;
My result looks like this:
I like it pretty much already, but which color gradient is used for the images on Wikipedia? And how do I calculate that gradient from a given number of iterations n?
(This question is not about smoothing.)
The gradient is probably from Ultra Fractal. It is defined by 5 control points:
Position = 0.0 Color = ( 0, 7, 100)
Position = 0.16 Color = ( 32, 107, 203)
Position = 0.42 Color = (237, 255, 255)
Position = 0.6425 Color = (255, 170, 0)
Position = 0.8575 Color = ( 0, 2, 0)
where Position is in range [0, 1) and Color is RGB in range [0, 255].
The catch is that the colors are not linearly interpolated. The interpolation is likely cubic (or something similar). The following image shows the difference between linear and monotone cubic interpolation:
As you can see, the cubic interpolation results in a smoother and "prettier" gradient. I used monotone cubic interpolation to avoid the "overshooting" of the [0, 255] color range that plain cubic interpolation can cause. Monotone cubic ensures that interpolated values are always within the range of the input points.
I use following code to compute the color based on iteration i:
double smoothed = Math.Log2(Math.Log2(re * re + im * im) / 2); // log_2(log_2(|p|))
int colorI = (int)(Math.Sqrt(i + 10 - smoothed) * gradient.Scale) % colors.Length;
Color color = colors[colorI];
where i is the diverged iteration number, re and im are the diverged coordinates, gradient.Scale is 256, and colors is an array of the pre-computed gradient colors shown above; its length is 2048 in this case.
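In a C++/Qt setting like the question's, the same index computation might look like the following sketch (my translation; paletteSize and scale are parameters standing in for colors.Length and gradient.Scale):

#include <cmath>
#include <cstddef>

// Sketch: map a diverged point (re, im, after i iterations) to an
// index into a pre-computed gradient palette, following the formula
// quoted above. paletteSize would be 2048 and scale 256 here.
std::size_t gradientIndex(int i, double re, double im,
                          double scale, std::size_t paletteSize)
{
    double smoothed = std::log2(std::log2(re * re + im * im) / 2.0); // log_2(log_2(|p|))
    double idx = std::sqrt(i + 10.0 - smoothed) * scale;
    return static_cast<std::size_t>(idx) % paletteSize;
}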
Well, I did some reverse engineering on the colours used on Wikipedia using the Photoshop eyedropper. There are 16 colours in this gradient:
R G B
66 30 15 # brown 3
25 7 26 # dark violet
9 1 47 # darkest blue
4 4 73 # blue 5
0 7 100 # blue 4
12 44 138 # blue 3
24 82 177 # blue 2
57 125 209 # blue 1
134 181 229 # blue 0
211 236 248 # lightest blue
241 233 191 # lightest yellow
248 201 95 # light yellow
255 170 0 # dirty yellow
204 128 0 # brown 0
153 87 0 # brown 1
106 52 3 # brown 2
Simply using a modulo and a QColor array allows me to iterate through all colours in the gradient:
if (n < MAX_ITERATIONS && n > 0) {
    int i = n % 16;
    QColor mapping[16];
    mapping[0].setRgb(66, 30, 15);
    mapping[1].setRgb(25, 7, 26);
    mapping[2].setRgb(9, 1, 47);
    mapping[3].setRgb(4, 4, 73);
    mapping[4].setRgb(0, 7, 100);
    mapping[5].setRgb(12, 44, 138);
    mapping[6].setRgb(24, 82, 177);
    mapping[7].setRgb(57, 125, 209);
    mapping[8].setRgb(134, 181, 229);
    mapping[9].setRgb(211, 236, 248);
    mapping[10].setRgb(241, 233, 191);
    mapping[11].setRgb(248, 201, 95);
    mapping[12].setRgb(255, 170, 0);
    mapping[13].setRgb(204, 128, 0);
    mapping[14].setRgb(153, 87, 0);
    mapping[15].setRgb(106, 52, 3);
    return mapping[i];
}
else return Qt::black;
The result looks pretty much like what I was looking for:
:)
I believe they're the default colours in Ultra Fractal. The evaluation version comes with source for a lot of the parameters, and I think that includes that colour map (if you can't infer it from the screenshot on the front page) and possibly also the logic behind dynamically scaling that colour map appropriately for each scene.
This is an extension of NightElfik's great answer.
The Python library SciPy has monotone cubic interpolation methods in version 1.5.2 with pchip_interpolate. I included the code I used to create my gradient below. I decided to include helper values less than 0 and larger than 1 so that the interpolation wraps from the end to the beginning (no sharp corners).
import numpy as np
from scipy.interpolate import pchip_interpolate
#import matplotlib.pyplot as plt  # only needed for the optional plotting below

#set up the control points for your gradient
yR_observed = [0, 0,32,237, 255, 0, 0, 32]
yG_observed = [2, 7, 107, 255, 170, 2, 7, 107]
yB_observed = [0, 100, 203, 255, 0, 0, 100, 203]
x_observed = [-.1425, 0, .16, .42, .6425, .8575, 1, 1.16]
#Create the arrays with the interpolated values
x = np.linspace(min(x_observed), max(x_observed), num=1000)
yR = pchip_interpolate(x_observed, yR_observed, x)
yG = pchip_interpolate(x_observed, yG_observed, x)
yB = pchip_interpolate(x_observed, yB_observed, x)
#Convert them back to python lists
x = list(x)
yR = list(yR)
yG = list(yG)
yB = list(yB)
#Find the indices where x crosses 0 and crosses 1 for slicing
start = 0
end = 0
for i in x:
    if i > 0:
        start = x.index(i)
        break
for i in x:
    if i > 1:
        end = x.index(i)
        break
#Slice away the helper data at the beginning and end, leaving just 0 to 1
x = x[start:end]
yR = yR[start:end]
yG = yG[start:end]
yB = yB[start:end]
#Plot the values if you want
#plt.plot(x, yR, color = "red")
#plt.plot(x, yG, color = "green")
#plt.plot(x, yB, color = "blue")
#plt.show()