So I've got some code that's intended to generate a Linear Gradient between two input colors:
struct color {
float r, g, b, a;
};
color produce_gradient(const color & c1, const color & c2, float ratio) {
color output_color;
output_color.r = c1.r + (c2.r - c1.r) * ratio;
output_color.g = c1.g + (c2.g - c1.g) * ratio;
output_color.b = c1.b + (c2.b - c1.b) * ratio;
output_color.a = c1.a + (c2.a - c1.a) * ratio;
return output_color;
}
I've also written semantically identical code in my shaders.
The problem is that using this kind of code produces "dark bands" in the middle where the colors meet, due to the quirks of how brightness translates between a computer screen and the raw data used to represent those pixels.
So the questions I have are:
Do I need to correct for gamma in the host function, the device function, both, or neither?
What's the best way to correct the function to properly handle gamma? Does the code I'm providing below convert the colors in a way that is appropriate?
Code:
color produce_gradient(const color & c1, const color & c2, float ratio) {
color output_color;
output_color.r = pow(pow(c1.r,2.2) + (pow(c2.r,2.2) - pow(c1.r,2.2)) * ratio, 1/2.2);
output_color.g = pow(pow(c1.g,2.2) + (pow(c2.g,2.2) - pow(c1.g,2.2)) * ratio, 1/2.2);
output_color.b = pow(pow(c1.b,2.2) + (pow(c2.b,2.2) - pow(c1.b,2.2)) * ratio, 1/2.2);
output_color.a = pow(pow(c1.a,2.2) + (pow(c2.a,2.2) - pow(c1.a,2.2)) * ratio, 1/2.2);
return output_color;
}
EDIT: For reference, here's a post that is related to this issue, for the purposes of explaining what the "bug" looks like in practice: https://graphicdesign.stackexchange.com/questions/64890/in-gimp-how-do-i-get-the-smudge-blur-tools-to-work-properly
I think there is a flaw in your code.
First, I would make sure that 0 <= ratio <= 1.
Second, I would use the formula c1.x * (1 - ratio) + c2.x * ratio.
The way you have set up your calculations at the moment allows for negative results, which would explain the dark spots.
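As a minimal sketch of both suggestions (the std::clamp guard is my addition, not part of the original code):
#include <algorithm> // std::clamp (C++17)

color produce_gradient(const color & c1, const color & c2, float ratio) {
    ratio = std::clamp(ratio, 0.0f, 1.0f); // enforce 0 <= ratio <= 1
    color output_color;
    output_color.r = c1.r * (1 - ratio) + c2.r * ratio;
    output_color.g = c1.g * (1 - ratio) + c2.g * ratio;
    output_color.b = c1.b * (1 - ratio) + c2.b * ratio;
    output_color.a = c1.a * (1 - ratio) + c2.a * ratio;
    return output_color;
}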
There is no pat answer for when you have to worry about gamma.
You generally want to work in linear color space when mixing, blending, computing lighting, etc.
If your inputs are not in linear space (e.g., they are gamma corrected or are in some color space like sRGB), then you generally want to convert them to linear space first. You haven't told us whether your inputs are in linear RGB.
When you're done, you want to ensure your linear values are corrected for the color space of the output device, whether that's a simple gamma or other color space transform. Again, there's no pat answer here, because you have to know if that conversion is being done for you implicitly at a lower level in the stack or if it's your responsibility.
That said, a lot of code gets away with cheating. They'll take their inputs in sRGB and apply alpha blending or fades as though they're in linear RGB and then output the results as is (probably with clamping). Sometimes that's a reasonable trade off.
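For reference, here is a sketch of the standard piecewise sRGB transfer function and its inverse (the helper names are mine; components are assumed to be floats in [0, 1]):
#include <cmath>

// sRGB-encoded -> linear light (the standard piecewise sRGB transform)
float srgb_to_linear(float s) {
    return s <= 0.04045f ? s / 12.92f
                         : std::pow((s + 0.055f) / 1.055f, 2.4f);
}

// linear light -> sRGB-encoded (inverse transform)
float linear_to_srgb(float l) {
    return l <= 0.0031308f ? l * 12.92f
                           : 1.055f * std::pow(l, 1.0f / 2.4f) - 0.055f;
}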
Your problem lies entirely in the field of perceptual color implementation.
To take care of perceptual lightness aberrations you can use one of the many algorithms found online.
One such algorithm is Luma:
float luma(color c){
return 0.30 * c.r + 0.59 * c.g + 0.11 * c.b;
}
At this point I would like to point out that the standard method would be to apply all algorithms in the perceptual color space, then convert to RGB color space for display:
colorRGB --(convert)--> colorPerceptual --(input)--> f (colorPerceptual) --(output)--> colorPerceptual' --(convert)--> colorRGB
But if you want to adjust for lightness only (perceptual chromatic aberrations will not be fixed), you can do it efficiently in the following manner:
//define the color of unit lightness, based on the Luma coefficients
color unit_l = { 1/0.30f/3, 1/0.59f/3, 1/0.11f/3, 0 };
color produce_gradient(const color & c1, const color & c2, float ratio) {
color output_color;
output_color.r = c1.r + (c2.r - c1.r) * ratio;
output_color.g = c1.g + (c2.g - c1.g) * ratio;
output_color.b = c1.b + (c2.b - c1.b) * ratio;
output_color.a = c1.a + (c2.a - c1.a) * ratio;
float target_lightness = luma(c1) + (luma(c2) - luma(c1)) * ratio; //linearly interpolate perceptual lightness
float delta_lightness = target_lightness - luma(output_color); //calculate required lightness change magnitude
//adjust lightness
output_color.r += unit_l.r * delta_lightness;
output_color.g += unit_l.g * delta_lightness;
output_color.b += unit_l.b * delta_lightness;
//at this point luma(output_color) approximately equals target_lightness which takes care of the perceptual lightness aberrations
return output_color;
}
Your second code example is mostly correct, except that the alpha channel is generally not gamma corrected, so you shouldn't use pow on it. For efficiency's sake it would also be better to do the gamma conversion once per endpoint channel, instead of computing each pow twice.
The general rule is that you must do gamma in both directions whenever you're adding or subtracting values. If you're only multiplying or dividing, it makes no difference: pow(pow(x, 2.2) * pow(y, 2.2), 1/2.2) is mathematically equivalent to x * y.
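Putting that together, here is a sketch of the corrected gradient (my restatement of the asker's second example, with one decode per endpoint and a linearly interpolated alpha):
#include <cmath>

color produce_gradient(const color & c1, const color & c2, float ratio) {
    auto lerp_gamma = [ratio](float a, float b) {
        float la = std::pow(a, 2.2f); // decode each endpoint once
        float lb = std::pow(b, 2.2f);
        return std::pow(la + (lb - la) * ratio, 1.0f / 2.2f); // re-encode
    };
    color output_color;
    output_color.r = lerp_gamma(c1.r, c2.r);
    output_color.g = lerp_gamma(c1.g, c2.g);
    output_color.b = lerp_gamma(c1.b, c2.b);
    output_color.a = c1.a + (c2.a - c1.a) * ratio; // alpha stays linear
    return output_color;
}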
Sometimes you might find that you get better results by working in uncorrected space. For example if you're resizing an image, you should do gamma correction if you're downsizing but not if you're upsizing. I forget where I read this, but I verified it myself - the artifacts from upsizing were much less objectionable if you used gamma corrected pixel values vs. linear ones.
Given a system (a website for instance) that lets a user customize the background color for some section but not the font color (to keep number of options to a minimum), is there a way to programmatically determine if a "light" or "dark" font color is necessary?
I'm sure there is some algorithm, but I don't know enough about colors, luminosity, etc to figure it out on my own.
I encountered a similar problem. I had to find a good method of selecting a contrasting font color to display text labels on colorscales/heatmaps. It had to be a universal method and the generated color had to be "good looking", which means that simply generating the complementary color was not a good solution - sometimes it generated strange, very intense colors that were hard to look at and read.
After long hours of testing and trying to solve this problem, I found out that the best solution is to select white font for "dark" colors, and black font for "bright" colors.
Here's an example of function I am using in C#:
Color ContrastColor(Color color)
{
int d = 0;
// Counting the perceptive luminance - human eye favors green color...
double luminance = (0.299 * color.R + 0.587 * color.G + 0.114 * color.B)/255;
if (luminance > 0.5)
d = 0; // bright colors - black font
else
d = 255; // dark colors - white font
return Color.FromArgb(d, d, d);
}
This was tested for many various colorscales (rainbow, grayscale, heat, ice, and many others) and is the only "universal" method I found.
Edit
Changed the formula to the "perceptive luminance" calculation - it really looks better! Already implemented it in my software, looks great.
Edit 2
@WebSeed provided a great working example of this algorithm: http://codepen.io/WebSeed/full/pvgqEq/
Based on Gacek's answer but directly returning color constants (additional modifications see below):
public Color ContrastColor(Color iColor)
{
// Calculate the perceptive luminance (aka luma) - human eye favors green color...
double luma = ((0.299 * iColor.R) + (0.587 * iColor.G) + (0.114 * iColor.B)) / 255;
// Return black for bright colors, white for dark colors
return luma > 0.5 ? Color.Black : Color.White;
}
Note: I removed the inversion of the luma value to make bright colors have a higher value, which seems more natural to me and is also the 'default' calculation method.
(Edit: This has since been adopted in the original answer, too)
I used the same constants as Gacek from here since they worked great for me.
You can also implement this as an Extension Method using the following signature:
public static Color ContrastColor(this Color iColor)
You can then easily call it via
foregroundColor = backgroundColor.ContrastColor().
Thank you @Gacek. Here's a version for Android:
@ColorInt
public static int getContrastColor(@ColorInt int color) {
// Counting the perceptive luminance - human eye favors green color...
double a = 1 - (0.299 * Color.red(color) + 0.587 * Color.green(color) + 0.114 * Color.blue(color)) / 255;
int d;
if (a < 0.5) {
d = 0; // bright colors - black font
} else {
d = 255; // dark colors - white font
}
return Color.rgb(d, d, d);
}
And an improved (shorter) version:
@ColorInt
public static int getContrastColor(@ColorInt int color) {
// Counting the perceptive luminance - human eye favors green color...
double a = 1 - (0.299 * Color.red(color) + 0.587 * Color.green(color) + 0.114 * Color.blue(color)) / 255;
return a < 0.5 ? Color.BLACK : Color.WHITE;
}
My Swift implementation of Gacek's answer:
func contrastColor(color: UIColor) -> UIColor {
var d = CGFloat(0)
var r = CGFloat(0)
var g = CGFloat(0)
var b = CGFloat(0)
var a = CGFloat(0)
color.getRed(&r, green: &g, blue: &b, alpha: &a)
// Counting the perceptive luminance - human eye favors green color...
let luminance = 1 - ((0.299 * r) + (0.587 * g) + (0.114 * b))
if luminance < 0.5 {
d = CGFloat(0) // bright colors - black font
} else {
d = CGFloat(1) // dark colors - white font
}
return UIColor( red: d, green: d, blue: d, alpha: a)
}
Javascript [ES2015]
const hexToLuma = (colour) => {
const hex = colour.replace(/#/, '');
const r = parseInt(hex.substr(0, 2), 16);
const g = parseInt(hex.substr(2, 2), 16);
const b = parseInt(hex.substr(4, 2), 16);
return [
0.299 * r,
0.587 * g,
0.114 * b
].reduce((a, b) => a + b) / 255;
};
Ugly Python if you don't feel like writing it :)
def contrasting_text_color(hex_str):
    '''
    Input a string without hash sign of RGB hex digits to compute
    complementary contrasting color such as for fonts
    '''
    (r, g, b) = (hex_str[:2], hex_str[2:4], hex_str[4:])
    return '000' if 1 - (int(r, 16) * 0.299 + int(g, 16) * 0.587 + int(b, 16) * 0.114) / 255 < 0.5 else 'fff'
Thanks for this post.
For whoever might be interested, here's an example of that function in Delphi:
function GetContrastColor(ABGColor: TColor): TColor;
var
ADouble: Double;
R, G, B: Byte;
begin
if ABGColor <= 0 then
begin
Result := clWhite;
Exit; // *** EXIT RIGHT HERE ***
end;
if ABGColor = clWhite then
begin
Result := clBlack;
Exit; // *** EXIT RIGHT HERE ***
end;
// Get RGB from Color
R := GetRValue(ABGColor);
G := GetGValue(ABGColor);
B := GetBValue(ABGColor);
// Counting the perceptive luminance - human eye favors green color...
ADouble := 1 - (0.299 * R + 0.587 * G + 0.114 * B) / 255;
if (ADouble < 0.5) then
Result := clBlack // bright colors - black font
else
Result := clWhite; // dark colors - white font
end;
This is such a helpful answer. Thanks for it!
I'd like to share an SCSS version:
@function is-color-light( $color ) {
// Get the components of the specified color
$red: red( $color );
$green: green( $color );
$blue: blue( $color );
// Compute the perceptive luminance, keeping
// in mind that the human eye favors green.
$l: 1 - ( 0.299 * $red + 0.587 * $green + 0.114 * $blue ) / 255;
@return ( $l < 0.5 );
}
Now figuring out how to use the algorithm to auto-create hover colors for menu links. Light headers get a darker hover, and vice-versa.
Short Answer:
Calculate the luminance (Y) of the given color, and flip the text to either black or white based on a pre-determined middle contrast figure. For a typical sRGB display, flip to white when Y < 0.4 (i.e. 40%).
Longer Answer
Not surprisingly, nearly every answer here presents some misunderstanding, and/or is quoting incorrect coefficients. The only answer that is actually close is that of Seirios, though it relies on WCAG 2 contrast which is known to be incorrect itself.
If I say "not surprisingly", it is due in part to the massive amount of misinformation on the internet on this particular subject. The fact this field is still a subject of active research and unsettled science adds to the fun. I come to this conclusion as the result of the last few years of research into a new contrast prediction method for readability.
The field of visual perception is dense and abstract, as well as developing, so it is common for misunderstandings to exist. For instance, HSV and HSL are not even close to perceptually accurate. For that you need a perceptually uniform model such as CIELAB or CIELUV or CIECAM02 etc.
Some misunderstandings have even made their way into standards, such as the contrast part of WCAG 2 (1.4.3), which has been demonstrated as incorrect over much of its range.
First Fix:
The coefficients shown in many answers here are (.299, .587, .114) and are wrong, as they pertain to a long obsolete system known as NTSC YIQ, the analog broadcast system in North America some decades ago. While they may still be used in some YCC encoding specs for backwards compatibility, they should not be used in an sRGB context.
The coefficients for sRGB and Rec.709 (HDTV) are:
Red: 0.2126
Green: 0.7152
Blue: 0.0722
Other color spaces like Rec2020 or AdobeRGB use different coefficients, and it is important to use the correct coefficients for a given color space.
The coefficients cannot be applied directly to 8-bit sRGB-encoded image or color data. The encoded data must first be linearized, then the coefficients applied to find the luminance (light value) of the given pixel or color.
For sRGB there is a piecewise transform, but as we are only interested in the perceived lightness contrast to find the point to "flip" the text from black to white, we can take a shortcut via the simple gamma method.
Andy's Shortcut to Luminance & Lightness
Divide each sRGB color by 255.0, then raise to the power of 2.2, then multiply by the coefficients and sum them to find estimated luminance.
let Ys = Math.pow(sR/255.0,2.2) * 0.2126 +
Math.pow(sG/255.0,2.2) * 0.7152 +
Math.pow(sB/255.0,2.2) * 0.0722; // Andy's Easy Luminance for sRGB. For Rec709 HDTV change the 2.2 to 2.4
Here, Y is the relative luminance from an sRGB monitor, on a 0.0 to 1.0 scale. This is not relative to perception though, and we need further transforms to fit our human visual perception of the relative lightness, and also of the perceived contrast.
The 40% Flip
But before we get there, if you are only looking for a basic point to flip the text from black to white or vice versa, the cheat is to use the Y we just derived and make the flip point about Y = 0.40. So for colors higher than 0.4 Y, make the text black #000, and for colors darker than 0.4 Y, make the text white #fff.
let textColor = (Ys < 0.4) ? "#fff" : "#000"; // Low budget down and dirty text flipper.
Why 40% and not 50%? Our human perception of lightness/darkness and of contrast is not linear. For a self illuminated display, it so happens that 0.4 Y is about middle contrast under most typical conditions.
Yes it varies, and yes this is an over simplification. But if you are flipping text black or white, the simple answer is a useful one.
Perceptual Bonus Round
Predicting the perception of a given color and lightness is still a subject of active research, and not entirely settled science. The L* (Lstar) of CIELAB or LUV has been used to predict perceptual lightness, and even to predict perceived contrast. However, L* works well for surface colors in a very defined/controlled environment, and does not work as well for self illuminated displays.
While this varies depending on not only the display type and calibration, but also your environment and the overall page content, if you take the Y from above, and raise it by around ^0.685 to ^0.75, you'll find that 0.5 is typically the middle point to flip the text from white to black.
let textColor = (Math.pow(Ys,0.75) < 0.5) ? "#fff" : "#000"; // perceptually based text flipper.
Using the exponent 0.685 will make the text color swap on a darker color, and using 0.8 will make the text swap on a lighter color.
Spatial Frequency Double Bonus Round
It is useful to note that contrast is NOT just the distance between two colors. Spatial frequency, in other words font weight and size, are also CRITICAL factors that cannot be ignored.
That said, you may find that when colors are in the midrange, you'd want to increase the size and/or weight of the font.
let textSize = "16px";
let textWeight = "normal";
let Ls = Math.pow(Ys,0.7);
if (Ls > 0.33 && Ls < 0.66) {
textSize = "18px";
textWeight = "bold";
} // scale up fonts for the lower contrast mid luminances.
Hue R U
It's outside the scope of this post to delve deeply, but above we are ignoring hue and chroma. Hue and chroma do have an effect, such as the Helmholtz-Kohlrausch effect, and the simpler luminance calculations above do not always predict intensity due to saturated hues.
To predict these more subtle aspects of perception, a complete appearance model is needed. R. Hunt, M. Fairchild, E. Burns are a few authors worth looking into if you want to plummet down the rabbit hole of human visual perception...
For this narrow purpose, we could re-weight the coefficients slightly, knowing that green makes up the majority of luminance, and pure blue and pure red should always be the darker of two colors. What tends to happen using the standard coefficients is that middle colors with a lot of blue or red may flip to black at a lower than ideal luminance, and colors with a high green component may do the opposite.
That said, I find this is best addressed by increasing font size and weight in the middle colors.
Putting it all together
So we'll assume you'll send this function a hex string, and it will return a style string that can be sent to a particular HTML element.
Check out the CODEPEN, inspired by the one Seirios did:
CodePen: Fancy Font Flipping
One of the things the Codepen code does is increase the text size for the lower contrast midrange.
And if you want to play around with some of these concepts, see the SAPC development site at https://www.myndex.com/SAPC/ clicking on "research mode" provides interactive experiments to demonstrate these concepts.
Terms of enlightenment
Luminance: Y (relative) or L (absolute cd/m2) a spectrally weighted but otherwise linear measure of light. Not to be confused with "Luminosity".
Luminosity: light over time, useful in astronomy.
Lightness: L* (Lstar) perceptual lightness as defined by the CIE. Some models have a related lightness J*.
I had the same problem, but I had to develop it in PHP. I used @Gacek's solution, and I also used this answer:
Convert hex color to RGB values in PHP to convert the HEX color code to RGB.
So I'm sharing it.
I wanted to use this function with a given background HEX color that doesn't always start with '#'.
//So it can be used this way:
$color = calculateColor('#804040');
echo $color;
//or even this way:
$color = calculateColor('D79C44');
echo '<br/>'.$color;
function calculateColor($bgColor){
//ensure that the color code will not have # in the beginning
$bgColor = str_replace('#','',$bgColor);
//now just add it
$hex = '#'.$bgColor;
list($r, $g, $b) = sscanf($hex, "#%02x%02x%02x");
$color = 1 - ( 0.299 * $r + 0.587 * $g + 0.114 * $b)/255;
if ($color < 0.5)
$color = '#000000'; // bright colors - black font
else
$color = '#ffffff'; // dark colors - white font
return $color;
}
Flutter implementation
Color contrastColor(Color color) {
if (color == Colors.transparent || color.alpha < 50) {
return Colors.black;
}
double luminance = (0.299 * color.red + 0.587 * color.green + 0.114 * color.blue) / 255;
return luminance > 0.5 ? Colors.black : Colors.white;
}
Based on Gacek's answer, and after analyzing @WebSeed's example with the WAVE browser extension, I've come up with the following version that chooses black or white text based on contrast ratio (as defined in W3C's Web Content Accessibility Guidelines (WCAG) 2.1), instead of luminance.
This is the code (in javascript):
// As defined in WCAG 2.1
var relativeLuminance = function (R8bit, G8bit, B8bit) {
var RsRGB = R8bit / 255.0;
var GsRGB = G8bit / 255.0;
var BsRGB = B8bit / 255.0;
var R = (RsRGB <= 0.03928) ? RsRGB / 12.92 : Math.pow((RsRGB + 0.055) / 1.055, 2.4);
var G = (GsRGB <= 0.03928) ? GsRGB / 12.92 : Math.pow((GsRGB + 0.055) / 1.055, 2.4);
var B = (BsRGB <= 0.03928) ? BsRGB / 12.92 : Math.pow((BsRGB + 0.055) / 1.055, 2.4);
return 0.2126 * R + 0.7152 * G + 0.0722 * B;
};
var blackContrast = function(r, g, b) {
var L = relativeLuminance(r, g, b);
return (L + 0.05) / 0.05;
};
var whiteContrast = function(r, g, b) {
var L = relativeLuminance(r, g, b);
return 1.05 / (L + 0.05);
};
// If both options satisfy AAA criterion (at least 7:1 contrast), use preference
// else, use higher contrast (white breaks tie)
var chooseFGcolor = function(r, g, b, prefer = 'white') {
var Cb = blackContrast(r, g, b);
var Cw = whiteContrast(r, g, b);
if(Cb >= 7.0 && Cw >= 7.0) return prefer;
else return (Cb > Cw) ? 'black' : 'white';
};
A working example may be found in my fork of @WebSeed's codepen, which produces zero low contrast errors in WAVE.
As Kotlin / Android extension:
fun Int.getContrastColor(): Int {
// Counting the perceptive luminance - human eye favors green color...
val a = 1 - (0.299 * Color.red(this) + 0.587 * Color.green(this) + 0.114 * Color.blue(this)) / 255
return if (a < 0.5) Color.BLACK else Color.WHITE
}
An implementation for objective-c
+ (UIColor*) getContrastColor:(UIColor*) color {
CGFloat red, green, blue, alpha;
[color getRed:&red green:&green blue:&blue alpha:&alpha];
double a = ( 0.299 * red + 0.587 * green + 0.114 * blue);
// UIColor components are in 0-1, so white is (1, 1, 1), not (255, 255, 255)
return (a > 0.5) ? [[UIColor alloc] initWithRed:0 green:0 blue:0 alpha:1] : [[UIColor alloc] initWithRed:1 green:1 blue:1 alpha:1];
}
iOS Swift 3.0 (UIColor extension):
func isLight() -> Bool
{
    if let components = self.cgColor.components, components.count >= 3 {
        let firstComponent = (components[0] * 299)
        let secondComponent = (components[1] * 587)
        let thirdComponent = (components[2] * 114)
        let brightness = (firstComponent + secondComponent + thirdComponent) / 1000
        if brightness < 0.5
        {
            return false
        } else {
            return true
        }
    }
    print("Unable to grab components and determine brightness")
    return false // components unavailable; assume dark
}
Swift 4 Example:
extension UIColor {
var isLight: Bool {
let components = cgColor.components
let firstComponent = ((components?[0]) ?? 0) * 299
let secondComponent = ((components?[1]) ?? 0) * 587
let thirdComponent = ((components?[2]) ?? 0) * 114
let brightness = (firstComponent + secondComponent + thirdComponent) / 1000
return !(brightness < 0.6)
}
}
UPDATE - Found that 0.6 was a better threshold for this check.
Note there is an algorithm for this in the google closure library that references a w3c recommendation: http://www.w3.org/TR/AERT#color-contrast. However, in this API you provide a list of suggested colors as a starting point.
/**
* Find the "best" (highest-contrast) of the suggested colors for the prime
* color. Uses W3C formula for judging readability and visual accessibility:
* http://www.w3.org/TR/AERT#color-contrast
* @param {goog.color.Rgb} prime Color represented as a rgb array.
* @param {Array<goog.color.Rgb>} suggestions Array of colors,
* each representing a rgb array.
* @return {!goog.color.Rgb} Highest-contrast color represented by an array.
*/
goog.color.highContrast = function(prime, suggestions) {
var suggestionsWithDiff = [];
for (var i = 0; i < suggestions.length; i++) {
suggestionsWithDiff.push({
color: suggestions[i],
diff: goog.color.yiqBrightnessDiff_(suggestions[i], prime) +
goog.color.colorDiff_(suggestions[i], prime)
});
}
suggestionsWithDiff.sort(function(a, b) { return b.diff - a.diff; });
return suggestionsWithDiff[0].color;
};
/**
* Calculate brightness of a color according to YIQ formula (brightness is Y).
* More info on YIQ here: http://en.wikipedia.org/wiki/YIQ. Helper method for
* goog.color.highContrast()
* @param {goog.color.Rgb} rgb Color represented by a rgb array.
* @return {number} brightness (Y).
* @private
*/
goog.color.yiqBrightness_ = function(rgb) {
return Math.round((rgb[0] * 299 + rgb[1] * 587 + rgb[2] * 114) / 1000);
};
/**
* Calculate difference in brightness of two colors. Helper method for
* goog.color.highContrast()
* @param {goog.color.Rgb} rgb1 Color represented by a rgb array.
* @param {goog.color.Rgb} rgb2 Color represented by a rgb array.
* @return {number} Brightness difference.
* @private
*/
goog.color.yiqBrightnessDiff_ = function(rgb1, rgb2) {
return Math.abs(
goog.color.yiqBrightness_(rgb1) - goog.color.yiqBrightness_(rgb2));
};
/**
* Calculate color difference between two colors. Helper method for
* goog.color.highContrast()
* @param {goog.color.Rgb} rgb1 Color represented by a rgb array.
* @param {goog.color.Rgb} rgb2 Color represented by a rgb array.
* @return {number} Color difference.
* @private
*/
goog.color.colorDiff_ = function(rgb1, rgb2) {
return Math.abs(rgb1[0] - rgb2[0]) + Math.abs(rgb1[1] - rgb2[1]) +
Math.abs(rgb1[2] - rgb2[2]);
};
Base R version of @Gacek's answer to get the luminance (you can apply your own threshold easily):
# vectorized
luminance = function(col) c(c(.299, .587, .114) %*% col2rgb(col)/255)
Usage:
luminance(c('black', 'white', '#236FAB', 'darkred', '#01F11F'))
# [1] 0.0000000 1.0000000 0.3730039 0.1629843 0.5698039
If you're manipulating color spaces for visual effect it's generally easier to work in HSL (Hue, Saturation and Lightness) than RGB. Moving colours in RGB to give naturally pleasing effects tends to be quite conceptually difficult, whereas converting into HSL, manipulating there, then converting back out again is more intuitive in concept and invariably gives better looking results.
Wikipedia has a good introduction to HSL and the closely related HSV. And there's free code around the net to do the conversion (for example here is a javascript implementation)
What precise transformation you use is a matter of taste, but personally I'd have thought reversing the Hue and Lightness components would reliably generate a good high-contrast colour as a first approximation, and you can easily go for more subtle effects.
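As a rough illustration of the Lightness part of that idea, here is a sketch using the standard RGB/HSL round-trip (the function names are mine; reversing the hue as well would be one extra step):
#include <algorithm>

struct rgbf { float r, g, b; }; // components in [0, 1]

// standard HSL helper: converts back from the hue/chroma intermediates
static float hue2rgb(float p, float q, float t) {
    if (t < 0) t += 1;
    if (t > 1) t -= 1;
    if (t < 1.0f/6) return p + (q - p) * 6 * t;
    if (t < 1.0f/2) return q;
    if (t < 2.0f/3) return p + (q - p) * (2.0f/3 - t) * 6;
    return p;
}

// convert to HSL, flip Lightness, convert back
rgbf flip_lightness(rgbf c) {
    float M = std::max({c.r, c.g, c.b}), m = std::min({c.r, c.g, c.b});
    float L = (M + m) / 2, H = 0, S = 0;
    if (M != m) {
        float d = M - m;
        S = L > 0.5f ? d / (2 - M - m) : d / (M + m);
        if (M == c.r)      H = (c.g - c.b) / d + (c.g < c.b ? 6 : 0);
        else if (M == c.g) H = (c.b - c.r) / d + 2;
        else               H = (c.r - c.g) / d + 4;
        H /= 6;
    }
    L = 1.0f - L; // the actual contrast trick: invert Lightness
    if (S == 0) return { L, L, L }; // achromatic
    float q = L < 0.5f ? L * (1 + S) : L + S - L * S;
    float p = 2 * L - q;
    return { hue2rgb(p, q, H + 1.0f/3), hue2rgb(p, q, H), hue2rgb(p, q, H - 1.0f/3) };
}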
You can have any hue text on any hue background and ensure that it is legible. I do it all the time. There's a formula for this in Javascript on Readable Text in Colour – STW*
As it says on that link, the formula is a variation on the inverse-gamma adjustment calculation, though a bit more manageable IMHO.
The menus on the right-hand side of that link and its associated pages use randomly-generated colours for text and background, always legible. So yes, clearly it can be done, no problem.
An Android variation that captures the alpha as well.
(thanks @thomas-vos)
/**
* Returns a colour best suited to contrast with the input colour.
*
* @param colour
* @return
*/
@ColorInt
public static int contrastingColour(@ColorInt int colour) {
// XXX https://stackoverflow.com/questions/1855884/determine-font-color-based-on-background-color
// Counting the perceptive luminance - human eye favors green color...
double a = 1 - (0.299 * Color.red(colour) + 0.587 * Color.green(colour) + 0.114 * Color.blue(colour)) / 255;
int alpha = Color.alpha(colour);
int d = 0; // bright colours - black font;
if (a >= 0.5) {
d = 255; // dark colours - white font
}
return Color.argb(alpha, d, d, d);
}
I would have commented on the answer by @MichaelChirico but I don't have enough reputation. So, here's an example in R that returns the colours:
get_text_colour <- function(
background_colour,
light_text_colour = 'white',
dark_text_colour = 'black',
threshold = 0.5
) {
background_luminance <- c(
c( .299, .587, .114 ) %*% col2rgb( background_colour ) / 255
)
return(
ifelse(
background_luminance < threshold,
light_text_colour,
dark_text_colour
)
)
}
> get_text_colour( background_colour = 'blue' )
[1] "white"
> get_text_colour( background_colour = c( 'blue', 'yellow', 'pink' ) )
[1] "white" "black" "black"
> get_text_colour( background_colour = c('black', 'white', '#236FAB', 'darkred', '#01F11F') )
[1] "white" "black" "white" "white" "black"
I have a series of 100 integer values which I need to reduce/subsample to 77 values for the purpose of fitting into a predefined space on screen. This gives a fraction of 77/100 values-per-pixel - not very neat.
Assuming the 77 is fixed and cannot be changed, what are some typical techniques for subsampling 100 numbers down to 77. I get a sense that it will be a jagged mapping, by which I mean the first new value is the average of [0, 1] then the next value is [3], then average [4, 5] etc. But how do I approach getting the pattern for this mapping?
I am working in C++, although I'm more interested in the technique than implementation.
Thanks in advance.
Whether you downsample or oversample, you are trying to reconstruct a signal at nonsampled points in time... so you have to make some assumptions.
The sampling theorem tells you that if you sample a signal knowing that it has no frequency components above half the sampling frequency, you can continuously and completely recover the signal over the whole timing period. There's a way to reconstruct the signal using sinc() functions (this is sin(x)/x).
sinc() (more precisely sin(M_PI*x/Sampling_period)/(M_PI*x/Sampling_period)) is a function that has the following properties:
Its value is 1 for x == 0.0 and 0 for x == k*Sampling_period with k == 0, +-1, +-2, ...
It has no frequency component over half of the sampling_frequency derived from Sampling_period.
So if you consider the sum F(x) = sum over all k of Y[k]*sinc(x/Sampling_period - k) - each term equals the sample value at sampling position k and 0 at every other sampling position - you'll get the best continuous function that has no components at frequencies above half the sampling frequency and passes through all of your samples.
Having said this, you can resample this function at whatever positions you like, getting the best way to resample your data.
This is, by far, a complicated way of resampling data (it also has the problem of not being causal, so it cannot be implemented in real time), and several methods have been used in the past to simplify the interpolation. You have to construct all the sinc functions, one for each sample point, and add them together. Then you have to resample the resultant function at the new sampling points and give that as a result.
Next is an example of the interpolation method just described. It accepts some input data (in_sz samples) and outputs interpolated data with the method described before (I assumed the extremes coincide, which maps N+1 samples onto M+1 samples, and this explains the somewhat intricate (in_sz - 1)/(out_sz - 1) calculations in the code; change to in_sz/out_sz if you want to make a plain N samples -> M samples conversion):
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
/* normalized sinc function */
double sinc(double x)
{
x *= M_PI;
if (x == 0.0) return 1.0;
return sin(x)/x;
} /* sinc */
/* interpolate a function made of in samples at point x */
double sinc_approx(double in[], size_t in_sz, double x)
{
int i;
double res = 0.0;
for (i = 0; i < in_sz; i++)
res += in[i] * sinc(x - i);
return res;
} /* sinc_approx */
/* do the actual resampling. Change (in_sz - 1)/(out_sz - 1) if you
* don't want the initial and final samples coincide, as is done here.
*/
void resample_sinc(
double in[],
size_t in_sz,
double out[],
size_t out_sz)
{
int i;
double dx = (double) (in_sz-1) / (out_sz-1);
for (i = 0; i < out_sz; i++)
out[i] = sinc_approx(in, in_sz, i*dx);
}
/* test case */
int main()
{
double in[] = {
0.0, 1.0, 0.5, 0.2, 0.1, 0.0,
};
const size_t in_sz = sizeof in / sizeof in[0];
const size_t out_sz = 5;
double out[out_sz];
int i;
for (i = 0; i < in_sz; i++)
printf("in[%d] = %.6f\n", i, in[i]);
resample_sinc(in, in_sz, out, out_sz);
for (i = 0; i < out_sz; i++)
printf("out[%.6f] = %.6f\n", (double) i * (in_sz-1)/(out_sz-1), out[i]);
return EXIT_SUCCESS;
} /* main */
There are different ways of interpolation (see Wikipedia).
The linear one would be something like:
std::array<int, 77> sampling(const std::array<int, 100>& a)
{
std::array<int, 77> res;
for (int i = 0; i != 76; ++i) {
int index = i * 99 / 76;
int p = i * 99 % 76;
res[i] = ((p * a[index + 1]) + ((76 - p) * a[index])) / 76;
}
res[76] = a[99]; // done outside of loop to avoid out of bound access (0 * a[100])
return res;
}
Live example
Create 77 new pixels based on the weighted average of their positions.
As a toy example, think about the 3 pixel case which you want to subsample to 2.
Original (denote as multidimensional array original with RGB as [0, 1, 2]):
|----|----|----|
Subsample (denote as multidimensional array subsample with RGB as [0, 1, 2]):
|------|------|
Here, it is intuitive to see that the first subsample seems like 2/3 of the first original pixel and 1/3 of the next.
For the first subsample pixel, subsample[0], you make it the RGB average of the m original pixels that overlap, in this case original[0] and original[1]. But we do so in weighted fashion.
subsample[0][0] = original[0][0] * 2/3 + original[1][0] * 1/3 # for red
subsample[0][1] = original[0][1] * 2/3 + original[1][1] * 1/3 # for green
subsample[0][2] = original[0][2] * 2/3 + original[1][2] * 1/3 # for blue
In this example original[1][2] is the green component of the second original pixel.
Keep in mind for different subsampling you'll have to determine the set of original cells that contribute to the subsample, and then normalize to find the relative weights of each.
There are much more complex graphics techniques, but this one is simple and works.
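A sketch of that idea in C++ (box-filter / area-averaging over a single channel; the function name and the single-channel simplification are mine):
#include <algorithm>
#include <vector>

// Resample `in` down to `out_sz` values: each output is the average of the
// input cells it overlaps, weighted by the amount of overlap.
std::vector<double> resample_area(const std::vector<double>& in, size_t out_sz)
{
    std::vector<double> out(out_sz, 0.0);
    double scale = static_cast<double>(in.size()) / out_sz; // input cells per output cell
    for (size_t i = 0; i < out_sz; ++i) {
        double start = i * scale, end = (i + 1) * scale;
        for (size_t j = static_cast<size_t>(start); j < in.size() && j < end; ++j) {
            // overlap of input cell [j, j+1) with output interval [start, end)
            double lo = std::max(start, static_cast<double>(j));
            double hi = std::min(end, static_cast<double>(j + 1));
            out[i] += in[j] * (hi - lo);
        }
        out[i] /= scale; // normalize by the total weight
    }
    return out;
}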
Everything depends on what you wish to do with the data - how do you want to visualize it.
A very simple approach would be to render to a 100-wide image, and then smooth scale the image down to a narrower size. Whatever graphics/development framework you're using will surely support such an operation.
Say, though, that your goal might be to retain certain qualities of the data, such as minima and maxima. In such a case, for each bin, you could draw a line of darker color up to the minimum value and then continue with a lighter color up to the maximum. Or, instead of just putting a pixel at the average value, you could draw a line from the minimum to the maximum; a sketch of that bookkeeping follows below.
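A quick sketch of the min/max-per-bin idea (the names and C++17 structured binding are mine):
#include <algorithm>
#include <utility>
#include <vector>

// For each of `bins` output bins, record the min and max of the input
// values that fall into it, so rendering can preserve the extremes.
std::vector<std::pair<int, int>> min_max_bins(const std::vector<int>& in, size_t bins)
{
    std::vector<std::pair<int, int>> out(bins);
    double scale = static_cast<double>(in.size()) / bins;
    for (size_t i = 0; i < bins; ++i) {
        size_t lo = static_cast<size_t>(i * scale);
        size_t hi = std::min(in.size(), static_cast<size_t>((i + 1) * scale) + 1);
        auto [mn, mx] = std::minmax_element(in.begin() + lo, in.begin() + hi);
        out[i] = { *mn, *mx };
    }
    return out;
}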
Finally, you might wish to render as if you had 77 values only - then the goal is to somehow transform the 100 values down to 77. This will imply some kind of an interpolation. Linear or quadratic interpolation is easy, but adds distortions to the signal. Ideally, you'd probably want to throw a sinc interpolator at the problem. A good list of them can be found here. For theoretical background, look here.
We have some old devices that don't support non-pot textures and we have a function that converts ARGB textures to next power of 2 texture. The problem is that it's quite slow and we're wondering if there is a better approach to convert these textures.
void PotTexture()
{
size_t u2 = 1; while (u2 < imageData.width) u2 *= 2;
size_t v2 = 1; while (v2 < imageData.height) v2 *= 2;
std::vector<unsigned char> pottedImageData;
pottedImageData.resize(u2 * v2 * 4);
size_t y, x, c;
for (y = 0; y < imageData.height; y++)
{
for (x = 0; x < imageData.width; x++)
{
for (c = 0; c < 4; c++)
{
pottedImageData[4 * u2 * y + 4 * x + c] = imageData.convertedData[4 * imageData.width * y + 4 * x + c];
}
}
}
imageData.width = u2;
imageData.height = v2;
std::swap(imageData.convertedData, pottedImageData);
}
On some devices this can easily use 100% of the CPU so any optimizations would be amazing. Are there any existing functions that I could look at that perform this conversion?
Edit:
I've optimized the above loop slightly to:
for (y = 0; y < imageData.height; y++)
{
memcpy(
&(pottedImageData[y * u2 * 4]),
&(imageData.convertedData[y * imageData.width * 4]),
imageData.width * 4);
}
Even devices that don't support NPOT texture should support NPOT load.
Create the texture as an exact power of 2 and NO CONTENT using glTexImage2D, passing a null pointer for data.
data may be a null pointer. In this case, texture memory is allocated to accommodate a texture of width width and height height. You can then download subtextures to initialize this texture memory. The image is undefined if the user tries to apply an uninitialized portion of the texture image to a primitive.
Then use glTexSubImage2D to upload a NPOT image, which occupies only a portion of the total texture. This can be done without any CPU-side image rearrangement.
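A minimal sketch of that approach (the GL_RGBA / GL_UNSIGNED_BYTE choices are assumptions to match the ARGB byte data; u2 and v2 are the power-of-two dimensions from the question):
// Allocate a POT texture with no content, then upload the NPOT image
// into one corner; no CPU-side rearrangement of the pixels is required.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, u2, v2, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);   // reserve u2 x v2 texels
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                imageData.width, imageData.height,
                GL_RGBA, GL_UNSIGNED_BYTE, imageData.convertedData.data());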
Having had a similar problem in a program I wrote, I took a very different approach. Rather than stretch the source texture, I just copied it into the top left corner of an otherwise empty power-of-two texture.
Then in the pixel shader you use a pair of floats to adjust s,t values so you fetch from just the top left corner.
float sAdjust = static_cast<float>(textureWidth) / static_cast<float>(containerWidth);
float tAdjust = static_cast<float>(textureHeight) / static_cast<float>(containerHeight);
is how you compute them. To use them, you'll get a Vec2 holding the s,t coordinates; just multiply s by sAdjust and t by tAdjust before using them to fetch. If you're using Direct3D, it'd be something akin to this:
D3DXVECTOR4 stAdjust;
stAdjust.x = sAdjust;
stAdjust.y = tAdjust;
// Transfer stAdjust into a float4 inside your pixel shader, call it stAdjust in there
Now in the pixel shader assume you have:
float2 texcoord;
float4 stAdjust;
you just say:
texcoord.x = texcoord.x * stAdjust.x;
texcoord.y = texcoord.y * stAdjust.y;
before using texcoord. Sorry I can't tell you how to do this in GLSL, but you get the general idea.
Okay, the very first optimization can be done here:
size_t u2 = 1; while (u2 < imageData.width) u2 *= 2;
size_t v2 = 1; while (v2 < imageData.height) v2 *= 2;
What you want to do is (for each dimension) find n = floor(log2(dim)) and then compute 1 << (n + 1), i.e. the next power of two. The standard math library has a log2 function, but it operates on floating point. Still, we can use it: 2**n can be written as 1 << n. So this gives
size_t const dim_p2_… = 1 << (int)floor(log2(dim_…)+1);
Better but not yet ideal, because of that float conversion. The Bit Twiddling hacks document has a few functions for integer ilog2: https://graphics.stanford.edu/~seander/bithacks.html#IntegerLog
But we're still not optimal. Let me introduce you to Compiler intrinsics, which translate into machine instructions, if the machine in question can do it on the metal.
GNU GCC: __builtin_clz(x), which counts the number of leading zero bits in x, so that floor(log2(x)) == 31 - __builtin_clz(x) for a nonzero 32-bit unsigned x. (There is also __builtin_ffs, which returns one plus the index of the least significant 1-bit, but that is the wrong end of the word for our purpose.)
MSVC++: _BitScanReverse, which scans for the most significant 1-bit and writes its index - effectively floor(log2(x)) - to an output parameter.
So we can do
#define ILOG2(x) (31 - __builtin_clz(x))
or
static inline int ILOG2(unsigned long x) { unsigned long i; _BitScanReverse(&i, x); return (int)i; }
and use that.
size_t const dim_p2_… = 1 << (ILOG2(dim_…) + 1);
While we're at bit twiddling: we can save that whole ordeal if a texture is already in power-of-two format. A few years ago I (independently) rediscovered the wonderfully portable bit twiddling trick, exploiting the properties of two's complement integers. You can also find it in the bit twiddles document. But the type-neutral, concise macro form is rarely seen. So here it is:
#define ISPOW2(x) ( (x) && !( (x) & ((x) - 1) ) )
You're using C++ so templates are in order:
template<typename T> bool ispow2(T const x) { return x && !( x & (x - 1) ); }
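Putting the pieces together, a small sketch (the function name is mine; on MSVC substitute the _BitScanReverse variant above):
#include <cstdint>

// Next power of two >= x; exact powers of two are left unchanged.
inline uint32_t next_pow2(uint32_t x)
{
    if (x == 0) return 1;                  // __builtin_clz(0) is undefined
    if (!(x & (x - 1))) return x;          // already a power of two, skip the ordeal
    return 1u << (32 - __builtin_clz(x));  // GCC/Clang intrinsic
}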
Ben Voigt already told you how to use glTexSubImage2D to load that into the texture. Also have a look at the GL_ARB_texture_rectangle extension, which allows loading NPOT textures, but without the ability for mipmapping and advanced filtering. It might still be a viable choice for you.
If you ever feel the need to scale the texture, it's always worth looking into dual spaces - in this case, the spatial frequency domain. Upscaling a signal is essentially convolution with an impulse response, and convolutions are usually O(n²) in complexity. But due to the Fourier convolution theorem, the equivalent operation in Fourier space is simple multiplication, so it becomes O(n). An FFT can be done in O(n log n), so the total complexity is about O(n + 2n log n), which is much better.
I am fond of random generation - and random colors - so I decided to combine them both and made a simple 2d landscape generator. What my idea was is to, depending on how high a block is, (yes, the terrain is made of blocks) make it lighter or darker, where things nearest the top are lighter, and towards the bottom are darker. I got it working in grayscale, but as I figured out, you cannot really use a base RGB color and make it lighter, given that the ratio between RGB values, or anything of the sort, seem to be unusable. Solution? HSL. Or perhaps HSV, to be honest I still don't know the difference. I am referring to H 0-360, S & V/L = 0-100. Although... well, 360 = 0, so that is 360 values, but if you actually have 0-100, that is 101. Is it really 0-359 and 1-100 (or 0-99?), but color selection editors (currently referring to GIMP... MS paint had over 100 saturation) allow you to input such values?
Anyhow, I found a formula for HSL->RGB conversion (here & here). As far as I know, the final formulas are the same, but nonetheless I will provide the code (note that this is from the latter easyrgb.com link):
Hue_2_RGB
float Hue_2_RGB(float v1, float v2, float vH) //Function Hue_2_RGB
{
if ( vH < 0 )
vH += 1;
if ( vH > 1 )
vH -= 1;
if ( ( 6 * vH ) < 1 )
return ( v1 + ( v2 - v1 ) * 6 * vH );
if ( ( 2 * vH ) < 1 )
return ( v2 );
if ( ( 3 * vH ) < 2 )
return ( v1 + ( v2 - v1 ) * ( ( 2 / 3 ) - vH ) * 6 );
return ( v1 );
}
and the other piece of code:
float var_1 = 0, var_2 = 0;
if (saturation == 0) //HSL from 0 to 1
{
red = luminosity * 255; //RGB results from 0 to 255
green = luminosity * 255;
blue = luminosity * 255;
}
else
{
if ( luminosity < 0.5 )
var_2 = luminosity * (1 + saturation);
else
var_2 = (luminosity + saturation) - (saturation * luminosity);
var_1 = 2 * luminosity - var_2;
red = 255 * Hue_2_RGB(var_1, var_2, hue + ( 1 / 3 ) );
green = 255 * Hue_2_RGB( var_1, var_2, hue );
blue = 255 * Hue_2_RGB( var_1, var_2, hue - ( 1 / 3 ) );
}
Sorry, not sure of a good way to fix the whitespace on those.
I replaced the H, S, L values with my own names: hue, saturation, and luminosity. I looked it back over, but unless I am missing something, I replaced them correctly. The Hue_2_RGB function, though, is completely unedited, besides the parts needed for C++ (e.g. variable types). I also used to have ints for everything - R, G, B, H, S, L - then it occurred to me... HSL was floating point for the formula - or at least, it would seem it should be. So I made all the variables used (var_1, var_2, all the v's, R, G, B, hue, saturation, luminosity) floats. So I don't believe it is some sort of data loss error here. Additionally, before entering the formula, I have hue /= 360, saturation /= 100, and luminosity /= 100. Note that before that point, I have hue = 59, saturation = 100, and luminosity = 70. I believe dividing hue by 360 is right to ensure 0-1, but trying /= 100 didn't fix it either.
And so, my question is: why is the formula not working? Thanks if you can help.
EDIT: if the question is not clear, please comment on it.
Your premise is wrong. You can just scale the RGB color. The Color class in Java, for example, includes methods called .darker() and .brighter(); these use a factor of 0.7, but you can use anything you want.
public Color darker() {
return new Color(Math.max((int)(getRed() *FACTOR), 0),
Math.max((int)(getGreen()*FACTOR), 0),
Math.max((int)(getBlue() *FACTOR), 0),
getAlpha());
}
public Color brighter() {
int r = getRed();
int g = getGreen();
int b = getBlue();
int alpha = getAlpha();
/* From 2D group:
* 1. black.brighter() should return grey
* 2. applying brighter to blue will always return blue, brighter
* 3. non pure color (non zero rgb) will eventually return white
*/
int i = (int)(1.0/(1.0-FACTOR));
if ( r == 0 && g == 0 && b == 0) {
return new Color(i, i, i, alpha);
}
if ( r > 0 && r < i ) r = i;
if ( g > 0 && g < i ) g = i;
if ( b > 0 && b < i ) b = i;
return new Color(Math.min((int)(r/FACTOR), 255),
Math.min((int)(g/FACTOR), 255),
Math.min((int)(b/FACTOR), 255),
alpha);
}
In short, multiply all three colors by the same static factor and you will have the same ratio of colors. It's a lossy operation and you need to be sure to clamp the colors to stay in range (which is more lossy than the rounding error).
Frankly, any conversion from RGB to HSV is just math, changing the HSV V factor is just math, and changing it back is more math. You don't need any of that; you can just do the math directly, which is going to make the max component color greater without messing up the ratio between the colors.
--
If the question is more specific and you simply want better results, there are better ways to calculate this. Rather than statically scaling the lightness (L does not refer to luminosity), you can convert to a luma component, which is weighted in a specific way. Color science and computing deal with human observers, and those observers are more important than the actual math. To account for some of these human quirks, there's a need to "fix things" to be more similar to what the average human perceives. Luma scales as follows:
Y = 0.2126 R + 0.7152 G + 0.0722 B
This is similarly reflected in the weights 30, 59, 11, which are wrongly thought to be good color distance weights. These weights are the colors' contributions to the human perception of brightness. For example, the brightest blue is seen by humans to be pretty dark, whereas yellow (exactly opposed to blue) is seen to be so damned bright that you can't even make it out against a white background. A number of colorspaces, Y'CbCr included, account for these differences in the perception of lightness by scaling. You can then change that value, and it will be scaled again when you scale it back.
This results in a different color, which should be more akin to what humans would say is a "lighter" version of the same color. There are better and better approximations of this human system, so using better and fancier math to account for it will typically give you better and better results.
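As an illustration of why the luma weighting matters (my sketch, not from the Java library): because luma is a linear combination of R, G and B, interpolating each channel toward white raises the luma by that same fraction while keeping the hue constant (saturation decreases toward white):
struct rgb8 { int r, g, b; }; // 0..255 per channel

// Rec.709 luma: each channel's contribution to perceived brightness.
double luma709(rgb8 c) {
    return 0.2126 * c.r + 0.7152 * c.g + 0.0722 * c.b;
}

// Lighten by moving a fraction t (0..1) of the way toward white.
// Since luma is linear in R, G, B, the new luma is Y + (255 - Y) * t.
rgb8 lighter(rgb8 c, double t) {
    return { static_cast<int>(c.r + (255 - c.r) * t),
             static_cast<int>(c.g + (255 - c.g) * t),
             static_cast<int>(c.b + (255 - c.b) * t) };
}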
For a good overview that touches on these issues, see:
http://www.compuphase.com/cmetric.htm